Dataset columns (name: type, observed range):

title: string, lengths 4–295
pmid: string, lengths 8–8
background_abstract: string, lengths 12–1.65k
background_abstract_label: string, 12 distinct values
methods_abstract: string, lengths 39–1.48k
methods_abstract_label: string, lengths 6–31
results_abstract: string, lengths 65–1.93k
results_abstract_label: string, 10 distinct values
conclusions_abstract: string, lengths 57–1.02k
conclusions_abstract_label: string, 22 distinct values
mesh_descriptor_names: sequence
pmcid: string, lengths 6–8
background_title: string, lengths 10–86
background_text: string, lengths 215–23.3k
methods_title: string, lengths 6–74
methods_text: string, lengths 99–42.9k
results_title: string, lengths 6–172
results_text: string, lengths 141–62.9k
conclusions_title: string, lengths 9–44
conclusions_text: string, lengths 5–13.6k
other_sections_titles: sequence
other_sections_texts: sequence
other_sections_sec_types: sequence
all_sections_titles: sequence
all_sections_texts: sequence
all_sections_sec_types: sequence
keywords: sequence
whole_article_text: string, lengths 6.93k–126k
whole_article_abstract: string, lengths 936–2.95k
background_conclusion_text: string, lengths 587–24.7k
background_conclusion_abstract: string, lengths 936–2.83k
whole_article_text_length: int64, values 1.3k–22.5k
whole_article_abstract_length: int64, values 183–490
other_sections_lengths: sequence
num_sections: int64, values 3–28
most_frequent_words: sequence
keybert_topics: sequence
annotated_base_background_abstract_prompt: string, 1 distinct value
annotated_base_methods_abstract_prompt: string, 1 distinct value
annotated_base_results_abstract_prompt: string, 1 distinct value
annotated_base_conclusions_abstract_prompt: string, 1 distinct value
annotated_base_whole_article_abstract_prompt: string, 1 distinct value
annotated_base_background_conclusion_abstract_prompt: string, 1 distinct value
annotated_keywords_background_abstract_prompt: string, lengths 28–460
annotated_keywords_methods_abstract_prompt: string, lengths 28–701
annotated_keywords_results_abstract_prompt: string, lengths 28–701
annotated_keywords_conclusions_abstract_prompt: string, lengths 28–428
annotated_keywords_whole_article_abstract_prompt: string, lengths 28–701
annotated_keywords_background_conclusion_abstract_prompt: string, lengths 28–428
annotated_mesh_background_abstract_prompt: string, lengths 53–701
annotated_mesh_methods_abstract_prompt: string, lengths 53–701
annotated_mesh_results_abstract_prompt: string, lengths 53–692
annotated_mesh_conclusions_abstract_prompt: string, lengths 54–701
annotated_mesh_whole_article_abstract_prompt: string, lengths 53–701
annotated_mesh_background_conclusion_abstract_prompt: string, lengths 54–701
annotated_keybert_background_abstract_prompt: string, lengths 100–219
annotated_keybert_methods_abstract_prompt: string, lengths 100–219
annotated_keybert_results_abstract_prompt: string, lengths 101–219
annotated_keybert_conclusions_abstract_prompt: string, lengths 100–240
annotated_keybert_whole_article_abstract_prompt: string, lengths 100–240
annotated_keybert_background_conclusion_abstract_prompt: string, lengths 100–211
annotated_most_frequent_background_abstract_prompt: string, lengths 67–217
annotated_most_frequent_methods_abstract_prompt: string, lengths 67–217
annotated_most_frequent_results_abstract_prompt: string, lengths 67–217
annotated_most_frequent_conclusions_abstract_prompt: string, lengths 71–217
annotated_most_frequent_whole_article_abstract_prompt: string, lengths 67–217
annotated_most_frequent_background_conclusion_abstract_prompt: string, lengths 71–217
annotated_tf_idf_background_abstract_prompt: string, lengths 74–283
annotated_tf_idf_methods_abstract_prompt: string, lengths 67–325
annotated_tf_idf_results_abstract_prompt: string, lengths 69–340
annotated_tf_idf_conclusions_abstract_prompt: string, lengths 83–403
annotated_tf_idf_whole_article_abstract_prompt: string, lengths 70–254
annotated_tf_idf_background_conclusion_abstract_prompt: string, lengths 71–254
annotated_entity_plan_background_abstract_prompt: string, lengths 20–313
annotated_entity_plan_methods_abstract_prompt: string, lengths 20–452
annotated_entity_plan_results_abstract_prompt: string, lengths 20–596
annotated_entity_plan_conclusions_abstract_prompt: string, lengths 20–150
annotated_entity_plan_whole_article_abstract_prompt: string, lengths 50–758
annotated_entity_plan_background_conclusion_abstract_prompt: string, lengths 50–758
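The listing above is the column schema for this dataset split: each column name is followed by its type and the observed range of string lengths, distinct values, or integer values. As a quick way to sanity-check or regenerate these figures, here is a minimal sketch (my illustration, not the dataset authors' tooling), assuming the split has been exported to a hypothetical local file named train.parquet:

```python
# Recompute per-column summaries in the style of the schema listing above.
# "train.parquet" is a hypothetical export of the split, not a published file.
import numpy as np
import pandas as pd

df = pd.read_parquet("train.parquet")

def describe_column(series: pd.Series) -> str:
    """Summarize one column: sequence, string length range, or integer range."""
    values = series.dropna()
    if values.map(lambda v: isinstance(v, (list, tuple, np.ndarray))).any():
        return "sequence"  # list-valued columns (titles, texts, keywords, ...)
    if len(values) > 0 and values.map(lambda v: isinstance(v, str)).all():
        lengths = values.str.len()
        return (f"string, lengths {lengths.min()}-{lengths.max()}, "
                f"{values.nunique()} distinct values")
    if pd.api.types.is_integer_dtype(values):
        return f"int64, values {values.min()}-{values.max()}"
    return str(series.dtype)

for name in df.columns:
    print(f"{name}: {describe_column(df[name])}")
```

The example record reproduced below (title, PMID, structured abstract, MeSH descriptors, PMCID, and full-text sections) is one row drawn from this schema.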
Effects of oral alkali drug therapy on clinical outcomes in pre-dialysis chronic kidney disease patients: a systematic review and meta-analysis.
35176947
Metabolic acidosis accelerates the progression of chronic kidney disease (CKD) and increases the mortality rate. Whether oral alkali drug therapy benefits pre-dialysis CKD patients is controversial. We performed a meta-analysis of the effects of oral alkali drug therapy on major clinical outcomes in pre-dialysis CKD patients.
BACKGROUND
We systematically searched MEDLINE using the Ovid, EMBASE, and Cochrane Library databases without language restriction. We included all eligible clinical studies that involved pre-dialysis CKD adults and compared those who received oral alkali drug therapy with controls.
METHODS
A total of 18 eligible studies (14 randomized controlled trials and 4 cohort studies), reported in 19 publications and involving 3695 participants, were included. Oral alkali drug therapy led to a 55% reduction in renal failure events (relative risk [RR]: 0.45; 95% confidence interval [CI]: 0.25-0.82) and slowed the rate of decline in the estimated glomerular filtration rate (eGFR) by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88-4.31). There was no significant effect on decline in eGFR events (RR: 0.34; 95% CI: 0.09-1.23), proteinuria (standardized mean difference: -0.32; 95% CI: -1.08 to 0.43), all-cause mortality events (RR: 0.90; 95% CI: 0.40-2.02), or cardiovascular (CV) events (RR: 1.03; 95% CI: 0.32-3.37) compared with the control groups.
RESULTS
Based on the available low-to-moderate certainty evidence, oral alkali drug therapy might reduce the risk of kidney failure events, but it showed no benefit in reducing all-cause mortality events, CV events, decline in eGFR, or proteinuria.
CONCLUSION
[ "Acidosis", "Administration, Oral", "Adult", "Alkalies", "Cause of Death", "Disease Progression", "Glomerular Filtration Rate", "Humans", "Proteinuria", "Randomized Controlled Trials as Topic", "Renal Dialysis", "Renal Insufficiency, Chronic" ]
8865123
Introduction
Metabolic acidosis (MA), a common complication of chronic kidney disease (CKD) caused by failure to balance the daily acid load, causes kidney damage, leading to protein-energy consumption, chronic inflammation, endocrine disorders, and the aggravation of metabolic osteopathy [1]. MA is associated with adverse outcomes in CKD patients, including the progression of CKD, all-cause mortality, and cardiovascular (CV) events [2,3]. Oral bicarbonate supplementation is commonly used to correct MA and improve the prognosis of CKD patients. According to the 2020 KDIGO guidelines for glomerulonephritis, MA should be treated with supplementation with oral sodium bicarbonate if the serum bicarbonate level is < 22 mmol/L [4]. However, it remains unclear whether treatment of MA based on oral alkali supplementation would translate into improved clinical outcomes, including delaying the progression of MA and decreasing the risks of all-cause mortality and CV events. In the UBI study, treatment of MA with sodium bicarbonate in patients with stage 3–5 CKD was safe and reduced the risks of CKD progression and all-cause mortality [5]. However, other studies did not confirm the benefits of alkali supplementation in terms of delaying CKD progression and improving survival. Randomized controlled trials (RCTs) showed no significant effects of oral alkali supplementation on renal outcomes and mortality [6,7]. In a cohort study of 386 CKD patients, compared with those who did not receive bicarbonate supplementation, the risk of ischemic heart disease was significantly lower in patients who received bicarbonate supplementation [8]. In a multi-center RCT, no differences were found in CV events between the sodium bicarbonate group and the placebo group [6]. Therefore, the association between oral alkali drug supplementation and clinical outcomes in pre-dialysis CKD patients is unclear. In this systematic review, we summarized all available clinical study data to evaluate the benefits of oral alkali drug therapy regarding kidney outcomes, all-cause mortality, and CV events in pre-dialysis CKD patients.
null
null
Results
Overview of included trials
The literature search yielded 9284 potentially relevant records, of which the full texts of 185 publications were reviewed (Figure 1). After screening and eligibility assessment, 14 RCTs [5–7,20–32] and 4 cohort studies [8,29,33,34] reported in 19 publications with 3695 individuals were included in this systematic review and meta-analysis. Baseline and key characteristics of the enrolled studies are presented in Supplementary Tables S1 and S2. The median follow-up time was 19.5 months. Individuals were enrolled at an average age of 55.78 years, and male participants accounted for 59.55% of the total. The average eGFR of participants was 31.51 mL/min/1.73 m2. Two oral alkali drug therapies were studied: sodium bicarbonate in 17 studies and veverimer in 1 study.
PRISMA flow chart for the included studies.
The Jadad score for each included RCT is presented in Supplementary Table S1. Eleven trials had a Jadad score of 3–5, and the others scored less than 3. Of all RCTs, 78.57% had a low risk of bias arising from the randomization process, and all studies had a low risk of bias for deviations from intended interventions, missing outcome data, measurement of outcomes, and selection of the reported results. In terms of overall bias, 78.57% of the trials were assessed as at low risk and 21.43% as subject to some concerns (Supplementary Table S3). As shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist.

Effects of oral alkali drug therapy on renal outcomes
Nine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1).
Forest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk.
Subgroup analysis of renal failure events. Note. a: p value calculated by χ2 statistics. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk.
Data regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group, with moderate heterogeneity found across these trials (I2 = 54.1%, p = 0.113; Figure 2).
Thirteen RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients with baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR of 30–59 mL/min/1.73 m2, or age < 55 years (p < 0.001; Supplementary Table S5).
Forest plot for rate of change in estimated glomerular filtration rate (eGFR). CI: confidence interval; MD: mean difference; SD: standard deviation.
Data on the effects of oral alkali drug therapy on proteinuria or albuminuria were available in only five studies (four RCTs and one cohort study) with 591 participants, and no significant effect was found (SMD: −0.32; 95% CI: −1.08 to 0.43). I2 statistics (88.2%, p < 0.001; Figure 4) indicated significant heterogeneity across studies. Subgroup analyses did not reveal heterogeneity regarding pre-specified characteristics (Supplementary Table S5).
Forest plot for the change in proteinuria or albuminuria. CI: confidence interval; SD: standard deviation; SMD: standardized mean difference.

Effects of oral alkali drug therapy on all-cause mortality and CV events
Seven RCTs involving 1709 individuals reported 127 all-cause mortality events. There was no significant effect of oral alkali drug therapy on the risk of all-cause mortality compared with the control groups (RR: 0.90; 95% CI: 0.40–2.02). Significant heterogeneity was noted across the included trials (I2 = 54.7%, p = 0.05; Figure 5). No significant heterogeneity was found for all-cause mortality in the subgroup analyses (Supplementary Table S5).
Forest plot for all-cause mortality and cardiovascular events. Cardiovascular events were defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, cardiovascular disease and cardiovascular death. CI: confidence interval; N: number of trials; RR: relative risk.
Data for CV events were available from three RCTs and one cohort study that included 1098 participants and 160 events. There was no significant difference in the risk of CV events between the treatment and control groups (RR: 1.03; 95% CI: 0.32–3.37), with significant heterogeneity observed among trials (I2 = 65.1%, p = 0.057; Figure 5). Subgroup analyses revealed no heterogeneity for CV events (Supplementary Table S5).

Sensitivity analysis
The results did not change after the exclusion of studies with a follow-up duration of < 12 months or a sample size < 50, after the exclusion of studies assessed as low quality, or when different random-effects estimation methods were used (Supplementary Table S6).
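One note on reading the pooled estimates quoted above: the "55% reduction" is simply one minus the pooled relative risk, and applying the same conversion to the confidence bounds gives the corresponding range of risk reduction. A small sketch of that arithmetic, using the values reported in the results text (the conversion is mine, not part of the paper):

```python
# Convert a pooled relative risk and its 95% CI into "percent risk reduction".
# Values are the renal failure estimates quoted in the results text above.
rr, ci_low, ci_high = 0.45, 0.25, 0.82

reduction = (1 - rr) * 100            # 55% reduction
# The bounds swap: the upper RR bound corresponds to the smallest reduction.
reduction_min = (1 - ci_high) * 100   # 18% reduction
reduction_max = (1 - ci_low) * 100    # 75% reduction

print(f"{reduction:.0f}% reduction (range {reduction_min:.0f}% to {reduction_max:.0f}%)")
```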
null
null
[ "Data sources and search strategy", "Study selection and outcome estimation", "Data extraction and quality assessment", "Data synthesis and statistical analyses", "Overview of included trials", "Effects of oral alkali drug therapy on renal outcomes", "Effects of oral alkali drug therapy on all-cause mortality and CV events", "Sensitivity analysis" ]
[ "We performed this systematic review according to a pre-specified protocol [9] registered in the International Prospective Register of Systematic Reviews (CRD42018111030), and the reporting was in line with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. A comprehensive search was conducted using the following databases: MEDLINE by Ovid (1946 to February 2020), EMBASE (1966 to February 2020), and Cochrane Central Register of Controlled Trials (no date restriction), with relevant keywords and medical subject headings that included various spellings of ‘CKD’, ‘RCT’, ‘Cohort Studies’, and ‘Oral Alkali Therapy’ (the terms ‘Sodium Bicarbonate’, ‘Alkali’, (see item S1). Studies were considered without any language restriction. To ensure a comprehensive literature search, we also screened reference lists from included articles. The ClinicalTrials.gov website was searched for ongoing but unpublished trials in this field.", "We included data from RCTs and cohort studies in which oral alkali drug therapy was provided to adults with pre-dialysis CKD (participants who were pregnant, had malignancies or acute illnesses, or had a follow-up time of less than 3 months were excluded) and comparisons were made with subjects receiving the usual therapy.\nPre-defined outcomes that contained analyzable data were extracted as follows. A renal failure event was defined as a more than 50% decline in estimated glomerular filtration rate (eGFR) from baseline during follow-up, doubling of serum creatinine, or progression to end-stage renal disease (ESRD) [10]. Decline in eGFR was defined as a decrease of eGFR >3 mL/min/1.73 m2 per year [11]. The rate of change in eGFR per year and changes in urinary protein or urinary albumin during follow-up, including urinary protein excretion, urinary albumin excretion, and the urinary albumin/creatinine ratio, were recorded. Additionally, the incidences of all-cause mortality events and CV events, defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, CV disease, and CV death, were recorded.", "Two independent reviewers (H.S. and X.S.) extracted data and assessed their quality according to the pre-specified protocol. Disagreements were resolved by a third reviewer (L.W.). Data from all eligible studies were extracted into a spreadsheet. The data sought included the characteristics of the studies (study type, randomization method, follow-up time, withdrawals/dropouts), baseline patient characteristics (age, sex, baseline eGFR), intake of alkali drug supplementation, and outcome events.\nWe used the Cochrane Collaboration risk-of-bias tool [12] to assess all potential sources of bias for the included RCTs. Trials were assessed as being at low or high risk of bias or subject to other risks or some concerns, and the overall risk of bias generally corresponded to the worst risk of bias in any of the domains. However, if a study was judged to be subject to some concerns about the risk of bias for multiple domains, it might be judged as being at high risk of bias overall. In addition, the quality of the RCTs was assessed using the Jadad scale [13]. 
We used the Newcastle-Ottawa Scale (NOS) to assess the quality of cohort studies in terms of selection of cohorts, comparability of cohorts, and assessments of outcomes [14].", "When dichotomous outcome data from individual studies were analyzed, relative risks (RRs) and their 95% confidence intervals (CIs) were calculated. If the RR for an individual study was unavailable in the original article, the RR and 95% CI were calculated from event numbers extracted from each study before data pooling. In calculating the RR values, we used the total number of patients randomized in each group as the denominator. Continuous outcome data from individual trials were analyzed using differences in means (MDs) with 95% CIs to pool eGFR data, whereas the standardized mean differences (SMDs) with 95% CIs were used to pool proteinuria or albuminuria data. When continuous outcome data were analyzed, the difference in the mean change between values at baseline and the end of treatment was used. If data on changes between baseline and end-of-treatment values were not available in the studies, we calculated them using correlations estimated from other included studies that had a similar follow-up period and reported their results in considerable detail according to the imputed formulation and its related interpretations in the Cochrane Handbook [15].\nBecause of the poor stability of the Der Simonian-Laird procedure for small numbers of studies, we used the empirical Bayes procedure to estimate all outcomes [16,17]. We also used the Der Simonian-Laird random effects model and restricted maximum likelihood approach to assess summary effects as part of sensitivity analyses [18,19]. Considering the inevitable heterogeneity among studies, subgroup and sensitivity analyses were performed. Subgroup analyses were performed based on a pre-specified protocol according to the study type, baseline serum bicarbonate, baseline eGFR, mean age, follow-up time, and sample size. In addition, we performed sensitivity analyses using different random-effects estimation methods, excluding studies with a sample size <50, those with a follow-up of < 12 months, and studies of low quality (Jadad score <3, NOS score <5★). Heterogeneity among studies was evaluated using the I2 or τ2 statistic. Stata version 15.0 (StataCorp LP, College Station, TX) was used for statistical analysis, and a two-sided p-value < 0.05 was considered indicative of significance.", "The literature search yielded 9284 potentially relevant records, of which the full texts of 185 publications were reviewed (Figure 1). After screening and eligibility assessment, 14 RCTs [5–7,20–32] and 4 cohort studies [8,29,33,34] reported in 19 publications with 3695 individuals were included in this systematic review and meta-analysis. Baseline and key characteristics of the enrolled studies are presented in Supplementary Table S1 and Table S2. The median follow-up time was 19.5 months. Individuals were enrolled at an average age of 55.78 years, and male participants accounted for 59.55% of the total. The average eGFR of participants was 31.51 mL/min/1.73 m2. A total of 2 oral alkali drug therapies were studied, including those featuring sodium bicarbonate in 17 studies, veverimer in 1 study,.\nPRISMA flow chart for the included studies.\nThe Jadad score for each included RCT is presented in Supplementary Table S1. Eleven trials had a Jadad score of 3–5, and the others scored less than 3. 
Of all RCTs, 78.57% were associated with a low risk of bias arising from the randomization process, and all studies had a low risk of bias due to deviations from intended interventions, due to missing outcome data, associated with measurements of outcomes, and associated with selection of the reported results. In terms of overall bias, 78.57% of the research trials were assessed as at low risk, and 21.43% as subject to some concerns (Supplementary Table S3).\nAs shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist.", "Nine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1).\nForest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk.\nSubgroup analysis of renal failure events.\nNote. a p value calculated by χ2 statistics was shown. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk.\nData regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group. Moderate heterogeneity across these trials (I2 =54.1%, p = 0.113; Figure 2) was found.\nThirtine RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR 30-59 mL/min/1.73 m2, age < 55 years (p < 0.001; Supplementary Table S5).\nForest plot for rate of change in estimated glomerular filtration rate (eGFR). CI: confidence interval; MD: mean difference; SD: standard deviation.\nData on the effects of oral alkali drug therapy on proteinuria or albuminuria were available in only five studies (four RCTs and one cohort study) with 591 participants, and no significant effect was found (SMD: −0.32; 95% CI: −1.08 to 0.43). I2 statistics (88.2%, p < 0.001; Figure 4) indicated significant heterogeneity across studies. Subgroup analyses did not reveal heterogeneity regarding pre-specified characteristics (Supplementary Table S5).\nForest plot for the change in proteinuria or albuminuria. CI: confidence interval; SD: standard deviation; SMD: standard mean difference.", "Seven RCTs involving 1709 individuals reported 127 all-cause mortality events. There was no significant effect of oral alkali drug therapy on the risk of all-cause mortality compared with the control groups (RR: 0.90; 95% CI: 0.40–2.02). Significant heterogeneity was noted across the included trials (I2 = 54.7%, p = 0.05; Figure 5). 
No significant heterogeneity was found for all-cause mortality in the subgroup analyses (Supplementary Table S5).\nForest plot for all-cause mortality and cardiovascular events. Cardiovascular events were defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, cardiovascular disease and cardiovascular death. CI: confidence interval; N: number of trials; RR: relative risk.\nData for CV events were available from three RCTs and one cohort studie that included 1098 participants and 160 events. There was no significant difference in the risk of CV events between treatment and control groups (RR: 1.03; 95% CI: 0.32–3.37), with significant heterogeneity observed among trials (I2 = 65.1%, p = 0.057; Figure 5). Subgroup analyses revealed no heterogeneity for CV events (Supplementary Table S5).", "The results did not change after the exclusion of studies with a follow-up duration of < 12 months, with a sample size < 50, or assessed as low quality, or when different random-effects estimination methods were used (Supplementary Table S6)." ]
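The data-synthesis element above notes that, when a study did not report a relative risk, the RR and its 95% CI were derived from raw event counts, using the number of patients randomized per arm as the denominator. The sketch below shows the standard log-RR calculation that this implies; the counts are invented for illustration and are not data from any included study.

```python
# Relative risk and 95% CI from raw event counts (standard log-RR method).
# The counts below are hypothetical, purely for illustration.
import math

events_tx, n_tx = 20, 150      # events / patients randomized, alkali arm (hypothetical)
events_ctl, n_ctl = 35, 148    # events / patients randomized, control arm (hypothetical)

rr = (events_tx / n_tx) / (events_ctl / n_ctl)
se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)  # SE of ln(RR)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR {rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```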
[ null, null, null, null, null, null, null, null ]
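The same methods pool study estimates with random-effects models (empirical Bayes as the primary estimator, with DerSimonian-Laird and restricted maximum likelihood as sensitivity analyses, run in Stata 15) and quantify heterogeneity with I2 and τ2. As a self-contained illustration of the simplest of these estimators only, here is a DerSimonian-Laird pooling sketch over invented log relative risks; it is not the review's code or data.

```python
# DerSimonian-Laird random-effects pooling of study log relative risks (sketch).
# yi are log(RR) estimates and vi their variances; all numbers are invented.
import math

yi = [-0.9, -0.4, -0.1, -1.2, 0.2]      # per-study log relative risks (hypothetical)
vi = [0.10, 0.05, 0.08, 0.20, 0.12]     # per-study variances of log(RR) (hypothetical)

w = [1 / v for v in vi]                 # fixed-effect (inverse-variance) weights
y_fe = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
q = sum(wi * (y - y_fe) ** 2 for wi, y in zip(w, yi))    # Cochran's Q
df = len(yi) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)           # DerSimonian-Laird between-study variance
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0      # I^2 heterogeneity statistic

w_re = [1 / (v + tau2) for v in vi]     # random-effects weights
y_re = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
rr = math.exp(y_re)
lo, hi = math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re)

print(f"pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 {tau2:.3f}, I^2 {i2:.1f}%")
```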
[ "Introduction", "Materials and methods", "Data sources and search strategy", "Study selection and outcome estimation", "Data extraction and quality assessment", "Data synthesis and statistical analyses", "Results", "Overview of included trials", "Effects of oral alkali drug therapy on renal outcomes", "Effects of oral alkali drug therapy on all-cause mortality and CV events", "Sensitivity analysis", "Discussion", "Supplementary Material" ]
[ "Metabolic acidosis (MA), a common complication of chronic kidney disease (CKD) caused by failure to balance the daily acid load, causes kidney damage, leading to protein-energy consumption, chronic inflammation, endocrine disorders, and the aggravation of metabolic osteopathy [1]. MA is associated with adverse outcomes in CKD patients, including the progression of CKD, all-cause mortality, and cardiovascular (CV) events [2,3].\nOral bicarbonate supplementation is commonly used to correct MA and improve the prognosis of CKD patients. According to the 2020 KDIGO guidelines for glomerulonephritis, MA should be treated with supplementation with oral sodium bicarbonate if the serum bicarbonate level is < 22 mmol/L [4]. However, it remains unclear whether treatment of MA based on oral alkali supplementation would translate into improved clinical outcomes, including delaying the progression of MA and decreasing the risks of all-cause mortality and CV events. In the UBI study, treatment of MA with sodium bicarbonate in patients with stage 3–5 CKD was safe and reduced the risks of CKD progression and all-cause mortality [5]. However, other studies did not confirm the benefits of alkali supplementation in terms of delaying CKD progression and improving survival. Randomized controlled trials (RCTs) showed no significant effects of oral alkali supplementation on renal outcomes and mortality [6,7]. In a cohort study of 386 CKD patients, compared with those who did not receive bicarbonate supplementation, the risk of ischemic heart disease was significantly lower in patients who received bicarbonate supplementation [8]. In a multi-center RCT, no differences were found in CV events between the sodium bicarbonate group and the placebo group [6]. Therefore, the association between oral alkali drug supplementation and clinical outcomes in pre-dialysis CKD patients is unclear.\nIn this systematic review, we summarized all available clinical study data to evaluate the benefits of oral alkali drug therapy regarding kidney outcomes, all-cause mortality, and CV events in pre-dialysis CKD patients.", "Data sources and search strategy We performed this systematic review according to a pre-specified protocol [9] registered in the International Prospective Register of Systematic Reviews (CRD42018111030), and the reporting was in line with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. A comprehensive search was conducted using the following databases: MEDLINE by Ovid (1946 to February 2020), EMBASE (1966 to February 2020), and Cochrane Central Register of Controlled Trials (no date restriction), with relevant keywords and medical subject headings that included various spellings of ‘CKD’, ‘RCT’, ‘Cohort Studies’, and ‘Oral Alkali Therapy’ (the terms ‘Sodium Bicarbonate’, ‘Alkali’, (see item S1). Studies were considered without any language restriction. To ensure a comprehensive literature search, we also screened reference lists from included articles. The ClinicalTrials.gov website was searched for ongoing but unpublished trials in this field.\nWe performed this systematic review according to a pre-specified protocol [9] registered in the International Prospective Register of Systematic Reviews (CRD42018111030), and the reporting was in line with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. 
A comprehensive search was conducted using the following databases: MEDLINE by Ovid (1946 to February 2020), EMBASE (1966 to February 2020), and Cochrane Central Register of Controlled Trials (no date restriction), with relevant keywords and medical subject headings that included various spellings of ‘CKD’, ‘RCT’, ‘Cohort Studies’, and ‘Oral Alkali Therapy’ (the terms ‘Sodium Bicarbonate’, ‘Alkali’, (see item S1). Studies were considered without any language restriction. To ensure a comprehensive literature search, we also screened reference lists from included articles. The ClinicalTrials.gov website was searched for ongoing but unpublished trials in this field.\nStudy selection and outcome estimation We included data from RCTs and cohort studies in which oral alkali drug therapy was provided to adults with pre-dialysis CKD (participants who were pregnant, had malignancies or acute illnesses, or had a follow-up time of less than 3 months were excluded) and comparisons were made with subjects receiving the usual therapy.\nPre-defined outcomes that contained analyzable data were extracted as follows. A renal failure event was defined as a more than 50% decline in estimated glomerular filtration rate (eGFR) from baseline during follow-up, doubling of serum creatinine, or progression to end-stage renal disease (ESRD) [10]. Decline in eGFR was defined as a decrease of eGFR >3 mL/min/1.73 m2 per year [11]. The rate of change in eGFR per year and changes in urinary protein or urinary albumin during follow-up, including urinary protein excretion, urinary albumin excretion, and the urinary albumin/creatinine ratio, were recorded. Additionally, the incidences of all-cause mortality events and CV events, defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, CV disease, and CV death, were recorded.\nWe included data from RCTs and cohort studies in which oral alkali drug therapy was provided to adults with pre-dialysis CKD (participants who were pregnant, had malignancies or acute illnesses, or had a follow-up time of less than 3 months were excluded) and comparisons were made with subjects receiving the usual therapy.\nPre-defined outcomes that contained analyzable data were extracted as follows. A renal failure event was defined as a more than 50% decline in estimated glomerular filtration rate (eGFR) from baseline during follow-up, doubling of serum creatinine, or progression to end-stage renal disease (ESRD) [10]. Decline in eGFR was defined as a decrease of eGFR >3 mL/min/1.73 m2 per year [11]. The rate of change in eGFR per year and changes in urinary protein or urinary albumin during follow-up, including urinary protein excretion, urinary albumin excretion, and the urinary albumin/creatinine ratio, were recorded. Additionally, the incidences of all-cause mortality events and CV events, defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, CV disease, and CV death, were recorded.\nData extraction and quality assessment Two independent reviewers (H.S. and X.S.) extracted data and assessed their quality according to the pre-specified protocol. Disagreements were resolved by a third reviewer (L.W.). Data from all eligible studies were extracted into a spreadsheet. 
The data sought included the characteristics of the studies (study type, randomization method, follow-up time, withdrawals/dropouts), baseline patient characteristics (age, sex, baseline eGFR), intake of alkali drug supplementation, and outcome events.\nWe used the Cochrane Collaboration risk-of-bias tool [12] to assess all potential sources of bias for the included RCTs. Trials were assessed as being at low or high risk of bias or subject to other risks or some concerns, and the overall risk of bias generally corresponded to the worst risk of bias in any of the domains. However, if a study was judged to be subject to some concerns about the risk of bias for multiple domains, it might be judged as being at high risk of bias overall. In addition, the quality of the RCTs was assessed using the Jadad scale [13]. We used the Newcastle-Ottawa Scale (NOS) to assess the quality of cohort studies in terms of selection of cohorts, comparability of cohorts, and assessments of outcomes [14].\nTwo independent reviewers (H.S. and X.S.) extracted data and assessed their quality according to the pre-specified protocol. Disagreements were resolved by a third reviewer (L.W.). Data from all eligible studies were extracted into a spreadsheet. The data sought included the characteristics of the studies (study type, randomization method, follow-up time, withdrawals/dropouts), baseline patient characteristics (age, sex, baseline eGFR), intake of alkali drug supplementation, and outcome events.\nWe used the Cochrane Collaboration risk-of-bias tool [12] to assess all potential sources of bias for the included RCTs. Trials were assessed as being at low or high risk of bias or subject to other risks or some concerns, and the overall risk of bias generally corresponded to the worst risk of bias in any of the domains. However, if a study was judged to be subject to some concerns about the risk of bias for multiple domains, it might be judged as being at high risk of bias overall. In addition, the quality of the RCTs was assessed using the Jadad scale [13]. We used the Newcastle-Ottawa Scale (NOS) to assess the quality of cohort studies in terms of selection of cohorts, comparability of cohorts, and assessments of outcomes [14].\nData synthesis and statistical analyses When dichotomous outcome data from individual studies were analyzed, relative risks (RRs) and their 95% confidence intervals (CIs) were calculated. If the RR for an individual study was unavailable in the original article, the RR and 95% CI were calculated from event numbers extracted from each study before data pooling. In calculating the RR values, we used the total number of patients randomized in each group as the denominator. Continuous outcome data from individual trials were analyzed using differences in means (MDs) with 95% CIs to pool eGFR data, whereas the standardized mean differences (SMDs) with 95% CIs were used to pool proteinuria or albuminuria data. When continuous outcome data were analyzed, the difference in the mean change between values at baseline and the end of treatment was used. 
If data on changes between baseline and end-of-treatment values were not available in the studies, we calculated them using correlations estimated from other included studies that had a similar follow-up period and reported their results in considerable detail according to the imputed formulation and its related interpretations in the Cochrane Handbook [15].\nBecause of the poor stability of the Der Simonian-Laird procedure for small numbers of studies, we used the empirical Bayes procedure to estimate all outcomes [16,17]. We also used the Der Simonian-Laird random effects model and restricted maximum likelihood approach to assess summary effects as part of sensitivity analyses [18,19]. Considering the inevitable heterogeneity among studies, subgroup and sensitivity analyses were performed. Subgroup analyses were performed based on a pre-specified protocol according to the study type, baseline serum bicarbonate, baseline eGFR, mean age, follow-up time, and sample size. In addition, we performed sensitivity analyses using different random-effects estimation methods, excluding studies with a sample size <50, those with a follow-up of < 12 months, and studies of low quality (Jadad score <3, NOS score <5★). Heterogeneity among studies was evaluated using the I2 or τ2 statistic. Stata version 15.0 (StataCorp LP, College Station, TX) was used for statistical analysis, and a two-sided p-value < 0.05 was considered indicative of significance.\nWhen dichotomous outcome data from individual studies were analyzed, relative risks (RRs) and their 95% confidence intervals (CIs) were calculated. If the RR for an individual study was unavailable in the original article, the RR and 95% CI were calculated from event numbers extracted from each study before data pooling. In calculating the RR values, we used the total number of patients randomized in each group as the denominator. Continuous outcome data from individual trials were analyzed using differences in means (MDs) with 95% CIs to pool eGFR data, whereas the standardized mean differences (SMDs) with 95% CIs were used to pool proteinuria or albuminuria data. When continuous outcome data were analyzed, the difference in the mean change between values at baseline and the end of treatment was used. If data on changes between baseline and end-of-treatment values were not available in the studies, we calculated them using correlations estimated from other included studies that had a similar follow-up period and reported their results in considerable detail according to the imputed formulation and its related interpretations in the Cochrane Handbook [15].\nBecause of the poor stability of the Der Simonian-Laird procedure for small numbers of studies, we used the empirical Bayes procedure to estimate all outcomes [16,17]. We also used the Der Simonian-Laird random effects model and restricted maximum likelihood approach to assess summary effects as part of sensitivity analyses [18,19]. Considering the inevitable heterogeneity among studies, subgroup and sensitivity analyses were performed. Subgroup analyses were performed based on a pre-specified protocol according to the study type, baseline serum bicarbonate, baseline eGFR, mean age, follow-up time, and sample size. In addition, we performed sensitivity analyses using different random-effects estimation methods, excluding studies with a sample size <50, those with a follow-up of < 12 months, and studies of low quality (Jadad score <3, NOS score <5★). 
Heterogeneity among studies was evaluated using the I2 or τ2 statistic. Stata version 15.0 (StataCorp LP, College Station, TX) was used for statistical analysis, and a two-sided p-value < 0.05 was considered indicative of significance.", "We performed this systematic review according to a pre-specified protocol [9] registered in the International Prospective Register of Systematic Reviews (CRD42018111030), and the reporting was in line with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. A comprehensive search was conducted using the following databases: MEDLINE by Ovid (1946 to February 2020), EMBASE (1966 to February 2020), and Cochrane Central Register of Controlled Trials (no date restriction), with relevant keywords and medical subject headings that included various spellings of ‘CKD’, ‘RCT’, ‘Cohort Studies’, and ‘Oral Alkali Therapy’ (the terms ‘Sodium Bicarbonate’, ‘Alkali’, (see item S1). Studies were considered without any language restriction. To ensure a comprehensive literature search, we also screened reference lists from included articles. The ClinicalTrials.gov website was searched for ongoing but unpublished trials in this field.", "We included data from RCTs and cohort studies in which oral alkali drug therapy was provided to adults with pre-dialysis CKD (participants who were pregnant, had malignancies or acute illnesses, or had a follow-up time of less than 3 months were excluded) and comparisons were made with subjects receiving the usual therapy.\nPre-defined outcomes that contained analyzable data were extracted as follows. A renal failure event was defined as a more than 50% decline in estimated glomerular filtration rate (eGFR) from baseline during follow-up, doubling of serum creatinine, or progression to end-stage renal disease (ESRD) [10]. Decline in eGFR was defined as a decrease of eGFR >3 mL/min/1.73 m2 per year [11]. The rate of change in eGFR per year and changes in urinary protein or urinary albumin during follow-up, including urinary protein excretion, urinary albumin excretion, and the urinary albumin/creatinine ratio, were recorded. Additionally, the incidences of all-cause mortality events and CV events, defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, CV disease, and CV death, were recorded.", "Two independent reviewers (H.S. and X.S.) extracted data and assessed their quality according to the pre-specified protocol. Disagreements were resolved by a third reviewer (L.W.). Data from all eligible studies were extracted into a spreadsheet. The data sought included the characteristics of the studies (study type, randomization method, follow-up time, withdrawals/dropouts), baseline patient characteristics (age, sex, baseline eGFR), intake of alkali drug supplementation, and outcome events.\nWe used the Cochrane Collaboration risk-of-bias tool [12] to assess all potential sources of bias for the included RCTs. Trials were assessed as being at low or high risk of bias or subject to other risks or some concerns, and the overall risk of bias generally corresponded to the worst risk of bias in any of the domains. However, if a study was judged to be subject to some concerns about the risk of bias for multiple domains, it might be judged as being at high risk of bias overall. In addition, the quality of the RCTs was assessed using the Jadad scale [13]. 
We used the Newcastle-Ottawa Scale (NOS) to assess the quality of cohort studies in terms of selection of cohorts, comparability of cohorts, and assessments of outcomes [14].", "When dichotomous outcome data from individual studies were analyzed, relative risks (RRs) and their 95% confidence intervals (CIs) were calculated. If the RR for an individual study was unavailable in the original article, the RR and 95% CI were calculated from event numbers extracted from each study before data pooling. In calculating the RR values, we used the total number of patients randomized in each group as the denominator. Continuous outcome data from individual trials were analyzed using differences in means (MDs) with 95% CIs to pool eGFR data, whereas the standardized mean differences (SMDs) with 95% CIs were used to pool proteinuria or albuminuria data. When continuous outcome data were analyzed, the difference in the mean change between values at baseline and the end of treatment was used. If data on changes between baseline and end-of-treatment values were not available in the studies, we calculated them using correlations estimated from other included studies that had a similar follow-up period and reported their results in considerable detail according to the imputed formulation and its related interpretations in the Cochrane Handbook [15].\nBecause of the poor stability of the Der Simonian-Laird procedure for small numbers of studies, we used the empirical Bayes procedure to estimate all outcomes [16,17]. We also used the Der Simonian-Laird random effects model and restricted maximum likelihood approach to assess summary effects as part of sensitivity analyses [18,19]. Considering the inevitable heterogeneity among studies, subgroup and sensitivity analyses were performed. Subgroup analyses were performed based on a pre-specified protocol according to the study type, baseline serum bicarbonate, baseline eGFR, mean age, follow-up time, and sample size. In addition, we performed sensitivity analyses using different random-effects estimation methods, excluding studies with a sample size <50, those with a follow-up of < 12 months, and studies of low quality (Jadad score <3, NOS score <5★). Heterogeneity among studies was evaluated using the I2 or τ2 statistic. Stata version 15.0 (StataCorp LP, College Station, TX) was used for statistical analysis, and a two-sided p-value < 0.05 was considered indicative of significance.", "Overview of included trials The literature search yielded 9284 potentially relevant records, of which the full texts of 185 publications were reviewed (Figure 1). After screening and eligibility assessment, 14 RCTs [5–7,20–32] and 4 cohort studies [8,29,33,34] reported in 19 publications with 3695 individuals were included in this systematic review and meta-analysis. Baseline and key characteristics of the enrolled studies are presented in Supplementary Table S1 and Table S2. The median follow-up time was 19.5 months. Individuals were enrolled at an average age of 55.78 years, and male participants accounted for 59.55% of the total. The average eGFR of participants was 31.51 mL/min/1.73 m2. A total of 2 oral alkali drug therapies were studied, including those featuring sodium bicarbonate in 17 studies, veverimer in 1 study,.\nPRISMA flow chart for the included studies.\nThe Jadad score for each included RCT is presented in Supplementary Table S1. Eleven trials had a Jadad score of 3–5, and the others scored less than 3. 
Of all RCTs, 78.57% were associated with a low risk of bias arising from the randomization process, and all studies had a low risk of bias due to deviations from intended interventions, due to missing outcome data, associated with measurements of outcomes, and associated with selection of the reported results. In terms of overall bias, 78.57% of the research trials were assessed as at low risk, and 21.43% as subject to some concerns (Supplementary Table S3).\nAs shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist.\nThe literature search yielded 9284 potentially relevant records, of which the full texts of 185 publications were reviewed (Figure 1). After screening and eligibility assessment, 14 RCTs [5–7,20–32] and 4 cohort studies [8,29,33,34] reported in 19 publications with 3695 individuals were included in this systematic review and meta-analysis. Baseline and key characteristics of the enrolled studies are presented in Supplementary Table S1 and Table S2. The median follow-up time was 19.5 months. Individuals were enrolled at an average age of 55.78 years, and male participants accounted for 59.55% of the total. The average eGFR of participants was 31.51 mL/min/1.73 m2. A total of 2 oral alkali drug therapies were studied, including those featuring sodium bicarbonate in 17 studies, veverimer in 1 study,.\nPRISMA flow chart for the included studies.\nThe Jadad score for each included RCT is presented in Supplementary Table S1. Eleven trials had a Jadad score of 3–5, and the others scored less than 3. Of all RCTs, 78.57% were associated with a low risk of bias arising from the randomization process, and all studies had a low risk of bias due to deviations from intended interventions, due to missing outcome data, associated with measurements of outcomes, and associated with selection of the reported results. In terms of overall bias, 78.57% of the research trials were assessed as at low risk, and 21.43% as subject to some concerns (Supplementary Table S3).\nAs shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist.\nEffects of oral alkali drug therapy on renal outcomes Nine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1).\nForest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk.\nSubgroup analysis of renal failure events.\nNote. a p value calculated by χ2 statistics was shown. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk.\nData regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group. 
Moderate heterogeneity across these trials (I2 =54.1%, p = 0.113; Figure 2) was found.\nThirtine RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR 30-59 mL/min/1.73 m2, age < 55 years (p < 0.001; Supplementary Table S5).\nForest plot for rate of change in estimated glomerular filtration rate (eGFR). CI: confidence interval; MD: mean difference; SD: standard deviation.\nData on the effects of oral alkali drug therapy on proteinuria or albuminuria were available in only five studies (four RCTs and one cohort study) with 591 participants, and no significant effect was found (SMD: −0.32; 95% CI: −1.08 to 0.43). I2 statistics (88.2%, p < 0.001; Figure 4) indicated significant heterogeneity across studies. Subgroup analyses did not reveal heterogeneity regarding pre-specified characteristics (Supplementary Table S5).\nForest plot for the change in proteinuria or albuminuria. CI: confidence interval; SD: standard deviation; SMD: standard mean difference.\nNine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1).\nForest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk.\nSubgroup analysis of renal failure events.\nNote. a p value calculated by χ2 statistics was shown. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk.\nData regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group. Moderate heterogeneity across these trials (I2 =54.1%, p = 0.113; Figure 2) was found.\nThirtine RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR 30-59 mL/min/1.73 m2, age < 55 years (p < 0.001; Supplementary Table S5).\nForest plot for rate of change in estimated glomerular filtration rate (eGFR). 
Effects of oral alkali drug therapy on all-cause mortality and CV events

Seven RCTs involving 1709 individuals reported 127 all-cause mortality events. There was no significant effect of oral alkali drug therapy on the risk of all-cause mortality compared with the control groups (RR: 0.90; 95% CI: 0.40–2.02). Significant heterogeneity was noted across the included trials (I² = 54.7%, p = 0.05; Figure 5). No significant heterogeneity was found for all-cause mortality in the subgroup analyses (Supplementary Table S5).

Figure 5. Forest plot for all-cause mortality and cardiovascular events. Cardiovascular events were defined as a composite including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, cardiovascular disease, and cardiovascular death. CI: confidence interval; N: number of trials; RR: relative risk.

Data for CV events were available from three RCTs and one cohort study that included 1098 participants and 160 events. There was no significant difference in the risk of CV events between the treatment and control groups (RR: 1.03; 95% CI: 0.32–3.37), with significant heterogeneity observed among trials (I² = 65.1%, p = 0.057; Figure 5). Subgroup analyses revealed no heterogeneity for CV events (Supplementary Table S5).
Sensitivity analysis

The results did not change after the exclusion of studies with a follow-up duration of < 12 months or a sample size < 50, after the exclusion of studies assessed as low quality, or when different random-effects estimation methods were used (Supplementary Table S6).
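One of the pre-specified sensitivity analyses re-estimated the summary effects with different random-effects methods. The sketch below illustrates, on hypothetical log relative risks, how the choice of between-study variance estimator can shift τ² and the pooled estimate; the Paule–Mandel estimator is used here purely as an example of an alternative, whereas the review itself used empirical Bayes as the primary method and DerSimonian–Laird and REML in sensitivity analyses.

```python
import math

# Hypothetical per-study log relative risks and within-study variances
# (illustrative only; not data from the included trials).
y = [-0.92, -0.22, -0.51, -1.20, -0.05]
v = [0.120, 0.045, 0.080, 0.200, 0.060]
k = len(y)

def pooled(tau2):
    """Inverse-variance pooled estimate, generalized Q, and SE for a given tau^2."""
    w = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    return mu, q, math.sqrt(1 / sum(w))

# DerSimonian-Laird (non-iterative) estimate of tau^2
w_fixed = [1 / vi for vi in v]
mu_fe = sum(wi * yi for wi, yi in zip(w_fixed, y)) / sum(w_fixed)
q_fe = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w_fixed, y))
c = sum(w_fixed) - sum(wi ** 2 for wi in w_fixed) / sum(w_fixed)
tau2_dl = max(0.0, (q_fe - (k - 1)) / c)

# Paule-Mandel: choose tau^2 so the generalized Q equals its degrees of freedom (k - 1),
# found here by simple bisection (Q decreases as tau^2 grows).
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    _, q_mid, _ = pooled(mid)
    if q_mid > k - 1:
        lo = mid
    else:
        hi = mid
tau2_pm = (lo + hi) / 2

for name, t2 in [("DerSimonian-Laird", tau2_dl), ("Paule-Mandel", tau2_pm)]:
    mu, _, se = pooled(t2)
    print(f"{name}: tau^2 = {t2:.3f}, pooled RR = {math.exp(mu):.2f} "
          f"(95% CI {math.exp(mu - 1.96 * se):.2f}-{math.exp(mu + 1.96 * se):.2f})")
```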
Discussion

MA, a common complication of CKD, is associated with CKD progression and higher mortality. The benefits of oral alkali drug supplementation for renal outcome events, all-cause mortality, and CV events in pre-dialysis CKD patients are controversial. This meta-analysis of 18 studies including 3695 participants suggests that oral alkali drug therapy produces a 55% reduction in renal failure events. No significant effects were observed for decline in eGFR, proteinuria, or the risk of all-cause mortality and CV events. The results were broadly consistent across major subgroups, as demonstrated by the sensitivity analyses.
Of note, the existence of significant heterogeneity may limit the interpretation and clinical application of these results. Sources of heterogeneity included the different CKD stages of enrolled patients, different baseline serum bicarbonate levels across studies, considerable variation in follow-up time, and different strategies used to correct MA. Although we performed subgroup analyses, this heterogeneity remains a concern for meaningful interpretation of the results.

MA is associated with the progression of CKD [35,36]. However, there are sparse data on the effects of oral alkali drug supplementation on renal function in CKD patients, with inconsistent effects reported to date. In accordance with our study, a 2012 systematic review suggested that oral alkali therapy could slow the decline of the eGFR in patients with MA [37]. A 2019 systematic review indicated that oral alkali supplementation was associated with an improvement in eGFR and a reduction in the risk of progression to ESRD [38]. However, only two studies evaluated the effect of oral alkali therapy on the incidence of ESRD. Compared with previous meta-analyses [37,38], our study included many new studies on the effects of oral alkali therapy on changes in eGFR and ESRD events. Our summary data showed a smaller reduction in kidney function decline, although with significant heterogeneity between the included studies, so these data should be interpreted with caution. Additional well-designed trials are needed to explore the effect of treatment of MA on the risk of kidney disease progression with these different types of interventions. In the Chronic Renal Insufficiency Cohort study [2], the role of the serum bicarbonate level as a risk factor for renal outcomes (ESRD or a 50% reduction in eGFR) was evaluated in 3939 individuals with stage 2–4 CKD. After adjusting for covariates, the risk of developing a renal endpoint was 3% lower per 1 mmol/L increase in serum bicarbonate level [2]. A retrospective study from the National Health and Nutrition Examination Survey III involving 1486 CKD patients with a median 14.2 years of follow-up demonstrated that a higher dietary acid load was independently associated with an increased risk of ESRD, and this association was more pronounced in individuals with advanced CKD than in those with mild or moderate CKD [39]. Several studies included in this meta-analysis reported that oral alkali supplementation can delay the progression of CKD in patients with MA [28,29,32,35,37]. However, Mahajan et al. [21] found that oral alkali supplementation delayed the progression of CKD in stage 2 CKD patients without MA. Wesson and Simoni [40] demonstrated that oral alkali dietary supplementation prevented eGFR decline in the two-thirds nephrectomy rat model (a model of early-stage CKD that does not include MA) compared with control rats over a 24-week period. This implies that mechanisms other than the correction of MA are involved in the renoprotective effect of oral alkali supplementation and raises important questions regarding the potential use of oral alkali supplementation in other conditions.
Several potential mechanisms may be involved, including decreasing interstitial ammonium levels and reducing complement activation; correcting interstitial acidosis and decreasing the local production of endothelin-1 and angiotensin II; decreasing tubular H+ secretion, which can limit tubular cast formation; activating the cholinergic anti-inflammatory pathway and decreasing renal inflammation; or correcting MA and thereby enhancing blood glucose control [41]. The mechanisms underlying the renoprotective effect of oral alkali therapy need to be further explored. No high-quality study has assessed the effects of oral alkali therapy on CKD progression in patients with and without MA. Therefore, well-designed studies are needed. Both the BiCARB Study Group [7] and Raphael et al. [28] failed to find a benefit of oral alkali therapy in terms of preventing an eGFR decline in CKD. In both studies, the mean age of participants was around 72.5 years, which may explain why some patients were not responsive to oral alkali therapy. The eGFR typically declines with age, and it is possible that sodium bicarbonate is less effective in older patients with CKD than in younger patients [7,28], which is consistent with the results of our subgroup analyses. Further well-designed studies are needed to explore this.

In our study, there was no compelling evidence that oral alkali drug therapy was associated with a lower incidence of all-cause mortality events and CV events. The scarcity of data on all-cause mortality events (seven studies with 127 events) and CV events (four studies with 160 events) available for the meta-analysis might have introduced a risk of false-negative results because of low statistical power. Among 740 individuals with 3 years of follow-up enrolled in the UBI study, the correction of MA reduced the risk of all-cause mortality in stage 3–5 CKD (fully adjusted hazard ratio: 0.36; 95% CI: 0.18–0.74) [5]. The large Chronic Renal Insufficiency Cohort study showed that maintenance of a serum bicarbonate level > 26 mmol/L was associated with increased risks of congestive heart failure events and mortality [42]. Numerous trials have shown a U-shaped relationship between serum bicarbonate and mortality in patients with CKD [43,44]. The overall effects of alkali supplementation on all-cause mortality and CV events are therefore uncertain. Chronic alkaline therapy for renoprotection may also affect vascular calcification: in animal studies, alkali supplementation worsened arterial calcification [45,46]. Mixing sodium bicarbonate and calcium produces an insoluble precipitate, calcium carbonate (CaCO₃). Supplementation with sodium bicarbonate increased the levels of serum phosphorus and FGF-23, which are risk factors for CV events and mortality [47,48]. In addition, a high sodium retention level is a cause for concern. There are 0.0123 mmol of sodium in every 1 mg of sodium bicarbonate, and a high sodium intake can cause hypertension, fluid overload, and an increased risk of heart failure in CKD patients [2]. Therefore, salt restrictions should be stricter in patients taking oral sodium bicarbonate. The optimal dosage of supplementary oral alkali drugs that provides renal and cardiovascular protection while minimizing side effects is uncertain. It is important to determine the optimal serum bicarbonate level and a safe dose of oral alkali drugs according to CKD stage, and to monitor the serum bicarbonate level when an oral alkali supplement is given to pre-dialysis CKD patients.
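To put the sodium load into perspective, the short calculation below applies the conversion quoted above (0.0123 mmol of sodium per mg of sodium bicarbonate) to a hypothetical 1 g/day dose; the dose is chosen only for illustration and is not a regimen from the included trials.

```python
# Sodium load from a hypothetical 1 g/day sodium bicarbonate dose,
# using the conversion quoted in the text (0.0123 mmol sodium per mg NaHCO3).
dose_mg_per_day = 1000          # illustrative dose, not a study regimen
mmol_na_per_mg = 0.0123         # figure quoted in the text
na_mmol = dose_mg_per_day * mmol_na_per_mg   # ~12.3 mmol of sodium per day
na_mg = na_mmol * 23.0                       # atomic mass of sodium ~23 g/mol
print(f"~{na_mmol:.1f} mmol (~{na_mg:.0f} mg) of sodium per day")
```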
This study has several potential limitations. First, as a result of the differing ability to regulate acid-base balance across CKD stages, different baseline serum bicarbonate levels across studies, the inclusion of some patients without MA at baseline, and different target serum bicarbonate levels across studies, the data do not provide insight into the safe upper dosage limits of oral alkali supplementation or the optimal serum bicarbonate level for patients with CKD. Second, the findings related to proteinuria, decline in eGFR, all-cause mortality, and CV events were based on a limited number of studies, restricting the reliability of the results for these outcomes. Third, the small sample size of some studies, as well as the existence of statistical and clinical heterogeneity, limited the reliability of our conclusions. Fourth, veverimer, a novel drug for correcting MA, works by selectively binding and removing hydrochloric acid from the gastrointestinal tract, thereby increasing serum bicarbonate concentrations; however, only one RCT has compared veverimer with placebo. Further studies comparing veverimer with sodium bicarbonate supplementation would help determine whether there are meaningful differences between the two strategies. Finally, none of the studies included patients with uncontrolled hypertension or overt chronic heart failure. Whether oral alkali drug therapy is safe in these patients is unclear and needs to be further explored.

In summary, based on the available low-to-moderate certainty evidence, oral alkali drug therapy might reduce the risk of kidney failure events, but it showed no benefit in reducing all-cause mortality, CV events, decline in eGFR, or proteinuria. Notably, because of the significant heterogeneity among studies, these findings should not be regarded as definitive. Further studies are needed to confirm these results in patients with CKD.
[ "intro", "materials", null, null, null, null, "results", null, null, null, null, "discussion", "supplementary-material" ]
[ "Oral alkali drug therapy", "pre-dialysis chronic kidney disease", "meta-analysis", "renal outcomes", "all-cause mortality", "cardiovascular events" ]
Of all RCTs, 78.57% were associated with a low risk of bias arising from the randomization process, and all studies had a low risk of bias due to deviations from intended interventions, due to missing outcome data, associated with measurements of outcomes, and associated with selection of the reported results. In terms of overall bias, 78.57% of the research trials were assessed as at low risk, and 21.43% as subject to some concerns (Supplementary Table S3). As shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist. The literature search yielded 9284 potentially relevant records, of which the full texts of 185 publications were reviewed (Figure 1). After screening and eligibility assessment, 14 RCTs [5–7,20–32] and 4 cohort studies [8,29,33,34] reported in 19 publications with 3695 individuals were included in this systematic review and meta-analysis. Baseline and key characteristics of the enrolled studies are presented in Supplementary Table S1 and Table S2. The median follow-up time was 19.5 months. Individuals were enrolled at an average age of 55.78 years, and male participants accounted for 59.55% of the total. The average eGFR of participants was 31.51 mL/min/1.73 m2. A total of 2 oral alkali drug therapies were studied, including those featuring sodium bicarbonate in 17 studies, veverimer in 1 study,. PRISMA flow chart for the included studies. The Jadad score for each included RCT is presented in Supplementary Table S1. Eleven trials had a Jadad score of 3–5, and the others scored less than 3. Of all RCTs, 78.57% were associated with a low risk of bias arising from the randomization process, and all studies had a low risk of bias due to deviations from intended interventions, due to missing outcome data, associated with measurements of outcomes, and associated with selection of the reported results. In terms of overall bias, 78.57% of the research trials were assessed as at low risk, and 21.43% as subject to some concerns (Supplementary Table S3). As shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist. Effects of oral alkali drug therapy on renal outcomes Nine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1). Forest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk. Subgroup analysis of renal failure events. Note. a p value calculated by χ2 statistics was shown. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk. Data regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group. Moderate heterogeneity across these trials (I2 =54.1%, p = 0.113; Figure 2) was found. 
Thirtine RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR 30-59 mL/min/1.73 m2, age < 55 years (p < 0.001; Supplementary Table S5). Forest plot for rate of change in estimated glomerular filtration rate (eGFR). CI: confidence interval; MD: mean difference; SD: standard deviation. Data on the effects of oral alkali drug therapy on proteinuria or albuminuria were available in only five studies (four RCTs and one cohort study) with 591 participants, and no significant effect was found (SMD: −0.32; 95% CI: −1.08 to 0.43). I2 statistics (88.2%, p < 0.001; Figure 4) indicated significant heterogeneity across studies. Subgroup analyses did not reveal heterogeneity regarding pre-specified characteristics (Supplementary Table S5). Forest plot for the change in proteinuria or albuminuria. CI: confidence interval; SD: standard deviation; SMD: standard mean difference. Nine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1). Forest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk. Subgroup analysis of renal failure events. Note. a p value calculated by χ2 statistics was shown. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk. Data regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group. Moderate heterogeneity across these trials (I2 =54.1%, p = 0.113; Figure 2) was found. Thirtine RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR 30-59 mL/min/1.73 m2, age < 55 years (p < 0.001; Supplementary Table S5). Forest plot for rate of change in estimated glomerular filtration rate (eGFR). CI: confidence interval; MD: mean difference; SD: standard deviation. Data on the effects of oral alkali drug therapy on proteinuria or albuminuria were available in only five studies (four RCTs and one cohort study) with 591 participants, and no significant effect was found (SMD: −0.32; 95% CI: −1.08 to 0.43). 
I2 statistics (88.2%, p < 0.001; Figure 4) indicated significant heterogeneity across studies. Subgroup analyses did not reveal heterogeneity regarding pre-specified characteristics (Supplementary Table S5). Forest plot for the change in proteinuria or albuminuria. CI: confidence interval; SD: standard deviation; SMD: standard mean difference. Effects of oral alkali drug therapy on all-cause mortality and CV events Seven RCTs involving 1709 individuals reported 127 all-cause mortality events. There was no significant effect of oral alkali drug therapy on the risk of all-cause mortality compared with the control groups (RR: 0.90; 95% CI: 0.40–2.02). Significant heterogeneity was noted across the included trials (I2 = 54.7%, p = 0.05; Figure 5). No significant heterogeneity was found for all-cause mortality in the subgroup analyses (Supplementary Table S5). Forest plot for all-cause mortality and cardiovascular events. Cardiovascular events were defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, cardiovascular disease and cardiovascular death. CI: confidence interval; N: number of trials; RR: relative risk. Data for CV events were available from three RCTs and one cohort studie that included 1098 participants and 160 events. There was no significant difference in the risk of CV events between treatment and control groups (RR: 1.03; 95% CI: 0.32–3.37), with significant heterogeneity observed among trials (I2 = 65.1%, p = 0.057; Figure 5). Subgroup analyses revealed no heterogeneity for CV events (Supplementary Table S5). Seven RCTs involving 1709 individuals reported 127 all-cause mortality events. There was no significant effect of oral alkali drug therapy on the risk of all-cause mortality compared with the control groups (RR: 0.90; 95% CI: 0.40–2.02). Significant heterogeneity was noted across the included trials (I2 = 54.7%, p = 0.05; Figure 5). No significant heterogeneity was found for all-cause mortality in the subgroup analyses (Supplementary Table S5). Forest plot for all-cause mortality and cardiovascular events. Cardiovascular events were defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, cardiovascular disease and cardiovascular death. CI: confidence interval; N: number of trials; RR: relative risk. Data for CV events were available from three RCTs and one cohort studie that included 1098 participants and 160 events. There was no significant difference in the risk of CV events between treatment and control groups (RR: 1.03; 95% CI: 0.32–3.37), with significant heterogeneity observed among trials (I2 = 65.1%, p = 0.057; Figure 5). Subgroup analyses revealed no heterogeneity for CV events (Supplementary Table S5). Sensitivity analysis The results did not change after the exclusion of studies with a follow-up duration of < 12 months, with a sample size < 50, or assessed as low quality, or when different random-effects estimination methods were used (Supplementary Table S6). The results did not change after the exclusion of studies with a follow-up duration of < 12 months, with a sample size < 50, or assessed as low quality, or when different random-effects estimination methods were used (Supplementary Table S6). Overview of included trials: The literature search yielded 9284 potentially relevant records, of which the full texts of 185 publications were reviewed (Figure 1). 
After screening and eligibility assessment, 14 RCTs [5–7,20–32] and 4 cohort studies [8,29,33,34] reported in 19 publications with 3695 individuals were included in this systematic review and meta-analysis. Baseline and key characteristics of the enrolled studies are presented in Supplementary Table S1 and Table S2. The median follow-up time was 19.5 months. Individuals were enrolled at an average age of 55.78 years, and male participants accounted for 59.55% of the total. The average eGFR of participants was 31.51 mL/min/1.73 m2. A total of 2 oral alkali drug therapies were studied, including those featuring sodium bicarbonate in 17 studies, veverimer in 1 study,. PRISMA flow chart for the included studies. The Jadad score for each included RCT is presented in Supplementary Table S1. Eleven trials had a Jadad score of 3–5, and the others scored less than 3. Of all RCTs, 78.57% were associated with a low risk of bias arising from the randomization process, and all studies had a low risk of bias due to deviations from intended interventions, due to missing outcome data, associated with measurements of outcomes, and associated with selection of the reported results. In terms of overall bias, 78.57% of the research trials were assessed as at low risk, and 21.43% as subject to some concerns (Supplementary Table S3). As shown in Supplementary Tables S1 and S4, all included cohort studies were considered of high quality (with scores of 7★–8★) according to the NOS checklist. Effects of oral alkali drug therapy on renal outcomes: Nine RCTs with 1833 participants reported 250 renal failure events. Compared with the control group, oral alkali drug therapy was associated with a 55% reduction in the risk of renal failure events (RR: 0.45; 95% CI: 0.25–0.82), with significant heterogeneity across studies (I2 = 67.8%, p = 0.005; Figure 2). No significant heterogeneity was observed in any subgroup analysis (Table 1). Forest plot for renal failure events and decline in eGFR events. Renal failure was defined as a more than 50% decline in eGFR from baseline during follow-up, doubling of serum creatinine or ESRD. CI: confidence interval; RR: relative risk. Subgroup analysis of renal failure events. Note. a p value calculated by χ2 statistics was shown. CI: confidence interval; n: number of patients; RCT: randomized parallel-group controlled trial; RR: relative risk. Data regarding the effects of oral alkali drug therapy on decline in eGFR events were available from three RCTs that included 404 individuals and 123 events. Overall, there was no significant effect of oral alkali drug therapy on decline in eGFR events (RR: 0.34; 95% CI: 0.09–1.23) compared with the control group. Moderate heterogeneity across these trials (I2 =54.1%, p = 0.113; Figure 2) was found. Thirtine RCTs and three cohort studies with 2746 participants provided data on differences in the rate of change in eGFR. Compared with the control group, oral alkali drug therapy slowed the rate of eGFR decline by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88–4.31), with significant heterogeneity observed (I2 = 97.6%, p < 0.001; Figure 3). Subgroup analyses showed that effect sizes were greater in studies that enrolled patients baseline serum bicarbonate < 20.95 mmol/L, baseline eGFR 30-59 mL/min/1.73 m2, age < 55 years (p < 0.001; Supplementary Table S5). Forest plot for rate of change in estimated glomerular filtration rate (eGFR). CI: confidence interval; MD: mean difference; SD: standard deviation. 
Data on the effects of oral alkali drug therapy on proteinuria or albuminuria were available in only five studies (four RCTs and one cohort study) with 591 participants, and no significant effect was found (SMD: −0.32; 95% CI: −1.08 to 0.43). I2 statistics (88.2%, p < 0.001; Figure 4) indicated significant heterogeneity across studies. Subgroup analyses did not reveal heterogeneity regarding pre-specified characteristics (Supplementary Table S5). Forest plot for the change in proteinuria or albuminuria. CI: confidence interval; SD: standard deviation; SMD: standard mean difference. Effects of oral alkali drug therapy on all-cause mortality and CV events: Seven RCTs involving 1709 individuals reported 127 all-cause mortality events. There was no significant effect of oral alkali drug therapy on the risk of all-cause mortality compared with the control groups (RR: 0.90; 95% CI: 0.40–2.02). Significant heterogeneity was noted across the included trials (I2 = 54.7%, p = 0.05; Figure 5). No significant heterogeneity was found for all-cause mortality in the subgroup analyses (Supplementary Table S5). Forest plot for all-cause mortality and cardiovascular events. Cardiovascular events were defined as a composite, including fatal or non-fatal myocardial infarction, fatal or non-fatal stroke, coronary artery revascularization, cardiovascular disease and cardiovascular death. CI: confidence interval; N: number of trials; RR: relative risk. Data for CV events were available from three RCTs and one cohort studie that included 1098 participants and 160 events. There was no significant difference in the risk of CV events between treatment and control groups (RR: 1.03; 95% CI: 0.32–3.37), with significant heterogeneity observed among trials (I2 = 65.1%, p = 0.057; Figure 5). Subgroup analyses revealed no heterogeneity for CV events (Supplementary Table S5). Sensitivity analysis: The results did not change after the exclusion of studies with a follow-up duration of < 12 months, with a sample size < 50, or assessed as low quality, or when different random-effects estimination methods were used (Supplementary Table S6). Discussion: MA, a common complication of CKD, is associated with CKD progression and higher mortality. The benefits of oral alkali drug supplementation for renal outcome events, all-cause mortality, and CV events in pre-dialysis CKD patients are controversial. This meta-analysis of 18 studies including 3695 participants suggests that oral alkali drug therapy produces a 55% reduction in renal failure events. No significant effects were observed for decline in eGFR, proteinuria, the risk of all-cause mortality events and CV events. The results were broadly consistent across major subgroups, as demonstrated by the sensitivity analyses. Of note, the existence of significant heterogeneity may limit the interpretation and clinical application of these results. Heterogeneity in the study included different CKD stages of patients, different baseline serum bicarbonate levels across studies, considerable variation in follow-up time, and different strategies to correct MA. Although we preformed subgroup analyses, which remains a concern for meaningful interpretation of the results. MA is associated with the progression of CKD [35,36]. However, there are sparse data on the effects of oral alkali drug supplementation on renal function in CKD patients, with inconsistent effects reported to date. 
In accordance with our study, a 2012 systematic review suggested that oral alkali therapy could slow the decline of the eGFR in patients with MA [37]. A 2019 systematic review indicated that oral alkali supplementation was associated with an improvement in eGFR and a reduction in the risk of progression to ESRD [38]. However, only two studies evaluated the effect of oral alkali therapy on the incidence of ESRD. Compared with previous meta-analyses [37,38], our study included many new studies on the effects of oral alkali therapy on changes in eGFR and ESRD events. Our summary data showed a smaller reduction in kidney function decline, although with significant heterogeneity between the included studies. These data should be interpreted with caution. Additional well-designed trials are needed to explore the effect of treatment of MA on the risk of kidney disease progression with these different types of interventions. In the Chronic Renal Insufficiency Cohort study [2], the role of serum bicarbonate level as a risk factor for renal outcomes (ESRD or 50% reduction in eGFR) was evaluated in 3939 individuals with stage 2–4 CKD. After adjusting for covariates, the risk of developing a renal endpoint was 3% lower per 1 mmol/L increase in serum bicarbonate level [2]. A retrospective study from the National Health and Nutrition Examination Survey III involving 1486 CKD patients with a median 14.2 years of follow-up demonstrated that a higher dietary acid load was independently associated with an increased risk of ESRD, and this association was more pronounced in individuals with advanced CKD than in those with mild or moderate CKD [39]. Several studies included in this meta-analysis reported that oral alkali supplementation can delay the progression of CKD in patients with MA [28,29,32,35,37]. However, Mahajan et al. [21] found that oral alkali supplementation delayed the progression of CKD in stage 2 CKD patients without MA. Wesson and Simoni [40] demonstrated that oral alkali dietary supplementation prevented eGFR decline in the two-thirds nephrectomy rat model (a model of early-stage CKD that does not include MA) compared with control rats over a 24-week period. This implies that mechanisms other than the correction of MA are involved in the renoprotective effect of oral alkali supplementation and raises important questions regarding the potential use of oral alkali supplementation in other conditions. Several potential mechanisms may be involved, including decreasing interstitial ammonium levels and reducing complement activation; the correction of interstitial acidosis and decreasing the local production of endothelin-1 and angiotensin II; decreasing tubular H+ secretion, which can limit tubular cast formation; activation of the cholinergic anti-inflammatory pathway and decreasing renal inflammation; or the correction of MA leading to enhanced blood glucose control [41]. The mechanisms underlying the renoprotective effect of oral alkali therapy need to be further explored. No high-quality study has assessed the effects of oral alkali therapy on CKD progression in patients with and without MA. Therefore, well-designed studies are needed. Both the BiCARB Study Group [7] and Raphael et al. [28] failed to find a benefit of oral alkali therapy in preventing eGFR decline in CKD. In both studies, the mean age of participants was around 72.5 years, which may explain why some patients were not responsive to oral alkali therapy. The eGFR typically declines with age. 
It is possible that sodium bicarbonate is less effective in older patients with CKD compared with younger patients [7,28], which is consistent with the results of our subgroup analyses. Further well-designed studies are needed to explore this. In our study, there was no compelling evidence that oral alkali drug therapy was associated with a lower incidence of all-cause mortality events and CV events. The scarcity of data on all-cause mortality events (seven studies with 127 events) and CV events (four studies with 160 events) available for the meta-analysis might have introduced a risk of false-negative results because of low statistical power. Among 740 individuals with 3 years of follow-up enrolled in the UBI study, the correction of MA reduced the risk of all-cause mortality in stage 3–5 CKD (fully adjusted hazard ratio: 0.36; 95% CI: 0.18–0.74) [5]. The large Chronic Renal Insufficiency Cohort study showed that maintenance of a serum bicarbonate level >26 mmol/L was associated with increased risks of congestive heart failure events and mortality [42]. Numerous studies showed a U-shaped relationship between serum bicarbonate and mortality in patients with CKD [43,44]. The overall effects of alkali supplementation on all-cause mortality and CV events are uncertain. Chronic alkaline therapy for renoprotection may impact vascular calcification. In animal studies, alkali supplementation worsened arterial calcification [45,46]. Mixing sodium bicarbonate and calcium results in an insoluble precipitate, calcium carbonate (CaCO3). Supplementation with sodium bicarbonate increased the levels of serum phosphorus and FGF-23, risk factors for CV events and mortality [47,48]. In addition, a high sodium retention level is a cause for concern. There are approximately 0.0123 mmol of sodium in every 1 mg of sodium bicarbonate, and a high sodium intake can cause hypertension, fluid overload, and an increased risk of heart failure in CKD patients [2]. Therefore, salt restrictions should be stricter in patients taking oral sodium bicarbonate. The optimal dosage of supplementary oral alkali drugs that provides renal and cardiovascular protection and minimizes side effects is uncertain. It is important to determine the optimal serum bicarbonate level and safe dose of oral alkali drugs according to CKD stage, as well as to monitor the serum bicarbonate level when an oral alkali supplement is given to pre-dialysis CKD patients. This study has several potential limitations. First, as a result of different abilities to regulate the acid-base balance according to CKD stage, different baseline serum bicarbonate levels across studies, the inclusion of some patients without MA at baseline, and different target serum bicarbonate levels across studies, the data do not provide insight into the safe upper dosage limit of oral alkali supplementation or the optimal serum bicarbonate level for patients with CKD. Second, findings related to proteinuria, decline in eGFR, and all-cause mortality and CV events were based on a limited number of studies, restricting the reliability of the results for these outcomes. Third, the small sample size in some studies, as well as the existence of statistical and clinical heterogeneity, limited the reliability of our conclusions. Fourth, veverimer, a novel MA-correcting drug, works by selectively binding and removing hydrochloric acid from the gastrointestinal tract, resulting in increased serum bicarbonate concentrations. However, there is only one RCT currently comparing veverimer and placebo. 
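As a back-of-the-envelope check of the sodium content quoted above (the review cites roughly 0.0123 mmol of sodium per mg of sodium bicarbonate): from the molar mass of NaHCO3, approximately 84 g/mol, one obtains a slightly lower figure of the same order, about 12 mmol of sodium per gram.

```latex
% Sodium content of sodium bicarbonate estimated from its molar mass (~84 g/mol)
\frac{1\ \mathrm{mg\ NaHCO_3}}{84.0\ \mathrm{mg/mmol}} \approx 0.0119\ \mathrm{mmol\ Na^{+}\ per\ mg}
\;\approx\; 11.9\ \mathrm{mmol\ Na^{+}\ per\ gram}
```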
Further studies comparing veverimer and sodium bicarbonate supplementation would be helpful to determine whether there are any significant differences between the two strategies. Finally, none of the studies included patients with uncontrolled hypertension or obvious chronic heart failure. Whether oral alkali drug therapy is safe in these patients is unclear and needs to be further explored. In summary, based on the available low-to-moderate certainty evidence, oral alkali drug therapy might potentially reduce the risk of kidney failure events, but it showed no benefit in reducing all-cause mortality events, CV events, decline in eGFR, or proteinuria. Notably, because of significant heterogeneity among studies, these findings should not be considered definitive. Further studies are needed to confirm these results in patients with CKD. Supplementary Material: additional data file available online.
Background: Metabolic acidosis accelerates the progression of chronic kidney disease (CKD) and increases the mortality rate. Whether oral alkali drug therapy benefits pre-dialysis CKD patients is controversial. We performed a meta-analysis of the effects of oral alkali drug therapy on major clinical outcomes in pre-dialysis CKD patients. Methods: We systematically searched the MEDLINE (via Ovid), EMBASE, and Cochrane Library databases without language restriction. We included all eligible clinical studies that involved pre-dialysis CKD adults and compared those who received oral alkali drug therapy with controls. Results: A total of 18 eligible studies, including 14 randomized controlled trials and 4 cohort studies reported in 19 publications with 3695 participants, were included. Oral alkali drug therapy led to a 55% reduction in renal failure events (relative risk [RR]: 0.45; 95% confidence interval [CI]: 0.25-0.82) and slowed the rate of decline in the estimated glomerular filtration rate (eGFR) by 2.59 mL/min/1.73 m2 per year (95% CI: 0.88-4.31). There was no significant effect on decline in eGFR events (RR: 0.34; 95% CI: 0.09-1.23), proteinuria (standardized mean difference: -0.32; 95% CI: -1.08 to 0.43), all-cause mortality events (RR: 0.90; 95% CI: 0.40-2.02), or cardiovascular (CV) events (RR: 1.03; 95% CI: 0.32-3.37) compared with the control groups. Conclusions: Based on the available low-to-moderate certainty evidence, oral alkali drug therapy might potentially reduce the risk of kidney failure events, but it showed no benefit in reducing all-cause mortality events, CV events, decline in eGFR, or proteinuria.
null
null
8,900
341
[ 178, 237, 250, 424, 316, 541, 240, 51 ]
13
[ "studies", "events", "alkali", "data", "oral", "oral alkali", "egfr", "risk", "included", "therapy" ]
[ "oral bicarbonate supplementation", "bicarbonate supplementation risk", "effects alkali supplementation", "supplementation renal outcomes", "alkali supplementation renal" ]
null
null
null
[CONTENT] Oral alkali drug therapy | pre-dialysis chronic kidney disease | meta-analysis | renal outcomes | all-cause mortality | cardiovascular events [SUMMARY]
null
[CONTENT] Oral alkali drug therapy | pre-dialysis chronic kidney disease | meta-analysis | renal outcomes | all-cause mortality | cardiovascular events [SUMMARY]
null
[CONTENT] Oral alkali drug therapy | pre-dialysis chronic kidney disease | meta-analysis | renal outcomes | all-cause mortality | cardiovascular events [SUMMARY]
null
[CONTENT] Acidosis | Administration, Oral | Adult | Alkalies | Cause of Death | Disease Progression | Glomerular Filtration Rate | Humans | Proteinuria | Randomized Controlled Trials as Topic | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
null
[CONTENT] Acidosis | Administration, Oral | Adult | Alkalies | Cause of Death | Disease Progression | Glomerular Filtration Rate | Humans | Proteinuria | Randomized Controlled Trials as Topic | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
null
[CONTENT] Acidosis | Administration, Oral | Adult | Alkalies | Cause of Death | Disease Progression | Glomerular Filtration Rate | Humans | Proteinuria | Randomized Controlled Trials as Topic | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
null
[CONTENT] oral bicarbonate supplementation | bicarbonate supplementation risk | effects alkali supplementation | supplementation renal outcomes | alkali supplementation renal [SUMMARY]
null
[CONTENT] oral bicarbonate supplementation | bicarbonate supplementation risk | effects alkali supplementation | supplementation renal outcomes | alkali supplementation renal [SUMMARY]
null
[CONTENT] oral bicarbonate supplementation | bicarbonate supplementation risk | effects alkali supplementation | supplementation renal outcomes | alkali supplementation renal [SUMMARY]
null
[CONTENT] studies | events | alkali | data | oral | oral alkali | egfr | risk | included | therapy [SUMMARY]
null
[CONTENT] studies | events | alkali | data | oral | oral alkali | egfr | risk | included | therapy [SUMMARY]
null
[CONTENT] studies | events | alkali | data | oral | oral alkali | egfr | risk | included | therapy [SUMMARY]
null
[CONTENT] ckd | ma | supplementation | ckd patients | patients | bicarbonate | mortality | progression | oral | clinical [SUMMARY]
null
[CONTENT] events | significant | ci | table | heterogeneity | supplementary | supplementary table | significant heterogeneity | figure | studies [SUMMARY]
null
[CONTENT] studies | events | data | risk | egfr | ckd | alkali | bias | oral | significant [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 18 | 14 | 4 | 19 | 3695 ||| 55% | 0.45 | 95% | CI | 0.25-0.82 | 2.59 mL | 95% | CI | 0.88 ||| 0.34 | 95% | CI | 0.09-1.23 | proteinuria ( | 95% | CI | -1.08 | 0.43 | 0.90 | 95% | CI | 0.40 | CV | 1.03 | 95% | CI | 0.32-3.37 [SUMMARY]
null
[CONTENT] ||| ||| ||| MEDLINE | Ovid | EMBASE | Cochrane Library ||| CKD ||| ||| 18 | 14 | 4 | 19 | 3695 ||| 55% | 0.45 | 95% | CI | 0.25-0.82 | 2.59 mL | 95% | CI | 0.88 ||| 0.34 | 95% | CI | 0.09-1.23 | proteinuria ( | 95% | CI | -1.08 | 0.43 | 0.90 | 95% | CI | 0.40 | CV | 1.03 | 95% | CI | 0.32-3.37 ||| CV [SUMMARY]
null
Survey about Intention to Engage in Specific Disaster Activities among Disaster Medical Assistance Team Members.
34658320
Responses to different types of disasters should be carried out smoothly, which requires that human resources for disaster activities be secured. To achieve a stable supply of human resources, it is essential to improve individuals' intention to respond to each type of disaster. However, the current intention of Disaster Medical Assistance Team (DMAT) members has not yet been assessed.
INTRODUCTION
An anonymous web questionnaire survey was conducted. Japanese DMAT members in the nuclear disaster-affected area (Group A; n = 79) and the non-affected area (Group N; n = 99) were included in the analysis. The outcome was the answer to the following question: "Will you actively engage in activities during natural, human-made, and chemical (C), biological (B), radiological/nuclear (R/N), and explosive (E) (CBRNE) disasters?" Then, questionnaire responses were compared according to disaster type.
METHODS
The intention to engage in C (50), B (47), R/N (58), and E (52) disasters was significantly lower than that in natural (82) and human-made (82) disasters (P <.001). The intention to engage in CBRNE disasters among younger participants (age ≤39 years) was significantly higher in Group A than in Group N. By contrast, among older participants (age ≥40 years), only the intention to engage in R/N disasters was higher in Group A than in Group N; there was no difference between the two groups in the intention to engage in C, B, and E disasters. Moreover, the intention to engage in all types of disasters did not differ between younger and older participants in Group A. In Group N, older participants had a significantly higher intention to engage in B and R/N disasters.
RESULTS
Experience with a specific type of calamity at a young age may improve the intention to engage not only in the type of disaster encountered but also in other types. In addition, the intention to engage in CBRNE disasters improved with age in the non-experienced population. To respond smoothly to specific disasters in the future, measures must be taken to improve the intention to engage in CBRNE disasters among DMAT members.
CONCLUSION
[ "Adult", "Disaster Planning", "Disasters", "Humans", "Intention", "Medical Assistance", "Surveys and Questionnaires", "Workforce" ]
8607140
Introduction
In patients with critical conditions, the initial response of the rapid response team or medical emergency team is the most important factor correlated with prognosis.1–3 In recent years, people have sustained injuries caused by different types of disasters, which can be classified as natural (ie, earthquakes), human-made (eg, transport accidents), and specific (ie, coronavirus disease 2019 [COVID-19] and chemical terrorism). Hence, the need to manage these disasters is increasing.4 Among them, chemical (C), biological (B), radiological (R), nuclear (N), and explosive (E) (CBRNE) disasters are considered specific. In such disasters, a rapid and smooth response is required to save the lives of patients. However, in the Fukushima Daiichi Nuclear Power Plant (FDNPP; Ōkuma, Fukushima, Japan) accident (2011), one of the most well-known radiological/nuclear (R/N) disasters, it was challenging to smoothly run disaster response activities at all times.5 Therefore, when providing medical treatment in areas with various hazards, measures should be taken in advance to facilitate disaster activities. With consideration of factors that can prevent a smooth response to CBRNE disasters, the lack of human resources is a major concern. Disaster responders have a low intention to engage in specific disaster activities. Some surveys have shown that even individuals who are willing to respond to natural hazards avoid involvement in nuclear disasters or those involving communicable diseases due to anxiety and lack of knowledge.6–8 Hence, this is a major cause for the lack of human resources and is associated with difficulties in facilitating CBRNE disaster activities. A previous study revealed that factors such as self-confidence, incentives, and family understanding affect the intention of firefighters to engage in nuclear disaster activities.9 However, the current intention of all medical responders to participate in CBRNE disaster activities has not been fully elucidated. In Japan, the Great Hanshin earthquake of 1995 has led to the development of a disaster medical system. Moreover, the Disaster Medical Assistance Team (DMAT), which responds to various disasters, has been established. The DMAT comprises physicians, nurses, and logisticians, as defined in the Basic Disaster Management Plan based on the Japan’s Disaster Countermeasures Basic Act.10,11 Japanese DMAT members can decide whether or not to respond when they receive dispatch requests. On the other hand, to date, the team plays a major role in large-scale disasters in Japan, and its members are the most important disaster medical responders in Japan. Therefore, a survey about the intention of DMAT members to engage in short-term CBRNE disaster activities must be conducted to facilitate a smooth response. This study aimed to conduct a web questionnaire survey among DMAT members from two different areas (one with nuclear disaster experience and the other without). To smoothen each specific disaster response, the current intentions of DMAT members to engage in CBRNE disaster activities were examined. Moreover, future measures that can improve such intentions were evaluated.
Methods
This was a cross-sectional study. An anonymous web questionnaire survey was conducted from October 2020 through November 2020. The website URL of the questionnaire was distributed by research members via e-mail to DMAT members in the two different areas. That is, one was a nuclear disaster-affected area (Group A) and the other was a non-affected area (Group N). In total, 204 participants from both areas responded. However, only 178 provided complete responses (effective response rate: total 87.3%; Group A 84.9%; Group N 89.2%). These data were then included in the analysis (Figure 1). The sample size was estimated using the pwr.anova.test function of R 4.0.3 software (R Foundation for Statistical Computing; Vienna, Austria). The following three parameters were included: group size (k = 4), medium effect (f = 0.25), and power (0.8). The estimated sample size was 45 per group; therefore, the total size of the response group was determined to be 180. The questionnaire was used to collect information such as sex, age, occupation, family status, and experience in disaster activities. To validate the intention to engage in disaster activities (natural, human-made, CBRNE disasters), the following question was created: “Will you actively engage in response activities during a natural, human-made, or CBRNE disaster?” The participants were required to answer using the Engagement Intent Score (EIS), which indicates their agreement to the abovementioned question (0%-100%). Participants with an EIS of <50% were instructed to provide a free answer as to why they did not wish to engage. Figure 1.Flow Chart Showing the Selection of Participants. Note: The web URL of the questionnaire was sent via e-mail; addresses were in the two DMAT mailing lists. That is, one list was for a nuclear disaster-affected area and the other was for a non-affected area. In total, 204 members answered the questionnaire. After excluding 26 incomplete response data, 178 participants were finally included in the analysis.Abbreviation: DMAT, Disaster Medical Assistance Team. Flow Chart Showing the Selection of Participants. Note: The web URL of the questionnaire was sent via e-mail; addresses were in the two DMAT mailing lists. That is, one list was for a nuclear disaster-affected area and the other was for a non-affected area. In total, 204 members answered the questionnaire. After excluding 26 incomplete response data, 178 participants were finally included in the analysis. Abbreviation: DMAT, Disaster Medical Assistance Team. The participants were divided into four groups according to age and area: younger Group A (≤39 years old; nuclear disaster-affected area), older Group A (≥40 years old; nuclear disaster-affected area), younger Group N (≤39 years old; non-affected area), and older Group N (≥40 years old; non-affected area; Figure 2). The reference age was set at 39 years as the mean age of DMAT members was 38.8 years, and a specific medical checkup is available for those aged >40 years in Japan.12,13 The background characteristics were compared between the four groups using the chi-squared test. Each EIS was presented as the mean and standard deviation (SD). The EIS between each disaster was compared with the analysis of variance and the Tukey-Kramer test for multiple comparisons. Sub-analysis was performed for male participants only. Further analyses were conducted according to age and area. The EIS was compared between the younger and older and nuclear disaster-affected and non-affected groups using the student’s t-test. 
All statistical analyses were performed using JMP 14 (SAS Institute Inc.; Cary, North Carolina USA), and a P value of .05 was considered statistically significant. Figure 2. Diagrammatic Image Representation of a Stratified Comparison. Note: The horizontal axis indicates age and the vertical axis represents disaster experience. Each comparison in Figures 3A, 3B, 4A, and 4B is depicted with a black bidirectional arrow. Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area. Ethics Committee Approval: The ethics committee of Fukushima Medical University approved the study protocol (Fukushima, Japan; approval number: 2020-130).
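The sample-size estimate described in the Methods above was obtained with R's pwr.anova.test (k = 4 groups, Cohen's f = 0.25, power = 0.80). As an illustration only, and not the authors' original script, an equivalent calculation can be reproduced in Python with statsmodels:

```python
# Illustrative re-computation of the one-way ANOVA sample-size estimate
# described in the Methods (k = 4 groups, Cohen's f = 0.25, power = 0.80,
# alpha = 0.05). This is not the authors' original R (pwr.anova.test) script.
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(effect_size=0.25, k_groups=4,
                                        alpha=0.05, power=0.80)
print(f"Total N ~ {total_n:.0f}, i.e. ~{total_n / 4:.0f} per group")
# Expected to be roughly 180 participants in total (~45 per group),
# matching the target of 180 responses reported above.
```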
Results
The characteristics of younger participants in Group A (n = 28), younger participants in Group N (n = 56), older participants in Group A (n = 51), and older participants in Group N (n = 43) are depicted in Table 1. The four groups differed in terms of occupation and experience in natural disaster activities (Table 1). According to the primary outcome, the mean EIS for each type of disaster was as follows: natural, 82.2 (SD = 20.3); human-made, 81.7 (SD = 23.2); C, 50.0 (SD = 34.9); B, 47.4 (SD = 35.3); R/N, 57.6 (SD = 35.5); and E, 52.4 (SD = 36.1). After multiple comparisons, the EIS for natural and human-made disasters was significantly higher than that for C, B, R/N, and E disasters (all P values <.001). Furthermore, R/N disasters had a higher EIS than B disasters (P <.05; Table 2). In addition, a sub-analysis of only the male participants showed the same results (Supplemental Table 1; available online only).
Table 1. Characteristics of the Participants (values are n (%) for Younger Group A [n = 28], Younger Group N [n = 56], Older Group A [n = 51], and Older Group N [n = 43], followed by the P value)
Sex: Female 6 (21.4), 17 (30.4), 18 (35.3), 9 (20.9); Male 22 (78.6), 39 (69.6), 33 (64.7), 34 (79.1); P = .368
Age (years): 20-29 6 (21.4), 8 (14.3), -, - (P = .408a); 30-39 22 (78.6), 48 (85.7), -, -; 40-49 -, -, 33 (64.7), 34 (79.1) (P = .125b); Over 50 -, -, 18 (35.3), 9 (20.9)
Occupation: Physician 3 (10.7), 8 (14.3), 17 (33.3), 13 (30.2); Nurse 7 (25.0), 27 (48.2), 20 (39.3), 15 (34.9); Administrative Staff 7 (25.0), 9 (16.1), 7 (13.7), 7 (16.3); Others 11 (39.3), 12 (21.4), 7 (13.7), 8 (18.6); P = .048
Family: Without 6 (21.4), 16 (28.6), 11 (21.6), 10 (23.3); With 22 (78.6), 40 (71.4), 40 (78.4), 33 (76.7); P = .822
Experience in Natural Disaster Activities: No 12 (42.9), 24 (42.9), 9 (17.6), 10 (23.3); Yes 16 (57.1), 32 (57.1), 42 (82.4), 33 (76.7); P = .012
Experience in CBRNE Disaster Activities: No 27 (96.4), 53 (94.6), 45 (88.2), 39 (90.7); Yes 1 (3.6), 3 (5.4), 6 (11.8), 4 (9.3); P = .495
Abbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area. a Comparison between younger Group A and younger Group N. b Comparison between older Group A and older Group N.
Table 2. Multiple Comparison of EIS between the Six Types of Disasters (mean [SD] EIS, 95% CI, and pairwise P values)
1. Natural: 82.3 (SD = 20.3), 95% CI 79.2-85.3; vs. 2: 1.00, vs. 3: <.01, vs. 4: <.01, vs. 5: <.01, vs. 6: <.01
2. Human-Made: 81.7 (SD = 23.2), 95% CI 78.3-85.1; vs. 3: <.01, vs. 4: <.01, vs. 5: <.01, vs. 6: <.01
3. Chemical: 50.0 (SD = 34.9), 95% CI 44.8-55.1; vs. 4: .97, vs. 5: .21, vs. 6: .98
4. Biological: 47.4 (SD = 35.3), 95% CI 42.2-52.6; vs. 5: .03, vs. 6: .67
5. Radiological/Nuclear: 57.6 (SD = 35.5), 95% CI 52.3-62.8; vs. 6: .63
6. Explosive: 52.4 (SD = 36.1), 95% CI 47.0-57.7
Abbreviation: EIS, engagement intent score. P values <.05 were considered statistically significant. 
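The pairwise contrasts in Table 2 come from a Tukey-Kramer procedure after analysis of variance. For illustration only, a comparable workflow can be run in Python on simulated scores (the values below are randomly generated placeholders, not the survey data):

```python
# Illustrative one-way ANOVA plus Tukey HSD pairwise comparisons of EIS
# across disaster types, analogous to the Tukey-Kramer analysis in Table 2.
# The scores are simulated placeholders, not the actual survey responses.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
disaster_means = {"natural": 82, "human-made": 82, "chemical": 50,
                  "biological": 47, "radiological/nuclear": 58, "explosive": 52}

rows = []
for disaster, mean in disaster_means.items():
    scores = np.clip(rng.normal(loc=mean, scale=30, size=178), 0, 100)
    rows.append(pd.DataFrame({"disaster": disaster, "eis": scores}))
data = pd.concat(rows, ignore_index=True)

# Overall one-way ANOVA across the six disaster types
groups = [g["eis"].to_numpy() for _, g in data.groupby("disaster")]
print(stats.f_oneway(*groups))

# All pairwise contrasts with Tukey's HSD (family-wise alpha = 0.05)
print(pairwise_tukeyhsd(endog=data["eis"], groups=data["disaster"], alpha=0.05))
```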
Based on the intention to engage in various types of disasters, the EIS for all CBRNE disasters among younger participants was significantly higher in younger Group A than in younger Group N (C: 60.9 [SD = 30.8] versus 37.4 [SD = 32.3], P <.01; B: 55.3 [SD = 31.9] versus 35.5 [SD = 32.6], P <.01; R/N: 63.0 [SD = 31.9] versus 41.3 [SD = 35.0], P <.01; E: 61.1 [SD = 32.4] versus 44.6 [SD = 35.2], P <.05; Figure 3A and Table 3). Meanwhile, the EIS for R/N disasters alone among older participants was significantly higher in older Group A than in older Group N (72.1 [SD = 31.2] versus 58.0 [SD = 35.3]; P <.05), but those for other disasters (natural, human-made, C, B, and E) did not significantly differ between the two groups (Figure 3B and Table 3). According to age, there was no difference in the intention to engage in all types of disasters between younger and older participants in Group A (Figure 4A and Table 4). However, older participants in Group N had a significantly higher EIS for B (35.5 [SD = 32.6] versus 49.4 [SD = 35.7]; P <.05) and R/N (41.3 [SD = 35.0] versus 58.0 [SD = 35.3]; P <.05) disasters than younger participants in Group N (Figure 4B and Table 4).
Figure 3. Comparison of Engagement Intent Score According to the Type of Disaster in Each Age Group. A. There was no significant difference between younger participants in Group A and Group N in terms of intention to engage in natural and human-made disasters. Group A had a significantly higher intention to engage in all CBRNE disaster activities. B. The score for radiological/nuclear disaster alone was significantly higher among older participants in Group A than in Group N. However, the results for other disasters, except radiological/nuclear ones, did not significantly differ between the two groups. Abbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area. * P <.05; ** P <.01.
Table 3. Comparison of EIS for Each Type of Disaster among the Same Age Group (mean [SD] EIS for Younger Group A [n = 28] versus Younger Group N [n = 56] with P value, then Older Group A [n = 51] versus Older Group N [n = 43] with P value)
Natural: 78.0 (SD = 25.9) vs. 80.1 (SD = 20.9), P = .68; 85.9 (SD = 17.8) vs. 83.4 (SD = 18.1), P = .50
Human-Made: 78.0 (SD = 27.1) vs. 82.9 (SD = 21.2), P = .36; 81.2 (SD = 25.6) vs. 83.1 (SD = 20.2), P = .70
Chemical: 60.9 (SD = 30.8) vs. 37.4 (SD = 32.3), P <.01; 57.6 (SD = 35.7) vs. 50.3 (SD = 35.6), P = .33
Biological: 55.3 (SD = 31.9) vs. 35.5 (SD = 32.6), P <.01; 54.5 (SD = 37.3) vs. 49.4 (SD = 35.7), P = .50
Radiological/Nuclear: 63.0 (SD = 31.9) vs. 41.3 (SD = 35.0), P <.01; 72.1 (SD = 31.2) vs. 58.0 (SD = 35.3), P = .04
Explosive: 61.1 (SD = 32.4) vs. 44.6 (SD = 35.2), P = .04; 56.8 (SD = 36.4) vs. 51.6 (SD = 38.3), P = .51
Abbreviation: EIS, engagement intent score. P values <.05 were considered statistically significant.
Figure 4. Comparison of Engagement Intent Score in Terms of the Type of Disasters in Each Group. A. There was no difference in the intention to engage in all types of disasters between younger and older participants in Group A. B. Older participants in Group N had a significantly higher intention to engage in biological and radiological/nuclear disaster activities. The same trend was observed for chemical disasters. However, the results did not significantly differ. Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area. * P <.05.
Table 4. Comparison of EIS for Each Type of Disaster among the Same Area (mean [SD] EIS for Younger Group A [n = 28] versus Older Group A [n = 51] with P value, then Younger Group N [n = 56] versus Older Group N [n = 43] with P value)
Natural: 78.0 (SD = 25.9) vs. 85.9 (SD = 17.8), P = .11; 80.1 (SD = 20.9) vs. 83.4 (SD = 18.1), P = .42
Human-Made: 78.0 (SD = 27.1) vs. 81.2 (SD = 25.6), P = .61; 82.9 (SD = 21.2) vs. 83.1 (SD = 20.2), P = .97
Chemical: 60.9 (SD = 30.8) vs. 57.6 (SD = 35.7), P = .68; 37.4 (SD = 35.5) vs. 50.3 (SD = 35.6), P = .06
Biological: 55.3 (SD = 31.9) vs. 54.5 (SD = 37.3), P = .93; 35.5 (SD = 32.6) vs. 49.4 (SD = 35.7), P = .05
Radiological/Nuclear: 63.0 (SD = 31.9) vs. 72.1 (SD = 31.2), P = .22; 41.3 (SD = 35.0) vs. 58.0 (SD = 35.3), P = .02
Explosive: 61.1 (SD = 32.4) vs. 56.8 (SD = 36.4), P = .60; 44.6 (SD = 35.2) vs. 51.6 (SD = 38.3), P = .34
Abbreviation: EIS, engagement intent score. P values <.05 were considered statistically significant.
According to the free answers, DMAT members were not willing to engage in natural and human-made disasters mainly because of safety concerns and the thought of leaving their family members behind in case of injury or death. Nevertheless, the main reasons why DMAT members were not willing to engage in CBRN disaster activities were a lack of knowledge and skills, along with anxiety and fear attributable to the fact that R or C agents cannot be visualized with the naked eye. Meanwhile, the reasons for not engaging in E disasters differed slightly; several participants cited a lack of assurance of safety due to the risk of a second or third explosion.
Conclusion
Japanese DMAT members had a low intention to engage in CBRNE disaster activities compared with natural and human-made disaster activities. To respond smoothly to specific disasters, measures to efficiently improve the intention to engage in CBRNE disaster activities are required.
[ "Ethics Committee Approval" ]
[ "The ethics committee of Fukushima Medical University approved the study protocol (Fukushima, Japan; approval number: 2020-130)." ]
[ "other" ]
[ "Introduction", "Methods", "Ethics Committee Approval", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "In patients with critical conditions, the initial response of the rapid response team or medical emergency team is the most important factor correlated with prognosis.1–3 In recent years, people have sustained injuries caused by different types of disasters, which can be classified as natural (ie, earthquakes), human-made (eg, transport accidents), and specific (ie, coronavirus disease 2019 [COVID-19] and chemical terrorism). Hence, the need to manage these disasters is increasing.4 Among them, chemical (C), biological (B), radiological (R), nuclear (N), and explosive (E) (CBRNE) disasters are considered specific. In such disasters, a rapid and smooth response is required to save the lives of patients. However, in the Fukushima Daiichi Nuclear Power Plant (FDNPP; Ōkuma, Fukushima, Japan) accident (2011), one of the most well-known radiological/nuclear (R/N) disasters, it was challenging to smoothly run disaster response activities at all times.5 Therefore, when providing medical treatment in areas with various hazards, measures should be taken in advance to facilitate disaster activities.\nWith consideration of factors that can prevent a smooth response to CBRNE disasters, the lack of human resources is a major concern. Disaster responders have a low intention to engage in specific disaster activities. Some surveys have shown that even individuals who are willing to respond to natural hazards avoid involvement in nuclear disasters or those involving communicable diseases due to anxiety and lack of knowledge.6–8 Hence, this is a major cause for the lack of human resources and is associated with difficulties in facilitating CBRNE disaster activities. A previous study revealed that factors such as self-confidence, incentives, and family understanding affect the intention of firefighters to engage in nuclear disaster activities.9 However, the current intention of all medical responders to participate in CBRNE disaster activities has not been fully elucidated.\nIn Japan, the Great Hanshin earthquake of 1995 has led to the development of a disaster medical system. Moreover, the Disaster Medical Assistance Team (DMAT), which responds to various disasters, has been established. The DMAT comprises physicians, nurses, and logisticians, as defined in the Basic Disaster Management Plan based on the Japan’s Disaster Countermeasures Basic Act.10,11 Japanese DMAT members can decide whether or not to respond when they receive dispatch requests. On the other hand, to date, the team plays a major role in large-scale disasters in Japan, and its members are the most important disaster medical responders in Japan. Therefore, a survey about the intention of DMAT members to engage in short-term CBRNE disaster activities must be conducted to facilitate a smooth response.\nThis study aimed to conduct a web questionnaire survey among DMAT members from two different areas (one with nuclear disaster experience and the other without). To smoothen each specific disaster response, the current intentions of DMAT members to engage in CBRNE disaster activities were examined. Moreover, future measures that can improve such intentions were evaluated.", "This was a cross-sectional study. An anonymous web questionnaire survey was conducted from October 2020 through November 2020. The website URL of the questionnaire was distributed by research members via e-mail to DMAT members in the two different areas. That is, one was a nuclear disaster-affected area (Group A) and the other was a non-affected area (Group N). 
In total, 204 participants from both areas responded. However, only 178 provided complete responses (effective response rate: total 87.3%; Group A 84.9%; Group N 89.2%). These data were then included in the analysis (Figure 1). The sample size was estimated using the pwr.anova.test function of R 4.0.3 software (R Foundation for Statistical Computing; Vienna, Austria). The following three parameters were included: group size (k = 4), medium effect (f = 0.25), and power (0.8). The estimated sample size was 45 per group; therefore, the total size of the response group was determined to be 180. The questionnaire was used to collect information such as sex, age, occupation, family status, and experience in disaster activities. To validate the intention to engage in disaster activities (natural, human-made, CBRNE disasters), the following question was created: “Will you actively engage in response activities during a natural, human-made, or CBRNE disaster?” The participants were required to answer using the Engagement Intent Score (EIS), which indicates their agreement to the abovementioned question (0%-100%). Participants with an EIS of <50% were instructed to provide a free answer as to why they did not wish to engage.\n\nFigure 1.Flow Chart Showing the Selection of Participants. Note: The web URL of the questionnaire was sent via e-mail; addresses were in the two DMAT mailing lists. That is, one list was for a nuclear disaster-affected area and the other was for a non-affected area. In total, 204 members answered the questionnaire. After excluding 26 incomplete response data, 178 participants were finally included in the analysis.Abbreviation: DMAT, Disaster Medical Assistance Team.\n\nFlow Chart Showing the Selection of Participants. Note: The web URL of the questionnaire was sent via e-mail; addresses were in the two DMAT mailing lists. That is, one list was for a nuclear disaster-affected area and the other was for a non-affected area. In total, 204 members answered the questionnaire. After excluding 26 incomplete response data, 178 participants were finally included in the analysis.\nAbbreviation: DMAT, Disaster Medical Assistance Team.\nThe participants were divided into four groups according to age and area: younger Group A (≤39 years old; nuclear disaster-affected area), older Group A (≥40 years old; nuclear disaster-affected area), younger Group N (≤39 years old; non-affected area), and older Group N (≥40 years old; non-affected area; Figure 2). The reference age was set at 39 years as the mean age of DMAT members was 38.8 years, and a specific medical checkup is available for those aged >40 years in Japan.12,13 The background characteristics were compared between the four groups using the chi-squared test. Each EIS was presented as the mean and standard deviation (SD). The EIS between each disaster was compared with the analysis of variance and the Tukey-Kramer test for multiple comparisons. Sub-analysis was performed for male participants only. Further analyses were conducted according to age and area. The EIS was compared between the younger and older and nuclear disaster-affected and non-affected groups using the student’s t-test. All statistical analyses were performed using JMP 14 (SAS Institute Inc.; Cary, North Carolina USA), and a P value of .05 was considered statistically significant.\n\nFigure 2.Diagrammatic Image Representation of a Stratified Comparison. Note: The horizontal axis indicates age and the vertical axis represents disaster experience. 
Each comparison in Figures 3A, 3B, 4A, and 4B is depicted with a black bidirectional arrow. Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area.\n\nDiagrammatic Image Representation of a Stratified Comparison. Note: The horizontal axis indicates age and the vertical axis represents disaster experience. Each comparison in Figures 3A, 3B, 4A, and 4B is depicted with a black bidirectional arrow. Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area.\nEthics Committee Approval The ethics committee of Fukushima Medical University approved the study protocol (Fukushima, Japan; approval number: 2020-130).\nThe ethics committee of Fukushima Medical University approved the study protocol (Fukushima, Japan; approval number: 2020-130).", "The ethics committee of Fukushima Medical University approved the study protocol (Fukushima, Japan; approval number: 2020-130).", "The characteristics of younger participants in Group A (n = 28), younger participants in Group N (n = 56), older participants in Group A (n = 51), and older participants in Group N (n = 43) are depicted in Table 1. There were differences in terms of background characteristics between the four groups in terms of occupation and experience in natural disaster activities (Table 1). According to the primary outcome, the mean EIS for each type of disaster were as follows: natural, 82.2 (SD = 20.3); human-made, 81.7 (SD = 23.2); C, 50.0 (SD = 34.9); B, 47.4 (SD = 35.3); R/N, 57.6 (SD = 35.5); and E, 52.4 (SD = 36.1). After multiple comparisons, the proportion of natural and human-made disasters was significantly higher than that of C, B, R/N, and E disasters (all P values <.001). Furthermore, R/N disasters had a higher EIS than B disasters (P <.05; Table 2). In addition, a sub-analysis of only the male participants showed the same results (Supplemental Table 1; available online only).\n\nTable 1.Characteristics of the ParticipantsYounger Group A(n = 28)Younger Group N(n = 56)Older Group A(n = 51)Older Group N(n = 43)\nP ValueSex, n (%)  Female6 (21.4)17 (30.4)18 (35.3)9 (20.9).368  Male22 (78.6)39 (69.6)33 (64.7)34 (79.1)Age (years), n (%)  20-296 (21.4)8 (14.3)−−.408a\n  30-3922 (78.6)48 (85.7)−−  40-49−−33 (64.7)34 (79.1).125b\n  Over 50−−18 (35.3)9 (20.9)Occupation, n (%)  Physician3 (10.7)8 (14.3)17 (33.3)13 (30.2).048  Nurse7 (25.0)27 (48.2)20 (39.3)15 (34.9)  Administrative Staff7 (25.0)9 (16.1)7 (13.7)7 (16.3)  Others11 (39.3)12 (21.4)7 (13.7)8 (18.6)Family, n (%)  Without6 (21.4)16 (28.6)11 (21.6)10 (23.3).822  With22 (78.6)40 (71.4)40 (78.4)33 (76.7)Experience in Natural Disaster Activities, n (%)  No12 (42.9)24 (42.9)9 (17.6)10 (23.3).012  Yes16 (57.1)32 (57.1)42 (82.4)33 (76.7)Experience in CBRNE Disaster Activities, n (%)  No27 (96.4)53 (94.6)45 (88.2)39 (90.7).495  Yes1 (3.6)3 (5.4)6 (11.8)4 (9.3)Abbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area.a\nComparison between younger Group A and younger Group N.b\nComparison between older Group A and older Group N.\n\nCharacteristics of the Participants\nAbbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area.\nComparison between younger Group A and younger Group N.\nComparison between older Group A and older Group N.\n\nTable 2.Multiple Comparison of EIS between the Six Types of DisastersNo.Disaster TypeMean (SD) EIS95% CI\nP Valuea\nvs. 
2vs. 3vs. 4vs. 5vs. 61Natural82.3 (SD = 20.3)79.2-85.31.00<.01<.01<.01<.012Human-Made81.7 (SD = 23.2)78.3-85.1—<.01<.01<.01<.013Chemical50.0 (SD = 34.9)44.8-55.1——.97.21.984Biological47.4 (SD = 35.3)42.2-52.6———.03.675Radiological/Nuclear57.6 (SD = 35.5)52.3-62.8————.636Explosive52.4 (SD = 36.1)47.0-57.7—————Abbreviation: EIS, enga_gement intent score.a\nP values <.05 were considered statistically significant.\n\nMultiple Comparison of EIS between the Six Types of Disasters\nAbbreviation: EIS, enga_gement intent score.\nP values <.05 were considered statistically significant.\nBased on the intention to engage in various types of disasters, the EIS for all CBRNE disasters among younger participants was significantly higher in younger Group A than in younger Group N (C: 60.9 [SD = 30.8] versus 37.4 [SD = 32.3], P <.01; B: 55.3 [SD = 31.9] versus 35.5 [SD = 32.6], P <.01; R/N: 63.0 [SD = 31.9] versus 41.3 [SD = 35.0], P <.01; E: 61.1 [SD = 32.4] versus 44.6 [SD = 35.2], P <.05; Figure 3A and Table 3). Meanwhile, the EIS for R/N disasters alone among older participants was significantly higher in older Group A than in older Group N (72.1 [SD = 31.2] versus 58.0 [SD = 35.3]; P <.05), but those for other disasters (natural, human-made, C, B, and E) did not significantly differ between the two groups (Figure 3B and Table 3). According to age, there was no difference in the intention to engage in all types of disasters between younger and older participants in Group A (Figure 4A and Table 4). However, older participants in Group N had a significantly higher EIS for B (35.5 [SD = 32.6] versus 49.4 [SD = 35.7]; P <.05) and R/N (41.3 [SD = 35.0] versus 58.0 [SD = 35.3]; P <.05) disasters than younger participants in Group N (Figure 4B and Table 4).\n\nFigure 3.Comparison of Engagement Intent Score According to the Type of Disaster in Each Age Group. A. There was no significant difference between younger participants in Group A and Group N in terms of intention to engage in natural and human-made disasters. Group A had a significantly higher intention to engage in all CBRNE disaster activities. B. The score for radiological/nuclear disaster alone was significantly higher among older participants in Group A than in Group N. However, the results for other disasters, except radiological/nuclear ones, did not significantly differ between the two groups.Abbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area.* P <.05;** P <.01.\n\nComparison of Engagement Intent Score According to the Type of Disaster in Each Age Group. A. There was no significant difference between younger participants in Group A and Group N in terms of intention to engage in natural and human-made disasters. Group A had a significantly higher intention to engage in all CBRNE disaster activities. B. The score for radiological/nuclear disaster alone was significantly higher among older participants in Group A than in Group N. 
However, the results for other disasters, except radiological/nuclear ones, did not significantly differ between the two groups.\nAbbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area.\n* P <.05;\n** P <.01.\n\nTable 3.Comparison of EIS for Each Type of Disaster among the Same Age GroupDisaster TypeYounger Group A(n = 28)Younger Group N(n = 56)\nP Valuea\nOlder Group A(n = 51)Older Group N(n = 43)\nP Valuea\nNatural78.0 (SD = 25.9)80.1 (SD = 20.9).6885.9 (SD = 17.8)83.4 (SD = 18.1).50Human-Made78.0 (SD = 27.1)82.9 (SD = 21.2).3681.2 (SD = 25.6)83.1 (SD = 20.2).70Chemical60.9 (SD = 30.8)37.4 (SD = 32.3)<.0157.6 (SD = 35.7)50.3 (SD = 35.6).33Biological55.3 (SD = 31.9)35.5 (SD = 32.6)<.0154.5 (SD = 37.3)49.4 (SD = 35.7).50Radiological/Nuclear63.0 (SD = 31.9)41.3 (SD = 35.0)<.0172.1 (SD = 31.2)58.0 (SD = 35.3).04Explosive61.1 (SD = 32.4)44.6 (SD = 35.2).0456.8 (SD = 36.4)51.6 (SD = 38.3).51Abbreviation: EIS, engagement intent score.a\n\nP values <.05 were considered statistically significant.\n\nComparison of EIS for Each Type of Disaster among the Same Age Group\nAbbreviation: EIS, engagement intent score.\n\nP values <.05 were considered statistically significant.\n\nFigure 4.Comparison of Engagement Intent Score in Terms of the Type of Disasters in Each Group. A. There was no difference in the intention to engage in all types of disasters between younger and older participants in Group A. B. Older participants in Group N had a significantly higher intention to engage in biological and radiological/nuclear disaster activities. The same trend was observed for chemical disasters. However, the results did not significantly differ.Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area.*P <.05.\n\nComparison of Engagement Intent Score in Terms of the Type of Disasters in Each Group. A. There was no difference in the intention to engage in all types of disasters between younger and older participants in Group A. B. Older participants in Group N had a significantly higher intention to engage in biological and radiological/nuclear disaster activities. The same trend was observed for chemical disasters. 
However, the results did not significantly differ.\nAbbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area.\n*P <.05.\n\nTable 4.Comparison of EIS for Each Type of Disaster among the Same AreaDisaster TypeYounger Group A(n = 28)Older Group A(n = 51)\nP Valuea\nYounger Group N(n = 56)Older Group N(n = 43)\nP Valuea\nNatural78.0 (SD = 25.9)85.9 (SD = 17.8).1180.1 (SD = 20.9)83.4 (SD = 18.1).42Human-Made78.0 (SD = 27.1)81.2 (SD = 25.6).6182.9 (SD = 21.2)83.1 (SD = 20.2).97Chemical60.9 (SD = 30.8)57.6 (SD = 35.7).6837.4 (SD = 35.5)50.3 (SD = 35.6).06Biological55.3 (SD = 31.9)54.5 (SD = 37.3).9335.5 (SD = 32.6)49.4 (SD = 35.7).05Radiological/Nuclear63.0 (SD = 31.9)72.1 (SD = 31.2).2241.3 (SD = 35.0)58.0 (SD = 35.3).02Explosive61.1 (SD = 32.4)56.8 (SD = 36.4).6044.6 (SD = 35.2)51.6 (SD = 38.3).34Abbreviation: EIS, engagement intent score.a\n\nP values <.05 were considered statistically significant.\n\nComparison of EIS for Each Type of Disaster among the Same Area\nAbbreviation: EIS, engagement intent score.\n\nP values <.05 were considered statistically significant.\nAccording to the free answers, the DMAT members were not willing to engage in natural and human-made disasters mainly due to safety and the thought of leaving their family members behind in case of injuries or death. Nevertheless, the main reasons why DMAT members were not willing to engage in CBRN disaster activities were lack of knowledge and skills along with the anxiety and fear attributable to the fact that the R or C agents cannot be visualized with the naked eye. Meanwhile, the reasons for not engaging in E disasters slightly differed. That is, several participants answered lack of security due to the risk of second or third explosion.", "This study aimed to assess and elucidate the response provided by DMAT members to CBRNE disasters; this is considering the fact that an organized response is imperative in disaster management. To achieve the goal, this study initially focused on securing human resources. As a first step toward this goal, this study investigated the DMAT’s intention to engage in various types of disaster activities. Some questionnaire surveys asked about the intention to engage in disaster activities. However, most studies were based on a two-to-ten scale of responses to the questionnaire. To the best of the authors’ knowledge, this is the first survey about the intention of DMAT members to participate in a specific disaster activity, and the intention was scored on a scale of 100%. Using a continuous scale, the intention to engage in disaster activities could be accurately evaluated.\nIn previous studies, approximately 40% of medical personnel or firefighter-training students in a non-affected area6,7 and 56.3% in a nuclear disaster-affected area9 were willing to engage in R/N disaster activities. Based on these reports, disaster responders in a nuclear disaster-affected area had a higher intention to engage in R/N disasters, and this finding is understandable. With actual experience in helping individuals affected by a disaster, people can accept things as their own, and their interest in disaster activities increases. A greater interest leads to intention to take action.14 In this study, both age groups who experienced nuclear disasters had a higher EIS for R/N disasters, and the result was comparable to that of previous reports about firefighters in nuclear disaster areas. 
This finding indicated that experiences with disasters might have a positive influence on intention among not only firefighters, but also DMAT members.\nAs shown in Table 2, the EIS of CBRNE disasters was significantly lower than that of natural or human-made disasters. Moreover, the SD of the EIS in each CBRNE disaster was higher than that of natural or human-made disasters. This result indicates that the intention to engage in CBRNE disasters varies according to the member’s experience, knowledge, or skill; however, difference in the intention to engage in natural and human-made disasters appears to be less. According to the free answer, there was a trend for the reasons for not willing to engage in disaster activities. For natural and human-made disasters, the main reasons were safety and leaving family members behind in case of critical injuries or death. However, the main reasons for not willing to engage in CBRN disasters were lack of knowledge and skills, anxiety, and fear since these disasters cannot be prevented. The reasons for not willing to engage in E disasters slightly differed, and the most common reason was the lack of assurance regarding safety from the effects of second or third impact. Thus, DMAT members must have appropriate knowledge and skills to help improve their intention to engage in CBRNE disaster activities.\nBiological disasters had the lowest EIS. This result may be attributed to the recent COVID-19 pandemic. That is, it is similar to a previous survey, which reported that there is a trend toward lower intention to engage in situations involving communicable diseases compared with R/N hazards.7 Japan experienced large C, R/N, or E disasters, such as the Tokyo subway sarin attack (1995), FDNPP accident (2011), and atomic bomb explosion during World War II (1945).5,15,16 By contrast, within the previous decades, there was no significant B disaster, except the COVID-19 pandemic. Moreover, lack of knowledge to disaster-related matters may result in an increase in the DMAT members’ perception of risks, which may consequently lead to avoidance in engaging to disaster activities.17 Based on these aspects, the fact of unknown might lead to a lack of disaster response image for this study population, and this might affect the result of low EIS in B disasters.\nIn the younger group (age: ≤39 years), the EIS was significantly higher in Group A than in Group N for all CBRNE disasters. It is easy to imagine that those who have experienced R/N disasters (Group A) have a higher EIS for R/N disasters. However, the result showed that they also had a higher EIS for C, B, and E disasters. In other words, the results indicated that the experience with R/N disasters had a positive impact on the intention to engage in other CBRNE disasters. Thus, an experience with one specific disaster may help develop adaptability or gain confidence to face other specific disasters in general, thereby improving one’s intention to engage to such disasters. In this study, the reason for the abovementioned phenomenon has not been validated. However, authors have considered the high-risk perception of young-aged Japanese population. 
A previous study has revealed that there is a relationship between behavior and risk perception.18 The latter is strongly influenced by not only experience, but also culture or nationality, and Japanese are known to have a higher-risk perception than other nationalities.19,20 Moreover, age affects risk perception.19 Individuals may perceive CBRNE disasters as similar based on experiences with a certain type of CBRNE disaster at a young age. Moreover, they can lower their risk perception for other CBRNE disasters. By contrast, the EIS for R/N disasters alone in older participants was significantly higher in Group A than in Group N, and there was no difference in the EIS for other CBRNE disasters between the two older groups. In the sub-analysis comparing the EIS of younger participants in Group A and older participants in Group N, there was no significant difference according to all types of disasters. Group A experienced R/N disasters at a young age, indicating that they acquired knowledge and experience at an early stage, which could have developed over a long period of time. As shown in Table 1, the background characteristics significantly differed in terms of occupation and natural disaster activity experience in each group. This study compared the EIS between the two groups according to each factor because it is impossible to exclude background characteristics if a simultaneous comparison of the four groups is performed.\nYounger participants in Group N had a low EIS for CBRNE disasters. Thus, EIS increases with age and experience. Assessing what they have gained through aging or experience will help to identify the important points for interventions to elevate the EIS of individuals who engage in disaster activities in the future. There are several possible factors with consideration of age and experience. These include acquisition of knowledge and skills or interest. However, further research on specific measures should be conducted. By efficiently and accurately addressing these factors, the number of people who will engage in disaster activities will increase. Then, this will lead to a stable supply of human resources in the future.", "The website URL of the questionnaire was sent to the registered e-mail addresses in the mailing list. However, whether all DMAT members have received the e-mail was not validated. Moreover, some people might have registered more than one e-mail address. Meanwhile, others might not have received the e-mail due to changes in address. Therefore, the actual collection rate was not verified. This survey was conducted on DMAT members from two areas only. Thus, DMAT members from other areas or individuals with other occupations that might involve engagement in various disasters were not included. Hence, further surveys must be conducted.", "Japanese DMAT members had a low intention to engage in CBRNE disaster activities compared with natural and human-made disaster activities. To respond smoothly to specific disasters, measures to efficiently improve the intention to engage in CBRNE disaster activities are required." ]
[ "intro", "methods", "other", "results", "discussion", "other", "conclusions" ]
[ "disaster", "emergency responders", "hazard", "human resources", "intention" ]
Introduction: In patients with critical conditions, the initial response of the rapid response team or medical emergency team is the most important factor correlated with prognosis.1–3 In recent years, people have sustained injuries caused by different types of disasters, which can be classified as natural (ie, earthquakes), human-made (eg, transport accidents), and specific (ie, coronavirus disease 2019 [COVID-19] and chemical terrorism). Hence, the need to manage these disasters is increasing.4 Among them, chemical (C), biological (B), radiological (R), nuclear (N), and explosive (E) (CBRNE) disasters are considered specific. In such disasters, a rapid and smooth response is required to save the lives of patients. However, in the Fukushima Daiichi Nuclear Power Plant (FDNPP; Ōkuma, Fukushima, Japan) accident (2011), one of the most well-known radiological/nuclear (R/N) disasters, it was challenging to smoothly run disaster response activities at all times.5 Therefore, when providing medical treatment in areas with various hazards, measures should be taken in advance to facilitate disaster activities. With consideration of factors that can prevent a smooth response to CBRNE disasters, the lack of human resources is a major concern. Disaster responders have a low intention to engage in specific disaster activities. Some surveys have shown that even individuals who are willing to respond to natural hazards avoid involvement in nuclear disasters or those involving communicable diseases due to anxiety and lack of knowledge.6–8 Hence, this is a major cause for the lack of human resources and is associated with difficulties in facilitating CBRNE disaster activities. A previous study revealed that factors such as self-confidence, incentives, and family understanding affect the intention of firefighters to engage in nuclear disaster activities.9 However, the current intention of all medical responders to participate in CBRNE disaster activities has not been fully elucidated. In Japan, the Great Hanshin earthquake of 1995 has led to the development of a disaster medical system. Moreover, the Disaster Medical Assistance Team (DMAT), which responds to various disasters, has been established. The DMAT comprises physicians, nurses, and logisticians, as defined in the Basic Disaster Management Plan based on the Japan’s Disaster Countermeasures Basic Act.10,11 Japanese DMAT members can decide whether or not to respond when they receive dispatch requests. On the other hand, to date, the team plays a major role in large-scale disasters in Japan, and its members are the most important disaster medical responders in Japan. Therefore, a survey about the intention of DMAT members to engage in short-term CBRNE disaster activities must be conducted to facilitate a smooth response. This study aimed to conduct a web questionnaire survey among DMAT members from two different areas (one with nuclear disaster experience and the other without). To smoothen each specific disaster response, the current intentions of DMAT members to engage in CBRNE disaster activities were examined. Moreover, future measures that can improve such intentions were evaluated. Methods: This was a cross-sectional study. An anonymous web questionnaire survey was conducted from October 2020 through November 2020. The website URL of the questionnaire was distributed by research members via e-mail to DMAT members in the two different areas. 
That is, one was a nuclear disaster-affected area (Group A) and the other was a non-affected area (Group N). In total, 204 participants from both areas responded. However, only 178 provided complete responses (effective response rate: total 87.3%; Group A 84.9%; Group N 89.2%). These data were then included in the analysis (Figure 1). The sample size was estimated using the pwr.anova.test function of R 4.0.3 software (R Foundation for Statistical Computing; Vienna, Austria). The following three parameters were included: group size (k = 4), medium effect (f = 0.25), and power (0.8). The estimated sample size was 45 per group; therefore, the total size of the response group was determined to be 180. The questionnaire was used to collect information such as sex, age, occupation, family status, and experience in disaster activities. To validate the intention to engage in disaster activities (natural, human-made, CBRNE disasters), the following question was created: “Will you actively engage in response activities during a natural, human-made, or CBRNE disaster?” The participants were required to answer using the Engagement Intent Score (EIS), which indicates their agreement to the abovementioned question (0%-100%). Participants with an EIS of <50% were instructed to provide a free answer as to why they did not wish to engage. Figure 1.Flow Chart Showing the Selection of Participants. Note: The web URL of the questionnaire was sent via e-mail; addresses were in the two DMAT mailing lists. That is, one list was for a nuclear disaster-affected area and the other was for a non-affected area. In total, 204 members answered the questionnaire. After excluding 26 incomplete response data, 178 participants were finally included in the analysis.Abbreviation: DMAT, Disaster Medical Assistance Team. Flow Chart Showing the Selection of Participants. Note: The web URL of the questionnaire was sent via e-mail; addresses were in the two DMAT mailing lists. That is, one list was for a nuclear disaster-affected area and the other was for a non-affected area. In total, 204 members answered the questionnaire. After excluding 26 incomplete response data, 178 participants were finally included in the analysis. Abbreviation: DMAT, Disaster Medical Assistance Team. The participants were divided into four groups according to age and area: younger Group A (≤39 years old; nuclear disaster-affected area), older Group A (≥40 years old; nuclear disaster-affected area), younger Group N (≤39 years old; non-affected area), and older Group N (≥40 years old; non-affected area; Figure 2). The reference age was set at 39 years as the mean age of DMAT members was 38.8 years, and a specific medical checkup is available for those aged >40 years in Japan.12,13 The background characteristics were compared between the four groups using the chi-squared test. Each EIS was presented as the mean and standard deviation (SD). The EIS between each disaster was compared with the analysis of variance and the Tukey-Kramer test for multiple comparisons. Sub-analysis was performed for male participants only. Further analyses were conducted according to age and area. The EIS was compared between the younger and older and nuclear disaster-affected and non-affected groups using the student’s t-test. All statistical analyses were performed using JMP 14 (SAS Institute Inc.; Cary, North Carolina USA), and a P value of .05 was considered statistically significant. Figure 2.Diagrammatic Image Representation of a Stratified Comparison. 
Note: The horizontal axis indicates age and the vertical axis represents disaster experience. Each comparison in Figures 3A, 3B, 4A, and 4B is depicted with a black bidirectional arrow. Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area. Ethics Committee Approval: The ethics committee of Fukushima Medical University approved the study protocol (Fukushima, Japan; approval number: 2020-130). Results: The characteristics of younger participants in Group A (n = 28), younger participants in Group N (n = 56), older participants in Group A (n = 51), and older participants in Group N (n = 43) are depicted in Table 1. The four groups differed in background characteristics in terms of occupation and experience in natural disaster activities (Table 1). According to the primary outcome, the mean EIS values for each type of disaster were as follows: natural, 82.2 (SD = 20.3); human-made, 81.7 (SD = 23.2); C, 50.0 (SD = 34.9); B, 47.4 (SD = 35.3); R/N, 57.6 (SD = 35.5); and E, 52.4 (SD = 36.1). After multiple comparisons, the EIS for natural and human-made disasters was significantly higher than that for C, B, R/N, and E disasters (all P values <.001). Furthermore, R/N disasters had a higher EIS than B disasters (P <.05; Table 2). In addition, a sub-analysis of only the male participants showed the same results (Supplemental Table 1; available online only).

Table 1. Characteristics of the Participants

| Characteristic | Younger Group A (n = 28) | Younger Group N (n = 56) | Older Group A (n = 51) | Older Group N (n = 43) | P Value |
| --- | --- | --- | --- | --- | --- |
| Sex: Female, n (%) | 6 (21.4) | 17 (30.4) | 18 (35.3) | 9 (20.9) | .368 |
| Sex: Male, n (%) | 22 (78.6) | 39 (69.6) | 33 (64.7) | 34 (79.1) | |
| Age 20-29 years, n (%) | 6 (21.4) | 8 (14.3) | − | − | .408 a |
| Age 30-39 years, n (%) | 22 (78.6) | 48 (85.7) | − | − | |
| Age 40-49 years, n (%) | − | − | 33 (64.7) | 34 (79.1) | .125 b |
| Age over 50 years, n (%) | − | − | 18 (35.3) | 9 (20.9) | |
| Occupation: Physician, n (%) | 3 (10.7) | 8 (14.3) | 17 (33.3) | 13 (30.2) | .048 |
| Occupation: Nurse, n (%) | 7 (25.0) | 27 (48.2) | 20 (39.3) | 15 (34.9) | |
| Occupation: Administrative Staff, n (%) | 7 (25.0) | 9 (16.1) | 7 (13.7) | 7 (16.3) | |
| Occupation: Others, n (%) | 11 (39.3) | 12 (21.4) | 7 (13.7) | 8 (18.6) | |
| Family: Without, n (%) | 6 (21.4) | 16 (28.6) | 11 (21.6) | 10 (23.3) | .822 |
| Family: With, n (%) | 22 (78.6) | 40 (71.4) | 40 (78.4) | 33 (76.7) | |
| Experience in Natural Disaster Activities: No, n (%) | 12 (42.9) | 24 (42.9) | 9 (17.6) | 10 (23.3) | .012 |
| Experience in Natural Disaster Activities: Yes, n (%) | 16 (57.1) | 32 (57.1) | 42 (82.4) | 33 (76.7) | |
| Experience in CBRNE Disaster Activities: No, n (%) | 27 (96.4) | 53 (94.6) | 45 (88.2) | 39 (90.7) | .495 |
| Experience in CBRNE Disaster Activities: Yes, n (%) | 1 (3.6) | 3 (5.4) | 6 (11.8) | 4 (9.3) | |

Abbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area. a Comparison between younger Group A and younger Group N. b Comparison between older Group A and older Group N.
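As a side note on the a priori sample-size estimate quoted in the Methods above (pwr.anova.test in R with k = 4 groups, a medium effect f = 0.25, and power = 0.8), the short Python sketch below reproduces the same figure with statsmodels. The choice of library is ours, not the authors', and it assumes that statsmodels' `nobs` denotes the total sample size across groups rather than the per-group size.

```python
# Sketch: approximate Python equivalent of the R call
#   pwr.anova.test(k = 4, f = 0.25, sig.level = 0.05, power = 0.8)
# used for the a priori sample-size estimate (assumption: statsmodels'
# FTestAnovaPower treats `nobs` as the TOTAL sample size across groups).
import math
from statsmodels.stats.power import FTestAnovaPower

k_groups = 4        # younger/older x nuclear disaster-affected/non-affected
effect_size = 0.25  # Cohen's f, "medium" effect
alpha = 0.05
power = 0.80

total_n = FTestAnovaPower().solve_power(
    effect_size=effect_size, k_groups=k_groups, alpha=alpha, power=power
)
print(f"total N ~ {total_n:.0f}, per group ~ {math.ceil(total_n / k_groups)}")
```

Rounding the per-group value up reproduces the 45 participants per group (180 in total) stated in the Methods.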
Table 2. Multiple Comparison of EIS between the Six Types of Disasters

| No. | Disaster Type | Mean (SD) EIS | 95% CI | P vs. 2 | P vs. 3 | P vs. 4 | P vs. 5 | P vs. 6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Natural | 82.3 (20.3) | 79.2-85.3 | 1.00 | <.01 | <.01 | <.01 | <.01 |
| 2 | Human-Made | 81.7 (23.2) | 78.3-85.1 | — | <.01 | <.01 | <.01 | <.01 |
| 3 | Chemical | 50.0 (34.9) | 44.8-55.1 | — | — | .97 | .21 | .98 |
| 4 | Biological | 47.4 (35.3) | 42.2-52.6 | — | — | — | .03 | .67 |
| 5 | Radiological/Nuclear | 57.6 (35.5) | 52.3-62.8 | — | — | — | — | .63 |
| 6 | Explosive | 52.4 (36.1) | 47.0-57.7 | — | — | — | — | — |

Abbreviation: EIS, engagement intent score. P values <.05 were considered statistically significant.

Based on the intention to engage in various types of disasters, the EIS for all CBRNE disasters among younger participants was significantly higher in younger Group A than in younger Group N (C: 60.9 [SD = 30.8] versus 37.4 [SD = 32.3], P <.01; B: 55.3 [SD = 31.9] versus 35.5 [SD = 32.6], P <.01; R/N: 63.0 [SD = 31.9] versus 41.3 [SD = 35.0], P <.01; E: 61.1 [SD = 32.4] versus 44.6 [SD = 35.2], P <.05; Figure 3A and Table 3). Meanwhile, the EIS for R/N disasters alone among older participants was significantly higher in older Group A than in older Group N (72.1 [SD = 31.2] versus 58.0 [SD = 35.3]; P <.05), but those for other disasters (natural, human-made, C, B, and E) did not significantly differ between the two groups (Figure 3B and Table 3). According to age, there was no difference in the intention to engage in all types of disasters between younger and older participants in Group A (Figure 4A and Table 4). However, older participants in Group N had a significantly higher EIS for B (35.5 [SD = 32.6] versus 49.4 [SD = 35.7]; P <.05) and R/N (41.3 [SD = 35.0] versus 58.0 [SD = 35.3]; P <.05) disasters than younger participants in Group N (Figure 4B and Table 4).

Figure 3. Comparison of Engagement Intent Score According to the Type of Disaster in Each Age Group. A. There was no significant difference between younger participants in Group A and Group N in terms of intention to engage in natural and human-made disasters. Group A had a significantly higher intention to engage in all CBRNE disaster activities. B. The score for radiological/nuclear disaster alone was significantly higher among older participants in Group A than in Group N. However, the results for other disasters, except radiological/nuclear ones, did not significantly differ between the two groups. Abbreviations: CBRNE, chemical, biological, radiological, nuclear, and explosive; Group A, nuclear disaster-affected area; Group N, non-affected area. * P <.05; ** P <.01.
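To make the analysis behind Table 2 concrete, the sketch below runs a one-way ANOVA followed by Tukey HSD pairwise comparisons of the EIS across the six disaster types. The data are simulated with means roughly matching the reported values; this is an illustrative stand-in, not the authors' JMP Tukey-Kramer analysis of the actual responses.

```python
# Sketch (simulated data): one-way ANOVA + Tukey HSD across the six disaster
# types, mirroring the comparison summarized in Table 2. Each respondent
# would contribute one 0-100 EIS value per disaster type.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
types = ["natural", "human-made", "chemical", "biological",
         "radiological/nuclear", "explosive"]
means = [82, 82, 50, 47, 58, 52]   # roughly the reported group means
n = 178                            # number of respondents

eis = pd.DataFrame({
    "disaster": np.repeat(types, n),
    "score": np.clip(np.concatenate([rng.normal(m, 30, n) for m in means]), 0, 100),
})

groups = [g["score"].to_numpy() for _, g in eis.groupby("disaster")]
print(stats.f_oneway(*groups))                           # overall ANOVA
print(pairwise_tukeyhsd(eis["score"], eis["disaster"]))  # post hoc pairwise tests
```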
Table 3. Comparison of EIS for Each Type of Disaster among the Same Age Group

| Disaster Type | Younger Group A (n = 28) | Younger Group N (n = 56) | P Value | Older Group A (n = 51) | Older Group N (n = 43) | P Value |
| --- | --- | --- | --- | --- | --- | --- |
| Natural | 78.0 (25.9) | 80.1 (20.9) | .68 | 85.9 (17.8) | 83.4 (18.1) | .50 |
| Human-Made | 78.0 (27.1) | 82.9 (21.2) | .36 | 81.2 (25.6) | 83.1 (20.2) | .70 |
| Chemical | 60.9 (30.8) | 37.4 (32.3) | <.01 | 57.6 (35.7) | 50.3 (35.6) | .33 |
| Biological | 55.3 (31.9) | 35.5 (32.6) | <.01 | 54.5 (37.3) | 49.4 (35.7) | .50 |
| Radiological/Nuclear | 63.0 (31.9) | 41.3 (35.0) | <.01 | 72.1 (31.2) | 58.0 (35.3) | .04 |
| Explosive | 61.1 (32.4) | 44.6 (35.2) | .04 | 56.8 (36.4) | 51.6 (38.3) | .51 |

Values are mean (SD). Abbreviation: EIS, engagement intent score. P values <.05 were considered statistically significant.

Figure 4. Comparison of Engagement Intent Score in Terms of the Type of Disasters in Each Group. A. There was no difference in the intention to engage in all types of disasters between younger and older participants in Group A. B. Older participants in Group N had a significantly higher intention to engage in biological and radiological/nuclear disaster activities. The same trend was observed for chemical disasters. However, the results did not significantly differ. Abbreviations: Group A, nuclear disaster-affected area; Group N, non-affected area. * P <.05.

Table 4. Comparison of EIS for Each Type of Disaster among the Same Area

| Disaster Type | Younger Group A (n = 28) | Older Group A (n = 51) | P Value | Younger Group N (n = 56) | Older Group N (n = 43) | P Value |
| --- | --- | --- | --- | --- | --- | --- |
| Natural | 78.0 (25.9) | 85.9 (17.8) | .11 | 80.1 (20.9) | 83.4 (18.1) | .42 |
| Human-Made | 78.0 (27.1) | 81.2 (25.6) | .61 | 82.9 (21.2) | 83.1 (20.2) | .97 |
| Chemical | 60.9 (30.8) | 57.6 (35.7) | .68 | 37.4 (35.5) | 50.3 (35.6) | .06 |
| Biological | 55.3 (31.9) | 54.5 (37.3) | .93 | 35.5 (32.6) | 49.4 (35.7) | .05 |
| Radiological/Nuclear | 63.0 (31.9) | 72.1 (31.2) | .22 | 41.3 (35.0) | 58.0 (35.3) | .02 |
| Explosive | 61.1 (32.4) | 56.8 (36.4) | .60 | 44.6 (35.2) | 51.6 (38.3) | .34 |

Values are mean (SD). Abbreviation: EIS, engagement intent score. P values <.05 were considered statistically significant.

According to the free answers, the DMAT members were not willing to engage in natural and human-made disasters mainly due to safety and the thought of leaving their family members behind in case of injuries or death. Nevertheless, the main reasons why DMAT members were not willing to engage in CBRN disaster activities were lack of knowledge and skills along with the anxiety and fear attributable to the fact that the R or C agents cannot be visualized with the naked eye. 
Meanwhile, the reasons for not engaging in E disasters slightly differed. That is, several participants answered lack of security due to the risk of second or third explosion. Discussion: This study aimed to assess and elucidate the response provided by DMAT members to CBRNE disasters; this is considering the fact that an organized response is imperative in disaster management. To achieve the goal, this study initially focused on securing human resources. As a first step toward this goal, this study investigated the DMAT’s intention to engage in various types of disaster activities. Some questionnaire surveys asked about the intention to engage in disaster activities. However, most studies were based on a two-to-ten scale of responses to the questionnaire. To the best of the authors’ knowledge, this is the first survey about the intention of DMAT members to participate in a specific disaster activity, and the intention was scored on a scale of 100%. Using a continuous scale, the intention to engage in disaster activities could be accurately evaluated. In previous studies, approximately 40% of medical personnel or firefighter-training students in a non-affected area6,7 and 56.3% in a nuclear disaster-affected area9 were willing to engage in R/N disaster activities. Based on these reports, disaster responders in a nuclear disaster-affected area had a higher intention to engage in R/N disasters, and this finding is understandable. With actual experience in helping individuals affected by a disaster, people can accept things as their own, and their interest in disaster activities increases. A greater interest leads to intention to take action.14 In this study, both age groups who experienced nuclear disasters had a higher EIS for R/N disasters, and the result was comparable to that of previous reports about firefighters in nuclear disaster areas. This finding indicated that experiences with disasters might have a positive influence on intention among not only firefighters, but also DMAT members. As shown in Table 2, the EIS of CBRNE disasters was significantly lower than that of natural or human-made disasters. Moreover, the SD of the EIS in each CBRNE disaster was higher than that of natural or human-made disasters. This result indicates that the intention to engage in CBRNE disasters varies according to the member’s experience, knowledge, or skill; however, difference in the intention to engage in natural and human-made disasters appears to be less. According to the free answer, there was a trend for the reasons for not willing to engage in disaster activities. For natural and human-made disasters, the main reasons were safety and leaving family members behind in case of critical injuries or death. However, the main reasons for not willing to engage in CBRN disasters were lack of knowledge and skills, anxiety, and fear since these disasters cannot be prevented. The reasons for not willing to engage in E disasters slightly differed, and the most common reason was the lack of assurance regarding safety from the effects of second or third impact. Thus, DMAT members must have appropriate knowledge and skills to help improve their intention to engage in CBRNE disaster activities. Biological disasters had the lowest EIS. This result may be attributed to the recent COVID-19 pandemic. 
That is, it is similar to a previous survey, which reported that there is a trend toward lower intention to engage in situations involving communicable diseases compared with R/N hazards.7 Japan experienced large C, R/N, or E disasters, such as the Tokyo subway sarin attack (1995), FDNPP accident (2011), and atomic bomb explosion during World War II (1945).5,15,16 By contrast, within the previous decades, there was no significant B disaster, except the COVID-19 pandemic. Moreover, lack of knowledge to disaster-related matters may result in an increase in the DMAT members’ perception of risks, which may consequently lead to avoidance in engaging to disaster activities.17 Based on these aspects, the fact of unknown might lead to a lack of disaster response image for this study population, and this might affect the result of low EIS in B disasters. In the younger group (age: ≤39 years), the EIS was significantly higher in Group A than in Group N for all CBRNE disasters. It is easy to imagine that those who have experienced R/N disasters (Group A) have a higher EIS for R/N disasters. However, the result showed that they also had a higher EIS for C, B, and E disasters. In other words, the results indicated that the experience with R/N disasters had a positive impact on the intention to engage in other CBRNE disasters. Thus, an experience with one specific disaster may help develop adaptability or gain confidence to face other specific disasters in general, thereby improving one’s intention to engage to such disasters. In this study, the reason for the abovementioned phenomenon has not been validated. However, authors have considered the high-risk perception of young-aged Japanese population. A previous study has revealed that there is a relationship between behavior and risk perception.18 The latter is strongly influenced by not only experience, but also culture or nationality, and Japanese are known to have a higher-risk perception than other nationalities.19,20 Moreover, age affects risk perception.19 Individuals may perceive CBRNE disasters as similar based on experiences with a certain type of CBRNE disaster at a young age. Moreover, they can lower their risk perception for other CBRNE disasters. By contrast, the EIS for R/N disasters alone in older participants was significantly higher in Group A than in Group N, and there was no difference in the EIS for other CBRNE disasters between the two older groups. In the sub-analysis comparing the EIS of younger participants in Group A and older participants in Group N, there was no significant difference according to all types of disasters. Group A experienced R/N disasters at a young age, indicating that they acquired knowledge and experience at an early stage, which could have developed over a long period of time. As shown in Table 1, the background characteristics significantly differed in terms of occupation and natural disaster activity experience in each group. This study compared the EIS between the two groups according to each factor because it is impossible to exclude background characteristics if a simultaneous comparison of the four groups is performed. Younger participants in Group N had a low EIS for CBRNE disasters. Thus, EIS increases with age and experience. Assessing what they have gained through aging or experience will help to identify the important points for interventions to elevate the EIS of individuals who engage in disaster activities in the future. 
There are several possible factors with consideration of age and experience. These include acquisition of knowledge and skills or interest. However, further research on specific measures should be conducted. By efficiently and accurately addressing these factors, the number of people who will engage in disaster activities will increase. Then, this will lead to a stable supply of human resources in the future. Limitations: The website URL of the questionnaire was sent to the registered e-mail addresses in the mailing list. However, whether all DMAT members have received the e-mail was not validated. Moreover, some people might have registered more than one e-mail address. Meanwhile, others might not have received the e-mail due to changes in address. Therefore, the actual collection rate was not verified. This survey was conducted on DMAT members from two areas only. Thus, DMAT members from other areas or individuals with other occupations that might involve engagement in various disasters were not included. Hence, further surveys must be conducted. Conclusion: Japanese DMAT members had a low intention to engage in CBRNE disaster activities compared with natural and human-made disaster activities. To respond smoothly to specific disasters, measures to efficiently improve the intention to engage in CBRNE disaster activities are required.
Background: Different disaster activities should be performed smoothly. In relation to this, human resources for disaster activities must be secured. To achieve a stable supply of human resources, it is essential to improve the intentions of individuals responding to each type of disaster. However, the current intention of Disaster Medical Assistance Team (DMAT) members has not yet been assessed. Methods: An anonymous web questionnaire survey was conducted. Japanese DMAT members in the nuclear disaster-affected area (Group A; n = 79) and the non-affected area (Group N; n = 99) were included in the analysis. The outcome was the answer to the following question: "Will you actively engage in activities during natural, human-made, and chemical (C), biological (B), radiological/nuclear (R/N), and explosive (E) (CBRNE) disasters?" Then, questionnaire responses were compared according to disaster type. Results: The intention to engage in C (50), B (47), R/N (58), and E (52) disasters was significantly lower than that in natural (82) and human-made (82) disasters (P <.001). The intention to engage in CBRNE disasters among younger participants (age ≤39 years) was significantly higher in Group A than in Group N. By contrast, the intention to engage in R/N disasters alone among older participants (age ≥40 years) was higher in Group A than in Group N. However, there was no difference between the two groups in terms of intention to engage in C, B, and E disasters. Moreover, the intention to engage in all disasters between younger and older participants in Group A did not differ. In Group N, older participants had a significantly higher intention to engage in B and R/N disasters. Conclusions: Experience with a specific type of calamity at a young age may improve intention to engage in not only disasters encountered, but also other types. In addition, the intention to engage in CBRNE disasters improved with age in the non-experienced population. To respond smoothly to specific disasters in the future, measures must be taken to improve the intention to engage in CBRNE disasters among DMAT members.
Introduction: In patients with critical conditions, the initial response of the rapid response team or medical emergency team is the most important factor correlated with prognosis.1–3 In recent years, people have sustained injuries caused by different types of disasters, which can be classified as natural (ie, earthquakes), human-made (eg, transport accidents), and specific (ie, coronavirus disease 2019 [COVID-19] and chemical terrorism). Hence, the need to manage these disasters is increasing.4 Among them, chemical (C), biological (B), radiological (R), nuclear (N), and explosive (E) (CBRNE) disasters are considered specific. In such disasters, a rapid and smooth response is required to save the lives of patients. However, in the Fukushima Daiichi Nuclear Power Plant (FDNPP; Ōkuma, Fukushima, Japan) accident (2011), one of the most well-known radiological/nuclear (R/N) disasters, it was challenging to smoothly run disaster response activities at all times.5 Therefore, when providing medical treatment in areas with various hazards, measures should be taken in advance to facilitate disaster activities. With consideration of factors that can prevent a smooth response to CBRNE disasters, the lack of human resources is a major concern. Disaster responders have a low intention to engage in specific disaster activities. Some surveys have shown that even individuals who are willing to respond to natural hazards avoid involvement in nuclear disasters or those involving communicable diseases due to anxiety and lack of knowledge.6–8 Hence, this is a major cause for the lack of human resources and is associated with difficulties in facilitating CBRNE disaster activities. A previous study revealed that factors such as self-confidence, incentives, and family understanding affect the intention of firefighters to engage in nuclear disaster activities.9 However, the current intention of all medical responders to participate in CBRNE disaster activities has not been fully elucidated. In Japan, the Great Hanshin earthquake of 1995 has led to the development of a disaster medical system. Moreover, the Disaster Medical Assistance Team (DMAT), which responds to various disasters, has been established. The DMAT comprises physicians, nurses, and logisticians, as defined in the Basic Disaster Management Plan based on the Japan’s Disaster Countermeasures Basic Act.10,11 Japanese DMAT members can decide whether or not to respond when they receive dispatch requests. On the other hand, to date, the team plays a major role in large-scale disasters in Japan, and its members are the most important disaster medical responders in Japan. Therefore, a survey about the intention of DMAT members to engage in short-term CBRNE disaster activities must be conducted to facilitate a smooth response. This study aimed to conduct a web questionnaire survey among DMAT members from two different areas (one with nuclear disaster experience and the other without). To smoothen each specific disaster response, the current intentions of DMAT members to engage in CBRNE disaster activities were examined. Moreover, future measures that can improve such intentions were evaluated. Conclusion: Japanese DMAT members had a low intention to engage in CBRNE disaster activities compared with natural and human-made disaster activities. To respond smoothly to specific disasters, measures to efficiently improve the intention to engage in CBRNE disaster activities are required.
Background: Different disaster activities should be performed smoothly. In relation to this, human resources for disaster activities must be secured. To achieve a stable supply of human resources, it is essential to improve the intentions of individuals responding to each type of disaster. However, the current intention of Disaster Medical Assistance Team (DMAT) members has not yet been assessed. Methods: An anonymous web questionnaire survey was conducted. Japanese DMAT members in the nuclear disaster-affected area (Group A; n = 79) and the non-affected area (Group N; n = 99) were included in the analysis. The outcome was the answer to the following question: "Will you actively engage in activities during natural, human-made, and chemical (C), biological (B), radiological/nuclear (R/N), and explosive (E) (CBRNE) disasters?" Then, questionnaire responses were compared according to disaster type. Results: The intention to engage in C (50), B (47), R/N (58), and E (52) disasters was significantly lower than that in natural (82) and human-made (82) disasters (P <.001). The intention to engage in CBRNE disasters among younger participants (age ≤39 years) was significantly higher in Group A than in Group N. By contrast, the intention to engage in R/N disasters alone among older participants (age ≥40 years) was higher in Group A than in Group N. However, there was no difference between the two groups in terms of intention to engage in C, B, and E disasters. Moreover, the intention to engage in all disasters between younger and older participants in Group A did not differ. In Group N, older participants had a significantly higher intention to engage in B and R/N disasters. Conclusions: Experience with a specific type of calamity at a young age may improve intention to engage in not only disasters encountered, but also other types. In addition, the intention to engage in CBRNE disasters improved with age in the non-experienced population. To respond smoothly to specific disasters in the future, measures must be taken to improve the intention to engage in CBRNE disasters among DMAT members.
5,056
442
[ 24 ]
7
[ "group", "disaster", "sd", "disasters", "eis", "engage", "nuclear", "participants", "affected", "activities" ]
[ "nuclear disaster activities", "firefighters nuclear disaster", "disaster medical", "radiological nuclear disasters", "important disaster medical" ]
[CONTENT] disaster | emergency responders | hazard | human resources | intention [SUMMARY]
[CONTENT] disaster | emergency responders | hazard | human resources | intention [SUMMARY]
[CONTENT] disaster | emergency responders | hazard | human resources | intention [SUMMARY]
[CONTENT] disaster | emergency responders | hazard | human resources | intention [SUMMARY]
[CONTENT] disaster | emergency responders | hazard | human resources | intention [SUMMARY]
[CONTENT] disaster | emergency responders | hazard | human resources | intention [SUMMARY]
[CONTENT] Adult | Disaster Planning | Disasters | Humans | Intention | Medical Assistance | Surveys and Questionnaires | Workforce [SUMMARY]
[CONTENT] Adult | Disaster Planning | Disasters | Humans | Intention | Medical Assistance | Surveys and Questionnaires | Workforce [SUMMARY]
[CONTENT] Adult | Disaster Planning | Disasters | Humans | Intention | Medical Assistance | Surveys and Questionnaires | Workforce [SUMMARY]
[CONTENT] Adult | Disaster Planning | Disasters | Humans | Intention | Medical Assistance | Surveys and Questionnaires | Workforce [SUMMARY]
[CONTENT] Adult | Disaster Planning | Disasters | Humans | Intention | Medical Assistance | Surveys and Questionnaires | Workforce [SUMMARY]
[CONTENT] Adult | Disaster Planning | Disasters | Humans | Intention | Medical Assistance | Surveys and Questionnaires | Workforce [SUMMARY]
[CONTENT] nuclear disaster activities | firefighters nuclear disaster | disaster medical | radiological nuclear disasters | important disaster medical [SUMMARY]
[CONTENT] nuclear disaster activities | firefighters nuclear disaster | disaster medical | radiological nuclear disasters | important disaster medical [SUMMARY]
[CONTENT] nuclear disaster activities | firefighters nuclear disaster | disaster medical | radiological nuclear disasters | important disaster medical [SUMMARY]
[CONTENT] nuclear disaster activities | firefighters nuclear disaster | disaster medical | radiological nuclear disasters | important disaster medical [SUMMARY]
[CONTENT] nuclear disaster activities | firefighters nuclear disaster | disaster medical | radiological nuclear disasters | important disaster medical [SUMMARY]
[CONTENT] nuclear disaster activities | firefighters nuclear disaster | disaster medical | radiological nuclear disasters | important disaster medical [SUMMARY]
[CONTENT] group | disaster | sd | disasters | eis | engage | nuclear | participants | affected | activities [SUMMARY]
[CONTENT] group | disaster | sd | disasters | eis | engage | nuclear | participants | affected | activities [SUMMARY]
[CONTENT] group | disaster | sd | disasters | eis | engage | nuclear | participants | affected | activities [SUMMARY]
[CONTENT] group | disaster | sd | disasters | eis | engage | nuclear | participants | affected | activities [SUMMARY]
[CONTENT] group | disaster | sd | disasters | eis | engage | nuclear | participants | affected | activities [SUMMARY]
[CONTENT] group | disaster | sd | disasters | eis | engage | nuclear | participants | affected | activities [SUMMARY]
[CONTENT] disaster | response | activities | disasters | disaster activities | nuclear | medical | team | cbrne | japan [SUMMARY]
[CONTENT] area | affected | group | affected area | disaster | participants | non affected area | non | disaster affected | non affected [SUMMARY]
[CONTENT] sd | group | 35 | sd 35 | older | participants | disaster | younger | 05 | participants group [SUMMARY]
[CONTENT] activities | disaster activities | disaster | intention engage cbrne disaster | intention engage cbrne | cbrne disaster activities | engage cbrne disaster activities | engage cbrne | engage cbrne disaster | engage [SUMMARY]
[CONTENT] disaster | group | disasters | activities | sd | disaster activities | engage | cbrne | intention | dmat [SUMMARY]
[CONTENT] disaster | group | disasters | activities | sd | disaster activities | engage | cbrne | intention | dmat [SUMMARY]
[CONTENT] ||| ||| ||| Disaster Medical Assistance Team (DMAT [SUMMARY]
[CONTENT] ||| Japanese | DMAT | Group A | 79 | Group N | 99 ||| ||| [SUMMARY]
[CONTENT] 50 | R/N | 58 | 52 | 82 | 82 ||| CBRNE | age ≤39 years | Group A | Group N. By | Group A ||| Group N. However | two ||| Group A ||| Group N [SUMMARY]
[CONTENT] ||| CBRNE ||| CBRNE | DMAT [SUMMARY]
[CONTENT] ||| ||| ||| Disaster Medical Assistance Team (DMAT ||| ||| Japanese | DMAT | Group A | 79 | Group N | 99 ||| ||| ||| ||| 50 | R/N | 58 | 52 | 82 | 82 ||| CBRNE | age ≤39 years | Group A | Group N. By | Group A ||| Group N. However | two ||| Group A ||| Group N ||| ||| CBRNE ||| CBRNE | DMAT [SUMMARY]
[CONTENT] ||| ||| ||| Disaster Medical Assistance Team (DMAT ||| ||| Japanese | DMAT | Group A | 79 | Group N | 99 ||| ||| ||| ||| 50 | R/N | 58 | 52 | 82 | 82 ||| CBRNE | age ≤39 years | Group A | Group N. By | Group A ||| Group N. However | two ||| Group A ||| Group N ||| ||| CBRNE ||| CBRNE | DMAT [SUMMARY]
Comparison of 4 Methods for Dynamization of Locking Plates: Differences in the Amount and Type of Fracture Motion.
28657927
Decreasing the stiffness of locked plating constructs can promote natural fracture healing by controlled dynamization of the fracture. This biomechanical study compared the effect of 4 different stiffness reduction methods on interfragmentary motion by measuring axial motion and shear motion at the fracture site.
BACKGROUND
Distal femur locking plates were applied to bridge a metadiaphyseal fracture in femur surrogates. A locked construct with a short-bridge span served as the nondynamized control group (LOCKED). Four different methods for stiffness reduction were evaluated: replacing diaphyseal locking screws with nonlocked screws (NONLOCKED); bridge dynamization (BRIDGE) with 2 empty screw holes proximal to the fracture; screw dynamization with far cortical locking (FCL) screws; and plate dynamization with active locking plates (ACTIVE). Construct stiffness, axial motion, and shear motion at the fracture site were measured to characterize each dynamization methods.
METHODS
Compared with LOCKED control constructs, NONLOCKED constructs had a similar stiffness (P = 0.08), axial motion (P = 0.07), and shear motion (P = 0.97). BRIDGE constructs reduced stiffness by 45% compared with LOCKED constructs (P < 0.001), but interfragmentary motion was dominated by shear. Compared with LOCKED constructs, FCL and ACTIVE constructs reduced stiffness by 62% (P < 0.001) and 75% (P < 0.001), respectively, and significantly increased axial motion, but not shear motion.
RESULTS
In a surrogate model of a distal femur fracture, replacing locked with nonlocked diaphyseal screws does not significantly decrease construct stiffness and does not enhance interfragmentary motion. A longer bridge span primarily increases shear motion, not axial motion. The use of FCL screws or active plating delivers axial dynamization without introducing shear motion.
CONCLUSIONS
[ "Biomechanical Phenomena", "Bone Plates", "Bone Screws", "Diaphyses", "Equipment Design", "Femoral Fractures", "Fracture Fixation, Internal", "Humans", "Models, Anatomic", "Shear Strength" ]
5603978
INTRODUCTION
Angle-stable locked plating has become the standard treatment for most difficult fractures of the distal femur. Despite excellent early results, there is growing concern surrounding the relatively high nonunion rates after locked plate fixation of distal femur fractures. Most recent studies quote nonunion or fixation failure rates after locked plating of distal femur fractures of 10%–23%.1–6 There is abundant evidence that deficient fracture motion caused by overly stiff locking plates can suppress natural fracture healing, contributing to delayed unions, nonunions, and fixation failure.3,7–9 Conversely, research over the past 50 years has consistently demonstrated that controlled axial dynamization can improve the speed and strength of fracture healing by dynamically stimulating natural bone healing through callus formation.7,10–15 Two primary mechanical conditions critical for natural fracture healing have been identified: Callus formation is promoted by axial interfragmentary motion greater than 0.2 mm16,17; and fracture healing is inhibited when interfragmentary motion is dominated by shear displacement.18 Strategies aimed at altering the mechanical environment created by locked plating constructs and at promoting fracture healing by spontaneous callus formation were proposed as early as 2003.19 Four principal methods are currently promoted to reduce the stiffness of locked bridge plating constructs: diaphyseal fixation with nonlocking screws rather than locking screws; increasing the length of the bridge spanning the fracture zone19; screw dynamization with far cortical locking (FCL) screws8; and plate dynamization with active plates that have elastically suspended locking holes.20 It is not clear which of these 4 constructs provide the best mechanical environment to achieve the goal of early fracture dynamization to promote healing while minimizing any detrimental effects from motion. This study measured construct stiffness as well as axial and shear motion at the fracture site to assess the efficacy by which each strategy can satisfy the basic conditions for mechanical stimulation of fracture healing. Specifically, this study tested the hypotheses that the 4 dynamization strategies will differ in their efficacy to decrease construct stiffness, to increase interfragmentary axial motion, and to prevent excessive shear motion.
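The two mechanical conditions cited above lend themselves to a simple screening rule: axial interfragmentary motion above 0.2 mm is taken to stimulate callus formation, whereas motion dominated by shear is taken to inhibit healing. The sketch below is purely our illustration of that rule, interpreting "dominated by shear" as shear exceeding axial motion; it is not part of the study's methods, and the threshold is the literature value quoted in the text.

```python
# Illustrative sketch of the healing conditions quoted in the Introduction
# (assumption: "dominated by shear" is interpreted as shear > axial motion).
AXIAL_CALLUS_THRESHOLD_MM = 0.2  # axial motion > 0.2 mm promotes callus

def classify_interfragmentary_motion(axial_mm: float, shear_mm: float) -> str:
    """Screen a measured motion pair against the stated healing conditions."""
    if shear_mm > axial_mm:
        return "shear-dominated: expected to inhibit healing"
    if axial_mm > AXIAL_CALLUS_THRESHOLD_MM:
        return "axial motion > 0.2 mm: expected to stimulate callus"
    return "axial motion <= 0.2 mm: little mechanical stimulation"

print(classify_interfragmentary_motion(axial_mm=0.10, shear_mm=0.05))  # stiff construct
print(classify_interfragmentary_motion(axial_mm=0.45, shear_mm=0.06))  # dynamized construct
```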
null
null
RESULTS
Construct Stiffness There was no significant difference between the stiffness of LOCKED control constructs (2998 ± 361 N/mm) and NONLOCKED constructs (2549 ± 355 N/mm, P = 0.08) (see Table, Supplemental Digital Content 2, http://links.lww.com/BOT/A994, summarizing construct stiffness and interfragmentary motion). However, compared with the LOCKED control group, BRIDGE group constructs had a 45% lower stiffness (P < 0.001), FCL group constructs had a 62% lower stiffness (P < 0.001), and ACTIVE group constructs had a 75% lower stiffness (P < 0.001) (Fig. 3). Construct stiffness achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct. Axial Interfragmentary Motion In each group, near cortex motion dA, NC was smaller than far cortex motion dA, FC (Fig. 4). Near cortex motion in response to one body weight loading (700 N) was the same for LOCKED control constructs and NONLOCKED constructs (0.10 ± 0.02 mm, P = 0.85) and remained below the 0.2-mm axial motion threshold required for callus stimulation. Compared with the LOCKED control group, dA, NC was 2 times greater in the BRIDGE group (P < 0.001), over 4 times greater in the FCL group (P < 0.001), and over 7 times greater in the ACTIVE group (P < 0.001). Similarly, far cortex motion was not significantly different between the LOCKED control group (0.37 ± 0.04 mm) and the NONLOCKED group (0.46 ± 0.08 mm) (P = 0.07). However, compared with the LOCKED control group, dA, FC was 73% greater in the BRIDGE group (P < 0.001), 105% greater in the FCL group (P < 0.001), and 303% greater in the ACTIVE group (P < 0.001). Axial motion at the near and far cortex, achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct. Shear Motion Shear motion dS remained below 0.2 mm in all groups except in the BRIDGE group, which exhibited on average 0.96 ± 0.14 mm shear motion in response to one body weight loading (Fig. 5A). In BRIDGE constructs, this magnitude of shear motion was 50% greater than the corresponding far cortex motion and over 3 times greater than the near cortex motion. Using image analysis, shear-dominant motion in BRIDGE constructs was attributed to rotation of the femoral diaphysis around the proximal plate segment because of plate bending under axial loading, which caused the proximal osteotomy surface to be translated toward the locking plate (Fig. 5B). Resulting transverse shear, resulting from the 4 strategies for plate dynamization, relative to the LOCKED control construct.
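For orientation, the percentage differences quoted in this Results section are plain relative changes with respect to the LOCKED control mean. The sketch below shows the arithmetic; only the LOCKED and NONLOCKED means are reported explicitly, so the means implied for the other groups are approximate back-calculations, not measured values.

```python
# Sketch: relative changes versus the LOCKED control, as quoted in Results.
def pct_change(value: float, reference: float) -> float:
    """Percent difference of `value` relative to `reference`."""
    return 100.0 * (value - reference) / reference

locked_stiffness = 2998.0     # N/mm, LOCKED control (reported mean)
nonlocked_stiffness = 2549.0  # N/mm, NONLOCKED (reported mean)
print(f"NONLOCKED vs LOCKED stiffness: "
      f"{pct_change(nonlocked_stiffness, locked_stiffness):+.0f}%")  # about -15%, n.s.

# Stiffness means implied by the reported percent reductions (approximate):
for name, reduction_pct in [("BRIDGE", 45), ("FCL", 62), ("ACTIVE", 75)]:
    print(f"{name}: ~{locked_stiffness * (1 - reduction_pct / 100):.0f} N/mm")

# Same arithmetic for far-cortex axial motion (0.37 mm vs 0.46 mm, n.s.):
print(f"NONLOCKED vs LOCKED dA,FC: {pct_change(0.46, 0.37):+.0f}%")
```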
null
null
[ "Specimens", "Loading", "Outcome Assessment", "Statistical Analysis", "Construct Stiffness", "Axial Interfragmentary Motion", "Shear Motion" ]
[ "Plating constructs were evaluated in fourth generation femur surrogate specimens made of fiber-reinforced epoxy composite (#3406, large size; Sawbones, Vashon, WA) to minimize interspecimen variability. An unstable distal femur fracture (AO/Orthopaedic Trauma Association 33-A3) was modeled by introducing a 10-mm gap osteotomy located 60 mm proximal to the intercondylar notch.21,22 This gap osteotomy simulated the biomechanical constraints of a comminuted fracture that relies on full-load transfer through the bridge plating construct because of a lack of bony apposition at the fracture site. In the LOCKED control group, this gap osteotomy was stabilized with a 286-mm long distal femur plate (ZPLP; Zimmer, Warsaw, IN) made of stainless steel. The plate had 13 holes for diaphyseal fixation, 7 of which were locking holes (Fig. 2A). The diaphyseal plate segment was applied using 3 evenly spaced 4.5-mm locking screws placed in the first, fourth, and seventh locking hole from the fracture site, resulting in a short-bridge span of 25 mm. A plate elevation of 1 mm over the proximal diaphysis was achieved with temporary spacers to simulate biological fixation with preservation of periosteal perfusion.23 The distal plate segment was applied to the metaphysis using six 5.5-mm cannulated locking screws in accordance with the manufacturer's technique guide. All screws were tightened to 4 Nm.\nA, Distal femur plate with alternating locked and nonlocked holes; (B) 3 distinct screws used with standard femur plate; (C and D) active plate with screw holes located in elastically suspended sliding elements. Editor's Note: A color image accompanies the online version of this article.\nSubsequently, 4 additional construct configurations were assembled by changing one variable of the LOCKED control group constructs at a time: For constructs of the NONLOCKED group, nonlocking screws were used in place of locking screws for diaphyseal fixation, using the first, fourth, and sixth nonlocking hole from the fracture site (Fig. 2B). Because of the alternating locking/nonlocking screw hole configuration of this plate, the nonlocking construct had an intermediate bridge span of 40 mm. BRIDGE group constructs used a longer bridge span (87 mm) than LOCKED control group constructs (25 mm) by placing diaphyseal locking screws in the third, fifth, and seventh locking hole from the fracture site. FCL group constructs replaced the 3 diaphyseal locking screws of LOCKED control group constructs with 3 FCL screws (4.5 mm MotionLoc; Zimmer) made of stainless steel. FCL screws rigidly lock into the plate and the far cortex, but they are not rigidly constrained in the near cortex underlying the plate. The elastic shaft of FCL screws can flex within the near cortex motion envelope to generate symmetric interfragmentary motion.8 ACTIVE group constructs had a screw configuration identical to that of the LOCKED control group, but used an active locking plate. Screw holes of active locking plates are integrated into individual sliding elements that are elastically suspended in a silicone envelope inside lateral plate pockets (Fig. 2C). Lateral pockets are arranged in an alternating pattern from both plate sides, resulting in a staggered locking hole configuration. 
The pocket geometry combined with the silicone suspension allows controlled axial translation, which enables up to 1.5 mm of axial motion across a fracture while providing stable fixation in response to bending and torsional loading.24 The silicone suspension consisted of long-term implantable medical-grade silicone elastomer. The active locking plate was made of stainless steel and was geometrically equivalent to the standard locking plate of the LOCKED control group (Fig. 2D). Five specimens of each of the 5 constructs were tested for reproducibility, requiring a total of 25 construct tests.", "For stiffness assessment, constructs were tested under quasi-physiological loading in a material test system according to an established loading protocol (see Figure, Supplemental Digital Content 1, http://links.lww.com/BOT/A993, describing specimen loading and outcome assessment).21,25 The femoral condyles were embedded in a mounting fixture using bone cement and were rigidly connected to the base of the test system (8874, Instron, Canton, MA). The metaphyseal plate segment was coated with soft clay to prevent nonphysiologic plate constraints. The femoral head was placed in a spherical recess of a polymer block that was attached to the test system actuator. This enabled axial load application while allowing unconstrained rotation of the femoral head. Load was induced along the mechanical axis of the femur, with the load vector intersecting the femoral head and the epicondylar center. Each construct was loaded in 50-N increments up to 700 N, corresponding to approximately one body weight.", "Constructs were characterized by determining their construct stiffness and interfragmentary motion using noncontact optical photogrammetry. For this purpose, an array of 4 active luminescent markers consisting of miniature light-emitting diodes was glued to the osteotomy surfaces. An 18 megapixel digital camera (Canon EOS T6) captured the marker locations with a resolution of 0.01 mm after each incremental loading step. ImageJ quantitative image analysis software developed by the National Institutes of Health (www.imagej.net) was used to extract marker displacement and to calculate the average axial motion dA and shear motion dS between osteotomy surfaces in response to incremental load steps. Because plate bending induces different amounts of axial motion at the near cortex and far cortex,26 axial motion dA was extracted individually for the near cortex (dA, NC) from markers 1 and 3, and for the far cortex (dA, FC) from markers 2 and 4, as depicted in Supplemental Digital Content 1 (see Figure, http://links.lww.com/BOT/A993). Construct stiffness SC was calculated by dividing the applied axial load by the axial motion dA at the midpoint between the near and far cortex, with dA = (dA, FC + dA, NC)/2.", "All results are reported as their mean and SD. Construct stiffness SC and interfragmentary motion parameters dS, dA, NC, and dA, FC of the 4 experimental groups were statistically compared with the LOCKED control group results using one-way analysis of variance testing including a post hoc Tukey honest significant difference (HSD) test to identify significant differences. 
Each outcome parameter was analyzed individually, and a level of significance of α = 0.05 was used to detect significant differences.", "There was no significant difference between the stiffness of LOCKED control constructs (2998 ± 361 N/mm) and NONLOCKED constructs (2549 ± 355 N/mm, P = 0.08) (see Table, Supplemental Digital Content 2, http://links.lww.com/BOT/A994, summarizing construct stiffness and interfragmentary motion). However, compared with the LOCKED control group, BRIDGE group constructs had a 45% lower stiffness (P < 0.001), FCL group constructs had a 62% lower stiffness (P < 0.001), and ACTIVE group constructs had a 75% lower stiffness (P < 0.001) (Fig. 3).\nConstruct stiffness achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct.", "In each group, near cortex motion dA, NC was smaller than far cortex motion dA, FC (Fig. 4). Near cortex motion in response to one body weight loading (700 N) was the same for LOCKED control constructs and NONLOCKED constructs (0.10 ± 0.02 mm, P = 0.85) and remained below the 0.2-mm axial motion threshold required for callus stimulation. Compared with the LOCKED control group, dA, NC was 2 times greater in the BRIDGE group (P < 0.001), over 4 times greater in the FCL group (P < 0.001), and over 7 times greater in the ACTIVE group (P < 0.001). Similarly, far cortex motion was not significantly different between the LOCKED control group (0.37 ± 0.04 mm) and the NONLOCKED group (0.46 ± 0.08 mm) (P = 0.07). However, compared with the LOCKED control group, dA, FC was 73% greater in the BRIDGE group (P < 0.001), 105% greater in the FCL group (P < 0.001), and 303% greater in the ACTIVE group (P < 0.001).\nAxial motion at the near and far cortex, achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct. Editor's Note: A color image accompanies the online version of this article.", "Shear motion dS remained below 0.2 mm in all groups except in the BRIDGE group, which exhibited on average 0.96 ± 0.14 mm shear motion in response to one body weight loading (Fig. 5A). In BRIDGE constructs, this magnitude of shear motion was 50% greater than the corresponding far cortex motion and over 3 times greater than the near cortex motion. Using image analysis, shear-dominant motion in BRIDGE constructs was attributed to rotation of the femoral diaphysis around the proximal plate segment because of plate bending under axial loading, which caused the proximal osteotomy surface to be translated toward the locking plate (Fig. 5B).\nResulting transverse shear, resulting from the 4 strategies for plate dynamization, relative to the LOCKED control construct. Editor's Note: A color image accompanies the online version of this article." ]
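The outcome computation described under "Outcome Assessment" reduces to a few lines of bookkeeping: relative marker displacement across the osteotomy gives near- and far-cortex axial motion (markers 1 and 3, and markers 2 and 4, respectively), shear is the transverse component, and construct stiffness is the applied load divided by the mid-cortex axial motion, dA = (dA,FC + dA,NC)/2. The sketch below is our reconstruction of that calculation; the marker pairing, coordinate convention, and numerical example are assumptions for illustration, not the authors' ImageJ workflow.

```python
# Sketch of the outcome computation: axial/shear interfragmentary motion from
# four osteotomy markers and construct stiffness SC = load / dA, with
# dA = (dA,FC + dA,NC) / 2. Marker pairing and coordinates are assumed.
import numpy as np

def interfragmentary_motion(markers_unloaded, markers_loaded):
    """Return (dA_NC, dA_FC, dA, dS) in mm from (4, 2) arrays of
    (axial, transverse) marker positions; markers 1 and 3 span the gap at
    the near cortex, markers 2 and 4 at the far cortex."""
    d = np.asarray(markers_loaded, float) - np.asarray(markers_unloaded, float)
    near = d[2] - d[0]                  # marker 3 relative to marker 1
    far = d[3] - d[1]                   # marker 4 relative to marker 2
    dA_nc, dA_fc = abs(near[0]), abs(far[0])
    dS = abs((near[1] + far[1]) / 2.0)  # mean transverse (shear) component
    dA = (dA_nc + dA_fc) / 2.0
    return dA_nc, dA_fc, dA, dS

def construct_stiffness(load_n, dA_mm):
    """SC: applied axial load divided by mid-cortex axial motion."""
    return load_n / dA_mm

# Hypothetical numbers at one body weight (700 N):
unloaded = [[0.0, 0.0], [0.0, 0.0], [10.0, 0.0], [10.0, 0.0]]
loaded = [[0.02, 0.01], [0.05, 0.01], [10.14, 0.06], [10.48, 0.07]]
dA_nc, dA_fc, dA, dS = interfragmentary_motion(unloaded, loaded)
print(f"dA,NC={dA_nc:.2f} mm  dA,FC={dA_fc:.2f} mm  shear={dS:.2f} mm  "
      f"SC={construct_stiffness(700.0, dA):.0f} N/mm")
```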
[ "INTRODUCTION", "MATERIAL AND METHODS", "Specimens", "Loading", "Outcome Assessment", "Statistical Analysis", "RESULTS", "Construct Stiffness", "Axial Interfragmentary Motion", "Shear Motion", "DISCUSSION", "Supplementary Material" ]
[ "Angle-stable locked plating has become the standard treatment for most difficult fractures of the distal femur. Despite excellent early results, there is growing concern surrounding the relatively high nonunion rates after locked plate fixation of distal femur fractures. Most recent studies quote nonunion or fixation failure rates after locked plating of distal femur fractures of 10%–23%.1–6 There is abundant evidence that deficient fracture motion caused by overly stiff locking plates can suppress natural fracture healing, contributing to delayed unions, nonunions, and fixation failure.3,7–9 Conversely, research over the past 50 years has consistently demonstrated that controlled axial dynamization can improve the speed and strength of fracture healing by dynamically stimulating natural bone healing through callus formation.7,10–15\nTwo primary mechanical conditions critical for natural fracture healing have been identified: Callus formation is promoted by axial interfragmentary motion greater than 0.2 mm16,17; and fracture healing is inhibited when interfragmentary motion is dominated by shear displacement.18 Strategies aimed at altering the mechanical environment created by locked plating constructs and at promoting fracture healing by spontaneous callus formation were proposed as early as 2003.19\nFour principal methods are currently promoted to reduce the stiffness of locked bridge plating constructs: diaphyseal fixation with nonlocking screws rather than locking screws; increasing the length of the bridge spanning the fracture zone19; screw dynamization with far cortical locking (FCL) screws8; and plate dynamization with active plates that have elastically suspended locking holes.20 It is not clear which of these 4 constructs provide the best mechanical environment to achieve the goal of early fracture dynamization to promote healing while minimizing any detrimental effects from motion.\nThis study measured construct stiffness as well as axial and shear motion at the fracture site to assess the efficacy by which each strategy can satisfy the basic conditions for mechanical stimulation of fracture healing. Specifically, this study tested the hypotheses that the 4 dynamization strategies will differ in their efficacy to decrease construct stiffness, to increase interfragmentary axial motion, and to prevent excessive shear motion.", "In a biomechanical bench-top study, periarticular locking plates were applied to bridge a metadiaphyseal fracture in femur surrogates. Construct stiffness was assessed under quasi-physiologic loading by measuring the resulting axial and shear motion at the fracture site. In the LOCKED control group, the periarticular plate was applied to the diaphysis with bicortical locking screws. The first locking screw was placed adjacent to the fracture to achieve a short-bridge span. Subsequently, 4 different strategies to decrease construct stiffness and to dynamize the fracture site were evaluated (Fig. 1): replacing diaphyseal locking screws with nonlocking screws (NONLOCKED group); bridge dynamization (BRIDGE group) by increasing the bridging span with locked screws in the diaphysis19; screw dynamization with FCL screws (FCL group)8; and plate dynamization with active locking plates (ACTIVE group).10,20 Construct stiffness was characterized by measuring the construct deformation in response to quasi-physiologic loading. Dynamization of the fracture site was characterized by measuring the interfragmentary motion in axial and shear direction. 
Finally, the stiffness and interfragmentary motion results of the 4 strategies for stiffness reduction were compared with the LOCKED control group to determine their effectiveness in dynamizing the fracture site.\nStrategies to dynamize a locked plating construct (LOCKED) for distal femur fractures. Editor's Note: A color image accompanies the online version of this article.\n Specimens Plating constructs were evaluated in fourth generation femur surrogate specimens made of fiber-reinforced epoxy composite (#3406, large size; Sawbones, Vashon, WA) to minimize interspecimen variability. An unstable distal femur fracture (AO/Orthopaedic Trauma Association 33-A3) was modeled by introducing a 10-mm gap osteotomy located 60 mm proximal to the intercondylar notch.21,22 This gap osteotomy simulated the biomechanical constraints of a comminuted fracture that relies on full-load transfer through the bridge plating construct because of a lack of bony apposition at the fracture site. In the LOCKED control group, this gap osteotomy was stabilized with a 286-mm long distal femur plate (ZPLP; Zimmer, Warsaw, IN) made of stainless steel. The plate had 13 holes for diaphyseal fixation, 7 of which were locking holes (Fig. 2A). The diaphyseal plate segment was applied using 3 evenly spaced 4.5-mm locking screws placed in the first, fourth, and seventh locking hole from the fracture site, resulting in a short-bridge span of 25 mm. A plate elevation of 1 mm over the proximal diaphysis was achieved with temporary spacers to simulate biological fixation with preservation of periosteal perfusion.23 The distal plate segment was applied to the metaphysis using six 5.5-mm cannulated locking screws in accordance with the manufacturer's technique guide. All screws were tightened to 4 Nm.\nA, Distal femur plate with alternating locked and nonlocked holes; (B) 3 distinct screws used with standard femur plate; (C and D) active plate with screw holes located in elastically suspended sliding elements. Editor's Note: A color image accompanies the online version of this article.\nSubsequently, 4 additional construct configurations were assembled by changing one variable of the LOCKED control group constructs at a time: For constructs of the NONLOCKED group, nonlocking screws were used in place of locking screws for diaphyseal fixation, using the first, fourth, and sixth nonlocking hole from the fracture site (Fig. 2B). Because of the alternating locking/nonlocking screw hole configuration of this plate, the nonlocking construct had an intermediate bridge span of 40 mm. BRIDGE group constructs used a longer bridge span (87 mm) than LOCKED control group constructs (25 mm) by placing diaphyseal locking screws in the third, fifth, and seventh locking hole from the fracture site. FCL group constructs replaced the 3 diaphyseal locking screws of LOCKED control group constructs with 3 FCL screws (4.5 mm MotionLoc; Zimmer) made of stainless steel. FCL screws rigidly lock into the plate and the far cortex, but they are not rigidly constrained in the near cortex underlying the plate. The elastic shaft of FCL screws can flex within the near cortex motion envelope to generate symmetric interfragmentary motion.8 ACTIVE group constructs had a screw configuration identical to that of the LOCKED control group, but used an active locking plate. Screw holes of active locking plates are integrated into individual sliding elements that are elastically suspended in a silicone envelope inside lateral plate pockets (Fig. 2C). 
Lateral pockets are arranged in an alternating pattern from both plate sides, resulting in a staggered locking hole configuration. The pocket geometry combined with the silicone suspension allows controlled axial translation, which enables up to 1.5 mm of axial motion across a fracture while providing stable fixation in response to bending and torsional loading.24 The silicone suspension consisted of long-term implantable medical-grade silicone elastomer. The active locking plate was made of stainless steel and was geometrically equivalent to the standard locking plate of the LOCKED control group (Fig. 2D). Five specimens of each of the 5 constructs were tested for reproducibility, requiring a total of 25 construct tests.
 Loading For stiffness assessment, constructs were tested under quasi-physiological loading in a material test system according to an established loading protocol (see Figure, Supplemental Digital Content 1, http://links.lww.com/BOT/A993, describing specimen loading and outcome assessment).21,25 The femoral condyles were embedded in a mounting fixture using bone cement and were rigidly connected to the base of the test system (8874, Instron, Canton, MA). The metaphyseal plate segment was coated with soft clay to prevent nonphysiologic plate constraints. The femoral head was placed in a spherical recess of a polymer block that was attached to the test system actuator. This enabled axial load application while allowing unconstrained rotation of the femoral head. Load was induced along the mechanical axis of the femur, with the load vector intersecting the femoral head and the epicondylar center. Each construct was loaded in 50-N increments up to 700 N, corresponding to approximately one body weight.
 Outcome Assessment Constructs were characterized by determining their construct stiffness and interfragmentary motion using noncontact optical photogrammetry. For this purpose, an array of 4 active luminescent markers consisting of miniature light emitting diodes was glued to the osteotomy surfaces. An 18 megapixel digital camera (Canon EOS T6) captured the marker locations with a resolution of 0.01 mm after each incremental loading step. ImageJ quantitative image analysis software developed by the National Institutes of Health (www.imagej.net) was used to extract marker displacement and to calculate the average axial motion dA and shear motion dS between osteotomy surfaces in response to incremental load steps. Because plate bending induces different amounts of axial motion at the near cortex and far cortex,26 axial motion dA was extracted individually for the near cortex (dA, NC) from markers 1 and 3, and for the far cortex (dA, FC) from markers 2 and 4, as depicted in Supplemental Digital Content 1 (see Figure, http://links.lww.com/BOT/A993). Construct stiffness SC was calculated by dividing the applied axial load by the axial motion dA at the midpoint between the near and far cortex, with dA = (dA, FC + dA, NC)/2 (see the illustrative code sketch below).
 Statistical Analysis All results are reported as their mean and SD. Construct stiffness SC and the interfragmentary motion parameters dS, dA, NC, and dA, FC of the 4 experimental groups were statistically compared with the LOCKED control group results using one-way analysis of variance, followed by post hoc Tukey honest significant difference (HSD) testing to identify significant differences (see the statistical sketch below). Each outcome parameter was analyzed individually, and a level of significance of α = 0.05 was used to detect significant differences.", " Construct Stiffness There was no significant difference between the stiffness of LOCKED control constructs (2998 ± 361 N/mm) and NONLOCKED constructs (2549 ± 355 N/mm, P = 0.08) (see Table, Supplemental Digital Content 2, http://links.lww.com/BOT/A994, summarizing construct stiffness and interfragmentary motion). However, compared with the LOCKED control group, BRIDGE group constructs had a 45% lower stiffness (P < 0.001), FCL group constructs had a 62% lower stiffness (P < 0.001), and ACTIVE group constructs had a 75% lower stiffness (P < 0.001) (Fig. 3).
Construct stiffness achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct.
 Axial Interfragmentary Motion In each group, near cortex motion dA, NC was smaller than far cortex motion dA, FC (Fig. 4). Near cortex motion in response to one body weight loading (700 N) was the same for LOCKED control constructs and NONLOCKED constructs (0.10 ± 0.02 mm, P = 0.85) and remained below the 0.2-mm axial motion threshold required for callus stimulation. Compared with the LOCKED control group, dA, NC was 2 times greater in the BRIDGE group (P < 0.001), over 4 times greater in the FCL group (P < 0.001), and over 7 times greater in the ACTIVE group (P < 0.001). Similarly, far cortex motion was not significantly different between the LOCKED control group (0.37 ± 0.04 mm) and the NONLOCKED group (0.46 ± 0.08 mm) (P = 0.07). However, compared with the LOCKED control group, dA, FC was 73% greater in the BRIDGE group (P < 0.001), 105% greater in the FCL group (P < 0.001), and 303% greater in the ACTIVE group (P < 0.001).
Axial motion at the near and far cortex, achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct. Editor's Note: A color image accompanies the online version of this article.
 Shear Motion Shear motion dS remained below 0.2 mm in all groups except in the BRIDGE group, which exhibited on average 0.96 ± 0.14 mm shear motion in response to one body weight loading (Fig. 5A). In BRIDGE constructs, this magnitude of shear motion was 50% greater than the corresponding far cortex motion and over 3 times greater than the near cortex motion.
Using image analysis, shear-dominant motion in BRIDGE constructs was attributed to rotation of the femoral diaphysis around the proximal plate segment because of plate bending under axial loading, which caused the proximal osteotomy surface to be translated toward the locking plate (Fig. 5B).
Transverse shear motion resulting from the 4 strategies for plate dynamization, relative to the LOCKED control construct. Editor's Note: A color image accompanies the online version of this article.", "Results confirmed the study hypothesis by demonstrating that the 4 dynamization strategies yielded not only different amounts of construct stiffness and interfragmentary motion, but also different types of interfragmentary motion.
LOCKED group results confirmed that a locking plate with a short-bridge span results in asymmetric interfragmentary motion that is deficient for callus formation.3,13,16,26 Although one body weight loading may be excessive for early postoperative loading, it resulted in only 0.1 mm of motion at the near cortex. This motion remained below the 0.2-mm motion threshold that has been established as the lower boundary for fracture motion required to promote callus formation.17 This result is supported by several in vivo and clinical studies that demonstrate suppression of callus formation and healing at the near cortex adjacent to a locking plate.3,7,9,20 The far cortex motion was greater than measured at the near cortex, likely secondary to plate bending. Clinically, the increased far cortex motion may allow callus to form, but the repetitive bending may also play a role in eventual fatigue failure of the plate before fracture healing occurs.
NONLOCKED constructs represent an intuitive response to the stiffness concern associated with locked plating by reverting to nonlocking screws in the diaphysis. However, substituting nonlocking diaphyseal fixation had no significant effect on construct stiffness or interfragmentary motion. This may be explained by the rigid compression of the plate onto the bone surface, which is required to retain stable fixation. Compressing the plate to the bone prevents any motion at the plate–bone interface, which is a prerequisite to induce symmetric interfragmentary motion.26 In contrast to locked plating constructs, the stiffness of a nonlocked construct will gradually decay as a result of dynamic loading.27 Although this can lead to increased fracture motion over time, the resulting uncontrolled motion is not a reliable strategy for dynamization. In addition, the natural fracture healing process responds with much more robust callus formation when exposed to early motion relative to delayed motion.28
BRIDGE constructs resembled the earliest and most widely proposed strategy to dynamize locked plating constructs. In a biomechanical study, Stoffel et al19 reported in 2003 that axial stiffness of locked plating constructs was mainly influenced by their bridge span. They recommended that one or 2 holes should be omitted on each side of the fracture to allow callus formation. They found that omitting 2 holes made the construct almost twice as flexible, but also 42% less strong. This study found a 45% stiffness reduction by omitting 2 screw holes. However, the greater flexibility of the longer bridge span increased motion primarily at the far cortex, whereas near cortex motion remained deficient. In addition, the longer bridge span induced up to 3 times more shear motion than axial motion.
Although shear motion does not necessarily inhibit healing,29,30 several studies have shown that excessive or predominant shear motion will significantly delay healing.18,31 A recent study on the effect of bridge span on fracture motion also confirmed a disproportionate increase in shear motion.32 By analyzing 66 distal femur fractures stabilized with locking plates, the authors furthermore established a direct association between shear-dominated fracture motion and callus inhibition. Their findings, combined with the results of this study, question the technique of increasing the bridge span to dynamize a locked construct, because this may weaken the construct and may cause asymmetric axial motion and excessive shear motion that inhibits fracture healing.
FCL and ACTIVE group constructs reduced stiffness compared with the LOCKED control group by 62%–75% to 1130 and 759 N/mm, respectively. Nevertheless, their stiffness remained substantially higher than the stiffness range of 50–400 N/mm reported for external fixators33,34 and Ilizarov frames.35 The fact that external fixators and Ilizarov frames are established clinical tools that promote fracture healing by callus formation suggests that the stiffness reduction of FCL and ACTIVE constructs is rather conservative and does not introduce excessive dynamization. FCL and ACTIVE constructs enhanced interfragmentary motion at the near and far cortex well above the 0.2-mm threshold needed to stimulate callus formation. The highest axial motion of 1.1 mm was observed at the far cortex of ACTIVE constructs, and remained at the lower limit of the 1–4 mm motion range reported for functional bracing.36 Most importantly, FCL and ACTIVE constructs delivered dynamization that was dominated by axial motion, not shear motion. These constructs allow a screw to be placed close to the fracture site without affecting stiffness, and therefore limit the amount of shear motion possible. The controlled axial dynamization provided by FCL and ACTIVE constructs has been shown to deliver faster and stronger healing. In an ovine fracture healing study, FCL constructs induced consistent and circumferential callus bridging and yielded 157% stronger healing compared with standard locked plating.7 Clinically, a prospective study of 31 consecutive distal femur fractures stabilized with FCL constructs reported no implant or fixation failure, an average time to union of 16 weeks, and a nonunion rate of 3%.37 Similar to FCL constructs, ACTIVE plating induced 6 times more callus at 3 weeks postsurgery, and yielded 4 times stronger healing compared with rigid locked plating in an ovine fracture healing study.20 These in vivo and clinical studies of FCL and ACTIVE plating constructs demonstrated that controlled axial dynamization reliably promoted natural fracture healing.
Results of this study are limited by the use of femur surrogates. Validated surrogates were used to extract relative differences between constructs under highly reproducible test conditions.38 Because the surrogates represented a strong, nonosteoporotic femur, results may not be extrapolated to fracture fixation in the osteopenic femur. Moreover, this study only investigated construct stability in terms of stiffness and related interfragmentary motion, without loading constructs to failure to determine their strength.
The strength of the tested constructs has been evaluated in previous studies, showing that increasing the bridge span will decrease construct strength,19 whereas the strength of FCL and active plating constructs is comparable with that of standard locked plating constructs.8,24 Testing was furthermore limited to static loading and did not investigate gradual loosening or fatigue of constructs under dynamic loading. Moreover, this study was limited to a principal loading mode that combines axial compression and bending but not torsion. Only plates made of stainless steel were tested, which are approximately twice as stiff as geometrically equivalent plates made of titanium alloy. Although results of this study raise concerns about the negative effect of shear-dominated interfragmentary motion on fracture healing, this concern should be formally investigated in a future in vivo study. Most importantly, emerging implant technologies that can provide controlled dynamization will require more clinical studies to document their effect on fracture healing, and to better define the range of interfragmentary motion that will promote healing of different fracture patterns at specific fracture locations.
In conclusion, results of this study indicate that intuitive technical tricks, such as reverting to nonlocking screws or using long plates to maximize the bridge span, may not reliably achieve relative stability and adequate interfragmentary motion for promoting natural fracture healing. Conversely, engineered implant solutions in the form of FCL screws or active plates can reliably dynamize a locked plating construct to stimulate fracture healing. As such, results should encourage implant manufacturers to provide engineered solutions that reliably promote rather than potentially hinder fracture healing to avoid the need for and uncertainty of technical tricks intended to optimize construct stability.", "" ]
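To make the Outcome Assessment calculation above concrete, here is a minimal Python sketch. It assumes that marker pairs 1/3 and 2/4 sit on opposite sides of the osteotomy at the near and far cortex, respectively (our reading of the description), and that marker coordinates in millimeters have already been extracted from the unloaded and loaded photographs; the function names and example coordinates are illustrative and are not part of the authors' ImageJ workflow.

```python
import numpy as np

# Hedged sketch of the photogrammetric outcome calculation: the relative
# displacement of a marker pair spanning the osteotomy is split into an axial
# component (interfragmentary axial motion) and a transverse component (shear).

def pair_motion(prox_rest, dist_rest, prox_loaded, dist_loaded, axial_dir=(0.0, 1.0)):
    """Relative displacement of the proximal marker with respect to the distal
    marker between the unloaded and loaded images (coordinates in mm)."""
    rel = (np.asarray(prox_loaded) - np.asarray(prox_rest)) \
        - (np.asarray(dist_loaded) - np.asarray(dist_rest))
    axis = np.asarray(axial_dir, dtype=float)
    axis /= np.linalg.norm(axis)
    d_axial = float(np.dot(rel, axis))                      # along the chosen axis
    d_shear = float(np.linalg.norm(rel - d_axial * axis))   # transverse to it
    return d_axial, d_shear

def construct_stiffness(load_n, d_a_nc, d_a_fc):
    """SC = applied axial load / axial motion at the midpoint, dA = (dA,FC + dA,NC)/2."""
    d_a_mid = (d_a_nc + d_a_fc) / 2.0
    return load_n / d_a_mid

# Illustrative coordinates (mm), not measured data: markers 1/3 = near cortex
# pair, markers 2/4 = far cortex pair, before and after a 700-N load step.
d_a_nc, _ = pair_motion((0.0, 10.0), (0.0, 0.0), (0.02, 9.91), (0.0, 0.0))
d_a_fc, _ = pair_motion((30.0, 10.0), (30.0, 0.0), (30.05, 9.62), (30.0, 0.0))
sc = construct_stiffness(700.0, abs(d_a_nc), abs(d_a_fc))
print(f"dA,NC = {abs(d_a_nc):.2f} mm, dA,FC = {abs(d_a_fc):.2f} mm, SC = {sc:.0f} N/mm")
```

The axial direction is passed in explicitly because, in the actual test, axial motion is defined along the mechanical axis of the femur rather than along an image axis.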
[ "intro", "materials|methods", null, null, null, null, "results", null, null, null, "discussion", "supplementary-material" ]
[ "plate fixation", "locked plating", "dynamization", "interfragmentary motion", "stiffness", "active plating", "femur" ]
INTRODUCTION: Angle-stable locked plating has become the standard treatment for most difficult fractures of the distal femur. Despite excellent early results, there is growing concern surrounding the relatively high nonunion rates after locked plate fixation of distal femur fractures. Most recent studies quote nonunion or fixation failure rates after locked plating of distal femur fractures of 10%–23%.1–6 There is abundant evidence that deficient fracture motion caused by overly stiff locking plates can suppress natural fracture healing, contributing to delayed unions, nonunions, and fixation failure.3,7–9 Conversely, research over the past 50 years has consistently demonstrated that controlled axial dynamization can improve the speed and strength of fracture healing by dynamically stimulating natural bone healing through callus formation.7,10–15 Two primary mechanical conditions critical for natural fracture healing have been identified: Callus formation is promoted by axial interfragmentary motion greater than 0.2 mm16,17; and fracture healing is inhibited when interfragmentary motion is dominated by shear displacement.18 Strategies aimed at altering the mechanical environment created by locked plating constructs and at promoting fracture healing by spontaneous callus formation were proposed as early as 2003.19 Four principal methods are currently promoted to reduce the stiffness of locked bridge plating constructs: diaphyseal fixation with nonlocking screws rather than locking screws; increasing the length of the bridge spanning the fracture zone19; screw dynamization with far cortical locking (FCL) screws8; and plate dynamization with active plates that have elastically suspended locking holes.20 It is not clear which of these 4 constructs provide the best mechanical environment to achieve the goal of early fracture dynamization to promote healing while minimizing any detrimental effects from motion. This study measured construct stiffness as well as axial and shear motion at the fracture site to assess the efficacy by which each strategy can satisfy the basic conditions for mechanical stimulation of fracture healing. Specifically, this study tested the hypotheses that the 4 dynamization strategies will differ in their efficacy to decrease construct stiffness, to increase interfragmentary axial motion, and to prevent excessive shear motion. MATERIAL AND METHODS: In a biomechanical bench-top study, periarticular locking plates were applied to bridge a metadiaphyseal fracture in femur surrogates. Construct stiffness was assessed under quasi-physiologic loading by measuring the resulting axial and shear motion at the fracture site. In the LOCKED control group, the periarticular plate was applied to the diaphysis with bicortical locking screws. The first locking screw was placed adjacent to the fracture to achieve a short-bridge span. Subsequently, 4 different strategies to decrease construct stiffness and to dynamize the fracture site were evaluated (Fig. 1): replacing diaphyseal locking screws with nonlocking screws (NONLOCKED group); bridge dynamization (BRIDGE group) by increasing the bridging span with locked screws in the diaphysis19; screw dynamization with FCL screws (FCL group)8; and plate dynamization with active locking plates (ACTIVE group).10,20 Construct stiffness was characterized by measuring the construct deformation in response to quasi-physiologic loading. 
Dynamization of the fracture site was characterized by measuring the interfragmentary motion in axial and shear direction. Finally, the stiffness and interfragmentary motion results of the 4 strategies for stiffness reduction were compared with the LOCKED control group to determine their effectiveness in dynamizing the fracture site. Strategies to dynamize a locked plating construct (LOCKED) for distal femur fractures. Editor's Note: A color image accompanies the online version of this article. Specimens Plating constructs were evaluated in fourth generation femur surrogate specimens made of fiber-reinforced epoxy composite (#3406, large size; Sawbones, Vashon, WA) to minimize interspecimen variability. An unstable distal femur fracture (AO/Orthopaedic Trauma Association 33-A3) was modeled by introducing a 10-mm gap osteotomy located 60 mm proximal to the intercondylar notch.21,22 This gap osteotomy simulated the biomechanical constraints of a comminuted fracture that relies on full-load transfer through the bridge plating construct because of a lack of bony apposition at the fracture site. In the LOCKED control group, this gap osteotomy was stabilized with a 286-mm long distal femur plate (ZPLP; Zimmer, Warsaw, IN) made of stainless steel. The plate had 13 holes for diaphyseal fixation, 7 of which were locking holes (Fig. 2A). The diaphyseal plate segment was applied using 3 evenly spaced 4.5-mm locking screws placed in the first, fourth, and seventh locking hole from the fracture site, resulting in a short-bridge span of 25 mm. A plate elevation of 1 mm over the proximal diaphysis was achieved with temporary spacers to simulate biological fixation with preservation of periosteal perfusion.23 The distal plate segment was applied to the metaphysis using six 5.5-mm cannulated locking screws in accordance with the manufacturer's technique guide. All screws were tightened to 4 Nm. A, Distal femur plate with alternating locked and nonlocked holes; (B) 3 distinct screws used with standard femur plate; (C and D) active plate with screw holes located in elastically suspended sliding elements. Editor's Note: A color image accompanies the online version of this article. Subsequently, 4 additional construct configurations were assembled by changing one variable of the LOCKED control group constructs at a time: For constructs of the NONLOCKED group, nonlocking screws were used in place of locking screws for diaphyseal fixation, using the first, fourth, and sixth nonlocking hole from the fracture site (Fig. 2B). Because of the alternating locking/nonlocking screw hole configuration of this plate, the nonlocking construct had an intermediate bridge span of 40 mm. BRIDGE group constructs used a longer bridge span (87 mm) than LOCKED control group constructs (25 mm) by placing diaphyseal locking screws in the third, fifth, and seventh locking hole from the fracture site. FCL group constructs replaced the 3 diaphyseal locking screws of LOCKED control group constructs with 3 FCL screws (4.5 mm MotionLoc; Zimmer) made of stainless steel. FCL screws rigidly lock into the plate and the far cortex, but they are not rigidly constrained in the near cortex underlying the plate. The elastic shaft of FCL screws can flex within the near cortex motion envelope to generate symmetric interfragmentary motion.8 ACTIVE group constructs had a screw configuration identical to that of the LOCKED control group, but used an active locking plate. 
Screw holes of active locking plates are integrated into individual sliding elements that are elastically suspended in a silicone envelope inside lateral plate pockets (Fig. 2C). Lateral pockets are arranged in an alternating pattern from both plate sides, resulting in a staggered locking hole configuration. The pocket geometry combined with the silicone suspension allows controlled axial translation, which enables up to 1.5 mm of axial motion across a fracture while providing stable fixation in response to bending and torsional loading.24 The silicone suspension consisted of long-term implantable medical-grade silicone elastomer. The active locking plate was made of stainless steel and was geometrically equivalent to the standard locking plate of the LOCKED control group (Fig. 2D). Five specimens of each of the 5 constructs were tested for reproducibility requiring a total of 25 construct tests. Plating constructs were evaluated in fourth generation femur surrogate specimens made of fiber-reinforced epoxy composite (#3406, large size; Sawbones, Vashon, WA) to minimize interspecimen variability. An unstable distal femur fracture (AO/Orthopaedic Trauma Association 33-A3) was modeled by introducing a 10-mm gap osteotomy located 60 mm proximal to the intercondylar notch.21,22 This gap osteotomy simulated the biomechanical constraints of a comminuted fracture that relies on full-load transfer through the bridge plating construct because of a lack of bony apposition at the fracture site. In the LOCKED control group, this gap osteotomy was stabilized with a 286-mm long distal femur plate (ZPLP; Zimmer, Warsaw, IN) made of stainless steel. The plate had 13 holes for diaphyseal fixation, 7 of which were locking holes (Fig. 2A). The diaphyseal plate segment was applied using 3 evenly spaced 4.5-mm locking screws placed in the first, fourth, and seventh locking hole from the fracture site, resulting in a short-bridge span of 25 mm. A plate elevation of 1 mm over the proximal diaphysis was achieved with temporary spacers to simulate biological fixation with preservation of periosteal perfusion.23 The distal plate segment was applied to the metaphysis using six 5.5-mm cannulated locking screws in accordance with the manufacturer's technique guide. All screws were tightened to 4 Nm. A, Distal femur plate with alternating locked and nonlocked holes; (B) 3 distinct screws used with standard femur plate; (C and D) active plate with screw holes located in elastically suspended sliding elements. Editor's Note: A color image accompanies the online version of this article. Subsequently, 4 additional construct configurations were assembled by changing one variable of the LOCKED control group constructs at a time: For constructs of the NONLOCKED group, nonlocking screws were used in place of locking screws for diaphyseal fixation, using the first, fourth, and sixth nonlocking hole from the fracture site (Fig. 2B). Because of the alternating locking/nonlocking screw hole configuration of this plate, the nonlocking construct had an intermediate bridge span of 40 mm. BRIDGE group constructs used a longer bridge span (87 mm) than LOCKED control group constructs (25 mm) by placing diaphyseal locking screws in the third, fifth, and seventh locking hole from the fracture site. FCL group constructs replaced the 3 diaphyseal locking screws of LOCKED control group constructs with 3 FCL screws (4.5 mm MotionLoc; Zimmer) made of stainless steel. 
FCL screws rigidly lock into the plate and the far cortex, but they are not rigidly constrained in the near cortex underlying the plate. The elastic shaft of FCL screws can flex within the near cortex motion envelope to generate symmetric interfragmentary motion.8 ACTIVE group constructs had a screw configuration identical to that of the LOCKED control group, but used an active locking plate. Screw holes of active locking plates are integrated into individual sliding elements that are elastically suspended in a silicone envelope inside lateral plate pockets (Fig. 2C). Lateral pockets are arranged in an alternating pattern from both plate sides, resulting in a staggered locking hole configuration. The pocket geometry combined with the silicone suspension allows controlled axial translation, which enables up to 1.5 mm of axial motion across a fracture while providing stable fixation in response to bending and torsional loading.24 The silicone suspension consisted of long-term implantable medical-grade silicone elastomer. The active locking plate was made of stainless steel and was geometrically equivalent to the standard locking plate of the LOCKED control group (Fig. 2D). Five specimens of each of the 5 constructs were tested for reproducibility requiring a total of 25 construct tests. Loading For stiffness assessment, constructs were tested under quasi-physiological loading in a material test system according to an established loading protocol (see Figure, Supplemental Digital Content 1, http://links.lww.com/BOT/A993, describing specimen loading and outcome assessment).21,25 The femoral condyles were embedded in a mounting fixture using bone cement and were rigidly connected to the base of the test system (8874, Instron, Canton, MA). The metaphyseal plate segment was coated with soft clay to prevent nonphysiologic plate constraints. The femoral head was placed in a spherical recess of a polymer block that was attached to the test system actuator. This enabled axial load application while allowing unconstrained rotation of the femoral head. Load was induced along the mechanical axis of the femur, with the load vector intersecting the femoral head and the epicondylar center. Each construct was loaded in 50-N increments up to 700 N, corresponding to approximately one body weight. For stiffness assessment, constructs were tested under quasi-physiological loading in a material test system according to an established loading protocol (see Figure, Supplemental Digital Content 1, http://links.lww.com/BOT/A993, describing specimen loading and outcome assessment).21,25 The femoral condyles were embedded in a mounting fixture using bone cement and were rigidly connected to the base of the test system (8874, Instron, Canton, MA). The metaphyseal plate segment was coated with soft clay to prevent nonphysiologic plate constraints. The femoral head was placed in a spherical recess of a polymer block that was attached to the test system actuator. This enabled axial load application while allowing unconstrained rotation of the femoral head. Load was induced along the mechanical axis of the femur, with the load vector intersecting the femoral head and the epicondylar center. Each construct was loaded in 50-N increments up to 700 N, corresponding to approximately one body weight. Outcome Assessment Constructs were characterized by determining their construct stiffness and interfragmentary motion using noncontact optical photogrammetry. 
For this purpose, an array of 4 active luminescent markers consisting of miniature light emitting diodes were glued to the osteotomy surfaces. An 18 megapixel digital camera (Canon EOS T6) captured the marker locations with a resolution of 0.01 mm after each incremental loading step. ImageJ quantitative image analysis software developed by the National Institute of Health (www.imagej.net) was used to extract marker displacement and to calculate the average axial motion dA and shear motion dS between osteotomy surfaces in response to incremental load steps. Because plate bending induces different amounts of axial motion at the near cortex and far cortex,26 axial motion dA was extracted individually for the near cortex (dA, NC) from markers 1 and 3, and for the far cortex (dA, FC) from markers 2 and 4, as depicted in Supplemental Digital Content 1 (see Figure, http://links.lww.com/BOT/A993). Construct stiffness SC was calculated by dividing the applied axial load by the axial motion dA at the midpoint between the near and far cortex, with dA = (dA, FC + dA, NC)/2. Constructs were characterized by determining their construct stiffness and interfragmentary motion using noncontact optical photogrammetry. For this purpose, an array of 4 active luminescent markers consisting of miniature light emitting diodes were glued to the osteotomy surfaces. An 18 megapixel digital camera (Canon EOS T6) captured the marker locations with a resolution of 0.01 mm after each incremental loading step. ImageJ quantitative image analysis software developed by the National Institute of Health (www.imagej.net) was used to extract marker displacement and to calculate the average axial motion dA and shear motion dS between osteotomy surfaces in response to incremental load steps. Because plate bending induces different amounts of axial motion at the near cortex and far cortex,26 axial motion dA was extracted individually for the near cortex (dA, NC) from markers 1 and 3, and for the far cortex (dA, FC) from markers 2 and 4, as depicted in Supplemental Digital Content 1 (see Figure, http://links.lww.com/BOT/A993). Construct stiffness SC was calculated by dividing the applied axial load by the axial motion dA at the midpoint between the near and far cortex, with dA = (dA, FC + dA, NC)/2. Statistical Analysis All results are reported as their mean and SD. Construct stiffness SC and interfragmentary motion parameters dS, dA, FC, and dA, FC of the 4 experimental groups was statistically compared with the LOCKED control group results using one-way analysis of variance testing including a post hoc Turkey honest significant difference (HSD) to identify significant differences. Each outcome parameter was analyzed individually, and a level of significance of α = 0.05 was used to detect significant differences. All results are reported as their mean and SD. Construct stiffness SC and interfragmentary motion parameters dS, dA, FC, and dA, FC of the 4 experimental groups was statistically compared with the LOCKED control group results using one-way analysis of variance testing including a post hoc Turkey honest significant difference (HSD) to identify significant differences. Each outcome parameter was analyzed individually, and a level of significance of α = 0.05 was used to detect significant differences. Specimens: Plating constructs were evaluated in fourth generation femur surrogate specimens made of fiber-reinforced epoxy composite (#3406, large size; Sawbones, Vashon, WA) to minimize interspecimen variability. 
Specimens: Plating constructs were evaluated in fourth generation femur surrogate specimens made of fiber-reinforced epoxy composite (#3406, large size; Sawbones, Vashon, WA) to minimize interspecimen variability. An unstable distal femur fracture (AO/Orthopaedic Trauma Association 33-A3) was modeled by introducing a 10-mm gap osteotomy located 60 mm proximal to the intercondylar notch.21,22 This gap osteotomy simulated the biomechanical constraints of a comminuted fracture that relies on full-load transfer through the bridge plating construct because of a lack of bony apposition at the fracture site. In the LOCKED control group, this gap osteotomy was stabilized with a 286-mm long distal femur plate (ZPLP; Zimmer, Warsaw, IN) made of stainless steel. The plate had 13 holes for diaphyseal fixation, 7 of which were locking holes (Fig. 2A). The diaphyseal plate segment was applied using 3 evenly spaced 4.5-mm locking screws placed in the first, fourth, and seventh locking hole from the fracture site, resulting in a short bridge span of 25 mm. A plate elevation of 1 mm over the proximal diaphysis was achieved with temporary spacers to simulate biological fixation with preservation of periosteal perfusion.23 The distal plate segment was applied to the metaphysis using six 5.5-mm cannulated locking screws in accordance with the manufacturer's technique guide. All screws were tightened to 4 Nm. (A) Distal femur plate with alternating locked and nonlocked holes; (B) 3 distinct screws used with the standard femur plate; (C and D) active plate with screw holes located in elastically suspended sliding elements. Subsequently, 4 additional construct configurations were assembled by changing one variable of the LOCKED control group constructs at a time: For constructs of the NONLOCKED group, nonlocking screws were used in place of locking screws for diaphyseal fixation, using the first, fourth, and sixth nonlocking hole from the fracture site (Fig. 2B). Because of the alternating locking/nonlocking screw hole configuration of this plate, the nonlocking construct had an intermediate bridge span of 40 mm. BRIDGE group constructs used a longer bridge span (87 mm) than LOCKED control group constructs (25 mm) by placing diaphyseal locking screws in the third, fifth, and seventh locking hole from the fracture site. FCL group constructs replaced the 3 diaphyseal locking screws of LOCKED control group constructs with 3 FCL screws (4.5 mm MotionLoc; Zimmer) made of stainless steel. ACTIVE group constructs had a screw configuration identical to that of the LOCKED control group, but used an active locking plate, as described above.
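For reference, the five test configurations described above can be summarized in a small data structure. This is purely an illustrative summary of the text (the 25-mm bridge span for the FCL and ACTIVE groups follows from their screw configuration being identical to the LOCKED control); it is not part of the study's methods.

```python
# Illustrative summary of the five test configurations described in the text.
CONSTRUCT_GROUPS = {
    "LOCKED":    {"plate": "standard locking plate", "diaphyseal_fixation": "3 x 4.5-mm locking screws",             "bridge_span_mm": 25},
    "NONLOCKED": {"plate": "standard locking plate", "diaphyseal_fixation": "3 x 4.5-mm nonlocking screws",          "bridge_span_mm": 40},
    "BRIDGE":    {"plate": "standard locking plate", "diaphyseal_fixation": "3 x 4.5-mm locking screws",             "bridge_span_mm": 87},
    "FCL":       {"plate": "standard locking plate", "diaphyseal_fixation": "3 x 4.5-mm far cortical locking screws", "bridge_span_mm": 25},
    "ACTIVE":    {"plate": "active locking plate",   "diaphyseal_fixation": "3 x 4.5-mm locking screws",             "bridge_span_mm": 25},
}

SPECIMENS_PER_GROUP = 5          # 5 constructs x 5 specimens = 25 construct tests
SCREW_TIGHTENING_TORQUE_NM = 4   # all screws tightened to 4 Nm
```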
RESULTS: Construct Stiffness There was no significant difference between the stiffness of LOCKED control constructs (2998 ± 361 N/mm) and NONLOCKED constructs (2549 ± 355 N/mm, P = 0.08) (see Table, Supplemental Digital Content 2, http://links.lww.com/BOT/A994, summarizing construct stiffness and interfragmentary motion). However, compared with the LOCKED control group, BRIDGE group constructs had a 45% lower stiffness (P < 0.001), FCL group constructs had a 62% lower stiffness (P < 0.001), and ACTIVE group constructs had a 75% lower stiffness (P < 0.001) (Fig. 3).
Construct stiffness achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct. Axial Interfragmentary Motion In each group, near cortex motion dA, NC was smaller than far cortex motion dA, FC (Fig. 4). Near cortex motion in response to one body weight loading (700 N) was the same for LOCKED control constructs and NONLOCKED constructs (0.10 ± 0.02 mm, P = 0.85) and remained below the 0.2-mm axial motion threshold required for callus stimulation. Compared with the LOCKED control group, dA, NC was 2 times greater in the BRIDGE group (P < 0.001), over 4 times greater in the FCL group (P < 0.001), and over 7 times greater in the ACTIVE group (P < 0.001). Similarly, far cortex motion was not significantly different between the LOCKED control group (0.37 ± 0.04 mm) and the NONLOCKED group (0.46 ± 0.08 mm) (P = 0.07). However, compared with the LOCKED control group, dA, FC was 73% greater in the BRIDGE group (P < 0.001), 105% greater in the FCL group (P < 0.001), and 303% greater in the ACTIVE group (P < 0.001). Axial motion at the near and far cortex, achieved with the 4 strategies for plate dynamization, relative to the LOCKED control construct. Shear Motion Shear motion dS remained below 0.2 mm in all groups except in the BRIDGE group, which exhibited on average 0.96 ± 0.14 mm shear motion in response to one body weight loading (Fig. 5A). In BRIDGE constructs, this magnitude of shear motion was 50% greater than the corresponding far cortex motion and over 3 times greater than the near cortex motion.
Using image analysis, shear-dominant motion in BRIDGE constructs was attributed to rotation of the femoral diaphysis around the proximal plate segment because of plate bending under axial loading, which caused the proximal osteotomy surface to be translated toward the locking plate (Fig. 5B). Transverse shear resulting from the 4 strategies for plate dynamization, relative to the LOCKED control construct.
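The relative comparisons reported in this section reduce to simple arithmetic on the group means. The short example below reproduces the stiffness reductions using the means stated in the Results and Discussion (LOCKED 2998 N/mm, FCL 1130 N/mm, ACTIVE 759 N/mm); it is a worked check, not additional data.

```python
def percent_reduction(control, value):
    """Percent reduction of `value` relative to `control`."""
    return 100.0 * (1.0 - value / control)

# Group mean stiffness values reported in the article (N/mm).
locked, fcl, active = 2998.0, 1130.0, 759.0

print(round(percent_reduction(locked, fcl)))     # -> 62 (FCL: ~62% lower stiffness)
print(round(percent_reduction(locked, active)))  # -> 75 (ACTIVE: ~75% lower stiffness)
```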
DISCUSSION: Results confirmed the study hypothesis by demonstrating that the 4 dynamization strategies yielded not only different amounts of construct stiffness and interfragmentary motion, but also different types of interfragmentary motion. LOCKED group results confirmed that a locking plate with a short bridge span results in asymmetric interfragmentary motion that is deficient for callus formation.3,13,16,26 Although one body weight loading may be excessive for early postoperative loading, it resulted in only 0.1 mm of motion at the near cortex. This motion remained below the 0.2-mm motion threshold that has been established as the lower boundary for fracture motion required to promote callus formation.17 This result is supported by several in vivo and clinical studies that demonstrate suppression of callus formation and healing at the near cortex adjacent to a locking plate.3,7,9,20 The far cortex motion was greater than that measured at the near cortex, likely secondary to plate bending. Clinically, the increased far cortex motion may allow callus to form, but the repetitive bending may also play a role in eventual fatigue failure of the plate before fracture healing occurs. NONLOCKED constructs represent an intuitive response to the stiffness concern associated with locked plating by reverting to nonlocking screws in the diaphysis. However, substituting nonlocking diaphyseal fixation had no significant effect on construct stiffness or interfragmentary motion. This may be explained by the rigid compression of the plate onto the bone surface, which is required to retain stable fixation. Compressing the plate to the bone prevents any motion at the plate–bone interface, which is a prerequisite for inducing symmetric interfragmentary motion.26 In contrast to locked plating constructs, the stiffness of a nonlocked construct will gradually decay as a result of dynamic loading.27 Although this can lead to increased fracture motion over time, the resulting uncontrolled motion is not a reliable strategy for dynamization. In addition, the natural fracture healing process responds with much more robust callus formation when exposed to early motion relative to delayed motion.28 BRIDGE constructs resembled the earliest and most widely proposed strategy to dynamize locked plating constructs. In a biomechanical study, Stoffel et al19 reported in 2003 that the axial stiffness of locked plating constructs was mainly influenced by their bridge span. They recommended that one or 2 holes should be omitted on each side of the fracture to allow callus formation. They found that omitting 2 holes made the construct almost twice as flexible, but also 42% less strong. This study found a 45% stiffness reduction by omitting 2 screw holes. However, the greater flexibility of the longer bridge span increased motion primarily at the far cortex, whereas near cortex motion remained deficient. In addition, the longer bridge span induced up to 3 times more shear motion than axial motion.
Although shear motion does not necessarily inhibit healing,29,30 several studies have shown that excessive or predominant shear motion will significantly delay healing.18,31 A recent study on the effect of bridge span on fracture motion also confirmed a disproportionate increase in shear motion.32 By analyzing 66 distal femur fractures stabilized with locking plates, the authors furthermore established a direct association between shear-dominated fracture motion and callus inhibition. Their findings, combined with the results of this study, question the technique of increasing the bridge span to dynamize a locked construct, because this may weaken the construct and may cause asymmetric axial motion and excessive shear motion that inhibits fracture healing. FCL and ACTIVE group constructs reduced stiffness compared with the LOCKED control group by 62%–75%, to 1130 and 759 N/mm, respectively. Nevertheless, their stiffness remained substantially higher than the stiffness range of 50–400 N/mm reported for external fixators33,34 and Ilizarov frames.35 The fact that external fixators and Ilizarov frames are established clinical tools that promote fracture healing by callus formation suggests that the stiffness reduction of FCL and ACTIVE constructs is rather conservative and does not introduce excessive dynamization. FCL and ACTIVE constructs enhanced interfragmentary motion at the near and far cortex well above the 0.2-mm threshold needed to stimulate callus formation. The highest axial motion of 1.1 mm was observed at the far cortex of ACTIVE constructs and remained at the lower limit of the 1–4 mm motion range reported for functional bracing.36 Most importantly, FCL and ACTIVE constructs delivered dynamization that was dominated by axial motion, not shear motion. These constructs allow a screw to be placed close to the fracture site without affecting stiffness and therefore limit the amount of shear motion possible. The controlled axial dynamization provided by FCL and ACTIVE constructs has been shown to deliver faster and stronger healing. In an ovine fracture healing study, FCL constructs induced consistent and circumferential callus bridging and yielded 157% stronger healing compared with standard locked plating.7 Clinically, a prospective study of 31 consecutive distal femur fractures stabilized with FCL constructs reported no implant or fixation failure, an average time to union of 16 weeks, and a nonunion rate of 3%.37 Similar to FCL constructs, ACTIVE plating induced 6 times more callus at 3 weeks postsurgery and yielded 4 times stronger healing compared with rigid locked plating in an ovine fracture healing study.20 These in vivo and clinical studies of FCL and ACTIVE plating constructs demonstrated that controlled axial dynamization reliably promoted natural fracture healing. Results of this study are limited by the use of femur surrogates. Validated surrogates were used to extract relative differences between constructs under highly reproducible test conditions.38 Because the surrogates represented a strong, nonosteoporotic femur, results may not be extrapolated to fracture fixation in the osteopenic femur. Moreover, this study only investigated construct stability in terms of stiffness and related interfragmentary motion, without loading constructs to failure to determine their strength.
The strength of the tested constructs has been evaluated in previous studies, showing that increasing the bridge span will decrease construct strength,19 whereas the strength of FCL and active plating constructs is comparable with that of standard locked plating constructs.8,24 Testing was furthermore limited to static loading and did not investigate gradual loosening or fatigue of constructs under dynamic loading. Moreover, this study was limited to a principal loading mode that combines axial compression and bending but not torsion. Only plates made of stainless steel were tested, which are approximately twice as stiff as geometrically equivalent plates made of titanium alloy. Although the results of this study raise concerns about the negative effect of shear-dominated interfragmentary motion on fracture healing, this concern should be formally investigated in a future in vivo study. Most importantly, emerging implant technologies that can provide controlled dynamization will require more clinical studies to document their effect on fracture healing and to better define the range of interfragmentary motion that will promote healing of different fracture patterns at specific fracture locations. In conclusion, results of this study indicate that intuitive technical tricks, such as reverting to nonlocking screws or using long plates to maximize the bridge span, may not reliably achieve relative stability and adequate interfragmentary motion for promoting natural fracture healing. Conversely, engineered implant solutions in the form of FCL screws or active plates can reliably dynamize a locked plating construct to stimulate fracture healing. As such, results should encourage implant manufacturers to provide engineered solutions that reliably promote rather than potentially hinder fracture healing, avoiding the need for, and the uncertainty of, technical tricks intended to optimize construct stability. Supplementary Material:
Background: Decreasing the stiffness of locked plating constructs can promote natural fracture healing by controlled dynamization of the fracture. This biomechanical study compared the effect of 4 different stiffness reduction methods on interfragmentary motion by measuring axial motion and shear motion at the fracture site. Methods: Distal femur locking plates were applied to bridge a metadiaphyseal fracture in femur surrogates. A locked construct with a short bridge span served as the nondynamized control group (LOCKED). Four different methods for stiffness reduction were evaluated: replacing diaphyseal locking screws with nonlocked screws (NONLOCKED); bridge dynamization (BRIDGE) with 2 empty screw holes proximal to the fracture; screw dynamization with far cortical locking (FCL) screws; and plate dynamization with active locking plates (ACTIVE). Construct stiffness, axial motion, and shear motion at the fracture site were measured to characterize each dynamization method. Results: Compared with LOCKED control constructs, NONLOCKED constructs had a similar stiffness (P = 0.08), axial motion (P = 0.07), and shear motion (P = 0.97). BRIDGE constructs reduced stiffness by 45% compared with LOCKED constructs (P < 0.001), but interfragmentary motion was dominated by shear. Compared with LOCKED constructs, FCL and ACTIVE constructs reduced stiffness by 62% (P < 0.001) and 75% (P < 0.001), respectively, and significantly increased axial motion, but not shear motion. Conclusions: In a surrogate model of a distal femur fracture, replacing locked with nonlocked diaphyseal screws does not significantly decrease construct stiffness and does not enhance interfragmentary motion. A longer bridge span primarily increases shear motion, not axial motion. The use of FCL screws or active plating delivers axial dynamization without introducing shear motion.
null
null
7,145
332
[ 694, 169, 215, 88, 133, 257, 156 ]
12
[ "motion", "group", "plate", "constructs", "locked", "mm", "fracture", "construct", "locking", "cortex" ]
[ "locked distal femur", "fracture healing inhibited", "fracture fixation osteopenic", "fracture motion callus", "plating distal femur" ]
null
null
null
[CONTENT] plate fixation | locked plating | dynamization | interfragmentary motion | stiffness | active plating | femur [SUMMARY]
null
[CONTENT] plate fixation | locked plating | dynamization | interfragmentary motion | stiffness | active plating | femur [SUMMARY]
null
[CONTENT] plate fixation | locked plating | dynamization | interfragmentary motion | stiffness | active plating | femur [SUMMARY]
null
[CONTENT] Biomechanical Phenomena | Bone Plates | Bone Screws | Diaphyses | Equipment Design | Femoral Fractures | Fracture Fixation, Internal | Humans | Models, Anatomic | Shear Strength [SUMMARY]
null
[CONTENT] Biomechanical Phenomena | Bone Plates | Bone Screws | Diaphyses | Equipment Design | Femoral Fractures | Fracture Fixation, Internal | Humans | Models, Anatomic | Shear Strength [SUMMARY]
null
[CONTENT] Biomechanical Phenomena | Bone Plates | Bone Screws | Diaphyses | Equipment Design | Femoral Fractures | Fracture Fixation, Internal | Humans | Models, Anatomic | Shear Strength [SUMMARY]
null
[CONTENT] locked distal femur | fracture healing inhibited | fracture fixation osteopenic | fracture motion callus | plating distal femur [SUMMARY]
null
[CONTENT] locked distal femur | fracture healing inhibited | fracture fixation osteopenic | fracture motion callus | plating distal femur [SUMMARY]
null
[CONTENT] locked distal femur | fracture healing inhibited | fracture fixation osteopenic | fracture motion callus | plating distal femur [SUMMARY]
null
[CONTENT] motion | group | plate | constructs | locked | mm | fracture | construct | locking | cortex [SUMMARY]
null
[CONTENT] motion | group | plate | constructs | locked | mm | fracture | construct | locking | cortex [SUMMARY]
null
[CONTENT] motion | group | plate | constructs | locked | mm | fracture | construct | locking | cortex [SUMMARY]
null
[CONTENT] healing | fracture | fracture healing | mechanical | motion | plating | fixation | dynamization | formation | natural [SUMMARY]
null
[CONTENT] group | 001 | motion | group 001 | greater | locked control | control | locked | mm | constructs [SUMMARY]
null
[CONTENT] motion | group | constructs | locked | plate | mm | fracture | da | cortex | stiffness [SUMMARY]
null
[CONTENT] ||| 4 [SUMMARY]
null
[CONTENT] NONLOCKED | 0.08 | 0.07 | 0.97 ||| 45% ||| FCL | 62% | 75% [SUMMARY]
null
[CONTENT] ||| 4 ||| ||| ||| Four | 2 ||| ||| NONLOCKED | 0.08 | 0.07 | 0.97 ||| 45% ||| FCL | 62% | 75% ||| ||| ||| FCL [SUMMARY]
null
Efficacy and safety of endoscopic submucosal dissection for gastric tube cancer: A multicenter retrospective study.
33776371
Recent improvements in the prognosis of patients with esophageal cancer have led to the increased occurrence of gastric tube cancer (GTC) in the reconstructed gastric tube. However, there are few reports on the treatment results of endoscopic submucosal dissection (ESD) for GTC.
BACKGROUND
We retrospectively investigated 48 GTC lesions in 38 consecutive patients with GTC in the reconstructed gastric tube after esophagectomy who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group. The clinical indications of ESD for early gastric cancer were similarly applied for GTC after esophagectomy. ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines. Patient characteristics, treatment results, clinical course, and treatment outcomes were analyzed.
METHODS
The median age of patients was 71.5 years (range, 57-84 years), and there were 34 men and 4 women. The median observation period after ESD was 884 d (range, 8-4040 d). The median procedure time was 81 min (range, 29-334 min), the en bloc resection rate was 91.7% (44/48), and the curative resection rate was 79% (38/48). Complications during ESD were seen in 4% (2/48) of cases, and those after ESD were seen in 10% (5/48) of cases. The survival rate at 5 years was 59.5%. During the observation period after ESD, 10 patients died of other diseases. Although there were differences in the procedure time between institutions, a multivariate analysis showed that tumor size was the only factor associated with prolonged procedure time.
RESULTS
ESD for GTC after esophagectomy was shown to be safe and effective.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Dissection", "Endoscopic Mucosal Resection", "Female", "Gastric Mucosa", "Humans", "Male", "Middle Aged", "Retrospective Studies", "Stomach Neoplasms", "Treatment Outcome" ]
7985736
INTRODUCTION
Recently, the survival of patients with esophageal cancer after esophagectomy has improved[1-5]. However, the risk of a subsequent occurrence of primary cancer is high in these patients. The most frequent cancer that overlaps with esophageal cancer is head and neck cancer, while the second most common is gastric cancer, including gastric tube cancer (GTC)[6-9]. Therefore, the improved prognosis of esophageal cancer patients has led to an increase in the occurrence of GTC in the reconstructed gastric tube. For the treatment of GTC after esophagectomy, total gastric tube resection (TGTR) or partial gastric tube resection (PGTR) has been proposed. However, surgical resection for GTC, being a secondary operation following esophagectomy, may lead to high mortality and morbidity[10,11]. On the other hand, in recent years, endoscopic therapy for early gastric cancer (EGC) has developed and become widespread[12]. Endoscopic submucosal dissection (ESD) enables the treatment of large lesions with a higher rate of en bloc resection that cannot be achieved by using conventional endoscopic mucosal resection. In addition, ESD is less invasive than surgery. For this reason, ESD has become widely used as a standard treatment for EGC, and ESD is often performed for GTC. However, ESD for GTC after esophagectomy is a technically difficult procedure compared with that for an unresected stomach because of the limited working space, unusual fluid-pooling area, food residue, bile reflux, fibrosis, and staples under the suture line[13]. Therefore, a high degree of skill is required for ESD of GTC. There are few reports about ESD for GTC after esophagectomy, and most are case reports and case series of a small number of patients[13-17]. A study by Nonaka et al[18] reported the effectiveness and safety of ESD for GTC in a high-volume national center, which had the largest number of cases but was nonetheless a single-center study. Therefore, the aim of this study was to evaluate the efficacy and safety of ESD for GTC after esophagectomy in a multicenter context.
MATERIALS AND METHODS
Patients We retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction. Study measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients' background and clinical outcomes between OUH and other facilities. The institutional review board of each hospital approved this study, and informed consent was obtained from all patients. Endoscopic procedures All endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used. First, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. The resected specimens were evaluated pathologically.
Histopathological assessment of curability ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines[20]. R0 resection indicated that the lesion was resected en bloc with both the horizontal and vertical margins tumor-free histopathologically, but did not include findings regarding lymphovascular infiltration, the type of adenocarcinoma, or an assessment of the depth of invasion for curability. A curative resection was divided into eCura A and eCura B. A non-curative resection was defined as not meeting the criteria of curative resection and was further separated into 2 groups, eCura C-1 and eCura C-2, based on histopathological results per the Gastric Cancer Treatment Guidelines[19]. Statistical analysis Continuous and categorical variables are expressed as median (range) and n (%), respectively. Overall survival was calculated according to the Kaplan-Meier method. Differences in the clinical outcomes of ESD for GTC between institutions were evaluated using the Mann-Whitney U test for continuous data and the Chi-squared test for categorical variables. The risk factors for long procedure time were evaluated using logistic regression analysis. All statistical analyses were performed using the statistical analysis software JMP Pro, version 15 (SAS Institute Inc., Cary, NC, United States).
P values < 0.05 were considered statistically significant.
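As a rough illustration of the analyses just described, the sketch below uses open-source Python libraries (lifelines, scipy, statsmodels) as stand-ins for JMP Pro; the column names and covariates are hypothetical placeholders, not the study's actual variable definitions.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from scipy.stats import mannwhitneyu, chi2_contingency
import statsmodels.api as sm

def overall_survival(days, died):
    """Kaplan-Meier estimate of overall survival after ESD."""
    km = KaplanMeierFitter()
    km.fit(durations=days, event_observed=died)
    return km.survival_function_

def compare_institutions(time_a, time_b, contingency_table):
    """Mann-Whitney U for continuous outcomes, chi-squared for categorical ones."""
    _, p_continuous = mannwhitneyu(time_a, time_b)
    _, p_categorical, _, _ = chi2_contingency(contingency_table)
    return p_continuous, p_categorical

def long_procedure_risk_factors(df: pd.DataFrame):
    """Logistic regression for a long (>= 90 min) procedure time (illustrative covariates)."""
    y = (df["procedure_min"] >= 90).astype(int)
    X = sm.add_constant(df[["tumor_size_mm", "treated_at_high_volume_center"]])
    return sm.Logit(y, X).fit(disp=0)
```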
null
null
CONCLUSION
As some patients were observed for only a short time, the assessment of long-term prognosis after ESD for GTC was insufficient. Further accumulation and follow-up of cases of GTC are necessary in the future.
[ "INTRODUCTION", "Patients", "Endoscopic procedures", "Histopathological assessment of curability", "Statistical analysis", "RESULTS", "Patients’ characteristics and endoscopic findings", "Treatment results of ESD and histopathological findings", "Adverse events", "Patients’ clinical courses", "Comparison of clinical outcomes", "DISCUSSION", "CONCLUSION" ]
[ "Recently, the survival of patients with esophageal cancer after esophagectomy has improved[1-5]. However, the risk of a subsequent occurrence of primary cancer is high in these patients. The most frequent cancer that overlaps with esophageal cancer is head and neck cancer, while the second most common is gastric cancer, including gastric tube cancer (GTC)[6-9]. Therefore, the improved prognosis of esophageal cancer patients has led to an increase in the occurrence of GTC in the reconstructed gastric tube.\nFor the treatment of GTC after esophagectomy, total gastric tube resection (TGTR) or partial gastric tube resection (PGTR) has been proposed. However, surgical resection for GTC, being a secondary operation following esophagectomy, may lead to high mortality and morbidity[10,11]. On the other hand, in recent years, endoscopic therapy for early gastric cancer (EGC) has developed and become widespread[12]. Endoscopic submucosal dissection (ESD) enables the treatment of large lesions with a higher rate of en bloc resection that cannot be achieved by using conventional endoscopic mucosal resection. In addition, ESD is less invasive than surgery. For this reason, ESD has become widely used as a standard treatment for EGC, and ESD is often performed for GTC.\nHowever, ESD for GTC after esophagectomy is a technically difficult procedure compared with that for an unresected stomach because of the limited working space, unusual fluid-pooling area, food residue, bile reflux, fibrosis, and staples under the suture line[13]. Therefore, a high degree of skill is required for ESD of GTC. There are few reports about ESD for GTC after esophagectomy, and most are case reports and case series of a small number of patients[13-17]. A study by Nonaka et al[18] reported the effectiveness and safety of ESD for GTC in a high-volume national center, which had largest number of cases but was nonetheless a single-center study. Therefore, the aim of this study was to evaluate the efficacy and safety of ESD for GTC after esophagectomy in a multicenter context.", "We retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction.\nStudy measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients’ background and clinical outcomes between OUH and other facilities.\nThe institutional review board of each hospital approved this study, and informed consent was obtained from all patients.", "All endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. 
The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used.\nFirst, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. The resected specimens were evaluated pathologically.", "ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines[20]. R0 resection indicated that the lesion was resected en bloc with both the horizontal and vertical margins tumor-free histopathologically, but did not include findings regarding lymphovascular infiltration, the type of adenocarcinoma, or an assessment of the depth of invasion for curability. A curative resection was divided into eCura A and eCura B. A non-curative resection was defined as not meeting the criteria of curative resection and was further separated into 2 groups, eCura C-1 and eCura C-2, based on histopathological results per the Gastric Cancer Treatment Guidelines[19].", "Continuous and categorical variables are expressed as median (range) and n (%), respectively. Overall survival was calculated according to the Kaplan-Meier method. Differences in the clinical outcomes of ESD for GTC between institutions were evaluated using the Mann-Whitney U test for continuous data and the Chi-squared test for categorical variables. The risk factors for long procedure time were evaluated using logistic regression analysis. All statistical analyses were performed using the statistical analysis software JMP Pro, version 15 (SAS Institute Inc., Cary, NC, United States). P values < 0.05 were considered statistically significant.", "Patients’ characteristics and endoscopic findings A total of 38 consecutive patients with 48 GTC lesions were treated with ESD between January 2005 and December 2019 (Table 1). The median age of these patients was 71.5 years (range, 57-84 years), and they included 34 men and 4 women. The median period from esophagectomy to the treatment of GTC was 2106 d (range, 38-9523 d). This included patients who had a diagnosis of EGC before surgery for esophageal cancer and had undergone ESD after esophagectomy (5 patients). The reconstruction routes were antethoracic, retrosternal, and posterior mediastinal in 7, 11, and 20 patients, respectively. The location of the GTC lesion was upper, middle, and low in 2, 18, and 28 patients, respectively. Regarding the macroscopic type, there were 21 lesions of 0-IIa, 22 lesions of 0-IIc, 2 lesions of 0-IIb, 1 lesion of 0-III, and 2 combined lesions. 
The median observation period after ESD was 884 d (range, 8-4040 d).\nPatients’ characteristics and endoscopic findings\nESD: Endoscopic submucosal dissection; GTC: Gastric tube cancer; U: Upper; M: Medium; L: Lower.\nA total of 38 consecutive patients with 48 GTC lesions were treated with ESD between January 2005 and December 2019 (Table 1). The median age of these patients was 71.5 years (range, 57-84 years), and they included 34 men and 4 women. The median period from esophagectomy to the treatment of GTC was 2106 d (range, 38-9523 d). This included patients who had a diagnosis of EGC before surgery for esophageal cancer and had undergone ESD after esophagectomy (5 patients). The reconstruction routes were antethoracic, retrosternal, and posterior mediastinal in 7, 11, and 20 patients, respectively. The location of the GTC lesion was upper, middle, and low in 2, 18, and 28 patients, respectively. Regarding the macroscopic type, there were 21 lesions of 0-IIa, 22 lesions of 0-IIc, 2 lesions of 0-IIb, 1 lesion of 0-III, and 2 combined lesions. The median observation period after ESD was 884 d (range, 8-4040 d).\nPatients’ characteristics and endoscopic findings\nESD: Endoscopic submucosal dissection; GTC: Gastric tube cancer; U: Upper; M: Medium; L: Lower.\nTreatment results of ESD and histopathological findings Treatment results of ESD for GTC after esophagectomy and pathological findings are shown in Table 2. The median procedure time was 81 min (range, 29-334 min). En bloc resection was performed in 44 of 48 lesions (91.7%). The median tumor size of the resected specimen was 15 mm (range, 4-60 mm). Among the 48 lesions, 43 were differentiated (90%) and 5 were undifferentiated (10%). Regarding the tumor depth, 40 lesions were intramucosal carcinoma (M, 84%), 4 were submucosal superficial carcinoma (SM1, 8%), and 4 were submucosal deep invasive carcinoma (SM2 or deeper, 8%). Ulcerative findings were seen in 6 lesions (13%). Lymphatic infiltration was seen in 3 lesions (6%), and vascular infiltration was seen in 1 lesion (2%). According to the Japanese Gastric Cancer Treatment Guidelines, 38 lesions (79%) achieved curative resection (eCura A) and 10 lesions (21%) were classified as non-curative resection. The reasons for non-curative resection were as follows: 3 lesions were horizontal margin positive (HM1) or cutting into the lesion (eCura C-1), 2 were undifferentiated and showed SM invasion, 2 showed lymphatic infiltration, 2 showed SM invasion with ulcerative findings, and 1 was undifferentiated and showed SM invasion and lymphatic and vascular infiltration (eCura C-2).\nTreatment results of endoscopic submucosal dissection for gastric tube cancer and histopathological findings\nESD: Endoscopic submucosal dissection; M: Intramucosal; SM1: Submucosal superficial; SM2: Submucosal deep invasive.\nTreatment results of ESD for GTC after esophagectomy and pathological findings are shown in Table 2. The median procedure time was 81 min (range, 29-334 min). En bloc resection was performed in 44 of 48 lesions (91.7%). The median tumor size of the resected specimen was 15 mm (range, 4-60 mm). Among the 48 lesions, 43 were differentiated (90%) and 5 were undifferentiated (10%). Regarding the tumor depth, 40 lesions were intramucosal carcinoma (M, 84%), 4 were submucosal superficial carcinoma (SM1, 8%), and 4 were submucosal deep invasive carcinoma (SM2 or deeper, 8%). Ulcerative findings were seen in 6 lesions (13%). 
Lymphatic infiltration was seen in 3 lesions (6%), and vascular infiltration was seen in 1 lesion (2%). According to the Japanese Gastric Cancer Treatment Guidelines, 38 lesions (79%) achieved curative resection (eCura A) and 10 lesions (21%) were classified as non-curative resection. The reasons for non-curative resection were as follows: 3 lesions were horizontal margin positive (HM1) or cutting into the lesion (eCura C-1), 2 were undifferentiated and showed SM invasion, 2 showed lymphatic infiltration, 2 showed SM invasion with ulcerative findings, and 1 was undifferentiated and showed SM invasion and lymphatic and vascular infiltration (eCura C-2).\nTreatment results of endoscopic submucosal dissection for gastric tube cancer and histopathological findings\nESD: Endoscopic submucosal dissection; M: Intramucosal; SM1: Submucosal superficial; SM2: Submucosal deep invasive.\nAdverse events Complications during ESD were seen in 2 cases (4%), with 1 case of perforation, and 1 case of bleeding. Complications after ESD were seen in 5 cases (10%), with 2 cases of bleeding, 1 case of subcutaneous abscess, 1 case of liver failure, and 1 case of respiratory failure (Table 2).\nIt was the same patient who had perforation during ESD and who formed subcutaneous abscess after ESD (Figure 1). In this case, perforation during ESD was sealed immediately with endoclips. Nevertheless, 20 d after ESD, the patient was admitted to the hospital with redness of the skin in the precordial area and excretion of pus from the skin. Computed tomography showed formation of a subcutaneous abscess around the gastric tube of the antethoracic reconstruction route. The patient was treated conservatively with antibiotics and percutaneous drainage and was discharged on the 16th day after the start of re-admission.\n\nA case of subcutaneous abscess formation after perforation during endoscopic submucosal dissection for gastric tube cancer. A: Gastric tube cancer located at the anterior wall of gastric body; B: Marking dots were placed around the lesion, and endoscopic submucosal dissection (ESD) was performed as usual; C: Perforation occurred during ESD; D: Perforation was sealed immediately with 4 endoclips; E: Redness of the skin in the precordial area, 20 d after ESD; F: Computed tomography performed 20 d after ESD. A subcutaneous abscess (yellow arrow) had formed around the gastric tube of the antethoracic reconstruction route (orange arrow).\nComplications during ESD were seen in 2 cases (4%), with 1 case of perforation, and 1 case of bleeding. Complications after ESD were seen in 5 cases (10%), with 2 cases of bleeding, 1 case of subcutaneous abscess, 1 case of liver failure, and 1 case of respiratory failure (Table 2).\nIt was the same patient who had perforation during ESD and who formed subcutaneous abscess after ESD (Figure 1). In this case, perforation during ESD was sealed immediately with endoclips. Nevertheless, 20 d after ESD, the patient was admitted to the hospital with redness of the skin in the precordial area and excretion of pus from the skin. Computed tomography showed formation of a subcutaneous abscess around the gastric tube of the antethoracic reconstruction route. The patient was treated conservatively with antibiotics and percutaneous drainage and was discharged on the 16th day after the start of re-admission.\n\nA case of subcutaneous abscess formation after perforation during endoscopic submucosal dissection for gastric tube cancer. 
A: Gastric tube cancer located at the anterior wall of gastric body; B: Marking dots were placed around the lesion, and endoscopic submucosal dissection (ESD) was performed as usual; C: Perforation occurred during ESD; D: Perforation was sealed immediately with 4 endoclips; E: Redness of the skin in the precordial area, 20 d after ESD; F: Computed tomography performed 20 d after ESD. A subcutaneous abscess (yellow arrow) had formed around the gastric tube of the antethoracic reconstruction route (orange arrow).\nPatients’ clinical courses Of the 38 cases, 2 had local recurrence and 3 had metachronous recurrence. In the 2 cases with local recurrence, 1 received additional surgery and the other received additional ESD. In the 3 cases with metachronous recurrence, 1 received additional surgery and the others received additional ESD. The patients’ overall survival curve is shown in Figure 2. The survival rate at 5 years was 59.5%. During the observation period after ESD, no patient died of GTC. However, 10 patients died of other diseases, including pneumonia, which was the most common and occurred in 4 patients, heart failure and hepatocellular carcinoma in 1 patient each, and other unknown diseases.\n\nOverall survival curve after endoscopic submucosal dissection for gastric tube cancer. The survival rate at 5 year was 59.5%. ESD: Endoscopic submucosal dissection.\nOf the 38 cases, 2 had local recurrence and 3 had metachronous recurrence. In the 2 cases with local recurrence, 1 received additional surgery and the other received additional ESD. In the 3 cases with metachronous recurrence, 1 received additional surgery and the others received additional ESD. The patients’ overall survival curve is shown in Figure 2. The survival rate at 5 years was 59.5%. During the observation period after ESD, no patient died of GTC. However, 10 patients died of other diseases, including pneumonia, which was the most common and occurred in 4 patients, heart failure and hepatocellular carcinoma in 1 patient each, and other unknown diseases.\n\nOverall survival curve after endoscopic submucosal dissection for gastric tube cancer. The survival rate at 5 year was 59.5%. ESD: Endoscopic submucosal dissection.\nComparison of clinical outcomes A comparison of the patients’ background and clinical outcomes between OUH and other hospitals is shown in Table 3. In terms of the patients’ backgrounds, the posterior mediastinal route was used as a reconstruction route in more cases at other hospitals. Treatment results were generally similar in both groups; however, procedure time was significantly longer at other hospitals.\nComparison of clinical outcomes between Okayama University Hospital and other hospitals\nOUH: Okayama University Hospital; ESD: Endoscopic submucosal dissection; M: Intramucosal; SM: Submucosal.\nSince there were differences in procedure time between institutions, we divided patients into two groups, a short procedure time group (< 90 min) and a long procedure time group (≥ 90 min), and examined the factors affecting the procedure time. In univariate analysis (Table 4), the treatment institution and tumor size showed significant differences between the two groups. 
Comparison of clinical outcomes

A comparison of the patients' backgrounds and clinical outcomes between OUH and the other hospitals is shown in Table 3. Regarding patient background, the posterior mediastinal reconstruction route was used in more cases at the other hospitals. Treatment results were generally similar in the two groups; however, the procedure time was significantly longer at the other hospitals.

Table 3. Comparison of clinical outcomes between Okayama University Hospital and other hospitals. OUH: Okayama University Hospital; ESD: Endoscopic submucosal dissection; M: Intramucosal; SM: Submucosal.

Because procedure time differed between institutions, we divided the patients into a short procedure time group (< 90 min) and a long procedure time group (≥ 90 min) and examined the factors affecting procedure time. In univariate analysis (Table 4), the treatment institution and tumor size differed significantly between the two groups. However, in multivariate analysis (Table 5), tumor size was the only factor associated with a long procedure time.

Table 4. Comparison of the short (< 90 min) and long (≥ 90 min) procedure time groups. U: Upper; M: Medium; L: Lower (location); M: Intramucosal; SM: Submucosal (depth).

Table 5. Multivariate analysis of risk factors for a long procedure time of endoscopic submucosal dissection for gastric tube cancer. U: Upper; M: Medium; CI: Confidence interval.
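The univariate and multivariate results above follow a common pattern: dichotomize procedure time at 90 min, compare the groups, and then fit a logistic regression reporting odds ratios with confidence intervals. The sketch below illustrates that pattern on simulated placeholder data; the variable names, effect sizes, and data are assumptions, not the study dataset.

```python
# Sketch of the long-procedure-time analysis: dichotomize at 90 min and fit a
# logistic regression with tumor size and treating institution as covariates.
# All data below are simulated placeholders, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
tumor_size_mm = rng.integers(4, 61, n)    # resected-specimen size, 4-60 mm
treated_at_ouh = rng.integers(0, 2, n)    # 1 = treated at OUH, 0 = other hospital
# Assumed relationship: larger tumors are more likely to need >= 90 min.
p_long = 1.0 / (1.0 + np.exp(-0.08 * (tumor_size_mm - 20)))
long_procedure = rng.binomial(1, p_long)  # 1 = procedure time >= 90 min

X = sm.add_constant(pd.DataFrame({"tumor_size_mm": tumor_size_mm,
                                  "treated_at_ouh": treated_at_ouh}))
fit = sm.Logit(long_procedure, X).fit(disp=0)

# Odds ratios with 95% confidence intervals, as reported in Table 5
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```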
DISCUSSION

This study was the first multicenter study of ESD for GTC in the reconstructed gastric tube after esophagectomy, and it included the second largest number of patients reported to date. According to a systematic review of GTC after esophagectomy, there are two surgical options for the treatment of GTC: PGTR, or TGTR plus lymphadenectomy with colon or jejunal reconstruction[21]. However, surgical treatment for GTC is highly invasive and carries a certain degree of risk. Sugiura et al[10] reported that 5 of 7 patients who underwent TGTR had surgical complications (leakage) and 2 died, and that 1 of 3 patients who underwent PGTR had fatal complications. Akita et al[11] reported that 1 of 5 patients who underwent TGTR died of postoperative complications. In contrast, in previous studies of ESD for GTC, the proportions of R0 resection and curative resection were 87.5%-92% and 65%-85%, respectively, and complications were seen in 12.5%-18% of patients[13-18]. In the present study, the proportions of R0 resection and curative resection were 91.7% and 79%, respectively, and complications occurred in 10% of patients. Overall, the treatment results of ESD for GTC in this study were similar to those of previous studies. In previous studies of gastric ESD in the unresected stomach, the proportions of R0 resection, curative resection, and complications were 92%-94.9%, 80.4%-94.7%, and 5.9%-6.3%, respectively[22,23]. Furthermore, in gastric ESD of the remnant stomach after gastrectomy, the proportions of R0 resection, curative resection, and complications were 84.7%-85%, 70.9%-78%, and 2.8%-21.1%, respectively[24,25]. ESD for GTC can therefore be considered a minimally invasive, effective, and relatively safe treatment.

There are several points of note regarding GTC. First, detection of early GTC requires long-term, regular endoscopic surveillance after esophagectomy. GTC is often found long after esophagectomy, in some cases more than 10 years later, and the risk of metachronous GTC is high[15-18]. Second, GTC may be difficult to diagnose: food residue and bile reflux are often present in the gastric tube, and the lumen of the gastric tube is long and narrow, which can constrain endoscopic observation[13]. Therefore, it is necessary to pay attention to these points during endoscopy in patients with gastric tube reconstruction after esophagectomy.
Third, when performing ESD for GTC, it is necessary to pay attention to complications specific to GTC. For example, in our study, a subcutaneous abscess formed after treatment in a patient who had a perforation during ESD for GTC in the antethoracic reconstruction route; this patient was cured by conservative treatment with antibiotics and percutaneous drainage. Moreover, Miyagi et al[26] reported that post-treatment precordial skin burns occurred in 5 of 8 patients with GTC in the antethoracic reconstruction route. In that report, all burns were diagnosed as first-degree burns based on the clinical classification of burn depth, developed on postoperative day 1-2, and took 4-7 d to heal.

In this study, because approximately half of the patients were treated at OUH, we defined it as a high-volume center and compared clinical outcomes with those of the other institutions. There were no significant differences in the clinical outcomes of ESD between institutions. In addition, lesion size was the only factor related to a long procedure time in multivariate analysis. We believe these results are attributable to the fact that all of the participating institutions specialize in gastrointestinal diseases and have each performed more than 500 ESD procedures for EGC. Moreover, ESD for GTC may have been performed by leading specialists given the relative rarity of GTC. For these reasons, ESD for GTC appears safe when performed by specialists with sufficient ESD experience.

In the past, a considerable number of patients developed complications or died of other diseases during the course after esophagectomy[27,28]. However, with the widespread adoption of minimally invasive esophagectomy, such as thoracoscopic and laparoscopic surgery, the incidence of postoperative complications, including respiratory complications, has decreased, and the general condition of patients after esophagectomy has improved in recent years[29-32]. With continued improvements in the prognosis of esophageal cancer, the number of cases of GTC after esophagectomy will likely increase, and the demand for ESD for GTC is expected to increase further.

There were several limitations to this study. First, because this was a retrospective study, the ESD indications and the devices used for treatment were not standardized; however, treatment was performed according to typical standards. Second, because data on Helicobacter pylori infection status were missing for some patients, the association between GTC and Helicobacter pylori could not be evaluated. Third, because some patients were observed for only a short time, the assessment of long-term prognosis after ESD for GTC was insufficient. Further follow-up studies are needed.

CONCLUSION

In conclusion, ESD for GTC after esophagectomy is a safe and effective treatment that can be performed without significant variability in treatment results at any specialized institution where standard gastric ESD is performed with sufficient expertise. Further accumulation and follow-up of GTC cases are necessary.
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Endoscopic procedures", "Histopathological assessment of curability", "Statistical analysis", "RESULTS", "Patients’ characteristics and endoscopic findings", "Treatment results of ESD and histopathological findings", "Adverse events", "Patients’ clinical courses", "Comparison of clinical outcomes", "DISCUSSION", "CONCLUSION" ]
[ "Recently, the survival of patients with esophageal cancer after esophagectomy has improved[1-5]. However, the risk of a subsequent occurrence of primary cancer is high in these patients. The most frequent cancer that overlaps with esophageal cancer is head and neck cancer, while the second most common is gastric cancer, including gastric tube cancer (GTC)[6-9]. Therefore, the improved prognosis of esophageal cancer patients has led to an increase in the occurrence of GTC in the reconstructed gastric tube.\nFor the treatment of GTC after esophagectomy, total gastric tube resection (TGTR) or partial gastric tube resection (PGTR) has been proposed. However, surgical resection for GTC, being a secondary operation following esophagectomy, may lead to high mortality and morbidity[10,11]. On the other hand, in recent years, endoscopic therapy for early gastric cancer (EGC) has developed and become widespread[12]. Endoscopic submucosal dissection (ESD) enables the treatment of large lesions with a higher rate of en bloc resection that cannot be achieved by using conventional endoscopic mucosal resection. In addition, ESD is less invasive than surgery. For this reason, ESD has become widely used as a standard treatment for EGC, and ESD is often performed for GTC.\nHowever, ESD for GTC after esophagectomy is a technically difficult procedure compared with that for an unresected stomach because of the limited working space, unusual fluid-pooling area, food residue, bile reflux, fibrosis, and staples under the suture line[13]. Therefore, a high degree of skill is required for ESD of GTC. There are few reports about ESD for GTC after esophagectomy, and most are case reports and case series of a small number of patients[13-17]. A study by Nonaka et al[18] reported the effectiveness and safety of ESD for GTC in a high-volume national center, which had largest number of cases but was nonetheless a single-center study. Therefore, the aim of this study was to evaluate the efficacy and safety of ESD for GTC after esophagectomy in a multicenter context.", "Patients We retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction.\nStudy measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients’ background and clinical outcomes between OUH and other facilities.\nThe institutional review board of each hospital approved this study, and informed consent was obtained from all patients.\nWe retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). 
All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction.\nStudy measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients’ background and clinical outcomes between OUH and other facilities.\nThe institutional review board of each hospital approved this study, and informed consent was obtained from all patients.\nEndoscopic procedures All endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used.\nFirst, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. The resected specimens were evaluated pathologically.\nAll endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used.\nFirst, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. 
The resected specimens were evaluated pathologically.\nHistopathological assessment of curability ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines[20]. R0 resection indicated that the lesion was resected en bloc with both the horizontal and vertical margins tumor-free histopathologically, but did not include findings regarding lymphovascular infiltration, the type of adenocarcinoma, or an assessment of the depth of invasion for curability. A curative resection was divided into eCura A and eCura B. A non-curative resection was defined as not meeting the criteria of curative resection and was further separated into 2 groups, eCura C-1 and eCura C-2, based on histopathological results per the Gastric Cancer Treatment Guidelines[19].\nESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines[20]. R0 resection indicated that the lesion was resected en bloc with both the horizontal and vertical margins tumor-free histopathologically, but did not include findings regarding lymphovascular infiltration, the type of adenocarcinoma, or an assessment of the depth of invasion for curability. A curative resection was divided into eCura A and eCura B. A non-curative resection was defined as not meeting the criteria of curative resection and was further separated into 2 groups, eCura C-1 and eCura C-2, based on histopathological results per the Gastric Cancer Treatment Guidelines[19].\nStatistical analysis Continuous and categorical variables are expressed as median (range) and n (%), respectively. Overall survival was calculated according to the Kaplan-Meier method. Differences in the clinical outcomes of ESD for GTC between institutions were evaluated using the Mann-Whitney U test for continuous data and the Chi-squared test for categorical variables. The risk factors for long procedure time were evaluated using logistic regression analysis. All statistical analyses were performed using the statistical analysis software JMP Pro, version 15 (SAS Institute Inc., Cary, NC, United States). P values < 0.05 were considered statistically significant.\nContinuous and categorical variables are expressed as median (range) and n (%), respectively. Overall survival was calculated according to the Kaplan-Meier method. Differences in the clinical outcomes of ESD for GTC between institutions were evaluated using the Mann-Whitney U test for continuous data and the Chi-squared test for categorical variables. The risk factors for long procedure time were evaluated using logistic regression analysis. All statistical analyses were performed using the statistical analysis software JMP Pro, version 15 (SAS Institute Inc., Cary, NC, United States). P values < 0.05 were considered statistically significant.", "We retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. 
During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction.\nStudy measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients’ background and clinical outcomes between OUH and other facilities.\nThe institutional review board of each hospital approved this study, and informed consent was obtained from all patients.", "All endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used.\nFirst, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. The resected specimens were evaluated pathologically.", "ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines[20]. R0 resection indicated that the lesion was resected en bloc with both the horizontal and vertical margins tumor-free histopathologically, but did not include findings regarding lymphovascular infiltration, the type of adenocarcinoma, or an assessment of the depth of invasion for curability. A curative resection was divided into eCura A and eCura B. A non-curative resection was defined as not meeting the criteria of curative resection and was further separated into 2 groups, eCura C-1 and eCura C-2, based on histopathological results per the Gastric Cancer Treatment Guidelines[19].", "Continuous and categorical variables are expressed as median (range) and n (%), respectively. Overall survival was calculated according to the Kaplan-Meier method. Differences in the clinical outcomes of ESD for GTC between institutions were evaluated using the Mann-Whitney U test for continuous data and the Chi-squared test for categorical variables. The risk factors for long procedure time were evaluated using logistic regression analysis. All statistical analyses were performed using the statistical analysis software JMP Pro, version 15 (SAS Institute Inc., Cary, NC, United States). 
P values < 0.05 were considered statistically significant.", "Patients’ characteristics and endoscopic findings A total of 38 consecutive patients with 48 GTC lesions were treated with ESD between January 2005 and December 2019 (Table 1). The median age of these patients was 71.5 years (range, 57-84 years), and they included 34 men and 4 women. The median period from esophagectomy to the treatment of GTC was 2106 d (range, 38-9523 d). This included patients who had a diagnosis of EGC before surgery for esophageal cancer and had undergone ESD after esophagectomy (5 patients). The reconstruction routes were antethoracic, retrosternal, and posterior mediastinal in 7, 11, and 20 patients, respectively. The location of the GTC lesion was upper, middle, and low in 2, 18, and 28 patients, respectively. Regarding the macroscopic type, there were 21 lesions of 0-IIa, 22 lesions of 0-IIc, 2 lesions of 0-IIb, 1 lesion of 0-III, and 2 combined lesions. The median observation period after ESD was 884 d (range, 8-4040 d).\nPatients’ characteristics and endoscopic findings\nESD: Endoscopic submucosal dissection; GTC: Gastric tube cancer; U: Upper; M: Medium; L: Lower.\nA total of 38 consecutive patients with 48 GTC lesions were treated with ESD between January 2005 and December 2019 (Table 1). The median age of these patients was 71.5 years (range, 57-84 years), and they included 34 men and 4 women. The median period from esophagectomy to the treatment of GTC was 2106 d (range, 38-9523 d). This included patients who had a diagnosis of EGC before surgery for esophageal cancer and had undergone ESD after esophagectomy (5 patients). The reconstruction routes were antethoracic, retrosternal, and posterior mediastinal in 7, 11, and 20 patients, respectively. The location of the GTC lesion was upper, middle, and low in 2, 18, and 28 patients, respectively. Regarding the macroscopic type, there were 21 lesions of 0-IIa, 22 lesions of 0-IIc, 2 lesions of 0-IIb, 1 lesion of 0-III, and 2 combined lesions. The median observation period after ESD was 884 d (range, 8-4040 d).\nPatients’ characteristics and endoscopic findings\nESD: Endoscopic submucosal dissection; GTC: Gastric tube cancer; U: Upper; M: Medium; L: Lower.\nTreatment results of ESD and histopathological findings Treatment results of ESD for GTC after esophagectomy and pathological findings are shown in Table 2. The median procedure time was 81 min (range, 29-334 min). En bloc resection was performed in 44 of 48 lesions (91.7%). The median tumor size of the resected specimen was 15 mm (range, 4-60 mm). Among the 48 lesions, 43 were differentiated (90%) and 5 were undifferentiated (10%). Regarding the tumor depth, 40 lesions were intramucosal carcinoma (M, 84%), 4 were submucosal superficial carcinoma (SM1, 8%), and 4 were submucosal deep invasive carcinoma (SM2 or deeper, 8%). Ulcerative findings were seen in 6 lesions (13%). Lymphatic infiltration was seen in 3 lesions (6%), and vascular infiltration was seen in 1 lesion (2%). According to the Japanese Gastric Cancer Treatment Guidelines, 38 lesions (79%) achieved curative resection (eCura A) and 10 lesions (21%) were classified as non-curative resection. 
The reasons for non-curative resection were as follows: 3 lesions were horizontal margin positive (HM1) or cutting into the lesion (eCura C-1), 2 were undifferentiated and showed SM invasion, 2 showed lymphatic infiltration, 2 showed SM invasion with ulcerative findings, and 1 was undifferentiated and showed SM invasion and lymphatic and vascular infiltration (eCura C-2).\nTreatment results of endoscopic submucosal dissection for gastric tube cancer and histopathological findings\nESD: Endoscopic submucosal dissection; M: Intramucosal; SM1: Submucosal superficial; SM2: Submucosal deep invasive.\nTreatment results of ESD for GTC after esophagectomy and pathological findings are shown in Table 2. The median procedure time was 81 min (range, 29-334 min). En bloc resection was performed in 44 of 48 lesions (91.7%). The median tumor size of the resected specimen was 15 mm (range, 4-60 mm). Among the 48 lesions, 43 were differentiated (90%) and 5 were undifferentiated (10%). Regarding the tumor depth, 40 lesions were intramucosal carcinoma (M, 84%), 4 were submucosal superficial carcinoma (SM1, 8%), and 4 were submucosal deep invasive carcinoma (SM2 or deeper, 8%). Ulcerative findings were seen in 6 lesions (13%). Lymphatic infiltration was seen in 3 lesions (6%), and vascular infiltration was seen in 1 lesion (2%). According to the Japanese Gastric Cancer Treatment Guidelines, 38 lesions (79%) achieved curative resection (eCura A) and 10 lesions (21%) were classified as non-curative resection. The reasons for non-curative resection were as follows: 3 lesions were horizontal margin positive (HM1) or cutting into the lesion (eCura C-1), 2 were undifferentiated and showed SM invasion, 2 showed lymphatic infiltration, 2 showed SM invasion with ulcerative findings, and 1 was undifferentiated and showed SM invasion and lymphatic and vascular infiltration (eCura C-2).\nTreatment results of endoscopic submucosal dissection for gastric tube cancer and histopathological findings\nESD: Endoscopic submucosal dissection; M: Intramucosal; SM1: Submucosal superficial; SM2: Submucosal deep invasive.\nAdverse events Complications during ESD were seen in 2 cases (4%), with 1 case of perforation, and 1 case of bleeding. Complications after ESD were seen in 5 cases (10%), with 2 cases of bleeding, 1 case of subcutaneous abscess, 1 case of liver failure, and 1 case of respiratory failure (Table 2).\nIt was the same patient who had perforation during ESD and who formed subcutaneous abscess after ESD (Figure 1). In this case, perforation during ESD was sealed immediately with endoclips. Nevertheless, 20 d after ESD, the patient was admitted to the hospital with redness of the skin in the precordial area and excretion of pus from the skin. Computed tomography showed formation of a subcutaneous abscess around the gastric tube of the antethoracic reconstruction route. The patient was treated conservatively with antibiotics and percutaneous drainage and was discharged on the 16th day after the start of re-admission.\n\nA case of subcutaneous abscess formation after perforation during endoscopic submucosal dissection for gastric tube cancer. A: Gastric tube cancer located at the anterior wall of gastric body; B: Marking dots were placed around the lesion, and endoscopic submucosal dissection (ESD) was performed as usual; C: Perforation occurred during ESD; D: Perforation was sealed immediately with 4 endoclips; E: Redness of the skin in the precordial area, 20 d after ESD; F: Computed tomography performed 20 d after ESD. 
A subcutaneous abscess (yellow arrow) had formed around the gastric tube of the antethoracic reconstruction route (orange arrow).\nComplications during ESD were seen in 2 cases (4%), with 1 case of perforation, and 1 case of bleeding. Complications after ESD were seen in 5 cases (10%), with 2 cases of bleeding, 1 case of subcutaneous abscess, 1 case of liver failure, and 1 case of respiratory failure (Table 2).\nIt was the same patient who had perforation during ESD and who formed subcutaneous abscess after ESD (Figure 1). In this case, perforation during ESD was sealed immediately with endoclips. Nevertheless, 20 d after ESD, the patient was admitted to the hospital with redness of the skin in the precordial area and excretion of pus from the skin. Computed tomography showed formation of a subcutaneous abscess around the gastric tube of the antethoracic reconstruction route. The patient was treated conservatively with antibiotics and percutaneous drainage and was discharged on the 16th day after the start of re-admission.\n\nA case of subcutaneous abscess formation after perforation during endoscopic submucosal dissection for gastric tube cancer. A: Gastric tube cancer located at the anterior wall of gastric body; B: Marking dots were placed around the lesion, and endoscopic submucosal dissection (ESD) was performed as usual; C: Perforation occurred during ESD; D: Perforation was sealed immediately with 4 endoclips; E: Redness of the skin in the precordial area, 20 d after ESD; F: Computed tomography performed 20 d after ESD. A subcutaneous abscess (yellow arrow) had formed around the gastric tube of the antethoracic reconstruction route (orange arrow).\nPatients’ clinical courses Of the 38 cases, 2 had local recurrence and 3 had metachronous recurrence. In the 2 cases with local recurrence, 1 received additional surgery and the other received additional ESD. In the 3 cases with metachronous recurrence, 1 received additional surgery and the others received additional ESD. The patients’ overall survival curve is shown in Figure 2. The survival rate at 5 years was 59.5%. During the observation period after ESD, no patient died of GTC. However, 10 patients died of other diseases, including pneumonia, which was the most common and occurred in 4 patients, heart failure and hepatocellular carcinoma in 1 patient each, and other unknown diseases.\n\nOverall survival curve after endoscopic submucosal dissection for gastric tube cancer. The survival rate at 5 year was 59.5%. ESD: Endoscopic submucosal dissection.\nOf the 38 cases, 2 had local recurrence and 3 had metachronous recurrence. In the 2 cases with local recurrence, 1 received additional surgery and the other received additional ESD. In the 3 cases with metachronous recurrence, 1 received additional surgery and the others received additional ESD. The patients’ overall survival curve is shown in Figure 2. The survival rate at 5 years was 59.5%. During the observation period after ESD, no patient died of GTC. However, 10 patients died of other diseases, including pneumonia, which was the most common and occurred in 4 patients, heart failure and hepatocellular carcinoma in 1 patient each, and other unknown diseases.\n\nOverall survival curve after endoscopic submucosal dissection for gastric tube cancer. The survival rate at 5 year was 59.5%. ESD: Endoscopic submucosal dissection.\nComparison of clinical outcomes A comparison of the patients’ background and clinical outcomes between OUH and other hospitals is shown in Table 3. 
In terms of the patients’ backgrounds, the posterior mediastinal route was used as a reconstruction route in more cases at other hospitals. Treatment results were generally similar in both groups; however, procedure time was significantly longer at other hospitals.\nComparison of clinical outcomes between Okayama University Hospital and other hospitals\nOUH: Okayama University Hospital; ESD: Endoscopic submucosal dissection; M: Intramucosal; SM: Submucosal.\nSince there were differences in procedure time between institutions, we divided patients into two groups, a short procedure time group (< 90 min) and a long procedure time group (≥ 90 min), and examined the factors affecting the procedure time. In univariate analysis (Table 4), the treatment institution and tumor size showed significant differences between the two groups. However, in multivariate analysis (Table 5), tumor size was the only factor associated with a long procedure time.\nComparison of short (< 90 min) and long (≥ 90 min) procedure time groups\nU: Upper; M: Medium; L: Lower; M: Intramucosal; SM: Submucosal.\nMultivariate analysis about risk factors for a long procedure time of endoscopic submucosal dissection for gastric tube cancer\nU: Upper; M: Medium; CI: Confidence interval.\nA comparison of the patients’ background and clinical outcomes between OUH and other hospitals is shown in Table 3. In terms of the patients’ backgrounds, the posterior mediastinal route was used as a reconstruction route in more cases at other hospitals. Treatment results were generally similar in both groups; however, procedure time was significantly longer at other hospitals.\nComparison of clinical outcomes between Okayama University Hospital and other hospitals\nOUH: Okayama University Hospital; ESD: Endoscopic submucosal dissection; M: Intramucosal; SM: Submucosal.\nSince there were differences in procedure time between institutions, we divided patients into two groups, a short procedure time group (< 90 min) and a long procedure time group (≥ 90 min), and examined the factors affecting the procedure time. In univariate analysis (Table 4), the treatment institution and tumor size showed significant differences between the two groups. However, in multivariate analysis (Table 5), tumor size was the only factor associated with a long procedure time.\nComparison of short (< 90 min) and long (≥ 90 min) procedure time groups\nU: Upper; M: Medium; L: Lower; M: Intramucosal; SM: Submucosal.\nMultivariate analysis about risk factors for a long procedure time of endoscopic submucosal dissection for gastric tube cancer\nU: Upper; M: Medium; CI: Confidence interval.", "A total of 38 consecutive patients with 48 GTC lesions were treated with ESD between January 2005 and December 2019 (Table 1). The median age of these patients was 71.5 years (range, 57-84 years), and they included 34 men and 4 women. The median period from esophagectomy to the treatment of GTC was 2106 d (range, 38-9523 d). This included patients who had a diagnosis of EGC before surgery for esophageal cancer and had undergone ESD after esophagectomy (5 patients). The reconstruction routes were antethoracic, retrosternal, and posterior mediastinal in 7, 11, and 20 patients, respectively. The location of the GTC lesion was upper, middle, and low in 2, 18, and 28 patients, respectively. Regarding the macroscopic type, there were 21 lesions of 0-IIa, 22 lesions of 0-IIc, 2 lesions of 0-IIb, 1 lesion of 0-III, and 2 combined lesions. 
The median observation period after ESD was 884 d (range, 8-4040 d).\nPatients’ characteristics and endoscopic findings\nESD: Endoscopic submucosal dissection; GTC: Gastric tube cancer; U: Upper; M: Medium; L: Lower.", "Treatment results of ESD for GTC after esophagectomy and pathological findings are shown in Table 2. The median procedure time was 81 min (range, 29-334 min). En bloc resection was performed in 44 of 48 lesions (91.7%). The median tumor size of the resected specimen was 15 mm (range, 4-60 mm). Among the 48 lesions, 43 were differentiated (90%) and 5 were undifferentiated (10%). Regarding the tumor depth, 40 lesions were intramucosal carcinoma (M, 84%), 4 were submucosal superficial carcinoma (SM1, 8%), and 4 were submucosal deep invasive carcinoma (SM2 or deeper, 8%). Ulcerative findings were seen in 6 lesions (13%). Lymphatic infiltration was seen in 3 lesions (6%), and vascular infiltration was seen in 1 lesion (2%). According to the Japanese Gastric Cancer Treatment Guidelines, 38 lesions (79%) achieved curative resection (eCura A) and 10 lesions (21%) were classified as non-curative resection. The reasons for non-curative resection were as follows: 3 lesions were horizontal margin positive (HM1) or cutting into the lesion (eCura C-1), 2 were undifferentiated and showed SM invasion, 2 showed lymphatic infiltration, 2 showed SM invasion with ulcerative findings, and 1 was undifferentiated and showed SM invasion and lymphatic and vascular infiltration (eCura C-2).\nTreatment results of endoscopic submucosal dissection for gastric tube cancer and histopathological findings\nESD: Endoscopic submucosal dissection; M: Intramucosal; SM1: Submucosal superficial; SM2: Submucosal deep invasive.", "Complications during ESD were seen in 2 cases (4%), with 1 case of perforation, and 1 case of bleeding. Complications after ESD were seen in 5 cases (10%), with 2 cases of bleeding, 1 case of subcutaneous abscess, 1 case of liver failure, and 1 case of respiratory failure (Table 2).\nIt was the same patient who had perforation during ESD and who formed subcutaneous abscess after ESD (Figure 1). In this case, perforation during ESD was sealed immediately with endoclips. Nevertheless, 20 d after ESD, the patient was admitted to the hospital with redness of the skin in the precordial area and excretion of pus from the skin. Computed tomography showed formation of a subcutaneous abscess around the gastric tube of the antethoracic reconstruction route. The patient was treated conservatively with antibiotics and percutaneous drainage and was discharged on the 16th day after the start of re-admission.\n\nA case of subcutaneous abscess formation after perforation during endoscopic submucosal dissection for gastric tube cancer. A: Gastric tube cancer located at the anterior wall of gastric body; B: Marking dots were placed around the lesion, and endoscopic submucosal dissection (ESD) was performed as usual; C: Perforation occurred during ESD; D: Perforation was sealed immediately with 4 endoclips; E: Redness of the skin in the precordial area, 20 d after ESD; F: Computed tomography performed 20 d after ESD. A subcutaneous abscess (yellow arrow) had formed around the gastric tube of the antethoracic reconstruction route (orange arrow).", "Of the 38 cases, 2 had local recurrence and 3 had metachronous recurrence. In the 2 cases with local recurrence, 1 received additional surgery and the other received additional ESD. 
In the 3 cases with metachronous recurrence, 1 received additional surgery and the others received additional ESD. The patients’ overall survival curve is shown in Figure 2. The survival rate at 5 years was 59.5%. During the observation period after ESD, no patient died of GTC. However, 10 patients died of other diseases, including pneumonia, which was the most common and occurred in 4 patients, heart failure and hepatocellular carcinoma in 1 patient each, and other unknown diseases.\n\nOverall survival curve after endoscopic submucosal dissection for gastric tube cancer. The survival rate at 5 year was 59.5%. ESD: Endoscopic submucosal dissection.", "A comparison of the patients’ background and clinical outcomes between OUH and other hospitals is shown in Table 3. In terms of the patients’ backgrounds, the posterior mediastinal route was used as a reconstruction route in more cases at other hospitals. Treatment results were generally similar in both groups; however, procedure time was significantly longer at other hospitals.\nComparison of clinical outcomes between Okayama University Hospital and other hospitals\nOUH: Okayama University Hospital; ESD: Endoscopic submucosal dissection; M: Intramucosal; SM: Submucosal.\nSince there were differences in procedure time between institutions, we divided patients into two groups, a short procedure time group (< 90 min) and a long procedure time group (≥ 90 min), and examined the factors affecting the procedure time. In univariate analysis (Table 4), the treatment institution and tumor size showed significant differences between the two groups. However, in multivariate analysis (Table 5), tumor size was the only factor associated with a long procedure time.\nComparison of short (< 90 min) and long (≥ 90 min) procedure time groups\nU: Upper; M: Medium; L: Lower; M: Intramucosal; SM: Submucosal.\nMultivariate analysis about risk factors for a long procedure time of endoscopic submucosal dissection for gastric tube cancer\nU: Upper; M: Medium; CI: Confidence interval.", "This study was the first multicenter study on ESD for GTC in the reconstructed gastric tube after esophagectomy, and it included the second largest number of patients. According to a systematic review of GTC after esophagectomy, there are two surgical options for the treatment of GTC: PGTR or TGTR plus lymphadenectomy with colon or jejunal reconstruction[21]. However, surgical treatment for GTC is highly invasive and carries a certain degree of risk. Sugiura et al[10] reported that 5 of 7 cases of TGTR had surgical complications (leakage) and 2 died. In addition, 1 of 3 cases of PTGR had fatal complications. Akita et al[11] reported that 1 of 5 cases of TGTR died of postoperative complications. On the other hand, in previous studies on ESD for GTC, the proportions of R0 resection and curative resection were 87.5%-92% and 65%-85%, respectively, and complications were seen in 12.5%-18% of patients[13-18]. In the present study, the proportions of R0 resection and curative resection were 91.7% and 79%, respectively, and complications were seen in 10%. Overall, the treatment results of ESD for GTC in this study were similar to those of previous studies. In a previous study on gastric ESD of the unresected stomach, the proportions of R0 resection, curative resections, and complications were 92%-94.9%, 80.4%-94.7%, and 5.9%-6.3%, respectively[22,23]. 
Furthermore, in gastric ESD of the remnant stomach after gastrectomy, the proportions of R0 resection, curative resection, and complications were 84.7%-85%, 70.9%-78%, and 2.8%-21.1%, respectively[24,25]. ESD for GTC was considered a minimally invasive, effective, and relatively safe treatment.\nThere are some points of note in GTC. First, detection of early GTC requires long-term regular endoscopic surveillance after esophagectomy. GTC is often found long after esophagectomy; in some cases, GTC is detected after more than 10 years and the risk of metachronous GTC is high[15-18]. Second, GTC may be difficult to diagnose. The reasons are as follows: Food residue and bile reflux are often seen in the gastric tube, and the lumen of the gastric tube is long and narrow and can constrain endoscopic observation[13]. Therefore, it is necessary to pay attention to these points during endoscopy for patients with gastric tube reconstruction after esophagectomy. Third, when performing ESD for GTC, it is necessary to pay attention to complications specific to GTC. For example, in our study, a subcutaneous abscess formed after treatment in a case with perforation during ESD for GTC in the antethoracic reconstruction route. This case was cured by conservative treatment with antibiotics and percutaneous drainage. Moreover, Miyagi et al[26] reported that post-treatment precordial skin burns occurred in 5 of 8 patients with GTC in the antethoracic reconstruction route. In this report, all burns were diagnosed as first-degree burns based on the clinical classification of burn depth, developed on postoperative day 1-2, and took 4-7 d to heal.\nIn this study, since approximately half of the patients were treated at OUH, we defined it as a high-volume center and compared clinical outcomes with those of other institutions. As a result, there were no significant differences in the clinical outcomes of ESD between institutions. In addition, lesion size was the only factor related to long procedure time in multivariate analysis. We believe these results were attributable to the fact that all of the participating institutions specialized in gastrointestinal diseases with more than 500 cases of ESD for EGC. Moreover, ESD for GTC may have been performed by leading specialists given the relative rarity of GTC. For these reasons, ESD for GTC seems safe if performed by specialists with sufficient ESD experience.\nPreviously, not a few patients had complications or died of other diseases during the course after esophagectomy[27,28]. However, due to the widespread use of minimally invasive esophagectomy, such as thoracoscopic and laparoscopic surgery, the incidence of postoperative complications, including respiratory complications, has decreased and the general condition of patients after esophagectomy has improved in recent years[29-32]. With continued improvements in the prognosis of esophageal cancer, the number of cases of GTC after esophagectomy will likely increase in the future and the demand for ESD for GTC is expected to increase further.\nThere were several limitations to this study. First, as this was a retrospective study, the ESD indications and devices used for treatment were not standardized. However, treatment was performed according to the typical standards. Second, since data on Helicobacter pylori infection status were missing in some patients, the association between GTC and Helicobacter pylori could not be evaluated. 
Third, as some patients were observed for only a short time, the assessment of long-term prognosis after ESD for GTC was insufficient. Further follow-up studies are needed in the future.", "In conclusion, ESD for GTC after esophagectomy is a safe and effective treatment that can be performed without significant variability in treatment results at any specialized institution where standard gastric ESD can be performed with sufficient expertise. Further accumulation and follow-up of cases of GTC are necessary in the future." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Endoscopic submucosal dissection", "Gastric tube", "Gastric cancer", "Eso-phagectomy", "Multicenter study", "Retrospective study" ]
INTRODUCTION: Recently, the survival of patients with esophageal cancer after esophagectomy has improved[1-5]. However, the risk of a subsequent occurrence of primary cancer is high in these patients. The most frequent cancer that overlaps with esophageal cancer is head and neck cancer, while the second most common is gastric cancer, including gastric tube cancer (GTC)[6-9]. Therefore, the improved prognosis of esophageal cancer patients has led to an increase in the occurrence of GTC in the reconstructed gastric tube. For the treatment of GTC after esophagectomy, total gastric tube resection (TGTR) or partial gastric tube resection (PGTR) has been proposed. However, surgical resection for GTC, being a secondary operation following esophagectomy, may lead to high mortality and morbidity[10,11]. On the other hand, in recent years, endoscopic therapy for early gastric cancer (EGC) has developed and become widespread[12]. Endoscopic submucosal dissection (ESD) enables the treatment of large lesions with a higher rate of en bloc resection that cannot be achieved by using conventional endoscopic mucosal resection. In addition, ESD is less invasive than surgery. For this reason, ESD has become widely used as a standard treatment for EGC, and ESD is often performed for GTC. However, ESD for GTC after esophagectomy is a technically difficult procedure compared with that for an unresected stomach because of the limited working space, unusual fluid-pooling area, food residue, bile reflux, fibrosis, and staples under the suture line[13]. Therefore, a high degree of skill is required for ESD of GTC. There are few reports about ESD for GTC after esophagectomy, and most are case reports and case series of a small number of patients[13-17]. A study by Nonaka et al[18] reported the effectiveness and safety of ESD for GTC in a high-volume national center, which had largest number of cases but was nonetheless a single-center study. Therefore, the aim of this study was to evaluate the efficacy and safety of ESD for GTC after esophagectomy in a multicenter context. MATERIALS AND METHODS: Patients We retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction. Study measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients’ background and clinical outcomes between OUH and other facilities. The institutional review board of each hospital approved this study, and informed consent was obtained from all patients. We retrospectively investigated patients with GTC in the reconstructed gastric tube after esophagectomy for esophageal squamous cell carcinoma who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group (O-GUTS). 
All of the participating institutions in O-GUTS, except Okayama University Hospital (OUH), were considered core hospitals in each area. During the study period, 48 GTC lesions in 38 consecutive patients were treated. The clinical indications of ESD for EGC were based on the Gastric Cancer Treatment Guidelines[19]. These indications were similarly applied for GTC after esophagectomy with gastric tube reconstruction. Study measurements were as follows: patient characteristics, endoscopic findings, treatment results, adverse events, histopathological results, and clinical courses. In addition, we defined OUH as a high-volume center and compared the patients’ background and clinical outcomes between OUH and other facilities. The institutional review board of each hospital approved this study, and informed consent was obtained from all patients. Endoscopic procedures All endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used. First, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. The resected specimens were evaluated pathologically. All endoscopic procedures were performed by experts in ESD who had experience with more than 500 clinical cases. There were no restrictions on the scopes and devices used by each endoscopist for ESD. The scopes used were GIF-Q260J or GIF-H260 (Olympus, Tokyo, Japan), and the devices were an insulation-tipped diathermic knife (IT Knife), IT Knife 2, IT Knife nano, or Dual Knife J (Olympus, Tokyo, Japan). Other devices, such as an argon plasma coagulation probe (ERBE, Tubingen, Germany) for marking dots or a needle knife (ZEON MEDICAL, Tokyo, Japan) for the initial incision, were occasionally used. First, marking dots for the incision lines were placed around the lesion. Next, fructose-added glycerol (Glyceol; TAIYO Pharma CO, Tokyo, Japan) with a minute amount of indigo carmine dye was injected into the submucosal layer. In some cases, 0.4% sodium hyaluronate (MucoUp; Boston Scientific, Tokyo, Japan) was used. After submucosal injection, a precut was made with the Dual Knife J or needle knife, followed by a circumferential mucosal incision around the lesion using the dots as a landmark and submucosal dissection with the IT Knife, IT Knife 2, IT Knife nano, or Dual Knife J. The resected specimens were evaluated pathologically. 
Histopathological assessment of curability: ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma, with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines[20]. R0 resection indicated that the lesion was resected en bloc with both the horizontal and vertical margins tumor-free histopathologically, but did not include findings regarding lymphovascular infiltration, the type of adenocarcinoma, or an assessment of the depth of invasion for curability. A curative resection was divided into eCura A and eCura B. A non-curative resection was defined as not meeting the criteria of curative resection and was further separated into 2 groups, eCura C-1 and eCura C-2, based on histopathological results per the Gastric Cancer Treatment Guidelines[19].

Statistical analysis: Continuous and categorical variables are expressed as median (range) and n (%), respectively. Overall survival was calculated according to the Kaplan-Meier method. Differences in the clinical outcomes of ESD for GTC between institutions were evaluated using the Mann-Whitney U test for continuous data and the Chi-squared test for categorical variables. Risk factors for a long procedure time were evaluated using logistic regression analysis. All statistical analyses were performed using JMP Pro, version 15 (SAS Institute Inc., Cary, NC, United States). P values < 0.05 were considered statistically significant.
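For readers who want to reproduce this style of between-institution comparison and survival estimation outside JMP Pro, the following Python sketch outlines the same tests. It is illustrative only: the DataFrame and its column names (institution, procedure_time, en_bloc, os_days, death) are hypothetical assumptions, not the study's actual dataset or analysis code.

```python
# Illustrative outline only (not the authors' JMP Pro analysis). Assumes a hypothetical
# DataFrame `df` with one row per treated lesion/patient and columns:
#   institution ('OUH' or 'other'), procedure_time (min), en_bloc (0/1),
#   os_days (days from ESD to death/censoring), death (1 = died, 0 = censored).
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency
from lifelines import KaplanMeierFitter

def compare_institutions(df: pd.DataFrame) -> dict:
    ouh = df[df["institution"] == "OUH"]
    other = df[df["institution"] != "OUH"]

    # Continuous outcomes (e.g., procedure time): Mann-Whitney U test.
    _, p_time = mannwhitneyu(ouh["procedure_time"], other["procedure_time"],
                             alternative="two-sided")

    # Categorical outcomes (e.g., en bloc resection): Chi-squared test.
    table = pd.crosstab(df["institution"], df["en_bloc"])
    _, p_enbloc, _, _ = chi2_contingency(table)

    return {"p_procedure_time": p_time, "p_en_bloc": p_enbloc}  # judged significant if < 0.05

def overall_survival(df: pd.DataFrame) -> KaplanMeierFitter:
    # Kaplan-Meier estimate of overall survival after ESD.
    kmf = KaplanMeierFitter()
    kmf.fit(durations=df["os_days"], event_observed=df["death"])
    return kmf  # kmf.survival_function_ holds the estimated survival curve
```

Any statistical package with nonparametric tests and Kaplan-Meier routines would serve equally well; the point is only to make the reported workflow concrete.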
RESULTS:

Patients’ characteristics and endoscopic findings: A total of 38 consecutive patients with 48 GTC lesions were treated with ESD between January 2005 and December 2019 (Table 1). The median age of these patients was 71.5 years (range, 57-84 years), and they included 34 men and 4 women. The median period from esophagectomy to the treatment of GTC was 2106 d (range, 38-9523 d). This included 5 patients who had a diagnosis of EGC before surgery for esophageal cancer and underwent ESD after esophagectomy. The reconstruction routes were antethoracic, retrosternal, and posterior mediastinal in 7, 11, and 20 patients, respectively. The location of the GTC lesion was upper, middle, and lower in 2, 18, and 28 lesions, respectively. Regarding the macroscopic type, there were 21 lesions of 0-IIa, 22 lesions of 0-IIc, 2 lesions of 0-IIb, 1 lesion of 0-III, and 2 combined lesions. The median observation period after ESD was 884 d (range, 8-4040 d).
Table 1. Patients’ characteristics and endoscopic findings. ESD: Endoscopic submucosal dissection; GTC: Gastric tube cancer; U: Upper; M: Medium; L: Lower.

Treatment results of ESD and histopathological findings: Treatment results of ESD for GTC after esophagectomy and pathological findings are shown in Table 2. The median procedure time was 81 min (range, 29-334 min). En bloc resection was performed in 44 of 48 lesions (91.7%). The median tumor size of the resected specimen was 15 mm (range, 4-60 mm). Among the 48 lesions, 43 were differentiated (90%) and 5 were undifferentiated (10%). Regarding tumor depth, 40 lesions were intramucosal carcinoma (M, 84%), 4 were submucosal superficial carcinoma (SM1, 8%), and 4 were submucosal deep invasive carcinoma (SM2 or deeper, 8%). Ulcerative findings were seen in 6 lesions (13%). Lymphatic infiltration was seen in 3 lesions (6%), and vascular infiltration was seen in 1 lesion (2%). According to the Japanese Gastric Cancer Treatment Guidelines, 38 lesions (79%) achieved curative resection (eCura A) and 10 lesions (21%) were classified as non-curative resection. The reasons for non-curative resection were as follows: 3 lesions had a positive horizontal margin (HM1) or were cut into during resection (eCura C-1); among the eCura C-2 lesions, 2 were undifferentiated with SM invasion, 2 showed lymphatic infiltration, 2 showed SM invasion with ulcerative findings, and 1 was undifferentiated with SM invasion and lymphatic and vascular infiltration.
Table 2. Treatment results of endoscopic submucosal dissection for gastric tube cancer and histopathological findings. ESD: Endoscopic submucosal dissection; M: Intramucosal; SM1: Submucosal superficial; SM2: Submucosal deep invasive.

Adverse events: Complications during ESD were seen in 2 cases (4%): 1 case of perforation and 1 case of bleeding. Complications after ESD were seen in 5 cases (10%): 2 cases of bleeding, 1 case of subcutaneous abscess, 1 case of liver failure, and 1 case of respiratory failure (Table 2). The patient who had a perforation during ESD was the same patient who developed a subcutaneous abscess after ESD (Figure 1). In this case, the perforation during ESD was sealed immediately with endoclips. Nevertheless, 20 d after ESD, the patient was admitted to the hospital with redness of the skin in the precordial area and excretion of pus from the skin. Computed tomography showed formation of a subcutaneous abscess around the gastric tube of the antethoracic reconstruction route. The patient was treated conservatively with antibiotics and percutaneous drainage and was discharged on the 16th day after re-admission.
Figure 1. A case of subcutaneous abscess formation after perforation during endoscopic submucosal dissection for gastric tube cancer. A: Gastric tube cancer located at the anterior wall of the gastric body; B: Marking dots were placed around the lesion, and endoscopic submucosal dissection (ESD) was performed as usual; C: Perforation occurred during ESD; D: The perforation was sealed immediately with 4 endoclips; E: Redness of the skin in the precordial area, 20 d after ESD; F: Computed tomography performed 20 d after ESD. A subcutaneous abscess (yellow arrow) had formed around the gastric tube of the antethoracic reconstruction route (orange arrow).
Patients’ clinical courses: Of the 38 patients, 2 had local recurrence and 3 had metachronous recurrence. Of the 2 cases with local recurrence, 1 received additional surgery and the other received additional ESD. Of the 3 cases with metachronous recurrence, 1 received additional surgery and the others received additional ESD. The patients’ overall survival curve is shown in Figure 2. The survival rate at 5 years was 59.5%. During the observation period after ESD, no patient died of GTC. However, 10 patients died of other diseases, including pneumonia, which was the most common and occurred in 4 patients, heart failure and hepatocellular carcinoma in 1 patient each, and other unknown diseases.
Figure 2. Overall survival curve after endoscopic submucosal dissection for gastric tube cancer. The survival rate at 5 years was 59.5%. ESD: Endoscopic submucosal dissection.

Comparison of clinical outcomes: A comparison of the patients’ background and clinical outcomes between OUH and other hospitals is shown in Table 3. In terms of the patients’ backgrounds, the posterior mediastinal route was used as a reconstruction route in more cases at other hospitals.
Treatment results were generally similar in both groups; however, procedure time was significantly longer at other hospitals.
Table 3. Comparison of clinical outcomes between Okayama University Hospital and other hospitals. OUH: Okayama University Hospital; ESD: Endoscopic submucosal dissection; M: Intramucosal; SM: Submucosal.
Since there were differences in procedure time between institutions, we divided patients into two groups, a short procedure time group (< 90 min) and a long procedure time group (≥ 90 min), and examined the factors affecting the procedure time. In univariate analysis (Table 4), the treatment institution and tumor size showed significant differences between the two groups. However, in multivariate analysis (Table 5), tumor size was the only factor associated with a long procedure time.
Table 4. Comparison of the short (< 90 min) and long (≥ 90 min) procedure time groups. U: Upper; M: Medium; L: Lower; M: Intramucosal; SM: Submucosal.
Table 5. Multivariate analysis of risk factors for a long procedure time of endoscopic submucosal dissection for gastric tube cancer. U: Upper; M: Medium; CI: Confidence interval.
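As a concrete illustration of the dichotomised procedure-time analysis above, the sketch below fits a logistic regression for long procedure time (≥ 90 min) and reports odds ratios with 95% confidence intervals. The column names (procedure_time, tumor_size_mm, institution) are hypothetical placeholders, and statsmodels stands in for JMP Pro; this is not the study's actual code.

```python
# Hypothetical sketch of the long-procedure-time analysis: dichotomise at 90 min and
# fit a logistic regression with candidate factors (tumour size, treating institution).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def long_procedure_time_model(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["long_time"] = (df["procedure_time"] >= 90).astype(int)   # >= 90 min = long
    df["ouh"] = (df["institution"] == "OUH").astype(int)

    X = sm.add_constant(df[["tumor_size_mm", "ouh"]])            # candidate predictors
    fit = sm.Logit(df["long_time"], X).fit(disp=False)

    ci = fit.conf_int()                                          # 95% CI on the log-odds scale
    return pd.DataFrame({
        "OR": np.exp(fit.params),                                # odds ratios
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
        "p_value": fit.pvalues,
    })
```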
DISCUSSION: This study was the first multicenter study on ESD for GTC in the reconstructed gastric tube after esophagectomy, and it included the second largest number of patients. According to a systematic review of GTC after esophagectomy, there are two surgical options for the treatment of GTC: PGTR or TGTR plus lymphadenectomy with colon or jejunal reconstruction[21]. However, surgical treatment for GTC is highly invasive and carries a certain degree of risk. Sugiura et al[10] reported that 5 of 7 cases of TGTR had surgical complications (leakage) and 2 died. In addition, 1 of 3 cases of PGTR had fatal complications. Akita et al[11] reported that 1 of 5 cases of TGTR died of postoperative complications. On the other hand, in previous studies on ESD for GTC, the proportions of R0 resection and curative resection were 87.5%-92% and 65%-85%, respectively, and complications were seen in 12.5%-18% of patients[13-18]. In the present study, the proportions of R0 resection and curative resection were 91.7% and 79%, respectively, and complications were seen in 10% of patients. Overall, the treatment results of ESD for GTC in this study were similar to those of previous studies. In previous studies on gastric ESD of the unresected stomach, the proportions of R0 resection, curative resection, and complications were 92%-94.9%, 80.4%-94.7%, and 5.9%-6.3%, respectively[22,23]. Furthermore, in gastric ESD of the remnant stomach after gastrectomy, the proportions of R0 resection, curative resection, and complications were 84.7%-85%, 70.9%-78%, and 2.8%-21.1%, respectively[24,25]. ESD for GTC was considered a minimally invasive, effective, and relatively safe treatment. There are some points to note regarding GTC. First, detection of early GTC requires long-term regular endoscopic surveillance after esophagectomy. GTC is often found long after esophagectomy; in some cases, GTC is detected after more than 10 years, and the risk of metachronous GTC is high[15-18]. Second, GTC may be difficult to diagnose. The reasons are as follows: food residue and bile reflux are often seen in the gastric tube, and the lumen of the gastric tube is long and narrow and can constrain endoscopic observation[13]. Therefore, it is necessary to pay attention to these points during endoscopy for patients with gastric tube reconstruction after esophagectomy. Third, when performing ESD for GTC, it is necessary to pay attention to complications specific to GTC. For example, in our study, a subcutaneous abscess formed after treatment in a case with perforation during ESD for GTC in the antethoracic reconstruction route. This case was cured by conservative treatment with antibiotics and percutaneous drainage. Moreover, Miyagi et al[26] reported that post-treatment precordial skin burns occurred in 5 of 8 patients with GTC in the antethoracic reconstruction route. In this report, all burns were diagnosed as first-degree burns based on the clinical classification of burn depth, developed on postoperative day 1-2, and took 4-7 d to heal. In this study, since approximately half of the patients were treated at OUH, we defined it as a high-volume center and compared clinical outcomes with those of the other institutions. As a result, there were no significant differences in the clinical outcomes of ESD between institutions. In addition, lesion size was the only factor related to a long procedure time in multivariate analysis. We believe these results were attributable to the fact that all of the participating institutions specialized in gastrointestinal diseases and had experience with more than 500 cases of ESD for EGC. Moreover, ESD for GTC may have been performed by leading specialists given the relative rarity of GTC. For these reasons, ESD for GTC seems safe if performed by specialists with sufficient ESD experience. Previously, a considerable number of patients developed complications or died of other diseases during the clinical course after esophagectomy[27,28]. However, due to the widespread use of minimally invasive esophagectomy, such as thoracoscopic and laparoscopic surgery, the incidence of postoperative complications, including respiratory complications, has decreased and the general condition of patients after esophagectomy has improved in recent years[29-32]. With continued improvements in the prognosis of esophageal cancer, the number of cases of GTC after esophagectomy will likely increase, and the demand for ESD for GTC is expected to increase further. There were several limitations to this study. First, as this was a retrospective study, the ESD indications and the devices used for treatment were not standardized. However, treatment was performed according to typical standards. Second, since data on Helicobacter pylori infection status were missing in some patients, the association between GTC and Helicobacter pylori could not be evaluated. Third, as some patients were observed for only a short time, the assessment of long-term prognosis after ESD for GTC was insufficient. Further follow-up studies are needed.

CONCLUSION: In conclusion, ESD for GTC after esophagectomy is a safe and effective treatment that can be performed without significant variability in treatment results at any specialized institution where standard gastric ESD can be performed with sufficient expertise. Further accumulation and follow-up of cases of GTC are necessary in the future.
Background: Recent improvements in the prognosis of patients with esophageal cancer have led to the increased occurrence of gastric tube cancer (GTC) in the reconstructed gastric tube. However, there are few reports on the treatment results of endoscopic submucosal dissection (ESD) for GTC. Methods: We retrospectively investigated 48 GTC lesions in 38 consecutive patients with GTC in the reconstructed gastric tube after esophagectomy who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group. The clinical indications of ESD for early gastric cancer were similarly applied for GTC after esophagectomy. ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines. Patient characteristics, treatment results, clinical course, and treatment outcomes were analyzed. Results: The median age of patients was 71.5 years (range, 57-84 years), and there were 34 men and 4 women. The median observation period after ESD was 884 d (range, 8-4040 d). The median procedure time was 81 min (range, 29-334 min), the en bloc resection rate was 91.7% (44/48), and the curative resection rate was 79% (38/48). Complications during ESD were seen in 4% (2/48) of cases, and those after ESD were seen in 10% (5/48) of cases. The survival rate at 5 years was 59.5%. During the observation period after ESD, 10 patients died of other diseases. Although there were differences in the procedure time between institutions, a multivariate analysis showed that tumor size was the only factor associated with prolonged procedure time. Conclusions: ESD for GTC after esophagectomy was shown to be safe and effective.
INTRODUCTION: Recently, the survival of patients with esophageal cancer after esophagectomy has improved[1-5]. However, the risk of a subsequent occurrence of primary cancer is high in these patients. The most frequent cancer that overlaps with esophageal cancer is head and neck cancer, while the second most common is gastric cancer, including gastric tube cancer (GTC)[6-9]. Therefore, the improved prognosis of esophageal cancer patients has led to an increase in the occurrence of GTC in the reconstructed gastric tube. For the treatment of GTC after esophagectomy, total gastric tube resection (TGTR) or partial gastric tube resection (PGTR) has been proposed. However, surgical resection for GTC, being a secondary operation following esophagectomy, may lead to high mortality and morbidity[10,11]. On the other hand, in recent years, endoscopic therapy for early gastric cancer (EGC) has developed and become widespread[12]. Endoscopic submucosal dissection (ESD) enables the treatment of large lesions with a higher en bloc resection rate than can be achieved with conventional endoscopic mucosal resection. In addition, ESD is less invasive than surgery. For this reason, ESD has become widely used as a standard treatment for EGC, and ESD is often performed for GTC. However, ESD for GTC after esophagectomy is a technically difficult procedure compared with that for an unresected stomach because of the limited working space, unusual fluid-pooling area, food residue, bile reflux, fibrosis, and staples under the suture line[13]. Therefore, a high degree of skill is required for ESD of GTC. There are few reports about ESD for GTC after esophagectomy, and most are case reports and case series of a small number of patients[13-17]. A study by Nonaka et al[18] reported the effectiveness and safety of ESD for GTC in a high-volume national center, which had the largest number of cases but was nonetheless a single-center study. Therefore, the aim of this study was to evaluate the efficacy and safety of ESD for GTC after esophagectomy in a multicenter context. CONCLUSION: As some patients were observed for only a short time, the assessment of long-term prognosis after ESD for GTC was insufficient. Further accumulation and follow-up of cases of GTC are necessary in the future.
Background: Recent improvements in the prognosis of patients with esophageal cancer have led to the increased occurrence of gastric tube cancer (GTC) in the reconstructed gastric tube. However, there are few reports on the treatment results of endoscopic submucosal dissection (ESD) for GTC. Methods: We retrospectively investigated 48 GTC lesions in 38 consecutive patients with GTC in the reconstructed gastric tube after esophagectomy who had undergone ESD between January 2005 and December 2019 at 8 institutions participating in the Okayama Gut Study group. The clinical indications of ESD for early gastric cancer were similarly applied for GTC after esophagectomy. ESD specimens were evaluated in 2-mm slices according to the Japanese Classification of Gastric Carcinoma with curability assessments divided into curative and non-curative resection based on the Gastric Cancer Treatment Guidelines. Patient characteristics, treatment results, clinical course, and treatment outcomes were analyzed. Results: The median age of patients was 71.5 years (range, 57-84 years), and there were 34 men and 4 women. The median observation period after ESD was 884 d (range, 8-4040 d). The median procedure time was 81 min (range, 29-334 min), the en bloc resection rate was 91.7% (44/48), and the curative resection rate was 79% (38/48). Complications during ESD were seen in 4% (2/48) of cases, and those after ESD were seen in 10% (5/48) of cases. The survival rate at 5 years was 59.5%. During the observation period after ESD, 10 patients died of other diseases. Although there were differences in the procedure time between institutions, a multivariate analysis showed that tumor size was the only factor associated with prolonged procedure time. Conclusions: ESD for GTC after esophagectomy was shown to be safe and effective.
7,384
349
[ 391, 190, 257, 136, 115, 2564, 235, 313, 297, 155, 266, 917, 55 ]
14
[ "esd", "patients", "gtc", "gastric", "submucosal", "treatment", "endoscopic", "cancer", "lesions", "resection" ]
[ "gtc esophagectomy total", "gtc esophagectomy pathological", "review gtc esophagectomy", "gastric tube esophagectomy", "gtc esophagectomy gastric" ]
null
[CONTENT] Endoscopic submucosal dissection | Gastric tube | Gastric cancer | Esophagectomy | Multicenter study | Retrospective study [SUMMARY]
[CONTENT] Endoscopic submucosal dissection | Gastric tube | Gastric cancer | Esophagectomy | Multicenter study | Retrospective study [SUMMARY]
null
[CONTENT] Endoscopic submucosal dissection | Gastric tube | Gastric cancer | Esophagectomy | Multicenter study | Retrospective study [SUMMARY]
[CONTENT] Endoscopic submucosal dissection | Gastric tube | Gastric cancer | Esophagectomy | Multicenter study | Retrospective study [SUMMARY]
[CONTENT] Endoscopic submucosal dissection | Gastric tube | Gastric cancer | Esophagectomy | Multicenter study | Retrospective study [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Dissection | Endoscopic Mucosal Resection | Female | Gastric Mucosa | Humans | Male | Middle Aged | Retrospective Studies | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Dissection | Endoscopic Mucosal Resection | Female | Gastric Mucosa | Humans | Male | Middle Aged | Retrospective Studies | Stomach Neoplasms | Treatment Outcome [SUMMARY]
null
[CONTENT] Aged | Aged, 80 and over | Dissection | Endoscopic Mucosal Resection | Female | Gastric Mucosa | Humans | Male | Middle Aged | Retrospective Studies | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Dissection | Endoscopic Mucosal Resection | Female | Gastric Mucosa | Humans | Male | Middle Aged | Retrospective Studies | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Dissection | Endoscopic Mucosal Resection | Female | Gastric Mucosa | Humans | Male | Middle Aged | Retrospective Studies | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] gtc esophagectomy total | gtc esophagectomy pathological | review gtc esophagectomy | gastric tube esophagectomy | gtc esophagectomy gastric [SUMMARY]
[CONTENT] gtc esophagectomy total | gtc esophagectomy pathological | review gtc esophagectomy | gastric tube esophagectomy | gtc esophagectomy gastric [SUMMARY]
null
[CONTENT] gtc esophagectomy total | gtc esophagectomy pathological | review gtc esophagectomy | gastric tube esophagectomy | gtc esophagectomy gastric [SUMMARY]
[CONTENT] gtc esophagectomy total | gtc esophagectomy pathological | review gtc esophagectomy | gastric tube esophagectomy | gtc esophagectomy gastric [SUMMARY]
[CONTENT] gtc esophagectomy total | gtc esophagectomy pathological | review gtc esophagectomy | gastric tube esophagectomy | gtc esophagectomy gastric [SUMMARY]
[CONTENT] esd | patients | gtc | gastric | submucosal | treatment | endoscopic | cancer | lesions | resection [SUMMARY]
[CONTENT] esd | patients | gtc | gastric | submucosal | treatment | endoscopic | cancer | lesions | resection [SUMMARY]
null
[CONTENT] esd | patients | gtc | gastric | submucosal | treatment | endoscopic | cancer | lesions | resection [SUMMARY]
[CONTENT] esd | patients | gtc | gastric | submucosal | treatment | endoscopic | cancer | lesions | resection [SUMMARY]
[CONTENT] esd | patients | gtc | gastric | submucosal | treatment | endoscopic | cancer | lesions | resection [SUMMARY]
[CONTENT] gtc | cancer | esophagectomy | esd | resection | high | esd gtc | gastric | gtc esophagectomy | study [SUMMARY]
[CONTENT] knife | knife knife | tokyo japan | japan | tokyo | curative | resection | study | ecura | clinical [SUMMARY]
null
[CONTENT] significant variability treatment | accumulation follow cases gtc | expertise accumulation follow cases | gtc esophagectomy safe effective | results specialized institution standard | results specialized institution | results specialized | accumulation follow | accumulation follow cases | treatment performed significant [SUMMARY]
[CONTENT] esd | patients | gtc | knife | gastric | resection | lesions | submucosal | treatment | cancer [SUMMARY]
[CONTENT] esd | patients | gtc | knife | gastric | resection | lesions | submucosal | treatment | cancer [SUMMARY]
[CONTENT] GTC ||| ESD | GTC [SUMMARY]
[CONTENT] 48 | GTC | 38 | GTC | ESD | between January 2005 | December 2019 | 8 ||| ESD | GTC ||| ESD | 2-mm | the Japanese Classification of Gastric Carcinoma | the Gastric Cancer Treatment Guidelines ||| [SUMMARY]
null
[CONTENT] ESD | GTC [SUMMARY]
[CONTENT] GTC ||| ESD | GTC ||| 48 | GTC | 38 | GTC | ESD | between January 2005 | December 2019 | 8 ||| ESD | GTC ||| ESD | 2-mm | the Japanese Classification of Gastric Carcinoma | the Gastric Cancer Treatment Guidelines ||| ||| ||| 71.5 years | 57-84years | 34 | 4 ||| ESD | 884 | 8-4040 ||| 81 | 29 | 91.7% | 44/48 | 79% | 38/48 ||| ESD | 4% | 2/48 | ESD | 10% | 5/48 ||| 5 years | 59.5% ||| ESD | 10 ||| ||| GTC [SUMMARY]
[CONTENT] GTC ||| ESD | GTC ||| 48 | GTC | 38 | GTC | ESD | between January 2005 | December 2019 | 8 ||| ESD | GTC ||| ESD | 2-mm | the Japanese Classification of Gastric Carcinoma | the Gastric Cancer Treatment Guidelines ||| ||| ||| 71.5 years | 57-84years | 34 | 4 ||| ESD | 884 | 8-4040 ||| 81 | 29 | 91.7% | 44/48 | 79% | 38/48 ||| ESD | 4% | 2/48 | ESD | 10% | 5/48 ||| 5 years | 59.5% ||| ESD | 10 ||| ||| GTC [SUMMARY]
OATP1B1 and tumour OATP1B3 modulate exposure, toxicity, and survival after irinotecan-based chemotherapy.
25611302
Treatment of advanced and metastatic colorectal cancer with irinotecan is hampered by severe toxicities. The active metabolite of irinotecan, SN-38, is a known substrate of drug-metabolising enzymes, including UGT1A1, as well as OATP and ABC drug transporters.
BACKGROUND
Blood samples (n=127) and tumour tissue (n=30) were obtained from advanced cancer patients treated with irinotecan-based regimens for pharmacogenetic and drug level analysis and transporter expression. Clinical variables, toxicity, and outcomes data were collected.
METHODS
SLCO1B1 521C was significantly associated with increased SN-38 exposure (P<0.001), which was additive with UGT1A1*28. ABCC5 (rs562) carriers had significantly reduced SN-38 glucuronide and APC metabolite levels. Reduced risk of neutropenia and diarrhoea was associated with ABCC2-24C/T (odds ratio (OR)=0.22, 0.06-0.85) and CES1 (rs2244613; OR=0.29, 0.09-0.89), respectively. Progression-free survival (PFS) was significantly longer in SLCO1B1 388G/G patients and reduced in ABCC2-24T/T and UGT1A1*28 carriers. Notably, higher OATP1B3 tumour expression was associated with reduced PFS.
RESULTS
Clarifying the association of host genetic variation in OATP and ABC transporters to SN-38 exposure, toxicity and PFS provides rationale for personalising irinotecan-based chemotherapy. Our findings suggest that OATP polymorphisms and expression in tumour tissue may serve as important new biomarkers.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Antineoplastic Agents, Phytogenic", "Camptothecin", "Colorectal Neoplasms", "Disease-Free Survival", "Female", "Humans", "Irinotecan", "Liver-Specific Organic Anion Transporter 1", "Male", "Middle Aged", "Multidrug Resistance-Associated Protein 2", "Multidrug Resistance-Associated Proteins", "Organic Anion Transporters", "Organic Anion Transporters, Sodium-Independent", "Polymorphism, Single Nucleotide", "Solute Carrier Organic Anion Transporter Family Member 1B3" ]
4453959
null
null
null
null
Results
Study population Patient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2). Patient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2). OATP1B1 and ABCC5 are important determinants of SN-38 and SN-38G levels The primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed) adjusting for age at time of treatment initiation, sex and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P <0.05). Several genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism, respectively. Carriers of a rare SNP in CES1 (rs71647871, allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3). Interestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P <0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P <0.05). FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P <0.05, respectively). The model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). 
Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01). The primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed) adjusting for age at time of treatment initiation, sex and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P <0.05). Several genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism, respectively. Carriers of a rare SNP in CES1 (rs71647871, allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3). Interestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P <0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P <0.05). FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P <0.05, respectively). The model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01). Drug transporters predict CPT-11-related toxicities CPT-11-related adverse reactions and frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine association of genotypes to adverse reactions comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). 
UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36). Non-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models. CPT-11-related adverse reactions and frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine association of genotypes to adverse reactions comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36). Non-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models. Biomarkers of PFS Approximately 73% of patients were considered responders having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients with a median PFS time of 10.5 months (range, 0.2–43 months; Table 2). Cox-regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). 
Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis. Approximately 73% of patients were considered responders having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients with a median PFS time of 10.5 months (range, 0.2–43 months; Table 2). Cox-regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis. Tumour OATP1B3 expression suggests poorer clinical outcomes The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy. The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. 
Tumour OATP1B3 expression suggests poorer clinical outcomes

The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein, calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy.
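The OATP1B3 comparisons above correspond to the non-parametric tests named in the Statistics subsection: a Wilcoxon signed-rank test for paired tumour versus normal staining scores, and a Mann–Whitney U test for PFS in low (score 0–1) versus high (score 2–3) expression groups. The sketch below shows both calls in R; the numeric values are hypothetical placeholders, not the study measurements.

```r
# Minimal sketch with hypothetical example values; not the study data.
score_normal <- c(0, 1, 0, 1, 1, 0, 2, 1)    # OATP1B3 score in paired normal tissue
score_tumour <- c(2, 2, 1, 3, 2, 1, 3, 2)    # OATP1B3 score in matched tumour tissue

# Paired comparison of staining scores (Wilcoxon signed-rank test)
wilcox.test(score_tumour, score_normal, paired = TRUE)

# PFS (months) by tumour OATP1B3 expression group (Mann-Whitney U test)
pfs_low  <- c(14.2, 12.8, 17.5, 11.0)        # score 0-1
pfs_high <- c(6.4, 8.1, 9.7, 5.9)            # score 2-3
wilcox.test(pfs_high, pfs_low)               # unpaired call = Mann-Whitney U
```

Rank-based tests are a reasonable choice here because the staining score is ordinal and the paired sample is small.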
[ "Study population", "Chart review", "Genotyping", "Drug levels", "Immunohistochemistry", "Statistics", "Study population", "OATP1B1 and ABCC5 are important determinants of SN-38 and SN-38G levels", "Drug transporters predict CPT-11-related toxicities", "Biomarkers of PFS", "Tumour OATP1B3 expression suggests poorer clinical outcomes" ]
[ "Metastatic colorectal cancer and advanced or metastatic pancreatic cancer patients (n=127) being treated with CPT-11-based chemotherapy regimens were prospectively recruited between January 2010 and November 2012 from the London Regional Cancer Program, London Health Sciences Centre, London, Ontario, Canada. The majority of patients were prescribed CPT-11 at 180 mg m−2 biweekly in combination with 5-fluorouracil and leucovorin (FOLFIRI regimen) with or without bevacuzimab. Patients were included if they were aged 18 years or above with an ECOG (Eastern Cooperative Oncology Group) performance status ⩽2. Exclusion criteria included: >35 μmol l−1 total bilirubin, >3X upper normal limit AST or ALT without liver metastases or >5X with liver metastases, known hypersensitivity to CPT-11, known history of Gilbert's syndrome, and concurrent use of ketoconazole. All participants provided written informed consent. The study was approved by the Research Ethics Board at Western University.", "Paper and electronic record chart review for each consented patient was conducted by a single reviewer. Age recorded was the age at initiation of CPT-11-based chemotherapy. A cycle of CPT-11 was defined as a single administration of CPT-11 alone or in combination, irrespective of the chemotherapy regimen used. Treatment-related toxicities were recorded for the cycle where the blood sample was obtained in addition to any toxicities occurring throughout the duration of treatment with CPT-11. The blood sample cycle was considered to extend to the subsequent measurement of basic laboratory values and/or clinical assessment. Toxicities during this period were not considered to be associated with the particular treatment if they were documented more than 3 weeks after sample blood drawn. NCI-CTCAE version 3.0 (Bethesda, MD, USA) was used to grade toxicities. Toxicity grade was determined using subjective measures (described in clinical notes) when exact grading was not documented.\nResponders were defined as having a stable or reduced tumour size on the first CT scan following CPT-11-based chemotherapy initiation. The interval between CT scans was at the discretion of the treating oncologist. Progression was defined as the date the CT or MRI reported an increased tumour size. Progression-free survival was considered to be the time from initiation of CPT-11-based chemotherapy to the date of progression, death, last contact, or censor date (13 August 2013), whichever occurred first.", "DNA was extracted from whole blood using the Gentra Purgene Blood Kit (Qiagen, Toronto, Ontario, Canada). The following TaqMan allelic discrimination assays (Applied Biosystems, Carlsbad, CA, USA) were used for genotyping: ABCB1 (c.3435C>T, rs1045642), ABCG2 (c.421C>A, rs2231142; c.34G>A, rs2231137), ABCC2 (c.-24C>T, rs717620; c.1249G>A, rs2273697), ABCC5 (T>C, rs562), SLCO1B1*1b (c.388A>G, rs2306283), SLCO1B1*5 (c.521T>C, rs4149056), SLCO1B3 (g.699G>A, rs7311358), UGT1A1*28 (TA(6/7), rs8175347), CES1 (g.14506G>A, rs71647871; g.27467A>C, rs2244613), CYP3A4*22 (intron 6 C>T, rs35599367), and CYP3A5*3 (g.6986A>G, rs776746). Hardy–Weinburg equilibrium was assessed for all genotypes using the χ2 goodness-of-fit test.", "Blood samples were obtained by venipuncture of the opposite arm immediately following the end of the CPT-11 infusion (90 min) during any cycle of CPT-11-based chemotherapy. Plasma was collected and stored at −80 °C until analysis. 
Plasma concentrations of CPT-11 and metabolites SN-38, SN-38G, and APC (7-ethyl-10-[4-N-(5-aminopentanoic acid)-1-piperidino] carbonyloxycamptothecin) were measured by liquid chromatography-tandem mass spectrometry. Standards were obtained from Sigma-Aldrich (Oakville, Ontario, Canada; CPT-11, SN-38) and Toronto Research Chemicals (Toronto, Ontario, Canada; SN-38-glucuronide, APC). Plasma samples (100 μl) were precipitated upon addition of 3 volumes of acetonitrile and 0.1% formic acid (FA) spiked with 15 μl internal standard (camptothecin, 500 ng ml−1, Toronto Research Chemicals). Samples were vortexed, centrifuged, and diluted with H2O (0.1% FA). Analytes were injected (30 μl) into the liquid chromatograph (Agilent 1200) and separated on a reverse-phase column (Hypersil Gold, 50 × 5 mm, 5 μM particle size) over 6 min using gradient elution with H2O (0.1% FA) and acetonitrile (0.1% FA) (10–90%). Standard curves and quality controls (co-efficient of variation (%), high (13.5), med (2.6), low (3.7)) were prepared in drug-free plasma. The mass spectrometer (Thermo TSQ Vantage, Burlington, ON, Canada) with heated electrospray ionisation source was set in positive mode for detection of CPT-11, SN-38, SN-38G, APC, and camptothecin with transitions 587→124 m/z, 393→349.3 m/z, 569→393.3 m/z, 691→227 m/z, 349→305 m/z, respectively.", "Archived normal and tumour tissue biopsy slides were obtained from a subset of study participants (n=30) following approval by the Tissue and Archive Committee (Department of Pathology, London Health Sciences Centre). Antigen retrieval was performed with citrate buffer and slides were subsequently incubated with pre-immune serum or anti-OATP1B3 polyclonal antibody (1 : 200) followed by an avidin-biotin immunoperoxidase assay and developed using an AEC (3-Amino-9-ethylcarbazole) staining kit (Sigma-Aldrich) using a modified protocol (Lee et al, 2008). Nuclei were counterstained using Mayer's haematoxylin (Sigma-Aldrich). Scoring for OATP1B3 expression in normal, normal adjacent, and tumour tissue was performed independently by one pathologist (JP). Staining intensity for OATP1B3 was defined and evaluated using the following semi-quantitative previously published method: (0) no staining, (1) weakly positive, (2) moderately positive, and (3) strongly positive (Lee et al, 2008).", "The primary objective was to determine covariates associated with interindividual variability of CPT-11 and metabolite plasma concentration. All statistical analysis was performed in GraphPad Prism and the statistical software R. One-way analysis of variance with Bonferroni correction and Student's t-tests was used to compare drug levels between genotypic groups. Multiple linear regression analysis was performed to determine significant covariates on the interindividual variability of dose-normalised CPT-11 and metabolite plasma concentrations (natural log-transformed). Covariates considered included: age, sex, treatment regimen, ABCB1, ABCG2, ABCC2, ABCC5, SLCO1B1, SLCO1B3, UGT1A1, CES1, CYP3A4, and CYP3A5 genotype. Covariates were assessed individually and were considered for the final model at a significance level of P<0.2. Covariates meeting these criteria were entered into a multiple linear regression model adjusting for age, sex, and treatment regimen and remained in the final model if P<0.1.\nSecondary outcomes included assessing covariates associated with toxicity and PFS. 
Multinomial logistic regression analysis was used to determine association of genotype with toxicity events after adjustment for sex, age, and treatment regimen. Toxicity categories used in the regression analysis were: neutropenia (no event vs low (grade 1 or 2) vs high (grade 3 or 4)) and diarrhoea, nausea/vomiting and oral mucositis (no event vs low (grade 1) vs high (grade 2 or 3)). Univariate analysis was performed for each covariate and only significant genotypes were included in the final model with the exception of adjustment covariates as indicated above. Logistic regression analysis was also performed to determine covariates associated with neutropenia (low (grade 0, 1 or 2) vs high (grade 3 or 4)). Cox regression analysis was used to determine association of covariates with PFS in patients treated with BEV-FOLFIRI or FOLFIRI regimens (excluding patients with pancreatic cancer). Univariate analysis was performed and covariates with a cut value of P<0.2 were included in the final multivariate analysis with the exception of adjusting covariates, age at enrolment, sex and treatment regimen. Kruskal–Wallis with Dunn's multiple comparative test and Wilcoxon signed rank test was used to compare OATP1B3 pathology scores. Mann–Whitney U-test was used to compare OATP1B3 pathology scores and PFS in patients treated with CPT-11-based chemotherapy regimens scored for OATP1B3 tumour expression (0–1 vs 2–3).", "Patient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2).", "The primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed) adjusting for age at time of treatment initiation, sex and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P <0.05).\nSeveral genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). 
Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism, respectively. Carriers of a rare SNP in CES1 (rs71647871, allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3).\nInterestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P <0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P <0.05). FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P <0.05, respectively).\nThe model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01).", "CPT-11-related adverse reactions and frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine association of genotypes to adverse reactions comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36).\nNon-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models.", "Approximately 73% of patients were considered responders having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients with a median PFS time of 10.5 months (range, 0.2–43 months; Table 2). Cox-regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). 
Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis.", "The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy." ]
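As described in the Genotyping subsection above, each polymorphism was tested for Hardy–Weinberg equilibrium with a χ2 goodness-of-fit test. The sketch below shows that calculation in R for a single biallelic SNP; the genotype counts are hypothetical.

```r
# Minimal sketch with hypothetical genotype counts for one biallelic SNP; not the study data.
obs <- c(AA = 70, AB = 45, BB = 12)                     # observed genotype counts
n   <- sum(obs)
p   <- unname((2 * obs["AA"] + obs["AB"]) / (2 * n))    # estimated frequency of the A allele
q   <- 1 - p

expected <- n * c(p^2, 2 * p * q, q^2)                  # expected counts under Hardy-Weinberg equilibrium
chisq    <- sum((obs - expected)^2 / expected)
p_value  <- pchisq(chisq, df = 1, lower.tail = FALSE)   # 1 df: one allele frequency is estimated
c(chi_squared = chisq, p_value = p_value)
```

One degree of freedom is used (three genotype classes, minus one, minus one estimated allele frequency), which is why chisq.test with default settings is not applied directly.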
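The Drug levels subsection above describes standard curves and quality controls prepared in drug-free plasma for the LC-MS/MS assay. A typical quantitation step fits a calibration line of analyte-to-internal-standard peak-area ratio against nominal concentration and back-calculates unknowns from it; the sketch below illustrates that step with hypothetical values (an unweighted fit is shown for simplicity, whereas a validated assay would apply the laboratory's own weighting and acceptance criteria, which are not specified here).

```r
# Minimal sketch of a calibration curve with hypothetical values; not the validated assay.
cal <- data.frame(
  conc  = c(1, 5, 10, 50, 100, 500),                    # nominal concentration (ng/ml)
  ratio = c(0.021, 0.098, 0.205, 1.02, 2.01, 9.95)      # analyte / internal-standard peak-area ratio
)

fit <- lm(ratio ~ conc, data = cal)                     # unweighted linear fit, for illustration only
summary(fit)$r.squared                                  # linearity check

# Back-calculate concentrations of unknown samples from their peak-area ratios
unknown_ratio <- c(0.35, 1.60)
b <- coef(fit)
(unknown_conc <- (unknown_ratio - b["(Intercept)"]) / b["conc"])
```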
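The Statistics subsection above describes multiple linear regression on natural-log-transformed, dose-normalised concentrations, with candidate covariates screened individually at P<0.2 and retained in the final adjusted model at P<0.1. The sketch below outlines that two-stage screen; the simulated pk data frame, its columns, and the candidate genotype list are hypothetical placeholders rather than the study dataset.

```r
# Minimal sketch of the two-stage covariate screen, not the authors' original script.
# Simulated data stand in for the cohort; column names are hypothetical.
set.seed(3)
n <- 120
pk <- data.frame(
  dose            = 180 * rnorm(n, 1.8, 0.2),           # administered dose (mg), hypothetical
  SLCO1B1_521     = rbinom(n, 1, 0.3),                  # 1 = 521C carrier
  UGT1A1_28       = rbinom(n, 1, 0.4),
  ABCC5_rs562     = rbinom(n, 1, 0.5),
  CES1_rs71647871 = rbinom(n, 1, 0.05),
  age             = round(rnorm(n, 62, 10)),
  sex             = factor(sample(c("F", "M"), n, TRUE)),
  regimen         = factor(sample(c("FOLFIRI", "BEV-FOLFIRI"), n, TRUE))
)
pk$sn38 <- exp(rnorm(n, 2, 0.4) + 0.3 * pk$SLCO1B1_521 + 0.25 * pk$UGT1A1_28)  # ng/ml
pk$ln_sn38_dn <- log(pk$sn38 / pk$dose)                 # natural-log, dose-normalised exposure

candidates <- c("SLCO1B1_521", "UGT1A1_28", "ABCC5_rs562", "CES1_rs71647871")

# Stage 1: screen each genotype individually; carry forward if P < 0.2
screen_p <- sapply(candidates, function(g)
  anova(lm(as.formula(paste("ln_sn38_dn ~", g)), data = pk))[g, "Pr(>F)"])
keep <- names(screen_p)[screen_p < 0.2]

# Stage 2: multivariable model adjusted for age, sex, and regimen; retain genotypes at P < 0.1
final_formula <- reformulate(c(keep, "age", "sex", "regimen"), response = "ln_sn38_dn")
summary(lm(final_formula, data = pk))
```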
[ "Materials and Methods", "Study population", "Chart review", "Genotyping", "Drug levels", "Immunohistochemistry", "Statistics", "Results", "Study population", "OATP1B1 and ABCC5 are important determinants of SN-38 and SN-38G levels", "Drug transporters predict CPT-11-related toxicities", "Biomarkers of PFS", "Tumour OATP1B3 expression suggests poorer clinical outcomes", "Discussion" ]
[ " Study population Metastatic colorectal cancer and advanced or metastatic pancreatic cancer patients (n=127) being treated with CPT-11-based chemotherapy regimens were prospectively recruited between January 2010 and November 2012 from the London Regional Cancer Program, London Health Sciences Centre, London, Ontario, Canada. The majority of patients were prescribed CPT-11 at 180 mg m−2 biweekly in combination with 5-fluorouracil and leucovorin (FOLFIRI regimen) with or without bevacuzimab. Patients were included if they were aged 18 years or above with an ECOG (Eastern Cooperative Oncology Group) performance status ⩽2. Exclusion criteria included: >35 μmol l−1 total bilirubin, >3X upper normal limit AST or ALT without liver metastases or >5X with liver metastases, known hypersensitivity to CPT-11, known history of Gilbert's syndrome, and concurrent use of ketoconazole. All participants provided written informed consent. The study was approved by the Research Ethics Board at Western University.\nMetastatic colorectal cancer and advanced or metastatic pancreatic cancer patients (n=127) being treated with CPT-11-based chemotherapy regimens were prospectively recruited between January 2010 and November 2012 from the London Regional Cancer Program, London Health Sciences Centre, London, Ontario, Canada. The majority of patients were prescribed CPT-11 at 180 mg m−2 biweekly in combination with 5-fluorouracil and leucovorin (FOLFIRI regimen) with or without bevacuzimab. Patients were included if they were aged 18 years or above with an ECOG (Eastern Cooperative Oncology Group) performance status ⩽2. Exclusion criteria included: >35 μmol l−1 total bilirubin, >3X upper normal limit AST or ALT without liver metastases or >5X with liver metastases, known hypersensitivity to CPT-11, known history of Gilbert's syndrome, and concurrent use of ketoconazole. All participants provided written informed consent. The study was approved by the Research Ethics Board at Western University.\n Chart review Paper and electronic record chart review for each consented patient was conducted by a single reviewer. Age recorded was the age at initiation of CPT-11-based chemotherapy. A cycle of CPT-11 was defined as a single administration of CPT-11 alone or in combination, irrespective of the chemotherapy regimen used. Treatment-related toxicities were recorded for the cycle where the blood sample was obtained in addition to any toxicities occurring throughout the duration of treatment with CPT-11. The blood sample cycle was considered to extend to the subsequent measurement of basic laboratory values and/or clinical assessment. Toxicities during this period were not considered to be associated with the particular treatment if they were documented more than 3 weeks after sample blood drawn. NCI-CTCAE version 3.0 (Bethesda, MD, USA) was used to grade toxicities. Toxicity grade was determined using subjective measures (described in clinical notes) when exact grading was not documented.\nResponders were defined as having a stable or reduced tumour size on the first CT scan following CPT-11-based chemotherapy initiation. The interval between CT scans was at the discretion of the treating oncologist. Progression was defined as the date the CT or MRI reported an increased tumour size. 
Progression-free survival was considered to be the time from initiation of CPT-11-based chemotherapy to the date of progression, death, last contact, or censor date (13 August 2013), whichever occurred first.\nPaper and electronic record chart review for each consented patient was conducted by a single reviewer. Age recorded was the age at initiation of CPT-11-based chemotherapy. A cycle of CPT-11 was defined as a single administration of CPT-11 alone or in combination, irrespective of the chemotherapy regimen used. Treatment-related toxicities were recorded for the cycle where the blood sample was obtained in addition to any toxicities occurring throughout the duration of treatment with CPT-11. The blood sample cycle was considered to extend to the subsequent measurement of basic laboratory values and/or clinical assessment. Toxicities during this period were not considered to be associated with the particular treatment if they were documented more than 3 weeks after sample blood drawn. NCI-CTCAE version 3.0 (Bethesda, MD, USA) was used to grade toxicities. Toxicity grade was determined using subjective measures (described in clinical notes) when exact grading was not documented.\nResponders were defined as having a stable or reduced tumour size on the first CT scan following CPT-11-based chemotherapy initiation. The interval between CT scans was at the discretion of the treating oncologist. Progression was defined as the date the CT or MRI reported an increased tumour size. Progression-free survival was considered to be the time from initiation of CPT-11-based chemotherapy to the date of progression, death, last contact, or censor date (13 August 2013), whichever occurred first.\n Genotyping DNA was extracted from whole blood using the Gentra Purgene Blood Kit (Qiagen, Toronto, Ontario, Canada). The following TaqMan allelic discrimination assays (Applied Biosystems, Carlsbad, CA, USA) were used for genotyping: ABCB1 (c.3435C>T, rs1045642), ABCG2 (c.421C>A, rs2231142; c.34G>A, rs2231137), ABCC2 (c.-24C>T, rs717620; c.1249G>A, rs2273697), ABCC5 (T>C, rs562), SLCO1B1*1b (c.388A>G, rs2306283), SLCO1B1*5 (c.521T>C, rs4149056), SLCO1B3 (g.699G>A, rs7311358), UGT1A1*28 (TA(6/7), rs8175347), CES1 (g.14506G>A, rs71647871; g.27467A>C, rs2244613), CYP3A4*22 (intron 6 C>T, rs35599367), and CYP3A5*3 (g.6986A>G, rs776746). Hardy–Weinburg equilibrium was assessed for all genotypes using the χ2 goodness-of-fit test.\nDNA was extracted from whole blood using the Gentra Purgene Blood Kit (Qiagen, Toronto, Ontario, Canada). The following TaqMan allelic discrimination assays (Applied Biosystems, Carlsbad, CA, USA) were used for genotyping: ABCB1 (c.3435C>T, rs1045642), ABCG2 (c.421C>A, rs2231142; c.34G>A, rs2231137), ABCC2 (c.-24C>T, rs717620; c.1249G>A, rs2273697), ABCC5 (T>C, rs562), SLCO1B1*1b (c.388A>G, rs2306283), SLCO1B1*5 (c.521T>C, rs4149056), SLCO1B3 (g.699G>A, rs7311358), UGT1A1*28 (TA(6/7), rs8175347), CES1 (g.14506G>A, rs71647871; g.27467A>C, rs2244613), CYP3A4*22 (intron 6 C>T, rs35599367), and CYP3A5*3 (g.6986A>G, rs776746). Hardy–Weinburg equilibrium was assessed for all genotypes using the χ2 goodness-of-fit test.\n Drug levels Blood samples were obtained by venipuncture of the opposite arm immediately following the end of the CPT-11 infusion (90 min) during any cycle of CPT-11-based chemotherapy. Plasma was collected and stored at −80 °C until analysis. 
Plasma concentrations of CPT-11 and metabolites SN-38, SN-38G, and APC (7-ethyl-10-[4-N-(5-aminopentanoic acid)-1-piperidino] carbonyloxycamptothecin) were measured by liquid chromatography-tandem mass spectrometry. Standards were obtained from Sigma-Aldrich (Oakville, Ontario, Canada; CPT-11, SN-38) and Toronto Research Chemicals (Toronto, Ontario, Canada; SN-38-glucuronide, APC). Plasma samples (100 μl) were precipitated upon addition of 3 volumes of acetonitrile and 0.1% formic acid (FA) spiked with 15 μl internal standard (camptothecin, 500 ng ml−1, Toronto Research Chemicals). Samples were vortexed, centrifuged, and diluted with H2O (0.1% FA). Analytes were injected (30 μl) into the liquid chromatograph (Agilent 1200) and separated on a reverse-phase column (Hypersil Gold, 50 × 5 mm, 5 μM particle size) over 6 min using gradient elution with H2O (0.1% FA) and acetonitrile (0.1% FA) (10–90%). Standard curves and quality controls (co-efficient of variation (%), high (13.5), med (2.6), low (3.7)) were prepared in drug-free plasma. The mass spectrometer (Thermo TSQ Vantage, Burlington, ON, Canada) with heated electrospray ionisation source was set in positive mode for detection of CPT-11, SN-38, SN-38G, APC, and camptothecin with transitions 587→124 m/z, 393→349.3 m/z, 569→393.3 m/z, 691→227 m/z, 349→305 m/z, respectively.\nBlood samples were obtained by venipuncture of the opposite arm immediately following the end of the CPT-11 infusion (90 min) during any cycle of CPT-11-based chemotherapy. Plasma was collected and stored at −80 °C until analysis. Plasma concentrations of CPT-11 and metabolites SN-38, SN-38G, and APC (7-ethyl-10-[4-N-(5-aminopentanoic acid)-1-piperidino] carbonyloxycamptothecin) were measured by liquid chromatography-tandem mass spectrometry. Standards were obtained from Sigma-Aldrich (Oakville, Ontario, Canada; CPT-11, SN-38) and Toronto Research Chemicals (Toronto, Ontario, Canada; SN-38-glucuronide, APC). Plasma samples (100 μl) were precipitated upon addition of 3 volumes of acetonitrile and 0.1% formic acid (FA) spiked with 15 μl internal standard (camptothecin, 500 ng ml−1, Toronto Research Chemicals). Samples were vortexed, centrifuged, and diluted with H2O (0.1% FA). Analytes were injected (30 μl) into the liquid chromatograph (Agilent 1200) and separated on a reverse-phase column (Hypersil Gold, 50 × 5 mm, 5 μM particle size) over 6 min using gradient elution with H2O (0.1% FA) and acetonitrile (0.1% FA) (10–90%). Standard curves and quality controls (co-efficient of variation (%), high (13.5), med (2.6), low (3.7)) were prepared in drug-free plasma. The mass spectrometer (Thermo TSQ Vantage, Burlington, ON, Canada) with heated electrospray ionisation source was set in positive mode for detection of CPT-11, SN-38, SN-38G, APC, and camptothecin with transitions 587→124 m/z, 393→349.3 m/z, 569→393.3 m/z, 691→227 m/z, 349→305 m/z, respectively.\n Immunohistochemistry Archived normal and tumour tissue biopsy slides were obtained from a subset of study participants (n=30) following approval by the Tissue and Archive Committee (Department of Pathology, London Health Sciences Centre). Antigen retrieval was performed with citrate buffer and slides were subsequently incubated with pre-immune serum or anti-OATP1B3 polyclonal antibody (1 : 200) followed by an avidin-biotin immunoperoxidase assay and developed using an AEC (3-Amino-9-ethylcarbazole) staining kit (Sigma-Aldrich) using a modified protocol (Lee et al, 2008). 
Nuclei were counterstained using Mayer's haematoxylin (Sigma-Aldrich). Scoring for OATP1B3 expression in normal, normal adjacent, and tumour tissue was performed independently by one pathologist (JP). Staining intensity for OATP1B3 was defined and evaluated using the following semi-quantitative previously published method: (0) no staining, (1) weakly positive, (2) moderately positive, and (3) strongly positive (Lee et al, 2008).\nArchived normal and tumour tissue biopsy slides were obtained from a subset of study participants (n=30) following approval by the Tissue and Archive Committee (Department of Pathology, London Health Sciences Centre). Antigen retrieval was performed with citrate buffer and slides were subsequently incubated with pre-immune serum or anti-OATP1B3 polyclonal antibody (1 : 200) followed by an avidin-biotin immunoperoxidase assay and developed using an AEC (3-Amino-9-ethylcarbazole) staining kit (Sigma-Aldrich) using a modified protocol (Lee et al, 2008). Nuclei were counterstained using Mayer's haematoxylin (Sigma-Aldrich). Scoring for OATP1B3 expression in normal, normal adjacent, and tumour tissue was performed independently by one pathologist (JP). Staining intensity for OATP1B3 was defined and evaluated using the following semi-quantitative previously published method: (0) no staining, (1) weakly positive, (2) moderately positive, and (3) strongly positive (Lee et al, 2008).\n Statistics The primary objective was to determine covariates associated with interindividual variability of CPT-11 and metabolite plasma concentration. All statistical analysis was performed in GraphPad Prism and the statistical software R. One-way analysis of variance with Bonferroni correction and Student's t-tests was used to compare drug levels between genotypic groups. Multiple linear regression analysis was performed to determine significant covariates on the interindividual variability of dose-normalised CPT-11 and metabolite plasma concentrations (natural log-transformed). Covariates considered included: age, sex, treatment regimen, ABCB1, ABCG2, ABCC2, ABCC5, SLCO1B1, SLCO1B3, UGT1A1, CES1, CYP3A4, and CYP3A5 genotype. Covariates were assessed individually and were considered for the final model at a significance level of P<0.2. Covariates meeting these criteria were entered into a multiple linear regression model adjusting for age, sex, and treatment regimen and remained in the final model if P<0.1.\nSecondary outcomes included assessing covariates associated with toxicity and PFS. Multinomial logistic regression analysis was used to determine association of genotype with toxicity events after adjustment for sex, age, and treatment regimen. Toxicity categories used in the regression analysis were: neutropenia (no event vs low (grade 1 or 2) vs high (grade 3 or 4)) and diarrhoea, nausea/vomiting and oral mucositis (no event vs low (grade 1) vs high (grade 2 or 3)). Univariate analysis was performed for each covariate and only significant genotypes were included in the final model with the exception of adjustment covariates as indicated above. Logistic regression analysis was also performed to determine covariates associated with neutropenia (low (grade 0, 1 or 2) vs high (grade 3 or 4)). Cox regression analysis was used to determine association of covariates with PFS in patients treated with BEV-FOLFIRI or FOLFIRI regimens (excluding patients with pancreatic cancer). 
Univariate analysis was performed and covariates with a cut value of P<0.2 were included in the final multivariate analysis with the exception of adjusting covariates, age at enrolment, sex and treatment regimen. Kruskal–Wallis with Dunn's multiple comparative test and Wilcoxon signed rank test was used to compare OATP1B3 pathology scores. Mann–Whitney U-test was used to compare OATP1B3 pathology scores and PFS in patients treated with CPT-11-based chemotherapy regimens scored for OATP1B3 tumour expression (0–1 vs 2–3).\nThe primary objective was to determine covariates associated with interindividual variability of CPT-11 and metabolite plasma concentration. All statistical analysis was performed in GraphPad Prism and the statistical software R. One-way analysis of variance with Bonferroni correction and Student's t-tests was used to compare drug levels between genotypic groups. Multiple linear regression analysis was performed to determine significant covariates on the interindividual variability of dose-normalised CPT-11 and metabolite plasma concentrations (natural log-transformed). Covariates considered included: age, sex, treatment regimen, ABCB1, ABCG2, ABCC2, ABCC5, SLCO1B1, SLCO1B3, UGT1A1, CES1, CYP3A4, and CYP3A5 genotype. Covariates were assessed individually and were considered for the final model at a significance level of P<0.2. Covariates meeting these criteria were entered into a multiple linear regression model adjusting for age, sex, and treatment regimen and remained in the final model if P<0.1.\nSecondary outcomes included assessing covariates associated with toxicity and PFS. Multinomial logistic regression analysis was used to determine association of genotype with toxicity events after adjustment for sex, age, and treatment regimen. Toxicity categories used in the regression analysis were: neutropenia (no event vs low (grade 1 or 2) vs high (grade 3 or 4)) and diarrhoea, nausea/vomiting and oral mucositis (no event vs low (grade 1) vs high (grade 2 or 3)). Univariate analysis was performed for each covariate and only significant genotypes were included in the final model with the exception of adjustment covariates as indicated above. Logistic regression analysis was also performed to determine covariates associated with neutropenia (low (grade 0, 1 or 2) vs high (grade 3 or 4)). Cox regression analysis was used to determine association of covariates with PFS in patients treated with BEV-FOLFIRI or FOLFIRI regimens (excluding patients with pancreatic cancer). Univariate analysis was performed and covariates with a cut value of P<0.2 were included in the final multivariate analysis with the exception of adjusting covariates, age at enrolment, sex and treatment regimen. Kruskal–Wallis with Dunn's multiple comparative test and Wilcoxon signed rank test was used to compare OATP1B3 pathology scores. Mann–Whitney U-test was used to compare OATP1B3 pathology scores and PFS in patients treated with CPT-11-based chemotherapy regimens scored for OATP1B3 tumour expression (0–1 vs 2–3).", "Metastatic colorectal cancer and advanced or metastatic pancreatic cancer patients (n=127) being treated with CPT-11-based chemotherapy regimens were prospectively recruited between January 2010 and November 2012 from the London Regional Cancer Program, London Health Sciences Centre, London, Ontario, Canada. The majority of patients were prescribed CPT-11 at 180 mg m−2 biweekly in combination with 5-fluorouracil and leucovorin (FOLFIRI regimen) with or without bevacuzimab. 
Patients were included if they were aged 18 years or above with an ECOG (Eastern Cooperative Oncology Group) performance status ⩽2. Exclusion criteria included: >35 μmol l−1 total bilirubin, >3X upper normal limit AST or ALT without liver metastases or >5X with liver metastases, known hypersensitivity to CPT-11, known history of Gilbert's syndrome, and concurrent use of ketoconazole. All participants provided written informed consent. The study was approved by the Research Ethics Board at Western University.", "Paper and electronic record chart review for each consented patient was conducted by a single reviewer. Age recorded was the age at initiation of CPT-11-based chemotherapy. A cycle of CPT-11 was defined as a single administration of CPT-11 alone or in combination, irrespective of the chemotherapy regimen used. Treatment-related toxicities were recorded for the cycle where the blood sample was obtained in addition to any toxicities occurring throughout the duration of treatment with CPT-11. The blood sample cycle was considered to extend to the subsequent measurement of basic laboratory values and/or clinical assessment. Toxicities during this period were not considered to be associated with the particular treatment if they were documented more than 3 weeks after sample blood drawn. NCI-CTCAE version 3.0 (Bethesda, MD, USA) was used to grade toxicities. Toxicity grade was determined using subjective measures (described in clinical notes) when exact grading was not documented.\nResponders were defined as having a stable or reduced tumour size on the first CT scan following CPT-11-based chemotherapy initiation. The interval between CT scans was at the discretion of the treating oncologist. Progression was defined as the date the CT or MRI reported an increased tumour size. Progression-free survival was considered to be the time from initiation of CPT-11-based chemotherapy to the date of progression, death, last contact, or censor date (13 August 2013), whichever occurred first.", "DNA was extracted from whole blood using the Gentra Purgene Blood Kit (Qiagen, Toronto, Ontario, Canada). The following TaqMan allelic discrimination assays (Applied Biosystems, Carlsbad, CA, USA) were used for genotyping: ABCB1 (c.3435C>T, rs1045642), ABCG2 (c.421C>A, rs2231142; c.34G>A, rs2231137), ABCC2 (c.-24C>T, rs717620; c.1249G>A, rs2273697), ABCC5 (T>C, rs562), SLCO1B1*1b (c.388A>G, rs2306283), SLCO1B1*5 (c.521T>C, rs4149056), SLCO1B3 (g.699G>A, rs7311358), UGT1A1*28 (TA(6/7), rs8175347), CES1 (g.14506G>A, rs71647871; g.27467A>C, rs2244613), CYP3A4*22 (intron 6 C>T, rs35599367), and CYP3A5*3 (g.6986A>G, rs776746). Hardy–Weinburg equilibrium was assessed for all genotypes using the χ2 goodness-of-fit test.", "Blood samples were obtained by venipuncture of the opposite arm immediately following the end of the CPT-11 infusion (90 min) during any cycle of CPT-11-based chemotherapy. Plasma was collected and stored at −80 °C until analysis. Plasma concentrations of CPT-11 and metabolites SN-38, SN-38G, and APC (7-ethyl-10-[4-N-(5-aminopentanoic acid)-1-piperidino] carbonyloxycamptothecin) were measured by liquid chromatography-tandem mass spectrometry. Standards were obtained from Sigma-Aldrich (Oakville, Ontario, Canada; CPT-11, SN-38) and Toronto Research Chemicals (Toronto, Ontario, Canada; SN-38-glucuronide, APC). 
Plasma samples (100 μl) were precipitated upon addition of 3 volumes of acetonitrile and 0.1% formic acid (FA) spiked with 15 μl internal standard (camptothecin, 500 ng ml−1, Toronto Research Chemicals). Samples were vortexed, centrifuged, and diluted with H2O (0.1% FA). Analytes were injected (30 μl) into the liquid chromatograph (Agilent 1200) and separated on a reverse-phase column (Hypersil Gold, 50 × 5 mm, 5 μM particle size) over 6 min using gradient elution with H2O (0.1% FA) and acetonitrile (0.1% FA) (10–90%). Standard curves and quality controls (co-efficient of variation (%), high (13.5), med (2.6), low (3.7)) were prepared in drug-free plasma. The mass spectrometer (Thermo TSQ Vantage, Burlington, ON, Canada) with heated electrospray ionisation source was set in positive mode for detection of CPT-11, SN-38, SN-38G, APC, and camptothecin with transitions 587→124 m/z, 393→349.3 m/z, 569→393.3 m/z, 691→227 m/z, 349→305 m/z, respectively.", "Archived normal and tumour tissue biopsy slides were obtained from a subset of study participants (n=30) following approval by the Tissue and Archive Committee (Department of Pathology, London Health Sciences Centre). Antigen retrieval was performed with citrate buffer and slides were subsequently incubated with pre-immune serum or anti-OATP1B3 polyclonal antibody (1 : 200) followed by an avidin-biotin immunoperoxidase assay and developed using an AEC (3-Amino-9-ethylcarbazole) staining kit (Sigma-Aldrich) using a modified protocol (Lee et al, 2008). Nuclei were counterstained using Mayer's haematoxylin (Sigma-Aldrich). Scoring for OATP1B3 expression in normal, normal adjacent, and tumour tissue was performed independently by one pathologist (JP). Staining intensity for OATP1B3 was defined and evaluated using the following semi-quantitative previously published method: (0) no staining, (1) weakly positive, (2) moderately positive, and (3) strongly positive (Lee et al, 2008).", "The primary objective was to determine covariates associated with interindividual variability of CPT-11 and metabolite plasma concentration. All statistical analysis was performed in GraphPad Prism and the statistical software R. One-way analysis of variance with Bonferroni correction and Student's t-tests was used to compare drug levels between genotypic groups. Multiple linear regression analysis was performed to determine significant covariates on the interindividual variability of dose-normalised CPT-11 and metabolite plasma concentrations (natural log-transformed). Covariates considered included: age, sex, treatment regimen, ABCB1, ABCG2, ABCC2, ABCC5, SLCO1B1, SLCO1B3, UGT1A1, CES1, CYP3A4, and CYP3A5 genotype. Covariates were assessed individually and were considered for the final model at a significance level of P<0.2. Covariates meeting these criteria were entered into a multiple linear regression model adjusting for age, sex, and treatment regimen and remained in the final model if P<0.1.\nSecondary outcomes included assessing covariates associated with toxicity and PFS. Multinomial logistic regression analysis was used to determine association of genotype with toxicity events after adjustment for sex, age, and treatment regimen. Toxicity categories used in the regression analysis were: neutropenia (no event vs low (grade 1 or 2) vs high (grade 3 or 4)) and diarrhoea, nausea/vomiting and oral mucositis (no event vs low (grade 1) vs high (grade 2 or 3)). 
Univariate analysis was performed for each covariate and only significant genotypes were included in the final model with the exception of adjustment covariates as indicated above. Logistic regression analysis was also performed to determine covariates associated with neutropenia (low (grade 0, 1 or 2) vs high (grade 3 or 4)). Cox regression analysis was used to determine association of covariates with PFS in patients treated with BEV-FOLFIRI or FOLFIRI regimens (excluding patients with pancreatic cancer). Univariate analysis was performed and covariates with a cut value of P<0.2 were included in the final multivariate analysis with the exception of adjusting covariates, age at enrolment, sex and treatment regimen. Kruskal–Wallis with Dunn's multiple comparative test and Wilcoxon signed rank test was used to compare OATP1B3 pathology scores. Mann–Whitney U-test was used to compare OATP1B3 pathology scores and PFS in patients treated with CPT-11-based chemotherapy regimens scored for OATP1B3 tumour expression (0–1 vs 2–3).", " Study population Patient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2).\nPatient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2).\n OATP1B1 and ABCC5 are important determinants of SN-38 and SN-38G levels The primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed) adjusting for age at time of treatment initiation, sex and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P <0.05).\nSeveral genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). 
Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism, respectively. Carriers of a rare SNP in CES1 (rs71647871, allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3).\nInterestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P <0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P <0.05). FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P <0.05, respectively).\nThe model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01).\nThe primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed) adjusting for age at time of treatment initiation, sex and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P <0.05).\nSeveral genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism, respectively. Carriers of a rare SNP in CES1 (rs71647871, allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3).\nInterestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P <0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P <0.05). 
FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P <0.05, respectively).\nThe model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01).\n Drug transporters predict CPT-11-related toxicities CPT-11-related adverse reactions and frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine association of genotypes to adverse reactions comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36).\nNon-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models.\nCPT-11-related adverse reactions and frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine association of genotypes to adverse reactions comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36).\nNon-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). 
Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models.\n Biomarkers of PFS Approximately 73% of patients were considered responders having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients with a median PFS time of 10.5 months (range, 0.2–43 months; Table 2). Cox-regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis.\nApproximately 73% of patients were considered responders having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients with a median PFS time of 10.5 months (range, 0.2–43 months; Table 2). Cox-regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis.\n Tumour OATP1B3 expression suggests poorer clinical outcomes The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). 
Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy.\nThe role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy.", "Patient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2).", "The primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed) adjusting for age at time of treatment initiation, sex and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P <0.05).\nSeveral genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism, respectively. 
Carriers of a rare SNP in CES1 (rs71647871, allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3).\nInterestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P <0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P <0.05). FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P <0.05, respectively).\nThe model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01).", "CPT-11-related adverse reactions and frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine association of genotypes to adverse reactions comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36).\nNon-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models.", "Approximately 73% of patients were considered responders having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients with a median PFS time of 10.5 months (range, 0.2–43 months; Table 2). Cox-regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). 
Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis.", "The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy.", "Pharmacogenetics of irinotecan therapy has been widely studied and UGT1A1*28 in particular, shown to be of potential clinical relevance especially for patients prescribed high-dose CPT-11 therapy (Hoskins et al, 2007). As severe toxicities are often observed in patients treated with combination regimens at lower CPT-11 doses, we examined pharmacogenetic determinants of CPT-11 and metabolite exposure in this setting.\nSN-38 plasma exposure was significantly increased SLCO1B1 521C (*5) allele carriers (P<0.001) (Figure 3) consistent with studies correlating SLCO1B1 521C to higher SN-38 and CPT-11 AUC in both colorectal cancer and non-small-lung cell cancer patients (Han et al, 2008; Innocenti et al, 2009). UGT1A1*28 carriers had significantly increased SN-38 levels compared with non-carriers (P<0.01) consistent with previous reports, which can result in greater toxicity risk (Iyer et al, 2002; Innocenti et al, 2009; Hu et al, 2010; Sai et al, 2010; Cai et al, 2013). Importantly, we demonstrate an additive effect of SLCO1B1 521C and UGT1A1*28 on SN-38 exposure as patients with an increasing number of combined variant alleles had higher plasma levels and a corresponding decrease in the SN-38G/SN-38 ratio. To our knowledge this is the first report of an additive effect in a primarily Caucasian (94%) population, which remained significant upon exclusion of non-Caucasian patients. A similar additive effect has been noted for SN-38 AUC with combined SLCO1B1*15 and UGT1A1*6 or *28 polymorphisms in Japanese patients, likely due to the presence of the SLCO1B1 521C allele (Sai et al, 2010). Together, these results suggest that heterozygous carriers of both SLCO1B1 521C and UGT1A1*28 may reach a toxicity risk comparable to homozygous UGT1A1*28 patients as evidenced by several case reports of SLCO521C and UGT1A1*28 carriers presenting with life-threatening toxicities (Sakaguchi et al, 2009; Takane et al, 2009). Here we observed an OR of 4.15 (95%CI=1.06–16.36) for grade 3/4 neutropenia in patients with two or more combined variant allele. 
The clinical relevance of the additive effect of these two genes may be underestimated and need to be further examined as a better strategy for personalising CPT-11 therapy.\nAlthough plasma SN-38 levels appear to be the most predictive of toxicity risk, secondary metabolites, SN-38G and APC, may also be important contributors. We observed decreased plasma levels of these metabolites associated with an ABCC5 polymorphism (rs562) that was recently identified as a significant predictor of GI toxicity (Di Martino et al, 2011). These effects may be due to reduced ABCC5 hepatic efflux leading to SN-38G and APC accumulation within the liver (Figure 3). Higher hepatic concentrations may ultimately lead to increased intestinal SN-38G levels, through increased biliary excretion via other ABC transporters, which may then undergo β-glucuronidase-dependent reconversion to SN-38 thereby augmenting GI toxicity risk (Figure 3).\nIn this study, CPT-11 levels were lower in homozygous ABCB1 3435T patients. ABCB1 is a well-established transporter of CPT-11 and SN-38, but its clinical relevance for CPT-11 therapy remains inconclusive. ABCB1 polymorphisms have been associated with both increased or reduced exposure, decreased clearance and increased toxicity and response, whereas other studies were unable to confirm these results (Mathijssen et al, 2003; Sai et al, 2003; Mathijssen et al, 2004; Cote et al, 2007; Innocenti et al, 2009; Sai et al, 2010; Glimelius et al, 2011). This may be due to the redundancy of other transporters, including ABCC2 and ABCG2, capable of biliary excretion of CPT-11 and metabolites. The lack of consistent evidence suggests that ABCB1 may not be useful in personalising CPT-11 therapy.\nAs a secondary objective we examined the impact of pharmacogenetic factors on adverse events and PFS. Here, the majority (97%) of patients were prescribed 180 mg m−2 CPT-11 with 17% experiencing severe neutropenia. Neutropenia was associated with UGT1A1*28 as has been reported (Hoskins et al, 2007). More importantly, patients carrying two or more combined SLCO1B1 521C and UGT1A1*28 alleles were at significantly increased risk of myelosuppression, suggesting these SNPs together may provide a more comprehensive strategy for assessing haematological toxicity risk. Heterozygous ABCC2–24C/T but not TT carriers predicted reduced neutropenia risk indicating further validation is needed.\nNon-haematological toxicities including diarrhoea, nausea/vomiting, and oral mucositis were assessed. Most notably, a common SNP in CES1 (rs2244613) was associated with lower risk of diarrhoea, which may be due to reduced conversion of CPT-11 to SN-38. CES1 (rs2244613) was recently associated with reduced trough levels of dabigatran, a new oral anticoagulant drug, and a significantly decreased bleeding risk in patients, suggesting this SNP may be clinically relevant for many CES-dependent drugs (Pare et al, 2013). Interestingly, homozygous ABCB1 3435T carriers had a much higher likelihood of experiencing higher-grade nausea/vomiting (OR=10.52, P<0.05) and patients expressing CYP3A5 had a significantly increased risk of oral mucositis, suggesting higher levels of M4, APC, or NPC metabolites may contribute its development. A potential limitation to this analysis is the concurrent use of 5-fluorouracil with CPT-11 in most patients. 
Although the side-effect profile of the two drugs is similar, the majority of markers assessed are not specific to 5-fluorouracil, suggesting these correlations are likely due to modulation of irinotecan disposition but require confirmation in a study designed to assess toxicity as a primary objective.\nPFS analysis was limited to patients treated with FOLFIRI with or without BEV. SLCO1B1 388(G/G) was associated with longer PFS, suggesting variant carriers may experience better response to FOLFIRI-based regimens. This effect may be due to increased OATP1B1 expression as this variant has recently been correlated to increased expression in Caucasian liver samples (Nies et al, 2013). ABCC2–24TT was associated with reduced PFS, consistent with lower response rates and shorter PFS observed in Japanese patients, but contrary to the lack of association in Caucasian metastatic colorectal cancer patients treated with FOLFIRI regimes (Akiyama et al, 2012). UGT1A1*28 was also associated with reduced PFS, but a recent meta-analysis suggested that UGT1A1*28 status may not be a reliable predictor of PFS (Liu et al, 2013). Our evaluation of PFS may be confounded by patients still undergoing active therapy or surveillance at the date of censoring. Although the genes investigated are not thought to have a role in the other drugs in the FOLFIRI regimen, we cannot rule out the effect of these drugs on PFS.\nImportantly, we analysed OATP1B3 tumour expression in a subset of patients. To date, tumour biomarkers of CPT-11 response remain unknown. Our group was the first to note OATP1B3 overexpression in colon tumours and this expression was recently identified exclusively as a cancer-specific (cs)OATP1B3 splice variant (Lee et al, 2008; Han et al, 2013; Imai et al, 2013; Thakkar et al, 2013). The localisation (thought to be intracellular), function, and clinical relevance of csOATP1B3 are under investigation (Imai et al, 2013; Thakkar et al, 2013). Here we show the first evidence, to our knowledge, that higher OATP1B3 expression in colon tumours was significantly associated with reduced PFS (Figure 3). Lack of membrane expression of a functional OATP1B3 transporter may lead to reduced SN-38 tumour influx leading to a poorer clinical response or alternatively, overexpression of csOATP1B3 within the tumour may induce a p53-dependent survival mechanism, which has been demonstrated in WT-OATP1B3 colon cancer cell lines (Lee et al, 2008). Although we are unable to definitively confirm OATP1B3 to be the cancer-specific isoform due to use of an antibody recognising the common c-terminal region, the reported lack of wild-type OATP1B3 expression within colon tumours suggests this is the form expressed in these tumours (Thakkar et al, 2013).\nOwing to the complexity of CPT-11 disposition, it is questionable that one gene alone will be useful for personalising therapy and will likely require assessing the right combination of genes. Our data provide new insight regarding transporters, particularly members of the OATP1B subfamily, to the disposition and clinical effects of CPT-11 (summarised in Figure 3). The additive effect of SLCO1B1 521C and UGT1A1*28 on SN-38 exposure and neutropenia risk seen here in patients carrying two or more combined alleles (24.8% of this population) provides rationale for examining the utility of combined genotyping to better predict toxicity risk in CPT-11-based regimens. 
Future prospective studies should be designed to compare combined genotyping with UGT1A1*28 genotyping alone to advance the goal of personalising irinotecan therapy.
[ "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "OATP1B1", "OATP1B3", "Irinotecan", "SN-38", "colorectal cancer" ]
Materials and Methods:

Study population: Metastatic colorectal cancer and advanced or metastatic pancreatic cancer patients (n=127) being treated with CPT-11-based chemotherapy regimens were prospectively recruited between January 2010 and November 2012 from the London Regional Cancer Program, London Health Sciences Centre, London, Ontario, Canada. The majority of patients were prescribed CPT-11 at 180 mg m−2 biweekly in combination with 5-fluorouracil and leucovorin (FOLFIRI regimen) with or without bevacizumab. Patients were included if they were aged 18 years or above with an ECOG (Eastern Cooperative Oncology Group) performance status ⩽2. Exclusion criteria included: >35 μmol l−1 total bilirubin, >3X upper normal limit AST or ALT without liver metastases or >5X with liver metastases, known hypersensitivity to CPT-11, known history of Gilbert's syndrome, and concurrent use of ketoconazole. All participants provided written informed consent. The study was approved by the Research Ethics Board at Western University.

Chart review: Paper and electronic record chart review for each consented patient was conducted by a single reviewer. Age recorded was the age at initiation of CPT-11-based chemotherapy. A cycle of CPT-11 was defined as a single administration of CPT-11 alone or in combination, irrespective of the chemotherapy regimen used. Treatment-related toxicities were recorded for the cycle where the blood sample was obtained, in addition to any toxicities occurring throughout the duration of treatment with CPT-11. The blood sample cycle was considered to extend to the subsequent measurement of basic laboratory values and/or clinical assessment. Toxicities during this period were not considered to be associated with the particular treatment if they were documented more than 3 weeks after the sample blood was drawn. NCI-CTCAE version 3.0 (Bethesda, MD, USA) was used to grade toxicities. Toxicity grade was determined using subjective measures (described in clinical notes) when exact grading was not documented. Responders were defined as having a stable or reduced tumour size on the first CT scan following CPT-11-based chemotherapy initiation. The interval between CT scans was at the discretion of the treating oncologist. Progression was defined as the date the CT or MRI reported an increased tumour size. Progression-free survival was considered to be the time from initiation of CPT-11-based chemotherapy to the date of progression, death, last contact, or censor date (13 August 2013), whichever occurred first.
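The PFS definition above maps directly onto a duration/event pair for survival analysis. A minimal sketch of that derivation is shown below; it is illustrative only, not the study's code (the published analysis was run in GraphPad Prism and R), and the column names are hypothetical. Progression or death is treated as an event, per the usual PFS convention.

```python
# Minimal sketch (not the authors' code): derive PFS duration and event flag
# from chart-review dates, following the definition above.
import pandas as pd

CENSOR_DATE = pd.Timestamp("2013-08-13")

def add_pfs(df: pd.DataFrame) -> pd.DataFrame:
    """df needs datetime columns: start (CPT-11 initiation), progression,
    death, last_contact (any of the last three may be missing/NaT)."""
    df = df.copy()
    # End of follow-up = earliest of progression, death, last contact, censor date.
    candidates = df[["progression", "death", "last_contact"]].copy()
    candidates["censor"] = CENSOR_DATE
    end = candidates.min(axis=1)
    # Event = 1 if progression or death occurred on or before that end date.
    progressed = df["progression"].notna() & (df["progression"] <= end)
    died = df["death"].notna() & (df["death"] <= end)
    df["pfs_months"] = (end - df["start"]).dt.days / 30.44
    df["pfs_event"] = (progressed | died).astype(int)
    return df

example = pd.DataFrame({
    "start": [pd.Timestamp("2010-03-01")],
    "progression": [pd.Timestamp("2011-01-15")],
    "death": [pd.NaT],
    "last_contact": [pd.Timestamp("2011-02-01")],
})
print(add_pfs(example)[["pfs_months", "pfs_event"]])
```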
Genotyping: DNA was extracted from whole blood using the Gentra Puregene Blood Kit (Qiagen, Toronto, Ontario, Canada). The following TaqMan allelic discrimination assays (Applied Biosystems, Carlsbad, CA, USA) were used for genotyping: ABCB1 (c.3435C>T, rs1045642), ABCG2 (c.421C>A, rs2231142; c.34G>A, rs2231137), ABCC2 (c.-24C>T, rs717620; c.1249G>A, rs2273697), ABCC5 (T>C, rs562), SLCO1B1*1b (c.388A>G, rs2306283), SLCO1B1*5 (c.521T>C, rs4149056), SLCO1B3 (g.699G>A, rs7311358), UGT1A1*28 (TA(6/7), rs8175347), CES1 (g.14506G>A, rs71647871; g.27467A>C, rs2244613), CYP3A4*22 (intron 6 C>T, rs35599367), and CYP3A5*3 (g.6986A>G, rs776746). Hardy–Weinberg equilibrium was assessed for all genotypes using the χ2 goodness-of-fit test.
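The Hardy–Weinberg check is a one-degree-of-freedom χ2 goodness-of-fit test of observed genotype counts against the counts expected from the estimated allele frequency. A minimal sketch of that calculation follows; the genotype counts are invented and the function is not taken from the study.

```python
# Hardy-Weinberg goodness-of-fit for a biallelic SNP (illustrative counts only).
from scipy.stats import chi2

def hwe_test(n_AA: int, n_Aa: int, n_aa: int):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # estimated frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # 3 genotype classes - 1 - 1 estimated allele frequency = 1 degree of freedom
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

print(hwe_test(70, 45, 12))  # hypothetical C/C, C/T, T/T counts for one SNP
```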
Drug levels: Blood samples were obtained by venipuncture of the opposite arm immediately following the end of the CPT-11 infusion (90 min) during any cycle of CPT-11-based chemotherapy. Plasma was collected and stored at −80 °C until analysis. Plasma concentrations of CPT-11 and its metabolites SN-38, SN-38G, and APC (7-ethyl-10-[4-N-(5-aminopentanoic acid)-1-piperidino] carbonyloxycamptothecin) were measured by liquid chromatography-tandem mass spectrometry. Standards were obtained from Sigma-Aldrich (Oakville, Ontario, Canada; CPT-11, SN-38) and Toronto Research Chemicals (Toronto, Ontario, Canada; SN-38-glucuronide, APC). Plasma samples (100 μl) were precipitated upon addition of 3 volumes of acetonitrile and 0.1% formic acid (FA) spiked with 15 μl internal standard (camptothecin, 500 ng ml−1, Toronto Research Chemicals). Samples were vortexed, centrifuged, and diluted with H2O (0.1% FA). Analytes were injected (30 μl) into the liquid chromatograph (Agilent 1200) and separated on a reverse-phase column (Hypersil Gold, 50 × 5 mm, 5 μm particle size) over 6 min using gradient elution with H2O (0.1% FA) and acetonitrile (0.1% FA) (10–90%). Standard curves and quality controls (coefficient of variation (%): high, 13.5; medium, 2.6; low, 3.7) were prepared in drug-free plasma. The mass spectrometer (Thermo TSQ Vantage, Burlington, ON, Canada) with a heated electrospray ionisation source was set in positive mode for detection of CPT-11, SN-38, SN-38G, APC, and camptothecin with transitions 587→124 m/z, 393→349.3 m/z, 569→393.3 m/z, 691→227 m/z, and 349→305 m/z, respectively.
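Quantification from a spiked standard curve and the quality-control coefficient of variation reduce to simple calculations. The sketch below is purely illustrative: the concentrations and analyte/internal-standard peak-area ratios are invented, and this is not the laboratory's processing code.

```python
# Illustrative only: fit a calibration line from spiked plasma standards,
# back-calculate an unknown, and compute the %CV used for quality controls.
import numpy as np

std_conc = np.array([1, 5, 10, 50, 100, 500])                  # ng/ml standards
area_ratio = np.array([0.02, 0.11, 0.21, 1.05, 2.08, 10.4])    # analyte/IS peak-area ratio

slope, intercept = np.polyfit(std_conc, area_ratio, 1)

def back_calc(ratio: float) -> float:
    """Convert a measured analyte/internal-standard peak-area ratio to ng/ml."""
    return (ratio - intercept) / slope

qc_replicates = np.array([48.2, 51.1, 49.5, 50.8])             # back-calculated QC values
cv_percent = 100 * qc_replicates.std(ddof=1) / qc_replicates.mean()
print(f"unknown ~ {back_calc(0.53):.1f} ng/ml, QC CV = {cv_percent:.1f}%")
```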
Immunohistochemistry: Archived normal and tumour tissue biopsy slides were obtained from a subset of study participants (n=30) following approval by the Tissue and Archive Committee (Department of Pathology, London Health Sciences Centre). Antigen retrieval was performed with citrate buffer and slides were subsequently incubated with pre-immune serum or anti-OATP1B3 polyclonal antibody (1 : 200), followed by an avidin-biotin immunoperoxidase assay and developed using an AEC (3-amino-9-ethylcarbazole) staining kit (Sigma-Aldrich) using a modified protocol (Lee et al, 2008). Nuclei were counterstained using Mayer's haematoxylin (Sigma-Aldrich). Scoring for OATP1B3 expression in normal, normal adjacent, and tumour tissue was performed independently by one pathologist (JP). Staining intensity for OATP1B3 was defined and evaluated using the following semi-quantitative, previously published method: (0) no staining, (1) weakly positive, (2) moderately positive, and (3) strongly positive (Lee et al, 2008).
Statistics: The primary objective was to determine covariates associated with interindividual variability of CPT-11 and metabolite plasma concentrations. All statistical analyses were performed in GraphPad Prism and the statistical software R. One-way analysis of variance with Bonferroni correction and Student's t-tests were used to compare drug levels between genotypic groups. Multiple linear regression analysis was performed to determine significant covariates for the interindividual variability of dose-normalised CPT-11 and metabolite plasma concentrations (natural log-transformed). Covariates considered included: age, sex, treatment regimen, and ABCB1, ABCG2, ABCC2, ABCC5, SLCO1B1, SLCO1B3, UGT1A1, CES1, CYP3A4, and CYP3A5 genotype. Covariates were assessed individually and were considered for the final model at a significance level of P<0.2. Covariates meeting these criteria were entered into a multiple linear regression model adjusting for age, sex, and treatment regimen and remained in the final model if P<0.1. Secondary outcomes included assessing covariates associated with toxicity and PFS. Multinomial logistic regression analysis was used to determine the association of genotype with toxicity events after adjustment for sex, age, and treatment regimen. Toxicity categories used in the regression analysis were: neutropenia (no event vs low (grade 1 or 2) vs high (grade 3 or 4)) and diarrhoea, nausea/vomiting, and oral mucositis (no event vs low (grade 1) vs high (grade 2 or 3)). Univariate analysis was performed for each covariate and only significant genotypes were included in the final model, with the exception of the adjustment covariates indicated above. Logistic regression analysis was also performed to determine covariates associated with neutropenia (low (grade 0, 1 or 2) vs high (grade 3 or 4)). Cox regression analysis was used to determine the association of covariates with PFS in patients treated with BEV-FOLFIRI or FOLFIRI regimens (excluding patients with pancreatic cancer). Univariate analysis was performed and covariates with a cut value of P<0.2 were included in the final multivariate analysis, with the exception of the adjusting covariates age at enrolment, sex, and treatment regimen. Kruskal–Wallis with Dunn's multiple comparison test and the Wilcoxon signed rank test were used to compare OATP1B3 pathology scores. The Mann–Whitney U-test was used to compare OATP1B3 pathology scores and PFS in patients treated with CPT-11-based chemotherapy regimens scored for OATP1B3 tumour expression (0–1 vs 2–3).
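The PFS modelling strategy described above (a univariate screen at P<0.2 followed by a multivariate Cox model adjusted for age, sex, and treatment regimen) can be sketched as follows. This is an illustrative re-expression only, not the study's code (the published analysis was run in R); the column names are hypothetical and genotypes are assumed to be numerically coded.

```python
# Illustrative sketch of the Cox workflow: univariate screen, then an adjusted
# multivariate model. Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

ADJUSTERS = ["age", "sex", "regimen_folfiri"]                    # always retained
CANDIDATES = ["SLCO1B1_388", "ABCC2_neg24", "UGT1A1_28", "ABCC5_rs562"]

def screen_and_fit(df: pd.DataFrame) -> CoxPHFitter:
    keep = []
    for cov in CANDIDATES:
        cph = CoxPHFitter()
        cph.fit(df[["pfs_months", "pfs_event", cov]],
                duration_col="pfs_months", event_col="pfs_event")
        if cph.summary.loc[cov, "p"] < 0.2:                       # univariate screen
            keep.append(cov)
    final = CoxPHFitter()
    final.fit(df[["pfs_months", "pfs_event"] + ADJUSTERS + keep],
              duration_col="pfs_months", event_col="pfs_event")
    return final

# model = screen_and_fit(cohort)   # cohort: one row per patient
# print(model.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```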
Results:

Study population: Patient and tumour characteristics (n=127) are described in Table 1. The treatment profile of patients is presented in Table 2. Approximately half (55%) of the study population had not received previous chemotherapy and the majority of patients (82%) were treated with bevacizumab (BEV)-FOLFIRI or FOLFIRI chemotherapy regimens as first-line therapy (Table 2).

OATP1B1 and ABCC5 are important determinants of SN-38 and SN-38G levels: The primary objective was to determine covariates associated with interindividual variability of plasma concentrations of CPT-11 and metabolites. Plasma concentrations of CPT-11, SN-38, SN-38G, and APC were measured from blood samples collected immediately following the end of CPT-11 infusion. Multiple linear regression was performed on dose-normalised drug levels (natural log transformed), adjusting for age at time of treatment initiation, sex, and treatment regimen in the final models. ABCB1 (c.3435 C>T) was significantly associated with CPT-11 exposure, as homozygous variant (T/T) carriers had lower levels compared with wild-type patients (P<0.05; Figure 1A and Supplementary Table 1). This model had one significant adjusting covariate (FOLFIRI treatment regimen, P<0.05).

Several genotypes were significantly associated with variation in SN-38 levels as part of a model that explained approximately 27% of the variation in exposure (Table 3). SLCO1B1 521C allele carriers had significantly increased systemic exposure to SN-38 (P<0.001, Figure 1B). As expected, heterozygous (*1/*28) and homozygous variant (*28/*28) UGT1A1 carriers had increased SN-38 plasma levels (Figure 1C). Interestingly, a significant increase in SN-38 level was observed with an increasing number of combined SLCO1B1 521C and UGT1A1*28 variant alleles, suggesting an additive effect of polymorphisms in these two genes (Figure 1D). A corresponding decrease in the SN-38G/SN-38 ratio was observed with an increasing number of combined variant alleles (Figure 1E). Together, this suggests that patients heterozygous for both SLCO1B1 521C and UGT1A1*28 may have an equivalent risk of increased SN-38 exposure compared with patients homozygous for either polymorphism. Carriers of a rare SNP in CES1 (rs71647871; allele frequency, 0.024) had significantly decreased SN-38 levels (P<0.05, Table 3).

Interestingly, SN-38G levels were significantly affected by ABCC5 genotype. Patients harbouring the ABCC5 rs562 C allele had reduced SN-38G plasma exposure compared with wild-type patients (P<0.001, Figure 1F, Supplementary Table 2). In addition, patients expressing CYP3A5 also had significantly reduced SN-38G plasma levels (P<0.05). FOLFIRI and FOLFIRINOX treatment regimens were significantly associated with SN-38G (P<0.001 and P<0.05, respectively).

The model for the inactive metabolite APC explained approximately 24% of the variability in plasma exposure (Supplementary Table 3). SLCO1B3 g.699 and ABCC2 c.1249G>A were positively associated with APC levels (P<0.01). Decreased plasma levels were observed in patients carrying the ABCC5 rs562 C allele compared with wild-type patients (P<0.001). Significant adjusting covariates in this model included sex (P<0.05) and treatment regimen (P<0.01).
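The exposure models above amount to ordinary least squares on natural-log, dose-normalised concentrations, and the additive SLCO1B1 521C/UGT1A1*28 effect can be probed with a combined variant-allele count. The sketch below is illustrative only, not the study's code (the published models were fit in R); column names are hypothetical and genotypes are assumed to be coded as variant-allele counts (0/1/2).

```python
# Illustrative sketch of the SN-38 exposure model with a combined
# SLCO1B1 521C + UGT1A1*28 allele count as a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_sn38_model(df: pd.DataFrame):
    df = df.copy()
    df["ln_sn38_dn"] = np.log(df["sn38_ng_ml"] / df["cpt11_dose_mg_m2"])
    df["combined_alleles"] = df["slco1b1_521C"] + df["ugt1a1_28"]
    model = smf.ols(
        "ln_sn38_dn ~ combined_alleles + ces1_rs71647871 + age + C(sex) + C(regimen)",
        data=df,
    ).fit()
    return model

# fitted = fit_sn38_model(cohort)
# print(fitted.summary())   # coefficient on combined_alleles reflects the additive effect
```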
Drug transporters predict CPT-11-related toxicities: CPT-11-related adverse reactions and the frequencies of grade severity are described in Supplementary Table 4. Multinomial logistic regression analysis was performed to determine the association of genotypes with adverse reactions, comparing no event vs low-grade vs high-grade toxicity (Table 4). ABCC2 -24C/T carriers had a significantly lower risk of grade 3/4 neutropenia compared with wild-type patients (odds ratio (OR)=0.22, 95% CI=0.06–0.85). UGT1A1*28 carriers were at increased risk for severe neutropenia (grade 3/4) compared with wild-type patients following binary logistic regression analysing low- vs high-grade events (OR=3.67; 95% CI=1.19–11.33; Table 4). In addition to their increased SN-38 exposure, patients with two or more combined SLCO1B1 521C and UGT1A1*28 variant alleles were also at significantly increased risk of severe neutropenia (grade 3/4) compared with patients wild-type for both alleles (OR=4.154, 95% CI=1.06–16.36).

Non-haematological toxicity was associated with drug transporters and metabolising enzymes (Table 4). Carriers of a common SNP in CES1 (rs2244613) had a significantly lower risk of higher-grade diarrhoea (OR=0.29, 95% CI=0.09–0.89). Patients with ABCB1 3435 C/T and T/T genotypes were much more likely to experience higher-grade nausea/vomiting (OR=9.06, 95% CI=1.03–79.41 and OR=10.52, 95% CI=1.10–100.2, respectively). Higher-grade oral mucositis was observed in patients expressing CYP3A5 (OR=8.10, 95% CI=1.57–41.90), whereas those heterozygous for SLCO1B1 388A/G had a significantly lower risk (OR=0.19, 95% CI=0.05–0.72). No significant adjusting covariates were found in the binary logistic or multinomial logistic regression models.
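The toxicity analysis pairs a multinomial logit over {no event, low grade, high grade} with a binary logit for severe (grade 3/4) neutropenia, both adjusted for age, sex, and treatment regimen. A minimal sketch of that pairing is below; it is not the study's code, the variable names are hypothetical, and genotypes are assumed to be coded 0/1/2.

```python
# Illustrative sketch of the toxicity regressions described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_toxicity_models(df: pd.DataFrame):
    # neutro_cat: 0 = no event, 1 = grade 1/2, 2 = grade 3/4
    multinomial = smf.mnlogit(
        "neutro_cat ~ abcc2_neg24 + ugt1a1_28 + age + C(sex) + C(regimen)",
        data=df,
    ).fit(disp=False)

    # Binary comparison: grade 3/4 vs grade 0-2
    df = df.assign(severe=(df["neutro_cat"] == 2).astype(int))
    binary = smf.logit(
        "severe ~ ugt1a1_28 + age + C(sex) + C(regimen)", data=df
    ).fit(disp=False)
    odds_ratios = np.exp(binary.params)     # e.g., OR for UGT1A1*28 carriers
    ci = np.exp(binary.conf_int())          # 95% confidence intervals on the OR scale
    return multinomial, odds_ratios, ci
```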
Biomarkers of PFS: Approximately 73% of patients were considered responders, having radiographic evidence of no change or tumour shrinkage during CPT-11-based therapy. At the time of analysis (censor date: 13 August 2013), disease progression was evident in the majority of patients, with a median PFS of 10.5 months (range, 0.2–43 months; Table 2). Cox regression analysis for PFS was performed on patients (excluding pancreatic cancer patients) treated with BEV-FOLFIRI or FOLFIRI regimens only (n=103, Table 5). Patients homozygous for SLCO1B1 388G/G alleles had a significantly increased PFS compared with wild-type patients (HR=1.60, 95% CI=1.04–2.46). Patients carrying two ABCC2 c.-24T alleles had decreased PFS (HR=0.62, 95% CI=0.40–0.95). Reduced PFS was also observed for UGT1A1*28 (*1/*28, HR=0.69, 95% CI=0.52–0.92; *28/*28, HR=0.60, 95% CI=0.38–0.97). ABCC5 was associated with PFS in the univariate analysis (P=0.05), but did not remain significant following adjustment in the final model. No significant adjusting covariates were found in the multivariate analysis.

Tumour OATP1B3 expression suggests poorer clinical outcomes: The role of OATP1B3 expression, known to transport SN-38, in response to CPT-11-based chemotherapy is unknown. Recently, OATP1B3 expression within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein, calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and the intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy.
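The OATP1B3 comparisons reduce to a Wilcoxon signed-rank test on paired tumour vs normal staining scores and a Mann–Whitney U test comparing PFS between low (0–1) and high (2–3) scorers, as described in the Statistics section. The sketch below is illustrative only; the scores and PFS values are invented and this is not the study's analysis code.

```python
# Illustrative sketch of the nonparametric OATP1B3 comparisons (made-up data).
from scipy.stats import wilcoxon, mannwhitneyu

normal_scores = [0, 1, 0, 1, 0, 1, 0, 0, 1, 0]
tumour_scores = [1, 2, 3, 2, 1, 3, 2, 1, 3, 2]     # paired with normal_scores
stat, p_paired = wilcoxon(tumour_scores, normal_scores)
print(f"tumour vs paired normal: W={stat:.1f}, P={p_paired:.3f}")

pfs_low_score = [14.2, 18.0, 11.5, 22.1, 9.8]      # months, OATP1B3 score 0-1
pfs_high_score = [6.3, 8.9, 5.1, 10.4, 7.7]        # months, OATP1B3 score 2-3
u, p_pfs = mannwhitneyu(pfs_low_score, pfs_high_score, alternative="two-sided")
print(f"PFS low vs high OATP1B3: U={u:.1f}, P={p_pfs:.3f}")
```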
Tumour OATP1B3 expression suggests poorer clinical outcomes: The role of OATP1B3, a known SN-38 transporter, in the response to CPT-11-based chemotherapy is unknown. Recently, the OATP1B3 expressed within colon tumours has been shown to be a cancer-specific isoform that may be expressed primarily as an intracellular protein, calling into question the functional relevance of this transporter within the tumour (Thakkar et al, 2013). Paired normal and tumour samples were available for a subset of patients treated with CPT-11-based chemotherapy regimens (n=30: BEV-FOLFIRI, n=22; FOLFIRI, n=5; FOLFIRINOX, n=1; BEV-irinotecan, n=1; cetuximab-irinotecan, n=1). Staining for OATP1B3 was performed and the intensity of expression was scored. Tumour tissue had a significantly higher OATP1B3 score compared with paired normal tissue (P<0.05, Figure 2A and C). Progression-free survival was significantly reduced in patients with high (score, 2 or 3) OATP1B3 expression compared with patients with low (score, 0 or 1) expression (Figure 2B), suggesting that OATP1B3 expression may correlate with poorer clinical response to CPT-11 therapy. Discussion: The pharmacogenetics of irinotecan therapy has been widely studied, and UGT1A1*28 in particular has been shown to be of potential clinical relevance, especially for patients prescribed high-dose CPT-11 therapy (Hoskins et al, 2007). As severe toxicities are often observed in patients treated with combination regimens at lower CPT-11 doses, we examined pharmacogenetic determinants of CPT-11 and metabolite exposure in this setting. SN-38 plasma exposure was significantly increased in SLCO1B1 521C (*5) allele carriers (P<0.001) (Figure 3), consistent with studies correlating SLCO1B1 521C with higher SN-38 and CPT-11 AUC in both colorectal cancer and non-small-cell lung cancer patients (Han et al, 2008; Innocenti et al, 2009). UGT1A1*28 carriers had significantly increased SN-38 levels compared with non-carriers (P<0.01), consistent with previous reports, which can result in greater toxicity risk (Iyer et al, 2002; Innocenti et al, 2009; Hu et al, 2010; Sai et al, 2010; Cai et al, 2013). Importantly, we demonstrate an additive effect of SLCO1B1 521C and UGT1A1*28 on SN-38 exposure, as patients with an increasing number of combined variant alleles had higher plasma levels and a corresponding decrease in the SN-38G/SN-38 ratio. To our knowledge, this is the first report of an additive effect in a primarily Caucasian (94%) population, and the effect remained significant upon exclusion of non-Caucasian patients. A similar additive effect has been noted for SN-38 AUC with combined SLCO1B1*15 and UGT1A1*6 or *28 polymorphisms in Japanese patients, likely due to the presence of the SLCO1B1 521C allele (Sai et al, 2010). Together, these results suggest that heterozygous carriers of both SLCO1B1 521C and UGT1A1*28 may reach a toxicity risk comparable to that of homozygous UGT1A1*28 patients, as evidenced by several case reports of SLCO1B1 521C and UGT1A1*28 carriers presenting with life-threatening toxicities (Sakaguchi et al, 2009; Takane et al, 2009). Here we observed an OR of 4.15 (95% CI=1.06–16.36) for grade 3/4 neutropenia in patients with two or more combined variant alleles. The clinical relevance of the additive effect of these two genes may be underestimated and needs to be examined further as a better strategy for personalising CPT-11 therapy. Although plasma SN-38 levels appear to be the most predictive of toxicity risk, the secondary metabolites SN-38G and APC may also be important contributors. 
We observed decreased plasma levels of these metabolites associated with an ABCC5 polymorphism (rs562) that was recently identified as a significant predictor of GI toxicity (Di Martino et al, 2011). These effects may be due to reduced ABCC5 hepatic efflux leading to SN-38G and APC accumulation within the liver (Figure 3). Higher hepatic concentrations may ultimately lead to increased intestinal SN-38G levels, through increased biliary excretion via other ABC transporters, which may then undergo β-glucuronidase-dependent reconversion to SN-38 thereby augmenting GI toxicity risk (Figure 3). In this study, CPT-11 levels were lower in homozygous ABCB1 3435T patients. ABCB1 is a well-established transporter of CPT-11 and SN-38, but its clinical relevance for CPT-11 therapy remains inconclusive. ABCB1 polymorphisms have been associated with both increased or reduced exposure, decreased clearance and increased toxicity and response, whereas other studies were unable to confirm these results (Mathijssen et al, 2003; Sai et al, 2003; Mathijssen et al, 2004; Cote et al, 2007; Innocenti et al, 2009; Sai et al, 2010; Glimelius et al, 2011). This may be due to the redundancy of other transporters, including ABCC2 and ABCG2, capable of biliary excretion of CPT-11 and metabolites. The lack of consistent evidence suggests that ABCB1 may not be useful in personalising CPT-11 therapy. As a secondary objective we examined the impact of pharmacogenetic factors on adverse events and PFS. Here, the majority (97%) of patients were prescribed 180 mg m−2 CPT-11 with 17% experiencing severe neutropenia. Neutropenia was associated with UGT1A1*28 as has been reported (Hoskins et al, 2007). More importantly, patients carrying two or more combined SLCO1B1 521C and UGT1A1*28 alleles were at significantly increased risk of myelosuppression, suggesting these SNPs together may provide a more comprehensive strategy for assessing haematological toxicity risk. Heterozygous ABCC2–24C/T but not TT carriers predicted reduced neutropenia risk indicating further validation is needed. Non-haematological toxicities including diarrhoea, nausea/vomiting, and oral mucositis were assessed. Most notably, a common SNP in CES1 (rs2244613) was associated with lower risk of diarrhoea, which may be due to reduced conversion of CPT-11 to SN-38. CES1 (rs2244613) was recently associated with reduced trough levels of dabigatran, a new oral anticoagulant drug, and a significantly decreased bleeding risk in patients, suggesting this SNP may be clinically relevant for many CES-dependent drugs (Pare et al, 2013). Interestingly, homozygous ABCB1 3435T carriers had a much higher likelihood of experiencing higher-grade nausea/vomiting (OR=10.52, P<0.05) and patients expressing CYP3A5 had a significantly increased risk of oral mucositis, suggesting higher levels of M4, APC, or NPC metabolites may contribute its development. A potential limitation to this analysis is the concurrent use of 5-fluorouracil with CPT-11 in most patients. Although the side-effect profile of the two drugs is similar, the majority of markers assessed are not specific to 5-fluorouracil, suggesting these correlations are likely due to modulation of irinotecan disposition but require confirmation in a study designed to assess toxicity as a primary objective. PFS analysis was limited to patients treated with FOLFIRI with or without BEV. SLCO1B1 388(G/G) was associated with longer PFS, suggesting variant carriers may experience better response to FOLFIRI-based regimens. 
This effect may be due to increased OATP1B1 expression as this variant has recently been correlated to increased expression in Caucasian liver samples (Nies et al, 2013). ABCC2–24TT was associated with reduced PFS, consistent with lower response rates and shorter PFS observed in Japanese patients, but contrary to the lack of association in Caucasian metastatic colorectal cancer patients treated with FOLFIRI regimes (Akiyama et al, 2012). UGT1A1*28 was also associated with reduced PFS, but a recent meta-analysis suggested that UGT1A1*28 status may not be a reliable predictor of PFS (Liu et al, 2013). Our evaluation of PFS may be confounded by patients still undergoing active therapy or surveillance at the date of censoring. Although the genes investigated are not thought to have a role in the other drugs in the FOLFIRI regimen, we cannot rule out the effect of these drugs on PFS. Importantly, we analysed OATP1B3 tumour expression in a subset of patients. To date, tumour biomarkers of CPT-11 response remain unknown. Our group was the first to note OATP1B3 overexpression in colon tumours and this expression was recently identified exclusively as a cancer-specific (cs)OATP1B3 splice variant (Lee et al, 2008; Han et al, 2013; Imai et al, 2013; Thakkar et al, 2013). The localisation (thought to be intracellular), function, and clinical relevance of csOATP1B3 are under investigation (Imai et al, 2013; Thakkar et al, 2013). Here we show the first evidence, to our knowledge, that higher OATP1B3 expression in colon tumours was significantly associated with reduced PFS (Figure 3). Lack of membrane expression of a functional OATP1B3 transporter may lead to reduced SN-38 tumour influx leading to a poorer clinical response or alternatively, overexpression of csOATP1B3 within the tumour may induce a p53-dependent survival mechanism, which has been demonstrated in WT-OATP1B3 colon cancer cell lines (Lee et al, 2008). Although we are unable to definitively confirm OATP1B3 to be the cancer-specific isoform due to use of an antibody recognising the common c-terminal region, the reported lack of wild-type OATP1B3 expression within colon tumours suggests this is the form expressed in these tumours (Thakkar et al, 2013). Owing to the complexity of CPT-11 disposition, it is questionable that one gene alone will be useful for personalising therapy and will likely require assessing the right combination of genes. Our data provide new insight regarding transporters, particularly members of the OATP1B subfamily, to the disposition and clinical effects of CPT-11 (summarised in Figure 3). The additive effect of SLCO1B1 521C and UGT1A1*28 on SN-38 exposure and neutropenia risk seen here in patients carrying two or more combined alleles (24.8% of this population) provides rationale for examining the utility of combined genotyping to better predict toxicity risk in CPT-11-based regimens. Future prospective studies should be designed to compare combined genotyping to UGT1A1*28 genotyping alone to advance the goal of personalising irinotecan.
Background: Treatment of advanced and metastatic colorectal cancer with irinotecan is hampered by severe toxicities. The active metabolite of irinotecan, SN-38, is a known substrate of drug-metabolising enzymes, including UGT1A1, as well as OATP and ABC drug transporters. Methods: Blood samples (n=127) and tumour tissue (n=30) were obtained from advanced cancer patients treated with irinotecan-based regimens for pharmacogenetic and drug level analysis and transporter expression. Clinical variables, toxicity, and outcomes data were collected. Results: SLCO1B1 521C was significantly associated with increased SN-38 exposure (P<0.001), which was additive with UGT1A1*28. ABCC5 (rs562) carriers had significantly reduced SN-38 glucuronide and APC metabolite levels. Reduced risk of neutropenia and diarrhoea was associated with ABCC2-24C/T (odds ratio (OR)=0.22, 0.06-0.85) and CES1 (rs2244613; OR=0.29, 0.09-0.89), respectively. Progression-free survival (PFS) was significantly longer in SLCO1B1 388G/G patients and reduced in ABCC2-24T/T and UGT1A1*28 carriers. Notably, higher OATP1B3 tumour expression was associated with reduced PFS. Conclusions: Clarifying the association of host genetic variation in OATP and ABC transporters to SN-38 exposure, toxicity and PFS provides rationale for personalising irinotecan-based chemotherapy. Our findings suggest that OATP polymorphisms and expression in tumour tissue may serve as important new biomarkers.
null
null
10,545
266
[ 172, 263, 191, 350, 192, 443, 68, 507, 314, 206, 202 ]
14
[ "patients", "11", "cpt 11", "cpt", "sn", "38", "sn 38", "grade", "analysis", "28" ]
[ "chemotherapy regimens scored", "chemotherapy majority patients", "folfiri chemotherapy regimens", "metastatic colorectal cancer", "11 based chemotherapy" ]
null
null
null
null
null
null
[CONTENT] OATP1B1 | OATP1B3 | Irinotecan | SN-38 | colorectal cancer [SUMMARY]
null
[CONTENT] OATP1B1 | OATP1B3 | Irinotecan | SN-38 | colorectal cancer [SUMMARY]
null
null
null
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents, Phytogenic | Camptothecin | Colorectal Neoplasms | Disease-Free Survival | Female | Humans | Irinotecan | Liver-Specific Organic Anion Transporter 1 | Male | Middle Aged | Multidrug Resistance-Associated Protein 2 | Multidrug Resistance-Associated Proteins | Organic Anion Transporters | Organic Anion Transporters, Sodium-Independent | Polymorphism, Single Nucleotide | Solute Carrier Organic Anion Transporter Family Member 1B3 [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents, Phytogenic | Camptothecin | Colorectal Neoplasms | Disease-Free Survival | Female | Humans | Irinotecan | Liver-Specific Organic Anion Transporter 1 | Male | Middle Aged | Multidrug Resistance-Associated Protein 2 | Multidrug Resistance-Associated Proteins | Organic Anion Transporters | Organic Anion Transporters, Sodium-Independent | Polymorphism, Single Nucleotide | Solute Carrier Organic Anion Transporter Family Member 1B3 [SUMMARY]
null
null
null
[CONTENT] chemotherapy regimens scored | chemotherapy majority patients | folfiri chemotherapy regimens | metastatic colorectal cancer | 11 based chemotherapy [SUMMARY]
null
[CONTENT] chemotherapy regimens scored | chemotherapy majority patients | folfiri chemotherapy regimens | metastatic colorectal cancer | 11 based chemotherapy [SUMMARY]
null
null
null
[CONTENT] patients | 11 | cpt 11 | cpt | sn | 38 | sn 38 | grade | analysis | 28 [SUMMARY]
null
[CONTENT] patients | 11 | cpt 11 | cpt | sn | 38 | sn 38 | grade | analysis | 28 [SUMMARY]
null
null
null
[CONTENT] patients | sn | 95 | table | ci | 95 ci | significantly | levels | 28 | compared [SUMMARY]
null
[CONTENT] patients | 11 | cpt | cpt 11 | sn | table | grade | oatp1b3 | 95 | analysis [SUMMARY]
null
null
null
[CONTENT] 521C | 28 ||| ABCC5 | rs562 | SN-38 | APC ||| ABCC2-24C/T | 0.06-0.85 | CES1 | OR=0.29 | 0.09-0.89 ||| 388G | ABCC2-24T | 28 ||| [SUMMARY]
null
[CONTENT] irinotecan ||| irinotecan | SN-38 | ABC ||| irinotecan ||| ||| ||| 521C | 28 ||| ABCC5 | rs562 | SN-38 | APC ||| ABCC2-24C/T | 0.06-0.85 | CES1 | OR=0.29 | 0.09-0.89 ||| 388G | ABCC2-24T | 28 ||| ||| ABC | irinotecan ||| [SUMMARY]
null
A model to explain the challenges of emergency medical technicians' decision making process in emergency situations: a grounded theory.
35067498
To manage life-threatening conditions and reduce morbidity and mortality, pre-hospital on-scene decision making is an influential factor. Since pre-hospital decision making is a challenging process, this process needs to be identified. This study was conducted to explore a model of Iranian emergency medical technicians' decision making in emergency situations.
BACKGROUND
This study was conducted using the grounded theory method, with direct field observations and semi-structured interviews. Purposeful sampling was performed with 26 participants, including 17 emergency medical technicians as well as dispatchers, medical direction physicians, managers and 1 representative for court affairs. Interviews were conducted from October 2018 to July 2019. The Corbin and Strauss (2015) approach (open, axial and selective coding) was used to analyze the data.
METHODS
A paradigm model was developed to explain the relationships among the main categories. Decision making in the context of fear and concern emerged as the core category. Unclear duties, insufficient authority and competencies, as well as a lack of adequate decision-making protocols and guidelines, were categorized as causal conditions. Other important categories linked to the core category were interactions, feelings and the "customer focus approach". The action-interaction strategies taken by Emergency Medical Technicians led to negative consequences that can threaten clinical outcomes and patient safety.
RESULTS
Based on the findings of this study, Emergency Medical Technicians' decision making in the context of fear and concern, the core concept of this model, leads to decreased quality of pre-hospital services, stakeholders' dissatisfaction, overload of hospital emergency units, a decline in the reputation of Emergency Medical Technicians, and threats to patient clinical outcomes and patient safety. To prevent these negative consequences, facilitation of Emergency Medical Technicians' on-scene decision making is recommended.
CONCLUSIONS
[ "Decision Making", "Emergency Medical Services", "Emergency Medical Technicians", "Grounded Theory", "Humans", "Iran", "Qualitative Research" ]
9115813
Introduction
Emergency Medical Services (EMS) centers were established to provide timely and rapid services to patients and injured people from the scene to the hospital.1 These centers can significantly reduce morbidity and mortality by providing pre-hospital care in life-threatening conditions.2 To manage life-threatening conditions, save lives and reduce morbidity and mortality, rapid and accurate on-scene decision making is an important and influential factor.3 Decisions about the type and priority of emergency medical interventions require triage.4 As the conditions of some patients are complex and variable, the technicians' decisions change frequently, which can lead to technician error.5,6 Decision-making errors threaten patient safety and the reputation of Emergency Medical Technicians (EMTs).7-10 In contrast, proper emergency medical decision making prevents hospital emergency unit overcrowding, increases the quality of health services and ultimately reduces morbidity and mortality.9,11 EMTs' on-scene decision making is influenced by considerable barriers and challenges.7,12,13 These factors influence the thinking, feelings and performance of EMTs and so increase their decision errors, especially in interventions such as endotracheal intubation, medication administration and non-transport decisions.14,15 Facing these barriers and challenges leads to EMTs' concern and fear, and these fears have a profound impact on patient outcomes.16,17 As stress and anxiety often arise at the time of providing on-scene health services,18 identifying these factors is necessary. The few studies in the field of EMTs' on-scene decision making have provided a good overview of the factors affecting dispatch services and transport, but the decision-making process at the scene has not yet been explored.19-21 Yet EMTs' decision making is a dynamic and ongoing process, starting at the scene and continuing until the patient is delivered to the hospital.20,22 This process is influenced by multiple factors and therefore has multiple outcomes. Understanding this process is crucial and can help address challenges and improve patient outcomes. Therefore, this study was conducted to design a model of EMTs' on-scene decision making in order to clarify the relationships between the components of this process.
Methods
Study setting: In Iran, the national EMS system provides free services for patients and injured people from the scene to the hospital. In the provinces and cities, EMS centers are under the supervision of universities of medical sciences and the national EMS system. EMTs with a diploma (basic), nursing associate degrees (intermediate), or bachelor's and master's degrees (advanced/paramedic) provide services in dispatch centers or on scene. Each provincial center or city has a dispatch center that responds to calls, provides counselling, dispatches EMTs and coordinates dispatched ambulances. Two EMTs in each ambulance are dispatched to the scene and should obtain advice from general physicians acting as Medical Direction's Physicians (MDPs) of dispatch in the provinces' EMS centers. In the cities' EMS centers, EMTs typically operate independently owing to the lack of MDPs and to communication challenges. General physicians do not attend the ambulance; most of them, as medical directors, provide online radio or telephone counselling to EMTs at the provinces' EMS centers, and some serve as heads of emergency centers. There is no general physician in the cities' EMS centers. Study design: A grounded theory method based on the Corbin and Strauss (2015) approach was used for data gathering. This method is useful for reaching a new area or exploring new perspectives on a known field.23,24 Participants' selection: Maximum variation sampling was performed, in which 26 participants, including Emergency Medical Technicians, Medical Direction's Physicians (MDPs), dispatchers, a representative for court affairs and EMS managers, were chosen through purposeful sampling (Table 1). Having practical or theoretical experience, being articulate and willingness to participate were the inclusion criteria. Moreover, observational field notes, by means of triangulation, were used for data collection as well as data validation. The saturation principle was applied, and data saturation was reached after 26 interviews and 10 field observations. Data collection: Data were collected through in-depth interviews and field observations. Interviews were conducted in Farsi at the participants' workplaces. Initially, three unstructured interviews were conducted. These first three interviews helped to develop the interview guide and to identify important concepts affecting on-scene decision making. Following that, 26 semi-structured interviews were conducted using the interview guide. Interviews started with a general question about participants' experiences of the on-scene decision making process. Probing questions were then used to explore participants' experiences. Examples of questions included "What are your challenges at the time of on-scene decision making?", "What factors affect your on-scene decision making in emergency situations?" and "What strategies do you use to deal with on-scene decision making's challenges?" The interviews lasted between 45 and 75 minutes and were conducted from October 2018 to July 2019. Each recorded interview was listened to several times and transcribed verbatim by the principal investigator (PI). The PI started observation to take field notes after three interviews. Moreover, in order to saturate the concepts by means of triangulation, ten observations were also conducted from November 2018 to March 2019. To do this, the PI took part in EMS missions as a participant observer. All relations and interactions were recorded as field notes. 
Data analysis: Data analysis was carried out simultaneously with, and immediately after, data collection. The PI listened to the audio files several times and compared the transcribed interviews with the recorded digital audio files. Based on the recommendations of Corbin and Strauss (2015),23 data collection and analysis proceeded continuously and iteratively. For open coding, transcribed interviews and field note observations were analyzed line by line. MSK and DKZ discussed the interview process and the extracted codes to guide future interviews and analysis; both are experts in the field of health in emergencies and disasters. Translation and back-translation were carried out by KB, who is an expert in the fields of emergency medicine and the English language. Through the researchers' engagement and consensus, the extracted codes were integrated into sub-categories and categories. Following that, via axial coding, the sub-categories and categories were compared and grouped based on their similarities and differences. Finally, through selective coding, the links between the categories and the core category were obtained, and a conceptual model was then designed based on the initial theoretical structure. The paradigm model was then presented to visualize the core category, which is the central phenomenon of the study.25 This model presents decision making in the context of fear and concern. The core category is affected by causal conditions, a group of situations that influence decision making in the context of fear and concern. Contextual conditions, as a component of the model, refer to the set of conditions in which the phenomenon arises and to which people respond via certain strategies. Intervening conditions are facilitators of, or barriers to, action-interaction strategies, which are purposeful acts taken by people to resolve a problem and which lead to a number of consequences.25 Trustworthiness: Four strategies, namely credibility, confirmability, dependability and transferability, were applied to achieve trustworthiness.26 To achieve credibility, triangulation of data collection (interviews and observational field notes), the researchers' eligibility in the field of health in emergencies and disasters, and their prolonged engagement with the data were employed. In addition, member checking, peer checking and finally expert checking were applied to improve the credibility of the findings. To determine data consistency, team members other than the PI, acting as external auditors, checked the generated codes, categories and sub-categories. Expert checking was carried out through the opinions of the research supervisor, DKZ and KB, on the findings. To establish confirmability, triangulation of data collection including face-to-face interviews, the bracketing principle and peer evaluation were used. Transferability was achieved using detailed explanations of the data collection, analysis and result interpretation. Finally, dependability was met through triangulation, a code-recode strategy and peer evaluation.
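Purely as an illustration of how the coding pipeline described above (open codes grouped into sub-categories and categories, then linked to a core category in a paradigm model) can be organised as data, a small Python sketch follows; the codes and labels shown are hypothetical placeholders, not the study's actual codebook.
# Illustrative data structure only; the labels below are hypothetical placeholders,
# not the study's actual codebook (1352 codes, 21 sub-categories, 6 categories).
from collections import defaultdict

# Open coding: each transcript or field-note excerpt yields one or more codes.
open_codes = [
    ("P3", "unsure about legal scope of intubation", "unclear duties"),
    ("FNO2", "afraid to leave a non-urgent patient at the scene", "insufficient authority"),
    ("P16", "protocols too complex to use on scene", "lack of usable protocols"),
]

# Axial coding: group codes into sub-categories / categories.
categories = defaultdict(list)
for source, code, subcategory in open_codes:
    categories[subcategory].append((source, code))

# Selective coding: link categories to the core category of the paradigm model.
paradigm_model = {
    "core_category": "decision making in the context of fear and concern",
    "causal_conditions": ["unclear duties", "insufficient authority",
                          "lack of usable protocols"],
    "linked_codes": dict(categories),
}
print(paradigm_model["core_category"])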
Results
This study explored the EMTs' on-scene decision making process. In the initial analysis, 1352 codes were extracted and finally classified into six categories and 21 sub-categories. "Decision making in the context of fear and concern" was defined as the core category. This type of decision making means that most EMTs are concerned about the consequences of their decisions because of ambiguous tasks and responsibilities and insufficient authority, competency and decision-making protocols. These conditions occur in a setting marked by lack of trust and inadequate supportive rules for EMTs. Intervening conditions including interactions, feelings and the "customer focus approach" intervene in the decision making. The participant number is indicated by (P) for each quote and (FNO) for each field note observation. Based on the findings of this study, in dealing with fear and concern, EMTs adopt two types of action-interaction strategies in order to manage their stress and anxiety. They may first consult with their co-worker, an MDP or the dispatch operator. If this strategy does not solve the problem, they will refer to the existing protocols. According to the findings of this study, there are many weaknesses and barriers to these strategies, so, out of frustration, some EMTs may adopt strategies including irresponsibility, a transport-only approach, and leaving the scene or transferring patients without medical intervention. All of these strategies lead to negative consequences, including reduced quality of EMS services, reduced stakeholder satisfaction, hospital emergency unit overload, reduced EMT reputation, and threats to patient safety and outcomes (Figure 1). Causal conditions: In this study, factors including unclear duties and ambiguous tasks, insufficient EMT authority, inadequate competencies of some EMTs, and a lack of appropriate on-scene decision-making protocols and guidelines led to decision making in the context of fear and concern. According to EMTs, unclear duties and responsibilities are among the most important factors causing fear and concern at the time of decision making. The EMTs are not completely aware of their duties, and often their interventions are not legal. They described that multiple and sometimes indefinite responsibilities not only lead to neglect of necessary interventions but also lead to excessive stress and anxiety. "…I don't know exactly what my duty and responsibility are. If a complication occurred in a patient due to my intervention and its possible failure, would we be prosecuted and have to pay compensation? ... What duty do we have for patients who refuse to be cared for? Is a patient's endotracheal intubation or medication illegal without an MDP's advice…?" (P3) Inadequate authority relative to delegated responsibility, and inadequate competencies, including the four important factors of knowledge, skills, experience and accurate judgment, were identified as influencing the decision-making process of EMTs. The EMTs were concerned about the lack of sufficient authority. They believed that delegating multiple responsibilities without giving them sufficient authority would deprive many patients of emergency medical interventions and care. According to participants and observation field notes, some EMTs were insufficiently competent to provide emergency interventions and care. They declared that clinical judgment, experience, knowledge, effective communication skills and scene management ability are critical to making the right decision in a timely manner. 
Field observations showed that there were cases of poor emergency scene management when assisting patients whose families were agitated, which resulted in violence against EMTs. "Some co-workers have enough experience and great skill to manage the scene. Judging correctly in spite of the patient's anger and laypeople's illogical interference is important. ...sometimes a patient does not need an intervention or special care, but unfortunately, we don't have enough authority to leave such a patient. Sometimes the patient doesn't have to be taken to the hospital, but because we don't have enough authorization, we are afraid of not taking the patient to the hospital." (FNO2) According to the EMTs, making a decision to manage patients is sometimes very difficult and consultation between EMTs may be ineffective, so guidelines and protocols are very helpful. If decisions are made on the basis of scientific protocols and guidelines, then, in addition to enabling appropriate medical interventions, the concerns and fears of EMTs about the negative consequences of their interventions will be alleviated. In other words, EMTs can make better decisions with these supporting documents. The lack of protocols for some emergencies and deficiencies in existing protocols were a major challenge for decision making. "... sometimes I talk to my colleague about the patient, but we really don't know what to do ... guidance can be very helpful, but the number of our pre-hospital emergency guidelines and protocols is limited ... the available guidelines and protocols are unusable because of their simplicity or complexity." (P16) Contextual conditions: In this paradigm model, community setting, trust, and supportive and supervisory rules were identified as contextual conditions. The EMTs were extremely dissatisfied with the irrational and sometimes violent intervention of laypeople at the scene. Distress caused by irrational interference disrupts emergency medical care and results in rapid transport without medical care. Another important aspect of the community setting is the level of public education and awareness. The participants believed that community perceptions of and attitudes toward the importance of pre-hospital care play an important role in cooperation and collaboration. In the field observations, the involvement of laypeople with severe verbal violence was observed. "Unfortunately, sometimes people angrily disrespect us. Although we do our best to help a patient with life-threatening conditions, they prevent us with irrational interference. I am sure that the main reason for these behaviors and interferences is low understanding and awareness." (P10) The findings of the study showed that trust had a significant impact on EMTs' decision making. All participants believed that lack of trust is a major barrier to participatory decision making. They mentioned that there is insufficient trust between EMTs, dispatchers, MDPs and hospital emergency physicians. Some MDPs do not trust the technician's description and only recommend transfer instead of providing medication orders and medical advice. The lack of proper decision-making conditions due to insufficient trust leads to fear and anxiety among EMTs. The EMTs believed that there is no "chain of trust" to facilitate the decision-making process, so people do not trust the knowledge, skills and importance of EMTs' medical approaches and care. "I believe that team-working in the pre-hospital emergency services is not possible without trust. 
I think there is no sufficient trust among our staff. When a physician does not have enough trust in the technician's report, how can he order a medical intervention? (P9) ... People do not trust our EMTs' capability ... because of the weak trust chain, sometimes some patients and their companions are not honest...". (P6) This study showed that supervisory and supportive laws play an important role in emergency medical decision making. The lack of clear rules and regulations defining the scope of EMTs' duties and the lack of proper professional liability coverage by insurance companies were identified as among the most important factors leading to EMTs' concern and fear. The EMTs stated that they were constantly afraid of the possible negative consequences of their decisions and interventions. Uncertainty when dealing with an unaccompanied elderly or child patient who refuses care, and the lack of clear rules regarding end-stage patients and cardiopulmonary arrest patients with clinical death, were the most important stressors for decision making. ".... We don't know what the legal consequences are of leaving an unaccompanied patient who refuses to receive care or transport ..." (P1) …there are many missions where the patient is dead and there is nothing to do, but we transfer them to hospitals due to lack of supportive laws and insurance." (P21) Intervening conditions: Interactions, feelings and the "customer focus approach" are three important intervening factors in EMTs' on-scene decision making that lead to action-interaction strategies. According to participants, although some patients do not have emergency medical problems, they or their families tend to exaggerate the problem in order to use the free services. On the other hand, some people behave violently toward the EMTs, and this verbal and non-verbal violence was identified as the most important factor disrupting interactions and, consequently, leading to decisions made with fear. "Sometimes some patients pretend to be ill, which we find out through physical and psychological examination..." (P13) Sometimes we face violent and inappropriate behavior from laypeople and bystanders that leads to stress, fear and wrong decisions." (P4) The EMTs declared that their decisions and actions are also influenced by their feelings. The findings of this study indicated that feelings such as altruism and empathy play an important role in the decisions of EMTs. Seeing patients' suffering affects the feelings and focus of the EMTs. Another negative feeling reported by EMTs was feeling unhelpful and a sense of inadequacy. They were sometimes unable to do something, in spite of being aware of the patient's needs. "... The pain and groaning of some patients, especially those who are end-stage and those who are really poor, affect me. We put all our focus and efforts on patients. Nevertheless, when I feel like some managers don't care, I'll be disappointed... " (P16) Excessive attention to the satisfaction of patients and their families by EMS system managers and policymakers was noted repeatedly by the EMTs as the "customer focus approach". Most participants claimed that some EMS managers and health policy makers only want to obtain the satisfaction of caregivers. The data extracted from observations and interviews showed that some managers neglected the EMTs' satisfaction, which had a negative effect on the EMTs' motivation and thus on their decisions. "... There are many cases with no emergencies that request an ambulance several times a week. 
Some patients or families threaten us very badly ..." (P14) … when we face violence, the executives say that, in any case, they are sick and your duty is to provide services ..." (P20) Action-interaction strategies: Based on the participants and field notes, action strategies including protocol- and guideline-based decision making and consultancy, and interaction strategies including the "why me? culture", the transport-only approach and leaving the scene without emergency medical intervention, were identified. EMTs use action strategies to overcome decision-making challenges and barriers. If consulting with another EMT cannot resolve a decision-making problem, they will ask the dispatcher to arrange a consultation with a physician, but most EMS centers do not have a physician. Protocol- and guideline-based decision making is another action strategy, but it faces challenges such as the difficulty of using paper-based protocols, forgetting protocols and the complexity of some protocols. "Sometimes I can't make a decision. I consult with my colleague. Sometimes my colleagues ask me what to do. We sometimes ask the dispatcher to consult with a physician, but most of the time it is not possible to consult with..." (P2) When EMTs become frustrated in adopting an action strategy, they adopt a strategy of irresponsibility that was coded as the "why me? culture". Most participants declared that, because of distress caused by bystanders' irrational interference, they refused to perform emergency medical interventions and to take responsibility. Fear of a decision's negative consequences, feelings, people's interference and scene conditions are factors that lead to the transport-only approach, so that EMTs sometimes leave the scene without providing care or intervention. "In my opinion, the uncertainty, the pressure of the scene, the fear of the negative consequences of the decision have led some of our colleagues to not take responsibility...". (P11) Consequences: According to most participants, the EMTs' frustration in the on-scene decision-making process leads to a decrease in the quality of pre-hospital services, stakeholders' dissatisfaction, hospital emergency unit overload, a decrease in the reputation of the EMTs, and threats to patient clinical outcomes and patient safety. The participants believed that strategies such as the "why me? culture", the transport-only approach and leaving the scene without providing emergency medical care would lead to a decrease in the quality of pre-hospital services. The EMTs' concern and fear lead to hasty or incorrect decisions and thus threaten patient safety. Rapid and correct on-scene decision making leads to timely medical intervention and emergency care and thus to caregivers' satisfaction. Furthermore, as a result of correct decisions, timely and adequate pre-hospital care prevents the transport of all cases to the hospital and the resulting hospital emergency unit burden. "A lot of times we have to move patients as soon as we arrive at the scene…" (P10) …transporting all cases to the hospital causes overcrowding and overloading of the hospital…" (P22) … inadequate pre-hospital care leads to threats to patients' safety and patients' dissatisfaction". (P10) Another negative consequence was the reduction of EMT reputation. In other words, some people do not trust the knowledge and competence of EMTs. The EMTs claimed that some people do not believe in them. 
Furthermore, the attitudes and perceptions of hospital staff, including nurses and physicians, regarding the inadequacy of EMTs' competence were unbearable to EMTs. "People think we are just drivers. I think the main reason for this attitude is transporting patients without enough emergency care". (P1)
Conclusion
This study designed a paradigm model to explain the process and to explore the relationships between the different components of, and the factors affecting, EMTs' on-scene decision making. Technicians' decision making in the context of fear and concern leads to action-interaction strategies that ultimately result in stakeholders' dissatisfaction, a decrease in EMTs' reputation, and threats to patient safety and related outcomes. According to the model, unclear tasks, insufficient EMT authority, inadequate competencies of some EMTs, and a lack of sufficient protocols and guidelines were categorized as causal conditions. Promoting EMTs' competencies, delegating sufficient authority, and passing supportive rules for emergency personnel are suggested. Based on this model, several contextual and intervening conditions affect EMTs' decisions. Finally, to facilitate the on-scene decision making process, it is necessary that appropriate strategies be implemented in the national EMS system. Abbreviations: EMS: Emergency Medical Services; EMTs: Emergency Medical Technicians; MDPs: Medical Direction's Physicians; DKZ: Davoud Khorasani-Zavareh; MSK: Meysam Safi-Keykaleh; ZG: Zohreh Ghomian; KB: Katarina Bohm. Acknowledgements: This study is part of a Ph.D. thesis at the School of Public Health and Safety. The authors would like to thank Shahid Beheshti University of Medical Sciences as well as all the participants in this study, especially the EMTs.
[]
[]
[]
[ "Introduction", "Methods ", "Results", "Discussion", "Conclusion" ]
[ "Emergency Medical Services (EMS) centers were established to provide on-time and rapid services to the patients and injured from the scene to the hospital.1 These centers can reduce significantly morbidity and mortality with providing pre-hospital cares in life-threatening conditions.2 To manage life-threatening conditions, save lives and reduce morbidity and mortality, rapid and accurate on-scene decision making is an important and influential factor.3\n\nDecisions about the type and priority of emergency medical interventions require triage.4 As the conditions of some patients are complex and variable, the technicians' decision changes frequently, which can lead to technician error.5,6 Decision making errors threaten patient’s safety and Emergency Medical Technicians (EMTs)’ reputation.7-10 However, proper emergency medical decision-making leads to prevention of hospital emergency unit overcrowding, increased quality of health services and ultimately reduced morbidity and mortality.9,11\n\nEMTs' on-scene decision making is influenced by incredible barriers and challenges.7,12,13 These factors influence the thinking, feeling and performance of the EMTs and so, increase their decision error, especially in some interventions including endotracheal intubation, mediation and no transportation.14,15 Facing these barriers and challenges lead to the EMTs’ concern and fear which of these fears have a profound impact on patient outcomes.16,17 As stress and anxiety often arise at the time of providing on-scene health services,18 identifying these factors are necessary.\nThere is a paucity of studies in the field of EMTs’ on-scene decision making that have provided a good overview of the factors affecting dispatch services and transmission, but the decision-making process at the scene was not explored yet.19-21 While EMTs’ decision-making is a dynamic and on- going process starting from the scene until the patient is delivered to the hospital.20,22 This process will be influenced by multiple factors, so will have multiple outcomes. Understanding this process is crucial and can significantly address challenges and improve patient outcomes. Therefore, this study was conducted to design EMTs' on-scene decision making model in order to simplify the relationship between the components of this process.", "\nStudy setting\n\nIn Iran, national EMS system provides free services for patients and injured people from scene to hospital. In the provinces and cities, EMS centers are under supervision of universities medical sciences and national EMS system. EMTs including diploma(basic), nurses with associated degrees(intermediate), bachelor’s and master’s degrees (advanced/ paramedic) provide services in dispatch centers or on scene. There is a dispatch center in each provincial center or cities which response to calls, provide counselling, dispatch EMTs and coordinate dispatched ambulances.\nTwo EMTs in each ambulance were dispatched to the scene who should get advice from general physician as Medical Direction’s Physicians (MDPs) of Dispatch in the provinces’ EMS center. In the cities’ EMS centers. EMTs typically operate independently due to lack of MDPs and communicating challenges. General physicians did not attend the ambulance and most of them as a medical director provide online radio or telephone counselling to EMTs at proveniences’ EMS centers and some of them are as head of emergency centers. 
There is no general physician in the cities’ EMS center.\n\nStudy design \n\nA grounded theory method based on Corbin and Strauss approach in 2015 was conducted for data gathering. This method is useful for achieving new area or exploring new perspectives of known field.23,24\n\n\nParticipants’ selection\n\nMaximum variety sampling method was performed, of which 26 participants including Emergency Medical Technicians, Medical Direction’s Physicians (MDPs), dispatchers, representative for court affairs and EMS’ managers were chosen according to purposeful sampling (Table 1). Having practical or theoretical experience, being verbal and participations’ willingness were inclusion criteria. Moreover, observational field note by means of triangulation was used for data collection as well as data validation. Saturation principle was used for concept saturation, of which data saturation was reached after 26 interviews and 10 filed observations.\n\nData collection\n\nData were collected through in-depth interviews and field observations. Interviews were done in Farsi at the participants’ workplaces. Initially three unstructured interviews were conducted. These first three interviews helped to identify interview guideline and important concepts that effect on-scene decision makings. Following that 26 semi-structured interviews were conducted using interview guideline. Interviews were started with general question about participants’ experiences of on-scene decision making process. Following that, to explore participants’ experiences probing questions were performed. Examples of questions were including “What are your challenges at the time of on-scene decision making?”, “What factors effect on your on-scene decision making in emergency situations?”, “What strategies do you use to deal with on-scene decision making’s challenges?” The interviews’ duration lasted between 45 and 75 minutes from October 2018 to July 2019. Each recorded interview was listened several times and transcribed verbatim by the principal investigator (PI). The PI started observation to take field notes after three interviews. Moreover, in order to saturate the concept by means of triangulation, ten observations also conducted from November 2018 to March 2019. In order to do that, PI took part in the EMS’ mission as the participant as observer. In this regard, all relations and interactions were taken as the field notes.\n\nData analysis\n\nData analysis was carried out simultaneously and immediately after data collection. The PI listened the audio files several times and compared transcribed interviews with recorded digital audios file. Based on Corbin and Strauss recommendations in 2015,23 continuously during data analysis. Regarding open coding, transcribed interviews and filed note observations were analyzed line by line. In this regard, MSK and DKZ discussed interviews’ process and codes extracted to provide the guide of future interviews and analysis. Both are experts in the field of health in emergencies and disasters. Translate and back-translate was also carried out by KB who is expert in the field of emergency medicine and English language. As a result of the researchers’ engagement and their consensus, the extracted codes were integrated in sub-categories and categories. Following that, via axial coding, the sub-categories and categories were compared and categorized based on their similarities and differences. 
Finally, based on selective coding, the link of categories and core categories were obtained and after that conceptual model was designed based on initial theoretical structure. Following that the paradigm model was presented to visualize the core category that is the central and main phenomenon of study.25 This model presents the decision making in the context of fear and concern. This core category is affected by Causal condition as a group of situations that influence decision making in the context of fear and concern. Context conditions as a component of the model, refer to a group of conditions that the phenomenon is raised and people respond to it via some strategies. Intervening conditions are facilitators or barriers action-interaction strategies which are purposeful acts that are taken by people to resolve a problem and lead to a number of consequences.25\n\n\nTrustworthiness\n\nFour strategies including credibility, confirmability, dependability and transferability were applied to achieve trustworthiness.26 To achieve credibility, data collection triangulations including interviews and observational field, researchers’ eligibility in the field of health in emergencies and disasters as well as their prolonged engagement with data was employed. In addition, member checking, peer checking and finally expert checking were applied to improve credibility of the findings. To determine the data consistency, beside the PI other team members, as external auditors, checked generated codes, categories and sub-categories. Expert check was carried out by the opinions of research supervisor, DKZ and KB on the findings. To establish Confirmability, triangulation for data collection including face-to-face interview, bracketing principle and peer evaluation were used. Transferability was achieved using detailed explanations of the data collection, analysis and result interpretation. Finally, dependability was met thought triangulation, code-recode strategy, and peer evaluation.", "This study explored the EMTS 'on-scene decision making process. In the initial analysis, 1352 codes were extracted and finally classified into six categories and 21 sub-categories. “Decision making in the context of fear and concern” was defined as a core category. This type of decision means that most EMTs are concerned about the consequences of their decisions due to ambiguous tasks and responsibilities, insufficient authority, competency and decision-making protocols. These conditions are in the setting concluding lack of trust and inadequate supportive rules for EMTs. Intervening conditions including interactions, fillings, and “Customer focus approach” intervene in the decision making. The participant number were included (by P) for each quote and (FNO) for each field notes observation.\nBased on the findings of this study, in dealing with fear and concern, EMTs adopt two types of action-interaction strategies in order to manage their stress and anxiety. They may first consult with their co-worker, MDP, or dispatch operator. If this strategy does not solve the problem, they will refer to the existing protocols. Due to the findings of this study, there are many weaknesses and barriers to these strategies, so due to frustration, some EMTs may take the strategies including irresponsibility, only transportation approach and leave the scene and transfer the patients without medical intervention. 
All of these strategies lead to negative consequences include reduced quality of EMS services, reduced stakeholders' satisfaction, hospital emergency unit overload, reduced EMT reputation and threat to patient safety and their outcomes (Figure 1).\n\nCausal condition\n\nIn this study, factors including unclear duties and ambiguous tasks, insufficient EMTs’ authorities, inadequate some EMTs’ competencies, and lack of appropriate on-scene decision-making protocols and guidelines lead to decision making in the context of fear and concern. According to EMTs, unclear duties and responsibilities are one of the most important factors that cause fear and concern at the time of decisions making. The EMTs are not completely aware of their duties and often their intervention is not legal. They described that multiple and sometimes indefinite responsibilities not only lead to neglect of necessary interventions, but also lead to excessive stress and anxiety.\n\n\"…I don't know exactly what is my duty and responsibility. If occurred complication in a patient due to my intervention and its possible failure, would we be prosecuted and compensated? ... What duty do we have for patients who refuse to be cared? Whether a patient's endotracheal intubation or medication is illegal without a MDP’s advice…? \". (P3)\n\nInadequate authority in contrast with delegated responsibility, inadequate competencies including knowledge, skills, experience and accurate judgment were identified as four important factors influencing the decision-making process of EMTs. The EMTs were concerned about the lack of sufficient authority. They believed that delegating multiple responsibilities without giving them sufficient authority would deprive many patients of emergency medical interventions and cares. According to participants and observation field notes, some EMTs were insufficiently competent to provide emergency interventions and cares. They declared that clinical judgment, experience, knowledge and skills of effective communication and scene management ability are critical to making the right decision in a timely manner. Field observations showed that there were cases of poor emergency scene management when assisting the patients that their family were agitated, which resulted in violence against EMTs.\n\n\"Some co-workers have enough experience and great skill to manage the scene. Judging right in spite of the patient's anger and illogical laypeople’s interference is of important. ...sometimes a patient does not have need an intervention and special care, but unfortunately, we don't have enough authority to leave such a patient. Sometimes the patient doesn't have to be taken to the hospital, but because we don't have the enough authorization, we are afraid of not taking the patient to the hospital.\" (FNO2)\n\nAccording to the EMTs, sometimes making a decision to manage the patients is very difficult and EMT's consultation may be ineffective, so guidelines and protocols are very helpful. If decisions are made on the basis of scientific protocols and guidelines, in addition to appropriate medical interventions, concerns and fears of EMTs about the negative consequences of their interventions will be eliminated. In other words, EMTs can make better decisions with these supporting documents. The lack of protocols for some emergencies and deficiencies in existing protocols, was a major challenge of decision making.\n\n \"... sometimes I talk to my colleague about the patient, but we really don't know what to do ... 
a guidance can be very helpful, but the number of our pre-hospital emergency guidelines and protocols are limited ... the available guidelines and protocols because of their simplicity or complexity are unusable.\" (p16)\n\n\nContextual conditions\n\nIn this paradigm model, community setting, trust, supportive and supervisory rules were identified as contextual conditions. The EMTs were extremely unsatisfied with the irrational and sometimes violent intervention of laypeople on the scene. Distress caused by irrational interferences disrupts emergency medical cares and results in rapid transmission without medical care. Another important aspect of community setting is the level of public education and awareness. The participants believed that community perception and its’ attitudes toward the importance of pre-hospital cares have an important role in their cooperation and collaboration. In the field observation, the involvement of laypeople with severe verbal violence were observed.\n\n\"Unfortunately, sometimes people angrily disrespect us. Although we do our best to help a patient with life-threatening conditions, they prevent us by irrational interferences. I am sure that the main reason for these behaviors and interferences is the low understanding and awareness. “(P10)\n\nThe findings of the study showed that trust had a significant impact on EMT's decision making. All participants believed that lack of trust is a major barrier to participatory decision making. They mentioned that there is no sufficient trust between EMTs, dispatchers, MDPs, and hospital emergency physicians. Some MDPs do not trust the technician's description and only recommend transferring instead of providing medication and medical advice. Lack of proper decision-making condition due to insufficient trust leads to fear and anxiety of EMTs. The EMTs believed that there is no a \" Chain of trust \" to facilitate the decision-making process, so people do not have trust to knowledge, skills and importance of EMTs’ medical approaches and cares.\n\n\"I believe that team-working in the pre-hospital emergency services is not possible without trust. I think there is no sufficient trust among our staff. When a physician does not have enough trust to the technician's report, how to order medical intervention. (P9) ... People do not trust our EMTs' capability ... because of the weak trust chain, sometimes some patients and their companions are not honest... \". (P6)\n\nThis study addressed that supervisory and supportive laws play an important role in emergency medical decision making. Lack of clear rules and regulations to determine the scopes of EMTs’ duties and lack of proper professional liability coverage by insurance companies were obtained as one of the most important factors leading to EMTs’ concern and fear. The EMTs stated that they were constantly afraid of the possible negative consequences of their decisions and interventions. Uncertainties at the time of dealing with an alone elderly or child patient who refuse to receive care, lack of clear rules related to end-stage patient and cardiopulmonary arrest who had clinical death were the most important stressors for decision making.\n\n\".... 
We don't know what the legal consequences are of leaving an unaccompanied patient who refuses care or transport..." (P1) "…There are many missions where the patient is dead and there is nothing to do, but we transfer them to hospitals because of the lack of supportive laws and insurance." (P21)

Intervening conditions

Interactions, feelings, and the "customer focus approach" are three important intervening factors in EMTs' on-scene decision making that lead to action-interaction strategies. According to the participants, although some patients do not have emergency medical problems, they or their families tend to exaggerate the problem in order to use the free services. On the other hand, some people behave violently toward the EMTs; this verbal and non-verbal violence was identified as the most important factor disrupting interactions and, consequently, leading to decisions made in fear.

"Sometimes patients pretend to be ill, which we find out through physical and psychological examination..." (P13) "Sometimes we face violent and inappropriate behavior from laypeople and bystanders that leads to stress, fear and wrong decisions." (P4)

The EMTs stated that their decisions and actions are also influenced by their feelings. The findings of this study indicated that feelings such as altruism and empathy play an important role in the EMTs' decisions. Seeing patients' suffering affects the EMTs' feelings and focus. Another negative feeling reported by the EMTs was a sense of helplessness and inadequacy: they were sometimes unable to act despite being aware of a patient's needs.

"...The pain and groaning of some patients, especially those who are end-stage and those who are really poor, affect me. We put all our focus and effort on patients. Nevertheless, when I feel that some managers don't care, I get disappointed..." (P16)

Excessive attention by EMS system managers and policymakers to the satisfaction of patients and their families was repeatedly described by the EMTs as a "customer focus approach". Most participants claimed that some EMS managers and health policymakers care only about obtaining caregivers' satisfaction. The data extracted from observations and interviews showed that some managers neglected the EMTs' satisfaction, which had a negative effect on the EMTs' motivation and, in turn, on their decisions.

"...There are many non-emergency cases that request an ambulance several times a week. Some patients or families threaten us very badly..." (P14) "…When we face violence, the executives say that, in any case, they are sick and our duty is to provide services..." (P20)

Action-interaction strategies

Based on the participants and the field notes, action strategies including protocol- and guideline-based decision making and consultancy, and interaction strategies including a "why me?" culture, a transport-only approach, and leaving the scene without emergency medical intervention, were identified. EMTs use the action strategies to overcome decision-making challenges and barriers. If consulting another EMT cannot resolve a decision-making problem, they ask the dispatcher to arrange a consultation with a physician, but most EMS centers do not have a physician. Protocol- and guideline-based decision making is another action strategy, but it faces challenges such as the difficulty of using paper-based protocols, forgetting protocol content, and the complexity of some protocols.

"Sometimes I can't make a decision. I consult with my colleague. Sometimes my colleagues ask me what to do.
We sometimes ask the dispatcher to arrange a consultation with a physician, but most of the time this is not possible..." (P2)

When the EMTs are frustrated in adopting an action strategy, they adopt a strategy of irresponsibility that was coded as the "why me?" culture. Most participants stated that, because of the distress caused by bystanders' irrational interference, they refused emergency medical interventions and the responsibility for them. Fear of a decision's negative consequences, feelings, people's interference, and scene conditions lead to the transport-only approach, so that EMTs sometimes leave the scene without providing care or intervention.

"In my opinion, the uncertainty, the pressure of the scene, and the fear of the negative consequences of the decision have led some of our colleagues not to take responsibility...". (P11)

Consequences

According to most participants, the EMTs' frustration in the on-scene decision-making process leads to a decrease in the quality of pre-hospital services, stakeholders' dissatisfaction, hospital emergency unit overload, a decrease in the EMTs' reputation, and threats to patients' clinical outcomes and safety. The participants believed that strategies such as the "why me?" culture, the transport-only approach, and leaving the scene without providing emergency medical care decrease the quality of pre-hospital services. The EMTs' concern and fear lead to hasty or incorrect decisions and thus threaten patients' safety. Conversely, rapid and correct on-scene decision making leads to timely medical intervention and emergency care and therefore to caregivers' satisfaction. Furthermore, correct decisions and timely, adequate pre-hospital care prevent the transport of all cases to the hospital and the resulting burden on hospital emergency units.

"A lot of times we have to move patients as soon as we arrive at the scene…" (P10) "…Transporting all cases to the hospital causes overcrowding and overloading of the hospital…" (P22) "…Inadequate pre-hospital care threatens patients' safety and causes patients' dissatisfaction". (P10)

Another negative consequence was a reduction in the EMTs' reputation. In other words, some people do not trust the knowledge and competence of EMTs. The EMTs claimed that some people do not believe in them. Furthermore, the attitudes and perceptions of hospital staff, including nurses and physicians, that EMTs lack competence were unbearable to the EMTs.

"People think we are just drivers. I think the main reason for this attitude is transporting patients without enough emergency care". (P1)

Discussion

The findings of this study indicated that some EMTs face fear and concern due to unclear duties and ambiguous tasks, insufficient authority, inadequate competencies, and deficiencies in medical protocols. These factors occur in a setting of inadequate trust, inadequate supervision, and a lack of supportive laws. The findings suggested that intervening factors such as interactions, the EMTs' feelings, and the "customer focus approach" increase the EMTs' fear and concern at the time of decision making. Technicians may adopt strategies such as consultancy or protocol referencing to overcome this fear. If these strategies fail, they take an irresponsibility approach and leave the scene without medical intervention.
Based on the extracted model, these strategies lead to a decrease in the quality of pre-hospital services, reduced stakeholders' satisfaction, hospital emergency unit overload, a decrease in the EMTs' reputation, and threats to patients' safety and outcomes.

One of the most important factors contributing to the EMTs' fear and concern was insufficient authority. The EMTs believed that, despite numerous and critical tasks, they did not have sufficient authority to provide emergency medical interventions. Sufficient authority is essential, especially in the pre-hospital setting where patients are critically ill and in life-threatening conditions, because it is not possible to delegate vital responsibilities and provide lifesaving care without it.27

The lack of sufficient and usable protocols and guidelines was categorized as a causal condition. The EMTs have to make decisions based on their own judgment, which is not evidence-based, so they fear legal consequences arising from possible threats to patients' safety and from dissatisfaction. Studies have suggested that protocols and guidelines not only facilitate physicians' and nurses' work but also provide better outcomes for patients and a supportive space for health providers.28-30 A study that explored the experiences and perceptions of health providers in Iran showed that decision-making protocols complemented dispatchers' experience.31

Based on the participants' experiences, some EMTs lacked sufficient competency, including knowledge, skills, experience and self-management, which was an important cause of fear and concern. Other studies exploring the competencies of pre-hospital providers have likewise emphasized that adequate competencies are vital for timely and correct clinical judgment.32,33 Improving EMTs' competencies requires a continuous educational plan covering both knowledge and, in particular, skills, which is in line with a previous study on improving the pre-hospital phase in Iran.34

As a contextual condition, the community setting is one of the factors affecting the EMTs' on-scene decision-making process. One aspect of the community setting is the interference of bystanders and laypeople in relief and rescue operations. The reason for this interference was a lack of awareness of the EMTs' duties and of the importance of pre-hospital care. Other studies of road traffic injuries in Iran have also noted that bystanders' interference plays a negative role in pre-hospital emergency services21,35 because it leads to over- and under-triage and to EMT confusion.36,37

The findings of this study showed that pre-hospital emergency services should be provided in a context of mutual trust. This chain of trust should exist between providers and caregivers as well as among intra-organizational EMS staff. Studies have argued that trust is a vital component in the provision of health services and in promoting stakeholders' collaboration.38,39

According to the findings, the motive for malingering was avoiding, or obtaining, particular conditions. Because malingering patients resist the EMTs' interventions and care, and their families believe them, technicians face serious problems when dealing with these people.40,41 Empathy with such patients and their families, as well as respectful communication, has been recommended.42,43

Threats and violence against emergency personnel were identified as an aspect of negative and stressful interactions.
Studies have shown that fear, anxiety, sleep disorders, and reduced self-esteem are negative consequences of exposure to violence.44-46 According to the findings of this study, most technicians who had experienced violence were dissatisfied with their organization's poor support and follow-up, whereas the literature indicates that organizations should strive to satisfy their employees.47-49 On this basis, passing supportive legislation covering EMTs' duties, and in particular their on-scene protection, is recommended.

The findings of this study revealed that EMTs adopt various strategies when faced with decision-making barriers, one of the most important being the use of protocols; however, most technicians stated that the protocols are inadequate and complex. A study exploring the feedback system between dispatch centers and ambulances in Sweden also described that some EMS personnel did not use decision-making and dispatching protocols because of their complexity.50 A medical protocol should cover all the clinical aspects of an emergency situation in order to enable the best decision and thus the best possible service.51

Consulting dispatch operators and MDPs was another strategy the EMTs used to obtain guidance and, sometimes, to avoid the legal consequences of decisions by delegating responsibility to others. Responsibility is known as the cornerstone of medical service provision.52,53 However, the findings of this study showed that technicians face mistrust from some physicians as well as communication barriers. In line with our findings, similar studies have mentioned legal issues for the medical director.54,55

The analysis also indicated that some EMTs adopt a transport-only strategy with minimal, and sometimes no, care or medical intervention because of the irrational interference of laypeople and bystanders. Although technicians provide essential services to patients in the ambulance during transport, depriving patients of critical care at the scene reduces the quality of service and threatens patient safety. In line with our findings, other studies have highlighted hospital emergency unit overload and the waste of time and facilities on transferring non-emergency patients as negative consequences of this strategy.15,56-58

Study strengths and limitations

A qualitative approach was used to explore a model of EMTs' on-scene decision making in a middle-income setting. To the best of our knowledge, this is the first study of EMTs' on-scene decision making in Iran to employ both interviews and observation. To achieve maximum variation, participants were selected from among EMTs, dispatchers, managers, MDPs and policymakers. To ensure the consistency and credibility of the data, several methods were used, including constant comparative analysis, member checking, and peer review. One potential limitation is that the study was conducted in the Iranian context, which may limit its generalizability to other countries. Generalization is not the aim of qualitative research; nevertheless, given the shared problems, the results may be applicable in other low- and middle-income countries.

Conclusion

This study developed a paradigm model to explain the process of EMTs' on-scene decision making and to explore the relationships between its components and the factors affecting it.
Technicians' decision making in a context of fear and concern leads to action-interaction strategies that ultimately result in stakeholders' dissatisfaction, a decrease in the EMTs' reputation, and threats to patient safety and related outcomes. According to the model, unclear tasks, insufficient EMT authority, inadequate competencies among some EMTs, and a lack of sufficient protocols and guidelines were categorized as causal conditions. Promoting EMTs' competencies, delegating sufficient authority, and passing supportive rules for emergency personnel are suggested. Based on this model, several contextual and intervening conditions also affect the EMTs' decisions. Finally, to facilitate the on-scene decision-making process, these strategies need to be implemented in the national EMS system.

Abbreviations

EMS: Emergency Medical Services; EMTs: Emergency Medical Technicians; MDPs: Medical Direction's Physicians; DKZ: Davoud Khorasani-Zavareh; MSK: Meysam Safi-Keykaleh; ZG: Zohreh Ghomian; KB: Katarina Bohm.

Acknowledgements

This study is part of a Ph.D. thesis at the School of Public Health and Safety. The authors would like to thank Shahid Beheshti University of Medical Sciences as well as all the participants in this study, especially the EMTs.
[ "introduction", "methods", "results", "discussion", "conclusion" ]
[ "Emergency medical services", "Emergency medical technician", "Decision making", "Pre-hospital", "Iran " ]
Background: On-scene decision making is an influential factor in managing life-threatening conditions and reducing morbidity and mortality in the pre-hospital setting. Because pre-hospital decision making is a challenging process, it needs to be clearly described. This study was conducted to explore a model of Iranian emergency medical technicians' decision making in emergency situations. Methods: This study applied the grounded theory method using direct field observations and semi-structured interviews. Purposeful sampling was performed with 26 participants, including 17 emergency medical technicians as well as dispatchers, medical direction physicians, managers, and one representative for court affairs. Interviews were conducted from October 2018 to July 2019. The 2015 Corbin and Strauss approach (open, axial and selective coding) was used to analyze the data. Results: A paradigm model was developed to explain the relationships among the main categories. Decision making in the context of fear and concern emerged as the core category. Unclear duties, insufficient authority and competencies, and a lack of adequate decision-making protocols and guidelines were categorized as causal conditions. Other important categories linked to the core category were interactions, feelings and the "customer focus approach". The action-interaction strategies taken by emergency medical technicians lead to negative consequences that can threaten clinical outcomes and patient safety. Conclusions: Based on the findings of this study, emergency medical technicians' decision making in the context of fear and concern, the core concept of this model, leads to decreased quality of pre-hospital services, stakeholders' dissatisfaction, hospital emergency unit overload, decreased reputation of emergency medical technicians, and threats to patients' clinical outcomes and safety. To prevent these negative consequences, facilitating emergency medical technicians' on-scene decision making is recommended.
5,613
348
[]
5
[ "emts", "decision", "making", "decision making", "scene", "emergency", "medical", "hospital", "study", "patient" ]
[ "effect emts decisions", "emergency care concluded", "services emts emergency", "emt decision making", "decision making emergency" ]
[CONTENT] Emergency medical services | Emergency medical technician | Decision making | Pre-hospital | Iran [SUMMARY]
[CONTENT] Emergency medical services | Emergency medical technician | Decision making | Pre-hospital | Iran [SUMMARY]
[CONTENT] Emergency medical services | Emergency medical technician | Decision making | Pre-hospital | Iran [SUMMARY]
[CONTENT] Emergency medical services | Emergency medical technician | Decision making | Pre-hospital | Iran [SUMMARY]
[CONTENT] Emergency medical services | Emergency medical technician | Decision making | Pre-hospital | Iran [SUMMARY]
[CONTENT] Emergency medical services | Emergency medical technician | Decision making | Pre-hospital | Iran [SUMMARY]
[CONTENT] Decision Making | Emergency Medical Services | Emergency Medical Technicians | Grounded Theory | Humans | Iran | Qualitative Research [SUMMARY]
[CONTENT] Decision Making | Emergency Medical Services | Emergency Medical Technicians | Grounded Theory | Humans | Iran | Qualitative Research [SUMMARY]
[CONTENT] Decision Making | Emergency Medical Services | Emergency Medical Technicians | Grounded Theory | Humans | Iran | Qualitative Research [SUMMARY]
[CONTENT] Decision Making | Emergency Medical Services | Emergency Medical Technicians | Grounded Theory | Humans | Iran | Qualitative Research [SUMMARY]
[CONTENT] Decision Making | Emergency Medical Services | Emergency Medical Technicians | Grounded Theory | Humans | Iran | Qualitative Research [SUMMARY]
[CONTENT] Decision Making | Emergency Medical Services | Emergency Medical Technicians | Grounded Theory | Humans | Iran | Qualitative Research [SUMMARY]
[CONTENT] effect emts decisions | emergency care concluded | services emts emergency | emt decision making | decision making emergency [SUMMARY]
[CONTENT] effect emts decisions | emergency care concluded | services emts emergency | emt decision making | decision making emergency [SUMMARY]
[CONTENT] effect emts decisions | emergency care concluded | services emts emergency | emt decision making | decision making emergency [SUMMARY]
[CONTENT] effect emts decisions | emergency care concluded | services emts emergency | emt decision making | decision making emergency [SUMMARY]
[CONTENT] effect emts decisions | emergency care concluded | services emts emergency | emt decision making | decision making emergency [SUMMARY]
[CONTENT] effect emts decisions | emergency care concluded | services emts emergency | emt decision making | decision making emergency [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | hospital | study | patient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | hospital | study | patient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | hospital | study | patient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | hospital | study | patient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | hospital | study | patient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | hospital | study | patient [SUMMARY]
[CONTENT] decision | scene | making | decision making | emts | morbidity | mortality | morbidity mortality | process | emergency [SUMMARY]
[CONTENT] interviews | data | categories | data collection | collection | field | pi | ems | center | general [SUMMARY]
[CONTENT] emts | decision | trust | making | patient | decision making | hospital | patients | emergency | protocols [SUMMARY]
[CONTENT] emts | medical | study | model | decision | decision making | process | emergency | making | sufficient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | study | hospital | patient [SUMMARY]
[CONTENT] emts | decision | making | decision making | scene | emergency | medical | study | hospital | patient [SUMMARY]
[CONTENT] ||| ||| Iranian [SUMMARY]
[CONTENT] ||| 26 | 17 | 1 ||| Interviews | October 2018 to July 2019 ||| Strauss | 2015 [SUMMARY]
[CONTENT] ||| ||| ||| ||| Emergency Medical [SUMMARY]
[CONTENT] Emergency Medical | the Emergency Medical Technicians ||| the Emergency Medical Technicians' [SUMMARY]
[CONTENT] ||| ||| Iranian ||| ||| 26 | 17 | 1 ||| Interviews | October 2018 to July 2019 ||| Strauss | 2015 ||| ||| ||| ||| ||| ||| Emergency Medical ||| Emergency Medical | the Emergency Medical Technicians ||| the Emergency Medical Technicians' [SUMMARY]
[CONTENT] ||| ||| Iranian ||| ||| 26 | 17 | 1 ||| Interviews | October 2018 to July 2019 ||| Strauss | 2015 ||| ||| ||| ||| ||| ||| Emergency Medical ||| Emergency Medical | the Emergency Medical Technicians ||| the Emergency Medical Technicians' [SUMMARY]
Developing a Prediction Score for the Diagnosis of Malignant Pleural Effusion: MPE Score.
35092368
The objective of this study was to develop a diagnostic prediction model (the MPE score) for the diagnosis of malignant pleural effusion (MPE) by pleural fluid cytology.
BACKGROUND
Retrospective analysis of pleural fluid cytology was conducted in patients with MPE between 2018 and 2020. Multivariable logistic regression was used to explore the potential predictors. The selected logistic coefficients were transformed into a diagnostic predictive scoring system. Internal validation was done using the bootstrapping procedure.
MATERIALS AND METHODS
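As a rough illustration of the modelling steps described above (multivariable logistic regression, transformation of the coefficients into a points-based score, and assessment of discrimination), the following Python sketch uses synthetic data. The variable names, the scaling rule for the points, and the data themselves are assumptions for demonstration only, not the authors' code.

```python
# Hedged sketch: derive a points-based diagnostic score from logistic-regression
# coefficients on synthetic, randomly generated data (not the study data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 155
df = pd.DataFrame({
    "protein_gt_4_64": rng.integers(0, 2, n),  # pleural fluid protein > 4.64 g/dL
    "ldh_gt_555": rng.integers(0, 2, n),       # pleural fluid LDH > 555 IU/L
    "sugar_gt_60": rng.integers(0, 2, n),      # pleural fluid sugar > 60 mg/dL
    "lung_mass": rng.integers(0, 2, n),        # lung mass on imaging
    "double_tap": rng.integers(0, 2, n),       # repeated pleural fluid cytology
})
df["mpe"] = rng.integers(0, 2, n)              # outcome: positive cytology (random here)

fit = sm.Logit(df["mpe"], sm.add_constant(df.drop(columns="mpe"))).fit(disp=0)

# Turn each coefficient into integer points (bounded scaling; real derivations often
# divide by the smallest coefficient instead, so the weakest predictor scores 1 point).
coefs = fit.params.drop("const")
points = (coefs / coefs.abs().max() * 5).round().astype(int)
df["score"] = df[list(points.index)].mul(points, axis=1).sum(axis=1)

print(points.to_dict())
print("AUC of the point score:", round(roc_auc_score(df["mpe"], df["score"]), 3))
```

In the published model, a transformation of this kind yields the 0-17 point MPE score whose discrimination is reported in the Results field below.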
Pleural fluid cytology data from 155 patients evaluated for MPE were analyzed. Seventy-eight patients (50.32%) had positive pleural cytology. Lung cancer was the malignancy for which pleural fluid testing was most commonly performed, with 66.67% positive cytology. The predictive indicators included pleural fluid protein > 4.64 g/dL, pleural fluid LDH > 555 IU/L, and pleural fluid sugar > 60 mg/dL. Lung mass on imaging and a double tap for pleural cytology were also used in the derivation of the diagnostic prediction model. The score-based model showed an area under the receiver operating characteristic curve of 0.74 (95% CI 0.66-0.82). The developed MPE score ranged from zero to 17. The cut-off point was 15, with 88.31% specificity, 37.18% sensitivity, a positive predictive value of 0.76, and a negative predictive value of 0.58. Calibration was assessed with a calibration plot (p-value = 0.49 for the Hosmer-Lemeshow goodness-of-fit test). Internal validation with 1,000 bootstrap resamples showed good discrimination.
RESULTS
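For readers who want to see how the predictive values and likelihood ratios at the cut-off of 15 follow from the reported sensitivity, specificity, and the 78/77 case split, here is a small back-of-the-envelope check. The reconstructed 2x2 counts are approximations implied by the rounded percentages, not figures taken from the paper.

```python
# Illustrative check: with 78 MPE and 77 non-MPE patients, the reported sensitivity
# and specificity at a cut-off of 15 imply roughly the following 2x2 table,
# from which PPV, NPV and likelihood ratios follow.
tp = round(0.3718 * 78)   # ~29 true positives
fn = 78 - tp              # ~49 false negatives
tn = round(0.8831 * 77)   # ~68 true negatives
fp = 77 - tn              # ~9 false positives

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sens:.2%} spec={spec:.2%} PPV={ppv:.2f} NPV={npv:.2f}")
print(f"LR+={sens / (1 - spec):.2f} LR-={(1 - sens) / spec:.2f}")
# -> PPV ~0.76, NPV ~0.58, LR+ ~3.2, LR- ~0.71, matching the reported values.
```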
The MPE score, as a diagnostic prediction model, can be used to plan a more efficient diagnosis of MPE in patients with cancer and suspected MPE.
CONCLUSIONS
[ "Adult", "Area Under Curve", "Biomarkers, Tumor", "Body Fluids", "Clinical Decision Rules", "Female", "Humans", "Logistic Models", "Male", "Middle Aged", "Pleura", "Pleural Effusion, Malignant", "Predictive Value of Tests", "ROC Curve", "Retrospective Studies", "Risk Assessment", "Risk Factors", "Sensitivity and Specificity" ]
9258676
Introduction
Malignant pleural effusion (MPE) refers to finding cytology pleural fluid caused by the metastasis of malignant mesothelioma, which is mostly due to lung cancer in men and breast cancer in women (Psallidas et al., 2016; Agrawal et al., 2015; Mongardon et al., 2011; Aydin et al., 2009). The MPE is also the cause of exudative pleural effusion from 42% up to 77% (Valdes et al., 1996). Diagnosis of MPE through cytology initially showed 60% of positive cytology depending on the type of cancer cells and cancer severity (Antonangelo et al., 2015; Loddenkemper and Boutin, 1993). Later, the diagnostic accuracy of MPE was improved by using pleuroscopy to enhance the efficiency of testing metastasis to the pleura (Ali et al., 2019; Ferreiro et al., 2017). However, this method is an invasive procedure. Therefore, less invasive ones are used such as metabolic imaging with 18-fluoro-deoxy glucose positron emission tomography (FDG-PET). The sensitivity was increased to 90% (Nakajima et al.,2015; Toaff et al.,2005); nonetheless, this method could not determine the types of cancer cells. In addition, epigenetic analysis of the pleural fluid was used to distinguish malignant DNA from methylation-specific PCR (MSP). This could help the diagnosis of MPE and efficiently specify the types of cancer cells (Herman et al., 1996; Brock et al., 2005; Zhang et al., 2007). Nevertheless, this method is expensive and is not used widely. Cytology is still a key method with 60% sensitivity depending on the type of cancer (Johnston, 1985; Starr and Sherman, 1991; and Hsu, 1987). Mostly, positive cytology pleural fluid is found in lung cancer and breast cancer. According to studies (Garcia et al., 1994; Desai and Lee, 2017), the repetition pleural fluid cytology can increase the diagnostic opportunities by 24%. However, more than double of the repetition is impractical for the diagnosis of MPE (Garcia et al., 1994). Thus, pleuroscopy is also required for confirmation to conduct a pleural biopsy, which is an invasive procedure. The clinical features and pleural fluid profile should be used to assist MPE diagnosis as a routine clinical practice and a diagnostic prediction score to facilitate decision-making on whether to wait for cytology results or perform an invasive procedure for efficient and rapid MPE diagnosis. This is because some hospitals still have limited diagnosing capabilities, or patients may have to wait for the cytologic results for weeks. Moreover, not all hospitals have the facilities to perform pleuroscopy, which make the test inaccessible for many patients. Therefore, this diagnostic research aimed to develop a diagnostic prediction model to help the decision-making in the diagnosis of MPE and plan appropriate and efficient diagnostic guidelines in the future.
null
null
Results
The data of the cytologic results of pleural fluid of 166 patients were collected. Eleven patients were excluded; including six patients with incomplete data of the biochemical tests and five patients did not have the pathological results to confirm the malignancy diagnosis. Therefore, the data were collected for 155 patients. Seventy-eight patients (50.32%) had positive cytology whereas seventy-seven patients (49.68%) had negative cytology. In terms of the pathological diagnosis and among different cancers, lung cancer was the most frequent cancer (61.9%) that needed the pleural fluid test. It was also the cancer with 66.67% of positive cytology (Table1). Based on the univariate analysis of the clinical characteristics and pleural fluid profile on MPE, it was found that lung mass detected by clinical imaging, lung cancer, breast cancer, and lung cancer with extrathoracic metastasis were the factors significantly affecting the predictive variables on MPE (Table2). Model development After analyzing the variable factors by univariate logistic regression analysis, the potential predictors affecting the diagnosis of MPE were selected for the multivariate logistic regression analysis of the scoring system derivation. The area under the receiver operating characteristic curve (AUC) for the final model was equal to 0.74 (95% CI 0.66-0.82). Score transformation Each potential predictor in the multivariable model was assigned with a specific score derived from the logistic regression coefficient (Table 3). The scoring scheme had a total score ranging from zero to 17. For the discriminative ability, the area under the parametric ROC curve for the score-based logistic regression model was equal to 0.74 (95% CI 0.66-0.82) (Figure 1). The measurement of the calibration is illustrated with a calibration plot, and the p-value via the Hosmer-Lemeshow goodness of fit test is equal to 0.49 (Figure 2). According to the sensitivity and specificity in each cut-off point, the point at 15 had 88.31% of specificity and 37.18% of sensitivity. This point displayed appropriate specificity that could be used as a diagnosis tool (Table 4). With the cut-off point of the MPE score at 15, to help the diagnosis of MPE, the odds ratio was equal to 3.18 (95% CI 1.35-8.11; p-value 0.004), positive predictive value (PPV) was equal to 0.76, negative predictive value (NPV) was equal to 0.58, positive likelihood ratio (LR+) was equal to 3.18, and negative likelihood ratio (LR-) was equal to 0.71. Internal validation Through conducting the internal validation using the predictive model with 1,000 resampling bootstrap method data set, the mean of the AUC of the apparent curve was obtained equal to 0.75, the test curve was equal to 0.72 (bootstrap estimator), and average estimates of the optimism curve was equal to 0.03 (Table 5). Types of Cancer Confirming the Pathological Diagnosis with the Cytologic Results of the Pleural Fluid Performance of the Clinical Risk Score, Area under the Receiver Operating Characteristics Curve (AUC), and 95% Confidence Band (Above). The calibration plots (pmcalplot) comparing the observed probabilities (y) and predicted probabilities (x) of the use of the MPE score to predict MPE (Below). 
Univariate Logistic Regression Analysis of MPE and Variable Factors protein ratio, Pleural fluid protein / serum protein; LDH ratio, Pleural fluid LDH / serum LDH; *, statistical significant Observed risk (circle) versus the score predicted risk (solid line) of the positive cytology pleural effusion (malignant pleural effusion). The size of the circle represents the frequency of MPE in each score (Left). Well-fitting model shows non-significance difference between the model and the observed data on the Hosmer-Lemeshow goodness of fit test (p-value 0.49) (Right) Risk Score Derivation Using Multivariate Logistic Regression Coefficients The Sensitivity, Specificity, Positive Likelihood Ratio (LR+), and Negative Likelihood Ratio (LR-) of Each Cut-Off Point Value of the MPE Score Internal Validation via 1,000 Resampling Bootstrap Method
null
null
[ "Author Contribution Statement", "Funding Statement" ]
[ "Chaichana Chantharakhit: Designed the study, reviewed the paper, collected data, analyzed data, and edited the final version. Nantapa Sujaritvanichpong: Collected data. All authors read and approved the final version.", "The authors confirm that there are no relevant financial or non-financial competing interests to report and no conflicts of interest to declare." ]
[ null, null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Author Contribution Statement", "Funding Statement", "Data Availability" ]
[ "Malignant pleural effusion (MPE) refers to finding cytology pleural fluid caused by the metastasis of malignant mesothelioma, which is mostly due to lung cancer in men and breast cancer in women (Psallidas et al., 2016; Agrawal et al., 2015; Mongardon et al., 2011; Aydin et al., 2009). The MPE is also the cause of exudative pleural effusion from 42% up to 77% (Valdes et al., 1996).\nDiagnosis of MPE through cytology initially showed 60% of positive cytology depending on the type of cancer cells and cancer severity (Antonangelo et al., 2015; Loddenkemper and Boutin, 1993). Later, the diagnostic accuracy of MPE was improved by using pleuroscopy to enhance the efficiency of testing metastasis to the pleura (Ali et al., 2019; Ferreiro et al., 2017). However, this method is an invasive procedure. Therefore, less invasive ones are used such as metabolic imaging with 18-fluoro-deoxy glucose positron emission tomography (FDG-PET). The sensitivity was increased to 90% (Nakajima et al.,2015; Toaff et al.,2005); nonetheless, this method could not determine the types of cancer cells.\nIn addition, epigenetic analysis of the pleural fluid was used to distinguish malignant DNA from methylation-specific PCR (MSP). This could help the diagnosis of MPE and efficiently specify the types of cancer cells (Herman et al., 1996; Brock et al., 2005; Zhang et al., 2007). Nevertheless, this method is expensive and is not used widely.\nCytology is still a key method with 60% sensitivity depending on the type of cancer (Johnston, 1985; Starr and Sherman, 1991; and Hsu, 1987). Mostly, positive cytology pleural fluid is found in lung cancer and breast cancer. According to studies (Garcia et al., 1994; Desai and Lee, 2017), the repetition pleural fluid cytology can increase the diagnostic opportunities by 24%. However, more than double of the repetition is impractical for the diagnosis of MPE (Garcia et al., 1994). Thus, pleuroscopy is also required for confirmation to conduct a pleural biopsy, which is an invasive procedure.\nThe clinical features and pleural fluid profile should be used to assist MPE diagnosis as a routine clinical practice and a diagnostic prediction score to facilitate decision-making on whether to wait for cytology results or perform an invasive procedure for efficient and rapid MPE diagnosis. This is because some hospitals still have limited diagnosing capabilities, or patients may have to wait for the cytologic results for weeks. Moreover, not all hospitals have the facilities to perform pleuroscopy, which make the test inaccessible for many patients. Therefore, this diagnostic research aimed to develop a diagnostic prediction model to help the decision-making in the diagnosis of MPE and plan appropriate and efficient diagnostic guidelines in the future.", "Clinical characteristics of patients with suspected MPE and cytology results of pleural fluid in Buddhasothorn Hospital, Chachoengsao, and Thailand were collected between 2018 and 2020.. The inclusion criteria were as follows:\n1) Patients older than 18 years.\n2) The results of the pleural fluid comprised of biochemical tests and key serum tests, i.e., lactate dehydrogenase (LDH) and protein. \n3) Cancer data based on the radiology findings. \nThe exclusion criteria consisted of:\n1) No results of a pathological diagnosis in the case of negative cytology pleural fluid based on the cytologic results. 
\n\nData analysis \n\nStep 1: The data were analyzed to find the potential factors in the diagnosis of positive cytology pleural fluid (MPE) using univariate regression analysis and multivariate logistic regression analysis.\nStep 2: Multiple imputation for missing data: Three predictor variables (pleural fluid white blood cell, pleural fluid lymphocyte, pleural fluid sugar) had more than 10% missing values, which could lead to biased estimates of the diagnostic model with the complete-case analysis. Multiple imputation with chained equation via mi impute chained command was used to generate missing values prior to model derivation. The logit model was chosen for the imputation of multivariable missing predictors.\nStep 3: The predictive variables from the multivariate logistic regression analysis were brought into the transformation of the risk score. A logistic regression coefficient was used to develop the MPE score to help diagnosing MPE. \nStep 4: The area under the receiver operating characteristic curve (AUC) based on the MPE score for the diagnosis of MPE was calculated and showed the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), likelihood ratio for a positive test (LR +) and likelihood ratio for a negative test (LR -).\nStep 5: The accuracy was tested through calibration curve and the Hosmer-Lemeshow goodness of fit test. Internal validation was tested by using the bootstrapping procedure (1,000 replicates).\nThis study was approved by the Institutional Review Board of Buddhasothorn Hospital under the codes BSH-IRB 036/2563", "The data of the cytologic results of pleural fluid of 166 patients were collected. Eleven patients were excluded; including six patients with incomplete data of the biochemical tests and five patients did not have the pathological results to confirm the malignancy diagnosis. Therefore, the data were collected for 155 patients.\nSeventy-eight patients (50.32%) had positive cytology whereas seventy-seven patients (49.68%) had negative cytology. In terms of the pathological diagnosis and among different cancers, lung cancer was the most frequent cancer (61.9%) that needed the pleural fluid test. It was also the cancer with 66.67% of positive cytology (Table1).\nBased on the univariate analysis of the clinical characteristics and pleural fluid profile on MPE, it was found that lung mass detected by clinical imaging, lung cancer, breast cancer, and lung cancer with extrathoracic metastasis were the factors significantly affecting the predictive variables on MPE (Table2).\n\nModel development \n\nAfter analyzing the variable factors by univariate logistic regression analysis, the potential predictors affecting the diagnosis of MPE were selected for the multivariate logistic regression analysis of the scoring system derivation. The area under the receiver operating characteristic curve (AUC) for the final model was equal to 0.74 (95% CI 0.66-0.82).\n\nScore transformation\n\nEach potential predictor in the multivariable model was assigned with a specific score derived from the logistic regression coefficient (Table 3). The scoring scheme had a total score ranging from zero to 17. For the discriminative ability, the area under the parametric ROC curve for the score-based logistic regression model was equal to 0.74 (95% CI 0.66-0.82) (Figure 1). The measurement of the calibration is illustrated with a calibration plot, and the p-value via the Hosmer-Lemeshow goodness of fit test is equal to 0.49 (Figure 2). 
\nAccording to the sensitivity and specificity in each cut-off point, the point at 15 had 88.31% of specificity and 37.18% of sensitivity. This point displayed appropriate specificity that could be used as a diagnosis tool (Table 4). With the cut-off point of the MPE score at 15, to help the diagnosis of MPE, the odds ratio was equal to 3.18 (95% CI 1.35-8.11; p-value 0.004), positive predictive value (PPV) was equal to 0.76, negative predictive value (NPV) was equal to 0.58, positive likelihood ratio (LR+) was equal to 3.18, and negative likelihood ratio (LR-) was equal to 0.71.\n\nInternal validation \n\nThrough conducting the internal validation using the predictive model with 1,000 resampling bootstrap method data set, the mean of the AUC of the apparent curve was obtained equal to 0.75, the test curve was equal to 0.72 (bootstrap estimator), and average estimates of the optimism curve was equal to 0.03 (Table 5).\nTypes of Cancer Confirming the Pathological Diagnosis with the Cytologic Results of the Pleural Fluid\nPerformance of the Clinical Risk Score, Area under the Receiver Operating Characteristics Curve (AUC), and 95% Confidence Band (Above). The calibration plots (pmcalplot) comparing the observed probabilities (y) and predicted probabilities (x) of the use of the MPE score to predict MPE (Below).\nUnivariate Logistic Regression Analysis of MPE and Variable Factors\nprotein ratio, Pleural fluid protein / serum protein; LDH ratio, Pleural fluid LDH / serum LDH; *, statistical significant\nObserved risk (circle) versus the score predicted risk (solid line) of the positive cytology pleural effusion (malignant pleural effusion). The size of the circle represents the frequency of MPE in each score (Left). Well-fitting model shows non-significance difference between the model and the observed data on the Hosmer-Lemeshow goodness of fit test (p-value 0.49) (Right)\nRisk Score Derivation Using Multivariate Logistic Regression Coefficients\nThe Sensitivity, Specificity, Positive Likelihood Ratio (LR+), and Negative Likelihood Ratio (LR-) of Each Cut-Off Point Value of the MPE Score\nInternal Validation via 1,000 Resampling Bootstrap Method", "Pleural fluid found in malignant disease was due to two major conditions, i.e., paramalignant pleural effusion (PMPE) and malignant pleural effusion (MPE) (Wong et al., 1963; Epelbaum and Rahman, 2019). The PMPE is not a consequence of a malignant disease spreading to the pleura. The probability that an effusion is paramalignant is higher when the effusion is transudative, while MPE is exudative. Therefore, understanding the differentiation between PMPE and MPE is necessary.\nThere are studies on the use of the cancer ratio using the ratio of serum lactate dehydrogenase (LDH) to adenosine deaminase (ADA) in pleural fluid. The ratios used were based on the cut-off level > 20 to help diagnose the causes of exudative pleural fluid between benign and MPE. It was found that sensitivity and specificity were high because the relationship of the levels of serum LDH was usually high in a malignant disease (Verma et al., 2016; Korczyński et al., 2018; Verma et al., 2016). This was a result of using glycolysis for energy in tumor cells instead of oxidative phosphorylation, a switch in the adenosine triphosphate (ATP) generating pathways, which was mediated by LDH (Pfeiffer et al., 2001; Goldman et al., 1964; Mansouri et al., 2017). 
Likewise, the infection caused by tuberculosis in the pleural fluid usually had a higher ADA secreted by mononuclear cells, lymphocytes, neutrophils, and red blood cells (Liang et al., 2008; Jiménez Castro et al., 2003). \nHowever, the meta-analysis of using the cancer ratio for the diagnosis of MPE was based on the data from the PubMed and EMBASE databases The cancer ratio had a high diagnostic accuracy for predicting MPE. The pooled sensitivity and specificity of the cancer ratio were equal to 0.97 (95% CI 0.92-0.99) and 0.89 (0.69-0.97) respectively; with AUC equal to 0.98 (95% CI 0.97-0.99). Nevertheless, there were some limitations due to the bias of patient selection and potential partial verification (Han et al., 2019). Yet, it was frequently found that serum LDH may not be raised in the case of cancer. A high LDH may be related to poorer overall survival. Some minor studies (Chantharakit, 2018) also found that high LDH was related to cancer under liver metastasis; however, the data still contained a few limitations.\nPorcel et al., (2004) used a panel of tumor markers, i.e., carcinoembryonic antigen (CEA), cancer antigen (CA) 125, carbohydrate antigen (CA) 15-3, and cytokeratin 19 fragments in pleural fluid for the differential diagnosis of benign and malignant effusions. The combination of the four tumor markers reached a sensitivity of 54%, whereas the combined use of the cytology and the tumor marker panel increased the diagnostic yield of the former by 18% (95% CI; 13-23%). Yang et al., (2017) reported about a updated meta-analysis of patients with undiagnosed pleural effusion and showed that the combinations of positive pleural CEA + CA 15-3 and CEA + CA 19-9 were highly suspicious for pleural malignancy. Still, the sensitivity of these tests was poor.\nClive et al., (2014) studied prognostic indicators and found that the ones affecting the survival of MPE patients were pleural fluid LDH, the Eastern Cooperative Oncology Group (ECOG) performance status, and neutrophil-to-lymphocyte ratio (NLR). It was also found that the tumor type could be developed by the LENT scoring system as a prognostic prediction model. The levels of pleural fluid LDH were key markers of inflammation or cellular injury. LDH levels greater than three times the upper limit of normal (often >1,000 U/L) are often indicative of pleural infection. This can also be associated with rheumatoid pleurisy, tuberculous pleurisy or malignancy.\nThis study is different from previous studies as it focused only on PME. Pleural fluid cytology results may be positive malignant cells or negative malignant cells and it is not the intention of this study to distinguish MPE from benign disease. Therefore, all the pleural fluid was exudative pleural fluid according to Light’s criteria. The clinical information fitting the malignant disease was used to find the predictive indicators affecting the diagnosis of MPE using pleural fluid cytology to develop a diagnostic prediction model (MPE score). This was the first diagnostic prediction model used to assist in the diagnosis of MPE in patients with cancer. The data from the pleural fluid biomarkers were used along with the clinical data of the patients rather than using only the data from the biomarkers.\nHowever, the standard diagnosis of MPE features pleural fluid cytology supported by testing for confirmation by pleural biopsy in the case of negative pleural fluid cytology; still, with suspected MPE. This is an invasive procedure. 
Despite the effort to use a non-invasive technique, e.g., biomarker tests or molecular analysis from the pleural fluid for the diagnosis of MPE, the less invasive methods have not become popular yet. The validated clinical data found that there were still some limitations of use; thus, further studies are required.\nTherefore, using the MPE score at the cut-off point of 15, which has high specificity, may help in predicting MPE diagnosis to make decisions about planning for investigation while waiting for pleural fluid cytology results.This would enhance better efficiency of the diagnosis of MPE.", "Chaichana Chantharakhit: Designed the study, reviewed the paper, collected data, analyzed data, and edited the final version. Nantapa Sujaritvanichpong: Collected data. All authors read and approved the final version.", "The authors confirm that there are no relevant financial or non-financial competing interests to report and no conflicts of interest to declare.", "The data used to support the findings of this study have been deposited in the repository [https://drive.google.com/drive/folders/1Enrqg5Zq_co3rY73-epiqymNe_njLL7W?usp=sharing].\nWe confirm that there are no relevant financial or non-financial competing interests to report and no conflicts of interest to declare" ]
[ "intro", "materials|methods", "results", "discussion", null, null, "data-availability" ]
[ "MPE score", "pleural fluid cytology", "diagnostic prediction model", "malignant pleural effusion" ]
Introduction: Malignant pleural effusion (MPE) refers to finding cytology pleural fluid caused by the metastasis of malignant mesothelioma, which is mostly due to lung cancer in men and breast cancer in women (Psallidas et al., 2016; Agrawal et al., 2015; Mongardon et al., 2011; Aydin et al., 2009). The MPE is also the cause of exudative pleural effusion from 42% up to 77% (Valdes et al., 1996). Diagnosis of MPE through cytology initially showed 60% of positive cytology depending on the type of cancer cells and cancer severity (Antonangelo et al., 2015; Loddenkemper and Boutin, 1993). Later, the diagnostic accuracy of MPE was improved by using pleuroscopy to enhance the efficiency of testing metastasis to the pleura (Ali et al., 2019; Ferreiro et al., 2017). However, this method is an invasive procedure. Therefore, less invasive ones are used such as metabolic imaging with 18-fluoro-deoxy glucose positron emission tomography (FDG-PET). The sensitivity was increased to 90% (Nakajima et al.,2015; Toaff et al.,2005); nonetheless, this method could not determine the types of cancer cells. In addition, epigenetic analysis of the pleural fluid was used to distinguish malignant DNA from methylation-specific PCR (MSP). This could help the diagnosis of MPE and efficiently specify the types of cancer cells (Herman et al., 1996; Brock et al., 2005; Zhang et al., 2007). Nevertheless, this method is expensive and is not used widely. Cytology is still a key method with 60% sensitivity depending on the type of cancer (Johnston, 1985; Starr and Sherman, 1991; and Hsu, 1987). Mostly, positive cytology pleural fluid is found in lung cancer and breast cancer. According to studies (Garcia et al., 1994; Desai and Lee, 2017), the repetition pleural fluid cytology can increase the diagnostic opportunities by 24%. However, more than double of the repetition is impractical for the diagnosis of MPE (Garcia et al., 1994). Thus, pleuroscopy is also required for confirmation to conduct a pleural biopsy, which is an invasive procedure. The clinical features and pleural fluid profile should be used to assist MPE diagnosis as a routine clinical practice and a diagnostic prediction score to facilitate decision-making on whether to wait for cytology results or perform an invasive procedure for efficient and rapid MPE diagnosis. This is because some hospitals still have limited diagnosing capabilities, or patients may have to wait for the cytologic results for weeks. Moreover, not all hospitals have the facilities to perform pleuroscopy, which make the test inaccessible for many patients. Therefore, this diagnostic research aimed to develop a diagnostic prediction model to help the decision-making in the diagnosis of MPE and plan appropriate and efficient diagnostic guidelines in the future. Materials and Methods: Clinical characteristics of patients with suspected MPE and cytology results of pleural fluid in Buddhasothorn Hospital, Chachoengsao, and Thailand were collected between 2018 and 2020.. The inclusion criteria were as follows: 1) Patients older than 18 years. 2) The results of the pleural fluid comprised of biochemical tests and key serum tests, i.e., lactate dehydrogenase (LDH) and protein. 3) Cancer data based on the radiology findings. The exclusion criteria consisted of: 1) No results of a pathological diagnosis in the case of negative cytology pleural fluid based on the cytologic results. 
Data analysis Step 1: The data were analyzed to find the potential factors in the diagnosis of positive cytology pleural fluid (MPE) using univariate regression analysis and multivariate logistic regression analysis. Step 2: Multiple imputation for missing data: Three predictor variables (pleural fluid white blood cell, pleural fluid lymphocyte, pleural fluid sugar) had more than 10% missing values, which could lead to biased estimates of the diagnostic model with the complete-case analysis. Multiple imputation with chained equation via mi impute chained command was used to generate missing values prior to model derivation. The logit model was chosen for the imputation of multivariable missing predictors. Step 3: The predictive variables from the multivariate logistic regression analysis were brought into the transformation of the risk score. A logistic regression coefficient was used to develop the MPE score to help diagnosing MPE. Step 4: The area under the receiver operating characteristic curve (AUC) based on the MPE score for the diagnosis of MPE was calculated and showed the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), likelihood ratio for a positive test (LR +) and likelihood ratio for a negative test (LR -). Step 5: The accuracy was tested through calibration curve and the Hosmer-Lemeshow goodness of fit test. Internal validation was tested by using the bootstrapping procedure (1,000 replicates). This study was approved by the Institutional Review Board of Buddhasothorn Hospital under the codes BSH-IRB 036/2563 Results: The data of the cytologic results of pleural fluid of 166 patients were collected. Eleven patients were excluded; including six patients with incomplete data of the biochemical tests and five patients did not have the pathological results to confirm the malignancy diagnosis. Therefore, the data were collected for 155 patients. Seventy-eight patients (50.32%) had positive cytology whereas seventy-seven patients (49.68%) had negative cytology. In terms of the pathological diagnosis and among different cancers, lung cancer was the most frequent cancer (61.9%) that needed the pleural fluid test. It was also the cancer with 66.67% of positive cytology (Table1). Based on the univariate analysis of the clinical characteristics and pleural fluid profile on MPE, it was found that lung mass detected by clinical imaging, lung cancer, breast cancer, and lung cancer with extrathoracic metastasis were the factors significantly affecting the predictive variables on MPE (Table2). Model development After analyzing the variable factors by univariate logistic regression analysis, the potential predictors affecting the diagnosis of MPE were selected for the multivariate logistic regression analysis of the scoring system derivation. The area under the receiver operating characteristic curve (AUC) for the final model was equal to 0.74 (95% CI 0.66-0.82). Score transformation Each potential predictor in the multivariable model was assigned with a specific score derived from the logistic regression coefficient (Table 3). The scoring scheme had a total score ranging from zero to 17. For the discriminative ability, the area under the parametric ROC curve for the score-based logistic regression model was equal to 0.74 (95% CI 0.66-0.82) (Figure 1). The measurement of the calibration is illustrated with a calibration plot, and the p-value via the Hosmer-Lemeshow goodness of fit test is equal to 0.49 (Figure 2). 
According to the sensitivity and specificity in each cut-off point, the point at 15 had 88.31% of specificity and 37.18% of sensitivity. This point displayed appropriate specificity that could be used as a diagnosis tool (Table 4). With the cut-off point of the MPE score at 15, to help the diagnosis of MPE, the odds ratio was equal to 3.18 (95% CI 1.35-8.11; p-value 0.004), positive predictive value (PPV) was equal to 0.76, negative predictive value (NPV) was equal to 0.58, positive likelihood ratio (LR+) was equal to 3.18, and negative likelihood ratio (LR-) was equal to 0.71. Internal validation Through conducting the internal validation using the predictive model with 1,000 resampling bootstrap method data set, the mean of the AUC of the apparent curve was obtained equal to 0.75, the test curve was equal to 0.72 (bootstrap estimator), and average estimates of the optimism curve was equal to 0.03 (Table 5). Types of Cancer Confirming the Pathological Diagnosis with the Cytologic Results of the Pleural Fluid Performance of the Clinical Risk Score, Area under the Receiver Operating Characteristics Curve (AUC), and 95% Confidence Band (Above). The calibration plots (pmcalplot) comparing the observed probabilities (y) and predicted probabilities (x) of the use of the MPE score to predict MPE (Below). Univariate Logistic Regression Analysis of MPE and Variable Factors protein ratio, Pleural fluid protein / serum protein; LDH ratio, Pleural fluid LDH / serum LDH; *, statistical significant Observed risk (circle) versus the score predicted risk (solid line) of the positive cytology pleural effusion (malignant pleural effusion). The size of the circle represents the frequency of MPE in each score (Left). Well-fitting model shows non-significance difference between the model and the observed data on the Hosmer-Lemeshow goodness of fit test (p-value 0.49) (Right) Risk Score Derivation Using Multivariate Logistic Regression Coefficients The Sensitivity, Specificity, Positive Likelihood Ratio (LR+), and Negative Likelihood Ratio (LR-) of Each Cut-Off Point Value of the MPE Score Internal Validation via 1,000 Resampling Bootstrap Method Discussion: Pleural fluid found in malignant disease was due to two major conditions, i.e., paramalignant pleural effusion (PMPE) and malignant pleural effusion (MPE) (Wong et al., 1963; Epelbaum and Rahman, 2019). The PMPE is not a consequence of a malignant disease spreading to the pleura. The probability that an effusion is paramalignant is higher when the effusion is transudative, while MPE is exudative. Therefore, understanding the differentiation between PMPE and MPE is necessary. There are studies on the use of the cancer ratio using the ratio of serum lactate dehydrogenase (LDH) to adenosine deaminase (ADA) in pleural fluid. The ratios used were based on the cut-off level > 20 to help diagnose the causes of exudative pleural fluid between benign and MPE. It was found that sensitivity and specificity were high because the relationship of the levels of serum LDH was usually high in a malignant disease (Verma et al., 2016; Korczyński et al., 2018; Verma et al., 2016). This was a result of using glycolysis for energy in tumor cells instead of oxidative phosphorylation, a switch in the adenosine triphosphate (ATP) generating pathways, which was mediated by LDH (Pfeiffer et al., 2001; Goldman et al., 1964; Mansouri et al., 2017). 
Likewise, the infection caused by tuberculosis in the pleural fluid usually had a higher ADA secreted by mononuclear cells, lymphocytes, neutrophils, and red blood cells (Liang et al., 2008; Jiménez Castro et al., 2003). However, the meta-analysis of using the cancer ratio for the diagnosis of MPE was based on the data from the PubMed and EMBASE databases The cancer ratio had a high diagnostic accuracy for predicting MPE. The pooled sensitivity and specificity of the cancer ratio were equal to 0.97 (95% CI 0.92-0.99) and 0.89 (0.69-0.97) respectively; with AUC equal to 0.98 (95% CI 0.97-0.99). Nevertheless, there were some limitations due to the bias of patient selection and potential partial verification (Han et al., 2019). Yet, it was frequently found that serum LDH may not be raised in the case of cancer. A high LDH may be related to poorer overall survival. Some minor studies (Chantharakit, 2018) also found that high LDH was related to cancer under liver metastasis; however, the data still contained a few limitations. Porcel et al., (2004) used a panel of tumor markers, i.e., carcinoembryonic antigen (CEA), cancer antigen (CA) 125, carbohydrate antigen (CA) 15-3, and cytokeratin 19 fragments in pleural fluid for the differential diagnosis of benign and malignant effusions. The combination of the four tumor markers reached a sensitivity of 54%, whereas the combined use of the cytology and the tumor marker panel increased the diagnostic yield of the former by 18% (95% CI; 13-23%). Yang et al., (2017) reported about a updated meta-analysis of patients with undiagnosed pleural effusion and showed that the combinations of positive pleural CEA + CA 15-3 and CEA + CA 19-9 were highly suspicious for pleural malignancy. Still, the sensitivity of these tests was poor. Clive et al., (2014) studied prognostic indicators and found that the ones affecting the survival of MPE patients were pleural fluid LDH, the Eastern Cooperative Oncology Group (ECOG) performance status, and neutrophil-to-lymphocyte ratio (NLR). It was also found that the tumor type could be developed by the LENT scoring system as a prognostic prediction model. The levels of pleural fluid LDH were key markers of inflammation or cellular injury. LDH levels greater than three times the upper limit of normal (often >1,000 U/L) are often indicative of pleural infection. This can also be associated with rheumatoid pleurisy, tuberculous pleurisy or malignancy. This study is different from previous studies as it focused only on PME. Pleural fluid cytology results may be positive malignant cells or negative malignant cells and it is not the intention of this study to distinguish MPE from benign disease. Therefore, all the pleural fluid was exudative pleural fluid according to Light’s criteria. The clinical information fitting the malignant disease was used to find the predictive indicators affecting the diagnosis of MPE using pleural fluid cytology to develop a diagnostic prediction model (MPE score). This was the first diagnostic prediction model used to assist in the diagnosis of MPE in patients with cancer. The data from the pleural fluid biomarkers were used along with the clinical data of the patients rather than using only the data from the biomarkers. However, the standard diagnosis of MPE features pleural fluid cytology supported by testing for confirmation by pleural biopsy in the case of negative pleural fluid cytology; still, with suspected MPE. This is an invasive procedure. 
Despite the effort to use a non-invasive technique, e.g., biomarker tests or molecular analysis from the pleural fluid for the diagnosis of MPE, the less invasive methods have not become popular yet. The validated clinical data found that there were still some limitations of use; thus, further studies are required. Therefore, using the MPE score at the cut-off point of 15, which has high specificity, may help in predicting MPE diagnosis to make decisions about planning for investigation while waiting for pleural fluid cytology results.This would enhance better efficiency of the diagnosis of MPE. Author Contribution Statement: Chaichana Chantharakhit: Designed the study, reviewed the paper, collected data, analyzed data, and edited the final version. Nantapa Sujaritvanichpong: Collected data. All authors read and approved the final version. Funding Statement: The authors confirm that there are no relevant financial or non-financial competing interests to report and no conflicts of interest to declare. Data Availability: The data used to support the findings of this study have been deposited in the repository [https://drive.google.com/drive/folders/1Enrqg5Zq_co3rY73-epiqymNe_njLL7W?usp=sharing]. We confirm that there are no relevant financial or non-financial competing interests to report and no conflicts of interest to declare
Background: The objective of this study was to develop a diagnostic prediction model (the MPE score) for the diagnosis of malignant pleural effusion (MPE) by pleural fluid cytology. Methods: Retrospective analysis of pleural fluid cytology was conducted in patients with MPE between 2018 and 2020. Multivariable logistic regression was used to explore the potential predictors. The selected logistic coefficients were transformed into a diagnostic predictive scoring system. Internal validation was done using the bootstrapping procedure. Results: Pleural fluid cytology data from 155 patients evaluated for MPE were analyzed. Seventy-eight patients (50.32%) had positive pleural cytology. Lung cancer was the malignancy for which pleural fluid testing was most commonly performed, with 66.67% positive cytology. The predictive indicators included pleural fluid protein > 4.64 g/dL, pleural fluid LDH > 555 IU/L, and pleural fluid sugar > 60 mg/dL. Lung mass on imaging and a double tap for pleural cytology were also used in the derivation of the diagnostic prediction model. The score-based model showed an area under the receiver operating characteristic curve of 0.74 (95% CI 0.66-0.82). The developed MPE score ranged from zero to 17. The cut-off point was 15, with 88.31% specificity, 37.18% sensitivity, a positive predictive value of 0.76, and a negative predictive value of 0.58. Calibration was assessed with a calibration plot (p-value = 0.49 for the Hosmer-Lemeshow goodness-of-fit test). Internal validation with 1,000 bootstrap resamples showed good discrimination. Conclusions: The MPE score, as a diagnostic prediction model, can be used to plan a more efficient diagnosis of MPE in patients with cancer and suspected MPE.
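The abstract above mentions internal validation with 1,000 bootstrap resamples; one common way to implement this is optimism-corrected AUC, which is consistent with the apparent, test, and optimism figures reported in the Results section. The sketch below is an assumed reconstruction using scikit-learn on synthetic data, not the authors' code.

```python
# Hedged sketch of optimism-corrected AUC via bootstrap resampling (assumed
# reconstruction of the "1,000 resampling bootstrap" internal validation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bootstrap_optimism(X, y, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])  # AUC on the original data
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))                  # resample patients with replacement
        if len(np.unique(y[idx])) < 2:
            continue                                           # skip degenerate resamples
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])  # AUC in the bootstrap sample
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])            # AUC back on the original data
        optimism.append(auc_boot - auc_orig)
    return apparent, apparent - float(np.mean(optimism))       # apparent and optimism-corrected AUC

# Tiny synthetic demonstration (155 "patients", 5 predictors):
rng = np.random.default_rng(1)
X = rng.normal(size=(155, 5))
y = (X[:, 0] + rng.normal(size=155) > 0).astype(int)
print(bootstrap_optimism(X, y, n_boot=200))
```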
null
null
2,951
327
[ 38, 25 ]
7
[ "pleural", "mpe", "fluid", "pleural fluid", "cancer", "diagnosis", "data", "cytology", "score", "ratio" ]
[ "diagnosis mpe cytology", "mpe improved pleuroscopy", "cytology pleural effusion", "mpe pleural fluid", "testing metastasis pleura" ]
null
null
null
[CONTENT] MPE score | pleural fluid cytology | diagnostic prediction model | malignant pleural effusion [SUMMARY]
null
[CONTENT] MPE score | pleural fluid cytology | diagnostic prediction model | malignant pleural effusion [SUMMARY]
null
[CONTENT] MPE score | pleural fluid cytology | diagnostic prediction model | malignant pleural effusion [SUMMARY]
null
[CONTENT] Adult | Area Under Curve | Biomarkers, Tumor | Body Fluids | Clinical Decision Rules | Female | Humans | Logistic Models | Male | Middle Aged | Pleura | Pleural Effusion, Malignant | Predictive Value of Tests | ROC Curve | Retrospective Studies | Risk Assessment | Risk Factors | Sensitivity and Specificity [SUMMARY]
null
[CONTENT] Adult | Area Under Curve | Biomarkers, Tumor | Body Fluids | Clinical Decision Rules | Female | Humans | Logistic Models | Male | Middle Aged | Pleura | Pleural Effusion, Malignant | Predictive Value of Tests | ROC Curve | Retrospective Studies | Risk Assessment | Risk Factors | Sensitivity and Specificity [SUMMARY]
null
[CONTENT] Adult | Area Under Curve | Biomarkers, Tumor | Body Fluids | Clinical Decision Rules | Female | Humans | Logistic Models | Male | Middle Aged | Pleura | Pleural Effusion, Malignant | Predictive Value of Tests | ROC Curve | Retrospective Studies | Risk Assessment | Risk Factors | Sensitivity and Specificity [SUMMARY]
null
[CONTENT] diagnosis mpe cytology | mpe improved pleuroscopy | cytology pleural effusion | mpe pleural fluid | testing metastasis pleura [SUMMARY]
null
[CONTENT] diagnosis mpe cytology | mpe improved pleuroscopy | cytology pleural effusion | mpe pleural fluid | testing metastasis pleura [SUMMARY]
null
[CONTENT] diagnosis mpe cytology | mpe improved pleuroscopy | cytology pleural effusion | mpe pleural fluid | testing metastasis pleura [SUMMARY]
null
[CONTENT] pleural | mpe | fluid | pleural fluid | cancer | diagnosis | data | cytology | score | ratio [SUMMARY]
null
[CONTENT] pleural | mpe | fluid | pleural fluid | cancer | diagnosis | data | cytology | score | ratio [SUMMARY]
null
[CONTENT] pleural | mpe | fluid | pleural fluid | cancer | diagnosis | data | cytology | score | ratio [SUMMARY]
null
[CONTENT] mpe | cancer | pleural | cytology | diagnostic | diagnosis | invasive | method | fluid | pleural fluid [SUMMARY]
null
[CONTENT] equal | score | mpe | curve | value | logistic regression | logistic | regression | ratio | pleural [SUMMARY]
null
[CONTENT] pleural | mpe | fluid | pleural fluid | financial | data | cancer | diagnosis | cytology | score [SUMMARY]
null
[CONTENT] MPE | pleural fluid | MPE [SUMMARY]
null
[CONTENT] pleural fluid | 155 | MPE ||| Seventy-eight | 50.32% ||| 66.67% ||| 4.64 | 555 ||| 60 | dL. Lung ||| 0.74 | 95% | CI | 0.66-0.82 ||| MPE | zero | 17 ||| 15 | 88.31% | 37.18% | 0.76 | 0.58 ||| 0.49 | the Hosmer-Lemeshow ||| 1,000 [SUMMARY]
null
[CONTENT] MPE | pleural fluid | MPE ||| MPE | between 2018 and 2020 ||| ||| ||| ||| ||| pleural fluid | 155 | MPE ||| Seventy-eight | 50.32% ||| 66.67% ||| 4.64 | 555 ||| 60 | dL. Lung ||| 0.74 | 95% | CI | 0.66-0.82 ||| MPE | zero | 17 ||| 15 | 88.31% | 37.18% | 0.76 | 0.58 ||| 0.49 | the Hosmer-Lemeshow ||| 1,000 ||| MPE | MPE | MPE [SUMMARY]
null
A deep neural network using audio files for detection of aortic stenosis.
35438211
Although aortic stenosis (AS) is the most common valvular heart disease in the western world, many affected patients remain undiagnosed. Auscultation is a readily available screening tool for AS. However, it requires a high level of professional expertise.
BACKGROUND
A deep neural network (DNN) was trained on preprocessed audio files from 100 patients with AS and 100 controls. The DNN's performance was evaluated with a test data set of 40 patients. The primary outcome measures were sensitivity, specificity, and F1-score. Results of the DNN were compared with the performance of cardiologists, residents, and medical students.
METHODS
Eighteen percent of patients without AS and 22% of patients with AS showed an additional moderate or severe mitral regurgitation. The DNN showed a sensitivity of 0.90 (0.81-0.99), a specificity of 1, and an F1-score of 0.95 (0.89-1.0) for the detection of AS. In comparison, we calculated an F1-score of 0.94 (0.86-1.0) for cardiologists, 0.88 (0.78-0.98) for residents, and 0.88 (0.78-0.98) for students.
RESULTS
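As a quick sanity check of how the reported F1-score and confidence intervals arise from the test set of 20 AS and 20 control patients, the following lines apply the F1 and normal-approximation CI formulas quoted later in the Methods field. The reconstructed true/false positive counts are implied by the rounded sensitivity and specificity, not taken directly from the paper.

```python
# Hedged check: reconstruct F1 and approximate 95% CIs from the reported
# sensitivity (0.90) and specificity (1.0) on a 20 AS / 20 control test set.
from math import sqrt

tp, fn = 18, 2          # sensitivity 0.90 on 20 AS patients implies ~18 detected
tn, fp = 20, 0          # specificity 1.0 on 20 controls implies no false positives

recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * recall * precision / (recall + precision)

def ci(p, n=40, z=1.96):
    """Normal-approximation CI, matching the formula quoted in the Methods."""
    half = z * sqrt(p * (1 - p) / n)
    return round(p - half, 2), round(min(p + half, 1.0), 2)

print(f"F1 = {f1:.2f}, 95% CI {ci(f1)}")  # ~0.95, close to the reported 0.95 (0.89-1.0)
print(f"sensitivity CI {ci(recall)}")      # ~(0.81, 0.99), matching the reported interval
```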
The present study shows that deep learning-guided auscultation predicts significant AS with similar accuracy as cardiologists. The results of this pilot study suggest that AI-assisted auscultation may help general practitioners without special cardiology training in daily practice.
CONCLUSIONS
[ "Aortic Valve Stenosis", "Cardiologists", "Cardiology", "Humans", "Neural Networks, Computer", "Pilot Projects" ]
9175247
BACKGROUND
After mitral regurgitation, aortic valve stenosis (AS) is the second most common valvular disease. 1 Its prevalence in the general population is 0.4%, with a sharp age‐dependent increase in the older population. A prevalence of 3.4% in the age cohort >65 years can be observed. 2 It leads more often to hospitalization than other heart valve diseases and accounts for 45% of patients operated for valvular disease. 3 It is well known that symptomatic AS has high mortality when no valve replacement therapy is performed. 4 However, recent data suggest that even asymptomatic severe AS is associated with a mortality of up to 58% within 8 years. 5 Besides, it has been shown that aortic valve replacement in patients with severe asymptomatic AS significantly improves outcome. 6 Unfortunately, a high number of patients with significant AS remains undetected. The OxVALVE Population Cohort Study shows that unrecognized, significant AS was present in 1.6% of patients ≥65 years. 7 Thus, a reliable, readily available, and cheap screening tool is necessary. Since the invention of the stethoscope by Laennec in 1816, cardiac auscultation has been one of the pillars of cardiovascular examination. 8 However, this method's significant drawbacks are that heart murmurs are variable, and auscultation skills are highly performer‐dependent. 9 Deep learning is a branch of machine learning using artificial neural networks that models human brains' architecture. Heart sound classification can be principally done using a convolutional neural network (CNN), a subform of deep neural networks (DNNs). 10 We hypothesized that we could train a DNN to identify heart murmurs suspicious for significant AS with an accuracy comparable to experienced cardiologists.
METHODS
The study consists of two parts. In the first part, we trained a neural network to classify auscultation findings of patients who have significant AS or not. In the second part, we compared the performance of the trained DNN with the auscultatory skills of 10 experienced cardiologists, 10 residents, and 10 medical students by using a test data set that consisted of a completely disjointed set of patients. For training, we used auscultation audio files from 100 patients with significant AS and 100 patients without AS. The ground truth was defined by echocardiography. Significant AS was defined as V max of >3.5 m/s measured by continuous‐wave Doppler. Although the definition of high‐grade AS has not yet been reached, we chose this cut‐off value, as these patients require close monitoring. Patients admitted for suspicious coronary artery disease or other cardiac diseases were taken as a control group. We used an electronic stethoscope (Eko) connected to a smartphone interface via Bluetooth for auscultation. Auscultation was performed at the aortic auscultation point (second intercostal space, right sternal border) and the mitral auscultation point (fifth intercostal space, midclavicular line). Thus, from each patient, we included two auscultation files. At each auscultation point, audio files with an interval of 15 s and a sampling rate of 40 kHz were recorded. The audio files were recorded as part of the clinical routine in a tertiary teaching hospital with a large valve unit specialized in transcatheter aortic valve implantation (TAVI). Only data from patients of this database were included who got echocardiography within 7 days before or after auscultation. Since the study was retrospective, an explicit ethics vote was not necessary according to the regulations of the responsible ethics committee. We preprocessed the data in our study before using them to train the network. In the first step, the 15 s sound files were divided into three equal parts of 5 s. This was done to overcome the risk that small portions of an auscultation file falsified by respiration 11 contribute disproportionately to the training of the whole network. Consequently, six sound files per patient (three files for each of the 2 auscultation points) contribute to the network's training. In the second step, we performed Mel Frequency Cepstral Coefficients transformation of the audio files (MFCC‐transformation). This transformation maps the perception of human hearing and has been proposed for audio data analysis of heart and lung auscultation. 12 , 13 , 14 Subsequently, we trained the DNN with 1200 processed audio files (6 files per patient, 100 patients with AS, 100 patients without AS). We developed a CNN for classification (Figure 1), which takes as input MFCCs. The sequential model has three two‐dimensional‐convolutional layers and one max pooling layer. As an activation function, we used “ReLU”. Before each convolutional layer, we applied batch normalization. To combat overfitting, we used a dropout layer between the convolutional layers that sets a random portion of the weights equal to a probability of 0.2 to 0. Thereby the network has to learn different aspects of the data each time. 15 Data processing and analysis were principally done in two steps. In the first step, MFCC feature extraction was done. In the second step, the preprocessed data were fed to the convolutional part of the DNN. After the convolutional layers, the output is flattened to a one‐dimensional tensor. 
Data are then fed to a fully connected layer using the ReLU (rectified linear unit) activation function. To overcome overfitting, which means that the network is too much adapted to the training data set, the regulizer and dropout techniques were applied. In the softmax function, the input values are transformed to a probability distribution that gives the probability of AS or no AS in the present case. AS, aortic valve stenosis; MFCC, Mel frequency cepstral coefficients. Hyperparameter tuning was done iteratively for learning rate, batch size, number of epochs, number of kernels, and grid size of the convolutional layers. Model comparison was made using K‐fold cross‐validation. After training the network, it was applied to the test set that the model had not seen before. The test set consists of 20 patients with AS and 20 without AS. Accuracy, sensitivity, specificity, receiver operating characteristic curves (ROC), and F1 score were calculated. F1 score was calculated using the following formula: F1 score = 2 × (recall × precision)/(recall + precision). Then the same test set was classified by 10 experienced cardiologists, 10 residents, and 10 final year medical students. The performance parameters were averaged in cardiologists, residents, and students. Audio file processing, training the DNN, and making predictions were made with the general‐purpose programming language Python. Preprocessing audio files was done by the audio analysis library Librosa. We generated Mel Frequency Cepstral Coefficients (MFCC) with a hopelength of 10 and 13 coefficients. 16 A CNN was implemented using the Keras framework with a TensorFlow (Google) backend. 17 Continuous baseline characteristics are given as mean ± SD. Continuous variables were compared using t‐test, and categorical variables were compared using chi‐quadrat test. Accuracy, sensitivity, specificity, and F1‐value for cardiologists, residents, and students are given as mean with a 95% confidence interval. The confidence intervals for the specific parameters of the DNN were calculated with the formula = z × sqrt((parameter × (1 − parameter))/n), where z is the corresponding parameter and n is the size of the test sample, 40 in the present case. Inter‐rater reliability was assessed by calculating Fleiss' kappa. 18
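To make the preprocessing and network description above more concrete, here is an illustrative librosa/Keras sketch assembled from the details given in the text: 5 s segments, 13 MFCCs, three convolutional layers with 32 kernels (3x3, 3x3, 2x2), batch normalization before each convolution, dropout of 0.2, one max-pooling layer, a 150-node dense layer with regularization, a 2-class softmax output, and Adam with a learning rate of 0.001. File handling, padding, the regularization strength, and the MFCC hop length are assumptions; this is not the study's published code.

```python
# Illustrative sketch of the described pipeline: MFCC extraction of 5 s heart-sound
# segments with librosa, then a small Keras CNN for AS / no-AS classification.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def mfcc_features(path, sr=40000, n_mfcc=13, segment_s=5):
    """Split a 15 s recording into three 5 s segments and return one MFCC matrix per segment."""
    audio, _ = librosa.load(path, sr=sr)
    seg_len = sr * segment_s
    segments = [audio[i:i + seg_len] for i in range(0, seg_len * 3, seg_len)]
    return [librosa.feature.mfcc(y=s, sr=sr, n_mfcc=n_mfcc) for s in segments]

def build_model(input_shape):
    """CNN roughly matching the description: three conv layers with 32 kernels,
    batch normalization before each, dropout 0.2, one max-pooling layer,
    a 150-node dense layer and a 2-class softmax."""
    m = models.Sequential([
        layers.Input(shape=input_shape),
        layers.BatchNormalization(),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Dropout(0.2),
        layers.BatchNormalization(),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Dropout(0.2),
        layers.BatchNormalization(),
        layers.Conv2D(32, (2, 2), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(150, activation="relu",
                     kernel_regularizer=tf.keras.regularizers.l2(1e-3)),
        layers.Dropout(0.2),
        layers.Dense(2, activation="softmax"),
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return m

# Example shape: 13 MFCC coefficients x ~391 frames for 5 s at 40 kHz with librosa's
# default hop length (an assumption; the paper reports a different hop length).
# model = build_model(input_shape=(13, 391, 1))
```

With such a model, training for 40 epochs with a batch size of 4, as reported in the Results, would be a straightforward `model.fit` call on the stacked MFCC tensors and their AS / no-AS labels.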
RESULTS
Data from 120 patients with AS and 120 patients without AS were included in the present study. From each group, audio files from 100 patients were allocated to the training set and 20 to the test set. Of the 120 patients with AS, 99 underwent femoral TAVI, 5 transapical aortic valve implantation, 13 open‐heart surgery, and 1 valvuloplasty only; two patients were treated conservatively. Patients with AS were older than control patients in both the training and test sets. A substantial proportion of patients had mitral or tricuspid valve disease. Atrioventricular valve defects were more frequent in patients with AS; however, this difference was not significant. For details on patient characteristics, see Table 1 (data are given as mean ± SD; valve disease entries denote moderate or severe disease; *p < .05, **p < .01 for no AS versus AS).

DNN's diagnostic accuracy

Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into four folds, and each iteration through the folds uses one fold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels each, with a grid size of 3 × 3 (convolutional layers 1 and 2) and 2 × 2 (convolutional layer 3). The best results were achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening of the tensor, a fully connected layer with 150 nodes follows. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1. We analyzed whether the algorithm showed different results depending on the auscultation point. As expected, prediction accuracy was worse when the neural network was trained with audio files from only one auscultation point. This is most likely due to the smaller number of audio files; however, it may also be that the two auscultation points provide complementary information. Notably, a model trained only with data from the mitral point performed the same as a model trained only with data from the aortic point. This result contradicts common clinical experience; however, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN was 0.93 when trained only with audio files from the aortic auscultation point, 0.83 when trained only with audio files from the mitral auscultation point, and 0.99 for the merged data. The DNN missed two patients with AS; no patient without AS was misclassified. The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97).
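A four‐fold cross‐validation loop of the kind used here for hyperparameter comparison can be sketched as follows; the `build_model` factory (such as the sketch in the methods above), the accuracy‐based scoring, and the shuffling seed are illustrative assumptions rather than the authors' tuning code, and the real procedure may split at the patient rather than the clip level.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(x, y, build_model, epochs=40, batch_size=4, n_splits=4):
    """Hypothetical 4-fold cross-validation for comparing hyperparameter settings."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_accuracies = []
    for train_idx, val_idx in kf.split(x):
        model = build_model(x.shape[1:])             # fresh model for every fold
        model.fit(x[train_idx], y[train_idx],
                  validation_data=(x[val_idx], y[val_idx]),
                  epochs=epochs, batch_size=batch_size, verbose=0)
        _, acc = model.evaluate(x[val_idx], y[val_idx], verbose=0)
        fold_accuracies.append(acc)
    return float(np.mean(fold_accuracies))           # mean validation accuracy
```

The hyperparameter setting with the best mean validation score would then be retrained on the full training set before being applied to the held‐out test patients.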
Diagnostic accuracy of the CNN versus cardiologists, residents, and students

Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded to the results of the DNN. They were asked to classify, from the two audio files available for each patient, whether that patient had AS. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists, showing that agreement within the group of cardiologists was much higher than within the groups of residents and students. The F1 score is a parameter for comparing the performance of different models or rater groups when a balance between precision and recall is sought. In Figure 2, ROC curves for the deep learning model, students, residents, and cardiologists are shown: the ROC curve (orange line) achieved by the model is compared with students (A), residents (B), and cardiologists (C); individual rater performance is indicated by the black crosses, and averaged cardiologist performance by the red dot. The DNN showed a higher F1 score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1 score are given in Table 2 (comparison between the DNN and humans in the detection of AS in the test set; data are given as mean and confidence intervals; DNN, deep neural network; MR, mitral regurgitation).
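Inter‐rater agreement of the kind reported above can be computed with the Fleiss' kappa implementation in statsmodels; the rating matrix below is a randomly generated placeholder, not the study's rating data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Placeholder ratings: rows = 40 test patients, columns = 10 raters,
# entries are 0 (no AS) or 1 (AS). Replace with the real rating matrix.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(40, 10))

# aggregate_raters turns per-rater labels into per-patient category counts
counts, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(counts, method="fleiss")
print(f"Fleiss' kappa: {kappa:.2f}")
```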
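The test‐set performance measures reported here (accuracy, sensitivity, specificity, F1 score, ROC‐AUC) and the Wald‐type confidence intervals described in the methods can be computed along the following lines with scikit‐learn; the function name and the prediction arrays are placeholders, not the study's outputs.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def test_set_metrics(y_true, y_pred, y_score, z=1.96):
    """Hypothetical evaluation sketch for a 40-patient test set (labels: 1 = AS)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    metrics = {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),   # y_score = predicted P(AS)
    }
    n = len(y_true)
    # Wald confidence interval, as in the methods (z = 1.96 for a 95% CI)
    cis = {name: (value - z * np.sqrt(value * (1 - value) / n),
                  value + z * np.sqrt(value * (1 - value) / n))
           for name, value in metrics.items() if name != "roc_auc"}
    return metrics, cis
```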
null
null
[ "DNN's diagnostic accuracy", "Diagnostic accuracy of the CNN versus cardiologists, residents, and students", "Clinical perspectives" ]
[ "Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1.\nWe analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97).", "Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2.\nROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). 
Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot.\nComparison between the DNN and humans in the detection of AS in the test set\n\nNote: Data are given as mean and confidence intervals.\nAbbreviations: DNN, deep neural network; MR, mitral regurgitation.", "The present study gives proof of concept that AI‐assisted auscultation can provide results on a high expert level concerning the detection of aortic stenosis. Integrating this technology in an electronic stethoscope could be the next step to upgrade this system for everyday clinical use. A stethoscope that indicates a warning in the event of certain valve defects would be conceivable in the foreseeable future. Introducing such a device in countries that cannot provide comprehensive medical care has already been proposed years ago.\n29\n\n" ]
[ null, null, null ]
[ "BACKGROUND", "METHODS", "RESULTS", "DNN's diagnostic accuracy", "Diagnostic accuracy of the CNN versus cardiologists, residents, and students", "DISCUSSION", "Clinical perspectives", "CONFLICTS OF INTEREST" ]
[ "After mitral regurgitation, aortic valve stenosis (AS) is the second most common valvular disease.\n1\n Its prevalence in the general population is 0.4%, with a sharp age‐dependent increase in the older population. A prevalence of 3.4% in the age cohort >65 years can be observed.\n2\n It leads more often to hospitalization than other heart valve diseases and accounts for 45% of patients operated for valvular disease.\n3\n\n\nIt is well known that symptomatic AS has high mortality when no valve replacement therapy is performed.\n4\n However, recent data suggest that even asymptomatic severe AS is associated with a mortality of up to 58% within 8 years.\n5\n Besides, it has been shown that aortic valve replacement in patients with severe asymptomatic AS significantly improves outcome.\n6\n Unfortunately, a high number of patients with significant AS remains undetected. The OxVALVE Population Cohort Study shows that unrecognized, significant AS was present in 1.6% of patients ≥65 years.\n7\n\n\nThus, a reliable, readily available, and cheap screening tool is necessary. Since the invention of the stethoscope by Laennec in 1816, cardiac auscultation has been one of the pillars of cardiovascular examination.\n8\n However, this method's significant drawbacks are that heart murmurs are variable, and auscultation skills are highly performer‐dependent.\n9\n\n\nDeep learning is a branch of machine learning using artificial neural networks that models human brains' architecture. Heart sound classification can be principally done using a convolutional neural network (CNN), a subform of deep neural networks (DNNs).\n10\n We hypothesized that we could train a DNN to identify heart murmurs suspicious for significant AS with an accuracy comparable to experienced cardiologists.", "The study consists of two parts. In the first part, we trained a neural network to classify auscultation findings of patients who have significant AS or not. In the second part, we compared the performance of the trained DNN with the auscultatory skills of 10 experienced cardiologists, 10 residents, and 10 medical students by using a test data set that consisted of a completely disjointed set of patients.\nFor training, we used auscultation audio files from 100 patients with significant AS and 100 patients without AS. The ground truth was defined by echocardiography. Significant AS was defined as V\nmax of >3.5 m/s measured by continuous‐wave Doppler. Although the definition of high‐grade AS has not yet been reached, we chose this cut‐off value, as these patients require close monitoring. Patients admitted for suspicious coronary artery disease or other cardiac diseases were taken as a control group.\nWe used an electronic stethoscope (Eko) connected to a smartphone interface via Bluetooth for auscultation. Auscultation was performed at the aortic auscultation point (second intercostal space, right sternal border) and the mitral auscultation point (fifth intercostal space, midclavicular line). Thus, from each patient, we included two auscultation files. At each auscultation point, audio files with an interval of 15 s and a sampling rate of 40 kHz were recorded.\nThe audio files were recorded as part of the clinical routine in a tertiary teaching hospital with a large valve unit specialized in transcatheter aortic valve implantation (TAVI). Only data from patients of this database were included who got echocardiography within 7 days before or after auscultation. 
Since the study was retrospective, an explicit ethics vote was not necessary according to the regulations of the responsible ethics committee.\nWe preprocessed the data in our study before using them to train the network. In the first step, the 15 s sound files were divided into three equal parts of 5 s. This was done to overcome the risk that small portions of an auscultation file falsified by respiration\n11\n contribute disproportionately to the training of the whole network. Consequently, six sound files per patient (three files for each of the 2 auscultation points) contribute to the network's training. In the second step, we performed Mel Frequency Cepstral Coefficients transformation of the audio files (MFCC‐transformation). This transformation maps the perception of human hearing and has been proposed for audio data analysis of heart and lung auscultation.\n12\n, \n13\n, \n14\n Subsequently, we trained the DNN with 1200 processed audio files (6 files per patient, 100 patients with AS, 100 patients without AS).\nWe developed a CNN for classification (Figure 1), which takes as input MFCCs. The sequential model has three two‐dimensional‐convolutional layers and one max pooling layer. As an activation function, we used “ReLU”. Before each convolutional layer, we applied batch normalization. To combat overfitting, we used a dropout layer between the convolutional layers that sets a random portion of the weights equal to a probability of 0.2 to 0. Thereby the network has to learn different aspects of the data each time.\n15\n\n\nData processing and analysis were principally done in two steps. In the first step, MFCC feature extraction was done. In the second step, the preprocessed data were fed to the convolutional part of the DNN. After the convolutional layers, the output is flattened to a one‐dimensional tensor. Data are then fed to a fully connected layer using the ReLU (rectified linear unit) activation function. To overcome overfitting, which means that the network is too much adapted to the training data set, the regulizer and dropout techniques were applied. In the softmax function, the input values are transformed to a probability distribution that gives the probability of AS or no AS in the present case. AS, aortic valve stenosis; MFCC, Mel frequency cepstral coefficients.\nHyperparameter tuning was done iteratively for learning rate, batch size, number of epochs, number of kernels, and grid size of the convolutional layers. Model comparison was made using K‐fold cross‐validation.\nAfter training the network, it was applied to the test set that the model had not seen before. The test set consists of 20 patients with AS and 20 without AS. Accuracy, sensitivity, specificity, receiver operating characteristic curves (ROC), and F1 score were calculated. F1 score was calculated using the following formula: F1 score = 2 × (recall × precision)/(recall + precision). Then the same test set was classified by 10 experienced cardiologists, 10 residents, and 10 final year medical students. The performance parameters were averaged in cardiologists, residents, and students.\nAudio file processing, training the DNN, and making predictions were made with the general‐purpose programming language Python. Preprocessing audio files was done by the audio analysis library Librosa. 
We generated Mel Frequency Cepstral Coefficients (MFCC) with a hopelength of 10 and 13 coefficients.\n16\n A CNN was implemented using the Keras framework with a TensorFlow (Google) backend.\n17\n\n\nContinuous baseline characteristics are given as mean ± SD. Continuous variables were compared using t‐test, and categorical variables were compared using chi‐quadrat test. Accuracy, sensitivity, specificity, and F1‐value for cardiologists, residents, and students are given as mean with a 95% confidence interval. The confidence intervals for the specific parameters of the DNN were calculated with the formula = z × sqrt((parameter × (1 − parameter))/n), where z is the corresponding parameter and n is the size of the test sample, 40 in the present case. Inter‐rater reliability was assessed by calculating Fleiss' kappa.\n18\n\n", "Data from 120 patients with and 120 patients without AS were taken for the present study. From each group, audio files from 100 patients were allocated to the training group and 20 to the test group, respectively.\nOf the 120 patients with AS, in 99 patients femoral TAVI, in 5 transapical aortic valve implantation, in 13 patients open‐heart surgery, and in 1 patient only valvuloplasty were performed. Two patients were treated conservatively.\nPatients with AS were older than control patients in the training and test patients. A significant proportion of patients had mitral or tricuspid valve disease. Atrioventricular valve defects were more frequent in patients with AS. However, this difference was not significant. For details on patient characteristics, see Table 1.\nPatient characteristics\n\nNote: Data are given as mean ± SD.\nModerate or severe heart valve disease.\n*p < .05 no AS versus AS. **p < .01 no AS versus AS.\nDNN's diagnostic accuracy Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1.\nWe analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. 
The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97).\nHyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1.\nWe analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97).\nDiagnostic accuracy of the CNN versus cardiologists, residents, and students Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2.\nROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). 
Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot.\nComparison between the DNN and humans in the detection of AS in the test set\n\nNote: Data are given as mean and confidence intervals.\nAbbreviations: DNN, deep neural network; MR, mitral regurgitation.\nTen students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2.\nROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot.\nComparison between the DNN and humans in the detection of AS in the test set\n\nNote: Data are given as mean and confidence intervals.\nAbbreviations: DNN, deep neural network; MR, mitral regurgitation.", "Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1.\nWe analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. 
The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97).", "Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2.\nROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot.\nComparison between the DNN and humans in the detection of AS in the test set\n\nNote: Data are given as mean and confidence intervals.\nAbbreviations: DNN, deep neural network; MR, mitral regurgitation.", "The present study shows that deep learning‐guided auscultation predicts significant AS with similar accuracy as board‐certified cardiologists. These results suggest that artificial intelligence‐assisted auscultation may help general practitioners without special cardiology training.\nAuscultation is one of the pillars of clinical investigation. It is readily available, and no sophisticated technical requirements are necessary. On the other hand, a high level of expertise is essential, and the skills, once acquired, need to be used continuously. These circumstances may explain errors in auscultation between 20% and 80% in residents, primary care physicians, and cardiologists.\n19\n, \n20\n\n\nFor this reason, computer‐assisted auscultation was already proposed for clinical use at the beginning of this century. The developed algorithms decompose the cyclical heart sound, and hand‐engineered processing is applied for classification. In young patients with hypertrophic cardiomyopathy and children with congenital heart disease, sufficient sensitivity and specificity could be achieved with these techniques.\n21\n However, these studies were done in young patients with no conditions complicating the auscultation results like adiposity and lung emphysema.\nDeep neural networks (DNNs) take a completely different direction. A DNN is a machine learning algorithm that models brain architecture. They consist of perceptrons with adjustable weights and activation thresholds. In supervised learning, labeled data, often called ground truth, are propagated through a network of perceptrons that allows the DNN to adjust weights. In this step, the DNN learns how to assign known data to predefined categories. 
After this training phase, the performance of the final model is evaluated on a test data set. In this step, the trained DNN makes predictions in the form of probabilities for so far unknown data.\nIn first applications, the focus was directed on standard diagnostic techniques used by doctors daily but cannot be provided on an expert level in any case.\n22\n Deep learning systems were developed for ECG interpretation,\n23\n skin cancer identification,\n24\n and papilledema detection.\n25\n\n\nIn this context, it has been recognized that AI may also be a valuable tool to support doctors in identifying valvular heart disease. In a recently published study by Chorba et al., physicians assigned 5878 auscultation findings to the labels “heart murmur”, “no heart murmur”, or “inadequate signal”. The DNN was trained with these data using an end‐to‐end (E2E) network design. In a second step, the DNN was then validated on a test data set of 1774 recordings annotated by separate expert clinicians.\n26\n\n\nIn contrast, the ground truth was not defined by physician assignment but by echocardiography in the present study. By using this gold standard, the well‐known erroneous annotation of auscultation findings by physicians was avoided. This is a crucial point as machine learning is based on detecting subtle patterns in data. Only when using high‐quality training data, noise that masks these patterns can be sufficiently reduced. Furthermore, the data in our study were pre‐processed before they were used to train the DNN. In this pre‐processing, attributes of the audio data were isolated that have been shown to be essential for pattern recognition in audio files.\n27\n, \n28\n With this hybrid approach, our DNN showed similar accuracy as board‐certified cardiologists. The precisely defined ground truth in conjunction with the preprocessing of the audio data compensates for a comparatively low patient number.\nA potential limitation of our study is that we only included a few patients with moderate aortic stenosis. This is due to the fact that the study was conducted with data from patients admitted to a tertiary teaching hospital for specialized valve therapy. Moreover, patients with only moderate valve disease are challenging to identify because they are often completely asymptomatic and therefore not under medical surveillance. Similarly, the DNN was trained predominantly with normal‐flow, high‐gradient AS. Probably the present algorithm will have worse performance in low‐flow, low‐gradient AS because the flow properties over the valve and thereby the sound characteristics are entirely different. This fact naturally lowers the sensitivity, but this condition affects only a small portion of the patients.\nIn the present study, a machine learning algorithm was trained to detect patients with aortic stenosis. Patients with other valvular diseases like hypertrophic obstructive cardiomyopathy were not included, and few patients with pure mitral regurgitation participated. Thus, the presented algorithm is far from perfect, and this study can only be the first step in introducing artificial intelligence for valvular heart disease into everyday clinical practice. On the other side, it also shows that artificial intelligence can, in principle, be helpful in the auscultation of heart sounds.\nClinical perspectives The present study gives proof of concept that AI‐assisted auscultation can provide results on a high expert level concerning the detection of aortic stenosis. 
Integrating this technology in an electronic stethoscope could be the next step to upgrade this system for everyday clinical use. A stethoscope that indicates a warning in the event of certain valve defects would be conceivable in the foreseeable future. Introducing such a device in countries that cannot provide comprehensive medical care has already been proposed years ago.\n29\n\n\nThe present study gives proof of concept that AI‐assisted auscultation can provide results on a high expert level concerning the detection of aortic stenosis. Integrating this technology in an electronic stethoscope could be the next step to upgrade this system for everyday clinical use. A stethoscope that indicates a warning in the event of certain valve defects would be conceivable in the foreseeable future. Introducing such a device in countries that cannot provide comprehensive medical care has already been proposed years ago.\n29\n\n", "The present study gives proof of concept that AI‐assisted auscultation can provide results on a high expert level concerning the detection of aortic stenosis. Integrating this technology in an electronic stethoscope could be the next step to upgrade this system for everyday clinical use. A stethoscope that indicates a warning in the event of certain valve defects would be conceivable in the foreseeable future. Introducing such a device in countries that cannot provide comprehensive medical care has already been proposed years ago.\n29\n\n", "The authors declare no conflicts of interest." ]
[ "background", "methods", "results", null, null, "discussion", null, "COI-statement" ]
[ "aortic stenosis", "artificial intelligence", "auscultation", "deep neural network", "machine learning", "valvular heart disease" ]
BACKGROUND: After mitral regurgitation, aortic valve stenosis (AS) is the second most common valvular disease. 1 Its prevalence in the general population is 0.4%, with a sharp age‐dependent increase in the older population. A prevalence of 3.4% in the age cohort >65 years can be observed. 2 It leads more often to hospitalization than other heart valve diseases and accounts for 45% of patients operated for valvular disease. 3 It is well known that symptomatic AS has high mortality when no valve replacement therapy is performed. 4 However, recent data suggest that even asymptomatic severe AS is associated with a mortality of up to 58% within 8 years. 5 Besides, it has been shown that aortic valve replacement in patients with severe asymptomatic AS significantly improves outcome. 6 Unfortunately, a high number of patients with significant AS remains undetected. The OxVALVE Population Cohort Study shows that unrecognized, significant AS was present in 1.6% of patients ≥65 years. 7 Thus, a reliable, readily available, and cheap screening tool is necessary. Since the invention of the stethoscope by Laennec in 1816, cardiac auscultation has been one of the pillars of cardiovascular examination. 8 However, this method's significant drawbacks are that heart murmurs are variable, and auscultation skills are highly performer‐dependent. 9 Deep learning is a branch of machine learning using artificial neural networks that models human brains' architecture. Heart sound classification can be principally done using a convolutional neural network (CNN), a subform of deep neural networks (DNNs). 10 We hypothesized that we could train a DNN to identify heart murmurs suspicious for significant AS with an accuracy comparable to experienced cardiologists. METHODS: The study consists of two parts. In the first part, we trained a neural network to classify auscultation findings of patients who have significant AS or not. In the second part, we compared the performance of the trained DNN with the auscultatory skills of 10 experienced cardiologists, 10 residents, and 10 medical students by using a test data set that consisted of a completely disjointed set of patients. For training, we used auscultation audio files from 100 patients with significant AS and 100 patients without AS. The ground truth was defined by echocardiography. Significant AS was defined as V max of >3.5 m/s measured by continuous‐wave Doppler. Although the definition of high‐grade AS has not yet been reached, we chose this cut‐off value, as these patients require close monitoring. Patients admitted for suspicious coronary artery disease or other cardiac diseases were taken as a control group. We used an electronic stethoscope (Eko) connected to a smartphone interface via Bluetooth for auscultation. Auscultation was performed at the aortic auscultation point (second intercostal space, right sternal border) and the mitral auscultation point (fifth intercostal space, midclavicular line). Thus, from each patient, we included two auscultation files. At each auscultation point, audio files with an interval of 15 s and a sampling rate of 40 kHz were recorded. The audio files were recorded as part of the clinical routine in a tertiary teaching hospital with a large valve unit specialized in transcatheter aortic valve implantation (TAVI). Only data from patients of this database were included who got echocardiography within 7 days before or after auscultation. 
Since the study was retrospective, an explicit ethics vote was not necessary according to the regulations of the responsible ethics committee. We preprocessed the data in our study before using them to train the network. In the first step, the 15 s sound files were divided into three equal parts of 5 s. This was done to overcome the risk that small portions of an auscultation file falsified by respiration 11 contribute disproportionately to the training of the whole network. Consequently, six sound files per patient (three files for each of the 2 auscultation points) contribute to the network's training. In the second step, we performed Mel Frequency Cepstral Coefficients transformation of the audio files (MFCC‐transformation). This transformation maps the perception of human hearing and has been proposed for audio data analysis of heart and lung auscultation. 12 , 13 , 14 Subsequently, we trained the DNN with 1200 processed audio files (6 files per patient, 100 patients with AS, 100 patients without AS). We developed a CNN for classification (Figure 1), which takes as input MFCCs. The sequential model has three two‐dimensional‐convolutional layers and one max pooling layer. As an activation function, we used “ReLU”. Before each convolutional layer, we applied batch normalization. To combat overfitting, we used a dropout layer between the convolutional layers that sets a random portion of the weights equal to a probability of 0.2 to 0. Thereby the network has to learn different aspects of the data each time. 15 Data processing and analysis were principally done in two steps. In the first step, MFCC feature extraction was done. In the second step, the preprocessed data were fed to the convolutional part of the DNN. After the convolutional layers, the output is flattened to a one‐dimensional tensor. Data are then fed to a fully connected layer using the ReLU (rectified linear unit) activation function. To overcome overfitting, which means that the network is too much adapted to the training data set, the regulizer and dropout techniques were applied. In the softmax function, the input values are transformed to a probability distribution that gives the probability of AS or no AS in the present case. AS, aortic valve stenosis; MFCC, Mel frequency cepstral coefficients. Hyperparameter tuning was done iteratively for learning rate, batch size, number of epochs, number of kernels, and grid size of the convolutional layers. Model comparison was made using K‐fold cross‐validation. After training the network, it was applied to the test set that the model had not seen before. The test set consists of 20 patients with AS and 20 without AS. Accuracy, sensitivity, specificity, receiver operating characteristic curves (ROC), and F1 score were calculated. F1 score was calculated using the following formula: F1 score = 2 × (recall × precision)/(recall + precision). Then the same test set was classified by 10 experienced cardiologists, 10 residents, and 10 final year medical students. The performance parameters were averaged in cardiologists, residents, and students. Audio file processing, training the DNN, and making predictions were made with the general‐purpose programming language Python. Preprocessing audio files was done by the audio analysis library Librosa. We generated Mel Frequency Cepstral Coefficients (MFCC) with a hopelength of 10 and 13 coefficients. 16 A CNN was implemented using the Keras framework with a TensorFlow (Google) backend. 
17 Continuous baseline characteristics are given as mean ± SD. Continuous variables were compared using t‐test, and categorical variables were compared using chi‐quadrat test. Accuracy, sensitivity, specificity, and F1‐value for cardiologists, residents, and students are given as mean with a 95% confidence interval. The confidence intervals for the specific parameters of the DNN were calculated with the formula = z × sqrt((parameter × (1 − parameter))/n), where z is the corresponding parameter and n is the size of the test sample, 40 in the present case. Inter‐rater reliability was assessed by calculating Fleiss' kappa. 18 RESULTS: Data from 120 patients with and 120 patients without AS were taken for the present study. From each group, audio files from 100 patients were allocated to the training group and 20 to the test group, respectively. Of the 120 patients with AS, in 99 patients femoral TAVI, in 5 transapical aortic valve implantation, in 13 patients open‐heart surgery, and in 1 patient only valvuloplasty were performed. Two patients were treated conservatively. Patients with AS were older than control patients in the training and test patients. A significant proportion of patients had mitral or tricuspid valve disease. Atrioventricular valve defects were more frequent in patients with AS. However, this difference was not significant. For details on patient characteristics, see Table 1. Patient characteristics Note: Data are given as mean ± SD. Moderate or severe heart valve disease. *p < .05 no AS versus AS. **p < .01 no AS versus AS. DNN's diagnostic accuracy Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1. We analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). 
The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97). Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1. We analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97). Diagnostic accuracy of the CNN versus cardiologists, residents, and students Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2. ROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot. 
Comparison between the DNN and humans in the detection of AS in the test set Note: Data are given as mean and confidence intervals. Abbreviations: DNN, deep neural network; MR, mitral regurgitation. Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2. ROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot. Comparison between the DNN and humans in the detection of AS in the test set Note: Data are given as mean and confidence intervals. Abbreviations: DNN, deep neural network; MR, mitral regurgitation. DNN's diagnostic accuracy: Hyperparameter tuning was done using K‐fold cross‐validation. The training data were split into fourfolds, and while iterating through the folds, each iteration uses onefold as the validation set. Using this approach, the optimized DNN consists of three convolutional layers with 32 kernels with a grid size of 3 × 3 (convolutional layer 1 + 2) and 2 × 2 (convolutional layer 3). The best results could be achieved with 40 epochs, a batch size of 4, and a learning rate of 0.001. After flattening the tensor, a fully connected layer with 150 nodes followed. The final fully connected softmax layer produces a distribution over the two output classes. A schematic of the neural network is shown in Figure 1. We analyzed whether the algorithm showed different results depending on the auscultation point. As expected, worse prediction accuracy is shown when the neural network was trained only with audio files from one auscultation point. This is most likely due to the smaller number of audio files. However, the cause might also be that both auscultation points provide complementary information. Actually, the performance of a model trained only with data from the mitral point was the same as when only trained with data from the aortic point. This result contradicts common clinical experience. However, DNNs are able to identify patterns that are not recognizable to humans. The ROC‐AUC of the DNN trained only with audio files from the aortic auscultation point was 0.93, only trained with audio files from the mitral auscultation point was 0.83 and for merged data was 0.99. The DNN did not detect two patients with AS. No patient without AS was misclassified. The positive predictive value for the DNN using both auscultation points was 1.0, for students 0.83 (0.81–0.86), for residents 0.86 (0.82–0.89), and for cardiologists 0.93 (0.91–0.94). 
The negative predictive value for the DNN using both auscultation points was 0.91 (0.82–1.0), for students 0.94 (0.93–0.95), for residents 0.94 (0.93–0.95), and for cardiologists 0.96 (0.94–0.97). Diagnostic accuracy of the CNN versus cardiologists, residents, and students: Ten students, 10 residents in an advanced stage of training, and 10 consultant cardiologists participated in the study. Participants were blinded for the results of the DNN. They were asked to classify patients whether to have AS using two audio files for each patient. For inter‐rater reliability, Fleiss' kappa was 0.69 in students, 0.64 in residents, and 0.84 in cardiologists. This shows that the agreement in the group of cardiologists is much higher than in the group of residents and students. The F1‐score is a parameter to compare the performance of different models or rater groups when seeking a balance between precision and recall. In Figure 2 ROC curves for deep learning model, students, residents, and cardiologists are shown. The DNN showed a higher F1‐score than the mean score of cardiologists, residents, and students. Values for accuracy, sensitivity, specificity, and F1‐score are given in Table 2. ROC curve (orange line) achieved by the model in comparison to students (A), residents (B), and cardiologists (C). Individual rater performance is indicated by the black crosses, and averaged cardiologist performance is indicated by the red dot. Comparison between the DNN and humans in the detection of AS in the test set Note: Data are given as mean and confidence intervals. Abbreviations: DNN, deep neural network; MR, mitral regurgitation. DISCUSSION: The present study shows that deep learning‐guided auscultation predicts significant AS with similar accuracy as board‐certified cardiologists. These results suggest that artificial intelligence‐assisted auscultation may help general practitioners without special cardiology training. Auscultation is one of the pillars of clinical investigation. It is readily available, and no sophisticated technical requirements are necessary. On the other hand, a high level of expertise is essential, and the skills, once acquired, need to be used continuously. These circumstances may explain errors in auscultation between 20% and 80% in residents, primary care physicians, and cardiologists. 19 , 20 For this reason, computer‐assisted auscultation was already proposed for clinical use at the beginning of this century. The developed algorithms decompose the cyclical heart sound, and hand‐engineered processing is applied for classification. In young patients with hypertrophic cardiomyopathy and children with congenital heart disease, sufficient sensitivity and specificity could be achieved with these techniques. 21 However, these studies were done in young patients with no conditions complicating the auscultation results like adiposity and lung emphysema. Deep neural networks (DNNs) take a completely different direction. A DNN is a machine learning algorithm that models brain architecture. They consist of perceptrons with adjustable weights and activation thresholds. In supervised learning, labeled data, often called ground truth, are propagated through a network of perceptrons that allows the DNN to adjust weights. In this step, the DNN learns how to assign known data to predefined categories. After this training phase, the performance of the final model is evaluated on a test data set. In this step, the trained DNN makes predictions in the form of probabilities for so far unknown data. 
Early applications of deep learning in medicine focused on standard diagnostic techniques that doctors use daily but that cannot always be provided at an expert level. 22 Deep learning systems were developed for ECG interpretation, 23 skin cancer identification, 24 and papilledema detection. 25 In this context, it has been recognized that AI may also be a valuable tool to support doctors in identifying valvular heart disease. In a recently published study by Chorba et al., physicians assigned 5878 auscultation findings to the labels "heart murmur", "no heart murmur", or "inadequate signal". The DNN was trained with these data using an end-to-end (E2E) network design and was then validated on a test data set of 1774 recordings annotated by separate expert clinicians. 26 In the present study, by contrast, the ground truth was defined not by physician assignment but by echocardiography. Using this gold standard avoided the well-known problem of erroneous annotation of auscultation findings by physicians. This is a crucial point, as machine learning is based on detecting subtle patterns in data: only with high-quality training data can the noise that masks these patterns be sufficiently reduced. Furthermore, the data in our study were pre-processed before they were used to train the DNN. In this pre-processing, attributes of the audio data that have been shown to be essential for pattern recognition in audio files were isolated. 27 , 28 With this hybrid approach, our DNN showed accuracy similar to that of board-certified cardiologists. The precisely defined ground truth, in conjunction with the pre-processing of the audio data, compensates for the comparatively low patient number. A potential limitation of our study is that we included only a few patients with moderate aortic stenosis. This is because the study was conducted with data from patients admitted to a tertiary teaching hospital for specialized valve therapy; moreover, patients with only moderate valve disease are challenging to identify because they are often completely asymptomatic and therefore not under medical surveillance. Similarly, the DNN was trained predominantly with normal-flow, high-gradient AS. The present algorithm will probably perform worse in low-flow, low-gradient AS because the flow properties across the valve, and thereby the sound characteristics, are entirely different. This naturally lowers the sensitivity, but the condition affects only a small proportion of patients. In the present study, a machine learning algorithm was trained to detect patients with aortic stenosis. Patients with other conditions such as hypertrophic obstructive cardiomyopathy were not included, and only a few patients with pure mitral regurgitation participated. Thus, the presented algorithm is far from perfect, and this study can only be a first step in introducing artificial intelligence for valvular heart disease into everyday clinical practice. On the other hand, it also shows that artificial intelligence can, in principle, be helpful in the auscultation of heart sounds. Clinical perspectives: The present study provides proof of concept that AI-assisted auscultation can deliver results at a high expert level for the detection of aortic stenosis. Integrating this technology into an electronic stethoscope could be the next step toward everyday clinical use. A stethoscope that issues a warning in the event of certain valve defects is conceivable in the foreseeable future.
Introducing such a device in countries that cannot provide comprehensive medical care was already proposed years ago. 29 CONFLICTS OF INTEREST: The authors declare no conflicts of interest.
Background: Although aortic stenosis (AS) is the most common valvular heart disease in the western world, many affected patients remain undiagnosed. Auscultation is a readily available screening tool for AS. However, it requires a high level of professional expertise. Methods: A deep neural network (DNN) was trained by preprocessed audio files of 100 patients with AS and 100 controls. The DNN's performance was evaluated with a test data set of 40 patients. The primary outcome measures were sensitivity, specificity, and F1-score. Results of the DNN were compared with the performance of cardiologists, residents, and medical students. Results: Eighteen percent of patients without AS and 22% of patients with AS showed an additional moderate or severe mitral regurgitation. The DNN showed a sensitivity of 0.90 (0.81-0.99), a specificity of 1, and an F1-score of 0.95 (0.89-1.0) for the detection of AS. In comparison, we calculated an F1-score of 0.94 (0.86-1.0) for cardiologists, 0.88 (0.78-0.98) for residents, and 0.88 (0.78-0.98) for students. Conclusions: The present study shows that deep learning-guided auscultation predicts significant AS with similar accuracy as cardiologists. The results of this pilot study suggest that AI-assisted auscultation may help general practitioners without special cardiology training in daily practice.
null
null
4,795
268
[ 389, 264, 90 ]
8
[ "auscultation", "dnn", "patients", "data", "cardiologists", "residents", "students", "audio", "files", "audio files" ]
[ "valve stenosis second", "aortic valve replacement", "stenosis patients valvular", "aortic stenosis fact", "moderate aortic stenosis" ]
null
null
[CONTENT] aortic stenosis | artificial intelligence | auscultation | deep neural network | machine learning | valvular heart disease [SUMMARY]
[CONTENT] aortic stenosis | artificial intelligence | auscultation | deep neural network | machine learning | valvular heart disease [SUMMARY]
[CONTENT] aortic stenosis | artificial intelligence | auscultation | deep neural network | machine learning | valvular heart disease [SUMMARY]
null
[CONTENT] aortic stenosis | artificial intelligence | auscultation | deep neural network | machine learning | valvular heart disease [SUMMARY]
null
[CONTENT] Aortic Valve Stenosis | Cardiologists | Cardiology | Humans | Neural Networks, Computer | Pilot Projects [SUMMARY]
[CONTENT] Aortic Valve Stenosis | Cardiologists | Cardiology | Humans | Neural Networks, Computer | Pilot Projects [SUMMARY]
[CONTENT] Aortic Valve Stenosis | Cardiologists | Cardiology | Humans | Neural Networks, Computer | Pilot Projects [SUMMARY]
null
[CONTENT] Aortic Valve Stenosis | Cardiologists | Cardiology | Humans | Neural Networks, Computer | Pilot Projects [SUMMARY]
null
[CONTENT] valve stenosis second | aortic valve replacement | stenosis patients valvular | aortic stenosis fact | moderate aortic stenosis [SUMMARY]
[CONTENT] valve stenosis second | aortic valve replacement | stenosis patients valvular | aortic stenosis fact | moderate aortic stenosis [SUMMARY]
[CONTENT] valve stenosis second | aortic valve replacement | stenosis patients valvular | aortic stenosis fact | moderate aortic stenosis [SUMMARY]
null
[CONTENT] valve stenosis second | aortic valve replacement | stenosis patients valvular | aortic stenosis fact | moderate aortic stenosis [SUMMARY]
null
[CONTENT] auscultation | dnn | patients | data | cardiologists | residents | students | audio | files | audio files [SUMMARY]
[CONTENT] auscultation | dnn | patients | data | cardiologists | residents | students | audio | files | audio files [SUMMARY]
[CONTENT] auscultation | dnn | patients | data | cardiologists | residents | students | audio | files | audio files [SUMMARY]
null
[CONTENT] auscultation | dnn | patients | data | cardiologists | residents | students | audio | files | audio files [SUMMARY]
null
[CONTENT] population | heart | significant | valve | years | 65 | replacement | mortality | valve replacement | murmurs [SUMMARY]
[CONTENT] files | auscultation | audio | patients | data | 10 | test | coefficients | mfcc | convolutional [SUMMARY]
[CONTENT] students | residents | dnn | cardiologists | patients | point | auscultation | 94 | 93 | trained [SUMMARY]
null
[CONTENT] auscultation | dnn | students | patients | data | residents | cardiologists | files | audio | point [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] 100 | AS and 100 ||| 40 ||| F1 ||| [SUMMARY]
[CONTENT] Eighteen percent | 22% ||| 0.90 | 0.81 | 1 | F1 | 0.95 | 0.89-1.0 ||| F1 | 0.94 | 0.86-1.0 | 0.88 | 0.78-0.98 | 0.88 | 0.78-0.98 [SUMMARY]
null
[CONTENT] ||| ||| ||| 100 | AS and 100 ||| 40 ||| F1 ||| ||| Eighteen percent | 22% ||| 0.90 | 0.81 | 1 | F1 | 0.95 | 0.89-1.0 ||| F1 | 0.94 | 0.86-1.0 | 0.88 | 0.78-0.98 | 0.88 | 0.78-0.98 ||| ||| AI | daily [SUMMARY]
null
Skin barrier dysfunction measured by transepidermal water loss at 2 days and 2 months predates and predicts atopic dermatitis at 1 year.
25618747
Loss-of-function mutations in the skin barrier protein filaggrin (FLG) are a major risk for atopic dermatitis (AD). The pathogenic sequence of disturbances in skin barrier function before or during the early development of AD is not fully understood. A more detailed understanding of these events is needed to develop a clearer picture of disease pathogenesis. A robust, noninvasive test to identify babies at high risk of AD would be important in planning early intervention and/or prevention studies.
BACKGROUND
A total of 1903 infants were enrolled in the Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints Birth Cohort study from July 2009 to October 2011. Measurements of transepidermal water loss (TEWL) were made at birth (day 2) and at 2 and 6 months. The presence of AD was ascertained at 6 and 12 months, and disease severity was assessed by using the SCORing Atopic Dermatitis clinical tool at 6 months and by using both the SCORing Atopic Dermatitis clinical tool and Nottingham Severity Score at 12 months. A total of 1300 infants were genotyped for FLG mutations.
METHODS
At 6 months, 18.7% of the children had AD, and at 12 months, 15.53%. In a logistic regression model, day 2 upper quartile TEWL measurement was significantly predictive of AD at 12 months (area under the receiver operating characteristic curve, 0.81; P < .05). Lowest quartile day 2 TEWL was protective against AD at 12 months. An upper quartile 2 month TEWL was also strongly predictive of AD at 12 months (area under the receiver operating characteristic curve, 0.84; P < .05). At both ages, this effect was independent of parental atopy, FLG status, or report of an itchy flexural rash at 2 months. Associations were increased when parental atopy status or child FLG mutation status was added into the linear regression model.
RESULTS
Impairment of skin barrier function at birth and at 2 months precedes clinical AD. In addition to providing important mechanistic insights into disease pathogenesis, these findings have implications for the optimal timing of interventions for the prevention of AD.
CONCLUSIONS
[ "DNA Mutational Analysis", "Dermatitis, Atopic", "Female", "Filaggrin Proteins", "Genotype", "Humans", "Infant", "Infant, Newborn", "Intermediate Filament Proteins", "Male", "Prognosis", "Risk", "Skin", "Time Factors" ]
4382348
null
null
Methods
Study subjects: The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study. A total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter at clinic visits at 2, 6, and 12 months. All infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34
TEWL measurements: TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a hygrometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. For TEWL readings at the other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken.
Statistical analysis: A series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD "Yes", equal to 1, with AD "No" = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at the 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables "use of emollient before test reading," "presence of an itchy rash," "infant sex," and "infant birthweight." Receiver operating characteristic (ROC) curve output, which plots the true positive rate (sensitivity) against the false positive rate (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36
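A minimal sketch of how such a model can be fitted and evaluated in Python follows; everything here (the column names, the synthetic data, and the statsmodels/scikit-learn calls) is an illustrative assumption and not the study's actual analysis code.

```python
# Illustrative sketch only: synthetic data and hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1500
df = pd.DataFrame({
    "parental_atopy": rng.integers(0, 2, n),
    "flg_mutation": rng.binomial(1, 0.10, n),
    "tewl_2m": rng.gamma(2.0, 5.5, n),              # stand-in for 2-month TEWL (g water/m2/h)
    "itchy_rash_2m": rng.integers(0, 2, n),
    "emollient_use": rng.integers(0, 2, n),
    "sex": rng.integers(0, 2, n),
    "birthweight_kg": rng.normal(3.5, 0.5, n),
})
df["tewl_quartile"] = pd.qcut(df["tewl_2m"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
# Synthetic outcome loosely tied to TEWL and parental atopy, for demonstration only.
lin = -3.0 + 0.12 * df["tewl_2m"] + 0.9 * df["parental_atopy"]
df["ad_12m"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Analogue of "Model 2": parental atopy plus TEWL quartiles, controlling for the stated
# covariates, without FLG mutation status.
model2 = smf.logit(
    "ad_12m ~ parental_atopy + C(tewl_quartile) + emollient_use + itchy_rash_2m"
    " + sex + birthweight_kg",
    data=df,
).fit(disp=0)

print(np.exp(model2.params))                               # adjusted odds ratios
print(roc_auc_score(df["ad_12m"], model2.predict(df)))     # area under the ROC curve
```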
Results
A total of 1903 infants were enrolled in the study. The demographic details of the population studied are presented in Table I. Filaggrin mutation typing: Stream 1 infants had cord blood samples taken at birth. Infants without cord blood samples had Oragene saliva samples taken. Those infants still enrolled in the study at 2 years had EDTA blood samples taken. FLG genotyping was carried out in 1300 infants with available DNA. The cumulative FLG mutation rate was 10.46% (136 of 1300). Four infants were homozygous for an FLG mutation, with the remainder heterozygous (Table II). TEWL through early infancy: TEWL was measured in the early newborn period in 1691 of 1903 (88.86%) infants of the cohort. The mean TEWL for newborns was 7.32 ± 3.33 g water/m2/h. Using univariable analysis, we found no significant association between TEWL measurements at birth and sex, gestation, postnatal age at measurement, or recruitment stream (data not shown). A total of 1614 of 1638 (98.5%) infants who attended the 2-month appointment had TEWL measured. Mean 2-month TEWL was 10.97 ± 7.98 g water/m2/h. A total of 1516 of 1537 (98.6%) infants who attended the 6-month appointment had TEWL measured. Mean 6-month TEWL was 10.71 ± 7.10 g water/m2/h. TEWL measurements by quartiles are presented in Table III. Clinical diagnosis of AD: AD was screened for at the 6- and 12-month appointments. The diagnosis of AD was made according to the UK Working Party diagnostic criteria. At 6 months, 18.7% (299 of 1597) of the infants screened were diagnosed with AD. A total of 287 had a SCORAD completed, with a mean SCORAD score of 21.54 ± 16.29 (range, 0-88). At 12 months, 15.53% (232 of 1494) of the infants screened were diagnosed with AD. The mean SCORAD score at 12 months in affected children was 18.56 ± 14.92 (range, 0-77).
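For reference, the quartile cut-points used as predictor thresholds in the models below can be derived directly from the TEWL distribution. A brief sketch follows; the values are synthetic and stand in for the cohort's measurements.

```python
# Quartile cut-points and quartile categories from a TEWL distribution.
# 'tewl_2m' holds synthetic example values, not the cohort's measurements.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
tewl_2m = pd.Series(rng.gamma(2.0, 5.5, 1614))          # stand-in for 2-month TEWL (g water/m2/h)

q25, q50, q75 = np.percentile(tewl_2m, [25, 50, 75])    # reported cut-points were 7.0, 9.4, 12.3
quartile = pd.qcut(tewl_2m, 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(round(q25, 1), round(q50, 1), round(q75, 1))
print(quartile.value_counts().sort_index())
```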
LR analysis: Factors influencing the development of AD at 1 year: Our primary outcome was the presence of AD at 1 year. We developed a model that explored the relative contributions of FLG mutation status, parental atopy, presence of an itchy rash at 2 months, TEWL at day 2 and month 2, and ΔTEWL (day 2 to month 2 and day 2 to month 6) on AD at 1 year (Table IV). FLG mutation status was not associated with an elevated TEWL at birth. However, it was associated with an elevated TEWL at 2 and 6 months and with elevated ΔTEWL day 2 to 2 months and ΔTEWL day 2 to 6 months. Notably, FLG mutation carriers did not have elevated ΔTEWL 2 months to 6 months, implying that the major changes in skin barrier function in those who develop AD by 12 months start in the first 2 months of life. In our univariable modeling, the odds ratio (OR) for AD at 12 months conferred by both parents being atopic was 2.5 compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1 in this article's Online Repository at www.jacionline.org). LR models were then used to examine the factors at birth and at 2 months that influence a diagnosis of AD at 12 months. We used 3 separate models to explore the relative contribution of parental atopy, FLG mutation status, and TEWL (at day 2 and at 2 months) toward the development of AD at 12 months. All models controlled for the variables use of emollient before test reading, presence of an itchy rash, infant sex, and infant birthweight. These covariates were not significant in any of the 3 multivariable models. LR modeling of AD risk at 12 months including day 2 TEWL: The LR model incorporating day 2 TEWL is summarized in Table V. In our first model, we included parental atopy and FLG mutation status as independent variables and excluded day 2 TEWL. Parental atopy and FLG mutation status significantly increase the likelihood of a positive diagnosis of AD at 12 months by 13.7 and 8.6 times, respectively, compared with infants without FLG mutation and without an atopic parent.
The ROC curve plots the true positive rate (sensitivity) against the false positive rate (1 − specificity); the AUC is 0.8 for this model. Our second model includes day 2 TEWL test reading scores at the 25th, 50th, and 75th percentiles as independent variables. We did not include FLG mutation status in this second model. The area under the ROC curve (AUC) remains at 0.8 for this model. Infants with a TEWL reading of 9.0 or above are 7.1 times more likely to be diagnosed with AD at 1 year than are infants with a reading below this point, controlling for all other variables in the model. This suggests that day 2 TEWL can be used as a sole indicator of likelihood of AD at 12 months. Our third model includes all independent variables, including both TEWL and FLG status. LR modeling of AD risk at 12 months including TEWL at 2 months: We repeated the LR approach for month 2 TEWL values, summarized in Table VI. Again, we used 3 models. In the first model, we included parental atopy and FLG mutation status as independent variables but did not include TEWL. Parental atopy and FLG mutation carriage significantly increase the likelihood of a positive AD diagnosis at 12 months by 2.7 and 3.0 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The AUC is 0.66 for this model. In the second model, we included the month 2 TEWL test reading scores at the 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3 g water/m2/h, respectively) as independent variables. We did not include FLG mutation status in this second model, as explained above. The AUC improved from 0.66 to 0.82 in this model. Infants with a TEWL reading of 12.3 or above were 5.6 times more likely to be diagnosed with AD at 12 months than were infants with a reading below this point, controlling for all other variables in the model. Model 3 included all independent variables. In a pattern similar to that found at 2 days, the AUC improved slightly to 0.84, although here again it is important to note that model 2 demonstrates that FLG status need not be measured to produce a prognosis with high accuracy. We modeled these in a traditional stepwise LR analysis. The significant finding was that after controlling for all other possible influencing factors, month 2 TEWL was the strongest independent predictor of AD at 12 months.
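The "x times more likely" statements above correspond to adjusted odds ratios obtained by exponentiating logistic-regression coefficients. A short illustration follows; the coefficient is back-calculated from the reported OR of about 5.6 for an upper-quartile 2-month TEWL, and the standard error is purely hypothetical.

```python
# Converting a logistic-regression coefficient into an adjusted odds ratio with a 95% CI.
# beta is back-calculated from the reported OR of ~5.6; the standard error is hypothetical.
import numpy as np

beta, se = 1.72, 0.35
odds_ratio = np.exp(beta)                                        # ~5.6
ci_low, ci_high = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
print(f"OR {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```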
null
null
[ "Study subjects", "TEWL measurements", "Statistical analysis", "Filaggrin mutation typing", "TEWL through early infancy", "Clinical diagnosis of AD", "LR analysis: Factors influencing the development of AD at 1 year", "LR modeling of AD risk at 12 months including day 2 TEWL", "LR modeling of AD risk at 12 months including TEWL at 2 months" ]
[ "The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study.\nA total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months.\nAll infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34", "TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. For TEWL readings at other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. 
The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken.", "A series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD “Yes” and is equal to 1, with AD “No” = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables “use of emollient before test reading,” “presence of an itchy rash,” “infant sex,” and “infant birthweight.” Receiver operating characteristic (ROC) curve output, which plots the true positives (sensitivity) vs false positives (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36", "Stream 1 infants had cord blood samples taken at birth. Infants without cord blood samples had Oragene saliva samples taken. Those infants still enrolled in the study at 2 years had EDTA blood samples taken. FLG genotyping was carried out in 1300 infants with available DNA. The cumulative FLG mutation rate was 10.46% (136 of 1300). Four infants were homozygous for FLG mutation, with the remainder heterozygous for FLG mutation (Table II).", "TEWL was taken in the early newborn period in 1691 of 1903 (88.86%) infants of the cohort. The mean TEWL for newborns was 7.32 ± 3.33 gwater/m2/h. Using univariable analysis, we found that there was no significant association between TEWL measurements at birth and sex, gestation, or postnatal age at measurement or recruitment stream 1 or 2 (data not shown). A total of 1614 of 1638 (98.5%) infants who attended the 2-month appointment had TEWL measured (98.5%). Mean 2-month TEWL was 10.97 ± 7.98 gwater/m2/h. A total of 1516 of 1537 (98.6%) infants who attended the 6-month appointment had TEWL measured. Mean 6-month TEWL was 10.71 ± 7.10 gwater/m2/h. TEWL measurements by quartiles are presented in Table III.", "AD was screened for at 6- and 12-month appointments. This diagnosis of AD was made according to the UK Working Party diagnostic criteria. At 6 months, 18.7% (299 of 1597) of the infants screened were diagnosed with AD. A total of 287 had a SCORAD completed, with a mean SCORAD score of 21.54 ± 16.29 (range, 0-88). At 12 months, 15.53% (232 of 1494) of the infants screened were diagnosed with AD. The mean SCORAD score at 12 months in affected children was 18.56 ± 14.92 (range, 0-77).", "Our primary outcome was the presence of AD at 1 year. 
We developed a model that explored the relative contributions of FLG mutation status, parental atopy, presence of an itchy rash at 2 months, TEWL at day 2 and month 2, and ΔTEWL (day 2 to month 2 and day 2 to month 6) on AD at 1 year (Table IV). FLG mutation status was not associated with an elevated TEWL at birth. However, it was associated with an elevated TEWL at 2 and 6 months and with elevated ΔTEWL day 2 to 2 months and ΔTEWL day 2 to 6 months. Notably, FLG mutation carriers did not have elevated ΔTEWL 2 months to 6 months, implying that the major changes in skin barrier function in those who develop AD by 12 months start in the first 2 months of life. In our univariable modeling, the odds ratio (OR) for AD at 12 months conferred by both parents being atopic was 2.5 compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1 in this article's Online Repository at www.jacionline.org). LR models were then used to examine the factors at birth and at 2 months that influence a diagnosis of AD at 12 months. We used 3 separate models to explore the relative contribution of parental atopy, FLG mutation status, and TEWL (at day 2 and at 2 months) toward the development of AD at 12 months. All models controlled for the variables use of emollient before test reading, presence of an itchy rash, infant sex, and infant birthweight. These covariates were not significant in any of the 3 multivariable models.", "The LR model incorporating day 2 TEWL is summarized in Table V. In our first model, we included parental atopy and FLG mutation status as independent variables and excluded day 2 TEWL. Parental atopy and FLG mutation status significantly increase the likelihood of a positive diagnosis of AD at 12 months by 13.7 and 8.6 times, respectively, compared with infants without FLG mutation and without an atopic parent. The ROC, which plots the true positives (sensitivity) vs false positives (1 −specificity), is 0.8 for this model. Our second model includes day 2 TEWL test reading scores at 25th, 50th, and 75th percentiles as independent variables. We did not include FLG mutation status in this second model. The area under the ROC curve (AUC) remains at 0.8 for this model. Infants with a TEWL reading of 9.0 or above are 7.1 times more likely to be diagnosed with AD at 1 year than are infants with a reading below this point, controlling for all other variables in the model. This suggests that day 2 TEWL can be used as a sole indicator of likelihood of AD at 12 months. Our third model includes all independent variables, including both TEWL and FLG status.", "We repeated the LR approach for month 2 TEWL values, summarized in Table VI. Again, we used 3 models. In the first model, we included parental atopy and FLG mutation status as independent variables but did not include TEWL. Parental atopy and FLG mutation carriage significantly increase the likelihood of a positive AD diagnosis at 12 months by 2.7 and 3.0 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The ROC is 0.66 for this model. In the second model, we included the month 2 TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3 gwater/m2/h, respectively) as independent variables. We did not include FLG mutation status in this second model, as explained above. The AUC improved from 0.66 to 0.82 in this model. 
Infants with a TEWL reading of 12.3 or above were 5.6 times more likely to be diagnosed with AD at 12 months than were infants with a reading below this point, controlling for all other variables in the model. Model 3 included all independent variables. In a pattern similar to that found at 2 days, the AUC improved slightly to 0.84, although here again it is important to note that model 2 demonstrates that FLG status need not be measured to produce a prognosis with high accuracy. We modeled these in a traditional stepwise LR analysis. The significant finding was that after controlling for all other possible influencing factors, month 2 TEWL was the strongest independent predictor of AD at 12 months." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Methods", "Study subjects", "TEWL measurements", "Statistical analysis", "Results", "Filaggrin mutation typing", "TEWL through early infancy", "Clinical diagnosis of AD", "LR analysis: Factors influencing the development of AD at 1 year", "LR modeling of AD risk at 12 months including day 2 TEWL", "LR modeling of AD risk at 12 months including TEWL at 2 months", "Discussion" ]
[ " Study subjects The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study.\nA total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months.\nAll infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34\nThe Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study.\nA total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. 
Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months.\nAll infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34\n TEWL measurements TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. For TEWL readings at other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken.\nTEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. For TEWL readings at other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. 
The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken.\n Statistical analysis A series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD “Yes” and is equal to 1, with AD “No” = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables “use of emollient before test reading,” “presence of an itchy rash,” “infant sex,” and “infant birthweight.” Receiver operating characteristic (ROC) curve output, which plots the true positives (sensitivity) vs false positives (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36\nA series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD “Yes” and is equal to 1, with AD “No” = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables “use of emollient before test reading,” “presence of an itchy rash,” “infant sex,” and “infant birthweight.” Receiver operating characteristic (ROC) curve output, which plots the true positives (sensitivity) vs false positives (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. 
Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36", "The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study.\nA total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months.\nAll infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34", "TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. 
Results
A total of 1903 infants were enrolled onto the study. The demographic details of the population studied are presented in Table I.

Filaggrin mutation typing
Stream 1 infants had cord blood samples taken at birth. Infants without cord blood samples had Oragene saliva samples taken. Those infants still enrolled in the study at 2 years had EDTA blood samples taken. FLG genotyping was carried out in 1300 infants with available DNA. The cumulative FLG mutation rate was 10.46% (136 of 1300). Four infants were homozygous for FLG mutation, with the remainder heterozygous for FLG mutation (Table II).
TEWL through early infancy
TEWL was taken in the early newborn period in 1691 of 1903 (88.86%) infants of the cohort. The mean TEWL for newborns was 7.32 ± 3.33 g water/m2/h. Using univariable analysis, we found no significant association between TEWL measurements at birth and sex, gestation, postnatal age at measurement, or recruitment stream (1 or 2) (data not shown). A total of 1614 of 1638 (98.5%) infants who attended the 2-month appointment had TEWL measured. Mean 2-month TEWL was 10.97 ± 7.98 g water/m2/h. A total of 1516 of 1537 (98.6%) infants who attended the 6-month appointment had TEWL measured. Mean 6-month TEWL was 10.71 ± 7.10 g water/m2/h. TEWL measurements by quartiles are presented in Table III.
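The quartile summaries referred to above (Table III) come down to percentile cut points over the raw readings. A minimal sketch, using made-up TEWL values, of how such cut points can be derived and each reading assigned to a quartile:

```python
# Derive quartile cut points from TEWL readings and assign each reading to a
# quartile. The readings below are made up for illustration.
import numpy as np

tewl_2m = np.array([5.2, 7.1, 9.6, 12.9, 8.4, 15.3, 6.7, 11.0])  # g water/m2/h
q25, q50, q75 = np.percentile(tewl_2m, [25, 50, 75])
print(f"cut points: 25th = {q25:.1f}, 50th = {q50:.1f}, 75th = {q75:.1f}")

quartile = np.digitize(tewl_2m, bins=[q25, q50, q75]) + 1  # 1 = lowest, 4 = highest
for reading, q in zip(tewl_2m, quartile):
    print(f"TEWL {reading:5.1f} -> quartile {q}")
```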
Clinical diagnosis of AD
AD was screened for at the 6- and 12-month appointments. The diagnosis of AD was made according to the UK Working Party diagnostic criteria. At 6 months, 18.7% (299 of 1597) of the infants screened were diagnosed with AD. A total of 287 had a SCORAD completed, with a mean SCORAD score of 21.54 ± 16.29 (range, 0-88). At 12 months, 15.53% (232 of 1494) of the infants screened were diagnosed with AD. The mean SCORAD score at 12 months in affected children was 18.56 ± 14.92 (range, 0-77).

LR analysis: Factors influencing the development of AD at 1 year
Our primary outcome was the presence of AD at 1 year. We developed a model that explored the relative contributions of FLG mutation status, parental atopy, presence of an itchy rash at 2 months, TEWL at day 2 and month 2, and ΔTEWL (day 2 to month 2 and day 2 to month 6) on AD at 1 year (Table IV). FLG mutation status was not associated with an elevated TEWL at birth. However, it was associated with an elevated TEWL at 2 and 6 months and with elevated ΔTEWL from day 2 to 2 months and from day 2 to 6 months. Notably, FLG mutation carriers did not have elevated ΔTEWL from 2 months to 6 months, implying that the major changes in skin barrier function in those who develop AD by 12 months start in the first 2 months of life. In our univariable modeling, the odds ratio (OR) for AD at 12 months conferred by both parents being atopic was 2.5, compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1 in this article's Online Repository at www.jacionline.org). LR models were then used to examine the factors at birth and at 2 months that influence a diagnosis of AD at 12 months. We used 3 separate models to explore the relative contribution of parental atopy, FLG mutation status, and TEWL (at day 2 and at 2 months) toward the development of AD at 12 months. All models controlled for the variables use of emollient before test reading, presence of an itchy rash, infant sex, and infant birthweight. These covariates were not significant in any of the 3 multivariable models.
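As a reminder of what the univariable ORs quoted above represent, an unadjusted OR and its 95% CI can be obtained directly from a 2 × 2 table of exposure (for example, upper quartile TEWL at 2 months) against AD at 12 months. The counts below are invented purely to show the arithmetic.

```python
# Unadjusted odds ratio with a Wald (Woolf) 95% CI from a 2x2 table.
# The counts are invented; they do not come from the study data.
import numpy as np

#                                   AD at 12 months   no AD
a, b = 90, 260    # exposed (e.g., upper quartile TEWL at 2 months)
c, d = 140, 1000  # unexposed
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```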
LR modeling of AD risk at 12 months including day 2 TEWL
The LR model incorporating day 2 TEWL is summarized in Table V. In our first model, we included parental atopy and FLG mutation status as independent variables and excluded day 2 TEWL. Parental atopy and FLG mutation status significantly increased the likelihood of a positive diagnosis of AD at 12 months, by 13.7 and 8.6 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The area under the ROC curve (AUC), which summarizes the trade-off between true positives (sensitivity) and false positives (1 − specificity), is 0.8 for this model. Our second model included day 2 TEWL test reading scores at the 25th, 50th, and 75th percentiles as independent variables and did not include FLG mutation status. The AUC remains at 0.8 for this model. Infants with a TEWL reading of 9.0 g water/m2/h or above were 7.1 times more likely to be diagnosed with AD at 1 year than infants with a reading below this point, controlling for all other variables in the model. This suggests that day 2 TEWL can be used as a sole indicator of the likelihood of AD at 12 months. Our third model included all independent variables, including both TEWL and FLG status.
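The “times more likely” figures quoted for the day 2 model are odds ratios, that is, exponentiated logistic regression coefficients. A small sketch of the conversion, with an invented intercept and coefficient chosen only to illustrate the arithmetic:

```python
# Converting a logistic regression coefficient into an odds ratio and into
# predicted risks above vs below a TEWL cut point. Values are invented.
import math

intercept = -2.2        # log-odds of AD for a reference infant (invented)
beta_upper_tewl = 1.96  # coefficient for "day 2 TEWL at or above the cut point" (invented)

odds_ratio = math.exp(beta_upper_tewl)
p_below = 1 / (1 + math.exp(-intercept))
p_above = 1 / (1 + math.exp(-(intercept + beta_upper_tewl)))
print(f"odds ratio = {odds_ratio:.1f}")
print(f"predicted risk: {p_below:.1%} below vs {p_above:.1%} above the cut point")
```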
LR modeling of AD risk at 12 months including TEWL at 2 months
We repeated the LR approach for month 2 TEWL values, summarized in Table VI. Again, we used 3 models. In the first model, we included parental atopy and FLG mutation status as independent variables but did not include TEWL. Parental atopy and FLG mutation carriage significantly increased the likelihood of a positive AD diagnosis at 12 months, by 2.7 and 3.0 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The AUC is 0.66 for this model. In the second model, we included the month 2 TEWL test reading scores at the 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3 g water/m2/h, respectively) as independent variables and, as explained above, did not include FLG mutation status. The AUC improved from 0.66 to 0.82 in this model. Infants with a TEWL reading of 12.3 g water/m2/h or above were 5.6 times more likely to be diagnosed with AD at 12 months than infants with a reading below this point, controlling for all other variables in the model. Model 3 included all independent variables. In a pattern similar to that found at 2 days, the AUC improved slightly to 0.84, although here again it is important to note that model 2 demonstrates that FLG status need not be measured to produce a prognosis with high accuracy. We also modeled these variables in a traditional stepwise LR analysis. The significant finding was that, after controlling for all other possible influencing factors, month 2 TEWL was the strongest independent predictor of AD at 12 months.
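The text does not specify which stepwise variant was used; one common form is forward selection by AIC, sketched below with statsmodels. The variable names reuse the illustrative data frame from the earlier sketch and are not the study's.

```python
# One common stepwise variant: forward selection of predictors by AIC for a
# logistic regression. The study's exact procedure may differ.
import statsmodels.formula.api as smf

def forward_select(df, outcome, candidates):
    candidates = list(candidates)
    selected = []
    best_aic = smf.logit(f"{outcome} ~ 1", data=df).fit(disp=False).aic
    improved = True
    while improved and candidates:
        improved = False
        aic_by_var = {
            var: smf.logit(f"{outcome} ~ " + " + ".join(selected + [var]), data=df)
                    .fit(disp=False).aic
            for var in candidates
        }
        best_var = min(aic_by_var, key=aic_by_var.get)
        if aic_by_var[best_var] < best_aic:  # add the variable only if it improves AIC
            best_aic = aic_by_var[best_var]
            selected.append(best_var)
            candidates.remove(best_var)
            improved = True
    return selected

# Example call (using the simulated data frame from the earlier sketch):
# forward_select(df, "ad_12m", ["C(parental_atopy)", "flg_mutation", "C(tewl_quartile)", "itchy_rash"])
```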
Discussion
This study involves a large, unselected birth cohort and is the first study of this scale to assess skin barrier function in the newborn period and early infancy. We have shown that changes in the skin barrier predate clinical AD, with a signal for barrier impairment detected in asymptomatic infants at day 2 and more markedly at 2 months. In our univariable modeling, the OR for AD at 12 months conferred by both parents being atopic was 2.5, compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1), showing the additional value of a TEWL reading at this stage. These changes are seen in both high-risk and low-risk infants and, crucially, are independent of FLG status.
Although in our multivariable analysis the AUC improves slightly to 0.83 when all available variables are included, importantly, our second model (which excludes FLG genotyping) demonstrates that FLG status need not be measured to produce an accurate 12-month AD prognosis for babies with upper quartile day 2 TEWL readings.

Some children will develop AD before 6 months. In our study, we did not formally diagnose AD at 2 months, but all parents were asked about the presence of an itchy rash on the face or skin folds. These infants were designated as having an “itchy rash,” which was present in 8% of all cases at 2 months. There are no widely recognized diagnostic criteria for AD at this age; however, we controlled for an itchy red rash at the 2-month visit in our model. AD was screened for in all participants at 6 and 12 months and formally diagnosed using the UK Working Party criteria. Compared with other studies in the field, our study shows the earliest timepoint for a signal of impaired TEWL in asymptomatic infants: day 2. A further, stronger signal is seen at 2 months.

Previous studies of smaller scale have shown a relationship between FLG mutation status and TEWL.19 However, our study is much larger, by several orders of magnitude, and is not selected for high risk of atopy. In our study, 30% of FLG mutation carriers developed AD by age 12 months and 70% did not, consistent with large previous studies. At birth, there was no difference in mean TEWL reading between the FLG mutation and FLG wild-type groups (7.33 ± 3.62 g water/m2/h vs 7.3 ± 3.38 g water/m2/h). However, by 2 months, FLG mutation-carrying infants had a significantly higher mean TEWL than FLG wild-type infants, and this difference persisted at 6 months.
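The paper reports these group means without naming the test used to compare them; a comparison of that kind might look like the sketch below, run here on made-up readings, with a rank-based alternative since TEWL distributions are typically right-skewed.

```python
# Hypothetical comparison of TEWL between FLG mutation carriers and wild-type
# infants. Readings are simulated; the paper does not state which test was used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tewl_carriers = rng.gamma(5.0, 2.6, 130)    # made-up carrier readings, g water/m2/h
tewl_wildtype = rng.gamma(5.0, 2.1, 1100)   # made-up wild-type readings

t_stat, p_t = stats.ttest_ind(tewl_carriers, tewl_wildtype, equal_var=False)  # Welch t test
u_stat, p_u = stats.mannwhitneyu(tewl_carriers, tewl_wildtype)                # rank-based alternative
print(f"means: {tewl_carriers.mean():.2f} vs {tewl_wildtype.mean():.2f} g water/m2/h")
print(f"Welch t test p = {p_t:.3g}; Mann-Whitney U p = {p_u:.3g}")
```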
Selecting out children in the upper quartile, regardless of FLG status, allows a noninvasive, discriminating, and targeted means of predicting AD in the first days of life, allowing possible preventive measures to be put in place. Conversely, selecting out infants in the lower quartile of TEWL, which has a protective effect in relation to AD at 12 months, could be an important piece of positive information for their families and would reduce the “number needed to treat” in an intervention study of potential protective interventions. Detection of an increased TEWL as early as day 2, or at 2 months, before clinical features of AD develop, is a novel and unique finding in a general pediatric population. Accurate identification of individuals at high risk for AD, using family history in combination with a noninvasive measurement, with the optional addition of FLG mutation status, has great potential for intervention studies. Stratified interventions could be made on the basis of these 2 or 3 variables. Pilot studies for the primary prevention of AD by use of liberal emollients have shown the potential of this approach.24 We believe that the novel findings in this study will facilitate and inform stratification of future intervention studies.

A lowest quartile TEWL at birth is protective against AD. A highest quartile TEWL at 2 days and at 2 months is strongly associated with an increased prevalence of AD at 12 months. These changes predate the development of clinically apparent AD. The mechanistic implication of these observations is that abnormalities in the skin barrier predate symptomatic or clinically detectable AD. These changes occur very early after the transition from the intrauterine aqueous environment to the xerotic postnatal environment. It is notable that this effect is not dependent on FLG loss-of-function mutations but is enhanced by and interactive with these genetic factors, and presumably with environmental influences encountered postnatally. TEWL measurement in early life is therefore an effective tool to detect infants at risk of AD, especially when combined in a model with parental history of atopy and/or FLG mutation status. The obvious next step would be to assess whether intervention studies between birth and age 2 months in those with raised TEWL at birth would be effective in maintaining the skin barrier and reducing the incidence of AD.

Clinical implications: A signal for the development of AD is seen at 2 days and at 2 months in asymptomatic infants. Interventions to potentially prevent AD could be targeted toward such infants.
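As a toy illustration of how such targeting could be operationalized, the helper below flags readings at or above the upper quartile cut points reported in this study; it is not a validated clinical rule, and the way family history is returned alongside the TEWL flag is an assumption made only for the example.

```python
# Toy screening helper using the upper quartile TEWL cut points reported above.
# Not a validated clinical tool.
UPPER_QUARTILE_CUTOFF = {"day2": 9.0, "month2": 12.3}  # g water/m2/h

def screen_infant(tewl_reading, timepoint, both_parents_atopic=False):
    """Return the two risk markers discussed in the text for one infant."""
    return {
        "upper_quartile_tewl": tewl_reading >= UPPER_QUARTILE_CUTOFF[timepoint],
        "both_parents_atopic": both_parents_atopic,
    }

print(screen_infant(10.5, "day2"))                             # upper quartile day 2 TEWL
print(screen_infant(8.0, "month2", both_parents_atopic=True))  # family history only
```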
[ "methods", null, null, null, "results", null, null, null, null, null, null, "discussion" ]
[ "Infant", "skin barrier", "TEWL", "atopic dermatitis", "filaggrin", "predictor", "biomarker", "AD, Atopic dermatitis", "AUC, Area under the ROC curve", "BASELINE, Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints", "FLG, Filaggrin gene", "LR, Logistic regression", "OR, Odds ratio", "ROC, Receiver operating characteristic", "SCOPE, Screening for Pregnancy Endpoints", "TEWL, Transepidermal water loss", "ΔTEWL, Change in TEWL measurement between one timepoint and another" ]
Methods: Study subjects The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study. A total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months. All infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34 The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study. A total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. 
Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months. All infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34 TEWL measurements TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. For TEWL readings at other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken. TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. For TEWL readings at other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken. 
Statistical analysis A series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD “Yes” and is equal to 1, with AD “No” = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables “use of emollient before test reading,” “presence of an itchy rash,” “infant sex,” and “infant birthweight.” Receiver operating characteristic (ROC) curve output, which plots the true positives (sensitivity) vs false positives (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36 A series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD “Yes” and is equal to 1, with AD “No” = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables “use of emollient before test reading,” “presence of an itchy rash,” “infant sex,” and “infant birthweight.” Receiver operating characteristic (ROC) curve output, which plots the true positives (sensitivity) vs false positives (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. 
Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36 Study subjects: The Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study is the first birth cohort study in Ireland.27 It was developed as the pediatric follow on from the Cork Centre for the Screening for Pregnancy Endpoints (SCOPE) study, a multicenter international study evaluating diseases of pregnancy in primigravidous women.28 The Cork BASELINE Birth Cohort study recruited from August 2009 through to October 2011. Infants recruited from July 2009 had skin barrier assessment at birth and throughout early life; these 1903 infants are included in this study. A total of 1303 infants were recruited antenatally (stream1). These women were subject to the inclusion criteria of the SCOPE study, namely, first-time, low-risk mothers with singleton pregnancies delivered at or near term. Consent for recruitment was sought at 20 weeks of gestation and confirmed at birth of the live baby. Cord blood samples were taken at birth and stored for future use. A second recruitment stream began in July 2010 that recruited mothers and babies on the postnatal ward (stream2). A total of 600 infants were recruited postnatally. These mothers were enrolled independently of the SCOPE study. The sole inclusion criterion in stream 2 was a healthy term infant on the postnatal ward. Stream 1 and stream 2 infants were assessed and followed up in an identical fashion at birth and thereafter clinic visits at 2, 6, and 12 months. All infants had assessment at birth, 2 months, 6 months, and 12 months involving parental questionnaires and physical assessment. Parental questionnaires at 2, 6, and 12 months contained specific screening questions for AD. Experienced health care personnel diagnosed AD at 6 and 12 months in accordance with the UK Working Party diagnostic criteria.29-31 If AD was present, severity was assessed at these timepoints using the SCORing Atopic Dermatitis (SCORAD) clinical tool.32,33 At 12 months, the Nottingham Severity Score was also recorded to assess the severity of AD.34 TEWL measurements: TEWL measurements were carried out using a widely validated open chamber system (Tewameter TM 300; Courage+Khazaka Electronic, Cologne, Germany). For newborns, TEWL was taken in the Cork University Maternity Hospital. The subject's arm was acclimated before measurement by exposing the arm in a nonenvironmentally controlled room for 10 minutes. This was typically done in the cot beside the mother's bed while an interview with the mother or parents was carried out. The infant was then brought to a windowless room in which both temperature and humidity were maintained constant by an air conditioning system. Temperature was set between 20°C and 25°C. Humidity was monitored by a manometer in the room and was maintained between 30% and 45%. TEWL was taken on the lower volar surface of the forearm by applying a probe to the exposed volar skin for approximately 15 seconds until the measurement was recorded. Three readings were taken, and the mean of the 3 readings was recorded. 
For TEWL readings at other timepoints, the same procedure was carried out at 2 months and 6 months in an environmentally controlled room in the Health Research Board Discovery Centre, the clinical research facility for children at Cork University Hospital. The parents were advised not to apply emollients to the infant's skin for 12 hours before the reading was taken. Statistical analysis: A series of logistic regression (LR) models was used to estimate the factors at (1) 2 days and (2) 2 months that influence the diagnosis of AD at 12 months. The dependent variable that measures diagnosis at 12 months is AD “Yes” and is equal to 1, with AD “No” = 0. To examine the influence of each variable, with and without the presence of other significant predictors, we carried out both univariable and multivariable analyses. With regard to the latter, 3 models were produced. Model 1 included parental atopy, FLG mutation status, and presence of an itchy rash at 2 months as independent variables. Model 2 included parental atopy and TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3, respectively) as independent variables, but did not include FLG mutation status. This was done to determine the unique influence of TEWL, without the contribution of FLG mutations. Model 3 included all independent variables: parental atopy, TEWL test reading scores, and FLG mutation status. All models controlled for the variables “use of emollient before test reading,” “presence of an itchy rash,” “infant sex,” and “infant birthweight.” Receiver operating characteristic (ROC) curve output, which plots the true positives (sensitivity) vs false positives (1 − specificity), was then evaluated to compare the ability of the 3 models to produce a prognosis with high accuracy. Our rationale for variable selection was to include those predictors necessary for face validity but only if they were significant at a .05 level in the univariable analysis or if they altered the coefficient of the main variable by more than 10% in cases in which the main association was significant.35,36 Results: A total of 1903 infants were enrolled onto the study. The demographic details of the population studied are presented in Table I. Filaggrin mutation typing Stream 1 infants had cord blood samples taken at birth. Infants without cord blood samples had Oragene saliva samples taken. Those infants still enrolled in the study at 2 years had EDTA blood samples taken. FLG genotyping was carried out in 1300 infants with available DNA. The cumulative FLG mutation rate was 10.46% (136 of 1300). Four infants were homozygous for FLG mutation, with the remainder heterozygous for FLG mutation (Table II). Stream 1 infants had cord blood samples taken at birth. Infants without cord blood samples had Oragene saliva samples taken. Those infants still enrolled in the study at 2 years had EDTA blood samples taken. FLG genotyping was carried out in 1300 infants with available DNA. The cumulative FLG mutation rate was 10.46% (136 of 1300). Four infants were homozygous for FLG mutation, with the remainder heterozygous for FLG mutation (Table II). TEWL through early infancy TEWL was taken in the early newborn period in 1691 of 1903 (88.86%) infants of the cohort. The mean TEWL for newborns was 7.32 ± 3.33 gwater/m2/h. Using univariable analysis, we found that there was no significant association between TEWL measurements at birth and sex, gestation, or postnatal age at measurement or recruitment stream 1 or 2 (data not shown). 
A total of 1614 of 1638 (98.5%) infants who attended the 2-month appointment had TEWL measured (98.5%). Mean 2-month TEWL was 10.97 ± 7.98 gwater/m2/h. A total of 1516 of 1537 (98.6%) infants who attended the 6-month appointment had TEWL measured. Mean 6-month TEWL was 10.71 ± 7.10 gwater/m2/h. TEWL measurements by quartiles are presented in Table III. TEWL was taken in the early newborn period in 1691 of 1903 (88.86%) infants of the cohort. The mean TEWL for newborns was 7.32 ± 3.33 gwater/m2/h. Using univariable analysis, we found that there was no significant association between TEWL measurements at birth and sex, gestation, or postnatal age at measurement or recruitment stream 1 or 2 (data not shown). A total of 1614 of 1638 (98.5%) infants who attended the 2-month appointment had TEWL measured (98.5%). Mean 2-month TEWL was 10.97 ± 7.98 gwater/m2/h. A total of 1516 of 1537 (98.6%) infants who attended the 6-month appointment had TEWL measured. Mean 6-month TEWL was 10.71 ± 7.10 gwater/m2/h. TEWL measurements by quartiles are presented in Table III. Clinical diagnosis of AD AD was screened for at 6- and 12-month appointments. This diagnosis of AD was made according to the UK Working Party diagnostic criteria. At 6 months, 18.7% (299 of 1597) of the infants screened were diagnosed with AD. A total of 287 had a SCORAD completed, with a mean SCORAD score of 21.54 ± 16.29 (range, 0-88). At 12 months, 15.53% (232 of 1494) of the infants screened were diagnosed with AD. The mean SCORAD score at 12 months in affected children was 18.56 ± 14.92 (range, 0-77). AD was screened for at 6- and 12-month appointments. This diagnosis of AD was made according to the UK Working Party diagnostic criteria. At 6 months, 18.7% (299 of 1597) of the infants screened were diagnosed with AD. A total of 287 had a SCORAD completed, with a mean SCORAD score of 21.54 ± 16.29 (range, 0-88). At 12 months, 15.53% (232 of 1494) of the infants screened were diagnosed with AD. The mean SCORAD score at 12 months in affected children was 18.56 ± 14.92 (range, 0-77). LR analysis: Factors influencing the development of AD at 1 year Our primary outcome was the presence of AD at 1 year. We developed a model that explored the relative contributions of FLG mutation status, parental atopy, presence of an itchy rash at 2 months, TEWL at day 2 and month 2, and ΔTEWL (day 2 to month 2 and day 2 to month 6) on AD at 1 year (Table IV). FLG mutation status was not associated with an elevated TEWL at birth. However, it was associated with an elevated TEWL at 2 and 6 months and with elevated ΔTEWL day 2 to 2 months and ΔTEWL day 2 to 6 months. Notably, FLG mutation carriers did not have elevated ΔTEWL 2 months to 6 months, implying that the major changes in skin barrier function in those who develop AD by 12 months start in the first 2 months of life. In our univariable modeling, the odds ratio (OR) for AD at 12 months conferred by both parents being atopic was 2.5 compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1 in this article's Online Repository at www.jacionline.org). LR models were then used to examine the factors at birth and at 2 months that influence a diagnosis of AD at 12 months. We used 3 separate models to explore the relative contribution of parental atopy, FLG mutation status, and TEWL (at day 2 and at 2 months) toward the development of AD at 12 months. All models controlled for the variables use of emollient before test reading, presence of an itchy rash, infant sex, and infant birthweight. 
These covariates were not significant in any of the 3 multivariable models. Our primary outcome was the presence of AD at 1 year. We developed a model that explored the relative contributions of FLG mutation status, parental atopy, presence of an itchy rash at 2 months, TEWL at day 2 and month 2, and ΔTEWL (day 2 to month 2 and day 2 to month 6) on AD at 1 year (Table IV). FLG mutation status was not associated with an elevated TEWL at birth. However, it was associated with an elevated TEWL at 2 and 6 months and with elevated ΔTEWL day 2 to 2 months and ΔTEWL day 2 to 6 months. Notably, FLG mutation carriers did not have elevated ΔTEWL 2 months to 6 months, implying that the major changes in skin barrier function in those who develop AD by 12 months start in the first 2 months of life. In our univariable modeling, the odds ratio (OR) for AD at 12 months conferred by both parents being atopic was 2.5 compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1 in this article's Online Repository at www.jacionline.org). LR models were then used to examine the factors at birth and at 2 months that influence a diagnosis of AD at 12 months. We used 3 separate models to explore the relative contribution of parental atopy, FLG mutation status, and TEWL (at day 2 and at 2 months) toward the development of AD at 12 months. All models controlled for the variables use of emollient before test reading, presence of an itchy rash, infant sex, and infant birthweight. These covariates were not significant in any of the 3 multivariable models. LR modeling of AD risk at 12 months including day 2 TEWL The LR model incorporating day 2 TEWL is summarized in Table V. In our first model, we included parental atopy and FLG mutation status as independent variables and excluded day 2 TEWL. Parental atopy and FLG mutation status significantly increase the likelihood of a positive diagnosis of AD at 12 months by 13.7 and 8.6 times, respectively, compared with infants without FLG mutation and without an atopic parent. The ROC, which plots the true positives (sensitivity) vs false positives (1 −specificity), is 0.8 for this model. Our second model includes day 2 TEWL test reading scores at 25th, 50th, and 75th percentiles as independent variables. We did not include FLG mutation status in this second model. The area under the ROC curve (AUC) remains at 0.8 for this model. Infants with a TEWL reading of 9.0 or above are 7.1 times more likely to be diagnosed with AD at 1 year than are infants with a reading below this point, controlling for all other variables in the model. This suggests that day 2 TEWL can be used as a sole indicator of likelihood of AD at 12 months. Our third model includes all independent variables, including both TEWL and FLG status. The LR model incorporating day 2 TEWL is summarized in Table V. In our first model, we included parental atopy and FLG mutation status as independent variables and excluded day 2 TEWL. Parental atopy and FLG mutation status significantly increase the likelihood of a positive diagnosis of AD at 12 months by 13.7 and 8.6 times, respectively, compared with infants without FLG mutation and without an atopic parent. The ROC, which plots the true positives (sensitivity) vs false positives (1 −specificity), is 0.8 for this model. Our second model includes day 2 TEWL test reading scores at 25th, 50th, and 75th percentiles as independent variables. We did not include FLG mutation status in this second model. 
The area under the ROC curve (AUC) remains at 0.8 for this model. Infants with a TEWL reading of 9.0 or above are 7.1 times more likely to be diagnosed with AD at 1 year than are infants with a reading below this point, controlling for all other variables in the model. This suggests that day 2 TEWL can be used as a sole indicator of likelihood of AD at 12 months. Our third model includes all independent variables, including both TEWL and FLG status. LR modeling of AD risk at 12 months including TEWL at 2 months We repeated the LR approach for month 2 TEWL values, summarized in Table VI. Again, we used 3 models. In the first model, we included parental atopy and FLG mutation status as independent variables but did not include TEWL. Parental atopy and FLG mutation carriage significantly increase the likelihood of a positive AD diagnosis at 12 months by 2.7 and 3.0 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The ROC is 0.66 for this model. In the second model, we included the month 2 TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3 gwater/m2/h, respectively) as independent variables. We did not include FLG mutation status in this second model, as explained above. The AUC improved from 0.66 to 0.82 in this model. Infants with a TEWL reading of 12.3 or above were 5.6 times more likely to be diagnosed with AD at 12 months than were infants with a reading below this point, controlling for all other variables in the model. Model 3 included all independent variables. In a pattern similar to that found at 2 days, the AUC improved slightly to 0.84, although here again it is important to note that model 2 demonstrates that FLG status need not be measured to produce a prognosis with high accuracy. We modeled these in a traditional stepwise LR analysis. The significant finding was that after controlling for all other possible influencing factors, month 2 TEWL was the strongest independent predictor of AD at 12 months. We repeated the LR approach for month 2 TEWL values, summarized in Table VI. Again, we used 3 models. In the first model, we included parental atopy and FLG mutation status as independent variables but did not include TEWL. Parental atopy and FLG mutation carriage significantly increase the likelihood of a positive AD diagnosis at 12 months by 2.7 and 3.0 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The ROC is 0.66 for this model. In the second model, we included the month 2 TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3 gwater/m2/h, respectively) as independent variables. We did not include FLG mutation status in this second model, as explained above. The AUC improved from 0.66 to 0.82 in this model. Infants with a TEWL reading of 12.3 or above were 5.6 times more likely to be diagnosed with AD at 12 months than were infants with a reading below this point, controlling for all other variables in the model. Model 3 included all independent variables. In a pattern similar to that found at 2 days, the AUC improved slightly to 0.84, although here again it is important to note that model 2 demonstrates that FLG status need not be measured to produce a prognosis with high accuracy. We modeled these in a traditional stepwise LR analysis. The significant finding was that after controlling for all other possible influencing factors, month 2 TEWL was the strongest independent predictor of AD at 12 months. 
Filaggrin mutation typing: Stream 1 infants had cord blood samples taken at birth. Infants without cord blood samples had Oragene saliva samples taken. Those infants still enrolled in the study at 2 years had EDTA blood samples taken. FLG genotyping was carried out in 1300 infants with available DNA. The cumulative FLG mutation rate was 10.46% (136 of 1300). Four infants were homozygous for FLG mutation, with the remainder heterozygous for FLG mutation (Table II). TEWL through early infancy: TEWL was taken in the early newborn period in 1691 of 1903 (88.86%) infants of the cohort. The mean TEWL for newborns was 7.32 ± 3.33 gwater/m2/h. Using univariable analysis, we found that there was no significant association between TEWL measurements at birth and sex, gestation, or postnatal age at measurement or recruitment stream 1 or 2 (data not shown). A total of 1614 of 1638 (98.5%) infants who attended the 2-month appointment had TEWL measured (98.5%). Mean 2-month TEWL was 10.97 ± 7.98 gwater/m2/h. A total of 1516 of 1537 (98.6%) infants who attended the 6-month appointment had TEWL measured. Mean 6-month TEWL was 10.71 ± 7.10 gwater/m2/h. TEWL measurements by quartiles are presented in Table III. Clinical diagnosis of AD: AD was screened for at 6- and 12-month appointments. This diagnosis of AD was made according to the UK Working Party diagnostic criteria. At 6 months, 18.7% (299 of 1597) of the infants screened were diagnosed with AD. A total of 287 had a SCORAD completed, with a mean SCORAD score of 21.54 ± 16.29 (range, 0-88). At 12 months, 15.53% (232 of 1494) of the infants screened were diagnosed with AD. The mean SCORAD score at 12 months in affected children was 18.56 ± 14.92 (range, 0-77). LR analysis: Factors influencing the development of AD at 1 year: Our primary outcome was the presence of AD at 1 year. We developed a model that explored the relative contributions of FLG mutation status, parental atopy, presence of an itchy rash at 2 months, TEWL at day 2 and month 2, and ΔTEWL (day 2 to month 2 and day 2 to month 6) on AD at 1 year (Table IV). FLG mutation status was not associated with an elevated TEWL at birth. However, it was associated with an elevated TEWL at 2 and 6 months and with elevated ΔTEWL day 2 to 2 months and ΔTEWL day 2 to 6 months. Notably, FLG mutation carriers did not have elevated ΔTEWL 2 months to 6 months, implying that the major changes in skin barrier function in those who develop AD by 12 months start in the first 2 months of life. In our univariable modeling, the odds ratio (OR) for AD at 12 months conferred by both parents being atopic was 2.5 compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1 in this article's Online Repository at www.jacionline.org). LR models were then used to examine the factors at birth and at 2 months that influence a diagnosis of AD at 12 months. We used 3 separate models to explore the relative contribution of parental atopy, FLG mutation status, and TEWL (at day 2 and at 2 months) toward the development of AD at 12 months. All models controlled for the variables use of emollient before test reading, presence of an itchy rash, infant sex, and infant birthweight. These covariates were not significant in any of the 3 multivariable models. LR modeling of AD risk at 12 months including day 2 TEWL: The LR model incorporating day 2 TEWL is summarized in Table V. In our first model, we included parental atopy and FLG mutation status as independent variables and excluded day 2 TEWL. 
Parental atopy and FLG mutation status significantly increase the likelihood of a positive diagnosis of AD at 12 months by 13.7 and 8.6 times, respectively, compared with infants without FLG mutation and without an atopic parent. The ROC, which plots the true positives (sensitivity) vs false positives (1 −specificity), is 0.8 for this model. Our second model includes day 2 TEWL test reading scores at 25th, 50th, and 75th percentiles as independent variables. We did not include FLG mutation status in this second model. The area under the ROC curve (AUC) remains at 0.8 for this model. Infants with a TEWL reading of 9.0 or above are 7.1 times more likely to be diagnosed with AD at 1 year than are infants with a reading below this point, controlling for all other variables in the model. This suggests that day 2 TEWL can be used as a sole indicator of likelihood of AD at 12 months. Our third model includes all independent variables, including both TEWL and FLG status. LR modeling of AD risk at 12 months including TEWL at 2 months: We repeated the LR approach for month 2 TEWL values, summarized in Table VI. Again, we used 3 models. In the first model, we included parental atopy and FLG mutation status as independent variables but did not include TEWL. Parental atopy and FLG mutation carriage significantly increase the likelihood of a positive AD diagnosis at 12 months by 2.7 and 3.0 times, respectively, compared with infants without an FLG mutation and without an atopic parent. The ROC is 0.66 for this model. In the second model, we included the month 2 TEWL test reading scores at 25th, 50th, and 75th percentiles (7.0, 9.4, and 12.3 gwater/m2/h, respectively) as independent variables. We did not include FLG mutation status in this second model, as explained above. The AUC improved from 0.66 to 0.82 in this model. Infants with a TEWL reading of 12.3 or above were 5.6 times more likely to be diagnosed with AD at 12 months than were infants with a reading below this point, controlling for all other variables in the model. Model 3 included all independent variables. In a pattern similar to that found at 2 days, the AUC improved slightly to 0.84, although here again it is important to note that model 2 demonstrates that FLG status need not be measured to produce a prognosis with high accuracy. We modeled these in a traditional stepwise LR analysis. The significant finding was that after controlling for all other possible influencing factors, month 2 TEWL was the strongest independent predictor of AD at 12 months. Discussion: This study involves a large, unselected birth cohort study and is the first study of this scale to assess skin barrier function in the newborn period and early infancy. We have shown that changes in skin barrier predate clinical AD with a signal for barrier impairment detected in asymptomatic infants at day 2 and more markedly at 2 months. In our univariable modeling, the OR for AD at 12 months conferred by both parents being atopic was 2.5 compared with an OR of 3.1 for an upper quartile TEWL at 2 months (see Table E1), showing the additional value of an TEWL reading at this stage. These changes are seen in both high-risk and low-risk infants and, crucially, are independent of FLG status. 
Although in our multivariable analysis the AUC improves slightly to 0.83 when all available variables are included, our second model (which excludes FLG genotyping) demonstrates that FLG status need not be measured to produce an accurate 12-month AD prognosis for babies with upper quartile day 2 TEWL readings. Some children will develop AD before 6 months. In our study, we did not formally diagnose AD at 2 months, but all parents were asked about the presence of an itchy rash on the face or skin folds. These infants were designated as having an "itchy rash," which was reported in 8% of all cases at 2 months. There are no widely recognized diagnostic criteria for AD at this age; however, we controlled for an itchy red rash at the 2-month visit in our model. AD was screened for in all participants at 6 and 12 months and formally diagnosed using the UK Working Party criteria. When compared with other studies in the field, our study shows the earliest timepoint for a signal of impaired TEWL in asymptomatic infants: at day 2. A further, stronger signal is seen at 2 months. Previous studies of smaller scale have shown a relationship between FLG mutation status and TEWL.19 However, our study is substantially larger and is not selected for high risk of atopy. In our study, 30% of FLG mutation carriers developed AD by age 12 months and 70% did not, consistent with large previous studies. At birth, there was no difference in mean TEWL reading between the FLG mutation and FLG wild-type groups (7.33 ± 3.62 g water/m2/h vs 7.3 ± 3.38 g water/m2/h). However, by 2 months, FLG mutation-carrying infants had a significantly higher mean TEWL than did FLG wild-type infants. This difference persisted at 6 months. Selecting out children in the upper quartile, regardless of FLG status, allows for a noninvasive, discriminating, and targeted means of predicting AD in the first days of life, allowing possible preventive measures to be put in place. Conversely, selecting out infants in the lower quartile of TEWL, which has a protective effect in relation to AD at 12 months, could be an important piece of positive information for their families and would reduce the "number needed to treat" in a study of potential protective interventions. Detection of an increased TEWL as early as day 2 or at 2 months, before the development of clinical features of AD, is a novel finding in a general pediatric population. Accurate identification of individuals at high risk for AD, using family history in combination with a noninvasive measurement and with the optional addition of FLG mutation status, has great potential for intervention studies. Stratified interventions could be made on the basis of these 2 or 3 variables. Pilot studies for the primary prevention of AD by use of liberal emollients have shown the potential of this approach.24 We believe that the novel findings in this study will facilitate and inform stratification of future intervention studies. A lowest quartile TEWL at birth is protective against AD. A highest quartile TEWL at 2 days and at 2 months is strongly associated with an increased prevalence of AD at 12 months. These changes predate the development of clinically apparent AD. The mechanistic implication of these observations is that abnormalities in the skin barrier predate symptomatic or clinically detectable AD. These changes occur very early after the transition from the intrauterine aqueous environment to the xerotic postnatal environment.
It is notable that this effect is not dependent on FLG loss-of-function mutations but is enhanced by and interactive with these genetic factors, and presumably with environmental influences encountered postnatally. TEWL measurement in early life is therefore an effective tool to detect infants at risk of AD, especially when combined in a model with parental history of atopy and/or FLG mutation status. The obvious next step would be to assess whether intervention studies between birth and age 2 months in those with raised TEWL at birth would be effective in maintaining the skin barrier and reducing the incidence of AD.

Clinical implications: A signal for the development of AD is seen at 2 days and at 2 months in asymptomatic infants. Interventions to potentially prevent AD could be targeted toward such infants.
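To make the modeling workflow described above concrete, the following is a minimal sketch of fitting the three logistic regression variants and comparing their AUCs. It uses simulated data and hypothetical column names (ad_12m, tewl_day2, parental_atopy, flg_mutation); the scikit-learn workflow is an illustrative assumption, not the BASELINE study's actual analysis code.

```python
# Illustrative sketch only: simulated data, assumed column names and workflow.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1500
df = pd.DataFrame({
    "tewl_day2": rng.gamma(shape=4.0, scale=1.8, size=n),   # skewed, TEWL-like values
    "parental_atopy": rng.integers(0, 2, size=n),            # 0/1 indicator
    "flg_mutation": rng.binomial(1, 0.10, size=n),            # ~10% carriage, as in the cohort
})

# Simulate a 12-month AD outcome loosely driven by the predictors (toy data only).
p_ad = 1.0 / (1.0 + np.exp(-(-3.0 + 0.15 * df["tewl_day2"]
                             + 0.9 * df["parental_atopy"]
                             + 1.0 * df["flg_mutation"])))
df["ad_12m"] = rng.binomial(1, p_ad.to_numpy())

# Upper-quartile indicator for day 2 TEWL, analogous to the quartile analysis above.
df["tewl_day2_upper_q"] = (df["tewl_day2"] >= df["tewl_day2"].quantile(0.75)).astype(int)

def fit_and_score(predictors):
    """Fit a logistic regression for AD at 12 months and return the ROC AUC."""
    X, y = df[predictors], df["ad_12m"]
    model = LogisticRegression().fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

for label, cols in [
    ("parental atopy + FLG (no TEWL)", ["parental_atopy", "flg_mutation"]),
    ("parental atopy + upper-quartile day 2 TEWL (no FLG)", ["parental_atopy", "tewl_day2_upper_q"]),
    ("all three predictors", ["parental_atopy", "flg_mutation", "tewl_day2_upper_q"]),
]:
    print(f"{label}: AUC = {fit_and_score(cols):.2f}")
```

In the study itself, the corresponding models also adjusted for emollient use, itchy rash, infant sex, and birthweight, and discrimination was assessed on the observed cohort rather than simulated data.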
Background: Loss-of-function mutations in the skin barrier protein filaggrin (FLG) are a major risk factor for atopic dermatitis (AD). The pathogenic sequence of disturbances in skin barrier function before or during the early development of AD is not fully understood. A more detailed understanding of these events is needed to develop a clearer picture of disease pathogenesis. A robust, noninvasive test to identify babies at high risk of AD would be important in planning early intervention and/or prevention studies.

Methods: A total of 1903 infants were enrolled in the Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints (BASELINE) Birth Cohort study from July 2009 to October 2011. Measurements of TEWL were made at birth (day 2) and at 2 and 6 months. The presence of AD was ascertained at 6 and 12 months, and disease severity was assessed by using the SCORing Atopic Dermatitis clinical tool at 6 months and by using both the SCORing Atopic Dermatitis clinical tool and the Nottingham Severity Score at 12 months. A total of 1300 infants were genotyped for FLG mutations.

Results: At 6 months, 18.7% of the children had AD, and at 12 months, 15.53% had AD. In a logistic regression model, day 2 upper quartile TEWL measurement was significantly predictive of AD at 12 months (area under the receiver operating characteristic curve, 0.81; P < .05). Lowest quartile day 2 TEWL was protective against AD at 12 months. An upper quartile 2-month TEWL was also strongly predictive of AD at 12 months (area under the receiver operating characteristic curve, 0.84; P < .05). At both ages, this effect was independent of parental atopy, FLG status, or report of an itchy flexural rash at 2 months. Associations were increased when parental atopy status or child FLG mutation status was added into the logistic regression model.

Conclusions: Impairment of skin barrier function at birth and at 2 months precedes clinical AD. In addition to providing important mechanistic insights into disease pathogenesis, these findings have implications for the optimal timing of interventions for the prevention of AD.
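As a companion to the quartile-based results above, here is a small sketch of how an unadjusted odds ratio for upper-quartile TEWL versus AD at 12 months could be computed from a 2 × 2 table. The counts are invented, and SciPy's Fisher exact test is used only as one convenient way to obtain a sample odds ratio and p value; the study's reported ORs come from its own regression models, not from this calculation.

```python
# Illustrative only: invented counts, not study data.
import numpy as np
from scipy.stats import fisher_exact

# Rows: upper-quartile TEWL vs lower three quartiles; columns: AD at 12 months yes / no.
table = np.array([[90, 310],
                  [140, 1060]])

odds_ratio, p_value = fisher_exact(table)
print(f"sample OR = {odds_ratio:.2f}, p = {p_value:.3g}")

# The same sample odds ratio by hand: (a*d) / (b*c)
a, b = table[0]
c, d = table[1]
print("by hand:", (a * d) / (b * c))
```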
null
null
7,641
401
[ 370, 248, 338, 84, 165, 123, 311, 226, 292 ]
12
[ "months", "tewl", "ad", "flg", "12", "infants", "12 months", "model", "mutation", "flg mutation" ]
[ "term infant postnatal", "cork university maternity", "birth cohort study", "cork baseline birth", "infants included study" ]
null
null
null
null
[CONTENT] Infant | skin barrier | TEWL | atopic dermatitis | filaggrin | predictor | biomarker | AD, Atopic dermatitis | AUC, Area under the ROC curve | BASELINE, Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints | FLG, Filaggrin gene | LR, Logistic regression | OR, Odds ratio | ROC, Receiver operating characteristic | SCOPE, Screening for Pregnancy Endpoints | TEWL, Transepidermal water loss | ΔTEWL, Change in TEWL measurement between one timepoint and another [SUMMARY]
[CONTENT] Infant | skin barrier | TEWL | atopic dermatitis | filaggrin | predictor | biomarker | AD, Atopic dermatitis | AUC, Area under the ROC curve | BASELINE, Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints | FLG, Filaggrin gene | LR, Logistic regression | OR, Odds ratio | ROC, Receiver operating characteristic | SCOPE, Screening for Pregnancy Endpoints | TEWL, Transepidermal water loss | ΔTEWL, Change in TEWL measurement between one timepoint and another [SUMMARY]
null
[CONTENT] Infant | skin barrier | TEWL | atopic dermatitis | filaggrin | predictor | biomarker | AD, Atopic dermatitis | AUC, Area under the ROC curve | BASELINE, Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional Endpoints | FLG, Filaggrin gene | LR, Logistic regression | OR, Odds ratio | ROC, Receiver operating characteristic | SCOPE, Screening for Pregnancy Endpoints | TEWL, Transepidermal water loss | ΔTEWL, Change in TEWL measurement between one timepoint and another [SUMMARY]
null
null
[CONTENT] DNA Mutational Analysis | Dermatitis, Atopic | Female | Filaggrin Proteins | Genotype | Humans | Infant | Infant, Newborn | Intermediate Filament Proteins | Male | Prognosis | Risk | Skin | Time Factors [SUMMARY]
[CONTENT] DNA Mutational Analysis | Dermatitis, Atopic | Female | Filaggrin Proteins | Genotype | Humans | Infant | Infant, Newborn | Intermediate Filament Proteins | Male | Prognosis | Risk | Skin | Time Factors [SUMMARY]
null
[CONTENT] DNA Mutational Analysis | Dermatitis, Atopic | Female | Filaggrin Proteins | Genotype | Humans | Infant | Infant, Newborn | Intermediate Filament Proteins | Male | Prognosis | Risk | Skin | Time Factors [SUMMARY]
null
null
[CONTENT] term infant postnatal | cork university maternity | birth cohort study | cork baseline birth | infants included study [SUMMARY]
[CONTENT] term infant postnatal | cork university maternity | birth cohort study | cork baseline birth | infants included study [SUMMARY]
null
[CONTENT] term infant postnatal | cork university maternity | birth cohort study | cork baseline birth | infants included study [SUMMARY]
null
null
[CONTENT] months | tewl | ad | flg | 12 | infants | 12 months | model | mutation | flg mutation [SUMMARY]
[CONTENT] months | tewl | ad | flg | 12 | infants | 12 months | model | mutation | flg mutation [SUMMARY]
null
[CONTENT] months | tewl | ad | flg | 12 | infants | 12 months | model | mutation | flg mutation [SUMMARY]
null
null
[CONTENT] months | study | recruited | birth | cork | 12 | room | variable | scope | tewl [SUMMARY]
[CONTENT] tewl | months | model | flg | ad | mutation | infants | day | flg mutation | month [SUMMARY]
null
[CONTENT] months | tewl | ad | flg | infants | model | 12 | mutation | flg mutation | 12 months [SUMMARY]
null
null
[CONTENT] 1903 | the Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional | July 2009 to October 2011 ||| TEWL | day 2 | 2 and 6 months ||| 6 and 12 months | Atopic Dermatitis | 6 months | Atopic Dermatitis | Nottingham Severity Score | 12 months ||| 1300 | FLG [SUMMARY]
[CONTENT] 6 months | 18.7% | 12 months | 15.53% ||| day 2 | TEWL | 12 months | 0.81 ||| day 2 | TEWL | AD | 12 months ||| 2 month | TEWL | 12 months | 0.84 ||| FLG | 2 months ||| FLG [SUMMARY]
null
[CONTENT] FLG ||| ||| ||| ||| 1903 | the Cork Babies After Scope: Evaluating the Longitudinal Impact Using Neurological and Nutritional | July 2009 to October 2011 ||| TEWL | day 2 | 2 and 6 months ||| 6 and 12 months | Atopic Dermatitis | 6 months | Atopic Dermatitis | Nottingham Severity Score | 12 months ||| 1300 | FLG ||| ||| 6 months | 18.7% | 12 months | 15.53% ||| day 2 | TEWL | 12 months | 0.81 ||| day 2 | TEWL | AD | 12 months ||| 2 month | TEWL | 12 months | 0.84 ||| FLG | 2 months ||| FLG ||| 2 months ||| [SUMMARY]
null
In vitro antioxidant and anticancer effects of solvent fractions from Prunella vulgaris var. lilacina.
24206840
Recently, considerable attention has been focused on exploring the potential antioxidant properties of plant extracts or isolated products of plant origin. Prunella vulgaris var. lilacina is widely distributed in Korea, Japan, China, and Europe, and it continues to be used to treat inflammation, eye pain, headache, and dizziness. However, reports on the antioxidant activities of P. vulgaris var. lilacina are limited, particularly concerning the relationship between its phenolic content and antioxidant capacity. In this study, we investigated the antioxidant and anticancer activities of an ethanol extract from P. vulgaris var. lilacina and its fractions.
BACKGROUND
Dried powder of P. vulgaris var. lilacina was extracted with ethanol, and the extract was fractionated to produce the hexane fraction, butanol fraction, chloroform fraction and residual water fraction. The phenolic content was assayed using the Folin-Ciocalteu colorimetric method. Subsequently, the antioxidant activities of the ethanol extract and its fractions were analyzed employing various antioxidant assay methods including DPPH, FRAP, ABTS, SOD activity and production of reactive oxygen species. Additionally, the extract and fractions were assayed for their ability to exert cytotoxic activities on various cancer cells using the MTT assay. We also investigated the expression of genes associated with apoptotic cell death by RT-PCR.
METHODS
The total phenolic contents of the ethanol extract and water fraction of P. vulgaris var. lilacina were 303.66 and 322.80 mg GAE/g dry weight of extract (or fractions), respectively. The ethanol extract and the water fraction of P. vulgaris var. lilacina showed higher antioxidant activity than the other solvent fractions, in line with their higher total phenolic content. Anticancer activity was also tested using the HepG2, HT29, A549, MKN45 and HeLa cancer cell lines. The results clearly demonstrated that the P. vulgaris var. lilacina ethanol extract induced significant cytotoxic effects on the various cancer cell lines, and these effects were stronger than those induced by the P. vulgaris var. lilacina solvent fractions. We also investigated the expression of genes associated with apoptotic cell death and confirmed that the P. vulgaris var. lilacina ethanol extract and water fraction significantly increased the expression of p53, Bax and Fas.
RESULTS
These results suggest that the ethanol extract from P. vulgaris var. lilacina and its fractions could serve as natural sources of antioxidant and anticancer agents for use in food and in the pharmaceutical industry.
CONCLUSIONS
[ "Analysis of Variance", "Animals", "Antineoplastic Agents", "Antioxidants", "Cell Line, Tumor", "Cell Survival", "Humans", "Intracellular Space", "Mice", "Phenols", "Plant Extracts", "Prunella", "Reactive Oxygen Species", "Real-Time Polymerase Chain Reaction" ]
4226201
Background
Oxidative stress is caused by reactive oxygen species (ROS), which are associated with many pathological disorders such as atherosclerosis, diabetes, ageing and cancer [1-3]. In order to protect human beings against oxidative damage, synthetic antioxidants such as BHA and BHT were created due to demand [4]. However, there has been concern regarding the toxicity and carcinogenic effects of synthetic antioxidants [5,6]. Thus, it is important to identify new sources of safe and inexpensive antioxidants of natural origin. Natural antioxidants, especially plant phenolics, flavonoids, tannins and anthocyanidins, are safe and are also bioactive [7]. Therefore, in recent years, considerable attention has been focused on exploring the potential antioxidant properties of plant extracts or isolated products of plant origin [8]. Prunella vulgaris var. lilacina is widely distributed in Korea, Japan, China, and Europe, and it continues to be used to treat inflammation, eye pain, headache, and dizziness [9,10]. It is rich in active compounds known to significantly affect human health, such as triterpenoid, rosmarinic acid, hyperoside, ursolic acid, and flavonoids [11-16]. Furthermore, P. vulgaris var. lilacina has been shown to have anti-allergic, anti-inflammatory, anti-oxidative, anti-microbial, and anti-viral effects [17-19]. However, reports on the antioxidant activities of P. vulgaris var. lilacina are limited, particularly concerning the relationship between its phenolic content and antioxidant capacity. Therefore, the aims of this study were to identify new sources of antioxidants from P. vulgaris var. lilacina extract. Additionally, the effects of the extraction solvent (70% ethanol, hexane, butanol, chloroform, or water) on the total phenolic content and antioxidant activities of P. vulgaris var. lilacina were investigated.
Methods
Reagents: The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA).

Sample preparation and extraction: Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO.

Total phenolic content: The total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).

DPPH radical scavenging activity: Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated in the following way: inhibition (%) = 100 - (Asample/Acontrol) × 100, where Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate.

Ferric-reducing antioxidant power (FRAP) activity: FRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents.

ABTS radical cation scavenging activity: The ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed by RC50.

Superoxide dismutase (SOD) activity: Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 mL of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage of inhibition rate.
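The percent-inhibition formula quoted in the DPPH subsection above translates directly into code. The sketch below applies it to an invented dilution series and estimates RC50 by linear interpolation; the interpolation step is an assumption about how "the absorbance diminished by 50%" might be converted into a concentration, since the Methods do not specify a curve-fitting procedure.

```python
# Sketch of the DPPH percent-inhibition formula and one common way to estimate RC50.
# All absorbance values below are invented for illustration.
import numpy as np

def dpph_inhibition(a_sample: float, a_control: float) -> float:
    """Inhibition (%) = 100 - (A_sample / A_control) * 100, as defined in the Methods."""
    return 100.0 - (a_sample / a_control) * 100.0

a_control = 0.92                                   # DPPH + solvent only
concs = np.array([10, 25, 50, 100, 200])           # ug/mL, hypothetical dilution series
a_samples = np.array([0.85, 0.74, 0.55, 0.33, 0.15])
inhibition = np.array([dpph_inhibition(a, a_control) for a in a_samples])

# RC50: concentration giving 50% inhibition, here by linear interpolation
# (np.interp expects the x values, i.e. inhibition, to be increasing).
rc50 = np.interp(50.0, inhibition, concs)
print(f"RC50 ≈ {rc50:.1f} ug/mL")
```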
Cells and culture: The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2.

Cell cytotoxicity assay: Exponentially growing cells were collected and plated at 5 × 10³ to 1 × 10⁴ cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24].

Intracellular reactive oxygen species (ROS) scavenging activity: For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany).

Real-time reverse transcription polymerase chain reaction analysis (RT-PCR): To determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′-TCT AAC TTG GGG TGG CTT TGT CTT C-3′ (forward), 5′-GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse).

Gas chromatography-mass spectrum analysis (GC-MS): GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples.

Statistical analysis: Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when significance (p < 0.05) was determined, differences between the mean values were identified using Duncan’s multiple range tests.
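The RT-PCR subsection above states that p53, Bax, Bcl-2 and Fas signals were normalized to GAPDH but does not name the normalization algorithm. A common convention for this kind of real-time RT-PCR data is the 2^-ΔΔCt (Livak) method, sketched below with invented Ct values purely for illustration; this is an assumption, not the authors' stated procedure.

```python
# Hedged sketch: 2^-ddCt relative quantification with invented Ct values.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene in treated vs control cells, normalized to a
    reference gene (here GAPDH), using the 2^-ddCt convention."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: hypothetical mean Ct values for p53 in extract-treated vs untreated cells.
fold = relative_expression(ct_target_treated=24.1, ct_ref_treated=17.9,
                           ct_target_control=26.0, ct_ref_control=18.1)
print(f"p53 fold change ≈ {fold:.2f}")   # > 1 would indicate up-regulation
```

A fold change above 1 for p53, Bax or Fas in treated cells would correspond to the increased expression reported for the ethanol extract and water fraction.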
null
null
Conclusions
The present study determined that P. vulgaris var. lilacina extract and its fractions have strong antioxidant and anticancer activities in vitro. The correlation coefficients between antioxidant capacity and phenolic content were very strong, and phenolic compounds were a major contributor to the antioxidant capacities of P. vulgaris var. lilacina. In addition, we confirmed the presence of 7 major components in the ethanol extract of P. vulgaris var. lilacina. However, further studies to elucidate the mechanisms of action of these compounds and to identify the basis of their antioxidative and anticancer activity are underway. On the basis of these results, P. vulgaris var. lilacina appears to be a good source of natural antioxidant and anticancer agents and could be of significance in the food industry and for the control of various human and animal diseases.
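The correlation claim above can be illustrated with a few lines of code. The sketch below computes a Pearson correlation between total phenolic content and one antioxidant readout across the five samples (ethanol extract plus four solvent fractions); all values are invented placeholders, and the paper's own measurements would be needed to reproduce the reported coefficients.

```python
# Illustrative only: placeholder values, not the measurements reported in the paper.
from scipy.stats import pearsonr

# One entry per sample: ethanol extract, hexane, chloroform, butanol, water fraction.
total_phenolics = [300.0, 110.0, 150.0, 230.0, 320.0]   # mg GAE/g (placeholders)
abts_inhibition = [72.0, 20.0, 35.0, 60.0, 80.0]         # % inhibition (placeholders)

r, p = pearsonr(total_phenolics, abts_inhibition)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```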
[ "Background", "Reagents", "Sample preparation and extraction", "Total phenolic content", "DPPH radical scavenging activity", "Ferric-reducing antioxidant power (FRAP) activity", "ABTS radical cation scavenging activity", "Superoxide dismutase (SOD) activity", "Cells and culture", "Cell cytotoxicity assay", "Intracellular reactive oxygen species (ROS) scavenging activity", "Real-time reverse transcription polymerase chain reaction analysis (RT-PCR)", "Gas chromatography-mass spectrum analysis (GC-MS)", "Statistical analysis", "Results and discussion", "Extraction yield", "Total phenolic content", "Antioxidant capacities of prunella vulgaris var. Lilacina", "Intracellular ROS scavenging activity", "Cell cytotoxicity activity", "Real-time RT-PCR analysis", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Oxidative stress is caused by reactive oxygen species (ROS), which are associated with many pathological disorders such as atherosclerosis, diabetes, ageing and cancer [1-3]. In order to protect human beings against oxidative damage, synthetic antioxidants such as BHA and BHT were created due to demand [4]. However, there has been concern regarding the toxicity and carcinogenic effects of synthetic antioxidants [5,6]. Thus, it is important to identify new sources of safe and inexpensive antioxidants of natural origin. Natural antioxidants, especially plant phenolics, flavonoids, tannins and anthocyanidins, are safe and are also bioactive [7]. Therefore, in recent years, considerable attention has been focused on exploring the potential antioxidant properties of plant extracts or isolated products of plant origin [8].\nPrunella vulgaris var. lilacina is widely distributed in Korea, Japan, China, and Europe, and it continues to be used to treat inflammation, eye pain, headache, and dizziness [9,10]. It is rich in active compounds known to significantly affect human health, such as triterpenoid, rosmarinic acid, hyperoside, ursolic acid, and flavonoids [11-16]. Furthermore, P. vulgaris var. lilacina has been shown to have anti-allergic, anti-inflammatory, anti-oxidative, anti-microbial, and anti-viral effects [17-19]. However, reports on the antioxidant activities of P. vulgaris var. lilacina are limited, particularly concerning the relationship between its phenolic content and antioxidant capacity. Therefore, the aims of this study were to identify new sources of antioxidants from P. vulgaris var. lilacina extract. Additionally, the effects of the extraction solvent (70% ethanol, hexane, butanol, chloroform, or water) on the total phenolic content and antioxidant activities of P. vulgaris var. lilacina were investigated.", "The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA).", "Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO.", "The total phenolic contents of P. vulgaris var. 
lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).", "Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated in the following way: inhibition concentration (%) = 100 - (Asample/Acontrol) × 100,\nwhere Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. Tests were carried out in triplicate. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate.", "FRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents.", "The ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed by RC50.", "Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 ml of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. 
SOD activity was expressed as the percentage of inhibition rate.", "The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2.", "Exponentially growing cells were collected and plated at 5 × 103 - 1 × 104 cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24].", "For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany).", "To determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′- TCT AAC TTG GGG TGG CTT TGT CTT C -3′ (forward), 5′- GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse).", "GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples.", "Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when the significance (p < 0.05) was determined, the differences of the mean values were identified using Duncan’s multiple range tests.", " Extraction yield The yield of the extract and each fraction obtained from dry plant material was measured (Additional file 1: Table S1). 
The highest solid residue yields were obtained using butanol as the extraction solvent.\nThe yield of the extract and each fraction obtained from dry plant material was measured (Additional file 1: Table S1). The highest solid residue yields were obtained using butanol as the extraction solvent.\n Total phenolic content The total phenolic content of the P. vulgaris var. lilacina extract and its fractions were determined through a linear gallic acid standard curve and expressed as mg GAE/g dry weight of extract (or fractions). As shown Table 1, the total phenolic content of all fractions from P. vulgaris var. lilacina varied from 109.31 to 322.80 mg GAE/g. The highest total phenolic content was detected in the water fraction (322.80 ± 15.12 mg GAE/g), whereas the lowest content was found in the hexane fraction (109.31 ± 4.08 mg GAE/g). Phenolic compounds are reported to be associated with antioxidant activity, anticancer effects, and other biological functions and may prevent the development of aging and disease [25]. These results suggest that P. vulgaris var. lilacina extracts might have high antioxidant and anticancer activities.\nTotal phenolic contents of various solvent fractions obtained from the ethanol extract of Prunella vulgaris var. lilacina\n1)Total phenolic content expressed in mg of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\nThe total phenolic content of the P. vulgaris var. lilacina extract and its fractions were determined through a linear gallic acid standard curve and expressed as mg GAE/g dry weight of extract (or fractions). As shown Table 1, the total phenolic content of all fractions from P. vulgaris var. lilacina varied from 109.31 to 322.80 mg GAE/g. The highest total phenolic content was detected in the water fraction (322.80 ± 15.12 mg GAE/g), whereas the lowest content was found in the hexane fraction (109.31 ± 4.08 mg GAE/g). Phenolic compounds are reported to be associated with antioxidant activity, anticancer effects, and other biological functions and may prevent the development of aging and disease [25]. These results suggest that P. vulgaris var. lilacina extracts might have high antioxidant and anticancer activities.\nTotal phenolic contents of various solvent fractions obtained from the ethanol extract of Prunella vulgaris var. lilacina\n1)Total phenolic content expressed in mg of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\n Antioxidant capacities of prunella vulgaris var. Lilacina Results of the radical scavenging capacities determined by DPPH, FRAP, ABTS and SOD assays are shown in Table 2. In the DPPH assay, the DPPH radical scavenging activity of all fractions from P. vulgaris var. lilacina extract increased as shown in Table 2; the RC50 values of radical scavenging activity for DPPH were found to be 73.05 ± 10.32, 1402.96 ± 194.46, 83.52 ± 7.01, 521.58 ± 8.10, 64.26 ± 2.22, and 12.82 ± 1.33 μg/mL for ethanol extract, hexane, butanol, chloroform, water fractions and α-tocopherol, respectively. The water fraction showed the highest DPPH radical-scavenging activity. The DPPH scavenging activity of all fractions showed a similar trend to the content of total phenolic compounds. 
The FRAP assay measures total antioxidant activity based on the reduction of the ferric tripyridyltriazine (Fe3+-TPTZ) complex to the ferrous form. The ferric complex reducing abilities of different fractions were similar to the results obtained for the radical scavenging assay; water fraction exhibited very strong ferric ion reducing activity, and the five fractions in descending order of strength of ferric ion reducing activity were water fraction > ethanol extract > butanol > chloroform > hexane fraction. In terms of the ABTS assay, the water fraction demonstrated the highest scavenging activity, followed by the ethanol extract, butanol fraction, chloroform fraction and hexane fraction, and these trends was similar to those of the DPPH and FRAP assays. For the results of the SOD activity assay, most fractions with the exception of hexane exhibited very high SOD activity similar to α-tocopherol (99.64 ± 2.45%). However, there was no significant different among the fractions.\nTotal antioxidant capacities of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)Concentration of test sample required to produce 50% inhibition of the DPPH radical.\n2)Expressed as mmol of Fe2+ equivalents per gram of dry plant weight.\n3)Concentration of test sample required to produce 50% inhibition of ABTS radical.\n4)Expressed as the superoxide inhibition rate of Prunella vulgaris var. lilacina ethanol extract and its fractions.\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\nDPPH, FRAP, ABTS and SOD assays are widely used to determine the antioxidant capacity of plant extracts due to their simplicity, stability, and reproducibility [26]. In this study, the DPPH, FRAP, ABTS and SOD assays provided comparable results for the antioxidant capacity measured in P. vulgaris var. lilacina extract and its fractions. The P. vulgaris var. lilacina extract and its fractions exhibited strong antioxidant activities against various oxidative systems in vitro. The strong antioxidant activity of a plant extract is correlated with a high content of total phenols [27,28]. In our research, we observed that the P. vulgaris var. lilacina extract and its fractions that contained higher phenol content exerted stronger radical scavenging effects (Table 3). The correlations between the antioxidant assays, such as DPPH, FRAP, ABTS and SOD activity and phenolic content, were highly positive (0.759 < │r│ < 0.993, p < 0.01), indicating that the four assays provided comparable values when they were used to estimate the antioxidant capacity of P. vulgaris var. lilacina extract. Many studies have shown a good positive linear correlation between antioxidant capacity and the total phenolic content of spices, medicinal herbs, and other dietary plants. Moreover, these results have also suggested that phenolic compounds are responsible for their antioxidant capacity [29-31].\nCorrelation coefficients between the antioxidant capacity and phenolic content of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)TPC: total phenolic content, DPPH: DPPH radical scavenging activity, FRAP: ferric reducing antioxidant powers, ABTS: ABTS radical scavenging activity, SOD: Superoxide dismutase activity. *p<0.05, **p<0.01.\nResults of the radical scavenging capacities determined by DPPH, FRAP, ABTS and SOD assays are shown in Table 2. In the DPPH assay, the DPPH radical scavenging activity of all fractions from P. 
vulgaris var. lilacina extract increased as shown in Table 2; the RC50 values of radical scavenging activity for DPPH were found to be 73.05 ± 10.32, 1402.96 ± 194.46, 83.52 ± 7.01, 521.58 ± 8.10, 64.26 ± 2.22, and 12.82 ± 1.33 μg/mL for ethanol extract, hexane, butanol, chloroform, water fractions and α-tocopherol, respectively. The water fraction showed the highest DPPH radical-scavenging activity. The DPPH scavenging activity of all fractions showed a similar trend to the content of total phenolic compounds. The FRAP assay measures total antioxidant activity based on the reduction of the ferric tripyridyltriazine (Fe3+-TPTZ) complex to the ferrous form. The ferric complex reducing abilities of different fractions were similar to the results obtained for the radical scavenging assay; water fraction exhibited very strong ferric ion reducing activity, and the five fractions in descending order of strength of ferric ion reducing activity were water fraction > ethanol extract > butanol > chloroform > hexane fraction. In terms of the ABTS assay, the water fraction demonstrated the highest scavenging activity, followed by the ethanol extract, butanol fraction, chloroform fraction and hexane fraction, and these trends was similar to those of the DPPH and FRAP assays. For the results of the SOD activity assay, most fractions with the exception of hexane exhibited very high SOD activity similar to α-tocopherol (99.64 ± 2.45%). However, there was no significant different among the fractions.\nTotal antioxidant capacities of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)Concentration of test sample required to produce 50% inhibition of the DPPH radical.\n2)Expressed as mmol of Fe2+ equivalents per gram of dry plant weight.\n3)Concentration of test sample required to produce 50% inhibition of ABTS radical.\n4)Expressed as the superoxide inhibition rate of Prunella vulgaris var. lilacina ethanol extract and its fractions.\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\nDPPH, FRAP, ABTS and SOD assays are widely used to determine the antioxidant capacity of plant extracts due to their simplicity, stability, and reproducibility [26]. In this study, the DPPH, FRAP, ABTS and SOD assays provided comparable results for the antioxidant capacity measured in P. vulgaris var. lilacina extract and its fractions. The P. vulgaris var. lilacina extract and its fractions exhibited strong antioxidant activities against various oxidative systems in vitro. The strong antioxidant activity of a plant extract is correlated with a high content of total phenols [27,28]. In our research, we observed that the P. vulgaris var. lilacina extract and its fractions that contained higher phenol content exerted stronger radical scavenging effects (Table 3). The correlations between the antioxidant assays, such as DPPH, FRAP, ABTS and SOD activity and phenolic content, were highly positive (0.759 < │r│ < 0.993, p < 0.01), indicating that the four assays provided comparable values when they were used to estimate the antioxidant capacity of P. vulgaris var. lilacina extract. Many studies have shown a good positive linear correlation between antioxidant capacity and the total phenolic content of spices, medicinal herbs, and other dietary plants. 
Moreover, these results have also suggested that phenolic compounds are responsible for their antioxidant capacity [29-31].\nCorrelation coefficients between the antioxidant capacity and phenolic content of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)TPC: total phenolic content, DPPH: DPPH radical scavenging activity, FRAP: ferric reducing antioxidant powers, ABTS: ABTS radical scavenging activity, SOD: Superoxide dismutase activity. *p<0.05, **p<0.01.\n Intracellular ROS scavenging activity To investigate the intracellular levels of ROS, the cell-permeable probe DCF-DA was utilized. Non-fluorescent DCF-DA, hydrolyzed to DCFH inside the cells, yields highly fluorescent DCF-DA in the presence of intracellular hydrogen peroxide and related peroxides [32]. We examined whether P. vulgaris var. lilacina extract and its fractions inhibited LPS-induced ROS generation. As shown in Figure 1, LPS treatment significantly increased ROS formation in RAW264.7 cells as determined by DCF fluorescence. However, treatment with P. vulgaris var. lilacina extract and its fractions blocked LPS-induced ROS generation similar to the results obtained for the antioxidant assays.\nReactive oxygen species scavenging activities of Prunella vulgaris var. lilacina. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, ROS was stained with a DCF-DA for 30 min, and the generation of ROS was analyzed with fluorescence microscopy.\nTo investigate the intracellular levels of ROS, the cell-permeable probe DCF-DA was utilized. Non-fluorescent DCF-DA, hydrolyzed to DCFH inside the cells, yields highly fluorescent DCF-DA in the presence of intracellular hydrogen peroxide and related peroxides [32]. We examined whether P. vulgaris var. lilacina extract and its fractions inhibited LPS-induced ROS generation. As shown in Figure 1, LPS treatment significantly increased ROS formation in RAW264.7 cells as determined by DCF fluorescence. However, treatment with P. vulgaris var. lilacina extract and its fractions blocked LPS-induced ROS generation similar to the results obtained for the antioxidant assays.\nReactive oxygen species scavenging activities of Prunella vulgaris var. lilacina. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, ROS was stained with a DCF-DA for 30 min, and the generation of ROS was analyzed with fluorescence microscopy.\n Cell cytotoxicity activity In order to evaluate the cytotoxic effects of all samples, we performed a preliminary cytotoxicity study with RAW264.7 cells exposed to various sample concentrations (10, 50, or 100 μg/mL). The P. vulgaris var. lilacina ethanol extract (at 50 and 100 μg/mL), hexane fraction (at 100 μg/mL) and chloroform fraction (at 50 and 100 μg/mL) inhibited cell proliferation, but did not at a concentration of 10 μg/mL (Figure 2). Conversely, the groups treated with 10 μg/mL of ethanol extract or butanol fraction or treated with 10, 50 and 100 μg/mL of the water fraction showed a proliferative effect of over 10%. Macrophages are specialized phagocytic cells that attack foreign substances and cancer cells through destruction and ingestion. They also stimulate lymphocytes and other immune cells to respond to pathogens [33]. These results suggest that the ethanol extract, butanol and water fractions of P. vulgaris var. lilacina can be used in the treatment of cancer. 
Based on this result, we determined the appropriate concentration to be 10 μg/mL.\nEffects of Prunella vulgaris var. lilacina on RAW264.7 cells as determined by the MTT assay. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at different concentrations (10, 50 and 100 μg/mL). After treatment for 24 h, cell viability was measured with the MTT assay. Values are the mean ± SEM; different marks within treatments indicate significant differences at *p < 0.05 compared to the PBS group.\nInvolvement of free radical-mediated cell damage in many different diseases, particularly cancer, led us to evaluate the cytotoxic activities of the ethanol extract and the water fraction of P. vulgaris var. lilacina against five human cancer cell lines (liver, HepG2; colon, HT29; lung, A549; stomach, MKN-45; and cervical, HeLa) (Figure 3). The ethanol extract and the water fraction of P. vulgaris var. lilacina were most effective on A549 out of all the cancer cell lines; their values were 32.4 and 28.7% at 10 μg/mL, respectively.\nThe cell cytotoxicity of the ethanol extract and water fraction from Prunella vulgaris var. lilacina against various cancer cell lines. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, cell viability was measured with the MTT assay. Values are the means of three determinations ± SEM. The different letters indicate a significant difference of p < 0.05.\nIn the present study, the results clearly demonstrate that the P. vulgaris var. lilacina ethanol extract induced significant cytotoxic effects on the various cancer cell lines studied, and these effects were stronger than those of the P. vulgaris var. lilacina solvent fractions. It may be difficult to determine the contribution of individual components to the overall anticancer effects. In the literature, it has been reported that P. vulgaris var. lilacina components such as ursolic acid and rosmarinic acid are responsible for anticancer activities. Woo et al. [34] reported significant apoptogenic activity of 2α,3α-dihydroxyurs-12-ene-28-oic acid in Jurkat T cells. Lee et al. [35] and Hsu et al. [36] explored the cytotoxic effects of ursolic acid. Psotova et al. [37] found that rosmarinic acid from P. vulgaris var. lilacina exhibited strong anticancer activity. In the present study, we have isolated various presumed active compounds from the ethanol extract of P. vulgaris var. lilacina (Figures 4 and 5). The spectrum profile of GC-MS confirmed the presence of 7 major components, which were hexadecanoic acid, ethyl palmitate, linoleic acid, (z,z,z)-9,12,15-octadecatrienoic acid, ethyl (9E,12E)-9,12-octadecadienoate, (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid, and ethyl linoleolate. Lai et al. [38] reported significant antitumor effects of fatty acids such as hexadecanoic acid and ethyl palmitate obtained from plant extracts. Taken together, the anticancer activity of the ethanol extract may be the result of the synergistic effects of various compounds in P. vulgaris var. lilacina, which suggests that P. vulgaris var. lilacina can be used as a biological agent in the treatment of cancer.\nGas chromatogram of the ethanol extract of Prunella vulgaris var. lilacina.\nMS Spectrum of the ethanol extract of Prunella vulgaris var. lilacina. The X-axis and Y-axis of the chromatogram show mass/charge (m/z) and abundance, respectively. 
(A) Hexadecanoic acid; (B) ethyl palmitate; (C) linoleic acid; (D) (z,z,z)-9,12,15-octadecatrienoic acid; (E) ethyl(9E,12E)-9,12-octadecadienoate; (F) (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid; (G) ethyl linoleolate.\nIn order to evaluate the cytotoxic effects of all samples, we performed a preliminary cytotoxicity study with RAW264.7 cells exposed to various sample concentrations (10, 50, or 100 μg/mL). The P. vulgaris var. lilacina ethanol extract (at 50 and 100 μg/mL), hexane fraction (at 100 μg/mL) and chloroform fraction (at 50 and 100 μg/mL) inhibited cell proliferation, but did not at a concentration of 10 μg/mL (Figure 2). Conversely, the groups treated with 10 μg/mL of ethanol extract or butanol fraction or treated with 10, 50 and 100 μg/mL of the water fraction showed a proliferative effect of over 10%. Macrophages are specialized phagocytic cells that attack foreign substances and cancer cells through destruction and ingestion. They also stimulate lymphocytes and other immune cells to respond to pathogens [33]. These results suggest that the ethanol extract, butanol and water fractions of P. vulgaris var. lilacina can be used in the treatment of cancer. Based on this result, we determined the appropriate concentration to be 10 μg/mL.\nEffects of Prunella vulgaris var. lilacina on RAW264.7 cells as determined by the MTT assay. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at different concentrations (10, 50 and 100 μg/mL). After treatment for 24 h, cell viability was measured with the MTT assay. Values are the mean ± SEM; different marks within treatments indicate significant differences at *p < 0.05 compared to the PBS group.\nInvolvement of free radical-mediated cell damage in many different diseases, particularly cancer, led us to evaluate the cytotoxic activities of the ethanol extract and the water fraction of P. vulgaris var. lilacina against five human cancer cell lines (liver, HepG2; colon, HT29; lung, A549; stomach, MKN-45; and cervical, HeLa) (Figure 3). The ethanol extract and the water fraction of P. vulgaris var. lilacina were most effective on A549 out of all the cancer cell lines; their values were 32.4 and 28.7% at 10 μg/mL, respectively.\nThe cell cytotoxicity of the ethanol extract and water fraction from Prunella vulgaris var. lilacina against various cancer cell lines. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/ml. After treatment for 24 h, cell viability was measured with the MTT assay. Values are the means of three determinations ± SEM. The different letters indicate a significant difference of p < 0.05.\nIn the present study, the results clearly demonstrate that the P. vulgaris var. lilacina ethanol extract induced significant cytotoxic effects on the various cancer cell lines studies, and these effects were stronger than for the P. vulgaris var. lilacina solvent fractions. It may be difficult to determine the contribution of individual components on the overall anticancer effects. In the literature, it has been reported that P. vulgaris var. lilacina components such as ursolic acid and rosmarinic acid are responsible for anticancer activities. Woo et al. [34] reported significant apoptogenic activity of 2α,3α-dihydroxyurs-12-ene-28-oic acid in Jurkat T cells. Lee et al. [35] and Hsu et al. [36] explored the cytotoxic effects of ursolic acid. Psotova et al. [37] found that rosmarinic acid from P. vulgaris var. lilacina exhibited strong anticancer activity. 
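The viability and cytotoxicity percentages quoted for the MTT assay above (e.g., 32.4% and 28.7% on A549 at 10 μg/mL) come down to simple arithmetic on 540 nm absorbances relative to the PBS-treated control. The exact normalization used in the study is not stated, so the following is only a minimal sketch with hypothetical absorbance readings.

```python
# Minimal sketch (assumed normalization): percent viability and cytotoxicity
# from MTT absorbances at 540 nm, relative to the PBS-treated control wells.
# All absorbance readings below are hypothetical.
import statistics

a540_pbs_control = [0.82, 0.85, 0.80]    # untreated (PBS) wells, triplicate
a540_treated     = [0.55, 0.57, 0.54]    # extract-treated wells, triplicate

viability_pct    = 100.0 * statistics.mean(a540_treated) / statistics.mean(a540_pbs_control)
cytotoxicity_pct = 100.0 - viability_pct

print(f"viability: {viability_pct:.1f}%, cytotoxicity: {cytotoxicity_pct:.1f}%")
```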
In the present study, we have isolated various presumed active compounds from the ethanol extract of P. vulgaris var. lilacina (Figures 4 and 5). The spectrum profile of GC-MS confirmed the presence of 7 major components, which were hexadecanoic acid, ethyl palmitate, lioleic acid, (z,z,z)-9,12,15-octadecatrienoic acid, ethyl (9E, 12E)-9,12-octadecadienoate, (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid, and ethyl linoleolate. Lai et al. [38] reported significant antitumor effects of fatty acids such as hexadecanoic acid and ethyl palmitate obtained from plant extracts. Taken together, the anticancer activity of ethanol extract may be the result of the synergistic effects of various compounds in P. vulgaris var. lilacina, which suggests that P. vulgaris var. lilacina can be used as a biological agent in the treatment of cancer.\n\nGas chromatogram of the ethanol extract of \n\nPrunella vulgaris \n\nvar. \n\nlilacina\n\n.\n\nMS Spectrum of the ethanol extract of Prunella vulgaris var. lilacina. The X-axis and Y-axis of the chromatogram show mass/charge (m/z) and abundance, respectively. (A) Hexadecanoic acid; (B) ethyl palmitate; (C) linoleic acid; (D) (z,z,z)-9,12,15-octadecatrienoic acid; (E) ethyl(9E,12E)-9,12-octadecadienoate; (F) (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid; (G) ethyl linoleolate.\n Real-time RT-PCR analysis We assessed whether P. vulgaris var. lilacina ethanol extract and the water fraction affected the expression of genes associated with apoptotic cell death, including the tumor suppressor p53,pro-apoptotic Bax, the anti-apoptotic Bcl-2 and Fas genes in A549 cells. p53-mediated apoptosis primarily occurs through the intrinsic apoptotic program [39]. It was reported that p53 induces apoptosis by either increasing transcriptional activity of proapoptotic genes such as Bax or suppressing the activity of the antiapoptotic genes of the Bcl-2 family [40]. Our data show that P. vulgaris var. lilacina ethanol extract and the water fraction significantly increased the expression of p53, Bax and Fas compared to the control. However, the expression of Bcl-2 was not decreased compared to that of the control (Figure 6). Therefore, the treatments altered the expression of Bax/Bcl-2, resulting in a shift in their ratio favoring apoptosis. Several other groups have shown in various cancer cell lines that P. vulgaris var. lilacina can lead to cell death by inducing apoptosis through regulation of p53 and Bax/Bcl-2 expression [41]. In our study, the resulting elevation in p53 and Bax protein expression in lung cancer cells is consistent with our earlier proposed involvement of p53 and Bax-related response systems. Taken together, we suggest that P. vulgaris var. lilacina ethanol extract and water fraction induce apoptosis through the regulation of p53, Bax, and Fas expression.\nmRNA expression of apoptotic genes in A549 cells treated with ethanol extracts and water fractions from Prunella vulgaris var. lilacina. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/ml. Values are the mean ± SEM; different marks within treatments indicate significant differences (*p < 0.05, **p < 0.01, ***p < 0.001).\nWe assessed whether P. vulgaris var. lilacina ethanol extract and the water fraction affected the expression of genes associated with apoptotic cell death, including the tumor suppressor p53,pro-apoptotic Bax, the anti-apoptotic Bcl-2 and Fas genes in A549 cells. 
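The GAPDH-normalized expression changes summarized in Figure 6 are conventionally computed with the 2^(−ΔΔCt) method. The paper does not state which quantification model was used, so the sketch below, with hypothetical Ct values, shows only one common way such fold changes are obtained.

```python
# Minimal sketch (assumed quantification model): relative mRNA expression by
# the 2**(-ddCt) method, normalizing each target gene (p53, Bax, Fas, Bcl-2)
# to GAPDH and then to the untreated control. Ct values are hypothetical.

def fold_change(ct_target, ct_gapdh, ct_target_control, ct_gapdh_control):
    d_ct_treated = ct_target - ct_gapdh                    # normalize treated sample
    d_ct_control = ct_target_control - ct_gapdh_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)                                 # fold change vs control

# e.g. p53 in A549 cells treated with the ethanol extract (hypothetical Ct values)
print(f"p53 fold change: {fold_change(24.1, 17.0, 25.6, 17.1):.2f}")
```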
p53-mediated apoptosis primarily occurs through the intrinsic apoptotic program [39]. It was reported that p53 induces apoptosis by either increasing transcriptional activity of proapoptotic genes such as Bax or suppressing the activity of the antiapoptotic genes of the Bcl-2 family [40]. Our data show that P. vulgaris var. lilacina ethanol extract and the water fraction significantly increased the expression of p53, Bax and Fas compared to the control. However, the expression of Bcl-2 was not decreased compared to that of the control (Figure 6). Therefore, the treatments altered the expression of Bax/Bcl-2, resulting in a shift in their ratio favoring apoptosis. Several other groups have shown in various cancer cell lines that P. vulgaris var. lilacina can lead to cell death by inducing apoptosis through regulation of p53 and Bax/Bcl-2 expression [41]. In our study, the resulting elevation in p53 and Bax protein expression in lung cancer cells is consistent with our earlier proposed involvement of p53 and Bax-related response systems. Taken together, we suggest that P. vulgaris var. lilacina ethanol extract and water fraction induce apoptosis through the regulation of p53, Bax, and Fas expression.\nmRNA expression of apoptotic genes in A549 cells treated with ethanol extracts and water fractions from Prunella vulgaris var. lilacina. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/ml. Values are the mean ± SEM; different marks within treatments indicate significant differences (*p < 0.05, **p < 0.01, ***p < 0.001).", "The yield of the extract and each fraction obtained from dry plant material was measured (Additional file 1: Table S1). The highest solid residue yields were obtained using butanol as the extraction solvent.", "The total phenolic content of the P. vulgaris var. lilacina extract and its fractions were determined through a linear gallic acid standard curve and expressed as mg GAE/g dry weight of extract (or fractions). As shown Table 1, the total phenolic content of all fractions from P. vulgaris var. lilacina varied from 109.31 to 322.80 mg GAE/g. The highest total phenolic content was detected in the water fraction (322.80 ± 15.12 mg GAE/g), whereas the lowest content was found in the hexane fraction (109.31 ± 4.08 mg GAE/g). Phenolic compounds are reported to be associated with antioxidant activity, anticancer effects, and other biological functions and may prevent the development of aging and disease [25]. These results suggest that P. vulgaris var. lilacina extracts might have high antioxidant and anticancer activities.\nTotal phenolic contents of various solvent fractions obtained from the ethanol extract of Prunella vulgaris var. lilacina\n1)Total phenolic content expressed in mg of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).", "Results of the radical scavenging capacities determined by DPPH, FRAP, ABTS and SOD assays are shown in Table 2. In the DPPH assay, the DPPH radical scavenging activity of all fractions from P. vulgaris var. lilacina extract increased as shown in Table 2; the RC50 values of radical scavenging activity for DPPH were found to be 73.05 ± 10.32, 1402.96 ± 194.46, 83.52 ± 7.01, 521.58 ± 8.10, 64.26 ± 2.22, and 12.82 ± 1.33 μg/mL for ethanol extract, hexane, butanol, chloroform, water fractions and α-tocopherol, respectively. 
The water fraction showed the highest DPPH radical-scavenging activity. The DPPH scavenging activity of all fractions showed a similar trend to the content of total phenolic compounds. The FRAP assay measures total antioxidant activity based on the reduction of the ferric tripyridyltriazine (Fe3+-TPTZ) complex to the ferrous form. The ferric complex reducing abilities of different fractions were similar to the results obtained for the radical scavenging assay; water fraction exhibited very strong ferric ion reducing activity, and the five fractions in descending order of strength of ferric ion reducing activity were water fraction > ethanol extract > butanol > chloroform > hexane fraction. In terms of the ABTS assay, the water fraction demonstrated the highest scavenging activity, followed by the ethanol extract, butanol fraction, chloroform fraction and hexane fraction, and these trends was similar to those of the DPPH and FRAP assays. For the results of the SOD activity assay, most fractions with the exception of hexane exhibited very high SOD activity similar to α-tocopherol (99.64 ± 2.45%). However, there was no significant different among the fractions.\nTotal antioxidant capacities of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)Concentration of test sample required to produce 50% inhibition of the DPPH radical.\n2)Expressed as mmol of Fe2+ equivalents per gram of dry plant weight.\n3)Concentration of test sample required to produce 50% inhibition of ABTS radical.\n4)Expressed as the superoxide inhibition rate of Prunella vulgaris var. lilacina ethanol extract and its fractions.\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\nDPPH, FRAP, ABTS and SOD assays are widely used to determine the antioxidant capacity of plant extracts due to their simplicity, stability, and reproducibility [26]. In this study, the DPPH, FRAP, ABTS and SOD assays provided comparable results for the antioxidant capacity measured in P. vulgaris var. lilacina extract and its fractions. The P. vulgaris var. lilacina extract and its fractions exhibited strong antioxidant activities against various oxidative systems in vitro. The strong antioxidant activity of a plant extract is correlated with a high content of total phenols [27,28]. In our research, we observed that the P. vulgaris var. lilacina extract and its fractions that contained higher phenol content exerted stronger radical scavenging effects (Table 3). The correlations between the antioxidant assays, such as DPPH, FRAP, ABTS and SOD activity and phenolic content, were highly positive (0.759 < │r│ < 0.993, p < 0.01), indicating that the four assays provided comparable values when they were used to estimate the antioxidant capacity of P. vulgaris var. lilacina extract. Many studies have shown a good positive linear correlation between antioxidant capacity and the total phenolic content of spices, medicinal herbs, and other dietary plants. Moreover, these results have also suggested that phenolic compounds are responsible for their antioxidant capacity [29-31].\nCorrelation coefficients between the antioxidant capacity and phenolic content of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)TPC: total phenolic content, DPPH: DPPH radical scavenging activity, FRAP: ferric reducing antioxidant powers, ABTS: ABTS radical scavenging activity, SOD: Superoxide dismutase activity. 
*p<0.05, **p<0.01.", "To investigate the intracellular levels of ROS, the cell-permeable probe DCF-DA was utilized. Non-fluorescent DCF-DA, hydrolyzed to DCFH inside the cells, yields highly fluorescent DCF-DA in the presence of intracellular hydrogen peroxide and related peroxides [32]. We examined whether P. vulgaris var. lilacina extract and its fractions inhibited LPS-induced ROS generation. As shown in Figure 1, LPS treatment significantly increased ROS formation in RAW264.7 cells as determined by DCF fluorescence. However, treatment with P. vulgaris var. lilacina extract and its fractions blocked LPS-induced ROS generation similar to the results obtained for the antioxidant assays.\nReactive oxygen species scavenging activities of Prunella vulgaris var. lilacina. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, ROS was stained with a DCF-DA for 30 min, and the generation of ROS was analyzed with fluorescence microscopy.", "In order to evaluate the cytotoxic effects of all samples, we performed a preliminary cytotoxicity study with RAW264.7 cells exposed to various sample concentrations (10, 50, or 100 μg/mL). The P. vulgaris var. lilacina ethanol extract (at 50 and 100 μg/mL), hexane fraction (at 100 μg/mL) and chloroform fraction (at 50 and 100 μg/mL) inhibited cell proliferation, but did not at a concentration of 10 μg/mL (Figure 2). Conversely, the groups treated with 10 μg/mL of ethanol extract or butanol fraction or treated with 10, 50 and 100 μg/mL of the water fraction showed a proliferative effect of over 10%. Macrophages are specialized phagocytic cells that attack foreign substances and cancer cells through destruction and ingestion. They also stimulate lymphocytes and other immune cells to respond to pathogens [33]. These results suggest that the ethanol extract, butanol and water fractions of P. vulgaris var. lilacina can be used in the treatment of cancer. Based on this result, we determined the appropriate concentration to be 10 μg/mL.\nEffects of Prunella vulgaris var. lilacina on RAW264.7 cells as determined by the MTT assay. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at different concentrations (10, 50 and 100 μg/mL). After treatment for 24 h, cell viability was measured with the MTT assay. Values are the mean ± SEM; different marks within treatments indicate significant differences at *p < 0.05 compared to the PBS group.\nInvolvement of free radical-mediated cell damage in many different diseases, particularly cancer, led us to evaluate the cytotoxic activities of the ethanol extract and the water fraction of P. vulgaris var. lilacina against five human cancer cell lines (liver, HepG2; colon, HT29; lung, A549; stomach, MKN-45; and cervical, HeLa) (Figure 3). The ethanol extract and the water fraction of P. vulgaris var. lilacina were most effective on A549 out of all the cancer cell lines; their values were 32.4 and 28.7% at 10 μg/mL, respectively.\nThe cell cytotoxicity of the ethanol extract and water fraction from Prunella vulgaris var. lilacina against various cancer cell lines. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/ml. After treatment for 24 h, cell viability was measured with the MTT assay. Values are the means of three determinations ± SEM. The different letters indicate a significant difference of p < 0.05.\nIn the present study, the results clearly demonstrate that the P. vulgaris var. 
lilacina ethanol extract induced significant cytotoxic effects on the various cancer cell lines studies, and these effects were stronger than for the P. vulgaris var. lilacina solvent fractions. It may be difficult to determine the contribution of individual components on the overall anticancer effects. In the literature, it has been reported that P. vulgaris var. lilacina components such as ursolic acid and rosmarinic acid are responsible for anticancer activities. Woo et al. [34] reported significant apoptogenic activity of 2α,3α-dihydroxyurs-12-ene-28-oic acid in Jurkat T cells. Lee et al. [35] and Hsu et al. [36] explored the cytotoxic effects of ursolic acid. Psotova et al. [37] found that rosmarinic acid from P. vulgaris var. lilacina exhibited strong anticancer activity. In the present study, we have isolated various presumed active compounds from the ethanol extract of P. vulgaris var. lilacina (Figures 4 and 5). The spectrum profile of GC-MS confirmed the presence of 7 major components, which were hexadecanoic acid, ethyl palmitate, lioleic acid, (z,z,z)-9,12,15-octadecatrienoic acid, ethyl (9E, 12E)-9,12-octadecadienoate, (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid, and ethyl linoleolate. Lai et al. [38] reported significant antitumor effects of fatty acids such as hexadecanoic acid and ethyl palmitate obtained from plant extracts. Taken together, the anticancer activity of ethanol extract may be the result of the synergistic effects of various compounds in P. vulgaris var. lilacina, which suggests that P. vulgaris var. lilacina can be used as a biological agent in the treatment of cancer.\n\nGas chromatogram of the ethanol extract of \n\nPrunella vulgaris \n\nvar. \n\nlilacina\n\n.\n\nMS Spectrum of the ethanol extract of Prunella vulgaris var. lilacina. The X-axis and Y-axis of the chromatogram show mass/charge (m/z) and abundance, respectively. (A) Hexadecanoic acid; (B) ethyl palmitate; (C) linoleic acid; (D) (z,z,z)-9,12,15-octadecatrienoic acid; (E) ethyl(9E,12E)-9,12-octadecadienoate; (F) (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid; (G) ethyl linoleolate.", "We assessed whether P. vulgaris var. lilacina ethanol extract and the water fraction affected the expression of genes associated with apoptotic cell death, including the tumor suppressor p53,pro-apoptotic Bax, the anti-apoptotic Bcl-2 and Fas genes in A549 cells. p53-mediated apoptosis primarily occurs through the intrinsic apoptotic program [39]. It was reported that p53 induces apoptosis by either increasing transcriptional activity of proapoptotic genes such as Bax or suppressing the activity of the antiapoptotic genes of the Bcl-2 family [40]. Our data show that P. vulgaris var. lilacina ethanol extract and the water fraction significantly increased the expression of p53, Bax and Fas compared to the control. However, the expression of Bcl-2 was not decreased compared to that of the control (Figure 6). Therefore, the treatments altered the expression of Bax/Bcl-2, resulting in a shift in their ratio favoring apoptosis. Several other groups have shown in various cancer cell lines that P. vulgaris var. lilacina can lead to cell death by inducing apoptosis through regulation of p53 and Bax/Bcl-2 expression [41]. In our study, the resulting elevation in p53 and Bax protein expression in lung cancer cells is consistent with our earlier proposed involvement of p53 and Bax-related response systems. Taken together, we suggest that P. vulgaris var. 
lilacina ethanol extract and water fraction induce apoptosis through the regulation of p53, Bax, and Fas expression.\nmRNA expression of apoptotic genes in A549 cells treated with ethanol extracts and water fractions from Prunella vulgaris var. lilacina. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/ml. Values are the mean ± SEM; different marks within treatments indicate significant differences (*p < 0.05, **p < 0.01, ***p < 0.001).", "ABTS: 2, 2′-Azinibis 3-ethyl benzothiazoline-6-sulfonic acid; DCF-DA: Dichlorofluorescein diacetate; DMSO: Dimethyl sulfoxide; DPPH: 1, 1-diphenyl-1-picrylhydrazyl; FRAP: Ferric-reducing antioxidant power; GAE: Gallic acid equivalent; LPS: Lipopolysaccharide; ROS: Reactive oxygen species; SOD: Superoxide dismutase; TPTZ: 2, 4, 6-tris(2-pyridyl)-s-triazine.", "The authors declare that they have no competing interests.", "KAH conceived this study and designed the experiments. YJH and EJL performed most of the experiments. All authors including HRK analyzed the data and discussed the results. KAH supervised the project and wrote the manuscript with the help of YJH, EJL and HRK, and all authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6882/13/310/prepub\n" ]
[ "Background", "Methods", "Reagents", "Sample preparation and extraction", "Total phenolic content", "DPPH radical scavenging activity", "Ferric-reducing antioxidant power (FRAP) activity", "ABTS radical cation scavenging activity", "Superoxide dismutase (SOD) activity", "Cells and culture", "Cell cytotoxicity assay", "Intracellular reactive oxygen species (ROS) scavenging activity", "Real-time reverse transcription polymerase chain reaction analysis (RT-PCR)", "Gas chromatography-mass spectrum analysis (GC-MS)", "Statistical analysis", "Results and discussion", "Extraction yield", "Total phenolic content", "Antioxidant capacities of prunella vulgaris var. Lilacina", "Intracellular ROS scavenging activity", "Cell cytotoxicity activity", "Real-time RT-PCR analysis", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "Oxidative stress is caused by reactive oxygen species (ROS), which are associated with many pathological disorders such as atherosclerosis, diabetes, ageing and cancer [1-3]. In order to protect human beings against oxidative damage, synthetic antioxidants such as BHA and BHT were created due to demand [4]. However, there has been concern regarding the toxicity and carcinogenic effects of synthetic antioxidants [5,6]. Thus, it is important to identify new sources of safe and inexpensive antioxidants of natural origin. Natural antioxidants, especially plant phenolics, flavonoids, tannins and anthocyanidins, are safe and are also bioactive [7]. Therefore, in recent years, considerable attention has been focused on exploring the potential antioxidant properties of plant extracts or isolated products of plant origin [8].\nPrunella vulgaris var. lilacina is widely distributed in Korea, Japan, China, and Europe, and it continues to be used to treat inflammation, eye pain, headache, and dizziness [9,10]. It is rich in active compounds known to significantly affect human health, such as triterpenoid, rosmarinic acid, hyperoside, ursolic acid, and flavonoids [11-16]. Furthermore, P. vulgaris var. lilacina has been shown to have anti-allergic, anti-inflammatory, anti-oxidative, anti-microbial, and anti-viral effects [17-19]. However, reports on the antioxidant activities of P. vulgaris var. lilacina are limited, particularly concerning the relationship between its phenolic content and antioxidant capacity. Therefore, the aims of this study were to identify new sources of antioxidants from P. vulgaris var. lilacina extract. Additionally, the effects of the extraction solvent (70% ethanol, hexane, butanol, chloroform, or water) on the total phenolic content and antioxidant activities of P. vulgaris var. lilacina were investigated.", " Reagents The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA).\nThe reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA).\n Sample preparation and extraction Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. 
lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO.\nWhole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO.\n Total phenolic content The total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\nThe total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\n DPPH radical scavenging activity Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. 
Percent reduction of the DPPH radical was calculated in the following way: inhibition concentration (%) = 100 - (Asample/Acontrol) × 100,\nwhere Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. Tests were carried out in triplicate. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate.\nAnalysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated in the following way: inhibition concentration (%) = 100 - (Asample/Acontrol) × 100,\nwhere Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. Tests were carried out in triplicate. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate.\n Ferric-reducing antioxidant power (FRAP) activity FRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents.\nFRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents.\n ABTS radical cation scavenging activity The ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed by RC50.\nThe ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. 
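The percent-inhibition formula given above, together with the RC50 definition, can be applied directly. In the sketch below the concentrations and absorbances are hypothetical, and reading RC50 off by linear interpolation is an assumption rather than the study's stated fitting procedure.

```python
# Minimal sketch: DPPH percent inhibition and RC50.
# inhibition (%) = 100 - (A_sample / A_control) * 100, as in the method above.
import numpy as np

a_control = 0.90                                   # DPPH + reagents, no test sample
conc      = np.array([10.0, 50.0, 100.0, 500.0])   # ug/mL (hypothetical dilution series)
a_sample  = np.array([0.80, 0.62, 0.41, 0.12])     # A518 after 30 min in the dark

inhibition = 100.0 - (a_sample / a_control) * 100.0
rc50 = np.interp(50.0, inhibition, conc)           # concentration giving 50% inhibition

print("inhibition (%):", np.round(inhibition, 1))
print(f"RC50 ~ {rc50:.0f} ug/mL")
```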
The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed by RC50.\n Superoxide dismutase (SOD) activity Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 ml of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage of inhibition rate.\nSuperoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 ml of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage of inhibition rate.\n Cells and culture The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2.\nThe mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2.\n Cell cytotoxicity assay Exponentially growing cells were collected and plated at 5 × 103 - 1 × 104 cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24].\nExponentially growing cells were collected and plated at 5 × 103 - 1 × 104 cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. 
The resulting absorbance was measured at 540 nm [24].\n Intracellular reactive oxygen species (ROS) scavenging activity For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany).\nFor microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany).\n Real-time reverse transcription polymerase chain reaction analysis (RT-PCR) To determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′- TCT AAC TTG GGG TGG CTT TGT CTT C -3′ (forward), 5′- GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse).\nTo determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′- TCT AAC TTG GGG TGG CTT TGT CTT C -3′ (forward), 5′- GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse).\n Gas chromatography-mass spectrum analysis (GC-MS) GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). 
Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples.\nGC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples.\n Statistical analysis Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when the significance (p < 0.05) was determined, the differences of the mean values were identified using Duncan’s multiple range tests.\nStatistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when the significance (p < 0.05) was determined, the differences of the mean values were identified using Duncan’s multiple range tests.", "The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA).", "Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO.", "The total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. 
The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).", "Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated in the following way: inhibition concentration (%) = 100 - (Asample/Acontrol) × 100,\nwhere Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. Tests were carried out in triplicate. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate.", "FRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents.", "The ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed by RC50.", "Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 ml of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage of inhibition rate.", "The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). 
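Quantification against the linear gallic acid standard curve described above amounts to fitting absorbance at 725 nm versus standard concentration and inverting the fit for each sample. Every number in the sketch below, including the dilution factor and the assayed extract concentration, is hypothetical.

```python
# Minimal sketch: total phenolic content from a gallic acid standard curve
# (Folin-Ciocalteu, absorbance at 725 nm). All values are hypothetical.
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # gallic acid, ug/mL
std_abs  = np.array([0.02, 0.15, 0.29, 0.57, 1.10])    # A725 of the standards

slope, intercept = np.polyfit(std_conc, std_abs, 1)    # linear standard curve

a725_sample       = 0.45                               # diluted extract solution
dilution_factor   = 10.0
extract_mg_per_ml = 1.0                                # extract concentration in the assay

gae_ug_per_ml = (a725_sample - intercept) / slope      # back-calculated gallic acid equiv.
# ug GAE per mg extract is numerically the same as mg GAE per g extract
tpc_mg_gae_per_g = gae_ug_per_ml * dilution_factor / extract_mg_per_ml

print(f"TPC ~ {tpc_mg_gae_per_g:.0f} mg GAE/g extract")
```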
The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2.", "Exponentially growing cells were collected and plated at 5 × 103 - 1 × 104 cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24].", "For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany).", "To determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′- TCT AAC TTG GGG TGG CTT TGT CTT C -3′ (forward), 5′- GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse).", "GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples.", "Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when the significance (p < 0.05) was determined, the differences of the mean values were identified using Duncan’s multiple range tests.", " Extraction yield The yield of the extract and each fraction obtained from dry plant material was measured (Additional file 1: Table S1). The highest solid residue yields were obtained using butanol as the extraction solvent.\nThe yield of the extract and each fraction obtained from dry plant material was measured (Additional file 1: Table S1). The highest solid residue yields were obtained using butanol as the extraction solvent.\n Total phenolic content The total phenolic content of the P. vulgaris var. 
lilacina extract and its fractions were determined through a linear gallic acid standard curve and expressed as mg GAE/g dry weight of extract (or fractions). As shown Table 1, the total phenolic content of all fractions from P. vulgaris var. lilacina varied from 109.31 to 322.80 mg GAE/g. The highest total phenolic content was detected in the water fraction (322.80 ± 15.12 mg GAE/g), whereas the lowest content was found in the hexane fraction (109.31 ± 4.08 mg GAE/g). Phenolic compounds are reported to be associated with antioxidant activity, anticancer effects, and other biological functions and may prevent the development of aging and disease [25]. These results suggest that P. vulgaris var. lilacina extracts might have high antioxidant and anticancer activities.\nTotal phenolic contents of various solvent fractions obtained from the ethanol extract of Prunella vulgaris var. lilacina\n1)Total phenolic content expressed in mg of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\nThe total phenolic content of the P. vulgaris var. lilacina extract and its fractions were determined through a linear gallic acid standard curve and expressed as mg GAE/g dry weight of extract (or fractions). As shown Table 1, the total phenolic content of all fractions from P. vulgaris var. lilacina varied from 109.31 to 322.80 mg GAE/g. The highest total phenolic content was detected in the water fraction (322.80 ± 15.12 mg GAE/g), whereas the lowest content was found in the hexane fraction (109.31 ± 4.08 mg GAE/g). Phenolic compounds are reported to be associated with antioxidant activity, anticancer effects, and other biological functions and may prevent the development of aging and disease [25]. These results suggest that P. vulgaris var. lilacina extracts might have high antioxidant and anticancer activities.\nTotal phenolic contents of various solvent fractions obtained from the ethanol extract of Prunella vulgaris var. lilacina\n1)Total phenolic content expressed in mg of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions).\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\n Antioxidant capacities of prunella vulgaris var. Lilacina Results of the radical scavenging capacities determined by DPPH, FRAP, ABTS and SOD assays are shown in Table 2. In the DPPH assay, the DPPH radical scavenging activity of all fractions from P. vulgaris var. lilacina extract increased as shown in Table 2; the RC50 values of radical scavenging activity for DPPH were found to be 73.05 ± 10.32, 1402.96 ± 194.46, 83.52 ± 7.01, 521.58 ± 8.10, 64.26 ± 2.22, and 12.82 ± 1.33 μg/mL for ethanol extract, hexane, butanol, chloroform, water fractions and α-tocopherol, respectively. The water fraction showed the highest DPPH radical-scavenging activity. The DPPH scavenging activity of all fractions showed a similar trend to the content of total phenolic compounds. The FRAP assay measures total antioxidant activity based on the reduction of the ferric tripyridyltriazine (Fe3+-TPTZ) complex to the ferrous form. 
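Expressing FRAP as mmol Fe2+ equivalents per gram of dry weight (Table 2, footnote 2) follows the same standard-curve logic, this time against FeSO4 standards read at 593 nm. The sketch below uses hypothetical values and omits the reaction-volume bookkeeping.

```python
# Minimal sketch: FRAP expressed as mmol Fe2+ equivalents per g dry weight,
# via an FeSO4 standard curve read at 593 nm. All numbers are hypothetical.
import numpy as np

feso4_mM = np.array([0.1, 0.25, 0.5, 1.0])     # FeSO4 standards
a593_std = np.array([0.11, 0.26, 0.52, 1.02])  # absorbance of the standards

slope, intercept = np.polyfit(feso4_mM, a593_std, 1)

a593_sample    = 0.64
sample_g_per_L = 2.0                           # dry extract assayed at 2 g/L (assumed)

fe2_mM          = (a593_sample - intercept) / slope   # mmol Fe2+ per litre in the assay
frap_mmol_per_g = fe2_mM / sample_g_per_L             # mmol Fe2+ equivalents per g

print(f"FRAP ~ {frap_mmol_per_g:.2f} mmol Fe2+/g dry weight")
```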
The ferric complex reducing abilities of different fractions were similar to the results obtained for the radical scavenging assay; water fraction exhibited very strong ferric ion reducing activity, and the five fractions in descending order of strength of ferric ion reducing activity were water fraction > ethanol extract > butanol > chloroform > hexane fraction. In terms of the ABTS assay, the water fraction demonstrated the highest scavenging activity, followed by the ethanol extract, butanol fraction, chloroform fraction and hexane fraction, and these trends was similar to those of the DPPH and FRAP assays. For the results of the SOD activity assay, most fractions with the exception of hexane exhibited very high SOD activity similar to α-tocopherol (99.64 ± 2.45%). However, there was no significant different among the fractions.\nTotal antioxidant capacities of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)Concentration of test sample required to produce 50% inhibition of the DPPH radical.\n2)Expressed as mmol of Fe2+ equivalents per gram of dry plant weight.\n3)Concentration of test sample required to produce 50% inhibition of ABTS radical.\n4)Expressed as the superoxide inhibition rate of Prunella vulgaris var. lilacina ethanol extract and its fractions.\nValues are mean ± SEM. Values with different superscripts within same column are significantly different (p < 0.05).\nDPPH, FRAP, ABTS and SOD assays are widely used to determine the antioxidant capacity of plant extracts due to their simplicity, stability, and reproducibility [26]. In this study, the DPPH, FRAP, ABTS and SOD assays provided comparable results for the antioxidant capacity measured in P. vulgaris var. lilacina extract and its fractions. The P. vulgaris var. lilacina extract and its fractions exhibited strong antioxidant activities against various oxidative systems in vitro. The strong antioxidant activity of a plant extract is correlated with a high content of total phenols [27,28]. In our research, we observed that the P. vulgaris var. lilacina extract and its fractions that contained higher phenol content exerted stronger radical scavenging effects (Table 3). The correlations between the antioxidant assays, such as DPPH, FRAP, ABTS and SOD activity and phenolic content, were highly positive (0.759 < │r│ < 0.993, p < 0.01), indicating that the four assays provided comparable values when they were used to estimate the antioxidant capacity of P. vulgaris var. lilacina extract. Many studies have shown a good positive linear correlation between antioxidant capacity and the total phenolic content of spices, medicinal herbs, and other dietary plants. Moreover, these results have also suggested that phenolic compounds are responsible for their antioxidant capacity [29-31].\nCorrelation coefficients between the antioxidant capacity and phenolic content of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina\n1)TPC: total phenolic content, DPPH: DPPH radical scavenging activity, FRAP: ferric reducing antioxidant powers, ABTS: ABTS radical scavenging activity, SOD: Superoxide dismutase activity. *p<0.05, **p<0.01.\nResults of the radical scavenging capacities determined by DPPH, FRAP, ABTS and SOD assays are shown in Table 2. In the DPPH assay, the DPPH radical scavenging activity of all fractions from P. vulgaris var. 
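The correlation analysis summarized in Table 3 can, in principle, be reproduced with a short script. The sketch below is illustrative only: the function name and the use of SciPy are assumptions, and the input vectors would be the per-fraction TPC and assay values from Tables 1 and 2, which are not hard-coded here.

```python
# Illustrative sketch: Pearson correlation between total phenolic content (TPC)
# and one antioxidant assay readout across solvent fractions.
# Assumes SciPy is available; the caller supplies the measured values.
from scipy.stats import pearsonr

def correlate_with_tpc(tpc, assay_values):
    """Return (r, p) for per-fraction TPC vs. one assay (e.g., FRAP or ABTS).

    tpc          -- total phenolic content per fraction (mg GAE/g)
    assay_values -- matching antioxidant readouts, in the same fraction order
    """
    r, p = pearsonr(tpc, assay_values)
    return r, p

# Usage (values would come from Tables 1 and 2; none are hard-coded here):
# r, p = correlate_with_tpc(tpc=[...], assay_values=[...])
# Note: for RC50-type endpoints (DPPH, ABTS), lower values mean stronger
# activity, so a strong association appears as a large negative r; Table 3
# effectively reports |r|.
```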
\n Intracellular ROS scavenging activity To investigate the intracellular levels of ROS, the cell-permeable probe DCF-DA was utilized. Non-fluorescent DCF-DA is hydrolyzed to DCFH inside the cells, which yields highly fluorescent DCF in the presence of intracellular hydrogen peroxide and related peroxides [32]. We examined whether the P. vulgaris var. lilacina extract and its fractions inhibited LPS-induced ROS generation. As shown in Figure 1, LPS treatment significantly increased ROS formation in RAW264.7 cells as determined by DCF fluorescence. However, treatment with the P. vulgaris var. lilacina extract and its fractions blocked LPS-induced ROS generation, consistent with the results obtained in the antioxidant assays.\nReactive oxygen species scavenging activities of Prunella vulgaris var. lilacina. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, ROS were stained with DCF-DA for 30 min, and the generation of ROS was analyzed with fluorescence microscopy.
\n Cell cytotoxicity activity In order to evaluate the cytotoxic effects of all samples, we performed a preliminary cytotoxicity study with RAW264.7 cells exposed to various sample concentrations (10, 50, or 100 μg/mL). The P. vulgaris var. lilacina ethanol extract (at 50 and 100 μg/mL), hexane fraction (at 100 μg/mL) and chloroform fraction (at 50 and 100 μg/mL) inhibited cell proliferation, but did not do so at a concentration of 10 μg/mL (Figure 2). Conversely, the groups treated with 10 μg/mL of the ethanol extract or butanol fraction, or with 10, 50 and 100 μg/mL of the water fraction, showed a proliferative effect of over 10%. Macrophages are specialized phagocytic cells that attack foreign substances and cancer cells through destruction and ingestion. They also stimulate lymphocytes and other immune cells to respond to pathogens [33]. These results suggest that the ethanol extract and the butanol and water fractions of P. vulgaris var. lilacina can be used in the treatment of cancer. Based on this result, we determined the appropriate concentration to be 10 μg/mL.\nEffects of Prunella vulgaris var. lilacina on RAW264.7 cells as determined by the MTT assay. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at different concentrations (10, 50 and 100 μg/mL). After treatment for 24 h, cell viability was measured with the MTT assay. Values are the mean ± SEM; different marks within treatments indicate significant differences at *p < 0.05 compared to the PBS group.\nThe involvement of free radical-mediated cell damage in many different diseases, particularly cancer, led us to evaluate the cytotoxic activities of the ethanol extract and the water fraction of P. vulgaris var. lilacina against five human cancer cell lines (liver, HepG2; colon, HT29; lung, A549; stomach, MKN-45; and cervical, HeLa) (Figure 3). The ethanol extract and the water fraction of P. vulgaris var. lilacina were most effective against A549 cells out of all the cancer cell lines; their values were 32.4 and 28.7% at 10 μg/mL, respectively.\nThe cell cytotoxicity of the ethanol extract and water fraction from Prunella vulgaris var. lilacina against various cancer cell lines. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, cell viability was measured with the MTT assay. Values are the means of three determinations ± SEM. The different letters indicate a significant difference of p < 0.05.\nIn the present study, the results clearly demonstrate that the P. vulgaris var. lilacina ethanol extract induced significant cytotoxic effects on the various cancer cell lines studied, and these effects were stronger than those of the P. vulgaris var. lilacina solvent fractions. It may be difficult to determine the contribution of individual components to the overall anticancer effects. In the literature, it has been reported that P. vulgaris var. lilacina components such as ursolic acid and rosmarinic acid are responsible for anticancer activities. Woo et al. [34] reported significant apoptogenic activity of 2α,3α-dihydroxyurs-12-ene-28-oic acid in Jurkat T cells. Lee et al. [35] and Hsu et al. [36] explored the cytotoxic effects of ursolic acid. Psotova et al. [37] found that rosmarinic acid from P. vulgaris var. lilacina exhibited strong anticancer activity. In the present study, we have isolated various presumed active compounds from the ethanol extract of P. vulgaris var. lilacina (Figures 4 and 5). The GC-MS spectrum profile confirmed the presence of 7 major components: hexadecanoic acid, ethyl palmitate, linoleic acid, (z,z,z)-9,12,15-octadecatrienoic acid, ethyl (9E,12E)-9,12-octadecadienoate, (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid, and ethyl linoleolate. Lai et al. [38] reported significant antitumor effects of fatty acids such as hexadecanoic acid and ethyl palmitate obtained from plant extracts. Taken together, the anticancer activity of the ethanol extract may be the result of the synergistic effects of various compounds in P. vulgaris var. lilacina, which suggests that P. vulgaris var. lilacina can be used as a biological agent in the treatment of cancer.\n\nGas chromatogram of the ethanol extract of Prunella vulgaris var. lilacina.\n\nMS spectrum of the ethanol extract of Prunella vulgaris var. lilacina. The X-axis and Y-axis of the chromatogram show mass/charge (m/z) and abundance, respectively. (A) Hexadecanoic acid; (B) ethyl palmitate; (C) linoleic acid; (D) (z,z,z)-9,12,15-octadecatrienoic acid; (E) ethyl (9E,12E)-9,12-octadecadienoate; (F) (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid; (G) ethyl linoleolate.
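For reference, cell viability in an MTT experiment of this kind is usually expressed as a percentage of the untreated (PBS) control absorbance. The helper below is a generic sketch of that calculation, not the authors' code; the blank-correction step is an assumption, since it is not described in the text.

```python
# Generic sketch: percent viability from MTT absorbance readings (540 nm).
# Not the authors' script; blank subtraction is assumed but not described.
from statistics import mean

def percent_viability(treated_od, control_od, blank_od=0.0):
    """Mean viability (%) of treated wells relative to control wells."""
    treated = mean(od - blank_od for od in treated_od)
    control = mean(od - blank_od for od in control_od)
    return 100.0 * treated / control

# Shape of a call (absorbance lists would come from the plate reader):
# percent_viability(treated_od=[...], control_od=[...])
```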
\n Real-time RT-PCR analysis We assessed whether the P. vulgaris var. lilacina ethanol extract and the water fraction affected the expression of genes associated with apoptotic cell death, including the tumor suppressor p53, the pro-apoptotic gene Bax, the anti-apoptotic gene Bcl-2, and Fas, in A549 cells. p53-mediated apoptosis primarily occurs through the intrinsic apoptotic program [39]. It was reported that p53 induces apoptosis by either increasing the transcriptional activity of proapoptotic genes such as Bax or suppressing the activity of the antiapoptotic genes of the Bcl-2 family [40]. Our data show that the P. vulgaris var. lilacina ethanol extract and the water fraction significantly increased the expression of p53, Bax and Fas compared to the control. However, the expression of Bcl-2 was not decreased compared to that of the control (Figure 6). Therefore, the treatments altered the expression of Bax/Bcl-2, resulting in a shift in their ratio favoring apoptosis. Several other groups have shown in various cancer cell lines that P. vulgaris var. lilacina can lead to cell death by inducing apoptosis through regulation of p53 and Bax/Bcl-2 expression [41]. In our study, the elevation in p53 and Bax expression in lung cancer cells is consistent with the proposed involvement of p53- and Bax-related response systems. Taken together, we suggest that the P. vulgaris var. lilacina ethanol extract and water fraction induce apoptosis through the regulation of p53, Bax, and Fas expression.\nmRNA expression of apoptotic genes in A549 cells treated with ethanol extracts and water fractions from Prunella vulgaris var. lilacina. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/mL. Values are the mean ± SEM; different marks within treatments indicate significant differences (*p < 0.05, **p < 0.01, ***p < 0.001).
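The manuscript states only that expression values were normalized to GAPDH. One common way to do this for SYBR Green qPCR data is the 2^(-ΔΔCt) method, sketched below purely as an assumption; the authors do not specify their quantification model, and the function here is hypothetical.

```python
# Hypothetical sketch of relative quantification by the 2^(-ΔΔCt) method.
# The paper only states normalization to GAPDH; this exact model is assumed.

def fold_change(ct_target_treated, ct_gapdh_treated,
                ct_target_control, ct_gapdh_control):
    """Fold change of a target gene (e.g., p53, Bax, Fas) vs. untreated control."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated   # delta-Ct, treated
    d_ct_control = ct_target_control - ct_gapdh_control   # delta-Ct, control
    dd_ct = d_ct_treated - d_ct_control                   # delta-delta-Ct
    return 2.0 ** (-dd_ct)
```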
", "The present study determined that P. vulgaris var. lilacina extract and its fractions have strong antioxidant and anticancer activities in vitro. The correlation coefficients between antioxidant capacity and phenolic content were very strong, and phenolic compounds were a major contributor to the antioxidant capacities of P. vulgaris var. lilacina.\nIn addition, we confirmed the presence of 7 major components of P. vulgaris var. lilacina. However, further studies to elucidate the mechanisms of action of these compounds and to identify the basis of their antioxidative and anticancer activities are underway. On the basis of these results, P. vulgaris var. lilacina appears to be a good source of natural antioxidant and anticancer agents and could be of significance in the food industry and for the control of various human and animal diseases.", "ABTS: 2,2′-Azino-bis(3-ethylbenzothiazoline-6-sulfonic acid); DCF-DA: Dichlorofluorescein diacetate; DMSO: Dimethyl sulfoxide; DPPH: 1,1-Diphenyl-2-picrylhydrazyl; FRAP: Ferric-reducing antioxidant power; GAE: Gallic acid equivalent; LPS: Lipopolysaccharide; ROS: Reactive oxygen species; SOD: Superoxide dismutase; TPTZ: 2,4,6-Tris(2-pyridyl)-s-triazine.", "The authors declare that they have no competing interests.", "KAH conceived this study and designed the experiments. YJH and EJL performed most of the experiments. All authors including HRK analyzed the data and discussed the results. KAH supervised the project and wrote the manuscript with the help of YJH, EJL and HRK, and all authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6882/13/310/prepub\n", "Extraction yield of Prunella vulgaris var. lilacina.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "conclusions", null, null, null, null, "supplementary-material" ]
[ "Prunella vulgaris var. lilacina", "Antioxidative activity", "Anticancer activity" ]
Background: Oxidative stress is caused by reactive oxygen species (ROS), which are associated with many pathological disorders such as atherosclerosis, diabetes, ageing and cancer [1-3]. In order to protect human beings against oxidative damage, synthetic antioxidants such as BHA and BHT were created due to demand [4]. However, there has been concern regarding the toxicity and carcinogenic effects of synthetic antioxidants [5,6]. Thus, it is important to identify new sources of safe and inexpensive antioxidants of natural origin. Natural antioxidants, especially plant phenolics, flavonoids, tannins and anthocyanidins, are safe and are also bioactive [7]. Therefore, in recent years, considerable attention has been focused on exploring the potential antioxidant properties of plant extracts or isolated products of plant origin [8]. Prunella vulgaris var. lilacina is widely distributed in Korea, Japan, China, and Europe, and it continues to be used to treat inflammation, eye pain, headache, and dizziness [9,10]. It is rich in active compounds known to significantly affect human health, such as triterpenoid, rosmarinic acid, hyperoside, ursolic acid, and flavonoids [11-16]. Furthermore, P. vulgaris var. lilacina has been shown to have anti-allergic, anti-inflammatory, anti-oxidative, anti-microbial, and anti-viral effects [17-19]. However, reports on the antioxidant activities of P. vulgaris var. lilacina are limited, particularly concerning the relationship between its phenolic content and antioxidant capacity. Therefore, the aims of this study were to identify new sources of antioxidants from P. vulgaris var. lilacina extract. Additionally, the effects of the extraction solvent (70% ethanol, hexane, butanol, chloroform, or water) on the total phenolic content and antioxidant activities of P. vulgaris var. lilacina were investigated. Methods: Reagents The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA). The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA). Sample preparation and extraction Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. 
lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO. Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO. Total phenolic content The total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions). The total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Quantification was performed based on a standard curve with gallic acid. Results were expressed as milligrams gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions). DPPH radical scavenging activity Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated in the following way: inhibition concentration (%) = 100 - (Asample/Acontrol) × 100, where Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. 
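As a concrete illustration of the formula above, the sketch below computes percent inhibition from the sample and control absorbances and estimates RC50 by linear interpolation between the two concentrations bracketing 50% inhibition. The interpolation step is an assumption, since the text does not state how RC50 was derived from the dose-response data.

```python
# Illustrative implementation of the DPPH calculation described above:
# inhibition (%) = 100 - (A_sample / A_control) * 100
# RC50 is estimated here by linear interpolation (an assumption; the paper
# does not specify the curve-fitting procedure).

def inhibition_percent(a_sample, a_control):
    return 100.0 - (a_sample / a_control) * 100.0

def rc50(concentrations, inhibitions):
    """Concentration giving 50% inhibition, from a dose-response series.

    concentrations -- increasing sample concentrations (e.g., in ug/mL)
    inhibitions    -- matching percent-inhibition values
    """
    pairs = list(zip(concentrations, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:                       # bracketing pair found
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    return None                                    # 50% not reached in range
```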
Tests were carried out in triplicate. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate. Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. 0.3 mM DPPH was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated in the following way: inhibition concentration (%) = 100 - (Asample/Acontrol) × 100, where Acontrol is the absorbance of the control reaction (containing all reagents except the test sample), and Asample is the absorbance of the test sample. Tests were carried out in triplicate. For the final results, RC50 values (the concentrations required for 50% reduction of DPPH by 30 min after starting the reaction) were calculated from the absorbance diminished by 50%. The experiment was performed in triplicate. Ferric-reducing antioxidant power (FRAP) activity FRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents. FRAP activity was determined using manual assay methods [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron (III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron (II) sulfate heptahydrate (FeSO4) equivalents. ABTS radical cation scavenging activity The ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed by RC50. The ABTS assay was based on the ability of different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was completed and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of samples; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. 
The free radical scavenging capacity was expressed by RC50. Superoxide dismutase (SOD) activity Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 ml of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage of inhibition rate. Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction with a superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer’s instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 ml of buffer solution. Enzyme working solution was made after the enzyme solution tube was centrifuged for 5 sec. Fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage of inhibition rate. Cells and culture The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2. The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2. Cell cytotoxicity assay Exponentially growing cells were collected and plated at 5 × 103 - 1 × 104 cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24]. Exponentially growing cells were collected and plated at 5 × 103 - 1 × 104 cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with samples for 24 h, and MTT solution was added. After 4 h, the media was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24]. Intracellular reactive oxygen species (ROS) scavenging activity For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. 
After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany). For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany). Real-time reverse transcription polymerase chain reaction analysis (RT-PCR) To determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′- TCT AAC TTG GGG TGG CTT TGT CTT C -3′ (forward), 5′- GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse). To determine the expression levels of p53, Bax, Bcl-2 and Fas, RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer’s instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from cells. The PCR reaction was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′- TCT AAC TTG GGG TGG CTT TGT CTT C -3′ (forward), 5′- GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse). Gas chromatography-mass spectrum analysis (GC-MS) GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples. 
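For clarity, the oven program described above implies a temperature-programmed time of roughly 27 minutes before any final hold; the short calculation below simply restates the ramp arithmetic (hold times, if any, are not given in the text and are not assumed here).

```python
# Restating the GC oven program arithmetic from the text:
# 50-150 deg C at 10 deg C/min, 150-200 at 7 deg C/min, 200-250 at 5 deg C/min.
ramps = [(50, 150, 10), (150, 200, 7), (200, 250, 5)]  # (start, end, rate)

segment_minutes = [(end - start) / rate for start, end, rate in ramps]
total_minutes = sum(segment_minutes)   # 10.0 + ~7.14 + 10.0 = ~27.1 min
print(segment_minutes, round(total_minutes, 1))
```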
GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature was programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of the retention times with those of authentic samples. Statistical analysis Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when the significance (p < 0.05) was determined, the differences of the mean values were identified using Duncan’s multiple range tests. Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when the significance (p < 0.05) was determined, the differences of the mean values were identified using Duncan’s multiple range tests. Reagents: The reagents 1,1-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azinibis 3-ethyl benzothiazoline-6-sulfonic acid (ABTS), α-tocopherol, 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), iron(III) chloride hexahydrate, gallic acid, Folin and Ciocalteu’s phenol reagent, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and lipopolysaccharides (LPS) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Iron(II) sulfate heptahydrate and acetic acid were purchased from Junsei (Tokyo, Japan). A superoxide dismutase-WST kit was purchased from Dojindo (Kumamoto, Japan). Dulbecco’s Modified Eagle’s Medium (DMEM), RPMI 1640 medium, fetal bovine serum (FBS), and penicillin–streptomycin were obtained from Invitrogen (Carlsbad, CA, USA). Sample preparation and extraction: Whole plants of P. vulgaris var. lilacina were purchased from the Plant Extract Bank (#007-017, Dae-Jeon, Korea). Dried P. vulgaris var. lilacina was milled into powder of 80-mesh particle size and stored at -70°C. The dried P. vulgaris var. lilacina was extracted three times with 70% ethanol. The 70% ethanol extract powder (10 g) was suspended in 500 mL of distilled water and extracted with 500 mL of the following solvents in a stepwise manner: hexane, chloroform, and butanol. Each fraction was filtered through Whatman filter paper No. 2 (Advantec, Tokyo, Japan). Subsequently, the filtrates were combined and evaporated under a vacuum and then lyophilized with a freeze dryer (Ilshine Lab, Suwon, Korea) at -70°C under reduced pressure (< 20 Pa). The dry residue was stored at -20°C. For further analysis, we reconstituted the dry extract and fractions with DMSO. Total phenolic content: The total phenolic contents of P. vulgaris var. lilacina extract and its fractions were determined using the Folin-Ciocalteu method [20]. The extract and each fraction were oxidized with Folin–Ciocalteu’s reagents, and then, the reaction was neutralized with 10% sodium carbonate. After incubation at room temperature for 1 h, the absorbance of the reaction mixture was measured at 725 nm using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). 
Quantification was performed based on a standard curve prepared with gallic acid. Results were expressed as milligrams of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions). DPPH radical scavenging activity: Analysis of DPPH radical-scavenging activity was carried out according to the Blois method [21]. DPPH (0.3 mM) was added to each sample. After incubation for 30 min in the dark at room temperature, the absorbance was measured at 518 nm using a microplate reader. α-Tocopherol was used as a positive control. Percent reduction of the DPPH radical was calculated as follows: inhibition (%) = 100 - (Asample/Acontrol) × 100, where Acontrol is the absorbance of the control reaction (containing all reagents except the test sample) and Asample is the absorbance of the test sample. For the final results, RC50 values (the concentration required to reduce the DPPH absorbance by 50% at 30 min after starting the reaction) were calculated from the dose-response data (a short numerical illustration of this calculation is given after the cell culture subsection below). The experiment was performed in triplicate. Ferric-reducing antioxidant power (FRAP) activity: FRAP activity was determined using a manual assay method [22]. The working fluid was freshly prepared by mixing acetate buffer (300 mM, pH 3.6) with TPTZ in HCl and iron(III) chloride hexahydrate. Each sample solution or α-tocopherol was added to 3 mL of working fluid, and the mixture was left for 4 min at room temperature. The absorbance was measured at 593 nm. The results were expressed as iron(II) sulfate heptahydrate (FeSO4) equivalents. ABTS radical cation scavenging activity: The ABTS assay was based on the ability of the different fractions to scavenge the ABTS radical cation in comparison to a standard (α-tocopherol) [23]. The radical cation was prepared by mixing 7 mM ABTS with 2.45 mM potassium persulfate (1:1 v/v) and leaving the mixture for 24 h until the reaction was complete and the absorbance was stable. The ABTS radical solution was diluted with PBS to an absorbance of 0.7 ± 0.02 at 732 nm. The photometric assay was conducted with 180 μL of ABTS radical solution and 20 μL of sample; measurements were taken at 732 nm after 1 min. The antioxidative activity of the tested samples was calculated by determining the decrease in absorbance. The free radical scavenging capacity was expressed as RC50. Superoxide dismutase (SOD) activity: Superoxide dismutase activity was determined using the highly water-soluble tetrazolium salt WST-1, which produces a water-soluble formazan dye upon reduction by the superoxide anion. SOD activity was determined using an SOD assay kit (Dojindo, Kumamoto, Japan) in accordance with the manufacturer's instructions. Briefly, WST working solution was made by diluting 1 mL of WST solution into 19 mL of buffer solution. Enzyme working solution was prepared after the enzyme solution tube was centrifuged for 5 sec; fifteen microliters of enzyme solution were diluted with 2.5 mL of dilution buffer. SOD activity was expressed as the percentage inhibition rate. Cells and culture: The mouse macrophage cell line RAW264.7, human liver cancer cell line HepG2, human colon cancer cell line HT29, human lung cancer cell line A549, human stomach cancer cell line MKN-45 and human cervical cancer cell line HeLa were purchased from the Korean Cell Line Bank (Seoul, Korea). The cell lines were grown in RPMI 1640 medium or DMEM with 10% FBS and 1% penicillin-streptomycin and incubated at 37°C in 5% CO2.
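The percent-inhibition formula and the RC50 definition in the DPPH subsection above lend themselves to a short numerical illustration. The Python sketch below applies the formula to hypothetical absorbance readings (not values from this study) and estimates RC50 by linear interpolation between the two concentrations that bracket 50% inhibition; the dose series, readings and interpolation routine are assumptions for illustration only.

```python
import numpy as np

def percent_inhibition(a_sample: float, a_control: float) -> float:
    """Inhibition (%) = 100 - (A_sample / A_control) * 100, as defined in the text."""
    return 100.0 - (a_sample / a_control) * 100.0

def estimate_rc50(concentrations, inhibitions):
    """Linearly interpolate the concentration giving 50% inhibition.

    Assumes inhibition increases monotonically with concentration; returns None
    if 50% is never reached within the tested range.
    """
    conc = np.asarray(concentrations, dtype=float)
    inh = np.asarray(inhibitions, dtype=float)
    for i in range(1, len(conc)):
        if inh[i - 1] < 50.0 <= inh[i]:
            # Linear interpolation between the two bracketing points.
            frac = (50.0 - inh[i - 1]) / (inh[i] - inh[i - 1])
            return conc[i - 1] + frac * (conc[i] - conc[i - 1])
    return None

# Hypothetical DPPH readings: control and sample absorbances at 518 nm after 30 min.
a_control = 0.80
doses = [10, 25, 50, 100, 200]                 # ug/mL
a_samples = [0.70, 0.58, 0.42, 0.28, 0.15]
inhibition = [percent_inhibition(a, a_control) for a in a_samples]
print("inhibition (%):", [round(x, 1) for x in inhibition])
print("estimated RC50 (ug/mL):", round(estimate_rc50(doses, inhibition), 1))
```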
Cell cytotoxicity assay: Exponentially growing cells were collected and plated at 5 × 10³ to 1 × 10⁴ cells/well. P. vulgaris var. lilacina ethanol extract and its solvent fractions in DMSO were diluted in PBS to obtain final concentrations of 10, 50 and 100 μg/mL. Cells were treated with the samples for 24 h, and MTT solution was added. After 4 h, the medium was removed, and DMSO was added to each well. The resulting absorbance was measured at 540 nm [24]. Intracellular reactive oxygen species (ROS) scavenging activity: For microscopic detection of ROS formation, RAW264.7 cells were grown to 80% confluence in six-well plates and treated with samples for 24 h. After incubation, cells were incubated with dichlorofluorescein diacetate (DCF-DA) (25 μM) for 30 min at 37°C in the dark. After several washings with PBS, cells were observed with a fluorescence microscope (Carl ZEISS, Oberkochen, Germany). Real-time reverse transcription polymerase chain reaction analysis (RT-PCR): To determine the expression levels of p53, Bax, Bcl-2 and Fas, real-time RT-PCR was performed using a Qiagen Rotor-Gene Q real-time thermal cycler (Valencia, CA, USA) in accordance with the manufacturer's instructions. The cells were treated with P. vulgaris var. lilacina extracts and cultured for 24 h. Thereafter, cDNA was synthesized from the total RNA isolated from the cells. PCR was performed using 2× SYBR Green mix (Qiagen, Valencia, CA, USA). All results were normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) expression. The following primer sequences were used for the real-time RT-PCR: GAPDH, 5′-CGG AGT CAA CGG ATT TGG TCG TAT-3′ (forward), 5′-AGC CTT CTC CAT GGT GGT GAA GAC-3′ (reverse); p53, 5′-GCT CTG ACT GTA CCA CCA TCC-3′ (forward), 5′-CTC TCG GAA CAT CTC GAA GCG-3′ (reverse); Bax, 5′-ATG GAC GGG TCC GGG GAG-3′ (forward), 5′-TCA GCC CAT CTT CTT CCA-3′ (reverse); Bcl-2, 5′-CAG CTG CAC CTG ACG-3′ (forward), 5′-ATG CAC CTA CCC AGC-3′ (reverse); Fas, 5′-TCT AAC TTG GGG TGG CTT TGT CTT C-3′ (forward), 5′-GTG TCA TAC GCT TTC TTT CCA T-3′ (reverse). Gas chromatography-mass spectrometry (GC-MS): GC-MS analysis was carried out using an Agilent 6890 gas chromatograph equipped with a DB-5 ms capillary column (60 m × 0.25 mm; coating thickness 1.4 μm) and an Agilent 5975 MSD detector (Loveland, CO, USA). Analytical conditions were as follows: injector and transfer line temperatures of 250°C; oven temperature programmed from 50°C to 150°C at 10°C/min, from 150°C to 200°C at 7°C/min, and from 200°C to 250°C at 5°C/min; carrier gas helium at 1 mL/min; and split ratio 1:10. Identification of the constituents was based on comparison of their retention times with those of authentic samples. Statistical analysis: Statistical analysis was performed with SPSS (version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive statistics were used to calculate the mean and standard error of the mean (SEM). One-way analysis of variance was performed, and when significance (p < 0.05) was found, differences between mean values were identified using Duncan's multiple range test.
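The statistical workflow just described (one-way ANOVA followed by a multiple-range post hoc test when p < 0.05) can be sketched in a few lines. Duncan's multiple range test has no standard implementation in SciPy or statsmodels, so the sketch below substitutes Tukey's HSD as the post hoc step; the triplicate group values are invented for illustration, and the whole block is a stand-in, not the authors' SPSS analysis.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements for three solvent fractions (illustration only).
groups = {
    "ethanol": [72.1, 74.0, 73.2],
    "butanol": [82.9, 84.1, 83.6],
    "water":   [63.8, 65.0, 64.2],
}

# One-way ANOVA across the groups.
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc comparison when p < 0.05 (Tukey HSD as a stand-in for Duncan's test).
if p_value < 0.05:
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```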
Results and discussion: Extraction yield: The yield of the extract and each fraction obtained from dry plant material was measured (Additional file 1: Table S1). The highest solid residue yields were obtained using butanol as the extraction solvent. Total phenolic content: The total phenolic content of the P. vulgaris var. lilacina extract and its fractions was determined against a linear gallic acid standard curve and expressed as mg GAE/g dry weight of extract (or fractions). As shown in Table 1, the total phenolic content of the fractions from P. vulgaris var. lilacina varied from 109.31 to 322.80 mg GAE/g. The highest total phenolic content was detected in the water fraction (322.80 ± 15.12 mg GAE/g), whereas the lowest content was found in the hexane fraction (109.31 ± 4.08 mg GAE/g). Phenolic compounds are reported to be associated with antioxidant activity, anticancer effects, and other biological functions and may prevent the development of aging and disease [25]. These results suggest that P. vulgaris var. lilacina extracts might have high antioxidant and anticancer activities. Table 1. Total phenolic contents of various solvent fractions obtained from the ethanol extract of Prunella vulgaris var. lilacina. 1) Total phenolic content expressed in mg of gallic acid equivalent (GAE) per gram of dry weight of extract (or fractions). Values are mean ± SEM. Values with different superscripts within the same column are significantly different (p < 0.05).
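Converting Folin-Ciocalteu absorbance readings into the mg GAE/g values reported in Table 1 amounts to fitting a linear gallic acid calibration and scaling by the assay dilution and the amount of extract assayed. The Python sketch below illustrates that arithmetic with hypothetical calibration points, a hypothetical sample absorbance and illustrative dilution factors; none of these numbers are the study's raw data.

```python
import numpy as np

# Hypothetical gallic acid calibration (ug/mL vs. absorbance at 725 nm).
std_conc = np.array([0, 25, 50, 100, 200], dtype=float)
std_abs = np.array([0.02, 0.15, 0.28, 0.55, 1.08])

# Least-squares calibration line: A = slope * C + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def total_phenolics_mg_gae_per_g(sample_abs, dilution_factor, extract_mg_per_ml):
    """Back-calculate gallic acid equivalents from a sample absorbance.

    sample_abs: absorbance of the (diluted) reaction mixture at 725 nm
    dilution_factor: dilution applied to the sample before the assay
    extract_mg_per_ml: dry-extract concentration of the assayed solution
    """
    gae_ug_per_ml = (sample_abs - intercept) / slope * dilution_factor
    # ug GAE per mg extract is numerically equal to mg GAE per g extract.
    return gae_ug_per_ml / extract_mg_per_ml

print(round(total_phenolics_mg_gae_per_g(sample_abs=0.45, dilution_factor=10,
                                          extract_mg_per_ml=2.5), 1), "mg GAE/g")
```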
Antioxidant capacities of Prunella vulgaris var. lilacina: Results of the radical scavenging capacities determined by the DPPH, FRAP, ABTS and SOD assays are shown in Table 2. In the DPPH assay, all fractions from the P. vulgaris var. lilacina extract scavenged the DPPH radical (Table 2); the RC50 values were 73.05 ± 10.32, 1402.96 ± 194.46, 83.52 ± 7.01, 521.58 ± 8.10, 64.26 ± 2.22 and 12.82 ± 1.33 μg/mL for the ethanol extract, hexane, butanol, chloroform and water fractions and α-tocopherol, respectively. The water fraction showed the highest DPPH radical-scavenging activity. The DPPH scavenging activity of the fractions followed a trend similar to that of their total phenolic content. The FRAP assay measures total antioxidant activity based on the reduction of the ferric tripyridyltriazine (Fe³⁺-TPTZ) complex to the ferrous form. The ferric-reducing abilities of the different fractions were similar to the results obtained in the radical scavenging assay; the water fraction exhibited very strong ferric ion reducing activity, and the five samples in descending order of ferric ion reducing strength were water fraction > ethanol extract > butanol > chloroform > hexane fraction. In the ABTS assay, the water fraction demonstrated the highest scavenging activity, followed by the ethanol extract, butanol fraction, chloroform fraction and hexane fraction; these trends were similar to those of the DPPH and FRAP assays. In the SOD activity assay, most fractions, with the exception of hexane, exhibited very high SOD activity, similar to α-tocopherol (99.64 ± 2.45%). However, there was no significant difference among the fractions. Table 2. Total antioxidant capacities of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina. 1) Concentration of test sample required to produce 50% inhibition of the DPPH radical. 2) Expressed as mmol of Fe²⁺ equivalents per gram of dry plant weight. 3) Concentration of test sample required to produce 50% inhibition of the ABTS radical. 4) Expressed as the superoxide inhibition rate of Prunella vulgaris var. lilacina ethanol extract and its fractions. Values are mean ± SEM. Values with different superscripts within the same column are significantly different (p < 0.05). DPPH, FRAP, ABTS and SOD assays are widely used to determine the antioxidant capacity of plant extracts because of their simplicity, stability and reproducibility [26]. In this study, the DPPH, FRAP, ABTS and SOD assays provided comparable results for the antioxidant capacity of the P. vulgaris var. lilacina extract and its fractions. The P. vulgaris var. lilacina extract and its fractions exhibited strong antioxidant activities against various oxidative systems in vitro. The strong antioxidant activity of a plant extract is correlated with a high content of total phenols [27,28]. In our research, the P. vulgaris var. lilacina extract and the fractions with higher phenolic content exerted stronger radical scavenging effects (Table 3). The correlations between the antioxidant assays (DPPH, FRAP, ABTS and SOD activity) and phenolic content were highly positive (0.759 < |r| < 0.993, p < 0.01), indicating that the four assays provided comparable values when used to estimate the antioxidant capacity of the P. vulgaris var. lilacina extract. Many studies have shown a good positive linear correlation between antioxidant capacity and the total phenolic content of spices, medicinal herbs and other dietary plants, and these results also suggest that phenolic compounds are responsible for their antioxidant capacity [29-31]. Table 3. Correlation coefficients between the antioxidant capacity and phenolic content of various solvent fractions of the ethanol extract of Prunella vulgaris var. lilacina. 1) TPC: total phenolic content; DPPH: DPPH radical scavenging activity; FRAP: ferric-reducing antioxidant power; ABTS: ABTS radical scavenging activity; SOD: superoxide dismutase activity. *p < 0.05, **p < 0.01.
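The correlation analysis summarized in Table 3 reduces to pairwise Pearson coefficients between total phenolic content and each antioxidant measure across the five samples. A minimal sketch is shown below; the fraction-level values are illustrative stand-ins (loosely echoing the magnitudes quoted in the text), not the measured dataset, and scipy.stats.pearsonr is assumed as the correlation routine.

```python
from scipy.stats import pearsonr

# Illustrative values for the five samples (ethanol extract, hexane, chloroform,
# butanol and water fractions) -- NOT the measured data from this study.
tpc = [303.7, 109.3, 150.0, 250.0, 322.8]        # mg GAE/g
frap = [2.1, 0.4, 0.9, 1.7, 2.4]                  # mmol Fe2+/g
dpph_rc50 = [73.1, 1403.0, 521.6, 83.5, 64.3]     # ug/mL (lower = stronger scavenging)

for name, values in [("FRAP", frap), ("DPPH RC50", dpph_rc50)]:
    r, p = pearsonr(tpc, values)
    # A negative r against RC50 still indicates a strong association, hence |r| in the text.
    print(f"TPC vs {name}: r = {r:.3f}, p = {p:.3f}")
```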
Intracellular ROS scavenging activity: To investigate the intracellular levels of ROS, the cell-permeable probe DCF-DA was utilized. Non-fluorescent DCF-DA is hydrolyzed to DCFH inside the cells and yields highly fluorescent DCF in the presence of intracellular hydrogen peroxide and related peroxides [32]. We examined whether the P. vulgaris var. lilacina extract and its fractions inhibited LPS-induced ROS generation. As shown in Figure 1, LPS treatment significantly increased ROS formation in RAW264.7 cells, as determined by DCF fluorescence. However, treatment with the P. vulgaris var. lilacina extract and its fractions blocked LPS-induced ROS generation, in line with the results obtained in the antioxidant assays. Figure 1. Reactive oxygen species scavenging activities of Prunella vulgaris var. lilacina. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, cells were stained with DCF-DA for 30 min, and the generation of ROS was analyzed by fluorescence microscopy. Cell cytotoxicity activity: In order to evaluate the cytotoxic effects of all samples, we performed a preliminary cytotoxicity study with RAW264.7 cells exposed to various sample concentrations (10, 50 or 100 μg/mL). The P. vulgaris var. lilacina ethanol extract (at 50 and 100 μg/mL), hexane fraction (at 100 μg/mL) and chloroform fraction (at 50 and 100 μg/mL) inhibited cell proliferation, but not at a concentration of 10 μg/mL (Figure 2). Conversely, the groups treated with 10 μg/mL of the ethanol extract or butanol fraction, or with 10, 50 and 100 μg/mL of the water fraction, showed a proliferative effect of over 10%. Macrophages are specialized phagocytic cells that attack foreign substances and cancer cells through destruction and ingestion; they also stimulate lymphocytes and other immune cells to respond to pathogens [33]. These results suggest that the ethanol extract and the butanol and water fractions of P. vulgaris var. lilacina can be used in the treatment of cancer. Based on this result, we determined the appropriate concentration to be 10 μg/mL. Figure 2. Effects of Prunella vulgaris var. lilacina on RAW264.7 cells as determined by the MTT assay. Cells were treated with solvent fractions of Prunella vulgaris var. lilacina at different concentrations (10, 50 and 100 μg/mL). After treatment for 24 h, cell viability was measured with the MTT assay. Values are the mean ± SEM; different marks within treatments indicate significant differences at *p < 0.05 compared to the PBS group. The involvement of free radical-mediated cell damage in many different diseases, particularly cancer, led us to evaluate the cytotoxic activities of the ethanol extract and the water fraction of P. vulgaris var. lilacina against five human cancer cell lines (liver, HepG2; colon, HT29; lung, A549; stomach, MKN-45; and cervical, HeLa) (Figure 3). The ethanol extract and the water fraction of P. vulgaris var. lilacina were most effective against A549 out of all the cancer cell lines; their values were 32.4 and 28.7% at 10 μg/mL, respectively. Figure 3. The cell cytotoxicity of the ethanol extract and water fraction from Prunella vulgaris var. lilacina against various cancer cell lines. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/mL. After treatment for 24 h, cell viability was measured with the MTT assay. Values are the means of three determinations ± SEM. The different letters indicate a significant difference at p < 0.05.
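Cell viability in the MTT experiments is expressed relative to the vehicle-treated control, so readings above 100% correspond to the proliferative effect noted above and readings below 100% to growth inhibition. A minimal sketch of that calculation, using hypothetical optical densities at 540 nm rather than measured values, is shown below.

```python
# MTT readout: viability (%) relative to the vehicle control at 540 nm.
# Hypothetical OD540 values, not data from this study.

def viability_percent(od_treated, od_control, od_blank=0.0):
    """Viability relative to control after subtracting an optional blank reading."""
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

od_control = 0.92
treatments = {
    "water fraction, 10 ug/mL": 1.05,    # above control -> apparent proliferation
    "ethanol extract, 100 ug/mL": 0.61,  # below control -> growth inhibition
}
for label, od in treatments.items():
    v = viability_percent(od, od_control)
    effect = "proliferation" if v > 100 else "inhibition"
    print(f"{label}: {v:.1f}% of control ({effect})")
```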
In the present study, the results clearly demonstrate that the P. vulgaris var. lilacina ethanol extract induced significant cytotoxic effects on the various cancer cell lines studied, and these effects were stronger than those of the P. vulgaris var. lilacina solvent fractions. It may be difficult to determine the contribution of individual components to the overall anticancer effects. In the literature, P. vulgaris var. lilacina components such as ursolic acid and rosmarinic acid have been reported to be responsible for anticancer activities. Woo et al. [34] reported significant apoptogenic activity of 2α,3α-dihydroxyurs-12-ene-28-oic acid in Jurkat T cells. Lee et al. [35] and Hsu et al. [36] explored the cytotoxic effects of ursolic acid. Psotova et al. [37] found that rosmarinic acid from P. vulgaris var. lilacina exhibited strong anticancer activity. In the present study, we isolated various presumed active compounds from the ethanol extract of P. vulgaris var. lilacina (Figures 4 and 5). The GC-MS spectrum profile confirmed the presence of 7 major components: hexadecanoic acid, ethyl palmitate, linoleic acid, (z,z,z)-9,12,15-octadecatrienoic acid, ethyl (9E,12E)-9,12-octadecadienoate, (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid, and ethyl linoleolate. Lai et al. [38] reported significant antitumor effects of fatty acids such as hexadecanoic acid and ethyl palmitate obtained from plant extracts. Taken together, the anticancer activity of the ethanol extract may be the result of synergistic effects of various compounds in P. vulgaris var. lilacina, which suggests that P. vulgaris var. lilacina can be used as a biological agent in the treatment of cancer. Figure 4. Gas chromatogram of the ethanol extract of Prunella vulgaris var. lilacina. Figure 5. MS spectrum of the ethanol extract of Prunella vulgaris var. lilacina. The X-axis and Y-axis of the chromatogram show mass/charge (m/z) and abundance, respectively. (A) Hexadecanoic acid; (B) ethyl palmitate; (C) linoleic acid; (D) (z,z,z)-9,12,15-octadecatrienoic acid; (E) ethyl (9E,12E)-9,12-octadecadienoate; (F) (z,z,z)-ethyl ester-9,12,15-octadecatrienoic acid; (G) ethyl linoleolate.
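Constituent identification in the GC-MS analysis rested on matching peak retention times against authentic standards. The toy Python sketch below shows what that matching step looks like with a hypothetical standards table and a ±0.1 min tolerance; the compounds' retention times and the tolerance are assumptions for illustration, not values from the study.

```python
# Toy retention-time matching against authentic standards (values are hypothetical).
STANDARDS = {            # compound -> retention time (min)
    "hexadecanoic acid": 31.42,
    "ethyl palmitate": 32.10,
    "linoleic acid": 34.85,
}

def identify(peak_rt, tolerance=0.1):
    """Return the standard whose retention time lies within the tolerance, if any."""
    for compound, rt in STANDARDS.items():
        if abs(peak_rt - rt) <= tolerance:
            return compound
    return None

for rt in (31.45, 34.80, 36.20):
    print(f"peak at {rt} min ->", identify(rt) or "unidentified")
```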
Real-time RT-PCR analysis: We assessed whether the P. vulgaris var. lilacina ethanol extract and the water fraction affected the expression of genes associated with apoptotic cell death, including the tumor suppressor p53, the pro-apoptotic Bax, the anti-apoptotic Bcl-2 and the Fas genes, in A549 cells. p53-mediated apoptosis primarily occurs through the intrinsic apoptotic program [39]. It has been reported that p53 induces apoptosis by either increasing the transcriptional activity of proapoptotic genes such as Bax or suppressing the activity of antiapoptotic genes of the Bcl-2 family [40]. Our data show that the P. vulgaris var. lilacina ethanol extract and the water fraction significantly increased the expression of p53, Bax and Fas compared to the control. However, the expression of Bcl-2 was not decreased compared to that of the control (Figure 6). The treatments therefore altered the expression of Bax relative to Bcl-2, shifting their ratio in favor of apoptosis. Several other groups have shown in various cancer cell lines that P. vulgaris var. lilacina can lead to cell death by inducing apoptosis through regulation of p53 and Bax/Bcl-2 expression [41]. In our study, the resulting elevation in p53 and Bax expression in lung cancer cells is consistent with the proposed involvement of p53- and Bax-related response systems. Taken together, we suggest that the P. vulgaris var. lilacina ethanol extract and water fraction induce apoptosis through the regulation of p53, Bax, and Fas expression. Figure 6. mRNA expression of apoptotic genes in A549 cells treated with ethanol extracts and water fractions from Prunella vulgaris var. lilacina. Cells were treated with the ethanol extract and water fraction of P. vulgaris var. lilacina at 10 μg/mL. Values are the mean ± SEM; different marks within treatments indicate significant differences (*p < 0.05, **p < 0.01, ***p < 0.001). Conclusions: The present study determined that P. vulgaris var. lilacina extract and its fractions have strong antioxidant and anticancer activities in vitro. The correlation coefficients between antioxidant capacity and phenolic content were very strong, and phenolic compounds were a major contributor to the antioxidant capacities of P. vulgaris var. lilacina. In addition, we confirmed the presence of 7 major components of P. vulgaris var. lilacina. Further studies to elucidate the mechanisms of these compounds and to identify the basis of their antioxidative and anticancer activity are underway. On the basis of these results, P. vulgaris var. lilacina appears to be a good source of natural antioxidant and anticancer agents and could be of significance in the food industry and for the control of various human and animal diseases. Abbreviations: ABTS: 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid); DCF-DA: Dichlorofluorescein diacetate; DMSO: Dimethyl sulfoxide; DPPH: 1,1-diphenyl-2-picrylhydrazyl; FRAP: Ferric-reducing antioxidant power; GAE: Gallic acid equivalent; LPS: Lipopolysaccharide; ROS: Reactive oxygen species; SOD: Superoxide dismutase; TPTZ: 2,4,6-tris(2-pyridyl)-s-triazine. Competing interests: The authors declare that they have no competing interests. Authors' contributions: KAH conceived this study and designed the experiments. YJH and EJL performed most of the experiments. All authors, including HRK, analyzed the data and discussed the results. KAH supervised the project and wrote the manuscript with the help of YJH, EJL and HRK, and all authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6882/13/310/prepub Supplementary Material: Additional file 1: Table S1. Extraction yield of Prunella vulgaris var. lilacina.
[CONTENT] ||| ||| Korea | Japan | China | Europe ||| P. vulgaris var ||| ||| P. vulgaris var ||| ||| ||| the hexane fraction ||| Folin ||| FRAP | ABTS | SOD ||| MTT ||| RT-PCR ||| ||| 303.66 | 322.80 | GAE ||| ||| ||| HepG2 | A549 | HeLa ||| ||| lilacina ethanol extract ||| ||| ||| ||| lilacina ethanol extract | Bax ||| P. vulgaris var ||| [SUMMARY]
REDUCE PORT LAPAROSCOPIC SPLENECTOMY FOR GIANT EPITHELIAL CYST.
26734802
Delaitre and Maignien performed the first successful laparoscopic splenectomy in 1991. Since then, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid-organ procedures.
BACKGROUND
A reduced-port laparoscopic splenectomy was performed using one 10 mm and two 5 mm trocars. To enter the abdomen, a trans-umbilical open technique was used and a 10 mm trocar was placed. A subcostal 5 mm trocar was placed under direct vision at the level of the anterior axillary line, and another 5 mm port was inserted in the mid-epigastric region. Once the spleen was completely dissected and freed from all of its attachments, the hilum, including the splenic artery and vein, was clipped with hem-o-lock clips and divided with scissors. An endobag was then used to retrieve the spleen, which was morcellated and removed through the umbilical incision.
METHODS
This technique was used in a 15-year-old female with epigastric and left upper quadrant pain. An abdominal ultrasound demonstrated a giant cyst located in the spleen. Laboratory test findings were normal. A CT scan was also performed and showed a giant cyst compressing the stomach. The patient tolerated the procedure well, with an unremarkable postoperative course, and was discharged home 72 h after surgery.
RESULTS
The use of reduced ports minimizes abdominal trauma and has the hypothetical advantages of a shorter postoperative stay, better pain control, and better cosmesis. Reduced-port laparoscopic splenectomy for giant cysts is safe, feasible, and less invasive.
CONCLUSION
[ "Adolescent", "Cysts", "Epithelium", "Female", "Humans", "Laparoscopy", "Splenectomy", "Splenic Diseases" ]
4755184
INTRODUCTION
Splenectomy was initially described for hereditary spherocytosis by Sutherland and Burghard in 1910 and for idiopathic thrombocytopenic purpura by Kaznelson in 1916 [8]. It has been well recognized as an effective cure for hematologic disorders, better than medical treatment. The first successful laparoscopic splenectomy was performed by Delaitre and Maignien in 1991 [7, 8]. Since then, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid-organ procedures and is emerging as the gold standard for the management of various hematologic disorders. Minimally invasive surgery has earned broad acceptance; the drive to limit the trauma of large incisions has been the incentive for its development over the past century. Numerous technologies and instruments for laparoscopy have emerged, and surgeons gradually began reducing and repositioning trocars during laparoscopic surgery, starting with repositioning the subxiphoid trocar into the umbilicus. Fewer trocars mean less abdominal trauma. In small spleens, single-port laparoscopic surgery can be performed; in large spleens, fewer trocars can be used. The aim of this article is to detail the splenectomy procedure using only three trocars.
METHOD
Surgical technique: The patient is placed in lateral decubitus. The abdomen is entered using a trans-umbilical open technique and a 12 mm trocar is placed, through which a 10 mm 30° scope is inserted. A subcostal 5 mm trocar is placed under direct vision at the level of the anterior axillary line, and another 5 mm port is inserted in the mid-epigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon) and 5 mm instruments, access is gained to the lesser sac by dividing the gastrosplenic ligament and short vessels up to the upper pole of the spleen. The splenic flexure of the colon is mobilized to free the lower pole of the spleen, and the posterior splenorenal ligament is then divided. Once the spleen is completely dissected free from all of its hilar attachments, the splenic artery and vein are clipped with hem-o-lock clips and divided with scissors. Clipping the artery first and then the vein is suggested, as this can reduce the size of the spleen by an important percentage, which is especially useful when dealing with a large spleen. An endobag is then used to retrieve the spleen, which is morcellated and removed through the umbilical incision. A drain, exteriorized through the lateral 5 mm trocar, is used routinely.
null
null
CONCLUSION
The use of reduced ports minimizes abdominal trauma and has the hypothetical advantages of a shorter postoperative stay, better pain control, and better cosmesis. Reduced-port laparoscopic splenectomy for giant cysts is safe, feasible, and less invasive.
[ "Surgical technique", "RESULT" ]
[ "The patient is placed in lateral decubitus, to enter the abdomen using a\ntrans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm\n30o scope is inserted. A subcostal 5 mm trocar is placed under direct\nvision at the level of the anterior axillary line and another 5 mm port is inserted\nat the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon)\nand 5 mm instruments, access is gained to the lesser sac by dividing the\ngastrosplenic ligament and short vessels until the upper pole of the spleen. The\nsplenic flexure of the colon is mobilized to get the lower pole of the spleen freed.\nThe posterior splenorenal ligament is then freed. \nOnce the spleen is completely dissected free from all of its hilum attachments,\nsplenic artery and vein are clipped with hemolocks and divided with scissors. It is\nsuggested clipping the artery first and then the vein, this can reduce the size of\nthe spleen in an important percentage. This is specially useful when dealing with\ngreat size spleen. Then an endobag is used to retrieve the spleen after being\nmorcellated through the umbilical incision. A drain, exteriorized through the lateral\n5 mm trocar is used routinely. ", "This technique was used in a 15 years old female refered due to epigastric and left\nupper quadrant pain. An abdominal ultrasound was performed in order to find gallblader\nstones, but a giant cyst located in the spleen was found. Laboratory tests were normal.\nThe CT scan showed a giant cyst which squeeze the stomach (Figure 1). The patient was scheduled for surgery, and a laparoscopic\napproach was performed.\n\nFIGURE 1- CT scan showed a giant cyst which squeeze the stomach\n\nLateral positioning was used and with open technique the abdomen was entered. A 10 mm\ntrocar was inserted at the umbilicus and two more trocars were placed in the left upper\nquadrant. Once the abdomen was entered a gian cyst located in the spleen was observed\n(Figure 2). The ligaments were divided with\nelectronic shears (Figure 3 A) and the splenic\nvein and artery were ligated by using hemolocks (Figure\n3 B-D)\n\nFIGURE 2- Gian cyst located in the spleen is observed when entering the\nabdomen\n\n\nFIGURE 3- Technical steps: A) ligaments were divided with electronic shears; B, C\nand D) splenic vein and artery being ligated by using hemolocks\n\nFinally the spleen was placed into the retrieval bag and removed from the abdominal\ncavity before morcellating it with ringed forceps. In the Figure 4, the spleen and cyst are observed after its resection. \n\nFIGURE 4- Spleen and cyst are observed after resection\n\nThe patient tolerated well the procedure, with an unremarkable postoperative period. She\nwas discharge home 72 h after the surgery with all the vaccines set (Figure 5).\n\nFIGURE 5- Final aspect of the operation, and a drain exteriorized through the\nlateral 5 mm trocar is used routinely\n" ]
[ null, null ]
[ "INTRODUCTION", "METHOD", "Surgical technique", "RESULT", "DISCUSSION", "CONCLUSION" ]
[ "Splenectomy was initially described for hereditary spherocytosis by Sutherland and\nBurghard in 1910 and for idiopathic thrombocytopenic purpura by Kaznelson in 19168. It has been well recognized as an effective cure\nfor hematologic disorders, better than medical treatment. The first successful\nlaparoscopic splenectomy was performed by Delaitre and Maignien in 19917\n,\n8. After that, laparoscopic splenectomy has\nbecome one of the most frequently performed laparoscopic solid organ procedures.\nLaparoscopic splenectomy is emerging as the gold standard for the management of various\nhematologic disorders. Minimally invasive surgery has earned boundless acceptance. The\nenthusiasm to limit the trauma of large incisions has been the incentive of the\ndevelopment of minimally invasive surgery during the past century. Numerous technology\nand equipment in laparoscopy have been emerging. It started gradually reducing and\nrepositioning trocars during laparoscopic surgery. It began with repositioning the\nsubxiphoid trocar into the umbilicus. Less trocars equals less abdominal trauma. In\nsmall spleens a single port laparoscopic surgery can be performed. In large spleens,\nless trocars can be used. \nThe aim of this artcile is to details the splenectomy procedure using only three\ntrochars.", " Surgical technique The patient is placed in lateral decubitus, to enter the abdomen using a\ntrans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm\n30o scope is inserted. A subcostal 5 mm trocar is placed under direct\nvision at the level of the anterior axillary line and another 5 mm port is inserted\nat the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon)\nand 5 mm instruments, access is gained to the lesser sac by dividing the\ngastrosplenic ligament and short vessels until the upper pole of the spleen. The\nsplenic flexure of the colon is mobilized to get the lower pole of the spleen freed.\nThe posterior splenorenal ligament is then freed. \nOnce the spleen is completely dissected free from all of its hilum attachments,\nsplenic artery and vein are clipped with hemolocks and divided with scissors. It is\nsuggested clipping the artery first and then the vein, this can reduce the size of\nthe spleen in an important percentage. This is specially useful when dealing with\ngreat size spleen. Then an endobag is used to retrieve the spleen after being\nmorcellated through the umbilical incision. A drain, exteriorized through the lateral\n5 mm trocar is used routinely. \nThe patient is placed in lateral decubitus, to enter the abdomen using a\ntrans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm\n30o scope is inserted. A subcostal 5 mm trocar is placed under direct\nvision at the level of the anterior axillary line and another 5 mm port is inserted\nat the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon)\nand 5 mm instruments, access is gained to the lesser sac by dividing the\ngastrosplenic ligament and short vessels until the upper pole of the spleen. The\nsplenic flexure of the colon is mobilized to get the lower pole of the spleen freed.\nThe posterior splenorenal ligament is then freed. \nOnce the spleen is completely dissected free from all of its hilum attachments,\nsplenic artery and vein are clipped with hemolocks and divided with scissors. It is\nsuggested clipping the artery first and then the vein, this can reduce the size of\nthe spleen in an important percentage. 
This is specially useful when dealing with\ngreat size spleen. Then an endobag is used to retrieve the spleen after being\nmorcellated through the umbilical incision. A drain, exteriorized through the lateral\n5 mm trocar is used routinely. ", "The patient is placed in lateral decubitus, to enter the abdomen using a\ntrans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm\n30o scope is inserted. A subcostal 5 mm trocar is placed under direct\nvision at the level of the anterior axillary line and another 5 mm port is inserted\nat the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon)\nand 5 mm instruments, access is gained to the lesser sac by dividing the\ngastrosplenic ligament and short vessels until the upper pole of the spleen. The\nsplenic flexure of the colon is mobilized to get the lower pole of the spleen freed.\nThe posterior splenorenal ligament is then freed. \nOnce the spleen is completely dissected free from all of its hilum attachments,\nsplenic artery and vein are clipped with hemolocks and divided with scissors. It is\nsuggested clipping the artery first and then the vein, this can reduce the size of\nthe spleen in an important percentage. This is specially useful when dealing with\ngreat size spleen. Then an endobag is used to retrieve the spleen after being\nmorcellated through the umbilical incision. A drain, exteriorized through the lateral\n5 mm trocar is used routinely. ", "This technique was used in a 15 years old female refered due to epigastric and left\nupper quadrant pain. An abdominal ultrasound was performed in order to find gallblader\nstones, but a giant cyst located in the spleen was found. Laboratory tests were normal.\nThe CT scan showed a giant cyst which squeeze the stomach (Figure 1). The patient was scheduled for surgery, and a laparoscopic\napproach was performed.\n\nFIGURE 1- CT scan showed a giant cyst which squeeze the stomach\n\nLateral positioning was used and with open technique the abdomen was entered. A 10 mm\ntrocar was inserted at the umbilicus and two more trocars were placed in the left upper\nquadrant. Once the abdomen was entered a gian cyst located in the spleen was observed\n(Figure 2). The ligaments were divided with\nelectronic shears (Figure 3 A) and the splenic\nvein and artery were ligated by using hemolocks (Figure\n3 B-D)\n\nFIGURE 2- Gian cyst located in the spleen is observed when entering the\nabdomen\n\n\nFIGURE 3- Technical steps: A) ligaments were divided with electronic shears; B, C\nand D) splenic vein and artery being ligated by using hemolocks\n\nFinally the spleen was placed into the retrieval bag and removed from the abdominal\ncavity before morcellating it with ringed forceps. In the Figure 4, the spleen and cyst are observed after its resection. \n\nFIGURE 4- Spleen and cyst are observed after resection\n\nThe patient tolerated well the procedure, with an unremarkable postoperative period. She\nwas discharge home 72 h after the surgery with all the vaccines set (Figure 5).\n\nFIGURE 5- Final aspect of the operation, and a drain exteriorized through the\nlateral 5 mm trocar is used routinely\n", "The first successful laparoscopic splenectomy was performed by Delaitre and Maignien in\n19917\n,\n8. 
After that, laparoscopic splenectomy has\nbecome one of the most frequently performed laparoscopic solid organ procedures.\nRegarding the surgical technique, the lateral positioning is the preferred approach for\nlaparoscopic splenectomy because the gravity effect pulls down the colon, stomach and\nomentum and allows better visualization of the spleen. Three left subcostal ports are\nusually adequate for normal-sized spleens. In the presence of splenomegaly, ports are\noptimally positioned 4 cm below the inferior tip of the spleen, parallel to the left\ncostal margin, but within reach of the diaphragm. If the spleen is extremely large, the\ntrocars may have to be placed substantially more inferiorly than normal, creating the\nneed for an additional port posteriorly. This port allows for lateral retraction of the\nspleen and can facilitate access to the diaphragmatic attachments. Additional ports are\nplaced under laparoscopic guidance. Visualization and efficiency are optimized by\nexchanging the camera between the medial and lateral ports, while the surgeon operates\nwith both hands8.\nSupermassive spleens are most effectively managed with a hand-assisted approach. This\napproach utilizes similar patient positioning but uses a hand-assisted device to\nfacilitate insertion of the surgeon's nondominant hand into the abdominal cavity while\nmaintaining pneumoperitoneum. This technique allows for improved tissue handling and\natraumatic manipulation of the enlarged spleen. For patients with supermassive spleens,\nlateral positioning is altered slightly. In these cases, the patient is placed supine\nwith the left side elevated at 45°. This allows the surgeon to take advantage of gravity\nwhile also providing comfortable access through the hand-assist incision.\nDepending upon the hand dominance of the surgeon, the hand-assist device can be placed\nin either a midline (right-hand dominant) or a subcostal position (left-hand dominant)\nthrough a 7 to 8 cm incision which is located 2 to 4 cm caudal to the inferior pole of\nthe enlarged spleen. In both situations, the surgeon stands on the right side of the\npatient. The nondominant hand is inserted through the hand-assist device and provides\nmedial retraction and rotation of the spleen. The hilar pedicle is transected with an\nendoscopic gastrointestinal anastomotic stapler utilizing a vascular cartridge. The\nspleen is placed into an appropriately-sized impermeable retrieval bag. This bag must be\nstrong enough to avoid rupture during morcellation and extraction of the specimen.\nPlacing the spleen into the retrieval bag can be one of the most time-consuming and\nchallenging aspects of the operation but it's necessary in order to avoid possible\nsplenosis which is unwanted in many hematologic disease. The patient is placed into\nTrendelenburg's position and the spleen is gradually directed into the bag. After the\nspleen is within the retrieval bag, the opening of the bag is delivered through the\nlargest port site or the hand-assist incision and the spleen is morcellated with ringed\nforceps.\nThe most common hematologic disorder that requires splenectomy is the idiopathic\nthrombocytopenic purpura where surgery is indicated in patients with refractory\nsymptomatic thrombocytopenia after 4 to 6 weeks of medical therapy, patients requiring\ntoxic doses of steroids to achieve remission, and patients who relapse following an\ninitial response to steroid therapy. 
Hereditary spherocytosis (hereditary hemolytic\nanemia), is curable in approximately 90% of patients after splenectomy. Surgery is\nindicated for all patients with hereditary spherocytosis and splenomegaly, for patients\nwith symptoms of severe hemolytic anemia or mild hemolytic anemia and concomitant\ngallstones, and for patients with cholelithiasis in siblings8.\nOther indications for splenectomy are HIV-related, thrombocytopenic purpura systemic,\nlupus erythematosus-related, thrombocytopenic purpura, thrombotic thrombocytopenic\npurpura, autoimmune hemolytic anemias, Hodgkin's disease, non-Hodgkin's lymphoma,\nchronic lymphocytic leukemia and Hairy cell leukemia8.\nThe contraindications to laparoscopic splenectomy can be divided into: absolute\ncontraindications like, severe cardiopulmonary disease or cirrhosis with portal\nhypertension. And relative contraindication are previous abdominal surgery (open Hasson\ntechnique is mandatory) and some authors include massive splenectomy1.\nLaparoscopic splenectomy is emerging as the gold standard for the management of various\nhematologic disorders. Since the first laparoscopic splenectomies were performed in\nadults (1991) and children (1993), laparoscopic splenectomy has become the gold standard\nfor the elective removal of normal sized spleens. It clearly has been shown to have less\nmorbidity and postoperative pain, shorter hospital stay, earlier return of bowel\nfunction, and superior cosmesis compared with open splenectomy5\n,\n6. \nAlthough laparoscopic splenectomy is superior to open splenectomy for normal sized\nspleens, splenomegaly has been considered a contraindication to the laparoscopic\napproach because of difficulties with bleeding and removal from the abdomen. Over the\nlast years, however, several authors have reported laparoscopic removal of enlarged\nspleens. With the development of hand-assisted laparoscopic surgery, the retraction of\nmassive spleens was technically feasible. Recent studies, have shown that, aside from\noperative time, there is no difference in transfusions, length of stay, morbidity, or\nconversion rate with laparoscopic splenectomy for large spleens compared with normal\nsized spleens.\nLaparoscopic techniques, surgical skills, and instrumentation have improved, so are\nsafety and efficacy even in the presence of splenomegaly.\nAuthors like Kent W. Kercher et al.8, made a\nstudy on 177 patients who underwent laparoscopic splenectomy where forty-nine patients\n(28%) were identified as having massive splenomegaly. They defined massive splenomegaly\nas a craniocaudal length ≥17 cm or a weight ≥600 g. Spleens greater than 22 cm in\ncraniocaudal length, 19 cm in width, or a weight greater than 1600 g were defined as\n\"supermassive.\" And in children, splenic size greater than four times normal for age was\ndefined as massive. They recommend that most surgeons acquire their initial laparoscopic\nexperience with normal-sized spleens prior to attempting laparoscopic splenectomy of\nsplenomegaly. And the author conclude that laparoscopic splenectomy has become the gold\nstandard for elective splenectomy in patients with normal-sized spleens and laparoscopic\nsplenectomy in the setting of massive splenomegaly is safe and effective providing\ndistinct advantages over the open operation. In the presence of supermassive\nsplenomegaly, the use of hand-assisted laparoscopic surgery maintains the benefits of a\nminimally invasive approach8.\nOther authors like Arin K. 
Greene 7, have used the Lahey bag to remove a\nmassive spleen. The author describe that this technique facilitate the removal of\nmassively enlarged spleens laparoscopically (>1,000 g), because a large abdominal\nincision to remove the spleen is not required. The spleen is broken up while in the\nLahey bag so the risk of splenosis is eliminated.\nBecause of suitable single port laparoscopy equipment is not available to may centers\ndue to cost, Colon et al.3 developed a technique\nwhere additional instrumentation was kept to a minimum. Boone et al.2 demonstrate in their series that single port\nsplenectomy is safe and feasible in an unselected patient population, their patient\npopulation included patients with prior surgery, obese patients, medical comorbidities,\nsplenomegaly, and severe thrombocytopenia. In their study they also compared it to\nstandard laparoscopic splenectomy. The results showed no statistically significance in\nmorbidity and mortality between both groups. Analysis of postoperative pain medication\nrequirement revealed that the single incision patients required fewer narcotics, but\nthis did not reach statistical significance either. Single port splenectomy was\nassociates with a significantly lower open conversion, shorter operative time, and\nsimilar median estimated blood loss. Overall, the study demonstrated that it is at least\nequivalent to standard laparoscopic splenectomy. They enounced that single port\nsplenectomy is an appropriate procedure that can be done safely and may lead to higher\npatient satisfaction compared to laparoscopic splenectomy. Also, they stated, while\narticulating instruments and laparoscopes may offer technical advantages, they are not\ncompletely necessary for performing single port splenectomy. It is a safe and feasible\ntechnique to perform splenectomy for small spleens, but in giant and massive spleens,\nthe reduce port laparoscopic surgery technique is a better choice.\n Laparoscopic splenectomy has become the gold standard for normal size spleens. Massive\nsplenomegaly is not a contraindication for laparoscopic surgery moreover with the\ndevelop of the hand assisted technique that is feasible and safe. Morbidity and\nmortality are equivalent when coparing with regular laparoscopic splenectomy.", "The use of reduce port minimizes abdominal trauma and has the hypothetical advantages of\nshorter postoperative stay, greater pain control, and better cosmesis. Laparoscopic\nsplenectomy for giant cysts by using reduce port trocars is safe and feasible and less\ninvasive." ]
[ "intro", "methods", null, null, "discussion", "conclusions" ]
[ "Laparoscopy", "Splenectomy", "Surgery" ]
INTRODUCTION: Splenectomy was initially described for hereditary spherocytosis by Sutherland and Burghard in 1910 and for idiopathic thrombocytopenic purpura by Kaznelson in 19168. It has been well recognized as an effective cure for hematologic disorders, better than medical treatment. The first successful laparoscopic splenectomy was performed by Delaitre and Maignien in 19917 , 8. After that, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid organ procedures. Laparoscopic splenectomy is emerging as the gold standard for the management of various hematologic disorders. Minimally invasive surgery has earned boundless acceptance. The enthusiasm to limit the trauma of large incisions has been the incentive of the development of minimally invasive surgery during the past century. Numerous technology and equipment in laparoscopy have been emerging. It started gradually reducing and repositioning trocars during laparoscopic surgery. It began with repositioning the subxiphoid trocar into the umbilicus. Less trocars equals less abdominal trauma. In small spleens a single port laparoscopic surgery can be performed. In large spleens, less trocars can be used. The aim of this artcile is to details the splenectomy procedure using only three trochars. METHOD: Surgical technique The patient is placed in lateral decubitus, to enter the abdomen using a trans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm 30o scope is inserted. A subcostal 5 mm trocar is placed under direct vision at the level of the anterior axillary line and another 5 mm port is inserted at the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon) and 5 mm instruments, access is gained to the lesser sac by dividing the gastrosplenic ligament and short vessels until the upper pole of the spleen. The splenic flexure of the colon is mobilized to get the lower pole of the spleen freed. The posterior splenorenal ligament is then freed. Once the spleen is completely dissected free from all of its hilum attachments, splenic artery and vein are clipped with hemolocks and divided with scissors. It is suggested clipping the artery first and then the vein, this can reduce the size of the spleen in an important percentage. This is specially useful when dealing with great size spleen. Then an endobag is used to retrieve the spleen after being morcellated through the umbilical incision. A drain, exteriorized through the lateral 5 mm trocar is used routinely. The patient is placed in lateral decubitus, to enter the abdomen using a trans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm 30o scope is inserted. A subcostal 5 mm trocar is placed under direct vision at the level of the anterior axillary line and another 5 mm port is inserted at the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon) and 5 mm instruments, access is gained to the lesser sac by dividing the gastrosplenic ligament and short vessels until the upper pole of the spleen. The splenic flexure of the colon is mobilized to get the lower pole of the spleen freed. The posterior splenorenal ligament is then freed. Once the spleen is completely dissected free from all of its hilum attachments, splenic artery and vein are clipped with hemolocks and divided with scissors. It is suggested clipping the artery first and then the vein, this can reduce the size of the spleen in an important percentage. 
This is specially useful when dealing with great size spleen. Then an endobag is used to retrieve the spleen after being morcellated through the umbilical incision. A drain, exteriorized through the lateral 5 mm trocar is used routinely. Surgical technique: The patient is placed in lateral decubitus, to enter the abdomen using a trans-umbilical open technique and a 12 mm trocar is placed. Through it a 10 mm 30o scope is inserted. A subcostal 5 mm trocar is placed under direct vision at the level of the anterior axillary line and another 5 mm port is inserted at the midepigastric region. Using a 5 mm harmonic scalpel (Harmonic Ace, Ethicon) and 5 mm instruments, access is gained to the lesser sac by dividing the gastrosplenic ligament and short vessels until the upper pole of the spleen. The splenic flexure of the colon is mobilized to get the lower pole of the spleen freed. The posterior splenorenal ligament is then freed. Once the spleen is completely dissected free from all of its hilum attachments, splenic artery and vein are clipped with hemolocks and divided with scissors. It is suggested clipping the artery first and then the vein, this can reduce the size of the spleen in an important percentage. This is specially useful when dealing with great size spleen. Then an endobag is used to retrieve the spleen after being morcellated through the umbilical incision. A drain, exteriorized through the lateral 5 mm trocar is used routinely. RESULT: This technique was used in a 15 years old female refered due to epigastric and left upper quadrant pain. An abdominal ultrasound was performed in order to find gallblader stones, but a giant cyst located in the spleen was found. Laboratory tests were normal. The CT scan showed a giant cyst which squeeze the stomach (Figure 1). The patient was scheduled for surgery, and a laparoscopic approach was performed. FIGURE 1- CT scan showed a giant cyst which squeeze the stomach Lateral positioning was used and with open technique the abdomen was entered. A 10 mm trocar was inserted at the umbilicus and two more trocars were placed in the left upper quadrant. Once the abdomen was entered a gian cyst located in the spleen was observed (Figure 2). The ligaments were divided with electronic shears (Figure 3 A) and the splenic vein and artery were ligated by using hemolocks (Figure 3 B-D) FIGURE 2- Gian cyst located in the spleen is observed when entering the abdomen FIGURE 3- Technical steps: A) ligaments were divided with electronic shears; B, C and D) splenic vein and artery being ligated by using hemolocks Finally the spleen was placed into the retrieval bag and removed from the abdominal cavity before morcellating it with ringed forceps. In the Figure 4, the spleen and cyst are observed after its resection. FIGURE 4- Spleen and cyst are observed after resection The patient tolerated well the procedure, with an unremarkable postoperative period. She was discharge home 72 h after the surgery with all the vaccines set (Figure 5). FIGURE 5- Final aspect of the operation, and a drain exteriorized through the lateral 5 mm trocar is used routinely DISCUSSION: The first successful laparoscopic splenectomy was performed by Delaitre and Maignien in 19917 , 8. After that, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid organ procedures. 
Regarding the surgical technique, the lateral positioning is the preferred approach for laparoscopic splenectomy because the gravity effect pulls down the colon, stomach and omentum and allows better visualization of the spleen. Three left subcostal ports are usually adequate for normal-sized spleens. In the presence of splenomegaly, ports are optimally positioned 4 cm below the inferior tip of the spleen, parallel to the left costal margin, but within reach of the diaphragm. If the spleen is extremely large, the trocars may have to be placed substantially more inferiorly than normal, creating the need for an additional port posteriorly. This port allows for lateral retraction of the spleen and can facilitate access to the diaphragmatic attachments. Additional ports are placed under laparoscopic guidance. Visualization and efficiency are optimized by exchanging the camera between the medial and lateral ports, while the surgeon operates with both hands8. Supermassive spleens are most effectively managed with a hand-assisted approach. This approach utilizes similar patient positioning but uses a hand-assisted device to facilitate insertion of the surgeon's nondominant hand into the abdominal cavity while maintaining pneumoperitoneum. This technique allows for improved tissue handling and atraumatic manipulation of the enlarged spleen. For patients with supermassive spleens, lateral positioning is altered slightly. In these cases, the patient is placed supine with the left side elevated at 45°. This allows the surgeon to take advantage of gravity while also providing comfortable access through the hand-assist incision. Depending upon the hand dominance of the surgeon, the hand-assist device can be placed in either a midline (right-hand dominant) or a subcostal position (left-hand dominant) through a 7 to 8 cm incision which is located 2 to 4 cm caudal to the inferior pole of the enlarged spleen. In both situations, the surgeon stands on the right side of the patient. The nondominant hand is inserted through the hand-assist device and provides medial retraction and rotation of the spleen. The hilar pedicle is transected with an endoscopic gastrointestinal anastomotic stapler utilizing a vascular cartridge. The spleen is placed into an appropriately-sized impermeable retrieval bag. This bag must be strong enough to avoid rupture during morcellation and extraction of the specimen. Placing the spleen into the retrieval bag can be one of the most time-consuming and challenging aspects of the operation but it's necessary in order to avoid possible splenosis which is unwanted in many hematologic disease. The patient is placed into Trendelenburg's position and the spleen is gradually directed into the bag. After the spleen is within the retrieval bag, the opening of the bag is delivered through the largest port site or the hand-assist incision and the spleen is morcellated with ringed forceps. The most common hematologic disorder that requires splenectomy is the idiopathic thrombocytopenic purpura where surgery is indicated in patients with refractory symptomatic thrombocytopenia after 4 to 6 weeks of medical therapy, patients requiring toxic doses of steroids to achieve remission, and patients who relapse following an initial response to steroid therapy. Hereditary spherocytosis (hereditary hemolytic anemia), is curable in approximately 90% of patients after splenectomy. 
Surgery is indicated for all patients with hereditary spherocytosis and splenomegaly, for patients with symptoms of severe hemolytic anemia or mild hemolytic anemia and concomitant gallstones, and for patients with cholelithiasis in siblings8. Other indications for splenectomy are HIV-related, thrombocytopenic purpura systemic, lupus erythematosus-related, thrombocytopenic purpura, thrombotic thrombocytopenic purpura, autoimmune hemolytic anemias, Hodgkin's disease, non-Hodgkin's lymphoma, chronic lymphocytic leukemia and Hairy cell leukemia8. The contraindications to laparoscopic splenectomy can be divided into: absolute contraindications like, severe cardiopulmonary disease or cirrhosis with portal hypertension. And relative contraindication are previous abdominal surgery (open Hasson technique is mandatory) and some authors include massive splenectomy1. Laparoscopic splenectomy is emerging as the gold standard for the management of various hematologic disorders. Since the first laparoscopic splenectomies were performed in adults (1991) and children (1993), laparoscopic splenectomy has become the gold standard for the elective removal of normal sized spleens. It clearly has been shown to have less morbidity and postoperative pain, shorter hospital stay, earlier return of bowel function, and superior cosmesis compared with open splenectomy5 , 6. Although laparoscopic splenectomy is superior to open splenectomy for normal sized spleens, splenomegaly has been considered a contraindication to the laparoscopic approach because of difficulties with bleeding and removal from the abdomen. Over the last years, however, several authors have reported laparoscopic removal of enlarged spleens. With the development of hand-assisted laparoscopic surgery, the retraction of massive spleens was technically feasible. Recent studies, have shown that, aside from operative time, there is no difference in transfusions, length of stay, morbidity, or conversion rate with laparoscopic splenectomy for large spleens compared with normal sized spleens. Laparoscopic techniques, surgical skills, and instrumentation have improved, so are safety and efficacy even in the presence of splenomegaly. Authors like Kent W. Kercher et al.8, made a study on 177 patients who underwent laparoscopic splenectomy where forty-nine patients (28%) were identified as having massive splenomegaly. They defined massive splenomegaly as a craniocaudal length ≥17 cm or a weight ≥600 g. Spleens greater than 22 cm in craniocaudal length, 19 cm in width, or a weight greater than 1600 g were defined as "supermassive." And in children, splenic size greater than four times normal for age was defined as massive. They recommend that most surgeons acquire their initial laparoscopic experience with normal-sized spleens prior to attempting laparoscopic splenectomy of splenomegaly. And the author conclude that laparoscopic splenectomy has become the gold standard for elective splenectomy in patients with normal-sized spleens and laparoscopic splenectomy in the setting of massive splenomegaly is safe and effective providing distinct advantages over the open operation. In the presence of supermassive splenomegaly, the use of hand-assisted laparoscopic surgery maintains the benefits of a minimally invasive approach8. Other authors like Arin K. Greene 7, have used the Lahey bag to remove a massive spleen. 
The author describe that this technique facilitate the removal of massively enlarged spleens laparoscopically (>1,000 g), because a large abdominal incision to remove the spleen is not required. The spleen is broken up while in the Lahey bag so the risk of splenosis is eliminated. Because of suitable single port laparoscopy equipment is not available to may centers due to cost, Colon et al.3 developed a technique where additional instrumentation was kept to a minimum. Boone et al.2 demonstrate in their series that single port splenectomy is safe and feasible in an unselected patient population, their patient population included patients with prior surgery, obese patients, medical comorbidities, splenomegaly, and severe thrombocytopenia. In their study they also compared it to standard laparoscopic splenectomy. The results showed no statistically significance in morbidity and mortality between both groups. Analysis of postoperative pain medication requirement revealed that the single incision patients required fewer narcotics, but this did not reach statistical significance either. Single port splenectomy was associates with a significantly lower open conversion, shorter operative time, and similar median estimated blood loss. Overall, the study demonstrated that it is at least equivalent to standard laparoscopic splenectomy. They enounced that single port splenectomy is an appropriate procedure that can be done safely and may lead to higher patient satisfaction compared to laparoscopic splenectomy. Also, they stated, while articulating instruments and laparoscopes may offer technical advantages, they are not completely necessary for performing single port splenectomy. It is a safe and feasible technique to perform splenectomy for small spleens, but in giant and massive spleens, the reduce port laparoscopic surgery technique is a better choice. Laparoscopic splenectomy has become the gold standard for normal size spleens. Massive splenomegaly is not a contraindication for laparoscopic surgery moreover with the develop of the hand assisted technique that is feasible and safe. Morbidity and mortality are equivalent when coparing with regular laparoscopic splenectomy. CONCLUSION: The use of reduce port minimizes abdominal trauma and has the hypothetical advantages of shorter postoperative stay, greater pain control, and better cosmesis. Laparoscopic splenectomy for giant cysts by using reduce port trocars is safe and feasible and less invasive.
Background: Delaitre and Maignien performed the first successful laparoscopic splenectomy in 1991. Since then, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid-organ procedures. Methods: A reduced-port laparoscopic splenectomy was performed using one 10 mm and two 5 mm trocars. To enter the abdomen, a trans-umbilical open technique was used and a 10 mm trocar was placed. A subcostal 5 mm trocar was placed under direct vision at the level of the anterior axillary line, and another 5 mm port was inserted in the mid-epigastric region. Once the spleen was completely dissected and freed from all of its attachments, the hilum, including the splenic artery and vein, was clipped with hem-o-lock clips and divided with scissors. An endobag was then used to retrieve the spleen, which was morcellated and removed through the umbilical incision. Results: This technique was used in a 15-year-old female with epigastric and left upper quadrant pain. An abdominal ultrasound demonstrated a giant cyst located in the spleen. Laboratory test findings were normal. A CT scan was also performed and showed a giant cyst compressing the stomach. The patient tolerated the procedure well, with an unremarkable postoperative course, and was discharged home 72 h after surgery. Conclusions: The use of reduced ports minimizes abdominal trauma and has the hypothetical advantages of a shorter postoperative stay, better pain control, and better cosmesis. Reduced-port laparoscopic splenectomy for giant cysts is safe, feasible, and less invasive.
INTRODUCTION: Splenectomy was initially described for hereditary spherocytosis by Sutherland and Burghard in 1910 and for idiopathic thrombocytopenic purpura by Kaznelson in 1916 [8]. It has been well recognized as an effective cure for hematologic disorders, better than medical treatment. The first successful laparoscopic splenectomy was performed by Delaitre and Maignien in 1991 [7, 8]. Since then, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid-organ procedures and is emerging as the gold standard for the management of various hematologic disorders. Minimally invasive surgery has earned broad acceptance; the drive to limit the trauma of large incisions has been the incentive for its development over the past century. Numerous technologies and instruments for laparoscopy have emerged, and surgeons gradually began reducing and repositioning trocars during laparoscopic surgery, starting with repositioning the subxiphoid trocar into the umbilicus. Fewer trocars mean less abdominal trauma. In small spleens, single-port laparoscopic surgery can be performed; in large spleens, fewer trocars can be used. The aim of this article is to detail the splenectomy procedure using only three trocars. CONCLUSION: The use of reduced ports minimizes abdominal trauma and has the hypothetical advantages of a shorter postoperative stay, better pain control, and better cosmesis. Reduced-port laparoscopic splenectomy for giant cysts is safe, feasible, and less invasive.
Background: Delaitre and Maignien performed the first successful laparoscopic splenectomy in 1991. Since then, laparoscopic splenectomy has become one of the most frequently performed laparoscopic solid-organ procedures. Methods: A reduced-port laparoscopic splenectomy was performed using one 10 mm and two 5 mm trocars. To enter the abdomen, a trans-umbilical open technique was used and a 10 mm trocar was placed. A subcostal 5 mm trocar was placed under direct vision at the level of the anterior axillary line, and another 5 mm port was inserted in the mid-epigastric region. Once the spleen was completely dissected and freed from all of its attachments, the hilum, including the splenic artery and vein, was clipped with hem-o-lock clips and divided with scissors. An endobag was then used to retrieve the spleen, which was morcellated and removed through the umbilical incision. Results: This technique was used in a 15-year-old female with epigastric and left upper quadrant pain. An abdominal ultrasound demonstrated a giant cyst located in the spleen. Laboratory test findings were normal. A CT scan was also performed and showed a giant cyst compressing the stomach. The patient tolerated the procedure well, with an unremarkable postoperative course, and was discharged home 72 h after surgery. Conclusions: The use of reduced ports minimizes abdominal trauma and has the hypothetical advantages of a shorter postoperative stay, better pain control, and better cosmesis. Reduced-port laparoscopic splenectomy for giant cysts is safe, feasible, and less invasive.
2,985
285
[ 242, 339 ]
6
[ "spleen", "laparoscopic", "splenectomy", "mm", "laparoscopic splenectomy", "spleens", "placed", "technique", "port", "patients" ]
[ "spleens laparoscopic splenectomy", "compared laparoscopic splenectomy", "attempting laparoscopic splenectomy", "laparoscopic splenectomy stated", "laparoscopic splenectomy emerging" ]
null
[CONTENT] Laparoscopy | Splenectomy | Surgery [SUMMARY]
[CONTENT] Laparoscopy | Splenectomy | Surgery [SUMMARY]
null
[CONTENT] Laparoscopy | Splenectomy | Surgery [SUMMARY]
[CONTENT] Laparoscopy | Splenectomy | Surgery [SUMMARY]
[CONTENT] Laparoscopy | Splenectomy | Surgery [SUMMARY]
[CONTENT] Adolescent | Cysts | Epithelium | Female | Humans | Laparoscopy | Splenectomy | Splenic Diseases [SUMMARY]
[CONTENT] Adolescent | Cysts | Epithelium | Female | Humans | Laparoscopy | Splenectomy | Splenic Diseases [SUMMARY]
null
[CONTENT] Adolescent | Cysts | Epithelium | Female | Humans | Laparoscopy | Splenectomy | Splenic Diseases [SUMMARY]
[CONTENT] Adolescent | Cysts | Epithelium | Female | Humans | Laparoscopy | Splenectomy | Splenic Diseases [SUMMARY]
[CONTENT] Adolescent | Cysts | Epithelium | Female | Humans | Laparoscopy | Splenectomy | Splenic Diseases [SUMMARY]
[CONTENT] spleens laparoscopic splenectomy | compared laparoscopic splenectomy | attempting laparoscopic splenectomy | laparoscopic splenectomy stated | laparoscopic splenectomy emerging [SUMMARY]
[CONTENT] spleens laparoscopic splenectomy | compared laparoscopic splenectomy | attempting laparoscopic splenectomy | laparoscopic splenectomy stated | laparoscopic splenectomy emerging [SUMMARY]
null
[CONTENT] spleens laparoscopic splenectomy | compared laparoscopic splenectomy | attempting laparoscopic splenectomy | laparoscopic splenectomy stated | laparoscopic splenectomy emerging [SUMMARY]
[CONTENT] spleens laparoscopic splenectomy | compared laparoscopic splenectomy | attempting laparoscopic splenectomy | laparoscopic splenectomy stated | laparoscopic splenectomy emerging [SUMMARY]
[CONTENT] spleens laparoscopic splenectomy | compared laparoscopic splenectomy | attempting laparoscopic splenectomy | laparoscopic splenectomy stated | laparoscopic splenectomy emerging [SUMMARY]
[CONTENT] spleen | laparoscopic | splenectomy | mm | laparoscopic splenectomy | spleens | placed | technique | port | patients [SUMMARY]
[CONTENT] spleen | laparoscopic | splenectomy | mm | laparoscopic splenectomy | spleens | placed | technique | port | patients [SUMMARY]
null
[CONTENT] spleen | laparoscopic | splenectomy | mm | laparoscopic splenectomy | spleens | placed | technique | port | patients [SUMMARY]
[CONTENT] spleen | laparoscopic | splenectomy | mm | laparoscopic splenectomy | spleens | placed | technique | port | patients [SUMMARY]
[CONTENT] spleen | laparoscopic | splenectomy | mm | laparoscopic splenectomy | spleens | placed | technique | port | patients [SUMMARY]
[CONTENT] laparoscopic | splenectomy | surgery | performed | laparoscopic splenectomy | invasive surgery | minimally invasive surgery | repositioning | trocars | minimally invasive [SUMMARY]
[CONTENT] mm | spleen | mm trocar | trocar | placed | harmonic | artery vein | size spleen | trocar placed | pole spleen [SUMMARY]
null
[CONTENT] reduce port | reduce | port | minimizes | abdominal trauma hypothetical advantages | feasible invasive | control better cosmesis laparoscopic | minimizes abdominal trauma hypothetical | minimizes abdominal trauma | minimizes abdominal [SUMMARY]
[CONTENT] spleen | mm | splenectomy | laparoscopic | figure | laparoscopic splenectomy | placed | mm trocar | trocar | surgery [SUMMARY]
[CONTENT] spleen | mm | splenectomy | laparoscopic | figure | laparoscopic splenectomy | placed | mm trocar | trocar | surgery [SUMMARY]
[CONTENT] Delaitre | Maignien | first | 1991 ||| [SUMMARY]
[CONTENT] 10 mm and ||| 10 mm ||| 5 mm | 5 mm ||| ||| [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] Delaitre | Maignien | first | 1991 ||| ||| 10 mm and ||| 10 mm ||| 5 mm | 5 mm ||| ||| ||| ||| 15 years old ||| ||| ||| CT ||| ||| 72 ||| ||| [SUMMARY]
[CONTENT] Delaitre | Maignien | first | 1991 ||| ||| 10 mm and ||| 10 mm ||| 5 mm | 5 mm ||| ||| ||| ||| 15 years old ||| ||| ||| CT ||| ||| 72 ||| ||| [SUMMARY]
Association between depression, happiness, and sleep duration: data from the UAE healthy future pilot study.
36271400
The United Arab Emirates Healthy Future Study (UAEHFS) is one of the first large prospective cohort studies and one of the few studies in the region that examine causes and risk factors for chronic diseases among nationals of the United Arab Emirates (UAE). The aim of this study is to investigate the eight-item Patient Health Questionnaire (PHQ-8) as a screening instrument for depression among the UAEHFS pilot participants.
BACKGROUND
The UAEHFS pilot data were analyzed to examine the relationship between the PHQ-8 score and possible confounding factors, such as self-reported happiness and self-reported sleep duration (hours), after adjusting for age, body mass index (BMI), and gender.
METHODS
Out of 517 participants who met the inclusion criteria, 487 (94.2%) filled out the questionnaire and were included in the statistical analysis using 100 multiple imputations; 231 (44.7%) were included in the primary statistical analysis after omitting missing values. Participants' median age was 32.0 years (interquartile range: 24.0, 39.0). In total, 22 (9.5%) of the participants reported depression. Females showed significantly higher odds of reporting depression than males, with an odds ratio of 3.2 (95% CI: 1.17, 8.88), and there were approximately 5-fold higher odds of reporting depression for unhappy than for happy individuals. For a one interquartile-range increase in age and in BMI, the odds ratios of reporting depression were 0.34 (95% CI: 0.1, 1.0) and 1.8 (95% CI: 0.97, 3.32), respectively.
RESULTS
Females are more likely to report depression compared to males. Increasing age may decrease the risk of reporting depression. Unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals. A higher BMI was associated with a higher risk of reporting depression. In a sensitivity analysis, individuals who reported less than 6 h of sleep per 24 h were more likely to report depression than those who reported 7 h of sleep.
CONCLUSION
[ "Male", "Female", "Humans", "Adult", "Happiness", "Pilot Projects", "Prospective Studies", "Depression", "United Arab Emirates", "Sleep" ]
9587590
Introduction
Depression is defined as a set of disorders ranging from mild to moderate to severe [1]. It is widely recognized as a major public health problem worldwide [2]. Reports from the Global Burden of Diseases declared major depressive disorder as one of the top three causes of disability-adjusted life years [3, 4]. The effect of depression has been extensively studied on the individual’s daily functioning and productivity. This is directly reflected in increased economic costs per capita [3, 4]. For example, in Catalonia in 2006, the average annual cost of an adult with depression was close to 1800 Euros, and the total annual cost of depression was 735.4 million Euros. These costs were linked directly to primary care, mental health specialized care, hospitalization, and pharmacological care, as well as indirect costs due to productivity loss, temporary and permanent disability [4]. To measure the level of depression in non-clinical populations; clinical and epidemiological studies have often used the established and validated eight-item Patient Health Questionnaire scale (PHQ-8) instead of the nine- item Patient Health Questionnaire scale (PHQ-9) [5]. As has been confirmed by previous studies, the scale can detect major depression with sensitivity and a specificity of 88%, to classify subjects into depressed or non-depressed, respectively [6–8]. Regionally, in Jordan, Lebanon, Syria and Afghanistan, the cutoff point of 10 was used [9–12]. Similarly, in UAE, studies have mostly used the cutoff point of 10 [13–16]. Recent studies have focused on exploring the methods of improving individual and environmental effects on disability related to depression by investigating its triggers and associations [2, 10]. Factors such as sleep duration and self-reported happiness are evidenced to be predictors for depression [17, 18]. Sleep duration can be considered a risk factor for lower well-being [19]. The number of sleep hours has also been found to have a causal relationship with depression as well as self-reported happiness [20]. For example, shorter sleep duration might lead to lower positive emotions such as self-reported happiness and showed stronger associations with negative emotional affect [21, 22]. One question that needs to be asked, however, is the nature and strength of the association between these three variables respectively. Sleep duration is determined by how many hours an individual sleeps over 24 h. Individuals with depression often have poor sleep status including abnormal REM (rapid eye movement), and insomnia (difficulty falling asleep or staying asleep). Abnormal REM may contribute to the development of altered emotional processing in depression [23]. It was reported that people with insomnia might have a ten-fold higher risk of developing depression in contrast to people who get a good night’s sleep [24]. Also, 75% of depressed individuals will have trouble falling asleep or staying asleep [25]. A bidirectional relationship between sleep duration and reported depression has been investigated [24, 26]. Such studies are unsatisfactory because they do not explore the association between major depressive disorder and sleep duration based on specific population demographics. In this study we are considering other sociodemographic factors such as age, gender, and marital status. Interest in studying happiness in the context of mental health status (such as depression and anxiety) has been growing recently [27]. 
The findings of some research papers suggest that self-reported happiness is a potential factor in the prevention and management of depression [27, 34]. Happiness is correlated to a person’s ability to approach situations in a less stressful manner and to an individual’s capacity to perceive and control their own feelings. This indicates that higher happiness levels may have a protective effect on depression [27]. Well-being has been defined as the combination of feeling good and functioning well [28]; conversely, quality of life could be defined as an individual’s satisfaction with his or her actual life compared with his or her ideal life. Evaluation of the quality of life depends on one’s value system [29]. Moreover, studies have shown that individuals use various chronically accessible and stable sources of information when making life satisfaction judgments [30]. Yet, well-being, quality of life and life satisfaction are multidimensional constructs which include complex cognitive evaluation processes and cannot be adequately assessed by using a single item or question [31]. Unlike quality of life, which typically requires detailed assessments to ascertain [32], happiness is easier to evaluate using a single item question [33]. Investigating the association between happiness and depression would add valuable information to public health research as the relationship between happiness and depression is observed to be bidirectional (i.e., one variable can predict the other, and people might not report depression but are more likely to report feeling unhappy) [34]. In addition, there are some factors which confound with sleep duration and self-reported happiness, such as age, gender, and marital status [36, 37]. For example, reviews reported that a prognosis of depression was improved with increasing age [38]. Conversely, another study showed that older age was associated with worsening of depressive symptoms [37]. Moreover, depressive symptoms could worsen among widowed individuals as their age increased [40]. Some research findings reflected that females are more likely to perceive depression compared to males [38, 41]. Other studies showed significant interplay between marital status and depressive symptoms [40, 42]. For example, being unmarried could lead to perceiving and developing depressive symptoms [43], while a worsening in depression symptoms among married individuals could lead to separation or being unmarried [42, 44]. So, further exploration is required to study the interplay of these variables together. The United Arab Emirates (UAE) is a high-income developed country which has undergone a rapid epidemiological transition from a traditional semi-nomadic society to a modern affluent society with a lifestyle characterized by over-consumption of energy-dense foods and low physical activity [47]. The UAE was ranked 15th out of 157 countries included in the WHO World Happiness Report with a score of 7.06 [48]. Despite this, depression has been identified as the third leading cause of disability in the UAE [3]. Nevertheless, there are few studies in the Gulf region examining the relationship between happiness, sleep duration and depression [36, 49, 50]. Therefore, studying the association between depression, happiness, and sleep in the United Arab Emirates (UAE) population would be of interest to the public health field in this part of the world. 
This study aimed to examine the relationship between depression, self-reported happiness, and self-reported sleep duration after adjusting for age and gender using the UAE Healthy Future Study (UAEHFS) pilot data.
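The outcome definition introduced above (the PHQ-8 total dichotomised at a cutoff of 10, plus a single happiness item collapsed into "happy" versus "unhappy") can be illustrated with a short sketch. This is not the authors' code: the column names and the toy data are assumptions for illustration only, and the recoding of "Prefer not to answer"/"Do not know" responses to missing is omitted here for brevity. R is used because the paper states its analyses were run in R.

```r
# Sketch of the PHQ-8 and happiness coding described in the paper (assumed column names).
set.seed(1)

# Toy questionnaire export: eight PHQ items scored 0-3 plus the single happiness item
dat <- as.data.frame(matrix(sample(0:3, 8 * 20, replace = TRUE), ncol = 8))
names(dat) <- paste0("phq_", 1:8)
dat$happiness_raw <- sample(c("Extremely happy", "Very happy", "Moderately happy",
                              "Moderately unhappy", "Very unhappy", "Extremely unhappy"),
                            20, replace = TRUE)

# Total PHQ-8 score (range 0-24) and the cutoff of 10 used in the study
dat$phq8_total <- rowSums(dat[, paste0("phq_", 1:8)])
dat$depressed  <- ifelse(dat$phq8_total >= 10, 1L, 0L)   # 1 = PHQ-8 >= 10

# Collapse the six happiness responses into the two analysis groups
happy_levels  <- c("Extremely happy", "Very happy", "Moderately happy")
dat$happiness <- factor(ifelse(dat$happiness_raw %in% happy_levels, "Happy", "Unhappy"),
                        levels = c("Happy", "Unhappy"))   # "Happy" as the reference group

table(depression = dat$depressed, happiness = dat$happiness)
```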
null
null
Results
Of 517 participants who consented to participate in the UAEHF pilot study, 487 (94.2%) had completed questionnaire data [47]. Figure 1 describes all key phases from recruitment up to inclusion in the study and in the statistical analysis. After omitting missing values, 231 (44.7%) participants were included in the main statistical analysis. However, 487 (94.2%) participants were included in the sensitivity analysis (using 100 multiple imputations) [73]. The median age of the UAEHFS pilot data participants was 32.0 years (Interquartile Range: 24.0–39.0). The percentage of females included in the study was 32%, which represented the UAE population well [59]. Therefore, we did not make any adjustments for gender bias. Fig. 1Flow chart of UAEHF pilot study (describes all the key phases from the recruitment up to the inclusion in the study analysis) Flow chart of UAEHF pilot study (describes all the key phases from the recruitment up to the inclusion in the study analysis) Note: This Figure represents the data of the included participants in the main statistical analysis after omitting missing values. The number of observed values (%) of each categorical variable was presented by the PHQ-8 categories in Table 1, where the majority of the study participants (90.5%) had PHQ-8 score less than 10. When categorical groups were compared within the PHQ-8 depression group (< 10 versus ≥ 10), there was only a statistically significant difference between females versus males with a Fisher exact p-value = 0.028. Table 1 shows that there was a statistically significant difference in the age distribution across the depression groups (p-value = 0.012). There was no statistically significant difference in the BMI measurements between the PHQ-8 groups (p-value = 0.44). The result of the primary analysis showed that females have statistically significantly greater odds of reporting depression compared to males OR = 3.2 (95% CI:1.1, 8.9), p-value = 0.024 (Table 2). Furthermore, there was an approximately 5-fold greater odds of reporting depression for unhappy than for happy individuals. However, this was not statistically significant (p-value = 0.10). Similarly, the sleep duration and marital status were both not statistically significant (p-value of 0.284 and 0.205, respectively; Table 2). However, in a sensitivity analysis, people who reported sleep duration of less than 6 h were more likely to report depression as compared to the people who reported sleep of 7 h OR (95% CI) of 2.6 (1.0, 6.4), p-value = 0.040 and 1.136 (1.015, 1.272), p-value = 0.027. Table 2 The results of the primary analysis of the fitted multivariate logistic regression model VariableOR (95% CI)PbHappiness - HappyReferenceHappiness - Unhappy5.5 (0.7, 42.2)0.100Age (linear)0.3 (0.1, 1.0)a0.056BMI (linear)1.8 (1.0, 3.3)a0.064Gender = MalesReferenceGender = Females3.2 (1.2, 8.9)0.024Sleep = 7ReferenceSleep < 64.0 (0.9, 17.4)0.061Sleep = 61.3 (0.2, 6.9)0.775Sleep = 81.5 (0.3, 7.3)0.585Sleep > 81.2 (0.2, 9.2)0.848MarriedReferenceSingle0.9 (0.3, 3.0)0.8Others0.2 (0.02, 3.3)0.3Note: aInterquartile-range odds ratio for age and BMI, which compares the 3rd quartile with the 1st quartile, and simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group) with corresponding 95% confidence interval (95% CI). 
bWald’s p-values are presented for each variable. To compute Interquartile Range (IQR) odds ratios for the continuous variables, the age and BMI variables were divided by their IQR values of 15 and 8.7, respectively [54, 56]. Table 2 shows that, for a one interquartile-range increase in age and BMI, the IQR-OR was 0.34 (95% CI: 0.12, 1.0) and 1.8 (95% CI: 0.97, 3.3), respectively; neither result reached statistical significance at the 0.05 level (p-values of 0.056 and 0.064, respectively). (A minimal code sketch illustrating this computation is given below.) The results of the sensitivity analysis using 100 multiple imputations (Table 3) were approximately similar to the results of the multivariate logistic regression analysis of the complete-case data (Table 2). However, the happiness variable was statistically significant, OR = 5.1 (95% CI: 1.7, 15.7; p-value = 0.005), and there was a statistically significant difference for sleep of less than six hours compared with seven hours, OR = 2.6 (95% CI: 1.0, 6.4; p-value = 0.04). Supplementary Fig. 1 illustrates the percentages of missing values in the variables included in this statistical analysis. Marital status had the lowest proportion of missing values (5.3%), followed by BMI (12.7%) and the sleep variable (12.9%). The happiness variable had 21.1% missing values, and the PHQ-8 variables had 32.6–35.1% missing values. Supplementary Table 1 shows the percentages of "Prefer not to answer (DA)" and "Do not know (UN)" responses for the eight PHQ questions, which varied between 16% and 18.5% across questions. The percentages of "Prefer not to answer (DA)" and "Do not know (UN)" responses for the happiness variable were 2.5% and 2.3%, respectively; these were also treated as missing values, although they were not missing at random and could be correlated with the PHQ-8 answers. However, this was addressed in the multiple imputation analysis.
Table 3 Results of the sensitivity analysis using multivariate logistic regression models with a sample size N = 487
Variable | OR (95% CI) | P (b)
Happiness = Happy | Reference |
Happiness = Unhappy | 5.1 (1.7, 15.7) | 0.004
Age (linear) | 0.5 (0.3, 1.0) (a) | 0.060
BMI (linear) | 1.7 (1.1, 2.5) (a) | 0.021
Gender = Males | Reference |
Gender = Females | 1.6 (0.9, 3.0) | 0.154
Sleep = 7 | Reference |
Sleep < 6 | 2.6 (1.0, 6.4) | 0.040
Sleep = 6 | 0.9 (0.3, 2.6) | 0.887
Sleep = 8 | 0.9 (0.3, 2.4) | 0.779
Sleep > 8 | 1.1 (0.3, 3.7) | 0.853
Married | Reference |
Single | 0.9 (0.4, 2.1) | 0.885
Others | 0.7 (0.2, 2.9) | 0.597
Note: (a) Interquartile-range odds ratios for age and BMI compare the 3rd quartile with the 1st quartile; simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group), with corresponding 95% confidence intervals (95% CI). (b) Wald’s p-values are presented for each variable. Results were summarized using Rubin’s rules.
An additional sensitivity analysis was performed using the ordinal PHQ-8 score as the outcome in a multivariate linear regression model (see supplementary Table 1). The result of this sensitivity analysis was very similar to that of the sensitivity analysis in Table 3.
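As referenced above, the sketch below (not the authors' code; simulated toy data and assumed variable names) shows one way to obtain the primary model and its interquartile-range odds ratios in R: dividing age and BMI by their IQRs before fitting the logistic regression makes each exponentiated coefficient an odds ratio comparing the 3rd quartile with the 1st quartile.

```r
# Sketch of the primary multivariable logistic regression with IQR odds ratios.
# The data frame below is simulated and only mimics the variable structure; it is
# not the UAEHFS pilot data.
set.seed(42)
n <- 231
dat <- data.frame(
  depressed = rbinom(n, 1, 0.10),                      # 1 = PHQ-8 >= 10
  happiness = factor(sample(c("Happy", "Unhappy"), n, replace = TRUE, prob = c(0.95, 0.05)),
                     levels = c("Happy", "Unhappy")),
  age       = round(runif(n, 18, 60)),
  bmi       = round(rnorm(n, 28, 5), 1),
  gender    = factor(sample(c("Male", "Female"), n, replace = TRUE, prob = c(0.68, 0.32)),
                     levels = c("Male", "Female")),
  sleep     = factor(sample(c("<6", "6", "7", "8", ">8"), n, replace = TRUE),
                     levels = c("7", "<6", "6", "8", ">8")),   # 7 hours as reference
  marital   = factor(sample(c("Married", "Single", "Other"), n, replace = TRUE),
                     levels = c("Married", "Single", "Other"))
)

# Dividing a continuous predictor by its IQR makes exp(beta) an IQR odds ratio,
# i.e. the odds of depression at the 3rd quartile relative to the 1st quartile.
dat$age_iqr <- dat$age / IQR(dat$age)
dat$bmi_iqr <- dat$bmi / IQR(dat$bmi)

fit <- glm(depressed ~ happiness + age_iqr + bmi_iqr + gender + sleep + marital,
           data = dat, family = binomial())

# Odds ratios with Wald 95% confidence intervals, plus Wald p-values
round(exp(cbind(OR = coef(fit), confint.default(fit))), 2)
summary(fit)$coefficients[, "Pr(>|z|)"]
```

Dividing by the IQR is only a rescaling, so the model fit and p-values are unchanged; it simply shifts the interpretation of the exponentiated coefficient to a quartile-to-quartile comparison.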
Conclusion
The results of this study indicate that females are more likely to report depression than males, and that older age is associated with lower odds of self-reported depression. Unhappy individuals have approximately 5-fold higher odds of reporting depression than happy individuals, and BMI is positively associated with reporting depression. The sensitivity analysis shows that individuals who sleep less than 6 h per day are more likely to report depression than those who sleep 7 h per day. These results have potential value for researchers and public health professionals, as they present novel data on the PHQ-8 score in a healthy UAE population that has not been explored before. Furthermore, they can contribute to the knowledge base on the current and potential mental health impact on the population of the UAE and the wider Gulf region.
[ "Study design", "Participants’ eligibility criteria and recruitment", "Measures", "Statistical analysis", "Sensitivity analysis", "Strengths and Limitations", "" ]
[ "This was a pilot prospective cohort study conducted from January 2015 to May 2015. The participants were recruited from two health care centers in Abu Dhabi. Participants completed an online questionnaire including questions on demographic data, PHQ-8 score, self-reported sleep duration, and self-reported happiness score. Physical measurements such as Body Mass Index (BMI) were collected during the recruitment visit.", "Seven hundred and sixty-nine UAE nationals aged ≥ 18 years were invited to participate voluntarily in the pilot study. Volunteers from the general population with inclusion criteria of age 18 or greater; able to consent; UAE nationals, resident in Abu Dhabi Emirate. All potential participants were given participant information leaflets in either Arabic or English to read and had the opportunity to ask questions prior to completion of the recruitment process. Participants signed an informed consent and were asked to complete a detailed questionnaire. However, 243 invited subjects did not respond, and their reasons for not participating were recorded [45]. Out of 517 participants who met the inclusion criteria, 487 (94.2%) participants filled out the questionnaire and were included in the statistical analysis [47].", "The PHQ-8 questionnaire was used to measure the participants’ depression levels [6, 39, 51]. This study used the cutoff point of 10 as well based on the common local practice considering that there are no specific related research guidelines in UAE [13–16]. The PHQ-8 score was dichotomized into no-depression (total PHQ-8 < 10) versus depression (≥ 10) [6, 51].\nHappiness was measured using a one-question item that asked participants, “In general, how happy you are?’’ Those who responded as extremely happy, very happy and moderately happy were grouped as “happy” while those responding as moderately unhappy, very unhappy, and extremely unhappy were grouped as “unhappy”.\nDemographic variables such as age, gender, and marital status (single, married, and others) were also included in the reference. Sleep duration data was collected as an ordinal variable (number of hours) by asking how many hours of sleep the participant gets in a 24-hour period, including naps. Sleep duration was categorized into five categories (see Table 1) to avoid the linearity assumption between sleep duration and depression status as well as to be able to compare average sleepers to shorter and longer sleepers. 
The questionnaire was translated from English into Arabic and back-translated into English to check for linguistic validity.\n\nTable 1\nNumber (percentages) of the analyzed variables and median (IQR) for Age and BMI.\nVariableGroupPHQ-8 < 10PHQ-8 ≥ 10P-valueGenderFemale62 (83.8)12 (16.2)0.028aMale147 (93.6)10 (6.4)Sleep (hours)< 639 (83)8 (17)0.284a= 653 (93)4 (7)= 757 (95)3 (5)= 844 (89.8)5 (10.2)> 816 (88.9)2 (11.1)Marital StatusSingle120 (93)9 (7)0.205aMarried15 (93.8)1 (6.2)Other74 (86)12 (14)HappinessHappy204 (91.1)20 (8.9)0.136aUnhappy5 (71.4)2 (28.6)Total209 (90.5)22 (9.5)Median (IQR)Median (IQR)P-valueAgeyears33.0 (25.0, 40.0)24.5 (22.0, 35.0)0.012bBMIkg/m227.6 (23.4, 31.2)28.8 (24.2, 33.0)0.444bNote: aFisher’s exact test p-values for categorical data and bWilcoxon rank sum test for continuous data\n\n\nNumber (percentages) of the analyzed variables and median (IQR) for Age and BMI.\n\nNote: aFisher’s exact test p-values for categorical data and bWilcoxon rank sum test for continuous data\nBody mass index (BMI) was obtained via physical measurement using Tanita MC-780 MA Segmental Body Composition Analyzer [52]. All physical measurements were collected by a clinical research nurse. Additional details of the study recruitment have been previously described [47].", "All eligible participants 487 (94.2%) were included in a sensitivity analysis (using 100 multiple imputations). After excluding participants with missing values, 231 (44.7%) were included in the primary statistical analysis. The PHQ-8 questionnaire was used in this statistical analysis with two additional possible options to select for each question (i.e., P2A – P2H). These were “Do not know (UN)” and “Prefer not to answer (DA)”, which were treated as missing values in the statistical analysis. Fisher’s exact test was used to investigate the association between depression and categorical variables, such as Sleep (hours), Happiness, Marital Status, and Gender. Wilcoxon rank-sum test was performed to investigate differences in the distribution of age and BMI within the depressed versus non-depressed group respectively.\nTo examine the factors associated with depression, a multivariate logistic regression model was performed with the dichotomized PHQ-8 score as the outcome. The predictors were Happiness score (Happy vs. Unhappy), Age (linear), Gender (Females vs. Males), BMI (linear), and Marital status (categorical). Interquartile-range odds ratios (IQR-OR’s) for continuous predictors and simple odds ratios (OR’s) for categorical predictors with 95% confidence intervals (Cis) were estimated for continuous and categorical variables respectively; corresponding p-values were calculated [53].\nIn sensitivity analysis, a multivariate logistic regression model and a multivariate linear regression model were conducted using multiple imputations (see sensitivity analysis section). All applied statistical tests were two-sided; p < 0.05 were considered statistically significant. No adjustment for multiple comparisons was made. Statistical analyses were performed in R version 4.0.2 [54].", "The primary statistical analysis included subjects with at least one non-missing value. However, in a sensitivity analysis, a multivariate imputation by chained equations (MICE) procedure was applied with Classification and Regression Trees (CART) to impute missing values [55]. 100 multiple imputations were used [56]. 
Rubin’s rules were used to combine the multiple imputed estimates [57].\nThe pattern of missing values was investigated, and it was found that subjects who “did not want to answer” were not systematically different from those who answered the questionnaire. Therefore, “prefer not to answer” was recorded as missing value in the statistical analysis and was considered a missing variable in the sensitivity analysis [55].", "Missing values were omitted in the primary statistical analysis, which decreased the sample size and can lead to overfitting in the main finding [55]. Therefore, the number of participants with the PHQ-8 ≥ 10 (i.e. – events) was 22 which is a limitation in this data set because it does not allow us to fit a complex multivariate statistical regression model if we would follow the statistical rule of thumb “ten events per predictor” [56, 57, 73]. Thus, sensitivity analysis was performed, and the result of the sensitivity analysis (Table 3) was approximately the same as the main finding (Table 2).\nAlthough some limitations were found in this study, the findings provide future direction to mental health research. Further investigation is needed with a larger sample size using the main UAEHFS data to have a better picture of the role of happiness, marital status, sleep, and social demographic variables in association with depression and mental health disorders.", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1" ]
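The sensitivity analysis described above (multivariate imputation by chained equations with CART, 100 imputations, and pooling with Rubin's rules) can be outlined with the R mice package. This is a hedged sketch rather than the authors' script: the data frame, the variable names, and the prior recoding of "Prefer not to answer"/"Do not know" responses to NA are assumptions.

```r
# Sketch of the multiple-imputation sensitivity analysis (assumes a data frame 'dat'
# with columns depressed, happiness, age, bmi, gender, sleep, marital, in which
# "Prefer not to answer" / "Do not know" responses have already been set to NA).
library(mice)

run_sensitivity <- function(dat, m = 100, seed = 2015) {
  # Multivariate imputation by chained equations, using classification and
  # regression trees (CART) as the univariate imputation method
  imp <- mice(dat, m = m, method = "cart", seed = seed, printFlag = FALSE)

  # Refit the same logistic regression on each of the m completed data sets
  fits <- with(imp, glm(depressed ~ happiness + age + bmi + gender + sleep + marital,
                        family = binomial()))

  # Combine the m sets of estimates and standard errors with Rubin's rules;
  # exponentiating the pooled estimates and confidence limits gives odds ratios
  summary(pool(fits), conf.int = TRUE)
}

# Example call (m = 100 as in the paper is slow; a smaller m suffices for a dry run):
# pooled <- run_sensitivity(dat, m = 100)
```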
[ null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Study design", "Participants’ eligibility criteria and recruitment", "Measures", "Statistical analysis", "Sensitivity analysis", "Results", "Discussion", "Strengths and Limitations", "Conclusion", "Electronic supplementary material", "" ]
[ "Depression is defined as a set of disorders ranging from mild to moderate to severe [1]. It is widely recognized as a major public health problem worldwide [2]. Reports from the Global Burden of Diseases declared major depressive disorder as one of the top three causes of disability-adjusted life years [3, 4]. The effect of depression has been extensively studied on the individual’s daily functioning and productivity. This is directly reflected in increased economic costs per capita [3, 4]. For example, in Catalonia in 2006, the average annual cost of an adult with depression was close to 1800 Euros, and the total annual cost of depression was 735.4 million Euros. These costs were linked directly to primary care, mental health specialized care, hospitalization, and pharmacological care, as well as indirect costs due to productivity loss, temporary and permanent disability [4].\nTo measure the level of depression in non-clinical populations; clinical and epidemiological studies have often used the established and validated eight-item Patient Health Questionnaire scale (PHQ-8) instead of the nine- item Patient Health Questionnaire scale (PHQ-9) [5]. As has been confirmed by previous studies, the scale can detect major depression with sensitivity and a specificity of 88%, to classify subjects into depressed or non-depressed, respectively [6–8]. Regionally, in Jordan, Lebanon, Syria and Afghanistan, the cutoff point of 10 was used [9–12]. Similarly, in UAE, studies have mostly used the cutoff point of 10 [13–16].\nRecent studies have focused on exploring the methods of improving individual and environmental effects on disability related to depression by investigating its triggers and associations [2, 10]. Factors such as sleep duration and self-reported happiness are evidenced to be predictors for depression [17, 18]. Sleep duration can be considered a risk factor for lower well-being [19]. The number of sleep hours has also been found to have a causal relationship with depression as well as self-reported happiness [20]. For example, shorter sleep duration might lead to lower positive emotions such as self-reported happiness and showed stronger associations with negative emotional affect [21, 22]. One question that needs to be asked, however, is the nature and strength of the association between these three variables respectively.\nSleep duration is determined by how many hours an individual sleeps over 24 h. Individuals with depression often have poor sleep status including abnormal REM (rapid eye movement), and insomnia (difficulty falling asleep or staying asleep). Abnormal REM may contribute to the development of altered emotional processing in depression [23]. It was reported that people with insomnia might have a ten-fold higher risk of developing depression in contrast to people who get a good night’s sleep [24]. Also, 75% of depressed individuals will have trouble falling asleep or staying asleep [25]. A bidirectional relationship between sleep duration and reported depression has been investigated [24, 26]. Such studies are unsatisfactory because they do not explore the association between major depressive disorder and sleep duration based on specific population demographics. In this study we are considering other sociodemographic factors such as age, gender, and marital status.\nInterest in studying happiness in the context of mental health status (such as depression and anxiety) has been growing recently [27]. 
The findings of some research papers suggest that self-reported happiness is a potential factor in the prevention and management of depression [27, 34]. Happiness is correlated to a person’s ability to approach situations in a less stressful manner and to an individual’s capacity to perceive and control their own feelings. This indicates that higher happiness levels may have a protective effect on depression [27]. Well-being has been defined as the combination of feeling good and functioning well [28]; conversely, quality of life could be defined as an individual’s satisfaction with his or her actual life compared with his or her ideal life. Evaluation of the quality of life depends on one’s value system [29].\nMoreover, studies have shown that individuals use various chronically accessible and stable sources of information when making life satisfaction judgments [30]. Yet, well-being, quality of life and life satisfaction are multidimensional constructs which include complex cognitive evaluation processes and cannot be adequately assessed by using a single item or question [31]. Unlike quality of life, which typically requires detailed assessments to ascertain [32], happiness is easier to evaluate using a single item question [33]. Investigating the association between happiness and depression would add valuable information to public health research as the relationship between happiness and depression is observed to be bidirectional (i.e., one variable can predict the other, and people might not report depression but are more likely to report feeling unhappy) [34].\nIn addition, there are some factors which confound with sleep duration and self-reported happiness, such as age, gender, and marital status [36, 37]. For example, reviews reported that a prognosis of depression was improved with increasing age [38]. Conversely, another study showed that older age was associated with worsening of depressive symptoms [37]. Moreover, depressive symptoms could worsen among widowed individuals as their age increased [40]. Some research findings reflected that females are more likely to perceive depression compared to males [38, 41]. Other studies showed significant interplay between marital status and depressive symptoms [40, 42]. For example, being unmarried could lead to perceiving and developing depressive symptoms [43], while a worsening in depression symptoms among married individuals could lead to separation or being unmarried [42, 44]. So, further exploration is required to study the interplay of these variables together.\nThe United Arab Emirates (UAE) is a high-income developed country which has undergone a rapid epidemiological transition from a traditional semi-nomadic society to a modern affluent society with a lifestyle characterized by over-consumption of energy-dense foods and low physical activity [47]. The UAE was ranked 15th out of 157 countries included in the WHO World Happiness Report with a score of 7.06 [48]. Despite this, depression has been identified as the third leading cause of disability in the UAE [3]. Nevertheless, there are few studies in the Gulf region examining the relationship between happiness, sleep duration and depression [36, 49, 50]. Therefore, studying the association between depression, happiness, and sleep in the United Arab Emirates (UAE) population would be of interest to the public health field in this part of the world. 
This study aimed to examine the relationship between depression, self-reported happiness, and self-reported sleep duration after adjusting for age and gender using the UAE Healthy Future Study (UAEHFS) pilot data.", " Study design This was a pilot prospective cohort study conducted from January 2015 to May 2015. The participants were recruited from two health care centers in Abu Dhabi. Participants completed an online questionnaire including questions on demographic data, PHQ-8 score, self-reported sleep duration, and self-reported happiness score. Physical measurements such as Body Mass Index (BMI) were collected during the recruitment visit.\nThis was a pilot prospective cohort study conducted from January 2015 to May 2015. The participants were recruited from two health care centers in Abu Dhabi. Participants completed an online questionnaire including questions on demographic data, PHQ-8 score, self-reported sleep duration, and self-reported happiness score. Physical measurements such as Body Mass Index (BMI) were collected during the recruitment visit.\n Participants’ eligibility criteria and recruitment Seven hundred and sixty-nine UAE nationals aged ≥ 18 years were invited to participate voluntarily in the pilot study. Volunteers from the general population with inclusion criteria of age 18 or greater; able to consent; UAE nationals, resident in Abu Dhabi Emirate. All potential participants were given participant information leaflets in either Arabic or English to read and had the opportunity to ask questions prior to completion of the recruitment process. Participants signed an informed consent and were asked to complete a detailed questionnaire. However, 243 invited subjects did not respond, and their reasons for not participating were recorded [45]. Out of 517 participants who met the inclusion criteria, 487 (94.2%) participants filled out the questionnaire and were included in the statistical analysis [47].\nSeven hundred and sixty-nine UAE nationals aged ≥ 18 years were invited to participate voluntarily in the pilot study. Volunteers from the general population with inclusion criteria of age 18 or greater; able to consent; UAE nationals, resident in Abu Dhabi Emirate. All potential participants were given participant information leaflets in either Arabic or English to read and had the opportunity to ask questions prior to completion of the recruitment process. Participants signed an informed consent and were asked to complete a detailed questionnaire. However, 243 invited subjects did not respond, and their reasons for not participating were recorded [45]. Out of 517 participants who met the inclusion criteria, 487 (94.2%) participants filled out the questionnaire and were included in the statistical analysis [47].", "This was a pilot prospective cohort study conducted from January 2015 to May 2015. The participants were recruited from two health care centers in Abu Dhabi. Participants completed an online questionnaire including questions on demographic data, PHQ-8 score, self-reported sleep duration, and self-reported happiness score. Physical measurements such as Body Mass Index (BMI) were collected during the recruitment visit.", "Seven hundred and sixty-nine UAE nationals aged ≥ 18 years were invited to participate voluntarily in the pilot study. Volunteers from the general population with inclusion criteria of age 18 or greater; able to consent; UAE nationals, resident in Abu Dhabi Emirate. 
All potential participants were given participant information leaflets in either Arabic or English to read and had the opportunity to ask questions prior to completion of the recruitment process. Participants signed an informed consent and were asked to complete a detailed questionnaire. However, 243 invited subjects did not respond, and their reasons for not participating were recorded [45]. Out of 517 participants who met the inclusion criteria, 487 (94.2%) participants filled out the questionnaire and were included in the statistical analysis [47].", "The PHQ-8 questionnaire was used to measure the participants’ depression levels [6, 39, 51]. This study used the cutoff point of 10 as well based on the common local practice considering that there are no specific related research guidelines in UAE [13–16]. The PHQ-8 score was dichotomized into no-depression (total PHQ-8 < 10) versus depression (≥ 10) [6, 51].\nHappiness was measured using a one-question item that asked participants, “In general, how happy you are?’’ Those who responded as extremely happy, very happy and moderately happy were grouped as “happy” while those responding as moderately unhappy, very unhappy, and extremely unhappy were grouped as “unhappy”.\nDemographic variables such as age, gender, and marital status (single, married, and others) were also included in the reference. Sleep duration data was collected as an ordinal variable (number of hours) by asking how many hours of sleep the participant gets in a 24-hour period, including naps. Sleep duration was categorized into five categories (see Table 1) to avoid the linearity assumption between sleep duration and depression status as well as to be able to compare average sleepers to shorter and longer sleepers. The questionnaire was translated from English into Arabic and back-translated into English to check for linguistic validity.\n\nTable 1\nNumber (percentages) of the analyzed variables and median (IQR) for Age and BMI.\nVariableGroupPHQ-8 < 10PHQ-8 ≥ 10P-valueGenderFemale62 (83.8)12 (16.2)0.028aMale147 (93.6)10 (6.4)Sleep (hours)< 639 (83)8 (17)0.284a= 653 (93)4 (7)= 757 (95)3 (5)= 844 (89.8)5 (10.2)> 816 (88.9)2 (11.1)Marital StatusSingle120 (93)9 (7)0.205aMarried15 (93.8)1 (6.2)Other74 (86)12 (14)HappinessHappy204 (91.1)20 (8.9)0.136aUnhappy5 (71.4)2 (28.6)Total209 (90.5)22 (9.5)Median (IQR)Median (IQR)P-valueAgeyears33.0 (25.0, 40.0)24.5 (22.0, 35.0)0.012bBMIkg/m227.6 (23.4, 31.2)28.8 (24.2, 33.0)0.444bNote: aFisher’s exact test p-values for categorical data and bWilcoxon rank sum test for continuous data\n\n\nNumber (percentages) of the analyzed variables and median (IQR) for Age and BMI.\n\nNote: aFisher’s exact test p-values for categorical data and bWilcoxon rank sum test for continuous data\nBody mass index (BMI) was obtained via physical measurement using Tanita MC-780 MA Segmental Body Composition Analyzer [52]. All physical measurements were collected by a clinical research nurse. Additional details of the study recruitment have been previously described [47].", "All eligible participants 487 (94.2%) were included in a sensitivity analysis (using 100 multiple imputations). After excluding participants with missing values, 231 (44.7%) were included in the primary statistical analysis. The PHQ-8 questionnaire was used in this statistical analysis with two additional possible options to select for each question (i.e., P2A – P2H). 
These were “Do not know (UN)” and “Prefer not to answer (DA)”, which were treated as missing values in the statistical analysis. Fisher’s exact test was used to investigate the association between depression and categorical variables, such as Sleep (hours), Happiness, Marital Status, and Gender. Wilcoxon rank-sum test was performed to investigate differences in the distribution of age and BMI within the depressed versus non-depressed group respectively.\nTo examine the factors associated with depression, a multivariate logistic regression model was performed with the dichotomized PHQ-8 score as the outcome. The predictors were Happiness score (Happy vs. Unhappy), Age (linear), Gender (Females vs. Males), BMI (linear), and Marital status (categorical). Interquartile-range odds ratios (IQR-OR’s) for continuous predictors and simple odds ratios (OR’s) for categorical predictors with 95% confidence intervals (Cis) were estimated for continuous and categorical variables respectively; corresponding p-values were calculated [53].\nIn sensitivity analysis, a multivariate logistic regression model and a multivariate linear regression model were conducted using multiple imputations (see sensitivity analysis section). All applied statistical tests were two-sided; p < 0.05 were considered statistically significant. No adjustment for multiple comparisons was made. Statistical analyses were performed in R version 4.0.2 [54].", "The primary statistical analysis included subjects with at least one non-missing value. However, in a sensitivity analysis, a multivariate imputation by chained equations (MICE) procedure was applied with Classification and Regression Trees (CART) to impute missing values [55]. 100 multiple imputations were used [56]. Rubin’s rules were used to combine the multiple imputed estimates [57].\nThe pattern of missing values was investigated, and it was found that subjects who “did not want to answer” were not systematically different from those who answered the questionnaire. Therefore, “prefer not to answer” was recorded as missing value in the statistical analysis and was considered a missing variable in the sensitivity analysis [55].", "Of 517 participants who consented to participate in the UAEHF pilot study, 487 (94.2%) had completed questionnaire data [47]. Figure 1 describes all key phases from recruitment up to inclusion in the study and in the statistical analysis. After omitting missing values, 231 (44.7%) participants were included in the main statistical analysis. However, 487 (94.2%) participants were included in the sensitivity analysis (using 100 multiple imputations) [73]. The median age of the UAEHFS pilot data participants was 32.0 years (Interquartile Range: 24.0–39.0). The percentage of females included in the study was 32%, which represented the UAE population well [59]. Therefore, we did not make any adjustments for gender bias.\n\nFig. 1Flow chart of UAEHF pilot study (describes all the key phases from the recruitment up to the inclusion in the study analysis)\n\nFlow chart of UAEHF pilot study (describes all the key phases from the recruitment up to the inclusion in the study analysis)\nNote: This Figure represents the data of the included participants in the main statistical analysis after omitting missing values.\nThe number of observed values (%) of each categorical variable was presented by the PHQ-8 categories in Table 1, where the majority of the study participants (90.5%) had PHQ-8 score less than 10. 
When categorical groups were compared within the PHQ-8 depression group (< 10 versus ≥ 10), there was only a statistically significant difference between females versus males with a Fisher exact p-value = 0.028. Table 1 shows that there was a statistically significant difference in the age distribution across the depression groups (p-value = 0.012). There was no statistically significant difference in the BMI measurements between the PHQ-8 groups (p-value = 0.44).\nThe result of the primary analysis showed that females have statistically significantly greater odds of reporting depression compared to males OR = 3.2 (95% CI:1.1, 8.9), p-value = 0.024 (Table 2). Furthermore, there was an approximately 5-fold greater odds of reporting depression for unhappy than for happy individuals. However, this was not statistically significant (p-value = 0.10). Similarly, the sleep duration and marital status were both not statistically significant (p-value of 0.284 and 0.205, respectively; Table 2). However, in a sensitivity analysis, people who reported sleep duration of less than 6 h were more likely to report depression as compared to the people who reported sleep of 7 h OR (95% CI) of 2.6 (1.0, 6.4), p-value = 0.040 and 1.136 (1.015, 1.272), p-value = 0.027.\n\nTable 2\nThe results of the primary analysis of the fitted multivariate logistic regression model\nVariableOR (95% CI)PbHappiness - HappyReferenceHappiness - Unhappy5.5 (0.7, 42.2)0.100Age (linear)0.3 (0.1, 1.0)a0.056BMI (linear)1.8 (1.0, 3.3)a0.064Gender = MalesReferenceGender = Females3.2 (1.2, 8.9)0.024Sleep = 7ReferenceSleep < 64.0 (0.9, 17.4)0.061Sleep = 61.3 (0.2, 6.9)0.775Sleep = 81.5 (0.3, 7.3)0.585Sleep > 81.2 (0.2, 9.2)0.848MarriedReferenceSingle0.9 (0.3, 3.0)0.8Others0.2 (0.02, 3.3)0.3Note: aInterquartile-range odds ratio for age and BMI, which compares the 3rd quartile with the 1st quartile, and simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group) with corresponding 95% confidence interval (95% CI). bWald’s p-values are presented for each variable\n\n\nThe results of the primary analysis of the fitted multivariate logistic regression model\n\nNote: aInterquartile-range odds ratio for age and BMI, which compares the 3rd quartile with the 1st quartile, and simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group) with corresponding 95% confidence interval (95% CI). bWald’s p-values are presented for each variable\nTo compute Interquartile Range (IQR) odds ratios for continuous variables, the age and BMI variables were divided by their IQR values of 15 and 8.7, respectively [54, 56]. Table 2 shows that for one interquartile-range increase in the age and BMI, the IQR-OR was 0.34 (95% CI: 0.12, 1.0) and 1.8 (95% CI: 0.97, 3.3), respectively, where both results were statistically significant with a p-value of 0.056 and 0.064 individually.\nThe results of the sensitivity analysis using 100 multiple imputations (Table 3) were approximately similar to the result of the multivariate logistic ordinal regression analyses using omitted data (Table 2). However, the happiness variable was statistically significant OR = 5.1 (95%CI: 1.7, 15.7, p-value = 0.005), and there was a statistically significant difference between the sleep variable for less than six hours as compared with seven hours OR = 2.6 (95% CI: 1.0, 6.4, p-value = 0.04). 
Supplementary Fig. 1 illustrates the percentages of missing values in the variables included in this statistical analysis. Marital status had the lowest number of missing values (5.3%), followed by BMI (12.7%) and the Sleep variable with 12.9% missing values. The Happiness variable had 21.1% missing values. The PHQ-8 variables had 32.6–35.1% missing values. Supplementary Table 1 shows the percentages of the “Prefer not to answer (DA)” and “Do not know (UN)” in the eight PHQ questions, where the percentages vary between 16 and 18.5% between the eight questions. The percentages of “Prefer not to answer (DA)” and “Do not know (UN)” in the happiness variable was 2.5% and 2.3%, respectively, which were also considered missing values, although these were not missing at random and can be correlated with one of the PHQ-8 answers. However, this has been addressed in the multiple imputation analysis.\n\nTable 3\nResults of the sensitivity analysis using multivariate logistic regression models with a sample size N = 487\nVariableOR (95% CI)PbHappiness – HappyReferenceHappiness - Unhappy5.1 (1.7, 15.7)0.004Age (linear)0.5 (0.3, 1.0)a0.060BMI (linear)1.7 (1.1, 2.5)a0.021Gender = MalesReferenceGender = Females1.6 (0.9, 3.0)0.154Sleep = 7ReferenceSleep < 62.6 (1.0, 6.4)0.040Sleep = 60.9 (0.3, 2.6)0.887Sleep = 80.9 (0.3, 2.4)0.779Sleep > 81.1 (0.3, 3.7)0.853MarriedReferenceSingle0.9 (0.4, 2.1)0.885Others0.7 (0.2, 2.9)0.597Note: aInterquartile-range odds ratio for Age and BMI, which compares the 3rd quartile with the 1st quartile, and simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group) with corresponding 95% confidence interval (95% CI). bWald’s p-values are presented for each variable. Results were summarized using Rubin’s rules\n\n\nResults of the sensitivity analysis using multivariate logistic regression models with a sample size N = 487\n\nNote: aInterquartile-range odds ratio for Age and BMI, which compares the 3rd quartile with the 1st quartile, and simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group) with corresponding 95% confidence interval (95% CI). bWald’s p-values are presented for each variable. Results were summarized using Rubin’s rules\nAn additional sensitivity analysis was performed using the ordinal PHQ-8 score as an outcome in a multivariate linear regression model (see supplementary Table 1). The result of this sensitivity analysis was very similar to the sensitivity analysis in Table 3.", "Overall, there is a lack of quantitative research examining the relationship between depression, perceived happiness, and sleep duration and other confounding factors in the Gulf region [35,46, 36, 49, 60]. The UAE Healthy Future Study is the first prospective cohort study of the UAE population and one of the few studies in the region which examines such relationships between happiness, sleep duration and quality, and depression using PHQ-8. The evidence collected from this study confirms what has been published in the literature. For instance, in this study, it has been found that males were less likely to report depression symptoms than females, which is similar to what has been documented by other studies [38, 41]; where women reported more depressive symptoms than men [60]. 
In addition, the results of this study revealed that older people have a lower odds of reporting depression as compared with younger people, suggesting a possible protective age effect.\nThis study has used the pilot data of the UAE Healthy Future Study, as recruitment into the main cohort study is still ongoing. However, we plan to use the main UAEHFS data to confirm the results and provide further understanding of these findings.\nThe association between sleep duration and depression has been intensively studied in the literature [23, 24]. The findings of this study revealed that individuals who reported less than 6 h of sleep per 24 h were more likely to report being depressed compared to those who reported 7 h of sleep. This is similar to what has been reported in the literature that participants who reported insufficient sleep showed a 62–179% increase in the prevalence of depression versus those sleeping 6 to 8 h per day and reporting sufficient sleep (P < 0.05) [61, 62].\nFurthermore, self-reported happiness has been identified as a potential protective and management factor for depression [27]. The result of this study shows that unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals, and this aligns with what has been found in the literature [27].\nAdditionally, a small number of studies found that insufficient sleep is associated with lower happiness in healthy adults using a self-reported questionnaire as a single item for measuring happiness [22]. Moreover, some longitudinal studies have found that the next-day happiness is lowered following a shorter sleep duration [63]. However, the associations between sleep and happiness have not been well-explored in adults. Our findings will add to the available evidence and will help to bring novel data from the Gulf Cooperation Council (GCC) countries about the association between sleep duration, self-reported happiness, and depression.\nA further finding is that marital status can contribute to health and self-reported happiness in a bidirectional way [49]. Several studies have found that there is an association between depression and marital status [63, 64]. There is an increased risk of reporting depression for divorced and separated people. It is frequently asserted that marriage is more beneficial for the mental health of men than women, but the evidence for this is far from clear-cut [65]. Single people have a higher level of depression as compared to married people and some studies have found that married people have a better mood than single people considering factors of age, gender, and education level [37, 40, 41, 66]. Research does not yet clarify whether gender differences in the prevalence of anxiety-mood disorders are greater among the married than the never-married or the previously married [65]. However, the mechanisms underlying the relationship between depression and marital status are not yet entirely clear and require further exploration.\nAn association has been reported between BMI categories and depression [67, 68]. Higher BMI is a risk factor for the likelihood of developing depression in individuals [69]. Underweight increases the risk of depression as well [70]. Our study considered the BMI factor in the data analysis of this study. This statistical analysis shows that as BMI increases, the odds of reporting depression also increase. 
This result is comparable to other research finding that participants with central obesity had an increased chance of depression [70,71,72].\nThe UAEHFS is a unique cohort study in the UAE and Gulf region as it allows researchers to investigate the association between disease outcomes and related risk factors [58]. In this study, we investigated the association between depression and sleep duration, self-reported happiness, BMI, and sociodemographic status using the UAEHFS pilot data, which presents a population that has not been studied. The result of our study needs to be confirmed in the main UAEHFS data.", "Missing values were omitted in the primary statistical analysis, which decreased the sample size and can lead to overfitting in the main finding [55]. Therefore, the number of participants with the PHQ-8 ≥ 10 (i.e. – events) was 22 which is a limitation in this data set because it does not allow us to fit a complex multivariate statistical regression model if we would follow the statistical rule of thumb “ten events per predictor” [56, 57, 73]. Thus, sensitivity analysis was performed, and the result of the sensitivity analysis (Table 3) was approximately the same as the main finding (Table 2).\nAlthough some limitations were found in this study, the findings provide future direction to mental health research. Further investigation is needed with a larger sample size using the main UAEHFS data to have a better picture of the role of happiness, marital status, sleep, and social demographic variables in association with depression and mental health disorders.", "The results of this study indicate that females are more likely to report depression compared to males. Older age is associated with a decrease in self-reported depression. Unhappy individuals are approximately 5-fold higher odds to report depression compared to happy individuals. BMI was positively associated with reporting depression. The result of the sensitivity analysis shows that individuals who sleep less than 6 h per day are more likely to report depression compared to those who sleep 7 h per day.\nThe results of this study have a potential value for researchers and public health professionals as it presents novel data on the PHQ-8 score in a healthy UAE population, which has not been explored before. Furthermore, the results of this study can help contribute to the knowledge base on current and potential population mental health impact in the UAE and Gulf Region.", " Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1\nBelow is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1" ]
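The unadjusted Table 1 comparisons described in the statistical-analysis section above (Fisher's exact test for categorical variables and the Wilcoxon rank-sum test for age and BMI against the dichotomised PHQ-8 outcome) amount, in outline, to the base-R sketch below. The toy data frame and column names are illustrative assumptions, not the study data.

```r
# Sketch of the unadjusted group comparisons summarised in Table 1.
set.seed(7)
n <- 231
dat <- data.frame(
  depressed = rbinom(n, 1, 0.10),                        # 1 = PHQ-8 >= 10
  gender    = sample(c("Male", "Female"), n, replace = TRUE, prob = c(0.68, 0.32)),
  sleep     = sample(c("<6", "6", "7", "8", ">8"), n, replace = TRUE),
  age       = round(runif(n, 18, 60)),
  bmi       = round(rnorm(n, 28, 5), 1)
)

# Categorical variables vs. depression group: Fisher's exact test
fisher.test(table(dat$gender, dat$depressed))
fisher.test(table(dat$sleep,  dat$depressed))

# Continuous variables: Wilcoxon rank-sum (Mann-Whitney) test between the two groups
wilcox.test(age ~ depressed, data = dat)
wilcox.test(bmi ~ depressed, data = dat)

# Medians and IQRs by group, as reported for age and BMI in Table 1
aggregate(cbind(age, bmi) ~ depressed, data = dat,
          FUN = function(x) c(median = median(x), IQR = IQR(x)))
```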
[ "introduction", "materials|methods", null, null, null, null, null, "results", "discussion", null, "conclusion", "supplementary-material", null ]
[ "PHQ-8", "Depression", "Sleep duration", "Happiness", "Self-reported happiness", "Sociodemographic and marital status" ]
Introduction: Depression is defined as a set of disorders ranging from mild to moderate to severe [1]. It is widely recognized as a major public health problem worldwide [2]. Reports from the Global Burden of Diseases declared major depressive disorder as one of the top three causes of disability-adjusted life years [3, 4]. The effect of depression has been extensively studied on the individual’s daily functioning and productivity. This is directly reflected in increased economic costs per capita [3, 4]. For example, in Catalonia in 2006, the average annual cost of an adult with depression was close to 1800 Euros, and the total annual cost of depression was 735.4 million Euros. These costs were linked directly to primary care, mental health specialized care, hospitalization, and pharmacological care, as well as indirect costs due to productivity loss, temporary and permanent disability [4]. To measure the level of depression in non-clinical populations; clinical and epidemiological studies have often used the established and validated eight-item Patient Health Questionnaire scale (PHQ-8) instead of the nine- item Patient Health Questionnaire scale (PHQ-9) [5]. As has been confirmed by previous studies, the scale can detect major depression with sensitivity and a specificity of 88%, to classify subjects into depressed or non-depressed, respectively [6–8]. Regionally, in Jordan, Lebanon, Syria and Afghanistan, the cutoff point of 10 was used [9–12]. Similarly, in UAE, studies have mostly used the cutoff point of 10 [13–16]. Recent studies have focused on exploring the methods of improving individual and environmental effects on disability related to depression by investigating its triggers and associations [2, 10]. Factors such as sleep duration and self-reported happiness are evidenced to be predictors for depression [17, 18]. Sleep duration can be considered a risk factor for lower well-being [19]. The number of sleep hours has also been found to have a causal relationship with depression as well as self-reported happiness [20]. For example, shorter sleep duration might lead to lower positive emotions such as self-reported happiness and showed stronger associations with negative emotional affect [21, 22]. One question that needs to be asked, however, is the nature and strength of the association between these three variables respectively. Sleep duration is determined by how many hours an individual sleeps over 24 h. Individuals with depression often have poor sleep status including abnormal REM (rapid eye movement), and insomnia (difficulty falling asleep or staying asleep). Abnormal REM may contribute to the development of altered emotional processing in depression [23]. It was reported that people with insomnia might have a ten-fold higher risk of developing depression in contrast to people who get a good night’s sleep [24]. Also, 75% of depressed individuals will have trouble falling asleep or staying asleep [25]. A bidirectional relationship between sleep duration and reported depression has been investigated [24, 26]. Such studies are unsatisfactory because they do not explore the association between major depressive disorder and sleep duration based on specific population demographics. In this study we are considering other sociodemographic factors such as age, gender, and marital status. Interest in studying happiness in the context of mental health status (such as depression and anxiety) has been growing recently [27]. 
The findings of some research papers suggest that self-reported happiness is a potential factor in the prevention and management of depression [27, 34]. Happiness is correlated to a person’s ability to approach situations in a less stressful manner and to an individual’s capacity to perceive and control their own feelings. This indicates that higher happiness levels may have a protective effect on depression [27]. Well-being has been defined as the combination of feeling good and functioning well [28]; conversely, quality of life could be defined as an individual’s satisfaction with his or her actual life compared with his or her ideal life. Evaluation of the quality of life depends on one’s value system [29]. Moreover, studies have shown that individuals use various chronically accessible and stable sources of information when making life satisfaction judgments [30]. Yet, well-being, quality of life and life satisfaction are multidimensional constructs which include complex cognitive evaluation processes and cannot be adequately assessed by using a single item or question [31]. Unlike quality of life, which typically requires detailed assessments to ascertain [32], happiness is easier to evaluate using a single item question [33]. Investigating the association between happiness and depression would add valuable information to public health research as the relationship between happiness and depression is observed to be bidirectional (i.e., one variable can predict the other, and people might not report depression but are more likely to report feeling unhappy) [34]. In addition, there are some factors which confound with sleep duration and self-reported happiness, such as age, gender, and marital status [36, 37]. For example, reviews reported that a prognosis of depression was improved with increasing age [38]. Conversely, another study showed that older age was associated with worsening of depressive symptoms [37]. Moreover, depressive symptoms could worsen among widowed individuals as their age increased [40]. Some research findings reflected that females are more likely to perceive depression compared to males [38, 41]. Other studies showed significant interplay between marital status and depressive symptoms [40, 42]. For example, being unmarried could lead to perceiving and developing depressive symptoms [43], while a worsening in depression symptoms among married individuals could lead to separation or being unmarried [42, 44]. So, further exploration is required to study the interplay of these variables together. The United Arab Emirates (UAE) is a high-income developed country which has undergone a rapid epidemiological transition from a traditional semi-nomadic society to a modern affluent society with a lifestyle characterized by over-consumption of energy-dense foods and low physical activity [47]. The UAE was ranked 15th out of 157 countries included in the WHO World Happiness Report with a score of 7.06 [48]. Despite this, depression has been identified as the third leading cause of disability in the UAE [3]. Nevertheless, there are few studies in the Gulf region examining the relationship between happiness, sleep duration and depression [36, 49, 50]. Therefore, studying the association between depression, happiness, and sleep in the United Arab Emirates (UAE) population would be of interest to the public health field in this part of the world. 
This study aimed to examine the relationship between depression, self-reported happiness, and self-reported sleep duration after adjusting for age and gender using the UAE Healthy Future Study (UAEHFS) pilot data. Materials and methods: Study design: This was a pilot prospective cohort study conducted from January 2015 to May 2015. The participants were recruited from two health care centers in Abu Dhabi. Participants completed an online questionnaire including questions on demographic data, PHQ-8 score, self-reported sleep duration, and self-reported happiness score. Physical measurements such as Body Mass Index (BMI) were collected during the recruitment visit. Participants' eligibility criteria and recruitment: Seven hundred and sixty-nine UAE nationals aged ≥ 18 years were invited to participate voluntarily in the pilot study. Volunteers were drawn from the general population, with inclusion criteria of age 18 years or older, ability to consent, UAE nationality, and residence in the Abu Dhabi Emirate. All potential participants were given participant information leaflets in either Arabic or English to read and had the opportunity to ask questions prior to completion of the recruitment process. Participants signed an informed consent form and were asked to complete a detailed questionnaire. However, 243 invited subjects did not respond, and their reasons for not participating were recorded [45]. Out of 517 participants who met the inclusion criteria, 487 (94.2%) filled out the questionnaire and were included in the statistical analysis [47]. 
Measures: The PHQ-8 questionnaire was used to measure the participants' depression levels [6, 39, 51]. This study used the cutoff point of 10, in line with common local practice, considering that there are no specific related research guidelines in the UAE [13–16]. The PHQ-8 score was dichotomized into no-depression (total PHQ-8 < 10) versus depression (≥ 10) [6, 51]. Happiness was measured using a single-question item that asked participants, "In general, how happy are you?". Those who responded extremely happy, very happy or moderately happy were grouped as "happy", while those responding moderately unhappy, very unhappy or extremely unhappy were grouped as "unhappy". Demographic variables such as age, gender, and marital status (single, married, and others) were also collected in the questionnaire. Sleep duration was collected as an ordinal variable (number of hours) by asking how many hours of sleep the participant gets in a 24-hour period, including naps. Sleep duration was categorized into five categories (see Table 1) to avoid the linearity assumption between sleep duration and depression status, and to allow average sleepers to be compared with shorter and longer sleepers. The questionnaire was translated from English into Arabic and back-translated into English to check for linguistic validity.

Table 1. Number (percentages) of the analyzed variables and median (IQR) for Age and BMI
Variable         Group     PHQ-8 < 10          PHQ-8 ≥ 10          P-value
Gender           Female    62 (83.8)           12 (16.2)           0.028a
                 Male      147 (93.6)          10 (6.4)
Sleep (hours)    < 6       39 (83)             8 (17)              0.284a
                 = 6       53 (93)             4 (7)
                 = 7       57 (95)             3 (5)
                 = 8       44 (89.8)           5 (10.2)
                 > 8       16 (88.9)           2 (11.1)
Marital Status   Single    120 (93)            9 (7)               0.205a
                 Married   15 (93.8)           1 (6.2)
                 Other     74 (86)             12 (14)
Happiness        Happy     204 (91.1)          20 (8.9)            0.136a
                 Unhappy   5 (71.4)            2 (28.6)
Total                      209 (90.5)          22 (9.5)
                           Median (IQR)        Median (IQR)        P-value
Age (years)                33.0 (25.0, 40.0)   24.5 (22.0, 35.0)   0.012b
BMI (kg/m2)                27.6 (23.4, 31.2)   28.8 (24.2, 33.0)   0.444b
Note: a Fisher's exact test p-values for categorical data; b Wilcoxon rank sum test for continuous data.

Body mass index (BMI) was obtained via physical measurement using the Tanita MC-780 MA Segmental Body Composition Analyzer [52]. All physical measurements were collected by a clinical research nurse. Additional details of the study recruitment have been previously described [47]. Statistical analysis: All 487 (94.2%) eligible participants were included in a sensitivity analysis (using 100 multiple imputations). After excluding participants with missing values, 231 (44.7%) were included in the primary statistical analysis. The PHQ-8 questionnaire was used in this statistical analysis with two additional possible response options for each question (i.e., P2A – P2H). 
These were "Do not know (UN)" and "Prefer not to answer (DA)", which were treated as missing values in the statistical analysis. Fisher's exact test was used to investigate the association between depression and categorical variables, such as sleep (hours), happiness, marital status, and gender. The Wilcoxon rank-sum test was performed to investigate differences in the distributions of age and BMI between the depressed and non-depressed groups. To examine the factors associated with depression, a multivariate logistic regression model was fitted with the dichotomized PHQ-8 score as the outcome. The predictors were happiness (Happy vs. Unhappy), age (linear), gender (Females vs. Males), BMI (linear), and marital status (categorical). Interquartile-range odds ratios (IQR-ORs) for continuous predictors and simple odds ratios (ORs) for categorical predictors were estimated with 95% confidence intervals (CIs), and corresponding p-values were calculated [53]. In the sensitivity analysis, a multivariate logistic regression model and a multivariate linear regression model were fitted using multiple imputations (see the sensitivity analysis section). All statistical tests were two-sided, and p < 0.05 was considered statistically significant. No adjustment for multiple comparisons was made. Statistical analyses were performed in R version 4.0.2 [54]. Sensitivity analysis: The primary statistical analysis included subjects with at least one non-missing value. However, in a sensitivity analysis, a multivariate imputation by chained equations (MICE) procedure was applied with Classification and Regression Trees (CART) to impute missing values [55]. 100 multiple imputations were used [56]. Rubin's rules were used to combine the multiply imputed estimates [57]. The pattern of missing values was investigated, and it was found that subjects who "did not want to answer" were not systematically different from those who answered the questionnaire. Therefore, "prefer not to answer" was recorded as a missing value in the statistical analysis and was treated as missing in the sensitivity analysis [55]. Results: Of 517 participants who consented to participate in the UAEHFS pilot study, 487 (94.2%) had completed questionnaire data [47]. Figure 1 describes all key phases from recruitment up to inclusion in the study and in the statistical analysis. After omitting missing values, 231 (44.7%) participants were included in the main statistical analysis, whereas 487 (94.2%) participants were included in the sensitivity analysis (using 100 multiple imputations) [73]. The median age of the UAEHFS pilot participants was 32.0 years (interquartile range: 24.0–39.0). The percentage of females included in the study was 32%, which represented the UAE population well [59]; therefore, we did not make any adjustments for gender bias. Fig. 1. Flow chart of the UAEHFS pilot study (describes all the key phases from recruitment up to inclusion in the study analysis). Note: this figure represents the data of the participants included in the main statistical analysis after omitting missing values. The number (%) of observed values of each categorical variable is presented by PHQ-8 category in Table 1, where the majority of the study participants (90.5%) had a PHQ-8 score of less than 10. 
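As a companion to the analysis plan described above, a minimal R sketch of the primary analysis is given below: dichotomising the PHQ-8 score at the cutoff of 10, the unadjusted Fisher and Wilcoxon comparisons, the multivariate logistic model, and the interquartile-range odds ratios. It is illustrative only; the data frame `dat` and all column names are assumptions made for the example and are not the study's actual variable names.

```r
# Illustrative sketch of the primary analysis; 'dat' and its column names are
# assumed for illustration, not taken from the study.
dat$depressed <- dat$phq8_total >= 10                  # dichotomise PHQ-8 at the cutoff of 10
dat$sleep_cat <- cut(dat$sleep_hours, breaks = c(-Inf, 5, 6, 7, 8, Inf),
                     labels = c("<6", "6", "7", "8", ">8"))
dat$sleep_cat <- relevel(dat$sleep_cat, ref = "7")     # 7 hours as the reference category

fisher.test(table(dat$gender, dat$depressed))          # categorical comparisons (as in Table 1)
wilcox.test(age ~ depressed, data = dat)               # age distribution by depression status

fit <- glm(depressed ~ happiness + age + bmi + gender + sleep_cat + marital,
           data = dat, family = binomial)              # complete-case logistic model (as in Table 2)

# Interquartile-range odds ratios for the continuous predictors: IQR-OR = exp(beta * IQR)
exp(coef(fit)["age"] * 15)    # IQR of age reported as 15 years
exp(coef(fit)["bmi"] * 8.7)   # IQR of BMI reported as 8.7 kg/m2
```

The IQR odds ratio simply rescales the per-unit coefficient to an interquartile-range contrast, which is why the age and BMI coefficients are multiplied by their reported IQR values before exponentiation.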
When categorical groups were compared across the PHQ-8 depression groups (< 10 versus ≥ 10), the only statistically significant difference was between females and males, with a Fisher's exact p-value = 0.028. Table 1 shows that there was a statistically significant difference in the age distribution across the depression groups (p-value = 0.012). There was no statistically significant difference in the BMI measurements between the PHQ-8 groups (p-value = 0.44). The result of the primary analysis showed that females had statistically significantly greater odds of reporting depression compared to males, OR = 3.2 (95% CI: 1.1, 8.9), p-value = 0.024 (Table 2). Furthermore, there were approximately 5-fold greater odds of reporting depression for unhappy than for happy individuals; however, this was not statistically significant (p-value = 0.10). Similarly, sleep duration and marital status were both not statistically significant (p-values of 0.284 and 0.205, respectively; Table 1). However, in the sensitivity analysis, people who reported a sleep duration of less than 6 h were more likely to report depression compared to people who reported 7 h of sleep, with an OR (95% CI) of 2.6 (1.0, 6.4), p-value = 0.040, and 1.136 (1.015, 1.272), p-value = 0.027.

Table 2. Results of the primary analysis of the fitted multivariate logistic regression model
Variable              OR (95% CI)        P-value b
Happiness - Happy     Reference
Happiness - Unhappy   5.5 (0.7, 42.2)    0.100
Age (linear)          0.3 (0.1, 1.0)a    0.056
BMI (linear)          1.8 (1.0, 3.3)a    0.064
Gender = Males        Reference
Gender = Females      3.2 (1.2, 8.9)     0.024
Sleep = 7             Reference
Sleep < 6             4.0 (0.9, 17.4)    0.061
Sleep = 6             1.3 (0.2, 6.9)     0.775
Sleep = 8             1.5 (0.3, 7.3)     0.585
Sleep > 8             1.2 (0.2, 9.2)     0.848
Married               Reference
Single                0.9 (0.3, 3.0)     0.8
Others                0.2 (0.02, 3.3)    0.3
Note: a Interquartile-range odds ratios for age and BMI, which compare the 3rd quartile with the 1st quartile; simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group), with corresponding 95% confidence intervals (95% CI). b Wald's p-values are presented for each variable.

To compute interquartile range (IQR) odds ratios for the continuous variables, the age and BMI variables were divided by their IQR values of 15 and 8.7, respectively [54, 56]. Table 2 shows that for a one interquartile-range increase in age and BMI, the IQR-OR was 0.34 (95% CI: 0.12, 1.0) and 1.8 (95% CI: 0.97, 3.3), respectively; both results were borderline and did not reach statistical significance (p-values of 0.056 and 0.064, respectively). The results of the sensitivity analysis using 100 multiple imputations (Table 3) were broadly similar to the results of the multivariate logistic regression analysis of the complete cases (Table 2). However, the happiness variable was statistically significant, OR = 5.1 (95% CI: 1.7, 15.7, p-value = 0.005), and there was a statistically significant difference for the sleep variable of less than six hours as compared with seven hours, OR = 2.6 (95% CI: 1.0, 6.4, p-value = 0.04). 
Supplementary Fig. 1 illustrates the percentages of missing values in the variables included in this statistical analysis. Marital status had the lowest proportion of missing values (5.3%), followed by BMI (12.7%) and the sleep variable (12.9%). The happiness variable had 21.1% missing values, and the PHQ-8 variables had 32.6–35.1% missing values. Supplementary Table 1 shows the percentages of "Prefer not to answer (DA)" and "Do not know (UN)" responses for the eight PHQ questions, which varied between 16% and 18.5% across the eight questions. The percentages of "Prefer not to answer (DA)" and "Do not know (UN)" for the happiness variable were 2.5% and 2.3%, respectively; these were also treated as missing values, although they were not missing at random and can be correlated with one of the PHQ-8 answers. However, this has been addressed in the multiple imputation analysis.

Table 3. Results of the sensitivity analysis using multivariate logistic regression models with a sample size N = 487
Variable              OR (95% CI)        P-value b
Happiness - Happy     Reference
Happiness - Unhappy   5.1 (1.7, 15.7)    0.004
Age (linear)          0.5 (0.3, 1.0)a    0.060
BMI (linear)          1.7 (1.1, 2.5)a    0.021
Gender = Males        Reference
Gender = Females      1.6 (0.9, 3.0)     0.154
Sleep = 7             Reference
Sleep < 6             2.6 (1.0, 6.4)     0.040
Sleep = 6             0.9 (0.3, 2.6)     0.887
Sleep = 8             0.9 (0.3, 2.4)     0.779
Sleep > 8             1.1 (0.3, 3.7)     0.853
Married               Reference
Single                0.9 (0.4, 2.1)     0.885
Others                0.7 (0.2, 2.9)     0.597
Note: a Interquartile-range odds ratios for age and BMI, which compare the 3rd quartile with the 1st quartile; simple odds ratios for the categorical predictors (happiness, gender, sleep and marital status) compare each group with the reference group (the largest group), with corresponding 95% confidence intervals (95% CI). b Wald's p-values are presented for each variable. Results were summarized using Rubin's rules.

An additional sensitivity analysis was performed using the ordinal PHQ-8 score as an outcome in a multivariate linear regression model (see supplementary Table 1). The results of this additional analysis were very similar to those of the sensitivity analysis in Table 3. Discussion: Overall, there is a lack of quantitative research examining the relationship between depression, perceived happiness, sleep duration and other confounding factors in the Gulf region [35, 36, 46, 49, 60]. The UAE Healthy Future Study is the first prospective cohort study of the UAE population and one of the few studies in the region which examines such relationships between happiness, sleep duration and quality, and depression using the PHQ-8. The evidence collected from this study is consistent with what has been published in the literature. For instance, in this study it was found that males were less likely to report depression symptoms than females, which is similar to what has been documented by other studies [38, 41], where women reported more depressive symptoms than men [60]. 
In addition, the results of this study revealed that older people have lower odds of reporting depression compared with younger people, suggesting a possible protective age effect. This study used the pilot data of the UAE Healthy Future Study, as recruitment into the main cohort study is still ongoing; however, we plan to use the main UAEHFS data to confirm the results and provide further understanding of these findings. The association between sleep duration and depression has been intensively studied in the literature [23, 24]. The findings of this study revealed that individuals who reported less than 6 h of sleep per 24 h were more likely to report being depressed compared to those who reported 7 h of sleep. This is similar to reports in the literature that participants with insufficient sleep showed a 62–179% increase in the prevalence of depression versus those sleeping 6 to 8 h per day and reporting sufficient sleep (P < 0.05) [61, 62]. Furthermore, self-reported happiness has been identified as a potential protective and management factor for depression [27]. The result of this study shows that unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals, and this aligns with what has been found in the literature [27]. Additionally, a small number of studies found that insufficient sleep is associated with lower happiness in healthy adults, using a single self-reported item to measure happiness [22]. Moreover, some longitudinal studies have found that next-day happiness is lower following a shorter sleep duration [63]. However, the associations between sleep and happiness have not been well explored in adults. Our findings will add to the available evidence and will help to bring novel data from the Gulf Cooperation Council (GCC) countries about the association between sleep duration, self-reported happiness, and depression. A further finding is that marital status can contribute to health and self-reported happiness in a bidirectional way [49]. Several studies have found an association between depression and marital status [63, 64]. There is an increased risk of reporting depression for divorced and separated people. It is frequently asserted that marriage is more beneficial for the mental health of men than women, but the evidence for this is far from clear-cut [65]. Single people have higher levels of depression than married people, and some studies have found that married people have a better mood than single people after accounting for age, gender, and education level [37, 40, 41, 66]. Research does not yet clarify whether gender differences in the prevalence of anxiety-mood disorders are greater among the married than the never-married or the previously married [65]. However, the mechanisms underlying the relationship between depression and marital status are not yet entirely clear and require further exploration. An association has been reported between BMI categories and depression [67, 68]. A higher BMI is a risk factor for developing depression [69], and being underweight also increases the risk of depression [70]. BMI was therefore considered in the data analysis of this study, which shows that as BMI increases, the odds of reporting depression also increase. 
This result is comparable to other research findings that participants with central obesity had an increased chance of depression [70,71,72]. The UAEHFS is a unique cohort study in the UAE and Gulf region as it allows researchers to investigate the association between disease outcomes and related risk factors [58]. In this study, we investigated the association between depression and sleep duration, self-reported happiness, BMI, and sociodemographic status using the UAEHFS pilot data, which represents a population that has not been studied before. The results of our study need to be confirmed in the main UAEHFS data. Strengths and Limitations: Missing values were omitted in the primary statistical analysis, which decreased the sample size and can lead to overfitting in the main finding [55]. The number of participants with PHQ-8 ≥ 10 (i.e., events) was only 22, which is a limitation of this data set because it does not allow a complex multivariate regression model to be fitted if we follow the statistical rule of thumb of "ten events per predictor" [56, 57, 73]. Thus, a sensitivity analysis was performed, and its result (Table 3) was approximately the same as the main finding (Table 2). Although some limitations were found in this study, the findings provide future directions for mental health research. Further investigation is needed with a larger sample size, using the main UAEHFS data, to obtain a better picture of the role of happiness, marital status, sleep, and sociodemographic variables in association with depression and mental health disorders. Conclusion: The results of this study indicate that females are more likely to report depression compared to males. Older age is associated with a decrease in self-reported depression. Unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals. BMI was positively associated with reporting depression. The result of the sensitivity analysis shows that individuals who sleep less than 6 h per day are more likely to report depression compared to those who sleep 7 h per day. The results of this study have potential value for researchers and public health professionals, as they present novel data on the PHQ-8 score in a healthy UAE population, which has not been explored before. Furthermore, the results of this study can help contribute to the knowledge base on the current and potential population mental health impact in the UAE and Gulf Region. Electronic supplementary material: Below is the link to the electronic supplementary material (Supplementary Material 1).
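To complement the multiple-imputation sensitivity analysis described in the Methods above, the following is a minimal R sketch of chained-equation imputation with CART, followed by pooling with Rubin's rules. It is a sketch under stated assumptions only: the data frame `dat` and its column names are hypothetical, and in practice the imputation model would be tuned to the actual questionnaire items.

```r
# Illustrative sketch of the multiple-imputation sensitivity analysis;
# 'dat' and its column names are assumed for illustration.
library(mice)

imp <- mice(dat, m = 100, method = "cart", seed = 1)   # chained equations with CART, 100 imputations

fits <- with(imp, glm(depressed ~ happiness + age + bmi + gender +
                        sleep_cat + marital, family = binomial))

summary(pool(fits))   # pool() combines the 100 sets of estimates using Rubin's rules
```

Pooling with Rubin's rules averages the coefficient estimates across imputations and inflates the standard errors to reflect between-imputation variability, which is why the pooled confidence intervals can differ from any single imputed data set.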
Background: The United Arab Emirates Healthy Future Study (UAEHFS) is one of the first large prospective cohort studies and one of the few studies in the region which examines causes and risk factors for chronic diseases among the nationals of the United Arab Emirates (UAE). The aim of this study is to investigate the eight-item Patient Health Questionnaire (PHQ-8) as a screening instrument for depression among the UAEHFS pilot participants. Methods: The UAEHFS pilot data were analyzed to examine the relationship between the PHQ-8 and possible confounding factors, such as self-reported happiness, and self-reported sleep duration (hours) after adjusting for age, body mass index (BMI), and gender. Results: Out of 517 participants who met the inclusion criteria, 487 (94.2%) participants filled out the questionnaire and were included in the statistical analysis using 100 multiple imputations. 231 (44.7%) were included in the primary statistical analysis after omitting the missing values. Participants' median age was 32.0 years (Interquartile Range: 24.0, 39.0). In total, 22 (9.5%) of the participant reported depression. Females have shown significantly higher odds of reporting depression than males with an odds ratio = 3.2 (95% CI:1.17, 8.88), and there were approximately 5-fold higher odds of reporting depression for unhappy than for happy individuals. For one interquartile-range increase in age and BMI, the odds ratio of reporting depression was 0.34 (95% CI: 0.1, 1.0) and 1.8 (95% CI: 0.97, 3.32) respectively. Conclusions: Females are more likely to report depression compared to males. Increasing age may decrease the risk of reporting depression. Unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals. A higher BMI was associated with a higher risk of reporting depression. In a sensitivity analysis, individuals who reported less than 6 h of sleep per 24 h were more likely to report depression than those who reported 7 h of sleep.
Introduction: Depression is defined as a set of disorders ranging from mild to moderate to severe [1]. It is widely recognized as a major public health problem worldwide [2]. Reports from the Global Burden of Diseases declared major depressive disorder as one of the top three causes of disability-adjusted life years [3, 4]. The effect of depression has been extensively studied on the individual’s daily functioning and productivity. This is directly reflected in increased economic costs per capita [3, 4]. For example, in Catalonia in 2006, the average annual cost of an adult with depression was close to 1800 Euros, and the total annual cost of depression was 735.4 million Euros. These costs were linked directly to primary care, mental health specialized care, hospitalization, and pharmacological care, as well as indirect costs due to productivity loss, temporary and permanent disability [4]. To measure the level of depression in non-clinical populations; clinical and epidemiological studies have often used the established and validated eight-item Patient Health Questionnaire scale (PHQ-8) instead of the nine- item Patient Health Questionnaire scale (PHQ-9) [5]. As has been confirmed by previous studies, the scale can detect major depression with sensitivity and a specificity of 88%, to classify subjects into depressed or non-depressed, respectively [6–8]. Regionally, in Jordan, Lebanon, Syria and Afghanistan, the cutoff point of 10 was used [9–12]. Similarly, in UAE, studies have mostly used the cutoff point of 10 [13–16]. Recent studies have focused on exploring the methods of improving individual and environmental effects on disability related to depression by investigating its triggers and associations [2, 10]. Factors such as sleep duration and self-reported happiness are evidenced to be predictors for depression [17, 18]. Sleep duration can be considered a risk factor for lower well-being [19]. The number of sleep hours has also been found to have a causal relationship with depression as well as self-reported happiness [20]. For example, shorter sleep duration might lead to lower positive emotions such as self-reported happiness and showed stronger associations with negative emotional affect [21, 22]. One question that needs to be asked, however, is the nature and strength of the association between these three variables respectively. Sleep duration is determined by how many hours an individual sleeps over 24 h. Individuals with depression often have poor sleep status including abnormal REM (rapid eye movement), and insomnia (difficulty falling asleep or staying asleep). Abnormal REM may contribute to the development of altered emotional processing in depression [23]. It was reported that people with insomnia might have a ten-fold higher risk of developing depression in contrast to people who get a good night’s sleep [24]. Also, 75% of depressed individuals will have trouble falling asleep or staying asleep [25]. A bidirectional relationship between sleep duration and reported depression has been investigated [24, 26]. Such studies are unsatisfactory because they do not explore the association between major depressive disorder and sleep duration based on specific population demographics. In this study we are considering other sociodemographic factors such as age, gender, and marital status. Interest in studying happiness in the context of mental health status (such as depression and anxiety) has been growing recently [27]. 
The findings of some research papers suggest that self-reported happiness is a potential factor in the prevention and management of depression [27, 34]. Happiness is correlated to a person’s ability to approach situations in a less stressful manner and to an individual’s capacity to perceive and control their own feelings. This indicates that higher happiness levels may have a protective effect on depression [27]. Well-being has been defined as the combination of feeling good and functioning well [28]; conversely, quality of life could be defined as an individual’s satisfaction with his or her actual life compared with his or her ideal life. Evaluation of the quality of life depends on one’s value system [29]. Moreover, studies have shown that individuals use various chronically accessible and stable sources of information when making life satisfaction judgments [30]. Yet, well-being, quality of life and life satisfaction are multidimensional constructs which include complex cognitive evaluation processes and cannot be adequately assessed by using a single item or question [31]. Unlike quality of life, which typically requires detailed assessments to ascertain [32], happiness is easier to evaluate using a single item question [33]. Investigating the association between happiness and depression would add valuable information to public health research as the relationship between happiness and depression is observed to be bidirectional (i.e., one variable can predict the other, and people might not report depression but are more likely to report feeling unhappy) [34]. In addition, there are some factors which confound with sleep duration and self-reported happiness, such as age, gender, and marital status [36, 37]. For example, reviews reported that a prognosis of depression was improved with increasing age [38]. Conversely, another study showed that older age was associated with worsening of depressive symptoms [37]. Moreover, depressive symptoms could worsen among widowed individuals as their age increased [40]. Some research findings reflected that females are more likely to perceive depression compared to males [38, 41]. Other studies showed significant interplay between marital status and depressive symptoms [40, 42]. For example, being unmarried could lead to perceiving and developing depressive symptoms [43], while a worsening in depression symptoms among married individuals could lead to separation or being unmarried [42, 44]. So, further exploration is required to study the interplay of these variables together. The United Arab Emirates (UAE) is a high-income developed country which has undergone a rapid epidemiological transition from a traditional semi-nomadic society to a modern affluent society with a lifestyle characterized by over-consumption of energy-dense foods and low physical activity [47]. The UAE was ranked 15th out of 157 countries included in the WHO World Happiness Report with a score of 7.06 [48]. Despite this, depression has been identified as the third leading cause of disability in the UAE [3]. Nevertheless, there are few studies in the Gulf region examining the relationship between happiness, sleep duration and depression [36, 49, 50]. Therefore, studying the association between depression, happiness, and sleep in the United Arab Emirates (UAE) population would be of interest to the public health field in this part of the world. 
This study aimed to examine the relationship between depression, self-reported happiness, and self-reported sleep duration after adjusting for age and gender using the UAE Healthy Future Study (UAEHFS) pilot data. Conclusion: The results of this study indicate that females are more likely to report depression compared to males. Older age is associated with a decrease in self-reported depression. Unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals. BMI was positively associated with reporting depression. The result of the sensitivity analysis shows that individuals who sleep less than 6 h per day are more likely to report depression compared to those who sleep 7 h per day. The results of this study have potential value for researchers and public health professionals, as they present novel data on the PHQ-8 score in a healthy UAE population, which has not been explored before. Furthermore, the results of this study can help contribute to the knowledge base on the current and potential population mental health impact in the UAE and Gulf Region.
Background: The United Arab Emirates Healthy Future Study (UAEHFS) is one of the first large prospective cohort studies and one of the few studies in the region which examines causes and risk factors for chronic diseases among the nationals of the United Arab Emirates (UAE). The aim of this study is to investigate the eight-item Patient Health Questionnaire (PHQ-8) as a screening instrument for depression among the UAEHFS pilot participants. Methods: The UAEHFS pilot data were analyzed to examine the relationship between the PHQ-8 and possible confounding factors, such as self-reported happiness, and self-reported sleep duration (hours) after adjusting for age, body mass index (BMI), and gender. Results: Out of 517 participants who met the inclusion criteria, 487 (94.2%) participants filled out the questionnaire and were included in the statistical analysis using 100 multiple imputations. 231 (44.7%) were included in the primary statistical analysis after omitting the missing values. Participants' median age was 32.0 years (Interquartile Range: 24.0, 39.0). In total, 22 (9.5%) of the participant reported depression. Females have shown significantly higher odds of reporting depression than males with an odds ratio = 3.2 (95% CI:1.17, 8.88), and there were approximately 5-fold higher odds of reporting depression for unhappy than for happy individuals. For one interquartile-range increase in age and BMI, the odds ratio of reporting depression was 0.34 (95% CI: 0.1, 1.0) and 1.8 (95% CI: 0.97, 3.32) respectively. Conclusions: Females are more likely to report depression compared to males. Increasing age may decrease the risk of reporting depression. Unhappy individuals have approximately 5-fold higher odds of reporting depression compared to happy individuals. A higher BMI was associated with a higher risk of reporting depression. In a sensitivity analysis, individuals who reported less than 6 h of sleep per 24 h were more likely to report depression than those who reported 7 h of sleep.
5,885
396
[ 72, 147, 499, 337, 139, 189, 18 ]
13
[ "depression", "sleep", "study", "analysis", "happiness", "participants", "reported", "age", "sleep duration", "duration" ]
[ "measure participants depression", "depression total", "cost adult depression", "prevalence depression versus", "annual cost depression" ]
null
[CONTENT] PHQ-8 | Depression | Sleep duration | Happiness | Self-reported happiness | Sociodemographic and marital status [SUMMARY]
null
[CONTENT] PHQ-8 | Depression | Sleep duration | Happiness | Self-reported happiness | Sociodemographic and marital status [SUMMARY]
[CONTENT] PHQ-8 | Depression | Sleep duration | Happiness | Self-reported happiness | Sociodemographic and marital status [SUMMARY]
[CONTENT] PHQ-8 | Depression | Sleep duration | Happiness | Self-reported happiness | Sociodemographic and marital status [SUMMARY]
[CONTENT] PHQ-8 | Depression | Sleep duration | Happiness | Self-reported happiness | Sociodemographic and marital status [SUMMARY]
[CONTENT] Male | Female | Humans | Adult | Happiness | Pilot Projects | Prospective Studies | Depression | United Arab Emirates | Sleep [SUMMARY]
null
[CONTENT] Male | Female | Humans | Adult | Happiness | Pilot Projects | Prospective Studies | Depression | United Arab Emirates | Sleep [SUMMARY]
[CONTENT] Male | Female | Humans | Adult | Happiness | Pilot Projects | Prospective Studies | Depression | United Arab Emirates | Sleep [SUMMARY]
[CONTENT] Male | Female | Humans | Adult | Happiness | Pilot Projects | Prospective Studies | Depression | United Arab Emirates | Sleep [SUMMARY]
[CONTENT] Male | Female | Humans | Adult | Happiness | Pilot Projects | Prospective Studies | Depression | United Arab Emirates | Sleep [SUMMARY]
[CONTENT] measure participants depression | depression total | cost adult depression | prevalence depression versus | annual cost depression [SUMMARY]
null
[CONTENT] measure participants depression | depression total | cost adult depression | prevalence depression versus | annual cost depression [SUMMARY]
[CONTENT] measure participants depression | depression total | cost adult depression | prevalence depression versus | annual cost depression [SUMMARY]
[CONTENT] measure participants depression | depression total | cost adult depression | prevalence depression versus | annual cost depression [SUMMARY]
[CONTENT] measure participants depression | depression total | cost adult depression | prevalence depression versus | annual cost depression [SUMMARY]
[CONTENT] depression | sleep | study | analysis | happiness | participants | reported | age | sleep duration | duration [SUMMARY]
null
[CONTENT] depression | sleep | study | analysis | happiness | participants | reported | age | sleep duration | duration [SUMMARY]
[CONTENT] depression | sleep | study | analysis | happiness | participants | reported | age | sleep duration | duration [SUMMARY]
[CONTENT] depression | sleep | study | analysis | happiness | participants | reported | age | sleep duration | duration [SUMMARY]
[CONTENT] depression | sleep | study | analysis | happiness | participants | reported | age | sleep duration | duration [SUMMARY]
[CONTENT] depression | life | happiness | studies | sleep | reported | depressive | sleep duration | duration | individual [SUMMARY]
null
[CONTENT] 95 | 95 ci | ci | group | table | analysis | values | quartile | statistically | value [SUMMARY]
[CONTENT] depression | report depression compared | results study | results | individuals | depression compared | report | report depression | compared | sleep day [SUMMARY]
[CONTENT] material | supplementary material | depression | supplementary | material supplementary | material supplementary material | supplementary material supplementary material | supplementary material supplementary | participants | analysis [SUMMARY]
[CONTENT] material | supplementary material | depression | supplementary | material supplementary | material supplementary material | supplementary material supplementary material | supplementary material supplementary | participants | analysis [SUMMARY]
[CONTENT] The United Arab Emirates Healthy Future Study | first | one | the United Arab Emirates ||| eight | PHQ-8 | UAEHFS [SUMMARY]
null
[CONTENT] 517 | 487 | 94.2% | 100 ||| 231 | 44.7% ||| 32.0 years | 24.0 | 39.0 ||| 22 | 9.5% ||| 3.2 | 95% | 8.88 | approximately 5-fold ||| one | BMI | 0.34 | 95% | CI | 0.1 | 1.0 | 1.8 | 95% | CI | 0.97 | 3.32 [SUMMARY]
[CONTENT] ||| ||| approximately 5-fold ||| BMI ||| less than 6 | 24 [SUMMARY]
[CONTENT] The United Arab Emirates Healthy Future Study | first | one | the United Arab Emirates ||| eight | PHQ-8 | UAEHFS ||| UAEHFS | PHQ-8 | hours | BMI ||| 517 | 487 | 94.2% | 100 ||| 231 | 44.7% ||| 32.0 years | 24.0 | 39.0 ||| 22 | 9.5% ||| 3.2 | 95% | 8.88 | approximately 5-fold ||| one | BMI | 0.34 | 95% | CI | 0.1 | 1.0 | 1.8 | 95% | CI | 0.97 | 3.32 ||| ||| ||| approximately 5-fold ||| BMI ||| less than 6 | 24 [SUMMARY]
[CONTENT] The United Arab Emirates Healthy Future Study | first | one | the United Arab Emirates ||| eight | PHQ-8 | UAEHFS ||| UAEHFS | PHQ-8 | hours | BMI ||| 517 | 487 | 94.2% | 100 ||| 231 | 44.7% ||| 32.0 years | 24.0 | 39.0 ||| 22 | 9.5% ||| 3.2 | 95% | 8.88 | approximately 5-fold ||| one | BMI | 0.34 | 95% | CI | 0.1 | 1.0 | 1.8 | 95% | CI | 0.97 | 3.32 ||| ||| ||| approximately 5-fold ||| BMI ||| less than 6 | 24 [SUMMARY]
Prevalence and predisposing factors of brachial plexus birth palsy in a regional hospital in Ghana: a five year retrospective study.
31312323
Brachial plexus birth injury is one of the challenges associated with maternal delivery, with varying prevalence between countries. Brachial plexus birth injury has negative health implications for children and also has socio-economic implications for families and the community as a whole. To treat brachial plexus birth injury, a multi-disciplinary treatment approach is recommended. Brachial plexus birth palsy (BPBP) is categorised into two types: upper plexus injury (Erb's palsy) and lower plexus injury (Klumpke's palsy). These categories present with various degrees of injury; less severe injuries respond well to treatment and in most instances may resolve on their own, but serious and complicated injuries require a multi-disciplinary treatment approach to treat and/or manage. Effective treatment and management depend on adequate knowledge of the disease condition, including the risk factors and prevalence of brachial plexus birth palsy within a particular population at a specific period in time. The aim of this study was to determine the risk factors and the hospital-based prevalence of brachial plexus birth palsy within a five-year period (2013-2017).
INTRODUCTION
A five-year retrospective study design was used. The study involved selection of all clients diagnosed with brachial plexus birth palsy, for whom gender, birth weight, complications at birth, type of brachial plexus injury suffered, mother's diabetes status, mother's age, birth attendant, side of affectation, presentation at birth and mode of delivery were recorded.
METHODS
The prevalence rate of brachial plexus birth palsy was 14.7% out of a total of three hundred and twenty (320) cases reviewed over the study period in the Volta Regional Hospital. Erb's palsy was found to be the modal type of BPBP in this population (93.6%).
RESULTS
There is a need to provide nationwide education on the risk factors that predispose babies to brachial plexus birth palsy. There is also a need for frequent antenatal visits by pregnant women; this will help in obtaining the best possible antenatal history and the diagnostic investigations needed to determine the birth weight and a safe mode of delivery.
CONCLUSION
[ "Birth Weight", "Delivery, Obstetric", "Female", "Ghana", "Humans", "Infant, Newborn", "Male", "Neonatal Brachial Plexus Palsy", "Pregnancy", "Prenatal Care", "Prevalence", "Retrospective Studies", "Risk Factors", "Severity of Illness Index" ]
6620083
Introduction
Brachial plexus birth palsy (BPBP) is a neurological condition that results from nerve injury to the brachial plexus; C5-T1, which supply the upper extremities [1, 2]. Al-Qattan et al. [3] classified BPBP into four main groups; group one shows injury to C5-C6 and results in paralysis of the shoulder and biceps muscles. Group two affects C5-C7, and typically results in paralysis of the shoulder, biceps and the forearm extensors [4]. Group three indicates injury to C5-T1 and results in complete paralysis of the affected upper limb. Group four affects C5-T1 and results in complete paralysis of the affected upper limb with Horner's syndrome. Zafeiriou and Psychogiou [5] further indicated that group one shows the least severity of injury whereas group four indicates the most severity of injury. Upper plexus injury is an injury to C5, C6 and sometimes C7 nerve of the cervical spine and lower plexus injury are injury to C8, and T1 nerves of the thoracic spine [6, 7]. The management of BPBP has many physical, psychological, financial, social and emotional implications on families and the country as a whole [8]. The prevalence of BPBP varies among countries. According to Foad et al. [9], the prevalence of BPBP in the United States is 1.5 in 1000 live births; shoulder dystocia had 100 times greater risk, an exceptionally large baby (>4.5 kg) had a fourteen times greater risk, and forceps delivery had a nine times greater risk for having a child with brachial plexus birth palsy. Having a twin or multiple birth mates and delivery by cesarean section had a protective effect against the occurrence of neonatal brachial plexus palsy. A study conducted by Evans-Jones and colleagues [10], reported the prevalence in the United Kingdom to be 0.42 per 1000 live births and the associated risk factors for BPBP was found to be shoulder dystocia, high birth weight and assisted delivery, but a considerably lower risk in infants delivered by caesarean section. The prevalence of children with BPBP in Nigeria over a ten year study period showed a persistent high prevalence, averaging 15.3% per year with associated problems such as birth asphyxia, humeral fracture, clavicular fracture and shoulder dislocation [4]. A study by Hamzat and colleagues [11] also reported the prevalence of BPBP in Accra, Ghana to be 27%, the results of the study further indicated that birth weight exceeding 4.0 kg, vertex presentation and vaginal delivery were the noticeable co-existing factors for BPBP in Accra. From the study done by Hamzat et al. [11] it was reported that only 55.2% of BPBP cases were referred for physiotherapy within one month after diagnosis and the treatment disposition for majority (88.1%) of the children were not documented and only 4.8% were formally discharged from physiotherapy. The prevalence of BPBP reported in Ghana focused on the Accra metropolis and there is no known data of BPBP in the Ho municipality, Volta region. The Ho municipality is a cosmopolitan urban city with health facilities serving the southern and central parts of the Volta region. BPBP is preventable when the risk factors and causes are known [12]. This work seeks to find out the prevalence and risk factors of BPBP in the Ho municipality of Ghana so as to advice policy makers on how to prevent the injury which will reduce the cost of treatment and increase productivity of the mothers. Managing of some children suffering from BPBP may need surgery and rehabilitation which poses financial issues to the family and loved ones [8]. 
Some children may have residual functional deficits that affect the way they function, as well as psychological or emotional problems [8]. The present study was undertaken to retrospectively investigate the prevalence of children who presented with BPBP and their predisposing factors in a regional hospital in Ghana. The clinical implication of this study is that there is not much information concerning the predisposing factors of BPBP in Ghana, apart from the study done by Hamzat et al. [11], which was conducted 10 years ago in Accra, an urban city in Ghana. Moreover, the study by Hamzat et al. [11] did not provide information on the risk factors of BPBP in Ghana. As a result, there is a paucity of information on the predisposing factors and the current prevalence of BPBP in a peri-urban city in Ghana. The aim of this study was to determine the prevalence and predisposing factors of BPBP over a five-year period (January 2013-December 2017) at a regional hospital in Ghana.
Methods
Study site: the study was conducted at the physiotherapy department of the Volta Regional Hospital (VRH), the main referral centre for the Volta region of Ghana. The hospital attends to over 177,281 inhabitants in the Ho municipality and its environs [13]. Study design: the study design employed was a retrospective quantitative study over a five-year period (January 2013 to December 2017). Ethical approval: the study was approved by the Research Ethics Committees of the University of Health and Allied Sciences and the VRH with protocol identification number UHAS-REC/A.5 (62) 17-18. Inclusion and exclusion criteria: all paediatric cases under 15 years who attended the physiotherapy clinic of the VRH over the study period were included. This age category was chosen because children within this age group are treated in the paediatric unit of the department. Paediatric cases above 15 years of age who reported for treatment over the study period were excluded. Acute brachial plexus injuries not associated with delivery were also excluded. Procedure for data collection: records of all newly diagnosed clients were retrieved to identify all paediatric conditions. Sampling of all children diagnosed with brachial plexus birth injuries was done, and this provided a means of recording their folder numbers for further investigation. The folder numbers obtained aided in the retrieval of individual folders. The study involved all children 15 years and below who attended the physiotherapy clinic of the VRH within the study period. The second stage of data collection included children who had been diagnosed with brachial plexus birth palsy within the study period. Demographic data and the clinical profile of the children and their mothers were recorded. Information such as the mother's birth history, age at delivery, maternal diabetic status, parity status and occupation was recorded. Other information retrieved from the client folders included the diagnosis, side of affectation, birth weight, place of delivery, complications at birth, antenatal care history, type of BPBP, and presentation at birth. The clients were grouped into two categories: Erb's palsy and Klumpke's palsy. Those who had good wrist control were placed in the Erb's classification, while those with minimal or no wrist control but good shoulder control were classified as having Klumpke's palsy. Those with no control of the shoulder and wrist were classified as having total BPBP. Each participant was assigned a code to ensure anonymity. The data was kept under lock and key, accessible only by the researchers. Data analysis: the data collected was entered into IBM SPSS version 20.0 and analysed using descriptive statistics.
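Because the analysis here is purely descriptive, the key quantities can be reproduced with a few lines of code. The sketch below is illustrative only and is written in R rather than SPSS (which the study actually used); the data frame `records` and its column names are hypothetical.

```r
# Illustrative sketch of the descriptive summaries (in R, not SPSS);
# 'records' and its column names are assumed for illustration.
n_bpbp  <- 47     # BPBP cases identified over the five-year period
n_total <- 320    # all paediatric cases reviewed

binom.test(n_bpbp, n_total)            # prevalence of about 14.7% with an exact 95% CI

table(records$bpbp_type)               # Erb's versus Klumpke's palsy counts
prop.table(table(records$gender))      # gender distribution among the BPBP cases
```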
Results
In all, the records of 47 participants who presented with BPBP were analysed. There were 28 (59.6%) female children with brachial plexus injuries and 17 (36.2%) male children, and the gender of two (4.2%) of the cases was not documented. The average birth weight was 3.9 ± 0.5 kg. Table 1 presents the gender distribution of children diagnosed with BPBP in the five-year period, with average birth weight and side of affectation. Table 2 shows the total number of paediatric conditions seen at the physiotherapy department of the Volta Regional Hospital during the study period. The weight category for children diagnosed with brachial plexus injuries was also recorded during the 5-year study period. Thirty-seven (37) of the children involved in the study had their birth weights recorded, and their weights were categorised as shown in Table 2. The remaining 17.8% (n=8) of the children involved in the study did not have their birth weight recorded. About half of the brachial plexus injuries (48.7%, n=18) fell within the 3.5-3.9 kg weight category, 13.5% (n=5) within the 3.0-3.4 kg category, and 24.3% (n=9) within the 4.5-5.0 kg weight category. Out of 320 paediatric cases reviewed during the study period, 47 (14.7%) were diagnosed with brachial plexus birth palsy. The rest of the paediatric cases comprised cerebral palsy (30.3%), musculoskeletal injuries (26.6%), acute flaccid paralysis (24%) and burns (4.4%). Brachial plexus birth palsy was the fourth (4th) leading cause of paediatric consultation and treatment at the physiotherapy department of the hospital.
Table 1. Gender distribution with average birth weight and side of affectation.
Table 2. Paediatric cases seen over the study period.
The clinical profile of the babies diagnosed with BPBP indicated that the vast majority, 44 (93%), had cephalic presentation at birth, 1 (2.1%) had breech presentation, and 2 (4.2%) of the cases were not documented. The majority of the children, 44 (93.6%), were delivered through spontaneous/normal vaginal delivery and 2 (4.3%) through caesarean section; however, the mode of delivery of one baby was not documented. 44 (93.6%) of the cases were diagnosed with Erb's palsy whereas 3 (6.4%) had Klumpke's palsy. Shoulder dystocia and prolonged labour were the most common complications documented, at 13 (27.7%) and 20 (42.5%) respectively. 2 (4.3%) of the children had fractures of the clavicle and humerus. Unfortunately, there were no records on 12 (5.5%) of the children. In Table 3, the ages of the mothers were compared with the average birth weight of their children and with complications such as shoulder dystocia and prolonged labour. Out of a total of nineteen (19) mothers whose ages were documented, only thirteen (13) of their children's birth weights were documented. Within the maternal age categories of 15-24, 25-34 and above 34 years, 71%, 75% and 80% of the children's birth weights were documented, respectively. Mothers whose ages were above 34 years recorded the highest average birth weight for their children (4.07 ± 0.60 kg), together with shoulder dystocia (n=6) and prolonged labour (n=6). The number of children born with shoulder dystocia was directly proportional to the mothers' age.
Table 3. Mothers' age category versus average birth weight, shoulder dystocia and prolonged labour. Data are presented as mean ± SD.
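The Table 3 comparison of maternal age bands against average birth weight and complications can be sketched as follows. Again this is illustrative only, in R with hypothetical column names, and simply mirrors the kind of mean ± SD and cross-tabulation summaries reported above.

```r
# Illustrative sketch of summarising birth weight by maternal age category, as in
# Table 3; 'records' and its column names are assumed for illustration.
records$age_cat <- cut(records$mother_age, breaks = c(14, 24, 34, Inf),
                       labels = c("15-24", "25-34", ">34"))

aggregate(birth_weight ~ age_cat, data = records,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))   # mean and SD per age band

with(records, table(age_cat, shoulder_dystocia))             # dystocia counts per age band
```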
Conclusion
In conclusion, the results of this study showed a BPBP prevalence of 15%, and most of the Erb's palsy cases had right arm affectation. The study also showed that the majority of the babies had a birth weight >4.0 kg, and this may account for the high number of shoulder dystocia cases. The presentations at birth were predominantly cephalic, and this may pose an increased risk of upper plexus injuries among the macrosomic children. The majority of babies diagnosed with Erb's palsy were macrosomic with shoulder dystocia. Recommendations: the researchers recommend the need for appropriate documentation in the hospital, and the need for healthcare professionals to be mindful of the complications and the risk factors of BPBP so as to provide immediate and appropriate care. What is known about this topic: Increased birth weight is a risk factor for BPBP; birth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour. What this study adds: Increasing mothers' age was found to be directly related to complications arising from the birth process, such as prolonged labour and shoulder dystocia; both nulliparous and multiparous mothers are at risk of delivering babies who may suffer birth injuries, because the average birth weight of children is the same for both groups of women; there is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years.
[ "What is known about this topic", "What this study adds" ]
[ "Increase birth weight is a risk factor for BPBP;\nBirth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour.", "Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia;\nBoth nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women;\nThere is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Brachial plexus birth palsy (BPBP) is a neurological condition that results from nerve injury to the brachial plexus; C5-T1, which supply the upper extremities [1, 2]. Al-Qattan et al. [3] classified BPBP into four main groups; group one shows injury to C5-C6 and results in paralysis of the shoulder and biceps muscles. Group two affects C5-C7, and typically results in paralysis of the shoulder, biceps and the forearm extensors [4]. Group three indicates injury to C5-T1 and results in complete paralysis of the affected upper limb. Group four affects C5-T1 and results in complete paralysis of the affected upper limb with Horner's syndrome. Zafeiriou and Psychogiou [5] further indicated that group one shows the least severity of injury whereas group four indicates the most severity of injury. Upper plexus injury is an injury to C5, C6 and sometimes C7 nerve of the cervical spine and lower plexus injury are injury to C8, and T1 nerves of the thoracic spine [6, 7]. The management of BPBP has many physical, psychological, financial, social and emotional implications on families and the country as a whole [8]. The prevalence of BPBP varies among countries. According to Foad et al. [9], the prevalence of BPBP in the United States is 1.5 in 1000 live births; shoulder dystocia had 100 times greater risk, an exceptionally large baby (>4.5 kg) had a fourteen times greater risk, and forceps delivery had a nine times greater risk for having a child with brachial plexus birth palsy. Having a twin or multiple birth mates and delivery by cesarean section had a protective effect against the occurrence of neonatal brachial plexus palsy. A study conducted by Evans-Jones and colleagues [10], reported the prevalence in the United Kingdom to be 0.42 per 1000 live births and the associated risk factors for BPBP was found to be shoulder dystocia, high birth weight and assisted delivery, but a considerably lower risk in infants delivered by caesarean section.\nThe prevalence of children with BPBP in Nigeria over a ten year study period showed a persistent high prevalence, averaging 15.3% per year with associated problems such as birth asphyxia, humeral fracture, clavicular fracture and shoulder dislocation [4]. A study by Hamzat and colleagues [11] also reported the prevalence of BPBP in Accra, Ghana to be 27%, the results of the study further indicated that birth weight exceeding 4.0 kg, vertex presentation and vaginal delivery were the noticeable co-existing factors for BPBP in Accra. From the study done by Hamzat et al. [11] it was reported that only 55.2% of BPBP cases were referred for physiotherapy within one month after diagnosis and the treatment disposition for majority (88.1%) of the children were not documented and only 4.8% were formally discharged from physiotherapy. The prevalence of BPBP reported in Ghana focused on the Accra metropolis and there is no known data of BPBP in the Ho municipality, Volta region. The Ho municipality is a cosmopolitan urban city with health facilities serving the southern and central parts of the Volta region. BPBP is preventable when the risk factors and causes are known [12]. This work seeks to find out the prevalence and risk factors of BPBP in the Ho municipality of Ghana so as to advice policy makers on how to prevent the injury which will reduce the cost of treatment and increase productivity of the mothers. Managing of some children suffering from BPBP may need surgery and rehabilitation which poses financial issues to the family and loved ones [8]. 
Some children may have residual functional deficits and thereby affecting the way they function, as well psychological or emotional problems to the child [8]. This present study was undertaken to retrospectively investigate the prevalence of children who presented with BPBP and their predisposing factors in a regional hospital in Ghana. The clinical implication for this study is that there is not much information concerning the predisposing factors of BPBP in Ghana, apart from the study done by Hamzat et al. [11] that was done 10 years ago in Accra, an urban city in Ghana. Moreover the study by Hamzat et al. [11] did not provide information on the risk factors of BPBP in Ghana. As a result, there is paucity of information on the predisposing factors and the current prevalence of BPBP in a peri-urban city in Ghana. The aim of this study was to determine the prevalence and predisposing factors of BPBP over a five year period (January 2013-December 2017) at a regional hospital in Ghana.", "Study site: the study was conducted at the physiotherapy department of the Volta Regional Hospital (VRH), the main referral centre for the Volta region of Ghana. The hospital attends to over 177,281 inhabitants in the Ho municipality and its environs [13].\nStudy design: the study design employed was a retrospective quantitative study over a five-year period (January 2013 to December 2017).\nEthical approval: the study was approved by the Research Ethics Committees of the University of Health and Allied Sciences and the VRH with protocol identification number UHAS-REC/A.5 (62) 17-18.\nInclusion and exclusion criterion: all paediatric cases under 15 years who attended the physiotherapy clinic of the VRH over the study period were recruited. This age category is chosen because children within this age group are treated in the paediatric unit of the department. Paediatric cases above 15 years of age who reported for treatment over the study period were excluded. Also acute brachial plexus injuries not associated with delivery were also excluded.\nProcedure for data collection: records of all newly diagnosed clients' were retrieved to identify all paediatric conditions. Sampling of all children diagnosed with birth brachial injuries were done, and this provided a means of recording their folder numbers for further investigation. The folder numbers obtained aided in the retrieval of individual folders. The study involved all children 15 years and below who attended the physiotherapy clinic of the VRH within the study period. The second stage of data collection included children that have been diagnosed with brachial plexus birth palsy within the study year period. Demographic data and clinical profile of children and their mothers were recorded. Information such as mother's history of birth, age at delivery, maternal diabetic status, parity status as well as the mothers' occupation was recorded. Other information retrieved included the diagnosis, side of affectation, birth weight, place of delivery, complication at birth, mothers' occupation, antenatal care history, type of BPBP, presentation at birth and others were retrieved from the client folders. The clients were grouped in two categories of Erb's and Klumpke's palsy. Those who had good wrist control were in Erb's classification while those with minimal to less wrist control but good shoulder control were classified as Klumpke's palsy. Those with no control of shoulders and wrist were classified as having total BPBP. 
Each participant was assigned a code to ensure anonymity. The data was kept under lock and key, accessible only by the researchers.\nData analysis: the data collected was entered into IBM SPSS version 20.0 analysed using descriptive statistics.", "In all, the records of 47 participants who presented for BPBP were analysed. There were 28 (59.6%) female children with brachial plexus injuries, 17 (36.2%) were male children, and the gender for two (4.2%) of the cases were not documented. The average birth weight was 3.9 ± 0.5 kg. Table 1 presents the gender distribution for children diagnosed with BPBP in the five-year period with average birth weight and side of affectation. Table 2 shows the total number of paediatric conditions seen at the physiotherapy department of the Volta Regional Hospital during the study period. The weight category for children diagnosed with brachial plexus injuries was also recorded during the 5-year study period. Thirty-seven (37) of the children involved in the study had their birth weights recorded and their weights were categorised as shown in Table 2. The remaining 17.8% (n=8) of the children involved in the study had their birth weight not recorded. There was about half of brachial plexus injuries 48.7% (n=18) falling within the weight category (3.5 - 3.9 kg), 13.5% (n=5) falling within 3.0-3.4 kg and 24.3% (n=9) falling within the 4.5-5.0 kg weight category. Out of 320 paediatric cases reviewed during the study period, 47 (14.7%) of them were diagnosed with brachial plexus birth palsy. The rest of the paediatric cases constituted cerebral palsy (30.3%), musculoskeletal injuries (26.6%), acute flaccid paralysis (24%) and burns (4.4%). Brachial plexus birth palsy was the fourth (4th) leading cause for paediatric consultation and treatment at the physiotherapy department of the hospital.\nGender distribution with their average birth weight and side of affectation\nPaediatric cases seen over the study period\nThe clinical profile of the babies diagnosed with BPBP indicated that, the vast majority 44 (93%) had cephalic presentation at birth, 1 (2.1%) had breech presentation and 2 (4.2%) of the cases were not documented. Majority of the children 44 (93.6%) were delivered through spontaneous/normal vaginal delivery, 2 (4.3%) through caesarean section. However, the mode of delivery of one baby was not documented. 44 (93.6%) of the cases were diagnosed with Erb's palsy whereas 3 (6.4%) had Klumpke's palsy. Shoulder dystocia and prolonged labour were the most common complications documented; 13 (27.7%) and 20 (42.5%) respectively. 2 (4.3%) of the children had fractures of the clavicle and humerus. Unfortunately, there were no records on 12 (5.5%) of the children. In Table 3, the ages of the mothers were compared to the average birth weight of their children and complications such as shoulder dystocia and prolonged labour. Out of a total of nineteen (19) of mothers' whose age were documented, only thirteen (13) of their children's birth weight were documented. Within the age categories of 15-24, 25-34 and above 34 years of mothers' age, 71%, 75% and 80% of children's birth weight respectively were documented. Mother's whose ages were above 34 years recorded the highest average birth weight (4.07 ± 0.60 kg) for their children together with shoulder dystocia, n=6 and prolonged labour n=6. 
The number of children born with shoulder dystocia was directly proportional to the mothers' age.\nMothers’ age category versus average birth weight, shoulder dystocia and prolonged labour\nData is presented in mean ± SD", "The aim of this study was to determine the prevalence of brachial plexus birth palsy (BPBP) over a five year period (January 2013-December 2017) and the predisposing factors of BPBP at a regional hospital in Ghana. The clinical profiles of 320 paediatric cases, within the study period were reviewed. The prevalence of BPBP was 14.7%, which is much lower than the prevalence (27.2%) recorded in a tertiary hospital in urban Ghana [11]. The difference in the prevalence could be attributed to the geographical difference (urban vs. rural). Majority (59.6%) of the cases recorded were females, with 80% of the cases affecting the right arm. This result is consistent with the findings of studies conducted by Toopchizadeh and colleagues [14] in Tabriz, Iran. Hamzat et al. [11] also found out, right arm palsy to be more prevalent than left arm palsy. In the Ghanaian, context, the high prevalence of right arm palsy could pose social problems for children because the Ghanaian culture tends to place much emphasis on the importance of the right hand in performing actions such as greeting, eating, handshaking and preparing meals [11]. The implication therefore is that children with right hand palsy may be seen as outcasts within the Ghanaian societal setup.\nMost (93.6%) of the study participants were cephalic spontaneous vaginal delivery with only two caesarean section (4.2%) and one having no documentation on the mode of delivery. About 95% of cephalic presentations were diagnosed with Erb's palsy, with 5% diagnosed with Klumpke's palsy; a breech presentation also resulted in Erb's palsy. These results confirm with a study conducted by Hale et al. [15], where they reported hyper abduction in breech presentation as a cause for Klumpke's palsy. For the current study, the high number of erb's palsy recorded for cephalic presentation will be due to the high birth weight of the children. Cephalic presentation with shoulder dystocia was recorded as the main mechanism putting a traction force on the upper brachial plexus [16]. The mechanism for Erb's palsy was reported to be due to, a traction force between the head and the shoulder, thereby putting a tension force on the nerve which might lead to tearing of some part or the entire upper brachial plexus [16]. This position supports the reason why majority of the children suffered from Erb's palsy in this study. According to Hehir and colleagues [16], cephalic presentation increase the risk for shoulder dystocia and causes a tension force on the upper brachial plexus, thereby leading to Erb's palsy in cases where the nerves are injured. The birth complications recorded in this study were shoulder dystocia (27.6%), prolonged labour (42.5%), clavicular fracture (2.1%) and humeral fracture (2.1%). These findings correspond to existing literature documenting shoulder dystocia and prolonged labour as the commonest complication for BPBP, with fractures recorded in some cases [17]. Similarly, Coroneos et al. [18] reported shoulder dystocia as the major cause for BPBP, but also found out that prolonged labour is the commonest risk factor resulting in BPBP. The recorded birth weight for the 47 BPBP participants used in this study were in the ranges of 3.0-5.0 kg with an average birth weight of 3.9 ± 0.5 kg. 
This finding shows that majority of the children's birth weight was within normal ranges (3.5-4.3 kg). This is consistent with existing literature which indicates that increase birth weight is a risk factor for BPBP [17].\nThe complications resulting in brachial plexus injury in this population is due to mothers not attending antenatal clinics, refusal to undergo caesarean section, delay in reporting to the hospital when labour sets in and lack of early referrals. Among the sampled population, majority of them suffered Erb's palsy (93.6%) with only 6.4% recording Klumpke's palsy. This shows that the number of children who suffered upper brachial injury was more than the lower plexus injuries. The clinical implication of this is that, clinicians and healthcare providers must be educated on the risk factors for brachial plexus injuries. Among the study population, only 19 mothers had their demographic details recorded. The average age and average birth weight of nulliparous mothers was 21 years and 3.8 kg respectively; whereas that for multiparous mothers was 34 years and 3.9 kg respectively. This shows that both nulliparous and multiparous mothers are at risk for delivering babies who may suffer birth injuries because the average birth weight is the same for both groups of women and therefore caregivers must provide the same level of care to prevent birth injuries. This study found out that age and parity status are risk free for brachial plexus injuries, since they are all at risk for the condition. This study contravenes other studies where mothers above 35 years and those below 16 years are considered as risk factors for brachial plexus injuries [19]. Birth weight above 4 kg was associated with an increased risk of shoulder dystocia and prolonged labour. Additionally, there was also an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years. This finding is consistent with the major risk factors reported by various studies [20, 21].\nLimitations: this study was limited by the lack of complete and adequate maternal data. There was lack of documentation on birth weight for ten of the children and therefore conclusions about the study population cannot be representative.", "In conclusion, the results of this study showed a BPBP prevalence of 15% and most of the Erb's palsy cases had right arm affectation. The study also showed that majority of the babies had a birth weight >4.0 kg and this may accounts for the high number of shoulder dystocia cases. The presentations at birth were predominantly cephalic and this may pose an increased risk of upper plexus injuries among the macrosomic children. Majority of babies diagnosed with Erb's palsy were macrosomic with shoulder dystocia. 
Recommendations: the researchers recommend the need for appropriate documentation in the hospital, and the need for healthcare professionals to be mindful of the complications and the risk factors of BPBP so as to provide immediate and appropriate care.\n What is known about this topic Increase birth weight is a risk factor for BPBP;\nBirth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour.\nIncrease birth weight is a risk factor for BPBP;\nBirth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour.\n What this study adds Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia;\nBoth nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women;\nThere is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years.\nIncreasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia;\nBoth nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women;\nThere is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years.", "Increase birth weight is a risk factor for BPBP;\nBirth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour.", "Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia;\nBoth nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women;\nThere is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null, "COI-statement" ]
[ "Brachial plexus birth palsy", "predisposing factors", "erb’s palsy", "Regional hospital", "Ghana" ]
Introduction: Brachial plexus birth palsy (BPBP) is a neurological condition that results from nerve injury to the brachial plexus; C5-T1, which supply the upper extremities [1, 2]. Al-Qattan et al. [3] classified BPBP into four main groups; group one shows injury to C5-C6 and results in paralysis of the shoulder and biceps muscles. Group two affects C5-C7, and typically results in paralysis of the shoulder, biceps and the forearm extensors [4]. Group three indicates injury to C5-T1 and results in complete paralysis of the affected upper limb. Group four affects C5-T1 and results in complete paralysis of the affected upper limb with Horner's syndrome. Zafeiriou and Psychogiou [5] further indicated that group one shows the least severity of injury whereas group four indicates the most severity of injury. Upper plexus injury is an injury to C5, C6 and sometimes C7 nerve of the cervical spine and lower plexus injury are injury to C8, and T1 nerves of the thoracic spine [6, 7]. The management of BPBP has many physical, psychological, financial, social and emotional implications on families and the country as a whole [8]. The prevalence of BPBP varies among countries. According to Foad et al. [9], the prevalence of BPBP in the United States is 1.5 in 1000 live births; shoulder dystocia had 100 times greater risk, an exceptionally large baby (>4.5 kg) had a fourteen times greater risk, and forceps delivery had a nine times greater risk for having a child with brachial plexus birth palsy. Having a twin or multiple birth mates and delivery by cesarean section had a protective effect against the occurrence of neonatal brachial plexus palsy. A study conducted by Evans-Jones and colleagues [10], reported the prevalence in the United Kingdom to be 0.42 per 1000 live births and the associated risk factors for BPBP was found to be shoulder dystocia, high birth weight and assisted delivery, but a considerably lower risk in infants delivered by caesarean section. The prevalence of children with BPBP in Nigeria over a ten year study period showed a persistent high prevalence, averaging 15.3% per year with associated problems such as birth asphyxia, humeral fracture, clavicular fracture and shoulder dislocation [4]. A study by Hamzat and colleagues [11] also reported the prevalence of BPBP in Accra, Ghana to be 27%, the results of the study further indicated that birth weight exceeding 4.0 kg, vertex presentation and vaginal delivery were the noticeable co-existing factors for BPBP in Accra. From the study done by Hamzat et al. [11] it was reported that only 55.2% of BPBP cases were referred for physiotherapy within one month after diagnosis and the treatment disposition for majority (88.1%) of the children were not documented and only 4.8% were formally discharged from physiotherapy. The prevalence of BPBP reported in Ghana focused on the Accra metropolis and there is no known data of BPBP in the Ho municipality, Volta region. The Ho municipality is a cosmopolitan urban city with health facilities serving the southern and central parts of the Volta region. BPBP is preventable when the risk factors and causes are known [12]. This work seeks to find out the prevalence and risk factors of BPBP in the Ho municipality of Ghana so as to advice policy makers on how to prevent the injury which will reduce the cost of treatment and increase productivity of the mothers. Managing of some children suffering from BPBP may need surgery and rehabilitation which poses financial issues to the family and loved ones [8]. 
Some children may have residual functional deficits and thereby affecting the way they function, as well psychological or emotional problems to the child [8]. This present study was undertaken to retrospectively investigate the prevalence of children who presented with BPBP and their predisposing factors in a regional hospital in Ghana. The clinical implication for this study is that there is not much information concerning the predisposing factors of BPBP in Ghana, apart from the study done by Hamzat et al. [11] that was done 10 years ago in Accra, an urban city in Ghana. Moreover the study by Hamzat et al. [11] did not provide information on the risk factors of BPBP in Ghana. As a result, there is paucity of information on the predisposing factors and the current prevalence of BPBP in a peri-urban city in Ghana. The aim of this study was to determine the prevalence and predisposing factors of BPBP over a five year period (January 2013-December 2017) at a regional hospital in Ghana. Methods: Study site: the study was conducted at the physiotherapy department of the Volta Regional Hospital (VRH), the main referral centre for the Volta region of Ghana. The hospital attends to over 177,281 inhabitants in the Ho municipality and its environs [13]. Study design: the study design employed was a retrospective quantitative study over a five-year period (January 2013 to December 2017). Ethical approval: the study was approved by the Research Ethics Committees of the University of Health and Allied Sciences and the VRH with protocol identification number UHAS-REC/A.5 (62) 17-18. Inclusion and exclusion criterion: all paediatric cases under 15 years who attended the physiotherapy clinic of the VRH over the study period were recruited. This age category is chosen because children within this age group are treated in the paediatric unit of the department. Paediatric cases above 15 years of age who reported for treatment over the study period were excluded. Also acute brachial plexus injuries not associated with delivery were also excluded. Procedure for data collection: records of all newly diagnosed clients' were retrieved to identify all paediatric conditions. Sampling of all children diagnosed with birth brachial injuries were done, and this provided a means of recording their folder numbers for further investigation. The folder numbers obtained aided in the retrieval of individual folders. The study involved all children 15 years and below who attended the physiotherapy clinic of the VRH within the study period. The second stage of data collection included children that have been diagnosed with brachial plexus birth palsy within the study year period. Demographic data and clinical profile of children and their mothers were recorded. Information such as mother's history of birth, age at delivery, maternal diabetic status, parity status as well as the mothers' occupation was recorded. Other information retrieved included the diagnosis, side of affectation, birth weight, place of delivery, complication at birth, mothers' occupation, antenatal care history, type of BPBP, presentation at birth and others were retrieved from the client folders. The clients were grouped in two categories of Erb's and Klumpke's palsy. Those who had good wrist control were in Erb's classification while those with minimal to less wrist control but good shoulder control were classified as Klumpke's palsy. Those with no control of shoulders and wrist were classified as having total BPBP. 
Each participant was assigned a code to ensure anonymity. The data was kept under lock and key, accessible only by the researchers. Data analysis: the data collected was entered into IBM SPSS version 20.0 analysed using descriptive statistics. Results: In all, the records of 47 participants who presented for BPBP were analysed. There were 28 (59.6%) female children with brachial plexus injuries, 17 (36.2%) were male children, and the gender for two (4.2%) of the cases were not documented. The average birth weight was 3.9 ± 0.5 kg. Table 1 presents the gender distribution for children diagnosed with BPBP in the five-year period with average birth weight and side of affectation. Table 2 shows the total number of paediatric conditions seen at the physiotherapy department of the Volta Regional Hospital during the study period. The weight category for children diagnosed with brachial plexus injuries was also recorded during the 5-year study period. Thirty-seven (37) of the children involved in the study had their birth weights recorded and their weights were categorised as shown in Table 2. The remaining 17.8% (n=8) of the children involved in the study had their birth weight not recorded. There was about half of brachial plexus injuries 48.7% (n=18) falling within the weight category (3.5 - 3.9 kg), 13.5% (n=5) falling within 3.0-3.4 kg and 24.3% (n=9) falling within the 4.5-5.0 kg weight category. Out of 320 paediatric cases reviewed during the study period, 47 (14.7%) of them were diagnosed with brachial plexus birth palsy. The rest of the paediatric cases constituted cerebral palsy (30.3%), musculoskeletal injuries (26.6%), acute flaccid paralysis (24%) and burns (4.4%). Brachial plexus birth palsy was the fourth (4th) leading cause for paediatric consultation and treatment at the physiotherapy department of the hospital. Gender distribution with their average birth weight and side of affectation Paediatric cases seen over the study period The clinical profile of the babies diagnosed with BPBP indicated that, the vast majority 44 (93%) had cephalic presentation at birth, 1 (2.1%) had breech presentation and 2 (4.2%) of the cases were not documented. Majority of the children 44 (93.6%) were delivered through spontaneous/normal vaginal delivery, 2 (4.3%) through caesarean section. However, the mode of delivery of one baby was not documented. 44 (93.6%) of the cases were diagnosed with Erb's palsy whereas 3 (6.4%) had Klumpke's palsy. Shoulder dystocia and prolonged labour were the most common complications documented; 13 (27.7%) and 20 (42.5%) respectively. 2 (4.3%) of the children had fractures of the clavicle and humerus. Unfortunately, there were no records on 12 (5.5%) of the children. In Table 3, the ages of the mothers were compared to the average birth weight of their children and complications such as shoulder dystocia and prolonged labour. Out of a total of nineteen (19) of mothers' whose age were documented, only thirteen (13) of their children's birth weight were documented. Within the age categories of 15-24, 25-34 and above 34 years of mothers' age, 71%, 75% and 80% of children's birth weight respectively were documented. Mother's whose ages were above 34 years recorded the highest average birth weight (4.07 ± 0.60 kg) for their children together with shoulder dystocia, n=6 and prolonged labour n=6. The number of children born with shoulder dystocia was directly proportional to the mothers' age. 
Mothers’ age category versus average birth weight, shoulder dystocia and prolonged labour Data is presented in mean ± SD Discussion: The aim of this study was to determine the prevalence of brachial plexus birth palsy (BPBP) over a five year period (January 2013-December 2017) and the predisposing factors of BPBP at a regional hospital in Ghana. The clinical profiles of 320 paediatric cases, within the study period were reviewed. The prevalence of BPBP was 14.7%, which is much lower than the prevalence (27.2%) recorded in a tertiary hospital in urban Ghana [11]. The difference in the prevalence could be attributed to the geographical difference (urban vs. rural). Majority (59.6%) of the cases recorded were females, with 80% of the cases affecting the right arm. This result is consistent with the findings of studies conducted by Toopchizadeh and colleagues [14] in Tabriz, Iran. Hamzat et al. [11] also found out, right arm palsy to be more prevalent than left arm palsy. In the Ghanaian, context, the high prevalence of right arm palsy could pose social problems for children because the Ghanaian culture tends to place much emphasis on the importance of the right hand in performing actions such as greeting, eating, handshaking and preparing meals [11]. The implication therefore is that children with right hand palsy may be seen as outcasts within the Ghanaian societal setup. Most (93.6%) of the study participants were cephalic spontaneous vaginal delivery with only two caesarean section (4.2%) and one having no documentation on the mode of delivery. About 95% of cephalic presentations were diagnosed with Erb's palsy, with 5% diagnosed with Klumpke's palsy; a breech presentation also resulted in Erb's palsy. These results confirm with a study conducted by Hale et al. [15], where they reported hyper abduction in breech presentation as a cause for Klumpke's palsy. For the current study, the high number of erb's palsy recorded for cephalic presentation will be due to the high birth weight of the children. Cephalic presentation with shoulder dystocia was recorded as the main mechanism putting a traction force on the upper brachial plexus [16]. The mechanism for Erb's palsy was reported to be due to, a traction force between the head and the shoulder, thereby putting a tension force on the nerve which might lead to tearing of some part or the entire upper brachial plexus [16]. This position supports the reason why majority of the children suffered from Erb's palsy in this study. According to Hehir and colleagues [16], cephalic presentation increase the risk for shoulder dystocia and causes a tension force on the upper brachial plexus, thereby leading to Erb's palsy in cases where the nerves are injured. The birth complications recorded in this study were shoulder dystocia (27.6%), prolonged labour (42.5%), clavicular fracture (2.1%) and humeral fracture (2.1%). These findings correspond to existing literature documenting shoulder dystocia and prolonged labour as the commonest complication for BPBP, with fractures recorded in some cases [17]. Similarly, Coroneos et al. [18] reported shoulder dystocia as the major cause for BPBP, but also found out that prolonged labour is the commonest risk factor resulting in BPBP. The recorded birth weight for the 47 BPBP participants used in this study were in the ranges of 3.0-5.0 kg with an average birth weight of 3.9 ± 0.5 kg. This finding shows that majority of the children's birth weight was within normal ranges (3.5-4.3 kg). 
This is consistent with existing literature which indicates that increase birth weight is a risk factor for BPBP [17]. The complications resulting in brachial plexus injury in this population is due to mothers not attending antenatal clinics, refusal to undergo caesarean section, delay in reporting to the hospital when labour sets in and lack of early referrals. Among the sampled population, majority of them suffered Erb's palsy (93.6%) with only 6.4% recording Klumpke's palsy. This shows that the number of children who suffered upper brachial injury was more than the lower plexus injuries. The clinical implication of this is that, clinicians and healthcare providers must be educated on the risk factors for brachial plexus injuries. Among the study population, only 19 mothers had their demographic details recorded. The average age and average birth weight of nulliparous mothers was 21 years and 3.8 kg respectively; whereas that for multiparous mothers was 34 years and 3.9 kg respectively. This shows that both nulliparous and multiparous mothers are at risk for delivering babies who may suffer birth injuries because the average birth weight is the same for both groups of women and therefore caregivers must provide the same level of care to prevent birth injuries. This study found out that age and parity status are risk free for brachial plexus injuries, since they are all at risk for the condition. This study contravenes other studies where mothers above 35 years and those below 16 years are considered as risk factors for brachial plexus injuries [19]. Birth weight above 4 kg was associated with an increased risk of shoulder dystocia and prolonged labour. Additionally, there was also an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years. This finding is consistent with the major risk factors reported by various studies [20, 21]. Limitations: this study was limited by the lack of complete and adequate maternal data. There was lack of documentation on birth weight for ten of the children and therefore conclusions about the study population cannot be representative. Conclusion: In conclusion, the results of this study showed a BPBP prevalence of 15% and most of the Erb's palsy cases had right arm affectation. The study also showed that majority of the babies had a birth weight >4.0 kg and this may accounts for the high number of shoulder dystocia cases. The presentations at birth were predominantly cephalic and this may pose an increased risk of upper plexus injuries among the macrosomic children. Majority of babies diagnosed with Erb's palsy were macrosomic with shoulder dystocia. Recommendations: the researchers recommend the need for appropriate documentation in the hospital, and the need for healthcare professionals to be mindful of the complications and the risk factors of BPBP so as to provide immediate and appropriate care. What is known about this topic Increase birth weight is a risk factor for BPBP; Birth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour. Increase birth weight is a risk factor for BPBP; Birth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour. 
What this study adds Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia; Both nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women; There is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years. Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia; Both nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women; There is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years. What is known about this topic: Increase birth weight is a risk factor for BPBP; Birth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour. What this study adds: Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia; Both nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women; There is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years. Competing interests: The authors declare no competing interests.
Background: Brachial plexus birth injury is one of the challenges associated with maternal delivery, with varying prevalence between countries. Brachial plexus birth injury poses negative health implications to children and also has socio-economic implications on families and the community as a whole. To treat brachial plexus birth injury, a multi-disciplinary treatment approach is recommended. Brachial plexus birth palsy (BPBP) is categorised into two-upper plexus injury (Erb's palsy) and lower plexus injury (Klumpke's palsy). These categories present with various degrees of injuries, with less severe injuries responding well to treatment and in most instances may resolve on their own, but serious and complicated injuries will require a multi-disciplinary treatment approach to treat and/or manage. Effective treatment and management depends on adequate knowledge of the disease condition. These include the risk factors and prevalence of brachial plexus birth palsy within a particular population at a specific period in time. The aim of this study was to determine the risk factors and the prevalence of a hospital based brachial plexus birth palsy within a five-year period (2013-2017). Methods: A five-year retrospective study design was used. The study involved selection of all clients' diagnosed with brachial plexus birth palsy, where their gender, birth weight, complications at birth, type of brachial plexus suffered, mothers' diabetes status, mother's age, birth attendant, side of affectation, presentation at birth and mode of delivery were recorded. Results: The prevalence rate of brachial plexus birth palsy was 14.7% out of a total of three hundred and twenty (320) cases reviewed over the study period in the Volta Regional Hospital. Erb's palsy was found to be the modal type of BPBP in this population (93.6%). Conclusions: There is the need to provide a nationwide education on the risk factors that predispose babies to brachial plexus birth palsy. There is also the need for frequent antenatal visit by pregnant women; this will help in the provision of best antenatal history, diagnostic investigation in determining the birth weight and safe mode of delivery.
Introduction: Brachial plexus birth palsy (BPBP) is a neurological condition that results from nerve injury to the brachial plexus; C5-T1, which supply the upper extremities [1, 2]. Al-Qattan et al. [3] classified BPBP into four main groups; group one shows injury to C5-C6 and results in paralysis of the shoulder and biceps muscles. Group two affects C5-C7, and typically results in paralysis of the shoulder, biceps and the forearm extensors [4]. Group three indicates injury to C5-T1 and results in complete paralysis of the affected upper limb. Group four affects C5-T1 and results in complete paralysis of the affected upper limb with Horner's syndrome. Zafeiriou and Psychogiou [5] further indicated that group one shows the least severity of injury whereas group four indicates the most severity of injury. Upper plexus injury is an injury to C5, C6 and sometimes C7 nerve of the cervical spine and lower plexus injury are injury to C8, and T1 nerves of the thoracic spine [6, 7]. The management of BPBP has many physical, psychological, financial, social and emotional implications on families and the country as a whole [8]. The prevalence of BPBP varies among countries. According to Foad et al. [9], the prevalence of BPBP in the United States is 1.5 in 1000 live births; shoulder dystocia had 100 times greater risk, an exceptionally large baby (>4.5 kg) had a fourteen times greater risk, and forceps delivery had a nine times greater risk for having a child with brachial plexus birth palsy. Having a twin or multiple birth mates and delivery by cesarean section had a protective effect against the occurrence of neonatal brachial plexus palsy. A study conducted by Evans-Jones and colleagues [10], reported the prevalence in the United Kingdom to be 0.42 per 1000 live births and the associated risk factors for BPBP was found to be shoulder dystocia, high birth weight and assisted delivery, but a considerably lower risk in infants delivered by caesarean section. The prevalence of children with BPBP in Nigeria over a ten year study period showed a persistent high prevalence, averaging 15.3% per year with associated problems such as birth asphyxia, humeral fracture, clavicular fracture and shoulder dislocation [4]. A study by Hamzat and colleagues [11] also reported the prevalence of BPBP in Accra, Ghana to be 27%, the results of the study further indicated that birth weight exceeding 4.0 kg, vertex presentation and vaginal delivery were the noticeable co-existing factors for BPBP in Accra. From the study done by Hamzat et al. [11] it was reported that only 55.2% of BPBP cases were referred for physiotherapy within one month after diagnosis and the treatment disposition for majority (88.1%) of the children were not documented and only 4.8% were formally discharged from physiotherapy. The prevalence of BPBP reported in Ghana focused on the Accra metropolis and there is no known data of BPBP in the Ho municipality, Volta region. The Ho municipality is a cosmopolitan urban city with health facilities serving the southern and central parts of the Volta region. BPBP is preventable when the risk factors and causes are known [12]. This work seeks to find out the prevalence and risk factors of BPBP in the Ho municipality of Ghana so as to advice policy makers on how to prevent the injury which will reduce the cost of treatment and increase productivity of the mothers. Managing of some children suffering from BPBP may need surgery and rehabilitation which poses financial issues to the family and loved ones [8]. 
Some children may have residual functional deficits and thereby affecting the way they function, as well psychological or emotional problems to the child [8]. This present study was undertaken to retrospectively investigate the prevalence of children who presented with BPBP and their predisposing factors in a regional hospital in Ghana. The clinical implication for this study is that there is not much information concerning the predisposing factors of BPBP in Ghana, apart from the study done by Hamzat et al. [11] that was done 10 years ago in Accra, an urban city in Ghana. Moreover the study by Hamzat et al. [11] did not provide information on the risk factors of BPBP in Ghana. As a result, there is paucity of information on the predisposing factors and the current prevalence of BPBP in a peri-urban city in Ghana. The aim of this study was to determine the prevalence and predisposing factors of BPBP over a five year period (January 2013-December 2017) at a regional hospital in Ghana. Conclusion: In conclusion, the results of this study showed a BPBP prevalence of 15% and most of the Erb's palsy cases had right arm affectation. The study also showed that majority of the babies had a birth weight >4.0 kg and this may accounts for the high number of shoulder dystocia cases. The presentations at birth were predominantly cephalic and this may pose an increased risk of upper plexus injuries among the macrosomic children. Majority of babies diagnosed with Erb's palsy were macrosomic with shoulder dystocia. Recommendations: the researchers recommend the need for appropriate documentation in the hospital, and the need for healthcare professionals to be mindful of the complications and the risk factors of BPBP so as to provide immediate and appropriate care. What is known about this topic Increase birth weight is a risk factor for BPBP; Birth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour. Increase birth weight is a risk factor for BPBP; Birth weight above 4 kg is associated with an increased risk of shoulder dystocia and prolonged labour. What this study adds Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia; Both nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women; There is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years. Increasing mothers' age was found to be directly related to the complications arising of the birth process such as prolonged labour and shoulder dystocia; Both nulliparous and multiparous mothers' are at risk for delivering babies who may suffer birth injuries because the average birth weight of children is the same for both groups of women; There is an associated increased risk of shoulder dystocia, high birth weight (macrosomia) and prolonged labour in mothers above 34 years.
Background: Brachial plexus birth injury is one of the challenges associated with maternal delivery, with varying prevalence between countries. Brachial plexus birth injury poses negative health implications to children and also has socio-economic implications on families and the community as a whole. To treat brachial plexus birth injury, a multi-disciplinary treatment approach is recommended. Brachial plexus birth palsy (BPBP) is categorised into two-upper plexus injury (Erb's palsy) and lower plexus injury (Klumpke's palsy). These categories present with various degrees of injuries, with less severe injuries responding well to treatment and in most instances may resolve on their own, but serious and complicated injuries will require a multi-disciplinary treatment approach to treat and/or manage. Effective treatment and management depends on adequate knowledge of the disease condition. These include the risk factors and prevalence of brachial plexus birth palsy within a particular population at a specific period in time. The aim of this study was to determine the risk factors and the prevalence of a hospital based brachial plexus birth palsy within a five-year period (2013-2017). Methods: A five-year retrospective study design was used. The study involved selection of all clients' diagnosed with brachial plexus birth palsy, where their gender, birth weight, complications at birth, type of brachial plexus suffered, mothers' diabetes status, mother's age, birth attendant, side of affectation, presentation at birth and mode of delivery were recorded. Results: The prevalence rate of brachial plexus birth palsy was 14.7% out of a total of three hundred and twenty (320) cases reviewed over the study period in the Volta Regional Hospital. Erb's palsy was found to be the modal type of BPBP in this population (93.6%). Conclusions: There is the need to provide a nationwide education on the risk factors that predispose babies to brachial plexus birth palsy. There is also the need for frequent antenatal visit by pregnant women; this will help in the provision of best antenatal history, diagnostic investigation in determining the birth weight and safe mode of delivery.
3,648
404
[ 29, 86 ]
8
[ "birth", "study", "weight", "bpbp", "birth weight", "children", "risk", "shoulder", "palsy", "mothers" ]
[ "child brachial plexus", "birth palsy bpbp", "brachial plexus injuries", "palsy bpbp neurological", "brachial plexus c5" ]
[CONTENT] Brachial plexus birth palsy | predisposing factors | erb’s palsy | Regional hospital | Ghana [SUMMARY]
[CONTENT] Brachial plexus birth palsy | predisposing factors | erb’s palsy | Regional hospital | Ghana [SUMMARY]
[CONTENT] Brachial plexus birth palsy | predisposing factors | erb’s palsy | Regional hospital | Ghana [SUMMARY]
[CONTENT] Brachial plexus birth palsy | predisposing factors | erb’s palsy | Regional hospital | Ghana [SUMMARY]
[CONTENT] Brachial plexus birth palsy | predisposing factors | erb’s palsy | Regional hospital | Ghana [SUMMARY]
[CONTENT] Brachial plexus birth palsy | predisposing factors | erb’s palsy | Regional hospital | Ghana [SUMMARY]
[CONTENT] Birth Weight | Delivery, Obstetric | Female | Ghana | Humans | Infant, Newborn | Male | Neonatal Brachial Plexus Palsy | Pregnancy | Prenatal Care | Prevalence | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Birth Weight | Delivery, Obstetric | Female | Ghana | Humans | Infant, Newborn | Male | Neonatal Brachial Plexus Palsy | Pregnancy | Prenatal Care | Prevalence | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Birth Weight | Delivery, Obstetric | Female | Ghana | Humans | Infant, Newborn | Male | Neonatal Brachial Plexus Palsy | Pregnancy | Prenatal Care | Prevalence | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Birth Weight | Delivery, Obstetric | Female | Ghana | Humans | Infant, Newborn | Male | Neonatal Brachial Plexus Palsy | Pregnancy | Prenatal Care | Prevalence | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Birth Weight | Delivery, Obstetric | Female | Ghana | Humans | Infant, Newborn | Male | Neonatal Brachial Plexus Palsy | Pregnancy | Prenatal Care | Prevalence | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] Birth Weight | Delivery, Obstetric | Female | Ghana | Humans | Infant, Newborn | Male | Neonatal Brachial Plexus Palsy | Pregnancy | Prenatal Care | Prevalence | Retrospective Studies | Risk Factors | Severity of Illness Index [SUMMARY]
[CONTENT] child brachial plexus | birth palsy bpbp | brachial plexus injuries | palsy bpbp neurological | brachial plexus c5 [SUMMARY]
[CONTENT] child brachial plexus | birth palsy bpbp | brachial plexus injuries | palsy bpbp neurological | brachial plexus c5 [SUMMARY]
[CONTENT] child brachial plexus | birth palsy bpbp | brachial plexus injuries | palsy bpbp neurological | brachial plexus c5 [SUMMARY]
[CONTENT] child brachial plexus | birth palsy bpbp | brachial plexus injuries | palsy bpbp neurological | brachial plexus c5 [SUMMARY]
[CONTENT] child brachial plexus | birth palsy bpbp | brachial plexus injuries | palsy bpbp neurological | brachial plexus c5 [SUMMARY]
[CONTENT] child brachial plexus | birth palsy bpbp | brachial plexus injuries | palsy bpbp neurological | brachial plexus c5 [SUMMARY]
[CONTENT] birth | study | weight | bpbp | birth weight | children | risk | shoulder | palsy | mothers [SUMMARY]
[CONTENT] birth | study | weight | bpbp | birth weight | children | risk | shoulder | palsy | mothers [SUMMARY]
[CONTENT] birth | study | weight | bpbp | birth weight | children | risk | shoulder | palsy | mothers [SUMMARY]
[CONTENT] birth | study | weight | bpbp | birth weight | children | risk | shoulder | palsy | mothers [SUMMARY]
[CONTENT] birth | study | weight | bpbp | birth weight | children | risk | shoulder | palsy | mothers [SUMMARY]
[CONTENT] birth | study | weight | bpbp | birth weight | children | risk | shoulder | palsy | mothers [SUMMARY]
[CONTENT] bpbp | injury | prevalence | factors | ghana | c5 | study | group | risk | factors bpbp [SUMMARY]
[CONTENT] study | vrh | control | data | period | 15 years | wrist | retrieved | paediatric | birth [SUMMARY]
[CONTENT] children | documented | birth | weight | birth weight | table | average birth | average | average birth weight | paediatric [SUMMARY]
[CONTENT] birth | risk | weight | birth weight | shoulder dystocia | dystocia | shoulder | prolonged labour | prolonged | labour [SUMMARY]
[CONTENT] birth | risk | weight | birth weight | study | bpbp | dystocia | shoulder dystocia | shoulder | children [SUMMARY]
[CONTENT] birth | risk | weight | birth weight | study | bpbp | dystocia | shoulder dystocia | shoulder | children [SUMMARY]
[CONTENT] ||| ||| ||| two | Klumpke ||| ||| ||| ||| five-year | 2013-2017 [SUMMARY]
[CONTENT] five-year ||| [SUMMARY]
[CONTENT] 14.7% | three hundred and twenty | 320 | the Volta Regional Hospital ||| 93.6% [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| two | Klumpke ||| ||| ||| ||| five-year | 2013-2017 ||| five-year ||| ||| ||| 14.7% | three hundred and twenty | 320 | the Volta Regional Hospital ||| 93.6% ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| two | Klumpke ||| ||| ||| ||| five-year | 2013-2017 ||| five-year ||| ||| ||| 14.7% | three hundred and twenty | 320 | the Volta Regional Hospital ||| 93.6% ||| ||| [SUMMARY]
Evolution of Estrogen Receptor Status from Primary Tumors to Metastasis and Serially Collected Circulating Tumor Cells.
32326116
The estrogen receptor (ER) can change expression between primary tumor (PT) and distant metastasis (DM) in breast cancer. A tissue biopsy reflects a momentary state at one location, whereas circulating tumor cells (CTCs) reflect real-time tumor progression. We evaluated ER-status during tumor progression from PT to DM and CTCs, and related the ER-status of CTCs to prognosis.
BACKGROUND
In a study of metastatic breast cancer, blood was collected at different timepoints. After CellSearch® enrichment, CTCs were captured on DropMount slides and evaluated for ER expression at baseline (BL) and after 1 and 3 months of therapy. Comparison of the ER-status of PT, DM, and CTCs at different timepoints was performed using the McNemar test. The primary endpoint was progression-free survival (PFS).
METHODS
Evidence of a shift from ER positivity to negativity between PT and DM was demonstrated (p = 0.019). We found strong evidence of similar shifts from PT to CTCs at different timepoints (p < 0.0001). ER-positive CTCs at 1 and 3 months were related to better prognosis.
RESULTS
A shift in ER-status from PT to DM/CTCs was demonstrated. ER-positive CTCs during systemic therapy might reflect the retention of a favorable phenotype that still responds to therapy.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Biomarkers, Tumor", "Breast Neoplasms", "Disease Progression", "Female", "Follow-Up Studies", "Humans", "Middle Aged", "Neoplasm Metastasis", "Neoplasm Staging", "Neoplasms", "Neoplastic Cells, Circulating", "Prognosis", "Receptors, Estrogen", "Survival Analysis" ]
7215368
1. Introduction
Breast cancer is the most common malignant disease in women, and although five-year survival is approaching 90%, 20–30% of women with an initially local disease develop metastatic disease within 10 years [1]. The median overall survival (OS) in women with metastatic breast cancer (MBC) is approximately two years, and only 25% survive beyond five years [2]. Evaluating the response of therapy in MBC can be difficult, especially in patients with non-measurable disease. Guidelines for therapy evaluation include diagnostic imaging, clinical assessment, and also evaluation of liquid-based tumor markers [2,3,4]. In breast cancer, the estrogen receptor (ER) and the human epidermal growth factor receptor 2 (HER2), which are predictive of the response to endocrine- and HER2-targeted therapy, respectively, are the most important treatment predictive markers. Since it was shown that the expression of ER and HER2 frequently changes between primary tumor (PT) and distant metastasis (DM) in breast cancer [5,6], it is important to re-evaluate the biomarker status in biopsies from metastatic lesions. Furthermore, there are indications that cancer cells lose their ER expression during endocrine treatment [7,8] as a mechanism of resistance. The choice of therapy in a metastatic setting should therefore preferably be based on the receptor status of the metastatic biopsy when available. Due to heterogeneity, biopsies from different sites should be evaluated. In clinical practice, however, it is not always feasible to biopsy multiple metastatic sites, and the metastasis might also be situated at a non-accessible site. Moreover, current diagnostic methods, i.e., needle biopsies, limit the amount of material available for further analyses. Liquid biopsies evaluating the presence of circulating tumor cells (CTCs) are non-invasive and easily accessible with a limited risk of complications in comparison to an invasive tissue biopsy. They can be repeated regularly, which allows for the serial monitoring of real-time tumor evolution/response to treatment. Additionally, a tissue biopsy reflects a momentary state of the tumor at one location in the body, whereas liquid-based markers reflect real-time tumor progression and are not dependent on repeated invasive tissue biopsies. CTCs can be detected in a liquid biopsy, and the CellSearch® system (©Menarini Silicon Biosystems, Inc 2020, USA) is considered the golden standard of CTC capturing and enumeration in MBC since it is the only Food and Drug Administration (FDA)-approved technology, so far. Multiple trials have shown CellSearch® enumerated CTCs as an independent prognostic factor in MBC [9,10,11,12,13,14,15]. CTCs have also been proposed to more accurately reflect tumor heterogeneity than biopsies of single metastases and are considered to originate from both the primary tumor and all metastatic sites [16]. Therefore, liquid biopsies enumerating CTCs might be a better proxy for the total metastatic burden. However, CTC enumeration alone is insufficient to predict the efficacy of therapy, and the addition of CTC tumor-marker expression might better predict the benefit of a therapy [17]. The primary aim of the present study was to characterize CTCs detected in blood samples from MBC patients and to evaluate ER-status during tumor progression from primary tumors to distant metastasis and CTCs, respectively. Our secondary aim was to relate the ER-status of CTCs to prognosis. 
We hypothesized that, by evaluating the ER-status of single CTCs, improved therapy guidance could be provided.
2. Results
For the study cohort with an available CTC status using CellSearch®, and with information on the ER-status in the PT (n = 147), the median age was 65 years (range 40–84 years; Figure 1, Table 1). In total, 63% (92/147) of the patients were given endocrine therapy as first-line systemic therapy, whereas 48% (70/147) had chemotherapy. Furthermore, 84% (123/147) of PTs and 86% (105/122) of DMs were ER-positive (Table 1). The median time from diagnosis of PT to diagnosis of MBC was five years (range 0–36 years).

2.1. CTC Status at Baseline and During Follow-Up

At baseline (BL), 53% (76/144) of the patients were classified in the inferior prognostic group according to the previously approved and validated cut-off of ≥5 CTCs [9,10]. The corresponding percentages after 1 and 3 months of first-line therapy were 29% (37/128) and 19% (21/113), respectively.

DropMount slides could be prepared for 113 cases (Figure 1). When the CTC-DropMount method was used for CTC characterization, the corresponding proportions of cases in the inferior prognostic group were 54% (35/65), 32% (7/22), and 52% (12/23) at BL, after 1 month, and after 3 months of therapy, respectively.

2.2. ER Status of CTCs

Applying the CTC-DropMount method for characterization, 26% (17/65) of CTC samples were ER-positive at BL, compared with 23% (5/22) after 1 month and 43% (10/23) after 3 months of therapy.

The distribution of the number of CTCs at BL and the ER-status are depicted in Figure 2a,b. For ER-positive CTC cases at BL (Figure 2a), the number of detected cells ranged from 1 to 47; cases were considered ER-positive if at least one CTC was positive for ER. With a cut-off of ≥5 CTCs, 13 cases would have been regarded as ER-positive at BL instead of 17. For ER-negative cases (Figure 2b), the number of evaluated CTCs varied from 1 to 1094.

2.3. Shift in the ER Status

Evidence of a shift in the ER-status from PT to DM was assessed with the McNemar test, showing a shift towards more ER-negative tumors in DM (Table 2; Figure 3a). In total, 74% (90/122) retained ER positivity from PT to DM, 12% (15/122) changed from ER-positive in PT to ER-negative in DM, and 3% (4/122) changed from ER-negative in PT to ER-positive in DM (Table 2, Figure 3a).

Interestingly, when separately comparing the ER-status of the PT with the ER-status of CTCs at different timepoints, evidence of a shift from ER positivity to negativity was strong (Table 2; Figure 3b–d). Among 50 patients with ER-positive PTs, only 16 had ER-positive CTCs at BL (p < 0.0001; Figure 3b). McNemar analysis comparing the ER-status in DM and of CTCs at different timepoints showed evidence of a negative association at BL and after 1 month, but not after 3 months (Table 2). Statistical power was low for a comparison of the ER-status of CTCs between follow-up timepoints (NBL = 17, N1 month = 19, and N3 months = 10; Table 2).

2.4. ER Status of CTCs and PFS

Among all patients with positive CTCs (≥1 CTC) at BL (n = 65), evidence of an association between ER positivity of CTCs at BL, measured by the CTC-DropMount method, and improved PFS was weak (hazard ratio (HR): 0.57, 95% confidence interval (CI): 0.29–1.1, p = 0.10; Figure 4a). The effect, as well as the evidence, was stronger in landmark analyses using the ER-status of CTCs measured at 1 month (HR: 0.25, 95% CI: 0.055–1.1, p = 0.066; Figure 4b) and at 3 months (HR: 0.33, 95% CI: 0.11–1.01, p = 0.052; Figure 4c).

Similar results were seen in patients with ≥5 CTCs at BL (n = 61). Thirteen cases were defined as having ER-positive and 48 cases as having ER-negative CTCs at BL, and ER positivity was associated with better PFS at BL (HR: 0.33, 95% CI: 0.13–0.83, p = 0.019), at 1 month (HR: 0.64, 95% CI: 0.071–5.8, p = 0.69), and at 3 months (HR: 0.32, 95% CI: 0.078–1.3, p = 0.12).
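The PT-to-DM comparison in Section 2.3 is a paired analysis of a 2×2 table, which is why the McNemar test, rather than a test of independence, is used. The sketch below illustrates that calculation with the counts reported above (90 positive/positive, 15 positive-to-negative, and 4 negative-to-positive); the remaining 13 of the 122 pairs are assumed here to be ER-negative in both PT and DM, which is inferred rather than stated explicitly. It uses statsmodels as an illustration and is not the authors' Stata analysis.

```python
# Paired comparison of ER-status in primary tumor (PT) vs. distant metastasis (DM).
# Counts 90/15/4 come from Section 2.3; the 13 double-negative pairs are inferred
# from the total of 122 and are an assumption of this sketch.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

#                 DM ER+   DM ER-
table = np.array([[90,      15],   # PT ER+
                  [4,       13]])  # PT ER-

# McNemar's test only uses the discordant cells (15 vs. 4);
# exact=True applies the binomial version, appropriate for small counts.
result = mcnemar(table, exact=True)
print(f"discordant pairs: {table[0, 1]} (ER+ -> ER-) vs. {table[1, 0]} (ER- -> ER+)")
print(f"McNemar exact p-value: {result.pvalue:.4f}")
```

With these counts, the imbalance between the two discordant cells (15 versus 4) is what drives the evidence for a shift towards ER negativity in DM.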
3. Discussion

The tissue biopsy reflects a momentary state of a tumor at one location in the body, whereas liquid-based markers reflect real-time tumor progression and are not dependent on repeated invasive tissue biopsies. Enumeration and characterization of CTCs therefore have potential as a less invasive diagnostic method to evaluate the progression of breast cancer. To our knowledge, this is one of only a few studies evaluating the ER-status of single CTCs in metastatic breast cancer using an immunofluorescence method [18]. In the present study, we also evaluated the ER-status in PT and DM and identified a shift towards more ER-negative cases in DM. A shift in the ER-status from PT to CTCs was also detected: out of 50 patients with an ER-positive PT, only 16 had ER-positive CTCs at BL.

In the present study, retained ER positivity of CTCs after initiation of systemic therapy was associated with better prognosis and thus seemed to reflect a favorable retained phenotype that still responded to therapy. Van de Ven et al. showed that, after neoadjuvant chemotherapy, there was a 2.5–17% change in the ER-status from positive to negative [19]; in metastatic breast cancer, 17% of cases were found to have a different ER-status in the metastasis compared to the primary tumor [20]. Hence, we hypothesized that, by evaluating the ER-status of single CTCs, improved therapy guidance could be provided.

Bock et al. reported results similar to ours, with a shift towards more ER negativity in CTCs: 37% ER-positive in PT vs. 8% ER-positive in CTCs, compared to our results of 84% ER-positive in PT vs. 26% ER-positive in CTCs at BL [18]. Other studies showed ER-status discordance between CTCs and primary or metastatic biopsies similar to our results, ranging from 30% to 60% [21,22]. Lindström et al. showed an altered ER-status in one-third of patients during tumor progression from PT to DM, but CTCs were not assessed [23]. We identified an altered ER-status between PT and DM in 16% (19/122) of cases in the present study, and four of them changed from ER-negative in PT to ER-positive in DM. Two of these patients had first-line endocrine therapy and had a shorter time to progression than the median PFS in the study cohort. Unfortunately, the ER-status of the corresponding baseline CTCs was not available for these patients, and consequently no conclusion on endocrine therapy resistance based on CTC data could be drawn. Furthermore, Lindström et al. stated that ER-positive recurrence had a better prognosis than ER-negative recurrence, irrespective of the ER-status of the PT [23].

In our study, the number of CTCs decreased as treatment continued, as confirmed by the decrease in CTC counts between timepoints. However, we could not determine whether this decrease was due to technical issues with the CTC-DropMount method or a true biological difference. Fewer CTCs were found through immunofluorescence using the CTC-DropMount method than using the CellSearch® method. This might have been because cells were attached to the walls of the CellSearch® cartridge or were detached from the slides during the immunofluorescent-staining process. With fewer CTCs detected, the possibility of finding CTCs positive for ER consequently decreases. However, direct use of the CellSearch® system for the evaluation of ER-status, as Paoletti et al. presented [24], would demand larger amounts of blood at a higher cost, which was not possible in this study. Other studies acknowledged technical difficulties concerning CTC characterization using fluorescence in situ hybridization [25]. With few cells to evaluate, this had the largest impact on the evaluation of ER-negative cases: the question is whether such cases were truly negative or were classified as such only because ER-positive cells were not detected.

According to the applied criteria for the definition of the ER-status of CTCs, only a few positive cells are needed for a sample to be considered ER-positive. Since CTCs disseminate from tumors, ER-negative cells could preferentially detach compared to ER-positive cells. This might explain the finding of a lower number of ER-positive CTC samples than ER-positive PTs. In line with our findings, Babayan et al. found 19% (9/16) ER-negative CTCs in patients who originally had ER-positive primary tumors [26]. As depicted in Figure 2, the number of detected cells ranged from 1 to 47 for samples classified as ER-positive, whereas the range was wider (1 to 1094) for samples classified as ER-negative. Ultimately, in cases where only one CTC was found, it is difficult to say whether this was or was not a true-negative sample.

The strengths of our study include a prospective design with a homogeneous cohort (all patients were included before they started first-line systemic therapy) and predefined timepoints for sampling; consequently, the ER-status of CTCs was measured longitudinally after the initiation of therapy. Furthermore, information on the ER-status of both PT and DM was available for most cases, enabling a thorough comparison throughout tumor evolution. Weaknesses are mainly attributed to the use of an in-house technique for capturing CTCs from the cartridge, which could potentially lower the cell recovery rate. Consequently, we had few cases to compare between the different timepoints, limiting the statistical power. Furthermore, there are few similar publications in the field with which to compare our results.
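The concern about ER-negative calls based on very few cells can be made concrete with a simple binomial argument: if a patient's true fraction of ER-positive CTCs is p, the probability that a sample of n evaluated CTCs contains no ER-positive cell is (1 − p)^n. The short sketch below tabulates this for a few values of p and n; the p values are illustrative assumptions, not estimates from this study, while the n values (1, the ≥5 cut-off, 47, and 1094) mirror the ranges reported above.

```python
# Probability of calling a sample ER-negative (zero ER-positive cells observed)
# when the true per-cell ER-positive fraction is p and n CTCs are evaluated.
# The chosen p values are illustrative assumptions, not study estimates.

def prob_false_negative(p: float, n: int) -> float:
    """P(no ER-positive cell among n evaluated CTCs) = (1 - p)^n."""
    return (1.0 - p) ** n

for p in (0.05, 0.20, 0.50):
    for n in (1, 5, 47, 1094):
        print(f"p = {p:.2f}, n = {n:4d}: P(all ER-negative) = {prob_false_negative(p, n):.3g}")
```

With only one or a few evaluated cells, the probability of missing a genuinely present ER-positive subpopulation remains substantial, which is the point made in the discussion above; with dozens or hundreds of cells it becomes negligible.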
[ "2.1. CTC Status at Baseline and During Follow-Up", "2.2. ER Status of CTCs", "2.3. Shift in the ER Status", "2.4. ER Status of CTCs and PFS", "4. Material and Methods", "Statistical Analysis" ]
[ "At baseline (BL), 53% (76/144) of the patients were classified in the inferior prognostic group according to the previously approved and validated cut-off of ≥5 CTCs [9,10]. The corresponding percentages after 1 and 3 months of the first-line of therapy were 29% (37/128) and 19% (21/113), respectively.\nDropMount slides could be prepared for 113 cases (Figure 1). When the CTC-DropMount method was used for CTC characterization, the corresponding numbers for cases included in the inferior prognostic group were 54% (35/65), 32% (7/22), and 52% (12/23) at BL, after 1 month and after 3 months of therapy, respectively.", "By again applying the CTC-DropMount method for characterization, 26% (17/65) of CTCs were ER-positive at BL compared with 23% (5/22) after 1 month and 43% (10/23) after 3 months of therapy.\nThe distribution of number of CTCs at BL and the ER-status are depicted in Figure 2a, b. For ER-positive CTC cases at BL (Figure 2a), the number of detected cells ranged from 1 to 47, where cases were considered ER-positive if at least one CTC was positive for ER. With a cut-off of ≥5CTCs, 13 cases would have been regarded as ER-positive at BL instead of 17. For ER-negative cases (Figure 2b), the number of evaluated CTCs varied from 1 to 1094.", "Evidence of a shift in the ER-status from PT to DM was assessed with the McNemar test, showing a shift towards more ER-negative tumors in DM (Table 2; Figure 3a). In total, 74% (90/122) retained ER positivity from PT to DM, 12% (15/122) changed from ER-positive in PT to ER-negative in DM, and 3% (4/122) changed from ER-negative to ER-positive DM (Table 2, Figure 3a).\nInterestingly, when separately comparing the ER-status of the PT with the ER-status of CTCs at different timepoints, evidence of a shift from ER positivity to negativity was strong (Table 2; Figure 3b–d). Among 50 patients with ER-positive PTs, only 16 had ER-positive CTCs at BL (p < 0.0001; Figure 3b).\nMcNemar analysis comparing the ER-status in DM and of CTCs at different timepoints showed evidence of a negative association at BL and after 1 month but not after 3 months (Table 2).\nStatistical power was low for a comparison of the ER-status of CTCs between follow-up timepoints (NBL = 17, N1 month = 19, and N3 months = 10; Table 2).", "Among all patients with positive CTCs (≥1 CTCs) at BL (n = 65), evidence of an association between the ER positivity of CTCs at BL, measured by the CTC-DropMount method, and improved PFS was weak (hazard ratio (HR): 0.57, 95% confidence interval (CI): 0.29–1.1, p = 0.10; Figure 4a). The effect, as well as the evidence, was stronger in landmark analyses using the ER-status of CTCs measured at 1 month (HR: 0.25, 95% CI: 0.055–1.1, p = 0.066; Figure 4b) and at 3 months (HR: 0.33, 95% CI: 0.11–1.01, p = 0.052; Figure 4c).\nSimilar results were seen in patients with ≥5 CTCs at BL (n = 61). Thirteen cases were defined as ER-positive and 48 cases as ER-negative CTCs at BL, and ER positivity was associated with better PFS at BL (HR: 0.33, 95% CI: 0.13–0.83, p = 0.019), at 1 month (HR: 0.64, 95% CI: 0.071–5.8, p = 0.69) and at 3 months (HR: 0.32, 95% CI: 0.078–1.3, p = 0.12).", "The study was previously described in detail [27,28]. In brief, 168 women with newly diagnosed MBC, previously untreated for metastatic disease, were included in a prospective observational study at Skåne University Hospital and Halmstad County Hospital (ClinicalTrials.gov NCT01322893). 
Ethical permission was granted by the Lund University Ethics Committee (LU 2010/135) and all patients signed informed consent forms. Briefly, inclusion criteria were MBC diagnosis, age ≥18 years, Eastern Cooperative Oncology Group (ECOG) performance status score 0–2, and predicted life expectancy of >2 months. Study protocol included blood sampling at baseline (BL) and during therapy at 1, 3, and 6 months. In the current study, DropMount slides from BL (n = 113) and after 1 (n = 60) and 3 months (n = 46) were used. In total, 63% (92/147) of patients received adjuvant endocrine treatment, and 48% (70/147) received adjuvant chemotherapy. The primary endpoint was progression-free survival (PFS).\nPatient and tumor characteristics (e.g., ER-status) were retrieved from clinical records and pathology reports. ER-status was routinely assessed by immunohistochemistry on formalin paraffin-embedded material (with commercially available antibodies) on surgical specimen from the primary tumors at the time of primary diagnosis and on diagnostic core biopsies from metastasis at the time of diagnosis of metastatic disease. All assessments were performed by board-certified pathologists and ≥10% were classified as ER-positive according to the European guidelines [2,29].\nAfter CTC enumeration using CellSearch®, cells were fixated using the CTC-DropMount method as previously described [30]. In brief, isolated CTCs were separated using a magnetic stand, and all nonmagnetic fluid was removed. The remaining cells were resuspended in phosphate-buffered saline (PBS), dropped onto Superfrost™ slides (ThermoScientific™, Germany), dried at 37 °C for 30 min, and fixated in pure methanol for 5 min. The slides were stored at –20 °C until analysis (i.e., CTC-DropMount method).\nWith at least five CTCs enumerated using CellSearch®, 113 cases were available as DropMount at BL. Using a previously described immunofluorescence method [30], cells were evaluated in the present study for ER expression at BL and after 1 and 3 months. A rabbit monoclonal anti-ERα antibody was used as a primary (Thermo Fisher Scientific, #9101S1210D) and an AlexaFluor488-labeled goat antirabbit as a secondary antibody (Life Technologies, #1423009). For CD45 staining, a rat monoclonal antibody was used as a primary (Thermo Fisher Scientific/Invitrogen, #MA5-17687) and an AlexaFluor647-labeled goat antirat as a secondary antibody (Thermo Fisher Scientific/Invitrogen, #A-21247). Cytokeratins 8, 9, and 19 were stained with phycoerythrin-labeled mouse monoclonal antibody provided as leftovers from the CellSearch®-kit (Menarini, #E491A). Blood spiked with cells from the MCF-7 cell line was used as a positive control for ER expression. Cells positive for 4’,6-diamidino-2-phenylindole (DAPI) and cytokeratins 8, 18, and 19 but negative for CD45 were considered to be CTCs. Samples were classified as ER-positive if at least one CTC was positive for ER. Assessment of the ER-status of CTCs was independently performed by EM and CF. The cells were scanned using a BX63 Upright Microscope (Olympus Corporation, LRI, Lund, Sweden) and CellSense Dimension software. The filters used were DAPI, Cy2/fluorescein isothiocyanate (FITC), Cy3/tetramethylrhodamine-5-isothiocyante (TRITC), and Cy5. Scans were performed at 20×, while objects of special interest were captured at 40×. A representative example is shown in Figure 5.\n Statistical Analysis The study followed the Reporting Recommendation for Tumor Marker (REMARK) criteria [31,32]. 
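The marker-based definitions above amount to two simple rules: a DAPI-positive, cytokeratin-positive, CD45-negative object is counted as a CTC, and a sample is called ER-positive if at least one of its CTCs stains for ER. A minimal sketch of that logic is shown below; the Cell data structure and its field names are hypothetical conveniences used only to illustrate the decision rules, not the image-analysis workflow actually used.

```python
# Minimal sketch of the classification rules described above.
# The Cell dataclass and its boolean fields are hypothetical;
# in practice these calls come from manual review of the immunofluorescence scans.
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    dapi_positive: bool          # nuclear stain present
    cytokeratin_positive: bool   # CK 8/18/19 (phycoerythrin channel)
    cd45_positive: bool          # leukocyte marker
    er_positive: bool            # anti-ERalpha staining

def is_ctc(cell: Cell) -> bool:
    """DAPI+ and cytokeratin+ but CD45- objects are counted as CTCs."""
    return cell.dapi_positive and cell.cytokeratin_positive and not cell.cd45_positive

def sample_er_status(cells: List[Cell]) -> str:
    """A sample is ER-positive if at least one CTC expresses ER."""
    ctcs = [c for c in cells if is_ctc(c)]
    if not ctcs:
        return "no CTCs"
    return "ER-positive" if any(c.er_positive for c in ctcs) else "ER-negative"

# Example: one leukocyte plus two CTCs, one of which is ER-positive.
sample = [
    Cell(True, False, True, False),   # CD45+ leukocyte, not a CTC
    Cell(True, True, False, False),   # CTC, ER-negative
    Cell(True, True, False, True),    # CTC, ER-positive
]
print(sample_er_status(sample))  # -> "ER-positive"
```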
Statistical Analysis

The study followed the Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK) criteria [31,32]. Kaplan–Meier plots and the log-rank test were used to compare PFS between patient groups. Survival analyses of ER expression measured at 1 and 3 months were performed as landmark analyses. Hazard ratios (HRs) were estimated with univariable Cox proportional hazards regression, and the proportional hazards assumption was checked graphically. Comparison of the ER-status of PT, DM, and CTCs at different timepoints was performed using the McNemar test. Stata version 16.1 (StataCorp, College Station, TX, USA) was used for all statistical analyses.
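A minimal Python analogue of this analysis plan (the study itself used Stata 16.1) is sketched below with the lifelines library: Kaplan–Meier estimation and a log-rank test comparing PFS by the ER-status of CTCs, plus a univariable Cox model whose exponentiated coefficient is the hazard ratio. The small data frame is entirely synthetic and only shows the intended structure; a landmark analysis would additionally restrict the data to patients still at risk at 1 or 3 months and restart the clock at that timepoint.

```python
# Sketch of the survival analysis with lifelines (the study used Stata 16.1).
# The data below are synthetic and illustrate structure only: PFS in months,
# an event indicator (1 = progression/death, 0 = censored), and the binary
# ER-status of CTCs (1 = at least one ER-positive CTC).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "pfs_months": [3, 5, 7, 8, 11, 14, 16, 20, 24, 30],
    "event":      [1, 1, 1, 0, 1,  1,  0,  1,  0,  0],
    "er_ctc_pos": [0, 0, 0, 0, 1,  0,  1,  1,  1,  1],
})

pos = df[df["er_ctc_pos"] == 1]
neg = df[df["er_ctc_pos"] == 0]

# Kaplan-Meier estimate for one group (plotting omitted here).
kmf = KaplanMeierFitter()
kmf.fit(pos["pfs_months"], event_observed=pos["event"], label="ER-positive CTCs")

# Log-rank test comparing PFS between the two groups.
lr = logrank_test(pos["pfs_months"], neg["pfs_months"],
                  event_observed_A=pos["event"], event_observed_B=neg["event"])
print(f"log-rank p-value: {lr.p_value:.3f}")

# Univariable Cox model: exp(coef) for er_ctc_pos is the hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="event")
cph.print_summary()
```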
5. Conclusions

We demonstrated evidence of a shift from ER positivity to negativity from the PT to the DM, as well as evidence of a similar shift in ER-status from the PT and DM to the CTCs. Using the CTC-DropMount method, ER positivity of CTCs at 1 and 3 months was related to a better prognosis. ER positivity of CTCs after the initiation of systemic therapy might reflect the retention of a favorable phenotype that still responds to therapy.
[ "circulating tumor cells", "breast cancer", "estrogen receptor: tumor progression", "metastasis" ]
1. Introduction: Breast cancer is the most common malignant disease in women, and although five-year survival is approaching 90%, 20–30% of women with an initially local disease develop metastatic disease within 10 years [1]. The median overall survival (OS) in women with metastatic breast cancer (MBC) is approximately two years, and only 25% survive beyond five years [2]. Evaluating the response of therapy in MBC can be difficult, especially in patients with non-measurable disease. Guidelines for therapy evaluation include diagnostic imaging, clinical assessment, and also evaluation of liquid-based tumor markers [2,3,4]. In breast cancer, the estrogen receptor (ER) and the human epidermal growth factor receptor 2 (HER2), which are predictive of the response to endocrine- and HER2-targeted therapy, respectively, are the most important treatment predictive markers. Since it was shown that the expression of ER and HER2 frequently changes between primary tumor (PT) and distant metastasis (DM) in breast cancer [5,6], it is important to re-evaluate the biomarker status in biopsies from metastatic lesions. Furthermore, there are indications that cancer cells lose their ER expression during endocrine treatment [7,8] as a mechanism of resistance. The choice of therapy in a metastatic setting should therefore preferably be based on the receptor status of the metastatic biopsy when available. Due to heterogeneity, biopsies from different sites should be evaluated. In clinical practice, however, it is not always feasible to biopsy multiple metastatic sites, and the metastasis might also be situated at a non-accessible site. Moreover, current diagnostic methods, i.e., needle biopsies, limit the amount of material available for further analyses. Liquid biopsies evaluating the presence of circulating tumor cells (CTCs) are non-invasive and easily accessible with a limited risk of complications in comparison to an invasive tissue biopsy. They can be repeated regularly, which allows for the serial monitoring of real-time tumor evolution/response to treatment. Additionally, a tissue biopsy reflects a momentary state of the tumor at one location in the body, whereas liquid-based markers reflect real-time tumor progression and are not dependent on repeated invasive tissue biopsies. CTCs can be detected in a liquid biopsy, and the CellSearch® system (©Menarini Silicon Biosystems, Inc 2020, USA) is considered the golden standard of CTC capturing and enumeration in MBC since it is the only Food and Drug Administration (FDA)-approved technology, so far. Multiple trials have shown CellSearch® enumerated CTCs as an independent prognostic factor in MBC [9,10,11,12,13,14,15]. CTCs have also been proposed to more accurately reflect tumor heterogeneity than biopsies of single metastases and are considered to originate from both the primary tumor and all metastatic sites [16]. Therefore, liquid biopsies enumerating CTCs might be a better proxy for the total metastatic burden. However, CTC enumeration alone is insufficient to predict the efficacy of therapy, and the addition of CTC tumor-marker expression might better predict the benefit of a therapy [17]. The primary aim of the present study was to characterize CTCs detected in blood samples from MBC patients and to evaluate ER-status during tumor progression from primary tumors to distant metastasis and CTCs, respectively. Our secondary aim was to relate the ER-status of CTCs to prognosis. 
We hypothesized that, by evaluating the ER-status of single CTCs, improved therapy guidance could be provided. 2. Results: For the study cohort with an available CTC status using CellSearch®, and with information on the ER-status in the PT (n = 147), the median age was 65 years (range 40–84 years; Figure 1, Table 1). In total, 63% (92/147) of the patients were given endocrine therapy as first-line systemic therapy, whereas 48% (70/147) had chemotherapy. Furthermore, 84% (123/147) of PTs and 86% (105/122) of DMs were ER-positive (Table 1). The median time from diagnosis of PT to diagnosis of MBC was five years (range 0–36 years). 2.1. CTC Status at Baseline and During Follow-Up At baseline (BL), 53% (76/144) of the patients were classified in the inferior prognostic group according to the previously approved and validated cut-off of ≥5 CTCs [9,10]. The corresponding percentages after 1 and 3 months of the first-line of therapy were 29% (37/128) and 19% (21/113), respectively. DropMount slides could be prepared for 113 cases (Figure 1). When the CTC-DropMount method was used for CTC characterization, the corresponding numbers for cases included in the inferior prognostic group were 54% (35/65), 32% (7/22), and 52% (12/23) at BL, after 1 month and after 3 months of therapy, respectively. At baseline (BL), 53% (76/144) of the patients were classified in the inferior prognostic group according to the previously approved and validated cut-off of ≥5 CTCs [9,10]. The corresponding percentages after 1 and 3 months of the first-line of therapy were 29% (37/128) and 19% (21/113), respectively. DropMount slides could be prepared for 113 cases (Figure 1). When the CTC-DropMount method was used for CTC characterization, the corresponding numbers for cases included in the inferior prognostic group were 54% (35/65), 32% (7/22), and 52% (12/23) at BL, after 1 month and after 3 months of therapy, respectively. 2.2. ER Status of CTCs By again applying the CTC-DropMount method for characterization, 26% (17/65) of CTCs were ER-positive at BL compared with 23% (5/22) after 1 month and 43% (10/23) after 3 months of therapy. The distribution of number of CTCs at BL and the ER-status are depicted in Figure 2a, b. For ER-positive CTC cases at BL (Figure 2a), the number of detected cells ranged from 1 to 47, where cases were considered ER-positive if at least one CTC was positive for ER. With a cut-off of ≥5CTCs, 13 cases would have been regarded as ER-positive at BL instead of 17. For ER-negative cases (Figure 2b), the number of evaluated CTCs varied from 1 to 1094. By again applying the CTC-DropMount method for characterization, 26% (17/65) of CTCs were ER-positive at BL compared with 23% (5/22) after 1 month and 43% (10/23) after 3 months of therapy. The distribution of number of CTCs at BL and the ER-status are depicted in Figure 2a, b. For ER-positive CTC cases at BL (Figure 2a), the number of detected cells ranged from 1 to 47, where cases were considered ER-positive if at least one CTC was positive for ER. With a cut-off of ≥5CTCs, 13 cases would have been regarded as ER-positive at BL instead of 17. For ER-negative cases (Figure 2b), the number of evaluated CTCs varied from 1 to 1094. 2.3. Shift in the ER Status Evidence of a shift in the ER-status from PT to DM was assessed with the McNemar test, showing a shift towards more ER-negative tumors in DM (Table 2; Figure 3a). 
In total, 74% (90/122) retained ER positivity from PT to DM, 12% (15/122) changed from ER-positive in PT to ER-negative in DM, and 3% (4/122) changed from ER-negative to ER-positive DM (Table 2, Figure 3a). Interestingly, when separately comparing the ER-status of the PT with the ER-status of CTCs at different timepoints, evidence of a shift from ER positivity to negativity was strong (Table 2; Figure 3b–d). Among 50 patients with ER-positive PTs, only 16 had ER-positive CTCs at BL (p < 0.0001; Figure 3b). McNemar analysis comparing the ER-status in DM and of CTCs at different timepoints showed evidence of a negative association at BL and after 1 month but not after 3 months (Table 2). Statistical power was low for a comparison of the ER-status of CTCs between follow-up timepoints (NBL = 17, N1 month = 19, and N3 months = 10; Table 2). Evidence of a shift in the ER-status from PT to DM was assessed with the McNemar test, showing a shift towards more ER-negative tumors in DM (Table 2; Figure 3a). In total, 74% (90/122) retained ER positivity from PT to DM, 12% (15/122) changed from ER-positive in PT to ER-negative in DM, and 3% (4/122) changed from ER-negative to ER-positive DM (Table 2, Figure 3a). Interestingly, when separately comparing the ER-status of the PT with the ER-status of CTCs at different timepoints, evidence of a shift from ER positivity to negativity was strong (Table 2; Figure 3b–d). Among 50 patients with ER-positive PTs, only 16 had ER-positive CTCs at BL (p < 0.0001; Figure 3b). McNemar analysis comparing the ER-status in DM and of CTCs at different timepoints showed evidence of a negative association at BL and after 1 month but not after 3 months (Table 2). Statistical power was low for a comparison of the ER-status of CTCs between follow-up timepoints (NBL = 17, N1 month = 19, and N3 months = 10; Table 2). 2.4. ER Status of CTCs and PFS Among all patients with positive CTCs (≥1 CTCs) at BL (n = 65), evidence of an association between the ER positivity of CTCs at BL, measured by the CTC-DropMount method, and improved PFS was weak (hazard ratio (HR): 0.57, 95% confidence interval (CI): 0.29–1.1, p = 0.10; Figure 4a). The effect, as well as the evidence, was stronger in landmark analyses using the ER-status of CTCs measured at 1 month (HR: 0.25, 95% CI: 0.055–1.1, p = 0.066; Figure 4b) and at 3 months (HR: 0.33, 95% CI: 0.11–1.01, p = 0.052; Figure 4c). Similar results were seen in patients with ≥5 CTCs at BL (n = 61). Thirteen cases were defined as ER-positive and 48 cases as ER-negative CTCs at BL, and ER positivity was associated with better PFS at BL (HR: 0.33, 95% CI: 0.13–0.83, p = 0.019), at 1 month (HR: 0.64, 95% CI: 0.071–5.8, p = 0.69) and at 3 months (HR: 0.32, 95% CI: 0.078–1.3, p = 0.12). Among all patients with positive CTCs (≥1 CTCs) at BL (n = 65), evidence of an association between the ER positivity of CTCs at BL, measured by the CTC-DropMount method, and improved PFS was weak (hazard ratio (HR): 0.57, 95% confidence interval (CI): 0.29–1.1, p = 0.10; Figure 4a). The effect, as well as the evidence, was stronger in landmark analyses using the ER-status of CTCs measured at 1 month (HR: 0.25, 95% CI: 0.055–1.1, p = 0.066; Figure 4b) and at 3 months (HR: 0.33, 95% CI: 0.11–1.01, p = 0.052; Figure 4c). Similar results were seen in patients with ≥5 CTCs at BL (n = 61). 
Thirteen cases were defined as ER-positive and 48 cases as ER-negative CTCs at BL, and ER positivity was associated with better PFS at BL (HR: 0.33, 95% CI: 0.13–0.83, p = 0.019), at 1 month (HR: 0.64, 95% CI: 0.071–5.8, p = 0.69) and at 3 months (HR: 0.32, 95% CI: 0.078–1.3, p = 0.12). 2.1. CTC Status at Baseline and During Follow-Up: At baseline (BL), 53% (76/144) of the patients were classified in the inferior prognostic group according to the previously approved and validated cut-off of ≥5 CTCs [9,10]. The corresponding percentages after 1 and 3 months of the first-line of therapy were 29% (37/128) and 19% (21/113), respectively. DropMount slides could be prepared for 113 cases (Figure 1). When the CTC-DropMount method was used for CTC characterization, the corresponding numbers for cases included in the inferior prognostic group were 54% (35/65), 32% (7/22), and 52% (12/23) at BL, after 1 month and after 3 months of therapy, respectively. 2.2. ER Status of CTCs: By again applying the CTC-DropMount method for characterization, 26% (17/65) of CTCs were ER-positive at BL compared with 23% (5/22) after 1 month and 43% (10/23) after 3 months of therapy. The distribution of number of CTCs at BL and the ER-status are depicted in Figure 2a, b. For ER-positive CTC cases at BL (Figure 2a), the number of detected cells ranged from 1 to 47, where cases were considered ER-positive if at least one CTC was positive for ER. With a cut-off of ≥5CTCs, 13 cases would have been regarded as ER-positive at BL instead of 17. For ER-negative cases (Figure 2b), the number of evaluated CTCs varied from 1 to 1094. 2.3. Shift in the ER Status: Evidence of a shift in the ER-status from PT to DM was assessed with the McNemar test, showing a shift towards more ER-negative tumors in DM (Table 2; Figure 3a). In total, 74% (90/122) retained ER positivity from PT to DM, 12% (15/122) changed from ER-positive in PT to ER-negative in DM, and 3% (4/122) changed from ER-negative to ER-positive DM (Table 2, Figure 3a). Interestingly, when separately comparing the ER-status of the PT with the ER-status of CTCs at different timepoints, evidence of a shift from ER positivity to negativity was strong (Table 2; Figure 3b–d). Among 50 patients with ER-positive PTs, only 16 had ER-positive CTCs at BL (p < 0.0001; Figure 3b). McNemar analysis comparing the ER-status in DM and of CTCs at different timepoints showed evidence of a negative association at BL and after 1 month but not after 3 months (Table 2). Statistical power was low for a comparison of the ER-status of CTCs between follow-up timepoints (NBL = 17, N1 month = 19, and N3 months = 10; Table 2). 2.4. ER Status of CTCs and PFS: Among all patients with positive CTCs (≥1 CTCs) at BL (n = 65), evidence of an association between the ER positivity of CTCs at BL, measured by the CTC-DropMount method, and improved PFS was weak (hazard ratio (HR): 0.57, 95% confidence interval (CI): 0.29–1.1, p = 0.10; Figure 4a). The effect, as well as the evidence, was stronger in landmark analyses using the ER-status of CTCs measured at 1 month (HR: 0.25, 95% CI: 0.055–1.1, p = 0.066; Figure 4b) and at 3 months (HR: 0.33, 95% CI: 0.11–1.01, p = 0.052; Figure 4c). Similar results were seen in patients with ≥5 CTCs at BL (n = 61). 
Thirteen cases were defined as ER-positive and 48 cases as ER-negative CTCs at BL, and ER positivity was associated with better PFS at BL (HR: 0.33, 95% CI: 0.13–0.83, p = 0.019), at 1 month (HR: 0.64, 95% CI: 0.071–5.8, p = 0.69) and at 3 months (HR: 0.32, 95% CI: 0.078–1.3, p = 0.12). 3. Discussion: The tissue biopsy reflects a momentary state of a tumor at one location in the body, whereas liquid-based markers reflect real-time tumor progression and are not dependent on repeated invasive tissue biopsies. Enumeration and characterization of CTCs has potential as a less invasive diagnostic method to evaluate the progression of breast cancer. To our knowledge, this is one of a few studies evaluating the ER-status of single CTCs in metastatic breast cancer using an immunofluorescence method [18]. In the present study, we also evaluated the ER-status in PT and DM and identified a shift towards more ER-negative cases in DM. A shift in the ER-status from PT to CTCs was also detected. Out of 50 patients with ER-positive PT, only 16 had ER-positive CTCs at BL. In the present study, retained ER positivity of CTCs after initiation of systemic therapy was associated with better prognosis and thus seemed to reflect a favorable retained phenotype that still responded to therapy. Van de Ven et al. showed that, after neoadjuvant chemotherapy, there was a 2.5–17% change in the ER-status from positive to negative [19]; in metastatic breast cancer, 17% of the cases were found to have a different ER-status in metastasis compared to the primary tumor [20]. Hence, we hypothesized that, by evaluating the ER-status of single CTCs, improved therapy guidance could be provided. Bock et al. reported results similar to ours, with a shift towards more ER negativity in CTCs; 37% ER-positive in PT vs. 8% ER-positive in CTCs compared to our results, with 84% ER-positive in PT vs. 26% ER-positive in CTCs at BL [18]. Other studies showed ER-status discordance between CTCs and primary or metastatic biopsies similar to our results, ranging from 30% to 60% [21,22]. Lindström et al. showed an altered ER-status in one-third of patients during tumor progression from PT to DM, but CTCs were not assessed [23]. We identified a 16% (19/122) altered ER-status in PT vs. DM in the present study and four of them changed from ER-negative in PT to ER-positive in DM. Two of them had first-line endocrine therapy and had shorter time to progression than the median PFS in the study cohort. Unfortunately, the ER-status of the corresponding baseline CTCs was not available for these patients and consequently no conclusion on endocrine therapy resistance based on CTC data could be drawn. Furthermore, Lindström et al. stated that ER-positive recurrence had better prognosis than ER-negative recurrence, irrespective of the ER-status of the PT [23]. In our study, as treatment continued, the number of CTCs decreased, and this was confirmed by a decrease in the number of CTCs between different timepoints. However, we could not determine whether this decrease was due to technical issues with the CTC-DropMount method or a true biological difference. Fewer CTCs were found through immunofluorescence using the CTC-DropMount method than using the CellSearch® method. This might have been because cells were attached to the walls of the CellSearch® cartridge or were detached from the slides during the immunofluorescent-staining process. With fewer CTCs detected, the possibility of finding CTCs positive for ER consequently decreases. 
However, direct use of the CellSearch® system for the evaluation of ER-status, as Paoletti et al. presented [24], would demand larger amounts of blood at a higher cost, which was not possible in this study. Other studies acknowledged technical difficulties concerning CTC characterization using fluorescence in situ hybridization [25]. With few cells to evaluate, this had the largest impact on the evaluation of ER-negative cases. The question was whether cases were truly negative or classified as such only because ER-positive cells were not detected. According to the applied criteria for the definition of the ER-status of CTCs, only a few positive cells are needed for a sample to be considered ER-positive. Since CTCs disseminate from tumors, ER-negative cells could preferentially detach compared to ER-positive cells. This might explain the finding of a lower number of ER-positive CTCs than ER-positive PT. In line with our findings, Babayan et al. found 19% (9/16) ER-negative CTCs in patients who originally had ER-positive primary tumors [26]. As depicted in Figure 2, the number of detected cells ranged from 1 to 47 for samples classified as ER-positive. Regarding samples classified as ER-negative, on the other hand, the range was wider (1 to 1094). Ultimately, in cases where only one CTC was found, it is difficult to say whether this was or was not a true-negative CTC sample. The strengths of our study include a prospective study design, with a homogeneous cohort (all patients were included before they started the first-line of systemic therapy), with predefined timepoints for sampling; consequently, the ER-status of CTCs was measured at longitudinal timepoints after the initiation of therapy. Furthermore, information on the ER-status of both PT and DM was available for most cases, enabling a thorough comparison throughout the tumor evolution. Weaknesses are mainly attributed to applying an in-house technique for capturing CTCs from the cartridge, which could potentially cause a lower cell recovery rate. Consequently, we had few cases to compare between the different timepoints, limiting the statistical power. Furthermore, there are few similar publications in the field with which to compare our results. 4. Material and Methods: The study was previously described in detail [27,28]. In brief, 168 women with newly diagnosed MBC, previously untreated for metastatic disease, were included in a prospective observational study at Skåne University Hospital and Halmstad County Hospital (ClinicalTrials.gov NCT01322893). Ethical permission was granted by the Lund University Ethics Committee (LU 2010/135) and all patients signed informed consent forms. Briefly, inclusion criteria were MBC diagnosis, age ≥18 years, Eastern Cooperative Oncology Group (ECOG) performance status score 0–2, and predicted life expectancy of >2 months. Study protocol included blood sampling at baseline (BL) and during therapy at 1, 3, and 6 months. In the current study, DropMount slides from BL (n = 113) and after 1 (n = 60) and 3 months (n = 46) were used. In total, 63% (92/147) of patients received adjuvant endocrine treatment, and 48% (70/147) received adjuvant chemotherapy. The primary endpoint was progression-free survival (PFS). Patient and tumor characteristics (e.g., ER-status) were retrieved from clinical records and pathology reports. 
ER-status was routinely assessed by immunohistochemistry on formalin-fixed, paraffin-embedded material (with commercially available antibodies) on surgical specimens from the primary tumors at the time of primary diagnosis and on diagnostic core biopsies from metastases at the time of diagnosis of metastatic disease. All assessments were performed by board-certified pathologists, and tumors with ≥10% stained tumor cells were classified as ER-positive according to the European guidelines [2,29]. After CTC enumeration using CellSearch®, cells were fixed using the CTC-DropMount method as previously described [30]. In brief, isolated CTCs were separated using a magnetic stand, and all nonmagnetic fluid was removed. The remaining cells were resuspended in phosphate-buffered saline (PBS), dropped onto Superfrost™ slides (ThermoScientific™, Germany), dried at 37 °C for 30 min, and fixed in pure methanol for 5 min. The slides were stored at –20 °C until analysis (i.e., the CTC-DropMount method). With at least five CTCs enumerated using CellSearch®, 113 cases were available as DropMount at BL. Using a previously described immunofluorescence method [30], cells were evaluated in the present study for ER expression at BL and after 1 and 3 months. A rabbit monoclonal anti-ERα antibody was used as the primary antibody (Thermo Fisher Scientific, #9101S1210D) and an AlexaFluor488-labeled goat anti-rabbit antibody as the secondary antibody (Life Technologies, #1423009). For CD45 staining, a rat monoclonal antibody was used as the primary antibody (Thermo Fisher Scientific/Invitrogen, #MA5-17687) and an AlexaFluor647-labeled goat anti-rat antibody as the secondary antibody (Thermo Fisher Scientific/Invitrogen, #A-21247). Cytokeratins 8, 18, and 19 were stained with a phycoerythrin-labeled mouse monoclonal antibody provided as leftovers from the CellSearch® kit (Menarini, #E491A). Blood spiked with cells from the MCF-7 cell line was used as a positive control for ER expression. Cells positive for 4′,6-diamidino-2-phenylindole (DAPI) and cytokeratins 8, 18, and 19 but negative for CD45 were considered to be CTCs. Samples were classified as ER-positive if at least one CTC was positive for ER. Assessment of the ER-status of CTCs was independently performed by EM and CF. The cells were scanned using a BX63 Upright Microscope (Olympus Corporation, LRI, Lund, Sweden) and CellSense Dimension software. The filters used were DAPI, Cy2/fluorescein isothiocyanate (FITC), Cy3/tetramethylrhodamine-5-isothiocyanate (TRITC), and Cy5. Scans were performed at 20×, while objects of special interest were captured at 40×. A representative example is shown in Figure 5. Statistical Analysis: The study followed the Reporting Recommendation for Tumor Marker (REMARK) criteria [31,32]. Kaplan–Meier plots and the log-rank test were used to compare PFS between patient groups. Survival analysis of ER expression measured at 1 and 3 months was performed with landmark analysis. Hazard ratios (HRs) were quantified by the univariable Cox proportional hazards-regression model, and proportional hazards assumptions were graphically checked. Comparison of the ER-status of PT, DM, and CTCs at different timepoints was performed using the McNemar test. Stata version 16.1 (StataCorp, College Station, TX, USA) was used for statistical analyses. 5. Conclusions: We demonstrated evidence of a shift from ER positivity to negativity in the PT compared to DM, as well as evidence of a similar shift in the ER-status of CTCs from PT and DM. Using the CTC-DropMount method, the ER positivity of CTCs at 1 and 3 months was related to better prognosis. The ER positivity of CTCs after the initiation of systemic therapy might reflect the retention of a favorable phenotype that still responds to therapy.
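The survival analyses described under Statistical Analysis were run in Stata 16.1; the sketch below is only an illustrative translation of the same steps (Kaplan–Meier/log-rank comparison, univariable Cox model with a graphical proportional-hazards check, and a simplified landmark analysis) into Python with the lifelines package. The input file and column names (pfs_months, progressed, er_ctc_bl, er_ctc_3m) are hypothetical.

```python
# Illustrative sketch only: the published analyses were run in Stata 16.1.
# File and column names are hypothetical placeholders for a per-patient dataset.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ctc_cohort.csv")  # hypothetical input: one row per patient

# Kaplan-Meier curves and log-rank test: ER-positive vs ER-negative CTCs at baseline
er_pos = df["er_ctc_bl"] == 1
kmf = KaplanMeierFitter()
kmf.fit(df.loc[er_pos, "pfs_months"], df.loc[er_pos, "progressed"], label="ER-positive CTCs")
kmf.plot_survival_function()
kmf.fit(df.loc[~er_pos, "pfs_months"], df.loc[~er_pos, "progressed"], label="ER-negative CTCs")
kmf.plot_survival_function()
lr = logrank_test(df.loc[er_pos, "pfs_months"], df.loc[~er_pos, "pfs_months"],
                  event_observed_A=df.loc[er_pos, "progressed"],
                  event_observed_B=df.loc[~er_pos, "progressed"])
print(f"log-rank p = {lr.p_value:.3f}")

# Univariable Cox model (HR for ER-positive CTCs at baseline) with a graphical
# check of the proportional-hazards assumption
cox_df = df[["pfs_months", "progressed", "er_ctc_bl"]]
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="pfs_months", event_col="progressed")
cph.check_assumptions(cox_df, show_plots=True)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])

# Simplified landmark analysis at 3 months: keep patients still progression-free
# at the landmark and measure follow-up time from the landmark onward
landmark = 3
lm = df[df["pfs_months"] > landmark].copy()
lm["pfs_from_landmark"] = lm["pfs_months"] - landmark
cph_lm = CoxPHFitter()
cph_lm.fit(lm[["pfs_from_landmark", "progressed", "er_ctc_3m"]],
           duration_col="pfs_from_landmark", event_col="progressed")
```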
Background: The estrogen receptor (ER) can change expression between primary tumor (PT) and distant metastasis (DM) in breast cancer. A tissue biopsy reflects a momentary state at one location, whereas circulating tumor cells (CTCs) reflect real-time tumor progression. We evaluated ER-status during tumor progression from PT to DM and CTCs, and related the ER-status of CTCs to prognosis. Methods: In a study of metastatic breast cancer, blood was collected at different timepoints. After CellSearch® enrichment, CTCs were captured on DropMount slides and evaluated for ER expression at baseline (BL) and after 1 and 3 months of therapy. Comparison of the ER-status of PT, DM, and CTCs at different timepoints was performed using the McNemar test. The primary endpoint was progression-free survival (PFS). Results: Evidence of a shift from ER positivity to negativity between PT and DM was demonstrated (p = 0.019). We found strong evidence of similar shifts from PT to CTCs at different timepoints (p < 0.0001). ER-positive CTCs at 1 and 3 months were related to better prognosis. Conclusions: A shift in ER-status from PT to DM/CTCs was demonstrated. ER-positive CTCs during systemic therapy might reflect the retention of a favorable phenotype that still responds to therapy.
1. Introduction: Breast cancer is the most common malignant disease in women, and although five-year survival is approaching 90%, 20–30% of women with an initially local disease develop metastatic disease within 10 years [1]. The median overall survival (OS) in women with metastatic breast cancer (MBC) is approximately two years, and only 25% survive beyond five years [2]. Evaluating the response to therapy in MBC can be difficult, especially in patients with non-measurable disease. Guidelines for therapy evaluation include diagnostic imaging, clinical assessment, and also the evaluation of liquid-based tumor markers [2,3,4]. In breast cancer, the estrogen receptor (ER) and the human epidermal growth factor receptor 2 (HER2), which are predictive of the response to endocrine and HER2-targeted therapy, respectively, are the most important treatment-predictive markers. Since it has been shown that the expression of ER and HER2 frequently changes between primary tumor (PT) and distant metastasis (DM) in breast cancer [5,6], it is important to re-evaluate the biomarker status in biopsies from metastatic lesions. Furthermore, there are indications that cancer cells lose their ER expression during endocrine treatment [7,8] as a mechanism of resistance. The choice of therapy in a metastatic setting should therefore preferably be based on the receptor status of the metastatic biopsy when available. Due to tumor heterogeneity, biopsies from different sites should be evaluated. In clinical practice, however, it is not always feasible to biopsy multiple metastatic sites, and the metastasis might also be situated at a non-accessible site. Moreover, current diagnostic methods, i.e., needle biopsies, limit the amount of material available for further analyses. Liquid biopsies evaluating the presence of circulating tumor cells (CTCs) are non-invasive and easily accessible, with a limited risk of complications in comparison to an invasive tissue biopsy. They can be repeated regularly, which allows for the serial monitoring of real-time tumor evolution and response to treatment. Additionally, a tissue biopsy reflects a momentary state of the tumor at one location in the body, whereas liquid-based markers reflect real-time tumor progression and are not dependent on repeated invasive tissue biopsies. CTCs can be detected in a liquid biopsy, and the CellSearch® system (©Menarini Silicon Biosystems, Inc 2020, USA) is considered the gold standard for CTC capture and enumeration in MBC since it is, so far, the only Food and Drug Administration (FDA)-approved technology. Multiple trials have shown CellSearch®-enumerated CTCs to be an independent prognostic factor in MBC [9,10,11,12,13,14,15]. CTCs have also been proposed to more accurately reflect tumor heterogeneity than biopsies of single metastases and are considered to originate from both the primary tumor and all metastatic sites [16]. Therefore, liquid biopsies enumerating CTCs might be a better proxy for the total metastatic burden. However, CTC enumeration alone is insufficient to predict the efficacy of therapy, and the addition of CTC tumor-marker expression might better predict the benefit of a therapy [17]. The primary aim of the present study was to characterize CTCs detected in blood samples from MBC patients and to evaluate the ER-status during tumor progression from the primary tumor to distant metastasis and CTCs, respectively. Our secondary aim was to relate the ER-status of CTCs to prognosis. 
We hypothesized that, by evaluating the ER-status of single CTCs, improved therapy guidance could be provided. 5. Conclusions: We demonstrated evidence of a shift from ER positivity to negativity in the PT compared to DM, as well as evidence of a similar shift in the ER-status of CTCs from PT and DM. Using the CTC-DropMount method, the ER positivity of CTCs at 1 and 3 months was related to better prognosis. The ER positivity of CTCs after the initiation of systemic therapy might reflect the retention of a favorable phenotype that still responds to therapy.
Background: The estrogen receptor (ER) can change expression between primary tumor (PT) and distant metastasis (DM) in breast cancer. A tissue biopsy reflects a momentary state at one location, whereas circulating tumor cells (CTCs) reflect real-time tumor progression. We evaluated ER-status during tumor progression from PT to DM and CTCs, and related the ER-status of CTCs to prognosis. Methods: In a study of metastatic breast cancer, blood was collected at different timepoints. After CellSearch® enrichment, CTCs were captured on DropMount slides and evaluated for ER expression at baseline (BL) and after 1 and 3 months of therapy. Comparison of the ER-status of PT, DM, and CTCs at different timepoints was performed using the McNemar test. The primary endpoint was progression-free survival (PFS). Results: Evidence of a shift from ER positivity to negativity between PT and DM was demonstrated (p = 0.019). We found strong evidence of similar shifts from PT to CTCs at different timepoints (p < 0.0001). ER-positive CTCs at 1 and 3 months were related to better prognosis. Conclusions: A shift in ER-status from PT to DM/CTCs was demonstrated. ER-positive CTCs during systemic therapy might reflect the retention of a favorable phenotype that still responds to therapy.
5,494
263
[ 139, 153, 248, 236, 964, 120 ]
10
[ "er", "ctcs", "status", "positive", "er status", "bl", "er positive", "figure", "ctc", "cases" ]
[ "women metastatic breast", "metastasis dm breast", "cancer estrogen receptor", "progression breast cancer", "markers breast cancer" ]
null
[CONTENT] circulating tumor cells | breast cancer | estrogen receptor: tumor progression | metastasis [SUMMARY]
null
[CONTENT] circulating tumor cells | breast cancer | estrogen receptor: tumor progression | metastasis [SUMMARY]
[CONTENT] circulating tumor cells | breast cancer | estrogen receptor: tumor progression | metastasis [SUMMARY]
[CONTENT] circulating tumor cells | breast cancer | estrogen receptor: tumor progression | metastasis [SUMMARY]
[CONTENT] circulating tumor cells | breast cancer | estrogen receptor: tumor progression | metastasis [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Breast Neoplasms | Disease Progression | Female | Follow-Up Studies | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Staging | Neoplasms | Neoplastic Cells, Circulating | Prognosis | Receptors, Estrogen | Survival Analysis [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Breast Neoplasms | Disease Progression | Female | Follow-Up Studies | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Staging | Neoplasms | Neoplastic Cells, Circulating | Prognosis | Receptors, Estrogen | Survival Analysis [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Breast Neoplasms | Disease Progression | Female | Follow-Up Studies | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Staging | Neoplasms | Neoplastic Cells, Circulating | Prognosis | Receptors, Estrogen | Survival Analysis [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Breast Neoplasms | Disease Progression | Female | Follow-Up Studies | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Staging | Neoplasms | Neoplastic Cells, Circulating | Prognosis | Receptors, Estrogen | Survival Analysis [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Breast Neoplasms | Disease Progression | Female | Follow-Up Studies | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Staging | Neoplasms | Neoplastic Cells, Circulating | Prognosis | Receptors, Estrogen | Survival Analysis [SUMMARY]
[CONTENT] women metastatic breast | metastasis dm breast | cancer estrogen receptor | progression breast cancer | markers breast cancer [SUMMARY]
null
[CONTENT] women metastatic breast | metastasis dm breast | cancer estrogen receptor | progression breast cancer | markers breast cancer [SUMMARY]
[CONTENT] women metastatic breast | metastasis dm breast | cancer estrogen receptor | progression breast cancer | markers breast cancer [SUMMARY]
[CONTENT] women metastatic breast | metastasis dm breast | cancer estrogen receptor | progression breast cancer | markers breast cancer [SUMMARY]
[CONTENT] women metastatic breast | metastasis dm breast | cancer estrogen receptor | progression breast cancer | markers breast cancer [SUMMARY]
[CONTENT] er | ctcs | status | positive | er status | bl | er positive | figure | ctc | cases [SUMMARY]
null
[CONTENT] er | ctcs | status | positive | er status | bl | er positive | figure | ctc | cases [SUMMARY]
[CONTENT] er | ctcs | status | positive | er status | bl | er positive | figure | ctc | cases [SUMMARY]
[CONTENT] er | ctcs | status | positive | er status | bl | er positive | figure | ctc | cases [SUMMARY]
[CONTENT] er | ctcs | status | positive | er status | bl | er positive | figure | ctc | cases [SUMMARY]
[CONTENT] tumor | metastatic | biopsies | liquid | cancer | biopsy | mbc | therapy | breast cancer | breast [SUMMARY]
null
[CONTENT] er | bl | positive | figure | ctcs | table | hr | 95 | ci | er positive [SUMMARY]
[CONTENT] er positivity | positivity | er | evidence | positivity ctcs | shift | shift er | er positivity ctcs | ctcs | compared dm [SUMMARY]
[CONTENT] er | ctcs | positive | bl | er positive | status | er status | cases | figure | dm [SUMMARY]
[CONTENT] er | ctcs | positive | bl | er positive | status | er status | cases | figure | dm [SUMMARY]
[CONTENT] estrogen | ER ||| one ||| ER | PT | ER [SUMMARY]
null
[CONTENT] ER | PT | DM | 0.019 ||| ||| ER | 1 and 3 months [SUMMARY]
[CONTENT] ER | PT ||| ER [SUMMARY]
[CONTENT] ER ||| one ||| ER | PT | ER ||| ||| CellSearch | DropMount | ER | 1 and 3 months ||| ER | PT | DM | McNemar ||| ||| ||| ER | PT | DM | 0.019 ||| ||| ER | 1 and 3 months ||| ER | PT ||| ER [SUMMARY]
[CONTENT] ER ||| one ||| ER | PT | ER ||| ||| CellSearch | DropMount | ER | 1 and 3 months ||| ER | PT | DM | McNemar ||| ||| ||| ER | PT | DM | 0.019 ||| ||| ER | 1 and 3 months ||| ER | PT ||| ER [SUMMARY]
Prognosis of ampullary cancer based on immunohistochemical type and expression of osteopontin.
21992455
Ampullary cancer (AC) was classified as pancreatobiliary, intestinal, or other subtype based on the expression of cytokeratin 7 (CK7) and cytokeratin 20 (CK20). We aimed to explore the association of AC subtype with patient prognosis.
BACKGROUND
The relationship of AC subtype and osteopontin (OPN) expression with the prognosis of 120 AC patients after pancreaticoduodenectomy was investigated.
METHODS
The patients had pancreatobiliary (CK7+/CK20-, n = 24, 20%), intestinal (CK7-/CK20+, n = 29, 24.2%) or other (CK7+/CK20+ or CK7-/CK20-, n = 67, 55.8%) subtypes of AC, and their median survival times were 23 ± 4.2, 38 ± 2.8 and 64 ± 16.8 months, respectively. The survival times of 64 OPN- patients (53.3%) and 56 OPN+ patients (46.7%) were 69 ± 18.4 and 36 ± 1.3 months, respectively. There was no significant effect of AC subtype on survival of OPN- patients. For OPN+ patients, those with pancreatobiliary AC had a shorter survival time (22 ± 6.6 months) than those with intestinal AC (37 ± 1.4 months, p = 0.041), and other AC subtype (36 ± 0.9 months, p = 0.010); intestinal and other AC subtypes had similar survival times.
RESULTS
The prognosis of AC patients can be estimated based on immunohistochemical classification and OPN status.
CONCLUSIONS
[ "Adenocarcinoma", "Ampulla of Vater", "Biomarkers, Tumor", "Common Bile Duct Neoplasms", "Female", "Humans", "Immunohistochemistry", "Intestinal Neoplasms", "Kaplan-Meier Estimate", "Keratin-20", "Keratin-7", "Male", "Middle Aged", "Osteopontin", "Prognosis" ]
3213044
Background
Ampullary carcinoma (AC) is a relatively rare tumor of the hepatopancreatic ampulla that accounts for approximately 0.2% of gastrointestinal tract malignancies and 7% of periampullary carcinomas [1]. ACs have different anatomical origins. Kimura et al. initially classified AC as pancreatobiliary AC if it had papillary projections with scant fibrous cores and as intestinal AC if it resembled tubular adenocarcinoma of the stomach or colon [2]. Numerous studies have reported that intestinal AC is associated with a better prognosis than pancreatobiliary AC [1-5]. AC has also been classified based on immunohistochemical expression of cytokeratin 7 (CK7), Mucins and CDX2 [4,6-8] and HNF4α [9]. However, the clinical significance and survival rates of AC patients with these different immunohistochemical subtypes have not been definitely established. Histologic classification and immunohistochemical characterization by cytokeratins are in good agreement [5]. Fischer et al. reported that the histological subtypes of AC could be determined by the expression of CK7, CK20, and MUC2; pancreaticobiliary AC is CK7+/CK20-/MUC2-, and intestinal AC is CK7-/CK20+/MUC2+ [10]. Zhou et al. classified CK7-/CK20+ tumors as intestinal AC, CK7+/CK20- tumors as pancreatobiliary AC, and tumors that are CK7+/CK20+ or CK7-/CK20- as "other" [3]. However, there was no statistical difference in survival of patients with different CK7/CK20 subtypes [3] or with different CK20/MUC subtypes [11]. Osteopontin (OPN) is a secretory calcium-binding phosphorylated glycoprotein and plays an important role in bone metabolism. OPN is widely distributed in the urine, blood, gastrointestinal tract, pancreas, lungs and elsewhere. At the molecular level, OPN plays important roles in cellular adhesion and migration, tissue repair, and signal transduction and also in the invasion and metastasis of several cancers [12]. OPN is significantly associated with survival rate in several cancers and has value as a marker of clinical tumor progression [13,14]. In particular, low OPN levels were significantly associated with a favorable prognosis in patients with advanced non-small cell lung cancer [15], laryngeal and hypopharyngeal carcinomas [16], hepatocellular carcinoma [17], colorectal cancer [18], idiopathic pulmonary hypertension [19], upper urinary tract urothelial carcinoma [20], acute myeloid leukemia [21], oral squamous cell carcinoma [22], and endometrial cancer [23]. OPN may also be a suitable biomarker for overall survival and renal outcome of patients who are critically ill with acute kidney injury [24]. However, few studies have investigated the expression of OPN in patients with AC. Van Heek et al. reported higher OPN expression in the sera and tumors of AC patients than in the sera and duodenal samples of healthy controls [25]. Bloomston et al. reported that node-negative status and lack of OPN expression were associated with prolonged survival in patients with AC [26]. Hsu et al. reported that expression of OPN and the presence of tumor-associated macrophages in bulky AC were associated with tumor recurrence, and poorer disease-specific survival [27]. In the present study, we retrospectively analyzed the clinical data of 120 patients who were undergoing pancreaticoduodenectomy due to AC. We focused on the association of AC prognosis with the expression of CK7, CK20, and OPN.
Patients and Methods
Patients: From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the People's Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas, or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al. [28]. This study was approved by the hospital Institutional Review Board. Immunohistochemistry: Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and a rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). Normal serum from a non-immunized goat was used as a negative control for the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control. Details for the determination of positive staining were previously provided [3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive. Statistical analysis: Results are expressed as means or medians with standard deviations, or counts and percentages. Survival was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05. 
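The survival comparisons described above were run in SPSS 15.0; the sketch below is only an illustrative translation of the same Kaplan-Meier/log-rank approach into Python with the lifelines package, including the three-way subtype comparison within OPN strata reported later in the Results. The file and column names (os_months, died, subtype, opn_positive) are hypothetical.

```python
# Illustrative sketch only: the original analysis used SPSS 15.0 and the log-rank test.
# Column names ("subtype", "opn_positive", "os_months", "died") are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("ampullary_cohort.csv")  # hypothetical per-patient file

# Overall three-group comparison: pancreatobiliary vs intestinal vs other subtype
overall = multivariate_logrank_test(df["os_months"], df["subtype"], df["died"])
print(f"3-group log-rank p = {overall.p_value:.3f}")

# Repeat the comparison within the OPN-negative and OPN-positive strata
for opn, grp in df.groupby("opn_positive"):
    res = multivariate_logrank_test(grp["os_months"], grp["subtype"], grp["died"])
    print(f"OPN{'+' if opn else '-'} stratum: log-rank p = {res.p_value:.3f}")

# Kaplan-Meier curves per subtype
kmf = KaplanMeierFitter()
for subtype, grp in df.groupby("subtype"):
    kmf.fit(grp["os_months"], grp["died"], label=str(subtype))
    kmf.plot_survival_function()
```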
null
null
Conclusions
XQZ carried out the study design, defined the intellectual content, participated in the literature research and manuscript preparation, analyzed data, and edited the manuscript. JHD do guarantor of integrity of the entire study, carried out the study concepts, and reviewed the manuscript. WZZ carried out the clinical studies, and acquired data. ZL carried out the experimental studies, and did statistical analysis. All authors read and approved the final manuscript.
[ "Background", "Patients", "Immunohistochemistry", "Statistical analysis", "Results", "Patient characteristics", "Survival of patients with different subtypes of AC", "Discussion", "Conclusions" ]
[ "Ampullary carcinoma (AC) is a relatively rare tumor of the hepatopancreatic ampulla that accounts for approximately 0.2% of gastrointestinal tract malignancies and 7% of periampullary carcinomas [1]. ACs have different anatomical origins. Kimura et al. initially classified AC as pancreatobiliary AC if it had papillary projections with scant fibrous cores and as intestinal AC if it resembled tubular adenocarcinoma of the stomach or colon [2]. Numerous studies have reported that intestinal AC is associated with a better prognosis than pancreatobiliary AC [1-5].\nAC has also been classified based on immunohistochemical expression of cytokeratin 7 (CK7), Mucins and CDX2 [4,6-8] and HNF4α [9]. However, the clinical significance and survival rates of AC patients with these different immunohistochemical subtypes have not been definitely established.\nHistologic classification and immunohistochemical characterization by cytokeratins are in good agreement [5]. Fischer et al. reported that the histological subtypes of AC could be determined by the expression of CK7, CK20, and MUC2; pancreaticobiliary AC is CK7+/CK20-/MUC2-, and intestinal AC is CK7-/CK20+/MUC2+ [10]. Zhou et al. classified CK7-/CK20+ tumors as intestinal AC, CK7+/CK20- tumors as pancreatobiliary AC, and tumors that are CK7+/CK20+ or CK7-/CK20- as \"other\" [3]. However, there was no statistical difference in survival of patients with different CK7/CK20 subtypes [3] or with different CK20/MUC subtypes [11].\nOsteopontin (OPN) is a secretory calcium-binding phosphorylated glycoprotein and plays an important role in bone metabolism. OPN is widely distributed in the urine, blood, gastrointestinal tract, pancreas, lungs and elsewhere. At the molecular level, OPN plays important roles in cellular adhesion and migration, tissue repair, and signal transduction and also in the invasion and metastasis of several cancers [12]. OPN is significantly associated with survival rate in several cancers and has value as a marker of clinical tumor progression [13,14]. In particular, low OPN levels were significantly associated with a favorable prognosis in patients with advanced non-small cell lung cancer [15], laryngeal and hypopharyngeal carcinomas [16], hepatocellular carcinoma [17], colorectal cancer [18], idiopathic pulmonary hypertension [19], upper urinary tract urothelial carcinoma [20], acute myeloid leukemia [21], oral squamous cell carcinoma [22], and endometrial cancer [23]. OPN may also be a suitable biomarker for overall survival and renal outcome of patients who are critically ill with acute kidney injury [24].\nHowever, few studies have investigated the expression of OPN in patients with AC. Van Heek et al. reported higher OPN expression in the sera and tumors of AC patients than in the sera and duodenal samples of healthy controls [25]. Bloomston et al. reported that node-negative status and lack of OPN expression were associated with prolonged survival in patients with AC [26]. Hsu et al. reported that expression of OPN and the presence of tumor-associated macrophages in bulky AC were associated with tumor recurrence, and poorer disease-specific survival [27].\nIn the present study, we retrospectively analyzed the clinical data of 120 patients who were undergoing pancreaticoduodenectomy due to AC. 
We focused on the association of AC prognosis with the expression of CK7, CK20, and OPN.", "From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board.", "Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control.\nDetails for the determination of positive staining were previously provided[3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive.", "Results are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05.", " Patient characteristics A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery.\nA total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. 
The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery.\n Survival of patients with different subtypes of AC Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining.\nRepresentative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin.\nTable 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively.\nCharacteristics of ampullary carcinoma and survival time of patients.\nSurvival time is expressed as median ± SD and p-values were calculated by the log-rank test.\nKaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test.\nFigure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002).\nExpression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC).\nSurvival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. 
There was no difference between intestinal AC and other type of AC (p = 0.907).\nFigure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining.\nRepresentative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin.\nTable 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively.\nCharacteristics of ampullary carcinoma and survival time of patients.\nSurvival time is expressed as median ± SD and p-values were calculated by the log-rank test.\nKaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test.\nFigure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002).\nExpression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC).\nSurvival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907).", "A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. 
Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery.", "Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining.\nRepresentative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin.\nTable 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively.\nCharacteristics of ampullary carcinoma and survival time of patients.\nSurvival time is expressed as median ± SD and p-values were calculated by the log-rank test.\nKaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test.\nFigure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002).\nExpression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC).\nSurvival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907).", "Many previous studies have examined the expression of histological markers in the carcinomas of patients with ACs. For example, de Paiva Haddad et al. reported that MUC1, MUC2, and CDX2 provided the best agreement with histomorphological classification of AC [4]. However, their multivariate analysis indicated that neither histological classification nor immunohistochemical results were statistically independent predictors of poor prognosis. Moriya et al. 
reported that MUC1 and MUC2 expression was useful for classification of pancreatobiliary and intestinal AC, and that pancreatobiliary AC was associated with worse prognosis than intestinal AC [8]. Ehehalt et al. reported that immunohistochemical determination of HNF4α expression distinguished different AC subtypes and that HNF4α protein expression was an independent predictor of favorable prognosis [9]. Westgaard et al. classified ACs by expression of CK7, MUC4, and CDX2 [6]. They found only moderate agreement of the immunohistochemical and histomorphological classifications, and that expression of MUC1 and/or MUC4 was independently associated with poor prognosis. Santini et al. reported that MUC2 and MUC5 expression was not associated with prognosis of the patients with radically resected AC [7]. Clearly, it is difficult to completely reconcile these diverse and occasionally contradictory results, which have examined many markers in diverse patient populations.\nOther researchers have reported immunohistochemical classification of AC based on expression of CK7 and CK20 [3,10]. Thus, in the present study we classified our AC patients as having pancreatobiliary AC (CK7+/CK20-, 20%), intestinal AC (CK7-/CK20+, 24.2%), or other subtype of AC (CK7+/CK20+ or CK7-/CK20-, 55.8%). Thus, the majority of our Chinese patients had other subtype of AC, in contrast to Kimura et al. [2], who studied Japanese patients, and Zhou et al. [3] who studied German patients and reported that pancreatobiliary AC was the most common subtype. This difference might be due to population differences and/or to the different reactivity thresholds used in the immunohistochemical classification [29]. Our survival analysis indicated no significant differences in survival among patients with the different subtypes of AC, consistent with the report of Zhou et al. [3]. Taken together, these findings suggest the immunohistologic subtype of AC alone has limited value in determination of the prognosis.\nPrevious research has indicated that OPN expression is significantly associated with poor survival of patients with several forms of cancer [13,14]. It has also been reported that elevated OPN expression in AC patients predicts poor disease-specific survival [25-27]. However, when we pooled all AC subtypes, we found that survival time of OPN- and OPN+ patients had no significant difference. This is consistent with the results of Matsuzaki et al., who reported that OPN expression in AC patients was not associated with survival rate, although OPN expression in the carcinoma was higher than in adjacent normal tissues [30].\nOur further analysis indicated that the subtype of AC (intestinal, pancreatobiliary, or other) had no significant effect on survival of patients with OPN- carcinomas. However, for patients with OPN+ carcinomas, those with intestinal AC or other subtype of AC had significantly better survival than those with pancreatobiliary AC. Thus, OPN expression appears to affect the biological behavior of AC, and this effect depends on the anatomical origin of the tumor. These results indicate that determination of the prognosis of patients with AC should consider OPN expression.\nPrevious studies have reported interactions of OPN with other proteins. For example, in situ proximity ligation analysis indicated a molecular interaction of OPN and CD44 and that elevated expression of these proteins were associated with increased mitosis and significantly enhanced gastrointestinal stromal tumor cell proliferation in vitro [31]. 
Yang et al. reported that OPN combined with CD44 was a promising independent predictor of tumor recurrence and survival in patients with hepatocellular carcinoma [32]. OPN combined with CDX2 appears to predict survival of advanced gastric cancer patients, and CDX2 may be a transcription factor that modulates the expression of osteopontin [33]. Our results support previous reports which suggest that OPN has a role in the pathogenesis of AC [25-27]. However, a limitation of our study is that patients were enrolled retrospectively, and we did not include histomorphological classification of patients or immunochemical determination based on other factors including CDX2 and mucins. Clearly, the potential interaction of OPN, CK7, CDX2, mucins and other factors and the role of these in the pathogenesis of AC warrant further studies.", "In conclusion, our results indicate that it is difficult to determine prognosis of patients with AC based solely on immunohistochemical classification that considers CK7 and CK20 status. However, the additional consideration of OPN status allows determination of prognosis. Our results also suggest that OPN plays a role in the pathogenesis of AC, but its mechanisms and relationship with CK7 and CK20 warrant further studies." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Patients and Methods", "Patients", "Immunohistochemistry", "Statistical analysis", "Results", "Patient characteristics", "Survival of patients with different subtypes of AC", "Discussion", "Conclusions" ]
[ "Ampullary carcinoma (AC) is a relatively rare tumor of the hepatopancreatic ampulla that accounts for approximately 0.2% of gastrointestinal tract malignancies and 7% of periampullary carcinomas [1]. ACs have different anatomical origins. Kimura et al. initially classified AC as pancreatobiliary AC if it had papillary projections with scant fibrous cores and as intestinal AC if it resembled tubular adenocarcinoma of the stomach or colon [2]. Numerous studies have reported that intestinal AC is associated with a better prognosis than pancreatobiliary AC [1-5].\nAC has also been classified based on immunohistochemical expression of cytokeratin 7 (CK7), Mucins and CDX2 [4,6-8] and HNF4α [9]. However, the clinical significance and survival rates of AC patients with these different immunohistochemical subtypes have not been definitely established.\nHistologic classification and immunohistochemical characterization by cytokeratins are in good agreement [5]. Fischer et al. reported that the histological subtypes of AC could be determined by the expression of CK7, CK20, and MUC2; pancreaticobiliary AC is CK7+/CK20-/MUC2-, and intestinal AC is CK7-/CK20+/MUC2+ [10]. Zhou et al. classified CK7-/CK20+ tumors as intestinal AC, CK7+/CK20- tumors as pancreatobiliary AC, and tumors that are CK7+/CK20+ or CK7-/CK20- as \"other\" [3]. However, there was no statistical difference in survival of patients with different CK7/CK20 subtypes [3] or with different CK20/MUC subtypes [11].\nOsteopontin (OPN) is a secretory calcium-binding phosphorylated glycoprotein and plays an important role in bone metabolism. OPN is widely distributed in the urine, blood, gastrointestinal tract, pancreas, lungs and elsewhere. At the molecular level, OPN plays important roles in cellular adhesion and migration, tissue repair, and signal transduction and also in the invasion and metastasis of several cancers [12]. OPN is significantly associated with survival rate in several cancers and has value as a marker of clinical tumor progression [13,14]. In particular, low OPN levels were significantly associated with a favorable prognosis in patients with advanced non-small cell lung cancer [15], laryngeal and hypopharyngeal carcinomas [16], hepatocellular carcinoma [17], colorectal cancer [18], idiopathic pulmonary hypertension [19], upper urinary tract urothelial carcinoma [20], acute myeloid leukemia [21], oral squamous cell carcinoma [22], and endometrial cancer [23]. OPN may also be a suitable biomarker for overall survival and renal outcome of patients who are critically ill with acute kidney injury [24].\nHowever, few studies have investigated the expression of OPN in patients with AC. Van Heek et al. reported higher OPN expression in the sera and tumors of AC patients than in the sera and duodenal samples of healthy controls [25]. Bloomston et al. reported that node-negative status and lack of OPN expression were associated with prolonged survival in patients with AC [26]. Hsu et al. reported that expression of OPN and the presence of tumor-associated macrophages in bulky AC were associated with tumor recurrence, and poorer disease-specific survival [27].\nIn the present study, we retrospectively analyzed the clinical data of 120 patients who were undergoing pancreaticoduodenectomy due to AC. 
We focused on the association of AC prognosis with the expression of CK7, CK20, and OPN.", " Patients From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board.\nFrom January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board.\n Immunohistochemistry Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control.\nDetails for the determination of positive staining were previously provided[3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. 
No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive.\nCarcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control.\nDetails for the determination of positive staining were previously provided[3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive.\n Statistical analysis Results are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05.\nResults are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05.", "From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board.", "Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). 
The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control.\nDetails for the determination of positive staining were previously provided[3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive.", "Results are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05.", " Patient characteristics A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery.\nA total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery.\n Survival of patients with different subtypes of AC Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining.\nRepresentative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin.\nTable 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). 
The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively.\nCharacteristics of ampullary carcinoma and survival time of patients.\nSurvival time is expressed as median ± SD and p-values were calculated by the log-rank test.\nKaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test.\nFigure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002).\nExpression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC).\nSurvival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907).\nFigure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining.\nRepresentative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin.\nTable 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively.\nCharacteristics of ampullary carcinoma and survival time of patients.\nSurvival time is expressed as median ± SD and p-values were calculated by the log-rank test.\nKaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test.\nFigure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). 
Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002).\nExpression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC).\nSurvival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907).", "A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery.", "Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining.\nRepresentative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin.\nTable 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively.\nCharacteristics of ampullary carcinoma and survival time of patients.\nSurvival time is expressed as median ± SD and p-values were calculated by the log-rank test.\nKaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test.\nFigure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). 
Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002).\nExpression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC).\nSurvival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907).", "Many previous studies have examined the expression of histological markers in the carcinomas of patients with ACs. For example, de Paiva Haddad et al. reported that MUC1, MUC2, and CDX2 provided the best agreement with histomorphological classification of AC [4]. However, their multivariate analysis indicated that neither histological classification nor immunohistochemical results were statistically independent predictors of poor prognosis. Moriya et al. reported that MUC1 and MUC2 expression was useful for classification of pancreatobiliary and intestinal AC, and that pancreatobiliary AC was associated with worse prognosis than intestinal AC [8]. Ehehalt et al. reported that immunohistochemical determination of HNF4α expression distinguished different AC subtypes and that HNF4α protein expression was an independent predictor of favorable prognosis [9]. Westgaard et al. classified ACs by expression of CK7, MUC4, and CDX2 [6]. They found only moderate agreement of the immunohistochemical and histomorphological classifications, and that expression of MUC1 and/or MUC4 was independently associated with poor prognosis. Santini et al. reported that MUC2 and MUC5 expression was not associated with prognosis of the patients with radically resected AC [7]. Clearly, it is difficult to completely reconcile these diverse and occasionally contradictory results, which have examined many markers in diverse patient populations.\nOther researchers have reported immunohistochemical classification of AC based on expression of CK7 and CK20 [3,10]. Thus, in the present study we classified our AC patients as having pancreatobiliary AC (CK7+/CK20-, 20%), intestinal AC (CK7-/CK20+, 24.2%), or other subtype of AC (CK7+/CK20+ or CK7-/CK20-, 55.8%). Thus, the majority of our Chinese patients had other subtype of AC, in contrast to Kimura et al. [2], who studied Japanese patients, and Zhou et al. [3] who studied German patients and reported that pancreatobiliary AC was the most common subtype. This difference might be due to population differences and/or to the different reactivity thresholds used in the immunohistochemical classification [29]. Our survival analysis indicated no significant differences in survival among patients with the different subtypes of AC, consistent with the report of Zhou et al. [3]. 
Taken together, these findings suggest the immunohistologic subtype of AC alone has limited value in determination of the prognosis.\nPrevious research has indicated that OPN expression is significantly associated with poor survival of patients with several forms of cancer [13,14]. It has also been reported that elevated OPN expression in AC patients predicts poor disease-specific survival [25-27]. However, when we pooled all AC subtypes, we found that survival time of OPN- and OPN+ patients had no significant difference. This is consistent with the results of Matsuzaki et al., who reported that OPN expression in AC patients was not associated with survival rate, although OPN expression in the carcinoma was higher than in adjacent normal tissues [30].\nOur further analysis indicated that the subtype of AC (intestinal, pancreatobiliary, or other) had no significant effect on survival of patients with OPN- carcinomas. However, for patients with OPN+ carcinomas, those with intestinal AC or other subtype of AC had significantly better survival than those with pancreatobiliary AC. Thus, OPN expression appears to affect the biological behavior of AC, and this effect depends on the anatomical origin of the tumor. These results indicate that determination of the prognosis of patients with AC should consider OPN expression.\nPrevious studies have reported interactions of OPN with other proteins. For example, in situ proximity ligation analysis indicated a molecular interaction of OPN and CD44 and that elevated expression of these proteins were associated with increased mitosis and significantly enhanced gastrointestinal stromal tumor cell proliferation in vitro [31]. Yang et al. reported that OPN combined with CD44 was a promising independent predictor of tumor recurrence and survival in patients with hepatocellular carcinoma [32]. OPN combined with CDX2 appears to predict survival of advanced gastric cancer patients, and CDX2 may be a transcription factor that modulates the expression of osteopontin [33]. Our results support previous reports which suggest that OPN has a role in the pathogenesis of AC [25-27]. However, a limitation of our study is that patients were enrolled retrospectively, and we did not include histomorphological classification of patients or immunochemical determination based on other factors including CDX2 and mucins. Clearly, the potential interaction of OPN, CK7, CDX2, mucins and other factors and the role of these in the pathogenesis of AC warrant further studies.", "In conclusion, our results indicate that it is difficult to determine prognosis of patients with AC based solely on immunohistochemical classification that considers CK7 and CK20 status. However, the additional consideration of OPN status allows determination of prognosis. Our results also suggest that OPN plays a role in the pathogenesis of AC, but its mechanisms and relationship with CK7 and CK20 warrant further studies." ]
[ null, "methods", null, null, null, null, null, null, null, null ]
[ "ampullary cancer", "osteopontin", "survival analysis", "immunohistochemistry", "classification", "Cytokeratin 20", "Cytokeratin 7" ]
Background: Ampullary carcinoma (AC) is a relatively rare tumor of the hepatopancreatic ampulla that accounts for approximately 0.2% of gastrointestinal tract malignancies and 7% of periampullary carcinomas [1]. ACs have different anatomical origins. Kimura et al. initially classified AC as pancreatobiliary AC if it had papillary projections with scant fibrous cores and as intestinal AC if it resembled tubular adenocarcinoma of the stomach or colon [2]. Numerous studies have reported that intestinal AC is associated with a better prognosis than pancreatobiliary AC [1-5]. AC has also been classified based on immunohistochemical expression of cytokeratin 7 (CK7), Mucins and CDX2 [4,6-8] and HNF4α [9]. However, the clinical significance and survival rates of AC patients with these different immunohistochemical subtypes have not been definitely established. Histologic classification and immunohistochemical characterization by cytokeratins are in good agreement [5]. Fischer et al. reported that the histological subtypes of AC could be determined by the expression of CK7, CK20, and MUC2; pancreaticobiliary AC is CK7+/CK20-/MUC2-, and intestinal AC is CK7-/CK20+/MUC2+ [10]. Zhou et al. classified CK7-/CK20+ tumors as intestinal AC, CK7+/CK20- tumors as pancreatobiliary AC, and tumors that are CK7+/CK20+ or CK7-/CK20- as "other" [3]. However, there was no statistical difference in survival of patients with different CK7/CK20 subtypes [3] or with different CK20/MUC subtypes [11]. Osteopontin (OPN) is a secretory calcium-binding phosphorylated glycoprotein and plays an important role in bone metabolism. OPN is widely distributed in the urine, blood, gastrointestinal tract, pancreas, lungs and elsewhere. At the molecular level, OPN plays important roles in cellular adhesion and migration, tissue repair, and signal transduction and also in the invasion and metastasis of several cancers [12]. OPN is significantly associated with survival rate in several cancers and has value as a marker of clinical tumor progression [13,14]. In particular, low OPN levels were significantly associated with a favorable prognosis in patients with advanced non-small cell lung cancer [15], laryngeal and hypopharyngeal carcinomas [16], hepatocellular carcinoma [17], colorectal cancer [18], idiopathic pulmonary hypertension [19], upper urinary tract urothelial carcinoma [20], acute myeloid leukemia [21], oral squamous cell carcinoma [22], and endometrial cancer [23]. OPN may also be a suitable biomarker for overall survival and renal outcome of patients who are critically ill with acute kidney injury [24]. However, few studies have investigated the expression of OPN in patients with AC. Van Heek et al. reported higher OPN expression in the sera and tumors of AC patients than in the sera and duodenal samples of healthy controls [25]. Bloomston et al. reported that node-negative status and lack of OPN expression were associated with prolonged survival in patients with AC [26]. Hsu et al. reported that expression of OPN and the presence of tumor-associated macrophages in bulky AC were associated with tumor recurrence, and poorer disease-specific survival [27]. In the present study, we retrospectively analyzed the clinical data of 120 patients who were undergoing pancreaticoduodenectomy due to AC. We focused on the association of AC prognosis with the expression of CK7, CK20, and OPN. 
Patients and Methods: Patients From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board. From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board. Immunohistochemistry Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control. Details for the determination of positive staining were previously provided[3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive. 
Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control. Details for the determination of positive staining were previously provided[3]. In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive. Statistical analysis Results are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05. Results are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05. Patients: From January 1, 1994 to December 30, 2008, patients undergoing pancreaticoduodenectomy due to AC were recruited from the Department of Hepatobiliary Surgery of the General Hospital of the Peoples Liberation Army (Beijing, China). The exclusion criteria were: (i) duodenal cancer, cancer of the lower bile duct, or cancer of the pancreas or any of these cancers involving the ampulla or duodenal papilla, based on pathological examination; (ii) uncertain origin of the cancer; (iii) previous focal resection of duodenal papillary cancer or AC; (iv) metastasis to other organs; and (v) presence of concomitant heart disease, cerebrovascular disease, or pulmonary disease that made the patient ineligible for surgery. Follow-up examinations were performed at 3 months after surgery, once every 6 months for 3 years, and then once per year. These follow-up examinations included routine tests (liver and kidney function, blood electrolytes, routine blood test), tests for tumor markers, chest X-ray, and abdominal imaging by ultrasonography, CT, or MRI. The last follow-up was on January 31, 2010. Tumor stage and lymph node metastasis were evaluated according to Greene et al [28]. This study was approved by the hospital Institutional Review Board. Immunohistochemistry: Carcinoma specimens were embedded in paraffin, cut consecutively into sections (4 μm), and the streptavidin-peroxidase method was used for immunohistochemical visualization (UltraSensitive™ SP kit, Maximbio. Co. Ltd, Fuzhou, China). The primary antibodies were mouse anti-human CK7 or CK20 monoclonal antibodies and rabbit anti-human OPN polyclonal antibody (Lab Vision & NeoMarkers, USA). The normal serum from non-immunized goat was used as a negative control of the primary antibody, and CK7+/CK20+/OPN+ pancreatic carcinoma was used as a positive control. Details for the determination of positive staining were previously provided[3]. 
In brief, cells positive for CK7, CK20, or OPN had brown or yellow-brown granules, mainly in the cytoplasm. Sections were evaluated by two independent and blinded pathologists. No staining or staining in fewer than 10% of cells was considered negative, and staining of 10% or more of cells was considered positive. Statistical analysis: Results are expressed as means or medians with standard deviations, or counts and percentages. Survival analysis was analyzed by the Kaplan-Meier method and the log-rank test. Data were analyzed using SPSS 15.0 (SPSS, Inc., Chicago, IL, USA). All p-values were two-sided and were considered significant if p was less than 0.05. Results: Patient characteristics A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery. A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery. Survival of patients with different subtypes of AC Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining. Representative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin. Table 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively. Characteristics of ampullary carcinoma and survival time of patients. Survival time is expressed as median ± SD and p-values were calculated by the log-rank test. 
Kaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test. Figure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002). Expression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC). Survival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907). Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining. Representative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin. Table 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively. Characteristics of ampullary carcinoma and survival time of patients. Survival time is expressed as median ± SD and p-values were calculated by the log-rank test. Kaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test. Figure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). 
Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002). Expression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC). Survival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907). Patient characteristics: A total of 120 patients (84 males, 36 females) met our inclusion criteria and received follow-up examinations. The mean age was 55.1 ± 9.8 years and the mean tumor diameter was 2.4 ± 1.5 cm. The 1-, 3- and 5-year survival rates were 94.8%, 78.7%, and 68.0%, respectively. The median survival time was 38 ± 11.3 months and the mean survival time was 53.9 months. A total of 51 patients (42.5%) survived to the end of the follow-up period (January 31, 2010), 1 patient survived more than 10 years, and 2 patients survived more than 5 years. In addition to pancreaticoduodenectomy (given to all patients), 8 patients received chemotherapy. Among the 69 patients (57.5%) who died in the follow-up period, 21 died within 5 years after surgery and 1 died 11 years after surgery. Survival of patients with different subtypes of AC: Figure 1A-D shows representative immunohistochemical results of patients with intestinal AC (CK7-/CK20+), pancreatobiliary AC (CK7+/CK20-), and other subtype of AC (CK7+/CK20+ and CK7-/CK20-). Figure 1E shows representative positive and negative results for OPN staining. Representative immunohistochemical staining of ampullary cancer. A: intestinal type, CK7-/CK20+; B: pancreatobiliary type, CK7+/CK20-; C: other type, (C: CK7+/CK20+); D: other type, CK7-/CK20-); E: positive and negative immunohistochemical staining for osteopontin. Table 1 shows the survival times of AC patients stratified by immunohistochemical results, extent of tumor differentiation, amount of tumor invasion, and lymph node metastasis. None of the other associations were significant. OPN+ patients had marginally longer survival time than OPN- patients (p = 0.062; Figure 2A). The 1-, 3, and 5-year survival rates were 84.3%, 55.3%, and 0% for OPN+ patients and 91.8%, 68.4% and 7.2% for OPN- patients respectively. Characteristics of ampullary carcinoma and survival time of patients. Survival time is expressed as median ± SD and p-values were calculated by the log-rank test. Kaplan-Meier survival curves of patients with different immunohistochemical types of ampullary cancer and with positive or negative expression of osteopontin. *p < 0.05 by log-rank test. Figure 2B shows the survival times of patients with pancreatobiliary AC, intestinal AC, and other AC. Although pancreatobiliary AC was associated with slightly worse prognosis, a log-rank test indicated that this difference was only marginal (p = 0.052, Table 2). Figures 2C and 2D show the survival times of these three groups stratified by OPN expression. The results indicate that OPN- patients with different subtypes of AC had similar survival times (p = 0.75, Table 2). 
However, OPN+ patients with different subtypes of AC had significantly different survival times (p = 0.017, Table 2). Specifically, OPN+ patients with pancreatobiliary AC had worse prognosis than those with intestinal AC (p = 0.041) and other subtype of AC (p = 0.010); the survival time of the patients with intestinal and other types had no difference. In addition, the survival time of patients with OPN+ pancreatobiliary AC was shorter than those with OPN- intestinal AC (p = 0.043), and other subtype of AC (p = 0.002). Expression of OPN and survival time of patients with different subtypes of ampullary carcinoma (AC). Survival time is expressed as median ± SD. 1p-value for comparison of the 3 types; 2p-value for comparison of the OPN+ and OPN- groups. * p = 0.041 and p = 0.010 for comparison with the intestinal type and other type OPN+ AC, respectively. There was no difference between intestinal AC and other type of AC (p = 0.907). Discussion: Many previous studies have examined the expression of histological markers in the carcinomas of patients with ACs. For example, de Paiva Haddad et al. reported that MUC1, MUC2, and CDX2 provided the best agreement with histomorphological classification of AC [4]. However, their multivariate analysis indicated that neither histological classification nor immunohistochemical results were statistically independent predictors of poor prognosis. Moriya et al. reported that MUC1 and MUC2 expression was useful for classification of pancreatobiliary and intestinal AC, and that pancreatobiliary AC was associated with worse prognosis than intestinal AC [8]. Ehehalt et al. reported that immunohistochemical determination of HNF4α expression distinguished different AC subtypes and that HNF4α protein expression was an independent predictor of favorable prognosis [9]. Westgaard et al. classified ACs by expression of CK7, MUC4, and CDX2 [6]. They found only moderate agreement of the immunohistochemical and histomorphological classifications, and that expression of MUC1 and/or MUC4 was independently associated with poor prognosis. Santini et al. reported that MUC2 and MUC5 expression was not associated with prognosis of the patients with radically resected AC [7]. Clearly, it is difficult to completely reconcile these diverse and occasionally contradictory results, which have examined many markers in diverse patient populations. Other researchers have reported immunohistochemical classification of AC based on expression of CK7 and CK20 [3,10]. Thus, in the present study we classified our AC patients as having pancreatobiliary AC (CK7+/CK20-, 20%), intestinal AC (CK7-/CK20+, 24.2%), or other subtype of AC (CK7+/CK20+ or CK7-/CK20-, 55.8%). Thus, the majority of our Chinese patients had other subtype of AC, in contrast to Kimura et al. [2], who studied Japanese patients, and Zhou et al. [3] who studied German patients and reported that pancreatobiliary AC was the most common subtype. This difference might be due to population differences and/or to the different reactivity thresholds used in the immunohistochemical classification [29]. Our survival analysis indicated no significant differences in survival among patients with the different subtypes of AC, consistent with the report of Zhou et al. [3]. Taken together, these findings suggest the immunohistologic subtype of AC alone has limited value in determination of the prognosis. 
Previous research has indicated that OPN expression is significantly associated with poor survival of patients with several forms of cancer [13,14]. It has also been reported that elevated OPN expression in AC patients predicts poor disease-specific survival [25-27]. However, when we pooled all AC subtypes, we found that survival time of OPN- and OPN+ patients had no significant difference. This is consistent with the results of Matsuzaki et al., who reported that OPN expression in AC patients was not associated with survival rate, although OPN expression in the carcinoma was higher than in adjacent normal tissues [30]. Our further analysis indicated that the subtype of AC (intestinal, pancreatobiliary, or other) had no significant effect on survival of patients with OPN- carcinomas. However, for patients with OPN+ carcinomas, those with intestinal AC or other subtype of AC had significantly better survival than those with pancreatobiliary AC. Thus, OPN expression appears to affect the biological behavior of AC, and this effect depends on the anatomical origin of the tumor. These results indicate that determination of the prognosis of patients with AC should consider OPN expression. Previous studies have reported interactions of OPN with other proteins. For example, in situ proximity ligation analysis indicated a molecular interaction of OPN and CD44 and that elevated expression of these proteins were associated with increased mitosis and significantly enhanced gastrointestinal stromal tumor cell proliferation in vitro [31]. Yang et al. reported that OPN combined with CD44 was a promising independent predictor of tumor recurrence and survival in patients with hepatocellular carcinoma [32]. OPN combined with CDX2 appears to predict survival of advanced gastric cancer patients, and CDX2 may be a transcription factor that modulates the expression of osteopontin [33]. Our results support previous reports which suggest that OPN has a role in the pathogenesis of AC [25-27]. However, a limitation of our study is that patients were enrolled retrospectively, and we did not include histomorphological classification of patients or immunochemical determination based on other factors including CDX2 and mucins. Clearly, the potential interaction of OPN, CK7, CDX2, mucins and other factors and the role of these in the pathogenesis of AC warrant further studies. Conclusions: In conclusion, our results indicate that it is difficult to determine prognosis of patients with AC based solely on immunohistochemical classification that considers CK7 and CK20 status. However, the additional consideration of OPN status allows determination of prognosis. Our results also suggest that OPN plays a role in the pathogenesis of AC, but its mechanisms and relationship with CK7 and CK20 warrant further studies.
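The survival comparisons reported above were run in SPSS (Kaplan-Meier estimates compared with log-rank tests, stratified by subtype and OPN status). A minimal sketch of an equivalent analysis, assuming the Python `lifelines` package and a hypothetical table with `months`, `died`, `subtype`, and `opn_positive` columns (the rows below are placeholders, not study data), might look like this:

```python
# Minimal sketch only; the original analysis was performed in SPSS.
# Assumes a hypothetical DataFrame with follow-up time in months, an event
# indicator (1 = died), the CK7/CK20 subtype, and OPN status per patient.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.DataFrame({
    "months":       [23, 38, 64, 12, 45, 70, 30, 55],
    "died":         [1, 1, 0, 1, 0, 0, 1, 1],
    "subtype":      ["pancreatobiliary", "intestinal", "other",
                     "pancreatobiliary", "intestinal", "other",
                     "pancreatobiliary", "intestinal"],
    "opn_positive": [True, True, True, False, False, False, True, False],
})

# Kaplan-Meier estimate per subtype among OPN-positive cases
opn_pos = df[df["opn_positive"]]
for name, grp in opn_pos.groupby("subtype"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["months"], event_observed=grp["died"], label=name)

# Log-rank test across the three subtypes (OPN-positive cases only)
result = multivariate_logrank_test(opn_pos["months"], opn_pos["subtype"], opn_pos["died"])
print(result.p_value)
```

Pairwise comparisons such as pancreatobiliary versus intestinal among OPN-positive cases would use `lifelines.statistics.logrank_test` on the two groups in the same way.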
Background: Ampullary cancer (AC) was classified as pancreatobiliary, intestinal, or other subtype based on the expression of cytokeratin 7 (CK7) and cytokeratin 20 (CK20). We aimed to explore the association of AC subtype with patient prognosis. Methods: The relationship of AC subtype and expression of Osteopontin (OPN) with the prognosis of 120 AC patients after pancreaticoduodenectomy was investigated. Results: The patients had pancreatobiliary (CK7+/CK20-, n = 24, 20%), intestinal (CK7-/CK20+, n = 29, 24.2%) or other (CK7+/CK20+ or CK7-/CK20-, n = 67, 55.8%) subtypes of AC, and their median survival times were 23 ± 4.2, 38 ± 2.8 and 64 ± 16.8 months, respectively. The survival times of 64 OPN- patients (53.3%) and 56 OPN+ patients (46.7%) were 69 ± 18.4 and 36 ± 1.3 months, respectively. There was no significant effect of AC subtype on survival of OPN- patients. For OPN+ patients, those with pancreatobiliary AC had a shorter survival time (22 ± 6.6 months) than those with intestinal AC (37 ± 1.4 months, p = 0.041), and other AC subtype (36 ± 0.9 months, p = 0.010); intestinal and other AC subtypes had similar survival times. Conclusions: The prognosis of AC patients can be estimated based on immunohistochemical classification and OPN status.
Background: Ampullary carcinoma (AC) is a relatively rare tumor of the hepatopancreatic ampulla that accounts for approximately 0.2% of gastrointestinal tract malignancies and 7% of periampullary carcinomas [1]. ACs have different anatomical origins. Kimura et al. initially classified AC as pancreatobiliary AC if it had papillary projections with scant fibrous cores and as intestinal AC if it resembled tubular adenocarcinoma of the stomach or colon [2]. Numerous studies have reported that intestinal AC is associated with a better prognosis than pancreatobiliary AC [1-5]. AC has also been classified based on immunohistochemical expression of cytokeratin 7 (CK7), Mucins and CDX2 [4,6-8] and HNF4α [9]. However, the clinical significance and survival rates of AC patients with these different immunohistochemical subtypes have not been definitely established. Histologic classification and immunohistochemical characterization by cytokeratins are in good agreement [5]. Fischer et al. reported that the histological subtypes of AC could be determined by the expression of CK7, CK20, and MUC2; pancreaticobiliary AC is CK7+/CK20-/MUC2-, and intestinal AC is CK7-/CK20+/MUC2+ [10]. Zhou et al. classified CK7-/CK20+ tumors as intestinal AC, CK7+/CK20- tumors as pancreatobiliary AC, and tumors that are CK7+/CK20+ or CK7-/CK20- as "other" [3]. However, there was no statistical difference in survival of patients with different CK7/CK20 subtypes [3] or with different CK20/MUC subtypes [11]. Osteopontin (OPN) is a secretory calcium-binding phosphorylated glycoprotein and plays an important role in bone metabolism. OPN is widely distributed in the urine, blood, gastrointestinal tract, pancreas, lungs and elsewhere. At the molecular level, OPN plays important roles in cellular adhesion and migration, tissue repair, and signal transduction and also in the invasion and metastasis of several cancers [12]. OPN is significantly associated with survival rate in several cancers and has value as a marker of clinical tumor progression [13,14]. In particular, low OPN levels were significantly associated with a favorable prognosis in patients with advanced non-small cell lung cancer [15], laryngeal and hypopharyngeal carcinomas [16], hepatocellular carcinoma [17], colorectal cancer [18], idiopathic pulmonary hypertension [19], upper urinary tract urothelial carcinoma [20], acute myeloid leukemia [21], oral squamous cell carcinoma [22], and endometrial cancer [23]. OPN may also be a suitable biomarker for overall survival and renal outcome of patients who are critically ill with acute kidney injury [24]. However, few studies have investigated the expression of OPN in patients with AC. Van Heek et al. reported higher OPN expression in the sera and tumors of AC patients than in the sera and duodenal samples of healthy controls [25]. Bloomston et al. reported that node-negative status and lack of OPN expression were associated with prolonged survival in patients with AC [26]. Hsu et al. reported that expression of OPN and the presence of tumor-associated macrophages in bulky AC were associated with tumor recurrence, and poorer disease-specific survival [27]. In the present study, we retrospectively analyzed the clinical data of 120 patients who were undergoing pancreaticoduodenectomy due to AC. We focused on the association of AC prognosis with the expression of CK7, CK20, and OPN. 
Conclusions: XQZ carried out the study design, defined the intellectual content, participated in the literature research and manuscript preparation, analyzed the data, and edited the manuscript. JHD served as guarantor of integrity of the entire study, developed the study concepts, and reviewed the manuscript. WZZ carried out the clinical studies and acquired the data. ZL carried out the experimental studies and performed the statistical analysis. All authors read and approved the final manuscript.
Background: Ampullary cancer (AC) was classified as pancreatobiliary, intestinal, or other subtype based on the expression of cytokeratin 7 (CK7) and cytokeratin 20 (CK20). We aimed to explore the association of AC subtype with patient prognosis. Methods: The relationship of AC subtype and expression of Osteopontin (OPN) with the prognosis of 120 AC patients after pancreaticoduodenectomy was investigated. Results: The patients had pancreatobiliary (CK7+/CK20-, n = 24, 20%), intestinal (CK7-/CK20+, n = 29, 24.2%) or other (CK7+/CK20+ or CK7-/CK20-, n = 67, 55.8%) subtypes of AC, and their median survival times were 23 ± 4.2, 38 ± 2.8 and 64 ± 16.8 months, respectively. The survival times of 64 OPN- patients (53.3%) and 56 OPN+ patients (46.7%) were 69 ± 18.4 and 36 ± 1.3 months, respectively. There was no significant effect of AC subtype on survival of OPN- patients. For OPN+ patients, those with pancreatobiliary AC had a shorter survival time (22 ± 6.6 months) than those with intestinal AC (37 ± 1.4 months, p = 0.041), and other AC subtype (36 ± 0.9 months, p = 0.010); intestinal and other AC subtypes had similar survival times. Conclusions: The prognosis of AC patients can be estimated based on immunohistochemical classification and OPN status.
5,285
272
[ 638, 243, 179, 70, 1468, 174, 552, 845, 70 ]
10
[ "ac", "patients", "opn", "survival", "ck7", "ck20", "ck7 ck20", "intestinal", "expression", "survival time" ]
[ "ac intestinal pancreatobiliary", "pancreatobiliary intestinal ac", "ampullary cancer intestinal", "ampullary carcinoma ac", "pancreas cancers involving" ]
null
[CONTENT] ampullary cancer | osteopontin | survival analysis | immunohistochemistry | classification | Cytokeratin 20 | Cytokeratin 7 [SUMMARY]
[CONTENT] ampullary cancer | osteopontin | survival analysis | immunohistochemistry | classification | Cytokeratin 20 | Cytokeratin 7 [SUMMARY]
null
[CONTENT] ampullary cancer | osteopontin | survival analysis | immunohistochemistry | classification | Cytokeratin 20 | Cytokeratin 7 [SUMMARY]
[CONTENT] ampullary cancer | osteopontin | survival analysis | immunohistochemistry | classification | Cytokeratin 20 | Cytokeratin 7 [SUMMARY]
[CONTENT] ampullary cancer | osteopontin | survival analysis | immunohistochemistry | classification | Cytokeratin 20 | Cytokeratin 7 [SUMMARY]
[CONTENT] Adenocarcinoma | Ampulla of Vater | Biomarkers, Tumor | Common Bile Duct Neoplasms | Female | Humans | Immunohistochemistry | Intestinal Neoplasms | Kaplan-Meier Estimate | Keratin-20 | Keratin-7 | Male | Middle Aged | Osteopontin | Prognosis [SUMMARY]
[CONTENT] Adenocarcinoma | Ampulla of Vater | Biomarkers, Tumor | Common Bile Duct Neoplasms | Female | Humans | Immunohistochemistry | Intestinal Neoplasms | Kaplan-Meier Estimate | Keratin-20 | Keratin-7 | Male | Middle Aged | Osteopontin | Prognosis [SUMMARY]
null
[CONTENT] Adenocarcinoma | Ampulla of Vater | Biomarkers, Tumor | Common Bile Duct Neoplasms | Female | Humans | Immunohistochemistry | Intestinal Neoplasms | Kaplan-Meier Estimate | Keratin-20 | Keratin-7 | Male | Middle Aged | Osteopontin | Prognosis [SUMMARY]
[CONTENT] Adenocarcinoma | Ampulla of Vater | Biomarkers, Tumor | Common Bile Duct Neoplasms | Female | Humans | Immunohistochemistry | Intestinal Neoplasms | Kaplan-Meier Estimate | Keratin-20 | Keratin-7 | Male | Middle Aged | Osteopontin | Prognosis [SUMMARY]
[CONTENT] Adenocarcinoma | Ampulla of Vater | Biomarkers, Tumor | Common Bile Duct Neoplasms | Female | Humans | Immunohistochemistry | Intestinal Neoplasms | Kaplan-Meier Estimate | Keratin-20 | Keratin-7 | Male | Middle Aged | Osteopontin | Prognosis [SUMMARY]
[CONTENT] ac intestinal pancreatobiliary | pancreatobiliary intestinal ac | ampullary cancer intestinal | ampullary carcinoma ac | pancreas cancers involving [SUMMARY]
[CONTENT] ac intestinal pancreatobiliary | pancreatobiliary intestinal ac | ampullary cancer intestinal | ampullary carcinoma ac | pancreas cancers involving [SUMMARY]
null
[CONTENT] ac intestinal pancreatobiliary | pancreatobiliary intestinal ac | ampullary cancer intestinal | ampullary carcinoma ac | pancreas cancers involving [SUMMARY]
[CONTENT] ac intestinal pancreatobiliary | pancreatobiliary intestinal ac | ampullary cancer intestinal | ampullary carcinoma ac | pancreas cancers involving [SUMMARY]
[CONTENT] ac intestinal pancreatobiliary | pancreatobiliary intestinal ac | ampullary cancer intestinal | ampullary carcinoma ac | pancreas cancers involving [SUMMARY]
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | intestinal | expression | survival time [SUMMARY]
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | intestinal | expression | survival time [SUMMARY]
null
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | intestinal | expression | survival time [SUMMARY]
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | intestinal | expression | survival time [SUMMARY]
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | intestinal | expression | survival time [SUMMARY]
[CONTENT] ac | opn | ck7 | ck20 | expression | ck7 ck20 | reported | tumors | associated | patients [SUMMARY]
[CONTENT] cancer | positive | staining | cells | duodenal | considered | surgery | disease | follow | spss [SUMMARY]
null
[CONTENT] status | prognosis | results | patients ac based | considers ck7 ck20 | considers ck7 ck20 status | mechanisms | mechanisms relationship | mechanisms relationship ck7 | mechanisms relationship ck7 ck20 [SUMMARY]
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | expression | intestinal | time [SUMMARY]
[CONTENT] ac | patients | opn | survival | ck7 | ck20 | ck7 ck20 | expression | intestinal | time [SUMMARY]
[CONTENT] AC | 7 | 20 ||| AC [SUMMARY]
[CONTENT] AC | Osteopontin | OPN | 120 [SUMMARY]
null
[CONTENT] AC | OPN [SUMMARY]
[CONTENT] AC | 7 | 20 ||| AC ||| AC | Osteopontin | OPN | 120 ||| ||| CK7+/CK20- | 24 | 20% | 29 | 24.2% | 67 | 55.8% | AC | 23 | 38 | 2.8 | 16.8 months ||| 64 | OPN- | 53.3% | 56 | 46.7% | 69 | 18.4 | 36 ± 1.3 months ||| AC | OPN- ||| OPN+ | AC | 6.6 months | AC | 37 | 1.4 months | 0.041 | AC | 36 ± 0.9 months | 0.010 | AC ||| AC | OPN [SUMMARY]
[CONTENT] AC | 7 | 20 ||| AC ||| AC | Osteopontin | OPN | 120 ||| ||| CK7+/CK20- | 24 | 20% | 29 | 24.2% | 67 | 55.8% | AC | 23 | 38 | 2.8 | 16.8 months ||| 64 | OPN- | 53.3% | 56 | 46.7% | 69 | 18.4 | 36 ± 1.3 months ||| AC | OPN- ||| OPN+ | AC | 6.6 months | AC | 37 | 1.4 months | 0.041 | AC | 36 ± 0.9 months | 0.010 | AC ||| AC | OPN [SUMMARY]
Identification of putative interactions between swine and human influenza A virus nucleoprotein and human host proteins.
25547032
Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. IAVs are enveloped, negative-sense, single-stranded RNA viruses whose genome encodes at least ten proteins. The IAV nucleoprotein (NP) is a structural protein that associates with the viral RNA and is essential for virus replication. Understanding how IAVs interact with host proteins is essential for elucidating all of the required processes for viral replication, restrictions in species host range, and potential targets for antiviral therapies.
BACKGROUND
In this study, the NP from a swine IAV was cloned into a yeast two-hybrid "bait" vector for expression of a yeast Gal4 binding domain (BD)-NP fusion protein. This "bait" was used to screen a Y2H human HeLa cell "prey" library which consisted of human proteins fused to the Gal4 protein's activation domain (AD). The interaction of "bait" and "prey" proteins resulted in activation of reporter genes.
METHODS
Seventeen positive bait-prey interactions were isolated in yeast. All of the "prey" isolated also interact in yeast with a NP "bait" cloned from a human IAV strain. Isolation and sequence analysis of the cDNAs encoding the human prey proteins revealed ten different human proteins. These host proteins are involved in various host cell processes and structures, including purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B).
RESULTS
Ten human proteins were identified as interacting with IAV NP in a Y2H screen. Some of these human proteins were reported in previous screens aimed at elucidating host proteins relevant to specific viral life cycle processes such as replication. This study extends previous findings by suggesting a mechanism by which these host proteins associate with the IAV, i.e., physical interaction with NP. Furthermore, this study revealed novel host protein-NP interactions in yeast.
CONCLUSIONS
[ "Animals", "Genes, Reporter", "HeLa Cells", "Host-Pathogen Interactions", "Humans", "Influenzavirus A", "Nucleocapsid Proteins", "Protein Interaction Mapping", "RNA-Binding Proteins", "Swine", "Two-Hybrid System Techniques", "Viral Core Proteins" ]
4297426
Background
Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. In humans, seasonal IAV infection presents as a non-fatal, uncomplicated, acute infection characterized by upper respiratory symptoms as well as fever, headache, soreness, and fatigue lasting 2–5 days [1]. However, deaths from seasonal IAV infections often arise when the normal flu symptoms are exacerbated by compromised immunity or age [1]. In addition to seasonal influenza A viruses, pandemic strains periodically appear, causing increased mortality rates. For example, the 1918 Spanish flu resulted in approximately 50 million human deaths [2]. The most recent pandemic resulted from the emergence of the swine-origin H1N1 virus [3,4]. Due to influenza A's potential for mortality, high mutation rates (resulting in genetic drift), and pandemic potential (resulting from genetic reassortment), it is critical to learn more about the virus, especially as it pertains to virulence, transmissibility, and the identification of potential targets for the development of therapeutics.

Influenza A viruses exhibit a broad host range beyond humans [5]. Waterfowl serve as the central reservoir species. In addition to humans, various IAV subtypes circulate in pigs, poultry, horses, and dogs, as well as other species. Subtypes (H1N1 and H3N2) that circulate in the human population also circulate in swine populations [6]. In addition to the concern for IAV in terms of swine health [7], IAV infection in swine is also important due to the potential for zoonotic infections, as well as the potential for swine to serve as a mixing vessel for the generation of human pandemic viruses [5]. The majority of inter-species transmission research has focused on amino acids in the hemagglutinin affecting viral attachment and entry [8]. Although known to be important, other than amino acids associated with temperature sensitivity (e.g., PB2 627 [9] or 701 [10]), the effects of differences in amino acid residues in proteins of the replication complex on species specificity are less well understood. Previous evidence suggests that some of these replication complex proteins may play a role in allowing zoonotic infections [11].

The genome of the enveloped influenza A virion consists of eight segments of negative-sense single-stranded RNA that encode at least ten viral genes: Hemagglutinin (HA), Neuraminidase (NA), Matrix 2 protein (M2), Matrix 1 protein (M1), Non-structural Protein 1 (NS1), Non-structural Protein 2 (NS2), Nucleoprotein (NP), Polymerase Basic 1 (PB-1), Polymerase Basic 2 (PB-2), and Polymerase Acidic (PA) [1]. The viral envelope contains the HA, NA, and M2 proteins, and M1 proteins form a layer inside the envelope. The IAV RNA genome located inside the virion is coated with NP and is bound by the replication complex consisting of PB1, PB2, and PA. The NP, the focus of this study, is encoded by the fifth IAV RNA segment and binds with high affinity to viral RNA [12]. NP plays a role in viral RNA replication and transcription [12]. More recent data suggest that NP binds the polymerase as well as the newly replicated RNA and may act as a processivity factor that is necessary for replication of the viral RNA to be completed [13]. Phylogenetic analysis of IAV NP shows distinct lineages of NP based on the host species [14,15]. Within the NP, there are amino acid signatures found within different host species [16,17].

These host-specific amino acid residues may result in differences in affinities for the various host proteins with which they interact (e.g., importin α1 [18], F-actin [19], nuclear factor 90 [20], cyclophilin E [21], exportin 1 [22], HMGB1 [23], HMGB2 [23], MxA/Mx1 [24,25], HSP40 [26], karyopherin alpha [27,28], clusterin [29], Raf-2p48/BAT1/UAP56 [30], Aly/REF [31], Tat-SF1 [32], TRIM22 [33], and alpha actinin 4 [34]), or they may result in differences in how the NP interacts with other viral proteins that have also made host-specific adaptations [11]. Given the suggestion that NP plays a role in determining host range [5,35-37], it is important to identify all host proteins that interact with NP.

Replication of the IAV is dependent on the host cell machinery and interactions between host proteins and viral proteins. Therefore, any inquiry into the life cycle of IAV must incorporate host-pathogen protein interactions in order to provide a truly mechanistic understanding of the process. Previous screens have identified important protein-protein interactions between IAV and host proteins, but results are often not consistent between screens. Therefore, corroborating evidence with additional screens is crucial to accumulating the best understanding of critical pathogen-host interactions in viral replication. As examples, several genome-wide screens have been performed using RNAi to systematically knock down host genes and evaluate the effect on various stages of the IAV life cycle [38-41]. An integrated approach carried out by Shapira and colleagues involved transcription profiling and yeast two-hybrid screens using specific viral proteins as baits [28]. Proteomics approaches have revealed host proteins found within viral particles [42,43]. The Random Homozygous Gene Perturbation strategy was used to identify host factors that prevent influenza-mediated killing of host cells [44].

In order to better understand the role of NP-host protein interactions in IAV replication, a Gal4-based yeast two-hybrid (Y2H) assay was used in this study. The Y2H system allows for the identification of unique human binding partners of NP. A "bait" plasmid encoding the binding domain (BD) of the Gal4 transcriptional activator fused to NP and a "prey" plasmid encoding Gal4's activation domain (AD) fused to a protein encoded by a human cDNA are introduced into a yeast strain containing Gal4-responsive reporter genes. Interaction of the bait and prey brings together Gal4's BD and AD, resulting in transcriptional activation of the reporter genes [45]. The Y2H approach has successfully identified cellular factors (e.g., Raf-2p48/BAT1/UAP56, Hsp40, KPNA1, KPNA3, KPNA6, C16orf45, GMCL1, MAGED1, MLH1, USHBP1, ZBTB25, CLU, Aly/REF, and ACTN4) that interact with the IAV NP [26-31,34] (Table 1). In this study, the nucleoprotein from a classical swine H1N1 IAV (Sw/NC/44173/00) was used as the "bait" in a Y2H screen against a "prey" HeLa cDNA library. To investigate whether the origin of NP affects the ability of the interactions to take place, the nucleoprotein from a contemporary human H3N2 IAV (A/Ca/07/04) was also used as the "bait" in a Y2H screen against the same prey HeLa cDNA library.
By determining the putative interaction partners between NP and human proteins, the possible functions of the newly found protein-protein interactions can be investigated further, and previously identified host factors can be verified and perhaps better characterized.

Table 1. Summary of identified bait-prey interactions and previous reports of host protein association with influenza A virus

Gene | Protein description | Complete ORF | Number of prey clones | Prey clones | Previous reports with IAV
PAICS | Phosphoribosylaminoimidazole carboxylase | Yes | 3 | R3, R9, R15 | Karlas et al. [40]; Kumar and Nanduri [47]; Kroeker et al. [48]
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | No | 3 | R10, R13, R27 | N/A
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | Yes | 2 | R23, R33 | N/A
PA28B | Proteasome activator subunit 2 | Yes | 2 | R6, R19 | N/A
KCTD9 | Potassium channel tetramerisation domain containing 9 | No | 2 | R34, R36 | N/A
ACOT13 | Acyl-CoA thioesterase 13 | Yes | 1 | R11 | N/A
TRA2B | Transformer 2 beta homolog (Drosophila) | Yes | 1 | R12 | Zhu et al. [56]
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | No | 1 | R28 | Kumar and Nanduri [47]; Sui et al. [44]
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | No | 1 | R7 | Mi et al. [51]; Liu et al. [50]
CKAP5 | Cytoskeleton associated protein 5 | No | 1 | R29 | N/A

Host proteins identified as interacting with IAV nucleoprotein in previously published yeast two-hybrid screens:
BAT1/UAP56 | RNA-dependent ATPase | Momose et al. [30]
HSP40 | Heat shock protein 40 | Sharma et al. [26]
NPI-1/SRP1/KPNA1 | Karyopherin α1 | O'Neill and Palese [27]; Shapira et al. [28]
KPNA3 | Karyopherin α3 | Shapira et al. [28]
KPNA6 | Karyopherin α6 | Shapira et al. [28]
C16orf45 | - | Shapira et al. [28]
GMCL1 | Germ cell-less, spermatogenesis associated 1 | Shapira et al. [28]
MAGED1 | Melanoma antigen family D.1 | Shapira et al. [28]
MLH1 | MutL homolog 1 | Shapira et al. [28]
USHBP1 | Usher syndrome 1C binding protein 1 | Shapira et al. [28]
ZBTB25 | Zinc finger and BTB domain containing 25 | Shapira et al. [28]
CLU | Clusterin | Tripathi et al. [29]
ALY/REF | RNA export adaptor protein | Balasubramaniam et al. [31]
ACTN4 | Alpha-actinin 4 | Sharma et al. [34]

Reported herein are ten potential human host cell proteins that interact with IAV NP in a Y2H screen. The relative strengths of these protein-protein interactions were characterized by a plating assay and a beta-galactosidase assay.
null
null
Results
Constructing an NP bait and screening a human cDNA prey library

The nucleoprotein open reading frames from the classical swine IAV H1N1 strain A/Sw/NC/44173/00 and the contemporary human IAV H3N2 strain A/Ca/07/04 encode proteins with 89.1% identity and 94.8% similarity (Figure 1). Of the 49 amino acids that differ between the swine and human NPs, 29 are conservative differences. However, it is worth noting that several of these amino acid differences are signatures that differ between avian, swine, and human isolates [16,17]. The two NP ORFs were amplified by PCR, and the resulting PCR products were separately inserted by recombination cloning into the Y2H bait vector pGBKT7 as a C-terminal fusion with Gal4p's DNA binding domain. Correct constructs were confirmed by DNA sequence analysis. Gal4(BD)-NP fusion proteins isolated from yeast cells were detected as 77 kDa bands on a Western blot probed with an anti-Myc antibody (data not shown). When introduced alone into yeast Y2HGold cells, the swine and human NP bait constructs were not toxic to yeast and did not autoactivate the Y2H reporter genes, indicating they were suitable for use in a Y2H screen (data not shown).

Figure 1. Sequence alignment of nucleoproteins from swine and human influenza A virus strains. The swine and human NPs are 89.1% identical (yellow highlighting) and 94.8% similar (yellow and green highlighting). Nonconservative amino acid differences are represented in white. Amino acids with * represent signature amino acids associated with avian influenza A NP, and # and † represent signature amino acids associated with avian influenza A NP based on Chen et al. [16] and Pan et al. [17]. † denotes the only deviation of A/Sw/NC/44173/00 from the consensus swine residues based on Pan et al. [17].

Using a Y2H assay with Gal4(BD)-swine NP as a bait and a human HeLa cDNA prey library, approximately 2 × 10^6 clones were screened, resulting in 17 positive bait-prey interacting yeast strains. Sequence and bioinformatics analysis of the prey plasmids indicated that these 17 prey plasmids, listed by their initial positive interaction identification numbers (R3, R6, etc.), represent ten different human cDNAs (Table 1). An independent screen using Gal4(BD)-human NP as a bait and the same human HeLa cDNA prey library yielded a subset of these same host protein targets. Some of the identified clones contain partial cDNAs. Three independent prey plasmids contained the human cDNA for phosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS), and another three contained Myb/SANT-like DNA-binding domain containing 3 (MSANTD3). Two independent prey plasmids were isolated for each of three human cDNAs: proteasome activator subunit 2 (PA28B), potassium channel tetramerisation domain containing 9 (KCTD9), and an uncharacterized protein (FLJ30306). Each of the remaining five prey plasmids contained a different human cDNA.
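The identity and similarity percentages quoted above come from a pairwise alignment of the two NP sequences. As a rough sketch of how such percentages can be computed from an aligned pair, the following Python fragment counts identical positions and conservative substitutions; the short sequences and the conservative-substitution grouping are illustrative assumptions, not the alignment or scoring scheme used by the authors.

```python
# Illustrative conservative-substitution groups (a common Dayhoff-style grouping);
# the scheme used to call the 29 conservative differences in the paper is not restated here.
CONSERVATIVE_GROUPS = [
    set("AGST"), set("ILVM"), set("DENQ"), set("KRH"),
    set("FYW"), set("C"), set("P"),
]

def is_conservative(a, b):
    """True if two residues fall in the same illustrative conservative group."""
    return any(a in group and b in group for group in CONSERVATIVE_GROUPS)

def identity_similarity(aln1, aln2):
    """Percent identity and percent similarity over an aligned (gap-containing) pair."""
    assert len(aln1) == len(aln2), "aligned sequences must be the same length"
    positions = identical = similar = 0
    for a, b in zip(aln1, aln2):
        if a == "-" or b == "-":
            continue  # skip gap columns
        positions += 1
        if a == b:
            identical += 1
            similar += 1
        elif is_conservative(a, b):
            similar += 1
    return 100.0 * identical / positions, 100.0 * similar / positions

if __name__ == "__main__":
    # Toy aligned fragments, NOT the real swine/human NP sequences.
    swine_np = "MASQGTKRSYEQM"
    human_np = "MASQGTKRSYDQL"
    ident, sim = identity_similarity(swine_np, human_np)
    print(f"identity: {ident:.1f}%  similarity: {sim:.1f}%")
```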
Evaluation of the strength of bait-prey interactions

The strength of interaction between each NP bait (swine or human) and human prey in yeast was measured using a qualitative growth assay and a quantitative β-galactosidase activity assay. First, yeast growth on selective media after cotransformation with swine or human NP and one of the identified prey was tested at both 30°C and 37°C as a qualitative metric of the protein-protein interaction's strength (Figure 2, Table 2). All strains, including the positive and negative controls, grew on SD-Leu-Trp, which selects for the presence of both bait and prey plasmids. Most strains showed decreased growth at 37°C relative to the more permissive temperature of 30°C. Strains examined at 37°C that contained either the swine or human NP bait grew best in the presence of prey CKAP5, MSANTD3, or TRA2B. In contrast, at 37°C strains containing swine NP grew poorly with prey ACOT13, whereas strains containing human NP grew poorly with prey PA28B.

Figure 2. Maintenance of protein-protein interactions at 30°C and 37°C. Yeast cells containing both a bait plasmid and a prey plasmid were spotted in duplicate onto agar plates and incubated at 30°C or 37°C. SD-Leu-Trp media serves as a growth control, and SD-His is protein interaction reporter media. Growth on SD-His is represented as normal growth (++), reduced growth (+), and no growth (−) (see Table 2). NC and PC represent negative and positive controls, respectively. Additional yeast strain negative controls include h- (human NP bait and empty prey vector) and s- (swine NP bait and empty prey vector).

Table 2. Summary of affinity of bait-prey interactions (growth on SD-His agar plates* and β-gal activity**)

Gene | Protein description | Prey clone | Human NP, 30°C | Human NP, 37°C | Swine NP, 30°C | Swine NP, 37°C | β-Gal, human NP | β-Gal, swine NP
PAICS | Phosphoribosylaminoimidazole carboxylase | R15 | ++ | + | ++ | ++ | 2.2 | 30.4
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | R10 | ++ | ++ | ++ | ++ | 13.9 | 18.2
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | R23 | + | + | ++ | ++ | 1.9 | 5.4
PA28B | Proteasome (prosome, macropain) activator subunit 2 | R6 | ++ | − | ++ | + | 6.7 | 33.3
KCTD9 | Potassium channel tetramerisation domain containing 9 | R34 | + | + | ++ | ++ | 5.0 | 16.4
ACOT13 | Acyl-CoA thioesterase 13 | R11 | ++ | + | ++ | − | 6.9 | 15.5
TRA2B | Transformer 2 beta homolog (Drosophila) | R12 | ++ | ++ | ++ | ++ | 1.9 | 0.85
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | R28 | ++ | + | ++ | + | 82.9 | 119.9
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | R7 | ++ | + | ++ | ++ | 3.4 | 4.3
CKAP5 | Cytoskeleton associated protein 5 | R29 | ++ | ++ | ++ | ++ | 70.1 | 90.3

* Results from growth on SD-His protein interaction assay media are presented as: (++) normal growth, (+) reduced growth, and (−) no growth (Figure 2).
** Mean β-galactosidase activity from at least eight replicates, reported in Miller units (Figure 3).

The strength of the interactions between IAV NP and the host proteins identified by the library screen was also investigated by measuring β-gal activity in liquid yeast cultures (Figure 3, Table 2). As observed with the qualitative growth assay, interactions were observed for all ten prey proteins with both the swine and human NP. Of the ten interactions investigated, yeast strains containing swine or human NP and human prey SLC30A9 or CKAP5 showed the highest levels of β-gal activity compared with the other prey proteins examined. This is consistent with the growth of yeast strains containing human NP and SLC30A9 or CKAP5 on selective media at 37°C (Figure 2, Table 2). Even the strains with the lowest levels of β-gal activity observed (TRA2B and ATP1B1) had double the β-gal activity of the negative control. While the data suggest there may be differences in the strength of bait-prey interactions based on NP host origin, these differences should be interpreted carefully given the nature of the assay.

Figure 3. β-galactosidase activity in liquid culture. β-gal activity is a quantitative measure of the strength of interaction between bait swine or human NP and the prey human host proteins identified in the Y2H screen. The β-gal activity for each interacting bait-prey pair is expressed in Miller units and represents the mean and standard error of the mean of at least eight independent samples.
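The β-gal readout above is reported in Miller units, which the Methods compute as 1000 × OD420 / (t × V × OD600), where t is the elapsed reaction time and V is the dilution factor (0.25 in this study). As a minimal illustrative sketch of that per-sample calculation and the mean ± SEM summary across replicates, the Python below uses invented OD readings, not the study's data.

```python
from statistics import mean, stdev
from math import sqrt

def miller_units(od420, od600, t_min, dilution_factor):
    """Beta-galactosidase activity in Miller units.

    Formula from the Methods: 1000 * OD420 / (t * V * OD600), where t is the
    reaction time in minutes and V is the culture dilution factor (0.25 here).
    """
    return 1000.0 * od420 / (t_min * dilution_factor * od600)

def summarize(replicates):
    """Return the mean and standard error of the mean for a list of Miller-unit values."""
    m = mean(replicates)
    sem = stdev(replicates) / sqrt(len(replicates)) if len(replicates) > 1 else 0.0
    return m, sem

if __name__ == "__main__":
    # Hypothetical raw readings for eight replicates of one bait-prey pair:
    # (OD420, OD600, reaction time in minutes). NOT values from the paper.
    raw = [
        (0.41, 0.62, 30), (0.39, 0.58, 30), (0.44, 0.65, 30), (0.40, 0.60, 30),
        (0.38, 0.57, 30), (0.43, 0.66, 30), (0.42, 0.61, 30), (0.37, 0.59, 30),
    ]
    units = [miller_units(a420, a600, t, 0.25) for a420, a600, t in raw]
    m, sem = summarize(units)
    print(f"beta-gal activity: {m:.1f} +/- {sem:.1f} Miller units (n={len(units)})")
```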
Conclusions
A Y2H screen using a classical swine influenza A virus nucleoprotein as a bait resulted in isolation of ten different putative interacting human host proteins involved in a variety of cellular processes and structures: purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B). All of the identified human host proteins interacted in yeast with NP from both human and swine origin. Proteins PAICS, ATP1B1, TRA2B, and SLC30A9 have been previously identified in studies related to host response to the IAV. Host proteins SLC30A9 and CKAP5 displayed the strongest interactions with IAV NP in yeast in a quantitative β-gal assay.
[ "Constructing an NP bait and screening a human cDNA prey library", "Evaluation of the strength of bait-prey interactions", "Strains, plasmids, media, microbial growth conditions, and reagents", "Cloning Influenza A virus nucleoprotein into vector pGBKT7", "Yeast two-hybrid screen", "Testing potential positive prey plasmids for toxicity to yeast and autoactivation of the Y2H reporter genes", "Bioinformatics", "Maintenance of protein-protein interactions at 30°C and 37°C", "ß-galactosidase assay" ]
[ "The nucleoprotein open reading frame from the classical swine IAV H1N1 strain A/Sw/NC/44173/00 and contemporary human IAV H3N2 strain A/Ca/07/04 encode proteins with 89.1% identity and 94.8% similarity (Figure 1). Of the 49 amino acids that differ between the swine and human NPs, 29 are conservative differences. However, it is worth noting that several of these amino acid differences are signatures that differ between avian, swine and human isolates [16,17]. The two NP ORFs were amplified by PCR, and the resulting PCR products were separately inserted by recombination cloning into Y2H bait vector pGBKT7 as a C-terminal fusion with Gal4p’s DNA binding domain. Correct constructs were confirmed by DNA sequence analysis. Gal4(BD)-NP fusion proteins isolated from yeast cells were detected as 77 KDa bands on a Western blot probed with an anti-Myc antibody (data not shown). When introduced alone into yeast Y2HGold cells, the swine and human NP baits constructs were not toxic to yeast and did not autoactivate the Y2H reporter genes, indicating they were suitable for use in a Y2H screen (data not shown).Figure 1\nSequence alignment of nucleoproteins from swine and human influenza A virus strains. The swine and human NPs are 89.1% identical (yellow highlighting) and 94.8% similar (yellow and green highlighting). Nonconservative amino acids differences are represented in white. Amino acids with * represent signature amino acids associated with avian influenza A NP, and # and † represent signature amino acids associated with avian influenza A NP based on Chen et al. [16] and Pan et al. [17]. † Denotes the only deviation of A/Sw/NC 44173/00 from the consensus swine residues based on Pan et al. [17].\n\nSequence alignment of nucleoproteins from swine and human influenza A virus strains. The swine and human NPs are 89.1% identical (yellow highlighting) and 94.8% similar (yellow and green highlighting). Nonconservative amino acids differences are represented in white. Amino acids with * represent signature amino acids associated with avian influenza A NP, and # and † represent signature amino acids associated with avian influenza A NP based on Chen et al. [16] and Pan et al. [17]. † Denotes the only deviation of A/Sw/NC 44173/00 from the consensus swine residues based on Pan et al. [17].\nUsing a Y2H assay with Gal4 (BD)-swine NP as a bait and a human HeLa cDNA prey library, approximately 2 × 106 clones were screened resulting in 17 positive bait-prey interacting yeast strains. Sequence and bioinformatics analysis of the prey plasmids indicated that these 17 prey plasmids, listed by their initial positive interaction identification number (R3, R6, etc.), represent ten different human cDNAs (Table 1). An independent screen using Gal4 (BD)-human NP as a bait and a human HeLa cDNA prey library yielded a subset of these same host protein targets. Some of the identified clones contain partial cDNAs. Three independent prey plasmids isolated contained human cDNAs phosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS) and another three contained Myb/SANT-like DNA-binding domain containing 3 (MSANTD3). Two independent prey plasmids were isolated for each of three human cDNAs: proteasome activator subunit 2 (PA28B), potassium channel tetramerisation domain (KCTD9), and an uncharacterized protein (FLJ30306). 
Each of the remaining five prey plasmids contained different human cDNAs.", "The strength of interaction between each NP bait (swine or human) and human prey in yeast was measured using a qualitative growth assay and a quantitative β-galactosidase activity assay. First, yeast growth on selective media after cotransformation with swine or human NP and one of the identified prey was tested at both 30°C and 37°C as a qualitative metric of the protein-protein interaction’s strength (Figure 2, Table 2). All strains, including the positive and negative controls, grew on SD-Leu-Trp which selects for the presence of both bait and prey plasmids. Most strains showed decreased growth at 37°C relative to the more permissive temperature 30°C. Strains examined at 37°C which contained either swine or human NP bait grew best in the presence of prey CKAP5, MSANTD3, or TRA2B. In contrast, at 37°C strains containing swine NP grew poorly with prey ACOT13, whereas strains containing human NP grew poorly with prey PA28B.Figure 2\nMaintenance of protein-protein interactions at 30°C and 37°C. Yeast cells containing both a bait plasmid and prey plasmid were spotted in duplicate onto agar plates and incubated at 30°C or 37°C. SD-Leu-Trp media serves as a growth control, and SD-His is protein interaction reporter media. Growth on SD-His is represented as normal growth (++), reduced growth (+), and no growth (−) (see Table 2). NC and PC represent negative and positive controls, respectively. Additional yeast strain negative controls include h- (human NP bait and empty prey vector) and s- (swine NP bait and empty prey vector).Table 2\nSummary of affinity of bait-prey interactions\n\nGrowth on SD-His agar plates*\n\nβ-Gal activity**\n\nGene\n\nProtein description\n\nPrey clone\n\nHuman (30°C)\n\nHuman (37°C)\n\nSwine (30°C)\n\nSwine (37°C)\n\nHuman NP\n\nSwine NP\nPAICSPhosphoribosylaminoimidazole carboxylaseR15(++)(+)(++)(++)2.230.4MSANTD3Myb/SANT-like DNA-binding domain containing 3R10(++)(++)(++)(++)13.918.2FLJ30306PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNAR23(+)(+)(++)(++)1.95.4PA28BProteasome (prosome, macropain) activator subunit 2R6(++)(−)(++)(+)6.733.3KCTD9Potassium channel tetramerisation domain containing 9R34(+)(+)(++)(++)5.016.4ACOT13Acyl-CoA thioesterase 13R11(++)(+)(++)(−)6.915.5TRA2BTransformer 2 beta homolog (Drosophila)R12(++)(++)(++)(++)1.90.85SLC30A9Solute carrier family 30 (zinc transporter), member 9R28(++)(+)(++)(+)82.9119.9ATP1B1ATPase, Na+/K+ transporting, beta 1 polypeptideR7(++)(+)(++)(++)3.44.3CKAP5Cytoskeleton associated protein 5R29(++)(++)(++)(++)70.190.3* Results from growth on SD-His protein interaction assay media are presented as: (++) normal growth, (+) reduced growth, and (-) no growth (Figure 2).** Mean β-galalactosidase activity from at least eight replicates reported in Miller units (Figure 3).\n\nMaintenance of protein-protein interactions at 30°C and 37°C. Yeast cells containing both a bait plasmid and prey plasmid were spotted in duplicate onto agar plates and incubated at 30°C or 37°C. SD-Leu-Trp media serves as a growth control, and SD-His is protein interaction reporter media. Growth on SD-His is represented as normal growth (++), reduced growth (+), and no growth (−) (see Table 2). NC and PC represent negative and positive controls, respectively. 
Additional yeast strain negative controls include h- (human NP bait and empty prey vector) and s- (swine NP bait and empty prey vector).\n\nSummary of affinity of bait-prey interactions\n\n* Results from growth on SD-His protein interaction assay media are presented as: (++) normal growth, (+) reduced growth, and (-) no growth (Figure 2).\n** Mean β-galalactosidase activity from at least eight replicates reported in Miller units (Figure 3).\nThe strength of the interactions between IAV NP and host proteins identified by the library screen was also investigated by measuring the β-gal activity in liquid yeast cultures (Figure 3, Table 2). As observed with the qualitative growth assay, for all ten prey proteins, interactions were observed with both the swine and human NP. Of the ten interactions investigated, yeast strains containing swine or human NP and human prey SLC30A9 and CKAP3 showed the highest levels of β-gal activity compared to the other prey proteins examined. This is consistent with the growth of yeast strains containing human NP and SLC30A9 and CKAP3 on selective media at 37°C ( Figure 2, Table 2). Even strains with the lowest levels of β-gal activity observed (TRA2B and ATP1B1) have double the β-gal activity of the negative control. While the data suggest there may be differences in the strength of bait-prey interactions based on NP host origin, these differences should be interpreted carefully given the nature of the assay.Figure 3\nβ-galactosidase activity in liquid culture. Β-gal activity is a quantitative measure of the strength of interaction between bait swine or human NP and prey human host proteins identified in the Y2H screen. The β-gal activity for each interacting bait-prey pair is expressed in Miller units and represents the mean and standard error of the mean of at least eight independent samples.\n\nβ-galactosidase activity in liquid culture. Β-gal activity is a quantitative measure of the strength of interaction between bait swine or human NP and prey human host proteins identified in the Y2H screen. The β-gal activity for each interacting bait-prey pair is expressed in Miller units and represents the mean and standard error of the mean of at least eight independent samples.", "Unless indicated otherwise, yeast strains (Table 3) were grown in liquid culture at 30°C at 180 rpm and on agar plates at 30°C. Bacterial strains were grown in liquid culture at 37°C at 200 rpm and on agar plates at 37°C. Selectable markers for the bait (pGBKT7) and prey (pGADT7) plasmids in the yeast Saccharomyces cerevisiae and bacterium E. coli are listed in Table 3. 
All media and Y2H reagents were purchased from Clontech (Mountain View, CA).Table 3\nStrains and plasmids\n\nYeast strain\n\nGenotype\n\nReporter genes\nY2HGold\nMATa, trp1-901, leu2-3, 112, ura3-52, his3-200, gal4Δ, gal80Δ, LYS2::GAL1UAS–Gal1TATA–His3, GAL2UAS–Gal2TATA–Ade2 URA3::MEL1UAS–Mel1TATA, AUR1-C MEL1\n\nADE2, HIS3, MEL1, AUR1-C\nY187\nade2-101, trp1-901, leu2-3, 112, gal4Δ, gal80Δ, met–, URA3::GAL1UAS–Gal1TATA–LacZ, MEL1\n\nMEL1, LacZ\n\nPlasmid\n\nSelectable markers (yeast, bacteria)\n\nAdditional information\nPlasmid pGBKT7 (bait vector)\nTRP1, Kan\nGAL4(1–147)DNA-BD, TRP1, kanr, c-Myc epitope tagPlasmid pGADT7 (prey vector)\nLEU2, Amp\nGAL4(768–881)AD, LEU2, ampr, HA epitope tag\n\nStrains and plasmids\n", "The open reading frame (ORF) of the nucleoprotein gene from swine IAV H1N1 strain A/Sw/NC/44173/00 was amplified from the plasmid pScript A/Sw/NC/44173/00 NP (a gift from Christopher Olsen) by PCR using forward primer NP3F 5’-CATGGAGGCCGAATTCatggcgtctcaaggcaccaaacga-3’ and reverse primer NP2R 5’-GCAGGTCGACGGATCCattgtcatactcctctgcattgtctccgaaga-3’. The NP ORF from the human IAV H3N2 strain A/Ca/07/04 was amplified from plasmid pScript A/Ca/07/04 NP (a gift from Christopher Olsen) by PCR using forward primer NP2F 5’-CATGGAGGCCGAATTC atggcgtcccaaggcaccaaacg-3’ and reverse primer NP3R 5’-GCAGGTCGACGGATCC attgtcgtactcttctgcattgtctccgaaga-3’. For all primers the uppercase letters represent sequences homologous to the BamHI-EcoRI-linearized vector pGBKT7, and lower case letters represent sequences specific for the NP ORF. Reactions included 200 ng plasmid template DNA, 1X Advantage HD Buffer (Clontech, Mountain View, CA), 0.2 mM dNTPs, 0.25 μM forward and reverse primers, and 0.625 Units Advantage HD Polymerase (Clontech, Mountain View, CA). Thermocycler conditions were as follows: 95°C for 3 minutes; 30 cycles of [95°C for 15 seconds, 55°C for 5 seconds, 72°C for 100 seconds], 72°C for 10 minutes.\nInFusion cloning (Clontech, Mountain View, CA) was used to insert the NP ORFs into BamHI-EcoRI-linearized pGBKT7 3’ and in-frame with DNA encoding Gal4’s DNA binding domain (BD). Conditions were as follows: 60 ng gel-purified BamHI-EcoRI-linearized pGBKT7, 50 ng gel-purified NP PCR product, and 1X InFusion HD Enzyme Premix in a 5 μl reaction were incubated at 50°C for 15 min. An aliquot of the cloning reaction was transformed into Stellar competent cells per the manufacturer’s directions (Clontech, Mountain View, CA). All plasmid sequences were verified by DNA sequencing.", "The Yeast Two-Hybrid MatchMaker Gold system (Clontech, Mountain View, CA) was used to isolate prey plasmids that interact with bait pGBKT7-NP. A 50 ml SD-TRP liquid media culture inoculated with Y2HGold cells containing the bait plasmid pGBKT7-NP was grown in a 200 rpm shaker at 30 degrees to an OD600 of 0.8 (approximately 18 hours). Bait cells were pelleted by centrifugation, resuspended in 4 ml SD-TRP, and mixed with 45 ml YPDA in a 2 L flask with 2 × 107 Y187 cells containing a commercially-available normalized HeLa S3 Mate and Plate cDNA prey library (Clontech, Mountain View, CA). Bait and prey cells were mated by slow shaking (40 rpm) for 24 hours at 30°C. After 20 hours, mating cells were observed microscopically for the presence of zygotes. Mated cells were centrifuged 1,000 × g for 10 min, and the pellet was resuspended in 5 ml 0.5X YPDA containing 50 μg/ml kanamycin. Mated cells (0.2 ml) were spread on each of 55 SD-LEU-TRP/X-a-Gal/125 ng/ml AbA agar plates, and plates were incubated 30°C for 5–8 days. 
This concentration of AbA is considered high stringency and could exclude isolation of low-affinity interactors. The mating efficiency was determined by plating 1:1,000 and 1:10,000 dilutions on SD-LEU, SD-TRP, and SD-LEU-TRP agar plates.\nBlue colonies from the Y2H screen SD-LEU-TRP/X-a-Gal/AbA plates were picked as small patches to an SD-LEU-TRP plate, grown 2 days at 30°C, and replica plated to agar media (SD-ADE, SD-HIS, SD-LEU-TRP/X-a-Gal, SD-LEU-TRP/AbA) to test for activation of four reporter genes: ADE2, HIS3, MEL1C, and AUR1-C, respectively. Replica plates were incubated at 30°C and observed daily for 3 days. Cells from potential positive interactors (growing on all types of reporter media and blue/light blue on SD-LEU-TRP/X-α-Gal) were single-colony purified on SD-LEU-TRP/X-a-Gal and incubated at 30°C for 4 days. A single blue colony from each potential positive interaction strain was single-colony purified a second time on SD-LEU-TRP/X-α-Gal.\nFollowing two rounds of streaking, the prey plasmid from each potential positive interaction yeast strain was isolated using the “Easy Yeast Plasmid Isolation Kit” (Clontech, Mountain View, CA). The resulting yeast plasmid DNAs were transformed into and subsequently re-isolated from E. coli cells.", "Isolated prey plasmid DNAs were re-introduced into Y2HGold cells using a yeast transformation system (Geno Technology). Y2HGold cells containing prey plasmids were single-colony purified on SD-Leu agar plates alongside a control (Y2HGold containing the empty prey vector pGADT7), and cell growth was observed after 3 days at 30C to test for toxicity. To test for false positives, patches of Y2HGold cells containing prey plasmids were replica plated to assess autoactivation of reporter genes on reporter media (SD-Ade, SD-His, SD-Leu-Trp + X-α-Gal, SD-Leu-Trp + AbA). Prey strains determined to be false positives in yeast were excluded from the pool of identified interacting host proteins.", "The 5’ and 3’ ends of human cDNA within each confirmed positive prey plasmid were determined by sequencing at the Iowa State University (ISU) DNA Facility using primers T7-1 (AATACGACTCACTATAG) and 3’AD (AGATGGTGCACGATGCACAG) or 3’ poly-TA (TTTTTTTTTTTTTTTTTTTTTTTTTA), respectively. The sequence data was used in a BLAST to query the NCBI human genome and human transcript databases.", "Y2HGold yeast cells were separately co-transformed with a bait plasmid (either pGBKT7 bait vector containing NP from human influenza A strain H3N2, A/Ca/07/04 or NP from swine influenza A strain H1N1, A/Sw/NC/44173/00) and a prey plasmid (pGADT7 containing one of the human cDNAs isolated as a positive interactor with human NP bait). Y2HGold cells containing known bait/prey interactors pGBKT7-53 and pGADT7-T and known non-interactors pGBKT7-Lam and pGADT7-T were used as positive and negative controls, respectively. Additional negative interaction controls included Y2HGold cells containing human or swine NP (described above) and an empty prey vector pGADT7.\nAll transformants were grown in YPD media in overnight cultures and were diluted to a concentration of 5 × 106 cells/ml (OD = 0.417). 200 ml of culture was placed into a 96 well plate. A 48 sample pinner was used to spot yeast onto 2 plates each of SD-Leu-Trp (control), and SD-His. One plate of each type was incubated at 30°C, the other was incubated at 37°C. Growth was assessed and images taken 4 days after the initial plating procedure. 
At least two replicates of each strain containing the bait (swine NP or human NP) and prey were tested on SD-Leu-Trp and SD-His at both 30°C and 37°C.", "Diploid cells containing bait and prey were prepared by mating Y187 cells containing each of the prey plasmids with Y2HGold cells containing each of the bait plasmids (swine NP or human NP) on YPD agar (MatchMaker Gold Y2H protocol, Clontech, Mountainview, CA). Eight replicate samples of diploids were selected by subsequent growth on SD-Leu-Trp agar for each protein interaction. Diploid cells containing bait/prey were inoculated in 2 mL SD-Leu/Trp liquid media and grown in a 200 rpm shaker at 30°C overnight. 2 mL of the overnight cultures were diluted to 4 mL with YPD media and grown in a 200 rpm shaker at 30°C until they reached mid-log phase (OD600 of 0.5-0.8). Cultures were separated into 1.5 mL aliquots, pelleted, washed once, and then resuspended in 300 μL of Z buffer (16.1 g/L Na2HPO4⋅7H2O, 5.5 g/L NaH2PO4⋅H2O, 0.75 g/L KCl, 0.246 g/L MgSO4⋅7H2O, pH7.0). Samples of 100 μL were lysed by three freeze/thaw cycles with liquid nitrogen and a 37°C water bath. After the cells were lysed, the sample was split into two 150 μL aliquots. A blank tube, containing 100uL of Z buffer, was also set up. To analyze the enzyme activity, 350 μL of Z buffer with 0.27% β-mercaptoethanol and 4 mg/ml of ONPG as substrate were added. The yellow color was then allowed to develop at 30°C, and the reaction was stopped by the addition of 200 μL of 1 M Na2CO3. To remove cell debris, the samples were centrifuged for 10 minutes at 16,000 RCF, and OD420 values were recorded. In order to express the enzyme activity, Miller Units were calculated using the formula: 1000*OD420/(t*V*OD600) where t refers to the elapsed time, and V refers to the dilution factor (0.25 in this case)." ]
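The yeast two-hybrid methods above determine mating efficiency from colony counts on dilution platings of SD-Leu (selects the LEU2-marked prey plasmid), SD-Trp (selects the TRP1-marked bait plasmid), and SD-Leu-Trp (selects diploids carrying both). As a hedged sketch of that bookkeeping, the Python below converts plate counts to cfu/ml and expresses diploids as a percentage of the limiting partner; the colony counts, plating volume, and the "percent of the limiting partner" convention follow common Matchmaker-style practice and are assumptions here, not numbers or a formula taken from this study.

```python
def cfu_per_ml(colonies, dilution, plated_volume_ml):
    """Convert a colony count on one plate to colony-forming units per ml of the mating mix.

    dilution is the dilution factor of the plated sample (e.g. 1e-4 for a 1:10,000 dilution).
    """
    return colonies / (dilution * plated_volume_ml)

def mating_efficiency(cfu_leu, cfu_trp, cfu_leu_trp):
    """Percent of the limiting (less abundant) parent that formed diploids.

    Assumption: viable diploids (SD-Leu-Trp) divided by the viable count of the
    limiting partner (the smaller of the SD-Leu and SD-Trp counts), times 100.
    """
    limiting = min(cfu_leu, cfu_trp)
    return 100.0 * cfu_leu_trp / limiting

if __name__ == "__main__":
    # Hypothetical colony counts from 0.1 ml platings of the stated dilutions.
    leu = cfu_per_ml(colonies=180, dilution=1e-4, plated_volume_ml=0.1)       # prey-marker (LEU2) viable cells
    trp = cfu_per_ml(colonies=95, dilution=1e-4, plated_volume_ml=0.1)        # bait-marker (TRP1) viable cells
    diploids = cfu_per_ml(colonies=120, dilution=1e-3, plated_volume_ml=0.1)  # SD-Leu-Trp diploids
    print(f"mating efficiency: {mating_efficiency(leu, trp, diploids):.1f}%")
```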
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Results", "Constructing an NP bait and screening a human cDNA prey library", "Evaluation of the strength of bait-prey interactions", "Discussion", "Conclusions", "Methods", "Strains, plasmids, media, microbial growth conditions, and reagents", "Cloning Influenza A virus nucleoprotein into vector pGBKT7", "Yeast two-hybrid screen", "Testing potential positive prey plasmids for toxicity to yeast and autoactivation of the Y2H reporter genes", "Bioinformatics", "Maintenance of protein-protein interactions at 30°C and 37°C", "ß-galactosidase assay" ]
[ "Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. In humans, seasonal IAV infection presents as a non-fatal, uncomplicated, acute infection characterized by the presence of upper respiratory symptoms as well as fever, headache, soreness and fatigue lasting 2–5 days [1]. However, deaths from seasonal IAV infections often arise when the normal flu symptoms are exacerbated by compromised immunity or age [1]. In addition to seasonal influenza A viruses, pandemic strains periodically appear causing increased mortality rates. For example, the 1918 Spanish flu resulted in approximately 50 million human deaths [2]. The most recent pandemic resulted from the emergence of the swine-origin H1N1 virus [3,4]. Due to influenza A’s potential for mortality, high mutation rates (resulting in genetic drift) and pandemic potential (resulting from genetic reassortment), it is critical to learn more about the virus, especially as it pertains to virulence, transmissibility, and identification of potential targets for development of therapeutics.\nInfluenza A viruses exhibit a broad host range beyond humans [5]. Waterfowl serve as the central reservoir species. In addition to humans, various IAV subtypes circulate in pigs, poultry, horses, dogs as well as other species. Subtypes (H1N1 and H3N2) that circulate in the human population also circulate in the swine populations [6]. In addition to the concern for IAV in terms of swine health [7], IAV infection in swine is also important due to the potential for zoonotic infections as well as the potential for serving as a mixing vessel for the generation of human pandemic viruses [5]. The majority of inter-species transmission research has focused on amino acids in the hemagglutinin affecting viral attachment and entry [8]. Although known to be important, other than amino acids associated with temperature sensitivity (e.g., PB2 627 [9] or 701 [10]), the effects of differences in amino acid residues in proteins of the replication complex on species specificity is less well understood. Previous evidence suggests that there may be a role in some of these replication complex proteins in allowing for zoonotic infections [11].\nThe genome of the enveloped influenza A virion consists of eight segments of negative-sense single-stranded RNA that encode at least ten viral genes: Hemagglutinin (HA), Neuraminidase (NA), Matrix 2 protein (M2), Matrix 1 protein (M1), Non-structural Protein 1 (NS1), Non-structural Protein 2 (NS2), Nucleoprotein (NP), Polymerase Basic 1 (PB-1), Polymerase Basic 2 (PB-2), and Polymerase Acidic (PA) [1]. The viral envelope contains HA, NA, and M2 proteins, and M1 proteins form a layer inside the envelope. The IAV RNA genome located inside the virion is coated with NP and is bound by the replication complex consisting of PB1, PB2, and PA. The NP, the focus of this study, encoded by the fifth IAV RNA segment, binds with high affinity to viral RNA [12]. NP plays a role in viral RNA replication and transcription [12]. More recent data suggests that NP binds the polymerase as well as the newly replicated RNA and may act as a processivity factor that is necessary for replication of the viral RNA to be completed [13]. Phylogenic analysis of IAV NP shows distinct lineages of NP based on the host species [14,15]. Within the NP, there are amino acid signatures found within different host species [16,17]. 
These host-specific amino acid residues may result in differences in affinities for the various host proteins with which they interact (e.g., importin α1 [18], F- actin [19], nuclear factor 90 [20], cyclophilin E [21], exportin 1 [22], HMGB1 [23], HMGB2 [23], MxA/Mx1 [24,25], HSP40 [26], karyopherin alpha [27,28], clusterin [29], Raf-2p48/BAT1/UAP56 [30], Aly/REF [31], Tat-SF1 [32], TRIM22 [33], and alpha actinin 4 [34]) or they may result in differences in how the NP interacts with other viral proteins that have also made host-specific adaptations [11]. Given the suggestion that NP plays a role in determining host range [5,35-37], it is important to identify all host proteins that interact with NP.\nReplication of the IAV is dependent on the host cell machinery and interactions between host proteins and viral proteins. Therefore any inquiry attempting to investigate the life cycle of IAV must incorporate host-pathogen protein interactions in order to truly provide a mechanistic understanding of the process. Previous screens have identified important protein-protein interactions between IAV and host proteins, but often results are not consistent between screens. Therefore, corroborating evidence with additional screens is crucial to accumulating the best understanding of critical pathogen-host interactions in viral replication. As examples, several genome-wide screens have been performed using RNAi to systematically knockdown host genes and evaluate the effect on various stages of the IAV life cycle [38-41]. An integrated approach carried out by Shapira and colleagues involved transcription profiling and yeast two hybrid screens using specific viral proteins as baits [28]. Proteomics approaches have revealed host proteins found within viral particles [42,43]. The Random Homozygous Gene Perturbation strategy was used to identify host factors that prevent influenza-mediated killing of host cells [44].\nIn order to better understand the role of NP-host protein interactions in IAV replication, a Gal4-based yeast two-hybrid (Y2H) assay was used in this study. The Y2H system allows for the identification of unique human binding partners with NP. A “bait” plasmid encoding the binding domain (BD) of the Gal4 transcriptional activator fused to NP and a “prey” plasmid encoding Gal4’s activation domain (AD) fused to a protein encoded by a human cDNA are introduced into a yeast strain containing Gal4-responsive reporter genes. Interaction of the bait and prey brings together Gal4’s BD and AD, resulting in transcriptional activation of reporter genes [45]. The Y2H approach has successfully identified cellular factors (e.g., Raf-2p48/BAT1/UAP56, Hsp40, KPNA1, KNPA3, KPNA6, C16orf45, GMCL1, MAGED1, MLH1, USHBP1, ZBTB25, CLU, Aly/REF, and ACTN4) that interact with the IAV NP [26-31,34], (Table 1). In this study, the nucleoprotein from a classical swine H1N1 IAV (Sw/NC/44173/00) was used as the “bait” in a Y2H screen against a “prey” HeLa cDNA library. To investigate if the origin of NP affects the ability of the interactions to take place, the nucleoprotein from a contemporary human H3N2 IAV (A/Ca/07/04) was also used as the “bait” in a Y2H screen against the same prey HeLa cDNA library. 
By identifying putative interaction partners between NP and human proteins, the possible functions of newly found protein-protein interactions can be investigated, and previously identified host factors can be verified and perhaps further characterized.

Table 1. Summary of identified bait-prey interactions and previous reports of host protein association with influenza A virus

Gene | Protein description | Complete ORF | Number of prey clones | Prey clones | Previous reports with IAV
PAICS | Phosphoribosylaminoimidazole carboxylase | Yes | 3 | R3, R9, R15 | Karlas et al. [40], Kumar and Nanduri [47], Kroeker et al. [48]
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | No | 3 | R10, R13, R27 | N/A
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | Yes | 2 | R23, R33 | N/A
PA28B | Proteasome activator subunit 2 | Yes | 2 | R6, R19 | N/A
KCTD9 | Potassium channel tetramerisation domain containing 9 | No | 2 | R34, R36 | N/A
ACOT13 | Acyl-CoA thioesterase 13 | Yes | 1 | R11 | N/A
TRA2B | Transformer 2 beta homolog (Drosophila) | Yes | 1 | R12 | Zhu et al. [56]
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | No | 1 | R28 | Kumar and Nanduri [47], Sui et al. [44]
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | No | 1 | R7 | Mi et al. [51], Liu et al. [50]
CKAP5 | Cytoskeleton associated protein 5 | No | 1 | R29 | N/A

Host proteins identified as interacting with IAV nucleoprotein in previously published yeast two-hybrid screens (ORF and clone columns not applicable):
BAT1/UAP56 | RNA-dependent ATPase | Momose et al. [30]
HSP40 | Heat shock protein 40 | Sharma et al. [26]
NPI-1/SRP1/KPNA1 | Karyopherin α1 | O'Neill and Palese [27], Shapira et al. [28]
KPNA3 | Karyopherin α3 | Shapira et al. [28]
KPNA6 | Karyopherin α6 | Shapira et al. [28]
C16orf45 | – | Shapira et al. [28]
GMCL1 | Germ cell-less, spermatogenesis associated 1 | Shapira et al. [28]
MAGED1 | Melanoma antigen family D, 1 | Shapira et al. [28]
MLH1 | MutL homolog 1 | Shapira et al. [28]
USHBP1 | Usher syndrome 1C binding protein 1 | Shapira et al. [28]
ZBTB25 | Zinc finger and BTB domain containing 25 | Shapira et al. [28]
CLU | Clusterin | Tripathi et al. [29]
ALY/REF | RNA export adaptor protein | Balasubramaniam et al. [31]
ACTN4 | Alpha-actinin 4 | Sharma et al. [34]

Reported herein are ten potential human host cell proteins that interact with IAV NP in a Y2H screen. The relative strengths of these protein-protein interactions were characterized by a plating assay and a beta-galactosidase assay.", " Constructing an NP bait and screening a human cDNA prey library
The nucleoprotein open reading frames from the classical swine IAV H1N1 strain A/Sw/NC/44173/00 and the contemporary human IAV H3N2 strain A/Ca/07/04 encode proteins with 89.1% identity and 94.8% similarity (Figure 1). Of the 49 amino acids that differ between the swine and human NPs, 29 are conservative differences. However, it is worth noting that several of these amino acid differences are signatures that differ between avian, swine, and human isolates [16,17]. The two NP ORFs were amplified by PCR, and the resulting PCR products were separately inserted by recombination cloning into the Y2H bait vector pGBKT7 as a C-terminal fusion with Gal4p's DNA binding domain. Correct constructs were confirmed by DNA sequence analysis. Gal4(BD)-NP fusion proteins isolated from yeast cells were detected as 77 kDa bands on a Western blot probed with an anti-Myc antibody (data not shown).
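As an aside, the identity and similarity percentages quoted above can be recomputed from a pairwise alignment with a few lines of code. The sketch below is illustrative only: it assumes pre-aligned, equal-length sequences, and the conservative-substitution groups are a generic approximation rather than the scoring scheme actually used to generate Figure 1.

```python
# Minimal sketch: recompute percent identity/similarity from a pairwise
# protein alignment. The conservative-substitution groups are illustrative
# (rough physico-chemical classes), not necessarily the scheme behind the
# 89.1%/94.8% figures reported for the swine and human NPs.

CONSERVATIVE_GROUPS = [
    set("AGST"), set("ILVM"), set("FWY"), set("KRH"),
    set("DENQ"), set("C"), set("P"),
]

def is_conservative(a: str, b: str) -> bool:
    """True if residues a and b fall into the same illustrative group."""
    return any(a in g and b in g for g in CONSERVATIVE_GROUPS)

def identity_similarity(aln1: str, aln2: str) -> tuple[float, float]:
    """Percent identity and similarity over aligned (equal-length) sequences."""
    assert len(aln1) == len(aln2), "sequences must be pre-aligned"
    positions = identical = similar = 0
    for a, b in zip(aln1, aln2):
        if a == "-" or b == "-":      # skip gap columns
            continue
        positions += 1
        if a == b:
            identical += 1
            similar += 1
        elif is_conservative(a, b):
            similar += 1
    return 100 * identical / positions, 100 * similar / positions

# Toy fragments (not the real NP sequences): one conservative E->D change.
ident, simil = identity_similarity("MASQGTKRSYEQM", "MASQGTKRSYDQM")
print(f"identity {ident:.1f}%, similarity {simil:.1f}%")
```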
When introduced alone into yeast Y2HGold cells, the swine and human NP bait constructs were not toxic to yeast and did not autoactivate the Y2H reporter genes, indicating they were suitable for use in a Y2H screen (data not shown).

Figure 1. Sequence alignment of nucleoproteins from swine and human influenza A virus strains. The swine and human NPs are 89.1% identical (yellow highlighting) and 94.8% similar (yellow and green highlighting). Nonconservative amino acid differences are represented in white. Amino acids marked with * represent signature amino acids associated with avian influenza A NP, and # and † mark signature amino acids associated with avian influenza A NP based on Chen et al. [16] and Pan et al. [17]. † denotes the only deviation of A/Sw/NC/44173/00 from the consensus swine residues based on Pan et al. [17].

Using a Y2H assay with Gal4(BD)-swine NP as bait and a human HeLa cDNA prey library, approximately 2 × 10^6 clones were screened, resulting in 17 positive bait-prey interacting yeast strains. Sequence and bioinformatics analysis of the prey plasmids indicated that these 17 prey plasmids, listed by their initial positive interaction identification number (R3, R6, etc.), represent ten different human cDNAs (Table 1). An independent screen using Gal4(BD)-human NP as bait against the same human HeLa cDNA prey library yielded a subset of these same host protein targets. Some of the identified clones contain partial cDNAs. Three independent prey plasmids contained the human cDNA for phosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS), and another three contained Myb/SANT-like DNA-binding domain containing 3 (MSANTD3). Two independent prey plasmids were isolated for each of three human cDNAs: proteasome activator subunit 2 (PA28B), potassium channel tetramerisation domain containing 9 (KCTD9), and an uncharacterized protein (FLJ30306). Each of the remaining five prey plasmids contained a different human cDNA.
 Evaluation of the strength of bait-prey interactions
The strength of interaction between each NP bait (swine or human) and human prey in yeast was measured using a qualitative growth assay and a quantitative β-galactosidase activity assay. First, yeast growth on selective media after cotransformation with swine or human NP and one of the identified preys was tested at both 30°C and 37°C as a qualitative measure of the strength of each protein-protein interaction (Figure 2, Table 2). All strains, including the positive and negative controls, grew on SD-Leu-Trp, which selects for the presence of both bait and prey plasmids. Most strains showed decreased growth at 37°C relative to the more permissive temperature of 30°C. Strains examined at 37°C that contained either the swine or human NP bait grew best in the presence of prey CKAP5, MSANTD3, or TRA2B.
In contrast, at 37°C strains containing swine NP grew poorly with prey ACOT13, whereas strains containing human NP grew poorly with prey PA28B.

Figure 2. Maintenance of protein-protein interactions at 30°C and 37°C. Yeast cells containing both a bait plasmid and a prey plasmid were spotted in duplicate onto agar plates and incubated at 30°C or 37°C. SD-Leu-Trp medium serves as a growth control, and SD-His is the protein interaction reporter medium. Growth on SD-His is scored as normal growth (++), reduced growth (+), or no growth (−) (see Table 2). NC and PC represent the negative and positive controls, respectively. Additional yeast strain negative controls include h- (human NP bait and empty prey vector) and s- (swine NP bait and empty prey vector).

Table 2. Summary of affinity of bait-prey interactions

Gene | Protein description | Prey clone | Human 30°C* | Human 37°C* | Swine 30°C* | Swine 37°C* | β-gal, human NP** | β-gal, swine NP**
PAICS | Phosphoribosylaminoimidazole carboxylase | R15 | ++ | + | ++ | ++ | 2.2 | 30.4
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | R10 | ++ | ++ | ++ | ++ | 13.9 | 18.2
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | R23 | + | + | ++ | ++ | 1.9 | 5.4
PA28B | Proteasome (prosome, macropain) activator subunit 2 | R6 | ++ | − | ++ | + | 6.7 | 33.3
KCTD9 | Potassium channel tetramerisation domain containing 9 | R34 | + | + | ++ | ++ | 5.0 | 16.4
ACOT13 | Acyl-CoA thioesterase 13 | R11 | ++ | + | ++ | − | 6.9 | 15.5
TRA2B | Transformer 2 beta homolog (Drosophila) | R12 | ++ | ++ | ++ | ++ | 1.9 | 0.85
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | R28 | ++ | + | ++ | + | 82.9 | 119.9
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | R7 | ++ | + | ++ | ++ | 3.4 | 4.3
CKAP5 | Cytoskeleton associated protein 5 | R29 | ++ | ++ | ++ | ++ | 70.1 | 90.3

* Growth on SD-His protein interaction assay medium with the human or swine NP bait at the indicated temperature: (++) normal growth, (+) reduced growth, (−) no growth (Figure 2).
** Mean β-galactosidase activity from at least eight replicates, reported in Miller units (Figure 3).

The strength of the interactions between IAV NP and the host proteins identified in the library screen was also investigated by measuring β-gal activity in liquid yeast cultures (Figure 3, Table 2). As observed with the qualitative growth assay, interactions were detected for all ten prey proteins with both the swine and human NP. Of the ten interactions investigated, yeast strains containing swine or human NP together with human prey SLC30A9 or CKAP5 showed the highest levels of β-gal activity compared to the other prey proteins examined.
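For convenience, the β-gal values transcribed from Table 2 can be ranked programmatically; the short sketch below uses the Miller-unit values copied from the table (no new data) and reproduces the ordering described in the text.

```python
# Sketch: Table 2 beta-gal values (Miller units) keyed by prey gene, used to
# rank interactions for each bait. Values transcribed from Table 2 above.

beta_gal = {          # gene: (human NP bait, swine NP bait)
    "PAICS":    (2.2,  30.4),
    "MSANTD3":  (13.9, 18.2),
    "FLJ30306": (1.9,  5.4),
    "PA28B":    (6.7,  33.3),
    "KCTD9":    (5.0,  16.4),
    "ACOT13":   (6.9,  15.5),
    "TRA2B":    (1.9,  0.85),
    "SLC30A9":  (82.9, 119.9),
    "ATP1B1":   (3.4,  4.3),
    "CKAP5":    (70.1, 90.3),
}

for bait_idx, bait in enumerate(["human NP", "swine NP"]):
    ranked = sorted(beta_gal.items(), key=lambda kv: kv[1][bait_idx], reverse=True)
    top = ", ".join(f"{gene} ({vals[bait_idx]})" for gene, vals in ranked[:3])
    print(f"{bait}: top prey by beta-gal -> {top}")
```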
This is consistent with the growth of yeast strains containing human NP and SLC30A9 or CKAP5 on selective media at 37°C (Figure 2, Table 2). Even the strains with the lowest β-gal activity observed (TRA2B and ATP1B1) showed double the β-gal activity of the negative control. While the data suggest there may be differences in the strength of bait-prey interactions based on the host origin of NP, these differences should be interpreted carefully given the nature of the assay.

Figure 3. β-galactosidase activity in liquid culture. β-gal activity is a quantitative measure of the strength of interaction between the swine or human NP bait and the prey human host proteins identified in the Y2H screen. The β-gal activity for each interacting bait-prey pair is expressed in Miller units and represents the mean and standard error of the mean of at least eight independent samples.", "The nucleoprotein open reading frames from the classical swine IAV H1N1 strain A/Sw/NC/44173/00 and the contemporary human IAV H3N2 strain A/Ca/07/04 encode proteins with 89.1% identity and 94.8% similarity (Figure 1). Of the 49 amino acids that differ between the swine and human NPs, 29 are conservative differences. However, several of these amino acid differences are signatures that differ between avian, swine, and human isolates [16,17]. The two NP ORFs were amplified by PCR, and the resulting PCR products were separately inserted by recombination cloning into the Y2H bait vector pGBKT7 as a C-terminal fusion with Gal4p's DNA binding domain. Correct constructs were confirmed by DNA sequence analysis. Gal4(BD)-NP fusion proteins isolated from yeast cells were detected as 77 kDa bands on a Western blot probed with an anti-Myc antibody (data not shown). When introduced alone into yeast Y2HGold cells, the swine and human NP bait constructs were not toxic to yeast and did not autoactivate the Y2H reporter genes, indicating they were suitable for use in a Y2H screen (data not shown).

Figure 1. Sequence alignment of nucleoproteins from swine and human influenza A virus strains. The swine and human NPs are 89.1% identical (yellow highlighting) and 94.8% similar (yellow and green highlighting). Nonconservative amino acid differences are represented in white. Amino acids marked with * represent signature amino acids associated with avian influenza A NP, and # and † mark signature amino acids associated with avian influenza A NP based on Chen et al. [16] and Pan et al. [17]. † denotes the only deviation of A/Sw/NC/44173/00 from the consensus swine residues based on Pan et al. [17].

Using a Y2H assay with Gal4(BD)-swine NP as bait and a human HeLa cDNA prey library, approximately 2 × 10^6 clones were screened, resulting in 17 positive bait-prey interacting yeast strains. Sequence and bioinformatics analysis of the prey plasmids indicated that these 17 prey plasmids, listed by their initial positive interaction identification number (R3, R6, etc.), represent ten different human cDNAs (Table 1). An independent screen using Gal4(BD)-human NP as bait against the same human HeLa cDNA prey library yielded a subset of these same host protein targets. Some of the identified clones contain partial cDNAs.
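The clone-to-gene collapse summarized in Table 1, and spelled out in the next sentences, can be expressed compactly; the sketch below simply tallies the 17 clone identifiers listed in Table 1 per gene.

```python
# Sketch: collapse the 17 positive prey clones to unique genes, mirroring the
# clone-to-gene mapping in Table 1 (clone identifiers transcribed from Table 1).
from collections import Counter

clone_to_gene = {
    "R3": "PAICS", "R9": "PAICS", "R15": "PAICS",
    "R10": "MSANTD3", "R13": "MSANTD3", "R27": "MSANTD3",
    "R23": "FLJ30306", "R33": "FLJ30306",
    "R6": "PA28B", "R19": "PA28B",
    "R34": "KCTD9", "R36": "KCTD9",
    "R11": "ACOT13", "R12": "TRA2B", "R28": "SLC30A9",
    "R7": "ATP1B1", "R29": "CKAP5",
}

counts = Counter(clone_to_gene.values())
print(f"{len(clone_to_gene)} positive clones -> {len(counts)} unique genes")
for gene, n in counts.most_common():
    print(f"{gene}: {n} independent clone(s)")
```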
Three independent prey plasmids contained the human cDNA for phosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS), and another three contained Myb/SANT-like DNA-binding domain containing 3 (MSANTD3). Two independent prey plasmids were isolated for each of three human cDNAs: proteasome activator subunit 2 (PA28B), potassium channel tetramerisation domain containing 9 (KCTD9), and an uncharacterized protein (FLJ30306). Each of the remaining five prey plasmids contained a different human cDNA.", "The strength of interaction between each NP bait (swine or human) and human prey in yeast was measured using a qualitative growth assay and a quantitative β-galactosidase activity assay. First, yeast growth on selective media after cotransformation with swine or human NP and one of the identified preys was tested at both 30°C and 37°C as a qualitative measure of the strength of each protein-protein interaction (Figure 2, Table 2). All strains, including the positive and negative controls, grew on SD-Leu-Trp, which selects for the presence of both bait and prey plasmids. Most strains showed decreased growth at 37°C relative to the more permissive temperature of 30°C. Strains examined at 37°C that contained either the swine or human NP bait grew best in the presence of prey CKAP5, MSANTD3, or TRA2B. In contrast, at 37°C strains containing swine NP grew poorly with prey ACOT13, whereas strains containing human NP grew poorly with prey PA28B.

Figure 2. Maintenance of protein-protein interactions at 30°C and 37°C. Yeast cells containing both a bait plasmid and a prey plasmid were spotted in duplicate onto agar plates and incubated at 30°C or 37°C. SD-Leu-Trp medium serves as a growth control, and SD-His is the protein interaction reporter medium. Growth on SD-His is scored as normal growth (++), reduced growth (+), or no growth (−) (see Table 2). NC and PC represent the negative and positive controls, respectively. Additional yeast strain negative controls include h- (human NP bait and empty prey vector) and s- (swine NP bait and empty prey vector).

Table 2. Summary of affinity of bait-prey interactions (shown above).

The strength of the interactions between IAV NP and the host proteins identified in the library screen was also investigated by measuring β-gal activity in liquid yeast cultures (Figure 3, Table 2). As observed with the qualitative growth assay, interactions were detected for all ten prey proteins with both the swine and human NP. Of the ten interactions investigated, yeast strains containing swine or human NP together with human prey SLC30A9 or CKAP5 showed the highest levels of β-gal activity compared to the other prey proteins examined. This is consistent with the growth of yeast strains containing human NP and SLC30A9 or CKAP5 on selective media at 37°C (Figure 2, Table 2). Even the strains with the lowest β-gal activity observed (TRA2B and ATP1B1) showed double the β-gal activity of the negative control. While the data suggest there may be differences in the strength of bait-prey interactions based on the host origin of NP, these differences should be interpreted carefully given the nature of the assay.

Figure 3. β-galactosidase activity in liquid culture. β-gal activity is a quantitative measure of the strength of interaction between the swine or human NP bait and the prey human host proteins identified in the Y2H screen. The β-gal activity for each interacting bait-prey pair is expressed in Miller units and represents the mean and standard error of the mean of at least eight independent samples.", "This Y2H screen identified multiple candidate host proteins that were previously identified in genome-wide screens as playing a role in the IAV infection cycle (PAICS, ATP1B1, SLC30A9, and TRA2B), while also identifying new potential interactors with the IAV nucleoprotein (ACOT13, CKAP5, KCTD9, PA28B, MSANTD3, and FLJ30306). These results complement previous reports by suggesting a mechanism for the involvement of these host proteins with IAV (i.e., interaction with NP). The yeast screen is internally validated by the observation that half of the prey proteins identified were independently isolated multiple times during the screen. All host prey proteins identified interact in yeast with IAV NP from both human and swine isolates. Although there can be variability in gene expression among yeast colonies containing the same combination of bait and prey, both a qualitative growth assay and a quantitative β-gal assay yielded similar results in terms of the strength of interaction between the IAV NP bait and the human prey proteins.
Notably, CKAP5 displayed good growth at the more stringent 37°C in the qualitative assay and also had high β-gal activity. The putative interacting host proteins represent a variety of cellular processes, consistent with the diversity of processes required during the IAV infection cycle.\nPhosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS) is an enzyme necessary to catalyze the sixth and seventh steps of the de novo purine biosynthesis pathway [46]. PAICS has been identified by two separate RNAi screens as being crucial to IAV replication [40,47]. Furthermore, proteins involved in purine biosynthesis, including PAICS, were found to be significantly up-regulated during an IAV infection [48]. The data presented herein suggest that the requirement for PAICS during IAV infection may be through the interaction with NP.\nThe sodium/potassium-transporting ATPase subunit beta-1 (ATP1B1) represents another host protein that interacts in yeast with NP and that has been previously shown to be important for influenza virus replication in host cells. It belongs to the family of Na+/K + −ATPase beta chain proteins. Na+/K + −ATPase is necessary for the creation and maintenance of electrochemical gradients of sodium and potassium ions on either side of the plasma membrane [49]. This gradient is used by cells to permit functional nervous and muscular excitability, osmoregulation, and active transport of numerous molecules [49]. The beta subunit functions to regulate the quantity of sodium pumps transported to the plasma membrane [49]. Using a quantitative proteomics approach, ATP1B1 was one of 43 proteins found to be significantly up-regulated in IAV-infected primary human alveolar macrophages [50]. In a Y2H study, Mi and colleagues [51] isolated ATP1B1 as a protein that binds the cytoplasmic domain of IAV M2 and subsequently showed that knockdown of ATP1B1 in MDCK cells suppressed IAV replication [51]. These studies suggest that ATP1B1 may be interacting with multiple IAV proteins.\nThe solute carrier family 30 (zinc transporter), member 9 (SLC30A9) is involved in nucleotide-excision repair of human DNA [52] and has an efflux motif characteristic of proteins in the SLC30 zinc efflux transporter family [53]. SLC30A9 protein found in the cytoplasm can bind nuclear receptors and/or nuclear receptor coactivators and translocate to the nucleus where it regulates gene transcription [54]. SLC30A9 was identified as a possible host target that confers upon the host cell resistance to IAV infection [44]. How SLC30A9 might interact with NP and be necessary for IAV replication is unclear.\nAnother protein identified by this screen is TRA2B, a tissue-specific splicing factor [55]. TRA2B protein binds to AGAA and CAA RNA sequences affecting the inclusion of introns in the processed transcript [55]. With respect to influenza, expression of TRA2B protein was reported to be down-regulated in porcine alveolar macrophage cells infected by two swine IAVs [56]. Because influenza genes M2 and NS2 are spliced, it is possible that TRA2B is important for the splicing of these genes. However, how NP might be involved in TRA2B’s function in IAV replication is unknown.\nAnother binding partner identified is Acyl CoA thioesterase 13 (ACOT13), a eukaryotic protein also known as thioesterase superfamily member 2 (Them2) [57,58]. ACOT13 catalyzes fatty acyl-CoA hydrolysis that exists primarily in association with mitochondria [57]. 
ACOT13/Them2 was identified in a Y2H screen focused on phosphatidylcholine transfer protein, suggestive of a role in fatty acid metabolism [57]. Although ACOT13 has not previously been reported to play a role in the IAV life cycle, another member of the ACOT family (ACOT9) which has two hot dog fold domains (ACOT13 has one) [59] has been shown to physically interact with the IAV PA protein [28]. How ACOT13 may be interacting with NP during IAV replication is unclear.\nCytoskeletal associated protein 5 (CKAP5, also known as TOG) binds microtubules and is important for the process of centrosomal microtubule assembly especially during mitosis [60]. The microtubule network of the cell plays a role in movement of materials throughout the cell, including during infection [61]. TOG2 binds to a ribonucleoprotein and plays a role in RNA trafficking [62]. While a specific role for CKAP5 in the IAV life cycle is unknown, a genome-wide siRNA screen revealed CKAP5 as one of 96 human genes that supports replication of another RNA virus- the hepatitis C virus [63]. It’s possible that CKAP5 functions during the IAV infection cycle to organize microtubules for trafficking of viral components.\nAnother interactor identified in this screen, KCTD9, has been previously shown to interact with a subunit of the Mediator complex, acting as a scaffold between regulatory proteins and the RNA polymerase II [64]. In addition, KCTD9 homolog FIP2 acts as a cytoskeletal rearrangement protein suspected to be involved in nuclear export in some plants through the organization of actin cables [64]. Perhaps KCTD9 aids in translocating NP to the nucleus for replication and transcription of the viral RNA. It has been shown that the translocation of NF-E2 related factor 2 (Nrf2) is mediated by cytoskeletal rearrangement in the oxidative stress response [65], and the level of Nrf2 expression is associated with IAV entry and replication in nasal epithelial cells [66].\nAlso identified by this screen was PA28B, a protein associated with the proteasome complex. In the typical proteasome, there is a 19S regulator, which is replaced by the 11S regulator in the modified immunoproteasome [67]. PA28B is part of the proteasome complex and acts as a subunit in the 11S alternate regulator. It is referred to as the beta subunit and is comprised of three beta and three alpha subunits arranged in a heterohexameric ring [68]. The proteasome, as a whole, is used for the cleaving of peptides and the 11S regulator, in particular, induces the degradation of short peptides, but not entire proteins [68]. Induction of 11S regulator expression is the responsibility of interferon gamma. PA28B is involved in cleavage of the peptides which bind to the major histocompatibility complex (MHC). It is possible that NP interacts with PA28B to prevent MHC I presentation of IAV antigens.\nMyb/SANT-like DNA-binding domain containing 3 (MSANTD3) is a member of the MSANTD3 family which contains DNA binding domains for Myb proteins and the SANT domain family. Myb proteins are proto-oncogenes which are necessary for hematopoiesis and possibly involved in tumorigenesis [69]. The SANT domain permits the interaction of chromatin remodeling proteins with histones and is found in chromatin-remodeling complexes as well as nuclear receptor co-repressors [70]. 
NP interactions with MSANTD3 could have an effect on gene transcription of the host through chromatin modification, presumably to inhibit antiviral genes or up-regulate genes beneficial to viral replication.\nThe screen also identified FLJ30306 as a potential interactor with NP. FLJ30306 maintains a sequence similarity to retroviral elements, resulting in its classification as endogenous retrovirus group K3, member 1 [Homo sapiens] [71]. The endogenous retrovirus sequence was revealed by sequence analysis of the mRNA, not by direct protein sequencing. It is unclear whether the protein carries out a normal cellular effect, such as endogenous retrovirus 1’s (ERV-1) use in placental syncytia formation or if it is a result of mis-regulation due to influenza infection [71]. In Hodgkin’s lymphoma it has been shown that many endogenous retroviruses are reactivated and produce virus-like particles [71]. It may be that instead of directly utilizing FLJ30306 influenza may simply be up-regulating its expression due to its pathogenesis. Alternatively, FLJ30306 may contain useful enzymatic activity for influenza replication through an unknown mechanism. A characterization of normal FLJ30306 function would allow for appropriately testing either hypothesis.\nAll human host proteins identified in this Y2H screen interact in yeast with NP from both swine and human IAV strains. While the data suggest there may be differences in the strength of bait-prey interactions in yeast based on NP host origin, these differences should be interpreted carefully given the nature of the assay. Given the suggestion that NP affects species specificity, additional screens using NP from other IAV strains as the bait and/or cDNA libraries from other susceptible species may shed light on amino acids within the NP and specific host factors that are responsible for host restrictions. As is the case with other published screens (e.g., [72]), it will be important to verify the putative interactions described here in infected cells by co-immunoprecipitation, co-localization, or RNA interference (e.g., [34]) and to investigate the role of the identified host proteins in the IAV life cycle.", "A Y2H screen using a classical swine influenza A virus nucleoprotein as a bait resulted in isolation of ten different putative interacting human host proteins involved in a variety of cellular processes and structures: purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B). All of the identified human host proteins interacted in yeast with NP from both human and swine origin. Proteins PAICS, ATP1B1, TRA2B, and SLC30A9 have been previously identified in studies related to host response to the IAV. Host proteins SLC30A9 and CKAP5 displayed the strongest interactions with IAV NP in yeast in a quantitative β-gal assay.", " Strains, plasmids, media, microbial growth conditions, and reagents Unless indicated otherwise, yeast strains (Table 3) were grown in liquid culture at 30°C at 180 rpm and on agar plates at 30°C. Bacterial strains were grown in liquid culture at 37°C at 200 rpm and on agar plates at 37°C. Selectable markers for the bait (pGBKT7) and prey (pGADT7) plasmids in the yeast Saccharomyces cerevisiae and bacterium E. coli are listed in Table 3. 
All media and Y2H reagents were purchased from Clontech (Mountain View, CA).

Table 3. Strains and plasmids

Yeast strain | Genotype | Reporter genes
Y2HGold | MATa, trp1-901, leu2-3, 112, ura3-52, his3-200, gal4Δ, gal80Δ, LYS2::GAL1UAS–Gal1TATA–His3, GAL2UAS–Gal2TATA–Ade2, URA3::MEL1UAS–Mel1TATA, AUR1-C MEL1 | ADE2, HIS3, MEL1, AUR1-C
Y187 | ade2-101, trp1-901, leu2-3, 112, gal4Δ, gal80Δ, met–, URA3::GAL1UAS–Gal1TATA–LacZ, MEL1 | MEL1, LacZ

Plasmid | Selectable markers (yeast, bacteria) | Additional information
pGBKT7 (bait vector) | TRP1, Kan | GAL4(1–147) DNA-BD, TRP1, kanr, c-Myc epitope tag
pGADT7 (prey vector) | LEU2, Amp | GAL4(768–881) AD, LEU2, ampr, HA epitope tag

 Cloning influenza A virus nucleoprotein into vector pGBKT7
The open reading frame (ORF) of the nucleoprotein gene from swine IAV H1N1 strain A/Sw/NC/44173/00 was amplified from the plasmid pScript A/Sw/NC/44173/00 NP (a gift from Christopher Olsen) by PCR using forward primer NP3F 5'-CATGGAGGCCGAATTCatggcgtctcaaggcaccaaacga-3' and reverse primer NP2R 5'-GCAGGTCGACGGATCCattgtcatactcctctgcattgtctccgaaga-3'. The NP ORF from the human IAV H3N2 strain A/Ca/07/04 was amplified from plasmid pScript A/Ca/07/04 NP (a gift from Christopher Olsen) by PCR using forward primer NP2F 5'-CATGGAGGCCGAATTCatggcgtcccaaggcaccaaacg-3' and reverse primer NP3R 5'-GCAGGTCGACGGATCCattgtcgtactcttctgcattgtctccgaaga-3'. For all primers, uppercase letters represent sequences homologous to the BamHI-EcoRI-linearized vector pGBKT7, and lowercase letters represent sequences specific to the NP ORF. Reactions included 200 ng plasmid template DNA, 1X Advantage HD Buffer (Clontech, Mountain View, CA), 0.2 mM dNTPs, 0.25 μM forward and reverse primers, and 0.625 units Advantage HD Polymerase (Clontech, Mountain View, CA). Thermocycler conditions were as follows: 95°C for 3 minutes; 30 cycles of [95°C for 15 seconds, 55°C for 5 seconds, 72°C for 100 seconds]; 72°C for 10 minutes.

InFusion cloning (Clontech, Mountain View, CA) was used to insert the NP ORFs into BamHI-EcoRI-linearized pGBKT7, 3' of and in-frame with the DNA encoding Gal4's DNA binding domain (BD). Conditions were as follows: 60 ng gel-purified BamHI-EcoRI-linearized pGBKT7, 50 ng gel-purified NP PCR product, and 1X InFusion HD Enzyme Premix in a 5 μl reaction were incubated at 50°C for 15 min. An aliquot of the cloning reaction was transformed into Stellar competent cells per the manufacturer's directions (Clontech, Mountain View, CA). All plasmid sequences were verified by DNA sequencing.

 Yeast two-hybrid screen
The Yeast Two-Hybrid MatchMaker Gold system (Clontech, Mountain View, CA) was used to isolate prey plasmids that interact with the bait pGBKT7-NP. A 50 ml SD-Trp liquid culture inoculated with Y2HGold cells containing the bait plasmid pGBKT7-NP was grown in a 200 rpm shaker at 30°C to an OD600 of 0.8 (approximately 18 hours). Bait cells were pelleted by centrifugation, resuspended in 4 ml SD-Trp, and mixed with 45 ml YPDA in a 2 L flask with 2 × 10^7 Y187 cells containing a commercially available normalized HeLa S3 Mate and Plate cDNA prey library (Clontech, Mountain View, CA). Bait and prey cells were mated by slow shaking (40 rpm) for 24 hours at 30°C. After 20 hours, mating cells were observed microscopically for the presence of zygotes. Mated cells were centrifuged at 1,000 × g for 10 min, and the pellet was resuspended in 5 ml 0.5X YPDA containing 50 μg/ml kanamycin. Mated cells (0.2 ml) were spread on each of 55 SD-Leu-Trp/X-α-Gal/125 ng/ml AbA agar plates, and plates were incubated at 30°C for 5–8 days. This concentration of AbA is considered high stringency and could exclude isolation of low-affinity interactors. The mating efficiency was determined by plating 1:1,000 and 1:10,000 dilutions on SD-Leu, SD-Trp, and SD-Leu-Trp agar plates.

Blue colonies from the Y2H screen SD-Leu-Trp/X-α-Gal/AbA plates were picked as small patches onto an SD-Leu-Trp plate, grown 2 days at 30°C, and replica plated to agar media (SD-Ade, SD-His, SD-Leu-Trp/X-α-Gal, SD-Leu-Trp/AbA) to test for activation of the four reporter genes ADE2, HIS3, MEL1, and AUR1-C, respectively. Replica plates were incubated at 30°C and observed daily for 3 days. Cells from potential positive interactors (growing on all types of reporter media and blue/light blue on SD-Leu-Trp/X-α-Gal) were single-colony purified on SD-Leu-Trp/X-α-Gal and incubated at 30°C for 4 days. A single blue colony from each potential positive interaction strain was single-colony purified a second time on SD-Leu-Trp/X-α-Gal.

Following two rounds of streaking, the prey plasmid from each potential positive interaction yeast strain was isolated using the "Easy Yeast Plasmid Isolation Kit" (Clontech, Mountain View, CA). The resulting yeast plasmid DNAs were transformed into and subsequently re-isolated from E. coli cells.

 Testing potential positive prey plasmids for toxicity to yeast and autoactivation of the Y2H reporter genes
Isolated prey plasmid DNAs were re-introduced into Y2HGold cells using a yeast transformation system (Geno Technology). Y2HGold cells containing prey plasmids were single-colony purified on SD-Leu agar plates alongside a control (Y2HGold containing the empty prey vector pGADT7), and cell growth was observed after 3 days at 30°C to test for toxicity. To test for false positives, patches of Y2HGold cells containing prey plasmids were replica plated to assess autoactivation of the reporter genes on reporter media (SD-Ade, SD-His, SD-Leu-Trp + X-α-Gal, SD-Leu-Trp + AbA). Prey strains determined to be false positives in yeast were excluded from the pool of identified interacting host proteins.

 Bioinformatics
The 5' and 3' ends of the human cDNA within each confirmed positive prey plasmid were determined by sequencing at the Iowa State University (ISU) DNA Facility using primers T7-1 (AATACGACTCACTATAG) and 3'AD (AGATGGTGCACGATGCACAG) or 3' poly-TA (TTTTTTTTTTTTTTTTTTTTTTTTTA), respectively. The sequence data were used in BLAST searches to query the NCBI human genome and human transcript databases.

 Maintenance of protein-protein interactions at 30°C and 37°C
Y2HGold yeast cells were separately co-transformed with a bait plasmid (pGBKT7 containing NP from either the human influenza A H3N2 strain A/Ca/07/04 or the swine influenza A H1N1 strain A/Sw/NC/44173/00) and a prey plasmid (pGADT7 containing one of the human cDNAs isolated as a positive interactor with the human NP bait). Y2HGold cells containing the known bait/prey interactors pGBKT7-53 and pGADT7-T and the known non-interactors pGBKT7-Lam and pGADT7-T were used as positive and negative controls, respectively. Additional negative interaction controls included Y2HGold cells containing human or swine NP (described above) and the empty prey vector pGADT7.

All transformants were grown overnight in YPD media and diluted to a concentration of 5 × 10^6 cells/ml (OD = 0.417). 200 μl of culture was placed into a 96-well plate. A 48-sample pinner was used to spot yeast onto two plates each of SD-Leu-Trp (control) and SD-His. One plate of each type was incubated at 30°C and the other at 37°C. Growth was assessed and images were taken 4 days after the initial plating. At least two replicates of each strain containing a bait (swine NP or human NP) and a prey were tested on SD-Leu-Trp and SD-His at both 30°C and 37°C.

 β-galactosidase assay
Diploid cells containing bait and prey were prepared by mating Y187 cells containing each of the prey plasmids with Y2HGold cells containing each of the bait plasmids (swine NP or human NP) on YPD agar (MatchMaker Gold Y2H protocol, Clontech, Mountain View, CA). Eight replicate samples of diploids were selected for each protein interaction by subsequent growth on SD-Leu-Trp agar. Diploid cells containing bait and prey were inoculated into 2 mL of SD-Leu-Trp liquid media and grown overnight in a 200 rpm shaker at 30°C. The 2 mL overnight cultures were diluted to 4 mL with YPD media and grown in a 200 rpm shaker at 30°C until they reached mid-log phase (OD600 of 0.5-0.8). Cultures were separated into 1.5 mL aliquots, pelleted, washed once, and resuspended in 300 μL of Z buffer (16.1 g/L Na2HPO4·7H2O, 5.5 g/L NaH2PO4·H2O, 0.75 g/L KCl, 0.246 g/L MgSO4·7H2O, pH 7.0). Samples of 100 μL were lysed by three freeze/thaw cycles using liquid nitrogen and a 37°C water bath. After the cells were lysed, each sample was split into two 150 μL aliquots, and a blank tube containing 100 μL of Z buffer was also set up. To assay enzyme activity, 350 μL of Z buffer containing 0.27% β-mercaptoethanol and 4 mg/ml ONPG as substrate was added. The yellow color was allowed to develop at 30°C, and the reaction was stopped by the addition of 200 μL of 1 M Na2CO3. To remove cell debris, the samples were centrifuged for 10 minutes at 16,000 RCF, and OD420 values were recorded. Enzyme activity was expressed in Miller units, calculated as 1000 × OD420 / (t × V × OD600), where t is the elapsed reaction time and V is the dilution factor (0.25 in this case).
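A minimal sketch of the Miller-unit calculation just described; the readings in the usage example are hypothetical, not data from this study.

```python
# Miller units = 1000 * OD420 / (t * V * OD600), with t the reaction time in
# minutes and V the dilution/volume factor (0.25 in this protocol).

def miller_units(od420: float, od600: float, t_min: float, v: float = 0.25) -> float:
    """Beta-galactosidase activity in Miller units."""
    return 1000.0 * od420 / (t_min * v * od600)

# Hypothetical readings for one bait/prey culture:
print(round(miller_units(od420=0.35, od600=0.62, t_min=30), 1))
```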
, "Unless indicated otherwise, yeast strains (Table 3) were grown in liquid culture at 30°C at 180 rpm and on agar plates at 30°C. Bacterial strains were grown in liquid culture at 37°C at 200 rpm and on agar plates at 37°C. Selectable markers for the bait (pGBKT7) and prey (pGADT7) plasmids in the yeast Saccharomyces cerevisiae and bacterium E. coli are listed in Table 3. All media and Y2H reagents were purchased from Clontech (Mountain View, CA).\nTable 3. Strains and plasmids\nYeast strain | Genotype | Reporter genes\nY2HGold | MATa, trp1-901, leu2-3, 112, ura3-52, his3-200, gal4Δ, gal80Δ, LYS2::GAL1UAS–Gal1TATA–His3, GAL2UAS–Gal2TATA–Ade2 URA3::MEL1UAS–Mel1TATA, AUR1-C MEL1 | ADE2, HIS3, MEL1, AUR1-C\nY187 | ade2-101, trp1-901, leu2-3, 112, gal4Δ, gal80Δ, met–, URA3::GAL1UAS–Gal1TATA–LacZ, MEL1 | MEL1, LacZ\nPlasmid | Selectable markers (yeast, bacteria) | Additional information\npGBKT7 (bait vector) | TRP1, Kan | GAL4(1–147) DNA-BD, TRP1, kanr, c-Myc epitope tag\npGADT7 (prey vector) | LEU2, Amp | GAL4(768–881) AD, LEU2, ampr, HA epitope tag", "The open reading frame (ORF) of the nucleoprotein gene from swine IAV H1N1 strain A/Sw/NC/44173/00 was amplified from the plasmid pScript A/Sw/NC/44173/00 NP (a gift from Christopher Olsen) by PCR using forward primer NP3F 5’-CATGGAGGCCGAATTCatggcgtctcaaggcaccaaacga-3’ and reverse primer NP2R 5’-GCAGGTCGACGGATCCattgtcatactcctctgcattgtctccgaaga-3’. The NP ORF from the human IAV H3N2 strain A/Ca/07/04 was amplified from plasmid pScript A/Ca/07/04 NP (a gift from Christopher Olsen) by PCR using forward primer NP2F 5’-CATGGAGGCCGAATTCatggcgtcccaaggcaccaaacg-3’ and reverse primer NP3R 5’-GCAGGTCGACGGATCCattgtcgtactcttctgcattgtctccgaaga-3’. For all primers, the uppercase letters represent sequences homologous to the BamHI-EcoRI-linearized vector pGBKT7, and lowercase letters represent sequences specific for the NP ORF. Reactions included 200 ng plasmid template DNA, 1X Advantage HD Buffer (Clontech, Mountain View, CA), 0.2 mM dNTPs, 0.25 μM forward and reverse primers, and 0.625 Units Advantage HD Polymerase (Clontech, Mountain View, CA). Thermocycler conditions were as follows: 95°C for 3 minutes; 30 cycles of [95°C for 15 seconds, 55°C for 5 seconds, 72°C for 100 seconds]; 72°C for 10 minutes.
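The primer convention above (uppercase vector-homology arm followed by a lowercase gene-specific sequence) can be illustrated with a short sketch that reassembles the swine NP primers from their two parts. Both substrings are copied verbatim from the primers listed above; the helper function itself is hypothetical, not part of the original methods.

```python
# Rebuild an InFusion-style primer: an uppercase arm homologous to the linearized
# pGBKT7 ends, followed by the lowercase NP-specific priming sequence.
def infusion_primer(vector_arm: str, gene_specific: str) -> str:
    return vector_arm.upper() + gene_specific.lower()

np3f = infusion_primer("CATGGAGGCCGAATTC", "atggcgtctcaaggcaccaaacga")          # swine NP forward
np2r = infusion_primer("GCAGGTCGACGGATCC", "attgtcatactcctctgcattgtctccgaaga")  # swine NP reverse
print(np3f)
print(np2r)
```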
\nInFusion cloning (Clontech, Mountain View, CA) was used to insert the NP ORFs into BamHI-EcoRI-linearized pGBKT7, downstream of and in-frame with DNA encoding Gal4’s DNA binding domain (BD). Conditions were as follows: 60 ng gel-purified BamHI-EcoRI-linearized pGBKT7, 50 ng gel-purified NP PCR product, and 1X InFusion HD Enzyme Premix in a 5 μl reaction were incubated at 50°C for 15 min. An aliquot of the cloning reaction was transformed into Stellar competent cells per the manufacturer’s directions (Clontech, Mountain View, CA). All plasmid sequences were verified by DNA sequencing.", "The Yeast Two-Hybrid MatchMaker Gold system (Clontech, Mountain View, CA) was used to isolate prey plasmids that interact with bait pGBKT7-NP. A 50 ml SD-TRP liquid media culture inoculated with Y2HGold cells containing the bait plasmid pGBKT7-NP was grown in a 200 rpm shaker at 30°C to an OD600 of 0.8 (approximately 18 hours). Bait cells were pelleted by centrifugation, resuspended in 4 ml SD-TRP, and mixed with 45 ml YPDA in a 2 L flask with 2 × 10⁷ Y187 cells containing a commercially-available normalized HeLa S3 Mate and Plate cDNA prey library (Clontech, Mountain View, CA). Bait and prey cells were mated by slow shaking (40 rpm) for 24 hours at 30°C. After 20 hours, mating cells were observed microscopically for the presence of zygotes. Mated cells were centrifuged 1,000 × g for 10 min, and the pellet was resuspended in 5 ml 0.5X YPDA containing 50 μg/ml kanamycin. Mated cells (0.2 ml) were spread on each of 55 SD-LEU-TRP/X-α-Gal/125 ng/ml AbA agar plates, and plates were incubated at 30°C for 5–8 days. This concentration of AbA is considered high stringency and could exclude isolation of low-affinity interactors. The mating efficiency was determined by plating 1:1,000 and 1:10,000 dilutions on SD-LEU, SD-TRP, and SD-LEU-TRP agar plates.\nBlue colonies from the Y2H screen SD-LEU-TRP/X-α-Gal/AbA plates were picked as small patches to an SD-LEU-TRP plate, grown 2 days at 30°C, and replica plated to agar media (SD-ADE, SD-HIS, SD-LEU-TRP/X-α-Gal, SD-LEU-TRP/AbA) to test for activation of four reporter genes: ADE2, HIS3, MEL1, and AUR1-C, respectively. Replica plates were incubated at 30°C and observed daily for 3 days. Cells from potential positive interactors (growing on all types of reporter media and blue/light blue on SD-LEU-TRP/X-α-Gal) were single-colony purified on SD-LEU-TRP/X-α-Gal and incubated at 30°C for 4 days. A single blue colony from each potential positive interaction strain was single-colony purified a second time on SD-LEU-TRP/X-α-Gal.\nFollowing two rounds of streaking, the prey plasmid from each potential positive interaction yeast strain was isolated using the “Easy Yeast Plasmid Isolation Kit” (Clontech, Mountain View, CA). The resulting yeast plasmid DNAs were transformed into and subsequently re-isolated from E. coli cells."
 ]
[ "introduction", "results", null, null, "discussion", "conclusion", "materials|methods", null, null, null, null, null, null, null ]
[ "Influenza A virus", "Nucleoprotein", "Yeast two-hybrid", "Host restriction" ]
Background: Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. In humans, seasonal IAV infection presents as a non-fatal, uncomplicated, acute infection characterized by the presence of upper respiratory symptoms as well as fever, headache, soreness and fatigue lasting 2–5 days [1]. However, deaths from seasonal IAV infections often arise when the normal flu symptoms are exacerbated by compromised immunity or age [1]. In addition to seasonal influenza A viruses, pandemic strains periodically appear causing increased mortality rates. For example, the 1918 Spanish flu resulted in approximately 50 million human deaths [2]. The most recent pandemic resulted from the emergence of the swine-origin H1N1 virus [3,4]. Due to influenza A’s potential for mortality, high mutation rates (resulting in genetic drift) and pandemic potential (resulting from genetic reassortment), it is critical to learn more about the virus, especially as it pertains to virulence, transmissibility, and identification of potential targets for development of therapeutics. Influenza A viruses exhibit a broad host range beyond humans [5]. Waterfowl serve as the central reservoir species. In addition to humans, various IAV subtypes circulate in pigs, poultry, horses, dogs as well as other species. Subtypes (H1N1 and H3N2) that circulate in the human population also circulate in the swine populations [6]. In addition to the concern for IAV in terms of swine health [7], IAV infection in swine is also important due to the potential for zoonotic infections as well as the potential for serving as a mixing vessel for the generation of human pandemic viruses [5]. The majority of inter-species transmission research has focused on amino acids in the hemagglutinin affecting viral attachment and entry [8]. Although known to be important, other than amino acids associated with temperature sensitivity (e.g., PB2 627 [9] or 701 [10]), the effects of differences in amino acid residues in proteins of the replication complex on species specificity is less well understood. Previous evidence suggests that there may be a role in some of these replication complex proteins in allowing for zoonotic infections [11]. The genome of the enveloped influenza A virion consists of eight segments of negative-sense single-stranded RNA that encode at least ten viral genes: Hemagglutinin (HA), Neuraminidase (NA), Matrix 2 protein (M2), Matrix 1 protein (M1), Non-structural Protein 1 (NS1), Non-structural Protein 2 (NS2), Nucleoprotein (NP), Polymerase Basic 1 (PB-1), Polymerase Basic 2 (PB-2), and Polymerase Acidic (PA) [1]. The viral envelope contains HA, NA, and M2 proteins, and M1 proteins form a layer inside the envelope. The IAV RNA genome located inside the virion is coated with NP and is bound by the replication complex consisting of PB1, PB2, and PA. The NP, the focus of this study, encoded by the fifth IAV RNA segment, binds with high affinity to viral RNA [12]. NP plays a role in viral RNA replication and transcription [12]. More recent data suggests that NP binds the polymerase as well as the newly replicated RNA and may act as a processivity factor that is necessary for replication of the viral RNA to be completed [13]. Phylogenic analysis of IAV NP shows distinct lineages of NP based on the host species [14,15]. Within the NP, there are amino acid signatures found within different host species [16,17]. 
These host-specific amino acid residues may result in differences in affinities for the various host proteins with which they interact (e.g., importin α1 [18], F- actin [19], nuclear factor 90 [20], cyclophilin E [21], exportin 1 [22], HMGB1 [23], HMGB2 [23], MxA/Mx1 [24,25], HSP40 [26], karyopherin alpha [27,28], clusterin [29], Raf-2p48/BAT1/UAP56 [30], Aly/REF [31], Tat-SF1 [32], TRIM22 [33], and alpha actinin 4 [34]) or they may result in differences in how the NP interacts with other viral proteins that have also made host-specific adaptations [11]. Given the suggestion that NP plays a role in determining host range [5,35-37], it is important to identify all host proteins that interact with NP. Replication of the IAV is dependent on the host cell machinery and interactions between host proteins and viral proteins. Therefore any inquiry attempting to investigate the life cycle of IAV must incorporate host-pathogen protein interactions in order to truly provide a mechanistic understanding of the process. Previous screens have identified important protein-protein interactions between IAV and host proteins, but often results are not consistent between screens. Therefore, corroborating evidence with additional screens is crucial to accumulating the best understanding of critical pathogen-host interactions in viral replication. As examples, several genome-wide screens have been performed using RNAi to systematically knockdown host genes and evaluate the effect on various stages of the IAV life cycle [38-41]. An integrated approach carried out by Shapira and colleagues involved transcription profiling and yeast two hybrid screens using specific viral proteins as baits [28]. Proteomics approaches have revealed host proteins found within viral particles [42,43]. The Random Homozygous Gene Perturbation strategy was used to identify host factors that prevent influenza-mediated killing of host cells [44]. In order to better understand the role of NP-host protein interactions in IAV replication, a Gal4-based yeast two-hybrid (Y2H) assay was used in this study. The Y2H system allows for the identification of unique human binding partners with NP. A “bait” plasmid encoding the binding domain (BD) of the Gal4 transcriptional activator fused to NP and a “prey” plasmid encoding Gal4’s activation domain (AD) fused to a protein encoded by a human cDNA are introduced into a yeast strain containing Gal4-responsive reporter genes. Interaction of the bait and prey brings together Gal4’s BD and AD, resulting in transcriptional activation of reporter genes [45]. The Y2H approach has successfully identified cellular factors (e.g., Raf-2p48/BAT1/UAP56, Hsp40, KPNA1, KNPA3, KPNA6, C16orf45, GMCL1, MAGED1, MLH1, USHBP1, ZBTB25, CLU, Aly/REF, and ACTN4) that interact with the IAV NP [26-31,34], (Table 1). In this study, the nucleoprotein from a classical swine H1N1 IAV (Sw/NC/44173/00) was used as the “bait” in a Y2H screen against a “prey” HeLa cDNA library. To investigate if the origin of NP affects the ability of the interactions to take place, the nucleoprotein from a contemporary human H3N2 IAV (A/Ca/07/04) was also used as the “bait” in a Y2H screen against the same prey HeLa cDNA library. 
By determining the putative interaction partners of NP among human proteins, the possible functions of newly found protein-protein interactions can be investigated, and previously identified host factors can be verified and perhaps further characterized.
Table 1. Summary of identified bait-prey interactions and previous reports of host protein association with influenza A virus
Gene | Protein description | Complete ORF | Number of prey clones | Prey clones | Previous reports with IAV
PAICS | Phosphoribosylaminoimidazole carboxylase | Yes | 3 | R3, R9, R15 | Karlas et al. [40], Kumar and Nanduri [47], Kroeker et al. [48]
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | No | 3 | R10, R13, R27 | N/A
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | Yes | 2 | R23, R33 | N/A
PA28B | Proteasome activator subunit 2 | Yes | 2 | R6, R19 | N/A
KCTD9 | Potassium channel tetramerisation domain containing 9 | No | 2 | R34, R36 | N/A
ACOT13 | Acyl-CoA thioesterase 13 | Yes | 1 | R11 | N/A
TRA2B | Transformer 2 beta homolog (Drosophila) | Yes | 1 | R12 | Zhu et al. [56]
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | No | 1 | R28 | Kumar and Nanduri [47], Sui et al. [44]
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | No | 1 | R7 | Mi et al. [51], Liu et al. [50]
CKAP5 | Cytoskeleton associated protein 5 | No | 1 | R29 | N/A
Host proteins identified as interacting with IAV nucleoprotein in previously published yeast two-hybrid screens:
BAT1/UAP56 | RNA-dependent ATPase | N/A | N/A | N/A | Momose et al. [30]
HSP40 | Heat shock protein 40 | N/A | N/A | N/A | Sharma et al. [26]
NPI-1/SRP1/KPNA1 | Karyopherin α1 | N/A | N/A | N/A | O’Neill and Palese [27], Shapira et al. [28]
KPNA3 | Karyopherin α3 | N/A | N/A | N/A | Shapira et al. [28]
KPNA6 | Karyopherin α6 | N/A | N/A | N/A | Shapira et al. [28]
C16orf45 | – | N/A | N/A | N/A | Shapira et al. [28]
GMCL1 | Germ cell-less, spermatogenesis associated 1 | N/A | N/A | N/A | Shapira et al. [28]
MAGED1 | Melanoma antigen family D.1 | N/A | N/A | N/A | Shapira et al. [28]
MLH1 | MutL homolog 1 | N/A | N/A | N/A | Shapira et al. [28]
USHBP1 | Usher syndrome 1C binding protein 1 | N/A | N/A | N/A | Shapira et al. [28]
ZBTB25 | Zinc finger and BTB domain containing 25 | N/A | N/A | N/A | Shapira et al. [28]
CLU | Clusterin | N/A | N/A | N/A | Tripathi et al. [29]
ALY/REF | RNA export adaptor protein | N/A | N/A | N/A | Balasubramaniam et al. [31]
ACTN4 | Alpha-actinin 4 | N/A | N/A | N/A | Sharma et al. [34]
Reported herein are ten potential human host cell proteins that interact with IAV NP in a Y2H screen. The relative strengths of these protein-protein interactions were characterized by a plating assay and a β-galactosidase assay.
Results: Constructing an NP bait and screening a human cDNA prey library: The nucleoprotein open reading frames from the classical swine IAV H1N1 strain A/Sw/NC/44173/00 and the contemporary human IAV H3N2 strain A/Ca/07/04 encode proteins with 89.1% identity and 94.8% similarity (Figure 1). Of the 49 amino acids that differ between the swine and human NPs, 29 are conservative differences. However, it is worth noting that several of these amino acid differences are signatures that differ between avian, swine and human isolates [16,17]. The two NP ORFs were amplified by PCR, and the resulting PCR products were separately inserted by recombination cloning into Y2H bait vector pGBKT7 as a C-terminal fusion with Gal4p’s DNA binding domain. Correct constructs were confirmed by DNA sequence analysis. Gal4(BD)-NP fusion proteins isolated from yeast cells were detected as 77 kDa bands on a Western blot probed with an anti-Myc antibody (data not shown). When introduced alone into yeast Y2HGold cells, the swine and human NP bait constructs were not toxic to yeast and did not autoactivate the Y2H reporter genes, indicating they were suitable for use in a Y2H screen (data not shown).
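Identity and similarity percentages of the kind quoted above are computed from a pairwise alignment: identical positions divided by alignment length, and identical-plus-conservative positions divided by alignment length. A toy sketch follows, assuming the two sequences are already aligned to equal length; the conservative-substitution grouping shown is one common illustrative choice and is not necessarily the scheme used to produce the 89.1%/94.8% figures above.

```python
# Toy identity/similarity calculation over two pre-aligned, equal-length protein sequences.
CONSERVATIVE_GROUPS = ["AGST", "ILVM", "DEQN", "KRH", "FYW", "C", "P"]  # illustrative grouping

def group_of(residue: str) -> str:
    for group in CONSERVATIVE_GROUPS:
        if residue in group:
            return group
    return residue

def identity_similarity(a: str, b: str):
    assert len(a) == len(b), "sequences must be aligned to equal length"
    identical = sum(x == y for x, y in zip(a, b))
    similar = sum(x == y or group_of(x) == group_of(y) for x, y in zip(a, b))
    return 100.0 * identical / len(a), 100.0 * similar / len(a)

ident, sim = identity_similarity("MASQGTKRSYEQ", "MASQGSKRTYDQ")  # made-up fragments
print(f"identity {ident:.1f}%, similarity {sim:.1f}%")
```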
Figure 1. Sequence alignment of nucleoproteins from swine and human influenza A virus strains. The swine and human NPs are 89.1% identical (yellow highlighting) and 94.8% similar (yellow and green highlighting). Nonconservative amino acid differences are represented in white. Amino acids with * represent signature amino acids associated with avian influenza A NP, and # and † represent signature amino acids associated with avian influenza A NP based on Chen et al. [16] and Pan et al. [17]. † Denotes the only deviation of A/Sw/NC 44173/00 from the consensus swine residues based on Pan et al. [17].
Using a Y2H assay with Gal4(BD)-swine NP as a bait and a human HeLa cDNA prey library, approximately 2 × 10⁶ clones were screened, resulting in 17 positive bait-prey interacting yeast strains. Sequence and bioinformatics analysis of the prey plasmids indicated that these 17 prey plasmids, listed by their initial positive interaction identification number (R3, R6, etc.), represent ten different human cDNAs (Table 1). An independent screen using Gal4(BD)-human NP as a bait and a human HeLa cDNA prey library yielded a subset of these same host protein targets. Some of the identified clones contain partial cDNAs. Three independent prey plasmids isolated contained the human cDNA phosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS), and another three contained Myb/SANT-like DNA-binding domain containing 3 (MSANTD3). Two independent prey plasmids were isolated for each of three human cDNAs: proteasome activator subunit 2 (PA28B), potassium channel tetramerisation domain (KCTD9), and an uncharacterized protein (FLJ30306). Each of the remaining five prey plasmids contained different human cDNAs.
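The collapse of 17 positive prey clones into ten unique genes described above can be reproduced directly from the clone assignments in Table 1. A short sketch, using the clone-to-gene mapping listed there:

```python
from collections import Counter

# Prey clone -> gene assignments, copied from Table 1 above.
clone_to_gene = {
    "R3": "PAICS", "R9": "PAICS", "R15": "PAICS",
    "R10": "MSANTD3", "R13": "MSANTD3", "R27": "MSANTD3",
    "R23": "FLJ30306", "R33": "FLJ30306",
    "R6": "PA28B", "R19": "PA28B",
    "R34": "KCTD9", "R36": "KCTD9",
    "R11": "ACOT13", "R12": "TRA2B", "R28": "SLC30A9",
    "R7": "ATP1B1", "R29": "CKAP5",
}

counts = Counter(clone_to_gene.values())
print(len(clone_to_gene), "positive clones ->", len(counts), "unique genes")  # 17 -> 10
print(counts.most_common())  # number of independent isolations per gene
```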
Evaluation of the strength of bait-prey interactions: The strength of interaction between each NP bait (swine or human) and human prey in yeast was measured using a qualitative growth assay and a quantitative β-galactosidase activity assay. First, yeast growth on selective media after cotransformation with swine or human NP and one of the identified prey was tested at both 30°C and 37°C as a qualitative metric of the protein-protein interaction’s strength (Figure 2, Table 2). All strains, including the positive and negative controls, grew on SD-Leu-Trp, which selects for the presence of both bait and prey plasmids. Most strains showed decreased growth at 37°C relative to the more permissive temperature of 30°C. Strains examined at 37°C which contained either swine or human NP bait grew best in the presence of prey CKAP5, MSANTD3, or TRA2B.
In contrast, at 37°C strains containing swine NP grew poorly with prey ACOT13, whereas strains containing human NP grew poorly with prey PA28B.
Figure 2. Maintenance of protein-protein interactions at 30°C and 37°C. Yeast cells containing both a bait plasmid and a prey plasmid were spotted in duplicate onto agar plates and incubated at 30°C or 37°C. SD-Leu-Trp media serves as a growth control, and SD-His is protein interaction reporter media. Growth on SD-His is represented as normal growth (++), reduced growth (+), and no growth (−) (see Table 2). NC and PC represent negative and positive controls, respectively. Additional yeast strain negative controls include h- (human NP bait and empty prey vector) and s- (swine NP bait and empty prey vector).
Table 2. Summary of affinity of bait-prey interactions
Gene | Protein description | Prey clone | SD-His growth*: human NP 30°C | human NP 37°C | swine NP 30°C | swine NP 37°C | β-Gal activity**: human NP | swine NP
PAICS | Phosphoribosylaminoimidazole carboxylase | R15 | ++ | + | ++ | ++ | 2.2 | 30.4
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | R10 | ++ | ++ | ++ | ++ | 13.9 | 18.2
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | R23 | + | + | ++ | ++ | 1.9 | 5.4
PA28B | Proteasome (prosome, macropain) activator subunit 2 | R6 | ++ | − | ++ | + | 6.7 | 33.3
KCTD9 | Potassium channel tetramerisation domain containing 9 | R34 | + | + | ++ | ++ | 5.0 | 16.4
ACOT13 | Acyl-CoA thioesterase 13 | R11 | ++ | + | ++ | − | 6.9 | 15.5
TRA2B | Transformer 2 beta homolog (Drosophila) | R12 | ++ | ++ | ++ | ++ | 1.9 | 0.85
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | R28 | ++ | + | ++ | + | 82.9 | 119.9
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | R7 | ++ | + | ++ | ++ | 3.4 | 4.3
CKAP5 | Cytoskeleton associated protein 5 | R29 | ++ | ++ | ++ | ++ | 70.1 | 90.3
* Results from growth on SD-His protein interaction assay media are presented as: (++) normal growth, (+) reduced growth, and (−) no growth (Figure 2). ** Mean β-galactosidase activity from at least eight replicates, reported in Miller units (Figure 3).
The strength of the interactions between IAV NP and host proteins identified by the library screen was also investigated by measuring the β-gal activity in liquid yeast cultures (Figure 3, Table 2). As observed with the qualitative growth assay, for all ten prey proteins, interactions were observed with both the swine and human NP. Of the ten interactions investigated, yeast strains containing swine or human NP and human prey SLC30A9 and CKAP5 showed the highest levels of β-gal activity compared to the other prey proteins examined. This is consistent with the growth of yeast strains containing human NP and SLC30A9 and CKAP5 on selective media at 37°C (Figure 2, Table 2).
Even strains with the lowest levels of β-gal activity observed (TRA2B and ATP1B1) have double the β-gal activity of the negative control. While the data suggest there may be differences in the strength of bait-prey interactions based on NP host origin, these differences should be interpreted carefully given the nature of the assay.
Figure 3. β-galactosidase activity in liquid culture. β-gal activity is a quantitative measure of the strength of interaction between bait swine or human NP and prey human host proteins identified in the Y2H screen. The β-gal activity for each interacting bait-prey pair is expressed in Miller units and represents the mean and standard error of the mean of at least eight independent samples.
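Figure 3 reports each interaction as the mean and standard error of the mean (SEM) of at least eight replicate β-gal measurements. A minimal sketch of that summary statistic follows; the replicate values shown are made up for illustration only.

```python
from statistics import mean, stdev

def mean_sem(values):
    """Mean and standard error of the mean (sample standard deviation / sqrt(n))."""
    n = len(values)
    return mean(values), stdev(values) / n ** 0.5

# Hypothetical eight Miller-unit replicates for a single bait-prey pair.
replicates = [68.2, 71.5, 73.0, 69.8, 70.4, 72.1, 67.9, 70.6]
m, sem = mean_sem(replicates)
print(f"{m:.1f} ± {sem:.1f} Miller units (mean ± SEM, n={len(replicates)})")
```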
Discussion: This Y2H screen identified multiple candidate host proteins that were previously identified in genome-wide screens as playing a role in the IAV infection cycle (PAICS, ATP1B1, SLC30A9, and TRA2B), while also identifying new potential interactors with IAV nucleoprotein (ACOT13, CKAP5, KCTD9, PA28B, MSANTD3, and FLJ30306). These results complement previous reports by suggesting the mechanism for involvement of the host proteins with the IAV (i.e., interaction with the NP). The yeast screen is internally validated by the observation that half of the prey proteins identified were independently isolated multiple times during the screen. All host prey proteins identified interact in yeast with IAV NP from both human and swine isolates. Although there can be variability in gene expression among yeast colonies containing the same combination of bait/prey, both a qualitative growth assay and a quantitative β-gal assay yielded similar results in terms of the strength of interaction between IAV NP bait and human prey proteins.
Notably, CKAP5 displayed good growth at the more stringent 37°C in the qualitative assay and also had high β-gal activity. The putative interacting host proteins represent a variety of cellular processes, consistent with the diversity of processes required during the IAV infection cycle. Phosphoribosylaminoimidazole carboxylase, phosphoribosylaminoimidazole succinocarboxamide synthetase (PAICS) is an enzyme necessary to catalyze the sixth and seventh steps of the de novo purine biosynthesis pathway [46]. PAICS has been identified by two separate RNAi screens as being crucial to IAV replication [40,47]. Furthermore, proteins involved in purine biosynthesis, including PAICS, were found to be significantly up-regulated during an IAV infection [48]. The data presented herein suggest that the requirement for PAICS during IAV infection may be through the interaction with NP. The sodium/potassium-transporting ATPase subunit beta-1 (ATP1B1) represents another host protein that interacts in yeast with NP and that has been previously shown to be important for influenza virus replication in host cells. It belongs to the family of Na+/K + −ATPase beta chain proteins. Na+/K + −ATPase is necessary for the creation and maintenance of electrochemical gradients of sodium and potassium ions on either side of the plasma membrane [49]. This gradient is used by cells to permit functional nervous and muscular excitability, osmoregulation, and active transport of numerous molecules [49]. The beta subunit functions to regulate the quantity of sodium pumps transported to the plasma membrane [49]. Using a quantitative proteomics approach, ATP1B1 was one of 43 proteins found to be significantly up-regulated in IAV-infected primary human alveolar macrophages [50]. In a Y2H study, Mi and colleagues [51] isolated ATP1B1 as a protein that binds the cytoplasmic domain of IAV M2 and subsequently showed that knockdown of ATP1B1 in MDCK cells suppressed IAV replication [51]. These studies suggest that ATP1B1 may be interacting with multiple IAV proteins. The solute carrier family 30 (zinc transporter), member 9 (SLC30A9) is involved in nucleotide-excision repair of human DNA [52] and has an efflux motif characteristic of proteins in the SLC30 zinc efflux transporter family [53]. SLC30A9 protein found in the cytoplasm can bind nuclear receptors and/or nuclear receptor coactivators and translocate to the nucleus where it regulates gene transcription [54]. SLC30A9 was identified as a possible host target that confers upon the host cell resistance to IAV infection [44]. How SLC30A9 might interact with NP and be necessary for IAV replication is unclear. Another protein identified by this screen is TRA2B, a tissue-specific splicing factor [55]. TRA2B protein binds to AGAA and CAA RNA sequences affecting the inclusion of introns in the processed transcript [55]. With respect to influenza, expression of TRA2B protein was reported to be down-regulated in porcine alveolar macrophage cells infected by two swine IAVs [56]. Because influenza genes M2 and NS2 are spliced, it is possible that TRA2B is important for the splicing of these genes. However, how NP might be involved in TRA2B’s function in IAV replication is unknown. Another binding partner identified is Acyl CoA thioesterase 13 (ACOT13), a eukaryotic protein also known as thioesterase superfamily member 2 (Them2) [57,58]. ACOT13 catalyzes fatty acyl-CoA hydrolysis that exists primarily in association with mitochondria [57]. 
ACOT13/Them2 was identified in a Y2H screen focused on phosphatidylcholine transfer protein, suggestive of a role in fatty acid metabolism [57]. Although ACOT13 has not previously been reported to play a role in the IAV life cycle, another member of the ACOT family (ACOT9) which has two hot dog fold domains (ACOT13 has one) [59] has been shown to physically interact with the IAV PA protein [28]. How ACOT13 may be interacting with NP during IAV replication is unclear. Cytoskeletal associated protein 5 (CKAP5, also known as TOG) binds microtubules and is important for the process of centrosomal microtubule assembly especially during mitosis [60]. The microtubule network of the cell plays a role in movement of materials throughout the cell, including during infection [61]. TOG2 binds to a ribonucleoprotein and plays a role in RNA trafficking [62]. While a specific role for CKAP5 in the IAV life cycle is unknown, a genome-wide siRNA screen revealed CKAP5 as one of 96 human genes that supports replication of another RNA virus- the hepatitis C virus [63]. It’s possible that CKAP5 functions during the IAV infection cycle to organize microtubules for trafficking of viral components. Another interactor identified in this screen, KCTD9, has been previously shown to interact with a subunit of the Mediator complex, acting as a scaffold between regulatory proteins and the RNA polymerase II [64]. In addition, KCTD9 homolog FIP2 acts as a cytoskeletal rearrangement protein suspected to be involved in nuclear export in some plants through the organization of actin cables [64]. Perhaps KCTD9 aids in translocating NP to the nucleus for replication and transcription of the viral RNA. It has been shown that the translocation of NF-E2 related factor 2 (Nrf2) is mediated by cytoskeletal rearrangement in the oxidative stress response [65], and the level of Nrf2 expression is associated with IAV entry and replication in nasal epithelial cells [66]. Also identified by this screen was PA28B, a protein associated with the proteasome complex. In the typical proteasome, there is a 19S regulator, which is replaced by the 11S regulator in the modified immunoproteasome [67]. PA28B is part of the proteasome complex and acts as a subunit in the 11S alternate regulator. It is referred to as the beta subunit and is comprised of three beta and three alpha subunits arranged in a heterohexameric ring [68]. The proteasome, as a whole, is used for the cleaving of peptides and the 11S regulator, in particular, induces the degradation of short peptides, but not entire proteins [68]. Induction of 11S regulator expression is the responsibility of interferon gamma. PA28B is involved in cleavage of the peptides which bind to the major histocompatibility complex (MHC). It is possible that NP interacts with PA28B to prevent MHC I presentation of IAV antigens. Myb/SANT-like DNA-binding domain containing 3 (MSANTD3) is a member of the MSANTD3 family which contains DNA binding domains for Myb proteins and the SANT domain family. Myb proteins are proto-oncogenes which are necessary for hematopoiesis and possibly involved in tumorigenesis [69]. The SANT domain permits the interaction of chromatin remodeling proteins with histones and is found in chromatin-remodeling complexes as well as nuclear receptor co-repressors [70]. NP interactions with MSANTD3 could have an effect on gene transcription of the host through chromatin modification, presumably to inhibit antiviral genes or up-regulate genes beneficial to viral replication. 
The screen also identified FLJ30306 as a potential interactor with NP. FLJ30306 maintains a sequence similarity to retroviral elements, resulting in its classification as endogenous retrovirus group K3, member 1 [Homo sapiens] [71]. The endogenous retrovirus sequence was revealed by sequence analysis of the mRNA, not by direct protein sequencing. It is unclear whether the protein carries out a normal cellular effect, such as endogenous retrovirus 1’s (ERV-1) use in placental syncytia formation or if it is a result of mis-regulation due to influenza infection [71]. In Hodgkin’s lymphoma it has been shown that many endogenous retroviruses are reactivated and produce virus-like particles [71]. It may be that instead of directly utilizing FLJ30306 influenza may simply be up-regulating its expression due to its pathogenesis. Alternatively, FLJ30306 may contain useful enzymatic activity for influenza replication through an unknown mechanism. A characterization of normal FLJ30306 function would allow for appropriately testing either hypothesis. All human host proteins identified in this Y2H screen interact in yeast with NP from both swine and human IAV strains. While the data suggest there may be differences in the strength of bait-prey interactions in yeast based on NP host origin, these differences should be interpreted carefully given the nature of the assay. Given the suggestion that NP affects species specificity, additional screens using NP from other IAV strains as the bait and/or cDNA libraries from other susceptible species may shed light on amino acids within the NP and specific host factors that are responsible for host restrictions. As is the case with other published screens (e.g., [72]), it will be important to verify the putative interactions described here in infected cells by co-immunoprecipitation, co-localization, or RNA interference (e.g., [34]) and to investigate the role of the identified host proteins in the IAV life cycle. Conclusions: A Y2H screen using a classical swine influenza A virus nucleoprotein as a bait resulted in isolation of ten different putative interacting human host proteins involved in a variety of cellular processes and structures: purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B). All of the identified human host proteins interacted in yeast with NP from both human and swine origin. Proteins PAICS, ATP1B1, TRA2B, and SLC30A9 have been previously identified in studies related to host response to the IAV. Host proteins SLC30A9 and CKAP5 displayed the strongest interactions with IAV NP in yeast in a quantitative β-gal assay. Methods: Strains, plasmids, media, microbial growth conditions, and reagents Unless indicated otherwise, yeast strains (Table 3) were grown in liquid culture at 30°C at 180 rpm and on agar plates at 30°C. Bacterial strains were grown in liquid culture at 37°C at 200 rpm and on agar plates at 37°C. Selectable markers for the bait (pGBKT7) and prey (pGADT7) plasmids in the yeast Saccharomyces cerevisiae and bacterium E. coli are listed in Table 3. 
All media and Y2H reagents were purchased from Clontech (Mountain View, CA).

Table 3. Strains and plasmids
Yeast strain | Genotype | Reporter genes
Y2HGold | MATa, trp1-901, leu2-3, 112, ura3-52, his3-200, gal4Δ, gal80Δ, LYS2::GAL1UAS–Gal1TATA–His3, GAL2UAS–Gal2TATA–Ade2, URA3::MEL1UAS–Mel1TATA, AUR1-C MEL1 | ADE2, HIS3, MEL1, AUR1-C
Y187 | ade2-101, trp1-901, leu2-3, 112, gal4Δ, gal80Δ, met–, URA3::GAL1UAS–Gal1TATA–LacZ, MEL1 | MEL1, LacZ
Plasmid | Selectable markers (yeast, bacteria) | Additional information
pGBKT7 (bait vector) | TRP1, Kan | GAL4(1–147) DNA-BD, c-Myc epitope tag
pGADT7 (prey vector) | LEU2, Amp | GAL4(768–881) AD, HA epitope tag

Cloning Influenza A virus nucleoprotein into vector pGBKT7: The open reading frame (ORF) of the nucleoprotein gene from swine IAV H1N1 strain A/Sw/NC/44173/00 was amplified from the plasmid pScript A/Sw/NC/44173/00 NP (a gift from Christopher Olsen) by PCR using forward primer NP3F 5’-CATGGAGGCCGAATTCatggcgtctcaaggcaccaaacga-3’ and reverse primer NP2R 5’-GCAGGTCGACGGATCCattgtcatactcctctgcattgtctccgaaga-3’. The NP ORF from the human IAV H3N2 strain A/Ca/07/04 was amplified from plasmid pScript A/Ca/07/04 NP (a gift from Christopher Olsen) by PCR using forward primer NP2F 5’-CATGGAGGCCGAATTCatggcgtcccaaggcaccaaacg-3’ and reverse primer NP3R 5’-GCAGGTCGACGGATCCattgtcgtactcttctgcattgtctccgaaga-3’. For all primers, uppercase letters represent sequences homologous to the BamHI-EcoRI-linearized vector pGBKT7, and lowercase letters represent sequences specific for the NP ORF. Reactions included 200 ng plasmid template DNA, 1X Advantage HD Buffer (Clontech), 0.2 mM dNTPs, 0.25 μM forward and reverse primers, and 0.625 units Advantage HD Polymerase (Clontech). Thermocycler conditions were as follows: 95°C for 3 minutes; 30 cycles of [95°C for 15 seconds, 55°C for 5 seconds, 72°C for 100 seconds]; 72°C for 10 minutes. InFusion cloning (Clontech) was used to insert the NP ORFs into BamHI-EcoRI-linearized pGBKT7, 3’ of and in-frame with the DNA encoding the Gal4 DNA binding domain (BD). Conditions were as follows: 60 ng gel-purified BamHI-EcoRI-linearized pGBKT7, 50 ng gel-purified NP PCR product, and 1X InFusion HD Enzyme Premix in a 5 μl reaction were incubated at 50°C for 15 min. An aliquot of the cloning reaction was transformed into Stellar competent cells per the manufacturer’s directions (Clontech). All plasmid sequences were verified by DNA sequencing.
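The uppercase/lowercase primer convention described above lends itself to a quick sanity check. The short Python sketch below is illustrative only: the primer names and sequences are taken from the text, while everything else (the splitting approach, the printed summary) is an assumption about how one might verify such primers before ordering. It separates each primer into its vector-homology arm (uppercase) and its NP-specific segment (lowercase).

```python
# Split each InFusion-style primer into its vector-homology arm (uppercase)
# and gene-specific segment (lowercase), as described in the Methods text.
primers = {
    "NP3F": "CATGGAGGCCGAATTCatggcgtctcaaggcaccaaacga",
    "NP2R": "GCAGGTCGACGGATCCattgtcatactcctctgcattgtctccgaaga",
    "NP2F": "CATGGAGGCCGAATTCatggcgtcccaaggcaccaaacg",
    "NP3R": "GCAGGTCGACGGATCCattgtcgtactcttctgcattgtctccgaaga",
}

for name, seq in primers.items():
    vector_arm = "".join(b for b in seq if b.isupper())   # homologous to linearized pGBKT7
    np_segment = "".join(b for b in seq if b.islower())   # anneals to the NP ORF
    print(f"{name}: vector arm {len(vector_arm)} nt ({vector_arm}), "
          f"NP-specific segment {len(np_segment)} nt")
```

Running this simply reports the lengths of the two segments of each primer, which is a convenient way to confirm that the homology arms are uniform across the primer set.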
Yeast two-hybrid screen: The Yeast Two-Hybrid MatchMaker Gold system (Clontech, Mountain View, CA) was used to isolate prey plasmids that interact with bait pGBKT7-NP. A 50 ml SD-TRP liquid culture inoculated with Y2HGold cells containing the bait plasmid pGBKT7-NP was grown in a 200 rpm shaker at 30°C to an OD600 of 0.8 (approximately 18 hours). Bait cells were pelleted by centrifugation, resuspended in 4 ml SD-TRP, and mixed with 45 ml YPDA in a 2 L flask with 2 × 10^7 Y187 cells containing a commercially available normalized HeLa S3 Mate and Plate cDNA prey library (Clontech). Bait and prey cells were mated by slow shaking (40 rpm) for 24 hours at 30°C. After 20 hours, mating cells were observed microscopically for the presence of zygotes. Mated cells were centrifuged at 1,000 × g for 10 min, and the pellet was resuspended in 5 ml 0.5X YPDA containing 50 μg/ml kanamycin. Mated cells (0.2 ml) were spread on each of 55 SD-LEU-TRP/X-α-Gal/125 ng/ml AbA agar plates, and plates were incubated at 30°C for 5–8 days. This concentration of AbA is considered high stringency and could exclude isolation of low-affinity interactors. The mating efficiency was determined by plating 1:1,000 and 1:10,000 dilutions on SD-LEU, SD-TRP, and SD-LEU-TRP agar plates.
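For the mating efficiency determination described above, colony counts on the selective plates are converted to viable titers and then to a percent-diploid value. The sketch below is a minimal illustration of that arithmetic, following the general Matchmaker-style calculation (diploids as a percentage of the limiting partner); the colony counts, plated volume, and dilutions in the example are hypothetical placeholders, not values from the study.

```python
# Hypothetical counts from the 1:1,000 and 1:10,000 dilution plates.
# cfu/ml = colonies / (plated volume in ml * dilution factor)
def cfu_per_ml(colonies, plated_ml, dilution):
    return colonies / (plated_ml * dilution)

bait_titer    = cfu_per_ml(colonies=120, plated_ml=0.1, dilution=1e-4)  # SD-TRP plate
prey_titer    = cfu_per_ml(colonies=310, plated_ml=0.1, dilution=1e-4)  # SD-LEU plate
diploid_titer = cfu_per_ml(colonies=45,  plated_ml=0.1, dilution=1e-3)  # SD-LEU-TRP plate

# Mating efficiency = diploids as a percentage of the limiting (less abundant) partner.
limiting = min(bait_titer, prey_titer)
print(f"Mating efficiency: {100 * diploid_titer / limiting:.2f}%")
```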
Blue colonies from the Y2H screen SD-LEU-TRP/X-α-Gal/AbA plates were picked as small patches to an SD-LEU-TRP plate, grown 2 days at 30°C, and replica plated to agar media (SD-ADE, SD-HIS, SD-LEU-TRP/X-α-Gal, SD-LEU-TRP/AbA) to test for activation of the four reporter genes ADE2, HIS3, MEL1, and AUR1-C, respectively. Replica plates were incubated at 30°C and observed daily for 3 days. Cells from potential positive interactors (growing on all types of reporter media and blue/light blue on SD-LEU-TRP/X-α-Gal) were single-colony purified on SD-LEU-TRP/X-α-Gal and incubated at 30°C for 4 days. A single blue colony from each potential positive interaction strain was single-colony purified a second time on SD-LEU-TRP/X-α-Gal. Following two rounds of streaking, the prey plasmid from each potential positive interaction yeast strain was isolated using the “Easy Yeast Plasmid Isolation Kit” (Clontech, Mountain View, CA). The resulting yeast plasmid DNAs were transformed into and subsequently re-isolated from E. coli cells.

Testing potential positive prey plasmids for toxicity to yeast and autoactivation of the Y2H reporter genes: Isolated prey plasmid DNAs were re-introduced into Y2HGold cells using a yeast transformation system (Geno Technology). Y2HGold cells containing prey plasmids were single-colony purified on SD-Leu agar plates alongside a control (Y2HGold containing the empty prey vector pGADT7), and cell growth was observed after 3 days at 30°C to test for toxicity.
To test for false positives, patches of Y2HGold cells containing prey plasmids were replica plated onto reporter media (SD-Ade, SD-His, SD-Leu-Trp + X-α-Gal, SD-Leu-Trp + AbA) to assess autoactivation of the reporter genes. Prey strains determined to be false positives in yeast were excluded from the pool of identified interacting host proteins.

Bioinformatics: The 5’ and 3’ ends of the human cDNA within each confirmed positive prey plasmid were determined by sequencing at the Iowa State University (ISU) DNA Facility using primers T7-1 (AATACGACTCACTATAG) and 3’AD (AGATGGTGCACGATGCACAG) or 3’ poly-TA (TTTTTTTTTTTTTTTTTTTTTTTTTA), respectively. The sequence data were used in BLAST searches to query the NCBI human genome and human transcript databases.
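As a rough illustration of the BLAST step, the sketch below submits a prey-end sequence to NCBI BLAST using Biopython. The program, database, and organism filter shown here are assumptions chosen for illustration; the authors do not state which BLAST settings they used, and the query sequence is a placeholder rather than an actual Sanger read from the screen.

```python
# Minimal sketch: query a prey cDNA end sequence against an NCBI database
# with Biopython's web BLAST client (database and filter below are assumed).
from Bio.Blast import NCBIWWW, NCBIXML

prey_end_seq = "ACGT" * 10  # placeholder; substitute the real sequencing read

result_handle = NCBIWWW.qblast(
    "blastn",                                  # nucleotide-nucleotide search
    "refseq_rna",                              # assumed transcript database
    prey_end_seq,
    entrez_query='"Homo sapiens"[Organism]',   # restrict hits to human
)

record = NCBIXML.read(result_handle)
for alignment in record.alignments[:5]:        # report the top hits
    best_hsp = alignment.hsps[0]
    print(alignment.title, best_hsp.expect)
```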
Maintenance of protein-protein interactions at 30°C and 37°C: Y2HGold yeast cells were separately co-transformed with a bait plasmid (either the pGBKT7 bait vector containing NP from human influenza A strain H3N2, A/Ca/07/04, or NP from swine influenza A strain H1N1, A/Sw/NC/44173/00) and a prey plasmid (pGADT7 containing one of the human cDNAs isolated as a positive interactor with the human NP bait). Y2HGold cells containing known bait/prey interactors pGBKT7-53 and pGADT7-T and known non-interactors pGBKT7-Lam and pGADT7-T were used as positive and negative controls, respectively. Additional negative interaction controls included Y2HGold cells containing human or swine NP (described above) and an empty prey vector pGADT7. All transformants were grown in YPD media in overnight cultures and were diluted to a concentration of 5 × 10^6 cells/ml (OD = 0.417). 200 μl of culture was placed into each well of a 96-well plate. A 48-sample pinner was used to spot yeast onto 2 plates each of SD-Leu-Trp (control) and SD-His. One plate of each type was incubated at 30°C, the other at 37°C. Growth was assessed and images taken 4 days after the initial plating procedure. At least two replicates of each strain containing the bait (swine NP or human NP) and prey were tested on SD-Leu-Trp and SD-His at both 30°C and 37°C.

β-galactosidase assay: Diploid cells containing bait and prey were prepared by mating Y187 cells containing each of the prey plasmids with Y2HGold cells containing each of the bait plasmids (swine NP or human NP) on YPD agar (MatchMaker Gold Y2H protocol; Clontech, Mountain View, CA). Eight replicate samples of diploids were selected by subsequent growth on SD-Leu-Trp agar for each protein interaction. Diploid cells containing bait/prey were inoculated into 2 mL SD-Leu-Trp liquid media and grown in a 200 rpm shaker at 30°C overnight. The 2 mL overnight cultures were diluted to 4 mL with YPD media and grown in a 200 rpm shaker at 30°C until they reached mid-log phase (OD600 of 0.5-0.8). Cultures were separated into 1.5 mL aliquots, pelleted, washed once, and then resuspended in 300 μL of Z buffer (16.1 g/L Na2HPO4⋅7H2O, 5.5 g/L NaH2PO4⋅H2O, 0.75 g/L KCl, 0.246 g/L MgSO4⋅7H2O, pH 7.0). Samples of 100 μL were lysed by three freeze/thaw cycles with liquid nitrogen and a 37°C water bath. After the cells were lysed, the sample was split into two 150 μL aliquots. A blank tube containing 100 μL of Z buffer was also set up. To analyze enzyme activity, 350 μL of Z buffer with 0.27% β-mercaptoethanol and 4 mg/ml ONPG as substrate were added. The yellow color was allowed to develop at 30°C, and the reaction was stopped by the addition of 200 μL of 1 M Na2CO3. To remove cell debris, the samples were centrifuged for 10 minutes at 16,000 RCF, and OD420 values were recorded. Enzyme activity was expressed in Miller Units, calculated as 1000 × OD420 / (t × V × OD600), where t is the elapsed reaction time and V is the dilution factor (0.25 in this case).
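The Miller Unit formula above is straightforward to apply. The short sketch below implements it exactly as written; the OD readings and reaction time in the example call are placeholders, not measurements from the study.

```python
# Miller Units = 1000 * OD420 / (t * V * OD600), as defined in the assay description.
def miller_units(od420, od600, t_minutes, dilution_factor=0.25):
    """beta-galactosidase activity for one lysate reading."""
    return 1000.0 * od420 / (t_minutes * dilution_factor * od600)

# Placeholder readings for a single bait/prey diploid culture.
print(f"{miller_units(od420=0.35, od600=0.62, t_minutes=30):.1f} Miller Units")
```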
Background: Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. IAVs are enveloped, negative single-stranded RNA viruses whose genome encodes at least ten proteins. The IAV nucleoprotein (NP) is a structural protein that associates with the viral RNA and is essential for virus replication. Understanding how IAVs interact with host proteins is essential for elucidating all of the required processes for viral replication, restrictions in species host range, and potential targets for antiviral therapies. Methods: In this study, the NP from a swine IAV was cloned into a yeast two-hybrid "bait" vector for expression of a yeast Gal4 binding domain (BD)-NP fusion protein. This "bait" was used to screen a Y2H human HeLa cell "prey" library which consisted of human proteins fused to the Gal4 protein's activation domain (AD). The interaction of "bait" and "prey" proteins resulted in activation of reporter genes. Results: Seventeen positive bait-prey interactions were isolated in yeast. All of the "prey" isolated also interact in yeast with a NP "bait" cloned from a human IAV strain. Isolation and sequence analysis of the cDNAs encoding the human prey proteins revealed ten different human proteins. These host proteins are involved in various host cell processes and structures, including purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B). Conclusions: Ten human proteins were identified as interacting with IAV NP in a Y2H screen. Some of these human proteins were reported in previous screens aimed at elucidating host proteins relevant to specific viral life cycle processes such as replication. This study extends previous findings by suggesting a mechanism by which these host proteins associate with the IAV, i.e., physical interaction with NP. Furthermore, this study revealed novel host protein-NP interactions in yeast.
Background: Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. In humans, seasonal IAV infection typically presents as a non-fatal, uncomplicated, acute infection characterized by upper respiratory symptoms as well as fever, headache, soreness, and fatigue lasting 2–5 days [1]. However, deaths from seasonal IAV infections often arise when the normal flu symptoms are exacerbated by compromised immunity or age [1]. In addition to seasonal influenza A viruses, pandemic strains periodically appear, causing increased mortality rates. For example, the 1918 Spanish flu resulted in approximately 50 million human deaths [2]. The most recent pandemic resulted from the emergence of the swine-origin H1N1 virus [3,4]. Due to influenza A’s potential for mortality, high mutation rates (resulting in genetic drift), and pandemic potential (resulting from genetic reassortment), it is critical to learn more about the virus, especially as it pertains to virulence, transmissibility, and the identification of potential targets for development of therapeutics. Influenza A viruses exhibit a broad host range beyond humans [5]. Waterfowl serve as the central reservoir species. In addition to humans, various IAV subtypes circulate in pigs, poultry, horses, and dogs, as well as in other species. Subtypes (H1N1 and H3N2) that circulate in the human population also circulate in swine populations [6]. In addition to the concern for IAV in terms of swine health [7], IAV infection in swine is also important due to the potential for zoonotic infections as well as the potential for swine to serve as a mixing vessel for the generation of human pandemic viruses [5]. The majority of inter-species transmission research has focused on amino acids in the hemagglutinin affecting viral attachment and entry [8]. Other than amino acids associated with temperature sensitivity (e.g., PB2 627 [9] or 701 [10]), which are known to be important, the effects of differences in amino acid residues in proteins of the replication complex on species specificity are less well understood. Previous evidence suggests that some of these replication complex proteins may play a role in allowing zoonotic infections [11]. The genome of the enveloped influenza A virion consists of eight segments of negative-sense single-stranded RNA that encode at least ten viral genes: Hemagglutinin (HA), Neuraminidase (NA), Matrix 2 protein (M2), Matrix 1 protein (M1), Non-structural Protein 1 (NS1), Non-structural Protein 2 (NS2), Nucleoprotein (NP), Polymerase Basic 1 (PB-1), Polymerase Basic 2 (PB-2), and Polymerase Acidic (PA) [1]. The viral envelope contains the HA, NA, and M2 proteins, and M1 proteins form a layer inside the envelope. The IAV RNA genome located inside the virion is coated with NP and is bound by the replication complex consisting of PB1, PB2, and PA. NP, the focus of this study, is encoded by the fifth IAV RNA segment and binds with high affinity to viral RNA [12]. NP plays a role in viral RNA replication and transcription [12]. More recent data suggest that NP binds the polymerase as well as the newly replicated RNA and may act as a processivity factor that is necessary for replication of the viral RNA to be completed [13]. Phylogenetic analysis of IAV NP shows distinct lineages of NP based on the host species [14,15]. Within the NP, there are amino acid signatures found within different host species [16,17].
These host-specific amino acid residues may result in differences in affinities for the various host proteins with which they interact (e.g., importin α1 [18], F-actin [19], nuclear factor 90 [20], cyclophilin E [21], exportin 1 [22], HMGB1 [23], HMGB2 [23], MxA/Mx1 [24,25], HSP40 [26], karyopherin alpha [27,28], clusterin [29], Raf-2p48/BAT1/UAP56 [30], Aly/REF [31], Tat-SF1 [32], TRIM22 [33], and alpha-actinin 4 [34]), or they may result in differences in how the NP interacts with other viral proteins that have also made host-specific adaptations [11]. Given the suggestion that NP plays a role in determining host range [5,35-37], it is important to identify all host proteins that interact with NP. Replication of the IAV is dependent on the host cell machinery and on interactions between host proteins and viral proteins. Therefore, any inquiry into the life cycle of IAV must incorporate host-pathogen protein interactions in order to provide a truly mechanistic understanding of the process. Previous screens have identified important protein-protein interactions between IAV and host proteins, but results are often not consistent between screens. Therefore, corroborating evidence with additional screens is crucial to accumulating the best understanding of critical pathogen-host interactions in viral replication. As examples, several genome-wide screens have been performed using RNAi to systematically knock down host genes and evaluate the effect on various stages of the IAV life cycle [38-41]. An integrated approach carried out by Shapira and colleagues involved transcription profiling and yeast two-hybrid screens using specific viral proteins as baits [28]. Proteomics approaches have revealed host proteins found within viral particles [42,43]. The Random Homozygous Gene Perturbation strategy was used to identify host factors that prevent influenza-mediated killing of host cells [44]. In order to better understand the role of NP-host protein interactions in IAV replication, a Gal4-based yeast two-hybrid (Y2H) assay was used in this study. The Y2H system allows for the identification of unique human binding partners of NP. A “bait” plasmid encoding the binding domain (BD) of the Gal4 transcriptional activator fused to NP and a “prey” plasmid encoding Gal4’s activation domain (AD) fused to a protein encoded by a human cDNA are introduced into a yeast strain containing Gal4-responsive reporter genes. Interaction of the bait and prey brings together Gal4’s BD and AD, resulting in transcriptional activation of the reporter genes [45]. The Y2H approach has successfully identified cellular factors (e.g., Raf-2p48/BAT1/UAP56, Hsp40, KPNA1, KPNA3, KPNA6, C16orf45, GMCL1, MAGED1, MLH1, USHBP1, ZBTB25, CLU, Aly/REF, and ACTN4) that interact with the IAV NP [26-31,34] (Table 1). In this study, the nucleoprotein from a classical swine H1N1 IAV (Sw/NC/44173/00) was used as the “bait” in a Y2H screen against a “prey” HeLa cDNA library. To investigate whether the origin of NP affects the ability of the interactions to take place, the nucleoprotein from a contemporary human H3N2 IAV (A/Ca/07/04) was also used as the “bait” in a Y2H screen against the same prey HeLa cDNA library.
Determining the putative interaction partners of NP among human proteins makes it possible to investigate the functions of the newly identified protein-protein interactions and to verify, and perhaps further elucidate, previously identified host factors.

Table 1. Summary of identified bait-prey interactions and previous reports of host protein association with influenza A virus
Gene | Protein description | Complete ORF | Number of prey clones | Prey clones | Previous reports with IAV
PAICS | Phosphoribosylaminoimidazole carboxylase | Yes | 3 | R3, R9, R15 | Karlas et al. [40], Kumar and Nanduri [47], Kroeker et al. [48]
MSANTD3 | Myb/SANT-like DNA-binding domain containing 3 | No | 3 | R10, R13, R27 | N/A
FLJ30306 | PREDICTED: Homo sapiens uncharacterized LOC101059922, mRNA | Yes | 2 | R23, R33 | N/A
PA28B | Proteasome activator subunit 2 | Yes | 2 | R6, R19 | N/A
KCTD9 | Potassium channel tetramerisation domain containing 9 | No | 2 | R34, R36 | N/A
ACOT13 | Acyl-CoA thioesterase 13 | Yes | 1 | R11 | N/A
TRA2B | Transformer 2 beta homolog (Drosophila) | Yes | 1 | R12 | Zhu et al. [56]
SLC30A9 | Solute carrier family 30 (zinc transporter), member 9 | No | 1 | R28 | Kumar and Nanduri [47], Sui et al. [44]
ATP1B1 | ATPase, Na+/K+ transporting, beta 1 polypeptide | No | 1 | R7 | Mi et al. [51], Liu et al. [50]
CKAP5 | Cytoskeleton associated protein 5 | No | 1 | R29 | N/A
Host proteins identified as interacting with IAV nucleoprotein in previously published yeast two-hybrid screens:
BAT1/UAP56 | RNA-dependent ATPase | N/A | N/A | N/A | Momose et al. [30]
HSP40 | Heat shock protein 40 | N/A | N/A | N/A | Sharma et al. [26]
NPI-1/SRP1/KPNA1 | Karyopherin α1 | N/A | N/A | N/A | O’Neill and Palese [27], Shapira et al. [28]
KPNA3 | Karyopherin α3 | N/A | N/A | N/A | Shapira et al. [28]
KPNA6 | Karyopherin α6 | N/A | N/A | N/A | Shapira et al. [28]
C16orf45 | - | N/A | N/A | N/A | Shapira et al. [28]
GMCL1 | Germ cell-less, spermatogenesis associated 1 | N/A | N/A | N/A | Shapira et al. [28]
MAGED1 | Melanoma antigen family D.1 | N/A | N/A | N/A | Shapira et al. [28]
MLH1 | MutL homolog 1 | N/A | N/A | N/A | Shapira et al. [28]
USHBP1 | Usher syndrome 1C binding protein 1 | N/A | N/A | N/A | Shapira et al. [28]
ZBTB25 | Zinc finger and BTB domain containing 25 | N/A | N/A | N/A | Shapira et al. [28]
CLU | Clusterin | N/A | N/A | N/A | Tripathi et al. [29]
ALY/REF | RNA export adaptor protein | N/A | N/A | N/A | Balasubramaniam et al. [31]
ACTN4 | Alpha-actinin 4 | N/A | N/A | N/A | Sharma et al. [34]

Reported herein are ten potential human host cell proteins that interact with IAV NP in a Y2H screen. The relative strengths of these protein-protein interactions were characterized by a plating assay and a beta-galactosidase assay.

Conclusions: A Y2H screen using a classical swine influenza A virus nucleoprotein as bait resulted in the isolation of ten different putative interacting human host proteins involved in a variety of cellular processes and structures: purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B). All of the identified human host proteins interacted in yeast with NP of both human and swine origin. Proteins PAICS, ATP1B1, TRA2B, and SLC30A9 have been previously identified in studies related to host response to the IAV. Host proteins SLC30A9 and CKAP5 displayed the strongest interactions with IAV NP in yeast in a quantitative β-gal assay.
Background: Influenza A viruses (IAVs) are important pathogens that affect the health of humans and many additional animal species. IAVs are enveloped, negative single-stranded RNA viruses whose genome encodes at least ten proteins. The IAV nucleoprotein (NP) is a structural protein that associates with the viral RNA and is essential for virus replication. Understanding how IAVs interact with host proteins is essential for elucidating all of the required processes for viral replication, restrictions in species host range, and potential targets for antiviral therapies. Methods: In this study, the NP from a swine IAV was cloned into a yeast two-hybrid "bait" vector for expression of a yeast Gal4 binding domain (BD)-NP fusion protein. This "bait" was used to screen a Y2H human HeLa cell "prey" library which consisted of human proteins fused to the Gal4 protein's activation domain (AD). The interaction of "bait" and "prey" proteins resulted in activation of reporter genes. Results: Seventeen positive bait-prey interactions were isolated in yeast. All of the "prey" isolated also interact in yeast with a NP "bait" cloned from a human IAV strain. Isolation and sequence analysis of the cDNAs encoding the human prey proteins revealed ten different human proteins. These host proteins are involved in various host cell processes and structures, including purine biosynthesis (PAICS), metabolism (ACOT13), proteasome (PA28B), DNA-binding (MSANTD3), cytoskeleton (CKAP5), potassium channel formation (KCTD9), zinc transporter function (SLC30A9), Na+/K+ ATPase function (ATP1B1), and RNA splicing (TRA2B). Conclusions: Ten human proteins were identified as interacting with IAV NP in a Y2H screen. Some of these human proteins were reported in previous screens aimed at elucidating host proteins relevant to specific viral life cycle processes such as replication. This study extends previous findings by suggesting a mechanism by which these host proteins associate with the IAV, i.e., physical interaction with NP. Furthermore, this study revealed novel host protein-NP interactions in yeast.
15,472
401
[ 656, 1076, 267, 362, 554, 145, 74, 277, 385 ]
14
[ "prey", "np", "human", "bait", "sd", "swine", "yeast", "growth", "cells", "protein" ]
[ "seasonal influenza viruses", "human influenza", "influenza potential mortality", "influenza genes", "virus influenza potential" ]
null
[CONTENT] Influenza A virus | Nucleoprotein | Yeast two-hybrid | Host restriction [SUMMARY]
null
[CONTENT] Influenza A virus | Nucleoprotein | Yeast two-hybrid | Host restriction [SUMMARY]
[CONTENT] Influenza A virus | Nucleoprotein | Yeast two-hybrid | Host restriction [SUMMARY]
[CONTENT] Influenza A virus | Nucleoprotein | Yeast two-hybrid | Host restriction [SUMMARY]
[CONTENT] Influenza A virus | Nucleoprotein | Yeast two-hybrid | Host restriction [SUMMARY]
[CONTENT] Animals | Genes, Reporter | HeLa Cells | Host-Pathogen Interactions | Humans | Influenzavirus A | Nucleocapsid Proteins | Protein Interaction Mapping | RNA-Binding Proteins | Swine | Two-Hybrid System Techniques | Viral Core Proteins [SUMMARY]
null
[CONTENT] Animals | Genes, Reporter | HeLa Cells | Host-Pathogen Interactions | Humans | Influenzavirus A | Nucleocapsid Proteins | Protein Interaction Mapping | RNA-Binding Proteins | Swine | Two-Hybrid System Techniques | Viral Core Proteins [SUMMARY]
[CONTENT] Animals | Genes, Reporter | HeLa Cells | Host-Pathogen Interactions | Humans | Influenzavirus A | Nucleocapsid Proteins | Protein Interaction Mapping | RNA-Binding Proteins | Swine | Two-Hybrid System Techniques | Viral Core Proteins [SUMMARY]
[CONTENT] Animals | Genes, Reporter | HeLa Cells | Host-Pathogen Interactions | Humans | Influenzavirus A | Nucleocapsid Proteins | Protein Interaction Mapping | RNA-Binding Proteins | Swine | Two-Hybrid System Techniques | Viral Core Proteins [SUMMARY]
[CONTENT] Animals | Genes, Reporter | HeLa Cells | Host-Pathogen Interactions | Humans | Influenzavirus A | Nucleocapsid Proteins | Protein Interaction Mapping | RNA-Binding Proteins | Swine | Two-Hybrid System Techniques | Viral Core Proteins [SUMMARY]
[CONTENT] seasonal influenza viruses | human influenza | influenza potential mortality | influenza genes | virus influenza potential [SUMMARY]
null
[CONTENT] seasonal influenza viruses | human influenza | influenza potential mortality | influenza genes | virus influenza potential [SUMMARY]
[CONTENT] seasonal influenza viruses | human influenza | influenza potential mortality | influenza genes | virus influenza potential [SUMMARY]
[CONTENT] seasonal influenza viruses | human influenza | influenza potential mortality | influenza genes | virus influenza potential [SUMMARY]
[CONTENT] seasonal influenza viruses | human influenza | influenza potential mortality | influenza genes | virus influenza potential [SUMMARY]
[CONTENT] prey | np | human | bait | sd | swine | yeast | growth | cells | protein [SUMMARY]
null
[CONTENT] prey | np | human | bait | sd | swine | yeast | growth | cells | protein [SUMMARY]
[CONTENT] prey | np | human | bait | sd | swine | yeast | growth | cells | protein [SUMMARY]
[CONTENT] prey | np | human | bait | sd | swine | yeast | growth | cells | protein [SUMMARY]
[CONTENT] prey | np | human | bait | sd | swine | yeast | growth | cells | protein [SUMMARY]
[CONTENT] host | protein | iav | 28 | viral | ashapira | ashapira 28 | proteins | np | replication [SUMMARY]
null
[CONTENT] human | growth | prey | np | swine | swine human | activity | bait | figure | human np [SUMMARY]
[CONTENT] slc30a9 | host | proteins | function | host proteins | ckap5 | paics | tra2b | human host proteins | atp1b1 [SUMMARY]
[CONTENT] np | sd | human | prey | bait | cells | swine | containing | yeast | trp [SUMMARY]
[CONTENT] np | sd | human | prey | bait | cells | swine | containing | yeast | trp [SUMMARY]
[CONTENT] Influenza A ||| RNA | at least ten ||| NP | RNA ||| [SUMMARY]
null
[CONTENT] Seventeen ||| NP ||| ten ||| PAICS | MSANTD3 | cytoskeleton | CKAP5 | KCTD9 | ATPase | RNA [SUMMARY]
[CONTENT] Ten | NP ||| ||| IAV | NP ||| NP [SUMMARY]
[CONTENT] ||| RNA | at least ten ||| NP | RNA ||| ||| NP | IAV | two ||| HeLa ||| ||| ||| Seventeen ||| NP ||| ten ||| PAICS | MSANTD3 | cytoskeleton | CKAP5 | KCTD9 | ATPase | RNA ||| Ten | NP ||| ||| IAV | NP ||| NP [SUMMARY]
[CONTENT] ||| RNA | at least ten ||| NP | RNA ||| ||| NP | IAV | two ||| HeLa ||| ||| ||| Seventeen ||| NP ||| ten ||| PAICS | MSANTD3 | cytoskeleton | CKAP5 | KCTD9 | ATPase | RNA ||| Ten | NP ||| ||| IAV | NP ||| NP [SUMMARY]
Antibiotic resistance in Helicobacter pylori strains and its effect on H. pylori eradication rates in a single center in Korea.
24205490
Clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin have been commonly used for the eradication of Helicobacter pylori. We compared the change in antibiotic resistance of H. pylori strains during two separate periods and investigated the effect of antibiotic resistance on H. pylori eradication.
BACKGROUND
H. pylori strains were isolated from 71 patients between 2009 and 2010 and from 94 patients between 2011 and 2012. The distribution of minimal inhibitory concentration (MIC) of 5 antibiotics was assessed using the agar dilution method, and H. pylori eradication based on the antimicrobial susceptibility of the isolates was investigated retrospectively.
METHODS
Antibiotic resistance rate against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin for the 2009-2010 isolates were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and for the 2011-2012 isolates were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. Multi-drug resistance for 2 or more antibiotics increased slightly from 16.9% (12/71) in the 2009-2010 isolates to 23.4% (22/94) in the 2011-2012 isolates. In follow-up testing of 66 patients, first-line treatment successfully eradicated H. pylori in 50 patients (75.8%) and failed in 4 of 7 patients (57.1%) in a clarithromycin-resistant and amoxicillin-susceptible group.
RESULTS
We observed an increase in resistance to clarithromycin and an overall increase in multi-drug resistance during the 2 study periods. The effectiveness of the eradication regimen was low with combinations of clarithromycin and amoxicillin, particularly in the clarithromycin-resistant group. Thus, eradication of H. pylori depends upon periodic monitoring of antimicrobial susceptibility.
CONCLUSIONS
[ "Adult", "Aged", "Anti-Bacterial Agents", "Drug Resistance, Multiple, Bacterial", "Female", "Helicobacter Infections", "Helicobacter pylori", "Humans", "Male", "Microbial Sensitivity Tests", "Middle Aged", "Peptic Ulcer", "Republic of Korea", "Retrospective Studies", "Treatment Outcome" ]
3819440
INTRODUCTION
Eradication of gastric colonies of Helicobacter pylori helps heal gastritis and peptic ulcer disease and has beneficial effects on the regression of atrophic gastritis and the prevention of distal gastric cancer [1, 2]. Triple therapy using a proton pump inhibitor (PPI) with clarithromycin and amoxicillin or metronidazole is recommended as the first-line treatment regimen for H. pylori eradication. If it fails, bismuth-containing quadruple therapy, which adds further antibiotics to the first-line treatment regimen, is used [3, 4]. The increase in clarithromycin resistance in Korea is considered to be closely related to the decrease in the eradication rate of first-line therapy. According to recent data, clarithromycin resistance sharply increased from 16.7% to 38.5% between 2003 and 2009, and eradication rates have decreased to 77-87% since 2003 [4-6]; these rates include regional and institutional differences. Although regular antibiotic resistance monitoring is important in the clinical setting, the labor-intensive and time-consuming nature of H. pylori isolation from clinical samples complicates comparative antibiotic susceptibility testing. In this study, we investigated H. pylori antibiotic resistance and its effect on eradication rates in a single center in Korea during two periods, 2009-2010 and 2011-2012.
METHODS
1. Patients: H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with a PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), and clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with a PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration. This study was conducted retrospectively to follow up the results of H. pylori eradication on the basis of the antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital.

2. H. pylori culture: The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England): 10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B. Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37°C for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment.
3. Susceptibility tests: The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10]. For H. pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively.

4. Statistical analysis: Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Inc., Chicago, IL, USA). Antibiotic resistance data were analyzed using the Student t test and chi-square test. P<0.05 was considered statistically significant.
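The breakpoints in the susceptibility testing section map directly onto a simple classification rule. The sketch below applies those cut-offs to a set of MIC values; the example MICs are placeholders rather than isolates from the study, and resistance here simply means an MIC at or above the stated breakpoint.

```python
# Resistance breakpoints (ug/mL) as stated in the susceptibility testing section.
BREAKPOINTS = {
    "clarithromycin": 1,
    "amoxicillin": 1,
    "tetracycline": 4,
    "metronidazole": 8,
    "levofloxacin": 1,
}

def classify(mics):
    """Return {antibiotic: 'R' or 'S'} for one isolate's MICs (ug/mL)."""
    return {drug: ("R" if mic >= BREAKPOINTS[drug] else "S") for drug, mic in mics.items()}

# Placeholder isolate: resistant to metronidazole only under these cut-offs.
example = {"clarithromycin": 0.125, "amoxicillin": 0.031,
           "tetracycline": 0.5, "metronidazole": 32, "levofloxacin": 0.25}
profile = classify(example)
print(profile)
print("multi-drug resistant (2 or more antibiotics):", sum(v == "R" for v in profile.values()) >= 2)
```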
RESULTS
1. Antibiotic resistance of H. pylori: The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% between the 2 periods, respectively (Fig. 1), although the increase was not statistically significant. When the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance to 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but the difference was not statistically significant (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2).

2. Effect of antibiotic resistance on H. pylori eradication rates: Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3.
Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) . Second-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32). Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) . Second-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32).
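The group comparisons reported above were performed by the authors in SPSS (chi-square and Student's t test). As an illustrative sketch only, the same 2×2 comparisons could be set up from the counts given in this section; the p-values produced this way are not guaranteed to match the published ones, and with small cell counts (e.g., 3/7) a Fisher's exact test may be more appropriate than chi-square.

```python
# Illustrative reconstruction of the 2x2 comparisons from the counts reported
# above. Not the authors' analysis (they used SPSS); results may differ
# slightly from the published p-values.
from scipy.stats import chi2_contingency, fisher_exact

# Clarithromycin resistance by study period: 5/71 (2009-2010) vs 15/94 (2011-2012).
period_table = [
    [5, 71 - 5],    # 2009-2010: resistant, susceptible
    [15, 94 - 15],  # 2011-2012: resistant, susceptible
]
chi2, p_period, dof, _ = chi2_contingency(period_table)
print(f"Clarithromycin resistance, 2009-2010 vs 2011-2012: chi2={chi2:.2f}, p={p_period:.3f}")

# Eradication by clarithromycin susceptibility: 47/59 vs 3/7 successes.
eradication_table = [
    [47, 59 - 47],  # clarithromycin-susceptible: eradicated, not eradicated
    [3, 7 - 3],     # clarithromycin-resistant: eradicated, not eradicated
]
odds_ratio, p_erad = fisher_exact(eradication_table)
print(f"Eradication, susceptible vs resistant: OR={odds_ratio:.2f}, p={p_erad:.3f}")
```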
null
null
[ "INTRODUCTION", "1. Patients", "2. H. pylori culture", "3. Susceptibility tests", "4. Statistical analysis", "1. Antibiotic resistance of H. pylori", "2. Effect of antibiotic resistance on H. pylori eradication rates" ]
[ "Eradication of gastric colonies of Helicobacter pylori helps heal gastritis and peptic ulcer disease and has beneficial effects on the regression of atrophic gastritis and the prevention of distal gastric cancer [1, 2]. Triple therapy using a proton pump inhibitor (PPI) with clarithromycin and amoxicillin or metronidazole is recommended as the first-line treatment regimen for H. pylori eradication. If it fails, bismuth-containing quadruple therapy, which involves inclusion of additional antibiotics to the first-line treatment regimen is used [3, 4]. The increase in clarithromycin resistance in Korea is considered to be closely related to the decrease of eradication rate in first-line therapy. According to recent data, clarithromycin resistance sharply increased from 16.7% to 38.5% from 2003 through 2009, and eradication rates have decreased by 77-87% since 2003 [4-6]; these rates are inclusive of regional and institutional differences.\nAlthough regular antibiotic resistance monitoring is important in the clinical setting, the labor-intensive and time-consuming nature of H. pylori isolation from clinical samples complicates comparative antibiotic susceptibility testing. In this study, we investigated H. pylori antibiotic resistance and its effect on eradication rates in a single center in Korea between 2009-2010 and 2011-2012.", "H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration.\nThis study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital.", "The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment.", "The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. 
Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10].\nFor H. pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively.", "Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant.", "The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant.\nWhen the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2).", "Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) .\nSecond-line therapy was prescribed for the 16 patients in whom first-line therapy failed. 
The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32)." ]
[ null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "1. Patients", "2. H. pylori culture", "3. Susceptibility tests", "4. Statistical analysis", "RESULTS", "1. Antibiotic resistance of H. pylori", "2. Effect of antibiotic resistance on H. pylori eradication rates", "DISCUSSION" ]
[ "Eradication of gastric colonies of Helicobacter pylori helps heal gastritis and peptic ulcer disease and has beneficial effects on the regression of atrophic gastritis and the prevention of distal gastric cancer [1, 2]. Triple therapy using a proton pump inhibitor (PPI) with clarithromycin and amoxicillin or metronidazole is recommended as the first-line treatment regimen for H. pylori eradication. If it fails, bismuth-containing quadruple therapy, which involves inclusion of additional antibiotics to the first-line treatment regimen is used [3, 4]. The increase in clarithromycin resistance in Korea is considered to be closely related to the decrease of eradication rate in first-line therapy. According to recent data, clarithromycin resistance sharply increased from 16.7% to 38.5% from 2003 through 2009, and eradication rates have decreased by 77-87% since 2003 [4-6]; these rates are inclusive of regional and institutional differences.\nAlthough regular antibiotic resistance monitoring is important in the clinical setting, the labor-intensive and time-consuming nature of H. pylori isolation from clinical samples complicates comparative antibiotic susceptibility testing. In this study, we investigated H. pylori antibiotic resistance and its effect on eradication rates in a single center in Korea between 2009-2010 and 2011-2012.", " 1. Patients H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration.\nThis study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital.\nH. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. 
pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration.\nThis study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital.\n 2. H. pylori culture The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment.\nThe culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment.\n 3. Susceptibility tests The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10].\nFor H. pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively.\nThe minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10].\nFor H. 
pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively.\n 4. Statistical analysis Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant.\nStatistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant.", "H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration.\nThis study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital.", "The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment.", "The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10].\nFor H. 
pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively.", "Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant.", " 1. Antibiotic resistance of H. pylori The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant.\nWhen the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2).\nThe antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant.\nWhen the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2).\n 2. Effect of antibiotic resistance on H. 
pylori eradication rates Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) .\nSecond-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32).\nOf the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) .\nSecond-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32).", "The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant.\nWhen the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). 
Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2).", "Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) .\nSecond-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32).", "Recently, H. pylori eradication rates of 70-95% have been reported [4-6]. Failure of eradication may be attributed to increase in antibiotic resistance associated with problems in patient compliance, such as difficulties in taking drugs, or side effects [11-13]. In this study, the antimicrobial susceptibility test was conducted for H. pylori strains isolated from a single center over 2 periods, followed by examination of the factors affecting failure.\nClarithromycin resistance rates increased from 7.0% in the 2009-2010 patient group to 16.0% in the 2011-2012 patient group. These rates are slightly lower than that reported in a previous study, which showed that the overall frequency of clarithromycin-resistant H. pylori in 2008 was 21.6% [14]. This discrepancy is conceivably attributable to regional differences in the location of the studies. The primary factor influencing clarithromycin resistance is known to be the A2142-4 point mutation in the 23S rRNA [14-17].\nAmoxicillin resistance rates decreased slightly from 2.8% (2009-2011) to 2.1% (2011-2012). Resistance to tetracycline was not detected in any strain when the cut-off was set at ≥ 4 µg/mL, and the MICs were as low as 0.031-2 µg/mL. Recently, tetracycline resistance rates of 0-36% have been reported. However, as with clarithromycin resistance, the differences could be due to regional differences [18, 19]. Metronidazole resistance rates were higher than those for all other antibiotics, ranging from 45.1% in the 2009-2010 group to 56.3% in the 2011-2012 group, and the MIC of metronidazole was the highest among all the studied antibiotics (8-256 µg/mL). Levofloxacin resistance rates decreased slightly from 26.8% in 2009-2010 to 23.3% in 2011-2012. This finding is consistent with that reported in a previous domestic study (resistance rates decreased from 26.3% to 22.5%) [18, 19]. The continuous increase in levofloxacin resistance warrants the use of rescue therapy based on the results of antimicrobial susceptibility tests. 
Although differences in resistance rates to the 5 antibiotics in the 2 study periods failed to reach statistical significance, increases in the resistance to clarithromycin and metronidazole were identified. Moreover, multi-drug resistance for 2 or more antibiotics increased from 16.9% (12/71) in 2009-2010 group to 23.4% (22/94) in the 2011-2012 group, but there was no statistical significance (P<0.082).\nThe overall eradication rate in patients who received first-line therapy with clarithromycin and amoxicillin was 75.8% (50/66), ranging from 78.1% in 2009-2010 to 73.5% in 2011-2012 (data not shown). The eradication rate for clarithromycin-resistant strains (42.9%, 3/7) was significantly lower than that for the clarithromycin-susceptible strains (79.7%, 47/59) (P<0.001). These results indicate that resistance to clarithromycin is a critical factor in the effectiveness of eradication with the first-line regimen.\nComplete eradication rate was only 79.3% in strains susceptible to both clarithromycin and amoxicillin with first-line therapy (Table 3). Thus failure rate of 20.7% may be attributable to problems with patient compliance; however, a more extensive follow-up survey is needed to confirm it. Eradication was successful in 6 of 16 patients, who received second-line therapy including tetracycline and metronidazole; second-line therapy failed in 4 patients, and data were unavailable for the remaining 6 patients. Previous treatment histories of the 16 patients in the second-line treatment group were as follows: clarithromycin treatment for eradication of H. pylori, 2 patients; treatment for liver cirrhosis, 1 patient; and poor compliance, 1 patient. Specific histories were unavailable for the remaining 12 patients. Although failure of eradication is generally linked to antibiotic resistance, increase in antibiotic resistance does not always correlate with decrease in eradication rates; therefore, further studies are required to identify other factors affecting eradication rates.\nIn conclusion, the effectiveness of eradication using first-line therapy with clarithromycin and amoxicillin decreased, especially in the clarithromycin-resistant group, and clarithromycin resistance was considered crucial for the eradication of H. pylori. This result suggested that eradication of H. pylori is greatly dependent on periodic monitoring of antimicrobial susceptibility, which is necessary for selection of an appropriate antibiotic regimen." ]
[ null, "methods", null, null, null, null, "results", null, null, "discussion" ]
[ "\nHelicobacter pylori\n", "Antibiotic resistance", "Eradication" ]
INTRODUCTION: Eradication of gastric colonies of Helicobacter pylori helps heal gastritis and peptic ulcer disease and has beneficial effects on the regression of atrophic gastritis and the prevention of distal gastric cancer [1, 2]. Triple therapy using a proton pump inhibitor (PPI) with clarithromycin and amoxicillin or metronidazole is recommended as the first-line treatment regimen for H. pylori eradication. If it fails, bismuth-containing quadruple therapy, which involves inclusion of additional antibiotics to the first-line treatment regimen is used [3, 4]. The increase in clarithromycin resistance in Korea is considered to be closely related to the decrease of eradication rate in first-line therapy. According to recent data, clarithromycin resistance sharply increased from 16.7% to 38.5% from 2003 through 2009, and eradication rates have decreased by 77-87% since 2003 [4-6]; these rates are inclusive of regional and institutional differences. Although regular antibiotic resistance monitoring is important in the clinical setting, the labor-intensive and time-consuming nature of H. pylori isolation from clinical samples complicates comparative antibiotic susceptibility testing. In this study, we investigated H. pylori antibiotic resistance and its effect on eradication rates in a single center in Korea between 2009-2010 and 2011-2012. METHODS: 1. Patients H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration. This study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital. H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration. 
This study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital. 2. H. pylori culture The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment. The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment. 3. Susceptibility tests The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10]. For H. pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively. The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10]. For H. pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively. 4. 
Statistical analysis Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant. Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant. 1. Patients: H. pylori strains were isolated from 71 patients with H. pylori infections from July 2009 to December 2010 and from 94 patients from June 2011 through December 2012 at the Yongin Severance Hospital of Yonsei University, Korea. Of these patients, 66 (clinical characteristics listed in Table 1) had previously undergone eradication treatments, including week-long first-line treatment with PPI (pantoprazole or esomeprazole 30 mg, bid), amoxicillin (2,250 mg, tid), clarithromycin (1,000 mg, bid). First-line therapy failed in 16 patients, and they were subjected to second-line treatment with PPI (30 mg, bid), bismuth (300 mg, bid), metronidazole (2,250 mg, bid), and tetracycline (1,000 mg, qid). Eradication of H. pylori was verified by a negative result in a 13C-urea breath test (Isotechnika, Alberta, Canada) after at least 4 weeks of drug administration. This study was conducted retrospectively to follow up the results of eradication of H. pylori on the basis of antimicrobial susceptibility of the isolates, and it did not interfere with patient management decisions. The study was approved by the Institutional Review Board of Yonsei University College of Medicine (No. 4-2011-0508). Written informed consent was provided by all patients at the time of their first visit to the hospital. 2. H. pylori culture: The culture medium used in this study was composed of Brucella broth (BBL, Sparks, MD, USA) containing 1.2% agar, 10% bovine serum, and selected antibiotics (Oxoid Limited, Hampshire, England) (10 µg/mL vancomycin, 5 µg/mL trimethoprim, 5 µg/mL cefsulodin, and 5 µg/mL amphotericin B). Completely minced gastric biopsy specimens were incubated under 10% CO2, 5% O2, and 100% humidity at 37℃ for 3-5 days. Strains were identified as H. pylori by Gram staining; colony morphology analysis; and oxidase, catalase, and urease tests. The H. pylori ATCC 43504 strain was cultured as a standard using the same methods described above for quality control assessment. 3. Susceptibility tests: The minimal inhibitory concentrations (MICs) for clarithromycin (Sigma-Aldrich Co., St. Louis, MO, USA), amoxicillin (Sigma-Aldrich), tetracycline (Sigma-Aldrich), metronidazole (Sigma-Aldrich), and levofloxacin (Sigma-Aldrich) were determined using a slightly modified agar dilution method (using Brucella broth base containing 1.2% agar). Clarithromycin resistance was defined according to the CLSI-approved breakpoint (≥1 µg/mL) [7]. Isolates were defined as resistant to amoxicillin, tetracycline, metronidazole, and levofloxacin, when MICs were ≥1, ≥4, ≥8, and ≥1 µg/mL, respectively [8-10]. For H. pylori ATCC 43504, the MIC ranges for clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin were 0.016-0.125 µg/mL, 0.016-0.125 µg/mL, 64-256 µg/mL, 0.125-1 µg/mL, and 0.064-0.5 µg/mL, respectively. 4. 
Statistical analysis: Statistical analysis was performed using SPSS (Statistical Package for the Social Sciences version 18.0; SPSS Ins., Chicago, IL, USA). Data of antibiotic resistance were analyzed using the student t test and Chi-square test. P<0.05 was considered statistically significant. RESULTS: 1. Antibiotic resistance of H. pylori The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant. When the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2). The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant. When the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2). 2. Effect of antibiotic resistance on H. pylori eradication rates Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. 
pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) . Second-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32). Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) . Second-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32). 1. Antibiotic resistance of H. pylori: The antibiotic resistance rates for the isolates from the 2009-2010 group against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and those for the isolates from the 2011-2012 group were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. The rate of H. pylori resistance to clarithromycin and metronidazole increased from 7.0% to 16.0% and from 45.1% to 56.3% for the 2 periods, respectively (Fig. 1), although the increase was not statistically significant. When the MIC distribution profiles for the 2 study periods were compared, the MICs for clarithromycin showed notable differences between the susceptible and resistant strains (Fig. 2). The MIC of the clarithromycin-susceptible strains was less than 0.125 µg/mL. Resistance to tetracycline was not detected in any strain (based on a cut-off of ≥ 4 µg/mL). The MIC range of tetracycline was 0.031-2 µg/mL. The MIC of metronidazole varied widely (8-256 µg/mL). Multi-drug resistance for 2 or more antibiotics was more frequent in the isolates from 2011-2012 (23.4%, 22/94) than in the isolates from 2009-2010 (16.9%, 12/71), but there was no statistical significance (P<0.082). Only 1 strain exhibited multi-drug resistance to clarithromycin, amoxicillin, metronidazole, and levofloxacin (Table 2). 2. Effect of antibiotic resistance on H. 
pylori eradication rates: Of the 165 patients studied during the 2009-2012 period, 66 patients who were subjected to the first-line therapy were followed up for H. pylori eradication after treatment. Among these 66 patients, no significant differences were found with respect to sex, age, and endoscopic diagnosis. Eradication of H. pylori was successful in 50 of these 66 patients (75.8%). The effects of antibiotic resistance on H. pylori eradication rates are shown in Table 3. Eradication rates were 79.3% (46/58) for the clarithromycin-susceptible and amoxicillin-susceptible strains, and 100% (1/1) for the clarithromycin-susceptible and amoxicillin-resistant strains. A significant difference was observed between the eradication rates for the clarithromycin-resistant (42.9%, 3/7) and the clarithromycin-sensitive (79.7%, 47/59) strains (P<0.001) . Second-line therapy was prescribed for the 16 patients in whom first-line therapy failed. The eradication rates for the tetracycline-susceptible and metronidazole-susceptible strains and the tetracycline-susceptible and metronidazole-resistant strains were 50.0% (4/8) and 25.0% (2/8), respectively (P<0.32). DISCUSSION: Recently, H. pylori eradication rates of 70-95% have been reported [4-6]. Failure of eradication may be attributed to increase in antibiotic resistance associated with problems in patient compliance, such as difficulties in taking drugs, or side effects [11-13]. In this study, the antimicrobial susceptibility test was conducted for H. pylori strains isolated from a single center over 2 periods, followed by examination of the factors affecting failure. Clarithromycin resistance rates increased from 7.0% in the 2009-2010 patient group to 16.0% in the 2011-2012 patient group. These rates are slightly lower than that reported in a previous study, which showed that the overall frequency of clarithromycin-resistant H. pylori in 2008 was 21.6% [14]. This discrepancy is conceivably attributable to regional differences in the location of the studies. The primary factor influencing clarithromycin resistance is known to be the A2142-4 point mutation in the 23S rRNA [14-17]. Amoxicillin resistance rates decreased slightly from 2.8% (2009-2011) to 2.1% (2011-2012). Resistance to tetracycline was not detected in any strain when the cut-off was set at ≥ 4 µg/mL, and the MICs were as low as 0.031-2 µg/mL. Recently, tetracycline resistance rates of 0-36% have been reported. However, as with clarithromycin resistance, the differences could be due to regional differences [18, 19]. Metronidazole resistance rates were higher than those for all other antibiotics, ranging from 45.1% in the 2009-2010 group to 56.3% in the 2011-2012 group, and the MIC of metronidazole was the highest among all the studied antibiotics (8-256 µg/mL). Levofloxacin resistance rates decreased slightly from 26.8% in 2009-2010 to 23.3% in 2011-2012. This finding is consistent with that reported in a previous domestic study (resistance rates decreased from 26.3% to 22.5%) [18, 19]. The continuous increase in levofloxacin resistance warrants the use of rescue therapy based on the results of antimicrobial susceptibility tests. Although differences in resistance rates to the 5 antibiotics in the 2 study periods failed to reach statistical significance, increases in the resistance to clarithromycin and metronidazole were identified. 
Moreover, multi-drug resistance for 2 or more antibiotics increased from 16.9% (12/71) in 2009-2010 group to 23.4% (22/94) in the 2011-2012 group, but there was no statistical significance (P<0.082). The overall eradication rate in patients who received first-line therapy with clarithromycin and amoxicillin was 75.8% (50/66), ranging from 78.1% in 2009-2010 to 73.5% in 2011-2012 (data not shown). The eradication rate for clarithromycin-resistant strains (42.9%, 3/7) was significantly lower than that for the clarithromycin-susceptible strains (79.7%, 47/59) (P<0.001). These results indicate that resistance to clarithromycin is a critical factor in the effectiveness of eradication with the first-line regimen. Complete eradication rate was only 79.3% in strains susceptible to both clarithromycin and amoxicillin with first-line therapy (Table 3). Thus failure rate of 20.7% may be attributable to problems with patient compliance; however, a more extensive follow-up survey is needed to confirm it. Eradication was successful in 6 of 16 patients, who received second-line therapy including tetracycline and metronidazole; second-line therapy failed in 4 patients, and data were unavailable for the remaining 6 patients. Previous treatment histories of the 16 patients in the second-line treatment group were as follows: clarithromycin treatment for eradication of H. pylori, 2 patients; treatment for liver cirrhosis, 1 patient; and poor compliance, 1 patient. Specific histories were unavailable for the remaining 12 patients. Although failure of eradication is generally linked to antibiotic resistance, increase in antibiotic resistance does not always correlate with decrease in eradication rates; therefore, further studies are required to identify other factors affecting eradication rates. In conclusion, the effectiveness of eradication using first-line therapy with clarithromycin and amoxicillin decreased, especially in the clarithromycin-resistant group, and clarithromycin resistance was considered crucial for the eradication of H. pylori. This result suggested that eradication of H. pylori is greatly dependent on periodic monitoring of antimicrobial susceptibility, which is necessary for selection of an appropriate antibiotic regimen.
Background: Clarithromycin, amoxicillin, metronidazole, tetracycline, and levofloxacin have been commonly used for the eradication of Helicobacter pylori. We compared the change in antibiotic resistance of H. pylori strains during two separate periods and investigated the effect of antibiotic resistance on H. pylori eradication. Methods: H. pylori strains were isolated from 71 patients between 2009 and 2010 and from 94 patients between 2011 and 2012. The distribution of minimal inhibitory concentration (MIC) of 5 antibiotics was assessed using the agar dilution method, and H. pylori eradication based on the antimicrobial susceptibility of the isolates was investigated retrospectively. Results: Antibiotic resistance rate against clarithromycin, amoxicillin, tetracycline, metronidazole, and levofloxacin for the 2009-2010 isolates were 7.0% (5/71), 2.8% (2/71), 0% (0/71), 45.1% (32/71), and 26.8% (19/71), respectively, and for the 2011-2012 isolates were 16.0% (15/94), 2.1% (2/94), 0% (0/94), 56.3% (53/94), and 22.3% (21/94), respectively. Multi-drug resistance for 2 or more antibiotics increased slightly from 16.9% (12/71) in the 2009-2010 isolates to 23.4% (22/94) in the 2011-2012 isolates. In follow-up testing of 66 patients, first-line treatment successfully eradicated H. pylori in 50 patients (75.8%) and failed in 4 of 7 patients (57.1%) in a clarithromycin-resistant and amoxicillin-susceptible group. Conclusions: We observed an increase in resistance to clarithromycin and an overall increase in multi-drug resistance during the 2 study periods. The effectiveness of the eradication regimen was low with combinations of clarithromycin and amoxicillin, particularly in the clarithromycin-resistant group. Thus, eradication of H. pylori depends upon periodic monitoring of antimicrobial susceptibility.
null
null
4,734
357
[ 237, 259, 144, 189, 50, 318, 219 ]
10
[ "clarithromycin", "resistance", "pylori", "ml", "eradication", "µg", "µg ml", "patients", "metronidazole", "strains" ]
[ "pylori eradication fails", "pylori patients treatment", "followed pylori eradication", "pylori resistance clarithromycin", "clarithromycin resistant pylori" ]
null
null
[CONTENT] Helicobacter pylori | Antibiotic resistance | Eradication [SUMMARY]
[CONTENT] Helicobacter pylori | Antibiotic resistance | Eradication [SUMMARY]
[CONTENT] Helicobacter pylori | Antibiotic resistance | Eradication [SUMMARY]
null
[CONTENT] Helicobacter pylori | Antibiotic resistance | Eradication [SUMMARY]
null
[CONTENT] Adult | Aged | Anti-Bacterial Agents | Drug Resistance, Multiple, Bacterial | Female | Helicobacter Infections | Helicobacter pylori | Humans | Male | Microbial Sensitivity Tests | Middle Aged | Peptic Ulcer | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Anti-Bacterial Agents | Drug Resistance, Multiple, Bacterial | Female | Helicobacter Infections | Helicobacter pylori | Humans | Male | Microbial Sensitivity Tests | Middle Aged | Peptic Ulcer | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Anti-Bacterial Agents | Drug Resistance, Multiple, Bacterial | Female | Helicobacter Infections | Helicobacter pylori | Humans | Male | Microbial Sensitivity Tests | Middle Aged | Peptic Ulcer | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
null
[CONTENT] Adult | Aged | Anti-Bacterial Agents | Drug Resistance, Multiple, Bacterial | Female | Helicobacter Infections | Helicobacter pylori | Humans | Male | Microbial Sensitivity Tests | Middle Aged | Peptic Ulcer | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
null
[CONTENT] pylori eradication fails | pylori patients treatment | followed pylori eradication | pylori resistance clarithromycin | clarithromycin resistant pylori [SUMMARY]
[CONTENT] pylori eradication fails | pylori patients treatment | followed pylori eradication | pylori resistance clarithromycin | clarithromycin resistant pylori [SUMMARY]
[CONTENT] pylori eradication fails | pylori patients treatment | followed pylori eradication | pylori resistance clarithromycin | clarithromycin resistant pylori [SUMMARY]
null
[CONTENT] pylori eradication fails | pylori patients treatment | followed pylori eradication | pylori resistance clarithromycin | clarithromycin resistant pylori [SUMMARY]
null
[CONTENT] clarithromycin | resistance | pylori | ml | eradication | µg | µg ml | patients | metronidazole | strains [SUMMARY]
[CONTENT] clarithromycin | resistance | pylori | ml | eradication | µg | µg ml | patients | metronidazole | strains [SUMMARY]
[CONTENT] clarithromycin | resistance | pylori | ml | eradication | µg | µg ml | patients | metronidazole | strains [SUMMARY]
null
[CONTENT] clarithromycin | resistance | pylori | ml | eradication | µg | µg ml | patients | metronidazole | strains [SUMMARY]
null
[CONTENT] eradication | line treatment regimen | 2003 | treatment regimen | gastritis | rates | resistance | regimen | pylori | line [SUMMARY]
[CONTENT] mg | µg | µg ml | ml | aldrich | bid | mg bid | sigma | sigma aldrich | patients [SUMMARY]
[CONTENT] susceptible | clarithromycin | 71 | 94 | eradication | strains | rates | resistance | eradication rates | patients [SUMMARY]
null
[CONTENT] ml | µg | µg ml | eradication | clarithromycin | resistance | patients | pylori | rates | susceptible [SUMMARY]
null
[CONTENT] Clarithromycin ||| two [SUMMARY]
[CONTENT] 71 | between 2009 and 2010 | 94 | between 2011 and 2012 ||| MIC | 5 [SUMMARY]
[CONTENT] 2009-2010 | 7.0% | 5/71 | 2.8% | 2/71 | 0% | 0/71 | 45.1% | 32/71 | 26.8% | 19/71 | 2011-2012 | 16.0% | 15/94 | 2.1% | 2/94 | 0% | 0/94 | 56.3% | 22.3% | 21/94 ||| 2 | 16.9% | 12/71 | 2009-2010 | 23.4% | 22/94 | 2011-2012 ||| 66 | first | 50 | 75.8% | 4 | 7 | 57.1% [SUMMARY]
null
[CONTENT] Clarithromycin ||| two ||| 71 | between 2009 and 2010 | 94 | between 2011 and 2012 ||| MIC | 5 ||| ||| 2009-2010 | 7.0% | 5/71 | 2.8% | 2/71 | 0% | 0/71 | 45.1% | 32/71 | 26.8% | 19/71 | 2011-2012 | 16.0% | 15/94 | 2.1% | 2/94 | 0% | 0/94 | 56.3% | 22.3% | 21/94 ||| 2 | 16.9% | 12/71 | 2009-2010 | 23.4% | 22/94 | 2011-2012 ||| 66 | first | 50 | 75.8% | 4 | 7 | 57.1% ||| 2 ||| ||| [SUMMARY]
null
LPS preconditioning redirects TLR signaling following stroke: TRIF-IRF3 plays a seminal role in mediating tolerance to ischemic injury.
21999375
Toll-like receptor 4 (TLR4) is activated in response to cerebral ischemia leading to substantial brain damage. In contrast, mild activation of TLR4 by preconditioning with low dose exposure to lipopolysaccharide (LPS) prior to cerebral ischemia dramatically improves outcome by reprogramming the signaling response to injury. This suggests that TLR4 signaling can be altered to induce an endogenously neuroprotective phenotype. However, the TLR4 signaling events involved in this neuroprotective response are poorly understood. Here we define several molecular mediators of the primary signaling cascades induced by LPS preconditioning that give rise to the reprogrammed response to cerebral ischemia and confer the neuroprotective phenotype.
BACKGROUND
C57BL6 mice were preconditioned with low dose LPS prior to transient middle cerebral artery occlusion (MCAO). Cortical tissue and blood were collected following MCAO. Microarray and qtPCR were performed to analyze gene expression associated with TLR4 signaling. EMSA and DNA binding ELISA were used to evaluate NFκB and IRF3 activity. Protein expression was determined using Western blot or ELISA. MyD88-/- and TRIF-/- mice were utilized to evaluate signaling in LPS preconditioning-induced neuroprotection.
METHODS
Gene expression analyses revealed that LPS preconditioning resulted in a marked upregulation of anti-inflammatory/type I IFN-associated genes following ischemia while pro-inflammatory genes induced following ischemia were present but not differentially modulated by LPS. Interestingly, although expression of pro-inflammatory genes was observed, there was decreased activity of NFκB p65 and increased presence of NFκB inhibitors, including Ship1, Tollip, and p105, in LPS-preconditioned mice following stroke. In contrast, IRF3 activity was enhanced in LPS-preconditioned mice following stroke. TRIF and MyD88 deficient mice revealed that neuroprotection induced by LPS depends on TLR4 signaling via TRIF, which activates IRF3, but does not depend on MyD88 signaling.
RESULTS
Our results characterize several critical mediators of the TLR4 signaling events associated with neuroprotection. LPS preconditioning redirects TLR4 signaling in response to stroke through suppression of NFκB activity, enhanced IRF3 activity, and increased anti-inflammatory/type I IFN gene expression. Interestingly, this protective phenotype does not require the suppression of pro-inflammatory mediators. Furthermore, our results highlight a critical role for TRIF-IRF3 signaling as the governing mechanism in the neuroprotective response to stroke.
CONCLUSION
[ "Adaptor Proteins, Vesicular Transport", "Animals", "Brain Ischemia", "Chemokines", "Cytokines", "Gene Expression Profiling", "Humans", "Infarction, Middle Cerebral Artery", "Interferon Regulatory Factor-3", "Ischemic Preconditioning", "Lipopolysaccharides", "Male", "Mice", "Mice, Inbred C57BL", "Mice, Knockout", "Microarray Analysis", "NF-kappa B", "Signal Transduction", "Stroke", "Toll-Like Receptor 4" ]
3217906
Background
Stroke is one of the leading causes of death and the leading cause of morbidity in the United States [1]. The inflammatory response to stroke substantially exacerbates ischemic damage. The acute activation of the NFκB transcription factor has been linked to the inflammatory response to stroke [2] and suppression of NFκB activity following stroke has been found to reduce damage [3]. NFκB activation can lead to the dramatic upregulation of inflammatory molecules and cytokines including TNFα, IL6, IL1β, and COX2 [2]. The source of these inflammatory molecules in the acute response to stroke appears to stem from the cells of the central nervous system (CNS), including neurons and glial cells [2]. The cells in the CNS play a particularly dominant role early in the response to ischemia because infiltrating leukocytes do not appear in substantial numbers in the brain until 24 hr following injury [4]. Stroke also induces an acute inflammatory response in the circulating blood. Inflammatory cytokine and chemokine levels, including IL6, IL1β, MCP-1 and TNFα are elevated in the circulation following stroke [5]. This suggests there is an intimate relationship between responses in the brain and blood following stroke--responses that result in increased inflammation. Toll-like receptors (TLRs), traditionally considered innate immune receptors, signal through the adaptor proteins MyD88 and TRIF to activate NFκB and interferon regulatory factors (IRFs). It has been shown recently that TLRs become activated in response to endogenous ligands, known as damage associated molecular patterns (DAMPs), released during injury. Interestingly, animals deficient in TLR2 or TLR4 have significantly reduced infarct sizes in several models of stroke [6-11]. This suggests that TLR2 and TLR4 activation in response to ischemic injury exacerbates damage. In addition, a recent investigation in humans showed that the inflammatory responses to stroke in the blood were linked to increased TLR2 and TLR4 expression on hematopoetic cells and associated with worse outcome in stroke [12]. The detrimental effect of TLR signaling is associated with the pathways that lead to NFκB activation and pro-inflammatory responses. In contrast, TLR signaling pathways that activate IRFs can induce anti-inflammatory mediators and type I IFNs that have been associated with neuroprotection [13,14]. Thus, in TLR signaling there is a fine balance between pathways leading to injury or protection. TLR ligands have been a major source of interest as preconditioning agents for prophylactic therapy against ischemic injury. Such therapies would target a population of patients that are at risk of ischemia in the setting of surgery [15-18]. Preconditioning with low doses of ligands for TLR2, TLR4, and TLR9 all successfully reduce infarct size in experimental models of stroke [19-21], including a recent study showing that a TLR9 ligand is neuroprotective in a nonhuman primate model of stroke [22]. Emerging evidence suggests that TLR-induced neuroprotection occurs by reprogramming the genomic response to the DAMPs, which are produced in response to ischemic injury. In this reprogrammed state, the resultant pathway activation of TLR4 signaling preferentially leads to IRF-mediated gene expression [13,14]. However, whether TLR preconditioning affects NFκB activity and pro-inflammatory signaling is unknown. As yet, a complete analysis of the characteristic TLR signaling responses to stroke following preconditioning has not been reported. 
The objective of this study is to utilize LPS preconditioning followed by transient middle cerebral artery occlusion (MCAO) to elucidate the reprogrammed TLR response to stroke and to determine the major pathways involved in producing the neuroprotective phenotype. Here we show that preconditioning against ischemia using LPS leads to suppressed NFκB activity--although pro-inflammatory gene expression does not appear to be attenuated. We also demonstrate that LPS-preconditioned mice have enhanced IRF3 activity and anti-inflammatory/type I IFN gene expression in the ischemic brain. This expression pattern was recapitulated in the blood where plasma levels of pro-inflammatory cytokine proteins were comparable in LPS-preconditioned and control mice while IRF-associated proteins were enhanced in LPS preconditioned mice. To our knowledge, we provide the first evidence that protection due to LPS preconditioning stems from TRIF signaling, the cascade that is associated with IRF3 activation, and is independent of MyD88 signaling. These molecular features suggest that, following stroke, signaling is directed away from NFκB activity and towards a dominant TRIF-IRF3 response. Understanding the endogenous signaling events that promote protection against ischemic injury is integral to the identification and development of novel stroke therapeutics. In particular, the evidence presented here further highlights a key role for IRF3 activity in the protective response to stroke.
Methods
Animals C57Bl/6J mice (male, 8-12 weeks) were purchased from Jackson Laboratories (West Sacramento, CA). C57Bl/6J-Ticam1LPS2/J (TRIF-/-) mice were also obtained from Jackson Laboratories. MyD88-/- mice were a kind gift of Dr. Shizuo Akira (Osaka University, Osaka Japan) and were bred in our facility. All mice were housed in an American Association for Laboratory Animal Care-approved facility. Procedures were conducted according to Oregon Health & Science University, Institutional Animal Care and Use Committee, and National Institutes of Health guidelines.
LPS treatment Mice were preconditioned with LPS (0.2 or 0.8 mg/kg, Escherichia coli serotype 0111:B4; Sigma) or saline by one subcutaneous injection, unless otherwise indicated, 72 hr prior to MCAO. Each new lot of LPS was titrated for the optimal dose that confers neuroprotection. No differences were observed in the genomic responses to LPS for each dose used and route of administration (subcutaneous or intraperitoneal, data not shown).
Middle Cerebral Artery Occlusion (MCAO) Mice were anesthetized with isoflurane (1.5-2%) and subjected to MCAO using the monofilament suture method described previously [23]. Briefly, a silicone-coated 7-0 monofilament nylon surgical suture was threaded through the external carotid artery to the internal carotid artery to block the middle cerebral artery, and maintained intraluminally for 40 to 60 min. The suture was then removed to restore blood flow. The duration of occlusion was optimized based on the specific surgeon who performed the MCAO to yield comparable infarct sizes in the saline treated control animals (~35-40%). The selected duration of MCAO was held constant within experiments. Cerebral blood flow (CBF) was monitored throughout surgery by laser doppler flowmetry. Any mouse that did not maintain a CBF during occlusion of <25% of baseline was excluded from the study. The reduction of CBF was comparable in LPS and saline preconditioned mice in response to MCAO. Body temperature was monitored and maintained at 37°C with a thermostat-controlled heating pad. Infarct measurements were made using triphenyltetrazolium chloride (TTC) staining of 1 mm coronal brain sections.
Tissue collection Under deep isoflurane anesthesia, approximately 0.5-1.0 ml of blood was collected via cardiac puncture in a heparinized syringe. Subsequently, the mice were perfused with heparinized (2 U/ml) saline followed by rapid removal of the brain. The olfactory bulbs were removed and the first 4 mm of tissue was collected beginning at the rostral end. The striatum was dissected and removed and the remaining cortex was utilized for RNA isolation or protein extraction. The collected blood was centrifuged at 5000 × g for 20 min to obtain plasma that was stored at -80°C.
Genomic profiling of TLR associated mediators For the genes displayed in Figure 1, the transcript expression levels were determined as previously described from our microarray experiments examining the brain cortical response to stroke and 3 different preconditioning stimuli [14]. In brief, total RNA was isolated from the ipsilateral cortex (n = 4 mice/treatment/timepoint), using the Qiagen Rneasy Lipid Mini Kit (Qiagen). Microarray assays were performed in the Affymetrix Microarray Core of the Oregon Health & Science University Gene Microarray Shared Resource. RNA samples were labeled using the NuGEN Ovation Biotin RNA Amplification and Labeling System_V1. Hybridization was performed as described in the Affymetrix technical manual (Affymetrix) with modification as recommended for the Ovation labeling protocol (NuGEN Technologies). Labeled cRNA target was quality-checked based on yield and size distribution. Quality-tested samples were hybridized to the MOE430 2.0 array. The array image was processed with Affymetrix GeneChip Operating Software (GCOS). Affymetrix CEL files were then uploaded into GeneSifter (http://www.genesifter.net) and normalized using RMA. Microarray analysis of anti-inflammatory/type I IFN and pro-inflammatory gene expression. Microarray analysis revealed enhanced anti-inflammatory/type I IFN and comparable pro-inflammatory gene expression profiles in the brain of LPS-preconditioned (0.2 mg/kg, intraperitoneal injection) mice following 45 min MCAO. Heatmap representing level of gene expression immediately prior to (0 hr) MCAO and 3 and 24 hr post MCAO; n = 4/treatment/timepoint. Lt. Select anti-inflammatory/type I IFN genes. Rt. Select pro-inflammatory genes. Color scale from green to red represents relative decreased or increased gene expression levels, respectively.
RNA isolation, Reverse Transcription, and qtPCR RNA was isolated from cortical tissue 72 hr post injection or from ipsilateral cortical tissue at 3 or 24 hr following MCAO (n ≥ 4 mice/treatment/timepoint) using a Lipid Mini RNA isolation kit (Qiagen). Reverse transcription was performed on 2 μg of RNA using Omniscript (Qiagen). Quantitative PCR was performed using Taqman Gene Expression Assays (Applied Biosystems) for each gene of interest on an ABI Prism 7700. Results were normalized to β-Actin expression and analyzed relative to their saline preconditioned counterparts. The relative quantification of the gene of interest was determined using the comparative CT method (2^-ΔΔCt).
Western Blot Protein extraction was performed as described previously [24] with some modifications. Briefly, tissue samples (n ≥ 4 mice/treatment/timepoint) were dissected from the ipsilateral cortex and lysed in a buffer containing a protease inhibitor cocktail (Roche). Protein concentrations were determined using the BCA method (Pierce-Endogen). Protein samples (50 μg) were denatured in a gel-loading buffer (Bio-Rad Laboratories) at 100°C for 5 min and then loaded onto 12% Bis-Tris polyacrylamide gels (Bio-Rad Laboratories). Following electrophoresis, proteins were transferred to polyvinylodene difluoride membranes (Bio-Rad Laboratories) and incubated with primary antibodies for Ship-1 (Santa Cruz, sc8425), Tollip (AbCam, Ab37155), p105 (Santa Cruz, sc7178), or β-Actin (Santa Cruz, sc1616R) at 4°C overnight. Membranes were then incubated with horseradish peroxidase conjugated anti-rabbit, anti-goat, or anti-mouse antibody (Santa Cruz Biotechnology) and detected by chemiluminescence (NEN Life Science Products) and exposed to Kodak film (Biomax). Images were captured using an Epson scanner and the densitometry of the gel bands, including β-Actin loading control, was analyzed using ImageJ (NIH).
Electrophoretic Mobility Shift Assay (EMSA) Nuclear protein extracts (n = 4 mice/treatment/timepoint) were prepared from tissue dissected from the ipsilateral cortex. Homogenized tissue was incubated in Buffer A (10 mM Hepes-KOH pH7.9, 60 mM KCl, 1 mM EDTA, 1 mM DTT, 1 mM PMSF) for 5 min on ice and centrifuged at 3000 rpm for 5 min at 4°C. The pellets were washed in Buffer B (10 mM Hepes-KOH pH7.9, 60 mM KCl, 1 mM EDTA, 0.5% NP-40, 1 mM DTT, 1 mM PMSF), resuspended in Buffer C (250 mM Tris pH7.8, 60 mM KCl, 1 mM DTT, 1 mM PMSF), and freeze-thawed 3 times in liquid nitrogen. All buffers contained a protease inhibitor cocktail (Roche). After centrifuging at 10,000 rpm for 10 min at 4°C, the supernatant was collected and stored as nuclear extract at -80°C. Nuclear protein concentrations were determined using the BCA method (Pierce-Endogen). EMSAs were performed using the Promega Gel Shift Assay System according to the manufacturer's instructions. Briefly, 15 μg of nuclear protein was incubated with 32P-labeled NFκB consensus oligonucleotide (Promega), either with or without unlabeled competitor oligonucleotide, unlabeled noncompetitor oligonucleotide, or anti-p65 antibody (Santa Cruz). Samples were electrophoresed on a 4% acrylamide gel, dried and exposed to phosphorimager overnight. The densitometry of the gel bands was analyzed using scanning integrated optical density software (ImageJ).
IRF3 Activity Assay Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.
Cytokine Analysis Cytokine/chemokine analysis for IL1β, IL1α, MIP-1α, MCP-1, RANTES, and IL10 was performed on plasma samples (n ≥ 3 mice/treatment/timepoint) using a multiplex ELISA (Quansys). An IFNβ ELISA (PBL Interferon Source) was used to measure plasma levels of IFNβ.
Statistical Analysis Data is represented as mean ± SEM. The n for each experiment is greater than or equal to 3, as specified in each figure. Statistical analysis was performed using GraphPad Prism5 software. Two-way ANOVA with Bonferroni Post Hoc test and Student's t-test were utilized as specified. Significance was determined as p < 0.05.
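The qtPCR subsection above normalizes each gene of interest to β-Actin and expresses it relative to the saline-preconditioned group using the comparative Ct (2^-ΔΔCt) method. A minimal worked sketch of that calculation follows; the Ct values are hypothetical and are not data from the study.

```python
# Hypothetical worked example of the comparative Ct (2^-ΔΔCt) method described
# in the qtPCR subsection above. All Ct values below are made up for illustration.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression (treated vs. control) by the comparative Ct method."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene (e.g. beta-actin)
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control    # compare treated group to control group
    return 2 ** (-delta_delta_ct)

# e.g. a target gene in an LPS-preconditioned sample vs. a saline-preconditioned sample,
# each normalized to beta-actin (hypothetical Ct values):
print(fold_change(ct_target_treated=24.0, ct_ref_treated=18.0,
                  ct_target_control=26.0, ct_ref_control=18.5))  # ~2.8-fold upregulation
```

Group-level comparisons of such fold changes would then be made with Student's t-test or two-way ANOVA with Bonferroni correction, as stated in the Statistical Analysis subsection above.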
null
null
Conclusions
KBV performed experiments, collected data, conceived of the idea for the paper, and wrote the manuscript. SLS worked on the microarray, provided guidance in the production of data, and edited the paper. BJM performed experiments and contributed to the writing of the Methods section. RWK performed experiments. NL performed the MCAO surgeries. MSP provided critical guidance and worked on the manuscript. All authors approved of the final manuscript.
[ "Background", "Animals", "LPS treatment", "Middle Cerebral Artery Occlusion (MCAO)", "Tissue collection", "Genomic profiling of TLR associated mediators", "RNA isolation, Reverse Transcription, and qtPCR", "Western Blot", "Electrophoretic Mobility Shift Assay (EMSA)", "IRF3 Activity Assay", "Cytokine Analysis", "Statistical Analysis", "Results", "LPS preconditioning does not affect inflammatory gene expression in the brain following stroke", "LPS preconditioning upregulates anti-inflammatory/type I IFN gene expression in the brain following MCAO", "NFκB activity is suppressed in the brain of LPS-preconditioned animals 24 hr post MCAO", "IRF3 activity in the brain is enhanced following MCAO in LPS-preconditioned mice", "Blood cytokine/chemokine levels parallel the expression in the brain", "TRIF dependent LPS preconditioning induced neuroprotection", "Discussion", "Conclusions" ]
[ "Stroke is one of the leading causes of death and the leading cause of morbidity in the United States [1]. The inflammatory response to stroke substantially exacerbates ischemic damage. The acute activation of the NFκB transcription factor has been linked to the inflammatory response to stroke [2] and suppression of NFκB activity following stroke has been found to reduce damage [3]. NFκB activation can lead to the dramatic upregulation of inflammatory molecules and cytokines including TNFα, IL6, IL1β, and COX2 [2]. The source of these inflammatory molecules in the acute response to stroke appears to stem from the cells of the central nervous system (CNS), including neurons and glial cells [2]. The cells in the CNS play a particularly dominant role early in the response to ischemia because infiltrating leukocytes do not appear in substantial numbers in the brain until 24 hr following injury [4]. Stroke also induces an acute inflammatory response in the circulating blood. Inflammatory cytokine and chemokine levels, including IL6, IL1β, MCP-1 and TNFα are elevated in the circulation following stroke [5]. This suggests there is an intimate relationship between responses in the brain and blood following stroke--responses that result in increased inflammation.\nToll-like receptors (TLRs), traditionally considered innate immune receptors, signal through the adaptor proteins MyD88 and TRIF to activate NFκB and interferon regulatory factors (IRFs). It has been shown recently that TLRs become activated in response to endogenous ligands, known as damage associated molecular patterns (DAMPs), released during injury. Interestingly, animals deficient in TLR2 or TLR4 have significantly reduced infarct sizes in several models of stroke [6-11]. This suggests that TLR2 and TLR4 activation in response to ischemic injury exacerbates damage. In addition, a recent investigation in humans showed that the inflammatory responses to stroke in the blood were linked to increased TLR2 and TLR4 expression on hematopoetic cells and associated with worse outcome in stroke [12]. The detrimental effect of TLR signaling is associated with the pathways that lead to NFκB activation and pro-inflammatory responses. In contrast, TLR signaling pathways that activate IRFs can induce anti-inflammatory mediators and type I IFNs that have been associated with neuroprotection [13,14]. Thus, in TLR signaling there is a fine balance between pathways leading to injury or protection.\nTLR ligands have been a major source of interest as preconditioning agents for prophylactic therapy against ischemic injury. Such therapies would target a population of patients that are at risk of ischemia in the setting of surgery [15-18]. Preconditioning with low doses of ligands for TLR2, TLR4, and TLR9 all successfully reduce infarct size in experimental models of stroke [19-21], including a recent study showing that a TLR9 ligand is neuroprotective in a nonhuman primate model of stroke [22]. Emerging evidence suggests that TLR-induced neuroprotection occurs by reprogramming the genomic response to the DAMPs, which are produced in response to ischemic injury. In this reprogrammed state, the resultant pathway activation of TLR4 signaling preferentially leads to IRF-mediated gene expression [13,14]. However, whether TLR preconditioning affects NFκB activity and pro-inflammatory signaling is unknown. As yet, a complete analysis of the characteristic TLR signaling responses to stroke following preconditioning has not been reported. 
The objective of this study is to utilize LPS preconditioning followed by transient middle cerebral artery occlusion (MCAO) to elucidate the reprogrammed TLR response to stroke and to determine the major pathways involved in producing the neuroprotective phenotype.\nHere we show that preconditioning against ischemia using LPS leads to suppressed NFκB activity--although pro-inflammatory gene expression does not appear to be attenuated. We also demonstrate that LPS-preconditioned mice have enhanced IRF3 activity and anti-inflammatory/type I IFN gene expression in the ischemic brain. This expression pattern was recapitulated in the blood where plasma levels of pro-inflammatory cytokine proteins were comparable in LPS-preconditioned and control mice while IRF-associated proteins were enhanced in LPS preconditioned mice. To our knowledge, we provide the first evidence that protection due to LPS preconditioning stems from TRIF signaling, the cascade that is associated with IRF3 activation, and is independent of MyD88 signaling. These molecular features suggest that, following stroke, signaling is directed away from NFκB activity and towards a dominant TRIF-IRF3 response. Understanding the endogenous signaling events that promote protection against ischemic injury is integral to the identification and development of novel stroke therapeutics. In particular, the evidence presented here further highlights a key role for IRF3 activity in the protective response to stroke.", "C57Bl/6J mice (male, 8-12 weeks) were purchased from Jackson Laboratories (West Sacramento, CA). C57Bl/6J-Ticam1LPS2/J (TRIF-/-) mice were also obtained from Jackson Laboratories. MyD88-/- mice were a kind gift of Dr. Shizuo Akira (Osaka University, Osaka Japan) and were bred in our facility. All mice were housed in an American Association for Laboratory Animal Care-approved facility. Procedures were conducted according to Oregon Health & Science University, Institutional Animal Care and Use Committee, and National Institutes of Health guidelines.", "Mice were preconditioned with LPS (0.2 or 0.8 mg/kg, Escherichia coli serotype 0111:B4; Sigma) or saline by one subcutaneous injection, unless otherwise indicated, 72 hr prior to MCAO. Each new lot of LPS was titrated for the optimal dose that confers neuroprotection. No differences were observed in the genomic responses to LPS for each dose used and route of administration (subcutaneous or intraperitoneal, data not shown).", "Mice were anesthetized with isoflurane (1.5-2%) and subjected to MCAO using the monofilament suture method described previously [23]. Briefly, a silicone-coated 7-0 monofilament nylon surgical suture was threaded through the external carotid artery to the internal carotid artery to block the middle cerebral artery, and maintained intraluminally for 40 to 60 min. The suture was then removed to restore blood flow. The duration of occlusion was optimized based on the specific surgeon who performed the MCAO to yield comparable infarct sizes in the saline treated control animals (~35-40%). The selected duration of MCAO was held constant within experiments. Cerebral blood flow (CBF) was monitored throughout surgery by laser doppler flowmetry. Any mouse that did not maintain a CBF during occlusion of <25% of baseline was excluded from the study. The reduction of CBF was comparable in LPS and saline preconditioned mice in response to MCAO. Body temperature was monitored and maintained at 37°C with a thermostat-controlled heating pad. 
Infarct measurements were made using triphenyltetrazolium chloride (TTC) staining of 1 mm coronal brain sections.", "Under deep isoflurane anesthesia, approximately ~0.5-1.0 ml of blood was collected via cardiac puncture in a heparinized syringe. Subsequently, the mice were perfused with heparinized (2 U/ml) saline followed by rapid removal of the brain. The olfactory bulbs were removed and the first 4 mm of tissue was collected beginning at the rostral end. The striatum was dissected and removed and the remaining cortex was utilized for RNA isolation or protein extraction. The collected blood was centrifuged at 5000 × g for 20 min to obtain plasma that was stored at -80°C.", "For the genes displayed in Figure 1, the transcript expression levels were determined as previously described from our microarray experiments examining the brain cortical response to stroke and 3 different preconditioning stimuli [14]. In brief, total RNA was isolated from the ipsilateral cortex (n = 4 mice/treatment/timepoint), using the Qiagen Rneasy Lipid Mini Kit (Qiagen). Microarray assays were performed in the Affymetrix Microarray Core of the Oregon Health & Science University Gene Microarray Shared Resource. RNA samples were labeled using the NuGEN Ovation Biotin RNA Amplification and Labeling System_V1. Hybridization was performed as described in the Affymetrix technical manual (Affymetrix) with modification as recommended for the Ovation labeling protocol (NuGEN Technologies). Labeled cRNA target was quality-checked based on yield and size distribution. Quality-tested samples were hybridized to the MOE430 2.0 array. The array image was processed with Affymetrix GeneChip Operating Software (GCOS). Affymetrix CEL files were then uploaded into GeneSifter (http://www.genesifter.net) and normalized using RMA.\nMicroarray analysis of anti-inflammatory/type I IFN and pro-inflammatory gene expression. Microarray analysis revealed enhanced anti-inflammatory/type I IFN and comparable pro-inflammatory gene expression profiles in the brain of LPS-preconditioned (0.2 mg/kg, intraperitoneal injection) mice following 45 min MCAO. Heatmap representing level of gene expression immediately prior to (0 hr) MCAO and 3 and 24 hr post MCAO; n = 4/treatment/timepoint. Lt. Select anti-inflammatory/type I IFN genes. Rt. Select pro-inflammatory genes. Color scale from green to red represents relative decreased or increased gene expression levels, respectively.", "RNA was isolated from cortical tissue 72 hr post injection or from ipsilateral cortical tissue at 3 or 24 hr following MCAO (n ≥ 4 mice/treatment/timepoint) using a Lipid Mini RNA isolation kit (Qiagen). Reverse transcription was performed on 2 μg of RNA using Omniscript (Qiagen). Quantitative PCR was performed using Taqman Gene Expression Assays (Applied Biosystems) for each gene of interest on an ABI Prism 7700. Results were normalized to β-Actin expression and analyzed relative to their saline preconditioned counterparts. The relative quantification of the gene of interest was determined using the comparative CT method (2-DDCt).", "Protein extraction was performed as described previously [24] with some modifications. Briefly, tissue samples (n ≥ 4 mice/treatment/timepoint) were dissected from the ipsilateral cortex and lysed in a buffer containing a protease inhibitor cocktail (Roche). Protein concentrations were determined using the BCA method (Pierce-Endogen). 
Protein samples (50 μg) were denatured in a gel-loading buffer (Bio-Rad Laboratories) at 100°C for 5 min and then loaded onto 12% Bis-Tris polyacrylamide gels (Bio-Rad Laboratories). Following electrophoresis, proteins were transferred to polyvinylodene difluoride membranes (Bio-Rad Laboratories) and incubated with primary antibodies for Ship-1 (Santa Cruz, sc8425), Tollip (AbCam, Ab37155), p105 (Santa Cruz, sc7178), or β-Actin (Santa Cruz, sc1616R) at 4°C overnight. Membranes were then incubated with horseradish peroxidase conjugated anti-rabbit, anti-goat, or anti-mouse antibody (Santa Cruz Biotechnology) and detected by chemiluminescence (NEN Life Science Products) and exposed to Kodak film (Biomax). Images were captured using an Epson scanner and the densitometry of the gel bands, including β-Actin loading control, was analyzed using ImageJ (NIH).", "Nuclear protein extracts (n = 4 mice/treatment/timepoint) were prepared from tissue dissected from the ipsilateral cortex. Homogenized tissue was incubated in Buffer A (10 mM Hepes-KOH pH7.9, 60 mM KCl, 1 mM EDTA, 1 mM DTT, 1 mM PMSF) for 5 min on ice and centrifuged at 3000 rpm for 5 min at 4°C. The pellets were washed in Buffer B (10 mM Hepes-KOH pH7.9, 60 mM KCl, 1 mM EDTA, 0.5% NP-40, 1 mM DTT, 1 mM PMSF), resuspended in Buffer C (250 mM Tris pH7.8, 60 mM KCl, 1 mM DTT, 1 mM PMSF), and freeze-thawed 3 times in liquid nitrogen. All buffers contained a protease inhibitor cocktail (Roche). After centrifuging at 10,000 rpm for 10 min at 4°C, the supernatant was collected and stored as nuclear extract at -80°C. Nuclear protein concentrations were determined using the BCA method (Pierce-Endogen). EMSAs were performed using the Promega Gel Shift Assay System according to the manufacturer's instructions. Briefly, 15 μg of nuclear protein was incubated with 32P-labeled NFκB consensus oligonucleotide (Promega), either with or without unlabeled competitor oligonucleotide, unlabeled noncompetitor oligonucleotide, or anti-p65 antibody (Santa Cruz). Samples were electrophoresed on a 4% acrylamide gel, dried and exposed to phosphorimager overnight. The densitometry of the gel bands was analyzed using scanning integrated optical density software (ImageJ).\n IRF3 Activity Assay Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.\nNuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.", "Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). 
IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.", "Cytokine/chemokine analysis for IL1β, IL1α, MIP-1α, MCP-1, RANTES, and IL10 was performed on plasma samples (n ≥ 3 mice/treatment/timepoint) using a multiplex ELISA (Quansys). An IFNβ ELISA (PBL Interferon Source) was used to measure plasma levels of IFNβ.", "Data is represented as mean ± SEM. The n for each experiment is greater than or equal to 3, as specified in each figure. Statistical analysis was performed using GraphPad Prism5 software. Two-way ANOVA with Bonferroni Post Hoc test and Student's t-test were utilized as specified. Significance was determined as p < 0.05.", " LPS preconditioning does not affect inflammatory gene expression in the brain following stroke We used gene microarray analysis to elucidate the pattern of inflammatory or anti-inflammatory/type I IFN gene expression in the brain following stroke. In the setting of stroke, LPS preconditioned animals exhibited regulation of a number of genes typically found downstream of TLR signaling. The inflammatory profile reveals that the gene regulation is similar at each timepoint following stroke in LPS or saline preconditioned animals (Figure 1, Rt.). There is no evidence of inflammatory gene expression present immediately prior to stroke (Figure 1, Rt. 0 hr). At 3 hr post MCAO, several inflammatory genes are upregulated including IL6, IL1β, Ptgs2/COX2, and CCL2/MCP-1 (Figure 1, Rt.) and this upregulation is sustained at the 24 hr timepoint following MCAO (Figure 1, Rt.). TNFα, which is commonly shown to be upregulated following MCAO [25,26], only shows marginal levels of upregulation in LPS or saline preconditioned mice (Figure 1, Rt.). To confirm the microarray results, a subset of selected inflammatory genes including IL6, IL1β, COX2, and TNFα, were analyzed using qtPCR. Each of these genes were upregulated following MCAO in LPS and saline preconditioned mice, but there were no significant differences based on treatment at 3 hr (data not shown) and 24 hr (Figure 2) following MCAO.\nEnhanced anti-inflammatory/type I IFN gene expression but comparable pro-inflammatory gene expression in LPS-preconditioned mice post MCAO. Gene regulation 24 hr post MCAO measured by qtPCR reveals that anti-inflammatory/type I IFN-associated genes TGFβ, RANTES, and IFIT1 are significantly upregulated in LPS preconditioned mice compared to saline. Pro-inflammatory genes IL6, IL1β, COX2, and TNFα show similar regulation in LPS and saline preconditioned mice. These results confirm the gene microarray data. Samples from mice receiving a 45 (LPS: 0.2 mg/kg) or 60 min (LPS: 0.8 mg/kg) MCAO were combined due to comparable gene regulation (see methods). ND = not detected. Student's t-test, LPS vs. saline 24 hr post MCAO, **p < 0.01, n ≥ 4 per treatment.\nWe used gene microarray analysis to elucidate the pattern of inflammatory or anti-inflammatory/type I IFN gene expression in the brain following stroke. In the setting of stroke, LPS preconditioned animals exhibited regulation of a number of genes typically found downstream of TLR signaling. The inflammatory profile reveals that the gene regulation is similar at each timepoint following stroke in LPS or saline preconditioned animals (Figure 1, Rt.). There is no evidence of inflammatory gene expression present immediately prior to stroke (Figure 1, Rt. 0 hr). 
At 3 hr post MCAO, several inflammatory genes are upregulated including IL6, IL1β, Ptgs2/COX2, and CCL2/MCP-1 (Figure 1, Rt.) and this upregulation is sustained at the 24 hr timepoint following MCAO (Figure 1, Rt.). TNFα, which is commonly shown to be upregulated following MCAO [25,26], only shows marginal levels of upregulation in LPS or saline preconditioned mice (Figure 1, Rt.). To confirm the microarray results, a subset of selected inflammatory genes including IL6, IL1β, COX2, and TNFα, were analyzed using qtPCR. Each of these genes were upregulated following MCAO in LPS and saline preconditioned mice, but there were no significant differences based on treatment at 3 hr (data not shown) and 24 hr (Figure 2) following MCAO.\nEnhanced anti-inflammatory/type I IFN gene expression but comparable pro-inflammatory gene expression in LPS-preconditioned mice post MCAO. Gene regulation 24 hr post MCAO measured by qtPCR reveals that anti-inflammatory/type I IFN-associated genes TGFβ, RANTES, and IFIT1 are significantly upregulated in LPS preconditioned mice compared to saline. Pro-inflammatory genes IL6, IL1β, COX2, and TNFα show similar regulation in LPS and saline preconditioned mice. These results confirm the gene microarray data. Samples from mice receiving a 45 (LPS: 0.2 mg/kg) or 60 min (LPS: 0.8 mg/kg) MCAO were combined due to comparable gene regulation (see methods). ND = not detected. Student's t-test, LPS vs. saline 24 hr post MCAO, **p < 0.01, n ≥ 4 per treatment.\n LPS preconditioning upregulates anti-inflammatory/type I IFN gene expression in the brain following MCAO Although pro-inflammatory gene expression was not differentially modulated in preconditioned animals, microarray results revealed that the majority of the anti-inflammatory/type I IFN genes, such as TGFβ, IL1 receptor antagonist (IL1rn), RANTES, and IRF7, were upregulated following stroke in the brains of LPS versus saline preconditioned mice (Figure 1, Lt.). IL10 gene expression was not detected at any timepoint (Figure 1, Lt.). TGFβ, IL10, RANTES, and IFIT1 were selected for qtPCR analysis. TGFβ, RANTES, and IFIT1 were significantly upregulated in the LPS-preconditioned brain compared to saline 24 hr following stroke (Figure 2). RANTES was also significantly upregulated at 3 hr following stroke in LPS-preconditioned mice compared to saline (data not shown). IL10 expression remained undetectable by qtPCR analysis (Figure 2), suggesting that IL10 mRNA is not present at these timepoints in the brain following stroke. These qtPCR results confirm the gene expression profile observed on the microarray. Taken together, these data indicate an enhanced anti-inflammatory/type I IFN gene expression profile in the brain of LPS-preconditioned animals following MCAO while the inflammatory gene expression is unaffected.\nAlthough pro-inflammatory gene expression was not differentially modulated in preconditioned animals, microarray results revealed that the majority of the anti-inflammatory/type I IFN genes, such as TGFβ, IL1 receptor antagonist (IL1rn), RANTES, and IRF7, were upregulated following stroke in the brains of LPS versus saline preconditioned mice (Figure 1, Lt.). IL10 gene expression was not detected at any timepoint (Figure 1, Lt.). TGFβ, IL10, RANTES, and IFIT1 were selected for qtPCR analysis. TGFβ, RANTES, and IFIT1 were significantly upregulated in the LPS-preconditioned brain compared to saline 24 hr following stroke (Figure 2). 
RANTES was also significantly upregulated at 3 hr following stroke in LPS-preconditioned mice compared to saline (data not shown). IL10 expression remained undetectable by qtPCR analysis (Figure 2), suggesting that IL10 mRNA is not present at these timepoints in the brain following stroke. These qtPCR results confirm the gene expression profile observed on the microarray. Taken together, these data indicate an enhanced anti-inflammatory/type I IFN gene expression profile in the brain of LPS-preconditioned animals following MCAO while the inflammatory gene expression is unaffected.\n NFκB activity is suppressed in the brain of LPS-preconditioned animals 24 hr post MCAO NFκB activity is associated with damage and inflammation in the brain that occurs in response to stroke. We used EMSAs to evaluate the activity of the NFκB subunit p65 in the brain following stroke. The results indicated that LPS and saline preconditioned mice have comparable NFκB activity at 3 hr post MCAO (Figure 3A). However, at 24 hr post MCAO, LPS-preconditioned animals have significantly suppressed NFκB activity compared to saline preconditioned mice (Figure 3A).\nNFκB is suppressed 24 hr post MCAO in LPS-preconditioned mice. (A) Nuclear protein obtained from ipsilateral cortices was used to measure p65 activity by EMSA analysis. EMSA gel of pooled samples (n = 4) following 45 min MCAO for saline and LPS preconditioned (0.8 mg/kg) mice (Lt.). Quantification of band intensity of individual mice following MCAO (Rt.). NFκB is significantly decreased in LPS-preconditioned mice 24 hr post MCAO compared to saline. Supershift assay confirmed specificity for p65 oligos (data not shown). (B) Ship1 and Tollip mRNA are significantly upregulated 24 hr post 60 minute MCAO in LPS-preconditioned (0.8 mg/kg) mice compared to saline, n ≥ 4 per treatment/timepoint. (C) Western blot for Ship1 and Tollip and relative band quantification showing significant upregulation of Ship1 and Tollip protein 24 hr post 45 min MCAO in LPS-preconditioned mice (0.8 mg/kg), n ≥ 3 per treatment/timepoint. (D) Western blot and relative band quantification for p105 at 24 hr post 45 minute MCAO showing significant upregulation in LPS-preconditioned (0.8 mg/kg) mice, n ≥ 3 per treatment/timepoint. (A) Two-Way ANOVA, Bonferroni Post Hoc, *p < 0.05. (B-D) Student's t-test, LPS vs. saline, **p < 0.01.\nShip1 and Tollip are cytosolic molecules that inhibit TLR signaling, which leads to the suppression of NFκB activity. We found that Ship1 and Tollip mRNA are upregulated in the brain 72 hr post injection versus saline controls (2.06 ± 0.27 and 2.31 ± 0.35, respectively) but not at 3 hr post stroke (1.09 ± 0.10 and 1.05 ± 0.09, respectively). However, by 24 hr post MCAO, Ship1 and Tollip mRNA are significantly enhanced in the brain of LPS-preconditioned mice compared to saline controls (2.62 ± 0.84 and 4.01 ± 1.06, respectively, Figure 3B). Ship1 protein is not upregulated at 72 hr post injection (Fold change vs. saline: 1.01 ± 0.32), but becomes significantly enhanced in LPS-preconditioned mice at 3 hr (Fold change vs saline: 1.83 ± 0.13) and at 24 hr (Fold change vs. saline: 8.81 ± 1.54, Figure 3C) post MCAO. Tollip protein is not affected by LPS preconditioning at 72 hr post injection or 3 hr post MCAO (Fold change vs. saline: 1.42 ± 0.10 and 0.83 ± 0.10, respectively), but it is significantly enhanced in LPS-preconditioned mice compared to saline controls at 24 hr post MCAO (Fold change vs. saline: 2.42 ± 0.20, Figure 3C). 
Additionally, the p50 precursor protein p105, which inhibits NFκB activity by acting like an IκB molecule by sequestering NFκB in the cytosol [27,28], was significantly upregulated 24 hr post stroke in LPS-preconditioned mice compared to saline (Figure 3D). Thus, despite the upregulation of inflammatory genes, the activity of NFκB is suppressed in the late-phase of the neuroprotective response of LPS-preconditioned mice.\nNFκB activity is associated with damage and inflammation in the brain that occurs in response to stroke. We used EMSAs to evaluate the activity of the NFκB subunit p65 in the brain following stroke. The results indicated that LPS and saline preconditioned mice have comparable NFκB activity at 3 hr post MCAO (Figure 3A). However, at 24 hr post MCAO, LPS-preconditioned animals have significantly suppressed NFκB activity compared to saline preconditioned mice (Figure 3A).\nNFκB is suppressed 24 hr post MCAO in LPS-preconditioned mice. (A) Nuclear protein obtained from ipsilateral cortices was used to measure p65 activity by EMSA analysis. EMSA gel of pooled samples (n = 4) following 45 min MCAO for saline and LPS preconditioned (0.8 mg/kg) mice (Lt.). Quantification of band intensity of individual mice following MCAO (Rt.). NFκB is significantly decreased in LPS-preconditioned mice 24 hr post MCAO compared to saline. Supershift assay confirmed specificity for p65 oligos (data not shown). (B) Ship1 and Tollip mRNA are significantly upregulated 24 hr post 60 minute MCAO in LPS-preconditioned (0.8 mg/kg) mice compared to saline, n ≥ 4 per treatment/timepoint. (C) Western blot for Ship1 and Tollip and relative band quantification showing significant upregulation of Ship1 and Tollip protein 24 hr post 45 min MCAO in LPS-preconditioned mice (0.8 mg/kg), n ≥ 3 per treatment/timepoint. (D) Western blot and relative band quantification for p105 at 24 hr post 45 minute MCAO showing significant upregulation in LPS-preconditioned (0.8 mg/kg) mice, n ≥ 3 per treatment/timepoint. (A) Two-Way ANOVA, Bonferroni Post Hoc, *p < 0.05. (B-D) Student's t-test, LPS vs. saline, **p < 0.01.\nShip1 and Tollip are cytosolic molecules that inhibit TLR signaling, which leads to the suppression of NFκB activity. We found that Ship1 and Tollip mRNA are upregulated in the brain 72 hr post injection versus saline controls (2.06 ± 0.27 and 2.31 ± 0.35, respectively) but not at 3 hr post stroke (1.09 ± 0.10 and 1.05 ± 0.09, respectively). However, by 24 hr post MCAO, Ship1 and Tollip mRNA are significantly enhanced in the brain of LPS-preconditioned mice compared to saline controls (2.62 ± 0.84 and 4.01 ± 1.06, respectively, Figure 3B). Ship1 protein is not upregulated at 72 hr post injection (Fold change vs. saline: 1.01 ± 0.32), but becomes significantly enhanced in LPS-preconditioned mice at 3 hr (Fold change vs saline: 1.83 ± 0.13) and at 24 hr (Fold change vs. saline: 8.81 ± 1.54, Figure 3C) post MCAO. Tollip protein is not affected by LPS preconditioning at 72 hr post injection or 3 hr post MCAO (Fold change vs. saline: 1.42 ± 0.10 and 0.83 ± 0.10, respectively), but it is significantly enhanced in LPS-preconditioned mice compared to saline controls at 24 hr post MCAO (Fold change vs. saline: 2.42 ± 0.20, Figure 3C). Additionally, the p50 precursor protein p105, which inhibits NFκB activity by acting like an IκB molecule by sequestering NFκB in the cytosol [27,28], was significantly upregulated 24 hr post stroke in LPS-preconditioned mice compared to saline (Figure 3D). 
IRF3 activity in the brain is enhanced following MCAO in LPS-preconditioned mice

IRF3 activation downstream of TLR4 is associated with anti-inflammatory/type I IFN responses. Using an IRF3 activity ELISA, we determined that IRF3 activity is comparable immediately prior to stroke (data not shown) and subsequently enhanced in the brains of LPS-preconditioned mice following MCAO (Figure 4). The trend for increased IRF3 activity is present at 3 hr post MCAO and is significantly increased at 24 hr in LPS-preconditioned mice (Figure 4). Saline treated animals showed no evidence of increased IRF3 activity following stroke (Figure 4). This indicates that LPS preconditioning alters the response to ischemic injury by activating IRF3--a finding that is consistent with the enhanced anti-inflammatory/type I IFN gene expression.

Figure 4. IRF3 activity is enhanced following MCAO in LPS-preconditioned mice. Nuclear protein obtained from ipsilateral cortex post 60 min MCAO analyzed using an IRF3 activity ELISA (Active Motif, Inc.) revealed a significant increase in IRF3 activity in LPS-preconditioned (0.8 mg/kg) mice. Two-way ANOVA, Bonferroni Post Hoc, LPS vs. saline, *p < 0.05, n ≥ 4 per treatment.

Blood cytokine/chemokine levels parallel the expression in the brain

Evidence indicates that stroke alters the cytokine profile in the plasma of circulating blood [5,29]. To determine whether LPS preconditioning changes the balance of pro- and anti-inflammatory cytokines and chemokines in the plasma, we examined the levels of seven molecules using ELISAs. The results indicate that the levels of pro-inflammatory cytokines, such as IL6, IL1β, and MCP-1, are increased in both LPS and saline preconditioned mice (Figure 5). The pro-inflammatory cytokines MIP-1α and IL1α were not detected in the serum (data not shown). The anti-inflammatory cytokine IL10 was significantly increased only in the plasma of LPS-preconditioned mice compared to saline preconditioned mice following stroke (Figure 5). RANTES, which is a chemokine associated with IRF3 and IRF7 activity [30], was present in the blood of LPS-preconditioned mice at significantly greater levels than in saline preconditioned mice (Figure 5).
IFNβ was not detectable in the blood of LPS or saline preconditioned animals following stroke (data not shown). Overall, this suggests that the pro-inflammatory and anti-inflammatory/type I IFN-associated response in the blood parallels the response in the brain following stroke.

Figure 5. Blood cytokine/chemokine levels show alterations in gene expression patterns comparable to the brain. Plasma collected from saline or LPS-preconditioned (0.8 mg/kg) mice at the time of or following 60 min MCAO was examined using a multikine ELISA (Quansys). Results indicated that pro-inflammatory cytokines IL1β, IL6 and MCP-1 are similar in saline and LPS-preconditioned mice. In contrast, LPS-preconditioned mice have significantly enhanced levels of the anti-inflammatory/type I IFN-associated cytokine and chemokine IL10 and RANTES compared to saline following MCAO. Two-way ANOVA, LPS vs. saline, *p < 0.05, n ≥ 3 per treatment.
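Concentrations from plate-based cytokine assays such as the multikine ELISA used here are typically interpolated from a standard curve. The curve-fitting step is not described in the paper, so the following four-parameter logistic fit with SciPy is only a generic, assumed illustration with placeholder standards.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic curve commonly used for ELISA standards."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Placeholder standard curve (pg/ml vs. optical density), not study data.
std_conc = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000, 2000])
std_od = np.array([0.08, 0.15, 0.27, 0.48, 0.80, 1.25, 1.70, 2.05])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 300.0, 2.2], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL curve to estimate concentration from a sample OD."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

# Interpolate a hypothetical sample optical density of 0.9.
print(f"{od_to_conc(0.9, *params):.1f} pg/ml (illustrative)")
```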
TRIF-dependent LPS preconditioning-induced neuroprotection

Evidence presented here and previously suggests that signaling following stroke is redirected towards IRF3 [13,14]. TLR4 signaling that activates IRF3 is initiated by the adaptor molecule TRIF, while TLR4 signaling that activates NFκB is initiated by the adaptor molecule MyD88. The individual roles of these adaptor molecules in neuroprotection induced by LPS preconditioning are unknown. To test whether either of these key molecular adaptors was important in mediating the neuroprotective effects of LPS, we exposed MyD88-/- and TRIF-/- mice to LPS preconditioning (n = 4-10 mice/treatment). We found that MyD88-/- mice preconditioned with LPS had significantly reduced infarct sizes in response to MCAO compared to saline controls (Figure 6), indicating that LPS preconditioning is able to induce neuroprotection in mice lacking MyD88. In contrast, TRIF-/- mice preconditioned with LPS or saline had comparable infarct sizes (Figure 6), indicating that LPS preconditioning is not able to induce neuroprotection in mice lacking TRIF. Importantly, the TRIF adaptor is responsible for activation of IRF3; thus, our finding that TRIF is required for LPS preconditioning provides further support for a protective role of IRF3 activity in neuroprotection.

Figure 6. LPS preconditioning requires TLR signaling through TRIF to promote neuroprotection. WT, MyD88-/-, and TRIF-/- mice were preconditioned with LPS (0.8 mg/kg) 3 days prior to 40 min MCAO. MyD88-/- mice were protected by LPS preconditioning, resulting in smaller infarct sizes. TRIF-/- mice did not have reduced infarct sizes, demonstrating that TRIF deficient mice are not protected by LPS preconditioning. Thus, TRIF is required for LPS preconditioning-induced neuroprotection. Student's t-test, LPS vs. saline, *p < 0.05, n = 4-10 per treatment.
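The knockout comparisons above rest on a Student's t-test of infarct size (LPS vs. saline within each genotype). As a minimal illustration of that test, the SciPy sketch below uses placeholder infarct percentages rather than the study's measurements.

```python
from scipy import stats

# Placeholder infarct sizes (% of hemisphere); illustrative values only.
myd88_saline = [38.0, 41.5, 36.2, 40.1]
myd88_lps = [22.4, 19.8, 25.1, 21.0]

# Two-sample Student's t-test, LPS vs. saline within one genotype.
t_stat, p_value = stats.ttest_ind(myd88_lps, myd88_saline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```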
LPS preconditioning does not affect inflammatory gene expression in the brain following stroke

We used gene microarray analysis to elucidate the pattern of inflammatory or anti-inflammatory/type I IFN gene expression in the brain following stroke. In the setting of stroke, LPS preconditioned animals exhibited regulation of a number of genes typically found downstream of TLR signaling. The inflammatory profile reveals that the gene regulation is similar at each timepoint following stroke in LPS or saline preconditioned animals (Figure 1, Rt.). There is no evidence of inflammatory gene expression present immediately prior to stroke (Figure 1, Rt. 0 hr). At 3 hr post MCAO, several inflammatory genes are upregulated, including IL6, IL1β, Ptgs2/COX2, and CCL2/MCP-1 (Figure 1, Rt.), and this upregulation is sustained at the 24 hr timepoint following MCAO (Figure 1, Rt.). TNFα, which is commonly shown to be upregulated following MCAO [25,26], shows only marginal levels of upregulation in LPS or saline preconditioned mice (Figure 1, Rt.). To confirm the microarray results, a subset of selected inflammatory genes, including IL6, IL1β, COX2, and TNFα, was analyzed using qtPCR. Each of these genes was upregulated following MCAO in LPS and saline preconditioned mice, but there were no significant differences based on treatment at 3 hr (data not shown) and 24 hr (Figure 2) following MCAO.

Figure 2. Enhanced anti-inflammatory/type I IFN gene expression but comparable pro-inflammatory gene expression in LPS-preconditioned mice post MCAO. Gene regulation 24 hr post MCAO measured by qtPCR reveals that anti-inflammatory/type I IFN-associated genes TGFβ, RANTES, and IFIT1 are significantly upregulated in LPS preconditioned mice compared to saline. Pro-inflammatory genes IL6, IL1β, COX2, and TNFα show similar regulation in LPS and saline preconditioned mice. These results confirm the gene microarray data. Samples from mice receiving a 45 min (LPS: 0.2 mg/kg) or 60 min (LPS: 0.8 mg/kg) MCAO were combined due to comparable gene regulation (see methods). ND = not detected. Student's t-test, LPS vs. saline 24 hr post MCAO, **p < 0.01, n ≥ 4 per treatment.

LPS preconditioning upregulates anti-inflammatory/type I IFN gene expression in the brain following MCAO

Although pro-inflammatory gene expression was not differentially modulated in preconditioned animals, microarray results revealed that the majority of the anti-inflammatory/type I IFN genes, such as TGFβ, IL1 receptor antagonist (IL1rn), RANTES, and IRF7, were upregulated following stroke in the brains of LPS versus saline preconditioned mice (Figure 1, Lt.). IL10 gene expression was not detected at any timepoint (Figure 1, Lt.). TGFβ, IL10, RANTES, and IFIT1 were selected for qtPCR analysis; TGFβ, RANTES, and IFIT1 were significantly upregulated in the LPS-preconditioned brain compared to saline 24 hr following stroke (Figure 2), as described above.
Discussion

Here we sought to describe the LPS-induced reprogrammed response to stroke and to determine the important signaling events involved in neuroprotection against ischemic injury. Our results demonstrated that NFκB activity was suppressed and that the cytosolic inhibitors of NFκB, Ship1, Tollip, and p105, were present 24 hr post MCAO, although pro-inflammatory gene expression was unaffected (diagrammed in Figure 7). Interestingly, there is evidence that suppression of NFκB can promote protection against cerebral ischemia without influencing pro-inflammatory cytokine production [3,31]. In particular, administration of the NFκB inhibitor Tat-NEMO Binding Domain provided protection against hypoxia-ischemia in neonatal rats without affecting TNFα or IL1β production [3]. Furthermore, TLR4 deficient mice have smaller infarcts in response to MCAO, yet the production of TNFα and IL1β was unaffected [6]. This suggests that reduced ischemic injury can be achieved by suppressing NFκB activity without suppressing pro-inflammatory cytokines, and that TLR4 signaling and NFκB activation are not the sole source of these pro-inflammatory cytokines in response to ischemic injury, implicating other signaling cascades and transcription factors in the inflammatory response. Thus, consistent with our results, reprogramming the TLR4 response would not alter inflammatory gene expression in the brain.

Figure 7. Schematic of TLR4 signaling and gene expression following stroke. (Top) TLR4 signaling cascades following stroke. In the absence of LPS preconditioning, stroke leads to NFκB activation without IRF3 activation. LPS preconditioning prior to stroke leads to robust activation of IRF3 and suppressed NFκB activity compared to stroke alone. (Bottom) Gene expression 24 hr post stroke. Stroke alone dramatically upregulates pro-inflammatory genes. LPS preconditioning prior to stroke dramatically upregulates anti-inflammatory/Type I IFN genes, many of which are associated with IRF3, while still maintaining a pro-inflammatory response.

NFκB is known to be induced acutely in response to ischemic injury; however, investigation into the role of NFκB activity has revealed conflicting results [2]. For instance, NFκB is constitutively active in neurons, a requirement for their survival, while the surrounding glial cells have inducible NFκB activity [32]. In response to ischemic challenge, NFκB activity in astrocytes is responsible for detrimental inflammation [33]. This concept of pleiotropic roles also applies to many of the inflammatory genes expressed in the brain in the setting of stroke [34,35]. For example, intracerebroventricular injection of recombinant IL6 significantly decreased the infarct size in rats 24 hr post MCAO [36]. IL1β is a potent inducer of IL1 receptor antagonist (IL-rn), which significantly reduces damage in response to stroke [37] and, notably, is upregulated in our microarray (Figure 1, Lt.). TNFα is considered to play multiple roles in stroke injury, mediating both neuroprotective and injurious effects [34].
Furthermore, in response to viral challenge, the simultaneous presence of inflammatory cytokines, such as TNFα, and type I IFNs can alter their effects and synergize to promote a more protective state [38]. Thus, alterations in the environment in which NFκB is activated and inflammatory genes are present may affect the roles pro-inflammatory mediators play in injury and may even contribute to the protective phenotype.

IRF3 activity induces the expression of anti-inflammatory and type I IFN-associated genes. Interestingly, mice deficient in IRF3 are not protected against cerebral ischemia by LPS preconditioning [13]. We have further established the importance of IRF3 in neuroprotection by identifying that multiple preconditioning paradigms, including LPS, CpG (a TLR9 agonist), and brief ischemia, induce a common set of IRF-mediated genes in the neuroprotective environment following MCAO [14]. Here we demonstrate that IRF3 activity is upregulated in the brain of LPS-preconditioned mice in response to MCAO and that several IRF3-mediated genes are also upregulated, including RANTES and IFIT1 (diagrammed in Figure 7), which may mitigate the damaging effects of ischemia.

Many of the upregulated anti-inflammatory/type I IFN genes in the brain following stroke have several identified neuroprotective functions. TGFβ has been shown to protect neurons from apoptosis, promote angiogenesis, decrease microglial activation, and reduce edema [34,39]. RANTES, which is induced by IRF3 and IRF7 [30], has been shown to protect neurons from cell death in response to HIV-1 glycoprotein gp120 [40]. In the setting of brain ischemia, mice deficient in the RANTES receptor, CCR5, have larger infarcts, suggesting a neuroprotective role for CCR5 activation [41]. Notably, the expression of CCR5 is upregulated in our microarray data (Figure 1, Lt). IFIT1 is commonly associated with IRF3 signaling in response to IFN treatment and viral infection [42]. Little is known about a role for IFIT1 in ischemic injury; however, it is inducible in microglia and neurons and has been shown to affect NFκB and IRF3 activation [42-45]. Additional anti-inflammatory/type I IFN genes shown to be upregulated in our microarray studies have potential roles in neuroprotection, including IL-receptor antagonist (IL-rn), which is associated with reduced infarct size in response to stroke [34,46]. A recombinant form of IL-rn is being tested in Phase II clinical trials as an acute stroke therapy [47,48]. Although not detected in our gene microarray studies here, perhaps due to assay sensitivity for the IFNβ transcript on the microarray, we have previously published that IFNβ mRNA, a type I IFN known to have neuroprotective properties, is upregulated following stroke in the brain of LPS-preconditioned mice using qtPCR [13]. The protective functions of these genes may be of considerable importance to the neuroprotective phenotype following MCAO induced by LPS preconditioning.

Research strongly suggests that cerebral ischemia dramatically alters the protein and gene expression profile in the peripheral blood [5,29,49,50]. Our results demonstrate that the cytokine and chemokine response in the blood paralleled the pattern of gene expression in the brain. Overall, inflammatory cytokine protein levels were similarly induced in LPS and saline preconditioned mice following stroke. However, we have previously published that TNFα is significantly reduced in the plasma of LPS-preconditioned mice following MCAO [51].
The anti-inflammatory and type I IFN-induced cytokines and chemokines measured in the blood were enhanced in LPS-preconditioned mice compared to saline. In particular, IL10 was significantly upregulated in the blood following MCAO of LPS-preconditioned mice. Importantly, in humans, upregulation of IL10 in the blood has been correlated with improved outcome in stroke [52]. While IL10 mRNA was not detectable in the brain, IL10 can be induced by IRF3 activity and is therefore indicative of the same redirected response seen in the brain. IFNβ was not detected in the blood 24 hr post MCAO. This may be due to the kinetics of IFNβ expression, and further investigation into the time course of IFNβ induction in the blood is necessary to fully understand the role of IFNβ in this system. The redirected signaling observed in the blood may stem from the brain's response to injury by leaking proteins into the peripheral circulation; however, this is not considered a major source of plasma cytokines at these early timepoints following stroke [29]. Alternatively, because LPS administration occurs by a systemic route, target cells in the periphery may become tolerant to activation by the secondary stimuli resulting from ischemic injury. Although our data do not distinguish between these possibilities, it is clear that LPS preconditioning alters the response to injury in the brain and the blood in a manner that promotes a protective phenotype.

TLR4 signals through the adaptor molecules MyD88 and TRIF. MyD88 signaling culminates in NFκB activation. TRIF signaling can activate both IRF3 and NFκB, although IRF3 activation is often more rapid and robust, while activation of NFκB is a secondary effect that occurs as part of late-phase TLR signaling [53]. The data presented in this paper and in Marsh et al., 2009 [13] suggest a dominant role for IRF3 signaling in LPS-induced neuroprotection, which implicates the TRIF adaptor as a key player in the reprogrammed TLR4 response to stroke. Support for this lies in our finding that mice deficient in TRIF are not protected by LPS preconditioning. In contrast, MyD88 deficient mice preconditioned with LPS are still protected against MCAO. Taken together, these data strongly support a protective role for TRIF-mediated IRF3 activation in the neuroprotective phenotype induced by LPS preconditioning.

TLRs have the ability to self-regulate in a manner that redirects their signaling. The classic example is endotoxin tolerance, whereby a low dose of the TLR4 ligand LPS reprograms TLR4 signaling in response to a subsequent toxic dose of LPS, leading to a protective phenotype [54]. This reprogrammed response comes in two major forms: (1) suppressed pro-inflammatory signaling and enhanced anti-inflammatory/type I IFN signaling, or (2) enhanced anti-inflammatory/type I IFN signaling in the absence of suppressed pro-inflammatory signaling. Thus, the suppressed NFκB activity, the enhanced IRF3 activity, and the upregulated anti-inflammatory/type I IFN-associated genes seen in the LPS-preconditioned brain following stroke are reminiscent of endotoxin tolerance--a phenomenon that has been best described in macrophages in vitro, but more recently in animals. Many other key features of endotoxin tolerance are seen in the reprogrammed response to stroke produced by LPS preconditioning. For example, Tollip and Ship1 are known to be induced in endotoxin tolerance and lead to suppressed NFκB activity.
TGFβ has been shown to play an important role in endotoxin tolerance, whereby TGFβ-mediated induction of SMAD4 is required to promote complete endotoxin tolerance and to induce the NFκB inhibitor Ship1 [55]. Interestingly, in our system the upregulation of TGFβ corresponds to Ship1 upregulation 24 hr post MCAO in LPS-preconditioned mice compared to saline. Furthermore, cells deficient in TRIF or IRF3 are unable to develop tolerance to endotoxin [56], which parallels our finding that TRIF deficient or IRF3 deficient mice are not protected by LPS preconditioning against cerebral ischemia. Taken together, this suggests that the cellular phenomenon of endotoxin tolerance is potentially the same response observed in LPS preconditioning, wherein LPS exposure leads to a reprogrammed TLR signaling response in the brain following stroke to produce protection.

Conclusions

The findings reported here provide an important characterization of the LPS-induced neuroprotective response following stroke. We show that LPS preconditioning induces a reprogrammed response to stroke, whereby NFκB activity is suppressed, IRF3 activity is enhanced, and anti-inflammatory/type I IFN genes are upregulated (diagrammed in Figure 7). Interestingly, the suppression of pro-inflammatory genes is not a necessary part of the neuroprotective response induced by LPS preconditioning. Further evaluation of the TLR4 signaling cascades revealed a seminal role for the TRIF cascade in producing the neuroprotection initiated by LPS preconditioning. As TRIF signaling culminates in IRF3 activation, this finding provides further evidence for the importance of IRF3 in the neuroprotective response to stroke.
Background

Stroke is one of the leading causes of death and the leading cause of morbidity in the United States [1]. The inflammatory response to stroke substantially exacerbates ischemic damage. The acute activation of the NFκB transcription factor has been linked to the inflammatory response to stroke [2], and suppression of NFκB activity following stroke has been found to reduce damage [3]. NFκB activation can lead to the dramatic upregulation of inflammatory molecules and cytokines including TNFα, IL6, IL1β, and COX2 [2]. The source of these inflammatory molecules in the acute response to stroke appears to stem from the cells of the central nervous system (CNS), including neurons and glial cells [2]. The cells in the CNS play a particularly dominant role early in the response to ischemia because infiltrating leukocytes do not appear in substantial numbers in the brain until 24 hr following injury [4]. Stroke also induces an acute inflammatory response in the circulating blood. Inflammatory cytokine and chemokine levels, including IL6, IL1β, MCP-1 and TNFα, are elevated in the circulation following stroke [5]. This suggests there is an intimate relationship between responses in the brain and blood following stroke--responses that result in increased inflammation.

Toll-like receptors (TLRs), traditionally considered innate immune receptors, signal through the adaptor proteins MyD88 and TRIF to activate NFκB and interferon regulatory factors (IRFs). It has been shown recently that TLRs become activated in response to endogenous ligands, known as damage associated molecular patterns (DAMPs), released during injury. Interestingly, animals deficient in TLR2 or TLR4 have significantly reduced infarct sizes in several models of stroke [6-11]. This suggests that TLR2 and TLR4 activation in response to ischemic injury exacerbates damage. In addition, a recent investigation in humans showed that the inflammatory responses to stroke in the blood were linked to increased TLR2 and TLR4 expression on hematopoietic cells and associated with worse outcome in stroke [12]. The detrimental effect of TLR signaling is associated with the pathways that lead to NFκB activation and pro-inflammatory responses. In contrast, TLR signaling pathways that activate IRFs can induce anti-inflammatory mediators and type I IFNs that have been associated with neuroprotection [13,14]. Thus, in TLR signaling there is a fine balance between pathways leading to injury or protection.

TLR ligands have been a major source of interest as preconditioning agents for prophylactic therapy against ischemic injury. Such therapies would target a population of patients that are at risk of ischemia in the setting of surgery [15-18]. Preconditioning with low doses of ligands for TLR2, TLR4, and TLR9 all successfully reduce infarct size in experimental models of stroke [19-21], including a recent study showing that a TLR9 ligand is neuroprotective in a nonhuman primate model of stroke [22]. Emerging evidence suggests that TLR-induced neuroprotection occurs by reprogramming the genomic response to the DAMPs, which are produced in response to ischemic injury. In this reprogrammed state, the resultant pathway activation of TLR4 signaling preferentially leads to IRF-mediated gene expression [13,14]. However, whether TLR preconditioning affects NFκB activity and pro-inflammatory signaling is unknown. As yet, a complete analysis of the characteristic TLR signaling responses to stroke following preconditioning has not been reported.
The objective of this study is to utilize LPS preconditioning followed by transient middle cerebral artery occlusion (MCAO) to elucidate the reprogrammed TLR response to stroke and to determine the major pathways involved in producing the neuroprotective phenotype.

Here we show that preconditioning against ischemia using LPS leads to suppressed NFκB activity--although pro-inflammatory gene expression does not appear to be attenuated. We also demonstrate that LPS-preconditioned mice have enhanced IRF3 activity and anti-inflammatory/type I IFN gene expression in the ischemic brain. This expression pattern was recapitulated in the blood, where plasma levels of pro-inflammatory cytokine proteins were comparable in LPS-preconditioned and control mice while IRF-associated proteins were enhanced in LPS preconditioned mice. To our knowledge, we provide the first evidence that protection due to LPS preconditioning stems from TRIF signaling, the cascade that is associated with IRF3 activation, and is independent of MyD88 signaling. These molecular features suggest that, following stroke, signaling is directed away from NFκB activity and towards a dominant TRIF-IRF3 response. Understanding the endogenous signaling events that promote protection against ischemic injury is integral to the identification and development of novel stroke therapeutics. In particular, the evidence presented here further highlights a key role for IRF3 activity in the protective response to stroke.

Methods

Animals

C57Bl/6J mice (male, 8-12 weeks) were purchased from Jackson Laboratories (West Sacramento, CA). C57Bl/6J-Ticam1LPS2/J (TRIF-/-) mice were also obtained from Jackson Laboratories. MyD88-/- mice were a kind gift of Dr. Shizuo Akira (Osaka University, Osaka, Japan) and were bred in our facility. All mice were housed in an American Association for Laboratory Animal Care-approved facility. Procedures were conducted according to Oregon Health & Science University, Institutional Animal Care and Use Committee, and National Institutes of Health guidelines.

LPS treatment

Mice were preconditioned with LPS (0.2 or 0.8 mg/kg, Escherichia coli serotype 0111:B4; Sigma) or saline by one subcutaneous injection, unless otherwise indicated, 72 hr prior to MCAO. Each new lot of LPS was titrated for the optimal dose that confers neuroprotection. No differences were observed in the genomic responses to LPS for each dose used and route of administration (subcutaneous or intraperitoneal, data not shown).
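The dose is specified per body weight (0.2 or 0.8 mg/kg); the injection-volume arithmetic is not given in the paper, so the sketch below assumes a hypothetical 0.1 mg/ml working stock purely to show the calculation.

```python
def injection_volume_ml(dose_mg_per_kg, body_weight_g, stock_mg_per_ml):
    """Convert a mg/kg dose into an injection volume for a given stock."""
    dose_mg = dose_mg_per_kg * (body_weight_g / 1000.0)
    return dose_mg / stock_mg_per_ml

# Example: 0.8 mg/kg LPS for a 25 g mouse with an assumed 0.1 mg/ml stock.
print(f"{injection_volume_ml(0.8, 25, 0.1):.2f} ml")  # 0.20 ml
```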
Middle Cerebral Artery Occlusion (MCAO)

Mice were anesthetized with isoflurane (1.5-2%) and subjected to MCAO using the monofilament suture method described previously [23]. Briefly, a silicone-coated 7-0 monofilament nylon surgical suture was threaded through the external carotid artery to the internal carotid artery to block the middle cerebral artery, and maintained intraluminally for 40 to 60 min. The suture was then removed to restore blood flow. The duration of occlusion was optimized for the specific surgeon who performed the MCAO to yield comparable infarct sizes in the saline treated control animals (~35-40%). The selected duration of MCAO was held constant within experiments. Cerebral blood flow (CBF) was monitored throughout surgery by laser doppler flowmetry. Any mouse that did not maintain a CBF of <25% of baseline during occlusion was excluded from the study. The reduction of CBF was comparable in LPS and saline preconditioned mice in response to MCAO. Body temperature was monitored and maintained at 37°C with a thermostat-controlled heating pad. Infarct measurements were made using triphenyltetrazolium chloride (TTC) staining of 1 mm coronal brain sections.
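Two quantitative steps in this protocol can be written out explicitly: the inclusion criterion (CBF during occlusion must fall below 25% of baseline) and infarct size expressed as a percentage of the hemisphere summed over the 1 mm TTC-stained sections. The exact measurement and any edema-correction conventions are not detailed here, so the functions below are an assumed sketch with placeholder areas.

```python
def passes_cbf_criterion(baseline_flux, occlusion_flux, threshold=0.25):
    """Include the animal only if flow during occlusion drops below 25% of baseline."""
    return (occlusion_flux / baseline_flux) < threshold

def infarct_percent(infarct_areas_mm2, hemisphere_areas_mm2, slice_thickness_mm=1.0):
    """Sum section areas x thickness to volumes and express infarct as % of hemisphere."""
    infarct_volume = sum(a * slice_thickness_mm for a in infarct_areas_mm2)
    hemisphere_volume = sum(a * slice_thickness_mm for a in hemisphere_areas_mm2)
    return 100.0 * infarct_volume / hemisphere_volume

# Placeholder per-section areas (mm^2) from TTC-stained slices; illustrative only.
print(passes_cbf_criterion(250.0, 40.0))                   # True -> include
print(infarct_percent([8, 12, 15, 10], [30, 34, 36, 32]))  # ~34% of hemisphere
```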
Tissue collection

Under deep isoflurane anesthesia, approximately 0.5-1.0 ml of blood was collected via cardiac puncture in a heparinized syringe. Subsequently, the mice were perfused with heparinized (2 U/ml) saline followed by rapid removal of the brain. The olfactory bulbs were removed and the first 4 mm of tissue was collected beginning at the rostral end. The striatum was dissected and removed, and the remaining cortex was utilized for RNA isolation or protein extraction. The collected blood was centrifuged at 5000 × g for 20 min to obtain plasma that was stored at -80°C.

Genomic profiling of TLR associated mediators

For the genes displayed in Figure 1, the transcript expression levels were determined as previously described from our microarray experiments examining the brain cortical response to stroke and 3 different preconditioning stimuli [14]. In brief, total RNA was isolated from the ipsilateral cortex (n = 4 mice/treatment/timepoint) using the Qiagen RNeasy Lipid Mini Kit (Qiagen). Microarray assays were performed in the Affymetrix Microarray Core of the Oregon Health & Science University Gene Microarray Shared Resource. RNA samples were labeled using the NuGEN Ovation Biotin RNA Amplification and Labeling System_V1. Hybridization was performed as described in the Affymetrix technical manual (Affymetrix) with modification as recommended for the Ovation labeling protocol (NuGEN Technologies). Labeled cRNA target was quality-checked based on yield and size distribution. Quality-tested samples were hybridized to the MOE430 2.0 array. The array image was processed with Affymetrix GeneChip Operating Software (GCOS). Affymetrix CEL files were then uploaded into GeneSifter (http://www.genesifter.net) and normalized using RMA.

Figure 1. Microarray analysis of anti-inflammatory/type I IFN and pro-inflammatory gene expression. Microarray analysis revealed enhanced anti-inflammatory/type I IFN and comparable pro-inflammatory gene expression profiles in the brain of LPS-preconditioned (0.2 mg/kg, intraperitoneal injection) mice following 45 min MCAO. Heatmap representing level of gene expression immediately prior to (0 hr) MCAO and 3 and 24 hr post MCAO; n = 4/treatment/timepoint. Lt. Select anti-inflammatory/type I IFN genes. Rt. Select pro-inflammatory genes. Color scale from green to red represents relative decreased or increased gene expression levels, respectively.
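The heatmap in Figure 1 displays relative expression across timepoints after RMA normalization; the actual analysis was performed in GeneSifter. The pandas snippet below is only a generic illustration of how log2 ratios relative to a saline baseline could be tabulated, using invented expression values and gene names.

```python
import pandas as pd

# Hypothetical RMA-normalized expression (log2 scale); not the study's data.
expr = pd.DataFrame(
    {"saline_0h": [6.1, 7.0], "lps_0h": [6.2, 7.1],
     "saline_24h": [6.3, 9.5], "lps_24h": [8.4, 9.6]},
    index=["Tgfb1", "Il6"],
)

# Relative expression: difference of log2 values vs. the saline 0 hr baseline,
# i.e. the kind of matrix a green-to-red heatmap would display.
log2_ratio = expr.sub(expr["saline_0h"], axis=0)
print(log2_ratio.round(2))
```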
RNA isolation, Reverse Transcription, and qtPCR

RNA was isolated from cortical tissue 72 hr post injection or from ipsilateral cortical tissue at 3 or 24 hr following MCAO (n ≥ 4 mice/treatment/timepoint) using a Lipid Mini RNA isolation kit (Qiagen). Reverse transcription was performed on 2 μg of RNA using Omniscript (Qiagen). Quantitative PCR was performed using Taqman Gene Expression Assays (Applied Biosystems) for each gene of interest on an ABI Prism 7700. Results were normalized to β-Actin expression and analyzed relative to their saline preconditioned counterparts. The relative quantification of the gene of interest was determined using the comparative CT method (2^-ΔΔCT).
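The comparative CT calculation can be written out explicitly: ΔCT = CT(gene) − CT(β-Actin), ΔΔCT = ΔCT(sample) − mean ΔCT(saline group), and relative expression = 2^-ΔΔCT. A minimal sketch of that arithmetic, using placeholder CT values rather than data from this study:

```python
import numpy as np

def relative_expression(ct_gene, ct_actin, ct_gene_ref, ct_actin_ref):
    """Comparative CT (2^-ddCT) relative to the saline reference group."""
    d_ct = np.asarray(ct_gene) - np.asarray(ct_actin)
    d_ct_ref = np.mean(np.asarray(ct_gene_ref) - np.asarray(ct_actin_ref))
    dd_ct = d_ct - d_ct_ref
    return 2.0 ** (-dd_ct)

# Placeholder CT values; not data from the paper.
lps_fold = relative_expression(
    ct_gene=[24.1, 23.8], ct_actin=[18.0, 17.9],       # LPS-preconditioned animals
    ct_gene_ref=[26.0, 26.2], ct_actin_ref=[18.1, 18.0]  # saline reference group
)
print(lps_fold.round(2))  # fold change vs. saline for each LPS animal
```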
Western Blot

Protein extraction was performed as described previously [24] with some modifications. Briefly, tissue samples (n ≥ 4 mice/treatment/timepoint) were dissected from the ipsilateral cortex and lysed in a buffer containing a protease inhibitor cocktail (Roche). Protein concentrations were determined using the BCA method (Pierce-Endogen). Protein samples (50 μg) were denatured in a gel-loading buffer (Bio-Rad Laboratories) at 100°C for 5 min and then loaded onto 12% Bis-Tris polyacrylamide gels (Bio-Rad Laboratories). Following electrophoresis, proteins were transferred to polyvinylidene difluoride membranes (Bio-Rad Laboratories) and incubated with primary antibodies for Ship-1 (Santa Cruz, sc8425), Tollip (AbCam, Ab37155), p105 (Santa Cruz, sc7178), or β-Actin (Santa Cruz, sc1616R) at 4°C overnight. Membranes were then incubated with horseradish peroxidase conjugated anti-rabbit, anti-goat, or anti-mouse antibody (Santa Cruz Biotechnology), detected by chemiluminescence (NEN Life Science Products), and exposed to Kodak film (Biomax). Images were captured using an Epson scanner and the densitometry of the gel bands, including the β-Actin loading control, was analyzed using ImageJ (NIH).

Electrophoretic Mobility Shift Assay (EMSA)

Nuclear protein extracts (n = 4 mice/treatment/timepoint) were prepared from tissue dissected from the ipsilateral cortex. Homogenized tissue was incubated in Buffer A (10 mM Hepes-KOH pH 7.9, 60 mM KCl, 1 mM EDTA, 1 mM DTT, 1 mM PMSF) for 5 min on ice and centrifuged at 3000 rpm for 5 min at 4°C. The pellets were washed in Buffer B (10 mM Hepes-KOH pH 7.9, 60 mM KCl, 1 mM EDTA, 0.5% NP-40, 1 mM DTT, 1 mM PMSF), resuspended in Buffer C (250 mM Tris pH 7.8, 60 mM KCl, 1 mM DTT, 1 mM PMSF), and freeze-thawed 3 times in liquid nitrogen. All buffers contained a protease inhibitor cocktail (Roche). After centrifuging at 10,000 rpm for 10 min at 4°C, the supernatant was collected and stored as nuclear extract at -80°C. Nuclear protein concentrations were determined using the BCA method (Pierce-Endogen). EMSAs were performed using the Promega Gel Shift Assay System according to the manufacturer's instructions. Briefly, 15 μg of nuclear protein was incubated with 32P-labeled NFκB consensus oligonucleotide (Promega), either with or without unlabeled competitor oligonucleotide, unlabeled noncompetitor oligonucleotide, or anti-p65 antibody (Santa Cruz). Samples were electrophoresed on a 4% acrylamide gel, dried, and exposed to a phosphorimager overnight. The densitometry of the gel bands was analyzed using scanning integrated optical density software (ImageJ).
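Band densitometry from ImageJ is reported relative to the β-Actin loading control and then to the saline group. The sketch below shows that normalization with invented integrated-density values; it is not the authors' analysis script.

```python
import numpy as np

def normalized_density(target_band, actin_band, saline_target, saline_actin):
    """Normalize target/actin ratios to the mean saline ratio (illustrative)."""
    ratio = np.asarray(target_band, dtype=float) / np.asarray(actin_band, dtype=float)
    saline_ratio = np.mean(np.asarray(saline_target, dtype=float) /
                           np.asarray(saline_actin, dtype=float))
    return ratio / saline_ratio

# Hypothetical ImageJ integrated densities for Ship1 and beta-Actin bands.
print(normalized_density([5200, 6100, 5800], [2100, 2300, 2200],
                         [1500, 1400, 1600], [2000, 2100, 2050]).round(2))
```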
IRF3 Activity Assay

Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc.), which utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.

Cytokine Analysis

Cytokine/chemokine analysis for IL1β, IL1α, MIP-1α, MCP-1, RANTES, and IL10 was performed on plasma samples (n ≥ 3 mice/treatment/timepoint) using a multiplex ELISA (Quansys). An IFNβ ELISA (PBL Interferon Source) was used to measure plasma levels of IFNβ.

Statistical Analysis

Data are represented as mean ± SEM. The n for each experiment is greater than or equal to 3, as specified in each figure. Statistical analysis was performed using GraphPad Prism5 software. Two-way ANOVA with Bonferroni Post Hoc test and Student's t-test were utilized as specified. Significance was determined as p < 0.05.
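The statistics were run in GraphPad Prism. For readers who prefer a scripted equivalent, the sketch below reproduces the same kinds of tests (two-way ANOVA with an interaction term, followed by a Bonferroni-style post hoc t-test) in Python with statsmodels and SciPy on a placeholder table; the values are illustrative only.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder measurements (e.g., nuclear p65 activity), illustrative only.
df = pd.DataFrame({
    "value": [1.0, 1.2, 0.9, 1.1, 1.1, 1.0, 0.6, 0.5, 0.7, 1.4, 1.5, 1.3],
    "treatment": ["saline"] * 3 + ["LPS"] * 3 + ["LPS"] * 3 + ["saline"] * 3,
    "timepoint": ["3hr"] * 6 + ["24hr"] * 6,
})

# Two-way ANOVA (treatment x timepoint) with an interaction term.
model = ols("value ~ C(treatment) * C(timepoint)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-style post hoc: t-test within one timepoint, with alpha divided
# by the number of planned comparisons (here, 2 timepoints).
sub = df[df["timepoint"] == "24hr"]
t, p = stats.ttest_ind(sub[sub.treatment == "LPS"]["value"],
                       sub[sub.treatment == "saline"]["value"])
print(f"24 hr LPS vs. saline: p = {p:.4f} (compare to 0.05/2)")
```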
Procedures were conducted according to Oregon Health & Science University, Institutional Animal Care and Use Committee, and National Institutes of Health guidelines.", "Mice were preconditioned with LPS (0.2 or 0.8 mg/kg, Escherichia coli serotype 0111:B4; Sigma) or saline by one subcutaneous injection, unless otherwise indicated, 72 hr prior to MCAO. Each new lot of LPS was titrated for the optimal dose that confers neuroprotection. No differences were observed in the genomic responses to LPS for each dose used and route of administration (subcutaneous or intraperitoneal, data not shown).", "Mice were anesthetized with isoflurane (1.5-2%) and subjected to MCAO using the monofilament suture method described previously [23]. Briefly, a silicone-coated 7-0 monofilament nylon surgical suture was threaded through the external carotid artery to the internal carotid artery to block the middle cerebral artery, and maintained intraluminally for 40 to 60 min. The suture was then removed to restore blood flow. The duration of occlusion was optimized based on the specific surgeon who performed the MCAO to yield comparable infarct sizes in the saline treated control animals (~35-40%). The selected duration of MCAO was held constant within experiments. Cerebral blood flow (CBF) was monitored throughout surgery by laser doppler flowmetry. Any mouse that did not maintain a CBF during occlusion of <25% of baseline was excluded from the study. The reduction of CBF was comparable in LPS and saline preconditioned mice in response to MCAO. Body temperature was monitored and maintained at 37°C with a thermostat-controlled heating pad. Infarct measurements were made using triphenyltetrazolium chloride (TTC) staining of 1 mm coronal brain sections.", "Under deep isoflurane anesthesia, approximately ~0.5-1.0 ml of blood was collected via cardiac puncture in a heparinized syringe. Subsequently, the mice were perfused with heparinized (2 U/ml) saline followed by rapid removal of the brain. The olfactory bulbs were removed and the first 4 mm of tissue was collected beginning at the rostral end. The striatum was dissected and removed and the remaining cortex was utilized for RNA isolation or protein extraction. The collected blood was centrifuged at 5000 × g for 20 min to obtain plasma that was stored at -80°C.", "For the genes displayed in Figure 1, the transcript expression levels were determined as previously described from our microarray experiments examining the brain cortical response to stroke and 3 different preconditioning stimuli [14]. In brief, total RNA was isolated from the ipsilateral cortex (n = 4 mice/treatment/timepoint), using the Qiagen Rneasy Lipid Mini Kit (Qiagen). Microarray assays were performed in the Affymetrix Microarray Core of the Oregon Health & Science University Gene Microarray Shared Resource. RNA samples were labeled using the NuGEN Ovation Biotin RNA Amplification and Labeling System_V1. Hybridization was performed as described in the Affymetrix technical manual (Affymetrix) with modification as recommended for the Ovation labeling protocol (NuGEN Technologies). Labeled cRNA target was quality-checked based on yield and size distribution. Quality-tested samples were hybridized to the MOE430 2.0 array. The array image was processed with Affymetrix GeneChip Operating Software (GCOS). Affymetrix CEL files were then uploaded into GeneSifter (http://www.genesifter.net) and normalized using RMA.\nMicroarray analysis of anti-inflammatory/type I IFN and pro-inflammatory gene expression. 
Microarray analysis revealed enhanced anti-inflammatory/type I IFN and comparable pro-inflammatory gene expression profiles in the brain of LPS-preconditioned (0.2 mg/kg, intraperitoneal injection) mice following 45 min MCAO. Heatmap representing level of gene expression immediately prior to (0 hr) MCAO and 3 and 24 hr post MCAO; n = 4/treatment/timepoint. Lt. Select anti-inflammatory/type I IFN genes. Rt. Select pro-inflammatory genes. Color scale from green to red represents relative decreased or increased gene expression levels, respectively.", "RNA was isolated from cortical tissue 72 hr post injection or from ipsilateral cortical tissue at 3 or 24 hr following MCAO (n ≥ 4 mice/treatment/timepoint) using a Lipid Mini RNA isolation kit (Qiagen). Reverse transcription was performed on 2 μg of RNA using Omniscript (Qiagen). Quantitative PCR was performed using Taqman Gene Expression Assays (Applied Biosystems) for each gene of interest on an ABI Prism 7700. Results were normalized to β-Actin expression and analyzed relative to their saline preconditioned counterparts. The relative quantification of the gene of interest was determined using the comparative CT method (2^-ΔΔCt).", "Protein extraction was performed as described previously [24] with some modifications. Briefly, tissue samples (n ≥ 4 mice/treatment/timepoint) were dissected from the ipsilateral cortex and lysed in a buffer containing a protease inhibitor cocktail (Roche). Protein concentrations were determined using the BCA method (Pierce-Endogen). Protein samples (50 μg) were denatured in a gel-loading buffer (Bio-Rad Laboratories) at 100°C for 5 min and then loaded onto 12% Bis-Tris polyacrylamide gels (Bio-Rad Laboratories). Following electrophoresis, proteins were transferred to polyvinylidene difluoride membranes (Bio-Rad Laboratories) and incubated with primary antibodies for Ship-1 (Santa Cruz, sc8425), Tollip (AbCam, Ab37155), p105 (Santa Cruz, sc7178), or β-Actin (Santa Cruz, sc1616R) at 4°C overnight. Membranes were then incubated with horseradish peroxidase-conjugated anti-rabbit, anti-goat, or anti-mouse antibody (Santa Cruz Biotechnology); bands were detected by chemiluminescence (NEN Life Science Products) and exposed to Kodak film (Biomax). Images were captured using an Epson scanner and the densitometry of the gel bands, including the β-Actin loading control, was analyzed using ImageJ (NIH).", "Nuclear protein extracts (n = 4 mice/treatment/timepoint) were prepared from tissue dissected from the ipsilateral cortex. Homogenized tissue was incubated in Buffer A (10 mM Hepes-KOH pH 7.9, 60 mM KCl, 1 mM EDTA, 1 mM DTT, 1 mM PMSF) for 5 min on ice and centrifuged at 3000 rpm for 5 min at 4°C. The pellets were washed in Buffer B (10 mM Hepes-KOH pH 7.9, 60 mM KCl, 1 mM EDTA, 0.5% NP-40, 1 mM DTT, 1 mM PMSF), resuspended in Buffer C (250 mM Tris pH 7.8, 60 mM KCl, 1 mM DTT, 1 mM PMSF), and freeze-thawed 3 times in liquid nitrogen. All buffers contained a protease inhibitor cocktail (Roche). After centrifuging at 10,000 rpm for 10 min at 4°C, the supernatant was collected and stored as nuclear extract at -80°C. Nuclear protein concentrations were determined using the BCA method (Pierce-Endogen). EMSAs were performed using the Promega Gel Shift Assay System according to the manufacturer's instructions.
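As a minimal sketch of the comparative CT (2^-ΔΔCt) calculation described in the quantitative PCR paragraph above (an illustration only, not the authors' analysis script, and all Ct values are invented), the fold change of a gene of interest, normalized to β-Actin and expressed relative to saline-preconditioned controls, can be computed as:

    import numpy as np

    def fold_change_ddct(ct_gene_treated, ct_actin_treated,
                         ct_gene_control, ct_actin_control):
        """Relative quantification by the comparative CT (2^-ddCt) method.

        Each argument is a list of Ct values (one per mouse); the gene of
        interest is normalized to beta-Actin, then expressed relative to the
        saline-preconditioned controls."""
        dct_treated = np.asarray(ct_gene_treated) - np.asarray(ct_actin_treated)
        dct_control = np.asarray(ct_gene_control) - np.asarray(ct_actin_control)
        ddct = dct_treated.mean() - dct_control.mean()
        return 2.0 ** (-ddct)

    # Hypothetical Ct values for one gene in LPS- vs. saline-preconditioned mice.
    print(fold_change_ddct([24.1, 24.3, 23.9, 24.0], [17.2, 17.0, 17.1, 17.3],
                           [26.0, 25.8, 26.2, 26.1], [17.1, 17.2, 17.0, 17.1]))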
Briefly, 15 μg of nuclear protein was incubated with 32P-labeled NFκB consensus oligonucleotide (Promega), either with or without unlabeled competitor oligonucleotide, unlabeled noncompetitor oligonucleotide, or anti-p65 antibody (Santa Cruz). Samples were electrophoresed on a 4% acrylamide gel, dried and exposed to phosphorimager overnight. The densitometry of the gel bands was analyzed using scanning integrated optical density software (ImageJ).\n IRF3 Activity Assay Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.\nNuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.", "Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc), that utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides.", "Cytokine/chemokine analysis for IL1β, IL1α, MIP-1α, MCP-1, RANTES, and IL10 was performed on plasma samples (n ≥ 3 mice/treatment/timepoint) using a multiplex ELISA (Quansys). An IFNβ ELISA (PBL Interferon Source) was used to measure plasma levels of IFNβ.", "Data is represented as mean ± SEM. The n for each experiment is greater than or equal to 3, as specified in each figure. Statistical analysis was performed using GraphPad Prism5 software. Two-way ANOVA with Bonferroni Post Hoc test and Student's t-test were utilized as specified. Significance was determined as p < 0.05.", " LPS preconditioning does not affect inflammatory gene expression in the brain following stroke We used gene microarray analysis to elucidate the pattern of inflammatory or anti-inflammatory/type I IFN gene expression in the brain following stroke. In the setting of stroke, LPS preconditioned animals exhibited regulation of a number of genes typically found downstream of TLR signaling. The inflammatory profile reveals that the gene regulation is similar at each timepoint following stroke in LPS or saline preconditioned animals (Figure 1, Rt.). There is no evidence of inflammatory gene expression present immediately prior to stroke (Figure 1, Rt. 0 hr). At 3 hr post MCAO, several inflammatory genes are upregulated including IL6, IL1β, Ptgs2/COX2, and CCL2/MCP-1 (Figure 1, Rt.) and this upregulation is sustained at the 24 hr timepoint following MCAO (Figure 1, Rt.). TNFα, which is commonly shown to be upregulated following MCAO [25,26], only shows marginal levels of upregulation in LPS or saline preconditioned mice (Figure 1, Rt.). To confirm the microarray results, a subset of selected inflammatory genes including IL6, IL1β, COX2, and TNFα, were analyzed using qtPCR. 
Each of these genes was upregulated following MCAO in LPS and saline preconditioned mice, but there were no significant differences based on treatment at 3 hr (data not shown) and 24 hr (Figure 2) following MCAO.\nEnhanced anti-inflammatory/type I IFN gene expression but comparable pro-inflammatory gene expression in LPS-preconditioned mice post MCAO. Gene regulation 24 hr post MCAO measured by qtPCR reveals that anti-inflammatory/type I IFN-associated genes TGFβ, RANTES, and IFIT1 are significantly upregulated in LPS preconditioned mice compared to saline. Pro-inflammatory genes IL6, IL1β, COX2, and TNFα show similar regulation in LPS and saline preconditioned mice. These results confirm the gene microarray data. Samples from mice receiving a 45 (LPS: 0.2 mg/kg) or 60 min (LPS: 0.8 mg/kg) MCAO were combined due to comparable gene regulation (see methods). ND = not detected. Student's t-test, LPS vs. saline 24 hr post MCAO, **p < 0.01, n ≥ 4 per treatment.\n LPS preconditioning upregulates anti-inflammatory/type I IFN gene expression in the brain following MCAO Although pro-inflammatory gene expression was not differentially modulated in preconditioned animals, microarray results revealed that the majority of the anti-inflammatory/type I IFN genes, such as TGFβ, IL1 receptor antagonist (IL1rn), RANTES, and IRF7, were upregulated following stroke in the brains of LPS versus saline preconditioned mice (Figure 1, Lt.).
IL10 gene expression was not detected at any timepoint (Figure 1, Lt.). TGFβ, IL10, RANTES, and IFIT1 were selected for qtPCR analysis. TGFβ, RANTES, and IFIT1 were significantly upregulated in the LPS-preconditioned brain compared to saline 24 hr following stroke (Figure 2). RANTES was also significantly upregulated at 3 hr following stroke in LPS-preconditioned mice compared to saline (data not shown). IL10 expression remained undetectable by qtPCR analysis (Figure 2), suggesting that IL10 mRNA is not present at these timepoints in the brain following stroke. These qtPCR results confirm the gene expression profile observed on the microarray. Taken together, these data indicate an enhanced anti-inflammatory/type I IFN gene expression profile in the brain of LPS-preconditioned animals following MCAO while the inflammatory gene expression is unaffected.\n NFκB activity is suppressed in the brain of LPS-preconditioned animals 24 hr post MCAO NFκB activity is associated with damage and inflammation in the brain that occurs in response to stroke. We used EMSAs to evaluate the activity of the NFκB subunit p65 in the brain following stroke. The results indicated that LPS and saline preconditioned mice have comparable NFκB activity at 3 hr post MCAO (Figure 3A). However, at 24 hr post MCAO, LPS-preconditioned animals have significantly suppressed NFκB activity compared to saline preconditioned mice (Figure 3A).\nNFκB is suppressed 24 hr post MCAO in LPS-preconditioned mice. (A) Nuclear protein obtained from ipsilateral cortices was used to measure p65 activity by EMSA analysis. EMSA gel of pooled samples (n = 4) following 45 min MCAO for saline and LPS preconditioned (0.8 mg/kg) mice (Lt.). Quantification of band intensity of individual mice following MCAO (Rt.). NFκB is significantly decreased in LPS-preconditioned mice 24 hr post MCAO compared to saline. Supershift assay confirmed specificity for p65 oligos (data not shown). (B) Ship1 and Tollip mRNA are significantly upregulated 24 hr post 60 minute MCAO in LPS-preconditioned (0.8 mg/kg) mice compared to saline, n ≥ 4 per treatment/timepoint.
(C) Western blot for Ship1 and Tollip and relative band quantification showing significant upregulation of Ship1 and Tollip protein 24 hr post 45 min MCAO in LPS-preconditioned mice (0.8 mg/kg), n ≥ 3 per treatment/timepoint. (D) Western blot and relative band quantification for p105 at 24 hr post 45 min MCAO showing significant upregulation in LPS-preconditioned (0.8 mg/kg) mice, n ≥ 3 per treatment/timepoint. (A) Two-Way ANOVA, Bonferroni Post Hoc, *p < 0.05. (B-D) Student's t-test, LPS vs. saline, **p < 0.01.\nShip1 and Tollip are cytosolic molecules that inhibit TLR signaling, which leads to the suppression of NFκB activity. We found that Ship1 and Tollip mRNA are upregulated in the brain 72 hr post injection versus saline controls (2.06 ± 0.27 and 2.31 ± 0.35, respectively) but not at 3 hr post stroke (1.09 ± 0.10 and 1.05 ± 0.09, respectively). However, by 24 hr post MCAO, Ship1 and Tollip mRNA are significantly enhanced in the brain of LPS-preconditioned mice compared to saline controls (2.62 ± 0.84 and 4.01 ± 1.06, respectively, Figure 3B). Ship1 protein is not upregulated at 72 hr post injection (Fold change vs. saline: 1.01 ± 0.32), but becomes significantly enhanced in LPS-preconditioned mice at 3 hr (Fold change vs. saline: 1.83 ± 0.13) and at 24 hr (Fold change vs. saline: 8.81 ± 1.54, Figure 3C) post MCAO. Tollip protein is not affected by LPS preconditioning at 72 hr post injection or 3 hr post MCAO (Fold change vs. saline: 1.42 ± 0.10 and 0.83 ± 0.10, respectively), but it is significantly enhanced in LPS-preconditioned mice compared to saline controls at 24 hr post MCAO (Fold change vs. saline: 2.42 ± 0.20, Figure 3C). Additionally, the p50 precursor protein p105, which inhibits NFκB activity by acting like an IκB molecule and sequestering NFκB in the cytosol [27,28], was significantly upregulated 24 hr post stroke in LPS-preconditioned mice compared to saline (Figure 3D). Thus, despite the upregulation of inflammatory genes, the activity of NFκB is suppressed in the late phase of the neuroprotective response of LPS-preconditioned mice.\n IRF3 activity in the brain is enhanced following MCAO in LPS-preconditioned mice IRF3 activation downstream of TLR4 is associated with anti-inflammatory/type I IFN responses. Using an IRF3 activity ELISA, we determined that IRF3 activity is comparable immediately prior to stroke (data not shown) and subsequently enhanced in the brains of LPS-preconditioned mice following MCAO (Figure 4). The trend for increased IRF3 activity is present at 3 hr post MCAO and is significantly increased at 24 hr in LPS-preconditioned mice (Figure 4). Saline treated animals showed no evidence of increased IRF3 activity following stroke (Figure 4). This indicates that LPS preconditioning alters the response to ischemic injury by activating IRF3--a finding that is consistent with the enhanced anti-inflammatory/type I IFN gene expression.\nIRF3 activity is enhanced following MCAO in LPS-preconditioned mice. Nuclear protein obtained from ipsilateral cortex post 60 min MCAO analyzed using an IRF3 activity ELISA (Active Motif, Inc.) revealed a significant increase in IRF3 activity in LPS-preconditioned (0.8 mg/kg) mice. Two-way ANOVA, Bonferroni Post Hoc, LPS vs. saline, *p < 0.05, n ≥ 4 per treatment.\n Blood cytokine/chemokine levels parallel the expression in the brain Evidence indicates that stroke alters the cytokine profile in the plasma of circulating blood [5,29]. To determine whether LPS preconditioning changes the balance of pro- and anti-inflammatory cytokines and chemokines in the plasma, we examined the levels of seven molecules using ELISAs. The results indicate that the levels of pro-inflammatory cytokines, such as IL6, IL1β, and MCP-1, are increased in both LPS and saline preconditioned mice (Figure 5). The pro-inflammatory cytokines MIP-1α and IL1α were not detected in the serum (data not shown). The anti-inflammatory cytokine IL10 was significantly increased only in the plasma of LPS-preconditioned mice compared to saline preconditioned mice following stroke (Figure 5). RANTES, which is a chemokine associated with IRF3 and IRF7 activity [30], was present in the blood of LPS-preconditioned mice at significantly greater levels than in saline preconditioned mice (Figure 5). IFNβ was not detectable in the blood of LPS or saline preconditioned animals following stroke (data not shown). Overall, this suggests that the pro-inflammatory and anti-inflammatory/type I IFN-associated response in the blood parallels the response in the brain following stroke.\nBlood cytokine/chemokine levels show alterations in gene expression patterns comparable to the brain. Plasma collected from saline or LPS-preconditioned (0.8 mg/kg) mice at the time of or following 60 min MCAO was examined using a multikine ELISA (Quansys). Results indicated that pro-inflammatory cytokines IL1β, IL6 and MCP-1 are similar in saline and LPS-preconditioned mice. In contrast, LPS-preconditioned mice have significantly enhanced levels of the anti-inflammatory/type I IFN-associated cytokine IL10 and chemokine RANTES compared to saline following MCAO. Two-way ANOVA, LPS vs. saline, *p < 0.05, n ≥ 3 per treatment.\n TRIF-dependent LPS preconditioning induced neuroprotection Evidence presented here and previously suggests that signaling following stroke is redirected towards IRF3 [13,14]. TLR4 signaling, which activates IRF3, is initiated by the adaptor molecule TRIF, while TLR4 signaling that activates NFκB is initiated by the adaptor molecule MyD88. The individual roles of these adaptor molecules in neuroprotection induced by LPS preconditioning are unknown. To test whether either of these key molecular adaptors was important in mediating the neuroprotective effects of LPS, we exposed MyD88-/- and TRIF-/- mice to LPS preconditioning (n = 4-10 mice/treatment). We found that MyD88-/- mice preconditioned with LPS had significantly reduced infarct sizes in response to MCAO compared to saline controls (Figure 6), indicating that LPS preconditioning is able to induce neuroprotection in mice lacking MyD88. In contrast, TRIF-/- mice preconditioned with LPS or saline had comparable infarct sizes (Figure 6), indicating that LPS preconditioning is not able to induce neuroprotection in mice lacking TRIF. Importantly, the TRIF adaptor is responsible for activation of IRF3; thus, our finding that TRIF is required for LPS preconditioning provides further support for a protective role of IRF3 activity in neuroprotection.\nLPS preconditioning requires TLR signaling through TRIF to promote neuroprotection. WT, MyD88-/-, and TRIF-/- mice were preconditioned with LPS (0.8 mg/kg) 3 days prior to 40 min MCAO. MyD88-/- mice were protected by LPS preconditioning, resulting in smaller infarct sizes. TRIF-/- mice did not have reduced infarct sizes, demonstrating that TRIF deficient mice are not protected by LPS preconditioning. Thus, TRIF is required for LPS preconditioning induced neuroprotection. Student's t-test, LPS vs. saline, *p < 0.05, n = 4-10 per treatment.", "We used gene microarray analysis to elucidate the pattern of inflammatory or anti-inflammatory/type I IFN gene expression in the brain following stroke. In the setting of stroke, LPS preconditioned animals exhibited regulation of a number of genes typically found downstream of TLR signaling. The inflammatory profile reveals that the gene regulation is similar at each timepoint following stroke in LPS or saline preconditioned animals (Figure 1, Rt.). There is no evidence of inflammatory gene expression present immediately prior to stroke (Figure 1, Rt. 0 hr). At 3 hr post MCAO, several inflammatory genes are upregulated including IL6, IL1β, Ptgs2/COX2, and CCL2/MCP-1 (Figure 1, Rt.) and this upregulation is sustained at the 24 hr timepoint following MCAO (Figure 1, Rt.). TNFα, which is commonly shown to be upregulated following MCAO [25,26], only shows marginal levels of upregulation in LPS or saline preconditioned mice (Figure 1, Rt.). To confirm the microarray results, a subset of selected inflammatory genes including IL6, IL1β, COX2, and TNFα, were analyzed using qtPCR. Each of these genes was upregulated following MCAO in LPS and saline preconditioned mice, but there were no significant differences based on treatment at 3 hr (data not shown) and 24 hr (Figure 2) following MCAO.\nEnhanced anti-inflammatory/type I IFN gene expression but comparable pro-inflammatory gene expression in LPS-preconditioned mice post MCAO. Gene regulation 24 hr post MCAO measured by qtPCR reveals that anti-inflammatory/type I IFN-associated genes TGFβ, RANTES, and IFIT1 are significantly upregulated in LPS preconditioned mice compared to saline. Pro-inflammatory genes IL6, IL1β, COX2, and TNFα show similar regulation in LPS and saline preconditioned mice. These results confirm the gene microarray data. Samples from mice receiving a 45 (LPS: 0.2 mg/kg) or 60 min (LPS: 0.8 mg/kg) MCAO were combined due to comparable gene regulation (see methods). ND = not detected. Student's t-test, LPS vs.
saline 24 hr post MCAO, **p < 0.01, n ≥ 4 per treatment.", "Although pro-inflammatory gene expression was not differentially modulated in preconditioned animals, microarray results revealed that the majority of the anti-inflammatory/type I IFN genes, such as TGFβ, IL1 receptor antagonist (IL1rn), RANTES, and IRF7, were upregulated following stroke in the brains of LPS versus saline preconditioned mice (Figure 1, Lt.). IL10 gene expression was not detected at any timepoint (Figure 1, Lt.). TGFβ, IL10, RANTES, and IFIT1 were selected for qtPCR analysis. TGFβ, RANTES, and IFIT1 were significantly upregulated in the LPS-preconditioned brain compared to saline 24 hr following stroke (Figure 2). RANTES was also significantly upregulated at 3 hr following stroke in LPS-preconditioned mice compared to saline (data not shown). IL10 expression remained undetectable by qtPCR analysis (Figure 2), suggesting that IL10 mRNA is not present at these timepoints in the brain following stroke. These qtPCR results confirm the gene expression profile observed on the microarray. Taken together, these data indicate an enhanced anti-inflammatory/type I IFN gene expression profile in the brain of LPS-preconditioned animals following MCAO while the inflammatory gene expression is unaffected.", "NFκB activity is associated with damage and inflammation in the brain that occurs in response to stroke. We used EMSAs to evaluate the activity of the NFκB subunit p65 in the brain following stroke. The results indicated that LPS and saline preconditioned mice have comparable NFκB activity at 3 hr post MCAO (Figure 3A). However, at 24 hr post MCAO, LPS-preconditioned animals have significantly suppressed NFκB activity compared to saline preconditioned mice (Figure 3A).\nNFκB is suppressed 24 hr post MCAO in LPS-preconditioned mice. (A) Nuclear protein obtained from ipsilateral cortices was used to measure p65 activity by EMSA analysis. EMSA gel of pooled samples (n = 4) following 45 min MCAO for saline and LPS preconditioned (0.8 mg/kg) mice (Lt.). Quantification of band intensity of individual mice following MCAO (Rt.). NFκB is significantly decreased in LPS-preconditioned mice 24 hr post MCAO compared to saline. Supershift assay confirmed specificity for p65 oligos (data not shown). (B) Ship1 and Tollip mRNA are significantly upregulated 24 hr post 60 minute MCAO in LPS-preconditioned (0.8 mg/kg) mice compared to saline, n ≥ 4 per treatment/timepoint. (C) Western blot for Ship1 and Tollip and relative band quantification showing significant upregulation of Ship1 and Tollip protein 24 hr post 45 min MCAO in LPS-preconditioned mice (0.8 mg/kg), n ≥ 3 per treatment/timepoint. (D) Western blot and relative band quantification for p105 at 24 hr post 45 minute MCAO showing significant upregulation in LPS-preconditioned (0.8 mg/kg) mice, n ≥ 3 per treatment/timepoint. (A) Two-Way ANOVA, Bonferroni Post Hoc, *p < 0.05. (B-D) Student's t-test, LPS vs. saline, **p < 0.01.\nShip1 and Tollip are cytosolic molecules that inhibit TLR signaling, which leads to the suppression of NFκB activity. We found that Ship1 and Tollip mRNA are upregulated in the brain 72 hr post injection versus saline controls (2.06 ± 0.27 and 2.31 ± 0.35, respectively) but not at 3 hr post stroke (1.09 ± 0.10 and 1.05 ± 0.09, respectively). However, by 24 hr post MCAO, Ship1 and Tollip mRNA are significantly enhanced in the brain of LPS-preconditioned mice compared to saline controls (2.62 ± 0.84 and 4.01 ± 1.06, respectively, Figure 3B). 
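The fold-change values reported in these results (mean ± SEM relative to saline controls) can be derived from β-Actin-normalized band intensities or relative expression values; one common way to compute them is sketched below, with invented measurements used purely for illustration:

    import numpy as np

    def fold_change_vs_saline(lps_values, saline_values):
        """Express each LPS-preconditioned measurement (e.g., an ImageJ band
        intensity already normalized to beta-Actin) relative to the mean of the
        saline group, and report mean fold change +/- SEM."""
        lps = np.asarray(lps_values, dtype=float)
        saline = np.asarray(saline_values, dtype=float)
        fold = lps / saline.mean()
        sem = fold.std(ddof=1) / np.sqrt(fold.size)
        return fold.mean(), sem

    # Hypothetical normalized densitometry values for one protein, 24 hr post MCAO.
    print(fold_change_vs_saline([8.1, 9.6, 8.8], [1.0, 1.1, 0.9]))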
Ship1 protein is not upregulated at 72 hr post injection (Fold change vs. saline: 1.01 ± 0.32), but becomes significantly enhanced in LPS-preconditioned mice at 3 hr (Fold change vs saline: 1.83 ± 0.13) and at 24 hr (Fold change vs. saline: 8.81 ± 1.54, Figure 3C) post MCAO. Tollip protein is not affected by LPS preconditioning at 72 hr post injection or 3 hr post MCAO (Fold change vs. saline: 1.42 ± 0.10 and 0.83 ± 0.10, respectively), but it is significantly enhanced in LPS-preconditioned mice compared to saline controls at 24 hr post MCAO (Fold change vs. saline: 2.42 ± 0.20, Figure 3C). Additionally, the p50 precursor protein p105, which inhibits NFκB activity by acting like an IκB molecule by sequestering NFκB in the cytosol [27,28], was significantly upregulated 24 hr post stroke in LPS-preconditioned mice compared to saline (Figure 3D). Thus, despite the upregulation of inflammatory genes, the activity of NFκB is suppressed in the late-phase of the neuroprotective response of LPS-preconditioned mice.", "IRF3 activation downstream of TLR4 is associated with anti-inflammatory/type I IFN responses. Using an IRF3 activity ELISA, we determined that IRF3 activity is comparable immediately prior to stroke (data not shown) and subsequently enhanced in the brains of LPS-preconditioned mice following MCAO (Figure 4). The trend for increased IRF3 activity is present at 3 hr post MCAO and is significantly increased at 24 hr in LPS-preconditioned mice (Figure 4). Saline treated animals showed no evidence of increased IRF3 activity following stroke (Figure 4). This indicates that LPS preconditioning alters the response to ischemic injury by activating IRF3--a finding that is consistent with the enhanced anti-inflammatory/type I IFN gene expression.\nIRF3 activity is enhanced following MCAO in LPS-preconditioned mice. Nuclear protein obtained from ipsilateral cortex post 60 min MCAO analyzed using an IRF3 activity ELISA (Active Motif, Inc.) revealed a significant increase in IRF3 activity in LPS-preconditioned (0.8 mg/kg) mice. Two-way ANOVA, Bonferroni Post Hoc, LPS vs. saline, *p < 0.05, n ≥ 4 per treatment.", "Evidence indicates that stroke alters the cytokine profile in the plasma of circulating blood [5,29]. To determine whether LPS preconditioning changes the balance of pro- and anti-inflammatory cytokines and chemokines in the plasma we examined the levels of seven molecules using ELISAs. The results indicate that the level of pro-inflammatory cytokines, such as IL6, IL1β, and MCP-1, are increased in both LPS and saline preconditioned mice (Figure 5). The pro-inflammatory cytokines MIP-1α and IL1α were not detected in the serum (data not shown). The anti-inflammatory cytokine IL10 was significantly increased only in the plasma of LPS-preconditioned mice compared to saline preconditioned mice following stroke (Figure 5). RANTES, which is a chemokine associated with IRF3 and IRF7 activity [30], was present in the blood of LPS-preconditioned mice at significantly greater levels than saline preconditioned mice (Figure 5). IFNβ was not detectable in the blood of LPS or saline preconditioned animals following stroke (data not shown). Overall, this suggests that the pro-inflammatory and anti-inflammatory/type I IFN-associated response in the blood parallels the response in the brain following stroke.\nBlood cytokine/chemokine levels show alterations in gene expression patterns comparable to the brain. 
Plasma collected from saline or LPS-preconditioned (0.8 mg/kg) mice at the time of or following 60 min MCAO was examined using a multikine ELISA (Quansys). Results indicated that pro-inflammatory cytokines IL1β, IL6 and MCP-1 are similar in saline and LPS-preconditioned mice. In contrast, LPS-preconditioned mice have significantly enhanced levels of the anti-inflammatory/type I IFN-associated cytokine and chemokine IL10 and RANTES compared to saline following MCAO. Two-way ANOVA, LPS vs. saline, *p < 0.05, n ≥ 3 per treatment.", "Evidence presented here and previously suggests that signaling following stroke is redirected towards IRF3 [13,14]. TLR4 signaling, which activates IRF3, is initiated by the adaptor molecule TRIF, while TLR4 signaling that activates NFκB is initiated by the adaptor molecule MyD88. The individual roles of these adaptor molecules in neuroprotection induced by LPS preconditioning are unknown. To test whether either of these key molecular adaptors were important in mediating the neuroprotective effects of LPS, we exposed MyD88-/- and TRIF-/- mice to LPS preconditioning (n = 4-10 mice/treatment). We found that MyD88-/- mice preconditioned with LPS had significantly reduced infarct sizes in response to MCAO compared to saline controls (Figure 6), indicating that LPS preconditioning is able to induce neuroprotection in mice lacking MyD88. In contrast, TRIF-/- mice preconditioned with LPS or saline had comparable infarct sizes (Figure 6), indicating that LPS preconditioning is not able to induce neuroprotection in mice lacking TRIF. Importantly, the TRIF adaptor is responsible for activation of IRF3, thus, our finding that TRIF is required for LPS preconditioning provides further support for a protective role of IRF3 activity in neuroprotection.\nLPS preconditioning requires TLR signaling through TRIF to promote neuroprotection. WT, MyD88-/-, and TRIF-/- mice were preconditioned with LPS (0.8 mg/kg) 3 days prior to 40 min MCAO. MyD88-/- mice were protected by LPS preconditioning resulting in smaller infarct sizes. TRIF-/- mice did not have reduced infarct sizes, demonstrating that TRIF deficient mice are not protected by LPS preconditioning. Thus, TRIF is required for LPS preconditioning induced neuroprotection. Student's t-test, LPS vs. saline, *p < 0.05, n = 4-10 per treatment.", "Here we sought to describe the LPS-induced reprogrammed response to stroke and to determine the important signaling events involved in neuroprotection against ischemic injury. Our results demonstrated that NFκB activity was suppressed and that the cytosolic inhibitors of NFκB, Ship1, Tollip, and p105, were present 24 hr post MCAO although pro-inflammatory gene expression was unaffected (diagrammed in Figure 7). Interestingly, there is evidence that suppression of NFκB can promote protection against cerebral ischemia without influencing pro-inflammatory cytokine production [3,31]. In particular, administration of the NFκB inhibitor Tat-NEMO Binding Domain provided protection against hypoxia-ischemia in neonatal rats without affecting TNFα or IL1β production [3]. Furthermore, TLR4 deficient mice have smaller infarcts in response to MCAO, yet the production of TNFα and IL1β was unaffected [6]. 
This suggests that reduced ischemic injury can be achieved by suppressing NFκB activity without suppressing pro-inflammatory cytokines and that TLR4 signaling and NFκB activation are not the sole source of these pro-inflammatory cytokines in response to ischemic injury, implicating other signaling cascades and transcription factors in the inflammatory response. Thus, consistent with our result, reprogramming the TLR4 response would not alter inflammatory gene expression in the brain.\nSchematic of TLR4 signaling and gene expression following stroke. (Top) TLR4 signaling cascades following stroke. In the absence of LPS preconditioning, stroke leads to NFκB activation without IRF3 activation. LPS preconditioning prior to stroke leads to robust activation of IRF3 and suppressed NFκB activity compared to stroke alone. (Bottom) Gene expression 24 hr post stroke. Stroke alone dramatically upregulates pro-inflammatory genes. LPS preconditioning prior to stroke dramatically upregulates anti-inflammatory/type I IFN genes, many of which are associated with IRF3, while still maintaining a pro-inflammatory response.\nNFκB is known to be induced acutely in response to ischemic injury; however, investigation into the role of NFκB activity has revealed conflicting results [2]. For instance, NFκB is constitutively active in neurons, a requirement for their survival, while the surrounding glial cells have inducible NFκB activity [32]. In response to ischemic challenge, NFκB activity in astrocytes is responsible for detrimental inflammation [33]. This concept of pleiotropic roles also applies to many of the inflammatory genes expressed in the brain in the setting of stroke [34,35]. For example, intracerebroventricular injection of recombinant IL6 significantly decreased the infarct size in rats 24 hr post MCAO [36]. IL1β is a potent inducer of IL1 receptor antagonist (IL1rn), which significantly reduces damage in response to stroke [37] and, notably, is upregulated in our microarray (Figure 1, Lt.). TNFα is considered to play multiple roles in stroke injury, mediating both neuroprotective and injurious effects [34]. Furthermore, in response to viral challenge, the simultaneous presence of inflammatory cytokines, such as TNFα, and type I IFNs can alter their effects and synergize to promote a more protective state [38]. Thus, alterations in the environment in which NFκB is activated and inflammatory genes are present may affect the roles pro-inflammatory mediators play in injury and may even contribute to the protective phenotype.\nIRF3 activity induces the expression of anti-inflammatory and type I IFN-associated genes. Interestingly, mice deficient in IRF3 are not protected against cerebral ischemia by LPS preconditioning [13]. We have further established the importance of IRF3 in neuroprotection by identifying that multiple preconditioning paradigms, including LPS, CpG (a TLR9 agonist), and brief ischemia, induce a common set of IRF-mediated genes in the neuroprotective environment following MCAO [14]. Here we demonstrate that IRF3 activity is upregulated in the brain of LPS-preconditioned mice in response to MCAO and that several IRF3-mediated genes are also upregulated, including RANTES and IFIT1 (diagrammed in Figure 7), which may mitigate the damaging effects of ischemia.\nMany of the upregulated anti-inflammatory/type I IFN genes in the brain following stroke have several identified neuroprotective functions.
TGFβ has been shown to protect neurons from apoptosis, promote angiogenesis, decrease microglial activation, and reduce edema [34,39]. RANTES, which is induced by IRF3 and IRF7 [30], has been shown to protect neurons from cell death in response to HIV-1 glycoprotein gp120 [40]. In the setting of brain ischemia, mice deficient in the RANTES receptor, CCR5, have larger infarcts, suggesting a neuroprotective role for CCR5 activation [41]. Notably, the expression of CCR5 is upregulated in our microarray data (Figure 1, Lt). IFIT1 is commonly associated with IRF3 signaling in response to IFN treatment and viral infection [42]. Little is known about a role for IFIT1 in ischemic injury; however, it is inducible in microglia and neurons and has been shown to affect NFκB and IRF3 activation [42-45]. Additional anti-inflammatory/type I IFN genes shown to be upregulated in our microarray studies have potential roles in neuroprotection including IL-receptor antagonist (IL-rn), which is associated with reduced infarct size in response to stroke [34,46]. A recombinant form of IL-rn is being tested in Phase II clinical trials as an acute stroke therapy [47,48]. Although not detected in our gene microarray studies here, perhaps due to assay sensitivity for IFNβ transcript on the microarray, we have previously published that IFNβ mRNA, a type I IFN known to have neuroprotective properties, is upregulated following stroke in the brain of LPS-preconditioned mice using qtPCR [13]. The protective functions of these genes may be of considerable importance to the neuroprotective phenotype following MCAO induced by LPS preconditioning.\nResearch strongly suggests that cerebral ischemia dramatically alters the protein and gene expression profile in the peripheral blood [5,29,49,50]. Our results demonstrate that the cytokine and chemokine response in the blood paralleled the pattern of gene expression in the brain. Overall, inflammatory cytokine protein levels were similarly induced in LPS and saline preconditioned mice following stroke. However, we have previously published that TNFα is significantly reduced in the plasma of LPS-preconditioned mice following MCAO [51]. The anti-inflammatory and type I IFN-induced cytokines and chemokines measured in the blood were enhanced in LPS-preconditioned mice compared to saline. In particular, IL10 was significantly upregulated in the blood following MCAO of LPS-preconditioned mice. Importantly, in humans, upregulation of IL10 in the blood has been correlated with improved outcome in stroke [52]. While IL10 mRNA was not detectable in the brain, IL10 can be induced by IRF3 activity and therefore is indicative of the same redirected response seen in the brain. IFNβ was not detected in the blood 24 hr post MCAO. This may be due to the kinetics of IFNβ expression. Further investigation into the time course of IFNβ induction in the blood is necessary to fully understand the role of IFNβ in this system. The redirected signaling observed in the blood may stem from the brain's response to injury by leaking proteins into the peripheral circulation; however, this is not considered a major source of plasma cytokines at these early timepoints following stroke [29]. Alternately, because LPS administration occurs by a systemic route, target cells in the periphery may become tolerant to activation by the secondary stimuli resulting from ischemic injury. 
Although our data do not distinguish between these possibilities, it is clear that LPS preconditioning alters the response to injury in the brain and the blood in a manner that promotes a protective phenotype.\nTLR4 signals through the adaptor molecules MyD88 and TRIF. MyD88 signaling culminates in NFκB activation. TRIF signaling can activate both IRF3 and NFκB, although IRF3 activation is often more rapid and robust, while activation of NFκB is a secondary effect that occurs as part of late-phase TLR signaling [53]. The data presented in this paper and in Marsh et al., 2009 [13] suggest a dominant role for IRF3 signaling in LPS-induced neuroprotection, which implicates the TRIF adaptor as a key player in the reprogrammed TLR4 response to stroke. Support for this lies in our finding that mice deficient in TRIF are not protected by LPS preconditioning. In contrast, MyD88 deficient mice preconditioned with LPS are still protected against MCAO. Taken together, these data strongly support a protective role for TRIF-mediated IRF3 activation in the neuroprotective phenotype induced by LPS preconditioning.\nTLRs have the ability to self-regulate in a manner that redirects their signaling. The classic example is endotoxin tolerance, whereby a low dose of the TLR4 ligand LPS reprograms TLR4 signaling in response to a subsequent toxic dose of LPS, leading to a protective phenotype [54]. This reprogrammed response comes in two major forms: (1) suppressed pro-inflammatory signaling and enhanced anti-inflammatory/type I IFN signaling, or (2) enhanced anti-inflammatory/type I IFN signaling in the absence of suppressed pro-inflammatory signaling. Thus, the suppressed NFκB activity, the enhanced IRF3 activity, and the upregulated anti-inflammatory/type I IFN-associated genes seen in the LPS-preconditioned brain following stroke are reminiscent of endotoxin tolerance--a phenomenon that has been best described in macrophages in vitro, but more recently in animals. Many other key features of endotoxin tolerance are seen in the reprogrammed response to stroke produced by LPS preconditioning. For example, Tollip and Ship1 are known to be induced in endotoxin tolerance and lead to suppressed NFκB activity. TGFβ has been shown to play an important role in endotoxin tolerance, whereby TGFβ-mediated induction of SMAD4 is required to promote complete endotoxin tolerance and to induce the NFκB inhibitor Ship1 [55]. Interestingly, in our system the upregulation of TGFβ corresponds to Ship1 upregulation 24 hr post MCAO in LPS-preconditioned mice compared to saline. Furthermore, cells deficient in TRIF or IRF3 are unable to develop tolerance to endotoxin [56]. This is similar to TRIF deficient or IRF3 deficient mice not being protected by LPS preconditioning against cerebral ischemia. Taken together, this suggests that the cellular phenomenon of endotoxin tolerance is potentially the same response observed in LPS preconditioning, wherein LPS exposure leads to a reprogrammed TLR signaling response in the brain following stroke to produce protection.", "The findings reported here provide an important characterization of the LPS-induced neuroprotective response following stroke. We show that LPS preconditioning induces a reprogrammed response to stroke, whereby NFκB activity is suppressed, IRF3 activity is enhanced, and anti-inflammatory/type I IFN genes are upregulated (diagrammed in Figure 7).
Interestingly, the suppression of pro-inflammatory genes is not a necessary part of the neuroprotective response induced by LPS preconditioning. Further evaluation into the TLR4 signaling cascades revealed a seminal role for the TRIF cascade in producing the neuroprotection initiated by LPS preconditioning. As TRIF signaling culminates in IRF3 activation, this finding provides further evidence for the importance of IRF3 in the neuroprotective response to stroke." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Toll-like receptors", "stroke", "NFκB", "inflammation", "preconditioning", "neuroprotection" ]
Background: Stroke is one of the leading causes of death and the leading cause of morbidity in the United States [1]. The inflammatory response to stroke substantially exacerbates ischemic damage. The acute activation of the NFκB transcription factor has been linked to the inflammatory response to stroke [2] and suppression of NFκB activity following stroke has been found to reduce damage [3]. NFκB activation can lead to the dramatic upregulation of inflammatory molecules and cytokines including TNFα, IL6, IL1β, and COX2 [2]. The source of these inflammatory molecules in the acute response to stroke appears to stem from the cells of the central nervous system (CNS), including neurons and glial cells [2]. The cells in the CNS play a particularly dominant role early in the response to ischemia because infiltrating leukocytes do not appear in substantial numbers in the brain until 24 hr following injury [4]. Stroke also induces an acute inflammatory response in the circulating blood. Inflammatory cytokine and chemokine levels, including IL6, IL1β, MCP-1 and TNFα are elevated in the circulation following stroke [5]. This suggests there is an intimate relationship between responses in the brain and blood following stroke--responses that result in increased inflammation. Toll-like receptors (TLRs), traditionally considered innate immune receptors, signal through the adaptor proteins MyD88 and TRIF to activate NFκB and interferon regulatory factors (IRFs). It has been shown recently that TLRs become activated in response to endogenous ligands, known as damage associated molecular patterns (DAMPs), released during injury. Interestingly, animals deficient in TLR2 or TLR4 have significantly reduced infarct sizes in several models of stroke [6-11]. This suggests that TLR2 and TLR4 activation in response to ischemic injury exacerbates damage. In addition, a recent investigation in humans showed that the inflammatory responses to stroke in the blood were linked to increased TLR2 and TLR4 expression on hematopoetic cells and associated with worse outcome in stroke [12]. The detrimental effect of TLR signaling is associated with the pathways that lead to NFκB activation and pro-inflammatory responses. In contrast, TLR signaling pathways that activate IRFs can induce anti-inflammatory mediators and type I IFNs that have been associated with neuroprotection [13,14]. Thus, in TLR signaling there is a fine balance between pathways leading to injury or protection. TLR ligands have been a major source of interest as preconditioning agents for prophylactic therapy against ischemic injury. Such therapies would target a population of patients that are at risk of ischemia in the setting of surgery [15-18]. Preconditioning with low doses of ligands for TLR2, TLR4, and TLR9 all successfully reduce infarct size in experimental models of stroke [19-21], including a recent study showing that a TLR9 ligand is neuroprotective in a nonhuman primate model of stroke [22]. Emerging evidence suggests that TLR-induced neuroprotection occurs by reprogramming the genomic response to the DAMPs, which are produced in response to ischemic injury. In this reprogrammed state, the resultant pathway activation of TLR4 signaling preferentially leads to IRF-mediated gene expression [13,14]. However, whether TLR preconditioning affects NFκB activity and pro-inflammatory signaling is unknown. As yet, a complete analysis of the characteristic TLR signaling responses to stroke following preconditioning has not been reported. 
The objective of this study is to utilize LPS preconditioning followed by transient middle cerebral artery occlusion (MCAO) to elucidate the reprogrammed TLR response to stroke and to determine the major pathways involved in producing the neuroprotective phenotype. Here we show that preconditioning against ischemia using LPS leads to suppressed NFκB activity, although pro-inflammatory gene expression does not appear to be attenuated. We also demonstrate that LPS-preconditioned mice have enhanced IRF3 activity and anti-inflammatory/type I IFN gene expression in the ischemic brain. This expression pattern was recapitulated in the blood, where plasma levels of pro-inflammatory cytokine proteins were comparable in LPS-preconditioned and control mice while IRF-associated proteins were enhanced in LPS-preconditioned mice. To our knowledge, we provide the first evidence that protection due to LPS preconditioning stems from TRIF signaling, the cascade associated with IRF3 activation, and is independent of MyD88 signaling. These molecular features suggest that, following stroke, signaling is directed away from NFκB activity and towards a dominant TRIF-IRF3 response. Understanding the endogenous signaling events that promote protection against ischemic injury is integral to the identification and development of novel stroke therapeutics. In particular, the evidence presented here further highlights a key role for IRF3 activity in the protective response to stroke. Methods: Animals C57Bl/6J mice (male, 8-12 weeks) were purchased from Jackson Laboratories (West Sacramento, CA). C57Bl/6J-Ticam1LPS2/J (TRIF-/-) mice were also obtained from Jackson Laboratories. MyD88-/- mice were a kind gift of Dr. Shizuo Akira (Osaka University, Osaka, Japan) and were bred in our facility. All mice were housed in an American Association for Laboratory Animal Care-approved facility. Procedures were conducted according to Oregon Health & Science University, Institutional Animal Care and Use Committee, and National Institutes of Health guidelines. LPS treatment Mice were preconditioned with LPS (0.2 or 0.8 mg/kg, Escherichia coli serotype 0111:B4; Sigma) or saline by one subcutaneous injection, unless otherwise indicated, 72 hr prior to MCAO. Each new lot of LPS was titrated for the optimal dose that confers neuroprotection. No differences were observed in the genomic responses to LPS for each dose used and route of administration (subcutaneous or intraperitoneal, data not shown).
Middle Cerebral Artery Occlusion (MCAO) Mice were anesthetized with isoflurane (1.5-2%) and subjected to MCAO using the monofilament suture method described previously [23]. Briefly, a silicone-coated 7-0 monofilament nylon surgical suture was threaded through the external carotid artery to the internal carotid artery to block the middle cerebral artery, and maintained intraluminally for 40 to 60 min. The suture was then removed to restore blood flow. The duration of occlusion was optimized for the specific surgeon who performed the MCAO to yield comparable infarct sizes in the saline-treated control animals (~35-40%). The selected duration of MCAO was held constant within experiments. Cerebral blood flow (CBF) was monitored throughout surgery by laser Doppler flowmetry. Any mouse whose CBF during occlusion did not fall below 25% of baseline was excluded from the study. The reduction of CBF in response to MCAO was comparable in LPS and saline preconditioned mice. Body temperature was monitored and maintained at 37°C with a thermostat-controlled heating pad. Infarct measurements were made using triphenyltetrazolium chloride (TTC) staining of 1 mm coronal brain sections.
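The section above specifies the inclusion rule (CBF must fall below 25% of baseline during occlusion) and the TTC-based infarct measurement, but not the exact formula used for infarct size. The sketch below illustrates one common way these two steps could be scripted; the area values, function names, and the hemisphere-fraction denominator are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (not the authors' pipeline): estimating percent infarct from
# per-section areas traced on TTC-stained 1 mm coronal sections, plus the
# laser Doppler exclusion rule described above. All values are hypothetical.

SECTION_THICKNESS_MM = 1.0  # 1 mm coronal sections


def percent_infarct(infarct_areas_mm2, hemisphere_areas_mm2):
    """Infarct volume expressed as a percentage of hemisphere volume.

    Each list holds one traced area per section; volume is approximated as
    area x section thickness, summed over sections.
    """
    infarct_vol = sum(a * SECTION_THICKNESS_MM for a in infarct_areas_mm2)
    hemisphere_vol = sum(a * SECTION_THICKNESS_MM for a in hemisphere_areas_mm2)
    return 100.0 * infarct_vol / hemisphere_vol


def meets_cbf_criterion(baseline_flux, occlusion_flux):
    """True if CBF during occlusion fell below 25% of the pre-occlusion baseline."""
    return occlusion_flux < 0.25 * baseline_flux


# Hypothetical single mouse
infarct_areas = [4.2, 6.8, 7.5, 5.1]          # mm^2 per section
hemisphere_areas = [18.0, 21.5, 22.0, 19.0]   # mm^2 per section
if meets_cbf_criterion(baseline_flux=100.0, occlusion_flux=18.0):
    print(f"Infarct: {percent_infarct(infarct_areas, hemisphere_areas):.1f}% of hemisphere")
else:
    print("Excluded: CBF did not drop below 25% of baseline")
```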
Tissue collection Under deep isoflurane anesthesia, approximately 0.5-1.0 ml of blood was collected via cardiac puncture in a heparinized syringe. Subsequently, the mice were perfused with heparinized (2 U/ml) saline followed by rapid removal of the brain. The olfactory bulbs were removed and the first 4 mm of tissue was collected beginning at the rostral end. The striatum was dissected and removed, and the remaining cortex was utilized for RNA isolation or protein extraction. The collected blood was centrifuged at 5000 × g for 20 min to obtain plasma that was stored at -80°C. Genomic profiling of TLR associated mediators For the genes displayed in Figure 1, transcript expression levels were determined as previously described from our microarray experiments examining the brain cortical response to stroke and 3 different preconditioning stimuli [14]. In brief, total RNA was isolated from the ipsilateral cortex (n = 4 mice/treatment/timepoint) using the Qiagen RNeasy Lipid Mini Kit (Qiagen). Microarray assays were performed in the Affymetrix Microarray Core of the Oregon Health & Science University Gene Microarray Shared Resource. RNA samples were labeled using the NuGEN Ovation Biotin RNA Amplification and Labeling System_V1. Hybridization was performed as described in the Affymetrix technical manual (Affymetrix) with modification as recommended for the Ovation labeling protocol (NuGEN Technologies). Labeled cRNA target was quality-checked based on yield and size distribution. Quality-tested samples were hybridized to the MOE430 2.0 array. The array image was processed with Affymetrix GeneChip Operating Software (GCOS). Affymetrix CEL files were then uploaded into GeneSifter (http://www.genesifter.net) and normalized using RMA. Microarray analysis of anti-inflammatory/type I IFN and pro-inflammatory gene expression. Microarray analysis revealed enhanced anti-inflammatory/type I IFN and comparable pro-inflammatory gene expression profiles in the brain of LPS-preconditioned (0.2 mg/kg, intraperitoneal injection) mice following 45 min MCAO. Heatmap representing the level of gene expression immediately prior to (0 hr) MCAO and 3 and 24 hr post MCAO; n = 4/treatment/timepoint. Lt. Select anti-inflammatory/type I IFN genes. Rt. Select pro-inflammatory genes. Color scale from green to red represents relatively decreased or increased gene expression levels, respectively.
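The paragraph above describes RMA-normalized array data displayed as a green-to-red heatmap (Figure 1). As a minimal, hypothetical sketch (not the GeneSifter workflow the authors used), a comparable display could be generated from an exported expression matrix along the following lines; the file name "expression.csv", the example gene symbols, and the relative-to-gene-mean scaling are assumptions.

```python
# Minimal sketch: drawing a Figure 1-style green-to-red heatmap from an
# RMA-normalized expression matrix. File name, genes, and scaling are
# illustrative assumptions, not the authors' pipeline.
import pandas as pd
import matplotlib.pyplot as plt

# Rows = genes, columns = samples (e.g. "saline_0hr", "LPS_3hr", ...)
expr = pd.read_csv("expression.csv", index_col=0)

# Center each gene on its mean so green = relatively decreased and
# red = relatively increased, matching the legend's color scale.
rel = expr.sub(expr.mean(axis=1), axis=0)

genes = ["Tgfb1", "Ccl5", "Irf7", "Il6", "Il1b", "Ptgs2"]  # example rows
data = rel.loc[genes]

fig, ax = plt.subplots(figsize=(6, 4))
im = ax.imshow(data.to_numpy(), cmap="RdYlGn_r", aspect="auto")
ax.set_xticks(range(data.shape[1]))
ax.set_xticklabels(data.columns, rotation=90)
ax.set_yticks(range(len(genes)))
ax.set_yticklabels(genes)
fig.colorbar(im, ax=ax, label="expression relative to gene mean")
fig.tight_layout()
fig.savefig("heatmap.png", dpi=300)
```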
RNA isolation, Reverse Transcription, and qtPCR RNA was isolated from cortical tissue 72 hr post injection or from ipsilateral cortical tissue at 3 or 24 hr following MCAO (n ≥ 4 mice/treatment/timepoint) using a Lipid Mini RNA isolation kit (Qiagen). Reverse transcription was performed on 2 μg of RNA using Omniscript (Qiagen). Quantitative PCR was performed using Taqman Gene Expression Assays (Applied Biosystems) for each gene of interest on an ABI Prism 7700. Results were normalized to β-Actin expression and analyzed relative to their saline-preconditioned counterparts. The relative quantification of the gene of interest was determined using the comparative CT method (2^-ΔΔCT).
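To make the comparative CT calculation concrete, the sketch below normalizes a gene of interest to β-Actin within each sample and then expresses LPS-preconditioned samples relative to the mean ΔCT of the saline group. The Ct values are hypothetical; this illustrates the 2^-ΔΔCT arithmetic, not the authors' analysis script.

```python
# Minimal sketch of the comparative CT (2^-ΔΔCT) method described above.
# Ct values are hypothetical; tuples are (gene of interest, β-Actin) per mouse.
from statistics import mean

def delta_ct(ct_gene, ct_actin):
    return ct_gene - ct_actin

def fold_change(ct_gene, ct_actin, calibrator_delta_ct):
    ddct = delta_ct(ct_gene, ct_actin) - calibrator_delta_ct
    return 2 ** (-ddct)

saline = [(26.1, 17.0), (25.8, 16.8), (26.4, 17.2)]
lps = [(24.2, 17.1), (24.6, 16.9), (23.9, 17.0)]

calibrator = mean(delta_ct(g, a) for g, a in saline)  # saline group ΔCT
folds = [fold_change(g, a, calibrator) for g, a in lps]
print(f"Mean fold change, LPS vs. saline: {mean(folds):.2f}")
```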
Western Blot Protein extraction was performed as described previously [24] with some modifications. Briefly, tissue samples (n ≥ 4 mice/treatment/timepoint) were dissected from the ipsilateral cortex and lysed in a buffer containing a protease inhibitor cocktail (Roche). Protein concentrations were determined using the BCA method (Pierce-Endogen). Protein samples (50 μg) were denatured in a gel-loading buffer (Bio-Rad Laboratories) at 100°C for 5 min and then loaded onto 12% Bis-Tris polyacrylamide gels (Bio-Rad Laboratories). Following electrophoresis, proteins were transferred to polyvinylidene difluoride membranes (Bio-Rad Laboratories) and incubated with primary antibodies for Ship-1 (Santa Cruz, sc8425), Tollip (AbCam, Ab37155), p105 (Santa Cruz, sc7178), or β-Actin (Santa Cruz, sc1616R) at 4°C overnight. Membranes were then incubated with horseradish peroxidase-conjugated anti-rabbit, anti-goat, or anti-mouse antibody (Santa Cruz Biotechnology), detected by chemiluminescence (NEN Life Science Products), and exposed to Kodak film (Biomax). Images were captured using an Epson scanner, and the densitometry of the gel bands, including the β-Actin loading control, was analyzed using ImageJ (NIH).
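The densitometry values reported in the Results are expressed as fold change versus saline after normalization to the β-Actin loading control. A minimal sketch of that arithmetic is shown below with hypothetical ImageJ band intensities; it is not the authors' script, and the SEM calculation is an assumption about how the reported mean ± SEM values are derived.

```python
# Minimal sketch: normalize each band to its β-Actin loading control, then
# express LPS samples as fold change versus the saline group mean.
# Intensities are hypothetical ImageJ integrated-density values.
from math import sqrt
from statistics import mean, stdev

saline_bands = [(1200, 4100), (1050, 3900), (1300, 4300)]  # (target, β-Actin)
lps_bands = [(3600, 4000), (4100, 4200), (3900, 4050)]

def normalize(bands):
    return [target / actin for target, actin in bands]

saline_norm = normalize(saline_bands)
lps_norm = normalize(lps_bands)

folds = [x / mean(saline_norm) for x in lps_norm]
sem = stdev(folds) / sqrt(len(folds))
print(f"Fold change vs. saline: {mean(folds):.2f} ± {sem:.2f} (mean ± SEM)")
```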
Electrophoretic Mobility Shift Assay (EMSA) Nuclear protein extracts (n = 4 mice/treatment/timepoint) were prepared from tissue dissected from the ipsilateral cortex. Homogenized tissue was incubated in Buffer A (10 mM Hepes-KOH pH 7.9, 60 mM KCl, 1 mM EDTA, 1 mM DTT, 1 mM PMSF) for 5 min on ice and centrifuged at 3000 rpm for 5 min at 4°C. The pellets were washed in Buffer B (10 mM Hepes-KOH pH 7.9, 60 mM KCl, 1 mM EDTA, 0.5% NP-40, 1 mM DTT, 1 mM PMSF), resuspended in Buffer C (250 mM Tris pH 7.8, 60 mM KCl, 1 mM DTT, 1 mM PMSF), and freeze-thawed 3 times in liquid nitrogen. All buffers contained a protease inhibitor cocktail (Roche). After centrifuging at 10,000 rpm for 10 min at 4°C, the supernatant was collected and stored as nuclear extract at -80°C. Nuclear protein concentrations were determined using the BCA method (Pierce-Endogen). EMSAs were performed using the Promega Gel Shift Assay System according to the manufacturer's instructions. Briefly, 15 μg of nuclear protein was incubated with 32P-labeled NFκB consensus oligonucleotide (Promega), either with or without unlabeled competitor oligonucleotide, unlabeled noncompetitor oligonucleotide, or anti-p65 antibody (Santa Cruz). Samples were electrophoresed on a 4% acrylamide gel, dried, and exposed to a phosphorimager overnight. The densitometry of the gel bands was analyzed using scanning integrated optical density software (ImageJ). IRF3 Activity Assay Nuclear protein (n ≥ 4 mice/treatment/timepoint) was isolated from fresh cortical tissue at 72 hr post injection and from ipsilateral cortices at 3 or 24 hr following MCAO using a Nuclear Extraction Kit (Active Motif, Inc.). IRF3 activity was measured using 10 μg of nuclear protein in an IRF3 activity ELISA (Active Motif, Inc.), which utilizes colorimetric detection of active IRF3 bound to immobilized oligonucleotides. Cytokine Analysis Cytokine/chemokine analysis for IL1β, IL1α, MIP-1α, MCP-1, RANTES, and IL10 was performed on plasma samples (n ≥ 3 mice/treatment/timepoint) using a multiplex ELISA (Quansys). An IFNβ ELISA (PBL Interferon Source) was used to measure plasma levels of IFNβ. Statistical Analysis Data are represented as mean ± SEM. The n for each experiment is greater than or equal to 3, as specified in each figure. Statistical analysis was performed using GraphPad Prism5 software. Two-way ANOVA with Bonferroni post hoc test and Student's t-test were utilized as specified. Significance was determined as p < 0.05.
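The analyses above were run in GraphPad Prism. For readers who prefer a scripted equivalent, the sketch below shows a two-way ANOVA (treatment × timepoint) followed by Bonferroni-corrected LPS-versus-saline comparisons at each timepoint, using the p < 0.05 threshold. The data frame values are hypothetical, and the per-timepoint t-tests only approximate Prism's Bonferroni post-tests, which use the pooled ANOVA error term.

```python
# Minimal sketch (hypothetical data): two-way ANOVA with Bonferroni-corrected
# pairwise comparisons, approximating the GraphPad Prism analysis described above.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "treatment": ["saline"] * 6 + ["LPS"] * 6,
    "timepoint": ["3hr", "3hr", "3hr", "24hr", "24hr", "24hr"] * 2,
    "value": [1.0, 1.2, 0.9, 1.1, 1.3, 1.0,
              1.1, 1.0, 1.2, 2.4, 2.8, 2.6],
})

# Two-way ANOVA: main effects of treatment and timepoint plus their interaction
model = smf.ols("value ~ C(treatment) * C(timepoint)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-corrected LPS vs. saline comparison at each timepoint
timepoints = df["timepoint"].unique()
for tp in timepoints:
    lps = df.query("timepoint == @tp and treatment == 'LPS'")["value"]
    sal = df.query("timepoint == @tp and treatment == 'saline'")["value"]
    p = stats.ttest_ind(lps, sal).pvalue
    p_adj = min(1.0, p * len(timepoints))
    print(f"{tp}: adjusted p = {p_adj:.4f} ({'significant' if p_adj < 0.05 else 'ns'})")
```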
Results: LPS preconditioning does not affect inflammatory gene expression in the brain following stroke We used gene microarray analysis to elucidate the pattern of inflammatory or anti-inflammatory/type I IFN gene expression in the brain following stroke. In the setting of stroke, LPS preconditioned animals exhibited regulation of a number of genes typically found downstream of TLR signaling. The inflammatory profile reveals that gene regulation is similar at each timepoint following stroke in LPS or saline preconditioned animals (Figure 1, Rt.). There is no evidence of inflammatory gene expression immediately prior to stroke (Figure 1, Rt. 0 hr). At 3 hr post MCAO, several inflammatory genes are upregulated, including IL6, IL1β, Ptgs2/COX2, and CCL2/MCP-1 (Figure 1, Rt.), and this upregulation is sustained at the 24 hr timepoint following MCAO (Figure 1, Rt.). TNFα, which is commonly shown to be upregulated following MCAO [25,26], shows only marginal upregulation in LPS or saline preconditioned mice (Figure 1, Rt.). To confirm the microarray results, a subset of selected inflammatory genes, including IL6, IL1β, COX2, and TNFα, was analyzed using qtPCR. Each of these genes was upregulated following MCAO in LPS and saline preconditioned mice, but there were no significant differences based on treatment at 3 hr (data not shown) or 24 hr (Figure 2) following MCAO.
Enhanced anti-inflammatory/type I IFN gene expression but comparable pro-inflammatory gene expression in LPS-preconditioned mice post MCAO. Gene regulation 24 hr post MCAO measured by qtPCR reveals that anti-inflammatory/type I IFN-associated genes TGFβ, RANTES, and IFIT1 are significantly upregulated in LPS preconditioned mice compared to saline. Pro-inflammatory genes IL6, IL1β, COX2, and TNFα show similar regulation in LPS and saline preconditioned mice. These results confirm the gene microarray data. Samples from mice receiving a 45 min (LPS: 0.2 mg/kg) or 60 min (LPS: 0.8 mg/kg) MCAO were combined due to comparable gene regulation (see methods). ND = not detected. Student's t-test, LPS vs. saline 24 hr post MCAO, **p < 0.01, n ≥ 4 per treatment. LPS preconditioning upregulates anti-inflammatory/type I IFN gene expression in the brain following MCAO Although pro-inflammatory gene expression was not differentially modulated in preconditioned animals, microarray results revealed that the majority of the anti-inflammatory/type I IFN genes, such as TGFβ, IL1 receptor antagonist (IL1rn), RANTES, and IRF7, were upregulated following stroke in the brains of LPS versus saline preconditioned mice (Figure 1, Lt.). IL10 gene expression was not detected at any timepoint (Figure 1, Lt.). TGFβ, IL10, RANTES, and IFIT1 were selected for qtPCR analysis. TGFβ, RANTES, and IFIT1 were significantly upregulated in the LPS-preconditioned brain compared to saline 24 hr following stroke (Figure 2). RANTES was also significantly upregulated at 3 hr following stroke in LPS-preconditioned mice compared to saline (data not shown). IL10 expression remained undetectable by qtPCR analysis (Figure 2), suggesting that IL10 mRNA is not present at these timepoints in the brain following stroke. These qtPCR results confirm the gene expression profile observed on the microarray. Taken together, these data indicate an enhanced anti-inflammatory/type I IFN gene expression profile in the brain of LPS-preconditioned animals following MCAO, while inflammatory gene expression is unaffected.
NFκB activity is suppressed in the brain of LPS-preconditioned animals 24 hr post MCAO NFκB activity is associated with the damage and inflammation that occur in the brain in response to stroke. We used EMSAs to evaluate the activity of the NFκB subunit p65 in the brain following stroke. The results indicated that LPS and saline preconditioned mice have comparable NFκB activity at 3 hr post MCAO (Figure 3A). However, at 24 hr post MCAO, LPS-preconditioned animals have significantly suppressed NFκB activity compared to saline preconditioned mice (Figure 3A). NFκB is suppressed 24 hr post MCAO in LPS-preconditioned mice. (A) Nuclear protein obtained from ipsilateral cortices was used to measure p65 activity by EMSA analysis. EMSA gel of pooled samples (n = 4) following 45 min MCAO for saline and LPS preconditioned (0.8 mg/kg) mice (Lt.). Quantification of band intensity of individual mice following MCAO (Rt.). NFκB is significantly decreased in LPS-preconditioned mice 24 hr post MCAO compared to saline. Supershift assay confirmed specificity for p65 oligos (data not shown). (B) Ship1 and Tollip mRNA are significantly upregulated 24 hr post 60 min MCAO in LPS-preconditioned (0.8 mg/kg) mice compared to saline, n ≥ 4 per treatment/timepoint. (C) Western blot for Ship1 and Tollip and relative band quantification showing significant upregulation of Ship1 and Tollip protein 24 hr post 45 min MCAO in LPS-preconditioned mice (0.8 mg/kg), n ≥ 3 per treatment/timepoint. (D) Western blot and relative band quantification for p105 at 24 hr post 45 min MCAO showing significant upregulation in LPS-preconditioned (0.8 mg/kg) mice, n ≥ 3 per treatment/timepoint. (A) Two-way ANOVA, Bonferroni post hoc, *p < 0.05. (B-D) Student's t-test, LPS vs. saline, **p < 0.01.
Ship1 and Tollip are cytosolic molecules that inhibit TLR signaling, which leads to the suppression of NFκB activity. We found that Ship1 and Tollip mRNA are upregulated in the brain 72 hr post injection versus saline controls (2.06 ± 0.27 and 2.31 ± 0.35, respectively) but not at 3 hr post stroke (1.09 ± 0.10 and 1.05 ± 0.09, respectively). However, by 24 hr post MCAO, Ship1 and Tollip mRNA are significantly enhanced in the brain of LPS-preconditioned mice compared to saline controls (2.62 ± 0.84 and 4.01 ± 1.06, respectively, Figure 3B). Ship1 protein is not upregulated at 72 hr post injection (fold change vs. saline: 1.01 ± 0.32), but becomes significantly enhanced in LPS-preconditioned mice at 3 hr (fold change vs. saline: 1.83 ± 0.13) and at 24 hr (fold change vs. saline: 8.81 ± 1.54, Figure 3C) post MCAO. Tollip protein is not affected by LPS preconditioning at 72 hr post injection or 3 hr post MCAO (fold change vs. saline: 1.42 ± 0.10 and 0.83 ± 0.10, respectively), but it is significantly enhanced in LPS-preconditioned mice compared to saline controls at 24 hr post MCAO (fold change vs. saline: 2.42 ± 0.20, Figure 3C). Additionally, the p50 precursor protein p105, which inhibits NFκB activity by acting as an IκB-like molecule that sequesters NFκB in the cytosol [27,28], was significantly upregulated 24 hr post stroke in LPS-preconditioned mice compared to saline (Figure 3D). Thus, despite the upregulation of inflammatory genes, NFκB activity is suppressed in the late phase of the neuroprotective response of LPS-preconditioned mice.
IRF3 activity in the brain is enhanced following MCAO in LPS-preconditioned mice IRF3 activation downstream of TLR4 is associated with anti-inflammatory/type I IFN responses. Using an IRF3 activity ELISA, we determined that IRF3 activity is comparable immediately prior to stroke (data not shown) and subsequently enhanced in the brains of LPS-preconditioned mice following MCAO (Figure 4). A trend toward increased IRF3 activity is present at 3 hr post MCAO, and the increase reaches significance at 24 hr in LPS-preconditioned mice (Figure 4). Saline-treated animals showed no evidence of increased IRF3 activity following stroke (Figure 4). This indicates that LPS preconditioning alters the response to ischemic injury by activating IRF3, a finding that is consistent with the enhanced anti-inflammatory/type I IFN gene expression. IRF3 activity is enhanced following MCAO in LPS-preconditioned mice. Nuclear protein obtained from the ipsilateral cortex post 60 min MCAO analyzed using an IRF3 activity ELISA (Active Motif, Inc.) revealed a significant increase in IRF3 activity in LPS-preconditioned (0.8 mg/kg) mice. Two-way ANOVA, Bonferroni post hoc, LPS vs. saline, *p < 0.05, n ≥ 4 per treatment.
Blood cytokine/chemokine levels parallel the expression in the brain Evidence indicates that stroke alters the cytokine profile in the plasma of circulating blood [5,29]. To determine whether LPS preconditioning changes the balance of pro- and anti-inflammatory cytokines and chemokines in the plasma, we examined the levels of seven molecules using ELISAs. The results indicate that the levels of pro-inflammatory cytokines, such as IL6, IL1β, and MCP-1, are increased in both LPS and saline preconditioned mice (Figure 5). The pro-inflammatory cytokines MIP-1α and IL1α were not detected in the plasma (data not shown). The anti-inflammatory cytokine IL10 was significantly increased only in the plasma of LPS-preconditioned mice compared to saline preconditioned mice following stroke (Figure 5). RANTES, a chemokine associated with IRF3 and IRF7 activity [30], was present in the blood of LPS-preconditioned mice at significantly greater levels than in saline preconditioned mice (Figure 5). IFNβ was not detectable in the blood of LPS or saline preconditioned animals following stroke (data not shown). Overall, this suggests that the pro-inflammatory and anti-inflammatory/type I IFN-associated responses in the blood parallel the response in the brain following stroke. Blood cytokine/chemokine levels show alterations comparable to the gene expression patterns in the brain. Plasma collected from saline or LPS-preconditioned (0.8 mg/kg) mice at the time of or following 60 min MCAO was examined using a multiplex ELISA (Quansys). Results indicated that pro-inflammatory cytokines IL1β, IL6 and MCP-1 are similar in saline and LPS-preconditioned mice. In contrast, LPS-preconditioned mice have significantly enhanced levels of the anti-inflammatory/type I IFN-associated cytokine IL10 and chemokine RANTES compared to saline following MCAO. Two-way ANOVA, LPS vs. saline, *p < 0.05, n ≥ 3 per treatment.
LPS preconditioning-induced neuroprotection is TRIF dependent Evidence presented here and previously suggests that signaling following stroke is redirected towards IRF3 [13,14]. TLR4 signaling that activates IRF3 is initiated by the adaptor molecule TRIF, whereas TLR4 signaling that activates NFκB is initiated by the adaptor molecule MyD88. The individual roles of these adaptor molecules in neuroprotection induced by LPS preconditioning are unknown. To test whether either of these key molecular adaptors is important in mediating the neuroprotective effects of LPS, we exposed MyD88-/- and TRIF-/- mice to LPS preconditioning (n = 4-10 mice/treatment). We found that MyD88-/- mice preconditioned with LPS had significantly reduced infarct sizes in response to MCAO compared to saline controls (Figure 6), indicating that LPS preconditioning is able to induce neuroprotection in mice lacking MyD88. In contrast, TRIF-/- mice preconditioned with LPS or saline had comparable infarct sizes (Figure 6), indicating that LPS preconditioning is not able to induce neuroprotection in mice lacking TRIF. Importantly, the TRIF adaptor is responsible for activation of IRF3; thus, our finding that TRIF is required for LPS preconditioning provides further support for a protective role of IRF3 activity in neuroprotection. LPS preconditioning requires TLR signaling through TRIF to promote neuroprotection. WT, MyD88-/-, and TRIF-/- mice were preconditioned with LPS (0.8 mg/kg) 3 days prior to 40 min MCAO. MyD88-/- mice were protected by LPS preconditioning, resulting in smaller infarct sizes. TRIF-/- mice did not have reduced infarct sizes, demonstrating that TRIF-deficient mice are not protected by LPS preconditioning. Thus, TRIF is required for LPS preconditioning-induced neuroprotection. Student's t-test, LPS vs. saline, *p < 0.05, n = 4-10 per treatment.
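Figure 6 compares infarct sizes between LPS- and saline-preconditioned mice within each genotype by Student's t-test. As a minimal illustration of that comparison (with hypothetical infarct percentages, not the study data), the test reduces to an unpaired t-test per genotype:

```python
# Minimal sketch (hypothetical infarct percentages): unpaired Student's t-test
# for the LPS vs. saline comparison within one genotype, as in Figure 6.
from scipy import stats

trif_ko_saline = [38.0, 41.5, 36.2, 39.8]  # % hemisphere infarcted
trif_ko_lps = [37.1, 40.2, 38.5, 35.9]

t, p = stats.ttest_ind(trif_ko_lps, trif_ko_saline)
print(f"TRIF-/- LPS vs. saline: t = {t:.2f}, p = {p:.3f}")
```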
saline: 1.42 ± 0.10 and 0.83 ± 0.10, respectively), but it is significantly enhanced in LPS-preconditioned mice compared to saline controls at 24 hr post MCAO (Fold change vs. saline: 2.42 ± 0.20, Figure 3C). Additionally, the p50 precursor protein p105, which inhibits NFκB activity by acting like an IκB molecule and sequestering NFκB in the cytosol [27,28], was significantly upregulated 24 hr post stroke in LPS-preconditioned mice compared to saline (Figure 3D). Thus, despite the upregulation of inflammatory genes, the activity of NFκB is suppressed in the late phase of the neuroprotective response of LPS-preconditioned mice. IRF3 activity in the brain is enhanced following MCAO in LPS-preconditioned mice: IRF3 activation downstream of TLR4 is associated with anti-inflammatory/type I IFN responses. Using an IRF3 activity ELISA, we determined that IRF3 activity is comparable immediately prior to stroke (data not shown) and subsequently enhanced in the brains of LPS-preconditioned mice following MCAO (Figure 4). A trend toward increased IRF3 activity is present at 3 hr post MCAO, and the increase reaches significance at 24 hr in LPS-preconditioned mice (Figure 4). Saline-treated animals showed no evidence of increased IRF3 activity following stroke (Figure 4). This indicates that LPS preconditioning alters the response to ischemic injury by activating IRF3--a finding that is consistent with the enhanced anti-inflammatory/type I IFN gene expression. IRF3 activity is enhanced following MCAO in LPS-preconditioned mice. Nuclear protein obtained from the ipsilateral cortex post 60 min MCAO and analyzed using an IRF3 activity ELISA (Active Motif, Inc.) revealed a significant increase in IRF3 activity in LPS-preconditioned (0.8 mg/kg) mice. Two-way ANOVA, Bonferroni Post Hoc, LPS vs. saline, *p < 0.05, n ≥ 4 per treatment. Blood cytokine/chemokine levels parallel the expression in the brain: Evidence indicates that stroke alters the cytokine profile in the plasma of circulating blood [5,29]. To determine whether LPS preconditioning changes the balance of pro- and anti-inflammatory cytokines and chemokines in the plasma, we examined the levels of seven molecules using ELISAs. The results indicate that the levels of pro-inflammatory cytokines, such as IL6, IL1β, and MCP-1, are increased in both LPS and saline preconditioned mice (Figure 5). The pro-inflammatory cytokines MIP-1α and IL1α were not detected in the serum (data not shown). The anti-inflammatory cytokine IL10 was significantly increased only in the plasma of LPS-preconditioned mice compared to saline preconditioned mice following stroke (Figure 5). RANTES, which is a chemokine associated with IRF3 and IRF7 activity [30], was present in the blood of LPS-preconditioned mice at significantly greater levels than in saline preconditioned mice (Figure 5). IFNβ was not detectable in the blood of LPS or saline preconditioned animals following stroke (data not shown). Overall, this suggests that the pro-inflammatory and anti-inflammatory/type I IFN-associated response in the blood parallels the response in the brain following stroke. Blood cytokine/chemokine levels show alterations comparable to the gene expression patterns in the brain. Plasma collected from saline or LPS-preconditioned (0.8 mg/kg) mice at the time of or following 60 min MCAO was examined using a multikine ELISA (Quansys). Results indicated that pro-inflammatory cytokines IL1β, IL6 and MCP-1 are similar in saline and LPS-preconditioned mice. 
In contrast, LPS-preconditioned mice have significantly enhanced levels of the anti-inflammatory/type I IFN-associated cytokine IL10 and chemokine RANTES compared to saline following MCAO. Two-way ANOVA, LPS vs. saline, *p < 0.05, n ≥ 3 per treatment. TRIF-dependent LPS preconditioning-induced neuroprotection: Evidence presented here and previously suggests that signaling following stroke is redirected towards IRF3 [13,14]. TLR4 signaling, which activates IRF3, is initiated by the adaptor molecule TRIF, while TLR4 signaling that activates NFκB is initiated by the adaptor molecule MyD88. The individual roles of these adaptor molecules in neuroprotection induced by LPS preconditioning are unknown. To test whether either of these key molecular adaptors was important in mediating the neuroprotective effects of LPS, we exposed MyD88-/- and TRIF-/- mice to LPS preconditioning (n = 4-10 mice/treatment). We found that MyD88-/- mice preconditioned with LPS had significantly reduced infarct sizes in response to MCAO compared to saline controls (Figure 6), indicating that LPS preconditioning is able to induce neuroprotection in mice lacking MyD88. In contrast, TRIF-/- mice preconditioned with LPS or saline had comparable infarct sizes (Figure 6), indicating that LPS preconditioning is not able to induce neuroprotection in mice lacking TRIF. Importantly, the TRIF adaptor is responsible for activation of IRF3; thus, our finding that TRIF is required for LPS preconditioning provides further support for a protective role of IRF3 activity. LPS preconditioning requires TLR signaling through TRIF to promote neuroprotection. WT, MyD88-/-, and TRIF-/- mice were preconditioned with LPS (0.8 mg/kg) 3 days prior to 40 min MCAO. MyD88-/- mice were protected by LPS preconditioning, resulting in smaller infarct sizes. TRIF-/- mice did not have reduced infarct sizes, demonstrating that TRIF-deficient mice are not protected by LPS preconditioning. Thus, TRIF is required for LPS preconditioning-induced neuroprotection. Student's t-test, LPS vs. saline, *p < 0.05, n = 4-10 per treatment. Discussion: Here we sought to describe the LPS-induced reprogrammed response to stroke and to determine the important signaling events involved in neuroprotection against ischemic injury. Our results demonstrated that NFκB activity was suppressed and that the cytosolic inhibitors of NFκB, Ship1, Tollip, and p105, were upregulated 24 hr post MCAO, although pro-inflammatory gene expression was unaffected (diagrammed in Figure 7). Interestingly, there is evidence that suppression of NFκB can promote protection against cerebral ischemia without influencing pro-inflammatory cytokine production [3,31]. In particular, administration of the NFκB inhibitor Tat-NEMO Binding Domain provided protection against hypoxia-ischemia in neonatal rats without affecting TNFα or IL1β production [3]. Furthermore, TLR4-deficient mice have smaller infarcts in response to MCAO, yet the production of TNFα and IL1β was unaffected [6]. This suggests that reduced ischemic injury can be achieved by suppressing NFκB activity without suppressing pro-inflammatory cytokines, and that TLR4 signaling and NFκB activation are not the sole source of these pro-inflammatory cytokines in response to ischemic injury, implicating other signaling cascades and transcription factors in the inflammatory response. Thus, consistent with our results, reprogramming the TLR4 response would not be expected to alter inflammatory gene expression in the brain. 
Schematic of TLR4 signaling and gene expression following stroke. (Top) TLR4 signaling cascades following stroke. In the absence of LPS preconditioning, stroke leads to NFκB activation without IRF3 activation. LPS preconditioning prior to stroke leads to robust activation of IRF3 and suppressed NFκB activity compared to stroke alone. (Bottom) Gene expression 24 hr post stroke. Stroke alone dramatically upregulates pro-inflammatory genes. LPS preconditioning prior to stroke dramatically upregulates anti-inflammatory/type I IFN genes, many of which are associated with IRF3, while still maintaining a pro-inflammatory response. NFκB is known to be induced acutely in response to ischemic injury; however, investigation into the role of NFκB activity has revealed conflicting results [2]. For instance, NFκB is constitutively active in neurons, a requirement for their survival, while the surrounding glial cells have inducible NFκB activity [32]. In response to ischemic challenge, NFκB activity in astrocytes is responsible for detrimental inflammation [33]. This concept of pleiotropic roles also applies to many of the inflammatory genes expressed in the brain in the setting of stroke [34,35]. For example, intracerebroventricular injection of recombinant IL6 significantly decreased the infarct size in rats 24 hr post MCAO [36]. IL1β is a potent inducer of IL1 receptor antagonist (IL1rn), which significantly reduces damage in response to stroke [37] and, notably, is upregulated in our microarray (Figure 1, Lt.). TNFα is considered to play multiple roles in stroke injury, mediating both neuroprotective and injurious effects [34]. Furthermore, in response to viral challenge, the simultaneous presence of inflammatory cytokines, such as TNFα, and type I IFNs can alter their effects and synergize to promote a more protective state [38]. Thus, alterations in the environment in which NFκB is activated and inflammatory genes are present may affect the roles that pro-inflammatory mediators play in injury and may even contribute to the protective phenotype. IRF3 activity induces the expression of anti-inflammatory and type I IFN-associated genes. Interestingly, mice deficient in IRF3 are not protected against cerebral ischemia by LPS preconditioning [13]. We have further established the importance of IRF3 in neuroprotection by identifying that multiple preconditioning paradigms, including LPS, CpG (a TLR9 agonist) and brief ischemia, induce a common set of IRF-mediated genes in the neuroprotective environment following MCAO [14]. Here we demonstrate that IRF3 activity is upregulated in the brain of LPS-preconditioned mice in response to MCAO and that several IRF3-mediated genes are also upregulated, including RANTES and IFIT1 (diagrammed in Figure 7), which may mitigate the damaging effects of ischemia. Many of the upregulated anti-inflammatory/type I IFN genes in the brain following stroke have several identified neuroprotective functions. TGFβ has been shown to protect neurons from apoptosis, promote angiogenesis, decrease microglial activation, and reduce edema [34,39]. RANTES, which is induced by IRF3 and IRF7 [30], has been shown to protect neurons from cell death in response to HIV-1 glycoprotein gp120 [40]. In the setting of brain ischemia, mice deficient in the RANTES receptor, CCR5, have larger infarcts, suggesting a neuroprotective role for CCR5 activation [41]. Notably, the expression of CCR5 is upregulated in our microarray data (Figure 1, Lt.). 
IFIT1 is commonly associated with IRF3 signaling in response to IFN treatment and viral infection [42]. Little is known about a role for IFIT1 in ischemic injury; however, it is inducible in microglia and neurons and has been shown to affect NFκB and IRF3 activation [42-45]. Additional anti-inflammatory/type I IFN genes shown to be upregulated in our microarray studies have potential roles in neuroprotection, including IL1 receptor antagonist (IL1rn), which is associated with reduced infarct size in response to stroke [34,46]. A recombinant form of IL1rn is being tested in Phase II clinical trials as an acute stroke therapy [47,48]. Although not detected in our gene microarray studies here, perhaps due to assay sensitivity for IFNβ transcript on the microarray, we have previously published that IFNβ mRNA, a type I IFN known to have neuroprotective properties, is upregulated, as measured by qtPCR, following stroke in the brain of LPS-preconditioned mice [13]. The protective functions of these genes may be of considerable importance to the neuroprotective phenotype following MCAO induced by LPS preconditioning. Research strongly suggests that cerebral ischemia dramatically alters the protein and gene expression profile in the peripheral blood [5,29,49,50]. Our results demonstrate that the cytokine and chemokine response in the blood paralleled the pattern of gene expression in the brain. Overall, inflammatory cytokine protein levels were similarly induced in LPS and saline preconditioned mice following stroke. However, we have previously published that TNFα is significantly reduced in the plasma of LPS-preconditioned mice following MCAO [51]. The anti-inflammatory and type I IFN-induced cytokines and chemokines measured in the blood were enhanced in LPS-preconditioned mice compared to saline. In particular, IL10 was significantly upregulated in the blood of LPS-preconditioned mice following MCAO. Importantly, in humans, upregulation of IL10 in the blood has been correlated with improved outcome in stroke [52]. While IL10 mRNA was not detectable in the brain, IL10 can be induced by IRF3 activity and therefore is indicative of the same redirected response seen in the brain. IFNβ was not detected in the blood 24 hr post MCAO. This may be due to the kinetics of IFNβ expression. Further investigation into the time course of IFNβ induction in the blood is necessary to fully understand the role of IFNβ in this system. The redirected signaling observed in the blood may stem from the injured brain leaking proteins into the peripheral circulation; however, this is not considered a major source of plasma cytokines at these early timepoints following stroke [29]. Alternatively, because LPS administration occurs by a systemic route, target cells in the periphery may become tolerant to activation by the secondary stimuli resulting from ischemic injury. Although our data do not distinguish between these possibilities, it is clear that LPS preconditioning alters the response to injury in the brain and the blood in a manner that promotes a protective phenotype. TLR4 signals through the adaptor molecules MyD88 and TRIF. MyD88 signaling culminates in NFκB activation. TRIF signaling can activate both IRF3 and NFκB, although IRF3 activation is often more rapid and robust, while activation of NFκB is a secondary effect that occurs as part of late-phase TLR signaling [53]. 
The data presented in this paper and in Marsh et al., 2009 [13] suggest a dominant role for IRF3 signaling in LPS-induced neuroprotection, which implicates the TRIF adaptor as a key player in the reprogrammed TLR4 response to stroke. Support for this lies in our finding that mice deficient in TRIF are not protected by LPS preconditioning. In contrast, MyD88-deficient mice preconditioned with LPS are still protected against MCAO. Taken together, these data strongly support a protective role for TRIF-mediated IRF3 activation in the neuroprotective phenotype induced by LPS preconditioning. TLRs have the ability to self-regulate in a manner that redirects their signaling. The classic example is endotoxin tolerance, whereby a low dose of the TLR4 ligand LPS reprograms TLR4 signaling in response to a subsequent toxic dose of LPS, leading to a protective phenotype [54]. This reprogrammed response comes in two major forms: (1) suppressed pro-inflammatory signaling and enhanced anti-inflammatory/type I IFN signaling, or (2) enhanced anti-inflammatory/type I IFN signaling in the absence of suppressed pro-inflammatory signaling. Thus, the suppressed NFκB activity, the enhanced IRF3 activity, and the upregulated anti-inflammatory/type I IFN-associated genes seen in the LPS-preconditioned brain following stroke are reminiscent of endotoxin tolerance--a phenomenon that has been best described in macrophages in vitro, but more recently in animals. Many other key features of endotoxin tolerance are seen in the reprogrammed response to stroke produced by LPS preconditioning. For example, Tollip and Ship1 are known to be induced in endotoxin tolerance and lead to suppressed NFκB activity. TGFβ has been shown to play an important role in endotoxin tolerance, whereby TGFβ-mediated induction of SMAD4 is required to promote complete endotoxin tolerance and to induce the NFκB inhibitor, Ship1 [55]. Interestingly, in our system, the upregulation of TGFβ corresponds to Ship1 upregulation 24 hr post MCAO in LPS-preconditioned mice compared to saline. Furthermore, cells deficient in TRIF or IRF3 are unable to develop tolerance to endotoxin [56]. This is similar to TRIF-deficient or IRF3-deficient mice not being protected by LPS preconditioning against cerebral ischemia. Taken together, this suggests that the cellular phenomenon of endotoxin tolerance is potentially the same response observed in LPS preconditioning, wherein LPS exposure leads to a reprogrammed TLR signaling response in the brain following stroke that produces protection. Conclusions: The findings reported here provide an important characterization of the LPS-induced neuroprotective response following stroke. We show that LPS preconditioning induces a reprogrammed response to stroke, whereby NFκB activity is suppressed, IRF3 activity is enhanced, and anti-inflammatory/type I IFN genes are upregulated (diagrammed in Figure 7). Interestingly, the suppression of pro-inflammatory genes is not a necessary part of the neuroprotective response induced by LPS preconditioning. Further evaluation of the TLR4 signaling cascades revealed a critical role for the TRIF cascade in producing the neuroprotection initiated by LPS preconditioning. As TRIF signaling culminates in IRF3 activation, this finding provides further evidence for the importance of IRF3 in the neuroprotective response to stroke.
Background: Toll-like receptor 4 (TLR4) is activated in response to cerebral ischemia leading to substantial brain damage. In contrast, mild activation of TLR4 by preconditioning with low dose exposure to lipopolysaccharide (LPS) prior to cerebral ischemia dramatically improves outcome by reprogramming the signaling response to injury. This suggests that TLR4 signaling can be altered to induce an endogenously neuroprotective phenotype. However, the TLR4 signaling events involved in this neuroprotective response are poorly understood. Here we define several molecular mediators of the primary signaling cascades induced by LPS preconditioning that give rise to the reprogrammed response to cerebral ischemia and confer the neuroprotective phenotype. Methods: C57BL6 mice were preconditioned with low dose LPS prior to transient middle cerebral artery occlusion (MCAO). Cortical tissue and blood were collected following MCAO. Microarray and qtPCR were performed to analyze gene expression associated with TLR4 signaling. EMSA and DNA binding ELISA were used to evaluate NFκB and IRF3 activity. Protein expression was determined using Western blot or ELISA. MyD88-/- and TRIF-/- mice were utilized to evaluate signaling in LPS preconditioning-induced neuroprotection. Results: Gene expression analyses revealed that LPS preconditioning resulted in a marked upregulation of anti-inflammatory/type I IFN-associated genes following ischemia while pro-inflammatory genes induced following ischemia were present but not differentially modulated by LPS. Interestingly, although expression of pro-inflammatory genes was observed, there was decreased activity of NFκB p65 and increased presence of NFκB inhibitors, including Ship1, Tollip, and p105, in LPS-preconditioned mice following stroke. In contrast, IRF3 activity was enhanced in LPS-preconditioned mice following stroke. TRIF and MyD88 deficient mice revealed that neuroprotection induced by LPS depends on TLR4 signaling via TRIF, which activates IRF3, but does not depend on MyD88 signaling. Conclusions: Our results characterize several critical mediators of the TLR4 signaling events associated with neuroprotection. LPS preconditioning redirects TLR4 signaling in response to stroke through suppression of NFκB activity, enhanced IRF3 activity, and increased anti-inflammatory/type I IFN gene expression. Interestingly, this protective phenotype does not require the suppression of pro-inflammatory mediators. Furthermore, our results highlight a critical role for TRIF-IRF3 signaling as the governing mechanism in the neuroprotective response to stroke.
Background: Stroke is one of the leading causes of death and the leading cause of morbidity in the United States [1]. The inflammatory response to stroke substantially exacerbates ischemic damage. The acute activation of the NFκB transcription factor has been linked to the inflammatory response to stroke [2], and suppression of NFκB activity following stroke has been found to reduce damage [3]. NFκB activation can lead to the dramatic upregulation of inflammatory molecules and cytokines, including TNFα, IL6, IL1β, and COX2 [2]. The source of these inflammatory molecules in the acute response to stroke appears to stem from the cells of the central nervous system (CNS), including neurons and glial cells [2]. The cells in the CNS play a particularly dominant role early in the response to ischemia because infiltrating leukocytes do not appear in substantial numbers in the brain until 24 hr following injury [4]. Stroke also induces an acute inflammatory response in the circulating blood. Inflammatory cytokine and chemokine levels, including IL6, IL1β, MCP-1 and TNFα, are elevated in the circulation following stroke [5]. This suggests there is an intimate relationship between responses in the brain and blood following stroke--responses that result in increased inflammation. Toll-like receptors (TLRs), traditionally considered innate immune receptors, signal through the adaptor proteins MyD88 and TRIF to activate NFκB and interferon regulatory factors (IRFs). It has been shown recently that TLRs become activated in response to endogenous ligands, known as damage-associated molecular patterns (DAMPs), released during injury. Interestingly, animals deficient in TLR2 or TLR4 have significantly reduced infarct sizes in several models of stroke [6-11]. This suggests that TLR2 and TLR4 activation in response to ischemic injury exacerbates damage. In addition, a recent investigation in humans showed that the inflammatory responses to stroke in the blood were linked to increased TLR2 and TLR4 expression on hematopoietic cells and associated with worse outcome in stroke [12]. The detrimental effect of TLR signaling is associated with the pathways that lead to NFκB activation and pro-inflammatory responses. In contrast, TLR signaling pathways that activate IRFs can induce anti-inflammatory mediators and type I IFNs that have been associated with neuroprotection [13,14]. Thus, in TLR signaling there is a fine balance between pathways leading to injury or protection. TLR ligands have been a major source of interest as preconditioning agents for prophylactic therapy against ischemic injury. Such therapies would target a population of patients who are at risk of ischemia in the setting of surgery [15-18]. Preconditioning with low doses of ligands for TLR2, TLR4, and TLR9 successfully reduces infarct size in experimental models of stroke [19-21], including a recent study showing that a TLR9 ligand is neuroprotective in a nonhuman primate model of stroke [22]. Emerging evidence suggests that TLR-induced neuroprotection occurs by reprogramming the genomic response to the DAMPs, which are produced in response to ischemic injury. In this reprogrammed state, activation of TLR4 signaling preferentially leads to IRF-mediated gene expression [13,14]. However, whether TLR preconditioning affects NFκB activity and pro-inflammatory signaling is unknown. As yet, a complete analysis of the characteristic TLR signaling responses to stroke following preconditioning has not been reported. 
The objective of this study is to utilize LPS preconditioning followed by transient middle cerebral artery occlusion (MCAO) to elucidate the reprogrammed TLR response to stroke and to determine the major pathways involved in producing the neuroprotective phenotype. Here we show that preconditioning against ischemia using LPS leads to suppressed NFκB activity--although pro-inflammatory gene expression does not appear to be attenuated. We also demonstrate that LPS-preconditioned mice have enhanced IRF3 activity and anti-inflammatory/type I IFN gene expression in the ischemic brain. This expression pattern was recapitulated in the blood where plasma levels of pro-inflammatory cytokine proteins were comparable in LPS-preconditioned and control mice while IRF-associated proteins were enhanced in LPS preconditioned mice. To our knowledge, we provide the first evidence that protection due to LPS preconditioning stems from TRIF signaling, the cascade that is associated with IRF3 activation, and is independent of MyD88 signaling. These molecular features suggest that, following stroke, signaling is directed away from NFκB activity and towards a dominant TRIF-IRF3 response. Understanding the endogenous signaling events that promote protection against ischemic injury is integral to the identification and development of novel stroke therapeutics. In particular, the evidence presented here further highlights a key role for IRF3 activity in the protective response to stroke. Conclusions: KBV performed experiments, collected data, conceived of the idea for the paper, and wrote the manuscript. SLS worked on the microarray, provided guidance in the production of data, and edited the paper. BJM performed experiments and contributed to the writing of the Methods section. RWK performed experiments. NL performed the MCAO surgeries. MSP provided critical guidance and worked on the manuscript. All authors approved of the final manuscript.
Background: Toll-like receptor 4 (TLR4) is activated in response to cerebral ischemia leading to substantial brain damage. In contrast, mild activation of TLR4 by preconditioning with low dose exposure to lipopolysaccharide (LPS) prior to cerebral ischemia dramatically improves outcome by reprogramming the signaling response to injury. This suggests that TLR4 signaling can be altered to induce an endogenously neuroprotective phenotype. However, the TLR4 signaling events involved in this neuroprotective response are poorly understood. Here we define several molecular mediators of the primary signaling cascades induced by LPS preconditioning that give rise to the reprogrammed response to cerebral ischemia and confer the neuroprotective phenotype. Methods: C57BL6 mice were preconditioned with low dose LPS prior to transient middle cerebral artery occlusion (MCAO). Cortical tissue and blood were collected following MCAO. Microarray and qtPCR were performed to analyze gene expression associated with TLR4 signaling. EMSA and DNA binding ELISA were used to evaluate NFκB and IRF3 activity. Protein expression was determined using Western blot or ELISA. MyD88-/- and TRIF-/- mice were utilized to evaluate signaling in LPS preconditioning-induced neuroprotection. Results: Gene expression analyses revealed that LPS preconditioning resulted in a marked upregulation of anti-inflammatory/type I IFN-associated genes following ischemia while pro-inflammatory genes induced following ischemia were present but not differentially modulated by LPS. Interestingly, although expression of pro-inflammatory genes was observed, there was decreased activity of NFκB p65 and increased presence of NFκB inhibitors, including Ship1, Tollip, and p105, in LPS-preconditioned mice following stroke. In contrast, IRF3 activity was enhanced in LPS-preconditioned mice following stroke. TRIF and MyD88 deficient mice revealed that neuroprotection induced by LPS depends on TLR4 signaling via TRIF, which activates IRF3, but does not depend on MyD88 signaling. Conclusions: Our results characterize several critical mediators of the TLR4 signaling events associated with neuroprotection. LPS preconditioning redirects TLR4 signaling in response to stroke through suppression of NFκB activity, enhanced IRF3 activity, and increased anti-inflammatory/type I IFN gene expression. Interestingly, this protective phenotype does not require the suppression of pro-inflammatory mediators. Furthermore, our results highlight a critical role for TRIF-IRF3 signaling as the governing mechanism in the neuroprotective response to stroke.
15,267
427
[ 866, 103, 82, 211, 107, 315, 120, 243, 454, 80, 57, 64, 4517, 420, 227, 680, 215, 352, 317, 1938, 132 ]
22
[ "lps", "mice", "preconditioned", "mcao", "inflammatory", "saline", "hr", "following", "stroke", "lps preconditioned" ]
[ "stroke leads nfκb", "inflammatory response nfκb", "nfκb activated inflammatory", "inflammatory response stroke", "stroke blood cytokine" ]
null
[CONTENT] Toll-like receptors | stroke | NFκB | inflammation | preconditioning | neuroprotection [SUMMARY]
[CONTENT] Toll-like receptors | stroke | NFκB | inflammation | preconditioning | neuroprotection [SUMMARY]
null
[CONTENT] Toll-like receptors | stroke | NFκB | inflammation | preconditioning | neuroprotection [SUMMARY]
[CONTENT] Toll-like receptors | stroke | NFκB | inflammation | preconditioning | neuroprotection [SUMMARY]
[CONTENT] Toll-like receptors | stroke | NFκB | inflammation | preconditioning | neuroprotection [SUMMARY]
[CONTENT] Adaptor Proteins, Vesicular Transport | Animals | Brain Ischemia | Chemokines | Cytokines | Gene Expression Profiling | Humans | Infarction, Middle Cerebral Artery | Interferon Regulatory Factor-3 | Ischemic Preconditioning | Lipopolysaccharides | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Microarray Analysis | NF-kappa B | Signal Transduction | Stroke | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] Adaptor Proteins, Vesicular Transport | Animals | Brain Ischemia | Chemokines | Cytokines | Gene Expression Profiling | Humans | Infarction, Middle Cerebral Artery | Interferon Regulatory Factor-3 | Ischemic Preconditioning | Lipopolysaccharides | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Microarray Analysis | NF-kappa B | Signal Transduction | Stroke | Toll-Like Receptor 4 [SUMMARY]
null
[CONTENT] Adaptor Proteins, Vesicular Transport | Animals | Brain Ischemia | Chemokines | Cytokines | Gene Expression Profiling | Humans | Infarction, Middle Cerebral Artery | Interferon Regulatory Factor-3 | Ischemic Preconditioning | Lipopolysaccharides | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Microarray Analysis | NF-kappa B | Signal Transduction | Stroke | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] Adaptor Proteins, Vesicular Transport | Animals | Brain Ischemia | Chemokines | Cytokines | Gene Expression Profiling | Humans | Infarction, Middle Cerebral Artery | Interferon Regulatory Factor-3 | Ischemic Preconditioning | Lipopolysaccharides | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Microarray Analysis | NF-kappa B | Signal Transduction | Stroke | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] Adaptor Proteins, Vesicular Transport | Animals | Brain Ischemia | Chemokines | Cytokines | Gene Expression Profiling | Humans | Infarction, Middle Cerebral Artery | Interferon Regulatory Factor-3 | Ischemic Preconditioning | Lipopolysaccharides | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Microarray Analysis | NF-kappa B | Signal Transduction | Stroke | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] stroke leads nfκb | inflammatory response nfκb | nfκb activated inflammatory | inflammatory response stroke | stroke blood cytokine [SUMMARY]
[CONTENT] stroke leads nfκb | inflammatory response nfκb | nfκb activated inflammatory | inflammatory response stroke | stroke blood cytokine [SUMMARY]
null
[CONTENT] stroke leads nfκb | inflammatory response nfκb | nfκb activated inflammatory | inflammatory response stroke | stroke blood cytokine [SUMMARY]
[CONTENT] stroke leads nfκb | inflammatory response nfκb | nfκb activated inflammatory | inflammatory response stroke | stroke blood cytokine [SUMMARY]
[CONTENT] stroke leads nfκb | inflammatory response nfκb | nfκb activated inflammatory | inflammatory response stroke | stroke blood cytokine [SUMMARY]
[CONTENT] lps | mice | preconditioned | mcao | inflammatory | saline | hr | following | stroke | lps preconditioned [SUMMARY]
[CONTENT] lps | mice | preconditioned | mcao | inflammatory | saline | hr | following | stroke | lps preconditioned [SUMMARY]
null
[CONTENT] lps | mice | preconditioned | mcao | inflammatory | saline | hr | following | stroke | lps preconditioned [SUMMARY]
[CONTENT] lps | mice | preconditioned | mcao | inflammatory | saline | hr | following | stroke | lps preconditioned [SUMMARY]
[CONTENT] lps | mice | preconditioned | mcao | inflammatory | saline | hr | following | stroke | lps preconditioned [SUMMARY]
[CONTENT] stroke | inflammatory | signaling | response | injury | tlr | nfκb | ischemic | tlr2 | tlr2 tlr4 [SUMMARY]
[CONTENT] mm | nuclear | protein | rna | tissue | mice | performed | nuclear protein | affymetrix | mcao [SUMMARY]
null
[CONTENT] neuroprotective response | response | neuroprotective | lps preconditioning | lps | preconditioning | irf3 | stroke | induced | signaling [SUMMARY]
[CONTENT] lps | mice | inflammatory | preconditioned | irf3 | saline | mcao | hr | stroke | activity [SUMMARY]
[CONTENT] lps | mice | inflammatory | preconditioned | irf3 | saline | mcao | hr | stroke | activity [SUMMARY]
[CONTENT] 4 ||| LPS ||| ||| ||| LPS [SUMMARY]
[CONTENT] C57BL6 | LPS | MCAO ||| MCAO ||| Microarray ||| EMSA | ELISA | NFκB ||| ELISA ||| MyD88-/- | TRIF-/- | LPS [SUMMARY]
null
[CONTENT] ||| LPS | NFκB | IFN ||| ||| TRIF [SUMMARY]
[CONTENT] 4 ||| LPS ||| ||| ||| LPS ||| C57BL6 | LPS | MCAO ||| MCAO ||| Microarray ||| EMSA | ELISA | NFκB ||| ELISA ||| MyD88-/- | TRIF-/- | LPS ||| ||| LPS | IFN | LPS ||| NFκB | NFκB | LPS ||| LPS ||| TRIF | MyD88 | LPS | TRIF | MyD88 ||| ||| LPS | NFκB | IFN ||| ||| TRIF [SUMMARY]
[CONTENT] 4 ||| LPS ||| ||| ||| LPS ||| C57BL6 | LPS | MCAO ||| MCAO ||| Microarray ||| EMSA | ELISA | NFκB ||| ELISA ||| MyD88-/- | TRIF-/- | LPS ||| ||| LPS | IFN | LPS ||| NFκB | NFκB | LPS ||| LPS ||| TRIF | MyD88 | LPS | TRIF | MyD88 ||| ||| LPS | NFκB | IFN ||| ||| TRIF [SUMMARY]
Conditional survival of cancer patients: an Australian perspective.
23043308
Estimated conditional survival for cancer patients diagnosed at different ages and disease stage provides important information for cancer patients and clinicians in planning follow-up, surveillance and ongoing management.
BACKGROUND
Using population-based cancer registry data for New South Wales Australia, we estimated conditional 5-year relative survival for 11 major cancers diagnosed 1972-2006 by time since diagnosis and age and stage at diagnosis.
METHODS
193,182 cases were included, with the most common cancers being prostate (39,851), female breast (36,585) and colorectal (35,455). Five-year relative survival tended to increase with increasing years already survived and improvement was greatest for cancers with poor prognosis at diagnosis (lung or pancreas) and for those with advanced stage or older age at diagnosis. After surviving 10 years, conditional 5-year survival was over 95% for 6 localised, 6 regional, 3 distant and 3 unknown stage cancers. For the remaining patient groups, conditional 5-year survival ranged from 74% (for distant stage bladder cancer) to 94% (for 4 cancers at different stages), indicating that they continue to have excess mortality 10-15 years after diagnosis.
RESULTS
These data provide important information for cancer patients, based on age and stage at diagnosis, as they continue on their cancer journey. This information may also be used by clinicians as a tool to make more evidence-based decisions regarding follow-up, surveillance, or ongoing management according to patients' changing survival expectations over time.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Breast Neoplasms", "Colorectal Neoplasms", "Female", "Humans", "Male", "Middle Aged", "Mortality", "Neoplasm Staging", "Neoplasms", "New South Wales", "Population Surveillance", "Prostatic Neoplasms", "Registries", "Survival Analysis", "Survival Rate", "Time Factors", "Young Adult" ]
3519618
Background
Survival estimates for cancer patients are traditionally reported from the time of diagnosis such as five-year survival. It is useful for answering questions that many people ask about their prognosis when first diagnosed with cancer. For cancer patients who have already survived a number of years, survival expectations at diagnosis are too pessimistic because they include people who have already died. An ongoing question among these surviving patients is “now that I have survived for x years, what is the probability that I will survive another y years”. Over the past decade, the concept of conditional survival (CS) has emerged to directly address this question, because it provides cancer patients with survival expectations based on people who have reached a similar point in their cancer journey. However, numerous previously published CS estimates have focused on one or a few cancer types, including cancer of the head and neck [1], stomach [2], colon [3-5], rectum [6], lung [7,8], breast [9] and melanoma of the skin [10-12]. Only a few published studies provided estimates for many cancer sites [13-18], and an even smaller number have included stratification by age group and stage at diagnosis [15-17]. Ellison et al. [14] acknowledged that a stratification of conditional survival estimates by age group at diagnosis provides more relevant clinical information for clinicians and cancer patients. Similarly other studies have acknowledged the limitation of excluding information about stage at diagnosis [13,15]. This has been shown to be an important prognostic factor for survival outcomes [19]. While it has been suggested that the impact of stage reduces and can disappear for long term conditional survival [16], there are currently no published Australian data describing conditional survival outcomes according to the stage at diagnosis. This paper provides conditional survival estimates from New South Wales (NSW), Australia stratified by age group and stage at diagnosis for 11 major cancers.
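The question above can be stated compactly. As a short notation sketch (LaTeX), writing RS(t) for relative survival at t years after diagnosis, conditional survival is simply the ratio defined in the Methods that follow:

    \[
      \mathrm{CS}(y \mid x) \;=\; \frac{\mathrm{RS}(x + y)}{\mathrm{RS}(x)}
    \]
    % CS(y | x): the probability of surviving a further y years,
    % given that the patient has already survived x years after diagnosis.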
Methods
Study population: New South Wales is the most populous state in Australia, with a population of 7.2 million, approximately one-third of the Australian population. Age-standardised mortality rates from cancer in NSW are almost identical to the national rates (187.8 per 100,000 vs 187.1 per 100,000) [20]. The de-identified records of people diagnosed with one of 11 major cancers in NSW (Table 1) were obtained from the NSW Central Cancer Registry. The Registry maintains a record of all cases of cancer diagnosed in NSW residents since 1972, with notifications from multiple sources and linkage to death certificates. We included cases diagnosed in 1972–2006 and aged 15–89 years at diagnosis. Cases reported to the Registry through death certificate only or first identified at post-mortem were excluded. Conditional 5-year relative survival estimates, by type of cancer and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006. The NSW Central Cancer Registry is the only population-based cancer registry in Australia that routinely collects information on spread of disease at diagnosis, which was used as an indicator of disease stage at diagnosis in this study. Medical coders from the Registry categorise stage based on information from statutory notification forms and pathology reports using a modified summary classification similar to the Surveillance, Epidemiology, and End Results (SEER) summary stage [21]. Categories are localised (stage I, confined to tissue or organ of origin), regional (stage II, spread to adjacent organs or tissues, or stage III, spread to regional lymph nodes), distant (stage IV, with metastases to distant organs), or unknown stage (insufficient information available) [22]. Survival status was obtained through record linkage of the cancer cases in the Registry with the death records from the NSW Register of Births, Deaths, and Marriages and the National Death Index. All eligible cases were followed up to 31 December 2006 to determine survival status. This passive approach to follow-up may fail to ascertain all deaths and may incorrectly link some incidence and death records. A previous study investigating its completeness and accuracy found loss to follow-up to be uniform from 1980 to 1993 and estimated the resulting overestimation of relative survival to be a maximum of 2% [23]. This study was approved by the NSW Population and Health Service Research Ethics Committee (reference number: 2011/04/317). Statistical methods: Estimation of relative survival overcomes the possibility that cause of death on death certificates may be inaccurate [24]. Relative survival is the ratio of the observed proportion surviving in a group of patients to the expected proportion that would have survived in an age- and sex-comparable group of people from the general population [25]. We calculated relative survival using the period approach [26], with cancer patients under observation between 1 January 1998 and 31 December 2006. In period analysis, survival times can be left-truncated at the beginning of the period of interest in addition to being right-censored at its end.
Expected survival was estimated using the Ederer and Heise (Ederer II) method [27]. Observed survival was measured from the month of diagnosis to the date of death or censoring (31 December 2006), whichever occurred first. Survival estimates were stratified by age group (15–49, 50–69 and 70–89) and by stage at diagnosis separately. Stata 11 (College Station, TX: StataCorp) was used for all analyses, together with publicly available commands for estimating relative survival from Dickman et al. [28]. Conditional survival: Conditional survival is defined as the probability of surviving an additional y years on the condition that the patient has survived x years. It is calculated by dividing the relative survival at (x + y) years after diagnosis by the relative survival at x years after diagnosis [8]. For each type of cancer, 5-year conditional survival is estimated at 1, 3, 5 and 10 years after diagnosis. We calculated the 95% confidence intervals assuming that CS follows a normal distribution and using Paul Dickman's method for period analysis, the details of which can be found on his website [29].
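As a concrete illustration of the two quantities used in this section, the following is a minimal Python sketch. The survival proportions are hypothetical rather than taken from the study's data, and the sketch does not reproduce the period-analysis machinery or the Stata commands cited above; it only demonstrates the ratio definitions of relative and conditional survival.

    def relative_survival(observed, expected):
        """Relative survival: observed survival proportion in the patient group divided by
        the expected proportion in an age- and sex-comparable general population."""
        return observed / expected

    def conditional_survival(rs_x_plus_y, rs_x):
        """Conditional survival CS(y | x) = RS(x + y) / RS(x): the probability of surviving
        a further y years given survival to x years after diagnosis."""
        return rs_x_plus_y / rs_x

    # Hypothetical cumulative relative survival for one cancer type
    rs = {1: 0.55, 6: 0.30}   # RS at 1 year and at 6 years after diagnosis

    # 5-year survival conditional on having already survived 1 year: RS(6) / RS(1)
    cs_5_given_1 = conditional_survival(rs[6], rs[1])
    print(f"Conditional 5-year relative survival after surviving 1 year: {cs_5_given_1:.0%}")

    # The relative-survival ratio itself, again with hypothetical proportions
    print(f"Relative survival at 5 years: {relative_survival(observed=0.40, expected=0.90):.0%}")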
Results
A total of 193,182 cases were included in this study, with the most common cancers being prostate (39,851), female breast (36,585) and colorectum (35,455) (Table 1). Table 1 shows the 5-year relative survival estimates at diagnosis for each of the 11 selected cancer types, along with 5-year CS estimates for patients who have survived 1, 3, 5 and 10 years after diagnosis. Overall, 5-year relative survival tended to increase when conditional on increasing years after diagnosis, and the greatest changes in CS occurred for cancers with a poor prognosis at diagnosis, such as in patients with aggressive cancers or those with advanced stage or older age at diagnosis. For example, people diagnosed with lung cancer had an initial 5-year relative survival of 14%. However, their conditional 5-year relative survival increased to 33% after they survived one year after diagnosis, and reached 85% if they survived 10 years after diagnosis. In contrast, 5-year relative survival was initially very high for men with prostate cancer (90%), with no change after surviving 10 years since diagnosis (90%). Table 2 shows the 5-year relative survival estimates stratified by stage at diagnosis for each of the 11 selected cancers and conditional on having survived 1, 3, 5 and 10 years after diagnosis. The improvement in 5-year relative survival was greatest for cases with distant metastases when conditional on increasing years already survived, whereas the impact on early-stage cancers was much smaller. Conditional 5-year relative survival estimates, by type of cancer, disease stage and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006. † Thickness of the lesion was also used to categorise disease stage at diagnosis for melanoma [T3 (thickness = 2.01-4.0 mm) and T4 (thickness > 4.0 mm) stages were grouped with ‘spread to regional lymph nodes’ as “regional”]. Since thickness data prior to 1983 were considered unreliable, we used melanoma data from 1983 onward for the stage-specific survival estimates. Age-specific and stage-specific conditional 5-year relative survival estimates at 0, 1, 3, 5 and 10 years after diagnosis for each selected cancer are also presented graphically in Figures 1 and 2. For most cancers, the age or stage differential in survival at diagnosis generally decreased over time, except for cancers of the lung and pancreas. Age-specific conditional 5-year relative survival at 0, 1, 3, 5, 10 years after diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998–2006. Stage-specific conditional 5-year relative survival at 0, 1, 3, 5, 10 years after diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998–2006.
Discussion and conclusion
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/12/460/prepub
[ "Background", "Study population", "Statistical methods", "Conditional survival", "Discussion and conclusion" ]
[ "Survival estimates for cancer patients are traditionally reported from the time of diagnosis such as five-year survival. It is useful for answering questions that many people ask about their prognosis when first diagnosed with cancer. For cancer patients who have already survived a number of years, survival expectations at diagnosis are too pessimistic because they include people who have already died. An ongoing question among these surviving patients is “now that I have survived for x years, what is the probability that I will survive another y years”. Over the past decade, the concept of conditional survival (CS) has emerged to directly address this question, because it provides cancer patients with survival expectations based on people who have reached a similar point in their cancer journey.\nHowever, numerous previously published CS estimates have focused on one or a few cancer types, including cancer of the head and neck\n[1], stomach\n[2], colon\n[3-5], rectum\n[6], lung\n[7,8], breast\n[9] and melanoma of the skin\n[10-12]. Only a few published studies provided estimates for many cancer sites\n[13-18], and an even smaller number have included stratification by age group and stage at diagnosis\n[15-17]. Ellison et al.\n[14] acknowledged that a stratification of conditional survival estimates by age group at diagnosis provides more relevant clinical information for clinicians and cancer patients. Similarly other studies have acknowledged the limitation of excluding information about stage at diagnosis\n[13,15]. This has been shown to be an important prognostic factor for survival outcomes\n[19]. While it has been suggested that the impact of stage reduces and can disappear for long term conditional survival\n[16], there are currently no published Australian data describing conditional survival outcomes according to the stage at diagnosis.\nThis paper provides conditional survival estimates from New South Wales (NSW), Australia stratified by age group and stage at diagnosis for 11 major cancers.", "New South Wales is the most populous state in Australia with a population of 7.2 million, approximately one-third of the Australian population. Age-standardised mortality rates from cancer in NSW are almost identical to the national rates (187.8 per 100,000 vs 187.1 per 100,00)\n[20]. The de-identified records of people diagnosed with one of 11 major cancers in NSW (Table\n1) were obtained from the NSW Central Cancer Registry. The Registry maintains a record of all cases of cancer diagnosed in NSW residents since 1972, with notifications from multiple sources and linkage to death certificates. We included cases diagnosed in 1972–2006 and aged 15–89 years at diagnosis. Cases reported to the Registry through death certificate only or first identified at post-mortem were excluded.\nConditional 5-year relative survival estimates, by type of cancer and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006\nThe NSW Central Cancer Registry is the only population-based cancer registry in Australia that routinely collects information on spread of disease at diagnosis which had been used as an indicator of disease stage at diagnosis in this study. Medical coders from the Registry categorise stage based on information from statutory notification forms and pathology reports using a modified summary classification similar to the Surveillance, Epidemiology, and End Results (SEER) summary stage\n[21]. 
Categories are localised (stage I confined to tissue or organ of origin), regional (stage II spread to adjacent organs or tissues or stage III spread to regional lymph nodes), distant (stage IV with metastases to distant organs), or unknown stage (insufficient information available)\n[22].\nSurvival status was obtained through record linkage of the cancer cases in the Registry with the death records from the NSW Register of Births, Deaths, and Marriages and the National Death Index. All eligible cases were followed up to 31 December 2006 to determine survival status. This passive approach to follow-up may fail to ascertain all deaths and may incorrectly link some incidence and death records. A previous study investigating its completeness and accuracy found loss to follow-up to be uniform from 1980 to 1993 and estimated the resulting overestimation of relative survival to be a maximum of 2%\n[23].\nThis study was approved by the NSW Population and Health Service Research Ethics Committee (reference number: 2011/04/317).", "Estimation of relative survival overcomes the possibility that cause of death on death certificates may be inaccurate\n[24]. Relative survival is the ratio of the observed proportion surviving in a group of patients to the expected proportion that would have survived in an age- and sex-comparable group of people from the general population\n[25]. We calculated relative survival using the period approach\n[26], with cancer patients under observation between 1 January 1998 and 31 December 2006. In period analysis survival times can be left-truncated at the beginning of the period of interest in addition to being right-censored at its end. Expected survival was estimated using the Ederer and Heise (Ederer II) method\n[27]. Observed survival was measured from the month of diagnosis to the date of death or censoring (31 December 2006) whichever occurred first. Survival estimates were stratified by age group (15–49, 50–69 and 70–89) and stage at diagnosis separately. Stata 11 (College Station, TX: StataCorp) was used for all analyses together with publically available commands for estimating relative survival from Dickman et al.\n[28].", "Conditional survival is defined as the probability of surviving an additional y years on the condition that the patient has survived x years. It is calculated by dividing the relative survival at (x + y) years after diagnosis by the relative survival at x years after diagnosis\n[8]. For each type of cancer, 5-year conditional survival is estimated at 1, 3, 5 and 10 years after diagnosis. We calculated the 95% confidence intervals assuming that CS follows a normal distribution and using Paul Dickman’s method for period analysis, the details of which can be found on his website\n[29].", "This study provides quantitative evidence that Australian cancer patients who are still alive ten years after their cancer diagnosis, even those diagnosed with advanced stage disease or at older age, have, at that moment, a much better survival outlook over the next five years than they did at diagnosis. 
The information is important for cancer patients as they face important life decisions in trying to plan their remaining life, and to provide evidence-based optimism as they continue living after their initial cancer diagnosis.\nWhen 5-year relative survival exceeds 95%, the excess mortality is minimal, and so the survival for this group is considered similar to the general population with the same age structure\n[15,16], although this does not necessarily indicate cure of cancer. In NSW, we found that patients who were diagnosed with cancer of the colorectum, cervix and thyroid, and melanoma of the skin had similar mortality expectations after surviving 10 years since diagnosis. This was consistent with another Australian report\n[13]. Our results of 5-year CS after surviving one year since diagnosis were also consistent with a recent international comparison of cancer survival in which NSW data for colorectum, lung and female breast cancer were also included\n[18]. Their published CS rates for the three cancers in NSW\n[18] were slightly higher than those reported here as their estimates were age-adjusted, which would have the effect of increasing overall survival by reducing the weight given to the poorer survival among older people. The overall consistency of these results\n[13,18] with those we reported for overall CS during an almost identical study period provides indirect confirmation of our findings. The strength of our study is that we presented age- and stage-specific CS in addition to the overall CS. By presenting age- and stage-specific CS estimates for 11 major cancers in one geographically defined population, clinicians, cancer patients and their support networks can compare temporal and age-specific patterns in CS across multiple cancer types, thus gaining a greater understanding of the ongoing survival expectations faced by cancer patients.\nOur results are consistent with many international studies including those in Europe\n[15,16] and North America\n[14,17] for a variety of individual cancer types. Our 5-year relative survival estimates conditional on surviving 5 years after diagnosis were very close to those for major cancers reported in Canada including cancer of the colorectum, lung, breast and melanoma\n[14]. The overall patterns of stage-specific CS for major cancer types from SEER data\n[17] and our data were also very similar particularly when accounting for any differences in age distribution: the increase in 5-year relative survival when conditional on more years already survived was greatest for later stage cancer and also the more fatal cancers but the stage differential in survival tended to reduce over time. Our stage-specific 5-year relative survival estimates conditional on having survived 5 years were very similar to those from SEER data\n[17] for localised and regional stage cancers of the colon and rectum, lung, breast and melanoma. However, considerable differences were observed for distant or unknown stages which may reflect different case mix in these two groups of patients due partly to more stringent criteria and data quality control being used in the SEER system than in Australia.\nRegarding the effect of age at diagnosis on CS, patients diagnosed at older ages tended to have lower relative conditional survival, but this effect reduced substantially after surviving 5–10 years for most cancers. 
The overall pattern of age-specific CS estimates was similar to those from other studies in Europe\n[15,16] and US\n[17], although different age categories make it difficult to compare specific estimates. When we reanalysed our data with age categories matching those in a European study\n[15], we found that age-specific CS was consistently higher in NSW, apart from identical results for thyroid cancer for the three younger age groups (data not shown). The overall survival differences for three major cancers of the colorectum, lung and female breast between Australia and Europe had been confirmed by a recent study comparing survival from four major cancers between Australia, Canada and several European countries\n[18]. As suggested in that study\n[18], these survival differences may be due to later diagnosis or differences in treatment in the European population.\nAs was noted in most previous studies, the greatest differences in conditional survival were for those cancers that had initially poor survival, such as lung or pancreatic cancer. This study confirmed a similar pattern for those cancers diagnosed at an advanced stage. While there was a substantial impact of disease stage on survival expectations at diagnosis, for most types of cancer this stage differential decreased as time since diagnosis increased, a pattern that has been reported in many international studies\n[2,8,15,17]. Unfortunately, because of the high initial mortality, the number of people who survive to enjoy this greater survival is low.\nHowever, the fact remains that of the people who do survive more than ten years after diagnosis, many continue to have poorer survival expectations than the general population with the same age structure. This could relate to the impact of the co-morbidities associated with the initial cancer diagnosis (for example, while smoking causes lung cancer it is also associated with increased risk of cardiovascular diseases); the late recurrences of the primary cancer or secondary tumours; or the late side effects of treatment\n[15,16]. These ongoing reduced survival expectations for people diagnosed with late stage cancer in particular have substantial implications for health care providers in providing regular surveillance and monitoring even when the patients have survived at least ten or more years after the initial cancer diagnosis.\nThere are three widely used methods for estimating expected survival for relative survival analysis, commonly known as the Ederer I\n[30], Ederer II\n[27] and Hakulinen\n[31] methods. The Hakulinen method\n[31] was widely used in many international studies of cancer survival using population-based data including the most recent EUROCARE-4 study\n[32] and CONCORD study\n[33]. However, there is a growing consensus among researchers in relative survival analysis using population-based cancer registry data that the Ederer II method is preferable\n[34-36], although relative survival estimates, using any of these methods, are generally very similar. Following this recommendation, we used the Ederer II\n[27] method in our estimation of relative survival. More recently, a modified Ederer II estimator has been proposed which is obtained by weighting the individual observations with their population survival\n[36]. The authors recommend the use of this new estimator when comparing cancer survival between countries because it is believed that this is the only unbiased estimator\n[36]. 
Another recent simulation study provided evidence that, of the widely used estimators compared (including the Ederer II\n[27] and Hakulinen\n[31] estimators), only this new estimator\n[36] is unbiased for net survival\n[37]. As the new estimator has not been used within the context of period analysis, future research in this area may be warranted.\nStrengths of this study are the statewide population-based cancer registry data, including information about the stage at diagnosis and multiple cancer types. This makes our study more representative and comprehensive than many other studies. The similarity between our estimates for all stages of cancer at diagnosis and those for another Australian state\n[13] provides optimism that the stage-specific CS estimates can be generalised nationally. We were unable to adjust for treatment, which may have impacted survival estimates through initial remission of cancer but may also cause longer-term adverse complications. Although mortality information was obtained by matching against the National Death Index, it remains possible that some deaths were missed, thus artificially inflating survival estimates. However, since the matching process was not conditional on cancer type, it is unlikely that this would influence comparisons of conditional survival across cancer types.\nIn conclusion, these data provide important information for cancer patients, based on age and the stage at diagnosis, as they continue on their cancer journey. This information should be used by clinicians as a tool to make evidence-based decisions regarding follow-up, surveillance, or ongoing management according to their patient’s changing survival expectations over time." ]
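The conditional survival calculation described in the Methods above, dividing the cumulative relative survival at x + y years by the cumulative relative survival at x years and attaching a normal-approximation 95% confidence interval, can be sketched in a few lines of Python. This is a minimal illustration only, not the Stata workflow used in the study (Stata 11 with Dickman's relative-survival commands); the function names and the example survival figures are hypothetical, and the standard-error step uses a simplifying independence assumption rather than Dickman's period-analysis variance method.

```python
# Minimal sketch of conditional relative survival, CS(y | x) = RS(x + y) / RS(x),
# with a normal-approximation 95% confidence interval. Hypothetical inputs only;
# not the Stata code used in the study.
import math


def conditional_survival(rs_x: float, rs_x_plus_y: float) -> float:
    """Probability of surviving a further y years given survival to x years."""
    return rs_x_plus_y / rs_x


def normal_ci(estimate: float, se: float, z: float = 1.96) -> tuple:
    """95% CI assuming the estimate is approximately normally distributed."""
    return max(estimate - z * se, 0.0), min(estimate + z * se, 1.0)


# Hypothetical example: cumulative relative survival of 0.40 at 1 year and
# 0.20 at 6 years gives a conditional 5-year relative survival of 0.50 for
# patients who have already survived 1 year after diagnosis.
rs_1, rs_6 = 0.40, 0.20
cs = conditional_survival(rs_1, rs_6)

# Rough delta-method standard error for the ratio, treating the two cumulative
# estimates as independent (a simplification; the study instead follows
# Dickman's period-analysis approach for the variance).
se_1, se_6 = 0.012, 0.010
se_cs = cs * math.sqrt((se_1 / rs_1) ** 2 + (se_6 / rs_6) ** 2)

print(f"CS = {cs:.2f}, 95% CI {normal_ci(cs, se_cs)}")
```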
[ null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Statistical methods", "Conditional survival", "Results", "Discussion and conclusion" ]
[ "Survival estimates for cancer patients are traditionally reported from the time of diagnosis such as five-year survival. It is useful for answering questions that many people ask about their prognosis when first diagnosed with cancer. For cancer patients who have already survived a number of years, survival expectations at diagnosis are too pessimistic because they include people who have already died. An ongoing question among these surviving patients is “now that I have survived for x years, what is the probability that I will survive another y years”. Over the past decade, the concept of conditional survival (CS) has emerged to directly address this question, because it provides cancer patients with survival expectations based on people who have reached a similar point in their cancer journey.\nHowever, numerous previously published CS estimates have focused on one or a few cancer types, including cancer of the head and neck\n[1], stomach\n[2], colon\n[3-5], rectum\n[6], lung\n[7,8], breast\n[9] and melanoma of the skin\n[10-12]. Only a few published studies provided estimates for many cancer sites\n[13-18], and an even smaller number have included stratification by age group and stage at diagnosis\n[15-17]. Ellison et al.\n[14] acknowledged that a stratification of conditional survival estimates by age group at diagnosis provides more relevant clinical information for clinicians and cancer patients. Similarly other studies have acknowledged the limitation of excluding information about stage at diagnosis\n[13,15]. This has been shown to be an important prognostic factor for survival outcomes\n[19]. While it has been suggested that the impact of stage reduces and can disappear for long term conditional survival\n[16], there are currently no published Australian data describing conditional survival outcomes according to the stage at diagnosis.\nThis paper provides conditional survival estimates from New South Wales (NSW), Australia stratified by age group and stage at diagnosis for 11 major cancers.", " Study population New South Wales is the most populous state in Australia with a population of 7.2 million, approximately one-third of the Australian population. Age-standardised mortality rates from cancer in NSW are almost identical to the national rates (187.8 per 100,000 vs 187.1 per 100,00)\n[20]. The de-identified records of people diagnosed with one of 11 major cancers in NSW (Table\n1) were obtained from the NSW Central Cancer Registry. The Registry maintains a record of all cases of cancer diagnosed in NSW residents since 1972, with notifications from multiple sources and linkage to death certificates. We included cases diagnosed in 1972–2006 and aged 15–89 years at diagnosis. Cases reported to the Registry through death certificate only or first identified at post-mortem were excluded.\nConditional 5-year relative survival estimates, by type of cancer and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006\nThe NSW Central Cancer Registry is the only population-based cancer registry in Australia that routinely collects information on spread of disease at diagnosis which had been used as an indicator of disease stage at diagnosis in this study. Medical coders from the Registry categorise stage based on information from statutory notification forms and pathology reports using a modified summary classification similar to the Surveillance, Epidemiology, and End Results (SEER) summary stage\n[21]. 
Categories are localised (stage I confined to tissue or organ of origin), regional (stage II spread to adjacent organs or tissues or stage III spread to regional lymph nodes), distant (stage IV with metastases to distant organs), or unknown stage (insufficient information available)\n[22].\nSurvival status was obtained through record linkage of the cancer cases in the Registry with the death records from the NSW Register of Births, Deaths, and Marriages and the National Death Index. All eligible cases were followed up to 31 December 2006 to determine survival status. This passive approach to follow-up may fail to ascertain all deaths and may incorrectly link some incidence and death records. A previous study investigating its completeness and accuracy found loss to follow-up to be uniform from 1980 to 1993 and estimated the resulting overestimation of relative survival to be a maximum of 2%\n[23].\nThis study was approved by the NSW Population and Health Service Research Ethics Committee (reference number: 2011/04/317).\n Statistical methods Estimation of relative survival overcomes the possibility that cause of death on death certificates may be inaccurate\n[24]. Relative survival is the ratio of the observed proportion surviving in a group of patients to the expected proportion that would have survived in an age- and sex-comparable group of people from the general population\n[25]. We calculated relative survival using the period approach\n[26], with cancer patients under observation between 1 January 1998 and 31 December 2006. In period analysis survival times can be left-truncated at the beginning of the period of interest in addition to being right-censored at its end. Expected survival was estimated using the Ederer and Heise (Ederer II) method\n[27]. Observed survival was measured from the month of diagnosis to the date of death or censoring (31 December 2006) whichever occurred first. Survival estimates were stratified by age group (15–49, 50–69 and 70–89) and stage at diagnosis separately. Stata 11 (College Station, TX: StataCorp) was used for all analyses together with publically available commands for estimating relative survival from Dickman et al.\n[28].\n Conditional survival Conditional survival is defined as the probability of surviving an additional y years on the condition that the patient has survived x years. It is calculated by dividing the relative survival at (x + y) years after diagnosis by the relative survival at x years after diagnosis\n[8]. For each type of cancer, 5-year conditional survival is estimated at 1, 3, 5 and 10 years after diagnosis. We calculated the 95% confidence intervals assuming that CS follows a normal distribution and using Paul Dickman’s method for period analysis, the details of which can be found on his website\n[29].", "New South Wales is the most populous state in Australia with a population of 7.2 million, approximately one-third of the Australian population. Age-standardised mortality rates from cancer in NSW are almost identical to the national rates (187.8 per 100,000 vs 187.1 per 100,000)\n[20]. The de-identified records of people diagnosed with one of 11 major cancers in NSW (Table\n1) were obtained from the NSW Central Cancer Registry. The Registry maintains a record of all cases of cancer diagnosed in NSW residents since 1972, with notifications from multiple sources and linkage to death certificates. We included cases diagnosed in 1972–2006 and aged 15–89 years at diagnosis. Cases reported to the Registry through death certificate only or first identified at post-mortem were excluded.\nConditional 5-year relative survival estimates, by type of cancer and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006\nThe NSW Central Cancer Registry is the only population-based cancer registry in Australia that routinely collects information on spread of disease at diagnosis, which was used as an indicator of disease stage at diagnosis in this study. Medical coders from the Registry categorise stage based on information from statutory notification forms and pathology reports using a modified summary classification similar to the Surveillance, Epidemiology, and End Results (SEER) summary stage\n[21]. Categories are localised (stage I confined to tissue or organ of origin), regional (stage II spread to adjacent organs or tissues or stage III spread to regional lymph nodes), distant (stage IV with metastases to distant organs), or unknown stage (insufficient information available)\n[22].\nSurvival status was obtained through record linkage of the cancer cases in the Registry with the death records from the NSW Register of Births, Deaths, and Marriages and the National Death Index. All eligible cases were followed up to 31 December 2006 to determine survival status. This passive approach to follow-up may fail to ascertain all deaths and may incorrectly link some incidence and death records. A previous study investigating its completeness and accuracy found loss to follow-up to be uniform from 1980 to 1993 and estimated the resulting overestimation of relative survival to be a maximum of 2%\n[23].\nThis study was approved by the NSW Population and Health Service Research Ethics Committee (reference number: 2011/04/317).", "Estimation of relative survival overcomes the possibility that cause of death on death certificates may be inaccurate\n[24]. Relative survival is the ratio of the observed proportion surviving in a group of patients to the expected proportion that would have survived in an age- and sex-comparable group of people from the general population\n[25]. We calculated relative survival using the period approach\n[26], with cancer patients under observation between 1 January 1998 and 31 December 2006. In period analysis survival times can be left-truncated at the beginning of the period of interest in addition to being right-censored at its end. Expected survival was estimated using the Ederer and Heise (Ederer II) method\n[27]. 
Observed survival was measured from the month of diagnosis to the date of death or censoring (31 December 2006) whichever occurred first. Survival estimates were stratified by age group (15–49, 50–69 and 70–89) and stage at diagnosis separately. Stata 11 (College Station, TX: StataCorp) was used for all analyses together with publically available commands for estimating relative survival from Dickman et al.\n[28].", "Conditional survival is defined as the probability of surviving an additional y years on the condition that the patient has survived x years. It is calculated by dividing the relative survival at (x + y) years after diagnosis by the relative survival at x years after diagnosis\n[8]. For each type of cancer, 5-year conditional survival is estimated at 1, 3, 5 and 10 years after diagnosis. We calculated the 95% confidence intervals assuming that CS follows a normal distribution and using Paul Dickman’s method for period analysis, the details of which can be found on his website\n[29].", "A total of 193,182 cases were included in this study, with the most common cancers being prostate (39,851), female breast (36,585) and colorectum (35,455) (Table\n1). Table\n1 shows the 5-year relative survival estimates at diagnosis for each of the 11 selected cancer types, along with 5-year CS estimates for patients who have survived 1, 3, 5 and 10 years after diagnosis. Overall, 5-year relative survival tended to increase when conditional on increasing years after diagnosis and the greatest changes in CS occurred for cancers with poor prognosis at diagnosis for example, patients with aggressive cancers or those with advanced stage or at older age. For example, people diagnosed with lung cancer had an initial 5-year relative survival of 14%. However, their conditional 5-year relative survival increased to 33% after they survived one-year after diagnosis, and reached 85% if they survived 10 years after diagnosis. In contrast, 5-year relative survival was initially very high for men with prostate cancer (90%), with no change after surviving 10 years since diagnosis (90%).\nTable\n2 shows the 5-year relative survival estimates stratified by stage at diagnosis for each of the 11 selected cancers and conditional on having survived 1, 3, 5 and 10 years after diagnosis. The improvement in 5-year relative survival was greatest for cases with distant metastases when conditional on increasing years already survived whereas the impact on early stage cancers was much smaller.\nConditional 5-year relative survival estimates, by type of cancer, disease stage and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006\n† Thickness of the lesion was also used to categorise disease stage at diagnosis for melanoma [T3 (thickness = 2.01-4.0 mm) and T4 (thickness > 4.0 mm) stages were grouped with ‘spread to regional lymph nodes’ as ”regional”]. Since thickness data prior to 1983 were considered unreliable, we used melanoma data from 1983 onward for the stage-specific survival estimates.\nAge-specific and stage-specific conditional 5-year relative survival at 0, 1, 3, 5 and 10 years after diagnosis for each selected cancer are also presented graphically in Figures\n1 and\n2. 
For most cancers, the age or stage differential in survival at diagnosis generally decreased over time except for cancers of the lung and pancreas.\nAge-specific conditional 5-year relative survival at 0, 1, 3, 5, 10 years after diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998–2006.\nStage-specific conditional 5-year relative survival at 0, 1, 3, 5, 10 years after diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998–2006.", "This study provides quantitative evidence that Australian cancer patients who are still alive ten years after their cancer diagnosis, even those diagnosed with advanced stage disease or at older age, have, at that moment, a much better survival outlook over the next five years than they did at diagnosis. The information is important for cancer patients as they face important life decisions in trying to plan their remaining life, and to provide evidence-based optimism as they continue living after their initial cancer diagnosis.\nWhen 5-year relative survival exceeds 95%, the excess mortality is minimal, and so the survival for this group is considered similar to the general population with the same age structure\n[15,16], although this does not necessarily indicate cure of cancer. In NSW, we found that patients who were diagnosed with cancer of the colorectum, cervix and thyroid, and melanoma of the skin had similar mortality expectations after surviving 10 years since diagnosis. This was consistent with another Australian report\n[13]. Our results of 5-year CS after surviving one year since diagnosis were also consistent with a recent international comparison of cancer survival in which NSW data for colorectum, lung and female breast cancer were also included\n[18]. Their published CS rates for the three cancers in NSW\n[18] were slightly higher than those reported here as their estimates were age-adjusted, which would have the effect of increasing overall survival by reducing the weight given to the poorer survival among older people. The overall consistency of these results\n[13,18] with those we reported for overall CS during an almost identical study period provides indirect confirmation of our findings. The strength of our study is that we presented age- and stage-specific CS in addition to the overall CS. By presenting age- and stage-specific CS estimates for 11 major cancers in one geographically defined population, clinicians, cancer patients and their support networks can compare temporal and age-specific patterns in CS across multiple cancer types, thus gaining a greater understanding of the ongoing survival expectations faced by cancer patients.\nOur results are consistent with many international studies including those in Europe\n[15,16] and North America\n[14,17] for a variety of individual cancer types. Our 5-year relative survival estimates conditional on surviving 5 years after diagnosis were very close to those for major cancers reported in Canada including cancer of the colorectum, lung, breast and melanoma\n[14]. The overall patterns of stage-specific CS for major cancer types from SEER data\n[17] and our data were also very similar particularly when accounting for any differences in age distribution: the increase in 5-year relative survival when conditional on more years already survived was greatest for later stage cancer and also the more fatal cancers but the stage differential in survival tended to reduce over time. 
Our stage-specific 5-year relative survival estimates conditional on having survived 5 years were very similar to those from SEER data\n[17] for localised and regional stage cancers of the colon and rectum, lung, breast and melanoma. However, considerable differences were observed for distant or unknown stages which may reflect different case mix in these two groups of patients due partly to more stringent criteria and data quality control being used in the SEER system than in Australia.\nRegarding the effect of age at diagnosis on CS, patients diagnosed at older ages tended to have lower relative conditional survival, but this effect reduced substantially after surviving 5–10 years for most cancers. The overall pattern of age-specific CS estimates was similar to those from other studies in Europe\n[15,16] and US\n[17], although different age categories make it difficult to compare specific estimates. When we reanalysed our data with age categories matching those in a European study\n[15], we found that age-specific CS was consistently higher in NSW, apart from identical results for thyroid cancer for the three younger age groups (data not shown). The overall survival differences for three major cancers of the colorectum, lung and female breast between Australia and Europe had been confirmed by a recent study comparing survival from four major cancers between Australia, Canada and several European countries\n[18]. As suggested in that study\n[18], these survival differences may be due to later diagnosis or differences in treatment in the European population.\nAs was noted in most previous studies, the greatest differences in conditional survival were for those cancers that had initially poor survival, such as lung or pancreatic cancer. This study confirmed a similar pattern for those cancers diagnosed at an advanced stage. While there was a substantial impact of disease stage on survival expectations at diagnosis, for most types of cancer this stage differential decreased as time since diagnosis increased, a pattern that has been reported in many international studies\n[2,8,15,17]. Unfortunately, because of the high initial mortality, the number of people who survive to enjoy this greater survival is low.\nHowever, the fact remains that of the people who do survive more than ten years after diagnosis, many continue to have poorer survival expectations than the general population with the same age structure. This could relate to the impact of the co-morbidities associated with the initial cancer diagnosis (for example, while smoking causes lung cancer it is also associated with increased risk of cardiovascular diseases); the late recurrences of the primary cancer or secondary tumours; or the late side effects of treatment\n[15,16]. These ongoing reduced survival expectations for people diagnosed with late stage cancer in particular have substantial implications for health care providers in providing regular surveillance and monitoring even when the patients have survived at least ten or more years after the initial cancer diagnosis.\nThere are three widely used methods for estimating expected survival for relative survival analysis, commonly known as the Ederer I\n[30], Ederer II\n[27] and Hakulinen\n[31] methods. The Hakulinen method\n[31] was widely used in many international studies of cancer survival using population-based data including the most recent EUROCARE-4 study\n[32] and CONCORD study\n[33]. 
However, there is a growing consensus among researchers in relative survival analysis using population-based cancer registry data that the Ederer II method is preferable\n[34-36], although relative survival estimates, using any of these methods, are generally very similar. Following this recommendation, we used the Ederer II\n[27] method in our estimation of relative survival. More recently, a modified Ederer II estimator has been proposed which is obtained by weighting the individual observations with their population survival\n[36]. The authors recommend the use of this new estimator when comparing cancer survival between countries because it is believed that this is the only unbiased estimator\n[36]. Another recent simulation study provided evidence that, of the widely used estimators compared (including the Ederer II\n[27] and Hakulinen\n[31] estimators), only this new estimator\n[36] is unbiased for net survival\n[37]. As the new estimator has not been used within the context of period analysis, future research in this area may be warranted.\nStrengths of this study are the statewide population-based cancer registry data, including information about the stage at diagnosis and multiple cancer types. This makes our study more representative and comprehensive than many other studies. The similarity between our estimates for all stages of cancer at diagnosis and those for another Australian state\n[13] provides optimism that the stage-specific CS estimates can be generalised nationally. We were unable to adjust for treatment, which may have impacted survival estimates through initial remission of cancer but may also cause longer-term adverse complications. Although mortality information was obtained by matching against the National Death Index, it remains possible that some deaths were missed, thus artificially inflating survival estimates. However, since the matching process was not conditional on cancer type, it is unlikely that this would influence comparisons of conditional survival across cancer types.\nIn conclusion, these data provide important information for cancer patients, based on age and the stage at diagnosis, as they continue on their cancer journey. This information should be used by clinicians as a tool to make evidence-based decisions regarding follow-up, surveillance, or ongoing management according to their patient’s changing survival expectations over time." ]
[ null, "methods", null, null, null, "results", null ]
[ "Conditional survival", "Relative survival", "Cancer registry", "Australia" ]
Background: Survival estimates for cancer patients are traditionally reported from the time of diagnosis such as five-year survival. It is useful for answering questions that many people ask about their prognosis when first diagnosed with cancer. For cancer patients who have already survived a number of years, survival expectations at diagnosis are too pessimistic because they include people who have already died. An ongoing question among these surviving patients is “now that I have survived for x years, what is the probability that I will survive another y years”. Over the past decade, the concept of conditional survival (CS) has emerged to directly address this question, because it provides cancer patients with survival expectations based on people who have reached a similar point in their cancer journey. However, numerous previously published CS estimates have focused on one or a few cancer types, including cancer of the head and neck [1], stomach [2], colon [3-5], rectum [6], lung [7,8], breast [9] and melanoma of the skin [10-12]. Only a few published studies provided estimates for many cancer sites [13-18], and an even smaller number have included stratification by age group and stage at diagnosis [15-17]. Ellison et al. [14] acknowledged that a stratification of conditional survival estimates by age group at diagnosis provides more relevant clinical information for clinicians and cancer patients. Similarly other studies have acknowledged the limitation of excluding information about stage at diagnosis [13,15]. This has been shown to be an important prognostic factor for survival outcomes [19]. While it has been suggested that the impact of stage reduces and can disappear for long term conditional survival [16], there are currently no published Australian data describing conditional survival outcomes according to the stage at diagnosis. This paper provides conditional survival estimates from New South Wales (NSW), Australia stratified by age group and stage at diagnosis for 11 major cancers. Methods: Study population New South Wales is the most populous state in Australia with a population of 7.2 million, approximately one-third of the Australian population. Age-standardised mortality rates from cancer in NSW are almost identical to the national rates (187.8 per 100,000 vs 187.1 per 100,000) [20]. The de-identified records of people diagnosed with one of 11 major cancers in NSW (Table 1) were obtained from the NSW Central Cancer Registry. The Registry maintains a record of all cases of cancer diagnosed in NSW residents since 1972, with notifications from multiple sources and linkage to death certificates. We included cases diagnosed in 1972–2006 and aged 15–89 years at diagnosis. Cases reported to the Registry through death certificate only or first identified at post-mortem were excluded. Conditional 5-year relative survival estimates, by type of cancer and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006 The NSW Central Cancer Registry is the only population-based cancer registry in Australia that routinely collects information on spread of disease at diagnosis, which was used as an indicator of disease stage at diagnosis in this study. Medical coders from the Registry categorise stage based on information from statutory notification forms and pathology reports using a modified summary classification similar to the Surveillance, Epidemiology, and End Results (SEER) summary stage [21]. 
Categories are localised (stage I confined to tissue or organ of origin), regional (stage II spread to adjacent organs or tissues or stage III spread to regional lymph nodes), distant (stage IV with metastases to distant organs), or unknown stage (insufficient information available) [22]. Survival status was obtained through record linkage of the cancer cases in the Registry with the death records from the NSW Register of Births, Deaths, and Marriages and the National Death Index. All eligible cases were followed up to 31 December 2006 to determine survival status. This passive approach to follow-up may fail to ascertain all deaths and may incorrectly link some incidence and death records. A previous study investigating its completeness and accuracy found loss to follow-up to be uniform from 1980 to 1993 and estimated the resulting overestimation of relative survival to be a maximum of 2% [23]. This study was approved by the NSW Population and Health Service Research Ethics Committee (reference number: 2011/04/317). 
Statistical methods Estimation of relative survival overcomes the possibility that cause of death on death certificates may be inaccurate [24]. Relative survival is the ratio of the observed proportion surviving in a group of patients to the expected proportion that would have survived in an age- and sex-comparable group of people from the general population [25]. We calculated relative survival using the period approach [26], with cancer patients under observation between 1 January 1998 and 31 December 2006. In period analysis survival times can be left-truncated at the beginning of the period of interest in addition to being right-censored at its end. Expected survival was estimated using the Ederer and Heise (Ederer II) method [27]. Observed survival was measured from the month of diagnosis to the date of death or censoring (31 December 2006) whichever occurred first. Survival estimates were stratified by age group (15–49, 50–69 and 70–89) and stage at diagnosis separately. Stata 11 (College Station, TX: StataCorp) was used for all analyses together with publically available commands for estimating relative survival from Dickman et al. [28]. Conditional survival Conditional survival is defined as the probability of surviving an additional y years on the condition that the patient has survived x years. It is calculated by dividing the relative survival at (x + y) years after diagnosis by the relative survival at x years after diagnosis [8]. For each type of cancer, 5-year conditional survival is estimated at 1, 3, 5 and 10 years after diagnosis. We calculated the 95% confidence intervals assuming that CS follows a normal distribution and using Paul Dickman’s method for period analysis, the details of which can be found on his website [29]. 
Study population: New South Wales is the most populous state in Australia with a population of 7.2 million, approximately one-third of the Australian population. Age-standardised mortality rates from cancer in NSW are almost identical to the national rates (187.8 per 100,000 vs 187.1 per 100,000) [20]. The de-identified records of people diagnosed with one of 11 major cancers in NSW (Table 1) were obtained from the NSW Central Cancer Registry. The Registry maintains a record of all cases of cancer diagnosed in NSW residents since 1972, with notifications from multiple sources and linkage to death certificates. We included cases diagnosed in 1972–2006 and aged 15–89 years at diagnosis. Cases reported to the Registry through death certificate only or first identified at post-mortem were excluded. Conditional 5-year relative survival estimates, by type of cancer and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006 The NSW Central Cancer Registry is the only population-based cancer registry in Australia that routinely collects information on spread of disease at diagnosis, which was used as an indicator of disease stage at diagnosis in this study. Medical coders from the Registry categorise stage based on information from statutory notification forms and pathology reports using a modified summary classification similar to the Surveillance, Epidemiology, and End Results (SEER) summary stage [21]. Categories are localised (stage I confined to tissue or organ of origin), regional (stage II spread to adjacent organs or tissues or stage III spread to regional lymph nodes), distant (stage IV with metastases to distant organs), or unknown stage (insufficient information available) [22]. Survival status was obtained through record linkage of the cancer cases in the Registry with the death records from the NSW Register of Births, Deaths, and Marriages and the National Death Index. All eligible cases were followed up to 31 December 2006 to determine survival status. This passive approach to follow-up may fail to ascertain all deaths and may incorrectly link some incidence and death records. A previous study investigating its completeness and accuracy found loss to follow-up to be uniform from 1980 to 1993 and estimated the resulting overestimation of relative survival to be a maximum of 2% [23]. This study was approved by the NSW Population and Health Service Research Ethics Committee (reference number: 2011/04/317). Statistical methods: Estimation of relative survival overcomes the possibility that cause of death on death certificates may be inaccurate [24]. Relative survival is the ratio of the observed proportion surviving in a group of patients to the expected proportion that would have survived in an age- and sex-comparable group of people from the general population [25]. We calculated relative survival using the period approach [26], with cancer patients under observation between 1 January 1998 and 31 December 2006. In period analysis survival times can be left-truncated at the beginning of the period of interest in addition to being right-censored at its end. Expected survival was estimated using the Ederer and Heise (Ederer II) method [27]. Observed survival was measured from the month of diagnosis to the date of death or censoring (31 December 2006) whichever occurred first. Survival estimates were stratified by age group (15–49, 50–69 and 70–89) and stage at diagnosis separately. 
Stata 11 (College Station, TX: StataCorp) was used for all analyses together with publically available commands for estimating relative survival from Dickman et al. [28]. Conditional survival: Conditional survival is defined as the probability of surviving an additional y years on the condition that the patient has survived x years. It is calculated by dividing the relative survival at (x + y) years after diagnosis by the relative survival at x years after diagnosis [8]. For each type of cancer, 5-year conditional survival is estimated at 1, 3, 5 and 10 years after diagnosis. We calculated the 95% confidence intervals assuming that CS follows a normal distribution and using Paul Dickman’s method for period analysis, the details of which can be found on his website [29]. Results: A total of 193,182 cases were included in this study, with the most common cancers being prostate (39,851), female breast (36,585) and colorectum (35,455) (Table 1). Table 1 shows the 5-year relative survival estimates at diagnosis for each of the 11 selected cancer types, along with 5-year CS estimates for patients who have survived 1, 3, 5 and 10 years after diagnosis. Overall, 5-year relative survival tended to increase when conditional on increasing years after diagnosis and the greatest changes in CS occurred for cancers with poor prognosis at diagnosis for example, patients with aggressive cancers or those with advanced stage or at older age. For example, people diagnosed with lung cancer had an initial 5-year relative survival of 14%. However, their conditional 5-year relative survival increased to 33% after they survived one-year after diagnosis, and reached 85% if they survived 10 years after diagnosis. In contrast, 5-year relative survival was initially very high for men with prostate cancer (90%), with no change after surviving 10 years since diagnosis (90%). Table 2 shows the 5-year relative survival estimates stratified by stage at diagnosis for each of the 11 selected cancers and conditional on having survived 1, 3, 5 and 10 years after diagnosis. The improvement in 5-year relative survival was greatest for cases with distant metastases when conditional on increasing years already survived whereas the impact on early stage cancers was much smaller. Conditional 5-year relative survival estimates, by type of cancer, disease stage and number of years since diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998-2006 † Thickness of the lesion was also used to categorise disease stage at diagnosis for melanoma [T3 (thickness = 2.01-4.0 mm) and T4 (thickness > 4.0 mm) stages were grouped with ‘spread to regional lymph nodes’ as ”regional”]. Since thickness data prior to 1983 were considered unreliable, we used melanoma data from 1983 onward for the stage-specific survival estimates. Age-specific and stage-specific conditional 5-year relative survival at 0, 1, 3, 5 and 10 years after diagnosis for each selected cancer are also presented graphically in Figures 1 and 2. For most cancers, the age or stage differential in survival at diagnosis generally decreased over time except for cancers of the lung and pancreas. Age-specific conditional 5-year relative survival at 0, 1, 3, 5, 10 years after diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998–2006. Stage-specific conditional 5-year relative survival at 0, 1, 3, 5, 10 years after diagnosis, for patients aged 15–89 years at diagnosis, NSW Australia 1998–2006. 
Discussion and conclusion: This study provides quantitative evidence that Australian cancer patients who are still alive ten years after their cancer diagnosis, even those diagnosed with advanced stage disease or at older age, have, at that moment, a much better survival outlook over the next five years than they did at diagnosis. The information is important for cancer patients as they face important life decisions in trying to plan their remaining life, and to provide evidence-based optimism as they continue living after their initial cancer diagnosis. When 5-year relative survival exceeds 95%, the excess mortality is minimal, and so the survival for this group is considered similar to the general population with the same age structure [15,16], although this does not necessarily indicate cure of cancer. In NSW, we found that patients who were diagnosed with cancer of the colorectum, cervix and thyroid, and melanoma of the skin had similar mortality expectations after surviving 10 years since diagnosis. This was consistent with another Australian report [13]. Our results of 5-year CS after surviving one year since diagnosis were also consistent with a recent international comparison of cancer survival in which NSW data for colorectum, lung and female breast cancer were also included [18]. Their published CS rates for the three cancers in NSW [18] were slightly higher than those reported here as their estimates were age-adjusted, which would have the effect of increasing overall survival by reducing the weight given to the poorer survival among older people. The overall consistency of these results [13,18] with those we reported for overall CS during an almost identical study period provides indirect confirmation of our findings. The strength of our study is that we presented age- and stage-specific CS in addition to the overall CS. By presenting age- and stage-specific CS estimates for 11 major cancers in one geographically defined population, clinicians, cancer patients and their support networks can compare temporal and age-specific patterns in CS across multiple cancer types, thus gaining a greater understanding of the ongoing survival expectations faced by cancer patients. Our results are consistent with many international studies including those in Europe [15,16] and North America [14,17] for a variety of individual cancer types. Our 5-year relative survival estimates conditional on surviving 5 years after diagnosis were very close to those for major cancers reported in Canada including cancer of the colorectum, lung, breast and melanoma [14]. The overall patterns of stage-specific CS for major cancer types from SEER data [17] and our data were also very similar particularly when accounting for any differences in age distribution: the increase in 5-year relative survival when conditional on more years already survived was greatest for later stage cancer and also the more fatal cancers but the stage differential in survival tended to reduce over time. Our stage-specific 5-year relative survival estimates conditional on having survived 5 years were very similar to those from SEER data [17] for localised and regional stage cancers of the colon and rectum, lung, breast and melanoma. However, considerable differences were observed for distant or unknown stages which may reflect different case mix in these two groups of patients due partly to more stringent criteria and data quality control being used in the SEER system than in Australia. 
Regarding the effect of age at diagnosis on CS, patients diagnosed at older ages tended to have lower relative conditional survival, but this effect reduced substantially after surviving 5–10 years for most cancers. The overall pattern of age-specific CS estimates was similar to those from other studies in Europe [15,16] and US [17], although different age categories make it difficult to compare specific estimates. When we reanalysed our data with age categories matching those in a European study [15], we found that age-specific CS was consistently higher in NSW, apart from identical results for thyroid cancer for the three younger age groups (data not shown). The overall survival differences for three major cancers of the colorectum, lung and female breast between Australia and Europe had been confirmed by a recent study comparing survival from four major cancers between Australia, Canada and several European countries [18]. As suggested in that study [18], these survival differences may be due to later diagnosis or differences in treatment in the European population. As was noted in most previous studies, the greatest differences in conditional survival were for those cancers that had initially poor survival, such as lung or pancreatic cancer. This study confirmed a similar pattern for those cancers diagnosed at an advanced stage. While there was a substantial impact of disease stage on survival expectations at diagnosis, for most types of cancer this stage differential decreased as time since diagnosis increased, a pattern that has been reported in many international studies [2,8,15,17]. Unfortunately, because of the high initial mortality, the number of people who survive to enjoy this greater survival is low. However, the fact remains that of the people who do survive more than ten years after diagnosis, many continue to have poorer survival expectations than the general population with the same age structure. This could relate to the impact of the co-morbidities associated with the initial cancer diagnosis (for example, while smoking causes lung cancer it is also associated with increased risk of cardiovascular diseases); the late recurrences of the primary cancer or secondary tumours; or the late side effects of treatment [15,16]. These ongoing reduced survival expectations for people diagnosed with late stage cancer in particular have substantial implications for health care providers in providing regular surveillance and monitoring even when the patients have survived at least ten or more years after the initial cancer diagnosis. There are three widely used methods for estimating expected survival for relative survival analysis, commonly known as the Ederer I [30], Ederer II [27] and Hakulinen [31] methods. The Hakulinen method [31] was widely used in many international studies of cancer survival using population-based data including the most recent EUROCARE-4 study [32] and CONCORD study [33]. However, there is a growing consensus among researchers in relative survival analysis using population-based cancer registry data that the Ederer II method is preferable [34-36], although relative survival estimates, using any of these methods, are generally very similar. Following this recommendation, we used the Ederer II [27] method in our estimation of relative survival. More recently, a modified Ederer II estimator has been proposed which is obtained by weighting the individual observations with their population survival [36]. 
The authors recommend the use of this new estimator when comparing cancer survival between countries because it is believed that this is the only unbiased estimator [36]. Another recent simulation study provided evidence that, of the widely used estimators compared (including the Ederer II [27] and Hakulinen [31] estimators), only this new estimator [36] is unbiased for net survival [37]. As the new estimator has not been used within the context of period analysis, future research in this area may be warranted. Strengths of this study are the statewide population-based cancer registry data, including information about the stage at diagnosis and multiple cancer types. This makes our study more representative and comprehensive than many other studies. The similarity between our estimates for all stages of cancer at diagnosis and those for another Australian state [13] provides optimism that the stage-specific CS estimates can be generalised nationally. We were unable to adjust for treatment, which may have impacted survival estimates through initial remission of cancer but may also cause longer-term adverse complications. Although mortality information was obtained by matching against the National Death Index, it remains possible that some deaths were missed, thus artificially inflating survival estimates. However, since the matching process was not conditional on cancer type, it is unlikely that this would influence comparisons of conditional survival across cancer types. In conclusion, these data provide important information for cancer patients, based on age and the stage at diagnosis, as they continue on their cancer journey. This information should be used by clinicians as a tool to make evidence-based decisions regarding follow-up, surveillance, or ongoing management according to their patient’s changing survival expectations over time.
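The period approach with Ederer II expected survival summarised above can be illustrated with a simplified annual life-table sketch in Python. This is not the study's implementation (the analyses were run in Stata 11 with Dickman's relative-survival commands); the Patient records, the expected_annual_survival life-table lookup and the calendar window below are hypothetical placeholders, and refinements such as within-interval delayed entry and the actuarial adjustment for mid-interval censoring are deliberately omitted.

```python
# Simplified sketch of period-based relative survival with Ederer II expected
# survival. Not the study's Stata implementation; the life-table lookup and the
# example patients are hypothetical, and within-interval adjustments are omitted.
from dataclasses import dataclass

PERIOD_START, PERIOD_END = 1998.0, 2007.0  # calendar window, roughly 1998-2006


@dataclass
class Patient:
    diag_year: float   # fractional calendar year of diagnosis
    exit_year: float   # fractional calendar year of death or censoring
    died: bool
    age_at_diag: int
    sex: str


def expected_annual_survival(age: int, sex: str, year: float) -> float:
    """Placeholder for a general-population life-table lookup (age, sex, year)."""
    return 0.99 if age < 70 else 0.95  # hypothetical probabilities


def relative_survival(patients, max_years: int = 5) -> float:
    """Cumulative relative survival over annual intervals since diagnosis.

    Ederer II: expected survival in each interval is averaged over the patients
    still under observation in that interval. Period analysis: only follow-up
    falling inside [PERIOD_START, PERIOD_END) contributes, giving left
    truncation at the window start and right censoring at its end.
    """
    cumulative = 1.0
    for k in range(max_years):
        at_risk, deaths, expected = 0, 0, []
        for p in patients:
            start = p.diag_year + k          # k-th year since diagnosis
            stop = min(p.diag_year + k + 1, p.exit_year)
            if stop <= max(start, PERIOD_START) or start >= PERIOD_END:
                continue                      # no observed time in this interval/window
            at_risk += 1
            if p.died and p.exit_year <= p.diag_year + k + 1 and p.exit_year <= PERIOD_END:
                deaths += 1
            expected.append(expected_annual_survival(p.age_at_diag + k, p.sex, start))
        if at_risk == 0:
            break
        observed = 1.0 - deaths / at_risk     # crude actuarial interval survival
        cumulative *= observed / (sum(expected) / len(expected))
    return cumulative


# Hypothetical usage: two toy records, far too few for a real estimate.
cohort = [
    Patient(diag_year=1999.5, exit_year=2003.2, died=True, age_at_diag=62, sex="F"),
    Patient(diag_year=2001.0, exit_year=2007.0, died=False, age_at_diag=55, sex="M"),
]
print(round(relative_survival(cohort), 3))
```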
Background: Estimated conditional survival for cancer patients diagnosed at different ages and disease stage provides important information for cancer patients and clinicians in planning follow-up, surveillance and ongoing management. Methods: Using population-based cancer registry data for New South Wales Australia, we estimated conditional 5-year relative survival for 11 major cancers diagnosed 1972-2006 by time since diagnosis and age and stage at diagnosis. Results: 193,182 cases were included, with the most common cancers being prostate (39,851), female breast (36,585) and colorectal (35,455). Five-year relative survival tended to increase with increasing years already survived and improvement was greatest for cancers with poor prognosis at diagnosis (lung or pancreas) and for those with advanced stage or older age at diagnosis. After surviving 10 years, conditional 5-year survival was over 95% for 6 localised, 6 regional, 3 distant and 3 unknown stage cancers. For the remaining patient groups, conditional 5-year survival ranged from 74% (for distant stage bladder cancer) to 94% (for 4 cancers at different stages), indicating that they continue to have excess mortality 10-15 years after diagnosis. Conclusions: These data provide important information for cancer patients, based on age and stage at diagnosis, as they continue on their cancer journey. This information may also be used by clinicians as a tool to make more evidence-based decisions regarding follow-up, surveillance, or ongoing management according to patients' changing survival expectations over time.
Background: Survival estimates for cancer patients are traditionally reported from the time of diagnosis, such as five-year survival. These estimates are useful for answering the questions that many people ask about their prognosis when first diagnosed with cancer. For cancer patients who have already survived a number of years, however, survival expectations at diagnosis are too pessimistic because they include people who have already died. An ongoing question among these surviving patients is "now that I have survived for x years, what is the probability that I will survive another y years?". Over the past decade, the concept of conditional survival (CS) has emerged to directly address this question, because it provides cancer patients with survival expectations based on people who have reached a similar point in their cancer journey. However, most previously published CS estimates have focused on only one or a few cancer types, including cancer of the head and neck [1], stomach [2], colon [3-5], rectum [6], lung [7,8], breast [9] and melanoma of the skin [10-12]. Only a few published studies have provided estimates for many cancer sites [13-18], and an even smaller number have included stratification by age group and stage at diagnosis [15-17]. Ellison et al. [14] acknowledged that stratification of conditional survival estimates by age group at diagnosis provides more relevant clinical information for clinicians and cancer patients. Similarly, other studies have acknowledged the limitation of excluding information about stage at diagnosis [13,15], which has been shown to be an important prognostic factor for survival outcomes [19]. While it has been suggested that the impact of stage reduces, and can disappear, for long-term conditional survival [16], there are currently no published Australian data describing conditional survival outcomes according to stage at diagnosis. This paper provides conditional survival estimates from New South Wales (NSW), Australia, stratified by age group and stage at diagnosis for 11 major cancers. Discussion and conclusion: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/12/460/prepub
Background: Estimated conditional survival for cancer patients diagnosed at different ages and disease stage provides important information for cancer patients and clinicians in planning follow-up, surveillance and ongoing management. Methods: Using population-based cancer registry data for New South Wales Australia, we estimated conditional 5-year relative survival for 11 major cancers diagnosed 1972-2006 by time since diagnosis and age and stage at diagnosis. Results: 193,182 cases were included, with the most common cancers being prostate (39,851), female breast (36,585) and colorectal (35,455). Five-year relative survival tended to increase with increasing years already survived and improvement was greatest for cancers with poor prognosis at diagnosis (lung or pancreas) and for those with advanced stage or older age at diagnosis. After surviving 10 years, conditional 5-year survival was over 95% for 6 localised, 6 regional, 3 distant and 3 unknown stage cancers. For the remaining patient groups, conditional 5-year survival ranged from 74% (for distant stage bladder cancer) to 94% (for 4 cancers at different stages), indicating that they continue to have excess mortality 10-15 years after diagnosis. Conclusions: These data provide important information for cancer patients, based on age and stage at diagnosis, as they continue on their cancer journey. This information may also be used by clinicians as a tool to make more evidence-based decisions regarding follow-up, surveillance, or ongoing management according to patients' changing survival expectations over time.
5,016
296
[ 394, 461, 218, 122, 1608 ]
7
[ "survival", "cancer", "diagnosis", "stage", "years", "relative", "relative survival", "years diagnosis", "nsw", "age" ]
[ "survival cancers initially", "patients survival expectations", "comparison cancer survival", "survival estimates cancer", "conditional survival cancer" ]
[CONTENT] Conditional survival | Relative survival | Cancer registry | Australia [SUMMARY]
[CONTENT] Conditional survival | Relative survival | Cancer registry | Australia [SUMMARY]
[CONTENT] Conditional survival | Relative survival | Cancer registry | Australia [SUMMARY]
[CONTENT] Conditional survival | Relative survival | Cancer registry | Australia [SUMMARY]
[CONTENT] Conditional survival | Relative survival | Cancer registry | Australia [SUMMARY]
[CONTENT] Conditional survival | Relative survival | Cancer registry | Australia [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Breast Neoplasms | Colorectal Neoplasms | Female | Humans | Male | Middle Aged | Mortality | Neoplasm Staging | Neoplasms | New South Wales | Population Surveillance | Prostatic Neoplasms | Registries | Survival Analysis | Survival Rate | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Breast Neoplasms | Colorectal Neoplasms | Female | Humans | Male | Middle Aged | Mortality | Neoplasm Staging | Neoplasms | New South Wales | Population Surveillance | Prostatic Neoplasms | Registries | Survival Analysis | Survival Rate | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Breast Neoplasms | Colorectal Neoplasms | Female | Humans | Male | Middle Aged | Mortality | Neoplasm Staging | Neoplasms | New South Wales | Population Surveillance | Prostatic Neoplasms | Registries | Survival Analysis | Survival Rate | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Breast Neoplasms | Colorectal Neoplasms | Female | Humans | Male | Middle Aged | Mortality | Neoplasm Staging | Neoplasms | New South Wales | Population Surveillance | Prostatic Neoplasms | Registries | Survival Analysis | Survival Rate | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Breast Neoplasms | Colorectal Neoplasms | Female | Humans | Male | Middle Aged | Mortality | Neoplasm Staging | Neoplasms | New South Wales | Population Surveillance | Prostatic Neoplasms | Registries | Survival Analysis | Survival Rate | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Breast Neoplasms | Colorectal Neoplasms | Female | Humans | Male | Middle Aged | Mortality | Neoplasm Staging | Neoplasms | New South Wales | Population Surveillance | Prostatic Neoplasms | Registries | Survival Analysis | Survival Rate | Time Factors | Young Adult [SUMMARY]
[CONTENT] survival cancers initially | patients survival expectations | comparison cancer survival | survival estimates cancer | conditional survival cancer [SUMMARY]
[CONTENT] survival cancers initially | patients survival expectations | comparison cancer survival | survival estimates cancer | conditional survival cancer [SUMMARY]
[CONTENT] survival cancers initially | patients survival expectations | comparison cancer survival | survival estimates cancer | conditional survival cancer [SUMMARY]
[CONTENT] survival cancers initially | patients survival expectations | comparison cancer survival | survival estimates cancer | conditional survival cancer [SUMMARY]
[CONTENT] survival cancers initially | patients survival expectations | comparison cancer survival | survival estimates cancer | conditional survival cancer [SUMMARY]
[CONTENT] survival cancers initially | patients survival expectations | comparison cancer survival | survival estimates cancer | conditional survival cancer [SUMMARY]
[CONTENT] survival | cancer | diagnosis | stage | years | relative | relative survival | years diagnosis | nsw | age [SUMMARY]
[CONTENT] survival | cancer | diagnosis | stage | years | relative | relative survival | years diagnosis | nsw | age [SUMMARY]
[CONTENT] survival | cancer | diagnosis | stage | years | relative | relative survival | years diagnosis | nsw | age [SUMMARY]
[CONTENT] survival | cancer | diagnosis | stage | years | relative | relative survival | years diagnosis | nsw | age [SUMMARY]
[CONTENT] survival | cancer | diagnosis | stage | years | relative | relative survival | years diagnosis | nsw | age [SUMMARY]
[CONTENT] survival | cancer | diagnosis | stage | years | relative | relative survival | years diagnosis | nsw | age [SUMMARY]
[CONTENT] survival | cancer | conditional survival | diagnosis | published | provides | cancer patients | stage | conditional | patients [SUMMARY]
[CONTENT] survival | registry | death | nsw | stage | diagnosis | relative survival | relative | years | cancer [SUMMARY]
[CONTENT] diagnosis | year relative | year relative survival | years diagnosis | years | year | survival | relative survival | relative | 10 years diagnosis [SUMMARY]
[CONTENT] survival | cancer | data | specific | study | age | stage | diagnosis | cs | specific cs [SUMMARY]
[CONTENT] survival | diagnosis | cancer | years | stage | relative | relative survival | years diagnosis | conditional | nsw [SUMMARY]
[CONTENT] survival | diagnosis | cancer | years | stage | relative | relative survival | years diagnosis | conditional | nsw [SUMMARY]
[CONTENT] clinicians [SUMMARY]
[CONTENT] New South Wales | Australia | 5-year | 11 | 1972-2006 [SUMMARY]
[CONTENT] 193,182 | 39,851 | 36,585 | 35,455 ||| Five-year ||| 10 years | 5-year | 95% | 6 | 6 | 3 | 3 ||| 5-year | 74% | 94% | 4 | 10-15 years [SUMMARY]
[CONTENT] ||| clinicians [SUMMARY]
[CONTENT] clinicians ||| New South Wales | Australia | 5-year | 11 | 1972-2006 ||| 193,182 | 39,851 | 36,585 | 35,455 ||| Five-year ||| 10 years | 5-year | 95% | 6 | 6 | 3 | 3 ||| 5-year | 74% | 94% | 4 | 10-15 years ||| ||| clinicians [SUMMARY]
[CONTENT] clinicians ||| New South Wales | Australia | 5-year | 11 | 1972-2006 ||| 193,182 | 39,851 | 36,585 | 35,455 ||| Five-year ||| 10 years | 5-year | 95% | 6 | 6 | 3 | 3 ||| 5-year | 74% | 94% | 4 | 10-15 years ||| ||| clinicians [SUMMARY]
Current Asthma Prevalence Using Methacholine Challenge Test in Korean Children from 2010 to 2014.
34002550
Most epidemiological studies depend on subjects' responses to asthma symptom questionnaires. Questionnaire-based studies of childhood asthma prevalence may overestimate the true prevalence. The aim of this study was to investigate the prevalence of "Current asthma" using the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire and methacholine challenge test in Korean children.
BACKGROUND
Our survey on allergic disease included 4,791 children (aged 7-12 years) from 2010 to 2014 in Korean elementary schools. Bronchial hyperresponsiveness (BHR) was defined as a provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second (FEV1) (PC20) of ≤ 16 mg/mL. "Current asthma symptoms" was defined as a positive response to "Wheezing, current," "Treatment, current," or "Exercise, current." "Current asthma" was defined when subjects with "Current asthma symptoms" showed BHR on the methacholine challenge test or had less than 70% of the predicted FEV1 value.
METHODS
The prevalence of "Wheezing, ever," "Wheezing, current," "Diagnosis, ever," "Treatment, current," "Exercise, current," and "Current asthma symptoms" was 19.6%, 6.9%, 10.0%, 3.3%, 3.5%, and 9.6%, respectively, in our cross-sectional study of Korean elementary school students. The prevalence of BHR in elementary school students was 14.5%. The prevalence of BHR in children with "Wheezing, ever," "Wheezing, current," "Diagnosis, ever," "Treatment, current," and "Exercise, current" was 22.3%, 30.5%, 22.4%, 28.8%, and 29.9%, respectively. BHR was 26.1% in those with "Current asthma symptoms." The prevalence of "Current asthma" was 2.7%.
RESULTS
Our large-scale study provides a current asthma prevalence of 2.7% in Korean elementary school children. Since only approximately one third of the children with "Current asthma symptoms" showed BHR, both subjective and objective methods are required to accurately identify asthma in subjects with asthma symptoms.
CONCLUSIONS
[ "Asthma", "Bronchial Hyperreactivity", "Bronchial Provocation Tests", "Bronchoconstrictor Agents", "Child", "Cross-Sectional Studies", "Female", "Forced Expiratory Volume", "Humans", "Male", "Methacholine Chloride", "Prevalence", "Republic of Korea", "Respiratory Sounds", "Surveys and Questionnaires" ]
8129620
INTRODUCTION
Asthma is not only a common chronic disease but also one of the major causes of hospitalization in children. Asthma should be managed appropriately, especially in children, to prevent progression to airway remodeling and impaired lung function. Early, accurate diagnosis and appropriate treatment enable children to enjoy a high quality of life and lead to much better disease control and outcomes. Asthma may differ in prevalence and severity by race, region, and country. Thus, physicians need guidance based on accurate data on asthma prevalence in the general population of children. In most previous epidemiological studies, questionnaires based on subjective symptoms and past medical history of asthma have been used to estimate the prevalence of asthma. The International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire is a widely accepted standardized tool for evaluating asthma prevalence.1 However, the questionnaire alone may overestimate the true prevalence of asthma, since asthma-mimicking symptoms can occur in various other diseases. Therefore, it is important to estimate asthma prevalence accurately using an objective method, such as a bronchial provocation test, in the general population of children. Bronchial hyperresponsiveness (BHR) is one of the key features of asthma and is demonstrated in almost all subjects with current symptomatic asthma.2 Defining current asthma as BHR plus recent wheezing in the past 12 months has been considered useful in evaluating asthma prevalence in the general population.3 In pediatric populations, the prevalence of BHR among children with “Current asthma symptoms” varies among studies. The prevalence of BHR to methacholine (provocative concentration of methacholine causing a 20% reduction in forced expiratory volume in one second [FEV1]; PC20 ≤ 16 mg/mL) in children who had wheezing in the past 12 months has been reported to range from 25% in the United States4 to 60% in Turkey,5 a large difference. In addition, the frequency of BHR among children with current wheezing within 12 months has shown considerable inconsistency in Korean pediatric population studies: 6%6 vs. 56%.7 Thus, BHR differs from country to country and, even within the same country, may vary by region and age. In Korea, nationwide epidemiological data covering all ages of elementary school students have been insufficient. Large-scale studies of asthma prevalence among Korean children have been based mainly on questionnaires, and there have been only a few studies on BHR in children,678 whose subjects did not include all ages of elementary school students. A study that conducted a methacholine challenge test (MCT) on subjects aged 7–19 years with asthma symptoms reported an asthma prevalence of 4.6% in 1997.7 As healthcare environments have changed since the mid-1990s, when previous studies were conducted, including advances in therapeutic agents, earlier diagnosis, and earlier and more appropriate treatment, it is necessary to update estimates of asthma prevalence in Korean children. This study is therefore meaningful for understanding the trend in asthma prevalence over time. The aim of this study was to assess the prevalence of current asthma and BHR, using both the ISAAC questionnaire and the MCT, in Korean elementary school students in the metropolitan cities of Incheon (northwest coast) and Gwangju (southern inland) and in Gyeonggi Province.
METHODS
Subjects: The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4,791 provided complete data in questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively.
Questionnaire and case definitions: Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value.
Pulmonary function test: Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9
Bronchial challenge test: All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT. MCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn).
Statistical analysis: Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant.
Ethics statement: Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019).
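As a concrete illustration of the MCT-based definitions above, the sketch below computes PC20 by the log-linear interpolation commonly used with this protocol, then applies the study's case definitions (BHR if PC20 ≤ 16 mg/mL; "Current asthma" if current symptoms plus either BHR or FEV1 below 70% of predicted). This is a minimal sketch for illustration only: the function names and the example dose-response values are hypothetical, not data from the study.

```python
import math

def pc20(concentrations, pct_falls):
    """Provocative concentration causing a 20% fall in FEV1 (PC20, mg/mL).

    concentrations: doubling methacholine doses actually administered, in order.
    pct_falls: percentage fall in FEV1 from baseline after each dose.
    Interpolates on a log scale between the last dose with < 20% fall (C1, R1)
    and the first dose with >= 20% fall (C2, R2). Returns None if FEV1 never
    falls by 20% (negative test at the highest concentration).
    """
    for i, fall in enumerate(pct_falls):
        if fall >= 20:
            if i == 0:
                # Simplification: report the first dose if it already causes a >= 20% fall.
                return concentrations[0]
            c1, c2 = concentrations[i - 1], concentrations[i]
            r1, r2 = pct_falls[i - 1], pct_falls[i]
            log_pc20 = math.log10(c1) + (math.log10(c2) - math.log10(c1)) * (20 - r1) / (r2 - r1)
            return 10 ** log_pc20
    return None

def classify(current_symptoms, fev1_pct_predicted, pc20_value):
    """Apply the study's definitions of BHR and 'Current asthma'."""
    bhr = pc20_value is not None and pc20_value <= 16
    current_asthma = current_symptoms and (bhr or fev1_pct_predicted < 70)
    return bhr, current_asthma

# Hypothetical child: falls in FEV1 after 0.62, 1.25, 2.5 and 5 mg/mL (illustrative values).
doses = [0.62, 1.25, 2.5, 5]
falls = [2, 6, 14, 27]           # % fall in FEV1 after each dose
p = pc20(doses, falls)           # interpolated between 2.5 mg/mL (14%) and 5 mg/mL (27%)
print(round(p, 2))               # about 3.4 mg/mL, i.e. <= 16 mg/mL, so BHR positive
print(classify(current_symptoms=True, fev1_pct_predicted=92, pc20_value=p))
```

Note that in the study's flow children with FEV1 below 70% of predicted skipped the MCT entirely, which is why the "Current asthma" definition accepts either a positive MCT or FEV1 < 70% predicted.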
RESULTS
Study subjects From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively. ISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second. From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively. ISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second. Prevalence of asthma symptoms Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1). Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1). Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1). Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1). Prevalence of BHR Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT. The prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. Student's t-test. PC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. The prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. 
The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT. The prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. Student's t-test. PC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. The prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Characteristics of symptomatic BHR and asymptomatic BHR There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795). There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795). Prevalence of “Current asthma” The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group. Values are presented as number (%). Pearson's χ2 test. 
aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001. The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group. Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001. Subjects who received treatment in the past 12 months The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively. The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively. Epidemiological studies on asthma prevalence in Korean children The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1). The prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6). BHR = bronchial hyperresponsiveness, ns = not shown. aChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study. The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1). The prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6). BHR = bronchial hyperresponsiveness, ns = not shown. 
aChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study.
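For the sex comparisons reported above, prevalence in each group is simply the number of positive responses divided by the number of respondents, and the boy-versus-girl difference is assessed with Pearson's χ2 test on the 2 x 2 table of positive/negative counts. The sketch below shows this computation; the counts are hypothetical placeholders, not the study's actual Table 1 figures.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table for one symptom (e.g. "Wheezing, current"):
# rows = boys, girls; columns = symptom present, symptom absent.
# These counts are placeholders, not the published Table 1 data.
table = [[190, 2310],   # boys
         [140, 2151]]   # girls

prev_boys = table[0][0] / sum(table[0])
prev_girls = table[1][0] / sum(table[1])
chi2, p, dof, expected = chi2_contingency(table)

print(f"prevalence boys: {prev_boys:.1%}, girls: {prev_girls:.1%}")
print(f"Pearson chi-square = {chi2:.2f}, P = {p:.3f}")
```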
null
null
[ "Subjects", "Questionnaire and case definitions", "Pulmonary function test", "Bronchial challenge test", "Statistical analysis", "Ethics statement", "Study subjects", "Prevalence of asthma symptoms", "Prevalence of BHR", "Characteristics of symptomatic BHR and asymptomatic BHR", "Prevalence of “Current asthma”", "Subjects who received treatment in the past 12 months", "Epidemiological studies on asthma prevalence in Korean children" ]
[ "The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively.", "Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value.", "Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9", "All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT.\nMCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn).", "Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant.", "Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019).", "From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). 
The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively.\nISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second.", "Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1).\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.\nPrevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1).", "Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT.\nThe prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nStudent's t-test.\nPC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nThe prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.", "There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795).", "The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group.\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. 
bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001.", "The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively.", "The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1).\nThe prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6).\nBHR = bronchial hyperresponsiveness, ns = not shown.\naChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Subjects", "Questionnaire and case definitions", "Pulmonary function test", "Bronchial challenge test", "Statistical analysis", "Ethics statement", "RESULTS", "Study subjects", "Prevalence of asthma symptoms", "Prevalence of BHR", "Characteristics of symptomatic BHR and asymptomatic BHR", "Prevalence of “Current asthma”", "Subjects who received treatment in the past 12 months", "Epidemiological studies on asthma prevalence in Korean children", "DISCUSSION" ]
[ "Asthma is not only a common chronic disease but also one of the major causes of hospitalization in children. Asthma should be managed appropriately especially in children for preventing progression to airway remodeling and impaired lung function. Early accurate diagnosis and appropriate treatment enables children to enjoy a high quality of life and leads to much better disease control and outcomes.\nAsthma may differ in prevalence and severity by race, region, and country. Thus, physicians should have appropriate guidance upon data with the prevalence of accurate asthma in the general population of children.\nIn most previous epidemiological studies, questionnaires based on subjective symptoms and past medical history of asthma have been used for estimating the prevalence of asthma.\nThe International Study of Asthma and Allergies of Childhood (ISAAC) questionnaire is a widely accepted standardized tool for evaluating asthma prevalence.1 However, the questionnaire alone may overestimate the true prevalence of asthma, since asthma-mimicking symptoms can manifest in various other diseases. Therefore, it is important to estimate accurate asthma prevalence based on an objective method such as the bronchial provocation test in a general population of children.\nBronchial hyperresponsiveness (BHR) is one of the key features of asthma and is demonstrated in almost all subjects with current symptomatic asthma.2 Defining current asthma with BHR and recent wheezing in the past 12 months have been considered useful in evaluating asthma prevalence in the general population.3\nIn pediatric populations, the prevalence of BHR with “Current asthma symptoms” varies among studies. The prevalence of BHR to methacholine (provocative concentration of methacholine causing a 20% reduction in forced expiratory volume in one second [FEV1]; PC20 ≤ 16 mg/mL) in children who had wheezing in the past 12 months has been reported to range from 25% in the United States4 to 60% in Turkey5 which is a large difference. In addition, the frequency of BHR in current wheezing within 12 months showed considerable inconsistency in Korean pediatric population studies: 6%6 vs. 56%.7 As above, BHR differs from country to country, and even in the same country, it may vary depending on the region and age.\nIn Korean studies, epidemiological data on the nationwide prevalence of all ages in elementary school students were insufficient. Some large-scaled studies of asthma prevalence among Korean children were mainly based on questionnaires. There have been a few studies on BHR in children.678 Subjects in those studies did not include all ages of elementary students. A study that conducted a methacholine challenge test (MCT) on subjects aged 7–19 years with asthma symptoms reported 4.6% prevalence of asthma in 1997.7 As healthcare environments, including advances in therapeutic agents, early diagnosis, as well as early and proper treatment, have changed from the mid-1990s when previous studies were conducted, it is necessary to update asthma prevalence in Korean children. 
Therefore, this is a meaningful study to understand the trend in prevalence of asthma over time.\nThe aim of this study was to assess the prevalence of current asthma and BHR using both the ISAAC questionnaire and MCT in Korean elementary school students in the metropolitan cities of Incheon, Gwangju (southern inland), and Gyeonggi.", "Subjects The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively.\nThe subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively.\nQuestionnaire and case definitions Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value.\nAsthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value.\nPulmonary function test Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9\nBefore performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). 
Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9\nBronchial challenge test All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT.\nMCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn).\nAll subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT.\nMCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn).\nStatistical analysis Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant.\nStatistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant.\nEthics statement Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019).\nWritten informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019).", "The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively.", "Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? 
(“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value.", "Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9", "All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT.\nMCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn).", "Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant.", "Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019).", "Study subjects From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively.\nISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second.\nFrom total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively.\nISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second.\nPrevalence of asthma symptoms Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1).\nValues are presented as number (%). 
Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.\nPrevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1).\nPrevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1).\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.\nPrevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1).\nPrevalence of BHR Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT.\nThe prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nStudent's t-test.\nPC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nThe prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.\nTen out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT.\nThe prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. 
***P < 0.001.\nStudent's t-test.\nPC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nThe prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.\nCharacteristics of symptomatic BHR and asymptomatic BHR There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795).\nThere was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795).\nPrevalence of “Current asthma” The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group.\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001.\nThe prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group.\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. 
**P < 0.01, ***P < 0.001.\nSubjects who received treatment in the past 12 months The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively.\nThe proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively.\nEpidemiological studies on asthma prevalence in Korean children The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1).\nThe prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6).\nBHR = bronchial hyperresponsiveness, ns = not shown.\naChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study.\nThe prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1).\nThe prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6).\nBHR = bronchial hyperresponsiveness, ns = not shown.\naChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study.", "From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). 
The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively.\nISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second.", "Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1).\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.\nPrevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1).", "Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT.\nThe prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nStudent's t-test.\nPC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001.\nThe prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown).\nValues are presented as number (%). Pearson's χ2 test.\nBHR = bronchial hyperresponsiveness.\naDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001.", "There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795).", "The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group.\nValues are presented as number (%). Pearson's χ2 test.\naDifferences in variables between boys and girls are shown in the Sex column. 
bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001.", "The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively.", "The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1).\nThe prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6).\nBHR = bronchial hyperresponsiveness, ns = not shown.\naChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study.", "The purpose of this study was to investigate the prevalence of “Current asthma” with the ISAAC questionnaire and MCT in Korean elementary school students.\nThe study results showed that the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and \"Exercise, current” was 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively in Korean students (age 7–12 years). “Current asthma symptoms” defined as positive response to “Wheezing, current,” “Treatment, current,” or “Exercise, current” was 9.6%. However, among them, 26% of the children had BHR in this cross-sectional study. The reason why students with BHR were low in subjects with current asthma symptoms may be because their symptoms were mild, have no recent symptoms in the previous week, or have been misdiagnosed. In addition, the Korean version of ISAAC questionnaire might have low sensitivity to diagnose asthma. In a study that conducted the asthma ISAAC questionnaire and hypertonic saline bronchial challenge test for 13–14-year-old Korean children, the sensitivity of BHR to “asthma ever” was the highest at 0.61, wheezing after exercise 0.35, nocturnal wheezing 0.24.13 Overall, the sensitivity of the ISAAC questionnaire to BHR was low. These results are consistent with our findings, suggesting that the ISSAC survey questions are not sufficient to predict BHR. Therefore, these results imply that the usefulness of the Korean ISAAC questionnaire for diagnosing current asthma in Korean children should be reevaluated.\nWe defined “Current asthma” if the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. 
The prevalence of “Current asthma” in the current study was 2.7%, consistent with 2.5% of the Seoul study (current asthma, lifetime diagnosis of asthma and experience of wheezing within 12 months),14 but 4.6% in a survey conducted about 20 years ago,7 suggesting decrement of true current asthma.\nThe time trends in asthma symptom prevalence have shown geographic/national differences. The prevalence of “Current asthma symptoms” in children have reduced in Western countries and Oceania, where the prevalence was previously high. On the other hand, the prevalence of “Wheezing, current” increased in parts of Africa, Latin America, and Asia, where the prevalence was previously low. Consequently, international differences in asthma symptom prevalence have reduced.15\nIn Korean children, the prevalence of asthma symptoms based on questionnaire showed a decreasing trend from 1995 to 2006, however, the study results increased compared to previous 2006 data (Table 6). The previous studies were based on questionnaire survey only, and the 2006 study had a low response rate compared to the current study. Therefore, previous studies have limitations in predicting the true prevalence of asthma.\nOur study approached the prevalence of “Current asthma” more accurately using both the ISAAC questionnaire and MCT. The current study also included a relatively large number of children. Therefore, the prevalence of asthma can be estimated more accurately than those in previous studies.\nThe prevalence of BHR can vary in studies due to the different study designs including different study subjects, bronchial provocation stimuli or their cutoff value.816171819\nThe MCT is highly sensitive and widely used to detect and quantify BHR.20 The prevalence of BHR defined as methacholine PC20 ≤ 16 mg/mL was 14.5% in children aged 7–12 years (mean age, 9.6 years) in our study, which was lower than those in previous studies of Korean children with similar study periods; 19.4% (mean age, 9.2 years) in 20128 and 17.5% (age 6–15 years) from 2009 to 2012.21 However, the study was conducted only at 1 school, which is located in Seoul.8 The other study subjects were also in Seoul.21 However, the present study populations were in Incheon, Gwangju (southern inland), and Gyeonggi, which were different from previous studies.\nA German study showed that the prevalence of BHR increased from 6.4% in 1992–1993 to 11.6% in 1995–1996 in East German children.22 It is suggested that changes in the environment, such as Western lifestyle, attending daycare center, exposure to new allergens and/or air pollutants contributed to the increase in BHR of children for a short period of time after reunification of East Germany.2324\nThere have been few epidemiological studies of BHR in Korean children with asthma symptoms. 
The present study found that the prevalence of BHR was 30.5%, 28.8%, and 29.9% in children with “Wheezing, current,” “Treatment, current,” and “Exercise, current,” respectively.\nThe present study result of 26% BHR in children with “Current asthma symptoms” was consistent with 20.5% (age 7–19 years)7 and 20.1% (age 10–12 years)6 in subjects with asthma symptoms in Korean epidemiological studies, which were low, altogether.\nIn contrast to epidemiological studies, clinical studies reported BHR prevalence of 84.0% in Korean children (mean age, 8.7 years) who had asthma symptoms or were diagnosed with asthma21 and 70.7% in Northern Taiwan children (mean age, 7.5 years) who had symptoms within the 12 months,25 which showed that there was a strong correlation between asthma symptoms and BHR.\nIn our study, the prevalence of BHR and asthma symptoms in younger age groups (age 7–8 years and 9–10 years) was higher than those in the 11–12-year age groups, showing significant correlation with age (Table 2). Many studies have suggested that younger children are more sensitive to bronchoprovocation tests.826 One of the reasons is that the airway structure, impedance, and contractile force of the bronchial smooth muscle change with age, reducing airway hypersensitivity as the airways grows in size. Lung function in children continues to increase until adolescence.27\nThe prevalence of BHR was also correlated with sex and was more prevalent in boys before puberty onset in this study. Physiological function or maturation of the lungs during infancy differs by sex, which extends to pre-puberty, adolescence and adulthood. The prevalence of BHR by sex varies with age. Since the growth spurt begins earlier in girls than in boys, BHR drops earlier in girls compared to boys.28 The current study found that the prevalence of BHR decreased sharply in boys from 11 years of age and from 9 years of age in girls, consistent with findings that prevalence of BHR and asthma in boys decreases with age after puberty and that of asthma in girls increases from puberty.29 As females have increased asthma prevalence starting around puberty, the male sex predominance of asthma and BHR is reversed at puberty onset.30 The mechanisms underlying the sex difference in the prevalence of asthma and BHR are still unknown. Anatomical and hormonal factors are thought to be differently involved in the development of airways between males and females from birth to puberty.313233 Our study included pre-pubertal children, so males showed a higher prevalence of BHR.\nThis study has several limitations. First, we defined BHR as methacholine PC20 ≤ 16 mg/mL without stratifying age groups. Since BHR could be influenced by age, the cutoff values of methacholine PC20 to define BHR may need to be adjusted by age.21 However, the BHR positive criteria was proposed as PC20 ≤ 16 mg/mL9 and considering the age range from 7 to 12 years of this study, the criterion of BHR can be accepted. Second, the study subjects were not national but limited in Incheon, Gwangju, and Gyeonggi areas, and the subject groups were not evenly distributed, especially, age 9–10 years was greater in Incheon, which had higher prevalence of asthma symptoms and BHR. 
However, the 2006 nationwide data showed that the prevalence of asthma (“Wheezing, current”) was almost similar in Incheon, Gyeonggi and Gwangju, which were slightly more than that in Seoul.34 Third, the MCT may not be appropriate for diagnosing exercise-induced bronchoconstriction (EIB) compare to field exercise challenge test.3135 However, there was less chance of underdiagnosis of current asthma, because the prevalence of EIB from this study was only 3.5%, and about one third of EIB showed BHR. Fourth, although there might have been some patients who had negative BHR in response to asthma medication within 12 months, the number of subjects who had been treated for asthma within the last 12 months was very small, 156 out of total 4,791 (3.3%). In addition, several studies have consistently raised the issue of low adherence to asthma medications.363738 According to a recent Korean study, adherence to asthma medication was low (38% of inhalant users, 50% of oral users and 67% of transdermal patch).39 Therefore, given that the treatment group was very small and the adherence to asthma medications was low, the effect of treatment on the asthma prevalence would be insignificant. Finally, the prevalence of asthma symptoms was highest in the 9–10 year-olds in the current study, but it was expected to be highest in the 7–8-year-olds, making it difficult to identify trend with age. In this study, the number of subjects in the 9–10-year-old group was remarkably large (n = 2,304), and that in the 7–8-year-old group (n = 1,027) was the least. In addition, there was a difference in number among regions. The number of subjects in Incheon, Gwangju, and Gyeonggi was 2,421, 1,029, and 1,341, respectively. Therefore, the reason for the high prevalence of asthma symptoms and BHR in age 9–10 years is thought to be that more children were included in the region (Incheon) with a relatively high prevalence.\nBHR is an important feature that reflects airway hyperresponsiveness in asthma. Transient or persistent BHR can also occur in exposure to environmental pollutants and irritants, viral respiratory infections, chronic infections, rhinitis, sarcoidosis, mitral valve stenosis, and tracheal dysplasia.40 The positive rate of BHR (22.3%) in children with “Diagnosis, ever” was similar in those with “Wheezing, ever” (22.4%). There may be some reasons why the proportion of BHR was low in children with “Diagnosis ever.” First, cases of early transient wheezer could be included. Second, healthcare environment in Korea including Health Insurance Review and Assessment Service requires asthma diagnosis when doctors prescribe asthma-related medications, such as inhaled corticosteroids, leukotriene receptor antagonist. Third, early and appropriate treatment in early childhood asthma may have contributed to resolution of BHR. Finally, there might be a possibility of overdiagnosis since the understanding of wheezing in the asthma-related questionnaire may have been poorly understood depending on the individuals.\nIn conclusion, these study results showed that about one third of children with asthma symptoms had BHR, and overall prevalence of current asthma was 2.7%. Most nationwide epidemiological studies of Korean children are based on questionnaires only, and studies conducted with both questionnaires and bronchial provocation test have limitations of small number of population and/or local regions. 
The current large-scale study, using both questionnaires and the MCT, provides an accurate estimate of current asthma prevalence in Korean elementary school children. This would help physicians diagnose asthma appropriately. Considering that performing the MCT is time- and energy-consuming and not easy, especially for children, we suggest measuring BHR selectively in children with asthma symptoms. Since about one third of children with asthma symptoms based on the ISAAC questionnaire are likely to be true asthmatics, studies are necessary to develop a modified questionnaire with high diagnostic accuracy for asthma." ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion" ]
[ "Asthma", "Prevalence", "Child", "Epidemiology", "Questionnaires", "Bronchial Hyperresponsiveness" ]
INTRODUCTION: Asthma is not only a common chronic disease but also one of the major causes of hospitalization in children. Asthma should be managed appropriately especially in children for preventing progression to airway remodeling and impaired lung function. Early accurate diagnosis and appropriate treatment enables children to enjoy a high quality of life and leads to much better disease control and outcomes. Asthma may differ in prevalence and severity by race, region, and country. Thus, physicians should have appropriate guidance upon data with the prevalence of accurate asthma in the general population of children. In most previous epidemiological studies, questionnaires based on subjective symptoms and past medical history of asthma have been used for estimating the prevalence of asthma. The International Study of Asthma and Allergies of Childhood (ISAAC) questionnaire is a widely accepted standardized tool for evaluating asthma prevalence.1 However, the questionnaire alone may overestimate the true prevalence of asthma, since asthma-mimicking symptoms can manifest in various other diseases. Therefore, it is important to estimate accurate asthma prevalence based on an objective method such as the bronchial provocation test in a general population of children. Bronchial hyperresponsiveness (BHR) is one of the key features of asthma and is demonstrated in almost all subjects with current symptomatic asthma.2 Defining current asthma with BHR and recent wheezing in the past 12 months have been considered useful in evaluating asthma prevalence in the general population.3 In pediatric populations, the prevalence of BHR with “Current asthma symptoms” varies among studies. The prevalence of BHR to methacholine (provocative concentration of methacholine causing a 20% reduction in forced expiratory volume in one second [FEV1]; PC20 ≤ 16 mg/mL) in children who had wheezing in the past 12 months has been reported to range from 25% in the United States4 to 60% in Turkey5 which is a large difference. In addition, the frequency of BHR in current wheezing within 12 months showed considerable inconsistency in Korean pediatric population studies: 6%6 vs. 56%.7 As above, BHR differs from country to country, and even in the same country, it may vary depending on the region and age. In Korean studies, epidemiological data on the nationwide prevalence of all ages in elementary school students were insufficient. Some large-scaled studies of asthma prevalence among Korean children were mainly based on questionnaires. There have been a few studies on BHR in children.678 Subjects in those studies did not include all ages of elementary students. A study that conducted a methacholine challenge test (MCT) on subjects aged 7–19 years with asthma symptoms reported 4.6% prevalence of asthma in 1997.7 As healthcare environments, including advances in therapeutic agents, early diagnosis, as well as early and proper treatment, have changed from the mid-1990s when previous studies were conducted, it is necessary to update asthma prevalence in Korean children. Therefore, this is a meaningful study to understand the trend in prevalence of asthma over time. The aim of this study was to assess the prevalence of current asthma and BHR using both the ISAAC questionnaire and MCT in Korean elementary school students in the metropolitan cities of Incheon, Gwangju (southern inland), and Gyeonggi. 
METHODS: Subjects The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively. The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively. Questionnaire and case definitions Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. Pulmonary function test Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9 Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). 
Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9 Bronchial challenge test All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT. MCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn). All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT. MCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn). Statistical analysis Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant. Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant. Ethics statement Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019). Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019). Subjects: The subjects included 5,531 elementary school students in the metropolitan cities of Incheon (northwest coast), Gwangju (southern inland), and Gyeonggi from 2010 to 2014 in Korea. From the total, 4791 provided complete data in Questionnaires and MCT. The number of subjects in Incheon, Gwangju and Gyeonggi was 2,421, 1,029 and 1,341, respectively. The response rates of Seoul and the provincial area were 95.6% and 98.3%, respectively. 
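The bronchial challenge protocol above reports airway responsiveness as the PC20 but does not spell out how that value is derived from the dose steps. The short Python sketch below assumes the conventional log-linear interpolation between the last two inhaled methacholine concentrations; the function name, the example FEV1 values, and the interpolation choice are illustrative assumptions rather than details taken from this study.

```python
import math

def pc20(conc_prev, conc_last, fev1_baseline, fev1_prev, fev1_last):
    """Log-linear interpolation of the provocative concentration causing a
    20% fall in FEV1, between the last two methacholine concentrations.
    Assumes the 20% fall is first reached at conc_last."""
    fall_prev = 100 * (fev1_baseline - fev1_prev) / fev1_baseline
    fall_last = 100 * (fev1_baseline - fev1_last) / fev1_baseline
    log_pc20 = (math.log(conc_prev)
                + (math.log(conc_last) - math.log(conc_prev))
                * (20 - fall_prev) / (fall_last - fall_prev))
    return math.exp(log_pc20)

# Illustrative values only: baseline FEV1 1.80 L, a 12% fall at 2.5 mg/mL
# and a 26% fall at 5 mg/mL place the PC20 between those two concentrations.
value = pc20(conc_prev=2.5, conc_last=5.0,
             fev1_baseline=1.80, fev1_prev=1.584, fev1_last=1.332)
print(round(value, 2))   # ~3.71 mg/mL
print(value <= 16)       # True -> meets the BHR cutoff described above
```

With these made-up numbers the interpolated PC20 of roughly 3.7 mg/mL lies below the 16 mg/mL cutoff, so the record would be classified as BHR under the definition used above.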
Questionnaire and case definitions: Asthma symptoms were identified through the Korean version of the ISAAC core questions on asthma:1 Have you ever had wheezing or whistling in the chest at any time in the past? (“Wheezing, ever”); Have you had wheezing or whistling in the chest in the last 12 months? (“Wheezing, current”); Has your child ever had asthma? (“Diagnosis, ever”); Has your child been treated for asthma in the last 12 months? (“Treatment, current”); and in the last 12 months, has your chest sounded wheezy during or after exercise? (“Exercise, current”). We defined “Current asthma symptoms” when the answer was yes to any one of the “Wheezing, current,” “Treatment, current,” or “Exercise, current” questions. “Current asthma” was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. Pulmonary function test: Before performing the MCT, a pulmonary function test was performed according to the recommendation by the American Thoracic Society and European Respiratory Society with a Microplus Spirometer (Carefusion, Kent, UK). Spirometric values of forced vital capacity, FEV1, maximum mid-expiratory flow, and peak expiratory flow rate were recorded.9 Bronchial challenge test: All subjects performed the MCT according to the protocol as described in the American Thoracic Society guideline.10 Briefly, participants inhaled doubling doses of fresh methacholine solutions by using a dosimeter with a concentration range of 0.62, 1.25, 2.5, 5, 12.5, and 25 mg/mL. A spirometer (Microplus Spirometer, Carefusion) was used to measure the FEV1 after each inhalation. Airway responsiveness was expressed as the PC20. BHR to methacholine was defined as a PC20 ≤ 16 mg/mL. Children who had fever or acute respiratory symptoms within 1 week or whose predicted value of FEV1 was less than 70% were not asked to perform the MCT. MCT was conducted in the spring season, from March to June, excluding Incheon (September to November, autumn). Statistical analysis: Statistical analysis was performed using the SAS version 9 (SAS Institute Inc., Cary, NC, USA). The prevalence rates of asthma symptoms and BHR were analyzed by age and sex. The χ2 test was used for comparing categorical variables among age and sex groups. The paired t-test was used for comparing spirometry values for each defined symptom. A P value of < 0.05 was considered statistically significant. Ethics statement: Written informed consent was obtained from all parents and subjects. The Institutional Review Board (IRB) of Inha University Hospital approved this study (IRB No.; 11-12, 12-05, 2015-09-007, 2017-02-019). RESULTS: Study subjects From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively. ISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second. From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively. ISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second. 
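The questionnaire items and case definitions described above reduce to a small piece of decision logic. The sketch below restates them in Python; the dictionary keys and the example record are hypothetical names invented for this illustration (they are not the study's variable names), and a child excluded from the MCT is represented by a missing PC20.

```python
def classify(child):
    """Apply the case definitions described above to one record.
    Keys are hypothetical field names used only for this sketch."""
    current_symptoms = (child["wheezing_current"]
                        or child["treatment_current"]
                        or child["exercise_current"])
    # BHR: methacholine PC20 <= 16 mg/mL; PC20 is None when the MCT was
    # not performed (fever, recent symptoms, or FEV1 < 70% of predicted).
    bhr = child["pc20_mg_ml"] is not None and child["pc20_mg_ml"] <= 16
    low_fev1 = child["fev1_pct_predicted"] < 70
    return {
        "current_asthma_symptoms": current_symptoms,
        "bhr": bhr,
        "current_asthma": current_symptoms and (bhr or low_fev1),
    }

example = {"wheezing_current": True, "treatment_current": False,
           "exercise_current": False, "pc20_mg_ml": 6.2,
           "fev1_pct_predicted": 92}
print(classify(example))
```

On the example record the logic flags current asthma symptoms, BHR, and therefore current asthma, mirroring the definition used in the study.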
Prevalence of asthma symptoms Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1). Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1). Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1). Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1). Prevalence of BHR Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT. The prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. Student's t-test. PC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. The prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT. The prevalence of BHR in the general population was 14.5% (Table 2). 
Table 3 shows PC20 value in BHR (Table 3). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. Student's t-test. PC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. The prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Characteristics of symptomatic BHR and asymptomatic BHR There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795). There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795). Prevalence of “Current asthma” The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group. Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001. The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group. Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001. 
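The sex and age comparisons in these tables are Pearson's χ2 tests, which the authors ran in SAS version 9. As a hedged illustration of the same test structure, the snippet below uses scipy with made-up placeholder counts; it is not a re-analysis of the study data.

```python
from scipy.stats import chi2_contingency

# Rows: boys, girls; columns: symptom present, symptom absent.
# The counts are placeholders chosen only to show the mechanics of the test.
observed = [[120, 2280],
            [ 80, 2311]]

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p_value:.4f}")
```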
Subjects who received treatment in the past 12 months The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively. The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively. Epidemiological studies on asthma prevalence in Korean children The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1). The prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6). BHR = bronchial hyperresponsiveness, ns = not shown. aChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study. The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1). The prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6). BHR = bronchial hyperresponsiveness, ns = not shown. aChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study. Study subjects: From total of 5,531 children included in the survey, 4,791 provided complete data; they responded to questionnaires of asthma symptoms and performed the MCT (Fig. 1). The prevalence rates of BHR and “Current asthma” were 14.5% and 2.7%, respectively. ISAAC = International Study of Asthma and Allergies in Childhood, BHR = bronchial hyperresponsiveness, FEV1 = forced expiratory volume in one second. 
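Each prevalence figure reported in this section is a simple proportion of positive responses among the children with complete data. The helper below only makes that arithmetic explicit; the counts in the example are placeholders, not tallies taken from the study.

```python
def prevalence(n_positive: int, n_total: int) -> float:
    """Prevalence expressed as a percentage of the analysed population."""
    return 100 * n_positive / n_total

# Placeholder counts: 130 positive responses among 1,000 analysed children.
print(f"{prevalence(130, 1000):.1f}%")   # 13.0%
```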
Prevalence of asthma symptoms: Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” were 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively. Prevalence of “Current asthma symptoms” was 9.6% (Table 1). Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Current asthma symptoms” was higher in boys compared to girls (Table 1). “Wheezing, ever,” “Wheezing, current,” and “Diagnosis, ever” were more prevalent in younger age groups than older age groups (Table 1). Prevalence of BHR: Ten out of 460 children (2%) with “Current asthma symptoms” showed less than 70% of predicted FEV1 value and did not perform MCT. The prevalence of BHR in the general population was 14.5% (Table 2). Table 3 shows PC20 value in BHR (Table 3). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. Student's t-test. PC20 = provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second, Mean = geometric mean, SD = standard deviation. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. ***P < 0.001. The prevalence of BHR in children with “Wheezing ever,” “Wheezing current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 22.3%, 30.5%, 22.4%, 28.8% and 29.9%, respectively. The prevalence of BHR was 26.1% in children with “Current asthma symptoms.” The prevalence of BHR in children with “Wheezing current” (P = 0.034) and “Current asthma symptoms” (P = 0.014) was higher in boys than girls in each age group (Table 4). In the analysis considering sex and age interaction, it was found that the interaction did not show significant difference (data not shown). Values are presented as number (%). Pearson's χ2 test. BHR = bronchial hyperresponsiveness. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. *P < 0.05, **P < 0.01, ***P < 0.001. Characteristics of symptomatic BHR and asymptomatic BHR: There was no significant difference in age (P = 0.729), sex (P = 0.590), BMI (P = 0.903), or allergic rhinitis (P = 0.362) between children with symptomatic BHR and asymptomatic BHR. The PC20 value also did not differ significantly between children with symptomatic BHR and those with asymptomatic BHR (geometric mean 4.59 ± 2.61 vs. 5.39 ± 2.59, P = 0.795). Prevalence of “Current asthma”: The prevalence of “Current asthma” defined by “Current asthma symptoms” with proven bronchial hyperresponsiveness or with less than 70% of predicted FEV1 was 2.7% (Table 5). The prevalence of “Current asthma” was significantly higher in boys than girls in each age group. Values are presented as number (%). Pearson's χ2 test. aDifferences in variables between boys and girls are shown in the Sex column. bDifferences in variables between age groups are shown in the Age column. **P < 0.01, ***P < 0.001. 
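The comparison of symptomatic and asymptomatic BHR above summarises PC20 as a geometric mean and tests the group difference with Student's t-test. Because PC20 is usually treated as log-normally distributed, a common approach is to log-transform the values before testing; the sketch below follows that convention as an assumption (the text does not state the transform or base used), and the PC20 arrays are placeholders rather than study measurements.

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder PC20 values (mg/mL) for the two groups.
symptomatic = np.array([2.1, 4.8, 9.5, 3.2, 6.7])
asymptomatic = np.array([5.5, 3.9, 12.0, 7.1, 2.8])

def geometric_mean(x):
    """Geometric mean, i.e. the exponential of the mean log value."""
    return float(np.exp(np.log(x).mean()))

t_stat, p_value = ttest_ind(np.log2(symptomatic), np.log2(asymptomatic))
print(f"GM symptomatic = {geometric_mean(symptomatic):.2f} mg/mL, "
      f"GM asymptomatic = {geometric_mean(asymptomatic):.2f} mg/mL, "
      f"P = {p_value:.3f}")
```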
Subjects who received treatment in the past 12 months: The proportion of children who underwent treatment in the past 12 months among those who responded “yes” to “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” and “Exercise, current” and those with BHR was 13%, 28%, 28%, 36%, and 7%, respectively. Epidemiological studies on asthma prevalence in Korean children: The prevalence of “Wheezing, ever” was 17.0% in 199510 and decreased to 13.0% and 10.3% in 200011 and 2006,12 respectively, in Korean elementary school students. In this study, with a study period from 2010 to 2014, the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and “Exercise, current” was 19.6%, 6.9%, 10%, 3.3% and 3.5%, respectively (Table 1). The prevalence of “Wheezing, current” in Korean children were 9.5%, 4.9%, 4.8% and 6.9% in 1995, 2000, 2006 and our study, respectively (Table 6). BHR = bronchial hyperresponsiveness, ns = not shown. aChildren with asthma symptoms since birth; bChildren who experienced one or more wheezing episodes within 12 months of those who have experienced one or more wheezing episodes in their lifetime who have asthma symptoms within 12 months; cChildren who were diagnosed with asthma since birth; dChildren treated for asthma within the past 12 months; eChildren who experienced wheezing episodes after exercise within the past 12 months; fCurrent asthma was defined when the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. gPresent study. DISCUSSION: The purpose of this study was to investigate the prevalence of “Current asthma” with the ISAAC questionnaire and MCT in Korean elementary school students. The study results showed that the prevalence of “Wheezing, ever,” “Wheezing, current,” “Diagnosis, ever,” “Treatment, current,” and "Exercise, current” was 19.6%, 6.9%, 10.0%, 3.3% and 3.5%, respectively in Korean students (age 7–12 years). “Current asthma symptoms” defined as positive response to “Wheezing, current,” “Treatment, current,” or “Exercise, current” was 9.6%. However, among them, 26% of the children had BHR in this cross-sectional study. The reason why students with BHR were low in subjects with current asthma symptoms may be because their symptoms were mild, have no recent symptoms in the previous week, or have been misdiagnosed. In addition, the Korean version of ISAAC questionnaire might have low sensitivity to diagnose asthma. In a study that conducted the asthma ISAAC questionnaire and hypertonic saline bronchial challenge test for 13–14-year-old Korean children, the sensitivity of BHR to “asthma ever” was the highest at 0.61, wheezing after exercise 0.35, nocturnal wheezing 0.24.13 Overall, the sensitivity of the ISAAC questionnaire to BHR was low. These results are consistent with our findings, suggesting that the ISSAC survey questions are not sufficient to predict BHR. Therefore, these results imply that the usefulness of the Korean ISAAC questionnaire for diagnosing current asthma in Korean children should be reevaluated. We defined “Current asthma” if the subjects with “Current asthma symptoms” showed BHR on the MCT or had less than 70% of predicted FEV1 value. The prevalence of “Current asthma” in the current study was 2.7%, consistent with 2.5% of the Seoul study (current asthma, lifetime diagnosis of asthma and experience of wheezing within 12 months),14 but 4.6% in a survey conducted about 20 years ago,7 suggesting decrement of true current asthma. 
The time trends in asthma symptom prevalence have shown geographic and national differences. The prevalence of “Current asthma symptoms” in children has decreased in Western countries and Oceania, where prevalence was previously high, whereas the prevalence of “Wheezing, current” has increased in parts of Africa, Latin America, and Asia, where prevalence was previously low; consequently, international differences in asthma symptom prevalence have narrowed [15]. In Korean children, the questionnaire-based prevalence of asthma symptoms showed a decreasing trend from 1995 to 2006; however, our results are higher than the 2006 data (Table 6). The previous studies were based on questionnaire surveys only, and the 2006 study had a low response rate compared with the current study. Therefore, previous studies have limitations in estimating the true prevalence of asthma. Our study estimated the prevalence of “Current asthma” more accurately by using both the ISAAC questionnaire and the MCT, and it also included a relatively large number of children; the prevalence of asthma can therefore be estimated more accurately than in previous studies. The prevalence of BHR can vary across studies because of differences in study design, including the study subjects, the bronchial provocation stimuli, and the cutoff values used [8,16,17,18,19]. The MCT is highly sensitive and widely used to detect and quantify BHR [20]. The prevalence of BHR, defined as methacholine PC20 ≤ 16 mg/mL, was 14.5% in children aged 7–12 years (mean age, 9.6 years) in our study, which was lower than in previous studies of Korean children with similar study periods: 19.4% (mean age, 9.2 years) in 2012 [8] and 17.5% (age 6–15 years) from 2009 to 2012 [21]. However, the former study was conducted at only one school, located in Seoul [8], and the subjects of the latter study were also in Seoul [21], whereas the present study populations were in Incheon, Gwangju (southern inland), and Gyeonggi. A German study showed that the prevalence of BHR increased from 6.4% in 1992–1993 to 11.6% in 1995–1996 in East German children [22]. It has been suggested that environmental changes, such as a Western lifestyle, daycare attendance, and exposure to new allergens and/or air pollutants, contributed to this increase in BHR over a short period after the reunification of Germany [23,24]. There have been few epidemiological studies of BHR in Korean children with asthma symptoms. The present study found that the prevalence of BHR was 30.5%, 28.8%, and 29.9% in children with “Wheezing, current,” “Treatment, current,” and “Exercise, current,” respectively. Our finding of 26% BHR in children with “Current asthma symptoms” is consistent with the 20.5% (age 7–19 years) [7] and 20.1% (age 10–12 years) [6] reported among subjects with asthma symptoms in Korean epidemiological studies, all of which are low. In contrast, clinical studies have reported a BHR prevalence of 84.0% in Korean children (mean age, 8.7 years) who had asthma symptoms or were diagnosed with asthma [21] and 70.7% in northern Taiwanese children (mean age, 7.5 years) who had symptoms within the previous 12 months [25], indicating a strong correlation between asthma symptoms and BHR. In our study, the prevalence of BHR and asthma symptoms in the younger age groups (7–8 years and 9–10 years) was higher than in the 11–12-year age group, showing a significant correlation with age (Table 2). 
Many studies have suggested that younger children are more sensitive to bronchoprovocation tests [8,26]. One reason is that the structure, impedance, and smooth-muscle contractile force of the airways change with age, so airway hyperresponsiveness decreases as the airways grow in size; lung function in children continues to increase until adolescence [27]. BHR was also associated with sex in this study and was more prevalent in boys before puberty onset. Physiological function and maturation of the lungs differ by sex from infancy, and these differences extend through pre-puberty and adolescence into adulthood. The prevalence of BHR by sex varies with age: because the growth spurt begins earlier in girls than in boys, BHR declines earlier in girls [28]. The current study found that the prevalence of BHR decreased sharply from 11 years of age in boys and from 9 years of age in girls, consistent with findings that the prevalence of BHR and asthma in boys decreases with age after puberty, whereas the prevalence of asthma in girls increases from puberty [29]. As asthma prevalence in females increases starting around puberty, the male predominance of asthma and BHR is reversed at puberty onset [30]. The mechanisms underlying the sex difference in the prevalence of asthma and BHR are still unknown; anatomical and hormonal factors are thought to be differently involved in airway development in males and females from birth to puberty [31,32,33]. Because our study included pre-pubertal children, males showed a higher prevalence of BHR. This study has several limitations. First, we defined BHR as methacholine PC20 ≤ 16 mg/mL without stratifying by age group. Since BHR can be influenced by age, the PC20 cutoff used to define BHR may need to be adjusted by age [21]. However, a positive BHR criterion of PC20 ≤ 16 mg/mL has been proposed [9], and given the 7–12-year age range of this study, this criterion is acceptable. Second, the study subjects were not recruited nationwide but were limited to the Incheon, Gwangju, and Gyeonggi areas, and the subject groups were not evenly distributed; in particular, the 9–10-year age group was larger in Incheon, which had a higher prevalence of asthma symptoms and BHR. However, the 2006 nationwide data showed that the prevalence of asthma (“Wheezing, current”) was almost the same in Incheon, Gyeonggi, and Gwangju, and slightly higher than in Seoul [34]. Third, the MCT may not be appropriate for diagnosing exercise-induced bronchoconstriction (EIB) compared with a field exercise challenge test [31,35]. However, there was little chance of underdiagnosing current asthma, because the prevalence of EIB in this study was only 3.5%, and about one third of children with EIB showed BHR. Fourth, although some patients may have had negative BHR because of asthma medication taken within the previous 12 months, the number of subjects treated for asthma within the last 12 months was very small, 156 of 4,791 (3.3%). In addition, several studies have consistently reported low adherence to asthma medications [36,37,38]. According to a recent Korean study, adherence to asthma medication was low (38% for inhaled medication, 50% for oral medication, and 67% for transdermal patches) [39]. Therefore, given that the treatment group was very small and adherence to asthma medications was low, the effect of treatment on the estimated asthma prevalence would be negligible. 
Finally, the prevalence of asthma symptoms was highest in the 9–10-year-olds in the current study, although it was expected to be highest in the 7–8-year-olds, making it difficult to identify a trend with age. In this study, the 9–10-year-old group was by far the largest (n = 2,304) and the 7–8-year-old group (n = 1,027) the smallest. In addition, the number of subjects differed among regions: 2,421 in Incheon, 1,029 in Gwangju, and 1,341 in Gyeonggi. Therefore, the high prevalence of asthma symptoms and BHR at age 9–10 years is thought to reflect the larger number of children enrolled in the region with a relatively high prevalence (Incheon). BHR is an important feature reflecting the airway hyperresponsiveness of asthma, but transient or persistent BHR can also occur with exposure to environmental pollutants and irritants, viral respiratory infections, chronic infections, rhinitis, sarcoidosis, mitral valve stenosis, and tracheal dysplasia [40]. The positive rate of BHR in children with “Diagnosis, ever” (22.3%) was similar to that in children with “Wheezing, ever” (22.4%). There are several possible reasons why the proportion with BHR was low among children with “Diagnosis, ever.” First, early transient wheezers may have been included. Second, the healthcare environment in Korea, including the Health Insurance Review and Assessment Service, requires a diagnosis of asthma when doctors prescribe asthma-related medications such as inhaled corticosteroids and leukotriene receptor antagonists. Third, early and appropriate treatment of childhood asthma may have contributed to resolution of BHR. Finally, overdiagnosis is possible, since the meaning of wheezing in the asthma questionnaire may have been understood differently by individual respondents. In conclusion, these results show that about one third of children with asthma symptoms had BHR, and the overall prevalence of current asthma was 2.7%. Most nationwide epidemiological studies of Korean children are based on questionnaires only, and studies that used both questionnaires and a bronchial provocation test have been limited by small populations and/or restricted regions. The current large-scale study, using both questionnaires and the MCT, provides an accurate estimate of current asthma prevalence in Korean elementary school children, which should help physicians diagnose asthma appropriately. Considering that performing the MCT is time- and energy-consuming and not easy, especially in children, we suggest measuring BHR selectively in children with asthma symptoms. Since about one third of children with asthma symptoms based on the ISAAC questionnaire are likely to be true asthmatics, studies are needed to develop a modified questionnaire with higher diagnostic accuracy for asthma.
Background: Most epidemiological studies depend on the subjects' response to asthma symptom questionnaires. Questionnaire-based study for childhood asthma prevalence may overestimate the true prevalence. The aim of this study was to investigate the prevalence of "Current asthma" using the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire and methacholine challenge test in Korean children. Methods: Our survey on allergic disease included 4,791 children (age 7-12 years) from 2010 to 2014 in Korean elementary schools. Bronchial hyperresponsiveness (BHR) was defined as provocative concentration of methacholine causing a 20% fall in forced expiratory volume in one second (FEV1) (PC20) ≤ 16 mg/mL. "Current asthma symptoms" was defined as positive response to "Wheezing, current," "Treatment, current," or "Exercise, current." "Current asthma" was defined when the subjects with "Current asthma symptoms" showed BHR on the methacholine challenge test or had less than 70% of predicted FEV1 value. Results: The prevalence of "Wheezing, ever," "Wheezing, current," "Diagnosis, ever," "Treatment, current," "Exercise, current," and "Current asthma symptoms" was 19.6%, 6.9%, 10.0%, 3.3%, 3.5%, and 9.6%, respectively, in our cross-sectional study of Korean elementary school students. The prevalence of BHR in elementary school students was 14.5%. The prevalence of BHR in children with "Wheezing, ever," "Wheezing, current," "Diagnosis, ever," "Treatment, current," and "Exercise, current" was 22.3%, 30.5%, 22.4%, 28.8%, and 29.9%, respectively. BHR was 26.1% in those with "Current asthma symptoms." The prevalence of "Current asthma" was 2.7%. Conclusions: Our large-scale study provides 2.7% prevalence of current asthma in Korean elementary school children. Since approximately one third of the children who have "Current asthma symptoms" present BHR, both subjective and objective methods are required to accurately predict asthma in subjects with asthma symptoms.
null
null
8,329
419
[ 83, 192, 58, 144, 78, 51, 77, 205, 392, 78, 110, 67, 256 ]
17
[ "asthma", "current", "bhr", "prevalence", "wheezing", "age", "symptoms", "children", "asthma symptoms", "current asthma" ]
[ "children current asthma", "accurate asthma prevalence", "asthma symptoms children", "childhood asthma", "asthma prevalence questionnaire" ]
null
null
[CONTENT] Asthma | Prevalence | Child | Epidemiology | Questionnaires | Bronchial Hyperresponsiveness [SUMMARY]
[CONTENT] Asthma | Prevalence | Child | Epidemiology | Questionnaires | Bronchial Hyperresponsiveness [SUMMARY]
[CONTENT] Asthma | Prevalence | Child | Epidemiology | Questionnaires | Bronchial Hyperresponsiveness [SUMMARY]
null
[CONTENT] Asthma | Prevalence | Child | Epidemiology | Questionnaires | Bronchial Hyperresponsiveness [SUMMARY]
null
[CONTENT] Asthma | Bronchial Hyperreactivity | Bronchial Provocation Tests | Bronchoconstrictor Agents | Child | Cross-Sectional Studies | Female | Forced Expiratory Volume | Humans | Male | Methacholine Chloride | Prevalence | Republic of Korea | Respiratory Sounds | Surveys and Questionnaires [SUMMARY]
[CONTENT] Asthma | Bronchial Hyperreactivity | Bronchial Provocation Tests | Bronchoconstrictor Agents | Child | Cross-Sectional Studies | Female | Forced Expiratory Volume | Humans | Male | Methacholine Chloride | Prevalence | Republic of Korea | Respiratory Sounds | Surveys and Questionnaires [SUMMARY]
[CONTENT] Asthma | Bronchial Hyperreactivity | Bronchial Provocation Tests | Bronchoconstrictor Agents | Child | Cross-Sectional Studies | Female | Forced Expiratory Volume | Humans | Male | Methacholine Chloride | Prevalence | Republic of Korea | Respiratory Sounds | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Asthma | Bronchial Hyperreactivity | Bronchial Provocation Tests | Bronchoconstrictor Agents | Child | Cross-Sectional Studies | Female | Forced Expiratory Volume | Humans | Male | Methacholine Chloride | Prevalence | Republic of Korea | Respiratory Sounds | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] children current asthma | accurate asthma prevalence | asthma symptoms children | childhood asthma | asthma prevalence questionnaire [SUMMARY]
[CONTENT] children current asthma | accurate asthma prevalence | asthma symptoms children | childhood asthma | asthma prevalence questionnaire [SUMMARY]
[CONTENT] children current asthma | accurate asthma prevalence | asthma symptoms children | childhood asthma | asthma prevalence questionnaire [SUMMARY]
null
[CONTENT] children current asthma | accurate asthma prevalence | asthma symptoms children | childhood asthma | asthma prevalence questionnaire [SUMMARY]
null
[CONTENT] asthma | current | bhr | prevalence | wheezing | age | symptoms | children | asthma symptoms | current asthma [SUMMARY]
[CONTENT] asthma | current | bhr | prevalence | wheezing | age | symptoms | children | asthma symptoms | current asthma [SUMMARY]
[CONTENT] asthma | current | bhr | prevalence | wheezing | age | symptoms | children | asthma symptoms | current asthma [SUMMARY]
null
[CONTENT] asthma | current | bhr | prevalence | wheezing | age | symptoms | children | asthma symptoms | current asthma [SUMMARY]
null
[CONTENT] asthma | prevalence | studies | children | country | asthma prevalence | bhr | prevalence asthma | korean | population [SUMMARY]
[CONTENT] current | asthma | 12 | subjects | mct | chest | wheezing | spirometer | society | defined [SUMMARY]
[CONTENT] current | wheezing | asthma | age | shown | bhr | column | prevalence | variables | table [SUMMARY]
null
[CONTENT] current | asthma | wheezing | bhr | prevalence | age | 12 | symptoms | children | current asthma [SUMMARY]
null
[CONTENT] ||| ||| the International Study of Asthma and Allergies | Childhood | ISAAC | Korean [SUMMARY]
[CONTENT] 4,791 | age 7-12 years | 2010 | 2014 | Korean ||| Bronchial | BHR | 20% | one second ||| Wheezing ||| BHR | less than 70% [SUMMARY]
[CONTENT] Wheezing | Wheezing | 19.6% | 6.9% | 10.0% | 3.3% | 3.5% | 9.6% | Korean ||| BHR | 14.5% ||| BHR | Wheezing | 22.3% | 30.5% | 22.4% | 28.8% | 29.9% ||| BHR | 26.1% ||| 2.7% [SUMMARY]
null
[CONTENT] ||| ||| the International Study of Asthma and Allergies | Childhood | ISAAC | Korean ||| 4,791 | age 7-12 years | 2010 | 2014 | Korean ||| Bronchial | BHR | 20% | one second ||| Wheezing ||| BHR | less than 70% ||| ||| Wheezing | Wheezing | 19.6% | 6.9% | 10.0% | 3.3% | 3.5% | 9.6% | Korean ||| BHR | 14.5% ||| BHR | Wheezing | 22.3% | 30.5% | 22.4% | 28.8% | 29.9% ||| BHR | 26.1% ||| 2.7% ||| 2.7% | Korean ||| approximately one third | BHR [SUMMARY]
null
Gastrin-releasing peptide receptor expression in non-cancerous bronchial epithelia is associated with lung cancer: a case-control study.
22296774
Normal bronchial tissue expression of GRPR, which encodes the gastrin-releasing peptide receptor, has been previously reported by us to be associated with lung cancer risk in 78 subjects, especially in females. We sought to define the contribution of GRPR expression in bronchial epithelia to lung cancer risk in a larger case-control study where adjustments could be made for tobacco exposure and sex.
BACKGROUND
We evaluated GRPR mRNA levels in histologically normal bronchial epithelial cells from 224 lung cancer patients and 107 surgical cancer-free controls. Associations with lung cancer were tested using logistic regression models.
METHODS
Bronchial GRPR expression was significantly associated with lung cancer (OR = 4.76; 95% CI = 2.32-9.77) in a multivariable logistic regression (MLR) model adjusted for age, sex, smoking status and pulmonary function. MLR analysis stratified by smoking status indicated that ORs were higher in never and former smokers (OR = 7.74; 95% CI = 2.96-20.25) compared to active smokers (OR = 1.69; 95% CI = 0.46-6.33). GRPR expression did not differ by subject sex, and lung cancer risk associated with GRPR expression was not modified by sex.
RESULTS
GRPR expression in non-cancerous bronchial epithelium was significantly associated with the presence of lung cancer in never and former smokers. The association in never and former smokers was found in males and females. Association with lung cancer did not differ by sex in any smoking group.
CONCLUSIONS
[ "Adenocarcinoma", "Adenocarcinoma of Lung", "Adult", "Aged", "Aged, 80 and over", "Bronchi", "Carcinoma, Non-Small-Cell Lung", "Carcinoma, Squamous Cell", "Case-Control Studies", "Female", "Humans", "Lung", "Lung Neoplasms", "Male", "Middle Aged", "Receptors, Bombesin", "Risk", "Small Cell Lung Carcinoma", "Smoking" ]
3305653
Background
Lung cancer incidence rates have been declining for men since the 1980s. However, incidence rates for women over 65 have been increasing or have remained steady during this same time period [1]. Though tobacco use is a significant risk factor for lung cancer, 15-20% of lung cancer patients are lifetime never-smokers. Our group previously reported that bronchial epithelium expression of GRPR, which encodes the gastrin-releasing peptide receptor (GRPR), was associated with a diagnosis of lung cancer in female never smokers [2]. GRPR is an X-linked gene that has been reported to escape X-inactivation [3]. This finding raised the possibility that increased GRPR expression in women accounted for some of the increased incidence rates of lung cancer in never smokers who are female, compared to never smoking men, which was recently reported in a large prospective cohort study [4]. Since GRPR stimulation induces proliferative effects in bronchial cells [5], it is possible that activation of this pathway is a risk factor for lung cancer separate from that of tobacco exposure. GRPR is overexpressed in lung cancers and in head and neck squamous cell carcinoma (HNSCC) [6,7]. We have previously reported elevated levels of GRPR mRNA in lung cancers and HNSCC [6,8]. In addition to cancer-specific overexpression of GRPR, we have demonstrated that mucosal tissues adjacent to HNSCC have GRPR mRNA levels reflective of the adjacent HNSCC tumor [6]. These findings suggest that elevated GRPR mRNA in normal bronchial epithelia may be associated with lung cancer risk and/or may indicate the presence of lung cancer. We undertook a case-control study to determine whether elevated GRPR mRNA expression in normal, at-risk epithelium correlated with the presence of lung cancer. We evaluated the association between GRPR mRNA expression in purified cultured normal bronchial epithelial cells and the presence of lung cancer. Our primary finding was the observed increased expression of GRPR in normal bronchial epithelia in lung cancer cases compared to cancer-free controls. The impact of this was highest in subjects who never smoked or who had undergone smoking cessation before diagnosis. The association was found in both male and female never smokers, suggesting GRPR plays a similar role in development of lung cancer in men and women. The result of this study highlights GRPR overexpression in normal epithelial mucosa as a candidate risk factor for lung cancer, especially in those with limited tobacco exposure.
Methods
Lung cancer case-control study subjects and tissues Lung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies. Characteristics of Lung Cancer Cases and Controls #Median overall survival for cancer cases alive at last follow-up †Rank sum test ‡Chi-square test §Fisher's exact test Lung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). 
In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies. Characteristics of Lung Cancer Cases and Controls #Median overall survival for cancer cases alive at last follow-up †Rank sum test ‡Chi-square test §Fisher's exact test Detection of GRPR expression in bronchial epithelial cells RNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls. RNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls. Statistical Analyses In order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05. Evaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. 
Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed. Association of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals. In order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05. Evaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed. Association of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals.
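The analysis plan described in this Methods text (multivariable logistic regression with a sex-by-GRPR interaction term, stratification by smoking status, and Cox proportional hazards models checked with scaled Schoenfeld residuals) maps onto standard statistical tooling. The sketch below is an illustration of that general workflow in Python, not the authors' code; the file name and all column names are hypothetical.

```python
# Illustrative sketch (hypothetical columns, not the authors' code) of the
# analysis strategy described above: multivariable logistic regression with a
# sex-by-GRPR interaction, a stratified analysis by smoking status, and a Cox
# model whose proportional-hazards assumption is checked via Schoenfeld residuals.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("grpr_case_control.csv")  # hypothetical file, one row per subject

# Logistic regression: lung cancer (0/1) on GRPR expression, adjusted for age,
# sex, smoking status and pulmonary function, plus a sex*GRPR interaction term.
logit_model = smf.logit(
    "case ~ grpr_positive * C(sex) + age + C(smoking_status) + pulm_function",
    data=df,
).fit()
odds_ratios = np.exp(logit_model.params)   # ORs for each model term
or_ci = np.exp(logit_model.conf_int())     # 95% CIs on the OR scale
print(odds_ratios, or_ci, sep="\n")

# Stratified analysis: refit the adjusted model within each smoking stratum.
for status, sub in df.groupby("smoking_status"):
    fit = smf.logit(
        "case ~ grpr_positive + age + C(sex) + pulm_function", data=sub
    ).fit(disp=0)
    print(status, np.exp(fit.params["grpr_positive"]))

# Cox model for overall survival among cases, adjusted for age, sex and stage
# (all covariates numerically coded); check_assumptions() uses scaled
# Schoenfeld residuals to assess proportional hazards.
cases = df[df["case"] == 1][
    ["surv_months", "death", "grpr_positive", "age", "sex_male", "stage"]
]
cph = CoxPHFitter()
cph.fit(cases, duration_col="surv_months", event_col="death")
cph.print_summary()
cph.check_assumptions(cases)
```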
null
null
Conclusions
AME contributed to the conception of analyses, performed analyses, interpreted statistical analyses and writing of the manuscript. AGD coordinated sample storage, retrieval and database. YS and SL performed statistical analyses. JMP, JDL, RD, YEM enrolled patients into the clinical trial, collected and provided specimens for analysis. JRG contributed to the conception and design of this project and contributing to the writing of this manuscript. JMS contributed to the conception and design of this project and contributed to the writing of this manuscript. All authors have read and approved the final manuscript
[ "Background", "Lung cancer case-control study subjects and tissues", "Detection of GRPR expression in bronchial epithelial cells", "Statistical Analyses", "Results", "GRPR expression in bronchial epithelia was more frequent in lung cancer patients than cancer-free control subjects", "Bronchial GRPR expression was not associated with sex, ethnicity or pulmonary function in lung cancer cases or controls", "GRPR expression levels did not reflect disease stage or tumor type", "GRPR expression in non-cancerous bronchial mucosal tissues was significantly associated with lung cancer independent of age, sex and smoking status", "GRPR expression in bronchial epithelium was significantly associated with lung cancer among never smokers and former smokers but not active smokers", "GRPR expression in surrogate tissues was not a prognostic indicator for survival in lung cancer cases", "Discussion", "Conclusions" ]
[ "Lung cancer incidence rates have been declining for men since the 1980s. However, incidence rates for women over 65 have been increasing or have remained steady during this same time period [1]. Though tobacco use is a significant risk factor for lung cancer, 15-20% of lung cancer patients are lifetime never-smokers. Our group previously reported that bronchial epithelium expression of GRPR, which encodes the gastrin-releasing peptide receptor (GRPR), was associated with a diagnosis of lung cancer in female never smokers [2]. GRPR is an X-linked gene that has been reported to escape X-inactivation [3]. This finding raised the possibility that increased GRPR expression in women accounted for some of the increased incidence rates of lung cancer in never smokers who are female, compared to never smoking men, which was recently reported in a large prospective cohort study [4]. Since GRPR stimulation induces proliferative effects in bronchial cells [5], it is possible that activation of this pathway is a risk factor for lung cancer separate from that of tobacco exposure.\nGRPR is overexpressed in lung cancers and in head and neck squamous cell carcinoma (HNSCC) [6,7]. We have previously reported elevated levels of GRPR mRNA in lung cancers and HNSCC [6,8]. In addition to cancer-specific overexpression of GRPR, we have demonstrated that mucosal tissues adjacent to HNSCC have GRPR mRNA levels reflective of the adjacent HNSCC tumor [6]. These findings suggest that elevated GRPR mRNA in normal bronchial epithelia may be associated with lung cancer risk and/or may indicate the presence of lung cancer.\nWe undertook a case-control study to determine whether elevated GRPR mRNA expression in normal, at-risk epithelium correlated with the presence of lung cancer. We evaluated the association between GRPR mRNA expression in purified cultured normal bronchial epithelial cells and the presence of lung cancer. Our primary finding was the observed increased expression of GRPR in normal bronchial epithelia in lung cancer cases compared to cancer-free controls. The impact of this was highest in subjects who never smoked or who had undergone smoking cessation before diagnosis. The association was found in both male and female never smokers, suggesting GRPR plays a similar role in development of lung cancer in men and women. The result of this study highlights GRPR overexpression in normal epithelial mucosa as a candidate risk factor for lung cancer, especially in those with limited tobacco exposure.", "Lung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. 
Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies.\nCharacteristics of Lung Cancer Cases and Controls\n#Median overall survival for cancer cases alive at last follow-up\n†Rank sum test\n‡Chi-square test\n§Fisher's exact test", "RNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls.", "In order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05.\nEvaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. 
Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed.\nAssociation of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals.", " GRPR expression in bronchial epithelia was more frequent in lung cancer patients than cancer-free control subjects Presence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. In contrast, only 41 (38%) of the 107 cancer-free surgical controls had detectable GRPR bronchial expression.\nGRPR expression has been reported to be elevated with tobacco use. We tested for association between GRPR expression in non-cancerous surrogate tissues and smoking status and pack-years of tobacco use stratified by cancer status. Among never and former smokers, lung cancer patients more frequently had detectable GRPR expression while the frequency of detected GRPR mRNA was similar for actively smoking lung cancer cases and controls (Table 2). We observed a statistically significant association between GRPR expression in bronchial mucosa and smoking status among lung cancer patients (P = 0.03, Table 2) with overrepresentation of GRPR bronchial expression among never smoking lung cancer patients. GRPR bronchial expression was also associated with smoking status among cancer-free surgical controls with an overrepresentation of GRPR bronchial expression among active smokers (P = 0.02) (Table 2). Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression while, the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2).\nEvaluation of association between GRPR broncial expression and demographic and risk factors stratified by lung cancer case status.\n*Significant at P < 0.05\n† Wilcoxon rank sum test\n‡ Chi-square test\n§ Fisher's exact test\nPresence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. 
Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. In contrast, only 41 (38%) of the 107 cancer-free surgical controls had detectable GRPR bronchial expression.\nGRPR expression has been reported to be elevated with tobacco use. We tested for association between GRPR expression in non-cancerous surrogate tissues and smoking status and pack-years of tobacco use stratified by cancer status. Among never and former smokers, lung cancer patients more frequently had detectable GRPR expression while the frequency of detected GRPR mRNA was similar for actively smoking lung cancer cases and controls (Table 2). We observed a statistically significant association between GRPR expression in bronchial mucosa and smoking status among lung cancer patients (P = 0.03, Table 2) with overrepresentation of GRPR bronchial expression among never smoking lung cancer patients. GRPR bronchial expression was also associated with smoking status among cancer-free surgical controls with an overrepresentation of GRPR bronchial expression among active smokers (P = 0.02) (Table 2). Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression while, the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2).\nEvaluation of association between GRPR broncial expression and demographic and risk factors stratified by lung cancer case status.\n*Significant at P < 0.05\n† Wilcoxon rank sum test\n‡ Chi-square test\n§ Fisher's exact test\n Bronchial GRPR expression was not associated with sex, ethnicity or pulmonary function in lung cancer cases or controls In order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2].\nIn this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). 
However, lung cancer cases positive for GRPR expression were statistically younger than GRPR negative cases (Table 2).\nEvaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (p = 0.065, Fisher's exact test) with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 trypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma that did not reach statistical significance suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects.\nIn order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2].\nIn this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). However, lung cancer cases positive for GRPR expression were statistically younger than GRPR negative cases (Table 2).\nEvaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (p = 0.065, Fisher's exact test) with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 trypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma that did not reach statistical significance suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects.\n GRPR expression levels did not reflect disease stage or tumor type In order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. 
Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2).\nIn order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2).\n GRPR expression in non-cancerous bronchial mucosal tissues was significantly associated with lung cancer independent of age, sex and smoking status Detection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GPRR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female.\nGRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status.\nDetection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GPRR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. 
Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female.\nGRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status.\n GRPR expression in bronchial epithelium was significantly associated with lung cancer among never smokers and former smokers but not active smokers Molecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These finding indicate that disease etiology for lung cancer varies by tobacco-use history. A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% C.I = 3.32-87.13) and former smokers (O.R. = 3.69; 95% C.I = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, which were found to not differ significantly (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category.\nBecause we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. We found the GRPR expression by sex interaction to not be significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52) and the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction for significance for never and former smokers. We found that the GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers (P = 0.47).\nMolecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These finding indicate that disease etiology for lung cancer varies by tobacco-use history. A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% C.I = 3.32-87.13) and former smokers (O.R. = 3.69; 95% C.I = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). 
The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, which were found to not differ significantly (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category.\nBecause we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. We found the GRPR expression by sex interaction to not be significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52) and the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction for significance for never and former smokers. We found that the GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers (P = 0.47).\n GRPR expression in surrogate tissues was not a prognostic indicator for survival in lung cancer cases Because GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not an indication of overall lung cancer survival. The hazards ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression.\nBecause GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not an indication of overall lung cancer survival. The hazards ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression.", "Presence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. 
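The stratum-specific odds ratios and confidence intervals quoted above come from standard 2x2 contingency-table and regression methods. As a purely illustrative sketch of that arithmetic (the cell counts below are invented and the function is ours, not the authors'), an odds ratio with a Wald 95% CI can be computed as follows; pooled estimates and homogeneity tests across smoking strata (such as the Mantel-Haenszel approach described above) are available in standard statistical packages.

```python
# Illustrative arithmetic only (invented counts, not the study's data): an odds
# ratio and Wald 95% CI from a 2x2 table of GRPR expression (exposed) by lung
# cancer status, within a single smoking stratum.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed cases, b=unexposed cases, c=exposed controls, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical never-smoker stratum: GRPR+/GRPR- counts in cases and controls.
print(odds_ratio_ci(a=30, b=5, c=8, d=20))
```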
Discussion

In lung cancer, GRPR and its ligand are involved in autocrine growth stimulation [6,14]. Previous results showed that lung tumor tissues have elevated GRPR expression, and we have previously described elevated GRPR mRNA levels in histologically normal mucosal tissues adjacent to HNSCC compared to oral mucosal tissues from cancer-free control subjects [6]. GRPR mRNA expression has also been detected in prostate tumors and tissues adjacent to prostate cancers [15]. Our new findings reported here indicate that in a prospectively collected lung cancer case-control population, GRPR expression in at-risk upper aerodigestive mucosa was significantly associated with lung cancer. Importantly, even after controlling for the effects of possible confounding by age, sex, and tobacco use, GRPR expression in non-cancerous mucosal tissues was significantly associated with lung cancer among never and former smokers and appeared to confer similar risk to both sexes.

Our finding here of no difference in bronchial epithelial cell GRPR expression between men and women did not replicate our previous result [2]. In our previous bronchial cell study, which had a smaller sample size of 78 patients, the presence of cancer was not separately evaluated, and only one never smoker with lung cancer was male [2]. In retrospect, it is likely that the associations between GRPR expression, female sex, and smoking observed in our previous study were actually surrogates for the underlying association between bronchial epithelial GRPR expression and lung cancer, which appears to be most significant in never smokers.
Therefore, this study presents important revisions to our previous understanding of the role of GRPR in lung cancers arising in females and males; it supports a similar relationship between bronchial GRPR mRNA expression and lung cancer risk for both sexes.

Although in the current study we also found no association between GRPR expression and pack-years of smoking, we did observe associations between smoking status (active versus former versus never smoker) and GRPR expression. We observed these associations for cases and controls separately, but the relationships differed. The proportion of GRPR-positive subjects among actively smoking cancer-free controls was similar to that among actively smoking lung cancer cases. These findings were consistent with our previously reported finding that GRPR expression in bronchial epithelium is activated by tobacco use [2] and with reports that bombesin-like peptide receptors play a role in wound healing following airway injury [16]. Similar to our previous finding of persistent bronchial GRPR expression after tobacco cessation [5], we detected bronchial GRPR expression in the majority of former smoking lung cancer cases. In contrast, in the current study bronchial GRPR expression was detected in only a minority of cancer-free controls who were former smokers. The inclusion of more cancer-free control subjects in this study compared to our 1997 study has allowed us to evaluate lung cancer patients and cancer-free controls separately and has revealed new insights regarding the relationship between bronchial GRPR expression and tobacco use in cancer-free controls.

Although the number of actively smoking surgical controls in our current study was small, the data suggest that bronchial GRPR expression may be induced by tobacco use in subjects without lung cancer, but that this increase likely subsides following smoking cessation in most subjects without lung cancer. Among former smoking lung cancer cases, bronchial GRPR expression may be aberrantly maintained after smoking cessation, or, as in never smoking lung cancer cases, it may reflect risk that is independent of tobacco use. Our 1997 report indicated that of the 4 cancer-free subjects with defined bronchial GRPR expression and smoking status, 1 active smoker was GRPR positive while 3 subjects who were former or never smokers were negative for GRPR expression. Therefore, although the numbers are small, the relationship between smoking status and bronchial GRPR expression among cancer-free control subjects in the 1997 study is consistent with our current results.

Of special interest was the finding of frequent bronchial GRPR expression among never smoking lung cancer cases. While GRPR expression was detected in only a minority of never smoking cancer-free controls, it was detected in almost 90% of never smoking lung cancer cases. Though the specific cause of GRPR expression in never smoking lung cancer cases is unknown, we posit that bronchial GRPR expression may reflect an inherent or conferred risk factor that is best observed in the absence of the more potent risk factor of tobacco use. Though bronchial GRPR expression was more common among cancer-free controls with a diagnosis of granuloma, suggesting a possible inflammatory component to bronchial GRPR expression among cancer-free controls, bronchial GRPR expression was not increased in lung cancer cases or controls with more severe pulmonary obstruction, which also has an inflammatory component. Therefore, the role of inflammation in elevated bronchial GRPR expression remains undefined.

GRPR expression in normal bronchial tissues was not correlated with clinical disease stage. Therefore, our data suggest that GRPR expression in surrogate tissues did not reflect tumor burden and, perhaps, was not a direct consequence of the prevalent cancer. Our finding that detectable GRPR expression in normal upper aerodigestive tissues was not an indication of poor overall survival for lung cancer cases indicates that elevated GRPR bronchial cell expression was not associated with disease progression and is, instead, likely to be a marker of risk exposure or of host susceptibility.

Lung cancer cases positive for GRPR bronchial expression were significantly younger than cases negative for GRPR expression, which supports a role for GRPR bronchial expression in conferring lung cancer risk. Though a prospective cohort study will be required to fully understand the relationship between GRPR expression levels in surrogate tissues and the development of lung cancer, GRPR expression in normal bronchial tissues has potential value as a marker for elevated risk, especially in those with little or no tobacco exposure. Though GRPR is overexpressed in many solid tumors, only one other group has evaluated GRPR, GRP and/or their gene products in surrogate tissues of cancer patients to date. Uchida et al. reported that serum levels of proGRP, measured by enzyme-linked immunosorbent assay (ELISA), correlated with tumor GRP gene expression levels in small cell lung cancer (SCLC) patients [17]. GRPR expression levels in tumors were not evaluated in our study, as material for analysis was not available.

The increased risk due to elevated GRPR expression may be most apparent in never and former smokers because the contribution of GRPR expression to risk is obscured in active smokers by factors such as genetic abnormalities and inflammatory processes that confer substantial risk from tobacco use. Elevated GRPR expression in the lung may independently contribute to increased cancer risk by promoting proliferation. GRPR is expressed at early embryonic stages in the nervous, urogenital, respiratory, and gastrointestinal systems, and expression in these tissues is generally down-regulated before birth [18-20]. The GRPR ligand, GRP, a bombesin-like peptide (BLP) growth factor, is expressed by pulmonary neuroendocrine cells and has been shown to stimulate lung development in utero and to increase growth and maturation of human fetal lung organ cultures [20,21]. In non-cancerous tissues, BLPs stimulate growth of bronchial, gastrointestinal and pancreatic epithelial cells and lead to ligand-dependent hyperplasia [5,19,22-24]. GRPR and GRP are involved in an autocrine stimulation loop in lung cancer and HNSCC [6,14], and GRPR expression has been shown to be positively regulated by GRP [20]. Increased GRPR expression in the lung may, therefore, reflect a state that is more nascent and proliferative in nature than epithelium with low or undetected GRPR expression.

We acknowledge the limitations of our case-control study population. The study required hospital surgical controls, which limited recruitment; as a result, our lung cancer cases are older than our controls and include fewer men. In addition, the exhaustion of samples made quantitative measurement of GRPR mRNA expression impossible. We confined our analysis to GRPR mRNA because of antibody reagent limitations at the time the samples were evaluated, leaving open the question of whether GRPR protein levels also differ. Despite these limitations, a strong association between detectable GRPR expression in normal bronchial tissue and lung cancer was demonstrated in the case-control population even after adjusting for sex and age.

Though we did not assess epidermal growth factor receptor (EGFR) mRNA or protein expression in bronchial epithelial cultures, we speculate that increased GRPR expression contributes to lung cancer through EGFR-dependent and/or -independent mechanisms. The EGFR pathway has been reported to be activated in lung tumors from never smokers with EGFR mutations, and it is possible that lung tumors developing in never smokers have multiple mechanisms for EGFR activation. The GRPR pathway is known to interact with the EGFR pathway in lung cancer cells by increasing the release of EGFR ligands such as amphiregulin [8], which could act to further promote cancer in never smokers who develop EGFR mutations. Alternatively, activation of the GRPR pathway may increase EGFR bronchial cell signaling in the absence of EGFR mutation, providing another route to lung cancer development in never smokers.

Conclusions

The GRPR pathway may activate proliferative pathways that increase the likelihood of lung cancer development in male and female former and never smokers. We conclude from our data that GRPR expression likely does not contribute to sex differences in lung cancer incidence rates among never or former smokers.
[ "Background", "Methods", "Lung cancer case-control study subjects and tissues", "Detection of GRPR expression in bronchial epithelial cells", "Statistical Analyses", "Results", "GRPR expression in bronchial epithelia was more frequent in lung cancer patients than cancer-free control subjects", "Bronchial GRPR expression was not associated with sex, ethnicity or pulmonary function in lung cancer cases or controls", "GRPR expression levels did not reflect disease stage or tumor type", "GRPR expression in non-cancerous bronchial mucosal tissues was significantly associated with lung cancer independent of age, sex and smoking status", "GRPR expression in bronchial epithelium was significantly associated with lung cancer among never smokers and former smokers but not active smokers", "GRPR expression in surrogate tissues was not a prognostic indicator for survival in lung cancer cases", "Discussion", "Conclusions" ]
[ "Lung cancer incidence rates have been declining for men since the 1980s. However, incidence rates for women over 65 have been increasing or have remained steady during this same time period [1]. Though tobacco use is a significant risk factor for lung cancer, 15-20% of lung cancer patients are lifetime never-smokers. Our group previously reported that bronchial epithelium expression of GRPR, which encodes the gastrin-releasing peptide receptor (GRPR), was associated with a diagnosis of lung cancer in female never smokers [2]. GRPR is an X-linked gene that has been reported to escape X-inactivation [3]. This finding raised the possibility that increased GRPR expression in women accounted for some of the increased incidence rates of lung cancer in never smokers who are female, compared to never smoking men, which was recently reported in a large prospective cohort study [4]. Since GRPR stimulation induces proliferative effects in bronchial cells [5], it is possible that activation of this pathway is a risk factor for lung cancer separate from that of tobacco exposure.\nGRPR is overexpressed in lung cancers and in head and neck squamous cell carcinoma (HNSCC) [6,7]. We have previously reported elevated levels of GRPR mRNA in lung cancers and HNSCC [6,8]. In addition to cancer-specific overexpression of GRPR, we have demonstrated that mucosal tissues adjacent to HNSCC have GRPR mRNA levels reflective of the adjacent HNSCC tumor [6]. These findings suggest that elevated GRPR mRNA in normal bronchial epithelia may be associated with lung cancer risk and/or may indicate the presence of lung cancer.\nWe undertook a case-control study to determine whether elevated GRPR mRNA expression in normal, at-risk epithelium correlated with the presence of lung cancer. We evaluated the association between GRPR mRNA expression in purified cultured normal bronchial epithelial cells and the presence of lung cancer. Our primary finding was the observed increased expression of GRPR in normal bronchial epithelia in lung cancer cases compared to cancer-free controls. The impact of this was highest in subjects who never smoked or who had undergone smoking cessation before diagnosis. The association was found in both male and female never smokers, suggesting GRPR plays a similar role in development of lung cancer in men and women. The result of this study highlights GRPR overexpression in normal epithelial mucosa as a candidate risk factor for lung cancer, especially in those with limited tobacco exposure.", " Lung cancer case-control study subjects and tissues Lung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. 
Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies.\nCharacteristics of Lung Cancer Cases and Controls\n#Median overall survival for cancer cases alive at last follow-up\n†Rank sum test\n‡Chi-square test\n§Fisher's exact test\nLung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies.\nCharacteristics of Lung Cancer Cases and Controls\n#Median overall survival for cancer cases alive at last follow-up\n†Rank sum test\n‡Chi-square test\n§Fisher's exact test\n Detection of GRPR expression in bronchial epithelial cells RNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. 
PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls.\nRNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls.\n Statistical Analyses In order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05.\nEvaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed.\nAssociation of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. 
The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals.\nIn order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05.\nEvaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed.\nAssociation of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals.", "Lung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). 
In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies.\nCharacteristics of Lung Cancer Cases and Controls\n#Median overall survival for cancer cases alive at last follow-up\n†Rank sum test\n‡Chi-square test\n§Fisher's exact test", "RNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls.", "In order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05.\nEvaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed.\nAssociation of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. 
The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals.", " GRPR expression in bronchial epithelia was more frequent in lung cancer patients than cancer-free control subjects Presence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. In contrast, only 41 (38%) of the 107 cancer-free surgical controls had detectable GRPR bronchial expression.\nGRPR expression has been reported to be elevated with tobacco use. We tested for association between GRPR expression in non-cancerous surrogate tissues and smoking status and pack-years of tobacco use stratified by cancer status. Among never and former smokers, lung cancer patients more frequently had detectable GRPR expression while the frequency of detected GRPR mRNA was similar for actively smoking lung cancer cases and controls (Table 2). We observed a statistically significant association between GRPR expression in bronchial mucosa and smoking status among lung cancer patients (P = 0.03, Table 2) with overrepresentation of GRPR bronchial expression among never smoking lung cancer patients. GRPR bronchial expression was also associated with smoking status among cancer-free surgical controls with an overrepresentation of GRPR bronchial expression among active smokers (P = 0.02) (Table 2). Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression while, the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2).\nEvaluation of association between GRPR broncial expression and demographic and risk factors stratified by lung cancer case status.\n*Significant at P < 0.05\n† Wilcoxon rank sum test\n‡ Chi-square test\n§ Fisher's exact test\nPresence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. In contrast, only 41 (38%) of the 107 cancer-free surgical controls had detectable GRPR bronchial expression.\nGRPR expression has been reported to be elevated with tobacco use. 
We tested for association between GRPR expression in non-cancerous surrogate tissues and smoking status and pack-years of tobacco use stratified by cancer status. Among never and former smokers, lung cancer patients more frequently had detectable GRPR expression while the frequency of detected GRPR mRNA was similar for actively smoking lung cancer cases and controls (Table 2). We observed a statistically significant association between GRPR expression in bronchial mucosa and smoking status among lung cancer patients (P = 0.03, Table 2) with overrepresentation of GRPR bronchial expression among never smoking lung cancer patients. GRPR bronchial expression was also associated with smoking status among cancer-free surgical controls with an overrepresentation of GRPR bronchial expression among active smokers (P = 0.02) (Table 2). Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression while, the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2).\nEvaluation of association between GRPR broncial expression and demographic and risk factors stratified by lung cancer case status.\n*Significant at P < 0.05\n† Wilcoxon rank sum test\n‡ Chi-square test\n§ Fisher's exact test\n Bronchial GRPR expression was not associated with sex, ethnicity or pulmonary function in lung cancer cases or controls In order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2].\nIn this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). However, lung cancer cases positive for GRPR expression were statistically younger than GRPR negative cases (Table 2).\nEvaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (p = 0.065, Fisher's exact test) with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 trypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma that did not reach statistical significance suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. 
We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects.\nIn order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2].\nIn this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). However, lung cancer cases positive for GRPR expression were statistically younger than GRPR negative cases (Table 2).\nEvaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (p = 0.065, Fisher's exact test) with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 trypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma that did not reach statistical significance suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects.\n GRPR expression levels did not reflect disease stage or tumor type In order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2).\nIn order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2).\n GRPR expression in non-cancerous bronchial mucosal tissues was significantly associated with lung cancer independent of age, sex and smoking status Detection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). 
Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GPRR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female.\nGRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status.\nDetection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GPRR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female.\nGRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status.\n GRPR expression in bronchial epithelium was significantly associated with lung cancer among never smokers and former smokers but not active smokers Molecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These finding indicate that disease etiology for lung cancer varies by tobacco-use history. 
A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% C.I = 3.32-87.13) and former smokers (O.R. = 3.69; 95% C.I = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, which were found to not differ significantly (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category.\nBecause we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. We found the GRPR expression by sex interaction to not be significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52) and the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction for significance for never and former smokers. We found that the GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers (P = 0.47).\nMolecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These finding indicate that disease etiology for lung cancer varies by tobacco-use history. A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% C.I = 3.32-87.13) and former smokers (O.R. = 3.69; 95% C.I = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, which were found to not differ significantly (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). 
A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category.\nBecause we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. We found the GRPR expression by sex interaction to not be significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52) and the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction for significance for never and former smokers. We found that the GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers (P = 0.47).\n GRPR expression in surrogate tissues was not a prognostic indicator for survival in lung cancer cases Because GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not an indication of overall lung cancer survival. The hazards ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression.\nBecause GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not an indication of overall lung cancer survival. The hazards ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression.", "Presence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. In contrast, only 41 (38%) of the 107 cancer-free surgical controls had detectable GRPR bronchial expression.\nGRPR expression has been reported to be elevated with tobacco use. We tested for association between GRPR expression in non-cancerous surrogate tissues and smoking status and pack-years of tobacco use stratified by cancer status. 
Among never and former smokers, lung cancer patients more frequently had detectable GRPR expression while the frequency of detected GRPR mRNA was similar for actively smoking lung cancer cases and controls (Table 2). We observed a statistically significant association between GRPR expression in bronchial mucosa and smoking status among lung cancer patients (P = 0.03, Table 2) with overrepresentation of GRPR bronchial expression among never smoking lung cancer patients. GRPR bronchial expression was also associated with smoking status among cancer-free surgical controls with an overrepresentation of GRPR bronchial expression among active smokers (P = 0.02) (Table 2). Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression while, the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2).\nEvaluation of association between GRPR broncial expression and demographic and risk factors stratified by lung cancer case status.\n*Significant at P < 0.05\n† Wilcoxon rank sum test\n‡ Chi-square test\n§ Fisher's exact test", "In order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2].\nIn this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). However, lung cancer cases positive for GRPR expression were statistically younger than GRPR negative cases (Table 2).\nEvaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (p = 0.065, Fisher's exact test) with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 trypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma that did not reach statistical significance suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects.", "In order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. 
"In order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2).", "Detection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R. = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function, expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GRPR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female.\nGRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status.", "Molecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These findings indicate that disease etiology for lung cancer varies by tobacco-use history. A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% CI = 3.32-87.13) and former smokers (O.R. = 3.69; 95% CI = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, whose stratum-specific odds ratios did not differ significantly from each other (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category.\n
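The stratum-specific counts behind these odds ratios are not reported in the text, so the sketch below uses placeholder 2x2 tables purely to illustrate how a Mantel-Haenszel pooled odds ratio and a test of homogeneity across smoking strata can be computed. Note that statsmodels implements a Breslow-Day test of equal odds ratios, a close analogue of the Mantel-Haenszel homogeneity test cited above rather than the identical procedure.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Placeholder 2x2 tables, one per smoking stratum (never, former, active).
# Rows: GRPR detected / not detected; columns: lung cancer case / control.
# The real stratum-specific counts are not given in the text; these values
# only demonstrate the mechanics of a stratified odds-ratio analysis.
tables = [
    np.array([[34, 5], [5, 8]]),     # never smokers (hypothetical)
    np.array([[90, 20], [42, 34]]),  # former smokers (hypothetical)
    np.array([[34, 16], [19, 10]]),  # active smokers (hypothetical)
]
strat = StratifiedTable(tables)
print("Mantel-Haenszel pooled OR:", strat.oddsratio_pooled)
print(strat.test_equal_odds())   # Breslow-Day test of homogeneity of ORs
```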
Because we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. We found that the GRPR expression by sex interaction was not significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52) and the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction for significance for never and former smokers combined. We found that the GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers (P = 0.47).", "Because GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not an indication of overall lung cancer survival. The hazard ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression.", "In lung cancer, GRPR and its ligand are involved in autocrine growth stimulation [6,14]. Previous results showed that lung tumor tissues have elevated GRPR expression, and we have previously described elevated GRPR mRNA levels in histologically normal mucosal tissues adjacent to HNSCC compared to oral mucosal tissues from cancer-free control subjects [6]. GRPR mRNA expression has also been detected in prostate tumors and tissues adjacent to prostate cancers [15]. Our new findings reported here indicate that in a prospectively collected lung cancer case-control population, GRPR expression in at-risk upper aerodigestive mucosa was significantly associated with lung cancer. Importantly, even after controlling for the effects of possible confounding by age, sex, and tobacco use, GRPR expression in non-cancerous mucosal tissues was significantly associated with lung cancer among never and former smokers and appeared to confer similar risk to both sexes.\nOur finding here of no difference in bronchial epithelial cell GRPR expression between men and women did not replicate our previous result [2]. In our previous bronchial cell study, which had a smaller sample size of 78 patients, the presence of cancer was not separately evaluated, and only one never smoker with lung cancer was male [2]. In retrospect, it is likely that the associations between GRPR expression, female sex, and smoking observed in our previous study were actually surrogates for the underlying association between bronchial epithelial GRPR expression and lung cancer, which appears to be most significant in never smokers. Therefore, this study presents important revisions to our previous understanding of the role of GRPR in lung cancers arising in females and males; it supports a similar role for bronchial GRPR mRNA expression in lung cancer risk for both sexes.\nAlthough in the current study we also found no association between GRPR expression and pack-years of smoking, we did observe associations between smoking status (active versus former smoker versus never smoker) and GRPR expression. We observed these associations for both cases and controls separately, but the relationships were different. 
The proportion of GRPR positive actively smoking cancer-free control subjects was similar to the proportion of actively smoking lung cancer cases. These findings were consistent with our previously reported findings that GRPR expression in bronchial epithelium was activated with tobacco use [2] and with findings that bombesin-like peptide receptors play a role in wound healing following airway injury [16]. Similar to our previous findings of persistent bronchial GRPR expression after tobacco cessation [5], we detected bronchial GRPR expression in the majority of former smoking lung cancer cases. In contrast, in the current study bronchial GRPR expression was detected in only a minority of cancer-free controls who were former smokers. The inclusion of more cancer-free control subjects in this study compared to our 1997 study has allowed us to evaluate lung cancer patients and cancer-free controls separately and has revealed new insights regarding the relationship between bronchial GRPR expression and tobacco use in cancer-free controls.\nAlthough the number of active smoking surgical controls in our current study was small, the data suggest that bronchial GRPR expression may be induced by tobacco use in subjects without lung cancer, but that the increase in GRPR expression in bronchial mucosa likely subsides following the cessation of smoking in most subjects without lung cancer. Among former smoking lung cancer cases, bronchial GRPR expression may be aberrantly maintained following cessation of smoking, or similar to never smoking lung cancer cases, bronchial GRPR expression may reflect risk that is independent of tobacco use. Our 1997 report indicated that of the 4 cancer-free subjects with defined bronchial GRPR expression and smoking status, 1 active smoker was GRPR positive while 3 subjects who were former or never smokers were negative for GRPR expression. Therefore, although the numbers are small, among cancer-free control subjects, the relationship between smoking status and bronchial GRPR expression in the 1997 study is consistent with our current study results.\nOf special interest was the finding of frequent bronchial GRPR expression among never smoking lung cancer cases. While GRPR expression was detected in only a minority of never smoking cancer-free controls, GRPR expression was detected in almost 90% of never smoking lung cancer cases. Though the specific cause of GRPR expression in never smoking lung cancer cases is unknown, we posit that bronchial GRPR expression may reflect an inherent or conferred risk factor that can be best observed in the absence of the more potent risk factor of tobacco use. Though bronchial GRPR expression was more common among cancer-free controls with a diagnosis of granuloma, suggesting a possible inflammatory component to bronchial GRPR expression among cancer-free controls, bronchial GRPR expression was not increased in lung cancer cases or controls with more severe pulmonary obstruction, which also has an inflammatory component. Therefore, the role of inflammation in elevated bronchial GRPR expression remains undefined.\nGRPR expression in normal bronchial tissues was not correlated with clinical disease stage. Therefore, our data suggest that GRPR expression in surrogate tissues did not reflect tumor burden and, perhaps, was not a direct consequence of the prevalent cancer. 
Our finding that detectable GRPR expression in normal upper aerodigestive tissues was not an indication of poor overall survival for lung cancer cases indicates that elevated GRPR bronchial cell expression was not associated with disease progression and is, instead, likely to be a marker of risk exposure or a marker of host susceptibility.\nLung cancer cases positive for GRPR bronchial expression were significantly younger than cases negative for GRPR expression, which supports the role of GRPR bronchial expression as conferring lung cancer risk. Though a prospective cohort study will be required to fully understand the relationship between GRPR expression levels in surrogate tissues and the development of lung cancer, GRPR expression in normal bronchial tissues has potential value as a marker for elevated risk, especially in those with little or no tobacco exposure. Though GRPR is overexpressed in many solid tumors, only one other group has evaluated GRPR, GRP and/or their gene product levels in surrogate tissues of cancer patients to date. Uchida et al. reported that serum levels of proGRP, as measured by enzyme-linked immunosorbent assay (ELISA), correlated with tumor GRP gene expression levels in small cell lung cancer (SCLC) patients [17]. GRPR expression levels in tumors were not evaluated in our study, as material for analysis was not available.\nThe increased risk due to elevated GRPR expression may be most apparent in never and former smokers because the contribution of GRPR expression to risk is obscured in active smokers by factors such as genetic abnormalities and inflammatory processes that confer substantial risk from tobacco use. Elevated GRPR expression in the lung may independently contribute to increased cancer risk by promoting proliferation. GRPR is expressed at early embryonic stages in the nervous, urogenital, respiratory, and gastrointestinal systems and expression in these tissues is generally down-regulated before birth [18-20]. The GRPR ligand, GRP, a bombesin-like peptide (BLP) growth factor, is expressed by pulmonary neuroendocrine cells and has been shown to stimulate lung development in utero and to increase growth and maturation of human fetal lung organ cultures [20,21]. In non-cancerous tissues, BLPs stimulate growth of bronchial, gastrointestinal and pancreatic epithelial cells and lead to ligand-dependent hyperplasia [5,19,22-24]. GRPR and GRP are involved in an autocrine stimulation loop in lung cancer and HNSCC [6,14], and GRPR expression has been shown to be positively regulated by GRP [20]. Increased GRPR expression in the lung may, therefore, reflect a state that is more nascent and proliferative in nature than epithelium with low or undetected GRPR expression.\nWe acknowledge our case-control study population limitations. This case-control study required hospital surgical controls, and this limited recruitment with the result that our lung cancer cases are older than our controls and include fewer men. In addition, the exhaustion of samples made quantitative measurements of GRPR mRNA expression impossible. We have confined our analysis to GRPR mRNA because of antibody reagent limitations at the time the samples were evaluated. This leaves the question of whether GRPR protein levels also differ unanswered. 
Despite these limitations, a high degree of association between detectable GRPR expression in normal bronchial tissue and lung cancer was demonstrated in the case-control population even after adjusting for sex and age.\nThough we did not assess the epidermal growth factor receptor (EGFR) mRNA or protein expression in bronchial epithelial cultures, we speculate that increased GRPR expression contributes to lung cancer through EGFR-dependent and/or -independent mechanisms. The EGFR pathway has been reported to be activated in lung tumors from never smokers with EGFR mutations, and it is possible that lung tumors developing in never smokers have multiple mechanisms for EGFR activation. The GRPR pathway is known to interact with the EGFR pathway in lung cancer cells by increasing the release of EGFR ligands such as amphiregulin [8], which could act to further promote cancer in never smokers who develop EGFR mutations. Alternatively, activation of the GRPR pathway may increase EGFR bronchial cell signaling in the absence of EGFR mutation, providing another route to lung cancer development in never smokers.", "The GRPR pathway may activate proliferative pathways that increase the likelihood of lung cancer development in male and female former and never smokers. We conclude from our data that GRPR expression likely does not contribute to sex differences in rates of lung cancer incidence in never or former smokers." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Gastrin-releasing peptide receptor", "lung cancer risk", "case-control study", "surrogate tissue" ]
Background: Lung cancer incidence rates have been declining for men since the 1980s. However, incidence rates for women over 65 have been increasing or have remained steady during this same time period [1]. Though tobacco use is a significant risk factor for lung cancer, 15-20% of lung cancer patients are lifetime never-smokers. Our group previously reported that bronchial epithelium expression of GRPR, which encodes the gastrin-releasing peptide receptor (GRPR), was associated with a diagnosis of lung cancer in female never smokers [2]. GRPR is an X-linked gene that has been reported to escape X-inactivation [3]. This finding raised the possibility that increased GRPR expression in women accounted for some of the increased incidence rates of lung cancer in never smokers who are female, compared to never smoking men, which was recently reported in a large prospective cohort study [4]. Since GRPR stimulation induces proliferative effects in bronchial cells [5], it is possible that activation of this pathway is a risk factor for lung cancer separate from that of tobacco exposure. GRPR is overexpressed in lung cancers and in head and neck squamous cell carcinoma (HNSCC) [6,7]. We have previously reported elevated levels of GRPR mRNA in lung cancers and HNSCC [6,8]. In addition to cancer-specific overexpression of GRPR, we have demonstrated that mucosal tissues adjacent to HNSCC have GRPR mRNA levels reflective of the adjacent HNSCC tumor [6]. These findings suggest that elevated GRPR mRNA in normal bronchial epithelia may be associated with lung cancer risk and/or may indicate the presence of lung cancer. We undertook a case-control study to determine whether elevated GRPR mRNA expression in normal, at-risk epithelium correlated with the presence of lung cancer. We evaluated the association between GRPR mRNA expression in purified cultured normal bronchial epithelial cells and the presence of lung cancer. Our primary finding was the observed increased expression of GRPR in normal bronchial epithelia in lung cancer cases compared to cancer-free controls. The impact of this was highest in subjects who never smoked or who had undergone smoking cessation before diagnosis. The association was found in both male and female never smokers, suggesting GRPR plays a similar role in development of lung cancer in men and women. The result of this study highlights GRPR overexpression in normal epithelial mucosa as a candidate risk factor for lung cancer, especially in those with limited tobacco exposure. Methods: Lung cancer case-control study subjects and tissues Lung cancer cases (n = 224) and surgical control subjects (n = 107) enrolled in prospective thoracic surgical tissue collection protocols from 1992-2004 donated mainstem bronchus biopsy specimens obtained at the time of resection, bronchoscopy, or lung transplant. Questionnaire and pulmonary function tests were administered prior to surgery, and forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) were used to assess airway obstruction [9]. Participants were patients with suspected lung cancer who underwent bronchoscopic or thoracic procedures. Cases had confirmed diagnoses of primary lung cancer while controls had non-cancerous diagnoses. 
Diagnoses occurring in 5% or more of surgical control subjects included the following: 15 had no diagnosis of disease (14%), 12 had emphysema (11%), 12 had a granuloma (11%), 11 had alpha-1 antitrypsin deficiency (10%), 7 had pulmonary hypertension (9%), 7 had a benign growth (7%), 6 had a lung obstruction (6%), 5 had cystic fibrosis (5%), 5 had a hamartoma (5%) and 5 had pulmonary fibrosis (5%). Two of the 15 control subjects with no diagnosis of disease were lung donors. Tissues from 219 of the 224 cases and 89 of the 107 controls were prospectively collected under protocols approved by the University of Pittsburgh institutional review board (IRB). In a cooperative effort, tissues from 5 of the 224 cases and 18 of the 107 controls were prospectively collected under a surveillance bronchoscopy protocol approved by the University of Colorado IRB. The case-control study populations are described in Table 1. Primary bronchial epithelial cell culture procedures were used to obtain proliferating bronchial epithelial cells as described previously [5]. Bronchial epithelial cell cultures were harvested at passage 1 for GRPR mRNA expression studies. Characteristics of Lung Cancer Cases and Controls #Median overall survival for cancer cases alive at last follow-up †Rank sum test ‡Chi-square test §Fisher's exact test Detection of GRPR expression in bronchial epithelial cells RNA isolation from bronchial cells and detection of GRPR gene expression was performed as previously described [5]. PCR amplification was performed following oligo-dT-primed reverse transcription (RT) of total RNA using primers GRPR-1 (5'-CTCCCCGTGAACGATGACTGG-3') and GRPR-2 (5'-ATCTTCATCAGGGCATGGGAG-3'). Presence of GRPR product was evaluated by hybridization with a 32P-labeled internal probe (5'-CACCTCCATGCTCCACTTTGTC-3'). The GAPDH gene expression was also evaluated in order to assess RNA integrity and success of the reverse transcription step. GAPDH was successfully amplified from RNA isolated from all cases and controls. Statistical Analyses In order to test the association of sex and other variables with GRPR bronchial expression, candidate confounding variables including age, sex, ethnicity, smoking status, pack-years of tobacco-use (py), and pulmonary function were evaluated for association with GRPR expression separately for cases and controls for each case-control study using the chi-square test, Fisher's exact test, or Wilcoxon rank sum test as appropriate. All P values reported were 2-sided with significance defined by P < 0.05. Evaluating GRPR expression in non-cancerous bronchial epithelia among cases versus controls was the primary endpoint of the study. Univariate and multivariable logistic regression models were implemented to assess the significance of the association of GRPR bronchial expression with cancer before and after controlling for other important variables. For these models, age and sex were defined a priori to be included. Sex, ethnicity and smoking status were treated as categorical variables, age and pack-years as continuous variables, and pulmonary function as an ordinal variable. The likelihood ratio test was used to test the goodness of fit of logistic regression models. To assess whether the association between bronchial GRPR expression and lung cancer was modified by sex, a sex by GRPR expression interaction term was evaluated for significance in the multivariable logistic regression model. Because disease etiology likely differs for never smokers compared to smokers, a stratified analysis by smoking status was also performed. Association of GRPR expression with overall survival, defined as time from surgery to death, was analyzed using Cox proportional hazards models. Date of surgery, death, and last follow-up were provided by the University of Pittsburgh Cancer Institute (UPCI) Lung Cancer Registrar. Hazard ratios associated with GRPR expression were estimated using multivariable Cox proportional hazards models. The assumption of proportional hazards was assessed for all Cox models by evaluation of scaled Schoenfeld residuals. 
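A minimal sketch of the modelling strategy just described is given below, assuming a per-subject data frame with hypothetical column names (case, grpr, age, sex, smoking, fev1_grade) and a hypothetical file name; the published models were fit to the study data, which are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical per-subject data: case (1 = lung cancer), grpr (1 = bronchial
# GRPR mRNA detected), age in years, sex, smoking (never/former/active) and
# fev1_grade (ordinal pulmonary function grade). File name is hypothetical.
df = pd.read_csv("grpr_case_control.csv")

# Univariate and multivariable logistic regression for the primary endpoint.
uni = smf.logit("case ~ grpr", data=df).fit(disp=False)
full = smf.logit("case ~ grpr + age + C(sex) + C(smoking) + fev1_grade",
                 data=df).fit(disp=False)
print(uni.summary())
print(full.summary())

# Likelihood ratio test for the contribution of GRPR to the adjusted model.
reduced = smf.logit("case ~ age + C(sex) + C(smoking) + fev1_grade",
                    data=df).fit(disp=False)
lr_stat = 2 * (full.llf - reduced.llf)
print("LRT P =", chi2.sf(lr_stat, df=full.df_model - reduced.df_model))

# Sex-by-GRPR interaction term in the adjusted model.
interaction = smf.logit("case ~ grpr * C(sex) + age + C(smoking) + fev1_grade",
                        data=df).fit(disp=False)
print(interaction.summary())
```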
Results: GRPR expression in bronchial epithelia was more frequent in lung cancer patients than cancer-free control subjects Presence or absence of GRPR mRNA in non-cancerous bronchial epithelial cells derived from mainstem bronchus airway biopsies of lung cancer cases (n = 224) and cancer-free controls (n = 107) (Table 1) was assessed by RT-PCR followed by hybridization with a radio-labeled probe in order to maximize sensitivity. Because the airway biopsy analysis of this cohort began before quantitative PCR (q-PCR) was available, the RT-PCR semi-quantitative technique was used throughout the lung cancer case-control study to maintain consistency and power. Of the 224 lung cancer cases evaluated, 158 (71%) had GRPR expression in bronchial cells. In contrast, only 41 (38%) of the 107 cancer-free surgical controls had detectable GRPR bronchial expression. GRPR expression has been reported to be elevated with tobacco use. We tested for association between GRPR expression in non-cancerous surrogate tissues and smoking status and pack-years of tobacco use stratified by cancer status. Among never and former smokers, lung cancer patients more frequently had detectable GRPR expression, while the frequency of detected GRPR mRNA was similar for actively smoking lung cancer cases and controls (Table 2). We observed a statistically significant association between GRPR expression in bronchial mucosa and smoking status among lung cancer patients (P = 0.03, Table 2), with overrepresentation of GRPR bronchial expression among never smoking lung cancer patients. GRPR bronchial expression was also associated with smoking status among cancer-free surgical controls, with an overrepresentation of GRPR bronchial expression among active smokers (P = 0.02) (Table 2). Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression, while the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2). Evaluation of association between GRPR bronchial expression and demographic and risk factors stratified by lung cancer case status. *Significant at P < 0.05 † Wilcoxon rank sum test ‡ Chi-square test § Fisher's exact test Bronchial GRPR expression was not associated with sex, ethnicity or pulmonary function in lung cancer cases or controls In order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2]. In this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). However, lung cancer cases positive for GRPR expression were significantly younger than GRPR negative cases (Table 2). Evaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (P = 0.065, Fisher's exact test), with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 antitrypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma, although it did not reach statistical significance, suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects. 
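The age comparison above used a Wilcoxon rank sum test; the sketch below shows that test applied to two hypothetical age vectors for GRPR-positive and GRPR-negative cases, since the individual-level ages are not reported in the text.

```python
from scipy.stats import ranksums

# Wilcoxon rank sum comparison of age between GRPR-positive and GRPR-negative
# lung cancer cases. The age vectors below are hypothetical; individual-level
# ages from the study are not reproduced in the text.
ages_grpr_positive = [48, 52, 55, 58, 60, 61, 63, 64, 66, 70]
ages_grpr_negative = [58, 62, 64, 66, 68, 69, 71, 73, 75, 78]
statistic, p_value = ranksums(ages_grpr_positive, ages_grpr_negative)
print(f"Wilcoxon rank sum statistic = {statistic:.2f}, P = {p_value:.3f}")
```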
GRPR expression levels did not reflect disease stage or tumor type In order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2). GRPR expression in non-cancerous bronchial mucosal tissues was significantly associated with lung cancer independent of age, sex and smoking status Detection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R. = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function, expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GRPR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female. GRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status. 
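The unadjusted estimate quoted above can be reproduced directly from the counts reported earlier (158 of 224 cases and 41 of 107 controls GRPR positive) using a standard Woolf (log) confidence interval, as in the sketch below; the adjusted odds ratio of 4.76 comes from the multivariable model and cannot be recovered from these counts alone.

```python
import math

# Unadjusted odds ratio for lung cancer by bronchial GRPR status, from the
# counts reported above: 158/224 cases and 41/107 controls were GRPR positive.
a, b = 158, 224 - 158   # cases: GRPR+, GRPR-
c, d = 41, 107 - 41     # controls: GRPR+, GRPR-

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# Prints approximately OR = 3.85 (95% CI 2.37-6.25), matching the unadjusted
# estimate quoted above.
```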
GRPR expression in bronchial epithelium was significantly associated with lung cancer among never smokers and former smokers but not active smokers Molecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These findings indicate that disease etiology for lung cancer varies by tobacco-use history. A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% CI = 3.32-87.13) and former smokers (O.R. = 3.69; 95% CI = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, whose stratum-specific odds ratios did not differ significantly from each other (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category. Because we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. We found that the GRPR expression by sex interaction was not significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52) and the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction for significance for never and former smokers combined. We found that the GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers (P = 0.47). GRPR expression in surrogate tissues was not a prognostic indicator for survival in lung cancer cases Because GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not an indication of overall lung cancer survival. The hazard ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression. 
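A minimal sketch of this survival analysis, assuming a hypothetical per-case data frame with survival time, death indicator, GRPR status, age, sex and stage columns, is shown below using the lifelines implementation of the Cox proportional hazards model; the check_assumptions call evaluates proportional hazards via scaled Schoenfeld residuals, mirroring the approach described in the Methods.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-case data: survival_months, death (1 = died), grpr (0/1),
# age, sex and stage. File and column names are assumptions for illustration.
cases = pd.read_csv("grpr_lung_cancer_cases.csv")
model_data = pd.get_dummies(
    cases[["survival_months", "death", "grpr", "age", "sex", "stage"]],
    columns=["sex", "stage"], drop_first=True, dtype=float)

cph = CoxPHFitter()
cph.fit(model_data, duration_col="survival_months", event_col="death")
cph.print_summary()                # hazard ratio for grpr is exp(coef)
cph.check_assumptions(model_data)  # proportional hazards via scaled Schoenfeld residuals
```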
Only a minority of the never smoker and former smoker surgical cancer-free control subjects had GRPR bronchial expression while, the majority of actively smoking surgical controls had detectable GRPR bronchial expression. Though there was a significant association between GRPR expression and smoking status for cases and controls, in the analyses stratified by case status we found no statistically significant association between pack-years of smoking and GRPR expression in cases or controls (Table 2). Evaluation of association between GRPR broncial expression and demographic and risk factors stratified by lung cancer case status. *Significant at P < 0.05 † Wilcoxon rank sum test ‡ Chi-square test § Fisher's exact test Bronchial GRPR expression was not associated with sex, ethnicity or pulmonary function in lung cancer cases or controls: In order to assess the association of subject characteristics with GRPR expression independent of cancer, a stratified analysis by case status was performed separately for the lung cancer case and control populations. We hypothesized that GRPR expression would differ by sex because GRPR resides on the portion of the X chromosome reported to escape X-inactivation [3], and we had previously reported that GRPR expression in bronchial tissues was more frequent in women never smokers than men never smokers in a study involving 78 subjects [2]. In this larger study involving 331 subjects, in addition to evaluating differences in GRPR expression by sex, we also tested the association between GRPR expression and age, ethnicity and pulmonary function. GRPR bronchial expression levels did not differ by sex, ethnicity or pulmonary function for either lung cancer cases or controls (Table 2). However, lung cancer cases positive for GRPR expression were statistically younger than GRPR negative cases (Table 2). Evaluating the GRPR expression distribution among surgical cancer-free controls by benign diagnosis, we observed a trend towards a significant association (p = 0.065, Fisher's exact test) with deviations from the average 38% GRPR positive frequency being most apparent for diagnoses of alpha-1 trypsin deficiency (1 of 11 subjects were GRPR positive) and granuloma (9 of 12 subjects were GRPR positive). Only two of the 9 GRPR positive granuloma subjects were active smokers. This trend towards increased bronchial GRPR expression in subjects with granuloma that did not reach statistical significance suggests the possibility that inflammation-induced tissue damage and/or wound repair in response to damage may be associated with bronchial GRPR expression. We did not find evidence of an association between bronchial GRPR expression and hyperproliferative disorders among the cancer-free control subjects. GRPR expression levels did not reflect disease stage or tumor type: In order to determine whether GRPR expression differed by tumor clinical and/or pathological characteristics, we tested for association between tumor characteristics and bronchial GRPR expression in lung cancer cases. The distribution of lung cancer cases by clinical stage and tumor histology is provided in Table 1. Bronchial GRPR expression in non-cancerous mucosa was independent of disease stage (P = 0.49, Table 2) and tumor histology (P = 0.50, Table 2). 
GRPR expression in non-cancerous bronchial mucosal tissues was significantly associated with lung cancer independent of age, sex and smoking status: Detection of GRPR mRNA in bronchial tissues was significantly associated with presence of lung cancer (O.R = 3.85; 95% CI = 2.37-6.25) (All Subjects, Figure 1). Even after controlling for possible confounding effects of age, sex, smoking status and pulmonary function expression of GRPR in normal bronchial epithelium remained significantly associated with lung cancer (O.R. = 4.76; 95% CI = 2.32-9.77) (All Subjects, Figure 1). GRPR was previously reported by our group in a study of 78 subjects to be more frequently expressed in women with lung cancer than in men with lung cancer [2], suggesting the possibility that this differential expression may account at least in part for the heightened smoking-related lung cancer risk for women observed in some studies [10,11]. However, in this larger study, the pair-wise interaction between sex and GPRR expression was found not to be significant (P = 0.31) in a multivariable logistic regression model also containing age, sex, pulmonary function and bronchial GRPR expression. Therefore, this study did not support our previous hypothesis that GRPR expression among women contributed to their increased lung cancer risk. Rather, it suggested that never smoking status was a confounder in our previous analysis, since the majority of never smokers diagnosed with lung cancer are female. GRPR expression in bronchial epithelia is significantly associated with lung cancer among never and former smokers. Univariate and multivariable logistic regression analysis estimates of odds ratios are provided for all subjects, and separate analyses are provided by smoking status. GRPR expression in bronchial epithelium was significantly associated with lung cancer among never smokers and former smokers but not active smokers: Molecular alterations differ markedly for tobacco-related versus non-tobacco-related lung cancers [12,13]. These finding indicate that disease etiology for lung cancer varies by tobacco-use history. A stratified analysis by smoking status indicated that presence of GRPR expression in bronchial cells was significantly associated with lung cancer for never smokers (O.R. = 17.0; 95% C.I = 3.32-87.13) and former smokers (O.R. = 3.69; 95% C.I = 1.88-7.29) but not active smokers (O.R. = 1.18; 95% CI = 0.34-4.15) (Figure 1), and these odds ratios differed significantly (P < 0.001, Mantel-Haenszel test of homogeneity (M-H test)). The association between GRPR bronchial expression and lung cancer among never smokers and former smokers was significant even after controlling for the effects of age, sex, and pulmonary function (Figure 1). The combined, adjusted odds ratio for never and former smokers, which were found to not differ significantly (P = 0.09; M-H test), was 7.74 (95% CI = 2.96-20.25). A trend towards higher odds ratios with fewer pack-years of tobacco use was observed when the analysis was stratified by tertiles of control subject pack-year use (data not shown); however, the odds ratios did not differ significantly by tertile pack-year category. Because we hypothesized that GRPR expression in bronchial epithelia would contribute to increased risk of lung cancer for women never smokers compared to men never smokers, we tested the interaction between sex and GRPR expression status for association with lung cancer in the never smoking stratum. 
We found that the GRPR expression by sex interaction was not significantly associated with lung cancer among never smokers in a multiple logistic regression model also containing age, sex and pulmonary function (P = 0.94). Because the number of never smokers in our study was not large (n = 52), and because the association between GRPR expression and lung cancer did not differ significantly between never and former smokers, we also tested the GRPR expression by sex interaction in never and former smokers combined. The GRPR expression by sex interaction was also not significant in this same model when evaluating never and former smokers together (P = 0.47). GRPR expression in surrogate tissues was not a prognostic indicator for survival in lung cancer cases: Because GRPR expression in bronchial epithelia was associated with lung cancer, we hypothesized that for cases with detectable bronchial GRPR expression, survival would be reduced compared to cases without detectable GRPR expression. However, the presence of GRPR mRNA in bronchial cells was not associated with overall lung cancer survival. The hazard ratio for overall lung cancer survival in Cox models adjusted for age, sex, and disease stage was 0.87 (95% CI = 0.57-1.34) for patients with GRPR bronchial expression compared to patients without detectable GRPR bronchial expression. Discussion: In lung cancer, GRPR and its ligand are involved in autocrine growth stimulation [6,14]. Previous results showed that lung tumor tissues have elevated GRPR expression, and we have previously described elevated GRPR mRNA levels in histologically normal mucosal tissues adjacent to HNSCC compared to oral mucosal tissues from cancer-free control subjects [6]. GRPR mRNA expression has also been detected in prostate tumors and in tissues adjacent to prostate cancers [15]. Our new findings reported here indicate that in a prospectively collected lung cancer case-control population, GRPR expression in at-risk upper aerodigestive mucosa was significantly associated with lung cancer. Importantly, even after controlling for the effects of possible confounding by age, sex, and tobacco use, GRPR expression in non-cancerous mucosal tissues was significantly associated with lung cancer among never and former smokers and appeared to confer similar risk to both sexes. Our finding here of no difference in bronchial epithelial cell GRPR expression between men and women did not replicate our previous result [2]. In our previous bronchial cell study, which had a smaller sample size of 78 patients, the presence of cancer was not separately evaluated, and only one never smoker with lung cancer was male [2]. In retrospect, it is likely that the associations between GRPR expression, female sex, and smoking observed in our previous study were actually surrogates for the underlying association between bronchial epithelial GRPR expression and lung cancer, which appears to be most significant in never smokers. Therefore, this study presents important revisions to our previous understanding of the role of GRPR in lung cancers arising in females and males; it supports a similar relationship between bronchial GRPR mRNA expression and lung cancer risk in both sexes. Although in the current study we also found no association between GRPR expression and pack-years of smoking, we did observe associations between smoking status (active versus former smoker versus never smoker) and GRPR expression. 
We observed these associations for both cases and controls separately, but the relationships were different. The proportion of GRPR positive actively smoking cancer-free control subjects was similar to the proportion of actively smoking lung cancer cases. These findings were consistent with our previously reported findings that GRPR expression in bronchial epithelium was activated with tobacco use [2] and with findings that bombesin-like peptide receptors play a role in wound healing following airway injury [16]. Similar to our previous findings of persistent bronchial GRPR expression after tobacco cessation [5], we detected bronchial GRPR expression in the majority of former smoking lung cancer cases. In contrast, in the current study bronchial GRPR expression was detected in only a minority of cancer-free controls who were former smokers. The inclusion of more cancer-free control subjects in this study compared to our 1997 study has allowed us to evaluate lung cancer patients and cancer-free controls separately and has revealed new insights regarding the relationship between bronchial GRPR expression and tobacco use in cancer-free controls. Although the number of active smoking surgical controls in our current study was small, the data suggest that bronchial GRPR expression may be induced by tobacco use in subjects without lung cancer, but that the increase in GRPR expression in bronchial mucosa likely subsides following the cessation of smoking in most subjects without lung cancer. Among former smoking lung cancer cases, bronchial GRPR expression may be aberrantly maintained following cessation of smoking, or similar to never smoking lung cancer cases, bronchial GRPR expression may reflect risk that is independent of tobacco use. Our 1997 report indicated that of the 4 cancer-free subjects with defined bronchial GRPR expression and smoking status, 1 active smoker was GRPR positive while 3 subjects who were former or never smokers were negative for GRPR expression. Therefore, although the numbers are small, among cancer-free control subjects, the relationship between smoking status and bronchial GRPR expression in the 1997 study is consistent with our current study results. Of special interest was the finding of frequent bronchial GRPR expression among never smoking lung cancer cases. While GRPR expression was detected in only a minority of never smoking cancer-free controls, GRPR expression was detected in almost 90% of never smoking lung cancer cases. Though the specific cause of GRPR expression in never smoking lung cancer cases is unknown, we posit that bronchial GRPR expression may reflect an inherent or conferred risk factor that can be best observed in the absence of the more potent risk factor of tobacco use. Though bronchial GRPR expression was more common among cancer-free controls with a diagnosis of granuloma, suggesting a possible inflammatory component to bronchial GRPR expression among cancer-free controls, bronchial GRPR expression was not increased in lung cancer cases or controls with more severe pulmonary obstruction, which also has an inflammatory component. Therefore, the role of inflammation in elevated bronchial GRPR expression remains undefined. GRPR expression in normal bronchial tissues was not correlated with clinical disease stage. Therefore, our data suggest that GRPR expression in surrogate tissues did not reflect tumor burden and, perhaps, was not a direct consequence of the prevalent cancer. 
Our finding that detectable GRPR expression in normal upper aerodigestive tissues was not an indication of poor overall survival for lung cancer cases indicates that elevated GRPR bronchial cell expression was not associated with disease progression and is, instead, likely to be a marker of risk exposure or a marker of host susceptibility. Lung cancer cases positive for GRPR bronchial expression were significantly younger than cases negative for GRPR expression, which supports the role of GRPR bronchial expression as conferring lung cancer risk. Though a prospective cohort study will be required to fully understand the relationship between GRPR expression levels in surrogate tissues and the development of lung cancer, GRPR expression in normal bronchial tissues has potential value as a marker for elevated risk, especially in those with little or no tobacco exposure. Though GRPR is overexpressed in many solid tumors, only one other group has evaluated GRPR, GRP and/or their gene product levels in surrogate tissues of cancer patients to date. Uchida et al. reported that serum levels of proGRP, as measured by enzyme-linked immunosorbent assay (ELISA), correlated with tumor GRP gene expression levels in small cell lung cancer (SCLC) patients [17]. GRPR expression levels in tumors were not evaluated in our study, as material for analysis was not available. The increased risk due to elevated GRPR expression may be most apparent in never and former smokers because the contribution of GRPR expression to risk is obscured in active smokers by factors such as genetic abnormalities and inflammatory processes that confer substantial risk from tobacco use. Elevated GRPR expression in the lung may independently contribute to increased cancer risk by promoting proliferation. GRPR is expressed at early embryonic stages in the nervous, urogenital, respiratory, and gastrointestinal systems and expression in these tissues is generally down-regulated before birth [18-20]. The GRPR ligand, GRP, a bombesin-like peptide (BLP) growth factor, is expressed by pulmonary neuroendocrine cells and has been shown to stimulate lung development in utero and to increase growth and maturation of human fetal lung organ cultures [20,21]. In non-cancerous tissues, BLPs stimulate growth of bronchial, gastrointestinal and pancreatic epithelial cells and lead to ligand-dependent hyperplasia [5,19,22-24]. GRPR and GRP are involved in an autocrine stimulation loop in lung cancer and HNSCC [6,14], and GRPR expression has been shown to be positively regulated by GRP [20]. Increased GRPR expression in the lung may, therefore, reflect a state that is more nascent and proliferative in nature than epithelium with low or undetected GRPR expression. We acknowledge our case-control study population limitations. This case-control study required hospital surgical controls, and this limited recruitment with the result that our lung cancer cases are older than our controls and include fewer men. In addition, the exhaustion of samples made quantitative measurements of GRPR mRNA expression impossible. We have confined our analysis to GRPR mRNA because of antibody reagent limitations at the time the samples were evaluated. This leaves the question of whether GRPR protein levels also differ unanswered. Despite these limitations, a high degree of association between detectable GRPR expression in normal bronchial tissue and lung cancer was demonstrated in the case-control population even after adjusting for sex and age. 
Though we did not assess the epidermal growth factor receptor (EGFR) mRNA or protein expression in bronchial epithelial cultures, we speculate that increased GRPR expression contributes to lung cancer through EGFR-dependent and/or -independent mechanisms. The EGFR pathway has been reported to be activated in lung tumors from never smokers with EGFR mutations, and it is possible that lung tumors developing in never smokers have multiple mechanisms for EGFR activation. The GRPR pathway is known to interact with the EGFR pathway in lung cancer cells by increasing the release of EGFR ligands such as amphiregulin [8], which could act to further promote cancer in never smokers who develop EGFR mutations. Alternatively, activation of the GRPR pathway may increase EGFR bronchial cell signaling in the absence of EGFR mutation, providing another route to lung cancer development in never smokers. Conclusions: The GRPR pathway may activate proliferative pathways that increase the likelihood of lung cancer development in male and female former and never smokers. We conclude from our data that GRPR expression likely does not contribute to sex differences in rates of lung cancer incidence in never or former smokers.
Background: Normal bronchial tissue expression of GRPR, which encodes the gastrin-releasing peptide receptor, has been previously reported by us to be associated with lung cancer risk in 78 subjects, especially in females. We sought to define the contribution of GRPR expression in bronchial epithelia to lung cancer risk in a larger case-control study where adjustments could be made for tobacco exposure and sex. Methods: We evaluated GRPR mRNA levels in histologically normal bronchial epithelial cells from 224 lung cancer patients and 107 surgical cancer-free controls. Associations with lung cancer were tested using logistic regression models. Results: Bronchial GRPR expression was significantly associated with lung cancer (OR = 4.76; 95% CI = 2.32-9.77) in a multivariable logistic regression (MLR) model adjusted for age, sex, smoking status and pulmonary function. MLR analysis stratified by smoking status indicated that ORs were higher in never and former smokers (OR = 7.74; 95% CI = 2.96-20.25) compared to active smokers (OR = 1.69; 95% CI = 0.46-6.33). GRPR expression did not differ by subject sex, and lung cancer risk associated with GRPR expression was not modified by sex. Conclusions: GRPR expression in non-cancerous bronchial epithelium was significantly associated with the presence of lung cancer in never and former smokers. The association in never and former smokers was found in males and females. Association with lung cancer did not differ by sex in any smoking group.
Background: Lung cancer incidence rates have been declining for men since the 1980s. However, incidence rates for women over 65 have been increasing or have remained steady during this same time period [1]. Though tobacco use is a significant risk factor for lung cancer, 15-20% of lung cancer patients are lifetime never-smokers. Our group previously reported that bronchial epithelium expression of GRPR, which encodes the gastrin-releasing peptide receptor (GRPR), was associated with a diagnosis of lung cancer in female never smokers [2]. GRPR is an X-linked gene that has been reported to escape X-inactivation [3]. This finding raised the possibility that increased GRPR expression in women accounted for some of the increased incidence rates of lung cancer in never smokers who are female, compared to never smoking men, which was recently reported in a large prospective cohort study [4]. Since GRPR stimulation induces proliferative effects in bronchial cells [5], it is possible that activation of this pathway is a risk factor for lung cancer separate from that of tobacco exposure. GRPR is overexpressed in lung cancers and in head and neck squamous cell carcinoma (HNSCC) [6,7]. We have previously reported elevated levels of GRPR mRNA in lung cancers and HNSCC [6,8]. In addition to cancer-specific overexpression of GRPR, we have demonstrated that mucosal tissues adjacent to HNSCC have GRPR mRNA levels reflective of the adjacent HNSCC tumor [6]. These findings suggest that elevated GRPR mRNA in normal bronchial epithelia may be associated with lung cancer risk and/or may indicate the presence of lung cancer. We undertook a case-control study to determine whether elevated GRPR mRNA expression in normal, at-risk epithelium correlated with the presence of lung cancer. We evaluated the association between GRPR mRNA expression in purified cultured normal bronchial epithelial cells and the presence of lung cancer. Our primary finding was the observed increased expression of GRPR in normal bronchial epithelia in lung cancer cases compared to cancer-free controls. The impact of this was highest in subjects who never smoked or who had undergone smoking cessation before diagnosis. The association was found in both male and female never smokers, suggesting GRPR plays a similar role in development of lung cancer in men and women. The result of this study highlights GRPR overexpression in normal epithelial mucosa as a candidate risk factor for lung cancer, especially in those with limited tobacco exposure. Conclusions: AME contributed to the conception of analyses, performed analyses, interpreted statistical analyses and writing of the manuscript. AGD coordinated sample storage, retrieval and database. YS and SL performed statistical analyses. JMP, JDL, RD, YEM enrolled patients into the clinical trial, collected and provided specimens for analysis. JRG contributed to the conception and design of this project and contributing to the writing of this manuscript. JMS contributed to the conception and design of this project and contributed to the writing of this manuscript. All authors have read and approved the final manuscript
Background: Normal bronchial tissue expression of GRPR, which encodes the gastrin-releasing peptide receptor, has been previously reported by us to be associated with lung cancer risk in 78 subjects, especially in females. We sought to define the contribution of GRPR expression in bronchial epithelia to lung cancer risk in a larger case-control study where adjustments could be made for tobacco exposure and sex. Methods: We evaluated GRPR mRNA levels in histologically normal bronchial epithelial cells from 224 lung cancer patients and 107 surgical cancer-free controls. Associations with lung cancer were tested using logistic regression models. Results: Bronchial GRPR expression was significantly associated with lung cancer (OR = 4.76; 95% CI = 2.32-9.77) in a multivariable logistic regression (MLR) model adjusted for age, sex, smoking status and pulmonary function. MLR analysis stratified by smoking status indicated that ORs were higher in never and former smokers (OR = 7.74; 95% CI = 2.96-20.25) compared to active smokers (OR = 1.69; 95% CI = 0.46-6.33). GRPR expression did not differ by subject sex, and lung cancer risk associated with GRPR expression was not modified by sex. Conclusions: GRPR expression in non-cancerous bronchial epithelium was significantly associated with the presence of lung cancer in never and former smokers. The association in never and former smokers was found in males and females. Association with lung cancer did not differ by sex in any smoking group.
10,121
285
[ 460, 390, 106, 352, 3472, 431, 329, 82, 294, 436, 101, 1721, 51 ]
14
[ "grpr", "expression", "cancer", "lung", "grpr expression", "lung cancer", "bronchial", "smokers", "cases", "smoking" ]
[ "cancer grpr expression", "grpr expression cancer", "lung cancer controls", "bronchial grpr mrna", "bronchial expression cancer" ]
null
[CONTENT] Gastrin-releasing peptide receptor | lung cancer risk | case-control study | surrogate tissue [SUMMARY]
[CONTENT] Gastrin-releasing peptide receptor | lung cancer risk | case-control study | surrogate tissue [SUMMARY]
null
[CONTENT] Gastrin-releasing peptide receptor | lung cancer risk | case-control study | surrogate tissue [SUMMARY]
[CONTENT] Gastrin-releasing peptide receptor | lung cancer risk | case-control study | surrogate tissue [SUMMARY]
[CONTENT] Gastrin-releasing peptide receptor | lung cancer risk | case-control study | surrogate tissue [SUMMARY]
[CONTENT] Adenocarcinoma | Adenocarcinoma of Lung | Adult | Aged | Aged, 80 and over | Bronchi | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Case-Control Studies | Female | Humans | Lung | Lung Neoplasms | Male | Middle Aged | Receptors, Bombesin | Risk | Small Cell Lung Carcinoma | Smoking [SUMMARY]
[CONTENT] Adenocarcinoma | Adenocarcinoma of Lung | Adult | Aged | Aged, 80 and over | Bronchi | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Case-Control Studies | Female | Humans | Lung | Lung Neoplasms | Male | Middle Aged | Receptors, Bombesin | Risk | Small Cell Lung Carcinoma | Smoking [SUMMARY]
null
[CONTENT] Adenocarcinoma | Adenocarcinoma of Lung | Adult | Aged | Aged, 80 and over | Bronchi | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Case-Control Studies | Female | Humans | Lung | Lung Neoplasms | Male | Middle Aged | Receptors, Bombesin | Risk | Small Cell Lung Carcinoma | Smoking [SUMMARY]
[CONTENT] Adenocarcinoma | Adenocarcinoma of Lung | Adult | Aged | Aged, 80 and over | Bronchi | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Case-Control Studies | Female | Humans | Lung | Lung Neoplasms | Male | Middle Aged | Receptors, Bombesin | Risk | Small Cell Lung Carcinoma | Smoking [SUMMARY]
[CONTENT] Adenocarcinoma | Adenocarcinoma of Lung | Adult | Aged | Aged, 80 and over | Bronchi | Carcinoma, Non-Small-Cell Lung | Carcinoma, Squamous Cell | Case-Control Studies | Female | Humans | Lung | Lung Neoplasms | Male | Middle Aged | Receptors, Bombesin | Risk | Small Cell Lung Carcinoma | Smoking [SUMMARY]
[CONTENT] cancer grpr expression | grpr expression cancer | lung cancer controls | bronchial grpr mrna | bronchial expression cancer [SUMMARY]
[CONTENT] cancer grpr expression | grpr expression cancer | lung cancer controls | bronchial grpr mrna | bronchial expression cancer [SUMMARY]
null
[CONTENT] cancer grpr expression | grpr expression cancer | lung cancer controls | bronchial grpr mrna | bronchial expression cancer [SUMMARY]
[CONTENT] cancer grpr expression | grpr expression cancer | lung cancer controls | bronchial grpr mrna | bronchial expression cancer [SUMMARY]
[CONTENT] cancer grpr expression | grpr expression cancer | lung cancer controls | bronchial grpr mrna | bronchial expression cancer [SUMMARY]
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | smoking [SUMMARY]
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | smoking [SUMMARY]
null
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | smoking [SUMMARY]
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | smoking [SUMMARY]
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | smoking [SUMMARY]
[CONTENT] cancer | lung | grpr | lung cancer | normal | hnscc | risk factor lung cancer | risk factor lung | factor lung | factor lung cancer [SUMMARY]
[CONTENT] test | grpr | variables | models | expression | cases | rna | cancer | lung | controls [SUMMARY]
null
[CONTENT] data grpr expression | data grpr | smokers conclude | cancer development male female | cancer development male | activate | rates lung cancer incidence | increase likelihood lung cancer | increase likelihood lung | increase likelihood [SUMMARY]
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | sex [SUMMARY]
[CONTENT] grpr | expression | cancer | lung | grpr expression | lung cancer | bronchial | smokers | cases | sex [SUMMARY]
[CONTENT] GRPR | 78 ||| [SUMMARY]
[CONTENT] 224 | 107 ||| [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] GRPR | 78 ||| ||| 224 | 107 ||| ||| 4.76 | 95% | CI | 2.32 ||| 7.74 | 95% | CI | 2.96-20.25 | 1.69 | 95% | CI | 0.46 ||| ||| ||| ||| [SUMMARY]
[CONTENT] GRPR | 78 ||| ||| 224 | 107 ||| ||| 4.76 | 95% | CI | 2.32 ||| 7.74 | 95% | CI | 2.96-20.25 | 1.69 | 95% | CI | 0.46 ||| ||| ||| ||| [SUMMARY]
Measuring Nanoparticle Penetration Through Bio-Mimetic Gels.
33833513
In cancer nanomedicine, drugs are transported by nanocarriers through a biological system to produce a therapeutic effect. The efficacy of the treatment is affected by the ability of the nanocarriers to overcome biological transport barriers to reach their target. In this work, we focus on the process of nanocarrier penetration through tumour tissue after extravasation. Visualising the dynamics of nanocarriers in tissue is difficult in vivo, and in vitro assays often do not capture the spatial and physical constraints relevant to model tissue penetration.
BACKGROUND
We propose a new simple, low-cost method to observe the transport dynamics of nanoparticles through a tissue-mimetic microfluidic chip. After loading a chip with triplicate conditions of gel type and loading with microparticles, microscopic analysis allows for tracking of fluorescent nanoparticles as they move through hydrogels (Matrigel and Collagen I) with and without cell-sized microparticles. A bespoke image-processing codebase written in MATLAB allows for statistical analysis of this tracking, and time-dependent dynamics can be determined.
METHODS
To demonstrate the method, we show size-dependence of transport mechanics can be observed, with diffusion of fluorescein dye throughout the channel in 8 h, while 20 nm carboxylate FluoSphere diffusion was hindered through both Collagen I and Matrigel™. Statistical measurements of the results are generated through the software package and show the significance of both size and presence of microparticles on penetration depth.
RESULTS
This provides an easy-to-understand output for the end user to measure nanoparticle tissue penetration, enabling the first steps towards future automated experimentation of transport dynamics for rational nanocarrier design.
CONCLUSION
[ "Collagen", "Diffusion", "Gels", "Humans", "Microfluidics", "Nanomedicine", "Nanoparticles", "Tissue Scaffolds" ]
8020455
Introduction
Nanocarriers can transport chemotherapies through the body, shielding them from healthy tissue and specifically targeting tumours. One of nanomedicine’s advantages is the ability to fine-tune the surface chemistry and size of the nanocarriers to optimize their transport across biological barriers, such as blood vessel walls, extracellular matrix (ECM) and through cell membranes.1,2 Small nanocarrier modifications, however, can lead to extreme differences in their behaviour3–6 and their ability to traverse biological barriers to reach the intended target.7–9 Given the complexity of nanomedicine design and its impact on complex spatiotemporal behaviour in the body, new methods are needed for high-throughput screening of suitable transport dynamics.10,11 Furthermore, evaluating the performance of a therapy within a specific scenario, and the interactions it would encounter, allows for more effective rational nanoparticle design as well as a route to limiting toxic off-target effects.12,13 Tumour-specific uptake models are of immediate importance to the development of new nanotherapies for treating cancer, with almost two-thirds of nanomedicine research currently focused on oncology.14,15 However, solid tumour microenvironments are extremely complex and exhibit a wide range of diffusive and convective matrices.16 The tumour microenvironment, due to its necrotic core and constantly expanding mass, will produce a positive outwards pressure at the edge of the tumour towards the surrounding tissue,17,18 and extravasating nanoparticles nearer the centre must rely on diffusion to penetrate.19 Factors such as the density and orientation of the nano-porous protein network also impact the transport properties of the drug.20–22 Microfluidic models are beginning to emerge in nanomedicine screening, whereby fluid flow is directed through a region of ECM hydrogels that contain tumour cells. Automatic image processing is then used to allow researchers to analyse results in bulk, allowing for customization of the analysis framework to achieve specific results. In recent years, microfluidic 3D tissue scaffolds have been investigated for their beneficial properties of low reagent usage, ability to scale up to high-throughput systems, and flexibility in spatial control.23 The significant improvement on the previous paradigm of 2D cell culture is to include a three-dimensional environment for cells to proliferate, preventing morphological differences in binding properties. The resulting 3D tissue structures are commonly grown in hydrogel matrices comprising natural ECM proteins, which allow the cells to propagate to form biologically mimetic cellular structures, such as spheroids or organoids.24 These rudimentary biological structures are produced to emulate in vivo conditions, such that the natural mechanisms of nanomedicine uptake can be studied in vitro. Ho et al25 produced microfluidic channels that were sandwiched either side of a region of gelled fibrin solution that was covered with human umbilical vein endothelial cells, allowing them to study how fluorescent polystyrene nanoparticles extravasated under flow through the hydrogel. They also extracted fluorescence data in regions of interest and utilized MATLAB to examine the diffusional permeability. Carvalho et al26 used a microfluidic methodology to assemble a colorectal tumour-on-a-chip model by embedding cancer cell lines in Matrigel in a 5 mm chamber and having parallel perfusion channels coated in human colonic microvascular endothelial cells. 
They produced dendrimer nanoparticles containing a colorectal cancer drug and observed fluorescently labelled dendrimers penetrating the hydrogel. Instead of a scripted analysis, they took the microscopy images directly into graphics software for statistical measurements, without mention of an automated solution. Albanese et al27 produced a melanoma spheroid-on-a-chip device where spheroids harvested from culture in a poly-hydroxyethyl methacrylate gel supplemented with Matrigel™ were embedded inside a soft microfluidic chip, with the PDMS microchannel gently compressing the tumour against the glass substrate. Instead of being encased in a solid hydrogel, the spheroid was contained in a small coating of Matrigel and laminin and was subjected to convective flow, which may have contributed to their measured uptake of gold nanoparticles in a manner that does not match normal tumour diffusion conditions. Although the number of exciting new complex 3D testbeds is rapidly growing, there is still no standardized, easy-to-produce microfluidic testbed with open-source image processing and diffusion calculation for tissue penetration dynamics. In this paper, we present a novel framework for investigating the diffusive profile of nanoparticles within a model of human tissue. To achieve a diffusive environment, we utilize a completely open channel in a custom-made polymer chip loaded with a tissue scaffold gel containing mammalian-cell-sized microparticles, with no applied pressure beyond the initial pipetting motion used to introduce the nanoparticle solution (Figure 1). We then investigate how automated fluorescent image processing can be used to track nanoparticles as they move through our tissue-mimetic chip. Figure 1: Schematic diagram of the channel in the polymethylmethacrylate chip – containing a tissue-mimetic gel scaffold with an adjacent vessel inlet well, where nanoparticle solutions are loaded for imaging under fluorescent microscopy; fluorescent microparticles are substituted for the physical barriers of live cells. Imaging allows for time-lapse tracking of diffusive nanocarrier flow profiles from the vessel inlet through the tissue scaffold. Our framework allows for the evaluation of particle dynamics over a range of sizes and within various realistic biomimetic environmental parameters. We test our framework using spherical nanoparticles and track these nanoparticles through an environment of well-characterized, commercially available ECM: Matrigel™ and bovine Collagen I. The resulting datasets are analysed using a custom-written automated routine, where the primary objective is to establish a high-throughput testing suite to quantify nanoparticle spatiotemporal dynamics in tissue environments.
null
null
null
null
Conclusion
We have developed a low-cost device that is capable of tracking nanoparticles through tissue-mimetic environments. Here, we use microparticles as proxies for cell-like structures, with future work introducing cells to investigate the influence of cell-binding and internalisation on nanoparticle flow profiles. The images generated allow us to monitor nanoparticle penetration over time, where all analysis is performed automatically with a custom-built framework for penetration detection, here made open source. This demonstrates a device with the potential for high throughput testing of nanoparticle flow properties under a range of tumour-like conditions. Monitoring the dynamics of nanocarriers as they move through tissue-like environments could help optimise their design to overcome transport barriers, leading to better distribution within tumours and reduced off-target effects. In future work, we will refine the workflow to produce a high-throughput screening system whereby this custom chip can be used as a low-cost, modular analysis platform for testing nanomedicine flow through a biological microenvironment (such as a cancerous tumour) – the output of which can then be processed with the press of a button to provide detailed statistical information about the permeation and diffusion characteristics of numerous different conditions at once. Such a testing suite could then interface with machine learning routines to allow for the automatic characterisation and design of nanoparticle-based drug therapies.31 As a demonstration, we compare tissue penetration of two fluorescent nanoparticles of different sizes. As expected, the larger particles clearly have a harder time penetrating through both types of gel. Our next work intends to add multiple types of other fluorescent particles into the mixture, including larger and smaller particles with different charges and targeting moieties. We also seek to investigate whether mathematical analysis can be applied in the reverse manner in this system – whether we can utilize pre-characterized particles of known properties to gather information about the gels themselves. Both approaches would allow for the effective calibration of tissue-scale models of nanoparticle transport, further improving our understanding of the main obstacles for effective penetration into the tumour tissue.32–34
[ "Materials and Methods", "Chip Preparation", "Tissue Scaffold and Nanoparticle Preparation", "Matrigel", "Collagen I", "Gel Loading", "Gelation and Culture Conditions", "Nanoparticle Solution Preparation", "Fluorescent Microscopy", "Image Analysis", "Diffusion Equations", "Results", "Discussion", "Conclusion" ]
[ "Chip Preparation Microfluidic devices were fabricated by laser-cutting transparent polymethylmethacrylate (PMMA) 2 mm acrylic sheets on a CO2 laser cutter (Trotec). Three types of cuts of equal surface area were made from the same sheet – A) a plate with multiple channels cut through the acrylic, B) a solid backing plate, and C) a plate of “plugs” that would create a removable physical barrier inside the channel for a liquid reservoir.\nEach channel was formed by making a cut of 2x10 mm, and the plugs were formed by making cuts of 2.1x2 mm, creating a snug fit when the plug was inserted into the channel. The two halves A and B were then sealed using double-sided polyurethane tape of 2 mm thickness, providing a watertight and optically transparent chip that could be used to contain the biological gels. The plugs from C were inserted at the left-most point of each channel. The chips were then stored at −20 °C for at least 24 h as a rudimentary antibacterial measure while live cells were not in use.\nMicrofluidic devices were fabricated by laser-cutting transparent polymethylmethacrylate (PMMA) 2 mm acrylic sheets on a CO2 laser cutter (Trotec). Three types of cuts of equal surface area were made from the same sheet – A) a plate with multiple channels cut through the acrylic, B) a solid backing plate, and C) a plate of “plugs” that would create a removable physical barrier inside the channel for a liquid reservoir.\nEach channel was formed by making a cut of 2x10 mm, and the plugs were formed by making cuts of 2.1x2 mm, creating a snug fit when the plug was inserted into the channel. The two halves A and B were then sealed using double-sided polyurethane tape of 2 mm thickness, providing a watertight and optically transparent chip that could be used to contain the biological gels. The plugs from C were inserted at the left-most point of each channel. The chips were then stored at −20 °C for at least 24 h as a rudimentary antibacterial measure while live cells were not in use.\nTissue Scaffold and Nanoparticle Preparation Matrigel Matrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nMatrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nCollagen I A collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. 
This was mixed via pipetting up and down with a stirring motion before every use.\nA collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nGel Loading When adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).Figure 2Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nSchematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nWhen adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).Figure 2Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nSchematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nGelation and Culture Conditions Given that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nGiven that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. 
This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nNanoparticle Solution Preparation A stock solution of fluorescein-free acid (Sigma) was made up to 100 ug/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution, and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene nanoparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.\nA stock solution of fluorescein-free acid (Sigma) was made up to 100 ug/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution, and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene nanoparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.\nMatrigel Matrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nMatrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nCollagen I A collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nA collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. 
This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nGel Loading When adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).Figure 2Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nSchematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nWhen adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).Figure 2Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nSchematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nGelation and Culture Conditions Given that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nGiven that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nNanoparticle Solution Preparation A stock solution of fluorescein-free acid (Sigma) was made up to 100 ug/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution, and 50% 20 nm blue FluoSpheres™ in water. 
8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene nanoparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.\nA stock solution of fluorescein-free acid (Sigma) was made up to 100 ug/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution, and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene nanoparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.\nFluorescent Microscopy Fluorescent images were taken of the loaded chip at 10x magnification on a Leica microscope using their proprietary LAS X capture software. A time-lapse of each channel was acquired automatically by marking the channels as regions of interest in the software and taking images over 8 h at 30 min intervals.\nThe red filter used had an excitation filter wavelength of 560/40 nm and an emission filter wavelength of 645/76 nm.\nThe green filter used had an excitation filter wavelength of 480/40 nm and an emission filter wavelength of 527/30 nm.\nThe blue filter used had an excitation filter wavelength of 350/50 nm and an emission filter wavelength of 460/76 nm.\nFluorescent images were taken of the loaded chip at 10x magnification on a Leica microscope using their proprietary LAS X capture software. A time-lapse of each channel was acquired automatically by marking the channels as regions of interest in the software and taking images over 8 h at 30 min intervals.\nThe red filter used had an excitation filter wavelength of 560/40 nm and an emission filter wavelength of 645/76 nm.\nThe green filter used had an excitation filter wavelength of 480/40 nm and an emission filter wavelength of 527/30 nm.\nThe blue filter used had an excitation filter wavelength of 350/50 nm and an emission filter wavelength of 460/76 nm.\nImage Analysis Fluorescent image data were captured in a rectangular region of interest across the channel width (Figure 3A). The intensity of the fluorescent signal was averaged across the total width of the channel to create a 1D mean intensity profile spanning the length of the channel (data dimensions 1x10,000) for each time point and mixture). A moving window average was then applied to the 1D data, with window size of 1000.Figure 3Graphical output of automated image analysis, showing (A) initial fluorescent image, (B) data preparation including conversion to 1D intensity profile, (C) averaging, selection of region of interest, and re-scaling (D) the final thresholding procedure. 
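For readers who want to reproduce this step outside the MATLAB/TPAC pipeline, the following Python sketch performs the equivalent width-averaging and moving-window smoothing. It is illustrative only (not the authors' code), and the synthetic image array simply stands in for a stitched .tiff exported from LAS X.

```python
# Illustrative sketch (not the authors' MATLAB/TPAC code): collapse a channel
# image to a 1D mean intensity profile along the channel length and smooth it
# with a 1000-sample moving window average. The synthetic array stands in for a
# stitched .tiff exported from LAS X (e.g., loaded with tifffile.imread).
import numpy as np

rng = np.random.default_rng(0)
decay = np.exp(-np.linspace(0, 5, 10_000))                  # idealised intensity fall-off
image = decay[None, :] + 0.05 * rng.random((500, 10_000))   # rows: width, cols: length

# Average across the channel width (rows) to obtain a 1 x 10,000 profile.
profile = image.mean(axis=0)

# Moving window average (box filter), window size 1000 as described above.
window = 1000
smoothed = np.convolve(profile, np.ones(window) / window, mode="same")
```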
Figure 3: Graphical output of automated image analysis, showing (A) initial fluorescent image, (B) data preparation including conversion to 1D intensity profile, (C) averaging, selection of region of interest, and re-scaling, and (D) the final thresholding procedure. Both the region of interest (white dashed lines) and threshold intensity positions (red dashed lines) are marked on the fluorescent image.\nFor all data, the section of the chamber containing the channel was removed. Additionally, the data were seen to finish before the absolute end of the chamber, with low amplitude values observed in the same spatial position for all experiment time points and mixtures; these values were also removed to avoid over-calculation of the penetration depth. The data, truncated to fall between the end of the channel and the end of the chamber, were then normalised by subtracting the minimum intensity value and dividing by the maximum amplitude, such that all intensity values fell between 0 and 1. A schematic of the initial data preparation stage is shown in Figure 3B and C.\nThe normalised and truncated intensity profile was then used to calculate the penetration depth. As the focus is on matching a qualitative difference between mixtures, a simple thresholding method was first used to compare penetration depths. We calculated the point within the chamber at which the intensity met a certain percentage of the maximum intensity (which is 1, given the normalisation procedure described above), with multiple threshold percentages considered (5/10/20/30/40/50%), as shown in Figure 3D.\nAll data analyses were performed using custom-written MATLAB scripts (Mathworks, v.2019b). The codebase has been named the Tissue Penetration Analysis Codebase (TPAC) and is hosted at the following address: https://bitbucket.org/hauertlab/tpac/src/master/.\nLAS X software (Leica) was used to automatically stitch the individual images together using the statistical alignment option and export the data as .tiff format images.
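A minimal Python sketch of the thresholding step is given below; it is again illustrative rather than the TPAC implementation, it assumes the channel-end and chamber-end indices and the pixel calibration are known, and it takes one possible reading of the threshold rule (the farthest position at which the rescaled intensity still meets the chosen fraction of the maximum).

```python
# Illustrative sketch (not the TPAC code): normalise a truncated 1D intensity
# profile to [0, 1] and read off a threshold-based penetration depth. The index
# positions and mm-per-pixel calibration are hypothetical inputs.
import numpy as np

def penetration_depth(profile, start_idx, end_idx, mm_per_pixel, threshold=0.10):
    """Return the penetration depth (mm) for a smoothed 1D intensity profile."""
    chamber = profile[start_idx:end_idx]        # truncate to the gel-filled chamber
    chamber = chamber - chamber.min()
    chamber = chamber / chamber.max()           # rescale intensities to [0, 1]
    above = np.where(chamber >= threshold)[0]   # positions meeting the threshold
    return above[-1] * mm_per_pixel if above.size else 0.0

# Hypothetical demonstration: an exponential-like decay across a 7.5 mm chamber.
demo_profile = np.exp(-np.linspace(0, 5, 7500))
print(penetration_depth(demo_profile, 0, 7500, mm_per_pixel=0.001))
```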
Diffusion Equations: The above procedure for calculating the normalised penetration distance allowed for comparison of the diffusive profile of fluorescein dye and nanoparticles in different environments.\nTo verify that nanoparticle transport is mostly driven by diffusion and to quantify the diffusion coefficient, we also fitted a model of diffusion, with the assumption that diffusion along the length of the chamber will dominate. We model the green dye concentration, C(x, t), with constant and isotropic diffusion coefficient, D, using the standard equation:28\n(1) $$\frac{\partial C(x,t)}{\partial t} = D \, \frac{\partial^{2} C(x,t)}{\partial x^{2}}$$
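For intuition, Equation (1) can also be integrated numerically. The sketch below is a minimal explicit finite-difference scheme with an impermeable (reflecting) boundary at x = 0; it is not part of the published analysis, and the diffusion coefficient and grid parameters are hypothetical values chosen only to satisfy the explicit stability condition D·Δt/Δx² ≤ 0.5.

```python
# Illustrative sketch: explicit finite-difference integration of Equation (1),
# dC/dt = D * d2C/dx2, with reflecting (impermeable) boundaries. Not part of the
# published analysis; D and the grid are hypothetical, chosen so that the
# explicit stability condition D*dt/dx**2 <= 0.5 holds.
import numpy as np

L, nx = 7.5e-3, 300                  # chamber length [m] and number of grid points
dx = L / nx
D = 5e-10                            # hypothetical diffusion coefficient [m^2/s]
dt = 0.4 * dx**2 / D                 # satisfies the explicit stability limit

C = np.zeros(nx)
C[0] = 1.0                           # point-like release at the channel end (x = 0)

t_end = 8 * 3600                     # 8 h, matching the imaging window
for _ in range(int(t_end / dt)):
    lap = np.empty_like(C)
    lap[1:-1] = C[2:] - 2 * C[1:-1] + C[:-2]
    lap[0] = 2 * (C[1] - C[0])       # reflecting boundary at x = 0
    lap[-1] = 2 * (C[-2] - C[-1])    # reflecting boundary at the chamber end
    C = C + D * dt / dx**2 * lap

print(f"Concentration remaining near the inlet after 8 h: {C[0]:.3f}")
```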
All data analyses were performed using custom-written MATLAB scripts (Mathworks, v.2019b). The codebase has been named the Tissue Penetration Analysis Codebase (TPAC) and is hosted at the following address: https://bitbucket.org/hauertlab/tpac/src/master/.\nLAS X software (Leica) was used to automatically stitch the individual images together using the statistical alignment option and export the data as .tiff format images.\nDiffusion Equations\nThe above procedure for calculating normalised penetration distance allowed for comparison of the diffusive profile of fluorescein dye and nanoparticles in different environments.\nTo verify that nanoparticle transport is mostly driven by diffusion and to quantify the diffusion coefficient, we also fitted a model of diffusion, with the assumption that diffusion along the length of the chamber will dominate. We model the green dye concentration, C(x, t), with constant and isotropic diffusion coefficient, D, using the standard equation,28\n(1) $$\\frac{\\partial C(x,t)}{\\partial t} = D\\,\\frac{\\partial^{2} C(x,t)}{\\partial x^{2}}$$\nFor the concentration of fluorescein dye, we observe that immediately after the channel position (marked as “Start” in Figure 3), the concentration monotonically decreases for all time observations and across all mediums. Hence, we assume an instantaneous point release of concentration $C_{0}$ from the channel at the first timepoint, such that $C(0,\\ 30\\,\\mathrm{mins}) = C_{0}$, and that there is an impermeable boundary at x=0. The solution to Equation (1) is then given by28\n(2) $$C(x,t) = C_{max}\\exp\\left(-\\frac{x^{2}}{4Dt}\\right)$$\nwhere $C_{max}$ is the maximum initial concentration, rescaled to a value of 1. We fit this equation to the truncated and rescaled fluorescent intensity, normalising only with the maximum amplitude at the initial observation (timepoint 0).\nFor Equation (2), the diffusion coefficient, D, was fitted to the data using the MATLAB Curve Fitting Toolbox. We consider only those fits with an R2 higher than 0.9.
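One way such a fit could be set up with the Curve Fitting Toolbox is sketched below; the variable names, the 4 h example time point, and the starting guess are illustrative assumptions rather than the exact settings used in TPAC:

% Fit Equation (2), C(x,t) = exp(-x^2/(4*D*t)) with Cmax rescaled to 1,
% to one rescaled intensity profile (roi) at a single, assumed 4 h time point.
x_um  = (linspace(0, 7.5, numel(roi)) * 1e3)';   % position along chamber, micrometres
y     = roi(:);                                  % rescaled intensity profile
t_sec = 4*3600;                                  % example time point (4 h), seconds
ft    = fittype('exp(-x^2/(4*D*t))', 'independent', 'x', ...
                'problem', 't', 'coefficients', 'D');
[fitObj, gof] = fit(x_um, y, ft, 'problem', t_sec, 'StartPoint', 500);
if gof.rsquare > 0.9                             % keep only fits with R^2 above 0.9
    D_um2_per_s = fitObj.D;                      % fitted diffusion coefficient, um^2/s
end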
For our automated image analysis, we tested 5/10/20/30/40/50% intensity thresholds and found that the results were qualitatively the same across all thresholds. To match the visual inspection described above, we selected an intensity threshold of 10% to match the observed ~100% penetration of green and ~40% penetration of blue, as shown in Figure 4. All subsequent results use this 10% threshold, unless otherwise stated. After removing the inlet and chamber end, the total chamber length was 7.5 mm.\nFigure 4: Time series data from example channels, showing tissue penetration of fluorescein dye and 20 nm particles in different tissue scaffold conditions (gel type vs presence of microparticles). Images represent penetration at time = 0 h, 4 h and 8 h. Boundaries of the tissue scaffold location and channel end are denoted between dotted white lines.", "Microfluidic devices were fabricated by laser-cutting transparent polymethylmethacrylate (PMMA) 2 mm acrylic sheets on a CO2 laser cutter (Trotec). Three types of cuts of equal surface area were made from the same sheet – A) a plate with multiple channels cut through the acrylic, B) a solid backing plate, and C) a plate of “plugs” that would create a removable physical barrier inside the channel for a liquid reservoir.\nEach channel was formed by making a cut of 2x10 mm, and the plugs were formed by making cuts of 2.1x2 mm, creating a snug fit when the plug was inserted into the channel. The two halves A and B were then sealed using double-sided polyurethane tape of 2 mm thickness, providing a watertight and optically transparent chip that could be used to contain the biological gels. The plugs from C were inserted at the left-most point of each channel.
The chips were then stored at −20 °C for at least 24 h as a rudimentary antibacterial measure while live cells were not yet in use.", "Matrigel\nMatrigel™ (Thermo Fisher) was aliquoted into volumes of 2 mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nCollagen I\nA collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nGel Loading\nWhen adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).\nFigure 2: Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nGelation and Culture Conditions\nGiven that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nNanoparticle Solution Preparation\nA stock solution of fluorescein free acid (Sigma) was made up to 100 µg/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene microparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.", "Matrigel™ (Thermo Fisher) was aliquoted into volumes of 2 mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™.
This was mixed via pipetting up and down with a stirring motion before every use.", "A collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.", "When adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).\nFigure 2: Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).", "Given that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).", "A stock solution of fluorescein free acid (Sigma) was made up to 100 µg/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene microparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.", "Fluorescent images were taken of the loaded chip at 10x magnification on a Leica microscope using their proprietary LAS X capture software.
A time-lapse of each channel was acquired automatically by marking the channels as regions of interest in the software and taking images over 8 h at 30 min intervals.\nThe red filter used had an excitation filter wavelength of 560/40 nm and an emission filter wavelength of 645/76 nm.\nThe green filter used had an excitation filter wavelength of 480/40 nm and an emission filter wavelength of 527/30 nm.\nThe blue filter used had an excitation filter wavelength of 350/50 nm and an emission filter wavelength of 460/76 nm.", "Fluorescent image data were captured in a rectangular region of interest across the channel width (Figure 3A). The intensity of the fluorescent signal was averaged across the total width of the channel to create a 1D mean intensity profile spanning the length of the channel (data dimensions 1x10,000) for each time point and mixture. A moving window average was then applied to the 1D data, with a window size of 1000.\nFigure 3: Graphical output of automated image analysis, showing (A) initial fluorescent image, (B) data preparation including conversion to 1D intensity profile, (C) averaging, selection of region of interest, and re-scaling, and (D) the final thresholding procedure. Both the region of interest (white dashed lines) and threshold intensity positions (red dashed lines) are marked on the fluorescent image.\nFor all data, the section of the chamber containing the channel was removed. Additionally, the data were seen to finish before the absolute end of the chamber, with low amplitude values observed at the same spatial position for all experiment time points and mixtures; these values were also removed to avoid over-calculation of the penetration depth. The data, truncated to fall between the end of the channel and the end of the chamber, were then normalised by subtracting the minimum intensity value and dividing by the maximum amplitude, such that all intensity values fell between 0 and 1. A schematic of the initial data preparation stage is shown in Figure 3B and C.\nThe normalised and truncated intensity profile was then used to calculate the penetration depth. As the focus is on matching a qualitative difference between mixtures, a simple thresholding method was first used to compare penetration depths. We calculated the point within the chamber at which the intensity met a certain percentage of the maximum intensity (which is 1, given the normalisation procedure described above), with multiple threshold percentages considered (5/10/20/30/40/50%), as shown in Figure 3D.\nAll data analyses were performed using custom-written MATLAB scripts (Mathworks, v.2019b). The codebase has been named the Tissue Penetration Analysis Codebase (TPAC) and is hosted at the following address: https://bitbucket.org/hauertlab/tpac/src/master/.\nLAS X software (Leica) was used to automatically stitch the individual images together using the statistical alignment option and export the data as .tiff format images.", "The above procedure for calculating normalised penetration distance allowed for comparison of the diffusive profile of fluorescein dye and nanoparticles in different environments.\nTo verify that nanoparticle transport is mostly driven by diffusion and to quantify the diffusion coefficient, we also fitted a model of diffusion, with the assumption that diffusion along the length of the chamber will dominate. We model the green dye concentration, C(x, t), with constant and isotropic diffusion coefficient, D, using the standard equation,28\n(1) $$\\frac{\\partial C(x,t)}{\\partial t} = D\\,\\frac{\\partial^{2} C(x,t)}{\\partial x^{2}}$$\nFor the concentration of fluorescein dye, we observe that immediately after the channel position (marked as “Start” in Figure 3), the concentration monotonically decreases for all time observations and across all mediums. Hence, we assume an instantaneous point release of concentration $C_{0}$ from the channel at the first timepoint, such that $C(0,\\ 30\\,\\mathrm{mins}) = C_{0}$, and that there is an impermeable boundary at x=0. The solution to Equation (1) is then given by28\n(2) $$C(x,t) = C_{max}\\exp\\left(-\\frac{x^{2}}{4Dt}\\right)$$\nwhere $C_{max}$ is the maximum initial concentration, rescaled to a value of 1. We fit this equation to the truncated and rescaled fluorescent intensity, normalising only with the maximum amplitude at the initial observation (timepoint 0).\nFor Equation (2), the diffusion coefficient, D, was fitted to the data using the MATLAB Curve Fitting Toolbox. We consider only those fits with an R2 higher than 0.9.\nFor our automated image analysis, we tested 5/10/20/30/40/50% intensity thresholds and found that the results were qualitatively the same across all thresholds. To match the visual inspection described above, we selected an intensity threshold of 10% to match the observed ~100% penetration of green and ~40% penetration of blue, as shown in Figure 4. All subsequent results use this 10% threshold, unless otherwise stated. After removing the inlet and chamber end, the total chamber length was 7.5 mm.\nFigure 4: Time series data from example channels, showing tissue penetration of fluorescein dye and 20 nm particles in different tissue scaffold conditions (gel type vs presence of microparticles). Images represent penetration at time = 0 h, 4 h and 8 h. Boundaries of the tissue scaffold location and channel end are denoted between dotted white lines.", "In Figure 4, we show example image data from red, green, and blue channels within Matrigel™ or Collagen I, as well as with and without microparticles. We observe that the 20 nm FluoSpheres do not flow to the end of the channel through either Matrigel™ or Collagen I, but that there is diffusion over the course of 8 h that reaches approximately one-third to one-half of the way through the gel (with low intensity), after accounting for the channel position.\nThe green fluorescein dye is instead able to diffuse through the full length of the tissue scaffold gels within 8 h, as expected from the smaller hydrodynamic radius of the dye particulates.
Images taken over the duration of the experiment can be compiled into a video and analysed to show the spatiotemporal dynamics of the tracked nanoparticles within the tissue, as seen in Figure 5.\nFigure 5: Timescale data for all conditions, showing green dye reaching the end of the channel at ~4–5 h, and blue 20 nm particles not exceeding ~50% over the entire 8 h experiment. Total timepoints = 16; total chamber length after removing data is 7.5 mm.\nThe blue 20 nm FluoSpheres were observed to have a larger scatter in the Collagen I with microparticles than in any other condition, which we attribute to differences in the porous matrix due to dilution with the microparticles. The 10 μm red microparticles have carboxylate groups on the surface, which may lead to electrostatic charge interaction with the collagen fibrils. We hypothesize that Matrigel does not follow the same trend because it is a more complex protein matrix, which may prevent it from being so widely affected by the presence of microparticles. Future work would expand upon this by analysing the gel porosity with electron microscopy.\nIn Figure 6, we show the penetration distance for various tissue scaffold conditions for the blue nanoparticles and green dye, averaged across all 36 datasets (triplicate data for all four conditions and three individual chips), where error bars denote the standard deviation, for the final time point (t = 8 h). A two-sample t-test analysis was performed on the data and confirmed several key differences.\nFigure 6: Results from automated analysis of fluorescent imaging data with a 10% threshold at t = 8 h, showing that green dye can be tracked to its ~100% penetration, and blue 20 nm particles can be tracked to ~40% penetration with the same threshold. The total chamber length after removing data is 7.5 mm.
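As a sketch of how such a two-sample comparison could be run in MATLAB (requires the Statistics and Machine Learning Toolbox; the depth vectors below are hypothetical placeholders for the per-channel penetration depths of two conditions):

% Two-sample t-test on final (t = 8 h) penetration depths, e.g. green dye
% with microparticles vs without. depth_noMP and depth_MP are assumed
% vectors of penetration depths (mm) from the replicate channels.
[h, p] = ttest2(depth_noMP, depth_MP);           % default 5% significance level
fprintf('Significant difference: %d (p = %.4f)\n', h, p);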
In Figure 7, first, we observe that the blue 20 nm nanoparticles consistently diffuse less deeply than the green dye under all conditions, as described above. Further, we observe that the average penetration distance of the green dye is higher without microparticles but broadly similar across gel types, whereas the blue 20 nm particles penetrated deeper in Matrigel™ than in Collagen I. There was no significant difference for the 20 nm nanoparticles with or without microparticles.\nFigure 7: Two significant results found in the overall analysis, showing that green dye is affected by the presence or absence of microparticles, and blue nanoparticles are affected by gel type. The total chamber length after removing data is 7.5 mm. (***) denotes p<0.005.\nSecond, there was a significant difference (***, p<0.005) between the gel types for the blue nanoparticle diffusion. Collagen I proved to be more permeable to nanoparticle diffusion than Matrigel™, with blue nanoparticles penetrating to a total distance (after 8 h) of 3.51 mm compared to 2.96 mm for Collagen I and Matrigel™, respectively. SEM images of Matrigel™ and a similar bovine Collagen I gel from the literature bear out that Matrigel™ (0.5–1.5 μm) has a smaller porosity than 2 mg/mL Collagen I (~2–5 μm), and this aligns with our analysis of Collagen I as being more permeable.29,30\nThird, we observed a significant difference (***, p<0.005) between conditions with and without microparticles for the green fluorescein dye diffusion. The fluorescein dye permeated further in gels without microparticles (7.22 mm compared to 7.1 mm of the total truncated chamber length), although the data had higher variation (standard deviation of 0.65 mm compared to 0.01 mm). From this, we consider that the presence of microparticles provides obstacles to permeation, which may lead to longer trajectories due to collisions between the dye and microparticles. This would lead to the dye having more variation in how it flows, leading to an overall decrease in diffusion speed.\nFinally, there was no significant difference between the gel types for the green fluorescein dye diffusion. Neither of the gels appeared to affect the permeation of the dye, which leads to the conclusion that the fluorescein dye was small enough to penetrate the smaller nanostructure of Matrigel™.", "Having observed significant differences within the penetration distances, we consider whether the diffusion coefficient for the particles can be quantified using the model of diffusion described in Section 5.5. Here, we focus on the diffusion coefficient of the green fluorescein dye. We also include the diffusion coefficient as calculated using the Stokes–Einstein equation:\n(3) $$D = \\frac{k_{B}T}{6\\pi\\eta r}$$\nwhere $k_{B}$ is the Boltzmann constant, $T$ is the absolute temperature, $r$ is the hydrodynamic radius, and $\\eta$ is the dynamic viscosity of the medium, which we assume to be 0.768 × 10−3 kg/(m s) for Collagen I at 37 °C.20 The dynamic viscosity of Matrigel™ was unknown, so a corresponding reference value is not included.
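For reference, the Stokes–Einstein estimate in Equation (3) can be evaluated directly in MATLAB; the ~0.5 nm hydrodynamic radius used below is an assumed illustrative value for fluorescein, chosen because it reproduces the reference coefficient quoted in the next paragraph:

% Stokes-Einstein estimate (Equation (3)) for fluorescein in Collagen I at 37 C.
kB  = 1.380649e-23;          % Boltzmann constant, J/K
T   = 310.15;                % 37 degrees C in kelvin
eta = 0.768e-3;              % dynamic viscosity of Collagen I, kg/(m s)
r   = 0.5e-9;                % assumed hydrodynamic radius of fluorescein, m
D   = kB*T / (6*pi*eta*r);   % diffusion coefficient, m^2/s
D_um2_per_s = D * 1e12;      % approx. 5.9e2 um^2/s, close to the quoted 591.6 um^2/s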
We observe that the diffusion coefficients for the green fluorescein dye (575 ± 72.4 μm2/sec averaged across all experiments) are consistent with the expected reference value for the diffusion coefficient (591.6 μm2/sec, as calculated using the Stokes–Einstein equation and the viscosity of Collagen I given in20). We chose to analyse the fluorescein dye as it fully penetrated the channel within the experimental timeframe and thus gave the best timescale dynamics. This matching of values to the literature shows that our device’s environment is useful for capturing the dynamics of diffusion throughout biomimetic tissue.\nFurthermore, we observe that the diffusion coefficient is higher within Collagen I than within Matrigel™, as expected from the literature images of the porosity. Interestingly, we find that the averaged diffusion coefficient is higher when microparticles are present, counter to what is observed from the analysis of the penetration distance. The most logical conclusion is that the gel’s micron-scale mesh structure is disrupted and enlarged by the presence of the 10 μm particles, providing an easier pathway for the dye to migrate along the channel. To prove this, electron microscopy of the gel porosity with and without microparticles will be required in the future.\nWhile we observe that the diffusion coefficient is slightly higher in the presence of microparticles, the change in the diffusion coefficient is small and within the standard deviation.\nThe mean squared displacement was calculated for all experiments across the three chips analysed in this experiment (Supplemental Table 1). All values for the exponent are below 1, indicating that the diffusion regime is sub-diffusive, as expected in porous media. The graphs for the fluorescein MSD curves are shown in Supplemental Figures 1, 3 and 5, while the 20 nm nanoparticle MSD curves are shown in Supplemental Figures 2, 4 and 6.
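The text does not specify how the MSD exponent was extracted; one generic way to estimate it from an MSD curve is a straight-line fit in log-log space, sketched below with assumed inputs:

% Estimate the anomalous diffusion exponent alpha from MSD(tau) ~ tau^alpha.
% tau and msd are assumed column vectors of lag times and MSD values.
p     = polyfit(log(tau), log(msd), 1);   % linear fit on log-log axes
alpha = p(1);                             % exponent; alpha < 1 => sub-diffusive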
", "We have developed a low-cost device that is capable of tracking nanoparticles through tissue-mimetic environments. Here, we use microparticles as proxies for cell-like structures, with future work introducing cells to investigate the influence of cell-binding and internalisation on nanoparticle flow profiles. The images generated allow us to monitor nanoparticle penetration over time, where all analysis is performed automatically with a custom-built framework for penetration detection, here made open source. This demonstrates a device with the potential for high-throughput testing of nanoparticle flow properties under a range of tumour-like conditions.\nMonitoring the dynamics of nanocarriers as they move through tissue-like environments could help optimise their design to overcome transport barriers, leading to better distribution within tumours and reduced off-target effects.\nIn future work, we will refine the workflow to produce a high-throughput screening system whereby this custom chip can be used as a low-cost, modular analysis platform for testing nanomedicine flow through a biological microenvironment (such as a cancerous tumour) – the output of which can then be processed with the press of a button to provide detailed statistical information about the permeation and diffusion characteristics of numerous different conditions at once. Such a testing suite could then interface with machine learning routines to allow for the automatic characterisation and design of nanoparticle-based drug therapies.31\nAs a demonstration, we compare the tissue penetration of two fluorescent nanoparticles of different sizes. As expected, the larger particles penetrate both types of gel far less readily. In future work, we intend to add multiple other types of fluorescent particles to the mixture, including larger and smaller particles with different charges and targeting moieties.\nWe also seek to investigate whether mathematical analysis can be applied in the reverse manner in this system – whether we can utilize pre-characterized particles of known properties to gather information about the gels themselves. Both approaches would allow for the effective calibration of tissue-scale models of nanoparticle transport, further improving our understanding of the main obstacles to effective penetration into the tumour tissue.32–34" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and Methods", "Chip Preparation", "Tissue Scaffold and Nanoparticle Preparation", "Matrigel", "Collagen I", "Gel Loading", "Gelation and Culture Conditions", "Nanoparticle Solution Preparation", "Fluorescent Microscopy", "Image Analysis", "Diffusion Equations", "Results", "Discussion", "Conclusion" ]
[ "Nanocarriers can transport chemotherapies through the body, shielding them from healthy tissue and specifically targeting tumours. One of nanomedicine’s advantages is the ability to fine-tune the surface chemistry and size of the nanocarriers to optimize their transport across biological barriers, such as blood vessel walls, extracellular matrix (ECM) and through cell membranes.1,2 Small nanocarrier modifications, however, can lead to extreme differences in their behaviour3–6 and their ability to traverse biological barriers to reach the intended target.7–9\nGiven the complexity of nanomedicine design and its impact on complex spatiotemporal behaviour in the body, new methods are needed for high-throughput screening of suitable transport dynamics.10,11 Furthermore, evaluating the performance of a therapy within a specific scenario, and the interactions it would encounter, allows for more effective rational nanoparticle design as well as a route to limiting toxic off-target effects.12,13\nTumour-specific uptake models are of immediate importance to the development of new nanotherapies for treating cancer, with almost two-thirds of nanomedicine research currently focused on oncology.14,15 However, solid tumour microenvironments are extremely complex and exhibit a wide range of diffusive and convective matrices.16 The tumour microenvironment, due to its necrotic core and constantly expanding mass, will produce a positive outwards pressure at the edge of the tumour towards the surrounding tissue,17,18 and extravasating nanoparticles nearer the centre must rely on diffusion to penetrate.19 Factors such as the density and orientation of the nano-porous protein network also impact the transport properties of the drug.20–22\nMicrofluidic models are beginning to emerge in nanomedicine screening, whereby fluid flow is directed through a region of ECM hydrogels that contain tumour cells. Automatic image processing is then used to allow researchers to analyse results in bulk, allowing for customization of the analysis framework to achieve specific results.\nIn recent years, microfluidic 3D tissue scaffolds have been investigated for their beneficial properties of low reagent usage, ability to scale up to high-throughput systems, and flexibility in spatial control.23 The significant improvement on the previous paradigm of 2D cell culture is to include a three-dimensional environment for cells to proliferate, preventing morphological differences in binding properties. The resulting 3D tissue structures are commonly grown in hydrogel matrices comprising natural ECM proteins, which allow the cells to propagate to form biologically mimetic cellular structures, such as spheroids or organoids.24 These rudimentary biological structures are produced to emulate in vivo conditions, such that the natural mechanisms of nanomedicine uptake can be studied in vitro.\nHo et al25 produced microfluidic channels that were sandwiched either side of a region of gelled fibrin solution that was covered with human umbilical vein endothelial cells, allowing them to study how fluorescent polystyrene nanoparticles extravasated under flow through the hydrogel. 
They also extracted fluorescence data in regions of interest and utilized MATLAB to examine the diffusional permeability.\nCarvalho et al26 used a microfluidic methodology to assemble a colorectal tumour-on-a-chip model by embedding cancer cell lines in Matrigel in a 5 mm chamber and having parallel perfusion channels coated in human colonic microvascular endothelial cells. They produced dendrimer nanoparticles containing a colorectal cancer drug and observed fluorescently labelled dendrimers penetrating the hydrogel. Instead of a scripted analysis, they took microscopy image processing directly into graphics software for statistical measurements, without mention of an automated solution.\nAlbanese et al27 produced a melanoma spheroid-on-a-chip device where spheroids harvested from culture in a poly-hydroxyethyl methacrylate gel supplemented with Matrigel™ were embedded inside a soft microfluidic chip, with the PDMS microchannel gently compressing the tumour against the glass substrate. Instead of being encased in a solid hydrogel, the spheroid was contained in a small coating of Matrigel and laminin and was subjected to convective flow, which may have contributed to their measured uptake of gold nanoparticles in a manner that does not match normal tumour diffusion conditions.\nAlthough the number of exciting new complex 3D testbeds is rapidly growing, there is still no standardized, easy-to-produce microfluidic testbed with open-source image processing and diffusion calculation for tissue penetration dynamics. In this paper, we present a novel framework for investigating the diffusive profile of nanoparticles within a model of human tissue. To achieve a diffusive environment, we utilize a completely open channel in a custom-made polymer chip loaded with a tissue scaffold gel containing mammalian-cell-sized microparticles, with no applied pressure beyond the initial pipetting motion used to introduce the nanoparticle solution (Figure 1). We then investigate how automated fluorescent image processing can be used to track nanoparticles as they move through our tissue-mimetic chip.\nFigure 1: Schematic diagram of a channel in the polymethylmethacrylate chip – containing a tissue-mimetic gel scaffold with an adjacent vessel inlet well, where nanoparticle solutions are loaded for imaging under fluorescent microscopy; fluorescent microparticles are substituted for the physical barriers of live cells. Imaging allows for time-lapse tracking of diffusive nanocarrier flow profiles from the vessel inlet through the tissue scaffold.\nOur framework allows for the evaluation of particle dynamics over a range of sizes and within various realistic biomimetic environmental parameters. We test our framework using spherical nanoparticles and track these nanoparticles through an environment of well-characterized commercially available ECM: Matrigel™ and bovine Collagen I.
The resulting datasets are analysed using a custom-written automated routine, where the primary objective is to establish a high-throughput testing suite to quantify nanoparticle spatiotemporal dynamics in tissue environments.", "Chip Preparation Microfluidic devices were fabricated by laser-cutting transparent polymethylmethacrylate (PMMA) 2 mm acrylic sheets on a CO2 laser cutter (Trotec). Three types of cuts of equal surface area were made from the same sheet – A) a plate with multiple channels cut through the acrylic, B) a solid backing plate, and C) a plate of “plugs” that would create a removable physical barrier inside the channel for a liquid reservoir.\nEach channel was formed by making a cut of 2x10 mm, and the plugs were formed by making cuts of 2.1x2 mm, creating a snug fit when the plug was inserted into the channel. The two halves A and B were then sealed using double-sided polyurethane tape of 2 mm thickness, providing a watertight and optically transparent chip that could be used to contain the biological gels. The plugs from C were inserted at the left-most point of each channel. The chips were then stored at −20 °C for at least 24 h as a rudimentary antibacterial measure while live cells were not in use.\nMicrofluidic devices were fabricated by laser-cutting transparent polymethylmethacrylate (PMMA) 2 mm acrylic sheets on a CO2 laser cutter (Trotec). Three types of cuts of equal surface area were made from the same sheet – A) a plate with multiple channels cut through the acrylic, B) a solid backing plate, and C) a plate of “plugs” that would create a removable physical barrier inside the channel for a liquid reservoir.\nEach channel was formed by making a cut of 2x10 mm, and the plugs were formed by making cuts of 2.1x2 mm, creating a snug fit when the plug was inserted into the channel. The two halves A and B were then sealed using double-sided polyurethane tape of 2 mm thickness, providing a watertight and optically transparent chip that could be used to contain the biological gels. The plugs from C were inserted at the left-most point of each channel. The chips were then stored at −20 °C for at least 24 h as a rudimentary antibacterial measure while live cells were not in use.\nTissue Scaffold and Nanoparticle Preparation Matrigel Matrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nMatrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nCollagen I A collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. 
This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nA collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nGel Loading When adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).Figure 2Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nSchematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nWhen adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).Figure 2Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nSchematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).\nGelation and Culture Conditions Given that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nGiven that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. 
This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).\nNanoparticle Solution Preparation A stock solution of fluorescein-free acid (Sigma) was made up to 100 ug/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution, and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene nanoparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.\nA stock solution of fluorescein-free acid (Sigma) was made up to 100 ug/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution, and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F).\nThe excitation wavelength for the red 10 μm polystyrene nanoparticles was 580 nm and the emission wavelength was 605 nm.\nThe excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm.\nThe excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.\nMatrigel Matrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nMatrigel™ (Thermo Fisher) was aliquoted to volumes of 2mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.\nCollagen I A collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used.\nFor gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.\nA collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. 
Fluorescent Microscopy
Fluorescent images were taken of the loaded chip at 10x magnification on a Leica microscope using their proprietary LAS X capture software. A time-lapse of each channel was acquired automatically by marking the channels as regions of interest in the software and taking images over 8 h at 30 min intervals.
The red filter had an excitation filter wavelength of 560/40 nm and an emission filter wavelength of 645/76 nm; the green filter, 480/40 nm excitation and 527/30 nm emission; and the blue filter, 350/50 nm excitation and 460/76 nm emission.

Image Analysis
Fluorescent image data were captured in a rectangular region of interest across the channel width (Figure 3A). The intensity of the fluorescent signal was averaged across the total width of the channel to create a 1D mean intensity profile spanning the length of the channel (data dimensions 1 x 10,000) for each time point and mixture. A moving window average was then applied to the 1D data, with a window size of 1000.
Figure 3. Graphical output of the automated image analysis, showing (A) the initial fluorescent image, (B) data preparation including conversion to a 1D intensity profile, (C) averaging, selection of the region of interest, and re-scaling, and (D) the final thresholding procedure. Both the region of interest (white dashed lines) and threshold intensity positions (red dashed lines) are marked on the fluorescent image.
For all data, the section of the chamber containing the channel was removed. Additionally, the data were observed to finish before the absolute end of the chamber, with low-amplitude values at the same spatial position for all experimental time points and mixtures; these values were also removed to avoid over-estimating the penetration depth. The data, truncated to fall between the end of the channel and the end of the chamber, were then normalised by subtracting the minimum intensity value and dividing by the maximum amplitude, such that all intensity values fell between 0 and 1. A schematic of the initial data preparation stage is shown in Figure 3B and C.
The normalised and truncated intensity profile was then used to calculate the penetration depth. As the focus is on matching a qualitative difference between mixtures, a simple thresholding method was first used to compare penetration depths. We calculated the point within the chamber at which the intensity met a certain percentage of the maximum intensity (which is 1, given the normalisation procedure described above), with multiple threshold percentages considered (5/10/20/30/40/50%), as shown in Figure 3D.
All data analyses were performed using custom-written MATLAB scripts (MathWorks, v.2019b). The codebase has been named the Tissue Penetration Analysis Codebase (TPAC) and is hosted at the following address: https://bitbucket.org/hauertlab/tpac/src/master/.
LAS X software (Leica) was used to automatically stitch the individual images together using the statistical alignment option and export the data as .tiff format images.
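The thresholding procedure above can be condensed into a few lines of MATLAB. The sketch below is a minimal illustration rather than the published TPAC code; the function name, variable names and hard-coded values are assumptions made for demonstration.

```matlab
% Minimal sketch of the penetration-depth calculation (not the TPAC code).
% img: greyscale fluorescent image of one channel, oriented so that the
% long axis of the chamber runs along the second (column) dimension.
function depth_mm = penetration_depth(img, roi, chamber_length_mm, threshold)
    % Average across the channel width to obtain a 1D intensity profile.
    profile = mean(double(img), 1);

    % Smooth with a moving window average (window size 1000 in the paper).
    profile = movmean(profile, 1000);

    % Truncate to the region between the channel end and the chamber end.
    profile = profile(roi(1):roi(2));

    % Rescale so that all intensities fall between 0 and 1.
    profile = (profile - min(profile)) / (max(profile) - min(profile));

    % Furthest position at which the intensity still exceeds the chosen
    % fraction of the maximum (e.g. threshold = 0.1 for the 10% threshold).
    idx = find(profile >= threshold, 1, 'last');

    % Convert the index to a physical distance along the chamber.
    depth_mm = chamber_length_mm * idx / numel(profile);
end
```

For example, a call such as `penetration_depth(img, [1200 9500], 7.5, 0.1)` would report the 10% penetration depth over a 7.5 mm chamber, where the region-of-interest indices are placeholders rather than values from the published analysis.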
Diffusion Equations
The above procedure for calculating the normalised penetration distance allowed for comparison of the diffusive profiles of fluorescein dye and nanoparticles in different environments.
To verify that nanoparticle transport is mostly driven by diffusion and to quantify the diffusion coefficient, we also fitted a model of diffusion, under the assumption that diffusion along the length of the chamber dominates. We model the green dye concentration, C(x, t), with a constant and isotropic diffusion coefficient, D, using the standard equation,28

(1) $$\frac{\partial C(x,t)}{\partial t} = D\,\frac{\partial^{2} C(x,t)}{\partial x^{2}}$$

For the concentration of fluorescein dye, we observe that immediately after the channel position (marked as “Start” in Figure 3), the concentration monotonically decreases for all time observations and across all media. Hence, we assume an instantaneous point release of concentration $C_0$ from the channel at the first timepoint, such that $C(0,\,30\ \mathrm{min}) = C_0$, and that there is an impermeable boundary at x = 0. The solution to Equation (1) is then given by28

(2) $$C(x,t) = C_{max}\exp\!\left(-\frac{x^{2}}{4Dt}\right)$$

where $C_{max}$ is the maximum initial concentration, rescaled to a value of 1. We fit this equation to the truncated and rescaled fluorescent intensity, normalising only by the maximum amplitude at the initial observation (timepoint 0).
For Equation (2), the diffusion coefficient, D, was fitted to the data using the MATLAB Curve Fitting Toolbox. We consider only those fits with an R² higher than 0.9.
For our automated image analysis, we tested 5/10/20/30/40/50% intensity thresholds and found that the results were qualitatively the same across all thresholds. To match the visual inspection described above, we selected an intensity threshold of 10%, matching the observed ~100% penetration of green and ~40% penetration of blue shown in Figure 4. All subsequent results use this 10% threshold, unless otherwise stated. After removing the inlet and chamber end, the total chamber length was 7.5 mm.
Figure 4. Time series data from example channels, showing tissue penetration of fluorescein dye and 20 nm particles in different tissue scaffold conditions (gel type vs presence of microparticles). Images represent penetration at time = 0 h, 4 h and 8 h. Boundaries of the tissue scaffold location and the channel end are denoted by dotted white lines.
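As a rough illustration of how Equation (2) can be fitted with the Curve Fitting Toolbox, the snippet below fits D to one normalised intensity profile at a single time point. The profile, distance grid and time value are placeholders, not data from the published analysis.

```matlab
% Sketch of fitting Equation (2), C(x,t) = Cmax*exp(-x^2/(4*D*t)), to one
% normalised intensity profile at a fixed time t (Curve Fitting Toolbox).
x = linspace(0, 7.5e-3, 1000)';        % distance along chamber [m] (placeholder grid)
C = exp(-x.^2 / (4 * 5e-10 * 1800));   % synthetic profile standing in for measured data
t = 1800;                              % time since release [s], e.g. the 30 min frame

ft = fittype('Cmax*exp(-x^2/(4*D*t))', ...
    'independent', 'x', 'problem', 't', 'coefficients', {'Cmax', 'D'});

[fitres, gof] = fit(x, C, ft, 'problem', t, ...
    'StartPoint', [1, 1e-10], 'Lower', [0, 0]);

if gof.rsquare > 0.9                   % keep only good fits, as in the paper
    D_um2_per_s = fitres.D * 1e12;     % convert m^2/s to um^2/s
end
```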
In Figure 4, we show example image data from red, green, and blue channels within Matrigel™ or Collagen I, as well as with and without microparticles. We observe that the 20 nm FluoSpheres do not flow to the end of the channel through either Matrigel™ or Collagen I, but that there is a diffusion over the course of 8 h that reaches approximately one-third to one-half of the way through the gel (with low intensity), after accounting for the channel position.
The green fluorescein dye is instead able to diffuse through the entire tissue scaffold gels within 8 h, as expected from the smaller hydrodynamic radius of the dye particulates.
Images taken over the duration of the experiment can be compiled into a video and analysed to show the spatiotemporal dynamics of the tracked nanoparticles within the tissue, as seen in Figure 5.
Figure 5. Timescale data for all conditions, showing the green dye reaching the end of the channel at ~4–5 h, and the blue 20 nm particles not exceeding ~50% over the entire 8 h experiment. Total timepoints = 16; the total chamber length after removing data is 7.5 mm.
The blue 20 nm FluoSpheres were observed to have a larger scatter in Collagen I with microparticles than in any other condition, which we attribute to differences in the porous matrix due to dilution with the microparticles. The 10 μm red microparticles have carboxylate groups on the surface, which may lead to electrostatic charge interaction with the collagen fibrils. We hypothesize that Matrigel™ does not follow the same trend because it is a more complex protein matrix, which may prevent it from being so widely affected by the presence of microparticles. Future work would expand upon this by analysing the gel porosity with electron microscopy.
In Figure 6, we show the penetration distance for the various tissue scaffold conditions for the blue nanoparticles and green dye, averaged across all 36 datasets (triplicate data for all four conditions and three individual chips), where error bars denote the standard deviation, for the final time point (t = 8 h). A two-sample t-test analysis was performed on the data and confirmed several key differences.
Figure 6. Results from automated analysis of the fluorescent imaging data with a 10% threshold at t = 8 h, showing that the green dye can be tracked to its ~100% penetration, and the blue 20 nm particles can be tracked to ~40% penetration with the same threshold. The total chamber length after removing data is 7.5 mm.
In Figure 7, first, we observe that the blue 20 nm nanoparticles consistently diffuse less deeply than the green dye under all conditions, as described above. Further, we observe that the average penetration distance of the green dye is higher without microparticles but broadly similar across gel types, whereas the blue 20 nm particles penetrated deeper in Matrigel™ than in Collagen I. There was no significant difference for the 20 nm nanoparticles with or without microparticles.
Figure 7. Two significant results found in the overall analysis, showing that the green dye is affected by the presence or absence of microparticles, and the blue nanoparticles are affected by gel type. The total chamber length after removing data is 7.5 mm. (***) denotes p<0.005.
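The pairwise comparisons reported here amount to two-sample t-tests between penetration depths under two conditions. A minimal sketch is shown below, using placeholder vectors rather than the measured data.

```matlab
% Sketch of the two-sample t-test used to compare penetration depths
% between two conditions (placeholder values, not the measured data).
depth_collagen = [3.4 3.6 3.5 3.5 3.6 3.4 3.5 3.6 3.5];   % mm, blue NPs in Collagen I
depth_matrigel = [3.0 2.9 3.0 2.9 3.0 3.0 2.9 3.0 2.9];   % mm, blue NPs in Matrigel

[h, p] = ttest2(depth_collagen, depth_matrigel);          % Statistics Toolbox
fprintf('Significant difference: %d (p = %.4g)\n', h, p);
```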
Second, there was a significant difference (***, p<0.005) between the gel types for the blue nanoparticle diffusion. Collagen I proved to be more permeable to nanoparticle diffusion than Matrigel™, with the blue nanoparticles penetrating to a total distance (after 8 h) of 3.51 mm and 2.96 mm for Collagen I and Matrigel™, respectively. SEM images of Matrigel™ and a similar bovine Collagen I gel from the literature bear out that Matrigel™ (pore size ~0.5–1.5 μm) has a smaller porosity than 2 mg/mL Collagen I (~2–5 μm), and this aligns with our analysis of Collagen I as being more permeable.29,30
Third, we observed a significant difference (***, p<0.005) between conditions with and without microparticles for the green fluorescein dye diffusion. The fluorescein dye permeated further in gels without microparticles (7.22 mm compared to 7.1 mm of the total truncated chamber length), although these data had higher variation (standard deviation of 0.65 mm compared to 0.01 mm). From this, we consider that the presence of microparticles provides obstacles to permeation, which may lead to longer trajectories due to collisions between the dye and the microparticles. This would give the dye more variation in how it flows, leading to an overall decrease in diffusion speed.
Finally, there was no significant difference between the gel types for the green fluorescein dye diffusion. Neither gel appeared to affect the permeation of the dye, which leads to the conclusion that the fluorescein dye was small enough to penetrate the smaller nanostructure of Matrigel™.
Having observed significant differences within the penetration distances, we consider whether the diffusion coefficient for the particles can be quantified using the model of diffusion described in the Diffusion Equations section above. Here, we focus on the diffusion coefficient of the green fluorescein dye. We also include the diffusion coefficient calculated using the Stokes–Einstein equation:

(3) $$D = \frac{k_B T}{6\pi\eta r}$$

where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $r$ is the hydrodynamic radius, and $\eta$ is the dynamic viscosity of the medium, which we assume to be 0.768 × 10−3 kg/(m·s) for Collagen I at 37 °C.20 The dynamic viscosity of Matrigel™ was unknown and is not included for reference.
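The Stokes–Einstein reference value quoted below can be reproduced directly from Equation (3). In the sketch, the hydrodynamic radius of fluorescein (~0.5 nm) is an assumed literature-typical value, not one reported in this work.

```matlab
% Stokes-Einstein estimate of the fluorescein diffusion coefficient in
% Collagen I at 37 C (hydrodynamic radius is an assumed value, ~0.5 nm).
kB  = 1.380649e-23;      % Boltzmann constant [J/K]
T   = 310.15;            % absolute temperature [K] (37 degrees C)
eta = 0.768e-3;          % dynamic viscosity of Collagen I [kg/(m s)]
r   = 0.5e-9;            % hydrodynamic radius of fluorescein [m] (assumption)

D = kB * T / (6 * pi * eta * r);          % [m^2/s]
fprintf('D = %.1f um^2/s\n', D * 1e12);   % ~590 um^2/s, close to the quoted 591.6
```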
We observe that the diffusion coefficients for the green fluorescein dye (575 ± 72.4 μm²/s averaged across all experiments) are consistent with the reference value for the diffusion coefficient (591.6 μm²/s, as calculated using the Stokes–Einstein equation and the viscosity of Collagen I given in20). We chose to analyse the fluorescein dye as it fully penetrated the channel within the experimental timeframe and thus gave the best timescale dynamics. This agreement with the literature shows that our device’s environment is useful for capturing the dynamics of diffusion throughout biomimetic tissue.
Furthermore, we observe that the diffusion coefficient is higher within Collagen I than within Matrigel™, as expected from the literature images of the porosity. Interestingly, we find that the averaged diffusion coefficient is higher when microparticles are present, counter to what is observed from the analysis of the penetration distance. The most logical conclusion is that the gel’s micron-scale mesh structure is disrupted and enlarged by the presence of the 10 μm particles, providing an easier pathway for the dye to migrate along the channel. To prove this, electron microscopy of the gel porosity with and without microparticles will be required in the future.
While we observe that the diffusion coefficient is slightly higher in the presence of microparticles, the change in the diffusion coefficient is small and within the standard deviation.
The mean squared displacement (MSD) was calculated for all experiments across the three chips analysed in this work (Supplemental Table 1). All values of the exponent are below 1, indicating that the diffusion regime is sub-diffusive, as expected in porous media. The fluorescein MSD curves are shown in Supplemental Figures 1, 3 and 5, while the 20 nm nanoparticle MSD curves are shown in Supplemental Figures 2, 4 and 6.
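The sub-diffusive exponent referred to above is commonly obtained by fitting a power law, MSD(t) ∝ t^α, on log–log axes. The sketch below shows one way to do this and is an illustration rather than the analysis used for the supplemental figures; the MSD values are placeholders.

```matlab
% Sketch of estimating the anomalous-diffusion exponent alpha from an MSD
% curve, MSD(t) ~ t^alpha (alpha < 1 indicates sub-diffusion).
t   = (0.5:0.5:8) * 3600;                 % time points [s], 30 min intervals over 8 h
msd = 1e-9 * t.^0.8;                      % placeholder MSD values [m^2], not measured data

coeffs = polyfit(log(t), log(msd), 1);    % linear fit in log-log space
alpha  = coeffs(1);                       % slope = anomalous-diffusion exponent
fprintf('alpha = %.2f\n', alpha);         % < 1 for sub-diffusive transport
```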
We have developed a low-cost device that is capable of tracking nanoparticles through tissue-mimetic environments. Here, we use microparticles as proxies for cell-like structures, with future work introducing cells to investigate the influence of cell binding and internalisation on nanoparticle flow profiles. The images generated allow us to monitor nanoparticle penetration over time, with all analysis performed automatically by a custom-built framework for penetration detection, here made open source. This demonstrates a device with the potential for high-throughput testing of nanoparticle flow properties under a range of tumour-like conditions.
Monitoring the dynamics of nanocarriers as they move through tissue-like environments could help optimise their design to overcome transport barriers, leading to better distribution within tumours and reduced off-target effects.
In future work, we will refine the workflow to produce a high-throughput screening system whereby this custom chip can be used as a low-cost, modular analysis platform for testing nanomedicine flow through a biological microenvironment (such as a cancerous tumour), the output of which can then be processed at the press of a button to provide detailed statistical information about the permeation and diffusion characteristics of numerous conditions at once. Such a testing suite could then interface with machine learning routines to allow for the automatic characterisation and design of nanoparticle-based drug therapies.31
As a demonstration, we compared the tissue penetration of two fluorescent agents of different sizes. As expected, the larger particles clearly have a harder time penetrating through both types of gel. Our next work intends to add multiple other types of fluorescent particles into the mixture, including larger and smaller particles with different charges and targeting moieties.
We also seek to investigate whether mathematical analysis can be applied in the reverse manner in this system, that is, whether we can utilize pre-characterized particles of known properties to gather information about the gels themselves. Both approaches would allow for the effective calibration of tissue-scale models of nanoparticle transport, further improving our understanding of the main obstacles to effective penetration into tumour tissue.32–34
Keywords: nanomedicine, microfluidics, transport barriers, tissue penetration, image processing, fast-prototyping
Introduction
Nanocarriers can transport chemotherapies through the body, shielding them from healthy tissue and specifically targeting tumours. One of nanomedicine’s advantages is the ability to fine-tune the surface chemistry and size of the nanocarriers to optimize their transport across biological barriers, such as blood vessel walls, extracellular matrix (ECM) and through cell membranes.1,2 Small nanocarrier modifications, however, can lead to extreme differences in their behaviour3–6 and their ability to traverse biological barriers to reach the intended target.7–9 Given the complexity of nanomedicine design and its impact on complex spatiotemporal behaviour in the body, new methods are needed for high-throughput screening of suitable transport dynamics.10,11 Furthermore, evaluating the performance of a therapy within a specific scenario, and the interactions it would encounter, allows for more effective rational nanoparticle design as well as a route to limiting toxic off-target effects.12,13
Tumour-specific uptake models are of immediate importance to the development of new nanotherapies for treating cancer, with almost two-thirds of nanomedicine research currently focused on oncology.14,15 However, solid tumour microenvironments are extremely complex and exhibit a wide range of diffusive and convective matrices.16 The tumour microenvironment, due to its necrotic core and constantly expanding mass, will produce a positive outwards pressure at the edge of the tumour towards the surrounding tissue,17,18 and extravasating nanoparticles nearer the centre must rely on diffusion to penetrate.19 Factors such as the density and orientation of the nano-porous protein network also impact the transport properties of the drug.20–22
Microfluidic models are beginning to emerge in nanomedicine screening, whereby fluid flow is directed through a region of ECM hydrogels that contain tumour cells. Automatic image processing is then used to allow researchers to analyse results in bulk, allowing for customization of the analysis framework to achieve specific results. In recent years, microfluidic 3D tissue scaffolds have been investigated for their beneficial properties of low reagent usage, ability to scale up to high-throughput systems, and flexibility in spatial control.23 The significant improvement on the previous paradigm of 2D cell culture is to include a three-dimensional environment for cells to proliferate, preventing morphological differences in binding properties. The resulting 3D tissue structures are commonly grown in hydrogel matrices comprising natural ECM proteins, which allow the cells to propagate to form biologically mimetic cellular structures, such as spheroids or organoids.24 These rudimentary biological structures are produced to emulate in vivo conditions, such that the natural mechanisms of nanomedicine uptake can be studied in vitro.
Ho et al25 produced microfluidic channels that were sandwiched either side of a region of gelled fibrin solution that was covered with human umbilical vein endothelial cells, allowing them to study how fluorescent polystyrene nanoparticles extravasated under flow through the hydrogel. They also extracted fluorescence data in regions of interest and utilized MATLAB to examine the diffusional permeability.
Carvalho et al26 used a microfluidic methodology to assemble a colorectal tumour-on-a-chip model by embedding cancer cell lines in Matrigel in a 5 mm chamber and having parallel perfusion channels coated in human colonic microvascular endothelial cells. They produced dendrimer nanoparticles containing a colorectal cancer drug and observed fluorescently labelled dendrimers penetrating the hydrogel. Instead of a scripted analysis, they took the microscopy images directly into graphics software for statistical measurements, without mention of an automated solution. Albanese et al27 produced a melanoma spheroid-on-a-chip device where spheroids harvested from culture in a poly-hydroxyethyl methacrylate gel supplemented with Matrigel™ were embedded inside a soft microfluidic chip, with the PDMS microchannel gently compressing the tumour against the glass substrate. Instead of being encased in a solid hydrogel, the spheroid was contained in a small coating of Matrigel and laminin and was subjected to convective flow, which may have contributed to their measured uptake of gold nanoparticles in a manner that does not match normal tumour diffusion conditions. Although the number of exciting new complex 3D testbeds is rapidly growing, there still is not a standardized, easy-to-produce microfluidic testbed with open-source image processing and diffusion calculation for tissue penetration dynamics.

In this paper, we present a novel framework for investigating the diffusive profile of nanoparticles within a model of human tissue. To achieve a diffusive environment, we utilize a completely open channel in a custom-made polymer chip loaded with a tissue scaffold gel containing mammalian-cell-sized microparticles, with no applied pressure beyond the initial pipetting motion to introduce the nanoparticle solution (Figure 1). We then investigate how automated fluorescent image processing can be used to track nanoparticles as they move through our tissue-mimetic chip.

Figure 1: Schematic diagram of channel in polymethylmethacrylate chip – containing tissue-mimetic gel scaffold with an adjacent vessel inlet well, where nanoparticle solutions are loaded for imaging under fluorescent microscopy; fluorescent microparticles are substituted for the physical barriers of live cells. Imaging allows for time-lapse tracking of diffusive nanocarrier flow profiles from the vessel inlet through the tissue scaffold.

Our framework allows for the evaluation of particle dynamics over a range of sizes and within various realistic biomimetic environmental parameters. We test our framework using spherical nanoparticles and track these nanoparticles through an environment of well-characterized commercially available ECM: Matrigel™ and bovine Collagen I. The resulting datasets are analysed using a custom-written automated routine, where the primary objective is to establish a high-throughput testing suite to quantify nanoparticle spatiotemporal dynamics in tissue environments.
Materials and Methods

Chip Preparation

Microfluidic devices were fabricated by laser-cutting transparent polymethylmethacrylate (PMMA) 2 mm acrylic sheets on a CO2 laser cutter (Trotec). Three types of cuts of equal surface area were made from the same sheet – A) a plate with multiple channels cut through the acrylic, B) a solid backing plate, and C) a plate of "plugs" that would create a removable physical barrier inside the channel for a liquid reservoir. Each channel was formed by making a cut of 2 × 10 mm, and the plugs were formed by making cuts of 2.1 × 2 mm, creating a snug fit when the plug was inserted into the channel. The two halves A and B were then sealed using double-sided polyurethane tape of 2 mm thickness, providing a watertight and optically transparent chip that could be used to contain the biological gels. The plugs from C were inserted at the left-most point of each channel. The chips were then stored at −20 °C for at least 24 h as a rudimentary antibacterial measure while live cells were not in use.
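As a quick sanity check on the geometry, the short sketch below computes the nominal channel volume from the stated cut footprint and sheet thickness. It ignores the adhesive layer and the inserted plug, so it is an approximation for orientation only, not a value reported in the text.

```matlab
% Nominal channel volume from the stated cut dimensions (approximation only:
% the tape layer and the inserted plug are ignored).
channelLength_mm = 10;   % laser-cut channel footprint, 2 x 10 mm
channelWidth_mm  = 2;
sheetDepth_mm    = 2;    % PMMA sheet thickness

volume_uL = channelLength_mm * channelWidth_mm * sheetDepth_mm;  % 1 mm^3 = 1 uL
fprintf('Nominal channel volume: %.0f uL\n', volume_uL);          % ~40 uL, comparable to
                                                                   % the 40 uL of gel loaded
                                                                   % per channel (see Gel Loading)
```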
Tissue Scaffold and Nanoparticle Preparation

Matrigel

Matrigel™ (Thermo Fisher) was aliquoted to volumes of 2 mL and kept at −20 °C until used. The gel was thawed overnight at 4 °C before use. For gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of Matrigel™. This was mixed via pipetting up and down with a stirring motion before every use.

Collagen I

A collagen gel of 2 mg/mL and pH 7.4 was prepared as follows: for each mL of gel, 400 μL of 5 mg/mL bovine Collagen I (Thermo Fisher) was added to an Eppendorf tube, and combined with 40 μL of 0.5 M sodium hydroxide (Sigma Aldrich), 100 μL of phosphate-buffered saline (PBS) (Thermo Fisher) and 460 μL of deionised water. This mixture was kept at 4 °C until used. For gels containing microparticles, 250 μL of red 10 μm FluoSpheres™ (Thermo Fisher) was added to an Eppendorf vial containing 750 μL of bovine Collagen I gel. This was mixed via pipetting up and down with a stirring motion before every use.
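To make the dilution arithmetic explicit, the sketch below scales the per-millilitre recipe to an arbitrary batch volume and confirms the final collagen concentration. The batch size is illustrative and not a value from the protocol.

```matlab
% Scale the per-mL Collagen I recipe to an illustrative batch size and check
% the final concentration (0.4 mL of 5 mg/mL collagen in 1 mL of gel = 2 mg/mL).
batch_mL = 2;                                   % illustrative batch size
components_uL_per_mL = [400 40 100 460];        % collagen, NaOH, PBS, water (sums to 1000 uL)
names = {'5 mg/mL Collagen I', '0.5 M NaOH', 'PBS', 'deionised water'};

for i = 1:numel(names)
    fprintf('%-20s %6.0f uL\n', names{i}, components_uL_per_mL(i) * batch_mL);
end

finalConc_mg_per_mL = (0.400 * 5) / 1.000;      % collagen mass / total gel volume
fprintf('Final collagen concentration: %.1f mg/mL\n', finalConc_mg_per_mL);
```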
Gel Loading

When adding gels to the chip, 40 μL of gel was added per channel, beginning at the plugged end, and gradually moving the pipette tip along the channel. This eventually caused the meniscus of the gel to stick to the channel walls and create a contiguous block. If the meniscus did not reach the end of the channel naturally, the gel was gently pushed with the pipette tip until it contacted and stuck to the end (Figure 2A–C).

Figure 2: Schematic overview of gel loading steps: (A) assembled device prior to gel loading, (B) empty channel with plug, (C) pipetting ECM gel into channel, (D) gelation under incubation, (E) removal of the PMMA plug to form an empty inlet, and (F) loading of the nanoparticle solution (NPs).

Gelation and Culture Conditions

Given that the eventual goal of such a device would be to include live cells, the chips were placed in an incubator at 37 °C and 5% CO2 for 1 h before moving them to a heated microscope environment at the same temperature. This also allowed adequate time for complete gelation, such that the plug could be removed to form an inlet for nanoparticle injection (Figure 2D and E).

Nanoparticle Solution Preparation

A stock solution of fluorescein free acid (Sigma) was made up to 100 µg/mL by serial dilution in Milli-Q water. A solution containing both dye and nanoparticles was created for this experiment, formed of 50% 100 µg/mL green fluorescein stock solution and 50% 20 nm blue FluoSpheres™ in water. 8 μL of this solution was pipetted into each inlet at the start of the experiment before starting the time-lapse acquisition (Figure 2E and F). The excitation wavelength for the red 10 μm polystyrene microparticles was 580 nm and the emission wavelength was 605 nm. The excitation wavelength for the green fluorescein dye was 490 nm and the emission wavelength was 514 nm. The excitation wavelength for the blue 20 nm polystyrene nanoparticles was 365 nm and the emission wavelength was 415 nm.
Fluorescent Microscopy

Fluorescent images were taken of the loaded chip at 10x magnification on a Leica microscope using their proprietary LAS X capture software. A time-lapse of each channel was acquired automatically by marking the channels as regions of interest in the software and taking images over 8 h at 30 min intervals. The red filter used had an excitation filter wavelength of 560/40 nm and an emission filter wavelength of 645/76 nm. The green filter used had an excitation filter wavelength of 480/40 nm and an emission filter wavelength of 527/30 nm. The blue filter used had an excitation filter wavelength of 350/50 nm and an emission filter wavelength of 460/76 nm.
Image Analysis

Fluorescent image data were captured in a rectangular region of interest across the channel width (Figure 3A). The intensity of the fluorescent signal was averaged across the total width of the channel to create a 1D mean intensity profile spanning the length of the channel (data dimensions 1 × 10,000 for each time point and mixture). A moving window average was then applied to the 1D data, with a window size of 1000.

Figure 3: Graphical output of automated image analysis, showing (A) the initial fluorescent image, (B) data preparation including conversion to a 1D intensity profile, (C) averaging, selection of the region of interest, and re-scaling, and (D) the final thresholding procedure. Both the region of interest (white dashed lines) and threshold intensity positions (red dashed lines) are marked on the fluorescent image.

For all data, the section of the chamber containing the channel was removed. Additionally, data were seen to finish before the absolute end of the chamber, with low amplitude values observed in the same spatial position for all experiment time points and mixtures, and these were also removed to avoid over-calculation of the penetration depth. The data, truncated to fall between the end of the channel and the end of the chamber, were then normalised by subtracting the minimum intensity value and dividing by the maximum amplitude, such that all intensity values fell between 0 and 1. A schematic of the initial data preparation stage is shown in Figure 3B and C.

The normalised and truncated intensity profile was then used to calculate the penetration depth. As the focus is on matching a qualitative difference between mixtures, a simple thresholding method was first used to compare penetration depths. We calculated the point within the chamber at which the intensity met a certain percentage of the maximum intensity (which is 1, given the normalisation procedure described above), with multiple threshold percentages considered (5/10/20/30/40/50%), as shown in Figure 3D. All data analyses were performed using custom-written MATLAB scripts (Mathworks, v.2019b). The codebase has been named the Tissue Penetration Analysis Codebase (TPAC) and is hosted at the following address: https://bitbucket.org/hauertlab/tpac/src/master/. LAS X software (Leica) was used to automatically stitch the individual images together using the statistical alignment option and export the data as .tiff format images.
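The TPAC repository linked above contains the published scripts; the minimal sketch below only illustrates the sequence of steps described in this subsection (width-averaging, moving-window smoothing, truncation, normalisation and thresholding) for a single image. The file name, truncation indices and the assumption of a grayscale image are illustrative, not values taken from TPAC.

```matlab
% Minimal sketch of the per-image analysis steps described above.
% Assumes a single-channel (grayscale) image with the channel running left-to-right.
img = double(imread('channel_t08.tif'));            % illustrative file name

profile = mean(img, 1);                              % average across channel width -> 1D profile
profile = movmean(profile, 1000);                    % moving window average, window size 1000

startIdx = 1500; endIdx = 9000;                      % illustrative truncation: drop inlet and chamber end
roi = profile(startIdx:endIdx);
roi = (roi - min(roi)) ./ (max(roi) - min(roi));     % normalise intensities to [0, 1]

threshold = 0.10;                                    % 10% of the normalised maximum intensity
lastAbove = find(roi >= threshold, 1, 'last');       % furthest position still above threshold
penetration_mm = lastAbove / numel(roi) * 7.5;       % truncated chamber length is 7.5 mm
fprintf('Penetration depth at 10%% threshold: %.2f mm\n', penetration_mm);
```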
Diffusion Equations

The above procedure for calculating normalised penetration distance allowed for comparison of the diffusive profile of fluorescein dye and nanoparticles in different environments. To verify that nanoparticle transport is mostly driven by diffusion and to quantify the diffusion coefficient, we also fitted a model of diffusion, with the assumption that diffusion along the length of the chamber will dominate. We model the green dye concentration, C(x, t), with constant and isotropic diffusion coefficient, D, using the standard equation,28

(1) $$\frac{\partial C(x,t)}{\partial t} = D\,\frac{\partial^{2} C(x,t)}{\partial x^{2}}$$

For the concentration of fluorescein dye, we observe that immediately after the channel position (marked as "Start" in Figure 3), the concentration monotonically decreases for all time observations and across all mediums. Hence, we assume an instantaneous point release of concentration $C_{0}$ from the channel at the first timepoint, such that $C(0,\,30\,\mathrm{min}) = C_{0}$, and that there is an impermeable boundary at x = 0. The solution to Equation (1) is then given by28

(2) $$C(x,t) = C_{max}\exp\!\left(-\frac{x^{2}}{4Dt}\right)$$

where $C_{max}$ is the maximum initial concentration rescaled to a value of 1. We fit this equation to the truncated and rescaled fluorescent intensity, normalising only with the maximum amplitude at the initial observation (timepoint 0). For Equation (2), the diffusion coefficient, D, was fitted to the data using the MATLAB Curve Fitting Toolbox. We consider only those fits that have an R² higher than 0.9.

For our automated image analysis, we tested 5/10/20/30/40/50% intensity thresholds and found that the results were qualitatively the same across all thresholds. To match the visual inspection described above, we selected an intensity threshold of 10% to match the observed ~100% penetration of green and ~40% penetration of blue, as shown in Figure 4. All subsequent results are with this 10% threshold, unless otherwise stated. After removing the inlet and chamber end, the total chamber length was 7.5 mm.

Figure 4: Time series data from example channels, showing tissue penetration of fluorescein dye and 20 nm particles in different tissue scaffold conditions (gel type vs presence of microparticles). Images represent penetration at time = 0 h, 4 h and 8 h. Boundaries of the tissue scaffold location and channel end are denoted between dotted white lines.
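The fit of Equation (2) was performed with the Curve Fitting Toolbox; the sketch below shows one way such a fit could look for a single timepoint, using a synthetic intensity profile in place of measured data. The synthetic profile, start point and choice of units (µm and seconds) are illustrative assumptions, not details taken from the paper's scripts.

```matlab
% Sketch of fitting Equation (2) to a rescaled intensity profile at one timepoint.
% x is position along the truncated chamber (um); y is intensity rescaled by the
% maximum amplitude at timepoint 0. Both are synthetic placeholders here.
x = linspace(0, 7500, 500)';                   % 7.5 mm chamber = 7500 um
t = 4 * 3600;                                   % example timepoint: 4 h in seconds
y = exp(-x.^2 ./ (4 * 600 * t));                % synthetic profile with D = 600 um^2/s

% Cmax is 1 after rescaling, so only D remains as a free coefficient.
ft = fittype('exp(-x^2/(4*D*t))', 'independent', 'x', 'problem', 't', 'coefficients', 'D');
[fitObj, gof] = fit(x, y, ft, 'problem', t, 'StartPoint', 100, 'Lower', 0);

if gof.rsquare > 0.9                            % keep only well-fitting profiles, as in the text
    fprintf('D = %.1f um^2/s (R^2 = %.3f)\n', fitObj.D, gof.rsquare);
end
```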
Results

In Figure 4, we show example image data from red, green, and blue channels within Matrigel™ or Collagen I, as well as with and without microparticles. We observe that the 20 nm FluoSpheres do not flow to the end of the channel through either Matrigel™ or Collagen I, but that there is a diffusion over the course of 8 h that reaches approximately one-third to one-half of the way through the gel (with low intensity), after accounting for the channel position. The green fluorescein dye is instead able to diffuse through the entirety of both tissue scaffold gels within 8 h, as expected from the smaller hydrodynamic radius of the dye particulates. Images taken over the duration of the experiment can be compiled into a video and analysed to show the spatiotemporal dynamics of the tracked nanoparticles within the tissue, as seen in Figure 5.

Figure 5: Timescale data for all conditions, showing green dye reaching the end of the channel at ~4–5 h, and blue 20 nm particles not exceeding ~50% over the entire 8 h experiment. Total timepoints = 16; the total chamber length after removing data is 7.5 mm.

The blue 20 nm FluoSpheres were observed to have a larger scatter in the Collagen I with microparticles than in any other condition, which we attribute to differences in the porous matrix due to dilution with the microparticles. The 10 μm red microparticles have carboxylate groups on the surface, which may lead to electrostatic charge interaction with the collagen fibrils. We hypothesize that Matrigel does not follow the same trend due to it being a more complex protein matrix, which may prevent it from being so widely affected by the presence of microparticles. Future work would expand upon this by analysing the gel porosity with electron microscopy.

In Figure 6, we show the penetration distance for various tissue scaffold conditions for the blue nanoparticles and green dye, averaged across all 36 datasets (triplicate data for all four conditions and three individual chips), where error bars denote the standard deviation, for the final time point (t = 8 h). A two-sample t-test analysis was performed on the data and confirmed several key differences.

Figure 6: Results from automated analysis of fluorescent imaging data with a 10% threshold at t = 8 h, showing that green dye can be tracked to its ~100% penetration, and blue 20 nm particles can be tracked to ~40% penetration with the same threshold. The total chamber length after removing data is 7.5 mm.
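For reference, a two-sample t-test of this kind can be run in MATLAB as sketched below; the penetration-depth vectors are illustrative placeholders, not the measured values behind Figure 6.

```matlab
% Illustrative two-sample t-test comparing penetration depths between two
% conditions (requires the Statistics and Machine Learning Toolbox).
% The vectors are placeholder values, not the measured data.
depth_conditionA = [3.4 3.6 3.5 3.5 3.6 3.4 3.5 3.6 3.5];   % mm
depth_conditionB = [3.0 2.9 3.0 2.9 3.0 3.0 2.9 3.0 2.9];   % mm

[h, p] = ttest2(depth_conditionA, depth_conditionB);         % two-sample t-test
fprintf('Reject null hypothesis: %d, p = %.4f\n', h, p);
```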
There was no significant difference for the 20 nm nanoparticles with or without microparticles.

Figure 7. Two significant results found in the overall analysis, showing that the green dye is affected by the presence or absence of microparticles, and the blue nanoparticles are affected by gel type. The total chamber length after removing data is 7.5 mm. (***) denotes p<0.005.

Second, there was a significant difference (***, p<0.005) between the gel types for the blue nanoparticle diffusion. Collagen I proved to be more permeable to the nanoparticle diffusion than Matrigel™, with blue nanoparticles penetrating to a total distance (after 8 h) of 3.51 mm compared to 2.96 mm for Collagen I and Matrigel™, respectively. SEM images of Matrigel™ and a similar bovine Collagen I gel from the literature bear out that Matrigel™ (~0.5–1.5 μm) has a smaller porosity than 2 mg/mL Collagen I (~2–5 μm), and this aligns with our analysis of Collagen I as being more permeable.29,30 Third, we observed a significant difference (***, p<0.005) between conditions with and without microparticles for the green fluorescein dye diffusion. The fluorescein dye permeated more in gels without microparticles (7.22 mm compared to 7.1 mm of the total truncated chamber length), although the data had higher variation (standard deviation of 0.65 mm compared to 0.01 mm). From this, we consider that the presence of microparticles provides obstacles to permeation, which may lead to longer trajectories due to collisions between the dye and microparticles. This would lead to the dye having more variation in how it flows, leading to an overall decrease in diffusion speed. Finally, there was no significant difference between the gel types for the green fluorescein dye diffusion. Neither of the gels appeared to affect the permeation of the dye, which leads to the conclusion that the fluorescein dye was small enough to penetrate the smaller nanostructure of Matrigel™. Discussion: Having observed significant differences within the penetration distances, we consider whether the diffusion coefficient for the particles can be quantified using the model of diffusion described in Section 5.5. Here, we focus on the diffusion coefficient of the green fluorescein dye.
We also include the diffusion coefficient as calculated using the Stokes–Einstein equation:

(3) $D = \dfrac{k_{B}T}{6\pi\eta r}$

where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $r$ is the hydrodynamic radius, and $\eta$ is the dynamic viscosity of the medium, which we assume to be $0.768\times10^{-3}$ kg/ms for Collagen I at 37°C.20 The dynamic viscosity of Matrigel™ was unknown, so no reference value is included for it. We observe that the diffusion coefficients for the green fluorescein dye (575±72.4 μm²/sec averaged across all experiments) are consistent with the expected reference value for the diffusion coefficient (591.6 μm²/sec, as calculated using the Stokes–Einstein equation and the viscosity of Collagen I given in reference 20). We chose to analyse the fluorescein dye as it fully penetrated the channel within the experimental timeframe and thus gave the best timescale dynamics. This matching of values to the literature shows that our device’s environment is useful for capturing the dynamics of diffusion throughout biomimetic tissue. Furthermore, we observe that the diffusion coefficient is higher within Collagen I than within Matrigel™, as expected from the literature images of the porosity. Interestingly, we find that the averaged diffusion coefficient is higher when microparticles are present, counter to what is observed from the analysis of the penetration distance. The most logical conclusion is that the gel’s micron-scale mesh structure is disrupted and enlarged by the presence of the 10 μm particles, providing an easier pathway for the dye to migrate along the channel. To prove this, electron microscopy of the gel porosity with and without microparticles will be required in the future.
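As a quick numerical check of the Stokes–Einstein estimate quoted above, the following MATLAB snippet evaluates Equation (3) with the Collagen I viscosity given in the text. The hydrodynamic radius of roughly 0.5 nm is our assumption for illustration, not a figure taken from the paper; with it, the result lands close to the quoted 591.6 μm²/sec.

```matlab
% Stokes-Einstein estimate (Equation (3)) with the Collagen I viscosity
% quoted in the text. The hydrodynamic radius is an assumed illustrative
% value (~0.5 nm), not taken from the paper.

kB  = 1.380649e-23;          % Boltzmann constant [J/K]
T   = 310.15;                % 37 degC in kelvin
eta = 0.768e-3;              % dynamic viscosity of Collagen I [kg/(m s)]
r   = 0.5e-9;                % assumed hydrodynamic radius of the dye [m]

D_SE = kB*T / (6*pi*eta*r);  % diffusion coefficient [m^2/s]
fprintf('D = %.1f um^2/s\n', D_SE * 1e12);   % ~592 um^2/s with these inputs
```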
While we observe that the diffusion coefficient is slightly higher under the presence of microparticles, the change in the diffusion coefficient is small and within the standard deviation. The mean squared displacement was calculated for all experiments across the three chips analysed in this experiment (Supplemental Table 1). All values for the exponent are below 1, indicating that the diffusion regime is sub-diffusive, as to be expected in porous media. The graphs for the fluorescein MSD curves are shown in Supplemental Figures 1, 3 and 5; while the 20 nm nanoparticle MSD curves are shown in Supplemental Figures 2, 4 and 6. Conclusion: We have developed a low-cost device that is capable of tracking nanoparticles through tissue-mimetic environments. Here, we use microparticles as proxies for cell-like structures, with future work introducing cells to investigate the influence of cell-binding and internalisation on nanoparticle flow profiles. The images generated allow us to monitor nanoparticle penetration over time, where all analysis is performed automatically with a custom-built framework for penetration detection, here made open source. This demonstrates a device with the potential for high throughput testing of nanoparticle flow properties under a range of tumour-like conditions. Monitoring the dynamics of nanocarriers as they move through tissue-like environments could help optimise their design to overcome transport barriers, leading to better distribution within tumours and reduced off-target effects. In future work, we will refine the workflow to produce a high-throughput screening system whereby this custom chip can be used as a low-cost, modular analysis platform for testing nanomedicine flow through a biological microenvironment (such as a cancerous tumour) – the output of which can then be processed with the press of a button to provide detailed statistical information about the permeation and diffusion characteristics of numerous different conditions at once. Such a testing suite could then interface with machine learning routines to allow for the automatic characterisation and design of nanoparticle-based drug therapies.31 As a demonstration, we compare tissue penetration of two fluorescent nanoparticles of different sizes. As expected, the larger particles clearly have a harder time penetrating through both types of gel. Our next work intends to add multiple types of other fluorescent particles into the mixture, including larger and smaller particles with different charges and targeting moieties. We also seek to investigate whether mathematical analysis can be applied in the reverse manner in this system – whether we can utilize pre-characterized particles of known properties to gather information about the gels themselves. Both approaches would allow for the effective calibration of tissue-scale models of nanoparticle transport, further improving our understanding of the main obstacles for effective penetration into the tumour tissue.32–34
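Returning to the MSD analysis mentioned in the Discussion above, the sketch below shows one way a sub-diffusive exponent can be estimated from an MSD curve: MSD(t) ~ t^alpha appears as a straight line of slope alpha on log-log axes, with alpha < 1 indicating sub-diffusion. The MSD values and names here are synthetic assumptions, not the study's measurements.

```matlab
% Sketch: estimate the anomalous-diffusion exponent alpha from an MSD curve
% via a linear fit in log-log space. Synthetic, illustrative data only.

t   = (1:16)' * 1800;                              % 16 timepoints, 30 min apart [s]
msd = 2.0 * t.^0.8 .* (1 + 0.05*randn(size(t)));   % synthetic sub-diffusive MSD

p     = polyfit(log(t), log(msd), 1);              % slope of log(MSD) vs log(t)
alpha = p(1);                                      % estimated exponent
fprintf('alpha = %.2f (sub-diffusive if < 1)\n', alpha);
```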
Background: In cancer nanomedicine, drugs are transported by nanocarriers through a biological system to produce a therapeutic effect. The efficacy of the treatment is affected by the ability of the nanocarriers to overcome biological transport barriers to reach their target. In this work, we focus on the process of nanocarrier penetration through tumour tissue after extravasation. Visualising the dynamics of nanocarriers in tissue is difficult in vivo, and in vitro assays often do not capture the spatial and physical constraints relevant to model tissue penetration. Methods: We propose a new simple, low-cost method to observe the transport dynamics of nanoparticles through a tissue-mimetic microfluidic chip. After loading a chip with triplicate conditions of gel type and loading with microparticles, microscopic analysis allows for tracking of fluorescent nanoparticles as they move through hydrogels (Matrigel and Collagen I) with and without cell-sized microparticles. A bespoke image-processing codebase written in MATLAB allows for statistical analysis of this tracking, and time-dependent dynamics can be determined. Results: To demonstrate the method, we show size-dependence of transport mechanics can be observed, with diffusion of fluorescein dye throughout the channel in 8 h, while 20 nm carboxylate FluoSphere diffusion was hindered through both Collagen I and Matrigel™. Statistical measurements of the results are generated through the software package and show the significance of both size and presence of microparticles on penetration depth. Conclusions: This provides an easy-to-understand output for the end user to measure nanoparticle tissue penetration, enabling the first steps towards future automated experimentation of transport dynamics for rational nanocarrier design.
Introduction: Nanocarriers can transport chemotherapies through the body, shielding them from healthy tissue and specifically targeting tumours. One of nanomedicine’s advantages is the ability to fine-tune the surface chemistry and size of the nanocarriers to optimize their transport across biological barriers, such as blood vessel walls, extracellular matrix (ECM) and through cell membranes.1,2 Small nanocarrier modifications, however, can lead to extreme differences in their behaviour3–6 and their ability to traverse biological barriers to reach the intended target.7–9 Given the complexity of nanomedicine design and its impact on complex spatiotemporal behaviour in the body, new methods are needed for high-throughput screening of suitable transport dynamics.10,11 Furthermore, evaluating the performance of a therapy within a specific scenario, and the interactions it would encounter, allows for more effective rational nanoparticle design as well as a route to limiting toxic off-target effects.12,13 Tumour-specific uptake models are of immediate importance to the development of new nanotherapies for treating cancer, with almost two-thirds of nanomedicine research currently focused on oncology.14,15 However, solid tumour microenvironments are extremely complex and exhibit a wide range of diffusive and convective matrices.16 The tumour microenvironment, due to its necrotic core and constantly expanding mass, will produce a positive outwards pressure at the edge of the tumour towards the surrounding tissue,17,18 and extravasating nanoparticles nearer the centre must rely on diffusion to penetrate.19 Factors such as the density and orientation of the nano-porous protein network also impact the transport properties of the drug.20–22 Microfluidic models are beginning to emerge in nanomedicine screening, whereby fluid flow is directed through a region of ECM hydrogels that contain tumour cells. Automatic image processing is then used to allow researchers to analyse results in bulk, allowing for customization of the analysis framework to achieve specific results. In recent years, microfluidic 3D tissue scaffolds have been investigated for their beneficial properties of low reagent usage, ability to scale up to high-throughput systems, and flexibility in spatial control.23 The significant improvement on the previous paradigm of 2D cell culture is to include a three-dimensional environment for cells to proliferate, preventing morphological differences in binding properties. The resulting 3D tissue structures are commonly grown in hydrogel matrices comprising natural ECM proteins, which allow the cells to propagate to form biologically mimetic cellular structures, such as spheroids or organoids.24 These rudimentary biological structures are produced to emulate in vivo conditions, such that the natural mechanisms of nanomedicine uptake can be studied in vitro. Ho et al25 produced microfluidic channels that were sandwiched either side of a region of gelled fibrin solution that was covered with human umbilical vein endothelial cells, allowing them to study how fluorescent polystyrene nanoparticles extravasated under flow through the hydrogel. They also extracted fluorescence data in regions of interest and utilized MATLAB to examine the diffusional permeability. 
Carvalho et al26 used a microfluidic methodology to assemble a colorectal tumour-on-a-chip model by embedding cancer cell lines in Matrigel in a 5 mm chamber and having parallel perfusion channels coated in human colonic microvascular endothelial cells. They produced dendrimer nanoparticles containing a colorectal cancer drug and observed fluorescently labelled dendrimers penetrating the hydrogel. Instead of a scripted analysis, they took microscopy image processing directly into a graphic software for statistical measurements, without mention of an automated solution. Albanese et al27 produced a melanoma spheroid-on-a-chip device where spheroids harvested from culture in a poly-hydroxyethyl methacrylate gel supplemented with Matrigel™ were embedded inside a soft microfluidic chip, with the PDMS microchannel gently compressing the tumour against the glass substrate. Instead of being encased in a solid hydrogel, the spheroid was contained in a small coating of Matrigel and laminin and was subjected to convective flow, which may have contributed to their measured uptake of gold nanoparticles in a manner that does not match normal tumour diffusion conditions. Although the number of exciting new complex 3D testbeds is rapidly growing, there still is not a standardized, easy-to-produce microfluidic testbed, with open-source image processing and diffusion calculation for tissue penetration dynamics. In this paper, we present a novel framework for investigating the diffusive profile of nanoparticles within a model of human tissue. To achieve a diffusive environment, we utilize a completely open channel in a custom-made polymer chip loaded with a tissue scaffold gel containing mammalian-cell-sized microparticles with no applied pressure beyond the initial pipetting motion to introduce the nanoparticle solution (Figure 1). We then investigate how automated fluorescent image processing can be used to track nanoparticles as they move through our tissue-mimetic chip.

Figure 1. Schematic diagram of channel in polymethylmethacrylate chip – containing tissue-mimetic gel scaffold with an adjacent vessel inlet well, where nanoparticle solutions are loaded for imaging under fluorescent microscopy; fluorescent microparticles are substituted for the physical barriers of live cells. Imaging allows for time-lapse tracking of diffusive nanocarrier flow profiles from the vessel inlet through the tissue scaffold.

Our framework allows for the evaluation of particle dynamics over a range of sizes and within various realistic biomimetic environmental parameters. We test our framework using spherical nanoparticles and track these nanoparticles through an environment of well-characterized commercially available ECM: Matrigel™ and bovine Collagen I. The resulting datasets are analysed using a custom-written automated routine, where the primary objective is to establish a high-throughput testing suite to quantify nanoparticle spatiotemporal dynamics in tissue environments. Conclusion: We have developed a low-cost device that is capable of tracking nanoparticles through tissue-mimetic environments.
Here, we use microparticles as proxies for cell-like structures, with future work introducing cells to investigate the influence of cell-binding and internalisation on nanoparticle flow profiles. The images generated allow us to monitor nanoparticle penetration over time, where all analysis is performed automatically with a custom-built framework for penetration detection, here made open source. This demonstrates a device with the potential for high throughput testing of nanoparticle flow properties under a range of tumour-like conditions. Monitoring the dynamics of nanocarriers as they move through tissue-like environments could help optimise their design to overcome transport barriers, leading to better distribution within tumours and reduced off-target effects. In future work, we will refine the workflow to produce a high-throughput screening system whereby this custom chip can be used as a low-cost, modular analysis platform for testing nanomedicine flow through a biological microenvironment (such as a cancerous tumour) – the output of which can then be processed with the press of a button to provide detailed statistical information about the permeation and diffusion characteristics of numerous different conditions at once. Such a testing suite could then interface with machine learning routines to allow for the automatic characterisation and design of nanoparticle-based drug therapies.31 As a demonstration, we compare tissue penetration of two fluorescent nanoparticles of different sizes. As expected, the larger particles clearly have a harder time penetrating through both types of gel. Our next work intends to add multiple types of other fluorescent particles into the mixture, including larger and smaller particles with different charges and targeting moieties. We also seek to investigate whether mathematical analysis can be applied in the reverse manner in this system – whether we can utilize pre-characterized particles of known properties to gather information about the gels themselves. Both approaches would allow for the effective calibration of tissue-scale models of nanoparticle transport, further improving our understanding of the main obstacles for effective penetration into the tumour tissue.32–34
Background: In cancer nanomedicine, drugs are transported by nanocarriers through a biological system to produce a therapeutic effect. The efficacy of the treatment is affected by the ability of the nanocarriers to overcome biological transport barriers to reach their target. In this work, we focus on the process of nanocarrier penetration through tumour tissue after extravasation. Visualising the dynamics of nanocarriers in tissue is difficult in vivo, and in vitro assays often do not capture the spatial and physical constraints relevant to model tissue penetration. Methods: We propose a new simple, low-cost method to observe the transport dynamics of nanoparticles through a tissue-mimetic microfluidic chip. After loading a chip with triplicate conditions of gel type and loading with microparticles, microscopic analysis allows for tracking of fluorescent nanoparticles as they move through hydrogels (Matrigel and Collagen I) with and without cell-sized microparticles. A bespoke image-processing codebase written in MATLAB allows for statistical analysis of this tracking, and time-dependent dynamics can be determined. Results: To demonstrate the method, we show size-dependence of transport mechanics can be observed, with diffusion of fluorescein dye throughout the channel in 8 h, while 20 nm carboxylate FluoSphere diffusion was hindered through both Collagen I and Matrigel™. Statistical measurements of the results are generated through the software package and show the significance of both size and presence of microparticles on penetration depth. Conclusions: This provides an easy-to-understand output for the end user to measure nanoparticle tissue penetration, enabling the first steps towards future automated experimentation of transport dynamics for rational nanocarrier design.
13,041
305
[ 6070, 202, 1379, 80, 139, 224, 77, 159, 121, 510, 812, 1053, 684, 388 ]
15
[ "usepackage", "gel", "linotext", "channel", "nm", "μl", "end", "wavelength", "figure", "t1" ]
[ "tumours nanomedicine advantages", "nanomedicine flow biological", "nanocarriers transport", "dynamics nanocarriers tissue", "targeting tumours nanomedicine" ]
null
null
[CONTENT] nanomedicine | microfluidics | transport barriers | tissue penetration | image processing | fast-prototyping [SUMMARY]
null
null
[CONTENT] nanomedicine | microfluidics | transport barriers | tissue penetration | image processing | fast-prototyping [SUMMARY]
[CONTENT] nanomedicine | microfluidics | transport barriers | tissue penetration | image processing | fast-prototyping [SUMMARY]
[CONTENT] nanomedicine | microfluidics | transport barriers | tissue penetration | image processing | fast-prototyping [SUMMARY]
[CONTENT] Collagen | Diffusion | Gels | Humans | Microfluidics | Nanomedicine | Nanoparticles | Tissue Scaffolds [SUMMARY]
null
null
[CONTENT] Collagen | Diffusion | Gels | Humans | Microfluidics | Nanomedicine | Nanoparticles | Tissue Scaffolds [SUMMARY]
[CONTENT] Collagen | Diffusion | Gels | Humans | Microfluidics | Nanomedicine | Nanoparticles | Tissue Scaffolds [SUMMARY]
[CONTENT] Collagen | Diffusion | Gels | Humans | Microfluidics | Nanomedicine | Nanoparticles | Tissue Scaffolds [SUMMARY]
[CONTENT] tumours nanomedicine advantages | nanomedicine flow biological | nanocarriers transport | dynamics nanocarriers tissue | targeting tumours nanomedicine [SUMMARY]
null
null
[CONTENT] tumours nanomedicine advantages | nanomedicine flow biological | nanocarriers transport | dynamics nanocarriers tissue | targeting tumours nanomedicine [SUMMARY]
[CONTENT] tumours nanomedicine advantages | nanomedicine flow biological | nanocarriers transport | dynamics nanocarriers tissue | targeting tumours nanomedicine [SUMMARY]
[CONTENT] tumours nanomedicine advantages | nanomedicine flow biological | nanocarriers transport | dynamics nanocarriers tissue | targeting tumours nanomedicine [SUMMARY]
[CONTENT] usepackage | gel | linotext | channel | nm | μl | end | wavelength | figure | t1 [SUMMARY]
null
null
[CONTENT] usepackage | gel | linotext | channel | nm | μl | end | wavelength | figure | t1 [SUMMARY]
[CONTENT] usepackage | gel | linotext | channel | nm | μl | end | wavelength | figure | t1 [SUMMARY]
[CONTENT] usepackage | gel | linotext | channel | nm | μl | end | wavelength | figure | t1 [SUMMARY]
[CONTENT] tissue | tumour | vessel | microfluidic | nanoparticles | nanomedicine | cells | allows | produced | processing [SUMMARY]
null
null
[CONTENT] like | tissue | testing | work | allow | tumour | particles | nanoparticle | flow | different [SUMMARY]
[CONTENT] usepackage | linotext | nm | gel | μl | wavelength | channel | thermo fisher | fisher | thermo [SUMMARY]
[CONTENT] usepackage | linotext | nm | gel | μl | wavelength | channel | thermo fisher | fisher | thermo [SUMMARY]
[CONTENT] ||| ||| ||| assays [SUMMARY]
null
null
[CONTENT] first [SUMMARY]
[CONTENT] ||| ||| ||| assays ||| ||| Matrigel | Collagen ||| MATLAB ||| 8 | 20 | FluoSphere | Collagen | Matrigel ||| ||| first [SUMMARY]
[CONTENT] ||| ||| ||| assays ||| ||| Matrigel | Collagen ||| MATLAB ||| 8 | 20 | FluoSphere | Collagen | Matrigel ||| ||| first [SUMMARY]
A Prospective Assessment of Knee Arthroscopy Skills Between Medical Students and Residents-Simulator Exercises for Partial Meniscectomy and Analysis of Learning Curves.
34565232
The COVID-19 pandemic has created the largest disruption of education in history. In response to this, we aimed to evaluate the knee arthroscopy learning curve among medical students and orthopaedic residents.
BACKGROUND
An arthroscopy simulator was used to compare the learning curves of two groups. Medical students without any prior knowledge of arthroscopy (n=24) were compared to a group of residents (n=16). The analyzed parameters were "time to complete a task", movement of the tools, and values scoring damage to the surrounding tissues.
METHODS
After several repetitions, both groups improved their skills in terms of time and movement. Residents were on average faster, had less camera movement, and touched the cartilage tissue less often than did students. Students showed a steeper improvement curve than residents for certain parameters, as they started from a different experience level.
RESULTS
The participants were able to reduce the time to complete a task. There was also a decrease in possible damage to the virtual surrounding tissues. In general, the residents had better mean values, but the students had the steeper learning curve. Less experienced surgeons in particular can train the hand-eye coordination skills required for arthroscopic surgery. Training simulators are an important training tool that supplements cadaveric training and participation in arthroscopic operations and should be included in training.
CONCLUSION
[ "Arthroscopy", "COVID-19", "Clinical Competence", "Computer Simulation", "Humans", "Internship and Residency", "Knee Joint", "Learning Curve", "Meniscectomy", "Pandemics", "Prospective Studies", "Simulation Training", "Students, Medical" ]
9227956
Introduction
Arthroscopy requires different skills than open surgery due to limited visibility, reduced motion freedom, and non-intuitive hand–eye coordination.1,2 Orthopedic resident surgeons are expected to acquire their early arthroscopy skills under the supervision of an orthopedic consultant.1,3 Arthroscopic procedures require additional training, as complex trajectories may be necessary to perform arthroscopic inspection of a narrow joint space with complex anatomical structures.2 In several fields, such as aerospace, the military, and critical care, training simulators are well-established training methods, although health care lags behind in terms of stakeholder engagement and implementation of simulation outcomes.4 In surgical training, Halstead's method of "see one, do one, teach one" has traditionally been preferred.5 However, for a number of reasons, including increased public awareness of medical errors, patient safety, heightened patient expectations, strict regulation of residents' duty-hours, surgeons' liability, and an increasing emphasis on the efficient use of operating room time, different studies state that the Halstead training method is no longer appropriate.6–8 Computer-based simulation can provide objective quantitative data for the measurement of performance and skills assessment. For this reason, arthroscopic simulators were developed, which also document the paradigm shift in orthopedic education.9 In a study by McCarthy et al, novices showed significant improvements in task completion time, shorter arthroscope path lengths, shorter probe path lengths, and fewer arthroscope tip contacts when using an arthroscopic training simulator with haptic feedback,10 facilitating the process of acquiring and improving manual skills during arthroscopic surgery.11,12 According to the current literature, there is increased use of skills training modalities, and increasing numbers of simulators have been developed as a result.13–17

Purpose

The aim of this study was to compare the learning curves of medical students and orthopedic residents during simulated knee arthroscopy procedures using an arthroscopy training simulator.
Methods
A prospective comparative study was conducted. All participants gave their oral and written informed consent for participation in the study. The local ethics committee waived the need for ethics approval. A total of 24 medical students, without any prior knowledge of arthroscopy (never performed arthroscopy), and 16 orthopedic residents were recruited. The latter had already taken arthroscopy courses (a cadaver training course or training with an arthroscopic device on a phantom) and performed a small number (fewer than five) of arthroscopy surgeries. All participants were familiarized with the equipment and always received the same standardized instructions concerning the arthroscopy simulator. They were taught how to manage the 30° arthroscope and the tools. The knee arthroscopic training simulator GMV/insightArthroVR (GMV, Madrid, Spain) was used (Figure 1). It consisted of an anatomic knee model and two instrument phantoms mounted on a cart, a monitor, and training and simulation software (InsightArthroVR Version 2.9). Knee movements from 0° (total extension) to 90° of flexion as well as varus and valgus movements could be simulated. Both anterolateral and anteromedial portals were represented, enabling various arthroscopy knee exercises. Instruments were simulated with two force-feedback robotic arms, with instrument phantoms providing haptic feedback. A 30° angled camera lens enabled an arthroscopic display just like in real arthroscopic surgery. The probe was physically represented by the Phantom Omni Stylus (GMV, Madrid, Spain).

Figure 1. Virtual reality arthroscopy trainer, GMV/insightArthroVR, by courtesy of GMV, Madrid, Spain.

The simulator offers a broad spectrum of tasks, from diagnostic arthroscopy to complex surgical procedures. Our study included training modules for diagnostic arthroscopy using the camera only, diagnostic arthroscopy using camera and probe in finding a sequence of spheres, and partial meniscectomy at several locations in the menisci. At the outset of each training exercise, the difficulty level must be determined (choosing from initial, intermediate, and expert), resulting in different sizes of the spheres to be palpated. In the current study, the difficulty of all exercises was set at intermediate. The data were collected over a period of two years in various training sessions. A total of 33 different exercises were carried out by the members of the two groups under investigation. The following parameters were assessed:

Time [s] to complete the task. The interval started when one of the instruments was inserted through a portal, and ended when the exercise was finished.

The distance covered by the tip of the camera (CDC), the instrument (CDP), and the grasper (CDG) inside the knee phantom. CDP and CDG were held with the right hand only, while CDC was mostly held with the left hand except during diagnostic exercises.

Cartilage damage [mm], the maximum depth applied to specific damageable tissues according to a contact computation algorithm. The colliding tissues, in which the cartilage damage was considered, were the tibial plateau cartilage, femoral condyles, and articular patella cartilage for the knee. The algorithm computes the distance between the collision point and the maximum penetration. The penetration distance is translated to an opposite-direction force applied by the associated Phantom. This force is what is considered cartilage damage.
The parameter was recorded for the camera (PDC) and the probe instrument (PDP) as well as for an open grasper (OG).

Amount of resected meniscus (%): On completion of a meniscectomy, the percentage of resected meniscus was determined. The percentage of the remaining meniscus was calculated as the volume of the meniscus left as a proportion of the total volume (Figure 2).

Amount of resected tear (%): Shows the percentage of meniscus tear resected. 100% means resection of the whole meniscus volume depicted by experts (view picture). This includes resection of both the tear itself and the part of healthy meniscus that needs to be surgically removed in order to have a smooth meniscus edge (Figure 2).

Figure 2. Example of a meniscectomy tear repair exercise. Trainees had to remove the indicated red area (% amount of resected tear) in order to remove the tear and not exceed the red border using a virtual grasper (% amount of resected meniscus).
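As an illustration of how two of the parameters defined above could be computed from logged simulator data, the MATLAB sketch below accumulates a tip path length from sampled positions and converts meniscus volumes into the reported percentages. The position samples and volumes are placeholders, not values produced by the insightArthroVR software.

```matlab
% Illustration only: tip path length from logged positions, and
% remaining/resected meniscus percentages from volumes. Placeholder data.

tipPos = cumsum(randn(500, 3)) * 1e-3;             % sampled tip positions [m]
pathLength = sum(vecnorm(diff(tipPos), 2, 2));     % distance covered by the tip [m]

meniscusVolumeTotal = 3000;                        % total meniscus volume [mm^3]
meniscusVolumeLeft  = 2400;                        % volume left after resection [mm^3]
remainingMeniscusPct = 100 * meniscusVolumeLeft / meniscusVolumeTotal;
resectedMeniscusPct  = 100 - remainingMeniscusPct;
fprintf('Path length %.2f m, remaining meniscus %.0f%%\n', ...
        pathLength, remainingMeniscusPct);
```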
Statistics

A learning curve is defined as a mathematical description of someone's progress in gaining experience or a new skill by repeating it for a period of time.18 In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and on whether the surgeons are novice, experienced, senior, younger, or older.19 In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants.

Learning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG, and by using a logarithmic fitting line to assess the slope of the learning curves. Logarithmic regression was calculated for the learning curves to determine their steepness. Slope, R2, and 95% confidence interval were reported, and graphs are shown with a log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies.20 The higher the slope of the regression curves, the steeper the learning. Mean and standard deviation were reported for the initial and final exercise performed. Differences between start and end point were analyzed with a t-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining meniscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed t-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure, considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, CA, USA), a P-value below .05 was considered statistically significant.
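The study performed these analyses in GraphPad Prism; purely as an illustration of the logarithmic learning-curve fit described above, the MATLAB sketch below regresses log(time) on log(repetition number) for synthetic data. The slope plays the role of the reported learning-curve steepness, and R2 is taken from the regression statistics; all values here are invented for the example.

```matlab
% Illustration of the log-log learning-curve fit (the study used GraphPad).
% Synthetic repetition data; requires the Statistics Toolbox for regress.

rep  = (1:38)';                                     % repetition number
time = 7 * rep.^(-0.37) .* (1 + 0.1*randn(38, 1));  % synthetic task times [min]

X = [ones(38, 1), log(rep)];
[b, bint, ~, ~, stats] = regress(log(time), X);     % linear fit in log-log space

slope = b(2);          % learning-curve steepness (log-log slope)
ci95  = bint(2, :);    % 95% confidence interval of the slope
r2    = stats(1);      % R^2 of the fit
fprintf('slope = %.2f (95%% CI %.2f to %.2f), R^2 = %.2f\n', ...
        slope, ci95(1), ci95(2), r2);
```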
Results
Each exercise in the pool of 33 training exercises was repeated on average 39 (median 37, range 1–170) times. Residents improved their arthroscopic skills by reducing the time to complete the task from 7 min to 2 min on average (P<0.0001), while students reduced the time from 6 min to 3 min (Figure 3) with 38 repetitions. However, the students group showed no statistically significant improvement in time (P=0.171). Residents had a steeper learning curve (learned faster) in terms of "time to complete the task" (slope=−0.37, R2=0.72, 95% CI −0.43 to −0.30) by comparison to students (slope=−0.23, R2=0.45, 95% CI −0.31 to −0.15).

Figure 3. Learning curves for the mean time, including logarithmic regression curves of residents and students performing 38 repetitions of several exercises.

Residents improved their arthroscopic skills by reducing camera movement (CDC) from 2.4 m to 0.3 m on average over 38 simulations (P<0.001). Residents reduced movement with the probe from 1.1 m to 0.2 m over 17 simulations (P=0.001), while no improvement was observed in the movement with the grasper (CDG) over ten simulations (P=0.776). Students reduced their CDC movements from 2.1 m to 0.5 m on average over an observation period of 38 simulations (P=0.011) and the CDP from 1.6 m to 0.4 m over 17 simulations (P=0.008). No improvement was observed for CDG over ten simulations (P=0.619). Residents had a steeper learning curve in terms of CDC (slope=−0.50, R2=0.76, 95% CI −0.58 to −0.42) in comparison to students (slope=−0.38, R2=0.47, 95% CI −0.49 to −0.26) (Figure 4A). When considering the CDP, the slope of the residents' learning curve (slope=−0.38, R2=0.59, 95% CI −0.54 to −0.22) was similar to that of students (slope=−0.39, R2=0.63, 95% CI −0.54 to −0.24) (Figure 4B). In CDG, a steeper learning curve was found for students (slope=−0.22, R2=0.41, 95% CI −0.42 to −0.01) than for residents (slope=−0.39, R2=0.66, 95% CI −0.60 to −0.17) (Figure 4C).

Figure 4. Learning curves for the distance covered with the camera (CDC) (A), with the probe (CDP) (B), and with the grasper (CDG) (C), including the logarithmic regression curves of residents and students in several exercises.

No statistically significant difference was observed between residents and students at the start or the end of the learning curve for the parameters time to complete the task, CDC, and CDG (P>0.05) (Table 1).
Residents showed a significantly lower CDP at the end of the learning curve (P=0.032) than did students, while no statistically significant difference was found at the start of the learning curve.

Table 1. Mean (SD) of time to complete the task, CDC, CDP, and CDG, assessed for the first and last exercise and compared between groups.

                                   Start (SD)   End (SD)    P-value
Time to complete the task (min)
  Residents                        7 (4)        2 (1)       <0.001
  Students                         6 (5)        3 (3)       0.171
  P-value (between groups)         0.671        0.172
CDC (m)
  Residents                        2.4 (1.4)    0.3 (0.4)   <0.001
  Students                         2.1 (1.3)    0.5 (0.4)   0.011
  P-value (between groups)         0.484        0.515
CDP (m)
  Residents                        1.1 (0.7)    0.2 (0.1)   0.001
  Students                         1.6 (0.9)    0.4 (0.1)   0.008
  P-value (between groups)         0.109        0.032
CDG (m)
  Residents                        1.0 (1.0)    0.9 (0.7)   0.776
  Students                         1.5 (2.4)    0.8 (0.1)   0.619
  P-value (between groups)         0.498        0.795

When considering the cartilage damage values for the camera (PDC), probe (PDP), and grasper (PDG), no statistically significant difference was found between residents and students (P>0.05). In 45% of their cases, residents did not touch the bone while moving the camera, while students were able to avoid touching the bone with the camera in 32% of their cases. The maximum penetration depth of the instruments was around 40 mm for all three tools attached to the simulator (Table 2).

Table 2. Mean (SD) maximum penetration depth and percentage of cartilage contact for residents and students for each instrument (camera PDC, probe PDP, and grasper PDG), based on all exercises.

       Group       Contact (%)   No contact (%)   Max penetration depth (mm)   P-value
PDC    Residents   55            45               37 (9)                       0.714
       Students    68            32               39 (16)
PDP    Residents   100           0                44 (18)                      0.999
       Students    100           0                44 (20)
PDG    Residents   100           0                48 (15)                      0.889
       Students    100           0                47 (20)

Results of the different exercises for removing a meniscus tear are shown in Figure 5. No statistically significant improvement could be observed for the residents between the first and last exercise for the remaining meniscus (P=0.202) or for the resected meniscus (P=0.744) (Table 3). The students group also showed no statistically significant improvement between the first and last exercise for the remaining meniscus (P=0.744) and resected meniscus (P=0.164) (Table 3). While students showed a higher variation over time in deciding how much tear had to be removed, residents showed a more constant curve (Figure 5A and 5B). However, no statistically significant differences were found between residents and students for the first iteration or after 21 exercises (Table 3). Residents were more conservative in resecting meniscus than were students (Table 3).

Table 3. Mean (SD) percentage of remaining meniscus and percentage of resected meniscus, assessed for the first and last exercise and compared between groups.

                                Start (SD)   End (SD)   P-value
Remaining meniscus (%)
  Residents                     70 (18)      82 (9)     0.202
  Students                      80 (12)      76 (8)     0.540
  P-value (between groups)      0.073        0.362
Resected meniscus (%)
  Residents                     75 (25)      63 (21)    0.744
  Students                      80 (20)      81 (7)     0.164
  P-value (between groups)      0.193        0.912

Figure 5. Mean percentage of remaining meniscus (A) and resected meniscus (B), divided by groups.
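To illustrate the kind of between-group comparison reported in Tables 1–3, the MATLAB sketch below runs unpaired t-tests on placeholder first- and last-exercise times and applies a Holm-Sidak step-down correction across the two comparisons. The data vectors are invented for the example; the study itself used GraphPad Prism.

```matlab
% Illustration of between-group t-tests with a Holm-Sidak correction over
% m = 2 comparisons (first and last exercise). Placeholder data only.

residentsStart = [7 6 9 5 8 7 6 8];     % time on first exercise [min]
studentsStart  = [6 5 7 4 9 6 5 6];
residentsEnd   = [2 2 3 1 2 2 3 2];     % time on last exercise [min]
studentsEnd    = [3 2 4 3 5 3 2 3];

[~, pStart] = ttest2(residentsStart, studentsStart);   % unpaired two-tailed t-tests
[~, pEnd]   = ttest2(residentsEnd,   studentsEnd);

% Holm-Sidak step-down adjustment
p        = [pStart, pEnd];
m        = numel(p);
[ps, ix] = sort(p);                          % ascending raw p-values
adj      = 1 - (1 - ps).^(m - (1:m) + 1);    % Sidak adjustment at each step
adj      = cummax(adj);                      % enforce monotonicity
pAdj     = zeros(size(p));
pAdj(ix) = adj;                              % adjusted p-values, original order
fprintf('adjusted p: start %.3f, end %.3f\n', pAdj(1), pAdj(2));
```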
null
null
[ "Purpose", "Statistics" ]
[ "The aim of this study was to compare the learning curves of medical students and orthopedic residents during simulated knee arthroscopy procedures using an arthroscopy training simulator.", "A learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time.\n18\n In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older.\n19\n In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants.\nLearning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies.\n20\n The higher the slope of the regression curves, the steeper the learning.\nMean and standard deviation were reported for the initial and final exercise performed. Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases." ]
[ null, null ]
[ "Introduction", "Purpose", "Methods", "Statistics", "Results", "Discussion" ]
[ "Arthroscopy requires different skills than open surgery due to limited visibility, reduced motion freedom, and non-intuitive hand–eye coordination.1,2 Orthopedic resident surgeons are expected to acquire their early arthroscopy skills under the supervision of an orthopedic consultant.1,3 Arthroscopic procedures require additional training as complex trajectory may be necessary to perform arthroscopic inspection of a narrow joint space with complex anatomical structures.\n2\n While in several fields like aerospace, military, or critical care, training simulators are well established training methods although health care lags behind in terms of stakeholder engagement, terms of implantation of simulation outcomes.\n4\n In surgical training, the Halstead’s method “see one, do one, teach one” has traditionally been preferred.\n5\n Although for a number of reasons including increased public awareness for medical errors, patient safety, heightened patient expectations, strict regulation of residents’ duty-hours, surgeons’ liability, and an increasing emphasis on the efficient use of operating room time, different studies state that the Halstead training method is not appropriate anymore.6–8 Computer-based simulation can provide objective quantitative data for the measurement of performance and skills assessment.\nFor this reason, arthroscopic simulators were developed that also document the paradigm shift in orthopedic education.\n9\n In a study of McCarthy et al, novices showed significant improvements in task completion time, shorter arthroscope path lengths, shorter probe path lengths, and fewer arthroscope tip contacts when using arthroscopic training simulator with haptic feedback,\n10\n facilitating the process of acquiring and improving manual skills during arthroscopic surgery.11,12\nAccording to the current literature, there is increased use of skills training modalities, and increasing numbers of simulators have been developed as a result.13–17\nPurpose The aim of this study was to compare the learning curves of medical students and orthopedic residents during simulated knee arthroscopy procedures using an arthroscopy training simulator.\nThe aim of this study was to compare the learning curves of medical students and orthopedic residents during simulated knee arthroscopy procedures using an arthroscopy training simulator.", "The aim of this study was to compare the learning curves of medical students and orthopedic residents during simulated knee arthroscopy procedures using an arthroscopy training simulator.", "A prospective comparative study was conducted. All participants gave their oral and written informed consent for participation in the study. The local ethics committee waived the need for ethics approval.\nA total of 24 medical students, without any prior knowledge of arthroscopy (never performed arthroscopy), and 16 orthopedic residents were recruited. The latter had already taken arthroscopy courses (cadaver training course or training with arthroscopic device on a phantom) and performed a small number (less than five) of arthroscopy surgeries.\nAll participants were familiarized with the equipment and received always the same standardized instructions concerning the arthroscopy simulator. They were taught how to manage the 30° arthroscope and the tools. The knee arthroscopic training simulator GMV/insightArthroVR (GMV, Madrid, Spain) was used (Figure 1). 
It consisted of an anatomic knee model and two instrument phantoms mounted on a cart, a monitor, and training and simulation software (InsightArthroVR Version 2.9). Knee movements from 0° (total extension) to 90° of flexion as well as varus and valgus movements could be simulated. Both anterolateral and anteromedial portals were represented, enabling various arthroscopy knee exercises. Instruments were simulated with two force feedback robotic arms, with instrument phantoms providing haptic feedback. A 30° angled camera lens enables an arthroscopic display just like in real arthroscopic surgery. The probe was physically represented by the Phantom Omni Stylus (GMV, Madrid, Spain).Figure 1.Virtual reality arthroscopy trainer, GMV/insightArthroVR, by courtesy of GMV, Madrid, Spain.\nVirtual reality arthroscopy trainer, GMV/insightArthroVR, by courtesy of GMV, Madrid, Spain.\nThe simulator offers a broad spectrum of tasks from diagnostic arthroscopy to complex surgical procedures. Our study included training modules for diagnostic arthroscopy using the camera only, diagnostic arthroscopy using camera and probe in finding a sequence of spheres, partial meniscectomy at several locations in the menisci. At the outset of each training exercise, the difficulty level must be determined (choosing from initial, intermediate, and expert), resulting in different sizes of the spheres to be palpated. In the current study, the difficulty of all exercises was set at intermediate.\nThe data were collected over a period of two years in various training sessions. A total of 33 different exercises were carried out by the members of the two groups under investigation.\nThe following parameters were assessed:Time [s] to complete the task. The interval started when one of the instruments was inserted through a portal, and ended when the exercise was finished.The distance covered by the tip of the camera (CDC), the instrument (CDP), and the grasper (CDG) inside the knee phantom. CDP and CDG were held with the right hand only, while CDC was mostly held with the left hand except during diagnostic exercises.Cartilage damage [mm] was the maximum depth applied to specific damageable tissues according to a contact computation algorithm. The colliding tissues, in which the cartilage damage was considered, were the tibial plateau cartilage, femoral condyles, and articular patella cartilage for the knee. The algorithm computes the distance between the collision point and the maximum penetration. The penetration distance is translated to an opposite-direction force applied by the associated Phantom. This force is what is considered cartilage damage. The parameter was recorded for the camera (PDC) and the probe instrument (PDP) as well as for an open grasper (OG)Amount of resected meniscus (%): On completion of a meniscectomy the percentage of resected meniscus was determined. The percentage of the remaining meniscus was calculated as the volume of the meniscus left as a proportion of the total volume (Figure 2).Amount of resected tear (%): Shows the percentage of meniscus tear resected. 100% means resection of the whole meniscus volume depicted by experts (view picture). This includes resection of both the tear itself and the part of healthy meniscus that needs to be surgically removed in order to have a smooth meniscus edge (Figure 2).Figure 2.Example of a meniscectomy tear repair exercise. 
Trainees had to remove the indicated red area (% amount of resected tear) in order to remove the tear and not exceed the red border using a virtual grasper (% amount of resected meniscus).\nTime [s] to complete the task. The interval started when one of the instruments was inserted through a portal, and ended when the exercise was finished.\nThe distance covered by the tip of the camera (CDC), the instrument (CDP), and the grasper (CDG) inside the knee phantom. CDP and CDG were held with the right hand only, while CDC was mostly held with the left hand except during diagnostic exercises.\nCartilage damage [mm] was the maximum depth applied to specific damageable tissues according to a contact computation algorithm. The colliding tissues, in which the cartilage damage was considered, were the tibial plateau cartilage, femoral condyles, and articular patella cartilage for the knee. The algorithm computes the distance between the collision point and the maximum penetration. The penetration distance is translated to an opposite-direction force applied by the associated Phantom. This force is what is considered cartilage damage. The parameter was recorded for the camera (PDC) and the probe instrument (PDP) as well as for an open grasper (OG)\nAmount of resected meniscus (%): On completion of a meniscectomy the percentage of resected meniscus was determined. The percentage of the remaining meniscus was calculated as the volume of the meniscus left as a proportion of the total volume (Figure 2).\nAmount of resected tear (%): Shows the percentage of meniscus tear resected. 100% means resection of the whole meniscus volume depicted by experts (view picture). This includes resection of both the tear itself and the part of healthy meniscus that needs to be surgically removed in order to have a smooth meniscus edge (Figure 2).\nExample of a meniscectomy tear repair exercise. Trainees had to remove the indicated red area (% amount of resected tear) in order to remove the tear and not exceed the red border using a virtual grasper (% amount of resected meniscus).\nStatistics A learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time.\n18\n In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older.\n19\n In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants.\nLearning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies.\n20\n The higher the slope of the regression curves, the steeper the learning.\nMean and standard deviation were reported for the initial and final exercise performed. 
Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases.\nA learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time.\n18\n In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older.\n19\n In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants.\nLearning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies.\n20\n The higher the slope of the regression curves, the steeper the learning.\nMean and standard deviation were reported for the initial and final exercise performed. Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases.", "A learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time.\n18\n In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older.\n19\n In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants.\nLearning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. 
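For the start-versus-end and between-group comparisons described above, a minimal sketch is shown below. It only illustrates the general approach (the study itself used GraphPad Prism); the group sizes match the study, but all values are hypothetical, and the Holm-Sidak adjustment is written out explicitly for clarity.

```python
# Illustrative sketch of the start-vs-end and between-group comparisons
# (the study used GraphPad Prism; all values below are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residents_first = rng.normal(7, 4, 16)   # minutes at the first exercise (hypothetical)
residents_last = rng.normal(2, 1, 16)    # minutes at the last exercise (hypothetical)
students_first = rng.normal(6, 5, 24)
students_last = rng.normal(3, 3, 24)

# Paired (dependent-samples) t-test within each group: first vs last exercise
res_within = stats.ttest_rel(residents_first, residents_last)
stu_within = stats.ttest_rel(students_first, students_last)

# Two-tailed independent-samples t-tests between groups at the first and last exercise
p_between = [
    stats.ttest_ind(residents_first, students_first).pvalue,
    stats.ttest_ind(residents_last, students_last).pvalue,
]

# Holm-Sidak style step-down correction for the between-group comparisons
order = np.argsort(p_between)
m = len(p_between)
adjusted = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    adj = 1.0 - (1.0 - p_between[idx]) ** (m - rank)  # Sidak adjustment for this step
    running_max = max(running_max, adj)               # enforce monotonicity
    adjusted[idx] = min(running_max, 1.0)

print(res_within.pvalue, stu_within.pvalue, adjusted)
```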
Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies.\n20\n The higher the slope of the regression curves, the steeper the learning.\nMean and standard deviation were reported for the initial and final exercise performed. Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases.", "Each exercise in the pool of 33 training exercises was repeated on average 39 (median 37, range 1–170) times.\nResidents improved their arthroscopic skills by reducing the time to complete the task from 7 min to 2 min on average (P<0.0001), while students reduced the time from 6 min to 3 min (Figure 3) with 38 repetitions. However, the students group showed no statistical significant improvement in time (P=0.171). Residents had a steeper learning curve (learned faster) in terms of “time to complete the task” (slope=−0.37, R2=0.72, 95% CI −0.43 to −0.30) by comparison to students (slope=−0.23, R2=0.45, 95% CI −0.31 to −0.15).Figure 3.Learning curves for the mean time including logarithmic regression curves of residents and students performing 38 repetitions of several exercises.\nLearning curves for the mean time including logarithmic regression curves of residents and students performing 38 repetitions of several exercises.\nResidents improved their arthroscopic skills by reducing camera movement (CDC) from 2.4 m to .3 m on average over 38 simulations (P<0.001). Residents reduced movement with the probe from 1.1 m to 0.2 m over 17 simulations (P=0.001) while no improvement was observed on the movement with the grasper (CDG) over ten simulations (P=0.776). Students reduced their CDC movements from 2.1 m to .5 m on average over an observation period of 38 simulations (P=0.011) and the CDP from 1.6 m to 0.4 m over 17 simulations (P=0.008). No improvement was observed for CDG over ten simulations (P=0.619). Residents had a steeper learning curve in terms of CDC (slope=−0.50, R2=0.76, 95% CI −0.58 to −.42) in comparison to students (slope=−0.38, R2=0.47, 95% CI −0.49 to −0.26) (Figure 4A). When considering the CDP, the slope of the residents’ learning curve (slope=−0.38, R2=0.59, 95% CI −0.54 to −0.22) was similar to that of students (slope=−0.39, R2=0.63, 95% CI −0.54 to −0.24) (Figure 4B). 
In CDG, a steeper learning curve was found for students (slope=−0.22, R2=0.41, 95% CI −0.42 to −0.01) than for residents (slope=−0.39, R2=0.66, 95% CI −0.60 to −0.17) (Figure 4C).\nFigure 4. Learning curves for the distance covered with the camera (CDC) (A), with the probe (CDP) (B), and with the grasper (CDG) (C) including the logarithmic regression curves of residents and students in several exercises.\nNo statistically significant difference was observed between residents and students at the start or the end of the learning curve for the parameters time to complete the task, CDC, and CDG (P>0.05) (Table 1). Residents showed a statistically significantly lower CDP at the end of the learning curve (P=0.032) than did students, while no statistically significant difference was found at the start of the learning curve.\nTable 1. Mean (SD) of time to complete the task, CDC, CDP, and CDG, assessed for the first and last exercise and compared between groups.\nTime to complete the task (min): Residents start 7 (4), end 2 (1), P<0.001; Students start 6 (5), end 3 (3), P=0.171; between-group P-values 0.671 (start), 0.172 (end).\nCDC (m): Residents start 2.4 (1.4), end 0.3 (0.4), P<0.001; Students start 2.1 (1.3), end 0.5 (0.4), P=0.011; between-group P-values 0.484 (start), 0.515 (end).\nCDP (m): Residents start 1.1 (0.7), end 0.2 (0.1), P=0.001; Students start 1.6 (0.9), end 0.4 (0.1), P=0.008; between-group P-values 0.109 (start), 0.032 (end).\nCDG (m): Residents start 1.0 (1.0), end 0.9 (0.7), P=0.776; Students start 1.5 (2.4), end 0.8 (0.1), P=0.619; between-group P-values 0.498 (start), 0.795 (end).\nWhen considering the cartilage damage values for the camera (PDC), probe (PDP), and grasper (PDG), no statistically significant difference was found between residents and students (P>0.05). In 45% of their cases, residents did not touch the bone while moving the camera, while students avoided touching the bone with the camera in 32% of their cases. The maximum penetration depth of the instruments was around 40 mm for all three tools attached to the simulator (Table 2).\nTable 2. Mean (SD) maximum penetration depth and percentage of cartilage contact for residents and students for the various instruments (camera PDC, probe PDP, and grasper PDG), based on all exercises.\nPDC: Residents contact 55%, no contact 45%, max penetration depth 37 (9) mm; Students contact 68%, no contact 32%, max penetration depth 39 (16) mm; P-value 0.714.\nPDP: Residents contact 100%, no contact 0%, max penetration depth 44 (18) mm; Students contact 100%, no contact 0%, max penetration depth 44 (20) mm; P-value 0.999.\nPDG: Residents contact 100%, no contact 0%, max penetration depth 48 (15) mm; Students contact 100%, no contact 0%, max penetration depth 47 (20) mm; P-value 0.889.\nResults of the different exercises for removing a meniscus tear are shown in Figure 4. No statistically significant improvement could be observed for the residents between the first and last exercise for the remaining meniscus (P=0.202) or for the resected meniscus (P=0.744) (Table 3). The students group also showed no statistically significant improvement between the first and last exercise for the remaining meniscus (P=0.744) and the resected meniscus (P=0.164) (Table 3). 
While students showed a higher variation over time in deciding how much tear had to be removed, residents showed a more constant curve (Figure 5A and 5B). However, no statistically significant differences were found between residents and students for the first iteration or after 21 exercises (Table 3). Residents were more conservative in resecting meniscus than were students (Table 3).\nTable 3. Mean (SD) of the percentage of remaining meniscus and the percentage of resected meniscus, assessed for the first and last exercise and compared between groups.\nRemaining meniscus (%): Residents start 70 (18), end 82 (9), P=0.202; Students start 80 (12), end 76 (8), P=0.540; between-group P-values 0.073 (start), 0.362 (end).\nResected meniscus (%): Residents start 75 (25), end 63 (21), P=0.744; Students start 80 (20), end 81 (7), P=0.164; between-group P-values 0.193 (start), 0.912 (end).\nFigure 5. Mean percentage of remaining meniscus (A) and resected meniscus (B) divided by groups.", "Residents showed a statistically significant difference from the first to the last exercise, while students did not show a statistically significant improvement in terms of time to complete the task. The learning curve (Figure 1) shows a steeper slope for residents than for students in terms of time to complete the task. However, when comparing the two groups with each other at the first and last exercise, no statistically significant difference could be found. Therefore, the improvement in terms of time was more visible for the residents group, probably due to a better familiarity with the anatomy of the knee joint.\nBoth groups showed a statistically significant improvement in hand–eye coordination, which was assessed by the parameters CDC and CDP (Table 3). The learning curve of residents was steeper than the learning curve of students in CDC (Figure 4A), resulting in a statistically significantly better outcome for residents at the end point of the curve (Table 3). In CDP, the slope of the learning curve of both groups was similar, and no statistically significant difference could be observed between both groups at the start and end points (Figure 4B and Table 3).\nWhen exchanging the probe (CDP) for the grasper (CDG), surprisingly, both groups showed no statistically significant improvement in the learning curves (Table 3, Figure 4C), and there was no statistically significant difference between both groups at the start or end point of the learning curve (Table 3). The usage of the grasper was not required in exercises where anatomical structures had to be localized and palpated, although it was required in all meniscectomy exercises.\nWhen comparing students and residents concerning their performance of several arthroscopy exercises, it was seen that there were no statistically significant differences at the beginning or the end of the learning curves except for movement of the probe, where residents required fewer movements than did students in performing the specific tasks. 
Students reached the same level of dexterity in almost all cases, with the only exception being CDP at the end of the learning curve.\nThe training caused a reduction in the time needed to finish an exercise, as well as a decrease in possible damage to the virtual surrounding joint tissues (i.e., cartilage).\nLess experienced surgeons may therefore benefit more from training on the knee arthroscopy simulator. Nevertheless, even more experienced users were able to enhance their skills, as the residents’ knee arthroscopy performance was seen to improve in all investigated exercises.\nThe high and steep learning curve seen in both groups in our study is well in line with the results previously published by Rahm et al\n21\n and Dammerer et al.\n22\n In their study,\n21\n they assume that this rapid improvement of skills is the result of the increased possibilities the training session offers to also simulate difficult arthroscopically guided surgical interventions (i.e., meniscal repair and anterior cruciate ligament reconstruction).7,21,23,24 The orthopedic resident surgeons in the current study already started from a level of improved knee arthroscopy skills in comparison to the medical students. This might explain why the residents’ learning curve did not turn out to be as steep and high as that of the medical students. Nevertheless, the residents consistently achieved higher scores during the course of knee arthroscopy training, similar to the studies by Camp et al and Dammerer et al.12,22 So far, traditional arthroscopic training during residency lacks a standardized, objective evaluation system,\n25\n which creates difficulties in comparing different training methods. Martin et al\n26\n demonstrated the progress of trainees by having them perform the same exercises on an arthroscopy simulator before and after cadaveric training.\nIn the meniscectomy exercises, residents improved their skills by increasing the remaining meniscus by 10%, while students showed no improvement. However, residents seemed to remove less of the meniscus than required in comparison to students, although the difference did not reach statistical significance. It seems that residents were more conservative in removing the meniscus compared to students, who had no prior knowledge of meniscectomy, although this could not be proven statistically (Table 3, Figure 5A and 5B). The goal of the meniscectomy exercises was to remove as much of the damaged meniscus as possible, as shown in Figure 2. As in real surgical scenarios, the surgeon has to decide case by case how much meniscus tissue to remove without harming the remaining healthy meniscus. The goal of the exercise, shown as a shaded surface in Figure 2 with the exact boundaries of the damaged meniscus, cannot be displayed in the exercise itself. This might explain why the learning effect is less prominent. Each participant had to find out by trial and error whether they had removed too much or too little of the damaged meniscus. While the participants gained experience in making these decisions, the assessed parameters did not show an improvement in either group, as it was unclear, just as in real surgery, what the best compromise would be between removing the damaged tear and preserving as much healthy meniscus tissue as possible. It must also be said that the grasping haptics of the simulator should be improved, as the amount of resected tissue was very difficult for the users to control.\nBoth groups, however, showed a large number of cartilage contacts when using the different tools. 
Especially when using the pointing or grasping device in each exercise, contact with the cartilage occurred in all exercises (Table 2). When using the camera, residents touched the cartilage tissue in 55% of their cases, while students did so in 68% of their cases (Table 2). It was seen that the number of repetitions was not large enough to reduce possible cartilage damage in most of the cases.\nWe acknowledge the following limitations of our study. It is not possible to predict any clinical outcome of surgeries in accordance with the use of a training arthroscopy simulator as the training environment differs considerably from a common operation room setting. However, patient outcome will benefit from the skills acquired in the hands-on training. Measurements made with arthroscopy training simulators consider only a few parameters that may not be sufficient to demonstrate the overall learning experience and the complexity of certain arthroscopic surgeries.\nWhen removing the meniscus with an open grasper, poor tactile feedback was noticed, which might influence the quantity of meniscus resected. In addition, the information on how much meniscus should be resected was not visualized in the simulation, which made it impossible to score 100%. However, also in real cases the surgeon has to rely on their personal experience when deciding the resection borders during meniscectomy. Both groups showed a high incidence of touching cartilage structures during the training sessions. Training modules performed on current training simulators should focus on giving appropriate feedback to the trainees concerning damage to the cartilage surface. The effects of training on the arthroscopy simulator may directly lead to a potentially positive impact of arthroscopic interventions in clinical practice. This will result in a benefit for the patient, such as shorter anesthesia duration, reduced risk of infection, as well as a lesser danger for the incorporation of irrigation fluid used in the context of arthroscopy.27–29\nIn conclusion, our results demonstrate the usefulness of an arthroscopy training simulator as an important tool for the improvement of surgical and arthroscopic skills in orthopedic resident surgeons and in medical students. Our study shows a fast (steep) learning curve for orthopedic residents and medical students undergoing a standardized training program on a validated virtual reality-based arthroscopy knee training simulator. Consequently, it can be expected that simulator training sessions will and should become an even more important training tool as a supplement to cadaveric training." ]
[ "intro", null, "methods", null, "results", "discussion" ]
[ "education", "simulation", "knee arthroscopy", "surgical education", "orthopaedic surgery", "arthroscopy simulator" ]
Introduction: Arthroscopy requires different skills than open surgery due to limited visibility, reduced freedom of motion, and non-intuitive hand–eye coordination.1,2 Orthopedic resident surgeons are expected to acquire their early arthroscopy skills under the supervision of an orthopedic consultant.1,3 Arthroscopic procedures require additional training, as complex trajectories may be necessary to perform arthroscopic inspection of a narrow joint space with complex anatomical structures. 2 While training simulators are well-established training methods in several fields such as aerospace, the military, and critical care, health care lags behind in terms of stakeholder engagement and the implementation of simulation outcomes. 4 In surgical training, Halstead’s method “see one, do one, teach one” has traditionally been preferred. 5 However, for a number of reasons, including increased public awareness of medical errors, patient safety, heightened patient expectations, strict regulation of residents’ duty-hours, surgeons’ liability, and an increasing emphasis on the efficient use of operating room time, different studies state that the Halstead training method is no longer appropriate.6–8 Computer-based simulation can provide objective quantitative data for the measurement of performance and skills assessment. For this reason, arthroscopic simulators were developed, which also document the paradigm shift in orthopedic education. 9 In a study by McCarthy et al, novices showed significant improvements in task completion time, shorter arthroscope path lengths, shorter probe path lengths, and fewer arthroscope tip contacts when using an arthroscopic training simulator with haptic feedback, 10 facilitating the process of acquiring and improving manual skills during arthroscopic surgery.11,12 According to the current literature, there is increased use of skills training modalities, and increasing numbers of simulators have been developed as a result.13–17 Purpose: The aim of this study was to compare the learning curves of medical students and orthopedic residents during simulated knee arthroscopy procedures using an arthroscopy training simulator. Methods: A prospective comparative study was conducted. All participants gave their oral and written informed consent for participation in the study. The local ethics committee waived the need for ethics approval. A total of 24 medical students, without any prior knowledge of arthroscopy (never performed arthroscopy), and 16 orthopedic residents were recruited. The latter had already taken arthroscopy courses (cadaver training course or training with an arthroscopic device on a phantom) and performed a small number (less than five) of arthroscopy surgeries. All participants were familiarized with the equipment and always received the same standardized instructions concerning the arthroscopy simulator. They were taught how to manage the 30° arthroscope and the tools. The knee arthroscopic training simulator GMV/insightArthroVR (GMV, Madrid, Spain) was used (Figure 1). 
It consisted of an anatomic knee model and two instrument phantoms mounted on a cart, a monitor, and training and simulation software (InsightArthroVR Version 2.9). Knee movements from 0° (total extension) to 90° of flexion as well as varus and valgus movements could be simulated. Both anterolateral and anteromedial portals were represented, enabling various arthroscopy knee exercises. Instruments were simulated with two force feedback robotic arms, with instrument phantoms providing haptic feedback. A 30° angled camera lens enables an arthroscopic display just like in real arthroscopic surgery. The probe was physically represented by the Phantom Omni Stylus (GMV, Madrid, Spain).Figure 1.Virtual reality arthroscopy trainer, GMV/insightArthroVR, by courtesy of GMV, Madrid, Spain. Virtual reality arthroscopy trainer, GMV/insightArthroVR, by courtesy of GMV, Madrid, Spain. The simulator offers a broad spectrum of tasks from diagnostic arthroscopy to complex surgical procedures. Our study included training modules for diagnostic arthroscopy using the camera only, diagnostic arthroscopy using camera and probe in finding a sequence of spheres, partial meniscectomy at several locations in the menisci. At the outset of each training exercise, the difficulty level must be determined (choosing from initial, intermediate, and expert), resulting in different sizes of the spheres to be palpated. In the current study, the difficulty of all exercises was set at intermediate. The data were collected over a period of two years in various training sessions. A total of 33 different exercises were carried out by the members of the two groups under investigation. The following parameters were assessed:Time [s] to complete the task. The interval started when one of the instruments was inserted through a portal, and ended when the exercise was finished.The distance covered by the tip of the camera (CDC), the instrument (CDP), and the grasper (CDG) inside the knee phantom. CDP and CDG were held with the right hand only, while CDC was mostly held with the left hand except during diagnostic exercises.Cartilage damage [mm] was the maximum depth applied to specific damageable tissues according to a contact computation algorithm. The colliding tissues, in which the cartilage damage was considered, were the tibial plateau cartilage, femoral condyles, and articular patella cartilage for the knee. The algorithm computes the distance between the collision point and the maximum penetration. The penetration distance is translated to an opposite-direction force applied by the associated Phantom. This force is what is considered cartilage damage. The parameter was recorded for the camera (PDC) and the probe instrument (PDP) as well as for an open grasper (OG)Amount of resected meniscus (%): On completion of a meniscectomy the percentage of resected meniscus was determined. The percentage of the remaining meniscus was calculated as the volume of the meniscus left as a proportion of the total volume (Figure 2).Amount of resected tear (%): Shows the percentage of meniscus tear resected. 100% means resection of the whole meniscus volume depicted by experts (view picture). This includes resection of both the tear itself and the part of healthy meniscus that needs to be surgically removed in order to have a smooth meniscus edge (Figure 2).Figure 2.Example of a meniscectomy tear repair exercise. 
Trainees had to remove the indicated red area (% amount of resected tear) in order to remove the tear and not exceed the red border using a virtual grasper (% amount of resected meniscus). Time [s] to complete the task. The interval started when one of the instruments was inserted through a portal, and ended when the exercise was finished. The distance covered by the tip of the camera (CDC), the instrument (CDP), and the grasper (CDG) inside the knee phantom. CDP and CDG were held with the right hand only, while CDC was mostly held with the left hand except during diagnostic exercises. Cartilage damage [mm] was the maximum depth applied to specific damageable tissues according to a contact computation algorithm. The colliding tissues, in which the cartilage damage was considered, were the tibial plateau cartilage, femoral condyles, and articular patella cartilage for the knee. The algorithm computes the distance between the collision point and the maximum penetration. The penetration distance is translated to an opposite-direction force applied by the associated Phantom. This force is what is considered cartilage damage. The parameter was recorded for the camera (PDC) and the probe instrument (PDP) as well as for an open grasper (OG) Amount of resected meniscus (%): On completion of a meniscectomy the percentage of resected meniscus was determined. The percentage of the remaining meniscus was calculated as the volume of the meniscus left as a proportion of the total volume (Figure 2). Amount of resected tear (%): Shows the percentage of meniscus tear resected. 100% means resection of the whole meniscus volume depicted by experts (view picture). This includes resection of both the tear itself and the part of healthy meniscus that needs to be surgically removed in order to have a smooth meniscus edge (Figure 2). Example of a meniscectomy tear repair exercise. Trainees had to remove the indicated red area (% amount of resected tear) in order to remove the tear and not exceed the red border using a virtual grasper (% amount of resected meniscus). Statistics A learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time. 18 In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older. 19 In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants. Learning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies. 20 The higher the slope of the regression curves, the steeper the learning. Mean and standard deviation were reported for the initial and final exercise performed. 
Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases. A learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time. 18 In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older. 19 In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants. Learning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies. 20 The higher the slope of the regression curves, the steeper the learning. Mean and standard deviation were reported for the initial and final exercise performed. Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases. Statistics: A learning curve is defined as a mathematical description of someone’s progress in gaining experience or a new skill by repeating it for a period of time. 18 In health care, there is no widely accepted standard way to define or measure a learning curve. The most common variable used to measure a learning curve is the operating time. Furthermore, the duration of the surgical learning curve to reach the plateau phase depends on the investigated item and if the surgeons are novice, experienced, senior, younger, or older. 19 In the present study, the term learning curve is used to describe the progress in improving several simulation parameters gained over time by the individual participants. Learning curves were analyzed using the mean time to complete the task, CDC, CDP, and CDG and by using a logarithmic fitting line to assess the slope of the learning curves. 
Logarithmic regression was calculated for the learning curves to determine the steepness of the learning curves. Slope, R2, and 95% confidence interval were reported and graphs are shown with log-log line. Logarithmic regression provided the best fit to the learning curves and has been used in several other studies. 20 The higher the slope of the regression curves, the steeper the learning. Mean and standard deviation were reported for the initial and final exercise performed. Differences between start and end point were analyzed with a T-test for dependent samples for the parameters mean time to complete the task, CDC, CDP, CDG, resected meniscus, and remaining mensiscus. Differences between the two groups regarding their learning curves were analyzed using the two-tailed T-test for independent samples with correction for multiple comparisons using the Holm-Sidak procedure considering the first and last exercises of the learning curve. In all analyses (Version 8.0, GraphPad Software, Inc, La Jolla, US-CA), a P-value of .05 was considered statistically significant in all cases. Results: Each exercise in the pool of 33 training exercises was repeated on average 39 (median 37, range 1–170) times. Residents improved their arthroscopic skills by reducing the time to complete the task from 7 min to 2 min on average (P<0.0001), while students reduced the time from 6 min to 3 min (Figure 3) with 38 repetitions. However, the students group showed no statistical significant improvement in time (P=0.171). Residents had a steeper learning curve (learned faster) in terms of “time to complete the task” (slope=−0.37, R2=0.72, 95% CI −0.43 to −0.30) by comparison to students (slope=−0.23, R2=0.45, 95% CI −0.31 to −0.15).Figure 3.Learning curves for the mean time including logarithmic regression curves of residents and students performing 38 repetitions of several exercises. Learning curves for the mean time including logarithmic regression curves of residents and students performing 38 repetitions of several exercises. Residents improved their arthroscopic skills by reducing camera movement (CDC) from 2.4 m to .3 m on average over 38 simulations (P<0.001). Residents reduced movement with the probe from 1.1 m to 0.2 m over 17 simulations (P=0.001) while no improvement was observed on the movement with the grasper (CDG) over ten simulations (P=0.776). Students reduced their CDC movements from 2.1 m to .5 m on average over an observation period of 38 simulations (P=0.011) and the CDP from 1.6 m to 0.4 m over 17 simulations (P=0.008). No improvement was observed for CDG over ten simulations (P=0.619). Residents had a steeper learning curve in terms of CDC (slope=−0.50, R2=0.76, 95% CI −0.58 to −.42) in comparison to students (slope=−0.38, R2=0.47, 95% CI −0.49 to −0.26) (Figure 4A). When considering the CDP, the slope of the residents’ learning curve (slope=−0.38, R2=0.59, 95% CI −0.54 to −0.22) was similar to that of students (slope=−0.39, R2=0.63, 95% CI −0.54 to −0.24) (Figure 4B). In CDG, a steeper learning curve was found for students (slope=−0.22, R2=0.41, 95% CI −0.42 to −0.01) than for residents (slope=−0.39, R2=0.66, 95% CI −0.60 to −0.17) (Figure 4C).Figure 4.Learning curves for the distance covered with the camera (CDC) (A), with the probe (CDP) (B), and with the grasper (CDG) (C) including the logarithmic regression curves of residents and students in several exercises. 
No statistically significant difference was observed between residents and students at the start or the end of the learning curve for the parameters time to complete the task, CDC, and CDG (P>0.05) (Table 1). Residents showed a statistically significantly lower CDP at the end of the learning curve (P=0.032) than did students, while no statistically significant difference was found at the start of the learning curve. Table 1. Mean (SD) of time to complete the task, CDC, CDP, and CDG, assessed for the first and last exercise and compared between groups: Time to complete the task (min): Residents start 7 (4), end 2 (1), P<0.001; Students start 6 (5), end 3 (3), P=0.171; between-group P-values 0.671 (start), 0.172 (end). CDC (m): Residents start 2.4 (1.4), end 0.3 (0.4), P<0.001; Students start 2.1 (1.3), end 0.5 (0.4), P=0.011; between-group P-values 0.484 (start), 0.515 (end). CDP (m): Residents start 1.1 (0.7), end 0.2 (0.1), P=0.001; Students start 1.6 (0.9), end 0.4 (0.1), P=0.008; between-group P-values 0.109 (start), 0.032 (end). CDG (m): Residents start 1.0 (1.0), end 0.9 (0.7), P=0.776; Students start 1.5 (2.4), end 0.8 (0.1), P=0.619; between-group P-values 0.498 (start), 0.795 (end). When considering the cartilage damage values for the camera (PDC), probe (PDP), and grasper (PDG), no statistically significant difference was found between residents and students (P>0.05). In 45% of their cases, residents did not touch the bone while moving the camera, while students avoided touching the bone with the camera in 32% of their cases. The maximum penetration depth of the instruments was around 40 mm for all three tools attached to the simulator (Table 2). Table 2. Mean (SD) maximum penetration depth and percentage of cartilage contact for residents and students for the various instruments (camera PDC, probe PDP, and grasper PDG), based on all exercises: PDC: Residents contact 55%, no contact 45%, max penetration depth 37 (9) mm; Students contact 68%, no contact 32%, max penetration depth 39 (16) mm; P-value 0.714. PDP: Residents contact 100%, no contact 0%, max penetration depth 44 (18) mm; Students contact 100%, no contact 0%, max penetration depth 44 (20) mm; P-value 0.999. PDG: Residents contact 100%, no contact 0%, max penetration depth 48 (15) mm; Students contact 100%, no contact 0%, max penetration depth 47 (20) mm; P-value 0.889. Results of the different exercises for removing a meniscus tear are shown in Figure 4. No statistically significant improvement could be observed for the residents between the first and last exercise for the remaining meniscus (P=0.202) or for the resected meniscus (P=0.744) (Table 3). The students group also showed no statistically significant improvement between the first and last exercise for the remaining meniscus (P=0.744) and the resected meniscus (P=0.164) (Table 3). While students showed a higher variation over time in deciding how much tear had to be removed, residents showed a more constant curve (Figure 5A and 5B). However, no statistically significant differences were found between residents and students for the first iteration or after 21 exercises (Table 3). 
Residents were more conservative in resecting meniscus than were students (Table 3). Table 3. Mean (SD) of the percentage of remaining meniscus and the percentage of resected meniscus, assessed for the first and last exercise and compared between groups: Remaining meniscus (%): Residents start 70 (18), end 82 (9), P=0.202; Students start 80 (12), end 76 (8), P=0.540; between-group P-values 0.073 (start), 0.362 (end). Resected meniscus (%): Residents start 75 (25), end 63 (21), P=0.744; Students start 80 (20), end 81 (7), P=0.164; between-group P-values 0.193 (start), 0.912 (end). Figure 5. Mean percentage of remaining meniscus (A) and resected meniscus (B) divided by groups. Discussion: Residents showed a statistically significant difference from the first to the last exercise, while students did not show a statistically significant improvement in terms of time to complete the task. The learning curve (Figure 1) shows a steeper slope for residents than for students in terms of time to complete the task. However, when comparing the two groups with each other at the first and last exercise, no statistically significant difference could be found. Therefore, the improvement in terms of time was more visible for the residents group, probably due to a better familiarity with the anatomy of the knee joint. Both groups showed a statistically significant improvement in hand–eye coordination, which was assessed by the parameters CDC and CDP (Table 3). The learning curve of residents was steeper than the learning curve of students in CDC (Figure 4A), resulting in a statistically significantly better outcome for residents at the end point of the curve (Table 3). In CDP, the slope of the learning curve of both groups was similar, and no statistically significant difference could be observed between both groups at the start and end points (Figure 4B and Table 3). When exchanging the probe (CDP) for the grasper (CDG), surprisingly, both groups showed no statistically significant improvement in the learning curves (Table 3, Figure 4C), and there was no statistically significant difference between both groups at the start or end point of the learning curve (Table 3). The usage of the grasper was not required in exercises where anatomical structures had to be localized and palpated, although it was required in all meniscectomy exercises. When comparing students and residents concerning their performance of several arthroscopy exercises, it was seen that there were no statistically significant differences at the beginning or the end of the learning curves except for movement of the probe, where residents required fewer movements than did students in performing the specific tasks. Students reached the same level of dexterity in almost all cases, with the only exception being CDP at the end of the learning curve. The training caused a reduction in the time needed to finish an exercise, as well as a decrease in possible damage to the virtual surrounding joint tissues (i.e., cartilage). Less experienced surgeons may therefore benefit more from training on the knee arthroscopy simulator. Nevertheless, even more experienced users were able to enhance their skills, as the residents’ knee arthroscopy performance was seen to improve in all investigated exercises. 
The high and steep learning curve seen in both groups in our study is well in line with the results previously published by Rahm et al 21 and Dammerer et al. 22 In their study, 21 they assume that this rapid improvement of skills is the result of the increased possibilities the training session offers to also simulate difficult arthroscopically guided surgical interventions (i.e., meniscal repair and anterior cruciate ligament reconstruction).7,21,23,24 The orthopedic resident surgeons in the current study already started from a level of improved knee arthroscopy skills in comparison to the medical students. This might explain why the residents’ learning curve did not turn out to be as steep and high as that of the medical students. Nevertheless, the residents consistently achieved higher scores during the course of knee arthroscopy training, similar to the studies by Camp et al and Dammerer et al.12,22 So far, traditional arthroscopic training during residency lacks a standardized, objective evaluation system, 25 which creates difficulties in comparing different training methods. Martin et al 26 demonstrated the progress of trainees by having them perform the same exercises on an arthroscopy simulator before and after cadaveric training. In the meniscectomy exercises, residents improved their skills by increasing the remaining meniscus by 10%, while students showed no improvement. However, residents seemed to remove less of the meniscus than required in comparison to students, although the difference did not reach statistical significance. It seems that residents were more conservative in removing the meniscus compared to students, who had no prior knowledge of meniscectomy, although this could not be proven statistically (Table 3, Figure 5A and 5B). The goal of the meniscectomy exercises was to remove as much of the damaged meniscus as possible, as shown in Figure 2. As in real surgical scenarios, the surgeon has to decide case by case how much meniscus tissue to remove without harming the remaining healthy meniscus. The goal of the exercise, shown as a shaded surface in Figure 2 with the exact boundaries of the damaged meniscus, cannot be displayed in the exercise itself. This might explain why the learning effect is less prominent. Each participant had to find out by trial and error whether they had removed too much or too little of the damaged meniscus. While the participants gained experience in making these decisions, the assessed parameters did not show an improvement in either group, as it was unclear, just as in real surgery, what the best compromise would be between removing the damaged tear and preserving as much healthy meniscus tissue as possible. It must also be said that the grasping haptics of the simulator should be improved, as the amount of resected tissue was very difficult for the users to control. Both groups, however, showed a large number of cartilage contacts when using the different tools. Especially when using the pointing or grasping device, contact with the cartilage occurred in all exercises (Table 2). When using the camera, residents touched the cartilage tissue in 55% of their cases, while students did so in 68% of their cases (Table 2). It was seen that the number of repetitions was not large enough to reduce possible cartilage damage in most of the cases. We acknowledge the following limitations of our study. 
It is not possible to predict any clinical outcome of surgeries in accordance with the use of a training arthroscopy simulator as the training environment differs considerably from a common operation room setting. However, patient outcome will benefit from the skills acquired in the hands-on training. Measurements made with arthroscopy training simulators consider only a few parameters that may not be sufficient to demonstrate the overall learning experience and the complexity of certain arthroscopic surgeries. When removing the meniscus with an open grasper, poor tactile feedback was noticed, which might influence the quantity of meniscus resected. In addition, the information on how much meniscus should be resected was not visualized in the simulation, which made it impossible to score 100%. However, also in real cases the surgeon has to rely on their personal experience when deciding the resection borders during meniscectomy. Both groups showed a high incidence of touching cartilage structures during the training sessions. Training modules performed on current training simulators should focus on giving appropriate feedback to the trainees concerning damage to the cartilage surface. The effects of training on the arthroscopy simulator may directly lead to a potentially positive impact of arthroscopic interventions in clinical practice. This will result in a benefit for the patient, such as shorter anesthesia duration, reduced risk of infection, as well as a lesser danger for the incorporation of irrigation fluid used in the context of arthroscopy.27–29 In conclusion, our results demonstrate the usefulness of an arthroscopy training simulator as an important tool for the improvement of surgical and arthroscopic skills in orthopedic resident surgeons and in medical students. Our study shows a fast (steep) learning curve for orthopedic residents and medical students undergoing a standardized training program on a validated virtual reality-based arthroscopy knee training simulator. Consequently, it can be expected that simulator training sessions will and should become an even more important training tool as a supplement to cadaveric training.
Background: The Covid-19 pandemic has created the largest disruption of education in history. In response to this, we aimed to evaluate the knee arthroscopy learning curve among medical students and orthopaedic residents. Methods: An arthroscopy simulator was used to compare the learning curves of two groups. Medical students without any prior knowledge of arthroscopy (n=24) were compared to a residents group (n=16). The analyzed parameters were "time to complete a task," assessment of the movement of the tools, and values scoring damage to the surrounding tissues. Results: After several repetitions, both groups improved their skills in terms of time and movement. Residents were on average faster, had less camera movement, and touched the cartilage tissue less often than did students. Students showed a steeper improvement curve than residents for certain parameters, as they started from a different experience level. Conclusions: The participants were able to reduce the time to complete a task. There was also a decrease in possible damage to the virtual surrounding tissues. In general, the residents had better mean values, but the students had the steeper learning curve. Less experienced surgeons in particular can train the hand-eye coordination skills required for arthroscopic surgery. Training simulators are an important training tool that supplements cadaveric training and participation in arthroscopic operations and should be included in training.
null
null
5,550
257
[ 28, 374 ]
6
[ "learning", "residents", "students", "meniscus", "training", "curve", "curves", "time", "learning curve", "arthroscopy" ]
[ "surgical arthroscopic skills", "usefulness arthroscopy training", "cadaveric training arthroscopy", "arthroscopy training simulators", "arthroscopic training simulator" ]
null
null
[CONTENT] education | simulation | knee arthroscopy | surgical education | orthopaedic surgery | arthroscopy simulator [SUMMARY]
[CONTENT] education | simulation | knee arthroscopy | surgical education | orthopaedic surgery | arthroscopy simulator [SUMMARY]
[CONTENT] education | simulation | knee arthroscopy | surgical education | orthopaedic surgery | arthroscopy simulator [SUMMARY]
null
[CONTENT] education | simulation | knee arthroscopy | surgical education | orthopaedic surgery | arthroscopy simulator [SUMMARY]
null
[CONTENT] Arthroscopy | COVID-19 | Clinical Competence | Computer Simulation | Humans | Internship and Residency | Knee Joint | Learning Curve | Meniscectomy | Pandemics | Prospective Studies | Simulation Training | Students, Medical [SUMMARY]
[CONTENT] Arthroscopy | COVID-19 | Clinical Competence | Computer Simulation | Humans | Internship and Residency | Knee Joint | Learning Curve | Meniscectomy | Pandemics | Prospective Studies | Simulation Training | Students, Medical [SUMMARY]
[CONTENT] Arthroscopy | COVID-19 | Clinical Competence | Computer Simulation | Humans | Internship and Residency | Knee Joint | Learning Curve | Meniscectomy | Pandemics | Prospective Studies | Simulation Training | Students, Medical [SUMMARY]
null
[CONTENT] Arthroscopy | COVID-19 | Clinical Competence | Computer Simulation | Humans | Internship and Residency | Knee Joint | Learning Curve | Meniscectomy | Pandemics | Prospective Studies | Simulation Training | Students, Medical [SUMMARY]
null
[CONTENT] surgical arthroscopic skills | usefulness arthroscopy training | cadaveric training arthroscopy | arthroscopy training simulators | arthroscopic training simulator [SUMMARY]
[CONTENT] surgical arthroscopic skills | usefulness arthroscopy training | cadaveric training arthroscopy | arthroscopy training simulators | arthroscopic training simulator [SUMMARY]
[CONTENT] surgical arthroscopic skills | usefulness arthroscopy training | cadaveric training arthroscopy | arthroscopy training simulators | arthroscopic training simulator [SUMMARY]
null
[CONTENT] surgical arthroscopic skills | usefulness arthroscopy training | cadaveric training arthroscopy | arthroscopy training simulators | arthroscopic training simulator [SUMMARY]
null
[CONTENT] learning | residents | students | meniscus | training | curve | curves | time | learning curve | arthroscopy [SUMMARY]
[CONTENT] learning | residents | students | meniscus | training | curve | curves | time | learning curve | arthroscopy [SUMMARY]
[CONTENT] learning | residents | students | meniscus | training | curve | curves | time | learning curve | arthroscopy [SUMMARY]
null
[CONTENT] learning | residents | students | meniscus | training | curve | curves | time | learning curve | arthroscopy [SUMMARY]
null
[CONTENT] training | arthroscopy | skills | arthroscopic | orthopedic | simulators | procedures | path | developed | lengths [SUMMARY]
[CONTENT] meniscus | learning | tear | resected | curve | learning curve | gmv | cartilage | arthroscopy | curves [SUMMARY]
[CONTENT] residents | students | sd | meniscus | residents students | ci | 95 ci | value | table | 38 [SUMMARY]
null
[CONTENT] learning | arthroscopy | residents | training | students | meniscus | curve | learning curve | curves | time [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] two ||| n=24 ||| [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] ||| ||| two ||| n=24 ||| ||| ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
null
Rhipicephalus microplus serine protease inhibitor family: annotation, expression and functional characterisation assessment.
25564202
Rhipicephalus (Boophilus) microplus evades the host's haemostatic system through a complex protein array secreted into tick saliva. Serine protease inhibitors (serpins) constitute an important component of saliva and are represented by a large protease inhibitor family in Ixodidae. These secreted and non-secreted inhibitors modulate diverse and essential proteases involved in different physiological processes.
BACKGROUND
The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi. The database search was conducted on BmiGi V1, BmiGi V2.1, five SSH libraries, Australian tick transcriptome libraries and RmiTR V1 using bioinformatics methods. Semi-quantitative PCR was carried out using different adult tissues and tick development stages. The cDNAs of four identified R. microplus serpins were cloned and expressed in Pichia pastoris in order to determine the biological targets of these serpins utilising protease inhibition assays.
METHODS
A total of four out of twenty-two serpins identified in our analysis are new R. microplus serpins, which were named RmS-19 to RmS-22. The analyses of DNA and predicted amino acid sequences showed high conservation of the R. microplus serpin sequences. The expression data suggested ubiquitous expression of RmS, except for RmS-6 and RmS-14, which were expressed only in nymphs and adult female ovaries, respectively. RmS-19 and RmS-20 were expressed in all tissue samples analysed, showing their important role in both parasitic and non-parasitic stages of R. microplus development. RmS-21 was not detected in ovaries, and RmS-22 was not identified in ovary and nymph samples, but both were expressed in the rest of the samples analysed. A total of four expressed recombinant serpins showed protease-specific inhibition of Chymotrypsin (RmS-1 and RmS-6), Chymotrypsin / Elastase (RmS-3) and Thrombin (RmS-15).
RESULTS
This study constitutes an important contribution and improvement to the knowledge about the physiologic role of R. microplus serpins during the host-tick interaction.
CONCLUSION
[ "Amino Acid Sequence", "Animals", "Base Sequence", "Cloning, Molecular", "DNA", "Female", "Gene Expression Regulation", "Nymph", "Ovary", "Peptide Hydrolases", "Rhipicephalus", "Saliva", "Serine Proteinase Inhibitors" ]
4322644
Background
Ticks are worldwide-distributed ectoparasites that have evolved as obligate haematophagous arthropods of animals and humans. These parasites are ranked after mosquitoes as the second most important group of vectors transmitting disease-causing agents to mammals [1,2]. In particular, the cattle tick (Rhipicephalus microplus) is considered the most economically important ectoparasite of cattle in tropical and subtropical regions of the world, because it causes direct economic losses to beef and dairy cattle producers through host parasitism and tick-borne diseases such as anaplasmosis and babesiosis [3,4]. The success of the parasitic cycle of R. microplus begins with the larval capability to overcome the haemostatic and immunological responses of the host. Following larval attachment, a great amount of blood is ingested and digested by ticks in order to complete their parasitic cycle. The fully engorged adult females drop off the host to initiate the non-parasitic phase with the laying and hatching of eggs. Throughout the entire parasitic cycle, R. microplus intensively produces and secretes proteins that disrupt host responses; among these, protease inhibitors play an important role in tick survival, feeding and development [5-8]. Serpins (serine protease inhibitors) are important regulatory molecules with roles during host-parasite interactions such as fibrinolysis [9], the host response mediated by complement proteases [10], and inflammation [11-13], among other tick physiological functions [14,15]. These protease inhibitors form a large superfamily that is extensively distributed within bacteria, insects, parasites, animals and plants [16,17]. Serpins differ from Kunitz protease inhibitors by a distinctive conformational change during the inhibition of their target proteases. The presence of a small domain designated the reactive center loop (RCL) constitutes their most notable characteristic. This domain extends outside of the protein and leads to the formation of a firm bond between the serpin and its specific proteinase [18-20]. Members of the tick serpin family have been studied and recommended as useful targets for tick vaccine development [21]. Consequently, serpin sequences from diverse tick species have been reported, such as Amblyomma americanum [22], Amblyomma variegatum [23], Amblyomma maculatum [24], Dermacentor variabilis [25], Rhipicephalus appendiculatus [26], R. microplus [6,27], Haemaphysalis longicornis [28], Ixodes scapularis [21,29], and Ixodes ricinus [9,11]. Additionally, an in silico identification of R. microplus serpins was conducted using different databases [30]. However, a great number of tick serpins remain functionally uncharacterised, which limits studies of their function during the host-parasite interaction [11,31,32]. In this study, serpins from different R. microplus genomic databases were identified and four new serpin molecules are reported. In silico characterization of these serpins was undertaken using bioinformatics methods. Additionally, R. microplus serpins (RmS) were cloned, sequenced, and expressed in order to determine their protease inhibition specificity. The spatial expression of these serpins was examined by PCR using cDNA from different tick life stages and female adult organs.
Finally, this study is an important step forward in uncovering the role of RmS in the physiology of this ectoparasite and their potential use for the future improvement of tick control methods.
null
null
Results
Identification of RmS
The analysis of the R. microplus sequence databases revealed twenty-two different putative RmS, ultimately identified after the elimination of redundant sequences. The full CDS for RmS-1 to RmS-18, reported by Tirloni and co-workers [30], were found in this study. The percentage identities after alignment of the RmS were variable; for example, RmS-3 and RmS-20 showed 94% and 31% identity with a hypothetical bacterial serpin (Paraphysa parvula) and R. appendiculatus Serpin-3, respectively. The reactive center loop characteristic of the serine protease inhibitor family was found in the CDS of RmS-19 to -22. These new sequences were deposited in GenBank under the following accession numbers: RmS-19: KP121409, RmS-20: KP121408, RmS-21: KP121411, and RmS-22: KP121414. High variability of identity was observed within the RmS family, ranging from 29% between RmS-14 and RmS-15 to 62% between RmS-3 and RmS-5. The characteristic reactive center loop (RCL) domain associated with serpin family members was found in all RmS (Figure 1). The type of amino acid at the P1 site of the RCL showed high variation; for example, RmS-1, -4, -7, -10, -11, -14 and -20 to -22 have a polar uncharged amino acid, whereas RmS-2, -3, -12 and -19 have hydrophobic amino acids. Basic amino acids such as arginine or lysine at the P1 site were found in RmS-5, -6, -9, -13, and -15 to -18 (Figure 1). The consensus amino acid motif VNEEGT [47] and the canonical sequence representing the RCL hinge from P17 to P8 (EEGTIATAVT) [18], which are characteristic of serpins, were highly conserved in the aligned RmS (Figure 1). Finally, the data confirmed the conservation of the reactive center loop and the characteristic motif of this protein family in RmS-19 to RmS-22.
Figure 1: Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to -22). Highly conserved residues and motifs are highlighted in gray shade. The P1 regions are highlighted with a dash-dot line rectangle over the specific amino acid sequence [47].
RT-PCR analysis
Reverse transcriptase PCR analysis was used to validate the spatial expression of the rms-1, rms-3 to -6, rms-14, rms-15 and rms-19 to -22 transcripts. The data showed expression of the rms transcripts in different organs and developmental stages of R. microplus (Figure 2A-D). High expression of rms-1, -3, -5 and -15 in adult females compared with the nymph stage was observed (Figure 2A and B), and the rms-3 transcript was not detected in eggs. Expression of rms-14 and rms-6 was detected only in nymphs and ovaries, respectively. The rms-4 transcript was detected only in ovaries and salivary glands (Figure 2A and B). The rms-1, -3, -5, -15 and -19 to -22 transcripts were highly expressed in almost all tissues and tick stages analysed. The rms-21 transcript was expressed in all tick samples except ovaries, and no expression of the rms-22 transcript was detected in nymph and ovary samples (Figure 2C and D).
Figure 2: Semi-quantitative analysis of the expression of rms transcripts in R. microplus tick samples. A and C: PCR products obtained from cDNA samples from different tick tissues; the PCR products were run in 1% Tris-Borate agarose gels. B and D: Normalised mRNA density obtained after densitogram analyses of the amplified PCR products. All experiments were conducted in triplicate. Data are represented as the mean ± standard deviation (SD). The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively.
Protease inhibition of the recombinant R. microplus serpins (rRmS)
The coding sequences for rms-1, -3, -6 and -15 were cloned and expressed in P. pastoris yeast in order to test their inhibitory activity. These serpins were tested against different serine proteases including Chymotrypsin, Elastase, Kallikrein, Thrombin and Trypsin. The protease activity analysis showed that RmS-1 is a strong inhibitor of Chymotrypsin but a weak inhibitor of Trypsin and Thrombin. RmS-3 has Chymotrypsin and Elastase as its principal target molecules, and some faint inhibition of Trypsin and Thrombin was observed (Figure 3). RmS-15 exhibited strong inhibitory action against Thrombin, while RmS-6 only inhibits Chymotrypsin (Figure 3).
Figure 3: Protease inhibition profiles for the recombinant RmS-1, -3, -6 and RmS-15 obtained from the yeast P. pastoris. RmS-1 and RmS-16 were expressed intracellularly in the yeast P. pastoris. The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively. The experiments were conducted in triplicate. Data are represented as the mean ± standard deviation (SD).
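The grouping of RCL P1 residues reported above (polar uncharged, hydrophobic, or basic) can be reproduced with a short script. The sketch below is illustrative only and is not part of the study's pipeline; the residue-to-class mapping follows a standard amino-acid property table, and the example P1 assignments are hypothetical placeholders rather than values taken from Figure 1.

```python
# Illustrative sketch (not from the paper): classify serpin P1 residues by
# physicochemical class, mirroring the grouping described for the RmS RCLs.

RESIDUE_CLASS = {
    # polar, uncharged
    "S": "polar uncharged", "T": "polar uncharged", "N": "polar uncharged",
    "Q": "polar uncharged", "C": "polar uncharged", "Y": "polar uncharged",
    # hydrophobic
    "A": "hydrophobic", "V": "hydrophobic", "L": "hydrophobic",
    "I": "hydrophobic", "M": "hydrophobic", "F": "hydrophobic",
    "W": "hydrophobic", "P": "hydrophobic", "G": "hydrophobic",
    # basic
    "K": "basic", "R": "basic", "H": "basic",
    # acidic
    "D": "acidic", "E": "acidic",
}

def group_by_p1_class(p1_residues):
    """Group serpin names by the class of their P1 residue (one-letter code)."""
    groups = {}
    for name, residue in p1_residues.items():
        cls = RESIDUE_CLASS.get(residue.upper(), "unknown")
        groups.setdefault(cls, []).append(name)
    return groups

if __name__ == "__main__":
    # Hypothetical P1 residues used purely to demonstrate the grouping logic.
    example = {"RmS-1": "T", "RmS-3": "L", "RmS-15": "R"}
    for cls, members in group_by_p1_class(example).items():
        print(f"{cls}: {', '.join(members)}")
```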
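The inhibition profiles in Figure 3 rest on comparing enzymatic activity in the presence and absence of each recombinant serpin. A minimal sketch of that comparison, assuming absorbance readings taken at the fixed 30-second intervals described in the Methods, is shown below; the readings are invented for illustration and the slope-ratio approach is a common stand-in rather than the authors' exact calculation.

```python
# Illustrative sketch (not the authors' analysis code): estimate percent
# inhibition from substrate-hydrolysis progress curves read at fixed intervals.
import numpy as np

def hydrolysis_rate(absorbances, interval_s=30.0):
    """Initial reaction rate as the slope of absorbance versus time."""
    t = np.arange(len(absorbances)) * interval_s
    slope, _ = np.polyfit(t, np.asarray(absorbances, dtype=float), 1)
    return slope

def percent_inhibition(control_abs, serpin_abs, interval_s=30.0):
    """Compare enzyme activity without and with the recombinant serpin."""
    v0 = hydrolysis_rate(control_abs, interval_s)   # protease + substrate only
    vi = hydrolysis_rate(serpin_abs, interval_s)    # protease pre-incubated with serpin
    return 100.0 * (1.0 - vi / v0)

if __name__ == "__main__":
    # Invented absorbance readings (every 30 s) purely to show the calculation.
    control = [0.05, 0.12, 0.19, 0.26, 0.33]
    with_serpin = [0.05, 0.07, 0.09, 0.11, 0.13]
    print(f"inhibition: {percent_inhibition(control, with_serpin):.1f}%")
```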
Conclusion
The present study provides insight into the R. microplus serpin family, reporting four new R. microplus serpins and enabling the study of their differential expression in specific organs and at different developmental stages. The successful expression of recombinant serpins allowed the determination of their specific host targets. Finally, the results obtained offer an important source of information for understanding R. microplus serpin function and will deepen our knowledge of the role of serpins during tick-host interactions and tick development.
[ "Bioinformatics and Serpin identification", "Tick sources", "Total RNA extraction", "Isolation, cloning and sequencing of rms genes", "Cloning and expression of RmS in the yeast P. pastoris", "Expression analysis by semi-quantitative Reverse Transcription (RT)-PCR", "Protease inhibition assays", "Statistical analysis", "Identification of RmS", "RT-PCR analysis", "Protease Inhibition of the recombinant R. microplus serpins (rRmS)" ]
[ "The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi [33]. The current available tick serpin sequences of Amblyomma americanum [22], A. maculatum [24], A. variegatum [23], A. monolakensis [34] H. longicornis [28,35], I. ricinus [9,36], Ixodes scapularis [21], R. microplus [37], R. appendiculatus [26], and A. monolakensis [34] were retrieved from the National Centre for Biotechnology Information non-redundant protein (NCBI) (http://www.ncbi.nlm.nih.gov). These tick serpin sequences and the human α1-antitrypsin (GenBank, AAB59495) were used as queries against BmiGi V1 [38], BmiGi V2.1 [37], five SSH libraries [39], Australian tick transcriptome libraries [40] and RmiTR V1 [40] using the Basic Local Alignment Search Tool (BLAST) with the tblastX algorithm [41]. The qualified serpin sequences (E-value < 100) were six-frame translated for deduced protein sequences. The presence of the serpin conserved domain (cd00172) was analysed using the batch CD-Search Tool with an expected value threshold cut-off at 1 against NCBI’s Conserved Domain Database (CDD) [42]. SignalP 4.1 [43] was used to predict signal peptide cleavage sites. Also, the amino acid sequences of the R. microplus serpins were scanned for the presence of the C-terminal sequence Lys-Asp-Glu-Leu (KDEL) the endoplasmic reticulum lumen retention signal (KDEL motif, Prosite ID: PS00014) using ScanProSite (http://prosite.expasy.org/scanprosite/) in order to reduce the incidence of false positive results from the SignalP prediction. Putative N-glycosylation sites were predicted using the NetNGlyc 1.0 server (http://www.cbs.dtu.dk/services/NetNGlyc/).", "Hereford cattle at the tick colony maintained by Biosecurity Queensland from the Queensland Department of Agriculture, Fisheries and Forestry (DAFF) [44] were used to collect the acaricide susceptible strain R. microplus NRFS (Non-Resistant Field Strain). All of the eggs (E), larvae (L), nymphs (N), adult males (M) and feeding females (F) were collected from infested animals maintained within a moat pen (DAFF Animal Ethics approval SA2006/03/96). Tick organs were dissected from 17 day-old adult females for cDNA preparation including salivary glands (FSG), guts (FG) and ovaries (Ovr).", "RNA was isolated from eggs, nymphs, and the organs (guts, ovaries and salivary glands) dissected from semi-engorged females. Ticks/organs were ground in liquid nitrogen using diethylpyrocarbonate water-treated mortar and pestle prior to processing utilising the TRIzol® reagent (Gibco-BRL, USA). The tissue samples were stored in the ice-cold TRIzol® Reagent immediately after dissection, and then homogenised through a sterile 25-gauge needle. The total RNA was isolated following the manufacture’s protocol (Gibco-BRL, USA) and the mRNA was purified using Poly (A) Purist™ MAG Kit (Ambion, USA) as recommended by the manufacturer.", "cDNA from nymphs, ovaries and salivary glands was synthesised from purified mRNA using the BioScript™ Kit (Clontech, USA) following the manufacturer’s recommended protocol. PCRs were conducted for isolation of the rms genes using gene specific 5′ and 3′ primers, and designed for the amplification of the coding sequences (CDS) of serpin. Following the amplification and confirmation of the PCR products by agarose gel electrophoresis, the PCR products were sub-cloned into the pCR 2.1-TOPO® vector following the manufacturer’s instructions (Invitrogen, USA). 
The recombinant plasmids obtained were named pCR-rms1, rms2 and pCR-rms (n + 1). Ten individual colonies for each clone were selected and grown in 5 mL of LB broths supplemented with ampicillin (50 μg.mL−1) 18 hours prior to the purification of the plasmid using the QIAprep Spin miniprep kit (Qiagen, USA). The direct sequencing of the plasmid inserts was performed using the BigDye v3.1 technology (Applied Biosystems, USA) and analysed on the Applied Biosystems 3130xl Genetic Analyser at the Griffith University DNA Sequencing Facility (School of Biomolecular and Biomedical Science, Griffith University, Qld, Australia). The sequencing reactions were prepared using M13 primers in a 96-well plate format according to the manufacturer’s instructions (Applied Biosystems, USA). The sequences were visualised, edited and aligned using Sequencher v4.5 (Gene Codes Corporation, USA) to remove vector sequence and to thus confirm the CDS of the rms genes.", "The coding sequence of rms1, -rms3, -rms6, and rms15 were inserted into the pPICZα A and pPIC-B expression vector (Invitrogen, USA) for intracellular and extracellular expression. The resultant recombinant plasmids were transformed into the yeast P. pastoris GS115 and SMD1168H by electroporation as described in the Pichia Expression Kit manual (Invitrogen). The recombinant protein were purified from yeast pellet and supernatant using a Histrap FF 5 mL column (GE Healthcare, USA) as recommended by the manufacturer following by a gel filtration purification step using a HiLoad™ 16/600 Superdex™ 200 pg column (GE Healthcare, USA).", "Gene specific primers were used to determine the gene expression pattern in eggs, nymphs, female guts, ovaries and salivary gland samples. A total of fifty ticks were dissected to isolate the different organs samples, and 25 nymphs were used on the preparation of the nymph sample. Approximately, five grams of eggs from ten different ticks were processed to conform this experimental sample. Briefly, the densitograms of amplified PCR products were analysed by ImageJ and normalised using the following formula, Y = V + V(H-X)X where Y = normalised mRNA density, V = observed rms PCR band density in individual samples, H = highest tick housekeeping gene PCR band density among tested samples, X = tick housekeeping gene density in individual samples [22]. All experimental samples were processed in triplicated.", "RmS-1, RmS-3, RmS-6 and RmS-15 expressed in P. pastoris yeast using the methodology reported previously [6,45] were used in this assay. The inhibition test was conducted as reported formerly [46] to screen the activity of RmS-1, -3, -6 and -15 against different proteases including bovine Chymotrypsin and Trypsin, porcine Elastase and Kallikrein, human Plasmin, and Thrombin (Sigma-Aldrich, USA). Briefly, 96-well plates were blocked with Blocking buffer (20 mM Tris-HCl, 150 mM NaCl and 5% skim milk, pH 7.6), and washed three times with Wash buffer (20 mM Tris-HCl, 150 mM NaCl, 0.01% Tween 20, pH 7.6) every 5 min. A total of 50 μL containing each protease were incubated with 50 fold molar of RmS-1, RmS-3, RmS-6 and RmS-15 at 37°C for 60 minutes in duplicate. The specific substrate (0.13 mM) was added and substrate hydrolysis was monitored every 30 second using Epoch Microplate Spectrophotometer (BioTek, USA) (see Table 1). The inhibition rate was calculated by comparing the enzymatic activity in the presence and absence of recombinant RmS. 
The experiments were conducted in triplicate.Table 1\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\nEnzymes*\n\n[nM]\n\nBinding buffer\n\nSubstrates*\n\n[mM]\nChymotrypsin1050 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0N-Succinyl-Ala-Ala-Pro-Phe-p-nitroanilide0.13Elastase5050 mM Hepes, 100 mM Nacl, 0.01 % Triton X-100, pH 7.4N-Succinyl-Ala-Ala-Ala-p-nitroanilide0.13Kallikrein5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5N-Benzoyl-Pro-Phe-Arg-p-nitroanilide hydrochloride0.13Plasmin5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5Gly-Arg-p-nitroanilide dihydrochloride0.13Thrombin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0Sar-Pro-Arg-p-nitroanilide dihydrochloride0.13Trypsin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0,N-Benzoyl-Phe-Val-Arg-p-nitroanilide hydrochloride0.13*All enzymes and substrates were purchased from Sigma-Aldrich, USA.\n\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\n*All enzymes and substrates were purchased from Sigma-Aldrich, USA.", "The semi-quantitative PCR and protease inhibition assay data were evaluated by one-way ANOVA with Bonferroni testing (p ≤ 0.05). All analyses were conducted by the GraphPad Prism version 6.02 (GraphPad Software). Data were represented as the mean ± standard deviation (SD).", "The analysis of the R. microplus sequence databases revealed twenty-two different putative RmS ultimately identified after the elimination of redundant sequences. The full CDS for RmS-1 to RmS-18 reported by Tirloni and co-workers [30] were found in this study. The percentage identities after the alignment of the RmS was variable for example, RmS-3 and RmS-20 showed a 94% and 31% identities with hypothetical bacterial serpin (Paraphysa parvula) and R. appendiculatus Serpin-3, respectively. The reactive center loop characteristic of serine protease inhibitor family was found in the CDS of RmS- 19 to – 22. These new sequences were deposited in the Genbank with the following Accession Numbers: RmS-19: KP121409, RmS-20: KP121408, RmS-21: KP121411, and RmS-22: KP121414.\nThere was observed a high variability of the identity among the RmS family that ranged from 29% between RmS-14 and RmS-15 to 62% between RmS-3 and RmS-5. The characteristic reactive center loop (RCL) domain associated with the serpin family members was found in all RmS (Figure 1). The type of amino acid at the P1 site of the RCL showed a high variation, for example in RmS-1, -4, 7, 10, 11, -14, 20 to 22 have a polar uncharged amino acid, but RmS-2, -3, -12 and -19 have hydrophobic amino acids. Basic amino acids such as arginine or lysine at the P1 site were found in RmS-5, -6, -9, -13. -15 to -18 (Figure 1). The consensus amino acid motif VNEEGT [47] and the canonical sequence representing the RCL hinge from P17 to P8 (EEGTIATAVT) [18] which are characteristic of serpins were highly conserved in the RmS aligned (Figure 1). Finally, the data confirmed the conservation of the reactive center loop and the characteristic motif of this proteins family in RmS -19 to RmS-22.Figure 1\nAmino acid sequence alignment of the characteristic reactive center loops of\nR. microplus\nserpins (RmS-1 to –22). Highly conserved residues and motifs were highlighted in gray shade. 
The P1 regions were highlighted with a dash dot line rectangle over the specific amino acid sequence [47].\n\nAmino acid sequence alignment of the characteristic reactive center loops of\nR. microplus\nserpins (RmS-1 to –22). Highly conserved residues and motifs were highlighted in gray shade. The P1 regions were highlighted with a dash dot line rectangle over the specific amino acid sequence [47].", "Reverse transcriptase PCR analysis was used to validate the spatial expression of rms-1, rms-3 to -6, rms -14, rms -15, -19 to rms-22 transcripts. Data showed expression of the rms transcripts in different organs and developmental stages of R. microplus (Figure 2A -D). High expression of rms-1, -3, -5 and -15 in adult female compared with nymph stage was observed (Figure 2A and B), and rms-3 transcript was not detected in eggs. Expression of rms -14 and rms -6 was only detected in nymphs and ovaries respectively. The rms -4 was detected only in ovaries and salivary glands (Figure 2A and B). The rms-1, -3, -5, -15, -19 to -22 transcripts were highly expressed in almost all tissues and tick stages analysed. The rms-21 transcript was expressed in all tick samples except in ovaries, no expression of rms-22 transcript was detected in nymph and ovaries (Figure 2C and D).Figure 2\nSemi-quantitative analysis of the expression of\nrms\ntranscripts in\nR. microplus\ntick samples\n.\nA and C: PCR products obtained from cDNA samples from different tick’ tissues. The PCR products were run in 1% Tris Borate agarose gels. B and D: Normalised mRNA density was obtained after the densitogram analyses of the amplified PCR products. All experiments were conducted in triplicate. Data were represented as the mean ± standard deviation (SD). The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively.\n\nSemi-quantitative analysis of the expression of\nrms\ntranscripts in\nR. microplus\ntick samples\n.\nA and C: PCR products obtained from cDNA samples from different tick’ tissues. The PCR products were run in 1% Tris Borate agarose gels. B and D: Normalised mRNA density was obtained after the densitogram analyses of the amplified PCR products. All experiments were conducted in triplicate. Data were represented as the mean ± standard deviation (SD). The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively.", "The coding sequences for rms-1, -3, -6 and -15 were cloned and expressed in P. pastoris yeast in order to test their inhibitory activity. These serpins were tested against different serine proteases including Chymotrypsin, Elastase, Kallikrein, Thrombin and Trypsin. The protease activity analysis showed that RmS-1 is a strong inhibitor of Chymotrypsin but weak inhibitor of Trypsin and Thrombin. RmS-3 has Chymotrypsin and Elastase as its principal target molecules and some faint inhibition of Trypsin and Thrombin was observed (Figure 3). RmS-15 exhibited strong inhibitory action of Thrombin while RmS-6 only inhibits Chymotrypsin (Figure 3).Figure 3\nProtease inhibition profiles for the recombinant RmS-1, -3, -6 and RmS-15 obtained from the yeast\nP. pastoris\n. The RmS-1 and RmS-16 were expressed intracellular in the yeast P. pastoris. The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively. The experiments were conducted in triplicate. 
Data were represented as the mean ± standard deviation (SD)." ]
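The serpin identification step retained BLAST hits with an E-value below 100 before translation and domain checking. The sketch below is not the Yabi workflow itself; it assumes the search results were exported in the standard 12-column tabular format (outfmt 6), where the E-value sits in the 11th column, and the input file name is hypothetical.

```python
# Illustrative sketch (not the Yabi workflow): keep BLAST hits below an
# E-value cut-off from standard 12-column tabular output (-outfmt 6),
# where column 11 holds the E-value.
import csv

def filter_blast_hits(path, evalue_cutoff=100.0):
    """Return (query, subject, evalue) tuples passing the E-value cut-off."""
    kept = []
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if len(row) < 12 or row[0].startswith("#"):
                continue  # skip comment lines and malformed rows
            query, subject, evalue = row[0], row[1], float(row[10])
            if evalue < evalue_cutoff:
                kept.append((query, subject, evalue))
    return kept

if __name__ == "__main__":
    # "serpin_vs_bmigi.tsv" is a hypothetical name for the exported search results.
    for query, subject, evalue in filter_blast_hits("serpin_vs_bmigi.tsv"):
        print(f"{query}\t{subject}\t{evalue:.2e}")
```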
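The ScanProSite screen for the C-terminal KDEL endoplasmic-reticulum retention signal was used to reduce false positives from the SignalP prediction. A minimal stand-in for that check, assuming plain one-letter protein sequences, is sketched below; it only tests for a literal KDEL C-terminus and does not replace the full PROSITE pattern or ScanProSite itself.

```python
# Minimal stand-in (not ScanProSite): flag deduced serpin sequences whose
# C-terminus carries the literal KDEL endoplasmic-reticulum retention signal.
def has_kdel(protein_seq: str) -> bool:
    return protein_seq.strip().upper().endswith("KDEL")

if __name__ == "__main__":
    # Toy sequences for illustration only (not real RmS sequences).
    examples = {
        "candidate_secreted": "MKTLLVLALLAAVTSR",
        "candidate_retained": "MKTLLVLALLAAKDEL",
    }
    for name, seq in examples.items():
        print(name, "KDEL:", has_kdel(seq))
```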
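The band-density normalisation for the semi-quantitative RT-PCR is given in the Methods as Y = V + V(H-X)X, with V the observed rms band density, H the highest housekeeping-gene density across samples, and X the housekeeping density in the individual sample. As printed, the expression appears to lack a division sign; the sketch below assumes the intended form Y = V + V(H - X)/X (equivalently V·H/X), which rescales each band to the sample with the strongest housekeeping signal. That division is an assumption, noted again in the code.

```python
# Sketch of the densitometric normalisation for semi-quantitative RT-PCR.
# ASSUMPTION: the printed formula Y = V + V(H-X)X is interpreted as
# Y = V + V*(H - X)/X  (i.e. V*H/X), scaling each band to the sample with
# the strongest housekeeping-gene signal.
def normalised_density(v, x, h):
    """v: observed rms band density; x: housekeeping density in that sample;
    h: highest housekeeping density among all samples."""
    return v + v * (h - x) / x

if __name__ == "__main__":
    # Invented densities (arbitrary ImageJ units) to demonstrate the scaling.
    housekeeping = {"ovary": 180.0, "gut": 240.0, "salivary_gland": 210.0}
    rms_band = {"ovary": 95.0, "gut": 130.0, "salivary_gland": 60.0}
    h = max(housekeeping.values())
    for tissue, v in rms_band.items():
        print(tissue, round(normalised_density(v, housekeeping[tissue], h), 1))
```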
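The statistical treatment (one-way ANOVA with Bonferroni testing at p ≤ 0.05) was run in GraphPad Prism; an equivalent open-source sketch using SciPy is shown below. The triplicate values are placeholders, and the pairwise t-tests against a Bonferroni-adjusted alpha are a common stand-in for Prism's post-test rather than a reproduction of it.

```python
# Sketch of the statistical treatment (one-way ANOVA + Bonferroni-corrected
# pairwise comparisons) using SciPy instead of GraphPad Prism.
# The triplicate values below are placeholders, not data from the study.
from itertools import combinations
from scipy import stats

groups = {
    "control": [1.00, 0.98, 1.02],
    "RmS-3":   [0.55, 0.60, 0.58],
    "RmS-15":  [0.20, 0.25, 0.22],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))
alpha_bonf = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
for a, b in pairs:
    t_stat, p_val = stats.ttest_ind(groups[a], groups[b])
    flag = "significant" if p_val < alpha_bonf else "n.s."
    print(f"{a} vs {b}: p = {p_val:.4f} ({flag} at alpha {alpha_bonf:.4f})")
```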
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Bioinformatics and Serpin identification", "Tick sources", "Total RNA extraction", "Isolation, cloning and sequencing of rms genes", "Cloning and expression of RmS in the yeast P. pastoris", "Expression analysis by semi-quantitative Reverse Transcription (RT)-PCR", "Protease inhibition assays", "Statistical analysis", "Results", "Identification of RmS", "RT-PCR analysis", "Protease Inhibition of the recombinant R. microplus serpins (rRmS)", "Discussion", "Conclusion" ]
[ "Ticks are worldwide-distributed ectoparasites that have evolved as obligate haematophagous arthropods of animals and humans. These parasites have been categorised after mosquitoes as the second most important group of vectors transmitting disease-causing agents to mammals [1,2]. In particular, the cattle tick (Rhipicephalus microplus) is considered the most economically important ectoparasite of cattle distributed in tropical and subtropical regions of the world. The principal reason for this affirmation is that R. microplus affects beef and dairy cattle producers causing direct economic losses due to host parasitism and tick borne diseases such as anaplasmosis and babesiosis [3,4].\nThe success of the parasitic cycle of R. microplus begins with the larval capability to overcome haemostatic and immunological responses of the host. Following larval attachment, a great amount of blood is ingested and digested by ticks in order to complete their parasitic cycle. The full-engorged adult females drop off from host to initiate the non-parasitic phase with the laying and hatching of eggs. R. microplus has an intensive production and physiological secretion of proteins during the entire parasitic cycle in order to disrupt host responses such as protease inhibitors which play an important role in tick survival, feeding and development [5-8]. Serpins (Serine Protease Inhibitors) are important regulatory molecules with roles during host- parasite interactions such as fibrinolysis [9], host response mediated by complement proteases [10], and inflammation [11-13] among other tick physiological functions [14,15]. These protease inhibitors conformed a large superfamily that is extensively distributed within bacteria, insects, parasite, animals and plants [16,17]. Serpins differ from Kunitz protease inhibitors by distinctive conformational change during the inhibition of their target proteases. The presence of a small domain designated as the reactive center loop (RCL) constitutes their most notable characteristic. This domain extends outside of the protein and leads to the formation of the firm bond of the serpin with its specific proteinase [18-20]. Members of the tick Serpin family have been studied and recommended as useful targets for tick vaccine development [21]. Consequently, serpin sequences from diverse tick species have been reported such as, Amblyomma americanum [22], Amblyomma variegatum [23], Amblyomma maculatum [24], Dermacentor variabilis [25]; Rhipicephalus appendiculatus [26], R. microplus [6,27], Haemaphysalis longicornis [28], Ixodes scapularis [21,29], and Ixodes ricinus [9,11]. Additionally, an in silico identification of R. microplus serpin was conducted using different databases [30]. However, a great number of tick serpins continue to be functionally uncharacterised which limits the studies related with their function during host – parasite interaction [11,31,32].\nIn this study serpins from different R. microplus genomic databases were identified and four new serpins molecules were reported. In silico characterization of these serpins was undertaken using bioinformatics methods. Additionally, R. microplus serpins (RmS) were cloned, sequenced, and expressed in order to determine their protease inhibition specificity. The spatial expression of these serpins was carried out by PCR using cDNA from different tick life stages and female adult organs. 
Finally, this study is an important step forward in uncovering the role of RmS in the physiology of this ectoparasite and their potential use for the future improvement of ticks control methods.", " Bioinformatics and Serpin identification The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi [33]. The current available tick serpin sequences of Amblyomma americanum [22], A. maculatum [24], A. variegatum [23], A. monolakensis [34] H. longicornis [28,35], I. ricinus [9,36], Ixodes scapularis [21], R. microplus [37], R. appendiculatus [26], and A. monolakensis [34] were retrieved from the National Centre for Biotechnology Information non-redundant protein (NCBI) (http://www.ncbi.nlm.nih.gov). These tick serpin sequences and the human α1-antitrypsin (GenBank, AAB59495) were used as queries against BmiGi V1 [38], BmiGi V2.1 [37], five SSH libraries [39], Australian tick transcriptome libraries [40] and RmiTR V1 [40] using the Basic Local Alignment Search Tool (BLAST) with the tblastX algorithm [41]. The qualified serpin sequences (E-value < 100) were six-frame translated for deduced protein sequences. The presence of the serpin conserved domain (cd00172) was analysed using the batch CD-Search Tool with an expected value threshold cut-off at 1 against NCBI’s Conserved Domain Database (CDD) [42]. SignalP 4.1 [43] was used to predict signal peptide cleavage sites. Also, the amino acid sequences of the R. microplus serpins were scanned for the presence of the C-terminal sequence Lys-Asp-Glu-Leu (KDEL) the endoplasmic reticulum lumen retention signal (KDEL motif, Prosite ID: PS00014) using ScanProSite (http://prosite.expasy.org/scanprosite/) in order to reduce the incidence of false positive results from the SignalP prediction. Putative N-glycosylation sites were predicted using the NetNGlyc 1.0 server (http://www.cbs.dtu.dk/services/NetNGlyc/).\nThe identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi [33]. The current available tick serpin sequences of Amblyomma americanum [22], A. maculatum [24], A. variegatum [23], A. monolakensis [34] H. longicornis [28,35], I. ricinus [9,36], Ixodes scapularis [21], R. microplus [37], R. appendiculatus [26], and A. monolakensis [34] were retrieved from the National Centre for Biotechnology Information non-redundant protein (NCBI) (http://www.ncbi.nlm.nih.gov). These tick serpin sequences and the human α1-antitrypsin (GenBank, AAB59495) were used as queries against BmiGi V1 [38], BmiGi V2.1 [37], five SSH libraries [39], Australian tick transcriptome libraries [40] and RmiTR V1 [40] using the Basic Local Alignment Search Tool (BLAST) with the tblastX algorithm [41]. The qualified serpin sequences (E-value < 100) were six-frame translated for deduced protein sequences. The presence of the serpin conserved domain (cd00172) was analysed using the batch CD-Search Tool with an expected value threshold cut-off at 1 against NCBI’s Conserved Domain Database (CDD) [42]. SignalP 4.1 [43] was used to predict signal peptide cleavage sites. Also, the amino acid sequences of the R. microplus serpins were scanned for the presence of the C-terminal sequence Lys-Asp-Glu-Leu (KDEL) the endoplasmic reticulum lumen retention signal (KDEL motif, Prosite ID: PS00014) using ScanProSite (http://prosite.expasy.org/scanprosite/) in order to reduce the incidence of false positive results from the SignalP prediction. 
Putative N-glycosylation sites were predicted using the NetNGlyc 1.0 server (http://www.cbs.dtu.dk/services/NetNGlyc/).\n Tick sources Hereford cattle at the tick colony maintained by Biosecurity Queensland from the Queensland Department of Agriculture, Fisheries and Forestry (DAFF) [44] were used to collect the acaricide susceptible strain R. microplus NRFS (Non-Resistant Field Strain). All of the eggs (E), larvae (L), nymphs (N), adult males (M) and feeding females (F) were collected from infested animals maintained within a moat pen (DAFF Animal Ethics approval SA2006/03/96). Tick organs were dissected from 17 day-old adult females for cDNA preparation including salivary glands (FSG), guts (FG) and ovaries (Ovr).\nHereford cattle at the tick colony maintained by Biosecurity Queensland from the Queensland Department of Agriculture, Fisheries and Forestry (DAFF) [44] were used to collect the acaricide susceptible strain R. microplus NRFS (Non-Resistant Field Strain). All of the eggs (E), larvae (L), nymphs (N), adult males (M) and feeding females (F) were collected from infested animals maintained within a moat pen (DAFF Animal Ethics approval SA2006/03/96). Tick organs were dissected from 17 day-old adult females for cDNA preparation including salivary glands (FSG), guts (FG) and ovaries (Ovr).\n Total RNA extraction RNA was isolated from eggs, nymphs, and the organs (guts, ovaries and salivary glands) dissected from semi-engorged females. Ticks/organs were ground in liquid nitrogen using diethylpyrocarbonate water-treated mortar and pestle prior to processing utilising the TRIzol® reagent (Gibco-BRL, USA). The tissue samples were stored in the ice-cold TRIzol® Reagent immediately after dissection, and then homogenised through a sterile 25-gauge needle. The total RNA was isolated following the manufacture’s protocol (Gibco-BRL, USA) and the mRNA was purified using Poly (A) Purist™ MAG Kit (Ambion, USA) as recommended by the manufacturer.\nRNA was isolated from eggs, nymphs, and the organs (guts, ovaries and salivary glands) dissected from semi-engorged females. Ticks/organs were ground in liquid nitrogen using diethylpyrocarbonate water-treated mortar and pestle prior to processing utilising the TRIzol® reagent (Gibco-BRL, USA). The tissue samples were stored in the ice-cold TRIzol® Reagent immediately after dissection, and then homogenised through a sterile 25-gauge needle. The total RNA was isolated following the manufacture’s protocol (Gibco-BRL, USA) and the mRNA was purified using Poly (A) Purist™ MAG Kit (Ambion, USA) as recommended by the manufacturer.\n Isolation, cloning and sequencing of rms genes cDNA from nymphs, ovaries and salivary glands was synthesised from purified mRNA using the BioScript™ Kit (Clontech, USA) following the manufacturer’s recommended protocol. PCRs were conducted for isolation of the rms genes using gene specific 5′ and 3′ primers, and designed for the amplification of the coding sequences (CDS) of serpin. Following the amplification and confirmation of the PCR products by agarose gel electrophoresis, the PCR products were sub-cloned into the pCR 2.1-TOPO® vector following the manufacturer’s instructions (Invitrogen, USA). The recombinant plasmids obtained were named pCR-rms1, rms2 and pCR-rms (n + 1). Ten individual colonies for each clone were selected and grown in 5 mL of LB broths supplemented with ampicillin (50 μg.mL−1) 18 hours prior to the purification of the plasmid using the QIAprep Spin miniprep kit (Qiagen, USA). 
The direct sequencing of the plasmid inserts was performed using the BigDye v3.1 technology (Applied Biosystems, USA) and analysed on the Applied Biosystems 3130xl Genetic Analyser at the Griffith University DNA Sequencing Facility (School of Biomolecular and Biomedical Science, Griffith University, Qld, Australia). The sequencing reactions were prepared using M13 primers in a 96-well plate format according to the manufacturer’s instructions (Applied Biosystems, USA). The sequences were visualised, edited and aligned using Sequencher v4.5 (Gene Codes Corporation, USA) to remove vector sequence and to thus confirm the CDS of the rms genes.\ncDNA from nymphs, ovaries and salivary glands was synthesised from purified mRNA using the BioScript™ Kit (Clontech, USA) following the manufacturer’s recommended protocol. PCRs were conducted for isolation of the rms genes using gene specific 5′ and 3′ primers, and designed for the amplification of the coding sequences (CDS) of serpin. Following the amplification and confirmation of the PCR products by agarose gel electrophoresis, the PCR products were sub-cloned into the pCR 2.1-TOPO® vector following the manufacturer’s instructions (Invitrogen, USA). The recombinant plasmids obtained were named pCR-rms1, rms2 and pCR-rms (n + 1). Ten individual colonies for each clone were selected and grown in 5 mL of LB broths supplemented with ampicillin (50 μg.mL−1) 18 hours prior to the purification of the plasmid using the QIAprep Spin miniprep kit (Qiagen, USA). The direct sequencing of the plasmid inserts was performed using the BigDye v3.1 technology (Applied Biosystems, USA) and analysed on the Applied Biosystems 3130xl Genetic Analyser at the Griffith University DNA Sequencing Facility (School of Biomolecular and Biomedical Science, Griffith University, Qld, Australia). The sequencing reactions were prepared using M13 primers in a 96-well plate format according to the manufacturer’s instructions (Applied Biosystems, USA). The sequences were visualised, edited and aligned using Sequencher v4.5 (Gene Codes Corporation, USA) to remove vector sequence and to thus confirm the CDS of the rms genes.\n Cloning and expression of RmS in the yeast P. pastoris The coding sequence of rms1, -rms3, -rms6, and rms15 were inserted into the pPICZα A and pPIC-B expression vector (Invitrogen, USA) for intracellular and extracellular expression. The resultant recombinant plasmids were transformed into the yeast P. pastoris GS115 and SMD1168H by electroporation as described in the Pichia Expression Kit manual (Invitrogen). The recombinant protein were purified from yeast pellet and supernatant using a Histrap FF 5 mL column (GE Healthcare, USA) as recommended by the manufacturer following by a gel filtration purification step using a HiLoad™ 16/600 Superdex™ 200 pg column (GE Healthcare, USA).\nThe coding sequence of rms1, -rms3, -rms6, and rms15 were inserted into the pPICZα A and pPIC-B expression vector (Invitrogen, USA) for intracellular and extracellular expression. The resultant recombinant plasmids were transformed into the yeast P. pastoris GS115 and SMD1168H by electroporation as described in the Pichia Expression Kit manual (Invitrogen). 
The recombinant protein were purified from yeast pellet and supernatant using a Histrap FF 5 mL column (GE Healthcare, USA) as recommended by the manufacturer following by a gel filtration purification step using a HiLoad™ 16/600 Superdex™ 200 pg column (GE Healthcare, USA).\n Expression analysis by semi-quantitative Reverse Transcription (RT)-PCR Gene specific primers were used to determine the gene expression pattern in eggs, nymphs, female guts, ovaries and salivary gland samples. A total of fifty ticks were dissected to isolate the different organs samples, and 25 nymphs were used on the preparation of the nymph sample. Approximately, five grams of eggs from ten different ticks were processed to conform this experimental sample. Briefly, the densitograms of amplified PCR products were analysed by ImageJ and normalised using the following formula, Y = V + V(H-X)X where Y = normalised mRNA density, V = observed rms PCR band density in individual samples, H = highest tick housekeeping gene PCR band density among tested samples, X = tick housekeeping gene density in individual samples [22]. All experimental samples were processed in triplicated.\nGene specific primers were used to determine the gene expression pattern in eggs, nymphs, female guts, ovaries and salivary gland samples. A total of fifty ticks were dissected to isolate the different organs samples, and 25 nymphs were used on the preparation of the nymph sample. Approximately, five grams of eggs from ten different ticks were processed to conform this experimental sample. Briefly, the densitograms of amplified PCR products were analysed by ImageJ and normalised using the following formula, Y = V + V(H-X)X where Y = normalised mRNA density, V = observed rms PCR band density in individual samples, H = highest tick housekeeping gene PCR band density among tested samples, X = tick housekeeping gene density in individual samples [22]. All experimental samples were processed in triplicated.\n Protease inhibition assays RmS-1, RmS-3, RmS-6 and RmS-15 expressed in P. pastoris yeast using the methodology reported previously [6,45] were used in this assay. The inhibition test was conducted as reported formerly [46] to screen the activity of RmS-1, -3, -6 and -15 against different proteases including bovine Chymotrypsin and Trypsin, porcine Elastase and Kallikrein, human Plasmin, and Thrombin (Sigma-Aldrich, USA). Briefly, 96-well plates were blocked with Blocking buffer (20 mM Tris-HCl, 150 mM NaCl and 5% skim milk, pH 7.6), and washed three times with Wash buffer (20 mM Tris-HCl, 150 mM NaCl, 0.01% Tween 20, pH 7.6) every 5 min. A total of 50 μL containing each protease were incubated with 50 fold molar of RmS-1, RmS-3, RmS-6 and RmS-15 at 37°C for 60 minutes in duplicate. The specific substrate (0.13 mM) was added and substrate hydrolysis was monitored every 30 second using Epoch Microplate Spectrophotometer (BioTek, USA) (see Table 1). The inhibition rate was calculated by comparing the enzymatic activity in the presence and absence of recombinant RmS. 
The experiments were conducted in triplicate.Table 1\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\nEnzymes*\n\n[nM]\n\nBinding buffer\n\nSubstrates*\n\n[mM]\nChymotrypsin1050 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0N-Succinyl-Ala-Ala-Pro-Phe-p-nitroanilide0.13Elastase5050 mM Hepes, 100 mM Nacl, 0.01 % Triton X-100, pH 7.4N-Succinyl-Ala-Ala-Ala-p-nitroanilide0.13Kallikrein5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5N-Benzoyl-Pro-Phe-Arg-p-nitroanilide hydrochloride0.13Plasmin5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5Gly-Arg-p-nitroanilide dihydrochloride0.13Thrombin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0Sar-Pro-Arg-p-nitroanilide dihydrochloride0.13Trypsin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0,N-Benzoyl-Phe-Val-Arg-p-nitroanilide hydrochloride0.13*All enzymes and substrates were purchased from Sigma-Aldrich, USA.\n\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\n*All enzymes and substrates were purchased from Sigma-Aldrich, USA.\nRmS-1, RmS-3, RmS-6 and RmS-15 expressed in P. pastoris yeast using the methodology reported previously [6,45] were used in this assay. The inhibition test was conducted as reported formerly [46] to screen the activity of RmS-1, -3, -6 and -15 against different proteases including bovine Chymotrypsin and Trypsin, porcine Elastase and Kallikrein, human Plasmin, and Thrombin (Sigma-Aldrich, USA). Briefly, 96-well plates were blocked with Blocking buffer (20 mM Tris-HCl, 150 mM NaCl and 5% skim milk, pH 7.6), and washed three times with Wash buffer (20 mM Tris-HCl, 150 mM NaCl, 0.01% Tween 20, pH 7.6) every 5 min. A total of 50 μL containing each protease were incubated with 50 fold molar of RmS-1, RmS-3, RmS-6 and RmS-15 at 37°C for 60 minutes in duplicate. The specific substrate (0.13 mM) was added and substrate hydrolysis was monitored every 30 second using Epoch Microplate Spectrophotometer (BioTek, USA) (see Table 1). The inhibition rate was calculated by comparing the enzymatic activity in the presence and absence of recombinant RmS. 
The experiments were conducted in triplicate.Table 1\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\nEnzymes*\n\n[nM]\n\nBinding buffer\n\nSubstrates*\n\n[mM]\nChymotrypsin1050 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0N-Succinyl-Ala-Ala-Pro-Phe-p-nitroanilide0.13Elastase5050 mM Hepes, 100 mM Nacl, 0.01 % Triton X-100, pH 7.4N-Succinyl-Ala-Ala-Ala-p-nitroanilide0.13Kallikrein5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5N-Benzoyl-Pro-Phe-Arg-p-nitroanilide hydrochloride0.13Plasmin5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5Gly-Arg-p-nitroanilide dihydrochloride0.13Thrombin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0Sar-Pro-Arg-p-nitroanilide dihydrochloride0.13Trypsin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0,N-Benzoyl-Phe-Val-Arg-p-nitroanilide hydrochloride0.13*All enzymes and substrates were purchased from Sigma-Aldrich, USA.\n\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\n*All enzymes and substrates were purchased from Sigma-Aldrich, USA.\n Statistical analysis The semi-quantitative PCR and protease inhibition assay data were evaluated by one-way ANOVA with Bonferroni testing (p ≤ 0.05). All analyses were conducted by the GraphPad Prism version 6.02 (GraphPad Software). Data were represented as the mean ± standard deviation (SD).\nThe semi-quantitative PCR and protease inhibition assay data were evaluated by one-way ANOVA with Bonferroni testing (p ≤ 0.05). All analyses were conducted by the GraphPad Prism version 6.02 (GraphPad Software). Data were represented as the mean ± standard deviation (SD).", "The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi [33]. The current available tick serpin sequences of Amblyomma americanum [22], A. maculatum [24], A. variegatum [23], A. monolakensis [34] H. longicornis [28,35], I. ricinus [9,36], Ixodes scapularis [21], R. microplus [37], R. appendiculatus [26], and A. monolakensis [34] were retrieved from the National Centre for Biotechnology Information non-redundant protein (NCBI) (http://www.ncbi.nlm.nih.gov). These tick serpin sequences and the human α1-antitrypsin (GenBank, AAB59495) were used as queries against BmiGi V1 [38], BmiGi V2.1 [37], five SSH libraries [39], Australian tick transcriptome libraries [40] and RmiTR V1 [40] using the Basic Local Alignment Search Tool (BLAST) with the tblastX algorithm [41]. The qualified serpin sequences (E-value < 100) were six-frame translated for deduced protein sequences. The presence of the serpin conserved domain (cd00172) was analysed using the batch CD-Search Tool with an expected value threshold cut-off at 1 against NCBI’s Conserved Domain Database (CDD) [42]. SignalP 4.1 [43] was used to predict signal peptide cleavage sites. Also, the amino acid sequences of the R. microplus serpins were scanned for the presence of the C-terminal sequence Lys-Asp-Glu-Leu (KDEL) the endoplasmic reticulum lumen retention signal (KDEL motif, Prosite ID: PS00014) using ScanProSite (http://prosite.expasy.org/scanprosite/) in order to reduce the incidence of false positive results from the SignalP prediction. 
Putative N-glycosylation sites were predicted using the NetNGlyc 1.0 server (http://www.cbs.dtu.dk/services/NetNGlyc/).", "Hereford cattle at the tick colony maintained by Biosecurity Queensland from the Queensland Department of Agriculture, Fisheries and Forestry (DAFF) [44] were used to collect the acaricide susceptible strain R. microplus NRFS (Non-Resistant Field Strain). All of the eggs (E), larvae (L), nymphs (N), adult males (M) and feeding females (F) were collected from infested animals maintained within a moat pen (DAFF Animal Ethics approval SA2006/03/96). Tick organs were dissected from 17 day-old adult females for cDNA preparation including salivary glands (FSG), guts (FG) and ovaries (Ovr).", "RNA was isolated from eggs, nymphs, and the organs (guts, ovaries and salivary glands) dissected from semi-engorged females. Ticks/organs were ground in liquid nitrogen using diethylpyrocarbonate water-treated mortar and pestle prior to processing utilising the TRIzol® reagent (Gibco-BRL, USA). The tissue samples were stored in the ice-cold TRIzol® Reagent immediately after dissection, and then homogenised through a sterile 25-gauge needle. The total RNA was isolated following the manufacture’s protocol (Gibco-BRL, USA) and the mRNA was purified using Poly (A) Purist™ MAG Kit (Ambion, USA) as recommended by the manufacturer.", "cDNA from nymphs, ovaries and salivary glands was synthesised from purified mRNA using the BioScript™ Kit (Clontech, USA) following the manufacturer’s recommended protocol. PCRs were conducted for isolation of the rms genes using gene specific 5′ and 3′ primers, and designed for the amplification of the coding sequences (CDS) of serpin. Following the amplification and confirmation of the PCR products by agarose gel electrophoresis, the PCR products were sub-cloned into the pCR 2.1-TOPO® vector following the manufacturer’s instructions (Invitrogen, USA). The recombinant plasmids obtained were named pCR-rms1, rms2 and pCR-rms (n + 1). Ten individual colonies for each clone were selected and grown in 5 mL of LB broths supplemented with ampicillin (50 μg.mL−1) 18 hours prior to the purification of the plasmid using the QIAprep Spin miniprep kit (Qiagen, USA). The direct sequencing of the plasmid inserts was performed using the BigDye v3.1 technology (Applied Biosystems, USA) and analysed on the Applied Biosystems 3130xl Genetic Analyser at the Griffith University DNA Sequencing Facility (School of Biomolecular and Biomedical Science, Griffith University, Qld, Australia). The sequencing reactions were prepared using M13 primers in a 96-well plate format according to the manufacturer’s instructions (Applied Biosystems, USA). The sequences were visualised, edited and aligned using Sequencher v4.5 (Gene Codes Corporation, USA) to remove vector sequence and to thus confirm the CDS of the rms genes.", "The coding sequence of rms1, -rms3, -rms6, and rms15 were inserted into the pPICZα A and pPIC-B expression vector (Invitrogen, USA) for intracellular and extracellular expression. The resultant recombinant plasmids were transformed into the yeast P. pastoris GS115 and SMD1168H by electroporation as described in the Pichia Expression Kit manual (Invitrogen). 
The recombinant protein were purified from yeast pellet and supernatant using a Histrap FF 5 mL column (GE Healthcare, USA) as recommended by the manufacturer following by a gel filtration purification step using a HiLoad™ 16/600 Superdex™ 200 pg column (GE Healthcare, USA).", "Gene specific primers were used to determine the gene expression pattern in eggs, nymphs, female guts, ovaries and salivary gland samples. A total of fifty ticks were dissected to isolate the different organs samples, and 25 nymphs were used on the preparation of the nymph sample. Approximately, five grams of eggs from ten different ticks were processed to conform this experimental sample. Briefly, the densitograms of amplified PCR products were analysed by ImageJ and normalised using the following formula, Y = V + V(H-X)X where Y = normalised mRNA density, V = observed rms PCR band density in individual samples, H = highest tick housekeeping gene PCR band density among tested samples, X = tick housekeeping gene density in individual samples [22]. All experimental samples were processed in triplicated.", "RmS-1, RmS-3, RmS-6 and RmS-15 expressed in P. pastoris yeast using the methodology reported previously [6,45] were used in this assay. The inhibition test was conducted as reported formerly [46] to screen the activity of RmS-1, -3, -6 and -15 against different proteases including bovine Chymotrypsin and Trypsin, porcine Elastase and Kallikrein, human Plasmin, and Thrombin (Sigma-Aldrich, USA). Briefly, 96-well plates were blocked with Blocking buffer (20 mM Tris-HCl, 150 mM NaCl and 5% skim milk, pH 7.6), and washed three times with Wash buffer (20 mM Tris-HCl, 150 mM NaCl, 0.01% Tween 20, pH 7.6) every 5 min. A total of 50 μL containing each protease were incubated with 50 fold molar of RmS-1, RmS-3, RmS-6 and RmS-15 at 37°C for 60 minutes in duplicate. The specific substrate (0.13 mM) was added and substrate hydrolysis was monitored every 30 second using Epoch Microplate Spectrophotometer (BioTek, USA) (see Table 1). The inhibition rate was calculated by comparing the enzymatic activity in the presence and absence of recombinant RmS. The experiments were conducted in triplicate.Table 1\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\nEnzymes*\n\n[nM]\n\nBinding buffer\n\nSubstrates*\n\n[mM]\nChymotrypsin1050 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0N-Succinyl-Ala-Ala-Pro-Phe-p-nitroanilide0.13Elastase5050 mM Hepes, 100 mM Nacl, 0.01 % Triton X-100, pH 7.4N-Succinyl-Ala-Ala-Ala-p-nitroanilide0.13Kallikrein5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5N-Benzoyl-Pro-Phe-Arg-p-nitroanilide hydrochloride0.13Plasmin5020 mM Tris-HCl, 150 mM NaCl, 0.02 % Triton X-100, pH 8.5Gly-Arg-p-nitroanilide dihydrochloride0.13Thrombin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0Sar-Pro-Arg-p-nitroanilide dihydrochloride0.13Trypsin250 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01 % Triton X-100, pH 8.0,N-Benzoyl-Phe-Val-Arg-p-nitroanilide hydrochloride0.13*All enzymes and substrates were purchased from Sigma-Aldrich, USA.\n\nThe conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases\n\n*All enzymes and substrates were purchased from Sigma-Aldrich, USA.", "The semi-quantitative PCR and protease inhibition assay data were evaluated by one-way ANOVA with Bonferroni testing (p ≤ 0.05). 
, "The semi-quantitative PCR and protease inhibition assay data were evaluated by one-way ANOVA with Bonferroni testing (p ≤ 0.05). All analyses were conducted using GraphPad Prism version 6.02 (GraphPad Software). Data are represented as the mean ± standard deviation (SD).", "Identification of RmS: The analysis of the R. microplus sequence databases revealed twenty-two different putative RmS after the elimination of redundant sequences. The full CDS for RmS-1 to RmS-18, reported by Tirloni and co-workers [30], were found in this study. The percentage identities after alignment of the RmS were variable; for example, RmS-3 and RmS-20 showed 94% and 31% identity with a hypothetical serpin (Paraphysa parvula) and R. appendiculatus Serpin-3, respectively. The reactive center loop characteristic of the serine protease inhibitor family was found in the CDS of RmS-19 to -22. These new sequences were deposited in GenBank under the following accession numbers: RmS-19: KP121409, RmS-20: KP121408, RmS-21: KP121411, and RmS-22: KP121414.
High variability of identity was observed within the RmS family, ranging from 29% between RmS-14 and RmS-15 to 62% between RmS-3 and RmS-5. The characteristic reactive center loop (RCL) domain associated with serpin family members was found in all RmS (Figure 1). The type of amino acid at the P1 site of the RCL varied widely; for example, RmS-1, -4, -7, -10, -11, -14 and -20 to -22 have a polar uncharged amino acid, whereas RmS-2, -3, -12 and -19 have hydrophobic amino acids. Basic amino acids such as arginine or lysine at the P1 site were found in RmS-5, -6, -9, -13 and -15 to -18 (Figure 1). The consensus amino acid motif VNEEGT [47] and the canonical sequence representing the RCL hinge from P17 to P8 (EEGTIATAVT) [18], which are characteristic of serpins, were highly conserved in the aligned RmS (Figure 1). Finally, the data confirmed the conservation of the reactive center loop and the characteristic motif of this protein family in RmS-19 to RmS-22.
Figure 1: Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to -22). Highly conserved residues and motifs are highlighted in gray shading. The P1 regions are highlighted with a dash-dot line rectangle over the specific amino acid sequence [47].
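To illustrate how the P1 grouping described above could be checked programmatically, the short sketch below classifies P1 residues by side-chain character (basic, hydrophobic, polar uncharged). The example serpin names and P1 letters are placeholders, not values taken from the alignment; in the study the P1 sites were read from Figure 1.

```python
# Hypothetical sketch: classify P1 residues by side-chain character, mirroring the
# grouping described in the text. The P1 assignments below are illustrative only.
BASIC = set("RK")
HYDROPHOBIC = set("AVLIMFWPG")
POLAR_UNCHARGED = set("STNQCY")

def classify_p1(residue: str) -> str:
    residue = residue.upper()
    if residue in BASIC:
        return "basic (trypsin/thrombin-like targets expected)"
    if residue in HYDROPHOBIC:
        return "hydrophobic (chymotrypsin/elastase-like targets expected)"
    if residue in POLAR_UNCHARGED:
        return "polar uncharged"
    return "other"

# Placeholder P1 assignments for a few hypothetical serpins.
example_p1 = {"RmS-example-A": "R", "RmS-example-B": "L", "RmS-example-C": "T"}

for serpin, p1 in example_p1.items():
    print(f"{serpin}: P1 = {p1} -> {classify_p1(p1)}")
```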
RT-PCR analysis: Reverse transcriptase PCR analysis was used to validate the spatial expression of the rms-1, rms-3 to -6, rms-14, rms-15 and rms-19 to rms-22 transcripts. The data showed expression of the rms transcripts in different organs and developmental stages of R. microplus (Figure 2A-D). Higher expression of rms-1, -3, -5 and -15 was observed in adult females compared with the nymph stage (Figure 2A and B), and the rms-3 transcript was not detected in eggs. Expression of rms-14 and rms-6 was only detected in nymphs and ovaries, respectively. The rms-4 transcript was detected only in ovaries and salivary glands (Figure 2A and B). The rms-1, -3, -5, -15 and -19 to -22 transcripts were highly expressed in almost all tissues and tick stages analysed. The rms-21 transcript was expressed in all tick samples except ovaries, and no expression of the rms-22 transcript was detected in nymphs and ovaries (Figure 2C and D).
Figure 2: Semi-quantitative analysis of the expression of rms transcripts in R. microplus tick samples. A and C: PCR products obtained from cDNA samples from different tick tissues, run in 1% Tris Borate agarose gels. B and D: Normalised mRNA density obtained after densitogram analyses of the amplified PCR products. All experiments were conducted in triplicate. Data are represented as the mean ± standard deviation (SD). The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively.
Protease Inhibition of the recombinant R. microplus serpins (rRmS): The coding sequences for rms-1, -3, -6 and -15 were cloned and expressed in P. pastoris yeast in order to test their inhibitory activity. These serpins were tested against different serine proteases including Chymotrypsin, Elastase, Kallikrein, Thrombin and Trypsin. The protease activity analysis showed that RmS-1 is a strong inhibitor of Chymotrypsin but a weak inhibitor of Trypsin and Thrombin. RmS-3 has Chymotrypsin and Elastase as its principal target molecules, and some faint inhibition of Trypsin and Thrombin was observed (Figure 3). RmS-15 exhibited strong inhibitory action against Thrombin, while RmS-6 only inhibited Chymotrypsin (Figure 3).
Figure 3: Protease inhibition profiles for the recombinant RmS-1, -3, -6 and RmS-15 obtained from the yeast P. pastoris. RmS-1 and RmS-6 were expressed intracellularly in the yeast P. pastoris. The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively. The experiments were conducted in triplicate. Data are represented as the mean ± standard deviation (SD).", "The analysis of the R. microplus sequence databases revealed twenty-two different putative RmS after the elimination of redundant sequences. The full CDS for RmS-1 to RmS-18, reported by Tirloni and co-workers [30], were found in this study. The percentage identities after alignment of the RmS were variable; for example, RmS-3 and RmS-20 showed 94% and 31% identity with a hypothetical serpin (Paraphysa parvula) and R. appendiculatus Serpin-3, respectively. The reactive center loop characteristic of the serine protease inhibitor family was found in the CDS of RmS-19 to -22. These new sequences were deposited in GenBank under the following accession numbers: RmS-19: KP121409, RmS-20: KP121408, RmS-21: KP121411, and RmS-22: KP121414. High variability of identity was observed within the RmS family, ranging from 29% between RmS-14 and RmS-15 to 62% between RmS-3 and RmS-5. The characteristic reactive center loop (RCL) domain associated with serpin family members was found in all RmS (Figure 1). The type of amino acid at the P1 site of the RCL varied widely; for example, RmS-1, -4, -7, -10, -11, -14 and -20 to -22 have a polar uncharged amino acid, whereas RmS-2, -3, -12 and -19 have hydrophobic amino acids. Basic amino acids such as arginine or lysine at the P1 site were found in RmS-5, -6, -9, -13 and -15 to -18 (Figure 1). The consensus amino acid motif VNEEGT [47] and the canonical sequence representing the RCL hinge from P17 to P8 (EEGTIATAVT) [18], which are characteristic of serpins, were highly conserved in the aligned RmS (Figure 1). Finally, the data confirmed the conservation of the reactive center loop and the characteristic motif of this protein family in RmS-19 to RmS-22. Figure 1: Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to -22); highly conserved residues and motifs are highlighted in gray shading, and the P1 regions are highlighted with a dash-dot line rectangle over the specific amino acid sequence [47].", "Reverse transcriptase PCR analysis was used to validate the spatial expression of the rms-1, rms-3 to -6, rms-14, rms-15 and rms-19 to rms-22 transcripts. The data showed expression of the rms transcripts in different organs and developmental stages of R. microplus (Figure 2A-D). Higher expression of rms-1, -3, -5 and -15 was observed in adult females compared with the nymph stage (Figure 2A and B), and the rms-3 transcript was not detected in eggs. Expression of rms-14 and rms-6 was only detected in nymphs and ovaries, respectively. The rms-4 transcript was detected only in ovaries and salivary glands (Figure 2A and B). The rms-1, -3, -5, -15 and -19 to -22 transcripts were highly expressed in almost all tissues and tick stages analysed. The rms-21 transcript was expressed in all tick samples except ovaries, and no expression of the rms-22 transcript was detected in nymphs and ovaries (Figure 2C and D). Figure 2: Semi-quantitative analysis of the expression of rms transcripts in R. microplus tick samples; A and C: PCR products obtained from cDNA samples from different tick tissues, run in 1% Tris Borate agarose gels; B and D: normalised mRNA density obtained after densitogram analyses of the amplified PCR products. All experiments were conducted in triplicate. Data are represented as the mean ± standard deviation (SD). The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively.", "The coding sequences for rms-1, -3, -6 and -15 were cloned and expressed in P. pastoris yeast in order to test their inhibitory activity. These serpins were tested against different serine proteases including Chymotrypsin, Elastase, Kallikrein, Thrombin and Trypsin. The protease activity analysis showed that RmS-1 is a strong inhibitor of Chymotrypsin but a weak inhibitor of Trypsin and Thrombin. RmS-3 has Chymotrypsin and Elastase as its principal target molecules, and some faint inhibition of Trypsin and Thrombin was observed (Figure 3). RmS-15 exhibited strong inhibitory action against Thrombin, while RmS-6 only inhibited Chymotrypsin (Figure 3). Figure 3: Protease inhibition profiles for the recombinant RmS-1, -3, -6 and RmS-15 obtained from the yeast P. pastoris; RmS-1 and RmS-6 were expressed intracellularly. The symbols ** and *** indicate statistical significance with p < 0.05 and p < 0.001, respectively. The experiments were conducted in triplicate. Data are represented as the mean ± standard deviation (SD).", "The serpin family comprises a large and variable number of genes found in many different organisms; for example, the human genome has approximately 36 serpin genes [48], 29 genes were found in Drosophila melanogaster [49], 45 in I. scapularis, and 17 serpin genes in A. americanum [21,22]. This corroborates the high conservation of the regulatory role of serpins among different species, and their functional versatility suggests an evolutionary adaptation to confront different and novel proteases [50]. Processes such as regulation of the host innate immune response [51,52], tick defences [49,53], the hemolymph coagulation cascade [54] and tick development [55,56] are regulated by serine protease inhibitors. In the Ixodidae, serpins are an extensive protein family with an important physiological role, particularly during the parasitic periods of attachment and blood feeding [14,21-23,26,28,32,35,52,57,58]. This is especially the case in R. microplus, a single-host tick that relies on a highly efficient and complex combination of salivary proteins to achieve successful blood feeding, among which serpins play an important role. Data obtained from transcriptome studies conducted on different developmental stages of R. microplus and stored in CattleTick Base [40] were an important resource for determining the members of the R. microplus serpin family. However, full coverage of the R. microplus genome would be necessary to give a precise number of R. microplus serpins [59]. Previous studies have provided important evidence of tick serpin sequences and transcript expression, but research discerning the specific targets or biological functions of these serpins is not forthcoming [6,13,32,52,60,61]. Following the elimination of redundant sequences, the data obtained in this research suggested the presence of 22 putative R. microplus serpins across all databases studied. The amino acid sequences of these serpins revealed similar numbers of secreted and non-secreted serpins as described by Tirloni and co-workers [30]. A total of 18 R. microplus serpins showed high amino acid identity (ranging from 97 to 100%) with serpins reported in the BmGI and RmiTR V1 databases (including USA and Australian R. microplus) and with those reported by Tirloni and co-workers (RmINCT-EM database, including Brazilian R. microplus) [30,40,62,63]. This observation confirms the conservation of these serpins in geographically distant populations of R. microplus.
The extracellular secretion of serine protease inhibitors during the host-parasite interaction is important for ticks in order to overcome the haemostatic response of the host, for blood digestion, and for defence [64-70]. Anti-haemostatic serpins have been reported from A. americanum [32], H. longicornis [28,35], I. ricinus [9,11,57], and R. haemaphysaloides [71]. This study identified RmS-15 as an anti-haemostatic serpin that specifically inhibited Thrombin, an important serine protease of the coagulation pathway [72].
The result suggests an important role for RmS-15 in impairing host blood coagulation during tick feeding. A similar result specifically related to blood coagulation was previously obtained for the M340R mutant of the I. ricinus serpin (Iris), which gained inhibitory activity against Thrombin and Factor Xa after losing its Elastase affinity through directed mutation [9].
This study improved the P. pastoris culture, expression and purification of the previously described RmS-3 [6], as demonstrated by the significant inhibition of Chymotrypsin and Elastase observed here. Neutrophil elastase is discharged at the tick bite site, which has been reported to show an accumulation of this particular group of cells [73]. Additionally, previous studies have reported that neutrophils contribute to local inflammation during tick infestation, an evasion mechanism employed by the host to resist tick infestation [67,70,74]. RmS-3 showed high levels of recognition by sera obtained from tick-resistant cattle, corroborating its secretion within tick saliva and an important role for this serpin during the host-parasite interaction [6]. RmS-3 might therefore play an important role in the inhibition of the host immune response. Similar results were obtained with the recombinant serpin from I. ricinus (Iris), which has Elastase as its principal natural target [9,13]. However, the high expression of the rms-3 gene observed in this study in tick ovaries points to a possible additional role in protecting tick reproductive cells from digestive proteases released into the tick hemocoel; this defensive pathway has been attributed to insect serpins that inhibit Chymotrypsin [75].
Serpins without a secretion signal have been reported to have a regulatory role in intracellular pathways such as tick development, intracellular digestion or vitellogenesis [67,68,76]. The predicted intracellular serpin RmS-14 was only detected in nymphs, showing specific expression of this serpin at this particular stage of tick development. RmS-14 was not detected by RT-PCR conducted previously on tissue samples from the Porto Alegre R. microplus strain (Rio Grande do Sul, Brazil) [30]; however, nymph samples were not screened in that study.
Four new serpins are reported in this investigation. Two of them, RmS-19 and -20, were expressed in all tissue samples analysed, indicating an important role in both the parasitic and non-parasitic stages of R. microplus development. RmS-21 and -22 were not detected in ovaries, suggesting a regulatory role for these serpins in proteolytic activity during digestion and embryo development in the egg stage. Additionally, RmS-1 is a serpin that lacks a detectable signal peptide but was found to specifically inhibit Chymotrypsin, with comparatively less inhibition of Trypsin and Thrombin. RmS-1 contains two methionines at the P4 and P5 sites and cysteines at the P1 and P1' sites of the RCL. The presence of these oxidation-sensitive amino acids (methionine and cysteine) at the RCL is characteristic of human intracellular serpins [77]. Also, RmS-1 clusters together with RAS1 and Lospin7, which are intracellular serpins from R. appendiculatus and A. americanum, respectively [22,26]. The secreted and glycosylated RmS-1 expressed in P. pastoris showed no significant inhibition of the serine proteases tested in this study; consequently, the protease inhibition data were obtained using the intracellular, non-glycosylated RmS-1 expressed in P. pastoris, which showed significant inhibition of Chymotrypsin.
The rms-1 gene was expressed in all tissue samples analysed, suggesting a broad regulatory role. Similar behaviour was observed with RmS-6, where only the intracellular, non-glycosylated RmS-6 showed activity against Chymotrypsin (Figure 3). The rms-6 transcript was expressed only in the ovary sample, suggesting a role for this serpin during tick embryogenesis or vitellogenesis. Further studies should be conducted in order to understand and characterise the activity and role of all the R. microplus serpins identified during tick development and the host-parasite interaction.", "The present study provides an insight into the R. microplus serpin family, allowing the study of differential expression within specific organs and at different developmental stages, with four new R. microplus serpins reported. The successful expression of recombinant serpins allowed the determination of their specific host target(s). Finally, the results obtained offer an important source of information for understanding R. microplus serpin function and will deepen the knowledge about the role of serpins during tick-host interactions and tick development." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusion" ]
[ "Genome", "Protease inhibitor", "Rhipicephalus microplus", "serpin", "Cattle tick" ]
Background: Ticks are worldwide-distributed ectoparasites that have evolved as obligate haematophagous arthropods of animals and humans. These parasites have been categorised, after mosquitoes, as the second most important group of vectors transmitting disease-causing agents to mammals [1,2]. In particular, the cattle tick (Rhipicephalus microplus) is considered the most economically important ectoparasite of cattle in tropical and subtropical regions of the world, principally because R. microplus affects beef and dairy cattle producers, causing direct economic losses due to host parasitism and tick-borne diseases such as anaplasmosis and babesiosis [3,4]. The success of the parasitic cycle of R. microplus begins with the larval capability to overcome the haemostatic and immunological responses of the host. Following larval attachment, a great amount of blood is ingested and digested by ticks in order to complete their parasitic cycle. The fully engorged adult females drop off the host to initiate the non-parasitic phase with the laying and hatching of eggs. R. microplus intensively produces and secretes proteins throughout the parasitic cycle in order to disrupt host responses, including protease inhibitors, which play an important role in tick survival, feeding and development [5-8]. Serpins (Serine Protease Inhibitors) are important regulatory molecules with roles during host-parasite interactions such as fibrinolysis [9], the host response mediated by complement proteases [10], and inflammation [11-13], among other tick physiological functions [14,15]. These protease inhibitors constitute a large superfamily that is extensively distributed within bacteria, insects, parasites, animals and plants [16,17]. Serpins differ from Kunitz protease inhibitors by the distinctive conformational change that occurs during the inhibition of their target proteases. The presence of a small domain designated the reactive center loop (RCL) constitutes their most notable characteristic. This domain extends outside of the protein and leads to the formation of a firm bond between the serpin and its specific proteinase [18-20]. Members of the tick serpin family have been studied and recommended as useful targets for tick vaccine development [21]. Consequently, serpin sequences from diverse tick species have been reported, such as Amblyomma americanum [22], Amblyomma variegatum [23], Amblyomma maculatum [24], Dermacentor variabilis [25], Rhipicephalus appendiculatus [26], R. microplus [6,27], Haemaphysalis longicornis [28], Ixodes scapularis [21,29], and Ixodes ricinus [9,11]. Additionally, an in silico identification of R. microplus serpins was conducted using different databases [30]. However, a great number of tick serpins remain functionally uncharacterised, which limits studies of their function during the host-parasite interaction [11,31,32]. In this study, serpins from different R. microplus genomic databases were identified and four new serpin molecules are reported. In silico characterisation of these serpins was undertaken using bioinformatics methods. Additionally, R. microplus serpins (RmS) were cloned, sequenced, and expressed in order to determine their protease inhibition specificity. The spatial expression of these serpins was determined by PCR using cDNA from different tick life stages and female adult organs.
Finally, this study is an important step forward in uncovering the role of RmS in the physiology of this ectoparasite and their potential use for the future improvement of tick control methods.
Methods: Bioinformatics and Serpin identification: The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi [33]. The currently available tick serpin sequences of Amblyomma americanum [22], A. maculatum [24], A. variegatum [23], A. monolakensis [34], H. longicornis [28,35], I. ricinus [9,36], Ixodes scapularis [21], R. microplus [37], and R. appendiculatus [26] were retrieved from the National Centre for Biotechnology Information non-redundant protein database (NCBI) (http://www.ncbi.nlm.nih.gov). These tick serpin sequences and the human α1-antitrypsin (GenBank, AAB59495) were used as queries against BmiGi V1 [38], BmiGi V2.1 [37], five SSH libraries [39], Australian tick transcriptome libraries [40] and RmiTR V1 [40] using the Basic Local Alignment Search Tool (BLAST) with the tblastX algorithm [41]. The qualified serpin sequences (E-value < 100) were six-frame translated to deduce protein sequences. The presence of the serpin conserved domain (cd00172) was analysed using the batch CD-Search Tool with an expected value threshold cut-off of 1 against NCBI's Conserved Domain Database (CDD) [42]. SignalP 4.1 [43] was used to predict signal peptide cleavage sites. The amino acid sequences of the R. microplus serpins were also scanned for the presence of the C-terminal sequence Lys-Asp-Glu-Leu (KDEL), the endoplasmic reticulum lumen retention signal (KDEL motif, Prosite ID: PS00014), using ScanProSite (http://prosite.expasy.org/scanprosite/) in order to reduce the incidence of false positive results from the SignalP prediction. Putative N-glycosylation sites were predicted using the NetNGlyc 1.0 server (http://www.cbs.dtu.dk/services/NetNGlyc/).
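To make the database-screening step concrete, the sketch below shows one way such a tblastX screen and a simple E-value filter could be scripted. The query and database file names are illustrative assumptions (not the original resources), and it assumes the NCBI BLAST+ command-line tools are installed; this is not the Yabi workflow used in the study.

```python
# Hypothetical sketch of the serpin database screen: run tblastx with a permissive
# E-value cut-off (the study collected candidates with E-value < 100) and keep the
# matching subject IDs for downstream CD-Search / SignalP / ScanProsite checks.
import subprocess
import csv

QUERIES = "tick_serpin_queries.fasta"   # known tick serpin sequences (placeholder file)
DATABASE = "bmigi_v2_1"                 # a pre-built nucleotide BLAST database (placeholder)
OUTPUT = "serpin_hits.tsv"

# tblastx: six-frame translated query against six-frame translated database.
subprocess.run(
    [
        "tblastx",
        "-query", QUERIES,
        "-db", DATABASE,
        "-evalue", "100",   # permissive threshold for candidate collection
        "-outfmt", "6",     # tabular output: qseqid sseqid pident ... evalue bitscore
        "-out", OUTPUT,
    ],
    check=True,
)

# Collect unique subject sequence IDs passing the threshold.
candidate_ids = set()
with open(OUTPUT) as handle:
    for row in csv.reader(handle, delimiter="\t"):
        qseqid, sseqid, *_rest = row
        candidate_ids.add(sseqid)

print(f"{len(candidate_ids)} candidate serpin-like sequences retained")
```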
Tick sources: Hereford cattle at the tick colony maintained by Biosecurity Queensland from the Queensland Department of Agriculture, Fisheries and Forestry (DAFF) [44] were used to collect the acaricide-susceptible R. microplus strain NRFS (Non-Resistant Field Strain). All of the eggs (E), larvae (L), nymphs (N), adult males (M) and feeding females (F) were collected from infested animals maintained within a moat pen (DAFF Animal Ethics approval SA2006/03/96). Tick organs, including salivary glands (FSG), guts (FG) and ovaries (Ovr), were dissected from 17 day-old adult females for cDNA preparation.
Total RNA extraction: RNA was isolated from eggs, nymphs, and the organs (guts, ovaries and salivary glands) dissected from semi-engorged females. Ticks/organs were ground in liquid nitrogen using a diethylpyrocarbonate water-treated mortar and pestle prior to processing with the TRIzol® reagent (Gibco-BRL, USA). The tissue samples were stored in ice-cold TRIzol® Reagent immediately after dissection, and then homogenised through a sterile 25-gauge needle. The total RNA was isolated following the manufacturer's protocol (Gibco-BRL, USA) and the mRNA was purified using the Poly (A) Purist™ MAG Kit (Ambion, USA) as recommended by the manufacturer.
Isolation, cloning and sequencing of rms genes: cDNA from nymphs, ovaries and salivary glands was synthesised from purified mRNA using the BioScript™ Kit (Clontech, USA) following the manufacturer's recommended protocol. PCRs were conducted for the isolation of the rms genes using gene-specific 5′ and 3′ primers designed for the amplification of the serpin coding sequences (CDS). Following the amplification and confirmation of the PCR products by agarose gel electrophoresis, the PCR products were sub-cloned into the pCR 2.1-TOPO® vector following the manufacturer's instructions (Invitrogen, USA). The recombinant plasmids obtained were named pCR-rms1, pCR-rms2 and pCR-rms(n + 1). Ten individual colonies for each clone were selected and grown for 18 hours in 5 mL of LB broth supplemented with ampicillin (50 μg.mL−1) prior to purification of the plasmid using the QIAprep Spin miniprep kit (Qiagen, USA). Direct sequencing of the plasmid inserts was performed using BigDye v3.1 technology (Applied Biosystems, USA) and analysed on the Applied Biosystems 3130xl Genetic Analyser at the Griffith University DNA Sequencing Facility (School of Biomolecular and Biomedical Science, Griffith University, Qld, Australia). The sequencing reactions were prepared using M13 primers in a 96-well plate format according to the manufacturer's instructions (Applied Biosystems, USA). The sequences were visualised, edited and aligned using Sequencher v4.5 (Gene Codes Corporation, USA) to remove vector sequence and thus confirm the CDS of the rms genes.
Cloning and expression of RmS in the yeast P. pastoris: The coding sequences of rms-1, rms-3, rms-6, and rms-15 were inserted into the pPICZα A and pPIC-B expression vectors (Invitrogen, USA) for intracellular and extracellular expression. The resultant recombinant plasmids were transformed into the yeast P. pastoris strains GS115 and SMD1168H by electroporation as described in the Pichia Expression Kit manual (Invitrogen). The recombinant proteins were purified from the yeast pellet and supernatant using a HisTrap FF 5 mL column (GE Healthcare, USA) as recommended by the manufacturer, followed by a gel filtration purification step using a HiLoad™ 16/600 Superdex™ 200 pg column (GE Healthcare, USA).
Expression analysis by semi-quantitative Reverse Transcription (RT)-PCR: Gene-specific primers were used to determine the gene expression pattern in eggs, nymphs, female guts, ovaries and salivary gland samples. A total of fifty ticks were dissected to isolate the different organ samples, and 25 nymphs were used in the preparation of the nymph sample. Approximately five grams of eggs from ten different ticks were processed to constitute the egg sample. Briefly, the densitograms of the amplified PCR products were analysed with ImageJ and normalised using the following formula: Y = V + V(H-X)X, where Y = normalised mRNA density, V = observed rms PCR band density in the individual sample, H = highest tick housekeeping gene PCR band density among the tested samples, and X = tick housekeeping gene density in the individual sample [22]. All experimental samples were processed in triplicate.
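As an illustration of the densitometry normalisation step, the short sketch below applies the formula exactly as printed above (Y = V + V(H-X)X). The band-density numbers are invented placeholders, and the exact algebraic form should be checked against the cited method [22] before reuse.

```python
# Hypothetical example of normalising rms band densities against a housekeeping gene.
# Densities are arbitrary, made-up densitometry values (e.g. as exported from ImageJ).

def normalise(v: float, h: float, x: float) -> float:
    """Apply the normalisation formula as printed: Y = V + V*(H - X)*X.

    v: observed rms PCR band density in an individual sample
    h: highest housekeeping-gene band density among the tested samples
    x: housekeeping-gene band density in the same individual sample
    """
    return v + v * (h - x) * x

housekeeping = {"eggs": 0.80, "nymphs": 0.95, "ovary": 0.70}   # placeholder densities
rms_band = {"eggs": 0.40, "nymphs": 0.55, "ovary": 0.35}       # placeholder densities
h_max = max(housekeeping.values())

for sample, v in rms_band.items():
    y = normalise(v, h_max, housekeeping[sample])
    print(f"{sample}: normalised density = {y:.3f}")
```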
Protease inhibition assays: RmS-1, RmS-3, RmS-6 and RmS-15 expressed in P. pastoris yeast using the methodology reported previously [6,45] were used in this assay. The inhibition test was conducted as reported previously [46] to screen the activity of RmS-1, -3, -6 and -15 against different proteases, including bovine Chymotrypsin and Trypsin, porcine Elastase and Kallikrein, and human Plasmin and Thrombin (Sigma-Aldrich, USA). Briefly, 96-well plates were blocked with Blocking buffer (20 mM Tris-HCl, 150 mM NaCl and 5% skim milk, pH 7.6) and washed three times with Wash buffer (20 mM Tris-HCl, 150 mM NaCl, 0.01% Tween 20, pH 7.6) every 5 min. A total of 50 μL of each protease was incubated with a 50-fold molar excess of RmS-1, RmS-3, RmS-6 or RmS-15 at 37°C for 60 minutes in duplicate. The specific substrate (0.13 mM) was added and substrate hydrolysis was monitored every 30 seconds using an Epoch Microplate Spectrophotometer (BioTek, USA) (see Table 1). The inhibition rate was calculated by comparing the enzymatic activity in the presence and absence of recombinant RmS. The experiments were conducted in triplicate.
Table 1. The conditions of serpin inhibition reactions against commercially available bovine, porcine and human serine proteases (all enzymes and substrates were purchased from Sigma-Aldrich, USA):
Chymotrypsin (10 nM); binding buffer: 50 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01% Triton X-100, pH 8.0; substrate: N-Succinyl-Ala-Ala-Pro-Phe-p-nitroanilide (0.13 mM).
Elastase (50 nM); binding buffer: 50 mM Hepes, 100 mM NaCl, 0.01% Triton X-100, pH 7.4; substrate: N-Succinyl-Ala-Ala-Ala-p-nitroanilide (0.13 mM).
Kallikrein (50 nM); binding buffer: 20 mM Tris-HCl, 150 mM NaCl, 0.02% Triton X-100, pH 8.5; substrate: N-Benzoyl-Pro-Phe-Arg-p-nitroanilide hydrochloride (0.13 mM).
Plasmin (50 nM); binding buffer: 20 mM Tris-HCl, 150 mM NaCl, 0.02% Triton X-100, pH 8.5; substrate: Gly-Arg-p-nitroanilide dihydrochloride (0.13 mM).
Thrombin (2 nM); binding buffer: 50 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01% Triton X-100, pH 8.0; substrate: Sar-Pro-Arg-p-nitroanilide dihydrochloride (0.13 mM).
Trypsin (2 nM); binding buffer: 50 mM Tris-HCl, 150 mM NaCl, 20 mM CaCl2, 0.01% Triton X-100, pH 8.0; substrate: N-Benzoyl-Phe-Val-Arg-p-nitroanilide hydrochloride (0.13 mM).
Statistical analysis: The semi-quantitative PCR and protease inhibition assay data were evaluated by one-way ANOVA with Bonferroni testing (p ≤ 0.05). All analyses were conducted using GraphPad Prism version 6.02 (GraphPad Software). Data are represented as the mean ± standard deviation (SD).
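The original analysis was run in GraphPad Prism. Purely as an illustration of the same test structure, the sketch below performs a one-way ANOVA followed by Bonferroni-adjusted pairwise comparisons on made-up triplicate data using SciPy; the group names and values are placeholders, not data from the study.

```python
# Illustrative re-creation of the statistical workflow (the study used GraphPad Prism):
# one-way ANOVA across treatment groups, then Bonferroni-corrected pairwise t-tests.
# The triplicate values below are invented placeholders.
from itertools import combinations
from scipy import stats

groups = {
    "no_serpin": [0.95, 1.02, 0.99],
    "RmS-3":     [0.41, 0.44, 0.39],
    "RmS-15":    [0.12, 0.10, 0.15],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests with a Bonferroni correction applied to the p-values.
pairs = list(combinations(groups, 2))
alpha = 0.05
for a, b in pairs:
    t_stat, p_raw = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p_raw * len(pairs), 1.0)   # Bonferroni adjustment
    flag = "significant" if p_adj <= alpha else "n.s."
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f} ({flag})")
```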
The type of amino acid at the P1 site of the RCL showed a high variation, for example in RmS-1, -4, 7, 10, 11, -14, 20 to 22 have a polar uncharged amino acid, but RmS-2, -3, -12 and -19 have hydrophobic amino acids. Basic amino acids such as arginine or lysine at the P1 site were found in RmS-5, -6, -9, -13. -15 to -18 (Figure 1). The consensus amino acid motif VNEEGT [47] and the canonical sequence representing the RCL hinge from P17 to P8 (EEGTIATAVT) [18] which are characteristic of serpins were highly conserved in the RmS aligned (Figure 1). Finally, the data confirmed the conservation of the reactive center loop and the characteristic motif of this proteins family in RmS -19 to RmS-22.Figure 1 Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to –22). Highly conserved residues and motifs were highlighted in gray shade. The P1 regions were highlighted with a dash dot line rectangle over the specific amino acid sequence [47]. Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to –22). Highly conserved residues and motifs were highlighted in gray shade. The P1 regions were highlighted with a dash dot line rectangle over the specific amino acid sequence [47]. The analysis of the R. microplus sequence databases revealed twenty-two different putative RmS ultimately identified after the elimination of redundant sequences. The full CDS for RmS-1 to RmS-18 reported by Tirloni and co-workers [30] were found in this study. The percentage identities after the alignment of the RmS was variable for example, RmS-3 and RmS-20 showed a 94% and 31% identities with hypothetical bacterial serpin (Paraphysa parvula) and R. appendiculatus Serpin-3, respectively. The reactive center loop characteristic of serine protease inhibitor family was found in the CDS of RmS- 19 to – 22. These new sequences were deposited in the Genbank with the following Accession Numbers: RmS-19: KP121409, RmS-20: KP121408, RmS-21: KP121411, and RmS-22: KP121414. There was observed a high variability of the identity among the RmS family that ranged from 29% between RmS-14 and RmS-15 to 62% between RmS-3 and RmS-5. The characteristic reactive center loop (RCL) domain associated with the serpin family members was found in all RmS (Figure 1). The type of amino acid at the P1 site of the RCL showed a high variation, for example in RmS-1, -4, 7, 10, 11, -14, 20 to 22 have a polar uncharged amino acid, but RmS-2, -3, -12 and -19 have hydrophobic amino acids. Basic amino acids such as arginine or lysine at the P1 site were found in RmS-5, -6, -9, -13. -15 to -18 (Figure 1). The consensus amino acid motif VNEEGT [47] and the canonical sequence representing the RCL hinge from P17 to P8 (EEGTIATAVT) [18] which are characteristic of serpins were highly conserved in the RmS aligned (Figure 1). Finally, the data confirmed the conservation of the reactive center loop and the characteristic motif of this proteins family in RmS -19 to RmS-22.Figure 1 Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to –22). Highly conserved residues and motifs were highlighted in gray shade. The P1 regions were highlighted with a dash dot line rectangle over the specific amino acid sequence [47]. Amino acid sequence alignment of the characteristic reactive center loops of R. microplus serpins (RmS-1 to –22). Highly conserved residues and motifs were highlighted in gray shade. 
RT-PCR analysis: Reverse transcriptase PCR was used to validate the spatial expression of the rms-1, rms-3 to -6, rms-14, rms-15 and rms-19 to rms-22 transcripts. The data showed expression of rms transcripts in different organs and developmental stages of R. microplus (Figure 2A-D). Higher expression of rms-1, -3, -5 and -15 was observed in adult females than in the nymph stage (Figure 2A and B), and the rms-3 transcript was not detected in eggs. Expression of rms-14 and rms-6 was detected only in nymphs and ovaries, respectively, while rms-4 was detected only in ovaries and salivary glands (Figure 2A and B). The rms-1, -3, -5, -15 and -19 to -22 transcripts were highly expressed in almost all tissues and tick stages analysed. The rms-21 transcript was expressed in all tick samples except ovaries, and no expression of rms-22 was detected in nymphs or ovaries (Figure 2C and D). Figure 2: Semi-quantitative analysis of the expression of rms transcripts in R. microplus tick samples. A and C: PCR products obtained from cDNA samples from different tick tissues, run on 1% Tris-Borate agarose gels. B and D: normalised mRNA density obtained from densitometric analysis of the amplified PCR products. All experiments were conducted in triplicate. Data are presented as the mean ± standard deviation (SD). The symbols ** and *** indicate statistical significance at p < 0.05 and p < 0.001, respectively.
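A minimal sketch of the semi-quantitative step described above: band intensities from the densitometric scan are normalised against a reference gene and triplicates are summarised. The intensity values, the reference-gene label and the data layout are assumptions for illustration; the study reports only that normalised mRNA density was derived from densitograms of the gel images.

```python
# Sketch: normalise target-gene band density to a reference gene and
# summarise triplicates (all numbers are invented for illustration).
import statistics

# Densitometer readouts (arbitrary units) per biological replicate.
densities = {
    "rms-1":     [1450.0, 1398.0, 1502.0],
    "rms-15":    [980.0, 1011.0, 955.0],
    "reference": [2100.0, 2055.0, 2148.0],   # assumed housekeeping gene
}

def normalised_density(target: str, reference: str = "reference"):
    """Return per-replicate target/reference ratios plus mean and SD."""
    ratios = [t / r for t, r in zip(densities[target], densities[reference])]
    return ratios, statistics.mean(ratios), statistics.stdev(ratios)

for gene in ("rms-1", "rms-15"):
    ratios, mean, sd = normalised_density(gene)
    print(f"{gene}: normalised density = {mean:.2f} +/- {sd:.2f} (n={len(ratios)})")
```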
Protease inhibition by the recombinant R. microplus serpins (rRmS): The coding sequences of rms-1, -3, -6 and -15 were cloned and expressed in the yeast P. pastoris in order to test their inhibitory activity. These serpins were tested against different serine proteases, including chymotrypsin, elastase, kallikrein, thrombin and trypsin. The protease activity analysis showed that RmS-1 is a strong inhibitor of chymotrypsin but a weak inhibitor of trypsin and thrombin. RmS-3 had chymotrypsin and elastase as its principal targets, with faint inhibition of trypsin and thrombin (Figure 3). RmS-15 exhibited strong inhibition of thrombin, while RmS-6 inhibited only chymotrypsin (Figure 3). Figure 3: Protease inhibition profiles of the recombinant RmS-1, -3, -6 and RmS-15 obtained from the yeast P. pastoris. RmS-1 and RmS-6 were expressed intracellularly in P. pastoris. The symbols ** and *** indicate statistical significance at p < 0.05 and p < 0.001, respectively. The experiments were conducted in triplicate. Data are presented as the mean ± standard deviation (SD). Discussion: The serpin family comprises a large and variable number of genes found in many different organisms; for example, the human genome has approximately 36 serpin genes [48], 29 genes were found in Drosophila melanogaster [49], 45 in I. scapularis, and 17 serpin genes in A. americanum [21,22]. This corroborates the high conservation of the regulatory role of serpins among different species, and their functional versatility suggests an evolutionary adaptation to confront different and novel proteases [50]. Processes such as regulation of the host innate immune response [51,52], tick defences [49,53], the hemolymph coagulation cascade [54] and tick development [55,56] are regulated by serine protease inhibitors. In the Ixodidae, serpins are an extensive protein family with an important physiological role, particularly during the parasitic periods of attachment and blood feeding [14,21-23,26,28,32,35,52,57,58].
This is especially true of R. microplus, a single-host tick whose saliva contains a highly efficient and complex combination of proteins that enable successful blood feeding; serpins are an important part of this salivary repertoire. Data obtained from transcriptome studies conducted on different developmental stages of R. microplus and stored in CattleTickBase [40] were an important resource for determining the members of the R. microplus serpin family. However, full coverage of the R. microplus genome would be necessary to establish the precise number of R. microplus serpins [59]. Previous studies have provided important evidence of tick serpin sequences and transcript expression, but research discerning the specific targets or biological functions of these serpins is still limited [6,13,32,52,60,61]. Following the elimination of redundant sequences, the data obtained in this work suggest the presence of 22 putative R. microplus serpins across all databases studied. The amino acid sequences of these serpins revealed similar numbers of secreted and non-secreted serpins as described by Tirloni and co-workers [30]. A total of 18 R. microplus serpins showed high amino acid identity (97 to 100%) with serpins reported in the BmGI and RmiTR V1 databases (including USA and Australian R. microplus) and with those reported by Tirloni and co-workers (RmINCT-EM database, including Brazilian R. microplus) [30,40,62,63]. This observation confirms the conservation of these serpins in geographically distant populations of R. microplus. The extracellular secretion of serine protease inhibitors during the host-parasite interaction is important for ticks to overcome the haemostatic response of the host, for blood digestion, and for defence [64-70]. Anti-haemostatic serpins have been reported from A. americanum [32], H. longicornis [28,35], I. ricinus [9,11,57], and R. haemaphysaloides [71]. This study identified RmS-15 as an anti-haemostatic serpin that specifically inhibited thrombin, an important serine protease of the coagulation pathway [72], suggesting that RmS-15 helps to impair host blood coagulation during tick feeding. A similar result specifically related to blood coagulation was previously obtained for the M340R mutant of the I. ricinus serpin Iris, which gained inhibitory activity against thrombin and Factor Xa after losing its elastase affinity through directed mutation [9]. This study also improved the P. pastoris culture, expression and purification of the previously described RmS-3 [6], as demonstrated by the significant inhibition of chymotrypsin and elastase observed here. Neutrophil elastase is discharged at the tick bite site, where an accumulation of these cells has been reported [73]. Additionally, previous studies have reported that neutrophils contribute to local inflammation during tick infestation, a resistance mechanism employed by the host against tick infestation [67,70,74]. RmS-3 showed high levels of recognition by sera obtained from tick-resistant cattle, corroborating its secretion in tick saliva and an important role during the host-parasite interaction [6]; RmS-3 might therefore contribute to inhibition of the host immune response. Similar results were obtained with the recombinant serpin from I. ricinus, Iris, which has elastase as its principal natural target [9,13].
However, the high expression of the rms-3 gene observed in tick ovaries in this study points to a possible additional role in protecting tick reproductive cells from digestive proteases released into the tick hemocoel, a defensive pathway previously attributed to insect serpins that inhibit chymotrypsin [75]. Serpins without a secretion signal have been reported to play regulatory roles in intracellular pathways such as tick development, intracellular digestion or vitellogenesis [67,68,76]. The predicted intracellular serpin RmS-14 was detected only in nymphs, indicating stage-specific expression at this point of tick development. RmS-14 was not detected by RT-PCR conducted previously on tissue samples from the Porto Alegre R. microplus strain (Rio Grande do Sul, Brazil) [30]; however, nymph samples were not screened in that study. Four new serpins are reported in this investigation. Two of them, RmS-19 and -20, were expressed in all tissue samples analysed, indicating roles in both the parasitic and non-parasitic stages of R. microplus development. RmS-21 and -22 were not detected in ovaries, suggesting a regulatory role for these serpins in proteolytic activity during digestion and embryo development in the egg stage. Additionally, RmS-1 is a serpin that lacks a detectable signal peptide but was found to specifically inhibit chymotrypsin, with comparatively weaker inhibition of trypsin and thrombin. RmS-1 contains two methionines at P4 and P5, and cysteines at the P1 and P1' sites of the RCL; the presence of these oxidation-sensitive amino acids (methionine and cysteine) in the RCL is characteristic of human intracellular serpins [77]. RmS-1 also clusters together with RAS-1 and Lospin 7, which are intracellular serpins from R. appendiculatus and A. americanum, respectively [22,26]. The secreted and glycosylated RmS-1 expressed in P. pastoris showed no significant inhibition of the serine proteases tested in this study; the protease inhibition data were obtained only with an intracellular, non-glycosylated RmS-1 expressed in P. pastoris, which showed significant inhibition of chymotrypsin. The rms-1 gene was expressed in all tissue samples analysed, suggesting a broad regulatory role. Similar behaviour was observed for RmS-6, where only the intracellular, non-glycosylated RmS-6 showed activity against chymotrypsin (Figure 3). The rms-6 transcript was expressed only in the ovary sample, suggesting a role for this serpin during tick embryogenesis or vitellogenesis. Further studies should be conducted to characterise the activity of all identified R. microplus serpins and their roles during tick development and the host-parasite interaction. Conclusion: The present study provides an insight into the R. microplus serpin family, allowing the study of differential expression within specific organs and across developmental stages, with four new R. microplus serpins reported. The successful expression of recombinant serpins allowed the determination of their specific host targets. Finally, the results obtained offer an important source of information for understanding R. microplus serpin function and will deepen knowledge of the role of serpins during tick-host interactions and tick development.
Background: Rhipicephalus (Boophilus) microplus evades the host's haemostatic system through a complex protein array secreted into tick saliva. Serine protease inhibitors (serpins) constitute an important component of saliva and are represented by a large protease inhibitor family in the Ixodidae. These secreted and non-secreted inhibitors modulate diverse and essential proteases involved in different physiological processes. Methods: The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi. The database search was conducted on BmiGi V1, BmiGi V2.1, five SSH libraries, Australian tick transcriptome libraries and RmiTR V1 using bioinformatics methods. Semi-quantitative PCR was carried out using different adult tissues and tick developmental stages. The cDNAs of four identified R. microplus serpins were cloned and expressed in Pichia pastoris in order to determine the biological targets of these serpins using protease inhibition assays. Results: Four of the twenty-two serpins identified in our analysis are new R. microplus serpins, which were named RmS-19 to RmS-22. Analyses of the DNA and predicted amino acid sequences showed high conservation of the R. microplus serpin sequences. The expression data suggested ubiquitous expression of RmS except for RmS-6 and RmS-14, which were expressed only in nymphs and adult female ovaries, respectively. RmS-19 and -20 were expressed in all tissue samples analysed, indicating an important role in both the parasitic and non-parasitic stages of R. microplus development. RmS-21 was not detected in ovaries, and RmS-22 was not identified in ovary or nymph samples but was expressed in the rest of the samples analysed. The four expressed recombinant serpins showed protease-specific inhibition of chymotrypsin (RmS-1 and RmS-6), chymotrypsin/elastase (RmS-3) and thrombin (RmS-15). Conclusions: This study constitutes an important contribution and improvement to the knowledge of the physiological role of R. microplus serpins during the host-tick interaction.
Background: Ticks are ectoparasites distributed worldwide that have evolved as obligate haematophagous arthropods of animals and humans. After mosquitoes, these parasites are categorised as the second most important group of vectors transmitting disease-causing agents to mammals [1,2]. In particular, the cattle tick (Rhipicephalus microplus) is considered the most economically important ectoparasite of cattle in tropical and subtropical regions of the world, because R. microplus causes direct economic losses to beef and dairy cattle producers through host parasitism and tick-borne diseases such as anaplasmosis and babesiosis [3,4]. The success of the parasitic cycle of R. microplus begins with the larval capability to overcome the haemostatic and immunological responses of the host. Following larval attachment, a great amount of blood is ingested and digested by ticks in order to complete their parasitic cycle, and the fully engorged adult females drop off the host to initiate the non-parasitic phase with the laying and hatching of eggs. Throughout the parasitic cycle R. microplus intensively produces and secretes proteins that disrupt host responses, such as protease inhibitors, which play an important role in tick survival, feeding and development [5-8]. Serpins (serine protease inhibitors) are important regulatory molecules with roles during host-parasite interactions such as fibrinolysis [9], the host response mediated by complement proteases [10], and inflammation [11-13], among other tick physiological functions [14,15]. These protease inhibitors constitute a large superfamily that is extensively distributed among bacteria, insects, parasites, animals and plants [16,17]. Serpins differ from Kunitz protease inhibitors by the distinctive conformational change that occurs during inhibition of their target proteases. The presence of a small domain designated the reactive center loop (RCL) constitutes their most notable characteristic; this domain extends outside of the protein and leads to the formation of a firm bond between the serpin and its specific proteinase [18-20]. Members of the tick serpin family have been studied and recommended as useful targets for tick vaccine development [21]. Consequently, serpin sequences from diverse tick species have been reported, including Amblyomma americanum [22], Amblyomma variegatum [23], Amblyomma maculatum [24], Dermacentor variabilis [25], Rhipicephalus appendiculatus [26], R. microplus [6,27], Haemaphysalis longicornis [28], Ixodes scapularis [21,29], and Ixodes ricinus [9,11]. Additionally, an in silico identification of R. microplus serpins was conducted using different databases [30]. However, a great number of tick serpins remain functionally uncharacterised, which limits studies of their function during the host-parasite interaction [11,31,32]. In this study, serpins from different R. microplus genomic databases were identified and four new serpin molecules are reported. In silico characterisation of these serpins was undertaken using bioinformatics methods. Additionally, R. microplus serpins (RmS) were cloned, sequenced and expressed in order to determine their protease inhibition specificity, and the spatial expression of these serpins was examined by PCR using cDNA from different tick life stages and female adult organs.
Finally, this study is an important step forward in uncovering the role of RmS in the physiology of this ectoparasite and their potential use for the future improvement of tick control methods. Conclusion: The present study provides an insight into the R. microplus serpin family, allowing the study of differential expression within specific organs and across developmental stages, with four new R. microplus serpins reported. The successful expression of recombinant serpins allowed the determination of their specific host targets. Finally, the results obtained offer an important source of information for understanding R. microplus serpin function and will deepen knowledge of the role of serpins during tick-host interactions and tick development.
Background: Rhipicephalus (Boophilus) microplus evades the host's haemostatic system through a complex protein array secreted into tick saliva. Serine protease inhibitors (serpins) constitute an important component of saliva and are represented by a large protease inhibitor family in the Ixodidae. These secreted and non-secreted inhibitors modulate diverse and essential proteases involved in different physiological processes. Methods: The identification of R. microplus serpin sequences was performed through a web-based bioinformatics environment called Yabi. The database search was conducted on BmiGi V1, BmiGi V2.1, five SSH libraries, Australian tick transcriptome libraries and RmiTR V1 using bioinformatics methods. Semi-quantitative PCR was carried out using different adult tissues and tick developmental stages. The cDNAs of four identified R. microplus serpins were cloned and expressed in Pichia pastoris in order to determine the biological targets of these serpins using protease inhibition assays. Results: Four of the twenty-two serpins identified in our analysis are new R. microplus serpins, which were named RmS-19 to RmS-22. Analyses of the DNA and predicted amino acid sequences showed high conservation of the R. microplus serpin sequences. The expression data suggested ubiquitous expression of RmS except for RmS-6 and RmS-14, which were expressed only in nymphs and adult female ovaries, respectively. RmS-19 and -20 were expressed in all tissue samples analysed, indicating an important role in both the parasitic and non-parasitic stages of R. microplus development. RmS-21 was not detected in ovaries, and RmS-22 was not identified in ovary or nymph samples but was expressed in the rest of the samples analysed. The four expressed recombinant serpins showed protease-specific inhibition of chymotrypsin (RmS-1 and RmS-6), chymotrypsin/elastase (RmS-3) and thrombin (RmS-15). Conclusions: This study constitutes an important contribution and improvement to the knowledge of the physiological role of R. microplus serpins during the host-tick interaction.
10,744
352
[ 340, 124, 128, 285, 117, 148, 515, 56, 467, 413, 274 ]
16
[ "rms", "tick", "mm", "microplus", "pcr", "serpin", "usa", "serpins", "rms rms", "expression" ]
[ "microplus tick", "ectoparasite cattle distributed", "tick borne diseases", "parasite interaction microplus", "tick rhipicephalus" ]
null
[CONTENT] Genome | Protease inhibitor | Rhipicephalus microplus | serpin | Cattle tick [SUMMARY]
null
[CONTENT] Genome | Protease inhibitor | Rhipicephalus microplus | serpin | Cattle tick [SUMMARY]
[CONTENT] Genome | Protease inhibitor | Rhipicephalus microplus | serpin | Cattle tick [SUMMARY]
[CONTENT] Genome | Protease inhibitor | Rhipicephalus microplus | serpin | Cattle tick [SUMMARY]
[CONTENT] Genome | Protease inhibitor | Rhipicephalus microplus | serpin | Cattle tick [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Base Sequence | Cloning, Molecular | DNA | Female | Gene Expression Regulation | Nymph | Ovary | Peptide Hydrolases | Rhipicephalus | Saliva | Serine Proteinase Inhibitors [SUMMARY]
null
[CONTENT] Amino Acid Sequence | Animals | Base Sequence | Cloning, Molecular | DNA | Female | Gene Expression Regulation | Nymph | Ovary | Peptide Hydrolases | Rhipicephalus | Saliva | Serine Proteinase Inhibitors [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Base Sequence | Cloning, Molecular | DNA | Female | Gene Expression Regulation | Nymph | Ovary | Peptide Hydrolases | Rhipicephalus | Saliva | Serine Proteinase Inhibitors [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Base Sequence | Cloning, Molecular | DNA | Female | Gene Expression Regulation | Nymph | Ovary | Peptide Hydrolases | Rhipicephalus | Saliva | Serine Proteinase Inhibitors [SUMMARY]
[CONTENT] Amino Acid Sequence | Animals | Base Sequence | Cloning, Molecular | DNA | Female | Gene Expression Regulation | Nymph | Ovary | Peptide Hydrolases | Rhipicephalus | Saliva | Serine Proteinase Inhibitors [SUMMARY]
[CONTENT] microplus tick | ectoparasite cattle distributed | tick borne diseases | parasite interaction microplus | tick rhipicephalus [SUMMARY]
null
[CONTENT] microplus tick | ectoparasite cattle distributed | tick borne diseases | parasite interaction microplus | tick rhipicephalus [SUMMARY]
[CONTENT] microplus tick | ectoparasite cattle distributed | tick borne diseases | parasite interaction microplus | tick rhipicephalus [SUMMARY]
[CONTENT] microplus tick | ectoparasite cattle distributed | tick borne diseases | parasite interaction microplus | tick rhipicephalus [SUMMARY]
[CONTENT] microplus tick | ectoparasite cattle distributed | tick borne diseases | parasite interaction microplus | tick rhipicephalus [SUMMARY]
[CONTENT] rms | tick | mm | microplus | pcr | serpin | usa | serpins | rms rms | expression [SUMMARY]
null
[CONTENT] rms | tick | mm | microplus | pcr | serpin | usa | serpins | rms rms | expression [SUMMARY]
[CONTENT] rms | tick | mm | microplus | pcr | serpin | usa | serpins | rms rms | expression [SUMMARY]
[CONTENT] rms | tick | mm | microplus | pcr | serpin | usa | serpins | rms rms | expression [SUMMARY]
[CONTENT] rms | tick | mm | microplus | pcr | serpin | usa | serpins | rms rms | expression [SUMMARY]
[CONTENT] host | tick | serpins | microplus | important | protease inhibitors | inhibitors | parasitic | distributed | parasitic cycle [SUMMARY]
null
[CONTENT] rms | figure | amino | rms rms | expression rms | 15 | 22 | rms 22 | amino acid | acid [SUMMARY]
[CONTENT] serpins | host | microplus | study | microplus serpin | expression | study differential expression | serpin function deepen knowledge | serpin function deepen | serpin function [SUMMARY]
[CONTENT] rms | tick | mm | usa | microplus | pcr | serpins | samples | expression | serpin [SUMMARY]
[CONTENT] rms | tick | mm | usa | microplus | pcr | serpins | samples | expression | serpin [SUMMARY]
[CONTENT] ||| Ixodidae ||| [SUMMARY]
null
[CONTENT] four | twenty-two | R. | RmS-22 ||| R. ||| RmS | RmS-6 | RmS-14 ||| -20 | R. ||| RmS-21 | RmS-22 | nymph ||| four | Chymotrypsin / Elastase | Thrombin [SUMMARY]
[CONTENT] R. [SUMMARY]
[CONTENT] ||| Ixodidae ||| ||| R. | Yabi ||| BmiGi V1 | BmiGi | five | SSH | Australian | V1 ||| PCR ||| four | R. | Pichia ||| four | twenty-two | R. | RmS-22 ||| R. ||| RmS | RmS-6 | RmS-14 ||| -20 | R. ||| RmS-21 | RmS-22 | nymph ||| four | Chymotrypsin / Elastase | Thrombin ||| R. [SUMMARY]
[CONTENT] ||| Ixodidae ||| ||| R. | Yabi ||| BmiGi V1 | BmiGi | five | SSH | Australian | V1 ||| PCR ||| four | R. | Pichia ||| four | twenty-two | R. | RmS-22 ||| R. ||| RmS | RmS-6 | RmS-14 ||| -20 | R. ||| RmS-21 | RmS-22 | nymph ||| four | Chymotrypsin / Elastase | Thrombin ||| R. [SUMMARY]
Validation of the hepatocellular carcinoma early detection screening algorithm Doylestown and aMAP in a cohort of Chinese with cirrhosis.
35218083
Previous studies have developed blood-based biomarker algorithms such as the Doylestown algorithm and the aMAP score to improve the detection of hepatocellular carcinoma (HCC). However, the Doylestown algorithm has not been evaluated in a Chinese population, and it remains unclear which of these two screening models is more suitable for people with liver cirrhosis.
BACKGROUND
In this study, HCC surveillance was performed by radiographic imaging and testing for tumor markers every 6 months from August 21, 2018, to January 12, 2021. We conducted a retrospective study of 742 liver cirrhosis patients, and among them, 20 developed HCC during follow-up. Samples from these patients at three follow-up time points were tested to evaluate alpha-fetoprotein (AFP), the Doylestown algorithm, and aMAP score.
METHODS
Overall, 521 liver cirrhosis patients underwent semiannual longitudinal follow-up three times. Five patients were diagnosed with HCC within 0-6 months of the third follow-up. We found that for these liver cirrhosis patients, the Doylestown algorithm had the highest accuracy for HCC detection, with areas under the receiver operating characteristic curve (AUCs) of 0.763, 0.801, and 0.867 for follow-ups 1-3, respectively. Compared with AFP at 20 ng/ml, the Doylestown algorithm increased biomarker performance by 7.4%, 21%, and 13% for follow-ups 1-3, respectively.
RESULTS
Our findings show that the Doylestown algorithm performance appeared to be optimal for HCC early screening in the Chinese cirrhotic population when compared with the aMAP score and AFP at 20 ng/ml.
CONCLUSIONS
[ "Algorithms", "Biomarkers, Tumor", "Carcinoma, Hepatocellular", "China", "Humans", "Liver Cirrhosis", "Liver Neoplasms", "Retrospective Studies", "Sensitivity and Specificity", "alpha-Fetoproteins" ]
8993634
INTRODUCTION
Hepatocellular carcinoma (HCC) is the fifth most common cancer and the second leading cause of cancer death in China [1, 2]. Meanwhile, the prognosis of HCC patients remains poor. Patients with advanced HCC have few treatment options, with 5-year survival rates ranging from 5% to 14%, whereas patients with early HCC can undergo radical treatments, including surgical resection, ablation, and liver transplantation [3, 4]. The 5-year survival rate for HCC patients diagnosed at an early stage is up to 75% [5, 6, 7]. Therefore, early detection of HCC is a key component of reducing HCC mortality. About 80%-90% of HCC cases occur in patients with cirrhosis, and cirrhosis is also the major cause of liver disease-related morbidity and mortality worldwide [6, 8]. The Chinese guideline for stratified screening and surveillance of primary liver cancer (2020 edition) recommends abdominal ultrasonography combined with serum alpha-fetoprotein (AFP) every 6 months as the surveillance program for HCC in patients with cirrhosis [9]. However, ultrasound is dependent on operator experience and difficult to perform in obese patients [10, 11]. AFP has suboptimal sensitivity and specificity (41% to 65% and 80% to 94%), and its usefulness has been widely debated [12]. At the currently used cutoff (20 μg/L), this strategy misses about a third of early-stage HCC [13, 14]. Novel surveillance strategies are therefore urgently needed to improve the accuracy of early HCC detection. Wang et al. [15] developed a logistic regression algorithm named the Doylestown algorithm that uses AFP, age, gender, alkaline phosphatase (ALK), and alanine aminotransferase (ALT) levels to improve the detection of HCC, particularly in patients with cirrhosis. However, it is uncertain whether the Doylestown algorithm is applicable for predicting HCC occurrence in the Chinese population. Recently, the aMAP risk score, consisting of age, gender, total bilirubin (TB), albumin (ALB), and platelets (PLT), was reported to predict HCC development in patients with chronic hepatitis [16]. The aMAP score has the advantage of assessing HCC risk across different ethnicities, including Asian and Caucasian populations. However, the cohort of that study consisted mainly of patients with hepatitis B, hepatitis C, or other chronic hepatitis, so whether the aMAP score is suitable for assessing the risk of HCC in patients with cirrhosis remains uncertain. This study aims to compare multiple biomarkers (AFP, the Doylestown algorithm, and the aMAP score) in Chinese patients with cirrhosis and to investigate the clinical utility of the Doylestown algorithm and aMAP score.
null
null
RESULTS
Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score: Patients were followed up three times from August 21, 2018, to January 12, 2021. Among the 742 patients with cirrhosis seen during that period, we conducted a retrospective study of 20 HCC cases and 722 cirrhosis control subjects. In the cirrhosis control group, 516 patients were followed up three times, 603 were followed up at least twice, and 722 patients were followed up at least once. Among the HCC cases, five patients had been followed up three times at HCC diagnosis, 17 had been followed up at least twice before diagnosis, and 20 patients had been followed up at least once before diagnosis. A total of 521 patients were followed up three times. As shown in Table 2, the mean age of these patients was 52 (range, 31–75) years, and the majority (70%) were men. The mean values for ALT, ALK, TB, ALB, and PLT are listed in Table 2. As Table 2 shows, TB, ALB, and PLT values remained similar over about 12 months for HCC and non-HCC patients; since these three factors are key components of the aMAP score, the aMAP score values were also similar over the three time points. ALT and AFP levels were higher at the first follow-up visit (Time 1) in non-HCC patients, and the Doylestown algorithm values were correspondingly higher at Time 1. AFP levels of HCC patients increased close to the time of HCC diagnosis, and their Doylestown algorithm score (DAs) showed the same trend. Table 2: Individual components and the AFP, Doylestown algorithm, and aMAP score at different time points in patients with liver cirrhosis (n = 521). A total of 1563 samples, consisting of three time points each from 521 individual patients, were examined. Time 1 is 12–18 months prior, Time 2 is 6–12 months prior, and Time 3 is 0–6 months prior; comparisons used the Friedman test. Values are mean levels with ranges indicated: ALT (U/L), ALK (U/L), TB (μmol/L), ALB (g/L), PLT (10³/mm³), AFP (ng/ml), the Doylestown algorithm, and the aMAP score. Longitudinal data were available from 521 patients at the first follow-up visit (12–18 months before diagnosis), second follow-up visit (6–12 months before diagnosis), and third follow-up visit (0–6 months before diagnosis). Figure 1 shows the distribution of AFP, Doylestown algorithm score, and aMAP score in all patients at the different times. Figure 1: Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score. Retrospective longitudinal data at three follow-up visits were available from 516 patients without HCC and 5 patients with HCC. * represents p < 0.05; ** represents p < 0.01; **** represents p < 0.0001; ns, no significant difference. HCC patient characteristics: The HCC patient characteristics are detailed in Table S1. All HCC patients had BCLC stage 0/A disease (stage 0, n = 11; stage A, n = 9). Most HCC patients were diagnosed using MRI, except for two patients diagnosed by CT. The maximum diameter of the tumor at the time of diagnosis was no more than 40 mm (7–32 mm). Half of the HCC patients underwent surgery after diagnosis. Among the HCC cases, 20 patients underwent biomarker assessment at 0–6 months before HCC diagnosis, 17 patients within 6–12 months before diagnosis, and 5 patients at 12–18 months before diagnosis. Biomarker performance at three follow-up visits before HCC diagnosis: A total of 521 liver cirrhosis patients underwent three longitudinal follow-ups, and five patients were diagnosed with HCC within 0–6 months of the third follow-up. The Doylestown algorithm and aMAP score values were calculated from the patients' clinical data. For the five HCC patients with three follow-up visits, the Doylestown algorithm provided the highest positive predictive value, the aMAP score had the highest sensitivity and negative predictive value, and the AFP model had the highest specificity (Table 3). The Doylestown algorithm demonstrated the highest accuracy for HCC detection, with areas under the receiver operating characteristic curve (AUC) of 0.763 at 12–18 months before diagnosis (95% CI, 0.592–0.935), 0.801 at 6–12 months before diagnosis (95% CI, 0.669–0.933), and 0.867 at 0–6 months before diagnosis (95% CI, 0.798–0.937) (Figure 2A–C). Furthermore, the diagnostic value of the Doylestown algorithm increased as the time of diagnosis approached. The Doylestown algorithm and AFP performed best at 0–6 months before diagnosis, whereas the aMAP score performed best at 6–12 months before diagnosis and showed poor performance overall. Table 3: Prediction accuracy for hepatocellular carcinoma development in the liver cirrhosis cohort (n = 521) using different biomarkers. Abbreviations: NPV, negative predictive value; PPV, positive predictive value. AFP-20: AFP cutoff of 20 ng/ml; DAs-0.5: Doylestown algorithm cutoff of 0.5; aMAP-50: aMAP score cutoff of 50; aMAP-60: aMAP score cutoff of 60. Figure 2: Receiver operating characteristic curves for patients with cirrhosis at three follow-up visits before HCC diagnosis. (A) The first follow-up visit. (B) The second follow-up visit. (C) The third follow-up visit. (D) The highest clinical factor values recorded across the three follow-up visits. Next, we used the maximum values of ALT, ALK, AFP, TB, ALB, and PLT among the three follow-up visits of each patient for model construction, to explore whether this analytic strategy would further improve model performance. However, the AUC generated by this method (Figure 2D) was similar to the AUC calculated using the values of these laboratory variables at the third follow-up visit (Figure 2C); therefore, this strategy did not improve model performance. Biomarker performance at each follow-up: Because very few HCC patients had been diagnosed by the third follow-up in our cohort, we further analyzed the predictive value for all cases with a final HCC diagnosis using the data of cirrhosis patients available at each follow-up. There were 742 liver cirrhosis patients with data from the first follow-up visit, among whom 20 were ultimately diagnosed with HCC, and a total of 610 liver cirrhosis patients were followed up twice, with 17 eventual HCC diagnoses. When the analysis was adjusted to include all patients with liver cancer and cirrhosis at each follow-up (Figure 3A, B), the AUC for the Doylestown algorithm increased from 0.763 to 0.776 at the first follow-up and from 0.801 to 0.808 at the second follow-up. As the number of patients increased, the performance of all three models improved. Figure 3: Receiver operating characteristic curves for patients with cirrhosis at each follow-up. (A) The first follow-up visit. (B) The second follow-up visit.
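To illustrate how the cutoff-based metrics and AUCs reported above can be derived from per-patient scores, the following sketch computes sensitivity, specificity, PPV, NPV and AUC for a biomarker at a fixed threshold (for example AFP at 20 ng/ml or the Doylestown algorithm at 0.5). The toy labels and scores and the use of scikit-learn in Python are assumptions for illustration; the paper states only that its analyses were performed in R 3.6.0.

```python
# Sketch: threshold metrics and AUC for an HCC screening score
# (labels/scores are invented; thresholds follow the cutoffs named above).
import numpy as np
from sklearn.metrics import roc_auc_score

def threshold_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, PPV and NPV of `scores >= cutoff`
    against binary HCC labels (1 = HCC, 0 = cirrhosis control)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pred = scores >= cutoff
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

# Toy cohort: Doylestown-style probabilities for 8 controls and 2 HCC cases.
labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
das    = [0.05, 0.10, 0.22, 0.31, 0.08, 0.44, 0.18, 0.27, 0.62, 0.81]

print("AUC:", round(roc_auc_score(labels, das), 3))
print("Metrics at DAs cutoff 0.5:", threshold_metrics(das, labels, cutoff=0.5))
```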
CONCLUSIONS
In summary, our results highlight the potential of novel biomarker panels, including the Doylestown algorithm and the aMAP score, to improve early HCC detection. The high accuracy of these panels is likely related to their inclusion of multiple biomarkers and demographic factors associated with higher HCC risk. Blood-based biomarkers also have high patient acceptance and are easy to implement in clinical practice. The performance of the Doylestown algorithm was superior to that of AFP and the aMAP score, and the algorithm was also applicable to the Chinese population. The aMAP score demonstrated very high sensitivity and is suitable for preliminary HCC screening, but it needs to be combined with other screening methods to further identify suspected HCC patients.
[ "INTRODUCTION", "Study populations", "Statistical methods", "Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score", "HCC Patient characteristics", "Biomarker performance at three follow‐up visits before HCC diagnosis", "Biomarker performance at each follow‐up", "CONSENT FOR PUBLICATION" ]
[ "Hepatocellular carcinoma (HCC) is the fifth most common cancer and the second leading cause of cancer death in China.\n1\n, \n2\n Meanwhile, the prognosis of HCC patients remains poor. Patients with advanced HCC have few treatment options, with 5‐year survival rates ranging from 5% to 14%, whereas patients with early HCC can undergo radical treatments, including surgical resection, ablation, and liver transplantation.\n3\n, \n4\n The 5‐year survival rate for HCC patients diagnosed at an early stage is up to 75%.\n5\n, \n6\n, \n7\n Therefore, early detection of HCC is a key component toward reducing HCC mortality.\nAbout 80%–90% of HCC cases occur in patients with cirrhosis and cirrhosis is also the major cause of liver disease‐related morbidity and mortality worldwide.\n6\n, \n8\n Chinese guideline for stratified screening and surveillance of primary liver cancer (2020 Edition) recommends abdominal ultrasonography combined with serum a‐fetoprotein (AFP) every 6 months as a surveillance program for HCC in patients with cirrhosis.\n9\n However, ultrasound is dependent on operator experience and difficult to perform in obese patients.\n10\n, \n11\n AFP has suboptimal sensitivity and specificity (41% to 65% and 80% to 94%), and its usefulness has been widely debated.\n12\n At the currently used cutoff (20 μg/L), this strategy misses about a third of HCC in the early stage.\n13\n, \n14\n Novel surveillance strategies are urgently needed to improve the accuracy of early HCC detection.\nWang et al.\n15\n developed a logistic regression algorithm named the Doylestown algorithm that utilizes AFP, age, gender, alkaline phosphatase (ALK), and alanine aminotransferase (ALT) levels to improve the detection of HCC, particularly for those with cirrhosis. However, it is uncertain whether the Doylestown algorithm is applicable for predicting HCC occurrence in the Chinese population. Recently, aMAP risk score consisting of age, gender, total bilirubin (TB), albumin (ALB), and platelets (PLT) was reported to predict HCC development in patients with chronic hepatitis.\n16\n aMAP score has the advantage of assessing HCC risk with different ethnicities including Asian and Caucasian ethnicities. However, the cohort of this study is mainly hepatitis B, hepatitis C, and other chronic hepatitis patients. Whether aMAP score is suitable for assessing the risk of HCC in patients with cirrhosis is still uncertain. This study aims to compare multiple biomarkers (including AFP, the Doylestown algorithm, and aMAP score) in Chinese patients with cirrhosis and investigate the clinical utility of the Doylestown algorithm and aMAP score.", "A total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. 
In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. A total of 72 cases were excluded for the following reasons: one case transfer to other hospitals for treatment, four non‐cirrhotic cases, and 67 cases with insufficient clinical data for analysis. Ultimately, 742 patients were enrolled for analysis. The racial background of all patients was uniformly Chinese. Summary etiologic data of the cohort are given in Table 1. Patients with hepatitis B or C were treated with nucleos(t)ide analogs or direct‐acting antiviral agent treatment, respectively.\nEtiologic data among 742 cirrhosis patients\nHepatitis B virus.\nNon‐alcoholic fatty liver disease.\nHepatitis C virus.\nPatients were followed up from August 21, 2018, to January 12, 2021. During this period, HCC surveillance was performed by radiographic imaging (CT and/or MRI) and testing for tumor markers every 6 months. The diagnosis of HCC was made based on radiological imaging. This cohort has been followed for a median of 378 days (range of 254–840). 20 patients were eventually diagnosed with early HCC in this study (Table S1). Early‐stage HCC was defined as Barcelona Clinic Liver Cancer(BCLC) stage 0 or A.\n17\n\n\nAll laboratory data, including serum AFP and values for ALT, ALK, TB, ALB, and PLT were determined using standard methods at Hwa Mei Hospital clinical laboratory. More details for patient enrollment can be found in Supporting information.", "The Doylestown algorithm was calculated based on a previous study,\n15\n as follows:\n1/(1+EXP(‐(−10.307+(0.097*age[year])+1.645*Gender[male:1,female:0]+(2.314*log10AFP[ng/ml])+(0.011*ALK[U/L])+(−0.008* ALT[U/L])))).\nThe output value ranged from 0 to 1. The cutoff value of 0.5 was used to identify patients with HCC.\nThe aMAP score was calculated based on a previous study,\n16\n as follows:\n((age[year]*0.06+gender*0.89[male:1,female:0]+0.48*((log10bilirubin[μmol/L]*0.66) + (albumin [g/L]*‐0.085)) ‐ 0.01*platelets [103/mm]) +7.4) / 14.77*100.\nThe output value ranged from 0 to 100. The cutoff value of 50 was associated with a medium‐risk group. The cutoff value of 60 resulted in a high‐risk group.\nA t test was performed to compare AFP, Doylestown algorithm, and aMAP score. Moreover, their correlation was tested using Pearson's correlation coefficient. All statistical tests were two‐sided and evaluated at the 0.05 level of statistical significance. All statistical analyses were performed using the R language, version 3.6.0.", "Patients were followed up three times from August 21, 2018, to January 12, 2021. Among 742 patients with cirrhosis during that period, we conducted a retrospective study of 20 HCC cases and 722 cirrhosis control subjects. In the cirrhosis control group, 516 patients were followed up three times, 603 were followed up at least twice, and 722 patients were followed up at least once. Among the HCC cases, five patients were followed up three times at HCC diagnosis, 17 were followed up at least twice before diagnosis, and 20 patients were followed up at least once before diagnosis.\nA total of 521 patients were followed up three times. As shown in Table 2, the mean age of these patients was 52 (range, 31–75) years. The majority of the patients were men (70%). The mean values for ALT, ALK, TB, ALB, and PLT are listed in Table 2. 
As Table 2 shows, TB, ALB, and PLT values remained similar over approximately 12 months for both HCC and non‐HCC patients. Because these three factors are key components of the aMAP score, the aMAP score values were also similar across the three time points. ALT and AFP levels were higher at the first follow‐up visit (Time 1) in non‐HCC patients, and the Doylestown algorithm values were correspondingly higher at Time 1. AFP levels of HCC patients increased close to the time of HCC diagnosis, and their Doylestown algorithm scores (DAs) showed the same trend.\nIndividual components and the AFP, Doylestown algorithm, and aMAP scores at different time points in patients with liver cirrhosis (n = 521)\na\n\n\nA total of 1563 samples, consisting of three time points each from 521 individual patients, were examined.\nTime 1 is 12–18 months prior.\nTime 2 is 6–12 months prior.\nTime 3 is 0–6 months prior.\nFriedman test.\nMean level of alanine aminotransferase (ALT) with the range indicated (in U/L).\nMean level of alkaline phosphatase (ALK) with the range indicated (in U/L).\nMean level of total bilirubin (TB) with the range indicated (in μmol/L).\nMean level of albumin (ALB) with the range indicated (in g/L).\nMean level of platelets (PLT) with the range indicated (in 10³/mm³).\nMean level of alpha‐fetoprotein (AFP) with the range indicated (in ng/ml).\nMean value of the Doylestown algorithm with the range indicated.\nMean value of the aMAP score with the range indicated.\nLongitudinal data were available from 521 patients at the first follow‐up visit (12–18 months before diagnosis), second follow‐up visit (6–12 months before diagnosis), and third follow‐up visit (0–6 months before diagnosis). Figure 1 shows the distribution of the AFP, Doylestown algorithm, and aMAP scores in all patients at the different time points.\nDistribution of AFP, Doylestown algorithm score (DAs), and aMAP score. Retrospective longitudinal data at three follow‐up visits were available from 516 patients without HCC and 5 patients with HCC. * represents p < 0.05; ** represents p < 0.01; **** represents p < 0.0001; ns represents no significant difference", "The HCC patient characteristics are detailed in Table S1. All HCC patients had BCLC stage 0 or A disease (stage 0, 11 patients; stage A, 9 patients). Most HCC patients were diagnosed using MRI; two were diagnosed by CT. The maximum tumor diameter at the time of diagnosis was no more than 40 mm (range, 7–32 mm). Half of the HCC patients underwent surgery after diagnosis. Among the HCC cases, 20 patients underwent biomarker assessment at 0–6 months before HCC diagnosis, 17 patients within 6–12 months before diagnosis, and 5 patients at 12–18 months before diagnosis.", "A total of 521 liver cirrhosis patients underwent three longitudinal follow‐ups. Five patients were diagnosed with HCC within 0–6 months of the third follow‐up. The Doylestown algorithm and aMAP score values were calculated from patient clinical data. Among the five HCC patients with three follow‐up visits, the Doylestown algorithm provided the highest positive predictive value, the aMAP score had the highest sensitivity and negative predictive value, and the AFP model had the highest specificity (Table 3). The Doylestown algorithm demonstrated the highest accuracy for HCC detection, with an area under the receiver operating characteristic curve (AUC) of 0.763 at 12–18 months before diagnosis (95% CI, 0.592–0.935), 0.801 at 6–12 months before diagnosis (95% CI, 0.669–0.933), and 0.867 at 0–6 months before diagnosis (95% CI, 0.798–0.937) (Figure 2A–C). 
Furthermore, the diagnostic value of the Doylestown algorithm increased across the successive follow‐up visits, that is, as the time of HCC diagnosis approached. The Doylestown algorithm and AFP performed best at 0–6 months before diagnosis, whereas the aMAP score performed best at 6–12 months before diagnosis and showed poor performance overall.\nPrediction accuracy for hepatocellular carcinoma development in the liver cirrhosis cohort (n = 521) using different biomarkers\nAbbreviations: NPV, negative predictive value; PPV, positive predictive value.\nAFP‐20: using an AFP cutoff value of 20 ng/ml.\nDAs‐0.5: using a Doylestown algorithm cutoff value of 0.5.\naMAP‐50: using an aMAP score cutoff value of 50.\naMAP‐60: using an aMAP score cutoff value of 60.\nReceiver operating characteristic curves for patients with cirrhosis at three follow‐up visits before HCC diagnosis. (A) The first follow‐up visit. (B) The second follow‐up visit. (C) The third follow‐up visit. (D) The highest value of each clinical factor recorded across the three follow‐up visits\nNext, we used the maximum values of ALT, ALK, AFP, TB, ALB, and PLT across the three follow‐up visits of each patient for model construction, to explore whether this analytic strategy would further improve model performance. However, the AUC generated by this approach (Figure 2D) was similar to the AUC calculated using the values of these laboratory variables at the third follow‐up visit (Figure 2C). Therefore, this analytic strategy did not improve model performance. (An illustrative R sketch of the cutoff‐based and AUC‐based evaluation is given below.)", "Because very few HCC patients had been diagnosed by the third follow‐up in our cohort, we additionally analyzed the predictive value for all cases with a final HCC diagnosis using the cirrhosis cohort data available at each follow‐up. Data from the first follow‐up visit were available for 742 liver cirrhosis patients, of whom 20 were ultimately diagnosed with HCC, and 610 liver cirrhosis patients were followed up twice, of whom 17 were ultimately diagnosed with HCC. When the analysis included all patients with liver cancer or cirrhosis at each follow‐up (Figure 3A, B), the AUC for the Doylestown algorithm increased from 0.763 to 0.776 at the first follow‐up and from 0.801 to 0.808 at the second follow‐up. As the number of patients increased, the performance of all three models improved.\nReceiver operating characteristic curves for patients with cirrhosis at each follow‐up. (A) The first follow‐up visit. (B) The second follow‐up visit", "Not applicable." ]
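The Statistical methods text above spells out both score formulas and their fixed cutoffs. As an illustration only, the following R sketch (R is the language the authors state they used for analysis) implements the two published formulas and the cutoff rules as described; the function names and the hypothetical patient values are assumptions for illustration and are not taken from the study's own code.

```r
# Illustrative sketch of the two published risk scores (not the authors' code).
# Formulas follow the Statistical methods section of this article.

# Doylestown algorithm: logistic model of age, gender, log10(AFP), ALK, and ALT.
# Output ranges from 0 to 1; values >= 0.5 flag suspected HCC in this study.
doylestown_score <- function(age_years, male, afp_ng_ml, alk_u_l, alt_u_l) {
  lp <- -10.307 +
    0.097 * age_years +
    1.645 * male +                  # male = 1, female = 0
    2.314 * log10(afp_ng_ml) +
    0.011 * alk_u_l -
    0.008 * alt_u_l
  1 / (1 + exp(-lp))
}

# aMAP score: age, gender, total bilirubin, albumin, and platelets.
# Output ranges from 0 to 100; >= 50 marks at least medium risk and
# >= 60 marks high risk in this study.
amap_score <- function(age_years, male, tb_umol_l, alb_g_l, plt_1e3_mm3) {
  raw <- 0.06 * age_years +
    0.89 * male +
    0.48 * (0.66 * log10(tb_umol_l) - 0.085 * alb_g_l) -
    0.01 * plt_1e3_mm3
  (raw + 7.4) / 14.77 * 100
}

# Hypothetical patient: 55-year-old man, AFP 15 ng/ml, ALK 95 U/L, ALT 40 U/L,
# TB 18 umol/L, ALB 38 g/L, PLT 120 x10^3/mm^3.
da <- doylestown_score(55, male = 1, afp_ng_ml = 15, alk_u_l = 95, alt_u_l = 40)
am <- amap_score(55, male = 1, tb_umol_l = 18, alb_g_l = 38, plt_1e3_mm3 = 120)

data.frame(doylestown = da, doylestown_positive = da >= 0.5,
           amap = am, amap_medium_or_high = am >= 50, amap_high = am >= 60)
```

At the cohort level, the group comparisons and correlations described in the Statistical methods (t test, Pearson's correlation) correspond to the base‐R functions t.test() and cor.test(..., method = "pearson").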
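The results above compare the three markers by sensitivity, specificity, PPV, and NPV at fixed cutoffs (AFP 20 ng/ml, Doylestown algorithm 0.5, aMAP 50 and 60) and by AUC with 95% CIs at each follow‐up visit. The sketch below shows one way such an evaluation could be reproduced in R with the pROC package; the data frame d, its column names, and the synthetic values used to fill it are assumptions for illustration, not study data.

```r
# Illustrative evaluation sketch (not the authors' code): cutoff-based metrics
# and ROC/AUC for one follow-up visit, run on synthetic data.
library(pROC)  # provides roc(), auc(), ci.auc()

# Synthetic stand-in for one visit: hcc = outcome label, afp/das/amap = scores.
set.seed(1)
n <- 200
d <- data.frame(
  hcc  = rbinom(n, 1, 0.05),                  # ~5% synthetic HCC labels
  afp  = rlnorm(n, meanlog = 1.5, sdlog = 1), # synthetic AFP (ng/ml)
  das  = runif(n),                            # synthetic Doylestown outputs
  amap = runif(n, 30, 80)                     # synthetic aMAP scores
)

# Sensitivity, specificity, PPV, and NPV for a score at a fixed cutoff.
cutoff_metrics <- function(score, outcome, cutoff) {
  pred <- as.integer(score >= cutoff)
  tp <- sum(pred == 1 & outcome == 1); fp <- sum(pred == 1 & outcome == 0)
  tn <- sum(pred == 0 & outcome == 0); fn <- sum(pred == 0 & outcome == 1)
  c(sensitivity = tp / (tp + fn), specificity = tn / (tn + fp),
    ppv = tp / (tp + fp), npv = tn / (tn + fn))
}

# Cutoffs reported in the article: AFP-20, DAs-0.5, aMAP-50, aMAP-60.
rbind(
  "AFP-20"  = cutoff_metrics(d$afp,  d$hcc, 20),
  "DAs-0.5" = cutoff_metrics(d$das,  d$hcc, 0.5),
  "aMAP-50" = cutoff_metrics(d$amap, d$hcc, 50),
  "aMAP-60" = cutoff_metrics(d$amap, d$hcc, 60)
)

# Cutoff-free comparison: AUC with a 95% CI for one marker at this visit.
roc_das <- roc(d$hcc, d$das, quiet = TRUE)
auc(roc_das)     # the article reports, e.g., 0.867 at 0-6 months before diagnosis
ci.auc(roc_das)  # DeLong 95% confidence interval by default
```

The "highest value across the three follow‐up visits" strategy described in the results amounts to replacing each biomarker column with its per‐patient maximum over visits (for example, with pmax()) and rerunning the same evaluation.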
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study populations", "Statistical methods", "RESULTS", "Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score", "HCC Patient characteristics", "Biomarker performance at three follow‐up visits before HCC diagnosis", "Biomarker performance at each follow‐up", "DISCUSSION", "CONCLUSIONS", "CONFLICTS OF INTEREST", "CONSENT FOR PUBLICATION", "Supporting information" ]
[ "Hepatocellular carcinoma (HCC) is the fifth most common cancer and the second leading cause of cancer death in China.\n1\n, \n2\n Meanwhile, the prognosis of HCC patients remains poor. Patients with advanced HCC have few treatment options, with 5‐year survival rates ranging from 5% to 14%, whereas patients with early HCC can undergo radical treatments, including surgical resection, ablation, and liver transplantation.\n3\n, \n4\n The 5‐year survival rate for HCC patients diagnosed at an early stage is up to 75%.\n5\n, \n6\n, \n7\n Therefore, early detection of HCC is a key component toward reducing HCC mortality.\nAbout 80%–90% of HCC cases occur in patients with cirrhosis and cirrhosis is also the major cause of liver disease‐related morbidity and mortality worldwide.\n6\n, \n8\n Chinese guideline for stratified screening and surveillance of primary liver cancer (2020 Edition) recommends abdominal ultrasonography combined with serum a‐fetoprotein (AFP) every 6 months as a surveillance program for HCC in patients with cirrhosis.\n9\n However, ultrasound is dependent on operator experience and difficult to perform in obese patients.\n10\n, \n11\n AFP has suboptimal sensitivity and specificity (41% to 65% and 80% to 94%), and its usefulness has been widely debated.\n12\n At the currently used cutoff (20 μg/L), this strategy misses about a third of HCC in the early stage.\n13\n, \n14\n Novel surveillance strategies are urgently needed to improve the accuracy of early HCC detection.\nWang et al.\n15\n developed a logistic regression algorithm named the Doylestown algorithm that utilizes AFP, age, gender, alkaline phosphatase (ALK), and alanine aminotransferase (ALT) levels to improve the detection of HCC, particularly for those with cirrhosis. However, it is uncertain whether the Doylestown algorithm is applicable for predicting HCC occurrence in the Chinese population. Recently, aMAP risk score consisting of age, gender, total bilirubin (TB), albumin (ALB), and platelets (PLT) was reported to predict HCC development in patients with chronic hepatitis.\n16\n aMAP score has the advantage of assessing HCC risk with different ethnicities including Asian and Caucasian ethnicities. However, the cohort of this study is mainly hepatitis B, hepatitis C, and other chronic hepatitis patients. Whether aMAP score is suitable for assessing the risk of HCC in patients with cirrhosis is still uncertain. This study aims to compare multiple biomarkers (including AFP, the Doylestown algorithm, and aMAP score) in Chinese patients with cirrhosis and investigate the clinical utility of the Doylestown algorithm and aMAP score.", " Study populations A total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. 
In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. A total of 72 cases were excluded for the following reasons: one case transfer to other hospitals for treatment, four non‐cirrhotic cases, and 67 cases with insufficient clinical data for analysis. Ultimately, 742 patients were enrolled for analysis. The racial background of all patients was uniformly Chinese. Summary etiologic data of the cohort are given in Table 1. Patients with hepatitis B or C were treated with nucleos(t)ide analogs or direct‐acting antiviral agent treatment, respectively.\nEtiologic data among 742 cirrhosis patients\nHepatitis B virus.\nNon‐alcoholic fatty liver disease.\nHepatitis C virus.\nPatients were followed up from August 21, 2018, to January 12, 2021. During this period, HCC surveillance was performed by radiographic imaging (CT and/or MRI) and testing for tumor markers every 6 months. The diagnosis of HCC was made based on radiological imaging. This cohort has been followed for a median of 378 days (range of 254–840). 20 patients were eventually diagnosed with early HCC in this study (Table S1). Early‐stage HCC was defined as Barcelona Clinic Liver Cancer(BCLC) stage 0 or A.\n17\n\n\nAll laboratory data, including serum AFP and values for ALT, ALK, TB, ALB, and PLT were determined using standard methods at Hwa Mei Hospital clinical laboratory. More details for patient enrollment can be found in Supporting information.\nA total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. A total of 72 cases were excluded for the following reasons: one case transfer to other hospitals for treatment, four non‐cirrhotic cases, and 67 cases with insufficient clinical data for analysis. Ultimately, 742 patients were enrolled for analysis. The racial background of all patients was uniformly Chinese. Summary etiologic data of the cohort are given in Table 1. Patients with hepatitis B or C were treated with nucleos(t)ide analogs or direct‐acting antiviral agent treatment, respectively.\nEtiologic data among 742 cirrhosis patients\nHepatitis B virus.\nNon‐alcoholic fatty liver disease.\nHepatitis C virus.\nPatients were followed up from August 21, 2018, to January 12, 2021. During this period, HCC surveillance was performed by radiographic imaging (CT and/or MRI) and testing for tumor markers every 6 months. The diagnosis of HCC was made based on radiological imaging. This cohort has been followed for a median of 378 days (range of 254–840). 
20 patients were eventually diagnosed with early HCC in this study (Table S1). Early‐stage HCC was defined as Barcelona Clinic Liver Cancer(BCLC) stage 0 or A.\n17\n\n\nAll laboratory data, including serum AFP and values for ALT, ALK, TB, ALB, and PLT were determined using standard methods at Hwa Mei Hospital clinical laboratory. More details for patient enrollment can be found in Supporting information.\n Statistical methods The Doylestown algorithm was calculated based on a previous study,\n15\n as follows:\n1/(1+EXP(‐(−10.307+(0.097*age[year])+1.645*Gender[male:1,female:0]+(2.314*log10AFP[ng/ml])+(0.011*ALK[U/L])+(−0.008* ALT[U/L])))).\nThe output value ranged from 0 to 1. The cutoff value of 0.5 was used to identify patients with HCC.\nThe aMAP score was calculated based on a previous study,\n16\n as follows:\n((age[year]*0.06+gender*0.89[male:1,female:0]+0.48*((log10bilirubin[μmol/L]*0.66) + (albumin [g/L]*‐0.085)) ‐ 0.01*platelets [103/mm]) +7.4) / 14.77*100.\nThe output value ranged from 0 to 100. The cutoff value of 50 was associated with a medium‐risk group. The cutoff value of 60 resulted in a high‐risk group.\nA t test was performed to compare AFP, Doylestown algorithm, and aMAP score. Moreover, their correlation was tested using Pearson's correlation coefficient. All statistical tests were two‐sided and evaluated at the 0.05 level of statistical significance. All statistical analyses were performed using the R language, version 3.6.0.\nThe Doylestown algorithm was calculated based on a previous study,\n15\n as follows:\n1/(1+EXP(‐(−10.307+(0.097*age[year])+1.645*Gender[male:1,female:0]+(2.314*log10AFP[ng/ml])+(0.011*ALK[U/L])+(−0.008* ALT[U/L])))).\nThe output value ranged from 0 to 1. The cutoff value of 0.5 was used to identify patients with HCC.\nThe aMAP score was calculated based on a previous study,\n16\n as follows:\n((age[year]*0.06+gender*0.89[male:1,female:0]+0.48*((log10bilirubin[μmol/L]*0.66) + (albumin [g/L]*‐0.085)) ‐ 0.01*platelets [103/mm]) +7.4) / 14.77*100.\nThe output value ranged from 0 to 100. The cutoff value of 50 was associated with a medium‐risk group. The cutoff value of 60 resulted in a high‐risk group.\nA t test was performed to compare AFP, Doylestown algorithm, and aMAP score. Moreover, their correlation was tested using Pearson's correlation coefficient. All statistical tests were two‐sided and evaluated at the 0.05 level of statistical significance. All statistical analyses were performed using the R language, version 3.6.0.", "A total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. 
A total of 72 cases were excluded for the following reasons: one case transfer to other hospitals for treatment, four non‐cirrhotic cases, and 67 cases with insufficient clinical data for analysis. Ultimately, 742 patients were enrolled for analysis. The racial background of all patients was uniformly Chinese. Summary etiologic data of the cohort are given in Table 1. Patients with hepatitis B or C were treated with nucleos(t)ide analogs or direct‐acting antiviral agent treatment, respectively.\nEtiologic data among 742 cirrhosis patients\nHepatitis B virus.\nNon‐alcoholic fatty liver disease.\nHepatitis C virus.\nPatients were followed up from August 21, 2018, to January 12, 2021. During this period, HCC surveillance was performed by radiographic imaging (CT and/or MRI) and testing for tumor markers every 6 months. The diagnosis of HCC was made based on radiological imaging. This cohort has been followed for a median of 378 days (range of 254–840). 20 patients were eventually diagnosed with early HCC in this study (Table S1). Early‐stage HCC was defined as Barcelona Clinic Liver Cancer(BCLC) stage 0 or A.\n17\n\n\nAll laboratory data, including serum AFP and values for ALT, ALK, TB, ALB, and PLT were determined using standard methods at Hwa Mei Hospital clinical laboratory. More details for patient enrollment can be found in Supporting information.", "The Doylestown algorithm was calculated based on a previous study,\n15\n as follows:\n1/(1+EXP(‐(−10.307+(0.097*age[year])+1.645*Gender[male:1,female:0]+(2.314*log10AFP[ng/ml])+(0.011*ALK[U/L])+(−0.008* ALT[U/L])))).\nThe output value ranged from 0 to 1. The cutoff value of 0.5 was used to identify patients with HCC.\nThe aMAP score was calculated based on a previous study,\n16\n as follows:\n((age[year]*0.06+gender*0.89[male:1,female:0]+0.48*((log10bilirubin[μmol/L]*0.66) + (albumin [g/L]*‐0.085)) ‐ 0.01*platelets [103/mm]) +7.4) / 14.77*100.\nThe output value ranged from 0 to 100. The cutoff value of 50 was associated with a medium‐risk group. The cutoff value of 60 resulted in a high‐risk group.\nA t test was performed to compare AFP, Doylestown algorithm, and aMAP score. Moreover, their correlation was tested using Pearson's correlation coefficient. All statistical tests were two‐sided and evaluated at the 0.05 level of statistical significance. All statistical analyses were performed using the R language, version 3.6.0.", " Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score Patients were followed up three times from August 21, 2018, to January 12, 2021. Among 742 patients with cirrhosis during that period, we conducted a retrospective study of 20 HCC cases and 722 cirrhosis control subjects. In the cirrhosis control group, 516 patients were followed up three times, 603 were followed up at least twice, and 722 patients were followed up at least once. Among the HCC cases, five patients were followed up three times at HCC diagnosis, 17 were followed up at least twice before diagnosis, and 20 patients were followed up at least once before diagnosis.\nA total of 521 patients were followed up three times. As shown in Table 2, the mean age of these patients was 52 (range, 31–75) years. The majority of the patients were men (70%). The mean values for ALT, ALK, TB, ALB, and PLT are listed in Table 2. As Table 2 shows, TB, ALB, and PLT values remained similar over about 12 months for HCC and non‐HCC patients. 
Since these three factors are key components of the aMAP score, the aMAP score values were also similar over the three time points. The ALT and AFP levels were higher at the first follow‐up visit (Time 1) in non‐HCC patients, and the Doylestown algorithm values were also higher at Time 1. AFP levels of HCC patients increased close to the time of HCC diagnosis, and their Doylestown algorithm score (DAs) had the same trend.\nIndividual component and the AFP, Doylestown algorithm, and aMAP score at different time points in patients with liver cirrhosis (n = 521)\na\n\n\nA total of 1563 samples consisting of three time points each from 521 individual patients were examined.\nTime 1 is 12–18 months prior.\nTime 2 is 6–12 months prior.\nTime 3 is 0–6 months prior.\nFriedman test.\nMean level of alanine aminotransferase (ALT) with the range indicated (in U/L).\nMean level of alkaline phosphatase (ALK) with the range indicated (in U/L).\nMean level of total bilirubin (TB) with the range indicated (in μmol/L).\nMean level of albumin (ALB) with the range indicated (in g/L).\nMean level of platelets (PLT) with the range indicated (in 103/mm).\nMean level of alpha‐fetoprotein (AFP) with the range indicated (in ng/ml).\nMean level of the Doylestown algorithm with the range indicated.\nMean level of the aMAP score with the range indicated.\nLongitudinal data were available from 521 patients at the first follow‐up visit (12–18 months before diagnosis), second follow‐up visit (6–12 months before diagnosis), and third follow‐up visit (0–6 months before diagnosis). Figure 1 shows the distribution of AFP, Doylestown algorithm score, and aMAP score in all patients at different times, respectively.\nDistribution of AFP, Doylestown algorithm score (DAs), and aMAP score. Retrospective longitudinal data at three follow‐up visits were available from 516 patients without HCC and 5 patients with HCC. * Represents p < 0.05; ** represents p < 0.01; **** represents p < 0.0001; ns represents no significant difference\nPatients were followed up three times from August 21, 2018, to January 12, 2021. Among 742 patients with cirrhosis during that period, we conducted a retrospective study of 20 HCC cases and 722 cirrhosis control subjects. In the cirrhosis control group, 516 patients were followed up three times, 603 were followed up at least twice, and 722 patients were followed up at least once. Among the HCC cases, five patients were followed up three times at HCC diagnosis, 17 were followed up at least twice before diagnosis, and 20 patients were followed up at least once before diagnosis.\nA total of 521 patients were followed up three times. As shown in Table 2, the mean age of these patients was 52 (range, 31–75) years. The majority of the patients were men (70%). The mean values for ALT, ALK, TB, ALB, and PLT are listed in Table 2. As Table 2 shows, TB, ALB, and PLT values remained similar over about 12 months for HCC and non‐HCC patients. Since these three factors are key components of the aMAP score, the aMAP score values were also similar over the three time points. The ALT and AFP levels were higher at the first follow‐up visit (Time 1) in non‐HCC patients, and the Doylestown algorithm values were also higher at Time 1. 
AFP levels of HCC patients increased close to the time of HCC diagnosis, and their Doylestown algorithm score (DAs) had the same trend.\nIndividual component and the AFP, Doylestown algorithm, and aMAP score at different time points in patients with liver cirrhosis (n = 521)\na\n\n\nA total of 1563 samples consisting of three time points each from 521 individual patients were examined.\nTime 1 is 12–18 months prior.\nTime 2 is 6–12 months prior.\nTime 3 is 0–6 months prior.\nFriedman test.\nMean level of alanine aminotransferase (ALT) with the range indicated (in U/L).\nMean level of alkaline phosphatase (ALK) with the range indicated (in U/L).\nMean level of total bilirubin (TB) with the range indicated (in μmol/L).\nMean level of albumin (ALB) with the range indicated (in g/L).\nMean level of platelets (PLT) with the range indicated (in 103/mm).\nMean level of alpha‐fetoprotein (AFP) with the range indicated (in ng/ml).\nMean level of the Doylestown algorithm with the range indicated.\nMean level of the aMAP score with the range indicated.\nLongitudinal data were available from 521 patients at the first follow‐up visit (12–18 months before diagnosis), second follow‐up visit (6–12 months before diagnosis), and third follow‐up visit (0–6 months before diagnosis). Figure 1 shows the distribution of AFP, Doylestown algorithm score, and aMAP score in all patients at different times, respectively.\nDistribution of AFP, Doylestown algorithm score (DAs), and aMAP score. Retrospective longitudinal data at three follow‐up visits were available from 516 patients without HCC and 5 patients with HCC. * Represents p < 0.05; ** represents p < 0.01; **** represents p < 0.0001; ns represents no significant difference\n HCC Patient characteristics The HCC patient characteristics are detailed in Table S1. All HCC patients had BCLC stage 0/A (0/11, A/9). Most HCC patients were diagnosed using MRI, except for two patients by CT. The maximum diameter of the tumor at the time of diagnosis was no more than 40mm (7–32 mm). Half of the HCC patients had undergone surgery after diagnosis. Among the HCC cases, 20 patients underwent biomarker assessment at 0–6 months before HCC diagnosis, 17 patients within 6–12 months before diagnosis, and 5 patients at 12–18 months before diagnosis.\nThe HCC patient characteristics are detailed in Table S1. All HCC patients had BCLC stage 0/A (0/11, A/9). Most HCC patients were diagnosed using MRI, except for two patients by CT. The maximum diameter of the tumor at the time of diagnosis was no more than 40mm (7–32 mm). Half of the HCC patients had undergone surgery after diagnosis. Among the HCC cases, 20 patients underwent biomarker assessment at 0–6 months before HCC diagnosis, 17 patients within 6–12 months before diagnosis, and 5 patients at 12–18 months before diagnosis.\n Biomarker performance at three follow‐up visits before HCC diagnosis A total of 521 liver cirrhosis patients underwent three longitudinal follow‐ups. Five patients were diagnosed with HCC within 0–6 months of the third follow‐up. The Doylestown algorithm and aMAP score values were calculated using patient clinical data. As for five HCC patients with three follow‐up visits, we found that the Doylestown algorithm provided the highest positive predictive value, the aMAP score had the highest sensitivity and negative predictive value, and the AFP model had the highest specificity (Table 3). 
The Doylestown algorithm demonstrated the highest accuracy for HCC detection, with an area under the receiver operating characteristic curve (AUC) of 0.763 12–18 months before diagnosis (95% CI, 0.592–0.935), 0.801 6–12 months before diagnosis (95% CI, 0.669–0.933), and 0.867 0–6 months before diagnosis (95% CI, 0.798–0.937) (Figure 2A–C). Furthermore, the diagnostic value of the Doylestown algorithm showed an increasing trend with the prolonged follow‐up time. The Doylestown algorithm and AFP demonstrated the best performance at 0–6 months before diagnosis, but the aMAP score had the best performance at 6–12 months before diagnosis, and the aMAP score showed a poor performance overall.\nPrediction accuracy for hepatocellular carcinoma development in patients with liver cirrhosis cohorts (n = 521) using different biomarkers\nAbbreviations: NPV, Negative predictive value; PPV, Positive predictive value.\nAFP‐20: Using the AFP cutoff values of 20 ng/ml.\nDAs‐0.5: Using the Doylestown algorithm cutoff values of 0.5.\naMAP‐50: Using the aMAP score cutoff values of 50.\naMAP‐60: Using the aMAP score cutoff values of 60.\nReceiver operating characteristic curves for patients with cirrhosis at three follow‐up visits before HCC diagnosis. (A) The first follow‐up visit. (B) The second follow‐up visit. (C) The third follow‐up visit. (D) The highest clinical factors value was recorded in the three follow‐up visits\nNext, we used the maximum values of ALT, ALK, AFP, TB, ALB, and PLT among three follow‐up visits of each patient for model construction, in order to explore whether this analytic strategy will further improve model performance. However, the new AUC generated by the new method (Figure 2D) was similar to the AUC calculated using the values of these laboratory variables at the third follow‐up visit (Figure 2C). Therefore, this new analytic strategy did not improve the model performance.\nA total of 521 liver cirrhosis patients underwent three longitudinal follow‐ups. Five patients were diagnosed with HCC within 0–6 months of the third follow‐up. The Doylestown algorithm and aMAP score values were calculated using patient clinical data. As for five HCC patients with three follow‐up visits, we found that the Doylestown algorithm provided the highest positive predictive value, the aMAP score had the highest sensitivity and negative predictive value, and the AFP model had the highest specificity (Table 3). The Doylestown algorithm demonstrated the highest accuracy for HCC detection, with an area under the receiver operating characteristic curve (AUC) of 0.763 12–18 months before diagnosis (95% CI, 0.592–0.935), 0.801 6–12 months before diagnosis (95% CI, 0.669–0.933), and 0.867 0–6 months before diagnosis (95% CI, 0.798–0.937) (Figure 2A–C). Furthermore, the diagnostic value of the Doylestown algorithm showed an increasing trend with the prolonged follow‐up time. 
The Doylestown algorithm and AFP demonstrated the best performance at 0–6 months before diagnosis, but the aMAP score had the best performance at 6–12 months before diagnosis, and the aMAP score showed a poor performance overall.\nPrediction accuracy for hepatocellular carcinoma development in patients with liver cirrhosis cohorts (n = 521) using different biomarkers\nAbbreviations: NPV, Negative predictive value; PPV, Positive predictive value.\nAFP‐20: Using the AFP cutoff values of 20 ng/ml.\nDAs‐0.5: Using the Doylestown algorithm cutoff values of 0.5.\naMAP‐50: Using the aMAP score cutoff values of 50.\naMAP‐60: Using the aMAP score cutoff values of 60.\nReceiver operating characteristic curves for patients with cirrhosis at three follow‐up visits before HCC diagnosis. (A) The first follow‐up visit. (B) The second follow‐up visit. (C) The third follow‐up visit. (D) The highest clinical factors value was recorded in the three follow‐up visits\nNext, we used the maximum values of ALT, ALK, AFP, TB, ALB, and PLT among three follow‐up visits of each patient for model construction, in order to explore whether this analytic strategy will further improve model performance. However, the new AUC generated by the new method (Figure 2D) was similar to the AUC calculated using the values of these laboratory variables at the third follow‐up visit (Figure 2C). Therefore, this new analytic strategy did not improve the model performance.\n Biomarker performance at each follow‐up For very few HCC patients who were diagnosed at the third follow‐up in our cohort, we further analyzed the predictive value for all the cases with final HCC diagnosis with the data of cirrhosis patients at each follow‐up time. There were 742 liver cirrhosis patients with the data of the first follow‐up visit, and in which 20 HCC were diagnosed in the end. And a total of 610 liver cirrhosis patients were followed up twice with 17 HCC diagnoses at last. When adjusting the strategy to analyze all patients with liver cancer and cirrhosis at each follow‐up (Figure 3A, B), the AUC for the Doylestown algorithm increased from 0.763 to 0.776 at the first follow‐up and from 0.801 to 0.808 at the second follow‐up. As the number of patients increased, the performance of the three models improved.\nReceiver operating characteristic curves for patients with cirrhosis at each follow‐up. (A) The first follow‐up visit. (B) The second follow‐up visit\nFor very few HCC patients who were diagnosed at the third follow‐up in our cohort, we further analyzed the predictive value for all the cases with final HCC diagnosis with the data of cirrhosis patients at each follow‐up time. There were 742 liver cirrhosis patients with the data of the first follow‐up visit, and in which 20 HCC were diagnosed in the end. And a total of 610 liver cirrhosis patients were followed up twice with 17 HCC diagnoses at last. When adjusting the strategy to analyze all patients with liver cancer and cirrhosis at each follow‐up (Figure 3A, B), the AUC for the Doylestown algorithm increased from 0.763 to 0.776 at the first follow‐up and from 0.801 to 0.808 at the second follow‐up. As the number of patients increased, the performance of the three models improved.\nReceiver operating characteristic curves for patients with cirrhosis at each follow‐up. (A) The first follow‐up visit. (B) The second follow‐up visit", "Patients were followed up three times from August 21, 2018, to January 12, 2021. 
Among 742 patients with cirrhosis during that period, we conducted a retrospective study of 20 HCC cases and 722 cirrhosis control subjects. In the cirrhosis control group, 516 patients were followed up three times, 603 were followed up at least twice, and 722 patients were followed up at least once. Among the HCC cases, five patients were followed up three times at HCC diagnosis, 17 were followed up at least twice before diagnosis, and 20 patients were followed up at least once before diagnosis.\nA total of 521 patients were followed up three times. As shown in Table 2, the mean age of these patients was 52 (range, 31–75) years. The majority of the patients were men (70%). The mean values for ALT, ALK, TB, ALB, and PLT are listed in Table 2. As Table 2 shows, TB, ALB, and PLT values remained similar over about 12 months for HCC and non‐HCC patients. Since these three factors are key components of the aMAP score, the aMAP score values were also similar over the three time points. The ALT and AFP levels were higher at the first follow‐up visit (Time 1) in non‐HCC patients, and the Doylestown algorithm values were also higher at Time 1. AFP levels of HCC patients increased close to the time of HCC diagnosis, and their Doylestown algorithm score (DAs) had the same trend.\nIndividual component and the AFP, Doylestown algorithm, and aMAP score at different time points in patients with liver cirrhosis (n = 521)\na\n\n\nA total of 1563 samples consisting of three time points each from 521 individual patients were examined.\nTime 1 is 12–18 months prior.\nTime 2 is 6–12 months prior.\nTime 3 is 0–6 months prior.\nFriedman test.\nMean level of alanine aminotransferase (ALT) with the range indicated (in U/L).\nMean level of alkaline phosphatase (ALK) with the range indicated (in U/L).\nMean level of total bilirubin (TB) with the range indicated (in μmol/L).\nMean level of albumin (ALB) with the range indicated (in g/L).\nMean level of platelets (PLT) with the range indicated (in 103/mm).\nMean level of alpha‐fetoprotein (AFP) with the range indicated (in ng/ml).\nMean level of the Doylestown algorithm with the range indicated.\nMean level of the aMAP score with the range indicated.\nLongitudinal data were available from 521 patients at the first follow‐up visit (12–18 months before diagnosis), second follow‐up visit (6–12 months before diagnosis), and third follow‐up visit (0–6 months before diagnosis). Figure 1 shows the distribution of AFP, Doylestown algorithm score, and aMAP score in all patients at different times, respectively.\nDistribution of AFP, Doylestown algorithm score (DAs), and aMAP score. Retrospective longitudinal data at three follow‐up visits were available from 516 patients without HCC and 5 patients with HCC. * Represents p < 0.05; ** represents p < 0.01; **** represents p < 0.0001; ns represents no significant difference", "The HCC patient characteristics are detailed in Table S1. All HCC patients had BCLC stage 0/A (0/11, A/9). Most HCC patients were diagnosed using MRI, except for two patients by CT. The maximum diameter of the tumor at the time of diagnosis was no more than 40mm (7–32 mm). Half of the HCC patients had undergone surgery after diagnosis. Among the HCC cases, 20 patients underwent biomarker assessment at 0–6 months before HCC diagnosis, 17 patients within 6–12 months before diagnosis, and 5 patients at 12–18 months before diagnosis.", "A total of 521 liver cirrhosis patients underwent three longitudinal follow‐ups. 
Five patients were diagnosed with HCC within 0–6 months of the third follow‐up. The Doylestown algorithm and aMAP score values were calculated using patient clinical data. As for five HCC patients with three follow‐up visits, we found that the Doylestown algorithm provided the highest positive predictive value, the aMAP score had the highest sensitivity and negative predictive value, and the AFP model had the highest specificity (Table 3). The Doylestown algorithm demonstrated the highest accuracy for HCC detection, with an area under the receiver operating characteristic curve (AUC) of 0.763 12–18 months before diagnosis (95% CI, 0.592–0.935), 0.801 6–12 months before diagnosis (95% CI, 0.669–0.933), and 0.867 0–6 months before diagnosis (95% CI, 0.798–0.937) (Figure 2A–C). Furthermore, the diagnostic value of the Doylestown algorithm showed an increasing trend with the prolonged follow‐up time. The Doylestown algorithm and AFP demonstrated the best performance at 0–6 months before diagnosis, but the aMAP score had the best performance at 6–12 months before diagnosis, and the aMAP score showed a poor performance overall.\nPrediction accuracy for hepatocellular carcinoma development in patients with liver cirrhosis cohorts (n = 521) using different biomarkers\nAbbreviations: NPV, Negative predictive value; PPV, Positive predictive value.\nAFP‐20: Using the AFP cutoff values of 20 ng/ml.\nDAs‐0.5: Using the Doylestown algorithm cutoff values of 0.5.\naMAP‐50: Using the aMAP score cutoff values of 50.\naMAP‐60: Using the aMAP score cutoff values of 60.\nReceiver operating characteristic curves for patients with cirrhosis at three follow‐up visits before HCC diagnosis. (A) The first follow‐up visit. (B) The second follow‐up visit. (C) The third follow‐up visit. (D) The highest clinical factors value was recorded in the three follow‐up visits\nNext, we used the maximum values of ALT, ALK, AFP, TB, ALB, and PLT among three follow‐up visits of each patient for model construction, in order to explore whether this analytic strategy will further improve model performance. However, the new AUC generated by the new method (Figure 2D) was similar to the AUC calculated using the values of these laboratory variables at the third follow‐up visit (Figure 2C). Therefore, this new analytic strategy did not improve the model performance.", "For very few HCC patients who were diagnosed at the third follow‐up in our cohort, we further analyzed the predictive value for all the cases with final HCC diagnosis with the data of cirrhosis patients at each follow‐up time. There were 742 liver cirrhosis patients with the data of the first follow‐up visit, and in which 20 HCC were diagnosed in the end. And a total of 610 liver cirrhosis patients were followed up twice with 17 HCC diagnoses at last. When adjusting the strategy to analyze all patients with liver cancer and cirrhosis at each follow‐up (Figure 3A, B), the AUC for the Doylestown algorithm increased from 0.763 to 0.776 at the first follow‐up and from 0.801 to 0.808 at the second follow‐up. As the number of patients increased, the performance of the three models improved.\nReceiver operating characteristic curves for patients with cirrhosis at each follow‐up. (A) The first follow‐up visit. 
(B) The second follow‐up visit", "The Doylestown algorithm and the aMAP score were developed and validated for HCC prediction using clinical variables from case‐control or nested case‐control studies.\n15\n, \n16\n Furthermore, Wang et al.\n18\n developed a secondary Doylestown Plus algorithm that incorporates fucosylated kininogen as an additional marker to improve the detection of HCC. Yamashita et al.\n19\n investigated the clinical utility of the aMAP score for predicting HCC occurrence and the incidence‐free rate after a sustained virologic response in chronic hepatitis C. The current study was a retrospective, longitudinal study applying AFP, the Doylestown algorithm, and the aMAP score to consecutive time points in patients with liver cirrhosis to compare the performance of these models in detecting HCC. Additionally, to our knowledge, this is the first study to apply the Doylestown algorithm to the Chinese population.\nAmong the three models evaluated, the specificity of the Doylestown algorithm was similar to or slightly lower than that of AFP, but the Doylestown algorithm showed the best overall performance, whereas the aMAP score performed relatively poorly. By combining AFP with other clinical values, the Doylestown algorithm increased the detection rate of HCC compared with AFP alone at all tested time points. In the 521 liver cirrhosis patients with triple follow‐up, using a fixed cutoff of 20 ng/mL for AFP and 0.5 output units for the Doylestown algorithm resulted in a 7.4% increase in biomarker performance at the time point closest to HCC diagnosis (0–6 months prior), a 21% increase at 6–12 months before diagnosis, and a 13% increase at 12–18 months before diagnosis. The Doylestown algorithm demonstrated better performance for liver cirrhosis patients and was also suitable for the Chinese population. However, the overall Doylestown algorithm values in the Chinese population were lower than those reported in Caucasian populations. For example, the 20 patients diagnosed with HCC had an average Doylestown algorithm score of 0.49 at 0–6 months before diagnosis, and the 17 patients diagnosed with HCC had an average score of 0.42 at 6–12 months before diagnosis. In comparison, studies have shown that the average Doylestown algorithm score for Caucasian patients diagnosed with HCC is 0.58 at 12 months before diagnosis.\n20\n Therefore, we believe that the Doylestown algorithm is also suitable for Chinese patients with liver cirrhosis, but the cutoff value needs to be adjusted before application.\nThe aMAP score demonstrated the highest sensitivity in this analysis, but it introduced a very high number of false‐positive cases. One reason could be that most of the aMAP training cohorts consisted of hepatitis B patients, so the model performed less favorably than the Doylestown algorithm in patients with liver cirrhosis. Among patients who developed HCC, when comparing an aMAP cutoff of 50 with AFP at 20 ng/mL, the aMAP score advanced the average lead time for early HCC diagnosis from 161 days to 299 days (Table S2). AFP identified the fewest patients in advance (only 5), whereas the aMAP score identified the most (18). The aMAP score performance at the second follow‐up (6–12 months prior) was superior to that at the other two time periods. 
Compared with the Doylestown algorithm, the aMAP score may be more suitable for early screening of liver cancer (6–12 months in advance). Although its false‐positive rate was relatively high, this model could be suitable for preliminary liver cancer screening (to reduce missed diagnoses) and could then be combined with other screening methods (to reduce misdiagnoses) to further confirm suspected liver cancer patients.\nLimitations of the current study include the small sample size and short follow‐up time. For example, clinical data were available from only 20 patients at 0–6 months before HCC diagnosis, and when restricted to samples collected 12–18 months before HCC diagnosis, only 5 HCC patients were available. A larger study with time points up to 3–5 years prior to HCC diagnosis should be performed to fully demonstrate the true benefit of using this algorithm for HCC surveillance in the Chinese population. In addition, our study was conducted at a single center with a limited number of liver disease patients, and the Doylestown algorithm needs to be verified in a multi‐center cooperative study. At the same time, although the Doylestown algorithm and the aMAP score can detect HCC earlier than traditional imaging‐based screening, their sensitivity and specificity still need to be improved.\nThis study demonstrated that the Doylestown algorithm, which uses readily available clinical parameters, is superior to AFP alone in accurately predicting the development of HCC among Chinese patients with liver cirrhosis. While this algorithm is practical and easy to adopt in routine clinical use, it is important to emphasize that it can be further complemented and improved by combining it with novel biomarkers such as DNA, RNA, proteins, exosomes, or epigenetic markers to achieve earlier detection of HCC in the future.", "In summary, our study results highlight the potential of novel biomarker panels, including the Doylestown algorithm and the aMAP score, to improve early HCC detection. The high accuracy of these biomarker panels is likely related to their inclusion of multiple biomarkers and demographic factors associated with higher HCC risk. Blood‐based biomarkers also have high patient acceptance and are easy to implement in clinical practice. The performance of the Doylestown algorithm was superior to that of AFP and the aMAP score, and the algorithm was also applicable to the Chinese population. The aMAP score demonstrated very high sensitivity and is suitable for preliminary HCC screening, but it needs to be combined with other screening methods to further identify suspected HCC patients.", "Linhong Li and Juhong Zhou are employees of Oriomics Biotech Inc. The other authors have no conflicts of interest to disclose.", "Not applicable.", "Table S1‐S2" ]
[ null, "materials-and-methods", null, null, "results", null, null, null, null, "discussion", "conclusions", "COI-statement", null, "supplementary-material" ]
[ "alpha‐fetoprotein", "aMAP score", "Doylestown algorithm", "early screening", "hepatocellular carcinoma" ]
INTRODUCTION: Hepatocellular carcinoma (HCC) is the fifth most common cancer and the second leading cause of cancer death in China. 1 , 2 Meanwhile, the prognosis of HCC patients remains poor. Patients with advanced HCC have few treatment options, with 5‐year survival rates ranging from 5% to 14%, whereas patients with early HCC can undergo radical treatments, including surgical resection, ablation, and liver transplantation. 3 , 4 The 5‐year survival rate for HCC patients diagnosed at an early stage is up to 75%. 5 , 6 , 7 Therefore, early detection of HCC is a key component toward reducing HCC mortality. About 80%–90% of HCC cases occur in patients with cirrhosis and cirrhosis is also the major cause of liver disease‐related morbidity and mortality worldwide. 6 , 8 Chinese guideline for stratified screening and surveillance of primary liver cancer (2020 Edition) recommends abdominal ultrasonography combined with serum a‐fetoprotein (AFP) every 6 months as a surveillance program for HCC in patients with cirrhosis. 9 However, ultrasound is dependent on operator experience and difficult to perform in obese patients. 10 , 11 AFP has suboptimal sensitivity and specificity (41% to 65% and 80% to 94%), and its usefulness has been widely debated. 12 At the currently used cutoff (20 μg/L), this strategy misses about a third of HCC in the early stage. 13 , 14 Novel surveillance strategies are urgently needed to improve the accuracy of early HCC detection. Wang et al. 15 developed a logistic regression algorithm named the Doylestown algorithm that utilizes AFP, age, gender, alkaline phosphatase (ALK), and alanine aminotransferase (ALT) levels to improve the detection of HCC, particularly for those with cirrhosis. However, it is uncertain whether the Doylestown algorithm is applicable for predicting HCC occurrence in the Chinese population. Recently, aMAP risk score consisting of age, gender, total bilirubin (TB), albumin (ALB), and platelets (PLT) was reported to predict HCC development in patients with chronic hepatitis. 16 aMAP score has the advantage of assessing HCC risk with different ethnicities including Asian and Caucasian ethnicities. However, the cohort of this study is mainly hepatitis B, hepatitis C, and other chronic hepatitis patients. Whether aMAP score is suitable for assessing the risk of HCC in patients with cirrhosis is still uncertain. This study aims to compare multiple biomarkers (including AFP, the Doylestown algorithm, and aMAP score) in Chinese patients with cirrhosis and investigate the clinical utility of the Doylestown algorithm and aMAP score. MATERIALS AND METHODS: Study populations A total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. 
In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. A total of 72 cases were excluded for the following reasons: one case transfer to other hospitals for treatment, four non‐cirrhotic cases, and 67 cases with insufficient clinical data for analysis. Ultimately, 742 patients were enrolled for analysis. The racial background of all patients was uniformly Chinese. Summary etiologic data of the cohort are given in Table 1. Patients with hepatitis B or C were treated with nucleos(t)ide analogs or direct‐acting antiviral agent treatment, respectively. Etiologic data among 742 cirrhosis patients Hepatitis B virus. Non‐alcoholic fatty liver disease. Hepatitis C virus. Patients were followed up from August 21, 2018, to January 12, 2021. During this period, HCC surveillance was performed by radiographic imaging (CT and/or MRI) and testing for tumor markers every 6 months. The diagnosis of HCC was made based on radiological imaging. This cohort has been followed for a median of 378 days (range of 254–840). 20 patients were eventually diagnosed with early HCC in this study (Table S1). Early‐stage HCC was defined as Barcelona Clinic Liver Cancer(BCLC) stage 0 or A. 17 All laboratory data, including serum AFP and values for ALT, ALK, TB, ALB, and PLT were determined using standard methods at Hwa Mei Hospital clinical laboratory. More details for patient enrollment can be found in Supporting information. A total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. A total of 72 cases were excluded for the following reasons: one case transfer to other hospitals for treatment, four non‐cirrhotic cases, and 67 cases with insufficient clinical data for analysis. Ultimately, 742 patients were enrolled for analysis. The racial background of all patients was uniformly Chinese. Summary etiologic data of the cohort are given in Table 1. Patients with hepatitis B or C were treated with nucleos(t)ide analogs or direct‐acting antiviral agent treatment, respectively. Etiologic data among 742 cirrhosis patients Hepatitis B virus. Non‐alcoholic fatty liver disease. Hepatitis C virus. Patients were followed up from August 21, 2018, to January 12, 2021. During this period, HCC surveillance was performed by radiographic imaging (CT and/or MRI) and testing for tumor markers every 6 months. The diagnosis of HCC was made based on radiological imaging. This cohort has been followed for a median of 378 days (range of 254–840). 
20 patients were eventually diagnosed with early HCC in this study (Table S1). Early‐stage HCC was defined as Barcelona Clinic Liver Cancer(BCLC) stage 0 or A. 17 All laboratory data, including serum AFP and values for ALT, ALK, TB, ALB, and PLT were determined using standard methods at Hwa Mei Hospital clinical laboratory. More details for patient enrollment can be found in Supporting information. Statistical methods The Doylestown algorithm was calculated based on a previous study, 15 as follows: 1/(1+EXP(‐(−10.307+(0.097*age[year])+1.645*Gender[male:1,female:0]+(2.314*log10AFP[ng/ml])+(0.011*ALK[U/L])+(−0.008* ALT[U/L])))). The output value ranged from 0 to 1. The cutoff value of 0.5 was used to identify patients with HCC. The aMAP score was calculated based on a previous study, 16 as follows: ((age[year]*0.06+gender*0.89[male:1,female:0]+0.48*((log10bilirubin[μmol/L]*0.66) + (albumin [g/L]*‐0.085)) ‐ 0.01*platelets [103/mm]) +7.4) / 14.77*100. The output value ranged from 0 to 100. The cutoff value of 50 was associated with a medium‐risk group. The cutoff value of 60 resulted in a high‐risk group. A t test was performed to compare AFP, Doylestown algorithm, and aMAP score. Moreover, their correlation was tested using Pearson's correlation coefficient. All statistical tests were two‐sided and evaluated at the 0.05 level of statistical significance. All statistical analyses were performed using the R language, version 3.6.0. The Doylestown algorithm was calculated based on a previous study, 15 as follows: 1/(1+EXP(‐(−10.307+(0.097*age[year])+1.645*Gender[male:1,female:0]+(2.314*log10AFP[ng/ml])+(0.011*ALK[U/L])+(−0.008* ALT[U/L])))). The output value ranged from 0 to 1. The cutoff value of 0.5 was used to identify patients with HCC. The aMAP score was calculated based on a previous study, 16 as follows: ((age[year]*0.06+gender*0.89[male:1,female:0]+0.48*((log10bilirubin[μmol/L]*0.66) + (albumin [g/L]*‐0.085)) ‐ 0.01*platelets [103/mm]) +7.4) / 14.77*100. The output value ranged from 0 to 100. The cutoff value of 50 was associated with a medium‐risk group. The cutoff value of 60 resulted in a high‐risk group. A t test was performed to compare AFP, Doylestown algorithm, and aMAP score. Moreover, their correlation was tested using Pearson's correlation coefficient. All statistical tests were two‐sided and evaluated at the 0.05 level of statistical significance. All statistical analyses were performed using the R language, version 3.6.0. Study populations: A total of 814 patients with liver disease from the Hwa Mei Hospital, University of Chinese Academy of Science were enrolled into our study between August 2018 and May 2020. The study was approved by the Ethics Committees of Hwa Mei Hospital (IRB No: PJ‐NBEY‐KY‐2018–023–01), and written informed consent was obtained from all participants. Serum and plasma were collected at each visit and stored at −80°C. Patients were followed with semiannual surveillance until HCC occurred, death, or study termination. HCC free status at the start of all patients was confirmed by radiological imaging, including computed tomography (CT) and magnetic resonance imaging (MRI). Patients with non‐cirrhotic, suspicious liver masses, or HCC before study enrollment were excluded. In addition, only those patients who had the required components of the Doylestown algorithm (age, gender, ALT, ALK, and AFP) and aMAP score (age, gender, TB, ALB, and PLT) were utilized in this study. 
RESULTS: Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score Patients were followed up three times from August 21, 2018, to January 12, 2021. Among the 742 patients with cirrhosis during that period, we conducted a retrospective study of 20 HCC cases and 722 cirrhosis control subjects. In the cirrhosis control group, 516 patients were followed up three times, 603 were followed up at least twice, and 722 were followed up at least once. Among the HCC cases, five patients had been followed up three times by the time of HCC diagnosis, 17 had been followed up at least twice before diagnosis, and all 20 had been followed up at least once before diagnosis. In total, 521 patients were followed up three times. As shown in Table 2, the mean age of these patients was 52 (range, 31–75) years, and the majority were men (70%). The mean values for ALT, ALK, TB, ALB, and PLT are listed in Table 2. As Table 2 shows, TB, ALB, and PLT values remained similar over approximately 12 months for both HCC and non‐HCC patients.
Since these three factors are key components of the aMAP score, the aMAP score values were also similar over the three time points. ALT and AFP levels were higher at the first follow‐up visit (Time 1) in non‐HCC patients, and the Doylestown algorithm values were accordingly higher at Time 1. AFP levels of HCC patients increased as the time of HCC diagnosis approached, and their Doylestown algorithm scores (DAs) showed the same trend. Table 2. Individual components and the AFP, Doylestown algorithm, and aMAP scores at different time points in patients with liver cirrhosis (n = 521). A total of 1563 samples, consisting of three time points from each of 521 individual patients, were examined. Time 1 is 12–18 months prior to diagnosis, Time 2 is 6–12 months prior, and Time 3 is 0–6 months prior; comparisons across time points used the Friedman test. Values are mean levels with ranges for alanine aminotransferase (ALT, U/L), alkaline phosphatase (ALK, U/L), total bilirubin (TB, μmol/L), albumin (ALB, g/L), platelets (PLT, 10³/mm³), alpha‐fetoprotein (AFP, ng/ml), the Doylestown algorithm, and the aMAP score. Longitudinal data were available from 521 patients at the first follow‐up visit (12–18 months before diagnosis), the second follow‐up visit (6–12 months before diagnosis), and the third follow‐up visit (0–6 months before diagnosis). Figure 1 shows the distribution of AFP, the Doylestown algorithm score, and the aMAP score in all patients at each time point. Figure 1. Distribution of AFP, Doylestown algorithm score (DAs), and aMAP score. Retrospective longitudinal data at three follow‐up visits were available from 516 patients without HCC and 5 patients with HCC. * p < 0.05; ** p < 0.01; **** p < 0.0001; ns, no significant difference.
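Table 2 compares each component across the three time points with the Friedman test, a non-parametric test for repeated measures. As an illustration only (the Methods state that the actual analyses were run in R 3.6.0), a Python sketch of such a comparison using SciPy is shown below; the AFP-like data are randomly generated placeholders.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Placeholder matrix: 521 patients x 3 time points of an AFP-like marker (ng/ml)
afp = rng.lognormal(mean=1.0, sigma=0.6, size=(521, 3))

# Friedman test across Time 1 (12-18 months prior), Time 2 (6-12), Time 3 (0-6)
stat, p_value = friedmanchisquare(afp[:, 0], afp[:, 1], afp[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3f}")
```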
HCC patient characteristics The HCC patient characteristics are detailed in Table S1. All HCC patients had BCLC stage 0 or A disease (stage 0, n = 11; stage A, n = 9). Most HCC patients were diagnosed using MRI; two were diagnosed by CT. The maximum tumor diameter at the time of diagnosis was no more than 40 mm (range, 7–32 mm). Half of the HCC patients underwent surgery after diagnosis. Among the HCC cases, 20 patients underwent biomarker assessment at 0–6 months before HCC diagnosis, 17 within 6–12 months before diagnosis, and 5 at 12–18 months before diagnosis. Biomarker performance at three follow‐up visits before HCC diagnosis A total of 521 liver cirrhosis patients underwent three longitudinal follow‐ups. Five patients were diagnosed with HCC within 0–6 months of the third follow‐up. The Doylestown algorithm and aMAP score values were calculated from the patients' clinical data. For the five HCC patients with three follow‐up visits, the Doylestown algorithm provided the highest positive predictive value, the aMAP score had the highest sensitivity and negative predictive value, and the AFP model had the highest specificity (Table 3).
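Table 3 reports sensitivity, specificity, PPV, and NPV at the fixed cutoffs defined in the Methods (AFP at 20 ng/ml, Doylestown score at 0.5, aMAP at 50 or 60). The Python sketch below shows how these quantities follow from a score vector and binary HCC labels; the arrays named in the commented usage lines are hypothetical placeholders, not study data.

```python
import numpy as np

def binary_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, PPV and NPV for 'score >= cutoff' against binary labels."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pred = scores >= cutoff
    tp = int(np.sum(pred & labels))
    fp = int(np.sum(pred & ~labels))
    fn = int(np.sum(~pred & labels))
    tn = int(np.sum(~pred & ~labels))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "PPV": tp / (tp + fp) if tp + fp else float("nan"),
        "NPV": tn / (tn + fn) if tn + fn else float("nan"),
    }

# Hypothetical usage, mirroring the cutoffs named in Table 3:
# binary_metrics(afp_values, hcc_labels, 20)    # AFP-20
# binary_metrics(das_values, hcc_labels, 0.5)   # DAs-0.5
# binary_metrics(amap_values, hcc_labels, 50)   # aMAP-50
# binary_metrics(amap_values, hcc_labels, 60)   # aMAP-60
```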
The Doylestown algorithm demonstrated the highest accuracy for HCC detection, with an area under the receiver operating characteristic curve (AUC) of 0.763 at 12–18 months before diagnosis (95% CI, 0.592–0.935), 0.801 at 6–12 months before diagnosis (95% CI, 0.669–0.933), and 0.867 at 0–6 months before diagnosis (95% CI, 0.798–0.937) (Figure 2A–C). The diagnostic value of the Doylestown algorithm therefore increased as the time of HCC diagnosis approached. The Doylestown algorithm and AFP performed best at 0–6 months before diagnosis, whereas the aMAP score performed best at 6–12 months before diagnosis and showed poor performance overall. Table 3. Prediction accuracy for hepatocellular carcinoma development in the liver cirrhosis cohort (n = 521) using different biomarkers. Abbreviations: NPV, negative predictive value; PPV, positive predictive value. AFP‐20: AFP cutoff of 20 ng/ml; DAs‐0.5: Doylestown algorithm cutoff of 0.5; aMAP‐50: aMAP score cutoff of 50; aMAP‐60: aMAP score cutoff of 60. Figure 2. Receiver operating characteristic curves for patients with cirrhosis at three follow‐up visits before HCC diagnosis. (A) The first follow‐up visit. (B) The second follow‐up visit. (C) The third follow‐up visit. (D) The highest value of each clinical factor recorded across the three follow‐up visits. Next, we used the maximum values of ALT, ALK, AFP, TB, ALB, and PLT across the three follow‐up visits of each patient for model construction, to explore whether this analytic strategy would further improve model performance. However, the AUC generated by this method (Figure 2D) was similar to the AUC calculated using the values of these laboratory variables at the third follow‐up visit (Figure 2C); this strategy therefore did not improve model performance.
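The AUCs above are reported with 95% confidence intervals. The paper does not describe how the intervals were obtained (the analyses were run in R 3.6.0), so the sketch below illustrates one common approach, a percentile bootstrap over patients, rather than the authors' actual procedure; the score and label arrays are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(scores, labels, n_boot=2000, seed=0):
    """Point AUC plus a 95% percentile-bootstrap confidence interval."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    point = roc_auc_score(labels, scores)
    rng = np.random.default_rng(seed)
    n = len(labels)
    boot_aucs = []
    while len(boot_aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue  # resample must contain both HCC and non-HCC patients
        boot_aucs.append(roc_auc_score(labels[idx], scores[idx]))
    low, high = np.percentile(boot_aucs, [2.5, 97.5])
    return point, (low, high)
```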
Biomarker performance at each follow‐up Because only a few HCC cases had been diagnosed by the third follow‐up in our cohort, we further analyzed the predictive value for all cases with a final HCC diagnosis using the cirrhosis patients' data at each follow‐up. Data from the first follow‐up visit were available for 742 liver cirrhosis patients, of whom 20 were ultimately diagnosed with HCC, and 610 liver cirrhosis patients were followed up twice, of whom 17 were ultimately diagnosed with HCC. When the strategy was adjusted to analyze all patients with liver cancer and cirrhosis at each follow‐up (Figure 3A, B), the AUC for the Doylestown algorithm increased from 0.763 to 0.776 at the first follow‐up and from 0.801 to 0.808 at the second follow‐up. As the number of patients increased, the performance of all three models improved. Figure 3. Receiver operating characteristic curves for patients with cirrhosis at each follow‐up. (A) The first follow‐up visit. (B) The second follow‐up visit.
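The two analytic strategies used in this Results section — evaluating each follow-up visit separately and, alternatively, taking each patient's highest laboratory values across visits — reduce to routine tabular operations. The pandas sketch below illustrates both; the column names (patient_id, visit, hcc, DAs, and the laboratory columns) are assumed names for a long-format table, not the authors' actual variables.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

LAB_COLUMNS = ["ALT", "ALK", "AFP", "TB", "ALB", "PLT"]

def auc_per_visit(df, score_col="DAs", label_col="hcc"):
    """AUC of a score column within each follow-up visit of a long-format table."""
    return {visit: roc_auc_score(group[label_col], group[score_col])
            for visit, group in df.groupby("visit")}

def max_across_visits(df):
    """One row per patient holding the highest value of each laboratory variable."""
    agg = {col: "max" for col in LAB_COLUMNS}
    agg["hcc"] = "max"  # a patient counts as an HCC case if diagnosed at any point
    return df.groupby("patient_id").agg(agg).reset_index()
```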
DISCUSSION: The development and validation of the Doylestown algorithm and the aMAP score for predicting HCC were performed using clinical variables from case‐control or nested case‐control studies. 15, 16 Furthermore, Wang et al. 18 developed a secondary Doylestown Plus algorithm that incorporated fucosylated kininogen as a marker to improve the detection of HCC. Yamashita et al. 19 investigated the clinical utility of the aMAP score for predicting HCC occurrence and the incidence‐free rate after a sustained virologic response in chronic hepatitis C.
The current study was a retrospective, longitudinal study that applied AFP, the Doylestown algorithm, and the aMAP score to consecutive time points in patients with liver cirrhosis to compare their performance in detecting HCC. To our knowledge, this study is also the first to investigate the application of the Doylestown algorithm in a Chinese population. Among the three models evaluated, the specificity of the Doylestown algorithm was similar to or slightly lower than that of AFP, but the Doylestown algorithm showed the best overall performance, while the performance of the aMAP score was relatively poor. By combining AFP with other clinical values, the Doylestown algorithm increased the detection rate of HCC compared with AFP alone at all of the tested time points. In the 521 liver cirrhosis patients with three follow‐ups, using a fixed cutoff of 20 ng/ml for AFP and 0.5 output units for the Doylestown algorithm, the Doylestown algorithm improved biomarker performance by 7.4% at the time point closest to HCC diagnosis (0–6 months prior), by 21% at 6–12 months before diagnosis, and by 13% at 12–18 months before diagnosis. The Doylestown algorithm demonstrated better performance for liver cirrhosis patients and was also suitable for the Chinese population. However, the overall Doylestown algorithm values in this Chinese population were lower than those reported for Caucasian populations. For example, the 20 patients diagnosed with HCC had an average Doylestown algorithm score of 0.49 at 0–6 months before diagnosis, and the 17 patients diagnosed with HCC had an average score of 0.42 at 6–12 months before diagnosis; in comparison, studies have reported an average Doylestown algorithm score of 0.58 for Caucasian patients 12 months before HCC diagnosis. 20 We therefore believe that the Doylestown algorithm is also suitable for Chinese patients with liver cirrhosis, but the cutoff value may need to be adjusted before application. The aMAP score demonstrated the highest sensitivity in this analysis, but it introduced a very high number of false‐positive cases. One reason could be that most patients in the aMAP training sets had hepatitis B, so the model's performance in patients with liver cirrhosis was not as favorable as that of the Doylestown algorithm. Among patients who developed HCC, comparing an aMAP cutoff of 50 with AFP at 20 ng/ml, the aMAP score advanced the average lead time for early diagnosis of HCC from 161 days to 299 days (Table S2). AFP flagged the fewest patients in advance (only 5), whereas the aMAP score flagged the most (18). The aMAP score performed better at the second follow‐up (6–12 months prior) than at the other two time points. Compared with the Doylestown algorithm, the aMAP score may therefore be more suitable for early screening for liver cancer (6–12 months in advance). Although its false‐positive rate was relatively high, this model could be suitable for preliminary screening for liver cancer (to reduce missed diagnoses) and could then be combined with other screening methods (to reduce misdiagnoses) to further confirm suspected liver cancer patients. Limitations of the current study include the small sample size and short follow‐up time. For example, clinical data were available from only 20 patients at 0–6 months before HCC diagnosis.
If restricted to samples obtained 12–18 months before HCC diagnosis, there were only 5 HCC patients. A larger study with time points up to 3–5 years prior to HCC diagnosis should be performed to fully demonstrate the true benefit of using this algorithm for HCC surveillance in the Chinese population. In addition, our study was conducted at a single center with a limited number of liver disease patients, so the Doylestown algorithm needs to be verified in a multi‐center cooperative study. At the same time, we must recognize that although the Doylestown algorithm and the aMAP score can detect HCC earlier than traditional imaging‐based screening, the sensitivity and specificity of these models should be improved. This study demonstrated that the Doylestown algorithm, which uses readily available clinical parameters, is superior to AFP alone in accurately predicting the development of HCC among Chinese patients with liver cirrhosis. While this algorithm is practical and easy to adopt in routine clinical use, the current algorithm could be further complemented and improved by combination with novel biomarkers, such as DNA, RNA, proteins, exosomes, or epigenetic markers, to achieve earlier detection of HCC in the future. CONCLUSIONS: In summary, our study results highlight the potential of novel biomarker panels, including the Doylestown algorithm and the aMAP score, to improve early HCC detection. The high accuracy of these biomarker panels is likely related to the inclusion of multiple biomarkers and demographics associated with higher HCC risk. Blood‐based biomarkers also have high patient acceptance and are easy to implement in clinical practice. The performance of the Doylestown algorithm was superior to that of AFP and the aMAP score, and the algorithm was also applicable to the Chinese population. The aMAP score demonstrated very high sensitivity and was suitable for preliminary HCC screening, but it needs to be combined with other screening methods to further identify suspected HCC patients. CONFLICTS OF INTEREST: Linhong Li and Juhong Zhou are employees of Oriomics Biotech Inc. No other disclosures were reported; the remaining authors have no conflicts of interest to disclose. CONSENT FOR PUBLICATION: Not applicable. Supporting information: Table S1–S2.
Background: Previous studies have developed blood-based biomarker algorithms, such as the Doylestown algorithm and aMAP score, to improve the detection of hepatocellular carcinoma (HCC). However, the application of the Doylestown algorithm has not been studied in the Chinese population. Meanwhile, which of these two screening models is more suitable for people with liver cirrhosis remains to be investigated. Methods: In this study, HCC surveillance was performed by radiographic imaging and testing for tumor markers every 6 months from August 21, 2018, to January 12, 2021. We conducted a retrospective study of 742 liver cirrhosis patients, and among them, 20 developed HCC during follow-up. Samples from these patients at three follow-up time points were tested to evaluate alpha-fetoprotein (AFP), the Doylestown algorithm, and aMAP score. Results: Overall, 521 liver cirrhosis patients underwent semiannual longitudinal follow-up three times. Five patients were diagnosed with HCC within 0-6 months of the third follow-up. We found that for these liver cirrhosis patients, the Doylestown algorithm had the highest accuracy for HCC detection, with areas under the receiver operating characteristic curve (AUCs) of 0.763, 0.801, and 0.867 for follow-ups 1-3, respectively. Compared with AFP at 20 ng/ml, the Doylestown algorithm increased biomarker performance by 7.4%, 21%, and 13% for follow-ups 1-3, respectively. Conclusions: Our findings show that the Doylestown algorithm performance appeared to be optimal for HCC early screening in the Chinese cirrhotic population when compared with the aMAP score and AFP at 20 ng/ml.
INTRODUCTION: Hepatocellular carcinoma (HCC) is the fifth most common cancer and the second leading cause of cancer death in China. 1, 2 Meanwhile, the prognosis of HCC patients remains poor. Patients with advanced HCC have few treatment options, with 5‐year survival rates ranging from 5% to 14%, whereas patients with early HCC can undergo radical treatments, including surgical resection, ablation, and liver transplantation. 3, 4 The 5‐year survival rate for HCC patients diagnosed at an early stage is up to 75%. 5, 6, 7 Therefore, early detection of HCC is a key component of reducing HCC mortality. About 80%–90% of HCC cases occur in patients with cirrhosis, and cirrhosis is also the major cause of liver disease‐related morbidity and mortality worldwide. 6, 8 The Chinese guideline for stratified screening and surveillance of primary liver cancer (2020 Edition) recommends abdominal ultrasonography combined with serum α‐fetoprotein (AFP) every 6 months as a surveillance program for HCC in patients with cirrhosis. 9 However, ultrasound is dependent on operator experience and difficult to perform in obese patients. 10, 11 AFP has suboptimal sensitivity and specificity (41% to 65% and 80% to 94%, respectively), and its usefulness has been widely debated. 12 At the currently used cutoff (20 μg/L), this strategy misses about a third of early‐stage HCC. 13, 14 Novel surveillance strategies are therefore urgently needed to improve the accuracy of early HCC detection. Wang et al. 15 developed a logistic regression algorithm, named the Doylestown algorithm, that uses AFP, age, gender, alkaline phosphatase (ALK), and alanine aminotransferase (ALT) levels to improve the detection of HCC, particularly in those with cirrhosis. However, it is uncertain whether the Doylestown algorithm is applicable for predicting HCC occurrence in the Chinese population. Recently, the aMAP risk score, consisting of age, gender, total bilirubin (TB), albumin (ALB), and platelets (PLT), was reported to predict HCC development in patients with chronic hepatitis. 16 The aMAP score has the advantage of assessing HCC risk across different ethnicities, including Asian and Caucasian populations. However, the cohort in that study consisted mainly of patients with hepatitis B, hepatitis C, and other chronic hepatitis, so whether the aMAP score is suitable for assessing HCC risk in patients with cirrhosis remains uncertain. This study aimed to compare multiple biomarkers (AFP, the Doylestown algorithm, and the aMAP score) in Chinese patients with cirrhosis and to investigate the clinical utility of the Doylestown algorithm and the aMAP score. CONCLUSIONS: In summary, our study results highlight the potential of novel biomarker panels, including the Doylestown algorithm and the aMAP score, to improve early HCC detection. The high accuracy of these biomarker panels is likely related to the inclusion of multiple biomarkers and demographics associated with higher HCC risk. Blood‐based biomarkers also have high patient acceptance and are easy to implement in clinical practice. The performance of the Doylestown algorithm was superior to that of AFP and the aMAP score, and the algorithm was also applicable to the Chinese population. The aMAP score demonstrated very high sensitivity and was suitable for preliminary HCC screening, but it needs to be combined with other screening methods to further identify suspected HCC patients.
Background: Previous studies have developed blood-based biomarker algorithms, such as the Doylestown algorithm and aMAP score, to improve the detection of hepatocellular carcinoma (HCC). However, the application of the Doylestown algorithm has not been studied in the Chinese population. Meanwhile, which of these two screening models is more suitable for people with liver cirrhosis remains to be investigated. Methods: In this study, HCC surveillance was performed by radiographic imaging and testing for tumor markers every 6 months from August 21, 2018, to January 12, 2021. We conducted a retrospective study of 742 liver cirrhosis patients, and among them, 20 developed HCC during follow-up. Samples from these patients at three follow-up time points were tested to evaluate alpha-fetoprotein (AFP), the Doylestown algorithm, and aMAP score. Results: Overall, 521 liver cirrhosis patients underwent semiannual longitudinal follow-up three times. Five patients were diagnosed with HCC within 0-6 months of the third follow-up. We found that for these liver cirrhosis patients, the Doylestown algorithm had the highest accuracy for HCC detection, with areas under the receiver operating characteristic curve (AUCs) of 0.763, 0.801, and 0.867 for follow-ups 1-3, respectively. Compared with AFP at 20 ng/ml, the Doylestown algorithm increased biomarker performance by 7.4%, 21%, and 13% for follow-ups 1-3, respectively. Conclusions: Our findings show that the Doylestown algorithm performance appeared to be optimal for HCC early screening in the Chinese cirrhotic population when compared with the aMAP score and AFP at 20 ng/ml.
7,946
319
[ 518, 444, 196, 652, 112, 468, 180, 3 ]
14
[ "patients", "hcc", "follow", "algorithm", "score", "doylestown", "doylestown algorithm", "diagnosis", "amap", "amap score" ]
[ "patients liver cancer", "hcc patients cirrhosis", "accuracy hepatocellular carcinoma", "china prognosis hcc", "screening liver cancer" ]
null
[CONTENT] alpha‐fetoprotein | aMAP score | Doylestown algorithm | early screening | hepatocellular carcinoma [SUMMARY]
null
[CONTENT] alpha‐fetoprotein | aMAP score | Doylestown algorithm | early screening | hepatocellular carcinoma [SUMMARY]
[CONTENT] alpha‐fetoprotein | aMAP score | Doylestown algorithm | early screening | hepatocellular carcinoma [SUMMARY]
[CONTENT] alpha‐fetoprotein | aMAP score | Doylestown algorithm | early screening | hepatocellular carcinoma [SUMMARY]
[CONTENT] alpha‐fetoprotein | aMAP score | Doylestown algorithm | early screening | hepatocellular carcinoma [SUMMARY]
[CONTENT] Algorithms | Biomarkers, Tumor | Carcinoma, Hepatocellular | China | Humans | Liver Cirrhosis | Liver Neoplasms | Retrospective Studies | Sensitivity and Specificity | alpha-Fetoproteins [SUMMARY]
null
[CONTENT] Algorithms | Biomarkers, Tumor | Carcinoma, Hepatocellular | China | Humans | Liver Cirrhosis | Liver Neoplasms | Retrospective Studies | Sensitivity and Specificity | alpha-Fetoproteins [SUMMARY]
[CONTENT] Algorithms | Biomarkers, Tumor | Carcinoma, Hepatocellular | China | Humans | Liver Cirrhosis | Liver Neoplasms | Retrospective Studies | Sensitivity and Specificity | alpha-Fetoproteins [SUMMARY]
[CONTENT] Algorithms | Biomarkers, Tumor | Carcinoma, Hepatocellular | China | Humans | Liver Cirrhosis | Liver Neoplasms | Retrospective Studies | Sensitivity and Specificity | alpha-Fetoproteins [SUMMARY]
[CONTENT] Algorithms | Biomarkers, Tumor | Carcinoma, Hepatocellular | China | Humans | Liver Cirrhosis | Liver Neoplasms | Retrospective Studies | Sensitivity and Specificity | alpha-Fetoproteins [SUMMARY]
[CONTENT] patients liver cancer | hcc patients cirrhosis | accuracy hepatocellular carcinoma | china prognosis hcc | screening liver cancer [SUMMARY]
null
[CONTENT] patients liver cancer | hcc patients cirrhosis | accuracy hepatocellular carcinoma | china prognosis hcc | screening liver cancer [SUMMARY]
[CONTENT] patients liver cancer | hcc patients cirrhosis | accuracy hepatocellular carcinoma | china prognosis hcc | screening liver cancer [SUMMARY]
[CONTENT] patients liver cancer | hcc patients cirrhosis | accuracy hepatocellular carcinoma | china prognosis hcc | screening liver cancer [SUMMARY]
[CONTENT] patients liver cancer | hcc patients cirrhosis | accuracy hepatocellular carcinoma | china prognosis hcc | screening liver cancer [SUMMARY]
[CONTENT] patients | hcc | follow | algorithm | score | doylestown | doylestown algorithm | diagnosis | amap | amap score [SUMMARY]
null
[CONTENT] patients | hcc | follow | algorithm | score | doylestown | doylestown algorithm | diagnosis | amap | amap score [SUMMARY]
[CONTENT] patients | hcc | follow | algorithm | score | doylestown | doylestown algorithm | diagnosis | amap | amap score [SUMMARY]
[CONTENT] patients | hcc | follow | algorithm | score | doylestown | doylestown algorithm | diagnosis | amap | amap score [SUMMARY]
[CONTENT] patients | hcc | follow | algorithm | score | doylestown | doylestown algorithm | diagnosis | amap | amap score [SUMMARY]
[CONTENT] hcc | patients | early | cirrhosis | hepatitis | patients cirrhosis | score | amap | algorithm | risk [SUMMARY]
null
[CONTENT] follow | patients | hcc | diagnosis | mean | months | follow visit | score | time | range indicated [SUMMARY]
[CONTENT] high | biomarker panels | panels | hcc | screening | amap score | amap | score | biomarker | algorithm [SUMMARY]
[CONTENT] patients | hcc | applicable | follow | diagnosis | algorithm | score | amap | doylestown | doylestown algorithm [SUMMARY]
[CONTENT] patients | hcc | applicable | follow | diagnosis | algorithm | score | amap | doylestown | doylestown algorithm [SUMMARY]
[CONTENT] Doylestown | aMAP ||| Chinese ||| two [SUMMARY]
null
[CONTENT] 521 | three ||| Five | HCC | 0-6 months of the | third ||| Doylestown | 0.763 | 0.801 | 0.867 | 1-3 ||| AFP | 20 ng/ml | Doylestown | 7.4% | 21% | 13% | 1-3 [SUMMARY]
[CONTENT] Doylestown | Chinese | aMAP | AFP | 20 ng/ml [SUMMARY]
[CONTENT] Doylestown | aMAP ||| Chinese ||| two ||| HCC | every 6 months | August 21, 2018 | January 12, 2021 ||| 742 | 20 ||| three | AFP | aMAP ||| 521 | three ||| Five | HCC | 0-6 months of the | third ||| Doylestown | 0.763 | 0.801 | 0.867 | 1-3 ||| AFP | 20 ng/ml | Doylestown | 7.4% | 21% | 13% | 1-3 ||| Doylestown | Chinese | aMAP | AFP | 20 ng/ml [SUMMARY]
[CONTENT] Doylestown | aMAP ||| Chinese ||| two ||| HCC | every 6 months | August 21, 2018 | January 12, 2021 ||| 742 | 20 ||| three | AFP | aMAP ||| 521 | three ||| Five | HCC | 0-6 months of the | third ||| Doylestown | 0.763 | 0.801 | 0.867 | 1-3 ||| AFP | 20 ng/ml | Doylestown | 7.4% | 21% | 13% | 1-3 ||| Doylestown | Chinese | aMAP | AFP | 20 ng/ml [SUMMARY]
Agreement between activPAL and ActiGraph for assessing children's sedentary time.
22340137
Accelerometers have been used to determine the amount of time that children spend sedentary. However, as time spent sitting may be detrimental to health, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting. The aim of this study was to: a) examine agreement between ActiGraph (AG) cut-points for sedentary time and objectively-assessed periods of free-living sitting and sitting plus standing time using the activPAL (aP); and b) identify cut-points to determine time spent sitting and sitting plus standing.
BACKGROUND
Forty-eight children (54% boys) aged 8-12 years wore a waist-mounted AG and thigh-mounted aP for two consecutive school days (9-3:30 pm). AG data were analyzed using 17 cut-points between 50-850 counts·min-1 in 50 counts·min-1 increments to determine sedentary time during class-time, break time and school hours. Sitting and sitting plus standing time were obtained from the aP for these periods. Limits of agreement were computed to evaluate bias between AG50 to AG850 sedentary time and sitting and sitting plus standing time. Receiver Operator Characteristic (ROC) analyses identified AG cut-points that maximized sensitivity and specificity for sitting and sitting plus standing time.
METHODS
The smallest mean bias between aP sitting time and AG sedentary time was AG150 for class time (3.8 minutes), AG50 for break time (-0.8 minutes), and AG100 for school hours (-5.2 minutes). For sitting plus standing time, the smallest bias was observed for AG850. ROC analyses revealed an optimal cut-point of 96 counts·min-1 (AUC = 0.75) for sitting time, which had acceptable sensitivity (71.7%) and specificity (67.8%). No optimal cut-point was obtained for sitting plus standing (AUC = 0.51).
RESULTS
Estimates of free-living sitting time in children during school hours can be obtained using an AG cut-point of 100 counts·min-1. Higher sedentary cut-points may capture both sitting and standing time.
CONCLUSIONS
[ "Actigraphy", "Area Under Curve", "Bias", "Child", "Exercise", "Female", "Health Behavior", "Humans", "Male", "Monitoring, Ambulatory", "Motor Activity", "Posture", "ROC Curve", "Reference Values", "Reproducibility of Results", "Schools", "Sedentary Behavior", "Time Factors" ]
3311087
Background
There is increasing interest in the effects of sedentary behaviors on children's and adults' health [1,2] largely due to emerging evidence that objectively-assessed sedentary time is associated with cardio-metabolic health [3-5]. The ActiGraph (AG) accelerometer has been commonly used in the objective assessment of sedentary time. However, there is considerable variability in the cut-points used to identify sedentary time using this accelerometer in child populations. AG sedentary time cut-points used in school-aged children and adolescents have included 100 counts·min-1 [6,7], 200 counts·min-1 [8], 500 counts·min-1 [3,4], and 800 counts·min-1 [9], yet only two thresholds (100 and 800 counts·min-1) have been validated [6,7,9]. Objective measures such as accelerometers estimate sedentary time based on a lack of movement [10]. Sedentary behavior is typically defined as sitting behaviors that require low levels of energy expenditure to perform (≤1.5 METS) [11], and a lack of movement may indicate low levels of energy expenditure when using an accelerometer. Time spent in sedentary behavior is distinct from the lack of physical activity, which is defined as the amount of time not spent engaged in physical activity of a particular intensity (often moderate-to-vigorous physical activity), and often incorporates light intensity physical activity behaviors [12]. It is possible, however that low movement may be recorded by a hip-mounted accelerometer, but the individual could be standing (which is a very light intensity activity [12]) therefore more energy may be expended than that typically associated with sedentary behaviors [1]. Though the differences in energy expenditure may be considered negligible, the accumulation of these differences may have implications for energy balance over time [13]. In recent years, opportunities for measuring patterns of sitting/lying time (referred to as sitting time hereon in) have been made possible through the use of inclinometers (e.g. activPAL [aP], PAL Technologies Ltd, Glasgow, UK) to detect postures. To date, however, no studies have used the aP to determine the accuracy of AG cut-points for assessing sitting time or to identify whether sitting can be differentiated from sitting and standing time. Whilst the AG is unable to provide postural information, sitting and sitting plus standing require little vertical acceleration. Consequently, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting, [12] particularly as the AG is likely to continue to be used to measure both sedentary time and physical activity intensities. The aim of this study was to examine the agreement between AG cut-points for sedentary time and objectively-assessed sitting and sitting plus standing time in children using the aP during the school day. Class time and break time were also examined separately as class time is typically sedentary, while all children have opportunities for activity during recess and lunchtime. It was hypothesized that a lower AG cut-point would have greater agreement with aP sitting time and a higher AG cut-point would have greater agreement with aP sitting plus standing time. A secondary aim was to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing.
Method
Participants Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer). Procedure Participants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The research team checked that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days and did not receive the monitors. The final sample comprised 48 children (26 boys, 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study. Measures Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds.
Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14]. The activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16]. Data management Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance with the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria. AG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (a total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes for which the count data were below these specified cut-points.
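As an illustration of this first processing step — classifying 15-second AG epochs against cut-points expressed in counts·min-1 — the Python sketch below scales each per-minute cut-point to the 15-second epoch length by dividing by 4. That scaling is a common convention but an assumption here, since the paper does not spell it out, and the count data in the example are randomly generated placeholders rather than real monitor output.

```python
import numpy as np

CUTPOINTS_CPM = list(range(50, 851, 50))  # AG50, AG100, ..., AG850

def sedentary_minutes(counts_per_15s, cutpoint_cpm):
    """Minutes of 15-s epochs falling below a cut-point given in counts per minute."""
    threshold_per_epoch = cutpoint_cpm / 4.0  # assumed scaling from 60-s to 15-s epochs
    below = np.asarray(counts_per_15s) < threshold_per_epoch
    return below.sum() * 15 / 60.0

# Placeholder: one child-day of school hours (9 am to 3:30 pm = 390 min, 4 epochs/min)
rng = np.random.default_rng(1)
counts = rng.integers(0, 1500, size=390 * 4)
sedentary_by_cutpoint = {c: sedentary_minutes(counts, c) for c in CUTPOINTS_CPM}
```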
The numbers of minutes spent sitting, upright and stepping were obtained from the aP for each day. Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP), were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses. Statistical analyses Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. the AG output at a specified cut-point classes an epoch as sedentary time and the aP identifies the same epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day).
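The Bland-Altman comparison described above reduces to the mean of the paired daily differences and its 95% limits of agreement. A minimal Python sketch of that calculation is shown below purely as an illustration (the study itself used Stata 11.0); the two input arrays stand for paired AG sedentary minutes and aP sitting minutes per child.

```python
import numpy as np

def bland_altman(ag_minutes, ap_minutes):
    """Mean bias and 95% limits of agreement between two paired measures (min/day)."""
    ag = np.asarray(ag_minutes, dtype=float)
    ap = np.asarray(ap_minutes, dtype=float)
    diff = ag - ap
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical usage for one cut-point, e.g. AG100 sedentary time vs. aP sitting time:
# bias, (lower, upper) = bland_altman(ag100_school_minutes, ap_sitting_school_minutes)
```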
Statistical analyses
Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes an epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points.

Receiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (true negative rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19].
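For readers who want to reproduce the agreement statistics outside Stata and MedCalc, the sketch below shows the core calculations: percentage agreement on the dichotomized epochs, and Bland-Altman bias with 95% limits of agreement on the daily minute totals. It is an illustration with made-up numbers, not the analysis code used here; a ROC curve over candidate cut-points could likewise be obtained with, for example, sklearn.metrics.roc_curve applied to the dichotomized aP labels and the per-epoch AG count rate.

```python
# Illustrative re-implementation of the agreement statistics (not the
# Stata/MedCalc analyses reported in the paper).
import numpy as np

def percent_agreement(a, b):
    """Share of epochs (in %) classified identically by the two monitors."""
    a, b = np.asarray(a), np.asarray(b)
    return 100.0 * np.mean(a == b)

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical per-child daily totals (minutes): AG-defined sedentary time
# at one cut-point versus aP sitting time.
ag_min = np.array([210.0, 195.5, 230.0, 205.0, 250.0])
ap_min = np.array([218.0, 190.0, 240.0, 200.0, 246.0])
bias, (lo, hi) = bland_altman(ag_min, ap_min)
print(round(bias, 1), (round(lo, 1), round(hi, 1)))

# Hypothetical dichotomized epoch labels (1 = sedentary/sitting).
print(percent_agreement([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```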
Authors' contributions
JS conceived the study and secured the funding. JS and LA planned the study design. LA collected the data. EOC performed the initial data manipulation. NDR performed the statistical analyses. NDR, JS, AT and KR interpreted the data. NDR wrote the manuscript. AT, KR, JS, LA and EOC critically reviewed and revised the manuscript. All authors read and approved the final version of the manuscript.
[ "Background", "Participants", "Procedure", "Measures", "Data management", "Statistical analyses", "Results", "Discussion", "Conclusion" ]
[ "There is increasing interest in the effects of sedentary behaviors on children's and adults' health [1,2] largely due to emerging evidence that objectively-assessed sedentary time is associated with cardio-metabolic health [3-5]. The ActiGraph (AG) accelerometer has been commonly used in the objective assessment of sedentary time. However, there is considerable variability in the cut-points used to identify sedentary time using this accelerometer in child populations. AG sedentary time cut-points used in school-aged children and adolescents have included 100 counts·min-1 [6,7], 200 counts·min-1 [8], 500 counts·min-1 [3,4], and 800 counts·min-1 [9], yet only two thresholds (100 and 800 counts·min-1) have been validated [6,7,9].\nObjective measures such as accelerometers estimate sedentary time based on a lack of movement [10]. Sedentary behavior is typically defined as sitting behaviors that require low levels of energy expenditure to perform (≤1.5 METS) [11], and a lack of movement may indicate low levels of energy expenditure when using an accelerometer. Time spent in sedentary behavior is distinct from the lack of physical activity, which is defined as the amount of time not spent engaged in physical activity of a particular intensity (often moderate-to-vigorous physical activity), and often incorporates light intensity physical activity behaviors [12]. It is possible, however that low movement may be recorded by a hip-mounted accelerometer, but the individual could be standing (which is a very light intensity activity [12]) therefore more energy may be expended than that typically associated with sedentary behaviors [1]. Though the differences in energy expenditure may be considered negligible, the accumulation of these differences may have implications for energy balance over time [13].\nIn recent years, opportunities for measuring patterns of sitting/lying time (referred to as sitting time hereon in) have been made possible through the use of inclinometers (e.g. activPAL [aP], PAL Technologies Ltd, Glasgow, UK) to detect postures. To date, however, no studies have used the aP to determine the accuracy of AG cut-points for assessing sitting time or to identify whether sitting can be differentiated from sitting and standing time. Whilst the AG is unable to provide postural information, sitting and sitting plus standing require little vertical acceleration. Consequently, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting, [12] particularly as the AG is likely to continue to be used to measure both sedentary time and physical activity intensities.\nThe aim of this study was to examine the agreement between AG cut-points for sedentary time and objectively-assessed sitting and sitting plus standing time in children using the aP during the school day. Class time and break time were also examined separately as class time is typically sedentary, while all children have opportunities for activity during recess and lunchtime. It was hypothesized that a lower AG cut-point would have greater agreement with aP sitting time and a higher AG cut-point would have greater agreement with aP sitting plus standing time. 
A secondary aim was to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing.", "Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer).", "Participants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study.", "Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14].\nThe activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16].", "Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. 
All children who returned the monitors met these criteria.\nAG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses.", "Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points.\nReceiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19].", "The time spent sedentary according to AG cut-points and time spent sitting and sitting plus standing according to the aP is shown in Table 1. On average, the aP revealed that children spent 218.9 minutes and 315.5 minutes of the school day sitting and sitting plus standing. 
This equated to 56.1% and 80.9% of the school day (total duration = 390 minutes) spent sitting and sitting plus standing, respectively. According to the AG cut-points, children were sedentary for 192 minutes (AG50) to 309.7 minutes (AG850) of the school day. Table 2 presents the percentage agreement, mean differences and 95% limits of agreement using the Bland-Altman method [17], between aP sitting time and the 17 AG thresholds from AG50 to AG850 for class time, break time and school hours. The level of agreement was moderate to high for AG50 (69-70.8%). The lowest percentage agreement was for AG850 (36-62.8%). The lowest mean bias for sitting time, regardless of direction, was AG150 for class time (3.8 minutes), AG50 for break time (-0.8 minutes), and AG100 for the school day (-5.2 minutes). However, the 95% limits of agreement were wide for these thresholds, and ranged from 49.4 minutes for break time (AG50) to 144.7 minutes for the school day (AG100). A Bland-Altman plot demonstrating the agreement between aP and AG100 for the school day is shown in Figure 1.\nBland-Altman plot of the difference between time spent sedentary (ActiGraph 100 counts·min-1) and time spent sitting (aP).\nMean (range) time (minutes) spent sedentary according to activPAL and ActiGraph cut-points\n1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]\nConcurrent comparison between sedentary time using different Actigraph (AG) cut-points and activPAL (aP) sitting time\n1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]\nTable 3 reports the percentage agreement, mean differences and 95% limits of agreement between aP sitting plus standing time and AG thresholds. The highest level of agreement for sitting plus standing time was AG250 for class time (79.5%), AG50 for break time (70.8%), and AG200 for the whole school day (76.6%). The smallest bias for sitting plus standing time was AG850 for class time (-4.7 minutes), break time (-1.1 minutes) and the school day (-5.8 minutes). The 95% limits of agreement were 38.8 minutes (class time), 28.1 minutes (break time), and 63 minutes (school day) based on the smallest mean differences across the school day. Figures 2 and 3 illustrate the concurrent measurement patterns of sitting and sitting plus standing time in 5-minute intervals across school hours for AG100 and AG850, respectively, based on the findings from the Bland-Altman analyses above.\nConcurrent measurement pattern of aP sitting time and AG sedentary time defined as 100 counts·min-1.\nConcurrent measurement pattern of aP sitting plus standing time and AG sedentary time defined as 850 counts·min-1.\nConcurrent comparison between sedentary time using different Actigraph (AG) cut-points and activPAL (aP) sitting plus standing time\n1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]\nAccording to ROC analyses, the optimal sensitivity and specificity based on the AUC (0.75) for sitting time was at an accelerometer cut-point of 24 counts per 15-second epoch (96 counts·min-1). The sensitivity and specificity of this cut-point were 71.7% and 67.8%, respectively. In the cross-validation group, the sensitivity, specificity and percentage agreement were 71.4%, 70.8% and 71.1%, respectively. For sitting plus standing time, the AUC was poor (0.51).
Based on the recommendations of Welk [19], no further analyses were undertaken.", "This is the first study to examine the agreement between AG cut-points for sedentary time and objectively-assessed periods of free-living sitting and sitting plus standing time in children using the aP, and to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing. This study found that during school hours, the lowest mean bias (-5.2 minutes) between AG sedentary time and aP sitting time was observed for an AG cut-point of 100 counts·min-1 in this age group of children. Furthermore, the ROC curve analysis for sitting time provided an optimal cut-point of 96 counts.min-1 (24 counts per 15 seconds), which had reasonable agreement, sensitivity and specificity in the cross-validation group. This provides support to previous studies that have determined that 100 counts.min-1 was the optimal cut-point for measuring youth sedentary time in free-living conditions [6,7], which also had an excellent ability to classify sedentary time in children [20]. Though it should be noted that the present study's sensitivity and specificity were lower than previous studies [6,20], this is the first study to use postural information as the criterion measure, demonstrating that a cut-point of 100 counts·min-1 reflects the time children spend sitting.\nIt should be noted that while the mean bias suggested that 100 counts.min-1 provided good agreement with aP sitting time, the limits of agreement were wide (range -77.6 to 67.1 minutes). This indicates that while the mean difference is small at a group level, the variance is larger and reflects a greater degree of under- and over-estimation at the individual level between the aP and the AG. This degree of variability at the individual level may be problematic in determining behavior change at this level following an intervention, for example. The wide limits of agreement may be attributable, to some extent, to the way that sedentary time is determined and the positioning of the monitor [21]. The aP uses a thigh-mounted inclinometer to obtain information concerning posture, whilst the hip-mounted AG determines sedentary time due to a lack of vertical displacement. Interestingly, while these monitors are measuring different outcome variables, which means that a discrepancy will occur between the monitors' outputs, the group average concurrent measurement pattern between the AG and the aP depicted in Figure 1 was similar at 100 counts.min-1.\nSeveral studies have examined the utility of the AG to detect sedentary time in adults. Hart et al. [22] found that an AG cut-point of 50 counts.min-1 may be a better estimate of sitting time (when using the 7164 model). The present study found the highest percentage agreement between sitting and an AG cut-point of 50 counts.min-1, which somewhat supports this finding. Estimates of the validity of the 100 counts.min-1 cut-point in adults have been mixed, with this threshold resulting in significantly more sedentary time when using the AG GT1M model compared to the aP [21], whilst others found it underestimated sedentary time [23]. In the latter study, a cut-point of 150 counts.min-1 was the most accurate threshold for defining sitting time using the aP as the criterion, which is consistent with the finding for class time in the present study.
It should be noted, however, that a GT3X with the low frequency extension filter option selected was used [23], which may account for some of the variability observed between studies. This option extends the lower threshold for signal detection, as it was found that a higher level of acceleration was needed to generate counts in the GT1M and GT3X AG models compared to the 7164 [24].\nThere is wide variation in published AG cut-points used to define sedentary time in children [25]. In the present study, a smaller mean bias was observed for AG500 [3,4] and AG800 [9] for sitting plus standing time, compared to sitting time. Previous studies have found that higher cut-points in adults detected more sedentary time compared with time spent sitting from the aP [21,23]. Trost and colleagues [20] found of the commonly used sedentary cut-points, AG800 had fair classification accuracy and low specificity, indicating that this cut-point was incorrectly classifying activity as sedentary time [19]. Overall, the findings from the present study and previous studies suggest that higher AG cut-points are capturing more activity than can be associated with sitting time, therefore studies that have used higher AG sedentary cut-points should be viewed with this limitation in mind. A limitation of hip-mounted accelerometers is their susceptibility of misclassifying standing light-to-moderate intensity activities as sedentary [20,26]. In the present study the ROC curve analyses for sitting plus standing resulted in a poor AUC, which meant that the associated cut-point would be ineffective characterizing sitting plus standing. This demonstrates that the AG cannot differentiate sitting from standing with minimal movement, and that researchers interested in examining time spent sitting plus standing should use objective monitors with inclinometers, such as the aP [23].\nThis study found that agreement between aP and the AG derived sedentary time varied depending on the period of day that was being examined. The lowest mean bias for break time and class time were observed at AG50 and AG150, respectively, for sitting time though the limits of agreement were also wide at these thresholds. At a practical level, it is unlikely that different cut-points are needed to assess sedentary time during different parts of the day. However, it appears that these findings reflect the variability in children's sitting time across the day. For example, sitting accounts for a small proportion of break time [27], yet accounts for a large proportion of class time [25]. Future studies that aim to reduce time spent sitting during specific periods of school hours should be aware of such bias when assessing the effectiveness of different strategies.\nThere are several limitations that warrant attention. Firstly, no true criterion of sedentary time, such as direct observation, was used in this study. While the aP has been validated for assessing sitting time in preschool [28] and adult populations [16], it has not yet been validated in school-age child populations. Secondly, the monitor used in this study was the GT1M, which has been succeeded by the GT3X and the GT3X+ AG models. While there are emerging data that the activity counts are comparable between the GT1M and the GT3X in adults [29], differences have been noted in low count ranges [24]. As such, these findings are only generalizable to data collected using the normal filter. 
Thirdly, data analyses were restricted to two school days (9 am-3:30 pm), as children wore both monitors simultaneously for two days during this time only. Further research should examine the agreement between the aP and the AG during waking hours across multiple days. It should be noted, however, that 100% compliance during the school day meant that consecutive zeros were indicative of no movement rather than non-wear, which is a strength of this study.", "An AG cut-point of 100 counts·min-1 provided a good estimate of free-living sitting time in children during school hours. Higher cut-points that have been used to report children's sedentary time may capture both sitting and standing time. Further research is needed to examine the use of the 100 counts·min-1 cut-point to determine sitting time across the whole day, and against health indices in children." ]
[ "Background", "Method", "Participants", "Procedure", "Measures", "Data management", "Statistical analyses", "Results", "Discussion", "Conclusion" ]
[ "There is increasing interest in the effects of sedentary behaviors on children's and adults' health [1,2] largely due to emerging evidence that objectively-assessed sedentary time is associated with cardio-metabolic health [3-5]. The ActiGraph (AG) accelerometer has been commonly used in the objective assessment of sedentary time. However, there is considerable variability in the cut-points used to identify sedentary time using this accelerometer in child populations. AG sedentary time cut-points used in school-aged children and adolescents have included 100 counts·min-1 [6,7], 200 counts·min-1 [8], 500 counts·min-1 [3,4], and 800 counts·min-1 [9], yet only two thresholds (100 and 800 counts·min-1) have been validated [6,7,9].\nObjective measures such as accelerometers estimate sedentary time based on a lack of movement [10]. Sedentary behavior is typically defined as sitting behaviors that require low levels of energy expenditure to perform (≤1.5 METS) [11], and a lack of movement may indicate low levels of energy expenditure when using an accelerometer. Time spent in sedentary behavior is distinct from the lack of physical activity, which is defined as the amount of time not spent engaged in physical activity of a particular intensity (often moderate-to-vigorous physical activity), and often incorporates light intensity physical activity behaviors [12]. It is possible, however that low movement may be recorded by a hip-mounted accelerometer, but the individual could be standing (which is a very light intensity activity [12]) therefore more energy may be expended than that typically associated with sedentary behaviors [1]. Though the differences in energy expenditure may be considered negligible, the accumulation of these differences may have implications for energy balance over time [13].\nIn recent years, opportunities for measuring patterns of sitting/lying time (referred to as sitting time hereon in) have been made possible through the use of inclinometers (e.g. activPAL [aP], PAL Technologies Ltd, Glasgow, UK) to detect postures. To date, however, no studies have used the aP to determine the accuracy of AG cut-points for assessing sitting time or to identify whether sitting can be differentiated from sitting and standing time. Whilst the AG is unable to provide postural information, sitting and sitting plus standing require little vertical acceleration. Consequently, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting, [12] particularly as the AG is likely to continue to be used to measure both sedentary time and physical activity intensities.\nThe aim of this study was to examine the agreement between AG cut-points for sedentary time and objectively-assessed sitting and sitting plus standing time in children using the aP during the school day. Class time and break time were also examined separately as class time is typically sedentary, while all children have opportunities for activity during recess and lunchtime. It was hypothesized that a lower AG cut-point would have greater agreement with aP sitting time and a higher AG cut-point would have greater agreement with aP sitting plus standing time. 
A secondary aim was to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing.", " Participants Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer).\nFollowing approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer).\n Procedure Participants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study.\nParticipants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study.\n Measures Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. 
Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14].\nThe activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16].\nEach child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14].\nThe activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16].\n Data management Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria.\nAG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. 
Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses.\nData were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria.\nAG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses.\n Statistical analyses Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. 
The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points.\nReceiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19].\nDescriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points.\nReceiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19].", "Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. 
Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer).", "Participants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study.", "Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14].\nThe activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16].", "Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria.\nAG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. 
Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses.", "Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points.\nReceiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19].", "The time spent sedentary according to AG cut-points and time spent sitting and sitting plus standing according to the aP is shown in Table 1. On average, the aP revealed that children spent 218.9 minutes and 315.5 minutes of the school day sitting and sitting plus standing. This equated to 56.1% and 80.9% of the school day (total duration = 390 minutes) spent sitting and sitting plus standing, respectively. According to the AG cut-points, children were sedentary for 192 minutes (AG50) to 309.7 minutes (AG850) of the school day. 
Table 2 presents the percentage agreement, mean differences and 95% limits of agreement using the Bland-Altman method,[17] between aP sitting time and the 17 AG thresholds between AG50 to AG850 for class time, break time and school hours. The level of agreement was moderate to high for AG50 (69-70.8%). The lowest percentage agreement was for AG850 (36-62.8%). The lowest mean bias for sitting time, regardless of direction, was AG150 for class time (3.8 minutes), AG50 for break time (-0.8 minutes), and AG100 for the school day (-5.2 minutes). However, the 95% limits of agreement were wide for these thresholds, and ranged from 49.4 minutes for break time (AG50) to 144.7 minutes for the school day (AG100). A Bland-Altman plot demonstrating the agreement between aP and AG100 for the school day is shown in Figure 1.\nBland-Altman plot of the difference between time spent sedentry (ActiGraph 100 counts -min-1) and time spent sitting (aP).\nMean (range) time (minutes) spent sedentary according to activPAL and ActiGraph cut-points\n1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]\nConcurrent comparison between sedentary time using different Actigraph (AG) cut-points and activPAL (aP) sitting time\n1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]\nTable 3 reports the percentage agreement, mean differences and 95% limits of agreement between aP sitting plus standing time and AG thresholds. The highest level of agreement for sitting plus standing time was AG250 for class time (79.5%), AG50 for break time (70.8%), and AG200 for the whole school day (76.6%). The smallest bias for sitting plus standing time was AG850 for class time (-4.7 minutes), break time (-1.1 minutes) and the school day (-5.8 minutes). The 95% limits of agreement were 38.8 minutes (class time), 28.1 minutes (break time), and 63 minutes (school day) based on the smallest mean differences across the school day. Figures 2 and 3 illustrate the concurrent measurement patterns of sitting and sitting plus standing time in 5 minute intervals across school hours for AG100 and AG850, respectively, based on the findings from the Bland-Altman analyses above.\nConcurrent measurement pattern of aP sitting time and AG sedentary time defined as 100 counts min-1.\nConcurrent measurement pattern of aP sitting plus standing time and AG sedentary time defined as 850 counts min-1.\nConcurrent comparison between sedentary time using different Actigraph (AG) cut-points and activPAL (aP) sitting plus standing time\n1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]\nAccording to ROC analyses, the optimal sensitivity and specificity based on the AUC (0.75) for sitting time was at an accelerometer cut-point of 24 counts per 15 second epoch (96 counts·min-1). The sensitivity and specificity of this cut-point were 71.7% and 67.8%, respectively. In the cross-validation group, the sensitivity, specificity and percentage agreement were 71.4%, 70.8% and 71.1% respectively. For sitting plus standing time, the AUC was poor (0.51). 
Based on the recommendations of Welk [19], no further analyses were undertaken.", "This is the first study to examine the agreement between AG cut-points for sedentary time and objectively-assessed periods of free-living sitting and sitting plus standing time in children using the aP, and to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing. This study found that during school hours, the lowest mean bias (-5.2 minutes) between AG sedentary time and aP sitting time was observed for an AG cut-point of 100 counts·min-1 in this age group of children. Furthermore, the ROC curve analysis for sitting time provided an optimal cut-point of 96 counts.min-1 (24 counts per 15 seconds), which had reasonable agreement, sensitivity and specificity in the cross-validation group. This provides support to previous studies that have determined that 100 counts.min-1 was the optimal cut-point for measuring youth sedentary time in free-living conditions [6,7], which also had an excellent ability to classify sedentary time in children [20]. Though it should be noted that the present study's sensitivity and specificity were lower than previous studies [6,20], this is the first study to use postural information as the criterion measure, demonstrating that a cut-point of 100 counts·min reflects the time children spend sitting.\nIt should be noted that while the mean bias suggested that 100 counts.min-1 provided good agreement with aP sitting time, the limits of agreement were wide (range -77.6 to 67.1 minutes). This indicates that while the mean difference is small at a group level, the variance is larger and reflects a greater degree of under- and over-estimation at the individual level between the aP and the AG. This degree of variability at the individual level may be problematic in determining behavior change at this level following an intervention, for example. The wide limits of agreement may be attributable, to some extent, to the way that sedentary time is determined and the positioning of the monitor [21]. The aP uses a thigh-mounted inclinometer to obtain information concerning posture, whilst the hip-mounted AG determines sedentary time due to a lack of vertical displacement. Interestingly, while these monitors are measuring different outcome variables, which mean that a discrepancy will occur between the monitor's outputs, the group average concurrent measurement pattern between the AG and the aP depicted in Figure 1 was similar at 100 counts.min-1.\nSeveral studies have examined the utility of the AG to detect sedentary time in adults. Hart et al. [22] found that an AG cut-point of 50 counts.min-1 may be a better estimate of sitting time (when using the 7164 model). The present study found the highest percentage agreement between sitting and an AG cut-point of 50 counts.min-1, which somewhat supports this finding. Estimates of the validity of the 100 counts.min-1 cut-point in adults has been mixed, with this threshold resulting in significantly more sedentary time when using the AG GT1M model compared to the aP [21], whilst others found it underestimated sedentary time [23]. In the latter study, a cut-point of 150 counts.min-1 was the most accurate threshold for defining sitting time using the aP as the criterion, which is consistent with the finding for class time in the present study. 
It should be noted, however, that a GT3X with the low frequency extension filter option selected was used [23], which may account for some of the variability observed between studies. This option extends the lower threshold for signal detection, as it was found that a higher level of acceleration was needed to generate counts in the GT1M and GT3X AG models compared to the 7164 [24].\nThere is wide variation in published AG cut-points used to define sedentary time in children [25]. In the present study, a smaller mean bias was observed for AG500 [3,4] and AG800 [9] for sitting plus standing time, compared to sitting time. Previous studies have found that higher cut-points in adults detected more sedentary time compared with time spent sitting from the aP [21,23]. Trost and colleagues [20] found of the commonly used sedentary cut-points, AG800 had fair classification accuracy and low specificity, indicating that this cut-point was incorrectly classifying activity as sedentary time [19]. Overall, the findings from the present study and previous studies suggest that higher AG cut-points are capturing more activity than can be associated with sitting time, therefore studies that have used higher AG sedentary cut-points should be viewed with this limitation in mind. A limitation of hip-mounted accelerometers is their susceptibility of misclassifying standing light-to-moderate intensity activities as sedentary [20,26]. In the present study the ROC curve analyses for sitting plus standing resulted in a poor AUC, which meant that the associated cut-point would be ineffective characterizing sitting plus standing. This demonstrates that the AG cannot differentiate sitting from standing with minimal movement, and that researchers interested in examining time spent sitting plus standing should use objective monitors with inclinometers, such as the aP [23].\nThis study found that agreement between aP and the AG derived sedentary time varied depending on the period of day that was being examined. The lowest mean bias for break time and class time were observed at AG50 and AG150, respectively, for sitting time though the limits of agreement were also wide at these thresholds. At a practical level, it is unlikely that different cut-points are needed to assess sedentary time during different parts of the day. However, it appears that these findings reflect the variability in children's sitting time across the day. For example, sitting accounts for a small proportion of break time [27], yet accounts for a large proportion of class time [25]. Future studies that aim to reduce time spent sitting during specific periods of school hours should be aware of such bias when assessing the effectiveness of different strategies.\nThere are several limitations that warrant attention. Firstly, no true criterion of sedentary time, such as direct observation, was used in this study. While the aP has been validated for assessing sitting time in preschool [28] and adult populations [16], it has not yet been validated in school-age child populations. Secondly, the monitor used in this study was the GT1M, which has been succeeded by the GT3X and the GT3X+ AG models. While there are emerging data that the activity counts are comparable between the GT1M and the GT3X in adults [29], differences have been noted in low count ranges [24]. As such, these findings are only generalizable to data collected using the normal filter. 
Thirdly, data analyses were restricted to two school days (9 am-3:30 pm), as children wore both monitors simultaneously for two days during this time only. Further research should examine the agreement between the aP and the AG during waking hours across multiple days. It should be noted, however, that 100% compliance during the school day meant that consecutive zeros were indicative of no movement rather than non-wear, which is a strength of this study.", "An AG cut-point of 100 counts·min-1 provided a good estimate of free-living sitting time in children during school hours. Higher cut-points that have been used to report children's sedentary time may capture both sitting and standing time. Further research is needed to examine the use of the 100 counts·min-1 cut-point to determine sitting time across the whole day, and against health indices in children." ]
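The mean bias and 95% limits of agreement discussed throughout this section come from the standard Bland-Altman calculation on paired per-child minute totals. The following is a minimal sketch of that calculation, not the authors' Stata code, and the paired totals are illustrative placeholders rather than study data.

```python
import numpy as np

def bland_altman(ag_minutes, ap_minutes):
    """Mean bias and 95% limits of agreement between two paired measures.

    Differences are taken as AG minus aP, so a negative bias means the
    accelerometer cut-point under-estimates activPAL sitting time.
    """
    ag = np.asarray(ag_minutes, dtype=float)
    ap = np.asarray(ap_minutes, dtype=float)
    diff = ag - ap
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired totals (min/day) for a handful of children -- not study data.
ag100_sedentary = [210.0, 195.5, 240.2, 188.7, 226.4]
ap_sitting      = [218.9, 201.0, 229.8, 197.5, 233.1]

bias, (lo, hi) = bland_altman(ag100_sedentary, ap_sitting)
print(f"mean bias = {bias:.1f} min, 95% limits of agreement = ({lo:.1f}, {hi:.1f}) min")
```

As the discussion notes, a small mean bias can coexist with wide limits of agreement, which is why group-level accuracy does not guarantee accuracy for individual children.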
[ null, "methods", null, null, null, null, null, null, null, null ]
[ "Accelerometry", "Children", "Objective assessment", "Sedentary time", "Sedentary behavior" ]
Background: There is increasing interest in the effects of sedentary behaviors on children's and adults' health [1,2] largely due to emerging evidence that objectively-assessed sedentary time is associated with cardio-metabolic health [3-5]. The ActiGraph (AG) accelerometer has been commonly used in the objective assessment of sedentary time. However, there is considerable variability in the cut-points used to identify sedentary time using this accelerometer in child populations. AG sedentary time cut-points used in school-aged children and adolescents have included 100 counts·min-1 [6,7], 200 counts·min-1 [8], 500 counts·min-1 [3,4], and 800 counts·min-1 [9], yet only two thresholds (100 and 800 counts·min-1) have been validated [6,7,9]. Objective measures such as accelerometers estimate sedentary time based on a lack of movement [10]. Sedentary behavior is typically defined as sitting behaviors that require low levels of energy expenditure to perform (≤1.5 METS) [11], and a lack of movement may indicate low levels of energy expenditure when using an accelerometer. Time spent in sedentary behavior is distinct from the lack of physical activity, which is defined as the amount of time not spent engaged in physical activity of a particular intensity (often moderate-to-vigorous physical activity), and often incorporates light intensity physical activity behaviors [12]. It is possible, however that low movement may be recorded by a hip-mounted accelerometer, but the individual could be standing (which is a very light intensity activity [12]) therefore more energy may be expended than that typically associated with sedentary behaviors [1]. Though the differences in energy expenditure may be considered negligible, the accumulation of these differences may have implications for energy balance over time [13]. In recent years, opportunities for measuring patterns of sitting/lying time (referred to as sitting time hereon in) have been made possible through the use of inclinometers (e.g. activPAL [aP], PAL Technologies Ltd, Glasgow, UK) to detect postures. To date, however, no studies have used the aP to determine the accuracy of AG cut-points for assessing sitting time or to identify whether sitting can be differentiated from sitting and standing time. Whilst the AG is unable to provide postural information, sitting and sitting plus standing require little vertical acceleration. Consequently, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting, [12] particularly as the AG is likely to continue to be used to measure both sedentary time and physical activity intensities. The aim of this study was to examine the agreement between AG cut-points for sedentary time and objectively-assessed sitting and sitting plus standing time in children using the aP during the school day. Class time and break time were also examined separately as class time is typically sedentary, while all children have opportunities for activity during recess and lunchtime. It was hypothesized that a lower AG cut-point would have greater agreement with aP sitting time and a higher AG cut-point would have greater agreement with aP sitting plus standing time. A secondary aim was to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing. 
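To illustrate why the choice among the published cut-points matters, the sketch below applies several of the thresholds cited above to a synthetic minute-by-minute count series. The counts are randomly generated for illustration only and do not represent real ActiGraph data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic minute-by-minute counts for one 390-minute school day; the gamma
# parameters are arbitrary and simply produce a right-skewed count series.
counts_per_min = rng.gamma(shape=0.8, scale=300.0, size=390)

# Published sedentary cut-points cited above (counts per minute).
for cut_point in (100, 200, 500, 800):
    sedentary_minutes = int(np.sum(counts_per_min < cut_point))
    print(f"<{cut_point} counts/min: {sedentary_minutes} sedentary minutes")
```

Because every minute below the threshold is counted as sedentary, the estimate grows as the cut-point is raised, which is exactly the between-study variability this study set out to examine.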
Method: Participants Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer). Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer). Procedure Participants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study. Participants wore an AG and aP simultaneously for two consecutive school days. Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study. Measures Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. 
Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14]. The activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16]. Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14]. The activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16]. Data management Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria. AG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. 
Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses. Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria. AG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses. Statistical analyses Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). 
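A sketch of the epoch-level processing described above: matched 15-second AG and aP records are dichotomized for each cut-point. The column names and values here are hypothetical, and the per-minute cut-points are rescaled to the 15-second epoch by dividing by four, an assumption consistent with the equivalence of 24 counts per 15 seconds and 96 counts·min-1 reported in the Results.

```python
import pandas as pd

# Hypothetical matched 15-s epoch records; in the study, AG and aP files were
# merged by day and time with a customized macro.
ag = pd.DataFrame({
    "time": pd.date_range("2009-11-16 09:00", periods=8, freq="15s"),
    "counts": [3, 18, 40, 150, 700, 12, 0, 25],
})
ap = pd.DataFrame({
    "time": ag["time"],
    "posture": ["sit", "sit", "sit", "stand", "step", "sit", "sit", "stand"],
})
merged = ag.merge(ap, on="time")

# 17 cut-points from 50 to 850 counts/min in 50 counts/min steps; each is
# divided by 4 before being compared against the 15-s epoch counts.
for cpm in range(50, 900, 50):
    merged[f"sed_{cpm}"] = merged["counts"] < cpm / 4.0

# aP epochs dichotomized as sitting, and as sitting plus standing.
merged["sitting"] = merged["posture"] == "sit"
merged["sit_plus_stand"] = merged["posture"].isin(["sit", "stand"])

print(merged[["counts", "posture", "sed_100", "sitting", "sit_plus_stand"]])
```

Summing the epoch indicators within a period and dividing by four gives the minutes per day used in the continuous (Bland-Altman) comparisons.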
Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points. Receiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19]. Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points. Receiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19]. Participants: Following approval from the Deakin University Human Ethics Advisory Group (Health) and the Department of Education and Early Childhood Development, one school of low socioeconomic status located in Melbourne, Australia, was invited to participate in the study. Once informed written consent had been obtained from the school Principal, all children in Grades 3-6 (n = 255; aged 8-12 years) were invited to participate, with 56 children (32 boys, 24 girls; 22% response rate) returning informed written parental and student consent forms. Data were collected in November and December 2009 (late spring/early summer). Procedure: Participants wore an AG and aP simultaneously for two consecutive school days. 
Children in grades 3-4 (n = 20) were fitted with the aP and accelerometer at the start of the school day on day one by the research team, and were instructed to wear both monitors during all waking activities, except during water-based activities (such as swimming and bathing), until the end of the following school day (day 2). The researcher team monitored that the devices were worn across both days. The monitors were then collected and the data downloaded. The monitors were then distributed to children in Grades 5-6 (n = 28) on the same day the following week using the same procedure. Overall, six children were absent on data collection days, and did not receive the monitors. The final sample comprised 48 children (26 boys; 22 girls; mean age = 10.3 ± 1.2 years). All children received an active toy as compensation for their participation in the study. Measures: Each child wore a GT1M AG on their right hip using an adjustable nylon belt. The accelerometer is a small and lightweight monitor that measures vertical acceleration and deceleration of human motion. Detected accelerations are filtered, converted to a number (counts), and subsequently summed over a specified time interval (epoch), which in this study was 15 seconds. Firmware version 4.3.0 was used and the normal filter was selected. The AG is the most commonly used accelerometer in field-based research, and has been shown to have acceptable reliability and validity in pediatric populations [14]. The activPAL Professional is a small uni-axial accelerometer, worn midline on the anterior aspect of the thigh, which detects limb position using an inclinometer. The monitor was enclosed in a small pocket in an adjustable elasticized belt which was secured at the mid-anterior position on the child's thigh. Data concerning limb position are sampled at 10 Hz, and this information was used to estimate time spent sitting/lying, upright or walking in 15-second epochs [15]. While the aP has not been validated for measuring sitting time in school-age children, it has demonstrated acceptable reliability and validity for measuring sitting time in adults [16]. Data management: Data were downloaded using aP (v5.8.3.5) and AG (v4.2.0) software and initially screened for compliance to the procedure. Two children did not return monitors to school, and were excluded from the study. To be included in the analyses, children had to have worn both monitors for two complete school days (9 am to 3:30 pm). Furthermore, since the evaluation of compliance in wearing the accelerometer is often a contentious issue in field-based research, this approach ensured that zero counts were indicative of no movement and could be retained for analyses. All children who returned the monitors met these criteria. AG and aP data were matched by day and time and processed using a customized macro. The processed data were handled in two ways. Firstly, a number of different count thresholds were used to define sedentary time using the AG data. Total durations of counts below 50 counts·min-1 (AG50) and in increments increasing by 50 counts·min-1 up to 850 counts·min-1 (total of 17 cut-points) were extracted to reflect the range of different cut-points used to define sedentary time in the literature to date [3,4,6-9]. Sedentary time was defined as the number of minutes that the count data were below these specified cut points. The number of minutes spent sitting, upright and stepping were obtained from the aP for each day. 
Seconds of stepping were subtracted from time spent upright to compute time spent standing (upright) but not stepping. On both days, time spent sedentary (AG), and sitting and sitting plus standing (aP) were computed for class time, recess and lunch time (break time) and total school hours (9 am to 3:30 pm). Data were averaged across the two days. Data recorded outside of school hours were excluded. Secondly, AG and aP epochs were also individually matched by day and time. Dichotomized variables were created to categorize each epoch as a) sitting or not sitting (aP), b) sitting plus standing or not sitting plus standing (aP), or c) sedentary or not sedentary as defined using the 17 different AG cut-points. Data were extracted for class time, break time and total school hours and used in subsequent analyses. Statistical analyses: Descriptive statistics were calculated for all variables. Percentage agreement between the AG and the aP (e.g. AG output at a specified cut-point classes epoch as sedentary time and aP identifies an epoch as sitting) was initially determined using the dichotomized data. The Bland-Altman method [17] was used to evaluate the bias and limits of agreement between the 17 sedentary cut-points from AG50 to AG850 and aP sitting and sitting plus standing time during class time, break time and the school day using the continuous data (min/day). Analyses were conducted in Stata 11.0, and statistical significance was set at p < 0.05. Concurrent time interval data (expressed as a median) across school hours were plotted to visually examine patterns of sitting and sitting plus standing against the 17 different AG cut-points. Receiver operating characteristic (ROC) curve analyses were performed using MedCalc v.11.4.2.0 (MedCalc Software, Belgium) using the dichotomized data. ROC analysis provides an empirical basis for determining appropriate cut-points with the aim of reducing misclassification through examination of sensitivity (true positive rate) and specificity (false positive rate). The area under the curve (AUC) represents the accuracy of a cut-point, with ROC AUC values of ≥ 0.90 considered excellent, 0.80-0.89 good, 0.70-0.79 fair, and < 0.70 poor [18]. Data from 50% (n = 24) of the children were randomly selected to identify cut-points which maximized the sensitivity and specificity for sitting and sitting plus standing time. The identified cut-points were then cross-validated in the remaining children (n = 24) as previously recommended [19]. Results: The time spent sedentary according to AG cut-points and time spent sitting and sitting plus standing according to the aP is shown in Table 1. On average, the aP revealed that children spent 218.9 minutes and 315.5 minutes of the school day sitting and sitting plus standing. This equated to 56.1% and 80.9% of the school day (total duration = 390 minutes) spent sitting and sitting plus standing, respectively. According to the AG cut-points, children were sedentary for 192 minutes (AG50) to 309.7 minutes (AG850) of the school day. Table 2 presents the percentage agreement, mean differences and 95% limits of agreement using the Bland-Altman method,[17] between aP sitting time and the 17 AG thresholds between AG50 to AG850 for class time, break time and school hours. The level of agreement was moderate to high for AG50 (69-70.8%). The lowest percentage agreement was for AG850 (36-62.8%). 
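The percentage agreement figures quoted above are simply the share of matched epochs on which the two monitors give the same classification. A small sketch follows, using toy epoch labels rather than study data; the final two lines reproduce the descriptive percentages from the reported totals.

```python
def percentage_agreement(ag_sedentary, ap_sitting):
    """Share of matched epochs on which AG (sedentary at a given cut-point)
    and aP (sitting) classifications agree."""
    matches = sum(a == b for a, b in zip(ag_sedentary, ap_sitting))
    return 100.0 * matches / len(ap_sitting)

# Toy epoch classifications, not study data.
ag_sed_50  = [True, True, False, False, True, True, False, True]
ap_sitting = [True, True, True, False, True, True, False, False]
print(f"{percentage_agreement(ag_sed_50, ap_sitting):.1f}% agreement")

# Descriptive percentages follow directly from the reported minute totals:
print(f"sitting: {100 * 218.9 / 390:.1f}% of school hours")                # 56.1%
print(f"sitting plus standing: {100 * 315.5 / 390:.1f}% of school hours")  # 80.9%
```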
The lowest mean bias for sitting time, regardless of direction, was AG150 for class time (3.8 minutes), AG50 for break time (-0.8 minutes), and AG100 for the school day (-5.2 minutes). However, the 95% limits of agreement were wide for these thresholds, and ranged from 49.4 minutes for break time (AG50) to 144.7 minutes for the school day (AG100). A Bland-Altman plot demonstrating the agreement between aP and AG100 for the school day is shown in Figure 1. Figure 1 caption: Bland-Altman plot of the difference between time spent sedentary (ActiGraph 100 counts·min-1) and time spent sitting (aP). Table 1 caption: Mean (range) time (minutes) spent sedentary according to activPAL and ActiGraph cut-points. Footnotes: 1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]. Table 2 caption: Concurrent comparison between sedentary time using different ActiGraph (AG) cut-points and activPAL (aP) sitting time. Footnotes: 1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]. Table 3 reports the percentage agreement, mean differences and 95% limits of agreement between aP sitting plus standing time and the AG thresholds. The highest level of agreement for sitting plus standing time was AG250 for class time (79.5%), AG50 for break time (70.8%), and AG200 for the whole school day (76.6%). The smallest bias for sitting plus standing time was AG850 for class time (-4.7 minutes), break time (-1.1 minutes) and the school day (-5.8 minutes). The 95% limits of agreement were 38.8 minutes (class time), 28.1 minutes (break time), and 63 minutes (school day) for the thresholds with the smallest mean differences. Figures 2 and 3 illustrate the concurrent measurement patterns of sitting and sitting plus standing time in 5-minute intervals across school hours for AG100 and AG850, respectively, based on the findings from the Bland-Altman analyses above. Figure 2 caption: Concurrent measurement pattern of aP sitting time and AG sedentary time defined as 100 counts·min-1. Figure 3 caption: Concurrent measurement pattern of aP sitting plus standing time and AG sedentary time defined as 850 counts·min-1. Table 3 caption: Concurrent comparison between sedentary time using different ActiGraph (AG) cut-points and activPAL (aP) sitting plus standing time. Footnotes: 1 Treuth et al. [7]; Evenson et al. [6]; 2 Riddoch et al. [8]; 3 Ekelund et al. [3]; Sardinha et al. [4]; 4 Puyau et al. [9]. According to ROC analyses, the optimal sensitivity and specificity based on the AUC (0.75) for sitting time was at an accelerometer cut-point of 24 counts per 15-second epoch (96 counts·min-1). The sensitivity and specificity of this cut-point were 71.7% and 67.8%, respectively. In the cross-validation group, the sensitivity, specificity and percentage agreement were 71.4%, 70.8% and 71.1%, respectively. For sitting plus standing time, the AUC was poor (0.51). Based on the recommendations of Welk [19], no further analyses were undertaken. Discussion: This is the first study to examine the agreement between AG cut-points for sedentary time and objectively-assessed periods of free-living sitting and sitting plus standing time in children using the aP, and to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing. This study found that during school hours, the lowest mean bias (-5.2 minutes) between AG sedentary time and aP sitting time was observed for an AG cut-point of 100 counts·min-1 in this age group of children.
Furthermore, the ROC curve analysis for sitting time provided an optimal cut-point of 96 counts·min-1 (24 counts per 15 seconds), which had reasonable agreement, sensitivity and specificity in the cross-validation group. This provides support for previous studies that determined 100 counts·min-1 to be the optimal cut-point for measuring youth sedentary time in free-living conditions [6,7], a threshold that also had an excellent ability to classify sedentary time in children [20]. Although the present study's sensitivity and specificity were lower than in previous studies [6,20], this is the first study to use postural information as the criterion measure, demonstrating that a cut-point of 100 counts·min-1 reflects the time children spend sitting. It should be noted that while the mean bias suggested that 100 counts·min-1 provided good agreement with aP sitting time, the limits of agreement were wide (range -77.6 to 67.1 minutes). This indicates that while the mean difference is small at the group level, the variability is considerably larger and reflects a greater degree of under- and over-estimation between the aP and the AG at the individual level. This degree of individual-level variability may be problematic when determining behavior change in individuals following an intervention, for example. The wide limits of agreement may be attributable, to some extent, to the way that sedentary time is determined and to the positioning of the monitor [21]. The aP uses a thigh-mounted inclinometer to obtain information concerning posture, whilst the hip-mounted AG determines sedentary time from a lack of vertical displacement. Interestingly, while these monitors measure different outcome variables, which means that some discrepancy between the monitors' outputs is expected, the group-average concurrent measurement pattern between the AG and the aP depicted in Figure 1 was similar at 100 counts·min-1. Several studies have examined the utility of the AG to detect sedentary time in adults. Hart et al. [22] found that an AG cut-point of 50 counts·min-1 may be a better estimate of sitting time (when using the 7164 model). The present study found the highest percentage agreement between sitting and an AG cut-point of 50 counts·min-1, which somewhat supports this finding. Estimates of the validity of the 100 counts·min-1 cut-point in adults have been mixed, with this threshold resulting in significantly more sedentary time when using the AG GT1M model compared to the aP [21], whilst others found it underestimated sedentary time [23]. In the latter study, a cut-point of 150 counts·min-1 was the most accurate threshold for defining sitting time using the aP as the criterion, which is consistent with the finding for class time in the present study. It should be noted, however, that the latter study used a GT3X with the low-frequency extension filter option selected [23], which may account for some of the variability observed between studies. This option extends signal detection to lower accelerations, as a higher level of acceleration was found to be needed to generate counts in the GT1M and GT3X AG models compared to the 7164 [24]. There is wide variation in the published AG cut-points used to define sedentary time in children [25]. In the present study, a smaller mean bias was observed for AG500 [3,4] and AG800 [9] for sitting plus standing time compared to sitting time. Previous studies have found that higher cut-points in adults detected more sedentary time than the time spent sitting recorded by the aP [21,23].
Trost and colleagues [20] found that, of the commonly used sedentary cut-points, AG800 had fair classification accuracy and low specificity, indicating that this cut-point was incorrectly classifying activity as sedentary time [19]. Overall, the findings from the present study and previous studies suggest that higher AG cut-points capture more activity than can be associated with sitting time; therefore, studies that have used higher AG sedentary cut-points should be viewed with this limitation in mind. A limitation of hip-mounted accelerometers is their susceptibility to misclassifying standing light-to-moderate intensity activities as sedentary [20,26]. In the present study, the ROC curve analyses for sitting plus standing resulted in a poor AUC, which meant that the associated cut-point would be ineffective for characterizing sitting plus standing. This demonstrates that the AG cannot differentiate sitting from standing with minimal movement, and that researchers interested in examining time spent sitting plus standing should use objective monitors with inclinometers, such as the aP [23]. This study found that agreement between aP- and AG-derived sedentary time varied depending on the period of the day being examined. The lowest mean biases for sitting time during break time and class time were observed at AG50 and AG150, respectively, though the limits of agreement were also wide at these thresholds. At a practical level, it is unlikely that different cut-points are needed to assess sedentary time during different parts of the day; rather, these findings appear to reflect the variability in children's sitting time across the day. For example, sitting accounts for a small proportion of break time [27], yet accounts for a large proportion of class time [25]. Future studies that aim to reduce time spent sitting during specific periods of school hours should be aware of such bias when assessing the effectiveness of different strategies. There are several limitations that warrant attention. Firstly, no true criterion of sedentary time, such as direct observation, was used in this study. While the aP has been validated for assessing sitting time in preschool [28] and adult populations [16], it has not yet been validated in school-age child populations. Secondly, the monitor used in this study was the GT1M, which has been succeeded by the GT3X and GT3X+ AG models. While there are emerging data that activity counts are comparable between the GT1M and the GT3X in adults [29], differences have been noted in low count ranges [24]. As such, these findings are only generalizable to data collected using the normal filter. Thirdly, data analyses were restricted to two school days (9 am-3:30 pm), as children wore both monitors simultaneously for two days during this time only. Further research should examine the agreement between the aP and the AG during waking hours across multiple days. It should be noted, however, that 100% compliance during the school day meant that consecutive zeros were indicative of no movement rather than non-wear, which is a strength of this study. Conclusion: An AG cut-point of 100 counts·min-1 provided a good estimate of free-living sitting time in children during school hours. Higher cut-points that have been used to report children's sedentary time may capture both sitting and standing time. Further research is needed to examine the use of the 100 counts·min-1 cut-point to determine sitting time across the whole day, and against health indices in children.
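The ROC procedure summarized in the statistical analyses (derive a cut-point that jointly maximizes sensitivity and specificity in a random half of the sample, then cross-validate it in the remaining half) can be sketched as follows. This is a simplified, dependency-light stand-in for the MedCalc analysis, run on simulated epoch data rather than the study's recordings; the simulation parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated epoch-level data: 15-s counts with aP sitting labels, pooled across
# children. The published analysis used MedCalc on the real matched epochs.
n = 4000
sitting = rng.random(n) < 0.55
counts = np.where(sitting,
                  rng.gamma(1.0, 15.0, size=n),   # low counts while seated
                  rng.gamma(2.0, 60.0, size=n))   # higher counts when upright

# Random 50/50 split: derive the cut-point in one half, cross-validate in the other.
idx = rng.permutation(n)
derive, validate = idx[: n // 2], idx[n // 2:]

def sens_spec(threshold, x, y):
    """Sensitivity/specificity of 'sedentary if counts <= threshold' against aP sitting."""
    pred = x <= threshold
    sensitivity = np.mean(pred[y])      # sitting epochs correctly called sedentary
    specificity = np.mean(~pred[~y])    # non-sitting epochs correctly called non-sedentary
    return sensitivity, specificity

# Sweep candidate per-epoch thresholds and keep the one maximizing Youden's J
# (sensitivity + specificity - 1), a standard ROC-based optimum.
candidates = np.arange(1, 251)
j = [sum(sens_spec(t, counts[derive], sitting[derive])) - 1 for t in candidates]
best = int(candidates[int(np.argmax(j))])
print(f"derived cut-point: {best} counts/15 s (= {4 * best} counts/min)")

se, sp = sens_spec(best, counts[validate], sitting[validate])
print(f"cross-validation: sensitivity {100 * se:.1f}%, specificity {100 * sp:.1f}%")
```

Youden's J is one common way to operationalize "maximized sensitivity and specificity"; the exact criterion used by MedCalc may differ in detail.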
Background: Accelerometers have been used to determine the amount of time that children spend sedentary. However, as time spent sitting may be detrimental to health, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting. The aim of this study was to: a) examine agreement between ActiGraph (AG) cut-points for sedentary time and objectively-assessed periods of free-living sitting and sitting plus standing time using the activPAL (aP); and b) identify cut-points to determine time spent sitting and sitting plus standing. Methods: Forty-eight children (54% boys) aged 8-12 years wore a waist-mounted AG and thigh-mounted aP for two consecutive school days (9-3:30 pm). AG data were analyzed using 17 cut-points between 50-850 counts·min-1 in 50 counts·min-1 increments to determine sedentary time during class-time, break time and school hours. Sitting and sitting plus standing time were obtained from the aP for these periods. Limits of agreement were computed to evaluate bias between AG50 to AG850 sedentary time and sitting and sitting plus standing time. Receiver Operator Characteristic (ROC) analyses identified AG cut-points that maximized sensitivity and specificity for sitting and sitting plus standing time. Results: The smallest mean bias between aP sitting time and AG sedentary time was AG150 for class time (3.8 minutes), AG50 for break time (-0.8 minutes), and AG100 for school hours (-5.2 minutes). For sitting plus standing time, the smallest bias was observed for AG850. ROC analyses revealed an optimal cut-point of 96 counts·min-1 (AUC = 0.75) for sitting time, which had acceptable sensitivity (71.7%) and specificity (67.8%). No optimal cut-point was obtained for sitting plus standing (AUC = 0.51). Conclusions: Estimates of free-living sitting time in children during school hours can be obtained using an AG cut-point of 100 counts·min-1. Higher sedentary cut-points may capture both sitting and standing time.
Background: There is increasing interest in the effects of sedentary behaviors on children's and adults' health [1,2] largely due to emerging evidence that objectively-assessed sedentary time is associated with cardio-metabolic health [3-5]. The ActiGraph (AG) accelerometer has been commonly used in the objective assessment of sedentary time. However, there is considerable variability in the cut-points used to identify sedentary time using this accelerometer in child populations. AG sedentary time cut-points used in school-aged children and adolescents have included 100 counts·min-1 [6,7], 200 counts·min-1 [8], 500 counts·min-1 [3,4], and 800 counts·min-1 [9], yet only two thresholds (100 and 800 counts·min-1) have been validated [6,7,9]. Objective measures such as accelerometers estimate sedentary time based on a lack of movement [10]. Sedentary behavior is typically defined as sitting behaviors that require low levels of energy expenditure to perform (≤1.5 METS) [11], and a lack of movement may indicate low levels of energy expenditure when using an accelerometer. Time spent in sedentary behavior is distinct from the lack of physical activity, which is defined as the amount of time not spent engaged in physical activity of a particular intensity (often moderate-to-vigorous physical activity), and often incorporates light intensity physical activity behaviors [12]. It is possible, however that low movement may be recorded by a hip-mounted accelerometer, but the individual could be standing (which is a very light intensity activity [12]) therefore more energy may be expended than that typically associated with sedentary behaviors [1]. Though the differences in energy expenditure may be considered negligible, the accumulation of these differences may have implications for energy balance over time [13]. In recent years, opportunities for measuring patterns of sitting/lying time (referred to as sitting time hereon in) have been made possible through the use of inclinometers (e.g. activPAL [aP], PAL Technologies Ltd, Glasgow, UK) to detect postures. To date, however, no studies have used the aP to determine the accuracy of AG cut-points for assessing sitting time or to identify whether sitting can be differentiated from sitting and standing time. Whilst the AG is unable to provide postural information, sitting and sitting plus standing require little vertical acceleration. Consequently, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting, [12] particularly as the AG is likely to continue to be used to measure both sedentary time and physical activity intensities. The aim of this study was to examine the agreement between AG cut-points for sedentary time and objectively-assessed sitting and sitting plus standing time in children using the aP during the school day. Class time and break time were also examined separately as class time is typically sedentary, while all children have opportunities for activity during recess and lunchtime. It was hypothesized that a lower AG cut-point would have greater agreement with aP sitting time and a higher AG cut-point would have greater agreement with aP sitting plus standing time. A secondary aim was to examine whether an accelerometer count cut-point could be used to determine time spent sitting and sitting plus standing. Conclusion: JS conceived the study and secured the funding. JS and LA planned the study design. LA collected the data. EOC performed the initial data manipulation. 
NDR performed the statistical analyses. NDR, JS, AT and KR and interpreted the data. NDR wrote the manuscript. AT, KR, JS, LA and EOC critically reviewed and revised the manuscript. All authors read and approved the final version of the manuscript.
Background: Accelerometers have been used to determine the amount of time that children spend sedentary. However, as time spent sitting may be detrimental to health, research is needed to examine whether accelerometer sedentary cut-points reflect the amount of time children spend sitting. The aim of this study was to: a) examine agreement between ActiGraph (AG) cut-points for sedentary time and objectively-assessed periods of free-living sitting and sitting plus standing time using the activPAL (aP); and b) identify cut-points to determine time spent sitting and sitting plus standing. Methods: Forty-eight children (54% boys) aged 8-12 years wore a waist-mounted AG and thigh-mounted aP for two consecutive school days (9-3:30 pm). AG data were analyzed using 17 cut-points between 50-850 counts·min-1 in 50 counts·min-1 increments to determine sedentary time during class-time, break time and school hours. Sitting and sitting plus standing time were obtained from the aP for these periods. Limits of agreement were computed to evaluate bias between AG50 to AG850 sedentary time and sitting and sitting plus standing time. Receiver Operator Characteristic (ROC) analyses identified AG cut-points that maximized sensitivity and specificity for sitting and sitting plus standing time. Results: The smallest mean bias between aP sitting time and AG sedentary time was AG150 for class time (3.8 minutes), AG50 for break time (-0.8 minutes), and AG100 for school hours (-5.2 minutes). For sitting plus standing time, the smallest bias was observed for AG850. ROC analyses revealed an optimal cut-point of 96 counts·min-1 (AUC = 0.75) for sitting time, which had acceptable sensitivity (71.7%) and specificity (67.8%). No optimal cut-point was obtained for sitting plus standing (AUC = 0.51). Conclusions: Estimates of free-living sitting time in children during school hours can be obtained using an AG cut-point of 100 counts·min-1. Higher sedentary cut-points may capture both sitting and standing time.
6,875
401
[ 624, 118, 192, 238, 426, 320, 863, 1366, 77 ]
10
[ "time", "sitting", "ag", "cut", "ap", "sedentary", "school", "data", "children", "standing" ]
[ "detect sedentary time", "classify sedentary time", "activity sedentary time", "sedentary time accelerometer", "assessment sedentary time" ]
null
[CONTENT] Accelerometry | Children | Objective assessment | Sedentary time | Sedentary behavior [SUMMARY]
[CONTENT] Accelerometry | Children | Objective assessment | Sedentary time | Sedentary behavior [SUMMARY]
null
[CONTENT] Accelerometry | Children | Objective assessment | Sedentary time | Sedentary behavior [SUMMARY]
[CONTENT] Accelerometry | Children | Objective assessment | Sedentary time | Sedentary behavior [SUMMARY]
[CONTENT] Accelerometry | Children | Objective assessment | Sedentary time | Sedentary behavior [SUMMARY]
[CONTENT] Actigraphy | Area Under Curve | Bias | Child | Exercise | Female | Health Behavior | Humans | Male | Monitoring, Ambulatory | Motor Activity | Posture | ROC Curve | Reference Values | Reproducibility of Results | Schools | Sedentary Behavior | Time Factors [SUMMARY]
[CONTENT] Actigraphy | Area Under Curve | Bias | Child | Exercise | Female | Health Behavior | Humans | Male | Monitoring, Ambulatory | Motor Activity | Posture | ROC Curve | Reference Values | Reproducibility of Results | Schools | Sedentary Behavior | Time Factors [SUMMARY]
null
[CONTENT] Actigraphy | Area Under Curve | Bias | Child | Exercise | Female | Health Behavior | Humans | Male | Monitoring, Ambulatory | Motor Activity | Posture | ROC Curve | Reference Values | Reproducibility of Results | Schools | Sedentary Behavior | Time Factors [SUMMARY]
[CONTENT] Actigraphy | Area Under Curve | Bias | Child | Exercise | Female | Health Behavior | Humans | Male | Monitoring, Ambulatory | Motor Activity | Posture | ROC Curve | Reference Values | Reproducibility of Results | Schools | Sedentary Behavior | Time Factors [SUMMARY]
[CONTENT] Actigraphy | Area Under Curve | Bias | Child | Exercise | Female | Health Behavior | Humans | Male | Monitoring, Ambulatory | Motor Activity | Posture | ROC Curve | Reference Values | Reproducibility of Results | Schools | Sedentary Behavior | Time Factors [SUMMARY]
[CONTENT] detect sedentary time | classify sedentary time | activity sedentary time | sedentary time accelerometer | assessment sedentary time [SUMMARY]
[CONTENT] detect sedentary time | classify sedentary time | activity sedentary time | sedentary time accelerometer | assessment sedentary time [SUMMARY]
null
[CONTENT] detect sedentary time | classify sedentary time | activity sedentary time | sedentary time accelerometer | assessment sedentary time [SUMMARY]
[CONTENT] detect sedentary time | classify sedentary time | activity sedentary time | sedentary time accelerometer | assessment sedentary time [SUMMARY]
[CONTENT] detect sedentary time | classify sedentary time | activity sedentary time | sedentary time accelerometer | assessment sedentary time [SUMMARY]
[CONTENT] time | sitting | ag | cut | ap | sedentary | school | data | children | standing [SUMMARY]
[CONTENT] time | sitting | ag | cut | ap | sedentary | school | data | children | standing [SUMMARY]
null
[CONTENT] time | sitting | ag | cut | ap | sedentary | school | data | children | standing [SUMMARY]
[CONTENT] time | sitting | ag | cut | ap | sedentary | school | data | children | standing [SUMMARY]
[CONTENT] time | sitting | ag | cut | ap | sedentary | school | data | children | standing [SUMMARY]
[CONTENT] time | sitting | sedentary | activity | physical | physical activity | energy | behaviors | cut | sedentary time [SUMMARY]
[CONTENT] time | data | sitting | ap | cut | ag | children | school | monitors | day [SUMMARY]
null
[CONTENT] time | cut | sitting | 100 counts min | 100 counts | 100 | children | point | counts min | sitting time [SUMMARY]
[CONTENT] time | sitting | cut | sedentary | ag | ap | data | children | school | day [SUMMARY]
[CONTENT] time | sitting | cut | sedentary | ag | ap | data | children | school | day [SUMMARY]
[CONTENT] ||| ||| ActiGraph | AG [SUMMARY]
[CONTENT] Forty-eight | 54% | 8-12 years | AG | two consecutive school days | 9-3:30 pm ||| AG | 17 | between 50-850 | 50 ||| ||| AG850 ||| Receiver Operator Characteristic | ROC | AG [SUMMARY]
null
[CONTENT] school hours | AG | 100 ||| [SUMMARY]
[CONTENT] ||| ||| ActiGraph | AG ||| Forty-eight | 54% | 8-12 years | AG | two consecutive school days | 9-3:30 pm ||| AG | 17 | between 50-850 | 50 ||| ||| AG850 ||| Receiver Operator Characteristic | ROC | AG ||| ||| AG | AG150 | 3.8 minutes | (-0.8 minutes | AG100 | school hours ||| AG850 ||| ROC | 96 | 0.75 | 71.7% | 67.8% ||| 0.51 ||| school hours | AG | 100 ||| [SUMMARY]
[CONTENT] ||| ||| ActiGraph | AG ||| Forty-eight | 54% | 8-12 years | AG | two consecutive school days | 9-3:30 pm ||| AG | 17 | between 50-850 | 50 ||| ||| AG850 ||| Receiver Operator Characteristic | ROC | AG ||| ||| AG | AG150 | 3.8 minutes | (-0.8 minutes | AG100 | school hours ||| AG850 ||| ROC | 96 | 0.75 | 71.7% | 67.8% ||| 0.51 ||| school hours | AG | 100 ||| [SUMMARY]
Developmental neurotoxicity of perfluorinated chemicals modeled in vitro.
18560525
The widespread detection of perfluoroalkyl acids and their derivatives in wildlife and humans, and their entry into the immature brain, raise increasing concern about whether these agents might be developmental neurotoxicants.
BACKGROUND
We assessed inhibition of DNA synthesis, deficits in cell numbers and growth, oxidative stress, reduced cell viability, and shifts in differentiation toward or away from the dopamine (DA) and acetylcholine (ACh) neurotransmitter phenotypes.
METHODS
In general, the rank order of adverse effects was PFOSA > PFOS > PFBS ≈ PFOA. However, superimposed on this scheme, the various agents differed in their underlying mechanisms and specific outcomes. Notably, PFOS promoted differentiation into the ACh phenotype at the expense of the DA phenotype, PFBS suppressed differentiation of both phenotypes, PFOSA enhanced differentiation of both, and PFOA had little or no effect on phenotypic specification.
RESULTS
These findings indicate that all perfluorinated chemicals are not the same in their impact on neurodevelopment and that it is unlikely that there is one simple, shared mechanism by which they all produce their effects. Our results reinforce the potential for in vitro models to aid in the rapid and cost-effective screening for comparative effects among different chemicals in the same class and in relation to known developmental neurotoxicants.
CONCLUSIONS
[ "Alkanesulfonic Acids", "Animals", "Caprylates", "Cell Differentiation", "Cell Proliferation", "Cell Survival", "Fluorocarbons", "Neurons", "Oxidative Stress", "PC12 Cells", "Rats", "Sulfonamides" ]
2430225
null
null
Data analysis
All studies were performed on 8–16 separate cultures for each measure and treatment, using 2–4 separate batches of cells. Results are presented as mean ± SE, with treatment comparisons carried out by analysis of variance (ANOVA) followed by Fisher’s protected least significant difference test for post hoc comparisons of individual treatments. In the initial test, we evaluated two ANOVA factors (treatment, cell batch) and found that the results did not vary among the different batches of cells, so results across the different batches were combined for presentation. Significance was assumed at p < 0.05.
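The analysis pipeline described above (a two-factor ANOVA over treatment and cell batch, followed by protected post hoc comparisons) can be sketched roughly as below. This is not the authors' code: the data are randomly generated placeholders, and the unadjusted pairwise t-tests are a simplification of Fisher's protected least significant difference test, which classically reuses the pooled ANOVA error term.

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Placeholder response values for four conditions measured in two batches of
# cells (8 cultures per cell of the design) -- not study data.
effects = {"control": 0.0, "CPF": 0.4, "PFOS": 0.5, "PFOSA": 1.2}
rows = [{"treatment": trt, "batch": batch, "value": v}
        for batch in (1, 2)
        for trt, shift in effects.items()
        for v in rng.normal(1.0 + shift, 0.2, size=8)]
data = pd.DataFrame(rows)

# Two-factor ANOVA (treatment, cell batch); if the batch term is negligible,
# batches can be pooled, as described above.
model = smf.ols("value ~ C(treatment) + C(batch)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Protected post hoc comparisons: run pairwise tests only after a significant
# overall treatment effect, without multiplicity adjustment (Fisher-style LSD).
for a, b in combinations(effects, 2):
    t, p = stats.ttest_ind(data.loc[data.treatment == a, "value"],
                           data.loc[data.treatment == b, "value"])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4g}")
```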
Results
PFOS In undifferentiated PC12 cells, PFOS elicited a small but statistically significant reduction in DNA synthesis within 24 hr of exposure (Figure 1A). The effect was smaller than that elicited by 50 μM CPF, even when the PFOS concentration was raised to 250 μM. None of the concentrations elicited a decrement in the number of cells as monitored by DNA content (Figure 1B). Nevertheless, PFOS evoked a greater degree of lipid peroxidation than did CPF (Figure 1C), with significant effects even at the lowest concentration (10 μM). The effects were insufficient to trigger a loss of viability as monitored by trypan blue exclusion (Figure 1D). In differentiating cells, 6 days of exposure to PFOS failed to cause any alterations in indices of cell number (Figure 2A), size (Figure 2B), or the membrane outgrowth associated with neurite formation (Figure 2C), whereas the positive test compound, CPF, showed significant reductions in DNA content and increments in both of the protein ratios. Indices of lipid peroxidation and cell viability were conducted after 4 days of exposure, because these factors represent a forerunner of cell loss. In contrast to the effects in undifferentiated cells, PFOS evoked less oxidative stress than did CPF (Figure 3A). PFOS decreased cell viability only at the highest concentration (Figure 3B). With the onset of differentiation, PC12 cells showed increased expression of both TH (Figure 4A) and ChAT (Figure 4B), with a much greater effect on the latter, so the TH/ChAT ratio fell by nearly an order of magnitude (Figure 4C). PFOS interfered with the differentiation into the DA phenotype, as evidenced by a decrement in TH that was significant at concentrations > 50 μM (Figure 4A). At the same time, it enhanced expression of the ACh phenotype, as shown by significant increases in ChAT (Figure 4B); the effect peaked at 50 μM PFOS and then declined, thus displaying an “inverted-U” concentration–effect relationship. The combination of reduced TH and augmented ChAT produced a robust net shift toward the ACh phenotype, as shown by a significant reduction in the TH/ChAT ratio, even at the lowest PFOS concentration (Figure 4C). In undifferentiated PC12 cells, PFOS elicited a small but statistically significant reduction in DNA synthesis within 24 hr of exposure (Figure 1A). The effect was smaller than that elicited by 50 μM CPF, even when the PFOS concentration was raised to 250 μM. None of the concentrations elicited a decrement in the number of cells as monitored by DNA content (Figure 1B). Nevertheless, PFOS evoked a greater degree of lipid peroxidation than did CPF (Figure 1C), with significant effects even at the lowest concentration (10 μM). The effects were insufficient to trigger a loss of viability as monitored by trypan blue exclusion (Figure 1D). In differentiating cells, 6 days of exposure to PFOS failed to cause any alterations in indices of cell number (Figure 2A), size (Figure 2B), or the membrane outgrowth associated with neurite formation (Figure 2C), whereas the positive test compound, CPF, showed significant reductions in DNA content and increments in both of the protein ratios. Indices of lipid peroxidation and cell viability were conducted after 4 days of exposure, because these factors represent a forerunner of cell loss. In contrast to the effects in undifferentiated cells, PFOS evoked less oxidative stress than did CPF (Figure 3A). PFOS decreased cell viability only at the highest concentration (Figure 3B). 
With the onset of differentiation, PC12 cells showed increased expression of both TH (Figure 4A) and ChAT (Figure 4B), with a much greater effect on the latter, so the TH/ChAT ratio fell by nearly an order of magnitude (Figure 4C). PFOS interfered with the differentiation into the DA phenotype, as evidenced by a decrement in TH that was significant at concentrations > 50 μM (Figure 4A). At the same time, it enhanced expression of the ACh phenotype, as shown by significant increases in ChAT (Figure 4B); the effect peaked at 50 μM PFOS and then declined, thus displaying an “inverted-U” concentration–effect relationship. The combination of reduced TH and augmented ChAT produced a robust net shift toward the ACh phenotype, as shown by a significant reduction in the TH/ChAT ratio, even at the lowest PFOS concentration (Figure 4C). PFOA Unlike PFOS, 24 hr of exposure of undifferentiated cells to PFOA produced inhibition of DNA synthesis only at 250 μM, the highest concentration tested (Figure 1A). As before, there were no effects on DNA content (Figure 1B). PFOA also produced a significant overall increase in lipid peroxidation, but the effect achieved statistical significance at only one concentration (10 μM); unlike PFOS, the effect was smaller than for the positive test compound, CPF (Figure 1C). Cell viability was significantly reduced at the two highest concentrations (Figure 1D), but the effect was not statistically distinguishable from the nonsignificant increase seen with PFOS (no interaction of treatment × agent in a two-factor ANOVA). In differentiating cells, PFOA also proved negative for effects on cell number, except at the highest concentration (Figure 2A), and had no discernible impact on the protein/DNA ratio (Figure 2B) or the membrane/total protein ratio (Figure 2C); however, significant effects were seen for all markers with CPF. The differentiating cells also showed some evidence of oxidative stress elicited by PFOA, albeit to a lesser extent than for CPF (Figure 3A), and there were no effects on cell viability as monitored by trypan blue exclusion (Figure 3B). Unlike PFOS, PFOA had only minor effects on differentiation of PC12 cells into the DA and ACh phenotypes. We observed a small decrement in TH activity that was significant at only two of the four concentrations tested (Figure 4A). There was no significant overall effect on ChAT (Figure 4B). The TH/ChAT ratio similarly showed only a small but statistically significant decrement at the lowest PFOA concentration (Figure 4C). Unlike PFOS, 24 hr of exposure of undifferentiated cells to PFOA produced inhibition of DNA synthesis only at 250 μM, the highest concentration tested (Figure 1A). As before, there were no effects on DNA content (Figure 1B). PFOA also produced a significant overall increase in lipid peroxidation, but the effect achieved statistical significance at only one concentration (10 μM); unlike PFOS, the effect was smaller than for the positive test compound, CPF (Figure 1C). Cell viability was significantly reduced at the two highest concentrations (Figure 1D), but the effect was not statistically distinguishable from the nonsignificant increase seen with PFOS (no interaction of treatment × agent in a two-factor ANOVA). 
PFOSA

The effects of PFOSA were substantially different from those of PFOS or PFOA. In undifferentiated cells, PFOSA produced significant inhibition of DNA synthesis at all concentrations tested (Figure 1A). The reduction was equivalent to that of CPF at equimolar concentrations and then showed progressively greater loss at higher concentrations, so that at 250 μM PFOSA, DNA synthesis was almost totally arrested. Even within the span of the 24-hr exposure, 250 μM PFOSA caused a 50% decrease in the number of cells, as monitored by DNA content (Figure 1B). Because this reduction occurred in less than the doubling time for undifferentiated PC12 cells (48–72 hr), it suggested that there was an adverse effect on existing cells rather than just inhibition of new cell formation. Indeed, we found a greater degree of oxidative stress for PFOSA than for CPF, even at one-fifth the concentration, and a massive increase in lipid peroxidation at the highest concentration (Figure 1C). The effects were accompanied by a major decrease in viability, indicated by a rise in trypan blue–stained cells (Figure 1D).

In differentiating cells, 6 days of exposure to PFOSA produced significant decrements in DNA content, with a near-total loss of cells at the highest concentration (Figure 2A); accordingly, protein ratios could not be evaluated at 250 μM. At 100 μM PFOSA, the remaining cells showed a significant increase in the protein/DNA ratio (Figure 2B), and there were small increments in the membrane/total protein ratio that achieved significance at 50 and 100 μM (Figure 2C). Because of the loss of cells at 6 days, we evaluated indices of cell damage at the 4-day point. Lipid peroxidation was readily demonstrable at PFOSA concentrations > 10 μM, with a massive increase at 250 μM (Figure 3A), at which point loss of viability was readily demonstrable (Figure 3B).

Assessments of the impact of PFOSA on neurotransmitter phenotype were likewise truncated at 100 μM since few cells survived for 6 days at 250 μM. PFOSA had a promotional effect on TH at 50 or 100 μM, reaching three times control values at the higher concentration (Figure 4A). Differentiation into the ACh phenotype was also augmented by PFOSA (Figure 4B). However, there was a disparate concentration–effect relationship for the two phenotypes: at low concentrations, PFOSA shifted differentiation toward the ACh phenotype, as evidenced by a decrease in TH/ChAT, whereas at 100 μM, the effect on the DA phenotype predominated, producing a large increment in TH/ChAT (Figure 4C).
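The doubling-time argument in the first paragraph above can be made explicit with simple arithmetic: untreated cultures grow by only about 1.4-fold in 24 hr, so even a complete block of cell division would leave treated cultures at no less than roughly 70% of control, whereas the observed deficit was about 50%. The sketch below (illustrative, not from the study) works through the numbers.

# Back-of-the-envelope check of why a 50% DNA-content deficit at 24 hr implies
# loss of existing cells rather than a pure block of division (illustrative).
doubling_time_hr = 48          # shorter end of the cited 48–72 hr doubling time
exposure_hr = 24

control_growth = 2 ** (exposure_hr / doubling_time_hr)   # about 1.41-fold in 24 hr
max_deficit_vs_control = 1 - 1 / control_growth          # about 0.29 if division stops entirely

print(f"Largest deficit explainable by a complete division block: "
      f"{100 * max_deficit_vs_control:.0f}% of control")
# The observed ~50% deficit exceeds this, so pre-existing cells must have been lost.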
PFBS

The effects of PFBS were somewhat similar to those of PFOA. In undifferentiated cells, there was little or no effect on DNA synthesis (Figure 1A), no shortfall in cell numbers (Figure 1B), and no significant lipid peroxidation (Figure 1C), although at high concentrations there was a small loss of viability (Figure 1D). Similarly, in differentiating cells, PFBS did not evoke a reduction in DNA content (Figure 2A), although it did produce significant cell enlargement as evidenced by an increase in the total protein/DNA ratio (Figure 2B). Like PFOA, PFBS did not change the membrane/total protein ratio (Figure 2C). PFBS evoked lipid peroxidation in differentiating cells, of about the same magnitude as that seen with PFOA but slightly less than that of the positive test compound, CPF (Figure 3A). Viability in differentiating cells was not compromised until the PFBS concentration was raised to 250 μM (Figure 3B).

Notably, though, PFBS had a unique effect on differentiation into the two neurotransmitter phenotypes, displaying a concentration-dependent decrease in both the expression of TH (Figure 4A) and ChAT (Figure 4B), a pattern that was not seen with any other agent. Accordingly, the ratio of TH/ChAT was unchanged (Figure 4C) because both enzymes were reduced in parallel by PFBS.
[ "Cell cultures", "DNA synthesis", "DNA and protein ratios", "Oxidative stress", "Viability", "Enzyme activities", "PFOS", "PFOA", "PFOSA", "PFBS" ]
[ "Because of the clonal instability of the PC12 cell line (Fujita et al. 1989), the experiments were performed on cells that had undergone fewer than five passages. PC12 cells (American Type Culture Collection, 1721-CRL; obtained from the Duke Comprehensive Cancer Center, Durham, NC) were grown under standard conditions (Crumpton et al. 2000a; Qiao et al. 2003; Song et al. 1998) in RPMI-1640 medium (Invitrogen, Carlsbad, CA) supplemented with 10% inactivated horse serum (Sigma Chemical Co., St. Louis, MO), 5% fetal bovine serum (Sigma), and 50 μg/mL penicillin streptomycin (Invitrogen); cells were incubated with 7.5% CO2 at 37°C. For studies in the undifferentiated state, the medium was changed 24 hr after seeding to include 50 μM CPF (98.8% purity; Chem Service, West Chester, PA), or varying concentrations of each of the four perfluorinated chemicals (supplied by Battelle, Columbus, OH): PFOS (97% purity), PFOA (99.2% purity), PFOSA (99.4% purity), and PFBS potassium salt (98.2% purity). Because of the limited water solubility of CPF and some of the perfluorinated chemicals, all test agents were dissolved in dimethyl sulfoxide (DMSO) to achieve a final concentration in the culture medium of 0.1%, which has no effect on replication or differentiation of PC12 cells (Qiao et al. 2001, 2003; Song et al. 1998); control cultures contained the same concentration of DMSO. For studies in differentiating cells, 24 hr after seeding, the medium was changed to include 50 ng/mL of 2.5 S murine nerve growth factor (Invitrogen) and DMSO with or without the test agents; these cells were examined for up to 6 days, with medium changes (including test agents) every 48 hr. We chose the CPF concentration to elicit a robust response for each of the effects to be compared to the actions of perfluorinated chemicals, and accordingly, we selected a concentration that elicits inhibition of DNA synthesis and interference with cell acquisition and oxidative stress, but that lies below the threshold for outright cytotoxicity or loss of viability (Bagchi et al. 1995; Crumpton et al. 2000b; Das and Barone 1999; Jameson et al. 2006b; Qiao et al. 2001, 2003, 2005; Slotkin et al. 2007; Song et al. 1998).", "We measured DNA synthesis in undifferentiated cells. Twenty-four hours after plating, we changed the medium to include the test agents. After 23 hr of exposure, we initiated the measurement of DNA synthesis by changing the medium to include 1 μCi/mL of [3H]thymidine (specific activity, 2 Ci/mmol; GE Healthcare, Piscataway, NJ) along with the continued inclusion of the test substances. After 1 hr, the medium was aspirated and cells were harvested, and DNA was precipitated and separated from other macromolecules by established procedures that produce quantitative recovery of DNA (Bell et al. 1986; Slotkin et al. 1984). The DNA fraction was counted for radiolabel and the DNA concentration was determined spectrophotometrically by absorbance at 260 nm. We corrected incorporation values to the amount of DNA present in each culture to provide an index of DNA synthesis per cell (Winick and Noble 1965), and we recorded the total DNA content.", "We determined DNA and protein ratios in differentiating cells after 6 days of continuous exposure to the test agents. Cells were harvested and washed, and the DNA and protein fractions were isolated and analyzed as described previously (Slotkin et al. 2007; Song et al. 1998), with DNA and total protein analyzed by dye binding (Trauth et al. 2000). 
To prepare the cell membrane fraction, the homogenates were sedimented at 40,000 × g for 10 min and the pellet was washed and resedimented. Aliquots of the final resuspension were then assayed for membrane protein (Smith et al. 1985).", "We evaluated the degree of lipid peroxidation in undifferentiated cells after 24 hr of exposure to test agents, and in differentiating cells after 4 days of exposure. We measured the concentration of MDA by reaction with thiobarbituric acid using a modification (Qiao et al. 2005) of published procedures (Guan et al. 2003). To give the MDA concentration per cell, values were calculated relative to the amount of DNA.", "To assess cell viability, the cell culture medium was changed to include trypan blue (1 volume per 2.5 vol of medium; Sigma) and cells were examined for staining under 400× magnification, counting an average of 100 cells per field in four different fields per culture. Assessments were made after 24 hr of exposure in undifferentiated cells and after 4 days for differentiating cells.", "Differentiating cells were harvested after 6 days of exposure, as described above, and were disrupted by homogenization in a ground-glass homogenizer fitted with a ground-glass pestle and using a buffer consisting of 154 mM NaCl and 10 mM sodium-potassium phosphate (pH 7.4). Aliquots were withdrawn for measurement of DNA (Trauth et al. 2000).\nChAT assays were conducted following published techniques (Lau et al. 1988) using a substrate of 50 μM [14C]acetyl–coenzyme A (specific activity, 60 mCi/mmol; PerkinElmer Life Sciences, Boston, MA). Labeled ACh was counted in a liquid scintillation counter and activity calculated as nanomoles synthesized per hour per microgram DNA.\nTH activity was measured using [14C]tyrosine as a substrate and trapping the evolved 14CO2 after coupled decarboxylation (Lau et al. 1988; Waymire et al. 1971). Each assay contained 0.5 μCi of generally labeled [14C]tyrosine (specific activity, 438 mCi/mmol; Sigma) as substrate, and activity was calculated on the same basis as for ChAT.", "In undifferentiated PC12 cells, PFOS elicited a small but statistically significant reduction in DNA synthesis within 24 hr of exposure (Figure 1A). The effect was smaller than that elicited by 50 μM CPF, even when the PFOS concentration was raised to 250 μM. None of the concentrations elicited a decrement in the number of cells as monitored by DNA content (Figure 1B). Nevertheless, PFOS evoked a greater degree of lipid peroxidation than did CPF (Figure 1C), with significant effects even at the lowest concentration (10 μM). The effects were insufficient to trigger a loss of viability as monitored by trypan blue exclusion (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOS failed to cause any alterations in indices of cell number (Figure 2A), size (Figure 2B), or the membrane outgrowth associated with neurite formation (Figure 2C), whereas the positive test compound, CPF, showed significant reductions in DNA content and increments in both of the protein ratios. Indices of lipid peroxidation and cell viability were conducted after 4 days of exposure, because these factors represent a forerunner of cell loss. In contrast to the effects in undifferentiated cells, PFOS evoked less oxidative stress than did CPF (Figure 3A). 
PFOS decreased cell viability only at the highest concentration (Figure 3B).\nWith the onset of differentiation, PC12 cells showed increased expression of both TH (Figure 4A) and ChAT (Figure 4B), with a much greater effect on the latter, so the TH/ChAT ratio fell by nearly an order of magnitude (Figure 4C). PFOS interfered with the differentiation into the DA phenotype, as evidenced by a decrement in TH that was significant at concentrations > 50 μM (Figure 4A). At the same time, it enhanced expression of the ACh phenotype, as shown by significant increases in ChAT (Figure 4B); the effect peaked at 50 μM PFOS and then declined, thus displaying an “inverted-U” concentration–effect relationship. The combination of reduced TH and augmented ChAT produced a robust net shift toward the ACh phenotype, as shown by a significant reduction in the TH/ChAT ratio, even at the lowest PFOS concentration (Figure 4C).", "Unlike PFOS, 24 hr of exposure of undifferentiated cells to PFOA produced inhibition of DNA synthesis only at 250 μM, the highest concentration tested (Figure 1A). As before, there were no effects on DNA content (Figure 1B). PFOA also produced a significant overall increase in lipid peroxidation, but the effect achieved statistical significance at only one concentration (10 μM); unlike PFOS, the effect was smaller than for the positive test compound, CPF (Figure 1C). Cell viability was significantly reduced at the two highest concentrations (Figure 1D), but the effect was not statistically distinguishable from the nonsignificant increase seen with PFOS (no interaction of treatment × agent in a two-factor ANOVA).\nIn differentiating cells, PFOA also proved negative for effects on cell number, except at the highest concentration (Figure 2A), and had no discernible impact on the protein/DNA ratio (Figure 2B) or the membrane/total protein ratio (Figure 2C); however, significant effects were seen for all markers with CPF. The differentiating cells also showed some evidence of oxidative stress elicited by PFOA, albeit to a lesser extent than for CPF (Figure 3A), and there were no effects on cell viability as monitored by trypan blue exclusion (Figure 3B). Unlike PFOS, PFOA had only minor effects on differentiation of PC12 cells into the DA and ACh phenotypes. We observed a small decrement in TH activity that was significant at only two of the four concentrations tested (Figure 4A). There was no significant overall effect on ChAT (Figure 4B). The TH/ChAT ratio similarly showed only a small but statistically significant decrement at the lowest PFOA concentration (Figure 4C).", "The effects of PFOSA were substantially different from those of PFOS or PFOA. In undifferentiated cells, PFOSA produced significant inhibition of DNA synthesis at all concentrations tested (Figure 1A). The reduction was equivalent to that of CPF at equimolar concentrations and then showed progressively greater loss at higher concentrations, so that at 250 μM PFOSA, DNA synthesis was almost totally arrested. Even within the span of the 24-hr exposure, 250 μM PFOSA caused a 50% decrease in the number of cells, as monitored by DNA content (Figure 1B). Because this reduction occurred in less than the doubling time for undifferentiated PC12 cells (48–72 hr), it suggested that there was an adverse effect on existing cells rather than just inhibition of new cell formation. 
Indeed, we found a greater degree of oxidative stress for PFOSA than for CPF, even at one-fifth the concentration, and a massive increase in lipid peroxidation at the highest concentration (Figure 1C). The effects were accompanied by a major decrease in viability, indicated by a rise in trypan blue–stained cells (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOSA produced significant decrements in DNA content, with a near-total loss of cells at the highest concentration (Figure 2A); accordingly, protein ratios could not be evaluated at 250 μM. At 100 μM PFOSA, the remaining cells showed a significant increase in the protein/DNA ratio (Figure 2B), and there were small increments in the membrane/total protein ratio that achieved significance at 50 and 100 μM (Figure 2C). Because of the loss of cells at 6 days, we evaluated indices of cell damage at the 4-day point. Lipid peroxidation was readily demonstrable at PFOSA concentrations > 10 μM, with a massive increase at 250 μM (Figure 3A), at which point loss of viability was readily demonstrable (Figure 3B).\nAssessments of the impact of PFOSA on neurotransmitter phenotype were likewise truncated at 100 μM since few cells survived for 6 days at 250 μM. PFOSA had a promotional effect on TH at 50 or 100 μM, reaching three times control values at the higher concentration (Figure 4A). Differentiation into the ACh phenotype was also augmented by PFOSA (Figure 4B). However, there was a disparate concentration–effect relationship for the two phenotypes: at low concentrations, PFOSA shifted differentiation toward the ACh phenotype, as evidenced by a decrease in TH/ChAT, whereas at 100 μM, the effect on the DA phenotype predominated, producing a large increment in TH/ChAT (Figure 4C).", "The effects of PFBS were somewhat similar to those of PFOA. In undifferentiated cells, there was little or no effect on DNA synthesis (Figure 1A), no shortfall in cell numbers (Figure 1B), and no significant lipid peroxidation (Figure 1C), although at high concentrations there was a small loss of viability (Figure 1D). Similarly, in differentiating cells, PFBS did not evoke a reduction in DNA content (Figure 2A), although it did produce significant cell enlargement as evidenced by an increase in the total protein/DNA ratio (Figure 2B). Like PFOA, PFBS did not change the membrane/total protein ratio (Figure 2C). PFBS evoked lipid peroxidation in differentiating cells, of about the same magnitude as that seen with PFOA but slightly less than that of the positive test compound, CPF (Figure 3A). Viability in differentiating cells was not compromised until the PFBS concentration was raised to 250 μM (Figure 3B). Notably, though, PFBS had a unique effect on differentiation into the two neurotransmitter phenotypes, displaying a concentration-dependent decrease in both the expression of TH (Figure 4A) and ChAT (Figure 4B), a pattern that was not seen with any other agent. Accordingly, the ratio of TH/ChAT was unchanged (Figure 4C) because both enzymes were reduced in parallel by PFBS." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Materials and Methods", "Cell cultures", "DNA synthesis", "DNA and protein ratios", "Oxidative stress", "Viability", "Enzyme activities", "Data analysis", "Results", "PFOS", "PFOA", "PFOSA", "PFBS", "Discussion" ]
[ "All of the techniques used in this study have been reported previously; thus, only brief descriptions are given here.\n Cell cultures Because of the clonal instability of the PC12 cell line (Fujita et al. 1989), the experiments were performed on cells that had undergone fewer than five passages. PC12 cells (American Type Culture Collection, 1721-CRL; obtained from the Duke Comprehensive Cancer Center, Durham, NC) were grown under standard conditions (Crumpton et al. 2000a; Qiao et al. 2003; Song et al. 1998) in RPMI-1640 medium (Invitrogen, Carlsbad, CA) supplemented with 10% inactivated horse serum (Sigma Chemical Co., St. Louis, MO), 5% fetal bovine serum (Sigma), and 50 μg/mL penicillin streptomycin (Invitrogen); cells were incubated with 7.5% CO2 at 37°C. For studies in the undifferentiated state, the medium was changed 24 hr after seeding to include 50 μM CPF (98.8% purity; Chem Service, West Chester, PA), or varying concentrations of each of the four perfluorinated chemicals (supplied by Battelle, Columbus, OH): PFOS (97% purity), PFOA (99.2% purity), PFOSA (99.4% purity), and PFBS potassium salt (98.2% purity). Because of the limited water solubility of CPF and some of the perfluorinated chemicals, all test agents were dissolved in dimethyl sulfoxide (DMSO) to achieve a final concentration in the culture medium of 0.1%, which has no effect on replication or differentiation of PC12 cells (Qiao et al. 2001, 2003; Song et al. 1998); control cultures contained the same concentration of DMSO. For studies in differentiating cells, 24 hr after seeding, the medium was changed to include 50 ng/mL of 2.5 S murine nerve growth factor (Invitrogen) and DMSO with or without the test agents; these cells were examined for up to 6 days, with medium changes (including test agents) every 48 hr. We chose the CPF concentration to elicit a robust response for each of the effects to be compared to the actions of perfluorinated chemicals, and accordingly, we selected a concentration that elicits inhibition of DNA synthesis and interference with cell acquisition and oxidative stress, but that lies below the threshold for outright cytotoxicity or loss of viability (Bagchi et al. 1995; Crumpton et al. 2000b; Das and Barone 1999; Jameson et al. 2006b; Qiao et al. 2001, 2003, 2005; Slotkin et al. 2007; Song et al. 1998).\nBecause of the clonal instability of the PC12 cell line (Fujita et al. 1989), the experiments were performed on cells that had undergone fewer than five passages. PC12 cells (American Type Culture Collection, 1721-CRL; obtained from the Duke Comprehensive Cancer Center, Durham, NC) were grown under standard conditions (Crumpton et al. 2000a; Qiao et al. 2003; Song et al. 1998) in RPMI-1640 medium (Invitrogen, Carlsbad, CA) supplemented with 10% inactivated horse serum (Sigma Chemical Co., St. Louis, MO), 5% fetal bovine serum (Sigma), and 50 μg/mL penicillin streptomycin (Invitrogen); cells were incubated with 7.5% CO2 at 37°C. For studies in the undifferentiated state, the medium was changed 24 hr after seeding to include 50 μM CPF (98.8% purity; Chem Service, West Chester, PA), or varying concentrations of each of the four perfluorinated chemicals (supplied by Battelle, Columbus, OH): PFOS (97% purity), PFOA (99.2% purity), PFOSA (99.4% purity), and PFBS potassium salt (98.2% purity). 
DNA synthesis

We measured DNA synthesis in undifferentiated cells. Twenty-four hours after plating, we changed the medium to include the test agents. After 23 hr of exposure, we initiated the measurement of DNA synthesis by changing the medium to include 1 μCi/mL of [3H]thymidine (specific activity, 2 Ci/mmol; GE Healthcare, Piscataway, NJ) along with the continued inclusion of the test substances. After 1 hr, the medium was aspirated and cells were harvested, and DNA was precipitated and separated from other macromolecules by established procedures that produce quantitative recovery of DNA (Bell et al. 1986; Slotkin et al. 1984). The DNA fraction was counted for radiolabel and the DNA concentration was determined spectrophotometrically by absorbance at 260 nm. We corrected incorporation values to the amount of DNA present in each culture to provide an index of DNA synthesis per cell (Winick and Noble 1965), and we recorded the total DNA content.
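The per-cell index described above is simply the incorporated label divided by the DNA recovered from the same culture, then expressed relative to the vehicle control. The sketch below (hypothetical counts, not the study's data) shows the calculation.

# Minimal sketch of the per-DNA normalization of [3H]thymidine incorporation
# (hypothetical values; dpm are assumed to be quench-corrected counts).
def dna_synthesis_index(dpm_incorporated, ug_dna):
    """[3H]thymidine incorporation per unit DNA (dpm/μg DNA)."""
    return dpm_incorporated / ug_dna

control = dna_synthesis_index(dpm_incorporated=52_000, ug_dna=26.0)
treated = dna_synthesis_index(dpm_incorporated=38_000, ug_dna=25.0)

print(f"DNA synthesis, treated as % of control: {100 * treated / control:.0f}%")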
DNA and protein ratios

We determined DNA and protein ratios in differentiating cells after 6 days of continuous exposure to the test agents. Cells were harvested and washed, and the DNA and protein fractions were isolated and analyzed as described previously (Slotkin et al. 2007; Song et al. 1998), with DNA and total protein analyzed by dye binding (Trauth et al. 2000). To prepare the cell membrane fraction, the homogenates were sedimented at 40,000 × g for 10 min and the pellet was washed and resedimented. Aliquots of the final resuspension were then assayed for membrane protein (Smith et al. 1985).

Oxidative stress

We evaluated the degree of lipid peroxidation in undifferentiated cells after 24 hr of exposure to test agents, and in differentiating cells after 4 days of exposure. We measured the concentration of MDA by reaction with thiobarbituric acid using a modification (Qiao et al. 2005) of published procedures (Guan et al. 2003). To give the MDA concentration per cell, values were calculated relative to the amount of DNA.

Viability

To assess cell viability, the cell culture medium was changed to include trypan blue (1 volume per 2.5 vol of medium; Sigma) and cells were examined for staining under 400× magnification, counting an average of 100 cells per field in four different fields per culture. Assessments were made after 24 hr of exposure in undifferentiated cells and after 4 days for differentiating cells.

Enzyme activities

Differentiating cells were harvested after 6 days of exposure, as described above, and were disrupted by homogenization in a ground-glass homogenizer fitted with a ground-glass pestle and using a buffer consisting of 154 mM NaCl and 10 mM sodium-potassium phosphate (pH 7.4). Aliquots were withdrawn for measurement of DNA (Trauth et al. 2000).

ChAT assays were conducted following published techniques (Lau et al. 1988) using a substrate of 50 μM [14C]acetyl–coenzyme A (specific activity, 60 mCi/mmol; PerkinElmer Life Sciences, Boston, MA). Labeled ACh was counted in a liquid scintillation counter and activity calculated as nanomoles synthesized per hour per microgram DNA.

TH activity was measured using [14C]tyrosine as a substrate and trapping the evolved 14CO2 after coupled decarboxylation (Lau et al. 1988; Waymire et al. 1971). Each assay contained 0.5 μCi of generally labeled [14C]tyrosine (specific activity, 438 mCi/mmol; Sigma) as substrate, and activity was calculated on the same basis as for ChAT.
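The conversion behind "nanomoles synthesized per hour per microgram DNA" divides the product counts by the specific activity of the labeled substrate and then by incubation time and DNA. A sketch of that arithmetic is below; the counts, incubation time, and DNA amount are hypothetical, and counts are assumed to be already corrected to disintegrations per minute.

# Illustrative conversion from scintillation counts (dpm) to nmol product/hr/μg DNA.
# All numeric inputs below are hypothetical placeholders, not data from the study.
DPM_PER_MCI = 2.22e9   # disintegrations per minute in 1 mCi

def enzyme_activity(dpm, sa_mci_per_mmol, incubation_hr, ug_dna):
    """Activity in nmol product synthesized per hour per μg DNA."""
    dpm_per_nmol = sa_mci_per_mmol * DPM_PER_MCI / 1e6   # mmol -> nmol
    nmol_product = dpm / dpm_per_nmol
    return nmol_product / incubation_hr / ug_dna

# e.g., a ChAT assay with the 60 mCi/mmol [14C]acetyl-CoA substrate
chat = enzyme_activity(dpm=8_000, sa_mci_per_mmol=60, incubation_hr=0.5, ug_dna=5.0)
print(f"ChAT activity: {chat:.3f} nmol/hr/μg DNA")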
Data analysis

All studies were performed on 8–16 separate cultures for each measure and treatment, using 2–4 separate batches of cells. Results are presented as mean ± SE, with treatment comparisons carried out by analysis of variance (ANOVA) followed by Fisher’s protected least significant difference test for post hoc comparisons of individual treatments. In the initial test, we evaluated two ANOVA factors (treatment, cell batch) and found that the results did not vary among the different batches of cells, so results across the different batches were combined for presentation. Significance was assumed at p < 0.05.
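As a concrete illustration of this analysis scheme, the sketch below runs a one-way ANOVA and, only if the overall F test is significant, performs pairwise comparisons using the pooled within-group mean square, which is the logic of Fisher’s protected LSD. The group names and values are hypothetical, not the study's data.

# Illustrative one-way ANOVA followed by Fisher's protected LSD comparisons.
# The data dictionary is hypothetical; replace with per-culture measurements.
from itertools import combinations
import numpy as np
from scipy import stats

data = {
    "control":   [100, 96, 104, 99, 101, 97, 103, 100],
    "PFOS_50uM": [88, 92, 85, 90, 87, 91, 89, 86],
    "CPF_50uM":  [80, 78, 83, 79, 82, 77, 81, 80],
}

groups = [np.asarray(v, dtype=float) for v in data.values()]
f_stat, p_overall = stats.f_oneway(*groups)           # overall ANOVA F test

n_total = sum(len(g) for g in groups)
df_within = n_total - len(groups)
mse = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_within   # pooled within-group variance

if p_overall < 0.05:                                   # "protected": compare only after a significant F
    for (name_a, a), (name_b, b) in combinations(data.items(), 2):
        diff = np.mean(a) - np.mean(b)
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = diff / se
        p = 2 * stats.t.sf(abs(t), df_within)
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")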
Because of the limited water solubility of CPF and some of the perfluorinated chemicals, all test agents were dissolved in dimethyl sulfoxide (DMSO) to achieve a final concentration in the culture medium of 0.1%, which has no effect on replication or differentiation of PC12 cells (Qiao et al. 2001, 2003; Song et al. 1998); control cultures contained the same concentration of DMSO. For studies in differentiating cells, 24 hr after seeding, the medium was changed to include 50 ng/mL of 2.5 S murine nerve growth factor (Invitrogen) and DMSO with or without the test agents; these cells were examined for up to 6 days, with medium changes (including test agents) every 48 hr. We chose the CPF concentration to elicit a robust response for each of the effects to be compared to the actions of perfluorinated chemicals, and accordingly, we selected a concentration that elicits inhibition of DNA synthesis and interference with cell acquisition and oxidative stress, but that lies below the threshold for outright cytotoxicity or loss of viability (Bagchi et al. 1995; Crumpton et al. 2000b; Das and Barone 1999; Jameson et al. 2006b; Qiao et al. 2001, 2003, 2005; Slotkin et al. 2007; Song et al. 1998).", "We measured DNA synthesis in undifferentiated cells. Twenty-four hours after plating, we changed the medium to include the test agents. After 23 hr of exposure, we initiated the measurement of DNA synthesis by changing the medium to include 1 μCi/mL of [3H]thymidine (specific activity, 2 Ci/mmol; GE Healthcare, Piscataway, NJ) along with the continued inclusion of the test substances. After 1 hr, the medium was aspirated and cells were harvested, and DNA was precipitated and separated from other macromolecules by established procedures that produce quantitative recovery of DNA (Bell et al. 1986; Slotkin et al. 1984). The DNA fraction was counted for radiolabel and the DNA concentration was determined spectrophotometrically by absorbance at 260 nm. We corrected incorporation values to the amount of DNA present in each culture to provide an index of DNA synthesis per cell (Winick and Noble 1965), and we recorded the total DNA content.", "We determined DNA and protein ratios in differentiating cells after 6 days of continuous exposure to the test agents. Cells were harvested and washed, and the DNA and protein fractions were isolated and analyzed as described previously (Slotkin et al. 2007; Song et al. 1998), with DNA and total protein analyzed by dye binding (Trauth et al. 2000). To prepare the cell membrane fraction, the homogenates were sedimented at 40,000 × g for 10 min and the pellet was washed and resedimented. Aliquots of the final resuspension were then assayed for membrane protein (Smith et al. 1985).", "We evaluated the degree of lipid peroxidation in undifferentiated cells after 24 hr of exposure to test agents, and in differentiating cells after 4 days of exposure. We measured the concentration of MDA by reaction with thiobarbituric acid using a modification (Qiao et al. 2005) of published procedures (Guan et al. 2003). To give the MDA concentration per cell, values were calculated relative to the amount of DNA.", "To assess cell viability, the cell culture medium was changed to include trypan blue (1 volume per 2.5 vol of medium; Sigma) and cells were examined for staining under 400× magnification, counting an average of 100 cells per field in four different fields per culture. 
Assessments were made after 24 hr of exposure in undifferentiated cells and after 4 days for differentiating cells.", "Differentiating cells were harvested after 6 days of exposure, as described above, and were disrupted by homogenization in a ground-glass homogenizer fitted with a ground-glass pestle and using a buffer consisting of 154 mM NaCl and 10 mM sodium-potassium phosphate (pH 7.4). Aliquots were withdrawn for measurement of DNA (Trauth et al. 2000).\nChAT assays were conducted following published techniques (Lau et al. 1988) using a substrate of 50 μM [14C]acetyl–coenzyme A (specific activity, 60 mCi/mmol; PerkinElmer Life Sciences, Boston, MA). Labeled ACh was counted in a liquid scintillation counter and activity calculated as nanomoles synthesized per hour per microgram DNA.\nTH activity was measured using [14C]tyrosine as a substrate and trapping the evolved 14CO2 after coupled decarboxylation (Lau et al. 1988; Waymire et al. 1971). Each assay contained 0.5 μCi of generally labeled [14C]tyrosine (specific activity, 438 mCi/mmol; Sigma) as substrate, and activity was calculated on the same basis as for ChAT.", "All studies were performed on 8–16 separate cultures for each measure and treatment, using 2–4 separate batches of cells. Results are presented as mean ± SE, with treatment comparisons carried out by analysis of variance (ANOVA) followed by Fisher’s protected least significant difference test for post hoc comparisons of individual treatments. In the initial test, we evaluated two ANOVA factors (treatment, cell batch) and found that the results did not vary among the different batches of cells, so results across the different batches were combined for presentation. Significance was assumed at p < 0.05.", " PFOS In undifferentiated PC12 cells, PFOS elicited a small but statistically significant reduction in DNA synthesis within 24 hr of exposure (Figure 1A). The effect was smaller than that elicited by 50 μM CPF, even when the PFOS concentration was raised to 250 μM. None of the concentrations elicited a decrement in the number of cells as monitored by DNA content (Figure 1B). Nevertheless, PFOS evoked a greater degree of lipid peroxidation than did CPF (Figure 1C), with significant effects even at the lowest concentration (10 μM). The effects were insufficient to trigger a loss of viability as monitored by trypan blue exclusion (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOS failed to cause any alterations in indices of cell number (Figure 2A), size (Figure 2B), or the membrane outgrowth associated with neurite formation (Figure 2C), whereas the positive test compound, CPF, showed significant reductions in DNA content and increments in both of the protein ratios. Indices of lipid peroxidation and cell viability were conducted after 4 days of exposure, because these factors represent a forerunner of cell loss. In contrast to the effects in undifferentiated cells, PFOS evoked less oxidative stress than did CPF (Figure 3A). PFOS decreased cell viability only at the highest concentration (Figure 3B).\nWith the onset of differentiation, PC12 cells showed increased expression of both TH (Figure 4A) and ChAT (Figure 4B), with a much greater effect on the latter, so the TH/ChAT ratio fell by nearly an order of magnitude (Figure 4C). PFOS interfered with the differentiation into the DA phenotype, as evidenced by a decrement in TH that was significant at concentrations > 50 μM (Figure 4A). 
At the same time, it enhanced expression of the ACh phenotype, as shown by significant increases in ChAT (Figure 4B); the effect peaked at 50 μM PFOS and then declined, thus displaying an “inverted-U” concentration–effect relationship. The combination of reduced TH and augmented ChAT produced a robust net shift toward the ACh phenotype, as shown by a significant reduction in the TH/ChAT ratio, even at the lowest PFOS concentration (Figure 4C).\nIn undifferentiated PC12 cells, PFOS elicited a small but statistically significant reduction in DNA synthesis within 24 hr of exposure (Figure 1A). The effect was smaller than that elicited by 50 μM CPF, even when the PFOS concentration was raised to 250 μM. None of the concentrations elicited a decrement in the number of cells as monitored by DNA content (Figure 1B). Nevertheless, PFOS evoked a greater degree of lipid peroxidation than did CPF (Figure 1C), with significant effects even at the lowest concentration (10 μM). The effects were insufficient to trigger a loss of viability as monitored by trypan blue exclusion (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOS failed to cause any alterations in indices of cell number (Figure 2A), size (Figure 2B), or the membrane outgrowth associated with neurite formation (Figure 2C), whereas the positive test compound, CPF, showed significant reductions in DNA content and increments in both of the protein ratios. Indices of lipid peroxidation and cell viability were conducted after 4 days of exposure, because these factors represent a forerunner of cell loss. In contrast to the effects in undifferentiated cells, PFOS evoked less oxidative stress than did CPF (Figure 3A). PFOS decreased cell viability only at the highest concentration (Figure 3B).\nWith the onset of differentiation, PC12 cells showed increased expression of both TH (Figure 4A) and ChAT (Figure 4B), with a much greater effect on the latter, so the TH/ChAT ratio fell by nearly an order of magnitude (Figure 4C). PFOS interfered with the differentiation into the DA phenotype, as evidenced by a decrement in TH that was significant at concentrations > 50 μM (Figure 4A). At the same time, it enhanced expression of the ACh phenotype, as shown by significant increases in ChAT (Figure 4B); the effect peaked at 50 μM PFOS and then declined, thus displaying an “inverted-U” concentration–effect relationship. The combination of reduced TH and augmented ChAT produced a robust net shift toward the ACh phenotype, as shown by a significant reduction in the TH/ChAT ratio, even at the lowest PFOS concentration (Figure 4C).\n PFOA Unlike PFOS, 24 hr of exposure of undifferentiated cells to PFOA produced inhibition of DNA synthesis only at 250 μM, the highest concentration tested (Figure 1A). As before, there were no effects on DNA content (Figure 1B). PFOA also produced a significant overall increase in lipid peroxidation, but the effect achieved statistical significance at only one concentration (10 μM); unlike PFOS, the effect was smaller than for the positive test compound, CPF (Figure 1C). 
Cell viability was significantly reduced at the two highest concentrations (Figure 1D), but the effect was not statistically distinguishable from the nonsignificant increase seen with PFOS (no interaction of treatment × agent in a two-factor ANOVA).\nIn differentiating cells, PFOA also proved negative for effects on cell number, except at the highest concentration (Figure 2A), and had no discernible impact on the protein/DNA ratio (Figure 2B) or the membrane/total protein ratio (Figure 2C); however, significant effects were seen for all markers with CPF. The differentiating cells also showed some evidence of oxidative stress elicited by PFOA, albeit to a lesser extent than for CPF (Figure 3A), and there were no effects on cell viability as monitored by trypan blue exclusion (Figure 3B). Unlike PFOS, PFOA had only minor effects on differentiation of PC12 cells into the DA and ACh phenotypes. We observed a small decrement in TH activity that was significant at only two of the four concentrations tested (Figure 4A). There was no significant overall effect on ChAT (Figure 4B). The TH/ChAT ratio similarly showed only a small but statistically significant decrement at the lowest PFOA concentration (Figure 4C).\nUnlike PFOS, 24 hr of exposure of undifferentiated cells to PFOA produced inhibition of DNA synthesis only at 250 μM, the highest concentration tested (Figure 1A). As before, there were no effects on DNA content (Figure 1B). PFOA also produced a significant overall increase in lipid peroxidation, but the effect achieved statistical significance at only one concentration (10 μM); unlike PFOS, the effect was smaller than for the positive test compound, CPF (Figure 1C). Cell viability was significantly reduced at the two highest concentrations (Figure 1D), but the effect was not statistically distinguishable from the nonsignificant increase seen with PFOS (no interaction of treatment × agent in a two-factor ANOVA).\nIn differentiating cells, PFOA also proved negative for effects on cell number, except at the highest concentration (Figure 2A), and had no discernible impact on the protein/DNA ratio (Figure 2B) or the membrane/total protein ratio (Figure 2C); however, significant effects were seen for all markers with CPF. The differentiating cells also showed some evidence of oxidative stress elicited by PFOA, albeit to a lesser extent than for CPF (Figure 3A), and there were no effects on cell viability as monitored by trypan blue exclusion (Figure 3B). Unlike PFOS, PFOA had only minor effects on differentiation of PC12 cells into the DA and ACh phenotypes. We observed a small decrement in TH activity that was significant at only two of the four concentrations tested (Figure 4A). There was no significant overall effect on ChAT (Figure 4B). The TH/ChAT ratio similarly showed only a small but statistically significant decrement at the lowest PFOA concentration (Figure 4C).\n PFOSA The effects of PFOSA were substantially different from those of PFOS or PFOA. In undifferentiated cells, PFOSA produced significant inhibition of DNA synthesis at all concentrations tested (Figure 1A). The reduction was equivalent to that of CPF at equimolar concentrations and then showed progressively greater loss at higher concentrations, so that at 250 μM PFOSA, DNA synthesis was almost totally arrested. Even within the span of the 24-hr exposure, 250 μM PFOSA caused a 50% decrease in the number of cells, as monitored by DNA content (Figure 1B). 
Because this reduction occurred in less than the doubling time for undifferentiated PC12 cells (48–72 hr), it suggested that there was an adverse effect on existing cells rather than just inhibition of new cell formation. Indeed, we found a greater degree of oxidative stress for PFOSA than for CPF, even at one-fifth the concentration, and a massive increase in lipid peroxidation at the highest concentration (Figure 1C). The effects were accompanied by a major decrease in viability, indicated by a rise in trypan blue–stained cells (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOSA produced significant decrements in DNA content, with a near-total loss of cells at the highest concentration (Figure 2A); accordingly, protein ratios could not be evaluated at 250 μM. At 100 μM PFOSA, the remaining cells showed a significant increase in the protein/DNA ratio (Figure 2B), and there were small increments in the membrane/total protein ratio that achieved significance at 50 and 100 μM (Figure 2C). Because of the loss of cells at 6 days, we evaluated indices of cell damage at the 4-day point. Lipid peroxidation was readily demonstrable at PFOSA concentrations > 10 μM, with a massive increase at 250 μM (Figure 3A), at which point loss of viability was readily demonstrable (Figure 3B).\nAssessments of the impact of PFOSA on neurotransmitter phenotype were likewise truncated at 100 μM since few cells survived for 6 days at 250 μM. PFOSA had a promotional effect on TH at 50 or 100 μM, reaching three times control values at the higher concentration (Figure 4A). Differentiation into the ACh phenotype was also augmented by PFOSA (Figure 4B). However, there was a disparate concentration–effect relationship for the two phenotypes: at low concentrations, PFOSA shifted differentiation toward the ACh phenotype, as evidenced by a decrease in TH/ChAT, whereas at 100 μM, the effect on the DA phenotype predominated, producing a large increment in TH/ChAT (Figure 4C).\nThe effects of PFOSA were substantially different from those of PFOS or PFOA. In undifferentiated cells, PFOSA produced significant inhibition of DNA synthesis at all concentrations tested (Figure 1A). The reduction was equivalent to that of CPF at equimolar concentrations and then showed progressively greater loss at higher concentrations, so that at 250 μM PFOSA, DNA synthesis was almost totally arrested. Even within the span of the 24-hr exposure, 250 μM PFOSA caused a 50% decrease in the number of cells, as monitored by DNA content (Figure 1B). Because this reduction occurred in less than the doubling time for undifferentiated PC12 cells (48–72 hr), it suggested that there was an adverse effect on existing cells rather than just inhibition of new cell formation. Indeed, we found a greater degree of oxidative stress for PFOSA than for CPF, even at one-fifth the concentration, and a massive increase in lipid peroxidation at the highest concentration (Figure 1C). The effects were accompanied by a major decrease in viability, indicated by a rise in trypan blue–stained cells (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOSA produced significant decrements in DNA content, with a near-total loss of cells at the highest concentration (Figure 2A); accordingly, protein ratios could not be evaluated at 250 μM. 
At 100 μM PFOSA, the remaining cells showed a significant increase in the protein/DNA ratio (Figure 2B), and there were small increments in the membrane/total protein ratio that achieved significance at 50 and 100 μM (Figure 2C). Because of the loss of cells at 6 days, we evaluated indices of cell damage at the 4-day point. Lipid peroxidation was readily demonstrable at PFOSA concentrations > 10 μM, with a massive increase at 250 μM (Figure 3A), at which point loss of viability was readily demonstrable (Figure 3B).\nAssessments of the impact of PFOSA on neurotransmitter phenotype were likewise truncated at 100 μM since few cells survived for 6 days at 250 μM. PFOSA had a promotional effect on TH at 50 or 100 μM, reaching three times control values at the higher concentration (Figure 4A). Differentiation into the ACh phenotype was also augmented by PFOSA (Figure 4B). However, there was a disparate concentration–effect relationship for the two phenotypes: at low concentrations, PFOSA shifted differentiation toward the ACh phenotype, as evidenced by a decrease in TH/ChAT, whereas at 100 μM, the effect on the DA phenotype predominated, producing a large increment in TH/ChAT (Figure 4C).\n PFBS The effects of PFBS were somewhat similar to those of PFOA. In undifferentiated cells, there was little or no effect on DNA synthesis (Figure 1A), no shortfall in cell numbers (Figure 1B), and no significant lipid peroxidation (Figure 1C), although at high concentrations there was a small loss of viability (Figure 1D). Similarly, in differentiating cells, PFBS did not evoke a reduction in DNA content (Figure 2A), although it did produce significant cell enlargement as evidenced by an increase in the total protein/DNA ratio (Figure 2B). Like PFOA, PFBS did not change the membrane/total protein ratio (Figure 2C). PFBS evoked lipid peroxidation in differentiating cells, of about the same magnitude as that seen with PFOA but slightly less than that of the positive test compound, CPF (Figure 3A). Viability in differentiating cells was not compromised until the PFBS concentration was raised to 250 μM (Figure 3B). Notably, though, PFBS had a unique effect on differentiation into the two neurotransmitter phenotypes, displaying a concentration-dependent decrease in both the expression of TH (Figure 4A) and ChAT (Figure 4B), a pattern that was not seen with any other agent. Accordingly, the ratio of TH/ChAT was unchanged (Figure 4C) because both enzymes were reduced in parallel by PFBS.\nThe effects of PFBS were somewhat similar to those of PFOA. In undifferentiated cells, there was little or no effect on DNA synthesis (Figure 1A), no shortfall in cell numbers (Figure 1B), and no significant lipid peroxidation (Figure 1C), although at high concentrations there was a small loss of viability (Figure 1D). Similarly, in differentiating cells, PFBS did not evoke a reduction in DNA content (Figure 2A), although it did produce significant cell enlargement as evidenced by an increase in the total protein/DNA ratio (Figure 2B). Like PFOA, PFBS did not change the membrane/total protein ratio (Figure 2C). PFBS evoked lipid peroxidation in differentiating cells, of about the same magnitude as that seen with PFOA but slightly less than that of the positive test compound, CPF (Figure 3A). Viability in differentiating cells was not compromised until the PFBS concentration was raised to 250 μM (Figure 3B). 
Notably, though, PFBS had a unique effect on differentiation into the two neurotransmitter phenotypes, displaying a concentration-dependent decrease in both the expression of TH (Figure 4A) and ChAT (Figure 4B), a pattern that was not seen with any other agent. Accordingly, the ratio of TH/ChAT was unchanged (Figure 4C) because both enzymes were reduced in parallel by PFBS.", "In undifferentiated PC12 cells, PFOS elicited a small but statistically significant reduction in DNA synthesis within 24 hr of exposure (Figure 1A). The effect was smaller than that elicited by 50 μM CPF, even when the PFOS concentration was raised to 250 μM. None of the concentrations elicited a decrement in the number of cells as monitored by DNA content (Figure 1B). Nevertheless, PFOS evoked a greater degree of lipid peroxidation than did CPF (Figure 1C), with significant effects even at the lowest concentration (10 μM). The effects were insufficient to trigger a loss of viability as monitored by trypan blue exclusion (Figure 1D).\nIn differentiating cells, 6 days of exposure to PFOS failed to cause any alterations in indices of cell number (Figure 2A), size (Figure 2B), or the membrane outgrowth associated with neurite formation (Figure 2C), whereas the positive test compound, CPF, showed significant reductions in DNA content and increments in both of the protein ratios. Indices of lipid peroxidation and cell viability were conducted after 4 days of exposure, because these factors represent a forerunner of cell loss. In contrast to the effects in undifferentiated cells, PFOS evoked less oxidative stress than did CPF (Figure 3A). PFOS decreased cell viability only at the highest concentration (Figure 3B).\nWith the onset of differentiation, PC12 cells showed increased expression of both TH (Figure 4A) and ChAT (Figure 4B), with a much greater effect on the latter, so the TH/ChAT ratio fell by nearly an order of magnitude (Figure 4C). PFOS interfered with the differentiation into the DA phenotype, as evidenced by a decrement in TH that was significant at concentrations > 50 μM (Figure 4A). At the same time, it enhanced expression of the ACh phenotype, as shown by significant increases in ChAT (Figure 4B); the effect peaked at 50 μM PFOS and then declined, thus displaying an “inverted-U” concentration–effect relationship. The combination of reduced TH and augmented ChAT produced a robust net shift toward the ACh phenotype, as shown by a significant reduction in the TH/ChAT ratio, even at the lowest PFOS concentration (Figure 4C).", "Unlike PFOS, 24 hr of exposure of undifferentiated cells to PFOA produced inhibition of DNA synthesis only at 250 μM, the highest concentration tested (Figure 1A). As before, there were no effects on DNA content (Figure 1B). PFOA also produced a significant overall increase in lipid peroxidation, but the effect achieved statistical significance at only one concentration (10 μM); unlike PFOS, the effect was smaller than for the positive test compound, CPF (Figure 1C). 
Discussion: To our knowledge, this is the first demonstration that PFAAs do indeed have direct, developmental neurotoxicant actions and that they target specific events in neural cell differentiation. In general, the rank order of adverse effects was PFOSA > PFOS > PFBS ≈ PFOA. However, superimposed on this scheme, the various agents differed in their underlying mechanisms and specific outcomes, indicating that all PFAAs are not the same in their impact on neurodevelopment and that it is unlikely that there is one simple, shared mechanism by which they all produce their effects.

The greater toxicity of PFOSA can be partially attributed to its less hydrophilic nature. Because the other agents are free acids (PFOA) or sulfonic acids (PFOS, PFBS), PFOSA will more readily cross the cell membrane, achieving higher intracellular levels. By itself, this finding gives important information readily translatable to developmental neurotoxicity in vivo: PFAAs that are more hydrophobic or that form less polar metabolites are likely to be more problematic, especially since the same physicochemical properties govern passage across the placental and blood–brain barriers. Nevertheless, pharmacokinetic differences cannot account for the disparities in actions among the various PFAAs: PFOS elicited larger changes than PFBS, despite the fact that both are sulfonic acids, and all four agents had differing, even opposite, actions on neurotransmitter phenotypes. Certainly, one likely possibility for the greater toxicity of PFOSA is its ability to uncouple mitochondrial oxidative function, whereas PFOS and PFOA act simply as surfactants at the mitochondrial membrane (Starkov and Wallace 2002).
This feature may contribute to disparities in the targeting of specific events in cell differentiation that are affected by oxidative stress and other downstream events linked to mitochondrial dysfunction.

PFOSA was the only one of the agents tested that matched or exceeded the ability of CPF to inhibit DNA synthesis in undifferentiated cells. Likewise, PFOSA elicited the greatest degree of oxidative stress and cell loss, regardless of whether cells were undifferentiated or differentiating. Nevertheless, even with PFOSA we did not see significant loss of cell viability until the concentration was raised to 250 μM, implying that there are factors other than cytotoxicity that contribute to the net effects. In fact, our results point to strong promotion of the switch from cell replication to cell differentiation, which would also contribute to a reduction in DNA synthesis and in cell numbers. The protein ratios provide support for this interpretation. First, the protein/DNA ratio increased with PFOSA treatment at concentrations below the threshold for cytotoxicity, indicating an increase in cell size rather than the suppression of growth that would be expected from cytotoxic actions. Second, the membrane/total protein ratio also showed a rise with subtoxic PFOSA treatments, indicating augmented membrane complexity, which is commensurate with generation of neurites and cellular organelles that accompanies differentiation. The final proof of a prodifferentiation effect can be seen from the strong promotion of both neurotransmitter phenotypes, evidenced by marked increases in both TH and ChAT. The differentiation pattern triggered by PFOSA is not, however, a normal one: At low concentrations, the TH/ChAT ratio was slightly, but significantly decreased, whereas at high concentrations, it rose markedly. This means that PFOSA alters the differentiation fate of the cells, switching them weakly to the ACh phenotype at low concentrations, and strongly to the DA phenotype at high concentrations. If similar effects happen in vivo, it might be expected that neurons will differentiate into inappropriate phenotypes; this would lead to miswiring of neural circuits, with presynaptic projections for a given neurotransmitter juxtaposed to post-synaptic elements containing incorrect receptors, resulting in nonfunctional synapses. Given the biphasic dose–response curve, the outcomes would be very different for an individual exposed to low doses that promote the ACh phenotype as opposed to an individual receiving higher exposures that promote the DA phenotype. Such switching has already been seen with other agents that produce phenotype shifts in the PC12 model, notably CPF (Barone et al. 2000; Dam et al. 1999; Jameson et al. 2006b; Pope 1999; Rice and Barone 2000; Slotkin 2004a; Vidair 2004). The biphasic nature of the concentration–effect relationship suggests that there are at least two separate mechanisms operating to turn on expression of neurotransmitter phenotypes. Oxidative stress, which was prominent for PFOSA, is likely to be one contributory factor, since oxidative stress itself is a prodifferentiation signal (Katoh et al. 1997).
However, it is also possible that PFOSA switches the phenotypes by turning specific DA- or ACh-related genes on or off inappropriately, as has been shown for the organophosphates (Slotkin and Seidler 2007).

Although PFOS also inhibited DNA synthesis in undifferentiated PC12 cells, its effects were less than those seen with the positive test compound (CPF) and far smaller than those seen with PFOSA. There was no parallel reduction in DNA content, which is not surprising given the small effect on DNA synthesis and the fact that a 24-hr exposure is less than a doubling time for PC12 cell replication. Notably, we did not find any reductions in cell numbers even with a 6-day exposure of differentiating cells, showing that the effects of PFOS are indeed fundamentally different from those of PFOSA. This was further confirmed by a lack of any evidence for a global prodifferentiation effect because there were no changes in the protein ratios and only a small degree of oxidative stress. Nonetheless, PFOS altered the phenotypic fate of the cells, promoting the ACh phenotype at the expense of the DA phenotype; the effect was again biphasic, as the increase in ChAT regressed back to normal as the PFOS concentration was raised above the point where lipid peroxidation and cytotoxicity emerged.

There are two important points made by these findings: a) The impact of PFOS on neurotransmitter phenotype is radically different from that of PFOSA; and b) some of the effects on differentiation are distinct from those related to oxidative stress and cytotoxicity, and occupy a part of the concentration–effect curve below the thresholds for those common adverse events.

Of the four agents, PFOA and PFBS had the least effect on most of the parameters connoting cell acquisition and growth. Neither agent produced any substantial inhibition of DNA synthesis or cell loss in undifferentiated cells, and a small decrease in DNA in differentiating cells was found only at the highest concentration, and only for PFOA. Although both agents produced small but significant decreases in viability at concentrations of 100 or 250 μM, the effect was obviously insufficient to have any impact on the total number of cells remaining in the culture. Similarly, PFOA had no effect on protein ratios, and PFBS had only a small effect on total protein/DNA without any impact on membrane/total protein. For phenotypic outcomes, the two agents were substantially different from PFOSA and PFOS, both in the type and magnitude of effects. PFOA caused a slight reduction in TH and, consequently, a minor shift favoring the ACh phenotype (reduced TH/ChAT ratio). In contrast, PFBS retarded differentiation into both phenotypes, an effect not seen with any other agent; accordingly, although the TH/ChAT ratio was unaffected by PFBS, both TH and ChAT were significantly reduced, indicating a likelihood of impaired function for both neurotransmitters. Once again, this demonstrates that the effect on phenotypic fate of the cells is distinct from any other effects on cell replication, growth, or viability.

Our results thus point to the potential for PFAAs to evoke developmental neurotoxicity through direct actions on replicating and differentiating neurons, effects distinct from the indirect consequences of endocrine disruption or metabolic or other secondary mechanisms.
Importantly, the various PFAAs differ not only in their propensity to elicit outright neurotoxicity or oxidative stress but also in their ability to augment or suppress specific neurotransmitter phenotypes, thus shifting the differentiation fate of the neuron. These effects can even be in opposite directions for the various PFAAs, and because the concentration–effect curve is biphasic, different levels of exposure can be expected to produce disparate outcomes directed toward a given neurotransmitter phenotype. The fact that there are stark differences in developmental neurotoxicant actions among otherwise similar PFAAs means that it is important to take developmental neurotoxicity into account in the design of future members of this class. As demonstrated here, this can be aided by the use of in vitro models that permit rapid and cost-effective screening for developmental neurotoxicity.
[ "materials|methods", null, null, null, null, null, null, "methods", "results", null, null, null, null, "discussion" ]
[ "developmental neurotoxicity", "PC12 cells", "perfluorinated chemicals", "perfluoroalkyl acids", "perfluorobutane sulfonate", "perfluorooctane sulfonamide", "perfluorooctane sulfonate", "perfluorooctanoate", "perfluorooctanoic acid" ]
Materials and Methods: All of the techniques used in this study have been reported previously; thus, only brief descriptions are given here.

Cell cultures: Because of the clonal instability of the PC12 cell line (Fujita et al. 1989), the experiments were performed on cells that had undergone fewer than five passages. PC12 cells (American Type Culture Collection, 1721-CRL; obtained from the Duke Comprehensive Cancer Center, Durham, NC) were grown under standard conditions (Crumpton et al. 2000a; Qiao et al. 2003; Song et al. 1998) in RPMI-1640 medium (Invitrogen, Carlsbad, CA) supplemented with 10% inactivated horse serum (Sigma Chemical Co., St. Louis, MO), 5% fetal bovine serum (Sigma), and 50 μg/mL penicillin streptomycin (Invitrogen); cells were incubated with 7.5% CO2 at 37°C. For studies in the undifferentiated state, the medium was changed 24 hr after seeding to include 50 μM CPF (98.8% purity; Chem Service, West Chester, PA), or varying concentrations of each of the four perfluorinated chemicals (supplied by Battelle, Columbus, OH): PFOS (97% purity), PFOA (99.2% purity), PFOSA (99.4% purity), and PFBS potassium salt (98.2% purity). Because of the limited water solubility of CPF and some of the perfluorinated chemicals, all test agents were dissolved in dimethyl sulfoxide (DMSO) to achieve a final concentration in the culture medium of 0.1%, which has no effect on replication or differentiation of PC12 cells (Qiao et al. 2001, 2003; Song et al. 1998); control cultures contained the same concentration of DMSO. For studies in differentiating cells, 24 hr after seeding, the medium was changed to include 50 ng/mL of 2.5 S murine nerve growth factor (Invitrogen) and DMSO with or without the test agents; these cells were examined for up to 6 days, with medium changes (including test agents) every 48 hr. We chose the CPF concentration to elicit a robust response for each of the effects to be compared to the actions of perfluorinated chemicals, and accordingly, we selected a concentration that elicits inhibition of DNA synthesis and interference with cell acquisition and oxidative stress, but that lies below the threshold for outright cytotoxicity or loss of viability (Bagchi et al. 1995; Crumpton et al. 2000b; Das and Barone 1999; Jameson et al. 2006b; Qiao et al. 2001, 2003, 2005; Slotkin et al. 2007; Song et al. 1998).
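For readers who wish to tabulate the exposure design, the following minimal Python sketch encodes the treatment groups described above. The PFAA concentration series (10, 50, 100, and 250 μM) is the one reported in the Results; the function and variable names are illustrative only and are not part of the original protocol.

# Illustrative encoding of the exposure design described above. The PFAA
# concentration series follows the four levels reported in the Results;
# the 50 μM CPF positive control and 0.1% DMSO vehicle are as stated in the text.

from dataclasses import dataclass

@dataclass(frozen=True)
class Treatment:
    agent: str       # test agent name ("control" denotes vehicle only)
    conc_um: float   # concentration in μM (0.0 for the vehicle control)

DMSO_VEHICLE_PCT = 0.1            # % DMSO in all cultures, including controls
NGF_NG_PER_ML = 50                # nerve growth factor for differentiating cells
MEDIUM_CHANGE_INTERVAL_HR = 48    # medium (and test agent) renewal interval

AGENTS = ("PFOS", "PFOA", "PFOSA", "PFBS")
PFAA_CONCENTRATIONS_UM = (10.0, 50.0, 100.0, 250.0)

def build_treatments() -> list:
    """Return the vehicle control, CPF positive control, and PFAA groups."""
    groups = [Treatment("control", 0.0), Treatment("CPF", 50.0)]
    groups += [Treatment(a, c) for a in AGENTS for c in PFAA_CONCENTRATIONS_UM]
    return groups

if __name__ == "__main__":
    for t in build_treatments():
        print(t)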
DNA synthesis: We measured DNA synthesis in undifferentiated cells. Twenty-four hours after plating, we changed the medium to include the test agents. After 23 hr of exposure, we initiated the measurement of DNA synthesis by changing the medium to include 1 μCi/mL of [3H]thymidine (specific activity, 2 Ci/mmol; GE Healthcare, Piscataway, NJ) along with the continued inclusion of the test substances. After 1 hr, the medium was aspirated and cells were harvested, and DNA was precipitated and separated from other macromolecules by established procedures that produce quantitative recovery of DNA (Bell et al. 1986; Slotkin et al. 1984). The DNA fraction was counted for radiolabel and the DNA concentration was determined spectrophotometrically by absorbance at 260 nm. We corrected incorporation values to the amount of DNA present in each culture to provide an index of DNA synthesis per cell (Winick and Noble 1965), and we recorded the total DNA content.
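The normalization described above, radiolabel incorporation expressed per unit DNA, is a simple ratio. The sketch below (Python, with made-up numbers) is included only to make the calculation explicit and is not part of the original protocol.

# Minimal sketch of the per-cell DNA-synthesis index: radiolabel recovered in
# the DNA fraction divided by the DNA content of the same culture.
# Input values are hypothetical placeholders.

def dna_synthesis_index(incorporation_dpm: float, dna_ug: float) -> float:
    """Return [3H]thymidine incorporation per μg DNA (DPM/μg)."""
    if dna_ug <= 0:
        raise ValueError("DNA content must be positive")
    return incorporation_dpm / dna_ug

# Example: 12,000 DPM in the DNA fraction of a culture containing 25 μg DNA
# gives 480 DPM per μg DNA.
print(f"{dna_synthesis_index(12_000, 25.0):.0f} DPM per μg DNA")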
DNA and protein ratios: We determined DNA and protein ratios in differentiating cells after 6 days of continuous exposure to the test agents. Cells were harvested and washed, and the DNA and protein fractions were isolated and analyzed as described previously (Slotkin et al. 2007; Song et al. 1998), with DNA and total protein analyzed by dye binding (Trauth et al. 2000). To prepare the cell membrane fraction, the homogenates were sedimented at 40,000 × g for 10 min and the pellet was washed and resedimented. Aliquots of the final resuspension were then assayed for membrane protein (Smith et al. 1985).

Oxidative stress: We evaluated the degree of lipid peroxidation in undifferentiated cells after 24 hr of exposure to test agents, and in differentiating cells after 4 days of exposure. We measured the concentration of MDA by reaction with thiobarbituric acid using a modification (Qiao et al. 2005) of published procedures (Guan et al. 2003). To give the MDA concentration per cell, values were calculated relative to the amount of DNA.

Viability: To assess cell viability, the cell culture medium was changed to include trypan blue (1 volume per 2.5 vol of medium; Sigma) and cells were examined for staining under 400× magnification, counting an average of 100 cells per field in four different fields per culture. Assessments were made after 24 hr of exposure in undifferentiated cells and after 4 days for differentiating cells.
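The growth, oxidative stress, and viability indices above are all simple normalizations. The following Python sketch, with hypothetical inputs, shows how they could be computed from the raw measurements; it is provided only as an illustration.

# Sketch of the indices described above (all inputs hypothetical): total
# protein/DNA as an index of cell size, membrane/total protein as an index of
# membrane complexity, MDA/DNA for lipid peroxidation per cell, and the
# percentage of trypan blue-stained cells for viability.

def protein_per_dna(total_protein_ug: float, dna_ug: float) -> float:
    return total_protein_ug / dna_ug

def membrane_per_total_protein(membrane_protein_ug: float, total_protein_ug: float) -> float:
    return membrane_protein_ug / total_protein_ug

def mda_per_dna(mda_nmol: float, dna_ug: float) -> float:
    return mda_nmol / dna_ug

def percent_trypan_blue(stained_counts, total_counts) -> float:
    """Pooled percentage of stained cells across the counted fields."""
    return 100.0 * sum(stained_counts) / sum(total_counts)

if __name__ == "__main__":
    print(protein_per_dna(120.0, 20.0))                            # 6.0 μg protein per μg DNA
    print(membrane_per_total_protein(30.0, 120.0))                 # 0.25
    print(mda_per_dna(1.5, 20.0))                                  # 0.075 nmol MDA per μg DNA
    print(percent_trypan_blue([3, 5, 2, 4], [100, 98, 102, 100]))  # 3.5%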
Enzyme activities: Differentiating cells were harvested after 6 days of exposure, as described above, and were disrupted by homogenization in a ground-glass homogenizer fitted with a ground-glass pestle and using a buffer consisting of 154 mM NaCl and 10 mM sodium-potassium phosphate (pH 7.4). Aliquots were withdrawn for measurement of DNA (Trauth et al. 2000). ChAT assays were conducted following published techniques (Lau et al. 1988) using a substrate of 50 μM [14C]acetyl–coenzyme A (specific activity, 60 mCi/mmol; PerkinElmer Life Sciences, Boston, MA). Labeled ACh was counted in a liquid scintillation counter and activity calculated as nanomoles synthesized per hour per microgram DNA. TH activity was measured using [14C]tyrosine as a substrate and trapping the evolved 14CO2 after coupled decarboxylation (Lau et al. 1988; Waymire et al. 1971). Each assay contained 0.5 μCi of generally labeled [14C]tyrosine (specific activity, 438 mCi/mmol; Sigma) as substrate, and activity was calculated on the same basis as for ChAT.
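Expressing the scintillation counts as nanomoles of product per hour per microgram DNA involves dividing by the substrate specific activity, the incubation time, and the DNA content. The Python sketch below illustrates that conversion under stated assumptions: the incubation time and example counts are hypothetical, and 2.22 × 10^9 DPM per mCi is the standard physical conversion factor.

# Hedged sketch of converting assay radioactivity into enzyme activity in
# nmol product per hour per μg DNA, using the stated substrate specific
# activities. Incubation time and example inputs are hypothetical.

DPM_PER_MCI = 2.22e9  # disintegrations per minute in 1 mCi (standard constant)

def activity_nmol_per_hr_per_ug_dna(product_dpm: float,
                                    specific_activity_mci_per_mmol: float,
                                    incubation_hr: float,
                                    dna_ug: float) -> float:
    dpm_per_nmol = specific_activity_mci_per_mmol * DPM_PER_MCI / 1e6  # 1 mmol = 1e6 nmol
    nmol_product = product_dpm / dpm_per_nmol
    return nmol_product / incubation_hr / dna_ug

# Example (made-up counts) for a ChAT assay with the 60 mCi/mmol
# [14C]acetyl-coenzyme A substrate, a 30-min incubation, and 10 μg DNA.
print(activity_nmol_per_hr_per_ug_dna(product_dpm=40_000,
                                      specific_activity_mci_per_mmol=60,
                                      incubation_hr=0.5,
                                      dna_ug=10))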
Data analysis: All studies were performed on 8–16 separate cultures for each measure and treatment, using 2–4 separate batches of cells. Results are presented as mean ± SE, with treatment comparisons carried out by analysis of variance (ANOVA) followed by Fisher’s protected least significant difference test for post hoc comparisons of individual treatments. In the initial test, we evaluated two ANOVA factors (treatment, cell batch) and found that the results did not vary among the different batches of cells, so results across the different batches were combined for presentation. Significance was assumed at p < 0.05.
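As an illustration of this statistical workflow, the Python sketch below runs a one-way ANOVA across treatment groups and, only when the global test is significant, performs pairwise comparisons using the pooled ANOVA error term (Fisher's protected least significant difference). It is a minimal re-implementation for illustration, not the original analysis code, and the data values are hypothetical.

# Minimal sketch: one-way ANOVA followed by Fisher's protected LSD, i.e.,
# pairwise comparisons with the pooled within-group error term, run only
# when the global ANOVA is significant. Example data are hypothetical.

from itertools import combinations
import numpy as np
from scipy import stats

def protected_lsd(groups: dict, alpha: float = 0.05):
    names = list(groups)
    arrays = [np.asarray(groups[n], dtype=float) for n in names]
    f_stat, p_anova = stats.f_oneway(*arrays)
    results = {"F": f_stat, "p_anova": p_anova, "pairwise": {}}
    if p_anova >= alpha:
        return results  # global ANOVA not significant: no post hoc tests
    # Pooled within-group mean square (the ANOVA error term) and its df
    df_within = sum(len(a) - 1 for a in arrays)
    mse = sum(((a - a.mean()) ** 2).sum() for a in arrays) / df_within
    for (i, a), (j, b) in combinations(enumerate(arrays), 2):
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = (a.mean() - b.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_within)
        results["pairwise"][(names[i], names[j])] = p
    return results

if __name__ == "__main__":
    data = {  # hypothetical per-culture values, 8 cultures per group
        "control": [1.00, 0.95, 1.05, 0.98, 1.02, 0.99, 1.01, 0.97],
        "CPF":     [0.80, 0.78, 0.85, 0.82, 0.79, 0.81, 0.84, 0.77],
        "PFOS":    [0.90, 0.92, 0.88, 0.91, 0.89, 0.93, 0.87, 0.90],
    }
    print(protected_lsd(data))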
Background: The widespread detection of perfluoroalkyl acids and their derivatives in wildlife and humans, and their entry into the immature brain, raise increasing concern about whether these agents might be developmental neurotoxicants. Methods: We assessed inhibition of DNA synthesis, deficits in cell numbers and growth, oxidative stress, reduced cell viability, and shifts in differentiation toward or away from the dopamine (DA) and acetylcholine (ACh) neurotransmitter phenotypes. Results: In general, the rank order of adverse effects was PFOSA > PFOS > PFBS ≈ PFOA. However, superimposed on this scheme, the various agents differed in their underlying mechanisms and specific outcomes. Notably, PFOS promoted differentiation into the ACh phenotype at the expense of the DA phenotype, PFBS suppressed differentiation of both phenotypes, PFOSA enhanced differentiation of both, and PFOA had little or no effect on phenotypic specification. Conclusions: These findings indicate that all perfluorinated chemicals are not the same in their impact on neurodevelopment and that it is unlikely that there is one simple, shared mechanism by which they all produce their effects. Our results reinforce the potential for in vitro models to aid in the rapid and cost-effective screening for comparative effects among different chemicals in the same class and in relation to known developmental neurotoxicants.
null
null
9,897
243
[ 468, 179, 115, 79, 70, 202, 425, 327, 495, 259 ]
14
[ "figure", "cells", "dna", "concentration", "cell", "μm", "effect", "pfosa", "pfos", "significant" ]
[ "cell culture medium", "cell cultures clonal", "pc12 cells effects", "05 cell cultures", "undifferentiated pc12 cells" ]
null
null
null
null
[CONTENT] developmental neurotoxicity | PC12 cells | perfluorinated chemicals | perfluoroalkyl acids | perfluorobutane sulfonate | perfluorooctane sulfonamide | perfluorooctane sulfonate | perfluorooctanoate | perfluorooctanoic acid [SUMMARY]
[CONTENT] developmental neurotoxicity | PC12 cells | perfluorinated chemicals | perfluoroalkyl acids | perfluorobutane sulfonate | perfluorooctane sulfonamide | perfluorooctane sulfonate | perfluorooctanoate | perfluorooctanoic acid [SUMMARY]
null
[CONTENT] developmental neurotoxicity | PC12 cells | perfluorinated chemicals | perfluoroalkyl acids | perfluorobutane sulfonate | perfluorooctane sulfonamide | perfluorooctane sulfonate | perfluorooctanoate | perfluorooctanoic acid [SUMMARY]
null
null
[CONTENT] Alkanesulfonic Acids | Animals | Caprylates | Cell Differentiation | Cell Proliferation | Cell Survival | Fluorocarbons | Neurons | Oxidative Stress | PC12 Cells | Rats | Sulfonamides [SUMMARY]
[CONTENT] Alkanesulfonic Acids | Animals | Caprylates | Cell Differentiation | Cell Proliferation | Cell Survival | Fluorocarbons | Neurons | Oxidative Stress | PC12 Cells | Rats | Sulfonamides [SUMMARY]
null
[CONTENT] Alkanesulfonic Acids | Animals | Caprylates | Cell Differentiation | Cell Proliferation | Cell Survival | Fluorocarbons | Neurons | Oxidative Stress | PC12 Cells | Rats | Sulfonamides [SUMMARY]
null
null
[CONTENT] cell culture medium | cell cultures clonal | pc12 cells effects | 05 cell cultures | undifferentiated pc12 cells [SUMMARY]
[CONTENT] cell culture medium | cell cultures clonal | pc12 cells effects | 05 cell cultures | undifferentiated pc12 cells [SUMMARY]
null
[CONTENT] cell culture medium | cell cultures clonal | pc12 cells effects | 05 cell cultures | undifferentiated pc12 cells [SUMMARY]
null
null
[CONTENT] figure | cells | dna | concentration | cell | μm | effect | pfosa | pfos | significant [SUMMARY]
[CONTENT] figure | cells | dna | concentration | cell | μm | effect | pfosa | pfos | significant [SUMMARY]
null
[CONTENT] figure | cells | dna | concentration | cell | μm | effect | pfosa | pfos | significant [SUMMARY]
null
null
[CONTENT] batches | results | treatment | cells results | batches cells results | different batches | comparisons | batches cells | separate | anova [SUMMARY]
[CONTENT] figure | μm | significant | pfosa | pfos | cells | effect | concentration | th | ratio [SUMMARY]
null
[CONTENT] figure | cells | dna | concentration | pfosa | cell | medium | μm | pfos | effect [SUMMARY]
null
null
[CONTENT] [SUMMARY]
[CONTENT] ||| PFOS ||| PFBS | PFOA ||| ||| PFOS | PFBS | PFOA [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| PFOS ||| PFBS | PFOA ||| ||| PFOS | PFBS | PFOA ||| one ||| [SUMMARY]
null
Survey of the Impact of COVID-19 on Pediatric Orthopaedic Surgeons Globally.
34171889
The coronavirus disease 2019 (COVID-19) pandemic required rapid, global health care shifts to prioritize urgent or pandemic-related care and minimize transmission. Little is known about impacts on pediatric orthopaedic surgeons during this time. We aimed to investigate COVID-19-related changes in practice, training, and research among pediatric orthopaedic surgeons globally.
BACKGROUND
An online survey was administered to orthopaedic surgeons with interest in pediatrics in April 2020 and a follow-up was administered in February 2021. The surveys captured demographics and surgeons' self-reported experiences during the pandemic. Participants were recruited from web media and available email lists of orthopaedic societies over a 2-month period. Descriptive statistics were used to analyze results, stratified by the severity of local COVID-19-related measures.
METHODS
A total of 460 responses from 45 countries were collected for initial survey. Of these, 358 (78.5%) respondents reported lockdown measures in their region at time of survey. Most (n=337, 94.4%) reported pausing all elective procedures. Surgeons reported a reduction in the average number of surgeries per week, from 6.89 (SD=4.61) prepandemic to 1.25 (SD=2.26) at time of survey (mean difference=5.64; 95% confidence interval=5.19, 6.10). Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=56.10, 95% confidence interval=5.61, 60.58). In total, 177 (39.4%) surgeons reported using virtual modes of outpatient appointments for the first time. Of 290 surgeons with trainees, 223 (84.5%) reported implementation of systems to continue training such as webinars or virtual rounds. Of 192 respondents with research, 149 (82.8%) reported continuing research activities during the pandemic with most reporting either cessation (n=75, 64.15%), or reduction (n=25, 29.9%) in participant recruitment. A total of 111 responses from 28 countries were collected during follow-up. Surgeons described policy and circumstantial changes that facilitated resumption of clinical work.
RESULTS
The COVID-19 pandemic and its related countermeasures have had significant impacts on pediatric orthopaedic practice and have increased the uptake of technology to provide continuity of care. Rigorous epidemiological studies are needed to assess the impacts of delayed and virtual care on patient outcomes.
CONCLUSIONS
[ "COVID-19", "Child", "Communicable Disease Control", "Cross-Sectional Studies", "Humans", "Orthopedic Surgeons", "Orthopedics", "Pandemics", "Pediatrics", "SARS-CoV-2", "Surgeons", "Surveys and Questionnaires" ]
8357035
null
null
METHODS
An international online survey of orthopaedic surgeons was conducted from April to May 2020 and repeated as a follow-up survey in February to March 2021. Upon ethics approval at our institution, participants were recruited through web media and email lists of orthopaedic societies across the globe (Appendix A, Supplemental Digital Content 1, http://links.lww.com/BPO/A361). The initial survey link and invitation to participate were shared with organizations via email, with a request for dissemination throughout their networks. Participants were contacted directly for follow-up using contact information provided at the initial survey; the follow-up survey was also opened to additional respondents through orthopaedic societies. Participants were pediatric orthopaedic surgeons, or orthopaedic surgeons with an interest in pediatrics, who were practicing in the field during or immediately before the COVID-19 pandemic in March 2020. Participants were excluded if they could not read/write in English or were not practicing immediately before the pandemic. The survey was developed by the study team and included closed-ended and open-ended questions regarding demographics, patient care, hospital guidelines, and impacts on training, research, and wellness during the COVID-19 pandemic. Participants were also asked about their use of virtual technologies. Additional questions were added to the follow-up survey to capture detail on the local state of the pandemic and other long-term changes. Data were collected using REDCap electronic data capture tools [13,14] hosted at our center's Research Institute.

Continuous variables were summarized by means and SDs, or medians and interquartile ranges, depending on the observed distribution. Categorical variables were summarized as frequencies and percentages. Where appropriate, groups were compared with 2-sample t tests (or Mann-Whitney tests for skewed data) for continuous variables and χ2 tests for categorical variables, with mean differences or risk ratios and corresponding 95% confidence intervals, respectively. Analyses were conducted in Excel and R statistical software [15], version 3.6.3. Responses were included if participants submitted the final survey page. Because of the survey structure, not all questions were answered by each participant (ie, additional probing questions appeared based on responses to previous questions); as such, data are presented as a percentage of total responses per question. See Appendix B for an overview of "yes" responses across multiple survey variables (Supplemental Digital Content 2, http://links.lww.com/BPO/A362).
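The comparison strategy described above (means and SDs compared with t tests or Mann-Whitney U, frequencies compared with chi-square) can be illustrated with a short, self-contained sketch. This is not the study's analysis code, which was run in Excel and R; the data below are synthetic, and the fixed z value of 1.96 for the confidence interval is a simplifying assumption.

```python
# Illustrative sketch only: mirrors the kinds of comparisons described above,
# using synthetic data rather than the study's survey responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical continuous outcome: weekly surgeries pre-pandemic vs. at survey time.
pre = rng.normal(6.9, 4.6, size=300)
now = rng.normal(1.3, 2.3, size=300)
print(f"pre: mean={pre.mean():.2f} (SD={pre.std(ddof=1):.2f}); "
      f"now: mean={now.mean():.2f} (SD={now.std(ddof=1):.2f})")

t_stat, p_t = stats.ttest_ind(pre, now)      # 2-sample t test
u_stat, p_u = stats.mannwhitneyu(pre, now)   # fallback for skewed data
print(f"t test p={p_t:.3g}; Mann-Whitney p={p_u:.3g}")

# Mean difference with a normal-approximation 95% confidence interval.
diff = pre.mean() - now.mean()
se = np.sqrt(pre.var(ddof=1) / pre.size + now.var(ddof=1) / now.size)
print(f"mean difference={diff:.2f}, 95% CI=({diff - 1.96*se:.2f}, {diff + 1.96*se:.2f})")

# Hypothetical categorical outcome: a 2x2 table compared with a chi-square test.
table = np.array([[120, 30],   # group A: yes / no
                  [ 80, 70]])  # group B: yes / no
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square={chi2:.1f} (df={dof}), p={p_chi:.3g}")
```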
RESULTS
Initial Survey, April to May 2020

Demographics: A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 (Participant Demographics and Setting of Their Orthopaedic Practice) describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay-home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.

Clinical Practice: Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care. Cancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown [relative risk=1.30 (95% CI=1.15, 1.49), P<0.001]. To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts. Respondents also reported a reduction in outpatient appointments during the pandemic: the average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).

Employer Guidelines: At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopaedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership. Most (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.

Technology: During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these, 23 (43.4%) reported virtual appointments were unavailable in their region, and surgeons in India represented 15/23 (65.2%) of that group. Where applicable, participants reported conducting a mean of 11.95 virtual appointments per week during the pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointment; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.

Wellness: In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to the pandemic. Some reported reduced full-time equivalents for mid-level provider/clinical staff (197/443, 44.5%) and/or their administrative and research staff (n=185, 41.8%) due to the pandemic. Nearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force, and implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree); responses varied by country (Table 2, Average Ratings of Surgeons' Own Health Systems' Response to the Coronavirus Disease 2019 Pandemic).

Teaching/Training: Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.

Research: Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at the hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.1%) or reduction (n=40, 34.2%) of recruitment.
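Before turning to the follow-up survey, note that the reported relative risk for cancellation of elective procedures can be recovered directly from the counts above (337/358 under total lockdown vs. 69/96 otherwise). The snippet below is a worked arithmetic check, not part of the study's analysis, and it assumes the standard log-scale normal approximation for the 95% CI.

```python
# Worked check of the reported relative risk for stopped elective procedures:
# 337/358 respondents under total lockdown vs. 69/96 not under total lockdown.
import math

a, n1 = 337, 358   # stopped electives / respondents, total-lockdown group
b, n2 = 69, 96     # stopped electives / respondents, other restrictions

rr = (a / n1) / (b / n2)                         # risk ratio
se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)      # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)

print(f"relative risk={rr:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")  # ~1.30 (1.15, 1.49)
```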
Follow-up Survey, February to March 2021

A total of 111 orthopaedic surgeons across 28 countries completed the follow-up survey. Key questions from each section above were analyzed. See Appendix C for additional variables from the follow-up survey (Supplemental Digital Content 3, http://links.lww.com/BPO/A363).

Clinical Practice: Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.

Employer Guidelines: Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptations required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced numbers of surgeries (n=40), and/or other changes (n=23) such as COVID-19 tests for patients/visitors.

Technology: A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). As in the initial survey, most (77/110, 70.0%) indicated they will continue using telehealth to provide care.

Wellness: Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from the initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.

Teaching/Training: Most (72/98, 73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.

Research: A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to the initial survey.
null
null
[ "Initial Survey April to May 2020", "Demographics", "Clinical Practice", "Employer Guidelines", "Technology", "Wellness", "Teaching/Training", "Research", "Follow-up Survey February to March 2021", "Clinical Practice", "Employer Guidelines", "Technology", "Wellness", "Teaching/Training", "Research", "Limitations", "Lessons Learned" ]
[ "Demographics A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nA total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nClinical Practice Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nSurgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. 
A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nEmployer Guidelines At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nAt time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nTechnology During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). 
Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nDuring the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nWellness In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nIn total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. 
Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nTeaching/Training Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nParticipants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nResearch Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.\nRespondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.", "A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. 
Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice", "Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).", "At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.", "During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. 
Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.", "In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree", "Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.", "Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.", "A total of 111 orthopaedic surgeons across 28 countries completed the follow-up survey. Key questions from each section above were analyzed. See Appendix C for additional variables from the follow-up survey (Supplemental Digital Content 3, http://links.lww.com/BPO/A363).\nClinical Practice Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nParticipants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. 
On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nEmployer Guidelines Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nParticipants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nTechnology A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nA total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nWellness Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nMost (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nTeaching/Training Most (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nMost (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nResearch A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.\nA total of 68 (61.3%) respondents were continuing research activities at time of follow-up. 
Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.", "Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.", "Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.", "A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.", "Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.", "Most (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.", "A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.", "Our study relied on self-report, so recall bias is an inherent a limitation. Furthermore, selection bias may limit generalizability of our findings; those with greater case load, stress, or challenges from the pandemic may have been less likely to participate due to other demands on their time. Small sample size within global context is another limitation. This study captured a wide range of respondents with an international focus, and thus may not be generalizable in any single local context. Importantly, we completed most data collection during the initial wave of the pandemic in April to May 2020, with only a small sample for follow-up in February to March 2021. Thus, changes in clinical practices, guidelines, and policies that evolved as new information emerged and regional case numbers shifted may not have been captured. Attrition may be due to survey fatigue, increased clinical workload, or other factors. A larger longitudinal assessment is needed to fully capture impacts on pediatric orthopaedic practice across the entirety of the pandemic.", "We found significant impacts on pediatric orthopaedic patient care during the COVID-19 pandemic. Telemedicine provided many surgeons an opportunity to continue care despite local restrictions; building and maintaining infrastructure to support virtual care is vital to be better prepared for future health system disruptions. 
Regular communications and formation of task forces in hospitals/clinics helped support surgeon wellbeing during this time. Long-term, epidemiological evaluations of clinical outcomes are needed to determine impact of care delays on patients. In-depth evaluation of patient and surgeon experiences with virtual care is needed to optimize use of telemedicine." ]
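The risk ratio and confidence interval reported above for elective-procedure cancellations (337/358 under total lockdown vs 69/96 otherwise; relative risk=1.30, 95% CI=1.15, 1.49) can be spot-checked from the published counts. The sketch below is an illustration only, assuming the standard log-scale Wald interval; it is not the authors' analysis code (the paper reports analyses in Excel and R), and the function and variable names are ours. The 2x2 counts behind the wellness task-force risk ratio (7.25) are not published, so that figure is not reproduced.

```python
from math import exp, log, sqrt

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of a/n1 vs b/n2 with a log-scale Wald 95% CI (illustrative only)."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # standard error of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Elective procedures stopped: 337/358 under total lockdown vs 69/96 otherwise.
print(risk_ratio_ci(337, 358, 69, 96))  # approximately (1.31, 1.15, 1.49)
```

Run as-is, this prints roughly (1.31, 1.15, 1.49); the small difference from the reported point estimate of 1.30 is consistent with rounding or a slightly different estimator.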
[ null, null, null, null ]
[ "METHODS", "RESULTS", "Initial Survey April to May 2020", "Demographics", "Clinical Practice", "Employer Guidelines", "Technology", "Wellness", "Teaching/Training", "Research", "Follow-up Survey February to March 2021", "Clinical Practice", "Employer Guidelines", "Technology", "Wellness", "Teaching/Training", "Research", "DISCUSSION", "Limitations", "Lessons Learned", "Supplementary Material" ]
[ "An international online survey of orthopaedic surgeons was conducted from April to May 2020 and repeated as a follow-up survey in February to March 2021. Upon ethics approval at our institution, participants were recruited through web media and email lists of orthopaedic societies across the globe (Appendix A, Supplemental Digital Content 1, http://links.lww.com/BPO/A361). The initial survey link and invitation to participate was shared with organizations via email, with request for dissemination throughout their networks. Participants were contacted directly for follow-up using contact information provided at initial survey. The follow-up survey was also opened to additional respondents through orthopaedic societies. Participants were pediatric orthopaedic surgeons, or orthopaedic surgeons with interest in pediatrics, who were practicing in the field during or immediately before the COVID-19 pandemic in March 2020. Participants were excluded if they could not read/write in English or were not practicing immediately before pandemic.\nThe survey was developed by the study team and included closed-ended and open-ended questions regarding demographics, patient care, hospital guidelines, and impacts on training, research and wellness during the COVID-19 pandemic. Participants were also asked about their use of virtual technologies. Additional questions were added to the follow-up survey to capture detail on local state of pandemic and other long-term changes. Data were collected using REDCap electronic data capture tools13,14 hosted at our center’s Research Institute.\nContinuous variables were summarized by means and SDs, or medians and interquartile ranges depending on observed distribution. Categorical variables were summarized as frequencies and percentages. Where appropriate, groups were compared with 2 sample t tests (or Mann-Whitney for skewed data) for continuous variables and χ2 tests for categorical, with mean differences, or risk ratios and corresponding 95% confidence intervals, respectively. Analyses were conducted in Excel and R statistical software,15 version 3.6.3. Responses were included if participants submitted the final survey page. Due to survey structure, not all questions were answered by each participant (ie, additional probing questions appeared based on response to previous questions). As such, data is presented as a percentage of total responses per question. See Appendix B for an overview of “yes” responses across multiple survey variables (Supplemental Digital Content 2, http://links.lww.com/BPO/A362).", "Initial Survey April to May 2020 Demographics A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. 
Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nA total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nClinical Practice Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nSurgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. 
Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nEmployer Guidelines At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nAt time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nTechnology During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nDuring the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. 
Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nWellness In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nIn total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). 
Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nTeaching/Training Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nParticipants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nResearch Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.\nRespondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.\nDemographics A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nA total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). 
Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nClinical Practice Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nSurgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nEmployer Guidelines At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. 
Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nAt time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nTechnology During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nDuring the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). 
Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nWellness In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nIn total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nTeaching/Training Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). 
Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nParticipants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nResearch Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.\nRespondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.\nFollow-up Survey February to March 2021 A total of 111 orthopaedic surgeons across 28 countries completed the follow-up survey. Key questions from each section above were analyzed. See Appendix C for additional variables from the follow-up survey (Supplemental Digital Content 3, http://links.lww.com/BPO/A363).\nClinical Practice Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nParticipants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nEmployer Guidelines Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nParticipants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). 
Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nTechnology A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nA total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nWellness Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nMost (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nTeaching/Training Most (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nMost (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nResearch A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.\nA total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.\nA total of 111 orthopaedic surgeons across 28 countries completed the follow-up survey. Key questions from each section above were analyzed. See Appendix C for additional variables from the follow-up survey (Supplemental Digital Content 3, http://links.lww.com/BPO/A363).\nClinical Practice Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nParticipants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. 
On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nEmployer Guidelines Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nParticipants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nTechnology A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nA total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nWellness Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nMost (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nTeaching/Training Most (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nMost (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nResearch A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.\nA total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.", "Demographics A total of 460 responses from orthopaedic surgeons across 45 countries were collected. 
Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nA total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice\nClinical Practice Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nSurgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. 
A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).\nEmployer Guidelines At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nAt time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.\nTechnology During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). 
Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nDuring the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.\nWellness In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nIn total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. 
Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree\nTeaching/Training Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nParticipants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.\nResearch Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.\nRespondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.", "A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. 
Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.\nParticipant Demographics and Setting of Their Orthopaedic Practice", "Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.\nCancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.\nRespondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).", "At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.\nMost (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.", "During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these respondents, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting not available in their region.\nWhere applicable, participants reported conducting a mean of 11.95 virtual appointments per week during pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. 
Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointments; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.", "In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to pandemic. Some reported reduced full-time equivalent for mid-level provider/clinical staff (197/443, 44.5%), and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.\nNearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree). Responses varied by country (Table 2).\nAverage Ratings of Surgeon’s Own Health Systems’ Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree", "Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.", "Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment.", "A total of 111 orthopaedic surgeons across 28 countries completed the follow-up survey. Key questions from each section above were analyzed. See Appendix C for additional variables from the follow-up survey (Supplemental Digital Content 3, http://links.lww.com/BPO/A363).\nClinical Practice Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nParticipants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. 
On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.\nEmployer Guidelines Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nParticipants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.\nTechnology A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nA total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.\nWellness Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nMost (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.\nTeaching/Training Most (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nMost (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.\nResearch A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.\nA total of 68 (61.3%) respondents were continuing research activities at time of follow-up. 
Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.", "Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.", "Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptions required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced number of surgeries (n=40), and/or other (n=23) changes such as COVID-19 tests for patients/visitors.", "A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to initial survey, most (77/110; 70.0%) indicated they will continue using telehealth to provide care.", "Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.", "Most (72/98, n=73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.", "A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to initial survey.", "Surgeons reported impacts of COVID-19-related restrictions on care, wellness, training and research. We found significant interruptions in patient care, alongside substantial uptake of virtual technologies to maintain some care continuity that persisted even at follow-up a year later. The international reach of this survey allowed us to capture impacts on a global scale, in light of clear differences in disease spread and pandemic responses across jurisdictions. Despite variation in number of cases per country, our results demonstrated common challenges and impacts across the globe.\nSurgeons reported regional restrictions and lockdown measures that underscored necessary shifts in clinical practice toward urgent/emergent procedures and care, with a reduction of in-person appointments that continued through follow-up a year later. 
This aligned with published recommendations from the American College of Surgeons7,10 and others.6,8,9 Importantly, the reported decrease in postoperative patient appointments reported at initial survey may be a result of lowered volume of surgeries, rather than lack of follow-up for patients who received surgery.\nGiven the importance of timely care in pediatric orthopaedics, it is unsurprising that most respondents reported providing some care continuity through virtual modalities (telemedicine) which continued throughout the pandemic.16–19 Telemedicine provides care while minimizing disease transmission, and has been widely used during the COVID-19 pandemic.16,17,20 Uptake of telemedicine has even been described as a positive outcome of the pandemic,21 as it can reduce financial and travel burden on patients while improving access to care for vulnerable and remote communities.22,23 Our findings align with existing literature; respondents maintained care continuity with telemedicine and were interested in use during everyday health care in the future. Many participants continued using telemedicine at follow-up, demonstrating this shift to integration of telemedicine in regular care. However, we found challenges to virtual care, including inability to bill equivalent to an in-person appointment, and inability to provide prescriptions. While these were not identified as reasons for not using telemedicine, it is plausible such challenges may dissuade continuation of use. Surgeons who did not offer virtual care explained lack of resources or training as main barriers. Regional infrastructure and technological uptake are other known barriers to telemedicine.17,24\n\nParticipants did not report the same uptake in use of virtual technologies for clinical training activities initially or at follow-up, which may indicate importance of in-person training and the value of trainees in clinical care, even throughout a pandemic.25–27 Implementation of wellness activities was far more likely if the hospital developed a task force. This may indicate importance and utility of such designated bodies during large scale stresses on health care systems.\nAs expected, many participants reported reduction or discontinuation of research activities at time of initial survey, likely due to prioritization of patient care and reduction of nonessential activities at many health care facilities.5 This continued at follow-up despite reported increase in nonessential activities, suggesting additional factors may have prevented return to prepandemic levels.\nLimitations Our study relied on self-report, so recall bias is an inherent a limitation. Furthermore, selection bias may limit generalizability of our findings; those with greater case load, stress, or challenges from the pandemic may have been less likely to participate due to other demands on their time. Small sample size within global context is another limitation. This study captured a wide range of respondents with an international focus, and thus may not be generalizable in any single local context. Importantly, we completed most data collection during the initial wave of the pandemic in April to May 2020, with only a small sample for follow-up in February to March 2021. Thus, changes in clinical practices, guidelines, and policies that evolved as new information emerged and regional case numbers shifted may not have been captured. Attrition may be due to survey fatigue, increased clinical workload, or other factors. 
A larger longitudinal assessment is needed to fully capture impacts on pediatric orthopaedic practice across the entirety of the pandemic.\nOur study relied on self-report, so recall bias is an inherent a limitation. Furthermore, selection bias may limit generalizability of our findings; those with greater case load, stress, or challenges from the pandemic may have been less likely to participate due to other demands on their time. Small sample size within global context is another limitation. This study captured a wide range of respondents with an international focus, and thus may not be generalizable in any single local context. Importantly, we completed most data collection during the initial wave of the pandemic in April to May 2020, with only a small sample for follow-up in February to March 2021. Thus, changes in clinical practices, guidelines, and policies that evolved as new information emerged and regional case numbers shifted may not have been captured. Attrition may be due to survey fatigue, increased clinical workload, or other factors. A larger longitudinal assessment is needed to fully capture impacts on pediatric orthopaedic practice across the entirety of the pandemic.\nLessons Learned We found significant impacts on pediatric orthopaedic patient care during the COVID-19 pandemic. Telemedicine provided many surgeons an opportunity to continue care despite local restrictions; building and maintaining infrastructure to support virtual care is vital to be better prepared for future health system disruptions. Regular communications and formation of task forces in hospitals/clinics helped support surgeon wellbeing during this time. Long-term, epidemiological evaluations of clinical outcomes are needed to determine impact of care delays on patients. In-depth evaluation of patient and surgeon experiences with virtual care is needed to optimize use of telemedicine.\nWe found significant impacts on pediatric orthopaedic patient care during the COVID-19 pandemic. Telemedicine provided many surgeons an opportunity to continue care despite local restrictions; building and maintaining infrastructure to support virtual care is vital to be better prepared for future health system disruptions. Regular communications and formation of task forces in hospitals/clinics helped support surgeon wellbeing during this time. Long-term, epidemiological evaluations of clinical outcomes are needed to determine impact of care delays on patients. In-depth evaluation of patient and surgeon experiences with virtual care is needed to optimize use of telemedicine.", "Our study relied on self-report, so recall bias is an inherent a limitation. Furthermore, selection bias may limit generalizability of our findings; those with greater case load, stress, or challenges from the pandemic may have been less likely to participate due to other demands on their time. Small sample size within global context is another limitation. This study captured a wide range of respondents with an international focus, and thus may not be generalizable in any single local context. Importantly, we completed most data collection during the initial wave of the pandemic in April to May 2020, with only a small sample for follow-up in February to March 2021. Thus, changes in clinical practices, guidelines, and policies that evolved as new information emerged and regional case numbers shifted may not have been captured. Attrition may be due to survey fatigue, increased clinical workload, or other factors. 
A larger longitudinal assessment is needed to fully capture impacts on pediatric orthopaedic practice across the entirety of the pandemic.", "We found significant impacts on pediatric orthopaedic patient care during the COVID-19 pandemic. Telemedicine provided many surgeons an opportunity to continue care despite local restrictions; building and maintaining infrastructure to support virtual care is vital to be better prepared for future health system disruptions. Regular communications and formation of task forces in hospitals/clinics helped support surgeon wellbeing during this time. Long-term, epidemiological evaluations of clinical outcomes are needed to determine impact of care delays on patients. In-depth evaluation of patient and surgeon experiences with virtual care is needed to optimize use of telemedicine.", "Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website, www.pedorthopaedics.com." ]
Keywords: pandemic, COVID-19, global health care, survey, cross-sectional, orthopaedic surgery
METHODS: An international online survey of orthopaedic surgeons was conducted from April to May 2020 and repeated as a follow-up survey in February to March 2021. Upon ethics approval at our institution, participants were recruited through web media and email lists of orthopaedic societies across the globe (Appendix A, Supplemental Digital Content 1, http://links.lww.com/BPO/A361). The initial survey link and invitation to participate were shared with organizations via email, with a request for dissemination throughout their networks. Participants were contacted directly for follow-up using contact information provided at the initial survey. The follow-up survey was also opened to additional respondents through orthopaedic societies.

Participants were pediatric orthopaedic surgeons, or orthopaedic surgeons with an interest in pediatrics, who were practicing in the field during or immediately before the COVID-19 pandemic in March 2020. Participants were excluded if they could not read/write in English or were not practicing immediately before the pandemic. The survey was developed by the study team and included closed-ended and open-ended questions regarding demographics, patient care, hospital guidelines, and impacts on training, research, and wellness during the COVID-19 pandemic. Participants were also asked about their use of virtual technologies. Additional questions were added to the follow-up survey to capture detail on the local state of the pandemic and other long-term changes. Data were collected using REDCap electronic data capture tools13,14 hosted at our center's Research Institute.

Continuous variables were summarized by means and SDs, or medians and interquartile ranges, depending on the observed distribution. Categorical variables were summarized as frequencies and percentages. Where appropriate, groups were compared with 2-sample t tests (or Mann-Whitney tests for skewed data) for continuous variables and χ2 tests for categorical variables, with mean differences or risk ratios and corresponding 95% confidence intervals, respectively. Analyses were conducted in Excel and R statistical software,15 version 3.6.3. Responses were included if participants submitted the final survey page. Due to the survey structure, not all questions were answered by each participant (ie, additional probing questions appeared based on responses to previous questions). As such, data are presented as a percentage of total responses per question. See Appendix B for an overview of “yes” responses across multiple survey variables (Supplemental Digital Content 2, http://links.lww.com/BPO/A362).
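As an illustration only, the short R sketch below mirrors the type of analysis described above: means and SDs with a 2-sample t test (or Mann-Whitney test) for a continuous outcome, and a χ2 test on a 2×2 table for a categorical outcome. The continuous vectors are invented placeholders, not study data; the 2×2 table uses the elective-surgery counts reported in the Results below. This is a sketch of the general approach, not the authors' actual analysis code.

# Illustrative R sketch of the analysis approach (placeholder data where noted)

# Continuous outcome: hypothetical weekly surgery counts per surgeon
pre  <- c(6, 8, 5, 10, 7, 4, 9)   # placeholder prepandemic values (not study data)
post <- c(1, 0, 2, 3, 1, 0, 2)    # placeholder values at time of survey (not study data)

mean(pre); sd(pre)                # summarize as mean and SD
t.test(pre, post)                 # 2-sample t test with 95% CI for the mean difference
wilcox.test(pre, post)            # Mann-Whitney alternative for skewed data

# Categorical outcome: stopped elective procedures by lockdown status
# (counts as reported in the Results below: 337/358 vs. 69/96)
tab <- matrix(c(337, 358 - 337,
                69,  96 - 69),
              nrow = 2, byrow = TRUE,
              dimnames = list(lockdown = c("total", "not total"),
                              electives = c("stopped", "continued")))
chisq.test(tab)                   # chi-square test of association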
RESULTS: Initial Survey April to May 2020

Demographics
A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most surgeons (n=414/451, 91.8%) had completed orthopaedic subspecialty training, most commonly in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was under total lockdown/stay-home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines with nonessential work permitted (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249), and/or other measures (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region.

Table 1. Participant Demographics and Setting of Their Orthopaedic Practice.

Clinical Practice
Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care.

Cancellation of elective procedures differed based on lockdown measures: 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, a 30% relative increase in cancellations among those reporting lockdown [relative risk=1.30 (95% CI=1.15, 1.49), P<0.001]. To manage the delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts.

Respondents also reported a reduction in outpatient appointments during the pandemic. The average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61).
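As a transparency check, the relative risk above can be recomputed directly from the reported counts. The sketch below assumes a standard Wald confidence interval on the log scale (the paper does not state which interval method was used); with the reported counts it gives a point estimate of about 1.31 (the paper reports 1.30) and a 95% CI of roughly 1.15 to 1.49, matching the reported interval.

# Recompute the relative risk of stopping elective procedures
# from the reported counts (illustrative check only)
a <- 337; n1 <- 358   # stopped electives / respondents under total lockdown
b <- 69;  n2 <- 96    # stopped electives / respondents not under total lockdown

p1 <- a / n1          # about 0.941
p2 <- b / n2          # about 0.719
rr <- p1 / p2         # about 1.31

se_log_rr <- sqrt(1/a - 1/n1 + 1/b - 1/n2)                  # SE of log(RR)
ci <- exp(log(rr) + c(-1, 1) * qnorm(0.975) * se_log_rr)    # Wald 95% CI

round(rr, 2)          # 1.31
round(ci, 2)          # approximately 1.15 to 1.49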
Employer Guidelines
At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopaedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership.

Most (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.

Technology
During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used these much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these, 23 (43.4%) reported virtual appointments were unavailable in their region. Surgeons in India represented 15/23 (65.2%) of those reporting unavailability in their region.

Where applicable, participants reported conducting a mean of 11.95 virtual appointments per week during the pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167), and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointment; 98 (37.3%) could bill less. Despite challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.

Wellness
In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to the pandemic. Some reported reduced full-time equivalents for mid-level provider/clinical staff (197/443, 44.5%) and/or their administrative and research staff (n=185, 41.8%) due to the pandemic.

Nearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating statements on a scale of 0 (strongly disagree) to 10 (strongly agree); responses varied by country (Table 2).

Table 2. Average Ratings of Surgeons’ Own Health Systems’ Response to the Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree.
Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic. Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic. Research Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment. Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at hospital/clinic at time of survey. For 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%), or reduction (n=40, 34.2%) in recruitment. Demographics A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. Only 3 (0.7%) respondents reported no restrictions in place in their region. Participant Demographics and Setting of Their Orthopaedic Practice A total of 460 responses from orthopaedic surgeons across 45 countries were collected. Table 1 describes demographics and surgical practice setting. Most (n=414/451, 91.8%) surgeons completed orthopaedic subspecialty training in pediatrics (n=390/412, 94.7%). Most (358/456, 78.5%) respondents reported their region was in total lockdown/stay home orders at time of survey, where only essential services or businesses could operate outside the home. Other restrictions included suggested stay-home guidelines (but nonessential work was permitted) (n=114), mandatory curfews (n=46), schools closed/online only (n=270), social distancing (n=279), limitations on group gathering size (n=249) and/or other (n=13) such as mask requirements. 
Only 3 (0.7%) respondents reported no restrictions in place in their region. Participant Demographics and Setting of Their Orthopaedic Practice Clinical Practice Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care. Cancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts. Respondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61). Surgeons reported a reduction in number of surgeries, decreasing from an average of 6.89 (SD=4.61) weekly surgeries prepandemic to 1.25 (SD=2.26) at time of survey [mean difference=−5.64; 95% confidence interval (CI)=−6.10 to −5.19]. A total of 123/454 (27.1%) respondents indicated the pandemic affected their ability to provide urgent/emergent care. Cancellation of elective procedures differed based on lockdown measures; 337/358 (94.1%) respondents in total lockdown had completely stopped elective procedures, compared with 69/96 (71.9%) of those not under total lockdown, indicating a 30% increase in cancellations among those reporting lockdown; relative risk=1.30 (95% CI=1.15, 1.49, P<0.001). To manage delay of elective procedures, respondents requested permission to see patients on a case-by-case basis (n=291/448, 65.0%), took a break until lockdown eased (n=168/448, 37.5%), and/or employed other measures (n=34/448, 7.6%), such as updating clinic layouts to minimize contacts. Respondents also reported reduction in outpatient appointments during the pandemic. Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=−56.10, 95% CI=−60.58 to −5.61). Employer Guidelines At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership. Most (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. 
Employer Guidelines: At time of survey, 353/452 (78.1%) respondents were not required to go to their hospital/clinic daily, and 419/454 (92.3%) had rearranged working practices, such as limiting physical contact with patients. Only 51/452 (11.3%) respondents had been reassigned to a nonorthopedic duty, such as COVID-19-related care, triaging or screening patients, or hospital leadership. Most (n=434/450, 95.8%) participants reported their hospital/clinic regularly communicated COVID-19 information. In total, 393/452 (86.9%) respondents received guidelines regarding surgery on patients with COVID-19. However, only 219/453 (48.3%) surgeons reported their workplace screened patients for COVID-19 before surgery. At time of survey, 398/452 (88.1%) respondents indicated that personal protective equipment (PPE) was available at their hospital/clinic, but 43/452 (9.5%) were expected to acquire PPE at their own expense.
Technology: During the pandemic, 177/449 (39.4%) respondents began using virtual modalities for outpatient appointments for the first time, while 163 (36.3%) used them much more than before the pandemic. Only 53 (11.8%) respondents did not conduct virtual appointments at all; of these, 23 (43.4%) reported virtual appointments were unavailable in their region, and surgeons in India represented 15/23 (65.2%) of that group. Where applicable, participants reported conducting a mean of 11.95 virtual appointments per week during the pandemic (median=8.5, SD=12.92). Formats included phone calls (n=241), video calls (n=157), telehealth/e-health (n=167) and/or other (n=20). Some (103/387, 26.6%) respondents were able to record virtual appointments, and 112/387 (29.1%) could provide scanned prescriptions. Many surgeons (264/384, 68.8%) indicated they could charge for virtual appointments, but only 115/263 (43.7%) were able to bill the same amount as an in-person appointment; 98 (37.3%) could bill less. Despite these challenges, 329/448 (73.4%) respondents indicated they would continue using virtual appointments after the pandemic.
Wellness: In total, 281/452 (62.2%) surgeons reported changes to their income due to the COVID-19 pandemic; 210/281 (74.7%) reported a significant decrease, and 67 (23.8%) reported a small decrease. However, only 163/443 (36.8%) reported reduced full-time equivalent working hours due to the pandemic. Some reported reduced full-time equivalents for mid-level provider/clinical staff (197/443, 44.5%) and/or their administrative and research staff (n=185, 41.8%) due to the pandemic. Nearly half (n=224/451, 49.7%) of respondents reported that their hospital/department implemented wellness activities for faculty and staff specific to the pandemic. A total of 353/452 (78.1%) respondents reported their hospitals had an emergency operation center/COVID-19 task force. Implementation of wellness activities was over 7 times more likely if the hospital/employer also had a task force (relative risk=7.25, 95% CI=4.03, 13.0). Respondents also reflected on how they felt their health care systems were handling the pandemic by rating responses on a scale of 0 (strongly disagree) to 10 (strongly agree); responses varied by country (Table 2: Average Ratings of Surgeon's Own Health Systems' Response to Coronavirus Disease 2019 Pandemic, Where 0=Strongly Disagree and 10=Strongly Agree).
Teaching/Training: Participants were asked to report impacts on clinical teaching/training activities for medical students, residents, or fellows, if applicable; 290/460 (63.0%) completed this section. Most (223/264, 84.5%) respondents reported systems were in place to continue training during the pandemic, including webinars (n=169), virtual rounds (n=114), online access to journals/textbooks (n=104) and/or continued in-person training (n=97). Importantly, 272/289 (94.1%) surgeons believed trainees had a role in the ongoing pandemic.
Research: Respondents were asked to report impacts on their research activities, if applicable; 44.8% (n=192/437) completed this section. Most respondents (149/180, 82.8%) were continuing research activities during the pandemic. However, 105/127 (82.7%) respondents reported research personnel were not working on-site at the hospital/clinic at time of survey. For the 117 surgeons whose research involved patient recruitment, most reported either cessation (n=75/117, 64.15%) or reduction (n=40, 34.2%) in recruitment.
Follow-up Survey February to March 2021: A total of 111 orthopaedic surgeons across 28 countries completed the follow-up survey. Key questions from each section above were analyzed. See Appendix C for additional variables from the follow-up survey (Supplemental Digital Content 3, http://links.lww.com/BPO/A363).
Clinical Practice: Participants reported completing an average of 4.85 (SD=3.63) elective surgeries and 3.13 (SD=5.51) urgent surgeries in the past week. On average, surgeons reported 59.8 (SD=44.2) outpatient appointments in the past week.
Employer Guidelines: Participants indicated that resumption of clinical practice was dependent on lifted lockdowns (n=43), updated clinical protocols to meet public health guidance (n=43), reductions in case numbers in their region (n=36), and/or increased clinical demands (n=24). Adaptations required to resume practice included limiting patient chaperones (n=80), PPE requirements (n=77), reduced numbers of surgeries (n=40), and/or other changes (n=23) such as COVID-19 tests for patients/visitors.
Technology: A total of 86/110 respondents reported using telehealth/virtual appointments for new patients (n=41), follow-up patients (n=79) and/or postoperative appointments (n=56). Similar to the initial survey, most (77/110, 70.0%) indicated they will continue using telehealth to provide care.
Wellness: Most (95/110, 85.6%) respondents indicated their hospital routinely screened patients for COVID-19 before surgery, suggesting an increase from the initial survey. A total of 93/111 (83.8%) surgeons reported that they took holiday or vacation days during the pandemic; most indicated this was less time off (n=48/109, 44.4%) or about the same (n=41, 37.6%) compared with typical years prepandemic. A total of 26/111 (23.4%) reported taking a leave of absence at some point during the pandemic.
Teaching/Training: Most (72/98, 73.5%) respondents who engaged in teaching and training activities indicated they had lost training opportunities due to the pandemic.
Research: A total of 68 (61.3%) respondents were continuing research activities at time of follow-up. Of 48 surgeons whose research involved patient recruitment, most reported reduction (n=30) or cessation (n=4) of recruitment, similar to the initial survey.
DISCUSSION: Surgeons reported impacts of COVID-19-related restrictions on care, wellness, training, and research. We found significant interruptions in patient care, alongside substantial uptake of virtual technologies to maintain some care continuity that persisted even at follow-up a year later. The international reach of this survey allowed us to capture impacts on a global scale, in light of clear differences in disease spread and pandemic responses across jurisdictions. Despite variation in the number of cases per country, our results demonstrated common challenges and impacts across the globe. Surgeons reported regional restrictions and lockdown measures that underscored necessary shifts in clinical practice toward urgent/emergent procedures and care, with a reduction of in-person appointments that continued through follow-up a year later. This aligned with published recommendations from the American College of Surgeons [7,10] and others [6,8,9]. Importantly, the decrease in postoperative patient appointments reported at initial survey may be a result of lowered volume of surgeries, rather than lack of follow-up for patients who received surgery. Given the importance of timely care in pediatric orthopaedics, it is unsurprising that most respondents reported providing some care continuity through virtual modalities (telemedicine), which continued throughout the pandemic [16-19]. Telemedicine provides care while minimizing disease transmission, and has been widely used during the COVID-19 pandemic [16,17,20]. Uptake of telemedicine has even been described as a positive outcome of the pandemic [21], as it can reduce financial and travel burden on patients while improving access to care for vulnerable and remote communities [22,23]. Our findings align with existing literature; respondents maintained care continuity with telemedicine and were interested in its use in everyday health care in the future. Many participants continued using telemedicine at follow-up, demonstrating this shift to integration of telemedicine in regular care. However, we found challenges to virtual care, including inability to bill equivalent to an in-person appointment and inability to provide prescriptions. While these were not identified as reasons for not using telemedicine, it is plausible such challenges may dissuade continued use. Surgeons who did not offer virtual care cited lack of resources or training as the main barriers. Regional infrastructure and technological uptake are other known barriers to telemedicine [17,24]. Participants did not report the same uptake in use of virtual technologies for clinical training activities initially or at follow-up, which may indicate the importance of in-person training and the value of trainees in clinical care, even throughout a pandemic [25-27]. Implementation of wellness activities was far more likely if the hospital developed a task force, which may indicate the importance and utility of such designated bodies during large-scale stresses on health care systems.
As expected, many participants reported reduction or discontinuation of research activities at time of initial survey, likely due to prioritization of patient care and reduction of nonessential activities at many health care facilities [5]. This continued at follow-up despite a reported increase in nonessential activities, suggesting additional factors may have prevented a return to prepandemic levels.
Limitations: Our study relied on self-report, so recall bias is an inherent limitation. Furthermore, selection bias may limit the generalizability of our findings; those with greater case load, stress, or challenges from the pandemic may have been less likely to participate due to other demands on their time. The small sample size within a global context is another limitation. This study captured a wide range of respondents with an international focus, and thus may not be generalizable in any single local context. Importantly, we completed most data collection during the initial wave of the pandemic in April to May 2020, with only a small sample for follow-up in February to March 2021. Thus, changes in clinical practices, guidelines, and policies that evolved as new information emerged and regional case numbers shifted may not have been captured. Attrition may be due to survey fatigue, increased clinical workload, or other factors. A larger longitudinal assessment is needed to fully capture impacts on pediatric orthopaedic practice across the entirety of the pandemic.
Lessons Learned: We found significant impacts on pediatric orthopaedic patient care during the COVID-19 pandemic. Telemedicine provided many surgeons an opportunity to continue care despite local restrictions; building and maintaining infrastructure to support virtual care is vital to be better prepared for future health system disruptions. Regular communications and the formation of task forces in hospitals/clinics helped support surgeon wellbeing during this time. Long-term epidemiological evaluations of clinical outcomes are needed to determine the impact of care delays on patients. In-depth evaluation of patient and surgeon experiences with virtual care is needed to optimize the use of telemedicine.
Supplementary Material: Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website, www.pedorthopaedics.com.
Background: The coronavirus disease 2019 (COVID-19) pandemic required rapid, global health care shifts to prioritize urgent or pandemic-related care and minimize transmission. Little is known about impacts on pediatric orthopaedic surgeons during this time. We aimed to investigate COVID-19-related changes in practice, training, and research among pediatric orthopaedic surgeons globally. Methods: An online survey was administered to orthopaedic surgeons with interest in pediatrics in April 2020 and a follow-up was administered in February 2021. The surveys captured demographics and surgeons' self-reported experiences during the pandemic. Participants were recruited from web media and available email lists of orthopaedic societies over a 2-month period. Descriptive statistics were used to analyze results, stratified by the severity of local COVID-19-related measures. Results: A total of 460 responses from 45 countries were collected for initial survey. Of these, 358 (78.5%) respondents reported lockdown measures in their region at time of survey. Most (n=337, 94.4%) reported pausing all elective procedures. Surgeons reported a reduction in the average number of surgeries per week, from 6.89 (SD=4.61) prepandemic to 1.25 (SD=2.26) at time of survey (mean difference=5.64; 95% confidence interval=5.19, 6.10). Average number of elective outpatient appointments per week decreased from 67.89 (SD=45.78) prepandemic to 11.79 (SD=15.83) at time of survey (mean difference=56.10, 95% confidence interval=5.61, 60.58). In total, 177 (39.4%) surgeons reported using virtual modes of outpatient appointments for the first time. Of 290 surgeons with trainees, 223 (84.5%) reported implementation of systems to continue training such as webinars or virtual rounds. Of 192 respondents with research, 149 (82.8%) reported continuing research activities during the pandemic with most reporting either cessation (n=75, 64.15%), or reduction (n=25, 29.9%) in participant recruitment. A total of 111 responses from 28 countries were collected during follow-up. Surgeons described policy and circumstantial changes that facilitated resumption of clinical work. Conclusions: The COVID-19 pandemic and its related counter measures have had significant impacts on pediatric orthopaedic practice and increased uptake of technology to provide care continuity. Rigorous epidemiological studies are needed to assess impacts of delayed and virtual care on patient outcomes.
null
null
13,552
438
[ 2532, 163, 238, 172, 231, 252, 102, 96, 776, 40, 89, 54, 97, 26, 48, 191, 106 ]
21
[ "respondents", "reported", "pandemic", "surgeons", "appointments", "survey", "total", "time", "virtual", "19" ]
[ "pediatric orthopaedic surgeons", "responses orthopaedic surgeons", "orthopaedic patient care", "online survey orthopaedic", "orthopaedic surgeons interest" ]
null
null
null
null
[CONTENT] pandemic | COVID-19 | global health care | survey | cross-sectional | orthopaedic surgery [SUMMARY]
[CONTENT] pandemic | COVID-19 | global health care | survey | cross-sectional | orthopaedic surgery [SUMMARY]
null
[CONTENT] pandemic | COVID-19 | global health care | survey | cross-sectional | orthopaedic surgery [SUMMARY]
null
null
[CONTENT] COVID-19 | Child | Communicable Disease Control | Cross-Sectional Studies | Humans | Orthopedic Surgeons | Orthopedics | Pandemics | Pediatrics | SARS-CoV-2 | Surgeons | Surveys and Questionnaires [SUMMARY]
[CONTENT] COVID-19 | Child | Communicable Disease Control | Cross-Sectional Studies | Humans | Orthopedic Surgeons | Orthopedics | Pandemics | Pediatrics | SARS-CoV-2 | Surgeons | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] COVID-19 | Child | Communicable Disease Control | Cross-Sectional Studies | Humans | Orthopedic Surgeons | Orthopedics | Pandemics | Pediatrics | SARS-CoV-2 | Surgeons | Surveys and Questionnaires [SUMMARY]
null
null
[CONTENT] pediatric orthopaedic surgeons | responses orthopaedic surgeons | orthopaedic patient care | online survey orthopaedic | orthopaedic surgeons interest [SUMMARY]
[CONTENT] pediatric orthopaedic surgeons | responses orthopaedic surgeons | orthopaedic patient care | online survey orthopaedic | orthopaedic surgeons interest [SUMMARY]
null
[CONTENT] pediatric orthopaedic surgeons | responses orthopaedic surgeons | orthopaedic patient care | online survey orthopaedic | orthopaedic surgeons interest [SUMMARY]
null
null
[CONTENT] respondents | reported | pandemic | surgeons | appointments | survey | total | time | virtual | 19 [SUMMARY]
[CONTENT] respondents | reported | pandemic | surgeons | appointments | survey | total | time | virtual | 19 [SUMMARY]
null
[CONTENT] respondents | reported | pandemic | surgeons | appointments | survey | total | time | virtual | 19 [SUMMARY]
null
null
[CONTENT] survey | questions | variables | data | participants | orthopaedic | follow survey | follow | additional | orthopaedic surgeons [SUMMARY]
[CONTENT] reported | respondents | pandemic | appointments | total | 452 | sd | virtual | virtual appointments | surgeons [SUMMARY]
null
[CONTENT] reported | respondents | pandemic | appointments | survey | total | virtual | patients | surgeons | research [SUMMARY]
null
null
[CONTENT] April 2020 | February 2021 ||| ||| 2-month ||| COVID-19 [SUMMARY]
[CONTENT] 460 | 45 ||| 358 | 78.5% ||| 94.4% ||| 6.89 | 1.25 | 95% | 6.10 ||| 67.89 | 11.79 | difference=56.10 | 95% | 60.58 ||| 177 | 39.4% | first ||| 290 | 223 | 84.5% ||| 192 | 149 | 82.8% | 64.15% | n=25 | 29.9% ||| 111 | 28 ||| [SUMMARY]
null
[CONTENT] 2019 | COVID-19 ||| ||| COVID-19 ||| April 2020 | February 2021 ||| ||| 2-month ||| COVID-19 ||| 460 | 45 ||| 358 | 78.5% ||| 94.4% ||| 6.89 | 1.25 | 95% | 6.10 ||| 67.89 | 11.79 | difference=56.10 | 95% | 60.58 ||| 177 | 39.4% | first ||| 290 | 223 | 84.5% ||| 192 | 149 | 82.8% | 64.15% | n=25 | 29.9% ||| 111 | 28 ||| ||| COVID-19 ||| [SUMMARY]
null
Impact of head and neck cancer adaptive radiotherapy to spare the parotid glands and decrease the risk of xerostomia.
25573091
Large anatomical variations occur during the course of intensity-modulated radiation therapy (IMRT) for locally advanced head and neck cancer (LAHNC). The risks are therefore a parotid glands (PG) overdose and a xerostomia increase. The purposes of the study were to estimate: - the PG overdose and the xerostomia risk increase during a "standard" IMRT (IMRTstd); - the benefits of an adaptive IMRT (ART) with weekly replanning to spare the PGs and limit the risk of xerostomia.
BACKGROUND
Fifteen patients received radical IMRT (70 Gy) for LAHNC. Weekly CTs were used to estimate the dose distributions delivered during the treatment, corresponding either to the initial planning (IMRTstd) or to weekly replanning (ART). PGs dose were recalculated at the fraction, from the weekly CTs. PG cumulated doses were then estimated using deformable image registration. The following PG doses were compared: pre-treatment planned dose, per-treatment IMRTstd and ART. The corresponding estimated risks of xerostomia were also compared. Correlations between anatomical markers and dose differences were searched.
MATERIAL AND METHODS
Compared to the initial planning, a PG overdose was observed during IMRTstd for 59% of the PGs, with an average increase of 3.7 Gy (10.0 Gy maximum) for the mean dose, and of 8.2% (23.9% maximum) for the risk of xerostomia. Compared to the initial planning, weekly replanning reduced the PG mean dose for all the patients (p<0.05). In the overirradiated PG group, weekly replanning reduced the mean dose by 5.1 Gy (12.2 Gy maximum) and the absolute risk of xerostomia by 11% (p<0.01) (30% maximum). The PG overdose and the dosimetric benefit of replanning increased with the tumor shrinkage and the neck thickness reduction (p<0.001).
RESULTS
During the course of LAHNC IMRT, around 60% of the PGs are overdosed by 4 Gy. Weekly replanning decreased the PG mean dose by 5 Gy, and therefore reduced the xerostomia risk by 11%.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Female", "Head and Neck Neoplasms", "Humans", "Male", "Middle Aged", "Neoplasm Staging", "Organ Sparing Treatments", "Parotid Gland", "Prognosis", "Radiometry", "Radiotherapy Dosage", "Radiotherapy Planning, Computer-Assisted", "Xerostomia" ]
4311461
Introduction
The treatment of unresectable Head & Neck Cancer (HNC) consists of a chemoradiotherapy [1,2]. One of the most common toxicity of this treatment is xerostomia, inducing difficulties in swallowing and speaking, loss of taste, and dental caries, with therefore a direct impact on patient quality of life. Xerostomia is mainly caused by radiation induced damage mainly to the parotid glands (PG), and to a lesser extend to the submandibular glands [3]. Intensity modulated radiotherapy (IMRT) permits to deliver highly conformal dose in complex anatomical structures, while sparing critical structures. Indeed, three randomized studies have demonstrated improving (PG) sparing by using IMRT compared to non-IMRT techniques, resulting in better salivary flow and decreased xerostomia risk [4-6]. However, large variations can be observed during the course of IMRT treatment, such as body weight loss [7,8], primary tumor shrinking [7], and PG volume reduction [9]. Due to these anatomical variations and to the tight IMRT dose gradient, the actual administered dose may therefore not correspond to the planned dose, with a risk of radiation overdose to the PGs (Figure 1) [10,11]. This dose difference clearly reduces the expected clinical benefits of IMRT, increasing the risk of xerostomia. Although bone-based image-guided radiation therapy (IGRT) allows for setup error correction, the actual delivered dose to the PGs remains higher than the planned dose [12], due to the fact that IGRT does not take shape/volume variations into account. By performing one or more new planning during the radiotherapy treatment, adaptive radiotherapy (ART) aims to correct such uncertainties. ART has been already shown to decrease the mean PG dose during locally advanced head and neck cancer IMRT [13], but no surrogate of the PG dose difference and of the dosimetric benefit of ART has yet been identified. In the context of IMRT for locally advanced HNC, this study sought to:estimate the difference between the planned dose and the actual delivered dose (without replanning) to the PGs, i.e., the PG overdose;estimate the PG dose difference with replanning and without replanning to spare the PGs while keeping the same planning target volume (PTV), i.e., the benefit of ART;identify anatomical markers correlated with these dose differences (PG overdose and ART benefit).Figure 1 Illustration of the anatomical variations on the dose distribution. IMRT dose distributions at different times for a given patient, showing the PG overdose without replanning (B) and the benefit of replanning (C). A: Planned dose on the pre-treatment CT (CT0). B: Actual delivered dose without replanning during the treatment (Week 3). C: Adaptive planned dose with replanning to spare the parotid glands (PG) at the same fraction (Week 3). PGs are shown by the red line. The full red represents the Clinical Target Volume (CTV70). The arrow show the head thickness. Figure 1B and 1C compared to 1A shows that the PGs and the CTV70 volumes and the neck thickness have decreased. These anatomical variations have led to dose hotspots in the neck, close to the internal part of the two PG (Figure 1B). Replanning (Figure 1C) allowed to spare the PG even better than on the planning (Figure 1A). 
Results
Since 3 ipsilateral PGs were completely included within the PTV (patients 7, 10, and 11), they were excluded from the analysis, leaving a total of 27 PGs. The average Dice score [19] for PG registration from the weekly CTs to the planning CT was 0.92 (range 0.83 to 0.95).

Quantification of anatomical variations during the 7 weeks of treatment
From CT0 to CT6, the PG volumes decreased by a mean of 28.3% (range 0.0 to 63.4%, SD 18%), corresponding to an average decrease of 1.1 cc/week (range 0.0 to 2.2 cc/week). The CTV70 decreased by a mean of 31% (range −13% to 73%, SD 28%). The distance between the PGs and the CTV (PG-CTVds) decreased in 74% of the PGs, by 4.3 mm on average (range 0.1 to 12 mm, SD 3.7 mm), whereas it increased in the remaining 26% of the PGs, by 3.2 mm on average (range 1.1 to 6.3 mm, SD 2.1 mm). The neck thickness decreased in 78% of the patients, by a mean of 7.9 mm (range 0.1 to 26.6 mm, SD 6.2 mm).

Dose comparison between planned dose and doses with or without replanning in the PGs
The per-treatment PG doses (with and without replanning) were analyzed for all 15 patients, first considering the weekly fractions and then the doses cumulated over all weekly fractions. The results are shown in Table 2.

Table 2. Parotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all 15 patients. Values are the Dmean in Gy, reported as mean (min-max; SD), with the Wilcoxon p-value.
- Planned dose (1): 30.9 (9.2-54.6; 7.9)
- Doses at the fraction:
  - Without replanning (2): 33.0 (7.7-61.2; 9.9)
  - With replanning (3): 29.4 (4.1-51.7; 8.3)
  - PG overdose (4) = (2)-(1): 1.8 (−10.6-24.9; 5.8), p < 0.001
  - Replanning benefit (5) = (3)-(2): 3.8 (0-23.8; 4.0), p < 0.001
- Cumulated doses:
  - Without replanning (2): 32.0 (8.7-57.6; 9.3)
  - With replanning (3): 28.6 (4.6-51.2; 8.4)
  - PG overdose (4) = (2)-(1): 1.1 (−7.9-10.0; 4.1), p = 0.1
  - Replanning benefit (5) = (3)-(2): 3.6 (0-12.2; 3.3), p < 0.001
PGs: parotid glands. Dmean: the mean PG dose was first calculated for each patient and each week (DmeanWeekly); the mean of the DmeanWeekly values was then calculated for each patient (DmeanPt); finally, the mean of the DmeanPt values was calculated for the whole population (Dmean). p-values were calculated using the Wilcoxon test, comparing the Dmean in (1) versus (2) and in (2) versus (3).
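As a minimal sketch of the nested Dmean averaging and paired Wilcoxon comparisons behind Table 2, the snippet below uses hypothetical placeholder arrays rather than the study data; the shapes and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical data: weekly mean PG dose (Gy) per parotid gland,
# shape (n_glands, n_weekly_CTs). Values are placeholders, not study data.
rng = np.random.default_rng(0)
dmean_weekly_planned = rng.uniform(20, 40, size=(27, 6))
dmean_weekly_no_replan = dmean_weekly_planned + rng.normal(2, 3, size=(27, 6))
dmean_weekly_replan = dmean_weekly_no_replan - np.abs(rng.normal(3, 2, size=(27, 6)))

def per_patient_dmean(weekly):
    # DmeanPt: mean of the weekly Dmean values for each gland/patient
    return weekly.mean(axis=1)

pt_planned = per_patient_dmean(dmean_weekly_planned)
pt_no_replan = per_patient_dmean(dmean_weekly_no_replan)
pt_replan = per_patient_dmean(dmean_weekly_replan)

# PG overdose = (2)-(1); replanning benefit reported as a positive dose reduction
overdose = pt_no_replan - pt_planned
benefit = pt_no_replan - pt_replan

print("Population Dmean, planned: %.1f Gy" % pt_planned.mean())
print("PG overdose: %.1f Gy, p=%.3g" % (overdose.mean(),
      wilcoxon(pt_no_replan, pt_planned).pvalue))
print("Replanning benefit: %.1f Gy, p=%.3g" % (benefit.mean(),
      wilcoxon(pt_no_replan, pt_replan).pvalue))
```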
PG dose comparisons at the per-treatment weekly fraction
To assess the PG overdose, the dose at the fraction without replanning (Figure 2, step 1A) was first compared with the planned dose. In 67% of the plans, the Dmean increased, on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the remaining 33% of plans, the Dmean decreased, by 3.9 Gy on average (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose over the course of treatment is shown in Figure 3 for two representative patients.
Figure 3. Variation over time of the mean PG dose for two representative patients. The red line corresponds to patient no. 1, whose cumulated mean PG dose increased; the blue line corresponds to patient no. 12, whose cumulated mean PG dose decreased.
Then, to assess the benefit of replanning, the dose with replanning (Figure 2, step 1C) was compared with the dose without replanning (Figure 2, step 1A). In 85% of the plans, replanning decreased the Dmean, on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).
PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks
A PG overdose was observed in 59% (n = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patient. Ten of the fifteen patients received a higher Dmean in at least one PG (six patients in both PGs), corresponding to a Dmean increase of 3.7 Gy on average (range 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).
Figure 4. Parotid gland overdose assessment: difference between the mean cumulated dose (without replanning) and the mean planned dose, for each parotid gland of each of the 15 patients (4a), and the corresponding impact on the xerostomia risk (%) (4b). NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment value [21].
Figure 5. Mean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).
Figure 4b shows the corresponding difference in the estimated xerostomia risk. The average absolute increase in xerostomia risk was 3% (range −16.7 to 23.9%, SD 2.9%) over all patients, and 8.2% (range 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to the PGs.
Weekly replanning enabled the Dmean to be reduced to at least the value of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with and without replanning was 5.1 Gy (range 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (range 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of replanning on the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).
Figure 6. Replanning benefit assessment: cumulated mean dose difference between the dose with replanning and the dose without replanning, for each parotid gland (ipsilateral and contralateral) of each of the 15 patients (6a), and the corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment value [21].
In the over-irradiated PG group, replanning decreased the xerostomia risk by 11% on average (range 1 to 30%, SD 8%) (p < 0.01).
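The xerostomia risks above follow from the LKB NTCP model described in the statistical analysis section (n = 1, m = 0.4, TD50 = 39.9 Gy); because n = 1, the generalized EUD reduces to the mean PG dose. Below is a minimal sketch with hypothetical dose values, not the study data.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(mean_dose_gy, td50=39.9, m=0.4):
    """LKB NTCP with n = 1, so the EUD equals the mean parotid dose."""
    t = (mean_dose_gy - td50) / (m * td50)
    return norm.cdf(t)

# Hypothetical planned vs. cumulated (no replanning) mean PG doses (Gy)
planned_dmean = np.array([26.0, 31.1, 28.7])
cumulated_dmean = np.array([29.5, 35.0, 27.9])

risk_change = lkb_ntcp(cumulated_dmean) - lkb_ntcp(planned_dmean)
print("Absolute xerostomia risk change (%):", np.round(100 * risk_change, 1))
```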
Anatomical parameters correlated with PG overdose or replanning benefit
Both the PG overdose and the replanning benefit (at the fraction or cumulated) increased with CTV70 shrinkage and with the reduction in neck thickness (p < 0.01). At the fraction level, a 10 cc reduction of the CTV70 or a 1 mm reduction of the neck thickness led to an increase in the mean PG dose of 0.3 Gy. PG volume variation had no impact on the mean PG dose.
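These correlations were tested with linear mixed-effects models (see the statistical analysis section). The sketch below shows what such a model could look like with a random intercept per patient; the column names and values are hypothetical and not taken from the study dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per parotid gland and weekly CT.
df = pd.DataFrame({
    "patient_id":        [1, 1, 2, 2, 3, 3, 4, 4],
    "pg_overdose_gy":    [1.2, 2.5, -0.4, 0.8, 3.1, 2.2, 0.0, 1.9],
    "ctv70_shrink_cc":   [5, 20, 2, 10, 30, 25, 0, 15],
    "neck_thickness_mm": [-1, -6, 0, -3, -9, -7, 0, -5],
})

# Random intercept per patient accounts for the two glands and repeated weeks
# belonging to the same patient.
model = smf.mixedlm("pg_overdose_gy ~ ctv70_shrink_cc + neck_thickness_mm",
                    data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```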
Discussion and conclusion
The main goal of definitive chemoradiotherapy in locally advanced HNC is to improve locoregional control while maintaining a high quality of life. Reducing the dose to the PGs over the whole course of IMRT, and thereby xerostomia, is a major challenge. Indeed, we found that the majority of the PGs (59%) were over-irradiated by a mean dose of 4 Gy (up to 10 Gy), resulting in an absolute increase in xerostomia risk of 8% (up to 24%). The ART strategy appears to benefit not only the patients with over-irradiated PGs, in whom it reduced the mean dose by 5 Gy (up to 12 Gy) and the xerostomia risk by 11% (up to 30%), but also the non-over-irradiated PGs. These results therefore suggest a broad use of ART for the majority of locally advanced HNC patients. In our study, four patients (nos. 4, 7, 10, and 12) had no clear benefit from replanning. These patients presented a spontaneous decrease of the mean PG dose during treatment, and no further gain was possible with replanning because of the other constraints (homogeneity, spinal cord, brainstem, etc.). The GORTEC dose-volume constraints were respected for all replannings. The dosimetric benefit of ART has been shown in a limited number of studies, and not exclusively for the PGs. In a series of 22 patients, Schwartz et al. evaluated the impact of one or two replannings using daily CT-on-rails imaging [23]. The mean PG dose was decreased by 3.8% for contralateral PGs and by 9% for ipsilateral PGs, with possible sparing of the oral cavity and larynx. In another series of 20 patients, a single replanning performed at the 3rd or 4th week of treatment decreased the mean PG dose by 10 Gy [9]. On the other hand, Castadot et al. did not show any dosimetric benefit for the PGs with four replannings in a series of 10 patients, although the spinal cord dose was reduced and the CTV56 dose conformation improved [24]. The optimal number and timing of replannings remain unclear. Wu et al. concluded that one replanning decreased the mean PG dose by 3%, two replannings by 5%, and six replannings by 6% [13]. A "maximalist" weekly replanning strategy was considered feasible in our study, as in an ongoing randomized study (ARTIX) comparing a single IMRT planning with weekly IMRT replanning. The benefit of such a strategy still has to be demonstrated against other replanning strategies; another ongoing study (the ARTFORCE trial) tests the benefit of a single replanning [24]. The benefit of each additional weekly replanning also needs to be evaluated. A true adaptive RT strategy should be personalized for each patient, ranging potentially from no replanning to maximalist weekly replanning. Ideally, replanning decisions would be based on either geometrical criteria or cumulated dose monitoring, corresponding to a dose-guided RT approach. Replanning is also particularly time-consuming, complete delineation taking up to 2.5 hours in our experience and that of others [25-28]. Deformable image co-registration software can be used to propagate the OAR contours from the initial planning CT to the per-treatment planning CT, reducing the delineation time by approximately a factor of 3 [26,28]. The CTV delineation should nevertheless be carefully checked to prevent recurrences due to an inadequately reduced CTV. Indeed, the goal of ART in our study was to spare the PGs during treatment as they were spared at planning, while keeping the same appropriate CTV coverage (and not to reduce the CTV coverage).
Analyzing the anatomical variations occurring during the course of IMRT is crucial to understand the PG overdose and to identify early the subgroup of overdosed PGs (59%). We found that mean volumes decreased by 28% for the PGs and 31% for the CTV, in agreement with the literature, which reports decreases of 15% to 28% for the PGs, 69% for the GTV, and 8% to 51% for the CTV [7,9,13,23,26,29]. We found that the PG overdose (without replanning) and the dosimetric benefit of replanning increased with tumor shrinkage and with the reduction in neck thickness. The latter is likely explained by weight loss, tumor shrinkage, and the decrease in PG volume. The reduction in neck thickness consequently leads to dose hotspots in the neck, close to or within the PGs (Figure 1). Other studies have also found that a reduction of the neck diameter increases the risk of over-irradiation [30,31]. The variation of the mean PG dose was larger between CT0 and CT1 than between successive weekly CTs; this difference may be explained by the delay between CT0 and the first weekly CT. In our study, the PG dose differences between the fraction and the initial planning are likely related both to set-up errors (which we did not quantify) and to variations in the volume and shape of the anatomical structures. Systematic set-up errors may increase the mean PG dose by around 3% per mm of displacement [32]. For daily practice, this suggests combining daily bone registration, to correct set-up errors, with replanning, to correct anatomical variations. Fraction-level comparison only provides information for a specific moment, and a full-treatment dose evaluation and comparison is needed. Deformable registration enables dose fractions to be accumulated [33]. Since PG shape and volume variations were limited, the Dice scores in our study were relatively high (0.92). However, the Dice score does not provide any information regarding the anatomical "point to point" correspondence accuracy of the registration. Moreover, the possibility of PG defects appearing over the course of radiotherapy [34] should prompt careful consideration of this cumulated dose approach, thereby justifying an independent "fraction to fraction" dose analysis. Our results, based on both weekly fractions and cumulated doses, were consistent. The 3D dose visualization and the differential DVH of the dose difference between the cumulated and planned doses (Figure 1) moreover revealed the heterogeneity of the hotspot distribution within the PGs, which may also impact the xerostomia risk. The cranial part of the PGs seems to be more critical [35,36], possibly because of the high concentration of salivary gland stem cells at this level [37]. The possible heterogeneity of radiosensitivity within the PG should therefore be investigated more carefully, in order to consider sparing not only the full gland (represented by a mean dose endpoint) but also subparts of the gland. Indeed, a relatively small dose (10 Gy) within the PG may cause severe loss of function [38], and doses greater than 20 Gy may cause up to 90% loss of the acinar cells [39]. It also appears that radiation-induced gland dysfunction is due to membrane damage, secondarily causing necrosis of acinar cells and atrophy of the lobules [40]. Our study has limitations. The small number of patients did not allow us to analyze the potential impact of tumor localization.
Although the CTs of a given patient were always delineated by the same radiation oncologist, intra-observer variability in organ delineation is also a potential source of uncertainty. Moreover, the clinical benefit of weekly replanning was only estimated and not directly measured in this study. In conclusion, an ART strategy combining daily bone registration and weekly replanning may be proposed for locally advanced HNC, with an expected benefit in terms of decreased xerostomia. This PG-sparing strategy is, however, particularly complex and should therefore be assessed within prospective trials, with special attention paid to CTV delineation. The optimal number and timing of replannings remain unclear, and the benefit of a weekly replanning strategy versus other replanning strategies has yet to be demonstrated.
[ "Patients and tumors", "Treatment and planning", "Weekly dose estimations, in cases of replanning and without replanning", "Total cumulated dose estimations by deformable registration", "Anatomical variation description", "Statistical analysis", "Quantification of anatomical variations during the 7 weeks of treatment", "Dose comparison between planned dose, and doses with or without replanning in PGs", "PG dose comparisons at the per-treatment weekly fraction", "PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks", "Anatomical parameters correlated with PG overdose or replanning benefit" ]
[ "The study enrolled a total of 15 patients with a mean age of 65 years (ranging from 50 to 87 years). Patient, tumor, and treatment characteristics are provided on Table 1. All tumors were locally advanced (Stage III or IV, AJCC 7th ed). The mean PG volume was 25.3 cc (ranging from 16.6 cc to 52.1 cc, standard deviation (SD): 8.1 cc).Table 1\nPatient, tumor, and treatment characteristics at the initial planning (CT0)\n\nID\n\nGender\n\nAge\n\nTumor localization\n\nTNM\n\nVolume (cc)\n\nD\nmean\n(Gy)\n\nXerostomia NTCP (%)\n[\n21\n]\n\nCTV70\n\nHLP\n\nCLP\n\nHLP\n\nCLP\n\nHLP\n\nCLP\n1M86TonsilT3N145.252.148.630.231.126.528.32F63TonsilT2Nx26.331.127.531.42629.018.73M74OropharynxT3N2c181.524.920.737.931.144.328.44F66OropharynxT2N2c107.227.823.432.927.932.322.05M57VelumT3N062.420.718.028.127.822.421.76M67OropharynxT3N2c156.224.522.730.829.424.721.47M52OropharynxT4N2165.1N/A21.6N/A28.7N/A23.48M67TrigoneT4N1139.322.019.330.729.227.424.49F65OropharynxT3N3237.523.920.242.431.155.228.210F65OropharynxT4N3257.9N/A24.5N/A35.2N/A37.711M50OropharynxT4N2c434.5N/A17.7N/A36.3N/A40.312M53OropharynxT3N014.416.623.341.324.252.915.913M73OropharynxT3N2c147.029.429.254.632.281.730.714M56LarynxT3N014.022.829.219.79.210.12.715M75HypopharynxT2N276.320.322.429.429.125.024.4M: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PGs; CLP: contralateral PGs; Dmean: mean dose at initial planning; N/A: not applicable (PGs included in the CTV), NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nPatient, tumor, and treatment characteristics at the initial planning (CT0)\n\nM: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PGs; CLP: contralateral PGs; Dmean: mean dose at initial planning; N/A: not applicable (PGs included in the CTV), NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].", "All patients underwent IMRT using a total dose of 70 Gy (2 Gy/fraction/day, 35 fractions), with a simultaneous integrated boost technique [14] and concomitant chemotherapy. Planning CTs (CT0) with intravenous contrast agents were acquired with 2 mm slice thickness from the vertex to the carina. A thermoplastic head and shoulder mask with five fixation points was used. PET-CT and MRI co-registration was used for tumor delineation. Three target volumes were generated. Gross tumor volume (GTV) corresponded to the primary tumor along with involved lymph nodes. Clinical target volume 70 Gy (CTV70) was equal to GTV plus a 5 mm 3D margin, which was adjusted to exclude air cavities and bone mass without evidence of tumor invasion. CTV63 corresponded to the area at high-risk of microscopic spread, while CTV56 corresponded to the prophylactic irradiation area. GTV, CTV63, CTV56, and all organs at risk were manually delineated on each CT slice. Adding a 5 mm 3D margin around the CTVs generated the PTVs. PTV expansion was limited to 3 mm from the skin surface in order to avoid the build-up region and to limit skin toxicity [15]. All IMRT plans were generated using Pinnacle V9.2. Seven Coplanar 6-MV photon beams were employed with a step and shoot IMRT technique. The prescribed dose was 70 Gy to PTV70, 63 Gy to PTV63, and 56 Gy to PTV56. The collapsed cone convolution/superposition algorithm was used for dose calculation. 
Weekly dose estimations, in cases of replanning and without replanning
During the treatment, each patient underwent six weekly CTs (CT1 to CT6) acquired with the same modalities as CT0, except for the intravenous contrast agents, which were not systematically used, particularly in case of cisplatin-based chemotherapy. For each patient, the anatomical structures were manually segmented on each weekly CT by the same radiation oncologist. In case of complete response, the initially macroscopically involved areas were still included in the CTV70, which was adjusted to exclude any air cavities and bone mass without evidence of initial tumor invasion. Actual weekly doses (Figure 2, step 1A) were estimated by calculating the dose distribution on the weekly CT, using the treatment parameters and isocenter from CT0. Weekly re-planned doses (Figure 2, step 1C) were calculated by generating a new IMRT plan on each weekly CT in accordance with the dose constraints described for the initial planning. PTV coverage did not differ between the initial planning and the weekly re-planned CTs. The dose constraints for the organs at risk respected the GORTEC recommendations at the initial planning and in all replannings.
Figure 2. Overall study flow chart. Weekly CT scans were performed during the 7 weeks of treatment. Doses were calculated on each weekly fraction, corresponding either to the initial planning (step 1A) or to a replanning to spare the parotid glands (step 1C). The corresponding cumulated doses were calculated (steps 1B and 1D) using elastic registration. Doses were then compared.

Total cumulated dose estimations by deformable registration
Cumulated doses were estimated for the two scenarios, with or without replanning (Figure 2, steps 1B and 1D), using the following deformable image registration procedure (contour-guided Demons registration algorithm) [17]. For the PGs on each CT, a signed distance map was generated to represent the squared Euclidean distance between each voxel and the PG surface. The distance maps of each PG were then registered using the Demons registration algorithm [18]. The resulting deformation fields were used to map the weekly dose distributions onto the planning CT using tri-linear interpolation. The mapped dose distributions were then summed to estimate the cumulated dose for each PG. The Dice score for PG registration, from the weekly CT to the planning CT, was computed as follows: Dice score = 2 × (|A∩B|)/(|A| + |B|), where A is the delineated PG contour on the weekly CT, B is the planning contour propagated by the registration, and |.| denotes the number of voxels encompassed by the contour. The Dice score ranges from 0 (worst case: no match between the contours) to 1 (perfect match) [19]. A 3D dose difference in the PG was calculated between the cumulated and planned dose distributions.
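Below is a minimal sketch of the two operations described above: mapping a weekly dose onto the planning grid with tri-linear interpolation before summation, and computing the Dice score between a delineated and a propagated contour. The deformation field, grid size, and masks are hypothetical placeholders; the contour-guided Demons registration used in the study is not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(weekly_dose, deformation_field):
    """Resample a weekly dose grid on the planning CT grid.
    deformation_field has shape (3, nz, ny, nx) and gives, for each planning
    voxel, the displacement (in voxels) to the corresponding weekly-CT voxel.
    order=1 corresponds to tri-linear interpolation."""
    grid = np.indices(weekly_dose.shape).astype(float)
    coords = grid + deformation_field
    return map_coordinates(weekly_dose, coords, order=1, mode="nearest")

def dice_score(mask_a, mask_b):
    """Dice = 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical example: accumulate six weekly doses on the planning grid
shape = (40, 64, 64)
rng = np.random.default_rng(2)
cumulated = np.zeros(shape)
for week in range(6):
    weekly_dose = rng.uniform(0, 2.2, size=shape)        # placeholder dose (Gy)
    field = rng.normal(0, 0.5, size=(3,) + shape)        # placeholder deformation
    cumulated += warp_dose(weekly_dose, field)

# Registration quality check, as reported with the Dice score
pg_weekly = np.zeros(shape, dtype=bool);     pg_weekly[10:20, 20:30, 20:30] = True
pg_propagated = np.zeros(shape, dtype=bool); pg_propagated[11:21, 20:30, 21:31] = True
print("Dice:", round(dice_score(pg_weekly, pg_propagated), 2))
```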
Anatomical variation description
Anatomical variations (between CT0 and the weekly CTs) were characterized by the variations in CTV70 and PG volumes, in the distance between the PGs and the CTV70, and in the thickness of the neck (at the level of the geometrical centers of the PGs). The distance between the PGs and the CTV70 corresponded to the minimal distance between the surfaces of the two contours (PG-CTVds), computed using a Euclidean distance map of the first contour, iteratively considering all the points of the second contour and keeping the resulting minimal distance.

Statistical analysis
The impact of the anatomical variations on the PG dose was analyzed considering the Dmean and the full DVH. Their impact on the risk of xerostomia was estimated using the LKB NTCP model (n = 1, m = 0.4, TD50 = 39.9) [20,21], the complication being defined as a salivary flow ratio <25% of the pretreatment value [22]. The mean PG dose differences between the weekly doses (with and without replanning) and the planned dose were calculated (Figure 2). The PG overdose was assessed as the difference between the dose without replanning (at the fraction or cumulated) and the planned dose. The benefit of weekly replanning was assessed as the difference between the doses with and without replanning (at the fraction or cumulated). Linear mixed-effects models were used to test whether the following parameters were correlated with the PG overdose or the benefit of weekly replanning: initial volumes of the CTV70 and of the PGs, decrease (between the weekly CT and the planning CT) of the CTV70 volume (in cc and %) and of the PG volume (in cc and %), shortening of the distance between the PGs and the CTV70, reduction of the neck thickness, and the time between CT0 and the beginning of treatment. All dose and volume comparisons were performed using nonparametric tests (Wilcoxon test). Statistical analysis was carried out using the Statistical Package for the Social Sciences V20.0.
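A minimal sketch of the PG-CTVds metric defined in the anatomical variation description above, i.e., the minimal distance between two structures obtained from a Euclidean distance map, is given below; the masks and voxel spacing are hypothetical placeholders.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_surface_distance(mask_a, mask_b, spacing=(2.0, 1.0, 1.0)):
    """Minimal distance (mm) between two structures: build the Euclidean
    distance map of mask_a and keep the smallest value over the voxels of
    mask_b, as in the PG-CTVds definition."""
    # distance_transform_edt on the complement of mask_a gives, for each voxel
    # outside mask_a, the distance (in mm via `sampling`) to the nearest voxel
    # inside mask_a; voxels inside mask_a get 0.
    dist_to_a = distance_transform_edt(~mask_a.astype(bool), sampling=spacing)
    return float(dist_to_a[mask_b.astype(bool)].min())

# Hypothetical parotid and CTV70 masks on a small grid
pg = np.zeros((40, 64, 64), dtype=bool);  pg[15:25, 10:20, 10:20] = True
ctv = np.zeros((40, 64, 64), dtype=bool); ctv[15:25, 30:40, 30:40] = True
print("PG-CTVds (mm):", min_surface_distance(ctv, pg))
```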
The results are shown in Table 2.Table 2\nParotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all the 15 patients\n\nD\nmean\n(Gy),mean (Min-max;SD)\n\np-value\nPlanned dose (1)30.9 (9.2-54.6; 7.9)-Doses at the fractionWithout replanning (2)33.0 (7.7-61.2; 9.9)With replanning (3)29.4 (4.1-51.7; 8.3)PG overdose (4) = (2)-(1)1.8 (−10.6-24.9; 5.8)<0,001Replanning benefit (5) = (3)-(2)3.8 (0–23.8; 4.0)<0,001Cumulated dosesWithout replanning (2)32.0 (8.7-57.6; 9.3)-With replanning (3)28.6 (4.6-51.2; 8.4)PG Overdose (4) = (2)-(1)1.1 (−7.9-10.0; 4.1)0,1Replanning benefit (5) = (3)-(2)3.6 (0–12.2; 3.3)<0,001PGs: parotid glands; Dmean: First, the mean PG dose was calculated for each patient and each week (DmeanWeekly). Then, the mean of the DMeanWeekly was calculated for each patient (DMeanPt). Finally, the mean of the DmeanPt was calculated for the whole population (D(mean)).p values are calculated using the Wilcoxon test, to test if the Dmean in (1) and (2), and if the Dmean in (2) and (3) are statistically different.\n\nParotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all the 15 patients\n\nPGs: parotid glands; Dmean: First, the mean PG dose was calculated for each patient and each week (DmeanWeekly). Then, the mean of the DMeanWeekly was calculated for each patient (DMeanPt). Finally, the mean of the DmeanPt was calculated for the whole population (D(mean)).\np values are calculated using the Wilcoxon test, to test if the Dmean in (1) and (2), and if the Dmean in (2) and (3) are statistically different.\n PG dose comparisons at the per-treatment weekly fraction In order to assess the PG overdose, comparison was first made between the dose at the fraction without replanning (Figure 2 Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment was showed Figure 3 for two representative patients.Figure 3\nVariation over time of the mean PG dose for two representative patients. Red line corresponding to patient N°1 who presenting an increasing of the mean PG dose cumulated. Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\n\nVariation over time of the mean PG dose for two representative patients. Red line corresponding to patient N°1 who presenting an increasing of the mean PG dose cumulated. Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\nThen, to assess the benefit of replanning, comparison was made between the dose with (Figure 2 step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).\nIn order to assess the PG overdose, comparison was first made between the dose at the fraction without replanning (Figure 2 Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment was showed Figure 3 for two representative patients.Figure 3\nVariation over time of the mean PG dose for two representative patients. 
Red line corresponding to patient N°1 who presenting an increasing of the mean PG dose cumulated. Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\n\nVariation over time of the mean PG dose for two representative patients. Red line corresponding to patient N°1 who presenting an increasing of the mean PG dose cumulated. Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\nThen, to assess the benefit of replanning, comparison was made between the dose with (Figure 2 step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).\n PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks A PG overdose was reported in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patients. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in the 2 PGs), which corresponded to a Dmean increase of an average of 3.7 Gy (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).Figure 4\nParotid gland overdose assessment: Difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each of the parotid gland, for each of the 15 patients (\n4\na). The corresponding impact on the xerostomia risk (%) is presented Figure 4\nb. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].Figure 5\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\n\n\nParotid gland overdose assessment: Difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each of the parotid gland, for each of the 15 patients (\n4\na). The corresponding impact on the xerostomia risk (%) is presented Figure 4\nb. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\n\nFigure 4b shows the corresponding difference in the estimated xerostomia risk. The average absolute increased risk of xerostomia was 3% (ranging from −16.7 to 23.9%, SD 2.9%) in all patients, and was 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to PGs.\nWeekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with replanning and without replanning was therefore 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). 
Figure 5 displays the impact of replanning on the PG dose, showing the average cumulated DVH with replanning (green line) and without replanning (blue line).
Figure 6. Replanning benefit assessment: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and the corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].
In the over-irradiated PG group, replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).", "In order to assess the PG overdose, comparison was first made between the dose at the fraction without replanning (Figure 2, Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during treatment is shown in Figure 3 for two representative patients.
Figure 3. Variation over time of the mean PG dose for two representative patients. The red line corresponds to patient No. 1, whose cumulated mean PG dose increased; the blue line corresponds to patient No. 12, whose cumulated mean PG dose decreased.
Then, to assess the benefit of replanning, comparison was made between the doses with replanning (Figure 2, Step 1C) and without replanning (Figure 2, Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).", "PG overdose and replanning benefit (at the fraction or cumulated) increased with CTV70 shrinkage and with the reduction of neck thickness (p < 0.01). At the fraction level, a reduction of 10 cc in the CTV70 or of 1 mm in the neck thickness led to an increase of 0.3 Gy in the mean PG dose. The PG volume variation had no impact on the mean PG dose." ]
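Taken at face value, the effect sizes reported just above (roughly 0.3 Gy of additional mean PG dose per 10 cc of CTV70 shrinkage, and 0.3 Gy per 1 mm of neck-thickness reduction, at the fraction level) could be used to anticipate the dose drift from two simple anatomical measurements. The sketch below is purely illustrative: the function names, the additive combination of the two effects, and the 1 Gy review threshold are assumptions, not a decision rule proposed in the study.

```python
def estimated_pg_dose_drift(ctv70_shrinkage_cc: float, neck_thickness_loss_mm: float) -> float:
    """Rough per-fraction mean PG dose increase (Gy) from the reported linear effects."""
    return 0.3 * (ctv70_shrinkage_cc / 10.0) + 0.3 * neck_thickness_loss_mm

def suggest_replanning(ctv70_shrinkage_cc: float, neck_thickness_loss_mm: float,
                       threshold_gy: float = 1.0) -> bool:
    """Flag a patient for replanning review when the estimated drift exceeds a threshold."""
    return estimated_pg_dose_drift(ctv70_shrinkage_cc, neck_thickness_loss_mm) >= threshold_gy

# Example: 20 cc of CTV70 shrinkage and 3 mm of neck-thickness loss -> ~1.5 Gy estimated drift.
print(estimated_pg_dose_drift(20.0, 3.0), suggest_replanning(20.0, 3.0))
```

Any such trigger would, of course, need validation against cumulated doses before clinical use.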
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Patients and tumors", "Treatment and planning", "Weekly dose estimations, in cases of replanning and without replanning", "Total cumulated dose estimations by deformable registration", "Anatomical variation description", "Statistical analysis", "Results", "Quantification of anatomical variations during the 7 weeks of treatment", "Dose comparison between planned dose, and doses with or without replanning in PGs", "PG dose comparisons at the per-treatment weekly fraction", "PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks", "Anatomical parameters correlated with PG overdose or replanning benefit", "Discussion and conclusion" ]
[ "The treatment of unresectable Head & Neck Cancer (HNC) consists of a chemoradiotherapy [1,2]. One of the most common toxicity of this treatment is xerostomia, inducing difficulties in swallowing and speaking, loss of taste, and dental caries, with therefore a direct impact on patient quality of life. Xerostomia is mainly caused by radiation induced damage mainly to the parotid glands (PG), and to a lesser extend to the submandibular glands [3]. Intensity modulated radiotherapy (IMRT) permits to deliver highly conformal dose in complex anatomical structures, while sparing critical structures. Indeed, three randomized studies have demonstrated improving (PG) sparing by using IMRT compared to non-IMRT techniques, resulting in better salivary flow and decreased xerostomia risk [4-6]. However, large variations can be observed during the course of IMRT treatment, such as body weight loss [7,8], primary tumor shrinking [7], and PG volume reduction [9]. Due to these anatomical variations and to the tight IMRT dose gradient, the actual administered dose may therefore not correspond to the planned dose, with a risk of radiation overdose to the PGs (Figure 1) [10,11]. This dose difference clearly reduces the expected clinical benefits of IMRT, increasing the risk of xerostomia. Although bone-based image-guided radiation therapy (IGRT) allows for setup error correction, the actual delivered dose to the PGs remains higher than the planned dose [12], due to the fact that IGRT does not take shape/volume variations into account. By performing one or more new planning during the radiotherapy treatment, adaptive radiotherapy (ART) aims to correct such uncertainties. ART has been already shown to decrease the mean PG dose during locally advanced head and neck cancer IMRT [13], but no surrogate of the PG dose difference and of the dosimetric benefit of ART has yet been identified. In the context of IMRT for locally advanced HNC, this study sought to:estimate the difference between the planned dose and the actual delivered dose (without replanning) to the PGs, i.e., the PG overdose;estimate the PG dose difference with replanning and without replanning to spare the PGs while keeping the same planning target volume (PTV), i.e., the benefit of ART;identify anatomical markers correlated with these dose differences (PG overdose and ART benefit).Figure 1\nIllustration of the anatomical variations on the dose distribution. IMRT dose distributions at different times for a given patient, showing the PG overdose without replanning (B) and the benefit of replanning (C). A: Planned dose on the pre-treatment CT (CT0). B: Actual delivered dose without replanning during the treatment (Week 3). C: Adaptive planned dose with replanning to spare the parotid glands (PG) at the same fraction (Week 3). PGs are shown by the red line. The full red represents the Clinical Target Volume (CTV70). The arrow show the head thickness. Figure 1B and 1C compared to 1A shows that the PGs and the CTV70 volumes and the neck thickness have decreased. These anatomical variations have led to dose hotspots in the neck, close to the internal part of the two PG (Figure 1B). 
Replanning (Figure 1C) allowed to spare the PG even better than on the planning (Figure 1A).\nestimate the difference between the planned dose and the actual delivered dose (without replanning) to the PGs, i.e., the PG overdose;\nestimate the PG dose difference with replanning and without replanning to spare the PGs while keeping the same planning target volume (PTV), i.e., the benefit of ART;\nidentify anatomical markers correlated with these dose differences (PG overdose and ART benefit).\n\nIllustration of the anatomical variations on the dose distribution. IMRT dose distributions at different times for a given patient, showing the PG overdose without replanning (B) and the benefit of replanning (C). A: Planned dose on the pre-treatment CT (CT0). B: Actual delivered dose without replanning during the treatment (Week 3). C: Adaptive planned dose with replanning to spare the parotid glands (PG) at the same fraction (Week 3). PGs are shown by the red line. The full red represents the Clinical Target Volume (CTV70). The arrow show the head thickness. Figure 1B and 1C compared to 1A shows that the PGs and the CTV70 volumes and the neck thickness have decreased. These anatomical variations have led to dose hotspots in the neck, close to the internal part of the two PG (Figure 1B). Replanning (Figure 1C) allowed to spare the PG even better than on the planning (Figure 1A).", " Patients and tumors The study enrolled a total of 15 patients with a mean age of 65 years (ranging from 50 to 87 years). Patient, tumor, and treatment characteristics are provided on Table 1. All tumors were locally advanced (Stage III or IV, AJCC 7th ed). The mean PG volume was 25.3 cc (ranging from 16.6 cc to 52.1 cc, standard deviation (SD): 8.1 cc).Table 1\nPatient, tumor, and treatment characteristics at the initial planning (CT0)\n\nID\n\nGender\n\nAge\n\nTumor localization\n\nTNM\n\nVolume (cc)\n\nD\nmean\n(Gy)\n\nXerostomia NTCP (%)\n[\n21\n]\n\nCTV70\n\nHLP\n\nCLP\n\nHLP\n\nCLP\n\nHLP\n\nCLP\n1M86TonsilT3N145.252.148.630.231.126.528.32F63TonsilT2Nx26.331.127.531.42629.018.73M74OropharynxT3N2c181.524.920.737.931.144.328.44F66OropharynxT2N2c107.227.823.432.927.932.322.05M57VelumT3N062.420.718.028.127.822.421.76M67OropharynxT3N2c156.224.522.730.829.424.721.47M52OropharynxT4N2165.1N/A21.6N/A28.7N/A23.48M67TrigoneT4N1139.322.019.330.729.227.424.49F65OropharynxT3N3237.523.920.242.431.155.228.210F65OropharynxT4N3257.9N/A24.5N/A35.2N/A37.711M50OropharynxT4N2c434.5N/A17.7N/A36.3N/A40.312M53OropharynxT3N014.416.623.341.324.252.915.913M73OropharynxT3N2c147.029.429.254.632.281.730.714M56LarynxT3N014.022.829.219.79.210.12.715M75HypopharynxT2N276.320.322.429.429.125.024.4M: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PGs; CLP: contralateral PGs; Dmean: mean dose at initial planning; N/A: not applicable (PGs included in the CTV), NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nPatient, tumor, and treatment characteristics at the initial planning (CT0)\n\nM: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PGs; CLP: contralateral PGs; Dmean: mean dose at initial planning; N/A: not applicable (PGs included in the CTV), NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\nThe study enrolled a total of 15 patients with a mean age of 65 
Patients and tumors
The study enrolled a total of 15 patients with a mean age of 65 years (ranging from 50 to 87 years). Patient, tumor, and treatment characteristics are provided in Table 1. All tumors were locally advanced (stage III or IV, AJCC 7th ed.). The mean PG volume was 25.3 cc (ranging from 16.6 cc to 52.1 cc, standard deviation (SD): 8.1 cc).
Table 1. Patient, tumor, and treatment characteristics at the initial planning (CT0)
ID | Gender | Age | Tumor localization | TNM | CTV70 volume (cc) | HLP volume (cc) | CLP volume (cc) | HLP Dmean (Gy) | CLP Dmean (Gy) | HLP xerostomia NTCP (%) | CLP xerostomia NTCP (%)
1 | M | 86 | Tonsil | T3N1 | 45.2 | 52.1 | 48.6 | 30.2 | 31.1 | 26.5 | 28.3
2 | F | 63 | Tonsil | T2Nx | 26.3 | 31.1 | 27.5 | 31.4 | 26.0 | 29.0 | 18.7
3 | M | 74 | Oropharynx | T3N2c | 181.5 | 24.9 | 20.7 | 37.9 | 31.1 | 44.3 | 28.4
4 | F | 66 | Oropharynx | T2N2c | 107.2 | 27.8 | 23.4 | 32.9 | 27.9 | 32.3 | 22.0
5 | M | 57 | Velum | T3N0 | 62.4 | 20.7 | 18.0 | 28.1 | 27.8 | 22.4 | 21.7
6 | M | 67 | Oropharynx | T3N2c | 156.2 | 24.5 | 22.7 | 30.8 | 29.4 | 24.7 | 21.4
7 | M | 52 | Oropharynx | T4N2 | 165.1 | N/A | 21.6 | N/A | 28.7 | N/A | 23.4
8 | M | 67 | Trigone | T4N1 | 139.3 | 22.0 | 19.3 | 30.7 | 29.2 | 27.4 | 24.4
9 | F | 65 | Oropharynx | T3N3 | 237.5 | 23.9 | 20.2 | 42.4 | 31.1 | 55.2 | 28.2
10 | F | 65 | Oropharynx | T4N3 | 257.9 | N/A | 24.5 | N/A | 35.2 | N/A | 37.7
11 | M | 50 | Oropharynx | T4N2c | 434.5 | N/A | 17.7 | N/A | 36.3 | N/A | 40.3
12 | M | 53 | Oropharynx | T3N0 | 14.4 | 16.6 | 23.3 | 41.3 | 24.2 | 52.9 | 15.9
13 | M | 73 | Oropharynx | T3N2c | 147.0 | 29.4 | 29.2 | 54.6 | 32.2 | 81.7 | 30.7
14 | M | 56 | Larynx | T3N0 | 14.0 | 22.8 | 29.2 | 19.7 | 9.2 | 10.1 | 2.7
15 | M | 75 | Hypopharynx | T2N2 | 76.3 | 20.3 | 22.4 | 29.4 | 29.1 | 25.0 | 24.4
M: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PG; CLP: contralateral PG; Dmean: mean dose at initial planning; N/A: not applicable (PG included in the CTV); NTCP: normal tissue complication probability of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].
Treatment and planning
All patients underwent IMRT to a total dose of 70 Gy (2 Gy/fraction/day, 35 fractions), with a simultaneous integrated boost technique [14] and concomitant chemotherapy. Planning CTs (CT0) with intravenous contrast agents were acquired with a 2 mm slice thickness from the vertex to the carina. A thermoplastic head-and-shoulder mask with five fixation points was used. PET-CT and MRI co-registration was used for tumor delineation. Three target volumes were generated. The gross tumor volume (GTV) corresponded to the primary tumor along with involved lymph nodes. The clinical target volume 70 Gy (CTV70) was equal to the GTV plus a 5 mm 3D margin, adjusted to exclude air cavities and bone mass without evidence of tumor invasion. CTV63 corresponded to the area at high risk of microscopic spread, while CTV56 corresponded to the prophylactic irradiation area. The GTV, CTV63, CTV56, and all organs at risk were manually delineated on each CT slice. Adding a 5 mm 3D margin around the CTVs generated the PTVs. PTV expansion was limited to 3 mm from the skin surface in order to avoid the build-up region and to limit skin toxicity [15]. All IMRT plans were generated using Pinnacle V9.2. Seven coplanar 6-MV photon beams were employed with a step-and-shoot IMRT technique. The prescribed dose was 70 Gy to PTV70, 63 Gy to PTV63, and 56 Gy to PTV56. The collapsed cone convolution/superposition algorithm was used for dose calculation. The maximum dose within the PTV was 110% (D2%). The minimum PTV volume covered by the 95% isodose line was 95%. Dose constraints were set according to the GORTEC recommendations [16]: a mean dose (Dmean) <30 Gy and a median dose <26 Gy for the contralateral PG.
Patients were treated as planned on CT0, and no changes were applied to the dose distribution during treatment. During the treatment course, weekly in-room stereoscopic imaging corrected set-up errors >5 mm. All patients signed an informed consent form. The study was approved by the institutional review board (ARTIX study, NCT01874587).
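The planning criteria listed above (contralateral PG Dmean < 30 Gy and median dose < 26 Gy, PTV near-maximum D2% within 110% of the prescription, and at least 95% of the PTV covered by the 95% isodose) reduce to a few DVH statistics. A minimal sketch of such a check on voxel dose arrays is shown below; it is an illustration under assumed inputs (a dose grid in Gy plus binary masks), not the output of the treatment planning system.

```python
import numpy as np

def plan_checks(dose: np.ndarray, ptv_mask: np.ndarray, contra_pg_mask: np.ndarray,
                prescription_gy: float = 70.0) -> dict:
    """Evaluate the dose criteria quoted at planning on a 3-D dose grid (Gy)."""
    ptv = dose[ptv_mask.astype(bool)]
    pg = dose[contra_pg_mask.astype(bool)]
    return {
        "PG Dmean < 30 Gy": pg.mean() < 30.0,
        "PG median < 26 Gy": np.median(pg) < 26.0,
        # D2%: dose received by the hottest 2% of the PTV, capped at 110% of its prescription.
        "PTV D2% <= 110%": np.percentile(ptv, 98) <= 1.10 * prescription_gy,
        # V95%: fraction of the PTV receiving at least 95% of its prescription.
        "PTV V95% >= 95%": (ptv >= 0.95 * prescription_gy).mean() >= 0.95,
    }
```

The same check would be run per target level (PTV70, PTV63, PTV56) with the corresponding prescription dose.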
Weekly dose estimations, in cases of replanning and without replanning
During the treatment, each patient underwent six weekly CTs (CT1 to CT6) acquired with the same modalities as CT0, except for the intravenous contrast agents (not systematically used, particularly in the case of cisplatin-based chemotherapy). For each patient, the anatomical structures were manually segmented on each weekly CT by the same radiation oncologist. In case of complete response, the initially macroscopically involved areas were still included in the CTV70, which was adjusted to exclude any air cavities and bone mass without evidence of initial tumor invasion.
Actual weekly doses (Figure 2, Step 1A) were estimated by calculating the dose distribution on the weekly CT, using the treatment parameters and isocenter from CT0. Weekly re-planned doses (Figure 2, Step 1C) were calculated by generating a new IMRT plan on each weekly CT in accordance with the dose constraints described for the initial planning. PTV coverage did not differ between the initial planning and the weekly re-planned CTs. The dose constraints for the organs at risk respected the GORTEC recommendations at the initial planning and in all replannings.
Figure 2. Overall study flow chart. Weekly CT scans were performed during the 7 weeks of treatment. Doses were calculated on each weekly fraction, corresponding either to the initial planning (Step 1A) or to a replanning to spare the parotid glands (Step 1C). The corresponding cumulated doses were calculated (Steps 1B and 1D) using elastic registration. Doses were then compared.
Total cumulated dose estimations by deformable registration
Cumulated doses were estimated for the two scenarios, with or without replanning (Figure 2, Steps 1B and 1D), using the following deformable image registration procedure (contour-guided Demons registration algorithm) [17]. For the PGs on each CT, a signed distance map was generated to represent the squared Euclidean distance between each voxel and the PG surface. The distance maps of each PG were then registered using the Demons registration algorithm [18]. The resulting deformation fields were employed to map the weekly dose distributions to the planning CT using tri-linear interpolation. Next, the mapped dose distributions were summed to estimate the cumulated dose for each PG. The average Dice score for PG registration, from the weekly CTs to the planning CT, was computed as follows: Dice score = 2 × (|A∩B|)/(|A| + |B|), where A is the delineated PG contour on the weekly CT, B is the planning contour propagated by the registration, and |.| denotes the number of voxels encompassed by the contour. The Dice score ranges from 0 (worst case: no match between the contours) to 1 (perfect match) [19]. A 3D dose difference in the PG was calculated between the cumulated dose distribution and the planned dose distribution.
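Two computations described above lend themselves to a compact illustration: the Dice overlap between the propagated and delineated PG contours, and the accumulation of the weekly dose distributions mapped onto the planning CT by tri-linear interpolation. The sketch below assumes the deformable registration has already produced, for each week, a dense field giving the weekly-CT (z, y, x) coordinates of every planning-CT voxel; the array names and shapes are assumptions, not the study's actual registration output.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) between two boolean contour masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / float(a.sum() + b.sum())

def accumulate_weekly_doses(weekly_doses, deformation_fields):
    """Warp each weekly dose grid onto the planning CT and sum the results.

    weekly_doses: list of 3-D dose arrays (Gy) defined on the weekly CTs.
    deformation_fields: list of arrays of shape (3,) + planning_shape whose values
        are the (z, y, x) weekly-CT coordinates that each planning-CT voxel maps to.
    """
    cumulated = None
    for dose, field in zip(weekly_doses, deformation_fields):
        # order=1 is tri-linear interpolation, as described in the text.
        mapped = map_coordinates(dose, field, order=1, mode="nearest")
        cumulated = mapped if cumulated is None else cumulated + mapped
    return cumulated
```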
Anatomical variation description
Anatomical variations (between CT0 and the weekly CTs) were characterized by the variations in CTV70 and PG volumes, in the distance between the PGs and the CTV70, and in the thickness of the neck (at the level of the geometrical centers of the PGs). The distance between the PGs and the CTV70 corresponded to the minimal distance between the surfaces of the two contours (PG-CTVds), computed using a Euclidean distance map of the first contour, iteratively considering all the points of the second contour and keeping the resulting minimal distance.
Statistical analysis
The impact of the anatomical variations on the PG dose was analyzed considering the Dmean and the full DVH. Their impact on the risk of xerostomia was estimated using the LKB NTCP model (n = 1, m = 0.4, and TD50 = 39.9 Gy) [20,21], the complication being defined as a salivary flow ratio <25% of the pretreatment one [22].
The mean PG dose differences between the weekly doses (with and without replanning) and the planned dose were calculated (Figure 2). The PG overdose was assessed as the difference between the dose without replanning (at the fraction or cumulated) and the dose at planning. The benefit of weekly replanning was assessed as the difference between the doses with and without replanning (at the fraction or cumulated). Linear mixed-effects models were used to test whether the following parameters were correlated with the PG overdose or with the benefit of weekly replanning: the initial volumes of the CTV70 and of the PGs, the decrease (between the weekly CT and the planning CT) in the volume of the CTV70 (in cc and %) and of the PGs (in cc and %), the shortening of the distance between the PGs and the CTV70, the reduction of the head thickness, and the time between CT0 and the beginning of treatment. All dose and volume comparisons were performed using nonparametric tests (Wilcoxon test). Statistical analysis was carried out using the Statistical Package for the Social Sciences V20.0.", "
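With n = 1, the generalized EUD of the LKB model reduces to the mean dose, so the xerostomia NTCP used in this study only needs Dmean and the two parameters given above (m = 0.4, TD50 = 39.9 Gy). The following sketch implements that standard formulation for illustration; the example doses are arbitrary and the function is not the authors' code.

```python
import math

def lkb_ntcp(mean_dose_gy: float, td50: float = 39.9, m: float = 0.4) -> float:
    """LKB NTCP for xerostomia with n = 1, so the EUD equals the mean PG dose."""
    t = (mean_dose_gy - td50) / (m * td50)
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative use: risk before and after a 3.7 Gy mean-dose increase.
planned, overdosed = 26.0, 26.0 + 3.7
print(f"planned:   {100 * lkb_ntcp(planned):.1f}%")    # ~19%
print(f"overdosed: {100 * lkb_ntcp(overdosed):.1f}%")  # ~26%
```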
Since 3 ipsilateral PGs were completely included within the PTV (patients 7, 10, and 11), they were excluded from the analysis, resulting in a total of 27 analyzed PGs. The average Dice score [19] for PG registration, from the weekly CTs to the planning CT, was 0.92 (ranging from 0.83 to 0.95).
Quantification of anatomical variations during the 7 weeks of treatment
From CT0 to CT6, the PG volumes decreased by a mean value of 28.3% (ranging from 0.0 to 63.4%, SD 18%), corresponding to an average decrease of 1.1 cc/week (ranging from 0.0 to 2.2 cc/week). The CTV70 decreased by a mean value of 31% (ranging from −13% to 73%, SD 28%).
The distance between the PGs and the CTV70 (PG-CTVds) decreased in 74% of the PGs, by 4.3 mm on average (ranging from 0.1 to 12 mm, SD 3.7 mm), whereas it increased in the other 26% of the PGs, by 3.2 mm on average (ranging from 1.1 to 6.3 mm, SD 2.1 mm).
The thickness of the neck decreased in 78% of the patients, by a mean value of 7.9 mm (ranging from 0.1 to 26.6 mm, SD 6.2 mm).
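The PG-CTVds metric reported here is, per the methods description, the minimum of a Euclidean distance map of one contour evaluated over the voxels of the other. On binary masks with known voxel spacing, that computation can be sketched as follows (illustrative only; the 1 mm in-plane spacing is an assumption, only the 2 mm slice thickness is stated in the text):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_surface_distance(mask_a: np.ndarray, mask_b: np.ndarray,
                         spacing=(2.0, 1.0, 1.0)) -> float:
    """Minimal Euclidean distance (mm) from structure B to structure A.

    distance_transform_edt gives, for every voxel outside A, the distance to the
    nearest voxel of A (0 inside A); the minimum over B's voxels is PG-CTVds.
    A result of 0 means the two structures touch or overlap.
    """
    dist_to_a = distance_transform_edt(~mask_a.astype(bool), sampling=spacing)
    return float(dist_to_a[mask_b.astype(bool)].min())
```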
Dose comparison between planned dose and doses with or without replanning in PGs
The per-treatment PG doses (with or without replanning) were analyzed, first considering the weekly fractions and then using the cumulated doses from all weekly fractions, for all 15 patients. The results are shown in Table 2.
Table 2. Parotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all 15 patients
Dose | Dmean (Gy), mean (min–max; SD) | p-value
Planned dose (1) | 30.9 (9.2–54.6; 7.9) | –
Doses at the fraction: without replanning (2) | 33.0 (7.7–61.2; 9.9) | –
Doses at the fraction: with replanning (3) | 29.4 (4.1–51.7; 8.3) | –
Doses at the fraction: PG overdose (4) = (2) − (1) | 1.8 (−10.6–24.9; 5.8) | <0.001
Doses at the fraction: replanning benefit (5) = (3) − (2) | 3.8 (0–23.8; 4.0) | <0.001
Cumulated doses: without replanning (2) | 32.0 (8.7–57.6; 9.3) | –
Cumulated doses: with replanning (3) | 28.6 (4.6–51.2; 8.4) | –
Cumulated doses: PG overdose (4) = (2) − (1) | 1.1 (−7.9–10.0; 4.1) | 0.1
Cumulated doses: replanning benefit (5) = (3) − (2) | 3.6 (0–12.2; 3.3) | <0.001
PGs: parotid glands. Dmean: the mean PG dose was first calculated for each patient and each week (DmeanWeekly); the mean of the DmeanWeekly values was then calculated for each patient (DmeanPt); finally, the mean of the DmeanPt values was calculated for the whole population (Dmean). p-values were calculated using the Wilcoxon test, testing whether the Dmean in (1) and (2), and the Dmean in (2) and (3), are statistically different.
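The p-values in Table 2 come from paired Wilcoxon tests on per-PG mean doses, comparing the planned dose with the dose without replanning, and the dose without replanning with the dose with replanning. A minimal sketch of that comparison is given below; the dose vectors are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-PG mean doses (Gy); the actual analysis used the 27 analyzed PGs.
planned         = np.array([30.2, 31.1, 26.0, 28.1, 41.3, 19.7])
no_replanning   = np.array([33.5, 32.0, 27.1, 29.0, 44.0, 19.5])
with_replanning = np.array([29.8, 30.5, 25.2, 27.4, 40.1, 18.9])

# Paired, non-parametric comparisons as described in the Table 2 footnote.
print(wilcoxon(planned, no_replanning))          # PG overdose: (1) vs (2)
print(wilcoxon(no_replanning, with_replanning))  # replanning benefit: (2) vs (3)
```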
Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\nThen, to assess the benefit of replanning, comparison was made between the dose with (Figure 2 step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).\nIn order to assess the PG overdose, comparison was first made between the dose at the fraction without replanning (Figure 2 Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment was showed Figure 3 for two representative patients.Figure 3\nVariation over time of the mean PG dose for two representative patients. Red line corresponding to patient N°1 who presenting an increasing of the mean PG dose cumulated. Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\n\nVariation over time of the mean PG dose for two representative patients. Red line corresponding to patient N°1 who presenting an increasing of the mean PG dose cumulated. Blue line corresponding to the patient N°12 who presenting a decreasing of the mean PG dose cumulated.\nThen, to assess the benefit of replanning, comparison was made between the dose with (Figure 2 step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).\n PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks A PG overdose was reported in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patients. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in the 2 PGs), which corresponded to a Dmean increase of an average of 3.7 Gy (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).Figure 4\nParotid gland overdose assessment: Difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each of the parotid gland, for each of the 15 patients (\n4\na). The corresponding impact on the xerostomia risk (%) is presented Figure 4\nb. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].Figure 5\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\n\n\nParotid gland overdose assessment: Difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each of the parotid gland, for each of the 15 patients (\n4\na). The corresponding impact on the xerostomia risk (%) is presented Figure 4\nb. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\n\nFigure 4b shows the corresponding difference in the estimated xerostomia risk. 
The average absolute increased risk of xerostomia was 3% (ranging from −16.7 to 23.9%, SD 2.9%) in all patients, and was 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to PGs.\nWeekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with replanning and without replanning was therefore 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of the replanning to decrease the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).Figure 6\nReplanning benefit assessement: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each of the parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nReplanning benefit assessement: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each of the parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\nIn the over-irradiated PG group, the replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).\nA PG overdose was reported in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patients. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in the 2 PGs), which corresponded to a Dmean increase of an average of 3.7 Gy (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).Figure 4\nParotid gland overdose assessment: Difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each of the parotid gland, for each of the 15 patients (\n4\na). The corresponding impact on the xerostomia risk (%) is presented Figure 4\nb. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].Figure 5\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\n\n\nParotid gland overdose assessment: Difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each of the parotid gland, for each of the 15 patients (\n4\na). The corresponding impact on the xerostomia risk (%) is presented Figure 4\nb. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\n\nFigure 4b shows the corresponding difference in the estimated xerostomia risk. 
The average absolute increased risk of xerostomia was 3% (ranging from −16.7 to 23.9%, SD 2.9%) in all patients, and was 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to PGs.\nWeekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with replanning and without replanning was therefore 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of the replanning to decrease the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).Figure 6\nReplanning benefit assessement: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each of the parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\n\nReplanning benefit assessement: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each of the parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\nIn the over-irradiated PG group, the replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).\nThe per-treatment PG doses (with or without replanning) were analyzed, first considering the weekly fractions and then, using the cumulated doses from all weekly fractions, for all the 15 patients. The results are shown in Table 2.Table 2\nParotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all the 15 patients\n\nD\nmean\n(Gy),mean (Min-max;SD)\n\np-value\nPlanned dose (1)30.9 (9.2-54.6; 7.9)-Doses at the fractionWithout replanning (2)33.0 (7.7-61.2; 9.9)With replanning (3)29.4 (4.1-51.7; 8.3)PG overdose (4) = (2)-(1)1.8 (−10.6-24.9; 5.8)<0,001Replanning benefit (5) = (3)-(2)3.8 (0–23.8; 4.0)<0,001Cumulated dosesWithout replanning (2)32.0 (8.7-57.6; 9.3)-With replanning (3)28.6 (4.6-51.2; 8.4)PG Overdose (4) = (2)-(1)1.1 (−7.9-10.0; 4.1)0,1Replanning benefit (5) = (3)-(2)3.6 (0–12.2; 3.3)<0,001PGs: parotid glands; Dmean: First, the mean PG dose was calculated for each patient and each week (DmeanWeekly). Then, the mean of the DMeanWeekly was calculated for each patient (DMeanPt). Finally, the mean of the DmeanPt was calculated for the whole population (D(mean)).p values are calculated using the Wilcoxon test, to test if the Dmean in (1) and (2), and if the Dmean in (2) and (3) are statistically different.\n\nParotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all the 15 patients\n\nPGs: parotid glands; Dmean: First, the mean PG dose was calculated for each patient and each week (DmeanWeekly). Then, the mean of the DMeanWeekly was calculated for each patient (DMeanPt). 
Finally, the mean of the DmeanPt was calculated for the whole population (D(mean)).\np values are calculated using the Wilcoxon test, to test whether the Dmean in (1) and (2), and the Dmean in (2) and (3), are statistically different.\n PG dose comparisons at the per-treatment weekly fraction In order to assess the PG overdose, a comparison was first made between the dose at the fraction without replanning (Figure 2 Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment is shown in Figure 3 for two representative patients.Figure 3\nVariation over time of the mean PG dose for two representative patients. The red line corresponds to patient N°1, who presented an increase of the cumulated mean PG dose; the blue line corresponds to patient N°12, who presented a decrease of the cumulated mean PG dose.\nThen, to assess the benefit of replanning, a comparison was made between the dose with replanning (Figure 2 Step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).\n PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks A PG overdose was reported in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patient. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in both PGs), which corresponded to a Dmean increase of 3.7 Gy on average (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).Figure 4\nParotid gland overdose assessment: difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each parotid gland, for each of the 15 patients (4a). The corresponding impact on the xerostomia risk (%) is presented in Figure 4b. NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].Figure 5\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\nFigure 4b shows the corresponding difference in the estimated xerostomia risk. The average absolute increase in xerostomia risk was 3% (ranging from −16.7 to 23.9%, SD 2.9%) in all patients, and 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to the PGs.\nWeekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with replanning and without replanning was 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of replanning in decreasing the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).Figure 6\nReplanning benefit assessment: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].\nIn the over-irradiated PG group, replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).\n Anatomical parameters correlated with PG overdose or replanning benefit PG overdose and replanning benefit (at the fraction or cumulated) increased with the CTV70 shrinkage and the reduction of neck thickness (p < 0.01). At the fraction, a reduction of 10 cc of the CTV70 or of 1 mm of the neck thickness led to an increase of the mean PG dose of 0.3 Gy. The PG volume variation had no impact on the mean PG dose.
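The xerostomia risks quoted above are derived from the mean PG dose through the LKB NTCP model described in the statistical analysis section (n = 1, m = 0.4, TD50 = 39.9 Gy). Below is a minimal, illustrative sketch of that calculation; the function name and the example doses are ours, not part of the study.

```python
from math import erf, sqrt

def xerostomia_ntcp(mean_pg_dose_gy, td50=39.9, m=0.4):
    """LKB NTCP for xerostomia; with the volume parameter n = 1,
    the generalized EUD reduces to the mean parotid gland dose."""
    t = (mean_pg_dose_gy - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF

# e.g. a cumulated mean PG dose of 30 Gy corresponds to an NTCP of roughly 27%,
# and 39.9 Gy (TD50) to 50% by construction.
```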
", "From CT0 to CT6, the PG volumes decreased by a mean value of 28.3% (ranging from 0.0 to 63.4%, SD 18%), corresponding to an average decrease of 1.1 cc/week (ranging from 0.0 to 2.2 cc/week). The CTV70 decreased by a mean value of 31% (ranging from −13% to 73%, SD 28%).\nThe distance between the PGs and the CTV (PG-CTVds) decreased in 74% of the PGs by 4.3 mm on average (ranging from 0.1 to 12 mm, SD 3.7 mm), whereas it increased in the other 26% of the PGs by 3.2 mm on average (ranging from 1.1 to 6.3 mm, SD 2.1 mm).\nThe thickness of the neck decreased for 78% of the patients by a mean value of 7.9 mm (ranging from 0.1 to 26.6 mm, SD 6.2 mm).", "The per-treatment PG doses (with or without replanning) were analyzed, first considering the weekly fractions and then using the cumulated doses from all weekly fractions, for all 15 patients. The results are shown in Table 2.Table 2\nParotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all the 15 patients\nDmean (Gy), mean (min to max; SD) | p-value\nPlanned dose (1): 30.9 (9.2 to 54.6; 7.9) | -\nDoses at the fraction:\n Without replanning (2): 33.0 (7.7 to 61.2; 9.9) | -\n With replanning (3): 29.4 (4.1 to 51.7; 8.3) | -\n PG overdose (4) = (2)-(1): 1.8 (−10.6 to 24.9; 5.8) | <0.001\n Replanning benefit (5) = (3)-(2): 3.8 (0 to 23.8; 4.0) | <0.001\nCumulated doses:\n Without replanning (2): 32.0 (8.7 to 57.6; 9.3) | -\n With replanning (3): 28.6 (4.6 to 51.2; 8.4) | -\n PG overdose (4) = (2)-(1): 1.1 (−7.9 to 10.0; 4.1) | 0.1\n Replanning benefit (5) = (3)-(2): 3.6 (0 to 12.2; 3.3) | <0.001\nPGs: parotid glands. Dmean: first, the mean PG dose was calculated for each patient and each week (DmeanWeekly); then, the mean of the DmeanWeekly was calculated for each patient (DmeanPt); finally, the mean of the DmeanPt was calculated for the whole population (D(mean)).\np values are calculated using the Wilcoxon test, to test whether the Dmean in (1) and (2), and the Dmean in (2) and (3), are statistically different.\n PG dose comparisons at the per-treatment weekly fraction In order to assess the PG overdose, a comparison was first made between the dose at the fraction without replanning (Figure 2 Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment is shown in Figure 3 for two representative patients.Figure 3\nVariation over time of the mean PG dose for two representative patients. The red line corresponds to patient N°1, who presented an increase of the cumulated mean PG dose; the blue line corresponds to patient N°12, who presented a decrease of the cumulated mean PG dose.\nThen, to assess the benefit of replanning, a comparison was made between the dose with replanning (Figure 2 Step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).\n PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks A PG overdose was reported in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patient. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in both PGs), which corresponded to a Dmean increase of 3.7 Gy on average (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).Figure 4\nParotid gland overdose assessment: difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each parotid gland, for each of the 15 patients (4a). The corresponding impact on the xerostomia risk (%) is presented in Figure 4b. NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].Figure 5\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\nFigure 4b shows the corresponding difference in the estimated xerostomia risk. The average absolute increase in xerostomia risk was 3% (ranging from −16.7 to 23.9%, SD 2.9%) in all patients, and 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to the PGs.\nWeekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with replanning and without replanning was 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of replanning in decreasing the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).Figure 6\nReplanning benefit assessment: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].\nIn the over-irradiated PG group, replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).", "In order to assess the PG overdose, a comparison was first made between the dose at the fraction without replanning (Figure 2 Step 1A) and the planned dose. For 67% of the plans, the Dmean increased on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment is shown in Figure 3 for two representative patients.Figure 3\nVariation over time of the mean PG dose for two representative patients. The red line corresponds to patient N°1, who presented an increase of the cumulated mean PG dose; the blue line corresponds to patient N°12, who presented a decrease of the cumulated mean PG dose.\nThen, to assess the benefit of replanning, a comparison was made between the dose with replanning (Figure 2 Step 1C) and without replanning (Figure 2 Step 1A). In 85% of the plans, replanning decreased the Dmean on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).", "A PG overdose was reported in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patient. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in both PGs), which corresponded to a Dmean increase of 3.7 Gy on average (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).Figure 4\nParotid gland overdose assessment: difference between the mean cumulated dose (without replanning) and the mean dose at the planning, in each parotid gland, for each of the 15 patients (4a). The corresponding impact on the xerostomia risk (%) is presented in Figure 4b. NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].Figure 5\nMean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).\nFigure 4b shows the corresponding difference in the estimated xerostomia risk. The average absolute increase in xerostomia risk was 3% (ranging from −16.7 to 23.9%, SD 2.9%) in all patients, and 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to the PGs.\nWeekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6). In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with replanning and without replanning was 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of replanning in decreasing the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).Figure 6\nReplanning benefit assessment: cumulated mean dose difference between the dose with replanning and the dose without replanning, in each parotid gland (ipsilateral and contralateral), for each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication probability of xerostomia, defined as a salivary flow ratio <25% of the pretreatment one [21].\nIn the over-irradiated PG group, replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).", "PG overdose and replanning benefit (at the fraction or cumulated) increased with the CTV70 shrinkage and the reduction of neck thickness (p < 0.01). At the fraction, a reduction of 10 cc of the CTV70 or of 1 mm of the neck thickness led to an increase of the mean PG dose of 0.3 Gy. The PG volume variation had no impact on the mean PG dose.
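For readers who want to reproduce the Table 2 style of summary, here is a minimal pandas/SciPy sketch of the DmeanWeekly -> DmeanPt -> D(mean) aggregation and the paired Wilcoxon comparisons. The data-frame layout and column names are illustrative assumptions, not the authors' actual pipeline (the study used SPSS).

```python
import pandas as pd
from scipy.stats import wilcoxon

# df: one row per parotid gland and week, with hypothetical columns
# 'patient', 'week', 'dmean_planned', 'dmean_no_replan', 'dmean_replan' (Gy)
def table2_summary(df: pd.DataFrame):
    cols = ["dmean_planned", "dmean_no_replan", "dmean_replan"]
    # DmeanWeekly: mean PG dose per patient and week
    weekly = df.groupby(["patient", "week"])[cols].mean()
    # DmeanPt: mean of the weekly values per patient
    per_patient = weekly.groupby(level="patient").mean()
    # D(mean): mean over the whole population
    population = per_patient.mean()
    # paired, non-parametric comparisons: (1) vs (2) and (2) vs (3)
    p_overdose = wilcoxon(per_patient["dmean_no_replan"], per_patient["dmean_planned"]).pvalue
    p_benefit = wilcoxon(per_patient["dmean_replan"], per_patient["dmean_no_replan"]).pvalue
    return population, p_overdose, p_benefit
```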
", "The main goal of definitive chemoradiotherapy in locally advanced HNC is to improve locoregional control while keeping a high quality of life. Reducing the dose to the PGs during the whole course of IMRT, and therefore xerostomia, is a major challenge. Indeed, we found that the majority of the PGs (59%) were over-irradiated by a mean dose of 4 Gy (up to 10 Gy), resulting in an absolute increase in the xerostomia risk of 8% (up to 24%). The ART strategy appears to benefit not only the patients with over-irradiated PGs, reducing the mean dose by 5 Gy (up to 12 Gy) and the xerostomia risk by 11% (up to 30%), but also the non-over-irradiated PGs. These results thus argue for a broad use of ART for the majority of locally advanced HNC patients. In our study, four patients (N° 4, 7, 10 and 12) had no clear benefit from replanning. These patients presented a spontaneous decrease of the mean PG dose during the treatment, and no further gain was possible with replanning because of the other constraints (homogeneity, spinal cord, brainstem…). The GORTEC dose-volume constraints were respected for all the replannings.\nThe dosimetric benefit of ART has been shown in a limited number of studies, and not exclusively for the PGs. In a series of 22 patients, Schwartz et al. evaluated the impact of one and two replannings using daily CT on rails [23]. The mean PG dose was decreased by 3.8% for contralateral PGs and by 9% for ipsilateral PGs, with possible sparing of the oral cavity and larynx. In another series of 20 patients, a single replanning performed at the 3rd or 4th week of treatment decreased the mean PG dose by 10 Gy [9]. On the other hand, Castadot et al. did not show any dosimetric benefit for the PGs when using four replannings in a series of 10 patients, although the spinal cord dose was reduced and the CTV56 dose conformation improved [24].\nThe optimal number and timing of replannings are unclear. Wu et al. concluded that one replanning decreased the mean PG dose by 3%, two replannings by 5%, and six replannings by 6% [13]. A "maximalist" weekly replanning strategy was considered feasible in our study, as in an ongoing randomized study (ARTIX) comparing a single IMRT planning to weekly IMRT replanning. The benefit of such a strategy has to be demonstrated compared to other replanning strategies. Ongoing studies (such as the ARTFORCE trial) are testing the benefit of a single replanning [24]. The benefit of each supplementary weekly replanning has to be evaluated. A true adaptive RT strategy should be personalized for each patient, ranging potentially from no replanning to a maximalist weekly replanning. Ideally, replanning decisions would be based on either geometrical criteria or cumulated dose monitoring, corresponding to the dose-guided RT approach. Replanning is also particularly time-consuming, complete delineation taking up to 2.5 hours in our experience and that of others [25-28]. Deformable image co-registration software can be used to propagate the OAR contours from the initial planning CT to the per-therapeutic planning CT, reducing the delineation time by approximately a factor of 3 [26,28]. The CTV delineation should however be carefully checked, to prevent recurrence due to an inadequately reduced CTV. Indeed, the goal of ART in our study was to spare the PGs during treatment as they were spared at the planning, while keeping the same appropriate CTV coverage (and not to reduce the CTV coverage).\nThe analysis of the anatomical variations occurring within the course of IMRT is crucial to understand the PG overdose and to identify early the sub-group of overdosed PGs (59%). We found that mean volumes decreased by 28% for the PGs and 31% for the CTV, in agreement with the literature reporting values of 15% to 28% for the PGs, 69% for the GTV, and 8% to 51% for the CTV [7,9,13,23,26,29]. We found that the PG overdose (without replanning) and the dosimetric benefit of replanning increased with the tumor shrinkage and the reduction of neck thickness. The latter is likely explained by weight loss, tumor shrinkage and the decrease of the PG volume. The reduction of the neck thickness consequently leads to the occurrence of dose hotspots in the neck, close to or within the PGs (Figure 1). Other studies also found that a reduction of the neck diameter increases the risk of over-irradiation [30,31]. The variation of the mean PG dose was larger between CT0 and CT1 than between subsequent weekly CTs. This difference may be explained by the delay between CT0 and the first weekly CT. In our study, the PG dose differences between the fraction and the initial planning are likely related to both the set-up error (which we did not quantify) and the volume/shape variations of the anatomical structures. Systematic set-up errors may increase the mean PG dose by around 3% per mm of displacement [32]. For daily practice, this suggests combining a daily bone registration to correct the set-up errors with replanning to correct the anatomical variations.\nFraction comparison only provides information for a specific moment, and there is a need for full treatment dose evaluation and comparison. Deformable registration enables dose fraction accumulation [33]. Since PG shape and volume variations were limited, our study's Dice scores were relatively high (0.92). However, the Dice score does not provide any information regarding the registration's anatomical "point to point" correspondence accuracy. Moreover, the possibility of PG defects observed over the course of radiotherapy [34] should prompt careful consideration of this cumulated dose approach, thereby justifying an independent "fraction to fraction" dose analysis. Our results, based on both the weekly fractions and the cumulated dose, were consistent. The 3D dose visualization and the differential DVH of the dose difference between the cumulated dose and the planning dose (Figure 1) moreover revealed the heterogeneity of the hotspot distribution in the PGs, which may also impact the xerostomia risk. The cranial part of the PGs seems to be more critical [35,36], maybe due to the presence of an important concentration of salivary gland stem cells at this level [37]. The possible heterogeneity of radiosensitivity within the PG could therefore be more carefully investigated, in order to consider sparing not only the full gland (represented by a mean dose endpoint) but also subparts of the gland. Indeed, a relatively small dose (10 Gy) within the PG may cause severe loss of function [38], and a dose greater than 20 Gy may cause up to 90% loss of the acinar cells [39]. It also seems that radiation-induced gland dysfunction is due to membrane damage, secondarily causing necrosis of acinar cells and atrophy of the lobules [40].\nOur study exhibits limitations. The small patient number did not allow us to analyze the potential impact of tumor localization. Even if the CTs from a single patient were always delineated by the same radiation oncologist, intra-observer variability in organ delineation is also potentially responsible for uncertainties. Moreover, the clinical benefit of the weekly replanning was estimated and not directly measured in the study.\nIn conclusion, an ART strategy combining a daily bone registration and a weekly replanning may be proposed for locally advanced HNC, with an expected benefit of decreasing xerostomia. This PG-sparing strategy appears however particularly complex and should therefore be assessed within prospective trials, with special attention to CTV delineation. The optimal number and timing of replannings are unclear. The benefit of a weekly replanning strategy versus other replanning strategies has still to be demonstrated." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, "conclusion" ]
[ "Head and neck cancer", "Anatomical variation", "Adaptive RT", "Xerostomia" ]
Introduction: The treatment of unresectable Head & Neck Cancer (HNC) consists of chemoradiotherapy [1,2]. One of the most common toxicities of this treatment is xerostomia, inducing difficulties in swallowing and speaking, loss of taste, and dental caries, and therefore having a direct impact on patient quality of life. Xerostomia is mainly caused by radiation-induced damage to the parotid glands (PG), and to a lesser extent to the submandibular glands [3]. Intensity modulated radiotherapy (IMRT) makes it possible to deliver a highly conformal dose to complex anatomical structures, while sparing critical structures. Indeed, three randomized studies have demonstrated improved PG sparing by using IMRT compared to non-IMRT techniques, resulting in better salivary flow and a decreased xerostomia risk [4-6]. However, large variations can be observed during the course of IMRT treatment, such as body weight loss [7,8], primary tumor shrinkage [7], and PG volume reduction [9]. Due to these anatomical variations and to the tight IMRT dose gradient, the actually administered dose may therefore not correspond to the planned dose, with a risk of radiation overdose to the PGs (Figure 1) [10,11]. This dose difference clearly reduces the expected clinical benefits of IMRT, increasing the risk of xerostomia. Although bone-based image-guided radiation therapy (IGRT) allows for setup error correction, the actually delivered dose to the PGs remains higher than the planned dose [12], because IGRT does not take shape/volume variations into account. By performing one or more new plannings during the radiotherapy treatment, adaptive radiotherapy (ART) aims to correct such uncertainties. ART has already been shown to decrease the mean PG dose during IMRT for locally advanced head and neck cancer [13], but no surrogate of the PG dose difference and of the dosimetric benefit of ART has yet been identified. In the context of IMRT for locally advanced HNC, this study sought to: estimate the difference between the planned dose and the actual delivered dose (without replanning) to the PGs, i.e., the PG overdose; estimate the PG dose difference with and without replanning to spare the PGs while keeping the same planning target volume (PTV), i.e., the benefit of ART; and identify anatomical markers correlated with these dose differences (PG overdose and ART benefit).Figure 1 Illustration of the anatomical variations on the dose distribution. IMRT dose distributions at different times for a given patient, showing the PG overdose without replanning (B) and the benefit of replanning (C). A: Planned dose on the pre-treatment CT (CT0). B: Actual delivered dose without replanning during the treatment (Week 3). C: Adaptive planned dose with replanning to spare the parotid glands (PG) at the same fraction (Week 3). The PGs are shown by the red line. The solid red area represents the Clinical Target Volume (CTV70). The arrow shows the neck thickness. Figures 1B and 1C, compared to 1A, show that the PG and CTV70 volumes and the neck thickness have decreased. These anatomical variations have led to dose hotspots in the neck, close to the internal part of the two PGs (Figure 1B). Replanning (Figure 1C) allowed the PGs to be spared even better than in the initial planning (Figure 1A).
Materials and methods: Patients and tumors The study enrolled a total of 15 patients with a mean age of 65 years (ranging from 50 to 87 years). Patient, tumor, and treatment characteristics are provided in Table 1. All tumors were locally advanced (Stage III or IV, AJCC 7th ed). The mean PG volume was 25.3 cc (ranging from 16.6 cc to 52.1 cc, standard deviation (SD): 8.1 cc).Table 1\nPatient, tumor, and treatment characteristics at the initial planning (CT0)\nID | Gender | Age | Tumor localization | TNM | Volume (cc): CTV70 / HLP / CLP | Dmean (Gy): HLP / CLP | Xerostomia NTCP (%) [21]: HLP / CLP\n1 | M | 86 | Tonsil | T3N1 | 45.2 / 52.1 / 48.6 | 30.2 / 31.1 | 26.5 / 28.3\n2 | F | 63 | Tonsil | T2Nx | 26.3 / 31.1 / 27.5 | 31.4 / 26 | 29.0 / 18.7\n3 | M | 74 | Oropharynx | T3N2c | 181.5 / 24.9 / 20.7 | 37.9 / 31.1 | 44.3 / 28.4\n4 | F | 66 | Oropharynx | T2N2c | 107.2 / 27.8 / 23.4 | 32.9 / 27.9 | 32.3 / 22.0\n5 | M | 57 | Velum | T3N0 | 62.4 / 20.7 / 18.0 | 28.1 / 27.8 | 22.4 / 21.7\n6 | M | 67 | Oropharynx | T3N2c | 156.2 / 24.5 / 22.7 | 30.8 / 29.4 | 24.7 / 21.4\n7 | M | 52 | Oropharynx | T4N2 | 165.1 / N/A / 21.6 | N/A / 28.7 | N/A / 23.4\n8 | M | 67 | Trigone | T4N1 | 139.3 / 22.0 / 19.3 | 30.7 / 29.2 | 27.4 / 24.4\n9 | F | 65 | Oropharynx | T3N3 | 237.5 / 23.9 / 20.2 | 42.4 / 31.1 | 55.2 / 28.2\n10 | F | 65 | Oropharynx | T4N3 | 257.9 / N/A / 24.5 | N/A / 35.2 | N/A / 37.7\n11 | M | 50 | Oropharynx | T4N2c | 434.5 / N/A / 17.7 | N/A / 36.3 | N/A / 40.3\n12 | M | 53 | Oropharynx | T3N0 | 14.4 / 16.6 / 23.3 | 41.3 / 24.2 | 52.9 / 15.9\n13 | M | 73 | Oropharynx | T3N2c | 147.0 / 29.4 / 29.2 | 54.6 / 32.2 | 81.7 / 30.7\n14 | M | 56 | Larynx | T3N0 | 14.0 / 22.8 / 29.2 | 19.7 / 9.2 | 10.1 / 2.7\n15 | M | 75 | Hypopharynx | T2N2 | 76.3 / 20.3 / 22.4 | 29.4 / 29.1 | 25.0 / 24.4\nM: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PG; CLP: contralateral PG; Dmean: mean dose at initial planning; N/A: not applicable (PG included in the CTV); NTCP: normal tissue complication probability of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].\nTreatment and planning All patients underwent IMRT using a total dose of 70 Gy (2 Gy/fraction/day, 35 fractions), with a simultaneous integrated boost technique [14] and concomitant chemotherapy. Planning CTs (CT0) with intravenous contrast agents were acquired with a 2 mm slice thickness from the vertex to the carina. A thermoplastic head and shoulder mask with five fixation points was used. PET-CT and MRI co-registration was used for tumor delineation. Three target volumes were generated. The gross tumor volume (GTV) corresponded to the primary tumor along with the involved lymph nodes. The clinical target volume 70 Gy (CTV70) was equal to the GTV plus a 5 mm 3D margin, adjusted to exclude air cavities and bone mass without evidence of tumor invasion. CTV63 corresponded to the area at high risk of microscopic spread, while CTV56 corresponded to the prophylactic irradiation area. The GTV, CTV63, CTV56, and all organs at risk were manually delineated on each CT slice. Adding a 5 mm 3D margin around the CTVs generated the PTVs. PTV expansion was limited to 3 mm from the skin surface in order to avoid the build-up region and to limit skin toxicity [15]. All IMRT plans were generated using Pinnacle V9.2. Seven coplanar 6-MV photon beams were employed with a step-and-shoot IMRT technique. The prescribed dose was 70 Gy to PTV70, 63 Gy to PTV63, and 56 Gy to PTV56. The collapsed cone convolution/superposition algorithm was used for dose calculation. The maximum dose within the PTV was 110% (D2%). The minimum PTV volume covered by the 95% isodose line was 95%. Dose constraints were set according to the GORTEC recommendations [16]: a mean dose (Dmean) <30 Gy and a median dose <26 Gy for the contralateral PGs. Patients were treated as planned on CT0 and no changes were applied to the dose distribution during treatment. During the treatment course, weekly in-room stereoscopic imaging corrected set-up errors >5 mm. All patients signed an informed consent form. The study was approved by the institutional review board (ARTIX study NCT01874587).
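The CTV-to-PTV expansion described above is a plain isotropic 3D margin. A minimal sketch of such an expansion on a binary mask, using a Euclidean distance map so that anisotropic voxel spacing is handled, might look as follows; the function and mask names are illustrative assumptions, and this is not the Pinnacle implementation used in the study.

```python
import numpy as np
from scipy import ndimage

def expand_margin(mask, margin_mm, spacing_mm):
    """Isotropic 3D expansion of a binary structure (e.g. CTV -> PTV) by margin_mm."""
    mask = np.asarray(mask, bool)
    # distance (in mm) from every voxel to the nearest structure voxel
    dist = ndimage.distance_transform_edt(~mask, sampling=spacing_mm)
    return mask | (dist <= margin_mm)

# ptv70 = expand_margin(ctv70, 5.0, spacing_mm=(2.0, 1.0, 1.0))
# (in the study the PTV was additionally limited to 3 mm inside the skin surface)
```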
Weekly dose estimations, in cases of replanning and without replanning During the treatment, each patient underwent six weekly CTs (CT1 to CT6) according to the same modalities as CT0, except for the intravenous contrast agents (not systematically used, particularly in the case of cisplatin-based chemotherapy). For each patient, the anatomical structures were manually segmented on each weekly CT by the same radiation oncologist. In case of complete response, the initially macroscopically-involved areas were still included in the CTV70, which was adjusted to exclude any air cavities and bone mass without evidence of initial tumor invasion. Actual weekly doses (Figure 2, Step 1A) were estimated by calculating the dose distribution on the weekly CT, using the treatment parameters and isocenter from CT0. Weekly re-planned doses (Figure 2, Step 1C) were calculated by generating a new IMRT plan on each weekly CT in accordance with the dose constraints described for the initial planning. PTV coverage did not differ between the initial planning and the weekly re-planned CT. The dose constraints for the organs at risk respected the GORTEC recommendations at the initial planning and in all replannings.Figure 2 Overall study flow chart. Weekly CT scans were performed during the 7 weeks of treatment. Doses were calculated on each weekly fraction, corresponding either to the initial planning (Step 1A) or to a replanning to spare the parotid glands (Step 1C). Corresponding cumulated doses were calculated (Steps 1B and 1D) using elastic registration. Doses were then compared. Total cumulated dose estimations by deformable registration Cumulated doses were estimated for the two scenarios, with or without replanning (Figure 2, Steps 1B and 1D), according to the following deformable image registration procedure (contour-guided Demons registration algorithm) [17]. For the PGs on each CT, a signed distance map was generated to represent the squared Euclidean distance between each voxel and the PG surface. The distance maps of each PG were then registered using the Demons registration algorithm [18]. The resulting deformation fields were employed to map the weekly dose distributions to the planning CT using tri-linear interpolation. Next, the mapped dose distributions were summed to estimate the cumulated dose for each PG. The average Dice score for PG registration, from the weekly CT to each planning CT, was computed as follows: Dice score = 2×(|A∩B|)/(|A| + |B|), where A is the delineated PG contour on the weekly CT, B is the planning contour propagated by the registration, and |.| denotes the number of voxels encompassed by the contour. The Dice score ranges from 0 (worst case: no match between both contours) to 1 (perfect match) [19]. A 3D dose difference in the PG was calculated between the cumulated dose distribution and the planned dose distribution.
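To make the dose-accumulation step more concrete, here is a minimal NumPy/SciPy sketch of (i) warping each weekly dose grid onto the planning CT with tri-linear interpolation, given a voxel-wise deformation field, and (ii) the Dice score used to check the PG registration. How the deformation fields themselves are obtained (contour-guided Demons on the PG distance maps, as in the study) is assumed given; the function names and array layout are illustrative.

```python
import numpy as np
from scipy import ndimage

def dice_score(a, b):
    """Dice = 2|A intersect B| / (|A| + |B|) for two boolean masks on the same grid."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def accumulate_weekly_doses(weekly_doses, deformation_fields):
    """Sum weekly dose grids on the planning grid.
    deformation_fields[k] has shape (3, nz, ny, nx) and gives, for every
    planning-grid voxel, the corresponding voxel coordinates in week k."""
    cumulated = np.zeros_like(weekly_doses[0], dtype=float)
    for dose, coords in zip(weekly_doses, deformation_fields):
        cumulated += ndimage.map_coordinates(dose, coords, order=1)  # tri-linear
    return cumulated
```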
Anatomical variation description Anatomical variations (between CT0 and the weekly CTs) were characterized by variations in the CTV70 and PG volumes, in the distances between the PGs and the CTV70, and in the thickness of the neck (at the level of the geometrical centers of the PGs). The distance between the PGs and the CTV70 corresponded to the minimal distance between the surfaces of the two contours (PG-CTVds), computed using a Euclidean distance map of the first contour, iteratively considering all the points of the second contour and keeping the resulting minimal distance.
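A minimal sketch of the PG-CTVds computation just described (a Euclidean distance map of one contour, scanned over the surface voxels of the other contour) could look like this; mask and argument names are illustrative, and the study's exact implementation may differ.

```python
import numpy as np
from scipy import ndimage

def min_surface_distance(mask_a, mask_b, spacing_mm):
    """Minimal distance (mm) between the surfaces of two binary structures,
    e.g. a parotid gland and the CTV70 (returns 0 if they touch or overlap)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    # distance map: for every voxel, distance to the nearest voxel of structure A
    dist_to_a = ndimage.distance_transform_edt(~a, sampling=spacing_mm)
    # surface voxels of structure B
    surface_b = b & ~ndimage.binary_erosion(b)
    return float(dist_to_a[surface_b].min())
```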
Statistical analysis The impact of the anatomical variations on the PG dose was analyzed considering the Dmean and the full DVH. Their impact on the risk of xerostomia was estimated by using the LKB NTCP model (n = 1, m = 0.4, and TD50 = 39.9) [20,21], the complication being defined as a salivary flow ratio <25% of the pretreatment one [22]. The mean PG dose differences between the weekly doses (with and without replanning) and the planned dose were calculated (Figure 2). The PG overdose was assessed as the difference between the dose without replanning (at the fraction or cumulated) and the dose at the planning. The benefit of weekly replanning was assessed as the difference between the doses with replanning and without replanning (at the fraction or cumulated). Linear mixed-effects models were used to test whether the following parameters were correlated with the PG overdose or the benefit of the weekly replanning: initial volumes of the CTV70 and of the PGs, decrease (between the weekly CT and the planning CT) of the volume of the CTV70 (in cc and %) and of the PGs (in cc and %), shortening of the distance between the PGs and the CTV70, reduction of the neck thickness, and the time between CT0 and the beginning of treatment. All dose and volume comparisons were performed using nonparametric tests (Wilcoxon test). Statistical analysis was carried out using the Statistical Package for the Social Sciences V 20.0.
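As an illustration of the linear mixed-effects analysis described above (repeated measures per patient, anatomical changes as fixed effects), a statsmodels sketch might look as follows; the data-frame columns are hypothetical placeholders, and the study itself used SPSS rather than this code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per parotid gland and weekly CT, with hypothetical columns such as
# 'pg_overdose_gy', 'ctv70_shrinkage_cc', 'neck_thickness_reduction_mm', 'patient'
def fit_overdose_model(df: pd.DataFrame):
    model = smf.mixedlm(
        "pg_overdose_gy ~ ctv70_shrinkage_cc + neck_thickness_reduction_mm",
        data=df,
        groups=df["patient"],  # random intercept per patient
    )
    return model.fit()

# print(fit_overdose_model(df).summary())
```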
The mean PG volume was 25.3 cc (ranging from 16.6 cc to 52.1 cc, standard deviation (SD): 8.1 cc).Table 1 Patient, tumor, and treatment characteristics at the initial planning (CT0) ID Gender Age Tumor localization TNM Volume (cc) D mean (Gy) Xerostomia NTCP (%) [ 21 ] CTV70 HLP CLP HLP CLP HLP CLP 1M86TonsilT3N145.252.148.630.231.126.528.32F63TonsilT2Nx26.331.127.531.42629.018.73M74OropharynxT3N2c181.524.920.737.931.144.328.44F66OropharynxT2N2c107.227.823.432.927.932.322.05M57VelumT3N062.420.718.028.127.822.421.76M67OropharynxT3N2c156.224.522.730.829.424.721.47M52OropharynxT4N2165.1N/A21.6N/A28.7N/A23.48M67TrigoneT4N1139.322.019.330.729.227.424.49F65OropharynxT3N3237.523.920.242.431.155.228.210F65OropharynxT4N3257.9N/A24.5N/A35.2N/A37.711M50OropharynxT4N2c434.5N/A17.7N/A36.3N/A40.312M53OropharynxT3N014.416.623.341.324.252.915.913M73OropharynxT3N2c147.029.429.254.632.281.730.714M56LarynxT3N014.022.829.219.79.210.12.715M75HypopharynxT2N276.320.322.429.429.125.024.4M: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PGs; CLP: contralateral PGs; Dmean: mean dose at initial planning; N/A: not applicable (PGs included in the CTV), NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21]. Patient, tumor, and treatment characteristics at the initial planning (CT0) M: male; F: female; CT0: initial planning; CTV70: clinical target volume receiving 70 Gy; PGs: parotid glands; HLP: homolateral PGs; CLP: contralateral PGs; Dmean: mean dose at initial planning; N/A: not applicable (PGs included in the CTV), NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21]. Treatment and planning: All patients underwent IMRT using a total dose of 70 Gy (2 Gy/fraction/day, 35 fractions), with a simultaneous integrated boost technique [14] and concomitant chemotherapy. Planning CTs (CT0) with intravenous contrast agents were acquired with 2 mm slice thickness from the vertex to the carina. A thermoplastic head and shoulder mask with five fixation points was used. PET-CT and MRI co-registration was used for tumor delineation. Three target volumes were generated. Gross tumor volume (GTV) corresponded to the primary tumor along with involved lymph nodes. Clinical target volume 70 Gy (CTV70) was equal to GTV plus a 5 mm 3D margin, which was adjusted to exclude air cavities and bone mass without evidence of tumor invasion. CTV63 corresponded to the area at high-risk of microscopic spread, while CTV56 corresponded to the prophylactic irradiation area. GTV, CTV63, CTV56, and all organs at risk were manually delineated on each CT slice. Adding a 5 mm 3D margin around the CTVs generated the PTVs. PTV expansion was limited to 3 mm from the skin surface in order to avoid the build-up region and to limit skin toxicity [15]. All IMRT plans were generated using Pinnacle V9.2. Seven Coplanar 6-MV photon beams were employed with a step and shoot IMRT technique. The prescribed dose was 70 Gy to PTV70, 63 Gy to PTV63, and 56 Gy to PTV56. The collapsed cone convolution/superposition algorithm was used for dose calculation. The maximum dose within the PTV was 110% (D2%). The minimum PTV volume covered by the 95% isodose line was 95%. Dose constraints were set according to the GORTEC recommendations [16]: a mean dose (Dmean) <30 Gy and a median dose <26 Gy for contralateral PGs. 
Patients were treated as planned on CT0 and no changes were applied to dose distribution during treatment. During the treatment course, weekly in-room stereoscopic imaging corrected set-up errors >5 mm. All patients signed an informed consent form. The study was approved by the institutional review board (ARTIX study NCT01874587). Weekly dose estimations, in cases of replanning and without replanning: During the treatment, each patient underwent six weekly CTs (CT1 to CT6) according to the same modalities as CT0, except for the intravenous contrast agents (not systematically used, particularly in case of cisplatin based chemotherapy). For each patient, the anatomical structures were manually segmented on each weekly CT by the same radiation oncologist. In case of complete response, initial macroscopically-involved areas were still included in the CTV70, which was adjusted to exclude any air cavities and bone mass without evidence of initial tumor invasion. Actual weekly doses (Figure 2, Step 1A) were estimated by calculating the dose distribution on the weekly CT, using treatment parameters and isocenter from CT0. Weekly re-planned doses (Figure 2, Step 1C) were calculated by generating a new IMRT plan on each weekly CT in accordance with the dose constraints described for the initial planning. PTV coverage did not differ between initial planning and weekly re-planned CT. The dose constraints for the organs at risk have respected the GORTEC recommendations at the initial planning and in all replanning.Figure 2 Overall study flow chart. Weekly CT scans were performed during the 7 weeks of treatment. Doses were calculated on each weekly fraction, corresponding either to the initial planning (step 1A) or to a replanning to spare the parotid glands (step 1C). Corresponding cumulated doses were calculated (steps 1B and 1C) using elastic registration. Doses were then compared. Overall study flow chart. Weekly CT scans were performed during the 7 weeks of treatment. Doses were calculated on each weekly fraction, corresponding either to the initial planning (step 1A) or to a replanning to spare the parotid glands (step 1C). Corresponding cumulated doses were calculated (steps 1B and 1C) using elastic registration. Doses were then compared. Total cumulated dose estimations by deformable registration: Cumulated doses were estimated for the two scenarios, with or without replanning (Figure 2 Steps 1B and 1D), according to the following deformable image registration procedure (contour-guided Demons registration algorithm) [17]. For PGs on each CT, a signed distance map was generated to represent the squared Euclidean distance between each voxel and the PG surface. Distance maps of each PG were then registered using the Demons registration algorithm [18]. The resulting deformation fields were employed to map the weekly dose distributions to the planning CT using tri-linear interpolation. Next, the mapped dose distributions were summed to estimate the cumulated dose for each PG. The average Dice score for PG registration, from the weekly CT to each planning CT was computed as followed: Dice score = 2×(|A∩B|)/(|A| + |B|), where: A is the delineated PG contour on the weekly CT, B is the planning contour propagated by the registration and |.| denotes the number of voxels encompassed by the contour. The Dice score ranges from 0 (worst case: no match between both contours) to 1 (perfect match) [19]. 
A 3D dose difference in the PG was calculated between the cumulated dose distribution and the planned dose distribution.

Anatomical variation description: Anatomical variations (between CT0 and the weekly CTs) were characterized by variations in the CTV70 and PG volumes, in the distance between the PGs and the CTV70, and in the neck thickness (at the level of the geometrical centers of the PGs). The distance between the PGs and the CTV70 corresponded to the minimal distance between the surfaces of the two contours (PG-CTVds), computed using a Euclidean distance map of the first contour, iteratively considering all the points of the second contour and keeping the resulting minimal distance.

Statistical analysis: The impact of the anatomical variations on PG dose was analyzed considering Dmean and the full DVH. Their impact on the risk of xerostomia was estimated using the LKB NTCP model (n = 1, m = 0.4, and TD50 = 39.9 Gy) [20,21], the complication being defined as a salivary flow ratio <25% of the pretreatment one [22]. The mean PG dose differences between the weekly doses (with and without replanning) and the planned dose were calculated (Figure 2). The PG overdose was assessed as the difference between the dose without replanning (at the fraction or cumulated) and the dose at planning. The benefit of weekly replanning was assessed as the difference between the doses with and without replanning (at the fraction or cumulated). Linear mixed-effects models were used to test whether the following parameters were correlated with the PG overdose or with the benefit of weekly replanning: the initial volumes of the CTV70 and of the PGs, the decrease (between the weekly CT and the planning CT) of the CTV70 volume (in cc and %) and of the PG volume (in cc and %), the shortening of the distance between the PGs and the CTV70, the reduction of the neck thickness, and the time between CT0 and the beginning of treatment. All dose and volume comparisons were performed using nonparametric tests (Wilcoxon test). Statistical analysis was carried out using the Statistical Package for the Social Sciences V 20.0.

Results: Since 3 ipsilateral PGs were completely included within the PTV (patients 7, 10 and 11), they were excluded from the analysis, resulting in a total of 27 PGs analyzed. The average Dice score [19] for PG registration, from the weekly CT to each planning CT, was 0.92 (from 0.83 to 0.95).

Quantification of anatomical variations during the 7 weeks of treatment: From CT0 to CT6, the PG volumes decreased by a mean value of 28.3% (ranging from 0.0 to 63.4%, SD 18%), corresponding to an average decrease of 1.1 cc/week (ranging from 0.0 to 2.2 cc/week). The CTV70 decreased by a mean value of 31% (ranging from 73% to −13%, SD 28%). The distance between the PGs and the CTV70 (PG-CTVds) decreased in 74% of the PGs, by 4.3 mm on average (ranging from 0.1 to 12 mm, SD 3.7 mm), whereas it increased in the other 26% of the PGs, by 3.2 mm on average (ranging from 1.1 to 6.3 mm, SD 2.1 mm). The neck thickness decreased for 78% of the patients, by a mean value of 7.9 mm (ranging from 0.1 to 26.6 mm, SD 6.2 mm).
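The xerostomia risks reported in Table 1 and in the comparisons below were derived from the LKB model described in the statistical analysis. With n = 1 the effective dose reduces to the mean PG dose, so the model is a single probit function of Dmean; the sketch below uses the quoted parameters, and the example doses are purely illustrative.

```python
from scipy.stats import norm

def lkb_ntcp(mean_dose_gy, td50=39.9, m=0.4):
    """LKB NTCP with n = 1: the mean dose acts as the effective dose."""
    t = (mean_dose_gy - td50) / (m * td50)
    return norm.cdf(t)

# Illustrative mean PG doses (Gy), not taken from a specific patient.
for dmean in (20.0, 26.0, 30.0, 36.0):
    print(f"Dmean = {dmean:4.1f} Gy -> estimated xerostomia NTCP = {100 * lkb_ntcp(dmean):4.1f}%")
```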
Dose comparison between the planned dose and the doses with or without replanning in the PGs: The per-treatment PG doses (with or without replanning) were analyzed, first considering the weekly fractions and then using the cumulated doses from all weekly fractions, for all 15 patients. The results are shown in Table 2.

Table 2 Parotid gland overdose and replanning benefit assessments, based on the fraction or the cumulated doses, for all 15 patients

 | Dmean (Gy), mean (min–max; SD) | p-value
Planned dose (1) | 30.9 (9.2–54.6; 7.9) | –
Doses at the fraction
  Without replanning (2) | 33.0 (7.7–61.2; 9.9) |
  With replanning (3) | 29.4 (4.1–51.7; 8.3) |
  PG overdose (4) = (2) − (1) | 1.8 (−10.6 to 24.9; 5.8) | <0.001
  Replanning benefit (5) = (3) − (2) | 3.8 (0–23.8; 4.0) | <0.001
Cumulated doses
  Without replanning (2) | 32.0 (8.7–57.6; 9.3) | –
  With replanning (3) | 28.6 (4.6–51.2; 8.4) |
  PG overdose (4) = (2) − (1) | 1.1 (−7.9 to 10.0; 4.1) | 0.1
  Replanning benefit (5) = (3) − (2) | 3.6 (0–12.2; 3.3) | <0.001

PGs: parotid glands; Dmean: first, the mean PG dose was calculated for each patient and each week (DmeanWeekly); then the mean of the DmeanWeekly values was calculated for each patient (DmeanPt); finally, the mean of the DmeanPt values was calculated over the whole population (Dmean). p-values are from the Wilcoxon test, testing whether the Dmean in (1) and (2), and the Dmean in (2) and (3), are statistically different.

PG dose comparisons at the per-treatment weekly fraction: In order to assess the PG overdose, a comparison was first made between the dose at the fraction without replanning (Figure 2, Step 1A) and the planned dose. For 67% of the plans, the Dmean increased, on average by 4.8 Gy (up to 24.9 Gy, SD 4.6 Gy). In the other 33% of plans, the Dmean decreased, by 3.9 Gy (up to 10.7 Gy, SD 2.9 Gy). The variation of the mean PG dose during the treatment is shown in Figure 3 for two representative patients.

Figure 3 Variation over time of the mean PG dose for two representative patients. The red line corresponds to patient 1, whose cumulated mean PG dose increased; the blue line corresponds to patient 12, whose cumulated mean PG dose decreased.

Then, to assess the benefit of replanning, a comparison was made between the dose with replanning (Figure 2, Step 1C) and without replanning (Figure 2, Step 1A).
In 85% of the plans, replanning decreased the Dmean, on average by 4.6 Gy (up to 23.8 Gy, SD 4.0 Gy).

PG dose comparisons using the cumulated doses and the corresponding estimated xerostomia risks: A PG overdose was observed in 59% (N = 16) of the PGs. Figure 4a shows the Dmean difference for each PG of each patient. Ten out of fifteen patients received a higher Dmean in at least one PG (6 patients in both PGs), corresponding to a Dmean increase of 3.7 Gy on average (ranging from 0.4 to 10.0 Gy, SD 2.9 Gy). Figure 5 shows the average planned DVH (red line) and the average cumulated DVH without replanning (blue line).

Figure 4 Parotid gland overdose assessment: difference between the mean cumulated dose (without replanning) and the mean dose at planning, for each parotid gland of each of the 15 patients (4a). The corresponding impact on the xerostomia risk (%) is presented in Figure 4b. NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].

Figure 5 Mean parotid gland dose-volume histograms (DVHs) showing the impact of replanning on the over-irradiated PGs (n = 16).

Figure 4b shows the corresponding difference in the estimated xerostomia risk. The average absolute increase in the risk of xerostomia was 3% (ranging from −16.7 to 23.9%, SD 2.9%) across all patients, and 8.2% (ranging from 3.8 to 23.9%, SD 7.1%) among patients with an increased dose to the PGs. Weekly replanning enabled the Dmean to be reduced to at least the same value as that of the pre-treatment planning for all over-irradiated PGs (Figure 6).
In the subgroup of over-irradiated PGs, the mean Dmean difference between the cumulated doses with and without replanning was therefore 5.1 Gy (ranging from 0.6 to 12.2 Gy, SD 3.3 Gy) (p = 0.001). In the subgroup of non-over-irradiated PGs, this mean Dmean difference was 1.4 Gy (ranging from 0 to 4.1 Gy, SD 1.7 Gy) (p = 0.001). Figure 5 displays the impact of replanning in decreasing the PG dose, with the average cumulated DVH with replanning (green line) and without replanning (blue line).

Figure 6 Replanning benefit assessment: cumulated mean dose difference between the dose with replanning and the dose without replanning, for each parotid gland (ipsilateral and contralateral) of each of the 15 patients (6a), and corresponding estimated xerostomia risk (%) (6b). NTCP: normal tissue complication risk of xerostomia defined as a salivary flow ratio <25% of the pretreatment one [21].

In the over-irradiated PG group, replanning decreased the xerostomia risk by 11% on average (ranging from 1 to 30%, SD 8%) (p < 0.01).
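The p-values in Table 2 and in the subgroup comparisons above come from paired nonparametric Wilcoxon tests. A minimal sketch of such a comparison is shown below, on made-up paired mean PG doses that are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Made-up paired mean PG doses (Gy): planned vs. cumulated without replanning.
planned   = np.array([30.2, 31.4, 28.1, 30.8, 30.7, 42.4, 41.3, 19.7])
no_replan = np.array([31.7, 30.6, 32.3, 32.9, 33.3, 47.4, 41.6, 20.0])

stat, p = wilcoxon(no_replan, planned)  # paired signed-rank test
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```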
Anatomical parameters correlated with PG overdose or replanning benefit: PG overdose and replanning benefit (at the fraction or cumulated) increased with CTV70 shrinkage and with the reduction of neck thickness (p < 0.01). At the fraction, a reduction of 10 cc in the CTV70, or of 1 mm in the neck thickness, led to an increase of 0.3 Gy in the mean PG dose. The PG volume variation had no impact on the mean PG dose.
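These correlations were tested with linear mixed-effects models, which account for the repeated measurements per patient (two PGs, several weekly CTs). A minimal sketch with statsmodels on a hypothetical long-format table is given below; the column names and values are illustrative placeholders, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per parotid gland and weekly CT.
df = pd.DataFrame({
    "patient_id":         [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "pg_overdose_gy":     [0.5, 1.8, 2.6, -0.4, 0.6, 1.0, 1.9, 3.2, 4.1, 0.1, 0.9, 1.4],
    "ctv70_shrinkage_cc": [3, 12, 25, 1, 6, 10, 8, 22, 35, 2, 5, 9],
    "neck_reduction_mm":  [0.5, 2.0, 4.0, 0.1, 1.0, 2.0, 1.5, 4.5, 7.0, 0.2, 1.0, 1.8],
})

# Random intercept per patient models the within-patient correlation.
model = smf.mixedlm("pg_overdose_gy ~ ctv70_shrinkage_cc + neck_reduction_mm",
                    df, groups=df["patient_id"])
result = model.fit()
print(result.summary())  # fixed-effect coefficients estimate Gy per cc / per mm
```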
Discussion and conclusion: The main goal of definitive chemoradiotherapy in locally advanced HNC is to improve locoregional control while maintaining a high quality of life. Reducing the dose to the PGs over the whole course of IMRT, and thereby the risk of xerostomia, is a major challenge.
Indeed, we found that the majority of the PGs (59%) were over-irradiated by a mean dose of 4 Gy (up to 10 Gy), resulting in an absolute increase in the risk of xerostomia of 8% (up to 24%). The ART strategy appears to benefit not only patients with over-irradiated PGs, reducing the mean dose by 5 Gy (up to 12 Gy) and the xerostomia risk by 11% (up to 30%), but also those with non-over-irradiated PGs. These results therefore support a broad use of ART for the majority of locally advanced HNC patients. In our study, four patients (4, 7, 10 and 12) had no clear benefit from replanning. These patients presented a spontaneous decrease of the mean PG dose during treatment. No further gain was possible with replanning because of the other constraints (homogeneity, spinal cord, brainstem, etc.). The GORTEC dose-volume constraints were respected for all replannings. The dosimetric benefit of ART has been shown in a limited number of studies, and not exclusively for the PGs. In a series of 22 patients, Schwartz et al. evaluated the impact of one and two replannings using daily CT on rails [23]. The mean PG dose was decreased by 3.8% for contralateral PGs and by 9% for ipsilateral PGs, with possible sparing of the oral cavity and larynx. In another series of 20 patients, a single replanning performed at the 3rd or 4th week of treatment decreased the mean PG dose by 10 Gy [9]. On the other hand, Castadot et al. did not show any dosimetric benefit for the PGs when using four replannings in a series of 10 patients, although the spinal cord dose was reduced and the CTV56 dose conformation improved [24]. The optimal number and timing of replannings remain unclear. Wu et al. concluded that one replanning decreased the mean PG dose by 3%, two replannings by 5%, and six replannings by 6% [13]. A "maximalist" weekly replanning strategy was considered feasible in our study, as in an ongoing randomized study (ARTIX) comparing a single IMRT planning to weekly IMRT replanning. The benefit of such a strategy has yet to be demonstrated against other replanning strategies. Ongoing studies (such as the ARTFORCE trial) are testing the benefit of a single replanning [24]. The benefit of each additional weekly replanning has to be evaluated. A true adaptive RT strategy should be personalized to each patient, ranging potentially from no replanning to a maximalist weekly replanning. Ideally, replanning decisions would be based either on geometrical criteria or on cumulated dose monitoring, corresponding to the dose-guided RT approach. Replanning is also particularly time-consuming, complete delineation taking up to 2.5 hours in our experience and that of others [25-28]. Deformable image co-registration software can be used to propagate the OAR contours from the initial planning CT to the per-therapeutic planning CT, reducing the delineation time by approximately a factor of 3 [26,28]. The CTV delineation should however be carefully checked, to prevent recurrence due to an inadequately reduced CTV. Indeed, the goal of ART in our study was to spare the PGs during treatment as they were spared at planning, while keeping the same appropriate CTV coverage (and not to reduce the CTV coverage). The analysis of the anatomical variations occurring over the course of IMRT is crucial to understand PG overdose and to identify early the subgroup of overdosed PGs (59%).
We found that mean volumes decreased by 28% for the PGs and 31% for the CTV, in agreement with the literature, which reports values of 15% to 28% for the PGs, 69% for the GTV, and 8% to 51% for the CTV [7,9,13,23,26,29]. We found that PG overdose (without replanning) and the dosimetric benefit of replanning increased with tumor shrinkage and with the reduction of neck thickness. The latter is likely explained by weight loss, tumor shrinkage and the decrease in PG volume. The reduction of neck thickness consequently leads to dose hotspots in the neck, close to or within the PGs (Figure 1). Other studies also found that a reduction of the neck diameter increases the risk of over-irradiation [30,31]. The variation of the mean PG dose was larger between CT0 and CT1 than between consecutive weekly CTs. This difference may be explained by the delay between CT0 and the first weekly CT. In our study, the PG dose differences between the fraction and the initial planning are likely related both to set-up errors (which we did not quantify) and to variations in the volume/shape of the anatomical structures. Systematic set-up errors may increase the mean PG dose by around 3% per mm of displacement [32]. For daily practice, this suggests combining daily bone registration, to correct set-up errors, with replanning, to correct anatomical variations. Fraction comparison only provides information for a specific moment, and there is a need for full treatment dose evaluation and comparison. Deformable registration enables dose fraction accumulation [33]. Since PG shape and volume variations were limited, our study's Dice scores were relatively high (0.92). However, the Dice score does not provide any information regarding the anatomical "point to point" correspondence accuracy of the registration. Moreover, the possibility of PG defects observed over the course of radiotherapy [34] should prompt careful consideration of this cumulated dose approach, thereby justifying an independent "fraction to fraction" dose analysis. Our results, based on both weekly fraction and cumulated dose, were consistent. The 3D dose visualization and the differential DVH of the dose difference between the cumulated dose and the planning dose (Figure 1) moreover revealed the heterogeneity of the hotspot distribution within the PGs, which may also impact the xerostomia risk. The cranial part of the PGs seems to be more critical [35,36], possibly due to a high concentration of salivary gland stem cells at this level [37]. The possible heterogeneity of radiosensitivity within the PG could therefore be investigated more carefully, in order to consider sparing not only the whole gland (represented by a mean dose endpoint) but also subparts of the gland. Indeed, a relatively small dose (10 Gy) within the PG may cause severe loss of function [38], and doses greater than 20 Gy may cause up to 90% loss of the acinar cells [39]. Radiation-induced gland dysfunction also seems to be due to membrane damage, causing secondary necrosis of acinar cells and atrophy of the lobules [40]. Our study has limitations. The small patient number did not allow us to analyze the potential impact of tumor localization. Even if the CTs from a single patient were always delineated by the same radiation oncologist, intra-observer variability in organ delineation is also a potential source of uncertainty. Moreover, the clinical benefit of weekly replanning was only estimated and was not directly measured in this study.
In conclusion, an ART strategy combining daily bone registration and weekly replanning may be proposed for locally advanced HNC, with an expected benefit of decreased xerostomia. This PG-sparing strategy is however particularly complex and should therefore be assessed within prospective trials, with special attention to CTV delineation. The optimal number and timing of replannings remain unclear. The benefit of a weekly replanning strategy versus other replanning strategies has yet to be demonstrated.
Background: Large anatomical variations occur during the course of intensity-modulated radiation therapy (IMRT) for locally advanced head and neck cancer (LAHNC). The risks are therefore a parotid gland (PG) overdose and an increase in xerostomia. The purposes of the study were to estimate: - the PG overdose and the xerostomia risk increase during a "standard" IMRT (IMRTstd); - the benefits of adaptive IMRT (ART) with weekly replanning to spare the PGs and limit the risk of xerostomia. Methods: Fifteen patients received radical IMRT (70 Gy) for LAHNC. Weekly CTs were used to estimate the dose distributions delivered during the treatment, corresponding either to the initial planning (IMRTstd) or to weekly replanning (ART). PG doses were recalculated at the fraction from the weekly CTs. PG cumulated doses were then estimated using deformable image registration. The following PG doses were compared: pre-treatment planned dose, per-treatment IMRTstd and ART. The corresponding estimated risks of xerostomia were also compared. Correlations between anatomical markers and dose differences were investigated. Results: Compared to the initial planning, a PG overdose was observed during IMRTstd for 59% of the PGs, with an average increase of 3.7 Gy (10.0 Gy maximum) for the mean dose, and of 8.2% (23.9% maximum) for the risk of xerostomia. Compared to the initial planning, weekly replanning reduced the PG mean dose for all the patients (p<0.05). In the over-irradiated PG group, weekly replanning reduced the mean dose by 5.1 Gy (12.2 Gy maximum) and the absolute risk of xerostomia by 11% (p<0.01) (30% maximum). The PG overdose and the dosimetric benefit of replanning increased with tumor shrinkage and neck thickness reduction (p<0.001). Conclusions: During the course of LAHNC IMRT, around 60% of the PGs are overdosed by 4 Gy. Weekly replanning decreased the PG mean dose by 5 Gy, and therefore the xerostomia risk by 11%.
Introduction: The treatment of unresectable head and neck cancer (HNC) consists of chemoradiotherapy [1,2]. One of the most common toxicities of this treatment is xerostomia, inducing difficulties in swallowing and speaking, loss of taste, and dental caries, with therefore a direct impact on patient quality of life. Xerostomia is mainly caused by radiation-induced damage to the parotid glands (PGs) and, to a lesser extent, to the submandibular glands [3]. Intensity-modulated radiotherapy (IMRT) permits delivery of highly conformal doses to complex anatomical structures while sparing critical structures. Indeed, three randomized studies have demonstrated improved PG sparing with IMRT compared to non-IMRT techniques, resulting in better salivary flow and a decreased xerostomia risk [4-6]. However, large variations can be observed during the course of IMRT treatment, such as body weight loss [7,8], primary tumor shrinkage [7], and PG volume reduction [9]. Due to these anatomical variations and to the tight IMRT dose gradients, the actual administered dose may therefore not correspond to the planned dose, with a risk of radiation overdose to the PGs (Figure 1) [10,11]. This dose difference clearly reduces the expected clinical benefits of IMRT, increasing the risk of xerostomia. Although bone-based image-guided radiation therapy (IGRT) allows for set-up error correction, the actual delivered dose to the PGs remains higher than the planned dose [12], because IGRT does not take shape/volume variations into account. By performing one or more new plannings during the radiotherapy treatment, adaptive radiotherapy (ART) aims to correct such uncertainties. ART has already been shown to decrease the mean PG dose during locally advanced head and neck cancer IMRT [13], but no surrogate of the PG dose difference and of the dosimetric benefit of ART has yet been identified. In the context of IMRT for locally advanced HNC, this study sought to: estimate the difference between the planned dose and the actual delivered dose (without replanning) to the PGs, i.e., the PG overdose; estimate the PG dose difference with and without replanning to spare the PGs while keeping the same planning target volume (PTV), i.e., the benefit of ART; and identify anatomical markers correlated with these dose differences (PG overdose and ART benefit).

Figure 1 Illustration of the anatomical variations on the dose distribution. IMRT dose distributions at different times for a given patient, showing the PG overdose without replanning (B) and the benefit of replanning (C). A: planned dose on the pre-treatment CT (CT0). B: actual delivered dose without replanning during the treatment (week 3). C: adaptive planned dose with replanning to spare the parotid glands (PGs) at the same fraction (week 3). The PGs are shown by the red line. The full red area represents the clinical target volume (CTV70). The arrow shows the neck thickness. Figures 1B and 1C, compared to 1A, show that the PG and CTV70 volumes and the neck thickness have decreased. These anatomical variations have led to dose hotspots in the neck, close to the internal part of the two PGs (Figure 1B). Replanning (Figure 1C) allowed the PGs to be spared even better than at planning (Figure 1A).
Estimate the difference between the planned dose and the actual delivered dose (without replanning) to the PGs, i.e., the PG overdose; estimate the PG dose difference between treatment with and without replanning aimed at sparing the PGs while keeping the same planning target volume (PTV), i.e., the benefit of ART; identify anatomical markers correlated with these dose differences (PG overdose and ART benefit). Illustration of the impact of anatomical variations on the dose distribution. IMRT dose distributions at different times for a given patient, showing the PG overdose without replanning (B) and the benefit of replanning (C). A: Planned dose on the pre-treatment CT (CT0). B: Actual delivered dose without replanning during the treatment (Week 3). C: Adaptive planned dose with replanning to spare the parotid glands (PGs) at the same fraction (Week 3). PGs are shown by the red line. The solid red area represents the clinical target volume (CTV70). The arrow shows the neck thickness. Compared with Figure 1A, Figures 1B and 1C show that the PG and CTV70 volumes and the neck thickness have decreased. These anatomical variations have led to dose hotspots in the neck, close to the internal part of the two PGs (Figure 1B). Replanning (Figure 1C) allowed the PGs to be spared even better than in the initial planning (Figure 1A). Discussion and conclusion: The main goal of definitive chemoradiotherapy in locally advanced HNC is to improve locoregional control while maintaining a high quality of life. Reducing the dose to the PGs during the whole course of IMRT, and therefore xerostomia, is a major challenge. Indeed, we found that the majority of the PGs (59%) were overirradiated by a mean dose of 4 Gy (up to 10 Gy), resulting in an absolute increase in the risk of xerostomia of 8% (up to 24%). The ART strategy appears to benefit not only the patients with over-irradiated PGs, reducing the mean dose by 5 Gy (up to 12 Gy) and the xerostomia risk by 11% (up to 30%), but also the non-over-irradiated PGs. These results therefore support a broad use of ART for the majority of locally advanced HNC patients. In our study, four patients (Nos. 4, 7, 10 and 12) had no clear benefit from replanning. These patients presented a spontaneous decrease of the mean PG dose during the treatment. No further gain was possible with replanning because of the other constraints (homogeneity, spinal cord, brainstem, etc.). The GORTEC dose-volume constraints were respected for all the replannings. The dosimetric benefit of ART has been shown in a limited number of studies, and not exclusively for the PGs. In a series of 22 patients, Schwartz et al. evaluated the impact of one and two replannings using daily CT on rails [23]. The mean PG dose was decreased by 3.8% for contralateral PGs and by 9% for ipsilateral PGs, with possible sparing of the oral cavity and larynx. In another series of 20 patients, a single replanning performed at the 3rd or 4th week of treatment decreased the mean PG dose by 10 Gy [9]. On the other hand, Castadot et al. did not show any dosimetric benefit for the PGs when using four replannings in a series of 10 patients, although the spinal cord dose was reduced and the CTV56 dose conformation improved [24]. The optimal number and timing of replannings are unclear. Wu et al. concluded that one replanning decreased the mean PG dose by 3%, two replannings by 5%, and six replannings by 6% [13]. A "maximalist" weekly replanning strategy was considered feasible in our study, as in an ongoing randomized study (ARTIX) comparing a single IMRT planning to weekly IMRT replanning.
The benefit of such a strategy has to be demonstrated in comparison with other replanning strategies. Ongoing studies (such as the ARTFORCE trial) are testing the benefit of a single replanning [24]. The benefit of each supplementary weekly replanning has to be evaluated. A true adaptive RT strategy should be personalized to each patient, ranging potentially from no replanning to a maximalist weekly replanning. Ideally, replanning decisions would be based on either geometrical criteria or cumulated dose monitoring, corresponding to the dose-guided RT approach. Replanning is also particularly time-consuming, complete delineation taking up to 2.5 hours in our experience and that of others [25-28]. Deformable image co-registration software can be used to propagate the OAR contours from the initial planning CT to the per-therapeutic planning CT, reducing the delineation time by approximately a factor of 3 [26,28]. The CTV delineation should however be carefully checked, to prevent recurrence due to an inadequately reduced CTV. Indeed, the goal of ART in our study was to spare the PGs during treatment as they were spared at planning, while keeping the same appropriate CTV coverage (and not to reduce the CTV coverage). The analysis of the anatomical variations occurring during the course of IMRT is crucial to understand the overdose of the PGs and to identify early the subgroup of overdosed PGs (59%). We found that mean volumes decreased by 28% for the PGs and 31% for the CTV, in agreement with the literature reporting values of 15% to 28% for the PGs, 69% for the GTV, and 8% to 51% for the CTV [7,9,13,23,26,29]. We found that the PG overdose (without replanning) and the dosimetric benefit of replanning increased with tumor shrinkage and with the reduction of neck thickness. The latter is likely explained by weight loss, tumor shrinkage, and the decrease of PG volume. The reduction of the neck thickness consequently leads to the occurrence of dose hotspots in the neck, close to or within the PGs (Figure 1). Other studies also found that a reduction of the neck diameter increases the risk of over-irradiation [30,31]. The variation of the mean PG dose was larger between CT0 and CT1 than between each subsequent weekly CT. This difference may be explained by the delay between CT0 and the first weekly CT. In our study, the PG dose differences between the fraction and the initial planning are likely related both to the set-up error (which we did not quantify) and to the volume/shape variations of the anatomical structures. Systematic set-up errors may increase the mean PG dose by around 3% per mm of displacement [32]. For daily practice, this suggests combining daily bone registration to correct the set-up errors with replanning to correct the anatomical variations. Fraction comparison only provides information at a specific moment, and there is a need for full-treatment dose evaluation and comparison. Deformable registration enables dose fraction accumulation [33]. Since PG shape and volume variations were limited, our study's Dice scores were relatively high (0.92). However, the Dice score does not provide any information regarding the registration's anatomical "point to point" correspondence accuracy. Moreover, the possibility of PG defects observed over the course of radiotherapy [34] should prompt careful consideration of this cumulated dose approach, thereby justifying an independent "fraction to fraction" dose analysis. Our results, based on both weekly fraction and cumulated dose, were consistent.
The 3D dose visualization and the differential DVH of the dose difference between the cumulated dose and the planning dose (Figure 1) moreover revealed the heterogeneity of the hotspot distribution within the PGs, which may also impact the xerostomia risk. The cranial part of the PGs seems to be more critical [35,36], possibly due to the presence of a high concentration of salivary gland stem cells at this level [37]. The possible heterogeneity of radiosensitivity within the PG could therefore be investigated more carefully, in order to consider sparing not only the full gland (represented by a mean dose endpoint) but also subparts of the gland. Indeed, a relatively small dose (10 Gy) within the PG may cause severe loss of function [38], and doses greater than 20 Gy may cause up to 90% loss of the acinar cells [39]. It also seems that radiation-induced gland dysfunction is due to membrane damage, secondarily causing necrosis of acinar cells and atrophy of the lobules [40]. Our study has limitations. The small patient number did not allow us to analyze the potential impact of tumor localization. Even though the CTs from a single patient were always delineated by the same radiation oncologist, intra-observer variability in organ delineation is also a potential source of uncertainty. Moreover, the clinical benefit of weekly replanning was only estimated and not directly assessed in this study. In conclusion, an ART strategy combining daily bone registration and weekly replanning may be proposed for locally advanced HNC, with an expected benefit of decreasing xerostomia. This PG-sparing strategy appears however particularly complex and should therefore be assessed within prospective trials, with special attention to CTV delineation. The optimal number and timing of replannings are unclear, and the benefit of a weekly replanning strategy versus other replanning strategies remains to be demonstrated.
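The discussion above relies on three quantities that are easy to make concrete: the Dice score used to judge the deformable registration, the accumulation of per-week dose onto the planning CT, and the DVH used to compare planned and cumulated dose. The sketch below is a minimal numpy illustration under the assumption that the weekly dose grids have already been warped onto the planning CT grid (the deformable registration step itself is not reproduced here); the array and function names are ours, not the study's software.

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary organ masks (e.g. a PG
    contour propagated by deformable registration vs. the manual contour)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def accumulate_dose(weekly_doses_on_planning_ct: list[np.ndarray]) -> np.ndarray:
    """Sum per-week dose grids that have already been mapped onto the planning
    CT; the argument is a hypothetical list of equally shaped arrays, one per
    week, each scaled to the dose actually delivered that week."""
    return np.sum(np.stack(weekly_doses_on_planning_ct), axis=0)

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bin_gy: float = 0.5):
    """Cumulative DVH for the voxels inside `mask`: fraction of the organ
    volume receiving at least D, for D on a regular grid of `bin_gy` steps."""
    d = dose[mask.astype(bool)].ravel()
    edges = np.arange(0.0, float(d.max()) + bin_gy, bin_gy)
    return edges, np.array([(d >= e).mean() for e in edges])
```

The mean PG dose quoted throughout the paper is then simply `dose[mask.astype(bool)].mean()`, evaluated either on the planned dose grid or on the accumulated one.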
Background: Large anatomical variations occur during the course of intensity-modulated radiation therapy (IMRT) for locally advanced head and neck cancer (LAHNC), with a resulting risk of parotid gland (PG) overdose and of increased xerostomia. The purposes of the study were to estimate: - the PG overdose and the xerostomia risk increase during "standard" IMRT (IMRTstd); - the benefit of adaptive IMRT (ART) with weekly replanning in sparing the PGs and limiting the risk of xerostomia. Methods: Fifteen patients received radical IMRT (70 Gy) for LAHNC. Weekly CTs were used to estimate the dose distributions delivered during treatment, corresponding either to the initial planning (IMRTstd) or to weekly replanning (ART). PG doses were recalculated at each fraction from the weekly CTs. PG cumulated doses were then estimated using deformable image registration. The following PG doses were compared: pre-treatment planned dose, per-treatment IMRTstd, and ART. The corresponding estimated risks of xerostomia were also compared. Correlations between anatomical markers and dose differences were investigated. Results: Compared with the initial planning, a PG overdose was observed during IMRTstd for 59% of the PGs, with an average increase of 3.7 Gy (10.0 Gy maximum) in mean dose and of 8.2% (23.9% maximum) in the risk of xerostomia. Compared with the initial planning, weekly replanning reduced the PG mean dose for all patients (p<0.05). In the overirradiated PG group, weekly replanning reduced the mean dose by 5.1 Gy (12.2 Gy maximum) and the absolute risk of xerostomia by 11% (p<0.01) (30% maximum). The PG overdose and the dosimetric benefit of replanning increased with tumor shrinkage and neck thickness reduction (p<0.001). Conclusions: During the course of LAHNC IMRT, around 60% of the PGs were overdosed by approximately 4 Gy. Weekly replanning decreased the PG mean dose by 5 Gy and therefore the xerostomia risk by 11%.
17,179
390
[ 353, 415, 349, 236, 97, 285, 186, 2486, 273, 751, 80 ]
15
[ "dose", "replanning", "pg", "mean", "gy", "pgs", "figure", "cumulated", "patients", "risk" ]
[ "modulated radiotherapy", "xerostomia risk cranial", "chemoradiotherapy common toxicity", "neck cancer imrt", "xerostomia defined salivary" ]
null
[CONTENT] Head and neck cancer | Anatomical variation | Adaptive RT | Xerostomia [SUMMARY]
null
[CONTENT] Head and neck cancer | Anatomical variation | Adaptive RT | Xerostomia [SUMMARY]
[CONTENT] Head and neck cancer | Anatomical variation | Adaptive RT | Xerostomia [SUMMARY]
[CONTENT] Head and neck cancer | Anatomical variation | Adaptive RT | Xerostomia [SUMMARY]
[CONTENT] Head and neck cancer | Anatomical variation | Adaptive RT | Xerostomia [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Female | Head and Neck Neoplasms | Humans | Male | Middle Aged | Neoplasm Staging | Organ Sparing Treatments | Parotid Gland | Prognosis | Radiometry | Radiotherapy Dosage | Radiotherapy Planning, Computer-Assisted | Xerostomia [SUMMARY]
null
[CONTENT] Aged | Aged, 80 and over | Female | Head and Neck Neoplasms | Humans | Male | Middle Aged | Neoplasm Staging | Organ Sparing Treatments | Parotid Gland | Prognosis | Radiometry | Radiotherapy Dosage | Radiotherapy Planning, Computer-Assisted | Xerostomia [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Female | Head and Neck Neoplasms | Humans | Male | Middle Aged | Neoplasm Staging | Organ Sparing Treatments | Parotid Gland | Prognosis | Radiometry | Radiotherapy Dosage | Radiotherapy Planning, Computer-Assisted | Xerostomia [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Female | Head and Neck Neoplasms | Humans | Male | Middle Aged | Neoplasm Staging | Organ Sparing Treatments | Parotid Gland | Prognosis | Radiometry | Radiotherapy Dosage | Radiotherapy Planning, Computer-Assisted | Xerostomia [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Female | Head and Neck Neoplasms | Humans | Male | Middle Aged | Neoplasm Staging | Organ Sparing Treatments | Parotid Gland | Prognosis | Radiometry | Radiotherapy Dosage | Radiotherapy Planning, Computer-Assisted | Xerostomia [SUMMARY]
[CONTENT] modulated radiotherapy | xerostomia risk cranial | chemoradiotherapy common toxicity | neck cancer imrt | xerostomia defined salivary [SUMMARY]
null
[CONTENT] modulated radiotherapy | xerostomia risk cranial | chemoradiotherapy common toxicity | neck cancer imrt | xerostomia defined salivary [SUMMARY]
[CONTENT] modulated radiotherapy | xerostomia risk cranial | chemoradiotherapy common toxicity | neck cancer imrt | xerostomia defined salivary [SUMMARY]
[CONTENT] modulated radiotherapy | xerostomia risk cranial | chemoradiotherapy common toxicity | neck cancer imrt | xerostomia defined salivary [SUMMARY]
[CONTENT] modulated radiotherapy | xerostomia risk cranial | chemoradiotherapy common toxicity | neck cancer imrt | xerostomia defined salivary [SUMMARY]
[CONTENT] dose | replanning | pg | mean | gy | pgs | figure | cumulated | patients | risk [SUMMARY]
null
[CONTENT] dose | replanning | pg | mean | gy | pgs | figure | cumulated | patients | risk [SUMMARY]
[CONTENT] dose | replanning | pg | mean | gy | pgs | figure | cumulated | patients | risk [SUMMARY]
[CONTENT] dose | replanning | pg | mean | gy | pgs | figure | cumulated | patients | risk [SUMMARY]
[CONTENT] dose | replanning | pg | mean | gy | pgs | figure | cumulated | patients | risk [SUMMARY]
[CONTENT] dose | pg | replanning | imrt | art | actual delivered dose | actual delivered | delivered dose | delivered | actual [SUMMARY]
null
[CONTENT] replanning | dose | mean | gy | pg | parotid gland | figure | patients | sd | gland [SUMMARY]
[CONTENT] replanning | dose | strategy | pgs | study | pg | benefit | ctv | weekly | art [SUMMARY]
[CONTENT] dose | replanning | pg | gy | mean | pgs | figure | weekly | cumulated | mm [SUMMARY]
[CONTENT] dose | replanning | pg | gy | mean | pgs | figure | weekly | cumulated | mm [SUMMARY]
[CONTENT] ||| ||| PG | weekly [SUMMARY]
null
[CONTENT] 59% | 3.7 | 10.0 | 8.2% | 23.9% ||| weekly | PG | p<0.05 ||| PG group | weekly | 5.1 | 12.2 | 11% | 30% ||| [SUMMARY]
[CONTENT] around 60% | 4 ||| PG | 5 | 11% [SUMMARY]
[CONTENT] ||| ||| PG | weekly ||| Fifteen | 70 ||| ||| weekly ||| ||| PG ||| ||| ||| 59% | 3.7 | 10.0 | 8.2% | 23.9% ||| weekly | PG | p<0.05 ||| PG group | weekly | 5.1 | 12.2 | 11% | 30% ||| ||| around 60% | 4 ||| PG | 5 | 11% [SUMMARY]
[CONTENT] ||| ||| PG | weekly ||| Fifteen | 70 ||| ||| weekly ||| ||| PG ||| ||| ||| 59% | 3.7 | 10.0 | 8.2% | 23.9% ||| weekly | PG | p<0.05 ||| PG group | weekly | 5.1 | 12.2 | 11% | 30% ||| ||| around 60% | 4 ||| PG | 5 | 11% [SUMMARY]
β-Eudesmol Inhibits the Migration of Cholangiocarcinoma Cells by Suppressing Epithelial-Mesenchymal Transition via PI3K/AKT and p38MAPK Modulation.
36037109
Cholangiocarcinoma (CCA) is a highly aggressive tumor with a high risk of distant metastasis. A drug that prevents CCA development and spread is urgently needed. In this research, we investigated the effect of β-eudesmol on the migration, invasion, and epithelial-mesenchymal transition (EMT) of a CCA cell line.
BACKGROUND
MTT and transwell assays were used to investigate the antiproliferative activity, as well as activity on cell migration and cell invasion. Real-time PCR and western blot analysis were used to investigate the expression of EMT marker genes and proteins.
MATERIALS AND METHODS
β-Eudesmol was shown to exhibit potent antiproliferative activity (IC50 92.25-185.67 µM) and to significantly reduce CCA cell migration and invasion (27.3-62.7%). At both the mRNA and protein levels, it significantly up-regulated the expression of the epithelial marker E-cadherin (3-3.4-fold), while down-regulating the expression of the mesenchymal markers vimentin (0.6-0.8-fold) and snail-1 (0.4-0.6-fold). Furthermore, β-eudesmol inhibited PI3K and AKT phosphorylation (0.5-0.8-fold), while enhancing p38MAPK activity (1.2-3.6-fold).
RESULTS
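The IC50 range quoted in the results above comes from fitting MTT viability data (the authors used CalcuSyn; see the Methods later in this record). As a hedged illustration of the same idea, the sketch below fits a four-parameter logistic dose-response curve with SciPy; the concentration and viability values are made-up placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (viability vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Placeholder data (percent viability vs. concentration in µM); not the study's data.
conc = np.array([17.5, 35, 70, 140, 280, 560, 1120], dtype=float)
viab = np.array([98, 95, 85, 60, 30, 12, 5], dtype=float)

popt, _ = curve_fit(
    four_param_logistic, conc, viab,
    p0=[0.0, 100.0, 150.0, 1.0],
    bounds=([0.0, 50.0, 1.0, 0.1], [30.0, 120.0, 2000.0, 5.0]),
)
print(f"estimated IC50 ≈ {popt[2]:.0f} µM")
```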
Altogether, the anti-metastatic activity of β-eudesmol might be due to its suppressive effect on EMT via modulating the PI3K/AKT and p38MAPK signaling cascades.
CONCLUSION
[ "Bile Duct Neoplasms", "Bile Ducts, Intrahepatic", "Cell Line, Tumor", "Cell Movement", "Cell Proliferation", "Cholangiocarcinoma", "Epithelial-Mesenchymal Transition", "Humans", "Phosphatidylinositol 3-Kinases", "Proto-Oncogene Proteins c-akt", "Sesquiterpenes, Eudesmane", "p38 Mitogen-Activated Protein Kinases" ]
9741878
Introduction
Cholangiocarcinoma (CCA), a malignant transformation of the epithelial cells lining the bile duct, is a serious public health issue in several parts of the world, especially in Asia. Thailand is the country with the highest incidence of CCA in the world (Goral, 2017). The ingestion of improperly cooked, fermented, or preserved cyprinid fish containing the fluke Opisthorchis viverrini is linked to the highest prevalence of CCA being found in Northeast Thailand (Aukkanimart et al., 2017; Kamsa-ard et al., 2021). In addition, genetic and epigenetic changes in regulatory genes in cholangiocytes have been associated with activation of oncogenes and deregulation of tumor suppressor genes in CCA (Fava and Lorenzini, 2012). Metastasis is one of the primary causes of CCA-related death (Hahn et al., 2020). The increased motility and invasiveness of migrating cells are needed for the first few steps of metastasis, and they have been connected to the epithelial-mesenchymal transition (EMT). EMT is a reversible transformation of epithelial cells into mesenchymal cells with the capacity to migrate and invade. Reduced expression of epithelial markers such as E-cadherin and β-catenin and increased expression of mesenchymal markers such as N-cadherin, vimentin, and snail-1 have been reported in both intrahepatic and extrahepatic CCA (Settakorn et al., 2005; Sato et al., 2010; Ryu et al., 2012; Huang et al., 2014; Vaquero et al., 2017). The loss of E-cadherin, a primary epithelial marker, is a necessary condition for EMT and metastasis stimulation (Heerboth et al., 2015). The main obstacles in CCA management are the lack of early screening tools and poor clinical outcomes. Metastasis is often already present at the time of diagnosis because of the late clinical presentation and the lack of effective non-surgical therapies. A drug that stops CCA from progressing and spreading is therefore urgently needed. The dried rhizome of Atractylodes lancea (AL) has been used in Japanese Kampo medicine (“So-jutsu”), Thai traditional medicine (“Khod-Kha-Mao”), and Chinese traditional medicine (“Cang Zhu”) for the treatment of various illnesses, including gastrointestinal problems, rheumatic diseases, influenza, and night blindness (Koonrungsesomboon et al., 2014; Na-Bangchang et al., 2017). β-Eudesmol, a sesquiterpenoid alcohol, is a major component of the AL rhizome extract. It exhibits diverse biological and therapeutic activities (Acharya et al., 2021). It shows potent antiproliferative activity against CCA, liver cancer, leukemia, and melanoma cells (Bomfim et al., 2013; Li et al., 2013). Previous research on CCA cells in vitro and in animal models has shown promising anti-CCA activity of β-eudesmol (Plengsuriyakarn et al., 2015; Mathema et al., 2017; Kotawong et al., 2018). It induced apoptosis in CCA cell lines through the activation of caspase-3 and -7 (Kotawong et al., 2018). The expression of the detoxifying enzymes heme oxygenase (HO)-1 and NAD(P)H quinone dehydrogenase (NQO)-1 in CCA cells is also suppressed by β-eudesmol (Mathema et al., 2017; Srijiwangsa et al., 2018). The antiproliferative activity of β-eudesmol against CCA cells is attributed to its inhibitory activity on STAT1/3 phosphorylation and NF-κB expression (Mathema et al., 2017). In a xenografted nude mouse model of CCA, a high dose of β-eudesmol (100 mg/kg body weight for 30 days) inhibited tumor growth and lung metastasis (Plengsuriyakarn et al., 2015). However, the influence of β-eudesmol on CCA cell migration in vitro and its possible mechanism have yet to be investigated.
The present study reported significant inhibitory activity of β-eudesmol on the migration and invasion of an intrahepatic CCA cell line (HuCCT1). The anti-migration activity of β-eudesmol might be due to its suppressive action on EMT via PI3K/AKT and p38MAPK modulation.
null
null
Results
Antiproliferative Activity of β-eudesmol The antiproliferative activity of β-eudesmol was assessed using the MTT assay after 24, 48, and 72 hours of cell exposure. The compound showed potent antiproliferative activity on HuCCT1 cells in a time- and concentration-dependent manner (Figure 1A). The corresponding IC50 values [median (range)] for the 24, 48, and 72 hours of exposure were 180 (175.63-185.67), 157 (149.75-162.35), and 99.90 (92.25-105.65) µM, respectively. Under light microscopy, irregular morphology, cellular debris, and floating dead cells were visible after β-eudesmol treatment at the concentrations of 78.5 µM and 157 µM (Figure 1B). Inhibitory Effects on Migration and Invasion The number of cells that migrated through the transwell membrane was stained and counted. β-Eudesmol inhibited HuCCT1 cell migration in a concentration-dependent manner. After 24 hours of exposure to 78.5, 157, and 240 µM of β-eudesmol, the number of migrating cells was significantly decreased compared to untreated control cells (Figure 2A). At 78.5, 157, and 240 µM of β-eudesmol exposure, the percentages [median (range)] of migrating cells were 59.4 (56.1-62.7)%, 29.7 (27.3-32.1)%, and 13.26 (10.26-16.26)%, respectively. The invasion assay was performed with a matrigel-coated transwell membrane, and the number of cells passing through the coated membrane after β-eudesmol exposure was stained and counted. β-Eudesmol at 78.5, 157, and 240 µM substantially reduced cell invasion to 62.98 (56.18-69.78)%, 41.09 (38.29-43.89)%, and 29.52 (27.72-31.32)%, respectively, compared with the untreated control cells (Figure 2B). Modulatory Effects on Genes Associated with EMT mRNA expression levels of the three EMT-associated genes, i.e., E-cadherin, vimentin, and snail-1, were examined and compared to the untreated control after a 24-hour β-eudesmol exposure (Figures 3A, B, C). Results showed that exposure to 78.5 µM and 157 µM β-eudesmol enhanced E-cadherin gene expression by 3-fold (p=0.037) and 3.4-fold (p=0.037), respectively. Similarly, 78.5 µM reduced vimentin gene expression by 0.7-fold (p=0.037), while 157 µM reduced gene expression by 0.5-fold (p=0.037). Likewise, β-eudesmol at 78.5 µM and 195 µM reduced the expression of the snail-1 gene by 0.8-fold (p=0.03) and 0.4-fold (p=0.037), respectively. Suppression of EMT via PI3K/AKT and p38MAPK Modulation HuCCT1 cells were exposed to β-eudesmol for 24 hours, and the expression of EMT marker proteins (E-cadherin, vimentin, and snail-1) in the extracted lysate was investigated using Western blot analysis. The results showed that β-eudesmol increased E-cadherin expression by 2.4-fold (p=0.037) and 2.7-fold (p=0.037) at 78.5 µM and 157 µM, respectively (Figures 4A, H). Vimentin expression was decreased by 0.8-fold (p=0.037) and 0.6-fold (p=0.037) at the two concentrations, respectively (Figures 4A, I). In addition, snail-1 expression was decreased by 0.6-fold (p=0.037) and 0.4-fold (p=0.037) at the two concentrations, respectively (Figures 4A, J). Altogether, it was concluded that β-eudesmol down-regulated EMT by modulating the expression of E-cadherin, vimentin, and the transcription factor snail-1. PI3K/AKT and p38MAPK are among the upstream regulators of the EMT process. The expression levels of these proteins in β-eudesmol-treated cells were determined. The expression levels of total PI3K, phospho-PI3K at Tyr 458, and their ratios were initially investigated.
The results revealed that exposing HuCCT1 cells to 78.5 µM β-eudesmol reduced total PI3K and p-PI3K expression by 0.8-fold (p=0.03) and 0.5-fold (p=0.03), respectively. On the other hand, β-eudesmol at 157 µM slightly increased the expression of total PI3K and p-PI3K by 1.4-fold (p=0.03) and 1.8-fold (p=0.03), respectively. At the basal level, 78.5 µM, and 157 µM, the ratios of p-PI3K to PI3K were 0.7, 0.4, and 0.9, respectively (Figures 4A, B, C). Similarly, total AKT, phospho-AKT at Ser 473, and their ratios were also affected by the β-eudesmol treatment. At 78.5 µM, the expression of p-AKT was increased by 1.2-fold (p=0.037), while total AKT expression was decreased by 0.9-fold (p=0.037). At 157 µM, however, the expression of both total AKT and p-AKT was reduced, by 0.8-fold (p=0.037) and 0.7-fold (p=0.037), respectively. At the basal level, the ratio of p-AKT to AKT was 0.8, whereas treatment with 78.5 µM and 157 µM β-eudesmol shifted the ratio to 0.6 (p=0.05) and 1.0 (p=0.05), respectively (Figures 4A, D, E). These results suggested that β-eudesmol at a low concentration (78.5 µM) inhibited the phosphorylation of PI3K and AKT proteins. Additionally, the effects of β-eudesmol treatment on total p38MAPK expression, p-p38MAPK (Thr180/Tyr182) expression, and their ratios were investigated. β-Eudesmol increased the expression of p-p38MAPK by 3-fold (p=0.037) and 3.6-fold (p=0.034) at 78.5 µM and 157 µM, respectively. At 157 µM, it slightly increased the expression of total p38MAPK, by 1.2-fold (p=0.037). The ratio of p-p38MAPK to p38MAPK at the basal level was 0.18 and increased to 0.6 (p=0.05) and 0.5 (p=0.05) upon exposure to 78.5 µM and 157 µM β-eudesmol, respectively (Figures 4A, F, G). These results suggested that β-eudesmol activated the p38MAPK signaling pathway. List of Genes and Primers Used for PCR. Antiproliferative Activity of β-eudesmol on HuCCT1 Cells. Figure (A) shows the viability (%) of HuCCT1 cells after exposure to different concentrations of β-eudesmol for 24 hours, 48 hours, and 72 hours. Cell viability was determined by the MTT assay. Data are expressed as the median (range) from three independent experiments. Figure (B) shows the morphological evidence of β-eudesmol-induced cell death. The black arrows indicate normal cells (control) and dead cells (treated), respectively. Effects of β-eudesmol on Migration and Invasion of HuCCT1 Cells. Figure (A) shows the representative number of cells that migrated after exposure to 0, 78.5, 157, and 240 µM β-eudesmol for 24 hours. Figure (B) shows the representative number of cells that invaded after exposure to 0, 78.5, 157, and 240 µM β-eudesmol for 24 hours. The cells were fixed and stained, and ten representative fields were counted under the light microscope. The bar graph data represent the median (range) from three independent experiments. *p=0.05, **p=0.04 vs untreated control cells. Effect of β-eudesmol Treatment on the Expression of EMT-Associated Genes.
Figure A shows the representative pictures of the protein bands showing the expression levels of p-p38MAPK, p38MAPK, p-AKT, AKT, p-PI3K, PI3K, E-cadherin, Vimentin, and Snail-1 in HuCCT1 cells treated with β-eudesmol at 0, 78.5 µM and 157 µM for 24 hours. Bar graph B shows the fold change in expression of p-PI3K and PI3K, and bar graph C shows their ratios. Bar graph D shows the fold change in expression of p-AKT and AKT, and bar graph E shows their ratios. Bar graph F shows the fold change in expression of p-p38MAPK and p38MAPK, and bar graph G shows their ratios. Bar graph H, I, and J show the fold change in expression of E-cadherin, vimentin, and snail-1, respectively. β-actin was taken as an internal loading control. *p=0.05, **p=0.037, ***p=0.034 vs untreated control cells. Abbreviations: PI3K, Phosphatidylinositide-3 kinase; AKT, Protein kinase B; MAPK, Mitogen-activated protein kinase; EMT, epithelial-mesenchymal transition
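The Western blot quantities in the figure legend above are band densitometry values normalized to the β-actin loading control and expressed relative to the untreated control, with phospho-to-total ratios derived from those normalized values. A minimal sketch of that arithmetic is given below; the band intensities are made-up placeholders, not the study's data.

```python
def normalized_fold_change(target_treated: float, actin_treated: float,
                           target_control: float, actin_control: float) -> float:
    """Beta-actin-normalized densitometry fold change of a treated band
    relative to the untreated control band."""
    return (target_treated / actin_treated) / (target_control / actin_control)

# Hypothetical band intensities (arbitrary densitometry units).
p_akt = normalized_fold_change(1500, 2000, 1250, 2000)   # phospho-AKT
akt   = normalized_fold_change(1800, 2000, 2000, 2000)   # total AKT
print(f"p-AKT fold change: {p_akt:.2f}, AKT fold change: {akt:.2f}, "
      f"p-AKT/AKT ratio: {p_akt / akt:.2f}")
```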
null
null
[ "Author Contribution Statement" ]
[ "BA, WC, and KN were involved in the design of the experimental study. BA performed the experiments. BA and WC performed data analysis. BA drafted the manuscript. KN revised the manuscript. All authors reviewed and approved the final manuscript for submission. All meet the ICMJE criteria for authorship." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Author Contribution Statement" ]
[ "Cholangiocarcinoma (CCA), a malignant transformation of epithelial cells lining the bile duct, is a serious public health issue in several parts of the world, especially in Asia. Thailand is the country where the incidence of CCA is highest in the world (Goral, 2017). The ingestion of improperly cooked, fermented, or preserved cyprinid fish species food containing the fluke, Opisthorchis viverrini, is linked to the highest prevalence of CCA in Northeast Thailand (Aukkanimart et al., 2017; Kamsa-ard et al., 2021). Besides, genetic and epigenetic changes in regulatory genes in cholangiocytes have been associated with activation of oncogenes and deregulation of tumor suppressor genes in CCA (Fava and Lorenzini, 2012). \nMetastasis is one of the primary cause of CCA related death (Hahn et al., 2020). The increased motility and invasiveness of migrating cells are needed for the first few steps of metastasis, and they have been connected to the epithelial-mesenchymal transition (EMT). EMT is a reversible transformation of epithelial cells into mesenchymal cells with the capacity to migrate and invade. Reduced expression of epithelial markers like E-cadherin and β-catenin and increased expression of mesenchymal markers like N-cadherin, Vimentin, and Snail-1 have been reported in both intrahepatic and extrahepatic CCA (Settakorn t al., 2005; Sat et al., 2010; Ryu et al., 2012; Huang et al., 2014; Vaquero et al., 2017). The loss of E-cadherin, a primary epithelial marker, is a necessary condition for EMT and metastasis stimulation (Heerboth et al., 2015). The main obstacles in CCA management are the lack of early screening tools and poor clinical results. Metastasis emerges at the time of diagnosis is due to the late clinical presentation and the lack of effective non-surgical therapies. So, a drug that stops CCA from progressing and spreading is urgently needed.\nThe dried rhizome of the Atractylodes lancea (AL) has been used in Japanese Kampo medicine (“So-jutsu”), Thai traditional medicine (“Khod-Kha-Mao”), and Chinese traditional medicine (“Cang Zhu”) for treatment of various illnesses, including gastrointestinal problems, rheumatic diseases, influenza, and night blindness (Koonrungsesomboon et al., 2014; Na-Bangchang et al., 2017). The β-eudesmol, a sesquiterpenoid alcohol, is a major component of the AL rhizome extract. It exhibits diverse biological and therapeutic activities (Achrya et al., 2021). It shows potent antiproliferative activity against CCA, liver cancer, leukemia, and melanoma cells (Bomfim et al., 2013; Li et al., 2013). Previous research on CCA cells in vitro and in animal models has shown promising anti-CCA activity of β-eudesmol (Plengsuriyakarn et al., 2015; Mathema et al., 2017; Kotawong et al., 2018). It induced apoptosis in CCA cell lines through the activation of caspase-3 and 7 (Kotawong et al., 2018). The expression of detoxifying enzyme heme oxygenase (HO)-1 and NAD(P)H quinone dehydrogenase (NOQ)-1 in CCA cells is also suppressed by β-eudesmol (Mathema et al., 2017; Srijiwangsa et al., 2018). The antiproliferative activity of β-eudesmol against CCA cells is attributed to its inhibitory activity on STAT1/3 phosphorylation and NF-κB expression (Mathema et al., 2017). In the xenografted nude mouse model of CCA, a high dose of β-eudesmol (100 mg/kg body weight for 30 days) prevented tumor volume and lung metastasis (Plengsuriyakarn et al., 2015). 
However, the influence of β-eudesmol on CCA cell migration in vitro and its possible mechanism have yet to be investigated. The present study reported significant inhibitory activity of β-eudesmol on the migration and invasion of intrahepatic CCA cell line (HuCCT1). The anti-migration activity of β-eudesmol might be due to its suppressive action on EMT via PI3K/AKT and p38MAPK modulation.", "\nChemical and Reagents\n\nHuCCT1 cells were purchased from the Japanese Collection of Research Bioresources Cell Bank (JCRB), Japan. Roswell park memorial institute-1640 (RPMI-1640), fetal bovine serum (FBS), trypsin-EDTA (0.25 %), and antibiotic-antimycotic (100x) containing 10,000 U/ml of penicillin, and 10,000 U/ml streptomycins were purchased from Gibco BRL life technologies (Grand Island, NY, USA). The pure β-eudesmol was purchased from the Wako Pure Chemical Industry (Osaka, Japan). The 3-(4, 5-dimethyl-2-thiazoyl)-2, 5-diphenyl-2H-tetrazolium bromide (MTT) was purchased from Sigma Aldrich (St. Louis, MO, USA). Matrigel-coated transwell membrane was purchased from BD Biosciences (San Jose, CA, USA). The TRIzol reagent was purchased from Sigma Aldrich (St. Louis, MO, USA). RQ1 RNase-free DNase kit was purchased from Promega (Madison, WI, USA). SuperScriptTM III first-strand synthesis cDNA synthesis kit, PierceTM BCA protein assay kit, and alkaline phosphatase (AP)-labeled anti-rabbit secondary antibody were obtained from Thermo Fischer Scientific (Waltham, MA, USA). The iTaqTM universal SYBR green supermix was purchased from Bio-Rad (Hercules, CA, USA). All the rabbit primary antibodies (β-actin, PI3K, p-PI3K, AKT, p-AKT, p38MAPK, p-p38MAPK, E-cadherin, Vimentin, and Snail-1), protease inhibitor cocktail, and RIPA lysis buffer were purchased from Cell Signaling Technology Inc. (Beverly, MA, USA). The AP-chromogen BCIP/NBT was purchased from Amresco (Solon, OH, USA).\n\nCell Culture\n\nHuCCT1 cells were grown in RPMI medium supplemented with 10% FBS and 1% antibiotic-antimycotic solution. Cells were maintained in a 5% CO2 environment at 37 oC. Cell subcultures were performed every 3-4 days using 0.25% trypsin-EDTA.\n\nAntiproliferative Assay\n\nThe standard colorimetric MTT assay was used to assess cell viability. HuCCT1 cells were seeded (10,000 cells/well) onto a 96-well microtiter plate and incubated at 37°C for 24 hours. The cells were exposed to different concentrations of β-eudesmol (17.5, 35, 70, 140, 280, 560, and 1,120 µM) and further incubated for additional 24 hours, 48 hours, and 72 hours, respectively. The culture medium was discarded, and 20 µL of 5 mg/mL MTT reagent was added to each well. The plate was incubated at 37 °C in the dark for 4 hours. To solubilize the formazan crystals, 100 µL DMSO was applied to each well after withdrawal of the supernatant without disturbing the bottom layer. The plate was incubated at room temperature (25 oC) for 30 minutes (with gentle shaking), and the absorbance was measured at 570 nm using 96-well VarioskanTM Microplate Reader (Thermo Scientific, Rockford, USA). CalcuSyn version 2.11 software (Biosoft, Cambridge, UK) was used to determine the cell viability (%) and the corresponding IC50 (half-maximal inhibitory concentration).\n\nTranswell Migration and Invasion Assay\n\nTranswell assay (24 well, 8 µm pore size membrane, Corning Inc., NY, USA) was utilized to evaluate the migration of HuCCT1 cells (both treated and control) following the previously described protocol with modifications (Trnh et al., 2017). 
The HuCCT1 cells were collected and resuspended in a serum-free RPMI medium after being pretreated with β-eudesmol for 24 hours. The transwell was loaded with 200 µL of cell suspension (1x105 cells) in the upper chamber and 600 µL of 20 % FBS-containing RPMI media in the lower chamber. The cells that did not migrate and remained inside the cup were removed with a cotton swab after 24 hours of incubation. The cells that migrated through the membrane to the bottom surface of the cup were fixed in methanol, stained with Giemsa, and counted under a light microscope. A similar protocol was used for the invasion assay, with the exception that the transwell chambers were pre-coated with 100 µl of matrigel (diluted in serum-free RPMI media) and incubated at 37 oC for 1 hour. The cells were fixed, stained, and counted. Each experiment was repeated three times.\n\nReal-time PCR\n\nThe 2 x 105 cells were plated onto a -well plate and incubated at 37 oC overnight. The cells were further incubated for 24 hours after being treated with different concentrations of β-eudesmol. Only the culture media was used to treat the control cells (negative control). Total RNA was extracted from the cells using TRIzol reagent following the manufacturer’s instructions. NanoDrop was used to determine the RNA concentration and purity. The RQ1 DNase kit was used to remove contaminating genomic DNA from the total RNA extracted. Single-stranded cDNA was synthesized from total RNA (1 µg) using Superscript III reverse transcriptase cDNA construction kit. The concentration of the synthesized cDNA product was determined and used as a template for real-time PCR. The CFX96 real-time PCR detection system (Bio-Rad, Hercules, CA, USA) was used to determine the mRNA expression level of the targeted genes using iTaq Universal SYBR green supermix and specific primer sequences. The PCR conditions were denaturation at 95oC for 5 min, and 40 cycles of amplification at 95 oC for 15 sec and annealing at 62 oC for 1 min. The delta-delta Ct technique was used to measure gene expression levels in comparison to controls (Livak et al., 2001), and the expression was normalized using the housekeeping gene GAPDH. The list of target genes and primer sequences is shown in Table 1. \n\nWestern Blot Analysis\n\nWestern blot analysis was performed according to the previously described method with slight modifications (Mathema et al., 2017) to evaluate the expression levels of EMT-related proteins and signaling pathways. Briefly, after exposing to β-eudesmol for 24 hours, the cells were washed with PBS and lysed in RIPA buffer containing a protease and phosphatase inhibitor cocktail. The PierceTM BCA protein assay kit was used to determine the amount of protein in the sample. After heat denaturation, equal amounts of protein samples (30 µg/lane) were separated on a nitrocellulose membrane using 8-12.5 % sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The membrane was blocked for 30 minutes in TBS with 5% BSA before being incubated overnight at 4°C with appropriate dilutions of the primary antibodies. After washing with TBS containing 0.1% Tween-20, the membrane was incubated with an AP-conjugated secondary antibody for 1 hour. The AP chromogen (BCIP/NBT) system was used to colorimetrically detect the corresponding protein bands. 
Finally, the protein band was densitometrically analyzed with Image J software (NIH, USA), and the amount was normalized to an β-actin internal control.\n\nStatistical Analysis\n\nIBM SPSS Statistics 21 (Chicago, Illinois, USA) was used to perform the statistical analysis. Data are presented as a median (range). The non-parametric tests Mann-Whitney-U and Kruskal-Wallis tests were used for comparison of two or more than two groups of quantitative variables. The statistical significance level was set at 0.05.", "\nAntiproliferative Activity of β-eudesmol\n\nThe antiproliferative activity of β-eudesmol was assessed using the MTT assay after 24, 48, and 72 hours of cells exposure. The compound showed potent antiproliferative activity on HuCCT1 cells in a time- and concentration-dependent manner (Figure 1A). The corresponding IC50 values [median (range)] for the 24, 48 and 72 hours exposure were 180 (185.67-175.63), 157 (149.75-162.35), and 99.90 (92.25-105.65-) µM, respectively. Under light microscopy, the irregular morphology and visible signs of cellular debris and floating dead cells were visible after β-eudesmol treatment at the concentrations of 78.5 µM and 157 µM (Figure 1B).\n\nInhibitory Effects on Migration and Invasion\n\nThe number of cells that migrated through the transwell membrane was stained and counted. β-Eudesmol inhibited HuCCT1 cell migration in a concentration-dependent manner. After 24 hours of exposure to 78.5, 157, and 240 µM of β-eudesmol, the number of migrating cells was significantly decreased compared to untreated control cells (Figure 2A). At 78.5, 157, and 240 µM of β-eudesmol exposure, the percents [median (range)] of cells migrating were 59.4 (56.1-62.7) %, 29.7 (27.3-32.1) %, and 13.26 (10.26- 16.26) %, respectively.\nThe invasion assay was performed with a matrigel-coated transwell membrane, and the number of cells passing through the coated membrane after β-eudesmol exposure was stained and counted. β-Eudesmol at 78.5, 157, and 240 µM substantially reduced cell invasion to 62.98 (56.18-69.78) %, 41.09 (38.29-43.89) %, and 29.52 (27.72-31.32) %, respectively, compared with the untreated control cells (Figure 2B).\n\nModulatory Effects on Genes Associated with EMT\n\nmRNA expression levels of the three EMT-associated genes, i.e., E-cadherin, vimentin, and snail-1 were examined and compared to the untreated control after a 24-hour β-eudesmol exposure (Figures 3 A, B, C). Results showed that exposure to 78.5 µM and 157 µM β-eudesmol enhanced E-cadherin gene expression by 3-fold (p=0.037) and 3.4-fold (p=0.037), respectively. Similarly, 78.5 µM reduced vimentin gene expression by 0.7-fold (p=0.037), while 157 µM reduced gene expression by 0.5-fold (p=0.037). Likewise, β-eudesmol at 78.5 µM and 195 µM reduced the expression of the snail-1 gene by 0.8-fold (p=0.03) and 0.4-fold (p=0.037), respectively. \n\nSuppression of EMT via PI3K/AKT and p38MAPK Modulation\n\nHuCCT1 cells were exposed to β-eudesmol for 24 hours and the expression of EMT marker proteins (E-cadherin, vimentin, and snail) in the extracted lysate was investigated using Western blot analysis. The results showed that β-eudesmol increased E-cadherin expression by 2.4-fold (p=0.037) and 2.7-fold (p=0.037) at 78.5 µM and 157 µM, respectively (Figures 4 A, H). Vimentin expression was decreased by 0.8-fold (p=0.037) and 0.6-fold (p=0.037) at both concentrations (Figsures 4 A, I). 
In addition, snail-1 expression was decreased by 0.6-fold (p=0.037) and 0.4-fold (p=0.037), respectively, at both concentrations (Figures A, J). Altogether, it was concluded that β-eudesmol down-regulated EMT by modulating the expression E-cadherin, vimentin, and the transcription factor snail-1.\nPI3K/AKT and p38MAPK are among the upstream regulator of the EMT process. The expression levels of these proteins in β-eudesmol treated cells were determined. The expression levels of total PI3K, phospho-PI3K at Tyr 458, and their ratios were initially investigated. The results revealed that exposing HuCCT1 cells to 78.5 µM β-eudesmol reduced total PI3K and p-PI3K expression by 0.8-fold (p=0.03) and 0.5-fold (p=0.03), respectively. On the other hand, β-eudesmol at 157 µM slightly increased the expression of total PI3K and p-PI3K by 1.4-fold (p=0.03) and 1.8-fold (p=0.03), respectively. On a basal level, 78.5 M, and 157 M, the ratios of p-PI3K to PI3K were 0.7, 0.4, and 0.9, respectively (Figures A, B, C). Similarly, total AKT, phospho-AKT at Ser 473, and their ratios were also affected by the β-eudesmol treatment. At 78.5 µM, the expression of p-AKT was increased by 1.2-fold (p=0.037), while total AKT expression was decreased by 0.9-fold (p=0.037). At 157 µM, however, the expression of total and p-AKT nearly was reduced by 0.8-fold (p=0.037) and 0.7-fold (p=0.037), respectively. At the basal level, the ratio of p-AKT to AKT was 0.8, but treatment with 78.5 µM and 157 µM β-eudesmol reduced the ratio to 0.6 (p=0.05) and 1 (p=0.05), respectively (Figures 4 A, D, E). These results suggested that β-eudesmol at low concentration (78.5 µM) inhibited the phosphorylation of PI3K and AKT proteins. \nAdditionally, the effects of β-eudesmol treatment on total p38MAPK expression, p-p38MAPK (at Thr180/Tyr182) expression, and their ratios were investigated. β-Eudesmol increased the expression of p-p38MAPK by 3-fold (p=0.037) and 3.6-fold (p=0.034) at 78.5 µM and 157 µM. At 157 µM, it slightly increased the expression of total p38MAPK by 1.2-fold (p=0.037). The ratio of p-p38MAK to p38MAPK at basal level was 0.18, and was increased to 0.6-fold (p=0.05) and 0.5-fold (p=0.05) when exposed to 78.5 μM and 157 μM β-eudesmol, respectively (Figures 4 A, F, G). These results suggested that β-eudesmol activated the p38MAPK signaling pathway.\nList of Genes and Primers Used for PCR\nAntiproliferative Activity of β-eudesmol on HuCCT1 Cells. Figure (A) shows the viability % of HuCCT1 cells after exposure to different concentrations of β-eudesmol for 24 hours, 48 hours, and 72 hours. The cell viability was determined by MTT assay. Data are expressed as the median (range) from the three independent experiments. Figure (B) shows the morphological evidence of β-eudesmol-induced cell death. The black arrow in the picture indicates normal cells (in control) and dead cells (in treated), respectively\nEffects of β-eudesmol on Migration and Invasion of HuCCT1 Cells. Figure (A) shows the representative number of cells migrated after exposure to 0, 78.5, 157, 240 μM ATD for 24 hours. Figure (B) shows the representative number of cells invaded after exposure to 0, 78.5, 157, 240 μM ATD for 24 hours. The cells were fixed, stained, and ten representative fields were counted under the light microscope. The bar graph data represents the median (range) from three independent experiments. *p=0.05, **p=0.04 vs untreated control cells\nEffect of β-eudesmol Treatment on the Expression of EMT Associated Genes. 
Bar graph (A) shows the fold change in expression of E-cadherin mRNA. Bar graph (B) shows the fold change in expression of Vimentin mRNA. Bar graph (C) shows the fold change in expression of the Snail-1 mRNA. The data represents the median (range) of the three independent experiments. * p=0.037 vs untreated control\nWestern Blot Results Showing the Effects of β-eudesmol on EMT Markers and PI3K/AKT and MAPK Expression. The protein was extracted by using the RIPA buffer and 30 µg whole protein per lane was separated by SDS-PAGE (8-12.5% gel concentration). Figure A shows the representative pictures of the protein bands showing the expression levels of p-p38MAPK, p38MAPK, p-AKT, AKT, p-PI3K, PI3K, E-cadherin, Vimentin, and Snail-1 in HuCCT1 cells treated with β-eudesmol at 0, 78.5 µM and 157 µM for 24 hours. Bar graph B shows the fold change in expression of p-PI3K and PI3K, and bar graph C shows their ratios. Bar graph D shows the fold change in expression of p-AKT and AKT, and bar graph E shows their ratios. Bar graph F shows the fold change in expression of p-p38MAPK and p38MAPK, and bar graph G shows their ratios. Bar graph H, I, and J show the fold change in expression of E-cadherin, vimentin, and snail-1, respectively. β-actin was taken as an internal loading control. *p=0.05, **p=0.037, ***p=0.034 vs untreated control cells. Abbreviations: PI3K, Phosphatidylinositide-3 kinase; AKT, Protein kinase B; MAPK, Mitogen-activated protein kinase; EMT, epithelial-mesenchymal transition", "β-Eudesmol exhibited potent antiproliferative activity that is time- and concentration-dependent (Mathema et al., 2017; Kotawng et al., 2018). The antiproliferative activity of β-eudesmol on HuCCT1 is qualitatively consistent with the findings from other studies in other types of human cancers, including lung cancer, liver cancer, and prostate cancer (Archarya et al., 2021). It significantly decreased HuCCT1 cell migration and invasion in an in vitro transwell assay in a concentration-dependent manner.\nMost CCA-related deaths occur because of metastasis (Hahn et al., 2020). The increased motility and invasive nature of migrating cells are required for the first few steps of metastasis, and it has been suggested that they are linked to the epithelial-mesenchymal transition (EMT) phase. EMT is a reversible mechanism in which epithelial cells transform into mesenchymal cells and develop the ability to migrate and invade (Heerboth et al., 2015). During EMT, epithelial markers such as E-cadherin, zonula occludens (ZO-1) and occludin are down-regulated, while mesenchymal markers such as N-cadherin, vimentin, and fibronectin are up-regulated. Previous studies have shown that CCA patients have lower levels of the epithelial marker E-cadherin and higher levels of mesenchymal markers, including vimentin and snail-1 (Settkorn et al., 2005; Sato et al., 2010; Ryu et al., 2012). In the present study however, β-eudesmol treatment up-regulated the expression of epithelial marker E-cadherin in HuCCT1 cells, while down-regulated the expression of mesenchymal markers (vimentin and snail-1) at both the mRNA and protein levels. Up-regulation of snail-1 expression has been shown to inhibit E-cadherin expression (Cano et al., 2000). It is possible that the low level of snail-1 expression is responsible for the enhancement of E-cadherin expression in β-eudesmol treated HuCCT1 cells. 
This finding suggested that β-eudesmol inhibited HuCCT1 cell migration and invasion by suppressing the formation of mesenchymal phenotypes.\nEMT is a complicated mechanism controlled by the interactions of several signaling pathways. Some of the molecular signaling pathways linked to EMT induction include transforming growth factor (TGF)-β, fibroblast growth factor, epidermal growth factor, Ras, Src, integrin, Wnt/β-catenin, Notch, and PI3K/AKT. The PI3K/AKT signaling pathway can cooperate with other signaling pathways such as TGF-, NF-B, and Wnt/-catenin to induce the EMT process (Xu et al., 2015). Besides, increased PI3K/AKT activation was also linked to increased metastasis in CCA (Yothaisong t al., 2013). The compound is gaining interest as a potential target for the prevention and treatment of metastatic tumors like CCA. The phosphorylation of both PI3K and AKT in HuCCT1 cells was significantly reduced by β-eudesmol at a low concentration (78.5 µM). This suggested that β-eudesmol could be utilized as a PI3K/AKT inhibitor at low concentration. In previous research, natural compound rotenone showed anticancer activity in colon cancer by inhibiting migration and EMT via regulating the activity of the PI3K/AKT signaling pathway (Xue et al., 2020). It is possible that the modulatory effect of β-eudesmol on the PI3K/AKT signaling pathway might be responsible for the up-regulated expression of E-cadherin and down-regulated expression of vimentin and snail-1. \nβ-eudesmol increased p38MAPK activity in HuCCT1 cells. Various cellular stresses, including chemotherapeutic agents, stimulate this stress-response pathway. In addition to its function in stress responses, studies have shown that p38 MAPK mediates pathways that lead to cell apoptosis and growth inhibitory signals (Olson et al., 2004). It has been documented that cisplatin-induced cancer cell death needs p38 MAPK activation (Olson et al., 2004). On the other hand, p38MAPK activation has been shown to have both anti-apoptotic and proliferative effects (Wada and Penninger, 2004). TGF-β-mediated EMT and cell migration have been shown to be involved in the activation of p38MAPK (Bakin et al., 2002). However, the relationship between β-eudesmol-induced p38MAPK activation and its role in EMT and metastasis of CCA cells needs further research. \nIn conclusion, β-eudesmol exhibited potent antiproliferative activity against HuCCT1 cells. It inhibited HuCCT1 cell migration and invasion in vitro. The anti-migration action of β-eudesmol might be due to its suppressive effect on EMT, as evidenced by the changes in expression of EMT markers. The compound also inhibited phosphorylation of PI3K and AKT, while activating phosphorylation of p38MAPK, which could attribute to its anti-migration and anti-EMT effect. The findings from the current study suggested that β-eudesmol can act as a suppressor of EMT in CCA, but further research with additional CCA cell lines and animal models is required.", "BA, WC, and KN were involved in the design of the experimental study. BA performed the experiments. BA and WC performed data analysis. BA drafted the manuscript. KN revised the manuscript. All authors reviewed and approved the final manuscript for submission. All meet the ICMJE criteria for authorship." ]
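The transwell results in this record are expressed as the percentage of migrating (or invading) cells relative to the untreated control, and the statistical methods above specify non-parametric Mann-Whitney U / Kruskal-Wallis comparisons. The sketch below shows that reduction to numbers with SciPy, using hypothetical per-field cell counts rather than the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical migrated-cell counts from ten microscope fields per condition
# (three independent experiments would normally be pooled); not the study's data.
control = np.array([105, 98, 112, 120, 101, 95, 108, 117, 99, 110])
treated = np.array([32, 28, 35, 30, 25, 33, 29, 31, 27, 34])   # e.g. 157 µM

percent_of_control = 100.0 * treated.mean() / control.mean()
stat, p_value = mannwhitneyu(treated, control, alternative="two-sided")
print(f"migration: {percent_of_control:.1f}% of control, Mann-Whitney U p = {p_value:.3g}")
```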
[ "intro", "materials|methods", "results", "discussion", null ]
[ "Atractylodes lancea", "EMT", "migration and invasion", "cell proliferation" ]
Introduction: Cholangiocarcinoma (CCA), a malignant transformation of epithelial cells lining the bile duct, is a serious public health issue in several parts of the world, especially in Asia. Thailand is the country where the incidence of CCA is highest in the world (Goral, 2017). The ingestion of improperly cooked, fermented, or preserved cyprinid fish species food containing the fluke, Opisthorchis viverrini, is linked to the highest prevalence of CCA in Northeast Thailand (Aukkanimart et al., 2017; Kamsa-ard et al., 2021). Besides, genetic and epigenetic changes in regulatory genes in cholangiocytes have been associated with activation of oncogenes and deregulation of tumor suppressor genes in CCA (Fava and Lorenzini, 2012). Metastasis is one of the primary cause of CCA related death (Hahn et al., 2020). The increased motility and invasiveness of migrating cells are needed for the first few steps of metastasis, and they have been connected to the epithelial-mesenchymal transition (EMT). EMT is a reversible transformation of epithelial cells into mesenchymal cells with the capacity to migrate and invade. Reduced expression of epithelial markers like E-cadherin and β-catenin and increased expression of mesenchymal markers like N-cadherin, Vimentin, and Snail-1 have been reported in both intrahepatic and extrahepatic CCA (Settakorn t al., 2005; Sat et al., 2010; Ryu et al., 2012; Huang et al., 2014; Vaquero et al., 2017). The loss of E-cadherin, a primary epithelial marker, is a necessary condition for EMT and metastasis stimulation (Heerboth et al., 2015). The main obstacles in CCA management are the lack of early screening tools and poor clinical results. Metastasis emerges at the time of diagnosis is due to the late clinical presentation and the lack of effective non-surgical therapies. So, a drug that stops CCA from progressing and spreading is urgently needed. The dried rhizome of the Atractylodes lancea (AL) has been used in Japanese Kampo medicine (“So-jutsu”), Thai traditional medicine (“Khod-Kha-Mao”), and Chinese traditional medicine (“Cang Zhu”) for treatment of various illnesses, including gastrointestinal problems, rheumatic diseases, influenza, and night blindness (Koonrungsesomboon et al., 2014; Na-Bangchang et al., 2017). The β-eudesmol, a sesquiterpenoid alcohol, is a major component of the AL rhizome extract. It exhibits diverse biological and therapeutic activities (Achrya et al., 2021). It shows potent antiproliferative activity against CCA, liver cancer, leukemia, and melanoma cells (Bomfim et al., 2013; Li et al., 2013). Previous research on CCA cells in vitro and in animal models has shown promising anti-CCA activity of β-eudesmol (Plengsuriyakarn et al., 2015; Mathema et al., 2017; Kotawong et al., 2018). It induced apoptosis in CCA cell lines through the activation of caspase-3 and 7 (Kotawong et al., 2018). The expression of detoxifying enzyme heme oxygenase (HO)-1 and NAD(P)H quinone dehydrogenase (NOQ)-1 in CCA cells is also suppressed by β-eudesmol (Mathema et al., 2017; Srijiwangsa et al., 2018). The antiproliferative activity of β-eudesmol against CCA cells is attributed to its inhibitory activity on STAT1/3 phosphorylation and NF-κB expression (Mathema et al., 2017). In the xenografted nude mouse model of CCA, a high dose of β-eudesmol (100 mg/kg body weight for 30 days) prevented tumor volume and lung metastasis (Plengsuriyakarn et al., 2015). 
However, the influence of β-eudesmol on CCA cell migration in vitro and its possible mechanism have yet to be investigated. The present study reported significant inhibitory activity of β-eudesmol on the migration and invasion of an intrahepatic CCA cell line (HuCCT1). The anti-migration activity of β-eudesmol might be due to its suppressive action on EMT via PI3K/AKT and p38MAPK modulation. Materials and Methods: Chemicals and Reagents HuCCT1 cells were purchased from the Japanese Collection of Research Bioresources Cell Bank (JCRB), Japan. Roswell Park Memorial Institute-1640 (RPMI-1640) medium, fetal bovine serum (FBS), trypsin-EDTA (0.25%), and antibiotic-antimycotic (100×) containing 10,000 U/ml of penicillin and 10,000 U/ml of streptomycin were purchased from Gibco BRL Life Technologies (Grand Island, NY, USA). Pure β-eudesmol was purchased from Wako Pure Chemical Industries (Osaka, Japan). 3-(4,5-Dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) was purchased from Sigma-Aldrich (St. Louis, MO, USA). The Matrigel-coated transwell membrane was purchased from BD Biosciences (San Jose, CA, USA). TRIzol reagent was purchased from Sigma-Aldrich (St. Louis, MO, USA). The RQ1 RNase-free DNase kit was purchased from Promega (Madison, WI, USA). The SuperScript™ III first-strand cDNA synthesis kit, Pierce™ BCA protein assay kit, and alkaline phosphatase (AP)-labeled anti-rabbit secondary antibody were obtained from Thermo Fisher Scientific (Waltham, MA, USA). The iTaq™ universal SYBR green supermix was purchased from Bio-Rad (Hercules, CA, USA). All the rabbit primary antibodies (β-actin, PI3K, p-PI3K, AKT, p-AKT, p38MAPK, p-p38MAPK, E-cadherin, Vimentin, and Snail-1), the protease inhibitor cocktail, and RIPA lysis buffer were purchased from Cell Signaling Technology Inc. (Beverly, MA, USA). The AP chromogen BCIP/NBT was purchased from Amresco (Solon, OH, USA). Cell Culture HuCCT1 cells were grown in RPMI medium supplemented with 10% FBS and 1% antibiotic-antimycotic solution. Cells were maintained in a 5% CO2 environment at 37 °C. Cells were subcultured every 3-4 days using 0.25% trypsin-EDTA. Antiproliferative Assay The standard colorimetric MTT assay was used to assess cell viability. HuCCT1 cells were seeded (10,000 cells/well) onto a 96-well microtiter plate and incubated at 37 °C for 24 hours. The cells were exposed to different concentrations of β-eudesmol (17.5, 35, 70, 140, 280, 560, and 1,120 µM) and further incubated for an additional 24, 48, or 72 hours. The culture medium was discarded, and 20 µL of 5 mg/mL MTT reagent was added to each well. The plate was incubated at 37 °C in the dark for 4 hours. To solubilize the formazan crystals, 100 µL of DMSO was added to each well after withdrawal of the supernatant without disturbing the bottom layer. The plate was incubated at room temperature (25 °C) for 30 minutes with gentle shaking, and the absorbance was measured at 570 nm using a 96-well Varioskan™ microplate reader (Thermo Scientific, Rockford, USA). CalcuSyn version 2.11 software (Biosoft, Cambridge, UK) was used to determine the cell viability (%) and the corresponding IC50 (half-maximal inhibitory concentration). Transwell Migration and Invasion Assay A transwell assay (24-well, 8 µm pore size membrane; Corning Inc., NY, USA) was used to evaluate the migration of HuCCT1 cells (both treated and control) following a previously described protocol with modifications (Trinh et al., 2017). 
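Cell viability and IC50 in this study were computed with CalcuSyn from the MTT absorbance readings described above. Purely as an illustrative sketch, and not the study's actual procedure, the same quantities could be approximated in Python by converting background-corrected A570 values to percent viability and fitting a four-parameter logistic curve; all numeric values below are invented placeholders.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical background-corrected A570 readings; the study used CalcuSyn
# on triplicate MTT data, so this only approximates the idea.
conc = np.array([17.5, 35, 70, 140, 280, 560, 1120])           # µM beta-eudesmol
a570_treated = np.array([0.92, 0.88, 0.75, 0.52, 0.30, 0.15, 0.08])
a570_control = 0.95                                             # untreated wells

viability = 100.0 * a570_treated / a570_control                 # percent viability

def four_pl(x, top, bottom, ic50, hill):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability, p0=[100.0, 0.0, 150.0, 1.0])
print("Estimated IC50: %.1f µM" % params[2])

The IC50 here is simply the fitted midpoint of the dose-response curve; CalcuSyn's median-effect method differs in detail, so the two estimates would not necessarily coincide.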
After pretreatment with β-eudesmol for 24 hours, the HuCCT1 cells were collected and resuspended in serum-free RPMI medium. The transwell was loaded with 200 µL of cell suspension (1 × 10⁵ cells) in the upper chamber and 600 µL of 20% FBS-containing RPMI medium in the lower chamber. The cells that did not migrate and remained inside the cup were removed with a cotton swab after 24 hours of incubation. The cells that migrated through the membrane to the bottom surface of the cup were fixed in methanol, stained with Giemsa, and counted under a light microscope. A similar protocol was used for the invasion assay, with the exception that the transwell chambers were pre-coated with 100 µL of Matrigel (diluted in serum-free RPMI medium) and incubated at 37 °C for 1 hour. The cells were fixed, stained, and counted. Each experiment was repeated three times. Real-time PCR Cells (2 × 10⁵) were plated onto a -well plate and incubated at 37 °C overnight. The cells were further incubated for 24 hours after being treated with different concentrations of β-eudesmol. Control cells (negative control) were treated with culture medium only. Total RNA was extracted from the cells using TRIzol reagent following the manufacturer’s instructions. A NanoDrop spectrophotometer was used to determine the RNA concentration and purity. The RQ1 DNase kit was used to remove contaminating genomic DNA from the extracted total RNA. Single-stranded cDNA was synthesized from total RNA (1 µg) using the SuperScript III first-strand cDNA synthesis kit. The concentration of the synthesized cDNA product was determined and used as a template for real-time PCR. The CFX96 real-time PCR detection system (Bio-Rad, Hercules, CA, USA) was used to determine the mRNA expression levels of the targeted genes using iTaq universal SYBR green supermix and specific primer sequences. The PCR conditions were denaturation at 95 °C for 5 min, followed by 40 cycles of amplification at 95 °C for 15 sec and annealing at 62 °C for 1 min. The delta-delta Ct method was used to measure gene expression levels in comparison to controls (Livak et al., 2001), and the expression was normalized to the housekeeping gene GAPDH. The list of target genes and primer sequences is shown in Table 1. Western Blot Analysis Western blot analysis was performed according to the previously described method with slight modifications (Mathema et al., 2017) to evaluate the expression levels of EMT-related proteins and signaling pathways. Briefly, after exposure to β-eudesmol for 24 hours, the cells were washed with PBS and lysed in RIPA buffer containing a protease and phosphatase inhibitor cocktail. The Pierce™ BCA protein assay kit was used to determine the amount of protein in each sample. After heat denaturation, equal amounts of protein (30 µg/lane) were separated by 8-12.5% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a nitrocellulose membrane. The membrane was blocked for 30 minutes in TBS with 5% BSA before being incubated overnight at 4 °C with appropriate dilutions of the primary antibodies. After washing with TBS containing 0.1% Tween-20, the membrane was incubated with an AP-conjugated secondary antibody for 1 hour. The AP chromogen (BCIP/NBT) system was used to colorimetrically detect the corresponding protein bands. Finally, the protein bands were densitometrically analyzed with ImageJ software (NIH, USA), and the amounts were normalized to the β-actin internal control. 
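Relative mRNA expression above was calculated with the delta-delta Ct method, normalized to GAPDH. A minimal sketch of that calculation is given below; the Ct values are placeholders for illustration only, not data from this study.

def ddct_fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # Relative expression by the 2^-ddCt method, normalized to a reference
    # gene (GAPDH in this study) and expressed relative to the untreated control.
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Placeholder Ct values: a target that amplifies earlier (lower Ct) in the treated
# sample, relative to GAPDH, gives a fold change above 1 (up-regulation).
print(ddct_fold_change(24.0, 18.0, 26.0, 18.5))   # approximately 2.8-fold

Fold changes below 1 correspond to down-regulation relative to the untreated control.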
Statistical Analysis IBM SPSS Statistics 21 (Chicago, Illinois, USA) was used to perform the statistical analysis. Data are presented as median (range). The non-parametric Mann-Whitney U and Kruskal-Wallis tests were used to compare two or more groups of quantitative variables. The statistical significance level was set at 0.05. Results: Antiproliferative Activity of β-eudesmol The antiproliferative activity of β-eudesmol was assessed using the MTT assay after 24, 48, and 72 hours of cell exposure. The compound showed potent antiproliferative activity on HuCCT1 cells in a time- and concentration-dependent manner (Figure 1A). The corresponding IC50 values [median (range)] for 24, 48, and 72 hours of exposure were 180 (175.63-185.67), 157 (149.75-162.35), and 99.90 (92.25-105.65) µM, respectively. Under light microscopy, irregular morphology, cellular debris, and floating dead cells were visible after β-eudesmol treatment at concentrations of 78.5 µM and 157 µM (Figure 1B). Inhibitory Effects on Migration and Invasion The cells that migrated through the transwell membrane were stained and counted. β-Eudesmol inhibited HuCCT1 cell migration in a concentration-dependent manner. After 24 hours of exposure to 78.5, 157, and 240 µM β-eudesmol, the number of migrating cells was significantly decreased compared to untreated control cells (Figure 2A). At 78.5, 157, and 240 µM β-eudesmol, the percentages [median (range)] of migrating cells were 59.4 (56.1-62.7)%, 29.7 (27.3-32.1)%, and 13.26 (10.26-16.26)%, respectively. The invasion assay was performed with a Matrigel-coated transwell membrane, and the cells passing through the coated membrane after β-eudesmol exposure were stained and counted. β-Eudesmol at 78.5, 157, and 240 µM substantially reduced cell invasion to 62.98 (56.18-69.78)%, 41.09 (38.29-43.89)%, and 29.52 (27.72-31.32)%, respectively, compared with the untreated control cells (Figure 2B). Modulatory Effects on Genes Associated with EMT The mRNA expression levels of the three EMT-associated genes, i.e., E-cadherin, vimentin, and snail-1, were examined and compared to the untreated control after a 24-hour β-eudesmol exposure (Figures 3 A, B, C). The results showed that exposure to 78.5 µM and 157 µM β-eudesmol enhanced E-cadherin gene expression by 3-fold (p=0.037) and 3.4-fold (p=0.037), respectively. Similarly, β-eudesmol at 78.5 µM and 157 µM reduced vimentin gene expression to 0.7-fold (p=0.037) and 0.5-fold (p=0.037) of the control, respectively. Likewise, β-eudesmol at 78.5 µM and 157 µM reduced the expression of the snail-1 gene to 0.8-fold (p=0.03) and 0.4-fold (p=0.037) of the control, respectively. Suppression of EMT via PI3K/AKT and p38MAPK Modulation HuCCT1 cells were exposed to β-eudesmol for 24 hours, and the expression of the EMT marker proteins (E-cadherin, vimentin, and snail-1) in the extracted lysate was investigated using Western blot analysis. The results showed that β-eudesmol increased E-cadherin expression by 2.4-fold (p=0.037) and 2.7-fold (p=0.037) at 78.5 µM and 157 µM, respectively (Figures 4 A, H). Vimentin expression was decreased to 0.8-fold (p=0.037) and 0.6-fold (p=0.037) of the control at the two concentrations, respectively (Figures 4 A, I). In addition, snail-1 expression was decreased to 0.6-fold (p=0.037) and 0.4-fold (p=0.037) of the control, respectively (Figures 4 A, J). Altogether, it was concluded that β-eudesmol down-regulated EMT by modulating the expression of E-cadherin, vimentin, and the transcription factor snail-1. 
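The group comparisons reported in this section were run in SPSS with the Mann-Whitney U and Kruskal-Wallis tests, as stated in the Statistical Analysis paragraph above. As a rough sketch only, equivalent comparisons could be made in Python with SciPy; the replicate values below are placeholders loosely based on the reported median (range), not the raw data.

from scipy.stats import mannwhitneyu, kruskal

# Placeholder percentages of migrating cells from three independent experiments.
control  = [100.0, 100.0, 100.0]
dose_78  = [56.1, 59.4, 62.7]
dose_157 = [27.3, 29.7, 32.1]

# Two-group comparison (one treated concentration vs untreated control), two-sided.
u_stat, p_two_group = mannwhitneyu(control, dose_78, alternative="two-sided")

# Comparison across all groups at once.
h_stat, p_all_groups = kruskal(control, dose_78, dose_157)

print(p_two_group, p_all_groups)

Note that with only three replicates per group the attainable p-values from an exact Mann-Whitney test are coarse; the placeholders here illustrate the call signatures rather than reproduce the article's p-values.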
PI3K/AKT and p38MAPK are among the upstream regulators of the EMT process. The expression levels of these proteins in β-eudesmol-treated cells were determined. The expression levels of total PI3K, phospho-PI3K at Tyr 458, and their ratios were initially investigated. The results revealed that exposing HuCCT1 cells to 78.5 µM β-eudesmol reduced total PI3K and p-PI3K expression to 0.8-fold (p=0.03) and 0.5-fold (p=0.03) of the control, respectively. On the other hand, β-eudesmol at 157 µM slightly increased the expression of total PI3K and p-PI3K by 1.4-fold (p=0.03) and 1.8-fold (p=0.03), respectively. At the basal level and at 78.5 µM and 157 µM, the ratios of p-PI3K to PI3K were 0.7, 0.4, and 0.9, respectively (Figures 4 A, B, C). Similarly, total AKT, phospho-AKT at Ser 473, and their ratios were also affected by the β-eudesmol treatment. At 78.5 µM, the expression of p-AKT was increased by 1.2-fold (p=0.037), while total AKT expression was decreased to 0.9-fold (p=0.037) of the control. At 157 µM, however, the expression of total AKT and p-AKT was reduced to 0.8-fold (p=0.037) and 0.7-fold (p=0.037) of the control, respectively. At the basal level, the ratio of p-AKT to AKT was 0.8, and treatment with 78.5 µM and 157 µM β-eudesmol changed the ratio to 0.6 (p=0.05) and 1.0 (p=0.05), respectively (Figures 4 A, D, E). These results suggested that β-eudesmol at a low concentration (78.5 µM) inhibited the phosphorylation of the PI3K and AKT proteins. Additionally, the effects of β-eudesmol treatment on total p38MAPK expression, p-p38MAPK (at Thr180/Tyr182) expression, and their ratios were investigated. β-Eudesmol increased the expression of p-p38MAPK by 3-fold (p=0.037) and 3.6-fold (p=0.034) at 78.5 µM and 157 µM, respectively. At 157 µM, it slightly increased the expression of total p38MAPK by 1.2-fold (p=0.037). The ratio of p-p38MAPK to p38MAPK at the basal level was 0.18 and increased to 0.6 (p=0.05) and 0.5 (p=0.05) upon exposure to 78.5 μM and 157 μM β-eudesmol, respectively (Figures 4 A, F, G). These results suggested that β-eudesmol activated the p38MAPK signaling pathway. List of Genes and Primers Used for PCR Antiproliferative Activity of β-eudesmol on HuCCT1 Cells. Figure (A) shows the viability (%) of HuCCT1 cells after exposure to different concentrations of β-eudesmol for 24 hours, 48 hours, and 72 hours. The cell viability was determined by MTT assay. Data are expressed as the median (range) from three independent experiments. Figure (B) shows the morphological evidence of β-eudesmol-induced cell death. The black arrows indicate normal cells (control) and dead cells (treated), respectively. Effects of β-eudesmol on Migration and Invasion of HuCCT1 Cells. Figure (A) shows the representative number of cells that migrated after exposure to 0, 78.5, 157, and 240 μM β-eudesmol for 24 hours. Figure (B) shows the representative number of cells that invaded after exposure to 0, 78.5, 157, and 240 μM β-eudesmol for 24 hours. The cells were fixed, stained, and ten representative fields were counted under the light microscope. The bar graph data represent the median (range) from three independent experiments. *p=0.05, **p=0.04 vs untreated control cells. Effect of β-eudesmol Treatment on the Expression of EMT-Associated Genes. Bar graph (A) shows the fold change in expression of E-cadherin mRNA. Bar graph (B) shows the fold change in expression of Vimentin mRNA. Bar graph (C) shows the fold change in expression of Snail-1 mRNA. The data represent the median (range) of three independent experiments. 
* p=0.037 vs untreated control. Western Blot Results Showing the Effects of β-eudesmol on EMT Markers and PI3K/AKT and MAPK Expression. Proteins were extracted using RIPA buffer, and 30 µg of whole protein per lane was separated by SDS-PAGE (8-12.5% gel concentration). Figure A shows representative pictures of the protein bands showing the expression levels of p-p38MAPK, p38MAPK, p-AKT, AKT, p-PI3K, PI3K, E-cadherin, Vimentin, and Snail-1 in HuCCT1 cells treated with β-eudesmol at 0, 78.5 µM, and 157 µM for 24 hours. Bar graph B shows the fold change in expression of p-PI3K and PI3K, and bar graph C shows their ratios. Bar graph D shows the fold change in expression of p-AKT and AKT, and bar graph E shows their ratios. Bar graph F shows the fold change in expression of p-p38MAPK and p38MAPK, and bar graph G shows their ratios. Bar graphs H, I, and J show the fold change in expression of E-cadherin, vimentin, and snail-1, respectively. β-actin was used as an internal loading control. *p=0.05, **p=0.037, ***p=0.034 vs untreated control cells. Abbreviations: PI3K, Phosphatidylinositide-3 kinase; AKT, Protein kinase B; MAPK, Mitogen-activated protein kinase; EMT, epithelial-mesenchymal transition. Discussion: β-Eudesmol exhibited potent antiproliferative activity that was time- and concentration-dependent (Mathema et al., 2017; Kotawong et al., 2018). The antiproliferative activity of β-eudesmol on HuCCT1 cells is qualitatively consistent with the findings from studies in other types of human cancers, including lung cancer, liver cancer, and prostate cancer (Acharya et al., 2021). It significantly decreased HuCCT1 cell migration and invasion in an in vitro transwell assay in a concentration-dependent manner. Most CCA-related deaths occur because of metastasis (Hahn et al., 2020). The increased motility and invasive nature of migrating cells are required for the first few steps of metastasis, and it has been suggested that they are linked to the epithelial-mesenchymal transition (EMT). EMT is a reversible mechanism in which epithelial cells transform into mesenchymal cells and develop the ability to migrate and invade (Heerboth et al., 2015). During EMT, epithelial markers such as E-cadherin, zonula occludens-1 (ZO-1), and occludin are down-regulated, while mesenchymal markers such as N-cadherin, vimentin, and fibronectin are up-regulated. Previous studies have shown that CCA patients have lower levels of the epithelial marker E-cadherin and higher levels of mesenchymal markers, including vimentin and snail-1 (Settakorn et al., 2005; Sato et al., 2010; Ryu et al., 2012). In the present study, however, β-eudesmol treatment up-regulated the expression of the epithelial marker E-cadherin in HuCCT1 cells, while down-regulating the expression of the mesenchymal markers vimentin and snail-1 at both the mRNA and protein levels. Up-regulation of snail-1 expression has been shown to inhibit E-cadherin expression (Cano et al., 2000). It is possible that the low level of snail-1 expression is responsible for the enhancement of E-cadherin expression in β-eudesmol-treated HuCCT1 cells. This finding suggested that β-eudesmol inhibited HuCCT1 cell migration and invasion by suppressing the formation of mesenchymal phenotypes. EMT is a complicated mechanism controlled by the interactions of several signaling pathways. 
Some of the molecular signaling pathways linked to EMT induction include transforming growth factor (TGF)-β, fibroblast growth factor, epidermal growth factor, Ras, Src, integrin, Wnt/β-catenin, Notch, and PI3K/AKT. The PI3K/AKT signaling pathway can cooperate with other signaling pathways such as TGF-β, NF-κB, and Wnt/β-catenin to induce the EMT process (Xu et al., 2015). In addition, increased PI3K/AKT activation has also been linked to increased metastasis in CCA (Yothaisong et al., 2013). This pathway is therefore gaining interest as a potential target for the prevention and treatment of metastatic tumors such as CCA. The phosphorylation of both PI3K and AKT in HuCCT1 cells was significantly reduced by β-eudesmol at a low concentration (78.5 µM). This suggested that β-eudesmol could be utilized as a PI3K/AKT inhibitor at low concentrations. In previous research, the natural compound rotenone showed anticancer activity in colon cancer by inhibiting migration and EMT through regulation of the PI3K/AKT signaling pathway (Xue et al., 2020). It is possible that the modulatory effect of β-eudesmol on the PI3K/AKT signaling pathway is responsible for the up-regulated expression of E-cadherin and the down-regulated expression of vimentin and snail-1. β-Eudesmol also increased p38MAPK activity in HuCCT1 cells. Various cellular stresses, including chemotherapeutic agents, stimulate this stress-response pathway. In addition to its function in stress responses, p38MAPK has been shown to mediate pathways that lead to cell apoptosis and growth-inhibitory signals (Olson et al., 2004). It has been documented that cisplatin-induced cancer cell death requires p38MAPK activation (Olson et al., 2004). On the other hand, p38MAPK activation has been shown to have both anti-apoptotic and proliferative effects (Wada and Penninger, 2004). Activation of p38MAPK has also been shown to be involved in TGF-β-mediated EMT and cell migration (Bakin et al., 2002). However, the relationship between β-eudesmol-induced p38MAPK activation and its role in EMT and metastasis of CCA cells needs further research. In conclusion, β-eudesmol exhibited potent antiproliferative activity against HuCCT1 cells. It inhibited HuCCT1 cell migration and invasion in vitro. The anti-migration action of β-eudesmol might be due to its suppressive effect on EMT, as evidenced by the changes in expression of EMT markers. The compound also inhibited the phosphorylation of PI3K and AKT, while activating the phosphorylation of p38MAPK, which could contribute to its anti-migration and anti-EMT effects. The findings from the current study suggest that β-eudesmol can act as a suppressor of EMT in CCA, but further research with additional CCA cell lines and animal models is required. Author Contribution Statement: BA, WC, and KN were involved in the design of the experimental study. BA performed the experiments. BA and WC performed data analysis. BA drafted the manuscript. KN revised the manuscript. All authors reviewed and approved the final manuscript for submission. All meet the ICMJE criteria for authorship.
Background: Cholangiocarcinoma (CCA) is a highly aggressive tumor with a high risk of distant metastasis. A drug that prevents CCA development and spread is urgently needed. In this research, we investigated the effect of β-eudesmol on the migration, invasion, and epithelial-mesenchymal transition (EMT) of a CCA cell line. Methods: MTT and transwell assays were used to investigate the antiproliferative activity as well as the effects on cell migration and cell invasion. Real-time PCR and Western blot analysis were used to investigate the expression of EMT marker genes and proteins. Results: β-Eudesmol was shown to exhibit potent antiproliferative activity (IC50 92.25-185.67 µM) and to significantly reduce CCA cell migration and invasion (27.3-62.7%). At both the mRNA and protein levels, it significantly up-regulated the expression of the epithelial marker E-cadherin (3-3.4-fold), while down-regulating the expression of the mesenchymal markers vimentin (0.6-0.8-fold) and snail-1 (0.4-0.6-fold). Furthermore, β-eudesmol inhibited PI3K and AKT phosphorylation (0.5-0.8-fold), while increasing p38MAPK activity (1.2-3.6-fold). Conclusions: Altogether, the anti-metastatic activity of β-eudesmol might be due to its suppressive effect on EMT via modulation of the PI3K/AKT and p38MAPK signaling cascades.
null
null
4,986
271
[ 57 ]
5
[ "cells", "eudesmol", "expression", "fold", "µm", "pi3k", "emt", "akt", "cca", "hucct1" ]
[ "regulatory genes cholangiocytes", "cholangiocytes", "genes cholangiocytes associated", "cholangiocarcinoma", "cholangiocarcinoma cca malignant" ]
null
null
null
[CONTENT] Atractylodes lancea | EMT | migration and invasion | cell proliferation [SUMMARY]
null
[CONTENT] Atractylodes lancea | EMT | migration and invasion | cell proliferation [SUMMARY]
null
[CONTENT] Atractylodes lancea | EMT | migration and invasion | cell proliferation [SUMMARY]
null
[CONTENT] Bile Duct Neoplasms | Bile Ducts, Intrahepatic | Cell Line, Tumor | Cell Movement | Cell Proliferation | Cholangiocarcinoma | Epithelial-Mesenchymal Transition | Humans | Phosphatidylinositol 3-Kinases | Proto-Oncogene Proteins c-akt | Sesquiterpenes, Eudesmane | p38 Mitogen-Activated Protein Kinases [SUMMARY]
null
[CONTENT] Bile Duct Neoplasms | Bile Ducts, Intrahepatic | Cell Line, Tumor | Cell Movement | Cell Proliferation | Cholangiocarcinoma | Epithelial-Mesenchymal Transition | Humans | Phosphatidylinositol 3-Kinases | Proto-Oncogene Proteins c-akt | Sesquiterpenes, Eudesmane | p38 Mitogen-Activated Protein Kinases [SUMMARY]
null
[CONTENT] Bile Duct Neoplasms | Bile Ducts, Intrahepatic | Cell Line, Tumor | Cell Movement | Cell Proliferation | Cholangiocarcinoma | Epithelial-Mesenchymal Transition | Humans | Phosphatidylinositol 3-Kinases | Proto-Oncogene Proteins c-akt | Sesquiterpenes, Eudesmane | p38 Mitogen-Activated Protein Kinases [SUMMARY]
null
[CONTENT] regulatory genes cholangiocytes | cholangiocytes | genes cholangiocytes associated | cholangiocarcinoma | cholangiocarcinoma cca malignant [SUMMARY]
null
[CONTENT] regulatory genes cholangiocytes | cholangiocytes | genes cholangiocytes associated | cholangiocarcinoma | cholangiocarcinoma cca malignant [SUMMARY]
null
[CONTENT] regulatory genes cholangiocytes | cholangiocytes | genes cholangiocytes associated | cholangiocarcinoma | cholangiocarcinoma cca malignant [SUMMARY]
null
[CONTENT] cells | eudesmol | expression | fold | µm | pi3k | emt | akt | cca | hucct1 [SUMMARY]
null
[CONTENT] cells | eudesmol | expression | fold | µm | pi3k | emt | akt | cca | hucct1 [SUMMARY]
null
[CONTENT] cells | eudesmol | expression | fold | µm | pi3k | emt | akt | cca | hucct1 [SUMMARY]
null
[CONTENT] cca | 2017 | cells | eudesmol | metastasis | activity | epithelial | medicine | activity eudesmol | 2015 [SUMMARY]
null
[CONTENT] fold | 037 | 157 | expression | eudesmol | µm | fold 037 | 78 | cells | respectively [SUMMARY]
null
[CONTENT] cells | eudesmol | cca | expression | ba | fold | emt | manuscript | pi3k | usa [SUMMARY]
null
[CONTENT] CCA ||| CCA ||| EMT | CCA [SUMMARY]
null
[CONTENT] 92.25 | CCA | 27.3-62.7% ||| 3-3.4-fold | 0.6-0.8-fold | 0.4-0.6-fold ||| PI3K and AKT | 0.5 | 0.8-fold | 1.2-3.6-fold [SUMMARY]
null
[CONTENT] CCA ||| CCA ||| EMT | CCA ||| MTT ||| PCR | EMT ||| 92.25 | CCA | 27.3-62.7% ||| 3-3.4-fold | 0.6-0.8-fold | 0.4-0.6-fold ||| PI3K and AKT | 0.5 | 0.8-fold | 1.2-3.6-fold ||| EMT [SUMMARY]
null
Impact of antiretroviral therapy on fertility desires among HIV-infected persons in rural Uganda.
21975089
Little is known about the fertility desires of HIV infected individuals on highly active antiretroviral therapy (HAART). In order to contribute more knowledge to this topic we conducted a study to determine if HIV-infected persons on HAART have different fertility desires compared to persons not on HAART, and if the knowledge about HIV transmission from mother-to-child is different in the two groups.
BACKGROUND
The study was a cross-sectional survey comparing two groups of HIV-positive participants: those who were on HAART and those who were not. Semi-structured interviews were conducted with 199 HIV patients living in a rural area of western Uganda. The desire for future children was measured by the question in the questionnaire "Do you want more children in future." The respondents' HAART status was derived from the interviews and verified using health records. Descriptive, bivariate and multivariate methods were used to analyze the relationship between HAART treatment status and the desire for future children.
METHODS
Results from the multivariate logistic regression model indicated an adjusted odds ratio (OR) of 1.08 (95% CI 0.40-2.90) for those on HAART wanting more children (crude OR 1.86, 95% CI 0.82-4.21). Statistically significant predictors for desiring more children were younger age, having a higher number of living children and male sex. Knowledge of the risks for mother-to-child-transmission of HIV was similar in both groups.
RESULTS
The conclusions from this study are that the HAART treatment status of HIV patients did not influence the desire for children. The non-significant association between the desire for more children and the HAART treatment status could be caused by a lack of knowledge in HIV-infected persons/couples about the positive impact of HAART in reducing HIV transmission from mother-to-child. We recommend that the health care system ensures proper training of staff and appropriate communication to those living with HIV as well as to the general community.
CONCLUSIONS
[ "Adolescent", "Adult", "Antiretroviral Therapy, Highly Active", "Cross-Sectional Studies", "Family Characteristics", "Female", "HIV Infections", "Health Knowledge, Attitudes, Practice", "Humans", "Infectious Disease Transmission, Vertical", "Intention", "Male", "Reproductive Behavior", "Rural Health", "Socioeconomic Factors", "Uganda", "Young Adult" ]
3214790
Background
The decision whether or not to have children is often complex and influenced by many factors. HIV-positive individuals in Africa have additional considerations to take into account when deciding whether or not to have children. These include the possibility of passing HIV from mother-to-child and the likelihood that one or both parents could die prior to the child reaching adulthood [1]. Mother-to-child transmission (MTCT) of HIV happens in either of two ways: through perinatal infection and through breastfeeding. Regardless of these concerns, many Africans after receiving a positive HIV diagnosis still elect to have children for various personal, cultural and economic reasons [2]. In most African societies a common expectation of marriage is that the couple will have children [3]. This is an especially important expectation in Uganda as children become members of the paternal clan [4]. In African societies women are often valued by their ability to bear children and a very high social good is placed on fertility; therefore, the pressure on women to have children is very high [5]. The birth of an HIV-positive child to a HIV-positive parent or couple is a situation faced by many families in sub-Saharan African countries. However, the advent of available highly active antiretroviral therapy (HAART) in many sub-Saharan African countries has changed the situation for HIV-positive persons. HAART has been shown to drastically reduce the risk of MTCT of HIV: The risk of MTCT in Denmark was less than 1% in all HIV-positive women (all on HAART) who gave birth between 2000-2005, and 1.7% in a large reference center for pregnant HIV-positive women in Belgium [6,7]. Similarly, HAART has been found to reduce the risk of vertical transmission of HIV in sub-Saharan African countries. For example, the risk of MTCT of HIV in women from a general sample from several sub-Saharan African countries decreased from 30.9% in untreated women to 4.9% in women on HAART [8]. Similarly, in Botswana, the number of new pediatric HIV infections has dropped from 4,600 in 1999 to 890 in 2007 due to nearly complete coverage of an effective antiretroviral treatment program [9]. In Cote d'Ivoire, the MTCT rate of HIV was 5.6% in formula-fed infants and 6.8% in infants with short-term breastfeeding reported by mothers on HAART [10]. These figures show the vast benefits of HAART in reducing the risk of vertical transmission of HIV. Some studies from sub-Saharan Africa have shown that an HIV diagnosis causes people generally to choose to have fewer children [3,11,12]. Other research has shown that HIV infection does not have a marked impact on fertility decisions, particularly for those who do not show signs or symptoms of disease [2,13,14]. Based on many reports indicating that HAART provision significantly decreases the risk of vertical transmission of HIV, one could expect that HIV-positive mothers/couples on HAART in sub-Saharan African countries would by now be more likely to opt to have children. It is therefore important to investigate if voluntary testing and counseling (VCT) for HIV infection and programs to prevent mother-to-child transmission (PMTCT) of HIV services are effective in counseling their clients on these recent findings and benefits of HAART. One study from South Africa stated that HAART increased the fertility desire in couples over time [15]. 
Similarly, a pan African study from seven countries showed that the pregnancy rate of HIV-positive women on HAART was significantly higher compared to those not on HAART [16]. A Zimbabwean study found that women on HAART were more likely to plan for a future child, but some women still had doubts about the effectiveness of HAART to prevent mother-to-child transmission of HIV [17]. Ugandan studies on the association between HAART and fertility revealed mixed results. One study from rural south-western Uganda showed that HIV-positive women on HAART had an increased desire for children but did not experience an increase in actual fertility [18]. Similarly, another study from rural eastern Uganda showed that HAART was not associated with pregnancy [19]. A survey of women attending Mbarara hospital in south-western Uganda revealed that HIV-positive women had a lower desire for future children irrespective of their HAART treatment status than HIV-negative women [20]. Another study from Mbarara district in south-western Uganda, found that HIV-positive women on HAART were more than three times likely to use contraception compared to those not on HAART [21]. In contrast to these findings, one study from Rakai district in central Uganda described a significant positive association between HAART treatment and becoming pregnant in 712 women attending a hospital [22]. The participants in the above mentioned studies included mostly only HIV-positive women, that is, HIV-positive men were not interviewed. Male involvement in reproductive health and fertility has long been recognized as important, but has not been achieved in a tangible way in many sub-Saharan African countries [23-25]. In order to have a balanced view on whether being on HAART changes fertility desires of individuals and couples, it is crucial to include both sexes. We interviewed male and female HIV-positive participants on this issue. The objectives of the study were: 1. To investigate whether HIV-positive persons on HAART had different fertility desires compared to HIV-positive persons not on HAART and whether these were different between men and women; 2. To assess the knowledge of mother-to-child transmission of HIV in participants on HAART and not on HAART. The study took place from September to December 2006 in two districts, Kabarole and Kamwenge, in western Uganda. Other results from the main quantitative component of this study on fertility and HIV status are published elsewhere [26].
Methods
Study setting Participants were recruited through health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans. Participants were recruited through health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans. Study design The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants. The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants. Recruitment of participants The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records. The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. 
Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records. Data collection and analysis A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable "Do you want more children in future" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as "Do you want more children in future in addition to the current pregnancy"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes. All of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable "Desire to have more children in future" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model. A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. 
The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable "Do you want more children in future" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as "Do you want more children in future in addition to the current pregnancy"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes. All of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable "Desire to have more children in future" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model. Study approval and ethical considerations Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form. Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form.
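The bivariate and multivariate logistic regression described above was carried out in STATA. Purely as an illustrative sketch of the modeling step, and not a reproduction of the study's analysis, the multivariate model for the binary outcome "desire to have more children" could be specified in Python with statsmodels as follows; the data frame, variable names, and values are hypothetical placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical analysis data set with 199 rows; in the study the variables came
# from the structured questionnaire (outcome coded 1 = wants more children).
df = pd.DataFrame({
    "wants_more_children": rng.binomial(1, 0.14, 199),
    "on_haart": rng.binomial(1, 0.61, 199),
    "age": rng.integers(18, 45, 199),
    "male": rng.binomial(1, 0.39, 199),
    "living_children": rng.poisson(3, 199),
})

model = smf.logit(
    "wants_more_children ~ on_haart + age + male + living_children", data=df
).fit()

# Adjusted odds ratios with 95% confidence intervals.
print(pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1))

Because the covariates here are random noise, the printed odds ratios are meaningless; the sketch only illustrates how an adjusted odds ratio such as the reported 1.08 (95% CI 0.40-2.90) for HAART is obtained from a fitted logistic model.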
null
null
Conclusions
WK was involved in all stages of the study and wrote the article. JH had major input in the development of the proposal and conducted the field work in Uganda. She commented on the draft manuscript and helped with the interpretation of the study results. GSJ developed the statistical part of the proposal and analyzed the data. He also provided input into the manuscript. AA helped with the supervision of the field work and provided input into the manuscript and the interpretation of the data. TR was involved in the design of the study and supervised the field work in Uganda. He also provided input into the manuscript and helped to interpret the study results within the context of the Ugandan health care system. All authors read and approved the final manuscript.
[ "Background", "Study setting", "Study design", "Recruitment of participants", "Data collection and analysis", "Study approval and ethical considerations", "Results", "Demographic and socio-economic characteristics of study participants", "Fertility desires", "Knowledge and risk perceptions of mother-to-child-transmission of HIV", "Discussion", "Limitations", "Conclusions" ]
[ "The decision whether or not to have children is often complex and influenced by many factors. HIV-positive individuals in Africa have additional considerations to take into account when deciding whether or not to have children. These include the possibility of passing HIV from mother-to-child and the likelihood that one or both parents could die prior to the child reaching adulthood [1]. Mother-to-child transmission (MTCT) of HIV happens in either of two ways: through perinatal infection and through breastfeeding. Regardless of these concerns, many Africans after receiving a positive HIV diagnosis still elect to have children for various personal, cultural and economic reasons [2]. In most African societies a common expectation of marriage is that the couple will have children [3]. This is an especially important expectation in Uganda as children become members of the paternal clan [4]. In African societies women are often valued by their ability to bear children and a very high social good is placed on fertility; therefore, the pressure on women to have children is very high [5].\nThe birth of an HIV-positive child to a HIV-positive parent or couple is a situation faced by many families in sub-Saharan African countries. However, the advent of available highly active antiretroviral therapy (HAART) in many sub-Saharan African countries has changed the situation for HIV-positive persons. HAART has been shown to drastically reduce the risk of MTCT of HIV: The risk of MTCT in Denmark was less than 1% in all HIV-positive women (all on HAART) who gave birth between 2000-2005, and 1.7% in a large reference center for pregnant HIV-positive women in Belgium [6,7]. Similarly, HAART has been found to reduce the risk of vertical transmission of HIV in sub-Saharan African countries. For example, the risk of MTCT of HIV in women from a general sample from several sub-Saharan African countries decreased from 30.9% in untreated women to 4.9% in women on HAART [8]. Similarly, in Botswana, the number of new pediatric HIV infections has dropped from 4,600 in 1999 to 890 in 2007 due to nearly complete coverage of an effective antiretroviral treatment program [9]. In Cote d'Ivoire, the MTCT rate of HIV was 5.6% in formula-fed infants and 6.8% in infants with short-term breastfeeding reported by mothers on HAART [10]. These figures show the vast benefits of HAART in reducing the risk of vertical transmission of HIV. Some studies from sub-Saharan Africa have shown that an HIV diagnosis causes people generally to choose to have fewer children [3,11,12]. Other research has shown that HIV infection does not have a marked impact on fertility decisions, particularly for those who do not show signs or symptoms of disease [2,13,14].\nBased on many reports indicating that HAART provision significantly decreases the risk of vertical transmission of HIV, one could expect that HIV-positive mothers/couples on HAART in sub-Saharan African countries would by now be more likely to opt to have children. It is therefore important to investigate if voluntary testing and counseling (VCT) for HIV infection and programs to prevent mother-to-child transmission (PMTCT) of HIV services are effective in counseling their clients on these recent findings and benefits of HAART. One study from South Africa stated that HAART increased the fertility desire in couples over time [15]. 
Similarly, a pan African study from seven countries showed that the pregnancy rate of HIV-positive women on HAART was significantly higher compared to those not on HAART [16]. A Zimbabwean study found that women on HAART were more likely to plan for a future child, but some women still had doubts about the effectiveness of HAART to prevent mother-to-child transmission of HIV [17].\nUgandan studies on the association between HAART and fertility revealed mixed results. One study from rural south-western Uganda showed that HIV-positive women on HAART had an increased desire for children but did not experience an increase in actual fertility [18]. Similarly, another study from rural eastern Uganda showed that HAART was not associated with pregnancy [19]. A survey of women attending Mbarara hospital in south-western Uganda revealed that HIV-positive women had a lower desire for future children irrespective of their HAART treatment status than HIV-negative women [20]. Another study from Mbarara district in south-western Uganda, found that HIV-positive women on HAART were more than three times likely to use contraception compared to those not on HAART [21]. In contrast to these findings, one study from Rakai district in central Uganda described a significant positive association between HAART treatment and becoming pregnant in 712 women attending a hospital [22].\nThe participants in the above mentioned studies included mostly only HIV-positive women, that is, HIV-positive men were not interviewed. Male involvement in reproductive health and fertility has long been recognized as important, but has not been achieved in a tangible way in many sub-Saharan African countries [23-25]. In order to have a balanced view on whether being on HAART changes fertility desires of individuals and couples, it is crucial to include both sexes. We interviewed male and female HIV-positive participants on this issue. The objectives of the study were:\n1. To investigate whether HIV-positive persons on HAART had different fertility desires compared to HIV-positive persons not on HAART and whether these were different between men and women;\n2. To assess the knowledge of mother-to-child transmission of HIV in participants on HAART and not on HAART.\nThe study took place from September to December 2006 in two districts, Kabarole and Kamwenge, in western Uganda. Other results from the main quantitative component of this study on fertility and HIV status are published elsewhere [26].", "Participants were recruited through correct health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans.", "The study applied a cross-sectional, quantitative design. 
We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants.", "The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records.", "A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable \"Do you want more children in future\" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as \"Do you want more children in future in addition to the current pregnancy\"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes.\nAll of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable \"Desire to have more children in future\" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). 
All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model.", "Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form.", " Demographic and socio-economic characteristics of study participants One hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. For more details see Table 1:\nDemographic characteristics and antiretroviral treatment status of survey respondents (n = 199)\na The p-values are based on chi-square test unless otherwise specified.\nb Dwelling quality variable was divided into three categories as low, medium and high based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure).\nc The percentage who wants more children for age group 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively.\nd Two independent sample t-test p-value.\nUse of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents, however there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2:\nContraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199)\nOne hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. 
More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported being pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. For more details see Table 1:\nDemographic characteristics and antiretroviral treatment status of survey respondents (n = 199)\na The p-values are based on the chi-square test unless otherwise specified.\nb The dwelling quality variable was divided into three categories (low, medium and high) based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure).\nc The percentages of participants who want more children in the age groups 18-24, 25-29, 30-34, 35-39 and 40-44 years are 42, 30, 18, 4 and 4, respectively.\nd Two independent sample t-test p-value.\nUse of effective family planning methods such as oral contraceptives and injectables was generally low among our respondents; however, there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2:\nContraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199)\n Fertility desires Overall, it was found that 13.6% of all participants wanted to have more children. HIV-positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend toward wanting to continue childbearing compared with those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3, with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009).\nIn bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), but this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children.
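As an illustration of the model-building strategy described in the Methods (bivariate screening at p < 0.2, then a multivariate logistic regression with HAART status as the covariate of interest), a minimal sketch is given below. The original analysis was carried out in Stata; this Python/statsmodels version and all file, column and variable names (survey.csv, wants_more_children, haart, etc.) are hypothetical and shown only to make the procedure concrete.

```python
# Minimal sketch of the model-building strategy described in the Methods.
# The original analysis used Stata; this statsmodels version and all column
# names (wants_more_children, haart, survey.csv, ...) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per respondent (hypothetical file)

candidates = ["haart", "age", "sex", "occupation", "n_living_children",
              "partner_hiv_positive", "aids_symptoms", "aids_related_death"]

# Step 1: bivariate screening -- keep covariates with any term at p < 0.2;
# HAART (exposure of interest) and sex (a priori confounder) are forced in.
keep = {"haart", "sex"}
for var in candidates:
    fit = smf.logit(f"wants_more_children ~ {var}", data=df).fit(disp=0)
    if fit.pvalues.drop("Intercept").min() < 0.2:
        keep.add(var)

# Step 2: multivariate model; the paper additionally retained only covariates
# that remained significant at p < 0.05 in the adjusted model.
final = smf.logit("wants_more_children ~ " + " + ".join(sorted(keep)),
                  data=df).fit(disp=0)

# Odds ratios with 95% CIs, as reported in Table 3.
or_ci = np.exp(final.conf_int())
or_ci["OR"] = np.exp(final.params)
print(or_ci)
```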
Bivariate and multivariate logistic regression analysis results are shown in Table 3:\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis\nA younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children.\nA sub-analysis was conducted to compare the fertility desires of men and women separately as fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4).\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) by sex - Logistic Regression Multivariate Analysis\nThe main result of the sub-analysis was that being on HAART was not a predictor for fertility desires of males or females. However, the direction of the adjusted OR was different; men on HAART had a tendency to desire more children whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor for a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was the same in men, but did not reach statistical significance. We also found a significant association between the desire to have more children and the occupational status as well as the number of living children for females, but not for males (see Table 3).\nOverall, it was found that 13.6% of all participants wanted to have more children. HIV- positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend to want to continue childbearing than those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3 with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009).\nIn bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), and this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children. 
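As a quick consistency check (not part of the original paper), the unadjusted odds ratio quoted above can be reproduced from the reported proportions wanting more children, assuming 18.2% and 10.7% are the group-specific proportions in the HAART and non-HAART groups:

\[
\mathrm{OR} \;=\; \frac{0.182/(1-0.182)}{0.107/(1-0.107)} \;=\; \frac{0.2225}{0.1198} \;\approx\; 1.86
\]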
Bivariate and multivariate logistic regression analysis results are shown in Table 3:\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis\nA younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children.\nA sub-analysis was conducted to compare the fertility desires of men and women separately as fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4).\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) by sex - Logistic Regression Multivariate Analysis\nThe main result of the sub-analysis was that being on HAART was not a predictor for fertility desires of males or females. However, the direction of the adjusted OR was different; men on HAART had a tendency to desire more children whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor for a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was the same in men, but did not reach statistical significance. We also found a significant association between the desire to have more children and the occupational status as well as the number of living children for females, but not for males (see Table 3).\n Knowledge and risk perceptions of mother-to-child-transmission of HIV Participants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5).\nKnowledge and risk perception of mothers on HAART and not on HAART\n* Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART\nThe results from Table 5 show that participants generally knew being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the birth setting in the hospital (as compared to a home birth usually attended by traditional birth attendants), was a preferred place in order to reduce MTCT as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that the MTCT knowledge and risk perception of HIV in both groups was the same.\nThe questionnaire also assessed the attitude towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 
94.4% of participants not on HAART, p = 0.496). They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.\nParticipants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5).\nKnowledge and risk perception of mothers on HAART and not on HAART\n* Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART\nThe results from Table 5 show that participants generally knew being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the birth setting in the hospital (as compared to a home birth usually attended by traditional birth attendants), was a preferred place in order to reduce MTCT as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that the MTCT knowledge and risk perception of HIV in both groups was the same.\nThe questionnaire also assessed the attitude towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 94.4% of participants not on HAART, p = 0.496). They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.", "One hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. 
For more details see Table 1:\nDemographic characteristics and antiretroviral treatment status of survey respondents (n = 199)\na The p-values are based on chi-square test unless otherwise specified.\nb Dwelling quality variable was divided into three categories as low, medium and high based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure).\nc The percentage who wants more children for age group 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively.\nd Two independent sample t-test p-value.\nUse of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents, however there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2:\nContraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199)", "Overall, it was found that 13.6% of all participants wanted to have more children. HIV- positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend to want to continue childbearing than those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3 with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009).\nIn bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), and this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children. 
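The dwelling-quality variable described in footnote b of Table 1 above is essentially a three-level coding rule applied to the floor, wall and roof materials. A minimal sketch of that rule follows; the function and field names are hypothetical, and combinations not explicitly listed in the footnote are assumed here to default to the lowest category.

```python
# Sketch of the dwelling-quality coding from footnote b of Table 1.
# Field names are hypothetical; combinations not covered by the footnote
# are assumed (here) to default to "low".
def dwelling_quality(floor: str, walls: str, roof: str) -> str:
    permanent = [floor in ("cement", "concrete", "wood"),
                 walls == "permanent",
                 roof == "metal"]
    if sum(permanent) >= 2:   # high: permanent materials for at least two of the three elements
        return "high"
    if roof == "metal":       # medium: mud floor, mud/thatched walls, metal roof
        return "medium"
    return "low"              # low: mud floor, mud/thatched walls, grass/thatched roof

print(dwelling_quality("mud", "mud", "metal"))  # -> medium
```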
Bivariate and multivariate logistic regression analysis results are shown in Table 3:\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis\nA younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children.\nA sub-analysis was conducted to compare the fertility desires of men and women separately as fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4).\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) by sex - Logistic Regression Multivariate Analysis\nThe main result of the sub-analysis was that being on HAART was not a predictor for fertility desires of males or females. However, the direction of the adjusted OR was different; men on HAART had a tendency to desire more children whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor for a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was the same in men, but did not reach statistical significance. We also found a significant association between the desire to have more children and the occupational status as well as the number of living children for females, but not for males (see Table 3).", "Participants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5).\nKnowledge and risk perception of mothers on HAART and not on HAART\n* Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART\nThe results from Table 5 show that participants generally knew being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the birth setting in the hospital (as compared to a home birth usually attended by traditional birth attendants), was a preferred place in order to reduce MTCT as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that the MTCT knowledge and risk perception of HIV in both groups was the same.\nThe questionnaire also assessed the attitude towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 94.4% of participants not on HAART, p = 0.496). 
They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.", "This cross-sectional study conducted in western Uganda included HIV-positive respondents on HAART and not on HAART in order to elucidate their fertility desires as a result of their HIV treatment status. Importantly, this study included men and women, which enabled us to assess gender differences when the variable \"fertility desires\" in conjunction with HAART was modeled for men and women separately. Several other studies have reviewed fertility desires only in (pregnant) women, thus missing the male partners' perspectives on fertility desires. These studies limited their interviews to women perhaps because access to interviewing women is easier and also because traditionally childbearing is generally associated with the female gender [18,19,29,30].\nOur study contributes to expanded knowledge on the relationship between HAART treatment status and the desire for children. We did not find that HAART treatment status had an impact on the desire for children. It is important for the scientific community and health care workers to know this. It is also important to assess the level of knowledge on the relationship between MTCT and HAART in health care workers who are working in HIV and family planning programs. They should be aware of the impact of HAART on fertility desires (and on possible negative maternal pregnancy outcomes) in order to provide updated, balanced and effective counselling for family planning and HIV care and prevention program clients.\nOur main study finding was that we did not detect an association between being on HAART and an increased desire for future children. Therefore, HAART status is likely not a major consideration for HIV-infected persons and couples in their discussions about planning and achieving their desired family size. This corresponds to what Homsy et al. found in their study from eastern Uganda [19] and what Snow described from Mbarara [20].\nMales were generally more likely than females to want more children, which we expected and which is in accordance with the literature from sub-Saharan Africa. Other predictors for a higher fertility desire (see Table 3) were younger age, a smaller number of living children, and higher occupational status (only for women), which are consistent with other findings from the literature for sub-Saharan Africa. In the gender-specific sub-analysis, men on HAART had a tendency to desire more children compared to those men not on HAART, while women on HAART had less desire for future children compared to women not on HAART (neither association was statistically significant). However, these results from the sub-analysis have to be considered with caution, as the 95% confidence intervals were very wide and difficult to interpret. In addition, the sample size in this sub-analysis was small and may not provide the statistical power to detect a statistically significant difference. The gender differences with respect to the direction of the relationship between HAART treatment and fertility desires should be tested in larger studies with sufficient power. If confirmed, it would be important for professionals to use different approaches in counseling male and female persons on their fertility options when they are HIV-positive.
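To give a rough sense of the power issue raised above, the sample size needed to detect the observed difference in the proportion wanting more children (18.2% vs. 10.7%) can be approximated as follows. This calculation is an editorial illustration rather than one reported by the authors; it assumes a two-sided test at alpha = 0.05 with 80% power and equal group sizes.

```python
# Rough, illustrative power calculation for the HAART vs. no-HAART comparison
# (18.2% vs. 10.7% wanting more children); not part of the original analysis.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.182, 0.107)   # Cohen's h, roughly 0.21
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                           power=0.80, alternative="two-sided")
print(round(n_per_group))  # roughly 170 per group -- well above the observed group sizes
```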
It would also be important to confirm if women on HAART have less desire for future children than women not on HAART and, if so, why.\nThe non-association between fertility desires and being on HAART treatment in HIV-infected rural Ugandans is not too surprising, as poor rural Ugandans with low educational attainment do not base their life decisions on the probabilities of HIV transmission but rather on life experiences. The hypothesis that their fertility desires are based on the probability of mother-to-child transmission of HIV seems therefore counterintuitive, as many other factors may play a role in fertility desires. These factors could be whether the parents will be alive to see the child grow and if they will be well enough to care for the child. If parents are receiving HAART, we can assume that they were quite sick (WHO stage 3 or 4, or a CD4 count of less than 200 cells/mm3) [31]. This would suggest that at some point in time they had experienced some of the more challenging aspects of living with HIV such as morbidity and adverse reactions to HAART treatment regimens. These experiences might have reduced their fertility desires. In addition, the provision of HAART in Uganda is not guaranteed, making the choice to have children especially risky for the group relying on HAART for their well-being [32]. Also, the risk of vertical HIV transmission for mothers on HAART is not zero and so a small risk remains.\nIt was surprising to find that the knowledge about vertical transmission of HIV and its risk reduction in mothers on HAART did not differ according to whether participants were on HAART or not. We expected participants in the HAART group to be more knowledgeable, as HAART clinic staff have a duty to provide proper and updated information to their clients. This lack of knowledge is regrettable, as the HAART clinic staff should have had many opportunities as a result of their treatment program (especially during the monthly monitoring visits) to properly inform their clients.\nReproductive decision-making may change if women/couples receive HAART for a longer time period. We collected our data in 2006 amidst a recent and rapid expansion of available free HAART services in Uganda. As we did not ask the question of the duration of HAART treatment in our survey, we were unable to incorporate this variable in our statistical models. It has been suggested that the duration of HAART treatment could play a role in the association between HAART status and fertility desires, e.g. women on HAART treatment for a longer time may be more inclined to have more children [21]. Future studies may want to include patients with extended HAART experience to elucidate their fertility desires.\n Limitations Our study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded as the information collected was sensitive. However, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias as we were not able to perform a true random sampling procedure. The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may be less likely to want more children in the future due to their reduced physical health status.
If this is true, we may have overstated the non-association between HAART and fertility desires for more children. Finally, as free HAART was made available in Uganda in 2005, it was likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when we interviewed them. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution.\nIn addition to interviewing participants about the desire for children, we also assessed the pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported a pregnancy at the time of the interview was similar to that of respondents (or their partners) not on HAART. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.\nOur study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded as the information collected was sensitive. However, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias as we were not able to perform a true random sampling procedure. The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may be less likely to want more children in the future due to their reduced physical health status. If this is true, we may have overstated the non-association between HAART and fertility desires for more children. Finally, as free HAART was made available in Uganda in 2005, it was likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when we interviewed them. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution.\nIn addition to interviewing participants about the desire for children, we also assessed the pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported a pregnancy at the time of the interview was similar to that of respondents (or their partners) not on HAART. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.", "Our study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded as the information collected was sensitive. However, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias as we were not able to perform a true random sampling procedure.
The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may be less likely to want more children in the future due to their reduced physical health status. If this is true, we may have overstated the non-association between HAART and fertility desires for more children. Finally, as free HAART was made available in Uganda in 2005, it was likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when we interviewed them. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution.\nIn addition to interviewing participants about the desire for children, we also assessed the pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported a pregnancy at the time of the interview was similar to that of respondents (or their partners) not on HAART. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.", "In order to provide updated, complete, comprehensive and balanced information during counseling and treatment sessions to HIV-infected persons/couples ethically, counseling staff have the responsibility to inform clients not only of the benefits that can be conferred by HAART but also of the possible risks of negative pregnancy outcomes [33,34]. There should be no delay in relaying the complete educational package to this group of HIV-positive persons and couples, as well as to the population in general. HIV counseling, care and prevention services as well as family planning programs are still not adequately addressing the benefits of HAART. The package should be modified to include current and updated information regarding the effectiveness of HAART in reducing vertical transmission of HIV. Health care workers and counselors who provide HIV/family planning counseling should upgrade their MTCT/HAART knowledge and be required to provide this information to all service users in an unbiased way. Service providers should be evaluated on their practice of providing this vital, relevant and updated information to their clients. Evidence on the positive impact of HAART on vertical HIV transmission is crucial for HIV-positive persons/couples to know when they are making childbearing decisions." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study setting", "Study design", "Recruitment of participants", "Data collection and analysis", "Study approval and ethical considerations", "Results", "Demographic and socio-economic characteristics of study participants", "Fertility desires", "Knowledge and risk perceptions of mother-to-child-transmission of HIV", "Discussion", "Limitations", "Conclusions" ]
[ "The decision whether or not to have children is often complex and influenced by many factors. HIV-positive individuals in Africa have additional considerations to take into account when deciding whether or not to have children. These include the possibility of passing HIV from mother-to-child and the likelihood that one or both parents could die prior to the child reaching adulthood [1]. Mother-to-child transmission (MTCT) of HIV happens in either of two ways: through perinatal infection and through breastfeeding. Regardless of these concerns, many Africans after receiving a positive HIV diagnosis still elect to have children for various personal, cultural and economic reasons [2]. In most African societies a common expectation of marriage is that the couple will have children [3]. This is an especially important expectation in Uganda as children become members of the paternal clan [4]. In African societies women are often valued by their ability to bear children and a very high social good is placed on fertility; therefore, the pressure on women to have children is very high [5].\nThe birth of an HIV-positive child to a HIV-positive parent or couple is a situation faced by many families in sub-Saharan African countries. However, the advent of available highly active antiretroviral therapy (HAART) in many sub-Saharan African countries has changed the situation for HIV-positive persons. HAART has been shown to drastically reduce the risk of MTCT of HIV: The risk of MTCT in Denmark was less than 1% in all HIV-positive women (all on HAART) who gave birth between 2000-2005, and 1.7% in a large reference center for pregnant HIV-positive women in Belgium [6,7]. Similarly, HAART has been found to reduce the risk of vertical transmission of HIV in sub-Saharan African countries. For example, the risk of MTCT of HIV in women from a general sample from several sub-Saharan African countries decreased from 30.9% in untreated women to 4.9% in women on HAART [8]. Similarly, in Botswana, the number of new pediatric HIV infections has dropped from 4,600 in 1999 to 890 in 2007 due to nearly complete coverage of an effective antiretroviral treatment program [9]. In Cote d'Ivoire, the MTCT rate of HIV was 5.6% in formula-fed infants and 6.8% in infants with short-term breastfeeding reported by mothers on HAART [10]. These figures show the vast benefits of HAART in reducing the risk of vertical transmission of HIV. Some studies from sub-Saharan Africa have shown that an HIV diagnosis causes people generally to choose to have fewer children [3,11,12]. Other research has shown that HIV infection does not have a marked impact on fertility decisions, particularly for those who do not show signs or symptoms of disease [2,13,14].\nBased on many reports indicating that HAART provision significantly decreases the risk of vertical transmission of HIV, one could expect that HIV-positive mothers/couples on HAART in sub-Saharan African countries would by now be more likely to opt to have children. It is therefore important to investigate if voluntary testing and counseling (VCT) for HIV infection and programs to prevent mother-to-child transmission (PMTCT) of HIV services are effective in counseling their clients on these recent findings and benefits of HAART. One study from South Africa stated that HAART increased the fertility desire in couples over time [15]. 
Similarly, a pan African study from seven countries showed that the pregnancy rate of HIV-positive women on HAART was significantly higher than that of women not on HAART [16]. A Zimbabwean study found that women on HAART were more likely to plan for a future child, but some women still had doubts about the effectiveness of HAART to prevent mother-to-child transmission of HIV [17].\nUgandan studies on the association between HAART and fertility revealed mixed results. One study from rural south-western Uganda showed that HIV-positive women on HAART had an increased desire for children but did not experience an increase in actual fertility [18]. Similarly, another study from rural eastern Uganda showed that HAART was not associated with pregnancy [19]. A survey of women attending Mbarara hospital in south-western Uganda revealed that HIV-positive women, irrespective of their HAART treatment status, had a lower desire for future children than HIV-negative women [20]. Another study from Mbarara district in south-western Uganda found that HIV-positive women on HAART were more than three times as likely to use contraception compared to those not on HAART [21]. In contrast to these findings, one study from Rakai district in central Uganda described a significant positive association between HAART treatment and becoming pregnant in 712 women attending a hospital [22].\nThe participants in the above-mentioned studies were mostly HIV-positive women only; that is, HIV-positive men were not interviewed. Male involvement in reproductive health and fertility has long been recognized as important, but has not been achieved in a tangible way in many sub-Saharan African countries [23-25]. In order to have a balanced view on whether being on HAART changes fertility desires of individuals and couples, it is crucial to include both sexes. We interviewed male and female HIV-positive participants on this issue. The objectives of the study were:\n1. To investigate whether HIV-positive persons on HAART had different fertility desires compared to HIV-positive persons not on HAART and whether these were different between men and women;\n2. To assess the knowledge of mother-to-child transmission of HIV in participants on HAART and not on HAART.\nThe study took place from September to December 2006 in two districts, Kabarole and Kamwenge, in western Uganda. Other results from the main quantitative component of this study on fertility and HIV status are published elsewhere [26].", " Study setting Participants were recruited through the health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy, and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans.\nParticipants were recruited through the health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District.
The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans.\n Study design The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants.\nThe study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants.\n Recruitment of participants The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records.\nThe study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records.\n Data collection and analysis A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. 
The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable \"Do you want more children in future\" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as \"Do you want more children in future in addition to the current pregnancy\"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes.\nAll of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable \"Desire to have more children in future\" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model.\nA questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable \"Do you want more children in future\" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as \"Do you want more children in future in addition to the current pregnancy\"). The final questionnaire was administered by trained interviewers in the local language. 
The interviews lasted around 40 minutes.\nAll of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable \"Desire to have more children in future\" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model.\n Study approval and ethical considerations Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form.\nEthics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form.", "Participants were recruited through correct health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans.", "The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants.", "The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. 
All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records.", "A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable \"Do you want more children in future\" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as \"Do you want more children in future in addition to the current pregnancy\"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes.\nAll of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable \"Desire to have more children in future\" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model.", "Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. 
In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form.", " Demographic and socio-economic characteristics of study participants One hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. For more details see Table 1:\nDemographic characteristics and antiretroviral treatment status of survey respondents (n = 199)\na The p-values are based on chi-square test unless otherwise specified.\nb Dwelling quality variable was divided into three categories as low, medium and high based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure).\nc The percentage who wants more children for age group 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively.\nd Two independent sample t-test p-value.\nUse of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents, however there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2:\nContraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199)\nOne hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. 
The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. For more details see Table 1:\nDemographic characteristics and antiretroviral treatment status of survey respondents (n = 199)\na The p-values are based on the chi-square test unless otherwise specified.\nb The dwelling quality variable was divided into three categories (low, medium and high) based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof, or any two out of three (i.e., floor, walls and roof) of this structure).\nc The percentage who want more children in the age groups 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively.\nd Two independent sample t-test p-value.\nUse of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents; however, there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2:\nContraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199)\n Fertility desires Overall, it was found that 13.6% of all participants wanted to have more children. HIV-positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend toward wanting to continue childbearing compared with those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3, with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009).\nIn bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), and this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children. 
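The odds ratios and 95% confidence intervals reported in Tables 3 and 4 are, presumably, the usual exponentiated logistic regression coefficients with Wald-type intervals; the short Python sketch below shows that arithmetic. The coefficient and standard error used here are illustrative placeholders, not values taken from the study.

```python
# Minimal sketch of how an adjusted odds ratio and Wald 95% CI are usually
# derived from a logistic regression coefficient. The beta and se values are
# illustrative placeholders, not estimates reported in this study.
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Return (OR, CI_low, CI_high) = exp(beta), exp(beta - z*se), exp(beta + z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# A coefficient of about 0.62 corresponds to an OR of about 1.86 (cf. the
# bivariate HAART estimate above); the standard error here is made up.
print(odds_ratio_ci(beta=0.62, se=0.41))
```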
Bivariate and multivariate logistic regression analysis results are shown in Table 3:\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis\nA younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children.\nA sub-analysis was conducted to compare the fertility desires of men and women separately as fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4).\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) by sex - Logistic Regression Multivariate Analysis\nThe main result of the sub-analysis was that being on HAART was not a predictor for fertility desires of males or females. However, the direction of the adjusted OR was different; men on HAART had a tendency to desire more children whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor for a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was the same in men, but did not reach statistical significance. We also found a significant association between the desire to have more children and the occupational status as well as the number of living children for females, but not for males (see Table 3).\nOverall, it was found that 13.6% of all participants wanted to have more children. HIV- positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend to want to continue childbearing than those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3 with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009).\nIn bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), and this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children. 
Bivariate and multivariate logistic regression analysis results are shown in Table 3:\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis\nA younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children.\nA sub-analysis was conducted to compare the fertility desires of men and women separately as fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4).\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) by sex - Logistic Regression Multivariate Analysis\nThe main result of the sub-analysis was that being on HAART was not a predictor for fertility desires of males or females. However, the direction of the adjusted OR was different; men on HAART had a tendency to desire more children whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor for a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was the same in men, but did not reach statistical significance. We also found a significant association between the desire to have more children and the occupational status as well as the number of living children for females, but not for males (see Table 3).\n Knowledge and risk perceptions of mother-to-child-transmission of HIV Participants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5).\nKnowledge and risk perception of mothers on HAART and not on HAART\n* Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART\nThe results from Table 5 show that participants generally knew being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the birth setting in the hospital (as compared to a home birth usually attended by traditional birth attendants), was a preferred place in order to reduce MTCT as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that the MTCT knowledge and risk perception of HIV in both groups was the same.\nThe questionnaire also assessed the attitude towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 
94.4% of participants not on HAART, p = 0.496). They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.\nParticipants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5).\nKnowledge and risk perception of mothers on HAART and not on HAART\n* Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART\nThe results from Table 5 show that participants generally knew being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the birth setting in the hospital (as compared to a home birth usually attended by traditional birth attendants), was a preferred place in order to reduce MTCT as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that the MTCT knowledge and risk perception of HIV in both groups was the same.\nThe questionnaire also assessed the attitude towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 94.4% of participants not on HAART, p = 0.496). They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.", "One hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. 
For more details see Table 1:\nDemographic characteristics and antiretroviral treatment status of survey respondents (n = 199)\na The p-values are based on chi-square test unless otherwise specified.\nb Dwelling quality variable was divided into three categories as low, medium and high based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure).\nc The percentage who wants more children for age group 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively.\nd Two independent sample t-test p-value.\nUse of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents, however there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2:\nContraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199)", "Overall, it was found that 13.6% of all participants wanted to have more children. HIV- positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend to want to continue childbearing than those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3 with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009).\nIn bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), and this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children. 
Bivariate and multivariate logistic regression analysis results are shown in Table 3:\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis\nA younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children.\nA sub-analysis was conducted to compare the fertility desires of men and women separately as fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4).\nOdds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable \"desire to have more children\" (n = 199) by sex - Logistic Regression Multivariate Analysis\nThe main result of the sub-analysis was that being on HAART was not a predictor for fertility desires of males or females. However, the direction of the adjusted OR was different; men on HAART had a tendency to desire more children whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor for a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was the same in men, but did not reach statistical significance. We also found a significant association between the desire to have more children and the occupational status as well as the number of living children for females, but not for males (see Table 3).", "Participants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5).\nKnowledge and risk perception of mothers on HAART and not on HAART\n* Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART\nThe results from Table 5 show that participants generally knew being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the birth setting in the hospital (as compared to a home birth usually attended by traditional birth attendants), was a preferred place in order to reduce MTCT as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that the MTCT knowledge and risk perception of HIV in both groups was the same.\nThe questionnaire also assessed the attitude towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 94.4% of participants not on HAART, p = 0.496). 
They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.", "This cross-sectional study conducted in western Uganda included HIV-positive respondents on HAART and not on HAART in order to elucidate their fertility desires in relation to their HIV treatment status. Importantly, this study included men and women, which enabled us to assess gender differences when the variable \"fertility desires\" in conjunction with HAART was modeled for men and women separately. Several other studies have reviewed fertility desires only in (pregnant) women, thus missing the male partners' perspectives on fertility desires. These studies limited their interviews to women perhaps because access to interviewing women is easier and also because childbearing is traditionally associated with the female gender [18,19,29,30].\nOur study contributes to expanded knowledge on the relationship between HAART treatment status and the desire for children. We did not find that HAART treatment status had an impact on the desire for children. It is important for the scientific community and health care workers to know this. It is also important to assess the level of knowledge on the relationship between MTCT and HAART in health care workers who are working in HIV and family planning programs. They should be aware of the impact of HAART on fertility desires (and on possible negative maternal pregnancy outcomes) in order to provide updated, balanced and effective counselling for family planning and HIV care and prevention program clients.\nOur main study finding was that we did not detect an association between being on HAART and an increased desire for future children. Therefore, HAART status is not likely a consideration by HIV-infected persons and couples in their discussions about planning and achieving their desired family size. This corresponds to what Homsy et al. found in their study from eastern Uganda [19] and what Snow described from Mbarara [20].\nMales were generally more likely than females to want more children, which we expected and which is in accordance with the literature from sub-Saharan Africa. Other predictors for a higher fertility desire (see Table 3) were younger age, a smaller number of living children, and higher occupational status (only for women), which are consistent with other findings from the literature for sub-Saharan Africa. In the gender-specific sub-analysis, men on HAART had a tendency to desire more children compared to those men not on HAART, while women on HAART had less desire for future children compared to women not on HAART (neither association was statistically significant). However, these results from the sub-analysis have to be considered with caution, as the 95% confidence intervals were very wide and difficult to interpret. In addition, the sample size in this sub-analysis was small and may not provide the statistical power to detect a statistically significant difference. The gender differences with respect to the direction of the relationship between HAART treatment and fertility desires should be tested in larger studies with sufficient power. If confirmed, it would be important for professionals to use different approaches in counseling male and female persons on their fertility options when they are HIV-positive. 
It would also be important to confirm if women on HAART have less desire for future children than women not on HAART and, if so, why.\nThe non-association between fertility desires and being on HAART treatment in HIV-infected rural Ugandans is not too surprising, as poor rural Ugandans with low educational attainment do not base their life decisions on the probabilities of HIV transmission but rather on life experiences. The hypothesis that their fertility desires are based on the probability of mother-to-child transmission of HIV therefore seems counterintuitive, as many other factors may play a role in fertility desires. These factors could be whether the parents will be alive to see the child grow and whether they will be well enough to care for the child. If parents are receiving HAART, we can assume that they were quite sick (WHO stage 3 or 4 or a CD4+ count of less than 200 cells/mm3) [31]. This would suggest that at some point in time they have experienced some of the more challenging aspects of living with HIV such as morbidity and adverse reactions to HAART treatment regimens. These experiences might have reduced their fertility desires. In addition, the provision of HAART in Uganda is not guaranteed, making the choice to have children especially risky for the group relying on HAART for their well-being [32]. Also, the risk of vertical HIV transmission for mothers on HAART is not zero, so a small risk remains.\nIt was surprising to find that the knowledge about vertical transmission of HIV and its risk reduction in mothers on HAART did not differ by whether participants were on HAART or not. We expected participants in the HAART group to be more knowledgeable, as HAART clinic staff have a duty to provide proper and updated information to their clients. This lack of knowledge is regrettable, as the HAART clinic staff should have had many opportunities as a result of their treatment program (especially during the monthly monitoring visits) to properly inform their clients.\nReproductive decision-making may change if women/couples receive HAART for a longer time period. We collected our data in 2006 amidst a recent and rapid expansion of available free HAART services in Uganda. As we did not ask about the duration of HAART treatment in our survey, we were unable to incorporate this variable in our statistical models. It has been suggested that the duration of HAART treatment could play a role in the association between HAART status and fertility desires, e.g. women on HAART treatment for a longer time may be more inclined to have more children [21]. Future studies may want to include patients with extended HAART experience to elucidate their fertility desires.\n Limitations Our study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded as the information collected was sensitive. However, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias as we were not able to perform a true random sampling procedure. The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may be less likely to want more children in future due to their reduced physical health status. 
If this is true, we may have overstated the non-association between HAART and fertility desires for more children. Finally, as free HAART was made available in Uganda in 2005, it was likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when they were interviewed by us. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution.\nIn addition to interviewing participants about the desire for children, we also assessed the pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported being pregnant at the time of the interview was similar to that of respondents (or their partners) not on HAART at the time of the interviews. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.\nOur study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded as the information collected was sensitive. However, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias as we were not able to perform a true random sampling procedure. The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may be less likely to want more children in future due to their reduced physical health status. If this is true, we may have overstated the non-association between HAART and fertility desires for more children. Finally, as free HAART was made available in Uganda in 2005, it was likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when they were interviewed by us. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution.\nIn addition to interviewing participants about the desire for children, we also assessed the pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported being pregnant at the time of the interview was similar to that of respondents (or their partners) not on HAART at the time of the interviews. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.", "Our study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded as the information collected was sensitive. However, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias as we were not able to perform a true random sampling procedure. 
The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may be less likely to want more children in future due to their reduced physical health status. If this is true, we may have overstated the non-association between HAART and fertility desires for more children. Finally, as free HAART was made available in Uganda in 2005, it was likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when they were interviewed by us. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution.\nIn addition to interviewing participants about the desire for children, we also assessed the pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported being pregnant at the time of the interview was similar to that of respondents (or their partners) not on HAART at the time of the interviews. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.", "In order to ethically provide updated, complete, comprehensive and balanced information to HIV-infected persons/couples during counseling and treatment sessions, counseling staff have the responsibility to inform clients not only of the benefits that can be conferred by HAART but also of the possible risks of negative pregnancy outcomes [33,34]. There should be no delay in relaying the complete educational package to this group of HIV-positive persons and couples, as well as to the population in general. HIV counseling, care and prevention services as well as family planning programs are still not adequately addressing the benefits of HAART. The package should be modified to include current and updated information regarding the effectiveness of HAART in reducing vertical transmission of HIV. Health care workers and counselors who provide HIV/family planning counseling should upgrade their MTCT/HAART knowledge and be required to provide this information to all service users in an unbiased way. Service providers should be evaluated on their practice of providing this vital, relevant and updated information to their clients. Evidence on the effectiveness of HAART in reducing vertical HIV transmission is crucial for HIV-positive persons/couples to know when they are making childbearing decisions." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "highly active antiretroviral therapy", "fertility desires", "family planning", "HIV/AIDS", "knowledge", "mother-to-child-transmission", "peri-natal transmission", "resource-limited setting", "Uganda" ]
Background: The decision whether or not to have children is often complex and influenced by many factors. HIV-positive individuals in Africa have additional considerations to take into account when deciding whether or not to have children. These include the possibility of passing HIV from mother-to-child and the likelihood that one or both parents could die prior to the child reaching adulthood [1]. Mother-to-child transmission (MTCT) of HIV happens in either of two ways: through perinatal infection and through breastfeeding. Regardless of these concerns, many Africans after receiving a positive HIV diagnosis still elect to have children for various personal, cultural and economic reasons [2]. In most African societies a common expectation of marriage is that the couple will have children [3]. This is an especially important expectation in Uganda as children become members of the paternal clan [4]. In African societies women are often valued by their ability to bear children and a very high social good is placed on fertility; therefore, the pressure on women to have children is very high [5]. The birth of an HIV-positive child to a HIV-positive parent or couple is a situation faced by many families in sub-Saharan African countries. However, the advent of available highly active antiretroviral therapy (HAART) in many sub-Saharan African countries has changed the situation for HIV-positive persons. HAART has been shown to drastically reduce the risk of MTCT of HIV: The risk of MTCT in Denmark was less than 1% in all HIV-positive women (all on HAART) who gave birth between 2000-2005, and 1.7% in a large reference center for pregnant HIV-positive women in Belgium [6,7]. Similarly, HAART has been found to reduce the risk of vertical transmission of HIV in sub-Saharan African countries. For example, the risk of MTCT of HIV in women from a general sample from several sub-Saharan African countries decreased from 30.9% in untreated women to 4.9% in women on HAART [8]. Similarly, in Botswana, the number of new pediatric HIV infections has dropped from 4,600 in 1999 to 890 in 2007 due to nearly complete coverage of an effective antiretroviral treatment program [9]. In Cote d'Ivoire, the MTCT rate of HIV was 5.6% in formula-fed infants and 6.8% in infants with short-term breastfeeding reported by mothers on HAART [10]. These figures show the vast benefits of HAART in reducing the risk of vertical transmission of HIV. Some studies from sub-Saharan Africa have shown that an HIV diagnosis causes people generally to choose to have fewer children [3,11,12]. Other research has shown that HIV infection does not have a marked impact on fertility decisions, particularly for those who do not show signs or symptoms of disease [2,13,14]. Based on many reports indicating that HAART provision significantly decreases the risk of vertical transmission of HIV, one could expect that HIV-positive mothers/couples on HAART in sub-Saharan African countries would by now be more likely to opt to have children. It is therefore important to investigate if voluntary testing and counseling (VCT) for HIV infection and programs to prevent mother-to-child transmission (PMTCT) of HIV services are effective in counseling their clients on these recent findings and benefits of HAART. One study from South Africa stated that HAART increased the fertility desire in couples over time [15]. 
Similarly, a pan-African study from seven countries showed that the pregnancy rate of HIV-positive women on HAART was significantly higher than that of women not on HAART [16]. A Zimbabwean study found that women on HAART were more likely to plan for a future child, but some women still had doubts about the effectiveness of HAART to prevent mother-to-child transmission of HIV [17]. Ugandan studies on the association between HAART and fertility revealed mixed results. One study from rural south-western Uganda showed that HIV-positive women on HAART had an increased desire for children but did not experience an increase in actual fertility [18]. Similarly, another study from rural eastern Uganda showed that HAART was not associated with pregnancy [19]. A survey of women attending Mbarara hospital in south-western Uganda revealed that HIV-positive women had a lower desire for future children than HIV-negative women, irrespective of their HAART treatment status [20]. Another study from Mbarara district in south-western Uganda found that HIV-positive women on HAART were more than three times as likely to use contraception compared to those not on HAART [21]. In contrast to these findings, one study from Rakai district in central Uganda described a significant positive association between HAART treatment and becoming pregnant in 712 women attending a hospital [22]. The participants in the above-mentioned studies were mostly HIV-positive women only; that is, HIV-positive men were not interviewed. Male involvement in reproductive health and fertility has long been recognized as important, but has not been achieved in a tangible way in many sub-Saharan African countries [23-25]. In order to have a balanced view on whether being on HAART changes fertility desires of individuals and couples, it is crucial to include both sexes. We interviewed male and female HIV-positive participants on this issue. The objectives of the study were: 1. To investigate whether HIV-positive persons on HAART had different fertility desires compared to HIV-positive persons not on HAART and whether these were different between men and women; 2. To assess the knowledge of mother-to-child transmission of HIV in participants on HAART and not on HAART. The study took place from September to December 2006 in two districts, Kabarole and Kamwenge, in western Uganda. Other results from the main quantitative component of this study on fertility and HIV status are published elsewhere [26]. Methods: Study setting Participants were recruited through the health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. 
The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans. Study design The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants. The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants. Recruitment of participants The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records. The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records. Data collection and analysis A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. 
The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable "Do you want more children in future" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as "Do you want more children in future in addition to the current pregnancy"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes. All of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable "Desire to have more children in future" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model. A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable "Do you want more children in future" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as "Do you want more children in future in addition to the current pregnancy"). The final questionnaire was administered by trained interviewers in the local language. 
The interviews lasted around 40 minutes. All of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two-sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable "Desire to have more children in future" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model. Study approval and ethical considerations Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form. Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form. Study setting: Participants were recruited through the health centres located in Rwimi and Kibiito sub-counties in Kabarole District, and the Bigodi sub-county in Kamwenge District. The government-run health centers were located along two major roads and offered clinical and public health services, as well as VCT programs to prevent mother-to-child transmission of HIV. The Rwimi and Kibiito Health Centres also offered free HAART for eligible HIV patients (CD4 cell count < 200 cells/ml and/or WHO stage 3 and higher). The area surrounding these health centres has a predominantly agricultural-based economy and subsistence farming is the main occupation. This area is very fertile, allowing for high yields of basic food crops such as maize, cassava, cooking banana and beans. Study design: The study applied a cross-sectional, quantitative design. We report the findings based on a survey which used a structured questionnaire administered through interviews to gather data from participants. Recruitment of participants: The study inclusion criteria were: age 18-44 years, married or cohabitating with a partner and having a positive HIV test result. Persons who were bedridden were excluded from the study. 
All HIV-positive persons registered at two health centers (Rwimi and Kibiito) were invited to participate in the study. In order to increase the sample size, all HIV-positive individuals of an HIV patient support group in Bigodi sub-county were also included in the study. Individuals identified were contacted by health care workers and asked to participate in the study. Ninety-two percent of eligible patients agreed to be interviewed. Participants had the option of being interviewed either in their home or at their local health centre. Of the 199 persons who participated in the study, 122 individuals were on HAART and 77 were not on HAART. The HAART status was derived from interviews and verified using health records. Data collection and analysis: A questionnaire was developed in consultation with local experts to assess the socio-demographic characteristics of the study participants. Most questions used were derived from published sources and were already tested for its reliability and validity, e.g. questions derived from the Ugandan Demographic and Health Survey [27]. The questionnaire included questions related to HIV testing, HIV status and fertility desires, treatment status (HAART or no HAART), attitudes towards childbearing of HIV-infected women/couples and risk perceptions of MTCT of HIV. The questionnaire was translated into the local language, Rutooro, and back translated into English for linguistic reliability. It was pre-tested in the study area by 15 persons not part of the study. The instruments' reliability was assessed through a test-retest exercise of 26 randomly selected participants seven days after the questionnaire was first administered. The overall percent agreement obtained in the re-test was 92.4%. For the most important question referring to the main study outcome variable "Do you want more children in future" agreement was 96.2%. (For those participants who reported to be pregnant at the time of the interview this question was phrased as "Do you want more children in future in addition to the current pregnancy"). The final questionnaire was administered by trained interviewers in the local language. The interviews lasted around 40 minutes. All of the interview data was entered into Microsoft Access and then transferred into STATA for statistical analysis [28]. A p value of < 0.05 was considered statistically significant (all statistical tests were two sided). Data from open-ended survey questions was coded and analyzed using descriptive statistics. The chi-square test and t-test were used for bivariate data analysis. Bivariate and multivariate logistic regression was used to examine and model the variable "Desire to have more children in future" with a binary outcome (yes/no) and HAART status as the main covariate of interest. Independent variables included demographic and socio-economic characteristics as well as various HIV-related factors such as the HIV-serostatus of the respondent's partner, experience of any AIDS-related symptoms or illness, and experience of an AIDS-related death (either within their family or of a child in their community). All the independent variables that were significant at p < 0.2 in bivariate analyses or important confounding variables (e.g. sex) were selected and fit into a multivariate model. Variables found to be statistically significant in the multivariate model (p < 0.05) were kept in the final model. 
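The model-building strategy described above (bivariate screening at p < 0.2, forced inclusion of important confounders such as sex, and a final multivariate logistic model reported as odds ratios with 95% confidence intervals) was carried out in Stata. The sketch below is only an illustration of that workflow in Python, using hypothetical column names (wants_more_children, on_haart, etc.) in a made-up DataFrame df; it is not the authors' code, and the final pruning step (retaining only covariates significant at p < 0.05 in the final model) is not shown.

```python
# Illustrative sketch only: the authors analyzed their data in Stata. This
# re-creates the described model-building strategy with hypothetical column
# names in a pandas DataFrame `df` (binary outcome coded 0/1).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

CANDIDATES = ["on_haart", "age", "sex", "occupation", "n_living_children",
              "partner_hiv_positive", "aids_symptoms", "aids_death_experience"]

def screen_bivariate(df: pd.DataFrame, outcome: str = "wants_more_children",
                     alpha: float = 0.2) -> list:
    """Keep covariates whose bivariate logistic regression p-value is < alpha."""
    kept = []
    for var in CANDIDATES:
        res = smf.logit(f"{outcome} ~ {var}", data=df).fit(disp=False)
        # use the smallest p-value among the variable's terms so that
        # categorical covariates (expanded into dummies) are handled too
        if res.pvalues.drop("Intercept").min() < alpha:
            kept.append(var)
    return kept

def fit_multivariate(df: pd.DataFrame, covariates: list,
                     outcome: str = "wants_more_children"):
    """Fit the multivariate model and report ORs with Wald 95% CIs."""
    if "sex" not in covariates:      # keep the a priori confounder in the model
        covariates = covariates + ["sex"]
    res = smf.logit(f"{outcome} ~ " + " + ".join(covariates), data=df).fit(disp=False)
    ci = res.conf_int()              # columns 0 and 1 hold the lower/upper bounds
    table = pd.DataFrame({"OR": np.exp(res.params),
                          "CI_low": np.exp(ci[0]),
                          "CI_high": np.exp(ci[1]),
                          "p": res.pvalues}).drop("Intercept")
    return res, table

# Example usage: selected = screen_bivariate(df); model, or_table = fit_multivariate(df, selected)
```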
Study approval and ethical considerations: Ethics approval was provided by the University of Alberta's Health Research Ethics Board Panel B. In Uganda, approval for the study was obtained from the Uganda National Council of Science and Technology, Kampala. The study was also approved by the District Health Officers of Kabarole and Kamwenge districts, and the political representatives of the study areas. Each participant was informed about the study through an information letter that was read to each participant. All participants signed a consent form. Results: Demographic and socio-economic characteristics of study participants One hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. For more details see Table 1: Demographic characteristics and antiretroviral treatment status of survey respondents (n = 199) a The p-values are based on chi-square test unless otherwise specified. b Dwelling quality variable was divided into three categories as low, medium and high based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure). c The percentage who wants more children for age group 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively. d Two independent sample t-test p-value. Use of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents, however there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2: Contraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199) One hundred twenty two (61.3%) of the study participants were female which is representative of this clinic population. The mean age of males was 35.7 years (SD 5.7 years, range 20-44 years) while the mean age of females was 33.3 years (SD 6.0 years, range 18-44 years). This age difference was statistically signficant (p = 0.006). One-hundred and twenty two (61.3%) participants were currently on HAART while 77 (38.7%) were not. More female participants were on HAART compared to males (41.8% vs. 33.8%, p = 0.257). Both female and male participants on HAART were younger compared to those not on HAART (female: 31.1 vs. 34.8 years, p < 0.001; male: 33.1 vs. 37.0 years, p = 0.004). 
Five participants (6.5%) in the HAART group reported to be pregnant during the survey, compared to three (2.5%) in the non-HAART group. The majority of the study respondents were subsistence farmers with a primary education, living in metal-roofed mud huts. The participants had a wide range of tribal and religious affiliations. For more details see Table 1: Demographic characteristics and antiretroviral treatment status of survey respondents (n = 199) a The p-values are based on chi-square test unless otherwise specified. b Dwelling quality variable was divided into three categories as low, medium and high based on the house floor, wall and roof structure (low: mud floor, mud/thatched walls and grass/thatched roof; medium: mud floor, mud/thatched walls and metal roof; high: cement/concrete/wood floor, walls of permanent materials and metal roof or any two out of three (i.e., floor, walls and roof) of this structure). c The percentage who wants more children for age group 18-24, 25-29, 30-34, 35-39 and 40-44 years is 42, 30, 18, 4 and 4, respectively. d Two independent sample t-test p-value. Use of effective family planning methods such as oral contraceptives and injectables was generally low in our respondents, however there was a high rate of condom use. Participants on HAART reported a much lower rate of condom use. For more details on contraceptive use see Table 2: Contraceptive use stratified for sex and HAART treatment status of survey respondents (n = 199) Fertility desires Overall, it was found that 13.6% of all participants wanted to have more children. HIV- positive individuals on HAART were more likely to want more children; however, this difference was not statistically significant (18.2% on HAART vs. 10.7% not on HAART, p = 0.131). Those on HAART who had two or more living children showed a trend to want to continue childbearing than those who were not on HAART (18.2% vs. 7.9%, p = 0.380). Those who wanted to continue childbearing wanted an average of 2.3 additional children (2.1 for participants on HAART vs. 2.4 for participants not on HAART). Three out of 14 pregnant women (21.4%) and 13 out of 132 non-pregnant women (9.9%) wanted more children (p = 0.187). For all participants, the average number of children desired (completed family size) was 4.3 with 3.7 children for participants on HAART vs. 4.6 children for participants not on HAART (p = 0.009). In bivariate analysis, respondents who were on HAART were slightly more likely to say that they wanted more children in future (OR 1.86, p = 0.135), and this trend disappeared in the multivariate model. Results from the multivariate logistic regression analysis indicate that the odds of desiring more children were quite similar and not statistically different for those on HAART compared to those not on HAART. The final logistic regression model was adjusted for age, sex, occupation, and number of living children. Bivariate and multivariate logistic regression analysis results are shown in Table 3: Odds ratio (OR) and 95% confidence interval (95%CI) for a dependent variable "desire to have more children" (n = 199) - Logistic Regression Bivariate and Multivariate Analysis A younger age, male sex, occupation as businessperson, and a smaller number of living children were significant predictors for the desire to have more children. 
A sub-analysis was conducted to compare the fertility desires of men and women separately, as the fertility intentions of each sex may differ due to the diverse roles that men and women play in fertility decision-making and childbearing (Table 4): Odds ratio (OR) and 95% confidence interval (95% CI) for the dependent variable "desire to have more children" (n = 199) by sex - Logistic Regression Multivariate Analysis. The main result of the sub-analysis was that being on HAART was not a predictor of fertility desires for either males or females. However, the direction of the adjusted OR differed: men on HAART had a tendency to desire more children, whereas women on HAART tended to desire fewer children than their untreated counterparts. Additionally, a higher number of living children was a statistically significant predictor of a lower desire for more children in women (OR 0.35, 95% CI 0.18-0.70). This association was in the same direction in men but did not reach statistical significance. We also found a significant association between the desire to have more children and occupational status, as well as the number of living children, for females but not for males (see Table 3). Knowledge and risk perceptions of mother-to-child transmission of HIV Participants were asked questions to assess their knowledge of MTCT of HIV, including whether or not being on HAART would influence perceptions of mother-to-child transmission. Many respondents said that it depended upon whether the woman delivered her baby in the village or at the health centre/hospital. Participants were asked if MTCT of HIV was possible given differing treatment and delivery scenarios (see Table 5): Knowledge and risk perception of mothers on HAART and not on HAART. * Responses of the participants were not significantly different (i.e., p > 0.05) for those on HAART vs. not on HAART. The results from Table 5 show that participants generally knew that being on HAART reduces the risk of MTCT. A majority of respondents said that women on HAART cannot transmit HIV to their unborn/newborn child (the correct answer would have been that the MTCT risk is minimal). Most respondents stated that the hospital (as compared to a home birth, usually attended by traditional birth attendants) was the preferred birth setting in order to reduce MTCT, as hospital/health centre deliveries were perceived to be more sanitary and were assisted by trained medical personnel. Participants tended to think that the birth setting was more important than the mother's HAART status. HIV-positive participants on HAART and those not on HAART gave more or less the same responses. Answers to all questions did not differ statistically based upon the respondents' HAART status, indicating that MTCT knowledge and risk perception of HIV were the same in both groups. The questionnaire also assessed attitudes towards childbearing in HIV-positive women/couples. Generally, most participants were not in favor of having children when one or both of the parents are HIV-infected (90.9% of participants on HAART vs. 94.4% of participants not on HAART, p = 0.496). They also thought that their community members had a negative attitude towards HIV-infected couples when they had a newborn baby.
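The group comparisons reported in Tables 1 and 5 rest on standard chi-square tests for categorical responses and two independent-sample t-tests for continuous variables such as age. The sketch below shows one hedged way these tests could be run; the data file, variable names and the specific knowledge item are assumptions for illustration only, not the authors' code.

```python
# Hypothetical sketch of the group comparisons used in Tables 1 and 5:
# chi-square for a categorical response by HAART status, t-test for mean age.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_respondents.csv")  # hypothetical data file

# Chi-square test: e.g. "can a mother on HAART transmit HIV to her baby?" (yes/no)
table = pd.crosstab(df["on_haart"], df["mtct_possible_on_haart"])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# Two independent-sample t-test: age of participants on vs. not on HAART
age_on = df.loc[df["on_haart"] == 1, "age"]
age_off = df.loc[df["on_haart"] == 0, "age"]
t_stat, p_t = stats.ttest_ind(age_on, age_off)
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")
```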
Discussion: This cross-sectional study conducted in western Uganda included HIV-positive respondents on HAART and not on HAART in order to elucidate their fertility desires in relation to their HIV treatment status. Importantly, this study included men and women, which enabled us to assess gender differences when the variable "fertility desires" in conjunction with HAART was modeled for men and women separately. Several other studies have examined fertility desires only in (pregnant) women, thus missing the male partners' perspectives. These studies limited their interviews to women, perhaps because access to women for interviewing is easier and also because childbearing is traditionally associated with the female gender [18,19,29,30]. Our study contributes to expanded knowledge on the relationship between HAART treatment status and the desire for children. We did not find that HAART treatment status had an impact on the desire for children. It is important for the scientific community and health care workers to know this. It is also important to assess the level of knowledge on the relationship between MTCT and HAART among health care workers who work in HIV and family planning programs.
They should be aware of the impact of HAART on fertility desires (and on possible negative maternal pregnancy outcomes) in order to provide updated, balanced and effective counselling for family planning and HIV care and prevention program clients. Our main study finding was that we did not detect an association between being on HAART and an increased desire for future children. Therefore, HAART treatment status is not likely a consideration for HIV-infected persons and couples in their discussions about planning and achieving their desired family size. This corresponds to what Homsy et al. found in their study from eastern Uganda [19] and what Snow described from Mbarara [20]. Males were generally more likely than females to want more children, which we expected and which is in accordance with the literature from sub-Saharan Africa. Other predictors for a higher fertility desire (see Table 3) were younger age, higher number of living children, and higher occupational status (only for women), which are consistent with other findings from the literature for sub-Saharan Africa. In the gender-specific sub-analysis, men on HAART had a tendency to desire more children compared to men not on HAART, while women on HAART had less desire for future children compared to women not on HAART (neither association was statistically significant). However, these results from the sub-analysis have to be considered with caution, as the 95% confidence intervals were very wide and difficult to interpret. In addition, the sample size in this sub-analysis was small and may not provide the statistical power to detect a statistically significant difference. The gender differences with respect to the direction of the relationship between HAART treatment and fertility desires should be tested in larger studies with sufficient power. If confirmed, it would be important for professionals to use different approaches in counseling male and female clients on their fertility options when they are HIV-positive. It would also be important to confirm whether women on HAART have less desire for future children than women not on HAART and, if so, why. The non-association between fertility desires and being on HAART treatment in HIV-infected rural Ugandans is not too surprising, as poor rural Ugandans with low educational attainment do not base their life decisions on the probabilities of HIV transmission but rather on life experiences. The hypothesis that their fertility desires are based on the probability of mother-to-child transmission of HIV therefore seems counterintuitive, as many other factors may play a role in fertility desires. These factors could include whether the parents will be alive to see the child grow and whether they will be well enough to care for the child. If parents are receiving HAART, we can assume that they were quite sick (stage 3 or 4, or a CD4+ count of less than 200 cells/mm3) [31]. This would suggest that at some point in time they have experienced some of the more challenging aspects of living with HIV, such as morbidity and adverse reactions to HAART treatment regimens. These experiences might have reduced their fertility desires. In addition, the provision of HAART in Uganda is not guaranteed, making the choice to have children especially risky for the group relying on HAART for their well-being [32]. Also, the risk of vertical HIV transmission for mothers on HAART is not zero, so a small risk remains.
It was surprising to find that knowledge about vertical transmission of HIV and its risk reduction in mothers on HAART did not differ according to whether participants were on HAART or not. We expected participants in the HAART group to be more knowledgeable, as HAART clinic staff have a duty to provide proper and updated information to their clients. This lack of knowledge is regrettable, as the HAART clinic staff should have had many opportunities within their treatment program (especially during the monthly monitoring visits) to properly inform their clients. Reproductive decision-making may change if women/couples receive HAART for a longer time period. We collected our data in 2006 amidst a recent and rapid expansion of available free HAART services in Uganda. As we did not ask about the duration of HAART treatment in our survey, we were unable to incorporate this variable in our statistical models. It has been suggested that the duration of HAART treatment could play a role in the association between HAART status and fertility desires; for example, women on HAART treatment for a longer time may be more inclined to have more children [21]. Future studies may want to include patients with extended HAART experience to elucidate their fertility desires. Limitations Our study had some limitations. First, the odds ratios were based on cross-sectional data, which precludes assessing the causality of the associations described. Second, social desirability bias cannot be excluded, as the information collected was sensitive; however, we used highly trained and experienced interviewers to minimize this bias. Third, we may have experienced a selection bias, as we were not able to perform a true random sampling procedure. The fourth limitation was that we were not able to assess any clinical parameters in regard to the HIV disease progression of participants. As some participants on HAART may have been in a more advanced clinical stage of HIV, they may have been less likely to want more children in future due to their reduced physical health status. If this is true, we may have overstated the non-association between HAART and the desire for more children. Finally, as free HAART was made available in Uganda in 2005, it is likely that some of our participants were put on HAART during this expansion and therefore had a short duration of antiretroviral treatment when they were interviewed. Another shortcoming is that this study was conducted in 2006; since then there have been changes in HAART promotion and its use in Uganda, so the results should be interpreted with caution. In addition to interviewing participants about the desire for children, we also assessed pregnancy status at the time of the survey by asking the respondents if they (or their partners) were pregnant. The proportion of those on HAART who reported being pregnant at the time of the interview was similar to that of respondents (or their partners) not on HAART. However, as the absolute numbers were very small, the results from this comparison have limitations. In spite of these limitations, these pregnancy numbers somewhat validate the findings from the interviews.
Conclusions: In order to ethically provide updated, complete, comprehensive and balanced information during counseling and treatment sessions to HIV-infected persons/couples, counseling staff have the responsibility to inform clients not only of the benefits that can be conferred by HAART but also of the possible risks of negative pregnancy outcomes [33,34]. There should be no delay in relaying the complete educational package to this group of HIV-positive persons and couples, as well as to the population in general. HIV counseling, care and prevention services, as well as family planning programs, are still not adequately addressing the benefits of HAART. The package should be modified to include current and updated information regarding the effectiveness of HAART in reducing vertical transmission of HIV. Health care workers and counselors who provide HIV/family planning counseling should upgrade their MTCT/HAART knowledge and be required to provide this information to all service users in an unbiased way. Service providers should be evaluated on their practice of providing this vital, relevant and updated information to their clients. Evidence on the positive impact of HAART on vertical HIV transmission is crucial for HIV-positive persons/couples to know when they are making childbearing decisions.
Background: Little is known about the fertility desires of HIV infected individuals on highly active antiretroviral therapy (HAART). In order to contribute more knowledge to this topic we conducted a study to determine if HIV-infected persons on HAART have different fertility desires compared to persons not on HAART, and if the knowledge about HIV transmission from mother-to-child is different in the two groups. Methods: The study was a cross-sectional survey comparing two groups of HIV-positive participants: those who were on HAART and those who were not. Semi-structured interviews were conducted with 199 HIV patients living in a rural area of western Uganda. The desire for future children was measured by the question in the questionnaire "Do you want more children in future." The respondents' HAART status was derived from the interviews and verified using health records. Descriptive, bivariate and multivariate methods were used to analyze the relationship between HAART treatment status and the desire for future children. Results: Results from the multivariate logistic regression model indicated an adjusted odds ratio (OR) of 1.08 (95% CI 0.40-2.90) for those on HAART wanting more children (crude OR 1.86, 95% CI 0.82-4.21). Statistically significant predictors for desiring more children were younger age, having a higher number of living children and male sex. Knowledge of the risks for mother-to-child-transmission of HIV was similar in both groups. Conclusions: The conclusions from this study are that the HAART treatment status of HIV patients did not influence the desire for children. The non-significant association between the desire for more children and the HAART treatment status could be caused by a lack of knowledge in HIV-infected persons/couples about the positive impact of HAART in reducing HIV transmission from mother-to-child. We recommend that the health care system ensures proper training of staff and appropriate communication to those living with HIV as well as to the general community.
Background: The decision whether or not to have children is often complex and influenced by many factors. HIV-positive individuals in Africa have additional considerations to take into account when deciding whether or not to have children. These include the possibility of passing HIV from mother-to-child and the likelihood that one or both parents could die prior to the child reaching adulthood [1]. Mother-to-child transmission (MTCT) of HIV happens in either of two ways: through perinatal infection and through breastfeeding. Regardless of these concerns, many Africans after receiving a positive HIV diagnosis still elect to have children for various personal, cultural and economic reasons [2]. In most African societies a common expectation of marriage is that the couple will have children [3]. This is an especially important expectation in Uganda as children become members of the paternal clan [4]. In African societies women are often valued by their ability to bear children and a very high social good is placed on fertility; therefore, the pressure on women to have children is very high [5]. The birth of an HIV-positive child to a HIV-positive parent or couple is a situation faced by many families in sub-Saharan African countries. However, the advent of available highly active antiretroviral therapy (HAART) in many sub-Saharan African countries has changed the situation for HIV-positive persons. HAART has been shown to drastically reduce the risk of MTCT of HIV: The risk of MTCT in Denmark was less than 1% in all HIV-positive women (all on HAART) who gave birth between 2000-2005, and 1.7% in a large reference center for pregnant HIV-positive women in Belgium [6,7]. Similarly, HAART has been found to reduce the risk of vertical transmission of HIV in sub-Saharan African countries. For example, the risk of MTCT of HIV in women from a general sample from several sub-Saharan African countries decreased from 30.9% in untreated women to 4.9% in women on HAART [8]. Similarly, in Botswana, the number of new pediatric HIV infections has dropped from 4,600 in 1999 to 890 in 2007 due to nearly complete coverage of an effective antiretroviral treatment program [9]. In Cote d'Ivoire, the MTCT rate of HIV was 5.6% in formula-fed infants and 6.8% in infants with short-term breastfeeding reported by mothers on HAART [10]. These figures show the vast benefits of HAART in reducing the risk of vertical transmission of HIV. Some studies from sub-Saharan Africa have shown that an HIV diagnosis causes people generally to choose to have fewer children [3,11,12]. Other research has shown that HIV infection does not have a marked impact on fertility decisions, particularly for those who do not show signs or symptoms of disease [2,13,14]. Based on many reports indicating that HAART provision significantly decreases the risk of vertical transmission of HIV, one could expect that HIV-positive mothers/couples on HAART in sub-Saharan African countries would by now be more likely to opt to have children. It is therefore important to investigate if voluntary testing and counseling (VCT) for HIV infection and programs to prevent mother-to-child transmission (PMTCT) of HIV services are effective in counseling their clients on these recent findings and benefits of HAART. One study from South Africa stated that HAART increased the fertility desire in couples over time [15]. 
Similarly, a pan African study from seven countries showed that the pregnancy rate of HIV-positive women on HAART was significantly higher compared to those not on HAART [16]. A Zimbabwean study found that women on HAART were more likely to plan for a future child, but some women still had doubts about the effectiveness of HAART to prevent mother-to-child transmission of HIV [17]. Ugandan studies on the association between HAART and fertility revealed mixed results. One study from rural south-western Uganda showed that HIV-positive women on HAART had an increased desire for children but did not experience an increase in actual fertility [18]. Similarly, another study from rural eastern Uganda showed that HAART was not associated with pregnancy [19]. A survey of women attending Mbarara hospital in south-western Uganda revealed that HIV-positive women had a lower desire for future children irrespective of their HAART treatment status than HIV-negative women [20]. Another study from Mbarara district in south-western Uganda, found that HIV-positive women on HAART were more than three times likely to use contraception compared to those not on HAART [21]. In contrast to these findings, one study from Rakai district in central Uganda described a significant positive association between HAART treatment and becoming pregnant in 712 women attending a hospital [22]. The participants in the above mentioned studies included mostly only HIV-positive women, that is, HIV-positive men were not interviewed. Male involvement in reproductive health and fertility has long been recognized as important, but has not been achieved in a tangible way in many sub-Saharan African countries [23-25]. In order to have a balanced view on whether being on HAART changes fertility desires of individuals and couples, it is crucial to include both sexes. We interviewed male and female HIV-positive participants on this issue. The objectives of the study were: 1. To investigate whether HIV-positive persons on HAART had different fertility desires compared to HIV-positive persons not on HAART and whether these were different between men and women; 2. To assess the knowledge of mother-to-child transmission of HIV in participants on HAART and not on HAART. The study took place from September to December 2006 in two districts, Kabarole and Kamwenge, in western Uganda. Other results from the main quantitative component of this study on fertility and HIV status are published elsewhere [26]. Conclusions: WK was involved in all stages of the study and wrote the article. JH had major input in the development of the proposal and conducted the field work in Uganda. She commented on the draft manuscript and helped with the interpretation of the study results. GSJ developed the statistical part of the proposal and analyzed the data. He also provided input into the manuscript. AA helped with the supervision of the field work and provided input into the manuscript and the interpretation of the data. TR was involved in the design of the study and supervised the field work in Uganda. He also provided input into the manuscript and helped to interpret the study results within the context of the Ugandan health care system. All authors read and approved the final manuscript.
Background: Little is known about the fertility desires of HIV infected individuals on highly active antiretroviral therapy (HAART). In order to contribute more knowledge to this topic we conducted a study to determine if HIV-infected persons on HAART have different fertility desires compared to persons not on HAART, and if the knowledge about HIV transmission from mother-to-child is different in the two groups. Methods: The study was a cross-sectional survey comparing two groups of HIV-positive participants: those who were on HAART and those who were not. Semi-structured interviews were conducted with 199 HIV patients living in a rural area of western Uganda. The desire for future children was measured by the question in the questionnaire "Do you want more children in future." The respondents' HAART status was derived from the interviews and verified using health records. Descriptive, bivariate and multivariate methods were used to analyze the relationship between HAART treatment status and the desire for future children. Results: Results from the multivariate logistic regression model indicated an adjusted odds ratio (OR) of 1.08 (95% CI 0.40-2.90) for those on HAART wanting more children (crude OR 1.86, 95% CI 0.82-4.21). Statistically significant predictors for desiring more children were younger age, having a higher number of living children and male sex. Knowledge of the risks for mother-to-child-transmission of HIV was similar in both groups. Conclusions: The conclusions from this study are that the HAART treatment status of HIV patients did not influence the desire for children. The non-significant association between the desire for more children and the HAART treatment status could be caused by a lack of knowledge in HIV-infected persons/couples about the positive impact of HAART in reducing HIV transmission from mother-to-child. We recommend that the health care system ensures proper training of staff and appropriate communication to those living with HIV as well as to the general community.
10,746
382
[ 1135, 142, 33, 171, 484, 87, 2934, 480, 596, 374, 1785, 350, 223 ]
14
[ "haart", "hiv", "children", "participants", "study", "women", "health", "fertility", "participants haart", "positive" ]
[ "hiv positive parent", "fertility hiv status", "childbearing hiv positive", "fertility options hiv", "hiv transmission mothers" ]
null
[CONTENT] highly active antiretroviral therapy | fertility desires | family planning | HIV/AIDS | knowledge | mother-to-child-transmission | peri-natal transmission | resource-limited setting | Uganda [SUMMARY]
[CONTENT] highly active antiretroviral therapy | fertility desires | family planning | HIV/AIDS | knowledge | mother-to-child-transmission | peri-natal transmission | resource-limited setting | Uganda [SUMMARY]
null
[CONTENT] highly active antiretroviral therapy | fertility desires | family planning | HIV/AIDS | knowledge | mother-to-child-transmission | peri-natal transmission | resource-limited setting | Uganda [SUMMARY]
[CONTENT] highly active antiretroviral therapy | fertility desires | family planning | HIV/AIDS | knowledge | mother-to-child-transmission | peri-natal transmission | resource-limited setting | Uganda [SUMMARY]
[CONTENT] highly active antiretroviral therapy | fertility desires | family planning | HIV/AIDS | knowledge | mother-to-child-transmission | peri-natal transmission | resource-limited setting | Uganda [SUMMARY]
[CONTENT] Adolescent | Adult | Antiretroviral Therapy, Highly Active | Cross-Sectional Studies | Family Characteristics | Female | HIV Infections | Health Knowledge, Attitudes, Practice | Humans | Infectious Disease Transmission, Vertical | Intention | Male | Reproductive Behavior | Rural Health | Socioeconomic Factors | Uganda | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Antiretroviral Therapy, Highly Active | Cross-Sectional Studies | Family Characteristics | Female | HIV Infections | Health Knowledge, Attitudes, Practice | Humans | Infectious Disease Transmission, Vertical | Intention | Male | Reproductive Behavior | Rural Health | Socioeconomic Factors | Uganda | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Antiretroviral Therapy, Highly Active | Cross-Sectional Studies | Family Characteristics | Female | HIV Infections | Health Knowledge, Attitudes, Practice | Humans | Infectious Disease Transmission, Vertical | Intention | Male | Reproductive Behavior | Rural Health | Socioeconomic Factors | Uganda | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Antiretroviral Therapy, Highly Active | Cross-Sectional Studies | Family Characteristics | Female | HIV Infections | Health Knowledge, Attitudes, Practice | Humans | Infectious Disease Transmission, Vertical | Intention | Male | Reproductive Behavior | Rural Health | Socioeconomic Factors | Uganda | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Antiretroviral Therapy, Highly Active | Cross-Sectional Studies | Family Characteristics | Female | HIV Infections | Health Knowledge, Attitudes, Practice | Humans | Infectious Disease Transmission, Vertical | Intention | Male | Reproductive Behavior | Rural Health | Socioeconomic Factors | Uganda | Young Adult [SUMMARY]
[CONTENT] hiv positive parent | fertility hiv status | childbearing hiv positive | fertility options hiv | hiv transmission mothers [SUMMARY]
[CONTENT] hiv positive parent | fertility hiv status | childbearing hiv positive | fertility options hiv | hiv transmission mothers [SUMMARY]
null
[CONTENT] hiv positive parent | fertility hiv status | childbearing hiv positive | fertility options hiv | hiv transmission mothers [SUMMARY]
[CONTENT] hiv positive parent | fertility hiv status | childbearing hiv positive | fertility options hiv | hiv transmission mothers [SUMMARY]
[CONTENT] hiv positive parent | fertility hiv status | childbearing hiv positive | fertility options hiv | hiv transmission mothers [SUMMARY]
[CONTENT] haart | hiv | children | participants | study | women | health | fertility | participants haart | positive [SUMMARY]
[CONTENT] haart | hiv | children | participants | study | women | health | fertility | participants haart | positive [SUMMARY]
null
[CONTENT] haart | hiv | children | participants | study | women | health | fertility | participants haart | positive [SUMMARY]
[CONTENT] haart | hiv | children | participants | study | women | health | fertility | participants haart | positive [SUMMARY]
[CONTENT] haart | hiv | children | participants | study | women | health | fertility | participants haart | positive [SUMMARY]
[CONTENT] hiv | haart | women | positive | african | hiv positive | countries | saharan | sub saharan | african countries [SUMMARY]
[CONTENT] study | hiv | health | questionnaire | related | variables | test | local | data | model [SUMMARY]
null
[CONTENT] counseling | hiv | persons couples | updated | provide | information | package | positive persons couples | service | hiv positive persons couples [SUMMARY]
[CONTENT] haart | hiv | children | study | participants | health | women | positive | participants haart | respondents [SUMMARY]
[CONTENT] haart | hiv | children | study | participants | health | women | positive | participants haart | respondents [SUMMARY]
[CONTENT] HAART ||| HAART | HAART | two [SUMMARY]
[CONTENT] two | HAART ||| 199 | Uganda ||| ||| HAART ||| HAART [SUMMARY]
null
[CONTENT] HAART ||| HAART | HAART ||| [SUMMARY]
[CONTENT] HAART ||| HAART | HAART | two ||| two | HAART ||| 199 | Uganda ||| ||| HAART ||| HAART ||| 1.08 | 95% | CI | 0.40 | HAART | 1.86 | 95% | CI | 0.82-4.21 ||| ||| ||| HAART ||| HAART | HAART ||| [SUMMARY]
[CONTENT] HAART ||| HAART | HAART | two ||| two | HAART ||| 199 | Uganda ||| ||| HAART ||| HAART ||| 1.08 | 95% | CI | 0.40 | HAART | 1.86 | 95% | CI | 0.82-4.21 ||| ||| ||| HAART ||| HAART | HAART ||| [SUMMARY]
Automated detection of smiles as discrete episodes.
36205621
Patients seeking restorative and orthodontic treatment expect an improvement in their smiles and oral health-related quality of life. Nonetheless, the qualitative and quantitative characteristics of dynamic smiles are yet to be understood.
BACKGROUND
A software script was developed using the Facial Action Coding System (FACS) and artificial intelligence to assess activations of (1) cheek raiser, a marker of smile genuineness; (2) lip corner puller, a marker of smile intensity; and (3) perioral lip muscles, a marker of lips apart. Thirty study participants were asked to view a series of amusing videos. A full-face video was recorded using a webcam. The onset and cessation of smile episodes were identified by two examiners trained with FACS coding. A Receiver Operating Characteristic (ROC) curve was then used to assess detection accuracy and optimise thresholding. The videos of participants were then analysed off-line to automatedly assess the features of smiles.
MATERIALS AND METHODS
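The methods summary above describes smiles being detected as discrete episodes from per-frame activations of the cheek raiser (AU6) and lip corner puller (AU12). The snippet below is a minimal Python sketch of that episode logic, not the authors' Java tool: a frame counts as smiling when both AUs exceed their thresholds, and an episode ends only when either AU stays below threshold for longer than a stand-by period, so shorter gaps are merged. The column names are assumed to follow OpenFace-style intensity output (AU06_r, AU12_r) and should be checked against the actual CSV header.

```python
# Minimal sketch (not the authors' implementation): segmenting smile episodes
# from an OpenFace-style frame table. Assumes AU intensity columns AU06_r and
# AU12_r and a timestamp column in seconds; thresholds and the 2 s stand-by
# rule follow the description in the paper.
import pandas as pd

AU6_THRESHOLD = 0.5
AU12_THRESHOLD = 1.5
STANDBY_SECONDS = 2.0  # gaps shorter than this are merged into one episode

def detect_smile_episodes(frames: pd.DataFrame) -> list[dict]:
    smiling = (frames["AU06_r"] > AU6_THRESHOLD) & (frames["AU12_r"] > AU12_THRESHOLD)
    episodes, start, last_active = [], None, None
    for t, active in zip(frames["timestamp"], smiling):
        if active:
            if start is None:       # onset of a new episode
                start = t
            last_active = t
        elif start is not None and t - last_active > STANDBY_SECONDS:
            episodes.append({"onset": start, "duration": last_active - start})
            start, last_active = None, None
    if start is not None:           # close an episode still open at video end
        episodes.append({"onset": start, "duration": last_active - start})
    return episodes

frames = pd.read_csv("participant_01_openface.csv")  # hypothetical output file
for i, ep in enumerate(detect_smile_episodes(frames), start=1):
    print(f"episode {i}: onset {ep['onset']:.2f} s, duration {ep['duration']:.2f} s")
```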
The area under the ROC curve for smile detection was 0.94, with a sensitivity of 82.9% and a specificity of 89.7%. The software correctly identified 90.0% of smile episodes. While watching the amusing videos, study participants smiled 1.6 (±0.8) times per minute.
RESULTS
Features of smiles such as frequency, duration, genuineness, and intensity can be automatedly assessed with an acceptable level of accuracy. The software can be used to investigate the impact of oral conditions and their rehabilitation on smiles.
CONCLUSIONS
[ "Humans", "Artificial Intelligence", "Quality of Life", "Facial Expression", "Smiling", "Lip" ]
9828522
INTRODUCTION
Smiling is a spontaneous facial expression occurring throughout everyday life, which varies largely between individuals. 1 While the interpretation of smiling may appear straightforward, it is actually one of the most complex facial expressions, and can be ambiguous. 2 Not only can smiles have different forms and meanings, but they are also found in different situations and as a consequence of different eliciting factors. 3 , 4 Smile analysis in dentistry has largely focused on static images. 5 Nonetheless, more recently, there has been a paradigm shift in treatment planning and smile rehabilitation from using static smiles to dynamic smiles; herein lies the ‘art of the smile’. 6 As the pursuit for better dentofacial aesthetics increases, it is essential to distinguish between posed and spontaneous smiles, differences between which are significant and can influence treatment planning and smile design. 5 Understanding the characteristics of different smiles and the associated age‐related changes in orofacial musculature, for example, is important to the decision‐making process to achieve ‘ideal’ tooth display. 7 However, this process should not be confined to the aesthetic elements alone but should also extend to understand whether an oral rehabilitation treatment, including orthodontics, actually affects the number and the way a patient smiles. 8 , 9 Smiling that depicts situations of spontaneous pure enjoyment or laughter are often referred to as the genuine ‘Duchenne’ smiles, to acknowledge the scientist who first described their features. 10 , 11 The Duchenne smile prompts a combined activation of the zygomaticus major and the orbicularis oculi muscles. This pattern of muscular activity distinguishes between genuine smiles and ‘social’ smiles, which are generally expressed during conditions of non‐enjoyment. 12 , 13 The identification of Duchenne smiles relies on subtle analysis of facial expressions. 14 The Facial Action Coding System (FACS) 15 is a popular and reliable method for detecting and quantifying the frequency of facial expressions from full‐face video recordings. 16 The FACS uses action units (AUs), which code for actions of individual or groups of muscles during facial expression. 15 The activation level of each AU is scored using intensity scores, ranging from ‘trace’ to ‘maximum’. According to FACS, the onset of a smile can be identified when the activation of the zygomaticus major displays traces of raised skin within the lower‐to‐middle nasolabial area and other traces of upwardly angled and elongated lip corners. 15 These muscle activities would increase in intensity until the smile apex is reached before reverting until no further traces of activation of the zygomaticus major could be recognised; hence, the smile offset is denoted. 15 The introduction of FACS has undoubtedly challenged the study of facial expressions as it allows real‐time assessment of emotions; however, its utilisation for manual detection and coding of AUs presents with limitations; (a) the need for experienced coders who are able to accurately identify on a frame‐wise basis the onset, apex, and offset of a smile, 16 (b) the coding process is extremely laborious, posing a huge challenge in large‐scale research, (c) susceptibility to observer biases 17 and high costs. 18 The observable limitations encountered with manual analyses of smiles has led to computing developments to automatedly detect dynamic smiling features. 
19 FACS focuses primarily on the identification of active target AUs frame-by-frame and does not include comprehensive analyses of smiling as discrete episodes, so that their individual features and patterns can be characterised. An episode-wise analysis of individual smiles would allow researchers to address questions such as how often, how long, how strongly, and how genuinely individuals smile under different experimental and/or situational factors, and what the impact of factors such as oral health-related conditions is on the way people smile. This would also pave the way to understanding the dynamic characteristics of smiles in oral rehabilitation patients 8 and assist in areas where smile rehabilitation through individualised muscle mimicry and training is demanded. 20 The aim of this study is to develop and validate a user-friendly software script, based on well-established pattern-recognition algorithms for tracking facial landmarks and facial AUs, so that discrete smile episodes can be analysed off-line from full-face videos and quantified in terms of frequency, duration, authenticity, and intensity of smile.
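To make the episode-level outcome measures named here concrete, the following is a hedged sketch of how frequency, duration, intensity, and genuineness summaries might be aggregated from episode records such as those produced by the segmentation sketch given earlier. The field names (mean_AU12, mean_AU6) and the example values are hypothetical; the relative smile time follows the paper's definition of total smiling duration divided by video length.

```python
# Sketch of episode-level summary measures (frequency, duration, genuineness,
# intensity), assuming per-episode records with onset, duration and mean AU
# activations, plus the length of the analysed video in seconds.
import statistics

def summarise_smiling(episodes: list[dict], video_seconds: float) -> dict:
    total_smile_time = sum(ep["duration"] for ep in episodes)
    return {
        "episodes_per_minute": len(episodes) / (video_seconds / 60.0),
        "relative_smile_time_pct": 100.0 * total_smile_time / video_seconds,
        "mean_duration_s": statistics.mean(ep["duration"] for ep in episodes),
        "mean_intensity_AU12": statistics.mean(ep["mean_AU12"] for ep in episodes),
        "mean_genuineness_AU6": statistics.mean(ep["mean_AU6"] for ep in episodes),
    }

# Hypothetical episodes for a 264-second stimulus (4 min 24 s without transitions)
episodes = [
    {"onset": 12.0, "duration": 8.5, "mean_AU12": 2.1, "mean_AU6": 1.2},
    {"onset": 55.3, "duration": 14.0, "mean_AU12": 2.8, "mean_AU6": 1.9},
]
print(summarise_smiling(episodes, video_seconds=264.0))
```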
null
null
RESULTS
Study participants were young adults, mostly Caucasian (>80%), about half of them were females, and had a broad range of malocclusions (Table 1). Demographic characteristics of the participants The distinct smile episodes that were manually identified frame‐wise by coders were used to build a ROC curve (Figure 2). The area under the curve of the ROC curve was 0.94, and the overall accuracy of smiling frames detection was 84.5%. The maximisation of Youden index indicated that detection accuracy was highest with thresholds of 0.5 for AU6 and 1.5 for AU12. These thresholds resulted in a sensitivity of 82.9% and a specificity of 89.7% and were used in subsequent episode‐wise analysis in the study sample. ROC curve based on two thresholds for AU6 and AU12 and frame‐wise detection of smiles After calibration of the algorithm, the true‐positive detection of individual smiling episodes was 90.0%. In addition, 11.3% of confounder tasks were detected as false positives. The tasks more often misclassified as smiles were mouth covering, which amounted to around one‐third of the false detections and yawning, which amounted to around 20% of the false positive detections. Study participants smiled approximately seven times according to the classifier, with each smile episode lasting approximately 10 s, or about one‐third of the duration of the humorous videos. The features of smiling episodes showed a large inter‐individual variation in the frequency, intensity, and duration of smiles (Figure 3). Three‐dimensional histogram depicting all the smiling episodes detected by duration and intensity of AU12. Descriptive statistics for the individual features of smiles, such as activation of specific AUs are given in Table 2. Activation of AU12, which is the main AU of the smile, 15 ranged from slight to pronounced, with intensity ranking. Some participants hardly showed teeth on smiling, while others showed teeth throughout the entire smile episode. Descriptive statistics for the features of smiling episodes detected from the 30 study participants, while watching the video footage
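The calibration step reported above (ROC analysis with Youden-index maximisation yielding thresholds of 0.5 for AU6 and 1.5 for AU12) can be illustrated with a simple grid search over candidate threshold pairs, scored against the coders' frame-wise labels. This is a hedged sketch only; the file name, column names, and candidate grid are assumptions, not the study's actual calibration code.

```python
# Sketch of the threshold calibration described above: frame-wise manual smile
# labels are compared with detections from candidate (AU6, AU12) threshold
# pairs, and the pair maximising Youden's J (sensitivity + specificity - 1)
# is retained. File and column names are assumed.
import numpy as np
import pandas as pd

frames = pd.read_csv("coded_frames.csv")  # hypothetical: AU06_r, AU12_r, manual_smile (0/1)
truth = frames["manual_smile"].to_numpy().astype(bool)

best = None
for t6 in np.arange(0.0, 5.1, 0.5):        # candidate AU6 thresholds
    for t12 in np.arange(0.0, 5.1, 0.5):   # candidate AU12 thresholds
        pred = ((frames["AU06_r"] > t6) & (frames["AU12_r"] > t12)).to_numpy()
        sensitivity = (pred & truth).sum() / truth.sum()
        specificity = (~pred & ~truth).sum() / (~truth).sum()
        j = sensitivity + specificity - 1.0
        if best is None or j > best["youden_j"]:
            best = {"AU6": t6, "AU12": t12, "sensitivity": sensitivity,
                    "specificity": specificity, "youden_j": j}

print(best)
```

A full ROC curve and its area under the curve could be traced from the same grid by plotting sensitivity against 1 - specificity for every threshold pair.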
CONCLUSIONS
Individual smile episodes and their quantitative features, such as frequency, duration, genuineness, and intensity can be automatedly assessed with an acceptable level of accuracy. The proposed approach can be used to investigate the impact of oral health and oral rehabilitation on smiles.
[ "INTRODUCTION", "Phase I: software script", "Phase 2: descriptive study and software validation", "Sample characteristics", "Experimental Setup", "Smile triggering video", "Procedure", "Data analysis and statistics", "AUTHOR CONTRIBUTIONS", "PEER REVIEW", "PEER REVIEW" ]
[ "Smiling is a spontaneous facial expression occurring throughout everyday life, which varies largely between individuals.\n1\n While the interpretation of smiling may appear straightforward, it is actually one of the most complex facial expressions, and can be ambiguous.\n2\n Not only can smiles have different forms and meanings, but they are also found in different situations and as a consequence of different eliciting factors.\n3\n, \n4\n\n\nSmile analysis in dentistry has largely focused on static images.\n5\n Nonetheless, more recently, there has been a paradigm shift in treatment planning and smile rehabilitation from using static smiles to dynamic smiles; herein lies the ‘art of the smile’.\n6\n As the pursuit for better dentofacial aesthetics increases, it is essential to distinguish between posed and spontaneous smiles, differences between which are significant and can influence treatment planning and smile design.\n5\n Understanding the characteristics of different smiles and the associated age‐related changes in orofacial musculature, for example, is important to the decision‐making process to achieve ‘ideal’ tooth display.\n7\n However, this process should not be confined to the aesthetic elements alone but should also extend to understand whether an oral rehabilitation treatment, including orthodontics, actually affects the number and the way a patient smiles.\n8\n, \n9\n\n\nSmiling that depicts situations of spontaneous pure enjoyment or laughter are often referred to as the genuine ‘Duchenne’ smiles, to acknowledge the scientist who first described their features.\n10\n, \n11\n The Duchenne smile prompts a combined activation of the zygomaticus major and the orbicularis oculi muscles. This pattern of muscular activity distinguishes between genuine smiles and ‘social’ smiles, which are generally expressed during conditions of non‐enjoyment.\n12\n, \n13\n The identification of Duchenne smiles relies on subtle analysis of facial expressions.\n14\n\n\nThe Facial Action Coding System (FACS)\n15\n is a popular and reliable method for detecting and quantifying the frequency of facial expressions from full‐face video recordings.\n16\n The FACS uses action units (AUs), which code for actions of individual or groups of muscles during facial expression.\n15\n The activation level of each AU is scored using intensity scores, ranging from ‘trace’ to ‘maximum’. 
According to FACS, the onset of a smile can be identified when the activation of the zygomaticus major displays traces of raised skin within the lower‐to‐middle nasolabial area and other traces of upwardly angled and elongated lip corners.\n15\n These muscle activities would increase in intensity until the smile apex is reached before reverting until no further traces of activation of the zygomaticus major could be recognised; hence, the smile offset is denoted.\n15\n The introduction of FACS has undoubtedly challenged the study of facial expressions as it allows real‐time assessment of emotions; however, its utilisation for manual detection and coding of AUs presents with limitations; (a) the need for experienced coders who are able to accurately identify on a frame‐wise basis the onset, apex, and offset of a smile,\n16\n (b) the coding process is extremely laborious, posing a huge challenge in large‐scale research, (c) susceptibility to observer biases\n17\n and high costs.\n18\n The observable limitations encountered with manual analyses of smiles has led to computing developments to automatedly detect dynamic smiling features.\n19\n\n\nFACS focuses primarily on the identification of active target AUs frame‐by‐frame and do not include comprehensive analyses of smiling as discrete episodes, so that their individual features and patterns can be characterised. An episode‐wise analysis of individual smiles would allow researchers to address questions such as how often, how long, how strong, and how genuinely do individuals smile under different experimental and/or situational factors, and what is the impact of factors such as, oral health‐related conditions, on the way people smile. This would also pave the way to understand the dynamic characteristics of smiles in oral rehabilitation patients\n8\n and assist in areas where smile rehabilitation through individualised muscle mimicry and training is demanded.\n20\n\n\nThe aim of this study is to develop and validate a user‐friendly software script, based on well‐established pattern‐recognition algorithms for tracking facial landmarks and facial AUs, so that discrete smile episodes can be analysed off‐line from full‐face videos and quantified in terms of frequency, duration, authenticity, and intensity of smile.", "OpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study.\n21\n This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis.\n21\n The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate.\n21\n, \n22\n, \n23\n The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. 
An example of facial landmarks identified during smiling is shown in Figure 1.\nExample frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5\nAs this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values of 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked to pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity.\n15\n, \n24\n AU25 was assigned a dichotomous value: either 0, indicating lips closed without teeth showing, or 1, indicating lips parted with teeth showing.\nA dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software has a stand‐alone, user‐friendly graphical interface, which allows users to open the output file of the OpenFace software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers.\nTo identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user.\nFor every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of the frame rate of the video analysed.\nIn order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video.", "Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE).\n25\n\n", "A convenience sample of thirty participants (16 females, 14 males; mean age 18.9 years, SD 2.2 years) was recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. 
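To make the Phase I episode‐detection logic described above more concrete, the following is a minimal sketch of how smiling episodes could be derived from an OpenFace output file. It is not the authors' actual Java script: the CSV column names (timestamp, AU06_r, AU12_r, AU25_c) reflect the naming used by OpenFace 2.x but should be checked against the real output, the thresholds correspond to the values validated later in this article (0.5 for AU6 and 1.5 for AU12), and all class, method, and file names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of episode-wise smile detection from an OpenFace output CSV.
 * Illustrative only, not the authors' script; see the assumptions stated above.
 * Usage: java SmileEpisodeDetector openface_output.csv
 */
public class SmileEpisodeDetector {

    /** One detected smiling episode and its summary features. */
    static class Episode {
        double onset, offset, meanAu6, meanAu12, teethFraction;
        double duration() { return offset - onset; }
    }

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        String[] header = lines.get(0).split(",");
        int iT = find(header, "timestamp"), i6 = find(header, "AU06_r");
        int i12 = find(header, "AU12_r"), i25 = find(header, "AU25_c");

        int n = lines.size() - 1;
        double[] t = new double[n], au6 = new double[n], au12 = new double[n], au25 = new double[n];
        for (int r = 0; r < n; r++) {
            String[] c = lines.get(r + 1).split(",");
            t[r] = Double.parseDouble(c[iT].trim());
            au6[r] = Double.parseDouble(c[i6].trim());
            au12[r] = Double.parseDouble(c[i12].trim());
            au25[r] = Double.parseDouble(c[i25].trim());
        }

        double th6 = 0.5, th12 = 1.5; // thresholds validated in this study
        double standBy = 2.0;         // gaps shorter than 2 s merge adjacent episodes

        // 1. Frame-wise classification: a frame counts as smiling only if BOTH AUs exceed threshold.
        boolean[] smiling = new boolean[n];
        for (int r = 0; r < n; r++) smiling[r] = au6[r] > th6 && au12[r] > th12;

        // 2. Group consecutive smiling frames into raw runs [firstFrame, lastFrame].
        List<int[]> runs = new ArrayList<>();
        int start = -1;
        for (int r = 0; r < n; r++) {
            if (smiling[r] && start < 0) start = r;
            if (start >= 0 && (!smiling[r] || r == n - 1)) {
                runs.add(new int[]{start, smiling[r] ? r : r - 1});
                start = -1;
            }
        }

        // 3. Merge runs separated by less than the stand-by time (the 2-s rule).
        List<int[]> episodes = new ArrayList<>();
        for (int[] run : runs) {
            if (!episodes.isEmpty() && t[run[0]] - t[episodes.get(episodes.size() - 1)[1]] < standBy) {
                episodes.get(episodes.size() - 1)[1] = run[1];
            } else {
                episodes.add(new int[]{run[0], run[1]});
            }
        }

        // 4. Per-episode features and whole-session summary measures.
        double totalSmileTime = 0, videoLength = t[n - 1] - t[0];
        int count = 0;
        for (int[] ep : episodes) {
            Episode e = new Episode();
            e.onset = t[ep[0]];
            e.offset = t[ep[1]];
            double s6 = 0, s12 = 0, s25 = 0;
            for (int r = ep[0]; r <= ep[1]; r++) { s6 += au6[r]; s12 += au12[r]; s25 += au25[r]; }
            int len = ep[1] - ep[0] + 1;
            e.meanAu6 = s6 / len;        // genuineness proxy (Duchenne marker)
            e.meanAu12 = s12 / len;      // intensity proxy
            e.teethFraction = s25 / len; // proportion of frames with lips apart (teeth shown)
            totalSmileTime += e.duration();
            count++;
            System.out.printf("Episode %d: onset %.2f s, duration %.2f s, AU6 %.2f, AU12 %.2f, teeth %.0f%%%n",
                    count, e.onset, e.duration(), e.meanAu6, e.meanAu12, 100 * e.teethFraction);
        }
        System.out.printf("Episodes per minute: %.2f, relative smile time: %.1f%%%n",
                count / (videoLength / 60.0), 100 * totalSmileTime / videoLength);
    }

    private static int find(String[] header, String name) {
        for (int i = 0; i < header.length; i++) if (header[i].trim().equals(name)) return i;
        throw new IllegalArgumentException("Column not found: " + name);
    }
}
```

Keeping the stand‐by time and both thresholds as parameters mirrors the script's option to change them from the user interface; the per‐episode means of AU6, AU12, and AU25 correspond to the genuineness, intensity, and tooth‐display measures defined above.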
Recruitment started in September 2020 and ended in December 2020.\nParticipants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach.\nThe occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship.\n26\n The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate.", "An Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip.\nEach participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position.\nFace lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording.", "Three amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research.\n27\n The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s).\n28\n The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions).\nFollowing the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. 
The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm.", "Each participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks.\nAfter viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles.\n29\n The second was the 60‐item IPIP–NEO–60 personality scale.\n30\n The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project.", "The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted.\nThe validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. 
The area under the curve (AUC) of the ROC curve and the overall accuracy of the test ((true positive frames + true negative frames)/total frames) were also calculated.\nAfter threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis.\nTo obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amounts of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25.\nAll the data were analysed in Excel (version 16.51, Microsoft Corporation) and SPSS (version 20.0, IBM Corporation).", "Hisham Mohammed and Reginald Kumar Jr were involved in conceptualisation, investigation, validation and writing the original draft. Hamza Bennani was involved in software and analysis. Jamin Halberstadt was involved in methodology, reviewing and editing, and supervision. Mauro Farella was involved in conceptualisation, methodology, reviewing and editing, analysis and supervision.", "PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378.", "The peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378." ]
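The frame‐wise threshold sweep and Youden‐index maximisation described in the data‐analysis subsection above can be sketched as follows. This is an illustrative outline rather than the authors' analysis code: it assumes the manually coded reference labels and the AU6/AU12 activations are already available as parallel per‐frame arrays, and the small arrays in main are synthetic placeholders, not study data.

```java
/**
 * Illustrative frame-wise ROC sweep over two activation thresholds (AU6 and AU12).
 * Reference labels stand for the manual coders' frame-wise decisions.
 */
public class ThresholdSweep {

    /** Returns {sensitivity, specificity, Youden index, accuracy} for one threshold pair. */
    static double[] evaluate(boolean[] reference, double[] au6, double[] au12,
                             double th6, double th12) {
        int tp = 0, tn = 0, fp = 0, fn = 0;
        for (int i = 0; i < reference.length; i++) {
            boolean predictedSmile = au6[i] > th6 && au12[i] > th12;
            if (predictedSmile && reference[i]) tp++;
            else if (predictedSmile) fp++;
            else if (reference[i]) fn++;
            else tn++;
        }
        double se = tp / (double) (tp + fn);   // true positive rate
        double sp = tn / (double) (tn + fp);   // true negative rate
        double youden = se + sp - 1;
        double accuracy = (tp + tn) / (double) reference.length;
        return new double[]{se, sp, youden, accuracy};
    }

    public static void main(String[] args) {
        // Tiny synthetic placeholders for the real frame-wise data.
        boolean[] ref = {false, false, true, true, true, false, true, false};
        double[] au6  = {0.1, 0.3, 0.8, 1.2, 0.9, 0.4, 0.6, 0.2};
        double[] au12 = {0.5, 1.0, 2.1, 2.8, 1.9, 1.2, 1.7, 0.6};

        double bestYouden = -1, bestTh6 = 0, bestTh12 = 0, bestAcc = 0;
        for (double th6 = 0; th6 <= 5; th6 += 0.05) {        // Th1: cheek raiser (AU6)
            for (double th12 = 0; th12 <= 5; th12 += 0.05) { // Th2: lip corner puller (AU12)
                double[] m = evaluate(ref, au6, au12, th6, th12);
                if (m[2] > bestYouden) {
                    bestYouden = m[2]; bestTh6 = th6; bestTh12 = th12; bestAcc = m[3];
                }
            }
        }
        System.out.printf("Best thresholds: AU6 > %.2f, AU12 > %.2f (Youden %.2f, accuracy %.1f%%)%n",
                bestTh6, bestTh12, bestYouden, 100 * bestAcc);
    }
}
```

Plotting sensitivity against (1 − specificity) for every threshold pair evaluated in the sweep yields the ROC curve described above; the pair with the highest Youden index is the operating point reported in the results.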
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Phase I: software script", "Phase 2: descriptive study and software validation", "Sample characteristics", "Experimental Setup", "Smile triggering video", "Procedure", "Data analysis and statistics", "RESULTS", "DISCUSSION", "CONCLUSIONS", "AUTHOR CONTRIBUTIONS", "CONFLICT OF INTEREST", "PEER REVIEW", "PEER REVIEW" ]
[ "Smiling is a spontaneous facial expression occurring throughout everyday life, which varies largely between individuals.\n1\n While the interpretation of smiling may appear straightforward, it is actually one of the most complex facial expressions, and can be ambiguous.\n2\n Not only can smiles have different forms and meanings, but they are also found in different situations and as a consequence of different eliciting factors.\n3\n, \n4\n\n\nSmile analysis in dentistry has largely focused on static images.\n5\n Nonetheless, more recently, there has been a paradigm shift in treatment planning and smile rehabilitation from using static smiles to dynamic smiles; herein lies the ‘art of the smile’.\n6\n As the pursuit for better dentofacial aesthetics increases, it is essential to distinguish between posed and spontaneous smiles, differences between which are significant and can influence treatment planning and smile design.\n5\n Understanding the characteristics of different smiles and the associated age‐related changes in orofacial musculature, for example, is important to the decision‐making process to achieve ‘ideal’ tooth display.\n7\n However, this process should not be confined to the aesthetic elements alone but should also extend to understand whether an oral rehabilitation treatment, including orthodontics, actually affects the number and the way a patient smiles.\n8\n, \n9\n\n\nSmiling that depicts situations of spontaneous pure enjoyment or laughter are often referred to as the genuine ‘Duchenne’ smiles, to acknowledge the scientist who first described their features.\n10\n, \n11\n The Duchenne smile prompts a combined activation of the zygomaticus major and the orbicularis oculi muscles. This pattern of muscular activity distinguishes between genuine smiles and ‘social’ smiles, which are generally expressed during conditions of non‐enjoyment.\n12\n, \n13\n The identification of Duchenne smiles relies on subtle analysis of facial expressions.\n14\n\n\nThe Facial Action Coding System (FACS)\n15\n is a popular and reliable method for detecting and quantifying the frequency of facial expressions from full‐face video recordings.\n16\n The FACS uses action units (AUs), which code for actions of individual or groups of muscles during facial expression.\n15\n The activation level of each AU is scored using intensity scores, ranging from ‘trace’ to ‘maximum’. 
According to FACS, the onset of a smile can be identified when the activation of the zygomaticus major displays traces of raised skin within the lower‐to‐middle nasolabial area and other traces of upwardly angled and elongated lip corners.\n15\n These muscle activities would increase in intensity until the smile apex is reached before reverting until no further traces of activation of the zygomaticus major could be recognised; hence, the smile offset is denoted.\n15\n The introduction of FACS has undoubtedly challenged the study of facial expressions as it allows real‐time assessment of emotions; however, its utilisation for manual detection and coding of AUs presents with limitations; (a) the need for experienced coders who are able to accurately identify on a frame‐wise basis the onset, apex, and offset of a smile,\n16\n (b) the coding process is extremely laborious, posing a huge challenge in large‐scale research, (c) susceptibility to observer biases\n17\n and high costs.\n18\n The observable limitations encountered with manual analyses of smiles has led to computing developments to automatedly detect dynamic smiling features.\n19\n\n\nFACS focuses primarily on the identification of active target AUs frame‐by‐frame and do not include comprehensive analyses of smiling as discrete episodes, so that their individual features and patterns can be characterised. An episode‐wise analysis of individual smiles would allow researchers to address questions such as how often, how long, how strong, and how genuinely do individuals smile under different experimental and/or situational factors, and what is the impact of factors such as, oral health‐related conditions, on the way people smile. This would also pave the way to understand the dynamic characteristics of smiles in oral rehabilitation patients\n8\n and assist in areas where smile rehabilitation through individualised muscle mimicry and training is demanded.\n20\n\n\nThe aim of this study is to develop and validate a user‐friendly software script, based on well‐established pattern‐recognition algorithms for tracking facial landmarks and facial AUs, so that discrete smile episodes can be analysed off‐line from full‐face videos and quantified in terms of frequency, duration, authenticity, and intensity of smile.", "The study included two phases. During the first phase, a software script was developed with the help of a computer scientist (HB) and extensively tested with ongoing feedback from a focus group represented by the authors and a few test volunteers. 
During the second phase, preliminary data were collected from a convenience sample of thirty study participants to optimise the performance of the algorithm for smiling detection and to identify optimal thresholds, so that the software's performance could be validated against two manual coders.\nPhase I: software script OpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study.\n21\n This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis.\n21\n The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate.\n21\n, \n22\n, \n23\n The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1.\nExample frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5\nAs this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values amounting to 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity.\n15\n, \n24\n AU25 was assigned a dichotomous value, which was either 0, indicating lips closed without teeth showing, or 1.\nA dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software had a stand‐alone user‐friendly graphic interface, which allows users to open the output file of OpenFace Software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers.\nTo identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user.\nFor every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. 
The onset and duration of individual episodes were given at a resolution equal to the inverse of frame rate of the video analysed.\nIn order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video.\nOpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study.\n21\n This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis.\n21\n The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate.\n21\n, \n22\n, \n23\n The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1.\nExample frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5\nAs this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values amounting to 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity.\n15\n, \n24\n AU25 was assigned a dichotomous value, which was either 0, indicating lips closed without teeth showing, or 1.\nA dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software had a stand‐alone user‐friendly graphic interface, which allows users to open the output file of OpenFace Software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers.\nTo identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. 
The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user.\nFor every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of frame rate of the video analysed.\nIn order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video.\nPhase 2: descriptive study and software validation Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE).\n25\n\n\nData were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE).\n25\n\n\nSample characteristics A convenience sample of thirty participants (16 females, 14 males; age 18.9 years SD 2.2 years) were recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020.\nParticipants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach.\nThe occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). 
DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship.\n26\n The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate.\nA convenience sample of thirty participants (16 females, 14 males; age 18.9 years SD 2.2 years) were recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020.\nParticipants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach.\nThe occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship.\n26\n The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate.\nExperimental Setup An Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip.\nEach participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position.\nFace lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording.\nAn Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip.\nEach participant was seated 60–70 cm away from the display monitor. 
The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position.\nFace lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording.\nSmile triggering video Three amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research.\n27\n The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s).\n28\n The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions).\nFollowing the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm.\nThree amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research.\n27\n The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s).\n28\n The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions).\nFollowing the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. 
These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm.\nProcedure Each participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks.\nAfter viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles.\n29\n The second was the 60‐item IPIP–NEO–60 personality scale.\n30\n The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project.\nEach participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks.\nAfter viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles.\n29\n The second was the 60‐item IPIP–NEO–60 personality scale.\n30\n The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project.\nData analysis and statistics The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted.\nThe validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. 
Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. The area under the curve (AUC) of the ROC curve and the overall accuracy of the test (sum of true positive frames + true negative frames/total frames) were also calculated.\nAfter threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis.\nTo obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amount of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25.\nAll the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation).\nThe full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted.\nThe validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. The area under the curve (AUC) of the ROC curve and the overall accuracy of the test (sum of true positive frames + true negative frames/total frames) were also calculated.\nAfter threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. 
The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis.\nTo obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amount of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25.\nAll the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation).", "OpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study.\n21\n This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis.\n21\n The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate.\n21\n, \n22\n, \n23\n The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1.\nExample frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5\nAs this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values amounting to 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity.\n15\n, \n24\n AU25 was assigned a dichotomous value, which was either 0, indicating lips closed without teeth showing, or 1.\nA dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software had a stand‐alone user‐friendly graphic interface, which allows users to open the output file of OpenFace Software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers.\nTo identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. 
In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user.\nFor every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of frame rate of the video analysed.\nIn order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video.", "Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE).\n25\n\n", "A convenience sample of thirty participants (16 females, 14 males; age 18.9 years SD 2.2 years) were recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020.\nParticipants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach.\nThe occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). 
DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship.\n26\n The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate.", "An Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip.\nEach participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position.\nFace lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording.", "Three amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research.\n27\n The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s).\n28\n The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions).\nFollowing the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm.", "Each participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. 
Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks.\nAfter viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles.\n29\n The second was the 60‐item IPIP–NEO–60 personality scale.\n30\n The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project.", "The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted.\nThe validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. The area under the curve (AUC) of the ROC curve and the overall accuracy of the test (sum of true positive frames + true negative frames/total frames) were also calculated.\nAfter threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis.\nTo obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amount of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25.\nAll the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation).", "Study participants were young adults, mostly Caucasian (>80%), about half of them were females, and had a broad range of malocclusions (Table 1).\nDemographic characteristics of the participants\nThe distinct smile episodes that were manually identified frame‐wise by coders were used to build a ROC curve (Figure 2). 
The area under the curve of the ROC curve was 0.94, and the overall accuracy of smiling frames detection was 84.5%. The maximisation of Youden index indicated that detection accuracy was highest with thresholds of 0.5 for AU6 and 1.5 for AU12. These thresholds resulted in a sensitivity of 82.9% and a specificity of 89.7% and were used in subsequent episode‐wise analysis in the study sample.\nROC curve based on two thresholds for AU6 and AU12 and frame‐wise detection of smiles\nAfter calibration of the algorithm, the true‐positive detection of individual smiling episodes was 90.0%. In addition, 11.3% of confounder tasks were detected as false positives. The tasks more often misclassified as smiles were mouth covering, which amounted to around one‐third of the false detections and yawning, which amounted to around 20% of the false positive detections.\nStudy participants smiled approximately seven times according to the classifier, with each smile episode lasting approximately 10 s, or about one‐third of the duration of the humorous videos. The features of smiling episodes showed a large inter‐individual variation in the frequency, intensity, and duration of smiles (Figure 3).\nThree‐dimensional histogram depicting all the smiling episodes detected by duration and intensity of AU12.\nDescriptive statistics for the individual features of smiles, such as activation of specific AUs are given in Table 2. Activation of AU12, which is the main AU of the smile,\n15\n ranged from slight to pronounced, with intensity ranking. Some participants hardly showed teeth on smiling, while others showed teeth throughout the entire smile episode.\nDescriptive statistics for the features of smiling episodes detected from the 30 study participants, while watching the video footage", "This paper presents a user‐friendly automated software script, which can detect and quantify smiles features in terms of their: (1) frequency, (2) onset and offset of each episode and overall episode duration, (3) peak intensities, (4) percentage of tooth display in each smiling episode and (5) smiling genuineness. In order to identify spontaneous smiles, we have introduced two detection thresholds based on the levels of activation of the cheek raiser (AU6, a Duchenne marker) and the lip corner puller (AU12). The findings indicate an acceptable detection accuracy of the proposed method, which can be used for an episode‐wise analysis of smiles. The software does not require calibration and is available upon request.\nTo the best of our knowledge, this is the first study that has used available libraries and FACS to present an automated episode‐wise analysis of smile episodes. This script uses a user‐friendly interface, which needs to be used in unison with OpenFace2.20 (an open‐source script), which any researcher can utilise. Moreover, the introduction of Artificial Intelligence (AI) to the field of dynamic smile analyses is fundamental. Hence, observer‐related biases are minimised with an expected reduction in time and labour associated with manual analyses.\n31\n\n\nThe measure of diagnostic accuracy includes both sensitivity and specificity values.\n32\n In our study, 82.9% sensitivity value demonstrates a high proportion of detected true smile episodes. In addition, the diagnostic specificity is confined to 89.7%, as presented in the ROC curve plot. 
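As a quick arithmetic check using only the values reported above, the Youden index at this operating point is J = Se + Sp − 1 = 0.829 + 0.897 − 1 ≈ 0.73, which is the quantity that was maximised when selecting the AU6 and AU12 thresholds of 0.5 and 1.5.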
Both the sensitivity and the specificity values align well with expectations reported in the area of automated facial expression recognition and dynamic analysis of human emotions.\n33\n, \n34\n Descriptive values from the automated analysis of the sample clips showed that participants smiled around two times per minute, on average for around 11 s per episode. Also, the mean intensity of the zygomaticus major activation (AU12) was 2.2 ± 0.4. These findings align with previous research in which participants who viewed a funny clip showed a mean duration of AU12 activation of 13.8 ± 12.7 s and a maximum intensity of 1.8 ± 1.1.\n35\n Further, a recent study reported that the mean intensity of AU12 was 4.1 during genuine smiling and 3.9 for posed smiles.\n36\n Although these AU12 intensity values seem comparable during both genuine and posed smiles, previous research has pointed to recognisable differences in AU6 activity between genuine and posed smiles, arguing that it would be difficult to deliberately fake a genuine smile.\n10\n, \n37\n However, there is also some evidence suggesting that Duchenne smiles merely reflect smile intensity rather than being a reliable and distinct indicator of smile authenticity.\n38\n Such discrepancies in the reported findings may be ascribed to the different methods used to trigger and to measure smiles, the social context, and the sociodemographic characteristics of the sample, which may all influence the features of smiling.\n39\n Nonetheless, these discrepancies still limit our understanding, recognition and differentiation of genuine and posed smiles.\nSmiling is an expression that can be triggered on demand, as well as spontaneously within a social context. In phase 2, the trigger video successfully elicited smiles in individuals regardless of their ethnic background, age, or malocclusion. This suggests that individuals are prone to smiling when a suitable trigger is used, even outside circumstances and situations involving social interaction.\n40\n In addition, although the participants were informed that they would be video recorded, masking the fact that the purpose of the recording was the assessment of smiling episodes appears to have succeeded in eliciting a natural response to the trigger (i.e. spontaneous smiling), as observation awareness is well known to be an important variable in smiling research.\n39\n\n\nIt is possible to detect various spontaneous facial expressions expressed in unscripted social contexts with automated recognition systems.\n41\n However, establishing reliable automated coding of discrete facial expressions is a challenging process.\n41\n Detection issues often arise when multiple AUs are involved; hence, recognising compound facial expressions, in which individuals combine various expressions, is daunting.\n42\n In addition, dynamic tracking and head orientation also pose obstacles to AI recognition.\n43\n In our study, we incorporated post‐video tasks involving different plausible confounders to tackle these issues. These tasks included: counting numbers, yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, and neutral expressions.\n44\n Our findings show that only 11.3% of the tasks were identified as false positives after calibration of the algorithm. Misclassified smiles were mostly associated with mouth covering and yawning. 
In turn, the proposed method for episode‐wise detection of smiles could serve as a cornerstone for better landmark feature extraction and expression recognition in future research.\n45\n\n\nThe present study has some limitations, which should be emphasised. Most obviously, the software was validated on a relatively small convenience sample of mostly Caucasian adolescents and young adults who spontaneously smiled. Further research is needed to investigate the quantitative features of posed and spontaneous smiles and to validate the proposed method in other samples, which were not analysed in the present study. Also, research is needed to determine the classifier's accuracy in larger and more heterogeneous samples, to investigate the effect of ethnicity, sex, age and other demographic characteristics on smiling and to increase the external validity of our findings.\n46\n In addition, it is important to note that the achieved accuracy was not perfect, although an AUC value close to 1 (0.94 in our report) is viewed as very high in terms of the discrimination performance of the software.\n47\n Further enhancements are plausible with future improvements in AI. These enhancements should also include other methods of quantifying muscular activity to objectively assess genuine smiles, such as electromyography (EMG) and wearable detection devices.\n48\n The possible effect of calibration on smile detection accuracy also needs to be further investigated in different samples and in comparison with other smile detection methods, to establish which is preferable in terms of accuracy, cost, labour, and overall handling.\nIn summary, this paper presents a novel automated episode‐wise quantitative assessment of smiling dynamics. The software can provide a quantitative analysis of the frequency, duration, intensity, and characteristics of smiles through a user‐friendly interface that is available for further utilisation in smile research. The proposed approach has several potential applications within dentistry. Firstly, it would offer researchers the opportunity to understand to what extent oral conditions, such as dental, oral, and facial anomalies, objectively influence smiles. This could then translate into investigations of the effect of corrective treatment, such as oral rehabilitation and dental and orthodontic treatment, on specific features of smiles. Furthermore, the proposed approach may be used to trigger and investigate spontaneous smiles, thus facilitating restorative treatment planning, as compared with the information depicted in static posed photographs.\nIn addition, the capability of the algorithm to detect and analyse observable data of individuals smiling under controlled conditions opens the door to addressing further challenges in other areas. For example, future research could target understanding the dynamics of smiling in real‐time conditions. Moreover, the dental literature is replete with research on the static features of smiles, while data on the dynamic features are scarce. It would be interesting and important to examine the relationship between malocclusion patterns, different orthodontic treatment modalities, and their effect on smiling from a dynamic standpoint. 
Based on the aforementioned points, future developments, and further implementation of the pattern‐recognition algorithm could be significant not only in dental disciplines, but also in psychology, sociology, and behavioural research.", "Individual smile episodes and their quantitative features, such as frequency, duration, genuineness, and intensity can be automatedly assessed with an acceptable level of accuracy. The proposed approach can be used to investigate the impact of oral health and oral rehabilitation on smiles.", "Hisham Mohammed and Reginald Kumar Jr involved in conceptualisation, investigation, validation and writing the original draft. Hamza Bennani involved in software and analysis. Jamin Halberstadt involved in methodology, reviewing and editing and supervision. Mauro Farella involved in conceptualisation, methodology, reviewing and editing, analysis and supervision.", "All authors declare that they have no competing interest.", "PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378.\nThe peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378.", "The peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378." ]
[ null, "materials-and-methods", null, null, null, null, null, null, null, "results", "discussion", "conclusions", null, "COI-statement", null, null ]
[ "orthodontics", "smiling", "validation studies" ]
INTRODUCTION: Smiling is a spontaneous facial expression occurring throughout everyday life, which varies largely between individuals. 1 While the interpretation of smiling may appear straightforward, it is actually one of the most complex facial expressions, and can be ambiguous. 2 Not only can smiles have different forms and meanings, but they are also found in different situations and as a consequence of different eliciting factors. 3 , 4 Smile analysis in dentistry has largely focused on static images. 5 Nonetheless, more recently, there has been a paradigm shift in treatment planning and smile rehabilitation from using static smiles to dynamic smiles; herein lies the ‘art of the smile’. 6 As the pursuit for better dentofacial aesthetics increases, it is essential to distinguish between posed and spontaneous smiles, differences between which are significant and can influence treatment planning and smile design. 5 Understanding the characteristics of different smiles and the associated age‐related changes in orofacial musculature, for example, is important to the decision‐making process to achieve ‘ideal’ tooth display. 7 However, this process should not be confined to the aesthetic elements alone but should also extend to understand whether an oral rehabilitation treatment, including orthodontics, actually affects the number and the way a patient smiles. 8 , 9 Smiling that depicts situations of spontaneous pure enjoyment or laughter are often referred to as the genuine ‘Duchenne’ smiles, to acknowledge the scientist who first described their features. 10 , 11 The Duchenne smile prompts a combined activation of the zygomaticus major and the orbicularis oculi muscles. This pattern of muscular activity distinguishes between genuine smiles and ‘social’ smiles, which are generally expressed during conditions of non‐enjoyment. 12 , 13 The identification of Duchenne smiles relies on subtle analysis of facial expressions. 14 The Facial Action Coding System (FACS) 15 is a popular and reliable method for detecting and quantifying the frequency of facial expressions from full‐face video recordings. 16 The FACS uses action units (AUs), which code for actions of individual or groups of muscles during facial expression. 15 The activation level of each AU is scored using intensity scores, ranging from ‘trace’ to ‘maximum’. According to FACS, the onset of a smile can be identified when the activation of the zygomaticus major displays traces of raised skin within the lower‐to‐middle nasolabial area and other traces of upwardly angled and elongated lip corners. 15 These muscle activities would increase in intensity until the smile apex is reached before reverting until no further traces of activation of the zygomaticus major could be recognised; hence, the smile offset is denoted. 15 The introduction of FACS has undoubtedly challenged the study of facial expressions as it allows real‐time assessment of emotions; however, its utilisation for manual detection and coding of AUs presents with limitations; (a) the need for experienced coders who are able to accurately identify on a frame‐wise basis the onset, apex, and offset of a smile, 16 (b) the coding process is extremely laborious, posing a huge challenge in large‐scale research, (c) susceptibility to observer biases 17 and high costs. 18 The observable limitations encountered with manual analyses of smiles has led to computing developments to automatedly detect dynamic smiling features. 
19 FACS focuses primarily on the identification of active target AUs frame‐by‐frame and do not include comprehensive analyses of smiling as discrete episodes, so that their individual features and patterns can be characterised. An episode‐wise analysis of individual smiles would allow researchers to address questions such as how often, how long, how strong, and how genuinely do individuals smile under different experimental and/or situational factors, and what is the impact of factors such as, oral health‐related conditions, on the way people smile. This would also pave the way to understand the dynamic characteristics of smiles in oral rehabilitation patients 8 and assist in areas where smile rehabilitation through individualised muscle mimicry and training is demanded. 20 The aim of this study is to develop and validate a user‐friendly software script, based on well‐established pattern‐recognition algorithms for tracking facial landmarks and facial AUs, so that discrete smile episodes can be analysed off‐line from full‐face videos and quantified in terms of frequency, duration, authenticity, and intensity of smile. MATERIALS AND METHODS: The study included two phases. During the first phase, a software script was developed with the help of a computer scientist (HB) and extensively tested with ongoing feedback from a focus group represented by the authors and a few test volunteers. During the second phase, preliminary data were collected from a convenience sample of thirty study participants to optimise the performance of the algorithm for smiling detection and to identify optimal thresholds, so that the software's performance could be validated against two manual coders. Phase I: software script OpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study. 21 This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis. 21 The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate. 21 , 22 , 23 The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1. Example frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5 As this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. 
The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values amounting to 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity. 15 , 24 AU25 was assigned a dichotomous value, which was either 0, indicating lips closed without teeth showing, or 1. A dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software had a stand‐alone user‐friendly graphic interface, which allows users to open the output file of OpenFace Software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers. To identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user. For every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of frame rate of the video analysed. In order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video. OpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study. 21 This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis. 21 The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate. 21 , 22 , 23 The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1. Example frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5 As this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. 
AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values amounting to 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity. 15 , 24 AU25 was assigned a dichotomous value, which was either 0, indicating lips closed without teeth showing, or 1. A dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software had a stand‐alone user‐friendly graphic interface, which allows users to open the output file of OpenFace Software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers. To identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user. For every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of frame rate of the video analysed. In order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video. Phase 2: descriptive study and software validation Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE). 25 Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE). 
25 Sample characteristics A convenience sample of thirty participants (16 females, 14 males; age 18.9 years SD 2.2 years) were recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020. Participants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach. The occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship. 26 The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate. A convenience sample of thirty participants (16 females, 14 males; age 18.9 years SD 2.2 years) were recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020. Participants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach. The occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship. 
26 The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate. Experimental Setup An Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip. Each participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position. Face lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording. An Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip. Each participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position. Face lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording. Smile triggering video Three amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research. 27 The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s). 28 The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions). Following the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm. 
Three amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research. 27 The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s). 28 The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions). Following the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm. Procedure Each participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks. After viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles. 29 The second was the 60‐item IPIP–NEO–60 personality scale. 30 The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project. Each participant's involvement in the study took place in a single session. At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks. After viewing the video, each participant was asked to fill in two questionnaires. 
The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles. 29 The second was the 60‐item IPIP–NEO–60 personality scale. 30 The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project. Data analysis and statistics The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted. The validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. The area under the curve (AUC) of the ROC curve and the overall accuracy of the test (sum of true positive frames + true negative frames/total frames) were also calculated. After threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis. To obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amount of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25. All the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation). The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted. 
The validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. The area under the curve (AUC) of the ROC curve and the overall accuracy of the test (sum of true positive frames + true negative frames/total frames) were also calculated. After threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis. To obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amount of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25. All the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation). Phase I: software script: OpenFace2.2.0 was used as a platform to extract information about facial AUs, which were considered relevant for this study. 21 This is an open‐source automatic facial recognition software intended to be used by researchers interested in machine learning, affective computing, and facial behaviour analysis. 21 The software is an update of a previous version of a facial behaviour analysis toolkit, which is based on convolutional neural networks and allows automated identification of 68 facial landmarks at any frame rate. 21 , 22 , 23 The software's output includes a timestamp, quantitative information about all facial landmarks, head posture, eye gaze, activation levels of facial AUs, and three‐dimensional (3D) coordinates of individual facial landmarks, as detected in each frame of the analysed video. AUs represent individual components of muscle movements, whose activation is identified by monitoring the 3D displacement of facial landmarks, with specific sets of landmarks corresponding to each AU. The software also generates videos showing dynamic changes of identified facial landmarks, and 3D information on the gaze vector and the head posture. An example of facial landmarks identified during smiling is shown in Figure 1. Example frame from smiling study participants, with activation of AU6 > 0.5 and AU12 > 1.5 As this study focused on smiling, the relevant AUs were: AU6, AU12, and AU25. AU6 (‘cheek raiser’) tracks the activity of the orbicularis oculi muscle, pars orbitalis, and is generally considered a marker of smile genuineness (i.e. a Duchenne marker). 
AU12 (‘lip corner puller’) tracks the activity of the zygomaticus major muscle. AU25 (‘lips apart’) tracks the activity of the depressor labii inferioris muscle. The intensities of AU6 and AU12 activation were automatically coded by the software using a six‐point ordinal scale (0–5), with values amounting to 1–2 indicating weak (trace) to slight activation, a value of 3 indicating marked pronounced activity, a value of 4 indicating extreme activation, and a value of 5 indicating maximum possible intensity. 15 , 24 AU25 was assigned a dichotomous value, which was either 0, indicating lips closed without teeth showing, or 1. A dedicated software script was developed in Java (Oracle JDK 1.8.0_111). The software had a stand‐alone user‐friendly graphic interface, which allows users to open the output file of OpenFace Software and to detect all the smiling episodes occurring throughout an entire video or within a well‐defined portion of a video, as defined by the start and end frame numbers. To identify the onset of a smiling episode, both AU6 and AU12 had to be above the specified thresholds. The end of the smiling episode was identified by a subthreshold activation of either AU for longer than 2 s. In effect, this means that when two or more smiling episodes were separated by less than 2 s, they were merged into a single episode. The stand‐by time could be changed by the user. For every smiling episode, the software assigned a progressive count, the onset time, the duration, and the mean activation of AU6 and AU12 across the entire episode. The onset and duration of individual episodes were given at a resolution equal to the inverse of frame rate of the video analysed. In order to assign a clinically meaningful value to AU25, this was reported as the proportion of time teeth were shown during a given smiling episode. For example, an activity value of 50% indicated that teeth were visible during half of the episode. Additional outcome measurements included the number of smile episodes per minute and the relative smile time (%). This was calculated as the proportion of time that each individual had smiled while watching the video clip, by summing the durations of all smiling episodes and then dividing the total duration of smiling by the length of the video. Phase 2: descriptive study and software validation: Data were collected at the Craniofacial Clinical Research Laboratory at the Faculty of Dentistry, University of Otago, under local Ethics Committee approval number H19/160. All participants enrolled in this study agreed to participate and signed a written informed consent form. The report of phase 2 conforms to the guidelines for reporting observational studies (STROBE). 25 Sample characteristics: A convenience sample of thirty participants (16 females, 14 males; age 18.9 years SD 2.2 years) were recruited as part of a larger project aiming to investigate the impact of oral health, psychological traits, and sociodemographic variables on smiling behaviour. Recruitment started in September 2020 and ended in December 2020. Participants aged 16–22 years were recruited through local public advertisements, including university mailing lists, social media, flyers, and word of mouth. Exclusion criteria were: (a) cleft lip/ palate or other craniofacial syndromes; (b) severe periodontitis affecting front teeth; (c) history of major psychiatric disorders; (d) Bell's palsy; (e) removable dentures; (f) enamel dysplasia or severe stains affecting front teeth; (g) history of dysmorphophobia. 
Wearing eyeglasses was not set as an exclusion criterion; however, only one participant requested to wear glasses while watching the videos, and this apparently did not interfere with landmark identification. The sample investigated in this study represented a randomly selected subset of around a hundred study participants. The large sample exhibited a variety of occlusal conditions and is part of another related investigation examining the influence of malocclusion on smiling features through the proposed approach. The occlusal characteristics of study participants were assessed using the Dental Aesthetic Index (DAI). DAI is a popular tool used in epidemiology to assess a specific set of occlusal traits, such as missing anterior teeth, crowding and spacing in the incisal region, midline diastema, overjet, anterior open bite, incisor irregularity, and molar relationship. 26 The overall DAI assessment scores of the weighted components are summed with a constant of 13 to produce the final DAI aggregate. Experimental Setup: An Ultra High‐Definition web camera (Logitech BRIO 4K Ultra High‐Definition Webcam), with resolution set to 4096 × 2160 pixels and frame rate set at 30 frames per second was secured atop a 27‐inch Dell Ultrasharp U2715H computer monitor with a resolution of 2560 × 1440 pixels used to showcase a video clip. Each participant was seated 60–70 cm away from the display monitor. The height of the monitor was adjusted so that the participant's eyes were aligned at a point corresponding to the middle of the screen when the participant's head was in natural head position. Face lighting was individually optimised by a ring light (APEXEL 10″ 26 cm LED Selfie Circle Ring, Apexel), which was also secured on the back of the screen. A neutral background was used to avoid light reflections and object interferences, which could affect off‐line analyses of the video. The room light was switched off during the entire recording. Smile triggering video: Three amusing video clips were identified via a small pilot study by the focus group previously described. The first clip showed an episode of Mr Bean (Mr Bean Rides Again, Act 5: The Flight; 3 min), whose character is widely used as a trigger stimulus in smile research. 27 The next two clips included a non‐stop laughter of a cute baby (47 s) and Juan Joya Borja's viral laughing video widely known as the ‘Spanish laughing guy’ in a televised episode of Ratones Coloraos, which first aired in 2001 but went viral in 2007 (46 s). 28 The three clips were separated by fade‐outs and merged into a single video, 4 min and 33 s in length (4 min and 24 s without transitions). Following the amusing video clips, the video presented instructions for completing a series of tasks, with time kept by a countdown timer and progress bar. The tasks involved initiating a series of jaw movements and facial expressions that could confound identification of smiles: speaking (counting 1–10), yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, smiling, and neutral expressions. The speaking task lasted 10 s while all other tasks lasted six seconds, with a six‐second inter‐task interval. All tasks were administered once, except for smiling, which was repeated three times. These tasks allowed a precise tuning of the machine learning models that were applied to detect smiling episodes in the video and individual‐specific calibration of the algorithm. Procedure: Each participant's involvement in the study took place in a single session. 
At the start of this session, each participant was checked against the inclusion/exclusion criteria, and the occlusal characteristics were scored using the DAI index. The participants were then given an overview of the research project and signed the written consents for participation. To elicit natural responses and trigger spontaneous smiling reactions during the video recording, the participants were not told that the main outcomes of the study were the features of their smiles. Afterwards, each participant was left alone in the recording room and was requested to view the video clip and then perform the follow‐up tasks. After viewing the video, each participant was asked to fill in two questionnaires. The first was a 12‐item Smile Aesthetics‐Related Quality of Life (SERQoL) questionnaire relating to three dimensions of the psychosocial impact of smiles. 29 The second was the 60‐item IPIP–NEO–60 personality scale. 30 The results of these questionnaires were the subject of another investigation and are not analysed in this report. Each participant was given a $20 voucher as reimbursement for participation in this project. Data analysis and statistics: The full‐face videos were reviewed and coded frame‐wise by two examiners (HM and RK), who were instructed to identify each distinct smiling episode (i.e. preceded and followed by a smile‐free period of at least two seconds) in each study participant. The frames corresponding to the onset and cessation of each smiling episode were identified and noted between the two coders who viewed the full‐face videos within the same setting until a consensus between the two coders was reached. When consensus was not reached, a third coder (MF) was consulted. The validity of the smiling detection software was assessed by calculating receiver operating characteristics (ROC) curves, using the examiner‐coded smiles as a reference standard and classification variable. ROC curves were assessed frame‐by‐frame timewise for each smile and smile‐free portions. Sensitivity (Se = true positive rate) and specificity (Sp = true negative rate) were calculated frame‐wise and maximised using Youden index (Se +Sp −1). The ROC curve was plotted by false positive rate (1‐Sp) on the x‐axis and the true positive rate (Se) on the y‐axis. The ROC curve was plotted by varying two different thresholds, the first one (Th1) representing the activation level of the cheek raiser muscle (AU6) and the second one (Th2) representing the activation level of the lip corner puller muscle (AU12). The two thresholds Th1 and Th2 were varied stepwise using a 0.05 step for both thresholds. The area under the curve (AUC) of the ROC curve and the overall accuracy of the test (sum of true positive frames + true negative frames/total frames) were also calculated. After threshold optimisation, the software script was run on the entire recording (including the post‐video tasks) to identify smiling episodes and to investigate possible misclassifications (false positives) of confounding tasks as smiles. The three smiling tasks, part of the second section of the video, were excluded from the confounds analysis. To obtain an estimate of smile genuineness (0–5), intensity (0–5), and teeth exposure (%), the amount of activation of AU6, AU12, and AU25 were averaged across each episode. The outcome variables considered in this study were the number of smiling episodes per session, the mean and cumulative duration of smiling episodes, and the mean activation of AU6, AU12, and AU25. 
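To make the episode‐wise computation concrete, the sketch below shows how the per-frame OpenFace output could be segmented into smiling episodes using the rules described above (both AU6 and AU12 above threshold to open an episode, a subthreshold gap longer than the 2‐s stand‐by time to close it, shorter gaps merged), and how the per-episode and per-session outcome variables could then be derived. It is a minimal Java 8 style illustration under those assumptions, not the authors' script; the names (segment, summarise, sessionSummary) and the array-based input format are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of episode-wise segmentation and summary measures, assuming
 * per-frame AU intensities exported by OpenFace. Not the authors' script.
 */
public class SmileEpisodes {

    public static class Episode {
        public int startFrame, endFrame;
        public double meanAu6;        // episode-wise genuineness estimate (0-5)
        public double meanAu12;       // episode-wise intensity estimate (0-5)
        public double teethExposure;  // % of frames in the episode with AU25 active
    }

    /**
     * A frame is "smiling" when AU6 > th1 and AU12 > th2; an episode ends once
     * activation has stayed below threshold for longer than standBySeconds
     * (default 2 s), which also merges episodes separated by shorter gaps.
     */
    public static List<Episode> segment(double[] au6, double[] au12, double[] au25,
                                        double th1, double th2,
                                        double fps, double standBySeconds) {
        List<Episode> episodes = new ArrayList<Episode>();
        int maxGap = (int) Math.round(standBySeconds * fps);
        int start = -1, lastActive = -1;
        for (int i = 0; i < au6.length; i++) {
            boolean active = au6[i] > th1 && au12[i] > th2;
            if (active) {
                if (start < 0) start = i;
                lastActive = i;
            } else if (start >= 0 && i - lastActive > maxGap) {
                episodes.add(summarise(au6, au12, au25, start, lastActive));
                start = -1;
            }
        }
        if (start >= 0) episodes.add(summarise(au6, au12, au25, start, lastActive));
        return episodes;
    }

    private static Episode summarise(double[] au6, double[] au12, double[] au25,
                                     int start, int end) {
        Episode e = new Episode();
        e.startFrame = start;
        e.endFrame = end;
        double sum6 = 0, sum12 = 0;
        int teethFrames = 0;
        for (int i = start; i <= end; i++) {
            sum6 += au6[i];
            sum12 += au12[i];
            if (au25[i] > 0) teethFrames++;   // AU25 coded dichotomously (lips apart)
        }
        int n = end - start + 1;
        e.meanAu6 = sum6 / n;
        e.meanAu12 = sum12 / n;
        e.teethExposure = 100.0 * teethFrames / n;
        return e;
    }

    /** Session-level outcomes: {count, mean duration (s), cumulative duration (s),
     *  episodes per minute, relative smile time (%)}. */
    public static double[] sessionSummary(List<Episode> episodes, double fps, double videoSeconds) {
        double cumulative = 0;
        for (Episode e : episodes) cumulative += (e.endFrame - e.startFrame + 1) / fps;
        double meanDuration = episodes.isEmpty() ? 0 : cumulative / episodes.size();
        double perMinute = episodes.size() / (videoSeconds / 60.0);
        double relativeSmileTime = 100.0 * cumulative / videoSeconds;
        return new double[] {episodes.size(), meanDuration, cumulative, perMinute, relativeSmileTime};
    }
}
```

The teeth-exposure value follows the convention described earlier: the percentage of frames within the episode in which AU25 is coded as active, so that 50% means teeth were visible for half of the episode.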
All the data were analysed in Excel (Version 16.51, Microsoft Corporation) and SPSS (version 20.0 IBM Corporation). RESULTS: Study participants were young adults, mostly Caucasian (>80%), about half of them were females, and had a broad range of malocclusions (Table 1). Demographic characteristics of the participants The distinct smile episodes that were manually identified frame‐wise by coders were used to build a ROC curve (Figure 2). The area under the curve of the ROC curve was 0.94, and the overall accuracy of smiling frames detection was 84.5%. The maximisation of Youden index indicated that detection accuracy was highest with thresholds of 0.5 for AU6 and 1.5 for AU12. These thresholds resulted in a sensitivity of 82.9% and a specificity of 89.7% and were used in subsequent episode‐wise analysis in the study sample. ROC curve based on two thresholds for AU6 and AU12 and frame‐wise detection of smiles After calibration of the algorithm, the true‐positive detection of individual smiling episodes was 90.0%. In addition, 11.3% of confounder tasks were detected as false positives. The tasks more often misclassified as smiles were mouth covering, which amounted to around one‐third of the false detections and yawning, which amounted to around 20% of the false positive detections. Study participants smiled approximately seven times according to the classifier, with each smile episode lasting approximately 10 s, or about one‐third of the duration of the humorous videos. The features of smiling episodes showed a large inter‐individual variation in the frequency, intensity, and duration of smiles (Figure 3). Three‐dimensional histogram depicting all the smiling episodes detected by duration and intensity of AU12. Descriptive statistics for the individual features of smiles, such as activation of specific AUs are given in Table 2. Activation of AU12, which is the main AU of the smile, 15 ranged from slight to pronounced, with intensity ranking. Some participants hardly showed teeth on smiling, while others showed teeth throughout the entire smile episode. Descriptive statistics for the features of smiling episodes detected from the 30 study participants, while watching the video footage DISCUSSION: This paper presents a user‐friendly automated software script, which can detect and quantify smiles features in terms of their: (1) frequency, (2) onset and offset of each episode and overall episode duration, (3) peak intensities, (4) percentage of tooth display in each smiling episode and (5) smiling genuineness. In order to identify spontaneous smiles, we have introduced two detection thresholds based on the levels of activation of the cheek raiser (AU6, a Duchenne marker) and the lip corner puller (AU12). The findings indicate an acceptable detection accuracy of the proposed method, which can be used for an episode‐wise analysis of smiles. The software does not require calibration and is available upon request. To the best of our knowledge, this is the first study that has used available libraries and FACS to present an automated episode‐wise analysis of smile episodes. This script uses a user‐friendly interface, which needs to be used in unison with OpenFace2.20 (an open‐source script), which any researcher can utilise. Moreover, the introduction of Artificial Intelligence (AI) to the field of dynamic smile analyses is fundamental. Hence, observer‐related biases are minimised with an expected reduction in time and labour associated with manual analyses. 
31 The measure of diagnostic accuracy includes both sensitivity and specificity values. 32 In our study, 82.9% sensitivity value demonstrates a high proportion of detected true smile episodes. In addition, the diagnostic specificity is confined to 89.7%, as presented in the ROC curve plot. In summation, both values align well with stipulated expectations in the area of automated facial expression recognition and dynamic analysis of human emotions. 33 , 34 On the other hand, descriptive values from automated analysis of sample clips showed that participants smiled around two times per minute, on average for around 11 s per episode. Also, the mean intensity of the zygomaticus major activation (AU12) was 2.2 ± 0.4. These findings align with previous research of participants who viewed a funny clip with a mean duration of AU12 activation (13.8 ± 12.7 s) and a maximum intensity of 1.8 ± 1.1. 35 Further, a recent study reported that the mean intensity of AU12 was 4.1 during genuine smiling and 3.9 for posed smiles. 36 Though these AU12 intensity values seem comparable during both genuine and posed smiles, previous research pointed out to recognisable differences with respect to AU6 activity during genuine and posed smiles placing an argument that it would be difficult to deliberately fake a genuine smile. 10 , 37 However, there is also some evidence suggesting that Duchenne smiles are merely traces of smile intensity rather than a reliable and distinct indicator of smile authenticity. 38 Such discrepancies in the reported findings may be ascribed to the different methods to trigger and to measure smiles, the social context, and sociodemographic characteristics of the sample, which may all influence the features of smiling. 39 However, they still do impose limitations on our understanding, recognition and differentiation of genuine and posed smiles. Smiling is an expression that can be triggered on demand, as well as spontaneously within a social context. In phase 2, the trigger video had successfully elicited smiles in individuals regardless of their ethnic background, age, or malocclusion. This suggests that individuals are prone to smiling when a suitable trigger is used aside from circumstances and situations where social integration is seen. 40 In addition, while the participants were informed about the video recording process, it can be argued that masking the purpose of the video recording being the assessment of smiling episodes succeeded in rendering a natural response to the trigger (i.e. spontaneous smiling) as observation awareness is well‐known to be an important variable in smiling research. 39 It is possible to detect various spontaneous facial expressions expressed in unscripted social context with automated recognition systems. 41 However, establishing reliable automated coding of discrete facial expressions is a challenging process. 41 Detection issues often arise when multiple AUs are involved; hence, recognising compound facial expressions where individuals combine various expressions, is daunting. 42 In addition, dynamic tracking and head orientation also pose obstacles to AI recognition. 43 In our study, we incorporated post‐video scripts of different plausible confounders to tackle these issues. These tasks included: counting numbers, yawning, coughing, mouth covering; and posing anger, sadness, fear, surprise, disgust, and neutral expressions. 44 Our findings show that only 11.3% of the tasks were identified as false positives after calibration of the algorithm. 
Misclassified smiles were mostly associated with mouth covering and yawning. In turn, the proposed method for episode-wise detection of smiles could serve as a cornerstone for better landmark feature extraction and expression recognition in future research. 45 The present study has some limitations, which should be emphasised. Most obviously, the software was validated on a relatively small convenience sample of mostly Caucasian adolescents and young adults who smiled spontaneously. Further research is needed to investigate the quantitative features of posed and spontaneous smiles and to validate the proposed method in samples that were not analysed in the present study. Research is also needed to determine the classifier's accuracy in larger and more heterogeneous samples, to investigate the effect of ethnicity, sex, age and other demographic characteristics on smiling, and to increase the external validity of our findings. 46 In addition, it is important to note that the achieved accuracy was not perfect, although an AUC value close to 1 (0.94 in our study) is regarded as very high in terms of the discrimination performance of the software. 47 Further enhancements are plausible with future improvements in AI. Such enhancements should also include other methods of quantifying muscular activity to objectively assess genuine smiles, such as electromyography (EMG) and wearable detection devices. 48 The possible effect of calibration on smile detection accuracy also needs to be investigated further in different samples and compared with other smile detection methods, to establish which performs better in terms of accuracy, cost, labour and overall handling. In summary, this paper presents a novel automated episode-wise quantitative assessment of smiling dynamics. The software provides a quantitative analysis of the frequency, duration, intensity and characteristics of smiles through a user-friendly interface that is available for further use in smile research. The proposed approach has several potential applications within dentistry. Firstly, it would offer researchers the opportunity to understand to what extent oral conditions, such as dental, oral and facial anomalies, objectively influence smiles. This could then translate into investigations of the effect of corrective treatment, through oral rehabilitation and dental or orthodontic treatment, on specific features of smiles. Furthermore, the proposed approach may be used to trigger and investigate spontaneous smiles, thereby informing restorative treatment planning with richer information than static posed photographs can provide. In addition, the capability of the algorithm to detect and analyse observable data of individuals smiling under controlled conditions opens the door to addressing further challenges in other areas; for example, future research could target understanding the dynamics of smiling in real-time conditions. Moreover, the dental literature is replete with research on the static features of smiles, while data on their dynamic features are scarce. It would be interesting and important to examine the relationship between malocclusion patterns, different orthodontic treatment modalities and their effect on smiling from a dynamic standpoint. Based on the aforementioned points, future development and further implementation of the pattern-recognition algorithm could be significant not only in dental disciplines, but also in psychology, sociology and behavioural research.
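As a companion to the episode-wise quantities summarised above (how often, how long, how intensely and how genuinely participants smile), the sketch below shows one plausible way such features could be derived from frame-wise AU intensities. It is an illustrative approximation, not the published software: the column names, the 30 fps frame rate, the minimum episode length, and the choice to segment episodes on AU12 alone while using AU6 only as a Duchenne/genuineness flag are all assumptions.

```python
"""
Illustrative sketch (assumed data layout, not the published software): grouping
consecutive above-threshold frames into smile episodes and summarising each
episode's duration, peak AU12 intensity and a simple Duchenne/genuineness flag.
"""
from dataclasses import dataclass
from typing import List

import pandas as pd

FPS = 30                       # assumed camera frame rate
AU6_THR, AU12_THR = 0.5, 1.5   # calibrated cut-offs reported in the study


@dataclass
class SmileEpisode:
    start_s: float
    duration_s: float
    peak_au12: float
    genuine: bool              # any AU6 co-activation during the episode (Duchenne marker)


def segment_episodes(frames: pd.DataFrame, min_frames: int = 10) -> List[SmileEpisode]:
    """frames: one row per video frame with OpenFace-style columns AU06_r and AU12_r."""
    active = (frames["AU12_r"] >= AU12_THR).to_numpy()
    episodes, start = [], None
    for i, on in enumerate(list(active) + [False]):      # sentinel closes a trailing episode
        if on and start is None:
            start = i
        elif not on and start is not None:
            if i - start >= min_frames:                  # ignore very short blips
                chunk = frames.iloc[start:i]
                episodes.append(SmileEpisode(
                    start_s=start / FPS,
                    duration_s=(i - start) / FPS,
                    peak_au12=float(chunk["AU12_r"].max()),
                    genuine=bool((chunk["AU06_r"] >= AU6_THR).any()),
                ))
            start = None
    return episodes


def summarise(episodes: List[SmileEpisode], clip_minutes: float) -> dict:
    """Per-participant summary: how often, how long, how strongly and how genuinely they smiled."""
    n = len(episodes)
    return {
        "episodes_per_minute": n / clip_minutes if clip_minutes else 0.0,
        "mean_duration_s": sum(e.duration_s for e in episodes) / n if n else 0.0,
        "mean_peak_au12": sum(e.peak_au12 for e in episodes) / n if n else 0.0,
        "genuine_fraction": sum(e.genuine for e in episodes) / n if n else 0.0,
    }
```

A per-participant summary of these episodes would then yield the kind of descriptive statistics reported for the study sample (episodes per minute, mean duration, mean peak AU12 and the proportion of Duchenne episodes).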
CONCLUSIONS: Individual smile episodes and their quantitative features, such as frequency, duration, genuineness and intensity, can be automatedly assessed with an acceptable level of accuracy. The proposed approach can be used to investigate the impact of oral health and oral rehabilitation on smiles. AUTHOR CONTRIBUTIONS: Hisham Mohammed and Reginald Kumar Jr contributed to conceptualisation, investigation, validation and writing of the original draft. Hamza Bennani contributed to the software and analysis. Jamin Halberstadt contributed to methodology, reviewing and editing, and supervision. Mauro Farella contributed to conceptualisation, methodology, reviewing and editing, analysis and supervision. CONFLICT OF INTEREST: All authors declare that they have no competing interests. PEER REVIEW: The peer review history for this article is available at https://publons.com/publon/10.1111/joor.13378.
Background: Patients seeking restorative and orthodontic treatment expect an improvement in their smiles and oral health-related quality of life. Nonetheless, the qualitative and quantitative characteristics of dynamic smiles are yet to be understood. Methods: A software script was developed using the Facial Action Coding System (FACS) and artificial intelligence to assess activations of (1) cheek raiser, a marker of smile genuineness; (2) lip corner puller, a marker of smile intensity; and (3) perioral lip muscles, a marker of lips apart. Thirty study participants were asked to view a series of amusing videos. A full-face video was recorded using a webcam. The onset and cessation of smile episodes were identified by two examiners trained with FACS coding. A Receiver Operating Characteristic (ROC) curve was then used to assess detection accuracy and optimise thresholding. The videos of participants were then analysed off-line to automatedly assess the features of smiles. Results: The area under the ROC curve for smile detection was 0.94, with a sensitivity of 82.9% and a specificity of 89.7%. The software correctly identified 90.0% of smile episodes. While watching the amusing videos, study participants smiled 1.6 (±0.8) times per minute. Conclusions: Features of smiles such as frequency, duration, genuineness, and intensity can be automatedly assessed with an acceptable level of accuracy. The software can be used to investigate the impact of oral conditions and their rehabilitation on smiles.
INTRODUCTION: Smiling is a spontaneous facial expression occurring throughout everyday life, which varies largely between individuals. 1 While the interpretation of smiling may appear straightforward, it is actually one of the most complex facial expressions and can be ambiguous. 2 Not only can smiles have different forms and meanings, but they are also found in different situations and arise from different eliciting factors. 3 , 4 Smile analysis in dentistry has largely focused on static images. 5 More recently, however, there has been a paradigm shift in treatment planning and smile rehabilitation from using static smiles to dynamic smiles; herein lies the 'art of the smile'. 6 As the pursuit of better dentofacial aesthetics increases, it is essential to distinguish between posed and spontaneous smiles, the differences between which are significant and can influence treatment planning and smile design. 5 Understanding the characteristics of different smiles and the associated age-related changes in orofacial musculature, for example, is important to the decision-making process used to achieve 'ideal' tooth display. 7 However, this process should not be confined to aesthetic elements alone but should also extend to understanding whether an oral rehabilitation treatment, including orthodontics, actually affects how often and in what way a patient smiles. 8 , 9 Smiles that reflect spontaneous, pure enjoyment or laughter are often referred to as genuine 'Duchenne' smiles, acknowledging the scientist who first described their features. 10 , 11 The Duchenne smile involves the combined activation of the zygomaticus major and the orbicularis oculi muscles. This pattern of muscular activity distinguishes genuine smiles from 'social' smiles, which are generally expressed during conditions of non-enjoyment. 12 , 13 The identification of Duchenne smiles relies on subtle analysis of facial expressions. 14 The Facial Action Coding System (FACS) 15 is a popular and reliable method for detecting and quantifying the frequency of facial expressions from full-face video recordings. 16 The FACS uses action units (AUs), which code for the actions of individual muscles or groups of muscles during facial expression. 15 The activation level of each AU is scored using intensity scores ranging from 'trace' to 'maximum'. According to FACS, the onset of a smile can be identified when the activation of the zygomaticus major displays traces of raised skin within the lower-to-middle nasolabial area together with traces of upwardly angled and elongated lip corners. 15 These muscle activities increase in intensity until the smile apex is reached and then decline until no further traces of activation of the zygomaticus major can be recognised, which denotes the smile offset. 15 The introduction of FACS has undoubtedly advanced the study of facial expressions, as it allows real-time assessment of emotions; however, its use for the manual detection and coding of AUs has limitations: (a) the need for experienced coders who are able to accurately identify, on a frame-wise basis, the onset, apex and offset of a smile; 16 (b) an extremely laborious coding process, which poses a huge challenge in large-scale research; and (c) susceptibility to observer biases 17 and high costs. 18 The limitations encountered with manual analyses of smiles have led to computing developments to automatedly detect dynamic smiling features. 19
FACS focuses primarily on the identification of active target AUs frame-by-frame and does not include comprehensive analyses of smiling as discrete episodes whose individual features and patterns can be characterised. An episode-wise analysis of individual smiles would allow researchers to address questions such as how often, how long, how strongly and how genuinely individuals smile under different experimental and/or situational factors, and what the impact of factors such as oral health-related conditions is on the way people smile. This would also pave the way to understanding the dynamic characteristics of smiles in oral rehabilitation patients 8 and assist in areas where smile rehabilitation through individualised muscle mimicry and training is demanded. 20 The aim of this study is to develop and validate a user-friendly software script, based on well-established pattern-recognition algorithms for tracking facial landmarks and facial AUs, so that discrete smile episodes can be analysed off-line from full-face videos and quantified in terms of smile frequency, duration, authenticity and intensity. CONCLUSIONS: Individual smile episodes and their quantitative features, such as frequency, duration, genuineness and intensity, can be automatedly assessed with an acceptable level of accuracy. The proposed approach can be used to investigate the impact of oral health and oral rehabilitation on smiles.
Background: Patients seeking restorative and orthodontic treatment expect an improvement in their smiles and oral health-related quality of life. Nonetheless, the qualitative and quantitative characteristics of dynamic smiles are yet to be understood. Methods: A software script was developed using the Facial Action Coding System (FACS) and artificial intelligence to assess activations of (1) cheek raiser, a marker of smile genuineness; (2) lip corner puller, a marker of smile intensity; and (3) perioral lip muscles, a marker of lips apart. Thirty study participants were asked to view a series of amusing videos. A full-face video was recorded using a webcam. The onset and cessation of smile episodes were identified by two examiners trained with FACS coding. A Receiver Operating Characteristic (ROC) curve was then used to assess detection accuracy and optimise thresholding. The videos of participants were then analysed off-line to automatedly assess the features of smiles. Results: The area under the ROC curve for smile detection was 0.94, with a sensitivity of 82.9% and a specificity of 89.7%. The software correctly identified 90.0% of smile episodes. While watching the amusing videos, study participants smiled 1.6 (±0.8) times per minute. Conclusions: Features of smiles such as frequency, duration, genuineness, and intensity can be automatedly assessed with an acceptable level of accuracy. The software can be used to investigate the impact of oral conditions and their rehabilitation on smiles.
9,931
284
[ 831, 739, 64, 325, 179, 300, 214, 474, 55, 27, 12 ]
16
[ "smiling", "video", "smile", "study", "smiles", "facial", "episode", "software", "activation", "episodes" ]
[ "comprehensive analyses smiling", "planning smile rehabilitation", "oral rehabilitation smiles", "smile design understanding", "smile analysis dentistry" ]
null
[CONTENT] orthodontics | smiling | validation studies [SUMMARY]
null
[CONTENT] orthodontics | smiling | validation studies [SUMMARY]
[CONTENT] orthodontics | smiling | validation studies [SUMMARY]
[CONTENT] orthodontics | smiling | validation studies [SUMMARY]
[CONTENT] orthodontics | smiling | validation studies [SUMMARY]
[CONTENT] Humans | Artificial Intelligence | Quality of Life | Facial Expression | Smiling | Lip [SUMMARY]
null
[CONTENT] Humans | Artificial Intelligence | Quality of Life | Facial Expression | Smiling | Lip [SUMMARY]
[CONTENT] Humans | Artificial Intelligence | Quality of Life | Facial Expression | Smiling | Lip [SUMMARY]
[CONTENT] Humans | Artificial Intelligence | Quality of Life | Facial Expression | Smiling | Lip [SUMMARY]
[CONTENT] Humans | Artificial Intelligence | Quality of Life | Facial Expression | Smiling | Lip [SUMMARY]
[CONTENT] comprehensive analyses smiling | planning smile rehabilitation | oral rehabilitation smiles | smile design understanding | smile analysis dentistry [SUMMARY]
null
[CONTENT] comprehensive analyses smiling | planning smile rehabilitation | oral rehabilitation smiles | smile design understanding | smile analysis dentistry [SUMMARY]
[CONTENT] comprehensive analyses smiling | planning smile rehabilitation | oral rehabilitation smiles | smile design understanding | smile analysis dentistry [SUMMARY]
[CONTENT] comprehensive analyses smiling | planning smile rehabilitation | oral rehabilitation smiles | smile design understanding | smile analysis dentistry [SUMMARY]
[CONTENT] comprehensive analyses smiling | planning smile rehabilitation | oral rehabilitation smiles | smile design understanding | smile analysis dentistry [SUMMARY]
[CONTENT] smiling | video | smile | study | smiles | facial | episode | software | activation | episodes [SUMMARY]
null
[CONTENT] smiling | video | smile | study | smiles | facial | episode | software | activation | episodes [SUMMARY]
[CONTENT] smiling | video | smile | study | smiles | facial | episode | software | activation | episodes [SUMMARY]
[CONTENT] smiling | video | smile | study | smiles | facial | episode | software | activation | episodes [SUMMARY]
[CONTENT] smiling | video | smile | study | smiles | facial | episode | software | activation | episodes [SUMMARY]
[CONTENT] smile | smiles | facial | facs | different | rehabilitation | activation zygomaticus major | activation zygomaticus | way | factors [SUMMARY]
null
[CONTENT] smiling | curve | participants | episodes | au12 | detection | smiling episodes | roc | showed | detected [SUMMARY]
[CONTENT] oral | oral health oral rehabilitation | automatedly assessed | automatedly assessed acceptable level | assessed acceptable level | oral rehabilitation smiles | frequency duration genuineness intensity | frequency duration genuineness | assessed acceptable | accuracy proposed approach [SUMMARY]
[CONTENT] smiling | smiles | smile | video | facial | study | episode | activation | participant | participants [SUMMARY]
[CONTENT] smiling | smiles | smile | video | facial | study | episode | activation | participant | participants [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ROC | 0.94 | 82.9% | 89.7% ||| 90.0% ||| 1.6 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| the Facial Action Coding System | FACS | 1 | 2 | 3 ||| Thirty ||| ||| two | FACS ||| ROC ||| ||| ROC | 0.94 | 82.9% | 89.7% ||| 90.0% ||| 1.6 ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| the Facial Action Coding System | FACS | 1 | 2 | 3 ||| Thirty ||| ||| two | FACS ||| ROC ||| ||| ROC | 0.94 | 82.9% | 89.7% ||| 90.0% ||| 1.6 ||| ||| [SUMMARY]
[Do the old fears return? : An investigation into the experience of the corona crisis among survivors of the Hamburg Firestorm (1943)].
34414479
The generation of war children of the Second World War is currently in old age experiencing the lock-down caused by the coronavirus crisis.
BACKGROUND
A total of 120 witnesses of the Hamburg Firestorm (1943) were asked about their experiences of the corona pandemic by means of a questionnaire in May 2020 and December 2020. Findings from telephone conversations with several witnesses, who regularly participate in a discussion group, have also been taken into consideration for this study.
METHODS
Of the interviewees contacted in May 2020 and December 2020, 98 (82%) and 77 (64%), respectively, sent back the questionnaire, 58 (45) female and 40 (32) male, the mean age was 86.5 years (87.1 years). According to the questionnaire most of them feel relatively stable and confident about their general situation in the pandemic and are mostly concerned with the contact restrictions rather than with their own health. The majority fear negative economic consequences for Germany. About 13% fully agree that the current crisis reminds them of their experiences in the Hamburg Firestorm. As telephone conversations have shown the memories and experiences of the war and the post-war period in general, seem to act as the leading frame of reference for dealing with the current crisis.
RESULTS
The findings point to typical psychological processing patterns in a war-burdened generation, when they now relate their experiences in the war to the experiences in the corona crisis.
CONCLUSION
[ "Male", "Female", "Humans", "Aged, 80 and over", "Survivors", "Fear", "Pandemics", "World War II", "Coronavirus Infections" ]
8375611
null
null
null
null
null
null
Conclusions for practice
The surveyed survivors of the Hamburg Firestorm describe themselves and their personal situation as largely stable. Above all, the contact restrictions associated with the lockdown are experienced as distressing. Their wartime experience helps them to put the current crisis into perspective. A subgroup experiences strong links to the firestorm experience and to persistent burdens from the war and post-war years. In general, the experiences of the Second World War should be taken into account in the psychosocial care of today's over-80-year-olds.
[ "Einleitung", "Methodik", "Telefongespräche", "Fragebogenuntersuchung im Mai 2020", "Fragebogenuntersuchung im Dezember 2020", "Ergebnisse", "Befunde aus den Telefongesprächen", "Analyse der Fragebogen", "Analyse der Textantworten in der ersten Befragung (Mai 2020)", "Erleben der Kontakteinschränkung", "Hilfe in der Krise", "Verändern sich die Befunde beim zweiten Lockdown?", "Erleben der Coronakrise vor dem Hintergrund der Feuersturmerfahrung", "Bezüge zu den Inhalten der lebensgeschichtlichen Interviews", "Diskussion", "Ausblick" ]
[ "Die Coronapandemie und der damit verbundene erste Lockdown im Frühjahr 2020 mit seinen massiven Kontakteinschränkungen haben viele Menschen unvorbereitet getroffen. Für die Altenheime und Pflegeeinrichtungen galten besonders strenge Regeln der Kontaktvermeidung, da die Gruppe der Alten als Hochrisikogruppe galt. Bei dieser Altersgruppe handelt es sich um die Generation der „Kriegskinder“, also der Menschen, von denen viele in Deutschland als Kinder den Zweiten Weltkrieg erlebt haben. Es stellt sich die Frage, wie sie diese die Coronakrise erleben und mit ihr umgehen, und ob sie die Kriegszeit damals mit der jetzigen Krisenzeit in Verbindung bringen: Kehren in der Isolation der Coronatage alte Ängste aus den „Bombennächten“ wieder? Um diese Fragen differenziert beantworten zu können, erscheinen Untersuchungen an zeitgeschichtlich definierten Gruppen, wie etwa den Überlebenden des „Hamburger Feuersturms“ sinnvoll. Die als „Operation Gomorrha“ bezeichneten Luftangriffe der Alliierten im Juli 1943 zerstörten weite Teile der Stadt [7] und haben sich als „Hamburger Feuersturm“ im kollektiven Gedächtnis verankert [10].\nIhrem Erleben und der Frage nach der transgenerationalen Weitergabe war das interdisziplinäre Forschungsprojekt „Zeitzeugen des Hamburger Feuersturms und ihre Familien“ [4] gewidmet, das im Zuge der „Kriegskind-Forschung“ [9] entstand und 60 Zeitzeugen und ihre Familien untersucht hat. Dort zeigte sich bei vielen Zeitzeugen neben typischen posttraumatischen Symptomen (Angst vor Bränden, Albträume, reizassoziierte Überempfindlichkeiten und Ängste) überwiegend eine fortbestehende Grunderschütterung im Sinne einer hintergründigen psychischen Labilität trotz äußerlicher Stabilität durch die Erfahrung im Feuersturm [2]. Ihre Bewältigung wurde stark durch die Erfahrungen der Nachkriegszeit beeinflusst [5].\nDas „Erinnerungswerk Hamburger Feuersturm“ (EHF 1943) [www.ehf-1943.de] sammelt als praktische Konsequenz aus diesem Projekt weiter fortlaufend Interviews mit den heute hochbetagten Überlebenden. Mittlerweile wurden im Erinnerungswerk 120 lebensgeschichtliche Interviews von psychodynamisch orientierten Psychotherapeuten durchgeführt. Es handelt sich um die Geburtsjahrgänge 1923–1942; die älteste Interviewte war zum Zeitpunkt des Gesprächs 96 Jahre alt. 14 von ihnen lebten im Heim oder in einer Pflegeeinrichtung. Weiter wird im Erinnerungswerk eine kontinuierliche Gesprächsgruppe für Zeitzeuginnen und Zeitzeugen des Hamburger Feuersturms angeboten.\nAls die Coronakrise im Frühjahr 2020 diese Arbeit unterbrach, haben wir uns gefragt, wie „unsere Zeitzeugen“ mit der Situation des Lockdowns umgehen, und inwieweit es hier zu einem bedrängenden und möglicherweise destabilisierenden Wiedererinnern der existenziellen Grenzerfahrung im Hamburger Feuersturm gekommen war. Erneut wurden diese Fragen im Lockdown im Dezember 2020 vor den bevorstehenden Weihnachtstagen aufgeworfen. 
Es stellten sich insbesondere folgende Fragen:Wie geht es den Zeitzeugen gegenwärtig unter den Bedingungen des Lockdowns?Was macht ihnen besonders zu schaffen?Wie schätzen Sie die allgemeine Lage ein?Werden die Zeitzeugen durch Coronakrise und Lockdown an ihre Erfahrungen in der Kriegs- und Nachkriegszeit erinnert?\nWie geht es den Zeitzeugen gegenwärtig unter den Bedingungen des Lockdowns?\nWas macht ihnen besonders zu schaffen?\nWie schätzen Sie die allgemeine Lage ein?\nWerden die Zeitzeugen durch Coronakrise und Lockdown an ihre Erfahrungen in der Kriegs- und Nachkriegszeit erinnert?", "Telefongespräche Eine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerk Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Infektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 7 Telefongespräche, um die Unterbrechung der Gruppe wegen des „Corona-Lockdowns“ anzukündigen und sich nach dem Ergehen der Gruppenmitglieder zu erkundigen. Nach den Telefonaten wurden kurze Gesprächsnotizen angefertigt.\nZahlreiche Äußerungen der Gruppenteilnehmer zeigen die Art der Bezugnahme des Erlebens der Coronakrise zu den Kriegs- und Nachkriegserfahrungen und illustrieren die später in Fragebogen erhobenen Befunde.\nEine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerk Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Infektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 7 Telefongespräche, um die Unterbrechung der Gruppe wegen des „Corona-Lockdowns“ anzukündigen und sich nach dem Ergehen der Gruppenmitglieder zu erkundigen. Nach den Telefonaten wurden kurze Gesprächsnotizen angefertigt.\nZahlreiche Äußerungen der Gruppenteilnehmer zeigen die Art der Bezugnahme des Erlebens der Coronakrise zu den Kriegs- und Nachkriegserfahrungen und illustrieren die später in Fragebogen erhobenen Befunde.\nFragebogenuntersuchung im Mai 2020 Im Mai 2020 wurde an alle 120 Zeitzeugen, die bisher im Rahmen des Erinnerungswerks interviewt worden waren, ein Fragebogen verschickt. Zu diesem Zeitpunkt war die erste Welle des Infektionsgeschehens schon deutlich im Abklingen. Es war freilich noch ungewiss, wie „es weitergehen würde“. Der Lockdown hatte schon über 6 Wochen angedauert.\nDer Fragebogen bestand aus 23 Fragen zu den Themen des persönlichen Umgangs mit der Krise und dabei erlebten Belastungen, dem Umgang mit der Krise vor dem Hintergrund eigener Erfahrungen in Krieg und Nachkriegszeit sowie zum gesellschaftlichen Umgang mit der Krise, zu Zukunftssorgen und zu erlebten Stützen. Es wurden 5 Antwortkategorien vorgegeben (sehr schlecht (= 1), schlecht (= 2), mittelmäßig (= 3), gut (= 4), sehr gut (= 5); bzw. nein (= 1), eher nein (= 2), teils-teils (= 3), eher ja (= 4), ja (= 5)). Zu zwei Fragen wurden Freitextantworten erbeten.\nEs antworteten 98 (81,7 %), 58 Frauen und 40 Männer, darunter 3 Heimbewohner, mit einem mittleren Alter von 86,5 Jahren, SD = ±3,72). Sie entstammten überwiegend den Geburtsjahrgängen 1932–1938 und waren während des Feuersturms 1943 im Mittel 8 Jahre alt gewesen. 
Insgesamt 5 Bogen (4 %) kamen als unzustellbar zurück, weil die Adressaten mittlerweile verstorben waren.\nDabei erhielten wir von 93 Zeitzeugen (oder 78 %) schon im Zeitraum vom 15. Mai bis zum 20. Juni eine Rückmeldung. Ende Juni sanken die Infektionszahlen deutlich, und es traten erste Grenzöffnungen und Lockerungen in Kraft. Alte Menschen galten freilich weiter als besonders gefährdet.\nIm Mai 2020 wurde an alle 120 Zeitzeugen, die bisher im Rahmen des Erinnerungswerks interviewt worden waren, ein Fragebogen verschickt. Zu diesem Zeitpunkt war die erste Welle des Infektionsgeschehens schon deutlich im Abklingen. Es war freilich noch ungewiss, wie „es weitergehen würde“. Der Lockdown hatte schon über 6 Wochen angedauert.\nDer Fragebogen bestand aus 23 Fragen zu den Themen des persönlichen Umgangs mit der Krise und dabei erlebten Belastungen, dem Umgang mit der Krise vor dem Hintergrund eigener Erfahrungen in Krieg und Nachkriegszeit sowie zum gesellschaftlichen Umgang mit der Krise, zu Zukunftssorgen und zu erlebten Stützen. Es wurden 5 Antwortkategorien vorgegeben (sehr schlecht (= 1), schlecht (= 2), mittelmäßig (= 3), gut (= 4), sehr gut (= 5); bzw. nein (= 1), eher nein (= 2), teils-teils (= 3), eher ja (= 4), ja (= 5)). Zu zwei Fragen wurden Freitextantworten erbeten.\nEs antworteten 98 (81,7 %), 58 Frauen und 40 Männer, darunter 3 Heimbewohner, mit einem mittleren Alter von 86,5 Jahren, SD = ±3,72). Sie entstammten überwiegend den Geburtsjahrgängen 1932–1938 und waren während des Feuersturms 1943 im Mittel 8 Jahre alt gewesen. Insgesamt 5 Bogen (4 %) kamen als unzustellbar zurück, weil die Adressaten mittlerweile verstorben waren.\nDabei erhielten wir von 93 Zeitzeugen (oder 78 %) schon im Zeitraum vom 15. Mai bis zum 20. Juni eine Rückmeldung. Ende Juni sanken die Infektionszahlen deutlich, und es traten erste Grenzöffnungen und Lockerungen in Kraft. Alte Menschen galten freilich weiter als besonders gefährdet.\nFragebogenuntersuchung im Dezember 2020 Mitte Dezember 2020 wurde unter der zweiten Welle der Pandemie ein weiterer Lockdown verordnet. Es standen nun zwar Masken zur Verfügung, und es waren die ersten Impfstoffe entwickelt worden, aber die Infektionszahlen waren massiv angestiegen. Erneut haben wir uns mittels des entwickelten Fragebogens nach dem Ergehen der Zeitzeuginnen und Zeitzeugen zu erkundigt und dabei einige aktuelle Zusatzfragen hinzugefügt. Auf diese Umfrage antworteten jetzt 77 (64 %) der Angeschriebenen, 45 Frauen und 32 Männer mit einem Durchschnittsalter von 86,7 Jahren.\nMitte Dezember 2020 wurde unter der zweiten Welle der Pandemie ein weiterer Lockdown verordnet. Es standen nun zwar Masken zur Verfügung, und es waren die ersten Impfstoffe entwickelt worden, aber die Infektionszahlen waren massiv angestiegen. Erneut haben wir uns mittels des entwickelten Fragebogens nach dem Ergehen der Zeitzeuginnen und Zeitzeugen zu erkundigt und dabei einige aktuelle Zusatzfragen hinzugefügt. Auf diese Umfrage antworteten jetzt 77 (64 %) der Angeschriebenen, 45 Frauen und 32 Männer mit einem Durchschnittsalter von 86,7 Jahren.", "Eine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerk Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Infektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 
7 Telefongespräche, um die Unterbrechung der Gruppe wegen des „Corona-Lockdowns“ anzukündigen und sich nach dem Ergehen der Gruppenmitglieder zu erkundigen. Nach den Telefonaten wurden kurze Gesprächsnotizen angefertigt.\nZahlreiche Äußerungen der Gruppenteilnehmer zeigen die Art der Bezugnahme des Erlebens der Coronakrise zu den Kriegs- und Nachkriegserfahrungen und illustrieren die später in Fragebogen erhobenen Befunde.", "Im Mai 2020 wurde an alle 120 Zeitzeugen, die bisher im Rahmen des Erinnerungswerks interviewt worden waren, ein Fragebogen verschickt. Zu diesem Zeitpunkt war die erste Welle des Infektionsgeschehens schon deutlich im Abklingen. Es war freilich noch ungewiss, wie „es weitergehen würde“. Der Lockdown hatte schon über 6 Wochen angedauert.\nDer Fragebogen bestand aus 23 Fragen zu den Themen des persönlichen Umgangs mit der Krise und dabei erlebten Belastungen, dem Umgang mit der Krise vor dem Hintergrund eigener Erfahrungen in Krieg und Nachkriegszeit sowie zum gesellschaftlichen Umgang mit der Krise, zu Zukunftssorgen und zu erlebten Stützen. Es wurden 5 Antwortkategorien vorgegeben (sehr schlecht (= 1), schlecht (= 2), mittelmäßig (= 3), gut (= 4), sehr gut (= 5); bzw. nein (= 1), eher nein (= 2), teils-teils (= 3), eher ja (= 4), ja (= 5)). Zu zwei Fragen wurden Freitextantworten erbeten.\nEs antworteten 98 (81,7 %), 58 Frauen und 40 Männer, darunter 3 Heimbewohner, mit einem mittleren Alter von 86,5 Jahren, SD = ±3,72). Sie entstammten überwiegend den Geburtsjahrgängen 1932–1938 und waren während des Feuersturms 1943 im Mittel 8 Jahre alt gewesen. Insgesamt 5 Bogen (4 %) kamen als unzustellbar zurück, weil die Adressaten mittlerweile verstorben waren.\nDabei erhielten wir von 93 Zeitzeugen (oder 78 %) schon im Zeitraum vom 15. Mai bis zum 20. Juni eine Rückmeldung. Ende Juni sanken die Infektionszahlen deutlich, und es traten erste Grenzöffnungen und Lockerungen in Kraft. Alte Menschen galten freilich weiter als besonders gefährdet.", "Mitte Dezember 2020 wurde unter der zweiten Welle der Pandemie ein weiterer Lockdown verordnet. Es standen nun zwar Masken zur Verfügung, und es waren die ersten Impfstoffe entwickelt worden, aber die Infektionszahlen waren massiv angestiegen. Erneut haben wir uns mittels des entwickelten Fragebogens nach dem Ergehen der Zeitzeuginnen und Zeitzeugen zu erkundigt und dabei einige aktuelle Zusatzfragen hinzugefügt. Auf diese Umfrage antworteten jetzt 77 (64 %) der Angeschriebenen, 45 Frauen und 32 Männer mit einem Durchschnittsalter von 86,7 Jahren.", "Befunde aus den Telefongesprächen Allen Zeitzeuginnen und Zeitzeugen schien es generell wichtig, die „Fassung zu bewahren“. In 5 Telefonaten wurde die Kriegszeit thematisiert. Die Gesprächspartner kamen im Umgang mit dem Lockdown auf Haltungen und Erfahrungen in der Kriegs- und Nachkriegszeit zurück und schienen diese als Folie der Bewältigung, Einordung und Relativierung der Krise zu nutzen. Gelegentlich schienen projektive Elemente bzw. Vorurteile die Bewältigung zu stabilisieren. Im Eindruck der Gruppenleiterin wurden auch untergründige Beunruhigungen und drohende Instabilitäten spürbar, aber auch ein Stolz, dass man schon Schlimmeres erlebt und überstanden hat.\nSo fühlt sich eine Zeitzeugin (86 Jahre) an die Ausgangssperre 1945 erinnert, „als die Engländer kamen und die Lebensmittelregale leer waren“. 
Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten. „Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“.\nFür einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“.\nImmer wieder werden in den Telefonaten die „Jungen“ erwähnt, die so etwas ja nicht gewohnt seien: „Für die Jungen ist es ja das erste Mal, dass sie zum Nachdenken kommen“, meint eine Zeitzeugin (92 Jahre). Beim Einkaufen in der Kassenschlange habe sie kürzlich eine junge Frau zu ihrer Freundin sagen gehört: „Ich bin so wütend und verzweifelt. Ich hatte eine Theaterkarte, und jetzt fällt das aus“. Da habe sie sich ins Gespräch gemischt: „Nun machen Sie mal nicht so ein Gesicht! Wir haben damals gehungert und die Bomben auf den Kopf gekriegt. Sie haben doch nicht ernsthaft was auszustehen, nur weil die Theatervorstellung ausfällt, mein Kind“. Für einen anderen Zeitzeuge (82 Jahre), der seine Aktivität in Hamburger Schulen durchaus bewusst als eine Art Selbsttherapie begreift, ist es v. a. schwer zu ertragen, dass dies nun unter Coronabedingungen ins Stocken geraten ist.\nAllen Zeitzeuginnen und Zeitzeugen schien es generell wichtig, die „Fassung zu bewahren“. In 5 Telefonaten wurde die Kriegszeit thematisiert. Die Gesprächspartner kamen im Umgang mit dem Lockdown auf Haltungen und Erfahrungen in der Kriegs- und Nachkriegszeit zurück und schienen diese als Folie der Bewältigung, Einordung und Relativierung der Krise zu nutzen. Gelegentlich schienen projektive Elemente bzw. Vorurteile die Bewältigung zu stabilisieren. Im Eindruck der Gruppenleiterin wurden auch untergründige Beunruhigungen und drohende Instabilitäten spürbar, aber auch ein Stolz, dass man schon Schlimmeres erlebt und überstanden hat.\nSo fühlt sich eine Zeitzeugin (86 Jahre) an die Ausgangssperre 1945 erinnert, „als die Engländer kamen und die Lebensmittelregale leer waren“. Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. 
Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten. „Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“.\nFür einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“.\nImmer wieder werden in den Telefonaten die „Jungen“ erwähnt, die so etwas ja nicht gewohnt seien: „Für die Jungen ist es ja das erste Mal, dass sie zum Nachdenken kommen“, meint eine Zeitzeugin (92 Jahre). Beim Einkaufen in der Kassenschlange habe sie kürzlich eine junge Frau zu ihrer Freundin sagen gehört: „Ich bin so wütend und verzweifelt. Ich hatte eine Theaterkarte, und jetzt fällt das aus“. Da habe sie sich ins Gespräch gemischt: „Nun machen Sie mal nicht so ein Gesicht! Wir haben damals gehungert und die Bomben auf den Kopf gekriegt. Sie haben doch nicht ernsthaft was auszustehen, nur weil die Theatervorstellung ausfällt, mein Kind“. Für einen anderen Zeitzeuge (82 Jahre), der seine Aktivität in Hamburger Schulen durchaus bewusst als eine Art Selbsttherapie begreift, ist es v. a. schwer zu ertragen, dass dies nun unter Coronabedingungen ins Stocken geraten ist.\nAnalyse der Fragebogen Tab. 1 zeigt die Ergebnisse des Fragebogenrücklaufs zu beiden Zeitpunkten im Mai und Dezember 2020. Die Frage 11: „Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?“ thematisiert die Erfahrungen im Feuersturm. Zu diesem Item wurde die Korrelation mit den anderen Items des Fragebogens berechnet.Umfragen zur CoronapandemieMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 11pMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 
11pNr.FragetextUmfrage 1 (n = 98) Mai 2020Umfrage 2 (n = 77) Dezember 20201Wie geht es Ihnen gegenwärtig gesundheitlich?3,570,820,080,413,510,660,050,602Haben Sie persönlich Angst, sich am Coronavirus anzustecken?2,170,930,100,352,460,980,33<0,0013Haben Sie für sich selbst vorgesorgt und Vorräte angelegt?2,151,300,130,222,151,310,080,414Halten Sie sich streng an die „Coronaregeln“ (Abstandsgebote, Kontaktsperre, Hygieneregeln usw.)?4,410,760,110,284,470,750,160,115Mussten Sie Ihre Kontakte einschränken?4,071,160,090,404,011,130,070,536Ist Ihnen das schwergefallen?3,541,450,130,213,891,260,120,237Falls ja, was war hier das Schwerste?Freitextantworten8Haben Sie körperliche oder seelische Symptome, die Sie auf diese Krise zurückführen?1,841,170,120,251,921,050,100,329Hat Ihr Schlaf unter der derzeitigen Krise gelitten?1,691,100,260,011,901,070,320,00110Haben Sie angesichts der gegenwärtigen Bedrohung durch Corona Ängste, die schlimmer sind, als sie eigentlich sein müssten, und gegen die Sie sich nicht wehren können?1,611,020,270,011,740,980,40<0,00111Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?2,351,39––2,481,49––12Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen in der Nachkriegszeit?2,651,500,58<0,0012,641,440,64<0,00113Empfinden Sie die Coronakrise im Vergleich zur Kriegs- und Nachkriegszeit als eher harmlos?2,931,48−0,300,012,571,39−0,100,1614Haben Sie den Eindruck, die Coronakrise wird von der Politik dramatisiert?1,961,060,050,661,911,16−0,100,1815Haben Sie den Eindruck, dass die Politik sich ausreichend um die älteren Menschen kümmert?3,211,12−0,100,303,331,13−0,100,5616Haben Sie den Eindruck, dass die jüngeren Menschen gut verstehen, wie es den Älteren in der Krise geht?2,761,2−0,10,152,821,27−0,100,2217Fürchten Sie ernsthafte volkswirtschaftliche Schäden für Deutschland?4,140,9600,984,250,900,8718Belasten Sie solche Befürchtungen?3,081,220,070,503,111,240,200,0519Fürchten Sie ernste wirtschaftliche Folgen auch für sich persönlich (z. B. Inflation, Besteuerung des Ersparten, Rentenkürzungen)?2,861,310,090,363,011,3800,8820Belasten Sie solche Befürchtungen?2,431,260,140,182,571,350,120,2521Haben Sie die Erwartung, dass Deutschland gut durch die Krise kommen wird?3,940,9101,003,710,8400,9422Gibt es etwas, das Ihnen jetzt in der Krise besonders hilft?Freitextantworten23Sind Ihnen hier Ihre Erfahrungen in der Kriegs- oder Nachkriegszeit nützlich?3,631,530,260,013,611,450,240,0224Macht Ihnen die Dauer der Krise zu schaffen?––––3,751,030,020,8425Haben Sie unter Einsamkeit oder Alleinsein zu leiden?––––2,331,310,020,8626Mussten Sie selbst in Quarantäne oder waren besonderer Kontaktbeschränkung ausgesetzt?––––1,030,15−0,100,2127Ist jemand aus Ihrem persönlichen Umfeld am Coronavirus erkrankt?––––1,140,350,230,0228Ist jemand aus Ihrem persönlichen Umfeld am oder mit dem Coronavirus verstorben?––––1,040,190,190,0629Werden Sie selbst sich impfen lassen, sobald ein Impfstoff zur Verfügung steht?––––4,031,30−0,100,4530Fühlen Sie sich gegenwärtig durch die Coronapandemie stärker belastet als im Frühjahr während des ersten Lockdowns?––––3,231,400,150,1431Sind die coronabedingten Einschränkungen zu Weihnachten für Sie besonders schwer auszuhalten?––––2,591,370,160,1132Ist für Sie das Feuerwerk zu Silvester mit Erinnerungen an den Feuersturm verbunden?––––2,331,730,180,09\nZum allgemeinen Umgang mit der Coronakrise geben die Befragten in der ersten Umfrage im Mai 2020 an, dass es ihnen gesundheitlich relativ gut geht (m = 3,57). 
Sie berichten, die „Coronaregeln“ (m = 4,40) genau zu befolgen. Die damit verbundenen Kontakteinschränkungen (m = 4,07) auf sich zu nehmen, ist überwiegend schwergefallen (m = 3,54). Weniger geben die Befragen an, Vorräte angelegt zu haben (m = 2,17). Obwohl sie der „Risikogruppe“ angehören, ist die Angst vor eigener Ansteckung ebenso relativ gering ausgeprägt (m = 2,17). Körperliche oder seelische Symptome werden in noch geringerem Ausmaß auf die Coronakrise zurückgeführt (m = 1,84), zu Schlafstörungen kommt es kaum (m = 1,69), und über vermehrte Ängste wird vergleichsweise wenig berichtet (m = 1,61). Insgesamt stellen sich die Befragten also stabil und wenig beeinträchtigt dar.\nDie Durchschnittswerte der Angaben zum historischen Vergleich liegen im mittleren Bereich. Dies gilt sowohl für die Wiedererinnerung an die Nachkriegszeit (m = 2,65) als auch an den Feuersturm (m = 2,35).\nBei den Angaben zum gesellschaftlichen Umgang mit der Krise und zu Zukunftserwartungen stehen wirtschaftliche Sorgen im Vordergrund: Überwiegend wird befürchtet, dass es zu volkswirtschaftlichen Schäden für Deutschland kommt (m = 4,13). Diese belasten (m = 3,08), dabei ist die Sorge um die eigene wirtschaftliche Zukunft weniger ausgeprägt (m = 2,86). Überwiegend wird erwartet, dass Deutschland gut durch die Krise kommen wird (m = 3,94).\nDie eigenen Erfahrungen in Kriegs- und Nachkriegszeit werden im Umgang mit der Krise als nützlich erlebt (m = 3,62). Dieser Wert liegt deutlich höher als die Durchschnittswerte der Items 11 und 12, die eine belastende und ängstigende Wiedererinnerung thematisieren.\nTab. 1 zeigt die Ergebnisse des Fragebogenrücklaufs zu beiden Zeitpunkten im Mai und Dezember 2020. Die Frage 11: „Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?“ thematisiert die Erfahrungen im Feuersturm. Zu diesem Item wurde die Korrelation mit den anderen Items des Fragebogens berechnet.Umfragen zur CoronapandemieMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 11pMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 
11pNr.FragetextUmfrage 1 (n = 98) Mai 2020Umfrage 2 (n = 77) Dezember 20201Wie geht es Ihnen gegenwärtig gesundheitlich?3,570,820,080,413,510,660,050,602Haben Sie persönlich Angst, sich am Coronavirus anzustecken?2,170,930,100,352,460,980,33<0,0013Haben Sie für sich selbst vorgesorgt und Vorräte angelegt?2,151,300,130,222,151,310,080,414Halten Sie sich streng an die „Coronaregeln“ (Abstandsgebote, Kontaktsperre, Hygieneregeln usw.)?4,410,760,110,284,470,750,160,115Mussten Sie Ihre Kontakte einschränken?4,071,160,090,404,011,130,070,536Ist Ihnen das schwergefallen?3,541,450,130,213,891,260,120,237Falls ja, was war hier das Schwerste?Freitextantworten8Haben Sie körperliche oder seelische Symptome, die Sie auf diese Krise zurückführen?1,841,170,120,251,921,050,100,329Hat Ihr Schlaf unter der derzeitigen Krise gelitten?1,691,100,260,011,901,070,320,00110Haben Sie angesichts der gegenwärtigen Bedrohung durch Corona Ängste, die schlimmer sind, als sie eigentlich sein müssten, und gegen die Sie sich nicht wehren können?1,611,020,270,011,740,980,40<0,00111Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?2,351,39––2,481,49––12Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen in der Nachkriegszeit?2,651,500,58<0,0012,641,440,64<0,00113Empfinden Sie die Coronakrise im Vergleich zur Kriegs- und Nachkriegszeit als eher harmlos?2,931,48−0,300,012,571,39−0,100,1614Haben Sie den Eindruck, die Coronakrise wird von der Politik dramatisiert?1,961,060,050,661,911,16−0,100,1815Haben Sie den Eindruck, dass die Politik sich ausreichend um die älteren Menschen kümmert?3,211,12−0,100,303,331,13−0,100,5616Haben Sie den Eindruck, dass die jüngeren Menschen gut verstehen, wie es den Älteren in der Krise geht?2,761,2−0,10,152,821,27−0,100,2217Fürchten Sie ernsthafte volkswirtschaftliche Schäden für Deutschland?4,140,9600,984,250,900,8718Belasten Sie solche Befürchtungen?3,081,220,070,503,111,240,200,0519Fürchten Sie ernste wirtschaftliche Folgen auch für sich persönlich (z. B. Inflation, Besteuerung des Ersparten, Rentenkürzungen)?2,861,310,090,363,011,3800,8820Belasten Sie solche Befürchtungen?2,431,260,140,182,571,350,120,2521Haben Sie die Erwartung, dass Deutschland gut durch die Krise kommen wird?3,940,9101,003,710,8400,9422Gibt es etwas, das Ihnen jetzt in der Krise besonders hilft?Freitextantworten23Sind Ihnen hier Ihre Erfahrungen in der Kriegs- oder Nachkriegszeit nützlich?3,631,530,260,013,611,450,240,0224Macht Ihnen die Dauer der Krise zu schaffen?––––3,751,030,020,8425Haben Sie unter Einsamkeit oder Alleinsein zu leiden?––––2,331,310,020,8626Mussten Sie selbst in Quarantäne oder waren besonderer Kontaktbeschränkung ausgesetzt?––––1,030,15−0,100,2127Ist jemand aus Ihrem persönlichen Umfeld am Coronavirus erkrankt?––––1,140,350,230,0228Ist jemand aus Ihrem persönlichen Umfeld am oder mit dem Coronavirus verstorben?––––1,040,190,190,0629Werden Sie selbst sich impfen lassen, sobald ein Impfstoff zur Verfügung steht?––––4,031,30−0,100,4530Fühlen Sie sich gegenwärtig durch die Coronapandemie stärker belastet als im Frühjahr während des ersten Lockdowns?––––3,231,400,150,1431Sind die coronabedingten Einschränkungen zu Weihnachten für Sie besonders schwer auszuhalten?––––2,591,370,160,1132Ist für Sie das Feuerwerk zu Silvester mit Erinnerungen an den Feuersturm verbunden?––––2,331,730,180,09\nZum allgemeinen Umgang mit der Coronakrise geben die Befragten in der ersten Umfrage im Mai 2020 an, dass es ihnen gesundheitlich relativ gut geht (m = 3,57). 
Sie berichten, die „Coronaregeln“ (m = 4,40) genau zu befolgen. Die damit verbundenen Kontakteinschränkungen (m = 4,07) auf sich zu nehmen, ist überwiegend schwergefallen (m = 3,54). Weniger geben die Befragen an, Vorräte angelegt zu haben (m = 2,17). Obwohl sie der „Risikogruppe“ angehören, ist die Angst vor eigener Ansteckung ebenso relativ gering ausgeprägt (m = 2,17). Körperliche oder seelische Symptome werden in noch geringerem Ausmaß auf die Coronakrise zurückgeführt (m = 1,84), zu Schlafstörungen kommt es kaum (m = 1,69), und über vermehrte Ängste wird vergleichsweise wenig berichtet (m = 1,61). Insgesamt stellen sich die Befragten also stabil und wenig beeinträchtigt dar.\nDie Durchschnittswerte der Angaben zum historischen Vergleich liegen im mittleren Bereich. Dies gilt sowohl für die Wiedererinnerung an die Nachkriegszeit (m = 2,65) als auch an den Feuersturm (m = 2,35).\nBei den Angaben zum gesellschaftlichen Umgang mit der Krise und zu Zukunftserwartungen stehen wirtschaftliche Sorgen im Vordergrund: Überwiegend wird befürchtet, dass es zu volkswirtschaftlichen Schäden für Deutschland kommt (m = 4,13). Diese belasten (m = 3,08), dabei ist die Sorge um die eigene wirtschaftliche Zukunft weniger ausgeprägt (m = 2,86). Überwiegend wird erwartet, dass Deutschland gut durch die Krise kommen wird (m = 3,94).\nDie eigenen Erfahrungen in Kriegs- und Nachkriegszeit werden im Umgang mit der Krise als nützlich erlebt (m = 3,62). Dieser Wert liegt deutlich höher als die Durchschnittswerte der Items 11 und 12, die eine belastende und ängstigende Wiedererinnerung thematisieren.\nAnalyse der Textantworten in der ersten Befragung (Mai 2020) Erleben der Kontakteinschränkung Auf die Frage 7 (Was war das Schwerste bei der Einhaltung der Kontakteinschränkungen?) wurde in 65 Fragebogen geantwortet. Die Inhalte der Antworten ließen sich thematisch wie folgt gruppieren (Tab. 2).Unterbrochene KontaktenFamilienangehörige, Freunde und Bekannte nicht treffen zu können24Verlust von gewohnten Aktivitäten oder Reisen14Nicht-umarmen-Können in der Familie10Allgemeine Kontakt- und Besuchsreduktion8Wichtige Geburtstage nicht in der Familie feiern zu können4Erleben von Einsamkeit und RestriktionAlleinsein4Eigene Krankenbesuche sind nicht möglich4Erleben von Eingesperrtsein1Kein Ausgang1Kontakt nur am Telefon1Zu wenig Bewegung1Explizit „nicht viel vermisst“3\nDie Antworten betonen die Personen, mit denen Kontakte nicht möglich oder unterbunden waren. Besonders schmerzlich wurde erlebt, „dass man sich in der Familie nicht mehr umarmen kann“, ebenso wenn eigene Besuche im Pflegeheim nicht möglich waren.\nAuf die Frage 7 (Was war das Schwerste bei der Einhaltung der Kontakteinschränkungen?) wurde in 65 Fragebogen geantwortet. Die Inhalte der Antworten ließen sich thematisch wie folgt gruppieren (Tab. 2).Unterbrochene KontaktenFamilienangehörige, Freunde und Bekannte nicht treffen zu können24Verlust von gewohnten Aktivitäten oder Reisen14Nicht-umarmen-Können in der Familie10Allgemeine Kontakt- und Besuchsreduktion8Wichtige Geburtstage nicht in der Familie feiern zu können4Erleben von Einsamkeit und RestriktionAlleinsein4Eigene Krankenbesuche sind nicht möglich4Erleben von Eingesperrtsein1Kein Ausgang1Kontakt nur am Telefon1Zu wenig Bewegung1Explizit „nicht viel vermisst“3\nDie Antworten betonen die Personen, mit denen Kontakte nicht möglich oder unterbunden waren. 
It was experienced as particularly painful "that one can no longer hug within the family", as it was when one's own visits to the nursing home were not possible.

Help in the crisis

Question 22 (Is there anything that particularly helps you now in the crisis?) was answered in 88 questionnaires; the answers could be grouped as follows (Tab. 2).

Tab. 2 Grouped free-text answers to question 22 (number of mentions):
Supportive experience in partnership and family
  Family in general: 8
  Children: 8
  Partnership: 4
  Dog: 2
  Grandchildren: 1
Own attitudes
  Optimism, faith and confidence: 10
  Strength drawn from one's own experiences of the war and post-war period: 7
  Experience and knowledge: 5
  Own health behaviour and caution: 3
  Keeping calm and not brooding too much: 2
Social cohesion and favourable personal activities and circumstances
  Neighbours, help in everyday life, the opportunity to help others oneself: 12
  Nature and exercise, hobbies: 10
  Communication by telephone, e-mail or WhatsApp: 8
  Friendships: 5
  Good personal life situation: 5
  Trust in politics: 4
  Hope for improvement and an easing of the measures: 3
  Sensory enjoyment within what is possible: 2
  Attending church services: 1
  Conversations: 1
  Rest and sleep: 1
  No particular help: 4

Alongside the supportive experience of contact and conversation within the family, the experience of social togetherness stands in the foreground. In addition to "knowledge and life experience", the respondents frequently name positive attitudes towards life as helpful. They present themselves as composed and optimistic and state that their wartime experience helps them to put the experience of the contact restrictions into perspective: "After all, we are not starving or freezing." Nature and hobbies are further frequently named sources of strength.

Do the findings change in the second lockdown?

In the second survey in December 2020 the findings remained essentially constant. The largest differences were found for question 6 (Did you find the contact restrictions hard to bear? Here the mean rose by 0.4 to m = 3.89) and for question 13 (Do you perceive the corona crisis as harmless compared with the war and post-war period? Here the mean fell by 0.4 to m = 2.57).

The duration of the crisis clearly weighs on the contemporary witnesses (m = 3.75). Loneliness does not appear to be the central problem (m = 2.33). Most of the witnesses intend to be vaccinated (m = 4.0). Questions about a particular burden from the approaching Christmas holidays (m = 2.69) or from the New Year's Eve fireworks (m = 2.33) met with moderate agreement.

Experience of the corona crisis against the background of the firestorm experience

For item 11 (Does the current crisis remind you of your experiences in the Hamburg firestorm?), 13 respondents answered at the highest level (5), that is, yes. Of these, 11 again answered this item at the highest level in the second survey; in one case the rating dropped from 5 to 4, and in another from 5 to 2. The correlations with the other items of the questionnaire (columns 3 and 7 of Tab. 1) show, at both time points, small but significant correlations with the answers to question 11 for three variables (p < 0.05): sleep problems (0.26 and 0.32, respectively), exaggerated anxieties (0.28 and 0.40), and the impression that the war experiences are useful for coping personally with the corona crisis (0.26 and 0.24).

A strongly significant correlation (0.58 and 0.64, respectively; p < 0.001) emerges for the question about being reminded of the post-war period. Here, however, it is mostly the low values that correlate: respondents who did not feel reminded of the Hamburg firestorm by the corona crisis did not feel reminded of the post-war period either. A further significant (negative) correlation with the perception that the corona crisis is harmless compared with the war and post-war period (−0.3) no longer appears in the second survey (−0.1).

References to the contents of the biographical interviews

The question then arose as to which concrete experiences in the firestorm had been made by the 13 respondents who answered question 11 (Does the current crisis remind you of your experiences in the Hamburg firestorm?) at the top of the scale (yes = 5). To answer it, our research group drew on the transcribed interviews with these respondents already held in the Erinnerungswerk and jointly rated, ad hoc, central features of the experiences they described in the firestorm and in the war and post-war period on a scale from 0 to 3. This revealed a massive burden on these contemporary witnesses that goes well beyond the actual threat of death in the firestorm (m = 1.96). In the foreground are the consequences of being bombed out and the uprooting associated with it (m = 2.38), alongside further hardships and deprivations during the war (m = 1.92) and the post-war period (m = 1.92). Only one contemporary witness in this group was comparatively little burdened.
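To make the item-wise correlation analysis reported above more concrete, the following is a minimal illustrative sketch, not the authors' analysis code: it shows how Pearson correlations between individual questionnaire items and item 11, together with their two-sided p-values, could be computed from 5-point Likert responses. The item names and the randomly generated response matrix are assumptions made purely for the example.

```python
# Minimal sketch (assumed data, not the study data): Pearson correlation of
# selected questionnaire items with item 11 ("reminded of the Hamburg firestorm"),
# reported together with the two-sided p-value, analogous to the r/p columns of Tab. 1.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_respondents = 98  # size of the May 2020 sample reported in the text

# Hypothetical 5-point Likert answers (1 = no ... 5 = yes) for a few items.
items = {
    "item_09_sleep_problems": rng.integers(1, 6, n_respondents),
    "item_10_excessive_anxiety": rng.integers(1, 6, n_respondents),
    "item_12_reminded_of_postwar_period": rng.integers(1, 6, n_respondents),
    "item_23_war_experience_useful": rng.integers(1, 6, n_respondents),
}
item_11_firestorm = rng.integers(1, 6, n_respondents)

for name, values in items.items():
    r, p = pearsonr(values, item_11_firestorm)  # correlation with the anchor item
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```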
Findings from the telephone conversations

To all of the contemporary witnesses it seemed generally important to "keep their composure". The war years were brought up in 5 of the telephone calls. In dealing with the lockdown, the interlocutors returned to attitudes and experiences from the war and post-war period and seemed to use them as a template for coping with, classifying and relativising the crisis. Occasionally, projective elements or prejudices seemed to stabilise their coping. In the group leader's impression, underlying unease and threatening instabilities also became palpable, but so did a pride in having already experienced and survived worse.

Thus one witness (86 years old) feels reminded of the curfew in 1945, "when the English came and the food shelves were empty". Another witness (80 years old) points to the patterns imprinted in the war and post-war period and to laying in supplies with foresight: "Some things simply have to wait now." For her, however, that is no problem. She still knows it, after all. She always keeps sufficient supplies in the cellar: "It is in us to make provision for times of crisis." One witness (81 years old) stresses how important it is also to have "luck", an attitude we frequently encountered in our studies of looking back on survival in the firestorm [3]: "When you are lying in the rubble you also need luck; you can adapt and improvise, but you need luck as well." He also emphasises the ability to tolerate shortages: "It is no drama if something is lacking for once." Perhaps the following generation will now also learn that one need not panic "if there happens to be no bread in the house".

For another witness (79 years old), by contrast, hunger is central. He always told himself: just never go hungry again. He is glad that he does not have to go hungry now. In his experience of the lockdown he refers not only to the food shortages of the post-war period but to the "entire war period" since 1943, "we were not allowed out then either".

Again and again the telephone calls mention "the young", who are not used to such things: "For the young it is the first time that they are made to stop and think," says one witness (92 years old). While queuing at the checkout she recently heard a young woman say to her friend: "I am so angry and desperate. I had a theatre ticket and now it has been cancelled." She joined the conversation: "Now don't pull such a face! Back then we went hungry and had bombs dropped on our heads. You are not seriously suffering just because a theatre performance is cancelled, my child." For another witness (82 years old), who quite consciously understands his work in Hamburg schools as a kind of self-therapy, it is above all hard to bear that this has now ground to a halt under corona conditions.

Analysis of the questionnaires

Tab. 1 shows the results of the questionnaire returns at both time points, in May and December 2020. Question 11 ("Does the current crisis remind you of your experiences in the Hamburg firestorm?") addresses the experiences in the firestorm. For this item, the correlation with the other items of the questionnaire was calculated.

Tab. 1 Surveys on the corona pandemic (survey 1: n = 98, May 2020; survey 2: n = 77, December 2020). For each item the mean (m), standard deviation (s), correlation (r) with item no. 11 and p-value are given; "–" marks items not included in that survey.
1. How is your health at present? May: m = 3.57, s = 0.82, r = 0.08, p = 0.41; December: m = 3.51, s = 0.66, r = 0.05, p = 0.60
2. Are you personally afraid of being infected with the coronavirus? May: m = 2.17, s = 0.93, r = 0.10, p = 0.35; December: m = 2.46, s = 0.98, r = 0.33, p < 0.001
3. Have you made provision for yourself and laid in supplies? May: m = 2.15, s = 1.30, r = 0.13, p = 0.22; December: m = 2.15, s = 1.31, r = 0.08, p = 0.41
4. Do you adhere strictly to the "corona rules" (distancing requirements, contact ban, hygiene rules, etc.)? May: m = 4.41, s = 0.76, r = 0.11, p = 0.28; December: m = 4.47, s = 0.75, r = 0.16, p = 0.11
5. Did you have to restrict your contacts? May: m = 4.07, s = 1.16, r = 0.09, p = 0.40; December: m = 4.01, s = 1.13, r = 0.07, p = 0.53
6. Did you find that hard? May: m = 3.54, s = 1.45, r = 0.13, p = 0.21; December: m = 3.89, s = 1.26, r = 0.12, p = 0.23
7. If so, what was the hardest part? Free-text answers
8. Do you have physical or psychological symptoms that you attribute to this crisis? May: m = 1.84, s = 1.17, r = 0.12, p = 0.25; December: m = 1.92, s = 1.05, r = 0.10, p = 0.32
9. Has your sleep suffered under the current crisis? May: m = 1.69, s = 1.10, r = 0.26, p = 0.01; December: m = 1.90, s = 1.07, r = 0.32, p = 0.001
10. In view of the current threat from corona, do you have anxieties that are worse than they really need to be and that you cannot fend off? May: m = 1.61, s = 1.02, r = 0.27, p = 0.01; December: m = 1.74, s = 0.98, r = 0.40, p < 0.001
11. Does the current crisis remind you of your experiences in the Hamburg firestorm? May: m = 2.35, s = 1.39; December: m = 2.48, s = 1.49 (reference item)
12. Does the current crisis remind you of your experiences in the post-war period? May: m = 2.65, s = 1.50, r = 0.58, p < 0.001; December: m = 2.64, s = 1.44, r = 0.64, p < 0.001
13. Do you perceive the corona crisis as rather harmless compared with the war and post-war period? May: m = 2.93, s = 1.48, r = −0.30, p = 0.01; December: m = 2.57, s = 1.39, r = −0.10, p = 0.16
14. Do you have the impression that the corona crisis is being dramatised by politicians? May: m = 1.96, s = 1.06, r = 0.05, p = 0.66; December: m = 1.91, s = 1.16, r = −0.10, p = 0.18
15. Do you have the impression that politicians are taking sufficient care of older people? May: m = 3.21, s = 1.12, r = −0.10, p = 0.30; December: m = 3.33, s = 1.13, r = −0.10, p = 0.56
16. Do you have the impression that younger people understand well how older people are faring in the crisis? May: m = 2.76, s = 1.2, r = −0.1, p = 0.15; December: m = 2.82, s = 1.27, r = −0.10, p = 0.22
17. Do you fear serious macroeconomic damage for Germany? May: m = 4.14, s = 0.96, r = 0, p = 0.98; December: m = 4.25, s = 0.9, r = 0, p = 0.87
18. Do such fears burden you? May: m = 3.08, s = 1.22, r = 0.07, p = 0.50; December: m = 3.11, s = 1.24, r = 0.20, p = 0.05
19. Do you fear serious economic consequences for yourself personally (e.g. inflation, taxation of savings, pension cuts)? May: m = 2.86, s = 1.31, r = 0.09, p = 0.36; December: m = 3.01, s = 1.38, r = 0, p = 0.88
20. Do such fears burden you? May: m = 2.43, s = 1.26, r = 0.14, p = 0.18; December: m = 2.57, s = 1.35, r = 0.12, p = 0.25
21. Do you expect that Germany will come through the crisis well? May: m = 3.94, s = 0.91, r = 0, p = 1.00; December: m = 3.71, s = 0.84, r = 0, p = 0.94
22. Is there anything that particularly helps you now in the crisis? Free-text answers
23. Are your experiences from the war or post-war period useful to you here? May: m = 3.63, s = 1.53, r = 0.26, p = 0.01; December: m = 3.61, s = 1.45, r = 0.24, p = 0.02
24. Is the duration of the crisis weighing on you? May: –; December: m = 3.75, s = 1.03, r = 0.02, p = 0.84
25. Do you suffer from loneliness or being alone? May: –; December: m = 2.33, s = 1.31, r = 0.02, p = 0.86
26. Did you yourself have to go into quarantine or were you subject to particular contact restrictions? May: –; December: m = 1.03, s = 0.15, r = −0.10, p = 0.21
27. Has anyone in your personal environment fallen ill with the coronavirus? May: –; December: m = 1.14, s = 0.35, r = 0.23, p = 0.02
28. Has anyone in your personal environment died of or with the coronavirus? May: –; December: m = 1.04, s = 0.19, r = 0.19, p = 0.06
29. Will you have yourself vaccinated as soon as a vaccine is available? May: –; December: m = 4.03, s = 1.30, r = −0.10, p = 0.45
30. Do you currently feel more burdened by the corona pandemic than in spring during the first lockdown? May: –; December: m = 3.23, s = 1.40, r = 0.15, p = 0.14
31. Are the corona-related restrictions at Christmas particularly hard for you to bear? May: –; December: m = 2.59, s = 1.37, r = 0.16, p = 0.11
32. Is the New Year's Eve fireworks display associated for you with memories of the firestorm? May: –; December: m = 2.33, s = 1.73, r = 0.18, p = 0.09

Regarding their general handling of the corona crisis, the respondents state in the first survey in May 2020 that their health is relatively good (m = 3.57). They report following the "corona rules" closely (m = 4.40). Accepting the contact restrictions that go with them (m = 4.07) was, for the most part, hard (m = 3.54). The respondents report laying in supplies to a lesser extent (m = 2.17). Although they belong to the "risk group", their fear of becoming infected themselves is likewise relatively low (m = 2.17). Physical or psychological symptoms are attributed to the corona crisis to an even smaller extent (m = 1.84), sleep disturbances hardly occur (m = 1.69), and increased anxieties are reported comparatively rarely (m = 1.61). Overall, the respondents thus present themselves as stable and little impaired.

The mean values for the historical comparison lie in the middle range. This holds both for being reminded of the post-war period (m = 2.65) and of the firestorm (m = 2.35).

In the statements on society's handling of the crisis and on expectations for the future, economic worries are in the foreground: it is predominantly feared that Germany will suffer macroeconomic damage (m = 4.13). These fears are burdensome (m = 3.08), whereas concern about one's own economic future is less pronounced (m = 2.86). It is predominantly expected that Germany will come through the crisis well (m = 3.94).

The respondents experience their own war and post-war experiences as useful in dealing with the crisis (m = 3.62). This value is clearly higher than the mean values of items 11 and 12, which address a burdensome and frightening re-remembering.
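As a further purely illustrative sketch (again with invented responses, not the study data), per-item means and standard deviations for the two survey waves could be tabulated side by side in the style of Tab. 1 as follows; the sample sizes follow the text, while the item names and values are assumptions for the example.

```python
# Minimal sketch (assumed data): per-item mean and standard deviation for the
# May 2020 (n = 98) and December 2020 (n = 77) waves, side by side as in Tab. 1.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
item_cols = ["item_01_health", "item_02_fear_of_infection",
             "item_04_follows_rules", "item_06_restrictions_hard"]

def simulated_wave(n_respondents: int) -> pd.DataFrame:
    """Hypothetical 1-5 Likert answers, one row per respondent."""
    data = rng.integers(1, 6, size=(n_respondents, len(item_cols)))
    return pd.DataFrame(data, columns=item_cols)

may_2020 = simulated_wave(98)
december_2020 = simulated_wave(77)

# One row per item, with (mean, std) blocks for each wave.
summary = pd.concat(
    {"May 2020": may_2020.agg(["mean", "std"]).T,
     "December 2020": december_2020.agg(["mean", "std"]).T},
    axis=1,
)
print(summary.round(2))
```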
Analysis of the free-text answers in the first survey (May 2020)

Experience of the contact restrictions

Question 7 (What was the hardest part of complying with the contact restrictions?) was answered in 65 questionnaires. The content of the answers could be grouped thematically as follows (Tab. 3).

Tab. 3 Grouped free-text answers to question 7 (number of mentions):
Interrupted contacts
  Not being able to meet family members, friends and acquaintances: 24
  Loss of accustomed activities or of travel: 14
  Not being able to hug within the family: 10
  General reduction of contacts and visits: 8
  Not being able to celebrate important birthdays with the family: 4
Experience of loneliness and restriction
  Being alone: 4
  Not being able to visit the sick oneself: 4
  Feeling locked in: 1
  No going out: 1
  Contact only by telephone: 1
  Too little exercise: 1
  Explicitly "did not miss much": 3

The answers emphasise the people with whom contact was not possible or was cut off. It was experienced as particularly painful "that one can no longer hug within the family", as it was when visits to the nursing home were not possible.
Discussion

Our exploratory cross-sectional study offers a first insight into how the generation of the "war children of the Second World War" experiences the corona crisis. The high questionnaire return rate in this group of the very old (81.7%, 64%) speaks for the subjective relevance of the research question for the respondents and for their attachment to the Erinnerungswerk. Overall, the survivors of the Hamburg firestorm present themselves as predominantly composed and stable both at the beginning of the first corona lockdown in spring 2020 and in December 2020. At the same time it becomes clear that the imposed contact restrictions, especially the restriction of direct, also physical, contact within the family, weigh on them considerably. Concern about their own health and fear of infection are less pronounced. A positive personal attitude is seen as an essential element of coming through the crisis well. Possibly, attitudes in the sense of a "generation-related frugality" [6] that were formed during the war years are being activated here (e.g. optimism as a principle of survival, keeping calm), but experience and knowledge are also regarded as important. Admittedly, we also find in the questionnaires a subgroup of about 13% of respondents who report a remembered link between their firestorm and corona experiences. Overall, our findings suggest that for severely affected "war children" of the Second World War the war experiences can serve, in their self-perception, as a stabilising internal frame of reference for coping with and relativising the corona crisis, but that painful and difficult "re-remembering" also occurs.

Outlook

It appears sensible to turn specifically to the historically shaped experience of the age group of the very old. This seems all the more necessary since, for example, a large national study on the experience of the corona crisis does not include those over 79 [8]. Their collective labelling as a "high-risk group" can have a discriminating effect and obscure the "crisis resilience" of the individual [1].
[ "Einleitung", "Methodik", "Telefongespräche", "Fragebogenuntersuchung im Mai 2020", "Fragebogenuntersuchung im Dezember 2020", "Ergebnisse", "Befunde aus den Telefongesprächen", "Analyse der Fragebogen", "Analyse der Textantworten in der ersten Befragung (Mai 2020)", "Erleben der Kontakteinschränkung", "Hilfe in der Krise", "Verändern sich die Befunde beim zweiten Lockdown?", "Erleben der Coronakrise vor dem Hintergrund der Feuersturmerfahrung", "Bezüge zu den Inhalten der lebensgeschichtlichen Interviews", "Diskussion", "Ausblick", "Fazit für die Praxis" ]
[ "Die Coronapandemie und der damit verbundene erste Lockdown im Frühjahr 2020 mit seinen massiven Kontakteinschränkungen haben viele Menschen unvorbereitet getroffen. Für die Altenheime und Pflegeeinrichtungen galten besonders strenge Regeln der Kontaktvermeidung, da die Gruppe der Alten als Hochrisikogruppe galt. Bei dieser Altersgruppe handelt es sich um die Generation der „Kriegskinder“, also der Menschen, von denen viele in Deutschland als Kinder den Zweiten Weltkrieg erlebt haben. Es stellt sich die Frage, wie sie diese die Coronakrise erleben und mit ihr umgehen, und ob sie die Kriegszeit damals mit der jetzigen Krisenzeit in Verbindung bringen: Kehren in der Isolation der Coronatage alte Ängste aus den „Bombennächten“ wieder? Um diese Fragen differenziert beantworten zu können, erscheinen Untersuchungen an zeitgeschichtlich definierten Gruppen, wie etwa den Überlebenden des „Hamburger Feuersturms“ sinnvoll. Die als „Operation Gomorrha“ bezeichneten Luftangriffe der Alliierten im Juli 1943 zerstörten weite Teile der Stadt [7] und haben sich als „Hamburger Feuersturm“ im kollektiven Gedächtnis verankert [10].\nIhrem Erleben und der Frage nach der transgenerationalen Weitergabe war das interdisziplinäre Forschungsprojekt „Zeitzeugen des Hamburger Feuersturms und ihre Familien“ [4] gewidmet, das im Zuge der „Kriegskind-Forschung“ [9] entstand und 60 Zeitzeugen und ihre Familien untersucht hat. Dort zeigte sich bei vielen Zeitzeugen neben typischen posttraumatischen Symptomen (Angst vor Bränden, Albträume, reizassoziierte Überempfindlichkeiten und Ängste) überwiegend eine fortbestehende Grunderschütterung im Sinne einer hintergründigen psychischen Labilität trotz äußerlicher Stabilität durch die Erfahrung im Feuersturm [2]. Ihre Bewältigung wurde stark durch die Erfahrungen der Nachkriegszeit beeinflusst [5].\nDas „Erinnerungswerk Hamburger Feuersturm“ (EHF 1943) [www.ehf-1943.de] sammelt als praktische Konsequenz aus diesem Projekt weiter fortlaufend Interviews mit den heute hochbetagten Überlebenden. Mittlerweile wurden im Erinnerungswerk 120 lebensgeschichtliche Interviews von psychodynamisch orientierten Psychotherapeuten durchgeführt. Es handelt sich um die Geburtsjahrgänge 1923–1942; die älteste Interviewte war zum Zeitpunkt des Gesprächs 96 Jahre alt. 14 von ihnen lebten im Heim oder in einer Pflegeeinrichtung. Weiter wird im Erinnerungswerk eine kontinuierliche Gesprächsgruppe für Zeitzeuginnen und Zeitzeugen des Hamburger Feuersturms angeboten.\nAls die Coronakrise im Frühjahr 2020 diese Arbeit unterbrach, haben wir uns gefragt, wie „unsere Zeitzeugen“ mit der Situation des Lockdowns umgehen, und inwieweit es hier zu einem bedrängenden und möglicherweise destabilisierenden Wiedererinnern der existenziellen Grenzerfahrung im Hamburger Feuersturm gekommen war. Erneut wurden diese Fragen im Lockdown im Dezember 2020 vor den bevorstehenden Weihnachtstagen aufgeworfen. 
Es stellten sich insbesondere folgende Fragen:Wie geht es den Zeitzeugen gegenwärtig unter den Bedingungen des Lockdowns?Was macht ihnen besonders zu schaffen?Wie schätzen Sie die allgemeine Lage ein?Werden die Zeitzeugen durch Coronakrise und Lockdown an ihre Erfahrungen in der Kriegs- und Nachkriegszeit erinnert?\nWie geht es den Zeitzeugen gegenwärtig unter den Bedingungen des Lockdowns?\nWas macht ihnen besonders zu schaffen?\nWie schätzen Sie die allgemeine Lage ein?\nWerden die Zeitzeugen durch Coronakrise und Lockdown an ihre Erfahrungen in der Kriegs- und Nachkriegszeit erinnert?", "Telefongespräche Eine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerk Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Infektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 7 Telefongespräche, um die Unterbrechung der Gruppe wegen des „Corona-Lockdowns“ anzukündigen und sich nach dem Ergehen der Gruppenmitglieder zu erkundigen. Nach den Telefonaten wurden kurze Gesprächsnotizen angefertigt.\nZahlreiche Äußerungen der Gruppenteilnehmer zeigen die Art der Bezugnahme des Erlebens der Coronakrise zu den Kriegs- und Nachkriegserfahrungen und illustrieren die später in Fragebogen erhobenen Befunde.\nEine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerk Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Infektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 7 Telefongespräche, um die Unterbrechung der Gruppe wegen des „Corona-Lockdowns“ anzukündigen und sich nach dem Ergehen der Gruppenmitglieder zu erkundigen. Nach den Telefonaten wurden kurze Gesprächsnotizen angefertigt.\nZahlreiche Äußerungen der Gruppenteilnehmer zeigen die Art der Bezugnahme des Erlebens der Coronakrise zu den Kriegs- und Nachkriegserfahrungen und illustrieren die später in Fragebogen erhobenen Befunde.\nFragebogenuntersuchung im Mai 2020 Im Mai 2020 wurde an alle 120 Zeitzeugen, die bisher im Rahmen des Erinnerungswerks interviewt worden waren, ein Fragebogen verschickt. Zu diesem Zeitpunkt war die erste Welle des Infektionsgeschehens schon deutlich im Abklingen. Es war freilich noch ungewiss, wie „es weitergehen würde“. Der Lockdown hatte schon über 6 Wochen angedauert.\nDer Fragebogen bestand aus 23 Fragen zu den Themen des persönlichen Umgangs mit der Krise und dabei erlebten Belastungen, dem Umgang mit der Krise vor dem Hintergrund eigener Erfahrungen in Krieg und Nachkriegszeit sowie zum gesellschaftlichen Umgang mit der Krise, zu Zukunftssorgen und zu erlebten Stützen. Es wurden 5 Antwortkategorien vorgegeben (sehr schlecht (= 1), schlecht (= 2), mittelmäßig (= 3), gut (= 4), sehr gut (= 5); bzw. nein (= 1), eher nein (= 2), teils-teils (= 3), eher ja (= 4), ja (= 5)). Zu zwei Fragen wurden Freitextantworten erbeten.\nEs antworteten 98 (81,7 %), 58 Frauen und 40 Männer, darunter 3 Heimbewohner, mit einem mittleren Alter von 86,5 Jahren, SD = ±3,72). Sie entstammten überwiegend den Geburtsjahrgängen 1932–1938 und waren während des Feuersturms 1943 im Mittel 8 Jahre alt gewesen. 
Insgesamt 5 Bogen (4 %) kamen als unzustellbar zurück, weil die Adressaten mittlerweile verstorben waren.\nDabei erhielten wir von 93 Zeitzeugen (oder 78 %) schon im Zeitraum vom 15. Mai bis zum 20. Juni eine Rückmeldung. Ende Juni sanken die Infektionszahlen deutlich, und es traten erste Grenzöffnungen und Lockerungen in Kraft. Alte Menschen galten freilich weiter als besonders gefährdet.\nIm Mai 2020 wurde an alle 120 Zeitzeugen, die bisher im Rahmen des Erinnerungswerks interviewt worden waren, ein Fragebogen verschickt. Zu diesem Zeitpunkt war die erste Welle des Infektionsgeschehens schon deutlich im Abklingen. Es war freilich noch ungewiss, wie „es weitergehen würde“. Der Lockdown hatte schon über 6 Wochen angedauert.\nDer Fragebogen bestand aus 23 Fragen zu den Themen des persönlichen Umgangs mit der Krise und dabei erlebten Belastungen, dem Umgang mit der Krise vor dem Hintergrund eigener Erfahrungen in Krieg und Nachkriegszeit sowie zum gesellschaftlichen Umgang mit der Krise, zu Zukunftssorgen und zu erlebten Stützen. Es wurden 5 Antwortkategorien vorgegeben (sehr schlecht (= 1), schlecht (= 2), mittelmäßig (= 3), gut (= 4), sehr gut (= 5); bzw. nein (= 1), eher nein (= 2), teils-teils (= 3), eher ja (= 4), ja (= 5)). Zu zwei Fragen wurden Freitextantworten erbeten.\nEs antworteten 98 (81,7 %), 58 Frauen und 40 Männer, darunter 3 Heimbewohner, mit einem mittleren Alter von 86,5 Jahren, SD = ±3,72). Sie entstammten überwiegend den Geburtsjahrgängen 1932–1938 und waren während des Feuersturms 1943 im Mittel 8 Jahre alt gewesen. Insgesamt 5 Bogen (4 %) kamen als unzustellbar zurück, weil die Adressaten mittlerweile verstorben waren.\nDabei erhielten wir von 93 Zeitzeugen (oder 78 %) schon im Zeitraum vom 15. Mai bis zum 20. Juni eine Rückmeldung. Ende Juni sanken die Infektionszahlen deutlich, und es traten erste Grenzöffnungen und Lockerungen in Kraft. Alte Menschen galten freilich weiter als besonders gefährdet.\nFragebogenuntersuchung im Dezember 2020 Mitte Dezember 2020 wurde unter der zweiten Welle der Pandemie ein weiterer Lockdown verordnet. Es standen nun zwar Masken zur Verfügung, und es waren die ersten Impfstoffe entwickelt worden, aber die Infektionszahlen waren massiv angestiegen. Erneut haben wir uns mittels des entwickelten Fragebogens nach dem Ergehen der Zeitzeuginnen und Zeitzeugen zu erkundigt und dabei einige aktuelle Zusatzfragen hinzugefügt. Auf diese Umfrage antworteten jetzt 77 (64 %) der Angeschriebenen, 45 Frauen und 32 Männer mit einem Durchschnittsalter von 86,7 Jahren.\nMitte Dezember 2020 wurde unter der zweiten Welle der Pandemie ein weiterer Lockdown verordnet. Es standen nun zwar Masken zur Verfügung, und es waren die ersten Impfstoffe entwickelt worden, aber die Infektionszahlen waren massiv angestiegen. Erneut haben wir uns mittels des entwickelten Fragebogens nach dem Ergehen der Zeitzeuginnen und Zeitzeugen zu erkundigt und dabei einige aktuelle Zusatzfragen hinzugefügt. Auf diese Umfrage antworteten jetzt 77 (64 %) der Angeschriebenen, 45 Frauen und 32 Männer mit einem Durchschnittsalter von 86,7 Jahren.", "Eine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerk Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Infektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 
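As a purely illustrative aside (not part of the original study materials), the five-step answer categories described above could be coded for analysis with a simple mapping such as the one below; the German category labels are taken from the text, while the function name and everything else are hypothetical.

```python
# Illustrative sketch: numeric coding (1-5) of the verbal answer categories
# described in the text. The mapping mirrors the labels given there.
ANSWER_CODES = {
    "nein": 1, "eher nein": 2, "teils-teils": 3, "eher ja": 4, "ja": 5,
    "sehr schlecht": 1, "schlecht": 2, "mittelmäßig": 3, "gut": 4, "sehr gut": 5,
}

def code_answer(label: str) -> int:
    """Return the numeric code (1-5) for a verbal answer category."""
    return ANSWER_CODES[label.strip().lower()]

print(code_answer("eher ja"))  # -> 4
```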
Questionnaire survey in December 2020

In mid-December 2020, under the second wave of the pandemic, a further lockdown was ordered. Masks were now available and the first vaccines had been developed, but infection numbers had risen massively. Using the questionnaire we had developed, we again enquired how the contemporary witnesses were faring and added some questions on current issues. 77 (64%) of those contacted responded to this survey, 45 women and 32 men with a mean age of 86.7 years.
Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten. „Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“.\nFür einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“.\nImmer wieder werden in den Telefonaten die „Jungen“ erwähnt, die so etwas ja nicht gewohnt seien: „Für die Jungen ist es ja das erste Mal, dass sie zum Nachdenken kommen“, meint eine Zeitzeugin (92 Jahre). Beim Einkaufen in der Kassenschlange habe sie kürzlich eine junge Frau zu ihrer Freundin sagen gehört: „Ich bin so wütend und verzweifelt. Ich hatte eine Theaterkarte, und jetzt fällt das aus“. Da habe sie sich ins Gespräch gemischt: „Nun machen Sie mal nicht so ein Gesicht! Wir haben damals gehungert und die Bomben auf den Kopf gekriegt. Sie haben doch nicht ernsthaft was auszustehen, nur weil die Theatervorstellung ausfällt, mein Kind“. Für einen anderen Zeitzeuge (82 Jahre), der seine Aktivität in Hamburger Schulen durchaus bewusst als eine Art Selbsttherapie begreift, ist es v. a. schwer zu ertragen, dass dies nun unter Coronabedingungen ins Stocken geraten ist.\nAllen Zeitzeuginnen und Zeitzeugen schien es generell wichtig, die „Fassung zu bewahren“. In 5 Telefonaten wurde die Kriegszeit thematisiert. Die Gesprächspartner kamen im Umgang mit dem Lockdown auf Haltungen und Erfahrungen in der Kriegs- und Nachkriegszeit zurück und schienen diese als Folie der Bewältigung, Einordung und Relativierung der Krise zu nutzen. Gelegentlich schienen projektive Elemente bzw. Vorurteile die Bewältigung zu stabilisieren. Im Eindruck der Gruppenleiterin wurden auch untergründige Beunruhigungen und drohende Instabilitäten spürbar, aber auch ein Stolz, dass man schon Schlimmeres erlebt und überstanden hat.\nSo fühlt sich eine Zeitzeugin (86 Jahre) an die Ausgangssperre 1945 erinnert, „als die Engländer kamen und die Lebensmittelregale leer waren“. Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. 
Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten. „Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“.\nFür einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“.\nImmer wieder werden in den Telefonaten die „Jungen“ erwähnt, die so etwas ja nicht gewohnt seien: „Für die Jungen ist es ja das erste Mal, dass sie zum Nachdenken kommen“, meint eine Zeitzeugin (92 Jahre). Beim Einkaufen in der Kassenschlange habe sie kürzlich eine junge Frau zu ihrer Freundin sagen gehört: „Ich bin so wütend und verzweifelt. Ich hatte eine Theaterkarte, und jetzt fällt das aus“. Da habe sie sich ins Gespräch gemischt: „Nun machen Sie mal nicht so ein Gesicht! Wir haben damals gehungert und die Bomben auf den Kopf gekriegt. Sie haben doch nicht ernsthaft was auszustehen, nur weil die Theatervorstellung ausfällt, mein Kind“. Für einen anderen Zeitzeuge (82 Jahre), der seine Aktivität in Hamburger Schulen durchaus bewusst als eine Art Selbsttherapie begreift, ist es v. a. schwer zu ertragen, dass dies nun unter Coronabedingungen ins Stocken geraten ist.\nAnalyse der Fragebogen Tab. 1 zeigt die Ergebnisse des Fragebogenrücklaufs zu beiden Zeitpunkten im Mai und Dezember 2020. Die Frage 11: „Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?“ thematisiert die Erfahrungen im Feuersturm. Zu diesem Item wurde die Korrelation mit den anderen Items des Fragebogens berechnet.Umfragen zur CoronapandemieMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 11pMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 
11pNr.FragetextUmfrage 1 (n = 98) Mai 2020Umfrage 2 (n = 77) Dezember 20201Wie geht es Ihnen gegenwärtig gesundheitlich?3,570,820,080,413,510,660,050,602Haben Sie persönlich Angst, sich am Coronavirus anzustecken?2,170,930,100,352,460,980,33<0,0013Haben Sie für sich selbst vorgesorgt und Vorräte angelegt?2,151,300,130,222,151,310,080,414Halten Sie sich streng an die „Coronaregeln“ (Abstandsgebote, Kontaktsperre, Hygieneregeln usw.)?4,410,760,110,284,470,750,160,115Mussten Sie Ihre Kontakte einschränken?4,071,160,090,404,011,130,070,536Ist Ihnen das schwergefallen?3,541,450,130,213,891,260,120,237Falls ja, was war hier das Schwerste?Freitextantworten8Haben Sie körperliche oder seelische Symptome, die Sie auf diese Krise zurückführen?1,841,170,120,251,921,050,100,329Hat Ihr Schlaf unter der derzeitigen Krise gelitten?1,691,100,260,011,901,070,320,00110Haben Sie angesichts der gegenwärtigen Bedrohung durch Corona Ängste, die schlimmer sind, als sie eigentlich sein müssten, und gegen die Sie sich nicht wehren können?1,611,020,270,011,740,980,40<0,00111Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?2,351,39––2,481,49––12Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen in der Nachkriegszeit?2,651,500,58<0,0012,641,440,64<0,00113Empfinden Sie die Coronakrise im Vergleich zur Kriegs- und Nachkriegszeit als eher harmlos?2,931,48−0,300,012,571,39−0,100,1614Haben Sie den Eindruck, die Coronakrise wird von der Politik dramatisiert?1,961,060,050,661,911,16−0,100,1815Haben Sie den Eindruck, dass die Politik sich ausreichend um die älteren Menschen kümmert?3,211,12−0,100,303,331,13−0,100,5616Haben Sie den Eindruck, dass die jüngeren Menschen gut verstehen, wie es den Älteren in der Krise geht?2,761,2−0,10,152,821,27−0,100,2217Fürchten Sie ernsthafte volkswirtschaftliche Schäden für Deutschland?4,140,9600,984,250,900,8718Belasten Sie solche Befürchtungen?3,081,220,070,503,111,240,200,0519Fürchten Sie ernste wirtschaftliche Folgen auch für sich persönlich (z. B. Inflation, Besteuerung des Ersparten, Rentenkürzungen)?2,861,310,090,363,011,3800,8820Belasten Sie solche Befürchtungen?2,431,260,140,182,571,350,120,2521Haben Sie die Erwartung, dass Deutschland gut durch die Krise kommen wird?3,940,9101,003,710,8400,9422Gibt es etwas, das Ihnen jetzt in der Krise besonders hilft?Freitextantworten23Sind Ihnen hier Ihre Erfahrungen in der Kriegs- oder Nachkriegszeit nützlich?3,631,530,260,013,611,450,240,0224Macht Ihnen die Dauer der Krise zu schaffen?––––3,751,030,020,8425Haben Sie unter Einsamkeit oder Alleinsein zu leiden?––––2,331,310,020,8626Mussten Sie selbst in Quarantäne oder waren besonderer Kontaktbeschränkung ausgesetzt?––––1,030,15−0,100,2127Ist jemand aus Ihrem persönlichen Umfeld am Coronavirus erkrankt?––––1,140,350,230,0228Ist jemand aus Ihrem persönlichen Umfeld am oder mit dem Coronavirus verstorben?––––1,040,190,190,0629Werden Sie selbst sich impfen lassen, sobald ein Impfstoff zur Verfügung steht?––––4,031,30−0,100,4530Fühlen Sie sich gegenwärtig durch die Coronapandemie stärker belastet als im Frühjahr während des ersten Lockdowns?––––3,231,400,150,1431Sind die coronabedingten Einschränkungen zu Weihnachten für Sie besonders schwer auszuhalten?––––2,591,370,160,1132Ist für Sie das Feuerwerk zu Silvester mit Erinnerungen an den Feuersturm verbunden?––––2,331,730,180,09\nZum allgemeinen Umgang mit der Coronakrise geben die Befragten in der ersten Umfrage im Mai 2020 an, dass es ihnen gesundheitlich relativ gut geht (m = 3,57). 
Sie berichten, die „Coronaregeln“ (m = 4,40) genau zu befolgen. Die damit verbundenen Kontakteinschränkungen (m = 4,07) auf sich zu nehmen, ist überwiegend schwergefallen (m = 3,54). Weniger geben die Befragen an, Vorräte angelegt zu haben (m = 2,17). Obwohl sie der „Risikogruppe“ angehören, ist die Angst vor eigener Ansteckung ebenso relativ gering ausgeprägt (m = 2,17). Körperliche oder seelische Symptome werden in noch geringerem Ausmaß auf die Coronakrise zurückgeführt (m = 1,84), zu Schlafstörungen kommt es kaum (m = 1,69), und über vermehrte Ängste wird vergleichsweise wenig berichtet (m = 1,61). Insgesamt stellen sich die Befragten also stabil und wenig beeinträchtigt dar.\nDie Durchschnittswerte der Angaben zum historischen Vergleich liegen im mittleren Bereich. Dies gilt sowohl für die Wiedererinnerung an die Nachkriegszeit (m = 2,65) als auch an den Feuersturm (m = 2,35).\nBei den Angaben zum gesellschaftlichen Umgang mit der Krise und zu Zukunftserwartungen stehen wirtschaftliche Sorgen im Vordergrund: Überwiegend wird befürchtet, dass es zu volkswirtschaftlichen Schäden für Deutschland kommt (m = 4,13). Diese belasten (m = 3,08), dabei ist die Sorge um die eigene wirtschaftliche Zukunft weniger ausgeprägt (m = 2,86). Überwiegend wird erwartet, dass Deutschland gut durch die Krise kommen wird (m = 3,94).\nDie eigenen Erfahrungen in Kriegs- und Nachkriegszeit werden im Umgang mit der Krise als nützlich erlebt (m = 3,62). Dieser Wert liegt deutlich höher als die Durchschnittswerte der Items 11 und 12, die eine belastende und ängstigende Wiedererinnerung thematisieren.\nTab. 1 zeigt die Ergebnisse des Fragebogenrücklaufs zu beiden Zeitpunkten im Mai und Dezember 2020. Die Frage 11: „Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?“ thematisiert die Erfahrungen im Feuersturm. Zu diesem Item wurde die Korrelation mit den anderen Items des Fragebogens berechnet.Umfragen zur CoronapandemieMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 11pMittelwert (m)Standardabweichung (s)Korrelation (r) mit Item Nr. 
11pNr.FragetextUmfrage 1 (n = 98) Mai 2020Umfrage 2 (n = 77) Dezember 20201Wie geht es Ihnen gegenwärtig gesundheitlich?3,570,820,080,413,510,660,050,602Haben Sie persönlich Angst, sich am Coronavirus anzustecken?2,170,930,100,352,460,980,33<0,0013Haben Sie für sich selbst vorgesorgt und Vorräte angelegt?2,151,300,130,222,151,310,080,414Halten Sie sich streng an die „Coronaregeln“ (Abstandsgebote, Kontaktsperre, Hygieneregeln usw.)?4,410,760,110,284,470,750,160,115Mussten Sie Ihre Kontakte einschränken?4,071,160,090,404,011,130,070,536Ist Ihnen das schwergefallen?3,541,450,130,213,891,260,120,237Falls ja, was war hier das Schwerste?Freitextantworten8Haben Sie körperliche oder seelische Symptome, die Sie auf diese Krise zurückführen?1,841,170,120,251,921,050,100,329Hat Ihr Schlaf unter der derzeitigen Krise gelitten?1,691,100,260,011,901,070,320,00110Haben Sie angesichts der gegenwärtigen Bedrohung durch Corona Ängste, die schlimmer sind, als sie eigentlich sein müssten, und gegen die Sie sich nicht wehren können?1,611,020,270,011,740,980,40<0,00111Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?2,351,39––2,481,49––12Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen in der Nachkriegszeit?2,651,500,58<0,0012,641,440,64<0,00113Empfinden Sie die Coronakrise im Vergleich zur Kriegs- und Nachkriegszeit als eher harmlos?2,931,48−0,300,012,571,39−0,100,1614Haben Sie den Eindruck, die Coronakrise wird von der Politik dramatisiert?1,961,060,050,661,911,16−0,100,1815Haben Sie den Eindruck, dass die Politik sich ausreichend um die älteren Menschen kümmert?3,211,12−0,100,303,331,13−0,100,5616Haben Sie den Eindruck, dass die jüngeren Menschen gut verstehen, wie es den Älteren in der Krise geht?2,761,2−0,10,152,821,27−0,100,2217Fürchten Sie ernsthafte volkswirtschaftliche Schäden für Deutschland?4,140,9600,984,250,900,8718Belasten Sie solche Befürchtungen?3,081,220,070,503,111,240,200,0519Fürchten Sie ernste wirtschaftliche Folgen auch für sich persönlich (z. B. Inflation, Besteuerung des Ersparten, Rentenkürzungen)?2,861,310,090,363,011,3800,8820Belasten Sie solche Befürchtungen?2,431,260,140,182,571,350,120,2521Haben Sie die Erwartung, dass Deutschland gut durch die Krise kommen wird?3,940,9101,003,710,8400,9422Gibt es etwas, das Ihnen jetzt in der Krise besonders hilft?Freitextantworten23Sind Ihnen hier Ihre Erfahrungen in der Kriegs- oder Nachkriegszeit nützlich?3,631,530,260,013,611,450,240,0224Macht Ihnen die Dauer der Krise zu schaffen?––––3,751,030,020,8425Haben Sie unter Einsamkeit oder Alleinsein zu leiden?––––2,331,310,020,8626Mussten Sie selbst in Quarantäne oder waren besonderer Kontaktbeschränkung ausgesetzt?––––1,030,15−0,100,2127Ist jemand aus Ihrem persönlichen Umfeld am Coronavirus erkrankt?––––1,140,350,230,0228Ist jemand aus Ihrem persönlichen Umfeld am oder mit dem Coronavirus verstorben?––––1,040,190,190,0629Werden Sie selbst sich impfen lassen, sobald ein Impfstoff zur Verfügung steht?––––4,031,30−0,100,4530Fühlen Sie sich gegenwärtig durch die Coronapandemie stärker belastet als im Frühjahr während des ersten Lockdowns?––––3,231,400,150,1431Sind die coronabedingten Einschränkungen zu Weihnachten für Sie besonders schwer auszuhalten?––––2,591,370,160,1132Ist für Sie das Feuerwerk zu Silvester mit Erinnerungen an den Feuersturm verbunden?––––2,331,730,180,09\nZum allgemeinen Umgang mit der Coronakrise geben die Befragten in der ersten Umfrage im Mai 2020 an, dass es ihnen gesundheitlich relativ gut geht (m = 3,57). 
Analyse der Textantworten in der ersten Befragung (Mai 2020)
Erleben der Kontakteinschränkung
Auf die Frage 7 (Was war das Schwerste bei der Einhaltung der Kontakteinschränkungen?) wurde in 65 Fragebogen geantwortet. Die Inhalte der Antworten ließen sich thematisch wie folgt gruppieren (Tab. 2).
Tab. 2 Antworten auf Frage 7 (Anzahl der Nennungen):
Unterbrochene Kontakte
– Familienangehörige, Freunde und Bekannte nicht treffen zu können: 24
– Verlust von gewohnten Aktivitäten oder Reisen: 14
– Nicht-umarmen-Können in der Familie: 10
– Allgemeine Kontakt- und Besuchsreduktion: 8
– Wichtige Geburtstage nicht in der Familie feiern zu können: 4
Erleben von Einsamkeit und Restriktion
– Alleinsein: 4
– Eigene Krankenbesuche sind nicht möglich: 4
– Erleben von Eingesperrtsein: 1
– Kein Ausgang: 1
– Kontakt nur am Telefon: 1
– Zu wenig Bewegung: 1
– Explizit „nicht viel vermisst“: 3
Die Antworten betonen die Personen, mit denen Kontakte nicht möglich oder unterbunden waren. Besonders schmerzlich wurde erlebt, „dass man sich in der Familie nicht mehr umarmen kann“, ebenso wenn eigene Besuche im Pflegeheim nicht möglich waren.
Hilfe in der Krise
Zu der Frage 22 (Gibt es etwas, das Ihnen jetzt in der Krise besonders hilft?) gingen in 88 Fragebogen Antworten ein, die sich in folgende Gruppen zusammenfassen ließen (Tab. 3).
Tab. 3 Antworten auf Frage 22 (Anzahl der Nennungen):
Stützendes Erleben in Partnerschaft und Familie
– Familie allgemein: 8
– Kinder: 8
– Partnerschaft: 4
– Hund: 2
– Enkel: 1
Eigene Einstellungen
– Optimismus, Glaube und Zuversicht: 10
– Kraft aus den eigenen Erfahrungen in Krieg und Nachkriegszeit: 7
– Erfahrung und Wissen: 5
– Eigenes Gesundheitsverhalten und Vorsicht: 3
– Ruhe bewahren und nicht zu viel nachdenken: 2
Sozialer Zusammenhalt und günstige eigene Aktivitäten und Lebensumstände
– Nachbarn, Hilfe im Alltag, Möglichkeit, selbst zu helfen: 12
– Natur und Bewegung, Hobby: 10
– Kommunikation über Telefon, E‑Mail oder WhatsApp: 8
– Freundschaften: 5
– Gute eigene Lebenssituation: 5
– Vertrauen in die Politik: 4
– Hoffnung auf Besserung und Lockerung der Maßnahmen: 3
– Sinnlicher Genuss im Rahmen des Möglichen: 2
– Gottesdienstbesuche: 1
– Gespräche: 1
– Ruhe und Schlaf: 1
– Keine besondere Hilfe: 4
Neben dem stützenden Erleben durch Kontakte und Gespräche in der Familie steht die Erfahrung eines sozialen Miteinanders im Vordergrund. Weiter nennen die Befragten neben „Wissen und Lebenserfahrung“ häufig positive Lebenseinstellungen als hilfreich. Sie zeigen sich gelassen und optimistisch und geben an, dass die Kriegserfahrung ihnen helfe, das Erleben der Kontakteinschränkung zu relativieren: „Wir hungern und frieren ja nicht“. Natur und Hobby sind weitere häufig genannte Kraftquellen.
Verändern sich die Befunde beim zweiten Lockdown?
In der zweiten Umfrage im Dezember 2020 ergibt sich im Wesentlichen eine Konstanz der Befunde. Die größten Unterschiede gab es bei der Frage 6 (Ist Ihnen die Kontakteinschränkung schwergefallen? – hier stieg der Mittelwert um 0,4 auf m = 3,89) und bei der Frage 13 (Empfinden Sie die Coronakrise im Vergleich zur Kriegs- und Nachkriegszeit als harmlos? – hier sank der Mittelwert um 0,4 auf m = 2,57).
Die Dauer der Krise macht den Zeitzeugen deutlich zu schaffen (m = 3,75). Einsamkeit scheint eher nicht das zentrale Problem zu sein (m = 2,33). Die meisten Zeitzeugen wollen sich impfen lassen (m = 4,0). Mittlere Zustimmung erfahren Fragen nach einer besonderen Belastung durch die bevorstehenden Weihnachtstage (m = 2,59) oder durch das Silvesterfeuerwerk (m = 2,33).
Erleben der Coronakrise vor dem Hintergrund der Feuersturmerfahrung
Bei dem Item 11 (Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?) antworteten 13 Befragte auf der höchsten Stufe (5) mit „Ja“. Von ihnen beantworteten 11 Befragte dieses Item in der zweiten Befragung erneut auf der höchsten Stufe; in einem Fall ist der Wert von 5 auf 4 gesunken, in einem anderen Fall von 5 auf 2. Die Korrelationsberechnungen mit den anderen Items des Fragebogens (Spalte 3 und 7 der Tab. 1) zeigen zu beiden Zeitpunkten bei 3 Variablen eine dem Betrag nach kleine, jedoch signifikante Korrelation mit den Antworten auf Frage 11 (p-Wert < 0,05): Schlafprobleme (0,26 bzw. 0,32), übertriebene Ängste (0,27 bzw. 0,40) sowie der Eindruck einer Nützlichkeit der Kriegserfahrungen für die persönliche Bewältigung der Coronakrise (0,26 bzw. 0,24).
Eine stark signifikante Korrelation (0,58 bzw. 0,64; p-Wert < 0,001) ergibt sich für die Frage nach dem Erinnertwerden an die Nachkriegszeit. Allerdings korrelieren hier meist die niedrigen Werte; die Befragten, die sich durch die Coronakrise nicht an den Hamburger Feuersturm erinnert fühlten, sahen sich auch nicht an die Nachkriegszeit erinnert. Eine weitere signifikante (negative) Korrelation mit dem Erleben, die Coronakrise sei im Vergleich zur Kriegs- und Nachkriegszeit harmlos (−0,3), zeigt sich in der zweiten Befragung nicht mehr (−0,1).
Bezüge zu den Inhalten der lebensgeschichtlichen Interviews
Es stellte sich nun die Frage, welche konkreten Erfahrungen im Feuersturm die 13 Rücksender gemacht haben, die auf die Frage 11 (Erinnert die gegenwärtige Krise Sie an Ihre Erfahrungen im Hamburger Feuersturm?)
auf der Endstufe der Skalierung (ja = 5) geantwortet haben. Dazu haben wir in unserer Forschungsgruppe die in den im Erinnerungswerk bereits vorliegenden transkribierten Interviews dieser Rücksender herangezogen und ad hoc zentrale Grundzüge der geschilderten Erfahrungen im Feuersturm und der Kriegs- und Nachkriegszeit auf einer Skala von 0 bis 3 gemeinsam eingeschätzt. Dabei zeigte sich eine massive Belastung dieser Zeitzeugen weit über die eigentliche Todesbedrohung im Feuersturm (m = 1,96) hinaus. Im Vordergrund stehen hier die Folgen der Ausbombung mit der damit verbundenen Entwurzelung (m = 2,38), neben weiteren Nöten und Entbehrungen in der Kriegszeit (m = 1,92) und der Nachkriegszeit (m = 1,92). Nur ein Zeitzeuge dieser Gruppe war vergleichsweise gering belastet.", "Allen Zeitzeuginnen und Zeitzeugen schien es generell wichtig, die „Fassung zu bewahren“. In 5 Telefonaten wurde die Kriegszeit thematisiert. Die Gesprächspartner kamen im Umgang mit dem Lockdown auf Haltungen und Erfahrungen in der Kriegs- und Nachkriegszeit zurück und schienen diese als Folie der Bewältigung, Einordung und Relativierung der Krise zu nutzen. Gelegentlich schienen projektive Elemente bzw. Vorurteile die Bewältigung zu stabilisieren. Im Eindruck der Gruppenleiterin wurden auch untergründige Beunruhigungen und drohende Instabilitäten spürbar, aber auch ein Stolz, dass man schon Schlimmeres erlebt und überstanden hat.\nSo fühlt sich eine Zeitzeugin (86 Jahre) an die Ausgangssperre 1945 erinnert, „als die Engländer kamen und die Lebensmittelregale leer waren“. Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten. „Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“.\nFür einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“.\nImmer wieder werden in den Telefonaten die „Jungen“ erwähnt, die so etwas ja nicht gewohnt seien: „Für die Jungen ist es ja das erste Mal, dass sie zum Nachdenken kommen“, meint eine Zeitzeugin (92 Jahre). Beim Einkaufen in der Kassenschlange habe sie kürzlich eine junge Frau zu ihrer Freundin sagen gehört: „Ich bin so wütend und verzweifelt. Ich hatte eine Theaterkarte, und jetzt fällt das aus“. Da habe sie sich ins Gespräch gemischt: „Nun machen Sie mal nicht so ein Gesicht! Wir haben damals gehungert und die Bomben auf den Kopf gekriegt. Sie haben doch nicht ernsthaft was auszustehen, nur weil die Theatervorstellung ausfällt, mein Kind“. 
Für einen anderen Zeitzeuge (82 Jahre), der seine Aktivität in Hamburger Schulen durchaus bewusst als eine Art Selbsttherapie begreift, ist es v. a. schwer zu ertragen, dass dies nun unter Coronabedingungen ins Stocken geraten ist.", "Unsere explorative Querschnittsstudie gibt einen ersten Einblick in das Erleben der Coronakrise in der Generation der „Kriegskinder des Zweiten Weltkriegs“.
Der hohe Rücklauf der Fragebogen in der Gruppe der Hochbetagten (81,7 %, 64 %) spricht für die subjektive Relevanz der Fragestellung bei den Befragten und ihre Bindung an das Erinnerungswerk. Insgesamt zeigen sich die Überlebenden des Hamburger Feuersturms am Anfang des ersten Coronalockdowns im Frühjahr 2020 wie auch im Dezember 2020 überwiegend gelassen und stabil. Gleichzeitig wird deutlich, dass die verhängten Kontaktbeschränkungen, besonders die Einschränkung des unmittelbaren, auch körperlichen Kontakts in der Familie, erheblich zu schaffen machen. Die Sorge um die eigene Gesundheit und die Angst vor Ansteckung ist weniger ausgeprägt. Als ein wesentliches Element, gut durch die Krise zu kommen, wird eine positive eigene Einstellung gesehen. Möglicherweise werden hier Haltungen im Sinne einer „generationsbedingten Genügsamkeit“ [6] aktiviert, die in der Kriegszeit ausgebildet wurden (z. B. Optimismus als Überlebensprinzip, Ruhe bewahren), aber auch Erfahrung und Wissen werden als wichtig erachtet. Freilich finden wir in den Fragebogen auch eine Untergruppe von ca. 13 % der Antwortenden, die einen erinnernden Bezug zwischen den Feuersturm- und den Coronaerfahrungen angeben. Unsere Befunde sprechen insgesamt dafür, dass bei schwer betroffenen „Kriegskindern“ des II. Weltkriegs die Kriegserfahrungen in ihrem Selbsterleben als stabilisierendes inneres Referenzsystem zur Bewältigung und Relativierung der Coronakrise dienen können, dass aber auch schmerzhaftes und schwieriges „Wiedererinnern“ vorkommt.", "Es erscheint sinnvoll, sich gezielt dem zeitgeschichtlich geprägten Erleben in der Altersgruppe der Hochbetagten zuzuwenden. Dies erscheint umso notwendiger, als z. B. eine groß angelegte nationale Studie zum Erleben der Coronakrise die über 79-Jährigen nicht einbezieht [8]. Ihre kollektive Benennung als „Hochrisikogruppe“ kann diskriminierend wirken und die „Krisenfestigkeit“ der einzelnen Menschen überblenden [1].", "
– Die befragten Überlebenden des Hamburger Feuersturms schildern sich und ihre persönliche Situation als überwiegend stabil.
– Vor allem die mit dem Lockdown verbundenen Kontaktbeschränkungen werden als belastend erlebt.
– Die Kriegserfahrung hilft zur Relativierung der gegenwärtigen Krise.
– Eine Untergruppe erlebt starke Bezüge zur Feuersturmerfahrung und zu anhaltenden Belastungen in der Kriegs- und Nachkriegszeit.
– Generell sollten die Erfahrungen im Zweiten Weltkrieg in der psychosozialen Betreuung der heute über 80-Jährigen berücksichtigt werden." ]
[ "Coronakrise", "Luftkrieg", "Psychische Folgen des Zweiten Weltkriegs", "Bewältigung", "Corona crisis", "Air war", "Psychological consequences of the Second World War", "Coping" ]
Einleitung: Die Coronapandemie und der damit verbundene erste Lockdown im Frühjahr 2020 mit seinen massiven Kontakteinschränkungen haben viele Menschen unvorbereitet getroffen. Für die Altenheime und Pflegeeinrichtungen galten besonders strenge Regeln der Kontaktvermeidung, da die Gruppe der Alten als Hochrisikogruppe galt. Bei dieser Altersgruppe handelt es sich um die Generation der „Kriegskinder“, also der Menschen, von denen viele in Deutschland als Kinder den Zweiten Weltkrieg erlebt haben. Es stellt sich die Frage, wie sie die Coronakrise erleben und mit ihr umgehen, und ob sie die Kriegszeit damals mit der jetzigen Krisenzeit in Verbindung bringen: Kehren in der Isolation der Coronatage alte Ängste aus den „Bombennächten“ wieder? Um diese Fragen differenziert beantworten zu können, erscheinen Untersuchungen an zeitgeschichtlich definierten Gruppen, wie etwa den Überlebenden des „Hamburger Feuersturms“, sinnvoll. Die als „Operation Gomorrha“ bezeichneten Luftangriffe der Alliierten im Juli 1943 zerstörten weite Teile der Stadt [7] und haben sich als „Hamburger Feuersturm“ im kollektiven Gedächtnis verankert [10]. Ihrem Erleben und der Frage nach der transgenerationalen Weitergabe war das interdisziplinäre Forschungsprojekt „Zeitzeugen des Hamburger Feuersturms und ihre Familien“ [4] gewidmet, das im Zuge der „Kriegskind-Forschung“ [9] entstand und 60 Zeitzeugen und ihre Familien untersucht hat. Dort zeigte sich bei vielen Zeitzeugen neben typischen posttraumatischen Symptomen (Angst vor Bränden, Albträume, reizassoziierte Überempfindlichkeiten und Ängste) überwiegend eine fortbestehende Grunderschütterung im Sinne einer hintergründigen psychischen Labilität trotz äußerlicher Stabilität durch die Erfahrung im Feuersturm [2]. Ihre Bewältigung wurde stark durch die Erfahrungen der Nachkriegszeit beeinflusst [5]. Das „Erinnerungswerk Hamburger Feuersturm“ (EHF 1943) [www.ehf-1943.de] sammelt als praktische Konsequenz aus diesem Projekt weiter fortlaufend Interviews mit den heute hochbetagten Überlebenden. Mittlerweile wurden im Erinnerungswerk 120 lebensgeschichtliche Interviews von psychodynamisch orientierten Psychotherapeuten durchgeführt. Es handelt sich um die Geburtsjahrgänge 1923–1942; die älteste Interviewte war zum Zeitpunkt des Gesprächs 96 Jahre alt. 14 von ihnen lebten im Heim oder in einer Pflegeeinrichtung. Weiter wird im Erinnerungswerk eine kontinuierliche Gesprächsgruppe für Zeitzeuginnen und Zeitzeugen des Hamburger Feuersturms angeboten. Als die Coronakrise im Frühjahr 2020 diese Arbeit unterbrach, haben wir uns gefragt, wie „unsere Zeitzeugen“ mit der Situation des Lockdowns umgehen, und inwieweit es hier zu einem bedrängenden und möglicherweise destabilisierenden Wiedererinnern der existenziellen Grenzerfahrung im Hamburger Feuersturm gekommen war. Erneut wurden diese Fragen im Lockdown im Dezember 2020 vor den bevorstehenden Weihnachtstagen aufgeworfen. Es stellten sich insbesondere folgende Fragen:
– Wie geht es den Zeitzeugen gegenwärtig unter den Bedingungen des Lockdowns?
– Was macht ihnen besonders zu schaffen?
– Wie schätzen sie die allgemeine Lage ein?
– Werden die Zeitzeugen durch Coronakrise und Lockdown an ihre Erfahrungen in der Kriegs- und Nachkriegszeit erinnert?
Methodik:
Telefongespräche: Eine kontinuierliche Gesprächsgruppe mit Zeitzeugen im Rahmen des Erinnerungswerks Hamburger Feuersturm bestand im März 2020 aus 9 Mitgliedern (4 Frauen, 5 Männer). Die Infektionszahlen waren noch relativ gering, es bestand jedoch ein Mangel an Masken und Desinfektionsmitteln, und Kenntnisse über das Virus waren wenig verbreitet. Am 24.03.2020 führte die Leiterin der Gruppe (S. L.) 7 Telefongespräche, um die Unterbrechung der Gruppe wegen des „Corona-Lockdowns“ anzukündigen und sich nach dem Ergehen der Gruppenmitglieder zu erkundigen. Nach den Telefonaten wurden kurze Gesprächsnotizen angefertigt. Zahlreiche Äußerungen der Gruppenteilnehmer zeigen die Art der Bezugnahme des Erlebens der Coronakrise zu den Kriegs- und Nachkriegserfahrungen und illustrieren die später im Fragebogen erhobenen Befunde.
Fragebogenuntersuchung im Mai 2020: Im Mai 2020 wurde an alle 120 Zeitzeugen, die bisher im Rahmen des Erinnerungswerks interviewt worden waren, ein Fragebogen verschickt. Zu diesem Zeitpunkt war die erste Welle des Infektionsgeschehens schon deutlich im Abklingen. Es war freilich noch ungewiss, wie „es weitergehen würde“. Der Lockdown hatte schon über 6 Wochen angedauert. Der Fragebogen bestand aus 23 Fragen zu den Themen des persönlichen Umgangs mit der Krise und dabei erlebten Belastungen, dem Umgang mit der Krise vor dem Hintergrund eigener Erfahrungen in Krieg und Nachkriegszeit sowie zum gesellschaftlichen Umgang mit der Krise, zu Zukunftssorgen und zu erlebten Stützen. Es wurden 5 Antwortkategorien vorgegeben (sehr schlecht (= 1), schlecht (= 2), mittelmäßig (= 3), gut (= 4), sehr gut (= 5); bzw. nein (= 1), eher nein (= 2), teils-teils (= 3), eher ja (= 4), ja (= 5)). Zu zwei Fragen wurden Freitextantworten erbeten. Es antworteten 98 (81,7 %), 58 Frauen und 40 Männer, darunter 3 Heimbewohner, mit einem mittleren Alter von 86,5 Jahren (SD = ±3,72). Sie entstammten überwiegend den Geburtsjahrgängen 1932–1938 und waren während des Feuersturms 1943 im Mittel 8 Jahre alt gewesen. Insgesamt 5 Bogen (4 %) kamen als unzustellbar zurück, weil die Adressaten mittlerweile verstorben waren. Dabei erhielten wir von 93 Zeitzeugen (oder 78 %) schon im Zeitraum vom 15. Mai bis zum 20. Juni eine Rückmeldung. Ende Juni sanken die Infektionszahlen deutlich, und es traten erste Grenzöffnungen und Lockerungen in Kraft. Alte Menschen galten freilich weiter als besonders gefährdet.
Fragebogenuntersuchung im Dezember 2020: Mitte Dezember 2020 wurde unter der zweiten Welle der Pandemie ein weiterer Lockdown verordnet. Es standen nun zwar Masken zur Verfügung, und es waren die ersten Impfstoffe entwickelt worden, aber die Infektionszahlen waren massiv angestiegen. Erneut haben wir uns mittels des entwickelten Fragebogens nach dem Ergehen der Zeitzeuginnen und Zeitzeugen erkundigt und dabei einige aktuelle Zusatzfragen hinzugefügt. Auf diese Umfrage antworteten jetzt 77 (64 %) der Angeschriebenen, 45 Frauen und 32 Männer mit einem Durchschnittsalter von 86,7 Jahren.
Ergebnisse: Befunde aus den Telefongesprächen: Allen Zeitzeuginnen und Zeitzeugen schien es generell wichtig, die „Fassung zu bewahren“. In 5 Telefonaten wurde die Kriegszeit thematisiert. Die Gesprächspartner kamen im Umgang mit dem Lockdown auf Haltungen und Erfahrungen in der Kriegs- und Nachkriegszeit zurück und schienen diese als Folie der Bewältigung, Einordnung und Relativierung der Krise zu nutzen. Gelegentlich schienen projektive Elemente bzw. Vorurteile die Bewältigung zu stabilisieren. Im Eindruck der Gruppenleiterin wurden auch untergründige Beunruhigungen und drohende Instabilitäten spürbar, aber auch ein Stolz, dass man schon Schlimmeres erlebt und überstanden hat. So fühlt sich eine Zeitzeugin (86 Jahre) an die Ausgangssperre 1945 erinnert, „als die Engländer kamen und die Lebensmittelregale leer waren“. Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten.
„Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“. Für einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“. Immer wieder werden in den Telefonaten die „Jungen“ erwähnt, die so etwas ja nicht gewohnt seien: „Für die Jungen ist es ja das erste Mal, dass sie zum Nachdenken kommen“, meint eine Zeitzeugin (92 Jahre). Beim Einkaufen in der Kassenschlange habe sie kürzlich eine junge Frau zu ihrer Freundin sagen gehört: „Ich bin so wütend und verzweifelt. Ich hatte eine Theaterkarte, und jetzt fällt das aus“. Da habe sie sich ins Gespräch gemischt: „Nun machen Sie mal nicht so ein Gesicht! Wir haben damals gehungert und die Bomben auf den Kopf gekriegt. Sie haben doch nicht ernsthaft was auszustehen, nur weil die Theatervorstellung ausfällt, mein Kind“. Für einen anderen Zeitzeuge (82 Jahre), der seine Aktivität in Hamburger Schulen durchaus bewusst als eine Art Selbsttherapie begreift, ist es v. a. schwer zu ertragen, dass dies nun unter Coronabedingungen ins Stocken geraten ist. Allen Zeitzeuginnen und Zeitzeugen schien es generell wichtig, die „Fassung zu bewahren“. In 5 Telefonaten wurde die Kriegszeit thematisiert. Die Gesprächspartner kamen im Umgang mit dem Lockdown auf Haltungen und Erfahrungen in der Kriegs- und Nachkriegszeit zurück und schienen diese als Folie der Bewältigung, Einordung und Relativierung der Krise zu nutzen. Gelegentlich schienen projektive Elemente bzw. Vorurteile die Bewältigung zu stabilisieren. Im Eindruck der Gruppenleiterin wurden auch untergründige Beunruhigungen und drohende Instabilitäten spürbar, aber auch ein Stolz, dass man schon Schlimmeres erlebt und überstanden hat. So fühlt sich eine Zeitzeugin (86 Jahre) an die Ausgangssperre 1945 erinnert, „als die Engländer kamen und die Lebensmittelregale leer waren“. Eine andere Zeitzeugin (80 Jahre) verweist auf die in der Kriegs- und Nachkriegszeit eingeprägten Muster und das vorausschauende Anlegen von Vorräten: „Manches muss man jetzt eben hinten anstellen“. Das sei für sie aber kein Problem. Das kenne sie ja noch. Vorräte habe sie immer ausreichend im Keller: „Das steckt in uns, für Krisenzeiten immer Vorsorge zu treffen“. Ein Zeitzeuge (81 Jahre) betont, wie wichtig es ist, auch „Glück“ zu haben, eine Einstellung, der wir in unseren Untersuchungen zur Rückschau auf das Überleben im Feuersturm häufig begegnet sind [3]: „Wenn man in Trümmern liegt, braucht man auch Glück, man kann sich anpassen und improvisieren, aber Glück braucht man auch.“ Er betont auch die Fähigkeit, Mängel auszuhalten. „Es ist ja auch kein Drama, wenn es mal an etwas fehlt.“ Vielleicht lerne die nachfolgende Generation jetzt auch, dass man nicht in Panik geraten müsse, „wenn man mal kein Brot im Haus hat“. Für einen anderen Zeitzeugen (79 Jahre) ist dagegen der Hunger zentral. Er habe sich immer gesagt: bloß nie wieder hungern. Er ist froh, jetzt nicht hungern zu müssen. Er bezieht sich in seinem Erleben des Lockdowns nicht nur auf den Nahrungsmangel der Nachkriegszeit, sondern auf die „gesamte Kriegszeit“ seit 1943, „da durften wir auch nicht raus“. 
Analysis of the questionnaires
Table 1 shows the results of the questionnaire returns at both time points, May and December 2020. Question 11 ("Does the current crisis remind you of your experiences in the Hamburg Firestorm?") addresses the firestorm experience; for this item the correlation with the other items of the questionnaire was calculated.
Table 1. Surveys on the corona pandemic: mean (m), standard deviation (s), correlation (r) with item 11 and p-value, for survey 1 (May 2020, n = 98) and survey 2 (December 2020, n = 77).
No. | Question | Survey 1: m | s | r | p | Survey 2: m | s | r | p
1 | How is your health at present? | 3.57 | 0.82 | 0.08 | 0.41 | 3.51 | 0.66 | 0.05 | 0.60
2 | Are you personally afraid of becoming infected with the coronavirus? | 2.17 | 0.93 | 0.10 | 0.35 | 2.46 | 0.98 | 0.33 | <0.001
3 | Have you made provisions and laid in supplies for yourself? | 2.15 | 1.30 | 0.13 | 0.22 | 2.15 | 1.31 | 0.08 | 0.41
4 | Do you strictly follow the "corona rules" (distancing, contact bans, hygiene rules, etc.)? | 4.41 | 0.76 | 0.11 | 0.28 | 4.47 | 0.75 | 0.16 | 0.11
5 | Did you have to restrict your contacts? | 4.07 | 1.16 | 0.09 | 0.40 | 4.01 | 1.13 | 0.07 | 0.53
6 | Was that hard for you? | 3.54 | 1.45 | 0.13 | 0.21 | 3.89 | 1.26 | 0.12 | 0.23
7 | If so, what was the hardest part? | free-text answers
8 | Do you have physical or psychological symptoms that you attribute to this crisis? | 1.84 | 1.17 | 0.12 | 0.25 | 1.92 | 1.05 | 0.10 | 0.32
9 | Has your sleep suffered under the current crisis? | 1.69 | 1.10 | 0.26 | 0.01 | 1.90 | 1.07 | 0.32 | 0.001
10 | In view of the current threat from corona, do you have fears that are worse than they really need to be and that you cannot fend off? | 1.61 | 1.02 | 0.27 | 0.01 | 1.74 | 0.98 | 0.40 | <0.001
11 | Does the current crisis remind you of your experiences in the Hamburg Firestorm? | 2.35 | 1.39 | – | – | 2.48 | 1.49 | – | –
12 | Does the current crisis remind you of your experiences in the post-war period? | 2.65 | 1.50 | 0.58 | <0.001 | 2.64 | 1.44 | 0.64 | <0.001
13 | Compared with the war and post-war period, do you find the corona crisis rather harmless? | 2.93 | 1.48 | −0.30 | 0.01 | 2.57 | 1.39 | −0.10 | 0.16
14 | Do you have the impression that the corona crisis is being dramatized by politicians? | 1.96 | 1.06 | 0.05 | 0.66 | 1.91 | 1.16 | −0.10 | 0.18
15 | Do you have the impression that politicians are taking sufficient care of older people? | 3.21 | 1.12 | −0.10 | 0.30 | 3.33 | 1.13 | −0.10 | 0.56
16 | Do you have the impression that younger people understand well how older people are faring in the crisis? | 2.76 | 1.2 | −0.1 | 0.15 | 2.82 | 1.27 | −0.10 | 0.22
17 | Do you fear serious economic damage for Germany? | 4.14 | 0.96 | 0 | 0.98 | 4.25 | 0.90 | 0 | 0.87
18 | Do such fears weigh on you? | 3.08 | 1.22 | 0.07 | 0.50 | 3.11 | 1.24 | 0.20 | 0.05
19 | Do you fear serious economic consequences for yourself personally (e.g. inflation, taxation of savings, pension cuts)? | 2.86 | 1.31 | 0.09 | 0.36 | 3.01 | 1.38 | 0 | 0.88
20 | Do such fears weigh on you? | 2.43 | 1.26 | 0.14 | 0.18 | 2.57 | 1.35 | 0.12 | 0.25
21 | Do you expect Germany to come through the crisis well? | 3.94 | 0.91 | 0 | 1.00 | 3.71 | 0.84 | 0 | 0.94
22 | Is there anything that particularly helps you now in the crisis? | free-text answers
23 | Are your experiences of the war or post-war period useful to you here? | 3.63 | 1.53 | 0.26 | 0.01 | 3.61 | 1.45 | 0.24 | 0.02
24 | Is the duration of the crisis wearing on you? | – | – | – | – | 3.75 | 1.03 | 0.02 | 0.84
25 | Do you suffer from loneliness or being alone? | – | – | – | – | 2.33 | 1.31 | 0.02 | 0.86
26 | Did you yourself have to go into quarantine or were you subject to special contact restrictions? | – | – | – | – | 1.03 | 0.15 | −0.10 | 0.21
27 | Has anyone in your personal environment fallen ill with the coronavirus? | – | – | – | – | 1.14 | 0.35 | 0.23 | 0.02
28 | Has anyone in your personal environment died of or with the coronavirus? | – | – | – | – | 1.04 | 0.19 | 0.19 | 0.06
29 | Will you have yourself vaccinated as soon as a vaccine is available? | – | – | – | – | 4.03 | 1.30 | −0.10 | 0.45
30 | Do you currently feel more burdened by the corona pandemic than in spring during the first lockdown? | – | – | – | – | 3.23 | 1.40 | 0.15 | 0.14
31 | Are the corona-related restrictions at Christmas particularly hard for you to bear? | – | – | – | – | 2.59 | 1.37 | 0.16 | 0.11
32 | Do the New Year's Eve fireworks carry memories of the firestorm for you? | – | – | – | – | 2.33 | 1.73 | 0.18 | 0.09
Regarding their general handling of the corona crisis, the respondents in the first survey in May 2020 state that their health is relatively good (m = 3.57). They report following the "corona rules" closely (m = 4.40). Accepting the associated contact restrictions (m = 4.07) was mostly experienced as hard (m = 3.54). Respondents were less likely to report having laid in supplies (m = 2.17). Although they belong to the "risk group", fear of becoming infected themselves is likewise relatively low (m = 2.17). Physical or psychological symptoms are attributed to the corona crisis to an even smaller extent (m = 1.84), sleep problems hardly occur (m = 1.69), and increased anxiety is reported comparatively rarely (m = 1.61). Overall, the respondents thus present themselves as stable and little impaired. The mean values for the historical comparison lie in the middle range, both for being reminded of the post-war period (m = 2.65) and of the firestorm (m = 2.35). Among the answers on society's handling of the crisis and on expectations for the future, economic worries stand in the foreground: most respondents fear economic damage for Germany (m = 4.13). These fears are burdensome (m = 3.08), while concern about their own economic future is less pronounced (m = 2.86). Most expect Germany to come through the crisis well (m = 3.94). Their own experiences of the war and post-war period are experienced as useful in dealing with the crisis (m = 3.62). This value is clearly higher than the means of items 11 and 12, which address a distressing and frightening re-remembering.
Analysis of the free-text answers in the first survey (May 2020)
Experience of the contact restrictions
Question 7 (What was the hardest part of complying with the contact restrictions?) was answered in 65 questionnaires. The content of the answers could be grouped thematically as follows (Table 2).
Table 2. The hardest aspects of the contact restrictions (question 7; number of mentions).
Interrupted contacts:
- Not being able to meet family members, friends and acquaintances: 24
- Loss of accustomed activities or travel: 14
- Not being able to hug one another within the family: 10
- General reduction of contacts and visits: 8
- Not being able to celebrate important birthdays with the family: 4
Experience of loneliness and restriction:
- Being alone: 4
- Not being able to visit the sick oneself: 4
- Feeling of being locked in: 1
- No going out: 1
- Contact only by telephone: 1
- Too little exercise: 1
- Explicitly "did not miss much": 3
The answers emphasize the people with whom contact was impossible or prohibited. It was experienced as particularly painful "that one can no longer hug one another within the family", and likewise when one's own visits to a nursing home were not possible.
Help in the crisis
Question 22 (Is there anything that particularly helps you now in the crisis?) was answered in 88 questionnaires; the answers could be grouped as follows (Table 3).
Table 3. What helps in the crisis (question 22; number of mentions).
Supportive experiences in partnership and family:
- Family in general: 8
- Children: 8
- Partnership: 4
- Dog: 2
- Grandchildren: 1
Own attitudes:
- Optimism, faith and confidence: 10
- Strength from one's own experiences of the war and post-war period: 7
- Experience and knowledge: 5
- Own health behaviour and caution: 3
- Keeping calm and not thinking too much: 2
Social cohesion and favourable personal activities and circumstances:
- Neighbours, help in everyday life, the possibility of helping others oneself: 12
- Nature and exercise, hobbies: 10
- Communication by telephone, e-mail or WhatsApp: 8
- Friendships: 5
- Good personal life situation: 5
- Trust in politics: 4
- Hope for improvement and easing of the measures: 3
- Sensory enjoyment within the bounds of what is possible: 2
- Attending church services: 1
- Conversations: 1
- Rest and sleep: 1
- No particular help: 4
Alongside the supportive experience of contacts and conversations within the family, the experience of social togetherness stands in the foreground. In addition to "knowledge and life experience", the respondents frequently name positive attitudes to life as helpful. They present themselves as calm and optimistic and state that their war experience helps them to put the experience of the contact restrictions into perspective: "After all, we are not starving or freezing". Nature and hobbies are further frequently named sources of strength.
Do the findings change in the second lockdown?
The second survey in December 2020 essentially shows the findings to be stable. The largest differences occurred for question 6 (Was complying with the contact restrictions hard for you? – here the mean rose by 0.4 to m = 3.89) and question 13 (Compared with the war and post-war period, do you find the corona crisis harmless? – here the mean fell by 0.4 to m = 2.57). The duration of the crisis is clearly wearing on the witnesses (m = 3.75). Loneliness does not appear to be the central problem (m = 2.33). Most witnesses want to be vaccinated (m = 4.0). Questions about a particular burden from the coming Christmas holidays (m = 2.69) or from the New Year's Eve fireworks (m = 2.33) receive only moderate agreement.
Experience of the corona crisis against the background of the firestorm experience
On item 11 (Does the current crisis remind you of your experiences in the Hamburg Firestorm?), 13 respondents answered at the highest level (5 = yes). Of these, 11 again answered this item at the highest level in the second survey; in one case, however, the rating dropped from 5 to 4, and in another from 5 to 2. The correlations with the other items of the questionnaire (columns 3 and 7 of Table 1) show, at both time points, small but significant correlations with the answers to question 11 (p < 0.05) for 3 variables: sleep problems (0.26 and 0.32), exaggerated fears (0.28 and 0.40), and the impression that the war experiences are useful for personally coping with the corona crisis (0.26 and 0.24). A strongly significant correlation (0.58 and 0.64; p < 0.001) is found for the question about being reminded of the post-war period. Here, however, it is mostly the low values that correlate: respondents who did not feel reminded of the Hamburg Firestorm by the corona crisis also did not feel reminded of the post-war period. A further significant (negative) correlation with the perception that the corona crisis is harmless compared with the war and post-war period (−0.30) is no longer found in the second survey (−0.10).
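The item statistics and correlations just described can be reproduced with standard tools. The sketch below is illustrative only: the CSV file name, the column naming (item_01 … item_32), the use of Pearson's r, and pairwise deletion of missing answers are assumptions for the example, not a description of the authors' actual analysis.

```python
# Minimal illustrative sketch -- not the authors' original analysis code.
# Assumes a hypothetical CSV with one row per respondent and numeric columns
# "item_01" ... "item_32" holding the 1-5 ratings (empty where not applicable).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_may_2020.csv")      # hypothetical file name
anchor = "item_11"                           # reminder of the Hamburg Firestorm

rows = []
for col in df.columns:
    if not col.startswith("item_") or col == anchor:
        continue
    if not pd.api.types.is_numeric_dtype(df[col]):
        continue                             # skip free-text items
    pair = df[[anchor, col]].dropna()        # pairwise deletion of missing answers
    r, p = pearsonr(pair[anchor], pair[col]) # Pearson's r is an assumption here
    rows.append({"item": col,
                 "mean": round(df[col].mean(), 2),
                 "sd": round(df[col].std(ddof=1), 2),
                 "r_item11": round(r, 2),
                 "p": round(p, 3)})

print(pd.DataFrame(rows).to_string(index=False))
```

The paper does not state how missing answers or the free-text items were handled, so the pairwise deletion shown above is only one plausible choice.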
References to the content of the life-history interviews
The question then arose as to which concrete experiences in the firestorm had been made by the 13 respondents who answered question 11 (Does the current crisis remind you of your experiences in the Hamburg Firestorm?) at the top of the scale (yes = 5). For this purpose, our research group drew on the transcribed interviews with these respondents already held in the Erinnerungswerk and jointly rated, ad hoc, central features of the experiences they described in the firestorm and in the war and post-war period on a scale from 0 to 3. This revealed a massive burden on these witnesses extending well beyond the actual threat of death in the firestorm (m = 1.96). In the foreground are the consequences of being bombed out, with the uprooting this entailed (m = 2.38), alongside further hardships and deprivations in the war years (m = 1.92) and the post-war period (m = 1.92). Only one witness in this group was comparatively lightly burdened.
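The ad-hoc consensus rating described above boils down to averaging, per burden dimension, the 0–3 scores given to the 13 interviews. The sketch below only illustrates that aggregation step; the dimension labels, data layout and example scores are placeholders, not the study's actual coding sheet.

```python
# Illustrative aggregation of consensus ratings (0-3) -- placeholder data only.
from statistics import mean

# One dict per interviewee; in the study there were 13 such records.
ratings = [
    {"death_threat_firestorm": 2, "bombed_out_uprooted": 3,
     "wartime_hardship": 2, "postwar_hardship": 2},
    {"death_threat_firestorm": 1, "bombed_out_uprooted": 2,
     "wartime_hardship": 2, "postwar_hardship": 1},
    # ... further interviews ...
]

for dim in ("death_threat_firestorm", "bombed_out_uprooted",
            "wartime_hardship", "postwar_hardship"):
    print(f"{dim}: m = {mean(r[dim] for r in ratings):.2f}")
```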
Discussion
Our exploratory cross-sectional study provides a first insight into how the corona crisis is experienced by the generation of the "war children of the Second World War". The high questionnaire return rate in this group of the very old (81.7% and 64%) speaks for the subjective relevance of the question to the respondents and for their attachment to the Erinnerungswerk. Overall, the survivors of the Hamburg Firestorm present themselves as predominantly composed and stable, both at the beginning of the first corona lockdown in spring 2020 and in December 2020. At the same time it becomes clear that the imposed contact restrictions, especially the limits on direct, also physical, contact within the family, weigh on them considerably. Concern about their own health and fear of infection are less pronounced. A positive personal attitude is seen as an essential element of getting through the crisis well. Possibly, attitudes in the sense of a "generation-specific frugality" [6] that were formed in wartime are activated here (e.g. optimism as a principle of survival, keeping calm), but experience and knowledge are also regarded as important. Admittedly, we also find in the questionnaires a subgroup of about 13% of respondents who report a remembered link between the firestorm and their corona experiences. Taken together, our findings suggest that for severely affected "war children" of the Second World War the war experiences can serve, in their self-experience, as a stabilizing inner frame of reference for coping with and relativizing the corona crisis, but that painful and difficult "re-remembering" also occurs.
Outlook
It appears worthwhile to turn specifically to the experience of the very old, which is shaped by contemporary history. This seems all the more necessary as, for example, a large national study on the experience of the corona crisis does not include those over 79 years of age [8]. Their collective labelling as a "high-risk group" can have a discriminatory effect and obscure the crisis resilience of individual people [1].
Conclusion for practice
- The surveyed survivors of the Hamburg Firestorm describe themselves and their personal situation as predominantly stable.
- Above all, the contact restrictions associated with the lockdown are experienced as burdensome.
- The war experience helps to put the current crisis into perspective.
- A subgroup experiences strong links to the firestorm experience and to lasting burdens of the war and post-war period.
- In general, the experiences of the Second World War should be taken into account in the psychosocial care of those now over 80.
Background: The generation of war children of the Second World War is now, in old age, experiencing the lockdown caused by the coronavirus crisis. Methods: A total of 120 witnesses of the Hamburg Firestorm (1943) were asked about their experiences of the corona pandemic by means of a questionnaire in May 2020 and December 2020. Findings from telephone conversations with several witnesses, who regularly participate in a discussion group, were also taken into consideration for this study. Results: Of those contacted in May 2020 and December 2020, 98 (82%) and 77 (64%), respectively, returned the questionnaire: 58 (45) female and 40 (32) male, with a mean age of 86.5 years (87.1 years). According to the questionnaire, most of them feel relatively stable and confident about their general situation in the pandemic and are concerned mostly with the contact restrictions rather than with their own health. The majority fear negative economic consequences for Germany. About 13% fully agree that the current crisis reminds them of their experiences in the Hamburg Firestorm. As the telephone conversations have shown, the memories and experiences of the war and post-war period in general seem to act as the leading frame of reference for dealing with the current crisis. Conclusions: The findings point to typical psychological processing patterns in a war-burdened generation as they now relate their experiences in the war to their experiences in the corona crisis.
11,323
290
[ 552, 1136, 132, 336, 92, 5438, 572, 811, 676, 136, 197, 171, 278, 185, 267, 69 ]
17
[ "die", "der", "und", "im", "zu", "sie", "sich", "nicht", "es", "den" ]
[ "coronakrise und lockdown", "erleben die coronakrise", "der coronakrise dienen", "einleitung die coronapandemie", "coronakrise der generation" ]
[CONTENT] Coronakrise | Luftkrieg | Psychische Folgen des Zweiten Weltkriegs | Bewältigung | Corona crisis | Air war | Psychological consequences of the Second World War | Coping [SUMMARY]
[CONTENT] Male | Female | Humans | Aged, 80 and over | Survivors | Fear | Pandemics | World War II | Coronavirus Infections [SUMMARY]
[CONTENT] coronakrise und lockdown | erleben die coronakrise | der coronakrise dienen | einleitung die coronapandemie | coronakrise der generation [SUMMARY]
[CONTENT] die | der | und | im | zu | sie | sich | nicht | es | den [SUMMARY]
[CONTENT] der | die | werden | erlebt | 80 jährigen berücksichtigt | sich und | heute über 80 jährigen | heute über 80 | heute über | 80 jährigen [SUMMARY]
[CONTENT] die | der | und | im | zu | nicht | sie | sich | es | des [SUMMARY]
[CONTENT] the Second World War ||| 120 | the Hamburg Firestorm | 1943 | May 2020 | December 2020 ||| ||| May 2020 | December 2020 | 98 | 82% | 77 | 64% | 58 | 45 | 40 | 32 | 86.5 years | 87.1 years ||| ||| Germany ||| About 13% | the Hamburg Firestorm ||| ||| [SUMMARY]
The WHO surgical safety checklist: survey of patients' views.
25038036
Evidence suggests that full implementation of the WHO surgical safety checklist across NHS operating theatres is still proving a challenge for many surgical teams. The aim of the current study was to assess patients' views of the checklist, which have yet to be considered and could inform its appropriate use, and influence clinical buy-in.
BACKGROUND
Postoperative patients were sampled from surgical wards at two large London teaching hospitals. Patients were shown two professionally produced videos, one demonstrating use of the WHO surgical safety checklist, and one demonstrating the equivalent periods of their operation before its introduction. Patients' views of the checklist, its use in practice, and their involvement in safety improvement more generally were captured using a bespoke 19-item questionnaire.
METHOD
141 patients participated. Patients were positive towards the checklist, strongly agreeing that it would impact positively on their safety and on surgical team performance. Those worried about coming to harm in hospital were particularly supportive. Views were divided regarding hearing discussions around blood loss/airway before their procedure, supporting appropriate modifications to the tool. Patients did not feel they had a strong role to play in safety improvement more broadly.
RESULTS
It is feasible and instructive to capture patients' views of the delivery of safety improvements like the checklist. We have demonstrated strong support for the checklist in a sample of surgical patients, presenting a challenge to those resistant to its use.
CONCLUSIONS
[ "Adult", "Aged", "Checklist", "Female", "Humans", "London", "Male", "Medical Errors", "Middle Aged", "Operating Rooms", "Patient Care Team", "Patient Safety", "Quality Assurance, Health Care", "Surgical Procedures, Operative", "Surveys and Questionnaires", "World Health Organization" ]
4215340
Introduction
The introduction of the WHO surgical safety checklist into National Health Service (NHS) operating theatres in 2009 represented the first move towards mandatory action for improving surgical safety in the UK for some time.1 The potential for safety checklists to improve surgical outcomes and team performance is largely supported across the surgical literature2–8; however, their ability to bring about such improvements appears to be related to the style of implementation adopted, and the buy-in fostered by clinical teams, rather than their introduction per se. Indeed, the lack of a focussed implementation programme to support checklist introduction (including aspects such as training, feedback, local adaptation and involvement from all levels of the organisation) might explain reports that have not detected an effect of checklists on outcomes.9 10 In the UK, implementation of the WHO checklist has not been entirely straightforward. Several barriers to implementation have been described, including some very practical issues (eg, bringing the whole team together at one time),8 11 other tool-specific concerns (eg, the applicability of the checks to certain surgical contexts),12 and certain team-based challenges (eg, checklist fatigue and blurred lines of accountability).13 While some clinicians have been quick to see the benefits and have embraced the use of checklists, others have strongly resisted their implementation, seeing it as an attack on clinical autonomy or a slur on their professionalism.12–16 This discussion of the introduction and use of surgical checklists has so far been conducted entirely from the standpoint of the professionals involved. But perhaps patients, being the recipients of care as well as the payers, should have a voice in the weight given to safety in healthcare systems and in the introduction of novel safety measures. At a time when the fallibility of hospital care is very much in the public eye (with the release of publications such as the Francis report in the UK),17 this question becomes particularly pertinent. In many cases, such as controls on radiation levels or chemotherapy dosing, only a small number of healthcare experts have the requisite expertise to formulate and assess safety measures. In other cases however, such as using a checklist, patients might be able to come to a view and potentially influence implementation.18 By contrast with aviation, where safety checklists are very much engrained standard operating procedure and directly involve crew members only, in surgery, patients are part of the process and there are still questions regarding how they will respond to the checklist, if they will feel more vulnerable to errors, whether they would be agitated hearing some of the checks discussed and so on. This study sought to address these questions which, to the best of our knowledge, have not been addressed before: (1) What views do patients have about the use of the WHO surgical safety checklist in NHS hospitals? (2) Does the previous experience of error in hospital or other experiential/patient characteristics influence these views?
As a secondary aim, we also explored the views patients have towards being involved in decisions around improvements in safety and the care they receive more generally, and sought to begin to understand how best to assess such patient views on safety measures in healthcare from a methodological perspective. Specifically, we tested the feasibility of using videos to communicate safety concepts to patients on hospital wards.
Methods
Sample
Patients were recruited from surgical wards at two large teaching hospitals in London, UK, between June and November 2011. Sampling was opportunistic based on the permission of a senior nurse, and within the constraints of the following inclusion criteria: all patients were over 18 years of age, were able and willing to provide informed consent to participate, could fully understand and express themselves in English, were not in any distress or suffering from a serious mental illness, and did not have a clinical occupation. Clinicians were excluded from the sample as they might have had previous experience and views on the use of the WHO checklist. All patients had undergone a surgical procedure during their current period of in-hospital stay. We visited day surgery wards and standard surgical wards with the aim of including a mix of patients who had undergone minor and more serious procedures. Patient characteristics are displayed in table 1.
Patients’ characteristics (N=141)
Design
To assess patients’ views of the WHO checklist and their involvement in safety improvement, a methodology was required which was feasible to deliver on hospital wards. It was necessary to ensure that patients were informed about the checklist without bias—what it is, how it is used, how it differs from previous practice, and how it is relevant to their care journey. We needed to conduct this in a standardised, comprehensive and valid manner, while avoiding unnecessary confusion or anxiety caused to patients still receiving care. This was achieved by demonstrating use of the checklist to patients visually using two professionally produced videos. The videos were produced in collaboration with clinical teams (to ensure a realistic and unstaged portrayal of operating room procedures) and patient safety experts. The videos captured the perioperative safety procedures undertaken preintroduction (video 1) and postintroduction (video 2) of the WHO surgical safety checklist. This enabled patients viewing the videos to compare the relevant periods of the operations before and after introduction of the checklist. Patients were also given a laminated version of the WHO checklist for reference. To measure patients’ attitudes and to record patient characteristics, a simple questionnaire was designed for completion by the patient after the videos had been viewed. Ethical approval for the study was granted by the participating NHS Trust and the local research ethics committee before data collection commenced.
Materials
Videos: The ‘pre-checklist’ video (shown first) depicted the typical safety checks occurring before introduction of the checklist at the equivalent stages to which the ‘sign-in’, ‘time-out’, and ‘sign-out’ portions of the WHO checklist are completed (ie, when the patient enters the anaesthetic room and is checked in, the final stage of set up before incision, and postoperatively before the patient leaves the operating theatre, respectively). The ‘post-checklist’ video (shown second) depicted the process of completing the formal ‘sign-in’, ‘time-out’ and ‘sign-out’ sections of the WHO surgical safety checklist in a manner that adhered to recommended good practice.17 The ‘sign-in’ phase of the two videos was fairly similar (given that the majority of the preanaesthetic checks listed in the checklist were already routinely taking place); however, the ‘time-out’ phase of the prechecklist video was shorter (including an identification (ID), procedure and antibiotic check, but no formal team discussion), and the ‘sign-out’ phase of the prechecklist video was shorter still (including a brief discussion between the surgical team only). The two videos were shot in an operating theatre complex out of hours, with a professional actor playing the role of the patient and a complete operating theatre team running the two different scenarios. The ‘sign-in’ (and equivalent prechecklist checks) was filmed in an anaesthetic room with an anaesthetist, operating department practitioner and the patient, while the ‘time-out’ and ‘sign-out’ (and equivalent prechecklist checks) were filmed in an operating theatre with the full operating theatre team. No scenes of the operation were included; only the safety checks and related conversations at each of the three perioperative phases were shown. Each video lasted between 3 and 4 min; edited clips are available from the authors on request.
Questionnaire: A 19-item questionnaire was designed to assess the following constructs: attitudes towards the WHO surgical safety checklist (eight items), attitudes regarding how the checklist is used in practice (six items), attitudes towards involvement in safety improvement efforts in hospitals more generally (four items), and the degree of anxiety that one might come to some harm in hospital (one item). Each item was phrased as a statement, for example, ‘Using the checklist would make me feel safer’, ‘I would rather a surgeon took charge of the checklist than a nurse’, ‘Given the opportunity, I would like to be more involved in efforts to reduce patient harm’. Respondents rated the degree to which they agreed with each statement on a Likert scale (1=strongly disagree, 7=strongly agree). Patient characteristics (including basic demographics and patient experience of hospital care) that might be associated with such attitudes were also recorded (ie, age, sex, ethnicity, occupation, surgical procedure admitted for, general or local anaesthetic, number of previous operations, and any previous experience of medical error). The questionnaire was piloted on a sample of 20 patients before data collection commenced; this process identified one question that was consequently rephrased to improve comprehension (see tables 2 and 3 for questionnaire items).
Patients’ views of the checklist (N=141); *% of total sample.
Patients’ views of involvement in safety improvement in healthcare (N=141); *% of total sample.
Procedure
A senior ward nurse was consulted before approaching patients, to identify (1) those who had already undergone their surgical procedure (patients who had not yet undergone their surgery were excluded to avoid provoking unnecessary anxiety) and (2) those who were deemed well enough to participate. The study was explained verbally with the aid of a written information sheet and informed consent was obtained. Patients were shown a laminated version of the WHO checklist (UK's National Patient Safety Agency standard version).19 The checklist was described as a ‘change in process during surgery’ about which it was important to collect patients’ views. Care was taken not to provide more detail than this so as to avoid biasing patients towards the checklist from the outset. Patients were instructed that they would view two videos, the first depicting what happened before the checklist was introduced (‘prechecklist’ video) and the second depicting what happens when the checklist is used, that is, currently (‘postchecklist’ video). Videos were displayed on a laptop at the patient's bedside, with sufficient sound quality for the patient to hear the videos without headphones (they could use their own headphones if they wished). Patients were then asked to fill in the questionnaire. The Likert scale was explained and they were assured that there were no right or wrong answers. It took around 30 minutes to collect data from each patient (see box 1 for a breakdown of the procedure).
Box 1: Breakdown of the data collection procedure
Senior ward nurse identified postoperative patients who were sufficiently recovered to participate.
Patient approached by nurse and researchers and informed about the study. Information sheet provided (5 min).
If willing to take part, patient consent obtained (3 min).
Patient informed about the introduction of the surgical safety checklist and shown a laminated copy of the tool (1 min).
Patient viewed video of what used to happen before introduction of checklist (at equivalent perioperative phases) (3 min).
Patient viewed video of checklist being used at ‘sign-in’, ‘time-out’, and ‘sign-out’ (5 min).
Patient completed questionnaire (10 min).
Patient debriefed (3 min).
Total time=30 min.
Statistical analyses
Data were analysed using the IBM SPSS Statistics 20 software. Responses to each of the items on the questionnaire were grouped into the following categories: disagree (scores of 1–2), neither agree nor disagree (scores of 3–5) and agree (scores of 6–7). The percentage of patients falling into these three categories was computed separately for each item and tabulated. The final item (‘I worry that I will come to unnecessary harm in hospital’) was reduced to a binary variable—those who agreed that they were worried (ie, scored 6 or 7) formed one group, and all others formed another group. χ2 analysis was then used to determine whether this and any of the other demographic/patient variables (age, sex, ethnicity, length of stay, number of previous surgical procedures, past experience of an error in care) were associated with patients’ attitudes.
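For readers who want to reproduce this kind of analysis outside SPSS, the short sketch below illustrates the same two steps in Python (pandas and SciPy): collapsing the 1–7 Likert scores into the disagree/neither/agree categories, and testing the association between an item's response categories and a patient variable with a χ2 test. The data frame and column names (responses, q1 to q19, worried) are illustrative assumptions, not the study's actual variables or syntax.

# Minimal, illustrative sketch (not the study's SPSS syntax). Assumes a
# hypothetical pandas DataFrame `responses` with questionnaire items q1..q19
# scored 1-7, plus patient variables (eg, 'age_group', 'previous_error').
import pandas as pd
from scipy.stats import chi2_contingency

ITEMS = [f"q{i}" for i in range(1, 20)]   # the 19 questionnaire items
WORRY_ITEM = "q19"                        # 'I worry that I will come to unnecessary harm in hospital'

def to_category(score):
    """Collapse a 1-7 Likert score into the three reported categories."""
    if score <= 2:
        return "disagree"   # scores of 1-2
    if score <= 5:
        return "neither"    # scores of 3-5
    return "agree"          # scores of 6-7

def summarise_items(responses):
    """Percentage of patients per category for each item, as tabulated in the paper."""
    categorised = responses[ITEMS].applymap(to_category)
    return pd.DataFrame(
        {item: categorised[item].value_counts(normalize=True) * 100 for item in ITEMS}
    ).T.fillna(0)

def chi_square_association(responses, item, patient_var):
    """Chi-square test of association between a patient variable and an item's categories."""
    table = pd.crosstab(responses[patient_var], responses[item].map(to_category))
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, dof, p

# Illustrative usage:
# responses["worried"] = responses[WORRY_ITEM] >= 6   # binary 'worried' group (scored 6 or 7)
# print(summarise_items(responses).round(1))
# print(chi_square_association(responses, "q1", "worried"))

The category thresholds (1–2, 3–5, 6–7) and the binary split of the worry item (scores of 6 or 7 versus all others) mirror the grouping described above; any real reanalysis would, of course, depend on the study's own data dictionary.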
Results
In total, 180 patients were approached to take part in the study. Thirty-nine refused to participate for reasons including not feeling well enough, waiting for visitors, waiting for lunch, and inadequate understanding of English. This meant that data were available for 141 patients.
Patient characteristics
A wide age range was represented in the sample (median=52 years, range=18–87 years), and while the majority of patients (61%, N=86) were British, the remaining patients varied widely in ethnic origin (table 1). The sample was evenly spread with regard to sex, the number of previous operations experienced (0, 1–2, 3 or more), and whether or not patients were worried that they would come to harm in hospital. Eighty per cent of the patients (N=113) had been admitted for day surgery procedures (including hernia repair, arthroscopy, laparoscopic cholecystectomy, hysterectomy, varicose veins), while the remaining 20% of patients (N=28) had been admitted for a longer period of days or weeks for major surgical procedures (including prostatectomy, colectomy, oesophagectomy, lower limb amputation, nephrectomy). Finally, 8.7% of the patients (N=12) reported that they had experienced a previous adverse event in hospital (eg, medication error, surgical site infection).
Attitudes towards the WHO checklist
The majority of patients agreed that they would like the checklist to be used if they were having an operation (78%), compared with a very small number of patients who were not in favour of its use (4.3%) (table 2). In line with this positive perception of the checklist, patients largely agreed that use of the checklist would make them feel safer (74%), that it would improve communication between staff in theatre (69%), and that it would reduce the number of errors during surgery (67%). Most patients did not agree with the view that the checklist was an unnecessary ‘tick-box’ exercise (61%), or that it would undermine the competence of front-line staff (57%). Patients over 50 years of age were slightly less positive: they were more likely to agree that it was an unnecessary ‘tick-box’ exercise (18% vs 10%) (χ2=7.72, df=2, p=0.021) and less persuaded that it would reduce errors (17% vs 3%) (χ2=7.26, df=2, p=0.027). However, their views were overall still more positive towards the checklist than not. Those who reported that they were worried about coming to unnecessary harm in hospital were more positive than the rest, being significantly more likely to agree that they would like the checklist to be used (87% vs 69%) (χ2=7.18, df=2, p=0.028), that it would make them feel safer (89% vs 57%) (χ2=21.35, df=2, p<0.001), and that it would improve communication in the operating theatre (80% vs 57%) (χ2=9.91, df=2, p=0.007). Over half the participating patients (53%) assumed that a checklist like this had always been in place.
Attitudes towards use of the checklist in practice
Overall, patients did not seem to mind which member of their care team took charge of the checklist: 68% agreed that they would feel comfortable with a nurse carrying out the checks and 57% neither agreed nor disagreed that they would rather a surgeon lead them (table 2). Those who had been in hospital for 3 or more days, however, were significantly more likely to agree that they would like the surgeon to lead the checklist (40% vs 14%) (χ2=9.38, df=2, p=0.009). Most patients trusted that their care team would carry out the checklist correctly: less than a third (29%) were worried that staff would not complete it correctly, and most (63%) either did not want any specific assurance that it had been used or were impartial. Those who reported that they were worried about coming to unnecessary harm in hospital were significantly more likely to agree that they would like some assurance that the checklist had been used compared with those who were not worried (46% vs 25%) (χ2=6.99, df=2, p=0.03). Patients were divided with regard to whether hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ portion of the checklist) would make them feel anxious: 30% agreed that it would, 26% said that it would not, and 44% were impartial. Those who had experienced a previous error in care (8.7% of participants) were significantly less likely to disagree that such discussions would make them feel anxious (0% vs 28%) (χ2=6.35, df=2, p=0.042). Almost all patients, however, agreed that they understood why they needed to confirm their identity and procedure before their surgery (84%), particularly those who were less than 50 years of age (92%) and those who were worried about coming to harm (96%) (χ2=9.46, df=2, p=0.009; χ2=18.40, df=2, p<0.001).
Attitudes towards involvement in safety improvement in healthcare
Although all patients who agreed to take part felt able to give their views on the surgical checklist, most did not feel that they had a major part to play in safety improvement work in general (table 3). Over half the participants (51%) agreed that it was best to leave decisions about patient safety to healthcare professionals, and the same proportion disagreed that they could help to reduce errors in their care if they were more involved. Similarly, only 38% of patients agreed that patient feedback should be used to identify areas for improvement in patient safety (48% disagreed). This figure decreased to 24% after excluding those who had experienced a previous error in care, who were significantly more likely to agree that patient feedback should be used (66% vs 24%) (χ2=11.32, df=2, p=0.003). Patients who were more worried about coming to harm in hospital were significantly more likely to agree that they would like to become involved in efforts to reduce patient harm (60% vs 29%) (χ2=13.73, df=2, p=0.001).
Conclusion
We have demonstrated a high level of patient support for use of the WHO surgical safety checklist in our sample. We have also shown that it is feasible to gain patient insight into the delivery of safety tools like the checklist, and that such feedback can inform appropriate tool modifications. We highlight the need for better patient and public education around opportunities for becoming more actively involved in safety improvement in healthcare, and the continued development of approaches that allow feedback to be provided in a non-threatening and accessible manner.
[ "Sample", "Design", "Materials", "Procedure", "Statistical analyses", "Patient characteristics", "Attitudes towards the WHO checklist", "Attitudes towards use of the checklist in practice", "Attitudes towards involvement in safety improvement in healthcare", "Limitations", "Implications" ]
[ "Patients were recruited from surgical wards at two large teaching hospitals in London, UK, between June and November 2011. Sampling was opportunistic based on the permission of a senior nurse, and within the constraints of the following inclusion criteria: all patients were over 18 years of age, were able and willing to provide informed consent to participate, could fully understand and express themselves in English, were not in any distress or suffering from a serious mental illness, and did not have a clinical occupation. Clinicians were excluded from the sample as they might have had previous experience and views on the use of the WHO checklist. All patients had undergone a surgical procedure during their current period of in-hospital stay. We visited day surgery wards and standard surgical wards with the aim of including a mix of patients who had undergone minor and more serious procedures. Patient characteristics are displayed in table 1.\nPatients’ characteristics (N=141)", "To assess patients’ views of the WHO checklist and their involvement in safety improvement, a methodology was required which was feasible to deliver on hospital wards. It was necessary to ensure that patients were informed about the checklist without bias—what it is, how it is used, how it differs from previous practice, and how it is relevant to their care journey. We needed to conduct this in a standardised, comprehensive and valid manner, while avoiding unnecessary confusion or anxiety caused to patients still receiving care.\nThis was achieved by demonstrating use of the checklist to patients visually using two professionally produced videos. The videos were produced in collaboration with clinical teams (to ensure a realistic and unstaged portrayal of operating room procedures), and patient safety experts. The videos captured the perioperative safety procedures undertaken preintroduction (video 1) and postintroduction (video 2) of the WHO surgical safety checklist. This enabled patients viewing the videos to compare the relevant periods of the operations before and after introduction of the checklist. Patients were also given a laminated version of the WHO checklist for reference. To measure patients’ attitudes and to record patient characteristics, a simple questionnaire was designed for completion by the patient after the videos had been viewed. Ethical approval for the study was granted by the participating NHS Trust and the local research ethics committee before data collection commenced.", "Videos: The ‘pre-checklist’ video (shown first) depicted the typical safety checks occurring before introduction of the checklist at equivalent stages to which the ‘sign-in’, ‘time-out’, and ‘sign-out’ portions of the WHO checklist are completed (ie, when the patient enters the anaesthetic room and is checked in, the final stage of set up before incision, and postoperatively before the patient leaves the operating theatre, respectively). 
The ‘post-checklist’ video (shown second) depicted the process of completing the formal ‘sign-in’, ‘time-out’ and ‘sign-out’ sections of the WHO surgical safety checklist in a manner that adhered to recommended good practice.17 The ‘sign-in’ phase of the two videos was fairly similar (given that majority of the preanaesthetic checks listed in the checklist were already routinely taking place); however, the ‘time-out’ phase of the prechecklist video was shorter (including an identification (ID), procedure and antibiotic check, but no formal team discussion), and the ‘sign-out’ phase of the prechecklist video was shorter still (including a brief discussion between the surgical team only). The two videos were shot in an operating theatre complex out of hours, with a professional actor playing the role of the patient and a complete operating theatre team running the two different scenarios. The ‘sign-in’ (and equivalent prechecklist checks) was filmed in an anaesthetic room with an anaesthetist, operating department practitioner and the patient, while the ‘time-out’ and ‘sign-out’ (and equivalent prechecklist checks) were filmed in an operating theatre with the full operating theatre team. No scenes of the operation were included, only the safety checks and related conversations at each of the three perioperative phases were shown. Each video lasted between 3 and 4 min; edited clips are available from the authors on request.\nQuestionnaire: A 19-item questionnaire was designed to assess the following constructs: attitudes towards the WHO surgical safety checklist (eight items), attitudes regarding how the checklist is used in practice (six items), attitudes towards involvement in safety improvement efforts in hospitals more generally (four items), and the degree of anxiety that one might come to some harm in hospital (one item). Each item was phrased as a statement, for example, ‘Using the checklist would make me feel safer’, ‘I would rather a surgeon took charge of the checklist than a nurse’, ‘Given the opportunity, I would like to be more involved in efforts to reduce patient harm’. Respondents rated the degree to which they agreed with each statement on a Likert scale (1=strongly disagree, 7=strongly agree). Patient characteristics (including basic demographics and patient experience of hospital care) that might be associated with such attitudes were also recorded (ie, age, sex, ethnicity, occupation, surgical procedure admitted for, general or local anaesthetic, number of previous operations, and any previous experience of medical error). The questionnaire was piloted on a sample of 20 patients before data collection commenced; this process identified one question that was consequently rephrased to improve comprehension (see tables 2 and 3 for questionnaire items).\nPatients’ views of the checklist (N=141)\n*% of total sample.\nPatients’ views of involvement in safety improvement in healthcare (N=141)\n*% of total sample.", "A senior ward nurse was consulted before approaching patients, to identify (1) those who had already undergone their surgical procedure (patients who had not yet undergone their surgery were excluded to avoid provoking unnecessary anxiety) and (2) those who were deemed well enough to participate. The study was explained verbally with the aid of a written information sheet and informed consent was obtained. 
Patients were shown a laminated version of the WHO checklist (UK's National Patient Safety Agency standard version).19 The checklist was described as a ‘change in process during surgery’ about which it was important to collect patients’ views. Care was taken not to provide more detail than this so as to avoid biasing patients’ towards the checklist from the outset. Patients were instructed that they would view two videos, the first depicting what happened before the checklist was introduced (‘prechecklist’ video) and the second depicting what happens when the checklist is used, that is, currently (‘postchecklist’ video). Videos were displayed on a laptop at the patient's bedside, with sufficient sound quality for the patient to hear the videos without headphones (they could use their own headphones if they wished). Patients were then asked to fill in the questionnaire. The Likert scale was explained and they were assured that there were no right or wrong answers. It took around 30 minutes to collect data from each patient (see box 1. for a breakdown of the procedure).\nSenior ward nurse identified postoperative patients who were sufficiently recovered to participate.\nPatient approached by nurse and researchers and informed about the study. Information sheet provided (5 min).\nIf willing to take part, patient consent obtained (3 min).\nPatient informed about the introduction of the surgical safety checklist and shown a laminated copy of the tool (1 min).\nPatient viewed video of what used to happen before introduction of checklist (at equivalent perioperative phases) (3 min).\nPatient viewed video of checklist being used at ‘sign-in’, ‘time-out’, and ‘sign-out’ (5 min).\nPatient completed questionnaire (10 min).\nPatient debriefed (3 min).\nTotal time=30 min.", "Data were analysed using the IBM SPSS Statistics 20 software. Responses to each of the items on the questionnaire were grouped into the following categories: disagree (scores of 1–2), neither agree nor disagree (scores of 3–5) and agree (scores of 6–7). The percentage of patients falling into these three categories was computed separately for each item and tabulated. The final item (‘I worry that I will come to unnecessary harm in hospital’) was reduced to a binary variable—those who agreed that they were worried (ie, scored 6 or 7) formed one group, and all others formed another group. χ2 Analysis was then used to determine whether this and any of the other demographic/patient variables (age, sex, ethnicity, length of stay, number of previous surgical procedures, past experience of an error in care) were associated with patients’ attitudes.", "A wide age range was represented in the sample (median=52 years, range=18–87 years), and while the majority of patients (61%, N=86) were British, the remaining patients varied widely in ethnic origin (table 1). The sample was evenly spread with regards to sex, the number of previous operations they had experienced (0, 1–2, 3 or more), and whether or not they were worried that they would come to harm in hospital. Eighty per cent of the patients (N=113) had been admitted for day surgery procedures (including hernia repair, arthroscopy, laparoscopic cholecystectomy, hysterectomy, varicose veins), while the remaining 20% of patients (N=28) had been admitted for a longer period of days or weeks for major surgical procedures (including prostatectomy, colectomy, oesophagectomy, lower limb amputation, nephrectomy). 
Finally, 8.7% of the patients (N=12) reported that they had experienced a previous adverse event in hospital (eg, medication error, surgical site infection).", "The majority of patients agreed that they would like the checklist to be used if they were having an operation (78%), compared with a very small number of patients who were not in favour of its use (4.3%) (table 2). In line with this positive perception of the checklist, patients largely agreed that use of the checklist would make them feel safer (74%), that it would improve communication between staff in theatre (69%), and that it would reduce the number of errors during surgery (67%). Most patients (61%) did not agree with the view that the checklist was an unnecessary ‘tick-box’ exercise, or that it would undermine the competence of front-line staff (57%). Patients over 50 years of age were slightly less positive: they were more likely to agree that it was an unnecessary ‘tick-box’ exercise (18% vs 10%) (χ2=7.72, df=2, p=0.021) and less persuaded that it would reduce errors (17% vs 3%) (χ2=7.26, df=2, p=0.027). However, their views were overall still more positive towards the checklist than not.\nThose who reported that they were worried about coming to unnecessary harm in hospital were more positive than the rest, being significantly more likely to agree that they would like the checklist to be used (87% vs 69%) (χ2=7.18, df=2, p=0.028), that it would make them feel safer (89% vs 57%) (χ2=21.35,df=2, p<0.001), and that it would improve communication in the operating theatre (80% vs 57%) (χ2=9.91,df=2, p=0.007). Over half the participating patients (53%) assumed that a checklist like this had always been in place.", "Overall, patients did not seem to mind which member of their care team took charge of the checklist: 68% agreed that they would feel comfortable with a nurse carrying out the checks and 57% neither agreed nor disagreed that they would rather a surgeon lead them (table 2). Those who had been in hospital for 3 or more days, however, were significantly more likely to agree that they would like the surgeon to lead the checklist (40% vs 14%) (χ2=9.38, df=2, p=0.009). Overall, most patients trusted that their care team would carry out the checklist correctly: less than a third (29%) were worried that staff would not complete it correctly, and most (63%) did not want any specific assurance of it having been used, or were impartial. Those who reported that they were worried about coming to unnecessary harm in hospital were significantly more likely to agree that they would like some assurance that the checklist had been used compared with those who were not worried (46% vs 25%) (χ2=6.99, df=2, p=0.03). Overall, patients were divided with regards to whether they felt that hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ portion of the checklist) would make them feel anxious: 30% agreed that it would, 26% said that it would not, and 44% were impartial. Those who had experienced a previous error in care (8.7% of participants) were significantly less likely to disagree that such discussions would make them feel anxious (0% vs 28%) (χ2=6.35, df=2, p=0.042). 
Almost all patients, however, agreed that they understood why they needed to confirm their identity and procedure before their surgery (84%), particularly those who were less than 50 years of age (92%), and those who were worried about coming to harm (96%) (χ2=9.46, df=2, p=0.009, χ2=18.40, df=2, p<0.001).", "Although all patients who agreed to take part felt able to give their views on the surgical checklist, most did not feel that they had a major part to play in safety improvement work in general (table 3). Over half the participants (51%) agreed that it was best to leave decisions about patient safety to healthcare professionals and the same proportion disagreed that they could help to reduce errors in their care if they were more involved. Similarly, only 38% of patients agreed that patient feedback should be used to identify areas for improvement in patient safety (48% disagreed). This figure decreased to 24% after excluding those who had experienced a previous error in care, who were significantly more likely to agree that patient feedback should be used (66% vs 24%) (χ2=11.32, df=2, p=0.003). Patients who were more worried about coming to harm in hospital were significantly more likely to agree that they would like to become involved in efforts to reduce patient harm (60% vs 29%) (χ2=13.73, df=2, p=0.001).", "This study has certain limitations. First, only surgical patients (and largely day surgery patients who were generally more able and willing to participate) were included in the sample. A wider sample, including patients from other specialities or, indeed, members of the public (who are all potential patients), might generate different views. Second, although the questionnaire methodology undertaken was ideal for an initial survey, there is clearly a need to further explore and understand the beliefs underlying these views. Additional qualitative studies are needed to explore the views of patients on the surgical checklist and safety measures more generally. Finally, with regard to the design of the questionnaire, it can be argued that we have omitted the important role played by anaesthetists in leadership and implementation of the checklist. While we asked patients whether they would prefer a surgeon or nurse to lead the checks, no reference to the anaesthetist was made. The reason behind this was to aid comprehension and to account for the fact that patients are often unfamiliar with the role an anaesthetist plays during surgery; however, it is a clear limitation in the study and should be addressed in future work. This, in itself, highlights the challenges inherent in designing tools for use with patient cohorts.", "The current work has implications for patient safety research, healthcare improvement and clinical practice. First, we have shown that our sample of patients was largely in favour of the WHO checklist. While it may not seem surprising that patients would be positive towards extra safety checks, we believe that the current findings provide a persuasive argument for the use of the checklist and a challenge to those who are reluctant to its use or who do not complete parts of the checks on the grounds that patients might not feel comfortable. This is an important addition to the work around safety checklist implementation to date.\nSecond, the study has suggested ways in which the use of the checklist might be adjusted to take into account the sensitivities of the patient experience. 
For example, we found that almost a third of patients expressed that they would feel anxious upon hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ checks). This information could be used to inform modification of the tool (which was endorsed by the National Patient Safety Agency when the checklist was introduced).19 The wording of the ‘blood loss’ item might be altered (eg, maybe by referring to the need for a ‘group and save’ instead), or this check could be completed at a different time-point entirely (eg, blood loss is also checked at ‘time-out’ when the patient is asleep, which could be sufficient). From our experience, this kind of modification is already happening in a large number of hospitals, but here we have provided support for doing so from a patient perspective. This is a simple demonstration of how patient experience might be improved (or at the very least not unnecessarily compromised) by gaining patient input into the delivery of safety interventions, and the research question can certainly be extended to additional interventions beyond the checklist.\nThird, this work informs the methodological approach we might take to involve patients in quality and safety improvement in healthcare, which is an area that requires consideration. We have presented a feasible methodology for informing patients about safety interventions that they themselves would not necessarily otherwise be aware of or witness, which provides them with the opportunity to share their views surrounding its implementation. The use of videos was an efficient, and well received effective means of visually demonstrating the use of the checklist and equivalent prechecklist practices, and made these concepts easy to grasp and relevant to the individual patient. This methodology was entirely feasible for use on a hospital ward. The survey instrument was also a quick and simple way of collecting a large amount of data—participants could answer the questions at their own pace and rarely required assistance. This methodology allows standardised presentation of information and can be adapted to satisfy a range of research questions. Patients were generally very happy to participate and, in fact, valued the distraction from their environment." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Sample", "Design", "Materials", "Procedure", "Statistical analyses", "Results", "Patient characteristics", "Attitudes towards the WHO checklist", "Attitudes towards use of the checklist in practice", "Attitudes towards involvement in safety improvement in healthcare", "Discussion", "Limitations", "Implications", "Conclusion" ]
[ "The introduction of the WHO surgical safety checklist into National Health Service (NHS) operating theatres in 2009 represented the first move towards mandatory action for improving surgical safety in the UK for some time.1 The potential for safety checklists to improve surgical outcomes and team performance is largely supported across the surgical literature2–8; however, their ability to bring about such improvements appears to be related to the style of implementation adopted, and the buy-in fostered by clinical teams, rather than their introduction per se. Indeed, the lack of a focussed implementation programme to support checklist introduction (including aspects such as training, feedback, local adaptation and involvement from all levels of the organisation), might explain reports that have not detected an effect of checklists on outcomes.9\n10\nIn the UK, implementation of the WHO checklist has not been entirely straightforward. Several barriers to implementation have been described, including some very practical issues (eg, bringing the whole team together at one time),8\n11 other tool-specific concerns (eg, the applicability of the checks to certain surgical contexts),12 and certain team-based challenges (eg, checklist fatigue and blurred lines of accountability).13 While some clinicians have been quick to see the benefits and have embraced the use of checklists, others have strongly resisted their implementation, seeing it as an attack on clinical autonomy or a slur on their professionalism.12–16\nThis discussion of the introduction and use of surgical checklists has so far been conducted entirely from the standpoint of the professionals involved. But perhaps patients, being the recipients of care as well as the payers, should have a voice in the weight given to safety in healthcare systems and in the introduction of novel safety measures. At a time when the fallibility of hospital care is very much in the public eye (with the release of publications such as the Francis report in the UK),17 this question becomes particularly pertinent. In many cases, such as controls on radiation levels or chemotherapy dosing, only a small number of healthcare experts have the requisite expertise to formulate and assess safety measures. 
In other cases however, such as using a checklist, patients might be able to come to a view and potentially influence implementation.18 By contrast with aviation, where safety checklists are very much engrained standard operating procedure and directly involve crew members only, in surgery, patients are part of the process and there are still questions regarding how they will respond to the checklist, if they will feel more vulnerable to errors, whether they would be agitated hearing some of the checks discussed and so on.\nThis study sought to address these questions which, to the best of our knowledge, have not been addressed before:\nWhat views do patients have about the use of the WHO surgical safety checklist in NHS hospitals?Does the previous experience of error in hospital or other experiential/patient characteristics influence these views?\nWhat views do patients have about the use of the WHO surgical safety checklist in NHS hospitals?\nDoes the previous experience of error in hospital or other experiential/patient characteristics influence these views?\nAs a secondary aim, we also explored the views patients have towards being involved in decisions around improvements in safety and the care they receive more generally, and sought to begin to understand how best to assess such patient views on safety measures in healthcare from a methodological perspective. Specifically, we tested the feasibility of using videos to communicate safety concepts to patients on hospital wards.", " Sample Patients were recruited from surgical wards at two large teaching hospitals in London, UK, between June and November 2011. Sampling was opportunistic based on the permission of a senior nurse, and within the constraints of the following inclusion criteria: all patients were over 18 years of age, were able and willing to provide informed consent to participate, could fully understand and express themselves in English, were not in any distress or suffering from a serious mental illness, and did not have a clinical occupation. Clinicians were excluded from the sample as they might have had previous experience and views on the use of the WHO checklist. All patients had undergone a surgical procedure during their current period of in-hospital stay. We visited day surgery wards and standard surgical wards with the aim of including a mix of patients who had undergone minor and more serious procedures. Patient characteristics are displayed in table 1.\nPatients’ characteristics (N=141)\nPatients were recruited from surgical wards at two large teaching hospitals in London, UK, between June and November 2011. Sampling was opportunistic based on the permission of a senior nurse, and within the constraints of the following inclusion criteria: all patients were over 18 years of age, were able and willing to provide informed consent to participate, could fully understand and express themselves in English, were not in any distress or suffering from a serious mental illness, and did not have a clinical occupation. Clinicians were excluded from the sample as they might have had previous experience and views on the use of the WHO checklist. All patients had undergone a surgical procedure during their current period of in-hospital stay. We visited day surgery wards and standard surgical wards with the aim of including a mix of patients who had undergone minor and more serious procedures. 
Patient characteristics are displayed in table 1.\nPatients’ characteristics (N=141)\n Design To assess patients’ views of the WHO checklist and their involvement in safety improvement, a methodology was required which was feasible to deliver on hospital wards. It was necessary to ensure that patients were informed about the checklist without bias—what it is, how it is used, how it differs from previous practice, and how it is relevant to their care journey. We needed to conduct this in a standardised, comprehensive and valid manner, while avoiding unnecessary confusion or anxiety caused to patients still receiving care.\nThis was achieved by demonstrating use of the checklist to patients visually using two professionally produced videos. The videos were produced in collaboration with clinical teams (to ensure a realistic and unstaged portrayal of operating room procedures), and patient safety experts. The videos captured the perioperative safety procedures undertaken preintroduction (video 1) and postintroduction (video 2) of the WHO surgical safety checklist. This enabled patients viewing the videos to compare the relevant periods of the operations before and after introduction of the checklist. Patients were also given a laminated version of the WHO checklist for reference. To measure patients’ attitudes and to record patient characteristics, a simple questionnaire was designed for completion by the patient after the videos had been viewed. Ethical approval for the study was granted by the participating NHS Trust and the local research ethics committee before data collection commenced.\nTo assess patients’ views of the WHO checklist and their involvement in safety improvement, a methodology was required which was feasible to deliver on hospital wards. It was necessary to ensure that patients were informed about the checklist without bias—what it is, how it is used, how it differs from previous practice, and how it is relevant to their care journey. We needed to conduct this in a standardised, comprehensive and valid manner, while avoiding unnecessary confusion or anxiety caused to patients still receiving care.\nThis was achieved by demonstrating use of the checklist to patients visually using two professionally produced videos. The videos were produced in collaboration with clinical teams (to ensure a realistic and unstaged portrayal of operating room procedures), and patient safety experts. The videos captured the perioperative safety procedures undertaken preintroduction (video 1) and postintroduction (video 2) of the WHO surgical safety checklist. This enabled patients viewing the videos to compare the relevant periods of the operations before and after introduction of the checklist. Patients were also given a laminated version of the WHO checklist for reference. To measure patients’ attitudes and to record patient characteristics, a simple questionnaire was designed for completion by the patient after the videos had been viewed. 
Ethical approval for the study was granted by the participating NHS Trust and the local research ethics committee before data collection commenced.\n Materials Videos: The ‘pre-checklist’ video (shown first) depicted the typical safety checks occurring before introduction of the checklist at equivalent stages to which the ‘sign-in’, ‘time-out’, and ‘sign-out’ portions of the WHO checklist are completed (ie, when the patient enters the anaesthetic room and is checked in, the final stage of set up before incision, and postoperatively before the patient leaves the operating theatre, respectively). The ‘post-checklist’ video (shown second) depicted the process of completing the formal ‘sign-in’, ‘time-out’ and ‘sign-out’ sections of the WHO surgical safety checklist in a manner that adhered to recommended good practice.17 The ‘sign-in’ phase of the two videos was fairly similar (given that majority of the preanaesthetic checks listed in the checklist were already routinely taking place); however, the ‘time-out’ phase of the prechecklist video was shorter (including an identification (ID), procedure and antibiotic check, but no formal team discussion), and the ‘sign-out’ phase of the prechecklist video was shorter still (including a brief discussion between the surgical team only). The two videos were shot in an operating theatre complex out of hours, with a professional actor playing the role of the patient and a complete operating theatre team running the two different scenarios. The ‘sign-in’ (and equivalent prechecklist checks) was filmed in an anaesthetic room with an anaesthetist, operating department practitioner and the patient, while the ‘time-out’ and ‘sign-out’ (and equivalent prechecklist checks) were filmed in an operating theatre with the full operating theatre team. No scenes of the operation were included, only the safety checks and related conversations at each of the three perioperative phases were shown. Each video lasted between 3 and 4 min; edited clips are available from the authors on request.\nQuestionnaire: A 19-item questionnaire was designed to assess the following constructs: attitudes towards the WHO surgical safety checklist (eight items), attitudes regarding how the checklist is used in practice (six items), attitudes towards involvement in safety improvement efforts in hospitals more generally (four items), and the degree of anxiety that one might come to some harm in hospital (one item). Each item was phrased as a statement, for example, ‘Using the checklist would make me feel safer’, ‘I would rather a surgeon took charge of the checklist than a nurse’, ‘Given the opportunity, I would like to be more involved in efforts to reduce patient harm’. Respondents rated the degree to which they agreed with each statement on a Likert scale (1=strongly disagree, 7=strongly agree). Patient characteristics (including basic demographics and patient experience of hospital care) that might be associated with such attitudes were also recorded (ie, age, sex, ethnicity, occupation, surgical procedure admitted for, general or local anaesthetic, number of previous operations, and any previous experience of medical error). 
The questionnaire was piloted on a sample of 20 patients before data collection commenced; this process identified one question that was consequently rephrased to improve comprehension (see tables 2 and 3 for questionnaire items).\nPatients’ views of the checklist (N=141)\n*% of total sample.\nPatients’ views of involvement in safety improvement in healthcare (N=141)\n*% of total sample.
 Procedure A senior ward nurse was consulted before approaching patients, to identify (1) those who had already undergone their surgical procedure (patients who had not yet undergone their surgery were excluded to avoid provoking unnecessary anxiety) and (2) those who were deemed well enough to participate. The study was explained verbally with the aid of a written information sheet and informed consent was obtained. Patients were shown a laminated version of the WHO checklist (UK's National Patient Safety Agency standard version).19 The checklist was described as a ‘change in process during surgery’ about which it was important to collect patients’ views. Care was taken not to provide more detail than this so as to avoid biasing patients towards the checklist from the outset. Patients were instructed that they would view two videos, the first depicting what happened before the checklist was introduced (‘prechecklist’ video) and the second depicting what happens when the checklist is used, that is, currently (‘postchecklist’ video). Videos were displayed on a laptop at the patient's bedside, with sufficient sound quality for the patient to hear the videos without headphones (they could use their own headphones if they wished). Patients were then asked to fill in the questionnaire. The Likert scale was explained and they were assured that there were no right or wrong answers. It took around 30 minutes to collect data from each patient (see box 1 for a breakdown of the procedure).\nSenior ward nurse identified postoperative patients who were sufficiently recovered to participate.\nPatient approached by nurse and researchers and informed about the study. Information sheet provided (5 min).\nIf willing to take part, patient consent obtained (3 min).\nPatient informed about the introduction of the surgical safety checklist and shown a laminated copy of the tool (1 min).\nPatient viewed video of what used to happen before introduction of checklist (at equivalent perioperative phases) (3 min).\nPatient viewed video of checklist being used at ‘sign-in’, ‘time-out’, and ‘sign-out’ (5 min).\nPatient completed questionnaire (10 min).\nPatient debriefed (3 min).\nTotal time=30 min.
 Statistical analyses Data were analysed using the IBM SPSS Statistics 20 software. Responses to each of the items on the questionnaire were grouped into the following categories: disagree (scores of 1–2), neither agree nor disagree (scores of 3–5) and agree (scores of 6–7). The percentage of patients falling into these three categories was computed separately for each item and tabulated. The final item (‘I worry that I will come to unnecessary harm in hospital’) was reduced to a binary variable—those who agreed that they were worried (ie, scored 6 or 7) formed one group, and all others formed another group. χ2 analysis was then used to determine whether this and any of the other demographic/patient variables (age, sex, ethnicity, length of stay, number of previous surgical procedures, past experience of an error in care) were associated with patients’ attitudes.", "Patients were recruited from surgical wards at two large teaching hospitals in London, UK, between June and November 2011.
Sampling was opportunistic based on the permission of a senior nurse, and within the constraints of the following inclusion criteria: all patients were over 18 years of age, were able and willing to provide informed consent to participate, could fully understand and express themselves in English, were not in any distress or suffering from a serious mental illness, and did not have a clinical occupation. Clinicians were excluded from the sample as they might have had previous experience and views on the use of the WHO checklist. All patients had undergone a surgical procedure during their current period of in-hospital stay. We visited day surgery wards and standard surgical wards with the aim of including a mix of patients who had undergone minor and more serious procedures. Patient characteristics are displayed in table 1.\nPatients’ characteristics (N=141)", "To assess patients’ views of the WHO checklist and their involvement in safety improvement, a methodology was required which was feasible to deliver on hospital wards. It was necessary to ensure that patients were informed about the checklist without bias—what it is, how it is used, how it differs from previous practice, and how it is relevant to their care journey. We needed to conduct this in a standardised, comprehensive and valid manner, while avoiding unnecessary confusion or anxiety caused to patients still receiving care.\nThis was achieved by demonstrating use of the checklist to patients visually using two professionally produced videos. The videos were produced in collaboration with clinical teams (to ensure a realistic and unstaged portrayal of operating room procedures), and patient safety experts. The videos captured the perioperative safety procedures undertaken preintroduction (video 1) and postintroduction (video 2) of the WHO surgical safety checklist. This enabled patients viewing the videos to compare the relevant periods of the operations before and after introduction of the checklist. Patients were also given a laminated version of the WHO checklist for reference. To measure patients’ attitudes and to record patient characteristics, a simple questionnaire was designed for completion by the patient after the videos had been viewed. Ethical approval for the study was granted by the participating NHS Trust and the local research ethics committee before data collection commenced.", "Videos: The ‘pre-checklist’ video (shown first) depicted the typical safety checks occurring before introduction of the checklist at equivalent stages to which the ‘sign-in’, ‘time-out’, and ‘sign-out’ portions of the WHO checklist are completed (ie, when the patient enters the anaesthetic room and is checked in, the final stage of set up before incision, and postoperatively before the patient leaves the operating theatre, respectively). The ‘post-checklist’ video (shown second) depicted the process of completing the formal ‘sign-in’, ‘time-out’ and ‘sign-out’ sections of the WHO surgical safety checklist in a manner that adhered to recommended good practice.17 The ‘sign-in’ phase of the two videos was fairly similar (given that majority of the preanaesthetic checks listed in the checklist were already routinely taking place); however, the ‘time-out’ phase of the prechecklist video was shorter (including an identification (ID), procedure and antibiotic check, but no formal team discussion), and the ‘sign-out’ phase of the prechecklist video was shorter still (including a brief discussion between the surgical team only). 
The two videos were shot in an operating theatre complex out of hours, with a professional actor playing the role of the patient and a complete operating theatre team running the two different scenarios. The ‘sign-in’ (and equivalent prechecklist checks) was filmed in an anaesthetic room with an anaesthetist, operating department practitioner and the patient, while the ‘time-out’ and ‘sign-out’ (and equivalent prechecklist checks) were filmed in an operating theatre with the full operating theatre team. No scenes of the operation were included, only the safety checks and related conversations at each of the three perioperative phases were shown. Each video lasted between 3 and 4 min; edited clips are available from the authors on request.\nQuestionnaire: A 19-item questionnaire was designed to assess the following constructs: attitudes towards the WHO surgical safety checklist (eight items), attitudes regarding how the checklist is used in practice (six items), attitudes towards involvement in safety improvement efforts in hospitals more generally (four items), and the degree of anxiety that one might come to some harm in hospital (one item). Each item was phrased as a statement, for example, ‘Using the checklist would make me feel safer’, ‘I would rather a surgeon took charge of the checklist than a nurse’, ‘Given the opportunity, I would like to be more involved in efforts to reduce patient harm’. Respondents rated the degree to which they agreed with each statement on a Likert scale (1=strongly disagree, 7=strongly agree). Patient characteristics (including basic demographics and patient experience of hospital care) that might be associated with such attitudes were also recorded (ie, age, sex, ethnicity, occupation, surgical procedure admitted for, general or local anaesthetic, number of previous operations, and any previous experience of medical error). The questionnaire was piloted on a sample of 20 patients before data collection commenced; this process identified one question that was consequently rephrased to improve comprehension (see tables 2 and 3 for questionnaire items).\nPatients’ views of the checklist (N=141)\n*% of total sample.\nPatients’ views of involvement in safety improvement in healthcare (N=141)\n*% of total sample.", "A senior ward nurse was consulted before approaching patients, to identify (1) those who had already undergone their surgical procedure (patients who had not yet undergone their surgery were excluded to avoid provoking unnecessary anxiety) and (2) those who were deemed well enough to participate. The study was explained verbally with the aid of a written information sheet and informed consent was obtained. Patients were shown a laminated version of the WHO checklist (UK's National Patient Safety Agency standard version).19 The checklist was described as a ‘change in process during surgery’ about which it was important to collect patients’ views. Care was taken not to provide more detail than this so as to avoid biasing patients’ towards the checklist from the outset. Patients were instructed that they would view two videos, the first depicting what happened before the checklist was introduced (‘prechecklist’ video) and the second depicting what happens when the checklist is used, that is, currently (‘postchecklist’ video). Videos were displayed on a laptop at the patient's bedside, with sufficient sound quality for the patient to hear the videos without headphones (they could use their own headphones if they wished). 
Patients were then asked to fill in the questionnaire. The Likert scale was explained and they were assured that there were no right or wrong answers. It took around 30 minutes to collect data from each patient (see box 1 for a breakdown of the procedure).\nSenior ward nurse identified postoperative patients who were sufficiently recovered to participate.\nPatient approached by nurse and researchers and informed about the study. Information sheet provided (5 min).\nIf willing to take part, patient consent obtained (3 min).\nPatient informed about the introduction of the surgical safety checklist and shown a laminated copy of the tool (1 min).\nPatient viewed video of what used to happen before introduction of checklist (at equivalent perioperative phases) (3 min).\nPatient viewed video of checklist being used at ‘sign-in’, ‘time-out’, and ‘sign-out’ (5 min).\nPatient completed questionnaire (10 min).\nPatient debriefed (3 min).\nTotal time=30 min.", "Data were analysed using the IBM SPSS Statistics 20 software. Responses to each of the items on the questionnaire were grouped into the following categories: disagree (scores of 1–2), neither agree nor disagree (scores of 3–5) and agree (scores of 6–7). The percentage of patients falling into these three categories was computed separately for each item and tabulated. The final item (‘I worry that I will come to unnecessary harm in hospital’) was reduced to a binary variable—those who agreed that they were worried (ie, scored 6 or 7) formed one group, and all others formed another group. χ2 analysis was then used to determine whether this and any of the other demographic/patient variables (age, sex, ethnicity, length of stay, number of previous surgical procedures, past experience of an error in care) were associated with patients’ attitudes.",
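For readers who want to reproduce this style of analysis outside SPSS, the banding and association-testing steps described in the statistical analyses can be sketched in a few lines. The example below is illustrative only: the study itself used IBM SPSS Statistics 20, and the column names and toy responses here are hypothetical placeholders rather than the actual questionnaire data.

```python
# Illustrative sketch only: the published analysis was run in IBM SPSS Statistics 20;
# the data frame below is a hypothetical stand-in for the real questionnaire data.
import pandas as pd
from scipy.stats import chi2_contingency

def band(score: int) -> str:
    """Collapse a 7-point Likert score into the three bands used in the analysis."""
    if score <= 2:
        return "disagree"
    if score <= 5:
        return "neither agree nor disagree"
    return "agree"

# Hypothetical responses: one row per patient, one column per questionnaire item.
responses = pd.DataFrame({
    "feel_safer": [7, 6, 4, 2, 6, 5, 7, 1],   # 'Using the checklist would make me feel safer'
    "worry_harm": [6, 2, 3, 7, 5, 1, 6, 2],   # 'I worry that I will come to unnecessary harm in hospital'
})

# Percentage of patients falling into each band for a given item.
banded = responses["feel_safer"].apply(band)
print(banded.value_counts(normalize=True).mul(100).round(1))

# Reduce the worry item to a binary grouping variable (worried = scored 6 or 7).
worried = responses["worry_harm"] >= 6

# Chi-square test of association between the grouping variable and the banded item.
table = pd.crosstab(worried, banded)
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.3f}")
```

The same crosstab-and-test pattern would simply be repeated for each grouping variable (age band, sex, ethnicity, length of stay, previous operations, prior experience of error) against each banded attitude item.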
"In total, 180 patients were approached to take part in the study. Thirty-nine refused to participate for reasons including not feeling well enough, waiting for visitors, waiting for lunch, and inadequate understanding of English. This meant that data were available for 141 patients.\n Patient characteristics A wide age range was represented in the sample (median=52 years, range=18–87 years), and while the majority of patients (61%, N=86) were British, the remaining patients varied widely in ethnic origin (table 1). The sample was evenly spread with regards to sex, the number of previous operations they had experienced (0, 1–2, 3 or more), and whether or not they were worried that they would come to harm in hospital. Eighty per cent of the patients (N=113) had been admitted for day surgery procedures (including hernia repair, arthroscopy, laparoscopic cholecystectomy, hysterectomy, varicose veins), while the remaining 20% of patients (N=28) had been admitted for a longer period of days or weeks for major surgical procedures (including prostatectomy, colectomy, oesophagectomy, lower limb amputation, nephrectomy). Finally, 8.7% of the patients (N=12) reported that they had experienced a previous adverse event in hospital (eg, medication error, surgical site infection).\n Attitudes towards the WHO checklist The majority of patients agreed that they would like the checklist to be used if they were having an operation (78%), compared with a very small number of patients who were not in favour of its use (4.3%) (table 2). In line with this positive perception of the checklist, patients largely agreed that use of the checklist would make them feel safer (74%), that it would improve communication between staff in theatre (69%), and that it would reduce the number of errors during surgery (67%). Most patients (61%) did not agree with the view that the checklist was an unnecessary ‘tick-box’ exercise, or that it would undermine the competence of front-line staff (57%). Patients over 50 years of age were slightly less positive: they were more likely to agree that it was an unnecessary ‘tick-box’ exercise (18% vs 10%) (χ2=7.72, df=2, p=0.021) and less persuaded that it would reduce errors (17% vs 3%) (χ2=7.26, df=2, p=0.027). However, their views were overall still more positive towards the checklist than not.\nThose who reported that they were worried about coming to unnecessary harm in hospital were more positive than the rest, being significantly more likely to agree that they would like the checklist to be used (87% vs 69%) (χ2=7.18, df=2, p=0.028), that it would make them feel safer (89% vs 57%) (χ2=21.35, df=2, p<0.001), and that it would improve communication in the operating theatre (80% vs 57%) (χ2=9.91, df=2, p=0.007). Over half the participating patients (53%) assumed that a checklist like this had always been in place.
 Attitudes towards use of the checklist in practice Overall, patients did not seem to mind which member of their care team took charge of the checklist: 68% agreed that they would feel comfortable with a nurse carrying out the checks and 57% neither agreed nor disagreed that they would rather a surgeon lead them (table 2). Those who had been in hospital for 3 or more days, however, were significantly more likely to agree that they would like the surgeon to lead the checklist (40% vs 14%) (χ2=9.38, df=2, p=0.009). Overall, most patients trusted that their care team would carry out the checklist correctly: less than a third (29%) were worried that staff would not complete it correctly, and most (63%) did not want any specific assurance of it having been used, or were impartial. Those who reported that they were worried about coming to unnecessary harm in hospital were significantly more likely to agree that they would like some assurance that the checklist had been used compared with those who were not worried (46% vs 25%) (χ2=6.99, df=2, p=0.03). Overall, patients were divided with regards to whether they felt that hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ portion of the checklist) would make them feel anxious: 30% agreed that it would, 26% said that it would not, and 44% were impartial. Those who had experienced a previous error in care (8.7% of participants) were significantly less likely to disagree that such discussions would make them feel anxious (0% vs 28%) (χ2=6.35, df=2, p=0.042). Almost all patients, however, agreed that they understood why they needed to confirm their identity and procedure before their surgery (84%), particularly those who were less than 50 years of age (92%), and those who were worried about coming to harm (96%) (χ2=9.46, df=2, p=0.009, χ2=18.40, df=2, p<0.001).
 Attitudes towards involvement in safety improvement in healthcare Although all patients who agreed to take part felt able to give their views on the surgical checklist, most did not feel that they had a major part to play in safety improvement work in general (table 3). Over half the participants (51%) agreed that it was best to leave decisions about patient safety to healthcare professionals and the same proportion disagreed that they could help to reduce errors in their care if they were more involved. Similarly, only 38% of patients agreed that patient feedback should be used to identify areas for improvement in patient safety (48% disagreed). This figure decreased to 24% after excluding those who had experienced a previous error in care, who were significantly more likely to agree that patient feedback should be used (66% vs 24%) (χ2=11.32, df=2, p=0.003). Patients who were more worried about coming to harm in hospital were significantly more likely to agree that they would like to become involved in efforts to reduce patient harm (60% vs 29%) (χ2=13.73, df=2, p=0.001).", "A wide age range was represented in the sample (median=52 years, range=18–87 years), and while the majority of patients (61%, N=86) were British, the remaining patients varied widely in ethnic origin (table 1). The sample was evenly spread with regards to sex, the number of previous operations they had experienced (0, 1–2, 3 or more), and whether or not they were worried that they would come to harm in hospital.
Eighty per cent of the patients (N=113) had been admitted for day surgery procedures (including hernia repair, arthroscopy, laparoscopic cholecystectomy, hysterectomy, varicose veins), while the remaining 20% of patients (N=28) had been admitted for a longer period of days or weeks for major surgical procedures (including prostatectomy, colectomy, oesophagectomy, lower limb amputation, nephrectomy). Finally, 8.7% of the patients (N=12) reported that they had experienced a previous adverse event in hospital (eg, medication error, surgical site infection).", "The majority of patients agreed that they would like the checklist to be used if they were having an operation (78%), compared with a very small number of patients who were not in favour of its use (4.3%) (table 2). In line with this positive perception of the checklist, patients largely agreed that use of the checklist would make them feel safer (74%), that it would improve communication between staff in theatre (69%), and that it would reduce the number of errors during surgery (67%). Most patients (61%) did not agree with the view that the checklist was an unnecessary ‘tick-box’ exercise, or that it would undermine the competence of front-line staff (57%). Patients over 50 years of age were slightly less positive: they were more likely to agree that it was an unnecessary ‘tick-box’ exercise (18% vs 10%) (χ2=7.72, df=2, p=0.021) and less persuaded that it would reduce errors (17% vs 3%) (χ2=7.26, df=2, p=0.027). However, their views were overall still more positive towards the checklist than not.\nThose who reported that they were worried about coming to unnecessary harm in hospital were more positive than the rest, being significantly more likely to agree that they would like the checklist to be used (87% vs 69%) (χ2=7.18, df=2, p=0.028), that it would make them feel safer (89% vs 57%) (χ2=21.35,df=2, p<0.001), and that it would improve communication in the operating theatre (80% vs 57%) (χ2=9.91,df=2, p=0.007). Over half the participating patients (53%) assumed that a checklist like this had always been in place.", "Overall, patients did not seem to mind which member of their care team took charge of the checklist: 68% agreed that they would feel comfortable with a nurse carrying out the checks and 57% neither agreed nor disagreed that they would rather a surgeon lead them (table 2). Those who had been in hospital for 3 or more days, however, were significantly more likely to agree that they would like the surgeon to lead the checklist (40% vs 14%) (χ2=9.38, df=2, p=0.009). Overall, most patients trusted that their care team would carry out the checklist correctly: less than a third (29%) were worried that staff would not complete it correctly, and most (63%) did not want any specific assurance of it having been used, or were impartial. Those who reported that they were worried about coming to unnecessary harm in hospital were significantly more likely to agree that they would like some assurance that the checklist had been used compared with those who were not worried (46% vs 25%) (χ2=6.99, df=2, p=0.03). Overall, patients were divided with regards to whether they felt that hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ portion of the checklist) would make them feel anxious: 30% agreed that it would, 26% said that it would not, and 44% were impartial. 
Those who had experienced a previous error in care (8.7% of participants) were significantly less likely to disagree that such discussions would make them feel anxious (0% vs 28%) (χ2=6.35, df=2, p=0.042). Almost all patients, however, agreed that they understood why they needed to confirm their identity and procedure before their surgery (84%), particularly those who were less than 50 years of age (92%), and those who were worried about coming to harm (96%) (χ2=9.46, df=2, p=0.009, χ2=18.40, df=2, p<0.001).", "Although all patients who agreed to take part felt able to give their views on the surgical checklist, most did not feel that they had a major part to play in safety improvement work in general (table 3). Over half the participants (51%) agreed that it was best to leave decisions about patient safety to healthcare professionals and the same proportion disagreed that they could help to reduce errors in their care if they were more involved. Similarly, only 38% of patients agreed that patient feedback should be used to identify areas for improvement in patient safety (48% disagreed). This figure decreased to 24% after excluding those who had experienced a previous error in care, who were significantly more likely to agree that patient feedback should be used (66% vs 24%) (χ2=11.32, df=2, p=0.003). Patients who were more worried about coming to harm in hospital were significantly more likely to agree that they would like to become involved in efforts to reduce patient harm (60% vs 29%) (χ2=13.73, df=2, p=0.001).", "The importance of patient involvement in healthcare delivery and quality improvement is being increasingly acknowledged and addressed in clinical practice and in research. This study offers some understanding around the feasibility of capturing patients’ views of the implementation of safety measures, like the WHO surgical safety checklist, and also insight into patients’ impressions of the checklist, as well as their involvement in quality improvement efforts more generally.\nWe found that patients were very receptive towards the checklist; most were in fact surprised that the tool was only a recent introduction to surgical care. While there was heterogeneity in the results, patients were generally positive about the beneficial impact the checklist could have on communication and safety in theatre, and they understood why such basic checks were necessary. Most patients disagreed that the checks undermined the competence of staff, and were confident that the checklist would be used correctly by their care team. This contrasts with evidence that suggests that there is, in fact, a high degree of variability in how well the checklist is used in practice.8 Patients did not have strong preferences with regards to whom (ie, surgeon vs nurse) should take responsibility for the checklist. Different views were expressed on the experience of hearing discussion about potential blood loss just before surgery. Some felt that this would reassure them that all eventualities were being taken into consideration, whereas, a quarter of those asked felt that it would make them feel anxious and worried. This clearly requires further investigation and, potentially, some adjustment to how the checklist is used.\nDemographic characteristics had minimal impact on patients’ views. 
No differences were found according to sex or ethnicity and age (over 50 vs under 50 years) had an impact on responses for only three of the questionnaire items: older participants were slightly less likely to agree that such checks were necessary but were still positive overall about the checklist.\nFactors relating to an individual's previous or current experience in hospital and the level of worry that they have about coming to unnecessary harm had a greater impact on their views. In line with previously established prevalence rates, 8.7% of patients reported a previous adverse event in hospital (eg, medication error, surgical site infection).20 These individuals, and those who had been in hospital for three or more days for their surgery at the time of participating, were less likely to want to hear discussions around blood loss prior to their surgery and were more likely to want a surgeon to conduct the checklist, respectively. Patients who were worried that they would come to unnecessary harm in hospital were significantly more positive about the checklist. These individuals, arguably, have a more realistic view of the extent of error and harm in hospitals, and if patients in the study were provided with background material about safety problems in healthcare, their attitudes towards the checklist might be stronger than those expressed here. Patients in this study were, for the most part, having relatively straightforward day surgery, and many had comparatively little experience of healthcare. Future studies should address a wider variety of patients to assess whether patients with longstanding problems and more experience of healthcare have a different attitude to safety measures than those with less experience.\nAlthough patients felt able to provide views on the surgical checklist, it was noteworthy that they did not feel they had a strong role to play in safety improvement more generally. Those who had experienced a previous error in their care were more likely to agree that patient feedback should be used to try to improve services and those who worried about experiencing harm were more likely to agree that they would like to become involved in efforts to reduce patient harm. This fits with previous research into patient perceptions following adverse events.21 The majority, however, were willing to leave the responsibility for safety improvement to the healthcare professionals. This may again reflect the characteristics of our sample; since the majority of patients (80%) had been admitted for minor procedures, they were, on the whole, relatively low-risk. 
Perhaps patients with more serious conditions, comorbidities and chronic problems would be more motivated to, and more aware of, their ability to play a role in decisions around the process and delivery of their care.22 There are likely to be additional barriers to patient involvement in general quality improvement, however, including perceived patient/doctor authority gradients, willingness (or lack of it) to commit time and energy to quality improvement in the context of one's own health problems, and a fear that unwanted involvement might jeopardise the quality of their own care.18 23 Added to this, a patient's capacity to become involved will likely be influenced by their underlying intellectual, moral and behavioural profile.24 Patient involvement in quality improvement is still a relatively new and rapidly evolving concept in the NHS, and these findings highlight a need for raising awareness and educating the public around opportunities to become involved, and the benefits that can be gained.\n Limitations This study has certain limitations. First, only surgical patients (and largely day surgery patients who were generally more able and willing to participate) were included in the sample. A wider sample, including patients from other specialities or, indeed, members of the public (who are all potential patients), might generate different views. Second, although the questionnaire methodology undertaken was ideal for an initial survey, there is clearly a need to further explore and understand the beliefs underlying these views. Additional qualitative studies are needed to explore the views of patients on the surgical checklist and safety measures more generally. Finally, with regard to the design of the questionnaire, it can be argued that we have omitted the important role played by anaesthetists in leadership and implementation of the checklist. While we asked patients whether they would prefer a surgeon or nurse to lead the checks, no reference to the anaesthetist was made. The reason behind this was to aid comprehension and to account for the fact that patients are often unfamiliar with the role an anaesthetist plays during surgery; however, it is a clear limitation in the study and should be addressed in future work. This, in itself, highlights the challenges inherent in designing tools for use with patient cohorts.
 Implications The current work has implications for patient safety research, healthcare improvement and clinical practice. First, we have shown that our sample of patients was largely in favour of the WHO checklist. While it may not seem surprising that patients would be positive towards extra safety checks, we believe that the current findings provide a persuasive argument for the use of the checklist and a challenge to those who are resistant to its use or who do not complete parts of the checks on the grounds that patients might not feel comfortable. This is an important addition to the work around safety checklist implementation to date.\nSecond, the study has suggested ways in which the use of the checklist might be adjusted to take into account the sensitivities of the patient experience. For example, we found that almost a third of patients expressed that they would feel anxious upon hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ checks). This information could be used to inform modification of the tool (which was endorsed by the National Patient Safety Agency when the checklist was introduced).19 The wording of the ‘blood loss’ item might be altered (eg, maybe by referring to the need for a ‘group and save’ instead), or this check could be completed at a different time-point entirely (eg, blood loss is also checked at ‘time-out’ when the patient is asleep, which could be sufficient). From our experience, this kind of modification is already happening in a large number of hospitals, but here we have provided support for doing so from a patient perspective. This is a simple demonstration of how patient experience might be improved (or at the very least not unnecessarily compromised) by gaining patient input into the delivery of safety interventions, and the research question can certainly be extended to additional interventions beyond the checklist.\nThird, this work informs the methodological approach we might take to involve patients in quality and safety improvement in healthcare, which is an area that requires consideration. We have presented a feasible methodology for informing patients about safety interventions that they themselves would not necessarily otherwise be aware of or witness, which provides them with the opportunity to share their views surrounding its implementation. The use of videos was an efficient, well-received and effective means of visually demonstrating the use of the checklist and equivalent prechecklist practices, and made these concepts easy to grasp and relevant to the individual patient. This methodology was entirely feasible for use on a hospital ward. The survey instrument was also a quick and simple way of collecting a large amount of data—participants could answer the questions at their own pace and rarely required assistance. This methodology allows standardised presentation of information and can be adapted to satisfy a range of research questions. Patients were generally very happy to participate and, in fact, valued the distraction from their environment.",
"This study has certain limitations. First, only surgical patients (and largely day surgery patients who were generally more able and willing to participate) were included in the sample. A wider sample, including patients from other specialities or, indeed, members of the public (who are all potential patients), might generate different views. Second, although the questionnaire methodology undertaken was ideal for an initial survey, there is clearly a need to further explore and understand the beliefs underlying these views.
Additional qualitative studies are needed to explore the views of patients on the surgical checklist and safety measures more generally. Finally, with regard to the design of the questionnaire, it can be argued that we have omitted the important role played by anaesthetists in leadership and implementation of the checklist. While we asked patients whether they would prefer a surgeon or nurse to lead the checks, no reference to the anaesthetist was made. The reason behind this was to aid comprehension and to account for the fact that patients are often unfamiliar with the role an anaesthetist plays during surgery; however, it is a clear limitation in the study and should be addressed in future work. This, in itself, highlights the challenges inherent in designing tools for use with patient cohorts.", "The current work has implications for patient safety research, healthcare improvement and clinical practice. First, we have shown that our sample of patients was largely in favour of the WHO checklist. While it may not seem surprising that patients would be positive towards extra safety checks, we believe that the current findings provide a persuasive argument for the use of the checklist and a challenge to those who are reluctant to its use or who do not complete parts of the checks on the grounds that patients might not feel comfortable. This is an important addition to the work around safety checklist implementation to date.\nSecond, the study has suggested ways in which the use of the checklist might be adjusted to take into account the sensitivities of the patient experience. For example, we found that almost a third of patients expressed that they would feel anxious upon hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ checks). This information could be used to inform modification of the tool (which was endorsed by the National Patient Safety Agency when the checklist was introduced).19 The wording of the ‘blood loss’ item might be altered (eg, maybe by referring to the need for a ‘group and save’ instead), or this check could be completed at a different time-point entirely (eg, blood loss is also checked at ‘time-out’ when the patient is asleep, which could be sufficient). From our experience, this kind of modification is already happening in a large number of hospitals, but here we have provided support for doing so from a patient perspective. This is a simple demonstration of how patient experience might be improved (or at the very least not unnecessarily compromised) by gaining patient input into the delivery of safety interventions, and the research question can certainly be extended to additional interventions beyond the checklist.\nThird, this work informs the methodological approach we might take to involve patients in quality and safety improvement in healthcare, which is an area that requires consideration. We have presented a feasible methodology for informing patients about safety interventions that they themselves would not necessarily otherwise be aware of or witness, which provides them with the opportunity to share their views surrounding its implementation. The use of videos was an efficient, and well received effective means of visually demonstrating the use of the checklist and equivalent prechecklist practices, and made these concepts easy to grasp and relevant to the individual patient. This methodology was entirely feasible for use on a hospital ward. 
The survey instrument was also a quick and simple way of collecting a large amount of data—participants could answer the questions at their own pace and rarely required assistance. This methodology allows standardised presentation of information and can be adapted to satisfy a range of research questions. Patients were generally very happy to participate and, in fact, valued the distraction from their environment.", "We have demonstrated a high level of patient support for use of the WHO surgical safety checklist in our sample. We have also shown that it is feasible to gain patient insight into the delivery of safety tools like the checklist, and that such feedback can inform appropriate tool modifications. We highlight the need for better patient and public education around opportunities for becoming more actively involved in safety improvement in healthcare, and the continued development of approaches that allow feedback to be provided in a non-threatening and accessible manner." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, "discussion", null, null, "conclusions" ]
[ "Checklists", "Patient safety", "Attitudes", "Quality improvement", "Surgery" ]
Introduction: The introduction of the WHO surgical safety checklist into National Health Service (NHS) operating theatres in 2009 represented the first move towards mandatory action for improving surgical safety in the UK for some time.1 The potential for safety checklists to improve surgical outcomes and team performance is largely supported across the surgical literature2–8; however, their ability to bring about such improvements appears to be related to the style of implementation adopted, and the buy-in fostered by clinical teams, rather than their introduction per se. Indeed, the lack of a focussed implementation programme to support checklist introduction (including aspects such as training, feedback, local adaptation and involvement from all levels of the organisation), might explain reports that have not detected an effect of checklists on outcomes.9 10 In the UK, implementation of the WHO checklist has not been entirely straightforward. Several barriers to implementation have been described, including some very practical issues (eg, bringing the whole team together at one time),8 11 other tool-specific concerns (eg, the applicability of the checks to certain surgical contexts),12 and certain team-based challenges (eg, checklist fatigue and blurred lines of accountability).13 While some clinicians have been quick to see the benefits and have embraced the use of checklists, others have strongly resisted their implementation, seeing it as an attack on clinical autonomy or a slur on their professionalism.12–16 This discussion of the introduction and use of surgical checklists has so far been conducted entirely from the standpoint of the professionals involved. But perhaps patients, being the recipients of care as well as the payers, should have a voice in the weight given to safety in healthcare systems and in the introduction of novel safety measures. At a time when the fallibility of hospital care is very much in the public eye (with the release of publications such as the Francis report in the UK),17 this question becomes particularly pertinent. In many cases, such as controls on radiation levels or chemotherapy dosing, only a small number of healthcare experts have the requisite expertise to formulate and assess safety measures. In other cases however, such as using a checklist, patients might be able to come to a view and potentially influence implementation.18 By contrast with aviation, where safety checklists are very much engrained standard operating procedure and directly involve crew members only, in surgery, patients are part of the process and there are still questions regarding how they will respond to the checklist, if they will feel more vulnerable to errors, whether they would be agitated hearing some of the checks discussed and so on. This study sought to address these questions which, to the best of our knowledge, have not been addressed before: What views do patients have about the use of the WHO surgical safety checklist in NHS hospitals? Does the previous experience of error in hospital or other experiential/patient characteristics influence these views?
As a secondary aim, we also explored the views patients have towards being involved in decisions around improvements in safety and the care they receive more generally, and sought to begin to understand how best to assess such patient views on safety measures in healthcare from a methodological perspective. Specifically, we tested the feasibility of using videos to communicate safety concepts to patients on hospital wards. Methods: Sample: Patients were recruited from surgical wards at two large teaching hospitals in London, UK, between June and November 2011. Sampling was opportunistic, based on the permission of a senior nurse, and within the constraints of the following inclusion criteria: all patients were over 18 years of age, were able and willing to provide informed consent to participate, could fully understand and express themselves in English, were not in any distress or suffering from a serious mental illness, and did not have a clinical occupation. Clinicians were excluded from the sample as they might have had previous experience and views on the use of the WHO checklist. All patients had undergone a surgical procedure during their current period of in-hospital stay. We visited day surgery wards and standard surgical wards with the aim of including a mix of patients who had undergone minor and more serious procedures. Patient characteristics are displayed in table 1. Patients’ characteristics (N=141) Design: To assess patients’ views of the WHO checklist and their involvement in safety improvement, a methodology was required which was feasible to deliver on hospital wards. It was necessary to ensure that patients were informed about the checklist without bias—what it is, how it is used, how it differs from previous practice, and how it is relevant to their care journey. We needed to conduct this in a standardised, comprehensive and valid manner, while avoiding unnecessary confusion or anxiety caused to patients still receiving care. This was achieved by demonstrating use of the checklist to patients visually using two professionally produced videos. The videos were produced in collaboration with clinical teams (to ensure a realistic and unstaged portrayal of operating room procedures) and patient safety experts. The videos captured the perioperative safety procedures undertaken preintroduction (video 1) and postintroduction (video 2) of the WHO surgical safety checklist. This enabled patients viewing the videos to compare the relevant periods of the operations before and after introduction of the checklist. Patients were also given a laminated version of the WHO checklist for reference. To measure patients’ attitudes and to record patient characteristics, a simple questionnaire was designed for completion by the patient after the videos had been viewed. Ethical approval for the study was granted by the participating NHS Trust and the local research ethics committee before data collection commenced. Materials: Videos: The ‘pre-checklist’ video (shown first) depicted the typical safety checks occurring before introduction of the checklist at equivalent stages to which the ‘sign-in’, ‘time-out’, and ‘sign-out’ portions of the WHO checklist are completed (ie, when the patient enters the anaesthetic room and is checked in, the final stage of set up before incision, and postoperatively before the patient leaves the operating theatre, respectively). The ‘post-checklist’ video (shown second) depicted the process of completing the formal ‘sign-in’, ‘time-out’ and ‘sign-out’ sections of the WHO surgical safety checklist in a manner that adhered to recommended good practice.17 The ‘sign-in’ phase of the two videos was fairly similar (given that the majority of the preanaesthetic checks listed in the checklist were already routinely taking place); however, the ‘time-out’ phase of the prechecklist video was shorter (including an identification (ID), procedure and antibiotic check, but no formal team discussion), and the ‘sign-out’ phase of the prechecklist video was shorter still (including a brief discussion between the surgical team only). The two videos were shot in an operating theatre complex out of hours, with a professional actor playing the role of the patient and a complete operating theatre team running the two different scenarios. The ‘sign-in’ (and equivalent prechecklist checks) was filmed in an anaesthetic room with an anaesthetist, operating department practitioner and the patient, while the ‘time-out’ and ‘sign-out’ (and equivalent prechecklist checks) were filmed in an operating theatre with the full operating theatre team. No scenes of the operation were included; only the safety checks and related conversations at each of the three perioperative phases were shown. Each video lasted between 3 and 4 min; edited clips are available from the authors on request. Questionnaire: A 19-item questionnaire was designed to assess the following constructs: attitudes towards the WHO surgical safety checklist (eight items), attitudes regarding how the checklist is used in practice (six items), attitudes towards involvement in safety improvement efforts in hospitals more generally (four items), and the degree of anxiety that one might come to some harm in hospital (one item). Each item was phrased as a statement, for example, ‘Using the checklist would make me feel safer’, ‘I would rather a surgeon took charge of the checklist than a nurse’, ‘Given the opportunity, I would like to be more involved in efforts to reduce patient harm’. Respondents rated the degree to which they agreed with each statement on a Likert scale (1=strongly disagree, 7=strongly agree). Patient characteristics (including basic demographics and patient experience of hospital care) that might be associated with such attitudes were also recorded (ie, age, sex, ethnicity, occupation, surgical procedure admitted for, general or local anaesthetic, number of previous operations, and any previous experience of medical error). The questionnaire was piloted on a sample of 20 patients before data collection commenced; this process identified one question that was consequently rephrased to improve comprehension (see tables 2 and 3 for questionnaire items). Patients’ views of the checklist (N=141) *% of total sample. Patients’ views of involvement in safety improvement in healthcare (N=141) *% of total sample. Procedure: A senior ward nurse was consulted before approaching patients, to identify (1) those who had already undergone their surgical procedure (patients who had not yet undergone their surgery were excluded to avoid provoking unnecessary anxiety) and (2) those who were deemed well enough to participate. The study was explained verbally with the aid of a written information sheet and informed consent was obtained. Patients were shown a laminated version of the WHO checklist (UK's National Patient Safety Agency standard version).19 The checklist was described as a ‘change in process during surgery’ about which it was important to collect patients’ views. Care was taken not to provide more detail than this so as to avoid biasing patients towards the checklist from the outset. Patients were instructed that they would view two videos, the first depicting what happened before the checklist was introduced (‘prechecklist’ video) and the second depicting what happens when the checklist is used, that is, currently (‘postchecklist’ video). Videos were displayed on a laptop at the patient's bedside, with sufficient sound quality for the patient to hear the videos without headphones (they could use their own headphones if they wished). Patients were then asked to fill in the questionnaire. The Likert scale was explained and they were assured that there were no right or wrong answers. It took around 30 minutes to collect data from each patient (see box 1 for a breakdown of the procedure). Box 1: Senior ward nurse identified postoperative patients who were sufficiently recovered to participate. Patient approached by nurse and researchers and informed about the study. Information sheet provided (5 min). If willing to take part, patient consent obtained (3 min). Patient informed about the introduction of the surgical safety checklist and shown a laminated copy of the tool (1 min). Patient viewed video of what used to happen before introduction of checklist (at equivalent perioperative phases) (3 min). Patient viewed video of checklist being used at ‘sign-in’, ‘time-out’, and ‘sign-out’ (5 min). Patient completed questionnaire (10 min). Patient debriefed (3 min). Total time=30 min. Statistical analyses: Data were analysed using the IBM SPSS Statistics 20 software. Responses to each of the items on the questionnaire were grouped into the following categories: disagree (scores of 1–2), neither agree nor disagree (scores of 3–5) and agree (scores of 6–7). The percentage of patients falling into these three categories was computed separately for each item and tabulated. The final item (‘I worry that I will come to unnecessary harm in hospital’) was reduced to a binary variable—those who agreed that they were worried (ie, scored 6 or 7) formed one group, and all others formed another group. χ2 analysis was then used to determine whether this and any of the other demographic/patient variables (age, sex, ethnicity, length of stay, number of previous surgical procedures, past experience of an error in care) were associated with patients’ attitudes.
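The analysis described above is straightforward to reproduce outside SPSS. The following is a minimal illustrative sketch, not the authors' actual workflow: it assumes a hypothetical file checklist_survey.csv with one row per patient, Likert responses (1–7) in columns q1–q19, and the worry-about-harm item in a column named worry_item; all of these names are assumptions for illustration. It collapses each item into the three reported response bands, tabulates the percentages, and tests the association between the binary worry grouping and one item with a χ2 test (the 2×3 tables built this way are consistent with the df=2 values reported in the Results).

```python
# Illustrative sketch only; the study itself used IBM SPSS Statistics 20.
# Column names (q1..q19, worry_item) are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("checklist_survey.csv")  # hypothetical file of Likert responses (1-7)

def to_band(score: int) -> str:
    """Collapse a 1-7 Likert score into the three reported bands."""
    if score <= 2:
        return "disagree"
    if score <= 5:
        return "neither"
    return "agree"

item_cols = [c for c in df.columns if c.startswith("q")]
bands = df[item_cols].apply(lambda col: col.map(to_band))

# Percentage of patients in each band, per item (cf. tables 2 and 3).
pct = bands.apply(lambda s: s.value_counts(normalize=True) * 100).round(1)
print(pct)

# Binary 'worried about unnecessary harm' grouping: scores of 6-7 vs all others.
worried = df["worry_item"] >= 6

# Association between the worry grouping and responses to one item,
# mirroring the chi-square tests reported in the Results.
table = pd.crosstab(worried, bands["q1"])  # 2 x 3 contingency table
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.3f}")
```

The same crosstab-and-test pattern extends to the demographic variables (age group, sex, ethnicity, length of stay, previous operations, previous error in care) by substituting the relevant grouping column for worried.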
Results: In total, 180 patients were approached to take part in the study. Thirty-nine refused to participate for reasons including not feeling well enough, waiting for visitors, waiting for lunch, and inadequate understanding of English. This meant that data were available for 141 patients.
Patient characteristics: A wide age range was represented in the sample (median=52 years, range=18–87 years), and while the majority of patients (61%, N=86) were British, the remaining patients varied widely in ethnic origin (table 1). The sample was evenly spread with regards to sex, the number of previous operations they had experienced (0, 1–2, 3 or more), and whether or not they were worried that they would come to harm in hospital. Eighty per cent of the patients (N=113) had been admitted for day surgery procedures (including hernia repair, arthroscopy, laparoscopic cholecystectomy, hysterectomy, varicose veins), while the remaining 20% of patients (N=28) had been admitted for a longer period of days or weeks for major surgical procedures (including prostatectomy, colectomy, oesophagectomy, lower limb amputation, nephrectomy). Finally, 8.7% of the patients (N=12) reported that they had experienced a previous adverse event in hospital (eg, medication error, surgical site infection). Attitudes towards the WHO checklist: The majority of patients agreed that they would like the checklist to be used if they were having an operation (78%), compared with a very small number of patients who were not in favour of its use (4.3%) (table 2). In line with this positive perception of the checklist, patients largely agreed that use of the checklist would make them feel safer (74%), that it would improve communication between staff in theatre (69%), and that it would reduce the number of errors during surgery (67%). Most patients did not agree with the view that the checklist was an unnecessary ‘tick-box’ exercise (61%), or that it would undermine the competence of front-line staff (57%). Patients over 50 years of age were slightly less positive: they were more likely to agree that it was an unnecessary ‘tick-box’ exercise (18% vs 10%) (χ2=7.72, df=2, p=0.021) and less persuaded that it would reduce errors (17% vs 3%) (χ2=7.26, df=2, p=0.027). However, their views were overall still more positive towards the checklist than not. Those who reported that they were worried about coming to unnecessary harm in hospital were more positive than the rest, being significantly more likely to agree that they would like the checklist to be used (87% vs 69%) (χ2=7.18, df=2, p=0.028), that it would make them feel safer (89% vs 57%) (χ2=21.35, df=2, p<0.001), and that it would improve communication in the operating theatre (80% vs 57%) (χ2=9.91, df=2, p=0.007). Over half the participating patients (53%) assumed that a checklist like this had always been in place. Attitudes towards use of the checklist in practice: Overall, patients did not seem to mind which member of their care team took charge of the checklist: 68% agreed that they would feel comfortable with a nurse carrying out the checks and 57% neither agreed nor disagreed that they would rather a surgeon lead them (table 2). Those who had been in hospital for 3 or more days, however, were significantly more likely to agree that they would like the surgeon to lead the checklist (40% vs 14%) (χ2=9.38, df=2, p=0.009). Overall, most patients trusted that their care team would carry out the checklist correctly: less than a third (29%) were worried that staff would not complete it correctly, and most (63%) did not want any specific assurance of it having been used, or were impartial. Those who reported that they were worried about coming to unnecessary harm in hospital were significantly more likely to agree that they would like some assurance that the checklist had been used compared with those who were not worried (46% vs 25%) (χ2=6.99, df=2, p=0.03). Overall, patients were divided with regards to whether they felt that hearing discussions around blood loss prior to their surgery (part of the ‘sign-in’ portion of the checklist) would make them feel anxious: 30% agreed that it would, 26% said that it would not, and 44% were impartial. Those who had experienced a previous error in care (8.7% of participants) were significantly less likely to disagree that such discussions would make them feel anxious (0% vs 28%) (χ2=6.35, df=2, p=0.042). Almost all patients, however, agreed that they understood why they needed to confirm their identity and procedure before their surgery (84%), particularly those who were less than 50 years of age (92%) and those who were worried about coming to harm (96%) (χ2=9.46, df=2, p=0.009; χ2=18.40, df=2, p<0.001). Attitudes towards involvement in safety improvement in healthcare: Although all patients who agreed to take part felt able to give their views on the surgical checklist, most did not feel that they had a major part to play in safety improvement work in general (table 3). Over half the participants (51%) agreed that it was best to leave decisions about patient safety to healthcare professionals and the same proportion disagreed that they could help to reduce errors in their care if they were more involved. Similarly, only 38% of patients agreed that patient feedback should be used to identify areas for improvement in patient safety (48% disagreed). This figure decreased to 24% after excluding those who had experienced a previous error in care, who were significantly more likely to agree that patient feedback should be used (66% vs 24%) (χ2=11.32, df=2, p=0.003). Patients who were more worried about coming to harm in hospital were significantly more likely to agree that they would like to become involved in efforts to reduce patient harm (60% vs 29%) (χ2=13.73, df=2, p=0.001). Discussion: The importance of patient involvement in healthcare delivery and quality improvement is being increasingly acknowledged and addressed in clinical practice and in research. This study offers some understanding around the feasibility of capturing patients’ views of the implementation of safety measures, like the WHO surgical safety checklist, and also insight into patients’ impressions of the checklist, as well as their involvement in quality improvement efforts more generally. We found that patients were very receptive towards the checklist; most were in fact surprised that the tool was only a recent introduction to surgical care. While there was heterogeneity in the results, patients were generally positive about the beneficial impact the checklist could have on communication and safety in theatre, and they understood why such basic checks were necessary. Most patients disagreed that the checks undermined the competence of staff, and were confident that the checklist would be used correctly by their care team.
This contrasts with evidence that suggests that there is, in fact, a high degree of variability in how well the checklist is used in practice.8 Patients did not have strong preferences with regards to who (ie, surgeon vs nurse) should take responsibility for the checklist. Different views were expressed on the experience of hearing discussion about potential blood loss just before surgery. Some felt that this would reassure them that all eventualities were being taken into consideration, whereas almost a third (30%) of those asked felt that it would make them feel anxious and worried. This clearly requires further investigation and, potentially, some adjustment to how the checklist is used. Demographic characteristics had minimal impact on patients’ views. No differences were found according to sex or ethnicity, and age (over 50 vs under 50 years) had an impact on responses for only three of the questionnaire items: older participants were slightly less likely to agree that such checks were necessary but were still positive overall about the checklist. Factors relating to an individual's previous or current experience in hospital and the level of worry that they have about coming to unnecessary harm had a greater impact on their views. In line with previously established prevalence rates, 8.7% of patients reported a previous adverse event in hospital (eg, medication error, surgical site infection).20 These individuals, and those who had been in hospital for three or more days for their surgery at the time of participating, were less likely to want to hear discussions around blood loss prior to their surgery and were more likely to want a surgeon to conduct the checklist, respectively. Patients who were worried that they would come to unnecessary harm in hospital were significantly more positive about the checklist. These individuals, arguably, have a more realistic view of the extent of error and harm in hospitals, and if patients in the study were provided with background material about safety problems in healthcare, their attitudes towards the checklist might be stronger than those expressed here. Patients in this study were, for the most part, having relatively straightforward day surgery, and many had comparatively little experience of healthcare. Future studies should address a wider variety of patients to assess whether patients with longstanding problems and more experience of healthcare have a different attitude to safety measures than those with less experience. Although patients felt able to provide views on the surgical checklist, it was noteworthy that they did not feel they had a strong role to play in safety improvement more generally. Those who had experienced a previous error in their care were more likely to agree that patient feedback should be used to try to improve services and those who worried about experiencing harm were more likely to agree that they would like to become involved in efforts to reduce patient harm. This fits with previous research into patient perceptions following adverse events.21 The majority, however, were willing to leave the responsibility for safety improvement to the healthcare professionals. This may again reflect the characteristics of our sample; since the majority of patients (80%) had been admitted for minor procedures, they were, on the whole, relatively low-risk.
Perhaps patients with more serious conditions, comorbidities and chronic problems would be more motivated to, and more aware of, their ability to play a role in decisions around the process and delivery of their care.22 There are likely to be additional barriers to patient involvement in general quality improvement, however, including perceived patient/doctor authority gradients, willingness (or lack thereof) to commit time and energy to quality improvement in the context of one's own health problems, and a fear that unwanted involvement might jeopardise the quality of their own care.18 23 Added to this, a patient's capacity to become involved will likely be influenced by their underlying intellectual, moral and behavioural profile.24 Patient involvement in quality improvement is still a relatively new and rapidly evolving concept in the NHS, and these findings highlight a need for raising awareness and educating the public around opportunities to become involved, and the benefits that can be gained.
Conclusion: We have demonstrated a high level of patient support for use of the WHO surgical safety checklist in our sample. We have also shown that it is feasible to gain patient insight into the delivery of safety tools like the checklist, and that such feedback can inform appropriate tool modifications. We highlight the need for better patient and public education around opportunities for becoming more actively involved in safety improvement in healthcare, and the continued development of approaches that allow feedback to be provided in a non-threatening and accessible manner.
Background: Evidence suggests that full implementation of the WHO surgical safety checklist across NHS operating theatres is still proving a challenge for many surgical teams. The aim of the current study was to assess patients' views of the checklist, which have yet to be considered and could inform its appropriate use, and influence clinical buy-in. Methods: Postoperative patients were sampled from surgical wards at two large London teaching hospitals. Patients were shown two professionally produced videos, one demonstrating use of the WHO surgical safety checklist, and one demonstrating the equivalent periods of their operation before its introduction. Patients' views of the checklist, its use in practice, and their involvement in safety improvement more generally were captured using a bespoke 19-item questionnaire. Results: 141 patients participated. Patients were positive towards the checklist, strongly agreeing that it would impact positively on their safety and on surgical team performance. Those worried about coming to harm in hospital were particularly supportive. Views were divided regarding hearing discussions around blood loss/airway before their procedure, supporting appropriate modifications to the tool. Patients did not feel they had a strong role to play in safety improvement more broadly. Conclusions: It is feasible and instructive to capture patients' views of the delivery of safety improvements like the checklist. We have demonstrated strong support for the checklist in a sample of surgical patients, presenting a challenge to those resistant to its use.
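The subgroup comparison summarised above (patients worried about coming to harm in hospital being particularly supportive) is the kind of analysis commonly carried out with a chi-squared test on a contingency table of questionnaire responses. The sketch below is illustrative only; the counts are hypothetical and are not taken from the study data.

```python
# Hypothetical 2x2 comparison: worry about coming to harm vs agreement that the
# checklist would improve safety. Counts are invented for illustration only.
from scipy.stats import chi2_contingency

observed = [
    [38, 4],    # worried about harm:     [agree, do not agree]
    [85, 14],   # not worried about harm: [agree, do not agree]
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```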
Introduction: The introduction of the WHO surgical safety checklist into National Health Service (NHS) operating theatres in 2009 represented the first move towards mandatory action for improving surgical safety in the UK for some time.1 The potential for safety checklists to improve surgical outcomes and team performance is largely supported across the surgical literature2–8; however, their ability to bring about such improvements appears to be related to the style of implementation adopted, and the buy-in fostered by clinical teams, rather than their introduction per se. Indeed, the lack of a focussed implementation programme to support checklist introduction (including aspects such as training, feedback, local adaptation and involvement from all levels of the organisation), might explain reports that have not detected an effect of checklists on outcomes.9 10 In the UK, implementation of the WHO checklist has not been entirely straightforward. Several barriers to implementation have been described, including some very practical issues (eg, bringing the whole team together at one time),8 11 other tool-specific concerns (eg, the applicability of the checks to certain surgical contexts),12 and certain team-based challenges (eg, checklist fatigue and blurred lines of accountability).13 While some clinicians have been quick to see the benefits and have embraced the use of checklists, others have strongly resisted their implementation, seeing it as an attack on clinical autonomy or a slur on their professionalism.12–16 This discussion of the introduction and use of surgical checklists has so far been conducted entirely from the standpoint of the professionals involved. But perhaps patients, being the recipients of care as well as the payers, should have a voice in the weight given to safety in healthcare systems and in the introduction of novel safety measures. At a time when the fallibility of hospital care is very much in the public eye (with the release of publications such as the Francis report in the UK),17 this question becomes particularly pertinent. In many cases, such as controls on radiation levels or chemotherapy dosing, only a small number of healthcare experts have the requisite expertise to formulate and assess safety measures. In other cases however, such as using a checklist, patients might be able to come to a view and potentially influence implementation.18 By contrast with aviation, where safety checklists are very much engrained standard operating procedure and directly involve crew members only, in surgery, patients are part of the process and there are still questions regarding how they will respond to the checklist, if they will feel more vulnerable to errors, whether they would be agitated hearing some of the checks discussed and so on. This study sought to address these questions which, to the best of our knowledge, have not been addressed before: What views do patients have about the use of the WHO surgical safety checklist in NHS hospitals? Does the previous experience of error in hospital or other experiential/patient characteristics influence these views?
As a secondary aim, we also explored the views patients have towards being involved in decisions around improvements in safety and the care they receive more generally, and sought to begin to understand how best to assess such patient views on safety measures in healthcare from a methodological perspective. Specifically, we tested the feasibility of using videos to communicate safety concepts to patients on hospital wards.
12,746
274
[ 177, 257, 676, 430, 170, 195, 347, 386, 204, 236, 548 ]
16
[ "patients", "checklist", "patient", "safety", "surgical", "hospital", "views", "use", "χ2", "care" ]
[ "safety checklists improve", "checklists improve surgical", "patients surgical checklist", "surgical safety uk", "safety checklist nhs" ]
[CONTENT] Checklists | Patient safety | Attitudes | Quality improvement | Surgery [SUMMARY]
[CONTENT] Checklists | Patient safety | Attitudes | Quality improvement | Surgery [SUMMARY]
[CONTENT] Checklists | Patient safety | Attitudes | Quality improvement | Surgery [SUMMARY]
[CONTENT] Checklists | Patient safety | Attitudes | Quality improvement | Surgery [SUMMARY]
[CONTENT] Checklists | Patient safety | Attitudes | Quality improvement | Surgery [SUMMARY]
[CONTENT] Checklists | Patient safety | Attitudes | Quality improvement | Surgery [SUMMARY]
[CONTENT] Adult | Aged | Checklist | Female | Humans | London | Male | Medical Errors | Middle Aged | Operating Rooms | Patient Care Team | Patient Safety | Quality Assurance, Health Care | Surgical Procedures, Operative | Surveys and Questionnaires | World Health Organization [SUMMARY]
[CONTENT] Adult | Aged | Checklist | Female | Humans | London | Male | Medical Errors | Middle Aged | Operating Rooms | Patient Care Team | Patient Safety | Quality Assurance, Health Care | Surgical Procedures, Operative | Surveys and Questionnaires | World Health Organization [SUMMARY]
[CONTENT] Adult | Aged | Checklist | Female | Humans | London | Male | Medical Errors | Middle Aged | Operating Rooms | Patient Care Team | Patient Safety | Quality Assurance, Health Care | Surgical Procedures, Operative | Surveys and Questionnaires | World Health Organization [SUMMARY]
[CONTENT] Adult | Aged | Checklist | Female | Humans | London | Male | Medical Errors | Middle Aged | Operating Rooms | Patient Care Team | Patient Safety | Quality Assurance, Health Care | Surgical Procedures, Operative | Surveys and Questionnaires | World Health Organization [SUMMARY]
[CONTENT] Adult | Aged | Checklist | Female | Humans | London | Male | Medical Errors | Middle Aged | Operating Rooms | Patient Care Team | Patient Safety | Quality Assurance, Health Care | Surgical Procedures, Operative | Surveys and Questionnaires | World Health Organization [SUMMARY]
[CONTENT] Adult | Aged | Checklist | Female | Humans | London | Male | Medical Errors | Middle Aged | Operating Rooms | Patient Care Team | Patient Safety | Quality Assurance, Health Care | Surgical Procedures, Operative | Surveys and Questionnaires | World Health Organization [SUMMARY]
[CONTENT] safety checklists improve | checklists improve surgical | patients surgical checklist | surgical safety uk | safety checklist nhs [SUMMARY]
[CONTENT] safety checklists improve | checklists improve surgical | patients surgical checklist | surgical safety uk | safety checklist nhs [SUMMARY]
[CONTENT] safety checklists improve | checklists improve surgical | patients surgical checklist | surgical safety uk | safety checklist nhs [SUMMARY]
[CONTENT] safety checklists improve | checklists improve surgical | patients surgical checklist | surgical safety uk | safety checklist nhs [SUMMARY]
[CONTENT] safety checklists improve | checklists improve surgical | patients surgical checklist | surgical safety uk | safety checklist nhs [SUMMARY]
[CONTENT] safety checklists improve | checklists improve surgical | patients surgical checklist | surgical safety uk | safety checklist nhs [SUMMARY]
[CONTENT] patients | checklist | patient | safety | surgical | hospital | views | use | χ2 | care [SUMMARY]
[CONTENT] patients | checklist | patient | safety | surgical | hospital | views | use | χ2 | care [SUMMARY]
[CONTENT] patients | checklist | patient | safety | surgical | hospital | views | use | χ2 | care [SUMMARY]
[CONTENT] patients | checklist | patient | safety | surgical | hospital | views | use | χ2 | care [SUMMARY]
[CONTENT] patients | checklist | patient | safety | surgical | hospital | views | use | χ2 | care [SUMMARY]
[CONTENT] patients | checklist | patient | safety | surgical | hospital | views | use | χ2 | care [SUMMARY]
[CONTENT] safety | checklists | implementation | surgical | introduction | influence | checklist | use surgical | views | patients [SUMMARY]
[CONTENT] checklist | patient | patients | video | min | videos | sign | safety | questionnaire | min patient [SUMMARY]
[CONTENT] df | χ2 | vs | patients | agreed | checklist | likely | significantly likely | significantly | likely agree [SUMMARY]
[CONTENT] feedback | safety | patient | modifications highlight need | feedback provided non | patient insight delivery safety | patient insight delivery | patient insight | feedback provided | modifications highlight need better [SUMMARY]
[CONTENT] patients | checklist | patient | safety | df | χ2 | surgical | vs | agreed | video [SUMMARY]
[CONTENT] patients | checklist | patient | safety | df | χ2 | surgical | vs | agreed | video [SUMMARY]
[CONTENT] WHO | NHS ||| [SUMMARY]
[CONTENT] two | London ||| two | one | WHO | one ||| 19 [SUMMARY]
[CONTENT] 141 ||| ||| ||| Views ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] WHO | NHS ||| ||| two | London ||| two | one | WHO | one ||| 19 ||| 141 ||| ||| ||| Views ||| ||| ||| [SUMMARY]
[CONTENT] WHO | NHS ||| ||| two | London ||| two | one | WHO | one ||| 19 ||| 141 ||| ||| ||| Views ||| ||| ||| [SUMMARY]
Neuroendocrine tumors (NETs) - experience of a single Center.
35024733
Neuroendocrine neoplasms (NENs) are a heterogeneous group of tumors arising from cells that are part of the diffuse neuroendocrine system.
INTRODUCTION
We conducted a retrospective study of 91 cases diagnosed with neuroendocrine tumors (NETs). Descriptive statistics were compiled (number of cases by location, distribution by gender (male/female) and distribution by age), and a morphological and immunohistochemical (IHC) study was also performed.
PATIENTS, MATERIALS AND METHODS
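As a rough illustration of the descriptive statistics described here (case counts by location, gender distribution, mean age), a minimal Python sketch is given below. The record structure and example values are hypothetical; the authors do not state which software was used.

```python
# Minimal sketch of the descriptive statistics described above. The field names and
# the example records are hypothetical, not the study data.
from collections import Counter
from statistics import mean

cases = [
    {"site": "lung", "sex": "M", "age": 66},
    {"site": "stomach", "sex": "F", "age": 63},
    {"site": "appendix", "sex": "F", "age": 19},
    # ... one record per diagnosed case
]

cases_by_site = Counter(c["site"] for c in cases)   # number of cases per location
cases_by_sex = Counter(c["sex"] for c in cases)     # male/female distribution
mean_age_overall = mean(c["age"] for c in cases)
mean_age_by_sex = {
    sex: mean(c["age"] for c in cases if c["sex"] == sex) for sex in cases_by_sex
}

print(cases_by_site, cases_by_sex, round(mean_age_overall, 2), mean_age_by_sex)
```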
The highest number of cases was found in the lungs (60 cases). Tumors were also identified in the skin, breast and bladder, locations considered rare for this type of tumor. Of all cases diagnosed in the lungs, 59 were diagnosed as small cell carcinomas (SCCs) and only one as a NET. All surgical specimens were positive for chromogranin A (CgA), with variable expression of the other immunomarkers. On the lung biopsies, the most frequently performed IHC stains were CgA and cluster of differentiation 56 (CD56), with higher positivity for the latter.
RESULTS
CgA remains the most sensitive immunomarker in the diagnosis of NETs. CD56 is the most widely used immunomarker for diagnosing small cell lung tumors. Positive expression of thyroid transcription factor 1 (TTF1) immunomarker does not confirm pulmonary origin of SCCs.
CONCLUSIONS
[ "Biomarkers, Tumor", "Carcinoma, Neuroendocrine", "Chromogranin A", "Female", "Humans", "Lung Neoplasms", "Male", "Neuroendocrine Tumors", "Retrospective Studies" ]
8848213
⧉ Introduction
Neuroendocrine neoplasms (NENs) are a heterogeneous group of tumors arising from cells that are part of the diffuse neuroendocrine (NE) system [1]. The NE system is represented by endocrine glands like the pituitary gland, parathyroid glands, the NE part of the adrenal glands and also the endocrine islet tissue located at the level of glandular tissues (pancreatic, thyroid). This diffuse NE system also includes the endocrine cells that are located in the respiratory and digestive tracts [2,3]. In 1907, Kulchitsky identified the NE cells, and in the same year, Oberndorfer first described the carcinoid [4,5]. NENs were previously classified differently, based on location, with different terminology. The classification criteria according to organ systems created a lot of confusion. After the Conference held by the World Health Organization (WHO) in November 2017, a new uniform classification for all neuroendocrine tumors (NETs) was published in 2018. Following this common classification, the distinction between well-differentiated NETs, formerly known as carcinoid tumors, and poorly differentiated neuroendocrine carcinomas (NECs) was made. Although both NETs and NECs express the same NE immunomarkers, these tumors are not related [6,7]. The association of two components, low-grade and high-grade, in the same NET indicates that the high-grade component remains a well-differentiated tumor. On the contrary, NECs are not often associated with NETs, and they develop from precursor lesions [7]. An essential aspect from a clinical and treatment point of view is the functional or non-functional character of NETs. The definition of endocrine tumors is given by their association with clinical syndromes that occur in the context of increased and abnormal production of hormones. This can be proven by elevated serum levels or by immunohistochemical (IHC) reactions performed on the operative specimen [6,7,8].

Aim
The aim of this study was to analyze the epidemiological, morphological and IHC aspects of NETs in our Center and to study new perspectives in the literature in terms of molecular biology and targeted therapy.
⧉ Results
Out of a total of 91 cases, 17 (18.68%) of them were diagnosed based on the surgical specimens containing the primary tumor, 63 (69.23%) were diagnosed by biopsy, while the remaining 11 (12.08%) cases were secondary/metastatic tumors located in the liver, lymph nodes and skin, with unknown site of the primary tumor.

The mean age of the selected cases was 63.84 years (ranges between 17 and 84 years), with a mean age of 61.22 years in females, and 65.2 years in males. Gender distribution: 34% (n=31) females and 66% (n=60) males (Table 2).

Table 2. Gender, age distribution, and localization of primary cancer
Primary cancer | All known: N | Col% | Age [years] | Men: N | Col% | Mean age [years] | Women: N | Col% | Mean age [years]
All known | 80 | 100% | 63.84 | 54 | 100% | 64.5 | 26 | 100% | 59.85
Skin MCC | 1 | 1.25% | 77 | 0 | 0% | 0 | 1 | 3.84% | 77
Bladder | 2 | 2.5% | 64.5 | 2 | 3.7% | 64.5 | 0 | 0% | 0
Breast | 2 | 2.5% | 71 | 0 | 0% | 0 | 2 | 7.69% | 71
Stomach | 4 | 5% | 63 | 0 | 0% | 0 | 4 | 15.38% | 63
Small bowel | 5 | 6% | 72.5 | 3 | 5.5% | 80.5 | 2 | 7.69% | 64.5
Appendix | 3 | 3.75% | 25.3 | 1 | 1.85% | 57 | 2 | 7.69% | 19
Colon | 1 | 1.25% | 80 | 1 | 1.85% | 80 | 0 | 0% | 0
Rectum | 2 | 2.5% | 63 | 1 | 1.85% | 68 | 1 | 3.84% | 58
Lung | 60 | 75% | 66 | 46 | 85.18% | 66 | 14 | 53.84% | 66.5
Col%: Percent distribution of patients (column percentage); MCC: Merkel cell carcinoma; N: No. of cases

Pathological features: tumor site – the primary tumor was located as follows: five cases with a rare location: skin one case (1.25%), bladder two (2.5%) cases, breast two (2.5%) cases, 15 (18%) cases in the gastrointestinal tract, with different sites: four (5%) cases in the stomach, five (6.25%) cases in the small bowel, three (3.75%) cases in the appendix, one case (1.25%) in the right colon, two (2.5%) cases in the rectum, and 60 (75%) cases in the lung (Figure 1).
Figure 1. Tumors distribution: primary site

The tumor found in the skin was diagnosed as Merkel cell carcinoma (MCC) based on morphological features: tumor cells with scant eosinophilic cytoplasm and round nuclei with finely granular and dusty chromatin, and IHC profile: positivity for NE immunomarkers [chromogranin A (CgA), synaptophysin and cluster of differentiation 56 (CD56)], and also for the epithelial immunomarkers cytokeratin (CK) AE1/AE3, CK20 and epithelial membrane antigen (EMA) (Figures 2 and 3). Both bladder cases were diagnosed with SCC. Tumor cells were positive for the following NE immunomarkers: the first case for synaptophysin and CgA, and the second for CD56 and neuron-specific enolase (NSE) (Figure 4). The tumor cells were negative for p63 immunostaining, with positive internal control in the normal urothelium. One of the cases was positive for thyroid transcription factor 1 (TTF1). All lung cases were diagnosed from tissue fragments collected by biopsy. Out of the 60 cases, 59 were classified as SCCs and only one case was diagnosed as a NET.

Tumor grade: only patients diagnosed with primary NETs were included in this evaluation. Most of the cases were classified as G1 – seven (46.6%) cases, followed by G3 – six (40%) cases, and two (13.3%) cases as G2. Small cell NEC is by definition classified as a tumor with a high grade of malignancy. Five (0.54%) patients diagnosed with primary NETs on surgical pieces presented lymph node metastases. Lymph node metastases were also identified in all the patients presenting lymphovascular invasion. None of the tumors showed perineural invasions.
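Referring back to Table 2, the ‘Col%’ values are percentages within each column (all known cases, men, women), as the footnote indicates; for example, for lung primaries:

$$\text{Col\%}_{\text{lung, men}} = \tfrac{46}{54} \times 100 \approx 85.2\%, \qquad \text{Col\%}_{\text{lung, women}} = \tfrac{14}{26} \times 100 \approx 53.8\%$$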
In patients where diagnosis was established on metastases, in seven (63.63%) cases the tumor was identified in the liver, in three (27.27%) cases in the lymph nodes and one case (9.09%) was located in the skin. The IHC profile of the analyzed cases is highlighted in Table 3.

Figure 2. (A and B) Merkel cell carcinoma. HE staining: (A) ×50; (B) ×200. HE: Hematoxylin–Eosin
Figure 3. Merkel cell carcinoma: immunoexpressions (×200) of CgA (A), CD56 (B), synaptophysin (C), and CK20 (D). CD56: Cluster of differentiation 56; CgA: Chromogranin A; CK20: Cytokeratin 20
Figure 4. (A–D) Small cell carcinoma of the bladder; immunoexpressions (×200) of CD56 (C) and NSE (D). HE staining: (A) ×50; (B) ×200. CD56: Cluster of differentiation 56; HE: Hematoxylin–Eosin; NSE: Neuron-specific enolase

Table 3. Immunohistochemical aspects of neuroendocrine tumors (analyzed cases/positive cases)
Site | Synaptophysin | CgA | CD56 | NSE
Primary site
Skin | 1/1 | 1/1 | 1/1 | 0
Bladder | 2/1 | 2/1 | 1/1 | 1/1
Breast | 2/2 | 1/1 | 1/0 | 1/0
Lung | 29/13 | 59/18 | 60/54 | 10/8
Stomach | 3/2 | 4/4 | 1/0 | 1/0
Small bowel | 4/0 | 4/4 | 4/0 | 5/1
Appendix | 3/1 | 3/3 | 2/1 | 2/1
Right colon | 1/0 | 1/0 | 1/1 | 1/1
Rectum | 1/0 | 2/1 | 1/1 | 1/1
Metastases
Liver | 7/5 | 7/6 | 7/2 | 7/4
Lymph nodes | 3/3 | 3/3 | 3/2 | 3/2
Skin | 1/0 | 1/1 | 1/0 | 1/1
CD56: Cluster of differentiation 56; CgA: Chromogranin A; NSE: Neuron-specific enolase
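Using the lung row of Table 3, the difference in positivity between CD56 and CgA on lung biopsies in this series can be made explicit:

$$\text{CD56: } \tfrac{54}{60} = 90\%, \qquad \text{CgA: } \tfrac{18}{59} \approx 30.5\%$$

This is consistent with the observation in the Discussion that CD56 showed higher positivity than CgA for lung tumors in this series, while remaining less specific.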
⧉ Conclusions
CgA remains the most sensitive immunomarker in diagnosis of NETs. CD56 is the most widely used immunomarker for diagnosing small cell lung tumors. Positive expression of TTF1 immunomarker does not confirm pulmonary origin of SCCs. Although the diagnosis of NETs has increased greatly in recent decades, they are still relatively rare among pathological tumor diagnosis. Given the heterogeneity of these tumors, the expertise of each Center must be shared to help manage these cases.
[]
[]
[]
[ "⧉ Introduction", "⧉ Patients, Materials and Methods", "⧉ Results ", "⧉ Discussions", "⧉ Conclusions", "Conflict of interest" ]
[ "Neuroendocrine neoplasms (NENs) are a heterogeneous group of tumors arising from cells that are part of the diffuse neuroendocrine (NE) system [1]. The NE system is represented by endocrine glands like the pituitary gland, parathyroid glands, the NE part of the adrenal glands and also the endocrine islet tissue located at the level of glandular tissues (pancreatic, thyroid). This diffuse NE system also includes the endocrine cells that are located in the respiratory and digestive tracts [2,3]. In 1907, Kulchitsky identified the NE cells, and in the same year, Oberndorfer first described the carcinoid [4,5].\nNENs were previously classified differently, based on location, with different terminology. The classification criteria according to organ systems created a lot of confusion. After the Conference held by World Health Organization (WHO) in November 2017, a new uniform classification for all neuroendocrine tumors (NETs) was published in 2018. Following this common classification, the distinction between well-differentiated NETs, formerly known as carcinoid tumors, and poorly differentiated neuroendocrine carcinomas (NECs) was made. Although both NETs and NECs express the same NE immunomarkers, these tumors are not related [6,7].\nThe association of two, low-grade and high-grade components in the same NET indicates that the high-grade component remains a well-differentiated tumor. On the contrary, NECs are not often associated with NETs, and they develop from precursor lesions [7].\nAn essential aspect from a clinical and treatment point of view is the functional and non-functional feature of NETs. The definition of endocrine tumors is given by their association with clinical syndromes that occur in the context of increased and abnormal production of hormones. Its presence can be proven by elevated serum levels or by immunohistochemical (IHC) reactions performed on the operative specimen [6,7,8].\n\nAim\n\nThe aim of this study was to analyze the epidemiological, morphological and IHC aspects of NETs in our Center and to study new perspectives in the literature in terms of molecular biology and targeted therapy.", "We conducted a retrospective study which included patients admitted in Mureş Clinical County Hospital, Târgu Mureş, Romania, and diagnosed with NETs (primary or metastatic, with different locations), between January 1, 2016–December 31, 2019, based on the pathological reports released by the Department of Pathology in this facility.\nWe used the Department of Pathology database that includes a number of 24 000 cases from 2016 to 2019. Using keywords such as: ‘neuroendocrine’, ‘carcinoid’, ‘small cells’, ‘large cells’, we selected a number of 150 cases. Cases with diagnosis established without performing IHC reactions or incomplete data were excluded. Ninety-one cases were included in the study.\nWe performed descriptive statistics: number of cases based on location, distribution by gender (male/female), distribution by age, and we also conducted a morphological and IHC study.\nAll the surgical specimens were fixed in 10% neutral buffered formalin and the sampled fragments were embedded in paraffin blocks using standard pathological report and staining with Hematoxylin–Eosin (HE).\nFor the IHC analysis, 4 μm thick sections made of paraffin blocks and an immunostainer (BenchMark GX, Ventana Medical Systems, Inc., Tucson, AZ, USA) were used. 
Staining of IHC tests was performed automatically using the automatic staining tool from Ventana BenchMark GX according to the manufacturer’s instructions. The deparaffinizing of the slides was performed at 90°C using the EZ Prep solution (Ventana Medical Systems, Inc.) and the reactants and incubation times recommended on the antibody leaflet. Slides were developed using the OmniMap 3,3’-Diaminobenzidine (DAB) detection kit (Ventana Medical Systems, Inc.) and counterstained with Mayer’s Hematoxylin. The antibodies used are shown in Table 1.\nAntibodies used for immunohistochemistry\nAntibody\nClone\nManufacturer\nReactivity\nDilution\nCD56\n123C3\nVentana Medical Systems, Inc.\nNeuroendocrine immunomarker\nRTU\nCgA\nLK2H10\nVentana Medical Systems, Inc.\nNeuroendocrine immunomarker\nCK20\nSP33\nVentana Medical Systems, Inc.\nEpithelial immunomarker\nCK AE1/AE3\nPCK26\nVentana Medical Systems, Inc.\nEpithelial immunomarker\nEMA\nE29\nVentana Medical Systems, Inc.\nEpithelial immunomarker\nNSE\nMRQ-55\nCell Marque, Inc.\nNeuroendocrine immunomarker\np63\n4A4\nVentana Medical Systems, Inc.\nMyoepithelial immunomarker\nSynaptophysin\nMRQ-40\nCell Marque, Inc.\nNeuroendocrine immunomarker\nTTF1\nSP141\nVentana Medical Systems, Inc.\nTranscription factor\nCD56: Cluster of differentiation 56; CgA: Chromogranin A; CK: Cytokeratin; EMA: Epithelial membrane antigen; NSE: Neuron-specific enolase; RTU: Ready-to-use; TTF1: Thyroid transcription factor 1", "Out of a total of 91 cases, 17 (18.68%) of them were diagnosed based on the surgical specimens containing the primary tumor, 63 (69.23%) were diagnosed by biopsy, while the remaining 11 (12.08%) cases were secondary/metastatic tumors located in the liver, lymph nodes and skin, with unknown site of the primary tumor.\nThe mean age of the selected cases was 63.84 years (ranges between 17 and 84 years), with a mean age of 61.22 years in females, and 65.2 years in males. Gender distribution: 34% (n=31) females and 66% (n=60) males (Table 2).\nGender, age distribution, and localization of primary cancer\nAll known\nN\nCol%\nAge [years]\nPrimary cancer\nAll known\n80\n100%\n63.84\nSkin MCC\n1\n1.25%\n77\nBladder\n2\n2.5%\n64.5\nBreast\n2\n2.5%\n71\nStomach\n4\n5%\n63\nSmall bowel\n5\n6%\n72.5\nAppendix\n3\n3.75%\n25.3\nColon\n1\n1.25%\n80\nRectum\n2\n2.5%\n63\nLung\n60\n75%\n66\nMen\nWomen\nN\nCol%\nMean age [years]\nN\nCol%\nMean age [years]\nPrimary cancer\nAll known\n54\n100%\n64.5\n26\n100%\n59.85\nSkin MCC\n0\n0%\n0\n1\n3.84%\n77\nBladder\n2\n3.7%\n64.5\n0\n0%\n0\nBreast\n0\n0%\n0\n2\n7.69%\n71\nStomach\n0\n0%\n0\n4\n15.38%\n63\nSmall bowel\n3\n5.5%\n80.5\n2\n7.69%\n64.5\nAppendix\n1\n1.85%\n57\n2\n7.69%\n19\nColon\n1\n1.85%\n80\n0\n0%\n0\nRectum\n1\n1.85%\n68\n1\n3.84%\n58\nLung\n46\n85.18%\n66\n14\n53.84%\n66.5\nCol%: Percent distribution of patients (column percentage); MCC: Merkel cell carcinoma; N: No. 
of cases\nPathological features: tumor site – the primary tumor was located as follows: five cases with a rare location: skin one case (1.25%), bladder two (2.5%) cases, breast two (2.5%) cases, 15 (18%) cases in the gastrointestinal tract, with different sites: four (5%) cases in the stomach, five (6.25%) cases in the small bowel, three (3.75%) cases in the appendix, one case (1.25%) in the right colon, two (2.5%) cases in the rectum, and 60 (75%) cases in the lung (Figure 1).\nTumors distribution: primary site\nThe tumor found in the skin was diagnosed as Merkel cell carcinoma (MCC) based on morphological features: tumor cells with scant eosinophilic cytoplasm and round nuclei with finely granular and dusty chromatin, and IHC profile: positivity for NE immunomarkers [chromogranin A (CgA), synaptophysin and cluster of differentiation 56 (CD56)], and also for the epithelial immunomarkers cytokeratin (CK) AE1/AE3, CK20 and epithelial membrane antigen (EMA) (Figures 2 and 3).\nBoth bladder cases were diagnosed with SCC. Tumor cells were positive for the following NE immunomarkers: the first case for synaptophysin and CgA, and the second for CD56 and neuron-specific enolase (NSE) (Figure 4). The tumor cells were negative for p63 immunostaining, with positive internal control in the normal urothelium. One of the cases was positive for thyroid transcription factor 1 (TTF1).\nAll lung cases were diagnosed from tissue fragments collected by biopsy. Out of the 60 cases, 59 were classified as SCCs and only one case was diagnosed as a NET.\nTumor grade: only patients diagnosed with primary NETs were included in this evaluation.\nMost of the cases were classified as G1 – seven (46.6%) cases, followed by G3 – six (40%) cases, and two (13.3%) cases as G2. Small cell NEC is by definition classified as a tumor with a high grade of malignancy.\nFive (0.54%) patients diagnosed with primary NETs on surgical pieces presented lymph node metastases. Lymph node metastases were also identified in all the patients presenting lymphovascular invasion. None of the tumors showed perineural invasions.\nIn patients where diagnosis was established on metastases, in seven (63.63%) cases the tumor was identified in the liver, in three (27.27%) cases in the lymph nodes and one case (9.09%) was located in the skin.\nThe IHC profile of the analyzed cases is highlighted in Table 3.\n(A and B) Merkel cell carcinoma. HE staining: (A) ×50; (B) ×200. HE: Hematoxylin–Eosin\nMerkel cell carcinoma: immunoexpressions (×200) of CgA (A), CD56 (B), synaptophysin (C), and CK20 (D). CD56: Cluster of differentiation 56; CgA: Chromogranin A; CK20: Cytokeratin 20\n(A–D) Small cell carcinoma of the bladder; immunoexpressions (×200) of CD56 (C) and NSE (D). HE staining: (A) ×50; (B) ×200. 
CD56: Cluster of differentiation 56; HE: Hematoxylin–Eosin; NSE: Neuron-specific enolase\nImmunohistochemical aspects of neuroendocrine tumors\nAntibodies\nSynaptophysin\nCgA\nCD56\nNSE\nPrimary site\nAnalyzed cases/positive cases\nSkin\n1/1\n1/1\n1/1\n0\nBladder\n2/1\n2/1\n1/1\n1/1\nBreast\n2/2\n1/1\n1/0\n1/0\nLung\n29/13\n59/18\n60/54\n10/8\nStomach\n3/2\n4/4\n1/0\n1/0\nSmall bowel\n4/0\n4/4\n4/0\n5/1\nAppendix\n3/1\n3/3\n2/1\n2/1\nRight colon\n1/0\n1/0\n1/1\n1/1\nRectum\n1/0\n2/1\n1/1\n1/1\nMetastases\nAnalyzed cases/positive cases\nLiver\n7/5\n7/6\n7/2\n7/4\nLymph nodes\n3/3\n3/3\n3/2\n3/2\nSkin\n1/0\n1/1\n1/0\n1/1\nCD56: Cluster of differentiation 56; CgA: Chromogranin A; NSE: Neuron-specific enolase", "This study presented the experience of a single Center – Department of Pathology, Mureş Clinical County Hospital – on NETs. NETs are rare neoplasms whose clinical and pathological features have been extensively studied in recent decades to understand the behavior of these heterogeneous tumors. To highlight the importance of studying these tumors we just have to pay attention to figures reported by various studies or health organizations.\nAccording to the Surveillance, Epidemiology and End Results (SEER) Program, NETs have shown an alarming incidence growth. A study conducted in Beijing, China, highlighted that the incidence of NETs in digestive tract increased from 0.51 cases per 100 000 people in 1973 to 6.20 cases per 100 000 people in 2015 [4].\nThe skin, breast, and bladder are included among the rare places of NETs occurrence.\nMCC (primary cutaneous NET) is a tumor with a very low incidence, both worldwide and in Europe, where the incidence is estimated to be 0.13 cases/100 000 people per year [9]. Given its low incidence, this diagnosis should be exclusionary, and the tumor must be differentiated primarily from metastasis of a NET of extracutaneous origin [10]. The first IHC marker used to diagnose MCC was CK20 [11]. MCC is positive for both epithelial immunomarkers (CK20, CK AE1/AE3 and EMA) and for NE immunomarkers (CgA, synaptophysin and CD56) [12]. The characteristic ‘dot-like’ perinuclear immunostaining of CK20 helps clarifying the diagnosis [13]. In our case, MCC was diagnosed in a 77-year-old woman. The tumor was positive in both epithelial immunomarkers (CK20, EMA, CK AE1/AE3) and the NE ones (CgA, synaptophysin and CD56). After melanoma, MCC represents the second cause of skin cancer death [14,15]. This type of carcinoma is closely related to ultraviolet radiations exposure, the presence of Merkel cell polyomavirus and immunosuppression [10].\nIn the literature, the percentage of NETs occurring in the breast is between 1–5% [16]. Both breast NETs evaluated in this study were classified as G1 tumors and were positive for synaptophysin and CgA labeling.\nBladder NETs represent less than 1% of all tumors concerning this site. Small cell bladder carcinoma generally affects men over the age of 60. In general, patients have an advanced stage of cancer when they are first diagnosed. The cases analyzed in our study are in accordance with the literature, both being diagnosed in two men with a mean age of 64.5 years. At the time of diagnosis, in both cases, the tumor infiltrated the muscular layer of the bladder [17,18].\nAn analysis of the most common primary location among NETs shows that the lung is the most affected organ, followed by the stomach [19]; our study results are consistent with these findings, 75% of the NETs being located in the lung. 
This is in contradiction with many studies that showed a higher prevalence in the gastrointestinal tract, especially pancreas [20]. The main reason for this is that WHO included SCC in the group of NETs [21]. If we excluded this group from our study, we would only have one NET case diagnosed in lungs. The NE properties of SCCs are thought to be mediated by key transcription factor achaete-scute family basic helix-loop-helix (BHLH) transcription factor 1 (ASCL1), possibly using neurogenic differentiation 1 (NeuroD1) factor [22,23]. Association with other large histological types of lung tumors (squamous cell carcinoma, adenocarcinoma) suggests that SCC cells have the same endodermal origin as the respiratory epithelium [21].\nAccording to other published studies that included patients diagnosed in different Centers, the average age of onset is in the 6–7th decade of life. This is consistent with data analyzed in our center [19, 24], with small exceptions concerning the appendix, where the youngest patient was 19 years old. In our survey, in accordance with the literature, we observed that if we ruled out lung tumors, women present a slightly higher prevalence in the development of these tumors compared to men [19, 25], but after including SCC of the lung, the most affected by NETs are men.\nMost metastases with unidentified primary tumors were in liver, an aspect also supported by literature [20, 26]. Diagnosis of NETs should be established in the presence of at least two NE immunomarkers positivity, preferably the panel should contain CgA and synaptophysin, considered to be general NE immunomarkers [27,28]. All the cases analyzed on surgical specimens were positive for CgA, with a different expression for the other immunomarkers. As for lung biopsies, the most frequently IHC staining was CgA and CD56, with an increased positivity for the last. Although CD56 appears to be the most sensitive NE immunomarker for lung NETs diagnosis, especially SCC, it is not specific. The negative reaction in all NE immunomarkers can be found in up to 10% of cases [21]. MCC was positive for all three NE immunomarkers and also for epithelial immunomarkers CK AE1/AE3, CK20 and EMA.\nThe first series of cases in which TTF1 immunoexpression was analyzed in SCCs of extrapulmonary origin was reported in the 2000s [29]. This study supports the use of TTF1 as an immunomarker of differentiation between SCC of pulmonary origin compared to those of extrapulmonary origin. In 2007, two other studies ruled out this hypothesis and reported an increased number of extrapulmonary cases that are positive for TTF1 [30]. TTF1 is currently used for differential diagnosis of MCC from other NE skin metastases of other origins [31].\nA very important aspect is the division of NETs into functional and non-functional tumors [32]. ‘Functional NET’ term refers to the fact that these tumors can secrete biologically active amines or peptides. As a result of this secretory activity, patients may develop certain symptoms, which are known as the carcinoid syndrome that often help raising the suspicion of a NET [33]. The classical carcinoid syndrome is characterized by diarrhea, flushing, hypotension, right heart disease. These correlate with the effects of serotonin hypersecretion [2].\nEven if they have a common origin and express neuronal and NE immunomarkers, diversity and heterogeneity are traits characterizing NETs. 
They differ in their malignant potential, presence or absence of a clinical syndrome, biological behavior, and molecular abnormalities [6, 34]. This is also noticeable in tumors having the same location [35,36]. In the last 10 years, many genetic and epigenetic changes have been published. These reports confirmed a radical difference between well-differentiated NETs, including those with a high Ki67 proliferation index, and NECs. The reports showed frequent inactivation of retinoblastoma 1 (RB1) gene and tumor protein P53 (TP53) gene, a rare aspect in NETs [37]. This finding is included in the new 2017 WHO Classification of pancreatic NETs but is expected to be extended to other levels in the coming years [38].", "CgA remains the most sensitive immunomarker in diagnosis of NETs. CD56 is the most widely used immunomarker for diagnosing small cell lung tumors. Positive expression of TTF1 immunomarker does not confirm pulmonary origin of SCCs. Although the diagnosis of NETs has increased greatly in recent decades, they are still relatively rare among pathological tumor diagnosis. Given the heterogeneity of these tumors, the expertise of each Center must be shared to help manage these cases.", "The authors declare that they have no conflict of interests." ]
[ "intro", "materials|methods", "results", "discussion", "conclusions", "COI-statement" ]
[ "neuroendocrine tumors", "Merkel cell carcinoma", "immunohistochemistry", "NETs/NECs" ]
⧉ Introduction: Neuroendocrine neoplasms (NENs) are a heterogeneous group of tumors arising from cells that are part of the diffuse neuroendocrine (NE) system [1]. The NE system is represented by endocrine glands like the pituitary gland, parathyroid glands, the NE part of the adrenal glands and also the endocrine islet tissue located at the level of glandular tissues (pancreatic, thyroid). This diffuse NE system also includes the endocrine cells that are located in the respiratory and digestive tracts [2,3]. In 1907, Kulchitsky identified the NE cells, and in the same year, Oberndorfer first described the carcinoid [4,5]. NENs were previously classified differently, based on location, with different terminology. The classification criteria according to organ systems created a lot of confusion. After the Conference held by World Health Organization (WHO) in November 2017, a new uniform classification for all neuroendocrine tumors (NETs) was published in 2018. Following this common classification, the distinction between well-differentiated NETs, formerly known as carcinoid tumors, and poorly differentiated neuroendocrine carcinomas (NECs) was made. Although both NETs and NECs express the same NE immunomarkers, these tumors are not related [6,7]. The association of two, low-grade and high-grade components in the same NET indicates that the high-grade component remains a well-differentiated tumor. On the contrary, NECs are not often associated with NETs, and they develop from precursor lesions [7]. An essential aspect from a clinical and treatment point of view is the functional and non-functional feature of NETs. The definition of endocrine tumors is given by their association with clinical syndromes that occur in the context of increased and abnormal production of hormones. Its presence can be proven by elevated serum levels or by immunohistochemical (IHC) reactions performed on the operative specimen [6,7,8]. Aim The aim of this study was to analyze the epidemiological, morphological and IHC aspects of NETs in our Center and to study new perspectives in the literature in terms of molecular biology and targeted therapy. ⧉ Patients, Materials and Methods: We conducted a retrospective study which included patients admitted in Mureş Clinical County Hospital, Târgu Mureş, Romania, and diagnosed with NETs (primary or metastatic, with different locations), between January 1, 2016–December 31, 2019, based on the pathological reports released by the Department of Pathology in this facility. We used the Department of Pathology database that includes a number of 24 000 cases from 2016 to 2019. Using keywords such as: ‘neuroendocrine’, ‘carcinoid’, ‘small cells’, ‘large cells’, we selected a number of 150 cases. Cases with diagnosis established without performing IHC reactions or incomplete data were excluded. Ninety-one cases were included in the study. We performed descriptive statistics: number of cases based on location, distribution by gender (male/female), distribution by age, and we also conducted a morphological and IHC study. All the surgical specimens were fixed in 10% neutral buffered formalin and the sampled fragments were embedded in paraffin blocks using standard pathological report and staining with Hematoxylin–Eosin (HE). For the IHC analysis, 4 μm thick sections made of paraffin blocks and an immunostainer (BenchMark GX, Ventana Medical Systems, Inc., Tucson, AZ, USA) were used. 
Staining of IHC tests was performed automatically using the automatic staining tool from Ventana BenchMark GX according to the manufacturer’s instructions. The deparaffinizing of the slides was performed at 90°C using the EZ Prep solution (Ventana Medical Systems, Inc.) and the reactants and incubation times recommended on the antibody leaflet. Slides were developed using the OmniMap 3,3’-Diaminobenzidine (DAB) detection kit (Ventana Medical Systems, Inc.) and counterstained with Mayer’s Hematoxylin. The antibodies used are shown in Table 1. Antibodies used for immunohistochemistry Antibody Clone Manufacturer Reactivity Dilution CD56 123C3 Ventana Medical Systems, Inc. Neuroendocrine immunomarker RTU CgA LK2H10 Ventana Medical Systems, Inc. Neuroendocrine immunomarker CK20 SP33 Ventana Medical Systems, Inc. Epithelial immunomarker CK AE1/AE3 PCK26 Ventana Medical Systems, Inc. Epithelial immunomarker EMA E29 Ventana Medical Systems, Inc. Epithelial immunomarker NSE MRQ-55 Cell Marque, Inc. Neuroendocrine immunomarker p63 4A4 Ventana Medical Systems, Inc. Myoepithelial immunomarker Synaptophysin MRQ-40 Cell Marque, Inc. Neuroendocrine immunomarker TTF1 SP141 Ventana Medical Systems, Inc. Transcription factor CD56: Cluster of differentiation 56; CgA: Chromogranin A; CK: Cytokeratin; EMA: Epithelial membrane antigen; NSE: Neuron-specific enolase; RTU: Ready-to-use; TTF1: Thyroid transcription factor 1 ⧉ Results : Out of a total of 91 cases, 17 (18.68%) of them were diagnosed based on the surgical specimens containing the primary tumor, 63 (69.23%) were diagnosed by biopsy, while the remaining 11 (12.08%) cases were secondary/metastatic tumors located in the liver, lymph nodes and skin, with unknown site of the primary tumor. The mean age of the selected cases was 63.84 years (ranges between 17 and 84 years), with a mean age of 61.22 years in females, and 65.2 years in males. Gender distribution: 34% (n=31) females and 66% (n=60) males (Table 2). Gender, age distribution, and localization of primary cancer All known N Col% Age [years] Primary cancer All known 80 100% 63.84 Skin MCC 1 1.25% 77 Bladder 2 2.5% 64.5 Breast 2 2.5% 71 Stomach 4 5% 63 Small bowel 5 6% 72.5 Appendix 3 3.75% 25.3 Colon 1 1.25% 80 Rectum 2 2.5% 63 Lung 60 75% 66 Men Women N Col% Mean age [years] N Col% Mean age [years] Primary cancer All known 54 100% 64.5 26 100% 59.85 Skin MCC 0 0% 0 1 3.84% 77 Bladder 2 3.7% 64.5 0 0% 0 Breast 0 0% 0 2 7.69% 71 Stomach 0 0% 0 4 15.38% 63 Small bowel 3 5.5% 80.5 2 7.69% 64.5 Appendix 1 1.85% 57 2 7.69% 19 Colon 1 1.85% 80 0 0% 0 Rectum 1 1.85% 68 1 3.84% 58 Lung 46 85.18% 66 14 53.84% 66.5 Col%: Percent distribution of patients (column percentage); MCC: Merkel cell carcinoma; N: No. of cases Pathological features: tumor site – the primary tumor was located as follows: five cases with a rare location: skin one case (1.25%), bladder two (2.5%) cases, breast two (2.5%) cases, 15 (18%) cases in the gastrointestinal tract, with different sites: four (5%) cases in the stomach, five (6.25%) cases in the small bowel, three (3.75%) cases in the appendix, one case (1.25%) in the right colon, two (2.5%) cases in the rectum, and 60 (75%) cases in the lung (Figure 1). 
Tumors distribution: primary site The tumor found in the skin was diagnosed as Merkel cell carcinoma (MCC) based on morphological features: tumor cells with scant eosinophilic cytoplasm and round nuclei with finely granular and dusty chromatin, and IHC profile: positivity for NE immunomarkers [chromogranin A (CgA), synaptophysin and cluster of differentiation 56 (CD56)], and also for the epithelial immunomarkers cytokeratin (CK) AE1/AE3, CK20 and epithelial membrane antigen (EMA) (Figures 2 and 3). Both bladder cases were diagnosed with SCC. Tumor cells were positive for the following NE immunomarkers: the first case for synaptophysin and CgA, and the second for CD56 and neuron-specific enolase (NSE) (Figure 4). The tumor cells were negative for p63 immunostaining, with positive internal control in the normal urothelium. One of the cases was positive for thyroid transcription factor 1 (TTF1). All lung cases were diagnosed from tissue fragments collected by biopsy. Out of the 60 cases, 59 were classified as SCCs and only one case was diagnosed as a NET. Tumor grade: only patients diagnosed with primary NETs were included in this evaluation. Most of the cases were classified as G1 – seven (46.6%) cases, followed by G3 – six (40%) cases, and two (13.3%) cases as G2. Small cell NEC is by definition classified as a tumor with a high grade of malignancy. Five (0.54%) patients diagnosed with primary NETs on surgical pieces presented lymph node metastases. Lymph node metastases were also identified in all the patients presenting lymphovascular invasion. None of the tumors showed perineural invasions. In patients where diagnosis was established on metastases, in seven (63.63%) cases the tumor was identified in the liver, in three (27.27%) cases in the lymph nodes and one case (9.09%) was located in the skin. The IHC profile of the analyzed cases is highlighted in Table 3. (A and B) Merkel cell carcinoma. HE staining: (A) ×50; (B) ×200. HE: Hematoxylin–Eosin Merkel cell carcinoma: immunoexpressions (×200) of CgA (A), CD56 (B), synaptophysin (C), and CK20 (D). CD56: Cluster of differentiation 56; CgA: Chromogranin A; CK20: Cytokeratin 20 (A–D) Small cell carcinoma of the bladder; immunoexpressions (×200) of CD56 (C) and NSE (D). HE staining: (A) ×50; (B) ×200. CD56: Cluster of differentiation 56; HE: Hematoxylin–Eosin; NSE: Neuron-specific enolase Immunohistochemical aspects of neuroendocrine tumors Antibodies Synaptophysin CgA CD56 NSE Primary site Analyzed cases/positive cases Skin 1/1 1/1 1/1 0 Bladder 2/1 2/1 1/1 1/1 Breast 2/2 1/1 1/0 1/0 Lung 29/13 59/18 60/54 10/8 Stomach 3/2 4/4 1/0 1/0 Small bowel 4/0 4/4 4/0 5/1 Appendix 3/1 3/3 2/1 2/1 Right colon 1/0 1/0 1/1 1/1 Rectum 1/0 2/1 1/1 1/1 Metastases Analyzed cases/positive cases Liver 7/5 7/6 7/2 7/4 Lymph nodes 3/3 3/3 3/2 3/2 Skin 1/0 1/1 1/0 1/1 CD56: Cluster of differentiation 56; CgA: Chromogranin A; NSE: Neuron-specific enolase ⧉ Discussions: This study presented the experience of a single Center – Department of Pathology, Mureş Clinical County Hospital – on NETs. NETs are rare neoplasms whose clinical and pathological features have been extensively studied in recent decades to understand the behavior of these heterogeneous tumors. To highlight the importance of studying these tumors we just have to pay attention to figures reported by various studies or health organizations. According to the Surveillance, Epidemiology and End Results (SEER) Program, NETs have shown an alarming incidence growth. 
A study conducted in Beijing, China, highlighted that the incidence of NETs in the digestive tract increased from 0.51 cases per 100 000 people in 1973 to 6.20 cases per 100 000 people in 2015 [4]. The skin, breast and bladder are among the rare sites of NET occurrence. MCC (primary cutaneous NET) is a tumor with a very low incidence, both worldwide and in Europe, where the incidence is estimated at 0.13 cases/100 000 people per year [9]. Given its low incidence, this diagnosis should be one of exclusion, and the tumor must be differentiated primarily from a metastasis of a NET of extracutaneous origin [10]. The first IHC marker used to diagnose MCC was CK20 [11]. MCC is positive both for epithelial immunomarkers (CK20, CK AE1/AE3 and EMA) and for NE immunomarkers (CgA, synaptophysin and CD56) [12]. The characteristic ‘dot-like’ perinuclear immunostaining of CK20 helps clarify the diagnosis [13]. In our case, MCC was diagnosed in a 77-year-old woman. The tumor was positive for both the epithelial immunomarkers (CK20, EMA, CK AE1/AE3) and the NE ones (CgA, synaptophysin and CD56). After melanoma, MCC represents the second cause of skin cancer death [14,15]. This type of carcinoma is closely related to ultraviolet radiation exposure, the presence of Merkel cell polyomavirus and immunosuppression [10]. In the literature, the percentage of NETs occurring in the breast is between 1–5% [16]. Both breast NETs evaluated in this study were classified as G1 tumors and were positive for synaptophysin and CgA labeling. Bladder NETs represent less than 1% of all tumors at this site. Small cell bladder carcinoma generally affects men over the age of 60. In general, patients have advanced-stage cancer when they are first diagnosed. The cases analyzed in our study are in accordance with the literature, both being diagnosed in men, with a mean age of 64.5 years. At the time of diagnosis, in both cases, the tumor infiltrated the muscular layer of the bladder [17,18]. An analysis of the most common primary locations among NETs shows that the lung is the most affected organ, followed by the stomach [19]; our results are consistent with these findings, 75% of the NETs being located in the lung. This contradicts many studies that showed a higher prevalence in the gastrointestinal tract, especially the pancreas [20]. The main reason for this is that the WHO included SCC in the group of NETs [21]. If we excluded this group from our study, we would have only one NET case diagnosed in the lungs. The NE properties of SCCs are thought to be mediated by the key transcription factor achaete-scute family basic helix-loop-helix (BHLH) transcription factor 1 (ASCL1), possibly acting through the neurogenic differentiation 1 (NeuroD1) factor [22,23]. The association with other major histological types of lung tumors (squamous cell carcinoma, adenocarcinoma) suggests that SCC cells have the same endodermal origin as the respiratory epithelium [21]. According to other published studies that included patients diagnosed in different Centers, the average age of onset is in the sixth to seventh decade of life. This is consistent with the data analyzed in our center [19, 24], with small exceptions concerning the appendix, where the youngest patient was 19 years old. In our survey, in accordance with the literature, we observed that if lung tumors are excluded, women show a slightly higher prevalence of these tumors compared to men [19, 25]; however, once SCC of the lung is included, men are the most affected by NETs.
Most metastases with an unidentified primary tumor were located in the liver, an aspect also supported by the literature [20, 26]. The diagnosis of NETs should be established in the presence of positivity for at least two NE immunomarkers; preferably, the panel should contain CgA and synaptophysin, which are considered general NE immunomarkers [27,28]. All the cases analyzed on surgical specimens were positive for CgA, with variable expression of the other immunomarkers. As for lung biopsies, the most frequently positive IHC stains were CgA and CD56, with a higher positivity rate for the latter. Although CD56 appears to be the most sensitive NE immunomarker for the diagnosis of lung NETs, especially SCC, it is not specific. A negative reaction for all NE immunomarkers can be found in up to 10% of cases [21]. MCC was positive for all three NE immunomarkers and also for the epithelial immunomarkers CK AE1/AE3, CK20 and EMA. The first series of cases in which TTF1 immunoexpression was analyzed in SCCs of extrapulmonary origin was reported in the 2000s [29]. That study supported the use of TTF1 as an immunomarker for differentiating SCC of pulmonary origin from SCC of extrapulmonary origin. In 2007, two other studies challenged this hypothesis and reported an increased number of extrapulmonary cases positive for TTF1 [30]. TTF1 is currently used for the differential diagnosis of MCC from cutaneous NE metastases of other origins [31]. A very important aspect is the division of NETs into functional and non-functional tumors [32]. The term ‘functional NET’ refers to the fact that these tumors can secrete biologically active amines or peptides. As a result of this secretory activity, patients may develop certain symptoms, known as the carcinoid syndrome, which often helps raise the suspicion of a NET [33]. The classical carcinoid syndrome is characterized by diarrhea, flushing, hypotension and right heart disease; these correlate with the effects of serotonin hypersecretion [2]. Even though they have a common origin and express neuronal and NE immunomarkers, diversity and heterogeneity are traits characterizing NETs. They differ in their malignant potential, the presence or absence of a clinical syndrome, biological behavior, and molecular abnormalities [6, 34]. This is also noticeable in tumors with the same location [35,36]. In the last 10 years, many genetic and epigenetic changes have been published. These reports confirmed a radical difference between well-differentiated NETs, including those with a high Ki67 proliferation index, and NECs. The reports showed frequent inactivation of the retinoblastoma 1 (RB1) gene and the tumor protein P53 (TP53) gene in NECs, a rare finding in NETs [37]. This distinction is included in the new 2017 WHO Classification of pancreatic NETs but is expected to be extended to other sites in the coming years [38]. ⧉ Conclusions: CgA remains the most sensitive immunomarker in the diagnosis of NETs. CD56 is the most widely used immunomarker for diagnosing small cell lung tumors. Positive expression of the TTF1 immunomarker does not confirm the pulmonary origin of SCCs. Although the diagnosis of NETs has increased greatly in recent decades, they are still relatively rare among pathological tumor diagnoses. Given the heterogeneity of these tumors, the expertise of each Center must be shared to help manage these cases. Conflict of interest: The authors declare that they have no conflict of interests.
Background: Neuroendocrine neoplasms (NENs) are a heterogeneous group of tumors arising from cells that are part of the diffuse neuroendocrine system. Methods: We conducted a retrospective study that included 91 cases diagnosed with neuroendocrine tumors (NETs). Descriptive statistics were performed: number of cases by location, distribution by gender (male/female) and distribution by age; we also performed a morphological and immunohistochemical (IHC) study. Results: The highest number of cases was found in the lungs (60 cases). Tumors located in the skin, breast or bladder were also identified, locations considered rare for this type of tumor. Of all cases diagnosed in the lungs, 59 were diagnosed as small cell carcinomas (SCCs) and only one case as a NET. All surgical specimens were positive for chromogranin A (CgA), with a different expression for the other immunomarkers. For the lung biopsies, the most frequent IHC staining was for CgA and cluster of differentiation 56 (CD56), with an increased positivity for the latter. Conclusions: CgA remains the most sensitive immunomarker in the diagnosis of NETs. CD56 is the most widely used immunomarker for diagnosing small cell lung tumors. Positive expression of the thyroid transcription factor 1 (TTF1) immunomarker does not confirm the pulmonary origin of SCCs.
⧉ Introduction: Neuroendocrine neoplasms (NENs) are a heterogeneous group of tumors arising from cells that are part of the diffuse neuroendocrine (NE) system [1]. The NE system is represented by endocrine glands such as the pituitary gland, the parathyroid glands and the NE part of the adrenal glands, as well as by the endocrine islet tissue located within glandular tissues (pancreatic, thyroid). This diffuse NE system also includes the endocrine cells located in the respiratory and digestive tracts [2,3]. In 1907, Kulchitsky identified the NE cells, and in the same year, Oberndorfer first described the carcinoid [4,5]. NENs were previously classified differently, based on location, with different terminology, and the classification criteria according to organ systems created a lot of confusion. After the Conference held by the World Health Organization (WHO) in November 2017, a new uniform classification for all neuroendocrine tumors (NETs) was published in 2018. Following this common classification, the distinction between well-differentiated NETs, formerly known as carcinoid tumors, and poorly differentiated neuroendocrine carcinomas (NECs) was made. Although both NETs and NECs express the same NE immunomarkers, these tumors are not related [6,7]. The association of two components, low-grade and high-grade, in the same NET indicates that the high-grade component remains a well-differentiated tumor. In contrast, NECs are not often associated with NETs, and they develop from precursor lesions [7]. An essential aspect from a clinical and treatment point of view is the distinction between functional and non-functional NETs. Functional endocrine tumors are defined by their association with clinical syndromes that occur in the context of increased and abnormal hormone production; this can be proven by elevated serum levels or by immunohistochemical (IHC) reactions performed on the operative specimen [6,7,8]. Aim: The aim of this study was to analyze the epidemiological, morphological and IHC aspects of NETs in our Center and to study new perspectives in the literature in terms of molecular biology and targeted therapy.
Keywords: neuroendocrine tumors | Merkel cell carcinoma | immunohistochemistry | NETs/NECs
MeSH descriptors: Biomarkers, Tumor | Carcinoma, Neuroendocrine | Chromogranin A | Female | Humans | Lung Neoplasms | Male | Neuroendocrine Tumors | Retrospective Studies
Maintenance of improvement in spinal mobility, physical function and quality of life in patients with ankylosing spondylitis after 5 years in a clinical trial of adalimumab.
25541333
Clinicaltrials.gov; https://clinicaltrials.gov/NCT00085644.
TRIAL REGISTRATION
Patients received blinded adalimumab 40 mg or placebo every other week for 24 weeks, then open-label adalimumab for up to 5 years. Spinal mobility was evaluated using the linear BASMI (BASMIlin). BASDAI, total back pain, CRP, BASFI, the Short Form-36 and the AS quality of life questionnaire (ASQoL) were also assessed. Correlations between BASMIlin and clinical, functional and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman's rank correlation. Associations were further analysed using multivariate regression.
METHODS
Three hundred and eleven patients received ≥1 dose of adalimumab; 125 of the 208 patients originally randomized to adalimumab received treatment for 5 years. Improvements in BASMIlin were sustained through 5 years, with a mean change of -0.6 from baseline in the population who completed 5 years of treatment with adalimumab. Improvements in disease activity, physical function and ASQoL were also sustained through 5 years. BASMIlin was significantly correlated with all evaluated clinical outcomes (P < 0.001). The highest correlation was with BASFI at 12 weeks (r = 0.52) and at 5 years (r = 0.65). Multivariate regression analysis confirmed this association (P < 0.001).
RESULTS
Treatment with adalimumab for up to 5 years demonstrated sustained benefits in spinal mobility, disease activity, physical function and HRQoL in patients with active AS. Spinal mobility was significantly associated with short- and long-term physical function in these patients.
CONCLUSION
[ "Adalimumab", "Adult", "Antibodies, Monoclonal, Humanized", "Antirheumatic Agents", "Dose-Response Relationship, Drug", "Female", "Follow-Up Studies", "Humans", "Longitudinal Studies", "Male", "Middle Aged", "Motor Activity", "Outcome Assessment, Health Care", "Quality of Life", "Range of Motion, Articular", "Severity of Illness Index", "Spine", "Spondylitis, Ankylosing", "Time Factors", "Treatment Outcome", "Tumor Necrosis Factor-alpha" ]
4473764
Introduction
AS is a chronic inflammatory disease that affects the axial skeleton, peripheral joints and entheses, primarily of the spine and SI joints [1]. Symptoms of AS include back pain, loss of spinal mobility, joint stiffness and fatigue. The progressive nature of AS can result in complete fusion of the spine, putting patients at risk of vertebral fractures [2]. Progressive AS can also cause significant functional impairment, reductions in health-related quality of life (HRQoL) [3, 4], reduced capacity for work [5, 6] and substantial direct and indirect costs for the patient and the health care system [7]. TNF antagonists (etanercept, infliximab, golimumab and adalimumab) have demonstrated clinical efficacy in short-term and long-term clinical trials [8–25]. For adalimumab, long-term effectiveness data for patients treated for up to 5 years are now available from the Adalimumab Trial Evaluating Long-term Efficacy and Safety for AS (ATLAS) study. ATLAS was a Phase 3, randomized, 24-week, double-blind, placebo-controlled, multicentre trial with an open-label extension for up to 5 years, assessing efficacy and safety of adalimumab in patients with active AS [21]. This study demonstrated that significantly more patients treated with adalimumab achieved Assessment in SpondyloArthritis international Society (ASAS) 20 response compared with those treated with a placebo within 2 weeks of treatment initiation. A high ASAS20 response rate (89%) was maintained in patients completing 5 years of treatment [26]. Considering the lifelong consequences of AS on spinal mobility and HRQoL, long-term data on the impact of anti-TNF treatment on these outcomes are needed. The ATLAS trial has shown that adalimumab treatment resulted in short-term (12–24 weeks) improvements in physical function, disease activity, general health and HRQoL compared with placebo in patients with AS [21, 27]. Moreover, improvements in these parameters were maintained through 3 years of treatment [8]. However, sustained improvements in spinal mobility after adalimumab treatment as measured by the BASMI have only been reported for 2 years of adalimumab treatment [23]. The current report provides an assessment of the long-term improvement in spinal mobility using the more sensitive linear BASMI (BASMIlin) [28], as well as physical function and HRQoL through 5 years of adalimumab treatment in patients with active AS. In addition, this analysis evaluated the association of spinal mobility and clinical, functional and HRQoL outcomes.
Methods
Patients and study design
Patients were ≥18 years of age, fulfilled the modified New York criteria for AS [29] and had ≥2 of the following disease activity criteria: BASDAI score ≥4, morning stiffness ≥1 h and VAS for total back pain ≥4 (on a scale of 0–10). Full details of the inclusion/exclusion criteria have been previously published [21]. Patients were randomized 2:1 to receive adalimumab (AbbVie, North Chicago, IL, USA) 40 mg or placebo s.c. every other week for 24 weeks. Further details of the study design have been published [21]. Patients in the randomized portion of the trial were eligible to continue treatment in an open-label extension study in which patients received open-label adalimumab 40 mg every other week or weekly for up to 5 years, as previously described [23, 26]. Clinic visits during the first 6 months of the open-label period occurred every 6 weeks. Visits were every 12–16 weeks for the remainder of the open-label extension period. All research was carried out in compliance with the Declaration of Helsinki. Institutional ethics review board and ethics committee approval was obtained, and each patient provided written informed consent.
Outcome measures
Spinal mobility in the original controlled, double-blind period of ATLAS was assessed using the BASMI2, a composite index (scale 0–10) based on categorical scores on a scale of 0–2 for five clinical measurements: cervical rotation (in degrees), anterior lumbar flexion, lumbar side flexion, intermalleolar distance and tragus-to-wall distance (all measured in centimetres) [30]. BASMIlin was calculated using the data collected for the BASMI2. BASMIlin is a composite measure, with a total score ranging from 0 to 10 based on a linear assessment-to-score of the five clinical measurements. The BASMIlin has demonstrated a greater sensitivity to change compared with the BASMI2 [28]. A higher score indicates worse spinal mobility. Disease activity was assessed using the BASDAI (0–10 cm VAS) [31], total back pain (0–10 cm VAS) and CRP (mg/dl). Physical function was assessed using the BASFI (0–10 cm VAS) and the Short Form-36 (SF-36) physical component score (PCS; 0–50) [32]. HRQoL was assessed using the AS quality of life questionnaire (ASQoL) and the SF-36 PCS [33]. A decrease of ≥1.8 points in ASQoL (where lower scores represent an increase in AS-specific quality of life) [34] or an increase of ≥3.0 points in the SF-36 PCS (where higher scores represent better health status) [27] has been previously identified as the minimum important difference (MID) for these HRQoL measures. Details of the questions posed, domains assessed and possible ranges for these instruments have been published [8].
Statistical analysis
The mean observed scores or values at baseline, week 12 and years 1, 3 and 5 were determined for BASMIlin and its components, BASDAI, total back pain, CRP, BASFI, SF-36 PCS and ASQoL. Values and scores from baseline up to week 12, and years 1, 3 and 5 were time-averaged. Analyses were conducted for all patients who received ≥1 dose of adalimumab (blinded or open-label), termed the any adalimumab population. Baseline for this analysis was defined as the last observation before the first dose of adalimumab and included patients originally enrolled in the placebo arm who switched to treatment with open-label adalimumab. An additional analysis of efficacy endpoints at baseline, week 12 and years 1, 3 and 5 was conducted using data from only those patients who were originally randomized to adalimumab and who completed 5 years of the study, termed the 5-year adalimumab completer population. At the year 5 time point, the any adalimumab and the 5-year adalimumab completer populations were identical, as no patients in the any adalimumab group who had been randomized to receive placebo during the double-blind period reached a full 5 years of adalimumab treatment. Values reported are those observed (i.e. no imputations were made for missing data). Correlation coefficients between BASMIlin and BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman’s rank correlation. All assumptions for the Spearman’s rank order correlation coefficient were verified, and correlation coefficients were tested for statistical significance (P < 0.05). The correlation coefficients were interpreted as follows: 0.00−0.29, little or no correlation; 0.30−0.49, weak; 0.50−0.69, moderate; 0.70−0.89, strong; and 0.90−1.00, very strong [35]. As linear regression analysis of BASMIlin showed a significant association with each of the covariates (BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL), multivariate regression analysis was performed by first including all five explanatory variables in the model and subsequently dropping and adding variables to arrive at the best model, based on adjusted R2 values. Although the BASDAI showed a significant association with BASMIlin in the multivariate regression model, it was excluded from the final model due to multicollinearity between BASDAI and BASFI. BASFI was retained in the model due to its stronger association with BASMIlin. The final model included age and BASFI as covariates for explaining BASMIlin.
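To make the correlation step concrete, the sketch below applies Spearman's rank correlation (scipy.stats.spearmanr) to a pair of variables and classifies the coefficient with the interpretation bands quoted above. The data are synthetic placeholders, not trial data, and the variable names are illustrative assumptions.

```python
# Spearman's rank correlation between BASMIlin and BASFI, with the
# interpretation bands used in this analysis. Synthetic data only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
basfi = rng.uniform(0, 10, size=124)                    # 0-10 cm VAS
basmi_lin = 0.4 * basfi + rng.normal(0, 1.5, size=124)  # synthetic association

rho, p_value = spearmanr(basmi_lin, basfi)

def interpret(r: float) -> str:
    """Map |r| to the bands: <0.30 none, <0.50 weak, <0.70 moderate, <0.90 strong."""
    r = abs(r)
    if r < 0.30:
        return "little or no correlation"
    if r < 0.50:
        return "weak"
    if r < 0.70:
        return "moderate"
    if r < 0.90:
        return "strong"
    return "very strong"

print(f"Spearman r = {rho:.2f} ({interpret(rho)}), P = {p_value:.3g}")
```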
Results
Patients
Of the 315 randomized patients (adalimumab, n = 208; placebo, n = 107), 311 received ≥1 dose of adalimumab, either blinded or open-label (any adalimumab population; see Supplementary Fig. S1, available at Rheumatology Online). From this population, 65% (202 of the 311) completed the 5-year study. Withdrawal of consent (n = 37) and adverse events (n = 38) were the most common reasons for discontinuation during the 5 years of the study. The median [mean (s.d.)] duration of exposure to adalimumab in the any adalimumab population was 4.8 years [3.9 (1.6) years]. Of the original 208 patients who were randomized to treatment with adalimumab, 125 patients (60%) completed 5 years of treatment (the 5-year adalimumab completer population). A substantial number of patients (n = 77) initially received placebo for 6 months and thus were exposed to adalimumab for only 4.5 years; although these patients completed the study, they were not included in the population for analysis of 5-year exposure (n = 125). In the any adalimumab population, 82 of the 311 patients (26%) received ≥7 doses in the last 70 days of treatment, indicating weekly dosing. Of the 202 patients who received adalimumab at any time and who completed 5 years in the study, 29 patients (14%) received ≥7 doses in the last 70 days of treatment. Baseline clinical characteristics for patients treated during the double-blind period have been previously reported [21]. Demographics and disease state characteristics for the 5-year adalimumab completer population (n = 125) at baseline (i.e. assessment prior to the first dose of adalimumab in the double-blind period) were similar to those of the any adalimumab study population (Table 1).
Table 1. Baseline patient demographics and disease state of patients who received adalimumab
Characteristic | Any adalimumab population(a), n = 311 | Completed 5 years of adalimumab treatment(b), n = 125
Age, mean (s.d.), years | 42.3 (11.6) | 42.8 (12.0)
Male, n (%) | 233 (74.9) | 101 (80.8)
White, n (%) | 299 (96.1) | 121 (96.8)
Disease duration, mean (s.d.), years | 11.0 (9.5) | 11.9 (10.4)
BASDAI score, 0–10 cm VAS | 6.3 (1.7) | 6.2 (1.8)
BASFI score, 0–10 cm VAS | 5.4 (2.2) | 5.2 (2.1)
BASMIlin, 0–10 | 4.4 (1.7)(c) | 4.3 (1.7)
SF-36 PCS, 0–50 | 32.5 (8.0)(d) | 33.2 (8.2)(e)
ASQoL, 0–18 | 10.3 (4.3) | 9.9 (4.3)
Total back pain, 0–10 cm VAS | 6.5 (2.1) | 6.4 (2.1)
Data are mean (s.d.) unless otherwise indicated. (a) Patients who received ≥1 dose of adalimumab; baseline was the last observation before the first dose of adalimumab. (b) Patients initially randomized to adalimumab who had a total of 5 years of adalimumab exposure during the study. (c) n = 309. (d) n = 307. (e) n = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey.
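As a quick arithmetic check of the disposition percentages quoted above (an illustration only, using the counts reported in the text):

```python
# Verify the completion percentages reported for the two analysis populations.
any_ada, completed_any = 311, 202          # any adalimumab population
randomized_ada, completed_ada = 208, 125   # originally randomized to adalimumab
print(f"Completed the 5-year study: {completed_any / any_ada:.0%}")            # ~65%
print(f"5-year adalimumab completers: {completed_ada / randomized_ada:.0%}")   # ~60%
```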
Spinal mobility
Both the BASMI (as previously described by van der Heijde et al. [21]) and the BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5 years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 for change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intermalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons of change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06).
Table 2. Spinal mobility – BASMIlin and components over 5 years (duration of exposure to adalimumab)
Assessment | Baseline | Week 12 | Year 1 | Year 3 | Year 5
BASMIlin composite, scale 0–10
    Any adalimumab population(a) | 4.4 (1.7), n = 309 | 4.2 (1.7), n = 309 | 4.0 (1.7), n = 282 | 3.8 (1.7), n = 233 | 3.7 (1.7), n = 124
    5-year adalimumab completers(b) | 4.3 (1.7), n = 125 | 4.1 (1.7), n = 124 | 3.8 (1.7), n = 125 | 3.7 (1.7), n = 124 | 3.7 (1.7), n = 124
Lumbar flexion, cm
    Any adalimumab population(a) | 4.0 (3.5), n = 311 | 4.0 (3.0), n = 309 | 4.0 (2.7), n = 282 | 4.0 (2.6), n = 233 | 4.0 (2.4), n = 124
    5-year adalimumab completers(b) | 4.1 (3.5), n = 125 | 4.2 (3.2), n = 124 | 4.3 (2.9), n = 125 | 4.1 (2.7), n = 124 | 4.0 (2.4), n = 124
Lumbar side flexion, cm
    Any adalimumab population(a) | 9.5 (5.4), n = 307 | 10.2 (5.3), n = 306 | 10.9 (5.5), n = 282 | 11.5 (5.6), n = 233 | 12.0 (5.6), n = 123
    5-year adalimumab completers(b) | 9.8 (5.5), n = 122 | 10.7 (5.4), n = 121 | 11.8 (5.6), n = 125 | 12.0 (5.5), n = 124 | 12.0 (5.6), n = 123
Cervical rotation, degrees
    Any adalimumab population(a) | 46.3 (22.1), n = 310 | 48.2 (20.7), n = 307 | 51.4 (20.4), n = 282 | 53.9 (20.6), n = 233 | 54.9 (20.8), n = 125
    5-year adalimumab completers(b) | 46.7 (20.7), n = 125 | 48.7 (20.3), n = 122 | 52.1 (20.5), n = 125 | 54.1 (20.5), n = 124 | 54.9 (20.8), n = 125
Intermalleolar distance, cm
    Any adalimumab population(a) | 93.6 (26.1), n = 305 | 97.1 (25.3), n = 305 | 101.2 (25.8), n = 280 | 104.0 (22.5), n = 232 | 106.2 (22.4), n = 124
    5-year adalimumab completers(b) | 95.9 (23.3), n = 124 | 100.3 (24.7), n = 123 | 104.4 (27.1), n = 125 | 106.0 (23.3), n = 124 | 106.2 (22.4), n = 124
Tragus-to-wall distance, cm
    Any adalimumab population(a) | 15.8 (6.0), n = 309 | 15.6 (5.7), n = 307 | 15.6 (5.5), n = 282 | 15.5 (5.7), n = 233 | 15.7 (6.1), n = 125
    5-year adalimumab completers(b) | 15.9 (6.2), n = 124 | 15.9 (6.0), n = 123 | 15.8 (5.8), n = 125 | 15.6 (6.0), n = 124 | 15.7 (6.1), n = 125
Data are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. (a) Patients who received ≥1 dose of adalimumab; baseline was the last observation before the first dose of adalimumab. (b) Patients initially randomized to adalimumab who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.
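The kind of observed-case summary shown in Table 2 (mean, s.d., n and mean change from baseline at each visit, with no imputation for missing values) can be sketched as follows. The one-row-per-patient, one-column-per-visit layout is an assumption for illustration, and the data are synthetic rather than trial data.

```python
# Observed-case visit summary for a BASMIlin-like score, mirroring the
# structure (not the values) of Table 2. Missing visits are left as NaN.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
visits = ["baseline", "week12", "year1", "year3", "year5"]
basmi = pd.DataFrame(
    rng.normal(loc=[4.3, 4.1, 3.8, 3.7, 3.7], scale=1.7, size=(125, 5)),
    columns=visits,
)
basmi.iloc[::10, 2:] = np.nan  # simulate patients missing later visits

summary = pd.DataFrame({
    "mean": basmi.mean(),                                   # NaN-aware means
    "sd": basmi.std(),
    "n": basmi.notna().sum(),
    "mean_change": basmi.sub(basmi["baseline"], axis=0).mean(),
})
print(summary.round(2))
```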
Disease activity and physical function
In the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year 4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.
Fig. 1. Mean BASDAI, total back pain and BASFI scores over time. Analysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.
Health-related quality of life
Improvement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% (178 of 278) at week 12 to 78.2% (129 of 165) at year 5 in the any adalimumab population, and from 68.0% (83 of 122) at week 12 to 78.7% (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). The proportion of patients who achieved the MID for ASQoL increased from 62.0% (176 of 284) at week 12 to 84.0% (142 of 169) at year 5 in the any adalimumab population, and from 67.2% (84 of 125) at week 12 to 84.8% (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).
Fig. 2. Quality of life measures. (A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.
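The MID responder rates above follow from a simple threshold on change from baseline. A minimal sketch of that calculation, assuming per-patient baseline and year-5 scores and using synthetic data (the thresholds are those defined for Fig. 2B):

```python
# MID responder calculation: ASQoL improvement is a decrease >= 1.8 points,
# SF-36 PCS improvement is an increase >= 3.0 points. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
n = 125
asqol_baseline = rng.uniform(4, 16, n)
asqol_year5 = asqol_baseline - rng.normal(4.0, 3.0, n)   # lower ASQoL is better
pcs_baseline = rng.normal(33, 8, n)
pcs_year5 = pcs_baseline + rng.normal(8.0, 6.0, n)       # higher PCS is better

asqol_mid = (asqol_baseline - asqol_year5) >= 1.8
pcs_mid = (pcs_year5 - pcs_baseline) >= 3.0

print(f"ASQoL MID responders:     {asqol_mid.mean():.1%}")
print(f"SF-36 PCS MID responders: {pcs_mid.mean():.1%}")
```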
Association of spinal mobility with disease activity, function and HRQoL
BASMIlin was significantly correlated with each evaluated disease activity, function or HRQoL measure at week 12 and year 5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.
Table 3. Correlation of BASMIlin with disease activity, physical function and quality of life measures(a)
Measure | Week 12: n | r(b) | P-value(c) | Year 5: n | r(b) | P-value(c)
BASDAI | 308 | 0.32 | <0.001 | 123 | 0.41 | <0.001
Total back pain | 308 | 0.25 | <0.001 | 124 | 0.42 | <0.001
BASFI | 308 | 0.52 | <0.001 | 124 | 0.65 | <0.001
SF-36 PCS | 278 | −0.33 | <0.001 | 121 | −0.40 | <0.001
ASQoL | 282 | 0.30 | <0.001 | 124 | 0.33 | <0.001
(a) Analysis is for the any adalimumab population (week 12, n = 309; year 5, n = 124). (b) Interpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. (c) A significant P-value suggests that a non-zero correlation may be present between the corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.
Table 4. Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5(a)
BASMIlin (dependent variable) | Parameter estimate (s.e.) | t-value | P-value
Intercept | 0.852 (0.47) | 1.82 | 0.07
Age, years | 0.046 (0.01) | 3.99 | 0.0001
BASFI, 0–10 cm VAS | 0.44 (0.06) | 6.75 | <0.0001
R2 = 0.44; adjusted R2 = 0.43. (a) Analysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.
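The final model in Table 4 is an ordinary least squares regression of BASMIlin on age and BASFI. The sketch below fits a model of that form with statsmodels; it uses synthetic data whose coefficients are only loosely shaped like the published estimates, so the printed output will not reproduce Table 4.

```python
# OLS regression of BASMIlin on age and BASFI (structure of the final model
# in Table 4), fitted on synthetic data with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 125
df = pd.DataFrame({
    "age": rng.normal(43, 12, n),      # years
    "basfi": rng.uniform(0, 10, n),    # 0-10 cm VAS
})
# Synthetic outcome loosely based on the published parameter estimates.
df["basmi_lin"] = 0.85 + 0.046 * df["age"] + 0.44 * df["basfi"] + rng.normal(0, 1.2, n)

X = sm.add_constant(df[["age", "basfi"]])
model = sm.OLS(df["basmi_lin"], X).fit()
print(model.summary())
```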
Conclusion
Treatment of active AS with adalimumab for up to 5 years resulted in sustained benefits in spinal mobility, physical function and HRQoL. Spinal mobility was correlated with patient-reported function, both early in the course of adalimumab treatment and after 5 years of therapy, suggesting improvements in mobility result in enhanced long-term physical function.
[ "Patients and study design", "Outcome measures", "Statistical analysis", "Spinal mobility", "Disease activity and physical function", "Health-related quality of life", "Association of spinal mobility with disease activity, function and HRQoL" ]
[ "Patients were ≥18 years of age, fulfilled the modified New York criteria for AS [29] and had ≥2 of the following disease activity criteria: BASDAI score ≥4, morning stiffness ≥1 h and VAS for total back pain ≥4 (on a scale of 0–10). Full details of the inclusion/exclusion criteria have been previously published [21]. Patients were randomized 2:1 to receive adalimumab (AbbVie, North Chicago, IL, USA) 40 mg or placebo s.c. every other week for 24 weeks. Further details of the study design have been published [21]. Patients in the randomized portion of the trial were eligible to continue treatment in an open-label extension study in which patients received open-label adalimumab 40 mg every other week or weekly for up to 5 years, as previously described [23, 26]. Clinic visits during the first 6months of the open-label period occurred every 6 weeks. Visits were every 12–16 weeks for the remainder of the open-label extension period. All research was carried out in compliance with the Declaration of Helsinki. Institutional ethics review board and ethics committee approval was obtained, and each patient provided written informed consent.", "Spinal mobility in the original controlled, double-blind period of ATLAS was assessed using BASMI2, a composite index (scale 0–10) based on categorical scores on a scale of 0–2 for five clinical measurements: cervical rotation (in degrees), anterior lumbar flexion, lumbar side flexion, intermalleolar distance and tragus-to-wall distance (all measured in centimetres) [30]. BASMIlin was calculated using the data collected for the BASMI2. BASMIlin is a composite measure, with a total score ranging from 0 to 10 based on a linear assessment-to-score of the five clinical measurements. The BASMIlin has demonstrated a greater sensitivity to change compared with the BASMI2 [28]. A higher score indicates worse spinal mobility. Disease activity was assessed using BASDAI (0–10 cm VAS) [31], total back pain (0–10 cm VAS) and CRP (mg/dl). Physical function was assessed using the BASFI (0–10 cm VAS) and the Short Form-36 (SF-36) physical component score (PCS; 0–50) [32]. HRQoL was assessed using the AS quality of life (ASQoL) and SF-36 PCS [33]. A decrease of ≥1.8 points in ASQoL (where lower scores represent an increase in AS-specific quality of life) [34] or an increase of ≥3.0 points in the SF-36 PCS (where higher scores represent better health status) [27] have been previously identified as the minimum important difference (MID) for these HRQoL measures. Details of the questions posed, domains assessed and possible ranges for these instruments have been published [8].", "The mean observed scores or values at baseline, week 12 and years 1, 3 and 5 were determined for BASMIlin and its components, BASDAI, total back pain, CRP, BASFI, SF-36 PCS and ASQoL. Values and scores from baseline up to week 12, and years 1, 3 and 5 were time-averaged. Analyses were conducted for all patients who received ≥1 dose of adalimumab (blinded or open-label), termed the any adalimumab population. Baseline for this analysis was defined as the last observation before the first dose of adalimumab and included patients originally enrolled in the placebo arm but who switched to treatment with open-label adalimumab. An additional analysis of efficacy endpoints at baseline, week 12, and years 1, 3 and 5 was conducted using data from only those patients who were originally randomized to adalimumab and who completed 5 years of the study, termed the 5-year adalimumab completer population. 
At the year 5 time point, the any adalimumab and the 5-year adalimumab completer populations were identical, as no patients in the any adalimumab group who had been randomized to receive placebo during the double-blind period reached a full 5 years of adalimumab treatment. Values reported are those observed (i.e. no imputations were made for missing data). Correlation coefficients between BASMIlin and BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman’s rank correlation. All assumptions for Spearman’s rank-order correlation coefficient were verified, and correlation coefficients were tested for statistical significance (P < 0.05). Correlation coefficients were interpreted as follows: 0.00−0.29, little or no correlation; 0.30−0.49, weak; 0.50−0.69, moderate; 0.70−0.89, strong; and 0.90−1.00, very strong [35]. As linear regression analysis of BASMIlin showed a significant association with each of the covariates (BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL), multivariate regression analysis was performed by first including all five explanatory variables in the model and then dropping and adding variables to arrive at the best model, based on adjusted R2 values. Although BASDAI showed a significant association with BASMIlin in the multivariate regression model, it was excluded from the final model because of multicollinearity between BASDAI and BASFI; BASFI was retained because of its stronger association with BASMIlin. The final model included age and BASFI as covariates for explaining BASMIlin.", "Both BASMI (as previously described by van der Heijde et al. [21]) and BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5 years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 for change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intermalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons of change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). 
For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06).\nTable 2. Spinal mobility: BASMIlin and components over 5 years, by duration of exposure to adalimumab\nAssessment | Population | Baseline | Week 12 | Year 1 | Year 3 | Year 5\nBASMIlin composite, 0–10 | Any adalimumab(a) | 4.4 (1.7), n = 309 | 4.2 (1.7), n = 309 | 4.0 (1.7), n = 282 | 3.8 (1.7), n = 233 | 3.7 (1.7), n = 124\nBASMIlin composite, 0–10 | 5-year completers(b) | 4.3 (1.7), n = 125 | 4.1 (1.7), n = 124 | 3.8 (1.7), n = 125 | 3.7 (1.7), n = 124 | 3.7 (1.7), n = 124\nLumbar flexion, cm | Any adalimumab(a) | 4.0 (3.5), n = 311 | 4.0 (3.0), n = 309 | 4.0 (2.7), n = 282 | 4.0 (2.6), n = 233 | 4.0 (2.4), n = 124\nLumbar flexion, cm | 5-year completers(b) | 4.1 (3.5), n = 125 | 4.2 (3.2), n = 124 | 4.3 (2.9), n = 125 | 4.1 (2.7), n = 124 | 4.0 (2.4), n = 124\nLumbar side flexion, cm | Any adalimumab(a) | 9.5 (5.4), n = 307 | 10.2 (5.3), n = 306 | 10.9 (5.5), n = 282 | 11.5 (5.6), n = 233 | 12.0 (5.6), n = 123\nLumbar side flexion, cm | 5-year completers(b) | 9.8 (5.5), n = 122 | 10.7 (5.4), n = 121 | 11.8 (5.6), n = 125 | 12.0 (5.5), n = 124 | 12.0 (5.6), n = 123\nCervical rotation, degrees | Any adalimumab(a) | 46.3 (22.1), n = 310 | 48.2 (20.7), n = 307 | 51.4 (20.4), n = 282 | 53.9 (20.6), n = 233 | 54.9 (20.8), n = 125\nCervical rotation, degrees | 5-year completers(b) | 46.7 (20.7), n = 125 | 48.7 (20.3), n = 122 | 52.1 (20.5), n = 125 | 54.1 (20.5), n = 124 | 54.9 (20.8), n = 125\nIntermalleolar distance, cm | Any adalimumab(a) | 93.6 (26.1), n = 305 | 97.1 (25.3), n = 305 | 101.2 (25.8), n = 280 | 104.0 (22.5), n = 232 | 106.2 (22.4), n = 124\nIntermalleolar distance, cm | 5-year completers(b) | 95.9 (23.3), n = 124 | 100.3 (24.7), n = 123 | 104.4 (27.1), n = 125 | 106.0 (23.3), n = 124 | 106.2 (22.4), n = 124\nTragus-to-wall distance, cm | Any adalimumab(a) | 15.8 (6.0), n = 309 | 15.6 (5.7), n = 307 | 15.6 (5.5), n = 282 | 15.5 (5.7), n = 233 | 15.7 (6.1), n = 125\nTragus-to-wall distance, cm | 5-year completers(b) | 15.9 (6.2), n = 124 | 15.9 (6.0), n = 123 | 15.8 (5.8), n = 125 | 15.6 (6.0), n = 124 | 15.7 (6.1), n = 125\nData are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. (a) Patients who received ≥ 1 dose of adalimumab; baseline was the last observation before the first dose of adalimumab. (b) Patients initially randomized to adalimumab who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.", "In the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year 4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.
\nFig. 1. Mean BASDAI, total back pain and BASFI scores over time. Analysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.", "Improvement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% of patients (178 of 278) at week 12 to 78.2% of patients (129 of 165) at year 5 in the any adalimumab population, and from 68.0% of patients (83 of 122) at week 12 to 78.7% of patients (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). The proportion of patients who achieved the MID for ASQoL increased from 62.0% of patients (176 of 284) at week 12 to 84.0% of patients (142 of 169) at year 5 in the any adalimumab population, and from 67.2% of patients (84 of 125) at week 12 to 84.8% of patients (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).\nFig. 2. Quality of life measures. (A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.", "BASMIlin was significantly correlated with each evaluated disease activity, function or HRQoL measure at week 12 and year 5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.\nTable 3. Correlation of BASMIlin with disease activity, physical function and quality of life measures(a)\nMeasure | Week 12: n | Week 12: r(b) | Week 12: P-value(c) | Year 5: n | Year 5: r(b) | Year 5: P-value(c)\nBASDAI | 308 | 0.32 | <0.001 | 123 | 0.41 | <0.001\nTotal back pain | 308 | 0.25 | <0.001 | 124 | 0.42 | <0.001\nBASFI | 308 | 0.52 | <0.001 | 124 | 0.65 | <0.001\nSF-36 PCS | 278 | −0.33 | <0.001 | 121 | −0.40 | <0.001\nASQoL | 282 | 0.30 | <0.001 | 124 | 0.33 | <0.001\n(a) Analysis is for the any adalimumab population (week 12, n = 309; year 5, n = 124). (b) Interpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. (c) A significant P-value suggests that a non-zero correlation may be present between the corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.\nTable 4. Multivariate regression analysis between BASMIlin (dependent variable) and other clinical and demographic variables at year 5(a)\nParameter | Estimate (s.e.) | t-value | P-value\nIntercept | 0.852 (0.47) | 1.82 | 0.07\nAge, years | 0.046 (0.01) | 3.99 | 0.0001\nBASFI, 0–10 cm VAS | 0.44 (0.06) | 6.75 | <0.0001\nR2 = 0.44; adjusted R2 = 0.43. (a) Analysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI." ]
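The correlation step described under "Statistical analysis" above (Spearman's rank correlation of BASMIlin against BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL, with banded interpretation of the coefficients) can be illustrated with a short sketch. This is a minimal, hypothetical example and not the trial's actual analysis code; the column names and file path are assumptions.

```python
# Minimal sketch of the Spearman correlation analysis described in the text.
# Column names and file paths are hypothetical; pairwise-complete observed data
# are used, mirroring the "no imputation" approach described above.
import pandas as pd
from scipy.stats import spearmanr


def interpret_r(r: float) -> str:
    """Interpretation bands quoted in the text (see Table 3 footnote b)."""
    r = abs(r)
    if r < 0.30:
        return "little or no correlation"
    if r < 0.50:
        return "weak"
    if r < 0.70:
        return "moderate"
    if r < 0.90:
        return "strong"
    return "very strong"


def correlate_with_basmilin(df: pd.DataFrame, measures: list[str]) -> pd.DataFrame:
    """Spearman rank correlation of BASMIlin with each outcome measure."""
    rows = []
    for measure in measures:
        pair = df[["BASMIlin", measure]].dropna()  # pairwise-complete observations
        r, p = spearmanr(pair["BASMIlin"], pair[measure])
        rows.append({"measure": measure, "n": len(pair), "r": round(r, 2),
                     "P": p, "interpretation": interpret_r(r)})
    return pd.DataFrame(rows)


# Hypothetical usage with a week-12 analysis data set:
# week12 = pd.read_csv("atlas_week12.csv")
# print(correlate_with_basmilin(week12, ["BASDAI", "TotalBackPain", "BASFI", "SF36_PCS", "ASQoL"]))
```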
[ "subjects", null, null, null, null, null, null ]
[ "Introduction", "Methods", "Patients and study design", "Outcome measures", "Statistical analysis", "Results", "Patients", "Spinal mobility", "Disease activity and physical function", "Health-related quality of life", "Association of spinal mobility with disease activity, function and HRQoL", "Discussion", "Conclusion", "Supplementary Material" ]
[ "AS is a chronic inflammatory disease that affects the axial skeleton, peripheral joints and entheses, primarily of the spine and SI joints [1]. Symptoms of AS include back pain, loss of spinal mobility, joint stiffness and fatigue. The progressive nature of AS can result in complete fusion of the spine, putting patients at risk of vertebral fractures [2]. Progressive AS can also cause significant functional impairment, reductions in health-related quality of life (HRQoL) [3, 4], reduced capacity for work [5, 6] and substantial direct and indirect costs for the patient and the health care system [7].\nTNF antagonists (etanercept, infliximab, golimumab and adalimumab) have demonstrated clinical efficacy in short-term and long-term clinical trials [8–25]. For adalimumab, long-term effectiveness data for patients treated for up to 5 years are now available from the Adalimumab Trial Evaluating Long-term Efficacy and Safety for AS (ATLAS) study. ATLAS was a Phase 3, randomized, 24-week, double-blind, placebo-controlled, multicentre trial with an open-label extension for up to 5 years, assessing efficacy and safety of adalimumab in patients with active AS [21]. This study demonstrated that significantly more patients treated with adalimumab achieved Assessment in SpondyloArthritis international Society (ASAS) 20 response compared with those treated with a placebo within 2 weeks of treatment initiation. A high ASAS20 response rate (89%) was maintained in patients completing 5 years of treatment [26].\nConsidering the lifelong consequences of AS on spinal mobility and HRQoL, long-term data on the impact of anti-TNF treatment on these outcomes are needed. The ATLAS trial has shown that adalimumab treatment resulted in short-term (12–24 weeks) improvements in physical function, disease activity, general health and HRQoL compared with placebo in patients with AS [21, 27]. Moreover, improvements in these parameters were maintained through 3 years of treatment [8]. However, sustained improvements in spinal mobility after adalimumab treatment as measured by the BASMI have only been reported for 2 years of adalimumab treatment [23]. The current report provides an assessment of the long-term improvement in spinal mobility using the more sensitive linear BASMI (BASMIlin) [28], as well as physical function and HRQoL through 5 years of adalimumab treatment in patients with active AS. In addition, this analysis evaluated the association of spinal mobility and clinical, functional and HRQoL outcomes.", " Patients and study design Patients were ≥18 years of age, fulfilled the modified New York criteria for AS [29] and had ≥2 of the following disease activity criteria: BASDAI score ≥4, morning stiffness ≥1 h and VAS for total back pain ≥4 (on a scale of 0–10). Full details of the inclusion/exclusion criteria have been previously published [21]. Patients were randomized 2:1 to receive adalimumab (AbbVie, North Chicago, IL, USA) 40 mg or placebo s.c. every other week for 24 weeks. Further details of the study design have been published [21]. Patients in the randomized portion of the trial were eligible to continue treatment in an open-label extension study in which patients received open-label adalimumab 40 mg every other week or weekly for up to 5 years, as previously described [23, 26]. Clinic visits during the first 6months of the open-label period occurred every 6 weeks. Visits were every 12–16 weeks for the remainder of the open-label extension period. 
All research was carried out in compliance with the Declaration of Helsinki. Institutional ethics review board and ethics committee approval was obtained, and each patient provided written informed consent.\nPatients were ≥18 years of age, fulfilled the modified New York criteria for AS [29] and had ≥2 of the following disease activity criteria: BASDAI score ≥4, morning stiffness ≥1 h and VAS for total back pain ≥4 (on a scale of 0–10). Full details of the inclusion/exclusion criteria have been previously published [21]. Patients were randomized 2:1 to receive adalimumab (AbbVie, North Chicago, IL, USA) 40 mg or placebo s.c. every other week for 24 weeks. Further details of the study design have been published [21]. Patients in the randomized portion of the trial were eligible to continue treatment in an open-label extension study in which patients received open-label adalimumab 40 mg every other week or weekly for up to 5 years, as previously described [23, 26]. Clinic visits during the first 6months of the open-label period occurred every 6 weeks. Visits were every 12–16 weeks for the remainder of the open-label extension period. All research was carried out in compliance with the Declaration of Helsinki. Institutional ethics review board and ethics committee approval was obtained, and each patient provided written informed consent.\n Outcome measures Spinal mobility in the original controlled, double-blind period of ATLAS was assessed using BASMI2, a composite index (scale 0–10) based on categorical scores on a scale of 0–2 for five clinical measurements: cervical rotation (in degrees), anterior lumbar flexion, lumbar side flexion, intermalleolar distance and tragus-to-wall distance (all measured in centimetres) [30]. BASMIlin was calculated using the data collected for the BASMI2. BASMIlin is a composite measure, with a total score ranging from 0 to 10 based on a linear assessment-to-score of the five clinical measurements. The BASMIlin has demonstrated a greater sensitivity to change compared with the BASMI2 [28]. A higher score indicates worse spinal mobility. Disease activity was assessed using BASDAI (0–10 cm VAS) [31], total back pain (0–10 cm VAS) and CRP (mg/dl). Physical function was assessed using the BASFI (0–10 cm VAS) and the Short Form-36 (SF-36) physical component score (PCS; 0–50) [32]. HRQoL was assessed using the AS quality of life (ASQoL) and SF-36 PCS [33]. A decrease of ≥1.8 points in ASQoL (where lower scores represent an increase in AS-specific quality of life) [34] or an increase of ≥3.0 points in the SF-36 PCS (where higher scores represent better health status) [27] have been previously identified as the minimum important difference (MID) for these HRQoL measures. Details of the questions posed, domains assessed and possible ranges for these instruments have been published [8].\nSpinal mobility in the original controlled, double-blind period of ATLAS was assessed using BASMI2, a composite index (scale 0–10) based on categorical scores on a scale of 0–2 for five clinical measurements: cervical rotation (in degrees), anterior lumbar flexion, lumbar side flexion, intermalleolar distance and tragus-to-wall distance (all measured in centimetres) [30]. BASMIlin was calculated using the data collected for the BASMI2. BASMIlin is a composite measure, with a total score ranging from 0 to 10 based on a linear assessment-to-score of the five clinical measurements. The BASMIlin has demonstrated a greater sensitivity to change compared with the BASMI2 [28]. 
A higher score indicates worse spinal mobility. Disease activity was assessed using BASDAI (0–10 cm VAS) [31], total back pain (0–10 cm VAS) and CRP (mg/dl). Physical function was assessed using the BASFI (0–10 cm VAS) and the Short Form-36 (SF-36) physical component score (PCS; 0–50) [32]. HRQoL was assessed using the AS quality of life (ASQoL) and SF-36 PCS [33]. A decrease of ≥1.8 points in ASQoL (where lower scores represent an increase in AS-specific quality of life) [34] or an increase of ≥3.0 points in the SF-36 PCS (where higher scores represent better health status) [27] have been previously identified as the minimum important difference (MID) for these HRQoL measures. Details of the questions posed, domains assessed and possible ranges for these instruments have been published [8].\n Statistical analysis The mean observed scores or values at baseline, week 12 and years 1, 3 and 5 were determined for BASMIlin and its components, BASDAI, total back pain, CRP, BASFI, SF-36 PCS and ASQoL. Values and scores from baseline up to week 12, and years 1, 3 and 5 were time-averaged. Analyses were conducted for all patients who received ≥1 dose of adalimumab (blinded or open-label), termed the any adalimumab population. Baseline for this analysis was defined as the last observation before the first dose of adalimumab and included patients originally enrolled in the placebo arm but who switched to treatment with open-label adalimumab. An additional analysis of efficacy endpoints at baseline, week 12, and years 1, 3 and 5 was conducted using data from only those patients who were originally randomized to adalimumab and who completed 5 years of the study, termed the 5-year adalimumab completer population. At the year 5 time point, the any adalimumab and the 5-year adalimumab completer populations were identical, as no patients in the any adalimumab group who had been randomized to receive placebo during the double-blind period reached a full 5 years of adalimumab treatment. Values reported are those observed (i.e. no imputations were made for missing data). Correlation coefficients between BASMIlin and BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman’s rank correlation. All assumptions for the Spearman’s rank order correlation coefficient were verified, and correlation coefficients were tested for statistical significance (P < 0.05). Interpretation of the correlation coefficients were 0.00−0.29, little or no correlation; 0.30−0.49, weak; 0.50−0.69, moderate; 0.70−0.89, strong and 0.90−1.00, very strong [35]. As linear regression analysis of BASMIlin showed significant association with each of the covariates (BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL), multivariate regression analysis was performed by first including all five explanatory variables in the model and subsequently dropping and adding variables to come up with the best model, based on adjusted R2 values. Although the BASDAI showed significant association with BASMIlin in the multivariate regression model, it was excluded from the final model due to multicollinearity between BASDAI and BASFI. BASFI was retained in the model due to its stronger association with BASMIlin. The final model included age and BASFI as covariates for explaining BASMIlin.\nThe mean observed scores or values at baseline, week 12 and years 1, 3 and 5 were determined for BASMIlin and its components, BASDAI, total back pain, CRP, BASFI, SF-36 PCS and ASQoL. 
Values and scores from baseline up to week 12, and years 1, 3 and 5 were time-averaged. Analyses were conducted for all patients who received ≥1 dose of adalimumab (blinded or open-label), termed the any adalimumab population. Baseline for this analysis was defined as the last observation before the first dose of adalimumab and included patients originally enrolled in the placebo arm but who switched to treatment with open-label adalimumab. An additional analysis of efficacy endpoints at baseline, week 12, and years 1, 3 and 5 was conducted using data from only those patients who were originally randomized to adalimumab and who completed 5 years of the study, termed the 5-year adalimumab completer population. At the year 5 time point, the any adalimumab and the 5-year adalimumab completer populations were identical, as no patients in the any adalimumab group who had been randomized to receive placebo during the double-blind period reached a full 5 years of adalimumab treatment. Values reported are those observed (i.e. no imputations were made for missing data). Correlation coefficients between BASMIlin and BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman’s rank correlation. All assumptions for the Spearman’s rank order correlation coefficient were verified, and correlation coefficients were tested for statistical significance (P < 0.05). Interpretation of the correlation coefficients were 0.00−0.29, little or no correlation; 0.30−0.49, weak; 0.50−0.69, moderate; 0.70−0.89, strong and 0.90−1.00, very strong [35]. As linear regression analysis of BASMIlin showed significant association with each of the covariates (BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL), multivariate regression analysis was performed by first including all five explanatory variables in the model and subsequently dropping and adding variables to come up with the best model, based on adjusted R2 values. Although the BASDAI showed significant association with BASMIlin in the multivariate regression model, it was excluded from the final model due to multicollinearity between BASDAI and BASFI. BASFI was retained in the model due to its stronger association with BASMIlin. The final model included age and BASFI as covariates for explaining BASMIlin.", "Patients were ≥18 years of age, fulfilled the modified New York criteria for AS [29] and had ≥2 of the following disease activity criteria: BASDAI score ≥4, morning stiffness ≥1 h and VAS for total back pain ≥4 (on a scale of 0–10). Full details of the inclusion/exclusion criteria have been previously published [21]. Patients were randomized 2:1 to receive adalimumab (AbbVie, North Chicago, IL, USA) 40 mg or placebo s.c. every other week for 24 weeks. Further details of the study design have been published [21]. Patients in the randomized portion of the trial were eligible to continue treatment in an open-label extension study in which patients received open-label adalimumab 40 mg every other week or weekly for up to 5 years, as previously described [23, 26]. Clinic visits during the first 6months of the open-label period occurred every 6 weeks. Visits were every 12–16 weeks for the remainder of the open-label extension period. All research was carried out in compliance with the Declaration of Helsinki. 
Institutional ethics review board and ethics committee approval was obtained, and each patient provided written informed consent.", "Spinal mobility in the original controlled, double-blind period of ATLAS was assessed using BASMI2, a composite index (scale 0–10) based on categorical scores on a scale of 0–2 for five clinical measurements: cervical rotation (in degrees), anterior lumbar flexion, lumbar side flexion, intermalleolar distance and tragus-to-wall distance (all measured in centimetres) [30]. BASMIlin was calculated using the data collected for the BASMI2. BASMIlin is a composite measure, with a total score ranging from 0 to 10 based on a linear assessment-to-score of the five clinical measurements. The BASMIlin has demonstrated a greater sensitivity to change compared with the BASMI2 [28]. A higher score indicates worse spinal mobility. Disease activity was assessed using BASDAI (0–10 cm VAS) [31], total back pain (0–10 cm VAS) and CRP (mg/dl). Physical function was assessed using the BASFI (0–10 cm VAS) and the Short Form-36 (SF-36) physical component score (PCS; 0–50) [32]. HRQoL was assessed using the AS quality of life (ASQoL) and SF-36 PCS [33]. A decrease of ≥1.8 points in ASQoL (where lower scores represent an increase in AS-specific quality of life) [34] or an increase of ≥3.0 points in the SF-36 PCS (where higher scores represent better health status) [27] have been previously identified as the minimum important difference (MID) for these HRQoL measures. Details of the questions posed, domains assessed and possible ranges for these instruments have been published [8].", "The mean observed scores or values at baseline, week 12 and years 1, 3 and 5 were determined for BASMIlin and its components, BASDAI, total back pain, CRP, BASFI, SF-36 PCS and ASQoL. Values and scores from baseline up to week 12, and years 1, 3 and 5 were time-averaged. Analyses were conducted for all patients who received ≥1 dose of adalimumab (blinded or open-label), termed the any adalimumab population. Baseline for this analysis was defined as the last observation before the first dose of adalimumab and included patients originally enrolled in the placebo arm but who switched to treatment with open-label adalimumab. An additional analysis of efficacy endpoints at baseline, week 12, and years 1, 3 and 5 was conducted using data from only those patients who were originally randomized to adalimumab and who completed 5 years of the study, termed the 5-year adalimumab completer population. At the year 5 time point, the any adalimumab and the 5-year adalimumab completer populations were identical, as no patients in the any adalimumab group who had been randomized to receive placebo during the double-blind period reached a full 5 years of adalimumab treatment. Values reported are those observed (i.e. no imputations were made for missing data). Correlation coefficients between BASMIlin and BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman’s rank correlation. All assumptions for the Spearman’s rank order correlation coefficient were verified, and correlation coefficients were tested for statistical significance (P < 0.05). Interpretation of the correlation coefficients were 0.00−0.29, little or no correlation; 0.30−0.49, weak; 0.50−0.69, moderate; 0.70−0.89, strong and 0.90−1.00, very strong [35]. 
As linear regression analysis of BASMIlin showed significant association with each of the covariates (BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL), multivariate regression analysis was performed by first including all five explanatory variables in the model and subsequently dropping and adding variables to come up with the best model, based on adjusted R2 values. Although the BASDAI showed significant association with BASMIlin in the multivariate regression model, it was excluded from the final model due to multicollinearity between BASDAI and BASFI. BASFI was retained in the model due to its stronger association with BASMIlin. The final model included age and BASFI as covariates for explaining BASMIlin.", " Patients Of the 315 randomized patients (adalimumab, n = 208; placebo, n = 107), 311 received ≥ 1 dose of adalimumab, either blinded or open-label (any adalimumab population; see Supplementary Fig. S1, available at Rheumatology Online). From this population, 65% (202 of the 311) completed the 5-year study. Withdrawal of consent (n = 37) and adverse events (n = 38) were the most common reasons for discontinuation during the 5 years of the study. The median [mean (s.d.)] duration of exposure to adalimumab in the any adalimumab population was 4.8 years [3.9 (1.6) years]. Of the original 208 patients who were randomized to treatment with adalimumab, 125 patients (60%) completed 5 years of treatment (the 5-year adalimumab completer population). A significant number of patients (n = 77) initially received placebo for 6 months and thus were exposed to adalimumab for only 4.5 years; although these patients completed the study, they were not included in the population for analysis of 5-year exposure (n = 125). In the any adalimumab population, 82 of the 311patients (26%) received ≥ 7 doses in the last 70days of treatment, indicating weekly dosing. Of the 202 patients who received adalimumab at any time and who completed 5 years in the study, 29 patients (14%) received ≥ 7 doses in the last 70 days of treatment.\nBaseline clinical characteristics for patients treated during the double-blind period have been previously reported [21]. Demographics and disease state characteristics for the 5-year adalimumab completer population (n = 125) at baseline (i.e. assessment prior to first dose of adalimumab in the double-blind period) were similar to the any adalimumab study population (Table 1).\nTable 1Baseline patient demographics and disease state of patients who received adalimumabAny adalimumab populationa\nn = 311Population who completed 5 years of adalimumab treatmentb\nn = 125Age, mean (s.d.), years42.3 (11.6)42.8 (12.0)Male, n (%)233 (74.9)101 (80.8)White, n (%)299 (96.1)121 (96.8)Disease duration, mean (s.d.), years11.0 (9.5)11.9 (10.4)BASDAI score, 0–10 cm VAS6.3 (1.7)6.2 (1.8)BASFI score, 0–10 cm VAS5.4 (2.2)5.2 (2.1)BASMIlin 0–104.4 (1.7)c4.3 (1.7)SF-36 PCS, 0–5032.5 (8.0)d33.2 (8.2)eASQoL, 0–1810.3 (4.3)9.9 (4.3)Total back pain, 0–10 cm VAS6.5 (2.1)6.4 (2.1)Data are mean (s.d.) unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. 
ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey.\nBaseline patient demographics and disease state of patients who received adalimumab\nData are mean (s.d.) unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey.\nOf the 315 randomized patients (adalimumab, n = 208; placebo, n = 107), 311 received ≥ 1 dose of adalimumab, either blinded or open-label (any adalimumab population; see Supplementary Fig. S1, available at Rheumatology Online). From this population, 65% (202 of the 311) completed the 5-year study. Withdrawal of consent (n = 37) and adverse events (n = 38) were the most common reasons for discontinuation during the 5 years of the study. The median [mean (s.d.)] duration of exposure to adalimumab in the any adalimumab population was 4.8 years [3.9 (1.6) years]. Of the original 208 patients who were randomized to treatment with adalimumab, 125 patients (60%) completed 5 years of treatment (the 5-year adalimumab completer population). A significant number of patients (n = 77) initially received placebo for 6 months and thus were exposed to adalimumab for only 4.5 years; although these patients completed the study, they were not included in the population for analysis of 5-year exposure (n = 125). In the any adalimumab population, 82 of the 311patients (26%) received ≥ 7 doses in the last 70days of treatment, indicating weekly dosing. Of the 202 patients who received adalimumab at any time and who completed 5 years in the study, 29 patients (14%) received ≥ 7 doses in the last 70 days of treatment.\nBaseline clinical characteristics for patients treated during the double-blind period have been previously reported [21]. Demographics and disease state characteristics for the 5-year adalimumab completer population (n = 125) at baseline (i.e. assessment prior to first dose of adalimumab in the double-blind period) were similar to the any adalimumab study population (Table 1).\nTable 1Baseline patient demographics and disease state of patients who received adalimumabAny adalimumab populationa\nn = 311Population who completed 5 years of adalimumab treatmentb\nn = 125Age, mean (s.d.), years42.3 (11.6)42.8 (12.0)Male, n (%)233 (74.9)101 (80.8)White, n (%)299 (96.1)121 (96.8)Disease duration, mean (s.d.), years11.0 (9.5)11.9 (10.4)BASDAI score, 0–10 cm VAS6.3 (1.7)6.2 (1.8)BASFI score, 0–10 cm VAS5.4 (2.2)5.2 (2.1)BASMIlin 0–104.4 (1.7)c4.3 (1.7)SF-36 PCS, 0–5032.5 (8.0)d33.2 (8.2)eASQoL, 0–1810.3 (4.3)9.9 (4.3)Total back pain, 0–10 cm VAS6.5 (2.1)6.4 (2.1)Data are mean (s.d.) unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey.\nBaseline patient demographics and disease state of patients who received adalimumab\nData are mean (s.d.) unless otherwise indicated. 
aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey.\n Spinal mobility Both BASMI (as previously described by van der Heijde et al. [21]) and BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intramalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06).\nTable 2Spinal mobility–BASMIlin and components over 5 yearsDuration of exposure to adalimumabAssessmentBaselineWeek 12Year 1Year 3Year 5BASMIlin composite, scale 0–10    Any adalimumab populationa4.4 (1.7) n = 3094.2 (1.7) n = 3094.0 (1.7) n = 2823.8 (1.7) n = 2333.7 (1.7) n = 124    5-year adalimumab completersb4.3 (1.7) n = 1254.1 (1.7) n = 1243.8 (1.7) n = 1253.7 (1.7) n = 1243.7 (1.7) n = 124Lumbar flexion, cm    Any adalimumab populationa4.0 (3.5) n = 3114.0 (3.0) n = 3094.0 (2.7) n = 2824.0 (2.6) n = 2334.0 (2.4) n = 124    5-year adalimumab completersb4.1 (3.5) n = 1254.2 (3.2) n = 1244.3 (2.9) n = 1254.1 (2.7) n = 1244.0 (2.4) n = 124Lumbar side flexion, cm    Any adalimumab populationa9.5 (5.4) n = 30710.2 (5.3) n = 30610.9 (5.5) n = 28211.5 (5.6) n = 23312.0 (5.6) n = 123    5-year adalimumab completersb9.8 (5.5) n = 12210.7 (5.4) n = 12111.8 (5.6) n = 12512.0 (5.5) n = 12412.0 (5.6) n = 123Cervical rotation, degrees    Any adalimumab populationa46.3 (22.1) n = 31048.2 (20.7) n = 30751.4 (20.4) n = 28253.9 (20.6) n = 23354.9 (20.8) n = 125    5-year adalimumab completersb46.7 (20.7) n = 12548.7 (20.3) n = 12252.1 (20.5) n = 12554.1 (20.5) n = 12454.9 (20.8) n = 125Intermalleolar distance, cm    Any adalimumab populationa93.6 (26.1) n = 30597.1 (25.3) n = 305101.2 (25.8) n = 280104.0 (22.5) n = 232106.2 (22.4) n = 124    5-year adalimumab completersb95.9 (23.3) n = 124100.3 (24.7) n = 123104.4 (27.1) n = 125106.0 (23.3) n = 124106.2 (22.4) n = 124Tragus-to-wall distance, cm    Any adalimumab populationa15.8 (6.0) n = 30915.6 (5.7) n = 30715.6 (5.5) n = 28215.5 (5.7) n = 23315.7 (6.1) n = 125    5-year adalimumab completersb15.9 (6.2) n = 12415.9 (6.0) n = 12315.8 (5.8) n = 12515.6 (6.0) n = 12415.7 (6.1) n = 125Data are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. 
bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.\nSpinal mobility–BASMIlin and components over 5 years\nData are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.\nBoth BASMI (as previously described by van der Heijde et al. [21]) and BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intramalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06).\nTable 2Spinal mobility–BASMIlin and components over 5 yearsDuration of exposure to adalimumabAssessmentBaselineWeek 12Year 1Year 3Year 5BASMIlin composite, scale 0–10    Any adalimumab populationa4.4 (1.7) n = 3094.2 (1.7) n = 3094.0 (1.7) n = 2823.8 (1.7) n = 2333.7 (1.7) n = 124    5-year adalimumab completersb4.3 (1.7) n = 1254.1 (1.7) n = 1243.8 (1.7) n = 1253.7 (1.7) n = 1243.7 (1.7) n = 124Lumbar flexion, cm    Any adalimumab populationa4.0 (3.5) n = 3114.0 (3.0) n = 3094.0 (2.7) n = 2824.0 (2.6) n = 2334.0 (2.4) n = 124    5-year adalimumab completersb4.1 (3.5) n = 1254.2 (3.2) n = 1244.3 (2.9) n = 1254.1 (2.7) n = 1244.0 (2.4) n = 124Lumbar side flexion, cm    Any adalimumab populationa9.5 (5.4) n = 30710.2 (5.3) n = 30610.9 (5.5) n = 28211.5 (5.6) n = 23312.0 (5.6) n = 123    5-year adalimumab completersb9.8 (5.5) n = 12210.7 (5.4) n = 12111.8 (5.6) n = 12512.0 (5.5) n = 12412.0 (5.6) n = 123Cervical rotation, degrees    Any adalimumab populationa46.3 (22.1) n = 31048.2 (20.7) n = 30751.4 (20.4) n = 28253.9 (20.6) n = 23354.9 (20.8) n = 125    5-year adalimumab completersb46.7 (20.7) n = 12548.7 (20.3) n = 12252.1 (20.5) n = 12554.1 (20.5) n = 12454.9 (20.8) n = 125Intermalleolar distance, cm    Any adalimumab populationa93.6 (26.1) n = 30597.1 (25.3) n = 305101.2 (25.8) n = 280104.0 (22.5) n = 232106.2 (22.4) n = 124    5-year adalimumab completersb95.9 (23.3) n = 124100.3 (24.7) n = 123104.4 (27.1) n = 125106.0 (23.3) n = 124106.2 (22.4) n = 124Tragus-to-wall distance, cm    Any adalimumab populationa15.8 (6.0) n = 30915.6 (5.7) n = 30715.6 (5.5) n = 28215.5 (5.7) n = 23315.7 (6.1) n = 125    5-year adalimumab completersb15.9 (6.2) n = 12415.9 (6.0) n = 12315.8 (5.8) n = 12515.6 (6.0) n = 12415.7 (6.1) n = 125Data are mean (s.d.). 
Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.\nSpinal mobility–BASMIlin and components over 5 years\nData are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.\n Disease activity and physical function In the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.\nFig. 1Mean BASDAI, total back pain and BASFI scores over timeAnalysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.\nMean BASDAI, total back pain and BASFI scores over time\nAnalysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.\nIn the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.\nFig. 1Mean BASDAI, total back pain and BASFI scores over timeAnalysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.\nMean BASDAI, total back pain and BASFI scores over time\nAnalysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.\n Health-related quality of life Improvement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% of patients (178 of 278) at week12 to 78.2% of patients (129 of 165) at year 5 in the any adalimumab population, and from 68.0% of patients (83 of 122) at week 12 to 78.7% of patients (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). 
The proportion of patients who achieved the MID for ASQoL increased from 62.0% of patients (176 of 284) at week 12 to 84.0% of patients (142 of 169) at year 5 in the any adalimumab population, and from 67.2% of patients (84 of 125) at week 12 to 84.8% of patients (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).\nFig. 2Quality of life measures(A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.\nQuality of life measures\n(A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.\nImprovement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% of patients (178 of 278) at week12 to 78.2% of patients (129 of 165) at year 5 in the any adalimumab population, and from 68.0% of patients (83 of 122) at week 12 to 78.7% of patients (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). The proportion of patients who achieved the MID for ASQoL increased from 62.0% of patients (176 of 284) at week 12 to 84.0% of patients (142 of 169) at year 5 in the any adalimumab population, and from 67.2% of patients (84 of 125) at week 12 to 84.8% of patients (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).\nFig. 2Quality of life measures(A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.\nQuality of life measures\n(A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.\n Association of spinal mobility with disease activity, function and HRQoL BASMIlin was significantly correlated with each evaluated disease, function or HRQoL measure at week 12 and year5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). 
None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.\nTable 3Correlation of BASMIlin with disease activity, physical function and quality of life measuresaWeek 12, n = 309Year 5, n = 124Measurenr bP-valuecnr bP-valuecBASDAI3080.32<0.0011230.41<0.001Total back pain3080.25<0.0011240.42<0.001BASFI3080.52<0.0011240.65<0.001SF-36 PCS278−0.33<0.001121−0.40<0.001ASQoL2820.30<0.0011240.33<0.001aAnalysis is for the any adalimumab population. bInterpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. cSignificant P-value suggests that a non-zero correlation may be present between corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.\nTable 4Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5aBASMIlin (dependent variable)Parameter estimate (s.e.)t-valueP-valueIntercept0.852 (0.47)1.820.07Age, years0.046 (0.01)3.990.0001BASFI 0–10 cm VAS0.44 (0.06)6.75<0.0001R2 = 0.44Adjusted R2 = 0.43aAnalysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.\nCorrelation of BASMIlin with disease activity, physical function and quality of life measuresa\naAnalysis is for the any adalimumab population. bInterpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. cSignificant P-value suggests that a non-zero correlation may be present between corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.\nMultivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5a\naAnalysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.\nBASMIlin was significantly correlated with each evaluated disease, function or HRQoL measure at week 12 and year5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.\nTable 3Correlation of BASMIlin with disease activity, physical function and quality of life measuresaWeek 12, n = 309Year 5, n = 124Measurenr bP-valuecnr bP-valuecBASDAI3080.32<0.0011230.41<0.001Total back pain3080.25<0.0011240.42<0.001BASFI3080.52<0.0011240.65<0.001SF-36 PCS278−0.33<0.001121−0.40<0.001ASQoL2820.30<0.0011240.33<0.001aAnalysis is for the any adalimumab population. bInterpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. cSignificant P-value suggests that a non-zero correlation may be present between corresponding variables. 
ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.\nTable 4Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5aBASMIlin (dependent variable)Parameter estimate (s.e.)t-valueP-valueIntercept0.852 (0.47)1.820.07Age, years0.046 (0.01)3.990.0001BASFI 0–10 cm VAS0.44 (0.06)6.75<0.0001R2 = 0.44Adjusted R2 = 0.43aAnalysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.\nCorrelation of BASMIlin with disease activity, physical function and quality of life measuresa\naAnalysis is for the any adalimumab population. bInterpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. cSignificant P-value suggests that a non-zero correlation may be present between corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.\nMultivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5a\naAnalysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.", "Of the 315 randomized patients (adalimumab, n = 208; placebo, n = 107), 311 received ≥ 1 dose of adalimumab, either blinded or open-label (any adalimumab population; see Supplementary Fig. S1, available at Rheumatology Online). From this population, 65% (202 of the 311) completed the 5-year study. Withdrawal of consent (n = 37) and adverse events (n = 38) were the most common reasons for discontinuation during the 5 years of the study. The median [mean (s.d.)] duration of exposure to adalimumab in the any adalimumab population was 4.8 years [3.9 (1.6) years]. Of the original 208 patients who were randomized to treatment with adalimumab, 125 patients (60%) completed 5 years of treatment (the 5-year adalimumab completer population). A significant number of patients (n = 77) initially received placebo for 6 months and thus were exposed to adalimumab for only 4.5 years; although these patients completed the study, they were not included in the population for analysis of 5-year exposure (n = 125). In the any adalimumab population, 82 of the 311patients (26%) received ≥ 7 doses in the last 70days of treatment, indicating weekly dosing. Of the 202 patients who received adalimumab at any time and who completed 5 years in the study, 29 patients (14%) received ≥ 7 doses in the last 70 days of treatment.\nBaseline clinical characteristics for patients treated during the double-blind period have been previously reported [21]. Demographics and disease state characteristics for the 5-year adalimumab completer population (n = 125) at baseline (i.e. assessment prior to first dose of adalimumab in the double-blind period) were similar to the any adalimumab study population (Table 1).\nTable 1Baseline patient demographics and disease state of patients who received adalimumabAny adalimumab populationa\nn = 311Population who completed 5 years of adalimumab treatmentb\nn = 125Age, mean (s.d.), years42.3 (11.6)42.8 (12.0)Male, n (%)233 (74.9)101 (80.8)White, n (%)299 (96.1)121 (96.8)Disease duration, mean (s.d.), years11.0 (9.5)11.9 (10.4)BASDAI score, 0–10 cm VAS6.3 (1.7)6.2 (1.8)BASFI score, 0–10 cm VAS5.4 (2.2)5.2 (2.1)BASMIlin 0–104.4 (1.7)c4.3 (1.7)SF-36 PCS, 0–5032.5 (8.0)d33.2 (8.2)eASQoL, 0–1810.3 (4.3)9.9 (4.3)Total back pain, 0–10 cm VAS6.5 (2.1)6.4 (2.1)Data are mean (s.d.) 
unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey.", "Both BASMI (as previously described by van der Heijde et al. [21]) and BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5 years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 for change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intermalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons of change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). 
For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06).\nTable 2Spinal mobility–BASMIlin and components over 5 yearsDuration of exposure to adalimumabAssessmentBaselineWeek 12Year 1Year 3Year 5BASMIlin composite, scale 0–10    Any adalimumab populationa4.4 (1.7) n = 3094.2 (1.7) n = 3094.0 (1.7) n = 2823.8 (1.7) n = 2333.7 (1.7) n = 124    5-year adalimumab completersb4.3 (1.7) n = 1254.1 (1.7) n = 1243.8 (1.7) n = 1253.7 (1.7) n = 1243.7 (1.7) n = 124Lumbar flexion, cm    Any adalimumab populationa4.0 (3.5) n = 3114.0 (3.0) n = 3094.0 (2.7) n = 2824.0 (2.6) n = 2334.0 (2.4) n = 124    5-year adalimumab completersb4.1 (3.5) n = 1254.2 (3.2) n = 1244.3 (2.9) n = 1254.1 (2.7) n = 1244.0 (2.4) n = 124Lumbar side flexion, cm    Any adalimumab populationa9.5 (5.4) n = 30710.2 (5.3) n = 30610.9 (5.5) n = 28211.5 (5.6) n = 23312.0 (5.6) n = 123    5-year adalimumab completersb9.8 (5.5) n = 12210.7 (5.4) n = 12111.8 (5.6) n = 12512.0 (5.5) n = 12412.0 (5.6) n = 123Cervical rotation, degrees    Any adalimumab populationa46.3 (22.1) n = 31048.2 (20.7) n = 30751.4 (20.4) n = 28253.9 (20.6) n = 23354.9 (20.8) n = 125    5-year adalimumab completersb46.7 (20.7) n = 12548.7 (20.3) n = 12252.1 (20.5) n = 12554.1 (20.5) n = 12454.9 (20.8) n = 125Intermalleolar distance, cm    Any adalimumab populationa93.6 (26.1) n = 30597.1 (25.3) n = 305101.2 (25.8) n = 280104.0 (22.5) n = 232106.2 (22.4) n = 124    5-year adalimumab completersb95.9 (23.3) n = 124100.3 (24.7) n = 123104.4 (27.1) n = 125106.0 (23.3) n = 124106.2 (22.4) n = 124Tragus-to-wall distance, cm    Any adalimumab populationa15.8 (6.0) n = 30915.6 (5.7) n = 30715.6 (5.5) n = 28215.5 (5.7) n = 23315.7 (6.1) n = 125    5-year adalimumab completersb15.9 (6.2) n = 12415.9 (6.0) n = 12315.8 (5.8) n = 12515.6 (6.0) n = 12415.7 (6.1) n = 125Data are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.\nSpinal mobility–BASMIlin and components over 5 years\nData are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.", "In the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.\nFig. 
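The by-visit values in Tables 1 and 2 are plain observed-case summaries: the mean, s.d. and available n at each visit, with no imputation for missing data. As a rough illustration of how such a summary could be produced, the following Python sketch assumes a hypothetical long-format table with one row per patient visit; the column names (patient_id, population, visit, basmi_lin) are placeholders rather than the study's actual dataset.

# Illustrative only: observed-case summary of BASMIlin by visit, in the spirit of Table 2.
# The DataFrame and its column names are hypothetical placeholders.
import pandas as pd

def summarize_by_visit(df: pd.DataFrame, value_col: str = "basmi_lin") -> pd.DataFrame:
    """Mean, s.d. and available n of an outcome at each visit, using observed values only."""
    return (
        df.dropna(subset=[value_col])                      # no imputation: drop missing values
          .groupby(["population", "visit"])[value_col]
          .agg(mean="mean", sd="std", n="count")
          .round(1)
    )

# Toy example (invented values, two patients, three visits):
toy = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "population": ["any adalimumab"] * 6,
    "visit": ["baseline", "week 12", "year 5"] * 2,
    "basmi_lin": [4.5, 4.2, 3.6, 4.1, None, 3.8],          # the missing week-12 value is simply omitted
})
print(summarize_by_visit(toy))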
Improvement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% of patients (178 of 278) at week 12 to 78.2% of patients (129 of 165) at year 5 in the any adalimumab population, and from 68.0% of patients (83 of 122) at week 12 to 78.7% of patients (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). The proportion of patients who achieved the MID for ASQoL increased from 62.0% of patients (176 of 284) at week 12 to 84.0% of patients (142 of 169) at year 5 in the any adalimumab population, and from 67.2% of patients (84 of 125) at week 12 to 84.8% of patients (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).

Fig. 2. Quality of life measures. (A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.

BASMIlin was significantly correlated with each evaluated disease activity, physical function or HRQoL measure at week 12 and year 5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.

Table 3. Correlation of BASMIlin with disease activity, physical function and quality of life measures(a)
Values are given as n, r(b) and P-value(c) at week 12 (n = 309) and year 5 (n = 124).
  BASDAI: week 12: n = 308, r = 0.32, P < 0.001; year 5: n = 123, r = 0.41, P < 0.001
  Total back pain: week 12: n = 308, r = 0.25, P < 0.001; year 5: n = 124, r = 0.42, P < 0.001
  BASFI: week 12: n = 308, r = 0.52, P < 0.001; year 5: n = 124, r = 0.65, P < 0.001
  SF-36 PCS: week 12: n = 278, r = −0.33, P < 0.001; year 5: n = 121, r = −0.40, P < 0.001
  ASQoL: week 12: n = 282, r = 0.30, P < 0.001; year 5: n = 124, r = 0.33, P < 0.001
(a) Analysis is for the any adalimumab population. (b) Interpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. (c) A significant P-value suggests that a non-zero correlation may be present between corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.

Table 4. Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5(a)
BASMIlin (dependent variable); values are parameter estimate (s.e.), t-value and P-value.
  Intercept: 0.852 (0.47), t = 1.82, P = 0.07
  Age, years: 0.046 (0.01), t = 3.99, P = 0.0001
  BASFI, 0–10 cm VAS: 0.44 (0.06), t = 6.75, P < 0.0001
R2 = 0.44; adjusted R2 = 0.43.
(a) Analysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.

There is a growing body of evidence that treatment of AS with anti-TNF agents provides long-term benefits. The progressive restriction of physical function typically experienced by patients with AS negatively affects their HRQoL. Slowing or halting this progressive loss of function over the long term is thus an important goal of AS therapy. The current study is the first to report up to 5 years of data on improvement in spinal mobility with adalimumab. Adalimumab treatment resulted in improvements in spinal mobility, as measured by the composite BASMIlin score, which appear to be due to improvements in the lumbar side flexion, intermalleolar distance and cervical rotation components. In contrast, lumbar flexion and tragus-to-wall distance measurements remained similar to baseline throughout the study. These improvements in spinal mobility are consistent with shorter-term findings with adalimumab [23] and long-term findings with other anti-TNF agents [36–38].
The baseline BASMIlin scores in the current analysis were higher than the baseline BASMI2 scores previously reported by van der Heijde et al. [21] for the ATLAS study population originally randomized to treatment with adalimumab. However, this is not surprising, as mean status scores for BASMI2 have been shown to be lower than BASMIlin scores [28, 39].
The current analysis demonstrated that improvements in physical function and disease activity were maintained over 5 years. Consistent with these data, HRQoL improvements observed after 12 and 24 weeks of adalimumab treatment [27] were sustained until the end of the 5-year study. By year 5, most patients remaining in the study had reached the MID for both ASQoL and SF-36 PCS. These results were observed despite the finding that progression of structural damage in AS (as determined by radiographs) does not appear to be inhibited by 2 years of adalimumab treatment [40]. Radiographic data with longer follow-up are not available. Multivariate regression analyses in a study of infliximab treatment demonstrated that HRQoL in AS is determined by physical function and disease activity [41].
In addition, multivariate regression showed that physical function is dependent on spinal mobility and disease activity; the degree of spinal mobility, in turn, is determined by irreversible structural damage and reversible inflammation of the spine [41, 42]. However, the association between structural damage and spinal mobility is highly variable among individual patients [15]. This discordance may be due to the influence of spinal inflammation on spinal mobility [42]. Thus, maintaining a high HRQoL by improving physical function and reducing disease activity is an achievable goal for patients with AS treated with anti-TNF therapy.
The second objective of the current analysis was to determine whether spinal mobility correlated with other AS clinical, functional and HRQoL outcomes. Significant correlations between BASMIlin and BASDAI, total back pain, SF-36 PCS and ASQoL were observed, although they were weak. The weakness of these associations was further confirmed by multivariate regression analysis, in which these measures were not statistically significant contributors to BASMIlin. BASMIlin correlated best with BASFI at both week 12 and year 5. This moderate correlation suggests that improvements in spinal mobility lead to enhanced physical function throughout extended treatment with adalimumab. The multivariate regression analysis confirmed the significant positive association between spinal mobility and physical function. In a previous study of 70 patients with AS who completed 6 months of pamidronate therapy, changes in the BASMI2 also significantly correlated with changes in BASFI (Pearson correlation coefficient = 0.44; P < 0.001) [43].
The main strength of this study is the duration of the long-term follow-up. ATLAS is the largest long-term clinical trial of anti-TNF treatment of AS to date. Limitations of long-term analyses include the fact that patients for whom efficacy is suboptimal are less likely to remain in the study, enriching the continuing study population with patients who are doing relatively well. Also, the open-label nature of the extension period may have resulted in bias.
Conclusion
Treatment of active AS with adalimumab for up to 5 years resulted in sustained benefits in spinal mobility, physical function and HRQoL. Spinal mobility was correlated with patient-reported function, both early in the course of adalimumab treatment and after 5 years of therapy, suggesting that improvements in mobility result in enhanced long-term physical function.
[ "intro", "methods", "subjects", null, null, "results", "subjects", null, null, null, null, "discussion", "conclusions", "supplementary-material" ]
[ "anti-TNF drugs", "ankylosing spondylitis", "spinal mobility", "health-related quality of life", "physical function" ]
Introduction: AS is a chronic inflammatory disease that affects the axial skeleton, peripheral joints and entheses, primarily of the spine and SI joints [1]. Symptoms of AS include back pain, loss of spinal mobility, joint stiffness and fatigue. The progressive nature of AS can result in complete fusion of the spine, putting patients at risk of vertebral fractures [2]. Progressive AS can also cause significant functional impairment, reductions in health-related quality of life (HRQoL) [3, 4], reduced capacity for work [5, 6] and substantial direct and indirect costs for the patient and the health care system [7]. TNF antagonists (etanercept, infliximab, golimumab and adalimumab) have demonstrated clinical efficacy in short-term and long-term clinical trials [8–25]. For adalimumab, long-term effectiveness data for patients treated for up to 5 years are now available from the Adalimumab Trial Evaluating Long-term Efficacy and Safety for AS (ATLAS) study. ATLAS was a Phase 3, randomized, 24-week, double-blind, placebo-controlled, multicentre trial with an open-label extension for up to 5 years, assessing efficacy and safety of adalimumab in patients with active AS [21]. This study demonstrated that significantly more patients treated with adalimumab achieved Assessment in SpondyloArthritis international Society (ASAS) 20 response compared with those treated with a placebo within 2 weeks of treatment initiation. A high ASAS20 response rate (89%) was maintained in patients completing 5 years of treatment [26]. Considering the lifelong consequences of AS on spinal mobility and HRQoL, long-term data on the impact of anti-TNF treatment on these outcomes are needed. The ATLAS trial has shown that adalimumab treatment resulted in short-term (12–24 weeks) improvements in physical function, disease activity, general health and HRQoL compared with placebo in patients with AS [21, 27]. Moreover, improvements in these parameters were maintained through 3 years of treatment [8]. However, sustained improvements in spinal mobility after adalimumab treatment as measured by the BASMI have only been reported for 2 years of adalimumab treatment [23]. The current report provides an assessment of the long-term improvement in spinal mobility using the more sensitive linear BASMI (BASMIlin) [28], as well as physical function and HRQoL through 5 years of adalimumab treatment in patients with active AS. In addition, this analysis evaluated the association of spinal mobility and clinical, functional and HRQoL outcomes. Methods: Patients and study design Patients were ≥18 years of age, fulfilled the modified New York criteria for AS [29] and had ≥2 of the following disease activity criteria: BASDAI score ≥4, morning stiffness ≥1 h and VAS for total back pain ≥4 (on a scale of 0–10). Full details of the inclusion/exclusion criteria have been previously published [21]. Patients were randomized 2:1 to receive adalimumab (AbbVie, North Chicago, IL, USA) 40 mg or placebo s.c. every other week for 24 weeks. Further details of the study design have been published [21]. Patients in the randomized portion of the trial were eligible to continue treatment in an open-label extension study in which patients received open-label adalimumab 40 mg every other week or weekly for up to 5 years, as previously described [23, 26]. Clinic visits during the first 6months of the open-label period occurred every 6 weeks. Visits were every 12–16 weeks for the remainder of the open-label extension period. 
All research was carried out in compliance with the Declaration of Helsinki. Institutional ethics review board and ethics committee approval was obtained, and each patient provided written informed consent.

Outcome measures
Spinal mobility in the original controlled, double-blind period of ATLAS was assessed using BASMI2, a composite index (scale 0–10) based on categorical scores on a scale of 0–2 for five clinical measurements: cervical rotation (in degrees), anterior lumbar flexion, lumbar side flexion, intermalleolar distance and tragus-to-wall distance (all measured in centimetres) [30]. BASMIlin was calculated using the data collected for the BASMI2. BASMIlin is a composite measure, with a total score ranging from 0 to 10 based on a linear assessment-to-score conversion of the five clinical measurements. The BASMIlin has demonstrated a greater sensitivity to change compared with the BASMI2 [28]. A higher score indicates worse spinal mobility. Disease activity was assessed using BASDAI (0–10 cm VAS) [31], total back pain (0–10 cm VAS) and CRP (mg/dl). Physical function was assessed using the BASFI (0–10 cm VAS) and the Short Form-36 (SF-36) physical component score (PCS; 0–50) [32]. HRQoL was assessed using the AS quality of life (ASQoL) and SF-36 PCS [33]. A decrease of ≥1.8 points in ASQoL (where lower scores represent an increase in AS-specific quality of life) [34] or an increase of ≥3.0 points in the SF-36 PCS (where higher scores represent better health status) [27] has been previously identified as the minimum important difference (MID) for these HRQoL measures. Details of the questions posed, domains assessed and possible ranges for these instruments have been published [8].

Statistical analysis
The mean observed scores or values at baseline, week 12 and years 1, 3 and 5 were determined for BASMIlin and its components, BASDAI, total back pain, CRP, BASFI, SF-36 PCS and ASQoL. Values and scores from baseline up to week 12 and years 1, 3 and 5 were time-averaged. Analyses were conducted for all patients who received ≥1 dose of adalimumab (blinded or open-label), termed the any adalimumab population. Baseline for this analysis was defined as the last observation before the first dose of adalimumab and included patients originally enrolled in the placebo arm but who switched to treatment with open-label adalimumab. An additional analysis of efficacy endpoints at baseline, week 12, and years 1, 3 and 5 was conducted using data from only those patients who were originally randomized to adalimumab and who completed 5 years of the study, termed the 5-year adalimumab completer population. At the year 5 time point, the any adalimumab and the 5-year adalimumab completer populations were identical, as no patients in the any adalimumab group who had been randomized to receive placebo during the double-blind period reached a full 5 years of adalimumab treatment. Values reported are those observed (i.e. no imputations were made for missing data). Correlation coefficients between BASMIlin and BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman’s rank correlation. All assumptions for the Spearman’s rank order correlation coefficient were verified, and correlation coefficients were tested for statistical significance (P < 0.05). Interpretation of the correlation coefficients was as follows: 0.00−0.29, little or no correlation; 0.30−0.49, weak; 0.50−0.69, moderate; 0.70−0.89, strong; and 0.90−1.00, very strong [35].
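As a minimal sketch of the correlation step described above, the snippet below computes Spearman's rank correlation between BASMIlin and another measure and labels the coefficient with the interpretation bands quoted in the text. The data frame and column names are hypothetical placeholders; this is not the study's analysis code.

# Illustrative sketch of Spearman's rank correlation plus the interpretation bands
# quoted in the Methods; the data and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

def interpret_r(r: float) -> str:
    """Map |r| to the bands used in the text (0.00-0.29 little or none ... 0.90-1.00 very strong)."""
    a = abs(r)
    if a < 0.30:
        return "little or no correlation"
    if a < 0.50:
        return "weak"
    if a < 0.70:
        return "moderate"
    if a < 0.90:
        return "strong"
    return "very strong"

def spearman_with_label(df: pd.DataFrame, x: str, y: str):
    """Spearman rho and P-value on observed (complete) pairs, with a verbal label for rho."""
    pairs = df[[x, y]].dropna()
    rho, p = spearmanr(pairs[x], pairs[y])
    return rho, p, interpret_r(rho)

# Toy example (invented values, not trial data):
toy = pd.DataFrame({"basmi_lin": [3.1, 4.5, 5.2, 2.8, 6.0],
                    "basfi":     [2.5, 4.0, 6.1, 3.0, 7.2]})
print(spearman_with_label(toy, "basmi_lin", "basfi"))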
As linear regression analysis of BASMIlin showed significant association with each of the covariates (BASDAI, total back pain, BASFI, SF-36 PCS and ASQoL), multivariate regression analysis was performed by first including all five explanatory variables in the model and subsequently dropping and adding variables to come up with the best model, based on adjusted R2 values. Although the BASDAI showed significant association with BASMIlin in the multivariate regression model, it was excluded from the final model due to multicollinearity between BASDAI and BASFI. BASFI was retained in the model due to its stronger association with BASMIlin. The final model included age and BASFI as covariates for explaining BASMIlin. Results: Patients Of the 315 randomized patients (adalimumab, n = 208; placebo, n = 107), 311 received ≥ 1 dose of adalimumab, either blinded or open-label (any adalimumab population; see Supplementary Fig. S1, available at Rheumatology Online). From this population, 65% (202 of the 311) completed the 5-year study. Withdrawal of consent (n = 37) and adverse events (n = 38) were the most common reasons for discontinuation during the 5 years of the study. The median [mean (s.d.)] duration of exposure to adalimumab in the any adalimumab population was 4.8 years [3.9 (1.6) years]. Of the original 208 patients who were randomized to treatment with adalimumab, 125 patients (60%) completed 5 years of treatment (the 5-year adalimumab completer population). A significant number of patients (n = 77) initially received placebo for 6 months and thus were exposed to adalimumab for only 4.5 years; although these patients completed the study, they were not included in the population for analysis of 5-year exposure (n = 125). In the any adalimumab population, 82 of the 311patients (26%) received ≥ 7 doses in the last 70days of treatment, indicating weekly dosing. Of the 202 patients who received adalimumab at any time and who completed 5 years in the study, 29 patients (14%) received ≥ 7 doses in the last 70 days of treatment. Baseline clinical characteristics for patients treated during the double-blind period have been previously reported [21]. Demographics and disease state characteristics for the 5-year adalimumab completer population (n = 125) at baseline (i.e. assessment prior to first dose of adalimumab in the double-blind period) were similar to the any adalimumab study population (Table 1). Table 1Baseline patient demographics and disease state of patients who received adalimumabAny adalimumab populationa n = 311Population who completed 5 years of adalimumab treatmentb n = 125Age, mean (s.d.), years42.3 (11.6)42.8 (12.0)Male, n (%)233 (74.9)101 (80.8)White, n (%)299 (96.1)121 (96.8)Disease duration, mean (s.d.), years11.0 (9.5)11.9 (10.4)BASDAI score, 0–10 cm VAS6.3 (1.7)6.2 (1.8)BASFI score, 0–10 cm VAS5.4 (2.2)5.2 (2.1)BASMIlin 0–104.4 (1.7)c4.3 (1.7)SF-36 PCS, 0–5032.5 (8.0)d33.2 (8.2)eASQoL, 0–1810.3 (4.3)9.9 (4.3)Total back pain, 0–10 cm VAS6.5 (2.1)6.4 (2.1)Data are mean (s.d.) unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey. 
bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey. Spinal mobility Both BASMI (as previously described by van der Heijde et al. [21]) and BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intramalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06). Table 2Spinal mobility–BASMIlin and components over 5 yearsDuration of exposure to adalimumabAssessmentBaselineWeek 12Year 1Year 3Year 5BASMIlin composite, scale 0–10    Any adalimumab populationa4.4 (1.7) n = 3094.2 (1.7) n = 3094.0 (1.7) n = 2823.8 (1.7) n = 2333.7 (1.7) n = 124    5-year adalimumab completersb4.3 (1.7) n = 1254.1 (1.7) n = 1243.8 (1.7) n = 1253.7 (1.7) n = 1243.7 (1.7) n = 124Lumbar flexion, cm    Any adalimumab populationa4.0 (3.5) n = 3114.0 (3.0) n = 3094.0 (2.7) n = 2824.0 (2.6) n = 2334.0 (2.4) n = 124    5-year adalimumab completersb4.1 (3.5) n = 1254.2 (3.2) n = 1244.3 (2.9) n = 1254.1 (2.7) n = 1244.0 (2.4) n = 124Lumbar side flexion, cm    Any adalimumab populationa9.5 (5.4) n = 30710.2 (5.3) n = 30610.9 (5.5) n = 28211.5 (5.6) n = 23312.0 (5.6) n = 123    5-year adalimumab completersb9.8 (5.5) n = 12210.7 (5.4) n = 12111.8 (5.6) n = 12512.0 (5.5) n = 12412.0 (5.6) n = 123Cervical rotation, degrees    Any adalimumab populationa46.3 (22.1) n = 31048.2 (20.7) n = 30751.4 (20.4) n = 28253.9 (20.6) n = 23354.9 (20.8) n = 125    5-year adalimumab completersb46.7 (20.7) n = 12548.7 (20.3) n = 12252.1 (20.5) n = 12554.1 (20.5) n = 12454.9 (20.8) n = 125Intermalleolar distance, cm    Any adalimumab populationa93.6 (26.1) n = 30597.1 (25.3) n = 305101.2 (25.8) n = 280104.0 (22.5) n = 232106.2 (22.4) n = 124    5-year adalimumab completersb95.9 (23.3) n = 124100.3 (24.7) n = 123104.4 (27.1) n = 125106.0 (23.3) n = 124106.2 (22.4) n = 124Tragus-to-wall distance, cm    Any adalimumab populationa15.8 (6.0) n = 30915.6 (5.7) n = 30715.6 (5.5) n = 28215.5 (5.7) n = 23315.7 (6.1) n = 125    5-year adalimumab completersb15.9 (6.2) n = 12415.9 (6.0) n = 12315.8 (5.8) n = 12515.6 (6.0) n = 12415.7 (6.1) n = 125Data are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. 
BASMIlin: linear BASMI.

Disease activity and physical function
In the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year 4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.

Fig. 1. Mean BASDAI, total back pain and BASFI scores over time. Analysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.

Health-related quality of life
Improvement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% of patients (178 of 278) at week 12 to 78.2% of patients (129 of 165) at year 5 in the any adalimumab population, and from 68.0% of patients (83 of 122) at week 12 to 78.7% of patients (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). The proportion of patients who achieved the MID for ASQoL increased from 62.0% of patients (176 of 284) at week 12 to 84.0% of patients (142 of 169) at year 5 in the any adalimumab population, and from 67.2% of patients (84 of 125) at week 12 to 84.8% of patients (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).

Fig. 2. Quality of life measures. (A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.
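The MID responder percentages above are simple proportions of patients whose change from baseline crosses the thresholds defined in the Methods (a decrease of at least 1.8 points for ASQoL, an increase of at least 3.0 points for SF-36 PCS). The Python sketch below illustrates that calculation on a hypothetical data frame; the column names are placeholders, not the trial database.

# Illustrative MID responder calculation; thresholds are those stated in the text,
# the DataFrame and column names are hypothetical.
import pandas as pd

ASQOL_MID = -1.8       # improvement in ASQoL = decrease of at least 1.8 points
SF36_PCS_MID = 3.0     # improvement in SF-36 PCS = increase of at least 3.0 points

def mid_proportion(baseline: pd.Series, follow_up: pd.Series,
                   threshold: float, lower_is_better: bool) -> float:
    """Proportion of patients with observed data at both visits who meet the MID."""
    change = (follow_up - baseline).dropna()
    if lower_is_better:
        responders = change <= threshold       # e.g. ASQoL change of -1.8 or more negative
    else:
        responders = change >= threshold       # e.g. SF-36 PCS gain of 3.0 or more
    return float(responders.mean())

# Toy example (invented values):
df = pd.DataFrame({"asqol_baseline": [10, 12, 9, 15],
                   "asqol_year5":    [7, 11, 6, 14]})
print(mid_proportion(df["asqol_baseline"], df["asqol_year5"], ASQOL_MID, lower_is_better=True))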
Association of spinal mobility with disease activity, function and HRQoL
BASMIlin was significantly correlated with each evaluated disease activity, physical function or HRQoL measure at week 12 and year 5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.

Table 3. Correlation of BASMIlin with disease activity, physical function and quality of life measures(a)
Values are given as n, r(b) and P-value(c) at week 12 (n = 309) and year 5 (n = 124).
  BASDAI: week 12: n = 308, r = 0.32, P < 0.001; year 5: n = 123, r = 0.41, P < 0.001
  Total back pain: week 12: n = 308, r = 0.25, P < 0.001; year 5: n = 124, r = 0.42, P < 0.001
  BASFI: week 12: n = 308, r = 0.52, P < 0.001; year 5: n = 124, r = 0.65, P < 0.001
  SF-36 PCS: week 12: n = 278, r = −0.33, P < 0.001; year 5: n = 121, r = −0.40, P < 0.001
  ASQoL: week 12: n = 282, r = 0.30, P < 0.001; year 5: n = 124, r = 0.33, P < 0.001
(a) Analysis is for the any adalimumab population. (b) Interpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. (c) A significant P-value suggests that a non-zero correlation may be present between corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.
Table 4. Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5(a)
BASMIlin (dependent variable); values are parameter estimate (s.e.), t-value and P-value.
  Intercept: 0.852 (0.47), t = 1.82, P = 0.07
  Age, years: 0.046 (0.01), t = 3.99, P = 0.0001
  BASFI, 0–10 cm VAS: 0.44 (0.06), t = 6.75, P < 0.0001
R2 = 0.44; adjusted R2 = 0.43.
(a) Analysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.
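Table 4 reports an ordinary least-squares fit of BASMIlin on age and BASFI. The sketch below shows how such a model could be fitted with statsmodels and how the published estimates would be applied to obtain a fitted BASMIlin value; the data frame is hypothetical, and the hard-coded coefficients are simply the Table 4 estimates used for illustration.

# Illustrative sketch of the Table 4 regression (BASMIlin on age and BASFI).
# The DataFrame is hypothetical; only the hard-coded coefficients come from Table 4.
import pandas as pd
import statsmodels.api as sm

def fit_basmi_model(df: pd.DataFrame):
    """OLS of BASMIlin on age and BASFI, dropping rows with missing values."""
    X = sm.add_constant(df[["age", "basfi"]])
    return sm.OLS(df["basmi_lin"], X, missing="drop").fit()   # inspect .params, .rsquared_adj, .summary()

def predicted_basmi_from_table4(age: float, basfi: float) -> float:
    """Fitted value using the published estimates: intercept 0.852, age 0.046, BASFI 0.44."""
    return 0.852 + 0.046 * age + 0.44 * basfi

# Example: a 43-year-old patient with a BASFI of 5.2 (values chosen only for illustration)
print(round(predicted_basmi_from_table4(43, 5.2), 1))   # about 5.1 on the 0-10 BASMIlin scale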
Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5a aAnalysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI. Patients: Of the 315 randomized patients (adalimumab, n = 208; placebo, n = 107), 311 received ≥ 1 dose of adalimumab, either blinded or open-label (any adalimumab population; see Supplementary Fig. S1, available at Rheumatology Online). From this population, 65% (202 of the 311) completed the 5-year study. Withdrawal of consent (n = 37) and adverse events (n = 38) were the most common reasons for discontinuation during the 5 years of the study. The median [mean (s.d.)] duration of exposure to adalimumab in the any adalimumab population was 4.8 years [3.9 (1.6) years]. Of the original 208 patients who were randomized to treatment with adalimumab, 125 patients (60%) completed 5 years of treatment (the 5-year adalimumab completer population). A significant number of patients (n = 77) initially received placebo for 6 months and thus were exposed to adalimumab for only 4.5 years; although these patients completed the study, they were not included in the population for analysis of 5-year exposure (n = 125). In the any adalimumab population, 82 of the 311patients (26%) received ≥ 7 doses in the last 70days of treatment, indicating weekly dosing. Of the 202 patients who received adalimumab at any time and who completed 5 years in the study, 29 patients (14%) received ≥ 7 doses in the last 70 days of treatment. Baseline clinical characteristics for patients treated during the double-blind period have been previously reported [21]. Demographics and disease state characteristics for the 5-year adalimumab completer population (n = 125) at baseline (i.e. assessment prior to first dose of adalimumab in the double-blind period) were similar to the any adalimumab study population (Table 1). Table 1Baseline patient demographics and disease state of patients who received adalimumabAny adalimumab populationa n = 311Population who completed 5 years of adalimumab treatmentb n = 125Age, mean (s.d.), years42.3 (11.6)42.8 (12.0)Male, n (%)233 (74.9)101 (80.8)White, n (%)299 (96.1)121 (96.8)Disease duration, mean (s.d.), years11.0 (9.5)11.9 (10.4)BASDAI score, 0–10 cm VAS6.3 (1.7)6.2 (1.8)BASFI score, 0–10 cm VAS5.4 (2.2)5.2 (2.1)BASMIlin 0–104.4 (1.7)c4.3 (1.7)SF-36 PCS, 0–5032.5 (8.0)d33.2 (8.2)eASQoL, 0–1810.3 (4.3)9.9 (4.3)Total back pain, 0–10 cm VAS6.5 (2.1)6.4 (2.1)Data are mean (s.d.) unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey. Baseline patient demographics and disease state of patients who received adalimumab Data are mean (s.d.) unless otherwise indicated. aPatients who received ≥ 1 dose of adalimumab. Baseline was the last observation before the first dose of adalimumab. bPatients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. cn = 309. dn = 307. en = 124. ASQoL: AS quality of life questionnaire; BASMIlin: linear BASMI; PCS: physical component score; SF-36: Short Form-36 Health Survey. Spinal mobility: Both BASMI (as previously described by van der Heijde et al. 
[21]) and BASMIlin (Table 2) were significantly improved compared with placebo after 12 weeks of treatment with adalimumab (P < 0.001). In the any adalimumab population, improvement in spinal mobility as measured by the composite BASMIlin score was sustained through 5 years of treatment with adalimumab (Table 2). In the 5-year adalimumab completer population, BASMIlin scores were 4.3 (s.d. 1.7) at baseline and 3.7 (1.7) after 5 years of treatment with adalimumab, a mean change of –0.6 (Table 2; P < 0.001 for change from baseline at year 5). In this 5-year adalimumab completer population, the individual BASMI components of lumbar side flexion, cervical rotation and intermalleolar distance demonstrated significant improvements from baseline at year 5 (P < 0.001 for all comparisons of change from baseline at year 5) and appeared to continue to improve over the course of the 5-year observation period (Table 2). For patients in the any adalimumab population with weekly dosing of adalimumab in the last 70 days of treatment, BASMIlin scores were 4.7 (1.7) at baseline and 4.3 (1.5) at year 5, a mean change of –0.4 (P = 0.06).
Table 2. Spinal mobility: BASMIlin and components over 5 years
Assessment (duration of exposure to adalimumab): Baseline | Week 12 | Year 1 | Year 3 | Year 5
BASMIlin composite, scale 0–10
    Any adalimumab population (a): 4.4 (1.7), n = 309 | 4.2 (1.7), n = 309 | 4.0 (1.7), n = 282 | 3.8 (1.7), n = 233 | 3.7 (1.7), n = 124
    5-year adalimumab completers (b): 4.3 (1.7), n = 125 | 4.1 (1.7), n = 124 | 3.8 (1.7), n = 125 | 3.7 (1.7), n = 124 | 3.7 (1.7), n = 124
Lumbar flexion, cm
    Any adalimumab population (a): 4.0 (3.5), n = 311 | 4.0 (3.0), n = 309 | 4.0 (2.7), n = 282 | 4.0 (2.6), n = 233 | 4.0 (2.4), n = 124
    5-year adalimumab completers (b): 4.1 (3.5), n = 125 | 4.2 (3.2), n = 124 | 4.3 (2.9), n = 125 | 4.1 (2.7), n = 124 | 4.0 (2.4), n = 124
Lumbar side flexion, cm
    Any adalimumab population (a): 9.5 (5.4), n = 307 | 10.2 (5.3), n = 306 | 10.9 (5.5), n = 282 | 11.5 (5.6), n = 233 | 12.0 (5.6), n = 123
    5-year adalimumab completers (b): 9.8 (5.5), n = 122 | 10.7 (5.4), n = 121 | 11.8 (5.6), n = 125 | 12.0 (5.5), n = 124 | 12.0 (5.6), n = 123
Cervical rotation, degrees
    Any adalimumab population (a): 46.3 (22.1), n = 310 | 48.2 (20.7), n = 307 | 51.4 (20.4), n = 282 | 53.9 (20.6), n = 233 | 54.9 (20.8), n = 125
    5-year adalimumab completers (b): 46.7 (20.7), n = 125 | 48.7 (20.3), n = 122 | 52.1 (20.5), n = 125 | 54.1 (20.5), n = 124 | 54.9 (20.8), n = 125
Intermalleolar distance, cm
    Any adalimumab population (a): 93.6 (26.1), n = 305 | 97.1 (25.3), n = 305 | 101.2 (25.8), n = 280 | 104.0 (22.5), n = 232 | 106.2 (22.4), n = 124
    5-year adalimumab completers (b): 95.9 (23.3), n = 124 | 100.3 (24.7), n = 123 | 104.4 (27.1), n = 125 | 106.0 (23.3), n = 124 | 106.2 (22.4), n = 124
Tragus-to-wall distance, cm
    Any adalimumab population (a): 15.8 (6.0), n = 309 | 15.6 (5.7), n = 307 | 15.6 (5.5), n = 282 | 15.5 (5.7), n = 233 | 15.7 (6.1), n = 125
    5-year adalimumab completers (b): 15.9 (6.2), n = 124 | 15.9 (6.0), n = 123 | 15.8 (5.8), n = 125 | 15.6 (6.0), n = 124 | 15.7 (6.1), n = 125
Data are mean (s.d.). Decreased BASMIlin composite scores indicate improvement. (a) Patients who received ≥ 1 dose of adalimumab; baseline was the last observation before the first dose of adalimumab. (b) Patients initially randomized to adalimumab and who had a total of 5 years of adalimumab exposure during the study. BASMIlin: linear BASMI.
Disease activity and physical function: In the any adalimumab population, improvements in disease activity (BASDAI), total back pain and function (BASFI) were sustained over 5 years (Fig. 1A). CRP levels showed the same pattern, with mean values of 1.20 mg/dl at week 12 (n = 308), 0.71 mg/dl at year 1 (n = 282), 0.56 mg/dl at year 3 (n = 233), 0.52 mg/dl at year 4 (n = 217) and 0.50 mg/dl at year 5 (n = 125). In the 5-year adalimumab completer population, improvements in disease activity, total back pain and function were observed over the 5-year period (Fig. 1B). CRP levels in the 5-year adalimumab completer population were 1.30 mg/dl at week 12, 0.54 mg/dl at year 3, 0.51 mg/dl at year 4 and 0.50 mg/dl at year 5.
Fig. 1. Mean BASDAI, total back pain and BASFI scores over time. Analysis in the (A) any adalimumab population and (B) 5-year adalimumab completer population.
Health-related quality of life: Improvement in the SF-36 PCS and ASQoL was maintained through 5 years of treatment with adalimumab in both the any adalimumab and the 5-year adalimumab completer populations (Fig. 2A). The proportion of patients who achieved the MID for SF-36 PCS increased numerically from 64.0% of patients (178 of 278) at week 12 to 78.2% of patients (129 of 165) at year 5 in the any adalimumab population, and from 68.0% of patients (83 of 122) at week 12 to 78.7% of patients (96 of 122) at year 5 in the 5-year adalimumab completer population (Fig. 2B). The proportion of patients who achieved the MID for ASQoL increased from 62.0% of patients (176 of 284) at week 12 to 84.0% of patients (142 of 169) at year 5 in the any adalimumab population, and from 67.2% of patients (84 of 125) at week 12 to 84.8% of patients (106 of 125) at year 5 in the 5-year adalimumab completer population (Fig. 2B).
Fig. 2. Quality of life measures. (A) Mean SF-36 PCS and ASQoL over time; (B) percentage of patients reaching the minimum important difference (MID) for SF-36 PCS and ASQoL over time. The MID was defined as a change of at least –1.8 for ASQoL and at least 3.0 for SF-36 PCS. ASQoL: AS quality of life; SF-36 PCS: Short Form-36 physical component score.
Association of spinal mobility with disease activity, function and HRQoL: BASMIlin was significantly correlated with each evaluated disease, function or HRQoL measure at week 12 and year 5 (P < 0.001 for each; Table 3). The strongest correlation was observed for BASFI at week 12 and year 5 (r = 0.52 and r = 0.65, respectively). A multivariate regression model confirmed a significant association between BASMIlin and BASFI, as well as between BASMIlin and age (P < 0.001; Table 4). None of the other variables tested in the multivariate regression model had significant associations with BASMIlin.
Table 3. Correlation of BASMIlin with disease activity, physical function and quality of life measures (a)
Measure | Week 12 (n = 309): n, r (b), P-value (c) | Year 5 (n = 124): n, r (b), P-value (c)
BASDAI | 308, 0.32, <0.001 | 123, 0.41, <0.001
Total back pain | 308, 0.25, <0.001 | 124, 0.42, <0.001
BASFI | 308, 0.52, <0.001 | 124, 0.65, <0.001
SF-36 PCS | 278, −0.33, <0.001 | 121, −0.40, <0.001
ASQoL | 282, 0.30, <0.001 | 124, 0.33, <0.001
(a) Analysis is for the any adalimumab population. (b) Interpretation of the correlation coefficients: 0.00–0.29, little or no correlation; 0.30–0.49, weak; 0.50–0.69, moderate; 0.70–0.89, strong; and 0.90–1.00, very strong. (c) A significant P-value suggests that a non-zero correlation may be present between the corresponding variables. ASQoL: AS quality of life; BASMIlin: linear BASMI; SF-36 PCS: Short Form-36 physical component score.
Table 4. Multivariate regression analysis between BASMIlin and other clinical and demographic variables at year 5 (a)
BASMIlin (dependent variable) | Parameter estimate (s.e.) | t-value | P-value
Intercept | 0.852 (0.47) | 1.82 | 0.07
Age, years | 0.046 (0.01) | 3.99 | 0.0001
BASFI, 0–10 cm VAS | 0.44 (0.06) | 6.75 | <0.0001
R² = 0.44; adjusted R² = 0.43
(a) Analysis is for the population who completed 5 years of adalimumab treatment. BASMIlin: linear BASMI.
Discussion: There is a growing body of evidence that treatment of AS with anti-TNF agents provides long-term benefits. The progressive restriction of physical function typically experienced by patients with AS negatively affects their HRQoL. Slowing or halting this progressive loss of function over the long term is thus an important goal of AS therapy. The current study is the first to report data for up to 5 years in terms of improvement in spinal mobility with adalimumab. Adalimumab treatment resulted in improvements in spinal mobility, as measured by the composite BASMIlin score, which appear to be due to improvements in the lumbar side flexion, intermalleolar distance and cervical rotation components. In contrast, lumbar flexion and tragus-to-wall distance measurements remained similar to baseline throughout the study. These improvements in spinal mobility are consistent with shorter-term findings with adalimumab [23] and long-term findings with other anti-TNF agents [36–38]. The baseline BASMIlin scores in the current analysis were higher than the baseline BASMI2 score previously reported by van der Heijde et al. [21] for the ATLAS study population originally randomized to treatment with adalimumab. However, this is not surprising, as mean status scores for BASMI2 have been shown to be lower than BASMIlin scores [28, 39]. The current analysis demonstrated that improvements in physical function and disease activity were maintained over 5 years. Consistent with these data, HRQoL improvements observed after 12 and 24 weeks of adalimumab treatment [27] were sustained until the end of the 5-year study.
By year 5, most patients remaining in the study had reached the MID for both ASQoL and SF-36 PCS. These results were observed despite the finding that progression of structural damage in AS (as determined by radiographs) does not appear to be inhibited by 2 years of adalimumab treatment [40]. Radiographic data with longer follow-up are not available. Multivariate regression analyses in a study of infliximab treatment demonstrated that HRQoL in AS is determined by physical function and disease activity [41]. In addition, multivariate regression showed that physical function is dependent on spinal mobility and disease activity; the degree of spinal mobility, in turn, is determined by irreversible structural damage and reversible inflammation of the spine [41, 42]. However, the association between structural damage and spinal mobility is highly variable among individual patients [15]. This discordance may be due to the influence of spinal inflammation on spinal mobility [42]. Thus, maintaining a high HRQoL by improving physical function and reducing disease activity is an achievable goal for patients with AS treated with anti-TNF therapy. The second objective of the current analysis was to determine whether spinal mobility correlated with other AS clinical, functional and HRQoL outcomes. Significant correlations between BASMIlin and BASDAI, total back pain, SF-36 PCS and ASQoL were observed, although they were weak. The weakness of the association between these variables was further confirmed by multivariate regression analysis, in which these measures were not statistically significant contributory factors to BASMIlin. BASMIlin correlated best with BASFI at both week 12 and year 5. This moderate correlation suggests that improvements in spinal mobility lead to enhanced physical function throughout extended treatment with adalimumab. The multivariate regression analysis confirmed the significant positive association between spinal mobility and physical function. In a previous study of 70 patients with AS who completed 6 months of pamidronate therapy, changes in the BASMI2 also significantly correlated with changes in BASFI (Pearson correlation coefficient = 0.44; P < 0.001) [43]. The main strength of this study is the duration of the long-term follow-up. ATLAS is the largest long-term clinical trial of anti-TNF treatment of AS to date. Limitations of long-term analyses include the fact that patients for whom efficacy is suboptimal are less likely to remain in the study, enriching the continuing study population with patients who are doing relatively well. Also, the open-label nature of the extension period may have resulted in bias. Conclusion Treatment of active AS with adalimumab for up to 5 years resulted in sustained benefits in spinal mobility, physical function and HRQoL. Spinal mobility was correlated with patient-reported function, both early in the course of adalimumab treatment and after 5 years of therapy, suggesting improvements in mobility result in enhanced long-term physical function. 
Conclusion: Treatment of active AS with adalimumab for up to 5 years resulted in sustained benefits in spinal mobility, physical function and HRQoL. Spinal mobility was correlated with patient-reported function, both early in the course of adalimumab treatment and after 5 years of therapy, suggesting improvements in mobility result in enhanced long-term physical function. Supplementary Material:
Background: Clinicaltrials.gov; https://clinicaltrials.gov/NCT00085644. Methods: Patients received blinded adalimumab 40 mg or placebo every other week for 24 weeks, then open-label adalimumab for up to 5 years. Spinal mobility was evaluated using linear BASMI (BASMIlin). BASDAI, total back pain, CRP, BASFI, Short Form-36 and AS quality of life (ASQoL) were also assessed. Correlations between BASMIlin and clinical, functional and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman's rank correlation. Associations were further analysed using multivariate regression. Results: Three hundred and eleven patients received ≥1 dose of adalimumab; 125 of the 208 patients originally randomized to adalimumab received treatment for 5 years. Improvements in BASMIlin were sustained through 5 years, with a mean change of -0.6 from baseline in the population who completed 5 years of treatment with adalimumab. Improvements in disease activity, physical function and ASQoL were also sustained through 5 years. BASMIlin was significantly correlated with all evaluated clinical outcomes (P < 0.001). The highest correlation was with BASFI at 12 weeks (r = 0.52) and at 5 years (r = 0.65). Multivariate regression analysis confirmed this association (P < 0.001). Conclusions: Treatment with adalimumab for up to 5 years demonstrated sustained benefits in spinal mobility, disease activity, physical function and HRQoL in patients with active AS. Spinal mobility was significantly associated with short- and long-term physical function in these patients.
Introduction: AS is a chronic inflammatory disease that affects the axial skeleton, peripheral joints and entheses, primarily of the spine and SI joints [1]. Symptoms of AS include back pain, loss of spinal mobility, joint stiffness and fatigue. The progressive nature of AS can result in complete fusion of the spine, putting patients at risk of vertebral fractures [2]. Progressive AS can also cause significant functional impairment, reductions in health-related quality of life (HRQoL) [3, 4], reduced capacity for work [5, 6] and substantial direct and indirect costs for the patient and the health care system [7]. TNF antagonists (etanercept, infliximab, golimumab and adalimumab) have demonstrated clinical efficacy in short-term and long-term clinical trials [8–25]. For adalimumab, long-term effectiveness data for patients treated for up to 5 years are now available from the Adalimumab Trial Evaluating Long-term Efficacy and Safety for AS (ATLAS) study. ATLAS was a Phase 3, randomized, 24-week, double-blind, placebo-controlled, multicentre trial with an open-label extension for up to 5 years, assessing efficacy and safety of adalimumab in patients with active AS [21]. This study demonstrated that significantly more patients treated with adalimumab achieved Assessment in SpondyloArthritis international Society (ASAS) 20 response compared with those treated with a placebo within 2 weeks of treatment initiation. A high ASAS20 response rate (89%) was maintained in patients completing 5 years of treatment [26]. Considering the lifelong consequences of AS on spinal mobility and HRQoL, long-term data on the impact of anti-TNF treatment on these outcomes are needed. The ATLAS trial has shown that adalimumab treatment resulted in short-term (12–24 weeks) improvements in physical function, disease activity, general health and HRQoL compared with placebo in patients with AS [21, 27]. Moreover, improvements in these parameters were maintained through 3 years of treatment [8]. However, sustained improvements in spinal mobility after adalimumab treatment as measured by the BASMI have only been reported for 2 years of adalimumab treatment [23]. The current report provides an assessment of the long-term improvement in spinal mobility using the more sensitive linear BASMI (BASMIlin) [28], as well as physical function and HRQoL through 5 years of adalimumab treatment in patients with active AS. In addition, this analysis evaluated the association of spinal mobility and clinical, functional and HRQoL outcomes. Conclusion: Treatment of active AS with adalimumab for up to 5 years resulted in sustained benefits in spinal mobility, physical function and HRQoL. Spinal mobility was correlated with patient-reported function, both early in the course of adalimumab treatment and after 5 years of therapy, suggesting improvements in mobility result in enhanced long-term physical function.
Background: Clinicaltrials.gov; https://clinicaltrials.gov/NCT00085644. Methods: Patients received blinded adalimumab 40 mg or placebo every other week for 24 weeks, then open-label adalimumab for up to 5 years. Spinal mobility was evaluated using linear BASMI (BASMIlin). BASDAI, total back pain, CRP, BASFI, Short Form-36 and AS quality of life (ASQoL) were also assessed. Correlations between BASMIlin and clinical, functional and ASQoL outcomes after 12 weeks and after 5 years of adalimumab exposure were evaluated using Spearman's rank correlation. Associations were further analysed using multivariate regression. Results: Three hundred and eleven patients received ≥1 dose of adalimumab; 125 of the 208 patients originally randomized to adalimumab received treatment for 5 years. Improvements in BASMIlin were sustained through 5 years, with a mean change of -0.6 from baseline in the population who completed 5 years of treatment with adalimumab. Improvements in disease activity, physical function and ASQoL were also sustained through 5 years. BASMIlin was significantly correlated with all evaluated clinical outcomes (P < 0.001). The highest correlation was with BASFI at 12 weeks (r = 0.52) and at 5 years (r = 0.65). Multivariate regression analysis confirmed this association (P < 0.001). Conclusions: Treatment with adalimumab for up to 5 years demonstrated sustained benefits in spinal mobility, disease activity, physical function and HRQoL in patients with active AS. Spinal mobility was significantly associated with short- and long-term physical function in these patients.
12,066
282
[ 225, 305, 458, 837, 250, 357, 401 ]
14
[ "adalimumab", "year", "patients", "basmilin", "years", "36", "population", "pcs", "sf 36", "sf" ]
[ "adalimumab trial evaluating", "adalimumab long term", "mobility adalimumab treatment", "adalimumab treatment values", "spinal mobility adalimumab" ]
[CONTENT] anti-TNF drugs | ankylosing spondylitis | spinal mobility | health-related quality of life | physical function [SUMMARY]
[CONTENT] anti-TNF drugs | ankylosing spondylitis | spinal mobility | health-related quality of life | physical function [SUMMARY]
[CONTENT] anti-TNF drugs | ankylosing spondylitis | spinal mobility | health-related quality of life | physical function [SUMMARY]
[CONTENT] anti-TNF drugs | ankylosing spondylitis | spinal mobility | health-related quality of life | physical function [SUMMARY]
[CONTENT] anti-TNF drugs | ankylosing spondylitis | spinal mobility | health-related quality of life | physical function [SUMMARY]
[CONTENT] anti-TNF drugs | ankylosing spondylitis | spinal mobility | health-related quality of life | physical function [SUMMARY]
[CONTENT] Adalimumab | Adult | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Humans | Longitudinal Studies | Male | Middle Aged | Motor Activity | Outcome Assessment, Health Care | Quality of Life | Range of Motion, Articular | Severity of Illness Index | Spine | Spondylitis, Ankylosing | Time Factors | Treatment Outcome | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Adalimumab | Adult | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Humans | Longitudinal Studies | Male | Middle Aged | Motor Activity | Outcome Assessment, Health Care | Quality of Life | Range of Motion, Articular | Severity of Illness Index | Spine | Spondylitis, Ankylosing | Time Factors | Treatment Outcome | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Adalimumab | Adult | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Humans | Longitudinal Studies | Male | Middle Aged | Motor Activity | Outcome Assessment, Health Care | Quality of Life | Range of Motion, Articular | Severity of Illness Index | Spine | Spondylitis, Ankylosing | Time Factors | Treatment Outcome | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Adalimumab | Adult | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Humans | Longitudinal Studies | Male | Middle Aged | Motor Activity | Outcome Assessment, Health Care | Quality of Life | Range of Motion, Articular | Severity of Illness Index | Spine | Spondylitis, Ankylosing | Time Factors | Treatment Outcome | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Adalimumab | Adult | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Humans | Longitudinal Studies | Male | Middle Aged | Motor Activity | Outcome Assessment, Health Care | Quality of Life | Range of Motion, Articular | Severity of Illness Index | Spine | Spondylitis, Ankylosing | Time Factors | Treatment Outcome | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Adalimumab | Adult | Antibodies, Monoclonal, Humanized | Antirheumatic Agents | Dose-Response Relationship, Drug | Female | Follow-Up Studies | Humans | Longitudinal Studies | Male | Middle Aged | Motor Activity | Outcome Assessment, Health Care | Quality of Life | Range of Motion, Articular | Severity of Illness Index | Spine | Spondylitis, Ankylosing | Time Factors | Treatment Outcome | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] adalimumab trial evaluating | adalimumab long term | mobility adalimumab treatment | adalimumab treatment values | spinal mobility adalimumab [SUMMARY]
[CONTENT] adalimumab trial evaluating | adalimumab long term | mobility adalimumab treatment | adalimumab treatment values | spinal mobility adalimumab [SUMMARY]
[CONTENT] adalimumab trial evaluating | adalimumab long term | mobility adalimumab treatment | adalimumab treatment values | spinal mobility adalimumab [SUMMARY]
[CONTENT] adalimumab trial evaluating | adalimumab long term | mobility adalimumab treatment | adalimumab treatment values | spinal mobility adalimumab [SUMMARY]
[CONTENT] adalimumab trial evaluating | adalimumab long term | mobility adalimumab treatment | adalimumab treatment values | spinal mobility adalimumab [SUMMARY]
[CONTENT] adalimumab trial evaluating | adalimumab long term | mobility adalimumab treatment | adalimumab treatment values | spinal mobility adalimumab [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | years | 36 | population | pcs | sf 36 | sf [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | years | 36 | population | pcs | sf 36 | sf [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | years | 36 | population | pcs | sf 36 | sf [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | years | 36 | population | pcs | sf 36 | sf [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | years | 36 | population | pcs | sf 36 | sf [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | years | 36 | population | pcs | sf 36 | sf [SUMMARY]
[CONTENT] term | long term | long | adalimumab | patients | treatment | spinal mobility | hrqol | mobility | spinal [SUMMARY]
[CONTENT] adalimumab | assessed | basmilin | model | patients | correlation | basfi | basdai | 36 | open label [SUMMARY]
[CONTENT] adalimumab | year | population | basmilin | 36 | patients | year adalimumab | table | sf | sf 36 [SUMMARY]
[CONTENT] mobility | function | physical function | spinal mobility | spinal | physical | suggesting improvements mobility | suggesting improvements mobility result | hrqol spinal | hrqol spinal mobility [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | 36 | years | population | mobility | treatment | sf [SUMMARY]
[CONTENT] adalimumab | year | patients | basmilin | 36 | years | population | mobility | treatment | sf [SUMMARY]
[CONTENT] Clinicaltrials.gov [SUMMARY]
[CONTENT] 40 | 24 weeks | up to 5 years ||| ||| BASDAI | CRP ||| BASMIlin | 12 weeks | 5years | Spearman ||| [SUMMARY]
[CONTENT] Three hundred and eleven | ≥1 | 125 | 208 | 5 years ||| BASMIlin | 5 years | 5 years ||| 5 years ||| ||| 12 weeks | 0.52 | 5 years | 0.65 ||| [SUMMARY]
[CONTENT] up to 5 years ||| [SUMMARY]
[CONTENT] ||| 40 | 24 weeks | up to 5 years ||| ||| BASDAI | CRP ||| BASMIlin | 12 weeks | 5years | Spearman ||| ||| Three hundred and eleven | ≥1 | 125 | 208 | 5 years ||| BASMIlin | 5 years | 5 years ||| 5 years ||| ||| 12 weeks | 0.52 | 5 years | 0.65 ||| ||| up to 5 years ||| [SUMMARY]
[CONTENT] ||| 40 | 24 weeks | up to 5 years ||| ||| BASDAI | CRP ||| BASMIlin | 12 weeks | 5years | Spearman ||| ||| Three hundred and eleven | ≥1 | 125 | 208 | 5 years ||| BASMIlin | 5 years | 5 years ||| 5 years ||| ||| 12 weeks | 0.52 | 5 years | 0.65 ||| ||| up to 5 years ||| [SUMMARY]
Establishing Postnatal Growth Monitoring Curves of Preterm Infants in China: Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks.
35684032
Early postnatal growth monitoring and nutrition assessment for preterm infants is a public health and clinical concern. We aimed to establish a set of postnatal growth monitoring curves of preterm infants to better help clinicians make in-hospital and post-discharge nutrition plans for these vulnerable infants.
BACKGROUND
We collected weight, length and head circumference data from a nationwide survey in China between 2015 and 2018. Polynomial regression and the modified LMS methods were employed to construct the smoothed weight, length and head circumference growth curves.
METHODS
We established the P3, P10, P25, P50, P75, P90, P97 reference curves of weight, length and head circumference that allowed for continuous use from 24 weeks of preterm birth to 50 weeks and developed a set of user-friendly growth monitoring charts. We estimated approximate ranges of weight gain per day and length and head circumference gains per week.
RESULTS
Our established growth monitoring curves, which can be used continuously without correcting gestational age from 24 weeks of preterm birth to 50 weeks, may be useful for assessment of postnatal growth trajectories, definition of intrauterine growth retardation at birth, and classification of early nutrition status for preterm infants.
CONCLUSIONS
[ "Aftercare", "Birth Weight", "China", "Female", "Gestational Age", "Humans", "Infant", "Infant, Newborn", "Infant, Premature", "Patient Discharge", "Premature Birth" ]
9182854
1. Introduction
Early postnatal growth and development of preterm infants have an important impact on their future health status and disease risk. Accurate identification for early postnatal growth and nutrition deviation, such as insufficient or excessive growth, intrauterine growth retardation (IUGR) at birth, undernutrition or overnutrition, is conducive to scientific feeding guidance and nutritional management for preterm infants. Since growth curves are essential tools to monitor and evaluate early postnatal growth and nutrition deviation of preterm infants, it is of great practical value to establish scientific and user-friendly growth curves for newborn clinical practice. In recent years, researchers have successively constructed percentile growth curves specially used for early postnatal growth monitoring for preterm infants [1,2]. Frequently-used charts, such as the Fenton 2013 or the Olsen 2015 preterm growth charts, did not include a Chinese cohort in their development [3]. Considering there are racial/ethnic differences in children’s growth and development worldwide [4,5], foreign growth curves may not fully reflect early postnatal growth and nutrition status of Chinese preterm infants [6,7]. In fact, China has also lacked a set of user-friendly growth curves for early postnatal growth monitoring and nutrition assessment for preterm infants. Therefore, it is necessary to develop postnatal growth monitoring curves of preterm infants in China. We aimed to establish a set of postnatal growth monitoring curves of preterm infants based on a nationwide growth and development survey in China between 2015 and 2018 to better help clinicians make in-hospital and post-discharge nutrition plan of these vulnerable infants.
null
null
3. Results
3.1. Established Postnatal Growth Monitoring Curves and their Comparison with Newborn Growth Curves and the Child Growth Curves Based on seven main percentile values of weight, length and head circumference by sex and week from the Newborn Growth Curves (data extracted from 24 weeks of gestation to 42 weeks) and the Child Growth Curves (data extracted from 37 weeks of full-term birth to 50 weeks), after a smoothing process of polynomial regression and the modified LMS methods, we fitted postnatal growth monitoring curves of weight, length and head circumference for boys and girls that covered the age range from 24 weeks of preterm birth to 50 weeks. Overall, the established postnatal growth monitoring curves can be fairly well linked to the Newborn Growth Curves and the Child Growth Curves (Figure 1A–C). The established postnatal growth curves were not very different from the Newborn Growth Curves at 24–36 weeks and the Child Growth Curves at 46–50 weeks, but higher than both the Newborn Growth Curves and the Child Growth Curves at 37–45 weeks.
3.2. Postnatal Growth Monitoring Curve Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks We obtained standardized smoothed P3, P10, P25, P50, P75, P90, P97 reference values of weight, length and head circumference for preterm boys and girls that allowed for continuous use from 24 weeks of preterm birth to 50 weeks. Data including percentile and Z-score reference values are available for research upon request. Based on the established postnatal growth monitoring curves, the approximate ranges of the "growth velocity" were estimated as follows: weight gain of (17–18) ± 2 g/kg/d at 24–36 weeks, (10–11) ± 2 g/kg/d at 37–42 weeks, (6–7) ± 1 g/kg/d at 43–50 weeks; length gain of (1.3–1.5) cm/w at 24–32 weeks, (1.1–1.3) cm/w at 33–36 weeks, (0.8–0.9) cm/w at 37–50 weeks; head circumference gain of (0.9–1.0) cm/w at 24–32 weeks, (0.6–0.7) cm/w at 33–36 weeks, 0.5 cm/w at 37–50 weeks. In order to facilitate actual clinical application, we drew a set of user-friendly postnatal growth monitoring charts for boys and girls which were composed of the frequently-used seven main percentile curves of weight, length and head circumference in the charts (Figure 2 and Figure 3).
3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were consistent with those of the Fenton 2013 curves (Figure 4A–C). The P3 and P10 curves of weight were somewhat higher than the corresponding percentile curves of the Fenton 2013 curves, which was most apparent at 27–34 weeks and at 40–50 weeks. The P3 and P10 curves of length were slightly higher than the corresponding percentile curves of the Fenton 2013 curves at 34–42 weeks. The growth curves of head circumference were slightly lower than the corresponding percentile curves of the Fenton 2013 curves at 37–45 weeks.
3.4. Comparison of Established Postnatal Growth Monitoring Curves with the INTERGROWTH Growth Curves Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were similar to those of the INTERGROWTH curves between 27 and 50 weeks (Figure 5A–C). The P3 and P10 curves of weight were higher, the P97 and P90 curves of length were higher, and the P3 and P10 curves of head circumference were lower than the corresponding percentile curves of the INTERGROWTH curves.
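As an illustration of how the percentile and Z-score reference values described above are obtained from smoothed L, M and S parameters, a minimal sketch of the modified LMS centile calculation (equation (1) in the methods) is given below. The L, M and S values are arbitrary placeholders rather than the published Chinese reference parameters, and the L = 0 limiting case and the Z-score inverse follow the standard LMS formulation rather than any study-specific code.

import math
from scipy.stats import norm

def lms_centile(L, M, S, centile):
    # Measurement value at a given centile: C = M * (1 + L*S*z)**(1/L)
    z = norm.ppf(centile)              # normal equivalent deviate, about 1.88 for P97
    if L == 0:                         # limiting (log-normal) case of the LMS formula
        return M * math.exp(S * z)
    return M * (1 + L * S * z) ** (1 / L)

def lms_zscore(L, M, S, x):
    # Z-score of an observed measurement x under the same L, M, S parameters
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Placeholder weight parameters (kg) for one sex at one postmenstrual week
L_, M_, S_ = 0.5, 2.9, 0.15
print(round(lms_centile(L_, M_, S_, 0.97), 2))   # approximate P97 weight
print(round(lms_zscore(L_, M_, S_, 2.5), 2))     # Z-score for a 2.5 kg infant

Any clinical application should rely on the published reference tables and charts rather than these placeholder parameters.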
5. Conclusions
Our established postnatal growth monitoring curves are applicable to preterm infants and do not require correction for gestational age in actual use. Early postnatal growth trajectories of preterm infants can be continuously depicted to 50 weeks on this set of the growth charts, and some common growth and nutrition deviations of preterm infants, such as insufficient or excessive growth, IUGR at birth, undernutrition or overnutrition, can be screened out in time with the help of this convenient, clinical, practical tool. In addition, our study has added more evidence to better help clinicians make in-hospital and post-discharge nutrition plans for vulnerable infants, especially for preterm infants with similar ethnic and genetic backgrounds in the Asian part of the world. Further studies should be conducted to evaluate the long-term outcomes (such as neurologic and cardio-metabolic morbidities) of vulnerable infants with IUGR at birth or early overweight/obesity identified by our established postnatal growth monitoring curves.
[ "2. Materials and Methods", "2.1. Study Design and Data Source", "2.2. Process of Establishing Postnatal Growth Monitoring Curves", "2.2.1. Selecting Seven Main Percentile Curves", "2.2.2. Selecting Age Target Points of Initial Percentile Curves", "2.2.3. Obtaining Preliminarily Smoothed Percentile Values", "2.2.4. Obtaining Smoothed L, M and S Parameters", "2.2.5. Obtaining Standardized Smoothed Percentile Values", "2.2.6. Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves", "2.3. Fenton 2013 Growth Monitoring Curves and INTERGROWTH Growth Curves", "2.4. Statistical Analysis", "3.1. Established Postnatal Growth Monitoring Curves and their Comparison with Newborn Growth Curves and the Child Growth Curves", "3.2. Postnatal Growth Monitoring Curve Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks", "3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves", "3.4. Comparison of Established Postnatal Growth Monitoring Curves with the INTERGROWTH Growth Curves" ]
[ "2.1. Study Design and Data Source We established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves based on existing percentile values of the growth curves was confirmed to be scientific and feasible by a series of studies by Fenton and colleagues [1,8].\nThe Newborn Growth Curves were constructed based on a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed based on a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years old in nine cities in China from June to November 2015 [10]. Both the sample population of the Newborn Growth Curves and the sample population of the Child Growth Curves came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted under the framework of the nine cities (i.e., Beijing, Harbin, Xi’an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming). In essence, these two samples belonged to a single reference population with high homogeneity. Considering that the actual number of early preterm babies was very small, the data collection time was appropriately extended and the four survey sites were added in Tianjin, Shenyang, Changsha and Shenzhen surrounding the nine cities of the NSPGDC when collecting preterm babies with <32 weeks of gestation [11]. In the NSPGDC, the inclusion and exclusion criteria of the study subjects can be seen in references [12,13].\nWe established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves based on existing percentile values of the growth curves was confirmed to be scientific and feasible by a series of studies by Fenton and colleagues [1,8].\nThe Newborn Growth Curves were constructed based on a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed based on a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years old in nine cities in China from June to November 2015 [10]. 
Both the sample population of the Newborn Growth Curves and the sample population of the Child Growth Curves came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted under the framework of the nine cities (i.e., Beijing, Harbin, Xi’an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming). In essence, these two samples belonged to a single reference population with high homogeneity. Considering that the actual number of early preterm babies was very small, the data collection time was appropriately extended and the four survey sites were added in Tianjin, Shenyang, Changsha and Shenzhen surrounding the nine cities of the NSPGDC when collecting preterm babies with <32 weeks of gestation [11]. In the NSPGDC, the inclusion and exclusion criteria of the study subjects can be seen in references [12,13].\n2.2. Process of Establishing Postnatal Growth Monitoring Curves 2.2.1. Selecting Seven Main Percentile Curves To ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\nTo ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\n2.2.2. Selecting Age Target Points of Initial Percentile Curves Since establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\nSince establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\n2.2.3. Obtaining Preliminarily Smoothed Percentile Values Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. 
Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.
2.2.4. Obtaining Smoothed L, M and S Parameters L, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].
2.2.5. Obtaining Standardized Smoothed Percentile Values Standardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:
C100α(t) = M(t)[1 + L(t)S(t)zα]^(1/L(t))    (1)
where C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example, when α = 0.97, corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.
2.2.6. Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves Standardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.
2.3. Fenton 2013 Growth Monitoring Curves and INTERGROWTH Growth Curves To better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].
The Fenton 2013 curves were generated based on newborn percentile values from European and American developed countries and children's percentile values from the WHO Child Growth Standards, including weight, length and head circumference growth curves for boys and girls [1]. Specifically, the weight curves came from newborn weight percentile values of the 22–40 weeks of gestation from six countries (i.e., Germany (1995–2000), United States (1998–2006), Italy (2005–2007), Australia (1991–1994), Scotland (1998–2003) and Canada (1994–1996)) and children's weight percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards.
The length and head circumference curves came from newborn length and head circumference percentile values of the 23–40 weeks of gestation from two countries (i.e., United States (1998–2006) and Italy (2005–2007)) and children’s length and head circumference percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The Fenton 2013 curves covered the age range of 24 weeks of preterm birth to 50 weeks.\nThe INTERGROWTH curves were generated based on a population-based longitudinal data of 201 preterm babies with 26–36 weeks of gestation from eight locations worldwide: Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and Parklands suburb, Nairobi, Kenya between 2009 and 2014 [2]. Weight, length and head circumference were measured within 12 h of birth and thereafter every 2 weeks in the first 2 months and every 4 weeks until postnatal age 8 months; a total of 1759 sets of measures were recorded. The INTERGROWTH curves included weight, length and head circumference growth curves for boys and girls with age range of 27 weeks of preterm birth to 64 weeks.\nTo better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].\nThe Fenton 2013 curves were generated based on newborn percentile values from European and American developed countries and children’s percentile values from the WHO Child Growth Standards, including weight, length and head circumference growth curves for boy and girls [1]. Specifically, the weight curves came from newborn weight percentile values of the 22–40 weeks of gestation from six countries (i.e., Germany (1995–2000), United States (1998–2006), Italy (2005–2007), Australia (1991–1994), Scotland (1998–2003) and Canada (1994–1996)) and children’s weight percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The length and head circumference curves came from newborn length and head circumference percentile values of the 23–40 weeks of gestation from two countries (i.e., United States (1998–2006) and Italy (2005–2007)) and children’s length and head circumference percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The Fenton 2013 curves covered the age range of 24 weeks of preterm birth to 50 weeks.\nThe INTERGROWTH curves were generated based on a population-based longitudinal data of 201 preterm babies with 26–36 weeks of gestation from eight locations worldwide: Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and Parklands suburb, Nairobi, Kenya between 2009 and 2014 [2]. Weight, length and head circumference were measured within 12 h of birth and thereafter every 2 weeks in the first 2 months and every 4 weeks until postnatal age 8 months; a total of 1759 sets of measures were recorded. The INTERGROWTH curves included weight, length and head circumference growth curves for boys and girls with age range of 27 weeks of preterm birth to 64 weeks.\n2.4. Statistical Analysis Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference for boys and girls between 36 weeks and 46 weeks to obtain preliminarily smoothed P3, P10, P25, P50, P75, P90, P97 values. 
2.3. Fenton 2013 Growth Monitoring Curves and INTERGROWTH Growth Curves
To better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].
The Fenton 2013 curves were generated from newborn percentile values from developed countries in Europe, North America and Australia and from children’s percentile values in the WHO Child Growth Standards, and include weight, length and head circumference growth curves for boys and girls [1]. Specifically, the weight curves came from newborn weight percentile values at 22–40 weeks of gestation from six countries (Germany, 1995–2000; United States, 1998–2006; Italy, 2005–2007; Australia, 1991–1994; Scotland, 1998–2003; Canada, 1994–1996) and from children’s weight percentile values between full-term 40 weeks and postnatal 10 weeks in the WHO Child Growth Standards. The length and head circumference curves came from newborn length and head circumference percentile values at 23–40 weeks of gestation from two countries (United States, 1998–2006; Italy, 2005–2007) and from the corresponding WHO Child Growth Standards values between full-term 40 weeks and postnatal 10 weeks. The Fenton 2013 curves cover the age range from 24 weeks of preterm birth to 50 weeks.
The INTERGROWTH curves were generated from population-based longitudinal data on 201 preterm babies born at 26–36 weeks of gestation between 2009 and 2014 at eight locations worldwide: Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and the Parklands suburb of Nairobi, Kenya [2]. Weight, length and head circumference were measured within 12 h of birth, then every 2 weeks during the first 2 months and every 4 weeks until a postnatal age of 8 months; a total of 1759 sets of measurements were recorded. The INTERGROWTH curves include weight, length and head circumference growth curves for boys and girls covering the age range from 27 weeks of preterm birth to 64 weeks.
2.4. Statistical Analysis
Polynomial regression was employed to fit the initial values of each of the seven main percentiles of weight, length and head circumference for boys and girls between 36 weeks and 46 weeks to obtain preliminarily smoothed P3, P10, P25, P50, P75, P90 and P97 values. A nonlinear equation of the modified LMS methods was then used to fit the seven preliminarily smoothed main percentile values between 24 weeks and 50 weeks to obtain the smoothed L, M and S parameters. Standardized smoothed percentile and Z-score values of weight, length and head circumference by sex and age were finally generated from the smoothed L, M and S parameters. R-square was used to assess goodness of fit of the growth curves. The P3, P10, P50, P90 and P97 values of the established postnatal growth monitoring curves were compared with the corresponding percentile curves of the Fenton 2013 curves [1] and the INTERGROWTH curves [2]. Statistical analysis was performed with SAS 9.4 (SAS Institute Inc., Cary, NC, USA).
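The Z-score generation and the R-square check described above can be sketched as follows; the Z-score formula is obtained by inverting equation (1), and every numerical value below is a placeholder rather than a published parameter.

```python
import numpy as np

def lms_zscore(x, L, M, S):
    """Z-score from the modified LMS form, obtained by inverting equation (1):
    z = ((x / M)**L - 1) / (L * S)."""
    return ((x / M) ** L - 1.0) / (L * S)

def r_square(observed, fitted):
    """Coefficient of determination used here as a goodness-of-fit summary."""
    ss_res = np.sum((observed - fitted) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical example: a 1.20 kg infant at an age where the smoothed parameters are
# L = 0.5, M = 1.35 kg, S = 0.13 (placeholders).
print(round(lms_zscore(1.20, L=0.5, M=1.35, S=0.13), 2))

# R-square of a smoothed percentile curve against the preliminary values (placeholders).
preliminary = np.linspace(0.70, 4.80, 27)
smoothed = preliminary + 0.002
print(round(r_square(preliminary, smoothed), 4))
```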
2.1. Study Design and Data Source
We established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves from existing percentile values of growth curves was shown to be scientific and feasible by a series of studies by Fenton and colleagues [1,8].
The Newborn Growth Curves were constructed from a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed from a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years of age in nine cities in China from June to November 2015 [10]. Both sample populations came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted within the framework of the nine cities (Beijing, Harbin, Xi’an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming); in essence, the two samples belonged to a single reference population with high homogeneity. Because the actual number of early preterm babies was very small, the data collection period was appropriately extended and four survey sites surrounding the nine NSPGDC cities (Tianjin, Shenyang, Changsha and Shenzhen) were added when recruiting preterm babies with <32 weeks of gestation [11]. The inclusion and exclusion criteria of the NSPGDC study subjects can be found in references [12,13].
2.2. Process of Establishing Postnatal Growth Monitoring Curves
2.2.1. Selecting Seven Main Percentile Curves
To ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed, in line with statistical accuracy and international practice [1,14].
2.2.2. Selecting Age Target Points of Initial Percentile Curves
Since establishing postnatal growth monitoring curves of preterm infants mainly involved a reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we followed the practice used for selecting age target points in the Fenton 2013 growth monitoring curves [1] and examined the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Six age target points of the initial percentile curves were selected in this study: 36, 38, 40, 42, 44 and 46 weeks.
2.2.3. Obtaining Preliminarily Smoothed Percentile Values
Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of the seven main percentiles of weight, length and head circumference were read off at the age target points between 36 weeks and 46 weeks by visual inspection so that the merged curves looked smooth, while the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was then employed to fit the initial values of each percentile by sex, and a series of smoothed predicted percentile values was reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed values of the seven main percentiles of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.
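The polynomial smoothing over the 36–46-week target points described in Sections 2.2.3 and 2.4 can be sketched as follows; the polynomial degree and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical initial P50 weight values (kg) read off the merged curves at the
# age target points 36-46 weeks (Sections 2.2.2-2.2.3); placeholder values only.
target_ages = np.array([36, 38, 40, 42, 44, 46])
initial_p50 = np.array([2.75, 3.10, 3.40, 3.75, 4.10, 4.40])

# Fit a low-order polynomial and re-predict smoothed values at every week from
# 36 to 46; the 24-35-week and 47-50-week values are left unchanged, as in the text.
coeffs = np.polyfit(target_ages, initial_p50, deg=2)
weeks = np.arange(36, 47)
smoothed_p50 = np.polyval(coeffs, weeks)
print(np.round(smoothed_p50, 2))
```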
3. Results
3.1. Established Postnatal Growth Monitoring Curves and their Comparison with Newborn Growth Curves and the Child Growth Curves
Based on the seven main percentile values of weight, length and head circumference by sex and week from the Newborn Growth Curves (data extracted from 24 weeks of gestation to 42 weeks) and the Child Growth Curves (data extracted from 37 weeks of full-term birth to 50 weeks), and after a smoothing process of polynomial regression and the modified LMS methods, we fitted postnatal growth monitoring curves of weight, length and head circumference for boys and girls covering the age range from 24 weeks of preterm birth to 50 weeks. Overall, the established postnatal growth monitoring curves link fairly well to the Newborn Growth Curves and the Child Growth Curves (Figure 1A–C). The established postnatal growth curves differed little from the Newborn Growth Curves at 24–36 weeks and from the Child Growth Curves at 46–50 weeks, but were higher than both at 37–45 weeks.
3.2. Postnatal Growth Monitoring Curve Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks
We obtained standardized smoothed P3, P10, P25, P50, P75, P90 and P97 reference values of weight, length and head circumference for preterm boys and girls that allow for continuous use from 24 weeks of preterm birth to 50 weeks. Data including percentile and Z-score reference values are available for research upon request.
Based on the established postnatal growth monitoring curves, the approximate ranges of growth velocity were estimated as follows: weight gain of (17–18) ± 2 g/kg/d at 24–36 weeks, (10–11) ± 2 g/kg/d at 37–42 weeks and (6–7) ± 1 g/kg/d at 43–50 weeks; length gain of 1.3–1.5 cm/week at 24–32 weeks, 1.1–1.3 cm/week at 33–36 weeks and 0.8–0.9 cm/week at 37–50 weeks; head circumference gain of 0.9–1.0 cm/week at 24–32 weeks, 0.6–0.7 cm/week at 33–36 weeks and 0.5 cm/week at 37–50 weeks.
To facilitate clinical application, we drew a set of user-friendly postnatal growth monitoring charts for boys and girls composed of the seven frequently-used main percentile curves of weight, length and head circumference (Figure 2 and Figure 3).
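The velocity ranges above are read off the fitted curves; the paper does not spell out the exact computation, so the following is only a rough finite-difference sketch with placeholder median curves, not the method actually used.

```python
import numpy as np

# Hypothetical P50 (median) curves by postmenstrual age in weeks; placeholder values only.
ages = np.arange(24, 51)
weight_kg = np.linspace(0.7, 4.8, ages.size)
length_cm = np.linspace(31.0, 57.0, ages.size)

# One simple approximation of weight velocity in g/kg/d: the weekly gain divided by
# 7 days and by the mid-interval weight.
weekly_gain_g = np.diff(weight_kg) * 1000.0
mid_weight_kg = (weight_kg[:-1] + weight_kg[1:]) / 2.0
weight_velocity = weekly_gain_g / 7.0 / mid_weight_kg     # g/kg/d for each week interval

# Length (or head circumference) velocity in cm/week is the first difference of the curve.
length_velocity = np.diff(length_cm)                      # cm/week

print(np.round(weight_velocity[:3], 1), np.round(length_velocity[:3], 2))
```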
3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves
Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were consistent with those of the Fenton 2013 curves (Figure 4A–C). The P3 and P10 curves of weight were somewhat higher than the corresponding percentile curves of the Fenton 2013 curves, most noticeably at 27–34 weeks and at 40–50 weeks. The P3 and P10 curves of length were slightly higher than the corresponding percentile curves of the Fenton 2013 curves at 34–42 weeks. The growth curves of head circumference were slightly lower than the corresponding percentile curves of the Fenton 2013 curves at 37–45 weeks.
3.4. Comparison of Established Postnatal Growth Monitoring Curves with the INTERGROWTH Growth Curves
Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were also similar to those of the INTERGROWTH curves between 27 and 50 weeks (Figure 5A–C). The P3 and P10 curves of weight were higher, the P97 and P90 curves of length were higher, and the P3 and P10 curves of head circumference were lower than the corresponding percentile curves of the INTERGROWTH curves.
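The comparisons in Figures 4 and 5 are visual overlays of corresponding percentile curves at matched postmenstrual ages; a minimal plotting sketch, with placeholder curves standing in for the published Fenton 2013 or INTERGROWTH values, is:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical P3/P50/P97 weight curves (kg) for the established charts and for a
# comparison chart (e.g., Fenton 2013); placeholder values, not the published data.
weeks = np.arange(27, 51)
established = {"P3": np.linspace(0.75, 3.6, weeks.size),
               "P50": np.linspace(0.95, 4.6, weeks.size),
               "P97": np.linspace(1.20, 5.6, weeks.size)}
comparison = {name: curve - 0.05 for name, curve in established.items()}  # arbitrary offset

fig, ax = plt.subplots()
for name, curve in established.items():
    ax.plot(weeks, curve, label=f"established {name}")
for name, curve in comparison.items():
    ax.plot(weeks, curve, linestyle="--", label=f"comparison {name}")
ax.set_xlabel("Postmenstrual age (weeks)")
ax.set_ylabel("Weight (kg)")
ax.legend(fontsize=8)
plt.show()
```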
[ "1. Introduction", "2. Materials and Methods", "2.1. Study Design and Data Source", "2.2. Process of Establishing Postnatal Growth Monitoring Curves", "2.2.1. Selecting Seven Main Percentile Curves", "2.2.2. Selecting Age Target Points of Initial Percentile Curves", "2.2.3. Obtaining Preliminarily Smoothed Percentile Values", "2.2.4. Obtaining Smoothed L, M and S Parameters", "2.2.5. Obtaining Standardized Smoothed Percentile Values", "2.2.6. Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves", "2.3. Fenton 2013 Growth Monitoring Curves and INTERGROWTH Growth Curves", "2.4. Statistical Analysis", "3. Results", "3.1. Established Postnatal Growth Monitoring Curves and their Comparison with Newborn Growth Curves and the Child Growth Curves", "3.2. Postnatal Growth Monitoring Curve Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks", "3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves", "3.4. Comparison of Established Postnatal Growth Monitoring Curves with the INTERGROWTH Growth Curves", "4. Discussion", "5. Conclusions" ]
[ "Early postnatal growth and development of preterm infants have an important impact on their future health status and disease risk. Accurate identification for early postnatal growth and nutrition deviation, such as insufficient or excessive growth, intrauterine growth retardation (IUGR) at birth, undernutrition or overnutrition, is conducive to scientific feeding guidance and nutritional management for preterm infants. Since growth curves are essential tools to monitor and evaluate early postnatal growth and nutrition deviation of preterm infants, it is of great practical value to establish scientific and user-friendly growth curves for newborn clinical practice.\nIn recent years, researchers have successively constructed percentile growth curves specially used for early postnatal growth monitoring for preterm infants [1,2]. Frequently-used charts, such as the Fenton 2013 or the Olsen 2015 preterm growth charts, did not include a Chinese cohort in their development [3]. Considering there are racial/ethnic differences in children’s growth and development worldwide [4,5], foreign growth curves may not fully reflect early postnatal growth and nutrition status of Chinese preterm infants [6,7]. In fact, China has also lacked a set of user-friendly growth curves for early postnatal growth monitoring and nutrition assessment for preterm infants. Therefore, it is necessary to develop postnatal growth monitoring curves of preterm infants in China. \nWe aimed to establish a set of postnatal growth monitoring curves of preterm infants based on a nationwide growth and development survey in China between 2015 and 2018 to better help clinicians make in-hospital and post-discharge nutrition plan of these vulnerable infants.", "2.1. Study Design and Data Source We established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves based on existing percentile values of the growth curves was confirmed to be scientific and feasible by a series of studies by Fenton and colleagues [1,8].\nThe Newborn Growth Curves were constructed based on a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed based on a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years old in nine cities in China from June to November 2015 [10]. Both the sample population of the Newborn Growth Curves and the sample population of the Child Growth Curves came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted under the framework of the nine cities (i.e., Beijing, Harbin, Xi’an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming). In essence, these two samples belonged to a single reference population with high homogeneity. 
Considering that the actual number of early preterm babies was very small, the data collection time was appropriately extended and the four survey sites were added in Tianjin, Shenyang, Changsha and Shenzhen surrounding the nine cities of the NSPGDC when collecting preterm babies with <32 weeks of gestation [11]. In the NSPGDC, the inclusion and exclusion criteria of the study subjects can be seen in references [12,13].\nWe established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves based on existing percentile values of the growth curves was confirmed to be scientific and feasible by a series of studies by Fenton and colleagues [1,8].\nThe Newborn Growth Curves were constructed based on a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed based on a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years old in nine cities in China from June to November 2015 [10]. Both the sample population of the Newborn Growth Curves and the sample population of the Child Growth Curves came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted under the framework of the nine cities (i.e., Beijing, Harbin, Xi’an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming). In essence, these two samples belonged to a single reference population with high homogeneity. Considering that the actual number of early preterm babies was very small, the data collection time was appropriately extended and the four survey sites were added in Tianjin, Shenyang, Changsha and Shenzhen surrounding the nine cities of the NSPGDC when collecting preterm babies with <32 weeks of gestation [11]. In the NSPGDC, the inclusion and exclusion criteria of the study subjects can be seen in references [12,13].\n2.2. Process of Establishing Postnatal Growth Monitoring Curves 2.2.1. Selecting Seven Main Percentile Curves To ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\nTo ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\n2.2.2. 
Selecting Age Target Points of Initial Percentile Curves Since establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\nSince establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\n2.2.3. Obtaining Preliminarily Smoothed Percentile Values Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.\nBased on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.\n2.2.4. Obtaining Smoothed L, M and S Parameters L, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].\nL, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].\n2.2.5. 
Obtaining Standardized Smoothed Percentile Values Standardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.\nStandardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.\n2.2.6. Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves Standardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.\nStandardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.\n2.2.1. Selecting Seven Main Percentile Curves To ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\nTo ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\n2.2.2. Selecting Age Target Points of Initial Percentile Curves Since establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\nSince establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\n2.2.3. 
Obtaining Preliminarily Smoothed Percentile Values Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.\nBased on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.\n2.2.4. Obtaining Smoothed L, M and S Parameters L, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].\nL, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].\n2.2.5. Obtaining Standardized Smoothed Percentile Values Standardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.\nStandardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.\n2.2.6. 
Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves Standardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.\nStandardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.\n2.3. Fenton 2013 Growth Monitoring Curves and INTERGROWTH Growth Curves To better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].\nThe Fenton 2013 curves were generated based on newborn percentile values from European and American developed countries and children’s percentile values from the WHO Child Growth Standards, including weight, length and head circumference growth curves for boy and girls [1]. Specifically, the weight curves came from newborn weight percentile values of the 22–40 weeks of gestation from six countries (i.e., Germany (1995–2000), United States (1998–2006), Italy (2005–2007), Australia (1991–1994), Scotland (1998–2003) and Canada (1994–1996)) and children’s weight percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The length and head circumference curves came from newborn length and head circumference percentile values of the 23–40 weeks of gestation from two countries (i.e., United States (1998–2006) and Italy (2005–2007)) and children’s length and head circumference percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The Fenton 2013 curves covered the age range of 24 weeks of preterm birth to 50 weeks.\nThe INTERGROWTH curves were generated based on a population-based longitudinal data of 201 preterm babies with 26–36 weeks of gestation from eight locations worldwide: Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and Parklands suburb, Nairobi, Kenya between 2009 and 2014 [2]. Weight, length and head circumference were measured within 12 h of birth and thereafter every 2 weeks in the first 2 months and every 4 weeks until postnatal age 8 months; a total of 1759 sets of measures were recorded. The INTERGROWTH curves included weight, length and head circumference growth curves for boys and girls with age range of 27 weeks of preterm birth to 64 weeks.\nTo better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].\nThe Fenton 2013 curves were generated based on newborn percentile values from European and American developed countries and children’s percentile values from the WHO Child Growth Standards, including weight, length and head circumference growth curves for boy and girls [1]. 
Specifically, the weight curves came from newborn weight percentile values of the 22–40 weeks of gestation from six countries (i.e., Germany (1995–2000), United States (1998–2006), Italy (2005–2007), Australia (1991–1994), Scotland (1998–2003) and Canada (1994–1996)) and children’s weight percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The length and head circumference curves came from newborn length and head circumference percentile values of the 23–40 weeks of gestation from two countries (i.e., United States (1998–2006) and Italy (2005–2007)) and children’s length and head circumference percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The Fenton 2013 curves covered the age range of 24 weeks of preterm birth to 50 weeks.\nThe INTERGROWTH curves were generated based on a population-based longitudinal data of 201 preterm babies with 26–36 weeks of gestation from eight locations worldwide: Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and Parklands suburb, Nairobi, Kenya between 2009 and 2014 [2]. Weight, length and head circumference were measured within 12 h of birth and thereafter every 2 weeks in the first 2 months and every 4 weeks until postnatal age 8 months; a total of 1759 sets of measures were recorded. The INTERGROWTH curves included weight, length and head circumference growth curves for boys and girls with age range of 27 weeks of preterm birth to 64 weeks.\n2.4. Statistical Analysis Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference for boys and girls between 36 weeks and 46 weeks to obtain preliminarily smoothed P3, P10, P25, P50, P75, P90, P97 values. A nonlinear equation of the modified LMS methods was used to fit seven preliminarily smoothed main percentile values of weight, length and head circumference for boys and girls between 24 weeks and 50 weeks to obtain the smoothed L, M and S parameters. Standardized smoothed percentile and Z-score values of weight, length and head circumference by sex and age were finally generated based on the smoothed L, M and S parameters. R-square was used to assess goodness of fit of the growth curves. The P3, P10, P50, P90, P97 values of the established postnatal growth monitoring curves were compared with the corresponding percentile curves of the Fenton 2013 curves [1] and the INTERGROWTH curves [2]. Statistical analysis was performed by SAS 9.4 (SAS Institute Inc., Cary, NC, USA).\nPolynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference for boys and girls between 36 weeks and 46 weeks to obtain preliminarily smoothed P3, P10, P25, P50, P75, P90, P97 values. A nonlinear equation of the modified LMS methods was used to fit seven preliminarily smoothed main percentile values of weight, length and head circumference for boys and girls between 24 weeks and 50 weeks to obtain the smoothed L, M and S parameters. Standardized smoothed percentile and Z-score values of weight, length and head circumference by sex and age were finally generated based on the smoothed L, M and S parameters. R-square was used to assess goodness of fit of the growth curves. 
The P3, P10, P50, P90, P97 values of the established postnatal growth monitoring curves were compared with the corresponding percentile curves of the Fenton 2013 curves [1] and the INTERGROWTH curves [2]. Statistical analysis was performed by SAS 9.4 (SAS Institute Inc., Cary, NC, USA).", "We established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute to percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves based on existing percentile values of the growth curves was confirmed to be scientific and feasible by a series of studies by Fenton and colleagues [1,8].\nThe Newborn Growth Curves were constructed based on a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed based on a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years old in nine cities in China from June to November 2015 [10]. Both the sample population of the Newborn Growth Curves and the sample population of the Child Growth Curves came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted under the framework of the nine cities (i.e., Beijing, Harbin, Xi’an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming). In essence, these two samples belonged to a single reference population with high homogeneity. Considering that the actual number of early preterm babies was very small, the data collection time was appropriately extended and the four survey sites were added in Tianjin, Shenyang, Changsha and Shenzhen surrounding the nine cities of the NSPGDC when collecting preterm babies with <32 weeks of gestation [11]. In the NSPGDC, the inclusion and exclusion criteria of the study subjects can be seen in references [12,13].", "2.2.1. Selecting Seven Main Percentile Curves To ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\nTo ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].\n2.2.2. Selecting Age Target Points of Initial Percentile Curves Since establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. 
Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\nSince establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.\n2.2.3. Obtaining Preliminarily Smoothed Percentile Values Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.\nBased on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.\n2.2.4. Obtaining Smoothed L, M and S Parameters L, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].\nL, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].\n2.2.5. 
Obtaining Standardized Smoothed Percentile Values Standardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.\nStandardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.\n2.2.6. Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves Standardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.\nStandardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.", "To ensure goodness of fit of the growth curves, seven frequently-used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed according to statistical accuracy and international practice [1,14].", "Since establishing postnatal growth monitoring curves of preterm infants mainly involved reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the real practice of selecting age target points in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of initial percentile curves were selected in this study, including 36, 38, 40, 42, 44 and 46 weeks.", "Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method to make the merged curves look smooth, and the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference by sex, then a series of smoothed predicted percentile values were reobtained between 36 weeks and 46 weeks. 
Finally, we obtained preliminarily smoothed seven main percentile values of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.", "L, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].", "Standardized smoothed percentile values of weight, length and head circumference by sex and age were calculated based on the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:C100α(t) = M(t)[1 + L(t)S(t)zα]1/L(t)(1)\nwhere C100α(t) is the centile curve plotted against age t, zα is the normal equivalent deviate for the centile (for example when α = 0.97 corresponding to P97, zα = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.", "Standardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.", "To better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].\nThe Fenton 2013 curves were generated based on newborn percentile values from European and American developed countries and children’s percentile values from the WHO Child Growth Standards, including weight, length and head circumference growth curves for boy and girls [1]. Specifically, the weight curves came from newborn weight percentile values of the 22–40 weeks of gestation from six countries (i.e., Germany (1995–2000), United States (1998–2006), Italy (2005–2007), Australia (1991–1994), Scotland (1998–2003) and Canada (1994–1996)) and children’s weight percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The length and head circumference curves came from newborn length and head circumference percentile values of the 23–40 weeks of gestation from two countries (i.e., United States (1998–2006) and Italy (2005–2007)) and children’s length and head circumference percentile values of full-term 40 weeks to postnatal 10 weeks from the WHO Child Growth Standards. The Fenton 2013 curves covered the age range of 24 weeks of preterm birth to 50 weeks.\nThe INTERGROWTH curves were generated based on a population-based longitudinal data of 201 preterm babies with 26–36 weeks of gestation from eight locations worldwide: Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and Parklands suburb, Nairobi, Kenya between 2009 and 2014 [2]. Weight, length and head circumference were measured within 12 h of birth and thereafter every 2 weeks in the first 2 months and every 4 weeks until postnatal age 8 months; a total of 1759 sets of measures were recorded. 
The INTERGROWTH curves included weight, length and head circumference growth curves for boys and girls with age range of 27 weeks of preterm birth to 64 weeks.", "Polynomial regression was employed to fit the initial values of each of seven main percentiles of weight, length and head circumference for boys and girls between 36 weeks and 46 weeks to obtain preliminarily smoothed P3, P10, P25, P50, P75, P90, P97 values. A nonlinear equation of the modified LMS methods was used to fit seven preliminarily smoothed main percentile values of weight, length and head circumference for boys and girls between 24 weeks and 50 weeks to obtain the smoothed L, M and S parameters. Standardized smoothed percentile and Z-score values of weight, length and head circumference by sex and age were finally generated based on the smoothed L, M and S parameters. R-square was used to assess goodness of fit of the growth curves. The P3, P10, P50, P90, P97 values of the established postnatal growth monitoring curves were compared with the corresponding percentile curves of the Fenton 2013 curves [1] and the INTERGROWTH curves [2]. Statistical analysis was performed by SAS 9.4 (SAS Institute Inc., Cary, NC, USA).", "3.1. Established Postnatal Growth Monitoring Curves and their Comparison with Newborn Growth Curves and the Child Growth Curves Based on seven main percentile values of weight, length and head circumference by sex and week from the Newborn Growth Curves (extract data from 24 weeks of gestation to 42 weeks) and the Child Growth Curves (extract data from 37 weeks of full-term birth to 50 weeks), after a smoothing process of polynomial regression and the modified LMS methods, we fitted postnatal growth monitoring curves of weight, length and head circumference for boys and girls that covered the age range from 24 weeks of preterm birth to 50 weeks. Overall, the established postnatal growth monitoring curves can be fairly well linked to the Newborn Growth Curves and the Child Growth Curves (Figure 1A–C). The established postnatal growth curves were not very different from the Newborn Growth Curves at 24–36 weeks and the Child Growth Curves at 46–50 weeks, but higher than both the Newborn Growth Curves and the Child Growth Curves at 37–45 weeks.\nBased on seven main percentile values of weight, length and head circumference by sex and week from the Newborn Growth Curves (extract data from 24 weeks of gestation to 42 weeks) and the Child Growth Curves (extract data from 37 weeks of full-term birth to 50 weeks), after a smoothing process of polynomial regression and the modified LMS methods, we fitted postnatal growth monitoring curves of weight, length and head circumference for boys and girls that covered the age range from 24 weeks of preterm birth to 50 weeks. Overall, the established postnatal growth monitoring curves can be fairly well linked to the Newborn Growth Curves and the Child Growth Curves (Figure 1A–C). The established postnatal growth curves were not very different from the Newborn Growth Curves at 24–36 weeks and the Child Growth Curves at 46–50 weeks, but higher than both the Newborn Growth Curves and the Child Growth Curves at 37–45 weeks.\n3.2. Postnatal Growth Monitoring Curve Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks We obtained standardized smoothed P3, P10, P25, P50, P75, P90, P97 reference values of weight, length and head circumference for preterm boys and girls that allowed for continuous use from 24 weeks of preterm birth to 50 weeks. 
Data including percentile and Z-score reference values are available for research upon request.

Based on the established postnatal growth monitoring curves, the approximate ranges of the "growth velocity" were estimated as follows: weight gain of (17–18) ± 2 g/kg/d at 24–36 weeks, (10–11) ± 2 g/kg/d at 37–42 weeks and (6–7) ± 1 g/kg/d at 43–50 weeks; length gain of 1.3–1.5 cm/week at 24–32 weeks, 1.1–1.3 cm/week at 33–36 weeks and 0.8–0.9 cm/week at 37–50 weeks; and head circumference gain of 0.9–1.0 cm/week at 24–32 weeks, 0.6–0.7 cm/week at 33–36 weeks and 0.5 cm/week at 37–50 weeks.

In order to facilitate clinical application, we drew a set of user-friendly postnatal growth monitoring charts for boys and girls composed of the frequently used seven main percentile curves of weight, length and head circumference (Figure 2 and Figure 3).

3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves

Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were consistent with those of the Fenton 2013 curves (Figure 4A–C). The P3 and P10 curves of weight were somewhat higher than the corresponding percentile curves of the Fenton 2013 curves, most obviously at 27–34 weeks and at 40–50 weeks. The P3 and P10 curves of length were slightly higher than the corresponding percentile curves of the Fenton 2013 curves at 34–42 weeks. The growth curves of head circumference were slightly lower than the corresponding percentile curves of the Fenton 2013 curves at 37–45 weeks.
3.4. Comparison of Established Postnatal Growth Monitoring Curves with the INTERGROWTH Growth Curves

Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were similar to those of the INTERGROWTH curves between 27 and 50 weeks (Figure 5A–C). The P3 and P10 curves of weight were higher, the P97 and P90 curves of length were higher, and the P3 and P10 curves of head circumference were lower than the corresponding percentile curves of the INTERGROWTH curves.
Regarding early postnatal growth monitoring for preterm infants, traditionally, newborn growth curves are used first, and child growth curves are then used once preterm infants reach full-term age through correction of gestational age. However, this practice splits the growth monitoring trajectories of preterm infants across different growth curves, which makes it harder for pediatricians and child health physicians to quickly and accurately identify early growth and nutrition deviation. In this study, we established a new set of postnatal growth monitoring curves for preterm infants by deeply integrating newborn growth curves and child growth curves. Using the established postnatal growth curves is essentially the same as the traditional evaluation process, but it simplifies that process, as it enables early postnatal growth monitoring trajectories of preterm infants to be plotted continuously on a single chart, without correcting gestational age, from 24 weeks of preterm birth to 50 weeks. With the help of this simple and convenient practical tool, clinicians can find early growth and nutrition deviation of preterm infants (such as insufficient or excessive growth, IUGR at birth, and undernutrition or overnutrition) in a timely and accurate manner, so as to better guide early scientific feeding and nutrition management for these vulnerable infants.

Our established curves were consistent with the Fenton 2013 curves in the smoothing methods and the process of developing the growth curves. The curve shapes and trajectories of our established curves were similar to those of the Fenton 2013 curves, which were generated based on a systematic review and meta-analysis. The main differences between these two sets of curves were reflected in the low percentile curves (such as P3 and P10) of weight at 27–34 weeks and at 40–50 weeks. The main reason was that the Fenton 2013 curves were linked to the WHO Child Growth Standards, which include 2.3% of "unhealthy" full-term infants with low birth weight (<2.5 kg) [17]; this may make the Fenton 2013 weight curves slightly lighter overall than our established weight curves. A recent study in China showed that postnatal weight gain of late preterm infants was higher than the Fenton 2013 curves [6], suggesting that our established curves may be more appropriate for monitoring and evaluating early postnatal growth and nutrition deviation of Chinese preterm infants. Future research needs to directly evaluate the applicability of our established curves in the Chinese population.

The curve shapes and trajectories of our established curves were similar to those of the INTERGROWTH curves, which were constructed based on longitudinal data of preterm infants [2]. The main differences between these two sets of curves were reflected in the weight, length and head circumference curves at 27–30 weeks and in the low percentile curves (such as P3 and P10) of weight at 37–50 weeks.
The main reasons may be attributed to differences in study design and data type (cross-sectional vs. longitudinal data) and to the very small sample sizes underlying the INTERGROWTH curves at 27–32 weeks. Some recent studies showed that the Fenton 2013 curves and the INTERGROWTH curves classify extrauterine growth restriction differently depending on the evaluated population [18,19,20], which indirectly suggests that early postnatal growth and nutrition deviation of preterm infants may be more appropriately assessed with growth curves generated from a population of the same racial/ethnic background and living environment. A recent comparison of the growth trajectories of Chinese preterm infants with the INTERGROWTH curves pointed out that, although the international growth curves are recommended for global use, there are still limitations in practical application due to differences in racial/ethnic groups, economic development and social culture across the world, and that nationally representative growth curves of preterm infants in China should therefore be established as soon as possible [7].

Postnatal growth of preterm infants reflects the growth and development of newborns in the extrauterine environment after leaving their mothers. In contrast, the growth level of newborns at birth reflects the growth and development of the fetus in utero, and newborn growth curves based on data at birth show a rapid increase over the preterm period and a slowing increase over the full-term period. With the acceleration of catch-up growth of preterm infants after birth, the postnatal growth level of preterm infants at 37–42 weeks may be slightly higher than that of full-term infants at birth [2,21,22,23]. As expected, our established curves were higher than the Newborn Growth Curves in the full-term period. In practical applications, our established curves may be useful for assessing postnatal growth trajectories, defining IUGR at birth, and classifying early nutrition status of preterm infants, but they are not appropriate for classifying small for gestational age or large for gestational age at the birth of full-term infants.

Our study has several strengths. First, it was based on a large-scale, nationally representative sample of newborns and children in modern China. Second, our established curves are the first set of postnatal growth monitoring curves of preterm infants in China that allow for continuous use from early preterm birth to roughly 3 months after birth. Third, we estimated a relatively reasonable approximate range of weight gain per day and of length and head circumference gains per week from 24 weeks of preterm birth to 50 weeks. However, our study also has a limitation. Our established curves were generated from cross-sectional data and advanced statistical methods, which achieved a good connection between newborn growth curves and child growth curves. From their curve shapes and trajectories, the established curves seem realistic, but they need to be further verified using follow-up data of preterm infants and in real clinical practice.

Our established postnatal growth monitoring curves are applicable to preterm infants and do not require correction for gestational age in actual use.
Early postnatal growth trajectories of preterm infants can be continuously depicted up to 50 weeks on this set of growth charts, and some common growth and nutrition deviations of preterm infants, such as insufficient or excessive growth, IUGR at birth, and undernutrition or overnutrition, can be screened out in time with the help of this convenient and practical clinical tool. In addition, our study adds evidence to help clinicians make in-hospital and post-discharge nutrition plans for vulnerable infants, especially preterm infants with similar ethnic and genetic backgrounds in the Asian part of the world. Further studies should be conducted to evaluate the long-term outcomes (such as neurologic and cardio-metabolic morbidities) of vulnerable infants with IUGR at birth or early overweight/obesity identified by our established postnatal growth monitoring curves.
[ "intro", null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "preterm infant", "premature", "nutrition", "weight", "length", "head circumference", "growth curve", "growth reference" ]
1. Introduction

Early postnatal growth and development of preterm infants have an important impact on their future health status and disease risk. Accurate identification of early postnatal growth and nutrition deviation, such as insufficient or excessive growth, intrauterine growth retardation (IUGR) at birth, and undernutrition or overnutrition, is conducive to scientific feeding guidance and nutritional management for preterm infants. Since growth curves are essential tools to monitor and evaluate early postnatal growth and nutrition deviation of preterm infants, it is of great practical value to establish scientific and user-friendly growth curves for newborn clinical practice. In recent years, researchers have successively constructed percentile growth curves specially designed for early postnatal growth monitoring of preterm infants [1,2]. Frequently used charts, such as the Fenton 2013 or the Olsen 2015 preterm growth charts, did not include a Chinese cohort in their development [3]. Considering that there are racial/ethnic differences in children's growth and development worldwide [4,5], foreign growth curves may not fully reflect the early postnatal growth and nutrition status of Chinese preterm infants [6,7]. In fact, China has also lacked a set of user-friendly growth curves for early postnatal growth monitoring and nutrition assessment of preterm infants. Therefore, it is necessary to develop postnatal growth monitoring curves of preterm infants in China. We aimed to establish a set of postnatal growth monitoring curves of preterm infants based on a nationwide growth and development survey in China between 2015 and 2018 to better help clinicians make in-hospital and post-discharge nutrition plans for these vulnerable infants.

2. Materials and Methods

2.1. Study Design and Data Source

We established postnatal growth monitoring curves of preterm infants allowing for continuous use from 24 weeks of preterm birth to 50 weeks based on two sets of growth reference values. First, the Newborn Growth Curves contribute percentile reference values of weight, length and head circumference by sex and gestational age from 24 weeks to 42 weeks. Second, the Child Growth Curves contribute percentile reference values of weight, length and head circumference by sex and age from 37 weeks of full-term birth to 50 weeks (equivalent to postnatal 10 weeks of full-term birth). Establishing postnatal growth monitoring curves based on existing percentile values of growth curves was confirmed to be scientific and feasible by a series of studies by Fenton and colleagues [1,8]. The Newborn Growth Curves were constructed based on a large-scale cross-sectional sample of 24,375 newborn babies with a gestational age of 24–42 weeks in thirteen cities in China from June 2015 to November 2018 [9]. The Child Growth Curves were constructed based on a large-scale cross-sectional sample of 83,628 children from full-term birth to seven years old in nine cities in China from June to November 2015 [10]. Both sample populations came from the fifth National Survey on the Physical Growth and Development of Children in China (NSPGDC), which was conducted under the framework of the nine cities (i.e., Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Guangzhou, Fuzhou and Kunming). In essence, these two samples belonged to a single reference population with high homogeneity.
Considering that the actual number of early preterm babies was very small, the data collection period was appropriately extended and four survey sites (Tianjin, Shenyang, Changsha and Shenzhen, surrounding the nine NSPGDC cities) were added when collecting preterm babies with <32 weeks of gestation [11]. The inclusion and exclusion criteria of the NSPGDC study subjects can be found in references [12,13].

2.2. Process of Establishing Postnatal Growth Monitoring Curves

2.2.1. Selecting Seven Main Percentile Curves

To ensure goodness of fit of the growth curves, seven frequently used main percentile curves (P3, P10, P25, P50, P75, P90, P97) were selected as the initial curves to be smoothed, in line with statistical accuracy and international practice [1,14].
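As a quick reference, each of these seven percentiles corresponds to a fixed normal equivalent deviate. The short sketch below computes them; the paper itself quotes only the value for P97 (z = 1.88), and the remaining values simply follow from the standard normal distribution in the same way.

```python
# Minimal sketch (not from the paper): normal equivalent deviates (z values)
# for the seven main percentiles used throughout the smoothing process.
from scipy.stats import norm

MAIN_PERCENTILES = [3, 10, 25, 50, 75, 90, 97]
z_for_percentile = {p: round(float(norm.ppf(p / 100.0)), 3) for p in MAIN_PERCENTILES}
# {3: -1.881, 10: -1.282, 25: -0.674, 50: 0.0, 75: 0.674, 90: 1.282, 97: 1.881}
```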
2.2.2. Selecting Age Target Points of Initial Percentile Curves

Since establishing postnatal growth monitoring curves of preterm infants mainly involved a reasonable merge between the full-term parts of the Newborn Growth Curves and the Child Growth Curves, we referred to the practice of selecting age target points used in establishing the Fenton 2013 growth monitoring curves [1] and observed the specific curve shapes of the Newborn Growth Curves and the Child Growth Curves. Several age target points of the initial percentile curves were selected in this study: 36, 38, 40, 42, 44 and 46 weeks.

2.2.3. Obtaining Preliminarily Smoothed Percentile Values

Based on the curve shapes of the Newborn Growth Curves and the Child Growth Curves, the initial values of each of the seven main percentiles of weight, length and head circumference were obtained at the age target points between 36 weeks and 46 weeks by an observation method so that the merged curves looked smooth, while the percentile values at 24–35 weeks and at 47–50 weeks remained unchanged. Polynomial regression was then employed to fit the initial values of each of the seven main percentiles of weight, length and head circumference by sex, and a series of smoothed predicted percentile values was reobtained between 36 weeks and 46 weeks (see the sketch below). Finally, we obtained preliminarily smoothed values of the seven main percentiles of weight, length and head circumference for boys and girls at each week between 24 weeks and 50 weeks.
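As a rough illustration of this smoothing step, the sketch below fits a polynomial to values of one percentile at the six age target points and re-predicts weekly values for 36–46 weeks. The polynomial degree and the numbers are assumptions for illustration only; the paper does not report them.

```python
import numpy as np

def smooth_percentile_36_to_46(target_ages, target_values, degree=3):
    """Polynomial-regression smoothing of one percentile over the merge window.
    degree=3 is an illustrative assumption; the paper does not state the degree."""
    coeffs = np.polyfit(target_ages, target_values, deg=degree)
    weekly_ages = np.arange(36, 47)               # 36, 37, ..., 46 weeks
    return weekly_ages, np.polyval(coeffs, weekly_ages)

# Hypothetical P50 weight values (kg) read off at the six age target points.
target_ages = np.array([36, 38, 40, 42, 44, 46])
p50_weight = np.array([2.70, 3.15, 3.45, 3.80, 4.20, 4.55])
weeks, smoothed_p50 = smooth_percentile_36_to_46(target_ages, p50_weight)
```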
2.2.4. Obtaining Smoothed L, M and S Parameters

L, M and S parameters of weight, length and head circumference by sex and age were fitted based on the above preliminarily smoothed seven main percentile values using the nonlinear equation from the modified LMS methods [15,16].

2.2.5. Obtaining Standardized Smoothed Percentile Values

Standardized smoothed percentile values of weight, length and head circumference by sex and age were calculated from the above smoothed L, M and S parameters using the following nonlinear equation from the modified LMS methods:

$C_{100\alpha}(t) = M(t)\left[1 + L(t)S(t)z_{\alpha}\right]^{1/L(t)}$  (1)

where $C_{100\alpha}(t)$ is the centile curve plotted against age t, $z_{\alpha}$ is the normal equivalent deviate for the centile (for example, when α = 0.97, corresponding to P97, $z_{\alpha}$ = 1.88), and L(t), M(t) and S(t) are the fitted smoothed curves plotted against age.

2.2.6. Assessing the Fitted Performance of the Standardized Smoothed Percentile Curves

The standardized smoothed seven main percentile curves of weight, length and head circumference were compared with the preliminarily smoothed seven main percentile curves between 24 weeks and 50 weeks. A small difference (i.e., weight < 0.01 kg, length < 0.1 cm, head circumference < 0.1 cm) by sex and age was regarded as a good fit.
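The sketch below shows, under stated assumptions, how equation (1) can be used in both directions: fitting L, M and S at a single sex/age point to the seven preliminarily smoothed percentile values, and then generating centiles and Z-scores from the fitted parameters. The least-squares fit and the example numbers are illustrative only; they are not the authors' actual fitting routine or data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

MAIN_PERCENTILES = np.array([3, 10, 25, 50, 75, 90, 97])
Z_ALPHA = norm.ppf(MAIN_PERCENTILES / 100.0)      # normal equivalent deviates

def lms_centile(z, L, M, S):
    """Equation (1): centile value for deviate z, given L, M, S (assumes L != 0)."""
    return M * (1.0 + L * S * z) ** (1.0 / L)

def fit_lms_at_one_age(percentile_values):
    """Fit L, M, S at one sex/age point to the seven smoothed percentile values
    by nonlinear least squares (an illustrative stand-in for the modified LMS fit)."""
    p50 = percentile_values[3]
    (L, M, S), _ = curve_fit(lms_centile, Z_ALPHA, percentile_values, p0=[1.0, p50, 0.1])
    return L, M, S

def lms_z_score(x, L, M, S):
    """Standard LMS inverse: Z-score of an individual measurement x."""
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical smoothed weight percentiles (kg) at one week of age:
values = np.array([2.45, 2.70, 2.95, 3.25, 3.55, 3.85, 4.10])
L, M, S = fit_lms_at_one_age(values)
p97 = lms_centile(1.88, L, M, S)                  # reproduces the P97 value
z = lms_z_score(3.0, L, M, S)                     # Z-score of a 3.0 kg infant
```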
2.3. Fenton 2013 Growth Monitoring Curves and INTERGROWTH Growth Curves

To better understand and apply postnatal growth curves of preterm infants, we compared the established postnatal growth monitoring curves with the Fenton 2013 growth monitoring curves [1] and the INTERGROWTH growth curves [2].

The Fenton 2013 curves were generated based on newborn percentile values from European and American developed countries and children's percentile values from the WHO Child Growth Standards, and include weight, length and head circumference growth curves for boys and girls [1]. Specifically, the weight curves came from newborn weight percentile values at 22–40 weeks of gestation from six countries (Germany (1995–2000), United States (1998–2006), Italy (2005–2007), Australia (1991–1994), Scotland (1998–2003) and Canada (1994–1996)) and from children's weight percentile values from full-term 40 weeks to postnatal 10 weeks in the WHO Child Growth Standards. The length and head circumference curves came from newborn length and head circumference percentile values at 23–40 weeks of gestation from two countries (United States (1998–2006) and Italy (2005–2007)) and from the corresponding children's percentile values from full-term 40 weeks to postnatal 10 weeks in the WHO Child Growth Standards. The Fenton 2013 curves covered the age range of 24 weeks of preterm birth to 50 weeks.

The INTERGROWTH curves were generated from population-based longitudinal data on 201 preterm babies born at 26–36 weeks of gestation at eight locations worldwide (Pelotas, Brazil; Turin, Italy; Muscat, Oman; Oxford, UK; Seattle, WA, USA; Shunyi County, Beijing, China; central Nagpur, India; and the Parklands suburb of Nairobi, Kenya) between 2009 and 2014 [2]. Weight, length and head circumference were measured within 12 h of birth and thereafter every 2 weeks in the first 2 months and every 4 weeks until postnatal age 8 months; a total of 1759 sets of measures were recorded. The INTERGROWTH curves included weight, length and head circumference growth curves for boys and girls covering the age range of 27 weeks of preterm birth to 64 weeks.
2.4. Statistical Analysis

Polynomial regression was employed to fit the initial values of each of the seven main percentiles of weight, length and head circumference for boys and girls between 36 weeks and 46 weeks to obtain preliminarily smoothed P3, P10, P25, P50, P75, P90 and P97 values. A nonlinear equation of the modified LMS methods was then used to fit the seven preliminarily smoothed main percentile values of weight, length and head circumference for boys and girls between 24 weeks and 50 weeks to obtain the smoothed L, M and S parameters. Standardized smoothed percentile and Z-score values of weight, length and head circumference by sex and age were finally generated from the smoothed L, M and S parameters. R-square was used to assess goodness of fit of the growth curves. The P3, P10, P50, P90 and P97 values of the established postnatal growth monitoring curves were compared with the corresponding percentile curves of the Fenton 2013 curves [1] and the INTERGROWTH curves [2]. Statistical analysis was performed with SAS 9.4 (SAS Institute Inc., Cary, NC, USA).
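A minimal sketch of the fit checks described in Sections 2.2.6 and 2.4 follows, assuming the standardized and preliminarily smoothed values of one percentile, measure and sex are available as arrays over 24–50 weeks. The analysis in the paper was performed in SAS, and the exact R-square formula is not reported; squared Pearson correlation is used here as one common choice, not as the authors' method.

```python
import numpy as np

# Agreement thresholds from Section 2.2.6 (applied by sex and age).
THRESHOLDS = {"weight_kg": 0.01, "length_cm": 0.1, "head_circumference_cm": 0.1}

def assess_fit(preliminary, standardized, measure):
    """Compare LMS-standardized percentile values with the preliminarily
    smoothed ones for one measure/sex/percentile across 24-50 weeks."""
    preliminary = np.asarray(preliminary, dtype=float)
    standardized = np.asarray(standardized, dtype=float)
    max_abs_diff = float(np.max(np.abs(standardized - preliminary)))
    r_square = float(np.corrcoef(preliminary, standardized)[0, 1] ** 2)
    return {
        "max_abs_diff": max_abs_diff,
        "good_fit": max_abs_diff < THRESHOLDS[measure],
        "r_square": r_square,
    }
```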
3. Results

3.1. Established Postnatal Growth Monitoring Curves and their Comparison with Newborn Growth Curves and the Child Growth Curves

Based on the seven main percentile values of weight, length and head circumference by sex and week from the Newborn Growth Curves (data extracted from 24 weeks of gestation to 42 weeks) and the Child Growth Curves (data extracted from 37 weeks of full-term birth to 50 weeks), after a smoothing process of polynomial regression and the modified LMS methods, we fitted postnatal growth monitoring curves of weight, length and head circumference for boys and girls covering the age range from 24 weeks of preterm birth to 50 weeks. Overall, the established postnatal growth monitoring curves linked fairly well to the Newborn Growth Curves and the Child Growth Curves (Figure 1A–C). The established postnatal growth curves differed little from the Newborn Growth Curves at 24–36 weeks and from the Child Growth Curves at 46–50 weeks, but were higher than both at 37–45 weeks.

3.2. Postnatal Growth Monitoring Curve Allowing for Continuous Use from 24 Weeks of Preterm Birth to 50 Weeks

We obtained standardized smoothed P3, P10, P25, P50, P75, P90 and P97 reference values of weight, length and head circumference for preterm boys and girls that allow for continuous use from 24 weeks of preterm birth to 50 weeks. Data including percentile and Z-score reference values are available for research upon request.

Based on the established postnatal growth monitoring curves, the approximate ranges of the "growth velocity" were estimated as follows: weight gain of (17–18) ± 2 g/kg/d at 24–36 weeks, (10–11) ± 2 g/kg/d at 37–42 weeks and (6–7) ± 1 g/kg/d at 43–50 weeks; length gain of 1.3–1.5 cm/week at 24–32 weeks, 1.1–1.3 cm/week at 33–36 weeks and 0.8–0.9 cm/week at 37–50 weeks; and head circumference gain of 0.9–1.0 cm/week at 24–32 weeks, 0.6–0.7 cm/week at 33–36 weeks and 0.5 cm/week at 37–50 weeks.

In order to facilitate clinical application, we drew a set of user-friendly postnatal growth monitoring charts for boys and girls composed of the frequently used seven main percentile curves of weight, length and head circumference (Figure 2 and Figure 3).
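The paper reports these velocity ranges but not the formula used to derive them. As one commonly used convention (an assumption here, not the authors' stated method), weight velocity in g/kg/d can be estimated with the exponential two-point formula, and length or head circumference gain can simply be expressed per week:

```python
import math

def weight_velocity_g_per_kg_per_day(w1_kg, w2_kg, days):
    """Exponential two-point estimate of weight gain velocity (g/kg/d).
    This is one common convention; the paper does not state its method."""
    return 1000.0 * math.log(w2_kg / w1_kg) / days

def gain_cm_per_week(x1_cm, x2_cm, days):
    """Length or head circumference gain expressed per week."""
    return (x2_cm - x1_cm) / days * 7.0

# Example: a preterm infant growing from 1.20 kg to 1.35 kg over one week
v = weight_velocity_g_per_kg_per_day(1.20, 1.35, 7)   # about 17 g/kg/d
```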
3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves

Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were consistent with those of the Fenton 2013 curves (Figure 4A–C). The P3 and P10 curves of weight were somewhat higher than the corresponding percentile curves of the Fenton 2013 curves, most obviously at 27–34 weeks and at 40–50 weeks. The P3 and P10 curves of length were slightly higher than the corresponding percentile curves of the Fenton 2013 curves at 34–42 weeks. The growth curves of head circumference were slightly lower than the corresponding percentile curves of the Fenton 2013 curves at 37–45 weeks.
3.3. Comparison of the Established Postnatal Growth Monitoring Curves with the Fenton 2013 Growth Curves: Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were consistent with those of the Fenton 2013 curves (Figure 4A–C). The P3 and P10 curves of weight were somewhat higher than the corresponding percentile curves of the Fenton 2013 curves, which seemed to be obvious at 27–34 weeks and at 40–50 weeks. The P3 and P10 curves of length were slightly higher than the corresponding percentile curves of the Fenton 2013 curves at 34–42 weeks. The growth curves of head circumference were slightly lower than the corresponding percentile curves of the Fenton 2013 curves at 37–45 weeks. 3.4. Comparison of the Established Postnatal Growth Monitoring Curves with the INTERGROWTH Growth Curves: Overall, the curve shapes and trajectories of the established postnatal weight, length and head circumference growth monitoring curves were similar to those of the INTERGROWTH curves between 27 and 50 weeks (Figure 5A–C). The P3 and P10 curves of weight were higher, the P97 and P90 curves of length were higher, and the P3 and P10 curves of head circumference were lower than the corresponding percentile curves of the INTERGROWTH curves.
4. Discussion: Regarding early postnatal growth monitoring for preterm infants, traditionally, newborn growth curves are used first, and child growth curves are then used when preterm infants reach full-term age, with a correction for gestational age. However, this practice means that the growth monitoring trajectories of preterm infants are depicted on different growth curves, which is not conducive to pediatricians and child health physicians quickly and accurately identifying early growth and nutrition deviation. In this study, we established a new set of postnatal growth monitoring curves for preterm infants by deeply integrating newborn growth curves and child growth curves. Using the established postnatal growth curves is essentially equivalent to the traditional evaluation process, but simplifies it, as it enables the early postnatal growth monitoring trajectories of preterm infants to be plotted continuously on a single chart, without correcting gestational age, from 24 weeks of preterm birth to 50 weeks. With the help of this simple and convenient practical tool, clinicians can promptly and accurately identify early growth and nutrition deviation of preterm infants (such as insufficient or excessive growth, IUGR at birth, and undernutrition or overnutrition), so as to better guide early scientific feeding and nutrition management for these vulnerable infants. Our established curves were consistent with the Fenton 2013 curves in the smoothing methods and the process of developing the growth curves. The curve shapes and trajectories of our established curves were similar to those of the Fenton 2013 curves, which were generated based on a systematic review and meta-analysis. The main differences between these two sets of curves were in the low percentile curves (such as P3 and P10) of weight at 27–34 weeks and at 40–50 weeks. The main reason is that the Fenton 2013 curves were linked to the WHO Child Growth Standards, which include 2.3% of "unhealthy" full-term infants with low birth weight (<2.5 kg) [17]; this may make the Fenton 2013 weight curves slightly lighter overall than our established weight curves. A recent study in China showed that postnatal weight gain of late preterm infants was higher than the Fenton 2013 curves [6], suggesting that our established curves may be more appropriate for monitoring and evaluating early postnatal growth and nutrition deviation of Chinese preterm infants.
Future research needs to directly evaluate the applicability of our established curves in the Chinese population. The curve shapes and trajectories of our established curves were also similar to those of the INTERGROWTH curves, which were constructed based on longitudinal data of preterm infants [2]. The main differences between these two sets of curves were in the weight, length and head circumference curves at 27–30 weeks, and in the low percentile curves (such as P3 and P10) of weight at 37–50 weeks. These differences may be attributed to differences in study design and data type (cross-sectional vs. longitudinal data) and to the very small sample sizes of the INTERGROWTH curves at 27–32 weeks. Some recent studies showed that the Fenton 2013 curves and the INTERGROWTH curves classified extrauterine growth restriction differently depending on the evaluated populations [18,19,20], which indirectly suggests that it may be more appropriate to assess early postnatal growth and nutrition deviation of preterm infants using growth curves generated from a population with the same racial/ethnic background and living environment. A recent study comparing the growth trajectories of Chinese preterm infants with the INTERGROWTH curves pointed out that, although the international growth curves are recommended for global application, there are still limitations in practical application due to differences in racial/ethnic groups, economic development and social culture across the world, and thus nationally representative growth curves for preterm infants in China should be established as soon as possible [7]. Postnatal growth of preterm infants reflects the growth and development status of newborns in the extrauterine environment after leaving the mother. In contrast, the growth level of newborns at birth reflects the growth and development status of the fetus in utero, and newborn growth curves based on data at birth show a rapid increase in the preterm period and a slowing increase in the full-term period. With the acceleration of catch-up growth of preterm infants after birth, the postnatal growth level of preterm infants at 37–42 weeks may be slightly higher than that of full-term infants at birth [2,21,22,23]. As expected, our established curves were higher than the Newborn Growth Curves in the full-term period. In practical applications, our established curves may be useful for assessment of postnatal growth trajectories, definition of IUGR at birth, and classification of early nutrition status for preterm infants, but they are not appropriate for classifying small for gestational age or large for gestational age at the birth of full-term infants. Our study has several strengths. First, it was based on a large-scale, nationally representative sample of newborns and children in modern China. Second, our established curves are the first set of postnatal growth monitoring curves for preterm infants in China that allow for continuous use from early preterm birth to about 3 months after birth. Third, we estimated a relatively reasonable approximate range of weight gain per day and of length and head circumference gains per week from 24 weeks of preterm birth to 50 weeks. However, our study also has a limitation. Our established curves were generated from cross-sectional data and advanced statistical methods, which achieved a good connection between newborn growth curves and child growth curves.
Judging from their curve shapes and trajectories, our established curves seem realistic, but they need to be further verified using follow-up data of preterm infants and in real clinical practice. 5. Conclusions: Our established postnatal growth monitoring curves are applicable to preterm infants and do not require correction for gestational age in actual use. Early postnatal growth trajectories of preterm infants can be depicted continuously up to 50 weeks on this set of growth charts, and some common growth and nutrition deviations of preterm infants, such as insufficient or excessive growth, IUGR at birth, and undernutrition or overnutrition, can be screened out in time with the help of this convenient and practical clinical tool. In addition, our study has added more evidence to better help clinicians make in-hospital and post-discharge nutrition plans for vulnerable infants, especially for preterm infants with similar ethnic and genetic backgrounds in the Asian part of the world. Further studies should be conducted to evaluate the long-term outcomes (such as neurologic and cardio-metabolic morbidities) of vulnerable infants with IUGR at birth or early overweight/obesity identified by our established postnatal growth monitoring curves.
Background: Early postnatal growth monitoring and nutrition assessment for preterm infants is a public health and clinical concern. We aimed to establish a set of postnatal growth monitoring curves of preterm infants to better help clinicians make in-hospital and post-discharge nutrition plans for these vulnerable infants. Methods: We collected weight, length and head circumference data from a nationwide survey in China between 2015 and 2018. Polynomial regression and the modified LMS methods were employed to construct the smoothed weight, length and head circumference growth curves. Results: We established the P3, P10, P25, P50, P75, P90, P97 reference curves of weight, length and head circumference that allowed for continuous use from 24 weeks of preterm birth to 50 weeks and developed a set of user-friendly growth monitoring charts. We estimated approximate ranges of weight gain per day and of length and head circumference gains per week. Conclusions: Our established growth monitoring curves, which can be used continuously without correcting gestational age from 24 weeks of preterm birth to 50 weeks, may be useful for assessment of postnatal growth trajectories, definition of intrauterine growth retardation at birth, and classification of early nutrition status for preterm infants.
1. Introduction: Early postnatal growth and development of preterm infants have an important impact on their future health status and disease risk. Accurate identification of early postnatal growth and nutrition deviation, such as insufficient or excessive growth, intrauterine growth retardation (IUGR) at birth, and undernutrition or overnutrition, is conducive to scientific feeding guidance and nutritional management for preterm infants. Since growth curves are essential tools to monitor and evaluate early postnatal growth and nutrition deviation of preterm infants, it is of great practical value to establish scientific and user-friendly growth curves for newborn clinical practice. In recent years, researchers have successively constructed percentile growth curves specially used for early postnatal growth monitoring of preterm infants [1,2]. Frequently-used charts, such as the Fenton 2013 or the Olsen 2015 preterm growth charts, did not include a Chinese cohort in their development [3]. Considering that there are racial/ethnic differences in children's growth and development worldwide [4,5], foreign growth curves may not fully reflect the early postnatal growth and nutrition status of Chinese preterm infants [6,7]. In fact, China has also lacked a set of user-friendly growth curves for early postnatal growth monitoring and nutrition assessment of preterm infants. Therefore, it is necessary to develop postnatal growth monitoring curves of preterm infants in China. We aimed to establish a set of postnatal growth monitoring curves of preterm infants, based on a nationwide growth and development survey in China between 2015 and 2018, to better help clinicians make in-hospital and post-discharge nutrition plans for these vulnerable infants. 5. Conclusions: Our established postnatal growth monitoring curves are applicable to preterm infants and do not require correction for gestational age in actual use. Early postnatal growth trajectories of preterm infants can be depicted continuously up to 50 weeks on this set of growth charts, and some common growth and nutrition deviations of preterm infants, such as insufficient or excessive growth, IUGR at birth, and undernutrition or overnutrition, can be screened out in time with the help of this convenient and practical clinical tool. In addition, our study has added more evidence to better help clinicians make in-hospital and post-discharge nutrition plans for vulnerable infants, especially for preterm infants with similar ethnic and genetic backgrounds in the Asian part of the world. Further studies should be conducted to evaluate the long-term outcomes (such as neurologic and cardio-metabolic morbidities) of vulnerable infants with IUGR at birth or early overweight/obesity identified by our established postnatal growth monitoring curves.
10,399
230
[ 4152, 390, 1068, 52, 97, 144, 42, 104, 63, 394, 205, 173, 250, 117, 81 ]
19
[ "curves", "growth", "weeks", "growth curves", "percentile", "weight", "head", "head circumference", "circumference", "values" ]
[ "postnatal growth monitoring", "postnatal growth curves", "values newborn growth", "preterm infants growth", "growth preterm infants" ]
null
[CONTENT] preterm infant | premature | nutrition | weight | length | head circumference | growth curve | growth reference [SUMMARY]
null
[CONTENT] preterm infant | premature | nutrition | weight | length | head circumference | growth curve | growth reference [SUMMARY]
[CONTENT] preterm infant | premature | nutrition | weight | length | head circumference | growth curve | growth reference [SUMMARY]
[CONTENT] preterm infant | premature | nutrition | weight | length | head circumference | growth curve | growth reference [SUMMARY]
[CONTENT] preterm infant | premature | nutrition | weight | length | head circumference | growth curve | growth reference [SUMMARY]
[CONTENT] Aftercare | Birth Weight | China | Female | Gestational Age | Humans | Infant | Infant, Newborn | Infant, Premature | Patient Discharge | Premature Birth [SUMMARY]
null
[CONTENT] Aftercare | Birth Weight | China | Female | Gestational Age | Humans | Infant | Infant, Newborn | Infant, Premature | Patient Discharge | Premature Birth [SUMMARY]
[CONTENT] Aftercare | Birth Weight | China | Female | Gestational Age | Humans | Infant | Infant, Newborn | Infant, Premature | Patient Discharge | Premature Birth [SUMMARY]
[CONTENT] Aftercare | Birth Weight | China | Female | Gestational Age | Humans | Infant | Infant, Newborn | Infant, Premature | Patient Discharge | Premature Birth [SUMMARY]
[CONTENT] Aftercare | Birth Weight | China | Female | Gestational Age | Humans | Infant | Infant, Newborn | Infant, Premature | Patient Discharge | Premature Birth [SUMMARY]
[CONTENT] postnatal growth monitoring | postnatal growth curves | values newborn growth | preterm infants growth | growth preterm infants [SUMMARY]
null
[CONTENT] postnatal growth monitoring | postnatal growth curves | values newborn growth | preterm infants growth | growth preterm infants [SUMMARY]
[CONTENT] postnatal growth monitoring | postnatal growth curves | values newborn growth | preterm infants growth | growth preterm infants [SUMMARY]
[CONTENT] postnatal growth monitoring | postnatal growth curves | values newborn growth | preterm infants growth | growth preterm infants [SUMMARY]
[CONTENT] postnatal growth monitoring | postnatal growth curves | values newborn growth | preterm infants growth | growth preterm infants [SUMMARY]
[CONTENT] curves | growth | weeks | growth curves | percentile | weight | head | head circumference | circumference | values [SUMMARY]
null
[CONTENT] curves | growth | weeks | growth curves | percentile | weight | head | head circumference | circumference | values [SUMMARY]
[CONTENT] curves | growth | weeks | growth curves | percentile | weight | head | head circumference | circumference | values [SUMMARY]
[CONTENT] curves | growth | weeks | growth curves | percentile | weight | head | head circumference | circumference | values [SUMMARY]
[CONTENT] curves | growth | weeks | growth curves | percentile | weight | head | head circumference | circumference | values [SUMMARY]
[CONTENT] growth | infants | early postnatal | early postnatal growth | preterm infants | preterm | early | nutrition | postnatal growth | postnatal [SUMMARY]
null
[CONTENT] curves | weeks | growth | growth curves | cm | 50 weeks | 50 | postnatal | weeks cm | 37 [SUMMARY]
[CONTENT] infants | preterm infants | growth | preterm | nutrition | iugr birth | iugr | vulnerable infants | vulnerable | help [SUMMARY]
[CONTENT] curves | growth | weeks | growth curves | weight | smoothed | values | length | head | circumference [SUMMARY]
[CONTENT] curves | growth | weeks | growth curves | weight | smoothed | values | length | head | circumference [SUMMARY]
[CONTENT] ||| clinicians [SUMMARY]
null
[CONTENT] P3 | P10 | P25 | P50 | P75 | P90 | P97 | 24 weeks | 50 weeks ||| [SUMMARY]
[CONTENT] 24 weeks | 50 weeks [SUMMARY]
[CONTENT] ||| clinicians ||| China | between 2015 and 2018 ||| LMS ||| ||| P3 | P10 | P25 | P50 | P75 | P90 | P97 | 24 weeks | 50 weeks ||| ||| 24 weeks | 50 weeks [SUMMARY]
[CONTENT] ||| clinicians ||| China | between 2015 and 2018 ||| LMS ||| ||| P3 | P10 | P25 | P50 | P75 | P90 | P97 | 24 weeks | 50 weeks ||| ||| 24 weeks | 50 weeks [SUMMARY]
A preliminary study exploring the change in ankle joint laxity and general joint laxity during the menstrual cycle in cis women.
33761990
The purpose of the present study was to examine the relationship between ankle joint laxity and general joint laxity (GJL) in relation to the menstrual cycle, which was divided into four phases based on basal body temperature and ovulation, assessed using an ovulation kit.
BACKGROUND
Participants were 14 female college students (21-22 years) with normal menstrual cycles (cis gender). Anterior drawer stress to a magnitude of 120 N was applied for all participants. Anterior talofibular ligament (ATFL) length was measured as the linear distance (mm) between its points of attachment on the lateral malleolus and talus using ultrasonography. Data on ATFL length from each subject were used to calculate each subject's normalized length change with anterior drawer stress (AD%). The University of Tokyo method was used for evaluation of GJL. AD% and GJL were measured once in each menstrual phase.
METHODS
There was no statistically significant difference between AD% in each phase. GJL score was significantly higher in the ovulation and luteal phases compared with the early follicular phase. AD% and GJL showed a positive correlation with each other in the ovulation phase.
RESULTS
Although it is unclear whether estrogen receptors are present in the ATFL, the present study suggests that women with high GJL scores might be more sensitive to the effects of estrogen, resulting in ATFL length change in the ovulation phase.
CONCLUSIONS
[ "Ankle Joint", "Female", "Humans", "Joint Instability", "Lateral Ligament, Ankle", "Menstrual Cycle", "Ultrasonography", "Young Adult" ]
7988940
Introduction
It was previously reported that the frequency of sports injuries in women is higher than that in men, suggesting a relationship between the menstrual cycle and sports injury [1, 2]. The menstrual cycle is controlled mainly by cyclic fluctuations in estradiol and progesterone [3], and is classified primarily into follicular, ovulation and luteal phases. Several studies [3–7] investigating the timing of injury of the anterior cruciate ligament (ACL) of the knee in relation to the menstrual cycle reported that ACL injuries often occur during the follicular [3, 5] and ovulation phases [4, 6]. It has also been reported that estrogen receptors are present in the human ACL [8], and that female hormones affect the tissue structure of the ACL [9]. In vivo studies have reported that anterior knee laxity (AKL) [10] increases during the ovulation [11] and luteal phases [12]. Additionally, plantar fasciitis, a type of sports injury, is more common in women. Previous studies have investigated female hormone levels in relation to plantar fascia elasticity, and reported that plantar fascia elasticity increases during ovulation, when estrogen levels are at their peak [13]. Thus, changes in the elasticity and joint laxity of ligaments and tendons have been observed in each phase of the menstrual cycle, and their relationship with sports injuries has been discussed. Lateral ankle ligamentous sprain (LAS) is one of the most common injuries resulting from recreational activities and competitive sports [14]. Of these, roughly 66–85 % involve injuries to the anterior talofibular ligament (ATFL) alone [15–17]. The intrinsic predictive factors of LAS include anatomic characteristics, functional deficits in isokinetic strength, flexibility, joint position sense, muscle reaction time, postural stability, gait mechanics, limb dominance, previous ankle sprains, and body mass index [14]. In recent years, generalized joint laxity (GJL) has also been shown to be a risk factor for ACL injury [18]. Stettler et al. [19] reported higher values for AKL in individuals with higher GJL scores compared to those with normal mobility. In addition, GJL scores are higher in women than in men [20]; this difference between men and women has been attributed to differences in sex hormone levels. LASs have also been reported to occur more frequently in women than in men [10]. However, the effects of hormone fluctuations in women on ankle joint laxity and GJL have not been investigated. Previous studies have described the usefulness of ultrasonography in diagnosing ankle ligament injuries [21, 22]. The application of anterior drawer stress during ultrasonography examination has allowed evaluation of the change in location between the ATFL origin and insertion [21]. Ultrasonography has demonstrated good-to-excellent inter-rater reliability in the linear measurement of the ATFL under stress positions using the Telos stress device [22, 23]. Therefore, in this study, ankle joint laxity was measured using the ATFL length-change ratio obtained with stress ultrasonography. The purpose of the present study was to examine the relationship between ankle joint laxity and GJL during the menstrual cycle, divided into four phases based on basal body temperature (BBT) and ovulation, assessed using an ovulation kit. We hypothesized that ankle joint laxity and GJL values would be higher during the ovulation phase, when estrogen levels are high.
null
null
Results
There was no statistically significant difference in AD% among the phases (Table 2). The GJL score was significantly higher in the ovulation (p = 0.016) and luteal phases (p = 0.026) compared with the early follicular phase (Table 3). AD% and GJL showed a positive correlation only in the ovulation phase (R = 0.551, P = 0.041) (Fig. 5). In all phases, a statistically significant number of ankle (p = 0.001) and shoulder (p = 0.001) joints were positive for GJL (Table 4).
Table 2. Change in anterior talofibular ligament (ATFL) length with the anterior drawer test (%) during the menstrual cycle (n = 14; values are means ± SD). ATFL length change (%): early follicular 4.7 ± 3.6; late follicular 4.4 ± 4.3; ovulation 5.6 ± 5.7; luteal 4.0 ± 4.8. There was no statistically significant difference in AD% among the phases.
Table 3. Change in general joint laxity during the menstrual cycle (n = 14; values are means ± SD). GJL score (points): early follicular 2.1 ± 1.0; late follicular 2.6 ± 1.1; ovulation 2.8 ± 1.3*; luteal 2.8 ± 1.1**. *P = 0.016 vs. early follicular phase; **P = 0.026 vs. early follicular phase.
Fig. 5. Correlation between general joint laxity and anterior talofibular ligament length change with anterior drawer stress in each cycle. AD (%): anterior talofibular ligament length change with anterior drawer stress; GJL: general joint laxity.
Table 4. Number of subjects positive for general joint laxity during the menstrual cycle (n = 14; data are number of positives / number of negatives; *P = 0.001 vs. negatives, **P < 0.001 vs. negatives).
Phase | Spine | Hip | Knee | Ankle | Shoulder | Elbow | Wrist
Early follicular | 3/11 | 0/14** | 5/9 | 9/5** | 8/6** | 1/13** | 4/10
Late follicular | 5/9 | 0/14* | 4/10 | 9/5* | 9/5* | 1/13* | 4/10
Ovulation | 6/8 | 0/14* | 4/10 | 10/4* | 9/5* | 2/12* | 6/8
Luteal | 6/8 | 1/13** | 3/11 | 11/3** | 9/5** | 0/14** | 5/9
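As a quick plausibility check of the reported ovulation-phase correlation (R = 0.551, n = 14, P = 0.041), the two-tailed p-value can be recovered from r and the sample size via the usual t-transform. The snippet below is an independent back-of-the-envelope calculation, not the authors' SPSS output.

```python
# Back-of-the-envelope check (not the authors' code): the two-tailed p-value
# implied by the reported Pearson R = 0.551 with n = 14 subjects.
import math
from scipy import stats

r, n = 0.551, 14
t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df=n - 2)
print(f"t = {t_stat:.2f}, p = {p_two_tailed:.3f}")  # about 0.04, consistent with the reported P = 0.041
```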
Conclusions
It is unclear whether estrogen receptors are present in the ATFL; however, the present study suggests that women with high GJL scores might be more sensitive to the effects of estrogen, resulting in ATFL length change during the ovulation phase. This may also be one of the reasons why LAS occurs in women. In future studies, menstrual cycle phases should be identified by measuring hormone concentrations in order to fully examine the effects of the menstrual cycle on the risk factors for LAS.
[ "Methods", "Participants", "Evaluation of the menstrual cycle", "Timing of measurement", "Measurement methods", "Intra‐rater reliability", "Statistical analysis" ]
[ "Participants We surveyed 49 female university students (cis gender) using a questionnaire and interview. Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment.\nWe surveyed 49 female university students (cis gender) using a questionnaire and interview. Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment.\nEvaluation of the menstrual cycle Based on the completed questionnaires and interviews conducted in the 49 female subjects, we asked 26 of them who had regular menstrual cycles and agreed to participate in this study to measure and record their BBT every morning for 1 to 2 months preceding the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were provided with ovulation kits (Doctor’s Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. Since luteinizing hormone (LH) in urine and serum have been shown to correlate with each other [24], the ovulation date was estimated using the ovulation kit results as a substitute for blood sampling. A recording sheet for creation of a BBT table was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. Based on these data, the first day of menstruation was considered day 1, and the mean BBT up to day 6 was calculated. When the BBT for three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, it was judged that the subject exhibited a biphasic cycle of low and high temperatures [25]. 
Women with biphasic cycles were classified as having a normal ovulatory pattern, while those with monophasic cycles were considered to have an anovulatory pattern [25, 26]. Of the 26 women whose menstrual cycles were monitored, two were excluded because their BBT was monophasic; ATFL and GJL were measured in the remaining 24 subjects. The final enrolled study population consisted of 14 women who had a cycle length of 25 to 38 days [27] and biphasic BBTs during the menstrual cycle, and in whom ATFL length and GJL measurements were performed. Ten of the 24 subjects were excluded for the reasons indicated in Fig. 1.\nFig. 1Subject selection\nSubject selection\n Based on the completed questionnaires and interviews conducted in the 49 female subjects, we asked 26 of them who had regular menstrual cycles and agreed to participate in this study to measure and record their BBT every morning for 1 to 2 months preceding the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were provided with ovulation kits (Doctor’s Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. Since luteinizing hormone (LH) in urine and serum have been shown to correlate with each other [24], the ovulation date was estimated using the ovulation kit results as a substitute for blood sampling. A recording sheet for creation of a BBT table was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. Based on these data, the first day of menstruation was considered day 1, and the mean BBT up to day 6 was calculated. When the BBT for three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, it was judged that the subject exhibited a biphasic cycle of low and high temperatures [25]. Women with biphasic cycles were classified as having a normal ovulatory pattern, while those with monophasic cycles were considered to have an anovulatory pattern [25, 26]. Of the 26 women whose menstrual cycles were monitored, two were excluded because their BBT was monophasic; ATFL and GJL were measured in the remaining 24 subjects. The final enrolled study population consisted of 14 women who had a cycle length of 25 to 38 days [27] and biphasic BBTs during the menstrual cycle, and in whom ATFL length and GJL measurements were performed. Ten of the 24 subjects were excluded for the reasons indicated in Fig. 1.\nFig. 1Subject selection\nSubject selection\nTiming of measurement ATFL length and GJL measurements were taken once in each of the four phases of the menstrual cycle; these phases consisted of the early follicular phase, late follicular phase, ovulatory phase, and luteal phase.\nATFL and GJL were measured in the early follicular phase from 3 to 4 days after the start of menstruation, in the late follicular phase from 3 to 4 days after the end of menstruation, in the ovulation phase from 2 to 4 days after the day when the ovulation kit indicated a positive result, and in the luteal phase from 5 to 10 days after the start of the high temperature phase. In consideration of possible diurnal variations, all measurements in all subjects were performed between 8:00 a.m. 
and 12:00 p.m.[28].\nATFL length and GJL measurements were taken once in each of the four phases of the menstrual cycle; these phases consisted of the early follicular phase, late follicular phase, ovulatory phase, and luteal phase.\nATFL and GJL were measured in the early follicular phase from 3 to 4 days after the start of menstruation, in the late follicular phase from 3 to 4 days after the end of menstruation, in the ovulation phase from 2 to 4 days after the day when the ovulation kit indicated a positive result, and in the luteal phase from 5 to 10 days after the start of the high temperature phase. In consideration of possible diurnal variations, all measurements in all subjects were performed between 8:00 a.m. and 12:00 p.m.[28].\nMeasurement methods Ultrasound imaging was performed using ultrasonography (Aplio 500, Toshiba Medical Systems, Tochigi, Japan) with a 10-MHz linear probe. The test positions used were identical to those in a previous study [23, 29] and were performed in the following order: (1) neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity positioned on the bed; and (2) anterior drawer stress to the ankle, performed about 3 cm proximal to the lateral malleolus (Fig. 2). Ankle stress conditions were applied with a Telos Stress Device (Telos SE, Aimedic MMT, Japan). Anterior drawer stress was applied to a magnitude of 120 N for all participants. The measurement was performed thrice, once each by three examiners, two examiners performing the test using ultrasonography, and one examiner performing the test using the Telos Stress Device. With the participant’s ankle in approximately a neutral position, the examiner palpated the anterolateral aspect of the lateral malleolus and talus. Next, the examiner applied ultrasound conducting gel over the lateral aspect of the ankle and positioned the ultrasound probe. The examiner then oriented the probe to view the cross-sectional representation of the lateral malleolus, kept on the right side of the screen, while the lateral talar articular surface cartilage and the neck of the talus, where the ATFL attaches, were identified (Fig. 3). After optimizing the image and centering these bony landmarks within the field of view, the examiner saved the three images and removed the probe. Next, the stress device was applied to the ankle and three images of the ATFL were obtained while performing the anterior drawer stress by application of a posteriorly directed force of 120 N through the tibia (Fig. 2).\nFig. 2Position of the foot during measurementNeutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b front view. c probe positionFig. 3Ultrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nPosition of the foot during measurement\nNeutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. 
Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b front view. c probe position\nUltrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nUltrasonographic image analysis was performed using an ultrasonic diagnostic imaging system. The ATFL length was measured as the linear distance (mm) between the landmarks. The anterolateral aspect of the lateral malleolus was identified as the ATFL origin, and the peak of the talus was used as the insertion point. The average of the values measured from the three images was adopted. ATFL length data from each subject were.\nused to calculate each participant’s normalized length change with application of anterior drawer stress (AD%) using the formula [(L stress – L neutral) /L neutral] ×100, where L is the ATFL length in millimeters [29].\nGJL was measured using the University of Tokyo joint laxity test [30] (Fig. 4). Mobility was measured at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist. Each item was assigned a value of 1 point, and a total of seven positions were measured; for the six major bilateral joints (i.e., aside from the spine), the left and right positions were assigned a value of 0.5 points each. For items with joint angle as the criterion, the joint angle was measured using a goniometer. Joint angle measurements were performed by one operator and recorded by one operator.\nFig. 4The University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints) , for a total of 7 points \nThe University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints) , for a total of 7 points \nUltrasound imaging was performed using ultrasonography (Aplio 500, Toshiba Medical Systems, Tochigi, Japan) with a 10-MHz linear probe. The test positions used were identical to those in a previous study [23, 29] and were performed in the following order: (1) neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity positioned on the bed; and (2) anterior drawer stress to the ankle, performed about 3 cm proximal to the lateral malleolus (Fig. 2). Ankle stress conditions were applied with a Telos Stress Device (Telos SE, Aimedic MMT, Japan). Anterior drawer stress was applied to a magnitude of 120 N for all participants. The measurement was performed thrice, once each by three examiners, two examiners performing the test using ultrasonography, and one examiner performing the test using the Telos Stress Device. With the participant’s ankle in approximately a neutral position, the examiner palpated the anterolateral aspect of the lateral malleolus and talus. 
Next, the examiner applied ultrasound conducting gel over the lateral aspect of the ankle and positioned the ultrasound probe. The examiner then oriented the probe to view the cross-sectional representation of the lateral malleolus, kept on the right side of the screen, while the lateral talar articular surface cartilage and the neck of the talus, where the ATFL attaches, were identified (Fig. 3). After optimizing the image and centering these bony landmarks within the field of view, the examiner saved the three images and removed the probe. Next, the stress device was applied to the ankle and three images of the ATFL were obtained while performing the anterior drawer stress by application of a posteriorly directed force of 120 N through the tibia (Fig. 2).\nFig. 2Position of the foot during measurementNeutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b front view. c probe positionFig. 3Ultrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nPosition of the foot during measurement\nNeutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b front view. c probe position\nUltrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nUltrasonographic image analysis was performed using an ultrasonic diagnostic imaging system. The ATFL length was measured as the linear distance (mm) between the landmarks. The anterolateral aspect of the lateral malleolus was identified as the ATFL origin, and the peak of the talus was used as the insertion point. The average of the values measured from the three images was adopted. ATFL length data from each subject were.\nused to calculate each participant’s normalized length change with application of anterior drawer stress (AD%) using the formula [(L stress – L neutral) /L neutral] ×100, where L is the ATFL length in millimeters [29].\nGJL was measured using the University of Tokyo joint laxity test [30] (Fig. 4). Mobility was measured at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist. Each item was assigned a value of 1 point, and a total of seven positions were measured; for the six major bilateral joints (i.e., aside from the spine), the left and right positions were assigned a value of 0.5 points each. For items with joint angle as the criterion, the joint angle was measured using a goniometer. 
Joint angle measurements were performed by one operator and recorded by one operator.\nFig. 4The University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints) , for a total of 7 points \nThe University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints) , for a total of 7 points \nIntra‐rater reliability To assess the inter-rater reliability of the measurements, 10 adult men (mean age, 21.0 years; mean height, 176 ± 6.5 cm; mean weight, 68.9 ± 6.3 kg) were subjected. The study content was fully explained to the participants, and written, informed consent was obtained from all participants. Measurement was performed using the above-described ATFL length measurement method; again, the measurement was performed three times, and the average of the three measurements was used. The measurement was repeated on two or more separate days within 1 week, and the intraclass correlation coefficient (ICC) (1, 3) was calculated. The resulting ICC (1, 3) for the ATFL measurements was 0.92 (neutral ATFL length) and 0.93 (stress ATFL length) (Table 1). According to the criteria of Landis et al., [31] reproducibility is considered to be almost perfect when the ICC is 0.81 or more. Therefore, the reproducibility of ATFL length measurement in this study was considered to be high.\nTable 1Reliability of ultrasonography measurementsLoadATFL length (mm)ICC (1,3)ReliabilityMDD95 %First RaterSecond RaterNeutral21.9 ± 2.421.7 ± 2.50.929almost perfect1.8120 N stress23.2 ± 2.723.1 ± 2.20.920almost perfect1.9n = 10Values are given as mean ± SDATFL anterior talofibular ligament; ICC intraclass correlation coefficientMDD95 %: minimal detectable difference at the 95 % confidence interval\nReliability of ultrasonography measurements\nn = 10\nValues are given as mean ± SD\nATFL anterior talofibular ligament; ICC intraclass correlation coefficient\nMDD95 %: minimal detectable difference at the 95 % confidence interval\nTo assess the inter-rater reliability of the measurements, 10 adult men (mean age, 21.0 years; mean height, 176 ± 6.5 cm; mean weight, 68.9 ± 6.3 kg) were subjected. The study content was fully explained to the participants, and written, informed consent was obtained from all participants. Measurement was performed using the above-described ATFL length measurement method; again, the measurement was performed three times, and the average of the three measurements was used. The measurement was repeated on two or more separate days within 1 week, and the intraclass correlation coefficient (ICC) (1, 3) was calculated. The resulting ICC (1, 3) for the ATFL measurements was 0.92 (neutral ATFL length) and 0.93 (stress ATFL length) (Table 1). According to the criteria of Landis et al., [31] reproducibility is considered to be almost perfect when the ICC is 0.81 or more. 
Therefore, the reproducibility of ATFL length measurement in this study was considered to be high.\nTable 1Reliability of ultrasonography measurementsLoadATFL length (mm)ICC (1,3)ReliabilityMDD95 %First RaterSecond RaterNeutral21.9 ± 2.421.7 ± 2.50.929almost perfect1.8120 N stress23.2 ± 2.723.1 ± 2.20.920almost perfect1.9n = 10Values are given as mean ± SDATFL anterior talofibular ligament; ICC intraclass correlation coefficientMDD95 %: minimal detectable difference at the 95 % confidence interval\nReliability of ultrasonography measurements\nn = 10\nValues are given as mean ± SD\nATFL anterior talofibular ligament; ICC intraclass correlation coefficient\nMDD95 %: minimal detectable difference at the 95 % confidence interval\nStatistical analysis Statistical analyses were performed using SPSS (version 24.0, SPSS Japan Inc., Tokyo, Japan). A one-way repeated measures analysis of variance was used to compare AD% and GJL in each phase of the menstrual cycle. Pearson’s correlation test was used to assess the relationship between AD% and GJL in each phase. Pearson’s chi-squared test was used to compare differences in assessments at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist in the ovulation phase. The level of significance was set at 5 %.\nStatistical analyses were performed using SPSS (version 24.0, SPSS Japan Inc., Tokyo, Japan). A one-way repeated measures analysis of variance was used to compare AD% and GJL in each phase of the menstrual cycle. Pearson’s correlation test was used to assess the relationship between AD% and GJL in each phase. Pearson’s chi-squared test was used to compare differences in assessments at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist in the ovulation phase. The level of significance was set at 5 %.", "We surveyed 49 female university students (cis gender) using a questionnaire and interview. Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment.", " Based on the completed questionnaires and interviews conducted in the 49 female subjects, we asked 26 of them who had regular menstrual cycles and agreed to participate in this study to measure and record their BBT every morning for 1 to 2 months preceding the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were provided with ovulation kits (Doctor’s Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. 
Since luteinizing hormone (LH) in urine and serum have been shown to correlate with each other [24], the ovulation date was estimated using the ovulation kit results as a substitute for blood sampling. A recording sheet for creation of a BBT table was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. Based on these data, the first day of menstruation was considered day 1, and the mean BBT up to day 6 was calculated. When the BBT for three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, it was judged that the subject exhibited a biphasic cycle of low and high temperatures [25]. Women with biphasic cycles were classified as having a normal ovulatory pattern, while those with monophasic cycles were considered to have an anovulatory pattern [25, 26]. Of the 26 women whose menstrual cycles were monitored, two were excluded because their BBT was monophasic; ATFL and GJL were measured in the remaining 24 subjects. The final enrolled study population consisted of 14 women who had a cycle length of 25 to 38 days [27] and biphasic BBTs during the menstrual cycle, and in whom ATFL length and GJL measurements were performed. Ten of the 24 subjects were excluded for the reasons indicated in Fig. 1.\nFig. 1Subject selection\nSubject selection", "ATFL length and GJL measurements were taken once in each of the four phases of the menstrual cycle; these phases consisted of the early follicular phase, late follicular phase, ovulatory phase, and luteal phase.\nATFL and GJL were measured in the early follicular phase from 3 to 4 days after the start of menstruation, in the late follicular phase from 3 to 4 days after the end of menstruation, in the ovulation phase from 2 to 4 days after the day when the ovulation kit indicated a positive result, and in the luteal phase from 5 to 10 days after the start of the high temperature phase. In consideration of possible diurnal variations, all measurements in all subjects were performed between 8:00 a.m. and 12:00 p.m.[28].", "Ultrasound imaging was performed using ultrasonography (Aplio 500, Toshiba Medical Systems, Tochigi, Japan) with a 10-MHz linear probe. The test positions used were identical to those in a previous study [23, 29] and were performed in the following order: (1) neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity positioned on the bed; and (2) anterior drawer stress to the ankle, performed about 3 cm proximal to the lateral malleolus (Fig. 2). Ankle stress conditions were applied with a Telos Stress Device (Telos SE, Aimedic MMT, Japan). Anterior drawer stress was applied to a magnitude of 120 N for all participants. The measurement was performed thrice, once each by three examiners, two examiners performing the test using ultrasonography, and one examiner performing the test using the Telos Stress Device. With the participant’s ankle in approximately a neutral position, the examiner palpated the anterolateral aspect of the lateral malleolus and talus. Next, the examiner applied ultrasound conducting gel over the lateral aspect of the ankle and positioned the ultrasound probe. The examiner then oriented the probe to view the cross-sectional representation of the lateral malleolus, kept on the right side of the screen, while the lateral talar articular surface cartilage and the neck of the talus, where the ATFL attaches, were identified (Fig. 3). 
After optimizing the image and centering these bony landmarks within the field of view, the examiner saved the three images and removed the probe. Next, the stress device was applied to the ankle and three images of the ATFL were obtained while performing the anterior drawer stress by application of a posteriorly directed force of 120 N through the tibia (Fig. 2).\nFig. 2Position of the foot during measurementNeutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b front view. c probe positionFig. 3Ultrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nPosition of the foot during measurement\nNeutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b front view. c probe position\nUltrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nUltrasonographic image analysis was performed using an ultrasonic diagnostic imaging system. The ATFL length was measured as the linear distance (mm) between the landmarks. The anterolateral aspect of the lateral malleolus was identified as the ATFL origin, and the peak of the talus was used as the insertion point. The average of the values measured from the three images was adopted. ATFL length data from each subject were.\nused to calculate each participant’s normalized length change with application of anterior drawer stress (AD%) using the formula [(L stress – L neutral) /L neutral] ×100, where L is the ATFL length in millimeters [29].\nGJL was measured using the University of Tokyo joint laxity test [30] (Fig. 4). Mobility was measured at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist. Each item was assigned a value of 1 point, and a total of seven positions were measured; for the six major bilateral joints (i.e., aside from the spine), the left and right positions were assigned a value of 0.5 points each. For items with joint angle as the criterion, the joint angle was measured using a goniometer. Joint angle measurements were performed by one operator and recorded by one operator.\nFig. 4The University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints) , for a total of 7 points \nThe University of Tokyo joint laxity test. 
Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints) , for a total of 7 points ", "To assess the inter-rater reliability of the measurements, 10 adult men (mean age, 21.0 years; mean height, 176 ± 6.5 cm; mean weight, 68.9 ± 6.3 kg) were subjected. The study content was fully explained to the participants, and written, informed consent was obtained from all participants. Measurement was performed using the above-described ATFL length measurement method; again, the measurement was performed three times, and the average of the three measurements was used. The measurement was repeated on two or more separate days within 1 week, and the intraclass correlation coefficient (ICC) (1, 3) was calculated. The resulting ICC (1, 3) for the ATFL measurements was 0.92 (neutral ATFL length) and 0.93 (stress ATFL length) (Table 1). According to the criteria of Landis et al., [31] reproducibility is considered to be almost perfect when the ICC is 0.81 or more. Therefore, the reproducibility of ATFL length measurement in this study was considered to be high.\nTable 1Reliability of ultrasonography measurementsLoadATFL length (mm)ICC (1,3)ReliabilityMDD95 %First RaterSecond RaterNeutral21.9 ± 2.421.7 ± 2.50.929almost perfect1.8120 N stress23.2 ± 2.723.1 ± 2.20.920almost perfect1.9n = 10Values are given as mean ± SDATFL anterior talofibular ligament; ICC intraclass correlation coefficientMDD95 %: minimal detectable difference at the 95 % confidence interval\nReliability of ultrasonography measurements\nn = 10\nValues are given as mean ± SD\nATFL anterior talofibular ligament; ICC intraclass correlation coefficient\nMDD95 %: minimal detectable difference at the 95 % confidence interval", "Statistical analyses were performed using SPSS (version 24.0, SPSS Japan Inc., Tokyo, Japan). A one-way repeated measures analysis of variance was used to compare AD% and GJL in each phase of the menstrual cycle. Pearson’s correlation test was used to assess the relationship between AD% and GJL in each phase. Pearson’s chi-squared test was used to compare differences in assessments at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist in the ovulation phase. The level of significance was set at 5 %." ]
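The statistical analysis just described (a repeated-measures ANOVA across the four phases, Pearson correlation between AD% and GJL within each phase, alpha = 0.05) maps onto standard open-source routines. The sketch below is illustrative only, since the study itself used SPSS 24.0; the file name and the column names (subject, phase, ad_pct, gjl) are hypothetical.

```python
# Illustrative re-analysis sketch; the study used SPSS 24.0, not this code.
# Assumes a long-format table with one row per subject and menstrual-cycle phase:
#   subject, phase (early_follicular / late_follicular / ovulation / luteal),
#   ad_pct (ATFL length change with anterior drawer stress, %), gjl (joint laxity score).
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("laxity_by_phase.csv")  # hypothetical file name

# One-way repeated-measures ANOVA: does AD% differ across the four phases?
print(AnovaRM(data=df, depvar="ad_pct", subject="subject", within=["phase"]).fit().anova_table)

# The same model for the generalized joint laxity (GJL) score.
print(AnovaRM(data=df, depvar="gjl", subject="subject", within=["phase"]).fit().anova_table)

# Pearson correlation between AD% and GJL within each phase (significance level 5 %).
for phase, grp in df.groupby("phase"):
    r, p = pearsonr(grp["ad_pct"], grp["gjl"])
    print(f"{phase}: r = {r:.3f}, p = {p:.3f}")
```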
[ null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Participants", "Evaluation of the menstrual cycle", "Timing of measurement", "Measurement methods", "Intra‐rater reliability", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "It was previously reported that the frequency of sports injuries in women is higher than that in men, suggesting a relationship between the menstrual cycle and sports injury [1, 2]. The menstrual cycle is controlled mainly by cyclic fluctuations in estradiol and progesterone [3], and is classified primarily into follicular, ovulation and luteal phases.\nSeveral studies [3–7] investigating the timing of injury of the anterior cruciate ligament (ACL) of the knee in relation to the menstrual cycle reported that ACL injuries often occur during the follicular [3, 5] and ovulation phases [4, 6]. It has also been reported that estrogen receptors are present in the human ACL [8], and that female hormones affect the tissue structure of the ACL [9]. In vivo studies have reported that anterior knee laxity [10] increases during ovulation [11] and luteal phases [12]. Additionally, plantar fasciitis, a type of sports injury, is more common in women. Previous studies have investigated female hormone levels in relation to plantar fascia elasticity, and reported that plantar fascia elasticity increases during ovulation, when estrogen levels are at their peak [13]. Thus, changes in the elasticity and joint laxity of ligaments and tendons have been observed in each phase of the menstrual cycle, and their relationship with sports injuries has been discussed.\nLateral ankle ligamentous sprain (LAS) is one of the most common injuries resulting from recreational activities and competitive sports [14]. Of them, roughly 66–85 % involve injuries to the anterior talofibular ligament (ATFL) alone [15–17]. The intrinsic predictive factors of LAS include anatomic characteristics, functional deficits in isokinetic strength, flexibility, joint position sense, muscle reaction time, postural stability, gait mechanics, limb dominance, previous ankle sprains, and body mass index [14]. In recent years, generalized joint laxity (GJL) has also been shown to be a risk factor for ACL injury [18]. Stettler et al. [19] reported higher values for AKL in individuals with higher GJL scores compared to those with normal mobility. In addition, GJL scores are higher in women than in men [20]; this difference between men and women has been attributed to differences in sex hormone levels. LASs have also been reported to occur more frequently in women than in men [10]. However, the effects of hormone fluctuations in women on ankle joint laxity and GJL have not been investigated.\nThe previous studies have described the usefulness of ultrasonography in diagnosing ankle ligament injuries [21, 22]. The application of anterior drawer stress during ultrasonography examination has allowed the evaluation of the changes in location between the ATFL origin and insertion [21]. The use of ultrasonography has demonstrated good-to excellent interrater reliability in the linear measurement of the ATFL under stress positions using the Telos stress device [22, 23]. Therefore, in this study, ankle joint laxity is measured using ATFL ratio of stress ultrasonography.\nThe purpose of the present study was to examine the relationship between ankle joint laxity and GJL during the menstrual cycle, divided into four phases based on basal body temperature (BBT) and ovulation, assessed using an ovulation kit. We hypothesized that ankle joint laxity and GJL values were high during the ovulation period when estrogen levels are high.", "Participants We surveyed 49 female university students (cis gender) using a questionnaire and interview. 
Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment.\nEvaluation of the menstrual cycle Based on the completed questionnaires and interviews conducted in the 49 female subjects, we asked 26 of them who had regular menstrual cycles and agreed to participate in this study to measure and record their BBT every morning for 1 to 2 months preceding the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were provided with ovulation kits (Doctor’s Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. Since luteinizing hormone (LH) in urine and serum have been shown to correlate with each other [24], the ovulation date was estimated using the ovulation kit results as a substitute for blood sampling. A recording sheet for creation of a BBT table was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. Based on these data, the first day of menstruation was considered day 1, and the mean BBT up to day 6 was calculated. When the BBT for three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, it was judged that the subject exhibited a biphasic cycle of low and high temperatures [25]. Women with biphasic cycles were classified as having a normal ovulatory pattern, while those with monophasic cycles were considered to have an anovulatory pattern [25, 26]. 
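The biphasic-BBT rule just described (day 1 is the first day of menstruation, the reference value is the mean BBT of days 1 to 6, and the cycle is judged biphasic when three consecutive post-ovulation days are at least 0.2 °C above that mean) can be written compactly. The sketch below is one possible reading of the rule and is not the authors' code; in particular, scanning for any run of three consecutive warm days after the kit-estimated ovulation day is an assumption.

```python
# One reading of the biphasic-BBT rule; hypothetical helper, not the authors' code.
# bbt: list of daily basal body temperatures (deg C), index 0 = day 1 of menstruation.
# ovulation_day: 1-based day estimated from the ovulation kit.
def is_biphasic(bbt, ovulation_day, threshold=0.2):
    follicular_mean = sum(bbt[:6]) / 6.0      # mean BBT for days 1-6
    run = 0
    for temp in bbt[ovulation_day:]:          # temperatures after the ovulation day
        run = run + 1 if temp >= follicular_mean + threshold else 0
        if run >= 3:
            return True                       # biphasic -> normal ovulatory pattern
    return False                              # monophasic -> anovulatory pattern
```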
Of the 26 women whose menstrual cycles were monitored, two were excluded because their BBT was monophasic; ATFL and GJL were measured in the remaining 24 subjects. The final enrolled study population consisted of 14 women who had a cycle length of 25 to 38 days [27] and biphasic BBTs during the menstrual cycle, and in whom ATFL length and GJL measurements were performed. Ten of the 24 subjects were excluded for the reasons indicated in Fig. 1.\nFig. 1 Subject selection\nTiming of measurement ATFL length and GJL measurements were taken once in each of the four phases of the menstrual cycle; these phases consisted of the early follicular phase, late follicular phase, ovulatory phase, and luteal phase.\nATFL and GJL were measured in the early follicular phase from 3 to 4 days after the start of menstruation, in the late follicular phase from 3 to 4 days after the end of menstruation, in the ovulation phase from 2 to 4 days after the day when the ovulation kit indicated a positive result, and in the luteal phase from 5 to 10 days after the start of the high temperature phase. In consideration of possible diurnal variations, all measurements in all subjects were performed between 8:00 a.m. and 12:00 p.m. [28].\nMeasurement methods Ultrasound imaging was performed using ultrasonography (Aplio 500, Toshiba Medical Systems, Tochigi, Japan) with a 10-MHz linear probe. The test positions used were identical to those in a previous study [23, 29] and were performed in the following order: (1) neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity positioned on the bed; and (2) anterior drawer stress to the ankle, performed about 3 cm proximal to the lateral malleolus (Fig. 2). Ankle stress conditions were applied with a Telos Stress Device (Telos SE, Aimedic MMT, Japan). Anterior drawer stress was applied to a magnitude of 120 N for all participants. The measurement was performed thrice, once each by three examiners, two examiners performing the test using ultrasonography, and one examiner performing the test using the Telos Stress Device. With the participant’s ankle in approximately a neutral position, the examiner palpated the anterolateral aspect of the lateral malleolus and talus. Next, the examiner applied ultrasound conducting gel over the lateral aspect of the ankle and positioned the ultrasound probe. The examiner then oriented the probe to view the cross-sectional representation of the lateral malleolus, kept on the right side of the screen, while the lateral talar articular surface cartilage and the neck of the talus, where the ATFL attaches, were identified (Fig. 3). After optimizing the image and centering these bony landmarks within the field of view, the examiner saved the three images and removed the probe. Next, the stress device was applied to the ankle and three images of the ATFL were obtained while performing the anterior drawer stress by application of a posteriorly directed force of 120 N through the tibia (Fig. 2).\nFig. 2 Position of the foot during measurement. Neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b: Front view. c: Probe position\nFig. 3 Ultrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nUltrasonographic image analysis was performed using an ultrasonic diagnostic imaging system. The ATFL length was measured as the linear distance (mm) between the landmarks. The anterolateral aspect of the lateral malleolus was identified as the ATFL origin, and the peak of the talus was used as the insertion point. The average of the values measured from the three images was adopted. ATFL length data from each subject were used to calculate each participant’s normalized length change with application of anterior drawer stress (AD%) using the formula [(L stress – L neutral) / L neutral] × 100, where L is the ATFL length in millimeters [29].\nGJL was measured using the University of Tokyo joint laxity test [30] (Fig. 4). Mobility was measured at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist. Each item was assigned a value of 1 point, and a total of seven positions were measured; for the six major bilateral joints (i.e., aside from the spine), the left and right positions were assigned a value of 0.5 points each. For items with joint angle as the criterion, the joint angle was measured using a goniometer. Joint angle measurements were performed by one operator and recorded by one operator.\nFig. 4 The University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints), for a total of 7 points\n
Intra‐rater reliability To assess the inter-rater reliability of the measurements, 10 adult men (mean age, 21.0 years; mean height, 176 ± 6.5 cm; mean weight, 68.9 ± 6.3 kg) participated. The study content was fully explained to the participants, and written, informed consent was obtained from all participants. Measurement was performed using the above-described ATFL length measurement method; again, the measurement was performed three times, and the average of the three measurements was used. The measurement was repeated on two or more separate days within 1 week, and the intraclass correlation coefficient (ICC) (1, 3) was calculated. The resulting ICC (1, 3) for the ATFL measurements was 0.93 (neutral ATFL length) and 0.92 (stress ATFL length) (Table 1). According to the criteria of Landis et al. [31], reproducibility is considered to be almost perfect when the ICC is 0.81 or more. Therefore, the reproducibility of ATFL length measurement in this study was considered to be high.\nTable 1 Reliability of ultrasonography measurements\nLoad | ATFL length (mm), first rater | ATFL length (mm), second rater | ICC (1,3) | Reliability | MDD95 %\nNeutral | 21.9 ± 2.4 | 21.7 ± 2.5 | 0.929 | almost perfect | 1.8\n120 N stress | 23.2 ± 2.7 | 23.1 ± 2.2 | 0.920 | almost perfect | 1.9\nn = 10. Values are given as mean ± SD. ATFL anterior talofibular ligament; ICC intraclass correlation coefficient; MDD95 %: minimal detectable difference at the 95 % confidence interval\nStatistical analysis Statistical analyses were performed using SPSS (version 24.0, SPSS Japan Inc., Tokyo, Japan). A one-way repeated measures analysis of variance was used to compare AD% and GJL in each phase of the menstrual cycle. Pearson’s correlation test was used to assess the relationship between AD% and GJL in each phase. Pearson’s chi-squared test was used to compare differences in assessments at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist in the ovulation phase. The level of significance was set at 5 %.", "We surveyed 49 female university students (cis gender) using a questionnaire and interview. Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment.", " Based on the completed questionnaires and interviews conducted in the 49 female subjects, we asked 26 of them who had regular menstrual cycles and agreed to participate in this study to measure and record their BBT every morning for 1 to 2 months preceding the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were provided with ovulation kits (Doctor’s Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. 
Since luteinizing hormone (LH) in urine and serum have been shown to correlate with each other [24], the ovulation date was estimated using the ovulation kit results as a substitute for blood sampling. A recording sheet for creation of a BBT table was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. Based on these data, the first day of menstruation was considered day 1, and the mean BBT up to day 6 was calculated. When the BBT for three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, it was judged that the subject exhibited a biphasic cycle of low and high temperatures [25]. Women with biphasic cycles were classified as having a normal ovulatory pattern, while those with monophasic cycles were considered to have an anovulatory pattern [25, 26]. Of the 26 women whose menstrual cycles were monitored, two were excluded because their BBT was monophasic; ATFL and GJL were measured in the remaining 24 subjects. The final enrolled study population consisted of 14 women who had a cycle length of 25 to 38 days [27] and biphasic BBTs during the menstrual cycle, and in whom ATFL length and GJL measurements were performed. Ten of the 24 subjects were excluded for the reasons indicated in Fig. 1.\nFig. 1Subject selection\nSubject selection", "ATFL length and GJL measurements were taken once in each of the four phases of the menstrual cycle; these phases consisted of the early follicular phase, late follicular phase, ovulatory phase, and luteal phase.\nATFL and GJL were measured in the early follicular phase from 3 to 4 days after the start of menstruation, in the late follicular phase from 3 to 4 days after the end of menstruation, in the ovulation phase from 2 to 4 days after the day when the ovulation kit indicated a positive result, and in the luteal phase from 5 to 10 days after the start of the high temperature phase. In consideration of possible diurnal variations, all measurements in all subjects were performed between 8:00 a.m. and 12:00 p.m.[28].", "Ultrasound imaging was performed using ultrasonography (Aplio 500, Toshiba Medical Systems, Tochigi, Japan) with a 10-MHz linear probe. The test positions used were identical to those in a previous study [23, 29] and were performed in the following order: (1) neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity positioned on the bed; and (2) anterior drawer stress to the ankle, performed about 3 cm proximal to the lateral malleolus (Fig. 2). Ankle stress conditions were applied with a Telos Stress Device (Telos SE, Aimedic MMT, Japan). Anterior drawer stress was applied to a magnitude of 120 N for all participants. The measurement was performed thrice, once each by three examiners, two examiners performing the test using ultrasonography, and one examiner performing the test using the Telos Stress Device. With the participant’s ankle in approximately a neutral position, the examiner palpated the anterolateral aspect of the lateral malleolus and talus. Next, the examiner applied ultrasound conducting gel over the lateral aspect of the ankle and positioned the ultrasound probe. The examiner then oriented the probe to view the cross-sectional representation of the lateral malleolus, kept on the right side of the screen, while the lateral talar articular surface cartilage and the neck of the talus, where the ATFL attaches, were identified (Fig. 3). 
After optimizing the image and centering these bony landmarks within the field of view, the examiner saved the three images and removed the probe. Next, the stress device was applied to the ankle and three images of the ATFL were obtained while performing the anterior drawer stress by application of a posteriorly directed force of 120 N through the tibia (Fig. 2).\nFig. 2 Position of the foot during measurement. Neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress conditions were applied with a Telos Stress Device. Anterior drawer stress was applied using a force of magnitude 120 N for all subjects. a: Side view. b: Front view. c: Probe position\nFig. 3 Ultrasound image for measurement of the anterior talofibular ligament length. The ultrasound image captured directly over the anterior talofibular ligament origin and insertion allows the examiner to use a straight line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament\nUltrasonographic image analysis was performed using an ultrasonic diagnostic imaging system. The ATFL length was measured as the linear distance (mm) between the landmarks. The anterolateral aspect of the lateral malleolus was identified as the ATFL origin, and the peak of the talus was used as the insertion point. The average of the values measured from the three images was adopted. ATFL length data from each subject were used to calculate each participant’s normalized length change with application of anterior drawer stress (AD%) using the formula [(L stress – L neutral) / L neutral] × 100, where L is the ATFL length in millimeters [29].\nGJL was measured using the University of Tokyo joint laxity test [30] (Fig. 4). Mobility was measured at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist. Each item was assigned a value of 1 point, and a total of seven positions were measured; for the six major bilateral joints (i.e., aside from the spine), the left and right positions were assigned a value of 0.5 points each. For items with joint angle as the criterion, the joint angle was measured using a goniometer. Joint angle measurements were performed by one operator and recorded by one operator.\nFig. 4 The University of Tokyo joint laxity test. Laxity of six major joints in the body (hip, knee, ankle, shoulder, elbow, wrist) and of the spine were examined. Each item was assigned a value of 1 point (0.5 points each on the left and right sides for bilateral joints), for a total of 7 points", 
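As a small illustration of the scoring rule described above for the University of Tokyo joint laxity test (1 point per item, 0.5 points per positive side for the six bilateral joints, 7 points maximum), a possible tally function is sketched below. The input format and the helper name are hypothetical; the actual positivity criteria are those of the test itself.

```python
# Hypothetical tally of the University of Tokyo joint laxity test score.
# spine: True if the spine item is positive; bilateral: joint -> (left_positive, right_positive).
def gjl_score(spine, bilateral):
    score = 1.0 if spine else 0.0
    for left, right in bilateral.values():    # hip, knee, ankle, shoulder, elbow, wrist
        score += 0.5 * left + 0.5 * right     # 0.5 points per positive side
    return score                              # maximum possible score: 7.0

example = {"hip": (False, False), "knee": (True, False), "ankle": (True, True),
           "shoulder": (True, True), "elbow": (False, False), "wrist": (True, False)}
print(gjl_score(spine=True, bilateral=example))  # -> 4.0
```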
"To assess the inter-rater reliability of the measurements, 10 adult men (mean age, 21.0 years; mean height, 176 ± 6.5 cm; mean weight, 68.9 ± 6.3 kg) participated. The study content was fully explained to the participants, and written, informed consent was obtained from all participants. Measurement was performed using the above-described ATFL length measurement method; again, the measurement was performed three times, and the average of the three measurements was used. The measurement was repeated on two or more separate days within 1 week, and the intraclass correlation coefficient (ICC) (1, 3) was calculated. The resulting ICC (1, 3) for the ATFL measurements was 0.93 (neutral ATFL length) and 0.92 (stress ATFL length) (Table 1). According to the criteria of Landis et al. [31], reproducibility is considered to be almost perfect when the ICC is 0.81 or more. Therefore, the reproducibility of ATFL length measurement in this study was considered to be high.\nTable 1 Reliability of ultrasonography measurements\nLoad | ATFL length (mm), first rater | ATFL length (mm), second rater | ICC (1,3) | Reliability | MDD95 %\nNeutral | 21.9 ± 2.4 | 21.7 ± 2.5 | 0.929 | almost perfect | 1.8\n120 N stress | 23.2 ± 2.7 | 23.1 ± 2.2 | 0.920 | almost perfect | 1.9\nn = 10. Values are given as mean ± SD. ATFL anterior talofibular ligament; ICC intraclass correlation coefficient; MDD95 %: minimal detectable difference at the 95 % confidence interval", "Statistical analyses were performed using SPSS (version 24.0, SPSS Japan Inc., Tokyo, Japan). A one-way repeated measures analysis of variance was used to compare AD% and GJL in each phase of the menstrual cycle. Pearson’s correlation test was used to assess the relationship between AD% and GJL in each phase. Pearson’s chi-squared test was used to compare differences in assessments at the spine, and bilaterally at the hip, knee, ankle, shoulder, elbow and wrist in the ovulation phase. The level of significance was set at 5 %.", "There was no statistically significant difference in AD% between the phases (Table 2). GJL score was significantly higher in the ovulation (p = 0.016) and luteal phases (p = 0.026) compared with the early follicular phase (Table 3). AD% and GJL showed a positive correlation only in the ovulation phase (R = 0.551, P = 0.041) (Fig. 5). In all phases, there was a statistically significant number of ankle (p = 0.001) and shoulder (p = 0.001) joints that were positive for GJL (Table 4).\nTable 2 Change in anterior talofibular ligament length with the anterior drawer test (%) during the menstrual cycle\n | Early follicular phase | Late follicular phase | Ovulation phase | Luteal phase\nATFL length change (%) | 4.7 ± 3.6 | 4.4 ± 4.3 | 5.6 ± 5.7 | 4.0 ± 4.8\nn = 14. Values represent means ± SD. ATFL anterior talofibular ligament. There was no statistically significant difference in AD% between the phases\nTable 3 Change in general joint laxity during the menstrual cycle\n | Early follicular phase | Late follicular phase | Ovulation phase | Luteal phase\nGeneral joint laxity score (points) | 2.1 ± 1.0 | 2.6 ± 1.1 | 2.8 ± 1.3* | 2.8 ± 1.1**\nn = 14. Values represent means ± SD. *P = 0.016 vs. early follicular phase; **P = 0.026 vs. early follicular phase\nFig. 5 Correlation between general joint laxity and anterior talofibular ligament length change with anterior drawer stress in each cycle. AD (%): anterior talofibular ligament length change with anterior drawer stress; GJL: general joint laxity\nTable 4 Number of subjects positive for general joint laxity during the menstrual cycle\n | Spine | Hip | Knee | Ankle | Shoulder | Elbow | Wrist\nEarly follicular | 3/11 | 0/14** | 5/9 | 9/5** | 8/6** | 1/13** | 4/10\nLate follicular | 5/9 | 0/14* | 4/10 | 9/5* | 9/5* | 1/13* | 4/10\nOvulation phase | 6/8 | 0/14* | 4/10 | 10/4* | 9/5* | 2/12* | 6/8\nLuteal phase | 6/8 | 1/13** | 3/11 | 11/3** | 9/5** | 0/14** | 5/9\nn = 14. Data represent the Number of Positives (N) / Number of Negatives (N). *P = 0.001 vs. Negatives; **P < 0.001 vs. Negatives", "In this study, there was no statistically significant difference in AD% during the four phases of the menstrual cycle. GJL score, however, was significantly higher in the ovulation and luteal phases as compared with the early follicular phase. Further, AD% and GJL showed a positive correlation in the ovulation phase. Regarding the relationship between the menstrual cycle and tissue structure of the ACL, it has been reported that estrogen receptors are present in the human ACL [8], and in vivo studies have reported that anterior knee laxity (AKL) increases during the ovulation [11] and luteal phases [12]. Higher values for AKL in individuals with higher GJL scores compared to those with normal mobility have also been reported [19]. Previous studies have investigated the correlation between female hormones and plantar fascia elasticity, and reported that plantar fascia elasticity increases during ovulation, synchronous with high estrogen levels [13]. Therefore, although it is unclear whether estrogen receptors are present in the ATFL, it was suggested that women with high GJL scores might be more sensitive to the effects of estrogen on ATFL length change in the ovulation phase. This may also be one of the causes of LAS occurring in women.\nIn our study, there was a large standard deviation between individuals. In a previous study, patients were divided into three groups according to LAS severity for comparison of the ATFL elongation rate. The results showed that ATFL length change with the anterior drawer test was 1.3 ± 10.7 % in the control group, 14.0 ± 15.9 % in the group with a history of one ankle sprain more than 1 year earlier and no residual symptoms of instability or giving way (Coper group), and 15.6 ± 15.1 % in the chronic ankle instability (CAI) group, indicating significantly higher ATFL length change in the Coper and CAI groups as compared to the control group [29]. 
Due to the inclusion criterion of “no history of varus and valgus sprains in the past 6 months” used in our study, it is possible that our study included subjects in both the Coper and CAI groups. Future studies should consider the subjects’ past history, including the severity of ankle sprain. In addition, this study has a small sample size, which may contribute to the large standard deviation between individuals.\nSeveral limitations must be considered in this study. First, we did not measure hormone levels to clearly differentiate the four phases of the menstrual cycle. Instead of measuring hormone concentrations by blood sampling, we performed cycle classification using an ovulation kit, which is an inexpensive and non-invasive tool, and the BBT method. Since a correlation has been shown between urinary and serum luteinizing hormone levels, we inferred that the ovulation phase could be adequately defined using an ovulation kit. In addition, use of the BBT method enables estimation of ovulatory and anovulatory cycles [25, 26]. Thus, we expected that including subjects whose BBT showed a biphasic trend would enable us to select subjects with normal ovulatory cycles whose menstrual cycle could be classified into four phases. However, since the timing and phasing of estrogen and progesterone concentration changes vary considerably across the menstrual cycle [28, 32, 33], it might be necessary to classify the menstrual cycle according to hormone concentrations in serum, urine, or saliva in future studies. The second limitation is the sample size. The total number of subjects in this study was 14. This may limit the interpretation of the results, and a larger sample size is therefore required. The third limitation is that, because of the inclusion criterion of “no history of varus and valgus sprains in the past 6 months,” our study may have included subjects in both the Coper and CAI groups.", "It is unclear whether estrogen receptors are present in the ATFL, although it has been suggested that women with high GJL scores might be more sensitive to the effects of estrogen on ATFL length change during the ovulation phase. This may also be one of the causes of LAS occurring in women. In future studies, menstrual cycle phases should be identified by measuring hormone concentrations in order to fully examine the effects of the menstrual cycle on the risk factors of LAS." ]
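For reference, the two derived quantities used throughout the article, the normalized ATFL length change (AD%) defined in the Methods and the MDD95 values reported in Table 1, can be reproduced in a few lines. The MDD95 formula shown here (1.96 × √2 × SEM, with SEM = SD × √(1 − ICC)) is the conventional one and is an assumption; the paper does not state which formula was used.

```python
import math

def ad_percent(l_neutral_mm, l_stress_mm):
    # Normalized ATFL length change with anterior drawer stress, as defined in the Methods:
    # [(L_stress - L_neutral) / L_neutral] * 100
    return (l_stress_mm - l_neutral_mm) / l_neutral_mm * 100.0

def mdd95(sd_mm, icc):
    # Minimal detectable difference at the 95 % confidence level, assuming the standard
    # SEM-based formula (not stated in the paper): MDD95 = 1.96 * sqrt(2) * SD * sqrt(1 - ICC).
    return 1.96 * math.sqrt(2.0) * sd_mm * math.sqrt(1.0 - icc)

# Quick check against Table 1 (first-rater means of the reliability sample):
print(round(ad_percent(21.9, 23.2), 1))  # ~5.9 % length change between neutral and 120 N stress
print(round(mdd95(2.4, 0.929), 1))       # ~1.8 mm, consistent with the MDD95 reported for the neutral position
```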
[ "introduction", null, null, null, null, null, null, null, "results", "discussion", "conclusion" ]
[ "Anterior tibiofibular ligament", "Ovulation phase", "Lateral ankle ligamentous sprain" ]
Introduction: It was previously reported that the frequency of sports injuries in women is higher than that in men, suggesting a relationship between the menstrual cycle and sports injury [1, 2]. The menstrual cycle is controlled mainly by cyclic fluctuations in estradiol and progesterone [3], and is classified primarily into follicular, ovulation and luteal phases. Several studies [3–7] investigating the timing of injury of the anterior cruciate ligament (ACL) of the knee in relation to the menstrual cycle reported that ACL injuries often occur during the follicular [3, 5] and ovulation phases [4, 6]. It has also been reported that estrogen receptors are present in the human ACL [8], and that female hormones affect the tissue structure of the ACL [9]. In vivo studies have reported that anterior knee laxity [10] increases during ovulation [11] and luteal phases [12]. Additionally, plantar fasciitis, a type of sports injury, is more common in women. Previous studies have investigated female hormone levels in relation to plantar fascia elasticity, and reported that plantar fascia elasticity increases during ovulation, when estrogen levels are at their peak [13]. Thus, changes in the elasticity and joint laxity of ligaments and tendons have been observed in each phase of the menstrual cycle, and their relationship with sports injuries has been discussed. Lateral ankle ligamentous sprain (LAS) is one of the most common injuries resulting from recreational activities and competitive sports [14]. Of them, roughly 66–85 % involve injuries to the anterior talofibular ligament (ATFL) alone [15–17]. The intrinsic predictive factors of LAS include anatomic characteristics, functional deficits in isokinetic strength, flexibility, joint position sense, muscle reaction time, postural stability, gait mechanics, limb dominance, previous ankle sprains, and body mass index [14]. In recent years, generalized joint laxity (GJL) has also been shown to be a risk factor for ACL injury [18]. Stettler et al. [19] reported higher values for AKL in individuals with higher GJL scores compared to those with normal mobility. In addition, GJL scores are higher in women than in men [20]; this difference between men and women has been attributed to differences in sex hormone levels. LASs have also been reported to occur more frequently in women than in men [10]. However, the effects of hormone fluctuations in women on ankle joint laxity and GJL have not been investigated. The previous studies have described the usefulness of ultrasonography in diagnosing ankle ligament injuries [21, 22]. The application of anterior drawer stress during ultrasonography examination has allowed the evaluation of the changes in location between the ATFL origin and insertion [21]. The use of ultrasonography has demonstrated good-to excellent interrater reliability in the linear measurement of the ATFL under stress positions using the Telos stress device [22, 23]. Therefore, in this study, ankle joint laxity is measured using ATFL ratio of stress ultrasonography. The purpose of the present study was to examine the relationship between ankle joint laxity and GJL during the menstrual cycle, divided into four phases based on basal body temperature (BBT) and ovulation, assessed using an ovulation kit. We hypothesized that ankle joint laxity and GJL values were high during the ovulation period when estrogen levels are high. Methods: Participants We surveyed 49 female university students (cis gender) using a questionnaire and interview. 
Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment. We surveyed 49 female university students (cis gender) using a questionnaire and interview. Inclusion criteria were as follows: (1) no history of varus and valgus sprains in the past 6 months; (2) no history of surgery on the lower leg; (3) no oral contraceptive or other hormone-stimulating medication usage in the preceding 6 months [12]; and (4) physically active less than three times per week. Among the students who were screened, 14 women (mean age, 21.1 ± 0.3 years; mean height, 159.0 ± 4.5 cm; mean weight, 53.0 ± 6.1 kg; mean cycle days, 30.1 ± 2.8 days) with regular menstrual cycles and biphasic BBTs (indicative of ovulatory cycles) were enrolled. This study was approved by the University Ethics Review Committee (Approval Number 17,946). In addition, this study complied with the Declaration of Helsinki, and was conducted only after written consent was obtained from the study participants, who had been fully informed (in both verbal and written form) of the nature of the experiment. Evaluation of the menstrual cycle Based on the completed questionnaires and interviews conducted in the 49 female subjects, we asked 26 of them who had regular menstrual cycles and agreed to participate in this study to measure and record their BBT every morning for 1 to 2 months preceding the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were provided with ovulation kits (Doctor’s Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. Since luteinizing hormone (LH) in urine and serum have been shown to correlate with each other [24], the ovulation date was estimated using the ovulation kit results as a substitute for blood sampling. A recording sheet for creation of a BBT table was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. Based on these data, the first day of menstruation was considered day 1, and the mean BBT up to day 6 was calculated. When the BBT for three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, it was judged that the subject exhibited a biphasic cycle of low and high temperatures [25]. Women with biphasic cycles were classified as having a normal ovulatory pattern, while those with monophasic cycles were considered to have an anovulatory pattern [25, 26]. 
Evaluation of the menstrual cycle: Based on the completed questionnaires and interviews of the 49 women, 26 who had regular menstrual cycles and agreed to participate were asked to measure and record their BBT every morning for 1 to 2 months before the start of the experiment. Subjects were provided with basal thermometers (Citizen Electronic Thermometer CTEB503L, Citizen Systems Co., Ltd., Tokyo, Japan) for this purpose. To estimate the ovulation date, subjects were also provided with ovulation kits (Doctor's Choice One Step Ovulation Test Clear; Beauty and Health Research, Inc., CA, USA) to be used from the day after the end of menstruation. Since luteinizing hormone (LH) levels in urine and serum correlate with each other [24], the ovulation date was estimated from the ovulation kit results as a substitute for blood sampling. A recording sheet for a BBT chart was prepared, and daily BBT, menstrual period, and ovulation kit results were recorded. The first day of menstruation was defined as day 1, and the mean BBT up to day 6 was calculated. When the BBT on three consecutive days after ovulation (as determined by the ovulation kit) was at least 0.2 °C higher than this mean value, the subject was judged to exhibit a biphasic cycle of low and high temperatures [25]. Women with biphasic cycles were classified as having a normal ovulatory pattern, whereas those with monophasic cycles were considered anovulatory [25, 26]. Of the 26 women whose cycles were monitored, two were excluded because their BBT was monophasic, and ATFL and GJL were measured in the remaining 24 subjects. Ten of these 24 were subsequently excluded for the reasons indicated in Fig. 1, leaving a final study population of 14 women with a cycle length of 25 to 38 days [27] and biphasic BBTs, in whom ATFL length and GJL measurements were performed. Fig. 1 Subject selection.
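The biphasic-BBT rule described above is simple enough to express in a few lines of code. The following Python sketch is a hypothetical helper, not part of the study's actual workflow; it assumes that day 1 is the first day of menstruation, that the reference value is the mean BBT of days 1–6, and that a cycle counts as biphasic when BBT on three consecutive days after the kit-positive day is at least 0.2 °C above that reference.

```python
from statistics import mean

def is_biphasic(bbt: list[float], ovulation_day: int,
                threshold: float = 0.2, run_length: int = 3) -> bool:
    """Classify a menstrual cycle as biphasic (ovulatory) or monophasic.

    bbt           -- daily basal body temperatures, index 0 = day 1 of menstruation
    ovulation_day -- 1-based cycle day on which the ovulation kit turned positive
    """
    reference = mean(bbt[:6])                # mean BBT of days 1-6
    post_ovulation = bbt[ovulation_day:]     # temperatures after the kit-positive day
    consecutive = 0
    for temp in post_ovulation:
        consecutive = consecutive + 1 if temp >= reference + threshold else 0
        if consecutive >= run_length:        # three consecutive "high" days
            return True
    return False

# Example: a 30-day cycle with a clear temperature rise after day 15
cycle = [36.3] * 15 + [36.7] * 15
print(is_biphasic(cycle, ovulation_day=15))  # True
```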
Timing of measurement: ATFL length and GJL were measured once in each of four phases of the menstrual cycle: the early follicular, late follicular, ovulatory, and luteal phases. Measurements were taken in the early follicular phase 3 to 4 days after the start of menstruation, in the late follicular phase 3 to 4 days after the end of menstruation, in the ovulatory phase 2 to 4 days after the day on which the ovulation kit indicated a positive result, and in the luteal phase 5 to 10 days after the start of the high-temperature phase. To account for possible diurnal variation, all measurements in all subjects were performed between 8:00 a.m. and 12:00 p.m. [28].
Measurement methods: Ultrasound imaging was performed with an Aplio 500 system (Toshiba Medical Systems, Tochigi, Japan) and a 10-MHz linear probe. The test positions were identical to those of previous studies [23, 29] and were examined in the following order: (1) neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity resting on the bed; and (2) anterior drawer stress applied to the ankle about 3 cm proximal to the lateral malleolus (Fig. 2). Stress was applied with a Telos Stress Device (Telos SE, Aimedic MMT, Japan) at a magnitude of 120 N for all participants. The measurement was performed three times, with three examiners involved: two performing the ultrasonographic examination and one operating the Telos Stress Device. With the ankle in an approximately neutral position, the examiner palpated the anterolateral aspect of the lateral malleolus and the talus, applied ultrasound gel over the lateral aspect of the ankle, and positioned the probe so that the cross-section of the lateral malleolus was kept on the right side of the screen while the lateral talar articular cartilage and the neck of the talus, where the ATFL attaches, were identified (Fig. 3). After optimizing the image and centering these bony landmarks in the field of view, the examiner saved three images and removed the probe. The stress device was then applied to the ankle, and three further images of the ATFL were obtained during anterior drawer stress produced by a posteriorly directed force of 120 N through the tibia (Fig. 2). Fig. 2 Position of the foot during measurement. Neutral ankle position with about 30° of plantar flexion, with the subject lying on their side and the lower extremity placed on the bed. Ankle stress was applied with a Telos Stress Device at a force of 120 N for all subjects. a side view; b front view; c probe position. Fig. 3 Ultrasound image for measurement of the anterior talofibular ligament length. An image captured directly over the ATFL origin and insertion allows the examiner to use a straight-line measurement tool to draw a line from the anterolateral aspect of the lateral malleolus to the talus, points that correspond to the anatomic attachment sites of the ligament.
Ultrasonographic images were analyzed on the ultrasound system. ATFL length was measured as the linear distance (mm) between two landmarks: the anterolateral aspect of the lateral malleolus (ATFL origin) and the peak of the talus (insertion). The average of the values measured on the three images was used. The ATFL length data for each subject were used to calculate the normalized length change under anterior drawer stress (AD%) with the formula AD% = [(L stress − L neutral) / L neutral] × 100, where L is the ATFL length in millimeters [29]. GJL was assessed with the University of Tokyo joint laxity test [30] (Fig. 4). Mobility was examined at the spine and bilaterally at the hip, knee, ankle, shoulder, elbow, and wrist. Each of the seven test items was worth 1 point; for the six bilateral joints (i.e., all items except the spine), the left and right sides were scored 0.5 points each. For items whose criterion was a joint angle, the angle was measured with a goniometer; joint angle measurements were performed by one operator and recorded by one operator. Fig. 4 The University of Tokyo joint laxity test. Laxity of six major joints (hip, knee, ankle, shoulder, elbow, wrist) and of the spine was examined. Each item was worth 1 point (0.5 points each for the left and right sides of bilateral joints), for a maximum total of 7 points.
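As a minimal illustration of how the two outcome measures are derived, the Python sketch below computes AD% from a neutral and a stressed ATFL length and sums a University of Tokyo-style laxity score from per-item results. The function names, data layout, and example values are hypothetical and are not taken from the study.

```python
def anterior_drawer_percent(length_neutral_mm: float, length_stress_mm: float) -> float:
    """AD% = [(L_stress - L_neutral) / L_neutral] * 100."""
    return (length_stress_mm - length_neutral_mm) / length_neutral_mm * 100.0

def gjl_score(spine_positive: bool, bilateral_results: dict[str, tuple[bool, bool]]) -> float:
    """University of Tokyo-style score: 1 point for the spine,
    0.5 points per positive side for each of the six bilateral joints (max 7)."""
    score = 1.0 if spine_positive else 0.0
    for left, right in bilateral_results.values():
        score += 0.5 * left + 0.5 * right
    return score

# Example: mean neutral length 21.9 mm, stressed length 23.2 mm -> AD% of about 5.9
print(round(anterior_drawer_percent(21.9, 23.2), 1))
print(gjl_score(False, {"hip": (False, False), "knee": (False, False),
                        "ankle": (True, True), "shoulder": (True, False),
                        "elbow": (False, False), "wrist": (False, True)}))  # 2.0
```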
Intra-rater reliability: To assess the reliability of the measurements, 10 adult men (mean age, 21.0 years; mean height, 176 ± 6.5 cm; mean weight, 68.9 ± 6.3 kg) were recruited. The study content was fully explained to these participants, and written informed consent was obtained from all of them. ATFL length was measured with the method described above; again, each measurement was performed three times and the average of the three values was used. The measurement was repeated on two or more separate days within 1 week, and the intraclass correlation coefficient, ICC (1, 3), was calculated. The resulting ICC (1, 3) was 0.92 for the neutral ATFL length and 0.93 for the stressed ATFL length (Table 1). According to the criteria of Landis et al. [31], reproducibility is considered almost perfect when the ICC is 0.81 or higher; the reproducibility of the ATFL length measurement in this study was therefore considered high.
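For readers who want to reproduce this kind of reliability analysis, the short sketch below computes ICC (1, k) from a one-way random-effects ANOVA decomposition and derives the minimal detectable difference at the 95 % confidence level (MDD95 %). It is a generic illustration of the standard formulas, not the exact computation used in the study, and the example data are simulated.

```python
import numpy as np

def icc_1k_and_mdd95(scores: np.ndarray) -> tuple[float, float]:
    """ICC (1, k) and MDD95% for a (subjects x sessions) matrix of measurements.

    ICC(1, k) = (MS_between - MS_within) / MS_between, from a one-way
    random-effects ANOVA with subjects as the grouping factor.
    MDD95% = 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC).
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((scores - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / ms_between
    sem = scores.std(ddof=1) * np.sqrt(1 - icc)
    mdd95 = 1.96 * np.sqrt(2) * sem
    return icc, mdd95

# Example with simulated ATFL lengths (mm) for 10 subjects measured in 2 sessions
rng = np.random.default_rng(0)
true_lengths = rng.normal(22.0, 2.5, size=(10, 1))
sessions = true_lengths + rng.normal(0.0, 0.5, size=(10, 2))
icc, mdd95 = icc_1k_and_mdd95(sessions)
print(f"ICC(1,k) = {icc:.2f}, MDD95% = {mdd95:.1f} mm")
```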
Table 1. Reliability of ultrasonography measurements (n = 10)
Load | ATFL length (mm), first rater | ATFL length (mm), second rater | ICC (1,3) | Reliability | MDD95 %
Neutral | 21.9 ± 2.4 | 21.7 ± 2.5 | 0.929 | almost perfect | 1.8
120 N stress | 23.2 ± 2.7 | 23.1 ± 2.2 | 0.920 | almost perfect | 1.9
Values are given as mean ± SD. ATFL, anterior talofibular ligament; ICC, intraclass correlation coefficient; MDD95 %, minimal detectable difference at the 95 % confidence interval.
Statistical analysis: Statistical analyses were performed using SPSS (version 24.0, SPSS Japan Inc., Tokyo, Japan). A one-way repeated-measures analysis of variance was used to compare AD% and GJL across the phases of the menstrual cycle. Pearson's correlation test was used to assess the relationship between AD% and GJL in each phase. Pearson's chi-squared test was used to compare the assessments at the spine and, bilaterally, at the hip, knee, ankle, shoulder, elbow, and wrist in the ovulation phase. The level of significance was set at 5 %.
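The statistical workflow described above can be reproduced with standard open-source tools. The sketch below shows one way to do so in Python with statsmodels and SciPy; the data are randomly generated with the study's structure (14 subjects, 4 phases), so the variable names and values are illustrative only, and the 50/50-split chi-squared test is one plausible reading of the positive-versus-negative comparison, not the authors' confirmed procedure.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
phases = ["early_follicular", "late_follicular", "ovulation", "luteal"]

# Illustrative long-format data: 14 subjects, one AD% and GJL value per phase
df = pd.DataFrame(
    [{"subject": s, "phase": p,
      "ad_percent": rng.normal(5.0, 4.0),
      "gjl": rng.normal(2.5, 1.0)}
     for s in range(14) for p in phases]
)

# One-way repeated-measures ANOVA on AD% (and likewise on GJL) across phases
anova = AnovaRM(df, depvar="ad_percent", subject="subject", within=["phase"]).fit()
print(anova.anova_table)

# Pearson correlation between AD% and GJL within a single phase
ovulation = df[df["phase"] == "ovulation"]
r, p = stats.pearsonr(ovulation["ad_percent"], ovulation["gjl"])
print(f"ovulation phase: r = {r:.3f}, p = {p:.3f}")

# Chi-squared test on counts of GJL-positive vs -negative subjects at one joint
positives, negatives = 10, 4
chi2, p = stats.chisquare([positives, negatives])   # tests against a 50/50 split
print(f"ankle: chi2 = {chi2:.2f}, p = {p:.3f}")
```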
Results: There was no statistically significant difference in AD% among the four phases (Table 2). The GJL score was significantly higher in the ovulation (p = 0.016) and luteal (p = 0.026) phases than in the early follicular phase (Table 3). AD% and GJL showed a positive correlation only in the ovulation phase (R = 0.551, P = 0.041) (Fig. 5). In all phases, the numbers of subjects positive for GJL at the ankle (p = 0.001) and shoulder (p = 0.001) were statistically significant (Table 4).
Table 2. Change in anterior talofibular ligament length with the anterior drawer test (AD%) during the menstrual cycle (n = 14)
Measure | Early follicular phase | Late follicular phase | Ovulation phase | Luteal phase
ATFL length change (%) | 4.7 ± 3.6 | 4.4 ± 4.3 | 5.6 ± 5.7 | 4.0 ± 4.8
Values represent means ± SD. ATFL, anterior talofibular ligament. There was no statistically significant difference in AD% among the phases.
Table 3. Change in general joint laxity during the menstrual cycle (n = 14)
Measure | Early follicular phase | Late follicular phase | Ovulation phase | Luteal phase
General joint laxity score (points) | 2.1 ± 1.0 | 2.6 ± 1.1 | 2.8 ± 1.3* | 2.8 ± 1.1**
Values represent means ± SD. *P = 0.016 vs. early follicular phase; **P = 0.026 vs. early follicular phase.
Fig. 5 Correlation between general joint laxity and anterior talofibular ligament length change with anterior drawer stress in each phase. AD (%), anterior talofibular ligament length change with anterior drawer stress; GJL, general joint laxity.
Table 4. Number of subjects positive for general joint laxity during the menstrual cycle (n = 14)
Phase | Spine | Hip | Knee | Ankle | Shoulder | Elbow | Wrist
Early follicular | 3/11 | 0/14** | 5/9 | 9/5** | 8/6** | 1/13** | 4/10
Late follicular | 5/9 | 0/14* | 4/10 | 9/5* | 9/5* | 1/13* | 4/10
Ovulation phase | 6/8 | 0/14* | 4/10 | 10/4* | 9/5* | 2/12* | 6/8
Luteal phase | 6/8 | 1/13** | 3/11 | 11/3** | 9/5** | 0/14** | 5/9
Data represent the number of positives / number of negatives. *P = 0.001 vs. negatives; **P < 0.001 vs. negatives.
Discussion: In this study, there was no statistically significant difference in AD% among the four phases of the menstrual cycle. The GJL score, however, was significantly higher in the ovulation and luteal phases than in the early follicular phase, and AD% and GJL were positively correlated in the ovulation phase. Regarding the relationship between the menstrual cycle and the tissue structure of the ACL, estrogen receptors have been shown to be present in the human ACL [8], and in vivo studies have reported that AKL increases during the ovulatory [11] and luteal [12] phases. Higher AKL values have also been reported in individuals with higher GJL scores than in those with normal mobility [19]. Previous studies investigating the correlation between female hormones and plantar fascia elasticity reported that plantar fascia elasticity increases during ovulation, synchronous with high estrogen levels [13]. Therefore, although it is unclear whether estrogen receptors are present in the ATFL, our findings suggest that women with high GJL scores might be more sensitive to the effects of estrogen on ATFL length change in the ovulation phase, and that this may be one of the factors underlying the occurrence of LAS in women.
In our study, there was a large standard deviation between individuals. In a previous study, patients were divided into three groups according to LAS severity and their ATFL elongation rates were compared: ATFL length change with the anterior drawer test was 1.3 ± 10.7 % in the control group, 14.0 ± 15.9 % in the Coper group (a history of one ankle sprain more than 1 year earlier with no residual symptoms of instability or giving way), and 15.6 ± 15.1 % in the chronic ankle instability (CAI) group, with the change significantly larger in the Coper and CAI groups than in controls [29]. Because our inclusion criterion was only "no history of varus or valgus sprain in the past 6 months," our sample may have included subjects who would fall into the Coper or CAI groups; future studies should take subjects' history, including the severity of previous ankle sprains, into account. In addition, the sample size of this study was small, which may also have contributed to the large inter-individual variation. Several limitations must be considered. First, we did not measure hormone levels to clearly differentiate the four phases of the menstrual cycle. Instead of measuring hormone concentrations in blood samples, we classified the cycle using an ovulation kit, an inexpensive and non-invasive tool, together with the BBT method. Since urinary and serum luteinizing hormone levels are correlated, we inferred that the ovulation phase could be adequately defined with an ovulation kit, and the BBT method allows ovulatory and anovulatory cycles to be distinguished [25, 26]. We therefore expected that including only subjects with a biphasic BBT would select women with normal ovulatory cycles whose menstrual cycle could be divided into four phases. However, because the timing and phasing of changes in estrogen and progesterone concentrations vary considerably across the menstrual cycle [28, 32, 33], future studies may need to classify the menstrual cycle according to hormone concentrations in serum, urine, or saliva. The second limitation is the sample size: with only 14 subjects, the interpretation of the results is limited, and a larger sample is required. The third limitation, as noted above, is that the inclusion criterion of "no history of varus or valgus sprain in the past 6 months" may have allowed subjects corresponding to the Coper and CAI groups into the study. Conclusions: It is unclear whether estrogen receptors are present in the ATFL, but our results suggest that women with high GJL scores might be more sensitive to the effects of estrogen on ATFL length change during the ovulation phase, and this may be one of the factors contributing to the occurrence of LAS in women. In future studies, menstrual cycle phases should be identified by measuring hormone concentrations in order to fully examine the effects of the menstrual cycle on the risk factors for LAS.
Background: The purpose of the present study was to examine the relationship between ankle joint laxity and general joint laxity (GJL) in relation to the menstrual cycle, which was divided into four phases based on basal body temperature and ovulation, assessed using an ovulation kit. Methods: Participants were 14 female college students (21-22 years) with normal menstrual cycles (cis gender). Anterior drawer stress to a magnitude of 120 N was applied for all participants. Anterior talofibular ligament (ATFL) length was measured as the linear distance (mm) between its points of attachment on the lateral malleolus and talus using ultrasonography. Data on ATFL length from each subject were used to calculate each subject's normalized length change with anterior drawer stress (AD%). The University of Tokyo method was used for evaluation of GJL. AD% and GJL were measured once in each menstrual phase. Results: There was no statistically significant difference between AD% in each phase. GJL score was significantly higher in the ovulation and luteal phases compared with the early follicular phase. AD% and GJL showed a positive correlation with each other in the ovulation phase. Conclusions: Although it is unclear whether estrogen receptors are present in the ATFL, the present study suggests that women with high GJL scores might be more sensitive to the effects of estrogen, resulting in ATFL length change in the ovulation phase.
[The future of outpatient surgery].
35925293
BACKGROUND: Over the past 30 years, outpatient surgery has developed into an indispensable pillar of patient care in Germany, without its full potential coming to light.
MATERIALS AND METHODS: Presentation and comparison of outpatient surgery numbers from clinics and practices, and a critical analysis of their development.
RESULTS: After reaching a maximum number of outpatient operations in practices and clinics in 2015, there has been a location-independent decrease and stagnation due to underfunding of outpatient surgical structures and a shortage of resources.
CONCLUSION: Outpatient surgery represents a patient-friendly and cost-effective alternative to inpatient interventions, provided that medical and social indications rule out an increased risk. The expansion of outpatient surgery has so far provided relief to the cost-intensive hospital sector and, in view of the shortage of nurses and physicians, will do so to an even greater extent as soon as politicians and payers commit to remuneration that is performance-related and actually covers the costs. Furthermore, the future of the healthcare system also depends on the future of outpatient surgery, which is to be assessed as positive.
[ "Ambulatory Surgical Procedures", "Cost-Benefit Analysis", "Germany", "Hospital Costs", "Humans", "Outpatients" ]
9257115
Practical conclusions
Thanks to continuous improvements, above all in the office-based sector, outpatient surgery is characterized by lean structures and professional teamwork.
Outpatient surgery can generate savings potential.
From the patients' perspective, what counts above all are core medical competencies, their quality, and the resulting patient safety and patient satisfaction.
Cost-covering hygiene surcharges are rejected by all payers.
The next emerging stage of escalation brings non-specialist, financially strong private investors onto the scene, with an unspoken health-policy invitation to enter the outpatient healthcare market.
An expansion of outpatient surgical capacity forced under the current framework conditions would no longer have an economically justifiable cost-benefit ratio and, through the additional demand for resources, would jeopardize existing workflow structures and thus the near future of outpatient surgery itself.
[ "Historie und Status quo", "Patientenperspektive", "30 Jahre Erfahrung", "Stagnation trotz Potenzial", "Forderungen an Politik und Kassen", "Mehr Kapazitäten erfordern mehr Ressourcen", "Praxisnachfolge entscheidet über Zukunft des ambulanten Operierens", "Positive Bilanz", "Fehlentwicklungen", "Potenziale und Herausforderungen an das ambulante Operieren", "Zusammenfassung" ]
[ "Die beschriebenen Vorbehalte sind schon seit geraumer Zeit ausgeräumt, weil hohe Operationszahlen mit großer Patientenzufriedenheit und guter Ergebnisqualität auf Dauer die besseren Argumente sind [8]. Dennoch ist die Zukunft des ambulanten Operierens trotz dieser Ausgangsposition weiterhin volatil, denn bisher gelang es ambulanten Operateuren und Anästhesisten nicht, neben Patienten und Angehörigen auch Gesundheitspolitik und Kostenträger von der Hochwertigkeit ihrer ärztlichen Leistungen so zu überzeugen, dass beide einer angemessenen sowie kostendeckenden Vergütung zustimmen [23, 26]. Hier gilt es, noch nachhaltiger politische Überzeugungsarbeit zu leistet, um die Zukunft des ambulanten Operierens in Praxen und Kliniken unter gleichen und akzeptablen Rahmenbedingungen langfristig zu sichern [15].\nDas ambulante Operieren hat in den letzten Jahrzehnten einen enormen Entwicklungsschub erfahren\nDer Begriff „ambulant“ rührt ursprünglich vom von Patienten zu Patienten sich begebenden Chirurgen her [6]. Sowohl dessen Berufsbild als auch die Definition des ambulanten Operierens haben im Verlauf der Medizingeschichte und v. a. in den letzten Jahrzehnten einen enormen Entwicklungsschub erfahren. Ambulantes Operieren in Deutschland bedeutet im Grund nichts anderes, als dass Patienten die Zeit vor und nach ihrer Operation zu Hause verbringen und der ambulante Eingriff innerhalb der Aufenthaltsdauer der Patienten ohne stationäre Aufnahme zu erfolgen hat. Im AOP-Katalog (ambulantes Operieren und sonstige stationsersetzende Eingriffe im Krankenhaus) sind ambulant durchführbare und stationsersetzende Operationen aufgeführt, die jeweiligen Vergütungen sind im einheitlichen Bewertungsmaßstab (EBM) geregelt und gelten für Praxen und Kliniken seit Inkrafttreten des Gesundheitsstrukturgesetzes im Jahr 1993 [10].\nWas die Struktur- und Prozessqualität angelangt, so sind Klinikstandards verbindlich. Es wird dabei weder den dynamisch steigernden Kostenstrukturen in Kliniken und Praxen noch der Implementierung des medizinischen Fortschritts innerhalb des ambulanten Settings kostenkalkulatorisch korrekt Rechnung getragen. Von dieser Abspaltung der ökonomischen Sicht auf das komplexe medizinische Produkt ambulantes Operieren und der Negation seines bewiesenen Mehrwertes sind alle regelmäßig ambulant operierenden Fachgebiete in Praxen und Kliniken betroffen [14, 16].\nDurch kontinuierliche Verbesserungsprozesse v. a. im niedergelassenen, d. h. selbständigen und selbsthaftende Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus. Beides ermöglicht eine fachlich breit ausgerichtete sowie flächendeckende chirurgisch-operative Versorgung der Patienten. Hinzu kommt die Etablierung schonender OP- und Anästhesieverfahren als innovatives und motivierendes Moment.", "Die Präferenz von Operationen im ambulanten Kontext begründen Patienten und Angehörige damit, dass sie in der Facharztpraxis von einem selbst gewählten Facharzt behandelt werden, zu dem ein persönliches Vertrauensverhältnis besteht, und sie anschließend die Genesungszeit zuhause verbringen können – in der Gewissheit, sich bei Bedarf vertrauensvoll an Operateur oder Anästhesisten wenden zu können. Diese Einschätzung umfasst die eigentliche Philosophie des ambulanten Operierens, nämlich den gesamten Behandlungsablauf aus einer zuverlässigen Hand. 
Damit ist ambulantes Operieren unstrittig zu einem festen Bestandteil und einer der tragenden Säulen der Patientenversorgung allgemein und speziell im Hinblick auf die Verwirklichung einer integrierten, fach- und sektorenübergreifenden Medizin geworden und nimmt deshalb eine Vorreiter- und Vorbildfunktion bei der anstehenden Ambulantisierungsoffensive ein. Dieser für alle sichtbare Leuchtturm ambulantes Operieren wurde im Übrigen auch aus den Steinen errichtet, die seiner Entwicklung immer wieder bewusst in den Weg gelegt werden, auch durch gezielte Infragestellung [25]. Notorischen Zweiflern sei an dieser Stelle empfohlen, zur besseren Ein- und Weitsicht die Kassenbrille abzunehmen und gegen die Patientensichtbrille einzutauschen [12].", "Ambulante Operateure und Anästhesisten dürfen zu Recht mit ihrem Erfahrungsvorsprung von 30 Jahren auf ihre besondere Rolle bei der aktuellen Meinungsbildung und den anstehenden Entscheidungen bestehen. Wer in medizinisch-fachlichen Belangen und Fragen der Patientensicherheit die erforderliche Expertise vorweist, muss auch in so wichtigen gesundheitspolitischen Entscheidungen wie der Ambulantisierung angehört und in seinen Empfehlungen respektiert werden [22]. Zu groß ist sonst die Gefahr weiterer unerwünschter Nebenwirkungen und Kollateralschäden. Von diesen ist auch das ambulante Operieren im Verlauf seiner Entwicklung nicht verschont geblieben, wie ein aktuell zu verzeichnender Rückgang der Patienten- und Eingriffszahlen nach einer schon seit 2015 zu beobachtenden Stagnation belegt [23]. Dazu beweist dieser Trend das Gegenteil der seit Anfang an unbewiesenen Unterstellung einer unkontrollierten Mengenausweitung und drohender Kostenexplosion. Und es ist der Punkt in der – wohlgemerkt medizinischen – Erfolgsgeschichte des ambulanten Operierens erreicht, an dem die bisherigen einrichtungsinternen Rationalisierungsmaßnahmen ihren Zenit erreicht bzw. überschritten haben und vorhandene personelle und finanzielle Ressourcen definitiv an ihre Grenzen angekommen sind [16]. Ein unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf und somit bereits die nahe Zukunft des ambulanten Operierens gefährden.", "Der 2015 eingetretene Stillstand früherer konstanter Steigerungsraten betrifft Kliniken und Praxen gleichsam; seither sind Schwankungen in den Patientenzahlen und bestenfalls eine Plateaubildung zu verzeichnen [1]. Dies lässt sich mit Zahlen konkret belegen und sollte als Warnhinweis und Weckruf verstanden werden. Das Maximum der erfassten ambulanten Operationen in Kliniken betrug knapp 2 Mio., in den Praxen waren es 8,75 Mio. Zuletzt wurden aus dem niedergelassenen Bereich 2020 8 Mio. Eingriffe erfasst, was dem bereits 2008 erzielten Ergebnis entspricht. 
Retrospektiv verzeichnen Praxen in der Dekade zwischen 1998 und 2008 den größten Zuwachs an ambulanten Operationen, in Kliniken fand diese Entwicklung zeitlich versetzt von 2004 bis 2015 statt, mit einem Optimum von 7,1 % Anteil an allen Krankenhausbehandlungen [17].\nDurch ambulantes Operieren per se und dessen gezielten Ausbau kann man Einsparpotenziale generieren\nNeben den genannten Vorteilen für Patienten wird auch das Gesundheitssystem durch ambulante Operationen begünstigt, dank einer im Vergleich zu stationären Eingriffen ablaufbedingt geringeren Inanspruchnahme von Kapazitäten und damit einhergehend geringeren Kostenanteils [3]. Damit ist grundsätzlich die Möglichkeit verbunden, durch ambulantes Operieren per se und durch dessen gezielten Ausbau relevante Einsparpotenziale zu generieren, wie P. Oberende bereits 2010 in seinem vom BAO initiierten Gutachten belegen konnte [24].\nDie bisherigen Einspareffekte dienen den verantwortlichen Entscheidungsträgern unwissentlich als Deckungsbeiträge für anderweitige Defizite oder werden bestenfalls in neue medizinische Bereiche reinvestiert. Der Aspekt regelmäßiger Reinvestitionen in die Strukturen des ambulanten Operierens und deren Ausbau wird dabei völlig außer Acht gelassen, wie überhaupt das ambulante Operieren unter den Schirm einer peinlich anmutenden Milchmädchenrechnung gestellt wurde, mit dem vorherrschenden Ziel, für jede ambulante Operation eine stationäre einzusparen. Diese schlichte Modellrechnung kann nicht aufgehen, schon allein aus dem Grund, dass der medizinisch-therapeutische Teil des Gesundheitssystems ein im Wandel begriffener ist und per se einem ständigen Verbesserungsprozess unterliegt. So kompensiert das ambulante Operieren den weiter voranschreitenden Klinik- und Bettenabbau zusätzlich. Hier kann man in der Tat statt von einer nur stationsersetzenden Intention des Gesetzgebers ebenso von einer stationszersetzenden sprechen, wenn man die Entwicklung des Gesundheitssystems in den beiden letzten Dekaden betrachtet.\nDie gesamte ambulante niedergelassene fachärztliche Medizin im Umfeld von geplanten bzw. umgesetzten Klinikschließungen trägt entscheidend zur Aufrechterhaltung einer flächendeckenden wohnortnahen Patientenversorgung bei. Die getroffenen Maßnahmen zur Pandemiebekämpfung kommen streckenweise Klinikschließungen gleich und beruhen auf der sicheren Annahme einer weiter funktionierenden ambulanten Patientenversorgung. Und dies im Gegensatz zu Kliniken ohne Unterstützungsmaßnahmen für ambulante Facharztpraxen oder Anerkennung der Praxismitarbeiterinnen in Form eines gewährten Coronabonus.", "Seit 3 Jahrzehnten ist der Generalverdacht, ambulante Medizin allgemein und insbesondere das anspruchsvolle, aufwändige ambulante Operieren seien Kostentreiber (also eine unnötige luxuriöse doppelte Versorgungsschiene) und folglich durch gesundheitspolitische Gegenmaßnahmen (wie die gegen alle ökonomische Vernunft von oben verordnete Punktwertabsenkung im Rahmen des EBM 2000 plus) in ihren Entwicklungen zunehmend einzuschränken, ständiger Begleiter ambulanter Fachärzt:innen. Und dies gegen den erklärten Patienten- und Wählerwillen, der in Umfragen mit konstant hohen Zustimmungswerten für ambulant operierende Fachärzt:innen zum Ausdruck kommt [7].\nEs ist an der Zeit, dem hohen Stellenwert einer dem gesellschaftlichen Wertewandel unterworfenen, aber mehrheitlich stabilen, von Vertrauen geprägten Arzt-Patienten-Beziehung die nötige politische Aufmerksamkeit und Respekterweisung entgegenzubringen. 
Diese Forderung beinhaltet, das ambulante Operieren als wesentlichen Bestandteil der chirurgisch-operativen Versorgung der Patienten in Deutschland mit großem Steigerungspotenzial anzuerkennen und seine Weiterentwicklung (statt zu stören) gezielt zu fördern. Ein Blick auf die umfangreichen und nicht immer Sinn und Nutzen stiftenden GKV-Werbekampagnen (Gesetzliche Krankenversicherung) zeigt Möglichkeiten des Abflusses von Mitteln aus der aktiven Patientenversorgung und deren Zweckentfremdung für Gesundheitsleistungen als Sonderangebot und Marketinginstrument trotz angeblicher chronisch leerer Kassenlage.\nWas aus Sicht der Patienten wirklich zählt, das sind immer noch ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit. Dieser Widerspruch zwischen dem Selbstverständnis patientennaher und -ferner Akteure zeigt einmal mehr den bereits vollzogenen Wandel unseres Gesundheitswesens vom Sozialsystem hin zu Gesundheitsmarkt und -wirtschaft; diesem Wandel bzw. seinen Ursachen und deren Folgen ist auch das ambulante Operieren unterworfen. Vom dazwischen liegenden ökonomischen Entwicklungspotential bleiben ambulante OP-Einrichtungen ausgeschlossen, da sie in wenig flexible und noch weniger reflektierte Rahmenbedingungen gezwängt sind. Diese Tatsache ist hinreichend belegt, bekannt und Gegenstand zahlreicher Bemühungen der davon betroffenen Berufsverbände, wie der Hinweis, dass bereits vor 2019 und dem Beginn der Pandemie die ambulanten OP-Fallzahlen vorwiegend in Praxen rückläufig waren. Dieser Effekt wurde durch COVID-19 („coronavirus disease 2019“) zusätzlich verstärkt und betrifft seither auch die im stationären Setting erhobenen Fallzahlen [1].\nMarginale Schwankungen ambulanter OP-Zahlen im niedergelassenen vertragsärztlichen Bereich seit Beginn 2019 und bis Jahresmitte 2021 lassen im besten Annahmefall einen leicht steigenden Trend erkennen und lassen sich mit der Fluktuation abgesagter elektiver Eingriffe aus den Kliniken zu niedergelassenen Operateuren erklären. Den gemeinsamen Bemühungen um eine bestmögliche Aufrechterhaltung der Patientenversorgung unter Pandemiebedingungen muss eine Diskussion über den Terminus elektiver Eingriff folgen. Er ist der aktuellen Bedarfssituation anzupassen und neu zu definieren, auch in Hinblick auf den neuen und umfangreicheren AOP-Katalog und dessen machbare Umsetzung, aber auch unter dem Gesichtspunkt einer Zunahme der jetzigen Kapazitätsengpässe und möglicher neuer Krisen oder Pandemielagen.", "Für eine signifikante Steigerung ambulanter Operationen im niedergelassenen Bereich ist die Aufstockung von Personal, Räumlichkeiten und Arbeitszeiten nötig. Die Möglichkeiten dafür werden durch fehlende Kapazitäten deutlich minimiert, die wiederum mit der beschriebenen Unterfinanzierung korrelieren.\nKostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt\nErschwerend kommt hinzu, dass seit einer Dekade kostendeckende Hygienezuschläge von allen Kostenträgern, trotz eindeutiger Kausalität auf der Grundlage der Umsetzung einer staatlich angeordneten Gesetzesnovellierung, abgelehnt werden [13]. Diese inakzeptable und mit den Regeln eines Rechtsstaates und dessen Grundsatzgebot von Treu und Glauben nur sehr schwer in Vereinbarung zu bringende Verweigerungshaltung findet ihre kompromisslose Fortsetzung auch in den Zeiten der Pandemie.
Und dies ausdrücklich unter Beifall der verantwortlichen Gesundheitspolitik, die das Regelwerk eines „ehrbaren Kaufmanns“ im Umgang mit selbständigen fachärztlichen Freiberuflern bewusst außer Kraft gesetzt hat. Besser kann man den Tatbestand der ordnungspolitischen Dysfunktionalität an einem konkreten Beispiel nicht festmachen, ebenso die zum Dauerproblem gewordene Vergütungssystematik für ambulant operierende Facharztpraxen, welche wegen hoher Fixkosten trotz der extrabudgetären Vergütung des ambulanten Operierens ein hohes wirtschaftliches Risiko eingehen [26].\nDiese Einschätzung zeigt darüber hinaus, dass die grundsätzlichen Rahmenbedingungen zu lange stagnieren und keine solide wirtschaftliche Basis für das ambulante Operieren darstellen. In diesem Kontext greift die von Busse et al. trefflich formulierte These bzgl. der Rahmenbedingungen für medizinischen Qualitätswettbewerb – bestehend aus politischer Verlässlichkeit und Mut zu Strukturveränderung [11]. Die Pandemie verstärkt diesbezügliche Defizite, darf aber nicht als ursächlicher Auslöser fehlinterpretiert werden, wozu die zeitliche Nähe von Pandemie, politischer Willenserklärung zur Ambulantisierung und IGES-Gutachten je nach Einzelinteressenslage durchaus verleiten kann.", "Jede weitere Negation gesundheitspolitischer Wahrheiten wird ihren Einfluss auf das Interesse an Praxisnachfolgen ausüben, die gerade bei der aktuellen Altersstruktur niedergelassener Fachärzt:innen verdichtet anstehen. Bisheriger, weiterer und zukünftiger Umgang mit Praxen und ihren fachärztlichen Inhaber:innen wird von niederlassungsinteressierten Klinikfachärzt:innen sehr wohl und kritisch differenziert wahrgenommen, beeinflusst den weiteren Berufsweg und die Entscheidung zwischen den Alternativen Klinikkarriere, Selbständigkeit und MVZ-Anstellung (medizinisches Versorgungszentrum). Damit steht und fällt auch die Zukunft des ambulanten Operierens, weshalb niedergelassene Operateure und Anästhesisten mit ihrer täglichen Praxisarbeit nicht nur gemeinsame Patienten, sondern auch involvierte Facharztkolleginnen und Kollegen in den Kliniken am besten davon überzeugen können, dass bei einer Niederlassung die sich eröffnenden Chancen die Risiken bei weitem überwiegen. Das ambulante Operieren kann dabei einen wichtigen Teil zur persönlichen Erfolgsgeschichte und beruflichen Zufriedenheit beitragen. Dass diese Gewissheit in die eigenen Fähigkeiten außerhalb einer Facharztpraxis in deren Umfeld nicht immer vorauszusetzen ist und folglich Handlungsmaximen Dritter auf die emotionale Ebene wie bewusst ausgelöste Neiddebatten verlagert werden, belegen die jüngsten unsauberen und tendenziösen öffentlichen Äußerungen über Ärzteeinkommen trefflich. Nicht zufällig und von ungefähr geschehen solche Störmanöver in wiederkehrender Regelmäßigkeit und diesmal in auffallender zeitlicher Abfolge zur Veröffentlichung des IGES-Gutachtens und dessen positiven Einschätzungen in Bezug auf das ambulante Operieren.", "Der ambulante Versorgungsbereich hat nicht erst in den letzten 2 Jahren bewiesen, wie leistungsfähig er trotz der beschriebenen Be- und Einschränkungen ist und dass die patientenorientierte Zusammenarbeit zwischen Praxen und Kliniken sich besser gestaltet als manchem patientenfernen und ideologielastigen Entscheidungsträger lieb sein kann. 
Wer trennt sich schon gerne von seiner Erfindung mit Namen doppelte Facharztschiene und erkennt an, dass nachhaltige Investitionen in dieses duale System mittel- und langfristig von gesamtgesellschaftlichem Nutzen sind und zugleich auch Engpässe in Krisenzeiten zu kompensieren helfen.\nFortschritt braucht in der ambulanten Versorgung öffentliche sowie kollektive finanzielle Ressourcen\nJedoch braucht Fortschritt in der ambulanten medizinischen Versorgung öffentliche sowie kollektive finanzielle Ressourcen, weshalb von staatlicher Seite die Dauerfrage der Finanzierung zu klären ist. Sehr hilfreich ist dabei das Einsehen, dass das Gesundheitssystem keinen lästigen, nach Belieben manipulierbaren Kostenfaktor, sondern eine der wichtigsten Säulen der Gesellschaft darstellt und sich Qualitätskosten langfristig immer positiv auswirken, wie die Pandemie und ihre Brennglasfunktion für Defizite und deren hohen Fehlerfolgekosten beweisen.\nAmbulantes Operieren verfügt über sehr hohes versorgungswissenschaftliches Potenzial und Ausbaufähigkeit, wenn die Rahmenbedingungen stimmen. Deren Gestaltung liegt nicht im Verantwortungsbereich ambulanter Operateure und Anästhesisten. Sie haben mit der Gründung eigener ambulanter Praxen, ambulanter OP-Zentren und Praxiskliniken und der Aufrechterhaltung dieser hochwertigen Infrastrukturen mit Eigenmitteln ihre Fähigkeiten zum selbständigen, eigenverantwortlichen Handeln über 3 Jahrzehnte unter Beweis und sich damit einem täglichen Qualitätswettbewerb gestellt und sich trotz aller gegenteiligen externen Bemühungen bisher behauptet.", "Die nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan mit der gesundheitspolitischen nonverbalen Einladung, doch in den ambulanten Gesundheitsmarkt einzusteigen. Die inzwischen vorliegenden Abrechnungsdaten sprechen die Sprache der Rendite- und Gewinnmaximierung. Beides ist für anonyme Investoren wie ausländischen Pensionsfonds inzwischen gegenüber Politik und Kostenträgern salonfähig geworden, während der Einzelvertragsarztpraxis, den Fachärzten einer BAG und Ärzte-MVZ weiterhin ihr in der Regel bereits gekürztes Honorar als Ausdruck höchster Profitgier vorgehalten wird. Wer sich auf Augenhöhe mit Vertretern der Konzern- und Finanzwirtschaft wähnt, übersieht dabei geflissentlich, dass die klinische und ambulante medizinische Versorgungskette nur so stabil und sicher ist wie die Investorenkette dazu bereit ist und zulässt. Und alles nur, um sich nicht mehr den permanenten und impertinenten Verbesserungsvorschlägen der sog. Leistungserbringer und ihren konstruktiven Dialogbedürfnissen auszusetzen?\nMit dem stellenweise bereits eingeläuteten Niedergang der ambulanten, selbständigen und freiberuflichen operierenden Facharztmedizin verliert das deutsche Gesundheitssystem einen wichtigen Bestandteil seines noch verbliebenen Tafelsilbers. Deshalb darf der Ausbau der Ambulantisierung nicht mit einem Abbau der eigentlichen Garanten für medizinisch-ärztliche Kompetenz und zuverlässige Leitungsbereitschaft einhergehen. 
Ambulantes Operieren ist als dritte Säule der Patientenversorgung ein fester, verlässlicher soziokultureller Bestandteil unserer Gesellschaft geworden und verdient es, wertgeschätzt, vor weiteren Defiziten geschützt und mit gezieltem Ressourceneinsatz bedarfsgerecht gefördert zu werden.", "Um beiden Aspekten gerecht zu werden, müssen alte Wahrheiten wie die der Beitragsstabilität und die daraus abgeleitete unzureichende Gesamtvergütung sowie das Abstreiten der Zuständigkeit für die Übernahme von Hygiene- und Qualitätskosten auf den Prüfstand gestellt und entsorgt werden. Zeitgleich gilt es, das öffentliche Interesse am hohen Gut Gesundheit zu reaktivieren und damit die kollektive Bereitschaft für höhere Beiträge vorzubereiten. Der zu erzielende gesamtgesellschaftliche Nutzen und sein Mehrwert rechtfertigen den für den anstehenden und überfälligen Um- und Ausbau des ambulanten Operierens benötigten Mitteleinsatz.\nDie Gesundheitspolitik sollte ergebnisoffen diesen bevorstehenden Transformationsprozess moderieren und im Konsens erzielte Beratungsergebnisse ordnungspolitisch zeitnah umsetzen. Die bisherige paternalistisch anmutende Vorgehensweise ist überholt, weniger vom Mainstream als von den eigenen Misserfolgen. An diese würde sich nahtlos das über 2 Jahrzehnte vernachlässigte und von allen Parteien bewusst übergangene Projekt ambulantes Operieren anreihen, gefolgt vom Scheitern des aktuell politisch angesagten Leuchtturmprojekts Ambulantisierung des Gesundheitssystems [21].\nNur unter dieser Grundvoraussetzung ist die Aufnahme weiterer ambulant durchführbarer Leistungen in den AOP-Katalog und dessen Erweiterung von 2879 auf 5355 Leistungen [1, 2] realistisch und Erfolg versprechend. Alles andere entspräche lediglich einer nochmaligen Verdopplung von ambulanten OP-Zahlen und zeitgleicher Multiplikation der dargelegten Problemzonen.\nEin weiterer wichtiger Schritt in Richtung einer positiven Zukunft des ambulanten Operierens ist zweifelsfrei eine stufenweise individuelle Beurteilung der Patienten nach objektiven Schweregraden statt der bisherigen Handhabung mit unterschiedlichen und subjektiv überlagerten Komorbiditäten [2]. Damit sind eine objektive und verbindliche Risikoadjustierung sowie die Nachverfolgbarkeit vorgegeben, ob ein Patient durch einen ambulanten chirurgischen Eingriff ein erhöhtes Risiko zu erwarten hat oder lediglich einer intensivierten ambulanten bzw. teilstationären Nachsorge bedarf. Zugleich haben sich ambulant operierende Einheiten in Struktur und Organisation diesen neuen Anforderungen und Inhalten des Leistungskatalogs anzupassen. Im Umkehrschluss allerdings muss sich parallel dazu die Vergütung an diesem Qualitätszuwachs orientieren und von bisherigen laienhaft anmutenden Wertschöpfungsmythen, wie der Schnitt-Naht-Zeit als Bewertungsmaß und Rechtfertigung zum systematischen Abwerten ambulanter Eingriffe, definitiv verabschieden.\nAmbulantes Operieren wäre ein Zugewinn für das unter Pflege- und Ärztemangel stehende Gesundheitswesen\nNicht nur der BAO empfiehlt ein schrittweises, in Absprache vereinbartes Vorgehen, an dessen Anfang der proaktive Konsens zuerst unter den niedergelassenen und anschließend mit den klinischen fachärztlichen ambulanten Operateuren und Anästhesisten stehen muss. Dies betrifft die bedarfsgerechte und nach internationalen Standards zu erfolgende Neudefinition des Komplexes ambulantes Operieren über die bisherige von der Versorgungsrealität überholte Gleichsetzung mit Tageschirurgie hinaus.
Das verschafft neue Handlungs- und Entscheidungsspielräume und erleichtert für alle Beteiligten Eingriffe ambulant ausschließlich nach medizinischen und nicht etwaigen ökonomischen Kriterien durchzuführen. Der bisher übliche Diskussionsbedarf für und wider stationäre postoperative Überwachungen entfällt damit. In der Folge werden an anderer Stelle dringend benötigte Kapazitäten frei. Ein elementarer Zugewinn für das unter dem Druck von Pflege- und Ärztemangel stehende Gesundheitswesen. Damit findet auch eine Kultur des einseitigen Misstrauens ihr Ende, die dem seit drei Jahrzehnten von weit über hundert Millionen ambulant operierten Patienten ihren Operateuren und Anästhesisten ausgesprochenen vollen Vertrauen zum Trotz Politik und Kostenträger gestattet, unverhohlen ihr unbegründetes, oft noch fadenscheiniges Misstrauen gegen ambulante Operateure und Anästhesisten offen und medienwirksam zur Schau zu stellen.\nFür einen geordneten Neuanfang des ambulanten Operierens ist der vorliegende Umfang des neu überarbeiteten AOP-Katalogs zu mächtig. Zu Beginn erscheint die Beschränkung auf die häufigsten ambulanten Operationen eine sinnvolle Lösung zu sein, über deren Umsetzung es zeitig zu diskutieren gilt. Je übersichtlicher und in der Handhabung alltagstauglicher, umso größer die allgemeine Akzeptanz, umso schneller sind Umsetzbarkeit und die Implementierung begleitender Analysen unter medizinisch-qualitativen und ökonomischen Gesichtspunkten, wie die Erfassung von Investitions- und Betriebskosten in Hinblick auf ein optionales Endprodukt Hybrid-DRG („diagnosis related groups“) für Praxen und Kliniken [18]. Die bisherigen EBM- und DRG-Systematiken bilden diese Parameter nicht ausreichend transparent ab und tragen dadurch zu den beschriebenen Versorgungs‑ und Finanzierungsdefiziten bei.\nStichhaltige Argumente von fachärztlicher Seite bedürfen von Anfang an einer stichprobenartigen Qualitätssicherung, auch zur Dokumentation der eigenen Leistungspotenziale und als Rechtstitel eigener Vergütungsansprüche. Um einen zeitraubenden Richtungsstreit zu umgehen, kann die Frage einer praktikablen Qualitätssicherung durch die Übernahme des seit 20 Jahren gerade im Rahmen von Selektivverträgen bewährten AQS1-Patientensicherheitsfragebogens [4, 8], mit der Option seiner regelmäßigen Modifizierungen und digitalen Ausführung, beantwortet werden.\nFerner gilt es, sich auf einen definierten Zeitraum zu einigen, innerhalb dessen die Transformation vom stationären in das ambulante Setting sowie die differenzierte Förderung für Kliniken und bereits bestehender ambulante OP-Einrichtungen abgeschlossen ist und – als conditio sine qua non – eine für Praxen und Kliniken einheitliche Vergütung greift.\nDer Deutschen Krankenhausgesellschaft (DKG) und v. a. den klinischen Kolleg:innen sei hiermit signalisiert, dass deren Vorbehalte gegenüber dem ambulanten Operieren auf der derzeitigen nicht tragfähigen EBM-Basis gerechtfertigt sind und deshalb eine eigenständige, gemeinsame AOP-Vergütungssystematik erforderlich ist. In einer Übergangsphase kann die unverhandelbare standortunabhängige Vergütungsanhebung für Kliniken in begründeten Fällen höher ausfallen als Kompensation für den anstehenden und zu bewältigenden aufwändigen Strukturwandel. Aber nach Fristablauf und erfolgter Umstellung hat eine einheitliche (d. h. sektorenverbindende) AOP-Vergütung inklusive einer Kostenerstattungsdynamik zu erfolgen. 
Eine Wiederholung der im Entwurf gelungenen und von der Politik zum Misslingen genötigten Aktion EBM 2000 plus darf und kann nicht akzeptiert werden. Ambulantes Operieren beruht auf dem Wunsch der Patienten nach dieser Versorgungsform und dem Wissen und Können der ambulanten Operateure und Anästhesisten, diese Patienteninteressen in der Praxis des Versorgungsalltags umzusetzen. Ab 1993 wurde dieser neu etablierte Behandlungspfad als gesundheitspolitisches Mittel zum vorrangigen Zweck der Krankenhauskostendämpfung uminterpretiert und dient seither zugleich zur Rechtfertigung einer chronischen Unterfinanzierung medizinisch erbrachter Leistungen in Praxen und Kliniken.\nRetrospektiv hätten niedergelassene Fachärzt:innen und Klinikkolleg:innen diesem Konstrukt niemals zustimmen dürfen, ebenso wenig KBV und DKG; sie sind in die Falle der einseitigen Kostenminimierung getappt und seitdem in der Funktion als subalterne Leistungserbringer gefangen.\nSollten sich die Interessensvertreter der Praxen und Kliniken in den nächsten Wochen und Monaten nicht zu einer vernunftbasierten Einigung für einen gemeinsamen Aktionsplan durchringen, droht folgendes nicht unrealistisches alternatives Szenario:\nkeine stufenweise und differenzierte Umsetzung des AOP-Katalogs, sondern zum nächstmöglichen Zeitpunkt vollumfänglich gültig und umzusetzen,\nkein Hybrid-DRG und Beibehaltung des EBM für Praxen und Kliniken ohne Anhebung der Vergütung und ohne Kostenneuberechnung,\nabsoluter Zwang zum ambulanten Operieren unter Berufung auf das MDK-Reformgesetz [20],\nweiterer Verlust fachärztlicher Therapie- und Gestaltungsfreiheit, dafür auf beide Sektoren übergreifend.", "In diesem Sinne verdienen gemeinsame Forderungen von Praxen und Kliniken wie die nach einer einheitlichen Vergütung des ambulanten Operierens höchste Priorität.\nNur wenn ambulante Operationen für Praxen und Kliniken leistungsgerecht und kostendeckend vergütet werden, können mehr ambulante Operationen durchgeführt, bedarfsgerechte zusätzliche ambulante OP-Strukturen geschaffen und damit auch in der Zukunft mittel- bis langfristig eine optimale ambulant operative Versorgung der Patienten gewährleistet werden.\nMehr ambulante Operationen sind möglich und gleichzusetzen mit einer Intensivierung der Ambulantisierung\nDie Überwindung der durch die COVID-19-Pandemie wie im Brennglas verstärkt sichtbar gewordenen Herausforderungen und die Umsetzung der vorhandenen Potenziale beim ambulanten Operieren hängen nun vorrangig von den Änderungen seiner bisherigen kontraproduktiven Rahmenbedingungen ab. Einen ersten wichtigen Schritt und Einschnitt in verfestigte und sinnfreie Verweigerungshaltung, Besitzstandwahrung und Stillstandfetischismus stellt zweifelsohne die Veröffentlichung des IGES-Gutachtens mit seiner Botschaft dar: mehr ambulante Operationen sind möglich und gleichzusetzen mit einer Intensivierung der Ambulantisierung.
Die alles entscheidende Frage bleibt offen und ist nur auf demokratischen Weg zu lösen: wessen kompetenten Händen vertraut die davon betroffene Gesellschaft dieses richtungsweisende Großprojekt an? Eine Teilantwort liefert die Gegenfrage: wer kommt angesichts der beschriebenen Fehleranalyse dafür nicht in Frage?\nDie erhöhte Aufmerksamkeit, die derzeit der Gesundheitsversorgung zuteil wird, fördert den Bedarf nach Veränderung und Diskussionen um Weiterentwicklungen und Verbesserungen von Defiziten und Schwachstellen des Gesundheitssystems. Dazu zählen vor allem die Unterfinanzierung des ambulanten Operierens und die sich in diesem Kontext aktuell eröffnenden Möglichkeiten, bereits in Erprobung befindliche Ansätze wie Hybrid-DRG oder bewährte Selektivverträge für die Ermittlung einer sektorenverbindenden Vergütung und damit einhergehend zur Überwindung bisheriger Sektorengrenzen in die Praxis umzusetzen. Erst wenn diese Hürden überwunden sind, kann über den Eintritt in die Phase einer gemeinsamen Patientenakte gesprochen werden. Parallelveranstaltungen nach Art und Vorgehensweise der Digitalisierung sind nicht nachahmenswert, wenn es um die Neudefinition des Begriffes ambulante Medizin und ihres umfangreichen und ausbaufähigen Teilbereiches ambulantes Operieren und dessen Anschluss an europäische und internationale Standards geht [7].\nDas politische Bekenntnis zur Ambulantisierung darf nicht wieder zum bloßen Lippenbekenntnis verkümmern, sondern muss als Chance gesehen werden, den klinischen Sektor durch Überführung von diagnostischen, therapeutischen sowie operativen Prozeduren in ambulante, nicht-vollstationäre Behandlungsabläufe zu entlasten. Deshalb darf der einzige Impetus nicht im Einsparen an finanziellen Ressourcen ohne Rücksicht auf die Ressource Mensch bestehen. Vielmehr stehen zielgerichteter Umgang mit nicht unbegrenzten, aber kontinuierlich anzupassenden Ressourcen und die unbedingte Vermeidung weiterer Fehlerkosten im Vordergrund. Und nicht wie bisher die stets gescheiterten Versuche, Gesundheitskosten zu senken, bei wachsendem Bedarf und steigenden Qualitätskosten. Die Beibehaltung und der sukzessive Ausbau des ambulanten Operierens werden dazu beitragen, die Gesundheitsausgaben im operativen Bereich nachhaltig zu stabilisieren und wieder kalkulierbar werden zu lassen.\nSollten sich der Gesetzgeber und in seinem Fahrwasser die Kostenträger dieser Logik und Einsicht weiter verweigern, dann drohen im bestanzunehmenden Verlauf die Konservierung des Status quo und somit Stagnation mit abfallender Tendenz beim ambulanten Operieren und weiterhin hohe stationäre Fallzahlen. Realistischer erscheint das Szenario eines weiteren und zunehmenden Fallzahlrückgangs ambulanter Operationen in Praxen und Kliniken und weitere Klinik- und Praxisschließungen aus Gründen der Unrentabilität bzw. der ungeklärten Nachfolgefrage. Im Gegenzug wird ein weiterer Zulauf in das Berufsbild des dauerangestellten Facharztes mit begrenzter Haftung in investorengeführten MVZ von Kapitalgesellschaften zu beobachten sein. Es steht außer Zweifel, dass diese Variante keine individuelle berufliche Perspektive von Dauer für eine akademisch geprägte Berufslaufbahn sein kann. Damit würde der Facharztstatus zu einem weisungsgebundenen Ausbildungsberuf degradiert werden und neben der Option Selbständigkeit auch noch seiner Freiberuflichkeit verlustig gehen.\nNach medizinischen und versorgungswissenschaftlichen Kriterien kann die Zukunft des ambulanten Operierens als positiv bewertet werden. 
Allerdings nicht losgelöst von gesundheitspolitischen Unwägbarkeiten und Rahmenbedingungen sowie deren Einflussnahme auf Ressourcen und den damit korrelierenden Zukunftsaussichten des Arztberufs und seiner fachlich-operativen Subspezialisierungen. Deshalb bedarf es einer aktiven fachärztlichen Mitbeteiligung an der aktuellen Diskussion und am Meinungsbildungsprozess bei der Ausgestaltung der Ambulantisierung." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Historie und Status quo", "Patientenperspektive", "30 Jahre Erfahrung", "Stagnation trotz Potenzial", "Forderungen an Politik und Kassen", "Mehr Kapazitäten erfordern mehr Ressourcen", "Praxisnachfolge entscheidet über Zukunft des ambulanten Operierens", "Positive Bilanz", "Fehlentwicklungen", "Potenziale und Herausforderungen an das ambulante Operieren", "Zusammenfassung", "Fazit für die Praxis" ]
[ "Die beschriebenen Vorbehalte sind schon seit geraumer Zeit ausgeräumt, weil hohe Operationszahlen mit großer Patientenzufriedenheit und guter Ergebnisqualität auf Dauer die besseren Argumente sind [8]. Dennoch ist die Zukunft des ambulanten Operierens trotz dieser Ausgangsposition weiterhin volatil, denn bisher gelang es ambulanten Operateuren und Anästhesisten nicht, neben Patienten und Angehörigen auch Gesundheitspolitik und Kostenträger von der Hochwertigkeit ihrer ärztlichen Leistungen so zu überzeugen, dass beide einer angemessenen sowie kostendeckenden Vergütung zustimmen [23, 26]. Hier gilt es, noch nachhaltiger politische Überzeugungsarbeit zu leistet, um die Zukunft des ambulanten Operierens in Praxen und Kliniken unter gleichen und akzeptablen Rahmenbedingungen langfristig zu sichern [15].\nDas ambulante Operieren hat in den letzten Jahrzehnten einen enormen Entwicklungsschub erfahren\nDer Begriff „ambulant“ rührt ursprünglich vom von Patienten zu Patienten sich begebenden Chirurgen her [6]. Sowohl dessen Berufsbild als auch die Definition des ambulanten Operierens haben im Verlauf der Medizingeschichte und v. a. in den letzten Jahrzehnten einen enormen Entwicklungsschub erfahren. Ambulantes Operieren in Deutschland bedeutet im Grund nichts anderes, als dass Patienten die Zeit vor und nach ihrer Operation zu Hause verbringen und der ambulante Eingriff innerhalb der Aufenthaltsdauer der Patienten ohne stationäre Aufnahme zu erfolgen hat. Im AOP-Katalog (ambulantes Operieren und sonstige stationsersetzende Eingriffe im Krankenhaus) sind ambulant durchführbare und stationsersetzende Operationen aufgeführt, die jeweiligen Vergütungen sind im einheitlichen Bewertungsmaßstab (EBM) geregelt und gelten für Praxen und Kliniken seit Inkrafttreten des Gesundheitsstrukturgesetzes im Jahr 1993 [10].\nWas die Struktur- und Prozessqualität angelangt, so sind Klinikstandards verbindlich. Es wird dabei weder den dynamisch steigernden Kostenstrukturen in Kliniken und Praxen noch der Implementierung des medizinischen Fortschritts innerhalb des ambulanten Settings kostenkalkulatorisch korrekt Rechnung getragen. Von dieser Abspaltung der ökonomischen Sicht auf das komplexe medizinische Produkt ambulantes Operieren und der Negation seines bewiesenen Mehrwertes sind alle regelmäßig ambulant operierenden Fachgebiete in Praxen und Kliniken betroffen [14, 16].\nDurch kontinuierliche Verbesserungsprozesse v. a. im niedergelassenen, d. h. selbständigen und selbsthaftende Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus. Beides ermöglicht eine fachlich breit ausgerichtete sowie flächendeckende chirurgisch-operative Versorgung der Patienten. Hinzu kommt die Etablierung schonender OP- und Anästhesieverfahren als innovatives und motivierendes Moment.", "Die Präferenz von Operationen im ambulanten Kontext begründen Patienten und Angehörige damit, dass sie in der Facharztpraxis von einem selbst gewählten Facharzt behandelt werden, zu dem ein persönliches Vertrauensverhältnis besteht, und sie anschließend die Genesungszeit zuhause verbringen können – in der Gewissheit, sich bei Bedarf vertrauensvoll an Operateur oder Anästhesisten wenden zu können. Diese Einschätzung umfasst die eigentliche Philosophie des ambulanten Operierens, nämlich den gesamten Behandlungsablauf aus einer zuverlässigen Hand. 
Damit ist ambulantes Operieren unstrittig zu einem festen Bestandteil und einer der tragenden Säulen der Patientenversorgung allgemein und speziell im Hinblick auf die Verwirklichung einer integrierten, fach- und sektorenübergreifenden Medizin geworden und nimmt deshalb eine Vorreiter- und Vorbildfunktion bei der anstehenden Ambulantisierungsoffensive ein. Dieser für alle sichtbare Leuchtturm ambulantes Operieren wurde im Übrigen auch aus den Steinen errichtet, die seiner Entwicklung immer wieder bewusst in den Weg gelegt werden, auch durch gezielte Infragestellung [25]. Notorischen Zweiflern sei an dieser Stelle empfohlen, zur besseren Ein- und Weitsicht die Kassenbrille abzunehmen und gegen die Patientensichtbrille einzutauschen [12].", "Ambulante Operateure und Anästhesisten dürfen zu Recht mit ihrem Erfahrungsvorsprung von 30 Jahren auf ihre besondere Rolle bei der aktuellen Meinungsbildung und den anstehenden Entscheidungen bestehen. Wer in medizinisch-fachlichen Belangen und Fragen der Patientensicherheit die erforderliche Expertise vorweist, muss auch in so wichtigen gesundheitspolitischen Entscheidungen wie der Ambulantisierung angehört und in seinen Empfehlungen respektiert werden [22]. Zu groß ist sonst die Gefahr weiterer unerwünschter Nebenwirkungen und Kollateralschäden. Von diesen ist auch das ambulante Operieren im Verlauf seiner Entwicklung nicht verschont geblieben, wie ein aktuell zu verzeichnender Rückgang der Patienten- und Eingriffszahlen nach einer schon seit 2015 zu beobachtenden Stagnation belegt [23]. Dazu beweist dieser Trend das Gegenteil der seit Anfang an unbewiesenen Unterstellung einer unkontrollierten Mengenausweitung und drohender Kostenexplosion. Und es ist der Punkt in der – wohlgemerkt medizinischen – Erfolgsgeschichte des ambulanten Operierens erreicht, an dem die bisherigen einrichtungsinternen Rationalisierungsmaßnahmen ihren Zenit erreicht bzw. überschritten haben und vorhandene personelle und finanzielle Ressourcen definitiv an ihre Grenzen angekommen sind [16]. Ein unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf und somit bereits die nahe Zukunft des ambulanten Operierens gefährden.", "Der 2015 eingetretene Stillstand früherer konstanter Steigerungsraten betrifft Kliniken und Praxen gleichsam; seither sind Schwankungen in den Patientenzahlen und bestenfalls eine Plateaubildung zu verzeichnen [1]. Dies lässt sich mit Zahlen konkret belegen und sollte als Warnhinweis und Weckruf verstanden werden. Das Maximum der erfassten ambulanten Operationen in Kliniken betrug knapp 2 Mio., in den Praxen waren es 8,75 Mio. Zuletzt wurden aus dem niedergelassenen Bereich 2020 8 Mio. Eingriffe erfasst, was dem bereits 2008 erzielten Ergebnis entspricht. 
Retrospektiv verzeichnen Praxen in der Dekade zwischen 1998 und 2008 den größten Zuwachs an ambulanten Operationen, in Kliniken fand diese Entwicklung zeitlich versetzt von 2004 bis 2015 statt, mit einem Optimum von 7,1 % Anteil an allen Krankenhausbehandlungen [17].\nDurch ambulantes Operieren per se und dessen gezielten Ausbau kann man Einsparpotenziale generieren\nNeben den genannten Vorteilen für Patienten wird auch das Gesundheitssystem durch ambulante Operationen begünstigt, dank einer im Vergleich zu stationären Eingriffen ablaufbedingt geringeren Inanspruchnahme von Kapazitäten und damit einhergehend geringeren Kostenanteils [3]. Damit ist grundsätzlich die Möglichkeit verbunden, durch ambulantes Operieren per se und durch dessen gezielten Ausbau relevante Einsparpotenziale zu generieren, wie P. Oberende bereits 2010 in seinem vom BAO initiierten Gutachten belegen konnte [24].\nDie bisherigen Einspareffekte dienen den verantwortlichen Entscheidungsträgern unwissentlich als Deckungsbeiträge für anderweitige Defizite oder werden bestenfalls in neue medizinische Bereiche reinvestiert. Der Aspekt regelmäßiger Reinvestitionen in die Strukturen des ambulanten Operierens und deren Ausbau wird dabei völlig außer Acht gelassen, wie überhaupt das ambulante Operieren unter den Schirm einer peinlich anmutenden Milchmädchenrechnung gestellt wurde, mit dem vorherrschenden Ziel, für jede ambulante Operation eine stationäre einzusparen. Diese schlichte Modellrechnung kann nicht aufgehen, schon allein aus dem Grund, dass der medizinisch-therapeutische Teil des Gesundheitssystems ein im Wandel begriffener ist und per se einem ständigen Verbesserungsprozess unterliegt. So kompensiert das ambulante Operieren den weiter voranschreitenden Klinik- und Bettenabbau zusätzlich. Hier kann man in der Tat statt von einer nur stationsersetzenden Intention des Gesetzgebers ebenso von einer stationszersetzenden sprechen, wenn man die Entwicklung des Gesundheitssystems in den beiden letzten Dekaden betrachtet.\nDie gesamte ambulante niedergelassene fachärztliche Medizin im Umfeld von geplanten bzw. umgesetzten Klinikschließungen trägt entscheidend zur Aufrechterhaltung einer flächendeckenden wohnortnahen Patientenversorgung bei. Die getroffenen Maßnahmen zur Pandemiebekämpfung kommen streckenweise Klinikschließungen gleich und beruhen auf der sicheren Annahme einer weiter funktionierenden ambulanten Patientenversorgung. Und dies im Gegensatz zu Kliniken ohne Unterstützungsmaßnahmen für ambulante Facharztpraxen oder Anerkennung der Praxismitarbeiterinnen in Form eines gewährten Coronabonus.", "Seit 3 Jahrzehnten ist der Generalverdacht, ambulante Medizin allgemein und insbesondere das anspruchsvolle, aufwändige ambulante Operieren seien Kostentreiber (also eine unnötige luxuriöse doppelte Versorgungsschiene) und folglich durch gesundheitspolitische Gegenmaßnahmen (wie die gegen alle ökonomische Vernunft von oben verordnete Punktwertabsenkung im Rahmen des EBM 2000 plus) in ihren Entwicklungen zunehmend einzuschränken, ständiger Begleiter ambulanter Fachärzt:innen. Und dies gegen den erklärten Patienten- und Wählerwillen, der in Umfragen mit konstant hohen Zustimmungswerten für ambulant operierende Fachärzt:innen zum Ausdruck kommt [7].\nEs ist an der Zeit, dem hohen Stellenwert einer dem gesellschaftlichen Wertewandel unterworfenen, aber mehrheitlich stabilen, von Vertrauen geprägten Arzt-Patienten-Beziehung die nötige politische Aufmerksamkeit und Respekterweisung entgegenzubringen. 
Diese Forderung beinhaltet, das ambulante Operieren als wesentlichen Bestandteil der chirurgisch-operativen Versorgung der Patienten in Deutschland mit großem Steigerungspotenzial anzuerkennen und seine Weiterentwicklung (statt zu stören) gezielt zu fördern. Ein Blick auf die umfangreichen und nicht immer Sinn und Nutzen stiftenden GKV-Werbekampagnen (Gesetzliche Krankenversicherung) zeigt Möglichkeiten des Abflusses von Mitteln aus der aktiven Patientenversorgung und deren Zweckentfremdung für Gesundheitsleistungen als Sonderangebot und Marketinginstrument trotz angeblicher chronisch leerer Kassenlage.\nWas aus Sicht der Patienten wirklich zählt, das sind immer noch ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit. Dieser Widerspruch zwischen dem Selbstverständnis patientennaher und -ferner Akteure zeigt einmal mehr den bereits vollzogenen Wandel unseres Gesundheitswesens vom Sozialsystem hin zu Gesundheitsmarkt und -wirtschaft; diesem Wandel bzw. seinen Ursachen und deren Folgen ist auch das ambulante Operieren unterworfen. Vom dazwischen liegenden ökonomischen Entwicklungspotential bleiben ambulante OP-Einrichtungen ausgeschlossen, da sie in wenig flexible und noch weniger reflektierte Rahmenbedingungen gezwängt sind. Diese Tatsache ist hinreichend belegt, bekannt und Gegenstand zahlreicher Bemühungen der davon betroffenen Berufsverbände, wie der Hinweis, dass bereits vor 2019 und dem Beginn der Pandemie die ambulanten OP-Fallzahlen vorwiegend in Praxen rückläufig waren. Dieser Effekt wurde durch COVID-19 („coronavirus disease 2019“) zusätzlich verstärkt und betrifft seither auch die im stationären Setting erhobenen Fallzahlen [1].\nMarginale Schwankungen ambulanter OP-Zahlen im niedergelassenen vertragsärztlichen Bereich seit Beginn 2019 und bis Jahresmitte 2021 lassen im besten Annahmefall einen leicht steigenden Trend erkennen und lassen sich mit der Fluktuation abgesagter elektiver Eingriffe aus den Kliniken zu niedergelassenen Operateuren erklären. Den gemeinsamen Bemühungen um eine bestmögliche Aufrechterhaltung der Patientenversorgung unter Pandemiebedingungen muss eine Diskussion über den Terminus elektiver Eingriff folgen. Er ist der aktuellen Bedarfssituation anzupassen und neu zu definieren, auch in Hinblick auf den neuen und umfangreicheren AOP-Katalog und dessen machbare Umsetzung, aber auch unter dem Gesichtspunkt einer Zunahme der jetzigen Kapazitätsengpässe und möglicher neuer Krisen oder Pandemielagen.", "Für eine signifikante Steigerung ambulanter Operationen im niedergelassenen Bereich ist die Aufstockung von Personal, Räumlichkeiten und Arbeitszeiten nötig. Die Möglichkeiten dafür werden durch fehlende Kapazitäten deutlich minimiert, die wiederum mit der beschriebenen Unterfinanzierung korrelieren.\nKostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt\nErschwerend kommt hinzu, dass seit einer Dekade kostendeckende Hygienezuschläge von allen Kostenträgern, trotz eindeutiger Kausalität auf der Grundlage der Umsetzung einer staatlich angeordneten Gesetzesnovellierung, abgelehnt werden [13]. Diese inakzeptable und mit den Regeln eines Rechtsstaates und dessen Grundsatzgebot von Treu und Glauben nur sehr schwer in Vereinbarung zu bringende Verweigerungshaltung findet ihre kompromisslose Fortsetzung auch in den Zeiten der Pandemie.
Und dies ausdrücklich unter Beifall der verantwortlichen Gesundheitspolitik, die das Regelwerk eines „ehrbaren Kaufmanns“ im Umgang mit selbständigen fachärztlichen Freiberuflern bewusst außer Kraft gesetzt hat. Besser kann man den Tatbestand der ordnungspolitischen Dysfunktionalität an einem konkreten Beispiel nicht festmachen, ebenso die zum Dauerproblem gewordene Vergütungssystematik für ambulant operierende Facharztpraxen, welche wegen hoher Fixkosten trotz der extrabudgetären Vergütung des ambulanten Operierens ein hohes wirtschaftliches Risiko eingehen [26].\nDiese Einschätzung zeigt darüber hinaus, dass die grundsätzlichen Rahmenbedingungen zu lange stagnieren und keine solide wirtschaftliche Basis für das ambulante Operieren darstellen. In diesem Kontext greift die von Busse et al. trefflich formulierte These bzgl. der Rahmenbedingungen für medizinischen Qualitätswettbewerb – bestehend aus politischer Verlässlichkeit und Mut zu Strukturveränderung [11]. Die Pandemie verstärkt diesbezügliche Defizite, darf aber nicht als ursächlicher Auslöser fehlinterpretiert werden, wozu die zeitliche Nähe von Pandemie, politischer Willenserklärung zur Ambulantisierung und IGES-Gutachten je nach Einzelinteressenslage durchaus verleiten kann.", "Jede weitere Negation gesundheitspolitischer Wahrheiten wird ihren Einfluss auf das Interesse an Praxisnachfolgen ausüben, die gerade bei der aktuellen Altersstruktur niedergelassener Fachärzt:innen verdichtet anstehen. Bisheriger, weiterer und zukünftiger Umgang mit Praxen und ihren fachärztlichen Inhaber:innen wird von niederlassungsinteressierten Klinikfachärzt:innen sehr wohl und kritisch differenziert wahrgenommen, beeinflusst den weiteren Berufsweg und die Entscheidung zwischen den Alternativen Klinikkarriere, Selbständigkeit und MVZ-Anstellung (medizinisches Versorgungszentrum). Damit steht und fällt auch die Zukunft des ambulanten Operierens, weshalb niedergelassene Operateure und Anästhesisten mit ihrer täglichen Praxisarbeit nicht nur gemeinsame Patienten, sondern auch involvierte Facharztkolleginnen und Kollegen in den Kliniken am besten davon überzeugen können, dass bei einer Niederlassung die sich eröffnenden Chancen die Risiken bei weitem überwiegen. Das ambulante Operieren kann dabei einen wichtigen Teil zur persönlichen Erfolgsgeschichte und beruflichen Zufriedenheit beitragen. Dass diese Gewissheit in die eigenen Fähigkeiten außerhalb einer Facharztpraxis in deren Umfeld nicht immer vorauszusetzen ist und folglich Handlungsmaximen Dritter auf die emotionale Ebene wie bewusst ausgelöste Neiddebatten verlagert werden, belegen die jüngsten unsauberen und tendenziösen öffentlichen Äußerungen über Ärzteeinkommen trefflich. Nicht zufällig und von ungefähr geschehen solche Störmanöver in wiederkehrender Regelmäßigkeit und diesmal in auffallender zeitlicher Abfolge zur Veröffentlichung des IGES-Gutachtens und dessen positiven Einschätzungen in Bezug auf das ambulante Operieren.", "Der ambulante Versorgungsbereich hat nicht erst in den letzten 2 Jahren bewiesen, wie leistungsfähig er trotz der beschriebenen Be- und Einschränkungen ist und dass die patientenorientierte Zusammenarbeit zwischen Praxen und Kliniken sich besser gestaltet als manchem patientenfernen und ideologielastigen Entscheidungsträger lieb sein kann. 
Wer trennt sich schon gerne von seiner Erfindung mit Namen doppelte Facharztschiene und erkennt an, dass nachhaltige Investitionen in dieses duale System mittel- und langfristig von gesamtgesellschaftlichem Nutzen sind und zugleich auch Engpässe in Krisenzeiten zu kompensieren helfen.\nFortschritt braucht in der ambulanten Versorgung öffentliche sowie kollektive finanzielle Ressourcen\nJedoch braucht Fortschritt in der ambulanten medizinischen Versorgung öffentliche sowie kollektive finanzielle Ressourcen, weshalb von staatlicher Seite die Dauerfrage der Finanzierung zu klären ist. Sehr hilfreich ist dabei das Einsehen, dass das Gesundheitssystem keinen lästigen, nach Belieben manipulierbaren Kostenfaktor, sondern eine der wichtigsten Säulen der Gesellschaft darstellt und sich Qualitätskosten langfristig immer positiv auswirken, wie die Pandemie und ihre Brennglasfunktion für Defizite und deren hohen Fehlerfolgekosten beweisen.\nAmbulantes Operieren verfügt über sehr hohes versorgungswissenschaftliches Potenzial und Ausbaufähigkeit, wenn die Rahmenbedingungen stimmen. Deren Gestaltung liegt nicht im Verantwortungsbereich ambulanter Operateure und Anästhesisten. Sie haben mit der Gründung eigener ambulanter Praxen, ambulanter OP-Zentren und Praxiskliniken und der Aufrechterhaltung dieser hochwertigen Infrastrukturen mit Eigenmitteln ihre Fähigkeiten zum selbständigen, eigenverantwortlichen Handeln über 3 Jahrzehnte unter Beweis und sich damit einem täglichen Qualitätswettbewerb gestellt und sich trotz aller gegenteiligen externen Bemühungen bisher behauptet.", "Die nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan mit der gesundheitspolitischen nonverbalen Einladung, doch in den ambulanten Gesundheitsmarkt einzusteigen. Die inzwischen vorliegenden Abrechnungsdaten sprechen die Sprache der Rendite- und Gewinnmaximierung. Beides ist für anonyme Investoren wie ausländischen Pensionsfonds inzwischen gegenüber Politik und Kostenträgern salonfähig geworden, während der Einzelvertragsarztpraxis, den Fachärzten einer BAG und Ärzte-MVZ weiterhin ihr in der Regel bereits gekürztes Honorar als Ausdruck höchster Profitgier vorgehalten wird. Wer sich auf Augenhöhe mit Vertretern der Konzern- und Finanzwirtschaft wähnt, übersieht dabei geflissentlich, dass die klinische und ambulante medizinische Versorgungskette nur so stabil und sicher ist wie die Investorenkette dazu bereit ist und zulässt. Und alles nur, um sich nicht mehr den permanenten und impertinenten Verbesserungsvorschlägen der sog. Leistungserbringer und ihren konstruktiven Dialogbedürfnissen auszusetzen?\nMit dem stellenweise bereits eingeläuteten Niedergang der ambulanten, selbständigen und freiberuflichen operierenden Facharztmedizin verliert das deutsche Gesundheitssystem einen wichtigen Bestandteil seines noch verbliebenen Tafelsilbers. Deshalb darf der Ausbau der Ambulantisierung nicht mit einem Abbau der eigentlichen Garanten für medizinisch-ärztliche Kompetenz und zuverlässige Leitungsbereitschaft einhergehen. 
Ambulantes Operieren ist als dritte Säule der Patientenversorgung ein fester, verlässlicher soziokultureller Bestandteil unserer Gesellschaft geworden und verdient es, wertgeschätzt, vor weiteren Defiziten geschützt und mit gezieltem Ressourceneinsatz bedarfsgerecht gefördert zu werden.", "Um beiden Aspekten gerecht zu werden, müssen alte Wahrheiten wie die der Beitragsstabilität und die daraus abgeleitete unzureichende Gesamtvergütung sowie das Abstreiten der Zuständigkeit für die Übernahme von Hygiene- und Qualitätskosten auf den Prüfstand gestellt und entsorgt werden. Zeitgleich gilt es, das öffentliche Interesse am hohen Gut Gesundheit zu reaktivieren und damit die kollektive Bereitschaft für höhere Beiträge vorzubereiten. Der zu erzielende gesamtgesellschaftliche Nutzen und sein Mehrwert rechtfertigen den für den anstehenden und überfälligen Um- und Ausbau des ambulanten Operierens benötigten Mitteleinsatz.\nDie Gesundheitspolitik sollte ergebnisoffen diesen bevorstehenden Transformationsprozess moderieren und im Konsens erzielte Beratungsergebnisse ordnungspolitisch zeitnah umsetzen. Die bisherige paternalistisch anmutende Vorgehensweise ist überholt, weniger vom Mainstream als von den eigenen Misserfolgen. An diese würde sich nahtlos das über 2 Jahrzehnte vernachlässigte und von allen Parteien bewusst übergangene Projekt ambulantes Operieren anreihen, gefolgt vom Scheitern des aktuell politisch angesagten Leuchtturmprojekts Ambulantisierung des Gesundheitssystems [21].\nNur unter dieser Grundvoraussetzung ist die Aufnahme weiterer ambulant durchführbarer Leistungen in den AOP-Katalog und dessen Erweiterung von 2879 auf 5355 Leistungen [1, 2] realistisch und Erfolg versprechend. Alles andere entspräche lediglich einer nochmaligen Verdopplung von ambulanten OP-Zahlen und zeitgleicher Multiplikation der dargelegten Problemzonen.\nEin weiterer wichtiger Schritt in Richtung einer positiven Zukunft des ambulanten Operierens ist zweifelsfrei eine stufenweise individuelle Beurteilung der Patienten nach objektiven Schweregraden statt der bisherigen Handhabung mit unterschiedlichen und subjektiv überlagerten Komorbiditäten [2]. Damit sind eine objektive und verbindliche Risikoadjustierung sowie die Nachverfolgbarkeit vorgegeben, ob ein Patient durch einen ambulanten chirurgischen Eingriff ein erhöhtes Risiko zu erwarten hat oder lediglich einer intensivierten ambulanten bzw. teilstationären Nachsorge bedarf. Zugleich haben sich ambulant operierende Einheiten in Struktur und Organisation diesen neuen Anforderungen und Inhalten des Leistungskatalogs anzupassen. Im Umkehrschluss allerdings muss sich parallel dazu die Vergütung an diesem Qualitätszuwachs orientieren und von bisherigen laienhaft anmutenden Wertschöpfungsmythen, wie der Schnitt-Naht-Zeit als Bewertungsmaß und Rechtfertigung zum systematischen Abwerten ambulanter Eingriffe, definitiv verabschieden.\nAmbulantes Operieren wäre ein Zugewinn für das unter Pflege- und Ärztemangel stehende Gesundheitswesen\nNicht nur der BAO empfiehlt ein schrittweises, in Absprache vereinbartes Vorgehen, an dessen Anfang der proaktive Konsens zuerst unter den niedergelassenen und anschließend mit den klinischen fachärztlichen ambulanten Operateuren und Anästhesisten stehen muss. Dies betrifft die bedarfsgerechte und nach internationalen Standards zu erfolgende Neudefinition des Komplexes ambulantes Operieren über die bisherige von der Versorgungsrealität überholte Gleichsetzung mit Tageschirurgie hinaus.
Das verschafft neue Handlungs- und Entscheidungsspielräume und erleichtert für alle Beteiligten Eingriffe ambulant ausschließlich nach medizinischen und nicht etwaigen ökonomischen Kriterien durchzuführen. Der bisher übliche Diskussionsbedarf für und wider stationäre postoperative Überwachungen entfällt damit. In der Folge werden an anderer Stelle dringend benötigte Kapazitäten frei. Ein elementarer Zugewinn für das unter dem Druck von Pflege- und Ärztemangel stehende Gesundheitswesen. Damit findet auch eine Kultur des einseitigen Misstrauens ihr Ende, die dem seit drei Jahrzehnten von weit über hundert Millionen ambulant operierten Patienten ihren Operateuren und Anästhesisten ausgesprochenen vollen Vertrauen zum Trotz Politik und Kostenträger gestattet, unverhohlen ihr unbegründetes, oft noch fadenscheiniges Misstrauen gegen ambulante Operateure und Anästhesisten offen und medienwirksam zur Schau zu stellen.\nFür einen geordneten Neuanfang des ambulanten Operierens ist der vorliegende Umfang des neu überarbeiteten AOP-Katalogs zu mächtig. Zu Beginn erscheint die Beschränkung auf die häufigsten ambulanten Operationen eine sinnvolle Lösung zu sein, über deren Umsetzung es zeitig zu diskutieren gilt. Je übersichtlicher und in der Handhabung alltagstauglicher, umso größer die allgemeine Akzeptanz, umso schneller sind Umsetzbarkeit und die Implementierung begleitender Analysen unter medizinisch-qualitativen und ökonomischen Gesichtspunkten, wie die Erfassung von Investitions- und Betriebskosten in Hinblick auf ein optionales Endprodukt Hybrid-DRG („diagnosis related groups“) für Praxen und Kliniken [18]. Die bisherigen EBM- und DRG-Systematiken bilden diese Parameter nicht ausreichend transparent ab und tragen dadurch zu den beschriebenen Versorgungs‑ und Finanzierungsdefiziten bei.\nStichhaltige Argumente von fachärztlicher Seite bedürfen von Anfang an einer stichprobenartigen Qualitätssicherung, auch zur Dokumentation der eigenen Leistungspotenziale und als Rechtstitel eigener Vergütungsansprüche. Um einen zeitraubenden Richtungsstreit zu umgehen, kann die Frage einer praktikablen Qualitätssicherung durch die Übernahme des seit 20 Jahren gerade im Rahmen von Selektivverträgen bewährten AQS1-Patientensicherheitsfragebogens [4, 8], mit der Option seiner regelmäßigen Modifizierungen und digitalen Ausführung, beantwortet werden.\nFerner gilt es, sich auf einen definierten Zeitraum zu einigen, innerhalb dessen die Transformation vom stationären in das ambulante Setting sowie die differenzierte Förderung für Kliniken und bereits bestehender ambulante OP-Einrichtungen abgeschlossen ist und – als conditio sine qua non – eine für Praxen und Kliniken einheitliche Vergütung greift.\nDer Deutschen Krankenhausgesellschaft (DKG) und v. a. den klinischen Kolleg:innen sei hiermit signalisiert, dass deren Vorbehalte gegenüber dem ambulanten Operieren auf der derzeitigen nicht tragfähigen EBM-Basis gerechtfertigt sind und deshalb eine eigenständige, gemeinsame AOP-Vergütungssystematik erforderlich ist. In einer Übergangsphase kann die unverhandelbare standortunabhängige Vergütungsanhebung für Kliniken in begründeten Fällen höher ausfallen als Kompensation für den anstehenden und zu bewältigenden aufwändigen Strukturwandel. Aber nach Fristablauf und erfolgter Umstellung hat eine einheitliche (d. h. sektorenverbindende) AOP-Vergütung inklusive einer Kostenerstattungsdynamik zu erfolgen. 
Eine Wiederholung der im Entwurf gelungenen und von der Politik zum Misslingen genötigten Aktion EBM 2000 plus darf und kann nicht akzeptiert werden. Ambulantes Operieren beruht auf dem Wunsch der Patienten nach dieser Versorgungsform und dem Wissen und Können der ambulanten Operateure und Anästhesisten, diese Patienteninteressen in der Praxis des Versorgungsalltags umzusetzen. Ab 1993 wurde dieser neu etablierte Behandlungspfad als gesundheitspolitisches Mittel zum vorrangigen Zweck der Krankenhauskostendämpfung uminterpretiert und dient seither zugleich zur Rechtfertigung einer chronischen Unterfinanzierung medizinisch erbrachter Leistungen in Praxen und Kliniken.\nRetrospektiv hätten niedergelassene Fachärzt:innen und Klinikkolleg:innen diesem Konstrukt niemals zustimmen dürfen, ebenso wenig KBV und DKG; sie sind in die Falle der einseitigen Kostenminimierung getappt und seitdem in der Funktion als subalterne Leistungserbringer gefangen.\nSollten sich die Interessensvertreter der Praxen und Kliniken in den nächsten Wochen und Monaten nicht zu einer vernunftbasierten Einigung für einen gemeinsamen Aktionsplan durchringen, droht folgendes nicht unrealistisches alternatives Szenario:\nkeine stufenweise und differenzierte Umsetzung des AOP-Katalogs, sondern zum nächstmöglichen Zeitpunkt vollumfänglich gültig und umzusetzen,\nkein Hybrid-DRG und Beibehaltung des EBM für Praxen und Kliniken ohne Anhebung der Vergütung und ohne Kostenneuberechnung,\nabsoluter Zwang zum ambulanten Operieren unter Berufung auf das MDK-Reformgesetz [20],\nweiterer Verlust fachärztlicher Therapie- und Gestaltungsfreiheit, dafür auf beide Sektoren übergreifend.", "In diesem Sinne verdienen gemeinsame Forderungen von Praxen und Kliniken wie die nach einer einheitlichen Vergütung des ambulanten Operierens höchste Priorität.\nNur wenn ambulante Operationen für Praxen und Kliniken leistungsgerecht und kostendeckend vergütet werden, können mehr ambulante Operationen durchgeführt, bedarfsgerechte zusätzliche ambulante OP-Strukturen geschaffen und damit auch in der Zukunft mittel- bis langfristig eine optimale ambulant operative Versorgung der Patienten gewährleistet werden.\nMehr ambulante Operationen sind möglich und gleichzusetzen mit einer Intensivierung der Ambulantisierung\nDie Überwindung der durch die COVID-19-Pandemie wie im Brennglas verstärkt sichtbar gewordenen Herausforderungen und die Umsetzung der vorhandenen Potenziale beim ambulanten Operieren hängen nun vorrangig von den Änderungen seiner bisherigen kontraproduktiven Rahmenbedingungen ab. Einen ersten wichtigen Schritt und Einschnitt in verfestigte und sinnfreie Verweigerungshaltung, Besitzstandwahrung und Stillstandfetischismus stellt zweifelsohne die Veröffentlichung des IGES-Gutachtens mit seiner Botschaft dar: mehr ambulante Operationen sind möglich und gleichzusetzen mit einer Intensivierung der Ambulantisierung.
Die alles entscheidende Frage bleibt offen und ist nur auf demokratischen Weg zu lösen: wessen kompetenten Händen vertraut die davon betroffene Gesellschaft dieses richtungsweisende Großprojekt an? Eine Teilantwort liefert die Gegenfrage: wer kommt angesichts der beschriebenen Fehleranalyse dafür nicht in Frage?\nDie erhöhte Aufmerksamkeit, die derzeit der Gesundheitsversorgung zuteil wird, fördert den Bedarf nach Veränderung und Diskussionen um Weiterentwicklungen und Verbesserungen von Defiziten und Schwachstellen des Gesundheitssystems. Dazu zählen vor allem die Unterfinanzierung des ambulanten Operierens und die sich in diesem Kontext aktuell eröffnenden Möglichkeiten, bereits in Erprobung befindliche Ansätze wie Hybrid-DRG oder bewährte Selektivverträge für die Ermittlung einer sektorenverbindenden Vergütung und damit einhergehend zur Überwindung bisheriger Sektorengrenzen in die Praxis umzusetzen. Erst wenn diese Hürden überwunden sind, kann über den Eintritt in die Phase einer gemeinsamen Patientenakte gesprochen werden. Parallelveranstaltungen nach Art und Vorgehensweise der Digitalisierung sind nicht nachahmenswert, wenn es um die Neudefinition des Begriffes ambulante Medizin und ihres umfangreichen und ausbaufähigen Teilbereiches ambulantes Operieren und dessen Anschluss an europäische und internationale Standards geht [7].\nDas politische Bekenntnis zur Ambulantisierung darf nicht wieder zum bloßen Lippenbekenntnis verkümmern, sondern muss als Chance gesehen werden, den klinischen Sektor durch Überführung von diagnostischen, therapeutischen sowie operativen Prozeduren in ambulante, nicht-vollstationäre Behandlungsabläufe zu entlasten. Deshalb darf der einzige Impetus nicht im Einsparen an finanziellen Ressourcen ohne Rücksicht auf die Ressource Mensch bestehen. Vielmehr stehen zielgerichteter Umgang mit nicht unbegrenzten, aber kontinuierlich anzupassenden Ressourcen und die unbedingte Vermeidung weiterer Fehlerkosten im Vordergrund. Und nicht wie bisher die stets gescheiterten Versuche, Gesundheitskosten zu senken, bei wachsendem Bedarf und steigenden Qualitätskosten. Die Beibehaltung und der sukzessive Ausbau des ambulanten Operierens werden dazu beitragen, die Gesundheitsausgaben im operativen Bereich nachhaltig zu stabilisieren und wieder kalkulierbar werden zu lassen.\nSollten sich der Gesetzgeber und in seinem Fahrwasser die Kostenträger dieser Logik und Einsicht weiter verweigern, dann drohen im bestanzunehmenden Verlauf die Konservierung des Status quo und somit Stagnation mit abfallender Tendenz beim ambulanten Operieren und weiterhin hohe stationäre Fallzahlen. Realistischer erscheint das Szenario eines weiteren und zunehmenden Fallzahlrückgangs ambulanter Operationen in Praxen und Kliniken und weitere Klinik- und Praxisschließungen aus Gründen der Unrentabilität bzw. der ungeklärten Nachfolgefrage. Im Gegenzug wird ein weiterer Zulauf in das Berufsbild des dauerangestellten Facharztes mit begrenzter Haftung in investorengeführten MVZ von Kapitalgesellschaften zu beobachten sein. Es steht außer Zweifel, dass diese Variante keine individuelle berufliche Perspektive von Dauer für eine akademisch geprägte Berufslaufbahn sein kann. Damit würde der Facharztstatus zu einem weisungsgebundenen Ausbildungsberuf degradiert werden und neben der Option Selbständigkeit auch noch seiner Freiberuflichkeit verlustig gehen.\nNach medizinischen und versorgungswissenschaftlichen Kriterien kann die Zukunft des ambulanten Operierens als positiv bewertet werden. 
Allerdings nicht losgelöst von gesundheitspolitischen Unabwägbarkeiten und Rahmenbedingungen sowie deren Einflussnahme auf Ressourcen und den damit korrelierenden Zukunftsaussichten des Arztberufs und seiner fachlich-operativen Subspezialisierungen. Deshalb bedarf es einer aktiven fachärztlichen Mitbeteiligung an der aktuellen Diskussion und am Meinungsbildungsprozess bei der Ausgestaltung der Ambulantisierung.", "\nDurch kontinuierliche Verbesserungen v. a. im niedergelassenen Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus.Durch ambulantes Operieren kann man Einsparpotenziale generieren.Aus Sicht der Patienten zählen v. a. ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit.Kostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt.Die nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan, mit der gesundheitspolitischen nonverbalen Einladung, in den ambulanten Gesundheitsmarkt einzusteigen.Ein unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf – und somit bereits die nahe Zukunft des ambulanten Operierens – gefährden.\n\nDurch kontinuierliche Verbesserungen v. a. im niedergelassenen Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus.\nDurch ambulantes Operieren kann man Einsparpotenziale generieren.\nAus Sicht der Patienten zählen v. a. ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit.\nKostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt.\nDie nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan, mit der gesundheitspolitischen nonverbalen Einladung, in den ambulanten Gesundheitsmarkt einzusteigen.\nEin unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf – und somit bereits die nahe Zukunft des ambulanten Operierens – gefährden." ]
[ null, null, null, null, null, null, null, null, null, null, null, "conclusion" ]
[ "Gesundheitspolitik", "Vergütung", "„Diagnosis related groups“", "Patientenzufriedenheit", "Leistungskatalog", "Health policy", "Remuneration", "Diagnosis related groups", "Patient satisfaction", "Service catalog" ]
Historie und Status quo: Die beschriebenen Vorbehalte sind schon seit geraumer Zeit ausgeräumt, weil hohe Operationszahlen mit großer Patientenzufriedenheit und guter Ergebnisqualität auf Dauer die besseren Argumente sind [8]. Dennoch ist die Zukunft des ambulanten Operierens trotz dieser Ausgangsposition weiterhin volatil, denn bisher gelang es ambulanten Operateuren und Anästhesisten nicht, neben Patienten und Angehörigen auch Gesundheitspolitik und Kostenträger von der Hochwertigkeit ihrer ärztlichen Leistungen so zu überzeugen, dass beide einer angemessenen sowie kostendeckenden Vergütung zustimmen [23, 26]. Hier gilt es, noch nachhaltiger politische Überzeugungsarbeit zu leistet, um die Zukunft des ambulanten Operierens in Praxen und Kliniken unter gleichen und akzeptablen Rahmenbedingungen langfristig zu sichern [15]. Das ambulante Operieren hat in den letzten Jahrzehnten einen enormen Entwicklungsschub erfahren Der Begriff „ambulant“ rührt ursprünglich vom von Patienten zu Patienten sich begebenden Chirurgen her [6]. Sowohl dessen Berufsbild als auch die Definition des ambulanten Operierens haben im Verlauf der Medizingeschichte und v. a. in den letzten Jahrzehnten einen enormen Entwicklungsschub erfahren. Ambulantes Operieren in Deutschland bedeutet im Grund nichts anderes, als dass Patienten die Zeit vor und nach ihrer Operation zu Hause verbringen und der ambulante Eingriff innerhalb der Aufenthaltsdauer der Patienten ohne stationäre Aufnahme zu erfolgen hat. Im AOP-Katalog (ambulantes Operieren und sonstige stationsersetzende Eingriffe im Krankenhaus) sind ambulant durchführbare und stationsersetzende Operationen aufgeführt, die jeweiligen Vergütungen sind im einheitlichen Bewertungsmaßstab (EBM) geregelt und gelten für Praxen und Kliniken seit Inkrafttreten des Gesundheitsstrukturgesetzes im Jahr 1993 [10]. Was die Struktur- und Prozessqualität angelangt, so sind Klinikstandards verbindlich. Es wird dabei weder den dynamisch steigernden Kostenstrukturen in Kliniken und Praxen noch der Implementierung des medizinischen Fortschritts innerhalb des ambulanten Settings kostenkalkulatorisch korrekt Rechnung getragen. Von dieser Abspaltung der ökonomischen Sicht auf das komplexe medizinische Produkt ambulantes Operieren und der Negation seines bewiesenen Mehrwertes sind alle regelmäßig ambulant operierenden Fachgebiete in Praxen und Kliniken betroffen [14, 16]. Durch kontinuierliche Verbesserungsprozesse v. a. im niedergelassenen, d. h. selbständigen und selbsthaftende Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus. Beides ermöglicht eine fachlich breit ausgerichtete sowie flächendeckende chirurgisch-operative Versorgung der Patienten. Hinzu kommt die Etablierung schonender OP- und Anästhesieverfahren als innovatives und motivierendes Moment. Patientenperspektive: Die Präferenz von Operationen im ambulanten Kontext begründen Patienten und Angehörige damit, dass sie in der Facharztpraxis von einem selbst gewählten Facharzt behandelt werden, zu dem ein persönliches Vertrauensverhältnis besteht, und sie anschließend die Genesungszeit zuhause verbringen können – in der Gewissheit, sich bei Bedarf vertrauensvoll an Operateur oder Anästhesisten wenden zu können. Diese Einschätzung umfasst die eigentliche Philosophie des ambulanten Operierens, nämlich den gesamten Behandlungsablauf aus einer zuverlässigen Hand. 
Damit ist ambulantes Operieren unstrittig zu einem festen Bestandteil und einer der tragenden Säulen der Patientenversorgung allgemein und speziell im Hinblick auf die Verwirklichung einer integrierten, fach- und sektorenübergreifenden Medizin geworden und nimmt deshalb eine Vorreiter- und Vorbildfunktion bei der anstehenden Ambulantisierungsoffensive ein. Dieser für alle sichtbare Leuchtturm ambulantes Operieren wurde im Übrigen auch aus den Steinen errichtet, die seiner Entwicklung immer wieder bewusst in den Weg gelegt werden, auch durch gezielte Infragestellung [25]. Notorischen Zweiflern sei an dieser Stelle empfohlen, zur besseren Ein- und Weitsicht die Kassenbrille abzunehmen und gegen die Patientensichtbrille einzutauschen [12]. 30 Jahre Erfahrung: Ambulante Operateure und Anästhesisten dürfen zu Recht mit ihrem Erfahrungsvorsprung von 30 Jahren auf ihre besondere Rolle bei der aktuellen Meinungsbildung und den anstehenden Entscheidungen bestehen. Wer in medizinisch-fachlichen Belangen und Fragen der Patientensicherheit die erforderliche Expertise vorweist, muss auch in so wichtigen gesundheitspolitischen Entscheidungen wie der Ambulantisierung angehört und in seinen Empfehlungen respektiert werden [22]. Zu groß ist sonst die Gefahr weiterer unerwünschter Nebenwirkungen und Kollateralschäden. Von diesen ist auch das ambulante Operieren im Verlauf seiner Entwicklung nicht verschont geblieben, wie ein aktuell zu verzeichnender Rückgang der Patienten- und Eingriffszahlen nach einer schon seit 2015 zu beobachtenden Stagnation belegt [23]. Dazu beweist dieser Trend das Gegenteil der seit Anfang an unbewiesenen Unterstellung einer unkontrollierten Mengenausweitung und drohender Kostenexplosion. Und es ist der Punkt in der – wohlgemerkt medizinischen – Erfolgsgeschichte des ambulanten Operierens erreicht, an dem die bisherigen einrichtungsinternen Rationalisierungsmaßnahmen ihren Zenit erreicht bzw. überschritten haben und vorhandene personelle und finanzielle Ressourcen definitiv an ihre Grenzen angekommen sind [16]. Ein unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf und somit bereits die nahe Zukunft des ambulanten Operierens gefährden. Stagnation trotz Potenzial: Der 2015 eingetretene Stillstand früherer konstanter Steigerungsraten betrifft Kliniken und Praxen gleichsam; seither sind Schwankungen in den Patientenzahlen und bestenfalls eine Plateaubildung zu verzeichnen [1]. Dies lässt sich mit Zahlen konkret belegen und sollte als Warnhinweis und Weckruf verstanden werden. Das Maximum der erfassten ambulanten Operationen in Kliniken betrug knapp 2 Mio., in den Praxen waren es 8,75 Mio. Zuletzt wurden aus dem niedergelassenen Bereich 2020 8 Mio. Eingriffe erfasst, was dem bereits 2008 erzielten Ergebnis entspricht. Retrospektiv verzeichnen Praxen in der Dekade zwischen 1998 und 2008 den größten Zuwachs an ambulanten Operationen, in Kliniken fand diese Entwicklung zeitlich versetzt von 2004 bis 2015 statt, mit einem Optimum von 7,1 % Anteil an allen Krankenhausbehandlungen [17]. 
Durch ambulantes Operieren per se und dessen gezielten Ausbau kann man Einsparpotenziale generieren Neben den genannten Vorteilen für Patienten wird auch das Gesundheitssystem durch ambulante Operationen begünstigt, dank einer im Vergleich zu stationären Eingriffen ablaufbedingt geringeren Inanspruchnahme von Kapazitäten und damit einhergehend geringeren Kostenanteils [3]. Damit ist grundsätzlich die Möglichkeit verbunden, durch ambulantes Operieren per se und durch dessen gezielten Ausbau relevante Einsparpotenziale zu generieren, wie P. Oberende bereits 2010 in seinem vom BAO initiierten Gutachten belegen konnte [24]. Die bisherigen Einspareffekte dienen den verantwortlichen Entscheidungsträgern unwissentlich als Deckungsbeiträge für anderweitige Defizite oder werden bestenfalls in neue medizinische Bereiche reinvestiert. Der Aspekt regelmäßiger Reinvestitionen in die Strukturen des ambulanten Operierens und deren Ausbau wird dabei völlig außer Acht gelassen, wie überhaupt das ambulante Operieren unter den Schirm einer peinlich anmutenden Milchmädchenrechnung gestellt wurde, mit dem vorherrschenden Ziel, für jede ambulante Operation eine stationäre einzusparen. Diese schlichte Modellrechnung kann nicht aufgehen, schon allein aus dem Grund, dass der medizinisch-therapeutische Teil des Gesundheitssystems ein im Wandel begriffener ist und per se einem ständigen Verbesserungsprozess unterliegt. So kompensiert das ambulante Operieren den weiter voranschreitenden Klinik- und Bettenabbau zusätzlich. Hier kann man in der Tat statt von einer nur stationsersetzenden Intention des Gesetzgebers ebenso von einer stationszersetzenden sprechen, wenn man die Entwicklung des Gesundheitssystems in den beiden letzten Dekaden betrachtet. Die gesamte ambulante niedergelassene fachärztliche Medizin im Umfeld von geplanten bzw. umgesetzten Klinikschließungen trägt entscheidend zur Aufrechterhaltung einer flächendeckenden wohnortnahen Patientenversorgung bei. Die getroffenen Maßnahmen zur Pandemiebekämpfung kommen streckenweise Klinikschließungen gleich und beruhen auf der sicheren Annahme einer weiter funktionierenden ambulanten Patientenversorgung. Und dies im Gegensatz zu Kliniken ohne Unterstützungsmaßnahmen für ambulante Facharztpraxen oder Anerkennung der Praxismitarbeiterinnen in Form eines gewährten Coronabonus. Forderungen an Politik und Kassen: Seit 3 Jahrzehnten ist der Generalverdacht, ambulante Medizin allgemein und insbesondere das anspruchsvolle, aufwändige ambulante Operieren seien Kostentreiber (also eine unnötige luxuriöse doppelte Versorgungsschiene) und folglich durch gesundheitspolitische Gegenmaßnahmen (wie die gegen alle ökonomische Vernunft von oben verordnete Punktwertabsenkung im Rahmen des EBM 2000 plus) in ihren Entwicklungen zunehmend einzuschränken, ständiger Begleiter ambulanter Fachärzt:innen. Und dies gegen den erklärten Patienten- und Wählerwillen, der in Umfragen mit konstant hohen Zustimmungswerten für ambulant operierende Fachärzt:innen zum Ausdruck kommt [7]. Es ist an der Zeit, dem hohen Stellenwert einer dem gesellschaftlichen Wertewandel unterworfenen, aber mehrheitlich stabilen, von Vertrauen geprägten Arzt-Patienten-Beziehung die nötige politische Aufmerksamkeit und Respekterweisung entgegenzubringen. Diese Forderung beinhaltet, das ambulante Operieren als wesentlichen Bestandteil der chirurgisch-operativen Versorgung der Patienten in Deutschland mit großem Steigerungspotenzial anzuerkennen und seine Weiterentwicklung (statt zu stören) gezielt zu fördern. 
Ein Blick auf die umfangreichen und nicht immer Sinn und Nutzen stiftenden GKV-Werbekampagnen (Gesetzliche Krankenversicherung) zeigt Möglichkeiten des Abflusses von Mitteln aus der aktiven Patientenversorgung und deren Zweckentfremdung für Gesundheitsleistungen als Sonderangebot und Marketinginstrument trotz angeblicher chronisch leerer Kassenlage. Was aus Sicht der Patienten wirklich zählt, das sind immer noch ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit. Dieser Widerspruch zwischen dem Selbstverständnis patientennaher und -ferner Akteure zeigt einmal mehr den bereits vollzogenen Wandel unseres Gesundheitswesens vom Sozialsystem hin zu Gesundheitsmarkt und -wirtschaft und diesem Wandel bzw. seinen Ursachen und deren Folgen ist auch das ambulante Operieren unterworfen. Vom dazwischen liegenden ökonomischen Entwicklungspotential bleiben ambulante OP-Einrichtungen ausgeschlossen, da sie in wenig flexible und noch weniger reflektierten Rahmenbedingungen gezwängt sind. Diese Tatsache ist hinreichend belegt, bekannt und Gegenstand zahlreichlicher Bemühungen der davon betroffenen Berufsverbände, wie der Hinweis, dass bereits vor 2019 und dem Beginn der Pandemie die ambulanten OP-Fallzahlen vorwiegend in Praxen rückläufig waren. Dieser Effekt wurde durch COVID-19 („coronavirus disease 2019“) zusätzlich verstärkt und betrifft seither auch die im stationären Setting erhobenen Fallzahlen [1]. Marginalen Schwankungen ambulanter OP-Zahlen im niedergelassenen vertragsärztlichen Bereich seit Beginn 2019 und bis Jahresmitte 2021 lassen im besten Annahmefall einen leicht steigenden Trend herauslesen und sich mit der Fluktuation abgesagter elektiver Eingriffe aus den Kliniken zu niedergelassenen Operateuren erklären. Den gemeinsamen Bemühungen um eine bestmögliche Aufrechterhaltung der Patientenversorgung unter Pandemiebedingungen muss eine Diskussion über den Terminus elektiver Eingriff folgen. Er ist der aktuellen Bedarfssituation anzupassen und neu zu definieren, auch in Hinblick auf den neuen und umfangreicheren AOP-Katalog und dessen machbarer Umsetzung, aber auch unter dem Gesichtspunkt einer Zunahme der jetzigen Kapazitätsengpässe und möglicher neuer Krisen oder Pandemielagen. Mehr Kapazitäten erfordern mehr Ressourcen: Für eine signifikante Steigerung ambulanter Operationen im niedergelassenen Bereich ist die Aufstockung von Personal, Räumlichkeiten und Arbeitszeiten nötig. Die Möglichkeiten dafür werden durch fehlende Kapazitäten deutlich minimiert, die wiederum mit der beschriebenen Unterfinanzierung korrelieren. Kostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt Erschwerend kommt hinzu, dass seit einer Dekade kostendeckende Hygienezuschläge von allen Kostenträgern, trotz eindeutiger Kausalität auf der Grundlage der Umsetzung einer staatlich angeordneten Gesetzesnovellierung, abgelehnt werden [13]. Diese inakzeptable und mit den Regeln eines Rechtstaates und dessen Grundsatzgebot von Treu und Glauben nur sehr schwer in Vereinbarung zu bringende Verweigerungshaltung findet ihre kompromisslose Fortsetzung auch in den Zeiten der Pandemie. Und dies ausdrücklich unter Beifall der verantwortlichen Gesundheitspolitik, die das Regelwerk eines „ehrbaren Kaufmanns“ im Umgang mit selbständigen fachärztlichen Freiberuflern bewusst außer Kraft gesetzt hat. 
Besser kann man den Tatbestand der ordnungspolitischen Dysfunktionalität an einem konkreten Beispiel nicht festmachen, ebenso die zum Dauerproblem gewordene Vergütungssystematik für ambulant operierende Facharztpraxen, welche wegen hoher Fixkosten trotz der extrabudgetären Vergütung des ambulanten Operierens ein hohes wirtschaftliches Risiko eingehen [26]. Diese Einschätzung zeigt darüber hinaus, dass die grundsätzlichen Rahmenbedingungen zu lange stagnieren und keine solide wirtschaftliche Basis für das ambulante Operieren darstellen. In diesem Kontext greift die von Busse et al. trefflich formulierte These bzgl. der Rahmenbedingungen für medizinischen Qualitätswettbewerb – bestehend aus politischer Verlässlichkeit und Mut zu Strukturveränderung [11]. Die Pandemie verstärkt diesbezügliche Defizite, darf aber nicht als ursächlicher Auslöser fehlinterpretiert werden, wozu die zeitliche Nähe von Pandemie, politischer Willenserklärung zur Ambulantisierung und IGES-Gutachten je nach Einzelinteressenslage durchaus verleiten kann. Praxisnachfolge entscheidet über Zukunft des ambulanten Operierens: Jede weitere Negation gesundheitspolitischer Wahrheiten wird ihren Einfluss auf das Interesse an Praxisnachfolgen ausüben, die gerade bei der aktuellen Altersstruktur niedergelassener Fachärzt:innen verdichtet anstehen. Bisheriger, weiterer und zukünftiger Umgang mit Praxen und ihren fachärztlichen Inhaber:innen wird von niederlassungsinteressierten Klinikfachärzt:innen sehr wohl und kritisch differenziert wahrgenommen, beeinflusst den weiteren Berufsweg und die Entscheidung zwischen den Alternativen Klinikkarriere, Selbständigkeit und MVZ-Anstellung (medizinisches Versorgungszentrum). Damit steht und fällt auch die Zukunft des ambulanten Operierens, weshalb niedergelassene Operateure und Anästhesisten mit ihrer täglichen Praxisarbeit nicht nur gemeinsame Patienten, sondern auch involvierte Facharztkolleginnen und Kollegen in den Kliniken am besten davon überzeugen können, dass bei einer Niederlassung die sich eröffnenden Chancen die Risiken bei weitem überwiegen. Das ambulante Operieren kann dabei einen wichtigen Teil zur persönlichen Erfolgsgeschichte und beruflichen Zufriedenheit beitragen. Dass diese Gewissheit in die eigenen Fähigkeiten außerhalb einer Facharztpraxis in deren Umfeld nicht immer vorauszusetzen ist und folglich Handlungsmaximen Dritter auf die emotionale Ebene wie bewusst ausgelöste Neiddebatten verlagert werden, belegen die jüngsten unsauberen und tendenziösen öffentlichen Äußerungen über Ärzteeinkommen trefflich. Nicht zufällig und von ungefähr geschehen solche Störmanöver in wiederkehrender Regelmäßigkeit und diesmal in auffallender zeitlicher Abfolge zur Veröffentlichung des IGES-Gutachtens und dessen positiven Einschätzungen in Bezug auf das ambulante Operieren. Positive Bilanz: Der ambulante Versorgungsbereich hat nicht erst in den letzten 2 Jahren bewiesen, wie leistungsfähig er trotz der beschriebenen Be- und Einschränkungen ist und dass die patientenorientierte Zusammenarbeit zwischen Praxen und Kliniken sich besser gestaltet als manchem patientenfernen und ideologielastigen Entscheidungsträger lieb sein kann. Wer trennt sich schon gerne von seiner Erfindung mit Namen doppelte Facharztschiene und erkennt an, dass nachhaltige Investitionen in dieses duale System mittel- und langfristig von gesamtgesellschaftlichem Nutzen sind und zugleich auch Engpässe in Krisenzeiten zu kompensieren helfen. 
Fortschritt braucht in der ambulanten Versorgung öffentliche sowie kollektive finanzielle Ressourcen Jedoch braucht Fortschritt in der ambulanten medizinischen Versorgung öffentliche sowie kollektive finanzielle Ressourcen, weshalb von staatlicher Seite die Dauerfrage der Finanzierung zu klären ist. Sehr hilfreich ist dabei das Einsehen, dass das Gesundheitssystem keinen lästigen, nach Belieben manipulierbaren Kostenfaktor, sondern eine der wichtigsten Säulen der Gesellschaft darstellt und sich Qualitätskosten langfristig immer positiv auswirken, wie die Pandemie und ihre Brennglasfunktion für Defizite und deren hohen Fehlerfolgekosten beweisen. Ambulantes Operieren verfügt über sehr hohes versorgungswissenschaftliches Potenzial und Ausbaufähigkeit, wenn die Rahmenbedingungen stimmen. Deren Gestaltung liegt nicht im Verantwortungsbereich ambulanter Operateure und Anästhesisten. Sie haben mit der Gründung eigener ambulanter Praxen, ambulanter OP-Zentren und Praxiskliniken und der Aufrechterhaltung dieser hochwertigen Infrastrukturen mit Eigenmitteln ihre Fähigkeiten zum selbständigen, eigenverantwortlichen Handeln über 3 Jahrzehnte unter Beweis und sich damit einem täglichen Qualitätswettbewerb gestellt und sich trotz aller gegenteiligen externen Bemühungen bisher behauptet. Fehlentwicklungen: Die nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan mit der gesundheitspolitischen nonverbalen Einladung, doch in den ambulanten Gesundheitsmarkt einzusteigen. Die inzwischen vorliegenden Abrechnungsdaten sprechen die Sprache der Rendite- und Gewinnmaximierung. Beides ist für anonyme Investoren wie ausländischen Pensionsfonds inzwischen gegenüber Politik und Kostenträgern salonfähig geworden, während der Einzelvertragsarztpraxis, den Fachärzten einer BAG und Ärzte-MVZ weiterhin ihr in der Regel bereits gekürztes Honorar als Ausdruck höchster Profitgier vorgehalten wird. Wer sich auf Augenhöhe mit Vertretern der Konzern- und Finanzwirtschaft wähnt, übersieht dabei geflissentlich, dass die klinische und ambulante medizinische Versorgungskette nur so stabil und sicher ist wie die Investorenkette dazu bereit ist und zulässt. Und alles nur, um sich nicht mehr den permanenten und impertinenten Verbesserungsvorschlägen der sog. Leistungserbringer und ihren konstruktiven Dialogbedürfnissen auszusetzen? Mit dem stellenweise bereits eingeläuteten Niedergang der ambulanten, selbständigen und freiberuflichen operierenden Facharztmedizin verliert das deutsche Gesundheitssystem einen wichtigen Bestandteil seines noch verbliebenen Tafelsilbers. Deshalb darf der Ausbau der Ambulantisierung nicht mit einem Abbau der eigentlichen Garanten für medizinisch-ärztliche Kompetenz und zuverlässige Leitungsbereitschaft einhergehen. Ambulantes Operieren ist als dritte Säule der Patientenversorgung ein fester, verlässlicher soziokultureller Bestandteil unserer Gesellschaft geworden und verdient es, wertgeschätzt, vor weiteren Defiziten geschützt und mit gezieltem Ressourceneinsatz bedarfsgerecht gefördert zu werden. Potenziale und Herausforderungen an das ambulante Operieren: Um beiden Aspekten gerecht zu werden, müssen alte Wahrheiten wie die der Beitragsstabilität und die daraus abgeleitete unzureichende Gesamtvergütung sowie das Abstreiten der Zuständigkeit für die Übernahme von Hygiene- und Qualitätskosten auf den Prüfstand und entsorgt werden. 
Zeitgleich gilt es, das öffentliche Interesse am hohen Gut Gesundheit zu reaktivieren und damit die kollektive Bereitschaft für höhere Beiträge vorzubereiten. Der zu erzielende gesamtgesellschaftliche Nutzen und sein Mehrwert rechtfertigen den für den anstehenden und überfälligen Um- und Ausbau des ambulanten Operierens benötigten Mitteleinsatz. Die Gesundheitspolitik sollte ergebnisoffen diesen bevorstehenden Transformationsprozess moderieren und im Konsens erzielte Beratungsergebnisse ordnungspolitisch zeitnah umsetzen. Die bisherige paternalistisch anmutende Vorgehensweise ist überholt, weniger vom Mainstream als von den eigenen Misserfolgen. An diese würde sich nahtlos das über 2 Jahrzehnte vernachlässigte und von allen Parteien bewusst übergangene Projekt ambulantes Operieren anreihen, gefolgt vom Scheitern des aktuell politisch angesagten Leuchtturmprojekts Ambulantisierung des Gesundheitssystems [21]. Nur unter dieser Grundvoraussetzung ist die Aufnahme weiterer ambulant durchführbarer Leistungen in den AOP-Katalog und dessen Erweiterung von 2879 auf 5355 Leistungen [1, 2] realistisch und Erfolg versprechend. Alles andere entspräche lediglich einer nochmaligen Verdopplung von ambulanten OP-Zahlen und zeitgleicher Multiplikation der dargelegten Problemzonen. Ein weiterer wichtiger Schritt in Richtung einer positiven Zukunft des ambulanten Operierens ist zweifelsfrei eine stufenweise individuelle Beurteilung der Patienten nach objektiven Schweregraden statt der bisherigen Handhabung mit unterschiedlichen und subjektiv überlagerten Komorbiditäten [2]. Damit ist eine objektive und verbindliche Risikoadjustierung vorgegeben und die Nachverfolgbarkeit, ob ein Patient durch einen ambulanten chirurgischen Eingriff ein erhöhtes Risiko zu erwarten hat oder lediglich einer intensivierten ambulanten bzw. teilstationären Nachsorge bedarf, haben sich ambulant operierende Einheiten in Struktur und Organisation diesen neuen Anforderungen und Inhalten des Leistungskatalogs anzupassen. Im Umkehrschluss allerdings muss sich parallel dazu die Vergütung an diesem Qualitätszuwachs orientieren und von bisherigen laienhaft anmutenden Wertschöpfungsmythen, wie die Schnitt-Naht-Zeit als Bewertungsmaß und Rechtfertigung zum systematischen Abwerten ambulanter Eingriffe, definitiv verabschieden. Ambulantes Operieren wäre ein Zugewinn für das unter Pflege- und Ärztemangel stehende Gesundheitswesen Nicht nur der BAO empfiehlt ein schrittweises, in Absprache vereinbartes Vorgehen, an dessen Anfang der proaktive Konsens zuerst unter den niedergelassenen und anschließend mit den klinischen fachärztlichen ambulanten Operateuren und Anästhesisten stehen muss. Dies betrifft die bedarfsgerechte und nach internationalen Standards zu erfolgende Neudefinition des Komplexes ambulantes Operieren über die bisherige von der Versorgungsrealität überholte Gleichsetzung mit Tageschirurgie hinaus. Das verschafft neue Handlungs- und Entscheidungsspielräume und erleichtert für alle Beteiligten Eingriffe ambulant ausschließlich nach medizinischen und nicht etwaigen ökonomischen Kriterien durchzuführen. Der bisher übliche Diskussionsbedarf für und wider stationäre postoperative Überwachungen entfällt damit. In der Folge werden an anderer Stelle dringend benötigte Kapazitäten frei. Ein elementarer Zugewinn für das unter dem Druck von Pflege- und Ärztemangel stehende Gesundheitswesen. 
Damit findet auch eine Kultur des einseitigen Misstrauens ihr Ende, die dem seit drei Jahrzehnten von weit über hundert Millionen ambulant operierten Patienten ihren Operateuren und Anästhesisten ausgesprochenen vollen Vertrauen zum Trotz Politik und Kostenträger gestattet, unverhohlen ihr unbegründetes, oft noch fadenscheiniges Misstrauen gegen ambulante Operateure und Anästhesisten offen und medienwirksam zur Schau zu stellen. Für einen geordneten Neuanfang des ambulanten Operierens ist der vorliegende Umfang des neu überarbeiteten AOP-Katalogs zu mächtig. Zu Beginn erscheint die Beschränkung auf die häufigsten ambulanten Operationen eine sinnvolle Lösung zu sein, über deren Umsetzung es zeitig zu diskutieren gilt. Je übersichtlicher und in der Handhabung alltagstauglicher, umso größer die allgemeine Akzeptanz, umso schneller sind Umsetzbarkeit und die Implementierung begleitender Analysen unter medizinisch-qualitativen und ökonomischen Gesichtspunkten, wie die Erfassung von Investitions- und Betriebskosten in Hinblick auf ein optionales Endprodukt Hybrid-DRG („diagnosis related groups“) für Praxen und Kliniken [18]. Die bisherigen EBM- und DRG-Systematiken bilden diese Parameter nicht ausreichend transparent ab und tragen dadurch zu den beschriebenen Versorgungs‑ und Finanzierungsdefiziten bei. Stichhaltige Argumente von fachärztlicher Seite bedürfen von Anfang an einer stichprobenartigen Qualitätssicherung, auch zur Dokumentation der eigenen Leistungspotenziale und als Rechtstitel eigener Vergütungsansprüche. Um einen zeitraubenden Richtungsstreit zu umgehen, kann die Frage einer praktikablen Qualitätssicherung durch die Übernahme des seit 20 Jahren gerade im Rahmen von Selektivverträgen bewährten AQS1-Patientensicherheitsfragebogens [4, 8], mit der Option seiner regelmäßigen Modifizierungen und digitalen Ausführung, beantwortet werden. Ferner gilt es, sich auf einen definierten Zeitraum zu einigen, innerhalb dessen die Transformation vom stationären in das ambulante Setting sowie die differenzierte Förderung für Kliniken und bereits bestehender ambulante OP-Einrichtungen abgeschlossen ist und – als conditio sine qua non – eine für Praxen und Kliniken einheitliche Vergütung greift. Der Deutschen Krankenhausgesellschaft (DKG) und v. a. den klinischen Kolleg:innen sei hiermit signalisiert, dass deren Vorbehalte gegenüber dem ambulanten Operieren auf der derzeitigen nicht tragfähigen EBM-Basis gerechtfertigt sind und deshalb eine eigenständige, gemeinsame AOP-Vergütungssystematik erforderlich ist. In einer Übergangsphase kann die unverhandelbare standortunabhängige Vergütungsanhebung für Kliniken in begründeten Fällen höher ausfallen als Kompensation für den anstehenden und zu bewältigenden aufwändigen Strukturwandel. Aber nach Fristablauf und erfolgter Umstellung hat eine einheitliche (d. h. sektorenverbindende) AOP-Vergütung inklusive einer Kostenerstattungsdynamik zu erfolgen. Eine Wiederholung der im Entwurf gelungenen und von der Politik zum Misslingen genötigten Aktion EBM 2000 plus darf und kann nicht akzeptiert werden. Ambulantes Operieren beruht auf dem Wunsch der Patienten nach dieser Versorgungsform und dem Wissen und Können der ambulanten Operateure und Anästhesisten, diese Patienteninteressen in der Praxis des Versorgungsalltags umzusetzen. 
Ab 1993 wurde dieser neu etablierte Behandlungspfad als gesundheitspolitisches Mittel zum vorrangigen Zweck der Krankenhauskostendämpfung uminterpretiert und dient seither zugleich zur Rechtfertigung einer chronischen Unterfinanzierung medizinisch erbrachter Leistungen in Praxen und Kliniken. Retrospektiv hätten niedergelassene Fachärzt:innen und Klinikkolleg:innen diesem Konstrukt niemals zustimmen dürfen, ebenso wenig KBV und DKG; sie sind in die Falle der einseitigen Kostenminimierung getappt und seitdem in der Funktion als subalterne Leistungserbringer gefangen. Sollten sich die Interessensvertreter der Praxen und Kliniken in den nächsten Wochen und Monaten nicht zu einer vernunftbasierten Einigung für einen gemeinsamen Aktionsplan durchringen, droht folgendes nicht unrealistisches alternatives Szenario:keine stufenweise und differenzierte Umsetzung des AOP-Katalogs, sondern zum nächstmöglichen Zeitpunkt vollumfänglich gültig und umzusetzen,kein Hybrid-DRG und Beibehaltung des EBM für Praxen und Kliniken ohne Anhebung der Vergütung und ohne Kostenneuberechnung,absoluter Zwang zum ambulanten Operieren unter Berufung auf das MDK-Reformgesetz [20],weiterer Verlust fachärztlicher Therapie- und Gestaltungsfreiheit, dafür auf beide Sektoren übergreifend. keine stufenweise und differenzierte Umsetzung des AOP-Katalogs, sondern zum nächstmöglichen Zeitpunkt vollumfänglich gültig und umzusetzen, kein Hybrid-DRG und Beibehaltung des EBM für Praxen und Kliniken ohne Anhebung der Vergütung und ohne Kostenneuberechnung, absoluter Zwang zum ambulanten Operieren unter Berufung auf das MDK-Reformgesetz [20], weiterer Verlust fachärztlicher Therapie- und Gestaltungsfreiheit, dafür auf beide Sektoren übergreifend. Zusammenfassung: In diesem Sinne verdienen gemeinsame Forderungen von Praxen und Kliniken wie die nach einer einheitlichen Vergütung des ambulanten Operierens höchste Priorität. Nur wenn ambulante Operationen für Praxen und Kliniken leistungsgerecht und kostendeckend vergütet werden, können mehr ambulante Operationen durchgeführt, bedarfsgerechte zusätzliche ambulante OP-Strukturen geschaffen und damit auch in der Zukunft mittel- bis langfristig eine optimale ambulant operative Versorgung der Patienten gewährleistet werden. Mehr ambulante Operationen sind möglich und gleichzusetzen mit einer Intensivierung der Ambulantisierung Die Überwindung der durch die COVID-19-Pandemie wie im Brennglas verstärkt sichtbar gewordenen Herausforderungen und die Umsetzung der vorhandenen Potenziale beim ambulanten Operieren hängen nun vorrangig von den Änderungen seiner bisherigen kontraproduktiven Rahmenbedingungen ab. Einen ersten wichtigen Schritt und Einschnitt in verfestigte und sinnfreie Verweigerungshaltung, Besitzstandwahrung und Stillstandfetischismus stellt zweifelsohne die Veröffentlichung des IGES-Gutachtens dar und seiner Botschaft: mehr ambulante Operationen sind möglich und gleichzusetzen mit einer Intensivierung der Ambulantisierung. Die alles entscheidende Frage bleibt offen und ist nur auf demokratischen Weg zu lösen: wessen kompetenten Händen vertraut die davon betroffene Gesellschaft dieses richtungsweisende Großprojekt an? Eine Teilantwort liefert die Gegenfrage: wer kommt angesichts der beschriebenen Fehleranalyse dafür nicht in Frage? Die erhöhte Aufmerksamkeit, die derzeit der Gesundheitsversorgung zuteil wird, fördert den Bedarf nach Veränderung und Diskussionen um Weiterentwicklungen und Verbesserungen von Defiziten und Schwachstellen des Gesundheitssystems. 
Dazu zählen vor allem die Unterfinanzierung des ambulanten Operierens und die sich in diesem Kontext aktuell eröffnenden Möglichkeiten, bereits in Erprobung befindliche Ansätze wie Hybrid-DRG oder bewährte Selektivverträge für die Ermittlung einer sektorenverbindenden Vergütung und damit einhergehend zur Überwindung bisheriger Sektorengrenzen in die Praxis umzusetzen. Erst wenn diese Hürden überwunden sind, kann über den Eintritt in die Phase einer gemeinsamen Patientenakte gesprochen werden. Parallelveranstaltungen nach Art und Vorgehensweise der Digitalisierung sind nicht nachahmenswert, wenn es um die Neudefinition des Begriffes ambulante Medizin und ihres umfangreichen und ausbaufähigen Teilbereiches ambulantes Operieren und dessen Anschluss an europäische und internationale Standards geht [7]. Das politische Bekenntnis zur Ambulantisierung darf nicht wieder zum bloßen Lippenbekenntnis verkümmern, sondern muss als Chance gesehen werden, den klinischen Sektor durch Überführung von diagnostischen, therapeutischen sowie operativen Prozeduren in ambulante, nicht-vollstationäre Behandlungsabläufe zu entlasten. Deshalb darf der einzige Impetus nicht im Einsparen an finanziellen Ressourcen ohne Rücksicht auf die Ressource Mensch bestehen. Vielmehr stehen zielgerichteter Umgang mit nicht unbegrenzten, aber kontinuierlich anzupassenden Ressourcen und die unbedingte Vermeidung weiterer Fehlerkosten im Vordergrund. Und nicht wie bisher die stets gescheiterten Versuche, Gesundheitskosten zu senken, bei wachsendem Bedarf und steigenden Qualitätskosten. Die Beibehaltung und der sukzessive Ausbau des ambulanten Operierens werden dazu beitragen, die Gesundheitsausgaben im operativen Bereich nachhaltig zu stabilisieren und wieder kalkulierbar werden zu lassen. Sollten sich der Gesetzgeber und in seinem Fahrwasser die Kostenträger dieser Logik und Einsicht weiter verweigern, dann drohen im bestanzunehmenden Verlauf die Konservierung des Status quo und somit Stagnation mit abfallender Tendenz beim ambulanten Operieren und weiterhin hohe stationäre Fallzahlen. Realistischer erscheint das Szenario eines weiteren und zunehmenden Fallzahlrückgangs ambulanter Operationen in Praxen und Kliniken und weitere Klinik- und Praxisschließungen aus Gründen der Unrentabilität bzw. der ungeklärten Nachfolgefrage. Im Gegenzug wird ein weiterer Zulauf in das Berufsbild des dauerangestellten Facharztes mit begrenzter Haftung in investorengeführten MVZ von Kapitalgesellschaften zu beobachten sein. Es steht außer Zweifel, dass diese Variante keine individuelle berufliche Perspektive von Dauer für eine akademisch geprägte Berufslaufbahn sein kann. Damit würde der Facharztstatus zu einem weisungsgebundenen Ausbildungsberuf degradiert werden und neben der Option Selbständigkeit auch noch seiner Freiberuflichkeit verlustig gehen. Nach medizinischen und versorgungswissenschaftlichen Kriterien kann die Zukunft des ambulanten Operierens als positiv bewertet werden. Allerdings nicht losgelöst von gesundheitspolitischen Unabwägbarkeiten und Rahmenbedingungen sowie deren Einflussnahme auf Ressourcen und den damit korrelierenden Zukunftsaussichten des Arztberufs und seiner fachlich-operativen Subspezialisierungen. Deshalb bedarf es einer aktiven fachärztlichen Mitbeteiligung an der aktuellen Diskussion und am Meinungsbildungsprozess bei der Ausgestaltung der Ambulantisierung. Fazit für die Praxis: Durch kontinuierliche Verbesserungen v. a. 
im niedergelassenen Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus.Durch ambulantes Operieren kann man Einsparpotenziale generieren.Aus Sicht der Patienten zählen v. a. ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit.Kostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt.Die nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan, mit der gesundheitspolitischen nonverbalen Einladung, in den ambulanten Gesundheitsmarkt einzusteigen.Ein unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf – und somit bereits die nahe Zukunft des ambulanten Operierens – gefährden. Durch kontinuierliche Verbesserungen v. a. im niedergelassenen Bereich, zeichnet sich ambulantes Operieren durch schlanke Strukturen und professionelle Teamarbeit aus. Durch ambulantes Operieren kann man Einsparpotenziale generieren. Aus Sicht der Patienten zählen v. a. ärztliche Kernkompetenzen, ihre Qualität und daraus resultierend Patientensicherheit und Patientenzufriedenheit. Kostendeckende Hygienezuschläge werden von allen Kostenträgern abgelehnt. Die nächste sich abzeichnende Eskalationsstufe ruft fachfremde, kapitalstarke Privatinvestoren auf den Plan, mit der gesundheitspolitischen nonverbalen Einladung, in den ambulanten Gesundheitsmarkt einzusteigen. Ein unter den derzeitigen Rahmenbedingungen erzwungener Ausbau ambulanter OP-Kapazitäten würde in keinem betriebswirtschaftlich mehr zu verantwortenden Aufwand-Nutzen-Verhältnis stehen und bestehende Ablaufstrukturen durch zusätzlichen Ressourcenmehrbedarf – und somit bereits die nahe Zukunft des ambulanten Operierens – gefährden.
Background: Over the past 30 years, outpatient surgery has developed into an indispensable pillar of patient care in Germany, without its full potential being realized. Methods: Presentation and comparison of outpatient surgery numbers from clinics and practices, and a critical analysis of their development. Results: After reaching a maximum number of outpatient operations in practices and clinics in 2015, there has been a location-independent decrease and stagnation due to underfunding of outpatient surgical structures and a shortage of resources. Conclusions: Outpatient surgery represents a patient-friendly and cost-effective alternative to inpatient interventions, provided that medical and social indications rule out an increased risk. The expansion of outpatient surgery has so far provided relief to the cost-intensive hospital sector and, in view of the shortage of nurses and physicians, will do so to an even greater extent as soon as politicians and payers commit to remuneration that is performance-related and actually covers the costs. Furthermore, the future of the healthcare system also depends on the future of outpatient surgery, which is to be assessed as positive.
null
null
4,991
217
[ 411, 187, 223, 450, 481, 288, 225, 255, 228, 1214, 687 ]
12
[ "und", "der", "die", "den", "zu", "von", "des", "ambulanten", "für", "operieren" ]
[ "und patientenzufriedenheit kostendeckende", "aufrechterhaltung der patientenversorgung", "ambulant operierten patienten", "patienten ihren operateuren", "anästhesisten diese patienteninteressen" ]
null
null
null
null
null
null
null
[CONTENT] Gesundheitspolitik | Vergütung | „Diagnosis related groups“ | Patientenzufriedenheit | Leistungskatalog | Health policy | Remuneration | Diagnosis related groups | Patient satisfaction | Service catalog [SUMMARY]
[CONTENT] Gesundheitspolitik | Vergütung | „Diagnosis related groups“ | Patientenzufriedenheit | Leistungskatalog | Health policy | Remuneration | Diagnosis related groups | Patient satisfaction | Service catalog [SUMMARY]
null
null
null
null
[CONTENT] Ambulatory Surgical Procedures | Cost-Benefit Analysis | Germany | Hospital Costs | Humans | Outpatients [SUMMARY]
[CONTENT] Ambulatory Surgical Procedures | Cost-Benefit Analysis | Germany | Hospital Costs | Humans | Outpatients [SUMMARY]
null
null
null
null
[CONTENT] und patientenzufriedenheit kostendeckende | aufrechterhaltung der patientenversorgung | ambulant operierten patienten | patienten ihren operateuren | anästhesisten diese patienteninteressen [SUMMARY]
[CONTENT] und patientenzufriedenheit kostendeckende | aufrechterhaltung der patientenversorgung | ambulant operierten patienten | patienten ihren operateuren | anästhesisten diese patienteninteressen [SUMMARY]
null
null
null
null
[CONTENT] und | der | die | den | zu | von | des | ambulanten | für | operieren [SUMMARY]
[CONTENT] und | der | die | den | zu | von | des | ambulanten | für | operieren [SUMMARY]
null
null
null
null
[CONTENT] durch | und | den | aus | teamarbeit aus durch ambulantes | einladung den | einladung den ambulanten | einladung den ambulanten gesundheitsmarkt | professionelle teamarbeit aus durch | einzusteigen ein unter den [SUMMARY]
[CONTENT] und | der | die | den | zu | von | des | im | ambulanten | einer [SUMMARY]
null
null
null
null
[CONTENT] Outpatient ||| ||| [SUMMARY]
[CONTENT] the past 30 years | Germany ||| ||| 2015 ||| ||| ||| [SUMMARY]
null
Clinical outcome of postoperative highly conformal versus 3D conformal radiotherapy in patients with malignant pleural mesothelioma.
24456714
Radiotherapy (RT) is currently under investigation as part of a trimodality treatment of malignant pleural mesothelioma (MPM). The introduction of the highly conformal radiotherapy (HCRT) technique improved dose delivery and target coverage in comparison to 3-dimensional conformal radiotherapy (3DCRT). The following study was undertaken to investigate the clinical outcome of both radiation techniques.
BACKGROUND
Thirty-nine MPM patients were treated with neoadjuvant chemotherapy, extrapleural pneumonectomy (EPP) and adjuvant RT. Twenty-five patients were treated with 3DCRT and 14 with HCRT (intensity-modulated radiotherapy or volumetric modulated arc therapy). Overall survival, disease-free survival, locoregional recurrence and pattern of recurrence were assessed. A matched pair analysis was performed including 11 patients of each group.
METHODS
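The matched pair analysis mentioned above pairs each HCRT patient with a 3DCRT patient on gender, age, histology, tumor stage and resection status. The publication does not spell out its matching procedure, so the following is only an illustrative sketch assuming a simple greedy strategy (exact match on the categorical variables, then nearest age within a tolerance); the record fields and the age tolerance are hypothetical, not taken from the study.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Patient:
    # Hypothetical record layout; field names are assumptions, not from the study.
    pid: str
    age: float
    gender: str      # "m" / "f"
    histology: str   # e.g. "epithelioid", "biphasic"
    stage: str       # e.g. "T2N0M0"
    resection: str   # e.g. "R0", "R1"

def match_pairs(hcrt: List[Patient], tdcrt: List[Patient],
                max_age_diff: float = 5.0) -> List[Tuple[Patient, Patient]]:
    """Greedy 1:1 matching: exact match on the categorical variables,
    then pick the 3DCRT patient with the smallest age difference."""
    pairs: List[Tuple[Patient, Patient]] = []
    available = list(tdcrt)
    for p in hcrt:
        best: Optional[Patient] = None
        for q in available:
            if (q.gender, q.histology, q.stage, q.resection) != \
               (p.gender, p.histology, p.stage, p.resection):
                continue
            if abs(q.age - p.age) > max_age_diff:
                continue
            if best is None or abs(q.age - p.age) < abs(best.age - p.age):
                best = q
        if best is not None:
            pairs.append((p, best))
            available.remove(best)   # each control is used at most once
    return pairs
```

Greedy matching of this kind is order-dependent, and a rule set like this one would have to be relaxed explicitly to allow the single pair in the study that was not matched on gender.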
After matching for gender, age, histology, tumor stage and resection status, HCRT seemed superior to 3DCRT with a local relapse rate of 27.3% compared to 72.7% after 3DCRT (p = 0.06). The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06). The median overall survival was 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT (p = 0.57). Recurrence analysis showed that in-field local relapses occurred in previously underdosed regions of the tumor bed in 16% of patients treated with 3DCRT and in 0% of HCRT patients.
RESULTS
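The roughly 49% gain in median time to local relapse quoted above follows directly from the two medians (10.9 and 16.2 months). A minimal check in Python, assuming nothing beyond the numbers quoted in the abstract:

```python
# Relative increase = (new - old) / old, expressed as a percentage.
median_3dcrt = 10.9   # months, median time to local relapse after 3DCRT
median_hcrt = 16.2    # months, median time to local relapse after HCRT

relative_increase = (median_hcrt - median_3dcrt) / median_3dcrt * 100
print(f"{relative_increase:.1f}%")   # 48.6%, i.e. the ~49% quoted in the abstract
```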
The use of HCRT increases the probability of local control as compared to 3DCRT by improving target volume coverage. HCRT did not improve overall survival in this patient series due to the high rate of distant recurrences.
CONCLUSIONS
[ "Combined Modality Therapy", "Female", "Follow-Up Studies", "Humans", "Imaging, Three-Dimensional", "Lung Neoplasms", "Male", "Matched-Pair Analysis", "Mesothelioma", "Mesothelioma, Malignant", "Middle Aged", "Pleural Neoplasms", "Pneumonectomy", "Postoperative Care", "Radiotherapy, Conformal", "Radiotherapy, Image-Guided", "Retrospective Studies", "Treatment Outcome" ]
3946177
Introduction
Malignant pleural mesothelioma (MPM) is a rare and aggressive malignancy associated with poor prognosis. Although MPM is often initially confined to the hemithorax, it has a high potential for metastatic spread in the course of disease [1]. The mainstay of treatment is surgery consisting of either pleurectomy/decortication (PD) or radical extrapleural pneumonectomy (EPP) in combination with cisplatin/pemetrexed and, in selected cases, postoperative radiotherapy [2-5]. The rationale to apply postoperative radiotherapy after EPP has been the high rate of local recurrence after EPP alone of about 40% [6]. The pattern of pleural dissemination, infiltrative growth and the manipulations within the chest cavity during surgery place the entire ipsilateral chest wall at high risk for post-surgical relapse, especially at the diaphragm insertion, the pericardium, mediastinum and bronchial stump. Technically, hemithoracic radiotherapy is challenging due to various reasons. Firstly, the size of the volume to be treated is large, and may cover up to six liters. Secondly, the target lies in close proximity to various organs at risk (OAR) such as the heart, ipsilateral kidney, liver, remaining lung, esophagus and/or spinal cord. Thirdly, the thoracic cavity has a complex shape with its costodiaphragmatic recess extending around the liver and the kidney. Previous publications showed that highly conformal radiotherapy (HCRT) such as intensity modulated radiotherapy (IMRT) or volumetric modulated arc therapy (VMAT) can improve the dose distribution in respect to target coverage and dose to OAR [7,8]. However to our knowledge there is no clinical study published that investigated and compared clinical outcome after both radiation techniques. In order to verify if the technical improvements introduced with IMRT or VMAT have translated into a clinical benefit, we evaluated the clinical outcome of MPM patients treated with chemotherapy, surgery and 3DCRT or HCRT at our institution.
null
null
Results
Between 1999 and 2011, 39 patients were treated with neoadjuvant chemotherapy and EPP followed by RT. All follow up patients were deceased at the time of this study. Matched pair analysis In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except one pair were the gender was not matched). In each group 3 patients had a tumor stage T1N0M0 with resection R0 and 8 patients, tumor stage T2N0M0 with resection R1. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patient’s in the HCRT and 3DCRT group. The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06) as displayed in Figure 1. Three (27.3%) and eight patients (72.7%) had a local relapse after HCRT and 3DCRT respectively. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survivals were 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT and are displayed in Figure 2 (p = 0.57). Local control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Overall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except one pair were the gender was not matched). In each group 3 patients had a tumor stage T1N0M0 with resection R0 and 8 patients, tumor stage T2N0M0 with resection R1. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patient’s in the HCRT and 3DCRT group. The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06) as displayed in Figure 1. Three (27.3%) and eight patients (72.7%) had a local relapse after HCRT and 3DCRT respectively. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survivals were 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT and are displayed in Figure 2 (p = 0.57). Local control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Overall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Outcome of the entire cohort Fourteen HCRT and 25 3DCRT patients were treated and reviewed. Patient’s sex, age, tumor characteristics and resection are displayed on Table 1. Patient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy Abbreviation: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy. The median overall survival was 20.8 ± 14.4 months for the HCRT group, and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). 
In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) due to intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure and another patient, who was well and without evidence of disease at two days before his sudden death most likely also died due to a cardiac event. In the 3DCRT group 24 patients (96%) died of progressive disease and one of septic shock (4%). The local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT. Fourteen HCRT and 25 3DCRT patients were treated and reviewed. Patient’s sex, age, tumor characteristics and resection are displayed on Table 1. Patient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy Abbreviation: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy. The median overall survival was 20.8 ± 14.4 months for the HCRT group, and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) due to intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure and another patient, who was well and without evidence of disease at two days before his sudden death most likely also died due to a cardiac event. In the 3DCRT group 24 patients (96%) died of progressive disease and one of septic shock (4%). The local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT. Analysis of tumor recurrence For patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy to 59 Gy (according to our treatment planning protocol 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax, Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25) in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy, instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%) the delivered dose was not possible to define because no diagnostic CT was available. In one patient (4.0%) the site of recurrence could not be determined because of missing radiographs during follow-up. Locoregional recurrences in patients who underwent highly conformal modulated radiotherapy Locoregional recurrences in patients who underwent 3D-conformal radiotherapy 1The recurrence occurred in two separate regions. Distant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in HCRT group and 16.7 ± 7.7 months in 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). 
Both sites were affected by distant metastases in four further patients (40%).
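The survival endpoints above were estimated with the Kaplan-Meier method and compared between the two radiotherapy groups. The following is a minimal sketch of such a comparison, assuming a hypothetical per-patient table (months from the start of neoadjuvant chemotherapy, an event flag, and the radiotherapy technique) and using the Python lifelines package; the column names and values are illustrative only and are not the study's actual patient-level data.

```python
# Hypothetical sketch: Kaplan-Meier estimates and a log-rank comparison of
# overall survival between HCRT and 3DCRT. Column names and values are
# illustrative only; the study's patient-level data are not reproduced here.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# One row per patient: months from start of neoadjuvant chemotherapy,
# 1 = death observed, and the radiotherapy technique used.
df = pd.DataFrame({
    "months":    [22.3, 18.0, 30.1, 21.2, 15.4, 26.9],
    "death":     [1, 1, 1, 1, 1, 1],  # all patients in this cohort were deceased
    "technique": ["HCRT", "HCRT", "HCRT", "3DCRT", "3DCRT", "3DCRT"],
})

hcrt = df[df.technique == "HCRT"]
tdcrt = df[df.technique == "3DCRT"]

kmf = KaplanMeierFitter()
for name, grp in [("HCRT", hcrt), ("3DCRT", tdcrt)]:
    kmf.fit(grp["months"], event_observed=grp["death"], label=name)
    print(name, "median OS (months):", kmf.median_survival_time_)

# Log-rank test for the difference between the two survival curves.
result = logrank_test(hcrt["months"], tdcrt["months"],
                      event_observed_A=hcrt["death"],
                      event_observed_B=tdcrt["death"])
print("log-rank p-value:", result.p_value)
```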
Conclusions
In summary, the use of HCRT for the treatment of patients with MPM after EPP is likely to improve local control rates. In this series, the improvement in local control did not translate into improved overall survival because of the high rate of distant relapses. Further improvement of trimodality or systemic therapy is required to address the high risk of distant recurrence.
[ "Radiation techniques", "Follow-Up and recurrence analysis", "Statistics", "Matched pair analysis", "Outcome of the entire cohort", "Analysis of tumor recurrence", "Competing interests", "Authors’ contributions" ]
[ "The 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7].\nHCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series).\nIMRT and VMAT plans achieved similar dose distributions [9,10]. In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11].", "Patients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan.", "All survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair).", "In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except one pair were the gender was not matched). In each group 3 patients had a tumor stage T1N0M0 with resection R0 and 8 patients, tumor stage T2N0M0 with resection R1. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patient’s in the HCRT and 3DCRT group.\nThe median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06) as displayed in Figure 1. Three (27.3%) and eight patients (72.7%) had a local relapse after HCRT and 3DCRT respectively. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). 
The median overall survivals were 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT and are displayed in Figure 2 (p = 0.57).\nLocal control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.\nOverall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.", "Fourteen HCRT and 25 3DCRT patients were treated and reviewed. Patient’s sex, age, tumor characteristics and resection are displayed on Table 1.\nPatient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy\nAbbreviation: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy.\nThe median overall survival was 20.8 ± 14.4 months for the HCRT group, and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) due to intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure and another patient, who was well and without evidence of disease at two days before his sudden death most likely also died due to a cardiac event. In the 3DCRT group 24 patients (96%) died of progressive disease and one of septic shock (4%).\nThe local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT.", "For patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy to 59 Gy (according to our treatment planning protocol 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax, Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25) in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy, instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%) the delivered dose was not possible to define because no diagnostic CT was available. In one patient (4.0%) the site of recurrence could not be determined because of missing radiographs during follow-up.\nLocoregional recurrences in patients who underwent highly conformal modulated radiotherapy\nLocoregional recurrences in patients who underwent 3D-conformal radiotherapy\n1The recurrence occurred in two separate regions.\nDistant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in HCRT group and 16.7 ± 7.7 months in 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). Both sites were affected by distant metastases in four further patients (40%).", "The authors declare that they have no competing interests.", "JK, PD and OR were responsible for the study design and implementation. JK and PD performed the data analysis. 
JK, PD, IC and OR contributed to the implementation and manuscript writing. All authors read and approved the final manuscript." ]
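The two schedules described in the radiation-technique text above differ mainly in how the boost is delivered: a sequential boost for 3DCRT versus a simultaneous integrated boost for HCRT. Below is a short sketch of the corresponding dose arithmetic, including a conversion to equivalent dose in 2 Gy fractions (EQD2) with the standard linear-quadratic formula; the α/β value of 10 Gy is an assumed illustrative tumor value, not a parameter taken from the study.

```python
# Sketch of the fractionation arithmetic for the two schedules reported above.
# EQD2 uses the standard linear-quadratic conversion; alpha/beta = 10 Gy is an
# assumed illustrative tumor value, not a study parameter.

def eqd2(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    """Equivalent dose in 2 Gy fractions for n fractions of d Gy."""
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 3DCRT: 25 x 1.8 Gy to the hemithorax, then a sequential boost of 7 x 1.8 Gy
# to the incompletely resected area.
threed_hemithorax = 25 * 1.8                 # 45.0 Gy
threed_total = threed_hemithorax + 7 * 1.8   # 57.6 Gy to the boost region
threed_eqd2 = eqd2(25, 1.8) + eqd2(7, 1.8)

# HCRT: 26 x 1.75 Gy to the hemithorax with a simultaneous integrated boost
# of 26 x 2.15 Gy to the R1/R2 region.
hcrt_hemithorax = 26 * 1.75                  # 45.5 Gy
hcrt_boost = 26 * 2.15                       # 55.9 Gy
hcrt_boost_eqd2 = eqd2(26, 2.15)

print(f"3DCRT boost region: {threed_total:.1f} Gy physical, {threed_eqd2:.1f} Gy EQD2")
print(f"HCRT boost region:  {hcrt_boost:.1f} Gy physical, {hcrt_boost_eqd2:.1f} Gy EQD2")
```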
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Material and methods", "Radiation techniques", "Follow-Up and recurrence analysis", "Statistics", "Results", "Matched pair analysis", "Outcome of the entire cohort", "Analysis of tumor recurrence", "Discussion", "Conclusions", "Competing interests", "Authors’ contributions" ]
[ "Malignant pleural mesothelioma (MPM) is a rare and aggressive malignancy associated with poor prognosis. Although MPM is often initially confined to the hemithorax, it has a high potential for metastatic spread in the course of disease [1]. The mainstay of treatment is surgery consisting of either pleurectomy/decortication (PD) or radical extrapleural pneumonectomy (EPP) in combination with cisplatin/pemetrexed and, in selected cases, postoperative radiotherapy [2-5]. The rationale to apply postoperative radiotherapy after EPP has been the high rate of local recurrence after EPP alone of about 40% [6].\nThe pattern of pleural dissemination, infiltrative growth and the manipulations within the chest cavity during surgery place the entire ipsilateral chest wall at high risk for post-surgical relapse, especially at the diaphragm insertion, the pericardium, mediastinum and bronchial stump. Technically, hemithoracic radiotherapy is challenging due to various reasons. Firstly, the size of the volume to be treated is large, and may cover up to six liters. Secondly, the target lies in close proximity to various organs at risk (OAR) such as the heart, ipsilateral kidney, liver, remaining lung, esophagus and/or spinal cord. Thirdly, the thoracic cavity has a complex shape with its costodiaphragmatic recess extending around the liver and the kidney. Previous publications showed that highly conformal radiotherapy (HCRT) such as intensity modulated radiotherapy (IMRT) or volumetric modulated arc therapy (VMAT) can improve the dose distribution in respect to target coverage and dose to OAR [7,8]. However to our knowledge there is no clinical study published that investigated and compared clinical outcome after both radiation techniques. In order to verify if the technical improvements introduced with IMRT or VMAT have translated into a clinical benefit, we evaluated the clinical outcome of MPM patients treated with chemotherapy, surgery and 3DCRT or HCRT at our institution.", "We reviewed the clinical outcome of 39 consecutive patients treated either with 3DCRT (25 patients) or HCRT (11 IMRT patients and 3 VMAT patients). Patient staging was established using FDG-PET/CT and/or conventional thoraco-abdominal CT. The patients with clinical stage T1-T3, N0-2, M0, R0-2 were treated with 3 cycles of preoperative chemotherapy (pemetrexed and cisplatin) followed by EPP and RT [7]. All histological subtypes were accepted for RT. Patients were not selected for this review if they had metastatic disease or a local relapse before the start of RT. The study was approved by the local ethics committee of the University Hospital of Zurich.\n Radiation techniques The 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7].\nHCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series).\nIMRT and VMAT plans achieved similar dose distributions [9,10]. 
In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11].\nThe 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7].\nHCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series).\nIMRT and VMAT plans achieved similar dose distributions [9,10]. In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11].\n Follow-Up and recurrence analysis Patients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan.\nPatients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. 
All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan.\n Statistics All survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair).\nAll survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair).", "The 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7].\nHCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series).\nIMRT and VMAT plans achieved similar dose distributions [9,10]. In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11].", "Patients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan.", "All survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. 
For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair).", "Between 1999 and 2011, 39 patients were treated with neoadjuvant chemotherapy and EPP followed by RT. All follow up patients were deceased at the time of this study.\n Matched pair analysis In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except one pair were the gender was not matched). In each group 3 patients had a tumor stage T1N0M0 with resection R0 and 8 patients, tumor stage T2N0M0 with resection R1. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patient’s in the HCRT and 3DCRT group.\nThe median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06) as displayed in Figure 1. Three (27.3%) and eight patients (72.7%) had a local relapse after HCRT and 3DCRT respectively. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survivals were 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT and are displayed in Figure 2 (p = 0.57).\nLocal control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.\nOverall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.\nIn the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except one pair were the gender was not matched). In each group 3 patients had a tumor stage T1N0M0 with resection R0 and 8 patients, tumor stage T2N0M0 with resection R1. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patient’s in the HCRT and 3DCRT group.\nThe median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06) as displayed in Figure 1. Three (27.3%) and eight patients (72.7%) had a local relapse after HCRT and 3DCRT respectively. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survivals were 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT and are displayed in Figure 2 (p = 0.57).\nLocal control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.\nOverall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.\n Outcome of the entire cohort Fourteen HCRT and 25 3DCRT patients were treated and reviewed. 
Patient’s sex, age, tumor characteristics and resection are displayed on Table 1.\nPatient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy\nAbbreviation: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy.\nThe median overall survival was 20.8 ± 14.4 months for the HCRT group, and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) due to intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure and another patient, who was well and without evidence of disease at two days before his sudden death most likely also died due to a cardiac event. In the 3DCRT group 24 patients (96%) died of progressive disease and one of septic shock (4%).\nThe local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT.\nFourteen HCRT and 25 3DCRT patients were treated and reviewed. Patient’s sex, age, tumor characteristics and resection are displayed on Table 1.\nPatient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy\nAbbreviation: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy.\nThe median overall survival was 20.8 ± 14.4 months for the HCRT group, and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) due to intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure and another patient, who was well and without evidence of disease at two days before his sudden death most likely also died due to a cardiac event. In the 3DCRT group 24 patients (96%) died of progressive disease and one of septic shock (4%).\nThe local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT.\n Analysis of tumor recurrence For patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy to 59 Gy (according to our treatment planning protocol 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax, Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25) in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy, instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%) the delivered dose was not possible to define because no diagnostic CT was available. 
In one patient (4.0%) the site of recurrence could not be determined because of missing radiographs during follow-up.\nLocoregional recurrences in patients who underwent highly conformal modulated radiotherapy\nLocoregional recurrences in patients who underwent 3D-conformal radiotherapy\n1The recurrence occurred in two separate regions.\nDistant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in HCRT group and 16.7 ± 7.7 months in 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). Both sites were affected by distant metastases in four further patients (40%).\nFor patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy to 59 Gy (according to our treatment planning protocol 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax, Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25) in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy, instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%) the delivered dose was not possible to define because no diagnostic CT was available. In one patient (4.0%) the site of recurrence could not be determined because of missing radiographs during follow-up.\nLocoregional recurrences in patients who underwent highly conformal modulated radiotherapy\nLocoregional recurrences in patients who underwent 3D-conformal radiotherapy\n1The recurrence occurred in two separate regions.\nDistant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in HCRT group and 16.7 ± 7.7 months in 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). Both sites were affected by distant metastases in four further patients (40%).", "In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except one pair were the gender was not matched). In each group 3 patients had a tumor stage T1N0M0 with resection R0 and 8 patients, tumor stage T2N0M0 with resection R1. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patient’s in the HCRT and 3DCRT group.\nThe median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06) as displayed in Figure 1. Three (27.3%) and eight patients (72.7%) had a local relapse after HCRT and 3DCRT respectively. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months (p = 0.21). 
The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survivals were 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT and are displayed in Figure 2 (p = 0.57).\nLocal control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.\nOverall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients.", "Fourteen HCRT and 25 3DCRT patients were treated and reviewed. Patient’s sex, age, tumor characteristics and resection are displayed on Table 1.\nPatient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy\nAbbreviation: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy.\nThe median overall survival was 20.8 ± 14.4 months for the HCRT group, and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) due to intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure and another patient, who was well and without evidence of disease at two days before his sudden death most likely also died due to a cardiac event. In the 3DCRT group 24 patients (96%) died of progressive disease and one of septic shock (4%).\nThe local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT.", "For patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy to 59 Gy (according to our treatment planning protocol 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax, Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25) in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy, instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%) the delivered dose was not possible to define because no diagnostic CT was available. In one patient (4.0%) the site of recurrence could not be determined because of missing radiographs during follow-up.\nLocoregional recurrences in patients who underwent highly conformal modulated radiotherapy\nLocoregional recurrences in patients who underwent 3D-conformal radiotherapy\n1The recurrence occurred in two separate regions.\nDistant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in HCRT group and 16.7 ± 7.7 months in 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). 
Both sites were affected by distant metastases in four further patients (40%).", "We demonstrate in a retrospective analysis of patients with MPM and treated at our institution with trimodality therapy that the use of postoperative highly conformal radiation techniques (HCRT) reduces local recurrence in comparison to 3DCRT. A recurrence analysis showed that in the case of 3DCRT 4 of 25 patients (16%) had a local recurrence in regions that were clearly underdosed according to current radiation protocols (doses ≥ 45 Gy are recommended, e.g. SAKK 17/04) in contrast to 0% of patients treated with HCRT. This supports the hypothesis that HCRT should improve local control in comparison to 3DCRT by improving target volume coverage. In our study patients treated with HCRT showed a tendency for improved progression free survival and local relapse free survival but did not benefit in terms of overall survival due to the high rates of distant relapses.\nLocal control is important in patients with MPM for symptom control, but also because some patients might benefit in terms of improved overall survival. Better local control after HCRT did not translate into improved overall survival in our patient series. Remarkably, the rate of death due to intercurrent disease, most often cardiac events, was higher after HCRT (29%) in comparison to 3DCRT (4%). Since cardiac sparing is rather improved with HCRT the most likely explanation for this difference is patient selection. The urgent research question, if postoperative radiotherapy impacts on overall survival after EPP, is addressed by a randomized study currently conducted in Switzerland, SAKK 1704. Patient accrual for this study was terminated in 2012 and the results are awaited.\nEven after trimodality treatment local recurrence remains high in some patient series. In a retrospective series of 49 patients treated with 3D-conformal RT after EPP and chemotherapy 67% of all recurrences included the ipsilateral hemithorax and 25% of all recurrences were local only [12]. Therefore improvement of radiotherapy is mandatory. In recent years radiotherapy has made enormous technical advances. More sophisticated highly conformal radiation techniques (HCRT) such as IMRT or rotational RT (VMAT) have become available and substituted for the older 3DCRT technique. The use of HCRT enables improvement in the dose distribution and target volume coverage. This is because with HCRT even complex target volumes such as the tumor bed of the costodiaphragmatic recess or the pericardium can be treated without or with little dose compromise and at the same time with optimal sparing of the normal tissue due to a steeper dose fall-off. Thus, the use of HCRT should intuitively improve treatment outcome in terms of local tumor control. Our data suggest indeed, that the use of HCRT bears considerable potential to improve on hemi-thoracic tumor control rates most likely due to improved target volume coverage.\nThe poor local control rates and high rates of in-field recurrences following 3DCRT in our cohort may be due to suboptimal dose coverage or the restriction of the target volume to avoid critical organs, both limitations inherent to the technique. After 3DCRT 4/24 (16.6%) in-field recurrences occurred in regions covered with only 30–43 Gy. In the case of 3DCRT mixed beams of photons and electrons were used to optimize dose coverage. The match of these beams often causes cold and hot spots of dose coverage. 
Poor matching during daily treatment can result in >20% dose inhomogeneity in the junction area [7]. In addition, as the spinal cord is blocked when the tolerance dose of 45 Gy is reached, insufficient dose delivery to parts of the mediastinum has been observed, resulting in underdosage to the tumor bed [7].\nFavorable tumor control after IMRT as part of a trimodality therapy has previously been reported by Rice et al. [13]. The median overall survival of their 61 patients treated was 14.2 months with a locoregional recurrence rate of 13% and only 5% local in-field recurrences reported. The median dose prescribed was only 45 Gy, and half of all patients received doses even less than 45 Gy. The reason for the comparatively higher local control rate reported by Rice et al. in comparison to our study remains unclear. It may be explained by patient selection and the comparatively short median overall survival of 14.2 months in comparison to 20.8 months in the present series and by the retrospective study design. The shorter median overall survival reported by Rice et al. could be caused by more advanced tumor stages (40 T3, 8 T4, 26 N2), more aggressive subtypes (14 biphasic, 4 sarcomatoid) and the fact that neoadjuvant chemotherapy was not routinely administered.\nWith regard to toxicity the major dose limiting organ for postoperative radiotherapy of MPM is the contralateral lung. Lung complications such as radiation pneumonitis are likely to be higher with multi-field techniques such as IMRT or VMAT in comparison to 3DCRT, where opposed beams from 0 and 180 degrees are usually used, thereby optimally sparing the contralateral lung. With regard to dose escalation and lung sparing surgery, protons might prove superior to IMRT/VMAT. Severe complications of the lung with grade 4 and 5 pneumonitis after IMRT have been reported [7,12]. Since then, special attention to the contralateral lung dose has been given during the treatment planning process and pneumonitis rates should be lower today. Intuitively, the use of HCRT should reduce toxicity and complication probabilities of esophagus, heart, liver and kidney, however no data with regard to these toxicity endpoints comparing both treatment techniques are available.\nIn recent years, the need for extensive surgery has been questioned, and less radical surgery has been advocated such as pleurectomy/decortication. In the context of reduced surgery, the anatomical situation makes it difficult for RT to be applied to the entire pleural space, however, it can still be considered as a targeted local postoperative option in case of incomplete resection. Future clinical studies are required to define the role of radiotherapy in combination with lung sparing surgery.", "In summary, the use of HCRT for treatment of patients with MPM after EPP is likely to improve local control rates. The local control improvement did not improve the overall survival due to the high rates of distant relapses in this series. Further improvement of trimodal or systemic therapy is required to tackle the high risk of distant recurrences.", "The authors declare that they have no competing interests.", "JK, PD and OR were responsible for the study design and implementation. JK and PD performed the data analysis. JK, PD, IC and OR contributed to the implementation and manuscript writing. All authors read and approved the final manuscript." ]
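The recurrence-pattern analysis above relies on the site definitions from the follow-up protocol (in-field, marginal miss, regional, distant). A minimal sketch of that classification logic as code is shown below, assuming a simplified per-recurrence record with two boolean flags and a site label; the field names are hypothetical, and in the study the assessment was made by comparing diagnostic imaging with the radiotherapy treatment plan.

```python
# Hypothetical sketch of the recurrence-site classification used in the
# follow-up analysis. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Recurrence:
    within_field: bool        # at least partially inside the irradiated volume
    adjacent_to_border: bool  # abuts the field border without lying inside it
    site: str                 # e.g. "ipsilateral chest", "contralateral chest", "abdomen"

def classify(rec: Recurrence) -> str:
    # Contralateral hemithorax or abdominal cavity counts as distant recurrence.
    if rec.site in ("contralateral chest", "abdomen"):
        return "distant"
    if rec.within_field:
        return "local (in-field)"
    if rec.adjacent_to_border:
        return "marginal miss"
    return "regional"

print(classify(Recurrence(True, False, "ipsilateral chest")))   # local (in-field)
print(classify(Recurrence(False, True, "ipsilateral chest")))   # marginal miss
print(classify(Recurrence(False, False, "abdomen")))            # distant
```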
[ "intro", "materials|methods", null, null, null, "results", null, null, null, "discussion", "conclusions", null, null ]
[ "Mesothelioma", "Radiation therapy", "Extrapleural pneumonectomy", "Volumetric modulated arc therapy", "Intensity modulated radiotherapy", "Multimodal therapy" ]
Introduction: Malignant pleural mesothelioma (MPM) is a rare and aggressive malignancy associated with poor prognosis. Although MPM is often initially confined to the hemithorax, it has a high potential for metastatic spread in the course of disease [1]. The mainstay of treatment is surgery consisting of either pleurectomy/decortication (PD) or radical extrapleural pneumonectomy (EPP) in combination with cisplatin/pemetrexed and, in selected cases, postoperative radiotherapy [2-5]. The rationale to apply postoperative radiotherapy after EPP has been the high rate of local recurrence after EPP alone of about 40% [6]. The pattern of pleural dissemination, infiltrative growth and the manipulations within the chest cavity during surgery place the entire ipsilateral chest wall at high risk for post-surgical relapse, especially at the diaphragm insertion, the pericardium, mediastinum and bronchial stump. Technically, hemithoracic radiotherapy is challenging due to various reasons. Firstly, the size of the volume to be treated is large, and may cover up to six liters. Secondly, the target lies in close proximity to various organs at risk (OAR) such as the heart, ipsilateral kidney, liver, remaining lung, esophagus and/or spinal cord. Thirdly, the thoracic cavity has a complex shape with its costodiaphragmatic recess extending around the liver and the kidney. Previous publications showed that highly conformal radiotherapy (HCRT) such as intensity modulated radiotherapy (IMRT) or volumetric modulated arc therapy (VMAT) can improve the dose distribution in respect to target coverage and dose to OAR [7,8]. However to our knowledge there is no clinical study published that investigated and compared clinical outcome after both radiation techniques. In order to verify if the technical improvements introduced with IMRT or VMAT have translated into a clinical benefit, we evaluated the clinical outcome of MPM patients treated with chemotherapy, surgery and 3DCRT or HCRT at our institution. Material and methods: We reviewed the clinical outcome of 39 consecutive patients treated either with 3DCRT (25 patients) or HCRT (11 IMRT patients and 3 VMAT patients). Patient staging was established using FDG-PET/CT and/or conventional thoraco-abdominal CT. The patients with clinical stage T1-T3, N0-2, M0, R0-2 were treated with 3 cycles of preoperative chemotherapy (pemetrexed and cisplatin) followed by EPP and RT [7]. All histological subtypes were accepted for RT. Patients were not selected for this review if they had metastatic disease or a local relapse before the start of RT. The study was approved by the local ethics committee of the University Hospital of Zurich. Radiation techniques The 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7]. HCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series). IMRT and VMAT plans achieved similar dose distributions [9,10]. 
In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11]. The 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7]. HCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series). IMRT and VMAT plans achieved similar dose distributions [9,10]. In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11]. Follow-Up and recurrence analysis Patients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan. Patients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan. 
Statistics All survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair). All survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair). Radiation techniques: The 3DCRT group was treated between 1999 and 2005. These patients were treated with 25 × 1.8 Gy = 45 Gy to the hemithorax and subsequently, in a second series, a boost of 7 × 1.8 Gy = 12.6 Gy was given to the incompletely resected area (total dose 57.6 Gy). Dose calculation was performed on Pinnacle planning system (Philips Medical Systems) for a linear accelerator (Clinac 2100C, Varian Medical Systems). Details of the treatment technique have previously been published [7]. HCRT has been used at our institution since 2005 for the treatment of MPM patients. Of the 14 patients treated with HCRT, 11 were treated with conventional static field IMRT and 3 patients with rotational IMRT (volumetric arc radiotherapy, i.e. Rapid Arc® in the present series). IMRT and VMAT plans achieved similar dose distributions [9,10]. In the case of HCRT only one series was applied with 26 × 1.75 Gy = 45.5 Gy delivered to the hemithorax with a simultaneous integrated boost of 26 × 2.15 Gy = 55.9 Gy delivered to the R1/R2 region. Planning and dose calculation was performed on the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) for a linear accelerator (Clinac 6EX or Trilogy, Varian Medical Systems). The treatment technique and dose-volume constraints have been previously published [7,10,11]. Follow-Up and recurrence analysis: Patients were followed up every three to four months with clinical examinations and CT or PET/CT scans. Local tumor progression or recurrence was defined as an increasing radiographic abnormality within or partially within the irradiation field. Recurrence adjacent to the field border but not in-field was defined as marginal miss recurrence. Regional recurrence was defined as recurrence in close proximity but not within the irradiated field. Tumor recurrence in the contralateral hemithorax or abdominal cavity was classified as a distant recurrence [12]. All in-field recurrences were carefully analyzed by 2 of the authors (JK, PD), in order to assess if they occurred in previously underdosed areas by comparing the respective diagnostic image with the radiation therapy treatment plan. Statistics: All survival endpoints as well as tumor recurrence were measured from the date of treatment start (neoadjuvant chemotherapy) and were evaluated using the Kaplan Meier Method. In a subset of the cohort, a matched pair analysis was performed in order to compare outcome after 3DCRT and HCRT. For this analysis, the patients were matched for age, preoperative TNM, R and histology, and sex (except one pair). Results: Between 1999 and 2011, 39 patients were treated with neoadjuvant chemotherapy and EPP followed by RT. 
All follow up patients were deceased at the time of this study. Matched pair analysis In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except for one pair, where the gender was not matched). In each group, 3 patients had tumor stage T1N0M0 with an R0 resection and 8 patients had tumor stage T2N0M0 with an R1 resection. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patients in the HCRT and 3DCRT groups, respectively. The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT, from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06), as displayed in Figure 1. Three patients (27.3%) had a local relapse after HCRT and eight patients (72.7%) after 3DCRT. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months, respectively (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survival was 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT (p = 0.57), as displayed in Figure 2. Local control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Overall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Outcome of the entire cohort Fourteen HCRT and 25 3DCRT patients were treated and reviewed. Patients' sex, age, tumor characteristics and resection status are displayed in Table 1. Patient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy Abbreviations: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy. The median overall survival was 20.8 ± 14.4 months for the HCRT group and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) of intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure, and another patient, who was well and without evidence of disease two days before his sudden death, most likely also died of a cardiac event. In the 3DCRT group, 24 patients (96%) died of progressive disease and one of septic shock (4%). The local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT. Analysis of tumor recurrence For patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy and 59 Gy (according to our treatment planning protocol, 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax; Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25), in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%), the delivered dose could not be determined because no diagnostic CT was available. In one patient (4.0%), the site of recurrence could not be determined because of missing radiographs during follow-up. Locoregional recurrences in patients who underwent highly conformal modulated radiotherapy Locoregional recurrences in patients who underwent 3D-conformal radiotherapy 1The recurrence occurred in two separate regions. Distant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in the HCRT group and 16.7 ± 7.7 months in the 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). Both sites were affected by distant metastases in four further patients (40%).
Matched pair analysis: In the matched pair analysis, 11 HCRT and 11 3DCRT patients were matched based on tumor staging, resection status, tumor histology, age and gender (except for one pair, where the gender was not matched). In each group, 3 patients had tumor stage T1N0M0 with an R0 resection and 8 patients had tumor stage T2N0M0 with an R1 resection. Tumor histology was epithelioid for 6 patients and biphasic for 5 patients in each group. The mean age was 59.6 years and 59.8 years for patients in the HCRT and 3DCRT groups, respectively. The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT, from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06), as displayed in Figure 1. Three patients (27.3%) had a local relapse after HCRT and eight patients (72.7%) after 3DCRT. Nine HCRT (81.8%) and nine 3DCRT (81.8%) patients developed metastases within a median time of 18.4 ± 10.7 months and 10.9 ± 8.6 months, respectively (p = 0.21). The difference in disease free survival between HCRT and 3DCRT was not significant (p = 0.72). The median overall survival was 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT (p = 0.57), as displayed in Figure 2. Local control for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Overall survival for 11 matched modulated radiotherapy (HCRT) patients and 11 3-dimensional conformal radiotherapy (3DCRT) patients. Outcome of the entire cohort: Fourteen HCRT and 25 3DCRT patients were treated and reviewed. Patients' sex, age, tumor characteristics and resection status are displayed in Table 1.
Patient demographics and tumor characteristics of 39 patients who underwent neoadjuvant chemotherapy, extrapleural pneumonectomy and radiotherapy Abbreviations: HCRT: Highly conformal modulated radiotherapy, 3DCRT: 3-dimensional conformal radiotherapy. The median overall survival was 20.8 ± 14.4 months for the HCRT group and 26.9 ± 11.8 months for the 3DCRT group (p = 0.48). In the HCRT group, 10 patients (71%) died of progressive disease and 4 patients (29%) of intercurrent disease: one patient died of septic shock, one of acute myocardial infarction, one of progressive biventricular heart failure, and another patient, who was well and without evidence of disease two days before his sudden death, most likely also died of a cardiac event. In the 3DCRT group, 24 patients (96%) died of progressive disease and one of septic shock (4%). The local control rates were improved after HCRT (p = 0.30). Four HCRT patients (28.6%) suffered from locoregional relapse in comparison to 15 patients (60%) treated with 3DCRT. Analysis of tumor recurrence: For patients treated with HCRT, local relapse occurred in-field in 3 patients (21.4%), all within areas that had been treated with doses between 43 Gy and 59 Gy (according to our treatment planning protocol, 95% of the prescribed 45 Gy (=43 Gy) should enclose the target volume, which in this case is the tumor bed of the hemithorax; Table 2) and none in a clearly underdosed region. One patient (7.1%) had a marginal miss recurrence at the field border (13 Gy). In the 3DCRT group, twelve patients (48%) had in-field recurrences in regions treated with doses between 30 Gy and 56 Gy (Table 3). Notably, in 16% of patients treated with 3DCRT (4/25), in-field recurrences occurred in regions that were covered with doses of only 30 to 43 Gy instead of the prescribed ≥45 Gy. One patient (4.0%) had a marginal miss recurrence (18 Gy). In one patient with a regional recurrence (4.0%), the delivered dose could not be determined because no diagnostic CT was available. In one patient (4.0%), the site of recurrence could not be determined because of missing radiographs during follow-up. Locoregional recurrences in patients who underwent highly conformal modulated radiotherapy Locoregional recurrences in patients who underwent 3D-conformal radiotherapy 1The recurrence occurred in two separate regions. Distant recurrences occurred in ten patients (71.4%) treated with HCRT and in twenty 3DCRT patients (80%). The median time to distant metastases was 18.4 ± 10.7 months in the HCRT group and 16.7 ± 7.7 months in the 3DCRT group (p = 0.7). In the HCRT group, distant metastases involved only the contralateral chest in three patients (30%) and only the abdominal cavity in three patients (30%). Both sites were affected by distant metastases in four further patients (40%). Discussion: We demonstrate in a retrospective analysis of patients with MPM treated at our institution with trimodality therapy that the use of postoperative highly conformal radiation techniques (HCRT) reduces local recurrence in comparison to 3DCRT. A recurrence analysis showed that in the case of 3DCRT, 4 of 25 patients (16%) had a local recurrence in regions that were clearly underdosed according to current radiation protocols (doses ≥ 45 Gy are recommended, e.g. SAKK 17/04), in contrast to 0% of patients treated with HCRT. This supports the hypothesis that HCRT should improve local control in comparison to 3DCRT by improving target volume coverage.
In our study, patients treated with HCRT showed a tendency for improved progression-free survival and local relapse-free survival but did not benefit in terms of overall survival due to the high rates of distant relapses. Local control is important in patients with MPM for symptom control, but also because some patients might benefit in terms of improved overall survival. Better local control after HCRT did not translate into improved overall survival in our patient series. Remarkably, the rate of death due to intercurrent disease, most often cardiac events, was higher after HCRT (29%) in comparison to 3DCRT (4%). Since cardiac sparing is, if anything, improved with HCRT, the most likely explanation for this difference is patient selection. The urgent research question of whether postoperative radiotherapy impacts overall survival after EPP is addressed by a randomized study currently conducted in Switzerland, SAKK 17/04. Patient accrual for this study was terminated in 2012 and the results are awaited. Even after trimodality treatment, local recurrence remains high in some patient series. In a retrospective series of 49 patients treated with 3D-conformal RT after EPP and chemotherapy, 67% of all recurrences included the ipsilateral hemithorax and 25% of all recurrences were local only [12]. Therefore, improvement of radiotherapy is mandatory. In recent years, radiotherapy has made enormous technical advances. More sophisticated highly conformal radiation techniques (HCRT) such as IMRT or rotational RT (VMAT) have become available and have replaced the older 3DCRT technique. The use of HCRT enables improvement in the dose distribution and target volume coverage. This is because, with HCRT, even complex target volumes such as the tumor bed of the costodiaphragmatic recess or the pericardium can be treated with little or no dose compromise and, at the same time, with optimal sparing of the normal tissue due to a steeper dose fall-off. Thus, the use of HCRT should intuitively improve treatment outcome in terms of local tumor control. Our data indeed suggest that the use of HCRT bears considerable potential to improve hemithoracic tumor control rates, most likely due to improved target volume coverage. The poor local control rates and high rates of in-field recurrences following 3DCRT in our cohort may be due to suboptimal dose coverage or the restriction of the target volume to avoid critical organs, both limitations inherent to the technique. After 3DCRT, in-field recurrences occurred in regions covered with only 30–43 Gy in 4/25 patients (16%). In the case of 3DCRT, mixed beams of photons and electrons were used to optimize dose coverage. The matching of these beams often causes cold and hot spots of dose coverage. Poor matching during daily treatment can result in >20% dose inhomogeneity in the junction area [7]. In addition, as the spinal cord is blocked when the tolerance dose of 45 Gy is reached, insufficient dose delivery to parts of the mediastinum has been observed, resulting in underdosage to the tumor bed [7]. Favorable tumor control after IMRT as part of a trimodality therapy has previously been reported by Rice et al. [13]. The median overall survival of their 61 treated patients was 14.2 months, with a locoregional recurrence rate of 13% and only 5% local in-field recurrences reported. The median dose prescribed was only 45 Gy, and half of all patients received doses even less than 45 Gy. The reason for the comparatively higher local control rate reported by Rice et al.
in comparison to our study remains unclear. It may be explained by patient selection, by the comparatively short median overall survival of 14.2 months in comparison to 20.8 months in the present series, and by the retrospective study design. The shorter median overall survival reported by Rice et al. could be caused by more advanced tumor stages (40 T3, 8 T4, 26 N2), more aggressive subtypes (14 biphasic, 4 sarcomatoid) and the fact that neoadjuvant chemotherapy was not routinely administered. With regard to toxicity, the major dose-limiting organ for postoperative radiotherapy of MPM is the contralateral lung. Lung complications such as radiation pneumonitis are likely to be higher with multi-field techniques such as IMRT or VMAT in comparison to 3DCRT, where opposed beams from 0 and 180 degrees are usually used, thereby optimally sparing the contralateral lung. With regard to dose escalation and lung-sparing surgery, protons might prove superior to IMRT/VMAT. Severe complications of the lung with grade 4 and 5 pneumonitis after IMRT have been reported [7,12]. Since then, special attention has been given to the contralateral lung dose during the treatment planning process, and pneumonitis rates should be lower today. Intuitively, the use of HCRT should reduce toxicity and complication probabilities for the esophagus, heart, liver and kidney; however, no data comparing both treatment techniques with regard to these toxicity endpoints are available. In recent years, the need for extensive surgery has been questioned, and less radical surgery, such as pleurectomy/decortication, has been advocated. In the context of reduced surgery, the anatomical situation makes it difficult for RT to be applied to the entire pleural space; however, it can still be considered as a targeted local postoperative option in the case of incomplete resection. Future clinical studies are required to define the role of radiotherapy in combination with lung-sparing surgery. Conclusions: In summary, the use of HCRT for the treatment of patients with MPM after EPP is likely to improve local control rates. The improvement in local control did not translate into improved overall survival, due to the high rates of distant relapses in this series. Further improvement of trimodal or systemic therapy is required to tackle the high risk of distant recurrences. Competing interests: The authors declare that they have no competing interests. Authors' contributions: JK, PD and OR were responsible for the study design and implementation. JK and PD performed the data analysis. JK, PD, IC and OR contributed to the implementation and manuscript writing. All authors read and approved the final manuscript.
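As a quick plausibility check on the matched-pair figures reported above, the short Python sketch below re-derives the quoted 49% relative increase in median time to local relapse and the 27.3% versus 72.7% local-relapse proportions from the numbers stated in the text. It is purely illustrative and does not reproduce the study's statistical tests.

```python
# Cross-check of the matched-pair numbers quoted in the results (no new data).
median_ttlr_3dcrt = 10.9      # median time to local relapse, 3DCRT arm (months)
median_ttlr_hcrt = 16.2       # median time to local relapse, HCRT arm (months)

relative_increase = (median_ttlr_hcrt - median_ttlr_3dcrt) / median_ttlr_3dcrt
print(f"Relative increase in median time to local relapse: {relative_increase:.0%}")  # -> 49%

# Crude local-relapse proportions in the 11 matched patients per arm
local_relapses = {"HCRT": 3, "3DCRT": 8}
for arm, events in local_relapses.items():
    print(f"{arm}: {events}/11 local relapses = {events / 11:.1%}")  # 27.3% and 72.7%
```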
Background: Radiotherapy (RT) is currently under investigation as part of a trimodality treatment of malignant pleural mesothelioma (MPM). The introduction of highly conformal radiotherapy (HCRT) technique improved dose delivery and target coverage in comparison to 3-dimensional conformal radiotherapy (3DCRT). The following study was undertaken to investigate the clinical outcome of both radiation techniques. Methods: Thirty-nine MPM patients were treated with neoadjuvant chemotherapy, extrapleural pneumonectomy (EPP) and adjuvant RT. Twenty-five patients were treated with 3DCRT, and 14 with HCRT (Intensity modulated radiotherapy or volumetric modulated arc therapy). Overall survival, disease free survival, locoregional recurrence and pattern of recurrence were assessed. A matched pair analysis was performed including 11 patients of each group. Results: After matching for gender, age, histology, tumor stage and resection status, HCRT seemed superior to 3DCRT with a local relapse rate of 27.3% compared to 72.7% after 3DCRT (p = 0.06). The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06). The median overall survival was 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT (p = 0.57). Recurrence analysis showed that in-field local relapses occurred in previously underdosed regions of the tumor bed in 16% of patients treated with 3DCRT and in 0% of HCRT patients. Conclusions: The use of HCRT increases the probability of local control as compared to 3DCRT by improving target volume coverage. HCRT did not improve overall survival in this patient series due to the high rate of distant recurrences.
Introduction: Malignant pleural mesothelioma (MPM) is a rare and aggressive malignancy associated with poor prognosis. Although MPM is often initially confined to the hemithorax, it has a high potential for metastatic spread in the course of disease [1]. The mainstay of treatment is surgery consisting of either pleurectomy/decortication (PD) or radical extrapleural pneumonectomy (EPP) in combination with cisplatin/pemetrexed and, in selected cases, postoperative radiotherapy [2-5]. The rationale for applying postoperative radiotherapy after EPP has been the high rate of local recurrence of about 40% after EPP alone [6]. The pattern of pleural dissemination, infiltrative growth and the manipulations within the chest cavity during surgery place the entire ipsilateral chest wall at high risk for post-surgical relapse, especially at the diaphragm insertion, the pericardium, mediastinum and bronchial stump. Technically, hemithoracic radiotherapy is challenging for various reasons. Firstly, the volume to be treated is large and may cover up to six liters. Secondly, the target lies in close proximity to various organs at risk (OAR) such as the heart, ipsilateral kidney, liver, remaining lung, esophagus and/or spinal cord. Thirdly, the thoracic cavity has a complex shape with its costodiaphragmatic recess extending around the liver and the kidney. Previous publications showed that highly conformal radiotherapy (HCRT) such as intensity modulated radiotherapy (IMRT) or volumetric modulated arc therapy (VMAT) can improve the dose distribution with respect to target coverage and dose to OAR [7,8]. However, to our knowledge, no clinical study has been published that investigated and compared clinical outcomes after both radiation techniques. In order to verify whether the technical improvements introduced with IMRT or VMAT have translated into a clinical benefit, we evaluated the clinical outcome of MPM patients treated with chemotherapy, surgery and 3DCRT or HCRT at our institution. Conclusions: In summary, the use of HCRT for the treatment of patients with MPM after EPP is likely to improve local control rates. The improvement in local control did not translate into improved overall survival, due to the high rates of distant relapses in this series. Further improvement of trimodal or systemic therapy is required to tackle the high risk of distant recurrences.
Background: Radiotherapy (RT) is currently under investigation as part of a trimodality treatment of malignant pleural mesothelioma (MPM). The introduction of highly conformal radiotherapy (HCRT) technique improved dose delivery and target coverage in comparison to 3-dimensional conformal radiotherapy (3DCRT). The following study was undertaken to investigate the clinical outcome of both radiation techniques. Methods: Thirty-nine MPM patients were treated with neoadjuvant chemotherapy, extrapleural pneumonectomy (EPP) and adjuvant RT. Twenty-five patients were treated with 3DCRT, and 14 with HCRT (Intensity modulated radiotherapy or volumetric modulated arc therapy). Overall survival, disease free survival, locoregional recurrence and pattern of recurrence were assessed. A matched pair analysis was performed including 11 patients of each group. Results: After matching for gender, age, histology, tumor stage and resection status, HCRT seemed superior to 3DCRT with a local relapse rate of 27.3% compared to 72.7% after 3DCRT (p = 0.06). The median time to local relapse was increased by 49% with HCRT in comparison to 3DCRT from 10.9 ± 5.4 months to 16.2 ± 3.1 months (p = 0.06). The median overall survival was 22.3 ± 15.3 months for HCRT and 21.2 ± 9.2 months for 3DCRT (p = 0.57). Recurrence analysis showed that in-field local relapses occurred in previously underdosed regions of the tumor bed in 16% of patients treated with 3DCRT and in 0% of HCRT patients. Conclusions: The use of HCRT increases the probability of local control as compared to 3DCRT by improving target volume coverage. HCRT did not improve overall survival in this patient series due to the high rate of distant recurrences.
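Figures 1 and 2 referenced above show local control and overall survival curves for the two matched groups of 11 patients. The article does not state which estimator was used to construct them; the sketch below illustrates, with entirely hypothetical follow-up times and the third-party lifelines library, one conventional way such curves and a between-group comparison are produced. Nothing in it uses the study's patient-level data.

```python
# Hypothetical illustration only: simulated follow-up data, not the study's patients.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_hcrt = rng.exponential(scale=20.0, size=11)    # simulated months to local relapse/censoring
t_3dcrt = rng.exponential(scale=12.0, size=11)
e_hcrt = rng.integers(0, 2, size=11)             # 1 = relapse observed, 0 = censored
e_3dcrt = rng.integers(0, 2, size=11)

ax = None
for label, t, e in [("HCRT", t_hcrt, e_hcrt), ("3DCRT", t_3dcrt, e_3dcrt)]:
    kmf = KaplanMeierFitter()
    kmf.fit(t, event_observed=e, label=label)
    ax = kmf.plot_survival_function(ax=ax)       # Kaplan-Meier-style local-control curve

result = logrank_test(t_hcrt, t_3dcrt, event_observed_A=e_hcrt, event_observed_B=e_3dcrt)
print(f"log-rank p-value on the simulated data: {result.p_value:.2f}")
```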
6,180
323
[ 275, 136, 78, 323, 240, 386, 10, 46 ]
13
[ "patients", "hcrt", "3dcrt", "gy", "recurrence", "treated", "tumor", "radiotherapy", "group", "local" ]
[ "pleurectomy decortication", "chemotherapy extrapleural pneumonectomy", "pneumonectomy radiotherapy", "malignant pleural mesothelioma", "pleural mesothelioma mpm" ]
null
[CONTENT] Mesothelioma | Radiation therapy | Extrapleural pneumonectomy | Volumetric modulated arc therapy | Intensity modulated radiotherapy | Multimodal therapy [SUMMARY]
null
[CONTENT] Mesothelioma | Radiation therapy | Extrapleural pneumonectomy | Volumetric modulated arc therapy | Intensity modulated radiotherapy | Multimodal therapy [SUMMARY]
[CONTENT] Mesothelioma | Radiation therapy | Extrapleural pneumonectomy | Volumetric modulated arc therapy | Intensity modulated radiotherapy | Multimodal therapy [SUMMARY]
[CONTENT] Mesothelioma | Radiation therapy | Extrapleural pneumonectomy | Volumetric modulated arc therapy | Intensity modulated radiotherapy | Multimodal therapy [SUMMARY]
[CONTENT] Mesothelioma | Radiation therapy | Extrapleural pneumonectomy | Volumetric modulated arc therapy | Intensity modulated radiotherapy | Multimodal therapy [SUMMARY]
[CONTENT] Combined Modality Therapy | Female | Follow-Up Studies | Humans | Imaging, Three-Dimensional | Lung Neoplasms | Male | Matched-Pair Analysis | Mesothelioma | Mesothelioma, Malignant | Middle Aged | Pleural Neoplasms | Pneumonectomy | Postoperative Care | Radiotherapy, Conformal | Radiotherapy, Image-Guided | Retrospective Studies | Treatment Outcome [SUMMARY]
null
[CONTENT] Combined Modality Therapy | Female | Follow-Up Studies | Humans | Imaging, Three-Dimensional | Lung Neoplasms | Male | Matched-Pair Analysis | Mesothelioma | Mesothelioma, Malignant | Middle Aged | Pleural Neoplasms | Pneumonectomy | Postoperative Care | Radiotherapy, Conformal | Radiotherapy, Image-Guided | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Combined Modality Therapy | Female | Follow-Up Studies | Humans | Imaging, Three-Dimensional | Lung Neoplasms | Male | Matched-Pair Analysis | Mesothelioma | Mesothelioma, Malignant | Middle Aged | Pleural Neoplasms | Pneumonectomy | Postoperative Care | Radiotherapy, Conformal | Radiotherapy, Image-Guided | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Combined Modality Therapy | Female | Follow-Up Studies | Humans | Imaging, Three-Dimensional | Lung Neoplasms | Male | Matched-Pair Analysis | Mesothelioma | Mesothelioma, Malignant | Middle Aged | Pleural Neoplasms | Pneumonectomy | Postoperative Care | Radiotherapy, Conformal | Radiotherapy, Image-Guided | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Combined Modality Therapy | Female | Follow-Up Studies | Humans | Imaging, Three-Dimensional | Lung Neoplasms | Male | Matched-Pair Analysis | Mesothelioma | Mesothelioma, Malignant | Middle Aged | Pleural Neoplasms | Pneumonectomy | Postoperative Care | Radiotherapy, Conformal | Radiotherapy, Image-Guided | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] pleurectomy decortication | chemotherapy extrapleural pneumonectomy | pneumonectomy radiotherapy | malignant pleural mesothelioma | pleural mesothelioma mpm [SUMMARY]
null
[CONTENT] pleurectomy decortication | chemotherapy extrapleural pneumonectomy | pneumonectomy radiotherapy | malignant pleural mesothelioma | pleural mesothelioma mpm [SUMMARY]
[CONTENT] pleurectomy decortication | chemotherapy extrapleural pneumonectomy | pneumonectomy radiotherapy | malignant pleural mesothelioma | pleural mesothelioma mpm [SUMMARY]
[CONTENT] pleurectomy decortication | chemotherapy extrapleural pneumonectomy | pneumonectomy radiotherapy | malignant pleural mesothelioma | pleural mesothelioma mpm [SUMMARY]
[CONTENT] pleurectomy decortication | chemotherapy extrapleural pneumonectomy | pneumonectomy radiotherapy | malignant pleural mesothelioma | pleural mesothelioma mpm [SUMMARY]
[CONTENT] patients | hcrt | 3dcrt | gy | recurrence | treated | tumor | radiotherapy | group | local [SUMMARY]
null
[CONTENT] patients | hcrt | 3dcrt | gy | recurrence | treated | tumor | radiotherapy | group | local [SUMMARY]
[CONTENT] patients | hcrt | 3dcrt | gy | recurrence | treated | tumor | radiotherapy | group | local [SUMMARY]
[CONTENT] patients | hcrt | 3dcrt | gy | recurrence | treated | tumor | radiotherapy | group | local [SUMMARY]
[CONTENT] patients | hcrt | 3dcrt | gy | recurrence | treated | tumor | radiotherapy | group | local [SUMMARY]
[CONTENT] clinical | surgery | radiotherapy | high | oar | epp | mpm | pleural | postoperative radiotherapy | postoperative [SUMMARY]
null
[CONTENT] patients | 3dcrt | hcrt | group | gy | months | patient | radiotherapy | 11 | tumor [SUMMARY]
[CONTENT] improvement | high | improve | rates | local control | control | distant | treatment patients mpm | control improvement improve overall | rates distant relapses series [SUMMARY]
[CONTENT] patients | gy | hcrt | 3dcrt | recurrence | treated | field | tumor | radiotherapy | group [SUMMARY]
[CONTENT] patients | gy | hcrt | 3dcrt | recurrence | treated | field | tumor | radiotherapy | group [SUMMARY]
[CONTENT] malignant pleural mesothelioma | MPM ||| 3 | 3DCRT ||| [SUMMARY]
null
[CONTENT] 3DCRT | 27.3% | 72.7% | 3DCRT | 0.06 ||| 49% | 3DCRT | 10.9 | 5.4 months | 16.2 ± | 3.1 months | 0.06 ||| 22.3 | 15.3 months | 21.2 | 9.2 months | 3DCRT | 0.57 ||| 16% | 3DCRT | 0% [SUMMARY]
[CONTENT] 3DCRT ||| [SUMMARY]
[CONTENT] malignant pleural mesothelioma | MPM ||| 3 | 3DCRT ||| ||| Thirty-nine | MPM ||| Twenty-five | 3DCRT | 14 ||| ||| 11 ||| 3DCRT | 27.3% | 72.7% | 3DCRT | 0.06 ||| 49% | 3DCRT | 10.9 | 5.4 months | 16.2 ± | 3.1 months | 0.06 ||| 22.3 | 15.3 months | 21.2 | 9.2 months | 3DCRT | 0.57 ||| 16% | 3DCRT | 0% ||| 3DCRT ||| [SUMMARY]
[CONTENT] malignant pleural mesothelioma | MPM ||| 3 | 3DCRT ||| ||| Thirty-nine | MPM ||| Twenty-five | 3DCRT | 14 ||| ||| 11 ||| 3DCRT | 27.3% | 72.7% | 3DCRT | 0.06 ||| 49% | 3DCRT | 10.9 | 5.4 months | 16.2 ± | 3.1 months | 0.06 ||| 22.3 | 15.3 months | 21.2 | 9.2 months | 3DCRT | 0.57 ||| 16% | 3DCRT | 0% ||| 3DCRT ||| [SUMMARY]
Self-reported halitosis and oral health related quality of life in adolescent students from a suburban community in Nigeria.
34394270
Halitosis is an important cause of impaired quality of life in adolescents. Little is known about the prevalence of self-reported halitosis in adolescents in Nigeria and the extent to which self-reported halitosis impairs their oral health related quality of life.
BACKGROUND
An analytical cross-sectional study. Pre-tested self-administered pro-forma was used to obtain the adolescents' demographic data and their self-perception of halitosis. The Oral Health Impact Profile (OHIP-14) was used to assess the adolescents' OHRQoL. The Mann-Whitney U test was used to compare the median OHIP-14 scores between adolescents who reported halitosis and those who did not. The level of significance was set at p < 0.05. Ethics approval for this study was obtained from the Health Research and Ethics Committee of the Lagos University Teaching Hospital.
METHODS
A total of 361 adolescents aged 10 - 19 years (mean age 14.1 ± 1.79 years) took part in the study. Of these, 32.7% (n=118) had self-reported halitosis. The median OHIP-14 score among adolescents with self-reported halitosis was 3 (0-9) while those who did not report halitosis had a median OHIP-14 score of 0 (0 - 5). This difference was statistically significant (p < 0.0001).
RESULTS
Self-reported halitosis significantly impaired the oral health related quality of life of the adolescents.
CONCLUSION
[ "Adolescent", "Child", "Cross-Sectional Studies", "Female", "Halitosis", "Humans", "Male", "Nigeria", "Oral Health", "Prevalence", "Quality of Life", "Self Report", "Students", "Suburban Population", "Surveys and Questionnaires", "Young Adult" ]
8351855
Introduction
Halitosis can simply be defined as an unpleasant odour that emanates from the mouth. 1, 2 It has been classified into three main categories: genuine, pseudo, and halitophobia. Halitosis is genuine if it is beyond a socially acceptable level; pseudo if it is only perceived by the affected person while halitophobia is halitosis that the affected person perceives as persisting after treatment. 1, 2 Halitosis ranks as the third most common presenting complaint of dental patients following dental caries and periodontal diseases. 2 It may be assessed both objectively and subjectively. However, the subjective assessment may be more relevant in assessing the impact of halitosis on people's quality of lives than an objective assessment. 3 Furthermore, a systematic review showed no significant difference in the prevalence of halitosis as measured by organoleptic method, level of volatile Sulphur compounds and self-report. 4 The reported prevalence of self-reported halitosis in adolescents varies quite widely ranging from 23.6% – 54.7%. 5–8 In Nigeria, a hospital-based study conducted on self-reported halitosis revealed a prevalence of 14.8% and 17.1%. 1, 9 Halitosis has been shown to impair health because it affects overall health, well-being and self-esteem negatively.10, 11 Studies also show that halitosis impairs social relations with others. 12–14 Halitosis has been described as being the most important condition to create a bad first impression on others. 3 In another study among adolescent males in Brazil, the respondents reported feelings ranging from shame, tenseness to avoidance of close relationships if they felt they suffered from halitosis. 15 Furthermore, previous studies show that adolescents in New Zealand and Brazil rate halitosis as the most important oral condition impacting their oral health related quality of life. 5, 16, 17 This study therefore sought to add to this knowledge base by describing the prevalence of self-reported halitosis among adolescent students and the extent to which it impairs their oral health related quality of life (OHRQoL) in a suburban community in Nigeria.
null
null
Results
A total of 361 adolescents took part in this study, 146 boys (40.4%) and 215 girls (59.6%). The adolescents had an age range of 10 – 19 years with a mean age of 14.1 ± 1.79 years. Table 1 shows that most of them, 303 (83.9%), attended public schools and 313 (86.7%) of the adolescents reported that both parents were their caregivers. Close to two thirds of the adolescents (60.4%) reported that they brushed their teeth once daily. Only 8.9% and 2.2% of the adolescents gave a positive history of alcohol and tobacco use respectively. Less than 20% (17.2%) of the adolescents reported that they had ever visited a dentist. Background characteristics of study participants by self-reported halitosis status (N=361) The study further revealed that 32.7% (n=118) of the subjects had self-reported halitosis. There were no statistically significant differences between adolescents who reported halitosis and those who did not concerning age, gender, type of school, caregiver, previous dental visit, alcohol and tobacco use. However, there was a statistically significant difference in the frequency of daily tooth brushing between the 2 groups. (Table 1) The overall OHIP-14 score range was 0 – 32. The overall median OHIP-14 score was 1; interquartile range 0 – 25. Figure 1 shows that the median OHIP-14 score among adolescents with self-reported halitosis was 3 (0–9) while adolescents who did not report halitosis had a median OHIP-14 score of 0 (0 – 5). The Mann-Whitney U test showed that this difference was statistically significant (p < 0.0001), z = -5.31; there was a 66.4% (60.3% – 72.3%) probability that adolescents with self-reported halitosis would have a higher OHIP-14 score than those who did not. Box Plots of Oral Health Related Quality of Life Score by Halitosis *data more than 1.5 times from the interquartile range are plotted as outliers
Conclusion
We found that self-reported halitosis significantly impaired adolescents' OHRQoL. A qualitative study may help to further clarify the role self-reported halitosis plays in OHRQoL of adolescent students in Nigeria.
[ "Limitation" ]
[ "The adolescents studied were adolescents in school. Within the context of Nigeria's educational situation where about 30% of adolescents are out of school, 21 this study fails to capture the full picture of the Nigerian situation. However, this study lays the groundwork for further studies in adolescent oral health which is a neglected area in Nigeria. Furthermore, this was a cross-sectional study and it would be difficult to prove that the impact of OHRQoL observed among these students was solely due to their halitosis status. Thus, we acknowledge that this study has shown association and not causality. However, this potential bias was mitigated by the similarity in the baseline characteristics of the adolescents who reported halitosis and those who did not." ]
[ null ]
[ "Introduction", "Materials and methods", "Results", "Discussion", "Limitation", "Conclusion" ]
[ "Halitosis can simply be defined as an unpleasant odour that emanates from the mouth. 1, 2 It has been classified into three main categories: genuine, pseudo, and halitophobia. Halitosis is genuine if it is beyond a socially acceptable level; pseudo if it is only perceived by the affected person while halitophobia is halitosis that the affected person perceives as persisting after treatment. 1, 2 Halitosis ranks as the third most common presenting complaint of dental patients following dental caries and periodontal diseases. 2 It may be assessed both objectively and subjectively. However, the subjective assessment may be more relevant in assessing the impact of halitosis on people's quality of lives than an objective assessment. 3 Furthermore, a systematic review showed no significant difference in the prevalence of halitosis as measured by organoleptic method, level of volatile Sulphur compounds and self-report. 4\nThe reported prevalence of self-reported halitosis in adolescents varies quite widely ranging from 23.6% – 54.7%. 5–8 In Nigeria, a hospital-based study conducted on self-reported halitosis revealed a prevalence of 14.8% and 17.1%. 1, 9\nHalitosis has been shown to impair health because it affects overall health, well-being and self-esteem negatively.10, 11 Studies also show that halitosis impairs social relations with others. 12–14 Halitosis has been described as being the most important condition to create a bad first impression on others. 3 In another study among adolescent males in Brazil, the respondents reported feelings ranging from shame, tenseness to avoidance of close relationships if they felt they suffered from halitosis. 15 Furthermore, previous studies show that adolescents in New Zealand and Brazil rate halitosis as the most important oral condition impacting their oral health related quality of life. 5, 16, 17 This study therefore sought to add to this knowledge base by describing the prevalence of self-reported halitosis among adolescent students and the extent to which it impairs their oral health related quality of life (OHRQoL) in a suburban community in Nigeria.", "An analytical cross-sectional study was conducted among adolescent students in Pakoto community of Ogun State, Nigeria. The sample size formula for comparison of a continuous variable outcome between 2 groups yielded a total minimum sample size of 353. The parameters used were level of significance of 0.05, power of 80%, mean difference of 1.7 and standard deviation of 5.7. 18 The secondary schools in this community were stratified into private and public schools. Two schools were randomly selected from each stratum to give a total of 4 schools. From each school, about 90 students were recruited into the study evenly spread across the 6 secondary school classes (Junior secondary classes I – III and senior secondary classes I – III). The inclusion criteria were students aged 10 – 19 years who attended secondary schools in Pakoto community. Students with pre-existing systemic conditions, dental caries, orofacial pain, malocclusion, traumatic dental injury and oral mucosal lesions were excluded from the study. Pre-tested self-administered questionnaires were used to obtain respondents' demographic data and their self-perception of halitosis. Respondents were deemed to have reported halitosis if they answered, “Yes”, to the question, “Do you suffer from bad breath?”. The Oral Health Impact Profile -14 (OHIP-14) was used to assess the adolescents' oral health related quality of life (OHRQoL). 
The OHIP-14 has been validated as a measure of OHRQoL among adolescents in Nigeria18. The OHIP-14 scores responses on a Likert scale which ranges from 0 = “Never” to 4 = “Very Often”. Therefore, higher scores on the OHIP-14 equated to poorer OHRQoL.\nThe background characteristics were presented with mean (SD), frequencies and proportions. The Student's Independent t-test and Chi square test were used to compare the background characteristics of adolescents who reported halitosis and those who did not. The OHRQoL scores were not normally distributed and so were presented with median (IQR) and graphically, with box plots. The Mann-Whitney U test was used to compare the median OHRQoL scores of adolescents who reported halitosis and those who did not. The Mann-Whitney U Test is not sensitive to outliers because all the OHIP-14 scores are ranked. The effect size of the Mann-Whitney U test and its 95% confidence interval was calculated as described by Conroy (2012).19 The level of significance was set at p<0.05. Data analysis was done using STATA 13.1 (Stata Corp. 2013. Statistical Software. Release 13. College Station, TX, USA).\nThe principals of the schools gave approval for the study. Informed consent was obtained from parents/guardians. Students aged 18 years and above also gave informed consent while assent to take part in the study was obtained from students less than 18 years old. Ethics approval for this study was obtained from the Health Research and Ethics Committee of the Lagos University Teaching Hospital.", "A total of 361 adolescents took part in this study, 146 boys (40.4%) and 215 girls (59.6%). The adolescents had an age range of 10 – 19 years with a mean age of 14.1 ± 1.79 years. Table 1 shows that most of them, 303 (83.9%), attended public schools and 313 (86.7%) of the adolescents reported that both parents were their caregivers. Close to two thirds of the adolescents (60.4%) reported that they brushed their teeth once daily. Only 8.9% and 2.2% of the adolescents gave a positive history of alcohol and tobacco use respectively. Less than 20% (17.2%) of the adolescents reported that they had ever visited a dentist.\nBackground characteristics of study participants by self-reported halitosis status (N=361)\nThe study further revealed that 32.7% (n=118) of the subjects had self-reported halitosis. There were no statistically significant differences between adolescents who reported halitosis and those who did not concerning age, gender, type of school, caregiver, previous dental visit, alcohol and tobacco use. However, there was a statistically significant difference in the frequency of daily tooth brushing between the 2 groups. (Table 1)\nThe overall OHIP-14 score range was 0 – 32. The overall median OHIP-14 score was 1; interquartile range 0 – 25. Figure 1 shows that the median OHIP-14 score among adolescents with self-reported halitosis was 3 (0–9) while adolescents who did not report halitosis had a median OHIP-14 score of 0 (0 – 5). The Mann-Whitney U test showed that this difference was statistically significant (p < 0.0001), z = -5.31; there was a 66.4% (60.3% – 72.3%) probability that adolescents with self-reported halitosis would have a higher OHIP-14 score than those who did not.\nBox Plots of Oral Health Related Quality of Life Score by Halitosis\n*data more than 1.5 times from the interquartile range are plotted as outliers", "In this study, the prevalence of self-reported halitosis was found to be 32.7%. 
Studies from various regions of the world show that the prevalence of self-reported halitosis among adolescents varies quite widely. 6–8,16 The lowest prevalence of self-reported halitosis was from a national survey in South Korea, 23.6%. 8 Studies from the southern and southeastern regions of Brazil showed a prevalence of 39.7% and 43.0% respectively.7,16. A study of adolescents in a prefecture in Japan recorded the highest prevalence of 54.7%. 6 Thus, the prevalence of 32.7% recorded in this study fell within the range of 23.6 – 54.7% reported from these studies.\nThe variations in the prevalence of self-reported halitosis can be attributed to the study population; adolescent students may be more inclined to report halitosis than non-students. The prevalence of 23.6% reported from South Korea was from a nationally representative sample of adolescents and may be the closest representation of the true prevalence of self-reported halitosis in adolescents. On the other hand, the prevalences of 39.7% and 43.0% reported from Brazil were from adolescents in schools in Brazil and this may explain the closeness of the results to this study's finding. Furthermore, the phrasing of the question assessing self-reported halitosis (“Subjects were also asked whether they were conscious of self-oral malodour”) in the Japanese study may be responsible for the very high prevalence of 54.7% reported.\nThis study also found that self-reported halitosis significantly impaired adolescents' oral health related quality of life (p < 0.0001). The median (IQR) OHIP-14 score among adolescents with self-reported halitosis, 3 (0 – 9) was significantly higher than the median score of 0 (0 – 5) observed for adolescents who did not report halitosis. The median OHIP-14 score of 3 among the adolescents who reported halitosis was significantly higher than the median of 0 for those who did not.\nIn previous studies, self-reported halitosis also significantly impaired the oral health related quality of life of adolescents. Studies conducted among adolescents in New Zealand and Brazil, using OHIP-14, also demonstrated that self-reported halitosis significantly impaired oral health related quality of life. 5, 17 Colussi et al. 5 found that out of 13 socio-demographic, behavioural and oral factors assessed, only self-reported halitosis and socio-economic status had significant impacts on adolescents' OHRQoL in their study. 5 Similarly, Broughton et al. reported that adolescents with self-reported halitosis had higher impacts on OHRQoL than those without self-reported halitosis. 17 Studies in Chinese and Swedish adults also affirm this finding. 10,20 Thus, self-reported halitosis appears to be a condition that cuts across cultures, geographic spaces and significantly impairs adolescents' oral health related quality of life.\nThe overlap in the interquartile ranges (0 – 9 and 0 – 5) of the adolescents with and without self-reported halitosis respectively suggested that it played a limited role in the OHRQoL of adolescents in this study. However, the 66.4% (60.3% – 72.3%) probability that students with self-reported halitosis would have a higher OHIP-14 score than those who did not corroborated the statistically significant p < 0.0001.\nThis study's strength lies in the fact that it is the first to report on the impact of halitosis on the oral health related quality of life of adolescents in Nigeria. 
Adolescence is a peculiarly precarious period when social problems assume epic proportions, and given the importance of halitosis to social relationships, it is particularly important to study the impact of halitosis on OHRQoL in Nigeria, which has the highest population of adolescents in Africa.", "The adolescents studied were adolescents in school. Within the context of Nigeria's educational situation where about 30% of adolescents are out of school, 21 this study fails to capture the full picture of the Nigerian situation. However, this study lays the groundwork for further studies in adolescent oral health which is a neglected area in Nigeria. Furthermore, this was a cross-sectional study and it would be difficult to prove that the impact on OHRQoL observed among these students was solely due to their halitosis status. Thus, we acknowledge that this study has shown association and not causality. However, this potential bias was mitigated by the similarity in the baseline characteristics of the adolescents who reported halitosis and those who did not.", "We found that self-reported halitosis significantly impaired adolescents' OHRQoL. A qualitative study may help to further clarify the role self-reported halitosis plays in OHRQoL of adolescent students in Nigeria." ]
[ "intro", "materials|methods", "results", "discussion", null, "conclusions" ]
[ "Halitosis", "oral health", "quality of life", "adolescent" ]
Introduction: Halitosis can simply be defined as an unpleasant odour that emanates from the mouth. 1, 2 It has been classified into three main categories: genuine, pseudo, and halitophobia. Halitosis is genuine if it is beyond a socially acceptable level; pseudo if it is only perceived by the affected person while halitophobia is halitosis that the affected person perceives as persisting after treatment. 1, 2 Halitosis ranks as the third most common presenting complaint of dental patients following dental caries and periodontal diseases. 2 It may be assessed both objectively and subjectively. However, the subjective assessment may be more relevant in assessing the impact of halitosis on people's quality of lives than an objective assessment. 3 Furthermore, a systematic review showed no significant difference in the prevalence of halitosis as measured by organoleptic method, level of volatile Sulphur compounds and self-report. 4 The reported prevalence of self-reported halitosis in adolescents varies quite widely ranging from 23.6% – 54.7%. 5–8 In Nigeria, a hospital-based study conducted on self-reported halitosis revealed a prevalence of 14.8% and 17.1%. 1, 9 Halitosis has been shown to impair health because it affects overall health, well-being and self-esteem negatively.10, 11 Studies also show that halitosis impairs social relations with others. 12–14 Halitosis has been described as being the most important condition to create a bad first impression on others. 3 In another study among adolescent males in Brazil, the respondents reported feelings ranging from shame, tenseness to avoidance of close relationships if they felt they suffered from halitosis. 15 Furthermore, previous studies show that adolescents in New Zealand and Brazil rate halitosis as the most important oral condition impacting their oral health related quality of life. 5, 16, 17 This study therefore sought to add to this knowledge base by describing the prevalence of self-reported halitosis among adolescent students and the extent to which it impairs their oral health related quality of life (OHRQoL) in a suburban community in Nigeria. Materials and methods: An analytical cross-sectional study was conducted among adolescent students in Pakoto community of Ogun State, Nigeria. The sample size formula for comparison of a continuous variable outcome between 2 groups yielded a total minimum sample size of 353. The parameters used were level of significance of 0.05, power of 80%, mean difference of 1.7 and standard deviation of 5.7. 18 The secondary schools in this community were stratified into private and public schools. Two schools were randomly selected from each stratum to give a total of 4 schools. From each school, about 90 students were recruited into the study evenly spread across the 6 secondary school classes (Junior secondary classes I – III and senior secondary classes I – III). The inclusion criteria were students aged 10 – 19 years who attended secondary schools in Pakoto community. Students with pre-existing systemic conditions, dental caries, orofacial pain, malocclusion, traumatic dental injury and oral mucosal lesions were excluded from the study. Pre-tested self-administered questionnaires were used to obtain respondents' demographic data and their self-perception of halitosis. Respondents were deemed to have reported halitosis if they answered, “Yes”, to the question, “Do you suffer from bad breath?”. 
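As an aside on the sample-size statement in the methods above: the minimum sample size of 353 can be reproduced with the standard two-group comparison-of-means formula for a two-sided alpha of 0.05, 80% power, a mean difference of 1.7 and a standard deviation of 5.7. The paper cites reference 18 for the calculation rather than spelling it out, so the sketch below is an assumed reconstruction, not a quotation of the authors' method.

```python
# Assumed reconstruction of the stated minimum sample size of 353
# (two-group comparison of means: n per group = 2 * ((z_alpha + z_beta) * sd / delta)^2).
from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80
delta, sd = 1.7, 5.7                      # mean difference and SD quoted in the methods

z_alpha = norm.ppf(1 - alpha / 2)         # ~1.96 for a two-sided test
z_beta = norm.ppf(power)                  # ~0.84 for 80% power

n_per_group = 2 * ((z_alpha + z_beta) * sd / delta) ** 2   # ~176.5
n_total = ceil(2 * n_per_group)                            # 353, matching the quoted figure
print(f"n per group ~ {n_per_group:.1f}; total minimum sample size = {n_total}")
```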
The Oral Health Impact Profile -14 (OHIP-14) was used to assess the adolescents' oral health related quality of life (OHRQoL). The OHIP-14 has been validated as a measure of OHRQoL among adolescents in Nigeria18. The OHIP-14 scores responses on a Likert scale which ranges from 0 = “Never” to 4 = “Very Often”. Therefore, higher scores on the OHIP-14 equated to poorer OHRQoL. The background characteristics were presented with mean (SD), frequencies and proportions. The Student's Independent t-test and Chi square test were used to compare the background characteristics of adolescents who reported halitosis and those who did not. The OHRQoL scores were not normally distributed and so were presented with median (IQR) and graphically, with box plots. The Mann-Whitney U test was used to compare the median OHRQoL scores of adolescents who reported halitosis and those who did not. The Mann-Whitney U Test is not sensitive to outliers because all the OHIP-14 scores are ranked. The effect size of the Mann-Whitney U test and its 95% confidence interval was calculated as described by Conroy (2012).19 The level of significance was set at p<0.05. Data analysis was done using STATA 13.1 (Stata Corp. 2013. Statistical Software. Release 13. College Station, TX, USA). The principals of the schools gave approval for the study. Informed consent was obtained from parents/guardians. Students aged 18 years and above also gave informed consent while assent to take part in the study was obtained from students less than 18 years old. Ethics approval for this study was obtained from the Health Research and Ethics Committee of the Lagos University Teaching Hospital. Results: A total of 361 adolescents took part in this study, 146 boys (40.4%) and 215 girls (59.6%). The adolescents had an age range of 10 – 19 years with a mean age of 14.1 ± 1.79 years. Table 1 shows that most of them, 303 (83.9%), attended public schools and 313 (86.7%) of the adolescents reported that both parents were their caregivers. Close to two thirds of the adolescents (60.4%) reported that they brushed their teeth once daily. Only 8.9% and 2.2% of the adolescents gave a positive history of alcohol and tobacco use respectively. Less than 20% (17.2%) of the adolescents reported that they had ever visited a dentist. Background characteristics of study participants by self-reported halitosis status (N=361) The study further revealed that 32.7% (n=118) of the subjects had self-reported halitosis. There were no statistically significant differences between adolescents who reported halitosis and those who did not concerning age, gender, type of school, caregiver, previous dental visit, alcohol and tobacco use. However, there was a statistically significant difference in the frequency of daily tooth brushing between the 2 groups. (Table 1) The overall OHIP-14 score range was 0 – 32. The overall median OHIP-14 score was 1; interquartile range 0 – 25. Figure 1 shows that the median OHIP-14 score among adolescents with self-reported halitosis was 3 (0–9) while adolescents who did not report halitosis had a median OHIP-14 score of 0 (0 – 5). The Mann-Whitney U test showed that this difference was statistically significant (p < 0.0001), z = -5.31; there was a 66.4% (60.3% – 72.3%) probability that adolescents with self-reported halitosis would have a higher OHIP-14 score than those who did not. 
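The analysis reported just above — a Mann-Whitney U comparison of OHIP-14 scores plus a probability-of-superiority effect size (the quantity reported as 66.4% in the results) — can be sketched as follows. The scores are hypothetical placeholders, not the study's data, and the bootstrap interval is only one possible implementation of the effect-size confidence interval attributed to Conroy (2012) in the methods.

```python
# Hedged sketch of the reported analysis on made-up OHIP-14 scores (higher = poorer OHRQoL).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
ohip_halitosis = rng.integers(0, 15, size=118)     # hypothetical scores, self-reported halitosis
ohip_no_halitosis = rng.integers(0, 8, size=243)   # hypothetical scores, no reported halitosis

u_stat, p_value = mannwhitneyu(ohip_halitosis, ohip_no_halitosis, alternative="two-sided")

# Probability of superiority (common-language effect size): U / (n1 * n2)
n1, n2 = len(ohip_halitosis), len(ohip_no_halitosis)
prob_superiority = u_stat / (n1 * n2)

# Simple bootstrap 95% CI for that probability (one possible way to obtain the interval)
boot = []
for _ in range(2000):
    a = rng.choice(ohip_halitosis, size=n1, replace=True)
    b = rng.choice(ohip_no_halitosis, size=n2, replace=True)
    u, _ = mannwhitneyu(a, b, alternative="two-sided")
    boot.append(u / (n1 * n2))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4g}")
print(f"P(score with halitosis > score without) ~ {prob_superiority:.3f} "
      f"(bootstrap 95% CI {ci_low:.3f}-{ci_high:.3f})")
```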
Box Plots of Oral Health Related Quality of Life Score by Halitosis *data more than 1.5 times from the interquartile range are plotted as outliers Discussion: In this study, the prevalence of self-reported halitosis was found to be 32.7%. Studies from various regions of the world show that the prevalence of self-reported halitosis among adolescents varies quite widely. 6–8,16 The lowest prevalence of self-reported halitosis was from a national survey in South Korea, 23.6%. 8 Studies from the southern and southeastern regions of Brazil showed a prevalence of 39.7% and 43.0% respectively.7,16. A study of adolescents in a prefecture in Japan recorded the highest prevalence of 54.7%. 6 Thus, the prevalence of 32.7% recorded in this study fell within the range of 23.6 – 54.7% reported from these studies. The variations in the prevalence of self-reported halitosis can be attributed to the study population; adolescent students may be more inclined to report halitosis than non-students. The prevalence of 23.6% reported from South Korea was from a nationally representative sample of adolescents and may be the closest representation of the true prevalence of self-reported halitosis in adolescents. On the other hand, the prevalences of 39.7% and 43.0% reported from Brazil were from adolescents in schools in Brazil and this may explain the closeness of the results to this study's finding. Furthermore, the phrasing of the question assessing self-reported halitosis (“Subjects were also asked whether they were conscious of self-oral malodour”) in the Japanese study may be responsible for the very high prevalence of 54.7% reported. This study also found that self-reported halitosis significantly impaired adolescents' oral health related quality of life (p < 0.0001). The median (IQR) OHIP-14 score among adolescents with self-reported halitosis, 3 (0 – 9) was significantly higher than the median score of 0 (0 – 5) observed for adolescents who did not report halitosis. The median OHIP-14 score of 3 among the adolescents who reported halitosis was significantly higher than the median of 0 for those who did not. In previous studies, self-reported halitosis also significantly impaired the oral health related quality of life of adolescents. Studies conducted among adolescents in New Zealand and Brazil, using OHIP-14, also demonstrated that self-reported halitosis significantly impaired oral health related quality of life. 5, 17 Colussi et al. 5 found that out of 13 socio-demographic, behavioural and oral factors assessed, only self-reported halitosis and socio-economic status had significant impacts on adolescents' OHRQoL in their study. 5 Similarly, Broughton et al. reported that adolescents with self-reported halitosis had higher impacts on OHRQoL than those without self-reported halitosis. 17 Studies in Chinese and Swedish adults also affirm this finding. 10,20 Thus, self-reported halitosis appears to be a condition that cuts across cultures, geographic spaces and significantly impairs adolescents' oral health related quality of life. The overlap in the interquartile ranges (0 – 9 and 0 – 5) of the adolescents with and without self-reported halitosis respectively suggested that it played a limited role in the OHRQoL of adolescents in this study. However, the 66.4% (60.3% – 72.3%) probability that students with self-reported halitosis would have a higher OHIP-14 score than those who did not corroborated the statistically significant p < 0.0001. 
This study's strength lies in the fact that it is the first to report on the impact of halitosis on the oral health related quality of life of adolescents in Nigeria. Adolescence is a peculiarly precarious period when social problems assume epic proportions, and given the importance of halitosis to social relationships, it is particularly important to study the impact of halitosis on OHRQoL in Nigeria, which has the highest population of adolescents in Africa. Limitation: The adolescents studied were adolescents in school. Within the context of Nigeria's educational situation where about 30% of adolescents are out of school, 21 this study fails to capture the full picture of the Nigerian situation. However, this study lays the groundwork for further studies in adolescent oral health which is a neglected area in Nigeria. Furthermore, this was a cross-sectional study and it would be difficult to prove that the impact on OHRQoL observed among these students was solely due to their halitosis status. Thus, we acknowledge that this study has shown association and not causality. However, this potential bias was mitigated by the similarity in the baseline characteristics of the adolescents who reported halitosis and those who did not. Conclusion: We found that self-reported halitosis significantly impaired adolescents' OHRQoL. A qualitative study may help to further clarify the role self-reported halitosis plays in OHRQoL of adolescent students in Nigeria.
Background: Halitosis is an important cause of impaired quality of life in adolescents. Little is known about the prevalence of self-reported halitosis in adolescents in Nigeria and the extent to which self-reported halitosis impairs their oral health related quality of life. Methods: An analytical cross-sectional study. Pre-tested self-administered pro-forma was used to obtain the adolescents' demographic data and their self-perception of halitosis. The Oral Health Impact Profile (OHIP-14) was used to assess the adolescents' OHRQoL. The Mann-Whitney U test was used to compare the median OHIP-14 scores between adolescents who reported halitosis and those who did not. The level of significance was set at p < 0.05. Ethics approval for this study was obtained from the Health Research and Ethics Committee of the Lagos University Teaching Hospital. Results: A total of 361 adolescents aged 10 - 19 years (mean age 14.1 ± 1.79 years) took part in the study. Of these, 32.7% (n=118) had self-reported halitosis. The median OHIP-14 score among adolescents with self-reported halitosis was 3 (0-9) while those who did not report halitosis had a median OHIP-14 score of 0 (0 - 5). This difference was statistically significant (p < 0.0001). Conclusions: Self-reported halitosis significantly impaired the oral health related quality of life of the adolescents.
Introduction: Halitosis can simply be defined as an unpleasant odour that emanates from the mouth. 1, 2 It has been classified into three main categories: genuine, pseudo, and halitophobia. Halitosis is genuine if it is beyond a socially acceptable level; pseudo if it is perceived only by the affected person; and halitophobia if the affected person perceives it as persisting after treatment. 1, 2 Halitosis ranks as the third most common presenting complaint of dental patients, after dental caries and periodontal diseases. 2 It may be assessed both objectively and subjectively. However, subjective assessment may be more relevant than objective assessment when gauging the impact of halitosis on people's quality of life. 3 Furthermore, a systematic review showed no significant difference in the prevalence of halitosis as measured by the organoleptic method, the level of volatile sulphur compounds, and self-report. 4 The prevalence of self-reported halitosis in adolescents varies quite widely, ranging from 23.6% to 54.7%. 5–8 In Nigeria, hospital-based studies of self-reported halitosis revealed prevalences of 14.8% and 17.1%. 1, 9 Halitosis has been shown to impair health because it negatively affects overall health, well-being and self-esteem. 10, 11 Studies also show that halitosis impairs social relations with others. 12–14 Halitosis has been described as the condition most likely to create a bad first impression on others. 3 In another study, among adolescent males in Brazil, respondents reported feelings ranging from shame and tenseness to avoidance of close relationships if they felt they suffered from halitosis. 15 Furthermore, previous studies show that adolescents in New Zealand and Brazil rate halitosis as the most important oral condition impacting their oral health related quality of life. 5, 16, 17 This study therefore sought to add to this knowledge base by describing the prevalence of self-reported halitosis among adolescent students in a suburban community in Nigeria and the extent to which it impairs their oral health related quality of life (OHRQoL). Conclusion: We found that self-reported halitosis significantly impaired adolescents' OHRQoL. A qualitative study may help to further clarify the role self-reported halitosis plays in the OHRQoL of adolescent students in Nigeria.
Background: Halitosis is an important cause of impaired quality of life in adolescents. Little is known about the prevalence of self-reported halitosis in adolescents in Nigeria and the extent to which it impairs their oral health related quality of life. Methods: This was an analytical cross-sectional study. A pre-tested, self-administered pro forma was used to obtain the adolescents' demographic data and their self-perception of halitosis. The Oral Health Impact Profile (OHIP-14) was used to assess the adolescents' OHRQoL. The Mann-Whitney U test was used to compare the median OHIP-14 scores between adolescents who reported halitosis and those who did not. The level of significance was set at p < 0.05. Ethics approval for this study was obtained from the Health Research and Ethics Committee of the Lagos University Teaching Hospital. Results: A total of 361 adolescents aged 10 - 19 years (mean age 14.1 ± 1.79 years) took part in the study. Of these, 32.7% (n=118) had self-reported halitosis. The median OHIP-14 score among adolescents with self-reported halitosis was 3 (0 - 9) while those who did not report halitosis had a median OHIP-14 score of 0 (0 - 5). This difference was statistically significant (p < 0.0001). Conclusions: Self-reported halitosis significantly impaired the oral health related quality of life of the adolescents.
2,209
271
[ 136 ]
6
[ "halitosis", "reported", "adolescents", "reported halitosis", "self", "study", "self reported", "self reported halitosis", "14", "prevalence" ]
[ "halitophobia halitosis affected", "prevalence halitosis measured", "halitophobia halitosis genuine", "impact halitosis oral", "halitosis important oral" ]
null
[CONTENT] Halitosis | oral health | quality of life | adolescent [SUMMARY]
null
[CONTENT] Halitosis | oral health | quality of life | adolescent [SUMMARY]
[CONTENT] Halitosis | oral health | quality of life | adolescent [SUMMARY]
[CONTENT] Halitosis | oral health | quality of life | adolescent [SUMMARY]
[CONTENT] Halitosis | oral health | quality of life | adolescent [SUMMARY]
[CONTENT] Adolescent | Child | Cross-Sectional Studies | Female | Halitosis | Humans | Male | Nigeria | Oral Health | Prevalence | Quality of Life | Self Report | Students | Suburban Population | Surveys and Questionnaires | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Child | Cross-Sectional Studies | Female | Halitosis | Humans | Male | Nigeria | Oral Health | Prevalence | Quality of Life | Self Report | Students | Suburban Population | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Adolescent | Child | Cross-Sectional Studies | Female | Halitosis | Humans | Male | Nigeria | Oral Health | Prevalence | Quality of Life | Self Report | Students | Suburban Population | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Adolescent | Child | Cross-Sectional Studies | Female | Halitosis | Humans | Male | Nigeria | Oral Health | Prevalence | Quality of Life | Self Report | Students | Suburban Population | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Adolescent | Child | Cross-Sectional Studies | Female | Halitosis | Humans | Male | Nigeria | Oral Health | Prevalence | Quality of Life | Self Report | Students | Suburban Population | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] halitophobia halitosis affected | prevalence halitosis measured | halitophobia halitosis genuine | impact halitosis oral | halitosis important oral [SUMMARY]
null
[CONTENT] halitophobia halitosis affected | prevalence halitosis measured | halitophobia halitosis genuine | impact halitosis oral | halitosis important oral [SUMMARY]
[CONTENT] halitophobia halitosis affected | prevalence halitosis measured | halitophobia halitosis genuine | impact halitosis oral | halitosis important oral [SUMMARY]
[CONTENT] halitophobia halitosis affected | prevalence halitosis measured | halitophobia halitosis genuine | impact halitosis oral | halitosis important oral [SUMMARY]
[CONTENT] halitophobia halitosis affected | prevalence halitosis measured | halitophobia halitosis genuine | impact halitosis oral | halitosis important oral [SUMMARY]
[CONTENT] halitosis | reported | adolescents | reported halitosis | self | study | self reported | self reported halitosis | 14 | prevalence [SUMMARY]
null
[CONTENT] halitosis | reported | adolescents | reported halitosis | self | study | self reported | self reported halitosis | 14 | prevalence [SUMMARY]
[CONTENT] halitosis | reported | adolescents | reported halitosis | self | study | self reported | self reported halitosis | 14 | prevalence [SUMMARY]
[CONTENT] halitosis | reported | adolescents | reported halitosis | self | study | self reported | self reported halitosis | 14 | prevalence [SUMMARY]
[CONTENT] halitosis | reported | adolescents | reported halitosis | self | study | self reported | self reported halitosis | 14 | prevalence [SUMMARY]
[CONTENT] halitosis | prevalence | self | reported | health | pseudo | person | ranging | genuine | halitophobia [SUMMARY]
null
[CONTENT] score | adolescents | ohip 14 score | 14 score | 14 | reported | ohip 14 | ohip | range | halitosis [SUMMARY]
[CONTENT] self reported | self reported halitosis | self | ohrqol | ohrqol qualitative study | halitosis plays ohrqol | qualitative | clarify role | clarify role self | clarify role self reported [SUMMARY]
[CONTENT] halitosis | reported | adolescents | self | self reported halitosis | self reported | reported halitosis | study | 14 | prevalence [SUMMARY]
[CONTENT] halitosis | reported | adolescents | self | self reported halitosis | self reported | reported halitosis | study | 14 | prevalence [SUMMARY]
[CONTENT] ||| Nigeria [SUMMARY]
null
[CONTENT] 361 | 10 - 19 years | age 14.1 | 1.79 years ||| 32.7% | n=118 ||| 3 | 0-9 | 0 | 0 ||| [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| Nigeria ||| ||| ||| The Oral Health Impact Profile | Mann-Whitney U ||| p < 0.05 ||| the Health Research and Ethics Committee | the Lagos University Teaching Hospital ||| ||| 361 | 10 - 19 years | age 14.1 | 1.79 years ||| 32.7% | n=118 ||| 3 | 0-9 | 0 | 0 ||| ||| [SUMMARY]
[CONTENT] ||| Nigeria ||| ||| ||| The Oral Health Impact Profile | Mann-Whitney U ||| p < 0.05 ||| the Health Research and Ethics Committee | the Lagos University Teaching Hospital ||| ||| 361 | 10 - 19 years | age 14.1 | 1.79 years ||| 32.7% | n=118 ||| 3 | 0-9 | 0 | 0 ||| ||| [SUMMARY]
Intracranial aneurysm treatment with WEB and adjunctive stent: preliminary evaluation in a single-center series.
33785641
Intrasaccular flow disruption with WEB is a safe and efficacious technique that has significantly changed the endovascular management of wide-neck bifurcation aneurysms (WNBAs). Use of a stent in combination with WEB is occasionally required. We analyzed the frequency of use, indications, safety, and efficacy of the WEB-stent combination.
BACKGROUND
All aneurysms treated with WEB and stent were extracted from a prospectively maintained database. Patient and aneurysm characteristics, complications, and anatomical results were analyzed by a physician who was not involved in the procedures.
METHODS
From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Of these, 17/152 patients (11.2%) with 19/157 aneurysms (12.1%) were treated with a WEB device and stent. Indications were a very wide neck with a branch emerging from it in 1/19 aneurysms (5.2%) and WEB protrusion in 18/19 (94.7%). At 1 month, no morbimortality was reported. At 6 months, anatomical results were complete aneurysm occlusion in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). At 12 months, there was complete aneurysm occlusion in 13/14 aneurysms (92.9%) and neck remnant in 1/14 (7.1%).
RESULTS
Combining WEB and stent is a therapeutic strategy to manage WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality with a high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively).
CONCLUSIONS
[ "Embolization, Therapeutic", "Endovascular Procedures", "Humans", "Intracranial Aneurysm", "Retrospective Studies", "Stents", "Treatment Outcome" ]
8785053
Introduction
Since publication of the ISAT (International Subarachnoid Aneurysm Trial) results, the endovascular approach has increasingly replaced surgery in the management of intracranial aneurysms (IAs).1 Although treatment was initially based on coils (with or without balloon assistance), the complexity of some IAs led to the development of alternative techniques such as flow diversion and flow disruption.2–6 Flow disruption with WEB (MicroVention, Aliso Viejo, CA, USA) is an innovative treatment for wide-neck bifurcation aneurysms (WNBAs) that has been evaluated in several multicenter prospective studies, including two European trials (WEB Clinical Assessment of Intrasaccular Aneurysm Therapy (WEBCAST) and WEBCAST-2), one trial in the United States (WEB Intrasaccular Therapy (WEB-IT)), and one French trial (French Observatory), showing high safety and efficacy in the short and long term.7–14 Additional WEB trials are currently recruiting or under analysis: CLinical Assessment of WEB Device in Ruptured aneurYSms (CLARYS), CLinical EValuation of WEB 0.017 Device in Intracranial AneuRysms (CLEVER), and WEB-IT China. The initial clinical experience with WEB showed that WEB sizing was critical to obtain good apposition of the lateral surface of the WEB against the aneurysm wall and complete sealing of the aneurysm neck. Recommendations regarding WEB sizing have evolved over time, including a proposal to oversize the device by approximately 1 mm in transverse diameter.15 Although this approach is now routine practice, excessive oversizing may lead to WEB protrusion, which is potentially associated with thromboembolic events (TEs). To overcome the protrusion problem, three options have been considered: downsizing the WEB device, using balloon assistance during or after WEB deployment, or placing a stent in the parent artery in the event of protrusion with significant reduction of the parent artery lumen.16 However, downsizing the device may not be a viable option given its potential association with a higher risk of aneurysm recurrence. Because balloon inflation during WEB deployment only very slightly changes the WEB position in the aneurysm sac, our limited experience with balloon assistance was inconclusive. Additionally, using balloon assistance after WEB placement led to poorer outcomes, with no or very limited change in WEB position. After these relatively disappointing attempts with balloon assistance, our department devised a new strategy for WEB aneurysm treatment: oversize the WEB device in transverse diameter and, in the event of WEB protrusion, place a stent (adjunctive stenting) in the parent artery. We report and retrospectively analyze our preliminary experience with this strategy in patients treated with WEB and stent, drawn from our prospective database of patients treated with WEB.
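As a purely schematic illustration of the sizing-and-rescue logic described above (not a clinical rule, and not code from the authors), the sketch below encodes the two decisions as simple functions. The ~1 mm oversizing margin and the protrusion criterion come from the text; the function names and the threshold for a "significant" lumen reduction are assumptions made only for illustration.

```python
# Schematic sketch of the treatment logic described in the text; function names
# and the lumen-reduction threshold are illustrative assumptions, not a clinical
# decision rule.
from dataclasses import dataclass

OVERSIZING_MM = 1.0  # approximate transverse oversizing recommended in the text

@dataclass
class DeploymentObservation:
    web_protrudes: bool
    lumen_reduction_fraction: float  # 0.0 (none) .. 1.0 (occluded)

def target_web_width(aneurysm_width_mm: float) -> float:
    """Oversize the WEB by roughly 1 mm in transverse diameter."""
    return aneurysm_width_mm + OVERSIZING_MM

def adjunctive_stent_indicated(obs: DeploymentObservation,
                               significant_reduction: float = 0.3) -> bool:
    """Suggest a parent-artery stent if the WEB protrudes and meaningfully
    narrows the lumen (the 0.3 threshold is an assumed placeholder)."""
    return obs.web_protrudes and obs.lumen_reduction_fraction >= significant_reduction

print(target_web_width(6.1))  # e.g. a 6.1 mm wide aneurysm -> ~7.1 mm WEB
print(adjunctive_stent_indicated(DeploymentObservation(True, 0.4)))
```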
null
null
Results
Patients
From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (No 17) was treated for three aneurysms during three procedures.
Table 1. Characteristics of patients and aneurysms. Abbreviations: Acom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured.
Aneurysms
Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (No 1, 3, 5, and 10). The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively. Aneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal carotid artery terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%).
Procedures
Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). Among aneurysms treated with WEB SL, 7 aneurysms were treated with the WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with the WEB 21 system (No 3, 4, and 6), 7 with the WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9%) with the WEB 33 system (No 10). LVIS Jr was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1). Stent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it. In 18/19 (94.7%) aneurysms, stent placement was decided during the procedure due to WEB protrusion.
Table 2. Dual antiplatelet therapy, devices, complications, and anatomical results. Footnotes: complications — 0: none; 1: ischemic; 2: hemorrhage. Occlusion — 1: complete occlusion; 2: neck remnant; 3: aneurysm remnant. WEB shape — 0: no modification; 1: mild modification (<50% decrease in height); 2: strong modification (≥50% in height). NA, not available.
Figure 1. Patient 17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. (B) and (C) DSA, working view, before and after WEB device deployment; after deployment, WEB protrusion is visible (white arrowhead). (D) and (E) Subtracted and unsubtracted working-view DSA at the end of the procedure show the WEB and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of the procedure and at 6 months, respectively, show the WEB and stent with complete aneurysm occlusion.
Complications
There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had a transient ischemic attack (TIA) that occurred on day 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after the TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications, including one (No 17) who had three procedures for three different aneurysms with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhages (SAH) depicted on 24-hour MRI occurring after MCA aneurysm treatment. The same patient also had a cerebellar hematoma depicted on 24-hour MRI after BA aneurysm treatment. These three events were not associated with any clinical deterioration. Finally, patient No 8 (treated for an MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI. All patients were mRS 0 at 1 month post-complication, resulting in a morbimortality rate of 0.0%.
Anatomical results at 6 months
DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). Patient No 3 with an aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown. WEB shape was unchanged in 13/17 (76.5%) aneurysms. There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%).
Anatomical results at 12 months
DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%). No intra-stent stenosis or occlusion was depicted at 12 months. WEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%).
Retreatment
One patient (No 3) of 17 (5.9%) was retreated in the year following the initial WEB and stent procedure. Additional coils were placed through the stent.
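The occlusion figures above are simple proportions (e.g. 15/17 ≈ 88.2%, 13/14 ≈ 92.9%), and the paper reports point estimates only. The sketch below recomputes them and, as an added illustration not taken from the source, attaches Wilson score confidence intervals to convey the uncertainty that comes with such small denominators.

```python
# Illustrative sketch: recompute the reported proportions and attach Wilson 95%
# confidence intervals (the intervals are an illustration, not results reported
# in the paper).
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

outcomes = {
    "WEB + stent among WEB-treated patients": (17, 152),
    "Complete occlusion at 6 months": (15, 17),
    "Complete occlusion at 12 months": (13, 14),
}
for label, (k, n) in outcomes.items():
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```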
Conclusion
Combining WEB with a stent is part of the treatment strategy for managing WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality and high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively). This combination should be considered a safe and effective option in the treatment of complex WNBA. Further studies (including comparative ones) are needed to better define the place of this endovascular treatment in the management of intracranial aneurysms.
[ "Patients", "Web device", "Stents", "Therapeutic strategy", "Embolization procedure", "Follow-up imaging protocol", "Data collection", "Data analysis", "Statistical analysis", "Patients", "Aneurysms", "Procedures", "Complications", "Anatomical results at 6 months", "Anatomical results at 12 months", "Retreatment", "Limitations" ]
[ "Since 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020.", "The WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively).", "The following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA).", "In our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment.\nThe decision to use stent in addition to WEB placement occurs in two circumstances:\nPreoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment.\nA WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization.", "Procedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed.\nPost-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months.", "Anatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis.\nImmediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. 
At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years.", "The following data were collected:\nPatient: age, gender;\nAneurysm: location, size (pretreatment);\nWEB procedure: date, type and size of device used, any complications;\nStent type;\nProcedure images, 24-hour MRI and DSA at 6 and 12 months.", "All clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure.\nAneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion.", "Continuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA).", "From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (17) was treated for three aneurysms during three procedures.\nCharacteristics of patients and aneurysms\nAcom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured.", "Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (N0 1, 3, 5, and 10). The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively.\nAneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal artery carotid terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%).", "Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). 
Among aneurysms treated with WEB SL, 7 aneurysms were treated with WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with WEB 21 system (No 3, 4 and 6), 7 with WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9 %) with WEB 33 system (No 10).\nDual antiplatelet therapy, devices, complications, and anatomical results\n*0: none; 1: ischemic; 2: hemorrhage.\n†1: complete occlusion; 2: neck remnant; 3: aneurysm remnant.\n‡0: no WEB shape modification; 1: mild WEB shape modification (<50% of decrease in height); 2: strong WEB shape modification (≥50% in height).\nNA, not available.\nLVIS JR was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1).\nPatient17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. (B) and (C) DSA, working view, before and after web device deployment. After web deployment web protrusion is visible (white arrowhead). (D) and (E) subtracted and unsubtracted working view. DSA subtracted and unsubtracted working view at the end of procedure show the web and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of procedure and at 6 months, respectively, show the web and stent with complete aneurysm occlusion.\nStent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it. In 18/19 (94.7%) aneurysms, stent placement was determined during procedure due to WEB protrusion.", "There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had transient ischemic attack (TIA) that occurred on day 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications including one (No 17) who had three procedures for three different aneurysms with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhage (SAH) depicted on 24-hour MRI occurring after MCA aneurysm treatment. The patient also had a cerebellar hematoma depicted on 24-hour MRI post-BA aneurysm treatment. These three events were not associated with any clinical deterioration. Finally, patient No 8 (treated for MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI.\nFinally, all patients were mRS 0 1 month post-complication, resulting in a morbimortality rate of 0.0%.", "DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). Patient No 3 with aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown.\nWEB shape was unchanged in 13/17 (76.5%) aneurysms. 
There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%).", "DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%).\nNo intra-stent stenosis or occlusion was depicted at 12 months.\nWEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%).", "One patient (No 3) of 17 (5.9%) was retreated the year following the initial WEB and stent procedure. Additional coils were placed through the stent.", "This study has some limitations. First this analysis was retrospectively conducted in a prospectively maintained database. At the beginning of our experience of combining WEB and stent, the impression was that it would be used in a very limited number of cases and that no prospective study was required to evaluate this treatment. Second, the patient cohort is relatively small because indications for combining WEB and stent are relatively limited. Nonetheless, it is important to report the usefulness, safety, and efficacy of this combination." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Patients", "Web device", "Stents", "Therapeutic strategy", "Embolization procedure", "Follow-up imaging protocol", "Data collection", "Data analysis", "Statistical analysis", "Results", "Patients", "Aneurysms", "Procedures", "Complications", "Anatomical results at 6 months", "Anatomical results at 12 months", "Retreatment", "Discussion", "Limitations", "Conclusion" ]
[ "Since publication of the ISAT (International Subarachnoid Aneurysm Trial) results, the endovascular approach has increasingly replaced surgery when managing intracranial aneurysms (IAs).1 In cases where treatment was initially based on coil use (with or without balloon-assistance), treatment complexity of some IAs led to the development of alternative techniques such as flow diversion and flow disruption.2–6 Flow disruption with WEB (MicroVention, Aliso Viejo, CA, USA) is an innovative treatment for wide-neck bifurcation aneurysms (WNBAs) that has been evaluated in several multicenter prospective studies, including two European trials (WEB Clinical Assessment of Intrasaccular Aneurysm Therapy (WEBCAST) and WEBCAST-2), one trial in the United States (WEB Intrasaccular Therapy (WEB-IT)), and one French trial (French Observatory), showing high safety and efficacy in the short and long term.7–14 Additional WEB trials are currently recruiting or under analysis: CLinical Assessment of WEB Device in Ruptured aneurYSms (CLARYS), CLinical EValuation of WEB 0.017 Device in Intracranial AneuRysms (CLEVER), and WEB-IT China (WEB-IT China).\nThe initial clinical experience with WEB showed that WEB sizing was critical to obtain a good apposition of the lateral surface of the WEB against the aneurysm wall and a complete sealing of the aneurysm neck. Recommendations regarding WEB sizing have evolved over time including proposing oversizing (approximately 1 mm) the device in transverse diameter.15 Although this approach is now routine practice, excessive oversizing may lead to WEB protrusion, which is potentially associated with thromboembolic events (TEs). To overcome the protrusion problem, three options have been considered: downsizing the WEB device, use of balloon assistance during or after WEB deployment, or placement of a stent in the parent artery in the event of protrusion with important reduction of parent artery lumen.16 However, downsizing the device may not be a viable option given its potential association with a higher risk of aneurysm recurrence. Given that balloon inflation during WEB deployment only very slightly changes the WEB position in the aneurysm sac, our limited experience with balloon assistance was inconclusive. Additionally, using balloon assistance after WEB placement led to poorer outcomes with no or very limited change in WEB position. After these relatively disappointing attempts with balloon assistance, our department devised a new strategy for WEB aneurysm treatment. The strategy involves oversizing the WEB device in transverse diameter and, in the event of WEB protrusion, placing a stent (adjunctive stenting) in the parent artery.\nWe report and retrospectively analyze our preliminary experience with this strategy in patients treated with WEB and stent in our prospective database of patients treated with WEB.", "According to the retrospective study design and the fact that WEB treatment and intracranial stenting were already part of current clinical practice, no institutional review board or ethics committee review was necessary; instead, patients were required to give oral consent for data use.\nPatients Since 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. 
We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020.\nSince 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020.\nWeb device The WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively).\nThe WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively).\nStents The following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA).\nThe following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA).\nTherapeutic strategy In our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment.\nThe decision to use stent in addition to WEB placement occurs in two circumstances:\nPreoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment.\nA WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization.\nIn our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. 
WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment.\nThe decision to use stent in addition to WEB placement occurs in two circumstances:\nPreoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment.\nA WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization.\nEmbolization procedure Procedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed.\nPost-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months.\nProcedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed.\nPost-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months.\nFollow-up imaging protocol Anatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis.\nImmediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. 
Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years.\nAnatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis.\nImmediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years.\nData collection The following data were collected:\nPatient: age, gender;\nAneurysm: location, size (pretreatment);\nWEB procedure: date, type and size of device used, any complications;\nStent type;\nProcedure images, 24-hour MRI and DSA at 6 and 12 months.\nThe following data were collected:\nPatient: age, gender;\nAneurysm: location, size (pretreatment);\nWEB procedure: date, type and size of device used, any complications;\nStent type;\nProcedure images, 24-hour MRI and DSA at 6 and 12 months.\nData analysis All clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure.\nAneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion.\nAll clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure.\nAneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion.\nStatistical analysis Continuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA).\nContinuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA).", "Since 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020.", "The WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. 
From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively).", "The following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA).", "In our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment.\nThe decision to use stent in addition to WEB placement occurs in two circumstances:\nPreoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment.\nA WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization.", "Procedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed.\nPost-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months.", "Anatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis.\nImmediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. 
Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years.", "The following data were collected:\nPatient: age, gender;\nAneurysm: location, size (pretreatment);\nWEB procedure: date, type and size of device used, any complications;\nStent type;\nProcedure images, 24-hour MRI and DSA at 6 and 12 months.", "All clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure.\nAneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion.", "Continuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA).", "Patients From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (17) was treated for three aneurysms during three procedures.\nCharacteristics of patients and aneurysms\nAcom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured.\nFrom June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (17) was treated for three aneurysms during three procedures.\nCharacteristics of patients and aneurysms\nAcom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured.\nAneurysms Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (N0 1, 3, 5, and 10). The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively.\nAneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal artery carotid terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%).\nOf 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (N0 1, 3, 5, and 10). 
Procedures Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). Among aneurysms treated with WEB SL, 7 aneurysms were treated with the WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with the WEB 21 system (No 3, 4 and 6), 7 with the WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9%) with the WEB 33 system (No 10).\nDual antiplatelet therapy, devices, complications, and anatomical results\n*0: none; 1: ischemic; 2: hemorrhage.\n†1: complete occlusion; 2: neck remnant; 3: aneurysm remnant.\n‡0: no WEB shape modification; 1: mild WEB shape modification (<50% decrease in height); 2: strong WEB shape modification (≥50% decrease in height).\nNA, not available.\nLVIS Jr was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1).\nPatient 17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. (B) and (C) DSA, working view, before and after WEB device deployment. After WEB deployment, WEB protrusion is visible (white arrowhead). (D) and (E) DSA, subtracted and unsubtracted working views at the end of the procedure, show the WEB and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of the procedure and at 6 months, respectively, show the WEB and stent with complete aneurysm occlusion.\nStent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it. In 18/19 (94.7%) aneurysms, stent placement was determined during the procedure due to WEB protrusion.
Complications There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had a transient ischemic attack (TIA) that occurred on days 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after the TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications, including one (No 17) who underwent three procedures for three different aneurysms, with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhages (SAH) depicted on 24-hour MRI after MCA aneurysm treatment. The same patient also had a cerebellar hematoma depicted on 24-hour MRI after BA aneurysm treatment. These three events were not associated with any clinical deterioration. Finally, patient No 8 (treated for an MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI.\nAll patients were mRS 0 at 1 month post-complication, resulting in a morbimortality rate of 0.0%.
Anatomical results at 6 months DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). Patient No 3 with aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown.\nWEB shape was unchanged in 13/17 (76.5%) aneurysms. There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%).\nAnatomical results at 12 months DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%).\nNo intra-stent stenosis or occlusion was depicted at 12 months.\nWEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%).\nRetreatment One patient (No 3) of 17 (5.9%) was retreated the year following the initial WEB and stent procedure. Additional coils were placed through the stent.",
"From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (No 17) was treated for three aneurysms during three procedures.\nCharacteristics of patients and aneurysms\nAcom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured.", "Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (No 1, 3, 5, and 10). The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively.\nAneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal carotid artery terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%).", "Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). Among aneurysms treated with WEB SL, 7 aneurysms were treated with the WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with the WEB 21 system (No 3, 4 and 6), 7 with the WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9%) with the WEB 33 system (No 10).\nDual antiplatelet therapy, devices, complications, and anatomical results\n*0: none; 1: ischemic; 2: hemorrhage.\n†1: complete occlusion; 2: neck remnant; 3: aneurysm remnant.\n‡0: no WEB shape modification; 1: mild WEB shape modification (<50% decrease in height); 2: strong WEB shape modification (≥50% decrease in height).\nNA, not available.\nLVIS Jr was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1).\nPatient 17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. (B) and (C) DSA, working view, before and after WEB device deployment. After WEB deployment, WEB protrusion is visible (white arrowhead). (D) and (E) DSA, subtracted and unsubtracted working views at the end of the procedure, show the WEB and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of the procedure and at 6 months, respectively, show the WEB and stent with complete aneurysm occlusion.\nStent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it.
In 18/19 (94.7%) aneurysms, stent placement was determined during procedure due to WEB protrusion.", "There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had transient ischemic attack (TIA) that occurred on day 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications including one (No 17) who had three procedures for three different aneurysms with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhage (SAH) depicted on 24-hour MRI occurring after MCA aneurysm treatment. The patient also had a cerebellar hematoma depicted on 24-hour MRI post-BA aneurysm treatment. These three events were not associated with any clinical deterioration. Finally, patient No 8 (treated for MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI.\nFinally, all patients were mRS 0 1 month post-complication, resulting in a morbimortality rate of 0.0%.", "DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). Patient No 3 with aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown.\nWEB shape was unchanged in 13/17 (76.5%) aneurysms. There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%).", "DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%).\nNo intra-stent stenosis or occlusion was depicted at 12 months.\nWEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%).", "One patient (No 3) of 17 (5.9%) was retreated the year following the initial WEB and stent procedure. Additional coils were placed through the stent.", "This single-center retrospective study shows that additional stenting was used in a relatively low percentage of cases (11.2%). Stent placement was always performed after WEB deployment and was successful in all cases (100.0%). Complications (most asymptomatic) were observed in 29.4% of cases, but morbimortality at 1 month was 0.0%. Aneurysm treatment with WEB and stent was associated with excellent anatomical results: complete aneurysm occlusion in 92.9% and neck remnant in 7.1% at 12 months. 
Retreatment rate at 12 months was limited to 5.9%.\nSince 2010, WEB treatment has primarily been used in Europe for WNBA management.17 Prospective, multicenter studies conducted in Europe and the USA have shown high safety and efficacy at 12 months, 2 years, and 3 years.6 8 18 During the learning curve with this innovative device, it became apparent that appropriate WEB sizing is critical to obtain a stable and complete aneurysm occlusion. Achieving good apposition of the device against the aneurysm wall to properly block the neck required oversizing the WEB device by approximately 1 mm relative to the mean aneurysm transverse diameter.15 However, oversizing may lead to WEB protrusion and potential stenosis of one branch of the bifurcation. This is particularly true for complex WNBA: aneurysms not centered on the bifurcation, irregularly shaped aneurysms, and aneurysms with strong angulation relative to the parent artery. In these situations, proper deployment of the WEB in the aneurysm is sometimes challenging. To overcome this problem, Mihalea and colleagues have proposed combining WEB and remodeling techniques.16 The balloon is inflated before WEB deployment in order to slightly change the WEB position in the aneurysm sac and thus avoid or reduce WEB protrusion.\nAs this technique was not very effective in our experience, we decided to evaluate another treatment strategy. This strategy, which we used in 18/19 aneurysms (94.7%), consisted of routinely oversizing the WEB and, if there was a WEB protrusion in one branch, placing a stent. In a limited number of aneurysms (1/19, 5.2%), stent placement was deemed mandatory at the beginning of the procedure. In this case, the microcatheter for stent deployment was placed before WEB placement, followed by WEB deployment and, finally, stent deployment.\nAdjunctive stenting was used with all WEB systems (17, 21, 27, and 33) and no specific problems were encountered with any system. Given the number of patients/aneurysms, it is not feasible to analyze whether the progressive evolution of the WEB system (including the evolution of delivery microcatheter size) has significantly influenced the need for adjunctive stenting. Both braided and laser-cut stents were used in combination with WEB, and no specific technical difficulty was encountered with any type.\nIn this series, we reported detailed data on complications, including those not associated with clinical deterioration and depicted only on postoperative MRI, an imaging protocol that has yet to be fully integrated into routine clinical practice. The detailed reporting of complications may partially explain why the reported rate of complications is relatively high (29.4%) and slightly higher than that reported with stent-assisted coiling (21.2%).19 Even though the complication rate was relatively high, most complications were asymptomatic and no morbidity or mortality was observed at 1 month. Compared with what was reported in European and US WEB studies (no mortality, morbidity 3.0% and 0.7%, respectively), the safety of the WEB and stent treatment is unlikely to be significantly different from WEB alone. Knowing that stent-assisted coiling is associated with a higher rate of morbimortality compared with coiling alone, this result is important.20\n\nVery good efficacy results were observed in the present study, with complete occlusion in 88.2% at 6 months and 92.9% at 12 months, and adequate occlusion in 94.1% at 6 months and 100.0% at 12 months.
Furthermore, these efficacy results are superior to those reported in prospective, multicenter studies conducted in Europe and the USA. In the entire population of three European studies (WEBCAST, WEBCAST 2, and French Observatory), aneurysm occlusion at 12 months was complete occlusion in 52.9%, neck remnant in 26.1%, and aneurysm remnant in 20.9%.7–9 Similar results were reported in the US study: complete occlusion in 53.8%, neck remnant in 30.8%, and aneurysm remnant in 15.4%.11 However, a recent study analyzing factors affecting aneurysm occlusion after WEB treatment did not show a positive association between stenting and the rate of adequate occlusion.21\n\nIn contrast to what has been observed for anatomical results, the rate of retreatment at 12 months is relatively similar across the European series (6.9%), the US series (5.6%), and the current series (5.9%).\nOne previous series of 17 patients reported what the authors termed ‘Stent-assisted WEB’, and mentioned two circumstances in which stent placement was performed in addition to WEB treatment: (1) WNBA with a branch emerging from the neck, and (2) narrowing of a bifurcation branch.22 This is similar to what was observed in our series, in which the second indication for stenting was by far the most frequent (94.7%). This series reported similar safety (no morbimortality), but slightly poorer efficacy at 12 months, with complete aneurysm occlusion in 69.0%, neck remnant in 12.5%, and aneurysm remnant in 18.5%.\nLimitations This study has some limitations. First, this analysis was conducted retrospectively on a prospectively maintained database. At the beginning of our experience with combining WEB and stent, the impression was that it would be used in a very limited number of cases and that no prospective study was required to evaluate this treatment. Second, the patient cohort is relatively small because indications for combining WEB and stent are relatively limited. Nonetheless, it is important to report the usefulness, safety, and efficacy of this combination.", "This study has some limitations. First, this analysis was conducted retrospectively on a prospectively maintained database. At the beginning of our experience with combining WEB and stent, the impression was that it would be used in a very limited number of cases and that no prospective study was required to evaluate this treatment. Second, the patient cohort is relatively small because indications for combining WEB and stent are relatively limited. Nonetheless, it is important to report the usefulness, safety, and efficacy of this combination.", "Combining WEB with stent is part of the treatment strategy to manage WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality and high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively). This combination should be considered a safe and effective option in the treatment of complex WNBA.
Further studies (including comparative ones) are needed to better define the role of this EVT in the management of intracranial aneurysms." ]
[ "intro", "materials", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", null, "conclusions" ]
[ "aneurysm", "flow diverter", "stent" ]
Introduction: Since publication of the ISAT (International Subarachnoid Aneurysm Trial) results, the endovascular approach has increasingly replaced surgery when managing intracranial aneurysms (IAs).1 In cases where treatment was initially based on coil use (with or without balloon-assistance), treatment complexity of some IAs led to the development of alternative techniques such as flow diversion and flow disruption.2–6 Flow disruption with WEB (MicroVention, Aliso Viejo, CA, USA) is an innovative treatment for wide-neck bifurcation aneurysms (WNBAs) that has been evaluated in several multicenter prospective studies, including two European trials (WEB Clinical Assessment of Intrasaccular Aneurysm Therapy (WEBCAST) and WEBCAST-2), one trial in the United States (WEB Intrasaccular Therapy (WEB-IT)), and one French trial (French Observatory), showing high safety and efficacy in the short and long term.7–14 Additional WEB trials are currently recruiting or under analysis: CLinical Assessment of WEB Device in Ruptured aneurYSms (CLARYS), CLinical EValuation of WEB 0.017 Device in Intracranial AneuRysms (CLEVER), and WEB-IT China (WEB-IT China). The initial clinical experience with WEB showed that WEB sizing was critical to obtain a good apposition of the lateral surface of the WEB against the aneurysm wall and a complete sealing of the aneurysm neck. Recommendations regarding WEB sizing have evolved over time including proposing oversizing (approximately 1 mm) the device in transverse diameter.15 Although this approach is now routine practice, excessive oversizing may lead to WEB protrusion, which is potentially associated with thromboembolic events (TEs). To overcome the protrusion problem, three options have been considered: downsizing the WEB device, use of balloon assistance during or after WEB deployment, or placement of a stent in the parent artery in the event of protrusion with important reduction of parent artery lumen.16 However, downsizing the device may not be a viable option given its potential association with a higher risk of aneurysm recurrence. Given that balloon inflation during WEB deployment only very slightly changes the WEB position in the aneurysm sac, our limited experience with balloon assistance was inconclusive. Additionally, using balloon assistance after WEB placement led to poorer outcomes with no or very limited change in WEB position. After these relatively disappointing attempts with balloon assistance, our department devised a new strategy for WEB aneurysm treatment. The strategy involves oversizing the WEB device in transverse diameter and, in the event of WEB protrusion, placing a stent (adjunctive stenting) in the parent artery. We report and retrospectively analyze our preliminary experience with this strategy in patients treated with WEB and stent in our prospective database of patients treated with WEB. Materials and methods: According to the retrospective study design and the fact that WEB treatment and intracranial stenting were already part of current clinical practice, no institutional review board or ethics committee review was necessary; instead, patients were required to give oral consent for data use. Patients Since 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020. 
Since 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020. Web device The WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively). The WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively). Stents The following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA). The following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA). Therapeutic strategy In our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment. The decision to use stent in addition to WEB placement occurs in two circumstances: Preoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment. A WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization. In our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. 
Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment. The decision to use stent in addition to WEB placement occurs in two circumstances: Preoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment. A WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization. Embolization procedure Procedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed. Post-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months. Procedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed. Post-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months. Follow-up imaging protocol Anatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis. Immediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years. 
Anatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis. Immediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years. Data collection The following data were collected: Patient: age, gender; Aneurysm: location, size (pretreatment); WEB procedure: date, type and size of device used, any complications; Stent type; Procedure images, 24-hour MRI and DSA at 6 and 12 months. The following data were collected: Patient: age, gender; Aneurysm: location, size (pretreatment); WEB procedure: date, type and size of device used, any complications; Stent type; Procedure images, 24-hour MRI and DSA at 6 and 12 months. Data analysis All clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure. Aneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion. All clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure. Aneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion. Statistical analysis Continuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA). Continuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA). Patients: Since 2001, our department has maintained a prospective database of all patients with endovascularly treated aneurysms. We extracted data from all patients from the database who had aneurysms treated with WEB and stent between June 2011 (first WEB patient) and January 2020. Web device: The WEB is a self-expanding, retrievable, electrothermally detachable, nitinol braided device, placed within the aneurysm sac. There have been several WEB device iterations over time. From November 2013, WEB DL (dual layer) was replaced by WEB with a single braid (WEB SL (single layer) and WEB SLS (single layer spherical), each possessing a barrel and a spherical shape, respectively). 
Stents: The following stents were used: Low-Profile Visualized Intraluminal Support Junior (LVIS Jr) Stent (MicroVention, Aliso Viejo, CA, USA), Stent ACCLINO (Acandis, Pforzheim, Germany), and Stent Enterprise (Cerenovus, Miami, FL, USA). Therapeutic strategy: In our center, treatment decisions for ruptured, unruptured, and recanalized aneurysms are made by a local multidisciplinary team that includes neurosurgeons and neuroradiologists. When endovascular treatment is selected as the primary treatment strategy, the Interventional Neuroradiology team selects the endovascular technique. WEB is usually used for treatment of WNBA, but the introduction of WEB 17 permits broader indications, such as pericallosal and sidewall wide neck aneurysms. Ruptured, unruptured, and recanalized aneurysms are treated with WEB if their anatomy is compatible with this treatment. The decision to use stent in addition to WEB placement occurs in two circumstances: Preoperative analysis shows that aneurysm anatomy, namely neck width, is not compatible with placement of the WEB device alone. This occurs when a branch emerges from the aneurysm neck, in which case a microcatheter is placed in the branch to be stented before WEB deployment; the stent is delivered after WEB deployment. A WEB protrusion in the parent artery (or one branch of the bifurcation) is seen after several WEB deployments. In this case, the stenotic branch must be catheterized with a microcatheter and a stent placed in front of the neck and WEB. This strategy was developed after analyzing protrusion situations where WEB size reduction (or WEB undersizing) will inevitably conduct to mid-term or long-term aneurysm recanalization. Embolization procedure: Procedures were performed under general anesthesia on a biplane angiographic system (Axiom Artis, Siemens Biplane and Allura Clarity, Philips Healthcare). Since 2013, all patients treated with WEB (with or without stent) received premedication with double antiplatelet treatment (DAPT). Two protocols were used successively: aspirin and clopidogrel (Sanofi-Aventis, Gentilly, France) for 5 days until April 2015, followed by aspirin and ticagrelor (AstraZeneca, Courbevoie, France) for 2 days. The change of DAPT protocol was prompted by the high rate of clopidogrel resistance. In the event of stent placement, this treatment continued for 3 months; thereafter, clopidogrel or ticagrelor were stopped and aspirin continued for at least 12 months post-procedure date. Antiplatelet activity testing was not performed. Post-procedure MRI including diffusion-weighted imaging (DWI) was performed 24 hours post-procedure. Digital subtraction angiography (DSA) was performed at 6 and 12 months. Follow-up imaging protocol: Anatomical follow-up of aneurysms treated by EVT is usually conducted on a lifetime basis. Immediate postoperative aneurysm occlusion was evaluated on DSA performed at the end of the procedure. At 6 and 12 month follow-ups, both DSA and MRA were performed. Yearly or less frequent (depending on the initial anatomical occlusion and its evolution along the time) follow-up was performed using MRI/MRA. Ten years post-procedure, follow-up imaging was always performed using MRI/MRA every 5 years. Data collection: The following data were collected: Patient: age, gender; Aneurysm: location, size (pretreatment); WEB procedure: date, type and size of device used, any complications; Stent type; Procedure images, 24-hour MRI and DSA at 6 and 12 months. 
Data analysis: All clinical data and collected images were independently evaluated by an interventional neuroradiologist with 5 years’ experience who was not involved in any procedure. Aneurysm occlusion was evaluated at 6 and 12 months on a three-grade scale: complete occlusion, neck remnant, and aneurysm remnant. WEB shape was also evaluated at 6 and 12 months using a three-grade scale: no WEB shape modification, mild WEB shape modification (<50% of decrease in height), or strong WEB shape modification (≥50% in height). Artery status in which the stent was placed was evaluated using a four-grade scale: no stenosis, stenosis 50% or less, stenosis greater than 50%, and occlusion. Statistical analysis: Continuous variables were described as mean±SD and range. Categorical variables were described as counts and percentage. Analyses were performed using Microsoft Office Excel 2010 (Redmond, WA, USA). Results: Patients From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (17) was treated for three aneurysms during three procedures. Characteristics of patients and aneurysms Acom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured. From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (17) was treated for three aneurysms during three procedures. Characteristics of patients and aneurysms Acom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured. Aneurysms Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (N0 1, 3, 5, and 10). The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively. Aneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal artery carotid terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%). Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (N0 1, 3, 5, and 10). The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively. 
Aneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal artery carotid terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%). Procedures Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). Among aneurysms treated with WEB SL, 7 aneurysms were treated with WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with WEB 21 system (No 3, 4 and 6), 7 with WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9 %) with WEB 33 system (No 10). Dual antiplatelet therapy, devices, complications, and anatomical results *0: none; 1: ischemic; 2: hemorrhage. †1: complete occlusion; 2: neck remnant; 3: aneurysm remnant. ‡0: no WEB shape modification; 1: mild WEB shape modification (<50% of decrease in height); 2: strong WEB shape modification (≥50% in height). NA, not available. LVIS JR was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1). Patient17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. (B) and (C) DSA, working view, before and after web device deployment. After web deployment web protrusion is visible (white arrowhead). (D) and (E) subtracted and unsubtracted working view. DSA subtracted and unsubtracted working view at the end of procedure show the web and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of procedure and at 6 months, respectively, show the web and stent with complete aneurysm occlusion. Stent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it. In 18/19 (94.7%) aneurysms, stent placement was determined during procedure due to WEB protrusion. Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). Among aneurysms treated with WEB SL, 7 aneurysms were treated with WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with WEB 21 system (No 3, 4 and 6), 7 with WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9 %) with WEB 33 system (No 10). Dual antiplatelet therapy, devices, complications, and anatomical results *0: none; 1: ischemic; 2: hemorrhage. †1: complete occlusion; 2: neck remnant; 3: aneurysm remnant. ‡0: no WEB shape modification; 1: mild WEB shape modification (<50% of decrease in height); 2: strong WEB shape modification (≥50% in height). NA, not available. LVIS JR was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1). Patient17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. 
(B) and (C) DSA, working view, before and after web device deployment. After web deployment web protrusion is visible (white arrowhead). (D) and (E) subtracted and unsubtracted working view. DSA subtracted and unsubtracted working view at the end of procedure show the web and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of procedure and at 6 months, respectively, show the web and stent with complete aneurysm occlusion. Stent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it. In 18/19 (94.7%) aneurysms, stent placement was determined during procedure due to WEB protrusion. Complications There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had transient ischemic attack (TIA) that occurred on day 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications including one (No 17) who had three procedures for three different aneurysms with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhage (SAH) depicted on 24-hour MRI occurring after MCA aneurysm treatment. The patient also had a cerebellar hematoma depicted on 24-hour MRI post-BA aneurysm treatment. These three events were not associated with any clinical deterioration. Finally, patient No 8 (treated for MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI. Finally, all patients were mRS 0 1 month post-complication, resulting in a morbimortality rate of 0.0%. There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had transient ischemic attack (TIA) that occurred on day 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications including one (No 17) who had three procedures for three different aneurysms with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhage (SAH) depicted on 24-hour MRI occurring after MCA aneurysm treatment. The patient also had a cerebellar hematoma depicted on 24-hour MRI post-BA aneurysm treatment. These three events were not associated with any clinical deterioration. Finally, patient No 8 (treated for MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI. Finally, all patients were mRS 0 1 month post-complication, resulting in a morbimortality rate of 0.0%. Anatomical results at 6 months DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). 
Patient No 3 with aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown. WEB shape was unchanged in 13/17 (76.5%) aneurysms. There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%). DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). Patient No 3 with aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown. WEB shape was unchanged in 13/17 (76.5%) aneurysms. There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%). Anatomical results at 12 months DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%). No intra-stent stenosis or occlusion was depicted at 12 months. WEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%). DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%). No intra-stent stenosis or occlusion was depicted at 12 months. WEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%). Retreatment One patient (No 3) of 17 (5.9%) was retreated the year following the initial WEB and stent procedure. Additional coils were placed through the stent. One patient (No 3) of 17 (5.9%) was retreated the year following the initial WEB and stent procedure. Additional coils were placed through the stent. Patients: From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Among them, 17/152 (11.2%) patients with 19/157 (12.1%) aneurysms were treated with WEB device and stent (table 1). Thirteen of 17 patients (76.5%) were female and mean age was 53±8 years (range 40–63 years). Sixteen patients were treated with WEB and stent for one aneurysm, while one patient (17) was treated for three aneurysms during three procedures. Characteristics of patients and aneurysms Acom, anterior communicating artery; BA, basilar artery; ICAt, internal carotid artery terminus; MCA, middle cerebral artery; Recan, recanalized; UnR, unruptured. Aneurysms: Of 19 aneurysms (table 1), 15 (78.9%) were unruptured while four (21.0%) were recanalized aneurysms (N0 1, 3, 5, and 10). 
The four recanalized aneurysms were previously ruptured, treated at the acute phase of bleeding by coils in three patients (No 1, 3, and 10) and WEB in one patient (No 5). Patients were retreated with WEB and stent for aneurysm recanalization 4, 20, 22, and 3 years after initial treatment, respectively. Aneurysm location was middle cerebral artery (MCA) in 11/19 (57.9%), basilar artery (BA) in 5/19 (26.3%), internal artery carotid terminus (ICAt) in 1/19 (5.2%), and anterior communicating artery (Acom) in 2/19 (10.5%). Aneurysms had a maximum width between 3.0 and 9.6 mm (mean 6.1±1.9 mm) and a height between 2.6 and 12.1 mm (mean 5.2±2.7 mm). The neck was narrow (<4 mm) in 9/19 (47.4%) and wide in 10/19 aneurysms (52.6%). Procedures: Pretreatment DAPT was aspirin and clopidogrel in 5/17 (29.4%) patients and aspirin and ticagrelor in 12/17 (70.6%) patients (table 2). All aneurysms were successfully treated with WEB and stent (100%). WEB SL was used in 18/19 aneurysms and WEB DL (deployed through a Via 27) in 1/19 (No 1). Among aneurysms treated with WEB SL, 7 aneurysms were treated with WEB 17 system (No 11, 12, 13, 14, 15, 16, and 17 (third aneurysm)), 3 with WEB 21 system (No 3, 4 and 6), 7 with WEB 27 system (No 2, 5, 7, 8, 9, and 17 (first and second aneurysms)), and 1 (5.9 %) with WEB 33 system (No 10). Dual antiplatelet therapy, devices, complications, and anatomical results *0: none; 1: ischemic; 2: hemorrhage. †1: complete occlusion; 2: neck remnant; 3: aneurysm remnant. ‡0: no WEB shape modification; 1: mild WEB shape modification (<50% of decrease in height); 2: strong WEB shape modification (≥50% in height). NA, not available. LVIS JR was used in 16/19 (84.2%), Enterprise in 2/19 (10.5%), and Acclino in 1/19 (5.2%) (figure 1). Patient17: unruptured left middle cerebral artery (MCA) aneurysm. (A) Three-dimensional digital subtraction angiography (3D-DSA) shows the aneurysm. (B) and (C) DSA, working view, before and after web device deployment. After web deployment web protrusion is visible (white arrowhead). (D) and (E) subtracted and unsubtracted working view. DSA subtracted and unsubtracted working view at the end of procedure show the web and the stent deployed. (F) Subarachnoid hemorrhage depicted by 24-hour MRI (T2*) (black arrowhead). (G) and (H) MIP 3D-DSA at the end of procedure and at 6 months, respectively, show the web and stent with complete aneurysm occlusion. Stent placement was planned pre-procedure in 1/19 (5.2%) aneurysms due to a very wide neck with a branch emerging from it. In 18/19 (94.7%) aneurysms, stent placement was determined during procedure due to WEB protrusion. Complications: There were seven minor complications in 5/17 patients (29.4%) and 7/19 aneurysms/procedures (36.8%). Three patients (No 5, 7, and 16) had transient ischemic attack (TIA) that occurred on day 1, 10, and 2 post-procedure, respectively. In all cases, control MRI performed after TIA showed multiple spots on DWI images without territorial infarct. Two patients (No 8 and 17) had four hemorrhagic complications including one (No 17) who had three procedures for three different aneurysms with one hemorrhagic complication at each of them. Among them, two were subarachnoid hemorrhage (SAH) depicted on 24-hour MRI occurring after MCA aneurysm treatment. The patient also had a cerebellar hematoma depicted on 24-hour MRI post-BA aneurysm treatment. These three events were not associated with any clinical deterioration. 
Finally, patient No 8 (treated for an MCA aneurysm) had two hematomas in the Sylvian fissure, distal to the aneurysm, likely due to distal artery perforation during stent placement, indicated by postoperative headache and confirmed by 24-hour MRI. All patients were mRS 0 at 1 month after the complication, resulting in a morbimortality rate of 0.0%. Anatomical results at 6 months: DSA at 6 months was obtained in 15/17 (88.2%) patients and 17/19 (89.5%) aneurysms. Patient No 7 was evaluated only with MRI due to a vascular access problem. Patient No 12 did not attend his appointment at 6 months due to concurrent unrelated disease. Complete aneurysm occlusion was achieved in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). Patient No 3 with aneurysm remnant was retreated immediately after the 6-month control DSA (see below). No intra-stent stenosis or occlusion was shown. WEB shape was unchanged in 13/17 (76.5%) aneurysms. There was mild modification in 2/17 (11.8%) aneurysms and strong modification in 2/17 (11.8%). Anatomical results at 12 months: DSA at 12 months was obtained in 14/17 (82.3%) patients and 14/19 aneurysms (73.7%). Patient No 7 did not have a DSA control at 12 months (see above). Patient No 3, who was retreated 6 months post-WEB and stent procedure, was not included in this analysis. Patient No 17 (three aneurysms) refused follow-up DSA at 12 months. Complete aneurysm occlusion was achieved in 13/14 aneurysms (92.9%) and neck remnant in 1/14 aneurysms (7.1%). No intra-stent stenosis or occlusion was depicted at 12 months. WEB shape was unchanged in 10/14 (71.4%) aneurysms. There was mild modification in 3/14 (21.4%) aneurysms and strong modification in 1/14 (7.1%). Retreatment: One patient (No 3) of 17 (5.9%) was retreated the year following the initial WEB and stent procedure. Additional coils were placed through the stent. Discussion: This single-center retrospective study shows that additional stenting was used in a relatively low percentage of cases (11.2%). Stent placement was always performed after WEB deployment and was successful in all cases (100.0%). Complications (most asymptomatic) were observed in 29.4% of cases, but morbimortality at 1 month was 0.0%. Aneurysm treatment with WEB and stent was associated with excellent anatomical results: complete aneurysm occlusion in 92.9% and neck remnant in 7.1% at 12 months. The retreatment rate at 12 months was limited to 5.9%. Since 2010, WEB treatment has primarily been used in Europe for WNBA management.17 Prospective, multicenter studies conducted in Europe and the USA have shown high safety and great efficacy at 12 months, 2 years, and 3 years.6 8 18 During the learning curve with this innovative device, it became apparent that appropriate WEB sizing is critical to obtain a stable and complete aneurysm occlusion. Achieving good apposition of the device against the aneurysm wall to properly block the neck required oversizing the WEB device by approximately 1 mm relative to the aneurysm mean transverse diameter.15 However, oversizing may lead to WEB protrusion and potential stenosis of one branch of the bifurcation. This is particularly true for complex WNBAs: aneurysms not centered on the bifurcation, irregularly shaped aneurysms, and aneurysms with strong angulation relative to the parent artery. In these situations, proper deployment of the WEB in the aneurysm is sometimes challenging.
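As an aside on the sizing rule mentioned above (WEB width oversized by roughly 1 mm over the aneurysm mean transverse diameter), the arithmetic can be written as a one-line helper. The sketch below is illustrative only: it is not the authors' sizing method, the function name and example diameters are hypothetical, and matching to commercially available WEB widths and heights is ignored.

```python
# Minimal sketch of the ~1 mm oversizing rule discussed above (illustrative only;
# not the authors' sizing method, and device height selection is ignored).

def suggested_web_width(diam_a_mm: float, diam_b_mm: float, oversize_mm: float = 1.0) -> float:
    """Candidate WEB width from two orthogonal transverse aneurysm diameters."""
    mean_transverse = (diam_a_mm + diam_b_mm) / 2.0
    return mean_transverse + oversize_mm

# Hypothetical example: transverse diameters of 5.4 mm and 6.0 mm
print(f"target WEB width ~ {suggested_web_width(5.4, 6.0):.1f} mm")
```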
To overcome this problem, Mihalea and colleagues have proposed combining WEB and remodeling techniques.16 The balloon is inflated before WEB deployment in order to slightly change WEB position in the aneurysm sac to avoid or reduce WEB protrusion. As our experience with this technique was not very effective, we decided to evaluate another treatment strategy. This strategy, which we used in 18/19 aneurysms (94.7%), consisted of routinely oversizing the WEB and, if there was a WEB protrusion in one branch, placing a stent. In a limited number of aneurysms (1/19, 5.2%), placement was deemed mandatory at the beginning of the procedure. In this case, the microcatheter for stent deployment was placed before WEB placement, followed by WEB deployment and finally, stent deployment. Adjunctive stenting was used with all WEB systems (17, 21, 27, and 33) and no specific problems were encountered with any system. Given the number of patients/aneurysms, it is unfeasible to analyze whether the progressive evolution of the WEB system (including evolution of delivering microcatheter size) has significantly influenced the need for adjunctive stenting. Both braided and laser-cut stents were used in combination with WEB, and no specific technical difficulty was encountered with any type. In this series, we reported detailed data on complications, including those not associated with clinical deterioration and depicted only on postoperative MRI, an imaging protocol that has yet to be fully integrated into routine clinical practice. The detailed reporting of complications may partially explain why the reported rate of complications is relatively high (29.4%) and slightly higher than that reported with stent-assisted coiling (21.2%).19 Even if the complication rate was relatively high, most complications were asymptomatic and no morbidity or mortality was observed at 1 month. Compared with what was reported in European and US WEB studies (no mortality, morbidity 3.0% and 0.7%, respectively), the safety of the WEB and stent treatment is unlikely to be significantly different from WEB alone. Knowing that stent-assisted coiling is associated with a higher rate of morbimortality compared with coiling alone, this result is important.20 Very good efficacy results were observed in the present study with complete occlusion in 88.2% at 6 months and 92.9% at 12 months, and adequate occlusion in 94.1% at 6 months and 100.0% at 12 months. Furthermore, these efficacy results are superior to those reported in prospective, multicenter studies conducted in Europe and the USA. In the entire population of three European studies (WEBCAST, WEBCAST 2, and French Observatory), aneurysm occlusion at 12 months was complete occlusion in 52.9%, neck remnant in 26.1%, and aneurysm remnant in 20.9%.7–9 Similar results were reported in the US study: complete occlusion in 53.8%, neck remnant in 30.8%, and aneurysm remnant in 15.4%.11 However, a recent study analyzing factors affecting aneurysm occlusion after WEB treatment did not show a positive association between stenting and the rate of adequate occlusion.21 In contrast to what has been observed for anatomical results, rate of retreatment at 12 months is relatively similar across the European series (6.9%), US series (5.6%), and the current series (5.9%). 
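For clarity, the "adequate occlusion" figures quoted in the discussion are simply the sum of the complete occlusion and neck remnant rates (aneurysm remnants excluded). The minimal check below reproduces the 94.1% (6 months) and 100.0% (12 months) values from the reported percentages; the helper name is ours, not the authors'.

```python
# Adequate occlusion = complete occlusion + neck remnant (aneurysm remnants excluded).
def adequate_occlusion(complete_pct: float, neck_remnant_pct: float) -> float:
    return complete_pct + neck_remnant_pct

print(f"{adequate_occlusion(88.2, 5.9):.1f}%")   # 6-month follow-up  -> 94.1%
print(f"{adequate_occlusion(92.9, 7.1):.1f}%")   # 12-month follow-up -> 100.0%
```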
One previous series of 17 patients reported what the authors termed ‘Stent-assisted WEB’, and mentioned two circumstances in which stent placement was performed in addition to WEB treatment: (1) WNBA with a branch emerging from the neck, and (2) narrowing of a bifurcation branch.22 This is similar to what was observed in our series, in which the second indication for stent was by far the most frequent (94.7%). This series reported similar safety (no morbimortality), but slightly poorer efficacy at 12 months, with complete aneurysm occlusion in 69.0%, neck remnant in 12.5%, and aneurysm remnant in 18.5%. Limitations This study has some limitations. First this analysis was retrospectively conducted in a prospectively maintained database. At the beginning of our experience of combining WEB and stent, the impression was that it would be used in a very limited number of cases and that no prospective study was required to evaluate this treatment. Second, the patient cohort is relatively small because indications for combining WEB and stent are relatively limited. Nonetheless, it is important to report the usefulness, safety, and efficacy of this combination. Limitations: This study has some limitations. First this analysis was retrospectively conducted in a prospectively maintained database. At the beginning of our experience of combining WEB and stent, the impression was that it would be used in a very limited number of cases and that no prospective study was required to evaluate this treatment. Second, the patient cohort is relatively small because indications for combining WEB and stent are relatively limited. Nonetheless, it is important to report the usefulness, safety, and efficacy of this combination. Conclusion: Combining WEB with stent is part of the treatment strategy to manage WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality and high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively). This combination should be considered as a safe and effective option in the treatment of complex WNBA. Further studies (including comparative ones) are needed to further analyze the place of this EVT when managing intracranial aneurysms.
Background: Intrasaccular flow disruption with WEB is a safe and efficacious technique that has significantly changed endovascular management of wide-neck bifurcation aneurysms (WNBAs). Use of stent in combination with WEB is occasionally required. We analyzed the frequency of use, indications, safety, and efficacy of the WEB-stent combination. Methods: All aneurysms treated with WEB and stent were extracted from a prospectively maintained database. Patient and aneurysm characteristics, complications, and anatomical results were independently analyzed by a physician independent of the procedures. Results: From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Of these, 17/152 patients (11.2%) with 19/157 aneurysms (12.1%) were treated with WEB device and stent. Indications were very wide neck with a branch emerging from the neck in 1/19 (5.2%) aneurysms and WEB protrusion in 18/19 (94.7%). At 1 month, no morbimortality was reported. At 6 months, anatomical results were complete aneurysm occlusion in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). At 12 months, there was complete aneurysm occlusion in 13/14 aneurysms (92.9%) and neck remnant in 1/14 (7.1%). Conclusions: Combining WEB and stent is a therapeutic strategy to manage WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality with a high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively).
Introduction: Since publication of the ISAT (International Subarachnoid Aneurysm Trial) results, the endovascular approach has increasingly replaced surgery when managing intracranial aneurysms (IAs).1 In cases where treatment was initially based on coil use (with or without balloon-assistance), treatment complexity of some IAs led to the development of alternative techniques such as flow diversion and flow disruption.2–6 Flow disruption with WEB (MicroVention, Aliso Viejo, CA, USA) is an innovative treatment for wide-neck bifurcation aneurysms (WNBAs) that has been evaluated in several multicenter prospective studies, including two European trials (WEB Clinical Assessment of Intrasaccular Aneurysm Therapy (WEBCAST) and WEBCAST-2), one trial in the United States (WEB Intrasaccular Therapy (WEB-IT)), and one French trial (French Observatory), showing high safety and efficacy in the short and long term.7–14 Additional WEB trials are currently recruiting or under analysis: CLinical Assessment of WEB Device in Ruptured aneurYSms (CLARYS), CLinical EValuation of WEB 0.017 Device in Intracranial AneuRysms (CLEVER), and WEB-IT China (WEB-IT China). The initial clinical experience with WEB showed that WEB sizing was critical to obtain a good apposition of the lateral surface of the WEB against the aneurysm wall and a complete sealing of the aneurysm neck. Recommendations regarding WEB sizing have evolved over time including proposing oversizing (approximately 1 mm) the device in transverse diameter.15 Although this approach is now routine practice, excessive oversizing may lead to WEB protrusion, which is potentially associated with thromboembolic events (TEs). To overcome the protrusion problem, three options have been considered: downsizing the WEB device, use of balloon assistance during or after WEB deployment, or placement of a stent in the parent artery in the event of protrusion with important reduction of parent artery lumen.16 However, downsizing the device may not be a viable option given its potential association with a higher risk of aneurysm recurrence. Given that balloon inflation during WEB deployment only very slightly changes the WEB position in the aneurysm sac, our limited experience with balloon assistance was inconclusive. Additionally, using balloon assistance after WEB placement led to poorer outcomes with no or very limited change in WEB position. After these relatively disappointing attempts with balloon assistance, our department devised a new strategy for WEB aneurysm treatment. The strategy involves oversizing the WEB device in transverse diameter and, in the event of WEB protrusion, placing a stent (adjunctive stenting) in the parent artery. We report and retrospectively analyze our preliminary experience with this strategy in patients treated with WEB and stent in our prospective database of patients treated with WEB. Conclusion: Combining WEB with stent is part of the treatment strategy to manage WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality and high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively). This combination should be considered as a safe and effective option in the treatment of complex WNBA. Further studies (including comparative ones) are needed to further analyze the place of this EVT when managing intracranial aneurysms.
Background: Intrasaccular flow disruption with WEB is a safe and efficacious technique that has significantly changed endovascular management of wide-neck bifurcation aneurysms (WNBAs). Use of stent in combination with WEB is occasionally required. We analyzed the frequency of use, indications, safety, and efficacy of the WEB-stent combination. Methods: All aneurysms treated with WEB and stent were extracted from a prospectively maintained database. Patient and aneurysm characteristics, complications, and anatomical results were independently analyzed by a physician independent of the procedures. Results: From June 2011 to January 2020, 152 patients with 157 aneurysms were treated with WEB. Of these, 17/152 patients (11.2%) with 19/157 aneurysms (12.1%) were treated with WEB device and stent. Indications were very wide neck with a branch emerging from the neck in 1/19 (5.2%) aneurysms and WEB protrusion in 18/19 (94.7%). At 1 month, no morbimortality was reported. At 6 months, anatomical results were complete aneurysm occlusion in 15/17 aneurysms (88.2%), neck remnant in 1/17 (5.9%), and aneurysm remnant in 1/17 (5.9%). At 12 months, there was complete aneurysm occlusion in 13/14 aneurysms (92.9%) and neck remnant in 1/14 (7.1%). Conclusions: Combining WEB and stent is a therapeutic strategy to manage WNBA. In our series, this combination was used in 11.2% of patients treated with WEB, resulting in no morbidity or mortality with a high efficacy at 6 and 12 months (complete aneurysm occlusion in 88.2% and 92.9%, respectively).
9,073
310
[ 47, 79, 52, 249, 184, 100, 59, 137, 34, 133, 216, 472, 232, 150, 148, 32, 94 ]
22
[ "web", "aneurysms", "stent", "aneurysm", "17", "patients", "19", "months", "12", "procedure" ]
[ "intracranial aneurysms ias", "strategy web aneurysm", "aneurysm therapy webcast", "device intracranial aneurysms", "aneurysms treated web" ]
null
[CONTENT] aneurysm | flow diverter | stent [SUMMARY]
null
[CONTENT] aneurysm | flow diverter | stent [SUMMARY]
[CONTENT] aneurysm | flow diverter | stent [SUMMARY]
[CONTENT] aneurysm | flow diverter | stent [SUMMARY]
[CONTENT] aneurysm | flow diverter | stent [SUMMARY]
[CONTENT] Embolization, Therapeutic | Endovascular Procedures | Humans | Intracranial Aneurysm | Retrospective Studies | Stents | Treatment Outcome [SUMMARY]
null
[CONTENT] Embolization, Therapeutic | Endovascular Procedures | Humans | Intracranial Aneurysm | Retrospective Studies | Stents | Treatment Outcome [SUMMARY]
[CONTENT] Embolization, Therapeutic | Endovascular Procedures | Humans | Intracranial Aneurysm | Retrospective Studies | Stents | Treatment Outcome [SUMMARY]
[CONTENT] Embolization, Therapeutic | Endovascular Procedures | Humans | Intracranial Aneurysm | Retrospective Studies | Stents | Treatment Outcome [SUMMARY]
[CONTENT] Embolization, Therapeutic | Endovascular Procedures | Humans | Intracranial Aneurysm | Retrospective Studies | Stents | Treatment Outcome [SUMMARY]
[CONTENT] intracranial aneurysms ias | strategy web aneurysm | aneurysm therapy webcast | device intracranial aneurysms | aneurysms treated web [SUMMARY]
null
[CONTENT] intracranial aneurysms ias | strategy web aneurysm | aneurysm therapy webcast | device intracranial aneurysms | aneurysms treated web [SUMMARY]
[CONTENT] intracranial aneurysms ias | strategy web aneurysm | aneurysm therapy webcast | device intracranial aneurysms | aneurysms treated web [SUMMARY]
[CONTENT] intracranial aneurysms ias | strategy web aneurysm | aneurysm therapy webcast | device intracranial aneurysms | aneurysms treated web [SUMMARY]
[CONTENT] intracranial aneurysms ias | strategy web aneurysm | aneurysm therapy webcast | device intracranial aneurysms | aneurysms treated web [SUMMARY]
[CONTENT] web | aneurysms | stent | aneurysm | 17 | patients | 19 | months | 12 | procedure [SUMMARY]
null
[CONTENT] web | aneurysms | stent | aneurysm | 17 | patients | 19 | months | 12 | procedure [SUMMARY]
[CONTENT] web | aneurysms | stent | aneurysm | 17 | patients | 19 | months | 12 | procedure [SUMMARY]
[CONTENT] web | aneurysms | stent | aneurysm | 17 | patients | 19 | months | 12 | procedure [SUMMARY]
[CONTENT] web | aneurysms | stent | aneurysm | 17 | patients | 19 | months | 12 | procedure [SUMMARY]
[CONTENT] web | balloon | balloon assistance | assistance | device | trial | flow | aneurysm | oversizing | protrusion [SUMMARY]
null
[CONTENT] aneurysms | 17 | 19 | web | patients | aneurysm | 14 | 10 | dsa | artery [SUMMARY]
[CONTENT] combination | wnba | treatment | complete aneurysm occlusion 88 | comparative ones needed | comparative ones needed analyze | mortality high efficacy | mortality high | option treatment complex wnba | option treatment complex [SUMMARY]
[CONTENT] web | aneurysms | stent | 17 | aneurysm | patients | months | 19 | procedure | 12 [SUMMARY]
[CONTENT] web | aneurysms | stent | 17 | aneurysm | patients | months | 19 | procedure | 12 [SUMMARY]
[CONTENT] WEB ||| ||| [SUMMARY]
null
[CONTENT] June 2011 to January 2020 | 152 | 157 ||| 17/152 | 11.2% | 19/157 | 12.1% ||| 1/19 | 5.2% | 18/19 | 94.7% ||| 1 month ||| 6 months | 15/17 | 88.2% | 1/17 | 5.9% | 1/17 | 5.9% ||| 12 months | 13/14 | 92.9% | 1/14 | 7.1% [SUMMARY]
[CONTENT] WNBA ||| 11.2% | 6 and 12 months | 88.2% | 92.9% [SUMMARY]
[CONTENT] WEB ||| ||| ||| WEB ||| ||| June 2011 to January 2020 | 152 | 157 ||| 17/152 | 11.2% | 19/157 | 12.1% ||| 1/19 | 5.2% | 18/19 | 94.7% ||| 1 month ||| 6 months | 15/17 | 88.2% | 1/17 | 5.9% | 1/17 | 5.9% ||| 12 months | 13/14 | 92.9% | 1/14 | 7.1% ||| WNBA ||| 11.2% | 6 and 12 months | 88.2% | 92.9% [SUMMARY]
[CONTENT] WEB ||| ||| ||| WEB ||| ||| June 2011 to January 2020 | 152 | 157 ||| 17/152 | 11.2% | 19/157 | 12.1% ||| 1/19 | 5.2% | 18/19 | 94.7% ||| 1 month ||| 6 months | 15/17 | 88.2% | 1/17 | 5.9% | 1/17 | 5.9% ||| 12 months | 13/14 | 92.9% | 1/14 | 7.1% ||| WNBA ||| 11.2% | 6 and 12 months | 88.2% | 92.9% [SUMMARY]
Effect of Radix Stemonae concentrated decoction on the lung tissue pathology and inflammatory mediators in COPD rats.
27832794
Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease. At present, western medicine treatment of COPD mainly focuses on symptomatic treatment. Using Chinese medicines or integrated Chinese and Western medicines to treat stable COPD has significant efficacy. In this study, we aimed to observe the effect of Radix Stemonae concentrated decoction on the lung tissue pathology and inflammatory mediators in COPD rats and explore its possible mechanism.
BACKGROUND
SD rats were randomized into a blank group, a COPD model group and a Radix Stemonae group, with 10 rats in each group. Rats were fed for 112 days. Before the rats were sacrificed, lung function was tested. The right lower lung was fixed for morphologic observation. Inflammatory mediators in serum were determined using an enzyme-linked immunosorbent assay (ELISA).
METHODS
Body weight of animals in the model group was significantly decreased compared with the blank group (P < 0.05). After gavage therapy with Radix Stemonae, body weight was significantly increased (P < 0.05). Compared with the blank group, pulmonary function of rats in the model group was significantly impaired (P < 0.05), whereas in the Radix Stemonae group these indicators were significantly better than in the model group (P < 0.05). Regarding pathological changes in the lungs, airway inflammation was aggravated in the model group, while inflammation and emphysema were much milder in the Radix Stemonae group. The concentrations of TNF-α, IL-8 and LTB4 in both the model group and the Radix Stemonae group were increased significantly compared with the blank group (P < 0.05), but the levels in the Radix Stemonae group were significantly lower than in the model group (P < 0.05).
RESULTS
Radix Stemonae concentrated decoction may mitigate airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators.
CONCLUSION
[ "Animals", "Drugs, Chinese Herbal", "Humans", "Interleukin-8", "Lung", "Male", "Pulmonary Disease, Chronic Obstructive", "Rats", "Rats, Sprague-Dawley", "Tumor Necrosis Factor-alpha" ]
5105246
Background
Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease, which is characterized by persistent airflow limitation [1]. It is often complicated by exacerbations and hospitalizations [2], which increase mortality and reduce life expectancy [3]. At present, Western medicine treatment of COPD, especially treatment of the stable stage, mainly focuses on symptomatic treatment [4], and drug treatment mostly focuses on improving symptoms and/or reducing complications [5]. Using Chinese medicines or integrated Chinese and Western medicines to treat stable COPD has significant efficacy [6], improving pulmonary function and increasing diaphragm muscular tension [7]. A large amount of high-quality, high-level evidence has shown that its efficacy in improving symptoms, reducing exacerbations, and improving exercise capacity and quality of life is better than treatment with Western medicines alone [8–10]. Stemonae Radix is a traditional Chinese medicine (TCM) used as an antitussive and insecticidal remedy, which is derived from Stemona tuberosa Lour, S. japonica and S. sessilifolia [11]. It has been widely used for the treatment of respiratory diseases in China for thousands of years. Earlier studies have shown that Stemonae Radix can relieve cough [12], remove phlegm [13] and has anthelmintic, antibacterial, antituberculous and antifungal activity [14, 15]. Alkaloids are the major effective ingredients in Radix Stemonae and include stemoninine, stemoninoamide, bisdehydrostemoninine, neotuberostemonine, neostenine, tuberostemonine and tuberostemonine H. Liao et al. [16] found, in an in vitro experiment on guinea pigs, that the extract of Stemona radix had a relaxant effect on airway smooth muscle, and that this effect is mediated by interaction with muscarinic receptors and dihydropyridine binding sites. In addition, a pharmacokinetic study of multiple components absorbed into rat plasma after oral administration of Stemonae radix showed that croomine and tuberostemonine could be potential bioactive components for the treatment of chronic or acute cough in a rat model [17, 18]. In a previous study by our group, Radix Stemonae lung-nourishing fried paste prepared by our hospital (Hu Yao Zhi Zi Hu 05050323) was used as an intervention in patients with stable COPD [19]. The results showed that Radix Stemonae lung-nourishing fried paste could mitigate the clinical symptoms of COPD patients and improve their quality of life, and that it is safe and reliable [19]. However, its mechanism of action in treating COPD was unknown. In this study, we established a COPD model in SD rats by exposure to cigarette smoke combined with intratracheal instillation of LPS, according to the method of Yiping Song et al. [20]. Radix Stemonae concentrated decoction was used for gavage therapy. By observing the effect of Radix Stemonae concentrated decoction on lung tissue pathology and inflammatory mediators in COPD rats, we aimed to explore its possible mechanism of action.
null
null
Results
Comparison of general conditions
In the blank group, the rats had smooth and glossy fur, bright eyes, stable breathing, normal food and water intake, good nutrition, healthy bodies, full muscles, flexible movements and swift responses. In the COPD model group, the rats became manic and upset in the early stage of exposure to cigarette smoke, and successively experienced sneezing, cough, shortness of breath and other symptoms; in the late stage, the rats showed dry and yellow fur. Breathing was fast and shallow in some rats and slow and deep in others. All rats had decreased activity, slow movements, low spirits, and reduced food and water intake, and some had a small amount of sticky secretions at the mouth and nose. In the Radix Stemonae group, the appearance of the rats in the early stage of exposure to cigarette smoke was the same as in the COPD model group: the rats became manic and upset, and successively experienced sneezing, cough, shortness of breath and other symptoms. After gavage with Radix Stemonae, cough, shortness of breath and the other symptoms were gradually mitigated, and mental state, behavioral activity and food and water intake gradually returned to normal, although the fur remained dry and yellow.

Body weight of rats
Body weight was measured on day 1 and in weeks 4, 8 and 16. Compared with the blank group, body weights in the COPD model group were decreased significantly after 4 weeks of exposure to cigarette smoke (P < 0.05). Compared with the model group, body weights were increased significantly after 4 weeks of gavage therapy with Radix Stemonae (i.e. in week 8) (P < 0.05), as shown in Table 1.

Table 1 Comparison of rat weights in the three groups (g)
Group | Day 1 | Week 4 | Week 8 | Week 16
Blank | 206.70 ± 4.19 | 267.40 ± 3.89 | 326.10 ± 5.30# | 393.70 ± 3.95#
Model | 207.50 ± 2.78 | 248.63 ± 3.96* | 309.38 ± 5.63* | 376.75 ± 5.65*
Radix Stemonae | 208.20 ± 2.30 | 249.80 ± 4.34* | 318.80 ± 5.67*# | 387.00 ± 7.04*#
One-way ANOVA: *P < 0.05 versus the blank group; #P < 0.05 versus the model group.

Pulmonary function test
The comparison of lung function in the three groups is presented in Table 2. Compared with the blank group, in the COPD model group inspiratory resistance (Ri) was increased significantly (P < 0.05), lung compliance (Cldyn) was decreased significantly (P < 0.05), and the 0.2 s forced expiratory volume (FEV0.2)/forced vital capacity (FVC) ratio and expiratory peak flow (PEF) were decreased significantly (P < 0.05). Compared with the model group, in the Radix Stemonae group airway Ri was decreased significantly (P < 0.05), Cldyn was increased significantly (P < 0.05), and FEV0.2/FVC (%) and PEF were increased significantly (P < 0.05).

Table 2 Comparison of tested values of lung function in rats
Group | Number | FEV0.2/FVC (%) | PEF (mL/s) | Ri (cmH2O·s/L) | Cldyn (mL/cmH2O)
Blank | 10 | 82.41 ± 3.68# | 75.10 ± 2.78# | 0.32 ± 0.02# | 0.54 ± 0.05#
Model | 10 | 64.33 ± 6.66* | 55.38 ± 4.15* | 0.47 ± 0.04* | 0.35 ± 0.03*
Radix Stemonae | 10 | 72.22 ± 1.16*# | 66.87 ± 3.42*# | 0.38 ± 0.01*# | 0.47 ± 0.03*#
*P < 0.05 versus the blank group; #P < 0.05 versus the model group.

Pathological changes in rats
Pathological changes in the lungs are shown in Fig. 1 (photographs of HE-stained lung tissue under an optical microscope, ×100: (a) blank group, (b) COPD model group, (c) Radix Stemonae group). In the blank group, the shape and structure of the bronchi, pulmonary alveoli and alveolar septa were all normal (Fig. 1a). The structure of the bronchial mucosal epithelium was intact, the ciliated columnar epithelium was regularly arranged, and a few goblet cells could be seen. There were few mucosal and lamina propria cells, and no microglandular hyperplasia was observed. The size of the pulmonary alveoli was normal, the structure was intact, and local infiltration by a small number of lymphocyte-dominated inflammatory cells could be seen. There was no hyperaemia or edema in the alveolar septa, no significant thickening of the arterial wall, and no expansion of the alveolar cavities. In the COPD model group, the overall structure of the lung tissue was basically normal (Fig. 1b), but the tracheal and bronchial mucosal epithelium showed swelling, disorder, detachment and loss. The cilia were lodged, and increased goblet cells and gland hypertrophy were observed. Mucous plugs and a large number of inflammatory cells could be seen inside the small bronchial lumina and gland ducts. Infiltration by peripheral lymphocytes and plasma cells and smooth muscle hypertrophy were seen, and the terminal bronchiolar mucosal epithelium was irregularly arranged. The alveolar walls became thinner and ruptured, the alveoli were enlarged, and enlargement and mutual fusion of multiple alveolar cavities and emphysema were observed. In the Radix Stemonae group, the overall structure of the lung tissue was basically normal (Fig. 1c). Swelling, disorder and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were milder than in the model group. Inflammatory cells could be seen inside the small bronchial lumina and gland ducts, and infiltration by peripheral lymphocytes and plasma cells was observed, but these changes were alleviated to different degrees compared with the model group. Hyperplasia and hypertrophy of smooth muscle were seen; alveolar wall thinning, alveolar enlargement and fusion of some alveolar cavities were all milder than in the model group.

Inflammatory mediators in serum and BALF
Compared with the blank control group, the concentrations of TNF-α, IL-8 and LTB4 in serum and BALF in the COPD model group and the Radix Stemonae group were increased significantly (P < 0.05). Compared with the model group, the levels of inflammatory mediators in serum and BALF in the Radix Stemonae group were all decreased significantly (P < 0.05), as shown in Tables 3 and 4.

Table 3 Serum inflammatory mediators in rats
Group | Number | TNF-α (ng/L) | IL-8 (ng/L) | LTB4 (ng/L)
Blank | 10 | 33.88 ± 4.90# | 142.60 ± 19.75# | 502.36 ± 55.62#
Model | 10 | 200.43 ± 22.71* | 688.78 ± 49.07* | 910.94 ± 65.14*
Radix Stemonae | 10 | 78.20 ± 22.59*# | 378.71 ± 64.25*# | 673.62 ± 50.82*#
*P < 0.05 versus the blank group; #P < 0.05 versus the model group.

Table 4 BALF inflammatory mediators in rats
Group | Number | TNF-α (ng/L) | IL-8 (ng/L) | LTB4 (ng/L)
Blank | 10 | 35.27 ± 9.16# | 87.66 ± 23.97# | 516.70 ± 45.34#
Model | 10 | 189.90 ± 46.67* | 737.25 ± 87.58* | 909.27 ± 51.69*
Radix Stemonae | 10 | 125.08 ± 29.16*# | 254.10 ± 61.02*# | 675.04 ± 33.82*#
*P < 0.05 versus the blank group; #P < 0.05 versus the model group.
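To put the group means in Table 3 in perspective, the relative difference between the Radix Stemonae and model groups can be computed directly. The snippet below is illustrative arithmetic on the reported serum means only; it is not part of the study's statistical analysis and says nothing about significance.

```python
# Illustrative arithmetic only: relative difference of the Radix Stemonae group means
# from the model group means for the serum mediators in Table 3 (group means alone,
# no dispersion, and no substitute for the significance tests reported in the paper).

serum_means = {                      # mean values (ng/L) from Table 3
    "TNF-alpha": {"model": 200.43, "radix": 78.20},
    "IL-8":      {"model": 688.78, "radix": 378.71},
    "LTB4":      {"model": 910.94, "radix": 673.62},
}

for mediator, m in serum_means.items():
    reduction = (m["model"] - m["radix"]) / m["model"] * 100
    print(f"{mediator}: {reduction:.1f}% lower mean in the Radix Stemonae group")
```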
Conclusion
In this study, Radix Stemonae concentrated decoction was used to treat COPD model rats. The pathological results for the lung tissue showed that swelling, disorder and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were improved significantly compared with the COPD model group. The number of inflammatory cells in the small bronchial lumina and gland ducts was lower than in the model group, and alveolar wall thinning, alveolar enlargement and fusion were mitigated in the Radix Stemonae group. The measurements of inflammatory mediators in serum and BALF suggested that Radix Stemonae concentrated decoction could significantly decrease the concentrations of TNF-α, IL-8 and LTB4. We therefore suggest that Radix Stemonae concentrated decoction can mitigate airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators, thereby delaying the deterioration of lung function.
[ "Ethics statement", "Animals", "Reagents and drugs", "Instruments and equipment", "Preparation of animal samples", "Pulmonary function test", "Morphologic observation of lung tissue", "Inflammatory mediators in serum and bronchoalveolar lavage fluid (BALF)", "Statistical analysis", "Comparison of general conditions", "Body weight of rats", "Pulmonary function test", "Pathological changes in rats", "Inflammatory mediators in serum and BALF" ]
[ "All experiments in this study were approved by Ethic Committee of Shanghai University of Traditional Chinese Medicine, Shanghai, China.", "Thirty healthy male SD rats with weight of 200 ± 20 g, 10 weeks old were provided by Shanghai SLAC Laboratory Animal Co., Ltd (Shanghai, China). Rats were housed in the animal room of Shuguang Hospital 1 week before the experiment (Shanghai, China). Room temperature was maintained at 25 ± 1 °C, gas changes at 10~15 times per hour, relative humidity at (50 ± 10) %, ammonia concentration less than 14 mg/m3, noise ≤60 db. Sterilized diet and water were freely accessed.", "Lipopolysaccharide (LPS) was purchased from Sigma (USA). Detection kits of TNF-α, IL-8 and LTB4 were purchased from JRDUN Biotechnology Shanghai Co., Ltd (Shanghai, China). Radix Stemonae Concentrated Decoction was obtained from Yueyang Hospital of Integrated Traditional Chinese and Western Medicine affiliated to Shanghai University of Traditional Chinese Medicine (Shanghai, China). Daqianmen cigarette (tar 13 mg/stick, nicotine 1.0 mg/stick, carbon monoxide 14 mg/stick) was purchased from Shanghai Tobacco Group Co., Ltd (Shanghai, China).\nPreparation of Radix Stemonae Concentrated Decoction: It was prepared by the Department of Pharmacy of Yueyang Hospital. We boiled the raw material in water twice, concentrated the filtrate to 2 g/mL, and added 70 % ethanol for precipitation. After 24 h for standing, ethanol was removed and an appropriate amount of water was added to dilute the solution to a concentration of 0.6 g/mL. The amount needed by rats was calculated as 15 times the clinical dose for human use according to the “Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients” proposed by Jihan Huang et al. in China [21] (each 250 g of Radix Stemonae Lung-nourishing Fried Paste contains Radix Stemonae crude drug 77.5 g; the clinical dose was 10 g/time, TID).", "Microplate reader (Labsystems microplate reader, MK3), pipette (Pipetman, Gilson P), paraffin slicer (Leica, RM2235), water bath (Leica, HI1210), drying apparatus (Leica, HI1220 horizontal drying type), embedding machine (Changzhou Zhongwei Electronic Instrument Factory, BMJ-111), image analysis system (OLYMPUS, BX51), animal lung function detector (provided by the Department of Respiratory Medicine, Shuguang Hospital Attached to Shanghai University of Traditional Chinese Medicine, BioSystem XA SFT3410), electronic scales (Shanghai Precision and Scientific Instrument Co., Ltd., HANGPING FA1004N), optical microscope (OLYMPUS, BX41TF), tray electronic analytical balance (Shimadzu Corporation, AY220), electrically heated thermostatic water bath (Sumsung Laboratory Instrument Co., Ltd., DK-SD).", "The rats were randomized into blank group, COPD model group and Radix Stemonae group by using random number method, 10 cases in each group. Magenta (red) and picric acid (yellow) were used to mark the experimental animals. COPD model group was prepared according to the method of Song Yiping et al. [20]. LPS was injected into rat trachea at a dose of 200 ug (1 g/L) in the morning of day 1 and day 14; on days 2–13 and days 15–112, the rats were put in a self-prepared sealed organic glass poison-stained case for passive smoking, 30 min each time, twice a day at an interval of 2 h. 
Radix Stemonae group: the method was the same as the model group in the first 4 weeks, and from week 5 to week 16, apart from exposure to cigarette smoke, Radix Stemonae Concentrated Decoction was used for gavage every day, 3 times a day, 1 mL each time.", "On day 112 of the experiment, we firstly performed pulmonary function tests in rats in vivo according to the method of Wang et al. [22]. Briefly, the rats were anesthetized by intraperitoneally injection of 2 % sodium pentobarbital (40 mg/kg), and fixed with a supine position on the bench. After cutting off the fur in the middle of the neck, we created an inverted “T”-shaped incision below the annular cartilage, and performed tracheal intubation and fixation. The trachea cannula was connected to a small animal spirometer to measure the ratio between forced expiratory volume at the time of 0.2 s and forced vital capacity (FEV0.2/FVC), expiratory peak flow (PEF), inspiratory resistance (Ri), dynamic lung compliance (Cldyn) for evaluation of lung function in rats.", "We fetched the right lower lung and put it in 4 % formaldehyde for fixation. Twenty-four hours later, the lung performed paraffin-embedding and slicing, conventional slice production, HE staining and routine pathological examination. We randomly selected 3 fields of view for each slice under the light microscope to observe the histological and morphological changes, including pathological semi-quantitative grading, judgment of the severity of lung inflammation and tracheal inflammation, and judgment of degree of alveolar fusion. Mild degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 1/3 of the whole slice, and alveolar fusion accounts for less than 1/3 of the field of view of the entire slice. Moderate degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 2/3 of the whole slice, and alveolar fusion accounts for less than 2/3 of the field of view of the entire slice. Severe degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for more than 2/3 of the whole slice, and alveolar fusion accounts for more than 2/3 of the field of view of the entire slice.", "Before the animals were sacrificed, we sampled 5 mL of their abdominal aortic blood, centrifuged it at 3000 rpm for 10 min. The supernatant was sub-packed and stored at −70 °C. We cut open the chest to expose the trachea and lungs. After the right main bronchus was ligatured, the left lung was douched with 2 mL normal saline. And 1.5 mL solution (recovery rate 75 %) should be gathered each time as required, and filtered through gauze. The left lung was douched with the same method for 3 times. In total, 4.5 mL solution was recovered, of which 3.5 mL were centrifuged at 1500 rpm for 15 min, 4 °C. The supernatant were collected and stored at −70 °C. The concentrations of TNF-α, IL-8 and LTB4 in serum and BALF were determined by using enzyme-linked immuno sorbent assay according to the kit instructions.", "Statistical analyses were performed using SPSS11.5 program. Data were presented as mean ± SD. One-way ANOVA was adopted for comparison of multi-group means in line with normalized distribution and homogeneity of variance tests; Mann–Whitney was adopted for paired comparison of means not in line with normalized distribution and homogeneity of variance tests. 
A value with P < 0.05 indicated statistically significant difference.", "In blank group, the rats had smooth and glossy fur, bright eyes, stable breathing, normal food and water intake, good nutrition, healthy body, full muscles, flexible movements and swift response. In COPD model group, the rats became manic and upset in the early stage of exposure to cigarette smoke, and successively experienced sneezing, cough, shortness of breath and other symptoms. And in the late stage, the rats showed dry and yellow fur. Breathing was fast and shallow in some rats, slow and deep in some other rats. All rats had decreased activities, slow movements, low spirits, and reduced food and water intake. Some rats had a small amount of sticky secretions at the mouth and nose. While in Radix Stemonae group, appearance of the rats in the early stage of exposure to cigarette smoke was the same as in the COPD model group. The rats became manic and upset in the early stage of exposure to cigarette smoke, and successively experienced sneezing, cough, shortness of breath and other symptoms. After gavaged with Radix Stemonae, cough, shortness of breath and other symptoms in the rats were gradually mitigated, and mental state and behavioral activities and food and water intake were gradually normal, but their fur still remained dry and yellow.", "Body weight of the rats was measured on day 1, in week 4, 8 and 16 respectively. The results prompted that compared with the blank group, after rats were exposed to cigarette smoke for 4 weeks, body weights in the COPD model group were decreased significantly (P < 0.05). Compared with the model group, after gavage therapy with Radix Stemonae for 4 weeks (i.e. in week 8), body weights were increased (P < 0.05), as shown in Table 1.Table 1Comparison of rat weights in three groupsGroupthe first dayin week 4in week 8in week 16Blank206.70 ± 4.19267.40 ± 3.89326.10 ± 5.30#\n393.70 ± 3.95#\nModel207.50 ± 2.78248.63 ± 3.96*309.38 ± 5.63*376.75 ± 5.65*Radix Stemonae208.20 ± 2.30249.80 ± 4.34*318.80 ± 5.67*#\n387.00 ± 7.04*#\nAfter One-way ANOVA, *compared with the blank group, there was significant difference, P < 0.05; #compared with the model group, there was significant difference, P < 0.05\n\nComparison of rat weights in three groups\nAfter One-way ANOVA, *compared with the blank group, there was significant difference, P < 0.05; #compared with the model group, there was significant difference, P < 0.05", "The comparison of lung function in three groups was presented in Table 2. Compared with the blank group, in the COPD model group, inspiratory resistance (Ri) was increased significantly (P < 0.05), lung compliance (Cldyn) was decreased significantly (P < 0.05). The 0.2 s forced expiratory volume (FEV0.2)/forced vital capacity (FVC) and expiratory peak flow (PEF) were decreased significantly (P < 0.05). Compared with the model group, in the Radix Stemonae group, airway Ri was decreased significantly (P < 0.05). Cldyn was increased significantly (P < 0.05), and FEV0.2/FVC (%) and PEF were increased significantly (P < 0.05).Table 2Comparison of tested values of lung function in ratsGroupNumberFEV0.2/FVC (%)PEF (mL/s)Ri (cmH2O · s/L)Cldyn (mL/cmH2O)Blank1082.41 ± 3.68#\n75.10 ± 2.78#\n0.32 ± 0.02#\n0.54 ± 0.05#\nModel1064.33 ± 6.66*55.38 ± 4.15*0.47 ± 0.04*0.35 ± 0.03*Radix Stemonae1072.22 ± 1.16*#\n66.87 ± 3.42*#\n0.38 ± 0.01*#\n0.47 ± 0.03*#\n*Compared with the blank group, there was significant difference, P < 0.05. 
#Compared with the model group, there was significant difference, P < 0.05\n\nComparison of tested values of lung function in rats\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05", "Pathological changes in lungs are shown in Fig. 1.Fig. 1Photograph of HE-stained lung tissue under optical microscope (×100). a The shape and structure of rat bronchi in blank group. b The shape and structure of rat bronchi in COPD model group. c The shape and structure of rat bronchi in Radix Stemonae group\n\nPhotograph of HE-stained lung tissue under optical microscope (×100). a The shape and structure of rat bronchi in blank group. b The shape and structure of rat bronchi in COPD model group. c The shape and structure of rat bronchi in Radix Stemonae group\nIn blank group, the shape and structure of rat bronchi, pulmonary alveoli and alveolar septa were all normal (Fig. 1a). The structure of bronchial mucosal epithelium was intact. Ciliated columnar epithelium was regularly arranged and a few goblet cells could be seen. There were few mucosal and lamina propria cells, and no microglandular hyperplasia was observed. The size of pulmonary alveoli was normal, the structure was intact, and local invasion of a small amount of lymphocyte-dominated inflammatory cells could be seen. There was no hyperaemia and edema in alveolar septa, no significant thickening of the arterial wall, and no expansion of the alveolar cavity. In COPD model group, the structure of rat lung tissues was basically normal (Fig. 1b). Mucosal epithelium of tracheal and bronchial showed tumidness, disorder, detachment and decreased. The cilia were lodged, and increased goblet cells and gland hypertrophy were observed. Mucous plug and a large number of inflammatory cells could be seen inside the small bronchial lumen and gland catheter. Invasion of peripheral lymphocytes and plasma cells and smooth muscle hypertrophy were seen. Terminal bronchiolar mucosal epithelium was irregularly arranged. Alveolar wall turned thinner and ruptured, and alveolar turned enlargement. Besides, enlargement and mutual fusion of multiple alveolar cavities, and emphysema were observed. In Radix Stemonae group, the structure of rat lung tissues was basically normal (Fig. 1c). The symptoms of tracheal and bronchial mucosal epithelial swelling, disorder and detachment and the increasement of goblet cells were milder than those in the model group. Inflammatory cells could be seen inside the small bronchial lumen and gland catheter, and invasion of peripheral lymphocytes and plasma cells was observed, which were alleviated to different degrees compared with the model group. Hyperplasia and hypertrophy of smooth muscle was seen; alveolar wall thinning, alveolar enlargement and fusion of some alveolar cavities were all milder than those in the model group.", "Compared with the blank control group, the concentrations of TNF-α, IL-8 and LTB4 in serum and BALF in the COPD model group and the Radix Stemonae group were increased significantly (P < 0.05). 
Compared with the model group, the levels of inflammatory mediators in serum and BALF in the Radix Stemonae group were all decreased significantly (P < 0.05), as shown in Tables 3 and 4.Table 3Test results of serum inflammatory mediators in ratsGroupNumberTNF-α (ng/L)IL-8 (ng/L)LTB4 (ng/L)Blank1033.88 ± 4.90#\n142.60 ± 19.75#\n502.36 ± 55.62#\nModel10200.43 ± 22.71*688.78 ± 49.07*910.94 ± 65.14*Radix Stemonae1078.20 ± 22.59*#\n378.71 ± 64.25*#\n673.62 ± 50.82*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05\nTable 4Test results of BALF inflammatory mediators in ratsGroupNumberTNF-α (ng/L)IL-8 (ng/L)LTB4 (ng/L)Blank1035.27 ± 9.16#\n87.66 ± 23.97#\n516.70 ± 45.34#\nModel10189.90 ± 46.67*737.25 ± 87.58*909.27 ± 51.69*Radix Stemonae10125.08 ± 29.16*#\n254.10 ± 61.02*#\n675.04 ± 33.82*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05\n\nTest results of serum inflammatory mediators in rats\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05\nTest results of BALF inflammatory mediators in rats\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05" ]
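The statistical strategy described in the methods strings above (one-way ANOVA when the data meet normality and homogeneity-of-variance assumptions, Mann–Whitney tests for pairwise comparisons otherwise) was run in SPSS 11.5. Purely as an illustration of the same decision rule, a hedged open-source sketch follows; the group values are synthetic placeholders, not the study's raw data.

```python
# Open-source sketch of the test-selection rule described in the statistical analysis
# (originally done in SPSS 11.5): one-way ANOVA when the groups look normally
# distributed with homogeneous variances, otherwise pairwise Mann-Whitney tests.
# The data below are synthetic placeholders, not the study's measurements.

from itertools import combinations
from scipy import stats

groups = {
    "blank": [82.1, 84.0, 79.5, 83.2, 81.7],
    "model": [63.0, 66.5, 61.8, 65.2, 64.9],
    "radix": [71.9, 72.8, 70.5, 73.0, 72.1],
}

normal = all(stats.shapiro(values).pvalue > 0.05 for values in groups.values())
homogeneous = stats.levene(*groups.values()).pvalue > 0.05

if normal and homogeneous:
    stat, p = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F = {stat:.2f}, p = {p:.4f}")
else:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"Mann-Whitney {name_a} vs {name_b}: U = {stat:.1f}, p = {p:.4f}")
```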
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Ethics statement", "Animals", "Reagents and drugs", "Instruments and equipment", "Preparation of animal samples", "Pulmonary function test", "Morphologic observation of lung tissue", "Inflammatory mediators in serum and bronchoalveolar lavage fluid (BALF)", "Statistical analysis", "Results", "Comparison of general conditions", "Body weight of rats", "Pulmonary function test", "Pathological changes in rats", "Inflammatory mediators in serum and BALF", "Discussion", "Conclusion" ]
[ "Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease, which is characterized by persistent airflow limitation [1]. It is often complicated with exacerbations and hospitalizations [2], which increased mortality and reduced life-expectancy [3]. At present, western medicine treatment of COPD, especially treatment of the stable stage, mainly focuses on symptomatic treatment [4], and drug treatment mostly focuses on improving symptoms and (or) reducing complications [5]. Using Chinese medicines or integrated Chinese and Western medicines to treat stable COPD has significant efficacy [6], improving pulmonary function and increasing diaphragm muscular tension [7]. A large amount of high-quality and high-level evidence has shown that its efficacy in improving symptoms, reducing exacerbations, and improving exercise capacity and quality of life is better than treatment with Western medicines alone [8–10].\nStemonae Radix is a traditional Chinese medicine (TCM) used as an antitussive and insecticidal remedy, which is derived from Stemona tuberosa Lour, S. japonica and and S. sessilifolia [11]. It has been widely used for the treatment of respiratory diseases in China for thousands of years. Earlier studies have shown that Stemonae Radix can release coughs [12], remove phlegm [13] and has antihelminthic, antibacterial, antituberculous and antifungal activity [14, 15]. Alkaloids are the major effective ingredients in Radix Stemonae, which includes stemoninine, stemoninoamide, bisdehydrostemoninine, neotuberostemonine, neostenine, tuberostemonine and tuberostemonine H. Liao et al. [16] found in an in vitro experiment on guinea pigs that the extract of Stemona radix had a relaxation effect on airway smooth muscle and the relaxation effect is realized by interacting with musearinic receptors and dihydropyridine binding point. Besides, the pharmacokinetics study of multiple components absorbed in rat plasma after oral administration of Stemonae radix showed that croomine and tuberostemonine would be potential bioactive components for the treatment of chronic or acute cough in rat model [17, 18].\nIn previous study of this study group, Radix Stemonae lung-nourishing fried paste prepared by our hospital (Hu Yao Zhi Zi Hu 05050323) was used for interference with patients with stable COPD [19]. And the results showed that Radix Stemonae lung-nourishing fried paste could mitigate the clinical symptoms of COPD patients, improve their quality of life [19]. What’s more, it is safe and reliable. However, its mechanism of action in treating COPD was unknown.\nIn this study, we established SD rat COPD models by exposing to cigarette smoke combined with intratracheal instillation of LPS according to the method of Yiping Song et al. [20]. Radix Stemonae concentrated decoction was used for gavage therapy. By observation of the effect of Radix Stemonae concentrated decoction on the lung tissue pathology and inflammatory mediators in COPD rats, we tried to explore its possible mechanism of action.", " Ethics statement All experiments in this study were approved by Ethic Committee of Shanghai University of Traditional Chinese Medicine, Shanghai, China.\nAll experiments in this study were approved by Ethic Committee of Shanghai University of Traditional Chinese Medicine, Shanghai, China.\n Animals Thirty healthy male SD rats with weight of 200 ± 20 g, 10 weeks old were provided by Shanghai SLAC Laboratory Animal Co., Ltd (Shanghai, China). 
Rats were housed in the animal room of Shuguang Hospital 1 week before the experiment (Shanghai, China). Room temperature was maintained at 25 ± 1 °C, gas changes at 10~15 times per hour, relative humidity at (50 ± 10) %, ammonia concentration less than 14 mg/m3, noise ≤60 db. Sterilized diet and water were freely accessed.\nThirty healthy male SD rats with weight of 200 ± 20 g, 10 weeks old were provided by Shanghai SLAC Laboratory Animal Co., Ltd (Shanghai, China). Rats were housed in the animal room of Shuguang Hospital 1 week before the experiment (Shanghai, China). Room temperature was maintained at 25 ± 1 °C, gas changes at 10~15 times per hour, relative humidity at (50 ± 10) %, ammonia concentration less than 14 mg/m3, noise ≤60 db. Sterilized diet and water were freely accessed.\n Reagents and drugs Lipopolysaccharide (LPS) was purchased from Sigma (USA). Detection kits of TNF-α, IL-8 and LTB4 were purchased from JRDUN Biotechnology Shanghai Co., Ltd (Shanghai, China). Radix Stemonae Concentrated Decoction was obtained from Yueyang Hospital of Integrated Traditional Chinese and Western Medicine affiliated to Shanghai University of Traditional Chinese Medicine (Shanghai, China). Daqianmen cigarette (tar 13 mg/stick, nicotine 1.0 mg/stick, carbon monoxide 14 mg/stick) was purchased from Shanghai Tobacco Group Co., Ltd (Shanghai, China).\nPreparation of Radix Stemonae Concentrated Decoction: It was prepared by the Department of Pharmacy of Yueyang Hospital. We boiled the raw material in water twice, concentrated the filtrate to 2 g/mL, and added 70 % ethanol for precipitation. After 24 h for standing, ethanol was removed and an appropriate amount of water was added to dilute the solution to a concentration of 0.6 g/mL. The amount needed by rats was calculated as 15 times the clinical dose for human use according to the “Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients” proposed by Jihan Huang et al. in China [21] (each 250 g of Radix Stemonae Lung-nourishing Fried Paste contains Radix Stemonae crude drug 77.5 g; the clinical dose was 10 g/time, TID).\nLipopolysaccharide (LPS) was purchased from Sigma (USA). Detection kits of TNF-α, IL-8 and LTB4 were purchased from JRDUN Biotechnology Shanghai Co., Ltd (Shanghai, China). Radix Stemonae Concentrated Decoction was obtained from Yueyang Hospital of Integrated Traditional Chinese and Western Medicine affiliated to Shanghai University of Traditional Chinese Medicine (Shanghai, China). Daqianmen cigarette (tar 13 mg/stick, nicotine 1.0 mg/stick, carbon monoxide 14 mg/stick) was purchased from Shanghai Tobacco Group Co., Ltd (Shanghai, China).\nPreparation of Radix Stemonae Concentrated Decoction: It was prepared by the Department of Pharmacy of Yueyang Hospital. We boiled the raw material in water twice, concentrated the filtrate to 2 g/mL, and added 70 % ethanol for precipitation. After 24 h for standing, ethanol was removed and an appropriate amount of water was added to dilute the solution to a concentration of 0.6 g/mL. The amount needed by rats was calculated as 15 times the clinical dose for human use according to the “Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients” proposed by Jihan Huang et al. 
in China [21] (each 250 g of Radix Stemonae Lung-nourishing Fried Paste contains Radix Stemonae crude drug 77.5 g; the clinical dose was 10 g/time, TID).\n Instruments and equipment Microplate reader (Labsystems microplate reader, MK3), pipette (Pipetman, Gilson P), paraffin slicer (Leica, RM2235), water bath (Leica, HI1210), drying apparatus (Leica, HI1220 horizontal drying type), embedding machine (Changzhou Zhongwei Electronic Instrument Factory, BMJ-111), image analysis system (OLYMPUS, BX51), animal lung function detector (provided by the Department of Respiratory Medicine, Shuguang Hospital Attached to Shanghai University of Traditional Chinese Medicine, BioSystem XA SFT3410), electronic scales (Shanghai Precision and Scientific Instrument Co., Ltd., HANGPING FA1004N), optical microscope (OLYMPUS, BX41TF), tray electronic analytical balance (Shimadzu Corporation, AY220), electrically heated thermostatic water bath (Sumsung Laboratory Instrument Co., Ltd., DK-SD).\nMicroplate reader (Labsystems microplate reader, MK3), pipette (Pipetman, Gilson P), paraffin slicer (Leica, RM2235), water bath (Leica, HI1210), drying apparatus (Leica, HI1220 horizontal drying type), embedding machine (Changzhou Zhongwei Electronic Instrument Factory, BMJ-111), image analysis system (OLYMPUS, BX51), animal lung function detector (provided by the Department of Respiratory Medicine, Shuguang Hospital Attached to Shanghai University of Traditional Chinese Medicine, BioSystem XA SFT3410), electronic scales (Shanghai Precision and Scientific Instrument Co., Ltd., HANGPING FA1004N), optical microscope (OLYMPUS, BX41TF), tray electronic analytical balance (Shimadzu Corporation, AY220), electrically heated thermostatic water bath (Sumsung Laboratory Instrument Co., Ltd., DK-SD).\n Preparation of animal samples The rats were randomized into blank group, COPD model group and Radix Stemonae group by using random number method, 10 cases in each group. Magenta (red) and picric acid (yellow) were used to mark the experimental animals. COPD model group was prepared according to the method of Song Yiping et al. [20]. LPS was injected into rat trachea at a dose of 200 ug (1 g/L) in the morning of day 1 and day 14; on days 2–13 and days 15–112, the rats were put in a self-prepared sealed organic glass poison-stained case for passive smoking, 30 min each time, twice a day at an interval of 2 h. Radix Stemonae group: the method was the same as the model group in the first 4 weeks, and from week 5 to week 16, apart from exposure to cigarette smoke, Radix Stemonae Concentrated Decoction was used for gavage every day, 3 times a day, 1 mL each time.\nThe rats were randomized into blank group, COPD model group and Radix Stemonae group by using random number method, 10 cases in each group. Magenta (red) and picric acid (yellow) were used to mark the experimental animals. COPD model group was prepared according to the method of Song Yiping et al. [20]. LPS was injected into rat trachea at a dose of 200 ug (1 g/L) in the morning of day 1 and day 14; on days 2–13 and days 15–112, the rats were put in a self-prepared sealed organic glass poison-stained case for passive smoking, 30 min each time, twice a day at an interval of 2 h. 
Radix Stemonae group: the method was the same as the model group in the first 4 weeks, and from week 5 to week 16, apart from exposure to cigarette smoke, Radix Stemonae Concentrated Decoction was used for gavage every day, 3 times a day, 1 mL each time.\n Pulmonary function test On day 112 of the experiment, we firstly performed pulmonary function tests in rats in vivo according to the method of Wang et al. [22]. Briefly, the rats were anesthetized by intraperitoneally injection of 2 % sodium pentobarbital (40 mg/kg), and fixed with a supine position on the bench. After cutting off the fur in the middle of the neck, we created an inverted “T”-shaped incision below the annular cartilage, and performed tracheal intubation and fixation. The trachea cannula was connected to a small animal spirometer to measure the ratio between forced expiratory volume at the time of 0.2 s and forced vital capacity (FEV0.2/FVC), expiratory peak flow (PEF), inspiratory resistance (Ri), dynamic lung compliance (Cldyn) for evaluation of lung function in rats.\nOn day 112 of the experiment, we firstly performed pulmonary function tests in rats in vivo according to the method of Wang et al. [22]. Briefly, the rats were anesthetized by intraperitoneally injection of 2 % sodium pentobarbital (40 mg/kg), and fixed with a supine position on the bench. After cutting off the fur in the middle of the neck, we created an inverted “T”-shaped incision below the annular cartilage, and performed tracheal intubation and fixation. The trachea cannula was connected to a small animal spirometer to measure the ratio between forced expiratory volume at the time of 0.2 s and forced vital capacity (FEV0.2/FVC), expiratory peak flow (PEF), inspiratory resistance (Ri), dynamic lung compliance (Cldyn) for evaluation of lung function in rats.\n Morphologic observation of lung tissue We fetched the right lower lung and put it in 4 % formaldehyde for fixation. Twenty-four hours later, the lung performed paraffin-embedding and slicing, conventional slice production, HE staining and routine pathological examination. We randomly selected 3 fields of view for each slice under the light microscope to observe the histological and morphological changes, including pathological semi-quantitative grading, judgment of the severity of lung inflammation and tracheal inflammation, and judgment of degree of alveolar fusion. Mild degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 1/3 of the whole slice, and alveolar fusion accounts for less than 1/3 of the field of view of the entire slice. Moderate degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 2/3 of the whole slice, and alveolar fusion accounts for less than 2/3 of the field of view of the entire slice. Severe degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for more than 2/3 of the whole slice, and alveolar fusion accounts for more than 2/3 of the field of view of the entire slice.\nWe fetched the right lower lung and put it in 4 % formaldehyde for fixation. Twenty-four hours later, the lung performed paraffin-embedding and slicing, conventional slice production, HE staining and routine pathological examination. 
We randomly selected 3 fields of view for each slice under the light microscope to observe the histological and morphological changes, including pathological semi-quantitative grading, judgment of the severity of lung inflammation and tracheal inflammation, and judgment of degree of alveolar fusion. Mild degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 1/3 of the whole slice, and alveolar fusion accounts for less than 1/3 of the field of view of the entire slice. Moderate degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 2/3 of the whole slice, and alveolar fusion accounts for less than 2/3 of the field of view of the entire slice. Severe degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for more than 2/3 of the whole slice, and alveolar fusion accounts for more than 2/3 of the field of view of the entire slice.\n Inflammatory mediators in serum and bronchoalveolar lavage fluid (BALF) Before the animals were sacrificed, we sampled 5 mL of their abdominal aortic blood, centrifuged it at 3000 rpm for 10 min. The supernatant was sub-packed and stored at −70 °C. We cut open the chest to expose the trachea and lungs. After the right main bronchus was ligatured, the left lung was douched with 2 mL normal saline. And 1.5 mL solution (recovery rate 75 %) should be gathered each time as required, and filtered through gauze. The left lung was douched with the same method for 3 times. In total, 4.5 mL solution was recovered, of which 3.5 mL were centrifuged at 1500 rpm for 15 min, 4 °C. The supernatant were collected and stored at −70 °C. The concentrations of TNF-α, IL-8 and LTB4 in serum and BALF were determined by using enzyme-linked immuno sorbent assay according to the kit instructions.\nBefore the animals were sacrificed, we sampled 5 mL of their abdominal aortic blood, centrifuged it at 3000 rpm for 10 min. The supernatant was sub-packed and stored at −70 °C. We cut open the chest to expose the trachea and lungs. After the right main bronchus was ligatured, the left lung was douched with 2 mL normal saline. And 1.5 mL solution (recovery rate 75 %) should be gathered each time as required, and filtered through gauze. The left lung was douched with the same method for 3 times. In total, 4.5 mL solution was recovered, of which 3.5 mL were centrifuged at 1500 rpm for 15 min, 4 °C. The supernatant were collected and stored at −70 °C. The concentrations of TNF-α, IL-8 and LTB4 in serum and BALF were determined by using enzyme-linked immuno sorbent assay according to the kit instructions.\n Statistical analysis Statistical analyses were performed using SPSS11.5 program. Data were presented as mean ± SD. One-way ANOVA was adopted for comparison of multi-group means in line with normalized distribution and homogeneity of variance tests; Mann–Whitney was adopted for paired comparison of means not in line with normalized distribution and homogeneity of variance tests. A value with P < 0.05 indicated statistically significant difference.\nStatistical analyses were performed using SPSS11.5 program. Data were presented as mean ± SD. 
One-way ANOVA was adopted for comparison of multi-group means in line with normalized distribution and homogeneity of variance tests; Mann–Whitney was adopted for paired comparison of means not in line with normalized distribution and homogeneity of variance tests. A value with P < 0.05 indicated statistically significant difference.", "All experiments in this study were approved by Ethic Committee of Shanghai University of Traditional Chinese Medicine, Shanghai, China.", "Thirty healthy male SD rats with weight of 200 ± 20 g, 10 weeks old were provided by Shanghai SLAC Laboratory Animal Co., Ltd (Shanghai, China). Rats were housed in the animal room of Shuguang Hospital 1 week before the experiment (Shanghai, China). Room temperature was maintained at 25 ± 1 °C, gas changes at 10~15 times per hour, relative humidity at (50 ± 10) %, ammonia concentration less than 14 mg/m3, noise ≤60 db. Sterilized diet and water were freely accessed.", "Lipopolysaccharide (LPS) was purchased from Sigma (USA). Detection kits of TNF-α, IL-8 and LTB4 were purchased from JRDUN Biotechnology Shanghai Co., Ltd (Shanghai, China). Radix Stemonae Concentrated Decoction was obtained from Yueyang Hospital of Integrated Traditional Chinese and Western Medicine affiliated to Shanghai University of Traditional Chinese Medicine (Shanghai, China). Daqianmen cigarette (tar 13 mg/stick, nicotine 1.0 mg/stick, carbon monoxide 14 mg/stick) was purchased from Shanghai Tobacco Group Co., Ltd (Shanghai, China).\nPreparation of Radix Stemonae Concentrated Decoction: It was prepared by the Department of Pharmacy of Yueyang Hospital. We boiled the raw material in water twice, concentrated the filtrate to 2 g/mL, and added 70 % ethanol for precipitation. After 24 h for standing, ethanol was removed and an appropriate amount of water was added to dilute the solution to a concentration of 0.6 g/mL. The amount needed by rats was calculated as 15 times the clinical dose for human use according to the “Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients” proposed by Jihan Huang et al. in China [21] (each 250 g of Radix Stemonae Lung-nourishing Fried Paste contains Radix Stemonae crude drug 77.5 g; the clinical dose was 10 g/time, TID).", "Microplate reader (Labsystems microplate reader, MK3), pipette (Pipetman, Gilson P), paraffin slicer (Leica, RM2235), water bath (Leica, HI1210), drying apparatus (Leica, HI1220 horizontal drying type), embedding machine (Changzhou Zhongwei Electronic Instrument Factory, BMJ-111), image analysis system (OLYMPUS, BX51), animal lung function detector (provided by the Department of Respiratory Medicine, Shuguang Hospital Attached to Shanghai University of Traditional Chinese Medicine, BioSystem XA SFT3410), electronic scales (Shanghai Precision and Scientific Instrument Co., Ltd., HANGPING FA1004N), optical microscope (OLYMPUS, BX41TF), tray electronic analytical balance (Shimadzu Corporation, AY220), electrically heated thermostatic water bath (Sumsung Laboratory Instrument Co., Ltd., DK-SD).", "The rats were randomized into blank group, COPD model group and Radix Stemonae group by using random number method, 10 cases in each group. Magenta (red) and picric acid (yellow) were used to mark the experimental animals. COPD model group was prepared according to the method of Song Yiping et al. [20]. 
LPS was injected into rat trachea at a dose of 200 ug (1 g/L) in the morning of day 1 and day 14; on days 2–13 and days 15–112, the rats were put in a self-prepared sealed organic glass poison-stained case for passive smoking, 30 min each time, twice a day at an interval of 2 h. Radix Stemonae group: the method was the same as the model group in the first 4 weeks, and from week 5 to week 16, apart from exposure to cigarette smoke, Radix Stemonae Concentrated Decoction was used for gavage every day, 3 times a day, 1 mL each time.", "On day 112 of the experiment, we firstly performed pulmonary function tests in rats in vivo according to the method of Wang et al. [22]. Briefly, the rats were anesthetized by intraperitoneally injection of 2 % sodium pentobarbital (40 mg/kg), and fixed with a supine position on the bench. After cutting off the fur in the middle of the neck, we created an inverted “T”-shaped incision below the annular cartilage, and performed tracheal intubation and fixation. The trachea cannula was connected to a small animal spirometer to measure the ratio between forced expiratory volume at the time of 0.2 s and forced vital capacity (FEV0.2/FVC), expiratory peak flow (PEF), inspiratory resistance (Ri), dynamic lung compliance (Cldyn) for evaluation of lung function in rats.", "We fetched the right lower lung and put it in 4 % formaldehyde for fixation. Twenty-four hours later, the lung performed paraffin-embedding and slicing, conventional slice production, HE staining and routine pathological examination. We randomly selected 3 fields of view for each slice under the light microscope to observe the histological and morphological changes, including pathological semi-quantitative grading, judgment of the severity of lung inflammation and tracheal inflammation, and judgment of degree of alveolar fusion. Mild degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 1/3 of the whole slice, and alveolar fusion accounts for less than 1/3 of the field of view of the entire slice. Moderate degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for less than 2/3 of the whole slice, and alveolar fusion accounts for less than 2/3 of the field of view of the entire slice. Severe degree means that infiltration of lung tissue and various small bronchi by inflammatory cells contained in the lung accounts for more than 2/3 of the whole slice, and alveolar fusion accounts for more than 2/3 of the field of view of the entire slice.", "Before the animals were sacrificed, we sampled 5 mL of their abdominal aortic blood, centrifuged it at 3000 rpm for 10 min. The supernatant was sub-packed and stored at −70 °C. We cut open the chest to expose the trachea and lungs. After the right main bronchus was ligatured, the left lung was douched with 2 mL normal saline. And 1.5 mL solution (recovery rate 75 %) should be gathered each time as required, and filtered through gauze. The left lung was douched with the same method for 3 times. In total, 4.5 mL solution was recovered, of which 3.5 mL were centrifuged at 1500 rpm for 15 min, 4 °C. The supernatant were collected and stored at −70 °C. The concentrations of TNF-α, IL-8 and LTB4 in serum and BALF were determined by using enzyme-linked immuno sorbent assay according to the kit instructions.", "Statistical analyses were performed using SPSS11.5 program. Data were presented as mean ± SD. 
One-way ANOVA was adopted for comparison of multi-group means that met the normality and homogeneity-of-variance tests; the Mann–Whitney test was adopted for paired comparisons of means that did not. A value of P < 0.05 indicated a statistically significant difference.", "
 Comparison of general conditions In the blank group, the rats had smooth, glossy fur, bright eyes, stable breathing, normal food and water intake, good nutrition, a healthy body, full muscles, flexible movements and swift responses. In the COPD model group, the rats became agitated and restless in the early stage of exposure to cigarette smoke, and successively developed sneezing, cough, shortness of breath and other symptoms. In the late stage, the rats showed dry, yellow fur; breathing was fast and shallow in some rats and slow and deep in others. All rats showed decreased activity, slow movements, low spirits, and reduced food and water intake, and some had a small amount of sticky secretions at the mouth and nose. In the Radix Stemonae group, the appearance of the rats in the early stage of exposure to cigarette smoke was the same as in the COPD model group: the rats became agitated and restless and successively developed sneezing, cough, shortness of breath and other symptoms. After gavage with Radix Stemonae, the cough, shortness of breath and other symptoms were gradually mitigated, and mental state, behavioral activity, and food and water intake gradually returned to normal, but the fur remained dry and yellow.\n Body weight of rats Body weight was measured on day 1 and in weeks 4, 8 and 16. Compared with the blank group, body weights in the COPD model group were decreased significantly after 4 weeks of exposure to cigarette smoke (P < 0.05). Compared with the model group, body weights were increased significantly after 4 weeks of gavage therapy with Radix Stemonae (i.e. in week 8) (P < 0.05), as shown in Table 1.\nTable 1 Comparison of rat weights (g) in the three groups\nGroup | Day 1 | Week 4 | Week 8 | Week 16\nBlank | 206.70 ± 4.19 | 267.40 ± 3.89 | 326.10 ± 5.30# | 393.70 ± 3.95#\nModel | 207.50 ± 2.78 | 248.63 ± 3.96* | 309.38 ± 5.63* | 376.75 ± 5.65*\nRadix Stemonae | 208.20 ± 2.30 | 249.80 ± 4.34* | 318.80 ± 5.67*# | 387.00 ± 7.04*#\nAfter one-way ANOVA: *compared with the blank group, there was significant difference, P < 0.05; #compared with the model group, there was significant difference, P < 0.05\n Pulmonary function test The comparison of lung function among the three groups is presented in Table 2. Compared with the blank group, in the COPD model group inspiratory resistance (Ri) was increased significantly and dynamic lung compliance (Cldyn) was decreased significantly (P < 0.05), and the ratio of forced expiratory volume in 0.2 s to forced vital capacity (FEV0.2/FVC) and peak expiratory flow (PEF) were decreased significantly (P < 0.05). Compared with the model group, in the Radix Stemonae group airway Ri was decreased significantly, while Cldyn, FEV0.2/FVC (%) and PEF were increased significantly (P < 0.05).\nTable 2 Comparison of tested values of lung function in rats\nGroup | n | FEV0.2/FVC (%) | PEF (mL/s) | Ri (cmH2O·s/L) | Cldyn (mL/cmH2O)\nBlank | 10 | 82.41 ± 3.68# | 75.10 ± 2.78# | 0.32 ± 0.02# | 0.54 ± 0.05#\nModel | 10 | 64.33 ± 6.66* | 55.38 ± 4.15* | 0.47 ± 0.04* | 0.35 ± 0.03*\nRadix Stemonae | 10 | 72.22 ± 1.16*# | 66.87 ± 3.42*# | 0.38 ± 0.01*# | 0.47 ± 0.03*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05\n Pathological changes in rats Pathological changes in the lungs are shown in Fig. 1.\nFig. 1 Photograph of HE-stained lung tissue under the optical microscope (×100). a The shape and structure of rat bronchi in the blank group. b The shape and structure of rat bronchi in the COPD model group. c The shape and structure of rat bronchi in the Radix Stemonae group.\nIn the blank group, the shape and structure of the bronchi, pulmonary alveoli and alveolar septa were all normal (Fig. 1a). The bronchial mucosal epithelium was intact, the ciliated columnar epithelium was regularly arranged, and a few goblet cells could be seen. There were few mucosal and lamina propria cells, and no microglandular hyperplasia was observed. The pulmonary alveoli were of normal size and intact structure, with focal infiltration of a small number of lymphocyte-dominated inflammatory cells. There was no hyperaemia or edema in the alveolar septa, no significant thickening of the arterial walls, and no expansion of the alveolar cavities. In the COPD model group (Fig. 1b), the tracheal and bronchial mucosal epithelium was swollen, disorganized, detached and thinned, the cilia were lodged, goblet cells were increased, and glandular hypertrophy was observed. Mucous plugs and large numbers of inflammatory cells were seen inside the small bronchial lumens and gland ducts, with infiltration of peripheral lymphocytes and plasma cells and smooth muscle hypertrophy. The terminal bronchiolar mucosal epithelium was irregularly arranged. The alveolar walls were thinned and ruptured, the alveoli were enlarged, multiple alveolar cavities were enlarged and fused with one another, and emphysema was observed. In the Radix Stemonae group (Fig. 1c), the overall structure of the lung tissue was largely preserved. Swelling, disorganization and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were milder than in the model group. Inflammatory cells were seen inside the small bronchial lumens and gland ducts, and infiltration of peripheral lymphocytes and plasma cells was observed, both alleviated to different degrees compared with the model group. Hyperplasia and hypertrophy of smooth muscle were seen; alveolar wall thinning, alveolar enlargement and fusion of some alveolar cavities were all milder than in the model group.\n Inflammatory mediators in serum and BALF Compared with the blank control group, the concentrations of TNF-α, IL-8 and LTB4 in serum and BALF in the COPD model group and the Radix Stemonae group were increased significantly (P < 0.05). Compared with the model group, the levels of inflammatory mediators in serum and BALF in the Radix Stemonae group were all decreased significantly (P < 0.05), as shown in Tables 3 and 4.\nTable 3 Test results of serum inflammatory mediators in rats\nGroup | n | TNF-α (ng/L) | IL-8 (ng/L) | LTB4 (ng/L)\nBlank | 10 | 33.88 ± 4.90# | 142.60 ± 19.75# | 502.36 ± 55.62#\nModel | 10 | 200.43 ± 22.71* | 688.78 ± 49.07* | 910.94 ± 65.14*\nRadix Stemonae | 10 | 78.20 ± 22.59*# | 378.71 ± 64.25*# | 673.62 ± 50.82*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05\nTable 4 Test results of BALF inflammatory mediators in rats\nGroup | n | TNF-α (ng/L) | IL-8 (ng/L) | LTB4 (ng/L)\nBlank | 10 | 35.27 ± 9.16# | 87.66 ± 23.97# | 516.70 ± 45.34#\nModel | 10 | 189.90 ± 46.67* | 737.25 ± 87.58* | 909.27 ± 51.69*\nRadix Stemonae | 10 | 125.08 ± 29.16*# | 254.10 ± 61.02*# | 675.04 ± 33.82*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05", "
In the blank group, the rats had smooth, glossy fur, bright eyes, stable breathing, normal food and water intake, good nutrition, a healthy body, full muscles, flexible movements and swift responses. In the COPD model group, the rats became agitated and restless in the early stage of exposure to cigarette smoke, and successively developed sneezing, cough, shortness of breath and other symptoms.
In the late stage, the rats showed dry, yellow fur; breathing was fast and shallow in some rats and slow and deep in others. All rats showed decreased activity, slow movements, low spirits, and reduced food and water intake, and some had a small amount of sticky secretions at the mouth and nose. In the Radix Stemonae group, the appearance of the rats in the early stage of exposure to cigarette smoke was the same as in the COPD model group: the rats became agitated and restless and successively developed sneezing, cough, shortness of breath and other symptoms. After gavage with Radix Stemonae, the cough, shortness of breath and other symptoms were gradually mitigated, and mental state, behavioral activity, and food and water intake gradually returned to normal, but the fur remained dry and yellow.", "
Body weight was measured on day 1 and in weeks 4, 8 and 16. Compared with the blank group, body weights in the COPD model group were decreased significantly after 4 weeks of exposure to cigarette smoke (P < 0.05). Compared with the model group, body weights were increased significantly after 4 weeks of gavage therapy with Radix Stemonae (i.e. in week 8) (P < 0.05), as shown in Table 1.\nTable 1 Comparison of rat weights (g) in the three groups\nGroup | Day 1 | Week 4 | Week 8 | Week 16\nBlank | 206.70 ± 4.19 | 267.40 ± 3.89 | 326.10 ± 5.30# | 393.70 ± 3.95#\nModel | 207.50 ± 2.78 | 248.63 ± 3.96* | 309.38 ± 5.63* | 376.75 ± 5.65*\nRadix Stemonae | 208.20 ± 2.30 | 249.80 ± 4.34* | 318.80 ± 5.67*# | 387.00 ± 7.04*#\nAfter one-way ANOVA: *compared with the blank group, there was significant difference, P < 0.05; #compared with the model group, there was significant difference, P < 0.05", "
The comparison of lung function among the three groups is presented in Table 2. Compared with the blank group, in the COPD model group inspiratory resistance (Ri) was increased significantly and dynamic lung compliance (Cldyn) was decreased significantly (P < 0.05), and the ratio of forced expiratory volume in 0.2 s to forced vital capacity (FEV0.2/FVC) and peak expiratory flow (PEF) were decreased significantly (P < 0.05). Compared with the model group, in the Radix Stemonae group airway Ri was decreased significantly, while Cldyn, FEV0.2/FVC (%) and PEF were increased significantly (P < 0.05).\nTable 2 Comparison of tested values of lung function in rats\nGroup | n | FEV0.2/FVC (%) | PEF (mL/s) | Ri (cmH2O·s/L) | Cldyn (mL/cmH2O)\nBlank | 10 | 82.41 ± 3.68# | 75.10 ± 2.78# | 0.32 ± 0.02# | 0.54 ± 0.05#\nModel | 10 | 64.33 ± 6.66* | 55.38 ± 4.15* | 0.47 ± 0.04* | 0.35 ± 0.03*\nRadix Stemonae | 10 | 72.22 ± 1.16*# | 66.87 ± 3.42*# | 0.38 ± 0.01*# | 0.47 ± 0.03*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05", "
Pathological changes in the lungs are shown in Fig. 1.\nFig. 1 Photograph of HE-stained lung tissue under the optical microscope (×100). a The shape and structure of rat bronchi in the blank group. b The shape and structure of rat bronchi in the COPD model group. c The shape and structure of rat bronchi in the Radix Stemonae group.\nIn the blank group, the shape and structure of the bronchi, pulmonary alveoli and alveolar septa were all normal (Fig. 1a). The bronchial mucosal epithelium was intact, the ciliated columnar epithelium was regularly arranged, and a few goblet cells could be seen. There were few mucosal and lamina propria cells, and no microglandular hyperplasia was observed. The pulmonary alveoli were of normal size and intact structure, with focal infiltration of a small number of lymphocyte-dominated inflammatory cells. There was no hyperaemia or edema in the alveolar septa, no significant thickening of the arterial walls, and no expansion of the alveolar cavities. In the COPD model group (Fig. 1b), the tracheal and bronchial mucosal epithelium was swollen, disorganized, detached and thinned, the cilia were lodged, goblet cells were increased, and glandular hypertrophy was observed. Mucous plugs and large numbers of inflammatory cells were seen inside the small bronchial lumens and gland ducts, with infiltration of peripheral lymphocytes and plasma cells and smooth muscle hypertrophy. The terminal bronchiolar mucosal epithelium was irregularly arranged. The alveolar walls were thinned and ruptured, the alveoli were enlarged, multiple alveolar cavities were enlarged and fused with one another, and emphysema was observed. In the Radix Stemonae group (Fig. 1c), the overall structure of the lung tissue was largely preserved. Swelling, disorganization and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were milder than in the model group. Inflammatory cells were seen inside the small bronchial lumens and gland ducts, and infiltration of peripheral lymphocytes and plasma cells was observed, both alleviated to different degrees compared with the model group. Hyperplasia and hypertrophy of smooth muscle were seen; alveolar wall thinning, alveolar enlargement and fusion of some alveolar cavities were all milder than in the model group.", "
Compared with the blank control group, the concentrations of TNF-α, IL-8 and LTB4 in serum and BALF in the COPD model group and the Radix Stemonae group were increased significantly (P < 0.05). Compared with the model group, the levels of inflammatory mediators in serum and BALF in the Radix Stemonae group were all decreased significantly (P < 0.05), as shown in Tables 3 and 4.\nTable 3 Test results of serum inflammatory mediators in rats\nGroup | n | TNF-α (ng/L) | IL-8 (ng/L) | LTB4 (ng/L)\nBlank | 10 | 33.88 ± 4.90# | 142.60 ± 19.75# | 502.36 ± 55.62#\nModel | 10 | 200.43 ± 22.71* | 688.78 ± 49.07* | 910.94 ± 65.14*\nRadix Stemonae | 10 | 78.20 ± 22.59*# | 378.71 ± 64.25*# | 673.62 ± 50.82*#\n*Compared with the blank group, there was significant difference, P < 0.05.
#Compared with the model group, there was significant difference, P < 0.05\nTable 4 Test results of BALF inflammatory mediators in rats\nGroup | n | TNF-α (ng/L) | IL-8 (ng/L) | LTB4 (ng/L)\nBlank | 10 | 35.27 ± 9.16# | 87.66 ± 23.97# | 516.70 ± 45.34#\nModel | 10 | 189.90 ± 46.67* | 737.25 ± 87.58* | 909.27 ± 51.69*\nRadix Stemonae | 10 | 125.08 ± 29.16*# | 254.10 ± 61.02*# | 675.04 ± 33.82*#\n*Compared with the blank group, there was significant difference, P < 0.05. #Compared with the model group, there was significant difference, P < 0.05", "
COPD is a common and frequently occurring respiratory disease. In an epidemiological survey of 20,245 adults in seven regions, patients with COPD accounted for 8.2 % of the population over the age of 40 [23]. The prevalence of COPD is high and has shown an increasing trend year by year, and it is the only common disease that has kept rising [24]. The risk of COPD is related to an interaction between genetic factors and many different environmental factors, and can also be influenced by comorbid disease. Environmental factors include tobacco smoking [25], infections [26], occupational dusts and chemicals [27], air pollution [28, 29] and other categories [30]. Smoking and respiratory infections are important factors in causing COPD. In this study, exposure to cigarette smoke combined with intratracheal instillation of LPS was adopted to simulate the pathological process of COPD. With respect to general condition, cough, shortness of breath and other clinical manifestations occurred, as well as weight loss. Pathologically, airway and lung parenchymal inflammation in the COPD model group was significantly aggravated compared with the blank group, and marked emphysema occurred. Pulmonary function tests also showed that airway resistance in the model group was increased significantly, compliance was decreased significantly, and FEV0.2/FVC and PEF were decreased significantly, suggesting that the model had been successfully established.\nAbnormal expression of TNF-α, IL-8, LTB4 and other inflammatory mediators has important effects on the occurrence and acute exacerbation of COPD. Inflammatory mediators can cause lung dysfunction and airway inflammation, as well as a systemic inflammatory response [31]. Therefore, many studies [32, 33] have explored new approaches to COPD treatment by observing changes in the levels of inflammatory mediators. Among them, TNF-α, which is chemotactic for white blood cells, can trigger an inflammatory response and may participate in the formation of emphysema and damage to epithelial cells [34]. It also promotes inflammatory responses, tissue fibrosis, and angiogenesis through the breakdown of extracellular proteins [35]. The main function of IL-8 is to activate neutrophils and trigger the release of oxygen free radicals and proteases, thereby injuring the alveolar epithelium and microvasculature and playing an important role in the occurrence and development of airway inflammation in COPD patients [36]. LTB4 can recruit and activate neutrophils and further amplify the local inflammatory response.
Meanwhile, LTB4 increases the infiltration of inflammatory cells in the bronchial mucosa [37]. After gavage therapy with Radix Stemonae, pulmonary function indicators such as peak expiratory flow (PEF), inspiratory resistance (Ri) and dynamic lung compliance (Cldyn) were much better than in the COPD model group, and the inflammation and emphysema were much milder. Inflammatory mediators (TNF-α, IL-8 and LTB4) were decreased significantly in the Radix Stemonae group. Thus Radix Stemonae concentrated decoction may mitigate and improve airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators.\nRadix Stemonae is the dried root of Stemona sessilifolia, Stemona japonica or Stemona tuberosa Lour; it has a sweet-bitter taste and warm nature and acts on the lung channel [38]. The amount of Radix Stemonae Concentrated Decoction needed by rats was calculated as 15 times the clinical dose for human use according to the "Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients" proposed by Jihan Huang et al. in China [21], but it would be better to optimize the dosage of Radix Stemonae Concentrated Decoction in a pilot study.\nModern research reports that Radix Stemonae mainly contains alkaloids [39], including bisdehydrostemoninine, neotuberostemonine, stemoninine, stemoninoamide, neostenine, tuberostemonine and tuberostemonine H, which exert therapeutic effects in lung diseases by combating bacteria, eliminating sputum, stopping cough and relaxing bronchial smooth muscle [40]. Radix Stemonae alkaloid extract has a relaxing effect on histamine-induced guinea pig bronchial smooth muscle spasm, and the effect is slow and sustained. Liao et al. [16] found in an in vitro experiment on guinea pigs that the water extract of Stemona sessilifolia had a spasmolytic effect on bronchial smooth muscle, realized by interaction with muscarinic receptors and the dihydropyridine binding site.\nUsing ultra-performance liquid chromatography/mass spectrometry, a pharmacokinetic study of multiple components absorbed in rat plasma after oral administration of Stemonae radix showed that croomine and tuberostemonine would be potential efficacy markers [18]. Stemoninine was identified by HPLC as the major absorbed compound after oral administration of Stemonae radix, and was effective for chronic or acute cough [17]. In this study, although we did not extract the active components from Radix Stemonae Concentrated Decoction, the components effective against COPD may be certain alkaloids.\nLimitations: there are some limitations in this study. Groups receiving different dosages of Radix Stemonae should be added, and further investigation is needed to explore the potential molecular mechanism of Radix Stemonae in treating COPD. In further studies, we will therefore perform more experiments to optimize the dosage of Radix Stemonae, to identify the active components and to explore its molecular mechanism.", "In this study, Radix Stemonae concentrated decoction was used to treat COPD model rats. The pathological results of the lung tissues showed that the swelling, disorganization and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were improved significantly compared with the COPD model group.
The number of inflammatory cells in the small bronchial lumens and gland ducts was lower than in the model group, and alveolar wall thinning, alveolar enlargement and fusion were mitigated in the Radix Stemonae group. The test results for inflammatory mediators in serum and BALF suggested that Radix Stemonae concentrated decoction could significantly decrease the concentrations of TNF-α, IL-8 and LTB4. Therefore, we suggest that Radix Stemonae concentrated decoction can mitigate and improve airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators, which further delays the deterioration of lung function." ]
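As a small worked check on the serum data in Table 3, the sketch below computes the percentage reduction of each mediator in the Radix Stemonae group relative to the COPD model group from the reported group means. It is simple arithmetic on the published means, not a re-analysis, and the dictionary layout and names are illustrative.

```python
# Percent reduction of serum inflammatory mediators (Radix Stemonae vs. model),
# computed from the group means reported in Table 3 (ng/L).
serum_means = {
    "TNF-alpha": {"model": 200.43, "stemonae": 78.20},
    "IL-8":      {"model": 688.78, "stemonae": 378.71},
    "LTB4":      {"model": 910.94, "stemonae": 673.62},
}

for mediator, m in serum_means.items():
    reduction = (m["model"] - m["stemonae"]) / m["model"] * 100
    print(f"{mediator}: {reduction:.1f}% lower than the model group mean")
# TNF-alpha: ~61.0% lower; IL-8: ~45.0% lower; LTB4: ~26.1% lower
```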
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Radix Stemonae", "Chronic obstructive pulmonary disease", "Pathology", "Inflammatory mediators" ]
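A minimal sketch of the semi-quantitative severity grading described in the Methods (mild, moderate or severe according to whether the affected fraction of the slice is below one third, below two thirds, or above two thirds). The function name, the handling of exact boundary values and the example calls are assumptions for illustration, not part of the published protocol.

```python
# Illustrative encoding of the mild/moderate/severe grading rule from the
# Methods: grade by the fraction of the slice affected, with cut-offs at
# 1/3 and 2/3. Exact boundary values are assigned to the higher grade here.
def grade_severity(affected_fraction: float) -> str:
    """Return 'mild', 'moderate' or 'severe' for a fraction in [0, 1]."""
    if not 0.0 <= affected_fraction <= 1.0:
        raise ValueError("fraction must lie between 0 and 1")
    if affected_fraction < 1 / 3:
        return "mild"
    if affected_fraction < 2 / 3:
        return "moderate"
    return "severe"

print(grade_severity(0.25))  # mild
print(grade_severity(0.50))  # moderate
print(grade_severity(0.80))  # severe
```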
Background: Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease characterized by persistent airflow limitation [1]. It is often complicated by exacerbations and hospitalizations [2], which increase mortality and reduce life expectancy [3]. At present, Western medical treatment of COPD, especially of the stable stage, focuses mainly on symptomatic relief [4], and drug treatment aims chiefly at improving symptoms and/or reducing complications [5]. Treating stable COPD with Chinese medicines, or with integrated Chinese and Western medicines, has shown significant efficacy [6], improving pulmonary function and increasing diaphragm muscular tension [7]. A large body of high-quality, high-level evidence indicates that such treatment improves symptoms, reduces exacerbations, and enhances exercise capacity and quality of life better than Western medicines alone [8–10]. Stemonae Radix is a traditional Chinese medicine (TCM) used as an antitussive and insecticidal remedy; it is derived from Stemona tuberosa Lour., S. japonica and S. sessilifolia [11]. It has been widely used to treat respiratory diseases in China for thousands of years. Earlier studies have shown that Stemonae Radix can relieve cough [12] and remove phlegm [13], and that it has anthelmintic, antibacterial, antituberculous and antifungal activity [14, 15]. Alkaloids are the major active ingredients of Radix Stemonae, including stemoninine, stemoninoamide, bisdehydrostemoninine, neotuberostemonine, neostenine, tuberostemonine and tuberostemonine H. Liao et al. [16] found in an in vitro experiment on guinea pigs that an extract of Stemona radix relaxed airway smooth muscle, an effect mediated by interaction with muscarinic receptors and the dihydropyridine binding site. In addition, a pharmacokinetic study of components absorbed into rat plasma after oral administration of Stemonae Radix suggested that croomine and tuberostemonine are potential bioactive components for treating chronic or acute cough in rat models [17, 18]. In a previous study by our group, a Radix Stemonae lung-nourishing fried paste prepared by our hospital (Hu Yao Zhi Zi Hu 05050323) was used as an intervention in patients with stable COPD [19]. The paste mitigated the clinical symptoms of COPD patients and improved their quality of life [19], and it was safe and well tolerated. However, its mechanism of action in treating COPD remained unknown. In this study, we established a COPD model in SD rats by exposure to cigarette smoke combined with intratracheal instillation of lipopolysaccharide (LPS), following the method of Yiping Song et al. [20]. Radix Stemonae concentrated decoction was administered by gavage. By observing its effects on lung tissue pathology and inflammatory mediators in COPD rats, we sought to explore its possible mechanism of action.

Methods:
Ethics statement: All experiments in this study were approved by the Ethics Committee of Shanghai University of Traditional Chinese Medicine, Shanghai, China.

Animals: Thirty healthy male SD rats, 10 weeks old and weighing 200 ± 20 g, were provided by Shanghai SLAC Laboratory Animal Co., Ltd (Shanghai, China). The rats were housed in the animal room of Shuguang Hospital (Shanghai, China) for 1 week before the experiment. Room temperature was maintained at 25 ± 1 °C, with 10–15 air changes per hour, relative humidity of (50 ± 10) %, ammonia concentration below 14 mg/m3 and noise ≤60 dB. Sterilized diet and water were freely available.

Reagents and drugs: Lipopolysaccharide (LPS) was purchased from Sigma (USA). Detection kits for TNF-α, IL-8 and LTB4 were purchased from JRDUN Biotechnology Shanghai Co., Ltd (Shanghai, China). Radix Stemonae Concentrated Decoction was obtained from Yueyang Hospital of Integrated Traditional Chinese and Western Medicine affiliated to Shanghai University of Traditional Chinese Medicine (Shanghai, China). Daqianmen cigarettes (tar 13 mg/stick, nicotine 1.0 mg/stick, carbon monoxide 14 mg/stick) were purchased from Shanghai Tobacco Group Co., Ltd (Shanghai, China). Preparation of Radix Stemonae Concentrated Decoction: the decoction was prepared by the Department of Pharmacy of Yueyang Hospital. The raw material was boiled in water twice, the filtrate was concentrated to 2 g/mL, and 70 % ethanol was added for precipitation. After standing for 24 h, the ethanol was removed and an appropriate amount of water was added to dilute the solution to 0.6 g/mL. The dose for rats was calculated as 15 times the clinical human dose according to the "Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients" proposed by Jihan Huang et al. in China [21] (each 250 g of Radix Stemonae Lung-nourishing Fried Paste contains 77.5 g of Radix Stemonae crude drug; the clinical dose was 10 g per administration, three times daily).
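As a worked illustration of the body-weight dose scaling described above, the short Python sketch below converts the clinical dose into a rat gavage volume. The reference human body weight (60 kg), the rat weight (0.25 kg) and the reading of the 15-fold factor as a per-kilogram multiplier are assumptions introduced for illustration; the paper states only the conversion table used and the final regimen (1 mL of 0.6 g/mL decoction, three times daily).

```python
# Hypothetical worked example of the dose conversion described above.
# Assumed values (not stated in the text): 60 kg reference human, 0.25 kg rat,
# 15-fold human-to-rat factor applied per kilogram of body weight.

PASTE_DOSE_G = 10.0           # clinical dose of the fried paste, g per administration
DOSES_PER_DAY = 3             # three times daily
CRUDE_PER_PASTE = 77.5 / 250  # g crude Radix Stemonae per g of paste
HUMAN_WEIGHT_KG = 60.0        # assumed adult body weight
RAT_WEIGHT_KG = 0.25          # assumed rat body weight
CONVERSION_FACTOR = 15.0      # per-kg multiplier taken from ref. [21]
DECOCTION_G_PER_ML = 0.6      # stated concentration of the decoction

human_crude_per_kg = PASTE_DOSE_G * DOSES_PER_DAY * CRUDE_PER_PASTE / HUMAN_WEIGHT_KG
rat_crude_per_kg = human_crude_per_kg * CONVERSION_FACTOR       # g/kg/day for the rat
rat_crude_per_day = rat_crude_per_kg * RAT_WEIGHT_KG            # g/day for one rat
rat_volume_ml_per_day = rat_crude_per_day / DECOCTION_G_PER_ML  # mL of decoction per day

print(f"human dose: {human_crude_per_kg:.3f} g crude drug/kg/day")
print(f"rat dose:   {rat_crude_per_kg:.3f} g crude drug/kg/day")
print(f"gavage:     {rat_volume_ml_per_day:.2f} mL of decoction per day")
```

Depending on how the reference body weight and the basis of the 0.6 g/mL concentration are chosen, this calculation lands in the general vicinity of the regimen actually used; the sketch is meant only to make the scaling logic explicit, not to reproduce the authors' exact figures.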
Instruments and equipment: Microplate reader (Labsystems, MK3), pipettes (Gilson Pipetman P), paraffin slicer (Leica RM2235), water bath (Leica HI1210), slide drying apparatus (Leica HI1220, horizontal type), embedding machine (Changzhou Zhongwei Electronic Instrument Factory, BMJ-111), image analysis system (Olympus BX51), animal lung function analyzer (BioSystem XA SFT3410; provided by the Department of Respiratory Medicine, Shuguang Hospital affiliated to Shanghai University of Traditional Chinese Medicine), electronic scales (Shanghai Precision and Scientific Instrument Co., Ltd., HANGPING FA1004N), optical microscope (Olympus BX41TF), electronic analytical balance (Shimadzu AY220) and electrically heated thermostatic water bath (Sumsung Laboratory Instrument Co., Ltd., DK-SD).

Preparation of animal samples: The rats were randomized into a blank group, a COPD model group and a Radix Stemonae group using a random number method, with 10 rats in each group. Magenta (red) and picric acid (yellow) were used to mark the animals. The COPD model was prepared according to the method of Song Yiping et al. [20]: LPS (1 g/L) was instilled into the trachea at a dose of 200 µg on the mornings of day 1 and day 14; on days 2–13 and days 15–112, the rats were placed in a self-made sealed plexiglass chamber for passive cigarette-smoke exposure, 30 min per session, twice a day at a 2-h interval. In the Radix Stemonae group, the procedure was identical to the model group for the first 4 weeks; from week 5 to week 16, in addition to the continued smoke exposure, Radix Stemonae Concentrated Decoction was given by gavage every day, 1 mL per administration, three times a day.
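The smoke-exposure schedule above implies a fixed total exposure load, which can be tallied directly. The short sketch below does that arithmetic; it assumes exposure took place on every day in the stated ranges, which the text implies but does not state explicitly.

```python
# Tally the passive-smoking exposure load implied by the schedule above.
# Assumption: exposure occurred on every day within days 2-13 and 15-112.

exposure_days = len(range(2, 14)) + len(range(15, 113))  # 12 + 98 = 110 days
sessions = exposure_days * 2                             # two sessions per day
total_hours = sessions * 30 / 60                         # 30 min per session

print(f"exposure days:  {exposure_days}")   # 110
print(f"smoke sessions: {sessions}")        # 220
print(f"total exposure: {total_hours:.0f} h over 16 weeks")  # 110 h
```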
Pulmonary function test: On day 112 of the experiment, pulmonary function was first tested in vivo according to the method of Wang et al. [22]. Briefly, the rats were anesthetized by intraperitoneal injection of 2 % sodium pentobarbital (40 mg/kg) and fixed in a supine position on the bench. The fur over the midline of the neck was shaved, an inverted T-shaped incision was made below the cricoid (annular) cartilage, and the trachea was intubated and the cannula fixed in place. The tracheal cannula was connected to a small-animal spirometer to measure the ratio of forced expiratory volume in 0.2 s to forced vital capacity (FEV0.2/FVC), peak expiratory flow (PEF), inspiratory resistance (Ri) and dynamic lung compliance (Cldyn) for evaluation of lung function.

Morphologic observation of lung tissue: The right lower lobe was removed and fixed in 4 % formaldehyde. Twenty-four hours later, the tissue was paraffin-embedded, sectioned, stained with hematoxylin-eosin (HE) and examined by routine pathology. Three fields of view were randomly selected from each slide under the light microscope to assess histological and morphological changes, including semi-quantitative grading of the severity of lung and tracheal inflammation and of the degree of alveolar fusion. Mild: inflammatory-cell infiltration of the lung tissue and small bronchi involves less than 1/3 of the slide, and alveolar fusion involves less than 1/3 of the field of view. Moderate: infiltration and alveolar fusion each involve between 1/3 and 2/3 of the slide. Severe: infiltration and alveolar fusion each involve more than 2/3 of the slide.
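The semi-quantitative grading rule above is simply a pair of thresholds on the affected fraction of each slide, so it can be expressed as a small helper function. The sketch below is hypothetical (the study scored slides visually, and the function and its names are not from the paper); it reads the moderate band as 1/3 to 2/3.

```python
# Hypothetical scoring helper mirroring the grading thresholds described above.
# Input: estimated fraction of the slide affected (0.0-1.0), e.g. inflammatory
# infiltration or alveolar fusion. Output: semi-quantitative grade.

def grade_lesion(affected_fraction: float) -> str:
    if not 0.0 <= affected_fraction <= 1.0:
        raise ValueError("affected_fraction must be between 0 and 1")
    if affected_fraction < 1 / 3:
        return "mild"
    if affected_fraction < 2 / 3:
        return "moderate"
    return "severe"

# Example: three randomly chosen fields from one slide
for frac in (0.2, 0.5, 0.8):
    print(frac, "->", grade_lesion(frac))
```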
Inflammatory mediators in serum and bronchoalveolar lavage fluid (BALF): Before the animals were sacrificed, 5 mL of abdominal aortic blood was collected and centrifuged at 3000 rpm for 10 min; the supernatant was aliquoted and stored at −70 °C. The chest was then opened to expose the trachea and lungs. After ligation of the right main bronchus, the left lung was lavaged with 2 mL of normal saline, and about 1.5 mL of fluid (recovery rate 75 %) was retrieved each time and filtered through gauze. The lavage was repeated three times, yielding 4.5 mL in total, of which 3.5 mL was centrifuged at 1500 rpm for 15 min at 4 °C. The supernatant was collected and stored at −70 °C. The concentrations of TNF-α, IL-8 and LTB4 in serum and BALF were determined by enzyme-linked immunosorbent assay (ELISA) according to the kit instructions.

Statistical analysis: Statistical analyses were performed with SPSS 11.5. Data are presented as mean ± SD. One-way ANOVA was used to compare means across the three groups when the data met the assumptions of normality and homogeneity of variance; the Mann-Whitney test was used for pairwise comparisons when these assumptions were not met. P < 0.05 was considered statistically significant.
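The analysis itself was run in SPSS 11.5, but the same decision logic can be reproduced with standard open-source tools. The sketch below shows an equivalent workflow in Python with SciPy; the numbers are made-up placeholders, not the study's measurements.

```python
# Illustrative re-implementation of the statistical workflow described above:
# one-way ANOVA across the three groups when normality and homogeneity of
# variance hold, otherwise a Mann-Whitney U test for pairwise comparison.
# The data below are placeholders, not values from the study.

from scipy import stats

blank = [82.1, 80.5, 84.0, 81.7, 83.2]
model = [65.2, 63.8, 66.1, 62.9, 64.4]
stemonae = [71.9, 73.0, 72.4, 70.8, 72.6]

# Assumption checks: Shapiro-Wilk for normality, Levene for equal variances
normal = all(stats.shapiro(g)[1] > 0.05 for g in (blank, model, stemonae))
equal_var = stats.levene(blank, model, stemonae)[1] > 0.05

if normal and equal_var:
    f_stat, p_value = stats.f_oneway(blank, model, stemonae)
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
else:
    u_stat, p_value = stats.mannwhitneyu(model, stemonae, alternative="two-sided")
    print(f"Mann-Whitney U (model vs. Radix Stemonae): U = {u_stat:.1f}, p = {p_value:.4f}")

print("significant at P < 0.05" if p_value < 0.05 else "not significant")
```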
Results:
Comparison of general conditions: In the blank group, the rats had smooth, glossy fur, bright eyes, stable breathing, normal food and water intake, good nutrition, full musculature, flexible movements and swift responses. In the COPD model group, the rats became agitated in the early stage of cigarette-smoke exposure and successively developed sneezing, cough, shortness of breath and other symptoms; in the late stage their fur became dry and yellow. Breathing was fast and shallow in some rats and slow and deep in others. All rats showed reduced activity, slow movements, low spirits and decreased food and water intake, and some had a small amount of sticky secretion around the mouth and nose. In the Radix Stemonae group, the rats' condition during the early stage of smoke exposure was the same as in the model group: they became agitated and successively developed sneezing, cough and shortness of breath. After gavage with Radix Stemonae, these symptoms gradually eased, and mental state, behavioral activity and food and water intake gradually returned to normal, although the fur remained dry and yellow.

Body weight of rats: Body weight was measured on day 1 and in weeks 4, 8 and 16. Compared with the blank group, body weights in the COPD model group were significantly decreased after 4 weeks of smoke exposure (P < 0.05). Compared with the model group, body weights were significantly increased after 4 weeks of gavage with Radix Stemonae, i.e. in week 8 (P < 0.05), as shown in Table 1.

Table 1. Comparison of rat body weights (g) in the three groups
Group            Day 1            Week 4            Week 8             Week 16
Blank            206.70 ± 4.19    267.40 ± 3.89     326.10 ± 5.30#     393.70 ± 3.95#
Model            207.50 ± 2.78    248.63 ± 3.96*    309.38 ± 5.63*     376.75 ± 5.65*
Radix Stemonae   208.20 ± 2.30    249.80 ± 4.34*    318.80 ± 5.67*#    387.00 ± 7.04*#
One-way ANOVA; *P < 0.05 vs. blank group; #P < 0.05 vs. model group.
Pulmonary function test: The lung function results of the three groups are presented in Table 2. Compared with the blank group, inspiratory resistance (Ri) in the COPD model group was significantly increased (P < 0.05), dynamic lung compliance (Cldyn) was significantly decreased (P < 0.05), and FEV0.2/FVC and peak expiratory flow (PEF) were significantly decreased (P < 0.05). Compared with the model group, airway Ri in the Radix Stemonae group was significantly decreased (P < 0.05), while Cldyn, FEV0.2/FVC and PEF were significantly increased (P < 0.05).

Table 2. Lung function values in the three groups (n = 10 per group)
Group            FEV0.2/FVC (%)   PEF (mL/s)       Ri (cmH2O·s/L)   Cldyn (mL/cmH2O)
Blank            82.41 ± 3.68#    75.10 ± 2.78#    0.32 ± 0.02#     0.54 ± 0.05#
Model            64.33 ± 6.66*    55.38 ± 4.15*    0.47 ± 0.04*     0.35 ± 0.03*
Radix Stemonae   72.22 ± 1.16*#   66.87 ± 3.42*#   0.38 ± 0.01*#    0.47 ± 0.03*#
*P < 0.05 vs. blank group; #P < 0.05 vs. model group.

Pathological changes in rats: Pathological changes in the lungs are shown in Fig. 1.
Fig. 1. Photographs of HE-stained lung tissue under the optical microscope (×100): a blank group; b COPD model group; c Radix Stemonae group.
In the blank group, the shape and structure of the bronchi, pulmonary alveoli and alveolar septa were normal (Fig. 1a). The bronchial mucosal epithelium was intact, the ciliated columnar epithelium was regularly arranged, and only a few goblet cells were seen. There were few cells in the mucosa and lamina propria, and no microglandular hyperplasia was observed. The alveoli were of normal size and intact structure, with only local infiltration of a small number of lymphocyte-dominated inflammatory cells; there was no congestion or edema of the alveolar septa, no significant thickening of the arterial walls and no dilation of the alveolar cavities. In the COPD model group (Fig. 1b), the overall architecture of the lung tissue was preserved, but the tracheal and bronchial mucosal epithelium was swollen, disordered, detached and reduced; the cilia were lodged, goblet cells were increased and the glands were hypertrophic. Mucous plugs and large numbers of inflammatory cells were seen inside the small bronchial lumens and gland ducts, with infiltration of peripheral lymphocytes and plasma cells and smooth muscle hypertrophy. The terminal bronchiolar mucosal epithelium was irregularly arranged. The alveolar walls were thinned and ruptured, the alveoli were enlarged, and enlargement and fusion of multiple alveolar cavities with emphysema were observed. In the Radix Stemonae group (Fig. 1c), the overall architecture of the lung tissue was likewise preserved; swelling, disorder and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were milder than in the model group. Inflammatory cells were still seen inside the small bronchial lumens and gland ducts, and infiltration of peripheral lymphocytes and plasma cells was observed, but these changes were alleviated to varying degrees compared with the model group. Hyperplasia and hypertrophy of smooth muscle were present, and alveolar wall thinning, alveolar enlargement and fusion of some alveolar cavities were all milder than in the model group.
Inflammatory mediators in serum and BALF: Compared with the blank group, the concentrations of TNF-α, IL-8 and LTB4 in serum and BALF were significantly increased in both the COPD model group and the Radix Stemonae group (P < 0.05). Compared with the model group, the levels of all three inflammatory mediators in serum and BALF in the Radix Stemonae group were significantly decreased (P < 0.05), as shown in Tables 3 and 4.

Table 3. Serum inflammatory mediators in the three groups (n = 10 per group)
Group            TNF-α (ng/L)       IL-8 (ng/L)        LTB4 (ng/L)
Blank            33.88 ± 4.90#      142.60 ± 19.75#    502.36 ± 55.62#
Model            200.43 ± 22.71*    688.78 ± 49.07*    910.94 ± 65.14*
Radix Stemonae   78.20 ± 22.59*#    378.71 ± 64.25*#   673.62 ± 50.82*#
*P < 0.05 vs. blank group; #P < 0.05 vs. model group.

Table 4. BALF inflammatory mediators in the three groups (n = 10 per group)
Group            TNF-α (ng/L)       IL-8 (ng/L)        LTB4 (ng/L)
Blank            35.27 ± 9.16#      87.66 ± 23.97#     516.70 ± 45.34#
Model            189.90 ± 46.67*    737.25 ± 87.58*    909.27 ± 51.69*
Radix Stemonae   125.08 ± 29.16*#   254.10 ± 61.02*#   675.04 ± 33.82*#
*P < 0.05 vs. blank group; #P < 0.05 vs. model group.
Discussion: COPD is a common and frequently occurring respiratory disease. In an epidemiological survey of 20,245 adults in seven regions, COPD affected 8.2 % of the population over 40 years of age [23]. Its prevalence is high and has been rising year by year, making it the only major common disease that has kept increasing [24]. The risk of COPD reflects an interaction between genetic factors and many different environmental factors, and can also be influenced by comorbid disease. Environmental factors include tobacco smoking [25], infections [26], occupational dusts and chemicals [27], air pollution [28, 29] and other exposures [30]. Smoking and respiratory infections are important causes of COPD. In this study, exposure to cigarette smoke combined with intratracheal instillation of LPS was used to simulate the pathological process of COPD. The model rats developed cough, shortness of breath and other clinical manifestations, as well as weight loss. Pathologically, inflammation of the airways and lung parenchyma in the COPD model group was markedly aggravated compared with the blank group, and obvious emphysema developed. Pulmonary function tests likewise showed that airway resistance in the model group was significantly increased, compliance was significantly decreased, and FEV0.2/FVC and PEF were significantly decreased, indicating that the model had been successfully established. Abnormal expression of TNF-α, IL-8, LTB4 and other inflammatory mediators has important effects on the occurrence and acute exacerbation of COPD. Inflammatory mediators can cause lung dysfunction and airway inflammation as well as a systemic inflammatory response [31]. Many studies [32, 33] have therefore explored new approaches to COPD treatment by monitoring changes in the levels of inflammatory mediators. TNF-α exerts chemotactic effects on leukocytes, can trigger the inflammatory response, and may participate in the formation of emphysema and damage to epithelial cells [34]; it also promotes inflammation, tissue fibrosis and angiogenesis through the breakdown of extracellular proteins [35]. The main function of IL-8 is to activate neutrophils and promote the release of oxygen free radicals and proteases, thereby injuring the alveolar epithelium and microvasculature and playing an important role in the occurrence and development of airway inflammation in COPD patients [36]. LTB4 recruits and activates neutrophils and further amplifies the local inflammatory response.
Meanwhile, LTB4 increases the infiltration of inflammatory cells into the bronchial mucosa [37]. After gavage therapy with Radix Stemonae, pulmonary function indicators such as peak expiratory flow (PEF), inspiratory resistance (Ri) and dynamic lung compliance (Cldyn) were markedly better than in the COPD model group, and inflammation and emphysema were much milder. The inflammatory mediators TNF-α, IL-8 and LTB4 were significantly decreased in the Radix Stemonae group. Thus Radix Stemonae concentrated decoction may mitigate and improve airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators. Radix Stemonae is the dried root of Stemona sessilifolia, Stemona japonica or Stemona tuberosa Lour.; in TCM terms it has a sweet-bitter taste and warm nature and acts on the lung channel [38]. The amount of Radix Stemonae Concentrated Decoction given to the rats was calculated as 15 times the clinical human dose according to the "Dose Conversion Coefficients Table per Kilogram of Body Weight between Animals and Patients" proposed by Jihan Huang et al. in China [21], although it would be preferable to optimize the dosage in a pilot study. The modern literature reports that Radix Stemonae mainly contains alkaloids [39], including bisdehydrostemoninine, neotuberostemonine, stemoninine, stemoninoamide, neostenine, tuberostemonine and tuberostemonine H, which exert therapeutic effects in lung diseases by combating bacteria, eliminating sputum, suppressing cough and relaxing bronchial smooth muscle [40]. Radix Stemonae alkaloid extract has a slow and sustained relaxing effect on histamine-induced bronchial smooth muscle spasm in guinea pigs. Liao et al. [16] found in an in vitro experiment that the water extract of Stemona sessilifolia had a spasmolytic effect on guinea pig bronchial smooth muscle, mediated by interaction with muscarinic receptors and the dihydropyridine binding site. A pharmacokinetic study using ultra-performance liquid chromatography/mass spectrometry of components absorbed into rat plasma after oral administration of Stemonae Radix indicated that croomine and tuberostemonine are potential efficacy markers [18], and stemoninine, identified by HPLC as the major absorbed compound after oral administration, was effective against chronic and acute cough [17]. Although we did not isolate the active components from the Radix Stemonae Concentrated Decoction in this study, the components responsible for its efficacy in COPD are likely to be alkaloids.

Limitations: This study has several limitations. Groups receiving different dosages of Radix Stemonae should be added, and further work is needed to explore the molecular mechanism by which Radix Stemonae acts on COPD. In future studies we will therefore perform additional experiments to optimize the dosage, identify the active components and clarify the molecular mechanism.

Conclusion: In this study, Radix Stemonae concentrated decoction was used to treat COPD model rats. The pathological findings in lung tissue showed that swelling, disorder and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were significantly improved compared with the COPD model group.
The number of inflammatory cells in the small bronchial lumens and gland ducts was lower than in the model group, and alveolar wall thinning, alveolar enlargement and fusion were mitigated in the Radix Stemonae group. The measurements of inflammatory mediators in serum and BALF indicated that Radix Stemonae concentrated decoction significantly decreased the concentrations of TNF-α, IL-8 and LTB4. We therefore suggest that Radix Stemonae concentrated decoction can mitigate and improve airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators, thereby delaying the deterioration of lung function.
Abstract. Background: Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease. At present, Western medical treatment of COPD focuses mainly on symptomatic relief, whereas treating stable COPD with Chinese medicines or integrated Chinese and Western medicines has shown significant efficacy. In this study, we aimed to observe the effect of Radix Stemonae concentrated decoction on lung tissue pathology and inflammatory mediators in COPD rats and to explore its possible mechanism. Methods: SD rats were randomized into a blank group, a COPD model group and a Radix Stemonae group, with 10 rats in each group, and were kept for 112 days. Before the rats were sacrificed, lung function was tested; the right lower lung was fixed for morphologic observation, and inflammatory mediators in serum were determined by enzyme-linked immunosorbent assay. Results: Body weight in the model group was significantly decreased compared with the blank group (P < 0.05) and significantly increased after gavage therapy with Radix Stemonae (P < 0.05). Compared with the blank group, pulmonary function in the model group was significantly impaired (P < 0.05), whereas the corresponding indicators in the Radix Stemonae group were markedly better than in the model group (P < 0.05). Pathologically, airway inflammation was aggravated in the model group, while inflammation and emphysema were much milder in the Radix Stemonae group. The concentrations of TNF-α, IL-8 and LTB4 were significantly increased in both the model group and the Radix Stemonae group (P < 0.05), but the levels in the Radix Stemonae group were significantly lower than in the model group (P < 0.05). Conclusions: Radix Stemonae concentrated decoction may mitigate and improve airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators.
Background: Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease characterized by persistent airflow limitation [1]. It is often complicated by exacerbations and hospitalizations [2], which increase mortality and reduce life expectancy [3]. At present, Western medicine treatment of COPD, especially in the stable stage, focuses mainly on symptomatic treatment [4], and drug therapy aims mostly at improving symptoms and/or reducing complications [5]. Using Chinese medicines, or integrated Chinese and Western medicines, to treat stable COPD has shown significant efficacy [6], improving pulmonary function and increasing diaphragm muscular tension [7]. A large amount of high-quality, high-level evidence has shown that its efficacy in improving symptoms, reducing exacerbations, and improving exercise capacity and quality of life is better than that of treatment with Western medicines alone [8–10]. Stemonae Radix is a traditional Chinese medicine (TCM) used as an antitussive and insecticidal remedy, derived from Stemona tuberosa Lour., S. japonica and S. sessilifolia [11]. It has been widely used for the treatment of respiratory diseases in China for thousands of years. Earlier studies have shown that Stemonae Radix can relieve cough [12] and remove phlegm [13], and that it has anthelmintic, antibacterial, antituberculous and antifungal activity [14, 15]. Alkaloids are the major effective ingredients in Radix Stemonae, which include stemoninine, stemoninoamide, bisdehydrostemoninine, neotuberostemonine, neostenine, tuberostemonine and tuberostemonine H. Liao et al. [16] found, in an in vitro experiment on guinea pigs, that the extract of Stemona radix had a relaxing effect on airway smooth muscle and that this relaxation is realized through interaction with muscarinic receptors and dihydropyridine binding sites. In addition, a pharmacokinetic study of multiple components absorbed into rat plasma after oral administration of Stemonae Radix showed that croomine and tuberostemonine are potential bioactive components for the treatment of chronic or acute cough in rat models [17, 18]. In a previous study by our group, a Radix Stemonae lung-nourishing fried paste prepared by our hospital (Hu Yao Zhi Zi Hu 05050323) was used to treat patients with stable COPD [19]. The results showed that the Radix Stemonae lung-nourishing fried paste could mitigate the clinical symptoms of COPD patients and improve their quality of life [19]; moreover, it was safe and reliable. However, its mechanism of action in treating COPD was unknown. In this study, we established COPD models in SD rats by exposure to cigarette smoke combined with intratracheal instillation of LPS, according to the method of Yiping Song et al. [20]. Radix Stemonae concentrated decoction was used for gavage therapy. By observing the effect of Radix Stemonae concentrated decoction on lung tissue pathology and inflammatory mediators in COPD rats, we explored its possible mechanism of action. Conclusion: In this study, Radix Stemonae concentrated decoction was used to treat COPD model rats. Pathological examination of lung tissues showed that the swelling, disorganization and detachment of the tracheal and bronchial mucosal epithelium and the increase in goblet cells were significantly improved compared with the COPD model group. The number of inflammatory cells in the small bronchial lumina and gland ducts was lower than in the model group.
Alveolar wall thinning, alveolar enlargement and fusion were also mitigated in the Radix Stemonae group. Measurements of inflammatory mediators in serum and BALF suggested that Radix Stemonae concentrated decoction could significantly decrease the concentrations of the inflammatory mediators TNF-α, IL-8 and LTB4. Therefore, we suggest that Radix Stemonae concentrated decoction can mitigate airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators, thereby delaying the deterioration of lung function.
Background: Chronic obstructive pulmonary disease (COPD) is a common and frequently occurring respiratory disease. At present, Western medicine treatment of COPD focuses mainly on symptomatic treatment. Using Chinese medicines or integrated Chinese and Western medicines to treat stable COPD has significant efficacy. In this study, we aimed to observe the effect of Radix Stemonae concentrated decoction on lung tissue pathology and inflammatory mediators in COPD rats and to explore its possible mechanism. Methods: SD rats were randomized into a blank group, a COPD model group and a Radix Stemonae group, with 10 rats in each group. Rats were fed for 112 days. Before the rats were sacrificed, lung function was tested. The right lower lung was fixed for morphologic observation. Inflammatory mediators in serum were determined using enzyme-linked immunosorbent assay (ELISA). Results: Body weight of animals in the model group was significantly decreased compared with the blank group (P < 0.05). After gavage therapy with Radix Stemonae, body weight was significantly increased (P < 0.05). Compared with the blank group, pulmonary function of rats in the model group was significantly abnormal (P < 0.05), whereas in the Radix Stemonae group these indicators were significantly better than in the model group (P < 0.05). As for pathological changes in the lungs, airway inflammation in the model group was aggravated, while in the Radix Stemonae group inflammation and emphysema were much milder. The concentrations of TNF-α, IL-8 and LTB4 were significantly increased in both the model group and the Radix Stemonae group (P < 0.05), but the levels in the Radix Stemonae group were significantly lower than in the model group (P < 0.05). Conclusions: Radix Stemonae concentrated decoction may mitigate airway remodeling in the lungs of COPD rats by inhibiting the release of inflammatory mediators.
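The abstract above reports group comparisons at P < 0.05 but this excerpt does not name the statistical test used. The sketch below is a minimal, hypothetical illustration of how such a three-group comparison of a serum mediator could be run in Python with SciPy; the concentration values, group sizes, and the choice of one-way ANOVA with pairwise t-tests are all assumptions for illustration, not the authors' method or data.

```python
# Hypothetical sketch of a blank vs. COPD model vs. Radix Stemonae comparison.
# The concentrations below are placeholder values, NOT data from the study.
import numpy as np
from scipy import stats

tnf_alpha = {
    "blank":    np.array([12.1, 13.4, 11.8, 12.9, 13.0]),  # illustrative only
    "model":    np.array([25.6, 27.1, 24.9, 26.3, 28.0]),  # illustrative only
    "stemonae": np.array([18.2, 19.5, 17.9, 18.8, 20.1]),  # illustrative only
}

# Overall difference among the three groups (assumed one-way ANOVA)
f_stat, p_overall = stats.f_oneway(*tnf_alpha.values())
print(f"one-way ANOVA: F={f_stat:.2f}, P={p_overall:.4f}")

# Pairwise contrasts corresponding to the reported P < 0.05 comparisons
for a, b in [("blank", "model"), ("model", "stemonae")]:
    t_stat, p_pair = stats.ttest_ind(tnf_alpha[a], tnf_alpha[b])
    print(f"{a} vs {b}: t={t_stat:.2f}, P={p_pair:.4f}")
```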
11,338
356
[ 22, 112, 267, 148, 193, 156, 232, 185, 77, 243, 276, 340, 478, 410 ]
19
[ "group", "radix", "05", "model", "lung", "model group", "rats", "stemonae", "compared", "radix stemonae" ]
[ "dosage radix stemonae", "radix stemonae cough", "radix stemonae mainly", "bronchi radix stemonae", "stemonae lung nourishing" ]
null
[CONTENT] Radix Stemonae | Chronic obstructive pulmonary disease | Pathology | Inflammatory mediators [SUMMARY]
null
[CONTENT] Radix Stemonae | Chronic obstructive pulmonary disease | Pathology | Inflammatory mediators [SUMMARY]
[CONTENT] Radix Stemonae | Chronic obstructive pulmonary disease | Pathology | Inflammatory mediators [SUMMARY]
[CONTENT] Radix Stemonae | Chronic obstructive pulmonary disease | Pathology | Inflammatory mediators [SUMMARY]
[CONTENT] Radix Stemonae | Chronic obstructive pulmonary disease | Pathology | Inflammatory mediators [SUMMARY]
[CONTENT] Animals | Drugs, Chinese Herbal | Humans | Interleukin-8 | Lung | Male | Pulmonary Disease, Chronic Obstructive | Rats | Rats, Sprague-Dawley | Tumor Necrosis Factor-alpha [SUMMARY]
null
[CONTENT] Animals | Drugs, Chinese Herbal | Humans | Interleukin-8 | Lung | Male | Pulmonary Disease, Chronic Obstructive | Rats | Rats, Sprague-Dawley | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Animals | Drugs, Chinese Herbal | Humans | Interleukin-8 | Lung | Male | Pulmonary Disease, Chronic Obstructive | Rats | Rats, Sprague-Dawley | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Animals | Drugs, Chinese Herbal | Humans | Interleukin-8 | Lung | Male | Pulmonary Disease, Chronic Obstructive | Rats | Rats, Sprague-Dawley | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] Animals | Drugs, Chinese Herbal | Humans | Interleukin-8 | Lung | Male | Pulmonary Disease, Chronic Obstructive | Rats | Rats, Sprague-Dawley | Tumor Necrosis Factor-alpha [SUMMARY]
[CONTENT] dosage radix stemonae | radix stemonae cough | radix stemonae mainly | bronchi radix stemonae | stemonae lung nourishing [SUMMARY]
null
[CONTENT] dosage radix stemonae | radix stemonae cough | radix stemonae mainly | bronchi radix stemonae | stemonae lung nourishing [SUMMARY]
[CONTENT] dosage radix stemonae | radix stemonae cough | radix stemonae mainly | bronchi radix stemonae | stemonae lung nourishing [SUMMARY]
[CONTENT] dosage radix stemonae | radix stemonae cough | radix stemonae mainly | bronchi radix stemonae | stemonae lung nourishing [SUMMARY]
[CONTENT] dosage radix stemonae | radix stemonae cough | radix stemonae mainly | bronchi radix stemonae | stemonae lung nourishing [SUMMARY]
[CONTENT] group | radix | 05 | model | lung | model group | rats | stemonae | compared | radix stemonae [SUMMARY]
null
[CONTENT] group | radix | 05 | model | lung | model group | rats | stemonae | compared | radix stemonae [SUMMARY]
[CONTENT] group | radix | 05 | model | lung | model group | rats | stemonae | compared | radix stemonae [SUMMARY]
[CONTENT] group | radix | 05 | model | lung | model group | rats | stemonae | compared | radix stemonae [SUMMARY]
[CONTENT] group | radix | 05 | model | lung | model group | rats | stemonae | compared | radix stemonae [SUMMARY]
[CONTENT] treatment | improving | radix | copd | stemonae | life | medicines | quality | study | stemonae radix [SUMMARY]
null
[CONTENT] group | 05 | compared | significant difference 05 | difference 05 | group significant difference 05 | group significant difference | group significant | model group | model [SUMMARY]
[CONTENT] inflammatory | suggested | suggested radix | suggested radix stemonae | suggested radix stemonae concentrated | concentrated | inflammatory mediators | concentrated decoction | radix stemonae concentrated | radix stemonae concentrated decoction [SUMMARY]
[CONTENT] group | 05 | shanghai | radix | stemonae | lung | model | compared | model group | rats [SUMMARY]
[CONTENT] group | 05 | shanghai | radix | stemonae | lung | model | compared | model group | rats [SUMMARY]
[CONTENT] ||| ||| Chinese | Chinese | Western ||| Radix Stemonae [SUMMARY]
null
[CONTENT] blank group | 0.05 ||| Radix Stemonae | 0.05 ||| 0.05 | Radix Stemonae | 0.05 ||| ||| Radix Stemonae | emphysema ||| TNF | LTB4 | Radix Stemonae | 0.05 ||| Radix Stemonae | 0.05 [SUMMARY]
[CONTENT] Radix Stemonae [SUMMARY]
[CONTENT] ||| ||| Chinese | Chinese | Western ||| Radix Stemonae ||| Radix Stemonae | 10 ||| 112 days ||| ||| ||| ||| ||| blank group | 0.05 ||| Radix Stemonae | 0.05 ||| 0.05 | Radix Stemonae | 0.05 ||| ||| Radix Stemonae | emphysema ||| TNF | LTB4 | Radix Stemonae | 0.05 ||| Radix Stemonae | 0.05 ||| Radix Stemonae [SUMMARY]
[CONTENT] ||| ||| Chinese | Chinese | Western ||| Radix Stemonae ||| Radix Stemonae | 10 ||| 112 days ||| ||| ||| ||| ||| blank group | 0.05 ||| Radix Stemonae | 0.05 ||| 0.05 | Radix Stemonae | 0.05 ||| ||| Radix Stemonae | emphysema ||| TNF | LTB4 | Radix Stemonae | 0.05 ||| Radix Stemonae | 0.05 ||| Radix Stemonae [SUMMARY]
Diagnostic value of white blood cell parameters for COVID-19: Is there a role for HFLC and IG?
34623763
As the Coronavirus disease 2019 (COVID-19) pandemic is still ongoing, with patients overwhelming healthcare facilities, we aimed to investigate the ability of the white blood cell count (WBC) and its subsets, high fluorescence lymphocyte cells (HFLC), immature granulocyte count (IG), and C-reactive protein (CRP) to aid the diagnosis of COVID-19 during the triage process and to serve as indicators of disease progression to serious and critical condition.
INTRODUCTION
We collected clinical and laboratory data of patients, suspected COVID-19 cases, admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Greece). We selected 197 negative and 368 positive cases, confirmed by polymerase chain reaction test for severe acute respiratory syndrome coronavirus 2. COVID-19 cases were classified into mild, serious, and critical disease. Receiver operating characteristic curve and binary logistic regression analysis were utilized for assessing the diagnosing ability of biomarkers.
METHODS
WBC, neutrophil count (NEUT), and HFLC can efficiently discriminate negative cases from mild and serious COVID-19, whereas eosinopenia and basopenia are early indicators of the disease. The combined WBC-HFLC marker is the best diagnostic marker for both mild (sensitivity: 90.6% and specificity: 64.1%) and serious (sensitivity: 90.3% and specificity: 73.4%) disease. CRP and lymphocyte count are early indicators of progression to serious disease, whereas WBC, NEUT, IG, and the neutrophil-to-lymphocyte ratio are the best indicators of critical disease.
RESULTS
Lymphopenia is not useful in screening patients with COVID-19. HFLC is a good diagnostic marker for mild and serious disease either as a single marker or combined with WBC whereas IG is a good indicator of progression to critical disease.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "C-Reactive Protein", "COVID-19", "COVID-19 Testing", "Case-Control Studies", "Disease Progression", "Female", "Humans", "Leukocyte Count", "Male", "Middle Aged", "Retrospective Studies", "Severity of Illness Index" ]
8653118
INTRODUCTION
Coronavirus disease 2019 (COVID‐19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), first discovered in December 2019 in Wuhan, China. Since then, the world has been caught up in one of the deadliest pandemics in history with SARS‐CoV‐2 having spread to over 210 countries, resulting in more than 150 million confirmed cases and more than 3 million deaths as of 3 May 2021. 1 Furthermore, the increased transmissibility of SARS‐CoV‐2 variants has raised the concern for the high admittance rate to the emergency department and the resulting load on public health systems. 2 Under these circumstances, patients with clinical signs of infection (eg, fever, cough etc) presenting at the emergency department of a hospital are treated as suspected COVID‐19 patients. Diagnosis of COVID‐19 patients usually relies on RT‐PCR real‐time polymerase chain reaction (RT‐PCR) tests, which can be time‐consuming. On the other hand, the availability of rapid PCR testing may be restricted. Given the possible consequences from the delayed diagnosis and quarantine of SARS‐CoV‐2‐positive patients, triage at the emergency department has become a formidable task. The complete blood count (CBC) may offer valuable information, indicative of a possible SARS‐CoV‐2 infection, thus assisting clinicians in making decisions at the time of admission. Several studies have demonstrated the decrease in leukocytes and their subpopulations in COVID‐19 patients compared to healthy individuals and non‐COVID‐19 patients with other infectious diseases. Therefore, decrease in neutrophil count, lymphopenia, and eosinopenia are the most common markers suggested for the identification of COVID‐19 patients. However, most of the reports regarding diagnostic value of hematologic parameters for COVID‐19 refer mainly to comparison between the control group and COVID‐19 patients 3 , 4 , 5 , 6 whereas few studies have evaluated their performance as diagnostic markers in the emergency room. 7 , 8 , 9 On the other hand, there is a great load of data for the utilization of hematologic parameters for the diagnosis of progression to serious or severe disease and their performance as prognostic markers in COVID‐19 patients. The most common hematologic parameters derived from CBC with evidenced prognostic value include the following: neutrophil count, 10 lymphocyte count, 11 , 12 neutrophil‐to‐lymphocyte ratio (NLR), 13 , 14 and platelet‐to‐lymphocyte ratio (PLR). 14 , 15 We sought to determine the performance of parameters of white blood cells and their subpopulations as well as their combinations for the diagnosis of COVID‐19 and as indicators of disease progression to serious and critical condition. Since the blood cell parameters may depend on the stage of the disease, we classified patients according to their clinical condition into three groups: patients with mild, serious, and critical disease. In this way, we anticipated to discover early diagnostic markers for COVID‐19 disease with the potential to optimize the triage process as well as early indicators of disease progression in order to aid clinicians in the management of patients in need of close monitoring for developing serious and/or critical condition. In addition to the parameters of CBC, we also examined other known markers used as indicators of systematic inflammatory response such as NLR, PLR, and lymphocyte‐to‐monocyte ratio (LMR) 14 as well as high fluorescence lymphocyte cells (HFLC) and Immature Granulocyte count (IG). 
HFLC are lymphoplasmacytoid cells or plasma cells present in the blood of patients as a response of the innate immunity to infectious disease. 16 Their detection is based on their characteristic high fluorescence intensity, and their count is reported by modern automated hematology analyzers as part of the full blood count. HFLC are elevated in COVID‐19 patients and are further increased in severe disease. 17 On the other hand, immature granulocytes in the peripheral blood can occur in response to infection, inflammation, or other causes of bone marrow stimulation. Both HFLC and IG have been thoroughly investigated as potential markers of sepsis. 18 , 19 C‐reactive protein (CRP), a well‐known inflammation and disease progression marker, was also included in the study. 7 In this respect, we explored their potential role in the diagnosis of COVID‐19 as standalone markers and in combination with other parameters of the CBC.
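The ratio indices mentioned above (NLR, PLR, LMR) are simple quotients of CBC counts. A minimal sketch, assuming the standard definitions and using hypothetical counts rather than patient data:

```python
# How the derived CBC indices discussed above are computed.
# Counts are hypothetical example values in 10^9/L, not data from this study.
neut, lymph, mono, plt_count = 5.2, 1.1, 0.45, 230.0

nlr = neut / lymph        # neutrophil-to-lymphocyte ratio (NLR)
plr = plt_count / lymph   # platelet-to-lymphocyte ratio (PLR)
lmr = lymph / mono        # lymphocyte-to-monocyte ratio (LMR)

print(f"NLR={nlr:.2f}, PLR={plr:.1f}, LMR={lmr:.2f}")
```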
null
null
RESULTS
Patients

The basic demographic data and laboratory findings of all groups of patients are summarized in Table 1 (comorbidities are given in Table S1). Patients with mild disease have significantly lower WBC (P <.0001), NEUT (P <.0001), LYMPH (P =.001), MONO (P <.0001), EOS (P <.0001), BASO (P <.0001), and NLR (P <.0001) compared with the negative control. On the other hand, there is no significant difference for CRP (P =.1278), LMR (P = 00 549), and PLR (P =.3768) between the negative group and mild disease. In the case of serious disease, significantly lower values are observed for WBC (P <.0001), NEUT (P <.0001), MONO (P =.0001), EOS (P <.0001), BASO (P <.0001), NLR (P <.0001), LMR (P <.0001), and CRP (P =.0001) but not for LYMPH (P =.7884) and PLR (P =.1166).

Table 1: Basic demographic characteristics and blood biomarkers on admission of patients with and without COVID‐19. Data presented as n (%) and median (IQR); units are mg/L for CRP and 10⁹/L for the rest of the parameters; reference ranges as reported for Sysmex XN [20]. Abbreviations: BASO, basophil count; CRP, C‐reactive protein; EOS, eosinophil count; HFLC, high fluorescence lymphocyte cells; IG, immature granulocyte count; LMR, lymphocyte‐to‐monocyte ratio; LYMPH, lymphocyte count; MONO, monocyte count; NEUT, neutrophil count; NLR, neutrophil‐to‐lymphocyte ratio; PLR, platelet‐to‐lymphocyte ratio; WBC, white blood cell count.

Progression of disease from mild to serious is accompanied by a significant decrease of LYMPH (P <.0001), MONO (P =.0007), EOS (P <.0001), and BASO (P =.0038), and an increase in NEUT (P =.0346), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001). Also, progression from serious to critical disease results in a significant increase in WBC (P <.0001), NEUT (P <.0001), BASO (P =.0012), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001), whereas LYMPH (P <.0001) and LMR (P <.0001) are significantly reduced.

Both mild and serious disease patients have a significantly lower IG count compared with the negative groups. However, a significant increase in IG occurs when serious disease progresses to critical (P <.0001). HFLC is significantly higher for both mild and serious disease compared with the negative group. Progression from serious to critical disease also results in a significant rise of HFLC.

Hematologic parameters as diagnostic markers of COVID‐19

Initially, we performed ROC curve analysis in order to select the best diagnostic markers among CBC parameters and CRP for mild disease (Table S2). The parameters with AUC > 0.7 (WBC, NEUT, BASO, HFLC, and EOS) were included in the multivariable analysis, and the odds ratio (OR) for the odds of having mild COVID‐19 disease was calculated by conducting logistic regression (Table 2). Because of the strong correlation between WBC and NEUT (Pearson r = 0.959, P <.0001) and the high VIF values for WBC and NEUT due to collinearity, the best full model could not include both WBC and NEUT. Therefore, the best fitting logistic model indicated that WBC and HFLC were independently associated with mild COVID‐19 disease. Due to the low values observed for EOS, BASO, and HFLC, adjusted ORs were calculated in order for the one‐unit change of the predictor to be meaningful (see Table 2). Hence, an increase of 0.01 ×10⁹/L in the value of HFLC corresponds to a 1.655‐fold increase in the odds of having mild COVID‐19 disease.

Table 2: Multivariable logistic regression analysis for the diagnosis of mild and serious COVID‐19 disease. See Table 1 for abbreviations. OR is calculated as the change in odds of having COVID‐19 upon a 1 unit (×10⁹/L) increase; adjusted ORs are calculated as the change in odds upon a 0.1 or 0.01 unit (×10⁹/L) increase.

The best performing logistic models of pairwise combinations of CBC parameters for the diagnosis of mild COVID‐19 disease are summarized in Table 3. A significant difference for the combinations WBC‐HFLC and WBC‐EOS was evidenced by comparison of ROC curves with all standalone blood biomarkers. Also, the significant difference between WBC‐HFLC and WBC‐EOS (P =.0240, z = 2.257) indicates that the combination WBC‐HFLC constitutes the best of all biomarkers.

Table 3: Best performing pairwise combinations of CBC parameters for the diagnosis of mild and serious COVID‐19 disease. WBC, NEUT, EOS, BASO, HFLC, and NLR: see Table 1 for abbreviations. Cutoff values (10⁹/L) are shown in parentheses. Abbreviations: AIC, Akaike information criterion; AUC, area under the curve.

In the case of serious disease, the best performing markers (AUC > 0.7) were WBC, NEUT, HFLC, and NLR. WBC had a strong correlation with NEUT (Pearson r = 0.982, P <.0001), producing strong collinearity effects; thus, WBC was once more selected for the multivariable model. WBC, HFLC, and NLR are all independent predictors of serious COVID‐19 disease (Table 2). Pairwise combinations of biomarkers were evaluated by logistic regression, and the results of the best performing pairs of CBC parameters are listed in Table 3. As revealed from the comparison of ROC curves, WBC‐HFLC and NEUT‐HFLC are the best combinations, showing a significant difference from all single markers.

Hematologic parameters as indicators of COVID‐19 disease progression

The univariable analysis indicated LYMPH and CRP as good indicators of progression from mild to serious COVID‐19 disease (Table S3). Furthermore, multivariable analysis revealed that both markers are independent indicators of progression from mild to serious disease (Table 4). The logistic model of their combination failed the goodness of fit test (Hosmer and Lemeshow test <0.05).

Table 4: Age‐ and gender‐adjusted odds ratios of CBC parameters and CRP for the diagnosis of COVID‐19 progression in the case of mild and serious disease. See Table 1 for abbreviations. OR is calculated as the change in odds of having serious COVID‐19 upon a 1 unit (×10⁹/L, or mg/L for CRP) increase; adjusted OR is calculated as the change in odds upon a 0.01 unit (×10⁹/L) increase.

ROC curve analysis highlighted several parameters for the indication of progression from serious to critical illness (Table S3). NLR (AUC: 0.911), IG (AUC: 0.890), NEUT (AUC: 0.884), and WBC (AUC: 0.854) presented excellent performance, whereas PLR (AUC: 0.806), CRP (AUC: 0.743), and LYMPH (AUC: 0.742) were also good indicators of critical disease. Due to collinearity effects, WBC and NEUT could not be included simultaneously in the full logistic model. Logistic regression of the full model revealed that mostly WBC is an independent factor for progression to critical COVID‐19 disease, while LYMPH and PLR displayed borderline significance (Table 4). The equivalent full logistic model including NEUT instead of WBC exhibited similar results, with NEUT also being an independent variable. Logistic regression of pairwise combinations of blood biomarkers yielded several combined markers with high performance: NEUT‐PLR (AUC: 0.924), WBC‐PLR (AUC: 0.923), HFLC‐NLR (AUC: 0.918), NEUT‐NLR (AUC: 0.912), WBC‐NLR (AUC: 0.912), NEUT‐LYMPH (AUC: 0.910), WBC‐LYMPH (AUC: 0.910), NEUT‐CRP (AUC: 0.898), NEUT‐IG (AUC: 0.890), NEUT‐HFLC (AUC: 0.887), and WBC‐CRP (AUC: 0.888). Interestingly, none of these combinations has a statistically significant difference from the best performing single markers NLR, IG, WBC, and NEUT. Furthermore, comparison of ROC curves reveals that the AUC of the best performing parameter, NLR, does not differ significantly from the closely following parameters IG (P =.3331, z = 0.968), WBC (P =.0630, z = 1.859), and NEUT (P =.28902, z = 1.080). Consequently, standalone CBC parameters can be used sufficiently as markers of progression from serious to critical disease, and the combination of blood biomarkers does not add to their diagnostic value.
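The results above rest on a workflow of ROC curve analysis, Youden-index cutoff selection, and logistic regression of pairwise marker combinations scored by AUC. The authors report using MedCalc and SPSS for this; the sketch below is only a Python/scikit-learn illustration of the same steps on synthetic data, so every value, variable name, and distributional assumption in it is hypothetical and not taken from the study.

```python
# Illustrative sketch: ROC analysis, Youden-index cutoff, and a pairwise
# logistic-regression combination of two markers, all on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, n)                    # 0 = negative, 1 = COVID-19 (synthetic labels)
wbc = rng.normal(9.0 - 2.5 * y, 2.0, n)      # synthetic: lower WBC in positives
hflc = rng.normal(0.01 + 0.03 * y, 0.02, n)  # synthetic: higher HFLC in positives

def youden_cutoff(y_true, score):
    """Cutoff maximizing sensitivity + specificity - 1 (Youden index)."""
    fpr, tpr, thresholds = roc_curve(y_true, score)
    best = np.argmax(tpr - fpr)
    return thresholds[best], tpr[best], 1 - fpr[best]

# Single markers: WBC is lower in cases (so negate it), HFLC is higher
print("AUC WBC :", roc_auc_score(y, -wbc))
print("AUC HFLC:", roc_auc_score(y, hflc))
print("HFLC cutoff / sensitivity / specificity:", youden_cutoff(y, hflc))

# Pairwise combination: fit a logistic model on the two markers and use the
# predicted probability as the combined score, then compare its AUC with the
# single-marker AUCs (the same idea as the WBC-HFLC combined marker above).
X = np.column_stack([wbc, hflc])
combined = LogisticRegression().fit(X, y)
prob = combined.predict_proba(X)[:, 1]
print("AUC WBC-HFLC combination:", roc_auc_score(y, prob))
```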
null
null
[ "INTRODUCTION", "Patients", "Data collection and management", "Study design and statistical analysis", "Patients", "Hematologic parameters as diagnostic markers of COVID‐19", "Hematologic parameters as indicators of COVID‐19 disease progression", "ETHICAL APPROVAL" ]
[ "Coronavirus disease 2019 (COVID‐19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), first discovered in December 2019 in Wuhan, China. Since then, the world has been caught up in one of the deadliest pandemics in history with SARS‐CoV‐2 having spread to over 210 countries, resulting in more than 150 million confirmed cases and more than 3 million deaths as of 3 May 2021.\n1\n Furthermore, the increased transmissibility of SARS‐CoV‐2 variants has raised the concern for the high admittance rate to the emergency department and the resulting load on public health systems.\n2\n\n\nUnder these circumstances, patients with clinical signs of infection (eg, fever, cough etc) presenting at the emergency department of a hospital are treated as suspected COVID‐19 patients. Diagnosis of COVID‐19 patients usually relies on RT‐PCR real‐time polymerase chain reaction (RT‐PCR) tests, which can be time‐consuming. On the other hand, the availability of rapid PCR testing may be restricted. Given the possible consequences from the delayed diagnosis and quarantine of SARS‐CoV‐2‐positive patients, triage at the emergency department has become a formidable task.\nThe complete blood count (CBC) may offer valuable information, indicative of a possible SARS‐CoV‐2 infection, thus assisting clinicians in making decisions at the time of admission. Several studies have demonstrated the decrease in leukocytes and their subpopulations in COVID‐19 patients compared to healthy individuals and non‐COVID‐19 patients with other infectious diseases. Therefore, decrease in neutrophil count, lymphopenia, and eosinopenia are the most common markers suggested for the identification of COVID‐19 patients. However, most of the reports regarding diagnostic value of hematologic parameters for COVID‐19 refer mainly to comparison between the control group and COVID‐19 patients\n3\n, \n4\n, \n5\n, \n6\n whereas few studies have evaluated their performance as diagnostic markers in the emergency room.\n7\n, \n8\n, \n9\n\n\nOn the other hand, there is a great load of data for the utilization of hematologic parameters for the diagnosis of progression to serious or severe disease and their performance as prognostic markers in COVID‐19 patients. The most common hematologic parameters derived from CBC with evidenced prognostic value include the following: neutrophil count,\n10\n lymphocyte count,\n11\n, \n12\n neutrophil‐to‐lymphocyte ratio (NLR),\n13\n, \n14\n and platelet‐to‐lymphocyte ratio (PLR).\n14\n, \n15\n\n\nWe sought to determine the performance of parameters of white blood cells and their subpopulations as well as their combinations for the diagnosis of COVID‐19 and as indicators of disease progression to serious and critical condition. Since the blood cell parameters may depend on the stage of the disease, we classified patients according to their clinical condition into three groups: patients with mild, serious, and critical disease. In this way, we anticipated to discover early diagnostic markers for COVID‐19 disease with the potential to optimize the triage process as well as early indicators of disease progression in order to aid clinicians in the management of patients in need of close monitoring for developing serious and/or critical condition. 
In addition to the parameters of CBC, we also examined other known markers used as indicators of systematic inflammatory response such as NLR, PLR, and lymphocyte‐to‐monocyte ratio (LMR)\n14\n as well as high fluorescence lymphocyte cells (HFLC) and Immature Granulocyte count (IG).\nHFLC are lymphoplasmacytoid cells or plasma cells present in the blood of patients as a response of the innate immunity to infectious disease.\n16\n Their detection is based on their characteristical high fluorescence intensity and their count is reported by modern automated hematology analyzers as part of the full blood count. HFLC are elevated in COVID‐19 patients and are further increased in severe disease.\n17\n On the other hand, immature granulocytes in the peripheral blood can occur in response to infection, inflammation, or other cause of bone marrow stimulation. Both HFLC and IG have been thoroughly investigated as potential markers of sepsis.\n18\n, \n19\n C‐reactive protein (CRP), a well‐known inflammation and disease progression marker, was also included in the study.\n7\n In this respect, we explored their potential role in the diagnosis of COVID‐19 as standalone markers and in combination with other parameters of the CBC.", "This is a retrospective case‐control study conducted from 14 March 2020 to 6 March 2021, with data collected from patients admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Epirus, Greece). Due to the low prevalence of COVID‐19 disease in our country from March to October 2020, we had to extend the time of data collection to March 2021 in order to include as many COVID‐19 patients as possible. All patients who presented at the emergency department with fever and/or respiratory symptoms were suspected for COVID‐19 infection, and their nasopharyngeal swab specimens were tested for SARS‐CoV‐2 with real‐time polymerase chain reaction (RT‐PCR). 197 patients who tested negative in RT‐PCR were selected as the control group (negative cases). Negative cases discharged to home were considered as mild negative cases (control 1, n = 103) while negative cases admitted to general ward were classified as serious negative cases (control 2, n = 94).\nThe clinical evaluation and management of SARS‐CoV‐2‐positive patients were performed according to the Guidelines of the National Institute of Public Health of Greece.\n21\n COVID‐19 patients were classified according to their clinical condition as evaluated at the emergency department into three groups defined as following: mild disease: discharged to home, serious disease: hospitalized in general ward and severe/critical disease: admitted to intensive care unit (ICU). Patients initially admitted to general ward and later transferred to ICU (n = 23) were included in the critical group; CBC on admission to ICU was used in this case.\nOnly adult patients were included in the study whereas patients with conditions associated with abnormal blood cell counts as hematological malignancies, metastatic bone marrow infiltration by malignancy, receiving chemotherapy or immunosuppressive therapy (n = 13) were excluded from the study. We also excluded patients without CBC on admission (n = 3) as well as patients transferred from another ward (n = 7). 
A total of 368 COVID‐19 cases were included in the study and were classified as having mild (n = 96), serious (n = 215), and critical (n = 57) disease (Figure 1).\nPatient flow chart", "Demographic data and clinical symptoms and signs were obtained from electronic medical records. The complete blood count and the extended parameters HFLC and IG on day of admission were measured on Sysmex XN‐3100 (Sysmex, Japan). CRP measurements were obtained from the hospital Laboratory Information System (LIS). RT‐PCR test was performed on Xpert Xpress SARS‐COV‐2 (Cepheid AB).", "The study of CBC parameters as diagnostic markers of COVID‐19 comprised of two parts. In the first part, their ability to discriminate between negative and positive cases was tested separately for mild (mild negative cases vs mild positive cases) and serious (serious negative cases vs serious positive cases) disease. In the second part, biomarkers were tested as potentials indicators of COVID‐19 disease progression. For this purpose, their ability to discriminate between mild and serious (mild positive cases vs serious positive cases) as well as between serious and critical disease (serious positive cases vs critical positive cases) was examined.\nContinuous variables were expressed as medians and interquartile ranges whereas categorical variables were expressed as the counts and percentages in each category. Non‐parametric Mann‐Whitney test was used for testing the significance between two groups. Receiver operating characteristic (ROC) curve analysis was applied for the selection of parameters with high diagnostic performance. The area under the curve (AUC) was used as a measure of performance, and parameters with AUC>0.7 were selected for the multivariable analysis. Keeping in mind the impact of false negatives in the case of COVID‐19, the selection of best cutoff values was based initially on Youden index and with a focus on maximizing sensitivity.\nEnter binary logistic regression analysis was conducted to determine the influence of the parameters on the outcome and in developing pairwise combinations for different parameters. The comparison of AUC between pairwise combinations and individual parameters indicated whether there was improvement in the discriminatory power. Furthermore, Nagelkerke R and Akaike information criteria (AIC) were used for the assessment of the goodness of fit of all pairwise combinations, with lower AICs indicating better model fit. Hosmer and Lemeshow test was used for the calibration of the method.\nMedCalc Statistical Software version 19.2.6 (MedCalc Software Ltd,; https://www.medcalc.org; 2020) was used for ROC curve analysis and comparison of ROC curves (z‐statistic). Logistic regression analysis, Pearson correlation, and calculation of variation inflation factors (VIF) were conducted using SPSS 23.0 (IBM Corp). A 2‐tailed P value <.05 was considered as statistically significant. Graphs were plotted using GraphPad Prism 6.00 (GraphPad Software).", "The basic demographic data and laboratory findings of all groups of patients are summarized in Table 1 (Comorbidities are given in Table S1). Patients with mild disease have significantly lower WBC (P <.0001), NEUT (P <.0001), LYMPH (P =.001), MONO (P <.0001), EOS (P <.0001), BASO (P <.0001), and NLR (P <.0001) compared with negative control. On the other hand, there is no significant difference for CRP (P =.1278), LMR (P = 00 549), and PLR (P =.3768) between negative group and mild disease. 
In the case of serious disease, significantly lower values are observed for WBC (P <.0001), NEUT (P <.0001), MONO (P =.0001), EOS (P <.0001), BASO (P <.0001), NLR (P <.0001), LMR (P <.0001), and CRP (P =.0001) but not for LYMPH (P =.7884) and PLR (P =.1166).\nBasic demographic characteristics and blood biomarkers on admission of patients with and without COVID‐19\n9.03 (6.97‐11.69)\n130.1\n(98.17‐190.5)\n227.6\n(136.7‐325.0)\n140.5\n(114.0‐175.9)\n187.1\n(133.3‐291.5)\n417.0\n(241.9‐720.2)\nAbbreviations: BASO, Basophil count; CRP, C‐reactive protein; EOS, Eosinophil count; HFLC, High Fluorescence Lymphocyte Cells; IG, Immature Granulocyte count; LMR, Lymphocyte‐to‐Monocyte Ratio; LYMPH, Lymphocyte count; MONO, Monocyte count; NEUT, Neutrophil count; NLR, Neutrophil‐to‐Lymphocyte Ratio; PLR, Platelet to Lymphocyte Ratio; WBC, White Blood Cell count.\nData presented as n (%) and median (IQR).\nUnits are mg/L for CRP and 109/L for the rest of the parameters.\nReference ranges as reported for Sysmex XN \n20\n.\nProgression of disease from mild to serious is accompanied by significant decrease of LYM (P <.0001), MONO (P =.0007), EOS (P <.0001), BASO (P =.0038), and increase in NEUT (P =.0346), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001). Also, progression from serious to critical disease results in significant increase in WBC (P <.0001), NEUT (P <.0001), BASO (P =.0012), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001) whereas LYMPH (P <.0001) and LMR (P <.0001) are significantly reduced.\nBoth mild and serious disease patients have significantly lower IG count compared with the negative groups. However, significant increase in IG occurs when serious disease progresses to critical (P <.0001). HFLC is significantly higher for both mild and serious disease compared with the negative group. Progression of serious to critical disease results also in significant rise of HFLC.", "Initially, we performed ROC curve analysis in order to select the best diagnostic markers among CBC parameters and CRP for mild disease (Table S2). The parameters with AUC>0.7 (WBC, NEUT, BASO, HFLC, and EOS) were included in the multivariable analysis, and the odds ratio (OR) for the odds of having mild COVID‐19 disease was calculated by conducting logistic regression (Table 2). Because of the strong correlation between WBC and NEUT (Pearson r = 0.959, P <.0001) and the high VIF values for WBC and NEUT due to collinearity, the best full model could not include both WBC and NEUT. Therefore, the best fitting logistic model indicated that WBC and HFLC were independently associated with mild COVID‐19 disease. Due to the low values observed for EOS, BASO, and HFLC, adjusted ORs have been calculated in order for the one‐unit change of the predictor to be meaningful (see Table 2). Hence, an increase in 0.01 (×109/L) in the value of HFLC corresponds to 1.655 increase in odds of having mild COVID‐19 disease.\nMultivariable logistic regression analysis for the diagnosis of mild and serious COVID‐19 disease\nSee Table 1 for abbreviations.\nOR is calculated as the change in odds of having COVID‐19 upon 1 unit (×109/L) increase.\nAdjusted OR calculated as the change in odds of having COVID‐19 upon 0.1 unit (×109/L) increase.\nAdjusted OR calculated as the change in odds of having COVID‐19 upon 0.01 unit (×109/L) increase.\nThe best performing logistic models of pairwise combinations of CBC parameters for the diagnosis of mild COVID‐19 disease are summarized in Table 3. 
Significant difference for the combinations WBC‐HFLC and WBC‐EOS was evidenced by comparison of ROC curves to all standalone blood biomarkers. Also, the significant difference between WBC‐HFLC and WBC‐EOS (P =.0240, z = 2.257) indicates that the combination WBC‐HFLC constitutes the best of all biomarkers.\nBest performing pairwise combinations of CBC parameters for the diagnosis of mild and serious COVID‐19 disease\nWBC, NEUT, EOS, BASO, HFLC, and NLR: See Table 1 for abbreviations. Cutoff values (109/L) are shown in parenthesis.\nAbbreviations: AIC, Akaike Information Criteria; AUC, Area Under the Curve.\nIn the case of serious disease, the best performing markers (AUC > 0.7) were WBC, NEUT, HFLC, and NLR. WBC had strong correlation with NEUT (Pearson r = 0.982, P <.0001) producing strong collinearity effects; thus, WBC was once more selected for the multivariable model. WBC, HFLC, and NLR are all independent predictors of serious COVID‐19 disease (Table 2). Pairwise combinations of biomarkers were evaluated by logistic regression, and the results of the best performing pairs of CBC parameters are listed in Table 3. As revealed from the comparison of ROC curves, WBC‐HFLC and NEUT‐HFLC are the best combinations showing significant difference from all single markers.", "The univariable analysis indicated LYMPH and CRP as good indicators of progression from mild to serious COVID‐19 disease (Table S3). Furthermore, multivariable analysis revealed that both markers are independent indicators of progression of mild to serious disease (Table 4). The logistic model of their combination failed the goodness of fit test (Hosmer and Lemeshow test <0.05).\nAge‐ and gender‐adjusted odds ratios of CBC parameters and CRP for the diagnosis of COVID‐19 progression in the case of mild and serious disease\nSee Table 1 for abbreviations.\nOR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (×109/L) increase.\nOR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (mg/L) increase.\nAdjusted OR calculated as the change in odds of having serious COVID‐19 upon 0.01 unit (×109/L) increase.\nROC curve analysis highlighted several parameters for the indication of progression from serious to critical illness (Table S3). Therefore, NLR (AUC: 0.911), IG (AUC: 0.890), NEUT (AUC: 0.884), and WBC (AUC: 0.854) presented excellent performance whereas PLR (AUC: 0.806), CRP (AUC: 0.743), and LYMPH (AUC: 0.742) were also good indicators of critical disease. Due to collinearity effects, WBC and NEUT could not be included simultaneously in the full logistic model. Hence, logistic regression of the full model revealed that mostly WBC is an independent factor for progression to critical COVID‐19 disease while LYMPH and PLR displayed borderline significance (Table 4). Respectively, the equivalent full logistic model including NEUT instead of WBC exhibited similar results with NEUT being also an independent variable. Logistic regression of pairwise combination of blood biomarkers yielded several combined markers with high performance: NEUT‐PLR (AUC: 0.924), WBC‐PLR (AUC: 0.923), HFLC‐NLR (AUC: 0.918), NEUT‐NLR (AUC: 0.912), WBC‐NLR (AUC: 0.912), NEUT‐LYM (AUC: 0.910), WBC‐LYM (AUC: 0.910), NEUT‐CRP (AUC: 0.898), NEUT‐IG (0.890), NEUT‐HFLC (AUC: 0.887), and WBC‐CRP (AUC: 0.888). Interestingly, none of these combinations has statistically significant difference from the best performing single markers NLR, IG, WBC, and NEUT. 
Furthermore, comparison of ROC curves reveals that the AUC of the best performing parameter, NLR, does not differ significantly from the closely following parameters IG (P =.3331, z = 0.968), WBC (P =.0630, z = 1.859), and NEUT (P =.28902, z = 1.080). Consequently, standalone CBC parameters can be utilized sufficiently as diagnostic markers of progression from serious to critical disease and the combination of blood biomarkers does not contribute anything to their diagnostic value.", "Due to the retrospective and non‐interventional nature of the study, informed consent was not required. The study was approved by the review board of University General Hospital of Ioannina (registration number: 10/26‐05‐2021)." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Data collection and management", "Study design and statistical analysis", "RESULTS", "Patients", "Hematologic parameters as diagnostic markers of COVID‐19", "Hematologic parameters as indicators of COVID‐19 disease progression", "DISCUSSION", "CONFLICT OF INTEREST", "ETHICAL APPROVAL", "Supporting information" ]
[ "Coronavirus disease 2019 (COVID‐19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), first discovered in December 2019 in Wuhan, China. Since then, the world has been caught up in one of the deadliest pandemics in history with SARS‐CoV‐2 having spread to over 210 countries, resulting in more than 150 million confirmed cases and more than 3 million deaths as of 3 May 2021.\n1\n Furthermore, the increased transmissibility of SARS‐CoV‐2 variants has raised the concern for the high admittance rate to the emergency department and the resulting load on public health systems.\n2\n\n\nUnder these circumstances, patients with clinical signs of infection (eg, fever, cough etc) presenting at the emergency department of a hospital are treated as suspected COVID‐19 patients. Diagnosis of COVID‐19 patients usually relies on RT‐PCR real‐time polymerase chain reaction (RT‐PCR) tests, which can be time‐consuming. On the other hand, the availability of rapid PCR testing may be restricted. Given the possible consequences from the delayed diagnosis and quarantine of SARS‐CoV‐2‐positive patients, triage at the emergency department has become a formidable task.\nThe complete blood count (CBC) may offer valuable information, indicative of a possible SARS‐CoV‐2 infection, thus assisting clinicians in making decisions at the time of admission. Several studies have demonstrated the decrease in leukocytes and their subpopulations in COVID‐19 patients compared to healthy individuals and non‐COVID‐19 patients with other infectious diseases. Therefore, decrease in neutrophil count, lymphopenia, and eosinopenia are the most common markers suggested for the identification of COVID‐19 patients. However, most of the reports regarding diagnostic value of hematologic parameters for COVID‐19 refer mainly to comparison between the control group and COVID‐19 patients\n3\n, \n4\n, \n5\n, \n6\n whereas few studies have evaluated their performance as diagnostic markers in the emergency room.\n7\n, \n8\n, \n9\n\n\nOn the other hand, there is a great load of data for the utilization of hematologic parameters for the diagnosis of progression to serious or severe disease and their performance as prognostic markers in COVID‐19 patients. The most common hematologic parameters derived from CBC with evidenced prognostic value include the following: neutrophil count,\n10\n lymphocyte count,\n11\n, \n12\n neutrophil‐to‐lymphocyte ratio (NLR),\n13\n, \n14\n and platelet‐to‐lymphocyte ratio (PLR).\n14\n, \n15\n\n\nWe sought to determine the performance of parameters of white blood cells and their subpopulations as well as their combinations for the diagnosis of COVID‐19 and as indicators of disease progression to serious and critical condition. Since the blood cell parameters may depend on the stage of the disease, we classified patients according to their clinical condition into three groups: patients with mild, serious, and critical disease. In this way, we anticipated to discover early diagnostic markers for COVID‐19 disease with the potential to optimize the triage process as well as early indicators of disease progression in order to aid clinicians in the management of patients in need of close monitoring for developing serious and/or critical condition. 
In addition to the parameters of CBC, we also examined other known markers used as indicators of systematic inflammatory response such as NLR, PLR, and lymphocyte‐to‐monocyte ratio (LMR)\n14\n as well as high fluorescence lymphocyte cells (HFLC) and Immature Granulocyte count (IG).\nHFLC are lymphoplasmacytoid cells or plasma cells present in the blood of patients as a response of the innate immunity to infectious disease.\n16\n Their detection is based on their characteristical high fluorescence intensity and their count is reported by modern automated hematology analyzers as part of the full blood count. HFLC are elevated in COVID‐19 patients and are further increased in severe disease.\n17\n On the other hand, immature granulocytes in the peripheral blood can occur in response to infection, inflammation, or other cause of bone marrow stimulation. Both HFLC and IG have been thoroughly investigated as potential markers of sepsis.\n18\n, \n19\n C‐reactive protein (CRP), a well‐known inflammation and disease progression marker, was also included in the study.\n7\n In this respect, we explored their potential role in the diagnosis of COVID‐19 as standalone markers and in combination with other parameters of the CBC.", "Patients This is a retrospective case‐control study conducted from 14 March 2020 to 6 March 2021, with data collected from patients admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Epirus, Greece). Due to the low prevalence of COVID‐19 disease in our country from March to October 2020, we had to extend the time of data collection to March 2021 in order to include as many COVID‐19 patients as possible. All patients who presented at the emergency department with fever and/or respiratory symptoms were suspected for COVID‐19 infection, and their nasopharyngeal swab specimens were tested for SARS‐CoV‐2 with real‐time polymerase chain reaction (RT‐PCR). 197 patients who tested negative in RT‐PCR were selected as the control group (negative cases). Negative cases discharged to home were considered as mild negative cases (control 1, n = 103) while negative cases admitted to general ward were classified as serious negative cases (control 2, n = 94).\nThe clinical evaluation and management of SARS‐CoV‐2‐positive patients were performed according to the Guidelines of the National Institute of Public Health of Greece.\n21\n COVID‐19 patients were classified according to their clinical condition as evaluated at the emergency department into three groups defined as following: mild disease: discharged to home, serious disease: hospitalized in general ward and severe/critical disease: admitted to intensive care unit (ICU). Patients initially admitted to general ward and later transferred to ICU (n = 23) were included in the critical group; CBC on admission to ICU was used in this case.\nOnly adult patients were included in the study whereas patients with conditions associated with abnormal blood cell counts as hematological malignancies, metastatic bone marrow infiltration by malignancy, receiving chemotherapy or immunosuppressive therapy (n = 13) were excluded from the study. We also excluded patients without CBC on admission (n = 3) as well as patients transferred from another ward (n = 7). 
A total of 368 COVID‐19 cases were included in the study and were classified as having mild (n = 96), serious (n = 215), and critical (n = 57) disease (Figure 1).\nPatient flow chart\nThis is a retrospective case‐control study conducted from 14 March 2020 to 6 March 2021, with data collected from patients admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Epirus, Greece). Due to the low prevalence of COVID‐19 disease in our country from March to October 2020, we had to extend the time of data collection to March 2021 in order to include as many COVID‐19 patients as possible. All patients who presented at the emergency department with fever and/or respiratory symptoms were suspected for COVID‐19 infection, and their nasopharyngeal swab specimens were tested for SARS‐CoV‐2 with real‐time polymerase chain reaction (RT‐PCR). 197 patients who tested negative in RT‐PCR were selected as the control group (negative cases). Negative cases discharged to home were considered as mild negative cases (control 1, n = 103) while negative cases admitted to general ward were classified as serious negative cases (control 2, n = 94).\nThe clinical evaluation and management of SARS‐CoV‐2‐positive patients were performed according to the Guidelines of the National Institute of Public Health of Greece.\n21\n COVID‐19 patients were classified according to their clinical condition as evaluated at the emergency department into three groups defined as following: mild disease: discharged to home, serious disease: hospitalized in general ward and severe/critical disease: admitted to intensive care unit (ICU). Patients initially admitted to general ward and later transferred to ICU (n = 23) were included in the critical group; CBC on admission to ICU was used in this case.\nOnly adult patients were included in the study whereas patients with conditions associated with abnormal blood cell counts as hematological malignancies, metastatic bone marrow infiltration by malignancy, receiving chemotherapy or immunosuppressive therapy (n = 13) were excluded from the study. We also excluded patients without CBC on admission (n = 3) as well as patients transferred from another ward (n = 7). A total of 368 COVID‐19 cases were included in the study and were classified as having mild (n = 96), serious (n = 215), and critical (n = 57) disease (Figure 1).\nPatient flow chart\nData collection and management Demographic data and clinical symptoms and signs were obtained from electronic medical records. The complete blood count and the extended parameters HFLC and IG on day of admission were measured on Sysmex XN‐3100 (Sysmex, Japan). CRP measurements were obtained from the hospital Laboratory Information System (LIS). RT‐PCR test was performed on Xpert Xpress SARS‐COV‐2 (Cepheid AB).\nDemographic data and clinical symptoms and signs were obtained from electronic medical records. The complete blood count and the extended parameters HFLC and IG on day of admission were measured on Sysmex XN‐3100 (Sysmex, Japan). CRP measurements were obtained from the hospital Laboratory Information System (LIS). RT‐PCR test was performed on Xpert Xpress SARS‐COV‐2 (Cepheid AB).\nStudy design and statistical analysis The study of CBC parameters as diagnostic markers of COVID‐19 comprised of two parts. 
In the first part, their ability to discriminate between negative and positive cases was tested separately for mild (mild negative cases vs mild positive cases) and serious (serious negative cases vs serious positive cases) disease. In the second part, the biomarkers were tested as potential indicators of COVID‐19 disease progression. For this purpose, their ability to discriminate between mild and serious (mild positive cases vs serious positive cases) as well as between serious and critical disease (serious positive cases vs critical positive cases) was examined.\nContinuous variables were expressed as medians and interquartile ranges, whereas categorical variables were expressed as counts and percentages in each category. The non‐parametric Mann‐Whitney test was used for testing the significance of differences between two groups. Receiver operating characteristic (ROC) curve analysis was applied for the selection of parameters with high diagnostic performance. The area under the curve (AUC) was used as the measure of performance, and parameters with AUC > 0.7 were selected for the multivariable analysis. Keeping in mind the impact of false negatives in the case of COVID‐19, the selection of best cutoff values was based initially on the Youden index and then refined with a focus on maximizing sensitivity.\nBinary logistic regression (Enter method) was conducted to determine the influence of the parameters on the outcome and to develop pairwise combinations of different parameters. The comparison of AUC between pairwise combinations and individual parameters indicated whether there was improvement in discriminatory power. Furthermore, Nagelkerke R2 and the Akaike information criterion (AIC) were used to assess the goodness of fit of all pairwise combinations, with lower AICs indicating better model fit. The Hosmer‐Lemeshow test was used to assess the calibration of each model.\nMedCalc Statistical Software version 19.2.6 (MedCalc Software Ltd; https://www.medcalc.org; 2020) was used for ROC curve analysis and comparison of ROC curves (z‐statistic). Logistic regression analysis, Pearson correlation, and calculation of variance inflation factors (VIF) were conducted using SPSS 23.0 (IBM Corp). A 2‐tailed P value <.05 was considered statistically significant. Graphs were plotted using GraphPad Prism 6.00 (GraphPad Software).",
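The ROC screening and sensitivity-oriented cutoff selection described above can be outlined in Python with scikit-learn. This is a minimal sketch under stated assumptions, not the authors' MedCalc/SPSS workflow; the data frame and column names (a binary covid_positive outcome and one column per marker) are hypothetical.

import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

def screen_markers(df, markers, outcome="covid_positive",
                   auc_keep=0.7, min_sensitivity=0.80):
    """Rank candidate markers by AUC and pick a cutoff favouring sensitivity."""
    rows = []
    y = df[outcome].to_numpy()
    for m in markers:
        x = df[m].to_numpy(dtype=float)
        auc = roc_auc_score(y, x)
        # Markers that fall in COVID-19 (eg eosinophils) give AUC < 0.5 under the
        # "higher score = positive" convention; flip the sign so the cutoff search
        # always runs in one direction (the reported cutoff then refers to -x).
        score = x if auc >= 0.5 else -x
        auc = max(auc, 1.0 - auc)
        fpr, tpr, thresholds = roc_curve(y, score)
        youden = tpr - fpr
        best = int(np.argmax(youden))                    # Youden-optimal point
        high_sens = np.where(tpr >= min_sensitivity)[0]
        if high_sens.size and tpr[best] < min_sensitivity:
            # Move towards the sensitivity target, keeping the best Youden index there.
            best = int(high_sens[np.argmax(youden[high_sens])])
        rows.append({"marker": m, "AUC": auc, "cutoff": thresholds[best],
                     "sensitivity": tpr[best], "specificity": 1.0 - fpr[best],
                     "enter_multivariable": auc > auc_keep})
    return pd.DataFrame(rows).sort_values("AUC", ascending=False)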
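The pairwise-combination modelling (Enter logistic regression, AIC, Nagelkerke R2, and the AUC of the combined predictor) can be approximated with statsmodels. A minimal sketch using the same hypothetical data frame as above; it is not the SPSS/MedCalc procedure used in the study.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def fit_pair(df, marker_a, marker_b, outcome="covid_positive"):
    """Force two markers into one logistic model and report AIC, Nagelkerke R2, AUC, ORs."""
    y = df[outcome].to_numpy()
    X = sm.add_constant(df[[marker_a, marker_b]])
    res = sm.Logit(y, X).fit(disp=0)
    n = len(y)
    # Nagelkerke R2 from the fitted (llf) and intercept-only (llnull) log-likelihoods.
    cox_snell = 1.0 - np.exp((2.0 / n) * (res.llnull - res.llf))
    nagelkerke = cox_snell / (1.0 - np.exp((2.0 / n) * res.llnull))
    auc = roc_auc_score(y, res.predict(X))  # discrimination of the combined predictor
    return {"pair": f"{marker_a}+{marker_b}", "AIC": res.aic,
            "Nagelkerke_R2": nagelkerke, "AUC": auc,
            "OR_per_unit": np.exp(res.params.drop("const")).to_dict()}

# Example usage: rank candidate pairs by AIC (lower AIC indicates better fit).
# import itertools
# pairs = [fit_pair(df, a, b) for a, b in itertools.combinations(["WBC", "HFLC", "NLR", "NEUT"], 2)]
# print(pd.DataFrame(pairs).sort_values("AIC"))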
"Patients The basic demographic data and laboratory findings of all groups of patients are summarized in Table 1 (Comorbidities are given in Table S1). Patients with mild disease have significantly lower WBC (P <.0001), NEUT (P <.0001), LYMPH (P =.001), MONO (P <.0001), EOS (P <.0001), BASO (P <.0001), and NLR (P <.0001) compared with the negative control. On the other hand, there is no significant difference for CRP (P =.1278), LMR (P =.0549), and PLR (P =.3768) between the negative group and mild disease.
In the case of serious disease, significantly lower values are observed for WBC (P <.0001), NEUT (P <.0001), MONO (P =.0001), EOS (P <.0001), BASO (P <.0001), NLR (P <.0001), LMR (P <.0001), and CRP (P =.0001) but not for LYMPH (P =.7884) and PLR (P =.1166).\nBasic demographic characteristics and blood biomarkers on admission of patients with and without COVID‐19 (Table 1).\nAbbreviations: BASO, Basophil count; CRP, C‐reactive protein; EOS, Eosinophil count; HFLC, High Fluorescence Lymphocyte Cells; IG, Immature Granulocyte count; LMR, Lymphocyte‐to‐Monocyte Ratio; LYMPH, Lymphocyte count; MONO, Monocyte count; NEUT, Neutrophil count; NLR, Neutrophil‐to‐Lymphocyte Ratio; PLR, Platelet‐to‐Lymphocyte Ratio; WBC, White Blood Cell count.\nData presented as n (%) and median (IQR).\nUnits are mg/L for CRP and ×10⁹/L for the rest of the parameters.\nReference ranges as reported for Sysmex XN \n20\n.\nProgression of disease from mild to serious is accompanied by a significant decrease of LYMPH (P <.0001), MONO (P =.0007), EOS (P <.0001), and BASO (P =.0038), and an increase in NEUT (P =.0346), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001). Also, progression from serious to critical disease results in a significant increase in WBC (P <.0001), NEUT (P <.0001), BASO (P =.0012), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001), whereas LYMPH (P <.0001) and LMR (P <.0001) are significantly reduced.\nBoth mild and serious disease patients have significantly lower IG counts compared with the negative groups. However, a significant increase in IG occurs when serious disease progresses to critical (P <.0001). HFLC is significantly higher for both mild and serious disease compared with the negative group. Progression from serious to critical disease also results in a significant rise of HFLC.
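The between-group comparisons reported here (medians with interquartile ranges and two-sided Mann-Whitney tests) can be reproduced in outline as follows; this is an illustrative sketch, and the group labels and column names are hypothetical.

from scipy.stats import mannwhitneyu

def compare_groups(df, marker, group_col="group", g1="negative", g2="mild_covid"):
    """Median (IQR) per group plus a two-sided Mann-Whitney U test."""
    a = df.loc[df[group_col] == g1, marker].dropna()
    b = df.loc[df[group_col] == g2, marker].dropna()
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    fmt = lambda s: f"{s.median():.2f} ({s.quantile(0.25):.2f}-{s.quantile(0.75):.2f})"
    return {"marker": marker, g1: fmt(a), g2: fmt(b), "U": stat, "P": p}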
Hematologic parameters as diagnostic markers of COVID‐19 Initially, we performed ROC curve analysis in order to select the best diagnostic markers among the CBC parameters and CRP for mild disease (Table S2). The parameters with AUC > 0.7 (WBC, NEUT, BASO, HFLC, and EOS) were included in the multivariable analysis, and the odds ratio (OR) for having mild COVID‐19 disease was calculated by logistic regression (Table 2). Because of the strong correlation between WBC and NEUT (Pearson r = 0.959, P <.0001) and the high VIF values for WBC and NEUT due to collinearity, the full model could not include both WBC and NEUT. The best fitting logistic model indicated that WBC and HFLC were independently associated with mild COVID‐19 disease. Because of the low absolute values observed for EOS, BASO, and HFLC, adjusted ORs were calculated so that a one‐unit change of the predictor is meaningful (see Table 2). Hence, an increase of 0.01 (×10⁹/L) in the value of HFLC corresponds to a 1.655‐fold increase in the odds of having mild COVID‐19 disease.\nMultivariable logistic regression analysis for the diagnosis of mild and serious COVID‐19 disease (Table 2).\nSee Table 1 for abbreviations.\nOR is calculated as the change in odds of having COVID‐19 upon a 1 unit (×10⁹/L) increase.\nAdjusted OR is calculated as the change in odds of having COVID‐19 upon a 0.1 unit (×10⁹/L) increase.\nAdjusted OR is calculated as the change in odds of having COVID‐19 upon a 0.01 unit (×10⁹/L) increase.\nThe best performing logistic models of pairwise combinations of CBC parameters for the diagnosis of mild COVID‐19 disease are summarized in Table 3.
A significant difference for the combinations WBC‐HFLC and WBC‐EOS was evidenced by comparison of their ROC curves with those of all standalone blood biomarkers. Also, the significant difference between WBC‐HFLC and WBC‐EOS (P =.0240, z = 2.257) indicates that the combination WBC‐HFLC constitutes the best of all biomarkers.\nBest performing pairwise combinations of CBC parameters for the diagnosis of mild and serious COVID‐19 disease (Table 3).\nWBC, NEUT, EOS, BASO, HFLC, and NLR: See Table 1 for abbreviations. Cutoff values (×10⁹/L) are shown in parentheses.\nAbbreviations: AIC, Akaike Information Criterion; AUC, Area Under the Curve.\nIn the case of serious disease, the best performing markers (AUC > 0.7) were WBC, NEUT, HFLC, and NLR. WBC had a strong correlation with NEUT (Pearson r = 0.982, P <.0001), producing strong collinearity effects; thus, WBC was once more selected for the multivariable model. WBC, HFLC, and NLR are all independent predictors of serious COVID‐19 disease (Table 2). Pairwise combinations of biomarkers were evaluated by logistic regression, and the results of the best performing pairs of CBC parameters are listed in Table 3. As revealed by the comparison of ROC curves, WBC‐HFLC and NEUT‐HFLC are the best combinations, showing a significant difference from all single markers.
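The collinearity screen (Pearson correlation and VIF) and the rescaling of a per-unit odds ratio to a smaller, clinically meaningful unit, as done for the adjusted ORs above, can be sketched as follows. This is a sketch under the same hypothetical naming assumptions as the earlier snippets, not the authors' code.

import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df, predictors):
    """Variance inflation factors for a set of candidate predictors."""
    X = sm.add_constant(df[predictors])
    # The intercept's VIF is not informative, so report the predictors only.
    return {name: variance_inflation_factor(X.values, i)
            for i, name in enumerate(X.columns) if name != "const"}

def rescale_or(or_per_unit, new_unit=0.01):
    """Restate an odds ratio per 1 unit as an odds ratio per `new_unit` units.

    OR_new = exp(beta * new_unit) = OR_old ** new_unit, since OR_old = exp(beta).
    """
    return or_per_unit ** new_unit

# Example usage (hypothetical values):
# r = df["WBC"].corr(df["NEUT"])              # Pearson correlation
# vifs = vif_table(df, ["WBC", "NEUT", "HFLC"])
# rescale_or(2.0, 0.01)                       # OR per 1 unit -> OR per 0.01 unit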
Hematologic parameters as indicators of COVID‐19 disease progression The univariable analysis indicated LYMPH and CRP as good indicators of progression from mild to serious COVID‐19 disease (Table S3). Furthermore, multivariable analysis revealed that both markers are independent indicators of progression from mild to serious disease (Table 4). The logistic model of their combination failed the goodness‐of‐fit test (Hosmer‐Lemeshow test P <.05).\nAge‐ and gender‐adjusted odds ratios of CBC parameters and CRP for the diagnosis of COVID‐19 progression in the case of mild and serious disease (Table 4).\nSee Table 1 for abbreviations.\nOR is calculated as the change in odds of having serious COVID‐19 upon a 1 unit (×10⁹/L) increase.\nOR is calculated as the change in odds of having serious COVID‐19 upon a 1 unit (mg/L) increase.\nAdjusted OR is calculated as the change in odds of having serious COVID‐19 upon a 0.01 unit (×10⁹/L) increase.\nROC curve analysis highlighted several parameters as indicators of progression from serious to critical illness (Table S3). NLR (AUC: 0.911), IG (AUC: 0.890), NEUT (AUC: 0.884), and WBC (AUC: 0.854) presented excellent performance, whereas PLR (AUC: 0.806), CRP (AUC: 0.743), and LYMPH (AUC: 0.742) were also good indicators of critical disease. Due to collinearity effects, WBC and NEUT could not be included simultaneously in the full logistic model. Logistic regression of the full model revealed that mainly WBC is an independent factor for progression to critical COVID‐19 disease, while LYMPH and PLR displayed borderline significance (Table 4). The equivalent full logistic model including NEUT instead of WBC exhibited similar results, with NEUT also being an independent variable. Logistic regression of pairwise combinations of blood biomarkers yielded several combined markers with high performance: NEUT‐PLR (AUC: 0.924), WBC‐PLR (AUC: 0.923), HFLC‐NLR (AUC: 0.918), NEUT‐NLR (AUC: 0.912), WBC‐NLR (AUC: 0.912), NEUT‐LYMPH (AUC: 0.910), WBC‐LYMPH (AUC: 0.910), NEUT‐CRP (AUC: 0.898), NEUT‐IG (AUC: 0.890), NEUT‐HFLC (AUC: 0.887), and WBC‐CRP (AUC: 0.888). Interestingly, none of these combinations differs significantly from the best performing single markers NLR, IG, WBC, and NEUT. Furthermore, comparison of ROC curves reveals that the AUC of the best performing parameter, NLR, does not differ significantly from those of the closely following parameters IG (P =.3331, z = 0.968), WBC (P =.0630, z = 1.859), and NEUT (P =.2890, z = 1.080).
Consequently, standalone CBC parameters are sufficient as diagnostic markers of progression from serious to critical disease, and combining blood biomarkers does not add to their diagnostic value.
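An age- and gender-adjusted odds ratio and a comparison of two correlated AUCs measured on the same patients can be sketched as below. The study compared ROC curves with MedCalc's z-statistic; that method is not reproduced here, and a paired bootstrap is shown instead as one simple alternative. Variable names such as critical, age, and male are hypothetical.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def adjusted_or(df, marker, outcome="critical", covariates=("age", "male")):
    """Odds ratio for one marker, adjusted for age and gender entered as covariates."""
    X = sm.add_constant(df[[marker, *covariates]])
    res = sm.Logit(df[outcome].to_numpy(), X).fit(disp=0)
    return float(np.exp(res.params[marker]))  # OR per one unit of the marker

def bootstrap_auc_diff(y, score_a, score_b, n_boot=2000, seed=0):
    """Paired bootstrap CI for the difference between two correlated AUCs."""
    rng = np.random.default_rng(seed)
    y, a, b = (np.asarray(v) for v in (y, score_a, score_b))
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if np.unique(y[idx]).size < 2:        # resample must contain both classes
            continue
        diffs.append(roc_auc_score(y[idx], a[idx]) - roc_auc_score(y[idx], b[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return {"auc_diff": roc_auc_score(y, a) - roc_auc_score(y, b),
            "ci_95": (float(lo), float(hi))}  # a CI excluding 0 suggests a real difference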
", "Several studies have illustrated the utility of routine blood tests performed on admission to the hospital for the diagnosis of COVID‐19. Most of them have underlined the importance of lymphopenia and eosinopenia present on admission of patients with COVID‐19.\n5\n, \n7\n, \n22\n, \n23\n In our study, no significant difference in the lymphocyte count was observed between the negative group and serious COVID‐19 disease. With an AUC of 0.635 in the case of mild disease and 0.510 for serious disease, the lymphocyte count constitutes a diagnostic biomarker of low efficacy. The observed difference in the lymphocyte subset compared with other studies can be attributed to differences in study design. In most cases, patients with mild symptoms are not included or not examined separately, whereas the control group in some cases consists of healthy individuals rather than patients with infectious disease. Furthermore, lymphopenia is often associated with other causes such as congenital immunodeficiency disorders, malnutrition, alcohol abuse, medications, malignancies, systemic autoimmune diseases, and (bacterial or viral) infections, resulting in an increased risk of hospitalization with infection.\n24\n This may account for the higher frequency of patients with lymphopenia in the groups with a serious condition and in need of hospitalization, thus explaining the absence of a significant difference in lymphocytes between negative and positive serious cases. In the case of mild disease, on the other hand, lymphopenia is more pronounced in the positive group, characterizing the early stages of COVID‐19 in contrast to other infectious diseases.\nIn contrast, the decrease in the eosinophil count is prominent in the mild disease group, present in 57.3% of patients compared with 19.4% of patients in the negative group, and even more frequent in patients with serious disease, reaching 72.6%. ROC curve analysis revealed medium performance for the eosinophil count (AUC: 0.659) compared with other blood biomarkers for the diagnosis of serious COVID‐19 disease.\nThe diagnostic ability of the leukocyte and neutrophil counts for COVID‐19 has been highlighted in several different studies.\n6\n, \n7\n, \n9\n, \n25\n Indeed, WBC and NEUT were significantly lower in mild and serious disease compared with the negative group and were both independent determinants of COVID‐19 disease. WBC and NEUT both displayed high efficiency in the diagnosis of mild and serious COVID‐19 disease, either as single markers or in combination with other parameters of the CBC.\nNLR has been proposed as a possibly sufficient diagnostic marker for COVID‐19 disease.\n6\n, \n9\n, \n14\n In our study, NLR is a diagnostic marker of medium performance (AUC: 0.656 for mild disease and 0.719 for serious disease).\nThe performance of the basophil count as an indicator of COVID‐19 disease is surprisingly high. Basophil count depletion is observed early in the course of COVID‐19 disease, following the trend of decrease in all white blood cell subsets.
Because the basophil count is generally low, even in healthy individuals, concerns were raised that its variability is not specific to a certain pathological condition.\n3\n Consequently, it was assumed that basophils may not be implicated in COVID‐19 pathogenesis and diagnosis. However, recent findings have suggested that basophils have an immune regulatory function in both the innate and the adaptive immune response.\n26\n, \n27\n, \n28\n By comparing mostly mild cases of COVID‐19 with patients with other pulmonary infections, Dai et al found that, among other CBC parameters, basophil count and proportion were the most discriminant biomarkers.\n29\n On the other hand, a protective role of a high basophil count against developing severe disease was recently proposed, and a causal association between basophil count and COVID‐19 risk and susceptibility was evidenced, whereas the same association was not confirmed for lymphocytes and eosinophils.\n30\n In light of these findings, it is not surprising that basophil depletion may serve as an early marker for the diagnosis of COVID‐19. Our observations indicate high potential for the basophil count as a diagnostic marker of mild COVID‐19 disease, and its combination with WBC results in a combined marker with a sensitivity of 88.5% and a specificity of 60.2%.\nThe HFLC count is found to be elevated in COVID‐19 patients\n17\n and is further increased upon progression of disease, especially in the second week of illness, coinciding with the presence in serum of anti‐SARS‐CoV‐2‐specific antibodies.\n31\n The increase in HFLC also correlates with worsening of the clinical condition, especially in the case of cytokine storm syndrome.\n32\n In our study, the HFLC count is significantly increased and is independently associated with mild and serious COVID‐19 disease. The most important finding is that HFLC can be utilized for the diagnosis of both conditions, either as a single marker or in combination with WBC, with the WBC‐HFLC combination being superior to all single and other combined markers (Tables S2 and S3). As a marker of progression to critical disease, the HFLC count is of moderate efficacy (AUC: 0.710).\nThe immature granulocyte count has been proposed as a predictor of sepsis\n19\n, \n33\n and a marker of acute respiratory distress syndrome.\n34\n Furthermore, an increase in neutrophil precursors is highly associated with severe COVID‐19.\n35\n, \n36\n Our results indicate a low IG count in all COVID‐19 patients and medium performance as a diagnostic marker (Table S2). Interestingly, IG can be a very useful indicator of critical disease (AUC: 0.890, sensitivity: 86%, specificity: 83% at a cutoff of >0.05), with no statistical difference from the other three excellent markers NLR, WBC, and NEUT.\nThe most significant markers proposed in the literature as predictors of COVID‐19 disease severity are CRP,\n37\n, \n38\n white blood cell count,\n10\n, \n37\n, \n38\n neutrophil count,\n10\n, \n37\n lymphocyte count,\n12\n, \n37\n NLR,\n13\n, \n14\n and PLR.\n14\n, \n15\n Our findings are in good agreement with previous studies. CRP and LYMPH are independently associated with progression from mild to serious disease, and they can both be used efficiently as indicators of serious disease.
For critical disease, the AUC we found for NLR is 0.911, which is comparable with the 0.90 reported in a meta‐analysis conducted by Li et al\n13\n and the 0.841 reported by Yang et al.\n14\n Furthermore, there was good agreement between the AUCs found for CRP (0.743) and PLR (0.806) and the values of 0.714 and 0.784, respectively, found by Yang et al.\n14\n Similarly, we concluded that NLR is a superior marker compared with PLR.\n14\n The added value of our observations in the area of diagnostic markers of COVID‐19 disease severity is the addition of two more good predictors of severe disease, IG and HFLC, and the comparative evaluation of biomarker performance. Consequently, NLR, WBC, NEUT, and IG are the most efficient indicators of critical disease, followed by PLR, LYMPH, CRP, and HFLC.\nOur study has some limitations. First of all, it is a retrospective study conducted in a single clinical center; more valuable information could be gained from a multi‐center study. Second, due to time limitations, the mild and critical disease groups are much smaller than the negative and serious disease groups. Finally, decisions about the clinical condition of patients are based on expert opinion, which may introduce some bias in the classification of patients.\nIn conclusion, it is apparent from our study that lymphopenia is not an efficient marker for discriminating COVID‐19 patients from negative cases. On the other hand, eosinophil and basophil depletion are good indicators of COVID‐19 at the early stages of the disease. HFLC is a potent marker for the diagnosis of mild and serious COVID‐19, either as a single marker or combined with the leukocyte count, whereas IG shows excellent performance as an indicator of COVID‐19 disease progression from serious to critical condition.", "The authors have no competing interests.", "Due to the retrospective and non‐interventional nature of the study, informed consent was not required. The study was approved by the review board of University General Hospital of Ioannina (registration number: 10/26‐05‐2021).", "Table S1\nTable S2\nTable S3" ]
[ null, "materials-and-methods", null, null, null, "results", null, null, null, "discussion", "COI-statement", null, "supplementary-material" ]
[ "biomarkers", "COVID‐19", "diagnosis", "leukocytes", "statistic" ]
INTRODUCTION: Coronavirus disease 2019 (COVID‐19) is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), first discovered in December 2019 in Wuhan, China. Since then, the world has been caught up in one of the deadliest pandemics in history, with SARS‐CoV‐2 having spread to over 210 countries, resulting in more than 150 million confirmed cases and more than 3 million deaths as of 3 May 2021. 1 Furthermore, the increased transmissibility of SARS‐CoV‐2 variants has raised concern about the high admission rate to the emergency department and the resulting load on public health systems. 2 Under these circumstances, patients with clinical signs of infection (eg, fever, cough) presenting at the emergency department of a hospital are treated as suspected COVID‐19 patients. Diagnosis of COVID‐19 patients usually relies on real‐time polymerase chain reaction (RT‐PCR) tests, which can be time‐consuming. On the other hand, the availability of rapid PCR testing may be restricted. Given the possible consequences of delayed diagnosis and quarantine of SARS‐CoV‐2‐positive patients, triage at the emergency department has become a formidable task. The complete blood count (CBC) may offer valuable information, indicative of a possible SARS‐CoV‐2 infection, thus assisting clinicians in making decisions at the time of admission. Several studies have demonstrated the decrease in leukocytes and their subpopulations in COVID‐19 patients compared to healthy individuals and non‐COVID‐19 patients with other infectious diseases. Accordingly, a decreased neutrophil count, lymphopenia, and eosinopenia are the most common markers suggested for the identification of COVID‐19 patients. However, most of the reports regarding the diagnostic value of hematologic parameters for COVID‐19 refer mainly to comparisons between a control group and COVID‐19 patients, 3 , 4 , 5 , 6 whereas few studies have evaluated their performance as diagnostic markers in the emergency room. 7 , 8 , 9 On the other hand, there is a large body of data on the utilization of hematologic parameters for the diagnosis of progression to serious or severe disease and their performance as prognostic markers in COVID‐19 patients. The most common hematologic parameters derived from the CBC with evidenced prognostic value include the following: neutrophil count, 10 lymphocyte count, 11 , 12 neutrophil‐to‐lymphocyte ratio (NLR), 13 , 14 and platelet‐to‐lymphocyte ratio (PLR). 14 , 15 We sought to determine the performance of parameters of white blood cells and their subpopulations, as well as their combinations, for the diagnosis of COVID‐19 and as indicators of disease progression to serious and critical condition. Since the blood cell parameters may depend on the stage of the disease, we classified patients according to their clinical condition into three groups: patients with mild, serious, and critical disease. In this way, we anticipated discovering early diagnostic markers for COVID‐19 disease with the potential to optimize the triage process, as well as early indicators of disease progression, in order to aid clinicians in the management of patients in need of close monitoring for developing serious and/or critical condition. In addition to the parameters of the CBC, we also examined other known markers used as indicators of the systemic inflammatory response such as NLR, PLR, and lymphocyte‐to‐monocyte ratio (LMR) 14 as well as high fluorescence lymphocyte cells (HFLC) and Immature Granulocyte count (IG).
HFLC are lymphoplasmacytoid cells or plasma cells present in the blood of patients as a response of the innate immunity to infectious disease. 16 Their detection is based on their characteristical high fluorescence intensity and their count is reported by modern automated hematology analyzers as part of the full blood count. HFLC are elevated in COVID‐19 patients and are further increased in severe disease. 17 On the other hand, immature granulocytes in the peripheral blood can occur in response to infection, inflammation, or other cause of bone marrow stimulation. Both HFLC and IG have been thoroughly investigated as potential markers of sepsis. 18 , 19 C‐reactive protein (CRP), a well‐known inflammation and disease progression marker, was also included in the study. 7 In this respect, we explored their potential role in the diagnosis of COVID‐19 as standalone markers and in combination with other parameters of the CBC. MATERIALS AND METHODS: Patients This is a retrospective case‐control study conducted from 14 March 2020 to 6 March 2021, with data collected from patients admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Epirus, Greece). Due to the low prevalence of COVID‐19 disease in our country from March to October 2020, we had to extend the time of data collection to March 2021 in order to include as many COVID‐19 patients as possible. All patients who presented at the emergency department with fever and/or respiratory symptoms were suspected for COVID‐19 infection, and their nasopharyngeal swab specimens were tested for SARS‐CoV‐2 with real‐time polymerase chain reaction (RT‐PCR). 197 patients who tested negative in RT‐PCR were selected as the control group (negative cases). Negative cases discharged to home were considered as mild negative cases (control 1, n = 103) while negative cases admitted to general ward were classified as serious negative cases (control 2, n = 94). The clinical evaluation and management of SARS‐CoV‐2‐positive patients were performed according to the Guidelines of the National Institute of Public Health of Greece. 21 COVID‐19 patients were classified according to their clinical condition as evaluated at the emergency department into three groups defined as following: mild disease: discharged to home, serious disease: hospitalized in general ward and severe/critical disease: admitted to intensive care unit (ICU). Patients initially admitted to general ward and later transferred to ICU (n = 23) were included in the critical group; CBC on admission to ICU was used in this case. Only adult patients were included in the study whereas patients with conditions associated with abnormal blood cell counts as hematological malignancies, metastatic bone marrow infiltration by malignancy, receiving chemotherapy or immunosuppressive therapy (n = 13) were excluded from the study. We also excluded patients without CBC on admission (n = 3) as well as patients transferred from another ward (n = 7). A total of 368 COVID‐19 cases were included in the study and were classified as having mild (n = 96), serious (n = 215), and critical (n = 57) disease (Figure 1). Patient flow chart This is a retrospective case‐control study conducted from 14 March 2020 to 6 March 2021, with data collected from patients admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Epirus, Greece). 
Due to the low prevalence of COVID‐19 disease in our country from March to October 2020, we had to extend the time of data collection to March 2021 in order to include as many COVID‐19 patients as possible. All patients who presented at the emergency department with fever and/or respiratory symptoms were suspected for COVID‐19 infection, and their nasopharyngeal swab specimens were tested for SARS‐CoV‐2 with real‐time polymerase chain reaction (RT‐PCR). 197 patients who tested negative in RT‐PCR were selected as the control group (negative cases). Negative cases discharged to home were considered as mild negative cases (control 1, n = 103) while negative cases admitted to general ward were classified as serious negative cases (control 2, n = 94). The clinical evaluation and management of SARS‐CoV‐2‐positive patients were performed according to the Guidelines of the National Institute of Public Health of Greece. 21 COVID‐19 patients were classified according to their clinical condition as evaluated at the emergency department into three groups defined as following: mild disease: discharged to home, serious disease: hospitalized in general ward and severe/critical disease: admitted to intensive care unit (ICU). Patients initially admitted to general ward and later transferred to ICU (n = 23) were included in the critical group; CBC on admission to ICU was used in this case. Only adult patients were included in the study whereas patients with conditions associated with abnormal blood cell counts as hematological malignancies, metastatic bone marrow infiltration by malignancy, receiving chemotherapy or immunosuppressive therapy (n = 13) were excluded from the study. We also excluded patients without CBC on admission (n = 3) as well as patients transferred from another ward (n = 7). A total of 368 COVID‐19 cases were included in the study and were classified as having mild (n = 96), serious (n = 215), and critical (n = 57) disease (Figure 1). Patient flow chart Data collection and management Demographic data and clinical symptoms and signs were obtained from electronic medical records. The complete blood count and the extended parameters HFLC and IG on day of admission were measured on Sysmex XN‐3100 (Sysmex, Japan). CRP measurements were obtained from the hospital Laboratory Information System (LIS). RT‐PCR test was performed on Xpert Xpress SARS‐COV‐2 (Cepheid AB). Demographic data and clinical symptoms and signs were obtained from electronic medical records. The complete blood count and the extended parameters HFLC and IG on day of admission were measured on Sysmex XN‐3100 (Sysmex, Japan). CRP measurements were obtained from the hospital Laboratory Information System (LIS). RT‐PCR test was performed on Xpert Xpress SARS‐COV‐2 (Cepheid AB). Study design and statistical analysis The study of CBC parameters as diagnostic markers of COVID‐19 comprised of two parts. In the first part, their ability to discriminate between negative and positive cases was tested separately for mild (mild negative cases vs mild positive cases) and serious (serious negative cases vs serious positive cases) disease. In the second part, biomarkers were tested as potentials indicators of COVID‐19 disease progression. For this purpose, their ability to discriminate between mild and serious (mild positive cases vs serious positive cases) as well as between serious and critical disease (serious positive cases vs critical positive cases) was examined. 
Continuous variables were expressed as medians and interquartile ranges whereas categorical variables were expressed as the counts and percentages in each category. Non‐parametric Mann‐Whitney test was used for testing the significance between two groups. Receiver operating characteristic (ROC) curve analysis was applied for the selection of parameters with high diagnostic performance. The area under the curve (AUC) was used as a measure of performance, and parameters with AUC>0.7 were selected for the multivariable analysis. Keeping in mind the impact of false negatives in the case of COVID‐19, the selection of best cutoff values was based initially on Youden index and with a focus on maximizing sensitivity. Enter binary logistic regression analysis was conducted to determine the influence of the parameters on the outcome and in developing pairwise combinations for different parameters. The comparison of AUC between pairwise combinations and individual parameters indicated whether there was improvement in the discriminatory power. Furthermore, Nagelkerke R and Akaike information criteria (AIC) were used for the assessment of the goodness of fit of all pairwise combinations, with lower AICs indicating better model fit. Hosmer and Lemeshow test was used for the calibration of the method. MedCalc Statistical Software version 19.2.6 (MedCalc Software Ltd,; https://www.medcalc.org; 2020) was used for ROC curve analysis and comparison of ROC curves (z‐statistic). Logistic regression analysis, Pearson correlation, and calculation of variation inflation factors (VIF) were conducted using SPSS 23.0 (IBM Corp). A 2‐tailed P value <.05 was considered as statistically significant. Graphs were plotted using GraphPad Prism 6.00 (GraphPad Software). The study of CBC parameters as diagnostic markers of COVID‐19 comprised of two parts. In the first part, their ability to discriminate between negative and positive cases was tested separately for mild (mild negative cases vs mild positive cases) and serious (serious negative cases vs serious positive cases) disease. In the second part, biomarkers were tested as potentials indicators of COVID‐19 disease progression. For this purpose, their ability to discriminate between mild and serious (mild positive cases vs serious positive cases) as well as between serious and critical disease (serious positive cases vs critical positive cases) was examined. Continuous variables were expressed as medians and interquartile ranges whereas categorical variables were expressed as the counts and percentages in each category. Non‐parametric Mann‐Whitney test was used for testing the significance between two groups. Receiver operating characteristic (ROC) curve analysis was applied for the selection of parameters with high diagnostic performance. The area under the curve (AUC) was used as a measure of performance, and parameters with AUC>0.7 were selected for the multivariable analysis. Keeping in mind the impact of false negatives in the case of COVID‐19, the selection of best cutoff values was based initially on Youden index and with a focus on maximizing sensitivity. Enter binary logistic regression analysis was conducted to determine the influence of the parameters on the outcome and in developing pairwise combinations for different parameters. The comparison of AUC between pairwise combinations and individual parameters indicated whether there was improvement in the discriminatory power. 
Furthermore, Nagelkerke R and Akaike information criteria (AIC) were used for the assessment of the goodness of fit of all pairwise combinations, with lower AICs indicating better model fit. Hosmer and Lemeshow test was used for the calibration of the method. MedCalc Statistical Software version 19.2.6 (MedCalc Software Ltd,; https://www.medcalc.org; 2020) was used for ROC curve analysis and comparison of ROC curves (z‐statistic). Logistic regression analysis, Pearson correlation, and calculation of variation inflation factors (VIF) were conducted using SPSS 23.0 (IBM Corp). A 2‐tailed P value <.05 was considered as statistically significant. Graphs were plotted using GraphPad Prism 6.00 (GraphPad Software). Patients: This is a retrospective case‐control study conducted from 14 March 2020 to 6 March 2021, with data collected from patients admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Epirus, Greece). Due to the low prevalence of COVID‐19 disease in our country from March to October 2020, we had to extend the time of data collection to March 2021 in order to include as many COVID‐19 patients as possible. All patients who presented at the emergency department with fever and/or respiratory symptoms were suspected for COVID‐19 infection, and their nasopharyngeal swab specimens were tested for SARS‐CoV‐2 with real‐time polymerase chain reaction (RT‐PCR). 197 patients who tested negative in RT‐PCR were selected as the control group (negative cases). Negative cases discharged to home were considered as mild negative cases (control 1, n = 103) while negative cases admitted to general ward were classified as serious negative cases (control 2, n = 94). The clinical evaluation and management of SARS‐CoV‐2‐positive patients were performed according to the Guidelines of the National Institute of Public Health of Greece. 21 COVID‐19 patients were classified according to their clinical condition as evaluated at the emergency department into three groups defined as following: mild disease: discharged to home, serious disease: hospitalized in general ward and severe/critical disease: admitted to intensive care unit (ICU). Patients initially admitted to general ward and later transferred to ICU (n = 23) were included in the critical group; CBC on admission to ICU was used in this case. Only adult patients were included in the study whereas patients with conditions associated with abnormal blood cell counts as hematological malignancies, metastatic bone marrow infiltration by malignancy, receiving chemotherapy or immunosuppressive therapy (n = 13) were excluded from the study. We also excluded patients without CBC on admission (n = 3) as well as patients transferred from another ward (n = 7). A total of 368 COVID‐19 cases were included in the study and were classified as having mild (n = 96), serious (n = 215), and critical (n = 57) disease (Figure 1). Patient flow chart Data collection and management: Demographic data and clinical symptoms and signs were obtained from electronic medical records. The complete blood count and the extended parameters HFLC and IG on day of admission were measured on Sysmex XN‐3100 (Sysmex, Japan). CRP measurements were obtained from the hospital Laboratory Information System (LIS). RT‐PCR test was performed on Xpert Xpress SARS‐COV‐2 (Cepheid AB). Study design and statistical analysis: The study of CBC parameters as diagnostic markers of COVID‐19 comprised of two parts. 
In the first part, their ability to discriminate between negative and positive cases was tested separately for mild (mild negative cases vs mild positive cases) and serious (serious negative cases vs serious positive cases) disease. In the second part, the biomarkers were tested as potential indicators of COVID‐19 disease progression. For this purpose, their ability to discriminate between mild and serious (mild positive cases vs serious positive cases) as well as between serious and critical disease (serious positive cases vs critical positive cases) was examined. Continuous variables were expressed as medians and interquartile ranges whereas categorical variables were expressed as counts and percentages in each category. The non‐parametric Mann‐Whitney U test was used to test for significant differences between two groups. Receiver operating characteristic (ROC) curve analysis was applied for the selection of parameters with high diagnostic performance. The area under the curve (AUC) was used as a measure of performance, and parameters with AUC>0.7 were selected for the multivariable analysis. Given the impact of false negatives in the case of COVID‐19, the best cutoff values were selected initially on the basis of the Youden index, with a focus on maximizing sensitivity. Binary logistic regression analysis (enter method) was conducted to determine the influence of the parameters on the outcome and to develop pairwise combinations of different parameters. The comparison of AUC between pairwise combinations and individual parameters indicated whether there was an improvement in discriminatory power. Furthermore, Nagelkerke R² and the Akaike information criterion (AIC) were used to assess the goodness of fit of all pairwise combinations, with lower AICs indicating better model fit. The Hosmer‐Lemeshow test was used to assess the calibration of each model. MedCalc Statistical Software version 19.2.6 (MedCalc Software Ltd; https://www.medcalc.org; 2020) was used for ROC curve analysis and comparison of ROC curves (z‐statistic). Logistic regression analysis, Pearson correlation, and calculation of variance inflation factors (VIF) were conducted using SPSS 23.0 (IBM Corp). A 2‐tailed P value <.05 was considered statistically significant. Graphs were plotted using GraphPad Prism 6.00 (GraphPad Software). RESULTS: Patients: The basic demographic data and laboratory findings of all groups of patients are summarized in Table 1 (comorbidities are given in Table S1). Patients with mild disease have significantly lower WBC (P <.0001), NEUT (P <.0001), LYMPH (P =.001), MONO (P <.0001), EOS (P <.0001), BASO (P <.0001), and NLR (P <.0001) compared with the negative control group. On the other hand, there is no significant difference for CRP (P =.1278), LMR (P =.0549), and PLR (P =.3768) between the negative group and mild disease. In the case of serious disease, significantly lower values are observed for WBC (P <.0001), NEUT (P <.0001), MONO (P =.0001), EOS (P <.0001), BASO (P <.0001), NLR (P <.0001), LMR (P <.0001), and CRP (P =.0001) but not for LYMPH (P =.7884) and PLR (P =.1166). 
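The group comparisons reported above rely on the Mann‐Whitney U test applied to values summarized as medians and interquartile ranges; a minimal sketch of such a comparison, on hypothetical values rather than the study data, could look like this.

```python
# Sketch: compare a biomarker (e.g. WBC) between negative and positive patients
# with the two-sided Mann-Whitney U test, and report medians with IQRs.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
wbc_negative = rng.lognormal(mean=2.2, sigma=0.35, size=100)   # hypothetical values, 10^9/L
wbc_positive = rng.lognormal(mean=1.8, sigma=0.35, size=100)

stat, p = mannwhitneyu(wbc_negative, wbc_positive, alternative="two-sided")

def median_iqr(x):
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return f"{q2:.2f} ({q1:.2f}-{q3:.2f})"

print("negative:", median_iqr(wbc_negative))
print("positive:", median_iqr(wbc_positive))
print(f"Mann-Whitney U = {stat:.0f}, P = {p:.4f}")
```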
Basic demographic characteristics and blood biomarkers on admission of patients with and without COVID‐19 9.03 (6.97‐11.69) 130.1 (98.17‐190.5) 227.6 (136.7‐325.0) 140.5 (114.0‐175.9) 187.1 (133.3‐291.5) 417.0 (241.9‐720.2) Abbreviations: BASO, Basophil count; CRP, C‐reactive protein; EOS, Eosinophil count; HFLC, High Fluorescence Lymphocyte Cells; IG, Immature Granulocyte count; LMR, Lymphocyte‐to‐Monocyte Ratio; LYMPH, Lymphocyte count; MONO, Monocyte count; NEUT, Neutrophil count; NLR, Neutrophil‐to‐Lymphocyte Ratio; PLR, Platelet to Lymphocyte Ratio; WBC, White Blood Cell count. Data presented as n (%) and median (IQR). Units are mg/L for CRP and 109/L for the rest of the parameters. Reference ranges as reported for Sysmex XN 20 . Progression of disease from mild to serious is accompanied by significant decrease of LYM (P <.0001), MONO (P =.0007), EOS (P <.0001), BASO (P =.0038), and increase in NEUT (P =.0346), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001). Also, progression from serious to critical disease results in significant increase in WBC (P <.0001), NEUT (P <.0001), BASO (P =.0012), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001) whereas LYMPH (P <.0001) and LMR (P <.0001) are significantly reduced. Both mild and serious disease patients have significantly lower IG count compared with the negative groups. However, significant increase in IG occurs when serious disease progresses to critical (P <.0001). HFLC is significantly higher for both mild and serious disease compared with the negative group. Progression of serious to critical disease results also in significant rise of HFLC. The basic demographic data and laboratory findings of all groups of patients are summarized in Table 1 (Comorbidities are given in Table S1). Patients with mild disease have significantly lower WBC (P <.0001), NEUT (P <.0001), LYMPH (P =.001), MONO (P <.0001), EOS (P <.0001), BASO (P <.0001), and NLR (P <.0001) compared with negative control. On the other hand, there is no significant difference for CRP (P =.1278), LMR (P = 00 549), and PLR (P =.3768) between negative group and mild disease. In the case of serious disease, significantly lower values are observed for WBC (P <.0001), NEUT (P <.0001), MONO (P =.0001), EOS (P <.0001), BASO (P <.0001), NLR (P <.0001), LMR (P <.0001), and CRP (P =.0001) but not for LYMPH (P =.7884) and PLR (P =.1166). Basic demographic characteristics and blood biomarkers on admission of patients with and without COVID‐19 9.03 (6.97‐11.69) 130.1 (98.17‐190.5) 227.6 (136.7‐325.0) 140.5 (114.0‐175.9) 187.1 (133.3‐291.5) 417.0 (241.9‐720.2) Abbreviations: BASO, Basophil count; CRP, C‐reactive protein; EOS, Eosinophil count; HFLC, High Fluorescence Lymphocyte Cells; IG, Immature Granulocyte count; LMR, Lymphocyte‐to‐Monocyte Ratio; LYMPH, Lymphocyte count; MONO, Monocyte count; NEUT, Neutrophil count; NLR, Neutrophil‐to‐Lymphocyte Ratio; PLR, Platelet to Lymphocyte Ratio; WBC, White Blood Cell count. Data presented as n (%) and median (IQR). Units are mg/L for CRP and 109/L for the rest of the parameters. Reference ranges as reported for Sysmex XN 20 . Progression of disease from mild to serious is accompanied by significant decrease of LYM (P <.0001), MONO (P =.0007), EOS (P <.0001), BASO (P =.0038), and increase in NEUT (P =.0346), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001). 
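NLR, LMR, and PLR are derived directly from the absolute counts of the CBC; a short sketch of that derivation, with made-up values and assumed column names rather than the study's data layout:

```python
# Sketch: derive NLR, LMR and PLR from absolute counts in a CBC table.
import pandas as pd

cbc = pd.DataFrame({
    "NEUT": [4.2, 7.8, 2.9],     # neutrophils, 10^9/L (hypothetical values)
    "LYMPH": [1.9, 0.8, 1.2],    # lymphocytes
    "MONO": [0.45, 0.30, 0.51],  # monocytes
    "PLT": [230, 310, 180],      # platelets
})

cbc["NLR"] = cbc["NEUT"] / cbc["LYMPH"]   # neutrophil-to-lymphocyte ratio
cbc["LMR"] = cbc["LYMPH"] / cbc["MONO"]   # lymphocyte-to-monocyte ratio
cbc["PLR"] = cbc["PLT"] / cbc["LYMPH"]    # platelet-to-lymphocyte ratio
print(cbc.round(2))
```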
Also, progression from serious to critical disease results in significant increase in WBC (P <.0001), NEUT (P <.0001), BASO (P =.0012), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001) whereas LYMPH (P <.0001) and LMR (P <.0001) are significantly reduced. Both mild and serious disease patients have significantly lower IG count compared with the negative groups. However, significant increase in IG occurs when serious disease progresses to critical (P <.0001). HFLC is significantly higher for both mild and serious disease compared with the negative group. Progression of serious to critical disease results also in significant rise of HFLC. Hematologic parameters as diagnostic markers of COVID‐19 Initially, we performed ROC curve analysis in order to select the best diagnostic markers among CBC parameters and CRP for mild disease (Table S2). The parameters with AUC>0.7 (WBC, NEUT, BASO, HFLC, and EOS) were included in the multivariable analysis, and the odds ratio (OR) for the odds of having mild COVID‐19 disease was calculated by conducting logistic regression (Table 2). Because of the strong correlation between WBC and NEUT (Pearson r = 0.959, P <.0001) and the high VIF values for WBC and NEUT due to collinearity, the best full model could not include both WBC and NEUT. Therefore, the best fitting logistic model indicated that WBC and HFLC were independently associated with mild COVID‐19 disease. Due to the low values observed for EOS, BASO, and HFLC, adjusted ORs have been calculated in order for the one‐unit change of the predictor to be meaningful (see Table 2). Hence, an increase in 0.01 (×109/L) in the value of HFLC corresponds to 1.655 increase in odds of having mild COVID‐19 disease. Multivariable logistic regression analysis for the diagnosis of mild and serious COVID‐19 disease See Table 1 for abbreviations. OR is calculated as the change in odds of having COVID‐19 upon 1 unit (×109/L) increase. Adjusted OR calculated as the change in odds of having COVID‐19 upon 0.1 unit (×109/L) increase. Adjusted OR calculated as the change in odds of having COVID‐19 upon 0.01 unit (×109/L) increase. The best performing logistic models of pairwise combinations of CBC parameters for the diagnosis of mild COVID‐19 disease are summarized in Table 3. Significant difference for the combinations WBC‐HFLC and WBC‐EOS was evidenced by comparison of ROC curves to all standalone blood biomarkers. Also, the significant difference between WBC‐HFLC and WBC‐EOS (P =.0240, z = 2.257) indicates that the combination WBC‐HFLC constitutes the best of all biomarkers. Best performing pairwise combinations of CBC parameters for the diagnosis of mild and serious COVID‐19 disease WBC, NEUT, EOS, BASO, HFLC, and NLR: See Table 1 for abbreviations. Cutoff values (109/L) are shown in parenthesis. Abbreviations: AIC, Akaike Information Criteria; AUC, Area Under the Curve. In the case of serious disease, the best performing markers (AUC > 0.7) were WBC, NEUT, HFLC, and NLR. WBC had strong correlation with NEUT (Pearson r = 0.982, P <.0001) producing strong collinearity effects; thus, WBC was once more selected for the multivariable model. WBC, HFLC, and NLR are all independent predictors of serious COVID‐19 disease (Table 2). Pairwise combinations of biomarkers were evaluated by logistic regression, and the results of the best performing pairs of CBC parameters are listed in Table 3. 
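A compact sketch of how a pairwise combination such as WBC‐HFLC can be built and compared with the single markers, using hypothetical data; the collinearity check (VIF), the combined AUC, and the AIC follow the workflow described above, but the simulated values and coefficients are illustrative only.

```python
# Sketch: fit single-marker and two-marker logistic models, check collinearity,
# and compare discrimination (AUC) and fit (AIC). Hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 400
wbc = rng.lognormal(2.0, 0.4, n)
hflc = rng.lognormal(-4.0, 0.8, n)
logit_p = 1.5 - 0.4 * wbc + 60 * hflc            # assumed "true" model for the simulation
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = pd.DataFrame({"WBC": wbc, "HFLC": hflc})

# Variance inflation factors as a collinearity check before combining markers.
Xc = sm.add_constant(X)
vif = {col: variance_inflation_factor(Xc.values, i)
       for i, col in enumerate(Xc.columns) if col != "const"}
print("VIF:", {k: round(v, 2) for k, v in vif.items()})

def fit_and_score(cols):
    model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
    auc = roc_auc_score(y, model.predict(sm.add_constant(X[cols])))
    return auc, model.aic

for cols in (["WBC"], ["HFLC"], ["WBC", "HFLC"]):
    auc, aic = fit_and_score(cols)
    print(f"{'+'.join(cols):10s} AUC={auc:.3f}  AIC={aic:.1f}")
```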
As revealed from the comparison of ROC curves, WBC‐HFLC and NEUT‐HFLC are the best combinations showing significant difference from all single markers. Initially, we performed ROC curve analysis in order to select the best diagnostic markers among CBC parameters and CRP for mild disease (Table S2). The parameters with AUC>0.7 (WBC, NEUT, BASO, HFLC, and EOS) were included in the multivariable analysis, and the odds ratio (OR) for the odds of having mild COVID‐19 disease was calculated by conducting logistic regression (Table 2). Because of the strong correlation between WBC and NEUT (Pearson r = 0.959, P <.0001) and the high VIF values for WBC and NEUT due to collinearity, the best full model could not include both WBC and NEUT. Therefore, the best fitting logistic model indicated that WBC and HFLC were independently associated with mild COVID‐19 disease. Due to the low values observed for EOS, BASO, and HFLC, adjusted ORs have been calculated in order for the one‐unit change of the predictor to be meaningful (see Table 2). Hence, an increase in 0.01 (×109/L) in the value of HFLC corresponds to 1.655 increase in odds of having mild COVID‐19 disease. Multivariable logistic regression analysis for the diagnosis of mild and serious COVID‐19 disease See Table 1 for abbreviations. OR is calculated as the change in odds of having COVID‐19 upon 1 unit (×109/L) increase. Adjusted OR calculated as the change in odds of having COVID‐19 upon 0.1 unit (×109/L) increase. Adjusted OR calculated as the change in odds of having COVID‐19 upon 0.01 unit (×109/L) increase. The best performing logistic models of pairwise combinations of CBC parameters for the diagnosis of mild COVID‐19 disease are summarized in Table 3. Significant difference for the combinations WBC‐HFLC and WBC‐EOS was evidenced by comparison of ROC curves to all standalone blood biomarkers. Also, the significant difference between WBC‐HFLC and WBC‐EOS (P =.0240, z = 2.257) indicates that the combination WBC‐HFLC constitutes the best of all biomarkers. Best performing pairwise combinations of CBC parameters for the diagnosis of mild and serious COVID‐19 disease WBC, NEUT, EOS, BASO, HFLC, and NLR: See Table 1 for abbreviations. Cutoff values (109/L) are shown in parenthesis. Abbreviations: AIC, Akaike Information Criteria; AUC, Area Under the Curve. In the case of serious disease, the best performing markers (AUC > 0.7) were WBC, NEUT, HFLC, and NLR. WBC had strong correlation with NEUT (Pearson r = 0.982, P <.0001) producing strong collinearity effects; thus, WBC was once more selected for the multivariable model. WBC, HFLC, and NLR are all independent predictors of serious COVID‐19 disease (Table 2). Pairwise combinations of biomarkers were evaluated by logistic regression, and the results of the best performing pairs of CBC parameters are listed in Table 3. As revealed from the comparison of ROC curves, WBC‐HFLC and NEUT‐HFLC are the best combinations showing significant difference from all single markers. Hematologic parameters as indicators of COVID‐19 disease progression The univariable analysis indicated LYMPH and CRP as good indicators of progression from mild to serious COVID‐19 disease (Table S3). Furthermore, multivariable analysis revealed that both markers are independent indicators of progression of mild to serious disease (Table 4). The logistic model of their combination failed the goodness of fit test (Hosmer and Lemeshow test <0.05). 
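The Hosmer‐Lemeshow calibration check referred to above can be sketched as follows; the decile-of-risk grouping is the conventional choice, although the exact implementation in the authors' software may differ in detail.

```python
# Sketch: Hosmer-Lemeshow goodness-of-fit test with decile-of-risk groups.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, p_pred, groups=10):
    """Return the H-L chi-square statistic and its P value."""
    order = np.argsort(p_pred)
    y, p = np.asarray(y_true)[order], np.asarray(p_pred)[order]
    bins = np.array_split(np.arange(len(p)), groups)        # deciles of predicted risk
    chi_sq = 0.0
    for idx in bins:
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        chi_sq += (obs - exp) ** 2 / (exp * (1 - exp / n))   # observed vs expected events
    p_value = chi2.sf(chi_sq, df=groups - 2)
    return chi_sq, p_value

# Hypothetical, well-calibrated example: outcomes drawn from the predicted probabilities.
rng = np.random.default_rng(3)
p_pred = rng.uniform(0.05, 0.95, 300)
y_true = rng.binomial(1, p_pred)
print(hosmer_lemeshow(y_true, p_pred))   # P > .05 suggests no evidence of poor calibration
```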
Age‐ and gender‐adjusted odds ratios of CBC parameters and CRP for the diagnosis of COVID‐19 progression in the case of mild and serious disease See Table 1 for abbreviations. OR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (×109/L) increase. OR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (mg/L) increase. Adjusted OR calculated as the change in odds of having serious COVID‐19 upon 0.01 unit (×109/L) increase. ROC curve analysis highlighted several parameters for the indication of progression from serious to critical illness (Table S3). Therefore, NLR (AUC: 0.911), IG (AUC: 0.890), NEUT (AUC: 0.884), and WBC (AUC: 0.854) presented excellent performance whereas PLR (AUC: 0.806), CRP (AUC: 0.743), and LYMPH (AUC: 0.742) were also good indicators of critical disease. Due to collinearity effects, WBC and NEUT could not be included simultaneously in the full logistic model. Hence, logistic regression of the full model revealed that mostly WBC is an independent factor for progression to critical COVID‐19 disease while LYMPH and PLR displayed borderline significance (Table 4). Respectively, the equivalent full logistic model including NEUT instead of WBC exhibited similar results with NEUT being also an independent variable. Logistic regression of pairwise combination of blood biomarkers yielded several combined markers with high performance: NEUT‐PLR (AUC: 0.924), WBC‐PLR (AUC: 0.923), HFLC‐NLR (AUC: 0.918), NEUT‐NLR (AUC: 0.912), WBC‐NLR (AUC: 0.912), NEUT‐LYM (AUC: 0.910), WBC‐LYM (AUC: 0.910), NEUT‐CRP (AUC: 0.898), NEUT‐IG (0.890), NEUT‐HFLC (AUC: 0.887), and WBC‐CRP (AUC: 0.888). Interestingly, none of these combinations has statistically significant difference from the best performing single markers NLR, IG, WBC, and NEUT. Furthermore, comparison of ROC curves reveals that the AUC of the best performing parameter, NLR, does not differ significantly from the closely following parameters IG (P =.3331, z = 0.968), WBC (P =.0630, z = 1.859), and NEUT (P =.28902, z = 1.080). Consequently, standalone CBC parameters can be utilized sufficiently as diagnostic markers of progression from serious to critical disease and the combination of blood biomarkers does not contribute anything to their diagnostic value. The univariable analysis indicated LYMPH and CRP as good indicators of progression from mild to serious COVID‐19 disease (Table S3). Furthermore, multivariable analysis revealed that both markers are independent indicators of progression of mild to serious disease (Table 4). The logistic model of their combination failed the goodness of fit test (Hosmer and Lemeshow test <0.05). Age‐ and gender‐adjusted odds ratios of CBC parameters and CRP for the diagnosis of COVID‐19 progression in the case of mild and serious disease See Table 1 for abbreviations. OR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (×109/L) increase. OR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (mg/L) increase. Adjusted OR calculated as the change in odds of having serious COVID‐19 upon 0.01 unit (×109/L) increase. ROC curve analysis highlighted several parameters for the indication of progression from serious to critical illness (Table S3). Therefore, NLR (AUC: 0.911), IG (AUC: 0.890), NEUT (AUC: 0.884), and WBC (AUC: 0.854) presented excellent performance whereas PLR (AUC: 0.806), CRP (AUC: 0.743), and LYMPH (AUC: 0.742) were also good indicators of critical disease. 
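The adjusted odds ratios described in the table footnotes above rescale the logistic coefficient so that it refers to a 0.1 or 0.01 (×10⁹/L) change rather than a full unit; the conversion is a one-line calculation, shown here with an invented coefficient rather than the study's estimates.

```python
# Sketch: odds ratios for different increments of a predictor from one logistic coefficient.
import numpy as np

beta = 50.0                       # hypothetical coefficient per 1 x 10^9/L (e.g. a low-count marker)
for delta in (1.0, 0.1, 0.01):    # increments used in the tables: 1, 0.1 and 0.01 x 10^9/L
    print(f"OR per {delta:g} unit = exp({beta:g} * {delta:g}) = {np.exp(beta * delta):.3f}")
# Equivalently, OR(delta) = OR(1 unit) ** delta, so an OR of 1.655 per 0.01 unit
# corresponds to exp(100 * ln(1.655)) per full unit, which is enormous for markers with tiny counts.
```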
Due to collinearity effects, WBC and NEUT could not be included simultaneously in the full logistic model. Hence, logistic regression of the full model revealed that mostly WBC is an independent factor for progression to critical COVID‐19 disease while LYMPH and PLR displayed borderline significance (Table 4). Respectively, the equivalent full logistic model including NEUT instead of WBC exhibited similar results with NEUT being also an independent variable. Logistic regression of pairwise combination of blood biomarkers yielded several combined markers with high performance: NEUT‐PLR (AUC: 0.924), WBC‐PLR (AUC: 0.923), HFLC‐NLR (AUC: 0.918), NEUT‐NLR (AUC: 0.912), WBC‐NLR (AUC: 0.912), NEUT‐LYM (AUC: 0.910), WBC‐LYM (AUC: 0.910), NEUT‐CRP (AUC: 0.898), NEUT‐IG (0.890), NEUT‐HFLC (AUC: 0.887), and WBC‐CRP (AUC: 0.888). Interestingly, none of these combinations has statistically significant difference from the best performing single markers NLR, IG, WBC, and NEUT. Furthermore, comparison of ROC curves reveals that the AUC of the best performing parameter, NLR, does not differ significantly from the closely following parameters IG (P =.3331, z = 0.968), WBC (P =.0630, z = 1.859), and NEUT (P =.28902, z = 1.080). Consequently, standalone CBC parameters can be utilized sufficiently as diagnostic markers of progression from serious to critical disease and the combination of blood biomarkers does not contribute anything to their diagnostic value. Patients: The basic demographic data and laboratory findings of all groups of patients are summarized in Table 1 (Comorbidities are given in Table S1). Patients with mild disease have significantly lower WBC (P <.0001), NEUT (P <.0001), LYMPH (P =.001), MONO (P <.0001), EOS (P <.0001), BASO (P <.0001), and NLR (P <.0001) compared with negative control. On the other hand, there is no significant difference for CRP (P =.1278), LMR (P = 00 549), and PLR (P =.3768) between negative group and mild disease. In the case of serious disease, significantly lower values are observed for WBC (P <.0001), NEUT (P <.0001), MONO (P =.0001), EOS (P <.0001), BASO (P <.0001), NLR (P <.0001), LMR (P <.0001), and CRP (P =.0001) but not for LYMPH (P =.7884) and PLR (P =.1166). Basic demographic characteristics and blood biomarkers on admission of patients with and without COVID‐19 9.03 (6.97‐11.69) 130.1 (98.17‐190.5) 227.6 (136.7‐325.0) 140.5 (114.0‐175.9) 187.1 (133.3‐291.5) 417.0 (241.9‐720.2) Abbreviations: BASO, Basophil count; CRP, C‐reactive protein; EOS, Eosinophil count; HFLC, High Fluorescence Lymphocyte Cells; IG, Immature Granulocyte count; LMR, Lymphocyte‐to‐Monocyte Ratio; LYMPH, Lymphocyte count; MONO, Monocyte count; NEUT, Neutrophil count; NLR, Neutrophil‐to‐Lymphocyte Ratio; PLR, Platelet to Lymphocyte Ratio; WBC, White Blood Cell count. Data presented as n (%) and median (IQR). Units are mg/L for CRP and 109/L for the rest of the parameters. Reference ranges as reported for Sysmex XN 20 . Progression of disease from mild to serious is accompanied by significant decrease of LYM (P <.0001), MONO (P =.0007), EOS (P <.0001), BASO (P =.0038), and increase in NEUT (P =.0346), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001). Also, progression from serious to critical disease results in significant increase in WBC (P <.0001), NEUT (P <.0001), BASO (P =.0012), NLR (P <.0001), PLR (P <.0001), and CRP (P <.0001) whereas LYMPH (P <.0001) and LMR (P <.0001) are significantly reduced. 
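The ROC-curve comparisons quoted in this section report a z‐statistic together with a two-sided P value; the mapping between the two is a standard normal tail calculation, illustrated below for z values close to those reported (this is not a re-derivation of the study's comparisons).

```python
# Sketch: two-sided P value for a z statistic from a ROC-curve comparison.
from scipy.stats import norm

for z in (0.968, 1.080, 1.859, 2.257):
    p = 2 * norm.sf(abs(z))            # two-tailed tail probability
    print(f"z = {z:.3f} -> P = {p:.4f}")
```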
Both mild and serious disease patients have significantly lower IG count compared with the negative groups. However, significant increase in IG occurs when serious disease progresses to critical (P <.0001). HFLC is significantly higher for both mild and serious disease compared with the negative group. Progression of serious to critical disease results also in significant rise of HFLC. Hematologic parameters as diagnostic markers of COVID‐19: Initially, we performed ROC curve analysis in order to select the best diagnostic markers among CBC parameters and CRP for mild disease (Table S2). The parameters with AUC>0.7 (WBC, NEUT, BASO, HFLC, and EOS) were included in the multivariable analysis, and the odds ratio (OR) for the odds of having mild COVID‐19 disease was calculated by conducting logistic regression (Table 2). Because of the strong correlation between WBC and NEUT (Pearson r = 0.959, P <.0001) and the high VIF values for WBC and NEUT due to collinearity, the best full model could not include both WBC and NEUT. Therefore, the best fitting logistic model indicated that WBC and HFLC were independently associated with mild COVID‐19 disease. Due to the low values observed for EOS, BASO, and HFLC, adjusted ORs have been calculated in order for the one‐unit change of the predictor to be meaningful (see Table 2). Hence, an increase in 0.01 (×109/L) in the value of HFLC corresponds to 1.655 increase in odds of having mild COVID‐19 disease. Multivariable logistic regression analysis for the diagnosis of mild and serious COVID‐19 disease See Table 1 for abbreviations. OR is calculated as the change in odds of having COVID‐19 upon 1 unit (×109/L) increase. Adjusted OR calculated as the change in odds of having COVID‐19 upon 0.1 unit (×109/L) increase. Adjusted OR calculated as the change in odds of having COVID‐19 upon 0.01 unit (×109/L) increase. The best performing logistic models of pairwise combinations of CBC parameters for the diagnosis of mild COVID‐19 disease are summarized in Table 3. Significant difference for the combinations WBC‐HFLC and WBC‐EOS was evidenced by comparison of ROC curves to all standalone blood biomarkers. Also, the significant difference between WBC‐HFLC and WBC‐EOS (P =.0240, z = 2.257) indicates that the combination WBC‐HFLC constitutes the best of all biomarkers. Best performing pairwise combinations of CBC parameters for the diagnosis of mild and serious COVID‐19 disease WBC, NEUT, EOS, BASO, HFLC, and NLR: See Table 1 for abbreviations. Cutoff values (109/L) are shown in parenthesis. Abbreviations: AIC, Akaike Information Criteria; AUC, Area Under the Curve. In the case of serious disease, the best performing markers (AUC > 0.7) were WBC, NEUT, HFLC, and NLR. WBC had strong correlation with NEUT (Pearson r = 0.982, P <.0001) producing strong collinearity effects; thus, WBC was once more selected for the multivariable model. WBC, HFLC, and NLR are all independent predictors of serious COVID‐19 disease (Table 2). Pairwise combinations of biomarkers were evaluated by logistic regression, and the results of the best performing pairs of CBC parameters are listed in Table 3. As revealed from the comparison of ROC curves, WBC‐HFLC and NEUT‐HFLC are the best combinations showing significant difference from all single markers. Hematologic parameters as indicators of COVID‐19 disease progression: The univariable analysis indicated LYMPH and CRP as good indicators of progression from mild to serious COVID‐19 disease (Table S3). 
Furthermore, multivariable analysis revealed that both markers are independent indicators of progression of mild to serious disease (Table 4). The logistic model of their combination failed the goodness of fit test (Hosmer and Lemeshow test <0.05). Age‐ and gender‐adjusted odds ratios of CBC parameters and CRP for the diagnosis of COVID‐19 progression in the case of mild and serious disease See Table 1 for abbreviations. OR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (×109/L) increase. OR is calculated as the change in odds of having serious COVID‐19 upon 1 unit (mg/L) increase. Adjusted OR calculated as the change in odds of having serious COVID‐19 upon 0.01 unit (×109/L) increase. ROC curve analysis highlighted several parameters for the indication of progression from serious to critical illness (Table S3). Therefore, NLR (AUC: 0.911), IG (AUC: 0.890), NEUT (AUC: 0.884), and WBC (AUC: 0.854) presented excellent performance whereas PLR (AUC: 0.806), CRP (AUC: 0.743), and LYMPH (AUC: 0.742) were also good indicators of critical disease. Due to collinearity effects, WBC and NEUT could not be included simultaneously in the full logistic model. Hence, logistic regression of the full model revealed that mostly WBC is an independent factor for progression to critical COVID‐19 disease while LYMPH and PLR displayed borderline significance (Table 4). Respectively, the equivalent full logistic model including NEUT instead of WBC exhibited similar results with NEUT being also an independent variable. Logistic regression of pairwise combination of blood biomarkers yielded several combined markers with high performance: NEUT‐PLR (AUC: 0.924), WBC‐PLR (AUC: 0.923), HFLC‐NLR (AUC: 0.918), NEUT‐NLR (AUC: 0.912), WBC‐NLR (AUC: 0.912), NEUT‐LYM (AUC: 0.910), WBC‐LYM (AUC: 0.910), NEUT‐CRP (AUC: 0.898), NEUT‐IG (0.890), NEUT‐HFLC (AUC: 0.887), and WBC‐CRP (AUC: 0.888). Interestingly, none of these combinations has statistically significant difference from the best performing single markers NLR, IG, WBC, and NEUT. Furthermore, comparison of ROC curves reveals that the AUC of the best performing parameter, NLR, does not differ significantly from the closely following parameters IG (P =.3331, z = 0.968), WBC (P =.0630, z = 1.859), and NEUT (P =.28902, z = 1.080). Consequently, standalone CBC parameters can be utilized sufficiently as diagnostic markers of progression from serious to critical disease and the combination of blood biomarkers does not contribute anything to their diagnostic value. DISCUSSION: Several studies have illustrated the utility of routine blood tests performed upon admittance of patients to the hospital for the diagnosis of COVID‐19. Most of them have underlined the importance of lymphopenia and eosinopenia present upon admittance of patients with COVID‐19. 5 , 7 , 22 , 23 In our study, no significant difference in the lymphocyte count was observed between the negative group and serious COVID‐19 disease. With an AUC of 0.635 in the case of mild disease and 0.510 for serious disease, lymphocyte count constitutes a diagnostic biomarker of low efficacy. The observed difference in the lymphocyte subset compared with other studies can be attributed to the different study design. In most cases, patients with mild symptoms are not included or not examined separately whereas the control group in some cases is comprised of healthy individuals and not patients with infectious disease. 
Furthermore, lymphopenia is often associated with other causes such as congenital immunodeficiency disorders, malnutrition, alcohol abuse, medications, malignancies, systemic autoimmune diseases, and (bacterial or viral) infections resulting in increased risk of hospitalization with infection. 24 Hence, this fact may account for the higher frequency of patients with lymphopenia in the groups with serious condition and in need for hospitalization, thus explaining the absence of significant difference in lymphocytes between negative and positive serious cases. On the other hand, in the case of mild disease, lymphopenia is more pronounced for the positive group, characterizing the early stages of COVID‐19 disease in contrast to other infectious diseases. On the other hand, decrease in the eosinophil count is prominent for the mild disease group, present in 57.3% of patients compared with 19.4% of patients of the negative group and even more frequent in patients with serious disease reaching 72.6%. ROC curve analysis revealed medium performance for the eosinophil count (AUC: 0.659) compared with other blood biomarkers for the diagnosis of COVID‐19 serious disease. The diagnostic ability of leukocyte and neutrophil count for COVID‐19 has been highlighted in several different studies. 6 , 7 , 9 , 25 Indeed, WBC and NEUT were significantly lower in mild and serious disease compared with the negative group and were both independent determinants of COVID‐19 disease. WBC and NEUT, both displayed high efficiency in the diagnosis of mild and serious COVID‐19 disease either as single or in combination with other parameters of the CBC. NLR has been proposed as a possible sufficient diagnostic marker for the COVID‐19 disease. 6 , 9 , 14 In our study, NLR is a diagnostic marker of medium performance (AUC: 0.656 for mild disease and 0.719 for serious disease). The performance of basophil count as an indicator of COVID‐19 disease is surprisingly high. The basophil count depletion is observed early in the course of COVID‐19 disease following the trend in decrease of all white blood cell subsets. Based on the fact that basophil count is generally low, even in healthy individuals, concerns were raised about its variability not being specific to a certain pathological condition. 3 Consequently, it was assumed that basophils may not be implicated in the COVID‐19 pathogenesis and diagnosis. However, recent findings have suggested that basophils have an immune regulatory function both in innate and adaptive immune response. 26 , 27 , 28 By comparing mostly mild cases of COVID‐19 patients with other pulmonary infection patients, J. Dai et al found that among other CBC parameters, basophil count and proportion were the most discriminant biomarkers. 29 On the other hand, a protective role of high basophil count against developing severe disease was recently proposed and a causal association between basophil count and the risk of COVID‐19 and susceptibility was evidenced whereas the same association was not confirmed for lymphocytes and eosinophils. 30 In light of these findings, it is not surprising that basophil depletion may serve as an early marker for the diagnosis of COVID‐19. Our observations are indicative of high potency for basophil count as a diagnostic marker of mild COVID‐19 disease and its combination with WBC results in a combined marker with sensitivity 88.5% and specificity 60.2%. 
HFLC count is found to be elevated in COVID‐19 patients 17 and is further increased upon progression of disease, especially in the second week of illness concurring with the presence in serum of anti‐SARS‐CoV‐2‐specific antibodies. 31 The increase in HFLC also correlates with worsening of clinical condition, especially in the case of cytokine storm syndrome. 32 In our study, HFLC count is significantly increased and is independently associated with mild and serious COVID‐19 disease. The most important finding is that HFLC can be utilized for the diagnosis of both conditions either as a single marker or in combination with WBC, a superior marker compared with all single and other combined markers (Tables S2 and S3). As a marker of progression to critical disease, HFLC count is of moderate efficacy (AUC: 0.710). Immature granulocyte count has been proposed as a predictor of sepsis 19 , 33 and a marker of acute respiratory distress syndrome. 34 Furthermore, increase in neutrophil precursors is highly associated with severe COVID‐19. 35 , 36 Our results indicate low IG count in all COVID‐19 patients and medium performance as a diagnostic marker (Table S2). Interestingly, IG can be a very useful indicator of critical disease (AUC: 0.890, sensitivity: 86%, specificity: 83% at cutoff: >0.05), having no statistical difference from the other three excellent markers NLR, WBC, and NEUT. The most significant markers proposed in literature as predictors of COVID‐19 disease severity are CRP, 37 , 38 white blood count, 10 , 37 , 38 neutrophil count, 10 , 37 lymphocyte count, 12 , 37 NLR, 13 , 14 and PLR. 14 , 15 Our findings are in good agreement with previous studies. CRP and LYMPH are independently associated with progression from mild to serious disease, and they can both be used efficiently as indicators of serious disease. For critical disease, the AUC we found for NLR is 0.911 which is comparable with 0.90 reported in a meta‐analysis conducted by Li et al 13 and 0.841 reported by Yang et al 14 Furthermore, there was good correlation for the AUC found for CRP (0.743) and PLR (0.806) compared with 0.714 and 0.784, respectively, found by Yang et al 14 Similarly, we concluded that NLR is a superior marker compared with PLR. 14 The added value from our observations in the area of diagnostic markers of COVID‐19 disease severity is the addition of two more good predictors of severe disease, IG, and HFLC, and the comparative evaluation of biomarker performance. Consequently, NLR, WBC, NEUT, and IG are the most efficient indicators of critical disease, followed by PLR, LYMPH, CRP, and HFLC. Our study has some limitations. First of all, it is a retrospective study conducted in a single clinical center. More valuable information could be gained from a multi‐center study. Second, due to time limitations, the size for mild and critical disease groups is much smaller compared with the negative and serious disease groups. Finally, decisions about the clinical condition of patients are based on expert's opinion which may introduce some bias in the classification of patients. Conclusively, it is apparent from our study that lymphopenia is not an efficient marker for the discrimination of COVID‐19 patients from negative cases. On the other hand, eosinophil and basophil depletion are good indicators of COVID‐19 at the early stages of the disease. 
HFLC is a potent marker for the diagnosis of mild and serious COVID‐19 either as a single marker or combined with leukocyte count whereas IG shows excellent performance as an indicator of COVID‐19 disease progression from serious to critical condition. CONFLICT OF INTEREST: The authors have no competing interests. ETHICAL APPROVAL: Due to the retrospective and non‐interventional nature of the study, informed consent was not required. The study was approved by the review board of University General Hospital of Ioannina (registration number: 10/26‐05‐2021). Supporting information: Table S1, Table S2, and Table S3 are available as additional data files.
Background: As the Coronavirus disease 2019 (COVID-19) pandemic is still ongoing with patients overwhelming healthcare facilities, we aimed to investigate the ability of white blood cell count (WBC) and their subsets, high fluorescence lymphocyte cells (HFLC), immature granulocyte count (IG), and C-reactive protein (CRP) to aid diagnosis of COVID-19 during the triage process and as indicators of disease progression to serious and critical condition. Methods: We collected clinical and laboratory data of patients, suspected COVID-19 cases, admitted at the emergency department of University General Hospital of Ioannina (Ioannina, Greece). We selected 197 negative and 368 positive cases, confirmed by polymerase chain reaction test for severe acute respiratory syndrome coronavirus 2. COVID-19 cases were classified into mild, serious, and critical disease. Receiver operating characteristic curve and binary logistic regression analysis were utilized for assessing the diagnosing ability of biomarkers. Results: WBC, neutrophil count (NEUT), and HFLC can discriminate efficiently negative cases from mild and serious COVID-19, whereas eosinopenia and basopenia are early indicators of the disease. The combined WBC-HFLC marker is the best diagnostic marker for both mild (sensitivity: 90.6% and specificity: 64.1%) and serious (sensitivity: 90.3% and specificity: 73.4%) disease. CRP and Lymphocyte count are early indicators of progression to serious disease whereas WBC, NEUT, IG, and neutrophil-to-lymphocyte ratio are the best indicators of critical disease. Conclusions: Lymphopenia is not useful in screening patients with COVID-19. HFLC is a good diagnostic marker for mild and serious disease either as a single marker or combined with WBC whereas IG is a good indicator of progression to critical disease.
null
null
10,534
328
[ 815, 428, 67, 406, 633, 583, 555, 38 ]
13
[ "disease", "19", "covid 19", "covid", "wbc", "0001", "mild", "auc", "neut", "patients" ]
[ "diagnosis covid", "coronavirus sars", "disease 2019 covid", "severe covid 19", "respiratory syndrome coronavirus" ]
null
null
null
[CONTENT] biomarkers | COVID‐19 | diagnosis | leukocytes | statistic [SUMMARY]
null
[CONTENT] biomarkers | COVID‐19 | diagnosis | leukocytes | statistic [SUMMARY]
null
[CONTENT] biomarkers | COVID‐19 | diagnosis | leukocytes | statistic [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | C-Reactive Protein | COVID-19 | COVID-19 Testing | Case-Control Studies | Disease Progression | Female | Humans | Leukocyte Count | Male | Middle Aged | Retrospective Studies | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | C-Reactive Protein | COVID-19 | COVID-19 Testing | Case-Control Studies | Disease Progression | Female | Humans | Leukocyte Count | Male | Middle Aged | Retrospective Studies | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | C-Reactive Protein | COVID-19 | COVID-19 Testing | Case-Control Studies | Disease Progression | Female | Humans | Leukocyte Count | Male | Middle Aged | Retrospective Studies | Severity of Illness Index [SUMMARY]
null
[CONTENT] diagnosis covid | coronavirus sars | disease 2019 covid | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] diagnosis covid | coronavirus sars | disease 2019 covid | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] diagnosis covid | coronavirus sars | disease 2019 covid | severe covid 19 | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] disease | 19 | covid 19 | covid | wbc | 0001 | mild | auc | neut | patients [SUMMARY]
null
[CONTENT] disease | 19 | covid 19 | covid | wbc | 0001 | mild | auc | neut | patients [SUMMARY]
null
[CONTENT] disease | 19 | covid 19 | covid | wbc | 0001 | mild | auc | neut | patients [SUMMARY]
null
[CONTENT] patients | 19 | covid 19 | covid | 19 patients | covid 19 patients | disease | count | markers | lymphocyte [SUMMARY]
null
[CONTENT] 0001 | wbc | neut | auc | disease | table | hflc | 19 | covid | covid 19 [SUMMARY]
null
[CONTENT] disease | 19 | 0001 | covid 19 | covid | wbc | patients | neut | auc | cases [SUMMARY]
null
[CONTENT] Coronavirus | 2019 | COVID-19 | WBC | IG | CRP | COVID-19 [SUMMARY]
null
[CONTENT] WBC | NEUT | HFLC | COVID-19 | basopenia ||| WBC | 90.6% | 64.1% | 90.3% | 73.4% ||| CRP | Lymphocyte | WBC | NEUT | IG [SUMMARY]
null
[CONTENT] Coronavirus | 2019 | COVID-19 | WBC | IG | CRP | COVID-19 ||| COVID-19 | University General Hospital of Ioannina (Ioannina | Greece ||| 197 | 368 | 2 ||| COVID-19 ||| ||| WBC | NEUT | HFLC | COVID-19 | basopenia ||| WBC | 90.6% | 64.1% | 90.3% | 73.4% ||| CRP | Lymphocyte | WBC | NEUT | IG ||| Lymphopenia | COVID-19 ||| WBC | IG [SUMMARY]
null
Estimation of Kidney Function in Patients With Multiple Myeloma: Implications for Lenalidomide Dosing.
35511200
Lenalidomide is an immunomodulatory drug used to treat multiple myeloma that requires renal dosing adjustment based on Cockcroft-Gault (CG). Various equations to estimate kidney function exist and pose a potential issue with lenalidomide dosing.
BACKGROUND
Data from 1121 multiple myeloma patients at the time of diagnosis acquired from the Mayo Clinic were used to calculate creatinine clearance (CrCl) using Cockcroft-Gault with actual body weight (CGABW), ideal body weight (CGIBW), or adjusted body weight (CGAdjBW); MDRD; and CKD-EPI for each subject. Discordances in dosing were then analyzed, and lenalidomide exposure was calculated for each subject to assess impact on pharmacokinetics of lenalidomide for patients who received discordant doses.
METHODS
Overall, approximately 16% of patients received a discordant dose when using MDRD or CKD-EPI instead of CGABW. The most common dose discordance was the decrease of a full dose of lenalidomide 25 mg when using CGABW down to 10 mg and when using MDRD or CKD-EPI with 53.8% to 55.6% of all discordances in this category. When assessing different body weights, the most common discordance was a decrease from 25 to 10 mg when using CGIBW instead of CGABW; the same trend was observed when using CGAdjBW instead as well. Patients were also at risk of over- or underexposure based on area under the concentration versus time curve (AUC) for discordant dosing.
RESULTS
A significant proportion of patients are at risk of under- or overdose of lenalidomide if CKD-EPI or MDRD are used instead of CGABW. Physicians should use CGABW when estimating renal function to dose lenalidomide.
CONCLUSION AND RELEVANCE
[ "Humans", "Glomerular Filtration Rate", "Lenalidomide", "Creatinine", "Multiple Myeloma", "Renal Insufficiency, Chronic", "Kidney", "Body Weight" ]
9619254
Introduction
Renal impairment is common in patients with cancer and may be a result of older age, prior anti-cancer therapies, or the cancer itself.1 Renal impairment often necessitates dose adjustments of cancer therapeutics and other drugs that are primarily excreted by the kidneys. Historically, the Cockcroft-Gault (CG) equation has been used to estimate creatinine clearance (CrCl) to guide renal dose adjustments in clinical practice. This equation incorporates weight, age, sex, and serum creatinine. The Modification of Diet in Renal Disease (MDRD) equation is also used for clinical purposes, including for staging of chronic kidney disease (CKD) by the estimated glomerular filtration rate (eGFR) and uses 6 variables: age, sex, ethnicity, serum creatinine, urea, and albumin with a simplified 4-variable version (MDRD-4) that does not use urea and albumin.2-4 The MDRD-4 equation was used in this analysis. Another equation used to calculate eGFR, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), was established to address the issue of underestimating GFR at higher GFR.5 The CKD-EPI equation incorporates 4 variables as well, taking into account age, sex, ethnicity, and serum creatinine like the simplified MDRD-4 equation. Discrepancies between the CG, MDRD, and CKD-EPI equations introduce risks for dosing errors, which are further compounded by the prominent display of the eGFR in electronic health records (EHRs). Another factor that may affect the calculation of renal function is handling overweight patients by using adjusted body weight (AdjBW) instead of actual body weight (ABW) that can lead to overestimation of renal function for heavier patients. Dosing errors for renally excreted anti-cancer therapies are of particular concern as many of these drugs have a narrow therapeutic window and minor changes in drug exposure may increase toxicity or decrease efficacy. Lenalidomide, a derivative of thalidomide, is an immunomodulatory imide drug (IMiD) that is primarily renally excreted and has steep dose reductions for declining renal function based on calculated CrCl.6-8 The immunomodulatory, anti-angiogenic, anti-inflammatory, and anti-neoplastic activity, as well as the favorable toxicity profile of lenalidomide led to its Food and Drug Administration (FDA) approval in combination with dexamethasone alone or with other anti-myeloma classes of drugs such as proteasome inhibitors or anti-CD38 monoclonal antibodies for relapsed/refractory multiple myeloma (MM), newly diagnosed MM, and as monotherapy for post-autologous hematopoietic cell transplant (HCT) maintenance in 2006, 2015, and 2017, respectively.6,9 Lenalidomide is administered orally, rapidly absorbed, highly bioavailable, and moderately distributed into tissues.7,8 Lenalidomide is predominantly eliminated unchanged in urine (85%) and decreased kidney function results in an increase in lenalidomide plasma concentrations, exposure, and toxicity.7,8 Recommended dose adjustments of lenalidomide for patients with a CrCl of less than 60 mL/min involve a reduction of the induction dose from 25 mg daily to 10 mg daily for 21 days of a 28-day cycle.7,8 This steep dose reduction may be especially problematic when disparate equations for renal function are utilized in error and shift the lenalidomide regimen from one dose to another. The objective of this study was to evaluate the impact of estimating kidney function in newly diagnosed MM patients with the CG, MDRD, and CKD-EPI on lenalidomide dosing, including pharmacokinetic outcomes.
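For orientation, the three estimating equations compared in this study can be written compactly as follows; the coefficients are the commonly published ones (Cockcroft-Gault, the IDMS-traceable 4-variable MDRD with the 175 constant, and the 2009 CKD-EPI creatinine equation), and this sketch is provided for reference rather than as the exact code used in the analysis.

```python
# Sketch: the three kidney-function estimates compared in this analysis.
# Serum creatinine (scr) in mg/dL, age in years, weight in kg.

def cockcroft_gault(age, weight_kg, scr, female):
    """CrCl in mL/min; 'weight_kg' may be actual, ideal or adjusted body weight."""
    crcl = (140 - age) * weight_kg / (72 * scr)
    return crcl * 0.85 if female else crcl

def mdrd4(age, scr, female, black):
    """eGFR in mL/min/1.73 m^2 (4-variable MDRD, IDMS-traceable 175 constant)."""
    egfr = 175 * scr ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def ckd_epi_2009(age, scr, female, black):
    """eGFR in mL/min/1.73 m^2 (2009 CKD-EPI creatinine equation)."""
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    egfr = 141 * min(scr / kappa, 1) ** alpha * max(scr / kappa, 1) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 70-year-old, 90 kg, non-Black male with scr 1.3 mg/dL.
print(round(cockcroft_gault(70, 90, 1.3, female=False)))   # ~67 mL/min
print(round(mdrd4(70, 1.3, female=False, black=False)))    # ~55 mL/min/1.73 m^2
print(round(ckd_epi_2009(70, 1.3, female=False, black=False)))  # ~55 mL/min/1.73 m^2
```

An example like this also illustrates the discordance discussed below: the same patient can sit just above 60 mL/min by Cockcroft-Gault yet below 60 mL/min/1.73 m^2 by MDRD or CKD-EPI.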
Methods
Demographic data and clinical characteristics from 1121 MM patients at the time of diagnosis were used (Table 1). The data were obtained via a data use agreement (DUA) with the Mayo Clinic in Rochester, Minnesota. The senior author was added to the Institutional Review Board (IRB) for this study at Mayo Clinic. For each individual subject, CrCl (milliliter per minute) was calculated with the Cockcroft-Gault equation using either actual body weight (CGABW), ideal body weight (CGIBW), or adjusted body weight (CGAdjBW); eGFR (mL/min/1.73 m2) was also estimated using both MDRD and CKD-EPI for each subject. For each estimate of renal function, a lenalidomide dose was assigned based on FDA-approved drug labeling.9,10 For the purposes of this analysis, the difference in units between CrCl and eGFR was not considered in determining lenalidomide dosing (mL/min vs mL/min/1.73 m2) to reflect (1) how renal function estimates are displayed within EHRs and (2) clinicians do not typically calculate an absolute eGFR in milliliter per minute. Table of Subject Demographics. Abbreviations: BMI, body mass index; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; CrCl, creatinine clearance; IQR, interquartile range; MDRD, Modification of Diet in Renal Disease. Lenalidomide exposure for each subject was estimated using a linear regression model described previously.7 Apparent clearance (CL/F) was estimated using CrCl calculated with ABW, and the area under the concentration versus time curve (AUC) was then calculated as Dose/(CL/F). AUCs were calculated for each subject based on the different doses assigned for each renal function calculation. Discordance in dosing was assessed between CGABW versus MDRD, CGABW versus CKD-EPI, CGABW versus CGIBW, and CGABW versus CGAdjBW. For subjects assigned different doses when the renal function calculation switched from CGABW to MDRD or CKD-EPI, the AUCs at the 50th, 75th, 90th, 95th, and 99th percentile cutoffs were compared with the AUCs of patients who received the full 25-mg dose (ie, those patients with CrCl >60 mL/min) to assess differences in lenalidomide exposure.
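A sketch of the exposure calculation described above: the dose tiers follow the CrCl bands referenced in the text, while the linear CL/F model is a placeholder with invented coefficients, since the coefficients of the published regression are not reproduced here.

```python
# Sketch: assign a lenalidomide dose from a renal-function estimate and
# approximate exposure as AUC = dose / (CL/F). Coefficients are illustrative.

def lenalidomide_daily_dose_mg(crcl):
    """Simplified dose tiers used in this analysis (mg/day equivalents)."""
    if crcl >= 60:
        return 25.0
    if crcl >= 30:
        return 10.0
    return 7.5            # 15 mg every other day expressed as an average daily dose

def apparent_clearance_l_per_h(crcl):
    """Placeholder linear CL/F model in CrCl -- NOT the published regression."""
    return 1.5 + 0.12 * crcl

def auc_mg_h_per_l(crcl):
    dose = lenalidomide_daily_dose_mg(crcl)
    return dose / apparent_clearance_l_per_h(crcl)

for crcl in (90, 59, 45, 25):
    print(f"CrCl {crcl:>3} mL/min -> dose {lenalidomide_daily_dose_mg(crcl):>4} mg, "
          f"AUC ~ {auc_mg_h_per_l(crcl):.2f} mg*h/L")
```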
Results
Assessment of the concordance between CGABW and MDRD demonstrated that 16.6% of patients with newly diagnosed MM would have received a different lenalidomide dose if renal function were estimated by MDRD instead of CG (Figure 1). Of the subjects assigned a different dose using MDRD versus CG, the most common discordance was decreasing the full dose (25 mg) using CGABW to 10 mg using MDRD (53.8%). The next most common discordance was an increase from a dose of 10 mg using CGABW to a full dose of 25 mg using MDRD (28%). The other discordances of 10 mg using CGABW to 7.5 mg (ie, 15 mg every other day) using MDRD were 13.4% of those in the discordant category, while the converse of 7.5 mg using CGABW to 10 mg using MDRD was 4.3%. The least common discordance with only 1 subject (0.5%) was decreased from 25 mg using CGABW to 7.5 mg using MDRD. Chart summarizing the 4 comparisons analyzed and the statistical breakdown of all dose discordances found. Abbreviations: CGABW, Cockcroft-Gault with actual body weight; CGAdjBW, Cockcroft-Gault with adjusted body weight; CGIBW, Cockcroft-Gault with ideal body weight; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease. Comparing CGABW with CKD-EPI revealed similar trends, with 16.1% of subjects receiving a discordant dose (Figure 1). Like the CGABW versus MDRD comparison, the most common discordance was decreasing from the full 25-mg dose using CGABW to 10 mg using CKD-EPI, with 55.6% of subjects in the discordant category demonstrating this dose decrease. The next most common discordance was 10 mg using CGABW to 25 mg using CKD-EPI, with 23.9% of subjects in the discordant category experiencing this dose increase. Those subjects in the discordant group whose dose decreased from 10 mg using CGABW down to 7.5 mg using CKD-EPI accounted for 17.2% of subjects, with only 2.8% increasing from 7.5 mg using CGABW to 10 mg with CKD-EPI. Again, the least common discordance was decreasing from 25 mg with CGABW down to 7.5 mg with CKD-EPI with only 1 subject (0.56%). Next, the use of the CG equation with different body weights was compared. When using CGIBW instead of the standard CGABW, 22% of all subjects showed discordant doses. Again, the most common discordance was a decrease from 25 mg using CGABW to 10 mg using CGIBW, with 81% of all the discordances falling into this category. Fewer dose modifications occurred in the categories of 10 to 7.5 mg (15.4%), 10 to 25 mg (2.8%), and 7.5 to 10 mg (0.8%). When comparing CGABW with CGAdjBW, the discordance was less drastic with only 13.6% of subjects receiving a discordant dose. Of those, the majority (86.2%) once again decreased from the full dose of 25 mg using CGABW to 10 mg using CGAdjBW; the only other discordance was a decrease from 10 mg using CGABW to 7.5 mg using CGAdjBW (13.8%). For each comparison, the estimated AUCs in each discordance group were determined (Figure 2). The trends seen in AUC change are consistent based on the expected dose change; those dropping from a 25-mg dose using CGABW to 10 mg using MDRD had a 60% decrease in dose, with a similar decrease observed in AUC. Conversely, subjects assigned an increased dose using MDRD similarly experienced a relatively proportional increase in AUC. 
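The discordance analysis amounts to a per-subject cross-tabulation of the dose assigned under each renal-function estimate; a minimal pandas sketch, with assumed column names and made-up doses rather than the study data:

```python
# Sketch: cross-tabulate assigned doses under two renal-function estimates
# and summarize how often they disagree.
import pandas as pd

doses = pd.DataFrame({
    "dose_cg_abw": [25, 25, 10, 10, 25, 7.5, 10, 25],
    "dose_mdrd":   [25, 10, 25, 10, 25, 10,  7.5, 25],
})

table = pd.crosstab(doses["dose_cg_abw"], doses["dose_mdrd"])
discordant = (doses["dose_cg_abw"] != doses["dose_mdrd"]).mean()
print(table)
print(f"Discordant dosing: {discordant:.1%}")
```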
Last, for the subjects assigned a different dose when their renal function was estimated using MDRD or CKD-EPI, the AUCs in each group were plotted and compared with the AUCs of all subjects who received the full dose (25 mg) using CGABW (ie, CGABW > 60 mL/min). As shown in Figure 3, the subjects who switched from a dose of 10 mg using CGABW to 25 mg using CKD-EPI or MDRD instead had significantly higher AUCs, whereas the remaining dose changes remained within the range of values seen for subjects who received the full dose. Boxplot demonstrating the AUC differences seen in subjects with discordances between (a) CG and MDRD renal function calculations, (b) CG and CKD-EPI, (c) CG using actual body weight and CG using ideal body weight, and (d) CG using actual body weight and CG using adjusted body weight. Abbreviations: AUC, area under the concentration versus time curve; CG, Cockcroft-Gault; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease. Boxplot demonstrating the AUC differences seen in subjects with discordances between CKD-EPI and MDRD compared with subjects with a renal function of 60 mL/min using Cockcroft-Gault. Abbreviations: AUC, area under the concentration versus time curve; CG, Cockcroft-Gault; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease.
Conclusion and Relevance
This analysis highlights the importance of dosing lenalidomide for patients with impaired renal function using CGABW. Using MDRD or CKD-EPI instead of CG or using AdjBW or IBW instead of ABW increases the risk of an inappropriate dose for the patient. An interdisciplinary clinical team of oncology pharmacists and nurses in collaboration with the oncologist can work together to ensure the correct apparent clearance (CL/F) and body weight is used to ensure the correct dose of lenalidomide is selected for a patient.14,15
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion and Relevance" ]
[ "Renal impairment is common in patients with cancer and may be a result of older age, prior anti-cancer therapies, or the cancer itself.1 Renal impairment often necessitates dose adjustments of cancer therapeutics and other drugs that are primarily excreted by the kidneys. Historically, the Cockcroft-Gault (CG) equation has been used to estimate creatinine clearance (CrCl) to guide renal dose adjustments in clinical practice. This equation incorporates weight, age, sex, and serum creatinine. The Modification of Diet in Renal Disease (MDRD) equation is also used for clinical purposes, including for staging of chronic kidney disease (CKD) by the estimated glomerular filtration rate (eGFR) and uses 6 variables: age, sex, ethnicity, serum creatinine, urea, and albumin with a simplified 4-variable version (MDRD-4) that does not use urea and albumin.2-4 The MDRD-4 equation was used in this analysis. Another equation used to calculate eGFR, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), was established to address the issue of underestimating GFR at higher GFR.5 The CKD-EPI equation incorporates 4 variables as well, taking into account age, sex, ethnicity, and serum creatinine like the simplified MDRD-4 equation. Discrepancies between the CG, MDRD, and CKD-EPI equations introduce risks for dosing errors, which are further compounded by the prominent display of the eGFR in electronic health records (EHRs). Another factor that may affect the calculation of renal function is handling overweight patients by using adjusted body weight (AdjBW) instead of actual body weight (ABW) that can lead to overestimation of renal function for heavier patients. Dosing errors for renally excreted anti-cancer therapies are of particular concern as many of these drugs have a narrow therapeutic window and minor changes in drug exposure may increase toxicity or decrease efficacy.\nLenalidomide, a derivative of thalidomide, is an immunomodulatory imide drug (IMiD) that is primarily renally excreted and has steep dose reductions for declining renal function based on calculated CrCl.6-8 The immunomodulatory, anti-angiogenic, anti-inflammatory, and anti-neoplastic activity, as well as the favorable toxicity profile of lenalidomide led to its Food and Drug Administration (FDA) approval in combination with dexamethasone alone or with other anti-myeloma classes of drugs such as proteasome inhibitors or anti-CD38 monoclonal antibodies for relapsed/refractory multiple myeloma (MM), newly diagnosed MM, and as monotherapy for post-autologous hematopoietic cell transplant (HCT) maintenance in 2006, 2015, and 2017, respectively.6,9 Lenalidomide is administered orally, rapidly absorbed, highly bioavailable, and moderately distributed into tissues.7,8 Lenalidomide is predominantly eliminated unchanged in urine (85%) and decreased kidney function results in an increase in lenalidomide plasma concentrations, exposure, and toxicity.7,8 Recommended dose adjustments of lenalidomide for patients with a CrCl of less than 60 mL/min involve a reduction of the induction dose from 25 mg daily to 10 mg daily for 21 days of a 28-day cycle.7,8 This steep dose reduction may be especially problematic when disparate equations for renal function are utilized in error and shift the lenalidomide regimen from one dose to another. 
The objective of this study was to evaluate the impact of estimating kidney function in newly diagnosed MM patients with the CG, MDRD, and CKD-EPI on lenalidomide dosing, including pharmacokinetic outcomes.", "Demographic data and clinical characteristics from 1121 MM patients at the time of diagnosis were used (Table 1). The data were obtained via a data use agreement (DUA) with the Mayo Clinic in Rochester, Minnesota. The senior author was added to the Institutional Review Board (IRB) for this study at Mayo Clinic. For each individual subject, CrCl (milliliter per minute) was calculated with the Cockcroft-Gault equation using either actual body weight (CGABW), ideal body weight (CGIBW), or adjusted body weight (CGAdjBW); eGFR (mL/min/1.73 m2) was also estimated using both MDRD and CKD-EPI for each subject. For each estimate of renal function, a lenalidomide dose was assigned based on FDA-approved drug labeling.9,10 For the purposes of this analysis, the difference in units between CrCl and eGFR was not considered in determining lenalidomide dosing (mL/min vs mL/min/1.73 m2) to reflect (1) how renal function estimates are displayed within EHRs and (2) clinicians do not typically calculate an absolute eGFR in milliliter per minute.\nTable of Subject Demographics.\nAbbreviations: BMI, body mass index; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; CrCl, creatinine clearance; IQR, interquartile range; MDRD, Modification of Diet in Renal Disease.\nLenalidomide exposure for each subject was estimated using a linear regression model described previously.7 Apparent clearance (CL/F) was estimated using CrCl calculated with ABW, and the area under the concentration versus time curve (AUC) was then calculated as Dose/(CL/F). AUCs were calculated for each subject based on the different doses assigned for each renal function calculation. Discordance in dosing was assessed between CGABW versus MDRD, CGABW versus CKD-EPI, CGABW versus CGIBW, and CGABW versus CGAdjBW.\nFor subjects assigned different doses when the renal function calculation switched from CGABW to MDRD or CKD-EPI, the AUCs at the 50th, 75th, 90th, 95th, and 99th percentile cutoffs were compared with the AUCs of patients who received the full 25-mg dose (ie, those patients with CrCl >60 mL/min) to assess differences in lenalidomide exposure.", "Assessment of the concordance between CGABW and MDRD demonstrated that 16.6% of patients with newly diagnosed MM would have received a different lenalidomide dose if renal function were estimated by MDRD instead of CG (Figure 1). Of the subjects assigned a different dose using MDRD versus CG, the most common discordance was decreasing the full dose (25 mg) using CGABW to 10 mg using MDRD (53.8%). The next most common discordance was an increase from a dose of 10 mg using CGABW to a full dose of 25 mg using MDRD (28%). The other discordances of 10 mg using CGABW to 7.5 mg (ie, 15 mg every other day) using MDRD were 13.4% of those in the discordant category, while the converse of 7.5 mg using CGABW to 10 mg using MDRD was 4.3%. 
The least common discordance with only 1 subject (0.5%) was decreased from 25 mg using CGABW to 7.5 mg using MDRD.\nChart summarizing the 4 comparisons analyzed and the statistical breakdown of all dose discordances found.\nAbbreviations: CGABW, Cockcroft-Gault with actual body weight; CGAdjBW, Cockcroft-Gault with adjusted body weight; CGIBW, Cockcroft-Gault with ideal body weight; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease.\nComparing CGABW with CKD-EPI revealed similar trends, with 16.1% of subjects receiving a discordant dose (Figure 1). Like the CGABW versus MDRD comparison, the most common discordance was decreasing from the full 25-mg dose using CGABW to 10 mg using CKD-EPI, with 55.6% of subjects in the discordant category demonstrating this dose decrease. The next most common discordance was 10 mg using CGABW to 25 mg using CKD-EPI, with 23.9% of subjects in the discordant category experiencing this dose increase. Those subjects in the discordant group whose dose decreased from 10 mg using CGABW down to 7.5 mg using CKD-EPI accounted for 17.2% of subjects, with only 2.8% increasing from 7.5 mg using CGABW to 10 mg with CKD-EPI. Again, the least common discordance was decreasing from 25 mg with CGABW down to 7.5 mg with CKD-EPI with only 1 subject (0.56%).\nNext, the use of the CG equation with different body weights was compared. When using CGIBW instead of the standard CGABW, 22% of all subjects showed discordant doses. Again, the most common discordance was a decrease from 25 mg using CGABW to 10 mg using CGIBW, with 81% of all the discordances falling into this category. Fewer dose modifications occurred in the categories of 10 to 7.5 mg (15.4%), 10 to 25 mg (2.8%), and 7.5 to 10 mg (0.8%). When comparing CGABW with CGAdjBW, the discordance was less drastic with only 13.6% of subjects receiving a discordant dose. Of those, the majority (86.2%) once again decreased from the full dose of 25 mg using CGABW to 10 mg using CGAdjBW; the only other discordance was a decrease from 10 mg using CGABW to 7.5 mg using CGAdjBW (13.8%).\nFor each comparison, the estimated AUCs in each discordance group were determined (Figure 2). The trends seen in AUC change are consistent based on the expected dose change; those dropping from a 25-mg dose using CGABW to 10 mg using MDRD had a 60% decrease in dose, with a similar decrease observed in AUC. Conversely, subjects assigned an increased dose using MDRD similarly experienced a relatively proportional increase in AUC. Last, for the subjects assigned a different dose when their renal function was estimated using MDRD or CKD-EPI, the AUCs in each group were plotted and compared with the AUCs of all subjects who received the full dose (25 mg) using CGABW (ie, CGABW > 60 mL/min). 
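A minimal sketch of this exposure comparison is shown below. Because the coefficients of the published CL/F regression are not reproduced in this text, the linear apparent-clearance model, its slope and intercept, and the simulated full-dose cohort are all placeholders; only the AUC = Dose/(CL/F) relationship and the 99th-percentile comparison mirror the approach described above.

```python
# Sketch of the exposure comparison described above: AUC = dose / (CL/F),
# with CL/F derived from CrCl via a placeholder linear model (the published
# regression coefficients are not reproduced in this text).
import numpy as np

def apparent_clearance(crcl_ml_min: float, slope: float = 0.1, intercept: float = 2.0) -> float:
    """Hypothetical CL/F (L/h) as a linear function of CrCl; coefficients are placeholders."""
    return intercept + slope * crcl_ml_min

def auc(dose_mg: float, cl_f_l_per_h: float) -> float:
    """Exposure estimate as dose divided by apparent clearance."""
    return dose_mg / cl_f_l_per_h

rng = np.random.default_rng(0)
full_dose_crcl = rng.uniform(60, 120, size=500)          # subjects dosed 25 mg under CG-ABW
full_dose_auc = np.array([auc(25, apparent_clearance(c)) for c in full_dose_crcl])
cutoff_99 = np.percentile(full_dose_auc, 99)

# A subject just below 60 mL/min by CG-ABW who is bumped to 25 mg by another equation:
borderline_auc = auc(25, apparent_clearance(57))
print(f"99th percentile of full-dose AUCs: {cutoff_99:.2f}")
print(f"Borderline subject switched to 25 mg: AUC {borderline_auc:.2f} "
      f"({'above' if borderline_auc > cutoff_99 else 'within'} the full-dose range)")
```

In the actual analysis this comparison is summarized in Figure 3; the sketch simply shows why a borderline subject moved up to the full dose can exceed the exposure range of patients who genuinely clear the drug faster.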
As shown in Figure 3, the subjects who switched from a dose of 10 mg using CGABW to 25 mg using CKD-EPI or MDRD instead had significantly higher AUCs, whereas the remaining dose changes remained within the range of values seen for subjects who received the full dose.\nBoxplot demonstrating the AUC differences seen in subjects with discordances between (a) CG and MDRD renal function calculations, (b) CG and CKD-EPI, (c) CG using actual body weight and CG using ideal body weight, and (d) CG using actual body weight and CG using adjusted body weight.\nAbbreviations: AUC, area under the concentration versus time curve; CG, Cockcroft-Gault; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease.\nBoxplot demonstrating the AUC differences seen in subjects with discordances between CKD-EPI and MDRD compared with subjects with a renal function of 60 mL/min using Cockcroft-Gault.\nAbbreviations: AUC, area under the concentration versus time curve; CG, Cockcroft-Gault; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease.", "The purpose of this analysis was to assess discordance between CG, MDRD, and CKD-EPI on lenalidomide dosing and exposure in newly diagnosed MM patients. Across each comparison, subjects with a CrCl above 60 mL/min (ie, received a full lenalidomide dose of 25 mg) were less at risk of underexposure compared to subjects with moderate or severe renal impairment who were at risk for overexposure (ie, received a higher dose than they should). The current renal dosing recommendations for lenalidomide based on the package insert are such that minor changes in kidney function estimates can result in large changes in dose.10 For example, if an EHR displays an eGFR derived via MDRD as 59 mL/min/1.73 m2 but the CG CrCl is above 60 mL/min, the patient may be prescribed 10 mg instead of 25 mg. Indeed, this was the most common discordance seen in this analysis and has implications on potential efficacy of therapy. Previous analyses recommend using the MDRD study equation for drug dose adjustments; however, our analysis demonstrates that for lenalidomide, major dose and exposure changes may result and therefore would not be advised.11 Similar calls from the International Myeloma Working Group (IMWG) to use CKD-EPI or MDRD equation for renal function calculations also complicate the picture and are not supported by this analysis.12 Based on the results presented here, the CKD-EPI equation and the MDRD equation led to significant discordance that most often leads to underdosing (prescribing 10 mg instead of 25 mg) for subjects with borderline CrCl around 60 mL/min using CG (8.9% of the time when converting from CGABW to MDRD or CKD-EPI; Figure 1). Conversely, using MDRD or CKD-EPI also led to overdose (subjects prescribed 25 mg instead of 10 mg) in 4.6% of subjects who had their renal function recalculated using MDRD from CGABW or 3.8% if recalculating using CKD-EPI (Figure 1). The package insert renal dosing recommendations for lenalidomide are based on CG; based on this analysis, health care providers should use ABW in the renal function calculation when determining the dose of this drug.\nCG can be calculated using a variety of different body weights, including IBW (ideal body weight), ABW, or AdjBW. 
AdjBW is calculated such that if the subject’s ABW is less than their IBW, their ABW is used; if their ABW is more than 1.3 times their IBW, AdjBW is calculated using the equation, AdjBW = IBW + 0.4 × (ABW – IBW). Thus, the only subjects who will have a different AdjBW from their ABW are subjects with relatively higher body mass indices (BMIs), and their AdjBW will be lower than their ABW. Clarity and expert consensus regarding which body weight (actual, ideal, or adjusted) to utilize in the CG formula are lacking, and this ambiguity has the potential to result in inaccurate CrCl results and subsequent dosing errors.4 Many robust studies have sought and failed to resolve the controversy surrounding the appropriate body weight to utilize for the CG formula.4\nBecause CG is highly dependent on weight, the predominant discordance seen in our study when comparing CGABW with CGAdjBW was a decrease in lenalidomide dose from 25 to 10 mg; the only other discordance was a decrease from 10 to 7.5 mg, meaning that all subjects who experienced discordance when re-estimating their renal function using CGAdjBW received a lower dose. Similarly, using IBW instead of ABW often underestimates a subject’s weight and subsequently their renal function; as a result, the majority of the discordance was again a decrease from 25 to 10 mg. Thus, this analysis supports using ABW instead of AdjBW or IBW.\nOne of the challenges of assessing lenalidomide exposure among subjects receiving discordant doses is that there are no well-established exposure-response relationships for efficacy or toxicity. As such, there is an unclear threshold to avoid toxicity and an unclear minimum exposure for efficacy. This guided the decision to instead compare predicted lenalidomide exposure with the group of subjects who received the full dose. Based on our findings, theoretically, the group at greatest risk of potential toxicity comprises the subjects with CGABW slightly below 60 mL/min whose estimates rise above 60 mL/min/1.73 m2 using MDRD or CKD-EPI; this particular subpopulation demonstrated AUCs all above the 99th percentile of AUCs for those receiving the full 25-mg dose. Therefore, caution is warranted for subjects with borderline CrCl calculations.\nThe dosing approach for lenalidomide in renal impairment could potentially be improved by integrating these findings into pharmacokinetic-pharmacodynamic (PK-PD) models, but the lack of an exposure-response relationship is a barrier. Drug developers should consider potential discordance between estimating equations and put forward meaningful recommendations that are relevant to the clinical setting. In sum, this analysis elucidates the potential problems with using different renal function calculations that lead to dose discordance and presents a strong argument for standardizing the renal function calculations used in clinical studies to match those used in clinical practice.\nThe main limitation of this study is the lack of ethnic diversity in the subject pool; in our analysis, only 2% of subjects were black or African American; however, African Americans can make up over 24% of the total newly diagnosed cases and MM is the second most common blood cancer in people of African descent.13 Therefore, the conclusions of this analysis are not generalizable to centers with larger proportions of African American patients with MM.
In addition, the conclusions may have been different if a larger proportion of the subjects were African American, as MDRD and CKD-EPI both incorporate ethnicity in the calculation.", "This analysis highlights the importance of dosing lenalidomide for patients with impaired renal function using CGABW. Using MDRD or CKD-EPI instead of CG, or using AdjBW or IBW instead of ABW, increases the risk of an inappropriate dose for the patient. An interdisciplinary clinical team of oncology pharmacists and nurses, in collaboration with the oncologist, can work together to ensure that the correct apparent clearance (CL/F) and body weight are used so that the correct dose of lenalidomide is selected for a patient.14,15" ]
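As a compact illustration of the dose-discordance tabulation described in the methods and results above, the sketch below cross-tabulates the doses assigned to the same subjects under two renal-function estimates. The DataFrame column names, the assign_dose helper, and the three example subjects are hypothetical; the real cohort and the FDA labeling logic are not reproduced here.

```python
# Sketch of a dose-discordance tabulation between two renal-function estimates.
# Column names, assign_dose(), and the example values are hypothetical.
import pandas as pd

def assign_dose(renal_estimate: float) -> str:
    # Same illustrative tiers as in the earlier sketch (units are ignored,
    # mirroring how eGFR and CrCl are displayed side by side in an EHR).
    if renal_estimate >= 60:
        return "25 mg"
    if renal_estimate >= 30:
        return "10 mg"
    return "7.5 mg"

def discordance_table(df: pd.DataFrame, col_a: str, col_b: str) -> pd.DataFrame:
    """Cross-tabulate dose assignments under two renal-function estimates."""
    dose_a = df[col_a].apply(assign_dose)
    dose_b = df[col_b].apply(assign_dose)
    discordant = (dose_a != dose_b).mean() * 100
    print(f"Discordant dosing {col_a} vs {col_b}: {discordant:.1f}% of subjects")
    return pd.crosstab(dose_a, dose_b, rownames=[col_a], colnames=[col_b])

# Example with made-up estimates for three subjects:
demo = pd.DataFrame({"crcl_cg_abw": [82.0, 58.0, 27.0],
                     "egfr_mdrd":   [75.0, 63.0, 24.0]})
print(discordance_table(demo, "crcl_cg_abw", "egfr_mdrd"))
```

A cross-tab like this reads off, in a single table, how many subjects shift dose tiers when the estimating equation changes.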
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "multiple myeloma", "pharmacokinetics", "lenalidomide", "renal impairment" ]
Introduction: Renal impairment is common in patients with cancer and may be a result of older age, prior anti-cancer therapies, or the cancer itself.1 Renal impairment often necessitates dose adjustments of cancer therapeutics and other drugs that are primarily excreted by the kidneys. Historically, the Cockcroft-Gault (CG) equation has been used to estimate creatinine clearance (CrCl) to guide renal dose adjustments in clinical practice. This equation incorporates weight, age, sex, and serum creatinine. The Modification of Diet in Renal Disease (MDRD) equation is also used for clinical purposes, including for staging of chronic kidney disease (CKD) by the estimated glomerular filtration rate (eGFR) and uses 6 variables: age, sex, ethnicity, serum creatinine, urea, and albumin with a simplified 4-variable version (MDRD-4) that does not use urea and albumin.2-4 The MDRD-4 equation was used in this analysis. Another equation used to calculate eGFR, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), was established to address the issue of underestimating GFR at higher GFR.5 The CKD-EPI equation incorporates 4 variables as well, taking into account age, sex, ethnicity, and serum creatinine like the simplified MDRD-4 equation. Discrepancies between the CG, MDRD, and CKD-EPI equations introduce risks for dosing errors, which are further compounded by the prominent display of the eGFR in electronic health records (EHRs). Another factor that may affect the calculation of renal function is handling overweight patients by using adjusted body weight (AdjBW) instead of actual body weight (ABW) that can lead to overestimation of renal function for heavier patients. Dosing errors for renally excreted anti-cancer therapies are of particular concern as many of these drugs have a narrow therapeutic window and minor changes in drug exposure may increase toxicity or decrease efficacy. Lenalidomide, a derivative of thalidomide, is an immunomodulatory imide drug (IMiD) that is primarily renally excreted and has steep dose reductions for declining renal function based on calculated CrCl.6-8 The immunomodulatory, anti-angiogenic, anti-inflammatory, and anti-neoplastic activity, as well as the favorable toxicity profile of lenalidomide led to its Food and Drug Administration (FDA) approval in combination with dexamethasone alone or with other anti-myeloma classes of drugs such as proteasome inhibitors or anti-CD38 monoclonal antibodies for relapsed/refractory multiple myeloma (MM), newly diagnosed MM, and as monotherapy for post-autologous hematopoietic cell transplant (HCT) maintenance in 2006, 2015, and 2017, respectively.6,9 Lenalidomide is administered orally, rapidly absorbed, highly bioavailable, and moderately distributed into tissues.7,8 Lenalidomide is predominantly eliminated unchanged in urine (85%) and decreased kidney function results in an increase in lenalidomide plasma concentrations, exposure, and toxicity.7,8 Recommended dose adjustments of lenalidomide for patients with a CrCl of less than 60 mL/min involve a reduction of the induction dose from 25 mg daily to 10 mg daily for 21 days of a 28-day cycle.7,8 This steep dose reduction may be especially problematic when disparate equations for renal function are utilized in error and shift the lenalidomide regimen from one dose to another. 
The objective of this study was to evaluate the impact of estimating kidney function in newly diagnosed MM patients with the CG, MDRD, and CKD-EPI on lenalidomide dosing, including pharmacokinetic outcomes. Methods: Demographic data and clinical characteristics from 1121 MM patients at the time of diagnosis were used (Table 1). The data were obtained via a data use agreement (DUA) with the Mayo Clinic in Rochester, Minnesota. The senior author was added to the Institutional Review Board (IRB) for this study at Mayo Clinic. For each individual subject, CrCl (milliliter per minute) was calculated with the Cockcroft-Gault equation using either actual body weight (CGABW), ideal body weight (CGIBW), or adjusted body weight (CGAdjBW); eGFR (mL/min/1.73 m2) was also estimated using both MDRD and CKD-EPI for each subject. For each estimate of renal function, a lenalidomide dose was assigned based on FDA-approved drug labeling.9,10 For the purposes of this analysis, the difference in units between CrCl and eGFR was not considered in determining lenalidomide dosing (mL/min vs mL/min/1.73 m2) to reflect (1) how renal function estimates are displayed within EHRs and (2) clinicians do not typically calculate an absolute eGFR in milliliter per minute. Table of Subject Demographics. Abbreviations: BMI, body mass index; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; CrCl, creatinine clearance; IQR, interquartile range; MDRD, Modification of Diet in Renal Disease. Lenalidomide exposure for each subject was estimated using a linear regression model described previously.7 Apparent clearance (CL/F) was estimated using CrCl calculated with ABW, and the area under the concentration versus time curve (AUC) was then calculated as Dose/(CL/F). AUCs were calculated for each subject based on the different doses assigned for each renal function calculation. Discordance in dosing was assessed between CGABW versus MDRD, CGABW versus CKD-EPI, CGABW versus CGIBW, and CGABW versus CGAdjBW. For subjects assigned different doses when the renal function calculation switched from CGABW to MDRD or CKD-EPI, the AUCs at the 50th, 75th, 90th, 95th, and 99th percentile cutoffs were compared with the AUCs of patients who received the full 25-mg dose (ie, those patients with CrCl >60 mL/min) to assess differences in lenalidomide exposure. Results: Assessment of the concordance between CGABW and MDRD demonstrated that 16.6% of patients with newly diagnosed MM would have received a different lenalidomide dose if renal function were estimated by MDRD instead of CG (Figure 1). Of the subjects assigned a different dose using MDRD versus CG, the most common discordance was decreasing the full dose (25 mg) using CGABW to 10 mg using MDRD (53.8%). The next most common discordance was an increase from a dose of 10 mg using CGABW to a full dose of 25 mg using MDRD (28%). The other discordances of 10 mg using CGABW to 7.5 mg (ie, 15 mg every other day) using MDRD were 13.4% of those in the discordant category, while the converse of 7.5 mg using CGABW to 10 mg using MDRD was 4.3%. The least common discordance with only 1 subject (0.5%) was decreased from 25 mg using CGABW to 7.5 mg using MDRD. Chart summarizing the 4 comparisons analyzed and the statistical breakdown of all dose discordances found. 
Abbreviations: CGABW, Cockcroft-Gault with actual body weight; CGAdjBW, Cockcroft-Gault with adjusted body weight; CGIBW, Cockcroft-Gault with ideal body weight; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease. Comparing CGABW with CKD-EPI revealed similar trends, with 16.1% of subjects receiving a discordant dose (Figure 1). Like the CGABW versus MDRD comparison, the most common discordance was decreasing from the full 25-mg dose using CGABW to 10 mg using CKD-EPI, with 55.6% of subjects in the discordant category demonstrating this dose decrease. The next most common discordance was 10 mg using CGABW to 25 mg using CKD-EPI, with 23.9% of subjects in the discordant category experiencing this dose increase. Those subjects in the discordant group whose dose decreased from 10 mg using CGABW down to 7.5 mg using CKD-EPI accounted for 17.2% of subjects, with only 2.8% increasing from 7.5 mg using CGABW to 10 mg with CKD-EPI. Again, the least common discordance was decreasing from 25 mg with CGABW down to 7.5 mg with CKD-EPI with only 1 subject (0.56%). Next, the use of the CG equation with different body weights was compared. When using CGIBW instead of the standard CGABW, 22% of all subjects showed discordant doses. Again, the most common discordance was a decrease from 25 mg using CGABW to 10 mg using CGIBW, with 81% of all the discordances falling into this category. Fewer dose modifications occurred in the categories of 10 to 7.5 mg (15.4%), 10 to 25 mg (2.8%), and 7.5 to 10 mg (0.8%). When comparing CGABW with CGAdjBW, the discordance was less drastic with only 13.6% of subjects receiving a discordant dose. Of those, the majority (86.2%) once again decreased from the full dose of 25 mg using CGABW to 10 mg using CGAdjBW; the only other discordance was a decrease from 10 mg using CGABW to 7.5 mg using CGAdjBW (13.8%). For each comparison, the estimated AUCs in each discordance group were determined (Figure 2). The trends seen in AUC change are consistent based on the expected dose change; those dropping from a 25-mg dose using CGABW to 10 mg using MDRD had a 60% decrease in dose, with a similar decrease observed in AUC. Conversely, subjects assigned an increased dose using MDRD similarly experienced a relatively proportional increase in AUC. Last, for the subjects assigned a different dose when their renal function was estimated using MDRD or CKD-EPI, the AUCs in each group were plotted and compared with the AUCs of all subjects who received the full dose (25 mg) using CGABW (ie, CGABW > 60 mL/min). As shown in Figure 3, the subjects who switched from a dose of 10 mg using CGABW to 25 mg using CKD-EPI or MDRD instead had significantly higher AUCs, whereas the remaining dose changes remained within the range of values seen for subjects who received the full dose. Boxplot demonstrating the AUC differences seen in subjects with discordances between (a) CG and MDRD renal function calculations, (b) CG and CKD-EPI, (c) CG using actual body weight and CG using ideal body weight, and (d) CG using actual body weight and CG using adjusted body weight. Abbreviations: AUC, area under the concentration versus time curve; CG, Cockcroft-Gault; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease. Boxplot demonstrating the AUC differences seen in subjects with discordances between CKD-EPI and MDRD compared with subjects with a renal function of 60 mL/min using Cockcroft-Gault. 
Abbreviations: AUC, area under the concentration versus time curve; CG, Cockcroft-Gault; CKD-EPI, Chronic Kidney Disease Epidemiology Collaboration; MDRD, Modification of Diet in Renal Disease. Discussion: The purpose of this analysis was to assess discordance between CG, MDRD, and CKD-EPI on lenalidomide dosing and exposure in newly diagnosed MM patients. Across each comparison, subjects with a CrCl above 60 mL/min (ie, received a full lenalidomide dose of 25 mg) were less at risk of underexposure compared to subjects with moderate or severe renal impairment who were at risk for overexposure (ie, received a higher dose than they should). The current renal dosing recommendations for lenalidomide based on the package insert are such that minor changes in kidney function estimates can result in large changes in dose.10 For example, if an EHR displays an eGFR derived via MDRD as 59 mL/min/1.73 m2 but the CG CrCl is above 60 mL/min, the patient may be prescribed 10 mg instead of 25 mg. Indeed, this was the most common discordance seen in this analysis and has implications on potential efficacy of therapy. Previous analyses recommend using the MDRD study equation for drug dose adjustments; however, our analysis demonstrates that for lenalidomide, major dose and exposure changes may result and therefore would not be advised.11 Similar calls from the International Myeloma Working Group (IMWG) to use CKD-EPI or MDRD equation for renal function calculations also complicate the picture and are not supported by this analysis.12 Based on the results presented here, the CKD-EPI equation and the MDRD equation led to significant discordance that most often leads to underdosing (prescribing 10 mg instead of 25 mg) for subjects with borderline CrCl around 60 mL/min using CG (8.9% of the time when converting from CGABW to MDRD or CKD-EPI; Figure 1). Conversely, using MDRD or CKD-EPI also led to overdose (subjects prescribed 25 mg instead of 10 mg) in 4.6% of subjects who had their renal function recalculated using MDRD from CGABW or 3.8% if recalculating using CKD-EPI (Figure 1). The package insert renal dosing recommendations for lenalidomide are based on CG; based on this analysis, health care providers should use ABW in the renal function calculation when determining the dose of this drug. CG can be calculated using a variety of different body weights, including IBW (ideal body weight), ABW, or AdjBW. AdjBW is calculated such that if the subject’s ABW is less than their IBW, their ABW is used; if their ABW is more than 1.3 times their IBW, AdjBW is calculated using the equation, AdjBW = IBW + 0.4 × (ABW – IBW). Thus, the only subjects who will have a different AdjBW from their ABW are subjects with relatively higher body mass indices (BMIs), and their AdjBW will be lower than their ABW. 
Clarity and expert consensus regarding which body weight (actual, ideal, or adjusted) to utilize in the CG formula is lacking and has the potential to result in inaccurate CrCl results and subsequent dosing errors.4 Many robust studies have sought and failed to resolve the controversy surrounding the appropriate body weight to utilize for the CG formula.4 As CG is highly depending on weight, the discordance seen in our study with the majority of subjects in the discordant category comparing CGABW with CGAdjBW was a decrease in lenalidomide dose from 25 to 10 mg; the only other discordance was decreasing from 10 to 7.5 mg, meaning that all subjects who experienced discordance when re-estimating their renal function using CGAdjBW received a lower dose. Similarly, using IBW instead of ABW often underestimates subject’s weight and subsequently their renal function; as a result, the majority of the discordance was again seen in decreasing 25 to 10 mg. Thus, this analysis supports using ABW instead of AdjBW or IBW. One of the challenges of assessing lenalidomide exposure among subjects receiving discordant doses is that there are no well-established exposure-response relationships for efficacy or toxicity. As such, there is an unclear threshold to avoid toxicity and an unclear minimum exposure for efficacy. This guided the decision to instead compare predicted lenalidomide exposure with the group of subjects who received the full dose. Based on our findings, theoretically, the group at greatest risk of potential toxicity are the subjects with CGABW slightly below 60 mL/min who then are increased to above 60 mL/min/1.73 m2 using MDRD or CKD-EPI; this particular subpopulation demonstrated AUCs all above the 99th percentile of AUCs for those receiving the full 25-mg dose. Therefore, caution is warranted for subjects with borderline CrCl calculations. The dosing approach for lenalidomide in renal impairment could potentially be improved by integrating these findings into pharmacokinetic-pharmacodynamic (PK-PD) models, but the lack of an exposure-response relationship is a barrier. Drug developers should consider potential discordance between estimating equations and put forward meaningful recommendations that are relevant to the clinical setting. In sum, this analysis elucidates the potential problems with using different renal function calculations that lead to dose discordance and presents a strong argument for the standardization of renal function calculations used in clinical studies that match those that are used in clinical practice. The main limitation of this study is the lack of ethnic diversity seen in the subject pool; in our analysis, only 2% of subjects were black or African American; however, African Americans can make up over 24% of the total newly diagnosed cases and MM is the second most common blood cancer in people of African descent.13 Therefore, the conclusions of this analysis are not generalizable to centers with larger proportions of African American patients with MM. In addition, the conclusions may have been different if a larger proportion of the subjects were African American as MDRD and CKD-EPI both incorporate ethnicity in the calculation. Conclusion and Relevance: This analysis highlights the importance of dosing lenalidomide for patients with impaired renal function using CGABW. Using MDRD or CKD-EPI instead of CG or using AdjBW or IBW instead of ABW increases the risk of an inappropriate dose for the patient. 
An interdisciplinary clinical team of oncology pharmacists and nurses, in collaboration with the oncologist, can work together to ensure that the correct apparent clearance (CL/F) and body weight are used so that the correct dose of lenalidomide is selected for a patient.14,15
Background: Lenalidomide is an immunomodulatory drug used to treat multiple myeloma that requires renal dosing adjustment based on Cockcroft-Gault (CG). Various equations to estimate kidney function exist and pose a potential issue with lenalidomide dosing. Methods: Data from 1121 multiple myeloma patients at the time of diagnosis acquired from the Mayo Clinic were used to calculate creatinine clearance (CrCl) using Cockcroft-Gault with actual body weight (CGABW), ideal body weight (CGIBW), or adjusted body weight (CGAdjBW); MDRD; and CKD-EPI for each subject. Discordances in dosing were then analyzed, and lenalidomide exposure was calculated for each subject to assess impact on pharmacokinetics of lenalidomide for patients who received discordant doses. Results: Overall, approximately 16% of patients received a discordant dose when using MDRD or CKD-EPI instead of CGABW. The most common dose discordance was the decrease of a full dose of lenalidomide 25 mg when using CGABW down to 10 mg and when using MDRD or CKD-EPI with 53.8% to 55.6% of all discordances in this category. When assessing different body weights, the most common discordance was a decrease from 25 to 10 mg when using CGIBW instead of CGABW; the same trend was observed when using CGAdjBW instead as well. Patients were also at risk of over- or underexposure based on area under the concentration versus time curve (AUC) for discordant dosing. Conclusions: A significant proportion of patients are at risk of under- or overdose of lenalidomide if CKD-EPI or MDRD are used instead of CGABW. Physicians should use CGABW when estimating renal function to dose lenalidomide.
Introduction: Renal impairment is common in patients with cancer and may be a result of older age, prior anti-cancer therapies, or the cancer itself.1 Renal impairment often necessitates dose adjustments of cancer therapeutics and other drugs that are primarily excreted by the kidneys. Historically, the Cockcroft-Gault (CG) equation has been used to estimate creatinine clearance (CrCl) to guide renal dose adjustments in clinical practice. This equation incorporates weight, age, sex, and serum creatinine. The Modification of Diet in Renal Disease (MDRD) equation is also used for clinical purposes, including for staging of chronic kidney disease (CKD) by the estimated glomerular filtration rate (eGFR) and uses 6 variables: age, sex, ethnicity, serum creatinine, urea, and albumin with a simplified 4-variable version (MDRD-4) that does not use urea and albumin.2-4 The MDRD-4 equation was used in this analysis. Another equation used to calculate eGFR, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), was established to address the issue of underestimating GFR at higher GFR.5 The CKD-EPI equation incorporates 4 variables as well, taking into account age, sex, ethnicity, and serum creatinine like the simplified MDRD-4 equation. Discrepancies between the CG, MDRD, and CKD-EPI equations introduce risks for dosing errors, which are further compounded by the prominent display of the eGFR in electronic health records (EHRs). Another factor that may affect the calculation of renal function is handling overweight patients by using adjusted body weight (AdjBW) instead of actual body weight (ABW) that can lead to overestimation of renal function for heavier patients. Dosing errors for renally excreted anti-cancer therapies are of particular concern as many of these drugs have a narrow therapeutic window and minor changes in drug exposure may increase toxicity or decrease efficacy. Lenalidomide, a derivative of thalidomide, is an immunomodulatory imide drug (IMiD) that is primarily renally excreted and has steep dose reductions for declining renal function based on calculated CrCl.6-8 The immunomodulatory, anti-angiogenic, anti-inflammatory, and anti-neoplastic activity, as well as the favorable toxicity profile of lenalidomide led to its Food and Drug Administration (FDA) approval in combination with dexamethasone alone or with other anti-myeloma classes of drugs such as proteasome inhibitors or anti-CD38 monoclonal antibodies for relapsed/refractory multiple myeloma (MM), newly diagnosed MM, and as monotherapy for post-autologous hematopoietic cell transplant (HCT) maintenance in 2006, 2015, and 2017, respectively.6,9 Lenalidomide is administered orally, rapidly absorbed, highly bioavailable, and moderately distributed into tissues.7,8 Lenalidomide is predominantly eliminated unchanged in urine (85%) and decreased kidney function results in an increase in lenalidomide plasma concentrations, exposure, and toxicity.7,8 Recommended dose adjustments of lenalidomide for patients with a CrCl of less than 60 mL/min involve a reduction of the induction dose from 25 mg daily to 10 mg daily for 21 days of a 28-day cycle.7,8 This steep dose reduction may be especially problematic when disparate equations for renal function are utilized in error and shift the lenalidomide regimen from one dose to another. 
The objective of this study was to evaluate the impact of estimating kidney function in newly diagnosed MM patients with the CG, MDRD, and CKD-EPI on lenalidomide dosing, including pharmacokinetic outcomes. Conclusion and Relevance: This analysis highlights the importance of dosing lenalidomide for patients with impaired renal function using CGABW. Using MDRD or CKD-EPI instead of CG, or using AdjBW or IBW instead of ABW, increases the risk of an inappropriate dose for the patient. An interdisciplinary clinical team of oncology pharmacists and nurses, in collaboration with the oncologist, can work together to ensure that the correct apparent clearance (CL/F) and body weight are used so that the correct dose of lenalidomide is selected for a patient.14,15
Background: Lenalidomide is an immunomodulatory drug used to treat multiple myeloma that requires renal dosing adjustment based on Cockcroft-Gault (CG). Various equations to estimate kidney function exist and pose a potential issue with lenalidomide dosing. Methods: Data from 1121 multiple myeloma patients at the time of diagnosis acquired from the Mayo Clinic were used to calculate creatinine clearance (CrCl) using Cockcroft-Gault with actual body weight (CGABW), ideal body weight (CGIBW), or adjusted body weight (CGAdjBW); MDRD; and CKD-EPI for each subject. Discordances in dosing were then analyzed, and lenalidomide exposure was calculated for each subject to assess impact on pharmacokinetics of lenalidomide for patients who received discordant doses. Results: Overall, approximately 16% of patients received a discordant dose when using MDRD or CKD-EPI instead of CGABW. The most common dose discordance was the decrease of a full dose of lenalidomide 25 mg when using CGABW down to 10 mg and when using MDRD or CKD-EPI with 53.8% to 55.6% of all discordances in this category. When assessing different body weights, the most common discordance was a decrease from 25 to 10 mg when using CGIBW instead of CGABW; the same trend was observed when using CGAdjBW instead as well. Patients were also at risk of over- or underexposure based on area under the concentration versus time curve (AUC) for discordant dosing. Conclusions: A significant proportion of patients are at risk of under- or overdose of lenalidomide if CKD-EPI or MDRD are used instead of CGABW. Physicians should use CGABW when estimating renal function to dose lenalidomide.
3,203
312
[]
5
[ "mg", "dose", "mdrd", "cgabw", "subjects", "renal", "ckd", "epi", "ckd epi", "10" ]
[ "calculation renal", "renal function estimates", "guide renal dose", "estimating kidney function", "cancer renal impairment" ]
[CONTENT] multiple myeloma | pharmacokinetics | lenalidomide | renal impairment [SUMMARY]
[CONTENT] multiple myeloma | pharmacokinetics | lenalidomide | renal impairment [SUMMARY]
[CONTENT] multiple myeloma | pharmacokinetics | lenalidomide | renal impairment [SUMMARY]
[CONTENT] multiple myeloma | pharmacokinetics | lenalidomide | renal impairment [SUMMARY]
[CONTENT] multiple myeloma | pharmacokinetics | lenalidomide | renal impairment [SUMMARY]
[CONTENT] multiple myeloma | pharmacokinetics | lenalidomide | renal impairment [SUMMARY]
[CONTENT] Humans | Glomerular Filtration Rate | Lenalidomide | Creatinine | Multiple Myeloma | Renal Insufficiency, Chronic | Kidney | Body Weight [SUMMARY]
[CONTENT] Humans | Glomerular Filtration Rate | Lenalidomide | Creatinine | Multiple Myeloma | Renal Insufficiency, Chronic | Kidney | Body Weight [SUMMARY]
[CONTENT] Humans | Glomerular Filtration Rate | Lenalidomide | Creatinine | Multiple Myeloma | Renal Insufficiency, Chronic | Kidney | Body Weight [SUMMARY]
[CONTENT] Humans | Glomerular Filtration Rate | Lenalidomide | Creatinine | Multiple Myeloma | Renal Insufficiency, Chronic | Kidney | Body Weight [SUMMARY]
[CONTENT] Humans | Glomerular Filtration Rate | Lenalidomide | Creatinine | Multiple Myeloma | Renal Insufficiency, Chronic | Kidney | Body Weight [SUMMARY]
[CONTENT] Humans | Glomerular Filtration Rate | Lenalidomide | Creatinine | Multiple Myeloma | Renal Insufficiency, Chronic | Kidney | Body Weight [SUMMARY]
[CONTENT] calculation renal | renal function estimates | guide renal dose | estimating kidney function | cancer renal impairment [SUMMARY]
[CONTENT] calculation renal | renal function estimates | guide renal dose | estimating kidney function | cancer renal impairment [SUMMARY]
[CONTENT] calculation renal | renal function estimates | guide renal dose | estimating kidney function | cancer renal impairment [SUMMARY]
[CONTENT] calculation renal | renal function estimates | guide renal dose | estimating kidney function | cancer renal impairment [SUMMARY]
[CONTENT] calculation renal | renal function estimates | guide renal dose | estimating kidney function | cancer renal impairment [SUMMARY]
[CONTENT] calculation renal | renal function estimates | guide renal dose | estimating kidney function | cancer renal impairment [SUMMARY]
[CONTENT] mg | dose | mdrd | cgabw | subjects | renal | ckd | epi | ckd epi | 10 [SUMMARY]
[CONTENT] mg | dose | mdrd | cgabw | subjects | renal | ckd | epi | ckd epi | 10 [SUMMARY]
[CONTENT] mg | dose | mdrd | cgabw | subjects | renal | ckd | epi | ckd epi | 10 [SUMMARY]
[CONTENT] mg | dose | mdrd | cgabw | subjects | renal | ckd | epi | ckd epi | 10 [SUMMARY]
[CONTENT] mg | dose | mdrd | cgabw | subjects | renal | ckd | epi | ckd epi | 10 [SUMMARY]
[CONTENT] mg | dose | mdrd | cgabw | subjects | renal | ckd | epi | ckd epi | 10 [SUMMARY]
[CONTENT] anti | cancer | age | equation | renal | lenalidomide | dose | creatinine | serum creatinine | age sex [SUMMARY]
[CONTENT] versus | cgabw | crcl | subject | cgabw versus | data | calculated | assigned | renal | ml min [SUMMARY]
[CONTENT] mg | mg cgabw | cgabw | dose | subjects | 10 mg | 10 | mdrd | cgabw 10 mg | cgabw 10 [SUMMARY]
[CONTENT] correct | ensure correct | ensure | patient | instead | pharmacists nurses collaboration | lenalidomide selected patient 14 | oncology pharmacists nurses collaboration | analysis highlights importance dosing | inappropriate dose patient interdisciplinary [SUMMARY]
[CONTENT] dose | mg | cgabw | mdrd | subjects | renal | ckd | lenalidomide | epi | ckd epi [SUMMARY]
[CONTENT] dose | mg | cgabw | mdrd | subjects | renal | ckd | lenalidomide | epi | ckd epi [SUMMARY]
[CONTENT] Cockcroft-Gault ||| [SUMMARY]
[CONTENT] 1121 | the Mayo Clinic | CrCl | Cockcroft-Gault | CGABW | MDRD ||| [SUMMARY]
[CONTENT] approximately 16% | MDRD | CGABW ||| 25 | CGABW | 10 | MDRD | 53.8% | 55.6% ||| 25 | 10 | CGABW ||| [SUMMARY]
[CONTENT] MDRD | CGABW ||| CGABW [SUMMARY]
[CONTENT] Cockcroft-Gault ||| ||| 1121 | the Mayo Clinic | CrCl | Cockcroft-Gault | CGABW | MDRD ||| ||| approximately 16% | MDRD | CGABW ||| 25 | CGABW | 10 | MDRD | 53.8% | 55.6% ||| 25 | 10 | CGABW ||| ||| MDRD | CGABW ||| CGABW [SUMMARY]
[CONTENT] Cockcroft-Gault ||| ||| 1121 | the Mayo Clinic | CrCl | Cockcroft-Gault | CGABW | MDRD ||| ||| approximately 16% | MDRD | CGABW ||| 25 | CGABW | 10 | MDRD | 53.8% | 55.6% ||| 25 | 10 | CGABW ||| ||| MDRD | CGABW ||| CGABW [SUMMARY]
Mitochondrial pathway of the lysine demethylase 5C inhibitor CPI-455 in the Eca-109 esophageal squamous cell carcinoma cell line.
33967558
Esophageal cancer is a malignant tumor of the digestive tract that is difficult to diagnose early. CPI-455 has been reported to inhibit various cancers, but its role in esophageal squamous cell carcinoma (ESCC) is unknown.
BACKGROUND
A methyl tetrazolium assay was used to detect the inhibitory effect of CPI-455 on the proliferation of Eca-109 cells. Apoptosis, reactive oxygen species (ROS), and mitochondrial membrane potential were assessed by flow cytometry. Laser confocal scanning and transmission electron microscopy were used to observe changes in Eca-109 cell morphology. The protein expression of P53, Bax, lysine-specific demethylase 5C (KDM5C), cleaved Caspase-9, and cleaved Caspase-3 were assayed by western blotting.
METHODS
Compared with the control group, CPI-455 significantly inhibited Eca-109 cell proliferation. Gemcitabine inhibited Eca-109 cell proliferation in a concentration- and time-dependent manner. CPI-455 caused extensive alteration of the mitochondria, which appeared to have become atrophied. The cell membrane was weakly stained and the cytoplasmic structures were indistinct and disorganized, with serious cavitation when viewed by transmission electron microscopy. The flow cytometry and western blot results showed that, compared with the control group, the mitochondrial membrane potential was decreased and depolarized in Eca-109 cells treated with CPI-455. CPI-455 significantly upregulated the ROS content, P53, Bax, Caspase-9, and Caspase-3 protein expression in Eca-109 cells, whereas KDM5C expression was downregulated.
RESULTS
CPI-455 inhibited Eca-109 cell proliferation via mitochondrial apoptosis by regulating the expression of related genes.
CONCLUSION
[ "Apoptosis", "Cell Line, Tumor", "Cell Proliferation", "Cyclopropanes", "Esophageal Neoplasms", "Esophageal Squamous Cell Carcinoma", "Humans", "Indoles", "Lysine", "Mitochondria" ]
8072195
INTRODUCTION
Esophageal cancer is a common digestive tract cancer, of which esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma represent the main histological types[1,2]. The incidence of esophageal cancer is relatively high, ranking seventh worldwide. ESCC accounts for more than 90% of esophageal cancers. As it is difficult to diagnose ESCC early due to a lack of specific clinical symptoms and insufficient preventive measures, it is often diagnosed at an advanced stage and has a poor prognosis. In addition, the 5-yr postoperative survival rate is less than 50%[3-5]. The lysine-specific demethylase 5C (KDM5C) inhibitor, CPI-455, regulates the apoptosis and proliferation of tumor cells by regulating the methylation state of lysine in KDM5C[6]. CPI-455 activity has been reported in cervical[7], gastric[8], and prostate cancer[9], and in other diseases[10,11], but not in the treatment of esophageal cancer. In this study, human ESCC Eca-109 cells were used as a research model to preliminarily study the effect and mechanism of CPI-455 against ESCC and to explore novel approaches for the treatment and prevention of ESCC.
MATERIALS AND METHODS
Cell lines Eca-109 human esophageal cancer cells and HPT-1A normal human esophageal epithelial cells were provided by the Cell Resource Center of the Shanghai Life Sciences Institute, Chinese Academy of Sciences (Shanghai, China). Other materials included CPI-455 (MSDS, 1628208-23-0; Gibco, Ltd., United States), DMEM and RPMI 1640 cell culture media and fetal bovine serum (Gibco, Ltd). Anti-human P53 (cat no. sc-6243), Bax, KDM5C (cat no. sc-81623), Caspase-9 (cat no. 9746), Caspase-3 (cat no. 9502), and GAPDH (glyceraldehyde-3-phosphate dehydrogenase, cat no. sc-47778) polyclonal antibodies (1:200; Abcam, United Kingdom) were used for western blotting. Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) apoptosis detection kits (cat no. F7250) were from Sigma-Aldrich (United States). Polyvinylidene difluoride (PVDF) membranes were from Biyuntian Biotech (China) and bicinchoninic acid (BCA) protein quantitative detection kits were from Thermo Fisher (United States). An Epics Ultra flow cytometer (Beckman Coulter, United States), JEM-100sx transmission electron microscope (Jeol Ltd., Japan), and a TCS SP2 laser confocal microscope (Leica, Germany) were used. Cell culture Eca-109 cells were cultured in DMEM and HET-1A cells were cultured in RPMI-1640 medium. Both were supplemented with 10% pre-inactivated FBS, 100 U/mL penicillin, and 100 U/mL streptomycin. Cells were adherently cultured in a 37 °C constant temperature incubator with a volume fraction of 5% CO2 and 95% humidity; 0.25% trypsin-EDTA was used for cell digestion and passage. Cell growth in the culture flask was observed every 1-2 d, and the cell culture medium was replaced for passage culture. Methyl tetrazolium assay For assay of growth and the logarithmic phase, cells were collected by trypsin digestion and centrifugation and 90 mL of a single cell suspension (3 × 103/mL) were inoculated into 96-well culture plates and cultured at 37 °C in an atmosphere containing 5% CO2 overnight to ensure adherence. When the cells grew to the appropriate density, the supernatant was discarded and 100 mL of the CPI-455 KDM5C inhibitor was added to each well, giving a final concentration of 15 mmol/L. The plates were cultured in an atmosphere containing 5% CO2 at 37 °C for 0, 24, 48, 72, and 96 h (5 wells/time point). Following this, the cells were collected and centrifuged at 150 g for 10 min. The supernatant was discarded and the precipitate was washed once with phosphate buffered saline (PBS). A total of 200 µL of serum-free RPMI-1640 medium was added, followed by addition of 10 µL of methyl tetrazolium (MTT; 5 mg/mL). Following culture for a further 4 h, the supernatant was removed and 150 µL DMSO was added to each well to dissolve the purple crystals. The optical density (OD) was read at 490 nm after the crystals had completely dissolved. The survival rate was calculated as S% = (OD of the experimental group/OD of the control group) × 100. Laser confocal scanning microscope assay of the ectropion of membrane phosphatidylserine The fluorescence of cell smears and suspension cultures after polylysine treatment was observed by confocal microscopy. Red (488/590 + 42 nm) and green (488/530 + 30 nm) dual channel detection and differential interference contrast imaging were performed. A total of 100 fields of vision were screened and 30 typical fields of vision were selected for inclusion in the preliminary statistical analysis. Annexin V-FITC (fluorescein isothiocyanate)/PI double staining was performed and the fluorescence of Eca-109 cell membranes and necrotic nuclei was observed and analyzed as described above. DAPI (4', 6-diamidine-2-phenylindole, dihydrochloride) was used to stain the nuclei in cell smears and glycerol patches and analyzed by fluorescence microscopy, as described above. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay of nucleation and DNA fragmentation of Eca-109 cells Eca-109 cells (5 × 106/mL) were seeded in 24-well plates and cultured for 48 h. Cell smears were prepared from 50 µL of cell suspension on polylysine-treated slides that were fixed with 4% paraformaldehyde at room temperature for 30-60 min and washed twice with PBS. After treatment with terminal deoxynucleotidyl transferase and termination, the preparations were washed four times with PBS and incubated with FITC-labeled antibody for 30 min. Nuclei were stained with PI (1.0 g/mL) and the preparations were sealed with glycerin-mounted coverslips and marked. The cells were observed with a confocal laser scanning microscope, with FITC and PI dual channel fluorescence at an excitation wavelength of 490 nm and an emission wavelength of 520 nm. Flow cytometry assay of reactive oxygen species (ROS) Eca-109 cells were cultured in six-well plates (3 × 103 cells/well), treated with 15 μmol/L CPI-455 for 48 h, and collected. The cells were washed three times in PBS, centrifuged for 5 min, and the supernatant was discarded. The cells were resuspended in serum-free medium, dichloro-dihydro-fluorescein diacetate (DCFH-DA) was added to a final concentration of 10 mmol/L, and the plates were incubated at 37 °C for 40 min. The cells were centrifuged for 10 min, and the pellet was collected, washed, and resuspended in precooled PBS. The cells were centrifuged to obtain a pellet and were stained for fluorescence detection. Each assay was performed in triplicate. Flow cytometry assay of mitochondrial membrane potential Cells were harvested (2 × 106/mL) at scheduled times after treatment with 15 μmol/L CPI-455 for 48 h. The Mitocapture dye and antibody incubation solutions were freshly prepared. The cells were washed and centrifuged and the mitochondria were stained. The cells were immediately transferred to collecting tubes for flow cytometry. Western blotting assay of P53, Bax, KDM5C, cleaved Caspase-9, and cleaved Caspase-3 Eca-109 cells were cultured for 48 h, collected, and centrifuged at 150 g for 5 min (r = 6 cm). Total protein was extracted using a protein extraction kit. The supernatant was collected, and the protein concentration was determined using a BCA protein kit. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. After blocking in 5% skim milk at room temperature for 2 h, anti-human mouse P53, Bax, KDM5C, Caspase-9, Caspase-3, and GAPDH monoclonal antibodies (1:1000; Santa Cruz Biotechnology, Inc., United States) were added and the membranes were incubated overnight at 4 °C and then with horseradish peroxidase (HRP)-labeled mouse secondary antibody (1:3,000; Origene Technologies Inc., China). The films were developed, fixed, scanned and analyzed using Image J software to calculate the gray values. Protein expression of the target protein was reported relative to GAPDH protein expression. Statistical analysis GraphPad Prism 6.0 software was used to statistically analyze the data. The data were reported as means ± SD. All assays were repeated independently three times.
Between-group differences were compared with independent samples t-tests. One-way analysis of variance was used to compare data with normal distributions. Least significant difference (LSD)-t-tests were used for pairwise comparisons of multiple groups. The significance level was α = 0.05.
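The survival-rate formula, the GAPDH-normalized densitometry, and the two-group and multi-group comparisons described in this methods section reduce to a few lines of arithmetic. The sketch below uses invented optical-density and gray-value numbers purely to show the calculations, and assumes NumPy and SciPy for the independent-samples t-test and one-way ANOVA; it is not the authors' analysis code.

```python
# Illustrative arithmetic for the methods above, using invented numbers:
# MTT survival rate, GAPDH-normalized band intensity, and group comparisons.
import numpy as np
from scipy import stats

def survival_rate(od_treated: np.ndarray, od_control: np.ndarray) -> np.ndarray:
    """S% = (OD of experimental group / OD of control group) x 100."""
    return od_treated / od_control.mean() * 100

def relative_expression(target_gray: float, gapdh_gray: float) -> float:
    """Target-protein band intensity normalized to GAPDH from the same lane."""
    return target_gray / gapdh_gray

control_od = np.array([0.92, 0.88, 0.95])   # hypothetical OD490 values
cpi455_od = np.array([0.41, 0.38, 0.45])
print("Survival (%):", survival_rate(cpi455_od, control_od).round(1))

# Two-group comparison (independent-samples t-test) and a three-group one-way ANOVA:
t_stat, p_two_group = stats.ttest_ind(cpi455_od, control_od)
f_stat, p_anova = stats.f_oneway(control_od, cpi455_od, np.array([0.60, 0.57, 0.63]))
print(f"t-test p = {p_two_group:.4f}; one-way ANOVA p = {p_anova:.4f}  (alpha = 0.05)")

print("Relative P53 expression:", round(relative_expression(1820.0, 2100.0), 2))
```

An LSD post hoc comparison would follow the ANOVA in the same spirit, applying pairwise t-tests at the stated α once the overall test is significant.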
null
null
CONCLUSION
We thank all the members of Department of Clinical Laboratory, Huangshi Central Hospital, Affiliated Hospital of Hubei Polytechnic University, Edong Healthcare Group and Department of Laboratory Medicine, Hubei cancer hospital, Tongji Medical College, Huazhong University of Science and Technology.
[ "INTRODUCTION", "Cell lines", "Cell culture", "Methyl tetrazolium assay", "Laser confocal scanning microscope assay of the ectropion of membrane phosphatidylserine", "Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay of nucleation and DNA fragmentation of Eca-109 cells", "Flow cytometry assay of reactive oxygen species (ROS)", "Flow cytometry assay of mitochondrial membrane potential", "Western blotting assay of P53, Bax, KDM5C, cleaved Caspase-9, and cleaved Caspase-3", "Statistical analysis", "RESULTS", "CPI-455 inhibits Eca-109 cell proliferation", "CPI-455 induces Eca-109 cell apoptosis", "CPI-455 upregulates ROS content, P53, Bax, Caspase-9, and Caspase-3 expression and downregulates KDM5C expression and mitochondrial membrane potential in Eca-109 cells", "DISCUSSION", "CONCLUSION" ]
[ "Esophageal cancer is a common digestive tract cancer, of which esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma represent the main histological types[1,2]. The incidence of esophageal cancer is relatively high, ranking seventh worldwide. ESCC accounts for more than 90% of esophageal cancers. As it is difficult to diagnose ESCC early due to a lack of specific clinical symptoms and insufficient preventive measures, it is often diagnosed at an advanced stage and has a poor prognosis. In addition, the 5-yr postoperative survival rate is less than 50%[3-5].The lysine-specific demethylase 5C (KDM5C) inhibitor, CPI-455, regulates the apoptosis and proliferation of tumor cells by regulating the methylation state of lysine in KDM5C[6]. CPI-455 activity been reported in cervical[7], gastric[8], and prostate cancer[9], and in other diseases[10,11]; but not in the treatment of esophageal cancer. In this study, human ESCC Eca-109 cells were used as a research model to preliminarily study the effect and mechanism of CPI-455 against ESCC and to explore novel approaches for the treatment and prevention of ESCC.", "Eca-109 human esophageal cancer cells and HPT-1A normal human esophageal epithelial cells were provided by the Cell Resource Center of the Shanghai Life Sciences Institute, Chinese Academy of Sciences (Shanghai, China). Other materials included CPI-455 (MSDS, 1628208-23-0; Gibco, Ltd., United States), DMEM and RPMI 1640 cell culture media and fetal bovine serum (Gibco, Ltd). Anti-human P53 (cat no. sc-6243), Bax, KDM5C (cat no. sc-81623 ), Caspase-9 (cat no. 9746), Caspase-3 (cat no. 9502), GAPDH (glyceraldehyde-3-phosphate dehydrogenase, cat no. sc-47778) polyclonal antibody (1:200; Abcam, United Kingdom) were used for western blotting. Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) apoptosis detection kits (cat no. F7250) were from Sigma-Aldrich (United States). polyvinylidene difluoride (PVDF) membranes were from Biyuntian Biotech (China) and bicinchoninic acid (BCA) protein quantitative detection kits were from Thermo Fisher (United States). An Epics Ultra flow cytometer (Beckman Coulter, United States), JEM-100sx transmission electron microscope (Jeol Ltd., Japan), and a TCS SP2 laser confocal microscope (Leica, Germany) were used.", "Eca-109 cells were cultured in DMEM and HET-1A cells were cultured in RPIM-1640 medium. Both were supplemented with 10% pre-inactivated FBS, 100 U/mL penicillin, and 100 U/mL streptomycin. Cells were adherently cultured in a 37 °C constant temperature incubator with a volume fraction of 5% CO2 and 95% humidity; 0.25% trypsin-EDTA was used for cell digestion and passage. Cell growth in the culture flask was observed every 1-2 d, and the cell culture medium was replaced for passage culture.", "For assay of growth and the logarithmic phase, cells were collected by trypsin digestion and centrifugation and 90 mL of a single cell suspension (3 × 103/mL) were inoculated into 96-well culture plates and cultured at 37 °C in an atmosphere containing 5% CO2. overnight to ensure adherence. When the cells grew to the appropriate density, the supernatant was discarded and 100 mL of the CPI-455 KDM5C inhibitor was added to each well, giving a final concentration of 15 mmol/L. The plates were cultured in an atmosphere containing 5% CO2 at 37 °C for 0, 24, 48, 72, and 96 h (5 wells/time point). Following this, the cells were collected and centrifuged at 150 g for 10 min. 
The supernatant was discarded and the precipitate was washed once with phosphate buffered saline (PBS). A total of 200 µL of serum-free RPMI-1640 medium was added, followed by addition of 10 µL of methyl tetrazolium (MTT; 5 mg/mL). Following culture for a further 4 h, the supernatant was removed and 150 µL DMSO was added to each well to dissolve the purple crystals. The optical density (OD) was read at 490 nm after the crystals had completely dissolved. The survival rate was calculated as S% = (OD of the experimental group/OD of the control group) × 100.", "The fluorescence of cell smears and suspension cultures after polylysine treatment was observed by confocal microscopy. Red (488/590 + 42 nm) and green (488/530 + 30 nm) dual channel detection and differential interference contrast imaging were performed. A total of 100 fields of vision were screened and 30 typical fields of vision were selected for inclusion in the preliminary statistical analysis. Annexin V-FITC (fluorescein isothiocyanate), PI double staining was performed and the fluorescence of Eca-109 cell membranes and necrotic nuclei was observed and analyzed as described above. DAPI (4', 6-diamidine-2-phenylindole, dihydrochloride) was used to stain the nuclei in cell smears and glycerol patches and analyzed by fluorescence microscopy, as described above.", "Eca-109 cells (5 × 106/mL) were seeded in 24-well plates and cultured for 48 h. Cell smears were prepared from 50 µL of cell suspension on polylysine-treated slides that were fixed with 4% paraformaldehyde at room temperature for 30-60 min and washed twice with PBS. After treatment with terminal deoxynucleotidyl transferase and termination, the preparations were washed four times with PBS and incubated with FITC-labeled antibody for 30 min. Nuclei were stained with PI (1.0 g/mL) and the preparations were sealed with glycerin-mounted coverslips and marked. The cells were observed with a confocal laser scanning microscope, with FITC and PI dual channel fluorescence at an excitation wavelength of 490 nm and an emitted wavelength of 520 nm).", "Eca-109 cells were cultured in a six-well plates (3 × 103 cells/well), treated with 15 μmol/L CPI-455 for 48 h, and collected. The cells were washed three times in PBS, centrifuged for 5 min, and the supernatant was discarded. The cells were resuspended in serum-free medium, dichloro-dihydro-fluorescein diacetate (DCFH-DA) was added to a final concentration of 10 mmol/L, and the plates were incubated at 37 °C for 40 min. The cells were centrifuged for 10 min, and the pellet was collected, washed, and resuspended in precooled PBS. The cells were centrifuged to obtain a pellet and were stained for fluorescence detection. Each assay was performed in triplicate.", "Cells were harvested (2 × 106/mL) at scheduled times after treatment with 15 μmol/L CPI-455 for 48 h. The Mitocapture dye and antibody incubation solutions were freshly prepared. The cells were washed and centrifuged and the mitochondria were stained. The cells were immediately transferred to collecting tubes for flow cytometry.", "Eca-109 cells were cultured for 48 h, collected, and centrifuged at 150 g for 5 min (r = 6 cm). Total protein was extracted using a protein extraction kit. The supernatant was collected, and the protein concentration was determined using a BCA protein kit. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. 
After blocking in 5% skim milk at room temperature for 2 h, anti-human mouse P53, Bax, KDM5C, Caspase-9, Caspase-3, and GAPDH monoclonal antibodies (1:1000; Santa Cruz Biotechnology, Inc., United States) were added and the membranes were incubated overnight at 4 °C and then with horseradish peroxidase (HRP)-labeled mouse secondary antibody (1:3,000; Origene Technologies Inc., China). The films were developed, fixed, scanned and analyzed using Image J software to calculate the gray values. Protein expression of the target protein was reported relative to GAPDH protein expression.", "GraphPad Prism 6.0 software was used to statistical analyze the data. The data were reported as means ± SD. All assays were repeated independently three times. Between-group differences were compared with independent samples t-tests. One-way analysis of variance was used to compare data with normal distributions. Least significant difference (LSD)-t-tests were used for pairwise comparisons of multiple groups. The significance level was = 0.05.", "CPI-455 inhibits Eca-109 cell proliferation The MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and a concentration-dependent manner. The CPI-455 LD50 was 15 mol/L CPI-455 at 48 h. Therefore, the 48 h concentration of 15 mol/L CPI-455 was selected for the follow-up procedures (Figure 1).\n\nEca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control.\nThe MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and a concentration-dependent manner. The CPI-455 LD50 was 15 mol/L CPI-455 at 48 h. Therefore, the 48 h concentration of 15 mol/L CPI-455 was selected for the follow-up procedures (Figure 1).\n\nEca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control.\nCPI-455 induces Eca-109 cell apoptosis The laser confocal scanning microscopy results showed that prior to CPI-455 treatment, the Eca-109 cells had developed a complete mitochondrial structure. Mitocapture revealed a bright red color and \"pipe-like network\" that surrounded the nuclear lobule and was distributed throughout the entire cell. Following treatment with CPI-455 (15 μmol/L) for 48 h, both groups showed the presence of dye in the cytoplasm, with red mitochondrial staining and green cytoplasmic staining. Some cells contained visible \"point and flake\" red-stained mitochondria scattered in the cytoplasm, not present in clusters surrounding the nucleus, and gradually merging to form a \"structureless aggregation”. Substantial changes to the mitochondrial structure were observed, which appeared to indicate mitochondrial atrophy. Following CPI-455 treatment, the Eca-109 cell membranes were weakly stained and the internal cytoplasmic structures indistinct and disorganized, with severe cavitation (Figure 2A and B).\n\nChanges in Eca109 cells after 48 h culture with and without CPI-455 treatment. Left panels show the morphology of the mitochondria of Eca109 cells. A: Arrows indicate mitochondria with complete and clear structure. B: Arrows indicate mitochondria with a fuzzy structure and weak staining of cells. Right panels show TUNEL assay results (bar = 10 μm) and laser confocal scanning microscopy × 600 of nuclear staining and DNA fragmentation of Eca109 cells C: without, and D: with CPI-455 treatment. 
Data are reported as means ± SE (n = 3).\nTUNEL and laser confocal scanning microscopy showed green fluorescence generated by the combination of FITC and the staining of 3'-OH terminal fragments produced by DNA fragmentation in the nucleus of apoptotic cells. Red fluorescence was emitted by PI-stained nuclei, which revealed the location of cells. Double-positive cells were seen in cells treated with CPI-455. Most Eca-109 cells in the CPI-455 group were double-positive for FITC and PI fluorescence. Most Eca-109 cells in the control group were negative for FITC and positive for PI, indicating that TUNEL assay of the CPI-455-stimulated group was positive. In terms of cell morphology, compared with the control group, most Eca-109 cells in the control group had a lower nuclear staining density, an unclear nuclear boundary, and larger nuclei. However, most ECA-109 cells in gemcitabine group had dense chromatin and small nuclei (Figure 2C and D).\nThe laser confocal scanning microscopy results showed that prior to CPI-455 treatment, the Eca-109 cells had developed a complete mitochondrial structure. Mitocapture revealed a bright red color and \"pipe-like network\" that surrounded the nuclear lobule and was distributed throughout the entire cell. Following treatment with CPI-455 (15 μmol/L) for 48 h, both groups showed the presence of dye in the cytoplasm, with red mitochondrial staining and green cytoplasmic staining. Some cells contained visible \"point and flake\" red-stained mitochondria scattered in the cytoplasm, not present in clusters surrounding the nucleus, and gradually merging to form a \"structureless aggregation”. Substantial changes to the mitochondrial structure were observed, which appeared to indicate mitochondrial atrophy. Following CPI-455 treatment, the Eca-109 cell membranes were weakly stained and the internal cytoplasmic structures indistinct and disorganized, with severe cavitation (Figure 2A and B).\n\nChanges in Eca109 cells after 48 h culture with and without CPI-455 treatment. Left panels show the morphology of the mitochondria of Eca109 cells. A: Arrows indicate mitochondria with complete and clear structure. B: Arrows indicate mitochondria with a fuzzy structure and weak staining of cells. Right panels show TUNEL assay results (bar = 10 μm) and laser confocal scanning microscopy × 600 of nuclear staining and DNA fragmentation of Eca109 cells C: without, and D: with CPI-455 treatment. Data are reported as means ± SE (n = 3).\nTUNEL and laser confocal scanning microscopy showed green fluorescence generated by the combination of FITC and the staining of 3'-OH terminal fragments produced by DNA fragmentation in the nucleus of apoptotic cells. Red fluorescence was emitted by PI-stained nuclei, which revealed the location of cells. Double-positive cells were seen in cells treated with CPI-455. Most Eca-109 cells in the CPI-455 group were double-positive for FITC and PI fluorescence. Most Eca-109 cells in the control group were negative for FITC and positive for PI, indicating that TUNEL assay of the CPI-455-stimulated group was positive. In terms of cell morphology, compared with the control group, most Eca-109 cells in the control group had a lower nuclear staining density, an unclear nuclear boundary, and larger nuclei. 
However, most ECA-109 cells in gemcitabine group had dense chromatin and small nuclei (Figure 2C and D).\nCPI-455 upregulates ROS content, P53, Bax, Caspase-9, and Caspase-3 expression and downregulates KDM5C expression and mitochondrial membrane potential in Eca-109 cells Flow cytometry was used to detect the effect of CPI-455 on the level of ROS in Eca-109 cells. With an extension of the induction time, the level of intracellular ROS in the Eca-109 cells was increased significantly, with statistically significant differences observed at 24, 48, and 72 h (P < 0.01, Figure 3). Compared with the control group, the mitochondrial membrane potential was depolarized and decreased (P < 0.01, Figure 4). The western blot results found increased expression of p53, Bax, Caspase-9, and Caspase-3 at 24, 48, and 72 h following CPI-455 treatment (Figure 5). However, KDM5C protein expression in the treated cells was significantly decreased compared with the controls (P < 0.01, Figure 6).\n\nEca-109 induced by CPI-455 is dependent on the generation of reactive oxygen species. A: Eca109 untreated (0 h); B: Eca-109 induced by CPI-455 for 24 h; C: Eca-109 induced by CPI-455 for 48 h; D: Eca-109 induced by CPI-455 for 72 h. Data are reported as means ± SE (n = 3). ROS: Reactive oxygen species.\n\nMitochondrial membrane potential assayed by flow cytometry. A: Eca109 untreated (0 h); B: Eca109 treated with CPI-455 for 48 h; C: Gray scale results of membrane potential energy of mitochondria (Blue: Results of the first assay; Red: Results of the second assay; Green: Results of the third assay). Data are reported as means ± SE (n = 3). aP < 0.05 vs control.\n\nWestern blot assays of p53, Bax, lysine-specific demethylase 5C, cleaved Caspase 3, and cleaved Caspase 9 protein expression in Eca-109 cells. Lane 1: Eca109 untreated (0 h); Lane 2: Eca109 induced by CPI-455 for 24 h; Lane 3: Eca109 induced by CPI-455 for 48 h; Lane 4: Eca109 induced by CPI-455 for 72h. KDM5C: Lysine-specific demethylase 5C; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase.\n\nDensitometry analysis of the immunoblotting data of P53, Bax, lysine-specific demethylase 5C, cleaved Caspase 3, and cleaved Caspase 9 proteins in Eca-109 cells. Data are reported as means ± SE (n = 3). aP < 0.05 vs control; bP < 0.01 vs control.\nFlow cytometry was used to detect the effect of CPI-455 on the level of ROS in Eca-109 cells. With an extension of the induction time, the level of intracellular ROS in the Eca-109 cells was increased significantly, with statistically significant differences observed at 24, 48, and 72 h (P < 0.01, Figure 3). Compared with the control group, the mitochondrial membrane potential was depolarized and decreased (P < 0.01, Figure 4). The western blot results found increased expression of p53, Bax, Caspase-9, and Caspase-3 at 24, 48, and 72 h following CPI-455 treatment (Figure 5). However, KDM5C protein expression in the treated cells was significantly decreased compared with the controls (P < 0.01, Figure 6).\n\nEca-109 induced by CPI-455 is dependent on the generation of reactive oxygen species. A: Eca109 untreated (0 h); B: Eca-109 induced by CPI-455 for 24 h; C: Eca-109 induced by CPI-455 for 48 h; D: Eca-109 induced by CPI-455 for 72 h. Data are reported as means ± SE (n = 3). ROS: Reactive oxygen species.\n\nMitochondrial membrane potential assayed by flow cytometry. 
A: Eca109 untreated (0 h); B: Eca109 treated with CPI-455 for 48 h; C: Gray scale results of membrane potential energy of mitochondria (Blue: Results of the first assay; Red: Results of the second assay; Green: Results of the third assay). Data are reported as means ± SE (n = 3). aP < 0.05 vs control.\n\nWestern blot assays of p53, Bax, lysine-specific demethylase 5C, cleaved Caspase 3, and cleaved Caspase 9 protein expression in Eca-109 cells. Lane 1: Eca109 untreated (0 h); Lane 2: Eca109 induced by CPI-455 for 24 h; Lane 3: Eca109 induced by CPI-455 for 48 h; Lane 4: Eca109 induced by CPI-455 for 72h. KDM5C: Lysine-specific demethylase 5C; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase.\n\nDensitometry analysis of the immunoblotting data of P53, Bax, lysine-specific demethylase 5C, cleaved Caspase 3, and cleaved Caspase 9 proteins in Eca-109 cells. Data are reported as means ± SE (n = 3). aP < 0.05 vs control; bP < 0.01 vs control.", "The MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and a concentration-dependent manner. The CPI-455 LD50 was 15 mol/L CPI-455 at 48 h. Therefore, the 48 h concentration of 15 mol/L CPI-455 was selected for the follow-up procedures (Figure 1).\n\nEca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control.", "The laser confocal scanning microscopy results showed that prior to CPI-455 treatment, the Eca-109 cells had developed a complete mitochondrial structure. Mitocapture revealed a bright red color and \"pipe-like network\" that surrounded the nuclear lobule and was distributed throughout the entire cell. Following treatment with CPI-455 (15 μmol/L) for 48 h, both groups showed the presence of dye in the cytoplasm, with red mitochondrial staining and green cytoplasmic staining. Some cells contained visible \"point and flake\" red-stained mitochondria scattered in the cytoplasm, not present in clusters surrounding the nucleus, and gradually merging to form a \"structureless aggregation”. Substantial changes to the mitochondrial structure were observed, which appeared to indicate mitochondrial atrophy. Following CPI-455 treatment, the Eca-109 cell membranes were weakly stained and the internal cytoplasmic structures indistinct and disorganized, with severe cavitation (Figure 2A and B).\n\nChanges in Eca109 cells after 48 h culture with and without CPI-455 treatment. Left panels show the morphology of the mitochondria of Eca109 cells. A: Arrows indicate mitochondria with complete and clear structure. B: Arrows indicate mitochondria with a fuzzy structure and weak staining of cells. Right panels show TUNEL assay results (bar = 10 μm) and laser confocal scanning microscopy × 600 of nuclear staining and DNA fragmentation of Eca109 cells C: without, and D: with CPI-455 treatment. Data are reported as means ± SE (n = 3).\nTUNEL and laser confocal scanning microscopy showed green fluorescence generated by the combination of FITC and the staining of 3'-OH terminal fragments produced by DNA fragmentation in the nucleus of apoptotic cells. Red fluorescence was emitted by PI-stained nuclei, which revealed the location of cells. Double-positive cells were seen in cells treated with CPI-455. Most Eca-109 cells in the CPI-455 group were double-positive for FITC and PI fluorescence. 
Most Eca-109 cells in the control group were negative for FITC and positive for PI, indicating that TUNEL assay of the CPI-455-stimulated group was positive. In terms of cell morphology, compared with the control group, most Eca-109 cells in the control group had a lower nuclear staining density, an unclear nuclear boundary, and larger nuclei. However, most ECA-109 cells in gemcitabine group had dense chromatin and small nuclei (Figure 2C and D).", "Flow cytometry was used to detect the effect of CPI-455 on the level of ROS in Eca-109 cells. With an extension of the induction time, the level of intracellular ROS in the Eca-109 cells was increased significantly, with statistically significant differences observed at 24, 48, and 72 h (P < 0.01, Figure 3). Compared with the control group, the mitochondrial membrane potential was depolarized and decreased (P < 0.01, Figure 4). The western blot results found increased expression of p53, Bax, Caspase-9, and Caspase-3 at 24, 48, and 72 h following CPI-455 treatment (Figure 5). However, KDM5C protein expression in the treated cells was significantly decreased compared with the controls (P < 0.01, Figure 6).\n\nEca-109 induced by CPI-455 is dependent on the generation of reactive oxygen species. A: Eca109 untreated (0 h); B: Eca-109 induced by CPI-455 for 24 h; C: Eca-109 induced by CPI-455 for 48 h; D: Eca-109 induced by CPI-455 for 72 h. Data are reported as means ± SE (n = 3). ROS: Reactive oxygen species.\n\nMitochondrial membrane potential assayed by flow cytometry. A: Eca109 untreated (0 h); B: Eca109 treated with CPI-455 for 48 h; C: Gray scale results of membrane potential energy of mitochondria (Blue: Results of the first assay; Red: Results of the second assay; Green: Results of the third assay). Data are reported as means ± SE (n = 3). aP < 0.05 vs control.\n\nWestern blot assays of p53, Bax, lysine-specific demethylase 5C, cleaved Caspase 3, and cleaved Caspase 9 protein expression in Eca-109 cells. Lane 1: Eca109 untreated (0 h); Lane 2: Eca109 induced by CPI-455 for 24 h; Lane 3: Eca109 induced by CPI-455 for 48 h; Lane 4: Eca109 induced by CPI-455 for 72h. KDM5C: Lysine-specific demethylase 5C; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase.\n\nDensitometry analysis of the immunoblotting data of P53, Bax, lysine-specific demethylase 5C, cleaved Caspase 3, and cleaved Caspase 9 proteins in Eca-109 cells. Data are reported as means ± SE (n = 3). aP < 0.05 vs control; bP < 0.01 vs control.", "Esophageal cancer is a digestive tract malignant tumor that is difficult to diagnose early. The most common treatments are surgery, radiotherapy, chemotherapy, and biologically targeted therapy[12]. The prognosis remains poor. The occurrence and development of esophageal cancer is complex and involves multiple gene changes, commonly resulting from histone methylation and epigenetic regulation of tumor-related genes.\nCPI-455 is a KDM5C inhibitor that modulates the apoptosis and proliferation of tumor cells by regulating the lysine methylation status KDM5C. Recent reports have confirmed that the anti-tumor effect of CPI-455 in ovarian, breast, stomach, lung, and liver cancer is achieved by inducing the apoptosis and autophagy of tumor cells via modulating KDM5C gene transcription and expression[13,14]. In this study, the anti-Eca-109 effect of CPI-455 was studied and the associated mechanism was explored.\nThe study results found that CPI-455 inhibited the proliferation of esophageal cancer cells in a time-and concentration-dependent manner. 
However, at its effective concentration, CPI-455 was less toxic to normal esophageal cells than to Eca-109 cells. The inhibitory effect of CPI-455 on Eca-109 cell proliferation was confirmed by the assay of cell-membrane permeability by laser confocal scanning microscopy. This finding indicates that CPI-455 has favorable effects by inhibiting Eca-109 cell proliferation and growth, as well as the selective killing on ESCC cells (Figures 1 and 2).\nFlow cytometry was used to determine the ROS content of Eca-109 cells, which is considered to be the by-product of oxygen consumption and cell metabolism. Previous studies have found that various chemotherapy drugs can induce an increase in the ROS content in tumor cells, and that at a certain level, ROS can induce the apoptosis of tumor cells[15-18]. Decrease of the mitochondrial membrane potential is the initial manifestation of the hypoxia apoptosis cascade[19]. In this study, the level of intracellular ROS significantly increased in treated Eca-109 cells (Figure 3), and depolarization of mitochondrial membrane was increased compared with controls (Figure 4). CPI-455 significantly induced the apoptosis of Eca-109 cells.\nPrevious studies have confirmed that the mechanism by which tumor cell apoptosis is induced varies widely depending on the various pathways that are involved. These includes mitochondrial pathways, which play an important role in the maintenance of stability, and those involving activation of multiple upstream and downstream genes (e.g., p53 and Bax). Activation of Caspase-2, 3, and 9 on the mitochondrial membrane is linked to the activation of mitochondrial apoptotic pathways[22-24].\nTherefore, the inhibitory effect of the KDM5C inhibitor, CPI-455, on Eca-109 cell proliferation may be related to the downregulation of KDM5C expression, activation of Caspase-3 and Caspase-9, change in mitochondrial membrane permeability, promotion of ROS production, depolarization of the mitochondrial membrane, release of the proapoptotic proteins, Bax and p53, in the mitochondria into the cytoplasm, and induction of tumor cell apoptosis (Figures 5 and 6). The above results indicate that CPI-455 induced apoptosis through an ROS-dependent mitochondrial signaling pathway[25].\nOur study had some limitations, the mechanism involved in inducing apoptosis of tumor cells is extensive, and the mitochondrial apoptosis pathway is only one of them that plays an important role in maintaining cell stability. The mechanism has not been fully elucidated and needs further study. The results of this study provide a theoretical basis for the application of CPI-455 for the treatment of ESCC.", "CPI-455 inhibited ECA-109 cell proliferation via the mitochondrial apoptosis pathway by regulating the expression of related genes." ]
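As a side note on the methyl tetrazolium assay described in the methods above, the survival-rate formula S% = (OD of the experimental group/OD of the control group) × 100 reduces to a one-line calculation. The sketch below uses hypothetical OD490 readings purely for illustration; the values are not the authors' data.

```python
from statistics import mean

# Hypothetical OD490 readings; survival is expressed relative to the untreated control.
control_od = [0.82, 0.79, 0.84]   # untreated wells
treated_od = [0.41, 0.44, 0.39]   # CPI-455-treated wells

survival_pct = mean(treated_od) / mean(control_od) * 100  # S% = OD_exp / OD_control x 100
print(f"survival rate: {survival_pct:.1f}% of control")
```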
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Cell lines", "Cell culture", "Methyl tetrazolium assay", "Laser confocal scanning microscope assay of the ectropion of membrane phosphatidylserine", "Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay of nucleation and DNA fragmentation of Eca-109 cells", "Flow cytometry assay of reactive oxygen species (ROS)", "Flow cytometry assay of mitochondrial membrane potential", "Western blotting assay of P53, Bax, KDM5C, cleaved Caspase-9, and cleaved Caspase-3", "Statistical analysis", "RESULTS", "CPI-455 inhibits Eca-109 cell proliferation", "CPI-455 induces Eca-109 cell apoptosis", "CPI-455 upregulates ROS content, P53, Bax, Caspase-9, and Caspase-3 expression and downregulates KDM5C expression and mitochondrial membrane potential in Eca-109 cells", "DISCUSSION", "CONCLUSION" ]
[ "Esophageal cancer is a common digestive tract cancer, of which esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma represent the main histological types[1,2]. The incidence of esophageal cancer is relatively high, ranking seventh worldwide. ESCC accounts for more than 90% of esophageal cancers. As it is difficult to diagnose ESCC early due to a lack of specific clinical symptoms and insufficient preventive measures, it is often diagnosed at an advanced stage and has a poor prognosis. In addition, the 5-yr postoperative survival rate is less than 50%[3-5].The lysine-specific demethylase 5C (KDM5C) inhibitor, CPI-455, regulates the apoptosis and proliferation of tumor cells by regulating the methylation state of lysine in KDM5C[6]. CPI-455 activity been reported in cervical[7], gastric[8], and prostate cancer[9], and in other diseases[10,11]; but not in the treatment of esophageal cancer. In this study, human ESCC Eca-109 cells were used as a research model to preliminarily study the effect and mechanism of CPI-455 against ESCC and to explore novel approaches for the treatment and prevention of ESCC.", "Cell lines Eca-109 human esophageal cancer cells and HPT-1A normal human esophageal epithelial cells were provided by the Cell Resource Center of the Shanghai Life Sciences Institute, Chinese Academy of Sciences (Shanghai, China). Other materials included CPI-455 (MSDS, 1628208-23-0; Gibco, Ltd., United States), DMEM and RPMI 1640 cell culture media and fetal bovine serum (Gibco, Ltd). Anti-human P53 (cat no. sc-6243), Bax, KDM5C (cat no. sc-81623 ), Caspase-9 (cat no. 9746), Caspase-3 (cat no. 9502), GAPDH (glyceraldehyde-3-phosphate dehydrogenase, cat no. sc-47778) polyclonal antibody (1:200; Abcam, United Kingdom) were used for western blotting. Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) apoptosis detection kits (cat no. F7250) were from Sigma-Aldrich (United States). polyvinylidene difluoride (PVDF) membranes were from Biyuntian Biotech (China) and bicinchoninic acid (BCA) protein quantitative detection kits were from Thermo Fisher (United States). An Epics Ultra flow cytometer (Beckman Coulter, United States), JEM-100sx transmission electron microscope (Jeol Ltd., Japan), and a TCS SP2 laser confocal microscope (Leica, Germany) were used.\nEca-109 human esophageal cancer cells and HPT-1A normal human esophageal epithelial cells were provided by the Cell Resource Center of the Shanghai Life Sciences Institute, Chinese Academy of Sciences (Shanghai, China). Other materials included CPI-455 (MSDS, 1628208-23-0; Gibco, Ltd., United States), DMEM and RPMI 1640 cell culture media and fetal bovine serum (Gibco, Ltd). Anti-human P53 (cat no. sc-6243), Bax, KDM5C (cat no. sc-81623 ), Caspase-9 (cat no. 9746), Caspase-3 (cat no. 9502), GAPDH (glyceraldehyde-3-phosphate dehydrogenase, cat no. sc-47778) polyclonal antibody (1:200; Abcam, United Kingdom) were used for western blotting. Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) apoptosis detection kits (cat no. F7250) were from Sigma-Aldrich (United States). polyvinylidene difluoride (PVDF) membranes were from Biyuntian Biotech (China) and bicinchoninic acid (BCA) protein quantitative detection kits were from Thermo Fisher (United States). 
An Epics Ultra flow cytometer (Beckman Coulter, United States), JEM-100sx transmission electron microscope (Jeol Ltd., Japan), and a TCS SP2 laser confocal microscope (Leica, Germany) were used.\nCell culture Eca-109 cells were cultured in DMEM and HET-1A cells were cultured in RPIM-1640 medium. Both were supplemented with 10% pre-inactivated FBS, 100 U/mL penicillin, and 100 U/mL streptomycin. Cells were adherently cultured in a 37 °C constant temperature incubator with a volume fraction of 5% CO2 and 95% humidity; 0.25% trypsin-EDTA was used for cell digestion and passage. Cell growth in the culture flask was observed every 1-2 d, and the cell culture medium was replaced for passage culture.\nEca-109 cells were cultured in DMEM and HET-1A cells were cultured in RPIM-1640 medium. Both were supplemented with 10% pre-inactivated FBS, 100 U/mL penicillin, and 100 U/mL streptomycin. Cells were adherently cultured in a 37 °C constant temperature incubator with a volume fraction of 5% CO2 and 95% humidity; 0.25% trypsin-EDTA was used for cell digestion and passage. Cell growth in the culture flask was observed every 1-2 d, and the cell culture medium was replaced for passage culture.\nMethyl tetrazolium assay For assay of growth and the logarithmic phase, cells were collected by trypsin digestion and centrifugation and 90 mL of a single cell suspension (3 × 103/mL) were inoculated into 96-well culture plates and cultured at 37 °C in an atmosphere containing 5% CO2. overnight to ensure adherence. When the cells grew to the appropriate density, the supernatant was discarded and 100 mL of the CPI-455 KDM5C inhibitor was added to each well, giving a final concentration of 15 mmol/L. The plates were cultured in an atmosphere containing 5% CO2 at 37 °C for 0, 24, 48, 72, and 96 h (5 wells/time point). Following this, the cells were collected and centrifuged at 150 g for 10 min. The supernatant was discarded and the precipitate was washed once with phosphate buffered saline (PBS). A total of 200 µL of serum-free RPMI-1640 medium was added, followed by addition of 10 µL of methyl tetrazolium (MTT; 5 mg/mL). Following culture for a further 4 h, the supernatant was removed and 150 µL DMSO was added to each well to dissolve the purple crystals. The optical density (OD) was read at 490 nm after the crystals had completely dissolved. The survival rate was calculated as S% = (OD of the experimental group/OD of the control group) × 100.\nFor assay of growth and the logarithmic phase, cells were collected by trypsin digestion and centrifugation and 90 mL of a single cell suspension (3 × 103/mL) were inoculated into 96-well culture plates and cultured at 37 °C in an atmosphere containing 5% CO2. overnight to ensure adherence. When the cells grew to the appropriate density, the supernatant was discarded and 100 mL of the CPI-455 KDM5C inhibitor was added to each well, giving a final concentration of 15 mmol/L. The plates were cultured in an atmosphere containing 5% CO2 at 37 °C for 0, 24, 48, 72, and 96 h (5 wells/time point). Following this, the cells were collected and centrifuged at 150 g for 10 min. The supernatant was discarded and the precipitate was washed once with phosphate buffered saline (PBS). A total of 200 µL of serum-free RPMI-1640 medium was added, followed by addition of 10 µL of methyl tetrazolium (MTT; 5 mg/mL). Following culture for a further 4 h, the supernatant was removed and 150 µL DMSO was added to each well to dissolve the purple crystals. 
The optical density (OD) was read at 490 nm after the crystals had completely dissolved. The survival rate was calculated as S% = (OD of the experimental group/OD of the control group) × 100.\nLaser confocal scanning microscope assay of the ectropion of membrane phosphatidylserine The fluorescence of cell smears and suspension cultures after polylysine treatment was observed by confocal microscopy. Red (488/590 + 42 nm) and green (488/530 + 30 nm) dual channel detection and differential interference contrast imaging were performed. A total of 100 fields of vision were screened and 30 typical fields of vision were selected for inclusion in the preliminary statistical analysis. Annexin V-FITC (fluorescein isothiocyanate), PI double staining was performed and the fluorescence of Eca-109 cell membranes and necrotic nuclei was observed and analyzed as described above. DAPI (4', 6-diamidine-2-phenylindole, dihydrochloride) was used to stain the nuclei in cell smears and glycerol patches and analyzed by fluorescence microscopy, as described above.\nThe fluorescence of cell smears and suspension cultures after polylysine treatment was observed by confocal microscopy. Red (488/590 + 42 nm) and green (488/530 + 30 nm) dual channel detection and differential interference contrast imaging were performed. A total of 100 fields of vision were screened and 30 typical fields of vision were selected for inclusion in the preliminary statistical analysis. Annexin V-FITC (fluorescein isothiocyanate), PI double staining was performed and the fluorescence of Eca-109 cell membranes and necrotic nuclei was observed and analyzed as described above. DAPI (4', 6-diamidine-2-phenylindole, dihydrochloride) was used to stain the nuclei in cell smears and glycerol patches and analyzed by fluorescence microscopy, as described above.\nTerminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay of nucleation and DNA fragmentation of Eca-109 cells Eca-109 cells (5 × 106/mL) were seeded in 24-well plates and cultured for 48 h. Cell smears were prepared from 50 µL of cell suspension on polylysine-treated slides that were fixed with 4% paraformaldehyde at room temperature for 30-60 min and washed twice with PBS. After treatment with terminal deoxynucleotidyl transferase and termination, the preparations were washed four times with PBS and incubated with FITC-labeled antibody for 30 min. Nuclei were stained with PI (1.0 g/mL) and the preparations were sealed with glycerin-mounted coverslips and marked. The cells were observed with a confocal laser scanning microscope, with FITC and PI dual channel fluorescence at an excitation wavelength of 490 nm and an emitted wavelength of 520 nm).\nEca-109 cells (5 × 106/mL) were seeded in 24-well plates and cultured for 48 h. Cell smears were prepared from 50 µL of cell suspension on polylysine-treated slides that were fixed with 4% paraformaldehyde at room temperature for 30-60 min and washed twice with PBS. After treatment with terminal deoxynucleotidyl transferase and termination, the preparations were washed four times with PBS and incubated with FITC-labeled antibody for 30 min. Nuclei were stained with PI (1.0 g/mL) and the preparations were sealed with glycerin-mounted coverslips and marked. 
The cells were observed with a confocal laser scanning microscope, with FITC and PI dual channel fluorescence at an excitation wavelength of 490 nm and an emitted wavelength of 520 nm).\nFlow cytometry assay of reactive oxygen species (ROS) Eca-109 cells were cultured in a six-well plates (3 × 103 cells/well), treated with 15 μmol/L CPI-455 for 48 h, and collected. The cells were washed three times in PBS, centrifuged for 5 min, and the supernatant was discarded. The cells were resuspended in serum-free medium, dichloro-dihydro-fluorescein diacetate (DCFH-DA) was added to a final concentration of 10 mmol/L, and the plates were incubated at 37 °C for 40 min. The cells were centrifuged for 10 min, and the pellet was collected, washed, and resuspended in precooled PBS. The cells were centrifuged to obtain a pellet and were stained for fluorescence detection. Each assay was performed in triplicate.\nEca-109 cells were cultured in a six-well plates (3 × 103 cells/well), treated with 15 μmol/L CPI-455 for 48 h, and collected. The cells were washed three times in PBS, centrifuged for 5 min, and the supernatant was discarded. The cells were resuspended in serum-free medium, dichloro-dihydro-fluorescein diacetate (DCFH-DA) was added to a final concentration of 10 mmol/L, and the plates were incubated at 37 °C for 40 min. The cells were centrifuged for 10 min, and the pellet was collected, washed, and resuspended in precooled PBS. The cells were centrifuged to obtain a pellet and were stained for fluorescence detection. Each assay was performed in triplicate.\nFlow cytometry assay of mitochondrial membrane potential Cells were harvested (2 × 106/mL) at scheduled times after treatment with 15 μmol/L CPI-455 for 48 h. The Mitocapture dye and antibody incubation solutions were freshly prepared. The cells were washed and centrifuged and the mitochondria were stained. The cells were immediately transferred to collecting tubes for flow cytometry.\nCells were harvested (2 × 106/mL) at scheduled times after treatment with 15 μmol/L CPI-455 for 48 h. The Mitocapture dye and antibody incubation solutions were freshly prepared. The cells were washed and centrifuged and the mitochondria were stained. The cells were immediately transferred to collecting tubes for flow cytometry.\nWestern blotting assay of P53, Bax, KDM5C, cleaved Caspase-9, and cleaved Caspase-3 Eca-109 cells were cultured for 48 h, collected, and centrifuged at 150 g for 5 min (r = 6 cm). Total protein was extracted using a protein extraction kit. The supernatant was collected, and the protein concentration was determined using a BCA protein kit. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. After blocking in 5% skim milk at room temperature for 2 h, anti-human mouse P53, Bax, KDM5C, Caspase-9, Caspase-3, and GAPDH monoclonal antibodies (1:1000; Santa Cruz Biotechnology, Inc., United States) were added and the membranes were incubated overnight at 4 °C and then with horseradish peroxidase (HRP)-labeled mouse secondary antibody (1:3,000; Origene Technologies Inc., China). The films were developed, fixed, scanned and analyzed using Image J software to calculate the gray values. Protein expression of the target protein was reported relative to GAPDH protein expression.\nEca-109 cells were cultured for 48 h, collected, and centrifuged at 150 g for 5 min (r = 6 cm). Total protein was extracted using a protein extraction kit. 
The supernatant was collected, and the protein concentration was determined using a BCA protein kit. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. After blocking in 5% skim milk at room temperature for 2 h, anti-human mouse P53, Bax, KDM5C, Caspase-9, Caspase-3, and GAPDH monoclonal antibodies (1:1000; Santa Cruz Biotechnology, Inc., United States) were added and the membranes were incubated overnight at 4 °C and then with horseradish peroxidase (HRP)-labeled mouse secondary antibody (1:3,000; Origene Technologies Inc., China). The films were developed, fixed, scanned and analyzed using Image J software to calculate the gray values. Protein expression of the target protein was reported relative to GAPDH protein expression.\nStatistical analysis GraphPad Prism 6.0 software was used to statistical analyze the data. The data were reported as means ± SD. All assays were repeated independently three times. Between-group differences were compared with independent samples t-tests. One-way analysis of variance was used to compare data with normal distributions. Least significant difference (LSD)-t-tests were used for pairwise comparisons of multiple groups. The significance level was = 0.05.\nGraphPad Prism 6.0 software was used to statistical analyze the data. The data were reported as means ± SD. All assays were repeated independently three times. Between-group differences were compared with independent samples t-tests. One-way analysis of variance was used to compare data with normal distributions. Least significant difference (LSD)-t-tests were used for pairwise comparisons of multiple groups. The significance level was = 0.05.", "Eca-109 human esophageal cancer cells and HPT-1A normal human esophageal epithelial cells were provided by the Cell Resource Center of the Shanghai Life Sciences Institute, Chinese Academy of Sciences (Shanghai, China). Other materials included CPI-455 (MSDS, 1628208-23-0; Gibco, Ltd., United States), DMEM and RPMI 1640 cell culture media and fetal bovine serum (Gibco, Ltd). Anti-human P53 (cat no. sc-6243), Bax, KDM5C (cat no. sc-81623 ), Caspase-9 (cat no. 9746), Caspase-3 (cat no. 9502), GAPDH (glyceraldehyde-3-phosphate dehydrogenase, cat no. sc-47778) polyclonal antibody (1:200; Abcam, United Kingdom) were used for western blotting. Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) apoptosis detection kits (cat no. F7250) were from Sigma-Aldrich (United States). polyvinylidene difluoride (PVDF) membranes were from Biyuntian Biotech (China) and bicinchoninic acid (BCA) protein quantitative detection kits were from Thermo Fisher (United States). An Epics Ultra flow cytometer (Beckman Coulter, United States), JEM-100sx transmission electron microscope (Jeol Ltd., Japan), and a TCS SP2 laser confocal microscope (Leica, Germany) were used.", "Eca-109 cells were cultured in DMEM and HET-1A cells were cultured in RPIM-1640 medium. Both were supplemented with 10% pre-inactivated FBS, 100 U/mL penicillin, and 100 U/mL streptomycin. Cells were adherently cultured in a 37 °C constant temperature incubator with a volume fraction of 5% CO2 and 95% humidity; 0.25% trypsin-EDTA was used for cell digestion and passage. 
Cell growth in the culture flask was observed every 1-2 d, and the cell culture medium was replaced for passage culture.", "For assay of growth and the logarithmic phase, cells were collected by trypsin digestion and centrifugation and 90 mL of a single cell suspension (3 × 103/mL) were inoculated into 96-well culture plates and cultured at 37 °C in an atmosphere containing 5% CO2. overnight to ensure adherence. When the cells grew to the appropriate density, the supernatant was discarded and 100 mL of the CPI-455 KDM5C inhibitor was added to each well, giving a final concentration of 15 mmol/L. The plates were cultured in an atmosphere containing 5% CO2 at 37 °C for 0, 24, 48, 72, and 96 h (5 wells/time point). Following this, the cells were collected and centrifuged at 150 g for 10 min. The supernatant was discarded and the precipitate was washed once with phosphate buffered saline (PBS). A total of 200 µL of serum-free RPMI-1640 medium was added, followed by addition of 10 µL of methyl tetrazolium (MTT; 5 mg/mL). Following culture for a further 4 h, the supernatant was removed and 150 µL DMSO was added to each well to dissolve the purple crystals. The optical density (OD) was read at 490 nm after the crystals had completely dissolved. The survival rate was calculated as S% = (OD of the experimental group/OD of the control group) × 100.", "The fluorescence of cell smears and suspension cultures after polylysine treatment was observed by confocal microscopy. Red (488/590 + 42 nm) and green (488/530 + 30 nm) dual channel detection and differential interference contrast imaging were performed. A total of 100 fields of vision were screened and 30 typical fields of vision were selected for inclusion in the preliminary statistical analysis. Annexin V-FITC (fluorescein isothiocyanate), PI double staining was performed and the fluorescence of Eca-109 cell membranes and necrotic nuclei was observed and analyzed as described above. DAPI (4', 6-diamidine-2-phenylindole, dihydrochloride) was used to stain the nuclei in cell smears and glycerol patches and analyzed by fluorescence microscopy, as described above.", "Eca-109 cells (5 × 106/mL) were seeded in 24-well plates and cultured for 48 h. Cell smears were prepared from 50 µL of cell suspension on polylysine-treated slides that were fixed with 4% paraformaldehyde at room temperature for 30-60 min and washed twice with PBS. After treatment with terminal deoxynucleotidyl transferase and termination, the preparations were washed four times with PBS and incubated with FITC-labeled antibody for 30 min. Nuclei were stained with PI (1.0 g/mL) and the preparations were sealed with glycerin-mounted coverslips and marked. The cells were observed with a confocal laser scanning microscope, with FITC and PI dual channel fluorescence at an excitation wavelength of 490 nm and an emitted wavelength of 520 nm).", "Eca-109 cells were cultured in a six-well plates (3 × 103 cells/well), treated with 15 μmol/L CPI-455 for 48 h, and collected. The cells were washed three times in PBS, centrifuged for 5 min, and the supernatant was discarded. The cells were resuspended in serum-free medium, dichloro-dihydro-fluorescein diacetate (DCFH-DA) was added to a final concentration of 10 mmol/L, and the plates were incubated at 37 °C for 40 min. The cells were centrifuged for 10 min, and the pellet was collected, washed, and resuspended in precooled PBS. The cells were centrifuged to obtain a pellet and were stained for fluorescence detection. 
Each assay was performed in triplicate.", "Cells were harvested (2 × 106/mL) at scheduled times after treatment with 15 μmol/L CPI-455 for 48 h. The Mitocapture dye and antibody incubation solutions were freshly prepared. The cells were washed and centrifuged and the mitochondria were stained. The cells were immediately transferred to collecting tubes for flow cytometry.", "Eca-109 cells were cultured for 48 h, collected, and centrifuged at 150 g for 5 min (r = 6 cm). Total protein was extracted using a protein extraction kit. The supernatant was collected, and the protein concentration was determined using a BCA protein kit. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. After blocking in 5% skim milk at room temperature for 2 h, anti-human mouse P53, Bax, KDM5C, Caspase-9, Caspase-3, and GAPDH monoclonal antibodies (1:1000; Santa Cruz Biotechnology, Inc., United States) were added and the membranes were incubated overnight at 4 °C and then with horseradish peroxidase (HRP)-labeled mouse secondary antibody (1:3,000; Origene Technologies Inc., China). The films were developed, fixed, scanned and analyzed using Image J software to calculate the gray values. Protein expression of the target protein was reported relative to GAPDH protein expression.", "GraphPad Prism 6.0 software was used to statistical analyze the data. The data were reported as means ± SD. All assays were repeated independently three times. Between-group differences were compared with independent samples t-tests. One-way analysis of variance was used to compare data with normal distributions. Least significant difference (LSD)-t-tests were used for pairwise comparisons of multiple groups. The significance level was = 0.05.", "CPI-455 inhibits Eca-109 cell proliferation The MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and a concentration-dependent manner. The CPI-455 LD50 was 15 mol/L CPI-455 at 48 h. Therefore, the 48 h concentration of 15 mol/L CPI-455 was selected for the follow-up procedures (Figure 1).\n\nEca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control.\nThe MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and a concentration-dependent manner. The CPI-455 LD50 was 15 mol/L CPI-455 at 48 h. Therefore, the 48 h concentration of 15 mol/L CPI-455 was selected for the follow-up procedures (Figure 1).\n\nEca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control.\nCPI-455 induces Eca-109 cell apoptosis The laser confocal scanning microscopy results showed that prior to CPI-455 treatment, the Eca-109 cells had developed a complete mitochondrial structure. Mitocapture revealed a bright red color and \"pipe-like network\" that surrounded the nuclear lobule and was distributed throughout the entire cell. Following treatment with CPI-455 (15 μmol/L) for 48 h, both groups showed the presence of dye in the cytoplasm, with red mitochondrial staining and green cytoplasmic staining. Some cells contained visible \"point and flake\" red-stained mitochondria scattered in the cytoplasm, not present in clusters surrounding the nucleus, and gradually merging to form a \"structureless aggregation”. 
Substantial changes to the mitochondrial structure were observed, which appeared to indicate mitochondrial atrophy. Following CPI-455 treatment, the Eca-109 cell membranes were weakly stained and the internal cytoplasmic structures were indistinct and disorganized, with severe cavitation (Figure 2A and B).\n\nChanges in Eca-109 cells after 48 h of culture with and without CPI-455 treatment. Left panels show the morphology of the mitochondria of Eca-109 cells. A: Arrows indicate mitochondria with a complete and clear structure. B: Arrows indicate mitochondria with a fuzzy structure and weak staining of cells. Right panels show TUNEL assay results (bar = 10 μm) and laser confocal scanning microscopy (× 600) of nuclear staining and DNA fragmentation of Eca-109 cells C: without, and D: with CPI-455 treatment. Data are reported as means ± SE (n = 3).\nTUNEL and laser confocal scanning microscopy showed green fluorescence generated by the binding of FITC to the 3'-OH terminal fragments produced by DNA fragmentation in the nuclei of apoptotic cells. Red fluorescence was emitted by PI-stained nuclei, which revealed the location of the cells. Most Eca-109 cells in the CPI-455 group were double-positive for FITC and PI fluorescence, whereas most Eca-109 cells in the control group were negative for FITC and positive for PI, indicating that the TUNEL assay of the CPI-455-stimulated group was positive. In terms of cell morphology, most Eca-109 cells in the control group had a lower nuclear staining density, an unclear nuclear boundary, and larger nuclei, whereas most Eca-109 cells in the CPI-455-treated group had dense chromatin and small nuclei (Figure 2C and D).\nCPI-455 upregulates ROS content and P53, Bax, Caspase-9, and Caspase-3 expression and downregulates KDM5C expression and mitochondrial membrane potential in Eca-109 cells: Flow cytometry was used to detect the effect of CPI-455 on the level of ROS in Eca-109 cells. With extension of the induction time, the level of intracellular ROS in the Eca-109 cells increased significantly, with statistically significant differences observed at 24, 48, and 72 h (P < 0.01, Figure 3). Compared with the control group, the mitochondrial membrane potential was depolarized and decreased (P < 0.01, Figure 4). The western blot results showed increased expression of p53, Bax, Caspase-9, and Caspase-3 at 24, 48, and 72 h following CPI-455 treatment (Figure 5). However, KDM5C protein expression in the treated cells was significantly decreased compared with the controls (P < 0.01, Figure 6).\n\nEca-109 cell apoptosis induced by CPI-455 is dependent on the generation of reactive oxygen species. A: Eca-109 untreated (0 h); B: Eca-109 induced by CPI-455 for 24 h; C: Eca-109 induced by CPI-455 for 48 h; D: Eca-109 induced by CPI-455 for 72 h. Data are reported as means ± SE (n = 3). ROS: Reactive oxygen species.\n\nMitochondrial membrane potential assayed by flow cytometry. A: Eca-109 untreated (0 h); B: Eca-109 treated with CPI-455 for 48 h; C: Gray-scale results of the mitochondrial membrane potential (blue: first assay; red: second assay; green: third assay). Data are reported as means ± SE (n = 3). aP < 0.05 vs control.\n\nWestern blot assays of p53, Bax, lysine-specific demethylase 5C, cleaved Caspase-3, and cleaved Caspase-9 protein expression in Eca-109 cells. Lane 1: Eca-109 untreated (0 h); Lane 2: Eca-109 induced by CPI-455 for 24 h; Lane 3: Eca-109 induced by CPI-455 for 48 h; Lane 4: Eca-109 induced by CPI-455 for 72 h. KDM5C: Lysine-specific demethylase 5C; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase.\n\nDensitometry analysis of the immunoblotting data of P53, Bax, lysine-specific demethylase 5C, cleaved Caspase-3, and cleaved Caspase-9 proteins in Eca-109 cells. Data are reported as means ± SE (n = 3). aP < 0.05 vs control; bP < 0.01 vs control.", "The MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and concentration-dependent manner. The LD50 of CPI-455 at 48 h was 15 μmol/L. Therefore, treatment with 15 μmol/L CPI-455 for 48 h was selected for the follow-up procedures (Figure 1).\n\nEca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control.", "The laser confocal scanning microscopy results showed that prior to CPI-455 treatment, the Eca-109 cells had a complete mitochondrial structure. Mitocapture revealed a bright red color and \"pipe-like network\" that surrounded the nuclear lobule and was distributed throughout the entire cell. Following treatment with CPI-455 (15 μmol/L) for 48 h, the cells showed dye present in the cytoplasm, with both red mitochondrial staining and green cytoplasmic staining. Some cells contained visible \"point and flake\" red-stained mitochondria scattered in the cytoplasm, no longer present in clusters surrounding the nucleus and gradually merging to form a \"structureless aggregation\". Substantial changes to the mitochondrial structure were observed, which appeared to indicate mitochondrial atrophy. Following CPI-455 treatment, the Eca-109 cell membranes were weakly stained and the internal cytoplasmic structures were indistinct and disorganized, with severe cavitation (Figure 2A and B).\n\nChanges in Eca-109 cells after 48 h of culture with and without CPI-455 treatment. Left panels show the morphology of the mitochondria of Eca-109 cells. A: Arrows indicate mitochondria with a complete and clear structure. B: Arrows indicate mitochondria with a fuzzy structure and weak staining of cells. Right panels show TUNEL assay results (bar = 10 μm) and laser confocal scanning microscopy (× 600) of nuclear staining and DNA fragmentation of Eca-109 cells C: without, and D: with CPI-455 treatment.
Data are reported as means ± SE (n = 3).\nTUNEL and laser confocal scanning microscopy showed green fluorescence generated by the binding of FITC to the 3'-OH terminal fragments produced by DNA fragmentation in the nuclei of apoptotic cells. Red fluorescence was emitted by PI-stained nuclei, which revealed the location of the cells. Most Eca-109 cells in the CPI-455 group were double-positive for FITC and PI fluorescence, whereas most Eca-109 cells in the control group were negative for FITC and positive for PI, indicating that the TUNEL assay of the CPI-455-stimulated group was positive. In terms of cell morphology, most Eca-109 cells in the control group had a lower nuclear staining density, an unclear nuclear boundary, and larger nuclei, whereas most Eca-109 cells in the CPI-455-treated group had dense chromatin and small nuclei (Figure 2C and D).", "Flow cytometry was used to detect the effect of CPI-455 on the level of ROS in Eca-109 cells. With extension of the induction time, the level of intracellular ROS in the Eca-109 cells increased significantly, with statistically significant differences observed at 24, 48, and 72 h (P < 0.01, Figure 3). Compared with the control group, the mitochondrial membrane potential was depolarized and decreased (P < 0.01, Figure 4). The western blot results showed increased expression of p53, Bax, Caspase-9, and Caspase-3 at 24, 48, and 72 h following CPI-455 treatment (Figure 5). However, KDM5C protein expression in the treated cells was significantly decreased compared with the controls (P < 0.01, Figure 6).\n\nEca-109 cell apoptosis induced by CPI-455 is dependent on the generation of reactive oxygen species. A: Eca-109 untreated (0 h); B: Eca-109 induced by CPI-455 for 24 h; C: Eca-109 induced by CPI-455 for 48 h; D: Eca-109 induced by CPI-455 for 72 h. Data are reported as means ± SE (n = 3). ROS: Reactive oxygen species.\n\nMitochondrial membrane potential assayed by flow cytometry. A: Eca-109 untreated (0 h); B: Eca-109 treated with CPI-455 for 48 h; C: Gray-scale results of the mitochondrial membrane potential (blue: first assay; red: second assay; green: third assay). Data are reported as means ± SE (n = 3). aP < 0.05 vs control.\n\nWestern blot assays of p53, Bax, lysine-specific demethylase 5C, cleaved Caspase-3, and cleaved Caspase-9 protein expression in Eca-109 cells. Lane 1: Eca-109 untreated (0 h); Lane 2: Eca-109 induced by CPI-455 for 24 h; Lane 3: Eca-109 induced by CPI-455 for 48 h; Lane 4: Eca-109 induced by CPI-455 for 72 h. KDM5C: Lysine-specific demethylase 5C; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase.\n\nDensitometry analysis of the immunoblotting data of P53, Bax, lysine-specific demethylase 5C, cleaved Caspase-3, and cleaved Caspase-9 proteins in Eca-109 cells. Data are reported as means ± SE (n = 3). aP < 0.05 vs control; bP < 0.01 vs control.", "Esophageal cancer is a digestive tract malignant tumor that is difficult to diagnose early. The most common treatments are surgery, radiotherapy, chemotherapy, and biologically targeted therapy[12]. The prognosis remains poor. The occurrence and development of esophageal cancer are complex and involve multiple gene changes, commonly resulting from histone methylation and epigenetic regulation of tumor-related genes.\nCPI-455 is a KDM5C inhibitor that modulates the apoptosis and proliferation of tumor cells by regulating the lysine methylation status of KDM5C. Recent reports have confirmed that the anti-tumor effect of CPI-455 in ovarian, breast, stomach, lung, and liver cancer is achieved by inducing the apoptosis and autophagy of tumor cells via modulation of KDM5C gene transcription and expression[13,14]. In this study, the anti-Eca-109 effect of CPI-455 was studied and the associated mechanism was explored.\nThe study results showed that CPI-455 inhibited the proliferation of esophageal cancer cells in a time- and concentration-dependent manner. However, at its effective concentration, CPI-455 was less toxic to normal esophageal cells than to Eca-109 cells. The inhibitory effect of CPI-455 on Eca-109 cell proliferation was confirmed by the assay of cell-membrane permeability by laser confocal scanning microscopy. These findings indicate that CPI-455 inhibits Eca-109 cell proliferation and growth and selectively kills ESCC cells (Figures 1 and 2).\nFlow cytometry was used to determine the ROS content of Eca-109 cells; ROS are considered to be by-products of oxygen consumption and cell metabolism. Previous studies have found that various chemotherapy drugs can induce an increase in the ROS content of tumor cells and that, above a certain level, ROS can induce the apoptosis of tumor cells[15-18]. A decrease in the mitochondrial membrane potential is the initial manifestation of the hypoxia apoptosis cascade[19]. In this study, the level of intracellular ROS significantly increased in treated Eca-109 cells (Figure 3), and depolarization of the mitochondrial membrane was increased compared with controls (Figure 4). CPI-455 significantly induced the apoptosis of Eca-109 cells.\nPrevious studies have confirmed that the mechanisms by which tumor cell apoptosis is induced vary widely depending on the pathways involved. These include mitochondrial pathways, which play an important role in the maintenance of cell stability, and those involving activation of multiple upstream and downstream genes (e.g., p53 and Bax). Activation of Caspase-2, -3, and -9 on the mitochondrial membrane is linked to the activation of mitochondrial apoptotic pathways[22-24].\nTherefore, the inhibitory effect of the KDM5C inhibitor CPI-455 on Eca-109 cell proliferation may be related to the downregulation of KDM5C expression, activation of Caspase-3 and Caspase-9, changes in mitochondrial membrane permeability, promotion of ROS production, depolarization of the mitochondrial membrane, release of the proapoptotic proteins Bax and p53 from the mitochondria into the cytoplasm, and induction of tumor cell apoptosis (Figures 5 and 6). These results indicate that CPI-455 induced apoptosis through an ROS-dependent mitochondrial signaling pathway[25].\nOur study had some limitations. The mechanisms involved in inducing apoptosis of tumor cells are extensive, and the mitochondrial apoptosis pathway, although it plays an important role in maintaining cell stability, is only one of them. The mechanism has not been fully elucidated and needs further study. The results of this study provide a theoretical basis for the application of CPI-455 in the treatment of ESCC.", "CPI-455 inhibited Eca-109 cell proliferation via the mitochondrial apoptosis pathway by regulating the expression of related genes." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Lysine-specific demethylase 5C", "CPI-455", "Esophageal squamous cell carcinoma", "Caspase", "P53" ]
INTRODUCTION: Esophageal cancer is a common digestive tract cancer, of which esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma represent the main histological types[1,2]. The incidence of esophageal cancer is relatively high, ranking seventh worldwide. ESCC accounts for more than 90% of esophageal cancers. As it is difficult to diagnose ESCC early due to a lack of specific clinical symptoms and insufficient preventive measures, it is often diagnosed at an advanced stage and has a poor prognosis. In addition, the 5-yr postoperative survival rate is less than 50%[3-5]. The lysine-specific demethylase 5C (KDM5C) inhibitor CPI-455 regulates the apoptosis and proliferation of tumor cells by regulating the methylation state of lysine in KDM5C[6]. CPI-455 activity has been reported in cervical[7], gastric[8], and prostate cancer[9], and in other diseases[10,11], but not in the treatment of esophageal cancer. In this study, human ESCC Eca-109 cells were used as a research model to preliminarily study the effect and mechanism of CPI-455 against ESCC and to explore novel approaches for the treatment and prevention of ESCC. MATERIALS AND METHODS: Cell lines: Eca-109 human esophageal cancer cells and HET-1A normal human esophageal epithelial cells were provided by the Cell Resource Center of the Shanghai Life Sciences Institute, Chinese Academy of Sciences (Shanghai, China). Other materials included CPI-455 (MSDS, 1628208-23-0; Gibco, Ltd., United States), DMEM and RPMI 1640 cell culture media, and fetal bovine serum (Gibco, Ltd.). Anti-human P53 (cat no. sc-6243), Bax, KDM5C (cat no. sc-81623), Caspase-9 (cat no. 9746), Caspase-3 (cat no. 9502), and GAPDH (glyceraldehyde-3-phosphate dehydrogenase, cat no. sc-47778) polyclonal antibodies (1:200; Abcam, United Kingdom) were used for western blotting. Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) apoptosis detection kits (cat no. F7250) were from Sigma-Aldrich (United States). Polyvinylidene difluoride (PVDF) membranes were from Biyuntian Biotech (China), and bicinchoninic acid (BCA) protein quantitative detection kits were from Thermo Fisher (United States). An Epics Ultra flow cytometer (Beckman Coulter, United States), a JEM-100sx transmission electron microscope (Jeol Ltd., Japan), and a TCS SP2 laser confocal microscope (Leica, Germany) were used.
Cell culture: Eca-109 cells were cultured in DMEM and HET-1A cells in RPMI-1640 medium, both supplemented with 10% pre-inactivated FBS, 100 U/mL penicillin, and 100 U/mL streptomycin. Cells were adherently cultured in a 37 °C constant-temperature incubator with a volume fraction of 5% CO2 and 95% humidity; 0.25% trypsin-EDTA was used for cell digestion and passage. Cell growth in the culture flask was observed every 1-2 d, and the cell culture medium was replaced for passage culture. Methyl tetrazolium assay: For assay of growth and the logarithmic phase, cells were collected by trypsin digestion and centrifugation, and 90 µL of a single-cell suspension (3 × 10³/mL) were inoculated into 96-well culture plates and cultured overnight at 37 °C in an atmosphere containing 5% CO2 to ensure adherence. When the cells grew to the appropriate density, the supernatant was discarded and 100 µL of the KDM5C inhibitor CPI-455 was added to each well, giving a final concentration of 15 μmol/L. The plates were cultured in an atmosphere containing 5% CO2 at 37 °C for 0, 24, 48, 72, and 96 h (5 wells/time point). Following this, the cells were collected and centrifuged at 150 × g for 10 min. The supernatant was discarded and the precipitate was washed once with phosphate-buffered saline (PBS). A total of 200 µL of serum-free RPMI-1640 medium was added, followed by 10 µL of methyl tetrazolium (MTT; 5 mg/mL). Following culture for a further 4 h, the supernatant was removed and 150 µL of DMSO was added to each well to dissolve the purple crystals. The optical density (OD) was read at 490 nm after the crystals had completely dissolved. The survival rate was calculated as S% = (OD of the experimental group/OD of the control group) × 100.
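As a worked illustration of the survival-rate formula above, the following minimal Python sketch computes S% from replicate 490 nm readings; it is not part of the original study, and the OD values are hypothetical.

import statistics

# Hypothetical OD490 readings for replicate wells (not the study's data)
od_control = [0.82, 0.79, 0.85]   # untreated control wells
od_treated = [0.41, 0.44, 0.39]   # CPI-455-treated wells

# S% = (OD of the experimental group / OD of the control group) x 100
survival_pct = statistics.mean(od_treated) / statistics.mean(od_control) * 100
print(f"Survival rate S% = {survival_pct:.1f}%")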
Laser confocal scanning microscope assay of the ectropion of membrane phosphatidylserine: The fluorescence of cell smears and suspension cultures after polylysine treatment was observed by confocal microscopy. Red (488/590 + 42 nm) and green (488/530 + 30 nm) dual-channel detection and differential interference contrast imaging were performed. A total of 100 fields of vision were screened, and 30 typical fields were selected for inclusion in the preliminary statistical analysis. Annexin V-FITC (fluorescein isothiocyanate)/PI double staining was performed, and the fluorescence of Eca-109 cell membranes and necrotic nuclei was observed and analyzed as described above. DAPI (4′,6-diamidino-2-phenylindole dihydrochloride) was used to stain the nuclei in cell smears and glycerol patches, which were analyzed by fluorescence microscopy as described above. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay of nucleation and DNA fragmentation of Eca-109 cells: Eca-109 cells (5 × 10⁶/mL) were seeded in 24-well plates and cultured for 48 h. Cell smears were prepared from 50 µL of cell suspension on polylysine-treated slides, fixed with 4% paraformaldehyde at room temperature for 30-60 min, and washed twice with PBS. After treatment with terminal deoxynucleotidyl transferase and termination of the reaction, the preparations were washed four times with PBS and incubated with FITC-labeled antibody for 30 min. Nuclei were stained with PI (1.0 µg/mL), and the preparations were sealed with glycerin-mounted coverslips and marked. The cells were observed with a confocal laser scanning microscope using FITC and PI dual-channel fluorescence at an excitation wavelength of 490 nm and an emission wavelength of 520 nm. Flow cytometry assay of reactive oxygen species (ROS): Eca-109 cells were cultured in six-well plates (3 × 10³ cells/well), treated with 15 μmol/L CPI-455 for 48 h, and collected. The cells were washed three times in PBS and centrifuged for 5 min, and the supernatant was discarded. The cells were resuspended in serum-free medium, dichloro-dihydro-fluorescein diacetate (DCFH-DA) was added to a final concentration of 10 mmol/L, and the plates were incubated at 37 °C for 40 min. The cells were centrifuged for 10 min, and the pellet was collected, washed, and resuspended in precooled PBS. The cells were centrifuged to obtain a pellet and were stained for fluorescence detection. Each assay was performed in triplicate.
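As an illustration of how the ROS readout could be summarized, the following minimal Python sketch computes the fold change in mean DCF fluorescence relative to the 0 h baseline; this is an assumed analysis, not the authors' code, and the intensity values are hypothetical.

import statistics

# Hypothetical mean FITC-channel (DCF) intensities exported from the flow cytometer, n = 3
mean_dcf = {
    "0 h":  [1020, 980, 1005],
    "24 h": [1850, 1920, 1790],
    "48 h": [2650, 2710, 2580],
    "72 h": [3400, 3520, 3310],
}

baseline = statistics.mean(mean_dcf["0 h"])
for timepoint, values in mean_dcf.items():
    fold = statistics.mean(values) / baseline
    print(f"{timepoint}: ROS fold change vs 0 h = {fold:.2f}")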
Flow cytometry assay of mitochondrial membrane potential: Cells were harvested (2 × 10⁶/mL) at scheduled times after treatment with 15 μmol/L CPI-455 for 48 h. The Mitocapture dye and antibody incubation solutions were freshly prepared. The cells were washed and centrifuged, and the mitochondria were stained. The cells were immediately transferred to collecting tubes for flow cytometry. Western blotting assay of P53, Bax, KDM5C, cleaved Caspase-9, and cleaved Caspase-3: Eca-109 cells were cultured for 48 h, collected, and centrifuged at 150 × g for 5 min (r = 6 cm). Total protein was extracted using a protein extraction kit. The supernatant was collected, and the protein concentration was determined using a BCA protein kit. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes. After blocking in 5% skim milk at room temperature for 2 h, mouse anti-human P53, Bax, KDM5C, Caspase-9, Caspase-3, and GAPDH monoclonal antibodies (1:1000; Santa Cruz Biotechnology, Inc., United States) were added, and the membranes were incubated overnight at 4 °C and then with a horseradish peroxidase (HRP)-labeled anti-mouse secondary antibody (1:3000; Origene Technologies Inc., China). The films were developed, fixed, scanned, and analyzed using ImageJ software to calculate the gray values. Expression of each target protein was reported relative to GAPDH expression.
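To illustrate the densitometry step described above, here is a minimal Python sketch of an assumed workflow (the band intensities, lane labels, and the choice of KDM5C as the example target are hypothetical, not the authors' data): each target band is divided by the GAPDH band from the same lane and then expressed relative to the untreated control.

# Hypothetical ImageJ gray values for one target band and its loading control
gray_values = {
    "0 h":  {"KDM5C": 1800, "GAPDH": 2100},
    "48 h": {"KDM5C": 700,  "GAPDH": 2050},
}

def relative_expression(lane: dict) -> float:
    """Target band intensity relative to GAPDH in the same lane."""
    return lane["KDM5C"] / lane["GAPDH"]

control_ratio = relative_expression(gray_values["0 h"])
for lane_name, bands in gray_values.items():
    ratio = relative_expression(bands)
    print(f"{lane_name}: KDM5C/GAPDH = {ratio:.2f}, fold vs 0 h = {ratio / control_ratio:.2f}")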
Statistical analysis: GraphPad Prism 6.0 software was used to statistically analyze the data. The data are reported as means ± SD. All assays were repeated independently three times. Between-group differences were compared with independent-samples t-tests. One-way analysis of variance was used to compare data with normal distributions, and least significant difference (LSD) t-tests were used for pairwise comparisons among multiple groups. The significance level was α = 0.05.
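The following minimal Python sketch illustrates the kinds of tests named in the statistical analysis, applied to hypothetical triplicate viability values rather than the study's data; scipy provides the independent-samples t-test and one-way ANOVA, and the LSD post hoc step is approximated here by unadjusted pairwise t-tests.

from itertools import combinations
from scipy import stats

# Hypothetical viability values (%) for three groups, n = 3 each
groups = {
    "control":      [100.0, 98.5, 101.2],
    "CPI-455 24 h": [82.1, 79.4, 80.8],
    "CPI-455 48 h": [51.3, 49.8, 53.0],
}

# Two-group comparison: independent-samples t-test
t, p = stats.ttest_ind(groups["control"], groups["CPI-455 48 h"])
print(f"t-test control vs 48 h: t = {t:.2f}, P = {p:.4f}")

# Multi-group comparison: one-way ANOVA, then unadjusted pairwise t-tests (LSD-style)
f, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, P = {p_anova:.4f}")
if p_anova < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        _, p_pair = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: P = {p_pair:.4f}")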
RESULTS: CPI-455 inhibits Eca-109 cell proliferation: The MTT results demonstrated that CPI-455 significantly inhibited the proliferation of Eca-109 cells in a time- and concentration-dependent manner. The LD50 of CPI-455 at 48 h was 15 μmol/L. Therefore, treatment with 15 μmol/L CPI-455 for 48 h was selected for the follow-up procedures (Figure 1). Eca-109 cell viability following treatment with CPI-455 at different times and concentrations. Data are expressed as means ± SE (n = 3). aP < 0.05 vs control. CPI-455 induces Eca-109 cell apoptosis: The laser confocal scanning microscopy results showed that prior to CPI-455 treatment, the Eca-109 cells had a complete mitochondrial structure. Mitocapture revealed a bright red color and "pipe-like network" that surrounded the nuclear lobule and was distributed throughout the entire cell. Following treatment with CPI-455 (15 μmol/L) for 48 h, the cells showed dye present in the cytoplasm, with both red mitochondrial staining and green cytoplasmic staining.
Some cells contained visible "point and flake" red-stained mitochondria scattered in the cytoplasm, no longer present in clusters surrounding the nucleus and gradually merging to form a "structureless aggregation". Substantial changes to the mitochondrial structure were observed, which appeared to indicate mitochondrial atrophy. Following CPI-455 treatment, the Eca-109 cell membranes were weakly stained and the internal cytoplasmic structures were indistinct and disorganized, with severe cavitation (Figure 2A and B). Changes in Eca-109 cells after 48 h of culture with and without CPI-455 treatment. Left panels show the morphology of the mitochondria of Eca-109 cells. A: Arrows indicate mitochondria with a complete and clear structure. B: Arrows indicate mitochondria with a fuzzy structure and weak staining of cells. Right panels show TUNEL assay results (bar = 10 μm) and laser confocal scanning microscopy (× 600) of nuclear staining and DNA fragmentation of Eca-109 cells C: without, and D: with CPI-455 treatment. Data are reported as means ± SE (n = 3). TUNEL and laser confocal scanning microscopy showed green fluorescence generated by the binding of FITC to the 3'-OH terminal fragments produced by DNA fragmentation in the nuclei of apoptotic cells. Red fluorescence was emitted by PI-stained nuclei, which revealed the location of the cells. Most Eca-109 cells in the CPI-455 group were double-positive for FITC and PI fluorescence, whereas most Eca-109 cells in the control group were negative for FITC and positive for PI, indicating that the TUNEL assay of the CPI-455-stimulated group was positive. In terms of cell morphology, most Eca-109 cells in the control group had a lower nuclear staining density, an unclear nuclear boundary, and larger nuclei, whereas most Eca-109 cells in the CPI-455-treated group had dense chromatin and small nuclei (Figure 2C and D). CPI-455 upregulates ROS content and P53, Bax, Caspase-9, and Caspase-3 expression and downregulates KDM5C expression and mitochondrial membrane potential in Eca-109 cells: Flow cytometry was used to detect the effect of CPI-455 on the level of ROS in Eca-109 cells. With extension of the induction time, the level of intracellular ROS in the Eca-109 cells increased significantly, with statistically significant differences observed at 24, 48, and 72 h (P < 0.01, Figure 3). Compared with the control group, the mitochondrial membrane potential was depolarized and decreased (P < 0.01, Figure 4). The western blot results showed increased expression of p53, Bax, Caspase-9, and Caspase-3 at 24, 48, and 72 h following CPI-455 treatment (Figure 5). However, KDM5C protein expression in the treated cells was significantly decreased compared with the controls (P < 0.01, Figure 6). Eca-109 cell apoptosis induced by CPI-455 is dependent on the generation of reactive oxygen species. A: Eca-109 untreated (0 h); B: Eca-109 induced by CPI-455 for 24 h; C: Eca-109 induced by CPI-455 for 48 h; D: Eca-109 induced by CPI-455 for 72 h. Data are reported as means ± SE (n = 3). ROS: Reactive oxygen species. Mitochondrial membrane potential assayed by flow cytometry. A: Eca-109 untreated (0 h); B: Eca-109 treated with CPI-455 for 48 h; C: Gray-scale results of the mitochondrial membrane potential (blue: first assay; red: second assay; green: third assay). Data are reported as means ± SE (n = 3). aP < 0.05 vs control. Western blot assays of p53, Bax, lysine-specific demethylase 5C, cleaved Caspase-3, and cleaved Caspase-9 protein expression in Eca-109 cells. Lane 1: Eca-109 untreated (0 h); Lane 2: Eca-109 induced by CPI-455 for 24 h; Lane 3: Eca-109 induced by CPI-455 for 48 h; Lane 4: Eca-109 induced by CPI-455 for 72 h. KDM5C: Lysine-specific demethylase 5C; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase. Densitometry analysis of the immunoblotting data of P53, Bax, lysine-specific demethylase 5C, cleaved Caspase-3, and cleaved Caspase-9 proteins in Eca-109 cells. Data are reported as means ± SE (n = 3). aP < 0.05 vs control; bP < 0.01 vs control. DISCUSSION: Esophageal cancer is a digestive tract malignant tumor that is difficult to diagnose early. The most common treatments are surgery, radiotherapy, chemotherapy, and biologically targeted therapy[12].
The prognosis remains poor. The occurrence and development of esophageal cancer are complex and involve multiple gene changes, commonly resulting from histone methylation and epigenetic regulation of tumor-related genes. CPI-455 is a KDM5C inhibitor that modulates the apoptosis and proliferation of tumor cells by regulating the lysine methylation status of KDM5C. Recent reports have confirmed that the anti-tumor effect of CPI-455 in ovarian, breast, stomach, lung, and liver cancer is achieved by inducing the apoptosis and autophagy of tumor cells via modulation of KDM5C gene transcription and expression[13,14]. In this study, the anti-Eca-109 effect of CPI-455 was studied and the associated mechanism was explored. The study results showed that CPI-455 inhibited the proliferation of esophageal cancer cells in a time- and concentration-dependent manner. However, at its effective concentration, CPI-455 was less toxic to normal esophageal cells than to Eca-109 cells. The inhibitory effect of CPI-455 on Eca-109 cell proliferation was confirmed by the assay of cell-membrane permeability by laser confocal scanning microscopy. These findings indicate that CPI-455 inhibits Eca-109 cell proliferation and growth and selectively kills ESCC cells (Figures 1 and 2). Flow cytometry was used to determine the ROS content of Eca-109 cells; ROS are considered to be by-products of oxygen consumption and cell metabolism. Previous studies have found that various chemotherapy drugs can induce an increase in the ROS content of tumor cells and that, above a certain level, ROS can induce the apoptosis of tumor cells[15-18]. A decrease in the mitochondrial membrane potential is the initial manifestation of the hypoxia apoptosis cascade[19]. In this study, the level of intracellular ROS significantly increased in treated Eca-109 cells (Figure 3), and depolarization of the mitochondrial membrane was increased compared with controls (Figure 4). CPI-455 significantly induced the apoptosis of Eca-109 cells. Previous studies have confirmed that the mechanisms by which tumor cell apoptosis is induced vary widely depending on the pathways involved. These include mitochondrial pathways, which play an important role in the maintenance of cell stability, and those involving activation of multiple upstream and downstream genes (e.g., p53 and Bax). Activation of Caspase-2, -3, and -9 on the mitochondrial membrane is linked to the activation of mitochondrial apoptotic pathways[22-24]. Therefore, the inhibitory effect of the KDM5C inhibitor CPI-455 on Eca-109 cell proliferation may be related to the downregulation of KDM5C expression, activation of Caspase-3 and Caspase-9, changes in mitochondrial membrane permeability, promotion of ROS production, depolarization of the mitochondrial membrane, release of the proapoptotic proteins Bax and p53 from the mitochondria into the cytoplasm, and induction of tumor cell apoptosis (Figures 5 and 6). These results indicate that CPI-455 induced apoptosis through an ROS-dependent mitochondrial signaling pathway[25]. Our study had some limitations. The mechanisms involved in inducing apoptosis of tumor cells are extensive, and the mitochondrial apoptosis pathway, although it plays an important role in maintaining cell stability, is only one of them. The mechanism has not been fully elucidated and needs further study. The results of this study provide a theoretical basis for the application of CPI-455 in the treatment of ESCC.
CONCLUSION: CPI-455 inhibited Eca-109 cell proliferation via the mitochondrial apoptosis pathway by regulating the expression of related genes.
Background: Esophageal cancer is a malignant tumor of the digestive tract that is difficult to diagnose early. CPI-455 has been reported to inhibit various cancers, but its role in esophageal squamous cell carcinoma (ESCC) is unknown. Methods: A methyl tetrazolium assay was used to detect the inhibitory effect of CPI-455 on the proliferation of Eca-109 cells. Apoptosis, reactive oxygen species (ROS), and mitochondrial membrane potential were assessed by flow cytometry. Laser confocal scanning and transmission electron microscopy were used to observe changes in Eca-109 cell morphology. The protein expression levels of P53, Bax, lysine-specific demethylase 5C (KDM5C), cleaved Caspase-9, and cleaved Caspase-3 were assayed by western blotting. Results: Compared with the control group, CPI-455 significantly inhibited Eca-109 cell proliferation in a concentration- and time-dependent manner. CPI-455 caused extensive alteration of the mitochondria, which appeared to have become atrophied. The cell membrane was weakly stained and the cytoplasmic structures were indistinct and disorganized, with serious cavitation when viewed by transmission electron microscopy. The flow cytometry and western blot results showed that, compared with the control group, the mitochondrial membrane potential was decreased and depolarized in Eca-109 cells treated with CPI-455. CPI-455 significantly upregulated the ROS content and P53, Bax, Caspase-9, and Caspase-3 protein expression in Eca-109 cells, whereas KDM5C expression was downregulated. Conclusions: CPI-455 inhibited Eca-109 cell proliferation via mitochondrial apoptosis by regulating the expression of related genes.
INTRODUCTION: Esophageal cancer is a common digestive tract cancer, of which esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma represent the main histological types[1,2]. The incidence of esophageal cancer is relatively high, ranking seventh worldwide. ESCC accounts for more than 90% of esophageal cancers. As it is difficult to diagnose ESCC early due to a lack of specific clinical symptoms and insufficient preventive measures, it is often diagnosed at an advanced stage and has a poor prognosis. In addition, the 5-yr postoperative survival rate is less than 50%[3-5]. The lysine-specific demethylase 5C (KDM5C) inhibitor CPI-455 regulates the apoptosis and proliferation of tumor cells by regulating the methylation state of lysine in KDM5C[6]. CPI-455 activity has been reported in cervical[7], gastric[8], and prostate cancer[9], and in other diseases[10,11], but not in the treatment of esophageal cancer. In this study, human ESCC Eca-109 cells were used as a research model to preliminarily study the effect and mechanism of CPI-455 against ESCC and to explore novel approaches for the treatment and prevention of ESCC. CONCLUSION: We thank all the members of the Department of Clinical Laboratory, Huangshi Central Hospital, Affiliated Hospital of Hubei Polytechnic University, Edong Healthcare Group, and the Department of Laboratory Medicine, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology.
Background: Esophageal cancer is a malignant tumor of the digestive tract that is difficult to diagnose early. CPI-455 has been reported to inhibit various cancers, but its role in esophageal squamous cell carcinoma (ESCC) is unknown. Methods: A methyl tetrazolium assay was used to detect the inhibitory effect of CPI-455 on the proliferation of Eca-109 cells. Apoptosis, reactive oxygen species (ROS), and mitochondrial membrane potential were assessed by flow cytometry. Laser confocal scanning and transmission electron microscopy were used to observe changes in Eca-109 cell morphology. The protein expression levels of P53, Bax, lysine-specific demethylase 5C (KDM5C), cleaved Caspase-9, and cleaved Caspase-3 were assayed by western blotting. Results: Compared with the control group, CPI-455 significantly inhibited Eca-109 cell proliferation in a concentration- and time-dependent manner. CPI-455 caused extensive alteration of the mitochondria, which appeared to have become atrophied. The cell membrane was weakly stained and the cytoplasmic structures were indistinct and disorganized, with serious cavitation when viewed by transmission electron microscopy. The flow cytometry and western blot results showed that, compared with the control group, the mitochondrial membrane potential was decreased and depolarized in Eca-109 cells treated with CPI-455. CPI-455 significantly upregulated the ROS content and P53, Bax, Caspase-9, and Caspase-3 protein expression in Eca-109 cells, whereas KDM5C expression was downregulated. Conclusions: CPI-455 inhibited Eca-109 cell proliferation via mitochondrial apoptosis by regulating the expression of related genes.
8,111
281
[ 203, 240, 102, 265, 135, 143, 143, 59, 184, 80, 1978, 93, 436, 440, 637, 18 ]
17
[ "cells", "cpi 455", "455", "cpi", "eca", "109", "eca 109", "cell", "eca 109 cells", "109 cells" ]
[ "tract cancer esophageal", "esophageal cancer common", "discussion esophageal cancer", "treatment esophageal cancer", "proliferation esophageal cancer" ]
null
[CONTENT] Lysine-specific demethylase 5C | CPI-455 | Esophageal squamous cell carcinoma | Caspase | P53 [SUMMARY]
[CONTENT] Lysine-specific demethylase 5C | CPI-455 | Esophageal squamous cell carcinoma | Caspase | P53 [SUMMARY]
null
[CONTENT] Lysine-specific demethylase 5C | CPI-455 | Esophageal squamous cell carcinoma | Caspase | P53 [SUMMARY]
[CONTENT] Lysine-specific demethylase 5C | CPI-455 | Esophageal squamous cell carcinoma | Caspase | P53 [SUMMARY]
[CONTENT] Lysine-specific demethylase 5C | CPI-455 | Esophageal squamous cell carcinoma | Caspase | P53 [SUMMARY]
[CONTENT] Apoptosis | Cell Line, Tumor | Cell Proliferation | Cyclopropanes | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Humans | Indoles | Lysine | Mitochondria [SUMMARY]
[CONTENT] Apoptosis | Cell Line, Tumor | Cell Proliferation | Cyclopropanes | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Humans | Indoles | Lysine | Mitochondria [SUMMARY]
null
[CONTENT] Apoptosis | Cell Line, Tumor | Cell Proliferation | Cyclopropanes | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Humans | Indoles | Lysine | Mitochondria [SUMMARY]
[CONTENT] Apoptosis | Cell Line, Tumor | Cell Proliferation | Cyclopropanes | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Humans | Indoles | Lysine | Mitochondria [SUMMARY]
[CONTENT] Apoptosis | Cell Line, Tumor | Cell Proliferation | Cyclopropanes | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Humans | Indoles | Lysine | Mitochondria [SUMMARY]
[CONTENT] tract cancer esophageal | esophageal cancer common | discussion esophageal cancer | treatment esophageal cancer | proliferation esophageal cancer [SUMMARY]
[CONTENT] tract cancer esophageal | esophageal cancer common | discussion esophageal cancer | treatment esophageal cancer | proliferation esophageal cancer [SUMMARY]
null
[CONTENT] tract cancer esophageal | esophageal cancer common | discussion esophageal cancer | treatment esophageal cancer | proliferation esophageal cancer [SUMMARY]
[CONTENT] tract cancer esophageal | esophageal cancer common | discussion esophageal cancer | treatment esophageal cancer | proliferation esophageal cancer [SUMMARY]
[CONTENT] tract cancer esophageal | esophageal cancer common | discussion esophageal cancer | treatment esophageal cancer | proliferation esophageal cancer [SUMMARY]
[CONTENT] cells | cpi 455 | 455 | cpi | eca | 109 | eca 109 | cell | eca 109 cells | 109 cells [SUMMARY]
[CONTENT] cells | cpi 455 | 455 | cpi | eca | 109 | eca 109 | cell | eca 109 cells | 109 cells [SUMMARY]
null
[CONTENT] cells | cpi 455 | 455 | cpi | eca | 109 | eca 109 | cell | eca 109 cells | 109 cells [SUMMARY]
[CONTENT] cells | cpi 455 | 455 | cpi | eca | 109 | eca 109 | cell | eca 109 cells | 109 cells [SUMMARY]
[CONTENT] cells | cpi 455 | 455 | cpi | eca | 109 | eca 109 | cell | eca 109 cells | 109 cells [SUMMARY]
[CONTENT] escc | esophageal | cancer | esophageal cancer | study | specific | lysine | cpi 455 | cpi | 455 [SUMMARY]
[CONTENT] cells | ml | protein | cat | cell | cultured | united | min | collected | culture [SUMMARY]
null
[CONTENT] proliferation mitochondrial apoptosis | expression related | cpi 455 inhibited eca | pathway regulating expression related | pathway regulating expression | pathway regulating | mitochondrial apoptosis pathway regulating | apoptosis pathway regulating | apoptosis pathway regulating expression | 109 cell proliferation mitochondrial [SUMMARY]
[CONTENT] cells | 455 | cpi | cpi 455 | 109 | eca 109 | eca | cell | ml | eca 109 cells [SUMMARY]
[CONTENT] cells | 455 | cpi | cpi 455 | 109 | eca 109 | eca | cell | ml | eca 109 cells [SUMMARY]
[CONTENT] ||| ESCC [SUMMARY]
[CONTENT] ||| ROS ||| ||| P53, Bax | 5C [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ESCC ||| ||| ROS ||| ||| P53, Bax | 5C ||| ||| Gemcitabine ||| ||| ||| ||| ROS | P53 | Bax | KDM5C ||| [SUMMARY]
[CONTENT] ||| ESCC ||| ||| ROS ||| ||| P53, Bax | 5C ||| ||| Gemcitabine ||| ||| ||| ||| ROS | P53 | Bax | KDM5C ||| [SUMMARY]
A novel technique using endoscopic band ligation for removal of long-stalked (>10 mm) pedunculated colon polyps: A prospective pilot study.
33642356
Endoscopic removal of large, thick-stalked pedunculated colonic polyps often leads to massive hemorrhage. Several techniques have been proposed to minimize this complication, but they have not been widely adopted because of practical drawbacks. To prevent postpolypectomy bleeding, we devised a novel technique for resecting long-stalked pedunculated colonic polyps using endoscopic band ligation (EBL) applied to the stalk from a lateral approach.
BACKGROUND
In this prospective single-center study, 17 pedunculated polyps in 15 patients were removed between April 2012 and January 2016. We targeted pedunculated polyps with a long stalk length (>10 mm) and a large head (>10 mm) located in the distal colon. After identifying lesions with a colonoscope, we reapproached the middle part of the stalk of the targeted polyp with an EBL-equipped gastroscope to ligate it. Snare polypectomy was performed just above the ligation site of the stalk.
METHODS
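The targeting criteria in the methods abstract above (head >10 mm, stalk length >10 mm, distal colon location), together with the benign-appearance requirement described in the full methods, can be restated as a simple predicate. The sketch below is illustrative only: the field names, the set of distal segments, and the benign-appearance flag are assumptions made for demonstration, not part of the study protocol.

```python
# Illustrative restatement of the stated inclusion criteria; field names and
# the DISTAL_SEGMENTS set are assumptions, not taken from the study protocol.
DISTAL_SEGMENTS = {"sigmoid colon", "sigmoid-descending junction", "descending colon", "rectum"}

def eligible(head_mm: float, stalk_length_mm: float, location: str, benign_appearance: bool) -> bool:
    """Head > 10 mm, stalk length > 10 mm, distal colon location, benign endoscopic features."""
    return (
        head_mm > 10
        and stalk_length_mm > 10
        and location.lower() in DISTAL_SEGMENTS
        and benign_appearance
    )

print(eligible(15, 12, "Sigmoid colon", True))  # True: meets all criteria
print(eligible(15, 8, "Sigmoid colon", True))   # False: stalk too short
```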
EBL-assisted polypectomy successfully removed all of the lesions, as confirmed pathologically. There was little technical difficulty associated with the endoscopic procedures, regardless of polyp size and stalk thickness, except for one case in which a very large polyp impeded visualization of the ligation site. We observed a positive correlation between procedure time and the diameter of the head (Spearman ρ = 0.52, P = 0.034). After dissection of the polyp, the EBL bands remained fastened to the dissected stalks in all cases. No polypectomy-associated complications occurred during 1 month of follow-up.
RESULTS
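The correlation reported in the results abstract above (Spearman ρ = 0.52, P = 0.034) is a rank correlation between procedure time and polyp head diameter. The snippet below is a minimal sketch of how such a value is computed with SciPy; the paired measurements are hypothetical placeholders rather than the study's data, so the printed output will not reproduce the published figures exactly.

```python
# Minimal sketch: Spearman rank correlation between head diameter and
# procedure time. The paired values below are hypothetical placeholders.
from scipy.stats import spearmanr

head_diameter_mm = [10, 12, 12, 13, 15, 15, 16, 18, 20, 25, 40]             # hypothetical
procedure_time_s = [151, 230, 200, 260, 310, 280, 350, 400, 480, 620, 890]  # hypothetical

rho, p_value = spearmanr(head_diameter_mm, procedure_time_s)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```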
EBL-assisted polypectomy is an easy, safe, and effective technique to remove long-stalked pedunculated colonic polyps without postpolypectomy bleeding.
CONCLUSION
[ "Colon", "Colonic Polyps", "Colonoscopy", "Humans", "Pilot Projects", "Prospective Studies" ]
8555771
INTRODUCTION
Colonoscopic polypectomy, including standard snare polypectomy, endoscopic mucosal resection (EMR), and endoscopic submucosal dissection (ESD), is a standard therapy for colon polyps that has replaced surgical polypectomy throughout the colon.[1] One of the most common and serious complications associated with endoscopic polypectomy is hemorrhage, with an incidence ranging from 0.3% to 6%.[2] Postpolypectomy bleeding (PPB) occurs frequently with large polyps, thick stalks (>5 mm), the presence of malignancy, and location in the right colon.[3] In particular, large pedunculated colonic polyps with thick stalks often cause immediate and profuse PPB because of incomplete hemostasis of the feeding arteries immediately after removal.[4] A few endoscopic techniques have been reported to reduce the risk of bleeding, including injection therapy and mechanical hemostatic methods using endoloops and clips.[4-10] For example, polypectomy using an endoloop may require sophisticated endoscopic technique, and PPB may still occur because of loop slippage or inadvertent dissection of thin-stalked polyps. Another method, clipping the base of the stalk of the targeted polyp before polypectomy, remains controversial[6,8] because contact between the clips and the polypectomy snare carries a risk of coagulation syndrome. Endoscopic band ligation (EBL) has been shown to be an effective method of controlling bleeding from esophageal varices, gastric lesions with visible vessels, and rectal varices.[11] Because the technique is simple and familiar to endoscopists, the band ligation apparatus has also been widely used for other types of hemostasis, including colonic PPB and colonic diverticular hemorrhage, and for endoscopic submucosal resection of small rectal carcinoids.[12] EBL provides a clear view of the lesion under direct pressure from a transparent cap applied to the targeted lesion. However, in those applications the EBL device approaches the target lesion head-on, which is unsuitable for polyp removal because there is little space to accommodate the polyp. To overcome this problem, we approached the stalk of the targeted polyp laterally and then ligated it by releasing the EBL band, compressing and blocking the feeding arteries of the pedunculated polyp and thereby preventing PPB. In this study, we evaluated the feasibility, safety, and efficacy of this hemostatic technique for removing long-stalked pedunculated colon polyps.
null
null
RESULTS
Fifteen patients were enrolled in the study; 17 pedunculated colonic polyps were removed by EAP. The mean age of the patients was 64 years (range 52–81 years). The mean size of the resected polyps was 15.6 mm (range 10–40 mm): size ≥20 mm (3/17, 17.6%) and 10–19 mm (14/17, 82.4%). The width of the stalks was 14.4 mm (range 10–25 mm): stalk width ≤9 mm (8/17, 47.1%) and ≥10 mm (9/17, 52.9%) [Table 1]. The average duration of the endoscopic procedure was 365 s (range 151–890 s). We observed a positive correlation between procedure time and the diameter of the head (Spearman ρ = 0.52, P = 0.034) [Figure 4]. However, the procedure time was not related to other polyp-related factors, including width and length of the stalk and pathologic type. All lesions were resected without difficulty except one case of a 40 × 40 mm polyp with a 15 mm thick stalk (patient 15 in Table 1). The procedure time in that case was approximately 15 min because the bulky head obscured the intended ligation site. Nevertheless, R0 resection was obtained in all cases. Histological examination of the 17 polyps revealed 10 tubular adenomas with low-grade dysplasia, three tubular adenomas with high-grade dysplasia, one tubulovillous adenoma with low-grade dysplasia, and three tubulovillous adenomas with high-grade dysplasia. There were no procedure-related complications, such as PPB or perforation, during 1 month of follow-up. Characteristics of 17 large pedunculated polyps in 15 patients who underwent endoscopic band ligation-assisted polypectomy. S-colon: Sigmoid colon; SDJ: Sigmoid-descending junction; TA c LGD: Tubular adenoma with low-grade dysplasia; TVA c LGD: Tubulovillous adenoma with low-grade dysplasia; TA c HGD: Tubular adenoma with high-grade dysplasia; TVA c HGD: Tubulovillous adenoma with high-grade dysplasia. A positive Spearman correlation between procedure time and the diameter of the head (ρ = 0.52, P = 0.034).
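The size-category percentages above follow directly from the reported counts out of 17 polyps. As a short worked example using only the counts stated in the text, the snippet below reproduces those percentages; the helper function is illustrative and is not part of the study's analysis.

```python
# Worked arithmetic for the reported proportions (17 polyps total; counts
# taken from the results text). The helper is illustrative only.
def proportion(count, total):
    return count / total * 100

total_polyps = 17
print(f"Head >= 20 mm:  {proportion(3, total_polyps):.1f}%")   # 17.6%
print(f"Head 10-19 mm:  {proportion(14, total_polyps):.1f}%")  # 82.4%
print(f"Stalk <= 9 mm:  {proportion(8, total_polyps):.1f}%")   # 47.1%
print(f"Stalk >= 10 mm: {proportion(9, total_polyps):.1f}%")   # 52.9%
```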
null
null
[ "Patients", "Endoscopic procedure", "Clinical outcomes", "Financial support and sponsorship" ]
[ "Between April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later.\nPatients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure.", "The EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case.\nSchematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. 
(d) Endoscopic snare polypectomy is performed just above the ligation band\nEndoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. (c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site\nHistological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b))", "We evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. Perforation was detected with either endoscopic penetration of the wall or radiological examinations. In addition, spearman's correlation test was used to analyze the correlation between the procedure time and diameter of the head or stalk of the polyp.", "Nil." ]
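The outcome definitions quoted above (immediate PPB as pulsating bleeding or oozing lasting more than 60 s right after the procedure; delayed PPB as clinically significant bleeding up to 1 month afterwards) can be sketched as a simple classification helper. The function below is a hedged illustration only; its argument names and day-based thresholds are assumptions for demonstration and do not come from the study.

```python
# Illustrative classification of postpolypectomy bleeding (PPB) based on the
# definitions in the text; argument names are assumptions for this sketch.
def classify_ppb(days_after_procedure: float,
                 bleeding_duration_s: float = 0.0,
                 clinically_significant_bleeding: bool = False) -> str:
    """Immediate PPB: bleeding/oozing > 60 s right after the procedure.
    Delayed PPB: clinically significant bleeding up to 1 month afterwards."""
    if days_after_procedure == 0 and bleeding_duration_s > 60:
        return "immediate PPB"
    if 0 < days_after_procedure <= 30 and clinically_significant_bleeding:
        return "delayed PPB"
    return "no PPB"

print(classify_ppb(days_after_procedure=0, bleeding_duration_s=90))                # immediate PPB
print(classify_ppb(days_after_procedure=5, clinically_significant_bleeding=True))  # delayed PPB
print(classify_ppb(days_after_procedure=10))                                       # no PPB
```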
[ null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients", "Endoscopic procedure", "Clinical outcomes", "RESULTS", "DISCUSSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Colonoscopic polypectomy including standard snare polypectomy, endoscopic mucosal resection (EMR), and endoscopic submucosal dissection (ESD) is a standard therapy for colon polyps that has replaced surgical polypectomy throughout the colon.[1] One of the most common and serious complications associated with endoscopic polypectomy is hemorrhage, with an incidence ranging from 0.3% to 6%.[2] Postpolypectomy bleeding (PPB) occurs frequently in cases of large polyps, with thick stalks (>5 mm), in the presence of malignancy and location in the right colon.[3] In particular, large pedunculated colonic polyps with thick stalk often cause immediate and profuse PPB by incomplete hemostasis of feeding arteries immediately after removal.[4] A few endoscopic techniques have been reported to reduce the risk of bleeding, including injection therapy and mechanical hemostatic methods using endoloops and the application of clips.[45678910] For example, polypectomy using an endoloop may require sophisticated endoscopic techniques and PPB may occur in relation to the procedure because of loop slippage or inadvertent dissection in thin-stalk polyps. Another method of clipping the base of the stalk of targeted polyps before polypectomy, remains controversial[68] because risk of coagulation syndrome might occur by contacting clips with a polypectomy snare.\nEndoscopic band ligation (EBL) has been shown to be an effective method of controlling bleeding from esophageal varices, gastric lesions with visible vessels, and rectal varices.[11] Since the techniques are simple and familiar to the endoscopists, the band ligation apparatus has been widely used to control other types of hemostasis for colonic PPB, colonic diverticular hemorrhage, and endoscopic submucosal resection in cases of small rectal carcinoids.[12] EBL can provide a clear view of lesions under direct pressure by a transparent cap to the targeted lesions. However, with the above method, the EBL approaches directly and head-on to the target lesions, which is not suitable to remove the polyp because of little space to reserve the polyp. To overcome the problem, the authors intended lateral access to the stalk of targeting polyps and then ligate it by releasing the band of EBL in order to compress and block the feeding arteries of pedunculated polyps, thereby preventing PPB. In this study, we evaluated the feasibility, safety, and efficacy of this hemostatic technique to remove long-stalked pedunculated colon polyps.", "Patients Between April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later.\nPatients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. 
We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure.\nBetween April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later.\nPatients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure.\nEndoscopic procedure The EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. 
The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case.\nSchematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. (d) Endoscopic snare polypectomy is performed just above the ligation band\nEndoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. (c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site\nHistological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b))\nThe EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case.\nSchematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. (d) Endoscopic snare polypectomy is performed just above the ligation band\nEndoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. 
(c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site\nHistological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b))\nClinical outcomes We evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. Perforation was detected with either endoscopic penetration of the wall or radiological examinations. In addition, spearman's correlation test was used to analyze the correlation between the procedure time and diameter of the head or stalk of the polyp.\nWe evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. Perforation was detected with either endoscopic penetration of the wall or radiological examinations. In addition, spearman's correlation test was used to analyze the correlation between the procedure time and diameter of the head or stalk of the polyp.", "Between April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later.\nPatients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. 
The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure.", "The EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case.\nSchematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. (d) Endoscopic snare polypectomy is performed just above the ligation band\nEndoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. (c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site\nHistological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b))", "We evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. Perforation was detected with either endoscopic penetration of the wall or radiological examinations. 
In addition, Spearman's correlation test was used to analyze the correlation between the procedure time and the diameter of the head or stalk of the polyp.", "Fifteen patients were enrolled in the study; 17 pedunculated colonic polyps were removed by EAP. The mean age of the patients was 64 years (range 52–81 years). The mean size of the resected polyps was 15.6 mm (range 10–40 mm): size ≥20 mm (3/17, 17.6%) and 10–19 mm (14/17, 82.4%). The width of the stalks was 14.4 mm (range 10–25 mm): stalk width ≤9 mm (8/17, 47.1%) and ≥10 mm (9/17, 52.9%) [Table 1]. The average duration of the endoscopic procedure was 365 s (range 151–890 s). We observed a positive correlation between procedure time and the diameter of the head (Spearman ρ = 0.52, P = 0.034) [Figure 4]. However, the procedure time was not related to other polyp-related factors, including width and length of the stalk and pathologic type. All lesions were resected without difficulty except one case of a 40 × 40 mm polyp with a 15 mm thick stalk (patient 15 in Table 1). The procedure time in that case was approximately 15 min because the bulky head obscured the intended ligation site. Nevertheless, R0 resection was obtained in all cases. Histological examination of the 17 polyps revealed 10 tubular adenomas with low-grade dysplasia, three tubular adenomas with high-grade dysplasia, one tubulovillous adenoma with low-grade dysplasia, and three tubulovillous adenomas with high-grade dysplasia. There were no procedure-related complications, such as PPB or perforation, during 1 month of follow-up.
Characteristics of 17 large pedunculated polyps in 15 patients who underwent endoscopic band ligation-assisted polypectomy
S-colon: Sigmoid colon; SDJ: Sigmoid-descending junction; TA c LGD: Tubular adenoma with low-grade dysplasia; TVA c LGD: Tubulovillous adenoma with low-grade dysplasia; TA c HGD: Tubular adenoma with high-grade dysplasia; TVA c HGD: Tubulovillous adenoma with high-grade dysplasia
A positive Spearman correlation between procedure time and the diameter of the head (ρ = 0.52, P = 0.034)", "The incidence of early and late PPB, the most common complication, is 10.0%–15.1%, especially for pedunculated polyps removed without any preventive intervention.[4,13,14] The relevant factors for PPB relate to characteristics of the polyp itself (size, morphology, and location) as well as underlying conditions of the patient (age, cardiovascular or chronic renal disease, and anticoagulant use).[15] PPB usually occurs with large polyps or pedunculated polyps with a thick stalk. Although several preventive measures for PPB, including injection therapy, endoloop ligation, and application of hemoclips, have been developed, PPB remains a major complication of polypectomy.[5,7,9,10,16-20] EBL was first introduced for the treatment of bleeding esophageal varices in 1988.[11] The simplicity and high efficiency of this technique have prompted widespread use in the management of nonvariceal bleeding, including PPB, arteriovenous malformations, and colonic diverticular hemorrhage.[12,21,22]
Herein, for the first time, we applied EBL in polypectomy of pedunculated colonic polyps for the prevention of PPB. 
The average total procedure duration for polyp removal was less than 10 min, except for one case. The larger the size of the head, the longer the procedure time was noted in this study, which resulted from a voluminous head making it difficult to visualize the lesion endoscopically. Nevertheless, all the targeted colonic polyps were easily and successfully removed with EAP. There were no complications such as bleeding or perforation in any case. From our experience, this technique has many advantages including safety, ease of use, and high success rates for the treatment of large pedunculated polyps, regardless of stalk thickness, unlike endoloop procedure that has the limitation in the thickness of stalk of polyps.[7]\nAs shown in Figures 1 and 2, the steps of the procedure in EAP are as follows: (1) turning from the front view of the polyp to the side, approaching the stalk of the pedunculated polyp of interest; (2) suction and traction of the stalk into the transparent cap with a tripod grasper until there is the blurring of endoscopic vision; (3) release of the band from the cap; (4) identifying the congested stalk by tight engagement of the rubber band; and (5) cutting the remaining exposed stalk above the ligation using a hot snare. The second step of the procedure, aspiration of the lesion into the cap, is an essential part like the EBL technique,[12] because it is difficult to predict if the volume of captured stalk is enough to compress the feeding vessel. It is necessary that endoscopists feel some resistance on the passage of the stalk at the outer end of the cap during withdrawal of stalk, by continuously pulling the stalk until endoscopic vision is blurred. Insufficient suction would make the rubber band slip off. The second important point is the positioning of the cap: correct positioning of the cap can avoid the unwanted chances of trapping the normal colon wall and head portion of the polyp to misinterpret the resected specimen, microscopically.\nIn our method, we used a tripod grasper to draw the targeted site of the stalk appropriately into the space within the cap, before EBL. We could confirm endoscopically choked stalks in addition to slippage of ligated bands until the end of the procedure. Other capture devices such as biopsy forceps are also available to grasp the stalk. Interestingly, we observed the change of stalk shape from originally linear to inverted, acutely angled, omega by EBL. This morphologic alteration may display more effective blocking of blood flow into the polyp with compression of the dual sites, than other traditional methods blocking one site of blood flow into the polyp. EAP has another strength. From the point of the operating endoscopists, they can select the available hemostatic tools depending on the features of colon polyps including their location, size of the head, and thickness of the stalk as well as characteristics of subjects. Furthermore, this method has another advantage over the endoloop in the removal of thin-stalked polyp less than 8 mm, which is unwillingly dissected by mechanical force.[8] In other words, EAP has the advantage in removing all polyps irrespective of the width of the stalk.\nFor executing this procedure, it is important for the endoscopist to use his own experience to be able to perform the various hemostasis methods. If an endoscopist is generally familiar with the EBL procedure for the treatment of variceal bleeding, it seems likely that the endoscopist may perform this procedure without difficulty. 
However, the learning curve of endoscopists is also necessary to master the technique and achieve satisfactory results.\nThis procedure has several limitations in clinical application. It cannot be applied in cases of pedunculated polyps with short length stalk (<10 mm) because of little space to grasp the stalk and in cases of insufficient visualization of the stalk due to large heads. Another drawback is the endoscope change from colonoscopy to upper endoscopy because of the absence of colonoscope-fitted EBL to shorten total procedure time. As the proximal colon is unreachable by upper endoscopes, we performed EAP only in patients with left-sided polyps. Besides, the development of a new device of a larger cap allowing to adapt to very thick stalks is also required.\nIn conclusion, we have described the use of band ligation for polypectomy of pedunculated colonic polyps. EAP appears to be an easy, safe, and effective technique for the prevention of PPB of pedunculated colonic polyps. These results should be confirmed in large-scale, prospective, controlled studies.\nFinancial support and sponsorship Nil.\nNil.\nConflicts of interest There are no conflicts of interest.\nThere are no conflicts of interest.", "Nil.", "There are no conflicts of interest." ]
[ "intro", "materials|methods", null, null, null, "results", "discussion", null, "COI-statement" ]
[ "Bleeding", "colonic polyp", "colonoscopy", "endoscopic mucosal resection", "ligation" ]
INTRODUCTION: Colonoscopic polypectomy including standard snare polypectomy, endoscopic mucosal resection (EMR), and endoscopic submucosal dissection (ESD) is a standard therapy for colon polyps that has replaced surgical polypectomy throughout the colon.[1] One of the most common and serious complications associated with endoscopic polypectomy is hemorrhage, with an incidence ranging from 0.3% to 6%.[2] Postpolypectomy bleeding (PPB) occurs frequently in cases of large polyps, with thick stalks (>5 mm), in the presence of malignancy and location in the right colon.[3] In particular, large pedunculated colonic polyps with thick stalk often cause immediate and profuse PPB by incomplete hemostasis of feeding arteries immediately after removal.[4] A few endoscopic techniques have been reported to reduce the risk of bleeding, including injection therapy and mechanical hemostatic methods using endoloops and the application of clips.[45678910] For example, polypectomy using an endoloop may require sophisticated endoscopic techniques and PPB may occur in relation to the procedure because of loop slippage or inadvertent dissection in thin-stalk polyps. Another method of clipping the base of the stalk of targeted polyps before polypectomy, remains controversial[68] because risk of coagulation syndrome might occur by contacting clips with a polypectomy snare. Endoscopic band ligation (EBL) has been shown to be an effective method of controlling bleeding from esophageal varices, gastric lesions with visible vessels, and rectal varices.[11] Since the techniques are simple and familiar to the endoscopists, the band ligation apparatus has been widely used to control other types of hemostasis for colonic PPB, colonic diverticular hemorrhage, and endoscopic submucosal resection in cases of small rectal carcinoids.[12] EBL can provide a clear view of lesions under direct pressure by a transparent cap to the targeted lesions. However, with the above method, the EBL approaches directly and head-on to the target lesions, which is not suitable to remove the polyp because of little space to reserve the polyp. To overcome the problem, the authors intended lateral access to the stalk of targeting polyps and then ligate it by releasing the band of EBL in order to compress and block the feeding arteries of pedunculated polyps, thereby preventing PPB. In this study, we evaluated the feasibility, safety, and efficacy of this hemostatic technique to remove long-stalked pedunculated colon polyps. METHODS: Patients Between April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later. Patients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. 
We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure. Between April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later. Patients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure. Endoscopic procedure The EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. 
The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case. Schematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. (d) Endoscopic snare polypectomy is performed just above the ligation band Endoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. (c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site Histological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b)) The EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case. Schematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. (d) Endoscopic snare polypectomy is performed just above the ligation band Endoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. 
(c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site Histological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b)) Clinical outcomes We evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. Perforation was detected with either endoscopic penetration of the wall or radiological examinations. In addition, spearman's correlation test was used to analyze the correlation between the procedure time and diameter of the head or stalk of the polyp. We evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. Perforation was detected with either endoscopic penetration of the wall or radiological examinations. In addition, spearman's correlation test was used to analyze the correlation between the procedure time and diameter of the head or stalk of the polyp. Patients: Between April 2012 and January 2016, 15 patients with 17 pedunculated colonic polyps were enrolled in this study. Eligible criteria were as follows: (1) polyps with head >10 mm and stalk length >10 mm; (2) location at the distal segments of the colon; and (3) benign features under endoscopic inspection (absence of ulceration and induration or friability). In patients taking anticoagulants such as aspirin, coumadin, or nonsteroidal anti-inflammatory drugs (NSAIDs), this medication was discontinued 7 days before the polypectomy and resumed a week later. Patients underwent colonoscopy before performing endoscopic band ligation assisted polypectomy (EAP). Colonoscopic examinations were performed using a standard colonoscope (CF-H260 series; Olympus, Tokyo, Japan). During colonoscopy, we found targeted polyps and identified their sizes and locations. The size of the polyp head and the diameter of the stalk were estimated using the complete opening width of the biopsy forceps. We then changed the colonoscope to an upper endoscope for EBL (Sumitomo Bakelite Co., Akita, Japan) because of the absence of colonoscopy-fitted EBL. After EAP, the patients stayed in the hospital for 1 day. Hemoglobin, hematocrit, and plain abdomen X-rays were checked 1 day after the treatment. Physical examinations were performed to assess for abdominal pain and hematochezia. 
The patients were followed up in the outpatient clinic for 1 month. This study protocol was approved by our Institutional Research Ethics Board and adhered to the Helsinki Declaration. All study subjects provided written informed consent before the procedure. Endoscopic procedure: The EAP procedure is illustrated in Figures 1 and 2. A conventional upper endoscope loaded with a band ligator was inserted to the site of the targeted pedunculated polyp Figure 2a. After approaching the lateral side of the stalk [Figures 1a and 2b] and grasping the mid-portion of the stalk using a tripod grasper (FG-45L-1, Olympus), we drew it into the cap, feeling some resistance [Figures 1b and 2c]. When the endoscopic view was blurred, the rubber band was released from the cap to ligate the stalk [Figures 1c and 2d]. We were able to confirm the proper ligation band by identifying endoscopically and presume the compression of feeding arteries to the polyp by congestion of the polyp head. Thereafter, we resected the remaining free margin of the stalk just above the ligation with an electrosurgical snare (SD-9U-1 or SD-11U-1, Olympus) [Figure 1d]. The polyps were removed by electrocoagulation with Endocut Q current (effect 3, cut duration 2 ms, cut interval 1200 ms) generated by a VIO300D electrosurgical unit (Erbe, Tübingen, Germany) [Figure 2e]. All procedures were performed by three experienced endoscopists who had an experience of more than 1,000 colorectal endoscopic mucosal resections. The resected polyps were collected for histopathological evaluation and reviewed by pathologists specializing in gastrointestinal pathology; a cutting site with coagulation is shown inFigure 3. Hematoxylin and eosin-stained slides were reviewed for each case. Schematic showing the endoscopic band ligation–associated colonic polypectomy technique. (a) Endoscope-equipped with a rubber band to be deployed at the target polyp. (b) Tripod forceps grasping the middle part of the stalk with suction and application of the ligation band. (c) The ligation band is located in the middle of the stalk, which was inverted and ligated to create an omega shape. (d) Endoscopic snare polypectomy is performed just above the ligation band Endoscopic view of the endoscopic band ligation–associated colonic polypectomy. (a) A large pedunculated polyp with a long stalk. (b) Endoscopic view of the lateral side of the pedunculated polyp approach. (c) Tripod forceps are used to grasp the middle of the pedunculated polyp stalk, which is then pulled into the ligation device. (d) Endoscopic view of the ligated polyp stalk. (e) Endoscopic view after removal of the pedunculated polyp, with no hemorrhage visible at the polypectomy site Histological view of a resected pedunculated polyp. The stalk was resected completely with clean margins by coagulation (hematoxylin and eosin stain, ×12 (a), ×40 (b)) Clinical outcomes: We evaluated several parameters, including completeness of resection, procedure time, and complications including immediate PPB, delayed PPB, and perforation. Complete resection was defined as a lesion-free margin with both the lateral and basal tissues microscopically. Immediate PPB was defined as pulsating bleeding or oozing of blood lasting more than 60 s immediately after the procedure. Delayed PPB was defined as gross rectal bleeding, bleeding requiring endoscopic or radiological hemostasis, or transfusions requiring surgery up to 1 month after the procedure. 
RESULTS: Fifteen patients were enrolled in the study, and 17 pedunculated colonic polyps were removed by EAP. The mean age of the patients was 64 years (range 52–81 years). The mean size of the resected polyps was 15.6 mm (range 10–40 mm): size ≥20 mm in 3/17 (17.6%) and 10–19 mm in 14/17 (82.4%). The width of the stalks was 14.4 mm (range 10–25 mm): stalk width ≤9 mm in 8/17 (47.1%) and ≥10 mm in 9/17 (52.9%) [Table 1]. The average duration of the endoscopic procedure was 365 s (range 151–890 s). We observed a positive correlation between procedure time and the diameter of the head (Spearman ρ = 0.52, P = 0.034) [Figure 4]. However, the procedure time was not related to other polyp-related factors, including width and length of the stalk and pathologic type. All lesions were resected without difficulty except for one case of a 40 × 40 mm polyp with a 15 mm thick stalk (patient 15 in Table 1). The procedure time in that case was about 15 minutes because the bulky head itself obstructed the view and made it difficult to identify the exact ligation site. Nevertheless, R0 resection was obtained in all of the cases. Histological examination of the 17 polyps revealed 10 tubular adenomas with low-grade dysplasia, three tubular adenomas with high-grade dysplasia, one tubulovillous adenoma with low-grade dysplasia, and three tubulovillous adenomas with high-grade dysplasia. There were no procedure-related complications, such as PPB or perforation, during 1 month of follow-up. Characteristics of 17 large pedunculated polyps in 15 patients who underwent endoscopic band ligation-assisted polypectomy. S-colon: Sigmoid colon; SDJ: Sigmoid-descending junction; TA c LGD: Tubular adenoma with low-grade dysplasia; TVA c LGD: Tubulovillous adenoma with low-grade dysplasia; TA c HGD: Tubular adenoma with high-grade dysplasia; TVA c HGD: Tubulovillous adenoma with high-grade dysplasia. A positive Spearman correlation between procedure time and the diameter of the head (ρ = 0.52, P = 0.034). DISCUSSION: PPB is the most common complication of polypectomy, with an incidence of early and late PPB of 10.0%–15.1%, especially for pedunculated polyps removed without any preventive intervention.[4,13,14] The risk factors for PPB relate to characteristics of the polyp itself (size, morphology, and location) as well as to underlying patient conditions (age, cardiovascular or chronic renal disease, and anticoagulant use).[15] PPB usually occurs with large polyps or pedunculated polyps with a thick stalk. Although several preventive measures for PPB, including injection therapy, endoloop ligation, and application of hemoclips, have been developed, PPB remains a major complication of polypectomy.[5,7,9,10,16,17,18,19,20] EBL was first introduced for the treatment of bleeding esophageal varices in 1988.[11] The simplicity and high efficiency of this technique have prompted its widespread use in the management of nonvariceal bleeding, including PPB, arteriovenous malformations, and colonic diverticular hemorrhage.[12,21,22] Herein, for the first time, we applied EBL in polypectomy of pedunculated colonic polyps for the prevention of PPB. 
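For readers who want to reproduce the kind of correlation analysis reported above, the following is a minimal sketch in Python using SciPy. The per-polyp values are hypothetical placeholders rather than the study data (the actual measurements are listed in Table 1), and this is not the authors' original analysis code.

```python
# Minimal sketch: Spearman rank correlation between polyp head diameter and
# procedure time. The values below are hypothetical placeholders, not the
# measurements reported in Table 1.
from scipy.stats import spearmanr

head_diameter_mm = [10, 11, 12, 12, 13, 14, 15, 15, 16, 16, 17, 18, 18, 20, 22, 25, 40]
procedure_time_s = [151, 170, 200, 230, 215, 250, 270, 310, 290, 330, 350, 365, 400, 450, 520, 610, 890]

rho, p_value = spearmanr(head_diameter_mm, procedure_time_s)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```

The same call can be repeated with stalk width or stalk length in place of head diameter to check the other polyp-related factors mentioned in the text.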
In this study, the features of the resected polyps were as follows: location in the distal colon, mean polyp size of 15.6 mm, mean stalk width of 10.1 mm, and pathologic type of tubular adenoma in 13 and tubulovillous adenoma in 4. The average total procedure time for polyp removal was less than 10 min in all but one case. The larger the polyp head, the longer the procedure time in this study, because a voluminous head made it difficult to visualize the lesion endoscopically. Nevertheless, all the targeted colonic polyps were easily and successfully removed with EAP, and there were no complications such as bleeding or perforation in any case. In our experience, this technique has many advantages, including safety, ease of use, and a high success rate for the treatment of large pedunculated polyps regardless of stalk thickness, unlike the endoloop procedure, which is limited by the thickness of the polyp stalk.[7] As shown in Figures 1 and 2, the steps of EAP are as follows: (1) turning from the front view of the polyp to the side and approaching the stalk of the pedunculated polyp of interest; (2) suction and traction of the stalk into the transparent cap with a tripod grasper until the endoscopic view blurs; (3) release of the band from the cap; (4) confirmation of a congested stalk tightly engaged by the rubber band; and (5) cutting the remaining exposed stalk above the ligation using a hot snare. The second step, aspiration of the lesion into the cap, is essential, as in the EBL technique,[12] because it is difficult to predict whether the volume of captured stalk is sufficient to compress the feeding vessel. The endoscopist should feel some resistance as the stalk passes the outer end of the cap and should keep pulling the stalk until the endoscopic view is blurred; insufficient suction would allow the rubber band to slip off. The second important point is the positioning of the cap: correct positioning avoids inadvertently trapping the normal colon wall or the head of the polyp, either of which could lead to microscopic misinterpretation of the resected specimen. In our method, we used a tripod grasper to draw the targeted site of the stalk appropriately into the space within the cap before EBL. We could endoscopically confirm strangulation of the stalk, and monitor for slippage of the ligated band, until the end of the procedure. Other capture devices, such as biopsy forceps, are also suitable for grasping the stalk. Interestingly, we observed that EBL changed the stalk shape from its original linear configuration to an inverted, acutely angled omega shape. This morphologic alteration may block blood flow into the polyp more effectively, by compressing two sites, than traditional methods that block flow at a single site. EAP has another strength: the operating endoscopist can select the available hemostatic tools according to the features of the colonic polyp, including its location, head size, and stalk thickness, as well as the characteristics of the patient. Furthermore, this method has an advantage over the endoloop in the removal of thin-stalked polyps (less than 8 mm), which can be inadvertently transected by the mechanical force of the loop.[8] In other words, EAP can be used to remove polyps irrespective of stalk width. 
To perform this procedure, the endoscopist should be able to draw on experience with the various hemostasis methods. An endoscopist who is familiar with EBL for the treatment of variceal bleeding is likely to be able to perform this procedure without difficulty; nevertheless, a learning curve is required to master the technique and achieve satisfactory results. This procedure has several limitations in clinical application. It cannot be applied to pedunculated polyps with a short stalk (<10 mm), because there is too little space to grasp the stalk, or when a large head prevents sufficient visualization of the stalk. Another drawback is the need to change from a colonoscope to an upper endoscope because no colonoscope-fitted EBL device is available; such a device would shorten the total procedure time. As the proximal colon is unreachable with an upper endoscope, we performed EAP only in patients with left-sided polyps. In addition, a new device with a larger cap that can accommodate very thick stalks is needed. In conclusion, we have described the use of band ligation for polypectomy of pedunculated colonic polyps. EAP appears to be an easy, safe, and effective technique for the prevention of PPB after removal of pedunculated colonic polyps. These results should be confirmed in large-scale, prospective, controlled studies. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: Endoscopic removal of large, thick-stalked pedunculated colonic polyps often leads to massive hemorrhage. Several techniques developed to minimize this complication have not been widely adopted because of their limitations. To prevent postpolypectomy bleeding, we devised a novel technique for resecting long-stalked pedunculated colonic polyps using endoscopic band ligation (EBL) with a lateral approach to the stalk. Methods: In this prospective single-center study, 17 pedunculated polyps in 15 patients were removed between April 2012 and January 2016. We targeted pedunculated polyps with a long stalk (>10 mm) and a large head (>10 mm) located in the distal colon. After identifying lesions with a colonoscope, we reapproached the middle part of the stalk of the targeted polyp with an EBL-equipped gastroscope and ligated it. Snare polypectomy was then performed just above the ligation site on the stalk. Results: EBL-assisted polypectomy removed all of the lesions successfully, with complete resection confirmed pathologically. There was little technical difficulty with the endoscopic procedures, regardless of polyp size and stalk thickness, except for one case in which a very large polyp impeded visualization of the ligation site. We observed a positive correlation between procedure time and the diameter of the head (Spearman ρ = 0.52, P = 0.034). After resection of the polyp, the EBL bands remained fastened to the residual stalks in all cases. There were no polypectomy-associated complications during 1 month of follow-up. Conclusions: EBL-assisted polypectomy is an easy, safe, and effective technique for removing long-stalked pedunculated colonic polyps without postpolypectomy bleeding.
null
null
4,773
309
[ 297, 494, 135, 2 ]
9
[ "stalk", "endoscopic", "polyp", "polyps", "band", "procedure", "ligation", "pedunculated", "polypectomy", "ppb" ]
[ "colonic polypectomy", "polyps polypectomy", "polypectomy hemorrhage", "polypectomy endoscopic mucosal", "endoscopic polypectomy hemorrhage" ]
null
null
null
[CONTENT] Bleeding | colonic polyp | colonoscopy | endoscopic mucosal resection | ligation [SUMMARY]
null
[CONTENT] Bleeding | colonic polyp | colonoscopy | endoscopic mucosal resection | ligation [SUMMARY]
null
[CONTENT] Bleeding | colonic polyp | colonoscopy | endoscopic mucosal resection | ligation [SUMMARY]
null
[CONTENT] Colon | Colonic Polyps | Colonoscopy | Humans | Pilot Projects | Prospective Studies [SUMMARY]
null
[CONTENT] Colon | Colonic Polyps | Colonoscopy | Humans | Pilot Projects | Prospective Studies [SUMMARY]
null
[CONTENT] Colon | Colonic Polyps | Colonoscopy | Humans | Pilot Projects | Prospective Studies [SUMMARY]
null
[CONTENT] colonic polypectomy | polyps polypectomy | polypectomy hemorrhage | polypectomy endoscopic mucosal | endoscopic polypectomy hemorrhage [SUMMARY]
null
[CONTENT] colonic polypectomy | polyps polypectomy | polypectomy hemorrhage | polypectomy endoscopic mucosal | endoscopic polypectomy hemorrhage [SUMMARY]
null
[CONTENT] colonic polypectomy | polyps polypectomy | polypectomy hemorrhage | polypectomy endoscopic mucosal | endoscopic polypectomy hemorrhage [SUMMARY]
null
[CONTENT] stalk | endoscopic | polyp | polyps | band | procedure | ligation | pedunculated | polypectomy | ppb [SUMMARY]
null
[CONTENT] stalk | endoscopic | polyp | polyps | band | procedure | ligation | pedunculated | polypectomy | ppb [SUMMARY]
null
[CONTENT] stalk | endoscopic | polyp | polyps | band | procedure | ligation | pedunculated | polypectomy | ppb [SUMMARY]
null
[CONTENT] polyps | polypectomy | lesions | endoscopic | techniques | ppb | ebl | method | colon | endoscopic submucosal [SUMMARY]
null
[CONTENT] grade | grade dysplasia | dysplasia | 17 | mm | adenoma | high grade | 52 | low grade dysplasia | high grade dysplasia [SUMMARY]
null
[CONTENT] nil | stalk | interest | conflicts interest | conflicts | endoscopic | polyp | polyps | procedure | band [SUMMARY]
null
[CONTENT] ||| ||| EBL [SUMMARY]
null
[CONTENT] EBL ||| one ||| 0.52 | 0.034 ||| EBL ||| 1 month [SUMMARY]
null
[CONTENT] ||| ||| EBL ||| 17 | 15 | April 2012 and January 2016 ||| 10 mm | 10 mm ||| EBL ||| ||| EBL ||| one ||| 0.52 | 0.034 ||| EBL ||| 1 month ||| EBL [SUMMARY]
null
Serum lipid profiles are associated with disability and MRI outcomes in multiple sclerosis.
21970791
The breakdown of the blood-brain-barrier vascular endothelium is critical for entry of immune cells into the MS brain. Vascular co-morbidities are associated with increased risk of progression. Dyslipidemia, elevated LDL and reduced HDL may increase progression by activating inflammatory processes at the vascular endothelium.
BACKGROUND
This study included 492 MS patients (age: 47.1 ± 10.8 years; disease duration: 12.8 ± 10.1 years) with baseline and follow-up Expanded Disability Status Score (EDSS) assessments after a mean period of 2.2 ± 1.0 years. The associations of baseline lipid profile variables with disability changes were assessed. Quantitative MRI findings at baseline were available for 210 patients.
METHODS
EDSS worsening was associated with higher baseline LDL (p = 0.006) and total cholesterol (p = 0.001, 0.008) levels, with trends for higher triglyceride (p = 0.025); HDL was not associated. A similar pattern was found for MSSS worsening. Higher HDL levels (p < 0.001) were associated with lower contrast-enhancing lesion volume. Higher total cholesterol was associated with a trend for lower brain parenchymal fraction (p = 0.033).
RESULTS
Serum lipid profile has modest effects on disease progression in MS. Worsening disability is associated with higher levels of LDL, total cholesterol and triglycerides. Higher HDL is associated with lower levels of acute inflammatory activity.
CONCLUSIONS
[ "Adult", "Blood-Brain Barrier", "Cholesterol", "Disability Evaluation", "Disease Progression", "Endothelium, Vascular", "Female", "Humans", "Lipids", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Multiple Sclerosis", "Retrospective Studies", "Triglycerides" ]
3228782
Introduction and Background
Multiple sclerosis (MS) is a complex inflammatory, demyelinating and neurodegenerative disease with a heterogeneous pathology and clinical outcomes [1]. The chronic inflammatory processes that characterize MS pathology interfere with immune mechanisms that regulate and confine the inflammatory cascade to prevent irreversible tissue damage [2]. Cholesterol is an important component of intact myelin. Lipids, especially lipoproteins, are involved in the regulation of neural functions in the central nervous system through local mechanisms that are linked to systemic lipid metabolism [3,4]. High-density lipoproteins (HDL) and low-density lipoproteins (LDL) play a key role in the transport of cholesterol and lipids in human plasma. Under normal physiological conditions, high concentrations of HDL and LDL are present in the CNS as a result of transport across the blood-brain barrier [5,6]. Apolipoprotein A-I, a major component of plasma HDL, is synthesized within the vascular endothelial cells [7]. HDL has immunomodulatory and anti-oxidant effects on endothelial cells [8] and it has been shown to inhibit production of the pro-inflammatory cytokines interleukin-1beta and tumor necrosis factor [9,10]. Apolipoprotein A-1 and paraoxonase are associated with HDL and contribute to its anti-oxidant and anti-inflammatory properties [9,11,12]. Dyslipidemia can potentiate inflammatory processes at the vascular endothelium, lead to the induction of adhesion molecules, and promote the recruitment of monocytes [13-15]. Associations between dyslipidemia and increased inflammation are well established in conditions such as atherosclerosis, cardiovascular disease, metabolic syndrome and obesity [16]. In the context of autoimmune diseases, a strong association between dyslipidemia and cardiovascular disease has emerged in systemic lupus erythematosus [17], and increased cardiovascular risk and lipid profile changes have been reported in rheumatoid arthritis [18]. HDL and LDL also modulate the function and survival of β-cells in Type 2 diabetes mellitus [19]. Neuromyelitis optica patients were reported to have significantly higher serum cholesterol and triglycerides and lower LDL than healthy controls [20]. However, only limited information is available on the effects of serum triglyceride and cholesterol levels, and the roles of HDL and LDL, on MS disease progression. Increased total cholesterol was associated with increases in the number of contrast-enhancing lesions on brain MRI in clinically isolated syndrome patients following a first clinical demyelinating event [21]. MS patients were found to have a higher occurrence of hypercholesterolemia, and paraoxonase-1, the anti-oxidant enzyme associated with HDL, was decreased during relapses [12]. A retrospective analysis of a large dataset of 8,983 patients from the North American Research Committee on Multiple Sclerosis Registry reported that the presence of vascular comorbidities linked to dyslipidemia was associated with an increased risk for disability progression in MS [22]. The aim of this study therefore was to assess the associations of serum lipid profile variables (serum cholesterol, HDL, LDL and triglycerides) with clinical disability and brain tissue integrity as measured with quantitative magnetic resonance imaging (MRI) metrics in a large cohort of MS patients.
Methods
Ethics Statement: The study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent. Study Design: Single-center, retrospective, longitudinal study. Study Population: The study population included consecutive patients with clinically definite MS according to the McDonald criteria [23], followed at the Baird MS Center, State University of New York, Buffalo, NY, with an available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included. The collected data included demographic and clinical information, statin use history, height and weight, and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol, and cholesterol to HDL ratio. The exclusion criteria consisted of: any relapse with corticosteroid treatment at the time of, or within one month preceding, study entry or MRI examination; pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.); and insufficient quality of the MRI scan for quantitative analysis [24]. MRI Analysis: Quantitative MRI analysis obtained within ± 3 months from the baseline clinical visit (yielding EDSS and fasting cholesterol levels) was available for 210 of 492 patients at baseline. MRI image analysis was conducted at the Buffalo Neuroimaging Analysis Center using approaches previously described [25,26]. MRI analysts were blinded to lipid profile and clinical status. The standardized acquisition and analysis methods for obtaining contrast-enhancing lesion volume (CE-LV), CE lesion number (CEL number), T2-LV, T1-LV and brain parenchymal fraction (BPF) are detailed in Additional File 1. Data Analysis: The SPSS statistical program (SPSS Inc., Chicago, IL, version 15.0) was used for all statistical analyses. One-way ANOVA followed by post-hoc independent sample t-tests were used to test for differences in means of continuous demographic variables such as age, age of onset, and disease duration. The χ2 test was used for analysis of count variables for categorical data, and the Fisher exact test was used where appropriate. The MS Severity Scale (MSSS) was calculated from the EDSS and disease duration values using software downloaded from http://www-gene.cimr.cam.ac.uk/MSgenetics/GAMES/MSSS/Readme.html. The global reference data set provided with the software was used for calculations. The difference between EDSS at follow-up and EDSS at baseline was analyzed as the dependent variable in regression analysis with gender, disease duration at baseline EDSS, EDSS at baseline, time difference between follow-up and baseline EDSS assessments, statin use, and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol, or cholesterol to HDL ratio) as predictor variables. The difference between MSSS at follow-up and MSSS at baseline was analyzed in the same manner as the EDSS; however, the MSSS at baseline was included as a predictor in place of EDSS at baseline, and disease duration was not included as a predictor variable. Similar regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment. Baseline EDSS was dichotomized into two groups based on EDSS < 4.0 and ≥ 4.0. The baseline EDSS groups were analyzed using logistic regression with sex as a factor and disease duration and the lipid profile variable of interest as covariates. The CE-LV, T2-LV and T1-LV data were normalized by cube-root transformation to reduce skew. The cube-root-transformed T2-LV and T1-LV values were analyzed as dependent variables using multiple linear regression. The presence/absence of CE lesions (CEL) was analyzed with logistic regression, the CEL number was analyzed with Poisson loglinear regression, and the transformed CE-LV values were analyzed with Tweedie regression [27]. All regression MRI analyses included sex, disease duration at time of MRI, statin use, and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol, or cholesterol to HDL ratio) as predictor variables. Regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment. To correct for the multiple testing involved, a conservative Type I error level of 0.01 was used to assess significance; a trend was assumed if the Type I error level was ≤ 0.10.
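As an illustration of the modeling strategy described in this Data Analysis section, the sketch below shows how comparable models could be set up in Python with statsmodels. It runs on simulated data with hypothetical column names (edss_change, ldl, and so on); it is an approximation of the described approach, not the authors' SPSS analysis.

```python
# Illustrative sketch of the regression models described above, using
# simulated data; column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "edss_change": rng.normal(0.3, 0.8, n),        # follow-up EDSS minus baseline EDSS
    "sex": rng.choice(["F", "M"], n),
    "disease_duration": rng.uniform(1, 30, n),     # years, at baseline EDSS
    "edss_baseline": rng.uniform(0.0, 7.0, n),
    "followup_interval": rng.uniform(0.5, 4.0, n), # years between EDSS assessments
    "statin_use": rng.integers(0, 2, n),
    "ldl": rng.normal(110, 30, n),                 # one lipid variable of interest (mg/dL)
    "t2_lesion_volume": rng.gamma(2.0, 5.0, n),    # mL, right-skewed
})

# Disability-change model: one lipid profile variable at a time, together with
# the covariates listed in the text, as predictors of the EDSS change.
edss_model = smf.ols(
    "edss_change ~ C(sex) + disease_duration + edss_baseline"
    " + followup_interval + statin_use + ldl",
    data=df,
).fit()
print(edss_model.summary())

# Lesion volumes were cube-root transformed to reduce skew before being
# analyzed with multiple linear regression.
df["t2_lv_cuberoot"] = np.cbrt(df["t2_lesion_volume"])
lv_model = smf.ols(
    "t2_lv_cuberoot ~ C(sex) + disease_duration + statin_use + ldl",
    data=df,
).fit()
print(lv_model.params)
```

Logistic regression for the presence of contrast-enhancing lesions, and Poisson or Tweedie models for lesion counts and volumes, would follow the same pattern, swapping smf.ols for smf.logit or a GLM with the appropriate family.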
null
null
Discussion
BWG contributed to study design, oversaw all clinical aspects of the project including clinical data acquisition, data analysis and interpretation, and manuscript preparation. RZ contributed to study design, MRI data acquisition, data interpretation and manuscript preparation. EC contributed to MRI data acquisition. AD contributed to clinical data acquisition. JS contributed to clinical data acquisition. BT oversaw clinical data acquisition. SH contributed to data acquisition. BM contributed to clinical data acquisition. MW contributed to clinical data acquisition. JD contributed to MRI data acquisition. NB contributed to MRI data acquisition. MR contributed to study design, data analysis and interpretation, and manuscript preparation. All authors read and approved the final manuscript.
[ "Introduction and Background", "Study Population", "Ethics Statement", "Study Design", "Study Population", "MRI Analysis", "Data Analysis", "Results", "Demographic and Clinical Characteristics", "Associations with Disability and Disability Changes", "Associations with MRI", "Discussion" ]
[ "Multiple sclerosis (MS) is a complex inflammatory, demyelinating and neurodegenerative disease with a heterogeneous pathology and clinical outcomes [1]. The chronic inflammatory processes that characterize MS pathology interfere with immune mechanisms that regulate and confine the inflammatory cascade to prevent irreversible tissue damage [2].\nCholesterol is an important component of intact myelin. Lipids, especially lipoproteins, are involved in the regulation of neural functions in the central nervous system through local mechanisms that are linked to systemic lipid metabolism [3,4]. High-density lipoproteins (HDL) and low-density lipoproteins (LDL) play a key role in the transport of cholesterol and lipids in human plasma. Under normal physiological conditions, high concentrations of HDL and LDL are present in CNS as a result of transport across the blood-brain barrier [5,6]. Apolipoprotein A-I, a major component of plasma HDL, is synthesized within the vascular endothelial cells [7]. HDL has immunomodulatory and anti-oxidant effects on endothelial cells [8] and it has been shown to inhibit production of the pro-inflammatory cytokines interleukin-1beta and tumor necrosis factor [9,10]. Apolipoprotein A-1 and paraoxonase are associated with HDL and contribute to its anti-oxidant and anti-inflammatory properties [9,11,12].\nDyslipidemia can potentiate inflammatory processes at the vascular endothelium, lead to the induction of adhesion molecules, and the recruitment of monocytes [13-15]. Associations between dyslipidemia and increased inflammation are well established in conditions such atherosclerosis, cardiovascular disease, metabolic syndrome and obesity [16].\nIn the context of autoimmune diseases, a strong association between dyslipidemia and cardiovascular disease has emerged in systematic lupus erythematosus [17] and increased cardiovascular risk and lipid profile changes have been reported in rheumatoid arthritis [18]. HDL and LDL also modulate the function and survival of β-cells in Type 2 diabetes mellitus [19]. Neuromyelitis optica patients were reported to have significantly higher serum cholesterol triglycerides and lower LDL than healthy controls [20].\nHowever, only limited information is available on the effect of serum triglycerides and cholesterol levels and the roles of HDL and LDL levels on MS disease progression. Increased total cholesterol was associated with increases in the number of contrast-enhancing lesions on brain MRI in clinically isolated syndrome patients following a first clinical demyelinating event [21]. MS patients were found to have a higher occurrence of hypercholesterolemia and paraoxonase-1, the anti-oxidant enzyme associated with HDL, was decreased during relapses [12]. A retrospective analysis of a large dataset of 8,983 patients from the North American Research Committee on Multiple Sclerosis Registry reported that the presence of vascular comorbidities linked to dyslipidemia was associated with an increased risk for disability progression in MS [22].\nThe aim of this study therefore was to assess the associations of serum lipid profile variables (serum cholesterol, HDL, LDL and triglycerides) to clinical disability and brain tissue integrity as measured with quantitative magnetic resonance imaging (MRI) metrics in a large cohort of MS patients.", " Ethics Statement The study was approved by the University at Buffalo Human Subjects Institutional Review Board. 
The Institutional Review Board approval waived the requirement for informed consent.\nThe study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.\n Study Design Single-center, retrospective, longitudinal study.\nSingle-center, retrospective, longitudinal study.\n Study Population The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].\nThe study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].", "The study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.", "Single-center, retrospective, longitudinal study.", "The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. 
Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].", "Quantitative MRI analysis obtained within ± 3 months from the baseline clinical visit (yielding EDSS and fasting cholesterol levels) was available for 210 of 492 patients at baseline. MRI image analysis was conducted at the Buffalo Neuroimaging Analysis Center using approaches previously described [25,26]. MRI analysts were blinded to lipid profile and clinical status. The standardized acquisition and analysis methods for obtaining contrast-enhancing lesion volume (CE-LV), CE lesion number (CEL number), T2-LV, T1-LV and brain parenchymal fraction (BPF) are detailed in Additional File 1.", "SPSS (SPSS Inc., Chicago, IL, version 15.0) statistical program was used for all statistical analyses.\nOne-way ANOVA followed by post-hoc independent sample t-tests were used to test for differences in means of continuous demographic variables such as age, age of onset, and disease duration. The 2 test was used for analysis of count variables for categorical data and the Fisher exact test was used where appropriate.\nThe MS Severity Scale (MSSS) was calculated from the EDSS and disease duration values using software downloaded from http://www-gene.cimr.cam.ac.uk/MSgenetics/GAMES/MSSS/Readme.html. The global reference data set provided with the software was used for calculations.\nThe difference between EDSS at follow-up and EDSS at baseline was analyzed as the dependent variable in regression analysis with gender, disease duration at baseline EDSS, EDSS at baseline, time difference between follow-up and baseline EDSS assessments, statin use and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. The difference between MSSS at follow-up and MSSS at baseline was analyzed in the same manner as the EDSS; however, the MSSS at baseline was included as a predictor in place of EDSS at baseline and the disease duration was not included as a predictor variable. Similar regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nBaseline EDSS was dichotomized into two groups based on EDSS < 4.0 and ≥ 4.0. The baseline EDSS groups were analyzed using logistic regression with sex as a factor and disease duration and lipid profile variable of interest.\nThe CE-LV, T2-LV and T1-LV data were normalized by cube-root transformation to reduce skew. The cube-root-transformed T2-LV and T1-LV values were analyzed as dependent variables using multiple linear regression. The presence/absence of CE lesions (CEL) was analyzed with logistic regression and the CEL number was analyzed with Poisson loglinear regression and the transformed CE-LV values were analyzed with Tweedie regression [27]. 
All regression MRI analyses included sex, disease duration at time of MRI, statin use, and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. Regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nTo correct for the multiple testing involved, a conservative Type I error level of 0.01 was used to assess significance; a trend was assumed if the Type I error level ≤ 0.10.", " Demographic and Clinical Characteristics The clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans was 28 (5.7%), Hispanics was 5 (1%), Native American 1 (0.2%), and the racial information for 34 (6.9%) patients was missing.\nDemographic and clinical characteristics of the cohort.\nThe continuous variables expressed as mean ± SD and categorical variables as frequency (%).\n* At baseline lipid profile assessment. §Statin usage status unavailable for one patient.\nThe median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (Inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (Inter-quartile range: 46 days). The median time between baseline EDSS and follow-up EDSS was 1.88 years (Inter-quartile range: 1.62 years).\nThe majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies.\nMRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (See Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow up EDSS (See Additional File 1, Table S1).\nThe frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences in the groups with and without statin treatment in the lipid profile variables including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL. 
Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2).\nDemographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins.\nStatin usage data were available for 491 patients.\n* At time of baseline lipid profile assessment.\n§ Fisher exact test\n‡ Fisher exact test for presence of secondary progressive or progressive forms of MS.\n# Mann-Whitney test\n¶ p-values for statin variable from regression analyses with sex, disease duration and statin use as predictor variables.\nThe frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapies and no current disease-modifying therapy groups (one-way ANOVA).\nThe clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans was 28 (5.7%), Hispanics was 5 (1%), Native American 1 (0.2%), and the racial information for 34 (6.9%) patients was missing.\nDemographic and clinical characteristics of the cohort.\nThe continuous variables expressed as mean ± SD and categorical variables as frequency (%).\n* At baseline lipid profile assessment. §Statin usage status unavailable for one patient.\nThe median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (Inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (Inter-quartile range: 46 days). The median time between baseline EDSS and follow-up EDSS was 1.88 years (Inter-quartile range: 1.62 years).\nThe majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies.\nMRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (See Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow up EDSS (See Additional File 1, Table S1).\nThe frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences in the groups with and without statin treatment in the lipid profile variables including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL. 
Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2).\nDemographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins.\nStatin usage data were available for 491 patients.\n* At time of baseline lipid profile assessment.\n§ Fisher exact test\n‡ Fisher exact test for presence of secondary progressive or progressive forms of MS.\n# Mann-Whitney test\n¶ p-values for statin variable from regression analyses with sex, disease duration and statin use as predictor variables.\nThe frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapies and no current disease-modifying therapy groups (one-way ANOVA).\n Associations with Disability and Disability Changes Higher total cholesterol to HDL ratio showed an association trend with baseline MSSS (Slope = 0.161 ± 0.092, Partial correlation coefficient rp = 0.080, p = 0.080) and with higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (Slope = 0.23 ± 0.11, rp = 0.11, p = 0.038).\nThe associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglycerides (p = 0.025), total cholesterol (p = 0.001) and exhibited a trend with total cholesterol to HDL ratio (p = 0.047) levels. The EDSS change was not associated with higher HDL (p = 0.79). Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively, similar results were obtained in the subset of patients who were not on statin treatment (results not shown).\nLipid profile associations with disability changes.\nSignificant p-values are underlined.\nSE is standard error of the slope and rp is the partial correlation.\nThese results indicate that LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients.\nHigher total cholesterol to HDL ratio showed an association trend with baseline MSSS (Slope = 0.161 ± 0.092, Partial correlation coefficient rp = 0.080, p = 0.080) and with higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. 
In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (Slope = 0.23 ± 0.11, rp = 0.11, p = 0.038).\nThe associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglycerides (p = 0.025), total cholesterol (p = 0.001) and exhibited a trend with total cholesterol to HDL ratio (p = 0.047) levels. The EDSS change was not associated with higher HDL (p = 0.79). Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively, similar results were obtained in the subset of patients who were not on statin treatment (results not shown).\nLipid profile associations with disability changes.\nSignificant p-values are underlined.\nSE is standard error of the slope and rp is the partial correlation.\nThese results indicate that LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients.\n Associations with MRI Higher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001).\nIn contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins.\nThere was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for associations between CE-LV with total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046) in part as a consequence of the HDL associations with CEL number. Lower CE-LV was also associated as a trend with lower levels of cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins.\nThere were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, Triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with high total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values with higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054).\nHigher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). 
A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001).\nIn contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins.\nThere was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for associations between CE-LV with total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046) in part as a consequence of the HDL associations with CEL number. Lower CE-LV was also associated as a trend with lower levels of cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins.\nThere were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, Triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with high total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values with higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054).", "The clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans was 28 (5.7%), Hispanics was 5 (1%), Native American 1 (0.2%), and the racial information for 34 (6.9%) patients was missing.\nDemographic and clinical characteristics of the cohort.\nThe continuous variables expressed as mean ± SD and categorical variables as frequency (%).\n* At baseline lipid profile assessment. §Statin usage status unavailable for one patient.\nThe median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (Inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (Inter-quartile range: 46 days). The median time between baseline EDSS and follow-up EDSS was 1.88 years (Inter-quartile range: 1.62 years).\nThe majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies.\nMRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (See Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow up EDSS (See Additional File 1, Table S1).\nThe frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences in the groups with and without statin treatment in the lipid profile variables including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL. 
Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2).\nDemographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins.\nStatin usage data were available for 491 patients.\n* At time of baseline lipid profile assessment.\n§ Fisher exact test\n‡ Fisher exact test for presence of secondary progressive or progressive forms of MS.\n# Mann-Whitney test\n¶ p-values for statin variable from regression analyses with sex, disease duration and statin use as predictor variables.\nThe frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapies and no current disease-modifying therapy groups (one-way ANOVA).", "Higher total cholesterol to HDL ratio showed an association trend with baseline MSSS (Slope = 0.161 ± 0.092, Partial correlation coefficient rp = 0.080, p = 0.080) and with higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (Slope = 0.23 ± 0.11, rp = 0.11, p = 0.038).\nThe associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglycerides (p = 0.025), total cholesterol (p = 0.001) and exhibited a trend with total cholesterol to HDL ratio (p = 0.047) levels. The EDSS change was not associated with higher HDL (p = 0.79). Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively, similar results were obtained in the subset of patients who were not on statin treatment (results not shown).\nLipid profile associations with disability changes.\nSignificant p-values are underlined.\nSE is standard error of the slope and rp is the partial correlation.\nThese results indicate that LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients.", "Higher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). 
A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001).\nIn contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins.\nThere was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for associations between CE-LV with total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046) in part as a consequence of the HDL associations with CEL number. Lower CE-LV was also associated as a trend with lower levels of cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins.\nThere were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, Triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with high total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values with higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054).", "In this paper, we have reported results indicating that lipid profile variables such as increased LDL, triglycerides and total cholesterol levels are associated with increased disability progression in MS. Higher HDL levels and lower levels of triglycerides were associated with decreased CEL activity whereas higher total cholesterol levels were associated with lower BPF.\nThe recruitment and extravasation of immune cells across the activated vascular endothelium of the blood brain is considered to a critical step in MS pathogenesis [1]. MS is also associated with significant amounts of cerebral vascular endothelial dysfunction [28,29] and with cerebral hypoperfusion [30,31]. Our working hypothesis is that the pro-inflammatory and thrombogenic processes associated with dyslipidemia could plausibly contribute to disease progression in MS via diverse mechanisms at the blood brain barrier vascular endothelium, e.g., by enhancing leukocyte recruitment, increasing endothelial dysfunction and by increasing the risk of hypoperfusion.\nThe effects size contributions of individual lipid profile variables to disability change were modest but significant: the partial correlation coefficient rp values were in the 0.10 - 0.15 range. We found greater EDSS worsening in patients with higher cholesterol (p = 0.001) and LDL (p = 0.006) levels at baseline. Similar associations were seen for MSSS, a disability measure with better metric properties that corrects the EDSS for disease duration. Nonetheless, our results provide mechanistic support, albeit indirect to the epidemiological findings of Marrie et al. who found that vascular comorbidities are associated with a substantially increased risk of disability progression in MS [22]. Long-term adherence to a low saturated fat diet has been implicated in better clinical outcomes in MS [32]. 
Although the MS cases in the Nurse Health Study cohort did not indicate associations between diet and the risk of developing MS, an association between obesity during adolescence has been reported [33].\nThe primary limitations of our study stem from its retrospective study design. Another caveat is the inclusion of statin-treated patients (22.2% of sample). Because hypercholesterolemia occurs with greater frequency in older male patients, the inclusion of the statin-treated sub-group introduces demographic heterogeneity. We did not find evidence for differences in overall lipid profiles in the statin-treated subset but the group on statin treatment was more frequently male, had greater mean age, disease duration, BMI, baseline EDSS scores and also a somewhat higher proportion of progressive MS, all of which would also be expected in an older and male MS patient group. This cluster of demographic characteristics is generally representative of statin treated patients in the population. All of our statistical analyses were corrected for age and sex to address demographic differences. In addition to their direct effects on cholesterol production, statins exhibit pleiotropic immunomodulatory effects in vitro [34] and in chronic and relapsing experimental autoimmune encephalomyelitis, an animal model of MS [35]. Cholesterol is a major component of myelin and statins may hinder remyelination by inhibiting cholesterol synthesis in the brain [36,37]. The studies of statin treatment in MS have likewise also yielded mixed results [38-42]. Therefore, to further address limitations imposed by the pleiotropic effects of statins and the representative demographic differences, we conducted sub-analyses in patients who were not on statin therapy. Our statin treated group did show a lower CEL number and CE-LV, with a higher T1-LV and a trend toward decreased BPF compared to the non-statin group. We avoided comparing the groups with and without statin treatment in results because this study was not designed to address the specific role if any of statins in MS therapeutics.\nIn a study of 30 MS patients, statin treatment resulted in a significant decrease in the number and volume of CEL on serial monthly MRI [39]. A post hoc analysis of the interferon-beta treated control arm of the SENTINEL study did not indicate an effect of statins on adjusted annualized relapse rate, disability progression, number of CEL, or number of new or enlarging T2-hyperintense lesions over 2 years [40]. The STAYCIS trial to assess statin treatment in slowing the conversion of CIS did not meet its primary endpoint [41]. The SIMCOMBIN trial indicated that statin treatment did not provide benefit in MS patients on interferon-beta [43].\nOur data suggest a negative influence of high cholesterol and triglycerides on disease course and a favorable influence of higher HDL levels on acute inflammatory activity in MS patients. Lifestyle changes including adoption of a healthier diet and regular exercise in order to improve the serum lipid profile may be beneficial for MS patients to improve their neurological condition." ]
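As a rough illustration of the adjusted regressions behind the disability-change associations reported above (sex, disease duration and statin use as covariates, with one lipid variable per model), a minimal Python sketch is given below; it assumes a statsmodels environment, and the file name and column names are hypothetical rather than taken from the study.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level table: one row per patient with baseline and
# follow-up EDSS, covariates and fasting lipid values.
df = pd.read_csv("ms_lipid_cohort.csv")
df["edss_change"] = df["edss_followup"] - df["edss_baseline"]

# One ordinary least squares model per lipid variable of interest,
# adjusted for the covariates described in the study's analysis.
model = smf.ols(
    "edss_change ~ C(sex) + disease_duration + edss_baseline"
    " + years_between_visits + C(statin_use) + ldl",
    data=df,
).fit()
print(model.summary())

In practice one such model would be refit with each lipid variable (HDL, LDL, triglycerides, total cholesterol, cholesterol to HDL ratio) in turn, and again restricted to the subset of patients not on statins.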
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction and Background", "Methods", "Study Population", "Ethics Statement", "Study Design", "Study Population", "MRI Analysis", "Data Analysis", "Results", "Demographic and Clinical Characteristics", "Associations with Disability and Disability Changes", "Associations with MRI", "Discussion", "Supplementary Material" ]
[ "Multiple sclerosis (MS) is a complex inflammatory, demyelinating and neurodegenerative disease with a heterogeneous pathology and clinical outcomes [1]. The chronic inflammatory processes that characterize MS pathology interfere with immune mechanisms that regulate and confine the inflammatory cascade to prevent irreversible tissue damage [2].\nCholesterol is an important component of intact myelin. Lipids, especially lipoproteins, are involved in the regulation of neural functions in the central nervous system through local mechanisms that are linked to systemic lipid metabolism [3,4]. High-density lipoproteins (HDL) and low-density lipoproteins (LDL) play a key role in the transport of cholesterol and lipids in human plasma. Under normal physiological conditions, high concentrations of HDL and LDL are present in CNS as a result of transport across the blood-brain barrier [5,6]. Apolipoprotein A-I, a major component of plasma HDL, is synthesized within the vascular endothelial cells [7]. HDL has immunomodulatory and anti-oxidant effects on endothelial cells [8] and it has been shown to inhibit production of the pro-inflammatory cytokines interleukin-1beta and tumor necrosis factor [9,10]. Apolipoprotein A-1 and paraoxonase are associated with HDL and contribute to its anti-oxidant and anti-inflammatory properties [9,11,12].\nDyslipidemia can potentiate inflammatory processes at the vascular endothelium, lead to the induction of adhesion molecules, and the recruitment of monocytes [13-15]. Associations between dyslipidemia and increased inflammation are well established in conditions such atherosclerosis, cardiovascular disease, metabolic syndrome and obesity [16].\nIn the context of autoimmune diseases, a strong association between dyslipidemia and cardiovascular disease has emerged in systematic lupus erythematosus [17] and increased cardiovascular risk and lipid profile changes have been reported in rheumatoid arthritis [18]. HDL and LDL also modulate the function and survival of β-cells in Type 2 diabetes mellitus [19]. Neuromyelitis optica patients were reported to have significantly higher serum cholesterol triglycerides and lower LDL than healthy controls [20].\nHowever, only limited information is available on the effect of serum triglycerides and cholesterol levels and the roles of HDL and LDL levels on MS disease progression. Increased total cholesterol was associated with increases in the number of contrast-enhancing lesions on brain MRI in clinically isolated syndrome patients following a first clinical demyelinating event [21]. MS patients were found to have a higher occurrence of hypercholesterolemia and paraoxonase-1, the anti-oxidant enzyme associated with HDL, was decreased during relapses [12]. A retrospective analysis of a large dataset of 8,983 patients from the North American Research Committee on Multiple Sclerosis Registry reported that the presence of vascular comorbidities linked to dyslipidemia was associated with an increased risk for disability progression in MS [22].\nThe aim of this study therefore was to assess the associations of serum lipid profile variables (serum cholesterol, HDL, LDL and triglycerides) to clinical disability and brain tissue integrity as measured with quantitative magnetic resonance imaging (MRI) metrics in a large cohort of MS patients.", " Study Population Ethics Statement The study was approved by the University at Buffalo Human Subjects Institutional Review Board. 
The Institutional Review Board approval waived the requirement for informed consent.\nThe study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.\n Study Design Single-center, retrospective, longitudinal study.\nSingle-center, retrospective, longitudinal study.\n Study Population The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].\nThe study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].\n Ethics Statement The study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.\nThe study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.\n Study Design Single-center, retrospective, longitudinal study.\nSingle-center, retrospective, longitudinal study.\n Study Population The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. 
Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].\nThe study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].\n MRI Analysis Quantitative MRI analysis obtained within ± 3 months from the baseline clinical visit (yielding EDSS and fasting cholesterol levels) was available for 210 of 492 patients at baseline. MRI image analysis was conducted at the Buffalo Neuroimaging Analysis Center using approaches previously described [25,26]. MRI analysts were blinded to lipid profile and clinical status. The standardized acquisition and analysis methods for obtaining contrast-enhancing lesion volume (CE-LV), CE lesion number (CEL number), T2-LV, T1-LV and brain parenchymal fraction (BPF) are detailed in Additional File 1.\nQuantitative MRI analysis obtained within ± 3 months from the baseline clinical visit (yielding EDSS and fasting cholesterol levels) was available for 210 of 492 patients at baseline. MRI image analysis was conducted at the Buffalo Neuroimaging Analysis Center using approaches previously described [25,26]. MRI analysts were blinded to lipid profile and clinical status. The standardized acquisition and analysis methods for obtaining contrast-enhancing lesion volume (CE-LV), CE lesion number (CEL number), T2-LV, T1-LV and brain parenchymal fraction (BPF) are detailed in Additional File 1.\n Data Analysis SPSS (SPSS Inc., Chicago, IL, version 15.0) statistical program was used for all statistical analyses.\nOne-way ANOVA followed by post-hoc independent sample t-tests were used to test for differences in means of continuous demographic variables such as age, age of onset, and disease duration. 
The χ2 test was used for analysis of count variables for categorical data and the Fisher exact test was used where appropriate.\nThe MS Severity Scale (MSSS) was calculated from the EDSS and disease duration values using software downloaded from http://www-gene.cimr.cam.ac.uk/MSgenetics/GAMES/MSSS/Readme.html. The global reference data set provided with the software was used for calculations.\nThe difference between EDSS at follow-up and EDSS at baseline was analyzed as the dependent variable in regression analysis with gender, disease duration at baseline EDSS, EDSS at baseline, time difference between follow-up and baseline EDSS assessments, statin use and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. The difference between MSSS at follow-up and MSSS at baseline was analyzed in the same manner as the EDSS; however, the MSSS at baseline was included as a predictor in place of EDSS at baseline and the disease duration was not included as a predictor variable. Similar regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nBaseline EDSS was dichotomized into two groups based on EDSS < 4.0 and ≥ 4.0. The baseline EDSS groups were analyzed using logistic regression with sex as a factor and disease duration and the lipid profile variable of interest as covariates.\nThe CE-LV, T2-LV and T1-LV data were normalized by cube-root transformation to reduce skew. The cube-root-transformed T2-LV and T1-LV values were analyzed as dependent variables using multiple linear regression. The presence/absence of CE lesions (CEL) was analyzed with logistic regression, the CEL number was analyzed with Poisson loglinear regression, and the transformed CE-LV values were analyzed with Tweedie regression [27]. All regression MRI analyses included sex, disease duration at time of MRI, statin use, and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. Regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nTo correct for the multiple testing involved, a conservative Type I error level of 0.01 was used to assess significance; a trend was assumed if the Type I error level was ≤ 0.10.\nThe SPSS statistical program (SPSS Inc., Chicago, IL, version 15.0) was used for all statistical analyses.\nOne-way ANOVA followed by post-hoc independent sample t-tests were used to test for differences in means of continuous demographic variables such as age, age of onset, and disease duration. The χ2 test was used for analysis of count variables for categorical data and the Fisher exact test was used where appropriate.\nThe MS Severity Scale (MSSS) was calculated from the EDSS and disease duration values using software downloaded from http://www-gene.cimr.cam.ac.uk/MSgenetics/GAMES/MSSS/Readme.html. 
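The MSSS itself is a rank-based score: a patient's EDSS is ranked against reference patients with a similar disease duration and scaled to 0-10. A minimal Python sketch of that lookup principle is shown below; it is not the distributed software, omits the smoothing across adjacent duration bins, and uses hypothetical column names.

import numpy as np
import pandas as pd

def msss_lookup(edss, duration_years, reference):
    # reference: DataFrame with 'duration_years' and 'edss' columns drawn
    # from a reference population (hypothetical layout).
    bin_scores = reference.loc[
        reference["duration_years"].astype(int) == int(duration_years), "edss"
    ].to_numpy()
    if bin_scores.size == 0:
        return np.nan
    # Fraction of reference patients with lower EDSS, counting ties as half,
    # scaled to the 0-10 MSSS range.
    frac = (np.sum(bin_scores < edss) + 0.5 * np.sum(bin_scores == edss)) / bin_scores.size
    return 10.0 * frac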
The global reference data set provided with the software was used for calculations.\nThe difference between EDSS at follow-up and EDSS at baseline was analyzed as the dependent variable in regression analysis with gender, disease duration at baseline EDSS, EDSS at baseline, time difference between follow-up and baseline EDSS assessments, statin use and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. The difference between MSSS at follow-up and MSSS at baseline was analyzed in the same manner as the EDSS; however, the MSSS at baseline was included as a predictor in place of EDSS at baseline and the disease duration was not included as a predictor variable. Similar regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nBaseline EDSS was dichotomized into two groups based on EDSS < 4.0 and ≥ 4.0. The baseline EDSS groups were analyzed using logistic regression with sex as a factor and disease duration and lipid profile variable of interest.\nThe CE-LV, T2-LV and T1-LV data were normalized by cube-root transformation to reduce skew. The cube-root-transformed T2-LV and T1-LV values were analyzed as dependent variables using multiple linear regression. The presence/absence of CE lesions (CEL) was analyzed with logistic regression and the CEL number was analyzed with Poisson loglinear regression and the transformed CE-LV values were analyzed with Tweedie regression [27]. All regression MRI analyses included sex, disease duration at time of MRI, statin use, and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. Regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nTo correct for the multiple testing involved, a conservative Type I error level of 0.01 was used to assess significance; a trend was assumed if the Type I error level ≤ 0.10.", " Ethics Statement The study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.\nThe study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.\n Study Design Single-center, retrospective, longitudinal study.\nSingle-center, retrospective, longitudinal study.\n Study Population The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. 
Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].\nThe study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].", "The study was approved by the University at Buffalo Human Subjects Institutional Review Board. The Institutional Review Board approval waived the requirement for informed consent.", "Single-center, retrospective, longitudinal study.", "The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS patients according to the McDonald criteria [23] with available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included.\nThe collected data included demographic and clinical information, statin use history, height and weight and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio.\nThe exclusion criteria consisted of: any relapse with corticosteroid treatment at the time or within one month preceding study entry or MRI examination, pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.), and insufficient quality of the MRI scan for quantitative analysis [24].", "Quantitative MRI analysis obtained within ± 3 months from the baseline clinical visit (yielding EDSS and fasting cholesterol levels) was available for 210 of 492 patients at baseline. MRI image analysis was conducted at the Buffalo Neuroimaging Analysis Center using approaches previously described [25,26]. MRI analysts were blinded to lipid profile and clinical status. 
The standardized acquisition and analysis methods for obtaining contrast-enhancing lesion volume (CE-LV), CE lesion number (CEL number), T2-LV, T1-LV and brain parenchymal fraction (BPF) are detailed in Additional File 1.", "SPSS (SPSS Inc., Chicago, IL, version 15.0) statistical program was used for all statistical analyses.\nOne-way ANOVA followed by post-hoc independent sample t-tests were used to test for differences in means of continuous demographic variables such as age, age of onset, and disease duration. The 2 test was used for analysis of count variables for categorical data and the Fisher exact test was used where appropriate.\nThe MS Severity Scale (MSSS) was calculated from the EDSS and disease duration values using software downloaded from http://www-gene.cimr.cam.ac.uk/MSgenetics/GAMES/MSSS/Readme.html. The global reference data set provided with the software was used for calculations.\nThe difference between EDSS at follow-up and EDSS at baseline was analyzed as the dependent variable in regression analysis with gender, disease duration at baseline EDSS, EDSS at baseline, time difference between follow-up and baseline EDSS assessments, statin use and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. The difference between MSSS at follow-up and MSSS at baseline was analyzed in the same manner as the EDSS; however, the MSSS at baseline was included as a predictor in place of EDSS at baseline and the disease duration was not included as a predictor variable. Similar regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nBaseline EDSS was dichotomized into two groups based on EDSS < 4.0 and ≥ 4.0. The baseline EDSS groups were analyzed using logistic regression with sex as a factor and disease duration and lipid profile variable of interest.\nThe CE-LV, T2-LV and T1-LV data were normalized by cube-root transformation to reduce skew. The cube-root-transformed T2-LV and T1-LV values were analyzed as dependent variables using multiple linear regression. The presence/absence of CE lesions (CEL) was analyzed with logistic regression and the CEL number was analyzed with Poisson loglinear regression and the transformed CE-LV values were analyzed with Tweedie regression [27]. All regression MRI analyses included sex, disease duration at time of MRI, statin use, and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. Regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment.\nTo correct for the multiple testing involved, a conservative Type I error level of 0.01 was used to assess significance; a trend was assumed if the Type I error level ≤ 0.10.", " Demographic and Clinical Characteristics The clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans was 28 (5.7%), Hispanics was 5 (1%), Native American 1 (0.2%), and the racial information for 34 (6.9%) patients was missing.\nDemographic and clinical characteristics of the cohort.\nThe continuous variables expressed as mean ± SD and categorical variables as frequency (%).\n* At baseline lipid profile assessment. 
§Statin usage status unavailable for one patient.\nThe median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (Inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (Inter-quartile range: 46 days). The median time between baseline EDSS and follow-up EDSS was 1.88 years (Inter-quartile range: 1.62 years).\nThe majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies.\nMRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (See Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow up EDSS (See Additional File 1, Table S1).\nThe frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences in the groups with and without statin treatment in the lipid profile variables including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL. Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2).\nDemographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins.\nStatin usage data were available for 491 patients.\n* At time of baseline lipid profile assessment.\n§ Fisher exact test\n‡ Fisher exact test for presence of secondary progressive or progressive forms of MS.\n# Mann-Whitney test\n¶ p-values for statin variable from regression analyses with sex, disease duration and statin use as predictor variables.\nThe frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapies and no current disease-modifying therapy groups (one-way ANOVA).\nThe clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans was 28 (5.7%), Hispanics was 5 (1%), Native American 1 (0.2%), and the racial information for 34 (6.9%) patients was missing.\nDemographic and clinical characteristics of the cohort.\nThe continuous variables expressed as mean ± SD and categorical variables as frequency (%).\n* At baseline lipid profile assessment. §Statin usage status unavailable for one patient.\nThe median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (Inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (Inter-quartile range: 46 days). 
The median time between baseline EDSS and follow-up EDSS was 1.88 years (Inter-quartile range: 1.62 years).\nThe majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies.\nMRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (See Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow up EDSS (See Additional File 1, Table S1).\nThe frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences in the groups with and without statin treatment in the lipid profile variables including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL. Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2).\nDemographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins.\nStatin usage data were available for 491 patients.\n* At time of baseline lipid profile assessment.\n§ Fisher exact test\n‡ Fisher exact test for presence of secondary progressive or progressive forms of MS.\n# Mann-Whitney test\n¶ p-values for statin variable from regression analyses with sex, disease duration and statin use as predictor variables.\nThe frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapies and no current disease-modifying therapy groups (one-way ANOVA).\n Associations with Disability and Disability Changes Higher total cholesterol to HDL ratio showed an association trend with baseline MSSS (Slope = 0.161 ± 0.092, Partial correlation coefficient rp = 0.080, p = 0.080) and with higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (Slope = 0.23 ± 0.11, rp = 0.11, p = 0.038).\nThe associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglycerides (p = 0.025), total cholesterol (p = 0.001) and exhibited a trend with total cholesterol to HDL ratio (p = 0.047) levels. The EDSS change was not associated with higher HDL (p = 0.79). 
Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively, similar results were obtained in the subset of patients who were not on statin treatment (results not shown).\nLipid profile associations with disability changes.\nSignificant p-values are underlined.\nSE is standard error of the slope and rp is the partial correlation.\nThese results indicate that LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients.\nHigher total cholesterol to HDL ratio showed an association trend with baseline MSSS (Slope = 0.161 ± 0.092, Partial correlation coefficient rp = 0.080, p = 0.080) and with higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (Slope = 0.23 ± 0.11, rp = 0.11, p = 0.038).\nThe associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglycerides (p = 0.025), total cholesterol (p = 0.001) and exhibited a trend with total cholesterol to HDL ratio (p = 0.047) levels. The EDSS change was not associated with higher HDL (p = 0.79). Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively, similar results were obtained in the subset of patients who were not on statin treatment (results not shown).\nLipid profile associations with disability changes.\nSignificant p-values are underlined.\nSE is standard error of the slope and rp is the partial correlation.\nThese results indicate that LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients.\n Associations with MRI Higher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001).\nIn contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins.\nThere was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for associations between CE-LV with total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046) in part as a consequence of the HDL associations with CEL number. 
Lower CE-LV was also associated as a trend with lower levels of cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins.\nThere were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, Triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with high total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values with higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054).\nHigher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001).\nIn contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins.\nThere was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for associations between CE-LV with total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046) in part as a consequence of the HDL associations with CEL number. Lower CE-LV was also associated as a trend with lower levels of cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins.\nThere were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, Triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with high total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values with higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054).", "The clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans was 28 (5.7%), Hispanics was 5 (1%), Native American 1 (0.2%), and the racial information for 34 (6.9%) patients was missing.\nDemographic and clinical characteristics of the cohort.\nThe continuous variables expressed as mean ± SD and categorical variables as frequency (%).\n* At baseline lipid profile assessment. §Statin usage status unavailable for one patient.\nThe median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (Inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (Inter-quartile range: 46 days). 
The median time between baseline EDSS and follow-up EDSS was 1.88 years (Inter-quartile range: 1.62 years).\nThe majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies.\nMRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (See Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow up EDSS (See Additional File 1, Table S1).\nThe frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences in the groups with and without statin treatment in the lipid profile variables including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL. Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2).\nDemographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins.\nStatin usage data were available for 491 patients.\n* At time of baseline lipid profile assessment.\n§ Fisher exact test\n‡ Fisher exact test for presence of secondary progressive or progressive forms of MS.\n# Mann-Whitney test\n¶ p-values for statin variable from regression analyses with sex, disease duration and statin use as predictor variables.\nThe frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapies and no current disease-modifying therapy groups (one-way ANOVA).", "Higher total cholesterol to HDL ratio showed an association trend with baseline MSSS (Slope = 0.161 ± 0.092, Partial correlation coefficient rp = 0.080, p = 0.080) and with higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (Slope = 0.23 ± 0.11, rp = 0.11, p = 0.038).\nThe associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglycerides (p = 0.025), total cholesterol (p = 0.001) and exhibited a trend with total cholesterol to HDL ratio (p = 0.047) levels. The EDSS change was not associated with higher HDL (p = 0.79). 
Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively, similar results were obtained in the subset of patients who were not on statin treatment (results not shown).\nLipid profile associations with disability changes.\nSignificant p-values are underlined.\nSE is standard error of the slope and rp is the partial correlation.\nThese results indicate that LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients.", "Higher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001).\nIn contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins.\nThere was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for an association between CE-LV and total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046), in part as a consequence of the HDL associations with CEL number. Lower CE-LV was also associated as a trend with lower levels of the cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins.\nThere were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with high total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values and higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054).", "In this paper, we have reported results indicating that lipid profile variables such as increased LDL, triglycerides and total cholesterol levels are associated with increased disability progression in MS. Higher HDL levels and lower levels of triglycerides were associated with decreased CEL activity, whereas higher total cholesterol levels were associated with lower BPF.\nThe recruitment and extravasation of immune cells across the activated vascular endothelium of the blood-brain barrier is considered to be a critical step in MS pathogenesis [1]. MS is also associated with significant amounts of cerebral vascular endothelial dysfunction [28,29] and with cerebral hypoperfusion [30,31]. 
Our working hypothesis is that the pro-inflammatory and thrombogenic processes associated with dyslipidemia could plausibly contribute to disease progression in MS via diverse mechanisms at the blood-brain barrier vascular endothelium, e.g., by enhancing leukocyte recruitment, increasing endothelial dysfunction and increasing the risk of hypoperfusion.\nThe effect size contributions of individual lipid profile variables to disability change were modest but significant: the partial correlation coefficient rp values were in the 0.10 - 0.15 range. We found greater EDSS worsening in patients with higher cholesterol (p = 0.001) and LDL (p = 0.006) levels at baseline. Similar associations were seen for MSSS, a disability measure with better metric properties that corrects the EDSS for disease duration. Nonetheless, our results provide mechanistic support, albeit indirect, for the epidemiological findings of Marrie et al., who found that vascular comorbidities are associated with a substantially increased risk of disability progression in MS [22]. Long-term adherence to a low saturated fat diet has been implicated in better clinical outcomes in MS [32]. Although the MS cases in the Nurses' Health Study cohort did not indicate associations between diet and the risk of developing MS, an association between obesity during adolescence and the risk of MS has been reported [33].\nThe primary limitations of our study stem from its retrospective study design. Another caveat is the inclusion of statin-treated patients (22.2% of the sample). Because hypercholesterolemia occurs with greater frequency in older male patients, the inclusion of the statin-treated sub-group introduces demographic heterogeneity. We did not find evidence for differences in overall lipid profiles in the statin-treated subset, but the group on statin treatment was more frequently male, had greater mean age, disease duration, BMI and baseline EDSS scores, and also had a somewhat higher proportion of progressive MS, all of which would also be expected in an older and male MS patient group. This cluster of demographic characteristics is generally representative of statin-treated patients in the population. All of our statistical analyses were corrected for age and sex to address demographic differences. In addition to their direct effects on cholesterol production, statins exhibit pleiotropic immunomodulatory effects in vitro [34] and in chronic and relapsing experimental autoimmune encephalomyelitis, an animal model of MS [35]. Cholesterol is a major component of myelin, and statins may hinder remyelination by inhibiting cholesterol synthesis in the brain [36,37]. Studies of statin treatment in MS have likewise yielded mixed results [38-42]. Therefore, to further address the limitations imposed by the pleiotropic effects of statins and the representative demographic differences, we conducted sub-analyses in patients who were not on statin therapy. Our statin-treated group did show a lower CEL number and CE-LV, with a higher T1-LV and a trend toward decreased BPF, compared to the non-statin group. We avoided comparing the groups with and without statin treatment in the Results because this study was not designed to address the specific role, if any, of statins in MS therapeutics.\nIn a study of 30 MS patients, statin treatment resulted in a significant decrease in the number and volume of CEL on serial monthly MRI [39]. 
A post hoc analysis of the interferon-beta-treated control arm of the SENTINEL study did not indicate an effect of statins on adjusted annualized relapse rate, disability progression, number of CEL, or number of new or enlarging T2-hyperintense lesions over 2 years [40]. The STAYCIS trial, which assessed statin treatment for slowing the conversion of CIS to clinically definite MS, did not meet its primary endpoint [41]. The SIMCOMBIN trial indicated that statin treatment did not provide benefit in MS patients on interferon-beta [43].\nOur data suggest a negative influence of high cholesterol and triglycerides on disease course and a favorable influence of higher HDL levels on acute inflammatory activity in MS patients. Lifestyle changes, including adoption of a healthier diet and regular exercise to improve the serum lipid profile, may be beneficial for the neurological condition of MS patients.", "Additional file 1 contains the MRI Acquisition Protocol, Image Analysis methods and Table S1." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[ "Multiple sclerosis", "diet", "lipid profile", "MRI", "environmental factors", "gene-environment interactions", "lesion volume", "brain atrophy" ]
Introduction and Background: Multiple sclerosis (MS) is a complex inflammatory, demyelinating and neurodegenerative disease with a heterogeneous pathology and clinical outcomes [1]. The chronic inflammatory processes that characterize MS pathology interfere with immune mechanisms that regulate and confine the inflammatory cascade to prevent irreversible tissue damage [2]. Cholesterol is an important component of intact myelin. Lipids, especially lipoproteins, are involved in the regulation of neural functions in the central nervous system through local mechanisms that are linked to systemic lipid metabolism [3,4]. High-density lipoproteins (HDL) and low-density lipoproteins (LDL) play a key role in the transport of cholesterol and lipids in human plasma. Under normal physiological conditions, high concentrations of HDL and LDL are present in the CNS as a result of transport across the blood-brain barrier [5,6]. Apolipoprotein A-I, a major component of plasma HDL, is synthesized within the vascular endothelial cells [7]. HDL has immunomodulatory and anti-oxidant effects on endothelial cells [8] and has been shown to inhibit production of the pro-inflammatory cytokines interleukin-1beta and tumor necrosis factor [9,10]. Apolipoprotein A-I and paraoxonase are associated with HDL and contribute to its anti-oxidant and anti-inflammatory properties [9,11,12]. Dyslipidemia can potentiate inflammatory processes at the vascular endothelium, leading to the induction of adhesion molecules and the recruitment of monocytes [13-15]. Associations between dyslipidemia and increased inflammation are well established in conditions such as atherosclerosis, cardiovascular disease, metabolic syndrome and obesity [16]. In the context of autoimmune diseases, a strong association between dyslipidemia and cardiovascular disease has emerged in systemic lupus erythematosus [17], and increased cardiovascular risk and lipid profile changes have been reported in rheumatoid arthritis [18]. HDL and LDL also modulate the function and survival of β-cells in Type 2 diabetes mellitus [19]. Neuromyelitis optica patients were reported to have significantly higher serum cholesterol and triglycerides and lower LDL than healthy controls [20]. However, only limited information is available on the effect of serum triglycerides and cholesterol levels and the roles of HDL and LDL levels on MS disease progression. Increased total cholesterol was associated with increases in the number of contrast-enhancing lesions on brain MRI in clinically isolated syndrome patients following a first clinical demyelinating event [21]. MS patients were found to have a higher occurrence of hypercholesterolemia, and paraoxonase-1, the anti-oxidant enzyme associated with HDL, was decreased during relapses [12]. A retrospective analysis of a large dataset of 8,983 patients from the North American Research Committee on Multiple Sclerosis Registry reported that the presence of vascular comorbidities linked to dyslipidemia was associated with an increased risk for disability progression in MS [22]. The aim of this study therefore was to assess the associations of serum lipid profile variables (serum cholesterol, HDL, LDL and triglycerides) with clinical disability and brain tissue integrity as measured with quantitative magnetic resonance imaging (MRI) metrics in a large cohort of MS patients. Methods: Ethics Statement: The study was approved by the University at Buffalo Human Subjects Institutional Review Board. 
The Institutional Review Board approval waived the requirement for informed consent. Study Design: Single-center, retrospective, longitudinal study. Study Population: The study population included consecutive patients, followed at the Baird MS Center, State University of New York, Buffalo, NY, with clinically definite MS according to the McDonald criteria [23], with an available baseline EDSS assessment within ± 6 months of lipid profile testing and a follow-up EDSS assessment ≥ 6 months from the baseline clinical visit. Patients with CIS and neuromyelitis optica were not included. The collected data included demographic and clinical information, statin use history, height and weight, and fasting lipid profile laboratory values: HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio. The exclusion criteria consisted of: any relapse with corticosteroid treatment at the time of, or within one month preceding, study entry or MRI examination; pre-existing medical conditions known to be associated with brain pathology (e.g., neurodegenerative disorders, cerebrovascular disease, positive history of alcohol abuse, etc.); and insufficient quality of the MRI scan for quantitative analysis [24]. MRI Analysis: Quantitative MRI analysis, obtained within ± 3 months of the baseline clinical visit (which yielded the EDSS and fasting cholesterol levels), was available for 210 of 492 patients at baseline. MRI image analysis was conducted at the Buffalo Neuroimaging Analysis Center using approaches previously described [25,26]. MRI analysts were blinded to lipid profile and clinical status. The standardized acquisition and analysis methods for obtaining contrast-enhancing lesion volume (CE-LV), CE lesion number (CEL number), T2-LV, T1-LV and brain parenchymal fraction (BPF) are detailed in Additional File 1. Data Analysis: The SPSS statistical program (SPSS Inc., Chicago, IL, version 15.0) was used for all statistical analyses. One-way ANOVA followed by post-hoc independent-sample t-tests was used to test for differences in means of continuous demographic variables such as age, age of onset and disease duration. The χ² test was used for analysis of count variables for categorical data, and the Fisher exact test was used where appropriate. The MS Severity Scale (MSSS) was calculated from the EDSS and disease duration values using software downloaded from http://www-gene.cimr.cam.ac.uk/MSgenetics/GAMES/MSSS/Readme.html. The global reference data set provided with the software was used for calculations. The difference between EDSS at follow-up and EDSS at baseline was analyzed as the dependent variable in regression analysis with gender, disease duration at baseline EDSS, EDSS at baseline, time difference between follow-up and baseline EDSS assessments, statin use and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. The difference between MSSS at follow-up and MSSS at baseline was analyzed in the same manner as the EDSS; however, the MSSS at baseline was included as a predictor in place of EDSS at baseline, and disease duration was not included as a predictor variable. Similar regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment. Baseline EDSS was dichotomized into two groups based on EDSS < 4.0 and ≥ 4.0, and these groups were analyzed using logistic regression with sex as a factor and disease duration and the lipid profile variable of interest as covariates. The CE-LV, T2-LV and T1-LV data were normalized by cube-root transformation to reduce skew. The cube-root-transformed T2-LV and T1-LV values were analyzed as dependent variables using multiple linear regression. The presence/absence of CE lesions (CEL) was analyzed with logistic regression, the CEL number was analyzed with Poisson loglinear regression, and the transformed CE-LV values were analyzed with Tweedie regression [27]. All MRI regression analyses included sex, disease duration at the time of MRI, statin use and a lipid profile variable of interest (either HDL, LDL, triglycerides, total cholesterol or cholesterol to HDL ratio) as predictor variables. Regression analyses were also conducted in the subset of patients who were not on statins to assess the contributions of lipid profile variables in the absence of statin treatment. To correct for the multiple testing involved, a conservative Type I error level of 0.01 was used to assess significance; a trend was assumed if the Type I error level was ≤ 0.10.
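The disability-change models above were fitted in SPSS. As a rough, hypothetical illustration of the model structure only (not the authors' code), the EDSS-change regression could be sketched in Python with statsmodels as follows; the file and column names are assumptions.

```python
# Illustrative sketch of the disability-change regression described in the Data Analysis
# section. The original analyses were run in SPSS; this is not the authors' code, and the
# file name and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ms_lipid_cohort.csv")  # one row per patient (hypothetical file)

# Dependent variable: change in EDSS between follow-up and baseline
df["edss_change"] = df["edss_followup"] - df["edss_baseline"]

# One lipid variable of interest at a time (here LDL), adjusted for the covariates
# listed in the Methods: gender, disease duration, baseline EDSS, time between
# assessments and statin use.
model = smf.ols(
    "edss_change ~ C(gender) + disease_duration + edss_baseline"
    " + followup_interval_years + C(statin_use) + ldl",
    data=df,
).fit()
print(model.summary())

# Conservative thresholds from the Methods: p < 0.01 significant, p <= 0.10 a trend.
p_ldl = model.pvalues["ldl"]
print("significant" if p_ldl < 0.01 else "trend" if p_ldl <= 0.10 else "not significant")
```

Swapping the ldl term for HDL, triglycerides, total cholesterol or the cholesterol to HDL ratio would reproduce the family of models described in the text.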
Results: Demographic and Clinical Characteristics: The clinical, demographic and MRI features of the cohort are summarized in Table 1. The frequency of Caucasian-Americans was 422 (85.8%), African-Americans 28 (5.7%), Hispanics 5 (1%) and Native Americans 1 (0.2%); racial information was missing for 34 (6.9%) patients. Demographic and clinical characteristics of the cohort. The continuous variables are expressed as mean ± SD and categorical variables as frequency (%). * At baseline lipid profile assessment. § Statin usage status unavailable for one patient. The median absolute time difference between lipid profile and baseline EDSS assessment was 25 days (inter-quartile range: 51 days). The median absolute time difference between MRI and lipid profile assessments was 30 days (inter-quartile range: 46 days). The median time between baseline EDSS and follow-up EDSS was 1.88 years (inter-quartile range: 1.62 years). The majority of patients were on disease-modifying therapies: 45% were on interferon-beta-1a monotherapy, 0.8% were on interferon-beta-1b monotherapy, 14% were on glatiramer acetate, 20% were on natalizumab, 8% were on no therapy and the remainder were on combination therapies or chemotherapies. MRI data were available for 210 patients. There was no evidence for lipid profile differences between the groups with and without MRI available (see Additional File 1, Table S1). The group with MRI differed from the group without MRI in the higher frequency of progressive forms of MS and a modestly shorter time between baseline EDSS and follow-up EDSS (see Additional File 1, Table S1). The frequency of statin usage was 109/491 patients (22.2%). There was no evidence for differences between the groups with and without statin treatment in the lipid profile variables, including HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio. Not surprisingly, the group on statin treatment had a higher proportion of males, greater mean age, disease duration, BMI and baseline EDSS than the group not on statin treatment (Table 2). Demographic, clinical and MRI characteristics, and lipid profiles of patient subsets with and without statins. Statin usage data were available for 491 patients. * At time of baseline lipid profile assessment. § Fisher exact test. ‡ Fisher exact test for presence of secondary progressive or progressive forms of MS. # Mann-Whitney test. ¶ p-values for the statin variable from regression analyses with sex, disease duration and statin use as predictor variables. The frequency of disease-modifying therapy usage in the group on statin treatment (51% interferon-beta 1a, 7% glatiramer acetate, 20% natalizumab, 9% no current disease-modifying therapy, with the remainder on combination therapies or chemotherapies) was similar to the group not receiving statins (43% interferon-beta 1a, 16% glatiramer acetate, 20% natalizumab, 8% no therapy, with the remainder on combination therapies or chemotherapies). There was no evidence for significant differences in the lipid profile variables among the interferon-beta, glatiramer acetate, natalizumab, combination therapy or chemotherapy and no current disease-modifying therapy groups (one-way ANOVA). Associations with Disability and Disability Changes: Higher total cholesterol to HDL ratio showed an association trend with baseline MSSS (slope = 0.161 ± 0.092, partial correlation coefficient rp = 0.080, p = 0.080) and with a higher probability of occurrence of baseline EDSS ≥ 4.0 (p = 0.082, OR = 1.17). There was no evidence for associations for the other lipid profile variables or BMI. In the subset without statin treatment, the probability of occurrence of baseline EDSS ≥ 4.0 exhibited increasing trends with higher total cholesterol (p = 0.040) and cholesterol to HDL ratio (p = 0.017). There was no evidence for an association with HDL. Baseline MSSS trended higher with higher total cholesterol to HDL ratio (slope = 0.23 ± 0.11, rp = 0.11, p = 0.038). The associations of lipid profile variables with EDSS and MSSS changes are summarized in Table 3. Worsening EDSS changes were associated with higher LDL (p = 0.006), triglyceride (p = 0.025) and total cholesterol (p = 0.001) levels, and exhibited a trend with the total cholesterol to HDL ratio (p = 0.047). The EDSS change was not associated with HDL (p = 0.79). Similarly, worsening MSSS changes were associated with higher total cholesterol levels (p = 0.008); trends were also found with higher LDL (p = 0.012) and triglyceride (p = 0.037) levels. BMI was not associated with disability changes on either the EDSS or MSSS (results not shown). Qualitatively similar results were obtained in the subset of patients who were not on statin treatment (results not shown). Lipid profile associations with disability changes. Significant p-values are underlined. SE is the standard error of the slope and rp is the partial correlation. These results indicate that the LDL, triglyceride and total cholesterol lipid profile variables are associated with disability changes in MS patients. Associations with MRI: Higher HDL levels were associated with a lower probability for the presence of CEL (p = 0.01) and lower CE-LV (p < 0.001). A qualitatively similar pattern of protective associations for higher HDL was found in the group not receiving statin treatment for the presence of CEL (p = 0.029, a trend) and for CE-LV (p < 0.001). In contrast, higher triglyceride levels were associated with trends for a higher probability for the presence of CEL (p = 0.038) and with higher CE-LV (p = 0.023). There were similar trends for triglyceride levels with the presence of CEL (p = 0.060) in the group not receiving statins. There was no evidence for associations between the presence of CEL and LDL (p = 0.80) or total cholesterol (p = 0.44) levels. There was also no evidence for an association between CE-LV and total cholesterol levels (p = 0.20). Greater levels of total cholesterol were associated as a trend with lower CEL number (p = 0.046), in part as a consequence of the HDL associations with CEL number. Lower CE-LV was also associated as a trend with lower levels of the cholesterol to HDL ratio (p = 0.025). There was no evidence for associations of LDL with CEL number (p = 0.44) or CE-LV (p = 0.89) in patients not on statins. There were no significant associations of T2-LV and T1-LV with any of the lipid profile variables (HDL, LDL, triglycerides, total cholesterol and cholesterol to HDL ratio) or BMI. However, lower BPF values were associated with higher total cholesterol levels (rp = -0.16, p = 0.033). There was also a trend toward an association between lower BPF values and higher total cholesterol in the sub-group that was not on statin treatment (rp = -0.16, p = 0.054). Discussion: In this paper, we have reported results indicating that lipid profile variables such as increased LDL, triglyceride and total cholesterol levels are associated with increased disability progression in MS. Higher HDL levels and lower levels of triglycerides were associated with decreased CEL activity, whereas higher total cholesterol levels were associated with lower BPF. The recruitment and extravasation of immune cells across the activated vascular endothelium of the blood-brain barrier is considered to be a critical step in MS pathogenesis [1]. MS is also associated with significant amounts of cerebral vascular endothelial dysfunction [28,29] and with cerebral hypoperfusion [30,31]. Our working hypothesis is that the pro-inflammatory and thrombogenic processes associated with dyslipidemia could plausibly contribute to disease progression in MS via diverse mechanisms at the blood-brain barrier vascular endothelium, e.g., by enhancing leukocyte recruitment, increasing endothelial dysfunction and increasing the risk of hypoperfusion. The effect size contributions of individual lipid profile variables to disability change were modest but significant: the partial correlation coefficient rp values were in the 0.10 - 0.15 range. We found greater EDSS worsening in patients with higher cholesterol (p = 0.001) and LDL (p = 0.006) levels at baseline. Similar associations were seen for the MSSS, a disability measure with better metric properties that corrects the EDSS for disease duration. Nonetheless, our results provide mechanistic support, albeit indirect, to the epidemiological findings of Marrie et al., who found that vascular comorbidities are associated with a substantially increased risk of disability progression in MS [22]. Long-term adherence to a low saturated fat diet has been implicated in better clinical outcomes in MS [32]. Although the MS cases in the Nurses' Health Study cohort did not indicate associations between diet and the risk of developing MS, an association between obesity during adolescence and the risk of developing MS has been reported [33]. The primary limitations of our study stem from its retrospective study design. 
Another caveat is the inclusion of statin-treated patients (22.2% of the sample). Because hypercholesterolemia occurs with greater frequency in older male patients, the inclusion of the statin-treated sub-group introduces demographic heterogeneity. We did not find evidence for differences in overall lipid profiles in the statin-treated subset, but the group on statin treatment was more frequently male and had greater mean age, disease duration, BMI and baseline EDSS scores, as well as a somewhat higher proportion of progressive MS, all of which would also be expected in an older and more frequently male MS patient group. This cluster of demographic characteristics is generally representative of statin-treated patients in the population. All of our statistical analyses were corrected for age and sex to address demographic differences. In addition to their direct effects on cholesterol production, statins exhibit pleiotropic immunomodulatory effects in vitro [34] and in chronic and relapsing experimental autoimmune encephalomyelitis, an animal model of MS [35]. Cholesterol is a major component of myelin, and statins may hinder remyelination by inhibiting cholesterol synthesis in the brain [36,37]. Studies of statin treatment in MS have likewise yielded mixed results [38-42]. Therefore, to further address limitations imposed by the pleiotropic effects of statins and the representative demographic differences, we conducted sub-analyses in patients who were not on statin therapy. Our statin-treated group did show a lower CEL number and CE-LV, with a higher T1-LV and a trend toward decreased BPF compared to the non-statin group. We avoided comparing the groups with and without statin treatment in the Results because this study was not designed to address the specific role, if any, of statins in MS therapeutics. In a study of 30 MS patients, statin treatment resulted in a significant decrease in the number and volume of CEL on serial monthly MRI [39]. A post hoc analysis of the interferon-beta-treated control arm of the SENTINEL study did not indicate an effect of statins on adjusted annualized relapse rate, disability progression, number of CEL, or number of new or enlarging T2-hyperintense lesions over 2 years [40]. The STAYCIS trial, which assessed statin treatment for slowing the conversion of CIS to clinically definite MS, did not meet its primary endpoint [41]. The SIMCOMBIN trial indicated that statin treatment did not provide benefit in MS patients on interferon-beta [43]. Our data suggest a negative influence of high cholesterol and triglycerides on disease course and a favorable influence of higher HDL levels on acute inflammatory activity in MS patients. Lifestyle changes, including adoption of a healthier diet and regular exercise to improve the serum lipid profile, may be beneficial for the neurological condition of MS patients. Supplementary Material: Additional file 1 contains the MRI Acquisition Protocol, Image Analysis methods and Table S1.
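For the MRI outcomes, the Methods describe logistic regression for the presence of contrast-enhancing lesions, Poisson log-linear regression for CEL number and Tweedie regression for cube-root-transformed CE-LV. A minimal, hypothetical sketch of that trio of models is shown below; it is not the original SPSS analysis, the variable names are assumptions, and the Tweedie variance power is an arbitrary illustrative choice.

```python
# Hypothetical sketch of the MRI lesion models described in the Data Analysis section.
# Not the authors' code; the data file, column names and var_power value are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ms_lipid_mri_subset.csv")  # the 210-patient MRI subset (hypothetical file)

# Lesion volumes were cube-root transformed to reduce skew
df["celv_cbrt"] = np.cbrt(df["ce_lesion_volume"])

covariates = "C(sex) + disease_duration + C(statin_use) + hdl"  # one lipid variable at a time

# Presence/absence of contrast-enhancing lesions: logistic regression
logit_fit = smf.logit(f"cel_present ~ {covariates}", data=df).fit()

# Number of contrast-enhancing lesions: Poisson log-linear regression
poisson_fit = smf.glm(f"cel_count ~ {covariates}", data=df,
                      family=sm.families.Poisson()).fit()

# Cube-root CE lesion volume: Tweedie regression (accommodates the many zero volumes)
tweedie_fit = smf.glm(f"celv_cbrt ~ {covariates}", data=df,
                      family=sm.families.Tweedie(var_power=1.5)).fit()

for name, fit in [("CEL presence", logit_fit), ("CEL count", poisson_fit), ("CE-LV", tweedie_fit)]:
    print(name, "HDL coefficient:", fit.params["hdl"], "p =", fit.pvalues["hdl"])
```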
Background: The breakdown of the blood-brain-barrier vascular endothelium is critical for entry of immune cells into the MS brain. Vascular co-morbidities are associated with increased risk of progression. Dyslipidemia, elevated LDL and reduced HDL may increase progression by activating inflammatory processes at the vascular endothelium. Methods: This study included 492 MS patients (age: 47.1 ± 10.8 years; disease duration: 12.8 ± 10.1 years) with baseline and follow-up Expanded Disability Status Score (EDSS) assessments after a mean period of 2.2 ± 1.0 years. The associations of baseline lipid profile variables with disability changes were assessed. Quantitative MRI findings at baseline were available for 210 patients. Results: EDSS worsening was associated with higher baseline LDL (p = 0.006) and total cholesterol (p = 0.001, 0.008) levels, with trends for higher triglyceride (p = 0.025); HDL was not associated. A similar pattern was found for MSSS worsening. Higher HDL levels (p < 0.001) were associated with lower contrast-enhancing lesion volume. Higher total cholesterol was associated with a trend for lower brain parenchymal fraction (p = 0.033). Conclusions: Serum lipid profile has modest effects on disease progression in MS. Worsening disability is associated with higher levels of LDL, total cholesterol and triglycerides. Higher HDL is associated with lower levels of acute inflammatory activity.
Introduction and Background: Multiple sclerosis (MS) is a complex inflammatory, demyelinating and neurodegenerative disease with a heterogeneous pathology and clinical outcomes [1]. The chronic inflammatory processes that characterize MS pathology interfere with immune mechanisms that regulate and confine the inflammatory cascade to prevent irreversible tissue damage [2]. Cholesterol is an important component of intact myelin. Lipids, especially lipoproteins, are involved in the regulation of neural functions in the central nervous system through local mechanisms that are linked to systemic lipid metabolism [3,4]. High-density lipoproteins (HDL) and low-density lipoproteins (LDL) play a key role in the transport of cholesterol and lipids in human plasma. Under normal physiological conditions, high concentrations of HDL and LDL are present in the CNS as a result of transport across the blood-brain barrier [5,6]. Apolipoprotein A-I, a major component of plasma HDL, is synthesized within the vascular endothelial cells [7]. HDL has immunomodulatory and anti-oxidant effects on endothelial cells [8] and has been shown to inhibit production of the pro-inflammatory cytokines interleukin-1beta and tumor necrosis factor [9,10]. Apolipoprotein A-I and paraoxonase are associated with HDL and contribute to its anti-oxidant and anti-inflammatory properties [9,11,12]. Dyslipidemia can potentiate inflammatory processes at the vascular endothelium, leading to the induction of adhesion molecules and the recruitment of monocytes [13-15]. Associations between dyslipidemia and increased inflammation are well established in conditions such as atherosclerosis, cardiovascular disease, metabolic syndrome and obesity [16]. In the context of autoimmune diseases, a strong association between dyslipidemia and cardiovascular disease has emerged in systemic lupus erythematosus [17], and increased cardiovascular risk and lipid profile changes have been reported in rheumatoid arthritis [18]. HDL and LDL also modulate the function and survival of β-cells in Type 2 diabetes mellitus [19]. Neuromyelitis optica patients were reported to have significantly higher serum cholesterol and triglycerides and lower LDL than healthy controls [20]. However, only limited information is available on the effect of serum triglycerides and cholesterol levels and the roles of HDL and LDL levels on MS disease progression. Increased total cholesterol was associated with increases in the number of contrast-enhancing lesions on brain MRI in clinically isolated syndrome patients following a first clinical demyelinating event [21]. MS patients were found to have a higher occurrence of hypercholesterolemia, and paraoxonase-1, the anti-oxidant enzyme associated with HDL, was decreased during relapses [12]. A retrospective analysis of a large dataset of 8,983 patients from the North American Research Committee on Multiple Sclerosis Registry reported that the presence of vascular comorbidities linked to dyslipidemia was associated with an increased risk for disability progression in MS [22]. The aim of this study therefore was to assess the associations of serum lipid profile variables (serum cholesterol, HDL, LDL and triglycerides) with clinical disability and brain tissue integrity as measured with quantitative magnetic resonance imaging (MRI) metrics in a large cohort of MS patients. 
Discussion: BWG contributed to study design, oversaw all clinical aspects of the project including clinical data acquisition, data analysis and interpretation and manuscript preparation. RZ contributed to study design, MRI data acquisition, data interpretation and manuscript preparation. EC contributed to MRI data acquisition. AD contributed to clinical data acquisition. JS contributed to clinical data acquisition. BT oversaw clinical data acquisition. SH contributed to data acquisition. BM contributed to clinical data acquisition. MW contributed to clinical data acquisition. JD contributed to MRI data acquisition. NB contributed to MRI data acquisition. MR contributed to study design, data analysis and interpretation and manuscript preparation. All authors read and approved the final manuscript.
Background: The breakdown of the blood-brain-barrier vascular endothelium is critical for entry of immune cells into the MS brain. Vascular co-morbidities are associated with increased risk of progression. Dyslipidemia, elevated LDL and reduced HDL may increase progression by activating inflammatory processes at the vascular endothelium. Methods: This study included 492 MS patients (age: 47.1 ± 10.8 years; disease duration: 12.8 ± 10.1 years) with baseline and follow-up Expanded Disability Status Score (EDSS) assessments after a mean period of 2.2 ± 1.0 years. The associations of baseline lipid profile variables with disability changes were assessed. Quantitative MRI findings at baseline were available for 210 patients. Results: EDSS worsening was associated with higher baseline LDL (p = 0.006) and total cholesterol (p = 0.001, 0.008) levels, with trends for higher triglyceride (p = 0.025); HDL was not associated. A similar pattern was found for MSSS worsening. Higher HDL levels (p < 0.001) were associated with lower contrast-enhancing lesion volume. Higher total cholesterol was associated with a trend for lower brain parenchymal fraction (p = 0.033). Conclusions: Serum lipid profile has modest effects on disease progression in MS. Worsening disability is associated with higher levels of LDL, total cholesterol and triglycerides. Higher HDL is associated with lower levels of acute inflammatory activity.
9,097
265
[ 573, 457, 27, 9, 185, 109, 518, 2728, 628, 356, 369, 866 ]
14
[ "cholesterol", "edss", "hdl", "baseline", "lipid", "lipid profile", "profile", "statin", "patients", "higher" ]
[ "inflammatory activity ms", "transport cholesterol lipids", "density lipoproteins hdl", "brain barrier apolipoprotein", "remyelination inhibiting cholesterol" ]
null
[CONTENT] Multiple sclerosis | diet | lipid profile | MRI | environmental factors | gene-environment interactions | lesion volume | brain atrophy [SUMMARY]
[CONTENT] Multiple sclerosis | diet | lipid profile | MRI | environmental factors | gene-environment interactions | lesion volume | brain atrophy [SUMMARY]
null
[CONTENT] Multiple sclerosis | diet | lipid profile | MRI | environmental factors | gene-environment interactions | lesion volume | brain atrophy [SUMMARY]
[CONTENT] Multiple sclerosis | diet | lipid profile | MRI | environmental factors | gene-environment interactions | lesion volume | brain atrophy [SUMMARY]
[CONTENT] Multiple sclerosis | diet | lipid profile | MRI | environmental factors | gene-environment interactions | lesion volume | brain atrophy [SUMMARY]
[CONTENT] Adult | Blood-Brain Barrier | Cholesterol | Disability Evaluation | Disease Progression | Endothelium, Vascular | Female | Humans | Lipids | Magnetic Resonance Imaging | Male | Middle Aged | Multiple Sclerosis | Retrospective Studies | Triglycerides [SUMMARY]
[CONTENT] Adult | Blood-Brain Barrier | Cholesterol | Disability Evaluation | Disease Progression | Endothelium, Vascular | Female | Humans | Lipids | Magnetic Resonance Imaging | Male | Middle Aged | Multiple Sclerosis | Retrospective Studies | Triglycerides [SUMMARY]
null
[CONTENT] Adult | Blood-Brain Barrier | Cholesterol | Disability Evaluation | Disease Progression | Endothelium, Vascular | Female | Humans | Lipids | Magnetic Resonance Imaging | Male | Middle Aged | Multiple Sclerosis | Retrospective Studies | Triglycerides [SUMMARY]
[CONTENT] Adult | Blood-Brain Barrier | Cholesterol | Disability Evaluation | Disease Progression | Endothelium, Vascular | Female | Humans | Lipids | Magnetic Resonance Imaging | Male | Middle Aged | Multiple Sclerosis | Retrospective Studies | Triglycerides [SUMMARY]
[CONTENT] Adult | Blood-Brain Barrier | Cholesterol | Disability Evaluation | Disease Progression | Endothelium, Vascular | Female | Humans | Lipids | Magnetic Resonance Imaging | Male | Middle Aged | Multiple Sclerosis | Retrospective Studies | Triglycerides [SUMMARY]
[CONTENT] inflammatory activity ms | transport cholesterol lipids | density lipoproteins hdl | brain barrier apolipoprotein | remyelination inhibiting cholesterol [SUMMARY]
[CONTENT] inflammatory activity ms | transport cholesterol lipids | density lipoproteins hdl | brain barrier apolipoprotein | remyelination inhibiting cholesterol [SUMMARY]
null
[CONTENT] inflammatory activity ms | transport cholesterol lipids | density lipoproteins hdl | brain barrier apolipoprotein | remyelination inhibiting cholesterol [SUMMARY]
[CONTENT] inflammatory activity ms | transport cholesterol lipids | density lipoproteins hdl | brain barrier apolipoprotein | remyelination inhibiting cholesterol [SUMMARY]
[CONTENT] inflammatory activity ms | transport cholesterol lipids | density lipoproteins hdl | brain barrier apolipoprotein | remyelination inhibiting cholesterol [SUMMARY]
[CONTENT] cholesterol | edss | hdl | baseline | lipid | lipid profile | profile | statin | patients | higher [SUMMARY]
[CONTENT] cholesterol | edss | hdl | baseline | lipid | lipid profile | profile | statin | patients | higher [SUMMARY]
null
[CONTENT] cholesterol | edss | hdl | baseline | lipid | lipid profile | profile | statin | patients | higher [SUMMARY]
[CONTENT] cholesterol | edss | hdl | baseline | lipid | lipid profile | profile | statin | patients | higher [SUMMARY]
[CONTENT] cholesterol | edss | hdl | baseline | lipid | lipid profile | profile | statin | patients | higher [SUMMARY]
[CONTENT] inflammatory | anti | hdl | dyslipidemia | increased | serum | cardiovascular | lipoproteins | anti oxidant | oxidant [SUMMARY]
[CONTENT] edss | baseline | regression | included | analyzed | study | lv | analysis | mri | lipid profile [SUMMARY]
null
[CONTENT] ms | statin | treated | statin treated | patients | higher | disability | group | vascular | effects [SUMMARY]
[CONTENT] cholesterol | edss | baseline | study | hdl | higher | lv | lipid | statin | patients [SUMMARY]
[CONTENT] cholesterol | edss | baseline | study | hdl | higher | lv | lipid | statin | patients [SUMMARY]
[CONTENT] ||| ||| HDL [SUMMARY]
[CONTENT] 492 | 47.1 | 10.8 years | 12.8 | 10.1 years | Expanded Disability Status Score | EDSS | 2.2 ± | 1.0 years ||| ||| 210 [SUMMARY]
null
[CONTENT] MS ||| ||| HDL [SUMMARY]
[CONTENT] ||| ||| HDL ||| 492 | 47.1 | 10.8 years | 12.8 | 10.1 years | Expanded Disability Status Score | EDSS | 2.2 ± | 1.0 years ||| ||| 210 ||| ||| EDSS | 0.006 | 0.001 | 0.008 | 0.025 | HDL ||| ||| HDL ||| 0.033 ||| MS ||| ||| HDL [SUMMARY]
[CONTENT] ||| ||| HDL ||| 492 | 47.1 | 10.8 years | 12.8 | 10.1 years | Expanded Disability Status Score | EDSS | 2.2 ± | 1.0 years ||| ||| 210 ||| ||| EDSS | 0.006 | 0.001 | 0.008 | 0.025 | HDL ||| ||| HDL ||| 0.033 ||| MS ||| ||| HDL [SUMMARY]
Refugee camp health services utilisation by non-camp residents as an indicator of unaddressed health needs of surrounding populations: a perspective from Mae La refugee camp in Thailand during 2006 and 2007.
31312300
This study explored the differences in the level of medical care required by camp and non-camp resident patients when utilising the health services in Mae La refugee camp, Tak province, Thailand, during the years 2006 and 2007.
INTRODUCTION
Data were extracted from camp registers and the Health Information System in use during the years 2006 and 2007, and statistical analysis was performed.
METHODS
The analysis showed that, during 2006 and 2007, non-camp resident patients, coming from Thailand as well as Myanmar, who sought care in the outpatient department (OPD) of the camp required admission to the inpatient department (IPD) or referral to the district hospital in a significantly higher proportion than camp resident patients. Although mortality among non-camp resident patients admitted to the IPD was significantly higher than among camp resident patients, there was no significant difference in mortality between the two groups when referrals to the district hospital were analysed.
RESULTS
Non-camp resident patients tended to need a more advanced level of medical care than camp resident patients. Provided that it is further validated, the observed pattern might be useful as an indirect indicator of unaddressed health needs of populations surrounding a refugee camp.
CONCLUSION
[ "Health Services", "Health Services Needs and Demand", "Humans", "Patient Acceptance of Health Care", "Refugee Camps", "Refugees", "Thailand" ]
6620057
Introduction
Mae La refugee camp (Tak Province, Thailand) was originally established in 1984. As of the end of June 2007 the camp had a population of 49,783 [1]. Since 2005 the non-governmental organisation (NGO) Aide Médicale Internationale (AMI) had been responsible for all aspects of medical care in the camp, with the exception of the tuberculosis programme, which was taken care of by Médecins Sans Frontières (MSF), and obstetric care, which was managed by the Shoklo Malaria Research Unit (SMRU). The health facilities run by AMI consisted of two Out-Patient Departments (OPD), one 154-bed In-Patient Department (IPD), two laboratories and a pharmacy stock, all located within the refugee camp. All these departments were staffed by refugees trained to work independently as medics (performing medical acts), nurses or laboratory technicians. In addition, a minimum of one and a maximum of two full-time expatriate medical doctors, specifically assigned to clinical supervision responsibilities, provided daily support to the refugee staff. Most patients were dealt with in the OPD, while complicated cases were hospitalised in the IPD. Selected patients were referred to Mae Sot General Hospital (MSGH), the district hospital, located at a distance of approximately one hour's drive by car from the camp. These referrals were mainly for surgical operations, radiology exams or specialist consultations. Samples for laboratory tests that could not be performed in the camp's laboratory were also sent there. The Mae La facilities constituted an example of a basic health care system involving versatile health workers able to cope with a variety of health problems [2]. Delivery of health care and drugs was free of charge for all patients, independent of whether or not they were camp residents. One of the authors (LCA), while working in the camp during 2007-2008, observed that patients who were not camp residents presented in a worse general health status than camp residents. They suffered from more complicated morbidity, often neglected for a long time, and appeared to represent a considerable and disproportionate part of the staff's workload. These patients were coming from the Myanmar side of the border as well as from Thai villages in the vicinity. By contrast, camp residents (defined as people living in the camp for more than three months continuously, independently of whether they were UNHCR-registered refugees or not) appeared to be in better general health. In order to quantify the burden of non-camp resident patients on the camp's health care facilities, a detailed review of the available anonymous data was organized. The research question was whether there was a difference between camp and non-camp residents in the level of medical care they required when utilising the health services available in Mae La refugee camp.
Methods
This study was a purely descriptive observational evaluation based on routinely collected anonymous and non-identifiable population data. Figures on OPD consultations and IPD admissions for camp residents as well as non-camp residents were extracted from the Health Information System (HIS) used by Aide Médicale Internationale in Mae La refugee camp for the years 2006 and 2007. Figures concerning referrals to Mae Sot General Hospital for the same years (2006 and 2007) of camp and non-camp resident patients were obtained from the camp's referrals register. Mortality figures in the IPD as well as among referrals to the district hospital for camp and non-camp resident patients were extracted from the HIS for the years 2006 and 2007. Statistical analysis was performed by the use of STATA software (version 10.1). The absolute numbers and percentages of non-camp residents in the total number of OPD consultations, IPD admissions, and referrals to the district hospital were calculated per month and subsequently were used to calculate the values for 2006 and 2007 respectively. The association between the origin of the patients (camp or non-camp) and the different levels of care that was required (OPD consultation, IPD admission, referral to district hospital) was tested using chi-square test. P-value < 0.001 was considered statistically significant. For IPD admissions and for hospital referrals, the association between the origin of the patients and the final outcome (death or not death) was tested by means of Fisher's exact test.
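As a rough illustration of the association test described above, the sketch below assembles a contingency table of patient origin (camp vs. non-camp resident) against the level of care required (OPD consultation, IPD admission, district hospital referral) and applies a chi-square test with scipy rather than STATA. The IPD and referral counts are the 2006 figures reported in the Results section; the OPD counts are approximate placeholders back-calculated from the reported 3.75% non-camp share of OPD consultations, so the output should be read as illustrative only.

```python
# Illustrative re-creation of the origin-by-level-of-care chi-square test
# (the study used STATA 10.1; this sketch uses scipy instead).
from scipy.stats import chi2_contingency

# Rows: camp residents, non-camp residents.
# Columns: OPD consultations, IPD admissions, district hospital referrals.
# IPD and referral counts are the reported 2006 figures; OPD counts are
# approximate placeholders consistent with the reported 3.75% non-camp share.
table = [
    [100100, 8078, 486],  # camp residents
    [  3900,  722,  92],  # non-camp residents
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"Pearson chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
# A p-value below the study's 0.001 threshold would indicate that the level of care
# required differs between camp and non-camp resident patients.
```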
Results
During the year 2006, non-camp residents comprised 3.75% of all OPD consultations, 8.2% of all IPD admissions and 15.91% of all referrals to the Thai hospital. The same pattern was also evident in 2007, with non-camp residents comprising 3.16% of OPD consultations, 7.78% of IPD admissions and 12.52% of referrals (Table 1). For the year 2007 we also had detailed data on the origin of non-camp resident patients. Of the total OPD consultations by non-camp resident patients during that year (n=3814), 37.86% (n=1444) came from Myanmar (Burma), 60.59% (n=2311) came from Thailand and 1.54% (n=59) concerned follow-up consultations in which the exact origin of the non-camp resident patient was not registered again. Of all the non-camp resident patients admitted to the Mae La IPD during 2007 (n=539), 51.39% (n=277) came from Myanmar while 48.61% (n=262) came from Thailand. Of the non-camp resident patients referred to the district hospital during the same year (n=61), 80.33% (n=49) came from Myanmar while 19.67% (n=12) came from Thailand (Table 2). During the year 2006, IPD mortality was 34.63/1000 admissions for non-camp resident patients (25 deaths out of 722 admissions) and 8.67/1000 admissions for camp resident patients (70 of 8078). In the same year, mortality among referrals to the district hospital was 21.74/1000 admissions for non-camp resident patients (2 of 92) and 41.15/1000 admissions for camp resident patients (20 of 486). During 2007, IPD mortality was 57.51/1000 admissions for non-camp resident patients (31 deaths out of 539 admissions) and 11.58/1000 admissions for camp resident patients (74 of 6390). In the same year, mortality among referrals to the district hospital was 81.97/1000 admissions for non-camp resident patients (5 of 61) and 61.03/1000 admissions for camp resident patients (26 of 426) (Table 3). The association between the origin of the patient (camp or non-camp resident) and the level of care needed was tested using the chi-square test. Using the 2006 data, there was a statistically significant difference between the origin of patients and the level of care they needed (Pearson chi2=619.38, P=0.000), with non-camp resident patients tending to need a more advanced level of medical care than camp resident patients. Similar results were obtained with the 2007 data (Pearson chi2=539.39, P=0.000): in 2007, too, there was a statistically significant difference between the origin of patients and the level of care needed, and non-camp resident patients tended to need a more advanced level of medical care than camp resident patients. (Table 1: Proportion of non-camp residents in the total number of medical acts through different levels of medical care in Mae La refugee camp during 2006 and 2007. Table 2: Non-camp resident patients during 2007 per country of origin. Table 3: Fatal cases per 1000 admissions for IPD and referrals; IPD: inpatient department.) In addition, for 2007 the association with the country of origin (Myanmar or Thailand) within the non-camp resident patient group was also tested, as data were available for this year (Table 2). A statistically significant difference between the two subgroups was found, with non-camp resident patients coming from Myanmar tending to need a more advanced level of medical care than non-camp resident patients coming from Thailand (Pearson chi2=73.083, P=0.000). The association between the origin of patients and the final outcome (death or survival) was tested using Fisher's exact test. 
Regarding IPD admissions in 2006, there was a statistically significant increased mortality in non-camp resident patients compared to camp resident patients (Fisher's exact P=0.000). The final outcome of cases that were referred to the district hospital in 2006 was not associated with their origin, that is there was no statistically significant difference in mortality between non-camp and camp resident patients who were referred to the district hospital (Fisher's exact P=0.5546). Similar results were obtained when we used data regarding 2007. For IPD admissions in 2007 there was a statistically significant increased mortality in non-camp resident patients compared to camp resident patients (Fisher's exact P=0.000). Also during this year there was no statistically significant difference in mortality between non-camp and camp resident patients referred to the district hospital (Fisher's exact P=0.5718).
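The mortality comparison reported above can be checked directly from the counts given in this section (2006: 25 deaths among 722 non-camp IPD admissions vs. 70 among 8,078 camp admissions; 2007: 31/539 vs. 74/6,390). The short sketch below recomputes the deaths per 1,000 admissions and Fisher's exact test with scipy; it is an illustration of the reported analysis, not the authors' original STATA code.

```python
# Recompute IPD mortality per 1,000 admissions and Fisher's exact test
# from the counts reported in the Results section.
from scipy.stats import fisher_exact

# (deaths, admissions) by year and patient origin, as reported above
ipd = {
    2006: {"non-camp": (25, 722), "camp": (70, 8078)},
    2007: {"non-camp": (31, 539), "camp": (74, 6390)},
}

for year, groups in ipd.items():
    for origin, (deaths, admissions) in groups.items():
        rate = 1000 * deaths / admissions
        print(f"{year} {origin}: {rate:.2f} deaths per 1000 admissions")
    d_nc, n_nc = groups["non-camp"]
    d_c, n_c = groups["camp"]
    # 2x2 table of deaths vs. survivors by origin
    _, p = fisher_exact([[d_nc, n_nc - d_nc], [d_c, n_c - d_c]])
    print(f"{year} Fisher's exact p = {p:.2e}")
```

The same 2x2 construction with the referral counts given above (2 of 92 vs. 20 of 486 in 2006; 5 of 61 vs. 26 of 426 in 2007) corresponds to the non-significant Fisher's exact comparisons reported for the district hospital referrals.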
Conclusion
A considerable number of OPD consultations in Mae La refugee camp during 2006 and 2007 involved patients who were non-camp residents, originating from surrounding Thai villages and the Myanmar side of the border. These patients required admission to the IPD of the camp or referral to the district hospital in a statistically significantly higher proportion than camp residents. Non-camp resident patients admitted to the IPD suffered from a statistically significantly increased mortality compared to camp residents admitted to the same department. This pattern of refugee camp health services utilisation by non-camp residents might reflect a worse health status of the non-camp resident population compared to the camp resident one, possibly indicating unaddressed health needs in this population. The health facilities of a refugee camp during the chronic stable phase might function as a reference health facility for populations from the surrounding area who might have difficulties in accessing other health care facilities. Further research in this field is needed in order to map any unaddressed health needs in the Thailand-Myanmar border area and provide an update of the current situation. This could subsequently be compared with the pattern observed during 2006 and 2007 described in this study. Provided that this pattern is further validated in similar settings in other geographic areas, it might prove a useful indirect indicator of unaddressed health needs of surrounding populations living in proximity to refugee camps or similar settlements.
What is known about this topic: From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members; in some cases refugee hosting improved the quality and accessibility of health services and health outcomes for the host national population, but there are limited data to support integrated health services; analysis of health services utilisation patterns for refugees and host population might help future policy planning.
What this study adds: A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require admission to the IPD or referral to the district hospital in a significantly higher proportion than camp resident patients might indicate unaddressed health needs of populations living in the area around the refugee camp; a pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area; these health services utilisation patterns, observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007, should be further explored in similar settings in other geographic areas.
[ "What is known about this topic", "What this study adds" ]
[ "From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members;\nIn some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services;\nAnalysis of health services utilization patterns for refugees and host population might help future policy planning.", "A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area around the refugee camp;\nA pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area;\nThese health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Mae La Refugee camp (Tak Province, Thailand) was originally established in 1984. As of end of June 2007 the camp had a population of 49,783 [1]. Since 2005 the non-governmental organisation (NGO) Aide Médicale Internationale (AMI) was responsible for all aspects of medical care in the camp with the exception of the tuberculosis program which was taken care by Médecins Sans Frontières (MSF) and obstetrics care which was managed by Shoklo Malaria Research Unit (SMRU). The health facilities run by AMI consisted of two Out-Patient Departments (OPD), one 154-bed In-Patient department (IPD) as well as two laboratories and a pharmacy stock, all located within the refugee camp. All these departments were staffed by refugees trained to work independently as medics (performing medical acts), nurses or lab technicians. In addition to them there were minimum one to maximum two full time expatriate medical doctors specifically assigned to clinical supervision responsibilities who were providing daily support to the refugee staff.\nMost of the patients were dealt with in the OPD, while the complicated cases were hospitalised in the IPD. Selected patients were referred to Mae Sot General Hospital (MSGH) which was the district hospital and was located on a distance approximately one hour drive by car from the camp. These patient referrals were mainly for surgical operations, radiology exams or specialist consultations. Samples for laboratory tests which could not be performed in the camp's lab were also sent there. Mae La facilities constituted an example of a basic health care system which involved versatile health workers able to cope with a variety of health problems [2]. Delivery of health care and drugs was free of charge for all patients independent of their status being camp residents or not. One of the authors (LCA) while working in the camp during 2007-2008 observed that patients who were not camp residents, presented in a worse general health status compared to camp residents. They suffered from more complicated morbidity often neglected for long time and appeared to represent a considerable and disproportionate part of the staff's workload. These patients were coming from Myanmar side of the border, as well as from Thai villages in the vicinity. By contrast camp residents (defined as people living in the camp for more than three months continuously and independently of whether they were UNHCR registered refugees or not) appeared to be in better general health status. In order to quantify the burden of non-camp resident patients on the camp's health care facilities a detailed review of the available anonymous data was organized. The research question was whether there was a difference between camp and non-camp residents on the level of medical care, they required during utilisation of the health services available in Mae La refugee camp.", "This study was a purely descriptive observational evaluation based on routinely collected anonymous and non-identifiable population data. Figures on OPD consultations and IPD admissions for camp residents as well as non-camp residents were extracted from the Health Information System (HIS) used by Aide Médicale Internationale in Mae La refugee camp for the years 2006 and 2007. Figures concerning referrals to Mae Sot General Hospital for the same years (2006 and 2007) of camp and non-camp resident patients were obtained from the camp's referrals register. 
Mortality figures in the IPD as well as among referrals to the district hospital for camp and non-camp resident patients were extracted from the HIS for the years 2006 and 2007. Statistical analysis was performed by the use of STATA software (version 10.1). The absolute numbers and percentages of non-camp residents in the total number of OPD consultations, IPD admissions, and referrals to the district hospital were calculated per month and subsequently were used to calculate the values for 2006 and 2007 respectively. The association between the origin of the patients (camp or non-camp) and the different levels of care that was required (OPD consultation, IPD admission, referral to district hospital) was tested using chi-square test. P-value < 0.001 was considered statistically significant. For IPD admissions and for hospital referrals, the association between the origin of the patients and the final outcome (death or not death) was tested by means of Fisher's exact test.", "During the year 2006 non-camp residents comprised 3.75% of all OPD consultations, 8.2% of all IPD admissions and 15.91% of all referrals to the Thai hospital. The same pattern was also evident for 2007 with non-camp residents comprising 3.16% of OPD, 7.78% of IPD and 12.52% of referrals (Table 1). Specifically for the year 2007 we had also detailed data on the origin of non-camp resident patients. From the total OPD consultations of non-camp resident patients during this year (n=3814), 37.86% (n=1444) came from Myanmar (Burma), 60.59% (n= 2311) came from Thailand and 1.54% (n=59) concerned follow up consultations in which the exact origin of the non-camp resident patient was not registered again. From all the non-camp resident patients who were admitted in Mae La IPD during 2007 (n=539), 51.39% (n=277) came from Myanmar while 48.61% (n=262) came from Thailand. From the non-camp resident patients referred to the district hospital during the same year (n=61), 80.33% (n=49) came from Myanmar while 19.67% (n=12) came from Thailand (Table 2). During the year 2006 IPD mortality for non-camp resident patients was 34.63/1000 admissions (25 deaths out of 722 admissions) and for camp resident patients was 8.67/1000 admissions (70 of 8078). The same year the mortality among referrals to the district hospital for non-camp resident patients was 21.74/1000 admissions (2 of 92) and for camp resident patients was 41.15/1000 admissions (20 of 486). During 2007 IPD mortality for non-camp resident patients was 57.51/1000 admissions (31 deaths out of 539 admissions) and for camp resident patients was 11.58/1000 admissions (74 of 6390). The same year the mortality among referrals to the district hospital for non-camp resident patients was 81.97/1000 admissions (5 of 61) and for camp resident patients was 61.03/1000 admissions (26 of 426) (Table 3). The association between the origin of the patient (camp or non-camp resident) and the different levels of care that was needed was tested using chi-square test. When using the 2006 data there was a statistically significant difference between the origin of patients and the level of care they needed (Pearson chi2=619.38, P=0.000). Non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients. Similar results were obtained when data regarding 2007 were used (Pearson chi2=539.39, P=0.000). 
During 2007 too there was a statistically significant difference between the origin of patients and the level of care needed and non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients.\nProportion of non-camp residents in the total number of medical acts through different levels of medical care in Mae La refugee camp during 2006 and 2007\nNon-camp resident patients during 2007 per country of origin\nFatal cases per 1000 admissions for IPD and referrals\nIPD: inpatient department\nIn addition, for 2007 the association between the countries of origin (Myanmar or Thailand) within the non-camp resident patients group was also tested as data were available for this year (Table 2). A statistically significant difference between the two subgroups was found, with non-camp resident patients coming from Myanmar tending to need more advanced level of medical care compared to non-camp resident patients coming from Thailand (Pearson chi2=73.083, P=0.000). The association between the origin of patients and the final outcome (death or not death) was tested using Fisher's exact test. Regarding IPD admissions in 2006, there was a statistically significant increased mortality in non-camp resident patients compared to camp resident patients (Fisher's exact P=0.000). The final outcome of cases that were referred to the district hospital in 2006 was not associated with their origin, that is there was no statistically significant difference in mortality between non-camp and camp resident patients who were referred to the district hospital (Fisher's exact P=0.5546). Similar results were obtained when we used data regarding 2007. For IPD admissions in 2007 there was a statistically significant increased mortality in non-camp resident patients compared to camp resident patients (Fisher's exact P=0.000). Also during this year there was no statistically significant difference in mortality between non-camp and camp resident patients referred to the district hospital (Fisher's exact P=0.5718).", "As shown in Table 1 there was a statistically significant increase in the portion of non-camp resident patients in each level of care as we proceed from OPD consultations to IPD admissions and to district hospital referrals during 2006 and 2007 in Mae La refugee camp. Non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients during the same period. This pattern (Figure 1) might reflect a worse general health status as well as more complicated or neglected for long time morbidity of non-camp residents compared to camp residents. It could potentially indicate unaddressed health needs of surrounding populations within Myanmar as well as within Thailand during the period studied. Globally it is common for refugee camp health facilities to provide care also to non-camp residents. On average 2% of OPD consultations in Asia and 21% in Africa were attributable to host community members [3]. Many different reasons might have influenced health care seeking behaviour of non-camp residents in the case of Mae La camp. Proximity to the camp could have been a contributing factor as certain Thai villages (eg. Ban Mae Ok Hu, Ban Mae La) were located closer to the camp's medical facilities than the closest public Thai hospital. The camp was located just next to a major paved road connecting the border towns of Mae Ramat and Tha Song Yang, so it was easily accessible. 
Language was another factor as many non-camp resident patients were ethnic Karen and they spoke Karen and Burmese. Communication difficulties for non-Thai speakers while seeking care in Thai health system could lead to frustration or dissatisfaction. By contrast most of the staff who worked in the camp medical facilities spoke Karen, Burmese, English and often also Thai. During 2007 in the two districts around Mae La refugee camp (Tha Song Yang and Mae Ramat districts) the migrant Burmese population with all ethnicities included was estimated to be 30,405 out of a total population of 137,020 (excluding the refugee camp population). Many of these migrants were not officially registered and had no access to social security. During 2007 only 17,633 migrants had social security in the whole Tak province [4]. Uninsured migrants who were living in Thailand as well as patients who were arriving from the Myanmar side of the border might have been more motivated to use the health care services available in Mae La camp, as medical care there as well as any potential referral to a Thai hospital was provided free of charge. Internationally displaced people remain a disadvantaged population even if they become integrated in the local host community. This has been demonstrated in rural South Africa where higher childhood mortality rates were found among children from former Mozambican refugee households compared to those from the host community [5]. In a study among refugees living in Pretoria, South Africa several challenges have been identified. These included the luck of security, language barriers, the difficulty of obtaining legal papers, the luck of employment, deplorable living conditions and falling victims of crime or police harassment [6]. On the other hand in Guinea, refugees were allowed to settle in existing villages instead of refugee camps and were given free access to national health services, which were reinforced. The improvement of the health system and transport infrastructure was reflected in an increase of the rates of major obstetric interventions for the host population [7].\nPercentage of camp resident patients and non-camp resident patients in the different levels of medical care during the years 2006 and 2007 in Mae La refugee camp, Tak province, Thailand. OPD: outpatient department; IPD: inpatient department\nWhen trying to explain the increased morbidity of non-camp resident patients compared to camp resident patients we have to acknowledge the situation in Myanmar at the same time period. At that time there were still ongoing conflicts in the eastern provinces of the country having a direct effect on the health status of the affected communities. For populations living in conflict zones in eastern Burma, high crude, under-five and infant mortality rates have been described [8]. In Karen state as of October 2007 the number of Internally Displaced Persons was estimated at 116,900 [9]. Conflict and displacement might further decrease access of the affected population to primary and secondary care as well as essential drugs, in a country where the per capita government expenditure on health is already very low. In Myanmar this was at 4.9 international dollar rate in 2004 compared to Thailand at 189.7 for the same year [10]. Different approaches have been used to improve the health outcomes in this region, one of which included the training of village health workers to implement malaria control interventions among the internal displaced populations [11]. 
In addition to local backpack health worker teams supported by cross border local-global partnerships, mobile obstetric maternal health workers have been providing some health services to internally displaced persons in eastern Burma [12, 13]. Nevertheless a considerable number of patients was crossing the border between Myanmar and Thailand in order to obtain health care when this was unavailable or unaffordable locally [14, 15]. On the other hand Mae La refugee camp was a stable protected environment located within Thailand, where many aspects of human security were catered for [16]. Situated in short distance, eight kilometers, from the Thai-Myanmar border it was accessible on foot. The camp provided decent medical facilities with health care workers who could speak both Karen and Burmese. Medical care, basic lab exams and quality drugs were provided free of charge. A patient arriving there would receive free primary outpatient or inpatient care according to his needs and had many chances to be reviewed by a qualified medical doctor, or even referred for specialised exams or a surgical operation in a well equipped Thai hospital, with all expenses paid by NGOs. A refugee camp may develop de facto and out of necessity into an important health care provider and a reference health care facility for unprivileged populations in its vicinity besides the refugee population itself. This is easier during a stable, chronic, post-emergency phase especially when the inpatient and outpatient departments within the camp are well supported and equipped by different actors, and the possibility of district hospital referral exists, as it was the case in Mae La. Its function can be complementary to existing state health facilities as it may provide care to populations with limited access to the health system of the host country as well as to individuals arriving from conflict affected areas of a neighboring country. Although the ideal model of health service delivery for refugees and host population is not yet clear, analysis of utilisation patterns for refugees and host population might help future policy planning in this field [17].\nSeveral limitations exist in this study. It is possible that only poor non-camp patients from the host community sought care in the camp. Compromised health status due to malnutrition, bad hygiene or poor living conditions as consequences of poverty could reflect in our data. Medics might have been more willing to admit in the IPD a non-camp resident patient who was unable to stay overnight in the camp in situations where outpatient follow up would have been preferred for camp residents. Selection bias during the referral procedure could also occur. There were budget constrains as transport, examinations and hospitalisation costs for all patients referred to the district hospital were paid by AMI. As a result the majority of referrals, from both patient groups, were acute surgical referrals (e.g. fractures, trauma patients, appendicitis, intestinal obstruction, pyomyositis) which could be dealt with quickly and cost effectively with a good prognosis. Acute and chronic medical problems, infections, malaria, HIV/AIDS, blood transfusions, cancer and terminally ill patients were taken care in the OPD and IPD of the camp with the means available there. This could influence and partly explain the difference observed in mortality among the two groups in IPD admissions and its absence among those referred to hospital. 
Another aspect is the exact geographic distribution by origin of the non-camp resident patients who sought care in Mae La camp. Although not covered in this study, it might have been useful to know precisely from which villages within Thailand or Myanmar these patients came. Such a mapping could point to specific geographic areas in both sides of the border with poor access to primary and/ or secondary health care. Due to lack of data regarding the number and geographic distribution of host and migrant populations in Thailand as well as of populations in the Myanmar side of the border, the frequency that non-camp resident populations used the health services of Mae La camp could not be calculated in this study. Although the data in this study are relatively old, they can provide a historical perspective of the situation during the chronic stable phase of one of the biggest refugee camps in the Thailand-Myanmar border area during the years 2006 and 2007. This information can be useful for comparisons with different geographic areas or newer data in other studies. Furthermore from 2008 onwards a new HIS for data collection was implemented in the camp. This was introduced by United Nations High Commissioner for Refugees (UNHCR) in order to standardise data collection across refugee camps. The new HIS differentiated only two groups of patients: refugee versus host country patients. As a result information concerning the detailed origin of non-camp resident patients, which is documented in this study, was not routinely collected anymore [18].", "A considerable number of OPD consultations in Mae La refugee camp during 2006 and 2007 involved patients who were non-camp residents, originating from surrounding Thai villages and the Myanmar side of the border. These patients required at a statistically significant higher proportion admission to the IPD of the camp or referral to the district hospital in comparison with camp residents. Non-camp resident patients admitted in IPD suffered from a statistically significant increased mortality compared to camp residents admitted in the same department. This pattern of refugee camp health services utilisation by non-camp residents might reflect a worse health status of the non-camp resident population compared to the camp resident one, possibly indicating unaddressed health needs in this population. The health facilities of a refugee camp during the chronic stable phase might function as a reference health facility for populations from the surrounding area who might have difficulties in accessing other health care facilities. Further research in this field is needed in order to map any unaddressed health needs in the Thailand-Myanmar border area and provide an update of the current situation. This could be subsequently compared with the pattern observed during 2006 and 2007 described in this study. 
Provided that this pattern is further validated in similar settings in other geographic areas, it might prove a useful indirect indicator of unaddressed health needs of surrounding populations living in proximity to refugee camps or similar settlements.\n What is known about this topic From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members;\nIn some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services;\nAnalysis of health services utilization patterns for refugees and host population might help future policy planning.\nFrom the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members;\nIn some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services;\nAnalysis of health services utilization patterns for refugees and host population might help future policy planning.\n What this study adds A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area around the refugee camp;\nA pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area;\nThese health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas.\nA refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area around the refugee camp;\nA pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area;\nThese health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas.", "From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members;\nIn some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services;\nAnalysis of health services utilization patterns for refugees and host population might help future policy planning.", "A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area 
around the refugee camp;\nA pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area;\nThese health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusions", null, null, "COI-statement" ]
[ "Refugee camps", "health services", "host community", "host population" ]
Introduction: Mae La Refugee camp (Tak Province, Thailand) was originally established in 1984. As of end of June 2007 the camp had a population of 49,783 [1]. Since 2005 the non-governmental organisation (NGO) Aide Médicale Internationale (AMI) was responsible for all aspects of medical care in the camp with the exception of the tuberculosis program which was taken care by Médecins Sans Frontières (MSF) and obstetrics care which was managed by Shoklo Malaria Research Unit (SMRU). The health facilities run by AMI consisted of two Out-Patient Departments (OPD), one 154-bed In-Patient department (IPD) as well as two laboratories and a pharmacy stock, all located within the refugee camp. All these departments were staffed by refugees trained to work independently as medics (performing medical acts), nurses or lab technicians. In addition to them there were minimum one to maximum two full time expatriate medical doctors specifically assigned to clinical supervision responsibilities who were providing daily support to the refugee staff. Most of the patients were dealt with in the OPD, while the complicated cases were hospitalised in the IPD. Selected patients were referred to Mae Sot General Hospital (MSGH) which was the district hospital and was located on a distance approximately one hour drive by car from the camp. These patient referrals were mainly for surgical operations, radiology exams or specialist consultations. Samples for laboratory tests which could not be performed in the camp's lab were also sent there. Mae La facilities constituted an example of a basic health care system which involved versatile health workers able to cope with a variety of health problems [2]. Delivery of health care and drugs was free of charge for all patients independent of their status being camp residents or not. One of the authors (LCA) while working in the camp during 2007-2008 observed that patients who were not camp residents, presented in a worse general health status compared to camp residents. They suffered from more complicated morbidity often neglected for long time and appeared to represent a considerable and disproportionate part of the staff's workload. These patients were coming from Myanmar side of the border, as well as from Thai villages in the vicinity. By contrast camp residents (defined as people living in the camp for more than three months continuously and independently of whether they were UNHCR registered refugees or not) appeared to be in better general health status. In order to quantify the burden of non-camp resident patients on the camp's health care facilities a detailed review of the available anonymous data was organized. The research question was whether there was a difference between camp and non-camp residents on the level of medical care, they required during utilisation of the health services available in Mae La refugee camp. Methods: This study was a purely descriptive observational evaluation based on routinely collected anonymous and non-identifiable population data. Figures on OPD consultations and IPD admissions for camp residents as well as non-camp residents were extracted from the Health Information System (HIS) used by Aide Médicale Internationale in Mae La refugee camp for the years 2006 and 2007. Figures concerning referrals to Mae Sot General Hospital for the same years (2006 and 2007) of camp and non-camp resident patients were obtained from the camp's referrals register. 
Mortality figures in the IPD as well as among referrals to the district hospital for camp and non-camp resident patients were extracted from the HIS for the years 2006 and 2007. Statistical analysis was performed by the use of STATA software (version 10.1). The absolute numbers and percentages of non-camp residents in the total number of OPD consultations, IPD admissions, and referrals to the district hospital were calculated per month and subsequently were used to calculate the values for 2006 and 2007 respectively. The association between the origin of the patients (camp or non-camp) and the different levels of care that was required (OPD consultation, IPD admission, referral to district hospital) was tested using chi-square test. P-value < 0.001 was considered statistically significant. For IPD admissions and for hospital referrals, the association between the origin of the patients and the final outcome (death or not death) was tested by means of Fisher's exact test. Results: During the year 2006 non-camp residents comprised 3.75% of all OPD consultations, 8.2% of all IPD admissions and 15.91% of all referrals to the Thai hospital. The same pattern was also evident for 2007 with non-camp residents comprising 3.16% of OPD, 7.78% of IPD and 12.52% of referrals (Table 1). Specifically for the year 2007 we had also detailed data on the origin of non-camp resident patients. From the total OPD consultations of non-camp resident patients during this year (n=3814), 37.86% (n=1444) came from Myanmar (Burma), 60.59% (n= 2311) came from Thailand and 1.54% (n=59) concerned follow up consultations in which the exact origin of the non-camp resident patient was not registered again. From all the non-camp resident patients who were admitted in Mae La IPD during 2007 (n=539), 51.39% (n=277) came from Myanmar while 48.61% (n=262) came from Thailand. From the non-camp resident patients referred to the district hospital during the same year (n=61), 80.33% (n=49) came from Myanmar while 19.67% (n=12) came from Thailand (Table 2). During the year 2006 IPD mortality for non-camp resident patients was 34.63/1000 admissions (25 deaths out of 722 admissions) and for camp resident patients was 8.67/1000 admissions (70 of 8078). The same year the mortality among referrals to the district hospital for non-camp resident patients was 21.74/1000 admissions (2 of 92) and for camp resident patients was 41.15/1000 admissions (20 of 486). During 2007 IPD mortality for non-camp resident patients was 57.51/1000 admissions (31 deaths out of 539 admissions) and for camp resident patients was 11.58/1000 admissions (74 of 6390). The same year the mortality among referrals to the district hospital for non-camp resident patients was 81.97/1000 admissions (5 of 61) and for camp resident patients was 61.03/1000 admissions (26 of 426) (Table 3). The association between the origin of the patient (camp or non-camp resident) and the different levels of care that was needed was tested using chi-square test. When using the 2006 data there was a statistically significant difference between the origin of patients and the level of care they needed (Pearson chi2=619.38, P=0.000). Non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients. Similar results were obtained when data regarding 2007 were used (Pearson chi2=539.39, P=0.000). 
During 2007 too there was a statistically significant difference between the origin of patients and the level of care needed and non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients. Proportion of non-camp residents in the total number of medical acts through different levels of medical care in Mae La refugee camp during 2006 and 2007 Non-camp resident patients during 2007 per country of origin Fatal cases per 1000 admissions for IPD and referrals IPD: inpatient department In addition, for 2007 the association between the countries of origin (Myanmar or Thailand) within the non-camp resident patients group was also tested as data were available for this year (Table 2). A statistically significant difference between the two subgroups was found, with non-camp resident patients coming from Myanmar tending to need more advanced level of medical care compared to non-camp resident patients coming from Thailand (Pearson chi2=73.083, P=0.000). The association between the origin of patients and the final outcome (death or not death) was tested using Fisher's exact test. Regarding IPD admissions in 2006, there was a statistically significant increased mortality in non-camp resident patients compared to camp resident patients (Fisher's exact P=0.000). The final outcome of cases that were referred to the district hospital in 2006 was not associated with their origin, that is there was no statistically significant difference in mortality between non-camp and camp resident patients who were referred to the district hospital (Fisher's exact P=0.5546). Similar results were obtained when we used data regarding 2007. For IPD admissions in 2007 there was a statistically significant increased mortality in non-camp resident patients compared to camp resident patients (Fisher's exact P=0.000). Also during this year there was no statistically significant difference in mortality between non-camp and camp resident patients referred to the district hospital (Fisher's exact P=0.5718). Discussion: As shown in Table 1 there was a statistically significant increase in the portion of non-camp resident patients in each level of care as we proceed from OPD consultations to IPD admissions and to district hospital referrals during 2006 and 2007 in Mae La refugee camp. Non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients during the same period. This pattern (Figure 1) might reflect a worse general health status as well as more complicated or neglected for long time morbidity of non-camp residents compared to camp residents. It could potentially indicate unaddressed health needs of surrounding populations within Myanmar as well as within Thailand during the period studied. Globally it is common for refugee camp health facilities to provide care also to non-camp residents. On average 2% of OPD consultations in Asia and 21% in Africa were attributable to host community members [3]. Many different reasons might have influenced health care seeking behaviour of non-camp residents in the case of Mae La camp. Proximity to the camp could have been a contributing factor as certain Thai villages (eg. Ban Mae Ok Hu, Ban Mae La) were located closer to the camp's medical facilities than the closest public Thai hospital. The camp was located just next to a major paved road connecting the border towns of Mae Ramat and Tha Song Yang, so it was easily accessible. 
Language was another factor as many non-camp resident patients were ethnic Karen and they spoke Karen and Burmese. Communication difficulties for non-Thai speakers while seeking care in Thai health system could lead to frustration or dissatisfaction. By contrast most of the staff who worked in the camp medical facilities spoke Karen, Burmese, English and often also Thai. During 2007 in the two districts around Mae La refugee camp (Tha Song Yang and Mae Ramat districts) the migrant Burmese population with all ethnicities included was estimated to be 30,405 out of a total population of 137,020 (excluding the refugee camp population). Many of these migrants were not officially registered and had no access to social security. During 2007 only 17,633 migrants had social security in the whole Tak province [4]. Uninsured migrants who were living in Thailand as well as patients who were arriving from the Myanmar side of the border might have been more motivated to use the health care services available in Mae La camp, as medical care there as well as any potential referral to a Thai hospital was provided free of charge. Internationally displaced people remain a disadvantaged population even if they become integrated in the local host community. This has been demonstrated in rural South Africa where higher childhood mortality rates were found among children from former Mozambican refugee households compared to those from the host community [5]. In a study among refugees living in Pretoria, South Africa several challenges have been identified. These included the luck of security, language barriers, the difficulty of obtaining legal papers, the luck of employment, deplorable living conditions and falling victims of crime or police harassment [6]. On the other hand in Guinea, refugees were allowed to settle in existing villages instead of refugee camps and were given free access to national health services, which were reinforced. The improvement of the health system and transport infrastructure was reflected in an increase of the rates of major obstetric interventions for the host population [7]. Percentage of camp resident patients and non-camp resident patients in the different levels of medical care during the years 2006 and 2007 in Mae La refugee camp, Tak province, Thailand. OPD: outpatient department; IPD: inpatient department When trying to explain the increased morbidity of non-camp resident patients compared to camp resident patients we have to acknowledge the situation in Myanmar at the same time period. At that time there were still ongoing conflicts in the eastern provinces of the country having a direct effect on the health status of the affected communities. For populations living in conflict zones in eastern Burma, high crude, under-five and infant mortality rates have been described [8]. In Karen state as of October 2007 the number of Internally Displaced Persons was estimated at 116,900 [9]. Conflict and displacement might further decrease access of the affected population to primary and secondary care as well as essential drugs, in a country where the per capita government expenditure on health is already very low. In Myanmar this was at 4.9 international dollar rate in 2004 compared to Thailand at 189.7 for the same year [10]. Different approaches have been used to improve the health outcomes in this region, one of which included the training of village health workers to implement malaria control interventions among the internal displaced populations [11]. 
In addition to local backpack health worker teams supported by cross-border local-global partnerships, mobile obstetric maternal health workers have been providing some health services to internally displaced persons in eastern Burma [12, 13]. Nevertheless, a considerable number of patients were crossing the border between Myanmar and Thailand in order to obtain health care when it was unavailable or unaffordable locally [14, 15]. On the other hand, Mae La refugee camp was a stable, protected environment located within Thailand, where many aspects of human security were catered for [16]. Situated a short distance, eight kilometers, from the Thai-Myanmar border, it was accessible on foot. The camp provided decent medical facilities with health care workers who could speak both Karen and Burmese. Medical care, basic lab exams and quality drugs were provided free of charge. A patient arriving there would receive free primary outpatient or inpatient care according to his or her needs and had many chances to be reviewed by a qualified medical doctor, or even referred for specialised exams or a surgical operation in a well-equipped Thai hospital, with all expenses paid by NGOs. A refugee camp may develop de facto and out of necessity into an important health care provider and a reference health care facility for underprivileged populations in its vicinity, besides the refugee population itself. This is easier during a stable, chronic, post-emergency phase, especially when the inpatient and outpatient departments within the camp are well supported and equipped by different actors, and the possibility of district hospital referral exists, as was the case in Mae La. Its function can be complementary to existing state health facilities, as it may provide care to populations with limited access to the health system of the host country as well as to individuals arriving from conflict-affected areas of a neighboring country. Although the ideal model of health service delivery for refugees and host populations is not yet clear, analysis of utilisation patterns for refugees and host populations might help future policy planning in this field [17]. Several limitations exist in this study. It is possible that only poor non-camp patients from the host community sought care in the camp. Compromised health status due to malnutrition, bad hygiene or poor living conditions as consequences of poverty could be reflected in our data. Medics might have been more willing to admit to the IPD a non-camp resident patient who was unable to stay overnight in the camp, in situations where outpatient follow-up would have been preferred for camp residents. Selection bias during the referral procedure could also occur. There were budget constraints, as transport, examinations and hospitalisation costs for all patients referred to the district hospital were paid by AMI. As a result the majority of referrals, from both patient groups, were acute surgical referrals (e.g. fractures, trauma patients, appendicitis, intestinal obstruction, pyomyositis) which could be dealt with quickly and cost-effectively with a good prognosis. Acute and chronic medical problems, infections, malaria, HIV/AIDS, blood transfusions, cancer and terminally ill patients were taken care of in the OPD and IPD of the camp with the means available there. This could influence and partly explain the difference observed in mortality between the two groups in IPD admissions and its absence among those referred to hospital. 
Another aspect is the exact geographic distribution by origin of the non-camp resident patients who sought care in Mae La camp. Although not covered in this study, it might have been useful to know precisely from which villages within Thailand or Myanmar these patients came. Such a mapping could point to specific geographic areas on both sides of the border with poor access to primary and/or secondary health care. Due to the lack of data regarding the number and geographic distribution of host and migrant populations in Thailand, as well as of populations on the Myanmar side of the border, the frequency with which non-camp resident populations used the health services of Mae La camp could not be calculated in this study. Although the data in this study are relatively old, they can provide a historical perspective on the situation during the chronic stable phase of one of the biggest refugee camps in the Thailand-Myanmar border area during the years 2006 and 2007. This information can be useful for comparisons with different geographic areas or newer data in other studies. Furthermore, from 2008 onwards a new HIS for data collection was implemented in the camp. This was introduced by the United Nations High Commissioner for Refugees (UNHCR) in order to standardise data collection across refugee camps. The new HIS differentiated only two groups of patients: refugee versus host country patients. As a result, information concerning the detailed origin of non-camp resident patients, which is documented in this study, was no longer routinely collected [18]. Conclusion: A considerable number of OPD consultations in Mae La refugee camp during 2006 and 2007 involved patients who were non-camp residents, originating from surrounding Thai villages and the Myanmar side of the border. These patients required at a statistically significant higher proportion admission to the IPD of the camp or referral to the district hospital in comparison with camp residents. Non-camp resident patients admitted to the IPD suffered a statistically significant increase in mortality compared to camp residents admitted to the same department. This pattern of refugee camp health services utilisation by non-camp residents might reflect a worse health status of the non-camp resident population compared to the camp resident one, possibly indicating unaddressed health needs in this population. The health facilities of a refugee camp during the chronic stable phase might function as a reference health facility for populations from the surrounding area who might have difficulties in accessing other health care facilities. Further research in this field is needed in order to map any unaddressed health needs in the Thailand-Myanmar border area and provide an update of the current situation. This could be subsequently compared with the pattern observed during 2006 and 2007 described in this study. Provided that this pattern is further validated in similar settings in other geographic areas, it might prove a useful indirect indicator of unaddressed health needs of surrounding populations living in proximity to refugee camps or similar settlements. 
What is known about this topic From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members; In some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services; Analysis of health services utilization patterns for refugees and host population might help future policy planning. What this study adds A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area around the refugee camp; A pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area; These health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas. What is known about this topic: From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members; In some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services; Analysis of health services utilization patterns for refugees and host population might help future policy planning. 
What this study adds: A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area around the refugee camp; A pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area; These health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas. Competing interests: The authors declare no competing interests.
Background: This study explored the differences in the level of medical care required by camp and non-camp resident patients during utilisation of the health services in Mae La refugee camp, Tak province, Thailand during the years 2006 and 2007. Methods: Data were extracted from camp registers and the Health Information System used during the years 2006 and 2007 and statistical analysis was performed. Results: The analysis showed that during 2006 and 2007 non-camp resident patients, coming from Thailand as well as Myanmar, who sought care in the outpatient department (OPD) of the camp required at a significantly higher proportion admission to the inpatient department (IPD) or referral to the district hospital compared to camp resident patients. Although there was a statistically significant increased mortality of the non-camp resident patients admitted to the IPD compared to camp resident patients, there was no significant difference in mortality between these two groups when the referrals to the district hospital were analysed. Conclusions: Non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients. Provided that it is further validated, the above observed pattern might be potentially useful as an indirect indicator of unaddressed health needs of populations surrounding a refugee camp.
Introduction: Mae La Refugee camp (Tak Province, Thailand) was originally established in 1984. As of the end of June 2007 the camp had a population of 49,783 [1]. Since 2005 the non-governmental organisation (NGO) Aide Médicale Internationale (AMI) was responsible for all aspects of medical care in the camp, with the exception of the tuberculosis program, which was taken care of by Médecins Sans Frontières (MSF), and obstetric care, which was managed by the Shoklo Malaria Research Unit (SMRU). The health facilities run by AMI consisted of two Out-Patient Departments (OPD), one 154-bed In-Patient Department (IPD) as well as two laboratories and a pharmacy stock, all located within the refugee camp. All these departments were staffed by refugees trained to work independently as medics (performing medical acts), nurses or lab technicians. In addition to them there were at minimum one and at most two full-time expatriate medical doctors specifically assigned to clinical supervision responsibilities, who provided daily support to the refugee staff. Most of the patients were dealt with in the OPD, while the complicated cases were hospitalised in the IPD. Selected patients were referred to Mae Sot General Hospital (MSGH), which was the district hospital and was located approximately a one-hour drive by car from the camp. These patient referrals were mainly for surgical operations, radiology exams or specialist consultations. Samples for laboratory tests which could not be performed in the camp's lab were also sent there. Mae La facilities constituted an example of a basic health care system which involved versatile health workers able to cope with a variety of health problems [2]. Delivery of health care and drugs was free of charge for all patients, independent of whether they were camp residents or not. One of the authors (LCA), while working in the camp during 2007-2008, observed that patients who were not camp residents presented in a worse general health status compared to camp residents. They suffered from more complicated morbidity, often neglected for a long time, and appeared to represent a considerable and disproportionate part of the staff's workload. These patients were coming from the Myanmar side of the border, as well as from Thai villages in the vicinity. By contrast, camp residents (defined as people living in the camp for more than three months continuously, independently of whether they were UNHCR-registered refugees or not) appeared to be in a better general health status. In order to quantify the burden of non-camp resident patients on the camp's health care facilities, a detailed review of the available anonymous data was organized. The research question was whether there was a difference between camp and non-camp residents in the level of medical care they required during utilisation of the health services available in Mae La refugee camp. Conclusion: A considerable number of OPD consultations in Mae La refugee camp during 2006 and 2007 involved patients who were non-camp residents, originating from surrounding Thai villages and the Myanmar side of the border. These patients required at a statistically significant higher proportion admission to the IPD of the camp or referral to the district hospital in comparison with camp residents. Non-camp resident patients admitted to the IPD suffered a statistically significant increase in mortality compared to camp residents admitted to the same department. 
This pattern of refugee camp health services utilisation by non-camp residents might reflect a worse health status of the non-camp resident population compared to the camp resident one, possibly indicating unaddressed health needs in this population. The health facilities of a refugee camp during the chronic stable phase might function as a reference health facility for populations from the surrounding area who might have difficulties in accessing other health care facilities. Further research in this field is needed in order to map any unaddressed health needs in the Thailand-Myanmar border area and provide an update of the current situation. This could be subsequently compared with the pattern observed during 2006 and 2007 described in this study. Provided that this pattern is further validated in similar settings in other geographic areas, it might prove a useful indirect indicator of unaddressed health needs of surrounding populations living in proximity to refugee camps or similar settlements. What is known about this topic From the OPD consultations taking place in refugee camps, 2% in Asia and 21% in Africa on average are attributable to host community members; In some cases refugee hosting improved the quality and accessibility of health services and health outcomes for host national population but there are limited data to support integrated health services; Analysis of health services utilization patterns for refugees and host population might help future policy planning. What this study adds A refugee camp health services utilisation pattern in which non-camp resident patients visiting the OPD require at a significantly higher proportion admission to the IPD or referral to the district hospital, compared to camp resident patients, might indicate unaddressed health needs of populations living in the area around the refugee camp; A pattern where non-camp resident patients admitted to the IPD present with significantly increased mortality compared to camp resident patients might also indicate unaddressed health needs in the surrounding area; These health services utilisation patterns observed in a major refugee camp in the Thailand-Myanmar border area during 2006-2007 should be further explored in similar settings in other geographic areas.
Background: This study explored the differences in the level of medical care required by camp and non-camp resident patients during utilisation of the health services in Mae La refugee camp, Tak province, Thailand during the years 2006 and 2007. Methods: Data were extracted from camp registers and the Health Information System used during the years 2006 and 2007 and statistical analysis was performed. Results: The analysis showed that during 2006 and 2007 non-camp resident patients, coming from Thailand as well as Myanmar, who sought care in the outpatient department (OPD) of the camp required at a significantly higher proportion admission to the inpatient department (IPD) or referral to the district hospital compared to camp resident patients. Although there was a statistically significant increased mortality of the non-camp resident patients admitted to the IPD compared to camp resident patients, there was no significant difference in mortality between these two groups when the referrals to the district hospital were analysed. Conclusions: Non-camp resident patients tended to need a more advanced level of medical care compared to camp resident patients. Provided that it is further validated, the above observed pattern might be potentially useful as an indirect indicator of unaddressed health needs of populations surrounding a refugee camp.
4,378
241
[ 78, 125 ]
8
[ "camp", "patients", "health", "camp resident", "resident", "non", "non camp", "camp resident patients", "resident patients", "refugee" ]
[ "thailand patients arriving", "thai hospital provided", "thailand opd outpatient", "refugee camp health", "refugee staff patients" ]
[CONTENT] Refugee camps | health services | host community | host population [SUMMARY]
[CONTENT] Refugee camps | health services | host community | host population [SUMMARY]
[CONTENT] Refugee camps | health services | host community | host population [SUMMARY]
[CONTENT] Refugee camps | health services | host community | host population [SUMMARY]
[CONTENT] Refugee camps | health services | host community | host population [SUMMARY]
[CONTENT] Refugee camps | health services | host community | host population [SUMMARY]
[CONTENT] Health Services | Health Services Needs and Demand | Humans | Patient Acceptance of Health Care | Refugee Camps | Refugees | Thailand [SUMMARY]
[CONTENT] Health Services | Health Services Needs and Demand | Humans | Patient Acceptance of Health Care | Refugee Camps | Refugees | Thailand [SUMMARY]
[CONTENT] Health Services | Health Services Needs and Demand | Humans | Patient Acceptance of Health Care | Refugee Camps | Refugees | Thailand [SUMMARY]
[CONTENT] Health Services | Health Services Needs and Demand | Humans | Patient Acceptance of Health Care | Refugee Camps | Refugees | Thailand [SUMMARY]
[CONTENT] Health Services | Health Services Needs and Demand | Humans | Patient Acceptance of Health Care | Refugee Camps | Refugees | Thailand [SUMMARY]
[CONTENT] Health Services | Health Services Needs and Demand | Humans | Patient Acceptance of Health Care | Refugee Camps | Refugees | Thailand [SUMMARY]
[CONTENT] thailand patients arriving | thai hospital provided | thailand opd outpatient | refugee camp health | refugee staff patients [SUMMARY]
[CONTENT] thailand patients arriving | thai hospital provided | thailand opd outpatient | refugee camp health | refugee staff patients [SUMMARY]
[CONTENT] thailand patients arriving | thai hospital provided | thailand opd outpatient | refugee camp health | refugee staff patients [SUMMARY]
[CONTENT] thailand patients arriving | thai hospital provided | thailand opd outpatient | refugee camp health | refugee staff patients [SUMMARY]
[CONTENT] thailand patients arriving | thai hospital provided | thailand opd outpatient | refugee camp health | refugee staff patients [SUMMARY]
[CONTENT] thailand patients arriving | thai hospital provided | thailand opd outpatient | refugee camp health | refugee staff patients [SUMMARY]
[CONTENT] camp | patients | health | camp resident | resident | non | non camp | camp resident patients | resident patients | refugee [SUMMARY]
[CONTENT] camp | patients | health | camp resident | resident | non | non camp | camp resident patients | resident patients | refugee [SUMMARY]
[CONTENT] camp | patients | health | camp resident | resident | non | non camp | camp resident patients | resident patients | refugee [SUMMARY]
[CONTENT] camp | patients | health | camp resident | resident | non | non camp | camp resident patients | resident patients | refugee [SUMMARY]
[CONTENT] camp | patients | health | camp resident | resident | non | non camp | camp resident patients | resident patients | refugee [SUMMARY]
[CONTENT] camp | patients | health | camp resident | resident | non | non camp | camp resident patients | resident patients | refugee [SUMMARY]
[CONTENT] camp | health | care | patients | medical | residents | camp residents | mae | facilities | patient [SUMMARY]
[CONTENT] camp | referrals | figures | non | years 2006 2007 | years | years 2006 | hospital | ipd | non camp [SUMMARY]
[CONTENT] camp | patients | resident | camp resident | resident patients | camp resident patients | non camp | non | admissions | 1000 [SUMMARY]
[CONTENT] camp | health | refugee | health services | services | area | resident | camp resident | patients | unaddressed health needs [SUMMARY]
[CONTENT] camp | health | patients | camp resident | resident | non | resident patients | camp resident patients | non camp | refugee [SUMMARY]
[CONTENT] camp | health | patients | camp resident | resident | non | resident patients | camp resident patients | non camp | refugee [SUMMARY]
[CONTENT] Mae La | Tak province | Thailand | the years 2006 | 2007 [SUMMARY]
[CONTENT] the Health Information System | the years 2006 | 2007 [SUMMARY]
[CONTENT] 2006 | 2007 | Thailand | Myanmar | IPD ||| IPD | two [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| Mae La | Tak province | Thailand | the years 2006 | 2007 ||| the Health Information System | the years 2006 | 2007 ||| ||| 2006 | 2007 | Thailand | Myanmar | IPD ||| IPD | two ||| ||| [SUMMARY]
[CONTENT] ||| Mae La | Tak province | Thailand | the years 2006 | 2007 ||| the Health Information System | the years 2006 | 2007 ||| ||| 2006 | 2007 | Thailand | Myanmar | IPD ||| IPD | two ||| ||| [SUMMARY]
Prevalence of intestinal parasitic infections among highland and lowland dwellers in Gamo area, South Ethiopia.
23419037
Epidemiological information on the prevalence of intestinal parasitic infections in different regions is a prerequisite to develop appropriate control strategies. Therefore, this present study was conducted to assess the magnitude and pattern of intestinal parasitism in highland and lowland dwellers in Gamo area, South Ethiopia.
BACKGROUND
Community-based cross-sectional study was conducted between September 2010 and July 2011 at Lante, Kolla Shelle, Dorze and Geressie kebeles of Gamo Gofa Zone, South Ethiopia. The study sites and study participants were selected using multistage sampling method. Data were gathered through house-to-house survey. A total of 858 stool specimens were collected and processed using direct wet mount and formol-ether concentration techniques for the presence of parasite.
METHODS
Out of the total examined subjects, 342(39.9%) were found positive for at least one intestinal parasite. The prevalence of Entamoeba histolytica/dispar was the highest 98(11.4%), followed by Giardia lamblia 91(10.6%), Ascaris lumbricoides 67(7.8%), Strongyloides stercoralis 51(5.9%), hookworm 42(4.9%), Trichuris trichiura 24(2.8%), Taenia species 18(2.1%), Hymenolepis nana 7(0.6%) and Schistosoma mansoni 1(0.12%). No statistically significant difference was observed in the prevalence of intestinal parasitic infections among lowland (37.9%) and highland dwellers (42.3%) (P = 0.185). The prevalence of intestinal parasitic infection was not significantly different among the study sites but it was relatively higher in Geressie (42.8%) than other kebeles. Sex was not associated with parasitic infections (P = 0.481). No statistically significant difference of infection was observed among the age groups (P = 0.228) but it was higher in reproductive age group.
RESULTS
The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in Gamo area indicated that parasitic infections are important public health problems. Thus, infection control measures and the development of awareness strategies to improve sanitation and health education should be considered.
CONCLUSIONS
[ "Adolescent", "Adult", "Age Distribution", "Child", "Child, Preschool", "Cross-Sectional Studies", "Ethiopia", "Feces", "Female", "Humans", "Infant", "Infant, Newborn", "Intestinal Diseases, Parasitic", "Male", "Middle Aged", "Prevalence", "Sex Distribution", "Young Adult" ]
3584849
Background
Intestinal parasitic infestation represents a large and serious medical and public health problem in developing countries. It is estimated that some 3.5 billion people are affected, and that 450 million are ill as a result of these infections, the majority being children [1]. Apart from causing morbidity and mortality, infection with intestinal parasites has been known to cause iron deficiency anemia, growth retardation in children and other physical and mental health problems [2]. Furthermore, chronic intestinal parasitic infections have become the subject of speculation and investigation in relation to the spreading and severity of other infectious diseases of viral origin, tuberculosis and malaria [3-6]. Several factors like climatic conditions, poor sanitation, unsafe drinking water, and lack of toilet facilities are the main contributors to the high prevalence of intestinal parasites in the tropical and sub-tropical countries [7]. In addition, intestinal parasitic agents increase in polluted environments such as refuse heaps, gutters and sewage units in and around human dwellings, and with the living conditions of people in crowded or unhealthy situations [8]. Hence, a better understanding of the above factors, as well as how social, cultural, behavioral and community awareness affect the epidemiology and control of intestinal parasites, may help to design effective control strategies for these diseases [9,10]. Intestinal parasites are widely distributed in Ethiopia, largely due to the low level of environmental and personal hygiene and the contamination of food and drinking water that results from improper disposal of human excreta [11,12]. In addition, lack of awareness of simple health promotion practices is also a contributing factor [13]. According to the Ethiopian Ministry of Health [14] more than half a million annual visits to the outpatient services of health institutions are due to intestinal parasitic infections. However, this report may be an underestimate, because most of the health institutions lack appropriate diagnostic methods to detect low levels of parasite burden. In addition, some of the diagnostic methods for specific intestinal parasites, especially for the newly emerging opportunistic intestinal parasites, are not available to peripheral health institutions. Previous studies gave due attention to the distribution of intestinal parasites at different altitudes and in community groups such as school children or other groups confined to camps [15-18]. Hence, the pattern of intestinal parasitism in a community with diverse groups of people as a whole has not been illustrated, particularly in the study area. The purpose of this study was to assess the magnitude and patterns of intestinal parasitism in highland and lowland dwellers, Gamo area, South Ethiopia.
Methods
Study area A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible. Two of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water. A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible. Two of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water. Sample size and sampling techniques The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. 
The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings. The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings. Stool collection and processing About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above. About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. 
A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above. Quality control Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result. Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result. Data analysis Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05. Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05. Ethical clearance The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection. The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. 
The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection.
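As an arithmetic cross-check of the sample size quoted in the methods above, the sketch below applies a standard single-proportion formula with a design effect, n = deff * z^2 * p(1 - p) / d^2, using the stated inputs (p = 0.83, d = 0.05, 95% confidence, design effect 4). The specific formula is an assumption (a common reading of "single proportion population formula"), and Python is used purely for illustration; it lands on the reported figure of 867.

# Minimal sketch assuming the standard single-proportion sample size formula
# with a design effect: n = deff * z**2 * p * (1 - p) / d**2.
# Inputs are those stated in the methods; the formula itself is an assumption.
z = 1.96    # z-score for a 95% confidence level
p = 0.83    # expected prevalence taken from reference [19]
d = 0.05    # margin of error
deff = 4    # design effect used to allow for multistage sampling

n = deff * z**2 * p * (1 - p) / d**2
print(round(n))   # prints 867, consistent with the reported sample size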
Results
A total of 867 study participants were selected for investigation. However, 10 (1.2%) were excluded because of inability to provide a specimen. For this reason a total of 858 individuals were included in the study (Table 1). Three hundred forty two of the study individuals were found to have single or multiple intestinal parasitic infections, which makes the overall prevalence 39.9%. Of all the samples positive for parasites, 188 came from female participants and 154 from male participants, a female to male ratio of 1:0.8 (Table 1). The mean age of the participants was 25 ± 19. (Table 1: Frequency distribution of sex, age group and altitude of the study subjects in Gamo area. Key: * Lante and Kolla Shelle; ** Dorze and Geressie.) Different types of parasites including protozoans, trematodes, cestodes and nematodes were detected in the stool samples of study participants. The prevalence of Entamoeba histolytica/dispar (E. histolytica/dispar) was the highest at 98 (11.4%), followed by Giardia lamblia (G. lamblia) 91 (10.6%), Ascaris lumbricoides (A. lumbricoides) 67 (7.8%), Strongyloides stercoralis (S. stercoralis) 51 (5.9%), hookworm 42 (4.9%), Trichuris trichiura (T. trichiura) 24 (2.8%), Taenia species (Taenia spp.) 18 (2.1%), Hymenolepis nana (H. nana) 7 (0.6%) and Schistosoma mansoni (S. mansoni) 1 (0.12%), in that order. The majority of the positive cases were single infections (83.9%), followed by double infections (15.5%). Of the triple infected persons, one was coinfected with E. histolytica/dispar, G. lamblia and Taenia spp. and the other with E. histolytica/dispar, S. stercoralis and hookworm (Figure 1). (Figure 1: Single and mixed infections among residents of Gamo area, Gamo Gofa Zone, South Ethiopia.) The prevalence of infection with different intestinal helminths and protozoan parasites for the lowland (Lante and Kolla Shelle) and highland (Dorze and Geressie) sites is shown in Table 2. Out of 459 stool samples collected from the lowland area, 174 (37.9%) were positive for at least one parasite. Similarly, of the 399 stool samples collected from the highland area, 169 (42.3%) were positive for at least one parasite. No statistically significant difference was observed (P = 0.185) between the presence of intestinal parasites and altitude (Tables 1 and 2). However, G. lamblia (P < 0.001) and hookworm (P = 0.002) were significantly more prevalent in lowland areas whereas A. lumbricoides (P < 0.001) and T. trichiura (P < 0.001) were significantly more prevalent in highland areas, but S. stercoralis was exclusive to lowland areas (Table 2). (Table 2: Prevalence of intestinal parasites among study subjects in lowland and highland dwellers in Gamo area. Key: χ2 and P-values represent the lowland and highland comparison; * represents a statistically significant difference (P < 0.05).) As observed in Table 2, of 190 and 269 stool samples collected from Lante and Kolla Shelle, 66 (34.7%) and 108 (40.1%) were found positive for at least one parasite, respectively. Similarly, out of 123 and 276 fecal samples collected from the Dorze and Geressie sites, 50 (40.7%) and 118 (42.8%) were infected with one or more intestinal parasites. The overall difference in parasite prevalence was not statistically significant among the study sites (P = 0.38). However, E. histolytica/dispar prevalence was significantly higher in Geressie (P < 0.001) than in the other study areas. A. lumbricoides and T. trichiura were significantly higher in Dorze (P < 0.005) and S. mansoni was detected in Lante alone. The distribution of infection among female and male participants is shown in Table 3. 
Female participants showed the highest infection rate (41.0%), followed closely by male participants (38.6%) (Table 1). The calculated P-value (0.48) indicates that the difference in the prevalence of intestinal parasites between female and male participants was not statistically significant. However, A. lumbricoides infection was significantly higher (P = 0.004) among female participants than among male participants (Table 3). (Table 3: Sex-related prevalence of intestinal parasites among study subjects in Gamo area. Key: * represents a statistically significant difference (P < 0.05).) To examine variation by age, the study population was divided into 4 age groups: birth to 4 years, 5 to 14 years, 15–44 years, and over 44 years. The distribution of infection among study subjects in the different age groups is shown in Table 4. The overall infection rate was highest in the 15–44 years age group (42.3%), followed by the over-44 years age group (39.7%). Only 29.4% of children in the birth to 4 years age group were infected. The difference in the prevalence of intestinal parasites among the age groups was not significant (P = 0.228) (Table 1). No individual parasite was associated with the age of the participants (P > 0.005), except T. trichiura, which showed a significantly higher prevalence (P = 0.007) in the 5–14 years age group (Table 4). (Table 4: Age-related prevalence of intestinal parasites among study subjects in Gamo area. Key: < 4 years = preschool children, 5–14 years = school children, 15–44 years = reproductive age, > 44 years = elderly; * represents a statistically significant difference (P < 0.05).)
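The altitude comparison reported above can be re-derived directly from the published counts (174 of 459 lowland and 169 of 399 highland participants positive for at least one parasite). The sketch below is illustrative only and assumes Python with scipy rather than the SPSS workflow described in the methods; an uncorrected Pearson chi-square (correction=False) reproduces the reported P = 0.185.

# Sketch using the counts reported above; scipy is assumed to be available.
from scipy.stats import chi2_contingency

table = [[174, 459 - 174],   # lowland: positive, negative for any parasite
         [169, 399 - 169]]   # highland: positive, negative for any parasite
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")   # approx. chi2 = 1.76, P = 0.185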
Conclusion
The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in Gamo area indicated that parasitic infections are considerable public health problems. The present study has also revealed E. histolytica/dispar and Giardia as the common protozoa, and A. lumbricoides, hookworm and T. trichiura as the common helminths, causing parasitic infection with varying magnitude in the study area. Enhancing socioeconomic status, improving sanitation facilities, instilling health education and promoting personal hygiene can be good strategies to control these infections in the area.
[ "Background", "Study area", "Sample size and sampling techniques", "Stool collection and processing", "Quality control", "Data analysis", "Ethical clearance", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Intestinal parasitic infestation represents a large and serious medical and public health problem in developing countries. It is estimated that some 3.5 billion people are affected, and that 450 million are ill as a result of these infections, the majority being children [1]. Apart from causing morbidity and mortality, infection with intestinal parasites has known to cause iron deficiency anemia, growth retardation in children and other physical and mental health problems [2]. Furthermore, chronic intestinal parasitic infections have become the subject of speculation and investigation in relation to the spreading and severity of other infectious diseases of viral origin, tuberculosis and malaria [3-6].\nSeveral factors like climatic conditions, poor sanitation, unsafe drinking water, and lack of toilet facilities are the main contributors to the high prevalence of intestinal parasites in the tropical and sub-tropical countries [7]. In addition, intestinal parasitic agents increase in polluted environments such as refuse heaps, gutters and sewage units in and around human dwelling and living conditions of the people in crowded or unhealthy situations [8]. Hence, a better understanding of the above factors, as well as how social, cultural, behavioral and community awareness affect the epidemiology and control of intestinal parasites may help to design effective control strategies for these diseases [9,10].\nIntestinal parasites are widely distributed in Ethiopia largely due to the low level of environmental and personal hygiene, contamination of food and drinking water that results from improper disposal of human excreta [11,12]. In addition, lack of awareness of simple health promotion practices is also a contributing factor [13]. According to the Ethiopian Ministry of Health [14] more than half a million annual visits of the outpatient services of the health institutions are due to intestinal parasitic infections. However, this report may be an underestimate, because most of the health institutions lack appropriate diagnostic methods to detect low levels of parasite burden. In addition, some of the diagnostic methods for specific intestinal parasites, especially for the newly emerging opportunistic intestinal parasites, are not available to peripheral health institutions.\nPrevious studies gave due attention to the distributions of intestinal parasites in different altitudes, community groups such as school children or other groups confined to camps [15-18]. Hence, the pattern of intestinal parasitism in a community with diverse groups of people as a whole was not illustrated particularly in the study area. The purpose of this study was to assess the magnitude and patterns of intestinal parasitism in highland and lowland dwellers, Gamo area, South Ethiopia.", "A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. 
The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible.\nTwo of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water.", "The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings.", "About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. 
The presence of parasites was confirmed when observed by any of the methods above.", "Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result.", "Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05.", "The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection.", "The authors declare that they have no competing interests.", "TW Conception of the research idea, designing, collection of data, data analysis, interpretation, and manuscript drafting. TT1 Designing, collection of data and manuscript reviewing. BS Collection of data and manuscript reviewing. TT Data analysis, interpretation and manuscript drafting. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/13/151/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study area", "Sample size and sampling techniques", "Stool collection and processing", "Quality control", "Data analysis", "Ethical clearance", "Results", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Intestinal parasitic infestation represents a large and serious medical and public health problem in developing countries. It is estimated that some 3.5 billion people are affected, and that 450 million are ill as a result of these infections, the majority being children [1]. Apart from causing morbidity and mortality, infection with intestinal parasites has known to cause iron deficiency anemia, growth retardation in children and other physical and mental health problems [2]. Furthermore, chronic intestinal parasitic infections have become the subject of speculation and investigation in relation to the spreading and severity of other infectious diseases of viral origin, tuberculosis and malaria [3-6].\nSeveral factors like climatic conditions, poor sanitation, unsafe drinking water, and lack of toilet facilities are the main contributors to the high prevalence of intestinal parasites in the tropical and sub-tropical countries [7]. In addition, intestinal parasitic agents increase in polluted environments such as refuse heaps, gutters and sewage units in and around human dwelling and living conditions of the people in crowded or unhealthy situations [8]. Hence, a better understanding of the above factors, as well as how social, cultural, behavioral and community awareness affect the epidemiology and control of intestinal parasites may help to design effective control strategies for these diseases [9,10].\nIntestinal parasites are widely distributed in Ethiopia largely due to the low level of environmental and personal hygiene, contamination of food and drinking water that results from improper disposal of human excreta [11,12]. In addition, lack of awareness of simple health promotion practices is also a contributing factor [13]. According to the Ethiopian Ministry of Health [14] more than half a million annual visits of the outpatient services of the health institutions are due to intestinal parasitic infections. However, this report may be an underestimate, because most of the health institutions lack appropriate diagnostic methods to detect low levels of parasite burden. In addition, some of the diagnostic methods for specific intestinal parasites, especially for the newly emerging opportunistic intestinal parasites, are not available to peripheral health institutions.\nPrevious studies gave due attention to the distributions of intestinal parasites in different altitudes, community groups such as school children or other groups confined to camps [15-18]. Hence, the pattern of intestinal parasitism in a community with diverse groups of people as a whole was not illustrated particularly in the study area. The purpose of this study was to assess the magnitude and patterns of intestinal parasitism in highland and lowland dwellers, Gamo area, South Ethiopia.", " Study area A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. 
The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible.\nTwo of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water.\nA community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible.\nTwo of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water.\n Sample size and sampling techniques The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. 
As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings.\nThe sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings.\n Stool collection and processing About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above.\nAbout 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. 
A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above.\n Quality control Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result.\nBefore starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result.\n Data analysis Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05.\nStatistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05.\n Ethical clearance The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection.\nThe study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection.", "A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. 
The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible.\nTwo of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water.", "The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings.", "About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. 
The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above.", "Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result.", "Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05.", "The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection.", "A total of 867 study participants were selected for investigation. However, 10(1.2%) were excluded because of inability to provide specimen. For this reason a total of 858 individuals were included in the study (Table 1). Three hundred forty two of the study individuals were found to have single or multiple intestinal parasitic infections, which make the overall prevalence 39.9%. Of the entire positive samples for the parasite, 188 were female participants and 154 were male participants with female to male ratio of 1:0.8 (Table 1). The mean age of the participants was 25 ± 19.\n\nFrequency distribution of sex, age group and altitude of the study subjects in Gamo area\nKey: * Lante and Kolla Shelle.\n** Dorze and Gereessie.\nDifferent types of parasites including protozoans, trematode, cestodes and nematodes were detected from the stool samples of study participants. Prevalence of Entamoeba histolytica/dispar (E. histolytica/dispar) was the highest 98(11.4%), followed by Giardia lamblia (G. lamblia) 91(10.6%), Ascaris lumbricoides (A. lumbricoides) 67(7.8%), Strongyloides stercoralis (S. stercoralis) 51(5.9%), hookworm 42(4.9%), Trichuris trichiura (T. trichiura) 24(2.8%), Taenia species (Taenia spp.) 18(2.1%), Hymenolepis nana (H. nana) 7(0.6%) and Schistosoma mansoni (S. mansoni) 1(0.12%), in that order. The majorities of the positive cases were single infections (83.9%) and double infections (15.5%). Of the triple infected persons, one was coinfected with E. histolytica/dispar, G. lamblia and Taenia spp. and the other with E. histolytica/dispar, S. stercoralis and hookworm (Figure 1).\n\nSingle and mixed infections among residents of Gamo area, Gamo Gofa Zone, South Ethiopia.\nThe prevalence of infection with different intestinal helminths and protozoan parasites for lowland (Lante and Kolla Shelle) and highland (Dorze and Gressie) is shown in Table 2. Out of 459 stool samples collected from lowland area, 174(37.9%) were positive for at least one parasite. 
Similarly, of the 399 stool samples collected from the highland area, 169(42.3%) were positive for at least one parasite. No statistically significant difference was observed (P = 0.185) between the presence of intestinal parasites and altitude (Tables 1 and 2). However, G. lamblia (P < 0.001) and hookworm (P = 0.002) were significantly more prevalent in lowland areas, whereas A. lumbricoides (P < 0.001) and T. trichiura (P < 0.001) were significantly more prevalent in highland areas, and S. stercoralis was exclusive to lowland areas (Table 2).\n\nPrevalence of intestinal parasites among study subjects in lowland and highland dwellers in Gamo area\nKey: χ2 and P-values represent the lowland versus highland comparison.\n* Represents a statistically significant difference (P < 0.05).\nAs shown in Table 2, of the 190 and 269 stool samples collected from Lante and Kolla Shelle, 66(34.7%) and 108(40.1%) were positive for at least one parasite, respectively. Similarly, of the 123 and 276 fecal samples collected from the Dorze and Geressie sites, 50(40.7%) and 118(42.8%) were infected with one or more intestinal parasites. The overall difference in parasite prevalence among the study sites was not statistically significant (P = 0.38). However, E. histolytica/dispar prevalence was significantly higher in Geressie (P < 0.001) than in the other study areas. A. lumbricoides and T. trichiura were significantly higher in Dorze (P < 0.005), and S. mansoni was detected in Lante alone.\nThe distribution of infection among female and male participants is shown in Table 3. Female participants showed the highest infection rate (41.0%), followed closely by male participants (38.6%) (Table 1). The calculated P-value (0.48) indicates that the difference in the prevalence of intestinal parasites between female and male participants was not statistically significant. However, A. lumbricoides infection was significantly higher (P = 0.004) among female participants than among male participants (Table 3).\n\nSex-related prevalence of intestinal parasites among study subjects in Gamo area\nKey: * Represents a statistically significant difference (P < 0.05).\nTo examine variation across age groups, the study population was divided into 4 age groups: birth to 4 years, 5 to 14 years, 15–44 years, and over 44 years. The distribution of infection among study subjects in the different age groups is shown in Table 4. The overall infection rate was highest in the 15–44 years age group (42.3%), followed by the over-44 years age group (39.7%). Only 29.4% of children in the birth to 4 years age group were infected. The difference in the prevalence of intestinal parasites among the age groups was not significant (P = 0.228) (Table 1). No individual parasite was associated with the age of the participants (P > 0.005), except T. trichiura, which showed a significantly higher prevalence (P = 0.007) in the 5–14 years age group (Table 4).\n\nAge-related prevalence of intestinal parasites among study subjects in Gamo area\nKey: < 4 years = preschool children, 5–14 years = school children, 15–44 years = reproductive age, > 44 years = elderly.\n* Represents a statistically significant difference (P < 0.05).", "Primary objectives of epidemiological studies on the prevalence of infection with intestinal parasites in different regions/localities are to identify high-risk communities and formulate appropriate interventions. 
In line with this view, the present study attempted to assess the prevalence of different intestinal parasitic infections in highland and lowland dwellers in Gamo area, Gamo Gofa Zone and then recommend appropriate intervention.\nThe results of the study showed the occurrence of several intestinal parasites of public health importance among inhabitants in four kebeles found in Gamo area of Gamo Gofa Zone, South Nations. The overall prevalence of 39.9% with one or more intestinal parasites found in this study was much lower than what was reported (82.8%) from residents of four villages in southwestern Ethiopia by Yeneneh [21]; (83.8%) from school-children around Lake Langano by Legesse and Erko [22] and from that of Mengistu and collegues [19] (83%) from urban dwellers in southwest Ethiopia. However, the prevalence in our study was slightly higher compared to other community-based studies conducted in Saudi Arabia by Al-Shammari et al.[23] showing an overall prevalence of 32.2%. The possible explanations for the discrepancy between the present and previous study finding might be the result of variation in sampling techniques used, the difference in the quality of drinking water source, and variation in the environmental condition of the different study localities.\nThe prevalence of E. histolytica/dispar and G. lamblia infection in this study was 11.4% and 10.6%, respectively. These are within the range of the nation-wide prevalence rate for amoebiasis and giardiasis [24]. However, the present finding was relatively higher than that reported from urban dwellers in southwest Ethiopia, where Mengistu et al.[19] recorded 3.1% and 3.6% for amoebiasis and giardiasis, respectively [19]. The prevalence of giardiasis was also higher than that of Birrie and Erko report of 3.1% among non-school children [25]. Giardia cysts have been isolated from water supplies in different parts of the world [26,27]. Epidemic giardiasis may be related to drinking water [28]. The present study was also conducted in a rural area that may share the mentioned risk factors.\nThe level of ascariasis observed in this study (7.8%) of fecal samples was far lower than that reported from urban dwellers in Jimma, where Mengistu et al.[19] recorded 41.0% prevalence. The prevalence of hookworm infection was 42(4.9%). The rate is lower than the previous community based study in Jimma, where Mengistu and colleagues recorded 17.5% prevalence [19]. However, the present findings on prevalence of S. stercoralis, H. nana, and Taenia spp. was not much different from the findings of previous studies reported by Woldemichael et al., McConnel and Armstrong [18,24].\nAlthough significant difference was not observed, the prevalence of intestinal parasites infections was slightly lower among lowland dwellers (37.9%) than highland dwellers (42.3%). The existence of relatively low prevalence of human intestinal parasites in lowlands is due to low prevalence of A. lumbricoides and T. trichiura. A nation-wide study conducted on ascariasis in Ethiopia has indicated a low prevalence of ascariasis in the low and dry areas of the country [29]. In agreement to this report, the present study showed relatively low prevalence of ascariasis (0.7%) and trichuriasis (0.2%) among lowland dwellers. This might be because of proper toilet facilities, apposite hand washing habit and better awareness about health in lowland dwellers than highland dwellers in our study area.\nThough the reason for high prevalence of G. 
lamblia in the lowland area is unclear, the high prevalence of hookworm infection might be attributed to the low shoe-wearing habit during irrigation. Consistent with the geo-climatic character of the area, the highest prevalence of S. stercoralis infection was found in the lowland. A likely explanation is that the prevalence of S. stercoralis infection parallels that of hookworm infection [30]. This may lead us to the conclusion that geo-climatic factors, such as soil texture, farming ecosystem, temperature, humidity and rainfall, essentially influence S. stercoralis infection, as the lowlands are located near Lake Abaya.\nOf the four study sites, S. mansoni was reported only from Lante kebele. However, the prevalence of S. mansoni (0.12%) was much lower compared to the 14.8% reported by Mengistu et al. [19]. Although this study did not evaluate the migratory history of the study participants, the existence of such a low infection rate may imply the relocation of infected individuals from endemic foci. Nevertheless, the presence of a schistosome-infected dweller in Lante kebele represents a risk for the introduction of a new transmission focus where the snail hosts might be available. Hence, an extensive study on transmission foci in the area is recommended so that timely interventions can be taken.\nVariations that might have occurred due to gender and age group differences were also examined in the current study. No variation in results due to gender differences was observed. This result was similar to a study conducted in the central part of Turkey [31]. However, A. lumbricoides infection was significantly higher among female participants than male participants in this study. Though no significant difference among age groups was observed in the current study, the prevalence of infection was higher in the reproductive age group. This might be because this is the age group of working people who are engaged in agriculture and are most likely exposed to infection through contaminated soil, water and food. The rate of infection by Taenia spp. was also higher in the 15–44 and > 44 years age groups. This could be because the habit of eating beef increases as children grow older.\nMultiple infections occurred in 55 individuals, making up 6.4% of the total examined subjects and 16.1% of those who had intestinal parasites. The level of double infection with intestinal parasites determined in the present study (6.2%) was much lower than what was reported from southwest Ethiopia, where a double infection rate of 35.8% was found among urban communities [19]. Possible differences in the socio-demographic condition of the study populations and in environmental conditions might explain the observed difference in double infection between the two study localities.\nThis study did not assess opportunistic intestinal parasitic infections due to a lack of laboratory facilities in our department. Although important risk factors such as age, sex and altitude were considered, some risk factors were not evaluated in the current study. Apart from these limitations, our study has the following strengths. It is the first of its kind in the area, i.e. the pattern of intestinal parasitism had not been studied in the study area before. Moreover, all the participants from each kebele were sampled at one specific time in the sampling period to avoid seasonal biases. 
In addition, samples were collected from the entire population of highland and lowland dwellers, giving an equal probability of selection to each individual in the selected kebeles, which reflects the real prevalence of intestinal parasites in the Gamo area. Furthermore, standard laboratory techniques were used; all laboratory tasks followed standard procedures; and quality control mechanisms were employed at each step of the study.", "The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in the Gamo area indicates that parasitic infections are a considerable public health problem. The present study also revealed E. histolytica/dispar and G. lamblia as the common protozoa, while A. lumbricoides, hookworm and T. trichiura were the common helminths causing parasitic infection with varying magnitude in the study area. Enhancing socioeconomic status, improving sanitation facilities, providing health education and promoting personal hygiene can be good strategies to control these infections in the area.", "The authors declare that they have no competing interests.", "TW: conception of the research idea, design, data collection, data analysis, interpretation, and manuscript drafting. TT1: design, data collection and manuscript review. BS: data collection and manuscript review. TT: data analysis, interpretation and manuscript drafting. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/13/151/prepub\n" ]
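The sample-size arithmetic described under "Sample size and sampling techniques" above can be reproduced in a few lines. The sketch below is illustrative only, assuming the standard single-proportion formula n = Z²·p·(1−p)/d² inflated by a design effect; the inputs (p = 0.83, d = 0.05, 95% confidence, design effect = 4) come from the Methods text, while the function and variable names are made up for this example.

```python
import math

def single_proportion_sample_size(p, margin_of_error, z=1.96, design_effect=1.0):
    """Sample size for estimating a single proportion.

    n = Z^2 * p * (1 - p) / d^2, multiplied by the design effect
    to allow for multistage (cluster) sampling.
    """
    n_simple = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n_simple * design_effect)

# Inputs taken from the Methods text: expected prevalence 83%,
# 5% margin of error, 95% confidence (Z = 1.96), design effect 4.
n = single_proportion_sample_size(p=0.83, margin_of_error=0.05, design_effect=4)
print(n)  # ~868, i.e. approximately the reported 867 (the difference is rounding)
```

Allocating this total across the four kebeles in proportion to their population sizes, as the text describes, then yields the 464 lowland and 403 highland samples reported.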
[ null, "methods", null, null, null, null, null, null, "results", "discussion", "conclusions", null, null, null ]
[ "Control strategies", "Intestinal parasites", "Parasitism", "Protozoan" ]
Background: Intestinal parasitic infestation represents a large and serious medical and public health problem in developing countries. It is estimated that some 3.5 billion people are affected, and that 450 million are ill as a result of these infections, the majority being children [1]. Apart from causing morbidity and mortality, infection with intestinal parasites has known to cause iron deficiency anemia, growth retardation in children and other physical and mental health problems [2]. Furthermore, chronic intestinal parasitic infections have become the subject of speculation and investigation in relation to the spreading and severity of other infectious diseases of viral origin, tuberculosis and malaria [3-6]. Several factors like climatic conditions, poor sanitation, unsafe drinking water, and lack of toilet facilities are the main contributors to the high prevalence of intestinal parasites in the tropical and sub-tropical countries [7]. In addition, intestinal parasitic agents increase in polluted environments such as refuse heaps, gutters and sewage units in and around human dwelling and living conditions of the people in crowded or unhealthy situations [8]. Hence, a better understanding of the above factors, as well as how social, cultural, behavioral and community awareness affect the epidemiology and control of intestinal parasites may help to design effective control strategies for these diseases [9,10]. Intestinal parasites are widely distributed in Ethiopia largely due to the low level of environmental and personal hygiene, contamination of food and drinking water that results from improper disposal of human excreta [11,12]. In addition, lack of awareness of simple health promotion practices is also a contributing factor [13]. According to the Ethiopian Ministry of Health [14] more than half a million annual visits of the outpatient services of the health institutions are due to intestinal parasitic infections. However, this report may be an underestimate, because most of the health institutions lack appropriate diagnostic methods to detect low levels of parasite burden. In addition, some of the diagnostic methods for specific intestinal parasites, especially for the newly emerging opportunistic intestinal parasites, are not available to peripheral health institutions. Previous studies gave due attention to the distributions of intestinal parasites in different altitudes, community groups such as school children or other groups confined to camps [15-18]. Hence, the pattern of intestinal parasitism in a community with diverse groups of people as a whole was not illustrated particularly in the study area. The purpose of this study was to assess the magnitude and patterns of intestinal parasitism in highland and lowland dwellers, Gamo area, South Ethiopia. Methods: Study area A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible. 
Two of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water. A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible. Two of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water. Sample size and sampling techniques The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings. The sample size was determined using the single proportion population formula. 
It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings. Stool collection and processing About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above. About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. 
The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above. Quality control Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result. Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result. Data analysis Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05. Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05. Ethical clearance The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection. The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection. Study area: A community-based cross-sectional study was conducted between September 2010 and July 2011 in four selected areas of Gamo Gofa Zone administrative sub-division: Lante, Kolla Shelle, Dorze and Geressie kebeles. The zone is located at 505 kms South of Addis Ababa and has a total area of 12, 581.4 square kms. The general elevation of the zone ranges from 600 to 3300 m above sea level. The highland and lowland areas of the zone are characterized by an average annual rain fall of 1166 mm and 900 mm, respectively. The topography of the land is characterized by an undulating feature that makes the existence of different climatic zones in the area possible. 
Two of the study ‘kebeles’ (these are the smallest officially acknowledged administrative vicinities in the zone): Dorze and Geressie kebeles are positioned on the highland areas where as Lante and Kolla Shelle kebeles are situated on lowland settings of the zone. They are located a short distance from the zonal administrative center, Arba Minch. Dorze and Lante are located at 30 kms and 22 kms to the north, where as Geressie and Kolla Shelle are located at 56 kms and 28.6 kms to the south of Arba Minch, respectively. In all study kebeles healthcare is provided by health centers and private clinics which are staffed by few health officers, nurses and laboratory technicians. Though it is irregular and non-continuous, the control measures for intestinal parasites include health education, deworming of under five year children and treatment of drinking water. Sample size and sampling techniques: The sample size was determined using the single proportion population formula. It was calculated based on a prevalence of 83% [19] with a margin of error of 0.05 and a confidence level of 95%. The design effect was calculated by taking the intraclass correlation for the statistic (i.e.1%) in parasitic infection in highland and lowland areas of Ethiopia. A design effect of 4 was used to allow for multistage sampling. The calculated study sample size was 867. The study sites were divided into two regions based on altitude and from each altitude two kebeles were randomly selected. The households were selected using systematic sampling method and the study individuals were chosen using a simple random sampling method. The calculated sample size was divided to each kebele based on population size. As a result, 464 (192 from Lante and 272 from Kolla Shelle) samples were obtained from lowland and 403 (124 from Dorze and 279 from Geressie) samples were obtained from highland settings. Stool collection and processing: About 2 g of fresh fecal samples were collected from each consenting study subject and placed in separate labeled clean plastic stool containers. At the time of collection, date of sampling, the name of the participant, age, sex and consistency of the stool (formed, soft, semi-soft and watery) were recorded for each subject on a recording format. A portion of stool was examined at field by direct wet mount with saline (0.85% sodium chloride solution) to observe motile intestinal parasites and trophozoites under light microscope at 100× and 400× magnifications. The remaining part was preserved with 10% formalin in the ratio of 1 g of stool to 3 ml of formalin for later examination at Arba Minch University. Lugol’s iodine staining technique was also done to observe cysts of the intestinal protozoan parasites. A portion of preserved stool sample was processed by formol-ether concentration method as described by Ritchie [20], with some modification. In brief the stool sample was sieved with cotton gauze and transferred to 15 ml centrifuge tube. Then 8 ml of 10% formalin and 3 ml of diethyl ether was added and centrifuged for 2 min at 2000 rpm. The supernatant was discarded and the residues were transferred to microscopic slides and observed under light microscope at 100× and 400× magnifications for the presence of cysts and ova of the parasites. The presence of parasites was confirmed when observed by any of the methods above. Quality control: Before starting the actual work, quality of reagents and instruments were checked by experienced laboratory technologist. 
The specimens were also checked for serial number, quality and procedures of collection. To eliminate observer bias, each stool sample was examined by two laboratory technicians. The technicians were not informed about the health and other status of the study participants. In cases where the results were discordant, a third senior reader was used. The result of the third expert reader was considered the final result. Data analysis: Statistical analysis was performed with SPSS software version 16. Chi-square (χ2) was used to verify possible association between infection and exposure to different factors. Probability values were considered to be statistically significant when the calculated P-value was equal to or less than 0.05. Ethical clearance: The study was reviewed and approved by ethical committee of Arba Minch University. The ethical considerations were addressed by treating positive individuals using standard drugs under the supervision of a local nurse. The objective of the study was explained to kebele leaders and dwellers; and written consent was sought from parents or guardians of the selected children during stool sample collection. Results: A total of 867 study participants were selected for investigation. However, 10(1.2%) were excluded because of inability to provide specimen. For this reason a total of 858 individuals were included in the study (Table 1). Three hundred forty two of the study individuals were found to have single or multiple intestinal parasitic infections, which make the overall prevalence 39.9%. Of the entire positive samples for the parasite, 188 were female participants and 154 were male participants with female to male ratio of 1:0.8 (Table 1). The mean age of the participants was 25 ± 19. Frequency distribution of sex, age group and altitude of the study subjects in Gamo area Key: * Lante and Kolla Shelle. ** Dorze and Gereessie. Different types of parasites including protozoans, trematode, cestodes and nematodes were detected from the stool samples of study participants. Prevalence of Entamoeba histolytica/dispar (E. histolytica/dispar) was the highest 98(11.4%), followed by Giardia lamblia (G. lamblia) 91(10.6%), Ascaris lumbricoides (A. lumbricoides) 67(7.8%), Strongyloides stercoralis (S. stercoralis) 51(5.9%), hookworm 42(4.9%), Trichuris trichiura (T. trichiura) 24(2.8%), Taenia species (Taenia spp.) 18(2.1%), Hymenolepis nana (H. nana) 7(0.6%) and Schistosoma mansoni (S. mansoni) 1(0.12%), in that order. The majorities of the positive cases were single infections (83.9%) and double infections (15.5%). Of the triple infected persons, one was coinfected with E. histolytica/dispar, G. lamblia and Taenia spp. and the other with E. histolytica/dispar, S. stercoralis and hookworm (Figure 1). Single and mixed infections among residents of Gamo area, Gamo Gofa Zone, South Ethiopia. The prevalence of infection with different intestinal helminths and protozoan parasites for lowland (Lante and Kolla Shelle) and highland (Dorze and Gressie) is shown in Table 2. Out of 459 stool samples collected from lowland area, 174(37.9%) were positive for at least one parasite. Similarly, of the 399 stool samples collected from highland area, 169(42.3%) were positive for at least one parasite. No statistically significant difference was observed (P = 0.185) between presence of intestinal parasites and altitude (Tables 1 and 2). However, G. lamblia (P < 0.001) and hookworm (P = 0.002) were significantly more prevalent in lowland areas whereas A. 
lumbricoides (P < 0.001) and T. trichiura (P < 0.001) were significantly more prevalent in highland areas, but S. stercoralis was exclusive to lowland areas (Table 2). Prevalence of intestinal parasites among study subjects in lowland and highland dwellers in Gamo area Key: χ2 and P-value values represent lowland and highland comparison. * Represent statistically significant difference (P < 0.05). As observed in Table 2, of 190 and 269 stool samples collected from Lante and Kolla Shelle, 66(34.7%) and 108(40.1%) were found positive for at least one parasite, respectively. Similarly, out of 123 and 276 fecal samples collected from Dorze and Geressie sites, 50(40.7%) and 118(42.8%) were infected with one or more intestinal parasites. The overall difference in parasite prevalence was no statistically significant among the study sites, (P = 0.38). However, E. histolytica/dispar prevalence was significantly higher in Geressie (P < 0.001) than other study areas. A. lumbricoides and T. trichiura were significantly higher in Dorze (P < 0.005) and S. mansoni was detected in Lante alone. The distribution of infection among female and male participants is shown in Table 3. Female participants showed the highest infection rate (41.0%), followed closely by male participants (38.6%) (Table 1). The calculated P-value (0.48) indicates that the difference in the prevalence of intestinal parasites between female and male participants was statistically not significant. However, A. lumbricoides infection was significantly higher (P = 0.004) among female participants than male participants (Table 3). Sex related prevalence of intestinal parasites among study subjects in Gamo area Key: * Represent statistically significant difference (P < 0.05). To see variations in age group, the study population was divided into 4 age groups: birth to 4 years, 5 to 14 years, 15–44 years, and over 44 years. The distribution of infection among study subjects in the different age groups is shown in Table 4. The overall infection rate was highest among the 15–44 years age group (42.3%) followed by above 44 years age group (39.7%). Only 29.4% of children from birth to 4 years age group were infected. The difference in the prevalence of intestinal parasites was insignificant (P = 0.228) among the age groups (Table 1). Each parasite was also not associated with the age of the participant (P > 0.005) except T. trichiura which shown significantly high (P = 0.007) prevalence among the age group of 5–14 years (Table 4). Age related prevalence of intestinal parasites among study subjects in Gamo area Key: < 4 years = Preschool children, 5–14 years = School children, 15–44 years = Reproductive age, > 44 years = elder. * Represent statistically significant difference (P < 0.05). Discussion: Primary objectives of epidemiological studies on the prevalence of infection of intestinal parasites in different regions/localities are to identify high-risk communities and formulate appropriate intervention. In line with this view, the present study attempted to assess the prevalence of different intestinal parasitic infections in highland and lowland dwellers in Gamo area, Gamo Gofa Zone and then recommend appropriate intervention. The results of the study showed the occurrence of several intestinal parasites of public health importance among inhabitants in four kebeles found in Gamo area of Gamo Gofa Zone, South Nations. 
The overall prevalence of 39.9% with one or more intestinal parasites found in this study was much lower than what was reported (82.8%) from residents of four villages in southwestern Ethiopia by Yeneneh [21]; (83.8%) from school-children around Lake Langano by Legesse and Erko [22] and from that of Mengistu and collegues [19] (83%) from urban dwellers in southwest Ethiopia. However, the prevalence in our study was slightly higher compared to other community-based studies conducted in Saudi Arabia by Al-Shammari et al.[23] showing an overall prevalence of 32.2%. The possible explanations for the discrepancy between the present and previous study finding might be the result of variation in sampling techniques used, the difference in the quality of drinking water source, and variation in the environmental condition of the different study localities. The prevalence of E. histolytica/dispar and G. lamblia infection in this study was 11.4% and 10.6%, respectively. These are within the range of the nation-wide prevalence rate for amoebiasis and giardiasis [24]. However, the present finding was relatively higher than that reported from urban dwellers in southwest Ethiopia, where Mengistu et al.[19] recorded 3.1% and 3.6% for amoebiasis and giardiasis, respectively [19]. The prevalence of giardiasis was also higher than that of Birrie and Erko report of 3.1% among non-school children [25]. Giardia cysts have been isolated from water supplies in different parts of the world [26,27]. Epidemic giardiasis may be related to drinking water [28]. The present study was also conducted in a rural area that may share the mentioned risk factors. The level of ascariasis observed in this study (7.8%) of fecal samples was far lower than that reported from urban dwellers in Jimma, where Mengistu et al.[19] recorded 41.0% prevalence. The prevalence of hookworm infection was 42(4.9%). The rate is lower than the previous community based study in Jimma, where Mengistu and colleagues recorded 17.5% prevalence [19]. However, the present findings on prevalence of S. stercoralis, H. nana, and Taenia spp. was not much different from the findings of previous studies reported by Woldemichael et al., McConnel and Armstrong [18,24]. Although significant difference was not observed, the prevalence of intestinal parasites infections was slightly lower among lowland dwellers (37.9%) than highland dwellers (42.3%). The existence of relatively low prevalence of human intestinal parasites in lowlands is due to low prevalence of A. lumbricoides and T. trichiura. A nation-wide study conducted on ascariasis in Ethiopia has indicated a low prevalence of ascariasis in the low and dry areas of the country [29]. In agreement to this report, the present study showed relatively low prevalence of ascariasis (0.7%) and trichuriasis (0.2%) among lowland dwellers. This might be because of proper toilet facilities, apposite hand washing habit and better awareness about health in lowland dwellers than highland dwellers in our study area. Though the reason for high prevalence of G. lamblia in lowland area is unclear, the high prevalence of hookworm infection might be contributed to low shoe wearing habit during irrigation. According to the geo climatic type of area, significant highest prevalence of S. stercoralis infection was found in lowland. The explanation of this is that prevalence of S. stercoralis infection is parallel with hookworm infection [30]. 
This may lead us to the conclusion that geo-climatic factors, such as soil texture, farming ecosystem, temperature, humidity, and rainfall, essentially influence S. stercoralis infection, as the lowlands are located near Lake Abaya. Of the four study sites, S. mansoni was reported only from Lante kebele. However, the prevalence of S. mansoni (0.12%) was much lower than the 14.8% reported by Mengistu et al. [19]. Although this study did not evaluate the migratory history of the study participants, such a low infection rate may imply the relocation of infected individuals from endemic foci. Nevertheless, the presence of schistosome-infected dwellers in Lante kebele represents a risk for the introduction of a new transmission focus where snail hosts might be available. Hence, an extensive study of the transmission foci in the area is recommended to allow timely intervention. Variations due to gender and age group differences were also examined in the current study. No variation in results due to gender differences was observed, which is similar to a study conducted in the central part of Turkey [31]. However, A. lumbricoides infection was significantly higher among female participants than male participants in this study. Though a significant difference among age groups was not observed in the current study, the prevalence of infection was higher in the reproductive age group. This might be because this is the age group of working people who are engaged in agriculture and are most likely exposed to infection through contaminated soil, water, and food. The rate of infection by Taenia spp. was also higher in the 15–44 and > 44 years age groups. This could be because the habit of eating beef increases as children grow older. Multiple infections occurred in 55 individuals, accounting for 6.4% of the total examined subjects and 16.1% of those who had intestinal parasites. The level of double infection with intestinal parasites determined in the present study (6.2%) was much lower than that reported from southwest Ethiopia, where a double infection rate of 35.8% was observed among urban communities [19]. Possible differences in the socio-demographic and environmental conditions of the study populations might explain the observed difference in double infection between the two study localities. This study did not assess opportunistic intestinal parasitic infections because of the lack of laboratory facilities in our department. Although important risk factors such as age, sex, and altitude were considered, some risk factors were not evaluated in the current study. Apart from these limitations, our study has the following strengths. It is the first of its kind in the area; that is, the pattern of intestinal parasitism had not been studied in the study area before. Moreover, all the participants from each kebele were sampled at one specific time in the sampling period to avoid seasonal bias. In addition, the sample was drawn from the entire population of highland and lowland dwellers, giving an equal selection probability to each individual in the selected kebeles, which should reflect the true prevalence of intestinal parasites in the Gamo area. Furthermore, standard laboratory techniques were used, all laboratory tasks followed standard procedures, and quality control mechanisms were employed at each step of the study. 
Conclusion: The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in the Gamo area indicated that parasitic infections are considerable public health problems. The present study also revealed E. histolytica/dispar and Giardia as the common protozoa, and A. lumbricoides, hookworm, and T. trichiura as the common helminths, causing parasitic infection with varying magnitude in the study area. Enhancing socioeconomic status, improving sanitation facilities, providing health education, and promoting personal hygiene practices can be good strategies to control these infections in the area. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: TW: Conception of the research idea, design, data collection, data analysis, interpretation, and manuscript drafting. TT1: Design, data collection, and manuscript reviewing. BS: Data collection and manuscript reviewing. TT: Data analysis, interpretation, and manuscript drafting. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/13/151/prepub
Background: Epidemiological information on the prevalence of intestinal parasitic infections in different regions is a prerequisite to develop appropriate control strategies. Therefore, this present study was conducted to assess the magnitude and pattern of intestinal parasitism in highland and lowland dwellers in Gamo area, South Ethiopia. Methods: Community-based cross-sectional study was conducted between September 2010 and July 2011 at Lante, Kolla Shelle, Dorze and Geressie kebeles of Gamo Gofa Zone, South Ethiopia. The study sites and study participants were selected using multistage sampling method. Data were gathered through house-to-house survey. A total of 858 stool specimens were collected and processed using direct wet mount and formol-ether concentration techniques for the presence of parasite. Results: Out of the total examined subjects, 342(39.9%) were found positive for at least one intestinal parasite. The prevalence of Entamoeba histolytica/dispar was the highest 98(11.4%), followed by Giardia lamblia 91(10.6%), Ascaris lumbricoides 67(7.8%), Strongyloides stercoralis 51(5.9%), hookworm 42(4.9%), Trichuris trichiura 24(2.8%), Taenia species 18(2.1%), Hymenolepis nana 7(0.6%) and Schistosoma mansoni 1(0.12%). No statistically significant difference was observed in the prevalence of intestinal parasitic infections among lowland (37.9%) and highland dwellers (42.3%) (P = 0.185). The prevalence of intestinal parasitic infection was not significantly different among the study sites but it was relatively higher in Geressie (42.8%) than other kebeles. Sex was not associated with parasitic infections (P = 0.481). No statistically significant difference of infection was observed among the age groups (P = 0.228) but it was higher in reproductive age group. Conclusions: The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in Gamo area indicated that parasitic infections are important public health problems. Thus, infection control measures and the development of awareness strategies to improve sanitation and health education should be considered.
Background: Intestinal parasitic infestation represents a large and serious medical and public health problem in developing countries. It is estimated that some 3.5 billion people are affected, and that 450 million are ill as a result of these infections, the majority being children [1]. Apart from causing morbidity and mortality, infection with intestinal parasites has been known to cause iron deficiency anemia, growth retardation in children, and other physical and mental health problems [2]. Furthermore, chronic intestinal parasitic infections have become the subject of speculation and investigation in relation to the spreading and severity of other infectious diseases of viral origin, tuberculosis, and malaria [3-6]. Several factors, such as climatic conditions, poor sanitation, unsafe drinking water, and lack of toilet facilities, are the main contributors to the high prevalence of intestinal parasites in tropical and sub-tropical countries [7]. In addition, intestinal parasitic agents increase in polluted environments such as refuse heaps, gutters, and sewage units in and around human dwellings, and in the crowded or unhealthy living conditions of the people [8]. Hence, a better understanding of the above factors, as well as how social, cultural, behavioral, and community awareness factors affect the epidemiology and control of intestinal parasites, may help to design effective control strategies for these diseases [9,10]. Intestinal parasites are widely distributed in Ethiopia, largely due to the low level of environmental and personal hygiene and the contamination of food and drinking water that results from improper disposal of human excreta [11,12]. In addition, lack of awareness of simple health promotion practices is also a contributing factor [13]. According to the Ethiopian Ministry of Health [14], more than half a million annual visits to the outpatient services of health institutions are due to intestinal parasitic infections. However, this report may be an underestimate, because most of the health institutions lack appropriate diagnostic methods to detect low levels of parasite burden. In addition, some of the diagnostic methods for specific intestinal parasites, especially for the newly emerging opportunistic intestinal parasites, are not available to peripheral health institutions. Previous studies gave due attention to the distribution of intestinal parasites at different altitudes and in community groups such as school children or other groups confined to camps [15-18]. Consequently, the pattern of intestinal parasitism in a community with diverse groups of people as a whole has not been illustrated, particularly in the study area. The purpose of this study was to assess the magnitude and patterns of intestinal parasitism in highland and lowland dwellers, Gamo area, South Ethiopia. Conclusion: The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in the Gamo area indicated that parasitic infections are considerable public health problems. The present study also revealed E. histolytica/dispar and Giardia as the common protozoa, and A. lumbricoides, hookworm, and T. trichiura as the common helminths, causing parasitic infection with varying magnitude in the study area. Enhancing socioeconomic status, improving sanitation facilities, providing health education, and promoting personal hygiene practices can be good strategies to control these infections in the area.
Background: Epidemiological information on the prevalence of intestinal parasitic infections in different regions is a prerequisite to develop appropriate control strategies. Therefore, this present study was conducted to assess the magnitude and pattern of intestinal parasitism in highland and lowland dwellers in Gamo area, South Ethiopia. Methods: Community-based cross-sectional study was conducted between September 2010 and July 2011 at Lante, Kolla Shelle, Dorze and Geressie kebeles of Gamo Gofa Zone, South Ethiopia. The study sites and study participants were selected using multistage sampling method. Data were gathered through house-to-house survey. A total of 858 stool specimens were collected and processed using direct wet mount and formol-ether concentration techniques for the presence of parasite. Results: Out of the total examined subjects, 342(39.9%) were found positive for at least one intestinal parasite. The prevalence of Entamoeba histolytica/dispar was the highest 98(11.4%), followed by Giardia lamblia 91(10.6%), Ascaris lumbricoides 67(7.8%), Strongyloides stercoralis 51(5.9%), hookworm 42(4.9%), Trichuris trichiura 24(2.8%), Taenia species 18(2.1%), Hymenolepis nana 7(0.6%) and Schistosoma mansoni 1(0.12%). No statistically significant difference was observed in the prevalence of intestinal parasitic infections among lowland (37.9%) and highland dwellers (42.3%) (P = 0.185). The prevalence of intestinal parasitic infection was not significantly different among the study sites but it was relatively higher in Geressie (42.8%) than other kebeles. Sex was not associated with parasitic infections (P = 0.481). No statistically significant difference of infection was observed among the age groups (P = 0.228) but it was higher in reproductive age group. Conclusions: The high prevalence of intestinal parasitic infections among the lowland and highland dwellers in Gamo area indicated that parasitic infections are important public health problems. Thus, infection control measures and the development of awareness strategies to improve sanitation and health education should be considered.
6,022
383
[ 481, 284, 183, 269, 92, 52, 65, 10, 58, 16 ]
14
[ "study", "intestinal", "prevalence", "parasites", "stool", "area", "infection", "intestinal parasites", "lowland", "sample" ]
[ "intestinal parasitic infections", "intestinal parasites help", "intestinal parasites especially", "intestinal parasites level", "prevalence intestinal parasitic" ]
[CONTENT] Control strategies | Intestinal parasites | Parasitism | Protozoan [SUMMARY]
[CONTENT] Control strategies | Intestinal parasites | Parasitism | Protozoan [SUMMARY]
[CONTENT] Control strategies | Intestinal parasites | Parasitism | Protozoan [SUMMARY]
[CONTENT] Control strategies | Intestinal parasites | Parasitism | Protozoan [SUMMARY]
[CONTENT] Control strategies | Intestinal parasites | Parasitism | Protozoan [SUMMARY]
[CONTENT] Control strategies | Intestinal parasites | Parasitism | Protozoan [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Child | Child, Preschool | Cross-Sectional Studies | Ethiopia | Feces | Female | Humans | Infant | Infant, Newborn | Intestinal Diseases, Parasitic | Male | Middle Aged | Prevalence | Sex Distribution | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Child | Child, Preschool | Cross-Sectional Studies | Ethiopia | Feces | Female | Humans | Infant | Infant, Newborn | Intestinal Diseases, Parasitic | Male | Middle Aged | Prevalence | Sex Distribution | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Child | Child, Preschool | Cross-Sectional Studies | Ethiopia | Feces | Female | Humans | Infant | Infant, Newborn | Intestinal Diseases, Parasitic | Male | Middle Aged | Prevalence | Sex Distribution | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Child | Child, Preschool | Cross-Sectional Studies | Ethiopia | Feces | Female | Humans | Infant | Infant, Newborn | Intestinal Diseases, Parasitic | Male | Middle Aged | Prevalence | Sex Distribution | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Child | Child, Preschool | Cross-Sectional Studies | Ethiopia | Feces | Female | Humans | Infant | Infant, Newborn | Intestinal Diseases, Parasitic | Male | Middle Aged | Prevalence | Sex Distribution | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Child | Child, Preschool | Cross-Sectional Studies | Ethiopia | Feces | Female | Humans | Infant | Infant, Newborn | Intestinal Diseases, Parasitic | Male | Middle Aged | Prevalence | Sex Distribution | Young Adult [SUMMARY]
[CONTENT] intestinal parasitic infections | intestinal parasites help | intestinal parasites especially | intestinal parasites level | prevalence intestinal parasitic [SUMMARY]
[CONTENT] intestinal parasitic infections | intestinal parasites help | intestinal parasites especially | intestinal parasites level | prevalence intestinal parasitic [SUMMARY]
[CONTENT] intestinal parasitic infections | intestinal parasites help | intestinal parasites especially | intestinal parasites level | prevalence intestinal parasitic [SUMMARY]
[CONTENT] intestinal parasitic infections | intestinal parasites help | intestinal parasites especially | intestinal parasites level | prevalence intestinal parasitic [SUMMARY]
[CONTENT] intestinal parasitic infections | intestinal parasites help | intestinal parasites especially | intestinal parasites level | prevalence intestinal parasitic [SUMMARY]
[CONTENT] intestinal parasitic infections | intestinal parasites help | intestinal parasites especially | intestinal parasites level | prevalence intestinal parasitic [SUMMARY]
[CONTENT] study | intestinal | prevalence | parasites | stool | area | infection | intestinal parasites | lowland | sample [SUMMARY]
[CONTENT] study | intestinal | prevalence | parasites | stool | area | infection | intestinal parasites | lowland | sample [SUMMARY]
[CONTENT] study | intestinal | prevalence | parasites | stool | area | infection | intestinal parasites | lowland | sample [SUMMARY]
[CONTENT] study | intestinal | prevalence | parasites | stool | area | infection | intestinal parasites | lowland | sample [SUMMARY]
[CONTENT] study | intestinal | prevalence | parasites | stool | area | infection | intestinal parasites | lowland | sample [SUMMARY]
[CONTENT] study | intestinal | prevalence | parasites | stool | area | infection | intestinal parasites | lowland | sample [SUMMARY]
[CONTENT] intestinal | intestinal parasites | parasites | health | institutions | health institutions | intestinal parasitic | addition | people | lack [SUMMARY]
[CONTENT] stool | kms | study | sample | zone | kebeles | size | calculated | ml | sampling [SUMMARY]
[CONTENT] years | table | age | participants | prevalence | difference | 44 years | parasites | study | group [SUMMARY]
[CONTENT] infections | common | parasitic | area | parasitic infections | health | good strategies | ways keeping | socioeconomic status improving sanitation | socioeconomic status improving [SUMMARY]
[CONTENT] study | intestinal | parasites | stool | prevalence | area | health | sample | infection | intestinal parasites [SUMMARY]
[CONTENT] study | intestinal | parasites | stool | prevalence | area | health | sample | infection | intestinal parasites [SUMMARY]
[CONTENT] ||| Gamo | South Ethiopia [SUMMARY]
[CONTENT] between September 2010 and July 2011 | Dorze | Geressie | Gofa Zone | South Ethiopia ||| ||| ||| 858 | wet mount [SUMMARY]
[CONTENT] 342(39.9% | at least one ||| Entamoeba | 98(11.4% | Giardia | 91(10.6% | Ascaris | 67(7.8% | Strongyloides | 51(5.9% | 42(4.9% | Trichuris | 24(2.8% | Taenia | 18(2.1% | Hymenolepis | 7(0.6% | Schistosoma | 1(0.12% ||| 37.9% | 42.3% | 0.185 ||| Geressie | 42.8% ||| 0.481 ||| 0.228 [SUMMARY]
[CONTENT] Gamo ||| [SUMMARY]
[CONTENT] ||| Gamo | South Ethiopia ||| between September 2010 and July 2011 | Dorze | Geressie | Gofa Zone | South Ethiopia ||| ||| ||| 858 | wet mount ||| 342(39.9% | at least one ||| Entamoeba | 98(11.4% | Giardia | 91(10.6% | Ascaris | 67(7.8% | Strongyloides | 51(5.9% | 42(4.9% | Trichuris | 24(2.8% | Taenia | 18(2.1% | Hymenolepis | 7(0.6% | Schistosoma | 1(0.12% ||| 37.9% | 42.3% | 0.185 ||| Geressie | 42.8% ||| 0.481 ||| 0.228 ||| Gamo ||| [SUMMARY]
[CONTENT] ||| Gamo | South Ethiopia ||| between September 2010 and July 2011 | Dorze | Geressie | Gofa Zone | South Ethiopia ||| ||| ||| 858 | wet mount ||| 342(39.9% | at least one ||| Entamoeba | 98(11.4% | Giardia | 91(10.6% | Ascaris | 67(7.8% | Strongyloides | 51(5.9% | 42(4.9% | Trichuris | 24(2.8% | Taenia | 18(2.1% | Hymenolepis | 7(0.6% | Schistosoma | 1(0.12% ||| 37.9% | 42.3% | 0.185 ||| Geressie | 42.8% ||| 0.481 ||| 0.228 ||| Gamo ||| [SUMMARY]
The etiological roles of miRNAs, lncRNAs, and circRNAs in neuropathic pain: A narrative review.
35808924
Non-coding RNAs (ncRNAs) are involved in neuropathic pain development. Herein, we systematically searched for neuropathic pain-related ncRNAs expression changes, including microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and circular non-coding RNAs (circRNAs).
BACKGROUND
We searched two databases, PubMed and GeenMedical, for relevant studies.
METHODS
Peripheral nerve injury or noxious stimuli can induce extensive changes in the expression of ncRNAs. For example, higher serum miR-132-3p, -146b-5p, and -384 levels were observed in neuropathic pain patients. Either sciatic nerve ligation, dorsal root ganglion (DRG) transection, or ventral root transection (VRT) could upregulate miR-21 and miR-31 while downregulating miR-668 and miR-672 in the injured DRG. lncRNAs, such as early growth response 2-antisense-RNA (Egr2-AS-RNA) and Kcna2-AS-RNA, were upregulated in Schwann cells and the injured DRG after nerve injury, respectively. Dysregulated circRNA homeodomain-interacting protein kinase 3 (circHIPK3) in serum and the DRG, as well as abnormally expressed lncRNAs X-inactive specific transcript (XIST), nuclear enriched abundant transcript 1 (NEAT1), small nucleolar RNA host gene 1 (SNHG1), and ciRS-7, zinc finger protein 609 (circZNF609), circ_0005075, and circAnks1a in the spinal cord, were suggested to participate in neuropathic pain development. Dysregulated miRNAs contribute to neuropathic pain via neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain-related mediators, protein kinases, and structural proteins, excitatory-inhibitory imbalances in neurotransmission, or exosome miRNA-mediated neuron-glia communication. In addition, lncRNAs and circRNAs are essential in neuropathic pain by acting as antisense RNAs and miRNA sponges, epigenetically regulating the expression of pain-related molecules, or modulating miRNA processing.
RESULTS
Numerous dysregulated ncRNAs have been suggested to participate in neuropathic pain development. However, there is much work to be done before ncRNA-based analgesics can be clinically used for various reasons such as conservation among species, proper delivery, stability, and off-target effects.
CONCLUSIONS
[ "Ganglia, Spinal", "Humans", "MicroRNAs", "Neuralgia", "RNA, Circular", "RNA, Long Noncoding" ]
9396192
INTRODUCTION
According to the International Association for the Study of Pain, neuropathic pain is the most severe chronic pain condition triggered by a lesion or disease of the somatosensory system. It is characteristic of hyperalgesia, allodynia, or spontaneous pain. 1 Neuropathic pain can have peripheral and central origins, with the former including neuropathic pain after peripheral nerve injury, trigeminal neuralgia, postherpetic neuralgia, painful radiculopathy, and painful polyneuropathy. In contrast, central neuropathic pain includes neuropathic pain after spinal cord or brain injury, multiple sclerosis, and chronic central post‐stroke pain. Approximately 7%–10% of the general population will experience neuropathic pain, with the majority not having satisfactory pain relief with current therapies, leading to great suffering for individuals and enormous economic and social burdens. 2 The exact molecular mechanisms underlying neuropathic pain remain unclear, and elucidating them is crucial for developing mechanism‐based treatment strategies. One proposed mechanism involves altered gene or protein expression along the pain processing pathways. Therefore, understanding how genes or proteins are dysregulated may help us to find a way to normalize these abnormalities and treat neuropathic pain. In recent years, accumulating evidence has suggested the essential role of non‐coding RNAs (ncRNAs) in various physiological and pathological procedures, such as embryonic development, inflammation, tumors, and respiratory and cardiovascular diseases. 3 ncRNAs have no protein‐coding potential, but they can govern gene or protein expression with diverse mechanisms. ncRNAs are extensively distributed in the peripheral and central nervous systems, including pain‐related structures. 4 Broad abnormal ncRNAs expression is observed following peripheral stimulation. These abnormalities are related to hyperalgesia during chronic pain development. The available data indicate that ncRNAs may be essential for hyperalgesia. In this review, we focus on microRNA (miRNA), long non‐coding RNA (lncRNA), and circular non‐coding RNA (circRNA) expression changes in neuropathic pain. Other types of ncRNAs are seldom reported in neuropathic pain and, thus, are not discussed herein. Notably, we will pay attention to their etiological role in the development of neuropathic pain and the current challenges and considerations for miRNA‐, lncRNA‐, and circRNA‐based therapeutics for neuropathic pain.
null
null
null
null
CONCLUSION
Recent human and animal studies have identified accumulating dysregulated miRNAs and lncRNAs in the serum or along pain processing pathways following peripheral nerve injury or noxious stimulation. Experimental studies have validated their essential role in neuropathic pain. miRNAs contribute to neuropathic pain development via neuroinflammation, autophagy, abnormal ion channel expression, regulating pain‐related mediators, protein kinases, structural proteins, neurotransmission excitatory–inhibitory imbalances, and exosome miRNA‐mediated neuron–glia communication. Meanwhile, lncRNAs and circRNAs are crucial in neuropathic pain development by acting as AS RNA and miRNA sponges, epigenetically regulating pain‐related molecules expression, or modulating miRNA processing. However, more work is required before miRNA‐ and lncRNA‐based analgesics can be clinically used for various reasons, including their conservation among species, proper delivery, stability, off‐target effects, and potential activation of the immune system.
[ "INTRODUCTION", "\nBIOGENESIS AND FUNCTION OF miRNAs, lncRNAs, AND circRNAs\n", "\nPERIPHERAL NERVE INJURY OR NOXIOUS STIMULI INDUCE EXTENSIVE miRNAs, lncRNAs, AND circRNAs EXPRESSION CHANGES\n", "\nmiRNAs expression changes", "\nlncRNAs and circRNAs expression changes", "\nSTUDYING miRNA NEUROPATHIC PAIN MECHANISMS\n", "\nmiRNAs regulate neuroinflammation in neuropathic pain development", "Autophagy", "The contribution of miRNAs to ion channel expression in neuropathic pain", "\nmiRNAs regulate pain‐related mediators, protein kinases, and structural proteins", "Neurotransmission excitatory–inhibitory imbalances", "\nEV miRNAs‐mediated neuron–glia communication", "\nSTUDYING lncRNAs AND circRNAs NEUROPATHIC PAIN MECHANISMS\n", "\nlncRNAs act as AS RNA\n", "\nlncRNAs act as miRNA sponges", "\nlncRNAs epigenetically regulate pain‐related molecules expression", "\nlncRNAs modulate miRNA processing", "\nCHALLENGES AND CONSIDERATIONS IN DELIVERING miRNA‐AND lncRNAs‐BASED THERAPETUTICS\n", "AUTHOR CONTRIBUTIONS" ]
[ "According to the International Association for the Study of Pain, neuropathic pain is the most severe chronic pain condition triggered by a lesion or disease of the somatosensory system. It is characteristic of hyperalgesia, allodynia, or spontaneous pain.\n1\n Neuropathic pain can have peripheral and central origins, with the former including neuropathic pain after peripheral nerve injury, trigeminal neuralgia, postherpetic neuralgia, painful radiculopathy, and painful polyneuropathy. In contrast, central neuropathic pain includes neuropathic pain after spinal cord or brain injury, multiple sclerosis, and chronic central post‐stroke pain. Approximately 7%–10% of the general population will experience neuropathic pain, with the majority not having satisfactory pain relief with current therapies, leading to great suffering for individuals and enormous economic and social burdens.\n2\n\n\nThe exact molecular mechanisms underlying neuropathic pain remain unclear, and elucidating them is crucial for developing mechanism‐based treatment strategies. One proposed mechanism involves altered gene or protein expression along the pain processing pathways. Therefore, understanding how genes or proteins are dysregulated may help us to find a way to normalize these abnormalities and treat neuropathic pain.\nIn recent years, accumulating evidence has suggested the essential role of non‐coding RNAs (ncRNAs) in various physiological and pathological procedures, such as embryonic development, inflammation, tumors, and respiratory and cardiovascular diseases.\n3\n ncRNAs have no protein‐coding potential, but they can govern gene or protein expression with diverse mechanisms. ncRNAs are extensively distributed in the peripheral and central nervous systems, including pain‐related structures.\n4\n Broad abnormal ncRNAs expression is observed following peripheral stimulation. These abnormalities are related to hyperalgesia during chronic pain development. The available data indicate that ncRNAs may be essential for hyperalgesia. In this review, we focus on microRNA (miRNA), long non‐coding RNA (lncRNA), and circular non‐coding RNA (circRNA) expression changes in neuropathic pain. Other types of ncRNAs are seldom reported in neuropathic pain and, thus, are not discussed herein. Notably, we will pay attention to their etiological role in the development of neuropathic pain and the current challenges and considerations for miRNA‐, lncRNA‐, and circRNA‐based therapeutics for neuropathic pain.", "Typically, miRNA production involves three steps: cropping, exporting, and dicing. First, the miRNA gene is transcribed in the nucleus, mainly by RNA polymerase II.\n5\n The resulting primary miRNA transcript (pri‐miRNA) is several kilobases (kb) in length, with a specific stem–loop structure that harbors mature miRNAs in the stem. The mature miRNA is ‘cropped’ by Drosha and its interactor DGCR8 (DiGeorge syndrome critical region 8), which cleaves pri‐miRNA at the stem to produce pre‐miRNA,\n6\n 60–70 nucleotides (nt) in length with a hairpin structure. Then, exportin‐5 recognizes and exports pre‐miRNA from the nucleus to the cytoplasm.\n7\n In the cytoplasm, ribonuclease III (RNAse III), termed Dicer, further cleaves the pre‐miRNA to release double‐stranded miRNA with a length of ~22 nt.\n8\n This miRNA is unwound by an unknown helicase or cleaved by Argonaute (Ago) to form the RNA‐induced silencing complex.\n9\n One strand in the RNA duplex remains with Ago as a mature miRNA, and the other is degraded. 
The seed sequence of the miRNA incompletely or entirely combines with the target mRNA sequence, resulting in target mRNAs degradation or transcriptional regulation.\n10\n\n\nUnlike miRNAs, lncRNAs are mRNAs‐like transcripts ranging in length from 200 nt to 100 kb that lack prominent open reading frames.\n11\n The lncRNA cellular mechanism is highly related to their intracellular localization. lncRNAs control chromatin functions, transcription, and RNA processing in the nucleus and affect mRNA stability, translation, and cellular signaling in the cytoplasm.\n12\n Compared to lncRNAs, circRNAs are more stable because a single circRNA molecular ends can be covalently linked compared to linear RNA. circRNAs are evolutionarily conserved molecules that are essential in the post‐transcriptional modification of gene expression by acting as miRNAs sponges or interacting with transcription or translational machinery. Numerous lncRNAs and circRNAs are distributed within pain‐related regions and dysregulated after peripheral noxious stimulation. Moreover, functional studies have indicated that miRNAs, lncRNAs, and circRNAs participate in neuropathic pain development by regulating diverse pain‐related genes along the pain processing pathways.", "\nmiRNAs expression changes Microarray and deep‐sequencing analyses revealed that nerve injury or noxious stimuli could induce broad changes in miRNA expression in serum or along the pain processing pathways, including the perineal nerve, dorsal root ganglion (DRG), spinal cord, and supraspinal regions. For normal pain signal transmission, nerve injury or noxious stimuli are detected by nociceptors in the DRG or trigeminal ganglion (TG) and then transmitted to upstream neurons in the spinal dorsal horn. Subsequently, nociceptive stimuli are integrated, processed, and further transmitted ascending to specific supraspinal brain regions.\nHuman studies have identified as many as 1134 differentially expressed (DE) genes in the serum of individuals with or without neuropathic pain after spinal cord injury (SCI).\n13\n miR‐204‐5p, ‐519d‐3p, ‐20b‐5p, and ‐6838‐5p might act as promising biomarkers and intervention targets for preventing and therapizing neuropathic pain after SCI. In trigeminal neuralgia individuals, serum miR‐132‐3p, ‐146b‐5p, ‐155‐5p, and ‐384 levels were prominently increased compared with healthy controls.\n14\n Patients with painful peripheral neuropathy had higher miR‐21 level in the serum and sural nerve compared with healthy controls. Meanwhile, miR‐155 was reduced in the serum and inflicted lower leg skin.\n15\n Animal and human studies showed that miR‐30c‐5p was upregulated in the serum and cerebro spinal fluid (CSF) of sciatic nerve injury rats and neuropathic pain patients with chronic peripheral ischemia. The high expression of miR‐30c‐5p, together with other clinical parameters, might be used to predict neuropathic pain development in patients with chronic peripheral ischemia.\n16\n Additionally, an animal study showed that plasma‐derived DE extracellular vesicle (EV) miRNAs regulated processes that are essential for neuropathic pain development. Most DE EV miRNAs for inflammation suppression were downregulated, potentially acting as biomarkers and targets in neuropathic pain treatment.\n17\n\n\nIn addition, numerous DE miRNAs have been identified in the DRG. 
One week after spared nerve injury (SNI), 33 and 39 miRNAs in the DRG were upregulated and downregulated, respectively, with most DE miRNAs related to axon guidance, focal adhesion, Ras and Wnt signaling pathways.\n18\n Furthermore, nerve injury‐induced miRNAs expression was dynamic and time‐dependent,\n19\n, \n20\n implicating multiple regulatory mechanisms in neuropathic pain initiation and development. Nerve injury redistributes miRNAs from a uniform style within the DRG soma of non‐allodynic animals to preferential localization to peripheral neurons in allodynic animals.\n21\n Furthermore, either sciatic nerve ligation (SNL), DRG transaction (DRT), or ventral root transection (VRT) could upregulate miR‐21 and miR‐31 while downregulating miR‐668 and miR‐672 in the injured DRG,\n22\n implying that these miRNAs could be therapeutic targets for treating diverse types of neuropathic pain.\nThe spinal cord dorsal horn relays and modulates pain signals from the peripheral nociceptors to the supraspinal regions. In the spinal cord, numerous miRNA expression changes have been observed in chronic constriction injury (CCI) and diabetic neuropathic pain (DNP) rodents.\n23\n, \n24\n, \n25\n, \n26\n Previous research has suggested some miRNAs may be related to neuropathic pain development, for example, miR‐500, ‐221, and ‐21,\n25\n thus, acting as potential targets in its treatment.\n27\n\n\nCytoscape software constructed the miRNA–target gene regulatory network in the supraspinal region, including the nucleus accumbens (NAc), medial prefrontal cortex, and periaqueductal gray, between SNI and sham rats. Finally, four essential DE genes, includingCXCR2, IL12B, TNFSF8, and GRK1, and five miRNAs, including miR‐208a‐5p, −7688‐3p, −344f‐3p, −135b‐3p, and ‐135a‐2‐3p, were identified, indicating their essential roles in neuropathic pain pathogenesis.\n28\n Furthermore, in the prelimbic cortex of SNI rats, the DE miRNA–mRNA network pointed to molecules associated with inflammation.\n29\n DE miRNAs were also observed in the bilateral hippocampus of CCI rats. However, no significant difference was observed bilaterally in the hippocampus.\n30\n (Figure 1).\nmicroRNAs (miRNAs) expression change following peripheral nerve injury or noxious stimuli SCI, spinal cord injury; CSF, cerebrol spinal fluid; DE, differential expressed; EV, extravascular vesicle; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury; DRT, DRG transection; VRT, ventral root transection\nMicroarray and deep‐sequencing analyses revealed that nerve injury or noxious stimuli could induce broad changes in miRNA expression in serum or along the pain processing pathways, including the perineal nerve, dorsal root ganglion (DRG), spinal cord, and supraspinal regions. For normal pain signal transmission, nerve injury or noxious stimuli are detected by nociceptors in the DRG or trigeminal ganglion (TG) and then transmitted to upstream neurons in the spinal dorsal horn. Subsequently, nociceptive stimuli are integrated, processed, and further transmitted ascending to specific supraspinal brain regions.\nHuman studies have identified as many as 1134 differentially expressed (DE) genes in the serum of individuals with or without neuropathic pain after spinal cord injury (SCI).\n13\n miR‐204‐5p, ‐519d‐3p, ‐20b‐5p, and ‐6838‐5p might act as promising biomarkers and intervention targets for preventing and therapizing neuropathic pain after SCI. 
In trigeminal neuralgia individuals, serum miR‐132‐3p, ‐146b‐5p, ‐155‐5p, and ‐384 levels were prominently increased compared with healthy controls.\n14\n Patients with painful peripheral neuropathy had higher miR‐21 level in the serum and sural nerve compared with healthy controls. Meanwhile, miR‐155 was reduced in the serum and inflicted lower leg skin.\n15\n Animal and human studies showed that miR‐30c‐5p was upregulated in the serum and cerebro spinal fluid (CSF) of sciatic nerve injury rats and neuropathic pain patients with chronic peripheral ischemia. The high expression of miR‐30c‐5p, together with other clinical parameters, might be used to predict neuropathic pain development in patients with chronic peripheral ischemia.\n16\n Additionally, an animal study showed that plasma‐derived DE extracellular vesicle (EV) miRNAs regulated processes that are essential for neuropathic pain development. Most DE EV miRNAs for inflammation suppression were downregulated, potentially acting as biomarkers and targets in neuropathic pain treatment.\n17\n\n\nIn addition, numerous DE miRNAs have been identified in the DRG. One week after spared nerve injury (SNI), 33 and 39 miRNAs in the DRG were upregulated and downregulated, respectively, with most DE miRNAs related to axon guidance, focal adhesion, Ras and Wnt signaling pathways.\n18\n Furthermore, nerve injury‐induced miRNAs expression was dynamic and time‐dependent,\n19\n, \n20\n implicating multiple regulatory mechanisms in neuropathic pain initiation and development. Nerve injury redistributes miRNAs from a uniform style within the DRG soma of non‐allodynic animals to preferential localization to peripheral neurons in allodynic animals.\n21\n Furthermore, either sciatic nerve ligation (SNL), DRG transaction (DRT), or ventral root transection (VRT) could upregulate miR‐21 and miR‐31 while downregulating miR‐668 and miR‐672 in the injured DRG,\n22\n implying that these miRNAs could be therapeutic targets for treating diverse types of neuropathic pain.\nThe spinal cord dorsal horn relays and modulates pain signals from the peripheral nociceptors to the supraspinal regions. In the spinal cord, numerous miRNA expression changes have been observed in chronic constriction injury (CCI) and diabetic neuropathic pain (DNP) rodents.\n23\n, \n24\n, \n25\n, \n26\n Previous research has suggested some miRNAs may be related to neuropathic pain development, for example, miR‐500, ‐221, and ‐21,\n25\n thus, acting as potential targets in its treatment.\n27\n\n\nCytoscape software constructed the miRNA–target gene regulatory network in the supraspinal region, including the nucleus accumbens (NAc), medial prefrontal cortex, and periaqueductal gray, between SNI and sham rats. Finally, four essential DE genes, includingCXCR2, IL12B, TNFSF8, and GRK1, and five miRNAs, including miR‐208a‐5p, −7688‐3p, −344f‐3p, −135b‐3p, and ‐135a‐2‐3p, were identified, indicating their essential roles in neuropathic pain pathogenesis.\n28\n Furthermore, in the prelimbic cortex of SNI rats, the DE miRNA–mRNA network pointed to molecules associated with inflammation.\n29\n DE miRNAs were also observed in the bilateral hippocampus of CCI rats. 
However, no significant difference was observed bilaterally in the hippocampus.\n30\n (Figure 1).\nmicroRNAs (miRNAs) expression change following peripheral nerve injury or noxious stimuli SCI, spinal cord injury; CSF, cerebrol spinal fluid; DE, differential expressed; EV, extravascular vesicle; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury; DRT, DRG transection; VRT, ventral root transection\n\nlncRNAs and circRNAs expression changes Patients with type 2 DNP had a prominent higher expression of serum lncRNA NONRATT021972 and more severe neuropathic pain symptoms.\n31\n LINC01119 and LINC02447 in the peripheral blood of SCI patients were identified in pain pathways that were important for neuropathic pain development.\n32\n\n\nMicroarray analysis revealed that a nerve injury could induce time‐dependent lncRNA expression changes in the sciatic nerve.\n33\n lncRNA H19 was persistently upregulated in Schwann cells along the peripheral nerve, proximal and distal to the injured site.\n34\n In the DRG, the transcriptomic analysis identified 86 known and 26 novel lncRNAs genes to be DE after spared sciatic nerve injury.\n35\n Of these, rno‐Cntnap2 and AC111653.1 were essential in peripheral nerve regeneration and were involved in neuropathic pain.\nNext‐generation RNA sequencing showed 134 lncRNAs and 188 circRNAs were prominently changed 14 days after SNI in the spinal cord.\n36\n In addition, microarray analysis identified 1481 and 1096 DE lncRNAs and mRNAs, respectively, in the spinal cord dorsal horn of DNP rats. Of these, 289 neighboring and 57 overlapping lncRNA‐mRNA pairs, including ENSMUST00000150952‐Mbp and AK081017‐Usp15, have been suggested to participate in neuropathic pain development.\n37\n\n\ncircRNAs have characteristic circularized structures following the backsplicing of exons from antisense RNAs (AS RNAs) or mRNAs and, thus, are highly stable. Most circRNAs are highly conserved among species and lack translation potential, despite cap‐independent translation. 
A recent study showed that 363 and 106 circRNAs were significantly dysregulated in the ipsilateral dorsal horn after nerve injury.\n38\n (Figure 2).\nLong non‐coding RNA (lncRNAs) and circular non‐coding RNAs (circRNAs) expression change following peripheral nerve injury or noxious stimuli DNP, diabetic neuropathic pain; SCI, spinal cord injury; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury\nPatients with type 2 DNP had a prominent higher expression of serum lncRNA NONRATT021972 and more severe neuropathic pain symptoms.\n31\n LINC01119 and LINC02447 in the peripheral blood of SCI patients were identified in pain pathways that were important for neuropathic pain development.\n32\n\n\nMicroarray analysis revealed that a nerve injury could induce time‐dependent lncRNA expression changes in the sciatic nerve.\n33\n lncRNA H19 was persistently upregulated in Schwann cells along the peripheral nerve, proximal and distal to the injured site.\n34\n In the DRG, the transcriptomic analysis identified 86 known and 26 novel lncRNAs genes to be DE after spared sciatic nerve injury.\n35\n Of these, rno‐Cntnap2 and AC111653.1 were essential in peripheral nerve regeneration and were involved in neuropathic pain.\nNext‐generation RNA sequencing showed 134 lncRNAs and 188 circRNAs were prominently changed 14 days after SNI in the spinal cord.\n36\n In addition, microarray analysis identified 1481 and 1096 DE lncRNAs and mRNAs, respectively, in the spinal cord dorsal horn of DNP rats. Of these, 289 neighboring and 57 overlapping lncRNA‐mRNA pairs, including ENSMUST00000150952‐Mbp and AK081017‐Usp15, have been suggested to participate in neuropathic pain development.\n37\n\n\ncircRNAs have characteristic circularized structures following the backsplicing of exons from antisense RNAs (AS RNAs) or mRNAs and, thus, are highly stable. Most circRNAs are highly conserved among species and lack translation potential, despite cap‐independent translation. A recent study showed that 363 and 106 circRNAs were significantly dysregulated in the ipsilateral dorsal horn after nerve injury.\n38\n (Figure 2).\nLong non‐coding RNA (lncRNAs) and circular non‐coding RNAs (circRNAs) expression change following peripheral nerve injury or noxious stimuli DNP, diabetic neuropathic pain; SCI, spinal cord injury; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury", "Microarray and deep‐sequencing analyses revealed that nerve injury or noxious stimuli could induce broad changes in miRNA expression in serum or along the pain processing pathways, including the perineal nerve, dorsal root ganglion (DRG), spinal cord, and supraspinal regions. For normal pain signal transmission, nerve injury or noxious stimuli are detected by nociceptors in the DRG or trigeminal ganglion (TG) and then transmitted to upstream neurons in the spinal dorsal horn. Subsequently, nociceptive stimuli are integrated, processed, and further transmitted ascending to specific supraspinal brain regions.\nHuman studies have identified as many as 1134 differentially expressed (DE) genes in the serum of individuals with or without neuropathic pain after spinal cord injury (SCI).\n13\n miR‐204‐5p, ‐519d‐3p, ‐20b‐5p, and ‐6838‐5p might act as promising biomarkers and intervention targets for preventing and therapizing neuropathic pain after SCI. 
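To make the seed-pairing step described in the biogenesis section above more concrete, the toy sketch below locates canonical seed-match sites in a 3'UTR. It is illustrative only and no substitute for dedicated target-prediction tools such as TargetScan or miRanda: the miRNA is written as the commonly cited mature miR-21-5p sequence (miR-21 appears above as upregulated after nerve injury, though the sequence should be checked against miRBase before any real use), and the 3'UTR is an invented string containing two complementary seed sites.

```python
# Toy illustration of miRNA seed matching: the seed (nucleotides 2-8 of the
# mature miRNA) pairs with a complementary site in the target 3'UTR.
# The 3'UTR below is invented purely for illustration.

def reverse_complement(rna: str) -> str:
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna))

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based 3'UTR positions matching the miRNA seed (a 7-mer, nt 2-8)."""
    site = reverse_complement(mirna[1:8])  # sequence the UTR must contain
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

mirna = "UAGCUUAUCAGACUGAUGUUGA"                    # mature miR-21-5p (22 nt)
utr = "AAGCUAUGAAGCUGAUAAGCUAACCUAGCUGAUAAGCUAUU"   # toy 3'UTR
print(seed_match_sites(mirna, utr))                 # -> [14, 31]
```

Real predictors additionally weigh site type, conservation, and sequence context, and the targets cited in this review were validated experimentally rather than by seed matching alone.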
In trigeminal neuralgia individuals, serum miR‐132‐3p, ‐146b‐5p, ‐155‐5p, and ‐384 levels were prominently increased compared with healthy controls.\n14\n Patients with painful peripheral neuropathy had higher miR‐21 level in the serum and sural nerve compared with healthy controls. Meanwhile, miR‐155 was reduced in the serum and inflicted lower leg skin.\n15\n Animal and human studies showed that miR‐30c‐5p was upregulated in the serum and cerebro spinal fluid (CSF) of sciatic nerve injury rats and neuropathic pain patients with chronic peripheral ischemia. The high expression of miR‐30c‐5p, together with other clinical parameters, might be used to predict neuropathic pain development in patients with chronic peripheral ischemia.\n16\n Additionally, an animal study showed that plasma‐derived DE extracellular vesicle (EV) miRNAs regulated processes that are essential for neuropathic pain development. Most DE EV miRNAs for inflammation suppression were downregulated, potentially acting as biomarkers and targets in neuropathic pain treatment.\n17\n\n\nIn addition, numerous DE miRNAs have been identified in the DRG. One week after spared nerve injury (SNI), 33 and 39 miRNAs in the DRG were upregulated and downregulated, respectively, with most DE miRNAs related to axon guidance, focal adhesion, Ras and Wnt signaling pathways.\n18\n Furthermore, nerve injury‐induced miRNAs expression was dynamic and time‐dependent,\n19\n, \n20\n implicating multiple regulatory mechanisms in neuropathic pain initiation and development. Nerve injury redistributes miRNAs from a uniform style within the DRG soma of non‐allodynic animals to preferential localization to peripheral neurons in allodynic animals.\n21\n Furthermore, either sciatic nerve ligation (SNL), DRG transaction (DRT), or ventral root transection (VRT) could upregulate miR‐21 and miR‐31 while downregulating miR‐668 and miR‐672 in the injured DRG,\n22\n implying that these miRNAs could be therapeutic targets for treating diverse types of neuropathic pain.\nThe spinal cord dorsal horn relays and modulates pain signals from the peripheral nociceptors to the supraspinal regions. In the spinal cord, numerous miRNA expression changes have been observed in chronic constriction injury (CCI) and diabetic neuropathic pain (DNP) rodents.\n23\n, \n24\n, \n25\n, \n26\n Previous research has suggested some miRNAs may be related to neuropathic pain development, for example, miR‐500, ‐221, and ‐21,\n25\n thus, acting as potential targets in its treatment.\n27\n\n\nCytoscape software constructed the miRNA–target gene regulatory network in the supraspinal region, including the nucleus accumbens (NAc), medial prefrontal cortex, and periaqueductal gray, between SNI and sham rats. Finally, four essential DE genes, includingCXCR2, IL12B, TNFSF8, and GRK1, and five miRNAs, including miR‐208a‐5p, −7688‐3p, −344f‐3p, −135b‐3p, and ‐135a‐2‐3p, were identified, indicating their essential roles in neuropathic pain pathogenesis.\n28\n Furthermore, in the prelimbic cortex of SNI rats, the DE miRNA–mRNA network pointed to molecules associated with inflammation.\n29\n DE miRNAs were also observed in the bilateral hippocampus of CCI rats. 
However, no significant difference was observed bilaterally in the hippocampus.\n30\n (Figure 1).\nmicroRNAs (miRNAs) expression change following peripheral nerve injury or noxious stimuli SCI, spinal cord injury; CSF, cerebrol spinal fluid; DE, differential expressed; EV, extravascular vesicle; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury; DRT, DRG transection; VRT, ventral root transection", "Patients with type 2 DNP had a prominent higher expression of serum lncRNA NONRATT021972 and more severe neuropathic pain symptoms.\n31\n LINC01119 and LINC02447 in the peripheral blood of SCI patients were identified in pain pathways that were important for neuropathic pain development.\n32\n\n\nMicroarray analysis revealed that a nerve injury could induce time‐dependent lncRNA expression changes in the sciatic nerve.\n33\n lncRNA H19 was persistently upregulated in Schwann cells along the peripheral nerve, proximal and distal to the injured site.\n34\n In the DRG, the transcriptomic analysis identified 86 known and 26 novel lncRNAs genes to be DE after spared sciatic nerve injury.\n35\n Of these, rno‐Cntnap2 and AC111653.1 were essential in peripheral nerve regeneration and were involved in neuropathic pain.\nNext‐generation RNA sequencing showed 134 lncRNAs and 188 circRNAs were prominently changed 14 days after SNI in the spinal cord.\n36\n In addition, microarray analysis identified 1481 and 1096 DE lncRNAs and mRNAs, respectively, in the spinal cord dorsal horn of DNP rats. Of these, 289 neighboring and 57 overlapping lncRNA‐mRNA pairs, including ENSMUST00000150952‐Mbp and AK081017‐Usp15, have been suggested to participate in neuropathic pain development.\n37\n\n\ncircRNAs have characteristic circularized structures following the backsplicing of exons from antisense RNAs (AS RNAs) or mRNAs and, thus, are highly stable. Most circRNAs are highly conserved among species and lack translation potential, despite cap‐independent translation. 
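The Cytoscape-based miRNA–target regulatory network mentioned above amounts to a bipartite graph between miRNAs and their predicted target genes, in which highly connected nodes are treated as hubs. The snippet below sketches that idea with networkx; the edge list is a hypothetical pairing assembled from the miRNAs and genes named in the text (miR-208a-5p, CXCR2, IL12B, TNFSF8, GRK1, and so on), not the actual edges from the cited study.

```python
# Sketch of a miRNA-target regulatory network (hypothetical edges, for illustration):
# nodes are miRNAs and candidate target genes, edges are putative interactions.
import networkx as nx

edges = [
    ("miR-208a-5p", "CXCR2"), ("miR-208a-5p", "IL12B"),
    ("miR-7688-3p", "CXCR2"), ("miR-344f-3p", "TNFSF8"),
    ("miR-135b-3p", "GRK1"),  ("miR-135a-2-3p", "IL12B"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Genes contacted by more miRNAs have higher degree and are candidate hub targets,
# analogous to the "essential DE genes" highlighted in the text.
genes = {gene for _, gene in edges}
for gene in sorted(genes, key=G.degree, reverse=True):
    print(gene, "degree =", G.degree(gene))
```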
A recent study showed that 363 and 106 circRNAs were significantly dysregulated in the ipsilateral dorsal horn after nerve injury.\n38\n (Figure 2).\nLong non‐coding RNA (lncRNAs) and circular non‐coding RNAs (circRNAs) expression change following peripheral nerve injury or noxious stimuli DNP, diabetic neuropathic pain; SCI, spinal cord injury; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury", "miRNAs contribute to the development of neuropathic pain via diverse mechanisms, such as neuroinflammation, autophagy, abnormal ion channel expression, regulating pain‐related mediators, protein kinases, structural proteins, neurotransmission excitatory–inhibitory imbalances, and exosome miRNA‐mediated neuron–glia communication (Table 1).\nmiRNAs expression change for the development of neuropathic pain\nKv1.1, Kv1.4, Kv3.4, Kv4.3, Kv7.5,\nDPP10, Navβ1 ↓\np‐p38 MAPK ↑\nAbbreviations: ACC, anterior cingulate cortex; BACE1, beta‐site amyloid precursor protein‐cleaving enzyme 1; BDNF, brain‐derived neurotrophic factor; CCI, chronic constriction injury; CCR1, C–C chemokine receptor 1; CSF, cerebro spinal fluid; CSF1, colony‐stimulating factor‐1; CXCR4, chemokine CXC receptor 4; DNMT3a, DNA methyltransferase 3a; DNP, diabetic neuropathic pain; DRG, dorsal root ganglion; EV, extracellular vesicle; EZH2, enhancer of zeste homolog 2; GRK2, G protein‐coupled receptor kinases 2; IGF‐1/IGF‐1R, insulin‐like growth factor‐1/insulin‐like growth factor‐1 receptor; KCNMA1, calcium‐activated potassium channel subunit α‐1; MeCP2, methyl cytosine–guanine dinucleotide (CpG)‐binding protein 2; MIP‐1a, macrophage inflammatory protein‐1 alpha; NEFL, neurofilament light polypeptide; NF‐κB, nuclear factor – kappa B; NGF, nerve growth factor; NLRP3, NOD‐like receptor protein 3; NOX2, NADPH oxidase 2; NP, neuropathic pain; pSNL, partial sciatic nerve ligation; RAP1A, Ras‐related protein 1A; RASA1, RAS P21 protein activator 1; ROS, reactive oxygen species; S1PR1, sphingosine‐1‐phosphate receptor 1; SCI, spinal cord injury; Scn2b, β2 subunit of the voltage‐gated sodium channel; SIRT1, histone deacetylase sirtuin 1; SNL, spinal nerve ligation; SOCS1, suppressor of cytokine signaling 1; STAT3, signal transducer and activator of transcription 3; TLR4, toll‐like receptor 4; TNFR1, TNF receptor 1; TRPA1, transient receptor potential ankyrin 1; TXNIP, thioredoxin‐interacting protein; VTR, ventral root transection; ZEB1, zinc finger E‐box binding homeobox 1.\n\nmiRNAs regulate neuroinflammation in neuropathic pain development miRNA‐based epigenetic regulation is essential in neuroinflammation. miRNAs are predicted to regulate diverse neuroinflammation‐related targets along the pain processing pathways.\nAmong patients with neuropathic pain, miR‐101 decreased in the serum and sural nerve, which is related to nuclear factor‐kappa B (NF‐κB) signaling activation.\n39\n Meanwhile, serum miR‐124a and miR‐155 expression was upregulated. 
They were identified to inhibit histone deacetylase sirtuin 1 (SIRT1) in primary human cluster of differentiation 4 (CD4) (+) cells and induce their differentiation toward regulatory T cells (Tregs), thus reducing pain‐related inflammation.\n40\n Such miRNA–target interactions may act as an endogenous protective mechanism for neuropathic pain.\nmiR‐7a, expressed in small‐sized nociceptive DRG neurons, is downregulated after nerve injury, as reported, targeting neurofilament light polypeptides (NEFLs).\n41\n NEFL encodes a neuronal protein vital for neurofilament formation and increases signal transducer and activator of transcription 3 (STAT3) phosphorylation, which is highly related to cell differentiation and neuroinflammation. In diabetic peripheral neuropathic mice, miR‐590‐3p was downregulated to disinhibit Ras‐related protein 1A (RAP1A) in the DRG tissue and inhibit neural T cells infiltration.\n42\n Thus, exogenous miR‐590‐3p may be a potential alternative for neuropathic pain treatment. CCI downregulated miR‐140 and miR‐144 expression in the DRG. Intrathecally injected miR‐140 and miR‐144 agomir decreased inflammatory factor secretion and ameliorated hyperalgesia by targeting sphingosine‐1‐phosphate receptor 1 (S1PR1) and RAS P21 protein activator 1 (RASA1), respectively.\n43\n, \n44\n\n\nmiRNAs are essential in neuropathic pain development via neuroinflammation‐related mechanisms in the spinal cord. miR‐130a‐3p targets and downregulates insulin‐like growth factor‐1/insulin‐like growth factor‐1 receptor (IGF‐1/IGF‐1R) expression to alleviate SCI‐induced neuropathic pain by mitigating microglial activation and NF‐κB phosphorylation.\n45\n miR‐378 was decreased in CCI rats, inhibiting neuropathic pain development by targeting the enhancer of zeste homolog 2 (EZH2).\n46\n EZH2 promotes neuropathic pain by increasing tumor necrosis factor‐alpha (TNF‐α), interleukin (IL)‐1β, and monocyte chemoattractant protein‐1 (MCP‐1) production. Intrathecally injecting miRNA‐138 lentivirus can remarkably alleviate neuropathic pain in partial sciatic nerve ligation (pSNL) rats by suppressing toll‐like receptor 4 (TLR4) and macrophage inflammatory protein‐1 alpha (MIP‐1α)/C‐C chemokine receptor 1 (CCR1) signaling pathways.\n47\n\n\nmiR‐15a/16 targets and downregulates G protein‐coupled receptor kinases 2 (GRK2) to disinhibit p38‐MAPK (mitogen‐activated protein kinase) and NF‐κB, contributing to neuroinflammation after CCI.\n48\n GRK2‐deficient mice present a pro‐inflammatory phenotype in spinal cord microglia/macrophages, restored by miR‐124.\n49\n SNL increased DNA methyltransferase 3a (DNMT3a) expression related to hypermethylation of the miR‐214‐3p promoter, resulting in miR‐214‐3p expression reduction, which enhanced astrocyte reactivity, colony‐stimulating factor‐1 (CSF1), and interleukin 6 (IL‐6) production, and hyperalgesia in rats.\n50\n Electro‐acupuncture attenuated SCI by inhibiting Nav1.3 and Bax in the injured spinal cord through miR‐214 upregulation.\n51\n Downregulated miR‐128 was reported to contribute to neuropathic pain via p38 or zinc finger E‐box binding homeobox 1 (ZEB1) activation in the spinal cord.\n52\n, \n53\n Meanwhile, ZEB1 was also targeted by miR‐200b/miR‐429, orchestrating neuropathic pain development.\n54\n\n\nIn a pSNL mouse model, nerve injury significantly reduced miR‐23a expression in spinal glial cells, concomitant with the upregulation of its target chemokine, CXC receptor 4 (CXCR4). 
In naïve mice, either miR-23a downregulation or CXCR4 upregulation could activate the thioredoxin-interacting protein (TXNIP)/NOD-like receptor protein 3 (NLRP3) inflammasome axis, and both intrathecal miR-23a mimics and lentiviral spinal CXCR4 downregulation inhibited TXNIP or NLRP3 upregulation to alleviate hyperalgesia [55].
Notably, a single miRNA can have distinct targets in the spinal cord across different animal models. For example, after SCI, miR-155 downregulation de-repressed its target NADPH oxidase 2 (NOX2), inducing reactive oxygen species (ROS) production and a pro-inflammatory phenotype in microglia/macrophages [56]. The ability to induce glial polarization was also observed in cultured BV-2 microglia [57]. In rats with bortezomib-induced neuropathic pain, downregulated miR-155 upregulated TNF receptor 1 (TNFR1) expression, which activated its downstream signaling pathways, including p38-MAPK, c-Jun N-terminal kinase (JNK), and transient receptor potential ankyrin 1 (TRPA1) [58]. Therefore, miR-155 might act as an intervention target for neuropathic pain. As expected, treatment with ibuprofen and L-arginine delayed the behavioral pain changes while inhibiting spinal miR-155 and NO [59]. miR-155-5p is also known to destabilize the blood–nerve barrier and the expression of tight junction proteins such as claudin-1 and zonula occludens-1 (ZO-1); tissue plasminogen activator (tPA) can transiently open this barrier via miR-155-5p upregulation, facilitating the topical application of analgesics [60]. However, we also noted that CCI upregulated, rather than downregulated, spinal cord miR-155, and that miR-155 inhibition enhanced suppressor of cytokine signaling 1 (SOCS1) expression to inactivate inflammation via NF-κB and p38-MAPK inhibition [61]. In oxaliplatin-induced peripheral neuropathic pain, spinal cord miR-155 expression was likewise upregulated, and intrathecal injection of a miR-155 inhibitor attenuated hyperalgesia in rats, possibly by inhibiting oxidative stress–TRPA1 pathways [62]. The mechanisms underlying these divergent changes in miR-155 expression remain unknown and require further research across different pain models.
Autophagy
miRNA-related autophagy is involved in neuropathic pain regulation. As reported, miR-15a is downregulated after CCI, which increases AKT serine/threonine kinase 3 (AKT3) expression and inhibits autophagy [63]; impaired autophagy participates in neuropathic pain development. Intrathecal miR-15a agomir prominently suppressed AKT3 expression, induced autophagy, and attenuated CCI-induced neuropathic pain. Similar to miR-15a, miR-145 and miR-20b-5p contribute to neuropathic pain regulation via protein kinase B (AKT)-related autophagy pathways [64, 65]. Meanwhile, miR-195 in the spinal cord was upregulated after SNL, targeting Autophagy-Related 14 (ATG14) and inhibiting its activation of autophagy [66]; a miR-195 inhibitor activated autophagy and suppressed neuroinflammation in vivo and in vitro.
The contribution of miRNAs to ion channel expression in neuropathic pain
Another class of miRNA-based regulation focuses on ion channels, including sodium, potassium, and calcium channels as well as transient receptor potential (TRP) channels, to modulate action potential generation, firing rate, and neurotransmitter release.
Theoretically, a single miRNA can have multiple target mRNAs because precise matching is not a prerequisite for inhibiting the target sequence. For example, in addition to NEFL [41], miR-7a targets the β2 subunit of the voltage-gated sodium channel (Scn2b) post-transcriptionally, a regulation that contributes to the hyperexcitability of nociceptive DRG neurons [67]. Beyond miR-7a, other known miRNA–target pairs involved in sodium channel expression include miR-30b, -96, and -384-5p and their target Scn3a (encoding Nav1.3) [68, 69, 70]; miR-30b-5p and its target Scn8a (encoding Nav1.6) [71]; and miR-182 and -30b and their target Scn9a (encoding Nav1.7) [72, 73].
A miRNA cluster is a polycistronic gene containing several miRNAs derived from a single primary or nascent transcript. Approximately 40% of miRNAs are predicted to form clusters, but the significance of clustering is still unknown. miR-17-92 is a miRNA cluster comprising six members, and upregulation of miR-18a, -19a, -19b, and -92a induces allodynia. The predicted targets of the miR-17-92 cluster encompass genes encoding diverse voltage-gated potassium channels and their regulatory subunits, including Kv1.1, Kv1.4, Kv3.4, Kv4.3, Kv7.5, dipeptidyl peptidase 10 (DPP10), and Navβ1 [74]. CCI upregulated miR-137 to target and downregulate Kcna2, which encodes Kv1.2, in the DRG and spinal dorsal horn [75]. By contrast, CCI decreased miR-183-5p expression in the DRG, and the predicted target gene TREK-1, a subunit of the two-pore-domain K+ channel, was increased [76].
Furthermore, the miR-183 cluster (miR-96/182/183) regulates both basal mechanical sensitivity and neuropathic pain [77]. The cluster targets Cacna2d1 and Cacna2d2, which encode the auxiliary voltage-gated calcium channel subunits α2δ-1 and α2δ-2, to affect nociceptor excitability.
Nerve injury downregulated miR-103 in the spinal cord; miR-103 simultaneously targets and inhibits Cacna1c, Cacna2d1, and Cacnb1, which encode the Cav1.2-α1, α2δ1, and β1 subunits of the Cav1.2 L-type voltage-gated calcium channel macromolecular complex, respectively [78]. Intrathecal miR-103 successfully relieved neuropathic pain.
In addition to voltage-gated channels, TRP channels are ligand-gated ion channels that promote painful sensations [79]. In the DRG, TRPA1 participates in miR-141-5p-mediated alleviation of oxaliplatin-induced neuropathic pain [80]. In SNI mice, miR-449a ameliorated neuropathic pain by decreasing the activity of TRPA1 and the calcium-activated potassium channel subunit α-1 (KCNMA1), and may thus be a potential therapeutic alternative for treating neuropathic pain [81].
miRNAs regulate pain-related mediators, protein kinases, and structural proteins
miRNAs regulate diverse pain-related mediators, protein kinases, and structural proteins along the pain processing pathways. In HIV-associated symptomatic distal sensory polyneuropathy and neuropathic pain, serum miR-455-3p acted as a potential biomarker, possibly targeting multiple genes involved in peripheral neuropathic pain, such as nerve growth factor (NGF) and related genes [82]. Brain-derived neurotrophic factor (BDNF), another well-recognized pain mediator, is a common target of miR-1, -183, and -206 [83, 84, 85]. After nerve injury, several miRNAs, including miR-19a, -301, and -132, were downregulated, and the expression of their target, methyl cytosine–guanine dinucleotide (CpG)-binding protein 2 (MeCP2), was increased, leading to concomitant BDNF upregulation [86]. miR-30c-5p was upregulated in the serum and CSF of patients with chronic peripheral ischemia [16]. Intraventricular injection of a miR-30c-5p inhibitor postponed neuropathic pain development and fully reversed hyperalgesia in rodents; transforming growth factor beta (TGF-β) participates in the effects of miR-30c-5p, which also involve the endogenous opioid analgesic system [16].
miRNAs also dysregulate specific pain-related protein kinases. For example, in DNP rats, miR-133a-3p was dysregulated in the sciatic nerve and interacted with p-p38 MAPK to participate in the development of neuropathic pain [87]. Via its methyl-CpG-binding domain and transcriptional repressor domain, MeCP2 acts as a transcriptional repressor, and its overexpression improves neuropathic pain, pointing to an anti-nociceptive effect of MeCP2. The authors also noted that MeCP2 expression changed post-transcriptionally: its mRNA level did not significantly change after SNI, whereas its protein level was upregulated. During the development of neuropathic pain, phospho-cAMP response element-binding protein (p-CREB) rose rapidly but returned to baseline 3–7 days after SNI, concomitant with downregulation of miR-132, which targets MeCP2 and inhibits its expression post-transcriptionally [88]. SNL decreased miR-200b and miR-429 in nucleus accumbens (NAc) neurons, along with upregulation of their target DNMT3a [89]. Further mechanistic studies found that DNMT3a in the NAc was expressed in NR1-immunoreactive neurons, suggesting dysregulation of the 'mesolimbic motivation circuitry' in neuropathic pain development.
CCI induced time-dependent miR-1 downregulation in the injured sciatic nerve.
This change in expression was related to the upregulation and translocation of the miR-1 target connexin 43 (Cx43), the major connexin of astrocytes [83]. miR-1 mimics reduced Cx43 expression in cultured human glioblastoma cells; however, intraneural transfection of miR-1 mimics failed to alter Cx43 protein expression and did not improve pain behavior. The authors attributed this failure to insufficient miR-1-mediated inhibition of Cx43 via intraneural injection; alternatively, Cx43 may be regulated in vivo by mechanisms other than miR-1 [90]. Moreover, miR-1 expression in the DRG was differentially regulated depending on the type of peripheral nerve injury: CCI, pSNL, and sural nerve injury downregulated miR-1, whereas sciatic nerve axotomy and tibial nerve injury increased its expression in the DRG [83, 91, 92]. As previously described, miR-1 was also upregulated following capsaicin treatment and in bone cancer pain [91, 93], and miR-1 downregulation inhibited bone cancer pain. Collectively, the action of miR-1 on pain appears to be complex and stimulus-dependent.
Notably, specific miRNAs can promote neuropathic pain development via diverse mechanisms. For example, beta-site amyloid precursor protein-cleaving enzyme 1 (BACE1), a membrane protease essential for myelination, was downregulated after miR-15b overexpression in vitro and in the DRG of rats with chemotherapy-related neuropathic pain [94]. BACE1-mediated neuregulin 1 reduction decreases nerve conduction velocity. In addition, BACE1 modulates Navβ2 subunit expression and neuronal activity and regulates inflammation-related TNFR expression.
Neurotransmission excitatory–inhibitory imbalances
Excitatory–inhibitory imbalances in spinal neurotransmission also contribute to neuropathic pain development. For example, miR-500 was increased in the dorsal horn, where it targets a specific site in Gad1 to regulate glutamic acid decarboxylase 67 (GAD67) expression [95]. Reduced GAD67 expression inhibited the function of GABAergic neurons, and the resulting dysregulation of inhibitory synaptic transmission contributed to neuropathic pain development. In addition, miR-23b is crucial for improving neuropathic pain in the injured spinal cord by downregulating its target gene, Nox4, which in turn normalizes glutamic acid decarboxylase 65/67 (GAD65/67) expression and protects GABAergic neurons from apoptosis [96]. Microarray analysis showed that miR-539 was prominently reduced in the contralateral anterior cingulate cortex (ACC) after CCI, which is related to enhanced NR2B protein expression.
Injecting miR-539 mimics into the contralateral ACC attenuated CCI-evoked mechanical hyperalgesia, suggesting that the N-methyl-D-aspartate (NMDA) receptor NR2B subunit regulates neuropathic pain [97].

EV miRNA-mediated neuron–glia communication
With regard to EV miRNAs in neuropathic pain, we focus mainly on exosomal miR-21. As aforementioned, SNL, DRT, or VRT can upregulate miR-21 in the injured DRG [22]. Notably, miR-21 was also increased in serum exosomes from nerve-ligated mice [98]. In another in-depth study, macrophages readily took up pure sensory neuron-derived exosomes containing miR-21-5p, which promoted a pro-inflammatory phenotype [99]. Either an intrathecal miR-21-5p antagomir or conditional deletion of miR-21 in sensory neurons ameliorated hyperalgesia and macrophage recruitment in the DRG.
Unlike the canonical miRNA target-gene inhibitory mechanism, miR-21 can act as an endogenous toll-like receptor 8 (TLR8) ligand, leading to neuropathic pain development [100]. TLR8, a nucleic acid-sensing receptor located in endosomes and lysosomes, drives ERK-mediated inflammatory mediator production and neuronal activation after SNL. Although miR-21 and TLR8 are co-distributed only in small- and medium-sized neurons, miR-21 can also be derived from large-sized neurons and reach TLR8 in the endosomes of other neuron types [101]. Similar to miR-21, DRG sensory neurons secreted miR-23a-enriched EVs after nerve injury, which were taken up by macrophages and enhanced M1 polarization in vitro. A20, an inhibitor of the NF-κB signaling pathway, is a verified miR-23a target gene [102]. Moreover, intrathecal delivery of an EV-miR-23a antagomir attenuated neuropathic hyperalgesia and reduced M1 macrophage polarization; miR-23a drives these processes by inhibiting A20 and thereby activating NF-κB signaling.
lncRNAs and circRNAs are essential in neuropathic pain development by acting as antisense (AS) RNAs or miRNA sponges, epigenetically regulating the expression of pain-related molecules, or modulating miRNA processing (Table 2).
Table 2. Long non-coding RNA (lncRNA) and circular RNA (circRNA) expression changes in the development of neuropathic pain. Abbreviations: AQP4, aquaporin 4; AS RNA, antisense RNA; CCI, chronic constriction injury; CDK4, cyclin-dependent kinase 4; CDK6, cyclin-dependent kinase 6; DNP, diabetic neuropathic pain; DRG, dorsal root ganglion; ENO1, enolase 1; EphB1, ephrin type-b 1; EZH2, enhancer of zeste homolog 2; HMGB1, high mobility group box 1; NAT, natural antisense transcript; NOTCH2, NOTCH receptor 2; NP, neuropathic pain; pSNL, partial sciatic nerve ligation; pSTAT3, phosphorylated signal transducer and activator of transcription 3; SCI, spinal cord injury; SGC, satellite glial cell; SNL, spinal nerve ligation; STAT3, signal transducer and activator of transcription 3; TLR5, toll-like receptor 5; TNFAIP1, tumor necrosis factor alpha-induced protein 1; TRPV1, transient receptor potential vanilloid 1; XIST, X-inactive specific transcript; ZEB1, zinc finger E-box binding homeobox 1.

lncRNAs act as AS RNA
After peripheral nerve injury, Egr2-AS-RNA is upregulated in Schwann cells [103]. At the early growth response 2 (Egr2) promoter, Egr2-AS-RNA recruits an epigenetic silencing complex to downregulate Egr2, which is essential for peripheral myelination. Ectopic Egr2-AS-RNA expression in DRG cultures downregulates Egr2 mRNA and induces demyelination, whereas in vivo inhibition of Egr2-AS-RNA reverts Egr2-related gene expression and delays demyelination.
Kcna2-AS-RNA, which is directed against the voltage-gated potassium channel (Kv) gene Kcna2, is an endogenous, highly conserved, and widely explored lncRNA in neuropathic pain [104]. It is a natural antisense transcript (NAT) distributed in the cytoplasm that targets Kcna2 mRNA, which encodes the pain regulation-related membrane Kv1.2 subunit. Kcna2-AS-RNA was time-dependently upregulated in the injured rat DRG after nerve injury.
It decreased the total Kv current, increased DRG neuron excitability, and produced neuropathic pain symptoms.
The DNA strand complementary to the Scn9a gene encodes Scn9a NAT, another antisense lncRNA expressed in the DRG [105]. Scn9a NAT is suggested to be a negative regulator of Scn9a mRNA: its overexpression inhibited Scn9a mRNA, the encoded protein Nav1.7, and Nav1.7 currents in DRG neurons. However, Scn9a NAT and Scn9a mRNA levels did not significantly change in the injured DRG until 2 weeks after nerve injury. More work is required to determine whether NATs can confer analgesia and reduce pain in the neuropathic pain state.

lncRNAs act as miRNA sponges
lncRNAs may act as miRNA sponges, forming lncRNA–miRNA–mRNA axes that regulate target gene expression. For example, the lncRNA X-inactive specific transcript (XIST) is upregulated in the dorsal horn of the spinal cord after CCI. By sponging miR-137, -150, -154-5p, and -544, XIST relieves the repression of their corresponding targets, including TNF-α-induced protein 1 (TNFAIP1), ZEB1, TLR5, and STAT3 [106, 107, 108, 109]. TNFAIP1 activates the NF-κB signaling pathway, while ZEB1, TLR5, and STAT3 are crucial in the neuroinflammatory response. XIST inhibition markedly ameliorates neuropathic pain development.
In addition, the lncRNA nuclear enriched abundant transcript 1 (NEAT1) was upregulated in the spinal cord dorsal horn, forming NEAT1–miR-381–high mobility group box 1 (HMGB1) and NEAT1–miR-128-3p–aquaporin 4 (AQP4) axes after CCI and SCI, respectively [110, 111]. NEAT1 downregulation inhibited IL-6, IL-1β, and TNF-α to improve neuropathic pain.
Although the function of circRNAs as miRNA sponges is still largely unknown, they have been termed competing endogenous RNAs that bind target miRNAs and regulate their function [112]. For example, circHIPK3 is a circRNA highly enriched in serum from DNP patients and in the DRG of DNP rats [113]; circHIPK3 sponges miR-124 to promote neuroinflammation, pointing to the involvement of the circHIPK3–miR-124 axis in DNP. The circRNA ciRS-7 participates in neuropathic pain progression by sponging miR-135a-5p to regulate autophagy and inflammation in the spinal cord of CCI rats [114]. In addition, the circZNF609–miR-22-3p–enolase 1 (ENO1) and circ_0005075–miR-151a-3p–NOTCH receptor 2 (NOTCH2) regulatory axes upregulate inflammatory factor expression and promote neuropathic pain development in the spinal cord after CCI [115, 116].

lncRNAs epigenetically regulate pain-related molecule expression
In addition to acting as miRNA sponges, lncRNAs can epigenetically regulate the expression of pain-related molecules. For example, peripheral nerve injury decreases the expression of DRG-specifically enriched lncRNAs (DS-lncRNAs) in the injured DRG.
Restoring DS-lncRNAs blocks nerve injury-induced increases in euchromatic histone lysine N-methyltransferase 2 (Ehmt2) mRNA and its encoded protein G9a, reverses G9a-related decreases in opioid receptors and Kcna2 in the injured DRG, and ameliorates nerve injury-induced pain hypersensitivity [117]. In addition, transcriptome screening in the DRG of DNP rats [118] has identified diverse dysregulated lncRNAs, including uc.48+ [119], BC168687 [120, 121], NONRATT021972 [31, 122, 123], MRAK009713 [124], and Lncenc1 [125]. The expression of uc.48+, BC168687, NONRATT021972, MRAK009713, and Lncenc1 was prominently higher in the DRG of neuropathic pain rats [119, 120, 121, 122, 123, 124, 125]. Blocking this upregulation via intrathecal or intravenous administration of the corresponding small interfering RNA (siRNA) may alleviate neuropathic pain by inhibiting excitatory transmission mediated by purinergic receptors [119, 120, 122, 123, 124], TNF-α-related pathways [31], transient receptor potential vanilloid 1 (TRPV1) [121], or EZH2 [125].
As demonstrated by luciferase assay and RNA-binding protein immunoprecipitation, the lncRNA small nucleolar RNA host gene 1 (SNHG1) can induce neuropathic pain in the spinal cord by binding to the promoter region of cyclin-dependent kinase 4 (CDK4) and stimulating its expression [126]. SNHG1 knockdown alleviated neuropathic pain development, whereas SNHG1 overexpression induced neuropathic pain. Similarly, the lncRNA PKIA-AS1 participates in SNL-induced neuropathic pain by downregulating DNA methyltransferase 1 (DNMT1)-catalyzed methylation of the cyclin-dependent kinase 6 (CDK6) promoter, thereby regulating CDK6 [127]. Cyclin-dependent kinases (CDKs) transcriptionally enhance pro-inflammatory gene expression during the G1 phase of the cell cycle, and cytokine-induced recruitment of CDK6 to the nuclear chromatin fraction is related to NF-κB, STAT, and activator protein 1 (AP-1) activation, inducing neuroinflammation [128]. Beyond the DRG, lncRNA Kcna2-AS-RNA was also highly expressed in the spinal cord of rats with postherpetic neuralgia, and its downregulation alleviated neuropathic pain by reducing phospho-STAT3 (pSTAT3) translocation from the cytoplasm to the nucleus and thereby inhibiting spinal astrocyte activation [129].
lncRNAs modulate miRNA processing
Specific lncRNAs are essential in neuropathic pain because they modulate miRNA processing. For example, the transcribed ultraconserved lncRNA uc.153 was prominently increased in the spinal cord of rats with CCI-induced neuropathic pain, and uc.153 knockdown reversed CCI-induced pain behaviors and spinal neuronal hypersensitivity. Mechanistically, uc.153 negatively modulated Dicer-mediated pre-miR-182-5p processing and inhibited its maturation. The resulting spinal miR-182-5p downregulation increased the expression of its target, ephrin type-b 1 (EphB1), and of p-NR2B (the phosphorylated N-methyl-D-aspartate receptor (NMDAR) 2B subunit), facilitating hyperalgesia [130].
Collectively, lncRNAs are crucial in neuropathic pain via diverse mechanisms. Notably, a single lncRNA may act through more than one mechanism to regulate target gene expression. For example, SNL upregulated circAnks1a in both the cytoplasm and the nucleus. In the cytoplasm, circAnks1a enhances the interaction between Y-box-binding protein 1 (YBX1) and transportin-1, facilitating YBX1 nuclear translocation.
Meanwhile, in the nucleus, circAnks1a binds the Vegfb promoter and recruits YBX1 to it to enhance Vegfb transcription. Additionally, cytoplasmic circAnks1a sponges miR-324-3p to regulate vascular endothelial growth factor B (VEGFB) expression. VEGFB binding to its receptor activates various downstream targets, including p38-MAPK, PKB/AKT (protein kinase B), extracellular signal-regulated kinase (ERK)/MAPK, and phosphoinositide 3-kinase (PI3K).[131] Therefore, VEGFB upregulation increases dorsal horn neuron excitability and contributes to pain hypersensitivity after nerve injury.[132]

We also noted that specific lncRNAs were DE following various nerve injuries. For example, lncRNA Malat1 was upregulated after CCI to sponge miR-206, and Malat1 suppression delayed neuropathic pain progression via miR-206-ZEB2 axis-mediated inhibition of neuroinflammation.[133] Conversely, in rats with complete brachial plexus avulsion-induced neuropathic pain, Malat1 decreased in spinal cord neurons, and this downregulation increased neuronal spontaneous electrical activity via calcium flux regulation.[134] The reason for such distinctive Malat1 expression in different pain models is still unclear, and further research is required.
lncRNAs act as AS RNA
After peripheral nerve injury, Egr2-AS-RNA is upregulated in Schwann cells.[103] At the early growth response 2 (Egr2) promoter, Egr2-AS-RNA recruits an epigenetic silencing complex to downregulate Egr2, which is essential for peripheral myelination. Ectopic Egr2-AS-RNA expression in DRG cultures downregulates Egr2 mRNA and induces demyelination. In vivo, Egr2-AS-RNA inhibition restores Egr2-related gene expression and delays demyelination.

Voltage-gated potassium channel (Kv) Kcna2-AS-RNA is an endogenous, highly conserved, and widely explored lncRNA in neuropathic pain.[104] It is a natural antisense transcript (NAT) that is distributed in the cytoplasm and targets Kcna2 mRNA, which encodes the pain regulation-related membrane Kv1.2 subunit. Kcna2-AS-RNA was time-dependently upregulated in the injured rat DRG after nerve injury; it decreased the total Kv current, increased DRG neuron excitability, and produced neuropathic pain symptoms.

The complementary DNA strand opposite the Scn9a gene encodes Scn9a NAT, another antisense lncRNA expressed in the DRG.[105] Scn9a NAT is suggested to be a negative regulator of Scn9a mRNA. Scn9a NAT overexpression inhibited Scn9a mRNA, its encoded protein Nav1.7, and Nav1.7 currents in DRG neurons. However, Scn9a NAT and Scn9a mRNA levels did not change significantly in the injured DRG until 2 weeks after nerve injury. More work is required to determine whether this NAT can confer analgesia and reduce pain in the neuropathic pain state.

lncRNAs act as miRNA sponges
lncRNAs may act as miRNA sponges to form lncRNA-miRNA-mRNA axes and regulate target gene expression. For example, the lncRNA X-inactive specific transcript (XIST) is upregulated in the dorsal horn of the spinal cord after CCI. By sponging miR-137, -150, -154-5p, and -544, XIST can inhibit the expression of the corresponding targets, including TNF-α-induced protein 1 (TNFAIP1), ZEB1, TLR5, and STAT3.[106-109] TNFAIP1 activates the NF-κB signaling pathway, while ZEB1, TLR5, and STAT3 are crucial in the neuroinflammatory response. XIST inhibition markedly ameliorates neuropathic pain development. In addition, lncRNA nuclear enriched abundant transcript 1 (NEAT1) was upregulated in the spinal cord dorsal horn to form NEAT1-miR-381-high mobility group box 1 (HMGB1) and NEAT1-miR-128-3p-aquaporin 4 (AQP4) axes following CCI and SCI, respectively.[110, 111] NEAT1 downregulation inhibited IL-6, IL-1β, and TNF-α to improve neuropathic pain.

Although the function of circRNAs as miRNA sponges is still largely unknown, they have been termed competing endogenous RNAs that bind target miRNAs and regulate their function.[112] For example, circHIPK3 is a circRNA highly enriched in serum from DNP patients and in the DRG of DNP rats.[113] circHIPK3 sponges miR-124 to promote neuroinflammation, pointing to the involvement of the circHIPK3-miR-124 axis in DNP.
circRNA ciRS-7 participates in neuropathic pain progression by sponging miR-135a-5p to regulate autophagy and inflammation in the spinal cord of CCI rats.[114] In addition, the circZNF609-miR-22-3p-enolase 1 (ENO1) and circ_0005075-miR-151a-3p-NOTCH receptor 2 (NOTCH2) regulatory axes upregulate inflammatory factor expression and promote neuropathic pain development in the spinal cord after CCI.[115, 116]
CHALLENGES AND CONSIDERATIONS IN DELIVERING miRNA- AND lncRNA-BASED THERAPEUTICS
There are two main strategies for modulating miRNA function in pain treatment: upregulation or downregulation of specific miRNAs. miRNA mimics or virus-based constructs are used to upregulate miRNA expression, whereas miRNA inhibitors, miRNA sponges, or blockade of a particular miRNA-mRNA interaction are applied to downregulate specific miRNAs or their actions. The design of lncRNA-based therapeutics likewise includes diverse approaches, such as post-transcriptional inhibition of lncRNAs by antisense oligonucleotides or siRNA and steric blockade of lncRNA-protein interactions by small molecules or morpholinos.[135]

However, much work remains before miRNA- and lncRNA-based analgesics can be used clinically, for several reasons, including conservation among species, proper delivery, stability, off-target effects, and potential activation of the immune system.

First, most available findings on the efficacy of miRNAs or lncRNAs in neuropathic pain are based on animal rather than human studies. Whether such results can be extrapolated to humans, and what their translational potential is, remains unknown because conservation among species is incomplete.

Second, the blood-brain barrier (BBB) is a practical challenge for delivering RNA-based therapeutics into the central nervous system (CNS) via intravenous injection. Viral vectors, polypeptides, aptamers, and particular chemical modifications have been developed to address this. Adding cholesterol molecules to the sense strand of a miRNA mimic or inhibitor has proven to be an efficient strategy.
Cholesterol-conjugated siRNAs showed better silencing potency than unconjugated siRNAs and presented high delivery efficacy to oligodendrocytes in the CNS.[136] Another proven method is the immunoliposome, a combination of liposomes, receptor-targeted monoclonal antibodies, and the target molecules.[137] An immunoliposome nanocomplex has been reported to deliver therapeutic nucleic acids across the BBB into the deep brain via transferrin receptors.[138] In addition, intrathecal injection is a feasible approach for neuropathic pain treatment in animal studies. As reported, miR-146a attenuated neuropathic pain partially by inhibiting TNF receptor-associated factor 6 (TRAF6) and its downstream phospho-JNK/C-C motif chemokine ligand 2 (pJNK/CCL2) signaling in the spinal cord.[139] Intrathecal injection of miR-146a-5p-encapsulated nanoparticles provided an analgesic effect via NF-κB and p38-MAPK inhibition in spinal microglia.[140] In recent years, poly(D,L-lactic-co-glycolic acid) (PLGA) nanoparticles have been applied to deliver siRNAs and plasmids into the spinal cord to treat neuropathic pain in rats.[141] PLGA copolymer is a promising US Food and Drug Administration (FDA)-approved gene delivery material because of its biodegradability and biocompatibility in humans.[142] Intrathecal treatment with PLGA nanoparticles encapsulating C-X3-C motif chemokine receptor 1 (CX3CR1), p38, or p66shc siRNA, or a forkhead box P3 (Foxp3) plasmid, inhibits microglial activation and hyperalgesia in SNL rats.[141, 143-145]

Exosomes are another promising delivery carrier for treating neuropathic pain. Exosomes are natural membranous microvesicles that carry RNAs, with the advantages of being efficient, cell-free, and non-immunogenic. Intravenous injection of neuron-targeted exosomes delivered the carried siRNAs to neurons, microglia, and oligodendrocytes to knock down specific gene expression in mouse brains.[146] This approach enabled cell-specific delivery of the siRNA cargo across the BBB. Mesenchymal stem cells (MSCs) are pluripotent stem cells with immunomodulatory, anti-inflammatory, and nutritional properties, and the therapeutic efficacy of MSC-derived exosomes has been demonstrated in neuropathic pain.[147] Intrathecal, local, or subcutaneous application of exosomes obtained from human umbilical cord MSCs could mitigate nerve injury-induced hyperalgesia.[148-150] Furthermore, immunofluorescence showed that most intrathecally injected exosomes could be found in injured peripheral axons, the DRG, and the spinal dorsal horn, suggesting a homing ability of exosomes.[148]

Third, although miRNAs are relatively stable in vivo, with longer-lasting efficacy and higher resistance to nucleolytic degradation than mRNAs, particular chemical modifications are still required to prolong their half-life or increase their stability, for example, by generating locked nucleic acids (LNAs). LNA modifications increase the RNA affinity of antisense oligonucleotides, providing excellent miRNA inhibitory activity at a low dosage.[151] Additionally, the LNA technique has been used to synthesize highly stable aptamers.[152]

Fourth, the off-target effect is another consideration. One miRNA may regulate multiple genes, and the off-target potential of a single miRNA may produce undesirable side effects.[153, 154]

Finally, activation of the immune system is a potential adverse event.
Specific CpG motifs in oligonucleotides can trigger nonspecific immunological activity.[155] Therefore, RNA-based therapeutics might also elicit immune reactions by inducing antibodies against the oligonucleotides.

AUTHOR CONTRIBUTIONS
Ming Jiang: Investigation, Methodology. Yelong Wang: Manuscript preparation. Jing Wang: Figures and tables preparation. Shanwu Feng: Supervision, Funding acquisition. Xian Wang: Conceptualization, Funding acquisition.
INTRODUCTION
According to the International Association for the Study of Pain, neuropathic pain is the most severe chronic pain condition, triggered by a lesion or disease of the somatosensory system. It is characterized by hyperalgesia, allodynia, or spontaneous pain.[1] Neuropathic pain can have peripheral and central origins, with the former including neuropathic pain after peripheral nerve injury, trigeminal neuralgia, postherpetic neuralgia, painful radiculopathy, and painful polyneuropathy. In contrast, central neuropathic pain includes neuropathic pain after spinal cord or brain injury, multiple sclerosis, and chronic central post-stroke pain. Approximately 7%-10% of the general population will experience neuropathic pain, and the majority do not obtain satisfactory pain relief with current therapies, leading to great suffering for individuals and enormous economic and social burdens.[2]

The exact molecular mechanisms underlying neuropathic pain remain unclear, and elucidating them is crucial for developing mechanism-based treatment strategies. One proposed mechanism involves altered gene or protein expression along the pain processing pathways. Therefore, understanding how genes or proteins are dysregulated may help us find ways to normalize these abnormalities and treat neuropathic pain.

In recent years, accumulating evidence has suggested an essential role for non-coding RNAs (ncRNAs) in various physiological and pathological processes, such as embryonic development, inflammation, tumors, and respiratory and cardiovascular diseases.[3] ncRNAs have no protein-coding potential, but they can govern gene or protein expression through diverse mechanisms. ncRNAs are extensively distributed in the peripheral and central nervous systems, including pain-related structures.[4] Broad abnormal ncRNA expression is observed following peripheral stimulation, and these abnormalities are related to hyperalgesia during chronic pain development. The available data indicate that ncRNAs may be essential for hyperalgesia. In this review, we focus on microRNA (miRNA), long non-coding RNA (lncRNA), and circular non-coding RNA (circRNA) expression changes in neuropathic pain. Other types of ncRNAs are seldom reported in neuropathic pain and, thus, are not discussed herein. Notably, we pay attention to their etiological role in the development of neuropathic pain and to the current challenges and considerations for miRNA-, lncRNA-, and circRNA-based therapeutics for neuropathic pain.

BIOGENESIS AND FUNCTION OF miRNAs, lncRNAs, AND circRNAs
Typically, miRNA production involves three steps: cropping, exporting, and dicing. First, the miRNA gene is transcribed in the nucleus, mainly by RNA polymerase II.[5] The resulting primary miRNA transcript (pri-miRNA) is several kilobases (kb) in length, with a specific stem-loop structure that harbors the mature miRNA in the stem. The mature miRNA is 'cropped' by Drosha and its interactor DGCR8 (DiGeorge syndrome critical region 8), which together cleave the pri-miRNA at the stem to produce the pre-miRNA,[6] 60-70 nucleotides (nt) in length with a hairpin structure. Then, exportin-5 recognizes and exports the pre-miRNA from the nucleus to the cytoplasm.[7] In the cytoplasm, the ribonuclease III (RNase III) enzyme Dicer further cleaves the pre-miRNA to release a double-stranded miRNA of ~22 nt.[8] This miRNA is unwound by an unknown helicase or cleaved by Argonaute (Ago) to form the RNA-induced silencing complex.[9] One strand of the RNA duplex remains with Ago as the mature miRNA, and the other is degraded.
The seed sequence of the miRNA combines partially or completely with the target mRNA sequence, resulting in target mRNA degradation or translational repression.[10]

Unlike miRNAs, lncRNAs are mRNA-like transcripts ranging in length from 200 nt to 100 kb that lack prominent open reading frames.[11] The cellular mechanism of lncRNAs is highly related to their intracellular localization: lncRNAs control chromatin functions, transcription, and RNA processing in the nucleus, and affect mRNA stability, translation, and cellular signaling in the cytoplasm.[12] Compared with lncRNAs, circRNAs are more stable because the ends of a single circRNA molecule are covalently linked, unlike linear RNAs. circRNAs are evolutionarily conserved molecules that are essential in the post-transcriptional modification of gene expression, acting as miRNA sponges or interacting with the transcription or translation machinery. Numerous lncRNAs and circRNAs are distributed within pain-related regions and are dysregulated after peripheral noxious stimulation. Moreover, functional studies have indicated that miRNAs, lncRNAs, and circRNAs participate in neuropathic pain development by regulating diverse pain-related genes along the pain processing pathways.

PERIPHERAL NERVE INJURY OR NOXIOUS STIMULI INDUCE EXTENSIVE miRNAs, lncRNAs, AND circRNAs EXPRESSION CHANGES

miRNAs expression changes
Microarray and deep-sequencing analyses revealed that nerve injury or noxious stimuli could induce broad changes in miRNA expression in serum or along the pain processing pathways, including the perineal nerve, dorsal root ganglion (DRG), spinal cord, and supraspinal regions. For normal pain signal transmission, nerve injury or noxious stimuli are detected by nociceptors in the DRG or trigeminal ganglion (TG) and then transmitted to upstream neurons in the spinal dorsal horn. Subsequently, nociceptive stimuli are integrated, processed, and further transmitted ascending to specific supraspinal brain regions.

Human studies have identified as many as 1134 differentially expressed (DE) genes in the serum of individuals with or without neuropathic pain after spinal cord injury (SCI).[13] miR-204-5p, -519d-3p, -20b-5p, and -6838-5p might act as promising biomarkers and intervention targets for preventing and treating neuropathic pain after SCI. In trigeminal neuralgia patients, serum miR-132-3p, -146b-5p, -155-5p, and -384 levels were prominently increased compared with healthy controls.[14] Patients with painful peripheral neuropathy had higher miR-21 levels in the serum and sural nerve than healthy controls, whereas miR-155 was reduced in the serum and affected lower-leg skin.[15] Animal and human studies showed that miR-30c-5p was upregulated in the serum and cerebrospinal fluid (CSF) of sciatic nerve injury rats and of neuropathic pain patients with chronic peripheral ischemia. The high expression of miR-30c-5p, together with other clinical parameters, might be used to predict neuropathic pain development in patients with chronic peripheral ischemia.[16] Additionally, an animal study showed that plasma-derived DE extracellular vesicle (EV) miRNAs regulated processes that are essential for neuropathic pain development. Most DE EV miRNAs involved in inflammation suppression were downregulated, potentially acting as biomarkers and targets in neuropathic pain treatment.[17]

In addition, numerous DE miRNAs have been identified in the DRG.
One week after spared nerve injury (SNI), 33 and 39 miRNAs in the DRG were upregulated and downregulated, respectively, with most DE miRNAs related to axon guidance, focal adhesion, and Ras and Wnt signaling pathways.[18] Furthermore, nerve injury-induced miRNA expression changes were dynamic and time-dependent,[19, 20] implicating multiple regulatory mechanisms in neuropathic pain initiation and development. Nerve injury redistributes miRNAs from a uniform pattern within the DRG soma of non-allodynic animals to preferential localization in peripheral neurons of allodynic animals.[21] Furthermore, spinal nerve ligation (SNL), DRG transection (DRT), and ventral root transection (VRT) all upregulated miR-21 and miR-31 while downregulating miR-668 and miR-672 in the injured DRG,[22] implying that these miRNAs could be therapeutic targets for treating diverse types of neuropathic pain.

The spinal cord dorsal horn relays and modulates pain signals from peripheral nociceptors to the supraspinal regions. In the spinal cord, numerous miRNA expression changes have been observed in chronic constriction injury (CCI) and diabetic neuropathic pain (DNP) rodents.[23-26] Previous research has suggested that some miRNAs, for example miR-500, -221, and -21,[25] may be related to neuropathic pain development and could act as potential targets in its treatment.[27]

Cytoscape software was used to construct miRNA-target gene regulatory networks in supraspinal regions, including the nucleus accumbens (NAc), medial prefrontal cortex, and periaqueductal gray, comparing SNI and sham rats. Four essential DE genes (CXCR2, IL12B, TNFSF8, and GRK1) and five miRNAs (miR-208a-5p, -7688-3p, -344f-3p, -135b-3p, and -135a-2-3p) were identified, indicating their essential roles in neuropathic pain pathogenesis.[28] Furthermore, in the prelimbic cortex of SNI rats, the DE miRNA-mRNA network pointed to molecules associated with inflammation.[29] DE miRNAs were also observed in the bilateral hippocampus of CCI rats, although no significant difference was found between the two sides.[30] (Figure 1)

Figure 1: miRNA expression changes following peripheral nerve injury or noxious stimuli. SCI, spinal cord injury; CSF, cerebrospinal fluid; DE, differentially expressed; EV, extracellular vesicle; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury; DRT, DRG transection; VRT, ventral root transection.
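For readers less familiar with such network screens, the general workflow can be illustrated computationally: predicted or validated miRNA-mRNA pairs are loaded, filtered by a confidence score, and assembled into a graph whose most connected (hub) nodes are candidate regulators. The short Python sketch below (using networkx) is only an illustration of that idea; the pair list and score threshold are hypothetical placeholders and are not taken from the studies cited above.

```python
# Minimal sketch: build a miRNA-target regulatory network from
# (miRNA, target gene, interaction score) pairs and rank hub nodes.
# The scores below are hypothetical placeholders.
import networkx as nx

pairs = [
    ("miR-208a-5p", "CXCR2", 0.92),
    ("miR-135b-3p", "CXCR2", 0.81),
    ("miR-135b-3p", "IL12B", 0.77),
    ("miR-344f-3p", "TNFSF8", 0.88),
    ("miR-7688-3p", "GRK1", 0.69),
]

G = nx.Graph()
for mirna, gene, score in pairs:
    if score >= 0.7:                      # keep only higher-confidence pairs
        G.add_node(mirna, kind="miRNA")
        G.add_node(gene, kind="gene")
        G.add_edge(mirna, gene, weight=score)

# Degree centrality highlights hub miRNAs/genes that touch many partners.
hubs = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)
for node, centrality in hubs:
    print(f"{node}\t{G.nodes[node]['kind']}\t{centrality:.2f}")
```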
lncRNAs and circRNAs expression changes
Patients with type 2 DNP had prominently higher expression of serum lncRNA NONRATT021972 and more severe neuropathic pain symptoms.[31] LINC01119 and LINC02447 in the peripheral blood of SCI patients were identified in pain pathways important for neuropathic pain development.[32]

Microarray analysis revealed that nerve injury could induce time-dependent lncRNA expression changes in the sciatic nerve.[33] lncRNA H19 was persistently upregulated in Schwann cells along the peripheral nerve, proximal and distal to the injured site.[34] In the DRG, transcriptomic analysis identified 86 known and 26 novel lncRNA genes as DE after spared sciatic nerve injury.[35] Of these, rno-Cntnap2 and AC111653.1 were essential in peripheral nerve regeneration and were involved in neuropathic pain.

Next-generation RNA sequencing showed that 134 lncRNAs and 188 circRNAs were prominently changed 14 days after SNI in the spinal cord.[36] In addition, microarray analysis identified 1481 DE lncRNAs and 1096 DE mRNAs in the spinal cord dorsal horn of DNP rats. Of these, 289 neighboring and 57 overlapping lncRNA-mRNA pairs, including ENSMUST00000150952-Mbp and AK081017-Usp15, have been suggested to participate in neuropathic pain development.[37]

circRNAs have characteristic circularized structures formed by backsplicing of exons from antisense RNAs (AS RNAs) or mRNAs and, thus, are highly stable. Most circRNAs are highly conserved among species and lack translation potential, although some can undergo cap-independent translation.
A recent study showed that 363 and 106 circRNAs were significantly dysregulated in the ipsilateral dorsal horn after nerve injury.[38] (Figure 2)

Figure 2: lncRNA and circRNA expression changes following peripheral nerve injury or noxious stimuli. DNP, diabetic neuropathic pain; SCI, spinal cord injury; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury.
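In these profiling studies, a transcript is usually called differentially expressed when it passes both a fold-change and an adjusted p-value cutoff between injured and control samples. As a rough, hypothetical illustration only (the input file, column names, and thresholds below are assumptions, and real analyses rely on dedicated statistical packages such as DESeq2 or limma), such a filter might look like:

```python
# Minimal sketch of a differential-expression filter for ncRNA profiling data.
# 'ncRNA_counts.csv' and its columns are hypothetical; this simplified cutoff
# stands in for the proper statistical testing used in the cited studies.
import pandas as pd

df = pd.read_csv("ncRNA_counts.csv")  # columns: transcript, log2_fold_change, adj_p_value, biotype

de = df[(df["log2_fold_change"].abs() >= 1.0) & (df["adj_p_value"] < 0.05)]

up = de[de["log2_fold_change"] > 0]
down = de[de["log2_fold_change"] < 0]

print(f"{len(up)} upregulated and {len(down)} downregulated transcripts")
print(de.groupby("biotype").size())   # e.g. counts per lncRNA / circRNA / miRNA
```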
STUDYING miRNA NEUROPATHIC PAIN MECHANISMS
miRNAs contribute to the development of neuropathic pain via diverse mechanisms, such as neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain-related mediators, protein kinases, and structural proteins, neurotransmission excitatory-inhibitory imbalances, and exosomal miRNA-mediated neuron-glia communication (Table 1).

Table 1: miRNA expression changes in the development of neuropathic pain (the full listing of miRNAs, targets, and expression directions is not reproduced here). Abbreviations: ACC, anterior cingulate cortex; BACE1, beta-site amyloid precursor protein-cleaving enzyme 1; BDNF, brain-derived neurotrophic factor; CCI, chronic constriction injury; CCR1, C-C chemokine receptor 1; CSF, cerebrospinal fluid; CSF1, colony-stimulating factor-1; CXCR4, chemokine CXC receptor 4; DNMT3a, DNA methyltransferase 3a; DNP, diabetic neuropathic pain; DRG, dorsal root ganglion; EV, extracellular vesicle; EZH2, enhancer of zeste homolog 2; GRK2, G protein-coupled receptor kinase 2; IGF-1/IGF-1R, insulin-like growth factor-1/insulin-like growth factor-1 receptor; KCNMA1, calcium-activated potassium channel subunit α-1; MeCP2, methyl cytosine-guanine dinucleotide (CpG)-binding protein 2; MIP-1α, macrophage inflammatory protein-1 alpha; NEFL, neurofilament light polypeptide; NF-κB, nuclear factor-kappa B; NGF, nerve growth factor; NLRP3, NOD-like receptor protein 3; NOX2, NADPH oxidase 2; NP, neuropathic pain; pSNL, partial sciatic nerve ligation; RAP1A, Ras-related protein 1A; RASA1, RAS P21 protein activator 1; ROS, reactive oxygen species; S1PR1, sphingosine-1-phosphate receptor 1; SCI, spinal cord injury; Scn2b, β2 subunit of the voltage-gated sodium channel; SIRT1, histone deacetylase sirtuin 1; SNL, spinal nerve ligation; SOCS1, suppressor of cytokine signaling 1; STAT3, signal transducer and activator of transcription 3; TLR4, toll-like receptor 4; TNFR1, TNF receptor 1; TRPA1, transient receptor potential ankyrin 1; TXNIP, thioredoxin-interacting protein; VRT, ventral root transection; ZEB1, zinc finger E-box binding homeobox 1.

miRNAs regulate neuroinflammation in neuropathic pain development
miRNA-based epigenetic regulation is essential in neuroinflammation. miRNAs are predicted to regulate diverse neuroinflammation-related targets along the pain processing pathways.

Among patients with neuropathic pain, miR-101 was decreased in the serum and sural nerve, which is related to nuclear factor-kappa B (NF-κB) signaling activation.[39] Meanwhile, serum miR-124a and miR-155 expression was upregulated.
They were identified to inhibit histone deacetylase sirtuin 1 (SIRT1) in primary human cluster of differentiation 4-positive (CD4+) cells and to induce their differentiation toward regulatory T cells (Tregs), thus reducing pain-related inflammation.[40] Such miRNA-target interactions may act as an endogenous protective mechanism in neuropathic pain.

miR-7a, which is expressed in small-sized nociceptive DRG neurons and reported to target neurofilament light polypeptide (NEFL), is downregulated after nerve injury.[41] NEFL encodes a neuronal protein vital for neurofilament formation and increases signal transducer and activator of transcription 3 (STAT3) phosphorylation, which is highly related to cell differentiation and neuroinflammation. In diabetic peripheral neuropathy mice, miR-590-3p was downregulated, disinhibiting Ras-related protein 1A (RAP1A) in DRG tissue and inhibiting neural T cell infiltration.[42] Thus, exogenous miR-590-3p may be a potential alternative for neuropathic pain treatment. CCI downregulated miR-140 and miR-144 expression in the DRG, and intrathecally injected miR-140 and miR-144 agomirs decreased inflammatory factor secretion and ameliorated hyperalgesia by targeting sphingosine-1-phosphate receptor 1 (S1PR1) and RAS P21 protein activator 1 (RASA1), respectively.[43, 44]

miRNAs are also essential in neuropathic pain development via neuroinflammation-related mechanisms in the spinal cord. miR-130a-3p targets and downregulates insulin-like growth factor-1/insulin-like growth factor-1 receptor (IGF-1/IGF-1R) expression to alleviate SCI-induced neuropathic pain by mitigating microglial activation and NF-κB phosphorylation.[45] miR-378 was decreased in CCI rats; it inhibits neuropathic pain development by targeting enhancer of zeste homolog 2 (EZH2).[46] EZH2 promotes neuropathic pain by increasing tumor necrosis factor-alpha (TNF-α), interleukin (IL)-1β, and monocyte chemoattractant protein-1 (MCP-1) production. Intrathecal injection of a miRNA-138 lentivirus can remarkably alleviate neuropathic pain in partial sciatic nerve ligation (pSNL) rats by suppressing the toll-like receptor 4 (TLR4) and macrophage inflammatory protein-1 alpha (MIP-1α)/C-C chemokine receptor 1 (CCR1) signaling pathways.[47]

miR-15a/16 targets and downregulates G protein-coupled receptor kinase 2 (GRK2) to disinhibit p38-MAPK (mitogen-activated protein kinase) and NF-κB, contributing to neuroinflammation after CCI.[48] GRK2-deficient mice present a pro-inflammatory phenotype in spinal cord microglia/macrophages, which is restored by miR-124.[49] SNL increased DNA methyltransferase 3a (DNMT3a) expression, which was related to hypermethylation of the miR-214-3p promoter and reduced miR-214-3p expression; this enhanced astrocyte reactivity, colony-stimulating factor-1 (CSF1) and interleukin 6 (IL-6) production, and hyperalgesia in rats.[50] Electro-acupuncture attenuated SCI by inhibiting Nav1.3 and Bax in the injured spinal cord through miR-214 upregulation.[51] Downregulated miR-128 was reported to contribute to neuropathic pain via p38 or zinc finger E-box binding homeobox 1 (ZEB1) activation in the spinal cord.[52, 53] Meanwhile, ZEB1 is also targeted by miR-200b/miR-429, orchestrating neuropathic pain development.[54]

In a pSNL mouse model, nerve injury significantly reduced miR-23a expression in spinal glial cells, concomitant with upregulation of its target, chemokine CXC receptor 4 (CXCR4).
In naïve mice, either miR-23a downregulation or CXCR4 upregulation could activate the thioredoxin-interacting protein (TXNIP)/NOD-like receptor protein 3 (NLRP3) inflammasome axis. Both intrathecal miR-23a mimics and lentiviral spinal CXCR4 downregulation inhibited TXNIP or NLRP3 upregulation and alleviated hyperalgesia.[55]

Notably, a single miRNA can have distinct targets in the spinal cord of different animal models. For example, after SCI, miR-155 downregulation led to upregulation of its target NADPH oxidase 2 (NOX2) and induced reactive oxygen species (ROS) production, conferring a pro-inflammatory phenotype on microglia/macrophages.[56] The ability to induce glial polarization was also observed in cultured BV-2 microglia.[57] In bortezomib-induced neuropathic pain rats, downregulated miR-155 upregulated TNF receptor 1 (TNFR1) expression, which activated its downstream signaling pathways, including p38-MAPK, c-Jun N-terminal kinase (JNK), and transient receptor potential ankyrin 1 (TRPA1).[58] Therefore, miR-155 has been suggested as an intervention target for neuropathic pain. As expected, treatment with ibuprofen and L-arginine delayed the behavioral pain changes while inhibiting spinal miR-155 and NO.[59] miR-155-5p is also known to destabilize the blood-nerve barrier and the expression of tight junction proteins such as claudin-1 and zonula occludens-1 (ZO-1). Tissue plasminogen activator (tPA) could transiently open these barriers, via miR-155-5p upregulation, to facilitate topical application of analgesics.[60] However, we also noted that CCI upregulated rather than downregulated spinal cord miR-155, and that miR-155 inhibition enhanced suppressor of cytokine signaling 1 (SOCS1) expression, dampening inflammation via NF-κB and p38-MAPK inhibition.[61] In oxaliplatin-induced peripheral neuropathic pain, spinal cord miR-155 expression was also upregulated, and intrathecal injection of a miR-155 inhibitor attenuated hyperalgesia in rats, possibly by inhibiting oxidative stress-TRPA1 pathways.[62] The mechanisms underlying these distinct miR-155 expression changes are still unknown and require further research in different pain models.
Autophagy
miRNA-related autophagy is involved in neuropathic pain regulation. As reported, miR-15a was downregulated post-CCI, which stimulated AKT serine/threonine kinase 3 (AKT3) and inhibited autophagy.[63] Impaired autophagy participates in neuropathic pain development, and intrathecal miR-15a agomir prominently suppressed AKT3 expression, induced autophagy, and attenuated CCI-induced neuropathic pain. Similar to miR-15a, miR-145 and miR-20b-5p contributed to neuropathic pain regulation via protein kinase B (AKT)-related autophagy pathways.[64, 65] Meanwhile, miR-195 in the spinal cord was upregulated post-SNL, targeting and inhibiting Autophagy-Related 14 (ATG14) and its autophagy activation.[66] A miR-195 inhibitor activated autophagy and suppressed neuroinflammation in vivo and in vitro.
The contribution of miRNAs to ion channel expression in neuropathic pain

Another class of miRNA‐based regulation focuses on ion channels, including sodium, potassium, and calcium channels, as well as transient receptor potential (TRP) channels, to modulate action potential generation, firing rate, and neurotransmitter release.

Theoretically, a single miRNA can have multiple target mRNAs because precise base‐pairing is not a prerequisite for inhibiting a target sequence. For example, in addition to NEFL,[41] miR‐7a targets the β2 subunit of the voltage‐gated sodium channel (Scn2b) through post‐transcriptional regulation to induce hyperexcitability of nociceptive DRG neurons.[67] Beyond miR‐7a, other known miRNA–target pairs involved in sodium channel expression include miR‐30b, ‐96, and ‐384‐5p and their target Scn3a (encoding Nav1.3),[68, 69, 70] miR‐30b‐5p and its target Scn8a (encoding Nav1.6),[71] and miR‐182 and ‐30b and their target Scn9a (encoding Nav1.7).[72, 73]
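The seed‐matching logic behind this promiscuity can be made concrete with a short script. The sketch below is purely illustrative: the mature miRNA sequence, the gene names, and the 3'UTR fragments are invented placeholders, and only canonical 7mer matches to miRNA nucleotides 2–8 are considered, ignoring the thermodynamic and contextual features that real target‐prediction tools use.

```python
# Illustrative sketch: canonical seed matching (7mer, positions 2-8) between a miRNA
# and 3'UTRs. All sequences are hypothetical placeholders, not real Scn2b/Scn3a UTRs.

def revcomp_dna(seq: str) -> str:
    """Reverse complement of a DNA sequence (UTRs written as DNA)."""
    table = str.maketrans("ACGT", "TGCA")
    return seq.translate(table)[::-1]

def seed_site(mirna_rna: str) -> str:
    """DNA motif a 3'UTR must contain to pair with miRNA nucleotides 2-8."""
    seed = mirna_rna[1:8].replace("U", "T")   # positions 2-8 of the mature miRNA
    return revcomp_dna(seed)                  # the target site is its reverse complement

def find_sites(mirna_rna: str, utrs: dict) -> dict:
    """Return every match position of the seed site in each 3'UTR."""
    site = seed_site(mirna_rna)
    hits = {}
    for gene, utr in utrs.items():
        positions = [i for i in range(len(utr) - len(site) + 1)
                     if utr[i:i + len(site)] == site]
        if positions:
            hits[gene] = positions
    return hits

if __name__ == "__main__":
    mir7a_like = "UGGAAGACUAGUGAUUUUGUUGU"   # placeholder mature sequence
    utrs = {  # hypothetical 3'UTR fragments
        "geneA": "AAGTCAGTCTTCCATTTGACCTAGTCTTCCAGG",
        "geneB": "CCGGATTTAGGCATGCAAATTTCCGG",
    }
    print(find_sites(mir7a_like, utrs))       # -> {'geneA': [6, 23]}
```

Because only seven bases need to match, many different transcripts can carry a site for the same miRNA, which is why one miRNA so often appears with several validated targets in the studies summarized here.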
A miRNA cluster is a polycistronic gene containing several miRNAs derived from a single primary or nascent transcript. Approximately 40% of miRNAs are predicted to form clusters, but the significance of clustering is still largely unknown. miR‐17‐92 is a miRNA cluster of six members, of which miR‐18a, ‐19a, ‐19b, and ‐92a upregulation induces allodynia. The predicted targets of the miR‐17‐92 cluster encompass genes encoding diverse voltage‐gated potassium channels and their regulatory subunits, including Kv1.1, Kv1.4, Kv3.4, Kv4.3, Kv7.5, dipeptidyl peptidase 10 (DPP10), and Navβ1.[74] CCI upregulated miR‐137 to target and downregulate Kcna2, which encodes Kv1.2, in the DRG and spinal dorsal horn.[75] By contrast, CCI decreased miR‐183‐5p expression in the DRG, and the predicted target gene TREK‐1, a subunit of the two‐pore‐domain K+ channel, was increased.[76]

Furthermore, the miR‐183 cluster (miR‐96/182/183) regulates both basal mechanical and neuropathic pain.[77] The miR‐183 cluster targets Cacna2d1 and Cacna2d2, which encode the auxiliary voltage‐gated calcium channel subunits α2δ‐1 and α2δ‐2, to affect nociceptor excitability. Nerve injury downregulated miR‐103 in the spinal cord; miR‐103 simultaneously targets and inhibits Cacna1c, Cacna2d1, and Cacnb1, which encode the Cav1.2‐α1, α2δ1, and β1 subunits of the Cav1.2 L‐type voltage‐gated calcium channel macromolecular complex.[78] Intrathecal miR‐103 successfully relieved neuropathic pain.

In addition, TRP channels are ligand‐gated ion channels that promote painful sensations.[79] In the DRG, TRPA1 participates in miR‐141‐5p‐mediated alleviation of oxaliplatin‐induced neuropathic pain.[80] In SNI mice, miR‐449a ameliorated neuropathic pain by decreasing the activity of TRPA1 and the calcium‐activated potassium channel subunit α‐1 (KCNMA1), thus acting as a potential therapeutic alternative for treating neuropathic pain.[81]
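The miR‐17‐92 example above treats co‐transcribed miRNAs as a unit. As a rough illustration of how such clusters are commonly called from genomic annotation, the sketch below groups miRNA loci that lie on the same chromosome and strand within a chosen inter‐miRNA distance; the 10 kb threshold and all coordinates are assumptions made for illustration, not values taken from this review.

```python
# Illustrative sketch: grouping miRNA loci into clusters by genomic proximity.
# Coordinates are invented and the 10 kb gap is an assumed, commonly used threshold.

from dataclasses import dataclass
from typing import List

@dataclass
class MirLocus:
    name: str
    chrom: str
    strand: str
    start: int

def cluster_mirnas(loci: List[MirLocus], max_gap: int = 10_000) -> List[List[str]]:
    """Group loci on the same chromosome/strand whose starts lie within max_gap."""
    clusters: List[List[str]] = []
    ordered = sorted(loci, key=lambda m: (m.chrom, m.strand, m.start))
    current: List[MirLocus] = []
    for locus in ordered:
        if (current
                and locus.chrom == current[-1].chrom
                and locus.strand == current[-1].strand
                and locus.start - current[-1].start <= max_gap):
            current.append(locus)
        else:
            if len(current) > 1:
                clusters.append([m.name for m in current])
            current = [locus]
    if len(current) > 1:
        clusters.append([m.name for m in current])
    return clusters

if __name__ == "__main__":
    demo = [
        MirLocus("miR-17",  "chr13", "+", 91_350_000),
        MirLocus("miR-18a", "chr13", "+", 91_350_600),
        MirLocus("miR-19a", "chr13", "+", 91_351_100),
        MirLocus("miR-7a",  "chr9",  "+", 5_000_000),
    ]
    print(cluster_mirnas(demo))   # -> [['miR-17', 'miR-18a', 'miR-19a']]
```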
miRNAs regulate pain‐related mediators, protein kinases, and structural proteins

miRNAs regulate diverse pain‐related mediators, protein kinases, and structural proteins along the pain processing pathways. In HIV‐associated symptomatic distal sensory polyneuropathy and neuropathic pain, serum miR‐455‐3p acted as a potential biomarker, possibly targeting multiple genes involved in peripheral neuropathic pain, such as nerve growth factor (NGF) and related genes.[82] Brain‐derived neurotrophic factor (BDNF), another well‐recognized pain mediator, is a common target of miR‐1, ‐183, and ‐206.[83, 84, 85] After nerve injury, several miRNAs, including miR‐19a, ‐301, and ‐132, were downregulated, and the expression of their target methyl cytosine–guanine dinucleotide (CpG)‐binding protein 2 (MeCP2) was increased, leading to concomitant BDNF upregulation.[86] miR‐30c‐5p was upregulated in the serum and CSF of patients with chronic peripheral ischemia.[16] Intraventricular injection of a miR‐30c‐5p inhibitor postponed neuropathic pain development and fully reversed hyperalgesia in rodents. Transforming growth factor beta (TGF‐β), which engages the endogenous opioid analgesic system, participates in the effects of miR‐30c‐5p.[16]

miRNAs also dysregulate specific pain‐related protein kinases. For example, in DNP rats, miR‐133a‐3p was dysregulated in the sciatic nerve, interacting with p‐p38 MAPK to participate in the development of neuropathic pain.[87] Via its methyl‐CpG‐binding domain and transcriptional repressor domain, MeCP2 acts as a transcriptional repressor, and its overexpression improves neuropathic pain, pointing to an anti‐nociceptive effect of MeCP2. The authors also noted that MeCP2 expression changed post‐transcriptionally: its mRNA level did not significantly change after SNI, whereas its protein level was upregulated. During the development of neuropathic pain, phospho‐cAMP response element‐binding protein (p‐CREB) rose rapidly but returned to baseline 3–7 days after SNI, concomitant with downregulation of miR‐132, which targets MeCP2 and inhibits its expression post‐transcriptionally.[88] SNL decreased miR‐200b and miR‐429 in NAc neurons, along with upregulation of their target DNMT3a.[89] Further mechanistic studies found that DNMT3a in the NAc was expressed in NR1‐immunoreactive neurons, suggesting dysregulation of the 'mesolimbic motivation circuitry' in neuropathic pain development.
CCI induced time‐dependent miR‐1 downregulation in injured sciatic nerves. This expression change was related to upregulation and translocation of the miR‐1 target connexin 43 (Cx43), the major connexin of astrocytes.[83] miR‐1 mimics could reduce Cx43 expression in cultured human glioblastoma cells. However, intraneural transfection of miR‐1 mimics failed to alter Cx43 protein expression and did not improve pain behavior. The authors attributed this treatment failure to insufficient inhibition of Cx43 by intraneurally delivered miR‐1; alternatively, regulatory mechanisms other than miR‐1 may control Cx43 in vivo.[90] Notably, miR‐1 in the DRG was differentially expressed according to the type of peripheral nerve injury: CCI, pSNL, and sural nerve injury downregulated miR‐1, whereas sciatic nerve axotomy and tibial nerve injury increased its expression in the DRG.[83, 91, 92] As previously described, miR‐1 was also upregulated following capsaicin treatment and in bone cancer pain,[91, 93] and miR‐1 downregulation inhibited bone cancer pain. Collectively, the action of miR‐1 on pain appears complex and stimulus‐dependent.

Notably, specific miRNAs can promote neuropathic pain development via diverse mechanisms. For example, beta‐site amyloid precursor protein‐cleaving enzyme 1 (BACE1), a membrane protease essential for myelination, was downregulated after miR‐15b overexpression in vitro and in the DRG of rats with chemotherapy‐related neuropathic pain.[94] BACE1‐mediated neuregulin 1 reduction decreases nerve conduction velocity. In addition, BACE1 modulated Navβ2 subunit expression and neuronal activity and regulated inflammation‐related TNFR expression.
Neurotransmission excitatory–inhibitory imbalances

Imbalances between excitatory and inhibitory neurotransmission in the spinal cord also contribute to neuropathic pain development. For example, miR‐500 increased after nerve injury and targeted a specific site of Gad1 to regulate glutamic acid decarboxylase 67 (GAD67) expression in the dorsal horn.[95] The reduction in GAD67 expression impaired GABAergic neuron function, and the resulting dysregulation of inhibitory synaptic transmission contributed to neuropathic pain development. In addition, miR‐23b is crucial for improving neuropathic pain in the injured spinal cord by downregulating its target gene, Nox4, which in turn normalizes glutamic acid decarboxylase 65/67 (GAD65/67) expression and protects GABAergic neurons from apoptosis.[96] Microarray analysis showed that miR‐539 was prominently reduced in the contralateral anterior cingulate cortex (ACC) after CCI, which was related to enhanced NR2B protein expression.
Injecting miR‐539 mimics into the contralateral ACC attenuated CCI‐evoked mechanical hyperalgesia, suggesting that the N‐methyl‐D‐aspartate (NMDA) receptor NR2B subunit regulates neuropathic pain.[97]

EV miRNAs‐mediated neuron–glia communication

With regard to EV miRNAs in neuropathic pain, we mainly focus here on exosomal miR‐21. As aforementioned, SNL, DRT, or VRT could upregulate miR‐21 in the injured DRG.[22] Notably, miR‐21 was also increased in serum exosomes from nerve‐ligated mice.[98] In another in‐depth study, macrophages readily took up purified sensory neuron‐derived exosomes containing miR‐21‐5p, promoting a pro‐inflammatory phenotype.[99] Either intrathecal miR‐21‐5p antagomir or conditional deletion of miR‐21 in sensory neurons ameliorated hyperalgesia and macrophage recruitment in the DRG.

Unlike the canonical target‐gene‐inhibitory mechanism of miRNAs, miR‐21 can act as an endogenous toll‐like receptor 8 (TLR8) ligand, leading to neuropathic pain development.[100] TLR8, a nucleic acid‐sensing receptor located in endosomes and lysosomes, drives ERK‐mediated inflammatory mediator production and neuronal activation after SNL. Although miR‐21 and TLR8 are co‐expressed only in small‐ and medium‐sized neurons, miR‐21 can also be derived from large‐sized neurons and reach TLR8 in the endosomes of other neuron types.[101] Similar to miR‐21, DRG sensory neurons secreted miR‐23a‐enriched EVs following nerve injury, and these EVs were taken up by macrophages to enhance M1 polarization in vitro. A20, an inhibitor of the NF‐κB signaling pathway, is a verified miR‐23a target gene.[102] Moreover, intrathecal delivery of an EV‐miR‐23a antagomir attenuated neuropathic hyperalgesia and reduced M1 macrophages by preventing miR‐23a from suppressing A20 and thereby restraining NF‐κB signaling.
lncRNAs and circRNAs are essential in neuropathic pain development by acting as antisense (AS) RNAs or miRNA sponges, epigenetically regulating the expression of pain‐related molecules, or modulating miRNA processing (Table 2).

Table 2. Long non‐coding RNA (lncRNA) and circular non‐coding RNA (circRNA) expression changes in the development of neuropathic pain. Abbreviations: AQP4, aquaporin 4; AS RNA, antisense RNA; CCI, chronic constriction injury; CDK4, cyclin‐dependent kinase 4; CDK6, cyclin‐dependent kinase 6; DNP, diabetic neuropathic pain; DRG, dorsal root ganglion; ENO1, enolase 1; EphB1, ephrin type‐b 1; EZH2, enhancer of zeste homolog 2; HMGB1, high mobility group box 1; NAT, natural antisense transcript; NOTCH2, NOTCH receptor 2; NP, neuropathic pain; pSNL, partial sciatic nerve ligation; pSTAT3, phosphorylated signal transducer and activator of transcription 3; SCI, spinal cord injury; SGC, satellite glia cell; SNL, spinal nerve ligation; STAT3, signal transducer and activator of transcription 3; TLR5, toll‐like receptor 5; TNFAIP1, tumor necrosis factor alpha‐induced protein 1; TRPV1, transient receptor potential vanilloid 1; XIST, X‐inactive specific transcript; ZEB1, zinc finger E‐box binding homeobox 1.

lncRNAs act as AS RNA

After peripheral nerve injury, Egr2‐AS‐RNA is upregulated in Schwann cells.[103] At the early growth response 2 (Egr2) promoter, Egr2‐AS‐RNA recruits an epigenetic silencing complex to downregulate Egr2, which is essential for peripheral myelination. Ectopic Egr2‐AS‐RNA expression in DRG cultures downregulates Egr2 mRNA and induces demyelination. In vivo, Egr2‐AS‐RNA inhibition reverses Egr2‐related gene expression changes and delays demyelination.

The voltage‐gated potassium channel (Kv) antisense transcript Kcna2‐AS‐RNA is an endogenous, highly conserved, and widely explored lncRNA in neuropathic pain.[104] It is a natural antisense transcript (NAT), distributed in the cytoplasm, that targets Kcna2 mRNA, which encodes the pain regulation‐related membrane Kv1.2 subunit. Kcna2‐AS‐RNA was time‐dependently upregulated in the injured rat DRG after nerve injury.
It decreased the total Kv current, increased DRG neuron excitability, and produced neuropathic pain symptoms.

The complementary strand of DNA opposite the Scn9a gene encodes Scn9a NAT, another antisense lncRNA expressed in the DRG.[105] Scn9a NAT is suggested to be a negative regulator of Scn9a mRNA. Scn9a NAT overexpression inhibited Scn9a mRNA, its encoded protein Nav1.7, and Nav1.7 currents in DRG neurons. However, Scn9a NAT and Scn9a mRNA levels did not significantly change in the injured DRG until 2 weeks after nerve injury. More work is required to determine whether NATs can confer analgesia and reduce pain in the neuropathic pain state.

lncRNAs act as miRNA sponges

lncRNAs may act as miRNA sponges, forming lncRNA–miRNA–mRNA axes that regulate target gene expression. For example, the lncRNA X‐inactive specific transcript (XIST) is upregulated in the dorsal horn of the spinal cord after CCI. By sponging miR‐137, ‐150, ‐154‐5p, and ‐544, XIST derepresses their corresponding targets, including TNF‐α‐induced protein 1 (TNFAIP1), ZEB1, TLR5, and STAT3.[106, 107, 108, 109] TNFAIP1 activates the NF‐κB signaling pathway, while ZEB1, TLR5, and STAT3 are crucial in the neuroinflammatory response. XIST inhibition markedly ameliorates neuropathic pain development.
In addition, the lncRNA nuclear enriched abundant transcript 1 (NEAT1) was upregulated in the spinal cord dorsal horn, forming NEAT1–miR‐381–high mobility group box 1 (HMGB1) and NEAT1–miR‐128‐3p–aquaporin 4 (AQP4) axes following CCI and SCI, respectively.[110, 111] NEAT1 downregulation inhibited IL‐6, IL‐1β, and TNF‐α production to improve neuropathic pain.

Although the function of circRNAs as miRNA sponges is still largely unknown, they have been termed competing endogenous RNAs that bind target miRNAs and regulate their function.[112] For example, circHIPK3 is a circRNA highly enriched in serum from DNP patients and in the DRG of DNP rats.[113] circHIPK3 sponges miR‐124 to promote neuroinflammation, pointing to involvement of the circHIPK3–miR‐124 axis in DNP. The circRNA ciRS‐7 participates in neuropathic pain progression by sponging miR‐135a‐5p to regulate autophagy and inflammation in the spinal cord of CCI rats.[114] In addition, the circZNF609–miR‐22‐3p–enolase 1 (ENO1) and circ_0005075–miR‐151a‐3p–NOTCH receptor 2 (NOTCH2) regulatory axes upregulate inflammatory factor expression and promote neuropathic pain development in the spinal cord after CCI.[115, 116]
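To make the sponge logic concrete, the sketch below reuses canonical seed matching to ask whether a lncRNA and an mRNA 3'UTR carry binding sites for the same miRNA, which is the minimal requirement for a lncRNA–miRNA–mRNA competing‐endogenous‐RNA axis. All sequences are invented placeholders rather than real XIST, NEAT1, or target sequences, and shared sites alone say nothing about relative transcript abundance, which ultimately determines whether sponging is effective.

```python
# Illustrative sketch of the ceRNA/sponge logic: a lncRNA and an mRNA 3'UTR that both
# carry seed-match sites for the same miRNA can, in principle, compete for it.
# All sequences below are invented placeholders.

def seed_site(mirna: str) -> str:
    """DNA motif complementary to miRNA nucleotides 2-8 (canonical 7mer seed match)."""
    seed = mirna[1:8].replace("U", "T")
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[b] for b in reversed(seed))

def count_sites(sequence: str, site: str) -> int:
    """Count occurrences of the seed site in a transcript sequence."""
    return sum(1 for i in range(len(sequence) - len(site) + 1)
               if sequence[i:i + len(site)] == site)

def shares_mirna(lncrna: str, utr: str, mirna: str) -> bool:
    """True if both the lncRNA and the 3'UTR contain at least one site for the miRNA."""
    site = seed_site(mirna)
    return count_sites(lncrna, site) > 0 and count_sites(utr, site) > 0

if __name__ == "__main__":
    mirna = "UAGCUUAUCAGACUGAUGUUGA"              # placeholder mature miRNA
    lncrna = "GGGCTGATAAGCTACCCTGATAAGCTAGG"      # invented lncRNA with two seed sites
    utr = "TTACTGATAAGCTAAGGTT"                   # invented 3'UTR with one seed site
    print(shares_mirna(lncrna, utr, mirna))       # -> True
```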
lncRNAs epigenetically regulate pain‐related molecules expression

In addition to acting as miRNA sponges, lncRNAs can epigenetically regulate the expression of pain‐related molecules. For example, peripheral nerve injury decreases the expression of DRG‐specifically enriched lncRNAs (DS‐lncRNAs) in the injured DRG. Restoring DS‐lncRNAs blocks nerve injury‐induced increases in euchromatic histone lysine N‐methyltransferase 2 (Ehmt2) mRNA and its encoded protein G9a, reverses the G9a‐related decreases in opioid receptors and Kcna2 in the injured DRG, and ameliorates nerve injury‐induced pain hypersensitivity.[117] In addition, transcriptome screening in the DRG of DNP rats[118] has identified diverse dysregulated lncRNAs, including uc.48+,[119] BC168687,[120, 121] NONRATT021972,[31, 122, 123] MRAK009713,[124] and Lncenc1.[125] The expression of uc.48+, BC168687, NONRATT021972, MRAK009713, and Lncenc1 was prominently higher in the DRG of neuropathic pain rats.[119, 120, 121, 122, 123, 124, 125] Blocking this upregulation via intrathecal or intravenous administration of the corresponding small‐interfering RNA (siRNA) may alleviate neuropathic pain by inhibiting excitatory transmission mediated by purinergic receptors,[119, 120, 122, 123, 124] TNF‐α‐related pathways,[31] transient receptor potential vanilloid 1 (TRPV1),[121] or EZH2.[125]

As demonstrated by luciferase assays and RNA‐binding protein immunoprecipitation, the lncRNA small nucleolar RNA host gene 1 (SNHG1) can induce neuropathic pain in the spinal cord by binding to the promoter region of cyclin‐dependent kinase 4 (CDK4) and stimulating its expression.[126] SNHG1 knockdown alleviated neuropathic pain development, and SNHG1 overexpression was sufficient to induce neuropathic pain. Similarly, the lncRNA PKIA‐AS1 participates in SNL‐induced neuropathic pain by downregulating DNA methyltransferase 1 (DNMT1)‐catalyzed methylation of the cyclin‐dependent kinase 6 (CDK6) promoter and thereby regulating CDK6 expression.[127] Cyclin‐dependent kinases (CDKs) transcriptionally enhance pro‐inflammatory gene expression during the G1 cell‐cycle phase. Furthermore, cytokine‐induced recruitment of CDK6 to the nuclear chromatin fraction is related to NF‐κB, STAT, and activator protein 1 (AP‐1) activation, inducing neuroinflammation.[128] Beyond the DRG, Kcna2‐AS‐RNA was also highly expressed in the spinal cord of postherpetic neuralgia rats, and its downregulation alleviated neuropathic pain by reducing phospho‐STAT3 (pSTAT3) translocation from the cytoplasm to the nucleus and thereby inhibiting spinal astrocyte activation.[129]
lncRNAs modulate miRNA processing

Specific lncRNAs are essential in neuropathic pain through modulation of miRNA processing. For example, the transcribed ultraconserved lncRNA uc.153 was prominently increased in the spinal cord of rats with CCI‐induced neuropathic pain. uc.153 knockdown reversed CCI‐induced pain behaviors and spinal neuronal hypersensitivity. Mechanistically, uc.153 negatively modulated Dicer‐mediated pre‐miR‐182‐5p processing and inhibited its maturation. In turn, spinal miR‐182‐5p downregulation increased the expression of its target, ephrin type‐b 1 (EphB1), as well as p‐NR2B (phosphorylated N‐methyl‐D‐aspartate receptor (NMDAR) 2B subunit), and facilitated hyperalgesia.[130]

Collectively, lncRNAs are crucial in neuropathic pain via diverse mechanisms. Notably, one lncRNA may act through more than one mechanism to regulate target gene expression. For example, SNL upregulated circAnks1a in both the cytoplasm and the nucleus. In the cytoplasm, circAnks1a enhances the interaction between Y‐box‐binding protein 1 (YBX1) and transportin‐1 to facilitate YBX1 nuclear translocation.
Meanwhile, in the nucleus, circAnks1a binds the Vegfb promoter and recruits YBX1 to it, enhancing Vegfb transcription. Additionally, cytoplasmic circAnks1a sponges miR‐324‐3p to regulate vascular endothelial growth factor B (VEGFB) expression. VEGFB binding to its receptor activates various downstream targets, including p38‐MAPK, PKB/AKT (protein kinase B), extracellular signal‐regulated kinase (ERK)/MAPK, and phosphoinositide 3‐kinase (PI3K).[131] Therefore, VEGFB upregulation increases dorsal horn neuron excitability and contributes to pain hypersensitivity after nerve injury.[132]

We also noted that specific lncRNAs were differentially expressed following different types of nerve injury. For example, the lncRNA Malat1 was upregulated after CCI to sponge miR‐206, and Malat1 suppression delayed neuropathic pain progression via miR‐206–ZEB2 axis‐mediated inhibition of neuroinflammation.[133] Conversely, in rats with complete brachial plexus avulsion‐induced neuropathic pain, Malat1 decreased in spinal cord neurons, and this downregulation increased neuronal spontaneous electrical activity via calcium flux regulation.[134] The reason for such distinctive Malat1 expression across pain models is still unclear, and further research is required.
For example, lncRNA Malat1 was upregulated after CCI to sponge miR‐206, and Malat1 suppression delayed neuropathic pain progression via miR‐206‐ZEB2 axis‐mediated neuroinflammation inhibition.\n133\n Conversely, in rats with complete brachial plexus avulsion‐induced neuropathic pain, Malat1 decreased in spinal cord neurons, and such downregulation increased neuronal spontaneous electrical activity via calcium flux regulation.\n134\n The reason for such distinctive Malat1 expression following different pain models is still unclear, and further research is required.", "After peripheral nerve injury, Egr2‐AS‐RNA is upregulated in Schwann cells.\n103\n On the early growth response 2 (Egr2) promoter, Egr2‐AS‐RNA recruits an epigenetic silencing complex to downregulate Egr2, essential for peripheral myelination. Ectopic Egr2‐AS‐RNA expression in DRG cultures downregulates Egr2 mRNA and induces demyelination. In vivo, Egr2‐AS‐RNA inhibition reverts Egr2‐related gene expression and delays demyelination.\nVoltage‐gated potassium channel (Kv) Kcna2‐AS‐RNA is an endogenous, highly conserved, and widely explored lncRNA in neuropathic pain.\n104\n It is a natural antisense transcript (NAT), distributed in the cytoplasm and targets Kcna2 mRNA, which encodes the pain regulation‐related membrane Kv1.2 subunit. Kcna2‐AS‐RNA was time‐dependently upregulated in the inflicted rat DRG after nerve injury. It decreased the total Kv current, upregulated DRG neurons excitability, and produced neuropathic pain symptoms.\nThe complementary strand of DNA opposite the Scn9a gene encodes Scn9a NAT, another antisense lncRNA expressed in the DRG.\n105\n Scn9a NAT is suggested to be a negative regulator of Scn9a mRNA. Scn9a NAT overexpression inhibited Scn9a mRNA, its encoded protein Nav1.7, and Nav1.7 currents in DRG neurons. However, Scn9a NAT and Scn9a mRNA levels did not significantly change in the injured DRG until 2 weeks after nerve injury. More work is required to determine whether NAT can confer analgesia and reduce pain in the neuropathic pain state.", "lncRNAs may act as miRNA sponges to form the lncRNA‐miRNA‐mRNA axis and regulate target gene expression. For example, the lncRNA X‐inactive specific transcript (XIST) is upregulated in the dorsal horn of the spinal cord after CCI. By sponging miR‐137, ‐150, ‐154‐5p, and ‐544, XIST can inhibit the expression of corresponding targets, including TNF‐α‐induced protein 1 (TNFAIP1), ZEB1, TLR5, and STAT3.\n106\n, \n107\n, \n108\n, \n109\n TNFAIP1 activates the NF‐κB signaling pathway, while ZEB1, TLR5, and STAT3 are crucial in the neuroinflammatory response. XIST inhibition markedly ameliorates neuropathic pain development. In addition, lncRNA nuclear enriched abundant transcript 1 (NEAT1) was upregulated in the spinal cord dorsal horn to form NEAT1‐miR‐381‐high mobility group box 1 (HMGB1) and NEAT1–miR‐128‐3p–aquaporin 4 (AQP4) axes following CCI and SCI, respectively.\n110\n, \n111\n NEAT1 downregulation inhibited IL‐6, IL‐1β, and TNF‐α to improve neuropathic pain.\nAlthough the function of circRNAs as miRNA sponges is still largely unknown, they have been termed as competing endogenous RNAs that bind to target miRNAs and regulate their function.\n112\n For example, circHIPK3 is a circRNA highly enriched in serum from DNP patients and in the DRG from DNP rats.\n113\n circHIPK3 sponges miR‐124 to promote neuroinflammation, pointing to the involvement of the circHIPK3–miR‐124 axis during DNP. 
circRNA ciRS‐7 participates in neuropathic pain progression by sponging miR‐135a‐5p to regulate autophagy and inflammation in the spinal cord of CCI rats. 114 Besides, the circZNF609–miR‐22‐3p–enolase 1 (ENO1) and circ_0005075–miR‐151a‐3p–NOTCH receptor 2 (NOTCH2) regulatory axes upregulate inflammatory factor expression and promote neuropathic pain development in the spinal cord after CCI. 115, 116

In addition to acting as miRNA sponges, lncRNAs can epigenetically regulate the expression of pain‐related molecules. For example, peripheral nerve injury decreases the expression of DRG‐specifically enriched lncRNAs (DS‐lncRNAs) in the injured DRG. Restoring DS‐lncRNAs blocks nerve injury‐induced increases in euchromatic histone lysine N‐methyltransferase 2 (Ehmt2) mRNA and its encoded protein G9a, reverses G9a‐related decreases in opioid receptors and Kcna2 in the injured DRG, and ameliorates nerve injury‐induced pain hypersensitivity. 117 In addition, transcriptome screening in the DRG of DNP rats 118 has identified diverse dysregulated lncRNAs, including uc.48+, 119 BC168687, 120, 121 NONRATT021972, 31, 122, 123 MRAK009713, 124 and Lncenc1. 125 The expression of uc.48+, BC168687, NONRATT021972, MRAK009713, and Lncenc1 was prominently higher in the DRG of neuropathic pain rats. 119, 120, 121, 122, 123, 124, 125 Blocking this upregulation via intrathecal or intravenous administration of the corresponding small‐interfering RNA (siRNA) may alleviate neuropathic pain by inhibiting excitatory transmission mediated by purinergic receptors, 119, 120, 122, 123, 124 TNF‐α‐related pathways, 31 transient receptor potential vanilloid 1 (TRPV1), 121 or EZH2. 125

As demonstrated by a luciferase assay and RNA‐binding protein immunoprecipitation, lncRNA small nucleolar RNA host gene 1 (SNHG1) can induce neuropathic pain in the spinal cord by binding to the promoter region of cyclin‐dependent kinase 4 (CDK4) and stimulating its expression. 126 SNHG1 knockdown alleviated neuropathic pain development, and SNHG1 overexpression was able to induce neuropathic pain. Similarly, lncRNA PKIA‐AS1 participates in SNL‐induced neuropathic pain by downregulating DNA methyltransferase 1 (DNMT1)‐catalyzed methylation of the cyclin‐dependent kinase 6 (CDK6) promoter and thereby regulating CDK6. 127 Cyclin‐dependent kinases (CDKs) transcriptionally enhance pro‐inflammatory gene expression during the G1 phase of the cell cycle. Furthermore, CDK6 recruitment to the nuclear chromatin fraction by cytokines is related to NF‐κB, STAT, and activator protein 1 (AP‐1) activation to induce neuroinflammation. 128 Beyond the DRG, lncRNA Kcna2‐AS‐RNA was also highly expressed in the spinal cord of postherpetic neuralgia rats, and its downregulation alleviated neuropathic pain by reducing phospho‐STAT3 (pSTAT3) translocation from the cytoplasm to the nucleus and thereby inhibiting spinal astrocyte activation. 129
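Several of the knockdown experiments above rest on designing siRNAs against a lncRNA of interest. As a purely illustrative sketch of the very first step of such a design, the code below scans a made‐up lncRNA sequence for 21‑nt windows with moderate GC content and no long single‐base runs; the sequence, thresholds, and filters are placeholders, and real designs add thermodynamic‐asymmetry rules and genome‐wide off‑target screening.

# Toy scan for candidate siRNA target windows on a lncRNA sequence.
# The sequence and the two filters used here (window GC content, no long
# single-base runs) are illustrative only; real pipelines apply many more rules.

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_long_run(seq: str, max_run: int = 4) -> bool:
    run, prev = 1, ""
    for base in seq:
        run = run + 1 if base == prev else 1
        if run > max_run:
            return True
        prev = base
    return False

def candidate_windows(lncrna: str, size: int = 21,
                      gc_min: float = 0.30, gc_max: float = 0.55):
    """Yield (position, window) pairs passing the toy filters."""
    lncrna = lncrna.upper().replace("T", "U")
    for i in range(len(lncrna) - size + 1):
        w = lncrna[i:i + size]
        if gc_min <= gc_content(w) <= gc_max and not has_long_run(w):
            yield i, w

if __name__ == "__main__":
    placeholder_lncrna = ("AUGGCUAAUCGGAUUCCGAUAGCUUAGGCAUUACGGAUCCAUAGG"
                          "CUUAACGGUAUGCAUCCGGAUAAGCUAGCAUCG")
    for pos, window in list(candidate_windows(placeholder_lncrna))[:3]:
        print(f"position {pos:3d}: {window}")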
There are two main strategies for modulating miRNA function in pain treatment: upregulation or downregulation of specific miRNAs. miRNA mimics or virus‐based constructs have been used to upregulate miRNA expression, whereas miRNA inhibitors, miRNA sponges, or inhibition of a particular miRNA‐mRNA interaction have been applied to downregulate specific miRNAs. The design of lncRNA‐based therapeutics likewise includes diverse approaches, such as post‐transcriptional inhibition of lncRNAs by antisense oligonucleotides or siRNA and steric blockade of lncRNA–protein interactions by small molecules and morpholinos. 135

However, much work remains before miRNA‐ and lncRNA‐based analgesics can be used clinically, for various reasons, including their conservation among species, proper delivery, stability, off‐target effects, and potential activation of the immune system.

First, most available findings regarding the efficacy of miRNAs or lncRNAs in neuropathic pain are based on animal rather than human studies. Whether such results can be extrapolated to humans, and their translational potential, is still unknown and depends on their conservation among species.

Second, the blood–brain barrier (BBB) is a practical challenge for delivering RNA‐based therapeutics into the central nervous system (CNS) via intravenous injection. Viral vectors, polypeptides, aptamers, and particular chemical modifications have been developed to address this. Supplementing cholesterol molecules to the sense strand of a miRNA mimic or inhibitor has proven to be an efficient strategy.
Cholesterol‐conjugated siRNAs showed better silencing potency than unconjugated siRNAs and presented high efficacy for delivery to oligodendrocytes in the CNS. 136 Another proven method is the immunoliposome, a combination of liposomes, receptor‐targeted monoclonal antibodies, and the target molecules. 137 An immunoliposome nanocomplex has been reported to deliver therapeutic nucleic acids across the BBB into the deep brain via transferrin receptors. 138 In addition, intrathecal injection is a feasible approach in animal studies for neuropathic pain treatment. As reported, miR‐146a attenuated neuropathic pain partially by inhibiting TNF receptor‐associated factor 6 (TRAF6) and its downstream phospho‐JNK/C–C motif chemokine ligand 2 (pJNK/CCL2) signaling in the spinal cord. 139 Intrathecal injection of miR‐146a‐5p‐encapsulated nanoparticles provided an analgesic effect via NF‐κB and p38‐MAPK inhibition in spinal microglia. 140 In recent years, poly(D,L‐lactic‐co‐glycolic acid) (PLGA) nanoparticles have been applied to deliver siRNAs and plasmids into the spinal cord to treat neuropathic pain in rats. 141 The PLGA copolymer is a promising US Food and Drug Administration (FDA)‐approved gene transmission material because of its biodegradability and biocompatibility in humans. 142 Intrathecal treatment with C‐X3‐C motif chemokine receptor 1 (CX3CR1), p38, or p66shc siRNA‐encapsulated PLGA nanoparticles, or with forkhead box P3 (Foxp3) plasmid‐encapsulated PLGA nanoparticles, inhibits microglial activation and hyperalgesia in SNL rats. 141, 143, 144, 145

Exosomes are another promising delivery carrier for treating neuropathic pain. Exosomes are natural membranous microvesicles that carry RNAs, with the advantages of being efficient, cell‐free, and nonimmunogenic. Intravenously injected neuron‐targeted exosomes delivered their siRNA cargo to neurons, microglia, and oligodendrocytes to knock down specific gene expression in mouse brains. 146 This approach enabled cell‐specific delivery of the siRNA cargo across the BBB. Mesenchymal stem cells (MSCs) are pluripotent stem cells with immunomodulatory, anti‐inflammatory, and nutritional properties. The treatment efficacy of MSC‐derived exosomes has been proven in neuropathic pain. 147 Intrathecal, local, or subcutaneous application of exosomes obtained from human umbilical cord MSCs could mitigate nerve injury‐induced hyperalgesia. 148, 149, 150 Furthermore, immunofluorescence showed that most intrathecally injected exosomes could be found in injured peripheral axons, the DRG, and the spinal dorsal horn, suggesting a homing ability of exosomes. 148

Third, although miRNAs are relatively stable in vivo, with longer‐lasting efficacy and higher resistance to nucleolytic degradation than mRNAs, particular chemical modifications are still required to prolong their half‐life or increase their stability, for example, by generating locked nucleic acids (LNAs). LNA modifications increase the RNA affinity of antisense oligonucleotides, providing excellent miRNA inhibitory activity at a low dosage. 151 Additionally, the LNA technique has been used to synthesize highly stable aptamers. 152

Fourth, the off‐target effect is another consideration. One miRNA may regulate multiple genes, and the off‐target potential of one miRNA may produce undesirable side effects. 153, 154

Finally, activation of the immune system is a potential adverse event.
Specific CpG motifs in oligonucleotides could trigger nonspecific immunological activity. 155 Therefore, RNA‐based therapeutics might also produce specific immune reactions by eliciting antibodies against the oligonucleotides.

Recent human and animal studies have identified accumulating dysregulated miRNAs and lncRNAs in the serum or along pain processing pathways following peripheral nerve injury or noxious stimulation. Experimental studies have validated their essential role in neuropathic pain. miRNAs contribute to neuropathic pain development via neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain‐related mediators, protein kinases, and structural proteins, excitatory–inhibitory imbalances in neurotransmission, and exosomal miRNA‐mediated neuron–glia communication. Meanwhile, lncRNAs and circRNAs are crucial in neuropathic pain development by acting as AS RNAs and miRNA sponges, epigenetically regulating the expression of pain‐related molecules, or modulating miRNA processing. However, more work is required before miRNA‐ and lncRNA‐based analgesics can be used clinically, for various reasons, including their conservation among species, proper delivery, stability, off‐target effects, and potential activation of the immune system.

Ming Jiang: Investigation, Methodology. Yelong Wang: Manuscript preparation. Jing Wang: Figures and tables preparation. Shanwu Feng: Supervision, Funding acquisition. Xian Wang: Conceptualization, Funding acquisition.

The authors declare no competing interest in this review.
Keywords: circRNAs, lncRNAs, mechanism study, miRNAs, neuropathic pain
INTRODUCTION: According to the International Association for the Study of Pain, neuropathic pain is the most severe chronic pain condition triggered by a lesion or disease of the somatosensory system. It is characteristic of hyperalgesia, allodynia, or spontaneous pain. 1 Neuropathic pain can have peripheral and central origins, with the former including neuropathic pain after peripheral nerve injury, trigeminal neuralgia, postherpetic neuralgia, painful radiculopathy, and painful polyneuropathy. In contrast, central neuropathic pain includes neuropathic pain after spinal cord or brain injury, multiple sclerosis, and chronic central post‐stroke pain. Approximately 7%–10% of the general population will experience neuropathic pain, with the majority not having satisfactory pain relief with current therapies, leading to great suffering for individuals and enormous economic and social burdens. 2 The exact molecular mechanisms underlying neuropathic pain remain unclear, and elucidating them is crucial for developing mechanism‐based treatment strategies. One proposed mechanism involves altered gene or protein expression along the pain processing pathways. Therefore, understanding how genes or proteins are dysregulated may help us to find a way to normalize these abnormalities and treat neuropathic pain. In recent years, accumulating evidence has suggested the essential role of non‐coding RNAs (ncRNAs) in various physiological and pathological procedures, such as embryonic development, inflammation, tumors, and respiratory and cardiovascular diseases. 3 ncRNAs have no protein‐coding potential, but they can govern gene or protein expression with diverse mechanisms. ncRNAs are extensively distributed in the peripheral and central nervous systems, including pain‐related structures. 4 Broad abnormal ncRNAs expression is observed following peripheral stimulation. These abnormalities are related to hyperalgesia during chronic pain development. The available data indicate that ncRNAs may be essential for hyperalgesia. In this review, we focus on microRNA (miRNA), long non‐coding RNA (lncRNA), and circular non‐coding RNA (circRNA) expression changes in neuropathic pain. Other types of ncRNAs are seldom reported in neuropathic pain and, thus, are not discussed herein. Notably, we will pay attention to their etiological role in the development of neuropathic pain and the current challenges and considerations for miRNA‐, lncRNA‐, and circRNA‐based therapeutics for neuropathic pain. BIOGENESIS AND FUNCTION OF miRNAs, lncRNAs, AND circRNAs : Typically, miRNA production involves three steps: cropping, exporting, and dicing. First, the miRNA gene is transcribed in the nucleus, mainly by RNA polymerase II. 5 The resulting primary miRNA transcript (pri‐miRNA) is several kilobases (kb) in length, with a specific stem–loop structure that harbors mature miRNAs in the stem. The mature miRNA is ‘cropped’ by Drosha and its interactor DGCR8 (DiGeorge syndrome critical region 8), which cleaves pri‐miRNA at the stem to produce pre‐miRNA, 6 60–70 nucleotides (nt) in length with a hairpin structure. Then, exportin‐5 recognizes and exports pre‐miRNA from the nucleus to the cytoplasm. 7 In the cytoplasm, ribonuclease III (RNAse III), termed Dicer, further cleaves the pre‐miRNA to release double‐stranded miRNA with a length of ~22 nt. 8 This miRNA is unwound by an unknown helicase or cleaved by Argonaute (Ago) to form the RNA‐induced silencing complex. 
9 One strand in the RNA duplex remains with Ago as a mature miRNA, and the other is degraded. The seed sequence of the miRNA incompletely or entirely combines with the target mRNA sequence, resulting in target mRNAs degradation or transcriptional regulation. 10 Unlike miRNAs, lncRNAs are mRNAs‐like transcripts ranging in length from 200 nt to 100 kb that lack prominent open reading frames. 11 The lncRNA cellular mechanism is highly related to their intracellular localization. lncRNAs control chromatin functions, transcription, and RNA processing in the nucleus and affect mRNA stability, translation, and cellular signaling in the cytoplasm. 12 Compared to lncRNAs, circRNAs are more stable because a single circRNA molecular ends can be covalently linked compared to linear RNA. circRNAs are evolutionarily conserved molecules that are essential in the post‐transcriptional modification of gene expression by acting as miRNAs sponges or interacting with transcription or translational machinery. Numerous lncRNAs and circRNAs are distributed within pain‐related regions and dysregulated after peripheral noxious stimulation. Moreover, functional studies have indicated that miRNAs, lncRNAs, and circRNAs participate in neuropathic pain development by regulating diverse pain‐related genes along the pain processing pathways. PERIPHERAL NERVE INJURY OR NOXIOUS STIMULI INDUCE EXTENSIVE miRNAs, lncRNAs, AND circRNAs EXPRESSION CHANGES : miRNAs expression changes Microarray and deep‐sequencing analyses revealed that nerve injury or noxious stimuli could induce broad changes in miRNA expression in serum or along the pain processing pathways, including the perineal nerve, dorsal root ganglion (DRG), spinal cord, and supraspinal regions. For normal pain signal transmission, nerve injury or noxious stimuli are detected by nociceptors in the DRG or trigeminal ganglion (TG) and then transmitted to upstream neurons in the spinal dorsal horn. Subsequently, nociceptive stimuli are integrated, processed, and further transmitted ascending to specific supraspinal brain regions. Human studies have identified as many as 1134 differentially expressed (DE) genes in the serum of individuals with or without neuropathic pain after spinal cord injury (SCI). 13 miR‐204‐5p, ‐519d‐3p, ‐20b‐5p, and ‐6838‐5p might act as promising biomarkers and intervention targets for preventing and therapizing neuropathic pain after SCI. In trigeminal neuralgia individuals, serum miR‐132‐3p, ‐146b‐5p, ‐155‐5p, and ‐384 levels were prominently increased compared with healthy controls. 14 Patients with painful peripheral neuropathy had higher miR‐21 level in the serum and sural nerve compared with healthy controls. Meanwhile, miR‐155 was reduced in the serum and inflicted lower leg skin. 15 Animal and human studies showed that miR‐30c‐5p was upregulated in the serum and cerebro spinal fluid (CSF) of sciatic nerve injury rats and neuropathic pain patients with chronic peripheral ischemia. The high expression of miR‐30c‐5p, together with other clinical parameters, might be used to predict neuropathic pain development in patients with chronic peripheral ischemia. 16 Additionally, an animal study showed that plasma‐derived DE extracellular vesicle (EV) miRNAs regulated processes that are essential for neuropathic pain development. Most DE EV miRNAs for inflammation suppression were downregulated, potentially acting as biomarkers and targets in neuropathic pain treatment. 17 In addition, numerous DE miRNAs have been identified in the DRG. 
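The DE calls quoted throughout this section come from screens of this kind: an expression matrix of miRNAs across injured and control animals is filtered by fold change and adjusted p‑value. The sketch below runs such a filter on simulated data only; the thresholds (|log2 fold change| ≥ 1, adjusted p < 0.05) are common defaults, not necessarily those used in the cited studies.

# Rough sketch of a fold-change + t-test screen for differentially expressed
# miRNAs, with Benjamini-Hochberg correction. Data are simulated; the cited
# microarray/sequencing studies used their own dedicated pipelines.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_mirnas, n_injured, n_sham = 200, 5, 5

# log2 expression: most miRNAs unchanged, the first 20 shifted up after injury
sham = rng.normal(8.0, 1.0, size=(n_mirnas, n_sham))
injured = rng.normal(8.0, 1.0, size=(n_mirnas, n_injured))
injured[:20] += 2.0

log2_fc = injured.mean(axis=1) - sham.mean(axis=1)
pvals = stats.ttest_ind(injured, sham, axis=1).pvalue

# Benjamini-Hochberg adjusted p-values
order = np.argsort(pvals)
ranked = pvals[order] * n_mirnas / (np.arange(n_mirnas) + 1)
adj = np.minimum.accumulate(ranked[::-1])[::-1]
padj = np.empty(n_mirnas)
padj[order] = np.clip(adj, 0, 1)

de = (np.abs(log2_fc) >= 1.0) & (padj < 0.05)
print(f"{de.sum()} candidate DE miRNAs "
      f"({(de & (log2_fc > 0)).sum()} up, {(de & (log2_fc < 0)).sum()} down)")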
One week after spared nerve injury (SNI), 33 and 39 miRNAs in the DRG were upregulated and downregulated, respectively, with most DE miRNAs related to axon guidance, focal adhesion, and Ras and Wnt signaling pathways. 18 Furthermore, nerve injury‐induced miRNA expression was dynamic and time‐dependent, 19, 20 implicating multiple regulatory mechanisms in neuropathic pain initiation and development. Nerve injury redistributes miRNAs from a uniform pattern within the DRG soma of non‐allodynic animals to preferential localization in peripheral neurons of allodynic animals. 21 Furthermore, either sciatic nerve ligation (SNL), DRG transection (DRT), or ventral root transection (VRT) could upregulate miR‐21 and miR‐31 while downregulating miR‐668 and miR‐672 in the injured DRG, 22 implying that these miRNAs could be therapeutic targets for treating diverse types of neuropathic pain. The spinal cord dorsal horn relays and modulates pain signals from the peripheral nociceptors to the supraspinal regions. In the spinal cord, numerous miRNA expression changes have been observed in chronic constriction injury (CCI) and diabetic neuropathic pain (DNP) rodents. 23, 24, 25, 26 Previous research has suggested that some miRNAs, for example miR‐500, ‐221, and ‐21, 25 may be related to neuropathic pain development and could thus act as potential targets in its treatment. 27 Cytoscape software was used to construct the miRNA–target gene regulatory network in supraspinal regions, including the nucleus accumbens (NAc), medial prefrontal cortex, and periaqueductal gray, between SNI and sham rats. Finally, four essential DE genes, including CXCR2, IL12B, TNFSF8, and GRK1, and five miRNAs, including miR‐208a‐5p, ‐7688‐3p, ‐344f‐3p, ‐135b‐3p, and ‐135a‐2‐3p, were identified, indicating their essential roles in neuropathic pain pathogenesis. 28 Furthermore, in the prelimbic cortex of SNI rats, the DE miRNA–mRNA network pointed to molecules associated with inflammation. 29 DE miRNAs were also observed in the bilateral hippocampus of CCI rats; however, no significant difference was observed between the two sides. 30 (Figure 1).
Figure 1. microRNAs (miRNAs) expression change following peripheral nerve injury or noxious stimuli. SCI, spinal cord injury; CSF, cerebrospinal fluid; DE, differentially expressed; EV, extracellular vesicle; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury; DRT, DRG transection; VRT, ventral root transection.
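Regulatory networks like the Cytoscape analysis just described are, at bottom, directed graphs from DE miRNAs to their predicted target genes, and hub targets hit by several miRNAs are often prioritized for validation. The networkx sketch below rebuilds that structure with hypothetical miRNA and gene names; it mirrors the format of such analyses, not the actual edge list of any published network.

# Minimal sketch of a miRNA-target regulatory network of the kind built in
# Cytoscape. The edges are hypothetical examples in the same format as the
# published analyses, not the real interactions from any cited study.
import networkx as nx

edges = [
    ("miR-A-5p", "GeneX"), ("miR-A-5p", "GeneY"),
    ("miR-B-3p", "GeneY"), ("miR-C-3p", "GeneY"),
    ("miR-C-3p", "GeneZ"),
]

g = nx.DiGraph()
for mirna, target in edges:
    g.add_node(mirna, kind="miRNA")
    g.add_node(target, kind="target")
    g.add_edge(mirna, target)

# Targets regulated by the most DE miRNAs are natural candidates for follow-up.
hubs = sorted((n for n, d in g.nodes(data=True) if d["kind"] == "target"),
              key=g.in_degree, reverse=True)
for target in hubs:
    print(target, "regulated by", g.in_degree(target), "miRNA(s)")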
lncRNAs and circRNAs expression changes Patients with type 2 DNP had prominently higher expression of serum lncRNA NONRATT021972 and more severe neuropathic pain symptoms. 31 LINC01119 and LINC02447 in the peripheral blood of SCI patients were identified in pain pathways that are important for neuropathic pain development. 32 Microarray analysis revealed that nerve injury could induce time‐dependent lncRNA expression changes in the sciatic nerve. 33 lncRNA H19 was persistently upregulated in Schwann cells along the peripheral nerve, proximal and distal to the injured site. 34 In the DRG, transcriptomic analysis identified 86 known and 26 novel lncRNA genes as DE after spared sciatic nerve injury. 35 Of these, rno‐Cntnap2 and AC111653.1 were essential in peripheral nerve regeneration and were involved in neuropathic pain. Next‐generation RNA sequencing showed that 134 lncRNAs and 188 circRNAs were prominently changed 14 days after SNI in the spinal cord. 36 In addition, microarray analysis identified 1481 and 1096 DE lncRNAs and mRNAs, respectively, in the spinal cord dorsal horn of DNP rats. Of these, 289 neighboring and 57 overlapping lncRNA‐mRNA pairs, including ENSMUST00000150952‐Mbp and AK081017‐Usp15, have been suggested to participate in neuropathic pain development. 37 circRNAs have characteristic circularized structures generated by backsplicing of exons from antisense RNAs (AS RNAs) or mRNAs and are thus highly stable. Most circRNAs are highly conserved among species and lack translation potential, despite reports of cap‐independent translation. A recent study showed that 363 and 106 circRNAs were significantly dysregulated in the ipsilateral dorsal horn after nerve injury. 38 (Figure 2).
Figure 2. Long non‐coding RNAs (lncRNAs) and circular non‐coding RNAs (circRNAs) expression change following peripheral nerve injury or noxious stimuli. DNP, diabetic neuropathic pain; SCI, spinal cord injury; SNL, spinal nerve ligation; DRG, dorsal root ganglion; SNI, spared nerve injury; CCI, chronic constriction injury.
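The "neighboring" and "overlapping" lncRNA–mRNA pairs mentioned above are defined from genomic coordinates. The sketch below shows one plausible way to make that call with simple interval logic; the gene names, coordinates, and 10 kb window are invented for illustration and are not the criteria used in the cited microarray study.

# Toy classification of lncRNA-mRNA pairs as "overlapping" or "neighboring"
# from genomic intervals. Coordinates, gene names, and the 10 kb cutoff are
# invented placeholders; the cited study defined its own criteria.
from dataclasses import dataclass

@dataclass
class Gene:
    name: str
    chrom: str
    start: int
    end: int

def classify_pair(lnc: Gene, mrna: Gene, neighbor_window: int = 10_000) -> str:
    if lnc.chrom != mrna.chrom:
        return "unrelated"
    if lnc.start <= mrna.end and mrna.start <= lnc.end:
        return "overlapping"
    gap = max(mrna.start - lnc.end, lnc.start - mrna.end)
    return "neighboring" if gap <= neighbor_window else "unrelated"

if __name__ == "__main__":
    # Placeholder loci loosely modeled on a lncRNA next to a coding gene.
    lnc = Gene("lncRNA-X", "chr18", 1_000_000, 1_004_000)
    close_mrna = Gene("GeneA", "chr18", 1_008_500, 1_020_000)
    overlapping_mrna = Gene("GeneB", "chr18", 1_003_000, 1_015_000)
    print(classify_pair(lnc, close_mrna))        # neighboring
    print(classify_pair(lnc, overlapping_mrna))  # overlapping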
STUDYING miRNA NEUROPATHIC PAIN MECHANISMS : miRNAs contribute to the development of neuropathic pain via diverse mechanisms, such as neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain‐related mediators, protein kinases, and structural proteins, excitatory–inhibitory imbalances in neurotransmission, and exosomal miRNA‐mediated neuron–glia communication (Table 1).
Table 1. miRNAs expression change for the development of neuropathic pain. Abbreviations: ACC, anterior cingulate cortex; BACE1, beta‐site amyloid precursor protein‐cleaving enzyme 1; BDNF, brain‐derived neurotrophic factor; CCI, chronic constriction injury; CCR1, C–C chemokine receptor 1; CSF, cerebrospinal fluid; CSF1, colony‐stimulating factor‐1; CXCR4, chemokine CXC receptor 4; DNMT3a, DNA methyltransferase 3a; DNP, diabetic neuropathic pain; DRG, dorsal root ganglion; EV, extracellular vesicle; EZH2, enhancer of zeste homolog 2; GRK2, G protein‐coupled receptor kinase 2; IGF‐1/IGF‐1R, insulin‐like growth factor‐1/insulin‐like growth factor‐1 receptor; KCNMA1, calcium‐activated potassium channel subunit α‐1; MeCP2, methyl cytosine–guanine dinucleotide (CpG)‐binding protein 2; MIP‐1a, macrophage inflammatory protein‐1 alpha; NEFL, neurofilament light polypeptide; NF‐κB, nuclear factor‐kappa B; NGF, nerve growth factor; NLRP3, NOD‐like receptor protein 3; NOX2, NADPH oxidase 2; NP, neuropathic pain; pSNL, partial sciatic nerve ligation; RAP1A, Ras‐related protein 1A; RASA1, RAS P21 protein activator 1; ROS, reactive oxygen species; S1PR1, sphingosine‐1‐phosphate receptor 1; SCI, spinal cord injury; Scn2b, β2 subunit of the voltage‐gated sodium channel; SIRT1, histone deacetylase sirtuin 1; SNL, spinal nerve ligation; SOCS1, suppressor of cytokine signaling 1; STAT3, signal transducer and activator of transcription 3; TLR4, toll‐like receptor 4; TNFR1, TNF receptor 1; TRPA1, transient receptor potential ankyrin 1; TXNIP, thioredoxin‐interacting protein; VRT, ventral root transection; ZEB1, zinc finger E‐box binding homeobox 1.
miRNAs regulate neuroinflammation in neuropathic pain development miRNA‐based epigenetic regulation is essential in neuroinflammation. miRNAs are predicted to regulate diverse neuroinflammation‐related targets along the pain processing pathways. Among patients with neuropathic pain, miR‐101 was decreased in the serum and sural nerve, a change related to nuclear factor‐kappa B (NF‐κB) signaling activation. 39 Meanwhile, serum miR‐124a and miR‐155 expression was upregulated; these miRNAs were identified to inhibit histone deacetylase sirtuin 1 (SIRT1) in primary human cluster of differentiation 4 (CD4)‐positive cells and induce their differentiation toward regulatory T cells (Tregs), thus reducing pain‐related inflammation. 40 Such miRNA–target interactions may act as an endogenous protective mechanism in neuropathic pain.
miR‐7a, expressed in small‐sized nociceptive DRG neurons, is downregulated after nerve injury, as reported, targeting neurofilament light polypeptides (NEFLs). 41 NEFL encodes a neuronal protein vital for neurofilament formation and increases signal transducer and activator of transcription 3 (STAT3) phosphorylation, which is highly related to cell differentiation and neuroinflammation. In diabetic peripheral neuropathic mice, miR‐590‐3p was downregulated to disinhibit Ras‐related protein 1A (RAP1A) in the DRG tissue and inhibit neural T cells infiltration. 42 Thus, exogenous miR‐590‐3p may be a potential alternative for neuropathic pain treatment. CCI downregulated miR‐140 and miR‐144 expression in the DRG. Intrathecally injected miR‐140 and miR‐144 agomir decreased inflammatory factor secretion and ameliorated hyperalgesia by targeting sphingosine‐1‐phosphate receptor 1 (S1PR1) and RAS P21 protein activator 1 (RASA1), respectively. 43 , 44 miRNAs are essential in neuropathic pain development via neuroinflammation‐related mechanisms in the spinal cord. miR‐130a‐3p targets and downregulates insulin‐like growth factor‐1/insulin‐like growth factor‐1 receptor (IGF‐1/IGF‐1R) expression to alleviate SCI‐induced neuropathic pain by mitigating microglial activation and NF‐κB phosphorylation. 45 miR‐378 was decreased in CCI rats, inhibiting neuropathic pain development by targeting the enhancer of zeste homolog 2 (EZH2). 46 EZH2 promotes neuropathic pain by increasing tumor necrosis factor‐alpha (TNF‐α), interleukin (IL)‐1β, and monocyte chemoattractant protein‐1 (MCP‐1) production. Intrathecally injecting miRNA‐138 lentivirus can remarkably alleviate neuropathic pain in partial sciatic nerve ligation (pSNL) rats by suppressing toll‐like receptor 4 (TLR4) and macrophage inflammatory protein‐1 alpha (MIP‐1α)/C‐C chemokine receptor 1 (CCR1) signaling pathways. 47 miR‐15a/16 targets and downregulates G protein‐coupled receptor kinases 2 (GRK2) to disinhibit p38‐MAPK (mitogen‐activated protein kinase) and NF‐κB, contributing to neuroinflammation after CCI. 48 GRK2‐deficient mice present a pro‐inflammatory phenotype in spinal cord microglia/macrophages, restored by miR‐124. 49 SNL increased DNA methyltransferase 3a (DNMT3a) expression related to hypermethylation of the miR‐214‐3p promoter, resulting in miR‐214‐3p expression reduction, which enhanced astrocyte reactivity, colony‐stimulating factor‐1 (CSF1), and interleukin 6 (IL‐6) production, and hyperalgesia in rats. 50 Electro‐acupuncture attenuated SCI by inhibiting Nav1.3 and Bax in the injured spinal cord through miR‐214 upregulation. 51 Downregulated miR‐128 was reported to contribute to neuropathic pain via p38 or zinc finger E‐box binding homeobox 1 (ZEB1) activation in the spinal cord. 52 , 53 Meanwhile, ZEB1 was also targeted by miR‐200b/miR‐429, orchestrating neuropathic pain development. 54 In a pSNL mouse model, nerve injury significantly reduced miR‐23a expression in spinal glial cells, concomitant with the upregulation of its target chemokine, CXC receptor 4 (CXCR4). In naïve mice, either miR‐23a downregulation or CXCR4 upregulation could active the thioredoxin‐interacting protein (TXNIP)/NOD‐like receptor protein 3 (NLRP3) inflammasome axis. Both intrathecal miR‐23a mimics and spinal CXCR4 downregulation by a lentivirus inhibited TXNIP or NLRP3 upregulation to alleviate hyperalgesia. 55 Notably, one miRNA was shown to have distinctive targets in different animal models of the spinal cord. 
For example, after SCI, miR‐155 downregulation de‐repressed its target NADPH oxidase 2 (NOX2), inducing reactive oxygen species (ROS) production and a pro‐inflammatory phenotype in microglia/macrophages. 56 The ability to induce glial polarization was also observed in cultured BV‐2 microglia. 57 In bortezomib‐induced neuropathic pain rats, downregulated miR‐155 upregulated TNF receptor 1 (TNFR1) expression, which activated its downstream signaling pathways, including p38‐MAPK, c‐Jun N‐terminal kinase (JNK), and transient receptor potential ankyrin 1 (TRPA1). 58 Therefore, it has been suggested that miR‐155 might act as an intervention target for neuropathic pain. As expected, treatment with ibuprofen and L‐arginine delayed the behavioral pain changes while inhibiting spinal miR‐155 and NO. 59 miR‐155‐5p is also known to destabilize the blood–nerve barrier and the expression of tight junction proteins, such as claudin‐1 and zonula occludens‐1 (ZO‐1). Tissue plasminogen activator (tPA) could transiently open such barriers to facilitate topical application of analgesics, via miR‐155‐5p upregulation. 60 However, we also noted that CCI upregulated rather than downregulated spinal cord miR‐155, and miR‐155 inhibition enhanced suppressor of cytokine signaling 1 (SOCS1) expression to dampen inflammation via NF‐κB and p38‐MAPK inhibition. 61 In oxaliplatin‐induced peripheral neuropathic pain, spinal cord miR‐155 expression was also upregulated, and intrathecal injection of a miR‐155 inhibitor attenuated hyperalgesia in rats, possibly by inhibiting oxidative stress–TRPA1 pathways. 62 The underlying mechanisms of such distinctive miR‐155 expression changes are still unknown and require further research in different pain models.
Autophagy miRNA‐related autophagy is involved in neuropathic pain regulation. As reported, miR‐15a was downregulated post‐CCI, which increased AKT serine/threonine kinase 3 (AKT3) expression and inhibited autophagy. 63 Impaired autophagy participates in neuropathic pain development. Intrathecal miR‐15a agomir prominently suppressed AKT3 expression, induced autophagy, and attenuated CCI‐induced neuropathic pain. Similar to miR‐15a, miR‐145 and miR‐20b‐5p contributed to neuropathic pain regulation via protein kinase B (AKT)‐related autophagy pathways. 64, 65 Meanwhile, miR‐195 in the spinal cord was upregulated post‐SNL, targeting and inhibiting Autophagy‐Related 14 (ATG14) and ATG14‐mediated autophagy activation. 66 A miR‐195 inhibitor activated autophagy and suppressed neuroinflammation in vivo and in vitro.
The contribution of miRNAs to ion channel expression in neuropathic pain Another class of miRNA‐based regulation focuses on ion channels, including sodium, potassium, and calcium channels, as well as transient receptor potential (TRP) channels, to modulate action potential production, firing rate, and neurotransmitter release. Theoretically, one miRNA can simultaneously have multiple target mRNAs because precise matching is not a prerequisite for inhibiting the target sequence. For example, other than NEFL, 41 miR‐7a targets the β2 subunit of the voltage‐gated sodium channel (Scn2b) through post‐transcriptional regulation to induce hyperexcitability of nociceptive DRG neurons. 67 Beyond miR‐7a, other known miRNA–target pairs involved in sodium channel expression include miR‐30b, ‐96, and ‐384‐5p and their target Scn3a (encoding Nav1.3), 68, 69, 70 miR‐30b‐5p and its target Scn8a (encoding Nav1.6), 71 and miR‐182 and ‐30b and their target Scn9a (encoding Nav1.7). 72, 73 A miRNA cluster is a polycistronic gene containing several miRNAs derived from a single primary or nascent transcript.
Approximately 40% of miRNAs are predicted to form clusters, but the significance of clustering is still largely unknown. miR‐17‐92 is a miRNA cluster that includes six different members, of which upregulation of miR‐18a, ‐19a, ‐19b, and ‐92a induces allodynia. The predicted targets of the miR‐17‐92 cluster encompass genes encoding diverse voltage‐gated potassium channels and their regulatory subunits, including Kv1.1, Kv1.4, Kv3.4, Kv4.3, Kv7.5, dipeptidyl peptidase 10 (DPP10), and Navβ1. 74 CCI upregulated miR‐137 to target and downregulate Kcna2, which encodes Kv1.2, in the DRG and spinal dorsal horn. 75 By contrast, CCI decreased miR‐183‐5p expression in the DRG, and the predicted target gene TREK‐1, a subunit of the two‐pore‐domain K+ channel, was increased. 76 Furthermore, the miR‐183 cluster (miR‐96/182/183) regulates both basal mechanical and neuropathic pain. 77 The miR‐183 cluster targets Cacna2d1 and Cacna2d2, which encode the auxiliary voltage‐gated calcium channel subunits α2δ‐1 and α2δ‐2, to affect nociceptor excitability. Nerve injury downregulated miR‐103 in the spinal cord, which simultaneously targets and inhibits Cacna1c, Cacna2d1, and Cacnb1, encoding the Cav1.2‐α1, α2δ1, and β1 subunits of the Cav1.2 L‐type calcium channel macromolecular complex, respectively. 78 Intrathecal miR‐103 successfully relieved neuropathic pain. In addition, TRP channels are ligand‐gated ion channels that promote painful sensations. 79 In the DRG, TRPA1 participates in miR‐141‐5p‐mediated alleviation of oxaliplatin‐induced neuropathic pain. 80 In SNI mice, miR‐449a ameliorated neuropathic pain by decreasing the activity of TRPA1 and the calcium‐activated potassium channel subunit α‐1 (KCNMA1), thus acting as a potential therapeutic alternative for treating neuropathic pain. 81
miRNAs regulate pain-related mediators, protein kinases, and structural proteins
miRNAs regulate diverse pain-related mediators, protein kinases, and structural proteins along the pain-processing pathways. In HIV-associated symptomatic distal sensory polyneuropathy and neuropathic pain, serum miR-455-3p acted as a potential biomarker, possibly targeting multiple genes involved in peripheral neuropathic pain, such as nerve growth factor (NGF) and related genes. 82 Brain-derived neurotrophic factor (BDNF), another well-recognized pain mediator, is a common target of miR-1, -183, and -206. 83, 84, 85 After nerve injury, several miRNAs, including miR-19a, -301, and -132, were downregulated, and expression of their target, methyl cytosine–guanine dinucleotide (CpG)-binding protein 2 (MeCP2), was increased, leading to concomitant BDNF upregulation. 86 miR-30c-5p was upregulated in the serum and CSF of patients with chronic peripheral ischemia. 16 Intraventricular injection of a miR-30c-5p inhibitor postponed neuropathic pain development and fully reversed hyperalgesia in rodents. Transforming growth factor beta (TGF-β) participates in the effects of miR-30c-5p, an action linked to the endogenous opioid analgesic system. 16 miRNAs also dysregulate specific pain-related protein kinases. For example, in DNP rats, miR-133a-3p was dysregulated in the sciatic nerve and interacted with p-p38 MAPK to participate in the development of neuropathic pain. 87 Through its methyl-CpG-binding domain and transcriptional repression domain, MeCP2 acts as a transcriptional repressor, and its overexpression improves neuropathic pain, suggesting an anti-nociceptive effect of MeCP2. The authors also noted that MeCP2 expression changed post-transcriptionally: its mRNA level did not change significantly after SNI, whereas the protein level was upregulated. During the development of neuropathic pain, phospho-cAMP response element-binding protein (p-CREB) rose rapidly but returned to baseline 3–7 days after SNI, concomitant with downregulation of miR-132, which targets MeCP2 and inhibits its expression post-transcriptionally. 88 SNL decreased miR-200b and miR-429 in NAc neurons, along with upregulation of their target, DNMT3a. 89 Further mechanistic studies found that DNMT3a in the NAc was expressed in NR1-immunoreactive neurons, suggesting dysregulation of the 'mesolimbic motivation circuitry' in neuropathic pain development. CCI induced time-dependent miR-1 downregulation in the injured sciatic nerve. This expression change was related to the upregulation and translocation of the miR-1 target connexin 43 (Cx43), the major connexin of astrocytes. 83 miR-1 mimics could reduce Cx43 expression in cultured human glioblastoma cells. However, intraneural transfection of miR-1 mimics failed to alter Cx43 protein expression and did not improve pain behavior. The authors attributed this treatment failure to insufficient inhibition of Cx43 by miR-1 delivered via intraneural injection; alternatively, regulatory mechanisms other than miR-1 may control Cx43 in vivo. 90 Moreover, miR-1 in the DRG was differentially expressed depending on the type of peripheral nerve injury: CCI, pSNL, and sural nerve injury downregulated miR-1, whereas sciatic nerve axotomy and tibial nerve injury increased its expression in the DRG. 83, 91, 92 As previously described, miR-1 was also upregulated following capsaicin treatment and in bone cancer pain, 91, 93 and miR-1 downregulation inhibited bone cancer pain. Collectively, the action of miR-1 on pain appears complex and stimulus-dependent. Notably, specific miRNAs can promote neuropathic pain development via diverse mechanisms. For example, beta-site amyloid precursor protein-cleaving enzyme 1 (BACE1), a membrane protease essential for myelination, was downregulated after miR-15b overexpression in vitro and in the DRG of rats with chemotherapy-related neuropathic pain. 94 BACE1-mediated reduction of neuregulin 1 decreases nerve conduction velocity. In addition, BACE1 modulated Navβ2 subunit expression and neuronal activity and regulated inflammation-related TNFR expression.

Neurotransmission excitatory–inhibitory imbalances
Imbalances between excitatory and inhibitory neurotransmission in the spinal cord also contribute to neuropathic pain development. For example, spinal miR-500 was increased and targeted a specific site in Gad1 to regulate glutamic acid decarboxylase 67 (GAD67) expression in the dorsal horn. 95 The reduction in GAD67 expression inhibited the function of GABAergic neurons, and the resulting dysregulation of inhibitory synaptic transmission contributed to neuropathic pain development. In addition, miR-23b is crucial for improving neuropathic pain in the injured spinal cord by downregulating its target gene, Nox4, which in turn normalizes glutamic acid decarboxylase 65/67 (GAD65/67) expression and protects GABAergic neurons from apoptosis. 96 Microarray analysis showed that miR-539 was prominently reduced in the contralateral anterior cingulate cortex (ACC) after CCI, a change related to enhanced NR2B protein expression. Injecting miR-539 mimics into the contralateral ACC attenuated CCI-evoked mechanical hyperalgesia, suggesting that the N-methyl-D-aspartate (NMDA) receptor NR2B subunit regulates neuropathic pain. 97

EV miRNA-mediated neuron–glia communication
With regard to EV miRNAs in neuropathic pain, we mainly focus on exosomal miR-21. As aforementioned, SNL, DRT, or VRT could upregulate miR-21 in the injured DRG. 22 Notably, miR-21 was also increased in serum exosomes from nerve-ligated mice. 98 In another in-depth study, macrophages readily took up purified sensory neuron-derived exosomes containing miR-21-5p, promoting a pro-inflammatory phenotype. 99 Either an intrathecal miR-21-5p antagomir or conditional deletion of miR-21 in sensory neurons ameliorated hyperalgesia and macrophage recruitment in the DRG. Beyond the canonical target-gene inhibitory mechanism, miR-21 acts as an endogenous toll-like receptor 8 (TLR8) ligand, leading to neuropathic pain development. 100 TLR8, a nucleic acid-sensing receptor located in endosomes and lysosomes, drives ERK-mediated inflammatory mediator production and neuronal activation after SNL. Although miR-21 and TLR8 are co-expressed only in small- and medium-sized neurons, miR-21 can also be derived from large-sized neurons and reach TLR8 in the endosomes of other neuron types. 101 Similar to miR-21, miR-23a-enriched EVs were secreted by DRG sensory neurons following nerve injury and taken up by macrophages, enhancing M1 polarization in vitro. A20, an inhibitor of the NF-κB signaling pathway, is a verified miR-23a target gene. 102 Moreover, intrathecal delivery of an EV-miR-23a antagomir attenuated neuropathic hyperalgesia and reduced M1 macrophages; mechanistically, miR-23a inhibits A20 and thereby activates NF-κB signaling.
STUDYING lncRNAs AND circRNAs IN NEUROPATHIC PAIN MECHANISMS
lncRNAs and circRNAs are essential in neuropathic pain development by acting as antisense (AS) RNAs or miRNA sponges, epigenetically regulating the expression of pain-related molecules, or modulating miRNA processing (Table 2).

Table 2. Long non-coding RNA (lncRNA) and circular RNA (circRNA) expression changes in the development of neuropathic pain.

lncRNAs act as AS RNA
After peripheral nerve injury, Egr2-AS-RNA is upregulated in Schwann cells. 103 At the early growth response 2 (Egr2) promoter, Egr2-AS-RNA recruits an epigenetic silencing complex to downregulate Egr2, which is essential for peripheral myelination. Ectopic Egr2-AS-RNA expression in DRG cultures downregulates Egr2 mRNA and induces demyelination, whereas in vivo inhibition of Egr2-AS-RNA reverts Egr2-related gene expression and delays demyelination. The voltage-gated potassium channel (Kv) antisense transcript Kcna2-AS-RNA is an endogenous, highly conserved, and widely explored lncRNA in neuropathic pain. 104 It is a natural antisense transcript (NAT) distributed in the cytoplasm that targets Kcna2 mRNA, which encodes the pain regulation-related membrane Kv1.2 subunit. Kcna2-AS-RNA was time-dependently upregulated in the injured rat DRG after nerve injury; it decreased the total Kv current, increased DRG neuron excitability, and produced neuropathic pain symptoms. The DNA strand complementary to the Scn9a gene encodes Scn9a NAT, another antisense lncRNA expressed in the DRG. 105 Scn9a NAT is suggested to be a negative regulator of Scn9a mRNA: its overexpression inhibited Scn9a mRNA, the encoded protein Nav1.7, and Nav1.7 currents in DRG neurons. However, Scn9a NAT and Scn9a mRNA levels did not change significantly in the injured DRG until 2 weeks after nerve injury. More work is required to determine whether NATs can confer analgesia and reduce pain in the neuropathic pain state.
lncRNAs act as miRNA sponges
lncRNAs may act as miRNA sponges, forming lncRNA–miRNA–mRNA axes that regulate target gene expression. For example, the lncRNA X-inactive specific transcript (XIST) is upregulated in the dorsal horn of the spinal cord after CCI. By sponging miR-137, -150, -154-5p, and -544, XIST can inhibit the expression of the corresponding targets, including TNF-α-induced protein 1 (TNFAIP1), ZEB1, TLR5, and STAT3. 106, 107, 108, 109 TNFAIP1 activates the NF-κB signaling pathway, while ZEB1, TLR5, and STAT3 are crucial in the neuroinflammatory response. XIST inhibition markedly ameliorates neuropathic pain development. In addition, the lncRNA nuclear enriched abundant transcript 1 (NEAT1) was upregulated in the spinal cord dorsal horn, forming NEAT1–miR-381–high mobility group box 1 (HMGB1) and NEAT1–miR-128-3p–aquaporin 4 (AQP4) axes following CCI and SCI, respectively. 110, 111 NEAT1 downregulation inhibited IL-6, IL-1β, and TNF-α and improved neuropathic pain. Although the function of circRNAs as miRNA sponges is still largely unknown, they have been termed competing endogenous RNAs that bind target miRNAs and regulate their function. 112 For example, circHIPK3 is a circRNA highly enriched in serum from DNP patients and in the DRG of DNP rats. 113 circHIPK3 sponges miR-124 to promote neuroinflammation, pointing to involvement of the circHIPK3–miR-124 axis in DNP. circRNA ciRS-7 participates in neuropathic pain progression by sponging miR-135a-5p to regulate autophagy and inflammation in the spinal cord of CCI rats. 114 In addition, the cirZNF609–miR-22-3p–enolase 1 (ENO1) and circ_0005075–miR-151a-3p–NOTCH receptor 2 (NOTCH2) regulatory axes upregulate inflammatory factor expression and promote neuropathic pain development in the spinal cord after CCI. 115, 116
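The sponge mechanism above is essentially a competition for a limited miRNA pool. The minimal Python sketch below, a toy mass-action model with made-up parameters in arbitrary units rather than measured kinetics, shows how adding sponge transcripts leaves a larger fraction of a target mRNA unbound (derepressed); it illustrates the competing-endogenous-RNA idea generically and does not model any specific lncRNA or circRNA.

```python
# Toy ceRNA competition: one miRNA pool binds either a target mRNA or a
# sponge RNA. All rate constants and amounts are arbitrary illustrative
# units, not experimentally derived values.
def unbound_target_fraction(mirna=1.0, target=1.0, sponge=0.0,
                            k_on=1.0, k_off=0.1, dt=0.01, steps=5000):
    mir_tgt = 0.0   # miRNA:target complexes
    mir_spg = 0.0   # miRNA:sponge complexes
    for _ in range(steps):
        free_mir = mirna - mir_tgt - mir_spg
        free_tgt = target - mir_tgt
        free_spg = sponge - mir_spg
        mir_tgt += (k_on * free_mir * free_tgt - k_off * mir_tgt) * dt
        mir_spg += (k_on * free_mir * free_spg - k_off * mir_spg) * dt
    return (target - mir_tgt) / target

for sponge_level in (0.0, 1.0, 4.0):
    frac = unbound_target_fraction(sponge=sponge_level)
    print(f"sponge={sponge_level:.1f}  unbound target fraction={frac:.2f}")
```

Raising the sponge level in this toy model frees more of the target mRNA, which mirrors how upregulation of a sponge such as XIST or circHIPK3 would be expected to derepress the targets of the sponged miRNA.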
lncRNAs epigenetically regulate pain-related molecule expression
In addition to acting as miRNA sponges, lncRNAs can epigenetically regulate the expression of pain-related molecules. For example, peripheral nerve injury decreases the expression of DRG-specifically enriched lncRNAs (DS-lncRNAs) in the injured DRG. Restoring DS-lncRNAs blocks the nerve injury-induced increase in euchromatic histone lysine N-methyltransferase 2 (Ehmt2) mRNA and its encoded protein G9a, reverses the G9a-related decreases in opioid receptors and Kcna2 in the injured DRG, and ameliorates nerve injury-induced pain hypersensitivity. 117 In addition, transcriptome screening in the DRG of DNP rats 118 has identified diverse dysregulated lncRNAs, including uc.48+, 119 BC168687, 120, 121 NONRATT021972, 31, 122, 123 MRAK009713, 124 and Lncenc1. 125 The expression of uc.48+, BC168687, NONRATT021972, MRAK009713, and Lncenc1 was prominently higher in the DRG of neuropathic pain rats. 119, 120, 121, 122, 123, 124, 125 Blocking this upregulation via intrathecal or intravenous administration of the corresponding small interfering RNA (siRNA) may alleviate neuropathic pain by inhibiting excitatory transmission mediated by purinergic receptors, 119, 120, 122, 123, 124 TNF-α-related pathways, 31 transient receptor potential vanilloid 1 (TRPV1), 121 or EZH2. 125 As demonstrated by luciferase assays and RNA-binding protein immunoprecipitation, the lncRNA small nucleolar RNA host gene 1 (SNHG1) can induce neuropathic pain in the spinal cord by binding to the promoter region of cyclin-dependent kinase 4 (CDK4) and stimulating its expression. 126 SNHG1 knockdown alleviated neuropathic pain development, and SNHG1 overexpression was able to induce neuropathic pain. Similarly, the lncRNA PKIA-AS1 participates in SNL-induced neuropathic pain by downregulating DNA methyltransferase 1 (DNMT1)-catalyzed CDK6 promoter methylation and thereby regulating CDK6. 127 Cyclin-dependent kinases (CDKs) transcriptionally enhance pro-inflammatory gene expression during the G1 phase of the cell cycle. Furthermore, cytokine-driven recruitment of CDK6 to the nuclear chromatin fraction is related to NF-κB, STAT, and activator protein 1 (AP-1) activation, inducing neuroinflammation. 128 Beyond the DRG, the lncRNA Kcna2-AS-RNA was also highly expressed in the spinal cord of postherpetic neuralgia rats, and its downregulation alleviated neuropathic pain by reducing phospho-STAT3 (pSTAT3) translocation from the cytoplasm to the nucleus and thereby inhibiting spinal astrocyte activation. 129

lncRNAs modulate miRNA processing
Specific lncRNAs are essential in neuropathic pain through modulation of miRNA processing. For example, the transcribed ultraconserved lncRNA uc.153 was prominently increased in the spinal cord of rats with CCI-induced neuropathic pain. uc.153 knockdown reversed CCI-induced pain behaviors and spinal neuronal hypersensitivity. Mechanistically, uc.153 negatively modulated Dicer-mediated pre-miR-182-5p processing and inhibited its maturation. Meanwhile, spinal miR-182-5p downregulation increased the expression of its target, ephrin type-B receptor 1 (EphB1), and of p-NR2B (the phosphorylated N-methyl-D-aspartate receptor (NMDAR) 2B subunit), facilitating hyperalgesia. 130
Collectively, lncRNAs are crucial in neuropathic pain via diverse mechanisms. Notably, one lncRNA may act through more than one mechanism to regulate target gene expression. For example, SNL upregulated circAnks1a in both the cytoplasm and the nucleus. In the cytoplasm, circAnks1a enhances the interaction between Y-box-binding protein 1 (YBX1) and transportin-1 to facilitate YBX1 nuclear translocation; in the nucleus, circAnks1a binds the Vegfb promoter and recruits YBX1 to it, enhancing Vegfb transcription. Additionally, cytoplasmic circAnks1a sponges miR-324-3p to regulate vascular endothelial growth factor B (VEGFB) expression. VEGFB binding to its receptor activates various downstream targets, including p38-MAPK, PKB/AKT (protein kinase B), extracellular signal-regulated kinase (ERK)/MAPK, and phosphoinositide 3-kinase (PI3K). 131 Therefore, VEGFB upregulation increases dorsal horn neuron excitability and contributes to pain hypersensitivity after nerve injury. 132 We also noted that specific lncRNAs were differentially expressed following various nerve injuries. For example, the lncRNA Malat1 was upregulated after CCI to sponge miR-206, and Malat1 suppression delayed neuropathic pain progression via miR-206–ZEB2 axis-mediated inhibition of neuroinflammation. 133 Conversely, in rats with complete brachial plexus avulsion-induced neuropathic pain, Malat1 decreased in spinal cord neurons, and this downregulation increased neuronal spontaneous electrical activity via regulation of calcium flux. 134 The reason for such distinctive Malat1 expression across different pain models is still unclear, and further research is required.
CHALLENGES AND CONSIDERATIONS IN DELIVERING miRNA- AND lncRNA-BASED THERAPEUTICS
There are two main strategies for modulating miRNA function in pain treatment: upregulation or downregulation of specific miRNAs. miRNA mimics or virus-based constructs are used to upregulate miRNA expression, whereas miRNA inhibitors, miRNA sponges, or blockade of a particular miRNA–mRNA interaction are applied to downregulate or neutralize specific miRNAs. The design of lncRNA-based therapeutics includes diverse approaches, such as post-transcriptional inhibition of lncRNAs by antisense oligonucleotides or siRNAs, and steric blockade of lncRNA–protein interactions by small molecules and morpholinos. 135 However, much work remains before miRNA- and lncRNA-based analgesics can be used clinically, for reasons including conservation among species, proper delivery, stability, off-target effects, and potential activation of the immune system.
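As a concrete, purely illustrative sketch of these two strategies, the Python snippet below derives a mimic, an inhibitor (the full-length reverse complement, as in an antagomir), and a toy sponge cassette of repeated, bulged binding sites from a mature miRNA sequence. The input sequence is only an example to be checked against miRBase, the bulge placement is arbitrary, and real reagents additionally rely on duplex design and chemical modifications that are not captured here.

```python
# Illustrative sequence design for the two modulation strategies described
# above. Example input only; chemistry (2'-O-methyl, LNA, cholesterol
# conjugation, duplex passenger strand) is deliberately omitted.
def reverse_complement_rna(seq):
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[n] for n in reversed(seq))

def design_modulators(mature_mirna, sponge_sites=4):
    mimic = mature_mirna                                  # guide strand of a mimic duplex
    inhibitor = reverse_complement_rna(mature_mirna)      # antagomir-style inhibitor
    site = list(reverse_complement_rna(mature_mirna))
    site[9:13] = "CUCU"                                   # crude central bulge in each sponge site
    sponge = "UUUU".join("".join(site) for _ in range(sponge_sites))
    return mimic, inhibitor, sponge

example_mirna = "UAGCUUAUCAGACUGAUGUUGA"   # miR-21-5p-like example; verify in miRBase
mimic, inhibitor, sponge = design_modulators(example_mirna)
print("mimic guide :", mimic)
print("inhibitor   :", inhibitor)
print("sponge      :", sponge)
```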
Cholesterol-conjugated siRNAs showed better silencing potency than unconjugated siRNAs and were efficiently delivered to oligodendrocytes in the CNS. 136 Another proven method is the immunoliposome, a combination of liposomes, receptor-targeted monoclonal antibodies, and the therapeutic cargo. 137 An immunoliposome nanocomplex has been reported to deliver therapeutic nucleic acids across the BBB into the deep brain via transferrin receptors. 138 In addition, intrathecal injection is a feasible approach for neuropathic pain treatment in animal studies. As reported, miR-146a attenuated neuropathic pain partially by inhibiting TNF receptor-associated factor 6 (TRAF6) and its downstream phospho-JNK/C–C motif chemokine ligand 2 (pJNK/CCL2) signaling in the spinal cord. 139 Intrathecal injection of miR-146a-5p-encapsulated nanoparticles provided an analgesic effect via NF-κB and p38-MAPK inhibition in spinal microglia. 140 In recent years, poly(D,L-lactic-co-glycolic acid) (PLGA) nanoparticles have been applied to deliver siRNAs and plasmids into the spinal cord to treat neuropathic pain in rats. 141 The PLGA copolymer is a promising US Food and Drug Administration (FDA)-approved gene delivery material because of its biodegradability and biocompatibility in humans. 142 Intrathecal treatment with PLGA nanoparticles encapsulating C-X3-C motif chemokine receptor 1 (CX3CR1), p38, or p66shc siRNA, or with PLGA nanoparticles encapsulating a forkhead box P3 (Foxp3) plasmid, inhibits microglial activation and hyperalgesia in SNL rats. 141 , 143 , 144 , 145 Exosomes are another promising delivery carrier for treating neuropathic pain. Exosomes are natural membranous microvesicles that carry RNAs, with the advantage of being efficient, cell-free, and non-immunogenic. Intravenously injected neuron-targeted exosomes delivered their siRNA cargo to neurons, microglia, and oligodendrocytes to knock down specific gene expression in mouse brains. 146 This approach enabled cell-specific delivery of the siRNA cargo across the BBB. Mesenchymal stem cells (MSCs) are multipotent stem cells with immunomodulatory, anti-inflammatory, and nutritional properties. The therapeutic efficacy of MSC-derived exosomes has been demonstrated in neuropathic pain. 147 Intrathecal, local, or subcutaneous application of exosomes obtained from human umbilical cord MSCs could mitigate nerve injury-induced hyperalgesia. 148 , 149 , 150 Furthermore, immunofluorescence results showed that most intrathecally injected exosomes could be found in injured peripheral axons, the DRG, and the spinal dorsal horn, suggesting a homing ability of exosomes. 148 Third, although miRNAs are relatively stable in vivo, with longer-lasting efficacy and higher resistance to nucleolytic degradation than mRNAs, particular chemical modifications are still required to prolong their half-life or increase their stability, for example, by generating locked nucleic acids (LNAs). LNA modifications increase the RNA affinity of antisense oligonucleotides, providing excellent miRNA inhibitory activity at a low dosage. 151 Additionally, the LNA technique has been used to synthesize highly stable aptamers. 152 Fourth, the off-target effect is another consideration. One miRNA may regulate multiple genes, and the off-target potential of one miRNA may produce undesirable side effects. 153 , 154 Finally, activation of the immune system is a potential adverse event. Specific CpG motifs in oligonucleotides can trigger nonspecific immunological activity.
155 Therefore, RNA-based therapeutics might also elicit specific immune reactions through the production of antibodies against the oligonucleotides. CONCLUSION: Recent human and animal studies have identified accumulating dysregulated miRNAs and lncRNAs in the serum or along pain processing pathways following peripheral nerve injury or noxious stimulation. Experimental studies have validated their essential role in neuropathic pain. miRNAs contribute to neuropathic pain development via neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain-related mediators, protein kinases, and structural proteins, excitatory–inhibitory imbalances in neurotransmission, and exosome miRNA-mediated neuron–glia communication. Meanwhile, lncRNAs and circRNAs are crucial in neuropathic pain development by acting as antisense RNAs and miRNA sponges, epigenetically regulating pain-related molecule expression, or modulating miRNA processing. However, more work is required before miRNA- and lncRNA-based analgesics can be used clinically; the reasons include conservation among species, proper delivery, stability, off-target effects, and potential activation of the immune system. AUTHOR CONTRIBUTIONS: Ming Jiang: Investigation, Methodology. Yelong Wang: Manuscript preparation. Jing Wang: Figure and table preparation. Shanwu Feng: Supervision, Funding acquisition. Xian Wang: Conceptualization, Funding acquisition. CONFLICT OF INTERESTS: The authors declare no competing interests related to this review.
Background: Non-coding RNAs (ncRNAs) are involved in neuropathic pain development. Herein, we systematically searched for neuropathic pain-related changes in ncRNA expression, including microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and circular non-coding RNAs (circRNAs). Methods: We searched two databases, PubMed and GeenMedical, for relevant studies. Results: Peripheral nerve injury or noxious stimuli can induce extensive changes in the expression of ncRNAs. For example, higher serum levels of miR-132-3p, -146b-5p, and -384 were observed in neuropathic pain patients. Sciatic nerve ligation, dorsal root ganglion (DRG) transection, or ventral root transection (VRT) could upregulate miR-21 and miR-31 while downregulating miR-668 and miR-672 in the injured DRG. lncRNAs, such as early growth response 2-antisense-RNA (Egr2-AS-RNA) and Kcna2-AS-RNA, were upregulated in Schwann cells and the injured DRG after nerve injury, respectively. Dysregulated circRNA homeodomain-interacting protein kinase 3 (circHIPK3) in serum and the DRG, as well as abnormally expressed lncRNAs X-inactive specific transcript (XIST), nuclear enriched abundant transcript 1 (NEAT1), and small nucleolar RNA host gene 1 (SNHG1), and the circRNAs ciRS-7, zinc finger protein 609 (cirZNF609), circ_0005075, and circAnks1a in the spinal cord, have been suggested to participate in neuropathic pain development. Dysregulated miRNAs contribute to neuropathic pain via neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain-related mediators, protein kinases, and structural proteins, excitatory-inhibitory imbalances in neurotransmission, or exosome miRNA-mediated neuron-glia communication. In addition, lncRNAs and circRNAs are essential in neuropathic pain by acting as antisense RNAs and miRNA sponges, epigenetically regulating pain-related molecule expression, or modulating miRNA processing. Conclusions: Numerous dysregulated ncRNAs have been suggested to participate in neuropathic pain development. However, there is much work to be done before ncRNA-based analgesics can be used clinically, for reasons such as conservation among species, proper delivery, stability, and off-target effects.
INTRODUCTION: According to the International Association for the Study of Pain, neuropathic pain is the most severe chronic pain condition, triggered by a lesion or disease of the somatosensory system. It is characterized by hyperalgesia, allodynia, or spontaneous pain. 1 Neuropathic pain can have peripheral and central origins, with the former including neuropathic pain after peripheral nerve injury, trigeminal neuralgia, postherpetic neuralgia, painful radiculopathy, and painful polyneuropathy. In contrast, central neuropathic pain includes neuropathic pain after spinal cord or brain injury, multiple sclerosis, and chronic central post-stroke pain. Approximately 7%–10% of the general population will experience neuropathic pain, and the majority do not obtain satisfactory pain relief from current therapies, leading to great suffering for individuals and enormous economic and social burdens. 2 The exact molecular mechanisms underlying neuropathic pain remain unclear, and elucidating them is crucial for developing mechanism-based treatment strategies. One proposed mechanism involves altered gene or protein expression along the pain processing pathways. Therefore, understanding how genes or proteins are dysregulated may help us find ways to normalize these abnormalities and treat neuropathic pain. In recent years, accumulating evidence has suggested an essential role of non-coding RNAs (ncRNAs) in various physiological and pathological processes, such as embryonic development, inflammation, tumors, and respiratory and cardiovascular diseases. 3 ncRNAs have no protein-coding potential, but they can govern gene or protein expression through diverse mechanisms. ncRNAs are extensively distributed in the peripheral and central nervous systems, including pain-related structures. 4 Broadly abnormal ncRNA expression is observed following peripheral stimulation, and these abnormalities are related to hyperalgesia during chronic pain development. The available data indicate that ncRNAs may be essential for hyperalgesia. In this review, we focus on microRNA (miRNA), long non-coding RNA (lncRNA), and circular non-coding RNA (circRNA) expression changes in neuropathic pain. Other types of ncRNAs are seldom reported in neuropathic pain and, thus, are not discussed herein. Notably, we pay attention to their etiological role in the development of neuropathic pain and the current challenges and considerations for miRNA-, lncRNA-, and circRNA-based therapeutics for neuropathic pain. CONCLUSION: Recent human and animal studies have identified accumulating dysregulated miRNAs and lncRNAs in the serum or along pain processing pathways following peripheral nerve injury or noxious stimulation. Experimental studies have validated their essential role in neuropathic pain. miRNAs contribute to neuropathic pain development via neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain-related mediators, protein kinases, and structural proteins, excitatory–inhibitory imbalances in neurotransmission, and exosome miRNA-mediated neuron–glia communication. Meanwhile, lncRNAs and circRNAs are crucial in neuropathic pain development by acting as antisense RNAs and miRNA sponges, epigenetically regulating pain-related molecule expression, or modulating miRNA processing.
However, more work is required before miRNA- and lncRNA-based analgesics can be used clinically; the reasons include conservation among species, proper delivery, stability, off-target effects, and potential activation of the immune system.
Background: Non-coding RNAs (ncRNAs) are involved in neuropathic pain development. Herein, we systematically searched for neuropathic pain-related changes in ncRNA expression, including microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and circular non-coding RNAs (circRNAs). Methods: We searched two databases, PubMed and GeenMedical, for relevant studies. Results: Peripheral nerve injury or noxious stimuli can induce extensive changes in the expression of ncRNAs. For example, higher serum levels of miR-132-3p, -146b-5p, and -384 were observed in neuropathic pain patients. Sciatic nerve ligation, dorsal root ganglion (DRG) transection, or ventral root transection (VRT) could upregulate miR-21 and miR-31 while downregulating miR-668 and miR-672 in the injured DRG. lncRNAs, such as early growth response 2-antisense-RNA (Egr2-AS-RNA) and Kcna2-AS-RNA, were upregulated in Schwann cells and the injured DRG after nerve injury, respectively. Dysregulated circRNA homeodomain-interacting protein kinase 3 (circHIPK3) in serum and the DRG, as well as abnormally expressed lncRNAs X-inactive specific transcript (XIST), nuclear enriched abundant transcript 1 (NEAT1), and small nucleolar RNA host gene 1 (SNHG1), and the circRNAs ciRS-7, zinc finger protein 609 (cirZNF609), circ_0005075, and circAnks1a in the spinal cord, have been suggested to participate in neuropathic pain development. Dysregulated miRNAs contribute to neuropathic pain via neuroinflammation, autophagy, abnormal ion channel expression, regulation of pain-related mediators, protein kinases, and structural proteins, excitatory-inhibitory imbalances in neurotransmission, or exosome miRNA-mediated neuron-glia communication. In addition, lncRNAs and circRNAs are essential in neuropathic pain by acting as antisense RNAs and miRNA sponges, epigenetically regulating pain-related molecule expression, or modulating miRNA processing. Conclusions: Numerous dysregulated ncRNAs have been suggested to participate in neuropathic pain development. However, there is much work to be done before ncRNA-based analgesics can be used clinically, for reasons such as conservation among species, proper delivery, stability, and off-target effects.
18,773
398
[ 403, 401, 2370, 826, 353, 5917, 1000, 130, 510, 682, 170, 254, 3089, 253, 338, 465, 362, 868, 37 ]
21
[ "mir", "pain", "neuropathic", "neuropathic pain", "expression", "nerve", "spinal", "drg", "injury", "mirna" ]
[ "pain regulation protein", "related genes pain", "genes pain processing", "neuropathic pain pathogenesis", "neuropathic pain mechanisms" ]
null
null
[CONTENT] circRNAs | lncRNAs | mechanism study | miRNAs | neuropathic pain [SUMMARY]
null
null
[CONTENT] circRNAs | lncRNAs | mechanism study | miRNAs | neuropathic pain [SUMMARY]
[CONTENT] circRNAs | lncRNAs | mechanism study | miRNAs | neuropathic pain [SUMMARY]
[CONTENT] circRNAs | lncRNAs | mechanism study | miRNAs | neuropathic pain [SUMMARY]
[CONTENT] Ganglia, Spinal | Humans | MicroRNAs | Neuralgia | RNA, Circular | RNA, Long Noncoding [SUMMARY]
null
null
[CONTENT] Ganglia, Spinal | Humans | MicroRNAs | Neuralgia | RNA, Circular | RNA, Long Noncoding [SUMMARY]
[CONTENT] Ganglia, Spinal | Humans | MicroRNAs | Neuralgia | RNA, Circular | RNA, Long Noncoding [SUMMARY]
[CONTENT] Ganglia, Spinal | Humans | MicroRNAs | Neuralgia | RNA, Circular | RNA, Long Noncoding [SUMMARY]
[CONTENT] pain regulation protein | related genes pain | genes pain processing | neuropathic pain pathogenesis | neuropathic pain mechanisms [SUMMARY]
null
null
[CONTENT] pain regulation protein | related genes pain | genes pain processing | neuropathic pain pathogenesis | neuropathic pain mechanisms [SUMMARY]
[CONTENT] pain regulation protein | related genes pain | genes pain processing | neuropathic pain pathogenesis | neuropathic pain mechanisms [SUMMARY]
[CONTENT] pain regulation protein | related genes pain | genes pain processing | neuropathic pain pathogenesis | neuropathic pain mechanisms [SUMMARY]
[CONTENT] mir | pain | neuropathic | neuropathic pain | expression | nerve | spinal | drg | injury | mirna [SUMMARY]
null
null
[CONTENT] mir | pain | neuropathic | neuropathic pain | expression | nerve | spinal | drg | injury | mirna [SUMMARY]
[CONTENT] mir | pain | neuropathic | neuropathic pain | expression | nerve | spinal | drg | injury | mirna [SUMMARY]
[CONTENT] mir | pain | neuropathic | neuropathic pain | expression | nerve | spinal | drg | injury | mirna [SUMMARY]
[CONTENT] pain | ncrnas | neuropathic | neuropathic pain | central | coding | non coding | non | peripheral central | pain peripheral [SUMMARY]
null
null
[CONTENT] pain | regulating pain related | regulating pain | mirna | regulating | studies | pain related | lncrnas | role neuropathic pain mirnas | role neuropathic pain [SUMMARY]
[CONTENT] mir | pain | neuropathic | neuropathic pain | expression | nerve | spinal | drg | mirna | injury [SUMMARY]
[CONTENT] mir | pain | neuropathic | neuropathic pain | expression | nerve | spinal | drg | mirna | injury [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
null
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| two | PubMed | GeenMedical ||| ||| ncRNAs ||| ||| VRT | miR-668 ||| 2 | Kcna2-AS-RNA | Schwann ||| 3 | circHIPK3 | DRG | XIST | 1 | NEAT1 | RNA | 1 | SNHG1 | 609 ||| ||| RNA ||| ||| [SUMMARY]
[CONTENT] ||| ||| two | PubMed | GeenMedical ||| ||| ncRNAs ||| ||| VRT | miR-668 ||| 2 | Kcna2-AS-RNA | Schwann ||| 3 | circHIPK3 | DRG | XIST | 1 | NEAT1 | RNA | 1 | SNHG1 | 609 ||| ||| RNA ||| ||| [SUMMARY]
Lectin histochemistry of the olfactory mucosa of Korean native cattle
36448434
The olfactory mucosa (OM) is crucial for odorant perception in the main olfactory system. The terminal carbohydrates of glycoconjugates influence chemoreception in the olfactory epithelium (OE).
BACKGROUND
The OM of neonate and adult Korean native cattle was evaluated using histological, immunohistochemical, and lectin histochemical methods.
METHODS
Histologically, the OM in both neonates and adults consists of the olfactory epithelium and the lamina propria. Additionally, using periodic acid Schiff and Alcian blue (pH 2.5) staining, the mucus specificity of the Bowman's gland ducts and acini in the lamina propria was determined. Immunohistochemistry demonstrated that mature and immature olfactory sensory neurons of the OE express olfactory marker protein and growth associated protein-43, respectively. Lectin histochemistry indicated that numerous glycoconjugates, including N-acetylglucosamine, mannose, galactose, N-acetylgalactosamine, complex type N-glycan, and fucose groups, were expressed at varied levels in the different cell types of neonatal and adult OMs. According to our observations, the cattle possessed a well-developed olfactory system, and the expression patterns of glycoconjugates in neonatal and adult OMs varied considerably.
RESULTS
This is the first study to describe the morphological assessment of the OM of Korean native cattle with a focus on lectin histochemistry. The findings suggest that glycoconjugates may play a role in olfactory chemoreception, and that their labeling properties may be closely related to OM development and maturity.
CONCLUSIONS
[ "Cattle", "Animals", "Lectins", "Galactose", "Olfactory Mucosa", "Republic of Korea" ]
9715387
INTRODUCTION
Olfaction is important for animals because it allows them to explore their surroundings for odorous signals from food sources and environments, as well as detect chemical compounds that influence social interaction and reproductive behavior [1]. This perception is mediated by two chemosensory systems: the main olfactory system and the vomeronasal system. The olfactory mucosa (OM), positioned at the caudal/posterior roof of the nasal cavity, and the vomeronasal organ (VNO), located at the base of the nasal septum or on the roof of the mouth, are the organs comprising these two systems [23]. Traditionally, the OM and VNO have been considered functionally and anatomically distinct, with the OM detecting conventional volatile odorants and the VNO receiving pheromones [45]. These two organs share some common features but differ in neuron types, primary structures of receptor proteins, physiological pathways, and central neuroanatomical projections into the brain [6]. However, despite the anatomical and functional differences, previous studies indicate that the OM and VNO play a synergistic role in the regulation of various olfactory-induced behaviors and reproductive and social interactions in mammals [1,5,7,8,9]. The OM is critical for chemical signal acquisition in the main olfactory system, conveying signals to the main olfactory bulb [4]. In mammals, the OM is composed of the olfactory epithelium (OE) and the lamina propria. The OE is predominantly composed of chemosensory neurons, supporting cells, and basal cells [10]. The lamina propria consists of loose connective tissue containing olfactory nerve axon bundles, Bowman's glands, and blood and lymph vessels [10,11]. In mammals, odorant reception occurs via the chemosensory neurons of the OE, which possess dendrites that extend beyond the apical surface, where the cilia protrude into the mucus layer, and a basally projecting axon process [12]. Glycoconjugates (terminal carbohydrates) have been shown to play a crucial role in the chemoreception of the OE [13]. Cell surfaces are densely packed with a diverse array of glycoconjugates, each of which provides considerable biological information [14]. Lectins are naturally occurring glycoconjugate-binding molecules, and a large number of purified lectins are regarded as the primary analytical tools for studying glycoconjugates in the olfactory system [13,15,16]. Specific lectin binding is closely related to olfactory neuron function, as evidenced by the abundance of glycoconjugates in the mucosensory compartment of the chemosensory epithelia, where receptor-specific events associated with transduction occur [13]. Numerous animals, including rats [17,18], mice [19,20], marmosets [21], sheep [15,22], camels [22], horses [23], and humans [24], express carbohydrate (lectin-binding) moieties in their OM. Although cattle are one of the best species for biochemical investigations of olfaction [25], there is little published information on the immunohistochemical and lectin histochemical properties of the OM in bovines. We first evaluated the histological characteristics and lectin histochemistry of the OM in Korean native cattle, and then compared neonatal with adult OMs to determine the features associated with maturity. To our knowledge, this is the first study to provide a comprehensive overview of the expression of lectin-binding glycoconjugates in the bovine OM.
null
null
RESULTS
Histological findings of the OM: In neonatal (Fig. 1A) and adult cattle (Fig. 1B), the OM was composed of the OE and the lamina propria. The OE is predominantly constituted of basal cells, receptor cells, and supporting cells. The nuclei of the basal, receptor, and supporting cells were located in the basal, middle, and apical regions, respectively, of the OE (Fig. 1A and B). The basal cells were round, pallid, and had little nuclei. The nuclei of the supporting cells were elongated and spindle-shaped, but the nuclei of receptor cells were more rounded. While neonatal and adult OMs displayed comparable histological features, adult OMs appeared thicker than neonatal OMs. The lamina propria included Bowman's glands and olfactory axon bundles. Bowman's glands were composed of clustered acinar cells that were connected through ducts to the apical surface (Fig. 1A and B). The Bowman's glandular ducts and/or acini stained positively for PAS (Fig. 1C and E) and Alcian blue stains (pH 2.5; Fig. 1D and F) in neonates (Fig. 1C and D) and adults (Fig. 1E and F); however, the reactivity of duct cells was relatively lower than that of gland cells. Additionally, the histological features of the OM did not differ between males and females at both ages.
Immunohistochemical analysis of OMP in the OM: In neonatal (Fig. 2A) and adult (Fig. 2B) cattle, OMP, a marker for mature olfactory sensory neurons, immunoreactivity was identified in the majority of the mature receptor cells in the middle region of the OE, with dendrites reaching to the OE free border, but not in basal and supporting cells. Their reactivity was more prominent in adults. GAP-43, a marker of immature olfactory sensory neurons, was found in certain receptor cells in the basal region of the OE, and some dendrites were immunopositive for GAP-43 (Fig. 2C and D). In addition, OMP and GAP-43 immunoreactivities were observed in the nerve bundles of the lamina propria (Fig. 2A and C). OMP, olfactory marker protein; GAP-43, growth associated protein-43; OM, olfactory mucosa; OE, olfactory epithelium; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.
Lectin histochemistry in the OM: The intensity values of 21 lectin-binding sites in the OE and lamina propria of the OM of Korean native cattle are summarized in Tables 2 and 3, respectively. −, negative staining; +, faint staining; ++, moderate staining; +++, intense staining. aApical perinuclear labeling.
N-acetylglucosamine-binding lectins: All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman's gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman's gland acini (Table 3). OM, olfactory mucosa.
Mannose-binding lectins: ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman's gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman's gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).
Galactose-binding lectins: In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman's gland duct and acini in the lamina propria (Table 3).
N-acetylgalactosamine-binding lectins: All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman's gland duct and acini but were relatively lower in adults (Table 3). OM, olfactory mucosa.
Complex type N-glycan-binding lectins: In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman's gland duct and acini were positive for both lectins (Fig. 4G and H).
Fucose-binding lectin: In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman's gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).
null
null
[ "Tissue preparation", "Histological examination", "Mucus histochemistry", "Immunohistochemistry", "Lectin histochemistry", "Histological findings of the OM", "Immunohistochemical analysis of OMP in the OM", "Lectin histochemistry in the OM", "N-acetylglucosamine-binding lectins", "Mannose-binding lectins", "Galactose-binding lectins", "N-acetylgalactosamine-binding lectins", "Complex type N-glycan-binding lectins", "Fucose-binding lectin" ]
[ "Three neonatal OM samples (1 to 3 day old) and three adult OM samples (2.5-year-old) of Korean native cattle (Hanwoo, Bos taurus coreanae) were obtained from local farms and a local slaughterhouse in Chungcheongbuk-do, South Korea, respectively. Both sexes of all animals were utilized in this study (male: four neonates and five adults; female: three neonates and three adults). The OM used in this study belongs to a small area on the caudal roof of the nasal cavity, where it completely covered the ethmoturbinates and caudal portions of the dorsal and middle nasal conchae (Fig. 1A). Morphological exams of the nasal cavity indicated that none of them had underlying respiratory diseases. All experimental and animal handling procedures were conducted in accordance with the guidelines of Institutional Animal Care and Use Committee of Chonnam National University (26 October 2021; CNU IACUC-YB-2021-131).\nOM, olfactory mucosa; BD, Bowman’s glandular duct; BG, Bowman’s gland; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.", "For light microscopic examination, the ethmoturbinates were removed immediately after death, cut into 5-cm-thick cross sections, and fixed for 3–5 days in 10% buffered formalin. Following fixation, the ethmoturbinates were trimmed and decalcified through multiple solution changes of sodium citrate-formic acid solution. When a needle could readily penetrate the bone with very little force, the decalcification process was stopped. The samples were then washed for 24 h with running tap water, dehydrated in a succession of ethanol concentrations (70%, 80%, 90%, 95%, and 100%), cleaned in xylene, embedded in paraffin, and sectioned into 4 μm slices. Following deparaffinization, the sections were utilized for hematoxylin and eosin staining, mucus and lectin histochemistry, and immunohistochemistry.", "The acidic mucin was distinguished from the neutral epithelial mucin using periodic acid Schiff (PAS) and Alcian blue (pH 2.5) staining. For PAS staining, the sections were subjected to 0.5% periodic acid for 15 min, then washed and incubated in Schiff reagent for 30 min. After 10 min of washing with running tap water, the sections were counterstained with hematoxylin. For Alcian blue staining, the sections were first incubated in 3% acetic acid for 3 min, followed by 30 min in 1% Alcian blue solution in 3% acetic acid (pH 2.5). After 1 min of washing with running tap water, the sections were counterstained with neutral red.", "To retrieve the antigens, the sections were heated for 1 h at 90°C in citrate buffer (0.01 M, pH 6.0). After cooling, the sections were treated for 20 min with 0.3% hydrogen peroxide in distilled water to suppress the endogenous peroxidase activity. To avoid non-specific binding, the sections were treated with normal goat serum (Vectastain Elite ABC kit; Vector Laboratories, USA) for 1 h before being incubated overnight at 4°C with rabbit monoclonal anti-olfactory marker protein (OMP) (1:1,000 dilution, Cat. No. ab183947; Abcam, UK) and rabbit polyclonal anti-growth associated protein-43 (GAP-43) (1:1,000 dilution, Cat. No. PA5-34943; ThermoFisher Scientific, USA) antibodies. The primary antibodies were excluded from the procedure as negative controls. 
The sections were rinsed with phosphate-buffered saline (PBS), treated with biotinylated goat anti-rabbit IgG (Vectastain Elite ABC kit) for 1 h, and then reacted with the avidin-biotin-peroxidase complex (Vectastain Elite ABC kit) for 1 h at room temperature (RT). Immunoreactivity was detected using a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), followed by counterstaining with hematoxylin.", "Biotinylated lectin kits I, II, and III (Cat. No. BK-1000, BK-2000, and BK-3000) were acquired from Vector Laboratories. The acronyms, sources, and specificities of the used lectins are shown in Table 1. On the basis of their binding specificity and inhibitory sugars, lectins were categorized as N-acetylglucosamine, mannose, galactose, N-acetylgalactosamine, complex type N-glycan, and fucose-binding lectins. For competitive inhibition, the following sugars were purchased from Sigma-Aldrich (USA) and Vector Laboratories (Table 1): N-acetyl-D-glucosamine (β-D–GlcNAc; Sigma-Aldrich), Chitin Hydrolysate (Vector Laboratories), α-methyl mannoside/α-methyl glucoside (Sigma-Aldrich), lactose (Galβ1, 4Glc; Sigma-Aldrich), N-acetyl-D-galactosamine (α-D-GalNAc; Sigma-Aldrich), melibiose (Galα1, 6Glc; Sigma-Aldrich), and β-D-galactose (Sigma-Aldrich).\nFuc, fucose; Gal, galactose; GalNAc, N-acetylgalactosamine; Glc, glucose; GlcNAc, N-acetylglucosamine; Man, mannose.\naThe acronyms and specificities of the 21 lectins were obtained from the data sheep (Vector laboratory) and grouped, as shown in a previous paper [23].\nTo eliminate the endogenous peroxidase activity, the sections were treated with 0.3% hydrogen peroxide in methanol. To prevent non-specific reactions, the sections were rinsed with PBS and then incubated with 1% bovine serum albumin in PBS. The sections were then treated overnight at 4°C with biotinylated lectins and reacted for 45 min at RT with an avidin-biotin-peroxidase complex (Vectastain Elite ABC kit). The sections were rinsed with PBS, treated with a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), and counterstained with hematoxylin. Negative controls for lectin histochemistry were generated by removing the biotinylated lectins and preincubating the lectins with appropriate inhibitors for 1 h at RT.", "In neonatal (Fig. 1A) and adult cattle (Fig. 1B), the OM was composed of the OE and the lamina propria. The OE is predominantly constituted of basal cells, receptor cells, and supporting cells. The nuclei of the basal, receptor, and supporting cells were located in the basal, middle, and apical regions, respectively, of the OE (Fig. 1A and B). The basal cells were round, pallid, and had little nuclei. The nuclei of the supporting cells were elongated and spindle-shaped, but the nuclei of receptor cells were more rounded. While neonatal and adult OMs displayed comparable histological features, adult OMs appeared thicker than neonatal OMs.\nThe lamina propria included Bowman’s glands and olfactory axon bundles. Bowman’s glands were composed of clustered acinar cells that were connected through ducts to the apical surface (Fig. 1A and B). The Bowman’s glandular ducts and/or acini stained positively for PAS (Fig. 1C and E) and Alcian blue stains (pH 2.5; Fig. 1D and F) in neonates (Fig. 1C and D) and adults (Fig. 1E and F); however, the reactivity of duct cells was relatively lower than that of gland cells. 
Additionally, the histological features of the OM did not differ between males and females at both ages.", "In neonatal (Fig. 2A) and adult (Fig. 2B) cattle, OMP, a marker for mature olfactory sensory neurons, immunoreactivity was identified in the majority of the mature receptor cells in the middle region of the OE, with dendrites reaching to the OE free border, but not in basal and supporting cells. Their reactivity was more prominent in adults. GAP-43, a marker of immature olfactory sensory neurons, was found in certain receptor cells in the basal region of the OE, and some dendrites were immunopositive for GAP-43 (Fig. 2C and D). In addition, OMP and GAP-43 immunoreactivities were observed in the nerve bundles of the lamina propria (Fig. 2A and C).\nOMP, olfactory marker protein; GAP-43, growth associated protein-43; OM, olfactory mucosa; OE, olfactory epithelium; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.", "The intensity values of 21 lectin-binding sites in the OE and lamina propria of the OM of Korean native cattle are summarized in Tables 2 and 3, respectively.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\naApical perinuclear labeling.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\nN-acetylglucosamine-binding lectins All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nAll lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nMannose-binding lectins ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). 
ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nGalactose-binding lectins In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nIn the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nN-acetylgalactosamine-binding lectins All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nAll the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). 
Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nComplex type N-glycan-binding lectins In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nIn the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nFucose-binding lectin In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).\nIn the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).", "All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.", "ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).", "In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). 
The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).", "All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.", "In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).", "In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3)." ]
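The lectin results above are reported on an ordinal scale (negative, faint, moderate, intense) and compared between neonatal and adult tissue compartments. As a purely illustrative sketch of how such semi-quantitative scores could be encoded for side-by-side comparison (the example entries below are placeholders consistent with the prose, not values copied from Tables 2 and 3, and the helper name compare_ages is hypothetical), a minimal Python version might look like this:

# Ordinal encoding of the semi-quantitative staining scale used in Tables 2 and 3.
INTENSITY = {"-": 0, "+": 1, "++": 2, "+++": 3}

# Placeholder scores for two lectin/site pairs mentioned in the text;
# real values would be transcribed from Tables 2 and 3.
scores = {
    ("PNA", "free border"): {"neonate": "+++", "adult": "++"},
    ("UEA-I", "supporting cells"): {"neonate": "-", "adult": "-"},
}

def compare_ages(lectin, site):
    """Report which age group shows the stronger labeling for a lectin/site pair."""
    entry = scores[(lectin, site)]
    n, a = INTENSITY[entry["neonate"]], INTENSITY[entry["adult"]]
    if n == a:
        return "similar"
    return "neonate > adult" if n > a else "adult > neonate"

print(compare_ages("PNA", "free border"))  # neonate > adult, matching the PNA description above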
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Tissue preparation", "Histological examination", "Mucus histochemistry", "Immunohistochemistry", "Lectin histochemistry", "RESULTS", "Histological findings of the OM", "Immunohistochemical analysis of OMP in the OM", "Lectin histochemistry in the OM", "N-acetylglucosamine-binding lectins", "Mannose-binding lectins", "Galactose-binding lectins", "N-acetylgalactosamine-binding lectins", "Complex type N-glycan-binding lectins", "Fucose-binding lectin", "DISCUSSION" ]
[ "Olfaction is important for animals because it allows them to explore their surroundings for odorous signals from food sources and environments, as well as detect chemical compounds that influence social interaction and reproductive behavior [1]. This perception is mediated by two chemosensory systems: the main olfactory system and the vomeronasal system. The olfactory mucosa (OM), positioned at the caudal/posterior roof of the nasal cavity, and the vomeronasal organ (VNO), located at the base of the nasal septum or on the roof of the mouth, are the organs comprising these two systems [23]. Traditionally, the OM and VNO have been considered functionally and anatomically distinct, with the OM detecting conventional volatile odorants and the VNO receiving pheromones [45]. These two organs share some common features but differ in neuron types, primary structures of receptor proteins, physiological pathways, and central neuroanatomical projections into the brain [6]. However, despite the anatomical and functional differences, previous studies indicate that the OM and VNO play a synergistic role in the regulation of various olfactory-induced behaviors and reproductive and social interactions in mammals [15789].\nThe OM is critical for chemical signal acquisition in the main olfactory system, conveying signals to the main olfactory bulb [4]. In mammals, the OM is composed of the olfactory epithelium (OE) and the lamina propria. The OE is predominantly composed of chemosensory neurons, supporting cells, and basal cells [10]. The lamina propria is constituted by loose connective tissue in which olfactory nerve axon bundles, Bowman's glands, and blood and lymph vessels [1011]. In mammals, odorant reception occurs via the chemosensory neurons of the OE, which contain dendrites that extend beyond the apical surface, where the cilia protrude into the mucus layer, and a basally projecting axon process [12]. Glycoconjugates (terminal carbohydrates) have been shown to play a crucial role in the chemoreception of the OE [13].\nCell surfaces are densely packed with a diverse array of glycoconjugates, each of which provides considerable biological information [14]. Lectins are naturally occurring glycoconjugate-binding molecules, and a large number of purified lectins are regarded as the primary analytical tool for studying glycoconjugates in the olfactory system [131516]. Specific lectin binding is closely related to the olfactory neuron function, as evidenced by the abundance of glycoconjugates in the mucosensory compartment of the chemosensory epithelia, where receptor-specific events associated with transduction occur [13]. Numerous animals, including rats [1718], mice [1920], marmosets [21], sheep [1522], camels [22], horses [23], and humans [24], express the carbohydrate (lectin-binding) moiety in their OM.\nDespite the fact that cattle are one of the best species for biochemical investigations on olfaction [25], there is little published information on the immunohistochemical and lectin histochemical properties of the OM in bovines. We first evaluated the histological characteristics and lectin histochemistry of the OM in Korean native cattle, and then we compared neonate to adult OMs to determine the features associated with maturity. 
To our knowledge, this is the first study that provides a comprehensive outlook on the expression levels of lectin-binding glycoconjugates in bovine OM.", "Tissue preparation Three neonatal OM samples (1 to 3 day old) and three adult OM samples (2.5-year-old) of Korean native cattle (Hanwoo, Bos taurus coreanae) were obtained from local farms and a local slaughterhouse in Chungcheongbuk-do, South Korea, respectively. Both sexes of all animals were utilized in this study (male: four neonates and five adults; female: three neonates and three adults). The OM used in this study belongs to a small area on the caudal roof of the nasal cavity, where it completely covered the ethmoturbinates and caudal portions of the dorsal and middle nasal conchae (Fig. 1A). Morphological exams of the nasal cavity indicated that none of them had underlying respiratory diseases. All experimental and animal handling procedures were conducted in accordance with the guidelines of Institutional Animal Care and Use Committee of Chonnam National University (26 October 2021; CNU IACUC-YB-2021-131).\nOM, olfactory mucosa; BD, Bowman’s glandular duct; BG, Bowman’s gland; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.\nThree neonatal OM samples (1 to 3 day old) and three adult OM samples (2.5-year-old) of Korean native cattle (Hanwoo, Bos taurus coreanae) were obtained from local farms and a local slaughterhouse in Chungcheongbuk-do, South Korea, respectively. Both sexes of all animals were utilized in this study (male: four neonates and five adults; female: three neonates and three adults). The OM used in this study belongs to a small area on the caudal roof of the nasal cavity, where it completely covered the ethmoturbinates and caudal portions of the dorsal and middle nasal conchae (Fig. 1A). Morphological exams of the nasal cavity indicated that none of them had underlying respiratory diseases. All experimental and animal handling procedures were conducted in accordance with the guidelines of Institutional Animal Care and Use Committee of Chonnam National University (26 October 2021; CNU IACUC-YB-2021-131).\nOM, olfactory mucosa; BD, Bowman’s glandular duct; BG, Bowman’s gland; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.\nHistological examination For light microscopic examination, the ethmoturbinates were removed immediately after death, cut into 5-cm-thick cross sections, and fixed for 3–5 days in 10% buffered formalin. Following fixation, the ethmoturbinates were trimmed and decalcified through multiple solution changes of sodium citrate-formic acid solution. When a needle could readily penetrate the bone with very little force, the decalcification process was stopped. The samples were then washed for 24 h with running tap water, dehydrated in a succession of ethanol concentrations (70%, 80%, 90%, 95%, and 100%), cleaned in xylene, embedded in paraffin, and sectioned into 4 μm slices. Following deparaffinization, the sections were utilized for hematoxylin and eosin staining, mucus and lectin histochemistry, and immunohistochemistry.\nFor light microscopic examination, the ethmoturbinates were removed immediately after death, cut into 5-cm-thick cross sections, and fixed for 3–5 days in 10% buffered formalin. Following fixation, the ethmoturbinates were trimmed and decalcified through multiple solution changes of sodium citrate-formic acid solution. 
When a needle could readily penetrate the bone with very little force, the decalcification process was stopped. The samples were then washed for 24 h with running tap water, dehydrated in a succession of ethanol concentrations (70%, 80%, 90%, 95%, and 100%), cleaned in xylene, embedded in paraffin, and sectioned into 4 μm slices. Following deparaffinization, the sections were utilized for hematoxylin and eosin staining, mucus and lectin histochemistry, and immunohistochemistry.\nMucus histochemistry The acidic mucin was distinguished from the neutral epithelial mucin using periodic acid Schiff (PAS) and Alcian blue (pH 2.5) staining. For PAS staining, the sections were subjected to 0.5% periodic acid for 15 min, then washed and incubated in Schiff reagent for 30 min. After 10 min of washing with running tap water, the sections were counterstained with hematoxylin. For Alcian blue staining, the sections were first incubated in 3% acetic acid for 3 min, followed by 30 min in 1% Alcian blue solution in 3% acetic acid (pH 2.5). After 1 min of washing with running tap water, the sections were counterstained with neutral red.\nThe acidic mucin was distinguished from the neutral epithelial mucin using periodic acid Schiff (PAS) and Alcian blue (pH 2.5) staining. For PAS staining, the sections were subjected to 0.5% periodic acid for 15 min, then washed and incubated in Schiff reagent for 30 min. After 10 min of washing with running tap water, the sections were counterstained with hematoxylin. For Alcian blue staining, the sections were first incubated in 3% acetic acid for 3 min, followed by 30 min in 1% Alcian blue solution in 3% acetic acid (pH 2.5). After 1 min of washing with running tap water, the sections were counterstained with neutral red.\nImmunohistochemistry To retrieve the antigens, the sections were heated for 1 h at 90°C in citrate buffer (0.01 M, pH 6.0). After cooling, the sections were treated for 20 min with 0.3% hydrogen peroxide in distilled water to suppress the endogenous peroxidase activity. To avoid non-specific binding, the sections were treated with normal goat serum (Vectastain Elite ABC kit; Vector Laboratories, USA) for 1 h before being incubated overnight at 4°C with rabbit monoclonal anti-olfactory marker protein (OMP) (1:1,000 dilution, Cat. No. ab183947; Abcam, UK) and rabbit polyclonal anti-growth associated protein-43 (GAP-43) (1:1,000 dilution, Cat. No. PA5-34943; ThermoFisher Scientific, USA) antibodies. The primary antibodies were excluded from the procedure as negative controls. The sections were rinsed with phosphate-buffered saline (PBS), treated with biotinylated goat anti-rabbit IgG (Vectastain Elite ABC kit) for 1 h, and then reacted with the avidin-biotin-peroxidase complex (Vectastain Elite ABC kit) for 1 h at room temperature (RT). Immunoreactivity was detected using a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), followed by counterstaining with hematoxylin.\nTo retrieve the antigens, the sections were heated for 1 h at 90°C in citrate buffer (0.01 M, pH 6.0). After cooling, the sections were treated for 20 min with 0.3% hydrogen peroxide in distilled water to suppress the endogenous peroxidase activity. To avoid non-specific binding, the sections were treated with normal goat serum (Vectastain Elite ABC kit; Vector Laboratories, USA) for 1 h before being incubated overnight at 4°C with rabbit monoclonal anti-olfactory marker protein (OMP) (1:1,000 dilution, Cat. No. 
ab183947; Abcam, UK) and rabbit polyclonal anti-growth associated protein-43 (GAP-43) (1:1,000 dilution, Cat. No. PA5-34943; ThermoFisher Scientific, USA) antibodies. The primary antibodies were excluded from the procedure as negative controls. The sections were rinsed with phosphate-buffered saline (PBS), treated with biotinylated goat anti-rabbit IgG (Vectastain Elite ABC kit) for 1 h, and then reacted with the avidin-biotin-peroxidase complex (Vectastain Elite ABC kit) for 1 h at room temperature (RT). Immunoreactivity was detected using a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), followed by counterstaining with hematoxylin.\nLectin histochemistry Biotinylated lectin kits I, II, and III (Cat. No. BK-1000, BK-2000, and BK-3000) were acquired from Vector Laboratories. The acronyms, sources, and specificities of the used lectins are shown in Table 1. On the basis of their binding specificity and inhibitory sugars, lectins were categorized as N-acetylglucosamine, mannose, galactose, N-acetylgalactosamine, complex type N-glycan, and fucose-binding lectins. For competitive inhibition, the following sugars were purchased from Sigma-Aldrich (USA) and Vector Laboratories (Table 1): N-acetyl-D-glucosamine (β-D–GlcNAc; Sigma-Aldrich), Chitin Hydrolysate (Vector Laboratories), α-methyl mannoside/α-methyl glucoside (Sigma-Aldrich), lactose (Galβ1, 4Glc; Sigma-Aldrich), N-acetyl-D-galactosamine (α-D-GalNAc; Sigma-Aldrich), melibiose (Galα1, 6Glc; Sigma-Aldrich), and β-D-galactose (Sigma-Aldrich).\nFuc, fucose; Gal, galactose; GalNAc, N-acetylgalactosamine; Glc, glucose; GlcNAc, N-acetylglucosamine; Man, mannose.\naThe acronyms and specificities of the 21 lectins were obtained from the data sheep (Vector laboratory) and grouped, as shown in a previous paper [23].\nTo eliminate the endogenous peroxidase activity, the sections were treated with 0.3% hydrogen peroxide in methanol. To prevent non-specific reactions, the sections were rinsed with PBS and then incubated with 1% bovine serum albumin in PBS. The sections were then treated overnight at 4°C with biotinylated lectins and reacted for 45 min at RT with an avidin-biotin-peroxidase complex (Vectastain Elite ABC kit). The sections were rinsed with PBS, treated with a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), and counterstained with hematoxylin. Negative controls for lectin histochemistry were generated by removing the biotinylated lectins and preincubating the lectins with appropriate inhibitors for 1 h at RT.\nBiotinylated lectin kits I, II, and III (Cat. No. BK-1000, BK-2000, and BK-3000) were acquired from Vector Laboratories. The acronyms, sources, and specificities of the used lectins are shown in Table 1. On the basis of their binding specificity and inhibitory sugars, lectins were categorized as N-acetylglucosamine, mannose, galactose, N-acetylgalactosamine, complex type N-glycan, and fucose-binding lectins. 
For competitive inhibition, the following sugars were purchased from Sigma-Aldrich (USA) and Vector Laboratories (Table 1): N-acetyl-D-glucosamine (β-D–GlcNAc; Sigma-Aldrich), Chitin Hydrolysate (Vector Laboratories), α-methyl mannoside/α-methyl glucoside (Sigma-Aldrich), lactose (Galβ1, 4Glc; Sigma-Aldrich), N-acetyl-D-galactosamine (α-D-GalNAc; Sigma-Aldrich), melibiose (Galα1, 6Glc; Sigma-Aldrich), and β-D-galactose (Sigma-Aldrich).\nFuc, fucose; Gal, galactose; GalNAc, N-acetylgalactosamine; Glc, glucose; GlcNAc, N-acetylglucosamine; Man, mannose.\naThe acronyms and specificities of the 21 lectins were obtained from the data sheets (Vector Laboratories) and grouped, as shown in a previous paper [23].\nTo eliminate the endogenous peroxidase activity, the sections were treated with 0.3% hydrogen peroxide in methanol. To prevent non-specific reactions, the sections were rinsed with PBS and then incubated with 1% bovine serum albumin in PBS. The sections were then treated overnight at 4°C with biotinylated lectins and reacted for 45 min at RT with an avidin-biotin-peroxidase complex (Vectastain Elite ABC kit). The sections were rinsed with PBS, treated with a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), and counterstained with hematoxylin. Negative controls for lectin histochemistry were generated by removing the biotinylated lectins and preincubating the lectins with appropriate inhibitors for 1 h at RT.", "Three neonatal OM samples (1 to 3 days old) and three adult OM samples (2.5 years old) of Korean native cattle (Hanwoo, Bos taurus coreanae) were obtained from local farms and a local slaughterhouse in Chungcheongbuk-do, South Korea, respectively. Both sexes of all animals were utilized in this study (male: four neonates and five adults; female: three neonates and three adults). The OM used in this study occupies a small area on the caudal roof of the nasal cavity, where it completely covers the ethmoturbinates and caudal portions of the dorsal and middle nasal conchae (Fig. 1A). Morphological examination of the nasal cavity indicated that none of the animals had underlying respiratory diseases. All experimental and animal handling procedures were conducted in accordance with the guidelines of the Institutional Animal Care and Use Committee of Chonnam National University (26 October 2021; CNU IACUC-YB-2021-131).\nOM, olfactory mucosa; BD, Bowman’s glandular duct; BG, Bowman’s gland; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.", "For light microscopic examination, the ethmoturbinates were removed immediately after death, cut into 5-cm-thick cross sections, and fixed for 3–5 days in 10% buffered formalin. Following fixation, the ethmoturbinates were trimmed and decalcified through multiple solution changes of sodium citrate-formic acid solution. When a needle could readily penetrate the bone with very little force, the decalcification process was stopped. The samples were then washed for 24 h with running tap water, dehydrated in a succession of ethanol concentrations (70%, 80%, 90%, 95%, and 100%), cleared in xylene, embedded in paraffin, and sectioned into 4 μm slices. Following deparaffinization, the sections were utilized for hematoxylin and eosin staining, mucus and lectin histochemistry, and immunohistochemistry.", "The acidic mucin was distinguished from the neutral epithelial mucin using periodic acid Schiff (PAS) and Alcian blue (pH 2.5) staining. 
For PAS staining, the sections were subjected to 0.5% periodic acid for 15 min, then washed and incubated in Schiff reagent for 30 min. After 10 min of washing with running tap water, the sections were counterstained with hematoxylin. For Alcian blue staining, the sections were first incubated in 3% acetic acid for 3 min, followed by 30 min in 1% Alcian blue solution in 3% acetic acid (pH 2.5). After 1 min of washing with running tap water, the sections were counterstained with neutral red.", "To retrieve the antigens, the sections were heated for 1 h at 90°C in citrate buffer (0.01 M, pH 6.0). After cooling, the sections were treated for 20 min with 0.3% hydrogen peroxide in distilled water to suppress the endogenous peroxidase activity. To avoid non-specific binding, the sections were treated with normal goat serum (Vectastain Elite ABC kit; Vector Laboratories, USA) for 1 h before being incubated overnight at 4°C with rabbit monoclonal anti-olfactory marker protein (OMP) (1:1,000 dilution, Cat. No. ab183947; Abcam, UK) and rabbit polyclonal anti-growth associated protein-43 (GAP-43) (1:1,000 dilution, Cat. No. PA5-34943; ThermoFisher Scientific, USA) antibodies. The primary antibodies were excluded from the procedure as negative controls. The sections were rinsed with phosphate-buffered saline (PBS), treated with biotinylated goat anti-rabbit IgG (Vectastain Elite ABC kit) for 1 h, and then reacted with the avidin-biotin-peroxidase complex (Vectastain Elite ABC kit) for 1 h at room temperature (RT). Immunoreactivity was detected using a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), followed by counterstaining with hematoxylin.", "Biotinylated lectin kits I, II, and III (Cat. No. BK-1000, BK-2000, and BK-3000) were acquired from Vector Laboratories. The acronyms, sources, and specificities of the used lectins are shown in Table 1. On the basis of their binding specificity and inhibitory sugars, lectins were categorized as N-acetylglucosamine, mannose, galactose, N-acetylgalactosamine, complex type N-glycan, and fucose-binding lectins. For competitive inhibition, the following sugars were purchased from Sigma-Aldrich (USA) and Vector Laboratories (Table 1): N-acetyl-D-glucosamine (β-D–GlcNAc; Sigma-Aldrich), Chitin Hydrolysate (Vector Laboratories), α-methyl mannoside/α-methyl glucoside (Sigma-Aldrich), lactose (Galβ1, 4Glc; Sigma-Aldrich), N-acetyl-D-galactosamine (α-D-GalNAc; Sigma-Aldrich), melibiose (Galα1, 6Glc; Sigma-Aldrich), and β-D-galactose (Sigma-Aldrich).\nFuc, fucose; Gal, galactose; GalNAc, N-acetylgalactosamine; Glc, glucose; GlcNAc, N-acetylglucosamine; Man, mannose.\naThe acronyms and specificities of the 21 lectins were obtained from the data sheep (Vector laboratory) and grouped, as shown in a previous paper [23].\nTo eliminate the endogenous peroxidase activity, the sections were treated with 0.3% hydrogen peroxide in methanol. To prevent non-specific reactions, the sections were rinsed with PBS and then incubated with 1% bovine serum albumin in PBS. The sections were then treated overnight at 4°C with biotinylated lectins and reacted for 45 min at RT with an avidin-biotin-peroxidase complex (Vectastain Elite ABC kit). The sections were rinsed with PBS, treated with a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), and counterstained with hematoxylin. 
Negative controls for lectin histochemistry were generated by removing the biotinylated lectins and preincubating the lectins with appropriate inhibitors for 1 h at RT.", "Histological findings of the OM In neonatal (Fig. 1A) and adult cattle (Fig. 1B), the OM was composed of the OE and the lamina propria. The OE is predominantly constituted of basal cells, receptor cells, and supporting cells. The nuclei of the basal, receptor, and supporting cells were located in the basal, middle, and apical regions, respectively, of the OE (Fig. 1A and B). The basal cells were round, pallid, and had little nuclei. The nuclei of the supporting cells were elongated and spindle-shaped, but the nuclei of receptor cells were more rounded. While neonatal and adult OMs displayed comparable histological features, adult OMs appeared thicker than neonatal OMs.\nThe lamina propria included Bowman’s glands and olfactory axon bundles. Bowman’s glands were composed of clustered acinar cells that were connected through ducts to the apical surface (Fig. 1A and B). The Bowman’s glandular ducts and/or acini stained positively for PAS (Fig. 1C and E) and Alcian blue stains (pH 2.5; Fig. 1D and F) in neonates (Fig. 1C and D) and adults (Fig. 1E and F); however, the reactivity of duct cells was relatively lower than that of gland cells. Additionally, the histological features of the OM did not differ between males and females at both ages.\nIn neonatal (Fig. 1A) and adult cattle (Fig. 1B), the OM was composed of the OE and the lamina propria. The OE is predominantly constituted of basal cells, receptor cells, and supporting cells. The nuclei of the basal, receptor, and supporting cells were located in the basal, middle, and apical regions, respectively, of the OE (Fig. 1A and B). The basal cells were round, pallid, and had little nuclei. The nuclei of the supporting cells were elongated and spindle-shaped, but the nuclei of receptor cells were more rounded. While neonatal and adult OMs displayed comparable histological features, adult OMs appeared thicker than neonatal OMs.\nThe lamina propria included Bowman’s glands and olfactory axon bundles. Bowman’s glands were composed of clustered acinar cells that were connected through ducts to the apical surface (Fig. 1A and B). The Bowman’s glandular ducts and/or acini stained positively for PAS (Fig. 1C and E) and Alcian blue stains (pH 2.5; Fig. 1D and F) in neonates (Fig. 1C and D) and adults (Fig. 1E and F); however, the reactivity of duct cells was relatively lower than that of gland cells. Additionally, the histological features of the OM did not differ between males and females at both ages.\nImmunohistochemical analysis of OMP in the OM In neonatal (Fig. 2A) and adult (Fig. 2B) cattle, OMP, a marker for mature olfactory sensory neurons, immunoreactivity was identified in the majority of the mature receptor cells in the middle region of the OE, with dendrites reaching to the OE free border, but not in basal and supporting cells. Their reactivity was more prominent in adults. GAP-43, a marker of immature olfactory sensory neurons, was found in certain receptor cells in the basal region of the OE, and some dendrites were immunopositive for GAP-43 (Fig. 2C and D). In addition, OMP and GAP-43 immunoreactivities were observed in the nerve bundles of the lamina propria (Fig. 
2A and C).\nOMP, olfactory marker protein; GAP-43, growth associated protein-43; OM, olfactory mucosa; OE, olfactory epithelium; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.\nIn neonatal (Fig. 2A) and adult (Fig. 2B) cattle, OMP, a marker for mature olfactory sensory neurons, immunoreactivity was identified in the majority of the mature receptor cells in the middle region of the OE, with dendrites reaching to the OE free border, but not in basal and supporting cells. Their reactivity was more prominent in adults. GAP-43, a marker of immature olfactory sensory neurons, was found in certain receptor cells in the basal region of the OE, and some dendrites were immunopositive for GAP-43 (Fig. 2C and D). In addition, OMP and GAP-43 immunoreactivities were observed in the nerve bundles of the lamina propria (Fig. 2A and C).\nOMP, olfactory marker protein; GAP-43, growth associated protein-43; OM, olfactory mucosa; OE, olfactory epithelium; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.\nLectin histochemistry in the OM The intensity values of 21 lectin-binding sites in the OE and lamina propria of the OM of Korean native cattle are summarized in Tables 2 and 3, respectively.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\naApical perinuclear labeling.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\nN-acetylglucosamine-binding lectins All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nAll lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nMannose-binding lectins ConA, LCA (Fig. 
3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nGalactose-binding lectins In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nIn the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nN-acetylgalactosamine-binding lectins All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nAll the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). 
In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nComplex type N-glycan-binding lectins In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nIn the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nFucose-binding lectin In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).\nIn the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).\nThe intensity values of 21 lectin-binding sites in the OE and lamina propria of the OM of Korean native cattle are summarized in Tables 2 and 3, respectively.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\naApical perinuclear labeling.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\nN-acetylglucosamine-binding lectins All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nAll lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). 
Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nMannose-binding lectins ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nGalactose-binding lectins In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nIn the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nN-acetylgalactosamine-binding lectins All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 
4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nAll the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nComplex type N-glycan-binding lectins In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nIn the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nFucose-binding lectin In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).\nIn the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).", "In neonatal (Fig. 1A) and adult cattle (Fig. 1B), the OM was composed of the OE and the lamina propria. The OE is predominantly constituted of basal cells, receptor cells, and supporting cells. The nuclei of the basal, receptor, and supporting cells were located in the basal, middle, and apical regions, respectively, of the OE (Fig. 1A and B). The basal cells were round, pallid, and had little nuclei. The nuclei of the supporting cells were elongated and spindle-shaped, but the nuclei of receptor cells were more rounded. While neonatal and adult OMs displayed comparable histological features, adult OMs appeared thicker than neonatal OMs.\nThe lamina propria included Bowman’s glands and olfactory axon bundles. Bowman’s glands were composed of clustered acinar cells that were connected through ducts to the apical surface (Fig. 1A and B). The Bowman’s glandular ducts and/or acini stained positively for PAS (Fig. 1C and E) and Alcian blue stains (pH 2.5; Fig. 1D and F) in neonates (Fig. 1C and D) and adults (Fig. 1E and F); however, the reactivity of duct cells was relatively lower than that of gland cells. Additionally, the histological features of the OM did not differ between males and females at both ages.", "In neonatal (Fig. 2A) and adult (Fig. 
2B) cattle, OMP, a marker for mature olfactory sensory neurons, immunoreactivity was identified in the majority of the mature receptor cells in the middle region of the OE, with dendrites reaching to the OE free border, but not in basal and supporting cells. Their reactivity was more prominent in adults. GAP-43, a marker of immature olfactory sensory neurons, was found in certain receptor cells in the basal region of the OE, and some dendrites were immunopositive for GAP-43 (Fig. 2C and D). In addition, OMP and GAP-43 immunoreactivities were observed in the nerve bundles of the lamina propria (Fig. 2A and C).\nOMP, olfactory marker protein; GAP-43, growth associated protein-43; OM, olfactory mucosa; OE, olfactory epithelium; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.", "The intensity values of 21 lectin-binding sites in the OE and lamina propria of the OM of Korean native cattle are summarized in Tables 2 and 3, respectively.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\naApical perinuclear labeling.\n−, negative staining; +, faint staining; ++, moderate staining; +++, intense staining.\nN-acetylglucosamine-binding lectins All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nAll lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.\nMannose-binding lectins ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 
3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).\nGalactose-binding lectins In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nIn the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).\nN-acetylgalactosamine-binding lectins All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nAll the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 
4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). In adults, the reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.\nComplex type N-glycan-binding lectins In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nIn the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).\nFucose-binding lectin In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).\nIn the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).", "All lectins except BSL-II were highly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) responded more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult ones, but they had no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all these lectins were present in different amounts in the nerve bundles, but they did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman’s gland ducts displayed various reactivities to these lectins although reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling were greater in neonates than in adults in Bowman’s gland acini (Table 3).\nOM, olfactory mucosa.", "ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except the LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in nerve bundles and Bowman’s gland duct/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in Bowman’s gland duct/acini of the lamina propria in neonates than in adults, but PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).", "In the epithelial free border, most lectins were tagged similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except for supporting cells in neonates and/or adults (Table 2). The results of BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). 
Jacalin had no reactivity in any of the OE layers in neonates and adults (Table 2) although it was tagged in Bowman’s gland duct and acini in the lamina propria (Table 3).", "All the lectins produced a reaction of different intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all lectins were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas other lectins were absent (Table 3). The reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in Bowman’s gland duct and acini but were relatively lower in adults (Table 3).\nOM, olfactory mucosa.", "In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were extensively higher in neonates than in adults. Bowman’s gland duct and acini were positive for both lectins (Fig. 4G and H).", "In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in Bowman’s gland duct/acini in the lamina propria but was relatively lower in adults than in neonates (Table 3).", "This is the first study to visualize individual glycoconjugates (carbohydrate residues) during the postnatal development of bovine OM. We employed antibodies against OMP to identify olfactory sensory neurons, together with 21 lectins specific for glycoconjugate sugar residues. Finally, we found that the distribution and expression levels of glycoconjugates varied remarkably between the OMs of neonatal and adult cattle.\nIn the present study, histological examination revealed that the cattle OM is composed of the OE and the underlying lamina propria, which contains Bowman’s glands, bundles of olfactory axons, and ensheathing glia, as observed in other mammals, such as rats [26], sheep [15], and dogs [22]. A previous study established that age-related thickening results from a rapid increase in olfactory density and contributes to an improved odor sensitivity in domestic dogs and sheep [27]. This study evaluated differences in OM morphology between neonate and adult cattle to test the notion that mucosal refinement changes with age. Our findings indicated that the adult cattle OM had a more fully developed structure than that of the neonate. Additionally, PAS and Alcian blue (pH 2.5) staining were utilized to confirm the mucus specificity of the Bowman’s gland duct/acini in the lamina propria. In neonates and adults, Bowman’s gland ducts/acini were stained positively with PAS and Alcian blue to varied degrees of intensity. These findings imply that the Bowman’s glands release neutral and acidic components in the lamina propria in both the neonatal and adult OM, which may be implicated in the perception of scents or in protection against infectious agents and particles [28].\nOMP has been implicated in olfactory transduction [29], and its expression is restricted to mature chemosensory neurons in the OE [30]. We assessed the OMP expression in the OM to verify whether it is associated with mature sensory neurons that extend their dendritic knobs to the OE mucociliary layer or free border [31]. 
In contrast to OMP, GAP-43 expression is detected in the basal region of the OE, indicating that GAP-43 is expressed in immature neurons [32]. In neonatal and adult cattle, mature receptor cells in the middle region and immature receptor cells in the basal region of the OE were markedly positive for OMP and GAP-43, respectively. Therefore, OMP-positive mature and GAP-43-positive immature sensory neurons may be essential for the development of the main olfactory system [32,33]. However, the precise roles of OMP and GAP-43 in the cattle OM should be further verified.
Tables 2 and 3 summarize the lectin-binding patterns in the OM of neonatal and adult cattle. In all regions of the OM, including the OE free border, receptor cells, supporting cells, basal cells, nerve bundles, and Bowman's glands, lectin-reactive glycoconjugates were found at varying intensities. Except for BSL-II and jacalin, all lectins used in this study were expressed in the free border of both neonatal and adult cattle, implying that the majority of the carbohydrates detected by these lectins are present in the mucociliary layer of the OE. This finding somewhat contrasts with previously published research on sheep [15] and horses [23], in which the free border was labeled by 13 and 19 of the 21 lectins, respectively. Except for BSL-II and jacalin, 19 of the 21 lectins labeled receptor cells with varying intensities in the cattle OE, indicating that sugar residues play a role in the cell biology of receptor cells. These results were consistent with those of a previous study in horses [23]; however, DBA, SJA, PHA-L, and UEA-I were found in cattle but not in horses [23]. Furthermore, similar lectins have been identified in the olfactory receptor neurons and axons of other animals [34,35,36,37], which is consistent with our findings. Among the 21 lectins, 10 and 9 lectins labeled the supporting cells in neonates and adults, respectively, and the labeling patterns were relatively comparable to those reported previously [22,38] but differed from the results obtained in the horse [23]. In the lamina propria, 14 of the 21 lectins were found in the nerve bundles with varying intensities, whereas 3 lectins that were absent from the nerve bundles were present in the OE receptor cells. We hypothesize that modifications of sugar residues accompanying this positional change cause the disparity. A similar shift occurred in the Bowman's glands of the lamina propria, where the lectin expression patterns changed as the Bowman's gland ducts exited through the OE. In the Bowman's glands, the majority of lectins were expressed more strongly in acinar cells than in ductal cells, implying that the acini contain a greater variety of sugar residues than the secretory ductal epithelium.
A previous study suggested a link between the expression pattern of DBA-binding glycoconjugates and the development of the olfactory system [36]. Similarly, previous studies hypothesized that the expression patterns of lectins such as VVA and PNA may be associated with the development of the OM [39,40]. The majority of the lectin-binding sites in the cattle OM were identical or stronger in neonates than in adults, implying that lectin labeling may be closely related to the development and maturation of the OM. Thus, developmental changes in the expression patterns of lectin-binding glycoconjugates were evident in the OM of Korean native cattle. However, further functional investigations are required to determine how each glycoconjugate contributes to olfactory sensory processing.
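Counts such as "19 of the 21 lectins" or "14 of the 21 lectins" can be reproduced mechanically once the scores in Tables 2 and 3 are transcribed. The sketch below shows one way to tally positive lectins per OM compartment; the dictionary holds a hypothetical excerpt rather than the full published tables, and the helper name is ours.

```python
# Tally how many lectins label each OM compartment, given semiquantitative
# scores transcribed from Tables 2 and 3 (hypothetical excerpt shown here).
from collections import defaultdict

# scores[lectin][compartment] uses the paper's scale: "-", "+", "++", "+++"
scores = {
    "UEA-I":  {"free_border": "++",  "receptor_cells": "+",  "supporting_cells": "-"},
    "BSL-II": {"free_border": "-",   "receptor_cells": "-",  "supporting_cells": "-"},
    "WGA":    {"free_border": "+++", "receptor_cells": "++", "supporting_cells": "+"},
}

def count_positive(scores_by_lectin):
    """Count, per compartment, how many lectins show any reactivity (not '-')."""
    counts = defaultdict(int)
    for lectin, compartments in scores_by_lectin.items():
        for compartment, grade in compartments.items():
            if grade != "-":
                counts[compartment] += 1
    return dict(counts)

if __name__ == "__main__":
    print(count_positive(scores))
    # With this excerpt: {'free_border': 2, 'receptor_cells': 2, 'supporting_cells': 1}
```

Applied to the full tables, the same tally would give the per-compartment counts quoted in the text.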
Globally, cattle are important for animal protein production and may be a desirable animal model for studying olfaction and its effect on animal behavior. The Korean native cattle used in this study are part of a national breeding and selection program for beef production and are thus subjected to artificial selection for intensive production in South Korea [41]. For example, in the United States, artificial insemination (AI) accounts for 50% of beef cattle breeding, whereas in South Korea it accounts for more than 90% [42]. AI timing is an essential component of conception, and the role of pheromones and olfaction in this process is being investigated [43]. Additional factors, such as irregular or prolonged estrous cycles and anestrus, which contribute to lower conception rates during AI, have been linked to pheromones and their receptors [44]. Consequently, olfaction may differ between Korean native cattle and other cattle according to these various factors [45]; however, information about the association between fertility-related behaviors and olfaction in Korean native cattle is limited. Still, we suggest that our data on the histology and lectin histochemistry of the OM of Korean native cattle may serve as a basis for further investigation of their olfactory behavior and physiology.
In conclusion, we have performed a comprehensive characterization of the Korean native cattle OM at the histological, immunohistochemical, and lectin histochemical levels, thereby revealing new information about the main olfactory system. Additionally, the expression levels of glycoconjugates differed considerably between the OMs of neonatal and adult cattle, with adult OMs exhibiting higher intensities than neonatal OMs. Thus, this information can help clarify the role of the OM and its chemo-communication pathways in cattle. Further functional studies, however, are required to demonstrate the role of lectin-binding glycoconjugates in olfactory sensory processing and its associated behaviors.
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, "discussion" ]
[ "Glycoconjugate", "immunohistochemistry", "Korean native cattle (Hanwoo)", "lectin histochemistry", "olfactory mucosa" ]
INTRODUCTION:
Olfaction is important for animals because it allows them to explore their surroundings for odorous signals from food sources and environments, as well as to detect chemical compounds that influence social interaction and reproductive behavior [1]. This perception is mediated by two chemosensory systems: the main olfactory system and the vomeronasal system. The olfactory mucosa (OM), positioned at the caudal/posterior roof of the nasal cavity, and the vomeronasal organ (VNO), located at the base of the nasal septum or on the roof of the mouth, are the organs comprising these two systems [2,3]. Traditionally, the OM and VNO have been considered functionally and anatomically distinct, with the OM detecting conventional volatile odorants and the VNO receiving pheromones [4,5]. These two organs share some common features but differ in neuron types, primary structures of receptor proteins, physiological pathways, and central neuroanatomical projections into the brain [6]. However, despite the anatomical and functional differences, previous studies indicate that the OM and VNO play a synergistic role in the regulation of various olfactory-induced behaviors and of reproductive and social interactions in mammals [1,5,7,8,9]. The OM is critical for chemical signal acquisition in the main olfactory system, conveying signals to the main olfactory bulb [4]. In mammals, the OM is composed of the olfactory epithelium (OE) and the lamina propria. The OE is predominantly composed of chemosensory neurons, supporting cells, and basal cells [10]. The lamina propria is constituted by loose connective tissue in which olfactory nerve axon bundles, Bowman's glands, and blood and lymph vessels are embedded [10,11]. In mammals, odorant reception occurs via the chemosensory neurons of the OE, which possess dendrites that extend beyond the apical surface, where the cilia protrude into the mucus layer, and a basally projecting axon [12]. Glycoconjugates (terminal carbohydrates) have been shown to play a crucial role in the chemoreception of the OE [13]. Cell surfaces are densely packed with a diverse array of glycoconjugates, each of which carries considerable biological information [14]. Lectins are naturally occurring glycoconjugate-binding molecules, and a large number of purified lectins are regarded as the primary analytical tool for studying glycoconjugates in the olfactory system [13,15,16]. Specific lectin binding is closely related to olfactory neuron function, as evidenced by the abundance of glycoconjugates in the mucosensory compartment of the chemosensory epithelia, where receptor-specific events associated with transduction occur [13]. Numerous animals, including rats [17,18], mice [19,20], marmosets [21], sheep [15,22], camels [22], horses [23], and humans [24], express carbohydrate (lectin-binding) moieties in their OM. Although cattle are one of the best species for biochemical investigations of olfaction [25], there is little published information on the immunohistochemical and lectin histochemical properties of the bovine OM. We first evaluated the histological characteristics and lectin histochemistry of the OM in Korean native cattle and then compared neonatal with adult OMs to determine the features associated with maturity. To our knowledge, this is the first study that provides a comprehensive outlook on the expression levels of lectin-binding glycoconjugates in the bovine OM.
MATERIALS AND METHODS:
Tissue preparation: Three neonatal OM samples (1- to 3-day-old) and three adult OM samples (2.5-year-old) of Korean native cattle (Hanwoo, Bos taurus coreanae) were obtained from local farms and a local slaughterhouse in Chungcheongbuk-do, South Korea, respectively. Both sexes were included in this study (male: four neonates and five adults; female: three neonates and three adults). The OM sampled in this study belongs to a small area on the caudal roof of the nasal cavity, where it completely covers the ethmoturbinates and the caudal portions of the dorsal and middle nasal conchae (Fig. 1A). Morphological examination of the nasal cavity indicated that none of the animals had underlying respiratory disease. All experimental and animal handling procedures were conducted in accordance with the guidelines of the Institutional Animal Care and Use Committee of Chonnam National University (26 October 2021; CNU IACUC-YB-2021-131). (Fig. 1 abbreviations: OM, olfactory mucosa; BD, Bowman's glandular duct; BG, Bowman's gland; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.)
Histological examination: For light microscopic examination, the ethmoturbinates were removed immediately after death, cut into 5-cm-thick cross sections, and fixed for 3–5 days in 10% buffered formalin. After fixation, the ethmoturbinates were trimmed and decalcified through multiple changes of sodium citrate-formic acid solution; decalcification was stopped when a needle could readily penetrate the bone with very little force. The samples were then washed for 24 h in running tap water, dehydrated through a graded ethanol series (70%, 80%, 90%, 95%, and 100%), cleared in xylene, embedded in paraffin, and sectioned at 4 μm. Following deparaffinization, the sections were used for hematoxylin and eosin staining, mucus and lectin histochemistry, and immunohistochemistry.
Mucus histochemistry: Acidic mucin was distinguished from neutral epithelial mucin using periodic acid-Schiff (PAS) and Alcian blue (pH 2.5) staining. For PAS staining, the sections were treated with 0.5% periodic acid for 15 min, then washed and incubated in Schiff reagent for 30 min. After 10 min of washing in running tap water, the sections were counterstained with hematoxylin. For Alcian blue staining, the sections were first incubated in 3% acetic acid for 3 min, followed by 30 min in 1% Alcian blue solution in 3% acetic acid (pH 2.5). After 1 min of washing in running tap water, the sections were counterstained with neutral red.
Immunohistochemistry: To retrieve the antigens, the sections were heated for 1 h at 90°C in citrate buffer (0.01 M, pH 6.0). After cooling, the sections were treated for 20 min with 0.3% hydrogen peroxide in distilled water to suppress endogenous peroxidase activity. To avoid non-specific binding, the sections were treated with normal goat serum (Vectastain Elite ABC kit; Vector Laboratories, USA) for 1 h before being incubated overnight at 4°C with rabbit monoclonal anti-olfactory marker protein (OMP) (1:1,000 dilution, Cat. No. ab183947; Abcam, UK) and rabbit polyclonal anti-growth associated protein-43 (GAP-43) (1:1,000 dilution, Cat. No. PA5-34943; ThermoFisher Scientific, USA) antibodies. For negative controls, the primary antibodies were omitted. The sections were rinsed with phosphate-buffered saline (PBS), treated with biotinylated goat anti-rabbit IgG (Vectastain Elite ABC kit) for 1 h, and then reacted with the avidin-biotin-peroxidase complex (Vectastain Elite ABC kit) for 1 h at room temperature (RT). Immunoreactivity was detected using a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), followed by counterstaining with hematoxylin.
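The 1:1,000 antibody dilutions above are simple ratio arithmetic. As a minimal illustration (not part of the published protocol), the sketch below computes stock and diluent volumes for a chosen working volume; the function name and the 500 µL example are our own assumptions.

```python
# Compute stock and diluent volumes for a simple 1:X antibody dilution.
def dilution_volumes(final_volume_ul, ratio):
    """Return (stock_ul, diluent_ul) for a 1:ratio dilution, e.g. 1:1,000."""
    stock = final_volume_ul / ratio
    return stock, final_volume_ul - stock

if __name__ == "__main__":
    # e.g. 500 uL of working solution at 1:1,000 -> 0.5 uL stock + 499.5 uL diluent
    stock_ul, diluent_ul = dilution_volumes(500, 1000)
    print(f"stock: {stock_ul} uL, diluent: {diluent_ul} uL")
```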
Lectin histochemistry: Biotinylated lectin kits I, II, and III (Cat. No. BK-1000, BK-2000, and BK-3000) were acquired from Vector Laboratories. The acronyms, sources, and specificities of the lectins used are shown in Table 1. On the basis of their binding specificities and inhibitory sugars, the lectins were categorized as N-acetylglucosamine-, mannose-, galactose-, N-acetylgalactosamine-, complex type N-glycan-, and fucose-binding lectins. For competitive inhibition, the following sugars were purchased from Sigma-Aldrich (USA) and Vector Laboratories (Table 1): N-acetyl-D-glucosamine (β-D-GlcNAc; Sigma-Aldrich), chitin hydrolysate (Vector Laboratories), α-methyl mannoside/α-methyl glucoside (Sigma-Aldrich), lactose (Galβ1,4Glc; Sigma-Aldrich), N-acetyl-D-galactosamine (α-D-GalNAc; Sigma-Aldrich), melibiose (Galα1,6Glc; Sigma-Aldrich), and β-D-galactose (Sigma-Aldrich). (Table 1 abbreviations: Fuc, fucose; Gal, galactose; GalNAc, N-acetylgalactosamine; Glc, glucose; GlcNAc, N-acetylglucosamine; Man, mannose. aThe acronyms and specificities of the 21 lectins were obtained from the data sheets (Vector Laboratories) and grouped as shown in a previous paper [23].) To eliminate endogenous peroxidase activity, the sections were treated with 0.3% hydrogen peroxide in methanol. To prevent non-specific reactions, the sections were rinsed with PBS and then incubated with 1% bovine serum albumin in PBS. The sections were then treated overnight at 4°C with the biotinylated lectins and reacted for 45 min at RT with the avidin-biotin-peroxidase complex (Vectastain Elite ABC kit). The sections were rinsed with PBS, treated with a diaminobenzidine substrate kit (DAB Substrate Kit SK-4100; Vector Laboratories), and counterstained with hematoxylin. Negative controls for lectin histochemistry were generated by omitting the biotinylated lectins and by preincubating the lectins with the appropriate inhibitory sugars for 1 h at RT.
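For readers who want to tabulate the staining results computationally, the lectin-to-group assignments named in this paper can be encoded as a simple mapping. The sketch below is a convenience structure assembled from the group headings and lectin acronyms used in the text; Table 1 remains the authoritative source for specificities.

```python
# Lectin groups by nominal sugar specificity, as referred to in the text.
# This is a convenience mapping for analysis scripts; Table 1 is authoritative.
LECTIN_GROUPS = {
    "N-acetylglucosamine": ["s-WGA", "WGA", "BSL-II", "DSL", "LEL", "STL"],
    "mannose": ["ConA", "LCA", "PSA"],
    "galactose": ["RCA120", "PNA", "ECL", "BSL-I", "Jacalin"],
    "N-acetylgalactosamine": ["VVA", "DBA", "SBA", "SJA"],
    "complex N-glycan": ["PHA-E", "PHA-L"],
    "fucose": ["UEA-I"],
}

def group_of(lectin):
    """Return the sugar-specificity group of a lectin acronym, if listed."""
    for group, members in LECTIN_GROUPS.items():
        if lectin in members:
            return group
    return None

if __name__ == "__main__":
    # Sanity check: the six groups together cover all 21 lectins used in the study.
    assert sum(len(v) for v in LECTIN_GROUPS.values()) == 21
    print(group_of("UEA-I"))  # -> "fucose"
```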
RESULTS:
Histological findings of the OM: In neonatal (Fig. 1A) and adult cattle (Fig. 1B), the OM was composed of the OE and the lamina propria. The OE is predominantly constituted of basal cells, receptor cells, and supporting cells, whose nuclei were located in the basal, middle, and apical regions of the OE, respectively (Fig. 1A and B). The basal cells were round and pallid and had small nuclei. The nuclei of the supporting cells were elongated and spindle-shaped, whereas the nuclei of the receptor cells were more rounded. While neonatal and adult OMs displayed comparable histological features, adult OMs appeared thicker than neonatal OMs. The lamina propria included Bowman's glands and olfactory axon bundles. Bowman's glands were composed of clustered acinar cells that were connected through ducts to the apical surface (Fig. 1A and B). The Bowman's glandular ducts and/or acini stained positively with PAS (Fig. 1C and E) and Alcian blue (pH 2.5; Fig. 1D and F) in neonates (Fig. 1C and D) and adults (Fig. 1E and F); however, the reactivity of duct cells was relatively lower than that of gland cells. Additionally, the histological features of the OM did not differ between males and females at either age.
Immunohistochemical analysis of OMP in the OM: In neonatal (Fig. 2A) and adult (Fig. 2B) cattle, immunoreactivity for OMP, a marker of mature olfactory sensory neurons, was identified in the majority of mature receptor cells in the middle region of the OE, with dendrites reaching the OE free border, but not in basal or supporting cells. The reactivity was more prominent in adults. GAP-43, a marker of immature olfactory sensory neurons, was found in certain receptor cells in the basal region of the OE, and some dendrites were immunopositive for GAP-43 (Fig. 2C and D). In addition, OMP and GAP-43 immunoreactivities were observed in the nerve bundles of the lamina propria (Fig. 2A and C). (Fig. 2 abbreviations: OMP, olfactory marker protein; GAP-43, growth associated protein-43; OM, olfactory mucosa; OE, olfactory epithelium; NB, nerve bundle; OBC, olfactory basal cells; OSC, olfactory supporting cells; ORC, olfactory receptor cells.)
Lectin histochemistry in the OM: The intensities of the 21 lectin-binding sites in the OE and lamina propria of the OM of Korean native cattle are summarized in Tables 2 and 3, respectively (−, negative; +, faint; ++, moderate; +++, intense staining; a, apical perinuclear labeling).
N-acetylglucosamine-binding lectins: All lectins except BSL-II were strongly present in the free border of neonatal and adult OEs (Fig. 3A-F, Table 2). In neonates and adults, most OE cells lacked BSL-II reactivity (Table 2). Neonatal OE cells (especially receptor cells) reacted more strongly to DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) than adult cells, but showed no remarkable differences in s-WGA (Fig. 3A and B) and WGA labeling (Table 2). Except for BSL-II, all of these lectins were present in differing amounts in the nerve bundles, but their labeling did not vary with age (Fig. 3A-F, Table 3). Neonatal Bowman's gland ducts displayed various reactivities to these lectins, whereas reactivity in adults was faint (Fig. 3A-F, Table 3). The intensities of s-WGA (Fig. 3A and B), BSL-II, DSL (Fig. 3C and D), LEL, and STL (Fig. 3E and F) labeling in Bowman's gland acini were greater in neonates than in adults (Table 3).
Mannose-binding lectins: ConA, LCA (Fig. 3G and H), and PSA were labeled with varying intensities across the OE layers, except LCA in the supporting cells of neonates; their intensities were greater in adults than in neonates (Table 2). ConA labeling was more intense in the nerve bundles and Bowman's gland ducts/acini of the lamina propria in neonates than in adults (Table 3). LCA (Fig. 3G and H) labeling was more intense in the Bowman's gland ducts/acini of the lamina propria in neonates than in adults, whereas PSA labeling was more intense in the nerve bundles of the lamina propria in adults than in neonates (Table 3).
Galactose-binding lectins: In the epithelial free border, most lectins were labeled similarly in neonates and adults (Table 2), but the intensity of PNA labeling was greater in neonates than in adults (Fig. 3I and J). RCA120, PNA (Fig. 3I and J), and ECL reactivities were observed in all OE layers except the supporting cells in neonates and/or adults (Table 2). BSL-I labeling of the components of the lamina propria did not differ between neonatal and adult OM (Table 3). Jacalin showed no reactivity in any of the OE layers in neonates and adults (Table 2), although it labeled the Bowman's gland ducts and acini in the lamina propria (Table 3).
N-acetylgalactosamine-binding lectins: All of these lectins produced reactions of differing intensities in neonates and adults at the epithelial free border (Fig. 4A-F, Table 2). In receptor cells, although all of these lectins were detected faintly in neonates, the intensities of VVA (Fig. 4A and B) and SBA (Fig. 4E and F) were enhanced in adults (Table 2). Additionally, except for DBA, none of these lectins reacted with neonatal basal cells, whereas all of them were faintly labeled in adults (Table 2). In nerve bundles, DBA (Fig. 4C and D) staining was more intense in neonates than in adults, whereas the other lectins were absent (Table 3). The reactivities for VVA (Fig. 4A and B), SBA (Fig. 4E and F), and SJA were moderate to intense in the Bowman's gland ducts and acini of neonates but relatively lower in adults (Table 3).
Complex type N-glycan-binding lectins: In the OE (Table 2) and lamina propria (Table 3), the reactivities of PHA-E and PHA-L (Fig. 4G and H) were markedly higher in neonates than in adults. Bowman's gland ducts and acini were positive for both lectins (Fig. 4G and H).
Fucose-binding lectin: In the OE of neonates (Fig. 4I) and adults (Fig. 4J), UEA-I was identified in the free border and receptor cells but not in the supporting cells (Table 2). UEA-I was labeled extensively in the Bowman's gland ducts/acini of the lamina propria, although the reactivity was relatively lower in adults than in neonates (Table 3).
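Because the tables report ordinal grades rather than continuous measurements, the neonate-versus-adult comparisons above amount to comparing ranks. A minimal sketch of one way to encode the −/+/++/+++ scale and flag age differences is shown below; the example scores are hypothetical placeholders, not values transcribed from Tables 2 and 3.

```python
# Encode the semiquantitative staining scale and compare neonate vs adult grades.
GRADE = {"-": 0, "+": 1, "++": 2, "+++": 3}

def compare(neonate, adult):
    """Return which age group shows the stronger reactivity for one binding site."""
    n, a = GRADE[neonate], GRADE[adult]
    if n > a:
        return "higher in neonates"
    if n < a:
        return "higher in adults"
    return "no difference"

if __name__ == "__main__":
    # Hypothetical example: PNA at the epithelial free border
    print("PNA, free border:", compare("+++", "++"))   # higher in neonates
    # Hypothetical example: VVA in receptor cells
    print("VVA, receptor cells:", compare("+", "++"))  # higher in adults
```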
DISCUSSION: This is the first study to visualize individual glycoconjugates (carbohydrate residues) during the postnatal development of the bovine OM. We employed antibodies against OMP, specific for olfactory sensory neurons, and 21 lectins specific for glycoconjugate sugar residues. We found that the distribution and expression levels of glycoconjugates varied remarkably between the OMs of neonatal and adult cattle. In the present study, histological examination revealed that the cattle OM is composed of the OE and the underlying lamina propria, which contains Bowman’s glands, bundles of olfactory axons, and ensheathing glia, as observed in other mammals, such as rats [26], sheep [15], and dogs [22]. A previous study established that age-related thickening results from a rapid increase in olfactory density and contributes to improved odor sensitivity in domestic dogs and sheep [27]. This study evaluated differences in OM morphology between neonatal and adult cattle to test the notion that mucosal refinement changes with age. Our findings indicated that the adult cattle OM had a more complete structural shape than that of the neonate. Additionally, PAS and Alcian blue (pH 2.5) staining were utilized to confirm the mucus specificity of the Bowman’s gland duct/acini in the lamina propria. In neonates and adults, Bowman’s gland ducts/acini were stained positively with PAS and Alcian blue to varied degrees of intensity. These findings imply that the Bowman’s glands release neutral and acidic components in the lamina propria in both the neonatal and adult OM, which may be implicated in the perception of scents or in protection against infectious agents and particles [28]. OMP has been implicated in olfactory transduction [29], and its expression is restricted to mature chemosensory neurons in the OE [30]. We assessed OMP expression in the OM to verify whether it is associated with mature sensory neurons that extend their dendritic knobs to the OE mucociliary layer or free border [31]. In contrast to OMP, GAP-43 expression is detected in the basal region of the OE, indicating that GAP-43 is expressed in immature neurons [32]. In neonatal and adult cattle, mature receptor cells in the middle region and immature receptor cells in the basal region of the OE were markedly positive for OMP and GAP-43, respectively. Therefore, OMP-positive mature and GAP-43-positive immature sensory neurons may be essential for the development of the main olfactory system [32, 33]. However, the precise roles of OMP and GAP-43 in cattle OM should be further verified.
Tables 2 and 3 summarize the lectin-binding patterns in the OM of neonatal and adult cattle. In all regions of the OM, including the OE free border, receptor cells, supporting cells, basal cells, nerve bundles, and Bowman’s glands, lectin-reactive glycoconjugates were found at varying intensities. Except for BSL-II and jacalin, all lectins utilized in this study were expressed in the free border of both neonatal and adult cattle, implying that the majority of the carbohydrates detected by these lectins are present in the mucociliary layer of the OE. This finding somewhat contrasts with previously published research on sheep [15] and horses [23], wherein the free borders were labeled with 13 and 19 of the 21 lectins, respectively. Except for BSL-II and jacalin, 19 of the 21 lectins were labeled in receptor cells with varying intensities in the cattle OE, indicating that sugar residues play a role in the cell biology of receptor cells. The results of this study were consistent with those of a previous study in horses [23]. However, DBA, SJA, PHA-L, and UEA-I were found in cattle but not in horses [23]. Furthermore, similar lectins have been identified in the olfactory receptor neurons and axons of other animals [34, 35, 36, 37], which is consistent with our findings. Among the 21 lectins, 10 and 9 lectins labeled supporting cells in neonates and adults, respectively, and the labeling patterns were relatively comparable to those reported previously [22, 38] but differed from the results obtained in a previous study in the horse [23]. In the lamina propria, 14 of the 21 lectins were found in the nerve bundles with varying intensities, whereas 3 lectins that were not found in the nerve bundles were found in the OE receptor cells. We hypothesize that sugar residue modifications resulting from positional changes cause this disparity. This event also occurred in the Bowman’s gland in the lamina propria, where the expression patterns of lectins were altered as the Bowman’s gland ducts exited through the OE. In the Bowman’s glands, the majority of lectins were expressed more strongly in acinar cells than in ductal cells, implying that the acini contain a greater variety of sugar residues than the secretory ductal epithelium. A previous study suggested a link between the expression pattern of DBA-binding glycoconjugates and the development of the olfactory system [36]. Similarly, previous studies hypothesized that the expression patterns of lectins, such as VVA and PNA, may be associated with the development of the OM [39, 40]. The majority of the lectin-binding sites in the cattle OM were identical or stronger in neonates than in adults, implying that the labeling of lectins may be closely related to the development and maturation of the OM. Thus, developmental changes in the expression patterns of lectin-binding glycoconjugates were evident in the OM of Korean native cattle. However, further functional investigations are required to determine how each glycoconjugate contributes to olfactory sensory processing. Globally, cattle are important for animal protein production and may be a desirable animal model for studying olfaction and its effect on animal behavior. The Korean native cattle used in this study are part of a national breeding and selection program for beef production and are thus subjected to artificial selection for intensive production in South Korea [41].
For example, in the United States, artificial insemination (AI) accounts for 50% of beef cattle breeding; in South Korea, it accounts for more than 90% [42]. AI timing is an essential component of conception, and the role of pheromones and olfaction in this process is being investigated [43]. Additional factors, such as irregular or prolonged estrous cycles and anestrus, which contribute to lower conception rates during AI, have been linked to pheromones and their receptors [44]. Consequently, olfaction may differ between Korean native cattle and other cattle according to these various factors [45]; however, information about the association between fertility-related behaviors and olfaction in Korean native cattle is limited. Still, we suggest that our data on the histology and lectin histochemistry in the OM of Korean native cattle may serve as a basis for further investigating their olfactory behavior and physiology. In conclusion, we have performed a comprehensive characterization of the Korean native cattle OM at the histological, immunohistochemical, and lectin histochemical levels, thereby revealing new information about the main olfactory system. Additionally, the expression levels of glycoconjugates were considerably different between the OMs of neonatal and adult cattle, with adult OMs exhibiting higher intensities than neonates. Thus, this information can help clarify the role of the OM and its chemo-communication pathways in cattle. Further functional studies, however, are required to demonstrate the role of lectin-binding glycoconjugates in olfactory sensory processing and its associated behaviors.
Background: The olfactory mucosa (OM) is crucial for odorant perception in the main olfactory system. The terminal carbohydrates of glycoconjugates influence chemoreception in the olfactory epithelium (OE). Methods: The OM of neonatal and adult Korean native cattle was evaluated using histological, immunohistochemical, and lectin histochemical methods. Results: Histologically, the OM in both neonates and adults consists of the olfactory epithelium and the lamina propria. Additionally, periodic acid Schiff and Alcian blue (pH 2.5) staining determined the mucus specificity of the Bowman's gland duct and acini in the lamina propria. Immunohistochemistry demonstrated that mature and immature olfactory sensory neurons of the OE express olfactory marker protein and growth-associated protein-43, respectively. Lectin histochemistry indicated that numerous glycoconjugates, including N-acetylglucosamine, mannose, galactose, N-acetylgalactosamine, complex type N-glycan, and fucose groups, were expressed at varying levels in the different cell types of neonatal and adult OMs. According to our observations, cattle possess a well-developed olfactory system, and the expression patterns of glycoconjugates in neonatal and adult OMs varied considerably. Conclusions: This is the first study to describe the morphological assessment of the OM of Korean native cattle with a focus on lectin histochemistry. The findings suggest that glycoconjugates may play a role in olfactory chemoreception, and that their labeling properties may be closely related to OM development and maturity.
null
null
13,049
275
[ 218, 148, 128, 238, 394, 259, 180, 1781, 235, 131, 139, 187, 62, 72 ]
18
[ "fig", "table", "adults", "cells", "neonates", "lectins", "oe", "neonates adults", "bowman", "om" ]
[ "olfactory receptor neurons", "identified olfactory receptor", "olfactory sensory processing", "vomeronasal system olfactory", "olfactory system vomeronasal" ]
null
null
null
[CONTENT] Glycoconjugate | immunohistochemistry | Korean native cattle (Hanwoo) | lectin histochemistry | olfactory mucosa [SUMMARY]
null
[CONTENT] Glycoconjugate | immunohistochemistry | Korean native cattle (Hanwoo) | lectin histochemistry | olfactory mucosa [SUMMARY]
null
[CONTENT] Glycoconjugate | immunohistochemistry | Korean native cattle (Hanwoo) | lectin histochemistry | olfactory mucosa [SUMMARY]
null
[CONTENT] Cattle | Animals | Lectins | Galactose | Olfactory Mucosa | Republic of Korea [SUMMARY]
null
[CONTENT] Cattle | Animals | Lectins | Galactose | Olfactory Mucosa | Republic of Korea [SUMMARY]
null
[CONTENT] Cattle | Animals | Lectins | Galactose | Olfactory Mucosa | Republic of Korea [SUMMARY]
null
[CONTENT] olfactory receptor neurons | identified olfactory receptor | olfactory sensory processing | vomeronasal system olfactory | olfactory system vomeronasal [SUMMARY]
null
[CONTENT] olfactory receptor neurons | identified olfactory receptor | olfactory sensory processing | vomeronasal system olfactory | olfactory system vomeronasal [SUMMARY]
null
[CONTENT] olfactory receptor neurons | identified olfactory receptor | olfactory sensory processing | vomeronasal system olfactory | olfactory system vomeronasal [SUMMARY]
null
[CONTENT] fig | table | adults | cells | neonates | lectins | oe | neonates adults | bowman | om [SUMMARY]
null
[CONTENT] fig | table | adults | cells | neonates | lectins | oe | neonates adults | bowman | om [SUMMARY]
null
[CONTENT] fig | table | adults | cells | neonates | lectins | oe | neonates adults | bowman | om [SUMMARY]
null
[CONTENT] om | glycoconjugates | olfactory | vno | system | chemosensory | lectin | olfactory system | mammals | main [SUMMARY]
null
[CONTENT] fig | table | adults | neonates | cells | lectins | neonates adults | oe | adults table | labeling [SUMMARY]
null
[CONTENT] fig | table | adults | neonates | cells | lectins | oe | neonates adults | olfactory | sections [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
null
[CONTENT] OM ||| Alcian | Bowman ||| protein-43 ||| Lectin ||| [SUMMARY]
null
[CONTENT] ||| ||| Korean | lectin ||| ||| OM ||| Alcian | Bowman ||| protein-43 ||| Lectin ||| ||| first | Korean | lectin ||| [SUMMARY]
null
Polymicrobial bloodstream infections in the neonatal intensive care unit are associated with increased mortality: a case-control study.
25022748
Polymicrobial infections in adults and children are associated with increased mortality, longer duration of intensive care and higher healthcare costs. Very few studies have characterized polymicrobial bloodstream infections in the neonatal unit. Considerable variation has been reported in the incidence of polymicrobial infections and the associated clinical outcomes. We characterized the risk factors and outcomes of polymicrobial bloodstream infections in our neonatal units at a tertiary hospital in North America.
BACKGROUND
In a retrospective case control study design, we identified infants in the neonatal intensive care unit with positive blood cultures at Texas Children's Hospital, over a 16-year period from January 1, 1997 to December 31, 2012. Clinical data from online databases were available from January 2009 to December 2012. For each polymicrobial bloodstream infection (case), we matched three infants with monomicrobial bloodstream infection (control) by gestational age and birth weight.
METHODS
We identified 2007 episodes of bloodstream infections during the 16-year study period, and 280 (14%) of these were polymicrobial. Coagulase-negative Staphylococcus, Enterococcus, Klebsiella and Candida were the most common microbial genera isolated from polymicrobial infections. Polymicrobial bloodstream infections were associated with a more than 3-fold increase in mortality and an increase in the duration of infection. Surgical intervention was a significant risk factor for polymicrobial infection.
RESULTS
The frequency and increased mortality emphasize the clinical significance of polymicrobial bloodstream infections in the neonatal intensive care unit. Clinical awareness of and focused research on neonatal polymicrobial infections are urgently needed.
CONCLUSION
[ "Bacteremia", "Birth Weight", "Candida", "Case-Control Studies", "Coinfection", "Female", "Humans", "Incidence", "Infant", "Infant, Newborn", "Intensive Care Units, Neonatal", "Klebsiella", "Male", "Retrospective Studies", "Risk Factors", "Staphylococcus", "Texas", "Treatment Outcome" ]
4226990
Background
Polymicrobial infections increase mortality more than 2-fold in adults and children, increase length of hospital stay and healthcare costs [1,2]. Risk factors for polymicrobial infections in children and adults include the presence of a central venous catheter, administration of parenteral nutrition, gastrointestinal pathology, especially short gut syndrome, use of broad-spectrum antibiotics and immunosuppression [1,3,4]. Neonatal polymicrobial infections are less well characterized compared to those in children or adults. Increased survival of extremely premature infants at the edge of viability, dependence on catheters and parenteral nutrition (PN), and antibiotic therapy may predispose to polymicrobial infections in neonates. The frequency of neonatal polymicrobial bloodstream infections reported in clinical studies varies from 4 to 24% of all bloodstream infections [5-10]. A standard definition for neonatal polymicrobial infections is lacking and the incidence varies from study to study partly due to variability in definition, neonatal population and practices [11]. Few studies have focused on polymicrobial infections in neonates, especially on risk factors and clinical outcomes in the western world [10-12]. In a neonatal review, the mortality due to polymicrobial infections was 3-fold greater than that of monomicrobial infections (70% vs. 23%) [10]. Organisms that are commonly implicated in neonatal polymicrobial bloodstream infections are coagulase-negative staphylococcus (CONS), Candida spp., Staphylococcus aureus and Enterococcus spp. [1,9,13-16]. A common risk factor appears to be multi-species biofilm infections originating from indwelling medical devices, notably indwelling vascular catheters or endotracheal tubes [1,17]. Polymicrobial bloodstream infections may be defined as multiple organisms isolated during an infectious episode including those from a single blood specimen [1] or more restrictively as isolation of more than one organism from a single blood specimen only [11,12,17]. Different definitions may partly explain the varying incidence of polymicrobial bloodstream infections reported. Bizzarro et al. (Yale, 1989-2006) and Faix et al. (Ann Arbor, 1971-1986) have reported the only two studies on neonatal polymicrobial infections from North America, [10,11]. Changing epidemiology of neonatal infections dictates a need to update and understand the epidemiology of polymicrobial infections in the hospitalized infant [11]. Lack of data on polymicrobial infections from our neonatal unit and concern of significant increase in mortality, morbidity and healthcare costs due to polymicrobial infections prompted us to investigate its frequency, risk factors and clinical outcomes at Texas Children’s Hospital (TCH).
Methods
We hypothesized that polymicrobial infections comprise > 5% of bloodstream infections in infants residing in the neonatal intensive care units, have identifiable risk factors and are associated with higher mortality and morbidity than monomicrobial infections. We tested our hypothesis by performing a single center, retrospective, matched case-control study during a 16 year study period. Our study protocol was approved by the Institutional Review Board at Baylor College of Medicine, Houston. We followed the STROBE guidelines in reporting this study [18].
Identification of cases and controls: We identified blood culture positive infants from the clinical microbiology database who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database are entered by the microbiologist, and all positive and negative cultures, including those regarded as contaminants, are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranging from 1338 to 1547 during the years 2009 to 2012). Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012.
Definitions: Polymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1], and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually, when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr until two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures, and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens; a single culture of the above organisms was deemed a contaminant and not included in the analyses.
Clinical data collection: Neonatal clinical data including demographics, risk factors and outcomes were identified by cross referencing our institution’s neonatal clinical database (from the Vermont Oxford Network (VON) database) for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, and presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and broviac catheters, excluding umbilical catheters). We excluded umbilical venous catheters because our neonatal policy is to replace the umbilical venous catheters in the first few days of life with percutaneously inserted central catheters (PICC), and hence most umbilical lines lasted only a few days. Also, most of the bloodstream infections occurred at a time when umbilical catheters were no longer in place. We also collected data on important clinical outcomes: mortality, length of hospital stay, bronchopulmonary dysplasia (BPD), patent ductus arteriosus (PDA), necrotizing enterocolitis (NEC, ≥ stage 2 by Bell’s classification), intraventricular hemorrhage (IVH), periventricular leucomalacia (PVL), retinopathy of prematurity (ROP), intermittent positive pressure ventilation (IPPV) at 36 wks corrected gestational age (GA), continuous positive airway pressure (CPAP) at 36 wks corrected GA, high frequency oscillatory ventilation (HFOV), inhaled nitric oxide (INO) therapy, extracorporeal membrane oxygenation (ECMO), surgeries, direct hyperbilirubinemia (direct bilirubin > 2 mg/dl or > 15% of total bilirubin), congenital heart disease and congenital malformations. All outcomes were defined as per VON database definitions.
Statistical analyses: Data were analyzed using STATA 11, Stata Corporation, Dallas, USA. Infants with polymicrobial bloodstream infections were compared with three gestational age and birth weight matched infants with monomicrobial bloodstream infections. Continuous data were analyzed for statistical significance by the Student’s t test, and categorical data were analyzed by chi squared analyses. A p value of < 0.05 was considered significant. A logistic regression analysis was performed for binary outcomes, and odds ratios with 95% confidence intervals were estimated. The outcome of mortality was analyzed adjusting for ‘any surgery’ in the logistic regression model. Polymicrobial infections were used as the outcome of the logistic regression analysis for risk factors and as a covariate when assessing for neonatal outcomes. A subgroup analysis of VLBW infants was performed for mortality and risk factors in the logistic regression model.
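As an illustration of the episode definition above, the following sketch shows how one infant's blood-culture records could be grouped into infection episodes and flagged as polymicrobial. This is not the authors' code: the record layout, field names and helper functions are assumptions made for the example, which simply encodes the stated rules (an episode is closed by two consecutive negative cultures drawn at roughly 24-hour intervals, and it is polymicrobial when more than one distinct organism is isolated anywhere within it, including from a single specimen).

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Culture:
    drawn: date               # date the blood culture was drawn (hypothetical field)
    organism: Optional[str]   # organism isolated, or None for a negative culture

def split_into_episodes(cultures: List[Culture]) -> List[List[Culture]]:
    # Group one infant's chronologically sorted cultures into infection episodes.
    # An episode opens with a positive culture and is considered resolved once two
    # consecutive negative cultures have been obtained.
    episodes: List[List[Culture]] = []
    current: List[Culture] = []
    negatives_in_a_row = 0
    for c in sorted(cultures, key=lambda x: x.drawn):
        if c.organism is not None:
            current.append(c)
            negatives_in_a_row = 0
        elif current:
            negatives_in_a_row += 1
            if negatives_in_a_row >= 2:   # two consecutive negatives close the episode
                episodes.append(current)
                current = []
                negatives_in_a_row = 0
    if current:                           # episode still open at the end of the record
        episodes.append(current)
    return episodes

def is_polymicrobial(episode: List[Culture]) -> bool:
    # More than one distinct organism isolated anywhere within the episode;
    # several organisms from a single specimen appear as records sharing one date.
    return len({c.organism for c in episode}) > 1

# Example usage with made-up data:
cultures = [
    Culture(date(2011, 3, 1), "CONS"),
    Culture(date(2011, 3, 2), "Candida albicans"),
    Culture(date(2011, 3, 3), None),
    Culture(date(2011, 3, 4), None),
]
episodes = split_into_episodes(cultures)
print(len(episodes), is_polymicrobial(episodes[0]))   # -> 1 True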
Results
We identified 2007 bacteremia episodes during a period of 16 years in patients admitted to the NICUs at TCH; 280 episodes (14%) were polymicrobial (Figure 1). The percentage of annual polymicrobial bacteremia varied during the study period, ranging from 6.3 to 18.6%. Polymicrobial bacteremia identified in single blood specimens constituted an average of 56% (range 40 to 75%) during the 16-year study period. Frequency of polymicrobial bacteremia episodes (Figure 1 legend): The frequencies of neonatal bacteremia episodes are plotted against the years from 1997 to 2012. The dark shaded portion of the bars indicates the polymicrobial component of the infectious episodes, which ranged from 6.3 to 18.6% and averaged about 14% during the 16-year study period. There was a significant decrease in the number of infections and polymicrobial infections between the time epochs 1998-2009 and 2010-2012 (p < 0.01). Microbes isolated from monomicrobial (controls) and polymicrobial infections (cases) were similar but varied in frequency (Table 1). Microbes isolated at greater frequencies in polymicrobial infections were Candida spp. (>3%), Enterococcus faecalis (>2-fold) and Klebsiella pneumoniae (>2-fold). The most common combinations of polymicrobial organisms were CONS and Candida spp. (n = 6) and CONS and Enterococcus faecalis (n = 3). Gram positive organisms were isolated more frequently than gram negative organisms, and the proportion of gram positive to gram negative organisms was similar in both monomicrobial (gram positive 51% and gram negative 42%) and polymicrobial infections (gram positive 48% and gram negative 40%). Microbiology of monomicrobial and polymicrobial bloodstream infections (Table 1 legend): In monomicrobial bloodstream infections, 102 organisms were isolated from 102 infectious episodes. In polymicrobial bloodstream infections, 74 organisms were isolated from 34 infectious episodes. ‘Others’ include Streptococcus salivarius, Citrobacter freundii and Clostridium tertium for monomicrobial infections; for polymicrobial infections, they were Lactobacillus, Proteus mirabilis, Streptococcus sanguinus, Micrococcus spp., Acinetobacter baumanii complex and Gamma Streptococcus. All the organisms in the ‘Others’ group were cultured at least twice from 2 different samples. Candida species, Enterococcus species and Klebsiella species were isolated at higher frequencies in polymicrobial infections than monomicrobial infections. The most common combinations of polymicrobial organisms were CONS and Candida (n = 6) and CONS and Enterococcus faecalis (n = 3). We compared clinical data pertaining to demographics, risk factors and clinical outcomes between infants with polymicrobial bacteremia and gestational age and birthweight matched infants with monomicrobial bacteremia (data from January 2009 to December 2012) (Table 2). We obtained neonatal data for 34 infants with polymicrobial bacteremia and 102 matched infants with monomicrobial bacteremia during that period. The demographics were similar between infants with polymicrobial bacteremia and infants with monomicrobial bacteremia. As expected, the gestational age (30.5 vs. 30.4 wk, p = 0.90) and birth weights (1592 vs. 1535 g, p = 0.77) were adequately matched between controls and cases, respectively. Male sex and age at infection did not differ significantly between cases and controls.
Infant demographics, risk factors and outcomes in cases and controls (Table 2 legend): 95% CI, 95% confidence interval; Bwt, birth weight; GA, gestational age; PN, parenteral nutrition; NEC, necrotizing enterocolitis; BPD, bronchopulmonary dysplasia; IPPV, intermittent positive pressure ventilation; CPAP, continuous positive airway pressure; HFOV, high frequency oscillatory ventilation; INO, inhaled nitric oxide; ECMO, extracorporeal membrane oxygenator; PDA, patent ductus arteriosus; ROP, retinopathy of prematurity; IVH, intraventricular hemorrhage; PVL, periventricular leucomalacia; severe IVH, IVH > grade 2; severe ROP, ROP > stage 2. *p values < 0.05. We observed a significant difference between cases and controls in terms of surgery other than for NEC (p = 0.04) and any surgery (including NEC surgery) (p = 0.05, borderline significance). We observed an increase in the presence of a central venous catheter (97 vs. 91%, p = 0.20) and in the incidence of direct hyperbilirubinemia (50 vs. 32%, p = 0.06), both of which did not reach statistical significance. We did not observe any significant differences in the proportion of infants receiving PN, days on PN, the incidence of NEC, surgery for NEC, congenital heart disease or congenital malformations between cases and controls. We noted a significant difference in associated mortality between cases and controls (47 vs. 20%); the odds ratio adjusted for ‘any surgery’ was 4.3 [95% CI, 1.8 to 10.2] (p = 0.001). A significant increase in the duration of infection (culture positivity) was noted between cases and controls (2.91 vs. 2.07 days, p = 0.02). No significant differences were observed in length of hospital stay, BPD, IPPV at 36 corrected wks, CPAP at 36 corrected wks, HFOV, ECMO, PDA, severe IVH (IVH > grade 2), PVL, severe IVH or PVL, ROP or severe ROP (ROP > stage 2) (p > 0.05). An increase in INO therapy in cases was noted (41 vs. 25%, p = 0.08) but was not statistically significant. In a subgroup analysis of VLBW infants, polymicrobial infections, compared to monomicrobial infections, had a significantly higher mortality (OR 6.4 [95% CI, 1.3 to 30.3]), longer duration of infection (OR 1.4 [95% CI, 1.04 to 1.8]) and a higher incidence of congenital malformations (OR 17.7 [95% CI, 2.1 to 148.7]).
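As a rough cross-check of the mortality finding, the unadjusted odds ratio and its Wald 95% confidence interval can be recomputed from a 2x2 table. The counts below are back-calculated approximately from the reported percentages (34 cases at ~47% mortality, 102 controls at ~20%), so this is only an illustrative sketch; the published estimate of 4.3 additionally adjusts for 'any surgery' in a logistic regression model and therefore differs from the crude value.

import math

# Approximate 2x2 table reconstructed from the reported mortality percentages
# (counts are rounded and purely illustrative; the published OR of 4.3 was
#  additionally adjusted for 'any surgery' in a logistic regression model).
#            died  survived
a, b = 16, 18    # polymicrobial bacteremia (cases),    n = 34,  ~47% mortality
c, d = 20, 82    # monomicrobial bacteremia (controls), n = 102, ~20% mortality

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"crude OR = {odds_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# prints roughly: crude OR = 3.64, 95% CI 1.59 to 8.38,
# consistent with the >3-fold increase in mortality reported above.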
Conclusions
The main aim of the study was to investigate the frequency of polymicrobial infections in our tertiary neonatal intensive care unit, understand their impact on neonatal outcomes and promote research on the management of these infections. Polymicrobial bacteremia is common in neonates and comprises nearly 14% of all infectious episodes. The most common organisms in neonatal polymicrobial infections are CONS, S. aureus, Enterococcus species, E. coli, and Candida species. Surgical intervention is a significant risk factor for neonatal polymicrobial infections. We observed a more than 3-fold increase in mortality in infants with polymicrobial bacteremia and a significant increase in the duration of the bacteremia. Research focused on preventing and improving clinical outcomes in neonatal polymicrobial infections is urgently needed.
[ "Background", "Identification of cases and controls", "Definitions", "Clinical data collection", "Statistical analyses", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Polymicrobial infections increase mortality more than 2-fold in adults and children, increase length of hospital stay and healthcare costs [1,2]. Risk factors for polymicrobial infections in children and adults include the presence of a central venous catheter, administration of parenteral nutrition, gastrointestinal pathology, especially short gut syndrome, use of broad-spectrum antibiotics and immunosuppression [1,3,4].\nNeonatal polymicrobial infections are less well characterized compared to those in children or adults. Increased survival of extremely premature infants at the edge of viability, dependence on catheters and parenteral nutrition (PN), and antibiotic therapy may predispose to polymicrobial infections in neonates. The frequency of neonatal polymicrobial bloodstream infections reported in clinical studies varies from 4 to 24% of all bloodstream infections [5-10]. A standard definition for neonatal polymicrobial infections is lacking and the incidence varies from study to study partly due to variability in definition, neonatal population and practices [11]. Few studies have focused on polymicrobial infections in neonates, especially on risk factors and clinical outcomes in the western world [10-12]. In a neonatal review, the mortality due to polymicrobial infections was 3-fold greater than that of monomicrobial infections (70% vs. 23%) [10]. Organisms that are commonly implicated in neonatal polymicrobial bloodstream infections are coagulase-negative staphylococcus (CONS), Candida spp., Staphylococcus aureus and Enterococcus spp. [1,9,13-16]. A common risk factor appears to be multi-species biofilm infections originating from indwelling medical devices, notably indwelling vascular catheters or endotracheal tubes [1,17].\nPolymicrobial bloodstream infections may be defined as multiple organisms isolated during an infectious episode including those from a single blood specimen [1] or more restrictively as isolation of more than one organism from a single blood specimen only [11,12,17]. Different definitions may partly explain the varying incidence of polymicrobial bloodstream infections reported. Bizzarro et al. (Yale, 1989-2006) and Faix et al. (Ann Arbor, 1971-1986) have reported the only two studies on neonatal polymicrobial infections from North America, [10,11]. Changing epidemiology of neonatal infections dictates a need to update and understand the epidemiology of polymicrobial infections in the hospitalized infant [11]. Lack of data on polymicrobial infections from our neonatal unit and concern of significant increase in mortality, morbidity and healthcare costs due to polymicrobial infections prompted us to investigate its frequency, risk factors and clinical outcomes at Texas Children’s Hospital (TCH).", "We identified blood culture positive infants from the clinical microbiology database, who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database is entered by the microbiologist and all positive and negative cultures including those regarded as contaminants are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranged from 1338 to 1547 during the years 2009 to 2012). 
Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012.", "Polymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1] and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr till two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens and a single culture of the above organisms were deemed contaminant and not included in the analyses.", "Neonatal clinical data including demographics, risk factors and outcomes were identified by cross referencing our institution’s neonatal clinical database (from Vermont Oxford Network (VON) database), for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and broviac catheters but excluded umbilical catheters). We excluded umbilical venous catheters because our neonatal policy is to replace the umbilical venous catheters in the first few days of life with percutaneously inserted central catheters (PICC) and hence most umbilical lines lasted only a few days. Also, most of the bloodstream infections occurred at a time when umbilical catheters were no longer in place. 
We also collected data on important clinical outcomes (mortality, length of hospital stay, bronchopulmonary dysplasia, patent ductus arteriosus (PDA), necrotizing enterocolitis (NEC, ≥ stage 2 by Bell’s classification), intraventricular hemorrhage (IVH), periventricular leucomalacia (PVL) and retinopathy of prematurity (ROP), intermittent positive pressure ventilation (IPPV) at 36 wks corrected gestational age (GA), continuous positive airway pressure (CPAP) at 36 wks corrected GA, high frequency oscillatory ventilation (HFOV), inhaled nitric oxide (INO) therapy, extracorporeal membrane oxygenation (ECMO), surgeries, direct hyperbilirubinemia (direct bilirubin > 2 mg/dl or > 15% of total bilirubin) congenital heart disease and congenital malformations. All outcomes were defined as per VON database definitions.", "Data were analyzed using STATA 11, Stata Corporation, Dallas, USA. Infants with polymicrobial bloodstream infections were compared with three gestational age and birth weight matched infants with monomicrobial bloodstream infections. Continuous data were analyzed for statistical significance by the Student’s t test and categorical data were analyzed by chi squared analyses. A p value of < 0.05 was considered significant. A logistic regression analysis was performed for binary outcomes and odds ratios with 95% confidence intervals were estimated. The outcome of mortality was analyzed adjusting for ‘any surgery’ in the logistic regression model. Polymicrobial infections were used as the outcome of the logistic regression analysis for risk factors and as a covariate when assessing for neonatal outcomes. A subgroup analysis of VLBW infants was performed for mortality and risk factors in the logistic regression model.", "TCH: Texas Children’s Hospital; VON: Vermont Oxford Network; CONS: Coagulase negative Staphylococci; ELBW: Extremely low birth weight; VLBW: Very low birth weight; NICU: Neonatal Intensive Care Unit; PDA: Patent ductus arteriosus; BPD: Bronchopulmonary dysplasia; PVL: Periventricular leucomalacia; NEC: Necrotizing enterocolitis; TPN: Total parenteral nutrition; ROP: Retinopathy of prematurity; IVH: Intraventricular hemorrhage; IPPV: Intermittent positive pressure ventilation; CPAP: Continuous positive airway pressure; HFOV: High frequency oscillatory ventilation; INO: Inhaled nitric oxide.", "None of the authors have any financial interests to disclose or competing interest.", "We state that all the authors satisfy the requirements of author as laid down in ‘Instructions to the Authors’. MP conceived the project, participated in the design, data acquisition, analysis and interpretation, drafted the initial manuscript and approved the final manuscript as submitted. DZ participated in the acquisition of microbiology data and tabulating the data. YJ for clinical data acquisition and revision of the manuscript. PR participated in microbiology data acquisition, analysis and revision of the manuscript. JV for critical intellectual input and revision of the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/14/390/prepub\n" ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Identification of cases and controls", "Definitions", "Clinical data collection", "Statistical analyses", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Polymicrobial infections increase mortality more than 2-fold in adults and children, increase length of hospital stay and healthcare costs [1,2]. Risk factors for polymicrobial infections in children and adults include the presence of a central venous catheter, administration of parenteral nutrition, gastrointestinal pathology, especially short gut syndrome, use of broad-spectrum antibiotics and immunosuppression [1,3,4].\nNeonatal polymicrobial infections are less well characterized compared to those in children or adults. Increased survival of extremely premature infants at the edge of viability, dependence on catheters and parenteral nutrition (PN), and antibiotic therapy may predispose to polymicrobial infections in neonates. The frequency of neonatal polymicrobial bloodstream infections reported in clinical studies varies from 4 to 24% of all bloodstream infections [5-10]. A standard definition for neonatal polymicrobial infections is lacking and the incidence varies from study to study partly due to variability in definition, neonatal population and practices [11]. Few studies have focused on polymicrobial infections in neonates, especially on risk factors and clinical outcomes in the western world [10-12]. In a neonatal review, the mortality due to polymicrobial infections was 3-fold greater than that of monomicrobial infections (70% vs. 23%) [10]. Organisms that are commonly implicated in neonatal polymicrobial bloodstream infections are coagulase-negative staphylococcus (CONS), Candida spp., Staphylococcus aureus and Enterococcus spp. [1,9,13-16]. A common risk factor appears to be multi-species biofilm infections originating from indwelling medical devices, notably indwelling vascular catheters or endotracheal tubes [1,17].\nPolymicrobial bloodstream infections may be defined as multiple organisms isolated during an infectious episode including those from a single blood specimen [1] or more restrictively as isolation of more than one organism from a single blood specimen only [11,12,17]. Different definitions may partly explain the varying incidence of polymicrobial bloodstream infections reported. Bizzarro et al. (Yale, 1989-2006) and Faix et al. (Ann Arbor, 1971-1986) have reported the only two studies on neonatal polymicrobial infections from North America, [10,11]. Changing epidemiology of neonatal infections dictates a need to update and understand the epidemiology of polymicrobial infections in the hospitalized infant [11]. Lack of data on polymicrobial infections from our neonatal unit and concern of significant increase in mortality, morbidity and healthcare costs due to polymicrobial infections prompted us to investigate its frequency, risk factors and clinical outcomes at Texas Children’s Hospital (TCH).", "We hypothesized that polymicrobial infections comprise > 5% of bloodstream infections in infants residing in the neonatal intensive care units, have identifiable risk factors and are associated with higher mortality and morbidity than monomicrobial infections. We tested our hypothesis by performing a single center, retrospective, matched case-control study during a 16 year study period. Our study protocol was approved by the Institutional Review Board at Baylor College of Medicine, Houston. 
We followed the STROBE guidelines in reporting this study [18].\n Identification of cases and controls We identified blood culture positive infants from the clinical microbiology database, who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database is entered by the microbiologist and all positive and negative cultures including those regarded as contaminants are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranged from 1338 to 1547 during the years 2009 to 2012). Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012.\nWe identified blood culture positive infants from the clinical microbiology database, who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database is entered by the microbiologist and all positive and negative cultures including those regarded as contaminants are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranged from 1338 to 1547 during the years 2009 to 2012). Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012.\n Definitions Polymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1] and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr till two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. 
If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens and a single culture of the above organisms were deemed contaminant and not included in the analyses.\nPolymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1] and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr till two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens and a single culture of the above organisms were deemed contaminant and not included in the analyses.\n Clinical data collection Neonatal clinical data including demographics, risk factors and outcomes were identified by cross referencing our institution’s neonatal clinical database (from Vermont Oxford Network (VON) database), for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and broviac catheters but excluded umbilical catheters). We excluded umbilical venous catheters because our neonatal policy is to replace the umbilical venous catheters in the first few days of life with percutaneously inserted central catheters (PICC) and hence most umbilical lines lasted only a few days. 
Clinical data collection

Neonatal clinical data, including demographics, risk factors and outcomes, were identified by cross-referencing our institution's neonatal clinical database (derived from the Vermont Oxford Network (VON) database) for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, and the presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and Broviac catheters; umbilical catheters were excluded). We excluded umbilical venous catheters because our neonatal policy is to replace them in the first few days of life with percutaneously inserted central catheters (PICC), so most umbilical lines lasted only a few days; moreover, most of the bloodstream infections occurred at a time when umbilical catheters were no longer in place. We also collected data on important clinical outcomes: mortality, length of hospital stay, bronchopulmonary dysplasia (BPD), patent ductus arteriosus (PDA), necrotizing enterocolitis (NEC, ≥ stage 2 by Bell's classification), intraventricular hemorrhage (IVH), periventricular leucomalacia (PVL), retinopathy of prematurity (ROP), intermittent positive pressure ventilation (IPPV) at 36 weeks corrected gestational age (GA), continuous positive airway pressure (CPAP) at 36 weeks corrected GA, high frequency oscillatory ventilation (HFOV), inhaled nitric oxide (INO) therapy, extracorporeal membrane oxygenation (ECMO), surgeries, direct hyperbilirubinemia (direct bilirubin > 2 mg/dl or > 15% of total bilirubin), congenital heart disease and congenital malformations. All outcomes were defined as per VON database definitions.
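The variables listed above amount to one record per infant. As a point of reference only, the hypothetical schema below (field names are ours, not VON's) mirrors the demographic, risk factor and outcome fields collected in this study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InfantRecord:
    # demographics
    gestational_age_wks: float
    birth_weight_g: int
    male: bool
    inborn: bool
    age_at_infection_days: int
    # risk factors at the time of the infectious episode
    parenteral_nutrition: bool
    pn_days: Optional[int]
    central_catheter: bool            # PICC or Broviac; umbilical lines excluded
    any_surgery: bool
    surgery_other_than_nec: bool
    # outcomes (VON definitions)
    died: bool
    length_of_stay_days: Optional[int]
    bpd: bool
    nec_stage_2_or_higher: bool
    severe_ivh: bool                  # IVH > grade 2
    pvl: bool
    severe_rop: bool                  # ROP > stage 2
    direct_hyperbilirubinemia: bool   # direct bilirubin > 2 mg/dl or > 15% of total
    polymicrobial: bool               # exposure of interest (case vs. control)
```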
Statistical analyses

Data were analyzed using Stata 11 (StataCorp, College Station, TX, USA). Infants with polymicrobial bloodstream infections were compared with three gestational age and birth weight matched infants with monomicrobial bloodstream infections. Continuous data were analyzed for statistical significance by Student's t test and categorical data by chi-squared tests. A p value of < 0.05 was considered significant. Logistic regression was performed for binary outcomes, and odds ratios (OR) with 95% confidence intervals (CI) were estimated. The outcome of mortality was analyzed adjusting for 'any surgery' in the logistic regression model. Polymicrobial infection was used as the outcome of the logistic regression analysis of risk factors and as a covariate when assessing neonatal outcomes. A subgroup analysis of VLBW infants was performed for mortality and risk factors in the logistic regression model.
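As a rough illustration of this analysis plan rather than the authors' Stata code, the sketch below shows how the same comparisons could be run in Python: a Student's t test for a continuous variable, a chi-squared test for a categorical variable, and a logistic regression returning an odds ratio with its 95% confidence interval for mortality adjusted for 'any surgery'. Column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

def univariate_comparisons(df: pd.DataFrame) -> dict:
    """Case-control comparisons: t test for continuous data, chi-squared for categorical data."""
    cases = df[df["polymicrobial"] == 1]
    controls = df[df["polymicrobial"] == 0]
    # continuous variable (e.g. duration of infection): Student's t test
    t_stat, p_t = stats.ttest_ind(cases["infection_days"], controls["infection_days"])
    # categorical variable (e.g. mortality): chi-squared test on the 2x2 table
    table = pd.crosstab(df["polymicrobial"], df["died"])
    chi2_result = stats.chi2_contingency(table)
    return {"t_p_value": float(p_t), "chi2_p_value": float(chi2_result[1])}

def adjusted_mortality_or(df: pd.DataFrame):
    """Odds ratio (95% CI) for mortality with polymicrobial infection, adjusted for any surgery."""
    X = sm.add_constant(df[["polymicrobial", "any_surgery"]].astype(float))
    fit = sm.Logit(df["died"].astype(float), X).fit(disp=0)
    odds_ratio = float(np.exp(fit.params["polymicrobial"]))
    ci_low, ci_high = np.exp(fit.conf_int().loc["polymicrobial"])  # 95% CI by default
    return odds_ratio, (float(ci_low), float(ci_high))
```

A conditional logistic regression would be a natural alternative that respects the 1:3 matched design; the unconditional model shown here simply mirrors the description in this section.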
Results

We identified 2007 bacteremia episodes during a period of 16 years in patients admitted to the NICUs at TCH; 280 episodes (14%) were polymicrobial (Figure 1). The percentage of annual polymicrobial bacteremia varied during the study period, ranging from 6.3 to 18.6%. Polymicrobial bacteremia identified in single blood specimens constituted on average 56% (range 40 to 75%) of polymicrobial episodes during the 16 year study period.

Figure 1. Frequency of polymicrobial bacteremia episodes. The frequencies of neonatal bacteremia episodes are plotted against the years from 1997 to 2012. The dark shaded portion of the bars indicates the polymicrobial component of the infectious episodes, which ranged from 6.3 to 18.6% and averaged about 14% during the 16-year study period. There was a significant decrease in the number of infections and polymicrobial infections between the time epochs 1998-2009 and 2010-2012 (p < 0.01).

Microbes isolated from monomicrobial (controls) and polymicrobial infections (cases) were similar but varied in frequency (Table 1). Microbes isolated at greater frequencies in polymicrobial infections were Candida spp. (>3%), Enterococcus faecalis (>2-fold) and Klebsiella pneumoniae (>2-fold). The most common combinations of polymicrobial organisms were CONS and Candida spp. (n = 6) and CONS and Enterococcus faecalis (n = 3). Gram positive organisms were isolated more frequently than gram negative organisms, and the proportion of gram positive to gram negative organisms was similar in monomicrobial (gram positive 51%, gram negative 42%) and polymicrobial infections (gram positive 48%, gram negative 40%).

Table 1. Microbiology of monomicrobial and polymicrobial bloodstream infections. In monomicrobial bloodstream infections, 102 organisms were isolated from 102 infectious episodes; in polymicrobial bloodstream infections, 74 organisms were isolated from 34 infectious episodes. 'Others' include Streptococcus salivarius, Citrobacter freundii and Clostridium tertium for monomicrobial infections, and Lactobacillus, Proteus mirabilis, Streptococcus sanguinus, Micrococcus spp., Acinetobacter baumannii complex and Gamma Streptococcus for polymicrobial infections. All organisms in the 'Others' group were cultured at least twice from 2 different samples. Candida species, Enterococcus species and Klebsiella species were isolated at higher frequencies in polymicrobial than in monomicrobial infections. The most common combinations of polymicrobial organisms were CONS and Candida spp. (n = 6) and CONS and Enterococcus faecalis (n = 3).

We compared clinical data pertaining to demographics, risk factors and clinical outcomes between infants with polymicrobial bacteremia and gestational age and birth weight matched infants with monomicrobial bacteremia (data from January 2009 to December 2012) (Table 2). We obtained neonatal data for 34 infants with polymicrobial bacteremia and 102 matched infants with monomicrobial bacteremia during that period. Demographics were similar between infants with polymicrobial and monomicrobial bacteremia. As expected, gestational age (30.5 vs. 30.4 wk, p = 0.90) and birth weight (1592 vs. 1535 g, p = 0.77) were adequately matched between controls and cases, respectively.
Male sex and age at infection did not differ significantly between cases and controls.

Table 2. Infant demographics, risk factors and outcomes in cases and controls. Abbreviations: 95% CI, 95% confidence interval; Bwt, birth weight; GA, gestational age; PN, parenteral nutrition; NEC, necrotizing enterocolitis; BPD, bronchopulmonary dysplasia; IPPV, intermittent positive pressure ventilation; CPAP, continuous positive airway pressure; HFOV, high frequency oscillatory ventilation; INO, inhaled nitric oxide; ECMO, extracorporeal membrane oxygenation; PDA, patent ductus arteriosus; ROP, retinopathy of prematurity; IVH, intraventricular hemorrhage; PVL, periventricular leucomalacia; severe IVH, IVH > grade 2; severe ROP, ROP > stage 2. *p values < 0.05.

We observed a significant difference between cases and controls in surgery other than for NEC (p = 0.04) and any surgery (including NEC surgery) (p = 0.05, borderline significance). We observed an increase in the presence of a central venous catheter (97 vs. 91%, p = 0.20) and in the incidence of direct hyperbilirubinemia (50 vs. 32%, p = 0.06), neither of which reached statistical significance. We did not observe any significant differences between cases and controls in the proportion of infants receiving PN, days on PN, or the incidence of NEC, surgery for NEC, congenital heart disease or congenital malformations.

We noted a significant difference in associated mortality between cases and controls (47 vs. 20%); the odds ratio adjusted for 'any surgery' was 4.3 [95% CI, 1.8 to 10.2] (p = 0.001). A significant increase in the duration of infection (culture positivity) was noted between cases and controls (2.91 vs. 2.07 days, p = 0.02). No significant differences were observed in length of hospital stay, BPD, IPPV at 36 corrected weeks, CPAP at 36 corrected weeks, HFOV, ECMO, PDA, severe IVH (IVH > grade 2), PVL, severe IVH or PVL, ROP or severe ROP (ROP > stage 2) (p > 0.05). An increase in INO therapy in cases was noted (41 vs. 25%, p = 0.08) but was not statistically significant.

In a subgroup analysis of VLBW infants, polymicrobial infections, compared to monomicrobial infections, were associated with significantly higher mortality (OR 6.4 [95% CI 1.3 to 30.3]), longer duration of infection (OR 1.4 [95% CI 1.04 to 1.8]) and a higher incidence of congenital malformations (OR 17.7 [95% CI 2.1 to 148.7]).

Discussion

We performed a case-control study of neonatal polymicrobial infections in a tertiary hospital in North America and identified polymicrobial bloodstream infections in nearly 14% of bloodstream infections. We also probed the electronic neonatal clinical database over a four year period, from January 2009 to December 2012, for relevant clinical data including risk factors and outcomes. We observed that polymicrobial bloodstream infections were associated with increased mortality (>3-fold) and increased duration of infection compared to monomicrobial bloodstream infections. Surgical intervention excluding NEC surgery was a significant risk factor for neonatal polymicrobial infections.

The annual frequency of polymicrobial bloodstream infections in our study was approximately 14% over the 16 year study period. The frequency of neonatal polymicrobial infections reported in the literature is variable, ranging from 3.9 to 25% [10-12,17,19-21]. Contaminated multi-dose lipid emulsions were responsible for the high frequency (25%) of polymicrobial infections in one study [20]. In most studies that report neonatal infections, the isolation of multiple organisms is often not discussed.
The higher frequency of polymicrobial infections in our study compared to other reports in the literature may be due to two reasons. First is the varying definition of polymicrobial infections across studies. We defined polymicrobial bloodstream infections as multiple organisms isolated during an infectious episode, including those from a single blood specimen, similar to Sutter et al., as we believe this is a true reflection of a polymicrobial infection [1]. Other studies defined polymicrobial infections as multiple pathogens isolated from a single blood specimen, which may account for their lower incidence of polymicrobial infections [11,12,17]. Using the latter definition, the annual frequency of polymicrobial bloodstream infections in our study would be approximately 7.8%, comparable to the 10% frequency reported in North America by Bizzarro et al. [11]. Secondly, the increased incidence of polymicrobial bloodstream infections in our study may also relate to the large percentage of medically complex infants referred to our NICU, including infants requiring surgical interventions and infants with complex congenital heart disease, ECMO, abdominal wall defects, short gut syndrome and other congenital anomalies. This complex group of infants often requires indwelling vascular catheters for parenteral nutrition or medications for extended periods of time. The annual frequency of polymicrobial bacteremia in the neonatal intensive care unit in our study varied from 6.3 to 18.6% of all bacteremia during the 16-year study period. This annual variation may be due in part to expansion of the NICU, changes in the case-mix and referral patterns, nursing policy changes or implementation of catheter care bundles. There was a significant decrease in the number of infections and polymicrobial infections between the time epochs 1998-2009 and 2010-2012 (p < 0.01). We did not note any significant differences in the number of admissions of VLBW and ELBW infants. A dedicated vascular access team was formed in June 2009 and catheter care bundles were better implemented from that time, which, along with increased compliance with hand hygiene and other infection control measures, may have contributed to the decrease in infections.

In our study, cases and controls had similar demographics: a mean gestational age of approximately 30 weeks, a birth weight of approximately 1500 g, a similar male to female ratio and a similar percentage being inborn. We focused on risk factors reported for polymicrobial infections in the existing literature. The average age at the onset of infection was 41 days in controls and 38 days in cases; hence all infections were in the late onset sepsis category and mostly occurred beyond the first 28 days of life. Because almost all infections were late-onset, we did not collect data on risk factors for early onset sepsis such as maternal prolonged rupture of membranes, group B streptococcus colonization, urinary tract infection or chorioamnionitis. Subgroups of infections in neonates less than 28 days of age or in those admitted from home were too small for meaningful comparisons. We noted a significant association of polymicrobial infections with surgery other than NEC and a trend towards association with direct jaundice. Cases had a higher incidence of central venous catheters than controls (97 vs. 91%, p = 0.20), but this association was not statistically significant.
In our case-control study, we observed a more than 3-fold increase in associated mortality in neonates with polymicrobial bacteremia, although the retrospective, database-oriented nature of our study precludes any conclusion of causality. Mortality due to polymicrobial infections has been reported to be at least 2-fold higher than that of monomicrobial infections in both adults and children [1,2]. Faix et al. reported an almost 3-fold increase in mortality due to polymicrobial infections in neonates in their study of 15 cases of polymicrobial infection from 1971-86 [10]. More recent studies by Bizzarro et al. (study period 1989-2006), Tsai et al. (study period 2004 to 2011) and Gupta et al. (a 1 year period in the early 2000s) did not report an increase in mortality attributable to polymicrobial infections [11,12,17]. The variability in mortality rates reported by different studies of polymicrobial infections may be multifactorial, reflecting differences in study design, variable definitions of polymicrobial infection, patient populations, study periods, virulence of the organisms isolated or other unidentified factors [11]. The mechanisms for increased mortality in polymicrobial infections are not clear. Similar increases in mortality are observed in animal models of systemic and local polymicrobial infections [22-25]. We have observed increased catheter infection and systemic dissemination in a polymicrobial biofilm catheter infection model compared to monomicrobial infection [11]. The increased mortality may also arise from inherent host vulnerability that causes the infection in the first place (e.g. prematurity, short gut) or from facilitation of one infection by the other (synergism) [26]. Polymicrobial interactions in polymicrobial environments, such as mixed-species biofilms on catheters, may induce a host of synergistic mechanisms, including quorum sensing and induction of virulence among others, that may contribute to enhanced mortality or morbidity of the host [12].

We also noted an increase in the duration of infection in cases compared to controls, which may be due to increased host susceptibility or to a synergistic effect of the polymicrobial infection. We did not observe any differences in length of hospital stay or in other clinically relevant neonatal outcomes such as BPD, IPPV or CPAP at 36 corrected weeks, HFOV, ECMO, ROP, NEC, IVH or PVL. We observed an increase in the use of INO therapy in cases compared to controls, which was not statistically significant.

The organisms isolated in our study were mostly similar in monomicrobial and polymicrobial bloodstream infections, similar in antibiotic susceptibility patterns, and comparable to the organisms from polymicrobial infections reported in the literature. The most common organisms isolated in both monomicrobial and polymicrobial infections were CONS, Staphylococcus aureus, Escherichia coli, Enterococcus species, Klebsiella spp. and Candida spp. Infections due to Candida spp., Enterococcus species and Klebsiella spp. increased in frequency in cases compared to controls. The most frequent polymicrobial combinations were CONS with Candida spp. and other organisms with Enterococcus faecalis, similar to the reported literature [1,14,27]. Studies from Asia, by Tsai et al. and Gupta et al., report a high incidence of Gram negative infections (approximately 60%) in the polymicrobial group [12,17].
In the North American study by Bizzarro et al., polymicrobial infections were predominantly caused by Gram positive organisms (77%), similar to our study [11]. Geographic variation in the organisms isolated from neonatal polymicrobial infections and in their antibiotic susceptibility patterns may partly explain the variation in mortality and morbidity due to polymicrobial infections across the world. In our study we did not discern any differences between the treatment regimens used for polymicrobial and monomicrobial infections that could explain the differences in mortality. Polymicrobial infections were treated adequately for all the organisms isolated, and the duration of therapy was consistent with our written neonatal guidelines.

Limitations of our study include its retrospective, observational, case-control design. Bloodstream episodes were identified from the clinical microbiology database, and hence clinical data on the severity of each infection were not available. Clinical risk factors and outcomes were identified from the clinical database, which was available only for a 4 year period. Because this was a retrospective study, it is difficult to assign a causal relationship to the increased mortality associated with polymicrobial infections. We did not assess long term developmental or growth outcomes, as longitudinal assessment data were not available.

The Human Microbiome Project and other microbiome studies emphasize the polymicrobial nature of organism communities in the human body [28,29]. The availability of molecular, culture-independent methods for the detection of sepsis may increase the identification of polymicrobial infections [30]. Guidelines or recommendations for therapeutic and preventive strategies against polymicrobial infections do not exist. Prolonged therapy with antibiotics targeting all organisms isolated in a polymicrobial infection and removal of infected catheters remain the mainstays of therapy. Defining the epidemiology and clinical impact of polymicrobial infections may be the first step towards delineating optimal therapy and clinical outcomes. Preventive strategies should emphasize catheter care bundles that decrease central line associated bloodstream infections. Focused research is necessary to prevent and treat polymicrobial infections and to improve clinical outcomes in the vulnerable neonate.

Conclusions

The main aim of this study was to investigate the frequency of polymicrobial infections in our tertiary neonatal intensive care unit, understand their impact on neonatal outcomes and promote research in the management of these infections. Polymicrobial bacteremia is common in neonates, comprising nearly 14% of all infectious episodes. The most common organisms in neonatal polymicrobial infections are CONS, S. aureus, Enterococcus species, E. coli and Candida species. Surgical intervention is a significant risk factor for neonatal polymicrobial infections. We observed a more than 3-fold increase in mortality in infants with polymicrobial bacteremia and a significant increase in the duration of bacteremia.
Research focused on the prevention of neonatal polymicrobial infections and on improving their clinical outcomes is urgently needed.

Abbreviations

TCH: Texas Children's Hospital; VON: Vermont Oxford Network; CONS: Coagulase negative Staphylococci; ELBW: Extremely low birth weight; VLBW: Very low birth weight; NICU: Neonatal Intensive Care Unit; PDA: Patent ductus arteriosus; BPD: Bronchopulmonary dysplasia; PVL: Periventricular leucomalacia; NEC: Necrotizing enterocolitis; TPN: Total parenteral nutrition; ROP: Retinopathy of prematurity; IVH: Intraventricular hemorrhage; IPPV: Intermittent positive pressure ventilation; CPAP: Continuous positive airway pressure; HFOV: High frequency oscillatory ventilation; INO: Inhaled nitric oxide.

Competing interests

None of the authors have any financial interests to disclose or competing interests.

Authors' contributions

All the authors satisfy the requirements of authorship as laid down in the 'Instructions to the Authors'. MP conceived the project; participated in the design, data acquisition, analysis and interpretation; drafted the initial manuscript; and approved the final manuscript as submitted. DZ participated in the acquisition and tabulation of the microbiology data. YJ participated in clinical data acquisition and revision of the manuscript. PR participated in microbiology data acquisition, analysis and revision of the manuscript. JV provided critical intellectual input and revision of the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/14/390/prepub
[ null, "methods", null, null, null, null, "results", "discussion", "conclusions", null, null, null, null ]
[ "Neonate", "Infant", "NICU", "Sepsis", "Polymicrobial", "Mortality" ]
Identification of cases and controls We identified blood culture positive infants from the clinical microbiology database, who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database is entered by the microbiologist and all positive and negative cultures including those regarded as contaminants are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranged from 1338 to 1547 during the years 2009 to 2012). Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012. We identified blood culture positive infants from the clinical microbiology database, who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database is entered by the microbiologist and all positive and negative cultures including those regarded as contaminants are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranged from 1338 to 1547 during the years 2009 to 2012). Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012. Definitions Polymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1] and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr till two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. 
Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens and a single culture of the above organisms were deemed contaminant and not included in the analyses. Polymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1] and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr till two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens and a single culture of the above organisms were deemed contaminant and not included in the analyses. Clinical data collection Neonatal clinical data including demographics, risk factors and outcomes were identified by cross referencing our institution’s neonatal clinical database (from Vermont Oxford Network (VON) database), for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and broviac catheters but excluded umbilical catheters). We excluded umbilical venous catheters because our neonatal policy is to replace the umbilical venous catheters in the first few days of life with percutaneously inserted central catheters (PICC) and hence most umbilical lines lasted only a few days. Also, most of the bloodstream infections occurred at a time when umbilical catheters were no longer in place. 
We also collected data on important clinical outcomes (mortality, length of hospital stay, bronchopulmonary dysplasia, patent ductus arteriosus (PDA), necrotizing enterocolitis (NEC, ≥ stage 2 by Bell’s classification), intraventricular hemorrhage (IVH), periventricular leucomalacia (PVL) and retinopathy of prematurity (ROP), intermittent positive pressure ventilation (IPPV) at 36 wks corrected gestational age (GA), continuous positive airway pressure (CPAP) at 36 wks corrected GA, high frequency oscillatory ventilation (HFOV), inhaled nitric oxide (INO) therapy, extracorporeal membrane oxygenation (ECMO), surgeries, direct hyperbilirubinemia (direct bilirubin > 2 mg/dl or > 15% of total bilirubin) congenital heart disease and congenital malformations. All outcomes were defined as per VON database definitions. Neonatal clinical data including demographics, risk factors and outcomes were identified by cross referencing our institution’s neonatal clinical database (from Vermont Oxford Network (VON) database), for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and broviac catheters but excluded umbilical catheters). We excluded umbilical venous catheters because our neonatal policy is to replace the umbilical venous catheters in the first few days of life with percutaneously inserted central catheters (PICC) and hence most umbilical lines lasted only a few days. Also, most of the bloodstream infections occurred at a time when umbilical catheters were no longer in place. We also collected data on important clinical outcomes (mortality, length of hospital stay, bronchopulmonary dysplasia, patent ductus arteriosus (PDA), necrotizing enterocolitis (NEC, ≥ stage 2 by Bell’s classification), intraventricular hemorrhage (IVH), periventricular leucomalacia (PVL) and retinopathy of prematurity (ROP), intermittent positive pressure ventilation (IPPV) at 36 wks corrected gestational age (GA), continuous positive airway pressure (CPAP) at 36 wks corrected GA, high frequency oscillatory ventilation (HFOV), inhaled nitric oxide (INO) therapy, extracorporeal membrane oxygenation (ECMO), surgeries, direct hyperbilirubinemia (direct bilirubin > 2 mg/dl or > 15% of total bilirubin) congenital heart disease and congenital malformations. All outcomes were defined as per VON database definitions. Statistical analyses Data were analyzed using STATA 11, Stata Corporation, Dallas, USA. Infants with polymicrobial bloodstream infections were compared with three gestational age and birth weight matched infants with monomicrobial bloodstream infections. Continuous data were analyzed for statistical significance by the Student’s t test and categorical data were analyzed by chi squared analyses. A p value of < 0.05 was considered significant. A logistic regression analysis was performed for binary outcomes and odds ratios with 95% confidence intervals were estimated. The outcome of mortality was analyzed adjusting for ‘any surgery’ in the logistic regression model. Polymicrobial infections were used as the outcome of the logistic regression analysis for risk factors and as a covariate when assessing for neonatal outcomes. 
A subgroup analysis of VLBW infants was performed for mortality and risk factors in the logistic regression model. Data were analyzed using STATA 11, Stata Corporation, Dallas, USA. Infants with polymicrobial bloodstream infections were compared with three gestational age and birth weight matched infants with monomicrobial bloodstream infections. Continuous data were analyzed for statistical significance by the Student’s t test and categorical data were analyzed by chi squared analyses. A p value of < 0.05 was considered significant. A logistic regression analysis was performed for binary outcomes and odds ratios with 95% confidence intervals were estimated. The outcome of mortality was analyzed adjusting for ‘any surgery’ in the logistic regression model. Polymicrobial infections were used as the outcome of the logistic regression analysis for risk factors and as a covariate when assessing for neonatal outcomes. A subgroup analysis of VLBW infants was performed for mortality and risk factors in the logistic regression model. Identification of cases and controls: We identified blood culture positive infants from the clinical microbiology database, who were admitted to the neonatal intensive care units (NICUs) III and II at TCH from January 1, 1997 to December 31, 2012. Infants more than 28 days of age in the NICU at the time of the infection and those admitted from home to the NICU with positive blood cultures were also included. The data in the clinical microbiology database is entered by the microbiologist and all positive and negative cultures including those regarded as contaminants are recorded. The tertiary NICU at TCH has approximately 1500 admissions per year including inborn and outborn transferred neonates (ranged from 1338 to 1547 during the years 2009 to 2012). Very low birth weight (VLBW, birth weight < 1500 g) infant admissions ranged from 222 to 291 infants during the years 2009 to 2012. Definitions: Polymicrobial bloodstream infection was defined as: i) isolation of more than one organism from a single blood culture specimen and ii) isolation of more than one organism in different blood culture specimens during the same bloodstream infectious episode. We defined ‘bloodstream infection episodes’ as time-periods associated with positive blood cultures [1] and the infection episode was considered resolved when at least two subsequent blood cultures performed every 24 hrs were negative. Usually when an organism is isolated from a blood culture specimen, blood cultures are repeated every 24 hr till two blood culture specimens are negative for organisms. Our neonatal units used only aerobic blood cultures and it is not routine to perform anaerobic blood cultures for the evaluation of neonatal sepsis. Single specimen polymicrobial infections were defined as more than one organism isolated from the same blood culture specimen. We also collected data regarding duration of the infection episode from culture positivity to culture negativity. For each polymicrobial bloodstream infection (case), we selected three gestational age matched neonates with comparable birth weights with monomicrobial bloodstream infection (control) during the time period 2009 to 2012. The matching was performed by an investigator who was blinded to neonatal risk factors and outcomes. 
An infant who had both polymicrobial and monomicrobial infections was included in the polymicrobial infection category for evaluation of neonatal outcomes because of the possibility that even one exposure to polymicrobial infection may increase mortality or morbidity. If an infant had multiple episodes of polymicrobial infections, the first polymicrobial infection episode was analysed. Coagulase negative staphylococcal (CONS) infections, skin flora (e.g. Micrococcus spp., Gamma Streptococci) and organisms infrequent in neonates (e.g. Bacillus spp.) were considered real infections when grown from two clinical specimens and a single culture of the above organisms were deemed contaminant and not included in the analyses. Clinical data collection: Neonatal clinical data including demographics, risk factors and outcomes were identified by cross referencing our institution’s neonatal clinical database (from Vermont Oxford Network (VON) database), for a 4 year period from January 1, 2009 to December 31, 2012. We collected the following clinical data for the identified cases and controls: demographic data (gestational age, birth weight, sex, age at the start of the infection episode and whether inborn or outborn), parenteral nutrition (PN) administration and its duration, presence of intravenous catheters at the time of the infectious episode (percutaneously inserted central catheters and broviac catheters but excluded umbilical catheters). We excluded umbilical venous catheters because our neonatal policy is to replace the umbilical venous catheters in the first few days of life with percutaneously inserted central catheters (PICC) and hence most umbilical lines lasted only a few days. Also, most of the bloodstream infections occurred at a time when umbilical catheters were no longer in place. We also collected data on important clinical outcomes (mortality, length of hospital stay, bronchopulmonary dysplasia, patent ductus arteriosus (PDA), necrotizing enterocolitis (NEC, ≥ stage 2 by Bell’s classification), intraventricular hemorrhage (IVH), periventricular leucomalacia (PVL) and retinopathy of prematurity (ROP), intermittent positive pressure ventilation (IPPV) at 36 wks corrected gestational age (GA), continuous positive airway pressure (CPAP) at 36 wks corrected GA, high frequency oscillatory ventilation (HFOV), inhaled nitric oxide (INO) therapy, extracorporeal membrane oxygenation (ECMO), surgeries, direct hyperbilirubinemia (direct bilirubin > 2 mg/dl or > 15% of total bilirubin) congenital heart disease and congenital malformations. All outcomes were defined as per VON database definitions. Statistical analyses: Data were analyzed using STATA 11, Stata Corporation, Dallas, USA. Infants with polymicrobial bloodstream infections were compared with three gestational age and birth weight matched infants with monomicrobial bloodstream infections. Continuous data were analyzed for statistical significance by the Student’s t test and categorical data were analyzed by chi squared analyses. A p value of < 0.05 was considered significant. A logistic regression analysis was performed for binary outcomes and odds ratios with 95% confidence intervals were estimated. The outcome of mortality was analyzed adjusting for ‘any surgery’ in the logistic regression model. Polymicrobial infections were used as the outcome of the logistic regression analysis for risk factors and as a covariate when assessing for neonatal outcomes. 
A subgroup analysis of VLBW infants was performed for mortality and risk factors in the logistic regression model. Results: We identified 2007 bacteremia episodes during a period of 16 years in patients admitted to the NICUs at TCH; 280 episodes (14%) were polymicrobial (Figure 1). The percentage of annual polymicrobial bacteremia varied during the study period ranging from 6.3 to 18.6%. Polymicrobial bacteremia identified in single blood specimens constituted on an average of 56% (range 40 to 75%) during the 16 year study period. Frequency of polymicrobial bacteremia episodes. The frequencies of neonatal bacteremia episodes are plotted against the years from 1997 to 2012. The dark shaded portion of the bars indicate polymicrobial component of the infectious episodes, which ranged from 6.3 to 18.6% and on an average of about 14% during the 16-year study period. There were significant decrease in the number of infections and polymicrobial infections between the time epochs, 1998-2009 and 2010-2012 (p < 0.01). Microbes isolated from monomicrobial (controls) and polymicrobial infections (cases) were similar but varied in frequency (Table 1). Microbes isolated at greater frequencies in polymicrobial infections were Candida spp. (>3%), Enterococcus faecalis (>2-fold) and Klebsiella pneumoniae (>2-fold). The most common combinations of polymicrobial organisms were CONS and Candida spp. (n = 6) and CONS and Enterococcus faecalis (n = 3). Gram positive organisms were isolated more frequently than gram negative organisms and the proportion of gram positive to gram negative organisms were similar in both monomicrobial (gram positive 51% and gram negative 42%) and polymicrobial infections (gram positive 48% and gram negative 40%). Microbiology of monomicrobial and polymicrobial bloodstream infections In monomicrobial bloodstream infections, 102 organisms were isolated from 102 infectious episodes. In polymicrobial bloodstream infections, 74 organisms were isolated from 34 infectious episodes. ‘Others’ include Streptococcus salivarius, Citrobacter freundii, Clostridium tertium for monomicrobial infections and for polymicrobial infections; they were Lactobacillus, Proteus mirabilis, Streptococcus sanguinus, Micrococcus spp., Acinetobacter baumanii complex and Gamma Streptococcus. All the organisms in the ‘Others’ group was cultured at least twice from 2 different samples. Candida species, Enterococcus species and Klebsiella species were isolated at higher frequencies in polymicrobial infections than monomicrobial infections. The most common combination of polymicrobial organisms were CONS and Candida (n = 6) and CONS and Enterococcus fecalis (n = 3). We compared clinical data pertaining to demographics, risk factors and clinical outcomes between infants with polymicrobial bacteremia and gestational age and birthweight matched infants with monomicrobial bacteremia (data from January 2009 to December 2012) (Table 2). We obtained neonatal data for 34 infants with polymicrobial bacteremia and 102 matched infants with monomicrobial bacteremia during that period. The demographics were similar between infants with polymicrobial compared to infants with monomicrobial bacteremia. As expected, the gestational age (30.5 vs. 30.4 wk, p = 0.90) and birth weights (1592 vs.1535 g, p = 0.77) were adequately matched between controls and cases respectively. Male sex and age at infection did not differ significantly between cases and controls. 
Infant demographics, risk factors and outcomes in cases and controls 95% CI- 95% confidence intervals, Bwt-Birth weight, GA-gestational age, PN- parenteral nutrition, NEC- necrotizing enterocolitis, BPD-bronchopulmonary dysplasia, IPPV- Intermittent positive pressure ventilation, CPAP- continuous positive airway pressure, HFOV- High frequency oscillatory ventilation, INO- inhaled nitric oxide, ECMO- extracorporeal membrane oxygenator, PDA-patent ductus arteriosus, ROP-retinopathy of prematurity, IVH-intraventricular hemorrhage, PVL- periventricular leucomalacia, severe IVH- IVH > grade 2, severe ROP- ROP > stage 2. *p values < 0.05. We observed a significant difference between cases and controls in terms of surgery other than for NEC (p = 0.04) and any surgery (including NEC surgery) (p = 0.05, borderline significance). We observed an increase in the presence of a central venous catheter (97 vs. 91%, p =0.20) and incidence of direct hyperbilirubinemia (50 vs. 32%, p = 0.06) both of which did not reach statistical significance. We did not observe any significant differences in the proportion of infants receiving PN, days on PN, the incidence of NEC, surgery for NEC, congenital heart disease or congenital malformations between cases and controls. We noted a significant difference in associated mortality between controls and cases (47 vs. 20%), odds ratio adjusted for ‘any surgery’, 4.3 [95% CI, 1.8 to 10.2] (p = 0.001). A significant increase in the duration of infection (culture positivity) was noted between cases and controls (2.91 vs. 2.07 days, p = 0.02). No significant differences were observed in length of hospital stay, BPD, IPPV at 36 corrected wks, CPAP at 36 corrected wks, HFOV, ECMO, PDA, severe IVH (IVH > grade 2), PVL, severe IVH or PVL, ROP or severe ROP (ROP > stage 2) (p > 0.05). An increase in INO therapy in cases were noted (41 vs. 25%, p = 0.08) but was not statistically significant. In a subgroup analysis of VLBW infants, polymicrobial infections compared to monomicrobial infections, had a significantly higher mortality (OR 6.4 [95% CI 1.3 to 30.3], longer duration of infection (OR 1.4 [95% CI 1.04 to 1.8] and higher incidence of congenital malformations (OR 17.7 [95% CI, 2.1 to 148.7). Discussion: We performed a case-control study of neonatal polymicrobial infections in a tertiary hospital in North America, and identified polymicrobial bloodstream infections in nearly 14% of bloodstream infections. We also probed the electronic neonatal clinical database over a four year period from January 2009 to December 2012 for relevant clinical data including risk factors and outcomes. We observed that polymicrobial bloodstream infections were associated with an increased mortality (>3-fold) and increased duration of infection compared to monomicrobial bloodstream infections. Surgical intervention excluding NEC surgery was a significant risk factor for neonatal polymicrobial infections. The annual frequency of polymicrobial bloodstream infections in our study was approximately 14% over a 16 year study period. The frequency of neonatal polymicrobial infections reported in literature is variable, ranging from 3.9 to 25% [10-12,17,19-21]. Contaminated multi-dose lipid emulsions were responsible for the high frequency (25%) of polymicrobial infections in one study [20]. In most studies that report neonatal infections, the isolation of multiple organisms is often not discussed. 
The higher frequency of polymicrobial infections in our study compared to other reports in literature may be due to two reasons. First is the varying definition of polymicrobial infections in the different studies. We defined polymicrobial bloodstream infections as multiple organisms isolated during an infectious episode including those from a single blood specimen similar to Sutter et al., as we believe that this is a true reflection of a polymicrobial infection [1]. Other studies defined polymicrobial infections as multiple pathogens isolated from a single blood specimen, which may account for the lower incidence of polymicrobial infections [11,12,17]. Using the latter definition the annual frequency of polymicrobial bloodstream infections in our study would be approximately 7.8%, comparable to the 10% frequency reported in North American by Bizzarro et al. [11]. Secondly, increased incidence of polymicrobial bloodstream infections in our study may also relate to the large percentage of medically complex infants referred to our NICU including infants requiring surgical interventions, complex congenital heart disease, ECMO, abdominal wall defects, short gut syndrome and other congenital anomalies. This complex group of infants often requires indwelling vascular catheters for parenteral nutrition or medications for extended periods of time. The annual frequency of polymicrobial bacteremia in the neonatal intensive care unit in our study varied from 6.3 to 18.6% of all bacteremia during the 16-year study period. This annual variation may be in part due to expansion of the NICU, changes in the case-mix and referral patterns, nursing policy changes or implementation of catheter care bundles. There were significant decrease in the number of infections and polymicrobial infections between the time epochs, 1998-2009 and 2010-2012 (p < 0.01). We did not note any significant differences in the number of admissions of VLBW and ELBW infants. A dedicated vascular access team was formed in June 2009 followed by better implementation of catheter care bundles in June 2009, which along with increased compliance with hand hygiene and other infection control measures may have contributed to the decrease in infections. In our study, cases and controls had similar demographics, a mean gestational age of approximately 30 weeks, birth weight of approximately 1500 g, similar male to female ratio and similar percentage being inborn. We focused on risk factors reported for polymicrobial infections in existing literature. The average age at the onset of the infections was 41 days in controls and 38 days in cases and hence all infections were in the late onset sepsis category and mostly occurred beyond the first 28 days of life. Almost all of the infections were late-onset infections and hence we did not collect data on risk factors for early onset sepsis such as maternal prolonged rupture of membranes, group B streptococcus colonization, urinary tract infection or chorioamnionitis. Subgroups of infections in neonates less than 28 days of age or those that were admitted from home were too small for meaningful comparisons. We noted a significant association of polymicrobial infections with surgery other than NEC and a trend towards association with direct jaundice. Cases had a higher incidence of central venous catheter compared to controls (97 vs. 91%, p = 0.20) but this association was not statistically significant. 
In our case-control study, we observed a more than 3-fold increase in associated mortality in neonates with polymicrobial bacteremia, but the retrospective, database-oriented nature of our study precludes any conclusion of causality. Mortality due to polymicrobial infections has been reported to be at least 2-fold higher than that of monomicrobial infections in both adults and children [1,2]. Faix et al. reported an almost 3-fold increase in mortality due to polymicrobial infections in neonates in their study of 15 cases of polymicrobial infections from 1971-86 [10]. More recent studies by Bizzarro et al. (study period 1989-2006), Tsai et al. (study period 2004 to 2011) and Gupta et al. (a 1-year period in early 2000) did not report an increase in mortality attributable to polymicrobial infections [11,12,17]. The variability in mortality rates reported by different studies on polymicrobial infections may be multifactorial, including differences in study design, variable definitions of polymicrobial infections, patient population, study periods, virulence of the organisms isolated or other unidentified factors [11]. The mechanisms for increased mortality in polymicrobial infections are not clear. Similar increases in mortality are observed in animal models of systemic and local polymicrobial infections [22-25]. We have observed increased catheter infection and systemic dissemination in a polymicrobial biofilm catheter infection model compared to monomicrobial infection [11]. The increased mortality may also arise from inherent host vulnerability that causes the infection in the first place (e.g. prematurity, short gut) or from facilitation of one infection by the other (synergism) [26]. Polymicrobial interactions in polymicrobial environments, such as mixed-species biofilms on catheters, may induce a host of synergistic mechanisms, including quorum sensing and induction of virulence among others, that may contribute to enhanced mortality or morbidity of the host [12]. We also noted an increase in the duration of infection in cases compared to controls, which may be due to increased host susceptibility or a synergistic effect of the polymicrobial infection. We did not observe any differences in length of hospital stay or other clinically relevant neonatal outcomes such as BPD, IPPV or CPAP at 36 corrected weeks, HFOV, ECMO, ROP, NEC, IVH or PVL. We observed an increase in the use of INO therapy in cases compared to controls, which was not statistically significant. The organisms isolated in our study were largely similar between monomicrobial and polymicrobial bloodstream infections, had similar antibiotic susceptibility patterns and were comparable to the organisms from polymicrobial infections reported in the literature. The most common organisms isolated in both monomicrobial and polymicrobial infections were CONS, Staphylococcus aureus, Escherichia coli, Enterococcus species, Klebsiella spp. and Candida spp. Infections due to Candida spp., Enterococcus species and Klebsiella spp. increased in frequency in cases compared to controls. The most frequent polymicrobial combinations were CONS with Candida spp. and other organisms with Enterococcus faecalis, similar to the reported literature [1,14,27]. Studies from Asia by Tsai et al. and Gupta et al. report a high incidence of Gram-negative infections (approximately 60%) in the polymicrobial group [12,17].
In the North American study by Bizzarro et al., polymicrobial infections were predominantly caused by Gram-positive organisms (77%), similar to our study [11]. Geographic variations in the organisms isolated from neonatal polymicrobial infections and in their antibiotic susceptibility patterns may partly explain the variation in mortality and morbidity due to polymicrobial infections across the world. In our study, we did not discern any differences between the treatment regimens used for polymicrobial infections and monomicrobial infections that could explain the differences in mortality. Polymicrobial infections were treated adequately for all the organisms isolated, and the duration of therapy was consistent with our written neonatal guidelines. Limitations of our study include its retrospective, observational, case-control design. Bloodstream infection episodes were identified from the clinical microbiology database, and hence the clinical data associated with the infections and their severity were not available. The clinical risk factors and outcomes were identified from the clinical database, which was available only for a 4-year period. Being a retrospective study, it is difficult to assign a causal relationship to the increased mortality associated with polymicrobial infections in our study. We did not assess long-term developmental or growth outcomes, as longitudinal assessment data were not available. The human microbiome project and other microbiome studies emphasize the polymicrobial nature of organism communities in the human body [28,29]. The availability of molecular, culture-independent methods for the detection of sepsis may increase the identification of polymicrobial infections [30]. Guidelines or recommendations for therapeutic and preventive strategies against polymicrobial infections do not exist. Prolonged therapy with antibiotics targeting all organisms isolated in a polymicrobial infection and removal of infected catheters remain the mainstay of therapy. Defining the epidemiology and clinical impact of polymicrobial infections may be the first step towards delineating optimal therapy and clinical outcomes. Preventive strategies should emphasize catheter care bundles that decrease central line-associated bloodstream infections. Focused research is necessary to prevent and treat polymicrobial infections and to improve clinical outcomes in the vulnerable neonate. Conclusions: The main aim of the study was to investigate the frequency of polymicrobial infections in our tertiary neonatal intensive care unit, understand their impact on neonatal outcomes and promote research in the management of these infections. Polymicrobial bacteremia is common in neonates and comprises nearly 14% of all infectious episodes. The most common organisms in neonatal polymicrobial infections are CONS, S. aureus, Enterococcus species, E. coli, and Candida species. Surgical intervention is a significant risk factor for neonatal polymicrobial infections. We observed a more than 3-fold increase in mortality in infants with polymicrobial bacteremia and a significant increase in the duration of bacteremia. Research focused on preventing neonatal polymicrobial infections and improving their clinical outcomes is urgently needed.
Abbreviations: TCH: Texas Children’s Hospital; VON: Vermont Oxford Network; CONS: Coagulase negative Staphylococci; ELBW: Extremely low birth weight; VLBW: Very low birth weight; NICU: Neonatal Intensive Care Unit; PDA: Patent ductus arteriosus; BPD: Bronchopulmonary dysplasia; PVL: Periventricular leucomalacia; NEC: Necrotizing enterocolitis; TPN: Total parenteral nutrition; ROP: Retinopathy of prematurity; IVH: Intraventricular hemorrhage; IPPV: Intermittent positive pressure ventilation; CPAP: Continuous positive airway pressure; HFOV: High frequency oscillatory ventilation; INO: Inhaled nitric oxide. Competing interests: None of the authors has any financial interests or competing interests to disclose. Authors’ contributions: All authors satisfy the requirements for authorship as laid down in the ‘Instructions to Authors’. MP conceived the project, participated in the design, data acquisition, analysis and interpretation, drafted the initial manuscript and approved the final manuscript as submitted. DZ participated in the acquisition and tabulation of the microbiology data. YJ participated in clinical data acquisition and revision of the manuscript. PR participated in microbiology data acquisition, analysis and revision of the manuscript. JV provided critical intellectual input and revised the manuscript. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/14/390/prepub
Background: Polymicrobial infections in adults and children are associated with increases in mortality, duration of intensive care and healthcare costs. Very few studies have characterized polymicrobial bloodstream infections in the neonatal unit. Considerable variation has been reported in the incidence of polymicrobial infections and associated clinical outcomes. We characterized the risk factors and outcomes of polymicrobial bloodstream infections in our neonatal units in a tertiary hospital in North America. Methods: In a retrospective case-control study design, we identified infants in the neonatal intensive care unit with positive blood cultures at Texas Children's Hospital over a 16-year period from January 1, 1997 to December 31, 2012. Clinical data from online databases were available from January 2009 to December 2012. For each polymicrobial bloodstream infection (case), we matched three infants with monomicrobial bloodstream infection (control) by gestational age and birth weight. Results: We identified 2007 episodes of bloodstream infections during the 16-year study period, and 280 (14%) of these were polymicrobial. Coagulase-negative Staphylococcus, Enterococcus, Klebsiella and Candida were the most common microbial genera isolated from polymicrobial infections. Polymicrobial bloodstream infections were associated with a more than 3-fold increase in mortality and an increase in the duration of infection. Surgical intervention was a significant risk factor for polymicrobial infection. Conclusions: The frequency and increased mortality emphasize the clinical significance of polymicrobial bloodstream infections in the neonatal intensive care unit. Clinical awareness and focused research on neonatal polymicrobial infections are urgently needed.
Background: Polymicrobial infections increase mortality more than 2-fold in adults and children and increase the length of hospital stay and healthcare costs [1,2]. Risk factors for polymicrobial infections in children and adults include the presence of a central venous catheter, administration of parenteral nutrition, gastrointestinal pathology (especially short gut syndrome), use of broad-spectrum antibiotics and immunosuppression [1,3,4]. Neonatal polymicrobial infections are less well characterized compared to those in children or adults. Increased survival of extremely premature infants at the edge of viability, dependence on catheters and parenteral nutrition (PN), and antibiotic therapy may predispose neonates to polymicrobial infections. The frequency of neonatal polymicrobial bloodstream infections reported in clinical studies varies from 4 to 24% of all bloodstream infections [5-10]. A standard definition for neonatal polymicrobial infections is lacking, and the incidence varies from study to study partly due to variability in definitions, neonatal populations and practices [11]. Few studies have focused on polymicrobial infections in neonates, especially on risk factors and clinical outcomes, in the western world [10-12]. In a neonatal review, the mortality due to polymicrobial infections was 3-fold greater than that of monomicrobial infections (70% vs. 23%) [10]. Organisms that are commonly implicated in neonatal polymicrobial bloodstream infections are coagulase-negative Staphylococcus (CONS), Candida spp., Staphylococcus aureus and Enterococcus spp. [1,9,13-16]. A common risk factor appears to be multi-species biofilm infections originating from indwelling medical devices, notably indwelling vascular catheters or endotracheal tubes [1,17]. Polymicrobial bloodstream infections may be defined as multiple organisms isolated during an infectious episode, including those from a single blood specimen [1], or more restrictively as isolation of more than one organism from a single blood specimen only [11,12,17]. Different definitions may partly explain the varying incidence of polymicrobial bloodstream infections reported. Bizzarro et al. (Yale, 1989-2006) and Faix et al. (Ann Arbor, 1971-1986) have reported the only two studies on neonatal polymicrobial infections from North America [10,11]. The changing epidemiology of neonatal infections dictates a need to update our understanding of polymicrobial infections in the hospitalized infant [11]. The lack of data on polymicrobial infections from our neonatal unit and concern about significant increases in mortality, morbidity and healthcare costs due to polymicrobial infections prompted us to investigate their frequency, risk factors and clinical outcomes at Texas Children’s Hospital (TCH). Conclusions: The main aim of the study was to investigate the frequency of polymicrobial infections in our tertiary neonatal intensive care unit, understand their impact on neonatal outcomes and promote research in the management of these infections. Polymicrobial bacteremia is common in neonates and comprises nearly 14% of all infectious episodes. The most common organisms in neonatal polymicrobial infections are CONS, S. aureus, Enterococcus species, E. coli, and Candida species. Surgical intervention is a significant risk factor for neonatal polymicrobial infections. We observed a more than 3-fold increase in mortality in infants with polymicrobial bacteremia and a significant increase in the duration of bacteremia.
Research focused on preventing neonatal polymicrobial infections and improving their clinical outcomes is urgently needed.
Background: Polymicrobial infections in adults and children are associated with increases in mortality, duration of intensive care and healthcare costs. Very few studies have characterized polymicrobial bloodstream infections in the neonatal unit. Considerable variation has been reported in the incidence of polymicrobial infections and associated clinical outcomes. We characterized the risk factors and outcomes of polymicrobial bloodstream infections in our neonatal units in a tertiary hospital in North America. Methods: In a retrospective case-control study design, we identified infants in the neonatal intensive care unit with positive blood cultures at Texas Children's Hospital over a 16-year period from January 1, 1997 to December 31, 2012. Clinical data from online databases were available from January 2009 to December 2012. For each polymicrobial bloodstream infection (case), we matched three infants with monomicrobial bloodstream infection (control) by gestational age and birth weight. Results: We identified 2007 episodes of bloodstream infections during the 16-year study period, and 280 (14%) of these were polymicrobial. Coagulase-negative Staphylococcus, Enterococcus, Klebsiella and Candida were the most common microbial genera isolated from polymicrobial infections. Polymicrobial bloodstream infections were associated with a more than 3-fold increase in mortality and an increase in the duration of infection. Surgical intervention was a significant risk factor for polymicrobial infection. Conclusions: The frequency and increased mortality emphasize the clinical significance of polymicrobial bloodstream infections in the neonatal intensive care unit. Clinical awareness and focused research on neonatal polymicrobial infections are urgently needed.
6,867
283
[ 473, 161, 340, 339, 152, 105, 14, 109, 16 ]
13
[ "infections", "polymicrobial", "polymicrobial infections", "neonatal", "infection", "bloodstream", "data", "blood", "clinical", "study" ]
[ "outcomes neonatal polymicrobial", "neonates polymicrobial bacteremia", "neonatal polymicrobial bloodstream", "infants polymicrobial bloodstream", "infants polymicrobial infections" ]
[CONTENT] Neonate | Infant | NICU | Sepsis | Polymicrobial | Mortality [SUMMARY]
[CONTENT] Neonate | Infant | NICU | Sepsis | Polymicrobial | Mortality [SUMMARY]
[CONTENT] Neonate | Infant | NICU | Sepsis | Polymicrobial | Mortality [SUMMARY]
[CONTENT] Neonate | Infant | NICU | Sepsis | Polymicrobial | Mortality [SUMMARY]
[CONTENT] Neonate | Infant | NICU | Sepsis | Polymicrobial | Mortality [SUMMARY]
[CONTENT] Neonate | Infant | NICU | Sepsis | Polymicrobial | Mortality [SUMMARY]
[CONTENT] Bacteremia | Birth Weight | Candida | Case-Control Studies | Coinfection | Female | Humans | Incidence | Infant | Infant, Newborn | Intensive Care Units, Neonatal | Klebsiella | Male | Retrospective Studies | Risk Factors | Staphylococcus | Texas | Treatment Outcome [SUMMARY]
[CONTENT] Bacteremia | Birth Weight | Candida | Case-Control Studies | Coinfection | Female | Humans | Incidence | Infant | Infant, Newborn | Intensive Care Units, Neonatal | Klebsiella | Male | Retrospective Studies | Risk Factors | Staphylococcus | Texas | Treatment Outcome [SUMMARY]
[CONTENT] Bacteremia | Birth Weight | Candida | Case-Control Studies | Coinfection | Female | Humans | Incidence | Infant | Infant, Newborn | Intensive Care Units, Neonatal | Klebsiella | Male | Retrospective Studies | Risk Factors | Staphylococcus | Texas | Treatment Outcome [SUMMARY]
[CONTENT] Bacteremia | Birth Weight | Candida | Case-Control Studies | Coinfection | Female | Humans | Incidence | Infant | Infant, Newborn | Intensive Care Units, Neonatal | Klebsiella | Male | Retrospective Studies | Risk Factors | Staphylococcus | Texas | Treatment Outcome [SUMMARY]
[CONTENT] Bacteremia | Birth Weight | Candida | Case-Control Studies | Coinfection | Female | Humans | Incidence | Infant | Infant, Newborn | Intensive Care Units, Neonatal | Klebsiella | Male | Retrospective Studies | Risk Factors | Staphylococcus | Texas | Treatment Outcome [SUMMARY]
[CONTENT] Bacteremia | Birth Weight | Candida | Case-Control Studies | Coinfection | Female | Humans | Incidence | Infant | Infant, Newborn | Intensive Care Units, Neonatal | Klebsiella | Male | Retrospective Studies | Risk Factors | Staphylococcus | Texas | Treatment Outcome [SUMMARY]
[CONTENT] outcomes neonatal polymicrobial | neonates polymicrobial bacteremia | neonatal polymicrobial bloodstream | infants polymicrobial bloodstream | infants polymicrobial infections [SUMMARY]
[CONTENT] outcomes neonatal polymicrobial | neonates polymicrobial bacteremia | neonatal polymicrobial bloodstream | infants polymicrobial bloodstream | infants polymicrobial infections [SUMMARY]
[CONTENT] outcomes neonatal polymicrobial | neonates polymicrobial bacteremia | neonatal polymicrobial bloodstream | infants polymicrobial bloodstream | infants polymicrobial infections [SUMMARY]
[CONTENT] outcomes neonatal polymicrobial | neonates polymicrobial bacteremia | neonatal polymicrobial bloodstream | infants polymicrobial bloodstream | infants polymicrobial infections [SUMMARY]
[CONTENT] outcomes neonatal polymicrobial | neonates polymicrobial bacteremia | neonatal polymicrobial bloodstream | infants polymicrobial bloodstream | infants polymicrobial infections [SUMMARY]
[CONTENT] outcomes neonatal polymicrobial | neonates polymicrobial bacteremia | neonatal polymicrobial bloodstream | infants polymicrobial bloodstream | infants polymicrobial infections [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal | infection | bloodstream | data | blood | clinical | study [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal | infection | bloodstream | data | blood | clinical | study [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal | infection | bloodstream | data | blood | clinical | study [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal | infection | bloodstream | data | blood | clinical | study [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal | infection | bloodstream | data | blood | clinical | study [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal | infection | bloodstream | data | blood | clinical | study [SUMMARY]
[CONTENT] infections | polymicrobial | polymicrobial infections | neonatal polymicrobial | neonatal | 10 | children | bloodstream infections | 11 | bloodstream [SUMMARY]
[CONTENT] blood | infection | culture | polymicrobial | catheters | cultures | infections | blood cultures | blood culture | bloodstream [SUMMARY]
[CONTENT] polymicrobial | bacteremia | gram | infections | cases | monomicrobial | controls | vs | ci | 95 ci [SUMMARY]
[CONTENT] polymicrobial | infections | neonatal polymicrobial infections | bacteremia | neonatal polymicrobial | polymicrobial infections | neonatal | research | polymicrobial bacteremia | species [SUMMARY]
[CONTENT] polymicrobial | infections | polymicrobial infections | neonatal | infection | blood | data | bloodstream | infants | catheters [SUMMARY]
[CONTENT] polymicrobial | infections | polymicrobial infections | neonatal | infection | blood | data | bloodstream | infants | catheters [SUMMARY]
[CONTENT] ||| ||| ||| tertiary | North America [SUMMARY]
[CONTENT] Texas Children's Hospital | 16-year | January 1, 1997 to December 31, 2012 ||| January 2009 to December 2012 ||| three [SUMMARY]
[CONTENT] 2007 | 16 year | 280 | 14% ||| Staphylococcus | Enterococcus | Klebsiella | Candida ||| more than 3-fold ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| tertiary | North America ||| Texas Children's Hospital | 16-year | January 1, 1997 to December 31, 2012 ||| January 2009 to December 2012 ||| three ||| ||| 2007 | 16 year | 280 | 14% ||| Staphylococcus | Enterococcus | Klebsiella | Candida ||| more than 3-fold ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| tertiary | North America ||| Texas Children's Hospital | 16-year | January 1, 1997 to December 31, 2012 ||| January 2009 to December 2012 ||| three ||| ||| 2007 | 16 year | 280 | 14% ||| Staphylococcus | Enterococcus | Klebsiella | Candida ||| more than 3-fold ||| ||| ||| [SUMMARY]
Outreach onsite treatment with a simplified pangenotypic direct-acting anti-viral regimen for hepatitis C virus micro-elimination in a prison.
35110949
Prisoners are at risk of hepatitis C virus (HCV) infection, especially people who inject drugs (PWID). We implemented an outreach strategy combining universal mass screening with immediate onsite treatment using a simplified pan-genotypic direct-acting antiviral (DAA) regimen, 12 wk of sofosbuvir/velpatasvir, in a PWID-dominant prison in Taiwan.
BACKGROUND
HCV-viremic patients were recruited into an onsite treatment program for HCV micro-elimination with a pangenotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, from two cohorts in Penghu Prison, identified either by mass screening or in outpatient clinics, in September 2019. Another group of HCV-viremic patients, identified sporadically in outpatient clinics before mass screening, was enrolled as a control group. The primary endpoint was sustained virological response (SVR12, defined as undetectable HCV ribonucleic acid (RNA) 12 wk after end-of-treatment).
METHODS
A total of 212 HCV-viremic subjects were recruited for the HCV micro-elimination campaign; 91 patients treated with sofosbuvir/ledipasvir or glecaprevir/pibrentasvir before mass screening were enrolled as controls. The HCV micro-elimination group had a significantly lower proportion of diabetes, hypertension, hyperlipidemia, advanced fibrosis and chronic kidney disease, but higher levels of HCV RNA. The SVR12 rate was comparable between the HCV micro-elimination and control groups: 95.8% (203/212) vs 94.5% (86/91), respectively, in intent-to-treat analysis, and 100% (203/203) vs 98.9% (86/87), respectively, in per-protocol analysis. There was no virological failure, treatment discontinuation or serious adverse event among sofosbuvir/velpatasvir-treated patients in the HCV micro-elimination group.
RESULTS
Outreach mass screening followed by immediate onsite treatment with a simplified pangenotypic DAA regimen, sofosbuvir/velpatasvir, provides a successful strategy toward HCV micro-elimination among prisoners.
CONCLUSION
[ "Antiviral Agents", "Hepacivirus", "Hepatitis C", "Hepatitis C, Chronic", "Humans", "Prisons" ]
8776526
INTRODUCTION
Hepatitis C virus (HCV) infection is a progressive, blood-borne infectious disease that can lead to end-stage liver diseases, such as hepatic decompensation, liver cirrhosis, and hepatocellular carcinoma[1,2]. Iatrogenic transmission of HCV, such as through blood transfusion and surgery, has decreased in developed countries, whereas people who inject drugs (PWID) have become the major population for HCV transmission, accounting for approximately 80% of HCV-infected patients[3]. Given the lack of an available vaccine, “treatment as prevention” of HCV transmission in PWID is very important for HCV elimination. Prisoners are at high risk of HCV infection, with prevalence rates ranging from 3.1% to 38%[4,5]. The high prevalence of HCV infection in prisoners results from unsafe lifestyles, psychiatric disorders, and social problems before incarceration. Recently, injection drug use has been the most important risk factor for HCV infection in prisoners[6]. The anti-HCV prevalence rate could be as high as 91% among PWID prisoners[7]. Screening for and eliminating HCV infection in prisoners is therefore an important social health issue. According to the American Association for the Study of Liver Diseases and European Association for the Study of the Liver (EASL) guidelines, all HCV-viremic patients should be treated if life expectancy is more than one year[8,9]. HCV therapeutic strategies have been revolutionized by the availability of direct-acting antivirals (DAA)[10]. Interferon (IFN)-based regimens for HCV infection have serious side effects, long therapeutic duration, and contraindications, leading to huge gaps in the HCV care cascade[11]. The current IFN-free DAA regimens provide shorter treatment duration and very high treatment efficacy and safety, not only for the general population[12] but also for special populations[13], such as HCV/human immunodeficiency virus (HIV)-coinfected patients, hepatitis B virus (HBV)/HCV-coinfected patients and patients with chronic kidney disease, in real-world clinical settings[14,15]. The World Health Organization (WHO) set a global goal of HCV elimination by 2030[16], and the Taiwan authorities have set an even more ambitious target of 2025[17]. To achieve this goal, implementation of the concept of HCV micro-elimination is regarded as an efficient and practical strategy[18]. We have shown that “universal mass screening plus outreach onsite treatment” is the key to achieving HCV micro-elimination among patients under maintenance hemodialysis[19]. Recently, the latest EASL HCV guideline recommended simplified, genotyping/subtyping-free, pangenotypic anti-HCV treatment, with either sofosbuvir/velpatasvir or glecaprevir/pibrentasvir, to increase accessibility and global cure rates among patients aged > 12 years with chronic hepatitis C without cirrhosis or with compensated cirrhosis, with or without HIV co-infection, whether treatment-naïve or IFN-experienced[8]. Because HCV treatment is not frequently administered to prisoners, owing to unawareness of HCV infection, difficult management, easy loss to follow-up, and the lack of hepatologists in prison[20], collaboration between hepatologists and prison authorities to carry out strategies for HCV diagnosis and treatment in prisoners is highly demanded. Herein, we implemented an outreach strategy combining universal mass screening with onsite treatment using a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, toward HCV micro-elimination in a PWID-dominant prison in Taiwan.
MATERIALS AND METHODS
Patients linked to the onsite treatment program for HCV micro-elimination: HCV-viremic patients were recruited from two cohorts in Penghu Prison (Agency of Corrections, Ministry of Justice, Taiwan), a PWID-dominant prison (Figure 1). Figure 1: Patient flowchart of hepatitis C virus treatment with a simplified pan-genotypic direct-acting antivirals regimen in Penghu Prison. HCV: Hepatitis C virus; DAA: Direct-acting antivirals; SOF/VEL: Sofosbuvir/velpatasvir; GLE/PIB: Glecaprevir/pibrentasvir; EOTVR: Virological response at end-of-treatment; SVR12: Sustained viral response at post-treatment wk 12. HCV-viremic patients identified by universal mass screening: In September 2019, we conducted a 5 d universal mass screening for viral hepatitis in Penghu Prison. The inclusion criteria were prisoners who were at least 20 years old and willing to enter the study for screening of viral hepatitis. The mass screening study was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (IRB: KMUHIRB-SV(I)-20190033). All participants provided written informed consent. A total of 1137 subjects from 1697 inmates participated in the mass screening[21]. Among them, 396 (34.8%) subjects were anti-HCV seropositive; 208 (52.5%) of the 396 subjects were positive for HCV ribonucleic acid (RNA) and were linked to the onsite HCV treatment program with the universal sofosbuvir/velpatasvir regimen. HCV-viremic patients identified in outpatient clinics during the period of HCV mass screening: Another 26 HCV-viremic subjects identified in outpatient clinics of Penghu Prison between August and December 2019 were also linked to the onsite HCV treatment program with the universal sofosbuvir/velpatasvir regimen. All patients received pretreatment evaluation in December 2019, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions. A 12 wk, oral pan-genotypic regimen of sofosbuvir/velpatasvir 400/100 mg fixed-dose combination once daily was initiated in January-February 2020.
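As a small illustration of the screening cascade described above, this Python sketch recomputes the participation, seroprevalence and viremia proportions from the counts reported for the mass screening (1697 inmates, 1137 screened, 396 anti-HCV positive, 208 HCV RNA positive).

```python
# Screening-cascade arithmetic for the 2019 Penghu Prison mass screening,
# using only counts reported in the text.

INMATES = 1697          # total inmates
SCREENED = 1137         # participated in the mass screening
ANTI_HCV_POS = 396      # anti-HCV seropositive
HCV_RNA_POS = 208       # HCV RNA positive (viremic), linked to onsite treatment

def pct(part: int, whole: int) -> str:
    return f"{100.0 * part / whole:.1f}%"

print("Screening uptake:        ", pct(SCREENED, INMATES))        # ~67.0%
print("Anti-HCV seroprevalence: ", pct(ANTI_HCV_POS, SCREENED))   # 34.8%
print("Viremic among anti-HCV+: ", pct(HCV_RNA_POS, ANTI_HCV_POS))  # 52.5%
```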
Patients identified and treated with DAAs in outpatient clinics before mass screening: A total of 91 HCV-viremic patients identified in outpatient clinics of Penghu Prison and treated with DAA before mass screening, from 2017 to 2019, were enrolled as a control group. The selection of DAA regimens was based on the physician’s discretion according to the viral genotype and the reimbursement criteria of the National Health Insurance Administration, Taiwan. All patients received pretreatment evaluation, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions. All participants signed informed consent forms. The enrolled inmates were protected according to the guidelines of the Declaration of Helsinki. The study of DAA therapy was approved by the Institutional Review Board of Tri-Service General Hospital (IRB: TSGHIRB 2-107-05-080). Assessment, monitoring and endpoints: Anti-HCV antibody was determined by a third-generation, commercially available immunoassay (AxSYM HCV III; Abbott Laboratories, North Chicago, IL). HCV RNA viral loads and genotype were determined by real-time PCR assays (RealTime HCV; Abbott Molecular, Des Plaines, IL, United States; detection limit: 12 IU/mL)[22]. Liver cirrhosis was defined by the presence of clinical, radiological, endoscopic or laboratory evidence of cirrhosis and/or portal hypertension, or a fibrosis-4 index (FIB-4) > 6.5. Laboratory data monitoring and assessment of side effects were performed at treatment wk 2, 4 and 8, at end-of-treatment (EOT), and 12 wk after EOT. The primary endpoint was sustained virological response (SVR12, defined as undetectable HCV RNA throughout the 12 wk post-treatment follow-up period). Statistical analyses: The efficacy of all DAA regimens was determined in an intent-to-treat (ITT) population (all enrolled patients who received at least one dose of DAA) and a per-protocol (PP) population (subjects who received at least one dose of DAA and remained in Penghu Prison throughout the DAA treatment and follow-up period). Safety assessments reported adverse events (AE), serious adverse events (SAE) and laboratory abnormalities in the ITT population. Continuous variables are expressed as mean ± standard deviation (SD), and categorical variables are expressed as percentages. Differences in continuous variables were estimated by Student’s t test. Differences in categorical variables were analyzed using the Chi-square test. On-treatment and off-treatment virological response rates are presented as numbers and percentages with 95% confidence intervals (CI). All data analyses were performed using SPSS software version 18.0 (SPSS Inc., Chicago, Illinois, United States).
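To make the derived measures and endpoints above concrete, here is a minimal Python sketch, not the authors' code, that computes the FIB-4 index used in the fibrosis and cirrhosis definitions (the standard published formula, applied here with the > 3.25 and > 6.5 cut-offs) and an SVR12 rate with a normal-approximation 95% CI for ITT and PP denominators, using the counts reported for the micro-elimination group (203/212 ITT, 203/203 PP). The patient values passed to the FIB-4 call are hypothetical.

```python
# Minimal sketch (not the authors' code): FIB-4 and SVR12 summaries as used above.
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
    """Standard FIB-4 index: (age x AST) / (platelets x sqrt(ALT)).
    The study flags advanced fibrosis at FIB-4 > 3.25 and uses > 6.5 in its cirrhosis definition."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def rate_with_ci(successes: int, n: int, z: float = 1.96):
    """Response rate with a normal-approximation (Wald) 95% CI.
    At 0% or 100% this interval collapses; an exact or Wilson interval is preferable there."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical patient values, for illustration only
print(f"FIB-4 = {fib4(age_years=48, ast_u_l=45, alt_u_l=60, platelets_10e9_l=210):.2f}")

# Counts reported for the micro-elimination group
for label, (s, n) in {"ITT": (203, 212), "PP": (203, 203)}.items():
    p, lo, hi = rate_with_ci(s, n)
    print(f"SVR12 {label}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")
```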
null
null
CONCLUSION
Our study provides evidence for the concept that a simplified, genotyping/subtyping-free regimen can achieve a high SVR12 rate in HCV-infected prisoners. In the future, it may be possible to extend this strategy to all prisoners in our country.
[ "INTRODUCTION", "Patients linked to onsite treatment program for HCV micro-elimination", "HCV-viremic patients identified by a universal mass screening", "HCV-viremic patients identified in outpatient clinics during the period of HCV mass screening", "Patients identified and treated by DAAs in outpatient clinics before mass screening", "Assessment, monitoring and endpoints", "Statistical analyses.", "RESULTS", "Patient flowchart of HCV micro-elimination campaign", "Patient characteristics ", "Treatment efficacy", "Safety profiles", "DISCUSSION", "CONCLUSION" ]
[ "Hepatitis C virus (HCV) infection is a progressive and blood-borne infectious disease that can lead to end stage liver diseases, such as hepatic decompensation, liver cirrhosis, and hepatocellular carcinoma[1,2]. Iatrogenic transmission of HCV, such as blood transfusion and surgery, has decreased in developed countries. Whereas people who inject drugs (PWID) has become the major population of HCV transmission, which could consist of approximately 80% of HCV-infected patients[3]. Given that lack of vaccine available, “treatment as prevention” for HCV transmission in PWID is very important for HCV elimination.\nPrisoners are at high risk of HCV infection, with prevalence rates ranging from 3.1% to 38%[4,5]. The high prevalence of HCV infection in prisoners is resulted from unsafe lifestyles, psychiatric disorders, and social problems before they are incarcerated. Recently, PWID has been the most important risk factor of HCV infection in prisoners[6]. The anti-HCV prevalence rate could be as high as 91% among PWID prisoners[7]. Screening and eliminating HCV infection in prisoners is therefore an important social health issue.\nAccording to the American Association for the Study of Liver Diseases and European Association for the Study of the Liver (EASL) guidelines, all HCV viremic patients should be treated if life span is expected more than one year[8,9]. HCV therapeutic strategies have been revolutionized significantly because of the availability of direct-acting antivirals (DAA)[10]. Interferon (IFN)-based regimens for HCV infection have serious side effects, long therapeutic duration, and contraindications, leading to the huge gaps in HCV care cascade[11]. The current IFN-free DAA regimens provide shorter treatment duration, very high treatment efficacy and safety profiles, not only for general population[12], but also for special populations[13], such as HCV/human immunodeficiency virus (HIV) coinfected patients, hepatitis B virus (HBV)/HCV coinfected patients and patients with chronic kidney diseases in real-world clinical settings[14,15].\nWorld Health Organization (WHO) set a global goal of HCV elimination by 2030[16], and Taiwan authority is even ambitious by 2025[17]. To achieve the goal, implementation of the concept of HCV micro-elimination is regarding as an efficient and practical strategy[18]. We have proved that “universal mass screening plus outreach onsite treatment” is the key to achieve HCV micro-elimination among patients under maintenance hemodialysis[19].\nRecently, the latest EASL HCV guideline recommended simplified, genotyping/ subtyping-free, pangenotypic anti-HCV treatment, either sofosbuvir/ velpatasvir or glecaprevir/pibrentasvir, to increase the accessibility and global cure rates among patients with > 12 years, chronic hepatitis C without cirrhosis or with compensated cirrhosis, with or without HIV co-infection, whatever treatment-naïve or IFN-experienced[8].\nSince HCV treatment is not frequently administered to prisoners due to unawareness of HCV infection, difficultly management, easily loss to follow-up, and lack of hepatologist in prison[20], collaboration between hepatologists and prison authorities to carry out strategies for HCV diagnosis and treatment in prisoners in highly demanded. 
Herein, we implemented an outreach strategy in combination with universal mass screen and onsite treatment with a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, toward HCV micro-elimination in a PWID-dominant prison in Taiwan.", "HCV-viremic patients were recruited from two cohorts in Penghu Prison (Agency of Corrections, Ministry of Justice, Taiwan), a PWID-dominant prison (Figure 1).\n\nPatient flowchart of hepatitis C virus treatment with a simplified pan-genotypic directly-acting antivirals regimen in Penghu Prison. HCV: Hepatitis C virus; DAA: Directly-acting antivirals; SOL/VEL: Sofosbuvir/velpatasvir; GEL/PIB: Glecaprevir/pibrentasvir; EOTVR: Virological response at end-of-treatment; SVR12: Sustained viral response at post-treatment wk 12.", "In September 2019, we conducted a 5 d universal mass screening of viral hepatitis in Penghu Prison. These inclusion criteria were prisoners, who were at least 20 years old, being willing to enter the study for screening of viral hepatitis. The study of mass screening was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (IRB: KMUHIRB-SV(I)-20190033). All participants provided written informed consents. A total of 1137 subjects from 1697 inmates participated the mass screening[21]. Among them, 396 (34.8%) subjects had anti-HCV seropositivity; 208 (52.5%) of the 396 subjects were seropositive for HCV ribonucleic acid (RNA) and linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.", "Another 26 HCV-viremic subjects identified in outpatient clinics of Penghu Prison between August to December 2019 were also linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.\nAll patients received pretreatment evaluation in December 2019, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions. A 12 wk, oral pan-genotypic regimen of sofosbuvir/velpatasvir 400/100 mg fixed-dose combination once daily was initiated in January-February 2020. ", "A total of 91 HCV-viremic patients identified in outpatient clinics of Penghu Prison and treated with DAA before mass screening from 2017 to 2019 were enrolled as a control. The selection of DAA regimens were based on physician’s discretion according to the viral genotype and criteria of reimbursement of National Health Insurance Administration, Taiwan. All patients received pretreatment evaluation, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions.\nAll participants signed informed consent forms. These enrolled inmates of our study were protected according to the guidelines of the Declaration of Helsinki. The current study of DAA therapy was approved by the Institutional Review Board of Tri-Service General Hospital (IRB: TSGHIRB 2-107-05-080).", "Anti-HCV antibody was determined by the third generation, commercially available immunoassay (Ax SYM HCV III; Abbott Laboratories, North Chicago, IL). HCV RNA viral loads and genotype were determined by real-time PCR assays [RealTime HCV; Abbott Molecular, Des Plaines IL, United States; detection limit: 12 IU/mL])[22]. 
[ "RESULTS: Patient flowchart of the HCV micro-elimination campaign: The patient flowchart of HCV mass screening, assessment and treatment is shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from outpatient clinics in Penghu Prison, were assessed for eligibility for group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy because they were scheduled to be released from jail (n = 16) or transferred to other jails (n = 3) within 6 mo, were unwilling to receive therapy (n = 2), or had prior glecaprevir/pibrentasvir treatment failure (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020. Patient characteristics: The baseline characteristics of the 303 HCV-viremic patients, including 212 in the HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before the campaign, are listed in Table 1. The mean age was 48.4 years, with male predominance (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, and 20 (6.6%) patients had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA level was 6.5 log IU/mL, dominated by HCV genotype 1 (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were prior IFN-experienced.
The two groups had comparable characteristics in terms of age, gender, HBV co-infection, liver and renal function tests, FIB-4 score, HCV genotype distribution, and prior history of IFN-based therapy. However, the sporadic patients identified in outpatient clinics had a significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2, but significantly lower HCV viral loads. None of the patients had decompensated cirrhosis or liver cancer. Table 1: Baseline characteristics of hepatitis C virus-infected patients receiving direct-acting antiviral therapy: sporadic hepatitis C virus therapy in outpatient clinics vs the campaign of hepatitis C virus micro-elimination in Penghu Prison. Footnotes: 56 patients did not have body mass index information (12 patients before the campaign of hepatitis C virus (HCV) micro-elimination; 44 patients in the campaign of HCV micro-elimination). P < 0.05. DAA: Direct-acting antivirals; HCV: Hepatitis C virus; BMI: Body mass index; AST: Aspartate aminotransferase; ALT: Alanine aminotransferase; LC: Liver cirrhosis; FIB-4: Fibrosis-4 index; HBsAg: Hepatitis B surface antigen; eGFR: Estimated glomerular filtration rate (mL/min/1.73 m2); SOF: Sofosbuvir; VEL: Velpatasvir; LDV: Ledipasvir; GLE: Glecaprevir; PIB: Pibrentasvir; IFN: Interferon.
Treatment efficacy: All 212 patients in the HCV micro-elimination campaign received sofosbuvir/velpatasvir treatment, while among the 91 sporadic patients treated with DAA before the HCV micro-elimination campaign, 78 (85.7%) received 12 wk of sofosbuvir/ledipasvir and 13 (14.3%) received 8-12 wk of glecaprevir/pibrentasvir according to the Taiwan HCV guideline[12,13]. In ITT analysis, the overall SVR12 rate was 95.4% (289/303), with comparable SVR12 rates between the sporadic HCV control group (94.5%, 86/91) and the HCV micro-elimination group (95.8%, 203/212, P = 0.126, Table 2). Table 2: Virological responses of hepatitis C virus-infected patients receiving direct-acting antiviral therapy before and during the campaign of hepatitis C virus micro-elimination in Penghu Prison. Footnotes: one missing data; one transferred and one missing data; two transferred and one released; four released; one relapser; four transferred and five released. HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. During the DAA treatment period, all patients in the sporadic HCV control group completed DAA therapy, while 3 patients in the HCV micro-elimination group were lost to follow-up (2 transferred; 1 released).
During the post-treatment follow-up period, 4 patients in sporadic HCV control group lost-to-follow (4 released), while 6 patients in HCV micro-elimination group lost-to-follow (2 transferred; 4 released). In PP analysis, the overall SVR12 rate was 99.7% (289/290) with comparable SVR12 rates between sporadic HCV control group (98.9%, 86/87) and HCV micro-elimination group (100%, 203/203, P = 0.126, Table 2). Only one patient experienced virological failure (54 years old male, treatment-naïve, HCV-GT3 infection with baseline viral loads of 62,883 IU/mL and FIB-4 of 2.37; relapsed from a 12 wk regimen of glecaprevir/pibrentasvir).\nSafety profiles The safety profiles of both groups were shown in Table 3. None of patients had treatment discontinuation other than released or transferred. None experienced serious adverse event. The frequency of adverse events was 4.3% (4/91) and 1.4% (3/212), respectively, among patients in sporadic control group and HCV micro-elimination group. The most reported adverse events were rash in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir and pruritus in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. None of patients experienced grade 3 or 4 Laboratory abnormality.\nSafety profiles of hepatitis C virus-infected patients receiving direct-acting antivirals therapy in Penghu prison\nDAA: Directly-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. \nThe safety profiles of both groups were shown in Table 3. None of patients had treatment discontinuation other than released or transferred. None experienced serious adverse event. The frequency of adverse events was 4.3% (4/91) and 1.4% (3/212), respectively, among patients in sporadic control group and HCV micro-elimination group. The most reported adverse events were rash in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir and pruritus in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. None of patients experienced grade 3 or 4 Laboratory abnormality.\nSafety profiles of hepatitis C virus-infected patients receiving direct-acting antivirals therapy in Penghu prison\nDAA: Directly-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. ", "The patient flowchart of HCV mass screen, assessment and treatment was shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from outpatient clinics in Penghu Prison were assessed for eligibility of group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy due to scheduled to be released from jail (n = 16) or transferred to other jails (n = 3) within 6 mo, unwilling to receive therapy (n = 2) and prior glecaprevir/pibrentasvir treatment failure (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020.", "The baseline characteristics of 303 HCV-viremic patients, including 212 in HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before micro-elimination campaign were listed in Table 1. They mean age was 48.4 years with male dominant (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, with 20 (6.6%) had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA levels was 6.5 Logs IU/mL, dominant with HCV genotype 1 (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were prior IFN-experienced. 
"The patient flowchart of HCV mass screening, assessment and treatment is shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from the outpatient clinics of Penghu Prison, were assessed for eligibility for group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy because they were scheduled to be released (n = 16) or transferred to other jails (n = 3) within 6 mo, were unwilling to receive therapy (n = 2), or had prior glecaprevir/pibrentasvir treatment failure (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020.", "The baseline characteristics of the 303 HCV-viremic patients, including 212 in the HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before the campaign, are listed in Table 1. Their mean age was 48.4 years, with male predominance (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, and 20 patients (6.6%) had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA level was 6.5 log10 IU/mL; HCV genotype 1 was dominant (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were IFN-experienced. The two groups had comparable characteristics in terms of age, gender, HBV co-infection, liver and renal function tests, FIB-4 score, HCV genotype distribution, and prior history of IFN-based therapy. However, the sporadic patients identified in outpatient clinics had a significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2, but significantly lower HCV viral loads. No patient had decompensated cirrhosis or liver cancer.
Baseline characteristics of hepatitis C virus-infected patients receiving direct-acting antiviral therapy: sporadic hepatitis C virus therapy in outpatient clinics vs the campaign of hepatitis C virus micro-elimination in Penghu Prison
56 patients did not have body mass index information (12 patients before the campaign of hepatitis C virus (HCV) micro-elimination; 44 patients in the campaign of HCV micro-elimination).

P < 0.05. DAA: Direct-acting antivirals; HCV: Hepatitis C virus; BMI: Body mass index; AST: Aspartate aminotransferase; ALT: Alanine aminotransferase; LC: Liver cirrhosis; FIB-4: Fibrosis-4 index; HBsAg: Hepatitis B surface antigen; eGFR: Estimated glomerular filtration rate (mL/min/1.73 m2); SOF: Sofosbuvir; VEL: Velpatasvir; LDV: Ledipasvir; GLE: Glecaprevir; PIB: Pibrentasvir; IFN: Interferon.",
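For context, the FIB-4 cut-offs used in this cohort (> 3.25 for advanced fibrosis above, and > 6.5 as part of the cirrhosis definition in the Methods) refer to the standard fibrosis-4 index. The article does not restate the formula, but it is conventionally computed from routine laboratory values as:

```latex
\text{FIB-4} \;=\; \frac{\text{Age (years)} \times \text{AST (U/L)}}{\text{Platelet count } (10^{9}/\mathrm{L}) \times \sqrt{\text{ALT (U/L)}}}
```

For example, a 48-year-old with AST 40 U/L, ALT 64 U/L and platelets 180 × 10^9/L would have FIB-4 = (48 × 40) / (180 × 8) ≈ 1.3, close to the cohort mean reported above.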
"All 212 patients in the HCV micro-elimination campaign received sofosbuvir/velpatasvir, whereas among the 91 sporadic patients treated with DAAs before the campaign, 78 (85.7%) received 12 wk of sofosbuvir/ledipasvir and 13 (14.3%) received 8-12 wk of glecaprevir/pibrentasvir according to the Taiwan HCV guideline[12,13].
In the ITT analysis, the overall SVR12 rate was 95.4% (289/303), with comparable SVR12 rates between the sporadic HCV control group (94.5%, 86/91) and the HCV micro-elimination group (95.8%, 203/212; P = 0.126, Table 2).
Virological responses of hepatitis C virus-infected patients receiving direct-acting antiviral therapy before and during the campaign of hepatitis C virus micro-elimination in Penghu Prison
One missing data.
One transferred; one missing data.
Two transferred; one released.
Four released.
One relapser.
Four transferred; five released. HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. 
During the DAA treatment period, all patients in the sporadic HCV control group completed therapy, while 3 patients in the HCV micro-elimination group were lost to follow-up (2 transferred; 1 released). During the post-treatment follow-up period, 4 patients in the sporadic HCV control group were lost to follow-up (4 released) and 6 patients in the HCV micro-elimination group were lost to follow-up (2 transferred; 4 released). In the PP analysis, the overall SVR12 rate was 99.7% (289/290), with comparable SVR12 rates between the sporadic HCV control group (98.9%, 86/87) and the HCV micro-elimination group (100%, 203/203; P = 0.126, Table 2). Only one patient experienced virological failure (a 54-year-old, treatment-naïve male with HCV-GT3 infection, a baseline viral load of 62,883 IU/mL and a FIB-4 of 2.37, who relapsed after a 12 wk regimen of glecaprevir/pibrentasvir).", "The safety profiles of both groups are shown in Table 3. No patient discontinued treatment for reasons other than release or transfer, and none experienced a serious adverse event. The frequency of adverse events was 4.3% (4/91) in the sporadic control group and 1.4% (3/212) in the HCV micro-elimination group. The most commonly reported adverse events were rash in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir and pruritus in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. No patient experienced a grade 3 or 4 laboratory abnormality.
Safety profiles of hepatitis C virus-infected patients receiving direct-acting antiviral therapy in Penghu Prison
DAA: Direct-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. ", "In the current study, we demonstrated that mass screening combined with onsite group therapy using a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, provides a "one-size-fits-all" solution toward HCV micro-elimination in prisoners. The SVR rate was 95.6% in the ITT population and 100% in the PP population after excluding inmates released or transferred before the end of follow-up. This high SVR rate was observed in a PWID-dominant population with a diverse HCV genotype distribution, including genotypes 1a, 1b, 2, 3 and 6. 
Recent advances in the development of IFN-free pan-genotypic DAA regimens have remarkably improved treatment efficacy, with overall SVR rates of > 90%. Accordingly, the WHO set the global goal of HCV elimination by 2030, through the achievement of a > 90% diagnosis rate and a > 80% treatment rate for eligible patients[16]. Nevertheless, there are many barriers in each step of the HCV care cascade toward HCV elimination at the population level[11,23]. To overcome these barriers, combining the concept of micro-elimination with an outreach strategy of immediate onsite treatment would be a more efficient and practical approach to achieving that goal[18,24]. The current study compared the HCV-infected inmates identified sporadically in the outpatient clinics of Penghu Prison from 2017 to 2019, before mass screening, with the patients identified by mass screening. Mass screening identified 208 HCV-viremic patients from 1137 inmates (around two-thirds of all inmates in Penghu Prison) in a 5 d screening program, compared with 91 HCV-viremic inmates treated in outpatient clinics from 2017 to September 2019. Our results demonstrate that mass screening with immediate onsite treatment provides a much more efficient and practical solution to overcome the gaps in disease awareness and linkage to care in the HCV care cascade toward HCV micro-elimination in prisoners. In addition, we implemented "HCV reflex testing" in the mass screening program to scale up and speed up diagnosis and linkage to care for treatment uptake of HCV infection[25].
Injection drug use is the major risk factor for HCV infection and transmission. Although the anti-HCV prevalence in PWID prisoners decreased from 91% in 2014 to 34.8% in 2019 under Taiwan's safe-injection strategy[21], almost all (97.6%) HCV-infected prisoners were PWID. Given the lack of an available vaccine and the high risk of transmission, universal screening and the concept of "treatment as prevention" are the keys to HCV elimination in prisons as well as among PWID.
We observed that the sporadic HCV-infected prisoners identified in outpatient clinics had a significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and reduced eGFR, than those participating in the HCV micro-elimination campaign. This implies that a large proportion of those identified sporadically in outpatient clinics were detected because of concomitant morbidities; by contrast, many HCV-infected patients were unaware of their HCV disease. In our mass screening, only 36.6% (145/396) of HCV-infected prisoners were aware of their HCV infection before screening[21]. This indicates that implementation of an outreach strategy with universal mass screening is necessary for HCV micro-elimination in prison.
Despite the advances in the management of HCV infection, DAA therapy for incarcerated HCV-infected people still faces many obstacles, including disease unawareness, lack of updated information about the benefits of new DAA treatment, uncertainty about the right to treatment[26], and poor accessibility due to the lack of onsite treatment facilities or HCV treaters. Another difficulty for HCV treatment in prisoners is unexpected or scheduled release from prison or transfer to other prisons, which frequently leads to treatment interruption or loss to follow-up[20,27]. Fortunately, Taiwan's National Health Insurance covers all incarcerated people, including all laboratory tests, ultrasound sonography and the cost of DAA regimens, and each prison has a contracted hospital providing point-of-care facilities. Before initiating DAA therapy, we excluded patients with expected release or transfer within 24 wk and negotiated with the authority to avoid unnecessary transfer to other prisons during the period of HCV treatment and follow-up once an inmate had entered the DAA course. Eventually, we achieved a high treatment rate of 90.6% (212/234) and a high treatment completion rate of 95.8% (203/212), with a high cure rate of 100% (212/212).
Before IFN-free DAAs became available, the lower SVR rate, much longer treatment duration and frequent adverse events of IFN-based treatment discouraged HCV-infected prisoners from receiving therapy[10]. IFN-free DAA regimens have revolutionized HCV treatment and largely extended its indications to diverse HCV-infected patients. Nevertheless, the choice of typical DAA regimens is based on HCV genotype, presence of decompensated cirrhosis, renal function, and prior treatment experience. The two pangenotypic DAA regimens, sofosbuvir/velpatasvir and glecaprevir/pibrentasvir, have achieved very high SVR rates of > 95% regardless of HCV genotype, except in treatment-experienced cirrhotic HCV GT3 patients or GT3b patients[8,12,13]. Recently, to improve access to anti-HCV therapy and to reduce the cost of laboratory tests and the relative complexity of genotype-based treatment strategies, simplified treatment requiring little information for the treatment decision has been recommended to facilitate the care cascade among populations historically less engaged in healthcare, such as PWID and prisoners[8]. EASL recommends simplified, genotyping/subtyping-free regimens for IFN-free DAA treatment-naïve (except sofosbuvir plus ribavirin), HCV-infected or HCV-HIV coinfected adolescent and adult patients without cirrhosis or with compensated cirrhosis, regardless of HCV genotype[8]. These recommendations comprise a universal 12 wk regimen of sofosbuvir/velpatasvir for all patients, or glecaprevir/pibrentasvir for 8 wk in non-cirrhotic patients, 12 wk in compensated cirrhotic patients, and 16 wk in HCV GT3 patients. Only four pieces of information are needed before treatment: the presence of HCV viremia, potential drug-drug interactions, prior treatment experience, and the presence of cirrhosis. 
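To make the simplified decision rule concrete, the sketch below encodes the pre-treatment checks and regimen durations exactly as summarized in the preceding paragraph. It is a schematic rendering of that description only, not a clinical decision tool and not the algorithm used in the study; the function and field names are ours.

```python
from dataclasses import dataclass

@dataclass
class PretreatmentInfo:
    hcv_viremia: bool          # confirmed HCV RNA positivity
    significant_ddi: bool      # clinically relevant drug-drug interaction present
    prior_ifn_free_daa: bool   # prior IFN-free DAA exposure (outside the simplified pathway)
    cirrhosis: str             # "none", "compensated" or "decompensated"
    genotype: str = "unknown"  # only consulted for the glecaprevir/pibrentasvir arm

def simplified_regimen(info: PretreatmentInfo, prefer_sof_vel: bool = True) -> str:
    """Schematic version of the simplified, genotyping-free allocation described above."""
    if not info.hcv_viremia:
        return "no treatment: not viremic"
    if info.prior_ifn_free_daa or info.significant_ddi or info.cirrhosis == "decompensated":
        return "outside the simplified pathway: refer for specialist assessment"
    if prefer_sof_vel:
        return "sofosbuvir/velpatasvir, 12 wk (universal fixed duration)"
    # Glecaprevir/pibrentasvir durations as summarized in the text.
    if info.genotype == "3":
        return "glecaprevir/pibrentasvir, 16 wk"
    if info.cirrhosis == "compensated":
        return "glecaprevir/pibrentasvir, 12 wk"
    return "glecaprevir/pibrentasvir, 8 wk"

# Example: a treatment-naive, non-cirrhotic, viremic inmate without interactions.
print(simplified_regimen(PretreatmentInfo(True, False, False, "none")))
```

The fixed 12 wk sofosbuvir/velpatasvir branch is the "one-size-fits-all" choice adopted in this campaign; the glecaprevir/pibrentasvir branch is included only to mirror the alternative durations listed in the text.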
The advantage of glecaprevir/pibrentasvir is a shorter 8 wk regimen for treatment-naïve HCV patients and IFN-experienced non-cirrhotic patients with compensated liver disease, which would benefit prisoners who are expected to be released or transferred in the short term. However, glecaprevir, a protease inhibitor, is contraindicated in patients with hepatic decompensation and carries a rare risk of serious drug-induced liver injury[28]. Glecaprevir/pibrentasvir also has a higher pill burden of three tablets daily. The advantages of sofosbuvir/velpatasvir include a universal fixed 12 wk regimen of one tablet daily for all HCV patients with compensated liver disease, fewer potential drug-drug interactions[29], and safety in those with hepatic decompensation. However, a 12 wk regimen of sofosbuvir/velpatasvir requires one more visit and monitoring than an 8 wk regimen of glecaprevir/pibrentasvir. We therefore selected sofosbuvir/velpatasvir as the antiviral regimen for our outreach onsite treatment. In our study, all HCV-viremic prisoners fit the criteria for simplified, genotyping/subtyping-free regimens, except one who had failed prior glecaprevir/pibrentasvir therapy and was not enrolled for sofosbuvir/velpatasvir treatment. In our PP analysis, the overall SVR12 rate was comparable between the sporadic HCV control group (98.9%, 86/87) and the HCV micro-elimination group (100%, 203/203). Our study provides evidence that simplified, genotyping/subtyping-free regimens can achieve a high SVR12 rate in HCV-infected, PWID-dominant prisoners.
In our study, no prisoner discontinued DAA treatment because of adverse events, and none experienced a serious adverse event. These data indicate that the simplified, genotyping/subtyping-free regimen of sofosbuvir/velpatasvir was safe and well tolerated in HCV-infected, PWID-dominant prisoners. Very few adverse events were reported in either group, whether treated with sofosbuvir/ledipasvir, glecaprevir/pibrentasvir or sofosbuvir/velpatasvir, compared with data from clinical trials[30,31]. This might be because the current population was younger and included fewer patients with advanced fibrosis or chronic kidney disease.
There were some limitations to our study. First, not all inmates in Penghu Prison participated in our mass screening; strategies and policies to encourage inmates to undergo HCV screening are needed to achieve the WHO goal. Second, unexpected transfer and release of prisoners could not be completely avoided, which caused incomplete treatment and follow-up. Successfully linking released or transferred people to another HCV treater could help complete HCV treatment and follow-up. Third, there was no reimbursement for retreatment of prior DAA failures in Taiwan at the time of the current study.", "Well-designed strategies for mass screening and treatment of HCV-infected prisoners can be implemented successfully through collaboration between physicians and prison authorities. We demonstrated that mass screening followed by immediate onsite treatment with a simplified pangenotypic DAA regimen, sofosbuvir/velpatasvir, provides a successful strategy toward HCV micro-elimination among prisoners." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients linked to onsite treatment program for HCV micro-elimination", "HCV-viremic patients identified by a universal mass screening", "HCV-viremic patients identified in outpatient clinics during the period of HCV mass screening", "Patients identified and treated by DAAs in outpatient clinics before mass screening", "Assessment, monitoring and endpoints", "Statistical analyses.", "RESULTS", "Patient flowchart of HCV micro-elimination campaign", "Patient characteristics ", "Treatment efficacy", "Safety profiles", "DISCUSSION", "CONCLUSION" ]
[ "Hepatitis C virus (HCV) infection is a progressive and blood-borne infectious disease that can lead to end stage liver diseases, such as hepatic decompensation, liver cirrhosis, and hepatocellular carcinoma[1,2]. Iatrogenic transmission of HCV, such as blood transfusion and surgery, has decreased in developed countries. Whereas people who inject drugs (PWID) has become the major population of HCV transmission, which could consist of approximately 80% of HCV-infected patients[3]. Given that lack of vaccine available, “treatment as prevention” for HCV transmission in PWID is very important for HCV elimination.\nPrisoners are at high risk of HCV infection, with prevalence rates ranging from 3.1% to 38%[4,5]. The high prevalence of HCV infection in prisoners is resulted from unsafe lifestyles, psychiatric disorders, and social problems before they are incarcerated. Recently, PWID has been the most important risk factor of HCV infection in prisoners[6]. The anti-HCV prevalence rate could be as high as 91% among PWID prisoners[7]. Screening and eliminating HCV infection in prisoners is therefore an important social health issue.\nAccording to the American Association for the Study of Liver Diseases and European Association for the Study of the Liver (EASL) guidelines, all HCV viremic patients should be treated if life span is expected more than one year[8,9]. HCV therapeutic strategies have been revolutionized significantly because of the availability of direct-acting antivirals (DAA)[10]. Interferon (IFN)-based regimens for HCV infection have serious side effects, long therapeutic duration, and contraindications, leading to the huge gaps in HCV care cascade[11]. The current IFN-free DAA regimens provide shorter treatment duration, very high treatment efficacy and safety profiles, not only for general population[12], but also for special populations[13], such as HCV/human immunodeficiency virus (HIV) coinfected patients, hepatitis B virus (HBV)/HCV coinfected patients and patients with chronic kidney diseases in real-world clinical settings[14,15].\nWorld Health Organization (WHO) set a global goal of HCV elimination by 2030[16], and Taiwan authority is even ambitious by 2025[17]. To achieve the goal, implementation of the concept of HCV micro-elimination is regarding as an efficient and practical strategy[18]. We have proved that “universal mass screening plus outreach onsite treatment” is the key to achieve HCV micro-elimination among patients under maintenance hemodialysis[19].\nRecently, the latest EASL HCV guideline recommended simplified, genotyping/ subtyping-free, pangenotypic anti-HCV treatment, either sofosbuvir/ velpatasvir or glecaprevir/pibrentasvir, to increase the accessibility and global cure rates among patients with > 12 years, chronic hepatitis C without cirrhosis or with compensated cirrhosis, with or without HIV co-infection, whatever treatment-naïve or IFN-experienced[8].\nSince HCV treatment is not frequently administered to prisoners due to unawareness of HCV infection, difficultly management, easily loss to follow-up, and lack of hepatologist in prison[20], collaboration between hepatologists and prison authorities to carry out strategies for HCV diagnosis and treatment in prisoners in highly demanded. 
Herein, we implemented an outreach strategy in combination with universal mass screen and onsite treatment with a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, toward HCV micro-elimination in a PWID-dominant prison in Taiwan.", "Patients linked to onsite treatment program for HCV micro-elimination HCV-viremic patients were recruited from two cohorts in Penghu Prison (Agency of Corrections, Ministry of Justice, Taiwan), a PWID-dominant prison (Figure 1).\n\nPatient flowchart of hepatitis C virus treatment with a simplified pan-genotypic directly-acting antivirals regimen in Penghu Prison. HCV: Hepatitis C virus; DAA: Directly-acting antivirals; SOL/VEL: Sofosbuvir/velpatasvir; GEL/PIB: Glecaprevir/pibrentasvir; EOTVR: Virological response at end-of-treatment; SVR12: Sustained viral response at post-treatment wk 12.\nHCV-viremic patients were recruited from two cohorts in Penghu Prison (Agency of Corrections, Ministry of Justice, Taiwan), a PWID-dominant prison (Figure 1).\n\nPatient flowchart of hepatitis C virus treatment with a simplified pan-genotypic directly-acting antivirals regimen in Penghu Prison. HCV: Hepatitis C virus; DAA: Directly-acting antivirals; SOL/VEL: Sofosbuvir/velpatasvir; GEL/PIB: Glecaprevir/pibrentasvir; EOTVR: Virological response at end-of-treatment; SVR12: Sustained viral response at post-treatment wk 12.\nHCV-viremic patients identified by a universal mass screening In September 2019, we conducted a 5 d universal mass screening of viral hepatitis in Penghu Prison. These inclusion criteria were prisoners, who were at least 20 years old, being willing to enter the study for screening of viral hepatitis. The study of mass screening was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (IRB: KMUHIRB-SV(I)-20190033). All participants provided written informed consents. A total of 1137 subjects from 1697 inmates participated the mass screening[21]. Among them, 396 (34.8%) subjects had anti-HCV seropositivity; 208 (52.5%) of the 396 subjects were seropositive for HCV ribonucleic acid (RNA) and linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.\nIn September 2019, we conducted a 5 d universal mass screening of viral hepatitis in Penghu Prison. These inclusion criteria were prisoners, who were at least 20 years old, being willing to enter the study for screening of viral hepatitis. The study of mass screening was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (IRB: KMUHIRB-SV(I)-20190033). All participants provided written informed consents. A total of 1137 subjects from 1697 inmates participated the mass screening[21]. Among them, 396 (34.8%) subjects had anti-HCV seropositivity; 208 (52.5%) of the 396 subjects were seropositive for HCV ribonucleic acid (RNA) and linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.\nHCV-viremic patients identified in outpatient clinics during the period of HCV mass screening Another 26 HCV-viremic subjects identified in outpatient clinics of Penghu Prison between August to December 2019 were also linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.\nAll patients received pretreatment evaluation in December 2019, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions. 
A 12 wk, oral pan-genotypic regimen of sofosbuvir/velpatasvir 400/100 mg fixed-dose combination once daily was initiated in January-February 2020. \nAnother 26 HCV-viremic subjects identified in outpatient clinics of Penghu Prison between August to December 2019 were also linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.\nAll patients received pretreatment evaluation in December 2019, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions. A 12 wk, oral pan-genotypic regimen of sofosbuvir/velpatasvir 400/100 mg fixed-dose combination once daily was initiated in January-February 2020. \nPatients identified and treated by DAAs in outpatient clinics before mass screening A total of 91 HCV-viremic patients identified in outpatient clinics of Penghu Prison and treated with DAA before mass screening from 2017 to 2019 were enrolled as a control. The selection of DAA regimens were based on physician’s discretion according to the viral genotype and criteria of reimbursement of National Health Insurance Administration, Taiwan. All patients received pretreatment evaluation, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions.\nAll participants signed informed consent forms. These enrolled inmates of our study were protected according to the guidelines of the Declaration of Helsinki. The current study of DAA therapy was approved by the Institutional Review Board of Tri-Service General Hospital (IRB: TSGHIRB 2-107-05-080).\nA total of 91 HCV-viremic patients identified in outpatient clinics of Penghu Prison and treated with DAA before mass screening from 2017 to 2019 were enrolled as a control. The selection of DAA regimens were based on physician’s discretion according to the viral genotype and criteria of reimbursement of National Health Insurance Administration, Taiwan. All patients received pretreatment evaluation, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions.\nAll participants signed informed consent forms. These enrolled inmates of our study were protected according to the guidelines of the Declaration of Helsinki. The current study of DAA therapy was approved by the Institutional Review Board of Tri-Service General Hospital (IRB: TSGHIRB 2-107-05-080).\nAssessment, monitoring and endpoints Anti-HCV antibody was determined by the third generation, commercially available immunoassay (Ax SYM HCV III; Abbott Laboratories, North Chicago, IL). HCV RNA viral loads and genotype were determined by real-time PCR assays [RealTime HCV; Abbott Molecular, Des Plaines IL, United States; detection limit: 12 IU/mL])[22]. Liver cirrhosis was defined by the presence of clinical, radiological, endoscopic or laboratory evidence of cirrhosis and/or portal hypertension or fibrosis-4 index (FIB-4) (> 6.5). 
Laboratory data monitoring and assessment of side effects were performed at treatment wk 2, 4, 8 and end-of-treatment (EOT), and 12 wk after EOT.\nThe primary endpoint was sustained virological response (SVR12, defined as undetectable HCV RNA throughout 12 wk of the post-treatment follow-up period).\nAnti-HCV antibody was determined by the third generation, commercially available immunoassay (Ax SYM HCV III; Abbott Laboratories, North Chicago, IL). HCV RNA viral loads and genotype were determined by real-time PCR assays [RealTime HCV; Abbott Molecular, Des Plaines IL, United States; detection limit: 12 IU/mL])[22]. Liver cirrhosis was defined by the presence of clinical, radiological, endoscopic or laboratory evidence of cirrhosis and/or portal hypertension or fibrosis-4 index (FIB-4) (> 6.5). Laboratory data monitoring and assessment of side effects were performed at treatment wk 2, 4, 8 and end-of-treatment (EOT), and 12 wk after EOT.\nThe primary endpoint was sustained virological response (SVR12, defined as undetectable HCV RNA throughout 12 wk of the post-treatment follow-up period).\nStatistical analyses. The efficacy of all DAA regimens was determined in a intent-to-treat (ITT) population (all enrolled patients with at least one dose of DAA) and a per-protocol (PP) population (subjects receiving at least one dose of DAA and retained in Penghu Prison throughout the DAA treatment and follow-up period). Safety assessments reported adverse event (AE), serious adverse event (SAE) and laboratory abnormalities in the ITT population. Continuous variables are expressed as means ± standard deviation (SD), and categorical variables are expressed as percentages. The differences of continuous variables are estimated by the Student’s t test. The differences in categorical variables are analyzed using the Chi-square test. The on-treatment and off-treatment virological response rates were analyzed in number and percentages with 95% confidence interval (CI). All data analyses were performed using the SPSS software version 18.0 (SPSS Inc., Chicago, Illinois, United States).\nThe efficacy of all DAA regimens was determined in a intent-to-treat (ITT) population (all enrolled patients with at least one dose of DAA) and a per-protocol (PP) population (subjects receiving at least one dose of DAA and retained in Penghu Prison throughout the DAA treatment and follow-up period). Safety assessments reported adverse event (AE), serious adverse event (SAE) and laboratory abnormalities in the ITT population. Continuous variables are expressed as means ± standard deviation (SD), and categorical variables are expressed as percentages. The differences of continuous variables are estimated by the Student’s t test. The differences in categorical variables are analyzed using the Chi-square test. The on-treatment and off-treatment virological response rates were analyzed in number and percentages with 95% confidence interval (CI). All data analyses were performed using the SPSS software version 18.0 (SPSS Inc., Chicago, Illinois, United States).", "HCV-viremic patients were recruited from two cohorts in Penghu Prison (Agency of Corrections, Ministry of Justice, Taiwan), a PWID-dominant prison (Figure 1).\n\nPatient flowchart of hepatitis C virus treatment with a simplified pan-genotypic directly-acting antivirals regimen in Penghu Prison. 
HCV: Hepatitis C virus; DAA: Directly-acting antivirals; SOL/VEL: Sofosbuvir/velpatasvir; GEL/PIB: Glecaprevir/pibrentasvir; EOTVR: Virological response at end-of-treatment; SVR12: Sustained viral response at post-treatment wk 12.", "In September 2019, we conducted a 5 d universal mass screening of viral hepatitis in Penghu Prison. These inclusion criteria were prisoners, who were at least 20 years old, being willing to enter the study for screening of viral hepatitis. The study of mass screening was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (IRB: KMUHIRB-SV(I)-20190033). All participants provided written informed consents. A total of 1137 subjects from 1697 inmates participated the mass screening[21]. Among them, 396 (34.8%) subjects had anti-HCV seropositivity; 208 (52.5%) of the 396 subjects were seropositive for HCV ribonucleic acid (RNA) and linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.", "Another 26 HCV-viremic subjects identified in outpatient clinics of Penghu Prison between August to December 2019 were also linked to the onsite HCV treatment program with universal sofosbuvir/velpatasvir regimen.\nAll patients received pretreatment evaluation in December 2019, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions. A 12 wk, oral pan-genotypic regimen of sofosbuvir/velpatasvir 400/100 mg fixed-dose combination once daily was initiated in January-February 2020. ", "A total of 91 HCV-viremic patients identified in outpatient clinics of Penghu Prison and treated with DAA before mass screening from 2017 to 2019 were enrolled as a control. The selection of DAA regimens were based on physician’s discretion according to the viral genotype and criteria of reimbursement of National Health Insurance Administration, Taiwan. All patients received pretreatment evaluation, including medical history, liver and renal function tests, complete blood cell counts, HCV viral loads and genotyping, abdominal sonography and assessment of potential drug–drug interactions.\nAll participants signed informed consent forms. These enrolled inmates of our study were protected according to the guidelines of the Declaration of Helsinki. The current study of DAA therapy was approved by the Institutional Review Board of Tri-Service General Hospital (IRB: TSGHIRB 2-107-05-080).", "Anti-HCV antibody was determined by the third generation, commercially available immunoassay (Ax SYM HCV III; Abbott Laboratories, North Chicago, IL). HCV RNA viral loads and genotype were determined by real-time PCR assays [RealTime HCV; Abbott Molecular, Des Plaines IL, United States; detection limit: 12 IU/mL])[22]. Liver cirrhosis was defined by the presence of clinical, radiological, endoscopic or laboratory evidence of cirrhosis and/or portal hypertension or fibrosis-4 index (FIB-4) (> 6.5). 
Laboratory data monitoring and assessment of side effects were performed at treatment wk 2, 4, 8 and end-of-treatment (EOT), and 12 wk after EOT.\nThe primary endpoint was sustained virological response (SVR12, defined as undetectable HCV RNA throughout 12 wk of the post-treatment follow-up period).", "The efficacy of all DAA regimens was determined in a intent-to-treat (ITT) population (all enrolled patients with at least one dose of DAA) and a per-protocol (PP) population (subjects receiving at least one dose of DAA and retained in Penghu Prison throughout the DAA treatment and follow-up period). Safety assessments reported adverse event (AE), serious adverse event (SAE) and laboratory abnormalities in the ITT population. Continuous variables are expressed as means ± standard deviation (SD), and categorical variables are expressed as percentages. The differences of continuous variables are estimated by the Student’s t test. The differences in categorical variables are analyzed using the Chi-square test. The on-treatment and off-treatment virological response rates were analyzed in number and percentages with 95% confidence interval (CI). All data analyses were performed using the SPSS software version 18.0 (SPSS Inc., Chicago, Illinois, United States).", "Patient flowchart of HCV micro-elimination campaign The patient flowchart of HCV mass screen, assessment and treatment was shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from outpatient clinics in Penghu Prison were assessed for eligibility of group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy due to scheduled to be released from jail (n = 16) or transferred to other jails (n = 3) within 6 mo, unwilling to receive therapy (n = 2) and prior glecaprevir/pibrentasvir treatment failure (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020.\nThe patient flowchart of HCV mass screen, assessment and treatment was shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from outpatient clinics in Penghu Prison were assessed for eligibility of group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy due to scheduled to be released from jail (n = 16) or transferred to other jails (n = 3) within 6 mo, unwilling to receive therapy (n = 2) and prior glecaprevir/pibrentasvir treatment failure (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020.\nPatient characteristics The baseline characteristics of 303 HCV-viremic patients, including 212 in HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before micro-elimination campaign were listed in Table 1. They mean age was 48.4 years with male dominant (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, with 20 (6.6%) had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA levels was 6.5 Logs IU/mL, dominant with HCV genotype 1 (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were prior IFN-experienced. The two groups had comparable characteristics in terms of age, gender, HBV co-infection, liver and renal function tests, FIB-4 score, HCV genotype distribution, and prior history of IFN-based therapy. 
However, the sporadic patients identified in outpatient clinics had significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2, but significantly lower HCV viral loads. None of patient had decompensated cirrhosis nor liver cancer.\nBaseline characteristics of hepatitis C virus-infected patients receiving directly-acting antivirals therapy between sporadic hepatitis C virus therapy in outpatient clinics and campaign of hepatitis C virus micro-elimination in Penghu prison\n56 patients did not have body mass index information (12 patients before campaign of hepatitis C virus (HCV) micro-elimination; 44 patients in campaign of HCV micro-elimination).\n\nP < 0.05. DAA: Directly-acting antivirals; HCV: Hepatitis C virus; BMI: Body mass index; AST: Aspartate aminotransferase; ALT: Alanine aminotransferase; LC: Liver cirrhosis; FIB-4: Fibrosis-4 index; HBsAg: Hepatitis B surface antigen; eGFR: Estimated glomerular filtration rate (mL/min/1.73 m2); SOF: Sofosbuvir; VEL: Velpatasvir; LDV: Ledipasvir; GLE: Glecaprevir; PIB: Pibrentasvir; IFN: Interferon.\nThe baseline characteristics of 303 HCV-viremic patients, including 212 in HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before micro-elimination campaign were listed in Table 1. They mean age was 48.4 years with male dominant (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, with 20 (6.6%) had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA levels was 6.5 Logs IU/mL, dominant with HCV genotype 1 (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were prior IFN-experienced. The two groups had comparable characteristics in terms of age, gender, HBV co-infection, liver and renal function tests, FIB-4 score, HCV genotype distribution, and prior history of IFN-based therapy. However, the sporadic patients identified in outpatient clinics had significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2, but significantly lower HCV viral loads. None of patient had decompensated cirrhosis nor liver cancer.\nBaseline characteristics of hepatitis C virus-infected patients receiving directly-acting antivirals therapy between sporadic hepatitis C virus therapy in outpatient clinics and campaign of hepatitis C virus micro-elimination in Penghu prison\n56 patients did not have body mass index information (12 patients before campaign of hepatitis C virus (HCV) micro-elimination; 44 patients in campaign of HCV micro-elimination).\n\nP < 0.05. 
DAA: Directly-acting antivirals; HCV: Hepatitis C virus; BMI: Body mass index; AST: Aspartate aminotransferase; ALT: Alanine aminotransferase; LC: Liver cirrhosis; FIB-4: Fibrosis-4 index; HBsAg: Hepatitis B surface antigen; eGFR: Estimated glomerular filtration rate (mL/min/1.73 m2); SOF: Sofosbuvir; VEL: Velpatasvir; LDV: Ledipasvir; GLE: Glecaprevir; PIB: Pibrentasvir; IFN: Interferon.\nTreatment efficacy All of 212 patients in HCV micro-elimination campaign received sofosbuvir/ velpatasvir treatment; while among 91 sporadic patients with DAA therapy before HCV micro-elimination campaign, 78 (85.7%) received 12 wk of sofosbuvir/Ledipasvir and 13 (14.3%) received 8-12 wk of glecaprevir/pibrentasvir according to the Taiwan HCV guideline[12,13].\nIn ITT analysis, the overall SVR12 rate was 95.4% (289/303) with comparable SVR12 rates between sporadic HCV control group (94.5%, 86/91) and HCV micro-elimination group (95.8%, 203/212, P = 0.126, Table 2).\nVirological responses of hepatitis C virus-infected patients receiving directly-acting antivirals therapy before and during campaign of hepatitis C virus micro-elimination in Penghu prison in Penghu prison\nOne missing data.\nOne transferred; One missing data.\nTwo transferred; One released.\nFour released.\nOne relapser.\nFour transferred; Five released. HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. \nDuring DAA treatment period, all of patients in sporadic HCV control group completed DAA therapy, while 3 patients in HCV micro-elimination group lost-to-follow (2 transferred; 1 released). During the post-treatment follow-up period, 4 patients in sporadic HCV control group lost-to-follow (4 released), while 6 patients in HCV micro-elimination group lost-to-follow (2 transferred; 4 released). In PP analysis, the overall SVR12 rate was 99.7% (289/290) with comparable SVR12 rates between sporadic HCV control group (98.9%, 86/87) and HCV micro-elimination group (100%, 203/203, P = 0.126, Table 2). Only one patient experienced virological failure (54 years old male, treatment-naïve, HCV-GT3 infection with baseline viral loads of 62,883 IU/mL and FIB-4 of 2.37; relapsed from a 12 wk regimen of glecaprevir/pibrentasvir).\nAll of 212 patients in HCV micro-elimination campaign received sofosbuvir/ velpatasvir treatment; while among 91 sporadic patients with DAA therapy before HCV micro-elimination campaign, 78 (85.7%) received 12 wk of sofosbuvir/Ledipasvir and 13 (14.3%) received 8-12 wk of glecaprevir/pibrentasvir according to the Taiwan HCV guideline[12,13].\nIn ITT analysis, the overall SVR12 rate was 95.4% (289/303) with comparable SVR12 rates between sporadic HCV control group (94.5%, 86/91) and HCV micro-elimination group (95.8%, 203/212, P = 0.126, Table 2).\nVirological responses of hepatitis C virus-infected patients receiving directly-acting antivirals therapy before and during campaign of hepatitis C virus micro-elimination in Penghu prison in Penghu prison\nOne missing data.\nOne transferred; One missing data.\nTwo transferred; One released.\nFour released.\nOne relapser.\nFour transferred; Five released. HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. \nDuring DAA treatment period, all of patients in sporadic HCV control group completed DAA therapy, while 3 patients in HCV micro-elimination group lost-to-follow (2 transferred; 1 released). 
During the post-treatment follow-up period, 4 patients in sporadic HCV control group lost-to-follow (4 released), while 6 patients in HCV micro-elimination group lost-to-follow (2 transferred; 4 released). In PP analysis, the overall SVR12 rate was 99.7% (289/290) with comparable SVR12 rates between sporadic HCV control group (98.9%, 86/87) and HCV micro-elimination group (100%, 203/203, P = 0.126, Table 2). Only one patient experienced virological failure (54 years old male, treatment-naïve, HCV-GT3 infection with baseline viral loads of 62,883 IU/mL and FIB-4 of 2.37; relapsed from a 12 wk regimen of glecaprevir/pibrentasvir).\nSafety profiles The safety profiles of both groups were shown in Table 3. None of patients had treatment discontinuation other than released or transferred. None experienced serious adverse event. The frequency of adverse events was 4.3% (4/91) and 1.4% (3/212), respectively, among patients in sporadic control group and HCV micro-elimination group. The most reported adverse events were rash in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir and pruritus in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. None of patients experienced grade 3 or 4 Laboratory abnormality.\nSafety profiles of hepatitis C virus-infected patients receiving direct-acting antivirals therapy in Penghu prison\nDAA: Directly-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. \nThe safety profiles of both groups were shown in Table 3. None of patients had treatment discontinuation other than released or transferred. None experienced serious adverse event. The frequency of adverse events was 4.3% (4/91) and 1.4% (3/212), respectively, among patients in sporadic control group and HCV micro-elimination group. The most reported adverse events were rash in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir and pruritus in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. None of patients experienced grade 3 or 4 Laboratory abnormality.\nSafety profiles of hepatitis C virus-infected patients receiving direct-acting antivirals therapy in Penghu prison\nDAA: Directly-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. ", "The patient flowchart of HCV mass screen, assessment and treatment was shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from outpatient clinics in Penghu Prison were assessed for eligibility of group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy due to scheduled to be released from jail (n = 16) or transferred to other jails (n = 3) within 6 mo, unwilling to receive therapy (n = 2) and prior glecaprevir/pibrentasvir treatment failure (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020.", "The baseline characteristics of 303 HCV-viremic patients, including 212 in HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before micro-elimination campaign were listed in Table 1. They mean age was 48.4 years with male dominant (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, with 20 (6.6%) had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA levels was 6.5 Logs IU/mL, dominant with HCV genotype 1 (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were prior IFN-experienced. 
The two groups had comparable characteristics in terms of age, gender, HBV co-infection, liver and renal function tests, FIB-4 score, HCV genotype distribution, and prior history of IFN-based therapy. However, the sporadic patients identified in outpatient clinics had significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2, but significantly lower HCV viral loads. None of patient had decompensated cirrhosis nor liver cancer.\nBaseline characteristics of hepatitis C virus-infected patients receiving directly-acting antivirals therapy between sporadic hepatitis C virus therapy in outpatient clinics and campaign of hepatitis C virus micro-elimination in Penghu prison\n56 patients did not have body mass index information (12 patients before campaign of hepatitis C virus (HCV) micro-elimination; 44 patients in campaign of HCV micro-elimination).\n\nP < 0.05. DAA: Directly-acting antivirals; HCV: Hepatitis C virus; BMI: Body mass index; AST: Aspartate aminotransferase; ALT: Alanine aminotransferase; LC: Liver cirrhosis; FIB-4: Fibrosis-4 index; HBsAg: Hepatitis B surface antigen; eGFR: Estimated glomerular filtration rate (mL/min/1.73 m2); SOF: Sofosbuvir; VEL: Velpatasvir; LDV: Ledipasvir; GLE: Glecaprevir; PIB: Pibrentasvir; IFN: Interferon.", "All of 212 patients in HCV micro-elimination campaign received sofosbuvir/ velpatasvir treatment; while among 91 sporadic patients with DAA therapy before HCV micro-elimination campaign, 78 (85.7%) received 12 wk of sofosbuvir/Ledipasvir and 13 (14.3%) received 8-12 wk of glecaprevir/pibrentasvir according to the Taiwan HCV guideline[12,13].\nIn ITT analysis, the overall SVR12 rate was 95.4% (289/303) with comparable SVR12 rates between sporadic HCV control group (94.5%, 86/91) and HCV micro-elimination group (95.8%, 203/212, P = 0.126, Table 2).\nVirological responses of hepatitis C virus-infected patients receiving directly-acting antivirals therapy before and during campaign of hepatitis C virus micro-elimination in Penghu prison in Penghu prison\nOne missing data.\nOne transferred; One missing data.\nTwo transferred; One released.\nFour released.\nOne relapser.\nFour transferred; Five released. HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. \nDuring DAA treatment period, all of patients in sporadic HCV control group completed DAA therapy, while 3 patients in HCV micro-elimination group lost-to-follow (2 transferred; 1 released). During the post-treatment follow-up period, 4 patients in sporadic HCV control group lost-to-follow (4 released), while 6 patients in HCV micro-elimination group lost-to-follow (2 transferred; 4 released). In PP analysis, the overall SVR12 rate was 99.7% (289/290) with comparable SVR12 rates between sporadic HCV control group (98.9%, 86/87) and HCV micro-elimination group (100%, 203/203, P = 0.126, Table 2). Only one patient experienced virological failure (54 years old male, treatment-naïve, HCV-GT3 infection with baseline viral loads of 62,883 IU/mL and FIB-4 of 2.37; relapsed from a 12 wk regimen of glecaprevir/pibrentasvir).", "The safety profiles of both groups were shown in Table 3. None of patients had treatment discontinuation other than released or transferred. None experienced serious adverse event. The frequency of adverse events was 4.3% (4/91) and 1.4% (3/212), respectively, among patients in sporadic control group and HCV micro-elimination group. 
The most reported adverse events were rash in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir and pruritus in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. None of patients experienced grade 3 or 4 Laboratory abnormality.\nSafety profiles of hepatitis C virus-infected patients receiving direct-acting antivirals therapy in Penghu prison\nDAA: Directly-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir. ", "In the current study, we demonstrated that mass screening combined with onsite group therapy by using a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, provides an “one-size fits all” solution toward the achievement of HCV micro-elimination in prisoners. The SVR rate was 95.6% in ITT population and 100% in PP population after excluding the inmates released or transferred before end-of-follow-up. The high SVR rate was observed in this PWID-dominant population, which HCV genotype distribution was diverse, including genotypes 1a, 1b, 2, 3 and 6. \nRecent advance in the development of IFN-free pan-genotypic DAA regimens has remarkably improved the treatment efficacy with an overall SVR rates of > 90%. Therefore, WHO set the global of HCV elimination by 2030, through the achievement of > 90% diagnosis rate and > 80% treatment rate for eligible patients[16]. Nevertheless, there are many barriers in each HCV care cascade toward HCV elimination at the population level[11,23]. To overcome the barriers, combining the concept of micro-elimination and an outreach strategy with immediate onsite treatment would be a more efficient and practical approach to achieve that goal[18,24]. The current study compared the HCV-infected inmates identified sporadically in outpatient clinics of Penghu Prison from 2017 to 2019 before mass screening and the patients identified by mass screening. We found that mass screening identified 208 HCV-viremic patients in a 5 d screening program from 1137 inmates (encountered around two-third of total inmates in Penghu Prison), compared to 91 HCV-viremic inmates treated in outpatient clinics from 2017 to September 2019. Our results demonstrated that mass screening with immediate onsite treatment provide much more efficient and practical solution to overcome the gaps of disease awareness and link-to-care in the HCV care cascades toward HCV micro-elimination in prisoners. In addition, we implemented “HCV reflex testing” in the mass screening program to scale-up and speed-up the diagnosis and link-to-care for treatment uptake of HCV infections[25].\nPWID is known as the major risk factor of HCV infection and transmission. Although the anti-HCV prevalence in PWID prisoners decreased from 91% in 2014 to 34.8% in 2019 by the strategy of safe injection in Taiwan[21], almost all (97.6%) of HCV-infected prisoners were PWID. Given the lack of vaccine available and high risk of transmission, the strategy of universal screening and concept of “treatment as prevention” are the keys to HCV elimination in prison as well as PWID.\nWe observed that the sporadic HCV-infected prisoners identified in outpatient clinics had significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and eGFR, than those participating in the HCV micro-elimination campaign. It implicated that a great proportion of identified sporadically in outpatient clinics were due to concomitant morbidities; by contrast, many HCV-infected patients were unaware to their HCV diseases. 
In our mass screening, only 36.6% (145/396) of HCV-infected prisoners were aware of HCV infection before screening[21]. It indicates that the implementation of an outreach strategy with universal mass screen is necessary for HCV micro-elimination in prison.\nDespite of the advances in the management of HCV infections, DAA therapy in incarcerated HCV-infected people remains many obstacles to be resolved, including disease unawareness, lack of updated information about the benefits of new DAA treatment, uncertainty of treatment right[26], poor accessibility due to of onsite treatment facilities or HCV treaters. Another difficulty for HCV treatment in prisoners is the unexpected or scheduled releasing from prison or transferring to other prisons, which frequently leads to the interruption of treatment or lost-to- follow up[20,27]. We are lucky that the Taiwan Health Insurance covered all incarcerated people, including all of the laboratory tests and ultrasound sonography and the cost of DAA regimens. Each prison has a contracted hospital providing point-of-care facility. Before initiating DAA therapy, we excluded the patients with expected release or transfer within 24 wk, and negotiated with the authority to avoid unnecessary transferring to other prisons during the period of HCV treatment and follow-up once the inmates entering the DAA course. Eventually we achieved a high treatment rate of 90.6% (212/234) and a high treatment complete rate of 95.8% (203/212), with a high cure rate at 100% (212/212).\nBefore the IFN-free DAA available, the lower SVR rate, much longer treatment duration and frequent adverse events of IFN-based treatment discouraged HCV-infected prisoners from receiving treatment[10]. IFN-free DAA regimens revolutionized HCV treatment which has largely extended the indication for various HCV-infected patients. Nevertheless, the application of typical DAA regimens are based on HCV genotype, presence of decompensated cirrhosis, renal function, and prior treatment experience. The two pangenotypic DAA regimens, sofosbuvir/velpatasvir, and glecaprevir/pibrentasvir, have achieved very high SVR rates of > 95%, regardless of HCV GTs, except for treatment-experienced cirrhotic HCV GT3 patients or GT3b patients[8,12,13]. Recently, to improve the access to anti-HCV therapy, reduce the cost of laboratory tests and the relative complexity of genotype-based treatment strategies, simplified treatment without many information needed for treatment decision are recommended to facilitate the care cascade among populations who are historically less engaged in healthcare, such as PWIDs and prisoners[8]. EASL recommends simplified, genotyping/subtyping-free regimens for IFN-free DAA treatment-naïve (except sofosbuvir plus ribavirin), HCV-infected or HCV-HIV coinfected adolescent and adult patients without cirrhosis or with compensated cirrhosis, regardless of HCV genotypes[8]. These recommendations are a universal 12 wk regimen of sofosbuvir/velpatasvir for all patients or glecaprevir/pibrentasvir, 8 wk for non-cirrhotic, 12 wk for compensated cirrhotic, and 16 wk for HCV GT3 patients, respectively. There are only four information needed before treatment, including the presence of HCV viremia, potential drug-drug interactions, and prior treatment experience, and presence of cirrhosis. 
The advantages of glecaprevir/pibrentasvir is a shorter 8 wk regimen for treatment-naïve HCV patients and IFN-experienced non-cirrhotic patients with compensated liver diseases, which would be benefit for prisoners who are expected to be released or transferred in a short term. However, glecaprevir, a protease inhibitor, is contraindicated for patients with hepatic decompensation and at risk for rare occurrence of serious drug-induced liver injury[28]. Also, glecaprevir/ pibrentasvir has higher pill burden, three tablets a d. The advantages of sofosbuvir/ velpatasvir include a universal fixed 12 wk regimen, one tablet a d, for all HCV patients with compensated liver diseases, less frequency of potential drug-drug interactions[29], and safety for those with hepatic decompensation. However, a 12 wk regimen with sofosbuvir/velpatasvir needs one more visit and monitoring when compared to an 8 wk regimen with glecaprevir/pibrentasvir. Therefore, we select sofosbuvir/velpatasvir as the antiviral regimen for our outreach onsite treatment. In our study, all HCV-viremic prisoners fit the criteria of simplified, genotyping/ subtyping-free regimens, except one who failed to prior glecaprevir/pibrentasvir therapy and was not enrolled for sofosbuvir/velpatasvir treatment. In our PP analysis, the overall SVR12 rate was comparable between HCV patient group (98.9%, 86/87) and HCV micro-elimination group (100%, 203/203). Our study provided evidence for the concept that simplified, genotyping/subtyping-free regimens can achieve high SVR12 rate in HCV-infected PWID-dominant prisoners.\nIn our study, none of prisoners had DAA treatment discontinuation due to adverse events. None experienced serious adverse event. These data indicated that the simplified, genotyping/subtyping-free regimen, sofosbuvir/velpatasvir, was safe and well tolerated for HCV-infected PWID-dominant prisoners. Very few adverse events were reported in both groups, whatever using sofosbuvir/Ledipasvir, glecaprevir/ pibrentasvir and sofosbuvir/velpatasvir, when compared to the data from clinical trials[30,31]. It might be due to that current population was younger and less patients with advanced fibrosis or chronic kidney diseases.\nThere were some limitations in our study. First, not all inmates in Penghu Prison participated our mass screening. Strategies and policy to encourage inmates to receive HCV screening is mandatory to achieve the goal of WHO. Second, unexpected prisoners’ transferral and release could not be completely avoided, which caused incomplete treatment and follow-up. Successfully linking the released or transferred people to another HCV treaters could help completing HCV treatment and follow-up. Third, there was no reimbursement for the retreatment of prior DAA failed patients in Taiwan at the time of the current study.", "Well-designed strategies for mass screening and treatment for HCV-infected prisoners can be implemented successfully by the collaboration between physicians and prison authorities. We demonstrated that mass screening followed by immediate onsite treatment with a simplified pangenotypic DAA regimen, sofosbuvir/velpatasvir, provides successful strategies toward HCV micro-elimination among prisoners." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Direct-acting antivirals", "Sofosbuvir", "Velpatasvir", "People who inject drugs", "Universal screen" ]
INTRODUCTION: Hepatitis C virus (HCV) infection is a progressive, blood-borne infectious disease that can lead to end-stage liver diseases such as hepatic decompensation, liver cirrhosis, and hepatocellular carcinoma[1,2]. Iatrogenic transmission of HCV, such as through blood transfusion and surgery, has decreased in developed countries, whereas people who inject drugs (PWID) have become the major population for HCV transmission, accounting for approximately 80% of HCV-infected patients[3]. Given the lack of an available vaccine, "treatment as prevention" of HCV transmission in PWID is very important for HCV elimination. Prisoners are at high risk of HCV infection, with prevalence rates ranging from 3.1% to 38%[4,5]. The high prevalence of HCV infection in prisoners results from unsafe lifestyles, psychiatric disorders, and social problems before incarceration. Recently, injection drug use has been the most important risk factor for HCV infection in prisoners[6]. The anti-HCV prevalence rate can be as high as 91% among PWID prisoners[7]. Screening for and eliminating HCV infection in prisoners is therefore an important public health issue. According to the American Association for the Study of Liver Diseases and European Association for the Study of the Liver (EASL) guidelines, all HCV-viremic patients should be treated if their life expectancy is more than one year[8,9]. HCV therapeutic strategies have been revolutionized by the availability of direct-acting antivirals (DAA)[10]. Interferon (IFN)-based regimens for HCV infection have serious side effects, long therapeutic duration, and contraindications, leading to huge gaps in the HCV care cascade[11]. The current IFN-free DAA regimens provide shorter treatment duration and very high efficacy and safety, not only for the general population[12] but also for special populations[13], such as HCV/human immunodeficiency virus (HIV)-coinfected patients, hepatitis B virus (HBV)/HCV-coinfected patients and patients with chronic kidney disease, in real-world clinical settings[14,15]. The World Health Organization (WHO) set a global goal of HCV elimination by 2030[16], and the Taiwan authority is even more ambitious, targeting 2025[17]. To achieve this goal, implementation of the concept of HCV micro-elimination is regarded as an efficient and practical strategy[18]. We have shown that "universal mass screening plus outreach onsite treatment" is the key to achieving HCV micro-elimination among patients under maintenance hemodialysis[19]. Recently, the latest EASL HCV guideline recommended simplified, genotyping/subtyping-free, pangenotypic anti-HCV treatment, with either sofosbuvir/velpatasvir or glecaprevir/pibrentasvir, to increase accessibility and global cure rates among patients older than 12 years with chronic hepatitis C, without cirrhosis or with compensated cirrhosis, with or without HIV co-infection, whether treatment-naïve or IFN-experienced[8]. Since HCV treatment is not frequently administered to prisoners because of unawareness of HCV infection, difficult management, frequent loss to follow-up, and lack of hepatologists in prison[20], collaboration between hepatologists and prison authorities to carry out strategies for HCV diagnosis and treatment in prisoners is highly demanded. Herein, we implemented an outreach strategy combining universal mass screening and onsite treatment with a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, toward HCV micro-elimination in a PWID-dominant prison in Taiwan.
MATERIALS AND METHODS: Patients linked to the onsite treatment program for HCV micro-elimination: HCV-viremic patients were recruited from two cohorts in Penghu Prison (Agency of Corrections, Ministry of Justice, Taiwan), a PWID-dominant prison (Figure 1). Figure 1: Patient flowchart of hepatitis C virus treatment with a simplified pan-genotypic direct-acting antiviral regimen in Penghu Prison. HCV: Hepatitis C virus; DAA: Direct-acting antivirals; SOF/VEL: Sofosbuvir/velpatasvir; GLE/PIB: Glecaprevir/pibrentasvir; EOTVR: Virological response at end-of-treatment; SVR12: Sustained virological response at post-treatment wk 12.
HCV-viremic patients identified by universal mass screening: In September 2019, we conducted a 5 d universal mass screening for viral hepatitis in Penghu Prison. The inclusion criteria were prisoners at least 20 years old who were willing to enter the study for viral hepatitis screening. The mass screening study was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (IRB: KMUHIRB-SV(I)-20190033). All participants provided written informed consent. A total of 1137 of the 1697 inmates participated in the mass screening[21]. Among them, 396 (34.8%) were anti-HCV seropositive; 208 (52.5%) of these 396 subjects were seropositive for HCV ribonucleic acid (RNA) and were linked to the onsite HCV treatment program with the universal sofosbuvir/velpatasvir regimen.
HCV-viremic patients identified in outpatient clinics during the period of HCV mass screening: Another 26 HCV-viremic subjects identified in the outpatient clinics of Penghu Prison between August and December 2019 were also linked to the onsite HCV treatment program with the universal sofosbuvir/velpatasvir regimen. All patients received pretreatment evaluation in December 2019, including medical history, liver and renal function tests, complete blood cell counts, HCV viral load and genotyping, abdominal sonography and assessment of potential drug-drug interactions. A 12 wk oral pan-genotypic regimen of sofosbuvir/velpatasvir 400/100 mg fixed-dose combination once daily was initiated in January-February 2020.
Patients identified and treated with DAAs in outpatient clinics before mass screening: A total of 91 HCV-viremic patients identified in the outpatient clinics of Penghu Prison and treated with DAAs before mass screening, from 2017 to 2019, were enrolled as a control. The selection of DAA regimens was based on the physician's discretion according to the viral genotype and the reimbursement criteria of the National Health Insurance Administration, Taiwan. All patients received pretreatment evaluation, including medical history, liver and renal function tests, complete blood cell counts, HCV viral load and genotyping, abdominal sonography and assessment of potential drug-drug interactions. All participants signed informed consent forms. The enrolled inmates were protected according to the guidelines of the Declaration of Helsinki. The study of DAA therapy was approved by the Institutional Review Board of Tri-Service General Hospital (IRB: TSGHIRB 2-107-05-080).
Assessment, monitoring and endpoints: Anti-HCV antibody was determined by a third-generation, commercially available immunoassay (Ax SYM HCV III; Abbott Laboratories, North Chicago, IL). HCV RNA viral load and genotype were determined by real-time PCR assays (RealTime HCV; Abbott Molecular, Des Plaines, IL, United States; detection limit: 12 IU/mL)[22]. Liver cirrhosis was defined by the presence of clinical, radiological, endoscopic or laboratory evidence of cirrhosis and/or portal hypertension, or a fibrosis-4 index (FIB-4) > 6.5. Laboratory data monitoring and assessment of side effects were performed at treatment wk 2, 4 and 8, at end-of-treatment (EOT), and 12 wk after EOT. The primary endpoint was sustained virological response (SVR12), defined as undetectable HCV RNA throughout 12 wk of the post-treatment follow-up period.
Statistical analyses: The efficacy of all DAA regimens was determined in an intent-to-treat (ITT) population (all enrolled patients who received at least one dose of DAA) and a per-protocol (PP) population (subjects who received at least one dose of DAA and remained in Penghu Prison throughout the DAA treatment and follow-up period). Safety assessments reported adverse events (AE), serious adverse events (SAE) and laboratory abnormalities in the ITT population. Continuous variables are expressed as means ± standard deviation (SD), and categorical variables are expressed as percentages. Differences in continuous variables were estimated by Student's t test, and differences in categorical variables were analyzed using the Chi-square test. On-treatment and off-treatment virological response rates are reported as numbers and percentages with 95% confidence intervals (CI). All data analyses were performed using SPSS software version 18.0 (SPSS Inc., Chicago, Illinois, United States).
RESULTS: Patient flowchart of the HCV micro-elimination campaign: The patient flowchart of HCV mass screening, assessment and treatment is shown in Figure 1. A total of 234 HCV-viremic patients, 208 from mass screening and 26 from the outpatient clinics of Penghu Prison, were assessed for eligibility for group therapy with sofosbuvir/velpatasvir in December 2019. Twenty-two patients were excluded from anti-HCV therapy because they were scheduled to be released from jail (n = 16) or transferred to other jails (n = 3) within 6 mo, were unwilling to receive therapy (n = 2), or had failed prior glecaprevir/pibrentasvir treatment (n = 1). Finally, 212 patients were recruited for sofosbuvir/velpatasvir therapy initiated in January-February 2020.
Patient characteristics: The baseline characteristics of the 303 HCV-viremic patients, including 212 in the HCV micro-elimination campaign and 91 sporadic controls from outpatient clinics before the campaign, are listed in Table 1. The mean age was 48.4 years, with male predominance (99.7%). Thirty (9.9%) had HBV coinfection. The mean FIB-4 was 1.3, and 20 (6.6%) had advanced fibrosis (FIB-4 > 3.25). Only one patient (0.3%) had liver cirrhosis. The mean HCV RNA level was 6.5 log IU/mL, and HCV genotype 1 was dominant (HCV-GT1, 42.2%), followed by HCV-GT6 (35.3%), HCV-GT3 (11.6%) and HCV-GT2 (10.6%). Three (1%) patients were IFN-experienced. The two groups had comparable characteristics in terms of age, gender, HBV co-infection, liver and renal function tests, FIB-4 score, HCV genotype distribution, and prior history of IFN-based therapy. However, the sporadic patients identified in outpatient clinics had a significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m2, but significantly lower HCV viral loads. No patient had decompensated cirrhosis or liver cancer. Table 1: Baseline characteristics of hepatitis C virus-infected patients receiving direct-acting antiviral therapy, comparing sporadic hepatitis C virus therapy in outpatient clinics with the campaign of hepatitis C virus micro-elimination in Penghu Prison. Fifty-six patients did not have body mass index information (12 patients before the campaign of hepatitis C virus (HCV) micro-elimination; 44 patients in the campaign of HCV micro-elimination). P < 0.05.
DAA: Direct-acting antivirals; HCV: Hepatitis C virus; BMI: Body mass index; AST: Aspartate aminotransferase; ALT: Alanine aminotransferase; LC: Liver cirrhosis; FIB-4: Fibrosis-4 index; HBsAg: Hepatitis B surface antigen; eGFR: Estimated glomerular filtration rate (mL/min/1.73 m2); SOF: Sofosbuvir; VEL: Velpatasvir; LDV: Ledipasvir; GLE: Glecaprevir; PIB: Pibrentasvir; IFN: Interferon.
Treatment efficacy: All 212 patients in the HCV micro-elimination campaign received sofosbuvir/velpatasvir treatment, while among the 91 sporadic patients treated with DAAs before the campaign, 78 (85.7%) received 12 wk of sofosbuvir/ledipasvir and 13 (14.3%) received 8-12 wk of glecaprevir/pibrentasvir according to the Taiwan HCV guideline[12,13]. In the ITT analysis, the overall SVR12 rate was 95.4% (289/303), with comparable SVR12 rates between the sporadic HCV control group (94.5%, 86/91) and the HCV micro-elimination group (95.8%, 203/212, P = 0.126, Table 2). Table 2: Virological responses of hepatitis C virus-infected patients receiving direct-acting antiviral therapy before and during the campaign of hepatitis C virus micro-elimination in Penghu Prison. Footnotes: one missing data point; one transferred and one missing data point; two transferred and one released; four released; one relapser; four transferred and five released. HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir.
During the DAA treatment period, all patients in the sporadic HCV control group completed DAA therapy, while 3 patients in the HCV micro-elimination group were lost to follow-up (2 transferred; 1 released). During the post-treatment follow-up period, 4 patients in the sporadic HCV control group were lost to follow-up (4 released), and 6 patients in the HCV micro-elimination group were lost to follow-up (2 transferred; 4 released). In the PP analysis, the overall SVR12 rate was 99.7% (289/290), with comparable SVR12 rates between the sporadic HCV control group (98.9%, 86/87) and the HCV micro-elimination group (100%, 203/203, P = 0.126, Table 2). Only one patient experienced virological failure (a 54-year-old male, treatment-naïve, with HCV-GT3 infection, a baseline viral load of 62883 IU/mL and a FIB-4 of 2.37, who relapsed after a 12 wk regimen of glecaprevir/pibrentasvir).
Safety profiles: The safety profiles of both groups are shown in Table 3. No patient discontinued treatment for reasons other than release or transfer, and none experienced a serious adverse event. The frequency of adverse events was 4.3% (4/91) and 1.4% (3/212) among patients in the sporadic control group and the HCV micro-elimination group, respectively. The most commonly reported adverse events were rash, in 3 of 13 (23.1%) patients treated with glecaprevir/pibrentasvir, and pruritus, in 2 of 212 (0.9%) patients treated with sofosbuvir/velpatasvir. No patient experienced a grade 3 or 4 laboratory abnormality. Table 3: Safety profiles of hepatitis C virus-infected patients receiving direct-acting antiviral therapy in Penghu Prison. DAA: Direct-acting antivirals; HCV: Hepatitis C virus; VEL: Velpatasvir; SOF: Sofosbuvir.
DISCUSSION: In the current study, we demonstrated that mass screening combined with onsite group therapy using a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, provides a "one-size-fits-all" solution toward the achievement of HCV micro-elimination in prisoners. The SVR rate was 95.8% in the ITT population and 100% in the PP population after excluding inmates released or transferred before the end of follow-up. This high SVR rate was observed in a PWID-dominant population with a diverse HCV genotype distribution, including genotypes 1a, 1b, 2, 3 and 6. Recent advances in the development of IFN-free pan-genotypic DAA regimens have remarkably improved treatment efficacy, with overall SVR rates of > 90%. WHO has therefore set the global goal of HCV elimination by 2030, through the achievement of a > 90% diagnosis rate and a > 80% treatment rate for eligible patients[16].
Nevertheless, there are many barriers in each step of the HCV care cascade toward HCV elimination at the population level[11,23]. To overcome these barriers, combining the concept of micro-elimination with an outreach strategy of immediate onsite treatment would be a more efficient and practical approach to achieving that goal[18,24]. The current study compared HCV-infected inmates identified sporadically in the outpatient clinics of Penghu Prison from 2017 to 2019, before mass screening, with patients identified by mass screening. We found that mass screening identified 208 HCV-viremic patients in a 5 d screening program covering 1137 inmates (around two-thirds of the total inmates in Penghu Prison), compared with 91 HCV-viremic inmates treated in outpatient clinics from 2017 to September 2019. Our results demonstrate that mass screening with immediate onsite treatment provides a much more efficient and practical solution to overcome the gaps in disease awareness and linkage-to-care in the HCV care cascade toward HCV micro-elimination in prisoners. In addition, we implemented "HCV reflex testing" in the mass screening program to scale up and speed up diagnosis and linkage-to-care for treatment uptake of HCV infection[25]. Injection drug use is the major risk factor for HCV infection and transmission. Although the anti-HCV prevalence in PWID prisoners decreased from 91% in 2014 to 34.8% in 2019 with the strategy of safe injection in Taiwan[21], almost all (97.6%) HCV-infected prisoners were PWID. Given the lack of an available vaccine and the high risk of transmission, universal screening and the concept of "treatment as prevention" are the keys to HCV elimination in prisons as well as among PWID. We observed that the sporadic HCV-infected prisoners identified in outpatient clinics had a significantly higher proportion of comorbidities, including diabetes, hypertension, hyperlipidemia and eGFR < 60 mL/min/1.73 m2, than those participating in the HCV micro-elimination campaign. This implies that a large proportion of those identified sporadically in outpatient clinics were identified because of concomitant morbidities; by contrast, many HCV-infected patients were unaware of their HCV disease. In our mass screening, only 36.6% (145/396) of HCV-infected prisoners were aware of their HCV infection before screening[21]. This indicates that implementing an outreach strategy with universal mass screening is necessary for HCV micro-elimination in prison. Despite the advances in the management of HCV infection, DAA therapy in incarcerated HCV-infected people still faces many obstacles, including disease unawareness, lack of updated information about the benefits of new DAA treatment, uncertainty about the right to treatment[26], and poor accessibility due to the lack of onsite treatment facilities or HCV treaters. Another difficulty for HCV treatment in prisoners is unexpected or scheduled release from prison or transfer to other prisons, which frequently leads to treatment interruption or loss to follow-up[20,27]. Fortunately, the Taiwan National Health Insurance covers all incarcerated people, including all laboratory tests, ultrasound sonography and the cost of DAA regimens. Each prison has a contracted hospital providing a point-of-care facility. Before initiating DAA therapy, we excluded patients expected to be released or transferred within 24 wk, and we negotiated with the authority to avoid unnecessary transfer to other prisons during the period of HCV treatment and follow-up once the inmates entered the DAA course. Eventually we achieved a high treatment rate of 90.6% (212/234), a high treatment completion rate of 95.8% (203/212), and a cure rate of 100% (212/212). Before IFN-free DAAs became available, the lower SVR rate, much longer treatment duration and frequent adverse events of IFN-based treatment discouraged HCV-infected prisoners from receiving treatment[10]. IFN-free DAA regimens have revolutionized HCV treatment and largely extended the indications to various HCV-infected patients. Nevertheless, the application of typical DAA regimens is based on HCV genotype, presence of decompensated cirrhosis, renal function, and prior treatment experience. The two pangenotypic DAA regimens, sofosbuvir/velpatasvir and glecaprevir/pibrentasvir, have achieved very high SVR rates of > 95% regardless of HCV genotype, except for treatment-experienced cirrhotic HCV GT3 patients or GT3b patients[8,12,13]. Recently, to improve access to anti-HCV therapy and to reduce the cost of laboratory tests and the relative complexity of genotype-based treatment strategies, simplified treatment requiring little information for the treatment decision has been recommended to facilitate the care cascade among populations who are historically less engaged in healthcare, such as PWID and prisoners[8]. EASL recommends simplified, genotyping/subtyping-free regimens for patients who are naïve to IFN-free DAA regimens (prior sofosbuvir plus ribavirin excepted), HCV-monoinfected or HCV-HIV coinfected, adolescent or adult, without cirrhosis or with compensated cirrhosis, regardless of HCV genotype[8]. The recommended options are a universal 12 wk regimen of sofosbuvir/velpatasvir for all patients, or glecaprevir/pibrentasvir for 8 wk in non-cirrhotic patients, 12 wk in compensated cirrhotic patients, and 16 wk in HCV GT3 patients. Only four pieces of information are needed before treatment: the presence of HCV viremia, potential drug-drug interactions, prior treatment experience, and the presence of cirrhosis. The advantage of glecaprevir/pibrentasvir is a shorter 8 wk regimen for treatment-naïve HCV patients and IFN-experienced non-cirrhotic patients with compensated liver disease, which would benefit prisoners who are expected to be released or transferred in the short term. However, glecaprevir, a protease inhibitor, is contraindicated in patients with hepatic decompensation and carries a rare risk of serious drug-induced liver injury[28]. In addition, glecaprevir/pibrentasvir has a higher pill burden of three tablets a day. The advantages of sofosbuvir/velpatasvir include a universal fixed 12 wk regimen of one tablet a day for all HCV patients with compensated liver disease, fewer potential drug-drug interactions[29], and safety in those with hepatic decompensation. However, a 12 wk regimen of sofosbuvir/velpatasvir requires one more visit and one more round of monitoring compared with an 8 wk regimen of glecaprevir/pibrentasvir. We therefore selected sofosbuvir/velpatasvir as the antiviral regimen for our outreach onsite treatment. In our study, all HCV-viremic prisoners fit the criteria for simplified, genotyping/subtyping-free regimens, except one who had failed prior glecaprevir/pibrentasvir therapy and was not enrolled for sofosbuvir/velpatasvir treatment. In the PP analysis, the overall SVR12 rate was comparable between the sporadic HCV control group (98.9%, 86/87) and the HCV micro-elimination group (100%, 203/203); a brief sketch of how such rates and their comparison can be computed follows.
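To illustrate the arithmetic, the following is a minimal, hypothetical Python sketch (the study itself used SPSS, and this is not the authors' analysis code) showing how a response rate with a 95% confidence interval and a between-group chi-square comparison can be computed from the per-protocol counts quoted above. SciPy is assumed to be installed.

```python
# Illustrative sketch (hypothetical, not the study's SPSS analysis):
# SVR12 rates with normal-approximation 95% CIs and a chi-square comparison
# of the two per-protocol groups quoted above.
from math import sqrt
from scipy.stats import chi2_contingency

def rate_with_ci(cured: int, total: int, z: float = 1.96):
    """Wald 95% CI for a proportion (degenerate at 0% or 100%; an exact
    Clopper-Pearson interval would be preferable at the extremes)."""
    p = cured / total
    se = sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

groups = {"sporadic control": (86, 87), "micro-elimination": (203, 203)}
for name, (cured, total) in groups.items():
    p, lo, hi = rate_with_ci(cured, total)
    print(f"{name}: SVR12 = {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# 2 x 2 table of cured vs not cured for the chi-square test.
table = [[86, 87 - 86], [203, 203 - 203]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square P = {p_value:.3f}")
```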
Our study provides evidence for the concept that simplified, genotyping/subtyping-free regimens can achieve a high SVR12 rate in HCV-infected, PWID-dominant prisoners. In our study, no prisoner discontinued DAA treatment because of adverse events, and none experienced a serious adverse event. These data indicate that the simplified, genotyping/subtyping-free regimen, sofosbuvir/velpatasvir, was safe and well tolerated in HCV-infected, PWID-dominant prisoners. Very few adverse events were reported in either group, whether sofosbuvir/ledipasvir, glecaprevir/pibrentasvir or sofosbuvir/velpatasvir was used, compared with data from clinical trials[30,31]. This may be because the current population was younger and included fewer patients with advanced fibrosis or chronic kidney disease. There were some limitations in our study. First, not all inmates in Penghu Prison participated in our mass screening. Strategies and policies to encourage inmates to receive HCV screening are needed to achieve the WHO goal. Second, unexpected transfer and release of prisoners could not be completely avoided, which caused incomplete treatment and follow-up. Successfully linking released or transferred people to other HCV treaters could help complete HCV treatment and follow-up. Third, there was no reimbursement for retreatment of patients who had failed prior DAA therapy in Taiwan at the time of the current study. CONCLUSION: Well-designed strategies for mass screening and treatment of HCV-infected prisoners can be implemented successfully through collaboration between physicians and prison authorities. We demonstrated that mass screening followed by immediate onsite treatment with a simplified pangenotypic DAA regimen, sofosbuvir/velpatasvir, provides a successful strategy toward HCV micro-elimination among prisoners.
Background: Prisoners are at risk of hepatitis C virus (HCV) infection, especially people who inject drugs (PWID). We implemented an outreach strategy combining universal mass screening with immediate onsite treatment with a simplified pan-genotypic direct-acting antiviral (DAA) regimen, 12 wk of sofosbuvir/velpatasvir, in a PWID-dominant prison in Taiwan. Methods: HCV-viremic patients were recruited into an onsite treatment program for HCV micro-elimination with a pangenotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, from two cohorts in Penghu Prison, identified either by mass screening or in outpatient clinics, in September 2019. Another group of HCV-viremic patients identified sporadically in outpatient clinics before mass screening was enrolled as a control group. The primary endpoint was sustained virological response (SVR12), defined as undetectable HCV ribonucleic acid (RNA) 12 wk after end-of-treatment. Results: A total of 212 HCV-viremic subjects were recruited into the HCV micro-elimination campaign; 91 patients treated with sofosbuvir/ledipasvir or glecaprevir/pibrentasvir before mass screening were enrolled as controls. The HCV micro-elimination group had a significantly lower proportion of diabetes, hypertension, hyperlipidemia, advanced fibrosis and chronic kidney disease, but higher levels of HCV RNA. The SVR12 rate was comparable between the HCV micro-elimination and control groups: 95.8% (203/212) vs 94.5% (86/91), respectively, in intent-to-treat analysis, and 100% (203/203) vs 98.9% (86/87), respectively, in per-protocol analysis. There were no virological failures, treatment discontinuations or serious adverse events among sofosbuvir/velpatasvir-treated patients in the HCV micro-elimination group. Conclusions: Outreach mass screening followed by immediate onsite treatment with a simplified pangenotypic DAA regimen, sofosbuvir/velpatasvir, provides a successful strategy toward HCV micro-elimination among prisoners.
INTRODUCTION: Hepatitis C virus (HCV) infection is a progressive and blood-borne infectious disease that can lead to end stage liver diseases, such as hepatic decompensation, liver cirrhosis, and hepatocellular carcinoma[1,2]. Iatrogenic transmission of HCV, such as blood transfusion and surgery, has decreased in developed countries. Whereas people who inject drugs (PWID) has become the major population of HCV transmission, which could consist of approximately 80% of HCV-infected patients[3]. Given that lack of vaccine available, “treatment as prevention” for HCV transmission in PWID is very important for HCV elimination. Prisoners are at high risk of HCV infection, with prevalence rates ranging from 3.1% to 38%[4,5]. The high prevalence of HCV infection in prisoners is resulted from unsafe lifestyles, psychiatric disorders, and social problems before they are incarcerated. Recently, PWID has been the most important risk factor of HCV infection in prisoners[6]. The anti-HCV prevalence rate could be as high as 91% among PWID prisoners[7]. Screening and eliminating HCV infection in prisoners is therefore an important social health issue. According to the American Association for the Study of Liver Diseases and European Association for the Study of the Liver (EASL) guidelines, all HCV viremic patients should be treated if life span is expected more than one year[8,9]. HCV therapeutic strategies have been revolutionized significantly because of the availability of direct-acting antivirals (DAA)[10]. Interferon (IFN)-based regimens for HCV infection have serious side effects, long therapeutic duration, and contraindications, leading to the huge gaps in HCV care cascade[11]. The current IFN-free DAA regimens provide shorter treatment duration, very high treatment efficacy and safety profiles, not only for general population[12], but also for special populations[13], such as HCV/human immunodeficiency virus (HIV) coinfected patients, hepatitis B virus (HBV)/HCV coinfected patients and patients with chronic kidney diseases in real-world clinical settings[14,15]. World Health Organization (WHO) set a global goal of HCV elimination by 2030[16], and Taiwan authority is even ambitious by 2025[17]. To achieve the goal, implementation of the concept of HCV micro-elimination is regarding as an efficient and practical strategy[18]. We have proved that “universal mass screening plus outreach onsite treatment” is the key to achieve HCV micro-elimination among patients under maintenance hemodialysis[19]. Recently, the latest EASL HCV guideline recommended simplified, genotyping/ subtyping-free, pangenotypic anti-HCV treatment, either sofosbuvir/ velpatasvir or glecaprevir/pibrentasvir, to increase the accessibility and global cure rates among patients with > 12 years, chronic hepatitis C without cirrhosis or with compensated cirrhosis, with or without HIV co-infection, whatever treatment-naïve or IFN-experienced[8]. Since HCV treatment is not frequently administered to prisoners due to unawareness of HCV infection, difficultly management, easily loss to follow-up, and lack of hepatologist in prison[20], collaboration between hepatologists and prison authorities to carry out strategies for HCV diagnosis and treatment in prisoners in highly demanded. Herein, we implemented an outreach strategy in combination with universal mass screen and onsite treatment with a simplified pan-genotypic DAA regimen, 12 wk of sofosbuvir/velpatasvir, toward HCV micro-elimination in a PWID-dominant prison in Taiwan. 
CONCLUSION: Our study provides evidence for the concept that simplified, genotyping/subtyping-free regimens can achieve a high SVR12 rate in HCV-infected prisoners. In the future, it may be possible to extend this strategy to all prisoners in our country.
Background: Prisoners are at risk of hepatitis C virus (HCV) infection, especially among the people who inject drugs (PWID). We implemented an outreach strategy in combination with universal mass screening and immediate onsite treatment with a simplified pan-genotypic direct-acting antivirals (DAA) regimen, 12 wk of sofosbuvir/velpatasvir, in a PWID-dominant prison in Taiwan. Methods: HCV-viremic patients were recruited for onsite treatment program for HCV micro-elimination with a pangenotypic DAA regimen, 12 wk of sofosbuvir/ velpatasvir, from two cohorts in Penghu Prison, either identified by mass screen or in outpatient clinics, in September 2019. Another group of HCV-viremic patients identified sporadically in outpatient clinics before mass screening were enrolled as a control group. The primary endpoint was sustained virological response (SVR12, defined as undetectable HCV ribonucleic acid (RNA) 12 wk after end-of-treatment). Results: A total of 212 HCV-viremic subjects were recruited for HCV micro-elimination campaign; 91 patients treated with sofosbuvir/Ledipasvir or glecaprevir/ pibrentasvir before mass screening were enrolled as a control. The HCV micro-elimination group had significantly lower proportion of diabetes, hypertension, hyperlipidemia, advanced fibrosis and chronic kidney diseases, but higher levels of HCV RNA. The SVR12 rate was comparable between the HCV micro-elimination and control groups, 95.8% (203/212) vs 94.5% (86/91), respectively, in intent-to-treat analysis, and 100% (203/203) vs 98.9% (86/87), respectively, in per-protocol analysis. There was no virological failure, treatment discontinuation, and serious adverse event among sofosbuvir/velpatasvir-treated patients in the HCV micro-elimination group. Conclusions: Outreach mass screening followed by immediate onsite treatment with a simplified pangenotypic DAA regimen, sofosbuvir/velpatasvir, provides successful strategies toward HCV micro-elimination among prisoners.
8,364
367
[ 625, 109, 139, 105, 156, 162, 186, 2176, 131, 412, 381, 153, 1667, 59 ]
15
[ "hcv", "patients", "treatment", "elimination", "daa", "micro", "micro elimination", "sofosbuvir", "hepatitis", "prison" ]
[ "hcv elimination prisoners", "pwid important hcv", "hcv infected pwid", "inmates receive hcv", "prison hcv hepatitis" ]
null
[CONTENT] Direct-acting antivirals | Sofosbuvir | Velpatasvir | People who inject drugs | Universal screen [SUMMARY]
[CONTENT] Direct-acting antivirals | Sofosbuvir | Velpatasvir | People who inject drugs | Universal screen [SUMMARY]
null
[CONTENT] Direct-acting antivirals | Sofosbuvir | Velpatasvir | People who inject drugs | Universal screen [SUMMARY]
[CONTENT] Direct-acting antivirals | Sofosbuvir | Velpatasvir | People who inject drugs | Universal screen [SUMMARY]
[CONTENT] Direct-acting antivirals | Sofosbuvir | Velpatasvir | People who inject drugs | Universal screen [SUMMARY]
[CONTENT] Antiviral Agents | Hepacivirus | Hepatitis C | Hepatitis C, Chronic | Humans | Prisons [SUMMARY]
[CONTENT] Antiviral Agents | Hepacivirus | Hepatitis C | Hepatitis C, Chronic | Humans | Prisons [SUMMARY]
null
[CONTENT] Antiviral Agents | Hepacivirus | Hepatitis C | Hepatitis C, Chronic | Humans | Prisons [SUMMARY]
[CONTENT] Antiviral Agents | Hepacivirus | Hepatitis C | Hepatitis C, Chronic | Humans | Prisons [SUMMARY]
[CONTENT] Antiviral Agents | Hepacivirus | Hepatitis C | Hepatitis C, Chronic | Humans | Prisons [SUMMARY]
[CONTENT] hcv elimination prisoners | pwid important hcv | hcv infected pwid | inmates receive hcv | prison hcv hepatitis [SUMMARY]
[CONTENT] hcv elimination prisoners | pwid important hcv | hcv infected pwid | inmates receive hcv | prison hcv hepatitis [SUMMARY]
null
[CONTENT] hcv elimination prisoners | pwid important hcv | hcv infected pwid | inmates receive hcv | prison hcv hepatitis [SUMMARY]
[CONTENT] hcv elimination prisoners | pwid important hcv | hcv infected pwid | inmates receive hcv | prison hcv hepatitis [SUMMARY]
[CONTENT] hcv elimination prisoners | pwid important hcv | hcv infected pwid | inmates receive hcv | prison hcv hepatitis [SUMMARY]
[CONTENT] hcv | patients | treatment | elimination | daa | micro | micro elimination | sofosbuvir | hepatitis | prison [SUMMARY]
[CONTENT] hcv | patients | treatment | elimination | daa | micro | micro elimination | sofosbuvir | hepatitis | prison [SUMMARY]
null
[CONTENT] hcv | patients | treatment | elimination | daa | micro | micro elimination | sofosbuvir | hepatitis | prison [SUMMARY]
[CONTENT] hcv | patients | treatment | elimination | daa | micro | micro elimination | sofosbuvir | hepatitis | prison [SUMMARY]
[CONTENT] hcv | patients | treatment | elimination | daa | micro | micro elimination | sofosbuvir | hepatitis | prison [SUMMARY]
[CONTENT] hcv | hcv infection | infection | prisoners | pwid | high | treatment | important | infection prisoners | hcv infection prisoners [SUMMARY]
[CONTENT] hcv | treatment | subjects | variables | daa | viral | screening | mass screening | drug | response [SUMMARY]
null
[CONTENT] strategies | prisoners | screening | mass screening | collaboration physicians prison | simplified pangenotypic | simplified pangenotypic daa | simplified pangenotypic daa regimen | treatment simplified pangenotypic daa | prison authorities demonstrated [SUMMARY]
[CONTENT] hcv | patients | treatment | hepatitis | daa | elimination | micro | micro elimination | mass | screening [SUMMARY]
[CONTENT] hcv | patients | treatment | hepatitis | daa | elimination | micro | micro elimination | mass | screening [SUMMARY]
[CONTENT] PWID ||| 12 | PWID | Taiwan [SUMMARY]
[CONTENT] HCV micro-elimination | DAA | 12 | two | Penghu Prison | September 2019 ||| ||| RNA | 12 [SUMMARY]
null
[CONTENT] HCV micro-elimination [SUMMARY]
[CONTENT] PWID ||| 12 | PWID | Taiwan ||| HCV micro-elimination | DAA | 12 | two | Penghu Prison | September 2019 ||| ||| RNA | 12 ||| 212 | HCV micro-elimination | 91 ||| HCV | HCV RNA ||| 95.8% | 94.5% | 100% | 203/203 | 98.9% ||| ||| HCV micro-elimination [SUMMARY]
[CONTENT] PWID ||| 12 | PWID | Taiwan ||| HCV micro-elimination | DAA | 12 | two | Penghu Prison | September 2019 ||| ||| RNA | 12 ||| 212 | HCV micro-elimination | 91 ||| HCV | HCV RNA ||| 95.8% | 94.5% | 100% | 203/203 | 98.9% ||| ||| HCV micro-elimination [SUMMARY]
Prevalence of body-focused repetitive behaviors in three large medical colleges of Karachi: a cross-sectional study.
23116460
Body-focused repetitive behaviors (BFRBs), which include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), lead to harmful physical and psychological sequelae. The objective was to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi. Estimating their frequency is essential for designing strategies to decrease the burden and adverse effects associated with BFRBs among medical students.
BACKGROUND
A cross-sectional study was conducted among 210 students attending Aga Khan University, Dow Medical College and Sindh Medical College, Karachi, in equal proportion. Data were collected using a pretested tool, the "Habit Questionnaire". Diagnoses were based on the criterion that a student must engage in an activity 5 times or more per day for 4 weeks or more. Convenience sampling was used to recruit participants aged 18 years and above after obtaining written informed consent.
METHODS
The overall prevalence of BFRBs was found to be 46 (22%). For those positive for BFRBs, gender distribution was as follows: females 29 (13.9%) and males 17 (8.1%). Among these students, 19 (9.0%) were engaged in dermatillomania, 28 (13.3%) in trichotillomania and 13 (6.2%) in onychophagia.
RESULTS
High proportions of BFRBs are reported among medical students of Karachi. Key health messages and interventions to reduce stress and anxiety among students may help in curtailing the burden of this disease which has serious adverse consequences.
CONCLUSIONS
[ "Adolescent", "Adult", "Cross-Sectional Studies", "Female", "Humans", "Male", "Obsessive-Compulsive Disorder", "Pakistan", "Schools, Medical", "Young Adult" ]
3508914
Background
Body-focused repetitive behaviors (BFRBs) refer to a group of behaviors that include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), which result in physical and psychological difficulties [1]. For some individuals these behaviors are simply referred to as nervous habits [2]. However, these nervous habits become problematic when they interfere with the person’s everyday functioning; when BFRBs cross this line, they are classified as Impulse Control Disorders. BFRBs most often begin in late childhood or adolescence. They are among the most poorly understood, misdiagnosed and undertreated groups of disorders. The key factor underlying BFRBs is difficulty resisting the urge or impulse to perform a certain behavior that provides a degree of relief. The behavior continues because the BFRB results in a more pleasant state and is therefore negatively reinforced. Prevalence estimates indicate that such behaviors are quite common among students. Nail biting two or more times per week was reported among 63.6% of students in the United States of America [2]. Another study from the USA, using the more stringent criterion of five or more times per day, reported that 21.8% of students engaged in chewing of the mouth, lips or cheeks and 10.1% engaged in habitual nail biting [3]. A recent survey of college students found that 13.7% of the sample endorsed at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning [4]. Based on questionnaire screenings, a lifetime ICD rate of 3.5% was found in college students in Germany [5]. Very limited literature and no prevalence estimates regarding BFRBs could be found from Asian countries, including Pakistan. The treatment for BFRBs may include a combination of psychotropic medications and cognitive behavioral therapy (CBT). CBT often involves Habit Reversal Training and Exposure and Response Prevention (also known as Exposure and Ritual Prevention). Since limited literature from our part of the world is available on body-focused repetitive behaviors and their impact on medical students and their lifestyles, it is imperative to spread awareness regarding them and to find ways and means to prevent them. It is therefore essential to estimate their frequency in order to design strategies to decrease the prevalence of BFRBs and thus prevent their adverse effects among medical students. We thus aimed to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi.
Methods
The study was conducted in three large medical colleges of Karachi: The Aga Khan University, Dow Medical College and Sindh Medical College. Students from Karachi as well as from the interior and from abroad attend these institutes, hence providing better representation of different socioeconomic classes and cultures. The combined student strength of the three colleges was approximately 3000. The study was conducted in July 2010. The investigation was a cross-sectional study. Students were recruited in equal proportion from each college. A self-administered, pre-tested questionnaire was used for data collection. Data were collected by trained medical students. The study was approved by the Ethical Review Committee of Aga Khan University and Dow Medical College. Written informed consent was obtained from each student and an explanation of the purposes of the research was provided to them. Eligibility criteria comprised registered medical students aged 18 years and above; students younger than 18 years or not willing to consent were excluded. A pre-tested “Habit Questionnaire” was used for gathering information on repetitive behaviors. It is a brief, five-item, self-report questionnaire that provides a standardized assessment of the frequency and duration of BFRBs. The Habit Questionnaire has been found to possess moderate test–retest reliability of 0.69 for the diagnosis of a BFRB, p<0.001 [4]. Participants were asked to indicate whether they engage in the following repetitive behaviors: hair pulling, hair manipulation, nail biting, skin biting and mouth chewing. For each repetitive behavior, participants who acknowledged engaging in the specified behavior were asked about its frequency (i.e., fewer or greater than five times per day) and its duration (i.e., less than 4 weeks or longer than 4 weeks). Participants were also asked to specify whether the behavior caused impairment, which was defined as (1) interference with daily functioning, (2) resultant injuries with possible permanent scarring or damage, (3) medical attention as a result of the behavior or (4) a recommendation to cease the behavior by a medical professional. To rule out conditions that commonly co-occur with BFRBs, the final section of the Habit Questionnaire asked participants to indicate if they had ever been diagnosed with obsessive–compulsive disorder, Tourette’s syndrome, autism, Asperger’s syndrome or a developmental disability. To meet the criteria for a BFRB, a participant needed to answer “yes” to engaging in at least one repetitive behavior five or more times per day, and the behavior had to be present for at least 4 weeks. Participants also needed to report that the particular behavior interfered with functioning, caused an injury, led them to seek medical attention or elicited a recommendation to stop the behavior from a medical professional. Our focus was on three behaviors: Trichotillomania (including ‘hair pulling’ and ‘hair manipulation’), Onychophagia (including ‘nail biting’) and Dermatillomania (including ‘chewing mouth, lips or cheeks’ and ‘biting on any other areas’). Statistical analysis: A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students, assuming a 14% prevalence figure for the Pakistani population, along with 80 percent power, a 0.05 significance level, a 5 percent bound on error and a 10% adjustment for the non-response rate. Convenience sampling was used by having the questionnaires filled in during regular college hours by students at four locations in the medical colleges: the lecture halls, laboratories, library and canteen. Questionnaires were filled in on consecutive days until the required sample size was achieved. Data were entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially, descriptive statistics, frequencies and proportions were generated. Continuous variables were analyzed by t-test and categorical variables by chi-square or Fisher’s exact test, where appropriate.
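To make the diagnostic rule and the sample-size calculation above concrete, here is a minimal Python sketch (not code from the study; the function names and the input layout are illustrative assumptions). The classification step simply restates the Habit Questionnaire criteria: at least one behavior reported five or more times per day, present for at least 4 weeks, with reported impairment. The sample-size step assumes the standard single-proportion precision formula n = Z^2 * p * (1 - p) / d^2, inflated for 10% non-response; because the authors also cite 80% power, the exact formula they used may have differed slightly.

from math import ceil

def meets_bfrb_criteria(behavior):
    # `behavior` is an assumed dict, e.g. {"times_per_day": 6, "weeks": 52, "impairment": True}.
    # Impairment covers interference with functioning, injury, seeking medical attention,
    # or a professional's recommendation to stop, as defined in the questionnaire.
    return (
        behavior["times_per_day"] >= 5   # five or more times per day
        and behavior["weeks"] >= 4       # present for at least 4 weeks
        and behavior["impairment"]
    )

def required_sample_size(p=0.14, bound=0.05, z=1.96, non_response=0.10):
    # Sample size to estimate prevalence p within +/- bound at 95% confidence,
    # then inflated for the expected non-response rate.
    n = (z ** 2) * p * (1 - p) / (bound ** 2)
    return ceil(n / (1 - non_response))

print(required_sample_size())  # 206 under these assumptions

With the values reported in the Methods (14% assumed prevalence, 5% bound on error, 10% non-response), this yields roughly 204–206 depending on how the non-response adjustment is applied, broadly in line with, though not identical to, the stated minimum of 210.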
Results
BFRBs were found in 46 (22%) of the 210 medical students of Karachi. Among those positive for BFRBs, the gender distribution was as follows: 29 females (13.9%) and 17 males (8.1%). The average age of participants was 21.5 years, ranging from 18 to 27 (see Table 1). Among these students, 19 (9.0%) were engaged in dermatillomania, 28 (13.3%) in trichotillomania and 13 (6.2%) in onychophagia (see Table 2). [Table 1: Baseline characteristics of participants] [Table 2: BFRBs characteristics among medical students in Karachi] As discussed earlier, to meet the criteria for a BFRB, a student had to report engaging in an activity five or more times per day for at least 4 weeks. Table 3 shows that the majority of the students who were involved in an activity repeated it fewer than five times per day, rendering them ineligible to be labeled as having a BFRB. Also, far fewer students engaging in a BFRB reported having been involved in it for 4 weeks to 12 months, while the majority fell into the 12 months or more category: 7 out of 9 males and 18 out of 19 females for hair pulling; 20 out of 31 males and 23 out of 34 females for hair manipulation; 11 out of 11 males and 25 out of 36 females for nail biting; 17 out of 23 males and 38 out of 51 females for chewing mouth, lips or cheeks; and 4 out of 5 males and 6 out of 9 females for biting other areas. [Table 3: Prevalence and gender-wise distributions of BFRBs] Many of the students engaged in an activity also reported that the activity resulted in noticeable hair loss, interfered with day-to-day activity, caused injury, permanent scarring or damage, or made them seek medical attention. Few of them reported that a medical professional had suggested they stop the behavior, and very few said that the behavior occurred under the influence of alcohol or other drugs (for details see Table 3). In addition, consistent with previous studies, we obtained the same test–retest reliability.
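As a quick arithmetic check on the proportions reported above (an illustrative calculation, not part of the published analysis), each percentage follows directly from the count divided by the 210 respondents:

n_total = 210
counts = {
    "any BFRB": 46,             # 46/210 ≈ 21.9%, reported as 22%
    "females with a BFRB": 29,  # ≈ 13.8%, reported as 13.9%
    "males with a BFRB": 17,    # ≈ 8.1%
    "dermatillomania": 19,      # ≈ 9.0%
    "trichotillomania": 28,     # ≈ 13.3%
    "onychophagia": 13,         # ≈ 6.2%
}
for label, count in counts.items():
    print(f"{label}: {count}/{n_total} = {100 * count / n_total:.1f}%")

The minor discrepancies (21.9% vs. the reported 22%, 13.8% vs. 13.9%) presumably reflect rounding in the original tables.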
Conclusions
These findings indicate a comparatively higher prevalence of BFRBs among medical students of Karachi than in other populations, supporting the view that these students experience considerable stress and anxiety in their daily lives. The high occurrence of BFRBs among them can be greatly detrimental, so there is an urgent need to spread awareness regarding these behaviors in order to protect students from their negative impacts. Key health messages and interventions to reduce stress and anxiety among students may help in curtailing the burden of this disease, which has serious adverse consequences.
[ "Background", "Statistical analysis", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information" ]
[ "Body-focused repetitive behaviors (BFRBs) refer to a group of behaviors that include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), which result in physical and psychological difficulties\n[1]. These behaviors for some individuals are simply referred to as nervous habits\n[2]. However, these nervous habits become problematic when they interfere with the person’s everyday functioning. When these BFRBs cross this line, then they are classified as Impulse Control Disorders.\nBFRBs most often begin in late childhood or adolescence. They are among the most poorly understood, misdiagnosed, and under treated groups of disorders. The key factor underlying BFRBs is difficulty resisting the urge or impulse to perform a certain behavior that causes a degree of relief. The behavior continues because the BFRB results in a more pleasant state therefore, it is negatively reinforced.\nPrevalence estimates indicates that such behaviors are quite common among students. Nail biting of 2 times or more per week was reported among 63.6% students in United States of America\n[2]. Another study from USA, using the stringent criteria of 5 times or more per week stated 21.8% students engaged in mouth, lips or cheeks chewing and 10.1% engaged in habitual nail biting\n[3]. A recent survey of college students found that 13.7% of the sample endorsed in at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning\n[4]. Based on questionnaire screenings, a lifetime ICD rate of 3.5% was found in college students of Germany\n[5]. Very limited literature and no prevalence estimates regarding BFRBs could be found from Asian countries including Pakistan.\nThe treatment for BFRBs may include a combination of psychotropic medications and cognitive behavioral therapy (CBT). CBT often involves Habit Reversal Training and Exposure and Response Prevention (aka Exposure and Ritual Prevention).\nSince limited literature from our part of the world is available on body-focused repetitive behaviors and its impact on medical students and their lifestyles, it is imperative to spread awareness regarding them and find out ways and means to prevent them. It is therefore essential to come up with frequency to design strategies to decrease the prevalence of BFRBs and thus prevent their adverse effects among medical students.\nWe thus aimed to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi.", "A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved.\nData was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. 
Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate.", "BFRBs: Body Focused Repetitive Behaviors; CBT: Cognitive Behavioral Therapy; ICD: Impulse Control Disorder; SPSS: Statistical Package for the Social Sciences.", "We do not have any competing interests to declare.", "EUS came up with the idea, did literature review, made the proposal, did data collection, entered data into Statistical Package for the Social Sciences, analyzed it and wrote the paper. SSN helped with literature review, did data collection, entered data into Statistical Package for the Social Sciences and analyzed it. HN’s role was as a consultant psychiatrist, BA supervised the overall conduction of study. All authors read and approved the final manuscript.", "EUS and SSN are final year medical students at Dow Medical College, Dow University of Health Sciences, Karachi, Pakistan. HN = MBBS, FCPS, BA MSc." ]
[ null, null, null, null, null, null ]
[ "Background", "Methods", "Statistical analysis", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information" ]
[ "Body-focused repetitive behaviors (BFRBs) refer to a group of behaviors that include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), which result in physical and psychological difficulties\n[1]. These behaviors for some individuals are simply referred to as nervous habits\n[2]. However, these nervous habits become problematic when they interfere with the person’s everyday functioning. When these BFRBs cross this line, then they are classified as Impulse Control Disorders.\nBFRBs most often begin in late childhood or adolescence. They are among the most poorly understood, misdiagnosed, and under treated groups of disorders. The key factor underlying BFRBs is difficulty resisting the urge or impulse to perform a certain behavior that causes a degree of relief. The behavior continues because the BFRB results in a more pleasant state therefore, it is negatively reinforced.\nPrevalence estimates indicates that such behaviors are quite common among students. Nail biting of 2 times or more per week was reported among 63.6% students in United States of America\n[2]. Another study from USA, using the stringent criteria of 5 times or more per week stated 21.8% students engaged in mouth, lips or cheeks chewing and 10.1% engaged in habitual nail biting\n[3]. A recent survey of college students found that 13.7% of the sample endorsed in at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning\n[4]. Based on questionnaire screenings, a lifetime ICD rate of 3.5% was found in college students of Germany\n[5]. Very limited literature and no prevalence estimates regarding BFRBs could be found from Asian countries including Pakistan.\nThe treatment for BFRBs may include a combination of psychotropic medications and cognitive behavioral therapy (CBT). CBT often involves Habit Reversal Training and Exposure and Response Prevention (aka Exposure and Ritual Prevention).\nSince limited literature from our part of the world is available on body-focused repetitive behaviors and its impact on medical students and their lifestyles, it is imperative to spread awareness regarding them and find out ways and means to prevent them. It is therefore essential to come up with frequency to design strategies to decrease the prevalence of BFRBs and thus prevent their adverse effects among medical students.\nWe thus aimed to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi.", "The study was conducted in three large medical colleges of Karachi: The Aga Khan University, Dow Medical College and Sindh Medical College. Students from Karachi as well as interior and foreign attend these institutes; hence provide better representation from different socio economic classes and cultures. The combined strength of student of the three colleges was approximately 3000. The study was conducted in July 2010.\nThe investigation was a cross-sectional study. Students were recruited in equal proportion from each college. A self-administered pre tested questionnaire was used for data collection. Data was collected by trained medical students. The study was approved by the Ethical Review Committee of Aga Khan University and Dow Medical College. 
Written informed consent was obtained from each student and an explanation of the purposes of the research was provided to them.\nEligibility criteria to participate in the study comprised of registered medical students aged 18 and above. However students less than 18 years and not willing to consent to participate in this study were excluded.\nA d “Habit Questionnaire” was used for gathering information on repetitive behaviors. Habit is a brief five items, self-report questionnaire that provides a standardized assessment of the frequency and duration of BFRBs.\nThe Habit Questionnaire has been found to possess moderate test–retest reliability of 0.69 of diagnosis of BFRB, p<0.001\n[4].\nParticipants were asked to indicate if they engage in the following repetitive behaviors: hair pulling, hair manipulation, nail biting, skin biting and mouth chewing. For each repetitive behavior, participants who acknowledged engaging in the specified behavior were asked about the frequency of the behavior (i.e., fewer or greater than five times per day) and the duration of the behavior (i.e., less than 4 weeks or longer than 4 weeks). Participants were also asked to specify if the behavior caused impairment, which was defined as (1) interference with daily functioning, (2) resultant injuries with possible permanent scarring or damage, (3) medical attention as a result of the behavior or (4) recommendation to cease the behavior by a medical professional. To rule out conditions that commonly co-occur with BFRBs, the final section of the Habit Questionnaire asked participants to indicate if they had ever been diagnosed with obsessive–compulsive disorder, Tourette’s syndrome, autism, Asperger’s syndrome or a developmental disability.\nTo meet the criteria for a BFRB, the participants needed to answer “yes” to engaging in at least one repetitive behavior five or more times per day, and the behavior had to be present for at least 4 weeks. Participants were also needed to report that the particular behavior interferes with functioning, caused an injury, caused him or her to seek medical attention or elicited a recommendation to stop the behavior by a medical professional.\nOur focus was on three behaviors: Trichotillomania (including ‘hair pulling’ and ‘hair manipulation’), Onychophagia (including ‘nail biting’) and Dermatillomania (including ‘chew mouth lips or cheek’ and ‘bite on any other areas’).\n Statistical analysis A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved.\nData was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. 
Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate.\nA sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved.\nData was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate.", "A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved.\nData was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate.", "The prevalence of BFRBs among medical students of Karachi was found to be 46 (22%). For those positive for BFRBs, gender distribution was as follows: females 29 (13.9%) and males 17 (8.1%). The average age of participants was 21.5 ranging from 18–27 (refer Table\n1). Among these students, 19 (9.0%) were engaged in dermatillomania, 28 (13.3%) in trichotillomania and 13 (6.2%) in onychophagia (refer Table\n2).\nBaseline characteristics of participants\nBFRBs characteristics among medical students in Karachi\nAs discussed earlier, to meet criteria for BFRBs, student must report engaging in an activity more than five times a day for at least 4 weeks or more. Table\n3 clearly shows that majority of the students that were involved in an activity repeated it less than five times per day, rendering them ineligible to be labeled as being involved in BFRBs. Also, far less number of students engaging in a BFRB said to have been involved in it for 4 weeks to 12 months while majority fell into 12 months or more category: 7 out of 9 males and 18 out of 19 females for hair pulling; 20 out of 31 males and 23 out of 34 females for hair manipulation; 11 out of 11 males 25 out of 36 females for nail biting; 17 out of 23 males and 38 out of 51 females for chewing mouth, lips or cheeks and 4 out 5 males and 6 out of 9 females for biting other areas.\nPrevalence and gender wise distributions of BFRBs\nMany of the students being engaged in an activity also reported that that activity resulted in noticeable hair loss, interfered with day to day activity, caused injury, permanent scar or damage or made them seek medical attention. 
Few of them reported that a medical professional suggested them to stop the behavior while very few said that the behavior occurred under the influence of alcohol or other drugs (for details refer Table\n3). Besides this, as shown in the previous studies, we did obtain the same test retest reliability.", "BFRBs are not uncommon among medical students of Karachi. It is therefore imperative to identify the prevalence associated with BFRBs to design interventions to curtail the burden.\nDisorder-specific rates in our study ranged from 6.2%-13.3%, which is in agreement with the rates of 12.8% and 10.0% of skin picking and nail biting respectively according to a study conducted in the U.S.\n[6]. Our study also found greater occurrence among females than males, a finding consistent with previous studies\n[1,7,8]. Literature shows that stressful conditions tend to trigger BFRBs but may not be necessarily associated. Thus this higher prevalence may indicate that females are more prone to stressful conditions and thus tend to get engaged more frequently in BFRBs than males.\nAs mentioned above, stress may trigger BFRBs. Besides that, other reasons such as socioeconomic factors may also be the cause of stress and anxiety among different sets of students. But since 1) no literature is available proving any such facts, 2) our sample is limited and has no control population and 3) ours is a cross-sectional study which cannot establish cause and effect relationship, therefore we can’t come up with a valid conclusion in this regard.\nOverall prevalence of BFRBs in our study was found to be much higher than previous surveys for e.g., according to a study, 13.7% of college students endorsed at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning\n[4]. Another source gives a lifetime ICD rate of 3.5% in college students\n[5] while according to a study conducted in United States, 17.1% subjects met criteria for a current ICD\n[6]. Disorder-specific rates are far greater than other studies, such as 2.7-6.4% of skin picking, mouth chewing and nail biting\n[4] and disorder-specific rates of 0.4-1.2%, except for trichotillomania, which was 0%\n[5].\nResearch suggests that skin picking occurs in people with a mean age of onset of around 15 years\n[9] but our study comprised of medical students aged 18 and above, with those indulging in skin picking ranging from 19–26 years with a mean of 22.5 years. Previous studies also identified prevalence of dermatillomania to be approximately 3.8-4.6% of college students\n[8,10] 5.4% of a community sample, 4% of college students and 2% of patients seen in a dermatology clinic\n[8,11-14], and 1.4%, 4.6% and 3.8% by Nancy J. Keuthen on various occasions\n[8,9]. It should be noted here that all these figures are far lower than the prevalence of dermatillomania in our study population that is 9.0%, possibly depicting a greater influence of stress and anxiety on medical students\n[15].\nExpression of repetitive behaviors has been well documented in the trichotillomania literature. Prevalence estimates indicate a frequency of 2.5% of young adults\n[16], being quite lower than the prevalence of trichotillomania found in our study, that is 13.3%. 
This might be due to inclusion of ‘hair manipulation’ in addition to ‘hair pulling’ in our study.\nIt is interesting to note that medical students of Karachi were found to indulge the most into trichotillomania, followed by dermatillomania and the least prevalent behavior was onychophagia which is contrary to literature present. This might be due to the fact that in our study, Trichotillomania and Dermatillomania both comprised of two behaviors whereas Onychophagia comprised of nail biting alone, as has been mentioned above.\nIn one of the few studies to address the issue of BFRBs, college students were categorized as having a repetitive behavior (habit) if the student reported engaging in a behavior two or more times per week\n[2]. Using this relatively lenient criterion, Hansen et al. found that nail biting occurred in 63.6% of the sample. A subsequent study used more stringent criteria for identifying repetitive behaviors in college students. Stating that the repetitive behavior had to occur at least five times per day to be classified as a habit, it was found that 21.8% of the sample engaged in habitual chewing on mouth, lips, or cheeks and 10.1% engaged in habitual nail biting\n[3].\nAs discussed earlier, BFRBs may produce a variety of physical sequelae. Thus, to accurately ascertain the extent to which BFRBs are an actual diagnosable problem, not only must data be collected on how frequently these behaviors occur in an individual but also and more importantly the extent to which these behaviors produce some type of impairment be considered. Unfortunately, the Hansen et al. (1990) and Woods et al. (1996) did not incorporate this variable into their operational definitions when determining the prevalence of BFRBs. The tool that we have used for our study, the ‘Habit Questionnaire’ follows criteria that cover all factors and variables associated with BFRBs and thus give a better estimate of its prevalence with greater accuracy. Even after following such strict criteria, prevalence in our study was quite high, pointing towards the fact that medical students of Karachi are greatly prone to such negative behaviors.\nThere are a number of aspects of our study which limit conclusions. Due to excessive workload on medical students, the specifications of questionnaires might not have been accurately filled or due to extra stress on students having their exam season and vice versa, their BFRBs might have been affected which might have affected the overall results. Also, since ours is a cross-sectional study, cause and effect relationship for the identified associated factors could not be established. Because we have carried out this study among medical students only, we cannot give any absolute predictions of prevalence among the general population. Besides that, we cannot comment on the age and gender distribution of BFRBs since medical students fall into a particular age group only and also because the majority of students at the above mentioned study setting were females, it may have affected gender distribution of such behaviors. Also, we have not considered co-morbidities of different behaviors which limit conclusions.\nThis study offers a number of avenues for future research. Additional research should be conducted to establish prevalence rates among different populations including children, adolescents and elderly, and among populations with cultural differences. 
It may also be useful to examine the general phenomenology of BFRBs among typically developing persons, including possible co-occurring psychological symptoms such as anxiety or negative affective states. Studies should also be conducted to find out the factors associated with BFRBs and its various possible consequences. Doing so may elucidate important components of treatment and methods to avoid engaging into such behaviors that are so physically and socially disadvantageous.", "The above facts clearly indicate a comparatively higher prevalence of BFRBs among medical students of Karachi than other populations, supporting the fact that these students are a victim of great stress and anxiety in their daily lives. High occurrence of BFRBs among them can be greatly detrimental and thus there is an utter need to spread awareness regarding these behaviors to save the students from its negative impacts. Key health messages and interventions to reduce stress and anxiety among students may help in curtailing the burden of this disease which has serious adverse consequences.", "BFRBs: Body Focused Repetitive Behaviors; CBT: Cognitive Behavioral Therapy; ICD: Impulse Control Disorder; SPSS: Statistical Package for the Social Sciences.", "We do not have any competing interests to declare.", "EUS came up with the idea, did literature review, made the proposal, did data collection, entered data into Statistical Package for the Social Sciences, analyzed it and wrote the paper. SSN helped with literature review, did data collection, entered data into Statistical Package for the Social Sciences and analyzed it. HN’s role was as a consultant psychiatrist, BA supervised the overall conduction of study. All authors read and approved the final manuscript.", "EUS and SSN are final year medical students at Dow Medical College, Dow University of Health Sciences, Karachi, Pakistan. HN = MBBS, FCPS, BA MSc." ]
[ null, "methods", null, "results", "discussion", "conclusions", null, null, null, null ]
[ "BFRBs", "Dermatillomania", "Trichotillomania", "Onychophagia" ]
Background: Body-focused repetitive behaviors (BFRBs) refer to a group of behaviors that include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), which result in physical and psychological difficulties [1]. These behaviors for some individuals are simply referred to as nervous habits [2]. However, these nervous habits become problematic when they interfere with the person’s everyday functioning. When these BFRBs cross this line, then they are classified as Impulse Control Disorders. BFRBs most often begin in late childhood or adolescence. They are among the most poorly understood, misdiagnosed, and under treated groups of disorders. The key factor underlying BFRBs is difficulty resisting the urge or impulse to perform a certain behavior that causes a degree of relief. The behavior continues because the BFRB results in a more pleasant state therefore, it is negatively reinforced. Prevalence estimates indicates that such behaviors are quite common among students. Nail biting of 2 times or more per week was reported among 63.6% students in United States of America [2]. Another study from USA, using the stringent criteria of 5 times or more per week stated 21.8% students engaged in mouth, lips or cheeks chewing and 10.1% engaged in habitual nail biting [3]. A recent survey of college students found that 13.7% of the sample endorsed in at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning [4]. Based on questionnaire screenings, a lifetime ICD rate of 3.5% was found in college students of Germany [5]. Very limited literature and no prevalence estimates regarding BFRBs could be found from Asian countries including Pakistan. The treatment for BFRBs may include a combination of psychotropic medications and cognitive behavioral therapy (CBT). CBT often involves Habit Reversal Training and Exposure and Response Prevention (aka Exposure and Ritual Prevention). Since limited literature from our part of the world is available on body-focused repetitive behaviors and its impact on medical students and their lifestyles, it is imperative to spread awareness regarding them and find out ways and means to prevent them. It is therefore essential to come up with frequency to design strategies to decrease the prevalence of BFRBs and thus prevent their adverse effects among medical students. We thus aimed to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi. Methods: The study was conducted in three large medical colleges of Karachi: The Aga Khan University, Dow Medical College and Sindh Medical College. Students from Karachi as well as interior and foreign attend these institutes; hence provide better representation from different socio economic classes and cultures. The combined strength of student of the three colleges was approximately 3000. The study was conducted in July 2010. The investigation was a cross-sectional study. Students were recruited in equal proportion from each college. A self-administered pre tested questionnaire was used for data collection. Data was collected by trained medical students. The study was approved by the Ethical Review Committee of Aga Khan University and Dow Medical College. Written informed consent was obtained from each student and an explanation of the purposes of the research was provided to them. 
Eligibility criteria to participate in the study comprised of registered medical students aged 18 and above. However students less than 18 years and not willing to consent to participate in this study were excluded. A d “Habit Questionnaire” was used for gathering information on repetitive behaviors. Habit is a brief five items, self-report questionnaire that provides a standardized assessment of the frequency and duration of BFRBs. The Habit Questionnaire has been found to possess moderate test–retest reliability of 0.69 of diagnosis of BFRB, p<0.001 [4]. Participants were asked to indicate if they engage in the following repetitive behaviors: hair pulling, hair manipulation, nail biting, skin biting and mouth chewing. For each repetitive behavior, participants who acknowledged engaging in the specified behavior were asked about the frequency of the behavior (i.e., fewer or greater than five times per day) and the duration of the behavior (i.e., less than 4 weeks or longer than 4 weeks). Participants were also asked to specify if the behavior caused impairment, which was defined as (1) interference with daily functioning, (2) resultant injuries with possible permanent scarring or damage, (3) medical attention as a result of the behavior or (4) recommendation to cease the behavior by a medical professional. To rule out conditions that commonly co-occur with BFRBs, the final section of the Habit Questionnaire asked participants to indicate if they had ever been diagnosed with obsessive–compulsive disorder, Tourette’s syndrome, autism, Asperger’s syndrome or a developmental disability. To meet the criteria for a BFRB, the participants needed to answer “yes” to engaging in at least one repetitive behavior five or more times per day, and the behavior had to be present for at least 4 weeks. Participants were also needed to report that the particular behavior interferes with functioning, caused an injury, caused him or her to seek medical attention or elicited a recommendation to stop the behavior by a medical professional. Our focus was on three behaviors: Trichotillomania (including ‘hair pulling’ and ‘hair manipulation’), Onychophagia (including ‘nail biting’) and Dermatillomania (including ‘chew mouth lips or cheek’ and ‘bite on any other areas’). Statistical analysis A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved. Data was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate. A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. 
Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved. Data was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate. Statistical analysis: A sample size of at least 210 was required to estimate the prevalence of BFRBs among medical students assuming 14% prevalence figure of the Pakistani population, along with 80 percent power, 0.05 significance level, 5 percent bond on error and 10% adjustment for non-response rate. Convenience sampling was used by getting the questionnaires filled during regular college hours from students at four locations in the medical college: the lecture halls, laboratories, library and canteen. Questionnaires were filled on consecutive days until the required sample size was achieved. Data was entered and analyzed using Statistical Package for the Social Sciences (SPSS) version 13.0. Initially descriptive statistics, frequencies and proportions were generated. Continuous variables were analyzed by t – test and categorical by chi-square or Fisher exact, where appropriate. Results: The prevalence of BFRBs among medical students of Karachi was found to be 46 (22%). For those positive for BFRBs, gender distribution was as follows: females 29 (13.9%) and males 17 (8.1%). The average age of participants was 21.5 ranging from 18–27 (refer Table 1). Among these students, 19 (9.0%) were engaged in dermatillomania, 28 (13.3%) in trichotillomania and 13 (6.2%) in onychophagia (refer Table 2). Baseline characteristics of participants BFRBs characteristics among medical students in Karachi As discussed earlier, to meet criteria for BFRBs, student must report engaging in an activity more than five times a day for at least 4 weeks or more. Table 3 clearly shows that majority of the students that were involved in an activity repeated it less than five times per day, rendering them ineligible to be labeled as being involved in BFRBs. Also, far less number of students engaging in a BFRB said to have been involved in it for 4 weeks to 12 months while majority fell into 12 months or more category: 7 out of 9 males and 18 out of 19 females for hair pulling; 20 out of 31 males and 23 out of 34 females for hair manipulation; 11 out of 11 males 25 out of 36 females for nail biting; 17 out of 23 males and 38 out of 51 females for chewing mouth, lips or cheeks and 4 out 5 males and 6 out of 9 females for biting other areas. Prevalence and gender wise distributions of BFRBs Many of the students being engaged in an activity also reported that that activity resulted in noticeable hair loss, interfered with day to day activity, caused injury, permanent scar or damage or made them seek medical attention. Few of them reported that a medical professional suggested them to stop the behavior while very few said that the behavior occurred under the influence of alcohol or other drugs (for details refer Table 3). Besides this, as shown in the previous studies, we did obtain the same test retest reliability. Discussion: BFRBs are not uncommon among medical students of Karachi. 
It is therefore imperative to identify the prevalence associated with BFRBs to design interventions to curtail the burden. Disorder-specific rates in our study ranged from 6.2%-13.3%, which is in agreement with the rates of 12.8% and 10.0% of skin picking and nail biting respectively according to a study conducted in the U.S. [6]. Our study also found greater occurrence among females than males, a finding consistent with previous studies [1,7,8]. Literature shows that stressful conditions tend to trigger BFRBs but may not be necessarily associated. Thus this higher prevalence may indicate that females are more prone to stressful conditions and thus tend to get engaged more frequently in BFRBs than males. As mentioned above, stress may trigger BFRBs. Besides that, other reasons such as socioeconomic factors may also be the cause of stress and anxiety among different sets of students. But since 1) no literature is available proving any such facts, 2) our sample is limited and has no control population and 3) ours is a cross-sectional study which cannot establish cause and effect relationship, therefore we can’t come up with a valid conclusion in this regard. Overall prevalence of BFRBs in our study was found to be much higher than previous surveys for e.g., according to a study, 13.7% of college students endorsed at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning [4]. Another source gives a lifetime ICD rate of 3.5% in college students [5] while according to a study conducted in United States, 17.1% subjects met criteria for a current ICD [6]. Disorder-specific rates are far greater than other studies, such as 2.7-6.4% of skin picking, mouth chewing and nail biting [4] and disorder-specific rates of 0.4-1.2%, except for trichotillomania, which was 0% [5]. Research suggests that skin picking occurs in people with a mean age of onset of around 15 years [9] but our study comprised of medical students aged 18 and above, with those indulging in skin picking ranging from 19–26 years with a mean of 22.5 years. Previous studies also identified prevalence of dermatillomania to be approximately 3.8-4.6% of college students [8,10] 5.4% of a community sample, 4% of college students and 2% of patients seen in a dermatology clinic [8,11-14], and 1.4%, 4.6% and 3.8% by Nancy J. Keuthen on various occasions [8,9]. It should be noted here that all these figures are far lower than the prevalence of dermatillomania in our study population that is 9.0%, possibly depicting a greater influence of stress and anxiety on medical students [15]. Expression of repetitive behaviors has been well documented in the trichotillomania literature. Prevalence estimates indicate a frequency of 2.5% of young adults [16], being quite lower than the prevalence of trichotillomania found in our study, that is 13.3%. This might be due to inclusion of ‘hair manipulation’ in addition to ‘hair pulling’ in our study. It is interesting to note that medical students of Karachi were found to indulge the most into trichotillomania, followed by dermatillomania and the least prevalent behavior was onychophagia which is contrary to literature present. This might be due to the fact that in our study, Trichotillomania and Dermatillomania both comprised of two behaviors whereas Onychophagia comprised of nail biting alone, as has been mentioned above. 
In one of the few studies to address the issue of BFRBs, college students were categorized as having a repetitive behavior (habit) if the student reported engaging in a behavior two or more times per week [2]. Using this relatively lenient criterion, Hansen et al. found that nail biting occurred in 63.6% of the sample. A subsequent study used more stringent criteria for identifying repetitive behaviors in college students. Stating that the repetitive behavior had to occur at least five times per day to be classified as a habit, it was found that 21.8% of the sample engaged in habitual chewing on mouth, lips, or cheeks and 10.1% engaged in habitual nail biting [3]. As discussed earlier, BFRBs may produce a variety of physical sequelae. Thus, to accurately ascertain the extent to which BFRBs are an actual diagnosable problem, not only must data be collected on how frequently these behaviors occur in an individual but also and more importantly the extent to which these behaviors produce some type of impairment be considered. Unfortunately, the Hansen et al. (1990) and Woods et al. (1996) did not incorporate this variable into their operational definitions when determining the prevalence of BFRBs. The tool that we have used for our study, the ‘Habit Questionnaire’ follows criteria that cover all factors and variables associated with BFRBs and thus give a better estimate of its prevalence with greater accuracy. Even after following such strict criteria, prevalence in our study was quite high, pointing towards the fact that medical students of Karachi are greatly prone to such negative behaviors. There are a number of aspects of our study which limit conclusions. Due to excessive workload on medical students, the specifications of questionnaires might not have been accurately filled or due to extra stress on students having their exam season and vice versa, their BFRBs might have been affected which might have affected the overall results. Also, since ours is a cross-sectional study, cause and effect relationship for the identified associated factors could not be established. Because we have carried out this study among medical students only, we cannot give any absolute predictions of prevalence among the general population. Besides that, we cannot comment on the age and gender distribution of BFRBs since medical students fall into a particular age group only and also because the majority of students at the above mentioned study setting were females, it may have affected gender distribution of such behaviors. Also, we have not considered co-morbidities of different behaviors which limit conclusions. This study offers a number of avenues for future research. Additional research should be conducted to establish prevalence rates among different populations including children, adolescents and elderly, and among populations with cultural differences. It may also be useful to examine the general phenomenology of BFRBs among typically developing persons, including possible co-occurring psychological symptoms such as anxiety or negative affective states. Studies should also be conducted to find out the factors associated with BFRBs and its various possible consequences. Doing so may elucidate important components of treatment and methods to avoid engaging into such behaviors that are so physically and socially disadvantageous. 
Conclusions: The above facts clearly indicate a comparatively higher prevalence of BFRBs among medical students of Karachi than other populations, supporting the fact that these students are a victim of great stress and anxiety in their daily lives. High occurrence of BFRBs among them can be greatly detrimental and thus there is an utter need to spread awareness regarding these behaviors to save the students from its negative impacts. Key health messages and interventions to reduce stress and anxiety among students may help in curtailing the burden of this disease which has serious adverse consequences. Abbreviations: BFRBs: Body Focused Repetitive Behaviors; CBT: Cognitive Behavioral Therapy; ICD: Impulse Control Disorder; SPSS: Statistical Package for the Social Sciences. Competing interests: We do not have any competing interests to declare. Authors’ contributions: EUS came up with the idea, did literature review, made the proposal, did data collection, entered data into Statistical Package for the Social Sciences, analyzed it and wrote the paper. SSN helped with literature review, did data collection, entered data into Statistical Package for the Social Sciences and analyzed it. HN’s role was as a consultant psychiatrist, BA supervised the overall conduction of study. All authors read and approved the final manuscript. Authors’ information: EUS and SSN are final year medical students at Dow Medical College, Dow University of Health Sciences, Karachi, Pakistan. HN = MBBS, FCPS, BA MSc.
Background: Body-focused repetitive behaviors (BFRBs), which include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), lead to harmful physical and psychological sequelae. The objective was to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi. Estimating their frequency is imperative for designing strategies to decrease the burden and adverse effects associated with BFRBs among medical students. Methods: A cross-sectional study was conducted among 210 students attending Aga Khan University, Dow Medical College and Sindh Medical College, Karachi, in equal proportion. Data were collected using a pre-tested tool, the "Habit Questionnaire". Diagnoses were made using the criterion that a student must have engaged in an activity 5 times or more per day for 4 weeks or more. Convenience sampling was used to recruit participants aged 18 years and above after obtaining written informed consent. Results: The overall prevalence of BFRBs was found to be 46 (22%). For those positive for BFRBs, gender distribution was as follows: females 29 (13.9%) and males 17 (8.1%). Among these students, 19 (9.0%) were engaged in dermatillomania, 28 (13.3%) in trichotillomania and 13 (6.2%) in onychophagia. Conclusions: High proportions of BFRBs are reported among medical students of Karachi. Key health messages and interventions to reduce stress and anxiety among students may help in curtailing the burden of this disease which has serious adverse consequences.
Background: Body-focused repetitive behaviors (BFRBs) refer to a group of behaviors that include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), which result in physical and psychological difficulties [1]. These behaviors for some individuals are simply referred to as nervous habits [2]. However, these nervous habits become problematic when they interfere with the person’s everyday functioning. When these BFRBs cross this line, then they are classified as Impulse Control Disorders. BFRBs most often begin in late childhood or adolescence. They are among the most poorly understood, misdiagnosed, and under treated groups of disorders. The key factor underlying BFRBs is difficulty resisting the urge or impulse to perform a certain behavior that causes a degree of relief. The behavior continues because the BFRB results in a more pleasant state therefore, it is negatively reinforced. Prevalence estimates indicates that such behaviors are quite common among students. Nail biting of 2 times or more per week was reported among 63.6% students in United States of America [2]. Another study from USA, using the stringent criteria of 5 times or more per week stated 21.8% students engaged in mouth, lips or cheeks chewing and 10.1% engaged in habitual nail biting [3]. A recent survey of college students found that 13.7% of the sample endorsed in at least one repetitive behavior that occurred more than five times per day for at least 4 weeks and produced some type of psychological or physical disruption of functioning [4]. Based on questionnaire screenings, a lifetime ICD rate of 3.5% was found in college students of Germany [5]. Very limited literature and no prevalence estimates regarding BFRBs could be found from Asian countries including Pakistan. The treatment for BFRBs may include a combination of psychotropic medications and cognitive behavioral therapy (CBT). CBT often involves Habit Reversal Training and Exposure and Response Prevention (aka Exposure and Ritual Prevention). Since limited literature from our part of the world is available on body-focused repetitive behaviors and its impact on medical students and their lifestyles, it is imperative to spread awareness regarding them and find out ways and means to prevent them. It is therefore essential to come up with frequency to design strategies to decrease the prevalence of BFRBs and thus prevent their adverse effects among medical students. We thus aimed to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi. Conclusions: The above facts clearly indicate a comparatively higher prevalence of BFRBs among medical students of Karachi than other populations, supporting the fact that these students are a victim of great stress and anxiety in their daily lives. High occurrence of BFRBs among them can be greatly detrimental and thus there is an utter need to spread awareness regarding these behaviors to save the students from its negative impacts. Key health messages and interventions to reduce stress and anxiety among students may help in curtailing the burden of this disease which has serious adverse consequences.
Background: Body-focused repetitive behaviors (BFRBs), which include skin picking (dermatillomania), hair pulling (trichotillomania) and nail biting (onychophagia), lead to harmful physical and psychological sequelae. The objective was to determine the prevalence of BFRBs among students attending three large medical colleges of Karachi. Estimating their frequency is essential for designing strategies to decrease the burden and adverse effects associated with BFRBs among medical students. Methods: A cross-sectional study was conducted among 210 students attending Aga Khan University, Dow Medical College and Sind Medical College, Karachi, in equal proportion. Data were collected using a pre-tested tool, the "Habit Questionnaire". Diagnoses were based on the criterion that a student engaged in an activity 5 times or more per day for 4 weeks or more. Convenience sampling was used to recruit participants aged 18 years and above after obtaining written informed consent. Results: The overall prevalence of BFRBs was 46/210 (22%). Among those positive for BFRBs, the gender distribution was 29 females (13.9%) and 17 males (8.1%). Of these students, 19 (9.0%) engaged in dermatillomania, 28 (13.3%) in trichotillomania and 13 (6.2%) in onychophagia. Conclusions: A high proportion of medical students of Karachi report BFRBs. Key health messages and interventions to reduce stress and anxiety among students may help curtail the burden of this condition, which has serious adverse consequences.
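The prevalence figures quoted above are simple proportions of the 210 sampled students. As a minimal sketch for verification only (the counts come from the abstract; the variable names are illustrative and small rounding differences against the reported percentages are possible), they can be recomputed in Python as follows.

```python
# Minimal sketch: recompute the prevalence percentages reported in the
# BFRB abstract from raw counts. Counts are taken from the abstract;
# variable names are illustrative, not from the study's analysis code.
n_total = 210          # students sampled across the three colleges
counts = {
    "any BFRB": 46,
    "females with BFRB": 29,
    "males with BFRB": 17,
    "dermatillomania": 19,
    "trichotillomania": 28,
    "onychophagia": 13,
}

for label, count in counts.items():
    prevalence = 100 * count / n_total
    print(f"{label}: {count}/{n_total} = {prevalence:.1f}%")
# e.g. "any BFRB: 46/210 = 21.9%", which the abstract rounds to 22%.
```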
3,503
294
[ 470, 150, 28, 10, 85, 32 ]
10
[ "students", "bfrbs", "medical", "study", "prevalence", "behavior", "college", "behaviors", "medical students", "repetitive" ]
[ "nail biting mentioned", "behaviors trichotillomania", "biting disorder specific", "nail biting dermatillomania", "behaviors trichotillomania including" ]
[CONTENT] BFRBs | Dermatillomania | Trichotillomania | Onychophagia [SUMMARY]
[CONTENT] BFRBs | Dermatillomania | Trichotillomania | Onychophagia [SUMMARY]
[CONTENT] BFRBs | Dermatillomania | Trichotillomania | Onychophagia [SUMMARY]
[CONTENT] BFRBs | Dermatillomania | Trichotillomania | Onychophagia [SUMMARY]
[CONTENT] BFRBs | Dermatillomania | Trichotillomania | Onychophagia [SUMMARY]
[CONTENT] BFRBs | Dermatillomania | Trichotillomania | Onychophagia [SUMMARY]
[CONTENT] Adolescent | Adult | Cross-Sectional Studies | Female | Humans | Male | Obsessive-Compulsive Disorder | Pakistan | Schools, Medical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Cross-Sectional Studies | Female | Humans | Male | Obsessive-Compulsive Disorder | Pakistan | Schools, Medical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Cross-Sectional Studies | Female | Humans | Male | Obsessive-Compulsive Disorder | Pakistan | Schools, Medical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Cross-Sectional Studies | Female | Humans | Male | Obsessive-Compulsive Disorder | Pakistan | Schools, Medical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Cross-Sectional Studies | Female | Humans | Male | Obsessive-Compulsive Disorder | Pakistan | Schools, Medical | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Cross-Sectional Studies | Female | Humans | Male | Obsessive-Compulsive Disorder | Pakistan | Schools, Medical | Young Adult [SUMMARY]
[CONTENT] nail biting mentioned | behaviors trichotillomania | biting disorder specific | nail biting dermatillomania | behaviors trichotillomania including [SUMMARY]
[CONTENT] nail biting mentioned | behaviors trichotillomania | biting disorder specific | nail biting dermatillomania | behaviors trichotillomania including [SUMMARY]
[CONTENT] nail biting mentioned | behaviors trichotillomania | biting disorder specific | nail biting dermatillomania | behaviors trichotillomania including [SUMMARY]
[CONTENT] nail biting mentioned | behaviors trichotillomania | biting disorder specific | nail biting dermatillomania | behaviors trichotillomania including [SUMMARY]
[CONTENT] nail biting mentioned | behaviors trichotillomania | biting disorder specific | nail biting dermatillomania | behaviors trichotillomania including [SUMMARY]
[CONTENT] nail biting mentioned | behaviors trichotillomania | biting disorder specific | nail biting dermatillomania | behaviors trichotillomania including [SUMMARY]
[CONTENT] students | bfrbs | medical | study | prevalence | behavior | college | behaviors | medical students | repetitive [SUMMARY]
[CONTENT] students | bfrbs | medical | study | prevalence | behavior | college | behaviors | medical students | repetitive [SUMMARY]
[CONTENT] students | bfrbs | medical | study | prevalence | behavior | college | behaviors | medical students | repetitive [SUMMARY]
[CONTENT] students | bfrbs | medical | study | prevalence | behavior | college | behaviors | medical students | repetitive [SUMMARY]
[CONTENT] students | bfrbs | medical | study | prevalence | behavior | college | behaviors | medical students | repetitive [SUMMARY]
[CONTENT] students | bfrbs | medical | study | prevalence | behavior | college | behaviors | medical students | repetitive [SUMMARY]
[CONTENT] students | bfrbs | behaviors | prevalence | prevention | exposure | prevent | limited literature | habits | disorders [SUMMARY]
[CONTENT] behavior | medical | participants | college | students | asked | study | questionnaire | medical college | required [SUMMARY]
[CONTENT] females | males | activity | table | involved | refer table | students | bfrbs | day | refer [SUMMARY]
[CONTENT] students | stress anxiety | anxiety | stress | adverse consequences | supporting fact | supporting fact students | supporting fact students victim | key health messages interventions | key health messages [SUMMARY]
[CONTENT] students | bfrbs | medical | interests declare | competing interests declare | competing interests | declare | competing | interests | study [SUMMARY]
[CONTENT] students | bfrbs | medical | interests declare | competing interests declare | competing interests | declare | competing | interests | study [SUMMARY]
[CONTENT] ||| BFRBs | three | Karachi ||| BFRBs [SUMMARY]
[CONTENT] 210 | Aga Khan University | Dow Medical College | Sind Medical College | Karachi ||| ||| 5 | 4 weeks ||| 18 years [SUMMARY]
[CONTENT] BFRBs | 46 | 22% ||| BFRBs | 29 | 13.9% | 17 | 8.1% ||| 19 | 9.0% | dermatillomania | 28 | 13.3% | trichotillomania | 13 | 6.2% | onychophagia [SUMMARY]
[CONTENT] Karachi ||| [SUMMARY]
[CONTENT] ||| BFRBs | three | Karachi ||| BFRBs ||| 210 | Aga Khan University | Dow Medical College | Sind Medical College | Karachi ||| ||| 5 | 4 weeks ||| 18 years ||| ||| BFRBs | 46 | 22% ||| BFRBs | 29 | 13.9% | 17 | 8.1% ||| 19 | 9.0% | dermatillomania | 28 | 13.3% | trichotillomania | 13 | 6.2% | onychophagia ||| Karachi ||| [SUMMARY]
[CONTENT] ||| BFRBs | three | Karachi ||| BFRBs ||| 210 | Aga Khan University | Dow Medical College | Sind Medical College | Karachi ||| ||| 5 | 4 weeks ||| 18 years ||| ||| BFRBs | 46 | 22% ||| BFRBs | 29 | 13.9% | 17 | 8.1% ||| 19 | 9.0% | dermatillomania | 28 | 13.3% | trichotillomania | 13 | 6.2% | onychophagia ||| Karachi ||| [SUMMARY]
Thoracotomy is better than thoracoscopic lobectomy in the lymph node dissection of lung cancer: a systematic review and meta-analysis.
27855709
The aim of this study was to investigate which surgical method is better in lymph node (LN) dissection of lung cancer.
BACKGROUND
A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus, and Google Scholar was performed to identify studies comparing thoracoscopic lobectomy (video-assisted thoracic surgery (VATS) group) and thoracotomy (open group) in LN dissection.
METHODS
Twenty-nine articles met the inclusion criteria and involved 2763 patients in the VATS group and 3484 patients in the open group. The meta-analysis showed that fewer total LNs (95% confidence interval [CI] -1.52 to -0.73, p < 0.0001) and N2 LNs (95% CI -1.25 to -0.10, p = 0.02) were dissected in the VATS group. A similar number of total LN stations, N2 LN stations, and N1 LNs were harvested in both groups. Only one study reported that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04).
RESULTS
Open lobectomy could achieve better LN dissection efficacy than thoracoscopic lobectomy in the treatment of lung cancer, especially in the N2 LNs dissection. These findings require validation by high-quality, large-scale randomized controlled trials.
CONCLUSIONS
[ "Carcinoma, Non-Small-Cell Lung", "Humans", "Lung Neoplasms", "Lymph Node Excision", "Lymphatic Metastasis", "Mediastinum", "Neoplasm Staging", "Pneumonectomy", "Thoracic Surgery, Video-Assisted", "Thoracotomy" ]
5114806
Background
Lung cancer is the leading cause of cancer deaths in many countries [1, 2]. Surgery is the preferred treatment for early-stage non-small cell lung cancer (NSCLC). D’Cunha’s research showed that N1 and N2 lymph nodes (LNs) were positive in 27.5% of patients with lung cancer undergoing lobectomy [3]. However, non-invasive examinations, such as computed tomography (CT) and positron emission tomography-computed tomography (PET-CT), are not sufficiently sensitive or specific for the clinical staging of lung cancer. Video-assisted thoracic surgery (VATS) is the preferred surgical procedure, with fewer postoperative complications and a higher survival rate compared with thoracotomy [4–7]. However, whether VATS can achieve the same LN dissection efficacy is controversial, and there remains a lack of high-quality, large-scale clinical research. To determine whether VATS can achieve the same LN dissection efficacy as thoracotomy in lung cancer, we performed a systematic review and meta-analysis.
null
null
Results
Search results and quality assessment of the included studies: We initially identified 2341 publications from the database and reference list searches and reviewed 29 articles for final analysis (Fig. 1: Flow diagram of screened and included papers). The articles involved a total of 6247 patients, of whom 2763 underwent VATS and 3484 underwent thoracotomy. Of these 29 publications, three studies were RCTs and 26 were retrospective studies. According to the NOS and Jadad scales assessment scores, 23 articles were of good quality and the remaining six were medium quality. The baseline characteristics of these articles are listed in Table 1 (Summary of the 29 trials included in the present meta-analysis; columns: study, institution, enrollment period, number of patients per group, clinical stage, outcomes reported, design, and quality score; outcome legend: ① total lymph node number, ② total lymph node station number, ③ N2 LNN, ④ N2 LNS, ⑤ N1 LNN, ⑥ N1 LNS, ⑦ left-side LNN, ⑧ right-side LNN; abbreviations: CHN China, DNK Denmark, JPN Japan, GER Germany, TR Turkey, USA United States of America, RCT randomized controlled trial).
Comparison of total LNN and LNS: We identified 19 articles for total LNN comparison. They involved 1297 patients in the VATS group and 1731 patients in the open group (thoracotomy). The heterogeneity between these studies was acceptable (p = 0.02, I² = 44%). Fewer total LNs were dissected in the VATS group as compared with the open group (95% CI −1.52 to −0.73, p < 0.00001, Fig. 2a). Fourteen articles were identified for total LNS comparison. They involved 2046 patients in the VATS group and 2373 patients in the open group. The mean difference in total LNS between the two groups was not significant (95% CI −0.28 to 0.06, p = 0.20), with significant heterogeneity across studies (p < 0.00001, I² = 73%, Fig. 2b). (Fig. 2: Forest plot of the mean difference in total LNN (a) and LNS (b) in the VATS group vs. the open group.)
Comparison of N2 LNN and LNS: Eleven articles were identified for N2 LNN comparison. They involved 726 patients in the VATS group and 1132 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.08, I² = 41%). Fewer N2 LNs were dissected in the VATS group as compared with the open group (95% CI −1.38 to −0.49, p < 0.0001, Fig. 3a). Five articles were identified for N2 LNS comparison. They involved 473 patients in the VATS group and 473 patients in the open group. The mean difference in N2 LNS between the two groups was not significant (95% CI −0.46 to 0.23, p = 0.50), with significant heterogeneity across studies (p = 0.002, I² = 76%, Fig. 3b). (Fig. 3: Forest plot of the mean difference in N2 LNN (a) and LNS (b) in the VATS group vs. the open group.)
Comparison of N1 LNN and LNS: Four articles were identified for N1 LNN comparison. They involved 288 patients in the VATS group and 686 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.44, I² = 60%). The mean difference in N1 LNN between the two groups was not significant (95% CI −0.71 to 0.08, p = 0.11, Fig. 4: Forest plot of the mean difference in N1 LNN in the VATS group vs. the open group). Only one article was identified for N1 LNS comparison. It involved 141 patients in the VATS group and 115 patients in the open group and showed that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04).
Publication bias: The funnel plot for publication bias (standard error by total LNN comparison) demonstrated marked evidence of symmetry (Fig. 5: Funnel plot of the mean difference in total LNN in the VATS group vs. the open group), indicating no publication bias. The combined effect size yielded a Z value of 5.64, with a corresponding p < 0.00001, indicating that the fail-safe N value was relevant.
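The pooled mean differences reported above are the kind of output produced by inverse-variance weighting of per-study mean differences. The sketch below illustrates a fixed-effect version of that calculation in Python; the per-study means, SDs, and sample sizes are invented placeholders (not data from the included trials), and the authors' actual analysis was run in Review Manager, so this is only a conceptual illustration.

```python
import math

# Illustrative fixed-effect inverse-variance pooling of mean differences
# (VATS minus open). The study values below are made up for illustration;
# they are NOT the data from the included trials.
studies = [
    # (mean_vats, sd_vats, n_vats, mean_open, sd_open, n_open)
    (16.2, 5.1, 80, 17.5, 5.4, 90),
    (14.8, 4.7, 60, 15.9, 5.0, 65),
    (18.1, 6.0, 120, 18.9, 6.2, 110),
]

weights, weighted_mds = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    md = m1 - m2                              # mean difference for one study
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)   # its standard error
    w = 1 / se**2                             # inverse-variance weight
    weights.append(w)
    weighted_mds.append(w * md)

pooled_md = sum(weighted_mds) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low = pooled_md - 1.96 * pooled_se
ci_high = pooled_md + 1.96 * pooled_se
print(f"pooled MD = {pooled_md:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```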
Conclusions
Fewer total and mediastinal LNs were evaluated with VATS than with thoracotomy in the present study. Both approaches harvested a similar number of total LN stations, mediastinal LN stations, and N1 LNs. However, owing to possible bias in the original studies, inter-study heterogeneity, and the inherent limitations of our meta-analysis, these findings require validation in high-quality, large-scale RCTs.
[ "Search strategy", "Inclusion criteria and exclusion criteria", "Data extraction", "Quality assessment for included studies", "Statistical analysis", "Search results and quality assessment of the included studies", "Comparison of total LNN and LNS", "Comparison of N2 LNN and LNS", "Comparison of N1 LNN and LNS", "Publication bias" ]
[ "MEDLINE and manual searches were performed by two investigators independently and in duplicate to identify all relevant scientific articles published from January 1990 to May 2016. The MEDLINE search was performed using PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, The Cochrane Library, Scopus, and Google Scholar. The MeSH terms “lung cancer or lung neoplasm”, “thoracotomy or open surgery”, and “video-assisted thoracic surgery or VATS” and comparative study were used.", "The following inclusion criteria were applied: (1) published in English, (2) compared the LN dissection of thoracoscopic lobectomy with thoracotomy in treating patients with lung cancer, and (3) the most recent study was chosen when duplication of data is in more than one article.\nReviews without original data, case reports, meta-analyses, letters, expert opinions, and animal studies were excluded. Studies on robotic-assisted VATS were also excluded.", "Two investigators independently extracted data from the eligible studies. The extracted data included first author, year of publication, geographical area, study design, duration of enrollment, information on preoperative staging, number of patients per group, LN number (LNN), and LN station number (LNS).", "Two investigators independently assessed the quality of each included study using the Newcastle-Ottawa Scale (NOS) for non-randomized studies and the Jadad scale for randomized controlled trials (RCTs).\nThe NOS evaluates the quality of studies by analyzing three items: selection, comparability, and exposure. The scale assigns a maximum of nine points to each study: a maximum of four points for selection, two points for comparability, and three points for exposure. Therefore, the highest quality study would score nine points. In our analysis, high-quality studies were defined as those that scored nine or eight points; medium-quality studies were those that scored seven or six points [8].\nThe Jadad scale (five points) contained questions for three main parts: randomization, masking, and accountability of all patients (withdrawals and dropouts). Studies scored ≥3 points were considered as high quality [9].", "Meta-analysis was conducted by Review Manager 5.3 and SPSS 18.0, p value < 0.05 suggested statistically significant. The differences were compared between the two groups using analysis of variance for continuous variables and pooled relative risk (RR) with 95% confidence interval (CI) for categorical variables. We used I\n2 and Cochran Q to evaluate the between-study heterogeneity. A random-effects model was adopted when the heterogeneity was significant (p ≤ 0.10 and I\n2 > 50%); otherwise, a fixed-effects model was used. Rank correlation test of funnel plot asymmetry was used to assess the potential publication bias.", "We initially identified 2341 publications from the database and reference list searches and reviewed 29 articles for final analysis (Fig. 1). The articles involved a total of 6247 patients, of whom 2763 underwent VATS and 3484 underwent thoracotomy. Of these 29 publications, three studies were RCTs and 26 were retrospective studies. According to the NOS and Jadad scales assessment scores, 23 articles were of good quality and the remaining six were medium quality. The baseline characteristics of these articles are listed in Table 1.Fig. 1Flow diagram of screened and included papers\nTable 1Summary of the 29 trials included in the present meta-analysisStudyInstitutionEnrolled yearNo. 
of patientsClinical stageOutcomesDesignQualityVATSOpenVATSOpen1995Kirby [28] USASingle1991.10–1993.121991.10–1993.122530I①RCT31998Morikawa [29] JPNSingle1996.04–1996.121995.01–1996.033941I–II③Retrospective92000Luketich [30] USASingleNot mentionedNot mentioned3131I①Retrospective72000Sugi [31] JPNSingle1993.01–1994.061993.01–1994.064852Ia③, ⑤RCT32001Nomori [32] JPNSingle1999.08–2000.121998.04–1999.073333I①,⑦,⑧Retrospective72005Watanabe [33] JPNSingle1997–20041997–2004221190I①, ③, ⑦, ⑧Retrospective82006Petersen [34] USASingle2001–20051996–20051285I–IV②Retrospective82006Shigemura [35] JPNMulti1999.01–2004.011999.01–2004.015055Ia①Retrospective92008Shiraishi [36] JPNSingle1994.11–2005.101994.11–2005.102055I③, ⑤Retrospective82008Watanabe [37] JPNSingle1997–20061997–20063732I①, ②, ③, ④Retrospective82008Whitson [38] USASingleNot mentionedNot mentioned67Not mentioned①Retrospective62009Nakanishi [39] JPNSingle2000.04–2007.012000.04–2007.011314I–IV③Retrospective82009Okur [40] TRSingle2007.01–2007.112007.01–2007.112028I②Retrospective82010Denlinger [21] USASingle2000.01–2008.082000.01–2008.0879464I①, ③, ⑤Retrospective82011D’Amico [41] USASingle2007.01–2010.092007.01–2010.09199189I–III②,④Retrospective72012Bu [42] CHNSingle2001.05–2011.042001.05–2011.044687Not mentioned①, ②Retrospective72012Li [43] CHNSingle2006.09–2009.122006.09–2009.122947I③, ④Retrospective82012Licht [44] DNKMulti2007.01–2011.122007.01–2011.12717796I②Retrospective82013Fan [45] CHNSingle2005.01–2010.122005.01–2010.127977I–II①, ②Retrospective82013Lee [22] USASingle1990.05–2011.121990.05–2011.12141115Not mentioned①, ②, ③, ④, ⑤, ⑥Retrospective82013Palade [46] GERSingle2008.05–2011.122008.05–2011.123232I①, ⑦, ⑧RCT32013Zhong [47] CHNSingle2006.03–2011.082006.03–2011.086790I①, ②, ③, ④Retrospective82014Li [48] CHNSingle2011.02–2013.022011.02–2013.022132I–II①Retrospective82014Stephens [4] USASingle2002.01–2011.122002.01–2011.12307307I②Retrospective82015Cai [49] CHNSingle2010.01–2012.052010.01–2012.057167I–II①, ②Retrospective92015Kuritzky [50] USASingle2007–20122007–201274224I②Retrospective82015Murakawa [51] JPNSingle2001–20102001–2010101101I①, ②Retrospective82015Nwogu [6] USAMulti2004.10–2010.062004.10–2010.06175175I–II①, ②Retrospective72015Zhang [52] CHNSingle2012.10–2013.112012.10–2013.117028I①Retrospective9① total lymph node number, ② total lymph node station number, ③ N2 LNN, ④ N2 LNS, ⑤ N1 LNN, ⑥ N1 LNS, ⑦ left-side LNN, ⑧ right-side LNN\nCHN China, DNK Denmark, JPN Japan, GER Germany, TR Turkey, USA United States of America, RCT randomized controlled trial\n\nFlow diagram of screened and included papers\nSummary of the 29 trials included in the present meta-analysis\n① total lymph node number, ② total lymph node station number, ③ N2 LNN, ④ N2 LNS, ⑤ N1 LNN, ⑥ N1 LNS, ⑦ left-side LNN, ⑧ right-side LNN\n\nCHN China, DNK Denmark, JPN Japan, GER Germany, TR Turkey, USA United States of America, RCT randomized controlled trial", "We identified 19 articles for total LNN comparison. They involved 1297 patients in the VATS group and 1731 patients in the open group (thoracotomy). The heterogeneity between these studies was acceptable (p = 0.02, I\n2 = 44%). Fewer total LNs were dissected in the VATS group as compared with the open group (95% CI −1.52 to −0.73, p < 0.00001, Fig. 2a).Fig. 2Forest plot of the mean difference in total LNN (a) and LNS (b) in the VATS group vs. the open group\n\nForest plot of the mean difference in total LNN (a) and LNS (b) in the VATS group vs. the open group\nFourteen articles were identified for total LNS comparison. 
They involved 2046 patients in the VATS group and 2373 patients in the open group. The mean difference in total LNS between the two groups was not significant (95% CI −0.28 to 0.06, p = 0.20), with significant heterogeneity across studies (p < 0.00001, I\n2 = 73%, Fig. 2b).", "Eleven articles were identified for N2 LNN comparison. They involved 726 patients in the VATS group and 1132 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.08, I\n2 = 41%). Fewer N2 LNs were dissected in the VATS group as compared with the open group (95% CI −1.38 to −0.49, p < 0.0001, Fig. 3a).Fig. 3Forest plot of the mean difference in N2 LNN (a) and LNS (b) in the VATS group vs. the open group\n\nForest plot of the mean difference in N2 LNN (a) and LNS (b) in the VATS group vs. the open group\nFive articles were identified for N2 LNS comparison. They involved 473 patients in the VATS group and 473 patients in the open group. The mean difference in N2 LNS between the two groups was not significant (95% CI −0.46 to 0.23, p = 0.50), with significant heterogeneity across studies (p = 0.002, I\n2 = 76%, Fig. 3b).", "Four articles were identified for N1 LNN comparison. They involved 288 patients in the VATS group and 686 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.44, I\n2 = 60%). The mean difference in N1 LNN between the two groups was not significant (95% CI −0.71 to 0.08, p = 0.11, Fig. 4).Fig. 4Forest plot of the mean difference in N1 LNN in the VATS group vs. the open group\n\nForest plot of the mean difference in N1 LNN in the VATS group vs. the open group\nOnly one article was identified for N1 LNS comparison. They involved 141 patients in the VATS group and 115 patients in the open group. The result showed that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04).", "The funnel plot for publication bias (standard error by total LNN comparison) demonstrated marked evidence of symmetry (Fig. 5), indicating no publication bias. The combined effect size yielded a Z value of 5.64, with a corresponding p < 0.00001. This result indicates that the fail-safe N value was relevant.Fig. 5Funnel plot of the mean difference in total LNN in the VATS group vs. the open group\n\nFunnel plot of the mean difference in total LNN in the VATS group vs. the open group" ]
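The statistical-analysis element above states that a random-effects model was adopted when heterogeneity was significant (p ≤ 0.10 and I² > 50%) and a fixed-effects model otherwise. The following is a minimal Python sketch of that decision rule, using made-up per-study effect sizes and standard errors; it illustrates the described check with Cochran's Q and I², not the authors' actual Review Manager workflow.

```python
from scipy import stats

# Sketch of the heterogeneity check described in the statistical-analysis
# subsection: compute Cochran's Q and I² for a set of study effects, then
# pick a random-effects model if p <= 0.10 and I² > 50%, otherwise fixed.
# Effect sizes (mean differences) and standard errors below are placeholders.
effects = [-1.1, -0.6, -1.4, -0.3]
ses = [0.40, 0.55, 0.35, 0.50]

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
df = len(effects) - 1
p_het = 1 - stats.chi2.cdf(q, df)
i2 = max(0.0, 100 * (q - df) / q)  # I² as a percentage

model = "random-effects" if (p_het <= 0.10 and i2 > 50) else "fixed-effects"
print(f"Q = {q:.2f}, p = {p_het:.3f}, I² = {i2:.1f}% -> use a {model} model")
```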
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Search strategy", "Inclusion criteria and exclusion criteria", "Data extraction", "Quality assessment for included studies", "Statistical analysis", "Results", "Search results and quality assessment of the included studies", "Comparison of total LNN and LNS", "Comparison of N2 LNN and LNS", "Comparison of N1 LNN and LNS", "Publication bias", "Discussion", "Conclusions" ]
[ "Lung cancer is the leading cause of cancer deaths in many countries [1, 2]. Surgical treatment is the preferred treatment for early-stage non-small cell lung cancer (NSCLC). D’Cunha’s research showed that N1 and N2 lymph nodes (LNs) were positive in 27.5% of patients with lung cancer under lobectomy [3]. However, non-invasive examinations, such as computed tomography (CT) and positron emission tomography-computed tomography (PET-CT), are not sensitive and specific for the clinical staging of lung cancer. Video-assisted thoracic surgery (VATS) is the preferred surgical procedure, with fewer incidences of postoperative complications and a higher survival rate compared with thoracotomy [4–7]. However, whether VATS can achieve the same LN dissection efficacy is controversial, and there remains a lack of high-quality, large-scale clinical research.\nTo determine whether VATS can achieve the same LN dissection efficacy as thoracotomy in lung cancer, we performed a systemic review and meta-analysis.", " Search strategy MEDLINE and manual searches were performed by two investigators independently and in duplicate to identify all relevant scientific articles published from January 1990 to May 2016. The MEDLINE search was performed using PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, The Cochrane Library, Scopus, and Google Scholar. The MeSH terms “lung cancer or lung neoplasm”, “thoracotomy or open surgery”, and “video-assisted thoracic surgery or VATS” and comparative study were used.\nMEDLINE and manual searches were performed by two investigators independently and in duplicate to identify all relevant scientific articles published from January 1990 to May 2016. The MEDLINE search was performed using PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, The Cochrane Library, Scopus, and Google Scholar. The MeSH terms “lung cancer or lung neoplasm”, “thoracotomy or open surgery”, and “video-assisted thoracic surgery or VATS” and comparative study were used.\n Inclusion criteria and exclusion criteria The following inclusion criteria were applied: (1) published in English, (2) compared the LN dissection of thoracoscopic lobectomy with thoracotomy in treating patients with lung cancer, and (3) the most recent study was chosen when duplication of data is in more than one article.\nReviews without original data, case reports, meta-analyses, letters, expert opinions, and animal studies were excluded. Studies on robotic-assisted VATS were also excluded.\nThe following inclusion criteria were applied: (1) published in English, (2) compared the LN dissection of thoracoscopic lobectomy with thoracotomy in treating patients with lung cancer, and (3) the most recent study was chosen when duplication of data is in more than one article.\nReviews without original data, case reports, meta-analyses, letters, expert opinions, and animal studies were excluded. Studies on robotic-assisted VATS were also excluded.\n Data extraction Two investigators independently extracted data from the eligible studies. The extracted data included first author, year of publication, geographical area, study design, duration of enrollment, information on preoperative staging, number of patients per group, LN number (LNN), and LN station number (LNS).\nTwo investigators independently extracted data from the eligible studies. 
The extracted data included first author, year of publication, geographical area, study design, duration of enrollment, information on preoperative staging, number of patients per group, LN number (LNN), and LN station number (LNS).\n Quality assessment for included studies Two investigators independently assessed the quality of each included study using the Newcastle-Ottawa Scale (NOS) for non-randomized studies and the Jadad scale for randomized controlled trials (RCTs).\nThe NOS evaluates the quality of studies by analyzing three items: selection, comparability, and exposure. The scale assigns a maximum of nine points to each study: a maximum of four points for selection, two points for comparability, and three points for exposure. Therefore, the highest quality study would score nine points. In our analysis, high-quality studies were defined as those that scored nine or eight points; medium-quality studies were those that scored seven or six points [8].\nThe Jadad scale (five points) contained questions for three main parts: randomization, masking, and accountability of all patients (withdrawals and dropouts). Studies scored ≥3 points were considered as high quality [9].\nTwo investigators independently assessed the quality of each included study using the Newcastle-Ottawa Scale (NOS) for non-randomized studies and the Jadad scale for randomized controlled trials (RCTs).\nThe NOS evaluates the quality of studies by analyzing three items: selection, comparability, and exposure. The scale assigns a maximum of nine points to each study: a maximum of four points for selection, two points for comparability, and three points for exposure. Therefore, the highest quality study would score nine points. In our analysis, high-quality studies were defined as those that scored nine or eight points; medium-quality studies were those that scored seven or six points [8].\nThe Jadad scale (five points) contained questions for three main parts: randomization, masking, and accountability of all patients (withdrawals and dropouts). Studies scored ≥3 points were considered as high quality [9].\n Statistical analysis Meta-analysis was conducted by Review Manager 5.3 and SPSS 18.0, p value < 0.05 suggested statistically significant. The differences were compared between the two groups using analysis of variance for continuous variables and pooled relative risk (RR) with 95% confidence interval (CI) for categorical variables. We used I\n2 and Cochran Q to evaluate the between-study heterogeneity. A random-effects model was adopted when the heterogeneity was significant (p ≤ 0.10 and I\n2 > 50%); otherwise, a fixed-effects model was used. Rank correlation test of funnel plot asymmetry was used to assess the potential publication bias.\nMeta-analysis was conducted by Review Manager 5.3 and SPSS 18.0, p value < 0.05 suggested statistically significant. The differences were compared between the two groups using analysis of variance for continuous variables and pooled relative risk (RR) with 95% confidence interval (CI) for categorical variables. We used I\n2 and Cochran Q to evaluate the between-study heterogeneity. A random-effects model was adopted when the heterogeneity was significant (p ≤ 0.10 and I\n2 > 50%); otherwise, a fixed-effects model was used. 
Rank correlation test of funnel plot asymmetry was used to assess the potential publication bias.", "MEDLINE and manual searches were performed by two investigators independently and in duplicate to identify all relevant scientific articles published from January 1990 to May 2016. The MEDLINE search was performed using PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, The Cochrane Library, Scopus, and Google Scholar. The MeSH terms “lung cancer or lung neoplasm”, “thoracotomy or open surgery”, and “video-assisted thoracic surgery or VATS” and comparative study were used.", "The following inclusion criteria were applied: (1) published in English, (2) compared the LN dissection of thoracoscopic lobectomy with thoracotomy in treating patients with lung cancer, and (3) the most recent study was chosen when duplication of data is in more than one article.\nReviews without original data, case reports, meta-analyses, letters, expert opinions, and animal studies were excluded. Studies on robotic-assisted VATS were also excluded.", "Two investigators independently extracted data from the eligible studies. The extracted data included first author, year of publication, geographical area, study design, duration of enrollment, information on preoperative staging, number of patients per group, LN number (LNN), and LN station number (LNS).", "Two investigators independently assessed the quality of each included study using the Newcastle-Ottawa Scale (NOS) for non-randomized studies and the Jadad scale for randomized controlled trials (RCTs).\nThe NOS evaluates the quality of studies by analyzing three items: selection, comparability, and exposure. The scale assigns a maximum of nine points to each study: a maximum of four points for selection, two points for comparability, and three points for exposure. Therefore, the highest quality study would score nine points. In our analysis, high-quality studies were defined as those that scored nine or eight points; medium-quality studies were those that scored seven or six points [8].\nThe Jadad scale (five points) contained questions for three main parts: randomization, masking, and accountability of all patients (withdrawals and dropouts). Studies scored ≥3 points were considered as high quality [9].", "Meta-analysis was conducted by Review Manager 5.3 and SPSS 18.0, p value < 0.05 suggested statistically significant. The differences were compared between the two groups using analysis of variance for continuous variables and pooled relative risk (RR) with 95% confidence interval (CI) for categorical variables. We used I\n2 and Cochran Q to evaluate the between-study heterogeneity. A random-effects model was adopted when the heterogeneity was significant (p ≤ 0.10 and I\n2 > 50%); otherwise, a fixed-effects model was used. Rank correlation test of funnel plot asymmetry was used to assess the potential publication bias.", " Search results and quality assessment of the included studies We initially identified 2341 publications from the database and reference list searches and reviewed 29 articles for final analysis (Fig. 1). The articles involved a total of 6247 patients, of whom 2763 underwent VATS and 3484 underwent thoracotomy. Of these 29 publications, three studies were RCTs and 26 were retrospective studies. According to the NOS and Jadad scales assessment scores, 23 articles were of good quality and the remaining six were medium quality. The baseline characteristics of these articles are listed in Table 1.Fig. 
1: Flow diagram of screened and included papers. Table 1: Summary of the 29 trials included in the present meta-analysis (columns: study, institution, enrollment period, number of patients per group, clinical stage, outcomes reported, design, and quality score; outcome legend: ① total lymph node number, ② total lymph node station number, ③ N2 LNN, ④ N2 LNS, ⑤ N1 LNN, ⑥ N1 LNS, ⑦ left-side LNN, ⑧ right-side LNN; abbreviations: CHN China, DNK Denmark, JPN Japan, GER Germany, TR Turkey, USA United States of America, RCT randomized controlled trial).
 Comparison of total LNN and LNS We identified 19 articles for total LNN comparison. They involved 1297 patients in the VATS group and 1731 patients in the open group (thoracotomy). The heterogeneity between these studies was acceptable (p = 0.02, I² = 44%). Fewer total LNs were dissected in the VATS group as compared with the open group (95% CI −1.52 to −0.73, p < 0.00001, Fig. 2a). Fourteen articles were identified for total LNS comparison. They involved 2046 patients in the VATS group and 2373 patients in the open group. The mean difference in total LNS between the two groups was not significant (95% CI −0.28 to 0.06, p = 0.20), with significant heterogeneity across studies (p < 0.00001, I² = 73%, Fig. 2b). (Fig. 2: Forest plot of the mean difference in total LNN (a) and LNS (b) in the VATS group vs. the open group.)
 Comparison of N2 LNN and LNS Eleven articles were identified for N2 LNN comparison. They involved 726 patients in the VATS group and 1132 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.08, I² = 41%). Fewer N2 LNs were dissected in the VATS group as compared with the open group (95% CI −1.38 to −0.49, p < 0.0001, Fig. 3a). Five articles were identified for N2 LNS comparison. They involved 473 patients in the VATS group and 473 patients in the open group. The mean difference in N2 LNS between the two groups was not significant (95% CI −0.46 to 0.23, p = 0.50), with significant heterogeneity across studies (p = 0.002, I² = 76%, Fig. 3b). (Fig. 3: Forest plot of the mean difference in N2 LNN (a) and LNS (b) in the VATS group vs. the open group.)
 Comparison of N1 LNN and LNS Four articles were identified for N1 LNN comparison. They involved 288 patients in the VATS group and 686 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.44, I² = 60%). The mean difference in N1 LNN between the two groups was not significant (95% CI −0.71 to 0.08, p = 0.11, Fig. 4: Forest plot of the mean difference in N1 LNN in the VATS group vs. the open group). Only one article was identified for N1 LNS comparison. It involved 141 patients in the VATS group and 115 patients in the open group and showed that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04).
 Publication bias The funnel plot for publication bias (standard error by total LNN comparison) demonstrated marked evidence of symmetry (Fig. 5: Funnel plot of the mean difference in total LNN in the VATS group vs. the open group), indicating no publication bias. The combined effect size yielded a Z value of 5.64, with a corresponding p < 0.00001, indicating that the fail-safe N value was relevant.", "We initially identified 2341 publications from the database and reference list searches and reviewed 29 articles for final analysis (Fig. 1: Flow diagram of screened and included papers). The articles involved a total of 6247 patients, of whom 2763 underwent VATS and 3484 underwent thoracotomy. Of these 29 publications, three studies were RCTs and 26 were retrospective studies. According to the NOS and Jadad scales assessment scores, 23 articles were of good quality and the remaining six were medium quality. The baseline characteristics of these articles are listed in Table 1 (Summary of the 29 trials included in the present meta-analysis; see the column and legend description above).", "We identified 19 articles for total LNN comparison. They involved 1297 patients in the VATS group and 1731 patients in the open group (thoracotomy). The heterogeneity between these studies was acceptable (p = 0.02, I² = 44%). Fewer total LNs were dissected in the VATS group as compared with the open group (95% CI −1.52 to −0.73, p < 0.00001, Fig. 2a: Forest plot of the mean difference in total LNN (a) and LNS (b) in the VATS group vs. the open group).
Fourteen articles were identified for total LNS comparison.
They involved 2046 patients in the VATS group and 2373 patients in the open group. The mean difference in total LNS between the two groups was not significant (95% CI −0.28 to 0.06, p = 0.20), with significant heterogeneity across studies (p < 0.00001, I² = 73%, Fig. 2b).", "Eleven articles were identified for N2 LNN comparison. They involved 726 patients in the VATS group and 1132 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.08, I² = 41%). Fewer N2 LNs were dissected in the VATS group as compared with the open group (95% CI −1.38 to −0.49, p < 0.0001, Fig. 3a). Fig. 3 Forest plot of the mean difference in N2 LNN (a) and LNS (b) in the VATS group vs. the open group. Five articles were identified for N2 LNS comparison. They involved 473 patients in the VATS group and 473 patients in the open group. The mean difference in N2 LNS between the two groups was not significant (95% CI −0.46 to 0.23, p = 0.50), with significant heterogeneity across studies (p = 0.002, I² = 76%, Fig. 3b).", "Four articles were identified for N1 LNN comparison. They involved 288 patients in the VATS group and 686 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.44, I² = 60%). The mean difference in N1 LNN between the two groups was not significant (95% CI −0.71 to 0.08, p = 0.11, Fig. 4). Fig. 4 Forest plot of the mean difference in N1 LNN in the VATS group vs. the open group. Only one article was identified for N1 LNS comparison. It involved 141 patients in the VATS group and 115 patients in the open group. The result showed that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04).", "The funnel plot for publication bias (standard error by total LNN comparison) demonstrated marked evidence of symmetry (Fig. 5), indicating no publication bias. The combined effect size yielded a Z value of 5.64, with a corresponding p < 0.00001. This result indicates that the fail-safe N value was relevant. Fig. 5 Funnel plot of the mean difference in total LNN in the VATS group vs. the open group", "The presence or absence of mediastinal LN metastases is a critical component to accurate staging and therefore a key component of the surgical management of NSCLC [10]. Both overall and disease-free survival have been associated with the number of LNs dissected [11, 12]. Lardinois compared LNs dissection versus sampling; the results showed a longer disease-free survival and better local tumor control in dissection groups [13]. Focusing on stage IA lung cancer, Xu reported a similar result and suggested that the number of N2 stations served as a more significant prognostic factor [14]. As a result, current guidelines from the National Comprehensive Cancer Network and European Society of Thoracic Surgeons recommend that all patients with resectable NSCLC should have a complete systematic nodal dissection, with at least three N2 stations dissected [15, 16]. Although VATS is associated with many benefits in comparison to thoracotomy, whether VATS can achieve the same LN dissection efficacy in patients with lung cancer remains controversial [7, 17, 18].
Flores reported that the surgical field in VATS facilitated the dissection of LNs adjacent to the blood vessels and the trachea and found that smaller nodes achieved better LN dissection [19]. Ramos showed that VATS dissected more total LNS (5.1 ± 1.1 vs. 4.5 ± 1.2, p < 0.001) and mediastinal LNS (3.4 ± 0.9 vs. 3.2 ± 0.9, p = 0.022) when compared with thoracotomy [20]. By contrast, Denlinger reported that significantly more overall LNs were dissected in the open group than in the VATS group (8.9 ± 5.2 vs. 7.1 ± 5.2, p = 0.0006) [21]. Similar results were reported by Lee and colleagues in their retrospective study of LN evaluation achieved by VATS lobectomy compared with that by open lobectomy [22]. A secondary analysis including 752 original participants of the American College of Surgeons Oncology Group Z0030 trial compared patients who underwent lobectomy by VATS with patients who underwent thoracotomy. The results showed that there was no significant difference in the overall number of LNs retrieved between the two groups (15 vs. 19, p = 0.147) [23].\nThe present study included a total of 7568 patients from nine countries, providing the most comprehensive evidence for LN dissection efficacy by VATS to date. The meta-analysis showed that fewer total LNN and similar total LNS were dissected in the VATS group as compared with that in the open group. If we only included studies focusing on clinical stage I lung cancer, the results were similar (total LNN, 95% CI −1.67 to −0.61, p < 0.0001; total LNS, 95% CI −0.62 to 0.02, p = 0.07). It suggests that surgeons may not have the ability to perform systematic lymphadenectomy in VATS or ignore the importance of systematic lymphadenectomy for various reasons (earlier tumor stage, worrying about damage to vital organs, and so on).\nIn the comparison of N1 and N2 LN dissection, our results showed that similar number of N1 LNN and N2 LNS could be harvested by VATS, while fewer N2 LNN were harvested by VATS as compared with thoracotomy. Only one article reported on N1 LNS comparison between the two groups and showed better efficiency in the open group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04) [22].\nIt was controversial that removing more N2 LNs could increase the accuracy of clinical staging of NSCLC. Boffa et al. compared the completeness of surgical LN evaluation during anatomic resection of primary lung cancer by open and VATS approaches in 11,531 patients from The Society of Thoracic Surgeons-General Thoracic Database. The results showed nodal upstaging in 14.3% (1024 patients) of the open group and in 11.6% (508 patients) of the VATS group (p < 0.001). The study suggested that surgeons should be encouraged to apply a systematic approach to hilar and peribronchial LN dissection during VATS lobectomy for lung cancer, particularly as they were adopting this approach [24].\nSome surgeons might also worry about the complications caused by the systematic LN dissection. As with open thoracotomy, systematic mediastinal LN dissection under VATS may increase the risk of intraoperative bleeding (bronchial arteries, etc.), tracheobronchial injury, recurrent nerve injury, prolonged air leak, atrial fibrillation, and pulmonary edema [25]. In other papers, Watanabe et al. reported similar mortality and morbidity of mediastinal LN dissection by VATS vs. open lobectomy, indicating that systematic mediastinal LN dissection by VATS is a safe procedure [26]. Zhang et al. 
compared complications such as chylothorax and nerve injury between VATS and open thoracotomy in a meta-analysis. The results showed that these events were similar in both groups [27].\nThe possible limitations of our study must be considered when interpreting the findings described herein. First, including only English papers might have resulted in language bias. Second, including 7568 participants from 36 studies with only three RCTs might have weakened the quality of the results. Third, the number of dissected LNs varied significantly between the included studies. Different doctors have different understanding of LN dissection and might be at different stages of the learning curve. Some data did not meet the National Comprehensive Cancer Network guide requirements for lung cancer surgery treatment of systematic LN dissection, which may have affected the reliability of the results. Fourth, there is great potential for LN fragmentation during dissection. Different pathologists and different counting procedures might lead to false LNN counts, which might increase the heterogeneity between studies but would not alter the overall results. Finally, we did not analyze the survival difference between VATS and open thoracotomy. Our analysis compared LN harvest capability between two evaluation procedures only from a surgical point of view and tried to give further proof of satisfied oncologic efficacy by VATS.", "Less total and mediastinal LNs were evaluated with VATS than with thoracotomy in the present study. Both approaches harvested a similar number of total LN stations, mediastinal LN stations, and N1 LNs. However, owing to the possible existing bias in the original studies, inter-study heterogeneity, and the inherent limitations of our meta-analysis, the findings require validation in high-quality, large-scale RCTs." ]
[ "introduction", "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Video-assisted thoracic surgery", "Thoracotomy", "Meta-analysis", "Lung cancer", "Lymph node dissection" ]
Background: Lung cancer is the leading cause of cancer deaths in many countries [1, 2]. Surgical treatment is the preferred treatment for early-stage non-small cell lung cancer (NSCLC). D’Cunha’s research showed that N1 and N2 lymph nodes (LNs) were positive in 27.5% of patients with lung cancer under lobectomy [3]. However, non-invasive examinations, such as computed tomography (CT) and positron emission tomography-computed tomography (PET-CT), are not sensitive and specific for the clinical staging of lung cancer. Video-assisted thoracic surgery (VATS) is the preferred surgical procedure, with fewer incidences of postoperative complications and a higher survival rate compared with thoracotomy [4–7]. However, whether VATS can achieve the same LN dissection efficacy is controversial, and there remains a lack of high-quality, large-scale clinical research. To determine whether VATS can achieve the same LN dissection efficacy as thoracotomy in lung cancer, we performed a systematic review and meta-analysis.
Methods: Search strategy: MEDLINE and manual searches were performed by two investigators independently and in duplicate to identify all relevant scientific articles published from January 1990 to May 2016. The MEDLINE search was performed using PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, The Cochrane Library, Scopus, and Google Scholar. The MeSH terms “lung cancer or lung neoplasm”, “thoracotomy or open surgery”, and “video-assisted thoracic surgery or VATS”, combined with “comparative study”, were used.
Inclusion criteria and exclusion criteria: The following inclusion criteria were applied: (1) published in English, (2) compared the LN dissection of thoracoscopic lobectomy with thoracotomy in treating patients with lung cancer, and (3) the most recent study was chosen when data were duplicated in more than one article. Reviews without original data, case reports, meta-analyses, letters, expert opinions, and animal studies were excluded. Studies on robotic-assisted VATS were also excluded.
Data extraction: Two investigators independently extracted data from the eligible studies. The extracted data included first author, year of publication, geographical area, study design, duration of enrollment, information on preoperative staging, number of patients per group, LN number (LNN), and LN station number (LNS).
Quality assessment for included studies: Two investigators independently assessed the quality of each included study using the Newcastle-Ottawa Scale (NOS) for non-randomized studies and the Jadad scale for randomized controlled trials (RCTs). The NOS evaluates the quality of studies by analyzing three items: selection, comparability, and exposure. The scale assigns a maximum of nine points to each study: a maximum of four points for selection, two points for comparability, and three points for exposure. Therefore, the highest quality study would score nine points. In our analysis, high-quality studies were defined as those that scored nine or eight points; medium-quality studies were those that scored seven or six points [8]. The Jadad scale (five points) contains questions for three main parts: randomization, masking, and accountability of all patients (withdrawals and dropouts). Studies that scored ≥3 points were considered high quality [9].
Statistical analysis: The meta-analysis was conducted with Review Manager 5.3 and SPSS 18.0; a p value < 0.05 was considered statistically significant. The differences between the two groups were compared using analysis of variance for continuous variables and pooled relative risk (RR) with 95% confidence interval (CI) for categorical variables. We used I² and Cochran’s Q to evaluate the between-study heterogeneity. A random-effects model was adopted when the heterogeneity was significant (p ≤ 0.10 and I² > 50%); otherwise, a fixed-effects model was used. A rank correlation test of funnel plot asymmetry was used to assess the potential publication bias.
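The heterogeneity assessment and model choice described above follow a standard inverse-variance workflow. The sketch below is illustrative only: the per-study means, SDs, and sample sizes are hypothetical placeholders rather than data from the included trials, and Review Manager's exact implementation (for example, its tau-squared estimator) may differ in detail.

```python
# Illustrative inverse-variance pooling of mean differences with Cochran's Q,
# I^2, and the fixed/random model rule stated in the Methods.
# All per-study summaries below are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2, norm

vats_mean = np.array([14.2, 16.0, 18.5])   # hypothetical mean LN counts, VATS arm
vats_sd   = np.array([5.1, 6.2, 7.0])
vats_n    = np.array([50, 80, 120])
open_mean = np.array([15.4, 17.1, 19.6])   # hypothetical mean LN counts, open arm
open_sd   = np.array([5.5, 6.0, 7.4])
open_n    = np.array([55, 90, 110])

# Per-study mean difference and its variance (independent groups).
md = vats_mean - open_mean
var = vats_sd**2 / vats_n + open_sd**2 / open_n

# Fixed-effect (inverse-variance) pooled estimate.
w = 1.0 / var
md_fixed = np.sum(w * md) / np.sum(w)

# Cochran's Q, its p value, and I^2.
k = len(md)
Q = np.sum(w * (md - md_fixed) ** 2)
p_q = chi2.sf(Q, df=k - 1)
i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# Model choice as stated in the Methods: random effects when p <= 0.10 and
# I^2 > 50%, otherwise fixed effects (DerSimonian-Laird estimate of tau^2).
if p_q <= 0.10 and i2 > 50:
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
else:
    tau2 = 0.0

w_star = 1.0 / (var + tau2)
md_pooled = np.sum(w_star * md) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
ci_low, ci_high = md_pooled - 1.96 * se_pooled, md_pooled + 1.96 * se_pooled
z = md_pooled / se_pooled
p = 2 * norm.sf(abs(z))

print(f"Q={Q:.2f} (p={p_q:.3f}), I2={i2:.0f}%, tau2={tau2:.3f}")
print(f"pooled MD={md_pooled:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}, p={p:.5f}")
```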
Results: Search results and quality assessment of the included studies: We initially identified 2341 publications from the database and reference list searches and reviewed 29 articles for final analysis (Fig. 1). The articles involved a total of 6247 patients, of whom 2763 underwent VATS and 3484 underwent thoracotomy. Of these 29 publications, three studies were RCTs and 26 were retrospective studies. According to the NOS and Jadad scales assessment scores, 23 articles were of good quality and the remaining six were medium quality. The baseline characteristics of these articles are listed in Table 1.
Fig. 1 Flow diagram of screened and included papers
Table 1 Summary of the 29 trials included in the present meta-analysis
Year | Study | Country | Institution | Enrolled year (VATS) | Enrolled year (open) | No. of patients | Clinical stage | Outcomes | Design | Quality
1995 | Kirby [28] | USA | Single | 1991.10–1993.12 | 1991.10–1993.12 | 2530 | I | ① | RCT | 3
1998 | Morikawa [29] | JPN | Single | 1996.04–1996.12 | 1995.01–1996.03 | 3941 | I–II | ③ | Retrospective | 9
2000 | Luketich [30] | USA | Single | Not mentioned | Not mentioned | 3131 | I | ① | Retrospective | 7
2000 | Sugi [31] | JPN | Single | 1993.01–1994.06 | 1993.01–1994.06 | 4852 | Ia | ③, ⑤ | RCT | 3
2001 | Nomori [32] | JPN | Single | 1999.08–2000.12 | 1998.04–1999.07 | 3333 | I | ①,⑦,⑧ | Retrospective | 7
2005 | Watanabe [33] | JPN | Single | 1997–2004 | 1997–2004 | 221190 | I | ①, ③, ⑦, ⑧ | Retrospective | 8
2006 | Petersen [34] | USA | Single | 2001–2005 | 1996–2005 | 1285 | I–IV | ② | Retrospective | 8
2006 | Shigemura [35] | JPN | Multi | 1999.01–2004.01 | 1999.01–2004.01 | 5055 | Ia | ① | Retrospective | 9
2008 | Shiraishi [36] | JPN | Single | 1994.11–2005.10 | 1994.11–2005.10 | 2055 | I | ③, ⑤ | Retrospective | 8
2008 | Watanabe [37] | JPN | Single | 1997–2006 | 1997–2006 | 3732 | I | ①, ②, ③, ④ | Retrospective | 8
2008 | Whitson [38] | USA | Single | Not mentioned | Not mentioned | 67 | Not mentioned | ① | Retrospective | 6
2009 | Nakanishi [39] | JPN | Single | 2000.04–2007.01 | 2000.04–2007.01 | 1314 | I–IV | ③ | Retrospective | 8
2009 | Okur [40] | TR | Single | 2007.01–2007.11 | 2007.01–2007.11 | 2028 | I | ② | Retrospective | 8
2010 | Denlinger [21] | USA | Single | 2000.01–2008.08 | 2000.01–2008.08 | 79464 | I | ①, ③, ⑤ | Retrospective | 8
2011 | D’Amico [41] | USA | Single | 2007.01–2010.09 | 2007.01–2010.09 | 199189 | I–III | ②,④ | Retrospective | 7
2012 | Bu [42] | CHN | Single | 2001.05–2011.04 | 2001.05–2011.04 | 4687 | Not mentioned | ①, ② | Retrospective | 7
2012 | Li [43] | CHN | Single | 2006.09–2009.12 | 2006.09–2009.12 | 2947 | I | ③, ④ | Retrospective | 8
2012 | Licht [44] | DNK | Multi | 2007.01–2011.12 | 2007.01–2011.12 | 717796 | I | ② | Retrospective | 8
2013 | Fan [45] | CHN | Single | 2005.01–2010.12 | 2005.01–2010.12 | 7977 | I–II | ①, ② | Retrospective | 8
2013 | Lee [22] | USA | Single | 1990.05–2011.12 | 1990.05–2011.12 | 141115 | Not mentioned | ①, ②, ③, ④, ⑤, ⑥ | Retrospective | 8
2013 | Palade [46] | GER | Single | 2008.05–2011.12 | 2008.05–2011.12 | 3232 | I | ①, ⑦, ⑧ | RCT | 3
2013 | Zhong [47] | CHN | Single | 2006.03–2011.08 | 2006.03–2011.08 | 6790 | I | ①, ②, ③, ④ | Retrospective | 8
2014 | Li [48] | CHN | Single | 2011.02–2013.02 | 2011.02–2013.02 | 2132 | I–II | ① | Retrospective | 8
2014 | Stephens [4] | USA | Single | 2002.01–2011.12 | 2002.01–2011.12 | 307307 | I | ② | Retrospective | 8
2015 | Cai [49] | CHN | Single | 2010.01–2012.05 | 2010.01–2012.05 | 7167 | I–II | ①, ② | Retrospective | 9
2015 | Kuritzky [50] | USA | Single | 2007–2012 | 2007–2012 | 74224 | I | ② | Retrospective | 8
2015 | Murakawa [51] | JPN | Single | 2001–2010 | 2001–2010 | 101101 | I | ①, ② | Retrospective | 8
2015 | Nwogu [6] | USA | Multi | 2004.10–2010.06 | 2004.10–2010.06 | 175175 | I–II | ①, ② | Retrospective | 7
2015 | Zhang [52] | CHN | Single | 2012.10–2013.11 | 2012.10–2013.11 | 7028 | I | ① | Retrospective | 9
① total lymph node number, ② total lymph node station number, ③ N2 LNN, ④ N2 LNS, ⑤ N1 LNN, ⑥ N1 LNS, ⑦ left-side LNN, ⑧ right-side LNN
CHN China, DNK Denmark, JPN Japan, GER Germany, TR Turkey, USA United States of America, RCT randomized controlled trial
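The quality grading summarized above (23 high-quality and six medium-quality articles) follows the thresholds stated in the Methods: NOS 8–9 high and 6–7 medium for non-randomized studies, and Jadad ≥3 high for RCTs. A minimal sketch of that rule, using invented example scores rather than the actual scores from Table 1:

```python
# Hypothetical illustration of the quality grading described in the Methods.
# The example scores are invented, not the scores of the included trials.
from collections import Counter

def grade(design: str, score: int) -> str:
    if design == "RCT":                      # Jadad scale, 0-5
        return "high" if score >= 3 else "low"
    if score >= 8:                           # Newcastle-Ottawa Scale, 0-9
        return "high"
    return "medium" if score >= 6 else "low"

example_studies = [("RCT", 3), ("Retrospective", 9), ("Retrospective", 7),
                   ("Retrospective", 8), ("Retrospective", 6)]
print(Counter(grade(design, score) for design, score in example_studies))
```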
Comparison of total LNN and LNS: We identified 19 articles for total LNN comparison. They involved 1297 patients in the VATS group and 1731 patients in the open group (thoracotomy). The heterogeneity between these studies was acceptable (p = 0.02, I² = 44%). Fewer total LNs were dissected in the VATS group as compared with the open group (95% CI −1.52 to −0.73, p < 0.00001, Fig. 2a). Fig. 2 Forest plot of the mean difference in total LNN (a) and LNS (b) in the VATS group vs. the open group. Fourteen articles were identified for total LNS comparison. They involved 2046 patients in the VATS group and 2373 patients in the open group. The mean difference in total LNS between the two groups was not significant (95% CI −0.28 to 0.06, p = 0.20), with significant heterogeneity across studies (p < 0.00001, I² = 73%, Fig. 2b).
Comparison of N2 LNN and LNS: Eleven articles were identified for N2 LNN comparison. They involved 726 patients in the VATS group and 1132 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.08, I² = 41%). Fewer N2 LNs were dissected in the VATS group as compared with the open group (95% CI −1.38 to −0.49, p < 0.0001, Fig. 3a). Fig. 3 Forest plot of the mean difference in N2 LNN (a) and LNS (b) in the VATS group vs. the open group. Five articles were identified for N2 LNS comparison. They involved 473 patients in the VATS group and 473 patients in the open group. The mean difference in N2 LNS between the two groups was not significant (95% CI −0.46 to 0.23, p = 0.50), with significant heterogeneity across studies (p = 0.002, I² = 76%, Fig. 3b).
Comparison of N1 LNN and LNS: Four articles were identified for N1 LNN comparison. They involved 288 patients in the VATS group and 686 patients in the open group. The heterogeneity between these studies was acceptable (p = 0.44, I² = 60%). The mean difference in N1 LNN between the two groups was not significant (95% CI −0.71 to 0.08, p = 0.11, Fig. 4). Fig. 4 Forest plot of the mean difference in N1 LNN in the VATS group vs. the open group. Only one article was identified for N1 LNS comparison. It involved 141 patients in the VATS group and 115 patients in the open group. The result showed that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04).
Publication bias: The funnel plot for publication bias (standard error by total LNN comparison) demonstrated marked evidence of symmetry (Fig. 5), indicating no publication bias. The combined effect size yielded a Z value of 5.64, with a corresponding p < 0.00001. This result indicates that the fail-safe N value was relevant. Fig. 5 Funnel plot of the mean difference in total LNN in the VATS group vs. the open group.
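A rough sketch of the publication-bias checks mentioned above is given below. It is illustrative only: the per-study mean differences and standard errors are hypothetical placeholders, the rank-correlation step is a simplified stand-in for the Begg-Mazumdar test (which standardizes the deviates before ranking), and the fail-safe N follows Rosenthal's formulation assuming a consistent direction of effect.

```python
# Illustrative publication-bias checks: funnel plot, a simplified rank
# correlation between effects and their variances, and Rosenthal's fail-safe N.
# All study-level numbers are hypothetical placeholders, not extracted data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import kendalltau

md = np.array([-1.8, -0.9, -1.2, -0.4, -1.5])   # hypothetical per-study mean differences
se = np.array([0.60, 0.45, 0.80, 0.35, 0.55])   # hypothetical per-study standard errors

# Funnel plot: a roughly symmetric scatter around the pooled estimate
# suggests little small-study / publication bias.
pooled = np.sum(md / se**2) / np.sum(1 / se**2)
plt.scatter(md, se)
plt.axvline(pooled, linestyle="--")
plt.gca().invert_yaxis()                         # largest (most precise) studies at the top
plt.xlabel("Mean difference")
plt.ylabel("Standard error")
plt.savefig("funnel.png")

# Simplified Begg-type check: rank correlation between effects and variances.
tau, p_tau = kendalltau(md, se**2)

# Rosenthal's fail-safe N from per-study z scores (one-tailed alpha = 0.05,
# i.e. critical z^2 = 2.706), assuming effects point in the same direction.
z = np.abs(md / se)
n_fs = (np.sum(z) ** 2) / 2.706 - len(md)

print(f"Kendall tau={tau:.2f} (p={p_tau:.2f}), fail-safe N~{n_fs:.0f}")
```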
Discussion: The presence or absence of mediastinal LN metastases is a critical component to accurate staging and therefore a key component of the surgical management of NSCLC [10]. Both overall and disease-free survival have been associated with the number of LNs dissected [11, 12]. Lardinois compared LNs dissection versus sampling; the results showed a longer disease-free survival and better local tumor control in dissection groups [13]. Focusing on stage IA lung cancer, Xu reported a similar result and suggested that the number of N2 stations served as a more significant prognostic factor [14]. As a result, current guidelines from the National Comprehensive Cancer Network and European Society of Thoracic Surgeons recommend that all patients with resectable NSCLC should have a complete systematic nodal dissection, with at least three N2 stations dissected [15, 16].
Although VATS is associated with many benefits in comparison to thoracotomy, whether VATS can achieve the same LN dissection efficacy in patients with lung cancer remains controversial [7, 17, 18]. Flores reported that the surgical field in VATS facilitated the dissection of LNs adjacent to the blood vessels and the trachea and found that smaller nodes achieved better LN dissection [19]. Ramos showed that VATS dissected more total LNS (5.1 ± 1.1 vs. 4.5 ± 1.2, p < 0.001) and mediastinal LNS (3.4 ± 0.9 vs. 3.2 ± 0.9, p = 0.022) when compared with thoracotomy [20]. By contrast, Denlinger reported that significantly more overall LNs were dissected in the open group than in the VATS group (8.9 ± 5.2 vs. 7.1 ± 5.2, p = 0.0006) [21]. Similar results were reported by Lee and colleagues in their retrospective study of LN evaluation achieved by VATS lobectomy compared with that by open lobectomy [22]. A secondary analysis including 752 original participants of the American College of Surgeons Oncology Group Z0030 trial compared patients who underwent lobectomy by VATS with patients who underwent thoracotomy. The results showed that there was no significant difference in the overall number of LNs retrieved between the two groups (15 vs. 19, p = 0.147) [23]. The present study included a total of 7568 patients from nine countries, providing the most comprehensive evidence for LN dissection efficacy by VATS to date. The meta-analysis showed that fewer total LNN and similar total LNS were dissected in the VATS group as compared with that in the open group. If we only included studies focusing on clinical stage I lung cancer, the results were similar (total LNN, 95% CI −1.67 to −0.61, p < 0.0001; total LNS, 95% CI −0.62 to 0.02, p = 0.07). It suggests that surgeons may not have the ability to perform systematic lymphadenectomy in VATS or ignore the importance of systematic lymphadenectomy for various reasons (earlier tumor stage, worrying about damage to vital organs, and so on). In the comparison of N1 and N2 LN dissection, our results showed that similar number of N1 LNN and N2 LNS could be harvested by VATS, while fewer N2 LNN were harvested by VATS as compared with thoracotomy. Only one article reported on N1 LNS comparison between the two groups and showed better efficiency in the open group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04) [22]. It was controversial that removing more N2 LNs could increase the accuracy of clinical staging of NSCLC. Boffa et al. compared the completeness of surgical LN evaluation during anatomic resection of primary lung cancer by open and VATS approaches in 11,531 patients from The Society of Thoracic Surgeons-General Thoracic Database. The results showed nodal upstaging in 14.3% (1024 patients) of the open group and in 11.6% (508 patients) of the VATS group (p < 0.001). The study suggested that surgeons should be encouraged to apply a systematic approach to hilar and peribronchial LN dissection during VATS lobectomy for lung cancer, particularly as they were adopting this approach [24]. Some surgeons might also worry about the complications caused by the systematic LN dissection. As with open thoracotomy, systematic mediastinal LN dissection under VATS may increase the risk of intraoperative bleeding (bronchial arteries, etc.), tracheobronchial injury, recurrent nerve injury, prolonged air leak, atrial fibrillation, and pulmonary edema [25]. In other papers, Watanabe et al. reported similar mortality and morbidity of mediastinal LN dissection by VATS vs. 
open lobectomy, indicating that systematic mediastinal LN dissection by VATS is a safe procedure [26]. Zhang et al. compared complications such as chylothorax and nerve injury between VATS and open thoracotomy in a meta-analysis. The results showed that these events were similar in both groups [27]. The possible limitations of our study must be considered when interpreting the findings described herein. First, including only English papers might have resulted in language bias. Second, including 7568 participants from 36 studies with only three RCTs might have weakened the quality of the results. Third, the number of dissected LNs varied significantly between the included studies. Different doctors have different understanding of LN dissection and might be at different stages of the learning curve. Some data did not meet the National Comprehensive Cancer Network guide requirements for lung cancer surgery treatment of systematic LN dissection, which may have affected the reliability of the results. Fourth, there is great potential for LN fragmentation during dissection. Different pathologists and different counting procedures might lead to false LNN counts, which might increase the heterogeneity between studies but would not alter the overall results. Finally, we did not analyze the survival difference between VATS and open thoracotomy. Our analysis compared LN harvest capability between two evaluation procedures only from a surgical point of view and tried to give further proof of satisfied oncologic efficacy by VATS. Conclusions: Less total and mediastinal LNs were evaluated with VATS than with thoracotomy in the present study. Both approaches harvested a similar number of total LN stations, mediastinal LN stations, and N1 LNs. However, owing to the possible existing bias in the original studies, inter-study heterogeneity, and the inherent limitations of our meta-analysis, the findings require validation in high-quality, large-scale RCTs.
Background: The aim of this study was to investigate which surgical method is better in lymph node (LN) dissection of lung cancer. Methods: A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus, and Google Scholar was performed to identify studies comparing thoracoscopic lobectomy (video-assisted thoracic surgery (VATS) group) and thoracotomy (open group) in LN dissection. Results: Twenty-nine articles met the inclusion criteria and involved 2763 patients in the VATS group and 3484 patients in the open group. The meta-analysis showed that fewer total LNs (95% confidence interval [CI] -1.52 to -0.73, p < 0.0001) and N2 LNs (95% CI -1.25 to -0.10, p = 0.02) were dissected in the VATS group. A similar number of total LN stations, N2 LN stations, and N1 LNs were harvested in both groups. Only one study reported that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04). Conclusions: Open lobectomy could achieve better LN dissection efficacy than thoracoscopic lobectomy in the treatment of lung cancer, especially in the N2 LNs dissection. These findings require validation by high-quality, large-scale randomized controlled trials.
Background: Lung cancer is the leading cause of cancer deaths in many countries [1, 2]. Surgical treatment is the preferred treatment for early-stage non-small cell lung cancer (NSCLC). D’Cunha’s research showed that N1 and N2 lymph nodes (LNs) were positive in 27.5% of patients with lung cancer under lobectomy [3]. However, non-invasive examinations, such as computed tomography (CT) and positron emission tomography-computed tomography (PET-CT), are not sensitive and specific for the clinical staging of lung cancer. Video-assisted thoracic surgery (VATS) is the preferred surgical procedure, with fewer incidences of postoperative complications and a higher survival rate compared with thoracotomy [4–7]. However, whether VATS can achieve the same LN dissection efficacy is controversial, and there remains a lack of high-quality, large-scale clinical research. To determine whether VATS can achieve the same LN dissection efficacy as thoracotomy in lung cancer, we performed a systemic review and meta-analysis. Conclusions: Less total and mediastinal LNs were evaluated with VATS than with thoracotomy in the present study. Both approaches harvested a similar number of total LN stations, mediastinal LN stations, and N1 LNs. However, owing to the possible existing bias in the original studies, inter-study heterogeneity, and the inherent limitations of our meta-analysis, the findings require validation in high-quality, large-scale RCTs.
Background: The aim of this study was to investigate which surgical method is better in lymph node (LN) dissection of lung cancer. Methods: A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus, and Google Scholar was performed to identify studies comparing thoracoscopic lobectomy (video-assisted thoracic surgery (VATS) group) and thoracotomy (open group) in LN dissection. Results: Twenty-nine articles met the inclusion criteria and involved 2763 patients in the VATS group and 3484 patients in the open group. The meta-analysis showed that fewer total LNs (95% confidence interval [CI] -1.52 to -0.73, p < 0.0001) and N2 LNs (95% CI -1.25 to -0.10, p = 0.02) were dissected in the VATS group. A similar number of total LN stations, N2 LN stations, and N1 LNs were harvested in both groups. Only one study reported that fewer N1 LN stations were dissected in the VATS group (1.4 ± 0.5 vs. 1.6 ± 0.6, p = 0.04). Conclusions: Open lobectomy could achieve better LN dissection efficacy than thoracoscopic lobectomy in the treatment of lung cancer, especially in the N2 LNs dissection. These findings require validation by high-quality, large-scale randomized controlled trials.
6,718
266
[ 90, 88, 56, 176, 128, 467, 218, 215, 177, 101 ]
15
[ "group", "vats", "lnn", "lns", "open", "01", "patients", "studies", "vats group", "open group" ]
[ "dissection thoracoscopic lobectomy", "vats lobectomy lung", "dissection efficacy thoracotomy", "staging lung cancer", "efficacy thoracotomy lung" ]
null
[CONTENT] Video-assisted thoracic surgery | Thoracotomy | Meta-analysis | Lung cancer | Lymph node dissection [SUMMARY]
null
[CONTENT] Video-assisted thoracic surgery | Thoracotomy | Meta-analysis | Lung cancer | Lymph node dissection [SUMMARY]
[CONTENT] Video-assisted thoracic surgery | Thoracotomy | Meta-analysis | Lung cancer | Lymph node dissection [SUMMARY]
[CONTENT] Video-assisted thoracic surgery | Thoracotomy | Meta-analysis | Lung cancer | Lymph node dissection [SUMMARY]
[CONTENT] Video-assisted thoracic surgery | Thoracotomy | Meta-analysis | Lung cancer | Lymph node dissection [SUMMARY]
[CONTENT] Carcinoma, Non-Small-Cell Lung | Humans | Lung Neoplasms | Lymph Node Excision | Lymphatic Metastasis | Mediastinum | Neoplasm Staging | Pneumonectomy | Thoracic Surgery, Video-Assisted | Thoracotomy [SUMMARY]
null
[CONTENT] Carcinoma, Non-Small-Cell Lung | Humans | Lung Neoplasms | Lymph Node Excision | Lymphatic Metastasis | Mediastinum | Neoplasm Staging | Pneumonectomy | Thoracic Surgery, Video-Assisted | Thoracotomy [SUMMARY]
[CONTENT] Carcinoma, Non-Small-Cell Lung | Humans | Lung Neoplasms | Lymph Node Excision | Lymphatic Metastasis | Mediastinum | Neoplasm Staging | Pneumonectomy | Thoracic Surgery, Video-Assisted | Thoracotomy [SUMMARY]
[CONTENT] Carcinoma, Non-Small-Cell Lung | Humans | Lung Neoplasms | Lymph Node Excision | Lymphatic Metastasis | Mediastinum | Neoplasm Staging | Pneumonectomy | Thoracic Surgery, Video-Assisted | Thoracotomy [SUMMARY]
[CONTENT] Carcinoma, Non-Small-Cell Lung | Humans | Lung Neoplasms | Lymph Node Excision | Lymphatic Metastasis | Mediastinum | Neoplasm Staging | Pneumonectomy | Thoracic Surgery, Video-Assisted | Thoracotomy [SUMMARY]
[CONTENT] dissection thoracoscopic lobectomy | vats lobectomy lung | dissection efficacy thoracotomy | staging lung cancer | efficacy thoracotomy lung [SUMMARY]
null
[CONTENT] dissection thoracoscopic lobectomy | vats lobectomy lung | dissection efficacy thoracotomy | staging lung cancer | efficacy thoracotomy lung [SUMMARY]
[CONTENT] dissection thoracoscopic lobectomy | vats lobectomy lung | dissection efficacy thoracotomy | staging lung cancer | efficacy thoracotomy lung [SUMMARY]
[CONTENT] dissection thoracoscopic lobectomy | vats lobectomy lung | dissection efficacy thoracotomy | staging lung cancer | efficacy thoracotomy lung [SUMMARY]
[CONTENT] dissection thoracoscopic lobectomy | vats lobectomy lung | dissection efficacy thoracotomy | staging lung cancer | efficacy thoracotomy lung [SUMMARY]
[CONTENT] group | vats | lnn | lns | open | 01 | patients | studies | vats group | open group [SUMMARY]
null
[CONTENT] group | vats | lnn | lns | open | 01 | patients | studies | vats group | open group [SUMMARY]
[CONTENT] group | vats | lnn | lns | open | 01 | patients | studies | vats group | open group [SUMMARY]
[CONTENT] group | vats | lnn | lns | open | 01 | patients | studies | vats group | open group [SUMMARY]
[CONTENT] group | vats | lnn | lns | open | 01 | patients | studies | vats group | open group [SUMMARY]
[CONTENT] cancer | lung | lung cancer | tomography | preferred | computed | ct | computed tomography | research | efficacy [SUMMARY]
null
[CONTENT] group | 01 | lnn | 2011 | vats group | open group | total | open | lns | fig [SUMMARY]
[CONTENT] mediastinal | ln stations | stations | total | study | ln | total ln stations mediastinal | n1 lns owing | n1 lns owing possible | approaches harvested similar number [SUMMARY]
[CONTENT] group | vats | vats group | open | open group | lnn | points | total | lns | 01 [SUMMARY]
[CONTENT] group | vats | vats group | open | open group | lnn | points | total | lns | 01 [SUMMARY]
[CONTENT] LN [SUMMARY]
null
[CONTENT] Twenty-nine | 2763 | 3484 ||| 95% | CI] -1.52 | 0.0001 | 95% | CI | -0.10 | 0.02 | VATS ||| LN | N2 LN | N1 ||| Only one | N1 LN | 1.4 ± 0.5 | 1.6 ±  | 0.6 | 0.04 [SUMMARY]
[CONTENT] LN ||| [SUMMARY]
[CONTENT] LN ||| PubMed | Ovid MEDLINE | EMBASE | ScienceDirect | the Cochrane Library | Google Scholar | VATS | LN ||| Twenty-nine | 2763 | 3484 ||| 95% | CI] -1.52 | 0.0001 | 95% | CI | -0.10 | 0.02 | VATS ||| LN | N2 LN | N1 ||| Only one | N1 LN | 1.4 ± 0.5 | 1.6 ±  | 0.6 | 0.04 ||| LN ||| [SUMMARY]
[CONTENT] LN ||| PubMed | Ovid MEDLINE | EMBASE | ScienceDirect | the Cochrane Library | Google Scholar | VATS | LN ||| Twenty-nine | 2763 | 3484 ||| 95% | CI] -1.52 | 0.0001 | 95% | CI | -0.10 | 0.02 | VATS ||| LN | N2 LN | N1 ||| Only one | N1 LN | 1.4 ± 0.5 | 1.6 ±  | 0.6 | 0.04 ||| LN ||| [SUMMARY]
Expression of PH Domain Leucine-rich Repeat Protein Phosphatase, Forkhead Homeobox Type O 3a and RAD51, and their Relationships with Clinicopathologic Features and Prognosis in Ovarian Serous Adenocarcinoma.
28139510
Ovarian serous adenocarcinoma can be divided into low- and high-grade tumors, which exhibit substantial differences in pathogenesis, clinicopathology, and prognosis. This study aimed to investigate the differences in the PH domain leucine-rich repeat protein phosphatase (PHLPP), forkhead homeobox type O 3a (FoxO3a), and RAD51 protein expressions, and their associations with prognosis in patients with low- and high-grade ovarian serous adenocarcinomas.
BACKGROUND
The PHLPP, FoxO3a, and RAD51 protein expressions were examined in 94 high- and 26 low-grade ovarian serous adenocarcinomas by immunohistochemistry. The differences in expression and their relationships with pathological features and prognosis were analyzed.
METHODS
In high-grade serous adenocarcinomas, the positive rates of PHLPP and FoxO3a were 24.5% and 26.6%, while in low-grade tumors, they were 23.1% and 26.9%, respectively (P < 0.05 vs. the control specimens; low- vs. high-grade: P > 0.05). The positive rates of RAD51 were 70.2% and 65.4% in high- and low-grade serous adenocarcinomas, respectively (P < 0.05 vs. the control specimens; low- vs. high-grade: P > 0.05). Meanwhile, in high-grade tumors, Stage III/IV disease and lymph node and omental metastases were significantly associated with lower PHLPP and FoxO3a and higher RAD51 expression. The 5-year survival rates of patients with PHLPP- and FoxO3a-positive high-grade tumors (43.5% and 36.0%) were significantly higher than those of patients with PHLPP- and FoxO3a-negative tumors (5.6% and 7.2%, respectively; P < 0.05). Similarly, the 5-year survival rate of RAD51-positive patients (3.0%) was significantly lower than that of RAD51-negative patients (42.9%; P < 0.05). In low-grade tumors, PHLPP, FoxO3a, and RAD51 expressions were not significantly correlated with lymph node metastasis, omental metastasis, International Federation of Gynecology and Obstetrics stage, or prognosis.
RESULTS
Abnormal PHLPP, FoxO3a, and RAD51 protein expressions may be involved in the development of both high- and low-grade ovarian serous adenocarcinomas, suggesting common molecular pathways. Decreased PHLPP and FoxO3a and increased RAD51 protein expression may be important molecular markers of poor prognosis in high-grade, but not low-grade, ovarian serous adenocarcinomas, and RAD51 may be an independent prognostic factor.
CONCLUSIONS
[ "Adult", "Aged", "Biomarkers, Tumor", "Cystadenocarcinoma, Serous", "Female", "Forkhead Box Protein O3", "Humans", "Immunohistochemistry", "Lymphatic Metastasis", "Middle Aged", "Neoplasm Staging", "Nuclear Proteins", "Ovarian Neoplasms", "Phosphoprotein Phosphatases", "Prognosis", "Rad51 Recombinase" ]
5308009
Introduction
Ovarian serous adenocarcinoma is the most common ovarian epithelial malignant tumor. It can be divided into low- and high-grade tumors, which exhibit substantial differences in pathogenesis, clinicopathological characteristics, and prognosis.[1] Low-grade serous carcinomas undergo a progressive, gradual developmental process from benign serous cystadenoma to borderline serous cystadenoma to malignant serous adenocarcinoma, whereas high-grade serous adenocarcinomas develop directly from the oviduct epithelium or ovarian inclusion cysts.

However, it remains unclear whether the two tumor types have completely different molecular mechanisms, whether there is a common molecular basis, and whether the genes involved in the development of these tumors affect the patients' prognosis. Recent studies[2,3] have indicated that the PI3K/Akt signaling pathway is involved in the initiation and development of ovarian serous adenocarcinoma. For example, PH domain leucine-rich repeat protein phosphatase (PHLPP) negatively regulates Akt and its downstream kinases by dephosphorylating the hydrophobic motif of Akt (Akt1 Ser473), thereby antagonizing PI3K/Akt signaling and inhibiting tumor growth. Forkhead homeobox type O 3a (FoxO3a), an important downstream signaling molecule of Akt, is involved in the initiation and development of tumor cells and in the regulation of cell proliferation after being phosphorylated by Akt.[4,5] Further, RAD51, a DNA double-strand break repair gene,[6] has also been implicated in the PI3K/Akt signaling pathway: phosphorylated FoxO3a binds to the promoter of the downstream gene RAD51, thereby regulating RAD51 and promoting tumor development and metastasis.

The present study aimed to (1) determine the differences in the protein expressions of PHLPP, FoxO3a, and RAD51 between high- and low-grade ovarian serous adenocarcinomas, (2) explore the roles of these three genes in the initiation of the different grades of ovarian serous adenocarcinoma, and (3) analyze the relationships between their protein expressions and prognosis.
Methods
Ethics

The study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000.

Participant selection and description

Patients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy or radiotherapy preoperatively, and all diagnoses were confirmed by postoperative pathological examination. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four high-grade and 26 low-grade ovarian serous adenocarcinomas were included in this study. In the high-grade group (mean age, 55 years; range, 36–78 years), 26 and 68 patients had International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively; 30 and 64 patients were with and without lymph node metastasis (LNM), and 41 and 53 patients were with and without omental metastasis, respectively. In the low-grade group (mean age, 44 years; range, 26–70 years), 18 and 8 patients had FIGO Stage I–II and III–IV disease, respectively; 4 and 22 patients were with and without LNM, and 5 and 21 patients were with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with 30 normal ovaries and 30 oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which served as controls for the low- and high-grade tumor groups, respectively.

Expression of TP53 protein in high- and low-grade serous carcinoma: among the 94 high-grade cases, 92 (97.9%) showed strong TP53 staining, whereas none of the 26 low-grade serous adenocarcinomas showed strong staining.

Technical information

Experimental procedures: The operative specimens were examined, and the patients were diagnosed by experienced pathologists. Paraffin blocks of the specimens were cut, and immunohistochemical staining of the sections was conducted as follows: the samples were sliced, deparaffinized, dehydrated, and subjected to antigen retrieval before blocking with goat serum. The slides were then placed in a humidified chamber and incubated at room temperature for 45 min. The tissues were incubated with primary antibodies (working dilutions of anti-PHLPP, RAD51, and FoxO3a of 1:100, 1:100, and 1:150, respectively; Biogot Biotechnology Co., Ltd., Nanjing, China) in the sealed humidified chamber at 4°C overnight. After washing of the slides, goat anti-rabbit secondary antibodies were added and the slides were incubated at 37°C for 30 min. Next, horseradish peroxidase-labeled streptavidin was added and the slides were incubated at 37°C for 30 min, after which the tissues were stained with 3,3'-diaminobenzidine, followed by counterstaining, destaining, dehydration, clearing, and mounting. Finally, the slides were air-dried and examined under a microscope.

Evaluation standard for staining: The slides were scored by pathologists under a microscope according to the criteria below for PHLPP, FoxO3a, and RAD51. Ten high-magnification fields were counted, and the average cell numbers were used.

PH domain leucine-rich repeat protein phosphatase-positive criteria: Cells with brown-stained cytoplasm were considered positive. Staining intensity was scored as uncolored (0), pale yellow (1 point), brown (2 points), or tan (3 points). The percentage of positive cells was scored as 0, 1, 2, 3, or 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. Based on the combined score, staining was classified as negative (0–1 point) or positive (≥2 points).[7]

Forkhead homeobox type O 3a-positive criteria: Cells with brown-stained nuclei were considered positive. No staining was scored as 0, <10% positive cells as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8]

RAD51-positive criteria: Cells with brown-stained cytoplasm or nuclei were considered positive. Specimens with <10% and ≥10% positive cells were defined as negative and positive, respectively.[9]

Statistical analyses

SPSS 16.0 software (SPSS Inc., USA) was used for the statistical analysis. Categorical data were analyzed by the Chi-square test, and survival rates were calculated by Kaplan-Meier univariate analysis. Differences in survival rates between groups were determined by the log-rank test. Cox multivariate survival analysis was used to identify independent prognostic factors. The statistical significance level was set at P < 0.05.
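The semiquantitative scoring rules above lend themselves to a compact summary. The following Python sketch simply restates the published criteria for illustration; it is not part of the original study workflow, the function names are ours, and it assumes that the "combined" PHLPP score is the sum of the intensity and percentage scores, which the text does not state explicitly.

```python
# Illustrative restatement of the published IHC scoring criteria (not the
# authors' code). Assumption: the "combined" PHLPP score is the sum of the
# intensity score and the percentage score; the paper does not say so explicitly.

def phlpp_positive(intensity_score: int, percent_positive: float) -> bool:
    """PHLPP: intensity 0-3 points plus percentage 0-4 points; combined score >= 2 is positive."""
    if percent_positive < 5:
        percent_score = 0
    elif percent_positive <= 25:
        percent_score = 1
    elif percent_positive <= 50:
        percent_score = 2
    elif percent_positive <= 75:
        percent_score = 3
    else:
        percent_score = 4
    return intensity_score + percent_score >= 2

def foxo3a_positive(percent_positive: float) -> bool:
    """FoxO3a: no staining = 0, <10% = 1 point, 11-25% = 2 points, >26% = 3 points; total > 2 is positive."""
    if percent_positive == 0:
        score = 0
    elif percent_positive < 10:
        score = 1
    elif percent_positive <= 25:
        score = 2
    else:
        score = 3
    return score > 2

def rad51_positive(percent_positive: float) -> bool:
    """RAD51: >= 10% of cells with cytoplasmic or nuclear staining is positive."""
    return percent_positive >= 10

# Example: moderate (2-point) PHLPP staining in 30% of cells, FoxO3a staining
# in 8% of nuclei, and RAD51 staining in 40% of cells.
print(phlpp_positive(2, 30), foxo3a_positive(8), rad51_positive(40))  # True False True
```

With these conventions, a specimen with moderate PHLPP staining in 30% of cells would be scored positive, whereas FoxO3a staining confined to 8% of nuclei would remain negative.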
Results
Expression of PH domain leucine-rich repeat protein phosphatase in ovarian serous adenocarcinomas and its association with clinicopathological features and prognosis

Among the 94 cases of high-grade serous adenocarcinoma, 23 (24.5%) were PHLPP-positive, which was significantly lower than the positive rate of 86.7% in the normal oviduct group (P < 0.05). Of the 26 low-grade serous adenocarcinoma cases, 6 (23.1%) were positive, which was significantly lower than that in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, there was no difference in the positive expression rates between high- and low-grade tumors [P > 0.05; Figure 1 and Table 1].

Figure 1. Protein expression of PH domain leucine-rich repeat protein phosphatase in high- and low-grade serous ovarian adenocarcinoma by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary.

Table 1. Protein expression of PHLPP in high- and low-grade ovarian serous adenocarcinomas. *P < 0.05 compared with normal oviducts; †P < 0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. PHLPP: PH domain leucine-rich repeat protein phosphatase.

In high-grade tumors, LNM, FIGO Stage III/IV disease, and omental metastasis were significantly associated with lower PHLPP protein expression levels [P < 0.05; Table 2]. In contrast, there were no significant correlations between the clinicopathological features and PHLPP protein expression in low-grade tumors (P > 0.05).

Table 2. Relationships between PHLPP, FoxO3a, and RAD51 protein expressions and clinicopathological features of high-grade serous ovarian adenocarcinoma. LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a.

Among the 94 patients with high-grade serous adenocarcinoma, the 5-year survival rates of PHLPP-negative and PHLPP-positive patients were 5.6% and 43.5%, respectively [P < 0.05; Figure 2]. Among the 26 cases of low-grade serous adenocarcinoma, however, PHLPP expression was not significantly associated with the 5-year survival rate (P > 0.05).

Figure 2. Five-year survival curves of PH domain leucine-rich repeat protein phosphatase-positive and negative patients with high-grade ovarian serous adenocarcinomas.

Expression of forkhead homeobox type O 3a in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis

Among the 94 cases of high-grade serous adenocarcinoma, 25 (26.6%) were positive for FoxO3a protein expression, which was significantly lower than the positive rate of 90.0% in the normal oviduct group (P < 0.05). Among the 26 cases of low-grade serous carcinoma, 7 (26.9%) were positive, which was significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, the difference in the positive expression rates between the high- and low-grade groups was not significant [P > 0.05; Figure 3 and Table 3].

Figure 3. Protein expression of forkhead homeobox type O 3a in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary.

Table 3. Protein expression of FoxO3a in high- and low-grade serous adenocarcinomas. *P < 0.05 compared with normal oviducts; †P < 0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. FoxO3a: Forkhead homeobox type O 3a.

In high-grade tumors, LNM, FIGO Stage III/IV disease, and omental metastasis were significantly associated with lower FoxO3a protein expression [P < 0.05; Table 2]. In low-grade tumors, FoxO3a protein expression showed no association with clinicopathological features (P > 0.05). Among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of FoxO3a-negative and FoxO3a-positive patients were 7.2% and 36.0%, respectively [P < 0.05; Figure 4]. Among the 26 cases of low-grade serous adenocarcinoma, FoxO3a expression showed no significant association with the 5-year survival rate (P > 0.05).

Figure 4. Five-year survival curves of forkhead homeobox type O 3a-positive and negative patients with high-grade ovarian serous adenocarcinomas.

Expression of RAD51 in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis

Among the 94 cases of high-grade serous adenocarcinoma, 66 (70.2%) were positive for RAD51 protein, which was significantly higher than in the normal oviduct group [P < 0.05; Figure 5 and Table 4]. Among the 26 cases of low-grade serous carcinoma, 17 (65.4%) were positive, which was significantly higher than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups [P < 0.05; Figure 5 and Table 4]. However, the difference in the positive expression rates between the high- and low-grade groups was not significant (P > 0.05).

Figure 5. Protein expression of RAD51 in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Positive high-grade adenocarcinoma, (b) negative normal oviduct, (c) positive low-grade adenocarcinoma, (d) negative borderline cystadenoma, (e) negative cystadenoma, (f) negative normal ovary.

Table 4. Protein expression of RAD51 in high- and low-grade serous adenocarcinomas. *P < 0.05 compared with normal oviducts; †P < 0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas.

In high-grade tumors, LNM, FIGO Stage III/IV disease, and omental metastasis were significantly associated with higher RAD51 protein expression levels [P < 0.05; Table 2]. Conversely, there was no significant correlation between clinicopathological features and RAD51 expression in low-grade tumors (P > 0.05). Finally, among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of RAD51-positive and RAD51-negative patients were 3.0% and 42.9%, respectively [P < 0.05; Figure 6]. Among the 26 cases of low-grade serous adenocarcinoma, RAD51 expression showed no significant association with the 5-year survival rate (P > 0.05).

Figure 6. Five-year survival curves of RAD51-positive and negative patients with high-grade ovarian serous adenocarcinomas.

Prognostic multivariate survival analyses of high-grade serous ovarian adenocarcinoma

The Cox multivariate analysis showed that FIGO staging, LNM, and RAD51 protein expression were independent prognostic factors in patients with high-grade serous ovarian adenocarcinoma [P < 0.05; Table 5].

Table 5. Multivariate analysis of independent factors associated with the 5-year overall survival rate in patients with high-grade serous ovarian adenocarcinoma. RR: Risk ratio; CI: Confidence interval; LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a; PHLPP: PH domain leucine-rich repeat protein phosphatase.
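For readers who want to reproduce the style of these analyses, the sketch below shows how the same kinds of tests could be run in Python. The authors used SPSS 16.0; this is only an illustrative analogue and assumes the pandas, scipy, and lifelines packages are available. The contingency table uses counts inferred from the text (23 of 94 PHLPP-positive high-grade tumors versus roughly 26 of 30 positive normal oviducts, derived from the reported 86.7%); the survival data frame is entirely hypothetical and is not the study data.

```python
# Illustrative analogue of the statistical tests reported above; the study
# itself used SPSS 16.0. Requires pandas, scipy, and lifelines.
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Chi-square test of PHLPP positivity: 23/94 high-grade tumors vs. ~26/30
# normal oviducts (26 is inferred from the reported 86.7% positive rate).
table = [[23, 94 - 23],
         [26, 30 - 26]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")

# Hypothetical follow-up data (months, death indicator, marker/stage status);
# these numbers are placeholders, not the study data.
df = pd.DataFrame({
    "months":      [12, 30, 60, 45, 24, 60, 18, 36, 50, 60],
    "death":       [1, 1, 0, 1, 0, 0, 1, 1, 1, 0],
    "rad51":       [1, 1, 1, 1, 0, 0, 1, 0, 0, 0],
    "figo_iii_iv": [1, 0, 1, 1, 0, 1, 1, 1, 0, 0],
    "lnm":         [1, 1, 0, 1, 0, 0, 1, 0, 0, 1],
})

# Kaplan-Meier estimate for the RAD51-positive group and a log-rank comparison.
pos, neg = df[df.rad51 == 1], df[df.rad51 == 0]
km = KaplanMeierFitter()
km.fit(pos["months"], event_observed=pos["death"], label="RAD51 positive")
print("median survival, RAD51 positive:", km.median_survival_time_)
print("log-rank p =", logrank_test(pos["months"], neg["months"],
                                   pos["death"], neg["death"]).p_value)

# Cox multivariate model with RAD51 status, FIGO stage, and LNM as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```

On real follow-up data, the Kaplan-Meier curves, log-rank P value, and Cox hazard ratios would correspond to Figures 2, 4, and 6 and Table 5; with a toy sample this small, the Cox fit mainly illustrates the call pattern and will yield wide confidence intervals.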
null
null
[ "Ethics", "Participant selection and description", "Technical information", "PH domain leucine-rich repeat protein phosphatase-positive criteria", "Forkhead homeobox type O 3a-positive criteria", "RAD51-positive criteria", "Statistical analyses", "Expression of PH domain leucine-rich repeat protein phosphatase in ovarian serous adenocarcinomas and its association with clinicopathological features and prognosis", "Expression of forkhead homeobox type O 3a in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis", "Expression of RAD51 in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis", "Prognostic multivariate survival analyses of high-grade serous ovarian adenocarcinoma", "Role of the tumor suppressor PH domain leucine-rich repeat protein phosphatase in the development of ovarian serous adenocarcinoma and its association with patient prognosis", "Role of the tumor suppressor forkhead homeobox type O 3a in the development of ovarian serous adenocarcinoma and its association with patient prognosis", "Role of RAD51 in the development of ovarian serous adenocarcinoma and its association with patient prognosis", "Financial support and sponsorship" ]
[ "The study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000.", "Patients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy and radiotherapy preoperatively, and their diagnosis was confirmed by pathological examinations postoperatively. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four and twenty-six cases of high- and low-grade ovarian serous adenocarcinomas were included in this study. Among the high-grade group (mean age, 55 years; range, 36–78 years), there were 26 and 68 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively. There were thirty and 64 patients with and without lymph node metastasis (LNM), and 41 and 53 patients with and without omental metastasis, respectively. Among the low-grade group (mean age, 44 years; range, 26–70 years), there were 18 and eight patients with FIGO Stage I–II and III–IV disease, respectively. There were 4 and 22 patients with and without LNM, and 5 and 21 patients with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with thirty normal ovaries and thirty oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which were used as controls for the low- and high-grade tumor groups, respectively.\nExpression of TP53 protein in high- and low-grade serous carcinoma: Among the 94 cases of high-grade, 92 cases (97.9%) showed strong staining of TP53, while among the 26 cases of low-grade serous adenocarcinoma, there was no case of strong staining.", "\nExperimental procedures\n\nThe operative specimens were examined, and the patients were diagnosed by experienced pathologists. Paraffin blocks of the specimens were cut, and immunohistochemical staining of the sections was conducted as follows: the samples were sliced, deparaffinized, dehydrated, and subjected to antigen retrieval before blocking with goat serum. Subsequently, the slides were placed in a humidifier chamber and incubated at room temperature for 45 min. The tissues were incubated with primary antibodies (working concentrations of anti-PHLPP, RAD51, and FoxO3a of 1:100, 1:100, and 1:150, respectively, Biogot Biotechnology Co., Ltd., Nanjing, China) in the sealed humidifier chamber at 4°C overnight. After washing of the slides, goat anti-rabbit secondary antibodies were added and the slides were incubated in the humidifier at 37°C for 30 min. Next, horseradish peroxidase-labeled streptavidin was added and the slides were incubated in the humidifier at 37°C for 30 min, after which the tissues were stained by 3,3’-diaminobenzidine, followed by counterstaining, destaining, dehydration, clearing, and mounting. Finally, the slides were air-dried and observed under a microscope.\n\nEvaluation standard for staining\n\nThe slides were scored by pathologists according to the below criteria for PHLPP, FoxO3a, and RAD51 under a microscope. 
Ten high-magnification fields were counted, and the average cell numbers were used.\n PH domain leucine-rich repeat protein phosphatase-positive criteria Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7]\nCells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7]\n Forkhead homeobox type O 3a-positive criteria Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8]\nCells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8]\n RAD51-positive criteria Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9]\nCells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9]", "Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7]", "Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8]", "Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9]", "SPSS 16.0 software (SPSS Inc., USA) was used for the statistical analysis. Categorical data were analyzed by the Chi-square test, and the survival rates were calculated by Kaplan-Meier univariate analysis. Differences in the survival rates between the different groups were determined by the log-rank test. Cox multivariate survival analysis was used for analysis of independent prognostic factors. The statistical significance level was set as P < 0.05.", "Among 94 cases of high-grade serous adenocarcinoma, 23 cases (24.5%) were PHLPP-positive, which was significantly lower than the positive rate of 86.7% in the normal oviduct group (P < 0.05). 
Of 26 low-grade serous adenocarcinoma cases, 6 (23.1%) were positive, which was significantly lower than that in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, there was no difference in the positive expression rates between high- and low-grade tumors [P > 0.05; Figure 1 and Table 1].\nProtein expression of PH domain leucine-rich repeat protein phosphatase in high- and low-grade serous ovarian adenocarcinoma by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary.\nProtein expression of PHLPP in high- and low-grade ovarian serous adenocarcinomas\n*P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. PHLPP: PH domain leucine-rich repeat protein phosphatase.\nIn high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower PHLPP protein expression levels [P < 0.05; Table 2]. On the other hand, there were no significant correlations between the clinicopathological features and PHLPP protein expression in low-grade tumors [P > 0.05].\nRelationships between PHLPP, FoxO3a, and RAD51 protein expressions and clinicopathological features of high-grade serous ovarian adenocarcinoma\nLNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a.\nAmong the 94 patients with high-grade serous adenocarcinoma, the 5-year survival rates of PHLPP-negative and positive patients were 5.6% and 43.5%, respectively [P < 0.05, Figure 2]. Among the 26 cases of low-grade serous adenocarcinoma, however, the expression of PHLPP did not significantly associate with the 5-year survival rate of the patients [P > 0.05].\nFive-year survival curves of PH domain leucine-rich repeat protein phosphatase-positive and negative patients with high-grade ovarian serous adenocarcinomas.", "Among the 94 cases of high-grade serous adenocarcinoma, 25 (26.6%) were positive for FoxO3a protein expression, which was significantly lower than the positive rate of 90.0% in the normal oviduct group (P < 0.05). Among the 26 cases of low-grade serous carcinoma, 7 (26.9%) were positive, which was significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, the difference in the positive expression rates between the high- and low-grade groups was not significant [P > 0.05; Figure 3 and Table 3].\nProtein expression of forkhead homeobox type O 3a in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary.\nProtein expression of FoxO3a in high- and low-grade serous adenocarcinomas\n*P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. FoxO3a: Forkhead homeobox type O 3a.\nIn high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower FoxO3a protein expression [P < 0.05, Table 2]. 
On the other hand, in low-grade tumors, the FoxO3a protein expression showed no association with clinicopathological features (P > 0.05).\nAmong the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of FoxO3a-negative and positive patients were 7.2% and 36.0%, respectively [P < 0.05, Figure 4]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of FoxO3a showed no significant association with the 5-year survival rate (P > 0.05).\nFive-year survival curves of forkhead homeobox type O 3a-positive and negative patients with high-grade ovarian serous adenocarcinomas.", "Among the 94 cases of high-grade serous adenocarcinoma, 66 (70.2%) were positive for RAD51 protein, which was significantly higher than that in the normal oviduct group [P < 0.05; Figure 5 and Table 4]. Among the 26 cases of low-grade serous carcinoma, 17 (65.4%) were positive, which was significantly higher than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups [P < 0.05; Figure 5 and Table 4]. However, the difference in the positive expression rates between the high- and low-grade groups was not significant (P > 0.05).\nProtein expression of RAD51 in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Positive high-grade adenocarcinoma, (b) negative normal oviduct, (c) positive low-grade adenocarcinoma, (d) negative borderline cystadenoma, (e) negative cystadenoma, (f) negative normal ovary.\nProtein expression of RAD51 in high- and low-grade serous adenocarcinomas\n*P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas.\nIn high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with higher RAD51 protein expression levels [P < 0.05; Table 2]. Conversely, there was no significant correlation between clinicopathological features and RAD51 expression in low-grade tumors (P > 0.05).\nFinally, among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of RAD51-positive and negative patients were 3.0% and 42.9%, respectively [P < 0.05; Figure 6]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of RAD51 showed no significant association with the 5-year survival rate (P > 0.05).\nFive-year survival curves of RAD51-positive and negative patients with high-grade ovarian serous adenocarcinomas.", "The Cox multivariate analysis results showed that FIGO staging, LNM, and RAD51 protein expression were independent prognostic factors of patients with high-grade serous ovarian adenocarcinoma [P < 0.05; Table 5].\nMultivariate analysis of independent factors associated with the 5-year overall survival rate in patients with high-grade serous ovarian adenocarcinoma\nRR: Risk ratio; CI: Confidence interval; LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a; PHLPP: PH domain leucine-rich repeat protein phosphatase.", "The tumor suppressor PHLPP, located at 18q21.33, can directly dephosphorylate ser473 in the hydrophobic group of Akt, thus negatively regulating Akt and its downstream targets. 
It has been reported that PHLPP can act synergistically with PTEN, another well-known tumor suppressor gene, thereby significantly inhibiting the proliferation of colorectal cancer cells and negatively regulating PI3K/Akt signaling.[11] Accordingly, studies have shown that PHLPP expression is significantly decreased in colon, breast, prostate, and lung cancers.[5,7,12]\nThe results of the present study showed that the positive rate of PHLPP protein expression in the high-grade group was significantly lower than that in the oviduct group. The PHLPP-positive expression rate decreased gradually from normal ovarian tissue to benign tumors to borderline tumors to low-grade adenocarcinomas, indicating that loss of PHLPP expression may be associated with the initiation of ovarian serous adenocarcinoma development, whereas no significant difference was observed between high-grade and low-grade tumors. Further, the PHLPP expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade tumors, but not in low-grade tumors. Similarly, the 5-year survival rates of PHLPP-positive and negative high-grade tumors differed significantly, whereas no difference was seen in the low-grade tumors. These findings suggest that PHLPP may be a predictor of prognosis of high-grade ovarian serous adenocarcinoma.", "FoxO3a is an important member of the FOXO family, located at 6q21, and encodes a 673-amino-acid protein.[13] FoxO3a, an important signaling molecule downstream of Akt in the PI3K-Akt signaling pathway, binds to the 14-3-3 chaperone protein after being phosphorylated by Akt and subsequently translocates from the nucleus to the cytoplasm, thereby activating the apoptotic genes Bim and FasL and the tumor necrosis factor-related apoptosis-inducing ligand TRAIL, consequently promoting apoptosis.[14] In addition to its role in apoptosis, FoxO3a plays critical roles in regulating cell proliferation, metabolism, the stress response, and the life span of cancer cells.[15] In particular, another target molecule of FoxO3a is the cell cycle-regulating gene P27; overexpression of FoxO3a can arrest the cell cycle in the G0/G1 phase by upregulating P27. In addition, overexpression of FoxO3a can significantly inhibit the proliferation of tumor cells and result in tumor cell G2 arrest, further promoting tumor cell apoptosis.[16,17]\nThe results of our analyses showed that the FoxO3a-positive rate in the high-grade tumors was significantly lower than that in the normal oviducts. Across the low-grade adenocarcinomas, borderline cystadenomas, cystadenomas, and normal ovaries, the positive expression rate of FoxO3a protein decreased gradually from normal ovaries to low-grade adenocarcinomas: the FoxO3a-positive rate of the low-grade tumors was significantly lower than those of the borderline cystadenomas, cystadenomas, and normal ovaries, while the positive rate of the borderline cystadenomas was significantly lower than those of the cystadenomas and normal ovaries. Moreover, the FoxO3a expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade tumors. These results suggest that FoxO3a may play a role in the oncogenesis and development of ovarian serous adenocarcinomas. 
In the high-grade tumors, the 5-year survival rate of FoxO3a-positive patients was significantly higher than that of FoxO3a-negative patients, whereas no difference in survival was seen in low-grade tumors, indicating that positive FoxO3a expression may have a long-term protective effect in these patients and that FoxO3a may be of value in predicting long-term prognosis. Interestingly, it has been reported that tumor cells with low FoxO3a expression can develop resistance to paclitaxel and that inhibiting FoxO3a expression with RNAi can reduce E1A-mediated sensitivity to paclitaxel.[18] Therefore, FoxO3a protein may represent a potential prognostic indicator for high-grade ovarian serous adenocarcinoma.", "RAD51 is a human homologous recombination repair gene. Either excessive or insufficient homologous recombination results in genome instability and DNA damage, and the consequent gene loss, loss of heterozygosity, and/or genetic translocation are important causes of tumorigenesis and tumor progression.[19] Studies have found that RAD51 protein is overexpressed in breast, prostate, pancreatic, and colon cancers. Such overexpression of RAD51 can confer a growth advantage on cancer cells by promoting the initiation and progression of cancer. However, to date, studies on the expression of RAD51 in ovarian cancers are few.\nThe results of this study showed that the RAD51-positive rate in high-grade tumors was significantly higher than that of normal oviducts, suggesting that RAD51 may play a role in the initiation of high-grade ovarian serous carcinomas. The RAD51-positive rate gradually increased from normal ovaries to cystadenomas to borderline cystadenomas to low-grade serous adenocarcinomas. In particular, the positive rate of low-grade tumors was significantly higher than that in benign tumors and normal oviducts. The gradual change in RAD51 expression suggests that the initiation of low-grade serous adenocarcinoma is a gradual progression from benign to borderline to low-grade adenocarcinoma. However, no significant difference in the expression of RAD51 was found between high- and low-grade tumors. The expression level of RAD51 showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade tumors.\nHerein, the 5-year survival rate of patients with RAD51-positive high-grade adenocarcinomas was significantly lower than that of patients with RAD51-negative tumors, while no difference in survival was found in low-grade tumors. In other words, high-grade tumor patients with high RAD51 expression had a lower survival rate and poorer prognosis, indicating that RAD51 may be a prognostic factor in high-grade ovarian serous adenocarcinoma. Furthermore, multivariate survival analyses showed that RAD51 was an independent prognostic factor, in addition to FIGO stage and LNM, which may be of clinical significance for the development of molecular targeted therapy. Further investigations of RAD51 are required to confirm these findings and, ideally, to improve patients' quality of life and prognosis through targeted therapy.\nIn conclusion, the results of the present study indicate that PHLPP, FoxO3a, and RAD51 may play certain roles in the pathogenesis of the different grades of ovarian serous adenocarcinomas. They also suggest the possible presence of common molecular mechanisms in the pathogenesis of the different grades. 
Increased protein expression of RAD51 may be an important molecular marker of poor prognosis in high-grade ovarian serous adenocarcinomas. However, given the limited number of cases, which may have affected the statistical significance of the results, further studies are required to confirm and extend these findings.", "This work was supported by grants from the Key Scientific Research Program in Medical Science of Hebei Province, China (No. 20150318), and the Science and Technology Support Program of Hebei Province, China (No. 16277794D)." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Ethics", "Participant selection and description", "Technical information", "PH domain leucine-rich repeat protein phosphatase-positive criteria", "Forkhead homeobox type O 3a-positive criteria", "RAD51-positive criteria", "Statistical analyses", "Results", "Expression of PH domain leucine-rich repeat protein phosphatase in ovarian serous adenocarcinomas and its association with clinicopathological features and prognosis", "Expression of forkhead homeobox type O 3a in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis", "Expression of RAD51 in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis", "Prognostic multivariate survival analyses of high-grade serous ovarian adenocarcinoma", "Discussion", "Role of the tumor suppressor PH domain leucine-rich repeat protein phosphatase in the development of ovarian serous adenocarcinoma and its association with patient prognosis", "Role of the tumor suppressor forkhead homeobox type O 3a in the development of ovarian serous adenocarcinoma and its association with patient prognosis", "Role of RAD51 in the development of ovarian serous adenocarcinoma and its association with patient prognosis", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Ovarian serous adenocarcinoma is the most common ovarian epithelial malignant tumor. It can be divided into low- and high-grade tumors, which exhibit substantial differences in the pathogenesis, clinicopathological characteristics, and prognosis.[1] Low-grade serous carcinomas undergo a progressive, gradual developmental process from benign serous cystadenoma to borderline serous cystadenoma to malignant serous adenocarcinoma, while high-grade serous adenocarcinomas develop directly from the oviduct epithelium or ovarian inclusion cysts.\nHowever, it remains unclear whether the two tumor types have completely different molecular mechanisms, whether there is a common molecular basis, and whether the genes involved in the development of these tumors affect the patients’ prognosis. Recent studies[23] have indicated that the PI3K/Akt signaling pathway is involved in the initiation and development of ovarian serous adenocarcinoma. For example, PH domain leucine-rich repeat protein phosphatase (PHLPP) is known to negatively regulate Akt and its downstream kinase by dephosphorylating the hydrophobic core of Akt (Akt1 Ser473), thereby antagonizing PI3K/Akt signaling and inhibiting tumor growth. Forkhead homeobox type O 3a (FoxO3a), an important downstream signaling molecule of Akt, is involved in the initiation and development of tumor cells and regulation of cell proliferation after being phosphorylated by Akt.[45] Further, RAD51, a DNA double-strand break repair gene,[6] has also been implicated in the PI3K/Akt signaling pathway: phosphorylated FoxO3a binds to the promoter of the downstream gene RAD51, thereby regulating RAD51 and promoting tumor development and metastasis.\nThe present study aimed to (1) determine the differences in the protein expressions of PHLPP, FoxO3a, and RAD51 between high- and low-grade ovarian serous adenocarcinomas, (2) explore the roles of these three genes in the initiation of the different grades of ovarian serous adenocarcinoma, and (3) analyze the relationships between their protein expressions and prognosis.", " Ethics The study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000.\nThe study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000.\n Participant selection and description Patients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy and radiotherapy preoperatively, and their diagnosis was confirmed by pathological examinations postoperatively. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four and twenty-six cases of high- and low-grade ovarian serous adenocarcinomas were included in this study. Among the high-grade group (mean age, 55 years; range, 36–78 years), there were 26 and 68 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively. There were thirty and 64 patients with and without lymph node metastasis (LNM), and 41 and 53 patients with and without omental metastasis, respectively. 
Among the low-grade group (mean age, 44 years; range, 26–70 years), there were 18 and eight patients with FIGO Stage I–II and III–IV disease, respectively. There were 4 and 22 patients with and without LNM, and 5 and 21 patients with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with thirty normal ovaries and thirty oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which were used as controls for the low- and high-grade tumor groups, respectively.\nExpression of TP53 protein in high- and low-grade serous carcinoma: Among the 94 cases of high-grade, 92 cases (97.9%) showed strong staining of TP53, while among the 26 cases of low-grade serous adenocarcinoma, there was no case of strong staining.\nPatients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy and radiotherapy preoperatively, and their diagnosis was confirmed by pathological examinations postoperatively. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four and twenty-six cases of high- and low-grade ovarian serous adenocarcinomas were included in this study. Among the high-grade group (mean age, 55 years; range, 36–78 years), there were 26 and 68 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively. There were thirty and 64 patients with and without lymph node metastasis (LNM), and 41 and 53 patients with and without omental metastasis, respectively. Among the low-grade group (mean age, 44 years; range, 26–70 years), there were 18 and eight patients with FIGO Stage I–II and III–IV disease, respectively. There were 4 and 22 patients with and without LNM, and 5 and 21 patients with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with thirty normal ovaries and thirty oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which were used as controls for the low- and high-grade tumor groups, respectively.\nExpression of TP53 protein in high- and low-grade serous carcinoma: Among the 94 cases of high-grade, 92 cases (97.9%) showed strong staining of TP53, while among the 26 cases of low-grade serous adenocarcinoma, there was no case of strong staining.\n Technical information \nExperimental procedures\n\nThe operative specimens were examined, and the patients were diagnosed by experienced pathologists. Paraffin blocks of the specimens were cut, and immunohistochemical staining of the sections was conducted as follows: the samples were sliced, deparaffinized, dehydrated, and subjected to antigen retrieval before blocking with goat serum. Subsequently, the slides were placed in a humidifier chamber and incubated at room temperature for 45 min. 
The tissues were incubated with primary antibodies (working dilutions of anti-PHLPP, anti-RAD51, and anti-FoxO3a of 1:100, 1:100, and 1:150, respectively; Biogot Biotechnology Co., Ltd., Nanjing, China) in the sealed humidified chamber at 4°C overnight. After the slides were washed, goat anti-rabbit secondary antibodies were added and the slides were incubated in the humidified chamber at 37°C for 30 min. Next, horseradish peroxidase-labeled streptavidin was added and the slides were incubated at 37°C for a further 30 min, after which the tissues were stained with 3,3'-diaminobenzidine, followed by counterstaining, destaining, dehydration, clearing, and mounting. Finally, the slides were air-dried and observed under a microscope.\n\nEvaluation standard for staining\n\nThe slides were scored by pathologists under a microscope according to the criteria below for PHLPP, FoxO3a, and RAD51. Ten high-magnification fields were counted, and the average cell numbers were used.\n PH domain leucine-rich repeat protein phosphatase-positive criteria Cells with brown-stained cytoplasm were considered positive. Staining intensity was scored as uncolored (0 points), pale yellow (1 point), brown (2 points), or tan (3 points). The percentage of positive cells was scored as 0, 1, 2, 3, or 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. Based on the sum of the two scores, staining was classified as negative (−; 0–1 points) or positive (≥2 points).[7]\n Forkhead homeobox type O 3a-positive criteria Cells with brown-stained nuclei were considered positive. No staining was scored as 0 points, <10% positive cells as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8]\n RAD51-positive criteria Cells with brown-stained cytoplasm or nuclei were considered positive. Cases with <10% and ≥10% positive cells were defined as negative and positive, respectively.[9]\n Statistical analyses SPSS 16.0 software (SPSS Inc., USA) was used for the statistical analyses. Categorical data were analyzed with the Chi-square test, and survival rates were calculated by Kaplan-Meier univariate analysis. Differences in survival rates between groups were assessed with the log-rank test, and Cox multivariate survival analysis was used to identify independent prognostic factors. The statistical significance level was set at P < 0.05.",
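The positivity rules above can be made concrete with a short sketch. The following Python functions are illustrative only: the function names, the input layout, and the handling of percentage boundaries not stated in the criteria are assumptions, not part of the original study, in which slides were scored manually by pathologists.

```python
# Illustrative encoding of the immunohistochemistry positivity rules described above.
# All names, inputs, and boundary handling are hypothetical; the study scored slides manually.

def phlpp_positive(intensity_score: int, percent_positive: float) -> bool:
    """PHLPP: intensity score (0-3) plus a percentage score (0-4); a total of >=2 is positive."""
    if percent_positive < 5:
        percent_score = 0
    elif percent_positive <= 25:
        percent_score = 1
    elif percent_positive <= 50:
        percent_score = 2
    elif percent_positive <= 75:
        percent_score = 3
    else:
        percent_score = 4
    return intensity_score + percent_score >= 2

def foxo3a_positive(percent_positive: float) -> bool:
    """FoxO3a: 0 points for no staining, 1 for <10%, 2 for 11-25%, 3 for >26%; >2 points is positive.
    Exact handling of the 10% and 26% boundaries is not stated in the paper (assumption here)."""
    if percent_positive == 0:
        score = 0
    elif percent_positive < 10:
        score = 1
    elif percent_positive <= 25:
        score = 2
    else:
        score = 3
    return score > 2

def rad51_positive(percent_positive: float) -> bool:
    """RAD51: >=10% cells with cytoplasmic or nuclear staining is positive."""
    return percent_positive >= 10

# Example: tan PHLPP staining (3 points) in 30% of cells gives a total of 5 points -> positive.
print(phlpp_positive(3, 30), foxo3a_positive(20), rad51_positive(8))
```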
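The survival analysis itself was performed in SPSS, but the same Kaplan-Meier, log-rank, and Cox steps can be sketched in Python with the lifelines package. The data frame, column names, and values below are toy assumptions used only to show the calls, not data from the study.

```python
# Sketch of the survival workflow described above using lifelines (the study used SPSS 16.0).
# The data frame, column names, and values are hypothetical toy data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":  [12, 34, 60, 25, 48, 60, 9, 41],   # follow-up time
    "died":    [1, 1, 0, 1, 1, 0, 1, 0],          # 1 = death observed
    "rad51":   [1, 1, 0, 1, 0, 0, 1, 1],          # 1 = RAD51-positive
    "figo_34": [1, 1, 0, 0, 1, 0, 1, 0],          # 1 = FIGO Stage III-IV
    "lnm":     [1, 0, 0, 1, 0, 1, 1, 0],          # 1 = lymph node metastasis
})

pos, neg = df[df.rad51 == 1], df[df.rad51 == 0]

# Kaplan-Meier estimates for RAD51-positive vs RAD51-negative cases, read off at 60 months
km_pos = KaplanMeierFitter().fit(pos["months"], event_observed=pos["died"], label="RAD51+")
km_neg = KaplanMeierFitter().fit(neg["months"], event_observed=neg["died"], label="RAD51-")
print(km_pos.survival_function_at_times(60), km_neg.survival_function_at_times(60))

# Log-rank test for the difference between the two survival curves
lr = logrank_test(pos["months"], neg["months"],
                  event_observed_A=pos["died"], event_observed_B=neg["died"])
print("log-rank P =", lr.p_value)

# Cox multivariate model for independent prognostic factors
# (a small penalizer keeps the fit stable on tiny illustrative data)
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios and P values
```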
" Expression of PH domain leucine-rich repeat protein phosphatase in ovarian serous adenocarcinomas and its association with clinicopathological features and prognosis Among the 94 cases of high-grade serous adenocarcinoma, 23 (24.5%) were PHLPP-positive, a rate significantly lower than the 86.7% observed in the normal oviduct group (P < 0.05). Of the 26 low-grade serous adenocarcinoma cases, 6 (23.1%) were positive, significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, there was no difference in the positive rates between high- and low-grade tumors [P > 0.05; Figure 1 and Table 1].\nProtein expression of PH domain leucine-rich repeat protein phosphatase in high- and low-grade serous ovarian adenocarcinoma by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary.\nProtein expression of PHLPP in high- and low-grade ovarian serous adenocarcinomas\n*P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. PHLPP: PH domain leucine-rich repeat protein phosphatase.\nIn high-grade tumors, LNM, FIGO Stage III/IV disease, and omental metastasis were significantly associated with lower PHLPP protein expression [P < 0.05; Table 2]. In contrast, there were no significant associations between clinicopathological features and PHLPP protein expression in low-grade tumors [P > 0.05].\nRelationships between PHLPP, FoxO3a, and RAD51 protein expressions and clinicopathological features of high-grade serous ovarian adenocarcinoma\nLNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a.\nAmong the 94 patients with high-grade serous adenocarcinoma, the 5-year survival rates of PHLPP-negative and PHLPP-positive patients were 5.6% and 43.5%, respectively [P < 0.05; Figure 2]. Among the 26 patients with low-grade serous adenocarcinoma, however, PHLPP expression was not significantly associated with the 5-year survival rate [P > 0.05].\nFive-year survival curves of PH domain leucine-rich repeat protein phosphatase-positive and negative patients with high-grade ovarian serous adenocarcinomas.\n Expression of forkhead homeobox type O 3a in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis Among the 94 cases of high-grade serous adenocarcinoma, 25 (26.6%) were positive for FoxO3a protein, a rate significantly lower than the 90.0% observed in the normal oviduct group (P < 0.05). Among the 26 cases of low-grade serous carcinoma, 7 (26.9%) were positive, significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, the difference in the positive rates between the high- and low-grade groups was not significant [P > 0.05; Figure 3 and Table 3].\nProtein expression of forkhead homeobox type O 3a in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary.\nProtein expression of FoxO3a in high- and low-grade serous adenocarcinomas\n*P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. FoxO3a: Forkhead homeobox type O 3a.\nIn high-grade tumors, LNM, FIGO Stage III/IV disease, and omental metastasis were significantly associated with lower FoxO3a protein expression [P < 0.05; Table 2], whereas in low-grade tumors FoxO3a protein expression showed no association with clinicopathological features (P > 0.05).\nAmong the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of FoxO3a-negative and FoxO3a-positive patients were 7.2% and 36.0%, respectively [P < 0.05; Figure 4].
Among the 26 cases of low-grade serous adenocarcinoma, FoxO3a expression showed no significant association with the 5-year survival rate (P > 0.05).\nFive-year survival curves of forkhead homeobox type O 3a-positive and negative patients with high-grade ovarian serous adenocarcinomas.\n Expression of RAD51 in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis Among the 94 cases of high-grade serous adenocarcinoma, 66 (70.2%) were positive for RAD51 protein, a rate significantly higher than in the normal oviduct group [P < 0.05; Figure 5 and Table 4]. Among the 26 cases of low-grade serous carcinoma, 17 (65.4%) were positive, significantly higher than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups [P < 0.05; Figure 5 and Table 4]. However, the difference in the positive rates between the high- and low-grade groups was not significant (P > 0.05).\nProtein expression of RAD51 in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200).
(a) Positive high-grade adenocarcinoma, (b) negative normal oviduct, (c) positive low-grade adenocarcinoma, (d) negative borderline cystadenoma, (e) negative cystadenoma, (f) negative normal ovary.\nProtein expression of RAD51 in high- and low-grade serous adenocarcinomas\n*P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas.\nIn high-grade tumors, LNM, FIGO Stage III/IV disease, and omental metastasis were significantly associated with higher RAD51 protein expression [P < 0.05; Table 2]. Conversely, there was no significant association between clinicopathological features and RAD51 expression in low-grade tumors (P > 0.05).\nFinally, among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of RAD51-positive and RAD51-negative patients were 3.0% and 42.9%, respectively [P < 0.05; Figure 6]. Among the 26 cases of low-grade serous adenocarcinoma, RAD51 expression showed no significant association with the 5-year survival rate (P > 0.05).\nFive-year survival curves of RAD51-positive and negative patients with high-grade ovarian serous adenocarcinomas.
\n Prognostic multivariate survival analyses of high-grade serous ovarian adenocarcinoma Cox multivariate analysis showed that FIGO stage, LNM, and RAD51 protein expression were independent prognostic factors in patients with high-grade serous ovarian adenocarcinoma [P < 0.05; Table 5].\nMultivariate analysis of independent factors associated with the 5-year overall survival rate in patients with high-grade serous ovarian adenocarcinoma\nRR: Risk ratio; CI: Confidence interval; LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a; PHLPP: PH domain leucine-rich repeat protein phosphatase.",
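The group comparisons reported above are standard 2 x 2 Chi-square tests. As one worked illustration, the comparison of PHLPP positivity in high-grade carcinomas (23 of 94) versus normal oviducts can be reproduced as follows; the oviduct count of 26 of 30 is inferred from the reported 86.7% positive rate, so treat it as an assumption rather than a published cell count.

```python
# Chi-square comparison of PHLPP-positive rates, as in the Results section.
# 23/94 high-grade carcinomas were positive; 86.7% of 30 normal oviducts implies 26/30 (inferred).
from scipy.stats import chi2_contingency

table = [[23, 94 - 23],   # high-grade: positive, negative
         [26, 30 - 26]]   # normal oviduct: positive, negative (inferred counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.4g}")
```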
"Ovarian cancer is one of the most common malignancies of the female reproductive system and has the highest mortality among gynecologic malignant tumors, seriously affecting women's health and lives. Recent studies have suggested that ovarian serous adenocarcinomas can be divided into high- and low-grade tumors and that the initiation mechanisms, clinicopathological features, and prognoses of the two types differ greatly.[1,10] Ovarian serous adenocarcinomas thus show complex biological behavior, and although the pathogeneses of the two types involve different molecular pathways, they may also share a common basis. Recent studies have found that the PI3K/Akt signaling pathway is closely associated with the initiation and progression of ovarian cancer. PHLPP, a known tumor suppressor, negatively regulates Akt and its downstream kinases, thereby antagonizing the PI3K/Akt signaling pathway and inhibiting tumor growth. Moreover, as an important downstream signaling molecule of Akt, FoxO3a is involved in tumor cell initiation and development and regulates cell proliferation after being phosphorylated by Akt. RAD51, a downstream target of FoxO3a, also plays an important role in PI3K/Akt signaling; FoxO3a binds to the promoter of RAD51, thereby regulating RAD51 and promoting tumor development and metastasis.
In this study, we examined the differences in the protein expression of PHLPP, FoxO3a, and RAD51 between high- and low-grade ovarian serous adenocarcinomas, explored the roles of the three genes in the development of the different grades, and analyzed the relationships between their protein expression and patient prognosis.\n Role of the tumor suppressor PH domain leucine-rich repeat protein phosphatase in the development of ovarian serous adenocarcinoma and its association with patient prognosis The tumor suppressor PHLPP, located at 18q21.33, can directly dephosphorylate Ser473 in the hydrophobic motif of Akt, thus negatively regulating Akt and its downstream targets. PHLPP has been reported to act synergistically with PTEN, another well-known tumor suppressor, significantly inhibiting the proliferation of colorectal cancer cells and negatively regulating PI3K/Akt signaling.[11] Accordingly, studies have shown that PHLPP expression is significantly decreased in colon, breast, prostate, and lung cancers.[5,7,12]\nThe present study showed that the PHLPP-positive rate in the high-grade group was significantly lower than that in the oviduct group. The PHLPP-positive rate also decreased gradually from normal ovarian tissue through benign and borderline tumors to low-grade adenocarcinomas, indicating that loss of PHLPP expression may be associated with the initiation of ovarian serous adenocarcinoma, whereas no significant difference was observed between high- and low-grade tumors. Further, PHLPP expression was significantly associated with LNM, FIGO stage, and omental metastasis in high-grade tumors, but not in low-grade tumors. Similarly, the 5-year survival rates of patients with PHLPP-positive and PHLPP-negative high-grade tumors differed significantly, whereas no difference was seen for low-grade tumors. These findings suggest that PHLPP may be a predictor of prognosis in high-grade ovarian serous adenocarcinoma.
\n Role of the tumor suppressor forkhead homeobox type O 3a in the development of ovarian serous adenocarcinoma and its association with patient prognosis FoxO3a, an important member of the FOXO family, is located at 6q21 and encodes a 673-amino-acid protein.[13] As a key signaling molecule downstream of Akt in the PI3K/Akt pathway, FoxO3a binds the 14-3-3 chaperone protein after being phosphorylated by Akt and translocates from the nucleus to the cytoplasm, which prevents it from activating its apoptotic target genes Bim, FasL, and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL); nuclear FoxO3a thus promotes apoptosis.[14] In addition to its role in apoptosis, FoxO3a plays critical roles in regulating cell proliferation, metabolism, the stress response, and the life span of cancer cells.[15] Another target of FoxO3a is the cell cycle-regulating gene P27; overexpression of FoxO3a can arrest the cell cycle in the G0/G1 phase by upregulating P27. Overexpression of FoxO3a can also significantly inhibit the proliferation of tumor cells, induce G2 arrest, and further promote tumor cell apoptosis.[16,17]\nOur analyses showed that the FoxO3a-positive rate in high-grade tumors was significantly lower than that in normal oviducts. Across low-grade adenocarcinomas, borderline cystadenomas, cystadenomas, and normal ovaries, the FoxO3a-positive rate decreased gradually from normal ovaries to low-grade adenocarcinomas: the rate in low-grade tumors was significantly lower than in borderline cystadenomas, cystadenomas, and normal ovaries, and the rate in borderline cystadenomas was significantly lower than in cystadenomas and normal ovaries. Moreover, FoxO3a expression was significantly associated with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade, tumors. These results suggest that FoxO3a may play a role in the oncogenesis and development of ovarian serous adenocarcinomas. In high-grade tumors, the 5-year survival rate of FoxO3a-positive patients was significantly higher than that of FoxO3a-negative patients, whereas no survival difference was seen in low-grade tumors, indicating that positive FoxO3a expression may have a long-term protective effect and hence some value in predicting patients' long-term prognosis.
Interestingly, it has been reported that tumor cells with low FoxO3a expression can develop resistance to paclitaxel and that inhibiting FoxO3a expression with RNAi can reduce E1A-mediated sensitivity to paclitaxel.[18] FoxO3a protein may therefore represent a potential prognostic indicator for high-grade ovarian serous adenocarcinoma.\n Role of RAD51 in the development of ovarian serous adenocarcinoma and its association with patient prognosis RAD51 is a human homologous recombination repair gene.
Homologous recombination that is either excessive or insufficient results in genome instability and DNA damage, and the consequent gene loss, loss of heterozygosity, and/or chromosomal translocation are important causes of tumorigenesis and tumor progression.[19] RAD51 protein has been found to be overexpressed in breast, prostate, pancreatic, and colon cancers, and such overexpression can confer a growth advantage on cancer cells, promoting cancer initiation and progression. To date, however, studies on RAD51 expression in ovarian cancer are few.\nIn this study, the RAD51-positive rate in high-grade tumors was significantly higher than that in normal oviducts, suggesting that RAD51 may play a role in the initiation of high-grade ovarian serous carcinoma. The RAD51-positive rate increased gradually from normal ovaries through cystadenomas and borderline cystadenomas to low-grade serous adenocarcinomas, and the rate in low-grade tumors was significantly higher than that in benign tumors and normal ovaries. This gradual change suggests that low-grade serous adenocarcinoma arises through a stepwise progression from benign to borderline to low-grade adenocarcinoma. However, no significant difference in RAD51 expression was found between high- and low-grade tumors. RAD51 expression was significantly associated with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade, tumors.\nThe 5-year survival rate of patients with RAD51-positive high-grade adenocarcinomas was significantly lower than that of patients with RAD51-negative tumors, whereas no survival difference was found for low-grade tumors. In other words, patients with high-grade tumors showing high RAD51 expression had a lower survival rate and a poorer prognosis, indicating that RAD51 may be a prognostic factor in high-grade ovarian serous adenocarcinoma. Furthermore, multivariate survival analysis showed that RAD51, in addition to FIGO stage and LNM, was an independent prognostic factor, which may be of clinical significance for the development of molecularly targeted therapy. Further investigations of RAD51 are required to confirm these findings and, it is hoped, to improve patients' quality of life and prognosis through targeted therapy.\nIn conclusion, the results of the present study indicate that PHLPP, FoxO3a, and RAD51 may play roles in the pathogenesis of the different grades of ovarian serous adenocarcinoma, and they also suggest the possible presence of common molecular mechanisms underlying the different grades. Increased RAD51 protein expression may be an important molecular marker of poor prognosis in high-grade ovarian serous adenocarcinoma. However, given the limited number of cases, which may have affected the significance of the results, further studies are required to confirm and extend these findings.
\n Financial support and sponsorship This work was supported by grants from the Key Scientific Research Program in Medical Science of Hebei Province, China (No. 20150318), and the Science and Technology Support Program of Hebei Province, China (No. 16277794D).\n Conflicts of interest There are no conflicts of interest."
Increased protein expression of RAD51 may be an important molecular marker for poor prognosis of high-grade ovarian serous adenocarcinomas. However, given the limited number of cases, which may have affected the statistical significance of the results, further studies are required to confirm and extend these findings.", "This work was supported by grants from the Key Scientific Research program in Medical Science of Hebei Province, China (No. 20150318) and Science and Technology support program of Hebei Province, China (No. 16277794D).", "There are no conflicts of interest." ]
[ "intro", "methods", null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, null, null, null, "COI-statement" ]
[ "Forkhead Homeobox Type O 3a", "Immunohistochemistry", "Ovarian Serous Adenocarcinomas", "PH Domain Leucine-rich Repeat Protein Phosphatase", "Prognosis", "RAD51" ]
Introduction: Ovarian serous adenocarcinoma is the most common ovarian epithelial malignant tumor. It can be divided into low- and high-grade tumors, which exhibit substantial differences in the pathogenesis, clinicopathological characteristics, and prognosis.[1] Low-grade serous carcinomas undergo a progressive, gradual developmental process from benign serous cystadenoma to borderline serous cystadenoma to malignant serous adenocarcinoma, while high-grade serous adenocarcinomas develop directly from the oviduct epithelium or ovarian inclusion cysts. However, it remains unclear whether the two tumor types have completely different molecular mechanisms, whether there is a common molecular basis, and whether the genes involved in the development of these tumors affect the patients’ prognosis. Recent studies[23] have indicated that the PI3K/Akt signaling pathway is involved in the initiation and development of ovarian serous adenocarcinoma. For example, PH domain leucine-rich repeat protein phosphatase (PHLPP) is known to negatively regulate Akt and its downstream kinase by dephosphorylating the hydrophobic core of Akt (Akt1 Ser473), thereby antagonizing PI3K/Akt signaling and inhibiting tumor growth. Forkhead homeobox type O 3a (FoxO3a), an important downstream signaling molecule of Akt, is involved in the initiation and development of tumor cells and regulation of cell proliferation after being phosphorylated by Akt.[45] Further, RAD51, a DNA double-strand break repair gene,[6] has also been implicated in the PI3K/Akt signaling pathway: phosphorylated FoxO3a binds to the promoter of the downstream gene RAD51, thereby regulating RAD51 and promoting tumor development and metastasis. The present study aimed to (1) determine the differences in the protein expressions of PHLPP, FoxO3a, and RAD51 between high- and low-grade ovarian serous adenocarcinomas, (2) explore the roles of these three genes in the initiation of the different grades of ovarian serous adenocarcinoma, and (3) analyze the relationships between their protein expressions and prognosis. Methods: Ethics The study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000. The study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000. Participant selection and description Patients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy and radiotherapy preoperatively, and their diagnosis was confirmed by pathological examinations postoperatively. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four and twenty-six cases of high- and low-grade ovarian serous adenocarcinomas were included in this study. Among the high-grade group (mean age, 55 years; range, 36–78 years), there were 26 and 68 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively. There were thirty and 64 patients with and without lymph node metastasis (LNM), and 41 and 53 patients with and without omental metastasis, respectively. 
Among the low-grade group (mean age, 44 years; range, 26–70 years), there were 18 and eight patients with FIGO Stage I–II and III–IV disease, respectively. There were 4 and 22 patients with and without LNM, and 5 and 21 patients with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with thirty normal ovaries and thirty oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which were used as controls for the low- and high-grade tumor groups, respectively. Expression of TP53 protein in high- and low-grade serous carcinoma: Among the 94 cases of high-grade, 92 cases (97.9%) showed strong staining of TP53, while among the 26 cases of low-grade serous adenocarcinoma, there was no case of strong staining. Patients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy and radiotherapy preoperatively, and their diagnosis was confirmed by pathological examinations postoperatively. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four and twenty-six cases of high- and low-grade ovarian serous adenocarcinomas were included in this study. Among the high-grade group (mean age, 55 years; range, 36–78 years), there were 26 and 68 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively. There were thirty and 64 patients with and without lymph node metastasis (LNM), and 41 and 53 patients with and without omental metastasis, respectively. Among the low-grade group (mean age, 44 years; range, 26–70 years), there were 18 and eight patients with FIGO Stage I–II and III–IV disease, respectively. There were 4 and 22 patients with and without LNM, and 5 and 21 patients with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with thirty normal ovaries and thirty oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which were used as controls for the low- and high-grade tumor groups, respectively. Expression of TP53 protein in high- and low-grade serous carcinoma: Among the 94 cases of high-grade, 92 cases (97.9%) showed strong staining of TP53, while among the 26 cases of low-grade serous adenocarcinoma, there was no case of strong staining. Technical information Experimental procedures The operative specimens were examined, and the patients were diagnosed by experienced pathologists. Paraffin blocks of the specimens were cut, and immunohistochemical staining of the sections was conducted as follows: the samples were sliced, deparaffinized, dehydrated, and subjected to antigen retrieval before blocking with goat serum. Subsequently, the slides were placed in a humidifier chamber and incubated at room temperature for 45 min. 
The tissues were incubated with primary antibodies (working concentrations of anti-PHLPP, RAD51, and FoxO3a of 1:100, 1:100, and 1:150, respectively, Biogot Biotechnology Co., Ltd., Nanjing, China) in the sealed humidifier chamber at 4°C overnight. After washing of the slides, goat anti-rabbit secondary antibodies were added and the slides were incubated in the humidifier at 37°C for 30 min. Next, horseradish peroxidase-labeled streptavidin was added and the slides were incubated in the humidifier at 37°C for 30 min, after which the tissues were stained by 3,3’-diaminobenzidine, followed by counterstaining, destaining, dehydration, clearing, and mounting. Finally, the slides were air-dried and observed under a microscope. Evaluation standard for staining The slides were scored by pathologists according to the below criteria for PHLPP, FoxO3a, and RAD51 under a microscope. Ten high-magnification fields were counted, and the average cell numbers were used. PH domain leucine-rich repeat protein phosphatase-positive criteria Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Forkhead homeobox type O 3a-positive criteria Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] RAD51-positive criteria Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] Experimental procedures The operative specimens were examined, and the patients were diagnosed by experienced pathologists. Paraffin blocks of the specimens were cut, and immunohistochemical staining of the sections was conducted as follows: the samples were sliced, deparaffinized, dehydrated, and subjected to antigen retrieval before blocking with goat serum. Subsequently, the slides were placed in a humidifier chamber and incubated at room temperature for 45 min. 
The tissues were incubated with primary antibodies (working concentrations of anti-PHLPP, RAD51, and FoxO3a of 1:100, 1:100, and 1:150, respectively, Biogot Biotechnology Co., Ltd., Nanjing, China) in the sealed humidifier chamber at 4°C overnight. After washing of the slides, goat anti-rabbit secondary antibodies were added and the slides were incubated in the humidifier at 37°C for 30 min. Next, horseradish peroxidase-labeled streptavidin was added and the slides were incubated in the humidifier at 37°C for 30 min, after which the tissues were stained by 3,3’-diaminobenzidine, followed by counterstaining, destaining, dehydration, clearing, and mounting. Finally, the slides were air-dried and observed under a microscope. Evaluation standard for staining The slides were scored by pathologists according to the below criteria for PHLPP, FoxO3a, and RAD51 under a microscope. Ten high-magnification fields were counted, and the average cell numbers were used. PH domain leucine-rich repeat protein phosphatase-positive criteria Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Forkhead homeobox type O 3a-positive criteria Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] RAD51-positive criteria Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] Statistical analyses SPSS 16.0 software (SPSS Inc., USA) was used for the statistical analysis. Categorical data were analyzed by the Chi-square test, and the survival rates were calculated by Kaplan-Meier univariate analysis. Differences in the survival rates between the different groups were determined by the log-rank test. Cox multivariate survival analysis was used for analysis of independent prognostic factors. The statistical significance level was set as P < 0.05. SPSS 16.0 software (SPSS Inc., USA) was used for the statistical analysis. Categorical data were analyzed by the Chi-square test, and the survival rates were calculated by Kaplan-Meier univariate analysis. 
Differences in the survival rates between the different groups were determined by the log-rank test. Cox multivariate survival analysis was used for analysis of independent prognostic factors. The statistical significance level was set as P < 0.05. Ethics: The study was approved by the Ethics Committee of the Fourth Hospital of Hebei Medical University and was conducted in accordance with the Declaration of Helsinki 1975, as revised in 2000. Participant selection and description: Patients with ovarian serous adenocarcinoma hospitalized and treated from January 2006 to January 2011 in our hospital were included in the ovarian serous adenocarcinoma groups. No patients received chemotherapy and radiotherapy preoperatively, and their diagnosis was confirmed by pathological examinations postoperatively. All patients had complete clinical data and were graded according to the World Health Organization histological classification for ovarian tumors (2014 edition). Ninety-four and twenty-six cases of high- and low-grade ovarian serous adenocarcinomas were included in this study. Among the high-grade group (mean age, 55 years; range, 36–78 years), there were 26 and 68 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage (2014 edition) I–II and III–IV disease, respectively. There were thirty and 64 patients with and without lymph node metastasis (LNM), and 41 and 53 patients with and without omental metastasis, respectively. Among the low-grade group (mean age, 44 years; range, 26–70 years), there were 18 and eight patients with FIGO Stage I–II and III–IV disease, respectively. There were 4 and 22 patients with and without LNM, and 5 and 21 patients with and without omental metastasis, respectively. Moreover, 21 cases of borderline serous cystadenoma (mean age, 38 years; range, 18–59 years) and 35 cases of benign serous cystadenoma (mean age, 45 years; range, 22–69 years) were included, along with thirty normal ovaries and thirty oviducts (mean age, 48 years; range, 36–62 years) removed from patients undergoing surgery for uterine fibroids, which were used as controls for the low- and high-grade tumor groups, respectively. Expression of TP53 protein in high- and low-grade serous carcinoma: Among the 94 cases of high-grade, 92 cases (97.9%) showed strong staining of TP53, while among the 26 cases of low-grade serous adenocarcinoma, there was no case of strong staining. Technical information: Experimental procedures The operative specimens were examined, and the patients were diagnosed by experienced pathologists. Paraffin blocks of the specimens were cut, and immunohistochemical staining of the sections was conducted as follows: the samples were sliced, deparaffinized, dehydrated, and subjected to antigen retrieval before blocking with goat serum. Subsequently, the slides were placed in a humidifier chamber and incubated at room temperature for 45 min. The tissues were incubated with primary antibodies (working concentrations of anti-PHLPP, RAD51, and FoxO3a of 1:100, 1:100, and 1:150, respectively, Biogot Biotechnology Co., Ltd., Nanjing, China) in the sealed humidifier chamber at 4°C overnight. After washing of the slides, goat anti-rabbit secondary antibodies were added and the slides were incubated in the humidifier at 37°C for 30 min. 
Next, horseradish peroxidase-labeled streptavidin was added and the slides were incubated in the humidifier at 37°C for 30 min, after which the tissues were stained by 3,3’-diaminobenzidine, followed by counterstaining, destaining, dehydration, clearing, and mounting. Finally, the slides were air-dried and observed under a microscope. Evaluation standard for staining The slides were scored by pathologists according to the below criteria for PHLPP, FoxO3a, and RAD51 under a microscope. Ten high-magnification fields were counted, and the average cell numbers were used. PH domain leucine-rich repeat protein phosphatase-positive criteria Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Forkhead homeobox type O 3a-positive criteria Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] RAD51-positive criteria Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] PH domain leucine-rich repeat protein phosphatase-positive criteria: Cells with cytoplasm stained brown were determined as positive. The staining intensity was scored as follows: uncolored (0), pale yellow (1 point), brown (2 points), and tan (3 points). The percentages of positive cells were scored as 0, 1, 2, 3, and 4 points for <5%, 5–25%, 26–50%, 51–75%, and >75%, respectively. According to the combined results of the two scores, the staining was divided into four levels: 0–1 point was defined as negative (−), ≥2 was defined as positive.[7] Forkhead homeobox type O 3a-positive criteria: Cells with nuclei staining brown were defined as positive. No staining was scored as 0, positive cells <10% as 1 point, 11–25% as 2 points, and >26% as 3 points. A total score of >2 points was defined as positive.[8] RAD51-positive criteria: Cells with cytoplasm or nuclei staining brown were defined as positive. Positive cells <10% and ≥10% were defined as negative and positive expressions, respectively.[9] Statistical analyses: SPSS 16.0 software (SPSS Inc., USA) was used for the statistical analysis. 
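The positivity criteria restated above reduce to three simple decision rules. The following Python sketch is not part of the original study; the function names are illustrative, and the PHLPP rule assumes the intensity and percentage scores are summed, which the text implies but does not state explicitly.

```python
# Illustrative encoding of the immunohistochemical positivity criteria described
# above. Helper functions are hypothetical; cut-offs follow the published wording.

def phlpp_positive(intensity_score: int, percent_stained: float) -> bool:
    """PHLPP: intensity 0-3 plus percentage score 0-4; combined score >=2 is positive."""
    if percent_stained < 5:
        percent_score = 0
    elif percent_stained <= 25:
        percent_score = 1
    elif percent_stained <= 50:
        percent_score = 2
    elif percent_stained <= 75:
        percent_score = 3
    else:
        percent_score = 4
    return intensity_score + percent_score >= 2

def foxo3a_positive(percent_stained: float) -> bool:
    """FoxO3a: 0 = no staining, 1 = <10%, 2 = 11-25%, 3 = >26%; a total score >2 is positive."""
    if percent_stained == 0:
        score = 0
    elif percent_stained < 10:
        score = 1
    elif percent_stained <= 25:
        score = 2
    else:
        score = 3
    return score > 2

def rad51_positive(percent_stained: float) -> bool:
    """RAD51: cytoplasmic or nuclear staining in >=10% of cells is positive."""
    return percent_stained >= 10

# Example: moderate (2-point) PHLPP staining in 30% of cells scores 2 + 2 = 4, i.e. positive.
assert phlpp_positive(2, 30) is True
```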
Categorical data were analyzed by the Chi-square test, and the survival rates were calculated by Kaplan-Meier univariate analysis. Differences in the survival rates between the different groups were determined by the log-rank test. Cox multivariate survival analysis was used for analysis of independent prognostic factors. The statistical significance level was set as P < 0.05. Results: Expression of PH domain leucine-rich repeat protein phosphatase in ovarian serous adenocarcinomas and its association with clinicopathological features and prognosis Among 94 cases of high-grade serous adenocarcinoma, 23 cases (24.5%) were PHLPP-positive, which was significantly lower than the positive rate of 86.7% in the normal oviduct group (P < 0.05). Of 26 low-grade serous adenocarcinoma cases, 6 (23.1%) were positive, which was significantly lower than that in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, there was no difference in the positive expression rates between high- and low-grade tumors [P > 0.05; Figure 1 and Table 1]. Protein expression of PH domain leucine-rich repeat protein phosphatase in high- and low-grade serous ovarian adenocarcinoma by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary. Protein expression of PHLPP in high- and low-grade ovarian serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. PHLPP: PH domain leucine-rich repeat protein phosphatase. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower PHLPP protein expression levels [P < 0.05; Table 2]. On the other hand, there were no significant correlations between the clinicopathological features and PHLPP protein expression in low-grade tumors [P > 0.05]. Relationships between PHLPP, FoxO3a, and RAD51 protein expressions and clinicopathological features of high-grade serous ovarian adenocarcinoma LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a. Among the 94 patients with high-grade serous adenocarcinoma, the 5-year survival rates of PHLPP-negative and positive patients were 5.6% and 43.5%, respectively [P < 0.05, Figure 2]. Among the 26 cases of low-grade serous adenocarcinoma, however, the expression of PHLPP did not significantly associate with the 5-year survival rate of the patients [P > 0.05]. Five-year survival curves of PH domain leucine-rich repeat protein phosphatase-positive and negative patients with high-grade ovarian serous adenocarcinomas. Among 94 cases of high-grade serous adenocarcinoma, 23 cases (24.5%) were PHLPP-positive, which was significantly lower than the positive rate of 86.7% in the normal oviduct group (P < 0.05). Of 26 low-grade serous adenocarcinoma cases, 6 (23.1%) were positive, which was significantly lower than that in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, there was no difference in the positive expression rates between high- and low-grade tumors [P > 0.05; Figure 1 and Table 1]. 
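As a worked example of the Chi-square comparisons used throughout, the PHLPP rates just reported can be arranged as a 2 × 2 table: 23 of 94 high-grade carcinomas were positive, and the 86.7% rate in the 30 normal oviducts corresponds to 26 positive cases (an inferred count; the raw number is not restated in the text). A minimal sketch with SciPy:

```python
# Chi-square test on the reported PHLPP positive rates (high-grade carcinoma vs.
# normal oviduct). The oviduct count of 26/30 is inferred from the 86.7% rate.
from scipy.stats import chi2_contingency

table = [
    [23, 94 - 23],  # high-grade serous adenocarcinoma: positive, negative
    [26, 30 - 26],  # normal oviduct: positive, negative
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p falls far below the 0.05 threshold
```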
Protein expression of PH domain leucine-rich repeat protein phosphatase in high- and low-grade serous ovarian adenocarcinoma by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary. Protein expression of PHLPP in high- and low-grade ovarian serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. PHLPP: PH domain leucine-rich repeat protein phosphatase. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower PHLPP protein expression levels [P < 0.05; Table 2]. On the other hand, there were no significant correlations between the clinicopathological features and PHLPP protein expression in low-grade tumors [P > 0.05]. Relationships between PHLPP, FoxO3a, and RAD51 protein expressions and clinicopathological features of high-grade serous ovarian adenocarcinoma LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a. Among the 94 patients with high-grade serous adenocarcinoma, the 5-year survival rates of PHLPP-negative and positive patients were 5.6% and 43.5%, respectively [P < 0.05, Figure 2]. Among the 26 cases of low-grade serous adenocarcinoma, however, the expression of PHLPP did not significantly associate with the 5-year survival rate of the patients [P > 0.05]. Five-year survival curves of PH domain leucine-rich repeat protein phosphatase-positive and negative patients with high-grade ovarian serous adenocarcinomas. Expression of forkhead homeobox type O 3a in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis Among the 94 cases of high-grade serous adenocarcinoma, 25 (26.6%) were positive for FoxO3a protein expression, which was significantly lower than the positive rate of 90.0% in the normal oviduct group (P < 0.05). Among the 26 cases of low-grade serous carcinoma, 7 (26.9%) were positive, which was significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, the difference in the positive expression rates between the high- and low-grade groups was not significant [P > 0.05; Figure 3 and Table 3]. Protein expression of forkhead homeobox type O 3a in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary. Protein expression of FoxO3a in high- and low-grade serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. FoxO3a: Forkhead homeobox type O 3a. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower FoxO3a protein expression [P < 0.05, Table 2]. On the other hand, in low-grade tumors, the FoxO3a protein expression showed no association with clinicopathological features (P > 0.05). 
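The 5-year survival comparisons reported in this section are Kaplan-Meier estimates compared with the log-rank test, as stated in the Methods. A minimal sketch using the lifelines package is shown below; the per-patient DataFrame and its column names are hypothetical placeholders, since individual follow-up data are not provided in the article.

```python
# Illustrative Kaplan-Meier estimation and log-rank comparison for a binary IHC
# marker. The DataFrame is a hypothetical stand-in for per-patient follow-up data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [12, 34, 60, 8, 45, 60, 22, 60, 18, 60],  # follow-up time in months
    "event":  [1,  1,  0,  1, 1,  0,  1,  0,  1,  0],   # 1 = death observed, 0 = censored
    "marker": [0,  0,  1,  0, 0,  1,  0,  1,  0,  1],   # e.g. PHLPP positivity
})
pos, neg = df[df["marker"] == 1], df[df["marker"] == 0]

kmf_pos = KaplanMeierFitter().fit(pos["months"], pos["event"], label="marker-positive")
kmf_neg = KaplanMeierFitter().fit(neg["months"], neg["event"], label="marker-negative")
print("5-year survival, positive:", float(kmf_pos.predict(60)))
print("5-year survival, negative:", float(kmf_neg.predict(60)))

res = logrank_test(pos["months"], neg["months"],
                   event_observed_A=pos["event"], event_observed_B=neg["event"])
print("log-rank p =", res.p_value)
```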
Among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of FoxO3a-negative and positive patients were 7.2% and 36.0%, respectively [P < 0.05, Figure 4]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of FoxO3a showed no significant association with the 5-year survival rate (P > 0.05). Five-year survival curves of forkhead homeobox type O 3a-positive and negative patients with high-grade ovarian serous adenocarcinomas. Among the 94 cases of high-grade serous adenocarcinoma, 25 (26.6%) were positive for FoxO3a protein expression, which was significantly lower than the positive rate of 90.0% in the normal oviduct group (P < 0.05). Among the 26 cases of low-grade serous carcinoma, 7 (26.9%) were positive, which was significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, the difference in the positive expression rates between the high- and low-grade groups was not significant [P > 0.05; Figure 3 and Table 3]. Protein expression of forkhead homeobox type O 3a in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary. Protein expression of FoxO3a in high- and low-grade serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. FoxO3a: Forkhead homeobox type O 3a. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower FoxO3a protein expression [P < 0.05, Table 2]. On the other hand, in low-grade tumors, the FoxO3a protein expression showed no association with clinicopathological features (P > 0.05). Among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of FoxO3a-negative and positive patients were 7.2% and 36.0%, respectively [P < 0.05, Figure 4]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of FoxO3a showed no significant association with the 5-year survival rate (P > 0.05). Five-year survival curves of forkhead homeobox type O 3a-positive and negative patients with high-grade ovarian serous adenocarcinomas. Expression of RAD51 in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis Among the 94 cases of high-grade serous adenocarcinoma, 66 (70.2%) were positive for RAD51 protein, which was significantly higher than that in the normal oviduct group [P < 0.05; Figure 5 and Table 4]. Among the 26 cases of low-grade serous carcinoma, 17 (65.4%) were positive, which was significantly higher than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups [P < 0.05; Figure 5 and Table 4]. However, the difference in the positive expression rates between the high- and low-grade groups was not significant (P > 0.05). Protein expression of RAD51 in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Positive high-grade adenocarcinoma, (b) negative normal oviduct, (c) positive low-grade adenocarcinoma, (d) negative borderline cystadenoma, (e) negative cystadenoma, (f) negative normal ovary. 
Protein expression of RAD51 in high- and low-grade serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with higher RAD51 protein expression levels [P < 0.05; Table 2]. Conversely, there was no significant correlation between clinicopathological features and RAD51 expression in low-grade tumors (P > 0.05). Finally, among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of RAD51-positive and negative patients were 3.0% and 42.9%, respectively [P < 0.05; Figure 6]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of RAD51 showed no significant association with the 5-year survival rate (P > 0.05). Five-year survival curves of RAD51-positive and negative patients with high-grade ovarian serous adenocarcinomas. Among the 94 cases of high-grade serous adenocarcinoma, 66 (70.2%) were positive for RAD51 protein, which was significantly higher than that in the normal oviduct group [P < 0.05; Figure 5 and Table 4]. Among the 26 cases of low-grade serous carcinoma, 17 (65.4%) were positive, which was significantly higher than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups [P < 0.05; Figure 5 and Table 4]. However, the difference in the positive expression rates between the high- and low-grade groups was not significant (P > 0.05). Protein expression of RAD51 in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Positive high-grade adenocarcinoma, (b) negative normal oviduct, (c) positive low-grade adenocarcinoma, (d) negative borderline cystadenoma, (e) negative cystadenoma, (f) negative normal ovary. Protein expression of RAD51 in high- and low-grade serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with higher RAD51 protein expression levels [P < 0.05; Table 2]. Conversely, there was no significant correlation between clinicopathological features and RAD51 expression in low-grade tumors (P > 0.05). Finally, among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of RAD51-positive and negative patients were 3.0% and 42.9%, respectively [P < 0.05; Figure 6]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of RAD51 showed no significant association with the 5-year survival rate (P > 0.05). Five-year survival curves of RAD51-positive and negative patients with high-grade ovarian serous adenocarcinomas. Prognostic multivariate survival analyses of high-grade serous ovarian adenocarcinoma The Cox multivariate analysis results showed that FIGO staging, LNM, and RAD51 protein expression were independent prognostic factors of patients with high-grade serous ovarian adenocarcinoma [P < 0.05; Table 5]. Multivariate analysis of independent factors associated with the 5-year overall survival rate in patients with high-grade serous ovarian adenocarcinoma RR: Risk ratio; CI: Confidence interval; LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a; PHLPP: PH domain leucine-rich repeat protein phosphatase. 
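The multivariate analysis summarized in Table 5 corresponds to a Cox proportional hazards model with FIGO stage, LNM, omental metastasis, and the three markers as covariates. The sketch below uses lifelines with randomly generated placeholder data (column names and values are hypothetical and only reflect the layout); in the study the model was fitted to the 94 high-grade cases.

```python
# Sketch of the Cox multivariate survival analysis reported in Table 5.
# The data are random placeholders; only the layout (duration, event, covariates)
# mirrors the analysis described in the article.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200  # placeholder cohort size
df = pd.DataFrame({
    "months": rng.integers(3, 61, n),
    "event": rng.integers(0, 2, n),
    "figo_iii_iv": rng.integers(0, 2, n),
    "lnm": rng.integers(0, 2, n),
    "omental_metastasis": rng.integers(0, 2, n),
    "phlpp_positive": rng.integers(0, 2, n),
    "foxo3a_positive": rng.integers(0, 2, n),
    "rad51_positive": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratio (RR) and p-value per covariate
```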
The Cox multivariate analysis results showed that FIGO staging, LNM, and RAD51 protein expression were independent prognostic factors of patients with high-grade serous ovarian adenocarcinoma [P < 0.05; Table 5]. Multivariate analysis of independent factors associated with the 5-year overall survival rate in patients with high-grade serous ovarian adenocarcinoma RR: Risk ratio; CI: Confidence interval; LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a; PHLPP: PH domain leucine-rich repeat protein phosphatase. Expression of PH domain leucine-rich repeat protein phosphatase in ovarian serous adenocarcinomas and its association with clinicopathological features and prognosis: Among 94 cases of high-grade serous adenocarcinoma, 23 cases (24.5%) were PHLPP-positive, which was significantly lower than the positive rate of 86.7% in the normal oviduct group (P < 0.05). Of 26 low-grade serous adenocarcinoma cases, 6 (23.1%) were positive, which was significantly lower than that in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, there was no difference in the positive expression rates between high- and low-grade tumors [P > 0.05; Figure 1 and Table 1]. Protein expression of PH domain leucine-rich repeat protein phosphatase in high- and low-grade serous ovarian adenocarcinoma by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary. Protein expression of PHLPP in high- and low-grade ovarian serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. PHLPP: PH domain leucine-rich repeat protein phosphatase. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower PHLPP protein expression levels [P < 0.05; Table 2]. On the other hand, there were no significant correlations between the clinicopathological features and PHLPP protein expression in low-grade tumors [P > 0.05]. Relationships between PHLPP, FoxO3a, and RAD51 protein expressions and clinicopathological features of high-grade serous ovarian adenocarcinoma LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a. Among the 94 patients with high-grade serous adenocarcinoma, the 5-year survival rates of PHLPP-negative and positive patients were 5.6% and 43.5%, respectively [P < 0.05, Figure 2]. Among the 26 cases of low-grade serous adenocarcinoma, however, the expression of PHLPP did not significantly associate with the 5-year survival rate of the patients [P > 0.05]. Five-year survival curves of PH domain leucine-rich repeat protein phosphatase-positive and negative patients with high-grade ovarian serous adenocarcinomas. Expression of forkhead homeobox type O 3a in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis: Among the 94 cases of high-grade serous adenocarcinoma, 25 (26.6%) were positive for FoxO3a protein expression, which was significantly lower than the positive rate of 90.0% in the normal oviduct group (P < 0.05). 
Among the 26 cases of low-grade serous carcinoma, 7 (26.9%) were positive, which was significantly lower than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups (P < 0.05). However, the difference in the positive expression rates between the high- and low-grade groups was not significant [P > 0.05; Figure 3 and Table 3]. Protein expression of forkhead homeobox type O 3a in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Negative high-grade adenocarcinoma, (b) positive normal oviduct, (c) negative low-grade adenocarcinoma, (d) positive borderline cystadenoma, (e) positive cystadenoma, (f) positive normal ovary. Protein expression of FoxO3a in high- and low-grade serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. FoxO3a: Forkhead homeobox type O 3a. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with lower FoxO3a protein expression [P < 0.05, Table 2]. On the other hand, in low-grade tumors, the FoxO3a protein expression showed no association with clinicopathological features (P > 0.05). Among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of FoxO3a-negative and positive patients were 7.2% and 36.0%, respectively [P < 0.05, Figure 4]. Among the 26 cases of low-grade serous adenocarcinoma, the expression of FoxO3a showed no significant association with the 5-year survival rate (P > 0.05). Five-year survival curves of forkhead homeobox type O 3a-positive and negative patients with high-grade ovarian serous adenocarcinomas. Expression of RAD51 in ovarian serous adenocarcinoma and its association with clinicopathological features and prognosis: Among the 94 cases of high-grade serous adenocarcinoma, 66 (70.2%) were positive for RAD51 protein, which was significantly higher than that in the normal oviduct group [P < 0.05; Figure 5 and Table 4]. Among the 26 cases of low-grade serous carcinoma, 17 (65.4%) were positive, which was significantly higher than in the normal ovary, borderline serous cystadenoma, and serous cystadenoma groups [P < 0.05; Figure 5 and Table 4]. However, the difference in the positive expression rates between the high- and low-grade groups was not significant (P > 0.05). Protein expression of RAD51 in high- and low-grade serous ovarian adenocarcinomas by immunohistochemical analysis (original magnification ×200). (a) Positive high-grade adenocarcinoma, (b) negative normal oviduct, (c) positive low-grade adenocarcinoma, (d) negative borderline cystadenoma, (e) negative cystadenoma, (f) negative normal ovary. Protein expression of RAD51 in high- and low-grade serous adenocarcinomas *P<0.05 compared with normal oviducts; †P<0.05 compared with normal ovaries, cystadenomas, and serous cystadenomas. In high-grade tumors, LNM, FIGO Stage III/IV tumors, and omental metastasis were significantly associated with higher RAD51 protein expression levels [P < 0.05; Table 2]. Conversely, there was no significant correlation between clinicopathological features and RAD51 expression in low-grade tumors (P > 0.05). Finally, among the 94 cases of high-grade serous adenocarcinoma, the 5-year survival rates of RAD51-positive and negative patients were 3.0% and 42.9%, respectively [P < 0.05; Figure 6]. 
Among the 26 cases of low-grade serous adenocarcinoma, the expression of RAD51 showed no significant association with the 5-year survival rate (P > 0.05). Five-year survival curves of RAD51-positive and negative patients with high-grade ovarian serous adenocarcinomas. Prognostic multivariate survival analyses of high-grade serous ovarian adenocarcinoma: The Cox multivariate analysis results showed that FIGO staging, LNM, and RAD51 protein expression were independent prognostic factors of patients with high-grade serous ovarian adenocarcinoma [P < 0.05; Table 5]. Multivariate analysis of independent factors associated with the 5-year overall survival rate in patients with high-grade serous ovarian adenocarcinoma RR: Risk ratio; CI: Confidence interval; LNM: Lymph node metastasis; FIGO: International Federation of Gynecology and Obstetrics; OM: Omental metastasis; FoxO3a: Forkhead homeobox type O 3a; PHLPP: PH domain leucine-rich repeat protein phosphatase. Discussion: Ovarian cancer is one of the most common malignancies of the female reproductive system, and has the highest mortality among all gynecologic malignant tumors, thus seriously affecting women's health and lives. Recent studies have suggested that ovarian serous adenocarcinomas can be divided into high- and low-grade tumors and that the initiation mechanisms, clinical pathological features, and prognoses of the two types greatly differ.[110] Thus, ovarian serous adenocarcinomas have complex biological behaviors. However, although the pathogeneses of the two types involve different molecular pathways, there is some common basis. Recent studies have found that the PI3K/Akt signaling pathway is closely associated with the initiation and progression of ovarian cancer. PHLPP, a known tumor suppressor, can negatively regulate Akt and its downstream kinases, thereby antagonizing the PI3K/Akt signaling pathway and inhibit tumor growth. Moreover, as an important downstream signaling molecule of Akt, after being phosphorylated by Akt, FoxO3a is involved in the initiation and development of tumor cells and regulates cell proliferation. RAD51, a downstream target of FoxO3a, also plays an important role in PI3K/Akt signaling; FoxO3a binds to the promoter of RAD51, consequently regulating RAD51 and promoting tumor development and metastasis. In this study, by examining the differences in the protein expression levels of PHLPP, FoxO3a, and RAD51 between high- and low-grade ovarian serous adenocarcinomas, the roles of the three genes in the development of the different grades of ovarian serous adenocarcinoma were explored, and the relationships between the protein expression levels and the patient prognosis were analyzed. Role of the tumor suppressor PH domain leucine-rich repeat protein phosphatase in the development of ovarian serous adenocarcinoma and its association with patient prognosis The tumor suppressor PHLPP, located at 18q21.33, can directly dephosphorylate ser473 in the hydrophobic group of Akt, thus negatively regulating Akt and its downstream targets. 
It has been reported that PHLPP can act synergistically with PTEN, another well-known tumor suppressor gene, thereby significantly inhibiting the proliferation of colorectal cancer cells and negatively regulating PI3K/Akt signaling.[11] Accordingly, studies have shown that PHLPP expression is significantly decreased in colon, breast, prostate, and lung cancers.[5,7,12] The results of the present study showed that the positive rate of PHLPP protein expression in the high-grade group was significantly lower than that in the oviduct group. The PHLPP-positive rate decreased gradually from normal ovarian tissue to benign tumors to borderline tumors to low-grade adenocarcinomas, indicating that loss of PHLPP expression may be associated with the initiation of ovarian serous adenocarcinoma development, whereas no significant difference was observed between high-grade and low-grade tumors. Further, the PHLPP expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade tumors, but not in low-grade tumors. Similarly, the 5-year survival rates of PHLPP-positive and negative high-grade tumors significantly differed, whereas no difference was seen in the low-grade tumors. These findings suggest that PHLPP may be a predictor of prognosis in high-grade ovarian serous adenocarcinoma. The tumor suppressor PHLPP, located at 18q21.33, can directly dephosphorylate ser473 in the hydrophobic group of Akt, thus negatively regulating Akt and its downstream targets. It has been reported that PHLPP can act synergistically with PTEN, another well-known tumor suppressor gene, thereby significantly inhibiting the proliferation of colorectal cancer cells and negatively regulating PI3K/Akt signaling.[11] Accordingly, studies have shown that PHLPP expression is significantly decreased in colon, breast, prostate, and lung cancers.[5,7,12] The results of the present study showed that the positive rate of PHLPP protein expression in the high-grade group was significantly lower than that in the oviduct group. The PHLPP-positive rate decreased gradually from normal ovarian tissue to benign tumors to borderline tumors to low-grade adenocarcinomas, indicating that loss of PHLPP expression may be associated with the initiation of ovarian serous adenocarcinoma development, whereas no significant difference was observed between high-grade and low-grade tumors. Further, the PHLPP expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade tumors, but not in low-grade tumors. Similarly, the 5-year survival rates of PHLPP-positive and negative high-grade tumors significantly differed, whereas no difference was seen in the low-grade tumors. These findings suggest that PHLPP may be a predictor of prognosis in high-grade ovarian serous adenocarcinoma.
Role of the tumor suppressor forkhead homeobox type O 3a in the development of ovarian serous adenocarcinoma and its association with patient prognosis FoxO3a is an important member of the FOXO family; it is located at 6q21 and encodes a 673-amino acid protein.[13] FoxO3a, as an important signaling molecule downstream of Akt in the PI3K-Akt signaling pathway, binds to the 14-3-3 chaperone protein after being phosphorylated by Akt and subsequently translocates from the nucleus to the cytoplasm, thereby activating the apoptotic genes Bim, FasL, and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), consequently promoting apoptosis.[14] In addition to its role in apoptosis, FoxO3a plays critical roles in regulating cell proliferation, metabolism, the stress response, and the life span of cancer cells.[15] In particular, another target molecule of FoxO3a is the cell cycle-regulating gene P27; overexpression of FoxO3a can arrest the cell cycle in the G0/G1 phase by upregulating P27. In addition, overexpression of FoxO3a can significantly inhibit the proliferation of tumor cells and result in tumor cell G2 arrest, further promoting tumor cell apoptosis.[16,17] The results of our analyses showed that the FoxO3a-positive rate in the high-grade tumors was significantly lower than that in the normal oviducts. The positive rate of FoxO3a protein decreased gradually from normal ovaries to cystadenomas to borderline cystadenomas to low-grade adenocarcinomas; the FoxO3a-positive rate of the low-grade tumors was significantly lower than those of the borderline cystadenomas, cystadenomas, and normal ovaries, and the positive rate of borderline cystadenomas was significantly lower than those of cystadenomas and normal ovaries. Moreover, the FoxO3a expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade tumors. These results suggest that FoxO3a may play a role in the oncogenesis and development of ovarian serous adenocarcinomas. In the high-grade tumors, the 5-year survival rate of FoxO3a-positive patients was significantly higher than that of the negative patients, whereas no difference in survival was seen in low-grade tumors, indicating that positive FoxO3a expression may have a long-term protective effect in cancer patients and may hence be of value in predicting patients’ long-term prognosis. Interestingly, it has been reported that tumor cells with low FoxO3a expression can develop drug resistance to paclitaxel and that inhibiting FoxO3a expression with RNAi can reduce E1A-mediated sensitivity to paclitaxel.[18] Therefore, FoxO3a protein may represent a potential prognostic indicator for high-grade ovarian serous adenocarcinoma.
FoxO3a is an important member of the FOXO family, located on at 6q21, and encodes a 673-amino acid long protein.[13] FoxO3a, as an important signaling molecule downstream of Akt in the PI3K-Akt signaling pathway, binds to the 14-3-3 chaperone protein after being phosphorylated by Akt, and subsequently translocates from the nucleus to the cytoplasm, thereby activating the apoptotic genes Bim, FasL, and tumor necrosis factor and its related apoptosis-inducing ligand TRAIL, consequently promoting apoptosis.[14] In addition to its role in apoptosis, FoxO3a plays critical roles in regulating cell proliferation, metabolism, the stress response, and the life span of cancer cells.[15] Particularly, another target molecule of FoxO3a is the cell cycle-regulating gene P27; overexpression of FoxO3a can arrest the cell cycle in the G0/G1 phase by upregulating P27. In addition, overexpression of FoxO3a can significantly inhibit the proliferation of tumor cells and result in tumor cell G2 arrest, further promoting tumor cell apoptosis.[1617] The results of our analyses showed that the FoxO3a-positive rate in the high-grade tumors was significantly lower than that in the normal oviducts. In the low-grade adenocarcinomas, borderline cystadenomas, cystadenomas, and normal ovaries, the positive expression rates of FoxO3a protein were gradually reduced from normal ovaries to the low-grade adenocarcinomas, and the FoxO3 positive rate of the low-grade tumors was significantly lower than those of the borderline cystadenomas, cystadenomas, and normal ovaries, while the positive rate of borderline cystadenomas was significantly lower than those of cystadenomas and normal ovaries. Moreover, the FoxO3a expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade tumors. These results suggest that FoxO3a may play a role in the oncogenesis and development of ovarian serous adenocarcinomas. In the high-grade tumors, the 5-year survival rate of FoxO3a-positive patients was significantly higher than that of the negative patients, whereas no difference in survival was seen in low-grade tumors, indicating that positive FoxO3a expression may have a long-term protective effect on cancer patients, and FoxO3a may hence have certain significance in the prediction of the patients’ long-term prognosis. Interestingly, it has been reported that tumor cells with low FoxO3a expression can develop drug resistance to paclitaxel and that inhibiting FoxO3a expression with RNAi can reduce E1A-mediated sensitivity to paclitaxel.[18] Therefore, FoxO3a protein may represent a potential prognostic indicator for high-grade ovarian serous adenocarcinoma. Role of RAD51 in the development of ovarian serous adenocarcinoma and its association with patient prognosis RAD51 is a human homologous recombination repair gene. Too strong or too weak homologous recombination will result in genome instability and DNA damage, and the consequent gene loss, loss of heterozygosity, and/or genetic translocation are important causes of tumorigenesis and tumor progression.[19] Studies have found that RAD51 protein is overexpression in the breast, prostate, pancreatic, and colon cancers. Such overexpression of RAD51 can provide certain advantages for the growth of cancer cells by promoting the initiation and progression of cancer. However, to date, studies on the expression of RAD51 in ovarian cancers are few. 
The results of this study showed that the RAD51-positive rate in high-grade tumors was significantly higher than that in normal oviducts, suggesting that RAD51 may play a role in the initiation of high-grade ovarian serous carcinomas. The RAD51-positive rate increased gradually from normal ovaries to cystadenomas, borderline cystadenomas, and low-grade serous adenocarcinomas; in particular, the positive rate in low-grade tumors was significantly higher than that in benign tumors and normal oviducts. This gradual change in RAD51 expression suggests that low-grade serous adenocarcinoma arises through a stepwise progression from benign to borderline to low-grade adenocarcinoma. However, no significant difference in RAD51 expression was found between high- and low-grade tumors. The RAD51 expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade, but not low-grade, tumors. The 5-year survival rate of patients with RAD51-positive high-grade adenocarcinomas was significantly lower than that of patients with RAD51-negative tumors, whereas no survival difference was found in low-grade tumors. In other words, high-grade tumor patients with high RAD51 expression had a lower survival rate and poorer prognosis, indicating that RAD51 may be a prognostic factor in high-grade ovarian serous adenocarcinoma. Furthermore, multivariate survival analyses showed that RAD51, in addition to FIGO stage and LNM, was an independent prognostic factor, which may be of clinical significance for the development of molecular targeted therapy. Further investigations of RAD51 are required to confirm these findings and, ideally, to improve patients' quality of life and prognosis through targeted therapy. In conclusion, the results of the present study indicate that PHLPP, FoxO3a, and RAD51 may play certain roles in the pathogenesis of the different grades of ovarian serous adenocarcinoma, and they also suggest the possible presence of common molecular mechanisms across grades. Increased RAD51 protein expression may be an important molecular marker of poor prognosis in high-grade ovarian serous adenocarcinoma. However, given the limited number of cases, which may have affected the significance of the results, further studies are required to refine and extend these findings.
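The survival comparisons and the multivariate analysis referred to above imply a standard survival workflow (Kaplan-Meier curves with a log-rank test, followed by a Cox proportional-hazards model). The sketch below shows how such an analysis could be set up with the Python lifelines package; the file name and column names (time_months, event, rad51_pos, figo_stage, lnm) are hypothetical placeholders, and this is not the software or data actually used in the study.

```python
# Illustrative sketch only: survival comparison by RAD51 status and a
# multivariate Cox model adjusting for FIGO stage and lymph node metastasis.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("high_grade_cases.csv")  # hypothetical per-patient data
# expected columns: time_months, event (1 = death), rad51_pos, figo_stage, lnm

pos = df[df["rad51_pos"] == 1]
neg = df[df["rad51_pos"] == 0]

# Kaplan-Meier estimate for the RAD51-positive group and a log-rank comparison
km = KaplanMeierFitter().fit(pos["time_months"], pos["event"], label="RAD51-positive")
lr = logrank_test(pos["time_months"], neg["time_months"],
                  event_observed_A=pos["event"], event_observed_B=neg["event"])
print(lr.p_value)

# Multivariate Cox proportional-hazards model
cph = CoxPHFitter()
cph.fit(df[["time_months", "event", "rad51_pos", "figo_stage", "lnm"]],
        duration_col="time_months", event_col="event")
cph.print_summary()  # a significant rad51_pos coefficient would indicate an independent effect
```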
Role of the tumor suppressor PH domain leucine-rich repeat protein phosphatase in the development of ovarian serous adenocarcinoma and its association with patient prognosis:
The tumor suppressor PHLPP, located at 18q21.33, can directly dephosphorylate Ser473 in the hydrophobic motif of Akt, thereby negatively regulating Akt and its downstream targets. It has been reported that PHLPP can act synergistically with PTEN, another well-known tumor suppressor gene, significantly inhibiting the proliferation of colorectal cancer cells and negatively regulating PI3K/Akt signaling.[11] Accordingly, studies have shown that PHLPP expression is significantly decreased in colon, breast, prostate, and lung cancers.[5, 7, 12] The results of the present study showed that the positive rate of PHLPP protein expression in the high-grade group was significantly lower than that in the oviduct group.
The PHLPP-positive expression rate decreased gradually from normal ovarian tissue to benign tumors, borderline tumors, and low-grade adenocarcinomas, indicating that loss of PHLPP expression may be associated with the initiation of ovarian serous adenocarcinoma, whereas no significant difference was observed between high-grade and low-grade tumors. Further, the PHLPP expression level showed significant associations with LNM, FIGO stage, and omental metastasis in high-grade tumors, but not in low-grade tumors. Similarly, the 5-year survival rates of patients with PHLPP-positive and PHLPP-negative high-grade tumors differed significantly, whereas no difference was seen in the low-grade tumors. These findings suggest that PHLPP may be a predictor of prognosis in high-grade ovarian serous adenocarcinoma.
Financial support and sponsorship: This work was supported by grants from the Key Scientific Research Program in Medical Science of Hebei Province, China (No. 20150318) and the Science and Technology Support Program of Hebei Province, China (No. 16277794D). Conflicts of interest: There are no conflicts of interest.
Background: Ovarian serous adenocarcinoma can be divided into low- and high-grade tumors, which exhibit substantial differences in pathogenesis, clinicopathology, and prognosis. This study aimed to investigate differences in PH domain leucine-rich repeat protein phosphatase (PHLPP), forkhead homeobox type O 3a (FoxO3a), and RAD51 protein expression, and their associations with prognosis, in patients with low- and high-grade ovarian serous adenocarcinomas. Methods: PHLPP, FoxO3a, and RAD51 protein expression was examined in 94 high- and 26 low-grade ovarian serous adenocarcinomas by immunohistochemistry. The differences in expression and their relationships with pathological features and prognosis were analyzed. Results: In high-grade serous adenocarcinomas, the positive rates of PHLPP and FoxO3a were 24.5% and 26.6%, while in low-grade tumors, they were 23.1% and 26.9%, respectively (P < 0.05 vs. the control specimens; low- vs. high-grade: P > 0.05). The positive rates of RAD51 were 70.2% and 65.4% in high- and low-grade serous adenocarcinomas, respectively (P < 0.05 vs. the control specimens; low- vs. high-grade: P > 0.05). Meanwhile, in high-grade tumors, Stage III/IV tumors and lymph node and omental metastases were significantly associated with lower PHLPP and FoxO3a and higher RAD51 expression. The 5-year survival rates of patients with PHLPP- and FoxO3a-positive high-grade tumors (43.5% and 36.0%) were significantly higher than those of patients with PHLPP- and FoxO3a-negative tumors (5.6% and 7.2%, respectively; P < 0.05). Similarly, the 5-year survival rate of RAD51-positive patients (3.0%) was significantly lower than that of RAD51-negative patients (42.9%; P < 0.05). In low-grade tumors, PHLPP, FoxO3a, and RAD51 expression was not significantly correlated with lymph node metastasis, omental metastasis, International Federation of Gynecology and Obstetrics (FIGO) stage, or prognosis. Conclusions: Abnormal PHLPP, FoxO3a, and RAD51 protein expression may be involved in the development of high- and low-grade ovarian serous adenocarcinomas, suggesting common molecular pathways. Decreased PHLPP and FoxO3a expression and increased RAD51 expression may be important molecular markers of poor prognosis in high-grade, but not low-grade, ovarian serous adenocarcinomas, and RAD51 may be an independent prognostic factor.
null
null
12,826
454
[ 33, 378, 692, 117, 52, 30, 82, 458, 395, 379, 112, 265, 477, 538, 42 ]
20
[ "grade", "positive", "serous", "high", "low", "low grade", "expression", "high grade", "rad51", "adenocarcinoma" ]
[ "ovarian cancer phlpp", "ovarian adenocarcinoma rr", "ovarian epithelial malignant", "ovarian serous carcinomas", "ovarian serous adenocarcinoma" ]
null
null
[CONTENT] Forkhead Homeobox Type O 3a | Immunohistochemistry | Ovarian Serous Adenocarcinomas | PH Domain Leucine-rich Repeat Protein Phosphatase | Prognosis | RAD51 [SUMMARY]
[CONTENT] Forkhead Homeobox Type O 3a | Immunohistochemistry | Ovarian Serous Adenocarcinomas | PH Domain Leucine-rich Repeat Protein Phosphatase | Prognosis | RAD51 [SUMMARY]
[CONTENT] Forkhead Homeobox Type O 3a | Immunohistochemistry | Ovarian Serous Adenocarcinomas | PH Domain Leucine-rich Repeat Protein Phosphatase | Prognosis | RAD51 [SUMMARY]
null
[CONTENT] Forkhead Homeobox Type O 3a | Immunohistochemistry | Ovarian Serous Adenocarcinomas | PH Domain Leucine-rich Repeat Protein Phosphatase | Prognosis | RAD51 [SUMMARY]
null
[CONTENT] Adult | Aged | Biomarkers, Tumor | Cystadenocarcinoma, Serous | Female | Forkhead Box Protein O3 | Humans | Immunohistochemistry | Lymphatic Metastasis | Middle Aged | Neoplasm Staging | Nuclear Proteins | Ovarian Neoplasms | Phosphoprotein Phosphatases | Prognosis | Rad51 Recombinase [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers, Tumor | Cystadenocarcinoma, Serous | Female | Forkhead Box Protein O3 | Humans | Immunohistochemistry | Lymphatic Metastasis | Middle Aged | Neoplasm Staging | Nuclear Proteins | Ovarian Neoplasms | Phosphoprotein Phosphatases | Prognosis | Rad51 Recombinase [SUMMARY]
[CONTENT] Adult | Aged | Biomarkers, Tumor | Cystadenocarcinoma, Serous | Female | Forkhead Box Protein O3 | Humans | Immunohistochemistry | Lymphatic Metastasis | Middle Aged | Neoplasm Staging | Nuclear Proteins | Ovarian Neoplasms | Phosphoprotein Phosphatases | Prognosis | Rad51 Recombinase [SUMMARY]
null
[CONTENT] Adult | Aged | Biomarkers, Tumor | Cystadenocarcinoma, Serous | Female | Forkhead Box Protein O3 | Humans | Immunohistochemistry | Lymphatic Metastasis | Middle Aged | Neoplasm Staging | Nuclear Proteins | Ovarian Neoplasms | Phosphoprotein Phosphatases | Prognosis | Rad51 Recombinase [SUMMARY]
null
[CONTENT] ovarian cancer phlpp | ovarian adenocarcinoma rr | ovarian epithelial malignant | ovarian serous carcinomas | ovarian serous adenocarcinoma [SUMMARY]
[CONTENT] ovarian cancer phlpp | ovarian adenocarcinoma rr | ovarian epithelial malignant | ovarian serous carcinomas | ovarian serous adenocarcinoma [SUMMARY]
[CONTENT] ovarian cancer phlpp | ovarian adenocarcinoma rr | ovarian epithelial malignant | ovarian serous carcinomas | ovarian serous adenocarcinoma [SUMMARY]
null
[CONTENT] ovarian cancer phlpp | ovarian adenocarcinoma rr | ovarian epithelial malignant | ovarian serous carcinomas | ovarian serous adenocarcinoma [SUMMARY]
null
[CONTENT] grade | positive | serous | high | low | low grade | expression | high grade | rad51 | adenocarcinoma [SUMMARY]
[CONTENT] grade | positive | serous | high | low | low grade | expression | high grade | rad51 | adenocarcinoma [SUMMARY]
[CONTENT] grade | positive | serous | high | low | low grade | expression | high grade | rad51 | adenocarcinoma [SUMMARY]
null
[CONTENT] grade | positive | serous | high | low | low grade | expression | high grade | rad51 | adenocarcinoma [SUMMARY]
null
[CONTENT] akt | serous | tumor | signaling | development | ovarian | involved | pi3k | akt signaling | pi3k akt [SUMMARY]
[CONTENT] positive | years | staining | points | defined | cells | positive cells | defined positive | brown | slides [SUMMARY]
[CONTENT] grade | 05 | serous | high | positive | grade serous | expression | low | low grade | adenocarcinoma [SUMMARY]
null
[CONTENT] grade | positive | serous | high | low | low grade | 05 | expression | high grade | rad51 [SUMMARY]
null
[CONTENT] Ovarian ||| O 3a [SUMMARY]
[CONTENT] PHLPP | FoxO3a | 94 | 26 ||| [SUMMARY]
[CONTENT] PHLPP | FoxO3a | 24.5% | 26.6% | 23.1% | 26.9% | 0.05 ||| 70.2% | 65.4% | 0.05 ||| FoxO3a ||| 5-year | 43.5% | 36.0% | 5.6% and | 7.2% | P< 0.05 ||| 5-year | 3.0% | 42.9% | P< 0.05 ||| PHLPP | FoxO3a | Federation of Gynecology and Obstetrics [SUMMARY]
null
[CONTENT] Ovarian ||| O 3a | adenocarcinomas ||| PHLPP | FoxO3a | 94 | 26 ||| ||| PHLPP | FoxO3a | 24.5% | 26.6% | 23.1% | 26.9% | 0.05 ||| 70.2% | 65.4% | 0.05 ||| FoxO3a ||| 5-year | 43.5% | 36.0% | 5.6% and | 7.2% | P< 0.05 ||| 5-year | 3.0% | 42.9% | P< 0.05 ||| PHLPP | FoxO3a | Federation of Gynecology and Obstetrics ||| FoxO3a ||| [SUMMARY]
null
Novel variants of the newly emerged Anaplasma capra from Korean water deer (Hydropotes inermis argyropus) in South Korea.
31345253
Anaplasma spp. are tick-borne Gram-negative obligate intracellular bacteria that infect humans and a wide range of animals. Anaplasma capra has emerged as a human pathogen; however, little is known about the occurrence and genetic identity of this agent in wildlife. The present study aimed to determine the infection rate and genetic profile of this pathogen in wild animals in the Republic of Korea.
BACKGROUND
A total of 253 blood samples [198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus)] were collected at Chungbuk Wildlife Center during the period 2015-2018. Genomic DNA was extracted from the samples and screened for presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene marker. Anaplasma capra-positive isolates were genetically profiled by amplification of a longer fragment of 16S rRNA (rrs) as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4). Generated sequences of each gene marker were aligned with homologous sequences in the database and phylogenetically analyzed.
METHODS
Anaplasma capra was detected in blood samples derived from Korean water deer, whereas samples from the other animal species were negative. The overall infection rate in the tested samples was 13.8% (35/253), and in the water deer the rate was 17.7% (35/198), distributed across the study period from 2015 to 2018. Genetic profiling and a phylogenetic analysis based on the analyzed gene markers revealed the occurrence of two distinct strains, which clustered in a single clade with counterpart A. capra sequences in the database.
RESULTS
Anaplasma capra infection was detected in Korean water deer in the Republic of Korea, providing insight into the role of wildlife as a potential reservoir for animal and human anaplasmosis. However, further work is needed to evaluate the role of Korean water deer as a host/reservoir host of A. capra.
CONCLUSIONS
[ "Anaplasma", "Anaplasmosis", "Animals", "DNA, Bacterial", "Deer", "Disease Reservoirs", "Genetic Variation", "Phylogeny", "RNA, Ribosomal, 16S", "Republic of Korea" ]
6659236
Background
The cosmopolitan genus Anaplasma includes six species of Gram-negative obligate intracellular bacteria that are transmitted by ticks to a wide range of animals, including humans [1–5], resulting in considerable economic losses in the livestock industry and serious public health concerns [6, 7]. Anaplasma phagocytophilum, A. ovis and recently reported A. capra are human pathogens [8–12], whereas other species in the genus have no known zoonotic potential. However, A. platys may have zoonotic potential after frequent reports of human infection [13, 14]. The provisional name Anaplasma capra was assigned after its initial characterization in goats (Capra aegagrus hircus) in China [12]. Later, it was isolated from sheep, goats and cattle in different geographical regions [15–19] as well as from various tick species (Haemaphysalis qinghaiensis, H. longicornis, Ixodes persulcatus) [12, 20–23]. Infection of A. capra was also reported in six wild animals in China including three takins (Budorcas taxicolor), two Reeves’s muntjacs (Muntiacus reevesi) and one forest musk deer (Moschus berezovskii) [24]. Anaplasma species usually parasitize bone marrow-derived elements, including neutrophils (A. phagocytophilum), erythrocytes (A. marginale, A. centrale and A. ovis), monocytes (A. bovis) and platelets (A. platys) [7, 9, 10, 12]. However, A. capra seems to infect endothelial cells, rendering its microscopic detection in blood smears unreliable [12, 15]. In humans, the disease caused by A. capra is generally characterized by undifferentiated fever, headache, malaise, dizziness, myalgia and chills, with potential progression to CNS involvement and cerebrospinal fluid pleocytosis [12]. Although different Anaplasma species have been detected in wildlife [23–29], little is known about the prevalence and genetic identity of A. capra in these animals in Korea. Using molecular tools, the present study aimed at investigating the occurrence and characterizing the genetic profile of this pathogen in wildlife in the Republic of Korea.
null
null
Results
The overall infection rate of A. capra in tested animals was 13.8% (35/253); however, samples from raccoon dogs (n = 53), the leopard cat (n = 1) and the roe deer (n = 1) were negative. The infection rate in KWD was 17.7% (35/198), distributed as follows: 24.6% (14/57) in 2015; 13.2% (5/38) in 2016; 17.3% (14/81) in 2017; and 9.1% (2/22) in 2018 (Table 2).

Table 2 Distribution of samples and prevalence of A. capra in animal species

Year | Korean water deer (infected/not infected) | Raccoon dog (infected/not infected) | Other animals (not infected) | Number infected | Total number | Infection rate (%)
2015 | 14/43 | 0/21 | 0 | 14 | 78 | 18.0
2016 | 5/33 | 0/5 | 0 | 5 | 43 | 11.6
2017 | 14/67 | 0/18 | LC (1) | 14 | 100 | 14.0
2018 | 2/20 | 0/9 | RD (1) | 2 | 32 | 6.3
Total number | 35/163 | 0/53 | 2 | 35 | 253 | 13.8^a
Infection rate/species (%) | 17.7 | 0 | 0 | | |

^a Overall infection rate. Abbreviations: LC, leopard cat (Prionailurus bengalensis); RD, roe deer (Capreolus pygargus)

Molecular and phylogenetic analyses indicated the occurrence of two genetically distinct strains of this pathogen [named Cheongju (23 isolates) and Chungbuk (12 isolates)]. Sequences obtained from both strains were similar to those derived from A. capra from goats, sheep, cattle, ticks and humans; however, they showed striking genetic differences, suggesting that they are novel strains. Sequences of the rrs gene fragment of both strains showed an identity of ~99.5% with counterparts in the database and clustered in the clade of A. capra from different hosts (Fig. 1). Both strains had single nucleotide polymorphisms (SNPs), resulting in four genotypes at this gene locus. Phylogenetic analysis revealed that three sequences designated A. centrale (GenBank: AB211164, AF283007 and GU064903) and two sequences designated Anaplasma spp. (GenBank: AB454075 and AB509223) clustered within the A. capra clade, even though other A. centrale sequences from different hosts and geographical regions formed separate clusters in the ML tree.

Fig. 1 Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of the 16S rRNA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

The gltA gene of the Cheongju strain shared a similarity of 99.5% (with two substitutions, A/G at position 456 and T/C at position 533) with gltA sequences KM206274, KJ700628 and MH029895 isolated from a human, goat and tick, respectively [12, 23]. Sequences of the Chungbuk strain showed a similarity of 98–99% with KX685885, KX685886 and MF071308 of A. capra from ticks and sheep [13, 19]. Both strains clustered with their homologous sequences in the A. capra clade (Fig. 2).
groEL gene sequences derived from the Cheongju strain shared a similarity of 99% (one substitution) with their counterparts from humans (GenBank: KM206275), goats (GenBank: KJ700629), sheep (GenBank: KX417356) and ticks (GenBank: KR261633 and KR261635), whereas sequences from the Chungbuk strain shared a similarity of 91% with the reference sequences (Fig. 3). The msp2 sequences showed extensive intra- and inter-sequence variations, including multiple InDels and single nucleotide substitutions; however, all sequences remained clustered in the A. capra clade (Fig. 4). A hypervariable stretch was detected between positions 285 and 414 of the generated sequences (corresponding to positions 550 and 679 in the reference sequence KM206276 of A. capra from humans). The msp4 sequences were identical in the two strains and showed an identity of 100% with those from humans (GenBank: KM206277) and ticks (GenBank: KR261637 and KR261640) (Fig. 5).

Fig. 2 Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of the gltA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

Fig. 3 Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of the groEL gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

Fig. 4 Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of the msp2 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

Fig. 5 Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of the msp4 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.
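The identity values quoted in this section (about 99.5% for rrs, 91-99% for groEL, and 100% for msp4) are pairwise identities against database sequences. As a minimal, hedged illustration of how percent identity over an alignment can be computed (two already-aligned sequences of equal length are assumed; this is not the authors' actual comparison pipeline):

```python
# Minimal sketch: percent identity between two aligned sequences of equal
# length, skipping gap columns ('-'). Illustration only.
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue
        compared += 1
        matches += a == b
    return 100.0 * matches / compared

# toy example: 1 mismatch over 20 compared positions -> 95.0
print(percent_identity("ATGCTAGCTAGGTACGATCG", "ATGCTAGCTAGGTACGATCC"))
```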
Conclusions
To our knowledge, the results presented herein provide the first evidence for the presence of A. capra in Korean water deer in Korea. Because A. capra is an emerging human pathogen, its detection in deer provides insight into the role of wildlife as a potential reservoir for human anaplasmosis. Furthermore, the obtained results expand the known geographical and host range of Anaplasma capra.
[ "Background", "Collection of samples", "DNA extraction and PCR amplification", "DNA sequence analysis", "Phylogenetic analysis" ]
[ "The cosmopolitan genus Anaplasma includes six species of Gram-negative obligate intracellular bacteria that are transmitted by ticks to a wide range of animals, including humans [1–5], resulting in considerable economic losses in the livestock industry and serious public health concerns [6, 7]. Anaplasma phagocytophilum, A. ovis and recently reported A. capra are human pathogens [8–12], whereas other species in the genus have no known zoonotic potential. However, A. platys may have zoonotic potential after frequent reports of human infection [13, 14].\nThe provisional name Anaplasma capra was assigned after its initial characterization in goats (Capra aegagrus hircus) in China [12]. Later, it was isolated from sheep, goats and cattle in different geographical regions [15–19] as well as from various tick species (Haemaphysalis qinghaiensis, H. longicornis, Ixodes persulcatus) [12, 20–23]. Infection of A. capra was also reported in six wild animals in China including three takins (Budorcas taxicolor), two Reeves’s muntjacs (Muntiacus reevesi) and one forest musk deer (Moschus berezovskii) [24].\nAnaplasma species usually parasitize bone marrow-derived elements, including neutrophils (A. phagocytophilum), erythrocytes (A. marginale, A. centrale and A. ovis), monocytes (A. bovis) and platelets (A. platys) [7, 9, 10, 12]. However, A. capra seems to infect endothelial cells, rendering its microscopic detection in blood smears unreliable [12, 15]. In humans, the disease caused by A. capra is generally characterized by undifferentiated fever, headache, malaise, dizziness, myalgia and chills, with potential progression to CNS involvement and cerebrospinal fluid pleocytosis [12].\nAlthough different Anaplasma species have been detected in wildlife [23–29], little is known about the prevalence and genetic identity of A. capra in these animals in Korea. Using molecular tools, the present study aimed at investigating the occurrence and characterizing the genetic profile of this pathogen in wildlife in the Republic of Korea.", "Chungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used.", "Frozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. 
Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of the 16S rRNA (rrs) gene as well as partial sequences of the citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). Amplified fragments were electrophoresed on a 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.

Table 1 PCR primers and conditions used in this study

Target gene | Primer name | Primer sequence (5′-3′) | Annealing T (°C) | Target size (bp) | Reference
rrs | Forward | TTGAGAGTTTGATCCTGGCTCAGAACG | 57 | 1499 | [12]
rrs | Reverse | WAAGGWGGTAATCCAGC | | |
gltA | Outer F | GCGATTTTAGAGTGYGGAGATTG | 55 | 1031 | [12]
gltA | Outer R | TACAATACCGGAGTAAAAGTCAA | | |
gltA | Inner F | TCATCTCCTGTTGCACGGTGCCC | 60 | 594 | [21]
gltA | Inner R | CTCTGAATGAACATGCCCACCCT | | |
groEL | Forward | GCGAGGCGTTAGACAAGTCCATT | 58 | 1129 | [12]
groEL | Reverse | TCCAGAGATGCAAGCGTGTATAG | | |
msp2 | Outer F | GCGTGTTGATGGCTCTGGT | 52 | 1089 | [12]
msp2 | Outer R | ACCAGTATCCTTATTTTTACC | | |
msp2 | Inner F | GAGTGCACCAGAGCCTAGAA | 56 | 801 | This study
msp2 | Inner R | TCACCATCACCAAGCACTCT | | |
msp4 | Outer F | CAGTCTGCGCCTGCTCCCTAC | 55 | 757 | [12]
msp4 | Outer R | AGGAATCTTGCTCCAAGGTTA | | |
msp4 | Inner F | GGGTTCTGATATGGCATCTTC | 56 | 656 | [15]
msp4 | Inner R | GGGAAATGTCCTTATAGGATTCG | | |

Abbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature", "The PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/).", "The obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively." ]
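For background on the substitution model named above: the Kimura 2-parameter distance between two aligned sequences depends only on the observed proportions of transitions (P) and transversions (Q), d = -0.5 ln[(1 - 2P - Q) sqrt(1 - 2Q)]. The sketch below illustrates that formula under the assumption of an aligned, essentially gap-free comparison; the study's trees themselves were built with MEGA7, not with this script.

```python
# Background sketch of the Kimura 2-parameter (K2P) distance used as the
# substitution model for the ML trees. Illustration only.
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq_a: str, seq_b: str) -> float:
    # keep only unambiguous, non-gap columns
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    p, q = transitions / n, transversions / n
    # valid while the log argument stays positive (i.e., sequences not too divergent)
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# one transition among ten sites -> d ~ 0.112
print(k2p_distance("ATGCATGCAT", "GTGCATGCAT"))
```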
[ null, null, null, null, null ]
[ "Background", "Methods", "Collection of samples", "DNA extraction and PCR amplification", "DNA sequence analysis", "Phylogenetic analysis", "Results", "Discussion", "Conclusions" ]
[ "The cosmopolitan genus Anaplasma includes six species of Gram-negative obligate intracellular bacteria that are transmitted by ticks to a wide range of animals, including humans [1–5], resulting in considerable economic losses in the livestock industry and serious public health concerns [6, 7]. Anaplasma phagocytophilum, A. ovis and recently reported A. capra are human pathogens [8–12], whereas other species in the genus have no known zoonotic potential. However, A. platys may have zoonotic potential after frequent reports of human infection [13, 14].\nThe provisional name Anaplasma capra was assigned after its initial characterization in goats (Capra aegagrus hircus) in China [12]. Later, it was isolated from sheep, goats and cattle in different geographical regions [15–19] as well as from various tick species (Haemaphysalis qinghaiensis, H. longicornis, Ixodes persulcatus) [12, 20–23]. Infection of A. capra was also reported in six wild animals in China including three takins (Budorcas taxicolor), two Reeves’s muntjacs (Muntiacus reevesi) and one forest musk deer (Moschus berezovskii) [24].\nAnaplasma species usually parasitize bone marrow-derived elements, including neutrophils (A. phagocytophilum), erythrocytes (A. marginale, A. centrale and A. ovis), monocytes (A. bovis) and platelets (A. platys) [7, 9, 10, 12]. However, A. capra seems to infect endothelial cells, rendering its microscopic detection in blood smears unreliable [12, 15]. In humans, the disease caused by A. capra is generally characterized by undifferentiated fever, headache, malaise, dizziness, myalgia and chills, with potential progression to CNS involvement and cerebrospinal fluid pleocytosis [12].\nAlthough different Anaplasma species have been detected in wildlife [23–29], little is known about the prevalence and genetic identity of A. capra in these animals in Korea. Using molecular tools, the present study aimed at investigating the occurrence and characterizing the genetic profile of this pathogen in wildlife in the Republic of Korea.", " Collection of samples Chungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used.\nChungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. 
A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used.\n DNA extraction and PCR amplification Frozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of 16S rRNA (rrs) gene as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). Amplified fragments were electrophoresed on 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.Table 1PCR primers and conditions used in this studyTarget genePrimer namePrimer sequence (5′-3′)Annealing T (°C)Target size (bp)ReferencerrsForwardTTGAGAGTTTGATCCTGGCTCAGAACG571499[12]ReverseWAAGGWGGTAATCCAGCgltAOuter FGCGATTTTAGAGTGYGGAGATTG551031[12]Outer RTACAATACCGGAGTAAAAGTCAAInner FTCATCTCCTGTTGCACGGTGCCC60594[21]Inner RCTCTGAATGAACATGCCCACCCTgroELForwardGCGAGGCGTTAGACAAGTCCATT581129[12]ReverseTCCAGAGATGCAAGCGTGTATAGmsp2Outer FGCGTGTTGATGGCTCTGGT521089[12]Outer RACCAGTATCCTTATTTTTACCInner FGAGTGCACCAGAGCCTAGAA56801This studyInner RTCACCATCACCAAGCACTCTmsp4Outer FCAGTCTGCGCCTGCTCCCTAC55757[12]Outer RAGGAATCTTGCTCCAAGGTTAInner FGGGTTCTGATATGGCATCTTC56656[15]Inner RGGGAAATGTCCTTATAGGATTCGAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature\n\nPCR primers and conditions used in this study\nAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature\nFrozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of 16S rRNA (rrs) gene as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). 
Amplified fragments were electrophoresed on 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.Table 1PCR primers and conditions used in this studyTarget genePrimer namePrimer sequence (5′-3′)Annealing T (°C)Target size (bp)ReferencerrsForwardTTGAGAGTTTGATCCTGGCTCAGAACG571499[12]ReverseWAAGGWGGTAATCCAGCgltAOuter FGCGATTTTAGAGTGYGGAGATTG551031[12]Outer RTACAATACCGGAGTAAAAGTCAAInner FTCATCTCCTGTTGCACGGTGCCC60594[21]Inner RCTCTGAATGAACATGCCCACCCTgroELForwardGCGAGGCGTTAGACAAGTCCATT581129[12]ReverseTCCAGAGATGCAAGCGTGTATAGmsp2Outer FGCGTGTTGATGGCTCTGGT521089[12]Outer RACCAGTATCCTTATTTTTACCInner FGAGTGCACCAGAGCCTAGAA56801This studyInner RTCACCATCACCAAGCACTCTmsp4Outer FCAGTCTGCGCCTGCTCCCTAC55757[12]Outer RAGGAATCTTGCTCCAAGGTTAInner FGGGTTCTGATATGGCATCTTC56656[15]Inner RGGGAAATGTCCTTATAGGATTCGAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature\n\nPCR primers and conditions used in this study\nAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature\n DNA sequence analysis The PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/).\nThe PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/).\n Phylogenetic analysis The obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively.\nThe obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. 
The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively.", "Chungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used.", "Frozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of 16S rRNA (rrs) gene as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). Amplified fragments were electrophoresed on 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.Table 1PCR primers and conditions used in this studyTarget genePrimer namePrimer sequence (5′-3′)Annealing T (°C)Target size (bp)ReferencerrsForwardTTGAGAGTTTGATCCTGGCTCAGAACG571499[12]ReverseWAAGGWGGTAATCCAGCgltAOuter FGCGATTTTAGAGTGYGGAGATTG551031[12]Outer RTACAATACCGGAGTAAAAGTCAAInner FTCATCTCCTGTTGCACGGTGCCC60594[21]Inner RCTCTGAATGAACATGCCCACCCTgroELForwardGCGAGGCGTTAGACAAGTCCATT581129[12]ReverseTCCAGAGATGCAAGCGTGTATAGmsp2Outer FGCGTGTTGATGGCTCTGGT521089[12]Outer RACCAGTATCCTTATTTTTACCInner FGAGTGCACCAGAGCCTAGAA56801This studyInner RTCACCATCACCAAGCACTCTmsp4Outer FCAGTCTGCGCCTGCTCCCTAC55757[12]Outer RAGGAATCTTGCTCCAAGGTTAInner FGGGTTCTGATATGGCATCTTC56656[15]Inner RGGGAAATGTCCTTATAGGATTCGAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature\n\nPCR primers and conditions used in this study\nAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature", "The PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/).", "The obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. 
Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively.", "The overall infection rate of A. capra in tested animals was 13.8% (35/253); however, samples from raccoon dogs (n = 53), the leopard cat (n = 1) and the roe deer (n = 1) were negative. The infection rate in KWD was 17.7% (35/198), distributed as follows: 24.6% (14/57) in 2015; 13.2% (5/38) in 2016; 17.3% (14/81) in 2017; and 9.1% (2/22) in 2018 (Table 2).

Table 2 Distribution of samples and prevalence of A. capra in animal species
Year | Korean water deer (infected/not infected) | Raccoon dog (infected/not infected) | Other animals (not infected) | Number infected | Total number | Infection rate (%)
2015 | 14/43 | 0/21 | 0 | 14 | 78 | 18.0
2016 | 5/33 | 0/5 | 0 | 5 | 43 | 11.6
2017 | 14/67 | 0/18 | 1 (LC) | 14 | 100 | 14.0
2018 | 2/20 | 0/9 | 1 (RD) | 2 | 32 | 6.3
Total | 35/163 | 0/53 | 2 | 35 | 253 | 13.8 (a)
Infection rate/species (%) | 17.7 | 0 | 0
(a) Overall infection rate
Abbreviations: LC, leopard cat (Prionailurus bengalensis); RD, roe deer (Capreolus pygargus)

Molecular and phylogenetic analyses indicated the occurrence of two genetically distinct strains [named Cheongju (23 isolates) and Chungbuk (12 isolates)] of this pathogen. Sequences obtained from both strains were similar to those derived from A. capra from goats, sheep, cattle, ticks and humans; however, they had striking genetic differences, suggesting that they are novel strains. Sequences of the rrs gene fragment of both strains showed an identity of ~99.5% with counterparts in the database and clustered in the clade of A. capra from different hosts (Fig. 1). Both strains had single nucleotide polymorphisms (SNPs), resulting in four genotypes at this gene locus. Phylogenetic analysis revealed that three sequences designated A. centrale (GenBank: AB211164, AF283007 and GU064903) and two sequences designated Anaplasma spp. (GenBank: AB454075 and AB509223) clustered within the A. capra clade, even though other A. centrale sequences from different hosts and geographical regions formed separate clusters in the ML tree.

Fig. 1 Maximum-likelihood phylogenetic tree of Anaplasma species based on partial sequences of the 16S rRNA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

The gltA gene of the Cheongju strain shared a similarity of 99.5% (with two substitutions, A/G at position 456 and T/C at position 533) with gltA sequences KM206274, KJ700628 and MH029895 isolated from a human, a goat and a tick, respectively [12, 23]. Sequences of the Chungbuk strain showed a similarity of 98–99% with KX685885, KX685886 and MF071308 of A. capra from ticks and sheep [13, 19]. Both strains clustered with their homologous sequences in the A. capra clade (Fig. 2). groEL gene sequences derived from the Cheongju strain shared a similarity of 99% (one substitution) with their counterparts from humans (GenBank: KM206275), goats (GenBank: KJ700629), sheep (GenBank: KX417356) and ticks (GenBank: KR261633 and KR261635), whereas sequences from the Chungbuk strain shared a similarity of 91% with the reference sequences (Fig. 3). The msp2 sequences showed extensive intra- and inter-sequence variation, including multiple InDels and single nucleotide substitutions; however, all sequences remained clustered in the A. capra clade (Fig. 4). A hypervariable stretch was detected between positions 285 and 414 of the generated sequences (corresponding to positions 550 and 679 in the reference sequence KM206276 of A. capra from humans). The msp4 sequences were identical in the two strains and showed an identity of 100% with those from humans (GenBank: KM206277) and ticks (GenBank: KR261637 and KR261640) (Fig. 5).

Fig. 2 Maximum-likelihood phylogenetic tree of Anaplasma species based on partial sequences of the gltA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

Fig. 3 Maximum-likelihood phylogenetic tree of Anaplasma species based on partial sequences of the groEL gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

Fig. 4 Maximum-likelihood phylogenetic tree of Anaplasma species based on partial sequences of the msp2 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.

Fig. 5 Maximum-likelihood phylogenetic tree of Anaplasma species based on partial sequences of the msp4 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site.", "Wild animals act as reservoirs for a wide range of pathogens [31–33]. The emergence of infectious disease agents of wildlife origin is a prominent challenge to public health and the livestock industry [34–36]. Anaplasma capra has recently been isolated from human patients in China with non-specific clinical manifestations and potential progression to CNS complications, suggesting that this species could pose a substantial threat to public health [12, 37]. We detected A. capra DNA in blood samples of 35 out of 198 KWD (17.7% infection rate) at the Chungbuk Wildlife Center, Korea. Epidemiological data for this pathogen in wildlife are lacking in Korea; however, our findings are similar to those obtained from wildlife (five takins, three Himalayan gorals, three Reeves’s muntjacs, one forest musk deer and one wild boar) in China [24]. In addition, low infection rates have been reported in cattle, sheep and goats in China, Sweden and Korea [15–19], indicating that A. capra has a broad host range. The occurrence of infection throughout the study period from 2015 to 2018 indicates the persistence of the infection in KWD, suggesting that the species may act as a reservoir for this pathogen. However, it is difficult to explain the negative results from raccoon dogs in the present study. This may be attributed to persistent infection keeping the pathogen below detectable levels in the blood of these animals. In support of this view, A. capra has been reported to infect endothelial cells [12, 15], making its detection in the blood possible only in cases of considerable bacteremia or release of infected endothelial cells, resembling Rickettsia species [12]. Furthermore, the sample size and species and/or the age of the animals may play a role in these findings. Further investigations are needed to clarify these points.
Our genetic profiling results indicated that the newly generated 16S rRNA gene sequences shared a homology of > 99.5% with sequences of A. capra strains from humans, sheep, goats, cattle and ticks [12, 15, 16, 18, 20, 22], suggesting that they likely belong to the same bacterial species [38, 39]. Clustering of sequences named A. centrale from deer (Cervus nippon nippon) (GenBank: AB211164) and cattle (GenBank: AF283007) in Japan and from ticks (Haemaphysalis longicornis) in Korea (GenBank: GU064903), as well as Anaplasma spp. from deer (GenBank: AB454075; direct submission) and Japanese serow (Capricornis crispus) (GenBank: AB509223), in the same clade as A. capra indicates a close phylogenetic relationship. The clustering pattern of these sequences in the A. capra clade does not support their assignment as sister taxa and suggests that these isolates are in fact A. capra [15, 25] and may need re-description, since these sequences were more closely related to the Chungbuk strain in the A. capra clade.
The results obtained using different gene markers showed considerable sequence variation, suggesting that A. capra has a high degree of genetic diversity. Notably, extensive genetic variation was detected in msp2. Consistent with our results, sequence variation at the studied gene markers has been reported previously among isolates from ticks, sheep and goats [12, 15–18, 20–25]. Similarly, genetic variation is common in other Anaplasma species [40–45]. Although genetic diversity is reportedly related to infectivity, virulence, pathogenicity, niche preference, immune evasion and/or host adaptability [46–49], this has not been established in A. capra and further studies are needed to evaluate these relationships.
Due to the extinction of its natural predators, the KWD is thriving in Korea and was designated as “harmful wildlife” by the Ministry of Environment in 1994 owing to harmful interactions with humans and their property. This close interaction poses substantial threats to domestic animals and human health in Korea. This study was limited by analyzing samples from one geographical area and few animal species, which may lead to biases in the results. A large-scale study is underway to fully elucidate the wildlife host range, vector ticks, pathogenicity and geographical distribution of this organism in Korea.", "To our knowledge, the results presented herein provide the first evidence for the presence of A. capra in Korean water deer in Korea. As an emerging human pathogen, the detection of A. capra in deer provides insight into the role of wildlife as a potential reservoir for human anaplasmosis. Furthermore, the obtained results expand the known geographical and host range of A. capra." ]
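The infection rates above are simple proportions of PCR-positive animals (35/253 overall and 35/198 in Korean water deer). As an illustrative aid only, not part of the original analysis, the following Python sketch reproduces those percentages and adds a 95% Wilson score confidence interval; the counts are the ones reported in Table 2, while the interval calculation itself is an added assumption.

```python
from math import sqrt

def prevalence_with_wilson_ci(positive: int, total: int, z: float = 1.96):
    """Return (rate %, lower %, upper %) using the Wilson score interval."""
    p = positive / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return 100 * p, 100 * (centre - half), 100 * (centre + half)

# Counts reported in Table 2 (A. capra-positive / tested)
groups = {
    "All animals": (35, 253),
    "Korean water deer": (35, 198),
    "Raccoon dog": (0, 53),
}

for name, (pos, n) in groups.items():
    rate, lo, hi = prevalence_with_wilson_ci(pos, n)
    print(f"{name}: {rate:.1f}% (95% CI {lo:.1f}-{hi:.1f}%, n={n})")
```

Running this gives approximately 13.8% (95% CI roughly 10.1–18.6%) for the overall sample, which matches the reported point estimate; the interval is shown only to indicate the uncertainty implied by the sample size.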
[ null, "materials|methods", null, null, null, null, "results", "discussion", "conclusion" ]
[ "\nAnaplasma capra\n", "Korean water deer (Hydropotes inermis argyropus)", "South Korea" ]
Background: The cosmopolitan genus Anaplasma includes six species of Gram-negative obligate intracellular bacteria that are transmitted by ticks to a wide range of animals, including humans [1–5], resulting in considerable economic losses in the livestock industry and serious public health concerns [6, 7]. Anaplasma phagocytophilum, A. ovis and recently reported A. capra are human pathogens [8–12], whereas other species in the genus have no known zoonotic potential. However, A. platys may have zoonotic potential after frequent reports of human infection [13, 14]. The provisional name Anaplasma capra was assigned after its initial characterization in goats (Capra aegagrus hircus) in China [12]. Later, it was isolated from sheep, goats and cattle in different geographical regions [15–19] as well as from various tick species (Haemaphysalis qinghaiensis, H. longicornis, Ixodes persulcatus) [12, 20–23]. Infection of A. capra was also reported in six wild animals in China including three takins (Budorcas taxicolor), two Reeves’s muntjacs (Muntiacus reevesi) and one forest musk deer (Moschus berezovskii) [24]. Anaplasma species usually parasitize bone marrow-derived elements, including neutrophils (A. phagocytophilum), erythrocytes (A. marginale, A. centrale and A. ovis), monocytes (A. bovis) and platelets (A. platys) [7, 9, 10, 12]. However, A. capra seems to infect endothelial cells, rendering its microscopic detection in blood smears unreliable [12, 15]. In humans, the disease caused by A. capra is generally characterized by undifferentiated fever, headache, malaise, dizziness, myalgia and chills, with potential progression to CNS involvement and cerebrospinal fluid pleocytosis [12]. Although different Anaplasma species have been detected in wildlife [23–29], little is known about the prevalence and genetic identity of A. capra in these animals in Korea. Using molecular tools, the present study aimed at investigating the occurrence and characterizing the genetic profile of this pathogen in wildlife in the Republic of Korea. Methods: Collection of samples Chungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used. Chungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. 
A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used. DNA extraction and PCR amplification Frozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of 16S rRNA (rrs) gene as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). Amplified fragments were electrophoresed on 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.Table 1PCR primers and conditions used in this studyTarget genePrimer namePrimer sequence (5′-3′)Annealing T (°C)Target size (bp)ReferencerrsForwardTTGAGAGTTTGATCCTGGCTCAGAACG571499[12]ReverseWAAGGWGGTAATCCAGCgltAOuter FGCGATTTTAGAGTGYGGAGATTG551031[12]Outer RTACAATACCGGAGTAAAAGTCAAInner FTCATCTCCTGTTGCACGGTGCCC60594[21]Inner RCTCTGAATGAACATGCCCACCCTgroELForwardGCGAGGCGTTAGACAAGTCCATT581129[12]ReverseTCCAGAGATGCAAGCGTGTATAGmsp2Outer FGCGTGTTGATGGCTCTGGT521089[12]Outer RACCAGTATCCTTATTTTTACCInner FGAGTGCACCAGAGCCTAGAA56801This studyInner RTCACCATCACCAAGCACTCTmsp4Outer FCAGTCTGCGCCTGCTCCCTAC55757[12]Outer RAGGAATCTTGCTCCAAGGTTAInner FGGGTTCTGATATGGCATCTTC56656[15]Inner RGGGAAATGTCCTTATAGGATTCGAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature PCR primers and conditions used in this study Abbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature Frozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of 16S rRNA (rrs) gene as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). 
Amplified fragments were electrophoresed on 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.Table 1PCR primers and conditions used in this studyTarget genePrimer namePrimer sequence (5′-3′)Annealing T (°C)Target size (bp)ReferencerrsForwardTTGAGAGTTTGATCCTGGCTCAGAACG571499[12]ReverseWAAGGWGGTAATCCAGCgltAOuter FGCGATTTTAGAGTGYGGAGATTG551031[12]Outer RTACAATACCGGAGTAAAAGTCAAInner FTCATCTCCTGTTGCACGGTGCCC60594[21]Inner RCTCTGAATGAACATGCCCACCCTgroELForwardGCGAGGCGTTAGACAAGTCCATT581129[12]ReverseTCCAGAGATGCAAGCGTGTATAGmsp2Outer FGCGTGTTGATGGCTCTGGT521089[12]Outer RACCAGTATCCTTATTTTTACCInner FGAGTGCACCAGAGCCTAGAA56801This studyInner RTCACCATCACCAAGCACTCTmsp4Outer FCAGTCTGCGCCTGCTCCCTAC55757[12]Outer RAGGAATCTTGCTCCAAGGTTAInner FGGGTTCTGATATGGCATCTTC56656[15]Inner RGGGAAATGTCCTTATAGGATTCGAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature PCR primers and conditions used in this study Abbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature DNA sequence analysis The PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/). The PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/). Phylogenetic analysis The obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively. The obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. 
The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively. Collection of samples: Chungbuk Wildlife Center is located in Cheongju city, Chungcheongbuk-do province in the Republic of Korea (36°38′13.99ʺN, 127°29′22.99ʺE). The center receives terrestrial and avian wild animals for purposes of treatment from sickness/injuries and/or rehabilitation. Blood samples are collected for diagnosis and treatment of wildlife referred to the Chungbuk Wildlife Center. Blood samples are archived in EDTA-treated tubes and stored at – 80 °C. A total of 253 blood samples including 198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus), collected from January 2015 to June 2018, were used. DNA extraction and PCR amplification: Frozen blood samples were thawed at room temperature and genomic DNA was extracted from 200 µl of blood using a Magpurix® Blood Kit and Magpurix® 12s automated nucleic acid purification system (Zinexts Life Science Corp., Taipei, Taiwan), according to the manufacturer’s recommendations. DNA preparations were tested for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene as described previously [30]. Anaplasma capra-positive isolates were genetically profiled by the amplification of a longer fragment of 16S rRNA (rrs) gene as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes as described previously (Table 1). Amplified fragments were electrophoresed on 1.2% gel loaded with EcoDye™ stain (BIOFACT, Daejeon, Korea) and visualized using UV light.Table 1PCR primers and conditions used in this studyTarget genePrimer namePrimer sequence (5′-3′)Annealing T (°C)Target size (bp)ReferencerrsForwardTTGAGAGTTTGATCCTGGCTCAGAACG571499[12]ReverseWAAGGWGGTAATCCAGCgltAOuter FGCGATTTTAGAGTGYGGAGATTG551031[12]Outer RTACAATACCGGAGTAAAAGTCAAInner FTCATCTCCTGTTGCACGGTGCCC60594[21]Inner RCTCTGAATGAACATGCCCACCCTgroELForwardGCGAGGCGTTAGACAAGTCCATT581129[12]ReverseTCCAGAGATGCAAGCGTGTATAGmsp2Outer FGCGTGTTGATGGCTCTGGT521089[12]Outer RACCAGTATCCTTATTTTTACCInner FGAGTGCACCAGAGCCTAGAA56801This studyInner RTCACCATCACCAAGCACTCTmsp4Outer FCAGTCTGCGCCTGCTCCCTAC55757[12]Outer RAGGAATCTTGCTCCAAGGTTAInner FGGGTTCTGATATGGCATCTTC56656[15]Inner RGGGAAATGTCCTTATAGGATTCGAbbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature PCR primers and conditions used in this study Abbreviations: rrs, 16S rRNA; gltA, citrate synthase; groEL, heat-shock protein; msp2, major surface protein 2; msp4, major surface protein 4; T, temperature DNA sequence analysis: The PCR products (rrs and groEL) or secondary PCR product (for other gene markers) were purified and sequenced, either directly or after cloning in the pGEM-T vector (Promega, Madison, WI, USA), in both directions. Generated sequences were assembled using ChromasPro v.2.1.8 (https://technelysium.com.au/wp/chromaspro/). 
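The identity values reported in the Results (e.g., ~99.5% for rrs and 100% for msp4) come from comparing the newly generated sequences with GenBank references. The sketch below is a simplified, hypothetical way to compute percent identity from two already-aligned sequences (for instance, exported from a ClustalX alignment); it excludes gap columns, which is one common convention but not necessarily the exact setting used by the authors, and the example fragments are invented.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity between two aligned sequences of equal length.

    Columns containing a gap ('-') in either sequence are excluded.
    """
    if len(aligned_a) != len(aligned_b):
        raise ValueError("Aligned sequences must have the same length")

    compared = matches = 0
    for a, b in zip(aligned_a.upper(), aligned_b.upper()):
        if a == "-" or b == "-":
            continue  # skip gap columns
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy example with made-up fragments (not real A. capra sequences)
seq1 = "ATGCGTACGTTAGC-ATTGACC"
seq2 = "ATGCGTACGTTAGCGATTGACT"
print(f"{percent_identity(seq1, seq2):.1f}% identity")
```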
Phylogenetic analysis: The obtained sequences from each genetic locus were aligned with each other and reference sequences, available in GenBank (https://www.ncbi.nlm.nih.gov/), using ClustalX (http://www.clustal.org/) to determine the identity of Anaplasma spp. Evolutionary relationships were inferred based on partial sequences of 16S rRNA, citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4) genes using the maximum likelihood (ML) method implemented in MEGA7 (http://www.megasoftware.net/). The ML phylogenetic analysis was conducted using the Kimura 2-parameter model and 1000 bootstrap replicates. The ML tree was rooted against the nucleotide sequences L36221 (Rickettsia typhi), KY124257 (Rickettsia parkeri), U96733 (Rickettsia rickettsii) and BDDN01000175 (Ehrlichia ruminantium) for 16S rRNA, gltA, groEL and msp4 gene markers, respectively. Results: The overall infection rate of A. capra in tested animals was 13.8% (35/253); however, samples from raccoon dogs (n = 53), leopard cat (n = 1) and roe deer (n = 1) were negative. The infection rate in KWD was 17.7% (35/198), distributed as follows: 24.6% (14/57) in 2015; 13.2% (5/38) in 2016; 17.3% (14/81) in 2017; and 9.1% (2/22) in 2018 (Table 2).Table 2Distribution of samples and prevalence of A. capra in animal speciesYearKorean water deerRaccoon dogOther animalsNumber infectedTotal numberInfection rate (%)InfectedNot infectedInfectedNot infectedNot infected201514430210147818.0201653305054311.620171467018LC (1)1410014.0201822009RD (1)2326.3Total number3516305323525313.8aInfection rate/species (%)17.700aOverall infection rateAbbreviations: LC, leopard cat (Prionailurus bengalensis); RD, roe deer (Capreolus pygargus) Distribution of samples and prevalence of A. capra in animal species aOverall infection rate Abbreviations: LC, leopard cat (Prionailurus bengalensis); RD, roe deer (Capreolus pygargus) Molecular and phylogenetic analyses indicated to occurrence of two genetically distinct strains [named Cheongju (23 isolates) and Chungbuk (12 isolates)] of this pathogen. Sequences obtained from both strains were similar to those derived from A. capra from goats, sheep, cattle, ticks and humans; however, they had striking genetic differences, suggesting that they are novel strains. Sequences of the rrs gene fragment of both strains showed an identity of ~ 99.5% with counterparts in database and clustered in the clade of A. capra from different hosts (Fig. 1). Both strains had single nucleotide polymorphisms (SNPs), resulting in four genotypes at this gene locus. Phylogenetic analysis revealed that three sequences designated A. centrale (GenBank: AB211164, AF283007 and GU064903) and two sequences designated Anaplasma spp. (GenBank: AB454075 and AB509223) clustered within the A. capra clade, even though other A. centrale sequences from different hosts and geographical regions formed separate clusters in the ML tree.Fig. 1Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of 16S rRNA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of 16S rRNA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. 
The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site The gltA gene of the Cheongju strain shared a similarity of 99.5% (with two substitutions, A/G at position 456 and T/C at position 533) with gltA sequences KM206274, KJ700628 and MH029895 isolated from a human, goat and tick, respectively [12, 23]. Sequences of the Chungbuk strain showed a similarity of 98–99% with KX685885, KX685886 and MF071308 of A. capra from ticks and sheep [13, 19]. Both strains clustered with their homologous sequences in the A. capra clade (Fig. 2). groEL gene sequences derived from the Cheongju strain shared a similarity of 99% (one substitution) with their counterparts from humans (GenBank: KM206275), goats (GenBank: KJ700629), sheep (GenBank: KX417356) and ticks (GenBank: KR261633 and KR261635), whereas sequences from the Chungbuk strain shared a similarity of 91% with the reference sequences (Fig. 3). The msp2 sequences showed extensive intra- and inter-sequence variations, including multiple InDels and single nucleotide substitutions; however, all sequences remained clustered in the A. capra clade (Fig. 4). A hypervariable stretch was detected between positions 285 and 414 of the generated sequences (corresponding to positions 550 and 679 in the reference sequence KM206276 of A. capra from humans). The msp4 sequences were identical in the two strains and showed an identity of 100% with those from humans (GenBank: KM206277) and ticks (GenBank: KR261637 and KR261640) (Fig. 5).Fig. 2Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of gltA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Fig. 3Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of groEL gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Fig. 4Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of msp2 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Fig. 5Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of msp4 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of gltA gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of groEL gene. 
The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of msp2 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Maximum-likelihood phylogenetic trees of Anaplasma species based on partial sequences of msp4 gene. The tree was constructed using MEGA7 with the Kimura 2-parameter model. The newly generated sequences are indicated by diamonds. The numbers at nodes represent bootstrap values. The scale-bar represents the number of nucleotide substitutions per site Discussion: Wild animals act as reservoirs for a wide range of pathogens [31–33]. The emergence of infectious disease agents of wildlife origin is a prominent challenge to public health and the livestock industry [34–36]. Anaplasma capra has recently been isolated from human patients in China with non-specific clinical manifestations, with potential progression to CNS complications, suggesting that this species could pose a substantial threat to public health [12, 37]. We detected A. capra DNA in blood samples of 35 out of 198 KWD (17.7% infection rate) at the Chungbuk Wildlife Center, Korea. Epidemiological data for this pathogen in wildlife are lacking in Korea; however, our findings are similar to those obtained from wildlife (five takins, three Himalayan gorals, three Reeves’s muntjacs, one forest musk deer and one wild boar) in China [24]. In addition, a low percentage of infection rate was reported cattle, sheep and goats in China, Sweden and Korea [15–19], indicating that A. capra has a broad host range. Occurrence of infection during the study period from 2015 to 2018 indicates the persistence of the infection in KWD, suggesting that the species may act as a reservoir for this pathogen. However, it is difficult to explain the negative results from raccoon dogs in the present study. This may be attributed to persistent infection making the pathogen below detectable level in the blood of these animals. In support of this view, A. capra has been reported to infect endothelial cells [12, 15], making its detection in the blood possible in case of considerable bacteremia or released endothelial cells, resembling Rickettsia species [12]. Furthermore, the sample size and species and/or the age of animals may play a role in these findings. Further investigations are needed to clarify these points. Our genetic profiling results indicated that the newly generated 16S rRNA gene sequences shared a homology of > 99.5% with sequences of A. capra strains from humans, sheep, goats, cattle and ticks [12, 15, 16, 18, 20, 22], suggesting that they likely are within the same species of bacteria [38, 39]. Clustering of sequences named A. centrale from deer (Cervus nippon nippon) (GenBank: AB211164) and cattle (GenBank: AF283007) in Japan and from ticks (Haemaphysalis longicornis) in Korea (GenBank: GU064903), as well as Anaplasma spp. from deer (GenBank: AB454075; direct submission) and Japanese serow (Capricornis crispus) (GenBank: AB509223) in the same clade of A. capra indicate a close phylogenetic relationship. The clustering pattern of these sequences in the A. 
capra clade does not support their assignment as sister taxa and suggests that these isolates are in fact A. capra [15, 25] and may need re-description, since these sequences were more closely related to the Chungbuk strain in the A. capra clade. The results obtained using different gene markers showed considerable sequence variation, suggesting that A. capra has a high degree of genetic diversity. Notably, extensive genetic variation was detected in msp2. Consistent with our results, sequence variation at the studied gene markers has been reported previously among isolates from ticks, sheep and goats [12, 15–18, 20–25]. Similarly, genetic variation is common in other Anaplasma species [40–45]. Although genetic diversity is reportedly related to infectivity, virulence, pathogenicity, niche preference, immune evasion and/or host adaptability [46–49], this has not been established in A. capra and further studies are needed to evaluate these relationships. Due to the extinction of its natural predators, the KWD is thriving in Korea and was designated as “harmful wildlife” by the Ministry of Environment in 1994 owing to harmful interactions with humans and their property. This close interaction poses substantial threats to domestic animals and human health in Korea. This study was limited by analyzing samples from one geographical area and few animal species, which may lead to biases in the results. A large-scale study is underway to fully elucidate the wildlife host range, vector ticks, pathogenicity and geographical distribution of this organism in Korea. Conclusions: To our knowledge, the results presented herein provide the first evidence for the presence of A. capra in Korean water deer in Korea. As an emerging human pathogen, the detection of A. capra in deer provides insight into the role of wildlife as a potential reservoir for human anaplasmosis. Furthermore, the obtained results expand the known geographical and host range of A. capra.
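The ML trees described above were inferred in MEGA7 under the Kimura 2-parameter (K2P) model. To make that model concrete, the sketch below computes the pairwise K2P distance d = -(1/2)·ln[(1 − 2P − Q)·sqrt(1 − 2Q)], where P and Q are the observed proportions of transitions and transversions; this is only the standard distance formula shown for illustration, not a re-implementation of the MEGA7 likelihood search or bootstrap procedure, and the example sequences are invented.

```python
from math import log, sqrt

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(aligned_a: str, aligned_b: str) -> float:
    """Kimura 2-parameter distance between two aligned nucleotide sequences."""
    sites = transitions = transversions = 0
    for a, b in zip(aligned_a.upper(), aligned_b.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguous bases
        sites += 1
        if a == b:
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        if same_class:
            transitions += 1      # A<->G or C<->T
        else:
            transversions += 1    # purine <-> pyrimidine
    if sites == 0:
        raise ValueError("No comparable sites")
    p, q = transitions / sites, transversions / sites
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))

# Toy example with short made-up fragments
print(round(k2p_distance("ATGCGTACGTTAGCATTGACC",
                         "ATGCGTGCGTTAGCATTAACC"), 4))
```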
Background: Anaplasma spp. are tick-borne Gram-negative obligate intracellular bacteria that infect humans and a wide range of animals. Anaplasma capra has emerged as a human pathogen; however, little is known about the occurrence and genetic identity of this agent in wildlife. The present study aimed to determine the infection rate and genetic profile of this pathogen in wild animals in the Republic of Korea. Methods: A total of 253 blood samples [198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus)] were collected at Chungbuk Wildlife Center during the period 2015-2018. Genomic DNA was extracted from the samples and screened for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene marker. Anaplasma capra-positive isolates were genetically profiled by amplification of a longer fragment of 16S rRNA (rrs) as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4). Generated sequences of each gene marker were aligned with homologous sequences in the database and phylogenetically analyzed. Results: Anaplasma capra was detected in blood samples derived from Korean water deer, whereas samples from other animal species were negative. The overall infection rate in tested samples was 13.8% (35/253) and in the water deer the rate was 17.7% (35/198), distributed across the study period from 2015 to 2018. Genetic profiling and phylogenetic analysis based on the analyzed gene markers revealed the occurrence of two distinct strains, clustered in a single clade with counterpart sequences of A. capra in the database. Conclusions: Anaplasma capra infection was detected in Korean water deer in the Republic of Korea, providing insight into the role of wildlife as a potential reservoir for animal and human anaplasmosis. However, further work is needed in order to evaluate the role of Korean water deer as a host/reservoir host of A. capra.
Background: The cosmopolitan genus Anaplasma includes six species of Gram-negative obligate intracellular bacteria that are transmitted by ticks to a wide range of animals, including humans [1–5], resulting in considerable economic losses in the livestock industry and serious public health concerns [6, 7]. Anaplasma phagocytophilum, A. ovis and recently reported A. capra are human pathogens [8–12], whereas other species in the genus have no known zoonotic potential. However, A. platys may have zoonotic potential after frequent reports of human infection [13, 14]. The provisional name Anaplasma capra was assigned after its initial characterization in goats (Capra aegagrus hircus) in China [12]. Later, it was isolated from sheep, goats and cattle in different geographical regions [15–19] as well as from various tick species (Haemaphysalis qinghaiensis, H. longicornis, Ixodes persulcatus) [12, 20–23]. Infection of A. capra was also reported in six wild animals in China including three takins (Budorcas taxicolor), two Reeves’s muntjacs (Muntiacus reevesi) and one forest musk deer (Moschus berezovskii) [24]. Anaplasma species usually parasitize bone marrow-derived elements, including neutrophils (A. phagocytophilum), erythrocytes (A. marginale, A. centrale and A. ovis), monocytes (A. bovis) and platelets (A. platys) [7, 9, 10, 12]. However, A. capra seems to infect endothelial cells, rendering its microscopic detection in blood smears unreliable [12, 15]. In humans, the disease caused by A. capra is generally characterized by undifferentiated fever, headache, malaise, dizziness, myalgia and chills, with potential progression to CNS involvement and cerebrospinal fluid pleocytosis [12]. Although different Anaplasma species have been detected in wildlife [23–29], little is known about the prevalence and genetic identity of A. capra in these animals in Korea. Using molecular tools, the present study aimed at investigating the occurrence and characterizing the genetic profile of this pathogen in wildlife in the Republic of Korea. Conclusions: To our knowledge, the results presented herein provide the first evidence for the presence of A. capra in Korean water deer in Korea. As an emerging human pathogen, the detection of A. capra in deer provides insight into the role of wildlife as a potential reservoir for human anaplasmosis. Furthermore, the obtained results expand the known geographical and host range of the Anaplasma capra.
Background: Anaplasma spp. are tick-borne Gram-negative obligate intracellular bacteria that infect humans and a wide range of animals. Anaplasma capra has emerged as a human pathogen; however, little is known about the occurrence and genetic identity of this agent in wildlife. The present study aimed to determine the infection rate and genetic profile of this pathogen in wild animals in the Republic of Korea. Methods: A total of 253 blood samples [198 from Korean water deer (Hydropotes inermis argyropus), 53 from raccoon dogs (Nyctereutes procyonoides) and one sample each from a leopard cat (Prionailurus bengalensis) and a roe deer (Capreolus pygargus)] were collected at Chungbuk Wildlife Center during the period 2015-2018. Genomic DNA was extracted from the samples and screened for the presence of Anaplasma species by PCR/sequence analysis of 429 bp of the 16S rRNA gene marker. Anaplasma capra-positive isolates were genetically profiled by amplification of a longer fragment of 16S rRNA (rrs) as well as partial sequences of citrate synthase (gltA), heat-shock protein (groEL), major surface protein 2 (msp2) and major surface protein 4 (msp4). Generated sequences of each gene marker were aligned with homologous sequences in the database and phylogenetically analyzed. Results: Anaplasma capra was detected in blood samples derived from Korean water deer, whereas samples from other animal species were negative. The overall infection rate in tested samples was 13.8% (35/253) and in the water deer the rate was 17.7% (35/198), distributed across the study period from 2015 to 2018. Genetic profiling and phylogenetic analysis based on the analyzed gene markers revealed the occurrence of two distinct strains, clustered in a single clade with counterpart sequences of A. capra in the database. Conclusions: Anaplasma capra infection was detected in Korean water deer in the Republic of Korea, providing insight into the role of wildlife as a potential reservoir for animal and human anaplasmosis. However, further work is needed in order to evaluate the role of Korean water deer as a host/reservoir host of A. capra.
4,543
402
[ 384, 141, 291, 59, 157 ]
9
[ "sequences", "protein", "capra", "anaplasma", "gene", "12", "species", "surface protein", "surface", "major surface protein" ]
[ "different anaplasma species", "anaplasma species detected", "ticks pathogenicity", "anaplasma spp deer", "capra human pathogens" ]
null
[CONTENT] Anaplasma capra | Korean water deer (Hydropotes inermis argyropus) | South Korea [SUMMARY]
null
[CONTENT] Anaplasma capra | Korean water deer (Hydropotes inermis argyropus) | South Korea [SUMMARY]
[CONTENT] Anaplasma capra | Korean water deer (Hydropotes inermis argyropus) | South Korea [SUMMARY]
[CONTENT] Anaplasma capra | Korean water deer (Hydropotes inermis argyropus) | South Korea [SUMMARY]
[CONTENT] Anaplasma capra | Korean water deer (Hydropotes inermis argyropus) | South Korea [SUMMARY]
[CONTENT] Anaplasma | Anaplasmosis | Animals | DNA, Bacterial | Deer | Disease Reservoirs | Genetic Variation | Phylogeny | RNA, Ribosomal, 16S | Republic of Korea [SUMMARY]
null
[CONTENT] Anaplasma | Anaplasmosis | Animals | DNA, Bacterial | Deer | Disease Reservoirs | Genetic Variation | Phylogeny | RNA, Ribosomal, 16S | Republic of Korea [SUMMARY]
[CONTENT] Anaplasma | Anaplasmosis | Animals | DNA, Bacterial | Deer | Disease Reservoirs | Genetic Variation | Phylogeny | RNA, Ribosomal, 16S | Republic of Korea [SUMMARY]
[CONTENT] Anaplasma | Anaplasmosis | Animals | DNA, Bacterial | Deer | Disease Reservoirs | Genetic Variation | Phylogeny | RNA, Ribosomal, 16S | Republic of Korea [SUMMARY]
[CONTENT] Anaplasma | Anaplasmosis | Animals | DNA, Bacterial | Deer | Disease Reservoirs | Genetic Variation | Phylogeny | RNA, Ribosomal, 16S | Republic of Korea [SUMMARY]
[CONTENT] different anaplasma species | anaplasma species detected | ticks pathogenicity | anaplasma spp deer | capra human pathogens [SUMMARY]
null
[CONTENT] different anaplasma species | anaplasma species detected | ticks pathogenicity | anaplasma spp deer | capra human pathogens [SUMMARY]
[CONTENT] different anaplasma species | anaplasma species detected | ticks pathogenicity | anaplasma spp deer | capra human pathogens [SUMMARY]
[CONTENT] different anaplasma species | anaplasma species detected | ticks pathogenicity | anaplasma spp deer | capra human pathogens [SUMMARY]
[CONTENT] different anaplasma species | anaplasma species detected | ticks pathogenicity | anaplasma spp deer | capra human pathogens [SUMMARY]
[CONTENT] sequences | protein | capra | anaplasma | gene | 12 | species | surface protein | surface | major surface protein [SUMMARY]
null
[CONTENT] sequences | protein | capra | anaplasma | gene | 12 | species | surface protein | surface | major surface protein [SUMMARY]
[CONTENT] sequences | protein | capra | anaplasma | gene | 12 | species | surface protein | surface | major surface protein [SUMMARY]
[CONTENT] sequences | protein | capra | anaplasma | gene | 12 | species | surface protein | surface | major surface protein [SUMMARY]
[CONTENT] sequences | protein | capra | anaplasma | gene | 12 | species | surface protein | surface | major surface protein [SUMMARY]
[CONTENT] capra | 12 | species | anaplasma | potential | zoonotic | phagocytophilum | genus | zoonotic potential | platys [SUMMARY]
null
[CONTENT] sequences | substitutions | nucleotide substitutions | phylogenetic trees anaplasma | species based partial | values | parameter model newly generated | parameter model newly | site | model newly [SUMMARY]
[CONTENT] results | capra | human | deer | deer korea emerging | capra korean water | capra korean | known geographical host range | emerging | results expand known [SUMMARY]
[CONTENT] protein | sequences | capra | surface | major | major surface | major surface protein | surface protein | 12 | wildlife [SUMMARY]
[CONTENT] protein | sequences | capra | surface | major | major surface | major surface protein | surface protein | 12 | wildlife [SUMMARY]
[CONTENT] ||| Gram-negative ||| ||| the Republic of Korea [SUMMARY]
null
[CONTENT] Korean ||| 13.8% | 17.8% | 2015 | 2018 ||| two | A. [SUMMARY]
[CONTENT] Korean | the Republic of Korea ||| Korean | A. [SUMMARY]
[CONTENT] ||| Gram-negative ||| ||| the Republic of Korea ||| 253 | 198 | Korean | 53 | Nyctereutes | one | Chungbuk Wildlife Center | 2015-2018 ||| Anaplasma | PCR | 429 bp | 16S ||| 16S | 2 | 4 ||| ||| ||| Korean ||| 13.8% | 17.8% | 2015 | 2018 ||| two | A. ||| Korean | the Republic of Korea ||| Korean | A. [SUMMARY]
[CONTENT] ||| Gram-negative ||| ||| the Republic of Korea ||| 253 | 198 | Korean | 53 | Nyctereutes | one | Chungbuk Wildlife Center | 2015-2018 ||| Anaplasma | PCR | 429 bp | 16S ||| 16S | 2 | 4 ||| ||| ||| Korean ||| 13.8% | 17.8% | 2015 | 2018 ||| two | A. ||| Korean | the Republic of Korea ||| Korean | A. [SUMMARY]
Well-Being, Physical Activity, and Social Support in Octogenarians with Heart Failure during COVID-19 Confinement: A Mixed-Methods Study.
36430033
This study aimed to compare well-being and physical activity (PA) before and during COVID-19 confinement in older adults with heart failure (HF), to compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians, and to explore well-being, social support, attention to symptoms, and assistance needs during confinement in this population.
BACKGROUND
A mixed-methods design was performed. Well-being (Cantril Ladder of Life) and PA (International Physical Activity Questionnaire) were assessed. Semi-structured interviews were performed to assess the rest of the variables.
METHODS
120 participants were evaluated (74.16 ± 12.90 years; octogenarians = 44.16%, non-octogenarians = 55.83%). Both groups showed lower well-being and performed less PA during confinement than before (p < 0.001). Octogenarians reported lower well-being (p = 0.02), higher sedentary time (p = 0.03), and lower levels of moderate PA (p = 0.04) during confinement. Most individuals in the sample considered their well-being to have decreased during confinement, 30% reported decreased social support, 50% increased their attention to symptoms, and 60% were not satisfied with the assistance received. Octogenarians were more severely impacted during confinement than non-octogenarians in terms of well-being, attention to symptoms, and assistance needs.
RESULTS
Well-being and PA decreased during confinement, although octogenarians were more affected than non-octogenarians. Remote monitoring strategies are needed in elders with HF to control health outcomes in critical periods, especially in octogenarians.
CONCLUSIONS
[ "Humans", "Aged", "Aged, 80 and over", "COVID-19", "Social Support", "Heart Failure", "Exercise", "Sedentary Behavior" ]
9690854
1. Introduction
Heart failure (HF) is a major clinical and public health problem [1]. HF constitutes a complex debilitating syndrome with significant health consequences that trigger high burdens of mortality and hospitalization, particularly among those aged 65 and older [2]. The COVID-19 pandemic has caused an alteration and restructuring of healthcare systems. The routine of care of HF patients has abruptly changed compared to pre-COVID-19, in terms of a reduction in in-person outpatient visits and an increase in telehealth-based programs [3]. Moreover, because government-approved nationwide confinement may have led to a decrease in physical activity (PA), which could also lower well-being, there may be serious consequences for cardiometabolic morbimortality in HF patients. Therefore, COVID-19 represents a serious threat for HF patients, even more so in elders with HF [4,5,6]. On the other hand, both confinement and fear of contagion may have reduced social support and assistance for elder patients with HF during COVID-19 confinement [7]. In order to mitigate these consequences, promotion of self-management and optimization of clinical assistance of patients with HF, as well as the use of measures such as telehealth, telerehabilitation, mobile applications, and electronic heart rate devices, have been employed [8,9]. Previous trials have identified several limitations, such as the lack of physical examination, depersonalization of healthcare, and lack of familiarity with the platform [10]. Moreover, there are population groups that do not seem ready to use these telematic measures, due to very old age, poor hearing, cognitive dysfunction, language barriers, or limited education, and who may require the assistance of a family member or a caregiver [5]. Previous quantitative studies have analyzed how confinement affected patients with HF, in terms of fear of visiting the hospital [3], differences in physical activity [11,12], mental health [13], safety of telehealth programs [14], etc. Few qualitative studies have been performed in order to characterize the value of technology in supporting caregiving for individuals with HF [15], examine the caregiving experiences and coping strategies of older adults with HF during the ongoing pandemic [16], and explore patients’ and clinicians’ experiences of managing HF during the COVID-19 pandemic [17]. However, to our knowledge, no mixed-methods studies have been conducted in this regard. Mixed-methods designs offer the possibility for participants to give their opinion about issues that may not have been considered in pre-set quantitative questionnaires. In addition, the literature regarding delivery of health care has highlighted the role of patient-reported outcomes and also of experience measures [10]. Even though previous studies have been performed in patients with HF, most of them take into account the middle-aged population or, when assessing the elderly, ages range from 60 to 80 years; thus, information regarding adults older than 80 years is scarce [11,13,14,16,18,19]. Therefore, further research is needed to focus on the clinical conditions of the octogenarian population and to analyze the differences between octogenarians and non-octogenarians. 
This study aimed to (1) compare well-being and PA before and during COVID-19 confinement in older patients with HF; (2) compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians with HF; and (3) explore well-being, social support, attention to HF symptoms, assistance needs, and suggestions to improve care during COVID-19 confinement in this population.
null
null
3. Results
A total of 120 HF patients were assessed before and during COVID-19 confinement, of whom 60.80% were male and 39.20% were women. Mean (SD) age was 74.16 ± 12.90 years old. The sample consisted of non-octogenarians (<80 years old) (55.83% of the sample) and octogenarians (≥80 years) (44.16 % of the sample). All subjects had a diagnosis of HF, with a mean (SD) time of evolution of 78.73 ± 94.21 months. Table 2 shows the sociodemographic characteristics of the sample. Table 3 shows the results of well-being and physical activity before and after confinement. The sample reported significantly lower well-being (p < 0.001) during confinement than before confinement. Similarly, lower levels of light PA (p < 0.001), moderate PA (p < 0.001), and total PA (p < 0.001) were observed during confinement when compared to before confinement. In addition, sedentary time was higher during confinement than before confinement (p < 0.001). However, no significant differences were found in vigorous PA (p > 0.05). Two-factor repeated measures MANOVA revealed significant interaction effects on well-being and PA variables (F(5,114) = 3.33, p = 0.01, η2p = 0.89). Five univariate variables showed non-significant interaction effects for “group by age” and “time measurements”, including light PA, moderate PA, vigorous PA, sedentary time, and total PA (p > 0.05). However, the well-being univariate variable was found to have significant interaction effects for only “group by age” and “time measurements” (F(1,118 = 14.56, p < 0.001, η2p = 0.97). Table 4 shows the comparisons of well-being and physical activity in octogenarians vs. non-octogenarians before and after confinement. Post-hoc analysis showed that before confinement, the levels of light, moderate, vigorous, and total PA, as well as sedentary time were similar between both groups (p > 0.05). However, during confinement, octogenarians showed significantly lower values of well-being (p = 0.02) and moderate PA (p = 0.04), as well as higher sedentary time (p = 0.03) than non-octogenarians. Regarding between-time measurements comparisons, there were significant differences before and during confinement in well-being (p < 0.001, respectively), light PA (p < 0.001, respectively), moderate PA (p = 0.001 and p = 0.02, respectively), sedentary time (p < 0.001, respectively), and total PA (p < 0.001, respectively) in both non-octogenarians and octogenarians. However, there were no significant differences by time measurements in vigorous PA (p > 0.05) in any of the two groups. Regarding the qualitative data analysis, five categories were obtained: (i) alterations in well-being; (ii) changes in social support; (iii) attention to HF symptoms; (iv) assistance needs; and (v) suggestions to improve care. The identified categories were divided into 32 subcategories. The identified categories and subcategories are described as follows:(i)Alterations in well-being. This category defines the main areas of well-being affected by COVID-19. It is composed of the following six subcategories: (1) cognitive; (2) the confinement has been positive on his/her well-being; (3) the confinement has not changed his/her routines; (4) emotional; (5) physical; (6) social.(ii)Changes in social support. This category shows the main changes in social support for people with HF during COVID-19. 
It is composed of six subcategories: (1) receives care from institutions; (2) receives care from his/her relatives; (3) difficulty in communication; (4) no family visits; (5) receives outside assistance; (6) reduced social contact.(iii)Attention to HF symptoms. This category highlights the main characteristics related to the attention to HF symptoms during confinement. It is composed of ten subcategories: (1) self-diagnosis of his/her health condition; (2) avoids social contact; (3) maintains healthy habits; (4) increased dependence; (5) fear of COVID-19; (6) no changes due to COVID-19; (7) does not follow doctor’s treatment; (8) concern for the health of his/her relatives; (9) concern for his/her own health; (10) health problems.(iv)Assistance needs. This category defines the needs of care and highlights the main needs or care requirements of HF patients during confinement. This category is composed of two subcategories and four contexts that define assistance needs: (1) no assistance needs; (2) if had assistance needs: (2a) assistance from family members; (2b) outside assistance (ambulance, telephone assistance, telecare button, caregiver, person for household chores and shopping, cardiac rehabilitation, neighbors); (2c) needs more assistance than received; (2d) total dependency.(v)Suggestions to improve care. This category highlights the suggestions of HF patients in order to improve their care during COVID-19 confinement. It is composed of the following eight subcategories: (1) help from politicians and from institutions; (2) demonstrate care to the patient; (3) increase the availability of physicians; (4) increase efficiency and patient care; (5) no suggestions; (6) speed in caring for patients; (7) receive written feedback from telephone consultations; (8) satisfied with the assistance received. Alterations in well-being. This category defines the main areas of well-being affected by COVID-19. It is composed of the following six subcategories: (1) cognitive; (2) the confinement has been positive on his/her well-being; (3) the confinement has not changed his/her routines; (4) emotional; (5) physical; (6) social. Changes in social support. This category shows the main changes in social support for people with HF during COVID-19. It is composed of six subcategories: (1) receives care from institutions; (2) receives care from his/her relatives; (3) difficulty in communication; (4) no family visits; (5) receives outside assistance; (6) reduced social contact. Attention to HF symptoms. This category highlights the main characteristics related to the attention to HF symptoms during confinement. It is composed of ten subcategories: (1) self-diagnosis of his/her health condition; (2) avoids social contact; (3) maintains healthy habits; (4) increased dependence; (5) fear of COVID-19; (6) no changes due to COVID-19; (7) does not follow doctor’s treatment; (8) concern for the health of his/her relatives; (9) concern for his/her own health; (10) health problems. Assistance needs. This category defines the needs of care and highlights the main needs or care requirements of HF patients during confinement. 
This category is composed of two subcategories and four contexts that define assistance needs: (1) no assistance needs; (2) if had assistance needs: (2a) assistance from family members; (2b) outside assistance (ambulance, telephone assistance, telecare button, caregiver, person for household chores and shopping, cardiac rehabilitation, neighbors); (2c) needs more assistance than received; (2d) total dependency. Suggestions to improve care. This category highlights the suggestions of HF patients in order to improve their care during COVID-19 confinement. It is composed of the following eight subcategories: (1) help from politicians and from institutions; (2) demonstrate care to the patient; (3) increase the availability of physicians; (4) increase efficiency and patient care; (5) no suggestions; (6) speed in caring for patients; (7) receive written feedback from telephone consultations; (8) satisfied with the assistance received. In the qualitative results, the most reported subcategories in each analyzed category are described. Findings are shown by verbatim excerpts from the interviews. Regarding well-being, the 52.50% of the sample considered that COVID-19 altered their well-being to a great extent. Figure 1 shows differences between octogenarians vs. non-octogenarians in alterations in well-being caused by COVID-19 confinement. Alterations in emotional factors stand out in both groups, according to participant 45: “because it is very scary that people dye because of COVID-19. Our life has changed because we can’t go calmed down the street, nor when we meet family or friends” and participant 5: “I started teleworking and it has been overwhelming because we had more work than ever”. Moreover, physical alterations were highlighted, as participant 3 stated: “regarding physical level, I get more tired when making any effort”. It should be noted that non-octogenarians reported that confinement had less impact on their habits, as indicated by participant 164: “the confinement seemed very good to me because I don´t usually go out and I am with my husband and my daughter, so it is fine for me” and participant 133: “regarding confinement, I could climb stairs as I did before”. On the other hand, 30% of the individuals reported that social support decreased during confinement. Figure 2 shows differences between octogenarians vs. non-octogenarians in changes in social support during COVID-19 confinement. In this regard, the most important change in octogenarians was the increase of care provided by their relatives, as shown by participant 1: “my family members are much more concerned about me and about the fact that I comply with the treatments” and participant 63: “I am living with my daughter, because due to confinement, I needed her help and care and I did not want to be alone”. In contrast, non-octogenarians perceived they did not received care during confinement from physicians, or they did not receive enough information in this regard; according to participant 187: “No doctor has come to my house to give information to me regarding suggestions of care and aspects to take into account related to confinement“ and participant 156: “I have not received any support”. In addition, 50% of the individuals reported that their attention to HF symptoms increased during confinement. Figure 3 shows differences between octogenarians vs. non-octogenarians in attention to HF symptoms during COVID-19 confinement. 
There was a greater concern for health in non-octogenarians, as shown by participant 101: “I worry more about my health and I don’t want to get worse” and participant 196: “a lot of concern for me and for my wife. We don’t want to get sick, so we take more care of ourselves and pay more attention to our symptoms”. Similarly, there was a greater fear of COVID-19 in non-octogenarians, as indicated by participant 28: “I am scared, I am very afraid of COVID-19 [...] I am a patient at risk” and participant 87: “I pay more attention to my symptoms because of fear and because I have more time”. Although only slightly, octogenarians showed a greater concern for avoiding social contact, as expressed by participant 98: “I avoid contact with friends, even with relatives” and participant 178: “I only go out to the street one day a week and by car”. Regarding the assistance needs of HF patients, 60% of the individuals in the sample were not satisfied with the assistance received. As shown in Figure 4, both groups needed assistance, with a greater predominance in octogenarians, as reported by participant 1: “I have a 24-h caregiver, she runs all my errands, goes shopping, helps me in the bath, checks my pills” and participant 23: “another caregiver to clean the house and go shopping”. On the other hand, non-octogenarians considered that they needed more help than they received, as reported by participant 19: “I need more attention because my legs are swelling more, and I don’t understand why” and participant 229: “the doctor should be more available. The general practitioner does not take care of me, he only gives medication to me, we cannot go to the outpatient clinic”. Regarding suggestions to improve care (Figure 5), half of the individuals had no suggestions in this regard. Non-octogenarians suggested the need to offer greater efficiency and attention to HF patients, as participant 39 stated: “I need to continue with the rehabilitation sessions that were cancelled due to the confinement. The suggestion I make is that those of us who are more dependent and who need more attention should get it. They leave us alone at home as if nothing happened, and that is bad for us. I suggest more attention, that the nurse or physiotherapist comes to check that I am doing well. Self-care starts with good information and making sure that I am doing well, then I will be able to do it alone” and participant 114: “they should call us to follow up on our situation, it is hard for me to follow the recommendations”. In addition, those aged <80 reported a lower availability of doctors, as indicated by participant 189: “now that I have pain, doctors do not attend to me” and participant 98: “maybe if doctors were more available they could solve my doubts”, as well as reporting satisfaction with the assistance they were offered, as indicated by participant 23: “I am happy with the assistance received, and I send encouragement to the health workers” and participant 101: “very competent people, I am very happy with the treatment”. Finally, the key points of the qualitative data analysis were that most of the individuals sampled considered that their well-being decreased during confinement, 30% reported that social support decreased, 50% increased their attention to symptoms, and 60% were not satisfied with the assistance received. Octogenarians were more affected during lockdown than non-octogenarians in terms of well-being, attention to symptoms, and support needs.
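The group-by-age percentages summarized above (and displayed in Figures 1–5) were derived from the NVivo coding matrices described in the Data Analysis subsection, in which subcategories form the rows, the octogenarian/non-octogenarian classification forms the columns, and percentages are calculated per row from the coded references. The sketch below is only a rough illustration of that row-percentage logic using pandas; the table, column names, and coded references are hypothetical and do not come from the study's data.

```python
# Illustrative only: recreate the row-percentage logic of a coding matrix
# (subcategories in rows, age group in columns) from a flat table of coded
# interview references. All values below are invented.
import pandas as pd

coded_refs = pd.DataFrame({
    "subcategory": ["emotional", "physical", "emotional", "social", "physical", "emotional"],
    "group": ["octogenarian", "non-octogenarian", "non-octogenarian",
              "octogenarian", "octogenarian", "non-octogenarian"],
})

# normalize="index" expresses each subcategory's references as the percentage
# contributed by each age group, i.e., percentages computed per row.
matrix = pd.crosstab(coded_refs["subcategory"], coded_refs["group"], normalize="index") * 100
print(matrix.round(1))
```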
5. Conclusions
Well-being and PA levels decreased during COVID-19 confinement in older adults with HF. Moreover, octogenarians were more severely impacted during confinement than non-octogenarians in terms of lower well-being, lower attention to their HF symptoms, and higher assistance needs, although non-octogenarians reported lower social support than octogenarians. Thus, the development of remote monitoring strategies is needed in older adults with HF in order to maintain an adequate level of PA and control health outcomes in critical periods, especially for octogenarians with HF.
[ "2. Materials and Methods", "2.2. Design", "2.3. Data Collection", "2.4. Sample Size Calculation", "2.5. Data Analysis" ]
[ " 2.1. Participants and Setting The sample consisted of 120 participants. Participants were recruited at an outpatient clinic and were assessed between November 2019 and April 2020. Inclusion criteria included: (1) diagnosis of HF, (2) age ≥ 60 years, and (3) being cognitively capable of completing the assessments. Written informed consent was obtained before participation. The principles of voluntariness and confidentiality were respected. All participants were informed about the objectives and procedures of the study. All procedures were conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the Institutional Review Board Ethics Committee (2020-440-1).\nThe sample consisted of 120 participants. Participants were recruited at an outpatient clinic and were assessed between November 2019 and April 2020. Inclusion criteria included: (1) diagnosis of HF, (2) age ≥ 60 years, and (3) being cognitively capable of completing the assessments. Written informed consent was obtained before participation. The principles of voluntariness and confidentiality were respected. All participants were informed about the objectives and procedures of the study. All procedures were conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the Institutional Review Board Ethics Committee (2020-440-1).\n 2.2. Design A mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].\nA mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. 
In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].\n 2.3. Data Collection All participants were included in the quantitative and in the qualitative analysis. Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher.\nIn the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nWell-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].\nPhysical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nRegarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. Table 1 shows the questions of the semi-structured interview performed.\nAll participants were included in the quantitative and in the qualitative analysis. 
Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher.\nIn the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nWell-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].\nPhysical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nRegarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. Table 1 shows the questions of the semi-structured interview performed.\n 2.4. Sample Size Calculation Anticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. 
during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.\nAnticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.\n 2.5. Data Analysis The statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). The α level was set equal to or less than 0.05 for all tests.\nThe data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. 
The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nData reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].\nLayout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.\nObtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nThe statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). 
The α level was set equal to or less than 0.05 for all tests.\nThe data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nData reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].\nLayout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. 
A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.\nObtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.", "A mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].", "All participants were included in the quantitative and in the qualitative analysis. Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher.\nIn the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nWell-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). 
Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].\nPhysical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nRegarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. Table 1 shows the questions of the semi-structured interview performed.", "Anticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.", "The statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). The α level was set equal to or less than 0.05 for all tests.\nThe data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). 
Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nData reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].\nLayout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.\nObtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. 
For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed." ]
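As a companion to the IPAQ description in the Data Collection subsection above, the following is a minimal sketch of the usual conversion of reported days and minutes into MET-minutes per week, with the total score as the sum of the walking, moderate, and vigorous components. The MET weights (3.3 walking, 4.0 moderate, 8.0 vigorous) follow the commonly cited IPAQ short-form scoring convention and are an assumption here rather than values taken from the article; the function names and example inputs are illustrative.

```python
# Sketch of IPAQ short-form scoring into MET-minutes/week (assumed MET weights).
WALK_MET, MODERATE_MET, VIGOROUS_MET = 3.3, 4.0, 8.0

def met_minutes(days: int, minutes_per_day: float, met: float) -> float:
    """MET-minutes/week for one domain = MET x minutes/day x days/week."""
    return met * minutes_per_day * days

def ipaq_total(walk=(0, 0), moderate=(0, 0), vigorous=(0, 0)) -> float:
    """Total PA score = walking + moderate + vigorous MET-minutes/week."""
    return (met_minutes(*walk, WALK_MET)
            + met_minutes(*moderate, MODERATE_MET)
            + met_minutes(*vigorous, VIGOROUS_MET))

# Hypothetical respondent: walking 5 days x 30 min, moderate activity 2 days x 20 min.
print(ipaq_total(walk=(5, 30), moderate=(2, 20)))  # 495.0 + 160.0 = 655.0 MET-min/week
```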
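The a priori power analysis in the Sample Size Calculation subsection rests on the noncentral F distribution. The sketch below shows the generic achieved-power computation for an F test given its degrees of freedom and a noncentrality parameter; how f = 0.25 and the repeated-measures correlation of 0.50 map onto that noncentrality depends on the software option chosen (for example, G*Power offers several repeated-measures variants), so this illustration is not guaranteed to reproduce the reported total of 98 participants. The numbers in the usage line are hypothetical.

```python
# Generic achieved-power computation for an F test (illustrative, not a
# reproduction of the study's power calculation).
from scipy import stats

def f_test_power(df1: int, df2: int, noncentrality: float, alpha: float = 0.05) -> float:
    """P(noncentral F exceeds the critical F of the central distribution)."""
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(f_crit, df1, df2, noncentrality)

# Hypothetical inputs: 1 numerator df, 96 denominator df, noncentrality 8.0.
print(round(f_test_power(1, 96, 8.0), 3))  # roughly 0.80 under these assumptions
```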
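The quantitative analysis described above (a 2 × 2 mixed design with age group as the between-subjects factor, time as the within-subject factor, and partial eta-squared as the effect size) was run in SPSS. Purely as a sketch of that design, the same model can be fitted in Python with the pingouin package on long-format data; the data frame below is invented and the column names are illustrative, so this is not the authors' analysis script.

```python
# Sketch of a 2 (group) x 2 (time) mixed ANOVA on hypothetical long-format data.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "id":        [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":     ["octogenarian"] * 6 + ["non-octogenarian"] * 6,
    "time":      ["before", "during"] * 6,
    "wellbeing": [7, 5, 6, 4, 8, 6, 7, 6, 8, 7, 6, 5],
})

# The 'np2' column of the output is partial eta-squared
# (small >= 0.01, medium >= 0.06, large >= 0.14).
aov = pg.mixed_anova(data=df, dv="wellbeing", within="time",
                     subject="id", between="group")
print(aov)
```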
[ null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Participants and Setting", "2.2. Design", "2.3. Data Collection", "2.4. Sample Size Calculation", "2.5. Data Analysis", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "Heart failure (HF) is a major clinical and public health problem [1]. HF constitutes a complex debilitating syndrome with significant health consequences that trigger high burdens of mortality and hospitalization, particularly among those aged 65 and older [2].\nThe COVID-19 pandemic has caused an alteration and restructuration of healthcare systems. The routine of care of HF patients has abruptly changed compared to pre-COVID-19 in terms of an increase in in-person outpatient visits and an increase in telehealth-based programs [3]. Moreover, because government-approved nationwide confinement may have led to a decrease in physical activity (PA), which could also lower well-being, there may be serious consequences for cardiometabolic morbimortality in HF patients. Therefore, COVID-19 represents a serious threat for HF patients, even more in elders with HF [4,5,6]. On the other hand, both confinement and fear of contagion may have reduced social support and assistance for elder patients with HF during COVID-19 confinement [7].\nIn order to mitigate these consequences, promotion of self-management and optimization of clinical assistance of patients with HF, as well as the use of measures such as telehealth, telerehabilitation, mobile applications, and electronic heart rate devices, have been employed [8,9]. Previous trials have found several limitations such as the lack of physical examination, depersonalization of healthcare, and lack of familiarity with the platform [10]. Moreover, there are groups of population that seem not to be ready to use these telematic measures, due to very old age, poor hearing, cognitive dysfunction, language barriers, or limited education, which may require the assistance of a family member or a caregiver [5].\nPrevious quantitative studies have analyzed how confinement affected patients with HF, in terms of fear to visit the hospital [3], differences in physical activity [11,12], mental health [13], safety of telehealth programs [14], etc. Few qualitative studies have been performed in order to characterize the value of technology in supporting caregiving for individuals with HF [15], examine the caregiving experiences and coping strategies of older adults with HF during the ongoing pandemic [16], and explore patients’ and clinicians’ experiences of managing HF during COVID-19 pandemic [17]. However, to our knowledge, no mixed-methods studies have been conducted in this regard. Mix-methods designs offer the possibility for participants to give their opinion about issues that may not have been considered in pre-set quantitative questionnaires. In addition, the literature regarding delivery of health care has highlighted the role of patient-reported outcomes and also of experience measures [10].\nEven though previous studies have been performed in patients with HF, most of them take into account the middle-aged population or, when assessing the elderly, ages range from 60 to 80 years, thus information regarding adults older than 80 years old is scarce [11,13,14,16,18,19]. 
Therefore, further research is needed to focus on the clinical conditions of the octogenarian population and to analyze the differences between octogenarians and non-octogenarians.\nThis study aimed to (1) compare well-being and PA before and during COVID-19 confinement in older patients with HF; (2) compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians with HF; and (3) explore well-being, social support, attention to HF symptoms, assistance needs, and suggestions to improve care during COVID-19 confinement in this population.", " 2.1. Participants and Setting The sample consisted of 120 participants. Participants were recruited at an outpatient clinic and were assessed between November 2019 and April 2020. Inclusion criteria included: (1) diagnosis of HF, (2) age ≥ 60 years, and (3) being cognitively capable of completing the assessments. Written informed consent was obtained before participation. The principles of voluntariness and confidentiality were respected. All participants were informed about the objectives and procedures of the study. All procedures were conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the Institutional Review Board Ethics Committee (2020-440-1).\nThe sample consisted of 120 participants. Participants were recruited at an outpatient clinic and were assessed between November 2019 and April 2020. Inclusion criteria included: (1) diagnosis of HF, (2) age ≥ 60 years, and (3) being cognitively capable of completing the assessments. Written informed consent was obtained before participation. The principles of voluntariness and confidentiality were respected. All participants were informed about the objectives and procedures of the study. All procedures were conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the Institutional Review Board Ethics Committee (2020-440-1).\n 2.2. Design A mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].\nA mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. 
In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].\n 2.3. Data Collection All participants were included in the quantitative and in the qualitative analysis. Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher.\nIn the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nWell-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].\nPhysical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nRegarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. 
Table 1 shows the questions of the semi-structured interview performed.\nAll participants were included in the quantitative and in the qualitative analysis. Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher.\nIn the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nWell-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].\nPhysical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nRegarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. Table 1 shows the questions of the semi-structured interview performed.\n 2.4. Sample Size Calculation Anticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. 
during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.\nAnticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.\n 2.5. Data Analysis The statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). The α level was set equal to or less than 0.05 for all tests.\nThe data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. 
The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nData reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].\nLayout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.\nObtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nThe statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). 
The α level was set equal to or less than 0.05 for all tests.\nThe data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.\nData reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].\nLayout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was been used. 
", "The sample consisted of 120 participants. Participants were recruited at an outpatient clinic and were assessed between November 2019 and April 2020. Inclusion criteria included: (1) diagnosis of HF, (2) age ≥ 60 years, and (3) being cognitively capable of completing the assessments. Written informed consent was obtained before participation. The principles of voluntariness and confidentiality were respected. All participants were informed about the objectives and procedures of the study. All procedures were conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the Institutional Review Board Ethics Committee (2020-440-1).", "A mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].", "All participants were included in the quantitative and in the qualitative analysis. Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher.\nIn the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ).
It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32].\nRegarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. Table 1 shows the questions of the semi-structured interview performed.", "Anticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.", "The statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14).
The α level was set equal to or less than 0.05 for all tests.\nThe data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions. This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.
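As an open-source illustration of the quantitative analysis described above, the sketch below runs a 2 (age group, between) × 2 (time, within) mixed ANOVA on a single outcome with the pingouin package, together with Levene's test for homoscedasticity and Bonferroni-corrected post-hoc comparisons. This is only a rough analogue of the SPSS procedure used by the authors (it analyzes one dependent variable at a time rather than the full MANOVA), and the data frame layout, column names, and simulated values are assumptions for the example.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical long-format data: one well-being score per participant and time point.
n = 120
ids = np.repeat(np.arange(n), 2)
group = np.repeat(rng.choice(["<80", ">=80"], size=n), 2)
time = np.tile(["before", "during"], n)
wellbeing = rng.normal(7.0, 1.5, size=2 * n) - (time == "during") * 1.0
df = pd.DataFrame({"id": ids, "group": group, "time": time, "wellbeing": wellbeing})

# Levene's test for homogeneity of variances across the two age groups.
print(pg.homoscedasticity(df, dv="wellbeing", group="group"))

# Mixed ANOVA; the output table reports partial eta-squared ("np2") for each
# effect. With only two time points, sphericity holds trivially, so Mauchly's
# test is not needed in this toy example.
aov = pg.mixed_anova(data=df, dv="wellbeing", within="time",
                     subject="id", between="group")
print(aov)

# Bonferroni-corrected post-hoc comparisons (this function is called
# pairwise_ttests in older pingouin releases).
post = pg.pairwise_tests(data=df, dv="wellbeing", within="time",
                         subject="id", between="group", padjust="bonf")
print(post)
```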
", "A total of 120 HF patients were assessed before and during COVID-19 confinement, of whom 60.80% were male and 39.20% were female. Mean (SD) age was 74.16 ± 12.90 years. The sample consisted of non-octogenarians (<80 years old) (55.83% of the sample) and octogenarians (≥80 years) (44.16% of the sample). All subjects had a diagnosis of HF, with a mean (SD) time of evolution of 78.73 ± 94.21 months. Table 2 shows the sociodemographic characteristics of the sample.\nTable 3 shows the results of well-being and physical activity before and during confinement. The sample reported significantly lower well-being (p < 0.001) during confinement than before confinement. Similarly, lower levels of light PA (p < 0.001), moderate PA (p < 0.001), and total PA (p < 0.001) were observed during confinement when compared to before confinement. In addition, sedentary time was higher during confinement than before confinement (p < 0.001). However, no significant differences were found in vigorous PA (p > 0.05).\nTwo-factor repeated measures MANOVA revealed significant interaction effects on well-being and PA variables (F(5,114) = 3.33, p = 0.01, η2p = 0.89). Five univariate variables showed non-significant interaction effects for “group by age” and “time measurements”, including light PA, moderate PA, vigorous PA, sedentary time, and total PA (p > 0.05). However, the well-being univariate variable showed a significant interaction effect for “group by age” and “time measurements” (F(1,118) = 14.56, p < 0.001, η2p = 0.97). Table 4 shows the comparisons of well-being and physical activity in octogenarians vs. non-octogenarians before and during confinement. Post-hoc analysis showed that before confinement, the levels of light, moderate, vigorous, and total PA, as well as sedentary time, were similar between both groups (p > 0.05). However, during confinement, octogenarians showed significantly lower values of well-being (p = 0.02) and moderate PA (p = 0.04), as well as higher sedentary time (p = 0.03), than non-octogenarians. Regarding between-time comparisons, there were significant differences before and during confinement in well-being (both p < 0.001), light PA (both p < 0.001), moderate PA (p = 0.001 and p = 0.02, respectively), sedentary time (both p < 0.001), and total PA (both p < 0.001) in both non-octogenarians and octogenarians. However, there were no significant differences by time measurements in vigorous PA (p > 0.05) in either group.\nRegarding the qualitative data analysis, five categories were obtained: (i) alterations in well-being; (ii) changes in social support; (iii) attention to HF symptoms; (iv) assistance needs; and (v) suggestions to improve care.
The identified categories were divided into 32 subcategories. The identified categories and subcategories are described as follows:(i)Alterations in well-being. This category defines the main areas of well-being affected by COVID-19. It is composed of the following six subcategories: (1) cognitive; (2) the confinement has been positive on his/her well-being; (3) the confinement has not changed his/her routines; (4) emotional; (5) physical; (6) social.(ii)Changes in social support. This category shows the main changes in social support for people with HF during COVID-19. It is composed of six subcategories: (1) receives care from institutions; (2) receives care from his/her relatives; (3) difficulty in communication; (4) no family visits; (5) receives outside assistance; (6) reduced social contact.(iii)Attention to HF symptoms. This category highlights the main characteristics related to the attention to HF symptoms during confinement. It is composed of ten subcategories: (1) self-diagnosis of his/her health condition; (2) avoids social contact; (3) maintains healthy habits; (4) increased dependence; (5) fear of COVID-19; (6) no changes due to COVID-19; (7) does not follow doctor’s treatment; (8) concern for the health of his/her relatives; (9) concern for his/her own health; (10) health problems.(iv)Assistance needs. This category defines the needs of care and highlights the main needs or care requirements of HF patients during confinement. This category is composed of two subcategories and four contexts that define assistance needs: (1) no assistance needs; (2) if had assistance needs: (2a) assistance from family members; (2b) outside assistance (ambulance, telephone assistance, telecare button, caregiver, person for household chores and shopping, cardiac rehabilitation, neighbors); (2c) needs more assistance than received; (2d) total dependency.(v)Suggestions to improve care. This category highlights the suggestions of HF patients in order to improve their care during COVID-19 confinement. It is composed of the following eight subcategories: (1) help from politicians and from institutions; (2) demonstrate care to the patient; (3) increase the availability of physicians; (4) increase efficiency and patient care; (5) no suggestions; (6) speed in caring for patients; (7) receive written feedback from telephone consultations; (8) satisfied with the assistance received.
In the qualitative results, the most reported subcategories in each analyzed category are described. Findings are shown by verbatim excerpts from the interviews.\nRegarding well-being, 52.50% of the sample considered that COVID-19 altered their well-being to a great extent. Figure 1 shows differences between octogenarians vs. non-octogenarians in alterations in well-being caused by COVID-19 confinement. Alterations in emotional factors stand out in both groups, according to participant 45: “because it is very scary that people die because of COVID-19. Our life has changed because we can’t go calmly down the street, nor when we meet family or friends” and participant 5: “I started teleworking and it has been overwhelming because we had more work than ever”. Moreover, physical alterations were highlighted, as participant 3 stated: “regarding physical level, I get more tired when making any effort”. It should be noted that non-octogenarians reported that confinement had less impact on their habits, as indicated by participant 164: “the confinement seemed very good to me because I don’t usually go out and I am with my husband and my daughter, so it is fine for me” and participant 133: “regarding confinement, I could climb stairs as I did before”.\nOn the other hand, 30% of the individuals reported that social support decreased during confinement. Figure 2 shows differences between octogenarians vs. non-octogenarians in changes in social support during COVID-19 confinement. In this regard, the most important change in octogenarians was the increase of care provided by their relatives, as shown by participant 1: “my family members are much more concerned about me and about the fact that I comply with the treatments” and participant 63: “I am living with my daughter, because due to confinement, I needed her help and care and I did not want to be alone”.
In contrast, non-octogenarians perceived that they did not receive care from physicians during confinement, or that they did not receive enough information in this regard; according to participant 187: “No doctor has come to my house to give information to me regarding suggestions of care and aspects to take into account related to confinement” and participant 156: “I have not received any support”.\nIn addition, 50% of the individuals reported that their attention to HF symptoms increased during confinement. Figure 3 shows differences between octogenarians vs. non-octogenarians in attention to HF symptoms during COVID-19 confinement. There was a greater concern for health in non-octogenarians, as shown by participant 101: “I worry more about my health and I don’t want to get worse” and participant 196: “a lot of concern for me and for my wife. We don’t want to get sick, so we take more care of ourselves and pay more attention to our symptoms”. Similarly, there was a greater fear of COVID-19 in non-octogenarians, as indicated by participant 28: “I am scared, I am very afraid of COVID-19 [...] I am a patient at risk” and participant 87: “I pay more attention to my symptoms because of fear and because I have more time”. Although only slightly, octogenarians showed a greater concern for avoiding social contact, as expressed by participant 98: “I avoid contact with friends, even with relatives” and participant 178: “I only go out to the street one day a week and by car”.\nRegarding assistance needs of HF patients, 60% of individuals in the sample were not satisfied with the assistance received. As shown in Figure 4, both groups needed assistance, with a greater predominance in octogenarians, as reported by participant 1: “I have a 24-h caregiver, she runs all my errands, goes shopping, helps me in the bath, checks my pills” and participant 23: “another caregiver to clean the house and go shopping”. On the other hand, non-octogenarians considered that they needed more help than they received, as reported by participant 19: “I need more attention because my legs are swelling more, and I don’t understand why” and participant 229: “the doctor is more available. The general practitioner does not take care of me, he only gives medication to me, we cannot go to the outpatient clinic”.\nRegarding suggestions to improve care (Figure 5), half of the individuals had no suggestions in this regard. Non-octogenarians suggested the need to offer greater efficiency and attention to HF patients, as participant 39 assured: “I need to continue with the rehabilitation sessions that were cancelled due to the confinement. The suggestion I make is that those of us who are more dependent and who need more attention should get it. They leave us alone at home as if nothing happened, and that is bad for us. I suggest more attention, that the nurse or physiotherapist comes to check that I am doing well. Self-care starts with good information and making sure that I am doing well, then I will be able to do it alone” and participant 114: “they should call us to follow up our situation, it is hard for me to follow the recommendations”.
However, those aged <80 also pointed to the low availability of doctors, as indicated by participant 189: “now that I have pain and doctors do not attend me” and participant 98: “maybe if doctors were more available they could solve my doubts”, as well as reporting satisfaction with the assistance they were offered, as indicated by participant 23: “I am happy with the assistance received, and I send encouragement to the health workers” and participant 101: “very competent people, I am very happy with the treatment”.\nFinally, the key points of the qualitative data analysis were that most of the individuals sampled considered that well-being decreased during confinement, 30% reported that social support decreased, 50% increased their attention to symptoms, and 60% were not satisfied with the assistance received. Octogenarians were more affected during lockdown than non-octogenarians in terms of well-being, attention to symptoms, and support needs.", "The findings of this study showed that well-being and PA levels of older adults with HF decreased during COVID-19 confinement. Regarding the comparison between octogenarians and non-octogenarians, octogenarians reported lower well-being, higher sedentary time, and lower levels of moderate PA than non-octogenarians during confinement. Moreover, octogenarians were more severely impacted during confinement than non-octogenarians in terms of exhibiting lower well-being. These results are in accordance with the patient-reported experiences, where octogenarians and non-octogenarians explained that confinement affected their well-being, especially regarding emotional status and physical fitness. Furthermore, octogenarian patients of this study reported that confinement affected several aspects of their daily lives. These results are especially important because previous studies have found that patients with HF are at a higher risk of developing mental health problems and seem to have a lower capacity to enjoy daily activities compared with people without HF symptoms [13]. In addition, in the study of Rantanen et al. [40], people aged over 85 years, both men and women, reported significantly lower quality of life during COVID-19 confinement. Moreover, those with lower cognitive functioning, lower emotional stability, and living alone may be at risk of poorer self-reported mental and physical health [41]. Therefore, it is important to stress that nurses and other cardiac providers should identify vulnerabilities in sustained HF self-care behaviors and well-being among older adults, especially among octogenarians, to improve their well-being and physical activity and to improve satisfaction with the care they receive [19].\nFurthermore, the decrease in PA and the increase in sedentary time that we found in older adults with HF during confinement are in line with the results reported by Vetrovsky et al. [11], Angelo-Brasca et al. [42], and Caraballo et al. [12], who also found a reduction of PA levels in HF patients. It is important to highlight the decrease in PA levels because, although physical condition was not evaluated, previous studies suggest that such a decrease could lead to an important deterioration of physical fitness, which is an important predictor of HF morbidity and mortality in this population [43,44]. In turn, a higher functional status is a predictor of an enhanced quality of life in octogenarians [45].
Therefore, the promotion of active aging seems to be a key point in order to improve or sustain quality of life, especially during social distancing [40].\nA lower attention to their HF symptoms and higher assistance needs were found in octogenarians. However, non-octogenarians reported lower social support than octogenarians. In our sample, it seems that octogenarians reported an increase of care provided by their relatives, which may have influenced their social support; thus, octogenarians presented higher social support than non-octogenarians. In this regard, our results are not in line with those reported by Golden et al. [46], who found that loneliness increased with age. In addition, fear of contagion implied that patients preferred telephone visits, although both groups, especially octogenarians, reported that they needed assistance. These results are similar to those obtained by Mcilvennan et al. [3], who reported patients’ expressions of fear, reluctance to visit the hospital, and lower early symptom reporting. However, our results evidenced that participants were mostly satisfied with the care they received. In contrast, Raman and Vyselaar [10] reported that patients preferred in-person visits since telehealth visits were considered as presenting inferior quality due to the lack of physical examination, emotional detachment from care providers, and general unfamiliarity. Independently of participants´ opinions, telehealth models for outpatients with HF have been demonstrated to be safe and seem not to increase mortality, suggesting that telehealth outpatient visits in patients with HF could be safely incorporated into clinical practice [14,17].\nGiven the lack of studies that use a mixed-methods methodology to investigate well-being, PA, social support, attention to symptoms, assistance needs, and suggestions to improve care in older patients with HF during COVID-19 confinement, this study aimed to fill this gap for a better understanding of this condition. The comparison by age in octogenarians vs. non-octogenarians and the qualitative approach, which involved in-depth interviews with HF patients, led to the collection of relevant data not normally discussed or shared in healthcare research.\nHowever, there are several limitations in this study. First, the study was performed in a single center; therefore, results may not be generalizable to the entire older population with HF. Additionally, the findings may not be representative of all patients, since our results can only be generalized to countries with similar restrictions to those established in Spain, such as Italy, France, or the Czech Republic. Second, the small sample size, although not unusual in qualitative research that requires extensive and detailed analysis of each patient, may not be representative of people with HF in Spain. Nevertheless, further studies should be performed with larger sample sizes. Future studies should also investigate the consequences of this reduction of PA and implement appropriate protocols to ensure good health outcomes in older adults, considering well-being, social support, attention to HF symptoms, and assistance needs during critical periods.", "Well-being and PA levels decreased during COVID-19 confinement in older adults with HF. 
Moreover, octogenarians were more severely impacted during confinement than non-octogenarians in terms of lower well-being, lower attention to their HF symptoms, and higher assistance needs, although non-octogenarians reported lower social support than octogenarians. Thus, the development of remote monitoring strategies is needed in older adults with HF in order to maintain an adequate level of PA and control health outcomes in critical periods, especially for octogenarians with HF." ]
[ "intro", null, "subjects", null, null, null, null, "results", "discussion", "conclusions" ]
[ "heart failure", "COVID-19", "confinement", "well-being", "physical activity", "mixed-methods study" ]
1. Introduction: Heart failure (HF) is a major clinical and public health problem [1]. HF constitutes a complex debilitating syndrome with significant health consequences that trigger high burdens of mortality and hospitalization, particularly among those aged 65 and older [2]. The COVID-19 pandemic has caused an alteration and restructuring of healthcare systems. The routine of care of HF patients has abruptly changed compared to pre-COVID-19 in terms of a decrease in in-person outpatient visits and an increase in telehealth-based programs [3]. Moreover, because government-approved nationwide confinement may have led to a decrease in physical activity (PA), which could also lower well-being, there may be serious consequences for cardiometabolic morbidity and mortality in HF patients. Therefore, COVID-19 represents a serious threat for HF patients, even more so for older adults with HF [4,5,6]. On the other hand, both confinement and fear of contagion may have reduced social support and assistance for older patients with HF during COVID-19 confinement [7]. In order to mitigate these consequences, promotion of self-management and optimization of clinical assistance of patients with HF, as well as the use of measures such as telehealth, telerehabilitation, mobile applications, and electronic heart rate devices, have been employed [8,9]. Previous trials have found several limitations such as the lack of physical examination, depersonalization of healthcare, and lack of familiarity with the platform [10]. Moreover, there are population groups that do not seem ready to use these telematic measures, due to very old age, poor hearing, cognitive dysfunction, language barriers, or limited education, and who may require the assistance of a family member or a caregiver [5]. Previous quantitative studies have analyzed how confinement affected patients with HF, in terms of fear of visiting the hospital [3], differences in physical activity [11,12], mental health [13], safety of telehealth programs [14], etc. Few qualitative studies have been performed in order to characterize the value of technology in supporting caregiving for individuals with HF [15], examine the caregiving experiences and coping strategies of older adults with HF during the ongoing pandemic [16], and explore patients’ and clinicians’ experiences of managing HF during the COVID-19 pandemic [17]. However, to our knowledge, no mixed-methods studies have been conducted in this regard. Mixed-methods designs offer the possibility for participants to give their opinion about issues that may not have been considered in pre-set quantitative questionnaires. In addition, the literature regarding delivery of health care has highlighted the role of patient-reported outcomes and also of experience measures [10]. Even though previous studies have been performed in patients with HF, most of them focus on the middle-aged population or, when assessing older adults, on ages ranging from 60 to 80 years; thus, information regarding adults older than 80 years is scarce [11,13,14,16,18,19]. Therefore, further research is needed to focus on the clinical conditions of the octogenarian population and to analyze the differences between octogenarians and non-octogenarians.
This study aimed to (1) compare well-being and PA before and during COVID-19 confinement in older patients with HF; (2) compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians with HF; and (3) explore well-being, social support, attention to HF symptoms, assistance needs, and suggestions to improve care during COVID-19 confinement in this population. 2. Materials and Methods: 2.1. Participants and Setting The sample consisted of 120 participants. Participants were recruited at an outpatient clinic and were assessed between November 2019 and April 2020. Inclusion criteria included: (1) diagnosis of HF, (2) age ≥ 60 years, and (3) being cognitively capable of completing the assessments. Written informed consent was obtained before participation. The principles of voluntariness and confidentiality were respected. All participants were informed about the objectives and procedures of the study. All procedures were conducted in accordance with the principles of the Declaration of Helsinki. The study was approved by the Institutional Review Board Ethics Committee (2020-440-1). 2.2. Design A mixed concurrent triangulation design (dItrIaC) was carried out according to Creswell et al., Hernández et al., and García et al. [20,21,22]. These authors indicated that this design aims to confirm results, cross-validate results between quantitative and qualitative data, and apply the advantages of each method. Both quantitative and qualitative phases have equal importance in the research. In addition, both methods were applied at the same time, thus data collection and analysis were performed at the same time. Regarding the quantitative phase, a descriptive and cross-sectional design was carried out. Regarding the qualitative phase, a phenomenological design was used, in which the immediate subjective experience was analyzed as the basis of knowledge, whilst phenomena were studied from the perspective of the participants, and their referential framework was preserved. In addition, in the qualitative phase, interest was maintained in knowing how people experience and interpret the social world, which they construct in interaction through language [23].
2.3. Data Collection All participants were included in the quantitative and in the qualitative analysis. Sociodemographic data (age, sex, time since HF diagnosis, education, marital status) were obtained by clinical interview through a trained researcher. In the quantitative phase, the following outcomes were evaluated before and during COVID-19 confinement:(1)Well-being was assessed using the Cantril Ladder of Life [19], a single-item indicator with a ladder of steps numbered from 0 to 10 (0 = the worst possible life, 10 = the best possible life). Participants answered on which step they stand at present. Cantril Ladder of Life validity and test-retest coefficients of 0.70 have been reported in previous studies in patients with acute coronary events. Several studies have previously used this scale in HF patients [24,25,26,27,28].(2)Physical activity was evaluated using the International Physical Activity Questionnaire (IPAQ). It contains seven items for identifying frequency and duration of low, moderate, and vigorous PA as well as inactivity during the past week. Frequency is measured in days and duration in hours and minutes. The answers to the questions were transformed into metabolic equivalent of task (MET-minutes). The total PA score is the sum of vigorous, moderate, and walking PA scores. Typical IPAQ correlations with an accelerometer were 0.80 for reliability [29]. Several studies have previously used this questionnaire in HF patients [30,31,32]. Regarding the qualitative phase, semi-structured interviews were performed. Participants were invited to share information before and during COVID-19 confinement, as well as examples of situations they experienced. Table 1 shows the questions of the semi-structured interview performed.
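To illustrate how IPAQ answers are typically converted into MET-minutes per week, the snippet below applies the MET weights of the standard IPAQ scoring protocol (3.3 for walking, 4.0 for moderate, and 8.0 for vigorous activity). The example values are invented, and any additional cleaning or truncation rules applied by the authors are not described in the text, so this is only an illustrative sketch.

```python
# Hypothetical one-week IPAQ answers for a single participant:
# (days per week, minutes per day) for each intensity level.
walking = (5, 30)
moderate = (3, 20)
vigorous = (0, 0)

# MET weights taken from the standard IPAQ scoring protocol (assumed here).
MET_WALKING, MET_MODERATE, MET_VIGOROUS = 3.3, 4.0, 8.0

def met_minutes_per_week(met: float, days: int, minutes: int) -> float:
    """MET-minutes per week for one intensity level."""
    return met * days * minutes

walking_score = met_minutes_per_week(MET_WALKING, *walking)
moderate_score = met_minutes_per_week(MET_MODERATE, *moderate)
vigorous_score = met_minutes_per_week(MET_VIGOROUS, *vigorous)

# As described in the text, the total PA score is the sum of the walking,
# moderate, and vigorous MET-minute scores.
total_pa = walking_score + moderate_score + vigorous_score
print(walking_score, moderate_score, vigorous_score, total_pa)
```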
2.4. Sample Size Calculation Anticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, the a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. during COVID-19 confinement) yielded a needed total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), correlation among repeated measures = 0.50.
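For readers who want to run a comparable a priori calculation in open-source software, the sketch below uses statsmodels' F-test power solver for a between-groups ANOVA with the same f, α, and power. This is only a rough analogue of the G*Power-style within-between interaction module: it ignores the correlation among repeated measures (0.50), so the resulting total N will be larger than the 98 participants reported above.

```python
from statsmodels.stats.power import FTestAnovaPower

# Design parameters taken from the text; the repeated-measures correlation
# cannot be passed to this simple between-groups solver.
effect_size = 0.25   # Cohen's f
alpha = 0.05
power = 0.80
k_groups = 2         # octogenarians vs. non-octogenarians

solver = FTestAnovaPower()
n_total = solver.solve_power(effect_size=effect_size, alpha=alpha,
                             power=power, k_groups=k_groups)
print(f"Approximate total sample size (ignoring repeated measures): {n_total:.0f}")
```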
2.5. Data Analysis The statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL., USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor “groups by age” with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor “time measurements” with two categories (i.e., before and during COVID-19 confinement) was performed in the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. We evaluated the assumption of homoscedasticity using Levene’s test and the sphericity using Mauchly’s test. Partial eta-squared (η2p) values within the repeated measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). The α level was set equal to or less than 0.05 for all tests. The data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically with the use of NVivo software version 12.0 (QSR International, Inc., Burlington, MA, USA). The treatment of the data followed the classical qualitative data analysis system [34,35]. This model involved the following steps [36]:(1)Data reduction. Information was divided into grammatical content units (paragraphs and sentences). Inductive content analysis (elaborating categories from the reading and analysis of the collected material without taking into consideration the initial categories) and deductive content analysis (categories are established a priori whilst the researcher adapts each unit to an already existing category) were performed. The assessment of content belonging to the corresponding category/subcategory was performed based on two levels, intracoder and intercoder, until agreement was reached among the members of the research team [37].(2)Layout and grouping. Different graphic resources and information were obtained using CAQDAS as follows: relationships and deep structure of the text [38], graphic representations or visual images of the relationships between concepts [39], and matrices/double-entry tables in which verbal information was included according to the aspects specified by rows and columns [34]. For the calculation of the analysis of the frequency of concurrence of the categories and subcategories, the NVivo coding matrix tool was used. A matrix was made for each category, taking into account that the subcategories were placed in the rows, whilst the classification of octogenarian/non-octogenarian was placed in the columns. The percentages of each row were calculated based on the total cell references of each subcategory.(3)Obtention of results and verification of conclusions.
This phase involved the use of metaphors and analogies, as well as the inclusion of vignettes and narrative fragments, culminating with the aforementioned triangulation strategies. For textual data, description, interpretation, code counting, concurrence, comparison, and contextualization were performed. For data transformed into numerical values, statistical techniques, comparison, and contextualization were performed.
2.4. Sample Size Calculation: Anticipating medium-sized differences in well-being and PA assessed before and during COVID-19 confinement in older adults with HF, an a priori power analysis (within-between interaction ANOVA) with two independent groups (octogenarians vs. non-octogenarians) and two measurement times (before vs. during COVID-19 confinement) yielded a required total sample size of 98 participants using the following settings: f = 0.25, alpha = 0.05 (p-value), 1 − beta = 0.80 (power), and correlation among repeated measures = 0.50.
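A minimal sketch of how such an a priori calculation can be scripted with the noncentral F distribution is shown below, assuming the G*Power conventions for repeated-measures designs (Faul et al.). Which noncentrality formula applies depends on the exact effect selected in G*Power (between-subjects, within-subjects, or interaction), so the function illustrates the calculation rather than reproducing the authors' settings.

```python
from scipy.stats import f as f_dist, ncf

def rm_anova_power(n_total, k, m, f_eff, rho, alpha=0.05, effect="interaction", eps=1.0):
    """Achieved power for a k-group, m-measurement repeated-measures ANOVA effect."""
    if effect == "between":
        lam = n_total * m * f_eff**2 / (1 + (m - 1) * rho)
        df1, df2 = k - 1, n_total - k
    else:  # "within" or "interaction"
        lam = n_total * m * f_eff**2 * eps / (1 - rho)
        df1 = (m - 1) * eps if effect == "within" else (k - 1) * (m - 1) * eps
        df2 = (n_total - k) * (m - 1) * eps
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

# Achieved power at the reported total N = 98 with f = 0.25, alpha = 0.05, rho = 0.50
for eff in ("between", "within", "interaction"):
    print(eff, round(rm_anova_power(98, k=2, m=2, f_eff=0.25, rho=0.50, effect=eff), 3))
```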
2.5. Data Analysis: The statistical analysis in the quantitative phase was performed using SPSS version 26.0 (IBM SPSS, Inc., Chicago, IL, USA). An ANOVA test was used to explore differences between time measurements (i.e., before and during COVID-19 confinement) in the well-being and PA variables in the entire sample. Additionally, a two-factor mixed multivariate analysis of variance (MANOVA) with a between-subjects factor "group by age" with two categories (i.e., non-octogenarians (<80 years) and octogenarians (≥80 years)) and a within-subject factor "time measurements" with two categories (i.e., before and during COVID-19 confinement) was performed on the abovementioned variables. Post-hoc analyses were requested using the Bonferroni correction. The assumption of homoscedasticity was evaluated using Levene's test and sphericity using Mauchly's test. Partial eta-squared (η2p) values within the repeated-measures ANOVA were used to express the effect size (i.e., small ≥ 0.01, medium ≥ 0.06, large ≥ 0.14). The α level was set at 0.05 for all tests. The data analysis in the qualitative phase was supported by computer-assisted qualitative data analysis software (CAQDAS) [33], specifically NVivo version 12.0 (QSR International, Inc., Burlington, MA, USA), and followed the classical three-step procedure described above (data reduction; layout and grouping; obtention of results and verification of conclusions) [34,35,36,37,38,39].
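For the quantitative phase, the sketch below shows how the two-factor mixed ANOVA with Levene's test and Bonferroni-corrected post-hoc comparisons could be run with the open-source pingouin package rather than the SPSS procedures actually used; the synthetic data and column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data: one row per participant and time point (illustrative only)
rng = np.random.default_rng(0)
n = 120
age_group = rng.choice(["octogenarian", "non-octogenarian"], n)
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "age_group": np.repeat(age_group, 2),
    "time": np.tile(["before", "during"], n),
    "wellbeing": rng.normal(7, 2, 2 * n).clip(0, 10),
})

# Levene's test for homogeneity of variances at one time point
print(pg.homoscedasticity(df[df["time"] == "before"], dv="wellbeing", group="age_group"))

# Two-factor mixed ANOVA: between = age group, within = time
# (output includes partial eta-squared 'np2'; with only two within levels,
# sphericity is trivially satisfied)
print(pg.mixed_anova(data=df, dv="wellbeing", within="time",
                     between="age_group", subject="id"))

# Bonferroni-corrected post-hoc comparisons (pairwise_ttests in older pingouin versions)
print(pg.pairwise_tests(data=df, dv="wellbeing", within="time",
                        between="age_group", subject="id", padjust="bonf"))
```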
3. Results: A total of 120 HF patients were assessed before and during COVID-19 confinement, of whom 60.80% were men and 39.20% were women. Mean (SD) age was 74.16 ± 12.90 years. The sample consisted of non-octogenarians (<80 years old; 55.83% of the sample) and octogenarians (≥80 years; 44.16% of the sample). All subjects had a diagnosis of HF, with a mean (SD) time of evolution of 78.73 ± 94.21 months. Table 2 shows the sociodemographic characteristics of the sample. Table 3 shows the results of well-being and physical activity before and during confinement. The sample reported significantly lower well-being (p < 0.001) during confinement than before confinement. Similarly, lower levels of light PA (p < 0.001), moderate PA (p < 0.001), and total PA (p < 0.001) were observed during confinement when compared to before confinement. In addition, sedentary time was higher during confinement than before confinement (p < 0.001). However, no significant differences were found in vigorous PA (p > 0.05). The two-factor repeated measures MANOVA revealed significant interaction effects on the well-being and PA variables (F(5,114) = 3.33, p = 0.01, η2p = 0.89). Five univariate variables showed non-significant interaction effects for "group by age" and "time measurements", including light PA, moderate PA, vigorous PA, sedentary time, and total PA (p > 0.05). However, the well-being univariate variable showed a significant "group by age" by "time measurements" interaction effect (F(1,118) = 14.56, p < 0.001, η2p = 0.97). Table 4 shows the comparisons of well-being and physical activity in octogenarians vs. non-octogenarians before and during confinement. Post-hoc analysis showed that before confinement, the levels of light, moderate, vigorous, and total PA, as well as sedentary time, were similar in both groups (p > 0.05). However, during confinement, octogenarians showed significantly lower values of well-being (p = 0.02) and moderate PA (p = 0.04), as well as higher sedentary time (p = 0.03), than non-octogenarians. Regarding comparisons between time measurements, there were significant differences before vs. during confinement in well-being (both p < 0.001), light PA (both p < 0.001), moderate PA (p = 0.001 and p = 0.02, respectively), sedentary time (both p < 0.001), and total PA (both p < 0.001) in both non-octogenarians and octogenarians. However, there were no significant differences by time measurements in vigorous PA (p > 0.05) in either group. Regarding the qualitative data analysis, five categories were obtained: (i) alterations in well-being; (ii) changes in social support; (iii) attention to HF symptoms; (iv) assistance needs; and (v) suggestions to improve care.
The identified categories were divided into 32 subcategories. The categories and subcategories are described as follows: (i) Alterations in well-being. This category defines the main areas of well-being affected by COVID-19. It is composed of six subcategories: (1) cognitive; (2) the confinement has been positive for his/her well-being; (3) the confinement has not changed his/her routines; (4) emotional; (5) physical; (6) social. (ii) Changes in social support. This category shows the main changes in social support for people with HF during COVID-19. It is composed of six subcategories: (1) receives care from institutions; (2) receives care from his/her relatives; (3) difficulty in communication; (4) no family visits; (5) receives outside assistance; (6) reduced social contact. (iii) Attention to HF symptoms. This category highlights the main characteristics related to the attention to HF symptoms during confinement. It is composed of ten subcategories: (1) self-diagnosis of his/her health condition; (2) avoids social contact; (3) maintains healthy habits; (4) increased dependence; (5) fear of COVID-19; (6) no changes due to COVID-19; (7) does not follow the doctor's treatment; (8) concern for the health of his/her relatives; (9) concern for his/her own health; (10) health problems. (iv) Assistance needs. This category defines the needs of care and highlights the main care requirements of HF patients during confinement. It is composed of two subcategories and four contexts that define assistance needs: (1) no assistance needs; (2) if assistance was needed: (2a) assistance from family members; (2b) outside assistance (ambulance, telephone assistance, telecare button, caregiver, person for household chores and shopping, cardiac rehabilitation, neighbors); (2c) needs more assistance than received; (2d) total dependency. (v) Suggestions to improve care. This category highlights the suggestions of HF patients to improve their care during COVID-19 confinement. It is composed of eight subcategories: (1) help from politicians and institutions; (2) demonstrate care to the patient; (3) increase the availability of physicians; (4) increase efficiency and patient care; (5) no suggestions; (6) speed in caring for patients; (7) receive written feedback from telephone consultations; (8) satisfied with the assistance received.
In the qualitative results, the most frequently reported subcategories in each analyzed category are described, and the findings are illustrated with verbatim excerpts from the interviews. Regarding well-being, 52.50% of the sample considered that COVID-19 altered their well-being to a great extent. Figure 1 shows the differences between octogenarians and non-octogenarians in alterations in well-being caused by COVID-19 confinement. Alterations in emotional factors stand out in both groups, according to participant 45: "because it is very scary that people die because of COVID-19. Our life has changed because we can't go calmly down the street, nor when we meet family or friends" and participant 5: "I started teleworking and it has been overwhelming because we had more work than ever". Moreover, physical alterations were highlighted, as participant 3 stated: "regarding physical level, I get more tired when making any effort". It should be noted that non-octogenarians reported that confinement had less impact on their habits, as indicated by participant 164: "the confinement seemed very good to me because I don't usually go out and I am with my husband and my daughter, so it is fine for me" and participant 133: "regarding confinement, I could climb stairs as I did before". On the other hand, 30% of the individuals reported that social support decreased during confinement. Figure 2 shows the differences between octogenarians and non-octogenarians in changes in social support during COVID-19 confinement. In this regard, the most important change in octogenarians was the increase in care provided by their relatives, as shown by participant 1: "my family members are much more concerned about me and about the fact that I comply with the treatments" and participant 63: "I am living with my daughter, because due to confinement, I needed her help and care and I did not want to be alone".
In contrast, non-octogenarians perceived that they did not receive care from physicians during confinement, or that they did not receive enough information in this regard, according to participant 187: "No doctor has come to my house to give information to me regarding suggestions of care and aspects to take into account related to confinement" and participant 156: "I have not received any support". In addition, 50% of the individuals reported that their attention to HF symptoms increased during confinement. Figure 3 shows the differences between octogenarians and non-octogenarians in attention to HF symptoms during COVID-19 confinement. There was a greater concern for health in non-octogenarians, as shown by participant 101: "I worry more about my health and I don't want to get worse" and participant 196: "a lot of concern for me and for my wife. We don't want to get sick, so we take more care of ourselves and pay more attention to our symptoms". Similarly, there was a greater fear of COVID-19 in non-octogenarians, as indicated by participant 28: "I am scared, I am very afraid of COVID-19 [...] I am a patient at risk" and participant 87: "I pay more attention to my symptoms because of fear and because I have more time". Octogenarians showed a slightly greater concern for avoiding social contact, as expressed by participant 98: "I avoid contact with friends, even with relatives" and participant 178: "I only go out to the street one day a week and by car". Regarding the assistance needs of HF patients, 60% of individuals in the sample were not satisfied with the assistance received. As shown in Figure 4, both groups needed assistance, with a greater predominance in octogenarians, as reported by participant 1: "I have a 24-h caregiver, she runs all my errands, goes shopping, helps me in the bath, checks my pills" and participant 23: "another caregiver to clean the house and go shopping". On the other hand, non-octogenarians considered that they needed more help than they received, as reported by participant 19: "I need more attention because my legs are swelling more, and I don't understand why" and participant 229: "the doctor is more available. The general practitioner does not take care of me, he only gives medication to me, we cannot go to the outpatient clinic". Regarding suggestions to improve care (Figure 5), half of the individuals had no suggestions in this regard. Non-octogenarians suggested the need to offer greater efficiency and attention to HF patients, as participant 39 stated: "I need to continue with the rehabilitation sessions that were cancelled due to the confinement. The suggestion I make is that those of us who are more dependent and who need more attention should get it. They leave us alone at home as if nothing happened, and that is bad for us. I suggest more attention, that the nurse or physiotherapist comes to check that I am doing well. Self-care starts with good information and making sure that I am doing well, then I will be able to do it alone" and participant 114: "they should call us to follow up our situation, it is hard for me to follow the recommendations".
However, those aged <80 years reported a lower availability of doctors, as suggested by participant 189: "now that I have pain and doctors do not attend me" and participant 98: "maybe if doctors were more available they could solve my doubts", as well as satisfaction with the assistance they were offered, as indicated by participant 23: "I am happy with the assistance received, and I send encouragement to the health workers" and participant 101: "very competent people, I am very happy with the treatment". Finally, the key points of the qualitative data analysis were that most of the individuals sampled considered that well-being decreased during confinement, 30% reported that social support decreased, 50% increased their attention to symptoms, and 60% were not satisfied with the assistance received. Octogenarians were more affected during lockdown than non-octogenarians in terms of well-being, attention to symptoms, and support needs. 4. Discussion: The findings of this study showed that the well-being and PA levels of older adults with HF decreased during COVID-19 confinement. Regarding the comparison between octogenarians and non-octogenarians, octogenarians reported lower well-being, higher sedentary time, and lower levels of moderate PA than non-octogenarians during confinement. Moreover, octogenarians were more severely impacted during confinement than non-octogenarians in terms of exhibiting lower well-being. These results are in accordance with the patient-reported experiences, in which octogenarians and non-octogenarians explained that confinement affected their well-being, especially regarding emotional status and physical fitness. Furthermore, the octogenarian patients of this study reported that confinement affected several aspects of their daily lives. These results are especially important because previous studies have found that patients with HF are at a higher risk of developing mental health problems and seem to have a lower capacity to enjoy daily activities compared with people without HF symptoms [13]. In addition, in the study of Rantanen et al. [40], people older than 85 years, both men and women, reported significantly lower quality of life during COVID-19 confinement. In addition, those with lower cognitive functioning, lower emotional stability, and living alone may be at risk of poorer self-reported mental and physical health [41]. Therefore, it is important to stress that nurses and other cardiac providers should identify vulnerabilities in sustained HF self-care behaviors and well-being among older adults, especially among octogenarians, to improve their well-being and physical activity and to improve satisfaction with the care they receive [19]. Furthermore, the decrease in PA and the increase in sedentary time that we found in older adults with HF during confinement are in line with the results reported by Vetrovsky et al. [11], Angelo-Brasca et al. [42], and Caraballo et al. [12], who also found a reduction of PA levels in HF patients. It is important to highlight the decrease in PA levels because, although physical condition was not evaluated, previous studies suggest that it could lead to an important deterioration of physical fitness, an important predictor of HF morbidity and mortality in this population [43,44]. In turn, a higher functional status is a predictor of an enhanced quality of life in octogenarians [45].
Therefore, the promotion of active aging seems to be a key point in order to improve or sustain quality of life, especially during social distancing [40]. A lower attention to their HF symptoms and higher assistance needs were found in octogenarians. However, non-octogenarians reported lower social support than octogenarians. In our sample, it seems that octogenarians reported an increase in care provided by their relatives, which may have influenced their social support; thus, octogenarians presented higher social support than non-octogenarians. In this regard, our results are not in line with those reported by Golden et al. [46], who found that loneliness increased with age. In addition, fear of contagion meant that patients preferred telephone visits, although both groups, especially octogenarians, reported that they needed assistance. These results are similar to those obtained by McIlvennan et al. [3], who reported patients' expressions of fear, reluctance to visit the hospital, and lower early symptom reporting. However, our results evidenced that participants were mostly satisfied with the care they received. In contrast, Raman and Vyselaar [10] reported that patients preferred in-person visits since telehealth visits were considered to be of inferior quality due to the lack of physical examination, emotional detachment from care providers, and general unfamiliarity. Independently of participants' opinions, telehealth models for outpatients with HF have been demonstrated to be safe and do not seem to increase mortality, suggesting that telehealth outpatient visits in patients with HF could be safely incorporated into clinical practice [14,17]. Given the lack of studies that use a mixed-methods methodology to investigate well-being, PA, social support, attention to symptoms, assistance needs, and suggestions to improve care in older patients with HF during COVID-19 confinement, this study aimed to fill this gap for a better understanding of this condition. The comparison by age in octogenarians vs. non-octogenarians and the qualitative approach, which involved in-depth interviews with HF patients, led to the collection of relevant data not normally discussed or shared in healthcare research. However, there are several limitations in this study. First, the study was performed in a single center; therefore, the results may not be generalizable to the entire older population with HF. Additionally, the findings may not be representative of all patients, since our results can only be generalized to countries with restrictions similar to those established in Spain, such as Italy, France, or the Czech Republic. Second, the small sample size, although not unusual in qualitative research that requires extensive and detailed analysis of each patient, may not be representative of people with HF in Spain. Nevertheless, further studies should be performed with larger sample sizes. Future studies should also investigate the consequences of this reduction of PA and implement appropriate protocols to ensure good health outcomes in older adults, considering well-being, social support, attention to HF symptoms, and assistance needs during critical periods. 5. Conclusions: Well-being and PA levels decreased during COVID-19 confinement in older adults with HF.
Moreover, octogenarians were more severely impacted during confinement than non-octogenarians in terms of lower well-being, lower attention to their HF symptoms, and higher assistance needs, although non-octogenarians reported lower social support than octogenarians. Thus, the development of remote monitoring strategies is needed in older adults with HF in order to maintain an adequate level of PA and control health outcomes in critical periods, especially for octogenarians with HF.
Background: This study aimed to compare well-being and physical activity (PA) before and during COVID-19 confinement in older adults with heart failure (HF), to compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians, and to explore well-being, social support, attention to symptoms, and assistance needs during confinement in this population. Methods: A mixed-methods design was performed. Well-being (Cantril Ladder of Life) and PA (International Physical Activity Questionnaire) were assessed. Semi-structured interviews were performed to assess the rest of the variables. Results: 120 participants were evaluated (74.16 ± 12.90 years; octogenarians = 44.16%, non-octogenarians = 55.83%). Both groups showed lower well-being and performed less PA during confinement than before (p < 0.001). Octogenarians reported lower well-being (p = 0.02), higher sedentary time (p = 0.03), and lower levels of moderate PA (p = 0.04) during confinement. Most individuals in the sample considered their well-being to have decreased during confinement, 30% reported decreased social support, 50% increased their attention to symptoms, and 60% were not satisfied with the assistance received. Octogenarians were more severely impacted during confinement than non-octogenarians in terms of well-being, attention to symptoms, and assistance needs. Conclusions: Well-being and PA decreased during confinement, although octogenarians were more affected than non-octogenarians. Remote monitoring strategies are needed in elders with HF to control health outcomes in critical periods, especially in octogenarians.
1. Introduction: Heart failure (HF) is a major clinical and public health problem [1]. HF constitutes a complex debilitating syndrome with significant health consequences that trigger high burdens of mortality and hospitalization, particularly among those aged 65 and older [2]. The COVID-19 pandemic has caused an alteration and restructuring of healthcare systems. The routine of care of HF patients has abruptly changed compared to pre-COVID-19, with a decrease in in-person outpatient visits and an increase in telehealth-based programs [3]. Moreover, because government-approved nationwide confinement may have led to a decrease in physical activity (PA), which could also lower well-being, there may be serious consequences for cardiometabolic morbimortality in HF patients. Therefore, COVID-19 represents a serious threat for HF patients, even more so in elders with HF [4,5,6]. On the other hand, both confinement and fear of contagion may have reduced social support and assistance for elder patients with HF during COVID-19 confinement [7]. In order to mitigate these consequences, promotion of self-management and optimization of clinical assistance of patients with HF, as well as the use of measures such as telehealth, telerehabilitation, mobile applications, and electronic heart rate devices, have been employed [8,9]. Previous trials have found several limitations such as the lack of physical examination, depersonalization of healthcare, and lack of familiarity with the platform [10]. Moreover, there are population groups that do not seem ready to use these telematic measures, due to very old age, poor hearing, cognitive dysfunction, language barriers, or limited education, and who may require the assistance of a family member or a caregiver [5]. Previous quantitative studies have analyzed how confinement affected patients with HF in terms of fear of visiting the hospital [3], differences in physical activity [11,12], mental health [13], safety of telehealth programs [14], etc. Few qualitative studies have been performed in order to characterize the value of technology in supporting caregiving for individuals with HF [15], examine the caregiving experiences and coping strategies of older adults with HF during the ongoing pandemic [16], and explore patients' and clinicians' experiences of managing HF during the COVID-19 pandemic [17]. However, to our knowledge, no mixed-methods studies have been conducted in this regard. Mixed-methods designs offer the possibility for participants to give their opinion about issues that may not have been considered in pre-set quantitative questionnaires. In addition, the literature regarding delivery of health care has highlighted the role of patient-reported outcomes and also of experience measures [10]. Even though previous studies have been performed in patients with HF, most of them take into account the middle-aged population or, when assessing the elderly, ages range from 60 to 80 years, thus information regarding adults older than 80 years is scarce [11,13,14,16,18,19]. Therefore, further research is needed to focus on the clinical conditions of the octogenarian population and to analyze the differences between octogenarians and non-octogenarians.
This study aimed to (1) compare well-being and PA before and during COVID-19 confinement in older patients with HF; (2) compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians with HF; and (3) explore well-being, social support, attention to HF symptoms, assistance needs, and suggestions to improve care during COVID-19 confinement in this population. 5. Conclusions: Well-being and PA levels decreased during COVID-19 confinement in older adults with HF. Moreover, octogenarians were more severely impacted during confinement than non-octogenarians in terms of lower well-being, lower attention to their HF symptoms, and higher assistance needs, although non-octogenarians reported lower social support than octogenarians. Thus, the development of remote monitoring strategies is needed in older adults with HF in order to maintain an adequate level of PA and control health outcomes in critical periods, especially for octogenarians with HF.
Background: This study aimed to compare well-being and physical activity (PA) before and during COVID-19 confinement in older adults with heart failure (HF), to compare well-being and PA during COVID-19 confinement in octogenarians and non-octogenarians, and to explore well-being, social support, attention to symptoms, and assistance needs during confinement in this population. Methods: A mixed-methods design was performed. Well-being (Cantril Ladder of Life) and PA (International Physical Activity Questionnaire) were assessed. Semi-structured interviews were performed to assess the rest of the variables. Results: 120 participants were evaluated (74.16 ± 12.90 years; octogenarians = 44.16%, non-octogenarians = 55.83%). Both groups showed lower well-being and performed less PA during confinement than before (p < 0.001). Octogenarians reported lower well-being (p = 0.02), higher sedentary time (p = 0.03), and lower levels of moderate PA (p = 0.04) during confinement. Most individuals in the sample considered their well-being to have decreased during confinement, 30% reported decreased social support, 50% increased their attention to symptoms, and 60% were not satisfied with the assistance received. Octogenarians were more severely impacted during confinement than non-octogenarians in terms of well-being, attention to symptoms, and assistance needs. Conclusions: Well-being and PA decreased during confinement, although octogenarians were more affected than non-octogenarians. Remote monitoring strategies are needed in elders with HF to control health outcomes in critical periods, especially in octogenarians.
10,195
315
[ 3736, 189, 521, 100, 921 ]
10
[ "hf", "confinement", "octogenarians", "19", "analysis", "pa", "covid", "covid 19", "patients", "data" ]
[ "covid 19 pandemic", "covid 19 life", "elder patients", "older patients hf", "confinement older patients" ]
null
[CONTENT] heart failure | COVID-19 | confinement | well-being | physical activity | mixed-methods study [SUMMARY]
null
[CONTENT] heart failure | COVID-19 | confinement | well-being | physical activity | mixed-methods study [SUMMARY]
[CONTENT] heart failure | COVID-19 | confinement | well-being | physical activity | mixed-methods study [SUMMARY]
[CONTENT] heart failure | COVID-19 | confinement | well-being | physical activity | mixed-methods study [SUMMARY]
[CONTENT] heart failure | COVID-19 | confinement | well-being | physical activity | mixed-methods study [SUMMARY]
[CONTENT] Humans | Aged | Aged, 80 and over | COVID-19 | Social Support | Heart Failure | Exercise | Sedentary Behavior [SUMMARY]
null
[CONTENT] Humans | Aged | Aged, 80 and over | COVID-19 | Social Support | Heart Failure | Exercise | Sedentary Behavior [SUMMARY]
[CONTENT] Humans | Aged | Aged, 80 and over | COVID-19 | Social Support | Heart Failure | Exercise | Sedentary Behavior [SUMMARY]
[CONTENT] Humans | Aged | Aged, 80 and over | COVID-19 | Social Support | Heart Failure | Exercise | Sedentary Behavior [SUMMARY]
[CONTENT] Humans | Aged | Aged, 80 and over | COVID-19 | Social Support | Heart Failure | Exercise | Sedentary Behavior [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 life | elder patients | older patients hf | confinement older patients [SUMMARY]
null
[CONTENT] covid 19 pandemic | covid 19 life | elder patients | older patients hf | confinement older patients [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 life | elder patients | older patients hf | confinement older patients [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 life | elder patients | older patients hf | confinement older patients [SUMMARY]
[CONTENT] covid 19 pandemic | covid 19 life | elder patients | older patients hf | confinement older patients [SUMMARY]
[CONTENT] hf | confinement | octogenarians | 19 | analysis | pa | covid | covid 19 | patients | data [SUMMARY]
null
[CONTENT] hf | confinement | octogenarians | 19 | analysis | pa | covid | covid 19 | patients | data [SUMMARY]
[CONTENT] hf | confinement | octogenarians | 19 | analysis | pa | covid | covid 19 | patients | data [SUMMARY]
[CONTENT] hf | confinement | octogenarians | 19 | analysis | pa | covid | covid 19 | patients | data [SUMMARY]
[CONTENT] hf | confinement | octogenarians | 19 | analysis | pa | covid | covid 19 | patients | data [SUMMARY]
[CONTENT] hf | patients | patients hf | 19 | covid 19 | covid | population | confinement | pandemic | assistance [SUMMARY]
null
[CONTENT] participant | care | assistance | confinement | octogenarians | needs | 001 | composed | attention | subcategories [SUMMARY]
[CONTENT] octogenarians | lower | hf | non octogenarians | non | adults hf | adults | older | older adults | older adults hf [SUMMARY]
[CONTENT] hf | octogenarians | confinement | patients | 19 | pa | covid | covid 19 | non | assistance [SUMMARY]
[CONTENT] hf | octogenarians | confinement | patients | 19 | pa | covid | covid 19 | non | assistance [SUMMARY]
[CONTENT] COVID-19 | COVID-19 | octogenarians [SUMMARY]
null
[CONTENT] 120 | 74.16 | 12.90 years | 44.16% | 55.83% ||| p &lt | 0.001 ||| Octogenarians | 0.02 | 0.03 | PA | 0.04 ||| 30% | 50% | 60% ||| Octogenarians [SUMMARY]
[CONTENT] ||| HF [SUMMARY]
[CONTENT] COVID-19 | COVID-19 | octogenarians ||| ||| ||| ||| 120 | 74.16 | 12.90 years | 44.16% | 55.83% ||| p &lt | 0.001 ||| Octogenarians | 0.02 | 0.03 | PA | 0.04 ||| 30% | 50% | 60% ||| Octogenarians ||| ||| HF [SUMMARY]
[CONTENT] COVID-19 | COVID-19 | octogenarians ||| ||| ||| ||| 120 | 74.16 | 12.90 years | 44.16% | 55.83% ||| p &lt | 0.001 ||| Octogenarians | 0.02 | 0.03 | PA | 0.04 ||| 30% | 50% | 60% ||| Octogenarians ||| ||| HF [SUMMARY]
Pressure Overload-induced Cardiac Hypertrophy Varies According to Different Ligation Needle Sizes and Body Weights in Mice.
30226916
The cardiac hypertrophy (CH) model for mice has been widely used, thereby providing an effective research foundation for CH exploration.
BACKGROUND
Four needles with different external diameters (0.35, 0.40, 0.45, and 0.50 mm) were used for AAC. 150 male C57BL/6 mice were selected according to body weight (BW) and divided into 3 weight levels: 18 g, 22 g, and 26 g (n = 50 in each group). All weight levels were divided into 5 groups: a sham group (n = 10) and 4 AAC groups using 4 ligation intensities (n = 10 per group). After surgery, survival rates were recorded, echocardiography was performed, hearts were dissected and used for histological detection, and the data were statistically analyzed; P < 0.05 was considered statistically significant.
METHODS
All mice died in the following AAC groups: 18 g/0.35 mm, 22 g/0.35 mm, 26 g/0.35 mm, 22 g/0.40 mm, and 26 g/0.40 mm. All mice with AAC, those ligated with a 0.50-mm needle, and those that underwent sham operation survived. Different death rates occurred in the following AAC groups: 18 g/0.40 mm, 18 g/0.45 mm, 18 g/0.50 mm, 22 g/0.45 mm, 22 g/0.50 mm, 26 g/0.45 mm, and 26 g/0.50 mm. The heart weight/body weight ratios (5.39 ± 0.85, 6.41 ± 0.68, 4.67 ± 0.37, 5.22 ± 0.42, 4.23 ± 0.28, 5.41 ± 0.14, and 4.02 ± 0.13) were significantly increased compared with those of the sham groups for mice of the same weight levels.
RESULTS
A 0.45-mm needle led to more obvious CH than did 0.40-mm and 0.50-mm needles and caused extraordinary CH in 18-g mice.
CONCLUSION
[ "Animals", "Aorta, Abdominal", "Body Weight", "Cardiomegaly", "Constriction", "Disease Models, Animal", "Echocardiography", "Ligation", "Male", "Mice, Inbred C57BL", "Needles", "Random Allocation", "Reference Values", "Reproducibility of Results", "Time Factors" ]
6023638
Introduction
Cardiac hypertrophy (CH) is a compensatory pathological change that is usually induced by pressure overload (PO), neurohumoral abnormality, and the effects of cytokines. It is characterized by cardiomyocyte hypertrophy and interstitial hyperplasia, and it results in an enlarged heart and thickening of the heart walls. Clinically, CH is involved in the development of many diseases, such as valvular disease, hypertension, arterial stenosis, and primary myocardial hypertrophy. If these diseases are allowed to progress, cardiac function (CF) will gradually decompensate, leading to heart failure (HF), which severely lowers quality of life and increases the mortality rate. Therefore, CH is a widespread concern and has been explored at the molecular level by researchers. Due to the high genomic homology between mice and humans, an established CH model for mice has been widely used in animal experiments, thereby providing an effective research foundation for CH exploration. Currently, PO-induced CH is a common way to establish the model. Abdominal aortic constriction (AAC) is highly recommended by researchers because of the high success rate and the ability to perform surgery without the need for thoracotomy or a ventilator. However, the modeling effects of different ligating intensities for given body weights (BWs) have not yet been reported. Therefore, we used 3 frequently used mouse BWs (18 g, 22 g, and 26 g) and 4 different needle sizes (0.35, 0.40, 0.45, and 0.50 mm) to establish the AAC-induced CH model at each weight level, summarized the survival rates, and evaluated the CH effects.
Methods
Animal groups and handling: One-hundred fifty male C57BL/6 wild-type mice were obtained from the Shanghai SLAC Laboratory Animal Co. Ltd (Shanghai, China). All animals were treated and cared for in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health, Washington, DC, 1996). Experimental protocols were approved by the Institutional Animal Care and Use Committee of Zhejiang University (Hangzhou, China). Mice were selected according to weights of approximately 18 g (range, 17.3-18.7 g), 22 g (range, 20.8-23.0 g), and 26 g (range, 25.1-27.0 g), and they were divided into the following 3 weight levels: 18 g (18.0 ± 0.3 g; n = 50), 22 g (22.0 ± 0.6 g; n = 50), and 26 g (26.1 ± 0.5 g; n = 50). Each weight level was divided using a sortition randomization method into a sham group (n = 10) and 4 AAC groups according to ligating intensity (0.35, 0.40, 0.45, and 0.50 mm; n = 10 per group). Regarding BW, no significant differences were found among the 5 groups within each weight level (Table S1), and the preoperative BWs of mice that died and those that survived did not differ significantly (Table S2). Mice were anesthetized with 4% chloral hydrate (0.1 ml/1 g BW, intraperitoneal injection). When the mice did not respond to a toe pinch, the limbs were fixed on the operating board in the supine position and the skin was prepared by shaving and disinfection with alcohol. Sterile gauze was placed on the right side of the abdomen and a ventrimesal incision of approximately 1.5 cm was created starting from the xiphoid. The skin was fixed with a spreader and the viscera were pulled out gently with a swab and placed on the gauze. Then, the abdominal aorta was isolated using a blunt dissection technique with curved microforceps under a microscope. A 6-0 silk suture was snared and pulled back around the aorta 1 mm above the superior mesenteric artery. A 2-mm blunt acupuncture needle (external diameters: 0.35 mm, 0.40 mm, 0.45 mm, and 0.50 mm; Huatuo; Suzhou Medical Appliance Factory, Suzhou, China; criterion number GB2024-1994) was then placed next to the aorta. The suture was tied snugly around the needle and the aorta. The needle was removed immediately after ligation, the viscera were replaced, the peritoneum and skin were sutured, and the mice were allowed to recover. Aortic ligation was omitted only in the sham group. After surgery, the ears were cut to differentiate the mice. Mice were then placed in an incubator at 30 °C until they woke and were returned to their cages. Survival status was recorded daily. To observe the physical development of mice under the different conditions, the BW difference between before surgery and week 8 post-surgery was calculated as the change in BW.
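As an illustration of the random allocation described above (each 50-mouse weight level split into a sham group and four AAC groups of 10), a simple lottery-style sketch is shown below; the seed and ID scheme are arbitrary assumptions, not the authors' procedure.

```python
import random

random.seed(1)                                   # arbitrary seed for reproducibility
mice = list(range(1, 51))                        # ear-tag IDs for one weight level
random.shuffle(mice)                             # sortition: shuffle, then deal into groups
groups = ["sham", "AAC 0.35 mm", "AAC 0.40 mm", "AAC 0.45 mm", "AAC 0.50 mm"]
allocation = {g: sorted(mice[i * 10:(i + 1) * 10]) for i, g in enumerate(groups)}
for g, ids in allocation.items():
    print(g, ids)
```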
Echocardiography imaging: After post-surgery weeks 4 and 8, mice were weighed, anesthetized with 4% chloral hydrate, and placed on a warming pad after skin preparation. Transthoracic 2-dimensional (2D) echocardiography was performed using the GE Vivid E9 ultrasound echocardiographic system (General Electric Company, Fairfield, CT, USA) with the GE 9L probe (8-MHz linear array transducer; General Electric Company). M-mode parasternal long-axis scans of the left ventricle at the mitral chordae level were used to quantify the interventricular septum thickness at end-diastole (IVSd), interventricular septum thickness at end-systole (IVSs), left ventricular internal dimension at end-diastole (LVIDd), left ventricular internal dimension at end-systole (LVIDs), left ventricular posterior wall thickness at end-diastole (LVPWd), left ventricular posterior wall thickness at end-systole (LVPWs), ejection fraction (EF), and fractional shortening (FS). All mice were tested using the same parameters.
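The functional indices listed above are derived from the M-mode dimensions; the sketch below shows the usual conventions (FS from LVIDd/LVIDs and EF via the Teichholz volume formula). The ultrasound system computes these internally, so the functions and example values are illustrative only.

```python
def fractional_shortening(lvidd, lvids):
    """FS (%) = (LVIDd - LVIDs) / LVIDd * 100."""
    return 100.0 * (lvidd - lvids) / lvidd

def teichholz_volume(d_cm):
    """LV volume (mL) from a single internal dimension (cm), Teichholz formula."""
    return 7.0 / (2.4 + d_cm) * d_cm ** 3

def ejection_fraction(lvidd, lvids):
    """EF (%) from end-diastolic and end-systolic Teichholz volumes."""
    edv, esv = teichholz_volume(lvidd), teichholz_volume(lvids)
    return 100.0 * (edv - esv) / edv

# Hypothetical mouse dimensions in cm (roughly 4.0 mm and 2.5 mm)
print(fractional_shortening(0.40, 0.25), ejection_fraction(0.40, 0.25))
```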
Heart weight, heart weight/body weight, and heart weight/tibial length: After the echocardiographic analysis at 8 weeks post-surgery, mice were sacrificed by cervical dislocation and the hearts were dissected. Atrial and vascular tissues were then snipped off carefully, leaving the ventricles. The hearts were rinsed with phosphate-buffered saline (PBS), drained by gently squeezing on absorbent paper, weighed, photographed under natural light, and fixed in 4% paraformaldehyde. The tibial lengths (TLs; mean value of the bilateral tibiae) were recorded. Heart weight (HW), BW, and TL were measured, and the HW/BW and HW/TL ratios were calculated to evaluate the hypertrophic response to PO. Histological examination of the heart: Extracted hearts were fixed in 4% paraformaldehyde for 24 h and dehydrated. After routine histologic procedures, the hearts were embedded in paraffin and cut into 4-µm sections. Sections were stained with hematoxylin and eosin (HE) and picrosirius red (PSR). Cardiac cross-sections were captured at 20× magnification from HE sections; 5 thicknesses of the left ventricle in each view were selected by systematic sampling and measured using Image-Pro Plus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA), and the mean values were calculated. Cardiomyocyte morphological changes were captured at 400× magnification from HE sections. Interstitial and/or perivascular collagen depositions were captured at 200× magnification under standard lights. Collagen was stained red with PSR, indicating fibrosis. At least 6 views were selected in a blinded manner, and each photograph was analyzed to obtain the ratio of red collagen to the entire tissue area using Image-Pro Plus 6.0. The mean values were then calculated.
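A rough sketch of the collagen area-fraction idea (share of red-stained PSR pixels relative to total tissue area) is given below for orientation; the authors used Image-Pro Plus 6.0, not this script, and the file name and HSV thresholds are assumptions that would need tuning to real images.

```python
from skimage import io, color

img = io.imread("psr_section.png")[..., :3]      # hypothetical photomicrograph (RGB)
hsv = color.rgb2hsv(img)
h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

red = ((h < 0.05) | (h > 0.90)) & (s > 0.3)      # red-stained collagen pixels (rough)
tissue = v < 0.95                                # exclude bright background

collagen_fraction = 100.0 * red.sum() / max(tissue.sum(), 1)
print(f"collagen area: {collagen_fraction:.1f}% of tissue area")
```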
Statistical Analysis: SPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA) was used for all statistical analyses. The Kolmogorov-Smirnov (K-S) test was used to verify the normality of the quantitative variables as appropriate. Data are presented as mean ± standard deviation (SD). One-way ANOVA and post-hoc Tukey tests were used to evaluate differences between groups. p < 0.05 was considered statistically significant.
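The same workflow can be sketched in Python instead of SPSS 17.0, as below; the `hw_bw` values are made-up illustrations, not the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative HW/BW ratios per group (not the reported data)
hw_bw = {
    "sham":        [3.1, 3.3, 3.0, 3.2, 3.1],
    "AAC 0.45 mm": [5.3, 5.5, 5.2, 5.6, 5.4],
    "AAC 0.50 mm": [4.1, 4.3, 4.0, 4.2, 4.1],
}

# Normality check per group (K-S test against a fitted normal distribution)
for g, x in hw_bw.items():
    print(g, stats.kstest(x, "norm", args=(np.mean(x), np.std(x, ddof=1))))

# One-way ANOVA followed by Tukey's post-hoc test
print(stats.f_oneway(*hw_bw.values()))
values = np.concatenate(list(hw_bw.values()))
labels = np.concatenate([[g] * len(x) for g, x in hw_bw.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```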
Results
Excessive AAC may lead to death. We monitored mouse deaths after surgery according to acute heart failure (AHF) criteria. The data (Table 1) showed that all deaths occurred within 5 days, and a high incidence of death occurred during the initial 24 h post-surgery. Mouse deaths after surgery: There were no deaths in the AAC 0.50-mm group or the sham group. Deaths were recorded during 3 time periods (0-24 h, 24 h-3 d, and 3-5 d); 54 deaths occurred within 0-24 h post-surgery. The total number of deaths was 65. AAC increases cardiac dimensions and reduces cardiac function. Echocardiography was performed at the end of post-operative weeks 4 and 8. At week 4 post-surgery, the data (Table 2) showed a trend of heart enlargement for mice with AAC, including thickening of the ventricular wall and an increase in chamber dilation; however, differences in EF and FS were not significant, indicating that changes in the heart structure did not have a pronounced effect on cardiac function at that time point. At week 8 post-surgery, the trend of heart enlargement continued; however, the EF and FS values for the AAC groups decreased significantly. This change in cardiac function from week 4 to week 8 was consistent with systolic function beginning to be markedly affected at week 4 after PO surgery. Echocardiographic outcomes of 18-g, 22-g, and 26-g mice. IVSd: interventricular septum thickness at end-diastole; IVSs: interventricular septum thickness at end-systole; LVIDd: left ventricular internal dimension at end-diastole; LVIDs: left ventricular internal dimension at end-systole; LVPWd: left ventricular posterior wall thickness at end-diastole; LVPWs: left ventricular posterior wall thickness at end-systole; EF: ejection fraction; FS: fractional shortening. Changes in cardiac dimensions (including IVSd, IVSs, LVIDd, LVIDs, LVPWd, and LVPWs; mm) and functional indices (including EF and FS) were measured by echocardiography. Data were statistically analyzed and presented as the mean ± SD. At week 4, the cardiac dimensions for the AAC groups significantly increased compared with the sham groups (*p < 0.05); at week 8, more cardiac dimensions significantly increased, and the EF and FS values for the AAC groups all decreased significantly compared to those of the sham groups at the 3 weight levels (*p < 0.05).
AAC increases HW, HW/BW, and HW/TL ratio

Increased HW, HW/BW ratio, and HW/TL ratio are generally the three main indicators of CH. In our study, as shown in Table 3, AAC significantly increased HW and produced significantly higher HW/BW and HW/TL ratios compared with the sham groups at all weight levels. The HW, HW/BW, and HW/TL values for the AAC 0.45-mm groups were significantly higher than those for the AAC 0.50-mm groups, and these HW-related indices for the 18 g/0.45 mm group were even significantly higher than those for the 18 g/0.40 mm group.

Heart weight–related indices of 18-g, 22-g, and 26-g mice. HW: heart weight; BW: body weight; TL: tibial length. HW (mg), HW/BW (mg/g), and HW/TL (mg/cm) were measured and calculated for the AAC and sham groups at the 3 BW levels. Data are presented as the mean ± SD. Compared to the sham group at the same BW level, the heart weight–related indices of the AAC groups increased significantly (p < 0.05). Compared to the other groups at the same BW level, the heart weight–related indices of the AAC 0.45-mm group increased significantly (p < 0.05).
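Since the hypertrophy indices in Table 3 are simple ratios, a short sketch of how they are formed may be useful. The units follow the table legend (HW in mg, BW in g, TL in cm); the example values at the end are purely hypothetical and are not data from this study.

```python
# Hypertrophy indices as reported in Table 3. Units: HW in mg, BW in g, TL in cm.
def hypertrophy_indices(hw_mg: float, bw_g: float, tl_cm: float) -> dict[str, float]:
    return {
        "HW (mg)": hw_mg,
        "HW/BW (mg/g)": hw_mg / bw_g,
        "HW/TL (mg/cm)": hw_mg / tl_cm,
    }

# Hypothetical illustration only (not values from this study):
# hypertrophy_indices(hw_mg=150.0, bw_g=25.0, tl_cm=1.7)
# -> {'HW (mg)': 150.0, 'HW/BW (mg/g)': 6.0, 'HW/TL (mg/cm)': 88.2}
```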
AAC leads to cardiomyocyte hypertrophy and increases collagen depositions

In mice undergoing AAC surgery, the hearts demonstrated different degrees of enlargement (Figure 1A), enlargement of the papillary muscles, and thickening of the ventricular walls (Figure 1B). Wall thickness increased significantly compared with that of the sham group (Table 4). The sham groups showed normal cardiomyocyte architecture compared with the AAC groups. Pathological changes, including enlarged, disarrayed, and eosinophilic cardiomyocytes and cardiomyocytes rich in cytoplasm with trachychromatic and pantomorphic nuclei, were observed in each of the AAC groups (Figure 1C). Scattered collagen depositions in the interstitial and perivascular spaces were observed in the sham groups. In comparison, in some AAC groups, a larger quantity and wider range of red deposits were observed in the interstitial space (Figure 1D), and thickened collagen was observed in the perivascular space, especially in the external vascular wall (Figure 1E). Statistical analysis indicated that the AAC groups had a significantly greater collagen area than the sham groups (Table 5). These results imply that AAC is capable of inducing PO-induced CH and fibrosis.

Figure 1. Cardiomyocyte hypertrophy and collagen deposition on histological examination. (A) Gross hearts under natural light. (B) 20× microscopic views of HE sections. (C) 400× microscopic views of HE sections. (D) Representative 200× microscopic views under standard light of PSR sections in the interstitial space. (E) Representative 200× microscopic views under standard light of PSR sections in the perivascular space. Fibrosis is shown in red in the PSR sections.

Thickness of the left ventricle (mm) based on weight and needle size. Data are presented as the mean ± SD (n = 5). p < 0.05 represents a significant difference between the abdominal aortic constriction (AAC) and sham groups.

Percentage of collagen deposition in the left ventricle based on weight and needle size. Data are presented as the mean ± SD (n = 6). p < 0.05 represents a significant difference between the abdominal aortic constriction (AAC) and sham groups.
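The collagen percentages in Table 5 come from the ratio of red (PSR-positive) area to total tissue area, averaged over at least 6 blinded fields per heart. The authors performed this step in Image-Pro Plus; as a rough, hedged equivalent, the sketch below measures the same fraction with scikit-image — the hue/saturation thresholds and the function name are assumptions, not the authors' settings.

```python
# Illustrative re-implementation of the collagen-fraction measurement behind
# Table 5 (the study used Image-Pro Plus; the thresholds below are assumptions).
import numpy as np
from skimage import color, io

def collagen_fraction(image_path: str,
                      red_hue_window: float = 0.05,
                      tissue_value_max: float = 0.95) -> float:
    """Return PSR-positive (red) area as a fraction of total tissue area."""
    rgb = io.imread(image_path)[..., :3] / 255.0   # assumes an 8-bit RGB field image
    hsv = color.rgb2hsv(rgb)
    hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    tissue = val < tissue_value_max                 # exclude bright background
    red = tissue & (sat > 0.3) & ((hue < red_hue_window) | (hue > 1.0 - red_hue_window))
    return float(red.sum() / max(tissue.sum(), 1))

# Average the fraction over the >= 6 blinded fields per heart, as in the paper:
# fields = ["field1.tif", "field2.tif", ...]
# mean_fraction = np.mean([collagen_fraction(p) for p in fields])
```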
AAC may restrict physical development

Analysis showed that with the 0.45-mm AAC, BW increased significantly more in 18-g mice than in 22-g and 26-g mice (Table 6), indicating that the 18-g groups had higher developmental potential. Among the 18-g groups, the BW change of the 18 g/0.40 mm group was significantly lower than that of the 18 g/0.45 mm and 18 g/sham groups, and there were no significant differences between the 18 g/0.45 mm and 18 g/sham groups (Table 7), indicating that the 18 g/0.45 mm group had nearly normal physical development, whereas development of the 18 g/0.40 mm group was limited.

Body weight changes with AAC under the 0.45-mm needle. BW: body weight; AAC: abdominal aortic constriction. Data are presented as the mean ± SD. p < 0.05 represents a significant difference between the 18 g/0.45 mm group and the 22 g/0.45 mm and 26 g/0.45 mm groups.

Body weight and BW changes in 18-g mice. BW: body weight. BW changes of 18-g mice before and after surgery over 8 weeks. Data are presented as the mean ± SD. p < 0.05 represents a significant difference between the 18 g/0.40 mm group and the other groups after surgery.
Conclusion
We established CH models in mice using 4 ligation needle sizes and 3 body weights. The data showed that both the 0.45-mm and 0.50-mm needles led to CH; however, the 0.45-mm needle produced a more effective model and caused obvious CH even in 18-g mice.
[ "Animal groups and handling", "Echocardiography imaging", "Heart weight, heart weight/body weight, and heart weight/tibial\nlength", "Histological examination of the heart", "Statistical Analysis", "Excessive AAC may lead to death", "AAC increases cardiac dimensions and reduces cardiac function", "AAC increases HW, HW/BW, and HW/TL ratio", "AAC leads to cardiomyocyte hypertrophy and increases collagen\ndepositions", "AAC may restrict physical development" ]
[ "One-hundred fifty male C57BL/6 wild-type mice were obtained from the Shanghai\nSLAC Laboratory Animal Co. Ltd (Shanghai, China). All animals were treated and\ncared for in accordance with the Guide for the Care and Use of\nLaboratory Animals (National Institutes of Health, Washington, DC,\n1996). Experimental protocols were approved by our Institutional Animal Care and\nUse Committee of Zhejiang University (Hangzhou, China). Mice were selected\naccording to weights of approximately 18 g (range, 17.3-18.7 g), 22 g (range,\n20.8-23.0 g), and 26 g (range, 25.1-27.0 g), and they were divided into the\nfollowing 3 weight levels: 18 g (18.0 ± 0.3 g; n = 50), 22 g (22.0\n± 0.6 g; n = 50), and 26 g (26.1 ± 0.5 g; n = 50). All weight\nlevels were divided using sortition randomization method to create a sham group\n(n = 10) and 4 AAC groups according to ligating intensities (0.35, 0.40, 0.45,\nand 0.50mm; n = 10 per group). Regarding BW, no significant differences were\nfound among the 5 groups for each weight level (Table S1), and the preoperative BWs of mice that died and those that\nsurvived were not significant (Table\nS2).\nMice were anesthetized with 4% chloralhydrate (0.1ml/1g BW, intraperitoneal\ninjection). When the mice did not respond when their toe was pinched, the limbs\nwere fixed on the operating board in the supine position and the skin was\nprepared by shaving and disinfection with alcohol. Sterile gauze was placed on\nthe right side of the abdomen and a ventrimesal incision approximately 1.5 cm\nwas created starting from the xiphoid. The skin was fixed with a spreader and\nthe viscera was pulled out gently with a swab and placed on the gauze. Then, the\nabdominal aorta was isolated using a blunt dissection technique with curved\nmicroforceps under a microscope. A 6-0 silk suture was snared and pulled back\naround the aorta 1mm above the superior mesenteric artery. A 2-mm blunt\nacupuncture needle (external diameters: 0.35 mm, 0.40 mm, 0.45 mm, and 0.50 mm;\nHuatuo; Suzhou Medical Appliance Factory, Suzhou, China; criterion number\nGB2024-1994) was then placed next to the aorta. The suture was tied snugly\naround the needle and the aorta. The needle was removed immediately after\nligation, the viscera were replaced, the peritoneum and skin were sutured, and\nthe mice were allowed to recover. Aortic ligation was omitted only for the sham\ngroup. After surgery, the ears were cut to differentiate the mice. Then, mice\nwere placed in an incubator at 30ºC until they woke, and they were\nreturned to their cages. Survival status was recorded daily. To observe the\nphysical development of mice under different conditions, BW differences before\nsurgery and at week 8 post-surgery were calculated as the change in BW.", "After post-surgery weeks 4 and 8, mice were weighed and anesthetized with 4%\nchloralhydrate and placed on a warming pad after skin preparation. Transthoracic\n2-dimensional (2D) echocardiography was performed using the GE Vivid E9\nUltrasound echocardiographic system (General Electric Company, Fairfield, CT,\nUSA) with the GE 9L probe (8-MHz linear array transducer; General Electric\nCompany). 
M-mode parasternal long-axis scans of the left ventricle at the mitral\nchordae level were used to quantify the interventricular septum thickness at\nend-diastole (IVSd), interventricular septum thickness at end-systole (IVSs),\nleft ventricular internal dimension at end-diastole (LVIDd), left ventricular\ninternal dimension at end-systole (LVIDs), left ventricular posterior wall\nthickness at end-diastole (LVPWd), left ventricular posterior wall thickness at\nend-systole (LVPWs), ejection fraction (EF), and fractional shortening (FS). All\nmice were tested using the same parameters.", "After echocardiographic analysis at 8 weeks post-surgery, mice were sacrificed by\ncervical dislocation and the hearts were dissected. Then, atrial and vascular\ntissues were snipped carefully, leaving the ventricles. The hearts were rinsed\nwith phosphate-buffered saline (PBS), drained by gently squeezing on absorbent\npaper, weighed, photographed under natural light, and fixed in 4%\nparaformaldehyde. The tibial lengths (TLs; mean value of the bilateral tibia)\nwere recorded. Heart weight (HW), BW, and TL were measured, and the HW/BW ratio\nand HW/TL ratio were calculated to evaluate the hypertrophic response to PO.", "Extracted hearts were fixed in 4% paraformaldehyde for 24h and dehydrated. After\nroutine histologic procedures, the hearts were embedded in paraffin and cut into\n4-µm sections. Sections were stained with hematoxylin and eosin (HE) and\npicrosirius red (PSR). Cardiac cross-sections were captured at 20 ×\nmicroscopic views from HE sections, and 5 thicknesses of the left ventricle in\neach view were selected in systematic sampling, and measured using Image-Pro\nPlus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA). Then, the mean values\nwere calculated. Cardiomyocyte morphological changes were captured at\n400×microscopic views from HE sections. Interstitial and/or perivascular\ncollagen depositions were captured at 200×microscopic views under\nstandard lights. Collagen was stained red using PSR, thereby indicating\nfibrosis. At least 6 views were selected in a blinded manner, and each\nphotograph was analyzed to reveal the ratio of red collagen to the entire tissue\narea using Image-Pro Plus 6.0. Then, the mean values were calculated.", "SPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA) was used for all\nstatistical analyses. The Kolmogorov-Smirnov (K-S) test was used to verify the\nnormality of the quantitative variables as appropriate. Data are presented as\nmean ± standard deviation (SD). One-way ANOVA and post-hoc Tukey tests\nwere used to evaluate differences between groups. p < 0.05 was considered\nstatistically significant.", "We monitored mice deaths after surgery according to acute heart failure (AHF)\ncriteria. Data (Table 1) showed that all\ndeaths occurred within 5 days, and a high incidence of death occurred during the\ninitial 24h post-surgery.\nMice deaths after surgery\nThere were no mice deaths in the AAC0.50-mm group or the sham group.\nDeaths were recorded during 3 time periods (0-24h, 24h-3d, and 3-5\nd); 54 deaths occurred within 0-24h post-surgery. The total number\nof deaths was 65.", "Echocardiography was performed at the end of post-operative weeks 4 and 8. 
At\nweek 4 post-surgery, data (Table 2)\nshowed a trend of heart enlargement for mice with AAC, including thickening of\nthe ventricular wall and an increase in chamber dilation; however, differences\nin EF and FS were not significant, indicating that changes in the heart\nstructure did not have a pronounced effect on cardiac function at that time\npoint. At week 8 post-surgery, the trend of heart enlargement continued;\nhowever, the EF and FS values for the AAC groups decreased significantly. This\nchange in cardiac function from week 4 to week 8 was consistent with systolic\nfunction beginning to be markedly affected at week 4 after PO surgery.\nEchocardiographic outcomes of 18-g, 22-g, and 26-g mice\nIVSd: interventricular septum thickness at end-diastole; IVSs:\ninterventricular septum thickness at end-systole; LVIDd: left\nventricular internal dimension at end-diastole; LVIDs: left\nventricular internal dimension at end-systole; LVPWd: left\nventricular posterior wall thickness at end-diastole; LVPWs: left\nventricular posterior wall thickness at end-systole; EF: ejection\nfraction; FS: fractional shortening. The cardiac dimensions\n(inclding IVSd, IVSs, LVIDd, LVIDs, LVPWd, and LVPWs) (mm) and\nfunctional indices (inclding EF and FS) changes measured by\nechocardiography. Data were statistically analyzed and presented as\nthe mean ± SD. At week 4, cardiac dimensions for the AAC\ngroups significantly increased compared with the sham groups (*p\n< 0.05); at week 8 , more cardiac dimensions significantly\nincreased, and that EF and FS values for the AAC groups all\ndecreased significantly compared to those of the sham groups for 3\nweight levels (*p < 0.05).", "Generally, the increased HW, HW/BW, and HW/TL ratio are the three main indicators\nof CH. In our study, as shown in Table 3,\nwe found that AAC significantly increased HW and caused a significantly higher\nHW/BW ratio and HW/TL ratio compared to the sham groups for all weight levels.\nThe HW, HW/BW, and HW/TL values for the AAC0.45 mm groups were significantly\nhigher than those for the AAC0.50 mm groups. These HW-related indices for the 18\ng/0.45 mm groups were even significantly higher than those for the 18 g/0.40 mm\ngroups.\nHeart weight–related indices of 18-g, 22-g, and 26-g mice\nHW: heart weight; BW: body weight; TL: tibial length. HW(mg),\nHW/BW(mg/g), and HW/TL(mg/cm) were measured and calculated from AAC\nand sham groups of 3 BW levels. Data are presented as the mean\n± SD.\nCompared to the sham group in the same BW level, the heart\nweight–related indices of AAC groups increased significantly (p <\n0.05).\nCompared to the rest groups in the same BW level, the heart\nweight–related indices of the AAC 0.45-mm group increased\nsignificantly (p < 0.05).", "For mice undergoing AAC surgery, the hearts demonstrated different degrees of\nenlargement (Figure 1A), enlargement of the\npapillary muscles, and thickening of the ventricular walls (Figure 1B). Wall thickening increased significantly compared\nwith that of the sham group (Table 4).\nThe sham groups showed normal architecture of the cardiomyocytes compared with\nthe AAC groups. Pathological changes including enlarged, disarrayed, and\neosinophilic cardiomyocytes and cardiomyocytes rich in cytoplasm and\ntrachychromatic and pantomorphic nuclei were observed in each of the AAC groups\n(Figure 1C). Scattered collagen\ndepositions in the interstitial and perivascular spaces were observed in the\nsham groups. 
In comparison, in some AAC groups, a larger quantity and wider\nrange of red deposits were observed in the interstitial space (Figure 1D), and thickened collagen was\nobserved in the perivascular space, especially in the external vascular wall\n(Figure 1E). Statistical analysis\nindicated that the AAC group had a significantly greater collagen area than the\nsham group (Table 5). These results imply\nthat AAC is capable of inducing PO-induced CH and fibrosis.\n\nFigure 1Cardiomyocyte hypertrophy and collagen deposition histological\nexamination. (A) Gross hearts under natural light. (B) The 20\n× microscopic views of HE sections. (C) The 400 ×\nmicroscopic views of HE sections. (D) Representative\n200×microscopic views under standard lights of PSR sections\nin the interstitial space. (E) Representative 200 ×\nmicroscopic views under standard lights of PSR sections in the\nperivascular space. Fibrosis is presented as red in the PSR\nsections.\n\nCardiomyocyte hypertrophy and collagen deposition histological\nexamination. (A) Gross hearts under natural light. (B) The 20\n× microscopic views of HE sections. (C) The 400 ×\nmicroscopic views of HE sections. (D) Representative\n200×microscopic views under standard lights of PSR sections\nin the interstitial space. (E) Representative 200 ×\nmicroscopic views under standard lights of PSR sections in the\nperivascular space. Fibrosis is presented as red in the PSR\nsections.\nThickness of the left ventricle (mm) based on weight and needle size\nData are presented as the mean ± SD (n = 5).\np < 0.05 represents a significant difference between the abdominal\naortic constriction (AAC) and sham groups.\nPercentage of collagen deposition in the left ventricle based on weight\nand needle size\nData are presented as the mean ± SD (n = 6).\np < 0.05 represents a significant difference between the abdominal\naortic constriction (AAC) and sham groups.", "Analysis showed that with AAC 0.45 mm, BW significantly increased in 18-g mice\ncompared to 22-g and 26-g mice (Table 6),\nindicating that the 18-g groups had higher development potential. In the 18-g\nmice groups, data showed that the value of 18 g/0.40 mm was significantly lower\nthan that of the 18 g/0.45 mm and 18 g/sham groups, and that there were no\nsignificant differences between the 18 g/0.45 mm and 18 g/sham groups (Table 7), indicating that the 18 g/0.45 mm\ngroup had nearly normal physical development. Development of the 18 g/0.40 mm\ngroup was limited.\nBody weight changes with AAC under 0.45 mm needle\nBW: body weight; AAC: abdominal aortic constriction. Data are\npresented as the mean ± SD.\np < 0.05 represents a significant difference between the 18 g/0.45\nmm group and the 22 g/0.45 mm and 26 g/0.45 mm groups.\nBody weight and BW changes in 18-g mice\nBW: body weight. BW changes of 18-g mice before and after surgery for\n8 weeks. Data are presented as the mean ± SD.\np < 0.05 represents a significant difference between the 18 g/0.40\nmm group and the rest groups after surgery." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Animal groups and handling", "Echocardiography imaging", "Heart weight, heart weight/body weight, and heart weight/tibial\nlength", "Histological examination of the heart", "Statistical Analysis", "Results", "Excessive AAC may lead to death", "AAC increases cardiac dimensions and reduces cardiac function", "AAC increases HW, HW/BW, and HW/TL ratio", "AAC leads to cardiomyocyte hypertrophy and increases collagen\ndepositions", "AAC may restrict physical development", "Discussion", "Conclusion" ]
[ "Cardiac hypertrophy (CH) is a compensatory pathological change that is usually\ninduced by pressure overload (PO), neurohumoral abnormality, and the effects of\ncytokines. It is characterized by cardiomyocyte hypertrophy and interstitial\nhyperplasia, and it results in an enlarged heart and thickening of the heart walls.\nClinically, CH is involved in the development of many diseases, such as valvular\ndisease, hypertension, arterial stenosis, and primary myocardial hypertrophy. If\nthese diseases develop at their own pace, then cardiac function (CF) will gradually\ndecompensate, leading to heart failure (HF), which severely lowers the quality of\nlife and increases the mortality rate. Therefore, CH is a widespread concern and has\nbeen explored at the molecular level by researchers. Due to the high genomic\nhomology between mice and humans, an established CH model for mice has been widely\nused in animal experiments, thereby providing an effective research foundation for\nCH exploration.\nCurrently, PO-induced CH is a common way to establish the model. Abdominal aortic\nconstriction (AAC) is highly recommended by researchers because of the high success\nrate and the ability to perform surgery without the need for thoracotomy or a\nventilator. However, the modeling effects with different ligating intensities for\ncertain body weights (BWs) have not yet been reported. Therefore, we used 3\nfrequently used mice BWs (18 g, 22 g, and 26 g) and 4 different needle sizes (0.35,\n0.40, 0.45, and 0.50 mm) to establish the CH model for each weight level for AAC,\nsummarized the survival rates, and evaluated the CH effects.", " Animal groups and handling One-hundred fifty male C57BL/6 wild-type mice were obtained from the Shanghai\nSLAC Laboratory Animal Co. Ltd (Shanghai, China). All animals were treated and\ncared for in accordance with the Guide for the Care and Use of\nLaboratory Animals (National Institutes of Health, Washington, DC,\n1996). Experimental protocols were approved by our Institutional Animal Care and\nUse Committee of Zhejiang University (Hangzhou, China). Mice were selected\naccording to weights of approximately 18 g (range, 17.3-18.7 g), 22 g (range,\n20.8-23.0 g), and 26 g (range, 25.1-27.0 g), and they were divided into the\nfollowing 3 weight levels: 18 g (18.0 ± 0.3 g; n = 50), 22 g (22.0\n± 0.6 g; n = 50), and 26 g (26.1 ± 0.5 g; n = 50). All weight\nlevels were divided using sortition randomization method to create a sham group\n(n = 10) and 4 AAC groups according to ligating intensities (0.35, 0.40, 0.45,\nand 0.50mm; n = 10 per group). Regarding BW, no significant differences were\nfound among the 5 groups for each weight level (Table S1), and the preoperative BWs of mice that died and those that\nsurvived were not significant (Table\nS2).\nMice were anesthetized with 4% chloralhydrate (0.1ml/1g BW, intraperitoneal\ninjection). When the mice did not respond when their toe was pinched, the limbs\nwere fixed on the operating board in the supine position and the skin was\nprepared by shaving and disinfection with alcohol. Sterile gauze was placed on\nthe right side of the abdomen and a ventrimesal incision approximately 1.5 cm\nwas created starting from the xiphoid. The skin was fixed with a spreader and\nthe viscera was pulled out gently with a swab and placed on the gauze. Then, the\nabdominal aorta was isolated using a blunt dissection technique with curved\nmicroforceps under a microscope. 
A 6-0 silk suture was snared and pulled back\naround the aorta 1mm above the superior mesenteric artery. A 2-mm blunt\nacupuncture needle (external diameters: 0.35 mm, 0.40 mm, 0.45 mm, and 0.50 mm;\nHuatuo; Suzhou Medical Appliance Factory, Suzhou, China; criterion number\nGB2024-1994) was then placed next to the aorta. The suture was tied snugly\naround the needle and the aorta. The needle was removed immediately after\nligation, the viscera were replaced, the peritoneum and skin were sutured, and\nthe mice were allowed to recover. Aortic ligation was omitted only for the sham\ngroup. After surgery, the ears were cut to differentiate the mice. Then, mice\nwere placed in an incubator at 30ºC until they woke, and they were\nreturned to their cages. Survival status was recorded daily. To observe the\nphysical development of mice under different conditions, BW differences before\nsurgery and at week 8 post-surgery were calculated as the change in BW.\nOne-hundred fifty male C57BL/6 wild-type mice were obtained from the Shanghai\nSLAC Laboratory Animal Co. Ltd (Shanghai, China). All animals were treated and\ncared for in accordance with the Guide for the Care and Use of\nLaboratory Animals (National Institutes of Health, Washington, DC,\n1996). Experimental protocols were approved by our Institutional Animal Care and\nUse Committee of Zhejiang University (Hangzhou, China). Mice were selected\naccording to weights of approximately 18 g (range, 17.3-18.7 g), 22 g (range,\n20.8-23.0 g), and 26 g (range, 25.1-27.0 g), and they were divided into the\nfollowing 3 weight levels: 18 g (18.0 ± 0.3 g; n = 50), 22 g (22.0\n± 0.6 g; n = 50), and 26 g (26.1 ± 0.5 g; n = 50). All weight\nlevels were divided using sortition randomization method to create a sham group\n(n = 10) and 4 AAC groups according to ligating intensities (0.35, 0.40, 0.45,\nand 0.50mm; n = 10 per group). Regarding BW, no significant differences were\nfound among the 5 groups for each weight level (Table S1), and the preoperative BWs of mice that died and those that\nsurvived were not significant (Table\nS2).\nMice were anesthetized with 4% chloralhydrate (0.1ml/1g BW, intraperitoneal\ninjection). When the mice did not respond when their toe was pinched, the limbs\nwere fixed on the operating board in the supine position and the skin was\nprepared by shaving and disinfection with alcohol. Sterile gauze was placed on\nthe right side of the abdomen and a ventrimesal incision approximately 1.5 cm\nwas created starting from the xiphoid. The skin was fixed with a spreader and\nthe viscera was pulled out gently with a swab and placed on the gauze. Then, the\nabdominal aorta was isolated using a blunt dissection technique with curved\nmicroforceps under a microscope. A 6-0 silk suture was snared and pulled back\naround the aorta 1mm above the superior mesenteric artery. A 2-mm blunt\nacupuncture needle (external diameters: 0.35 mm, 0.40 mm, 0.45 mm, and 0.50 mm;\nHuatuo; Suzhou Medical Appliance Factory, Suzhou, China; criterion number\nGB2024-1994) was then placed next to the aorta. The suture was tied snugly\naround the needle and the aorta. The needle was removed immediately after\nligation, the viscera were replaced, the peritoneum and skin were sutured, and\nthe mice were allowed to recover. Aortic ligation was omitted only for the sham\ngroup. After surgery, the ears were cut to differentiate the mice. Then, mice\nwere placed in an incubator at 30ºC until they woke, and they were\nreturned to their cages. 
Survival status was recorded daily. To observe the\nphysical development of mice under different conditions, BW differences before\nsurgery and at week 8 post-surgery were calculated as the change in BW.\n Echocardiography imaging After post-surgery weeks 4 and 8, mice were weighed and anesthetized with 4%\nchloralhydrate and placed on a warming pad after skin preparation. Transthoracic\n2-dimensional (2D) echocardiography was performed using the GE Vivid E9\nUltrasound echocardiographic system (General Electric Company, Fairfield, CT,\nUSA) with the GE 9L probe (8-MHz linear array transducer; General Electric\nCompany). M-mode parasternal long-axis scans of the left ventricle at the mitral\nchordae level were used to quantify the interventricular septum thickness at\nend-diastole (IVSd), interventricular septum thickness at end-systole (IVSs),\nleft ventricular internal dimension at end-diastole (LVIDd), left ventricular\ninternal dimension at end-systole (LVIDs), left ventricular posterior wall\nthickness at end-diastole (LVPWd), left ventricular posterior wall thickness at\nend-systole (LVPWs), ejection fraction (EF), and fractional shortening (FS). All\nmice were tested using the same parameters.\nAfter post-surgery weeks 4 and 8, mice were weighed and anesthetized with 4%\nchloralhydrate and placed on a warming pad after skin preparation. Transthoracic\n2-dimensional (2D) echocardiography was performed using the GE Vivid E9\nUltrasound echocardiographic system (General Electric Company, Fairfield, CT,\nUSA) with the GE 9L probe (8-MHz linear array transducer; General Electric\nCompany). M-mode parasternal long-axis scans of the left ventricle at the mitral\nchordae level were used to quantify the interventricular septum thickness at\nend-diastole (IVSd), interventricular septum thickness at end-systole (IVSs),\nleft ventricular internal dimension at end-diastole (LVIDd), left ventricular\ninternal dimension at end-systole (LVIDs), left ventricular posterior wall\nthickness at end-diastole (LVPWd), left ventricular posterior wall thickness at\nend-systole (LVPWs), ejection fraction (EF), and fractional shortening (FS). All\nmice were tested using the same parameters.\n Heart weight, heart weight/body weight, and heart weight/tibial\nlength After echocardiographic analysis at 8 weeks post-surgery, mice were sacrificed by\ncervical dislocation and the hearts were dissected. Then, atrial and vascular\ntissues were snipped carefully, leaving the ventricles. The hearts were rinsed\nwith phosphate-buffered saline (PBS), drained by gently squeezing on absorbent\npaper, weighed, photographed under natural light, and fixed in 4%\nparaformaldehyde. The tibial lengths (TLs; mean value of the bilateral tibia)\nwere recorded. Heart weight (HW), BW, and TL were measured, and the HW/BW ratio\nand HW/TL ratio were calculated to evaluate the hypertrophic response to PO.\nAfter echocardiographic analysis at 8 weeks post-surgery, mice were sacrificed by\ncervical dislocation and the hearts were dissected. Then, atrial and vascular\ntissues were snipped carefully, leaving the ventricles. The hearts were rinsed\nwith phosphate-buffered saline (PBS), drained by gently squeezing on absorbent\npaper, weighed, photographed under natural light, and fixed in 4%\nparaformaldehyde. The tibial lengths (TLs; mean value of the bilateral tibia)\nwere recorded. 
Heart weight (HW), BW, and TL were measured, and the HW/BW ratio\nand HW/TL ratio were calculated to evaluate the hypertrophic response to PO.\n Histological examination of the heart Extracted hearts were fixed in 4% paraformaldehyde for 24h and dehydrated. After\nroutine histologic procedures, the hearts were embedded in paraffin and cut into\n4-µm sections. Sections were stained with hematoxylin and eosin (HE) and\npicrosirius red (PSR). Cardiac cross-sections were captured at 20 ×\nmicroscopic views from HE sections, and 5 thicknesses of the left ventricle in\neach view were selected in systematic sampling, and measured using Image-Pro\nPlus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA). Then, the mean values\nwere calculated. Cardiomyocyte morphological changes were captured at\n400×microscopic views from HE sections. Interstitial and/or perivascular\ncollagen depositions were captured at 200×microscopic views under\nstandard lights. Collagen was stained red using PSR, thereby indicating\nfibrosis. At least 6 views were selected in a blinded manner, and each\nphotograph was analyzed to reveal the ratio of red collagen to the entire tissue\narea using Image-Pro Plus 6.0. Then, the mean values were calculated.\nExtracted hearts were fixed in 4% paraformaldehyde for 24h and dehydrated. After\nroutine histologic procedures, the hearts were embedded in paraffin and cut into\n4-µm sections. Sections were stained with hematoxylin and eosin (HE) and\npicrosirius red (PSR). Cardiac cross-sections were captured at 20 ×\nmicroscopic views from HE sections, and 5 thicknesses of the left ventricle in\neach view were selected in systematic sampling, and measured using Image-Pro\nPlus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA). Then, the mean values\nwere calculated. Cardiomyocyte morphological changes were captured at\n400×microscopic views from HE sections. Interstitial and/or perivascular\ncollagen depositions were captured at 200×microscopic views under\nstandard lights. Collagen was stained red using PSR, thereby indicating\nfibrosis. At least 6 views were selected in a blinded manner, and each\nphotograph was analyzed to reveal the ratio of red collagen to the entire tissue\narea using Image-Pro Plus 6.0. Then, the mean values were calculated.\n Statistical Analysis SPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA) was used for all\nstatistical analyses. The Kolmogorov-Smirnov (K-S) test was used to verify the\nnormality of the quantitative variables as appropriate. Data are presented as\nmean ± standard deviation (SD). One-way ANOVA and post-hoc Tukey tests\nwere used to evaluate differences between groups. p < 0.05 was considered\nstatistically significant.\nSPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA) was used for all\nstatistical analyses. The Kolmogorov-Smirnov (K-S) test was used to verify the\nnormality of the quantitative variables as appropriate. Data are presented as\nmean ± standard deviation (SD). One-way ANOVA and post-hoc Tukey tests\nwere used to evaluate differences between groups. p < 0.05 was considered\nstatistically significant.", "One-hundred fifty male C57BL/6 wild-type mice were obtained from the Shanghai\nSLAC Laboratory Animal Co. Ltd (Shanghai, China). All animals were treated and\ncared for in accordance with the Guide for the Care and Use of\nLaboratory Animals (National Institutes of Health, Washington, DC,\n1996). 
Experimental protocols were approved by our Institutional Animal Care and\nUse Committee of Zhejiang University (Hangzhou, China). Mice were selected\naccording to weights of approximately 18 g (range, 17.3-18.7 g), 22 g (range,\n20.8-23.0 g), and 26 g (range, 25.1-27.0 g), and they were divided into the\nfollowing 3 weight levels: 18 g (18.0 ± 0.3 g; n = 50), 22 g (22.0\n± 0.6 g; n = 50), and 26 g (26.1 ± 0.5 g; n = 50). All weight\nlevels were divided using sortition randomization method to create a sham group\n(n = 10) and 4 AAC groups according to ligating intensities (0.35, 0.40, 0.45,\nand 0.50mm; n = 10 per group). Regarding BW, no significant differences were\nfound among the 5 groups for each weight level (Table S1), and the preoperative BWs of mice that died and those that\nsurvived were not significant (Table\nS2).\nMice were anesthetized with 4% chloralhydrate (0.1ml/1g BW, intraperitoneal\ninjection). When the mice did not respond when their toe was pinched, the limbs\nwere fixed on the operating board in the supine position and the skin was\nprepared by shaving and disinfection with alcohol. Sterile gauze was placed on\nthe right side of the abdomen and a ventrimesal incision approximately 1.5 cm\nwas created starting from the xiphoid. The skin was fixed with a spreader and\nthe viscera was pulled out gently with a swab and placed on the gauze. Then, the\nabdominal aorta was isolated using a blunt dissection technique with curved\nmicroforceps under a microscope. A 6-0 silk suture was snared and pulled back\naround the aorta 1mm above the superior mesenteric artery. A 2-mm blunt\nacupuncture needle (external diameters: 0.35 mm, 0.40 mm, 0.45 mm, and 0.50 mm;\nHuatuo; Suzhou Medical Appliance Factory, Suzhou, China; criterion number\nGB2024-1994) was then placed next to the aorta. The suture was tied snugly\naround the needle and the aorta. The needle was removed immediately after\nligation, the viscera were replaced, the peritoneum and skin were sutured, and\nthe mice were allowed to recover. Aortic ligation was omitted only for the sham\ngroup. After surgery, the ears were cut to differentiate the mice. Then, mice\nwere placed in an incubator at 30ºC until they woke, and they were\nreturned to their cages. Survival status was recorded daily. To observe the\nphysical development of mice under different conditions, BW differences before\nsurgery and at week 8 post-surgery were calculated as the change in BW.", "After post-surgery weeks 4 and 8, mice were weighed and anesthetized with 4%\nchloralhydrate and placed on a warming pad after skin preparation. Transthoracic\n2-dimensional (2D) echocardiography was performed using the GE Vivid E9\nUltrasound echocardiographic system (General Electric Company, Fairfield, CT,\nUSA) with the GE 9L probe (8-MHz linear array transducer; General Electric\nCompany). M-mode parasternal long-axis scans of the left ventricle at the mitral\nchordae level were used to quantify the interventricular septum thickness at\nend-diastole (IVSd), interventricular septum thickness at end-systole (IVSs),\nleft ventricular internal dimension at end-diastole (LVIDd), left ventricular\ninternal dimension at end-systole (LVIDs), left ventricular posterior wall\nthickness at end-diastole (LVPWd), left ventricular posterior wall thickness at\nend-systole (LVPWs), ejection fraction (EF), and fractional shortening (FS). 
All\nmice were tested using the same parameters.", "After echocardiographic analysis at 8 weeks post-surgery, mice were sacrificed by\ncervical dislocation and the hearts were dissected. Then, atrial and vascular\ntissues were snipped carefully, leaving the ventricles. The hearts were rinsed\nwith phosphate-buffered saline (PBS), drained by gently squeezing on absorbent\npaper, weighed, photographed under natural light, and fixed in 4%\nparaformaldehyde. The tibial lengths (TLs; mean value of the bilateral tibia)\nwere recorded. Heart weight (HW), BW, and TL were measured, and the HW/BW ratio\nand HW/TL ratio were calculated to evaluate the hypertrophic response to PO.", "Extracted hearts were fixed in 4% paraformaldehyde for 24h and dehydrated. After\nroutine histologic procedures, the hearts were embedded in paraffin and cut into\n4-µm sections. Sections were stained with hematoxylin and eosin (HE) and\npicrosirius red (PSR). Cardiac cross-sections were captured at 20 ×\nmicroscopic views from HE sections, and 5 thicknesses of the left ventricle in\neach view were selected in systematic sampling, and measured using Image-Pro\nPlus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA). Then, the mean values\nwere calculated. Cardiomyocyte morphological changes were captured at\n400×microscopic views from HE sections. Interstitial and/or perivascular\ncollagen depositions were captured at 200×microscopic views under\nstandard lights. Collagen was stained red using PSR, thereby indicating\nfibrosis. At least 6 views were selected in a blinded manner, and each\nphotograph was analyzed to reveal the ratio of red collagen to the entire tissue\narea using Image-Pro Plus 6.0. Then, the mean values were calculated.", "SPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA) was used for all\nstatistical analyses. The Kolmogorov-Smirnov (K-S) test was used to verify the\nnormality of the quantitative variables as appropriate. Data are presented as\nmean ± standard deviation (SD). One-way ANOVA and post-hoc Tukey tests\nwere used to evaluate differences between groups. p < 0.05 was considered\nstatistically significant.", " Excessive AAC may lead to death We monitored mice deaths after surgery according to acute heart failure (AHF)\ncriteria. Data (Table 1) showed that all\ndeaths occurred within 5 days, and a high incidence of death occurred during the\ninitial 24h post-surgery.\nMice deaths after surgery\nThere were no mice deaths in the AAC0.50-mm group or the sham group.\nDeaths were recorded during 3 time periods (0-24h, 24h-3d, and 3-5\nd); 54 deaths occurred within 0-24h post-surgery. The total number\nof deaths was 65.\nWe monitored mice deaths after surgery according to acute heart failure (AHF)\ncriteria. Data (Table 1) showed that all\ndeaths occurred within 5 days, and a high incidence of death occurred during the\ninitial 24h post-surgery.\nMice deaths after surgery\nThere were no mice deaths in the AAC0.50-mm group or the sham group.\nDeaths were recorded during 3 time periods (0-24h, 24h-3d, and 3-5\nd); 54 deaths occurred within 0-24h post-surgery. The total number\nof deaths was 65.\n AAC increases cardiac dimensions and reduces cardiac function Echocardiography was performed at the end of post-operative weeks 4 and 8. 
At\nweek 4 post-surgery, data (Table 2)\nshowed a trend of heart enlargement for mice with AAC, including thickening of\nthe ventricular wall and an increase in chamber dilation; however, differences\nin EF and FS were not significant, indicating that changes in the heart\nstructure did not have a pronounced effect on cardiac function at that time\npoint. At week 8 post-surgery, the trend of heart enlargement continued;\nhowever, the EF and FS values for the AAC groups decreased significantly. This\nchange in cardiac function from week 4 to week 8 was consistent with systolic\nfunction beginning to be markedly affected at week 4 after PO surgery.\nEchocardiographic outcomes of 18-g, 22-g, and 26-g mice\nIVSd: interventricular septum thickness at end-diastole; IVSs:\ninterventricular septum thickness at end-systole; LVIDd: left\nventricular internal dimension at end-diastole; LVIDs: left\nventricular internal dimension at end-systole; LVPWd: left\nventricular posterior wall thickness at end-diastole; LVPWs: left\nventricular posterior wall thickness at end-systole; EF: ejection\nfraction; FS: fractional shortening. The cardiac dimensions\n(inclding IVSd, IVSs, LVIDd, LVIDs, LVPWd, and LVPWs) (mm) and\nfunctional indices (inclding EF and FS) changes measured by\nechocardiography. Data were statistically analyzed and presented as\nthe mean ± SD. At week 4, cardiac dimensions for the AAC\ngroups significantly increased compared with the sham groups (*p\n< 0.05); at week 8 , more cardiac dimensions significantly\nincreased, and that EF and FS values for the AAC groups all\ndecreased significantly compared to those of the sham groups for 3\nweight levels (*p < 0.05).\nEchocardiography was performed at the end of post-operative weeks 4 and 8. At\nweek 4 post-surgery, data (Table 2)\nshowed a trend of heart enlargement for mice with AAC, including thickening of\nthe ventricular wall and an increase in chamber dilation; however, differences\nin EF and FS were not significant, indicating that changes in the heart\nstructure did not have a pronounced effect on cardiac function at that time\npoint. At week 8 post-surgery, the trend of heart enlargement continued;\nhowever, the EF and FS values for the AAC groups decreased significantly. This\nchange in cardiac function from week 4 to week 8 was consistent with systolic\nfunction beginning to be markedly affected at week 4 after PO surgery.\nEchocardiographic outcomes of 18-g, 22-g, and 26-g mice\nIVSd: interventricular septum thickness at end-diastole; IVSs:\ninterventricular septum thickness at end-systole; LVIDd: left\nventricular internal dimension at end-diastole; LVIDs: left\nventricular internal dimension at end-systole; LVPWd: left\nventricular posterior wall thickness at end-diastole; LVPWs: left\nventricular posterior wall thickness at end-systole; EF: ejection\nfraction; FS: fractional shortening. The cardiac dimensions\n(inclding IVSd, IVSs, LVIDd, LVIDs, LVPWd, and LVPWs) (mm) and\nfunctional indices (inclding EF and FS) changes measured by\nechocardiography. Data were statistically analyzed and presented as\nthe mean ± SD. 
At week 4, cardiac dimensions for the AAC\ngroups significantly increased compared with the sham groups (*p\n< 0.05); at week 8 , more cardiac dimensions significantly\nincreased, and that EF and FS values for the AAC groups all\ndecreased significantly compared to those of the sham groups for 3\nweight levels (*p < 0.05).\n AAC increases HW, HW/BW, and HW/TL ratio Generally, the increased HW, HW/BW, and HW/TL ratio are the three main indicators\nof CH. In our study, as shown in Table 3,\nwe found that AAC significantly increased HW and caused a significantly higher\nHW/BW ratio and HW/TL ratio compared to the sham groups for all weight levels.\nThe HW, HW/BW, and HW/TL values for the AAC0.45 mm groups were significantly\nhigher than those for the AAC0.50 mm groups. These HW-related indices for the 18\ng/0.45 mm groups were even significantly higher than those for the 18 g/0.40 mm\ngroups.\nHeart weight–related indices of 18-g, 22-g, and 26-g mice\nHW: heart weight; BW: body weight; TL: tibial length. HW(mg),\nHW/BW(mg/g), and HW/TL(mg/cm) were measured and calculated from AAC\nand sham groups of 3 BW levels. Data are presented as the mean\n± SD.\nCompared to the sham group in the same BW level, the heart\nweight–related indices of AAC groups increased significantly (p <\n0.05).\nCompared to the rest groups in the same BW level, the heart\nweight–related indices of the AAC 0.45-mm group increased\nsignificantly (p < 0.05).\nGenerally, the increased HW, HW/BW, and HW/TL ratio are the three main indicators\nof CH. In our study, as shown in Table 3,\nwe found that AAC significantly increased HW and caused a significantly higher\nHW/BW ratio and HW/TL ratio compared to the sham groups for all weight levels.\nThe HW, HW/BW, and HW/TL values for the AAC0.45 mm groups were significantly\nhigher than those for the AAC0.50 mm groups. These HW-related indices for the 18\ng/0.45 mm groups were even significantly higher than those for the 18 g/0.40 mm\ngroups.\nHeart weight–related indices of 18-g, 22-g, and 26-g mice\nHW: heart weight; BW: body weight; TL: tibial length. HW(mg),\nHW/BW(mg/g), and HW/TL(mg/cm) were measured and calculated from AAC\nand sham groups of 3 BW levels. Data are presented as the mean\n± SD.\nCompared to the sham group in the same BW level, the heart\nweight–related indices of AAC groups increased significantly (p <\n0.05).\nCompared to the rest groups in the same BW level, the heart\nweight–related indices of the AAC 0.45-mm group increased\nsignificantly (p < 0.05).\n AAC leads to cardiomyocyte hypertrophy and increases collagen\ndepositions For mice undergoing AAC surgery, the hearts demonstrated different degrees of\nenlargement (Figure 1A), enlargement of the\npapillary muscles, and thickening of the ventricular walls (Figure 1B). Wall thickening increased significantly compared\nwith that of the sham group (Table 4).\nThe sham groups showed normal architecture of the cardiomyocytes compared with\nthe AAC groups. Pathological changes including enlarged, disarrayed, and\neosinophilic cardiomyocytes and cardiomyocytes rich in cytoplasm and\ntrachychromatic and pantomorphic nuclei were observed in each of the AAC groups\n(Figure 1C). Scattered collagen\ndepositions in the interstitial and perivascular spaces were observed in the\nsham groups. 
In comparison, in some AAC groups, a larger quantity and wider\nrange of red deposits were observed in the interstitial space (Figure 1D), and thickened collagen was\nobserved in the perivascular space, especially in the external vascular wall\n(Figure 1E). Statistical analysis\nindicated that the AAC group had a significantly greater collagen area than the\nsham group (Table 5). These results imply\nthat AAC is capable of inducing PO-induced CH and fibrosis.\n\nFigure 1Cardiomyocyte hypertrophy and collagen deposition histological\nexamination. (A) Gross hearts under natural light. (B) The 20\n× microscopic views of HE sections. (C) The 400 ×\nmicroscopic views of HE sections. (D) Representative\n200×microscopic views under standard lights of PSR sections\nin the interstitial space. (E) Representative 200 ×\nmicroscopic views under standard lights of PSR sections in the\nperivascular space. Fibrosis is presented as red in the PSR\nsections.\n\nCardiomyocyte hypertrophy and collagen deposition histological\nexamination. (A) Gross hearts under natural light. (B) The 20\n× microscopic views of HE sections. (C) The 400 ×\nmicroscopic views of HE sections. (D) Representative\n200×microscopic views under standard lights of PSR sections\nin the interstitial space. (E) Representative 200 ×\nmicroscopic views under standard lights of PSR sections in the\nperivascular space. Fibrosis is presented as red in the PSR\nsections.\nThickness of the left ventricle (mm) based on weight and needle size\nData are presented as the mean ± SD (n = 5).\np < 0.05 represents a significant difference between the abdominal\naortic constriction (AAC) and sham groups.\nPercentage of collagen deposition in the left ventricle based on weight\nand needle size\nData are presented as the mean ± SD (n = 6).\np < 0.05 represents a significant difference between the abdominal\naortic constriction (AAC) and sham groups.\nFor mice undergoing AAC surgery, the hearts demonstrated different degrees of\nenlargement (Figure 1A), enlargement of the\npapillary muscles, and thickening of the ventricular walls (Figure 1B). Wall thickening increased significantly compared\nwith that of the sham group (Table 4).\nThe sham groups showed normal architecture of the cardiomyocytes compared with\nthe AAC groups. Pathological changes including enlarged, disarrayed, and\neosinophilic cardiomyocytes and cardiomyocytes rich in cytoplasm and\ntrachychromatic and pantomorphic nuclei were observed in each of the AAC groups\n(Figure 1C). Scattered collagen\ndepositions in the interstitial and perivascular spaces were observed in the\nsham groups. In comparison, in some AAC groups, a larger quantity and wider\nrange of red deposits were observed in the interstitial space (Figure 1D), and thickened collagen was\nobserved in the perivascular space, especially in the external vascular wall\n(Figure 1E). Statistical analysis\nindicated that the AAC group had a significantly greater collagen area than the\nsham group (Table 5). These results imply\nthat AAC is capable of inducing PO-induced CH and fibrosis.\n\nFigure 1Cardiomyocyte hypertrophy and collagen deposition histological\nexamination. (A) Gross hearts under natural light. (B) The 20\n× microscopic views of HE sections. (C) The 400 ×\nmicroscopic views of HE sections. (D) Representative\n200×microscopic views under standard lights of PSR sections\nin the interstitial space. (E) Representative 200 ×\nmicroscopic views under standard lights of PSR sections in the\nperivascular space. 
Fibrosis is presented as red in the PSR\nsections.\n\nCardiomyocyte hypertrophy and collagen deposition histological\nexamination. (A) Gross hearts under natural light. (B) The 20\n× microscopic views of HE sections. (C) The 400 ×\nmicroscopic views of HE sections. (D) Representative\n200×microscopic views under standard lights of PSR sections\nin the interstitial space. (E) Representative 200 ×\nmicroscopic views under standard lights of PSR sections in the\nperivascular space. Fibrosis is presented as red in the PSR\nsections.\nThickness of the left ventricle (mm) based on weight and needle size\nData are presented as the mean ± SD (n = 5).\np < 0.05 represents a significant difference between the abdominal\naortic constriction (AAC) and sham groups.\nPercentage of collagen deposition in the left ventricle based on weight\nand needle size\nData are presented as the mean ± SD (n = 6).\np < 0.05 represents a significant difference between the abdominal\naortic constriction (AAC) and sham groups.\n AAC may restrict physical development Analysis showed that with AAC 0.45 mm, BW significantly increased in 18-g mice\ncompared to 22-g and 26-g mice (Table 6),\nindicating that the 18-g groups had higher development potential. In the 18-g\nmice groups, data showed that the value of 18 g/0.40 mm was significantly lower\nthan that of the 18 g/0.45 mm and 18 g/sham groups, and that there were no\nsignificant differences between the 18 g/0.45 mm and 18 g/sham groups (Table 7), indicating that the 18 g/0.45 mm\ngroup had nearly normal physical development. Development of the 18 g/0.40 mm\ngroup was limited.\nBody weight changes with AAC under 0.45 mm needle\nBW: body weight; AAC: abdominal aortic constriction. Data are\npresented as the mean ± SD.\np < 0.05 represents a significant difference between the 18 g/0.45\nmm group and the 22 g/0.45 mm and 26 g/0.45 mm groups.\nBody weight and BW changes in 18-g mice\nBW: body weight. BW changes of 18-g mice before and after surgery for\n8 weeks. Data are presented as the mean ± SD.\np < 0.05 represents a significant difference between the 18 g/0.40\nmm group and the rest groups after surgery.\nAnalysis showed that with AAC 0.45 mm, BW significantly increased in 18-g mice\ncompared to 22-g and 26-g mice (Table 6),\nindicating that the 18-g groups had higher development potential. In the 18-g\nmice groups, data showed that the value of 18 g/0.40 mm was significantly lower\nthan that of the 18 g/0.45 mm and 18 g/sham groups, and that there were no\nsignificant differences between the 18 g/0.45 mm and 18 g/sham groups (Table 7), indicating that the 18 g/0.45 mm\ngroup had nearly normal physical development. Development of the 18 g/0.40 mm\ngroup was limited.\nBody weight changes with AAC under 0.45 mm needle\nBW: body weight; AAC: abdominal aortic constriction. Data are\npresented as the mean ± SD.\np < 0.05 represents a significant difference between the 18 g/0.45\nmm group and the 22 g/0.45 mm and 26 g/0.45 mm groups.\nBody weight and BW changes in 18-g mice\nBW: body weight. BW changes of 18-g mice before and after surgery for\n8 weeks. Data are presented as the mean ± SD.\np < 0.05 represents a significant difference between the 18 g/0.40\nmm group and the rest groups after surgery.", "We monitored mice deaths after surgery according to acute heart failure (AHF)\ncriteria. 
Excessive AAC may lead to death: We monitored mice deaths after surgery according to acute heart failure (AHF) criteria. Data (Table 1) showed that all deaths occurred within 5 days, and a high incidence of death occurred during the initial 24 h post-surgery.

Mice deaths after surgery (Table 1): There were no mice deaths in the AAC 0.50-mm group or the sham group. Deaths were recorded during 3 time periods (0-24 h, 24 h-3 d, and 3-5 d); 54 deaths occurred within 0-24 h post-surgery. The total number of deaths was 65.

AAC increases cardiac dimensions and reduces cardiac function: Echocardiography was performed at the end of post-operative weeks 4 and 8. At week 4 post-surgery, the data (Table 2) showed a trend of heart enlargement for mice with AAC, including thickening of the ventricular wall and an increase in chamber dilation; however, differences in EF and FS were not significant, indicating that changes in heart structure did not have a pronounced effect on cardiac function at that time point. At week 8 post-surgery, the trend of heart enlargement continued; however, the EF and FS values for the AAC groups decreased significantly. This change in cardiac function from week 4 to week 8 is consistent with systolic function beginning to be markedly affected after week 4 post-surgery.

Echocardiographic outcomes of 18-g, 22-g, and 26-g mice (Table 2): IVSd: interventricular septum thickness at end-diastole; IVSs: interventricular septum thickness at end-systole; LVIDd: left ventricular internal dimension at end-diastole; LVIDs: left ventricular internal dimension at end-systole; LVPWd: left ventricular posterior wall thickness at end-diastole; LVPWs: left ventricular posterior wall thickness at end-systole; EF: ejection fraction; FS: fractional shortening. Changes in the cardiac dimensions (IVSd, IVSs, LVIDd, LVIDs, LVPWd, and LVPWs; mm) and functional indices (EF and FS) were measured by echocardiography. Data were statistically analyzed and are presented as the mean ± SD. At week 4, cardiac dimensions for the AAC groups significantly increased compared with the sham groups (*p < 0.05); at week 8, more cardiac dimensions significantly increased, and the EF and FS values for the AAC groups all decreased significantly compared to those of the sham groups at the 3 weight levels (*p < 0.05).

AAC increases HW, HW/BW, and HW/TL ratio: Generally, increased HW, HW/BW ratio, and HW/TL ratio are the three main indicators of CH. In our study, as shown in Table 3, we found that AAC significantly increased HW and caused a significantly higher HW/BW ratio and HW/TL ratio compared to the sham groups for all weight levels. The HW, HW/BW, and HW/TL values for the AAC 0.45-mm groups were significantly higher than those for the AAC 0.50-mm groups. These HW-related indices for the 18 g/0.45 mm group were even significantly higher than those for the 18 g/0.40 mm group.

Heart weight–related indices of 18-g, 22-g, and 26-g mice (Table 3): HW: heart weight; BW: body weight; TL: tibial length. HW (mg), HW/BW (mg/g), and HW/TL (mg/cm) were measured and calculated for the AAC and sham groups at the 3 BW levels. Data are presented as the mean ± SD. Compared to the sham group at the same BW level, the heart weight–related indices of the AAC groups increased significantly (p < 0.05). Compared to the other groups at the same BW level, the heart weight–related indices of the AAC 0.45-mm group increased significantly (p < 0.05).
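The functional indices in Table 2 (EF and FS) are reported directly by the echocardiography system from the M-mode dimensions. For reference, a minimal sketch of the conventional M-mode derivation is given below; the Teichholz volume correction is an assumption about how such systems commonly estimate EF (the text does not state the exact formula used by the Vivid E9), and the example dimensions are placeholders rather than values from Table 2.

```python
# Minimal sketch: conventional M-mode derivation of FS and EF.
# Teichholz correction and the example dimensions are assumptions/placeholders.

def fractional_shortening(lvidd_mm: float, lvids_mm: float) -> float:
    """FS (%) from LV internal dimensions at end-diastole and end-systole."""
    return (lvidd_mm - lvids_mm) / lvidd_mm * 100.0

def teichholz_volume(d_mm: float) -> float:
    """LV volume (approx. uL) from one M-mode dimension via the Teichholz formula."""
    return 7.0 / (2.4 + d_mm) * d_mm ** 3

def ejection_fraction(lvidd_mm: float, lvids_mm: float) -> float:
    """EF (%) from end-diastolic and end-systolic Teichholz volumes."""
    edv, esv = teichholz_volume(lvidd_mm), teichholz_volume(lvids_mm)
    return (edv - esv) / edv * 100.0

print(fractional_shortening(3.8, 2.4))  # placeholder dimensions -> ~36.8 %
print(ejection_fraction(3.8, 2.4))      # placeholder dimensions -> ~67.5 %
```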
In this study, we performed AAC at 4 different ligating intensities in mice of 3 different weight levels to evaluate the survival rates of the mice and the CH induced by PO under the different conditions. This is the first study showing that CH diversities exist among groups under different ligations and BWs.

AAC is widely used to model CH induced by PO in mice. Needle ligation is usually used, and the efficiency of modeling is highly dependent on ligation intensity. Nevertheless, excessive constriction will lead to death,1 and our research findings (Table 1) demonstrated this point. In this study, a 0.35-mm needle caused the death of all mice at the 3 weight levels, and the 0.40-mm needle caused the death of all mice in the 22-g and 26-g groups. Contrarily, all mice that underwent AAC with a 0.50-mm needle or a sham operation survived. Mice in the other groups had different mortality rates. Regarding the selection of needles for the BW ranges of this study, a needle smaller than 0.35 mm in diameter caused stronger constriction and death. However, a needle larger than 0.50 mm in diameter would not further improve the survival rate, but it would reduce the efficiency of CH because of the weaker constriction and reduced PO. This is why we chose needles between 0.35 mm and 0.50 mm.

Death can occur after AAC. Undoubtedly, AAC increases cardiac afterloading. To cope with the additional biodynamics, the heart exerts a series of adaptive changes, including activation and hypertrophy of cardiomyocytes and hyperplasia of the extracellular matrix.2 This compensatory mechanism maintains cardiac output (CO) effectively for a period of time while maintaining the survival of the organism; it is also the basis for the establishment of the CH model. However, when the sudden afterloading is beyond the range of cardiomyocyte adjustment, blood flow becomes limited, resulting in AHF. AHF is typically characterized by rapid changes in heart failure (HF) symptoms.3 Sato et al.4 considered the incidence of death within 5 days as an assessment criterion of AHF. AHF can moderately or markedly improve by the second day if effectively controlled. AHF leads to high ventricular pressure, which in turn leads to high pulmonary blood pressure and thus to pulmonary congestion, one of the causes of death after AAC.5 Liao et al.6 suggested that cardiogenic pneumo-edema is the main cause of postoperative death in PO mice. Additionally, arrhythmia may occur as part of the electrophysiological changes,7 and cardiomyocyte sarcomeres may become disordered during the pathological changes.8 These are all severe threats to the survival rate after AAC.

Our record of mice death times (Table 1) showed that all deaths occurred within 5 days, with a high incidence of death during the initial 24 h, which is in accordance with the aforementioned AHF criteria. In addition, there is a positive correlation between CO and BW;9,10 therefore, compared with low-weight mice, high-weight mice require more CO and, with the same aortic constriction, experience a greater increase in cardiac afterloading.
Results of the current study (Table 1) indicate that higher-weight mice had poorer tolerance for AAC, which is reflected in their mortality rates. Among mice that underwent ligation with a 0.40-mm needle, all mice in the 22-g and 26-g groups died, whereas 6 out of 10 mice survived in the 18-g group.

The diagnosis of CH usually depends on changes in cardiac function and morphology.11 Echocardiography can be performed noninvasively in vivo during the first assessment of CH, and it is especially useful for monitoring changes in cardiac function.12 We performed echocardiographic examinations of the mice at the end of weeks 4 and 8 post-surgery. The week 4 data (Table 2) showed that thickened ventricular walls, enlarged ventricular chambers, and decreased cardiac function were emerging in each AAC group compared with the sham groups, a pattern consistent with the characteristic cardiac changes that occur with chronic pressure overload.13,14 These trends became more pronounced at the end of week 8 (week 8 data in Table 2), when EF and FS, which represent cardiac function, were significantly lower than in the sham groups. CH also increased HW: in our study, the HW, HW/BW ratio, and HW/TL ratio for the AAC groups were significantly increased (Table 3). Cardiac remodeling is the most typical pathological change of CH, comprising cardiomyocyte hypertrophy and increases in the extracellular matrix.15 Our histological results showed increased external diameters and ventricular thickness in gross hearts and cross-sections under AAC (Figures 1A and B). HE staining of the AAC groups displayed the hypertrophic pathology of cardiomyocytes and nuclei (Figure 1C). PSR staining of the AAC groups displayed extensive collagen depositions (Figure 1D), particularly in the perivascular space (Figure 1E). Statistical analysis showed that the thickness of the left ventricle (Table 4) and the percentage of collagen deposition (Table 5) were significantly increased in the AAC groups compared to the sham groups. Regarding the formation of collagen, Kuwahara et al.16 indicated that cardiac fibroblasts are activated on day 3 after PO and that the neoformative fibrous tissue mainly affects diastolic rather than systolic function during the initial 4 weeks; thereafter, excessive myocardial fibrosis is implicated in systolic dysfunction because of its more intensive traction, and cardiac function begins to deteriorate significantly. The downward trends in EF and FS for the AAC groups from week 4 to week 8 (Table 2) are consistent with this theory.

Choosing the proper needle is critical for establishing the CH model. Based on these results, we found that all mice with AAC died when a 0.35-mm needle was used for ligation at all 3 weight levels and when a 0.40-mm needle was used for the 22-g and 26-g groups; therefore, these 5 weight-needle pairings were clearly unsuitable for use. The 18 g/0.40 mm group had obvious CH compared with the sham group, and its survival rate was acceptable (6 out of 10). However, it should still be excluded, because the 18 g/0.45 mm group showed more obvious CH and a higher survival rate (8 out of 10) (Tables 1 and 3). The 0.45-mm and 0.50-mm needles are usable for all 3 weight levels, and both can produce definite myocardial hypertrophy. However, the HW, HW/BW ratio, and HW/TL ratio for AAC mice were significantly higher with the 0.45-mm needle than with the 0.50-mm needle at each weight level (Table 3). Therefore, for all 3 weight levels in our study, a CH model can be established using a 0.50-mm needle without threatening the survival of the mice, whereas a 0.45-mm needle leads to a more effective CH model at the cost of higher mortality.
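The significance claims above (e.g., the HW/BW differences between needle sizes) come from the stated SPSS 17.0 workflow: a Kolmogorov-Smirnov normality check, one-way ANOVA, and post-hoc Tukey tests at p < 0.05. A minimal Python sketch of an equivalent workflow is shown below; the HW/BW arrays and group names are placeholder values for illustration, not the measured data.

```python
# Illustrative re-analysis sketch in Python (the study itself used SPSS 17.0).
# The arrays below are dummy HW/BW values (mg/g), not the study's measurements.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "sham":    np.array([4.8, 5.0, 4.9, 5.1, 4.7]),
    "AAC0.50": np.array([5.9, 6.1, 6.0, 6.3, 5.8]),
    "AAC0.45": np.array([7.0, 7.4, 7.2, 6.9, 7.3]),
}

# Normality check per group (K-S test on standardized values against a normal).
for name, x in groups.items():
    z = (x - x.mean()) / x.std(ddof=1)
    print(name, "K-S p =", round(stats.kstest(z, "norm").pvalue, 3))

# One-way ANOVA across groups, then Tukey's post-hoc pairwise comparisons.
print(stats.f_oneway(*groups.values()))
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```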
Normally, in the PO-induced CH model, thinner needles create more severe aortic stenosis and lead to more pronounced CH, and vice versa. However, we observed an interesting phenomenon: the CH level of the 18 g/0.45 mm group was unexpectedly and significantly higher than that of the 18 g/0.40 mm group (18-g mice in Table 3). Regarding the analysis of BW data with AAC (Table 6), the changes in BW in 18-g mice during weeks 0 to 8 were significantly higher than those for the 22-g and 26-g mice, indicating that 18-g mice have greater potential for physical development after surgery, and physical development is often accompanied by organ development.17 Therefore, the heart of 18-g mice also has greater development potential. At the same weight level, the BW change of the 18 g/0.45 mm group during weeks 0 to 8 was significantly higher than that of the 18 g/0.40 mm group (BW change in Table 7). As mentioned, BW is positively related to CO; therefore, the stronger ligation may have limited CO in the 18 g/0.40 mm group, which in turn limited physical development and organ development, including development of the heart. At the end of week 8, there was no significant difference in BW between the 18 g/0.45 mm and 18 g/sham groups, and both had significantly higher BW than the 18 g/0.40 mm group (BW at week 8 in Table 7). The 0.45-mm needle did not obviously limit the development of 18-g mice, but the BW advantage of the 18 g/0.45 mm group over the 18 g/0.40 mm group depends on greater CO and therefore requires more hypertrophic myocardium for support. Thus, to establish CH models with AAC in mice that still have developmental potential, such as 18-g mice, there may be a particular ligating-intensity window that causes more obvious CH than the two adjacent intensities. However, this phenomenon likely involves multiple factors and is worth further study.

In conclusion, we established CH models using 4 ligation needle sizes and 3 body weights in mice. The data showed that both the 0.45-mm and 0.50-mm needles lead to CH; however, the 0.45-mm needle produces a more effective model and causes obvious CH in 18-g mice.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions" ]
[ "Cardiomegaly", "Body Weight", "Heart Failure", "Needles/utilization", "Rats" ]
Introduction: Cardiac hypertrophy (CH) is a compensatory pathological change that is usually induced by pressure overload (PO), neurohumoral abnormality, and the effects of cytokines. It is characterized by cardiomyocyte hypertrophy and interstitial hyperplasia, and it results in an enlarged heart and thickening of the heart walls. Clinically, CH is involved in the development of many diseases, such as valvular disease, hypertension, arterial stenosis, and primary myocardial hypertrophy. If these diseases develop at their own pace, then cardiac function (CF) will gradually decompensate, leading to heart failure (HF), which severely lowers the quality of life and increases the mortality rate. Therefore, CH is a widespread concern and has been explored at the molecular level by researchers. Due to the high genomic homology between mice and humans, an established CH model for mice has been widely used in animal experiments, thereby providing an effective research foundation for CH exploration. Currently, PO-induced CH is a common way to establish the model. Abdominal aortic constriction (AAC) is highly recommended by researchers because of the high success rate and the ability to perform surgery without the need for thoracotomy or a ventilator. However, the modeling effects with different ligating intensities for certain body weights (BWs) have not yet been reported. Therefore, we used 3 frequently used mice BWs (18 g, 22 g, and 26 g) and 4 different needle sizes (0.35, 0.40, 0.45, and 0.50 mm) to establish the CH model for each weight level for AAC, summarized the survival rates, and evaluated the CH effects. Methods: Animal groups and handling One-hundred fifty male C57BL/6 wild-type mice were obtained from the Shanghai SLAC Laboratory Animal Co. Ltd (Shanghai, China). All animals were treated and cared for in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health, Washington, DC, 1996). Experimental protocols were approved by our Institutional Animal Care and Use Committee of Zhejiang University (Hangzhou, China). Mice were selected according to weights of approximately 18 g (range, 17.3-18.7 g), 22 g (range, 20.8-23.0 g), and 26 g (range, 25.1-27.0 g), and they were divided into the following 3 weight levels: 18 g (18.0 ± 0.3 g; n = 50), 22 g (22.0 ± 0.6 g; n = 50), and 26 g (26.1 ± 0.5 g; n = 50). All weight levels were divided using sortition randomization method to create a sham group (n = 10) and 4 AAC groups according to ligating intensities (0.35, 0.40, 0.45, and 0.50mm; n = 10 per group). Regarding BW, no significant differences were found among the 5 groups for each weight level (Table S1), and the preoperative BWs of mice that died and those that survived were not significant (Table S2). Mice were anesthetized with 4% chloralhydrate (0.1ml/1g BW, intraperitoneal injection). When the mice did not respond when their toe was pinched, the limbs were fixed on the operating board in the supine position and the skin was prepared by shaving and disinfection with alcohol. Sterile gauze was placed on the right side of the abdomen and a ventrimesal incision approximately 1.5 cm was created starting from the xiphoid. The skin was fixed with a spreader and the viscera was pulled out gently with a swab and placed on the gauze. Then, the abdominal aorta was isolated using a blunt dissection technique with curved microforceps under a microscope. A 6-0 silk suture was snared and pulled back around the aorta 1mm above the superior mesenteric artery. 
A 2-mm blunt acupuncture needle (external diameters: 0.35 mm, 0.40 mm, 0.45 mm, and 0.50 mm; Huatuo; Suzhou Medical Appliance Factory, Suzhou, China; criterion number GB2024-1994) was then placed next to the aorta. The suture was tied snugly around the needle and the aorta. The needle was removed immediately after ligation, the viscera were replaced, the peritoneum and skin were sutured, and the mice were allowed to recover. Aortic ligation was omitted only for the sham group. After surgery, the ears were cut to differentiate the mice. Then, mice were placed in an incubator at 30°C until they woke, and they were returned to their cages. Survival status was recorded daily. To observe the physical development of mice under different conditions, BW differences before surgery and at week 8 post-surgery were calculated as the change in BW.

Echocardiography imaging: After post-surgery weeks 4 and 8, mice were weighed, anesthetized with 4% chloral hydrate, and placed on a warming pad after skin preparation. Transthoracic 2-dimensional (2D) echocardiography was performed using the GE Vivid E9 ultrasound echocardiographic system (General Electric Company, Fairfield, CT, USA) with the GE 9L probe (8-MHz linear array transducer; General Electric Company). M-mode parasternal long-axis scans of the left ventricle at the mitral chordae level were used to quantify the interventricular septum thickness at end-diastole (IVSd) and end-systole (IVSs), left ventricular internal dimension at end-diastole (LVIDd) and end-systole (LVIDs), left ventricular posterior wall thickness at end-diastole (LVPWd) and end-systole (LVPWs), ejection fraction (EF), and fractional shortening (FS). All mice were tested using the same parameters.

Heart weight, heart weight/body weight, and heart weight/tibial length: After echocardiographic analysis at 8 weeks post-surgery, mice were sacrificed by cervical dislocation and the hearts were dissected. Then, atrial and vascular tissues were snipped carefully, leaving the ventricles. The hearts were rinsed with phosphate-buffered saline (PBS), drained by gently squeezing on absorbent paper, weighed, photographed under natural light, and fixed in 4% paraformaldehyde. The tibial lengths (TLs; mean value of the bilateral tibiae) were recorded. Heart weight (HW), BW, and TL were measured, and the HW/BW and HW/TL ratios were calculated to evaluate the hypertrophic response to PO.

Histological examination of the heart: Extracted hearts were fixed in 4% paraformaldehyde for 24 h and dehydrated. After routine histologic procedures, the hearts were embedded in paraffin and cut into 4-µm sections. Sections were stained with hematoxylin and eosin (HE) and picrosirius red (PSR). Cardiac cross-sections were captured in 20× microscopic views from HE sections; 5 thicknesses of the left ventricle in each view were selected by systematic sampling and measured using Image-Pro Plus 6.0 (Media Cybernetics, Inc., Rockville, MD, USA), and the mean values were calculated. Cardiomyocyte morphological changes were captured in 400× microscopic views from HE sections. Interstitial and/or perivascular collagen depositions were captured in 200× microscopic views under standard light. Collagen was stained red by PSR, thereby indicating fibrosis. At least 6 views were selected in a blinded manner, and each photograph was analyzed to obtain the ratio of red collagen to the entire tissue area using Image-Pro Plus 6.0; the mean values were then calculated.

Statistical Analysis: SPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA) was used for all statistical analyses. The Kolmogorov-Smirnov (K-S) test was used to verify the normality of the quantitative variables as appropriate. Data are presented as mean ± standard deviation (SD). One-way ANOVA and post-hoc Tukey tests were used to evaluate differences between groups. p < 0.05 was considered statistically significant.

Results: Excessive AAC may lead to death: We monitored mice deaths after surgery according to acute heart failure (AHF) criteria. Data (Table 1) showed that all deaths occurred within 5 days, and a high incidence of death occurred during the initial 24 h post-surgery. There were no mice deaths in the AAC 0.50-mm group or the sham group; deaths were recorded during 3 time periods (0-24 h, 24 h-3 d, and 3-5 d), 54 deaths occurred within 0-24 h post-surgery, and the total number of deaths was 65.
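Deaths were tallied per group over the three post-surgery windows with n = 10 mice per group. A minimal bookkeeping sketch of that tally is shown below; the per-window splits are placeholders except where the text fixes them (all mice in the 0.35-mm groups died, and there were no deaths in the 0.50-mm and sham groups).

```python
# Illustrative survival bookkeeping; death counts per window are placeholders
# except for the totals stated in the text for the groups shown.
N_PER_GROUP = 10

deaths_by_window = {
    "18g/0.35mm": {"0-24h": 8, "24h-3d": 1, "3-5d": 1},  # all 10 died; split assumed
    "18g/0.50mm": {"0-24h": 0, "24h-3d": 0, "3-5d": 0},  # no deaths reported
    "18g/sham":   {"0-24h": 0, "24h-3d": 0, "3-5d": 0},  # no deaths reported
}

for group, windows in deaths_by_window.items():
    dead = sum(windows.values())
    survived = N_PER_GROUP - dead
    note = f", {windows['0-24h']}/{dead} deaths within 0-24 h" if dead else ""
    print(f"{group}: survival {survived}/{N_PER_GROUP}{note}")
```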
AAC increases cardiac dimensions and reduces cardiac function: Echocardiography was performed at the end of post-operative weeks 4 and 8. At week 4 post-surgery, the data (Table 2) showed a trend of heart enlargement for mice with AAC, including thickening of the ventricular wall and an increase in chamber dilation; however, differences in EF and FS were not significant, indicating that changes in heart structure did not have a pronounced effect on cardiac function at that time point. At week 8 post-surgery, the trend of heart enlargement continued; however, the EF and FS values for the AAC groups decreased significantly. This change in cardiac function from week 4 to week 8 is consistent with systolic function beginning to be markedly affected after week 4 post-surgery (Table 2).

AAC increases HW, HW/BW, and HW/TL ratio: Generally, increased HW, HW/BW ratio, and HW/TL ratio are the three main indicators of CH. In our study, as shown in Table 3, we found that AAC significantly increased HW and caused a significantly higher HW/BW ratio and HW/TL ratio compared to the sham groups for all weight levels. The HW, HW/BW, and HW/TL values for the AAC 0.45-mm groups were significantly higher than those for the AAC 0.50-mm groups. These HW-related indices for the 18 g/0.45 mm group were even significantly higher than those for the 18 g/0.40 mm group.

AAC leads to cardiomyocyte hypertrophy and increases collagen depositions: For mice undergoing AAC surgery, the hearts demonstrated different degrees of enlargement (Figure 1A), enlargement of the papillary muscles, and thickening of the ventricular walls (Figure 1B). Wall thickening increased significantly compared with that of the sham group (Table 4). The sham groups showed normal architecture of the cardiomyocytes compared with the AAC groups. Pathological changes, including enlarged, disarrayed, and eosinophilic cardiomyocytes and cardiomyocytes rich in cytoplasm with trachychromatic and pantomorphic nuclei, were observed in each of the AAC groups (Figure 1C). Scattered collagen depositions in the interstitial and perivascular spaces were observed in the sham groups. In comparison, in some AAC groups, a larger quantity and wider range of red deposits were observed in the interstitial space (Figure 1D), and thickened collagen was observed in the perivascular space, especially in the external vascular wall (Figure 1E). Statistical analysis indicated that the AAC group had a significantly greater collagen area than the sham group (Table 5). These results imply that AAC is capable of inducing PO-induced CH and fibrosis.

AAC may restrict physical development: Analysis showed that, with AAC 0.45 mm, BW significantly increased in 18-g mice compared to 22-g and 26-g mice (Table 6), indicating that the 18-g groups had higher development potential. In the 18-g groups, the data showed that the value for 18 g/0.40 mm was significantly lower than those for the 18 g/0.45 mm and 18 g/sham groups, and that there were no significant differences between the 18 g/0.45 mm and 18 g/sham groups (Table 7), indicating that the 18 g/0.45 mm group had nearly normal physical development. Development of the 18 g/0.40 mm group was limited.

Discussion: In this study, we performed AAC at 4 different ligating intensities in mice of 3 different weight levels to evaluate the survival rates of the mice and the CH induced by PO under the different conditions. This is the first study showing that CH diversities exist among groups under different ligations and BWs. AAC is widely used to model CH induced by PO in mice. Needle ligation is usually used, and the efficiency of modeling is highly dependent on ligation intensity. Nevertheless, excessive constriction will lead to death,1 and our research findings (Table 1) demonstrated this point. In this study, a 0.35-mm needle caused the death of all mice at the 3 weight levels, and the 0.40-mm needle caused the death of all mice in the 22-g and 26-g groups. Contrarily, all mice that underwent AAC with a 0.50-mm needle or a sham operation survived. Mice in the other groups had different mortality rates. Regarding the selection of needles for the BW ranges of this study, a needle smaller than 0.35 mm in diameter caused stronger constriction and death.
However, a needle larger than 0.50 mm in diameter did not further improve the survival rate, and it reduced the efficiency of CH induction because of the reduced PO from weaker constriction. This is why we chose needles between 0.35 mm and 0.50 mm. Death can occur after AAC. Undoubtedly, AAC increases cardiac afterload. To cope with the additional biodynamic load, the heart undergoes a series of adaptive changes, including activation and hypertrophy of cardiomyocytes and hyperplasia of the extracellular matrix.2 This compensatory mechanism maintains cardiac output (CO) effectively for a period of time while maintaining the survival of the organism; it is also the basis for the establishment of the CH model. However, when the sudden increase in afterload exceeds the adaptive capacity of the cardiomyocytes, blood flow becomes restricted, resulting in acute heart failure (AHF). AHF is typically characterized by rapid changes in heart failure (HF) symptoms.3 Sato et al.4 considered the incidence of death within 5 days as an assessment criterion of AHF. AHF can moderately or markedly improve by the second day if effectively controlled. AHF leads to high ventricular pressure, which in turn raises pulmonary blood pressure and causes pulmonary congestion, one of the causes of death after AAC.5 Liao et al.6 suggested that cardiogenic pulmonary edema is the main cause of postoperative death in PO mice. Additionally, arrhythmia may occur as part of the electrophysiological changes,7 and cardiomyocyte sarcomeres may become disordered during the pathological changes.8 These are all severe threats to survival after AAC. Our record of death times (Table 1) showed that all deaths occurred within 5 days. A high incidence of death occurred during the initial 24 h, which is in accordance with the aforementioned AHF criteria. In addition, there is a positive correlation between CO and BW;9,10 therefore, compared with lighter mice, heavier mice require a higher CO and experience a greater increase in cardiac afterload under the same aortic constriction. The results of the current study (Table 1) indicate that higher-weight mice had poorer tolerance for AAC, which is reflected in their mortality rates. Regarding mice that underwent AAC with a 0.40-mm needle, all mice in the 22-g and 26-g groups died; however, 6 out of 10 mice survived in the 18-g group. The diagnosis of CH usually depends on changes in cardiac function and morphology.11 Echocardiography can be performed noninvasively in vivo during the first assessment of CH, and it is especially useful for monitoring changes in cardiac function.12 We performed echocardiographic examinations of the mice at the end of week 4 and week 8 post-surgery. The data (week 4 data in Table 2) showed that at the end of week 4, thickened ventricular walls, enlarged ventricular chambers, and decreased cardiac function were emerging in each AAC group compared with the sham groups, and this pattern was consistent with the characteristic cardiac changes that occur with chronic pressure overload.13,14 These trends became more pronounced at the end of week 8 (week 8 data in Table 2), when EF and FS, which represent cardiac function, were significantly lower than in the sham groups. CH also increased HW. In our study, the HW, HW/BW ratio, and HW/TL ratio for the AAC groups were significantly increased (Table 3).
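The EF and FS values discussed above are derived from the M-mode left ventricular dimensions summarized in Table 2. The paper does not state which formulas its echocardiography software applied, so the following is only a minimal illustration using the standard fractional-shortening definition and the Teichholz volume estimate commonly used for mouse M-mode data; the function names and example dimensions are hypothetical, not values from Table 2.

```python
# Illustrative only: standard M-mode formulas for FS and EF in mice.
# The study's ultrasound software may use different conventions.

def fractional_shortening(lvidd_mm: float, lvids_mm: float) -> float:
    """FS (%) = (LVIDd - LVIDs) / LVIDd * 100."""
    return (lvidd_mm - lvids_mm) / lvidd_mm * 100.0

def teichholz_volume_ul(lvid_mm: float) -> float:
    """Teichholz estimate of LV volume (uL) from an internal dimension in mm."""
    return 7.0 / (2.4 + lvid_mm) * lvid_mm ** 3

def ejection_fraction(lvidd_mm: float, lvids_mm: float) -> float:
    """EF (%) = (EDV - ESV) / EDV * 100 using Teichholz volumes."""
    edv = teichholz_volume_ul(lvidd_mm)
    esv = teichholz_volume_ul(lvids_mm)
    return (edv - esv) / edv * 100.0

if __name__ == "__main__":
    # Hypothetical dimensions (mm), roughly in the normal range for a mouse.
    print(f"FS = {fractional_shortening(3.8, 2.4):.1f} %")
    print(f"EF = {ejection_fraction(3.8, 2.4):.1f} %")
```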
Cardiac remodeling is the most typical pathological change of CH, including cardiomyocyte hypertrophy and increases in the extracellular matrix.15 Our histological results showed increased external diameters and ventricular wall thickness in gross hearts and cross-sections under AAC (Figures 1A and B). HE staining of the AAC groups displayed the hypertrophic pathology of cardiomyocytes and nuclei (Figure 1C). PSR staining of the AAC groups displayed extensive collagen deposition (Figure 1D), particularly in the perivascular space (Figure 1E). Statistical analysis showed that the thickness of the left ventricle (Table 4) and the percentage of collagen deposition (Table 5) were significantly increased in the AAC groups compared to the sham groups. Regarding the formation of collagen, Kuwahara et al.16 indicated that cardiac fibroblasts are activated on day 3 after PO and that the newly formed fibrous tissue mainly affects diastolic rather than systolic function during the initial 4 weeks. Thereafter, excessive myocardial fibrosis contributes to systolic dysfunction because of its more intensive traction, and cardiac function begins to deteriorate significantly. Regarding the EF and FS values for the AAC groups (Table 2), the downward trends from week 4 to week 8 conform to this theory. Choosing the proper needle is critical for establishing the CH model. Based on these results, we found that all mice with AAC died when a 0.35-mm needle was used for ligation at all 3 weight levels and when a 0.40-mm needle was used for the 22-g and 26-g groups; therefore, these 5 weight–needle pairings were clearly unsuitable for use. The 18 g/0.40 mm group had obvious CH compared with the sham group, and its survival rate was acceptable (6 out of 10). However, it should still be excluded because the 18 g/0.45 mm group showed more obvious CH and a higher survival rate (8 out of 10) (Table 1, Table 3). The 0.45-mm and 0.50-mm needles are suitable for all 3 weight levels, and both result in definite myocardial hypertrophy. However, the HW, HW/BW ratio, and HW/TL ratio values for the AAC mice were significantly higher with the 0.45-mm needle than with the 0.50-mm needle at each weight level (Table 3). Therefore, for all 3 weight levels in our study, a CH model can be established using a 0.50-mm needle without threatening the survival of the mice; however, a 0.45-mm needle leads to a more effective CH model, albeit with higher mortality, than the 0.50-mm needle. Normally, with the PO-induced CH model, thinner needles create more severe aortic stenosis and lead to more pronounced CH, and vice versa. However, we observed an interesting phenomenon: the CH level of the 18 g/0.45 mm group was, unexpectedly, significantly higher than that of the 18 g/0.40 mm group (18-g mice in Table 3). Regarding the analysis of BW data with AAC (Table 6), the changes in BW in 18-g mice during weeks 0 to 8 were significantly higher than those for the 22-g and 26-g mice, indicating that 18-g mice have greater potential for physical development after surgery, and physical development is often accompanied by organ development.17 Therefore, the heart of 18-g mice also has greater development potential. Within the same weight level, the BW change of the 18 g/0.45 mm group during weeks 0 to 8 was significantly higher than that of the 18 g/0.40 mm group (BW change in Table 7).
As mentioned, BW is positively related to CO; therefore, the tighter ligation may have limited CO in the 18 g/0.40 mm group, which in turn limited physical and organ development, including development of the heart. At the end of week 8, there was no significant difference in BW between the 18 g/0.45 mm and 18 g/sham groups, and both had significantly higher BW than the 18 g/0.40 mm group (BW at week 8 in Table 7). The 0.45-mm needle did not obviously limit development in 18-g mice, but the BW advantage of the 18 g/0.45 mm group over the 18 g/0.40 mm group depends on a greater CO and therefore requires more hypertrophic myocardium for support. Thus, to establish CH models by AAC in mice that have developmental potential, such as 18-g mice, there may be a particular range of ligation intensity that causes more obvious CH than the two adjacent ranges. However, this phenomenon likely involves multiple factors and warrants further study. Conclusion: We established CH models using 4 ligation needle sizes and 3 BW levels in mice. The data showed that both the 0.45-mm and 0.50-mm needles lead to CH; however, the 0.45-mm needle produces a more effective model and causes particularly obvious CH in 18-g mice.
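For reference, the heart weight–related indices compared throughout the Results and Discussion (Table 3) are simple ratios in the units stated in the table legend (HW in mg, HW/BW in mg/g, HW/TL in mg/cm). The short sketch below is only an illustration with invented numbers, not data from Table 3.

```python
# Illustrative only: heart weight-related indices as defined in the Table 3 legend.
def hw_indices(heart_weight_mg: float, body_weight_g: float, tibial_length_cm: float):
    """Return (HW/BW in mg/g, HW/TL in mg/cm)."""
    return heart_weight_mg / body_weight_g, heart_weight_mg / tibial_length_cm

# Hypothetical example: a 26-g mouse with a 140-mg heart and a 1.8-cm tibia.
hw_bw, hw_tl = hw_indices(140.0, 26.0, 1.8)
print(f"HW/BW = {hw_bw:.2f} mg/g, HW/TL = {hw_tl:.1f} mg/cm")
```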
Background: The cardiac hypertrophy (CH) model for mice has been widely used, thereby providing an effective research foundation for CH exploration. Methods: Four needles with different external diameters (0.35, 0.40, 0.45, and 0.50 mm) were used for abdominal aortic constriction (AAC). A total of 150 male C57BL/6 mice were selected according to body weight (BW) and divided into 3 weight levels: 18 g, 22 g, and 26 g (n = 50 in each group). Each weight level was divided into 5 groups: a sham group (n = 10) and 4 AAC groups using the 4 ligation intensities (n = 10 per group). After surgery, survival rates were recorded, echocardiography was performed, hearts were dissected and used for histological detection, and data were statistically analyzed; P < 0.05 was considered statistically significant. Results: All mice died in the following AAC groups: 18 g/0.35 mm, 22 g/0.35 mm, 26 g/0.35 mm, 22 g/0.40 mm, and 26 g/0.40 mm. All mice that underwent AAC with a 0.50-mm needle and all mice that underwent sham operation survived. Different death rates occurred in the following AAC groups: 18 g/0.40 mm, 18 g/0.45 mm, 18 g/0.50 mm, 22 g/0.45 mm, 22 g/0.50 mm, 26 g/0.45 mm, and 26 g/0.50 mm. The heart weight/body weight ratios (5.39 ± 0.85, 6.41 ± 0.68, 4.67 ± 0.37, 5.22 ± 0.42, 4.23 ± 0.28, 5.41 ± 0.14, and 4.02 ± 0.13) were significantly increased compared with those of the sham groups for mice at the same weight levels. Conclusions: A 0.45-mm needle led to more obvious CH than did the 0.40-mm and 0.50-mm needles and caused particularly pronounced CH in 18-g mice.
Introduction: Cardiac hypertrophy (CH) is a compensatory pathological change that is usually induced by pressure overload (PO), neurohumoral abnormalities, and the effects of cytokines. It is characterized by cardiomyocyte hypertrophy and interstitial hyperplasia, and it results in an enlarged heart and thickening of the heart walls. Clinically, CH is involved in the development of many diseases, such as valvular disease, hypertension, arterial stenosis, and primary myocardial hypertrophy. If these diseases are allowed to progress, cardiac function (CF) will gradually decompensate, leading to heart failure (HF), which severely lowers the quality of life and increases the mortality rate. Therefore, CH is a widespread concern and has been explored at the molecular level by researchers. Due to the high genomic homology between mice and humans, an established CH model for mice has been widely used in animal experiments, thereby providing an effective research foundation for CH exploration. Currently, PO-induced CH is a common way to establish the model. Abdominal aortic constriction (AAC) is highly recommended by researchers because of its high success rate and the ability to perform surgery without the need for thoracotomy or a ventilator. However, the modeling effects of different ligating intensities for given body weights (BWs) have not yet been reported. Therefore, we used 3 frequently used mouse BWs (18 g, 22 g, and 26 g) and 4 different needle sizes (0.35, 0.40, 0.45, and 0.50 mm) to establish the CH model with AAC for each weight level, summarized the survival rates, and evaluated the CH effects. Conclusion: We established CH models using 4 ligation needle sizes and 3 BW levels in mice. The data showed that both the 0.45-mm and 0.50-mm needles lead to CH; however, the 0.45-mm needle produces a more effective model and causes particularly obvious CH in 18-g mice.
Background: The cardiac hypertrophy (CH) model for mice has been widely used, thereby providing an effective research foundation for CH exploration. Methods: Four needles with different external diameters (0.35, 0.40, 0.45, and 0.50 mm) were used for abdominal aortic constriction (AAC). A total of 150 male C57BL/6 mice were selected according to body weight (BW) and divided into 3 weight levels: 18 g, 22 g, and 26 g (n = 50 in each group). Each weight level was divided into 5 groups: a sham group (n = 10) and 4 AAC groups using the 4 ligation intensities (n = 10 per group). After surgery, survival rates were recorded, echocardiography was performed, hearts were dissected and used for histological detection, and data were statistically analyzed; P < 0.05 was considered statistically significant. Results: All mice died in the following AAC groups: 18 g/0.35 mm, 22 g/0.35 mm, 26 g/0.35 mm, 22 g/0.40 mm, and 26 g/0.40 mm. All mice that underwent AAC with a 0.50-mm needle and all mice that underwent sham operation survived. Different death rates occurred in the following AAC groups: 18 g/0.40 mm, 18 g/0.45 mm, 18 g/0.50 mm, 22 g/0.45 mm, 22 g/0.50 mm, 26 g/0.45 mm, and 26 g/0.50 mm. The heart weight/body weight ratios (5.39 ± 0.85, 6.41 ± 0.68, 4.67 ± 0.37, 5.22 ± 0.42, 4.23 ± 0.28, 5.41 ± 0.14, and 4.02 ± 0.13) were significantly increased compared with those of the sham groups for mice at the same weight levels. Conclusions: A 0.45-mm needle led to more obvious CH than did the 0.40-mm and 0.50-mm needles and caused particularly pronounced CH in 18-g mice.
10,645
342
[ 605, 203, 128, 200, 87, 113, 369, 268, 514, 252 ]
15
[ "mice", "groups", "mm", "aac", "18", "bw", "weight", "hw", "group", "sham" ]
[ "hypertrophy cardiomyocytes hyperplasia", "myocardial hypertrophy values", "cardiomyocyte hypertrophy increases", "activation hypertrophy cardiomyocytes", "cardiac hypertrophy ch" ]
[CONTENT] Cardiomegaly | Body Weight | Heart Failure | Needles/utilization | Rats [SUMMARY]
[CONTENT] Cardiomegaly | Body Weight | Heart Failure | Needles/utilization | Rats [SUMMARY]
[CONTENT] Cardiomegaly | Body Weight | Heart Failure | Needles/utilization | Rats [SUMMARY]
[CONTENT] Cardiomegaly | Body Weight | Heart Failure | Needles/utilization | Rats [SUMMARY]
[CONTENT] Cardiomegaly | Body Weight | Heart Failure | Needles/utilization | Rats [SUMMARY]
[CONTENT] Cardiomegaly | Body Weight | Heart Failure | Needles/utilization | Rats [SUMMARY]
[CONTENT] Animals | Aorta, Abdominal | Body Weight | Cardiomegaly | Constriction | Disease Models, Animal | Echocardiography | Ligation | Male | Mice, Inbred C57BL | Needles | Random Allocation | Reference Values | Reproducibility of Results | Time Factors [SUMMARY]
[CONTENT] Animals | Aorta, Abdominal | Body Weight | Cardiomegaly | Constriction | Disease Models, Animal | Echocardiography | Ligation | Male | Mice, Inbred C57BL | Needles | Random Allocation | Reference Values | Reproducibility of Results | Time Factors [SUMMARY]
[CONTENT] Animals | Aorta, Abdominal | Body Weight | Cardiomegaly | Constriction | Disease Models, Animal | Echocardiography | Ligation | Male | Mice, Inbred C57BL | Needles | Random Allocation | Reference Values | Reproducibility of Results | Time Factors [SUMMARY]
[CONTENT] Animals | Aorta, Abdominal | Body Weight | Cardiomegaly | Constriction | Disease Models, Animal | Echocardiography | Ligation | Male | Mice, Inbred C57BL | Needles | Random Allocation | Reference Values | Reproducibility of Results | Time Factors [SUMMARY]
[CONTENT] Animals | Aorta, Abdominal | Body Weight | Cardiomegaly | Constriction | Disease Models, Animal | Echocardiography | Ligation | Male | Mice, Inbred C57BL | Needles | Random Allocation | Reference Values | Reproducibility of Results | Time Factors [SUMMARY]
[CONTENT] Animals | Aorta, Abdominal | Body Weight | Cardiomegaly | Constriction | Disease Models, Animal | Echocardiography | Ligation | Male | Mice, Inbred C57BL | Needles | Random Allocation | Reference Values | Reproducibility of Results | Time Factors [SUMMARY]
[CONTENT] hypertrophy cardiomyocytes hyperplasia | myocardial hypertrophy values | cardiomyocyte hypertrophy increases | activation hypertrophy cardiomyocytes | cardiac hypertrophy ch [SUMMARY]
[CONTENT] hypertrophy cardiomyocytes hyperplasia | myocardial hypertrophy values | cardiomyocyte hypertrophy increases | activation hypertrophy cardiomyocytes | cardiac hypertrophy ch [SUMMARY]
[CONTENT] hypertrophy cardiomyocytes hyperplasia | myocardial hypertrophy values | cardiomyocyte hypertrophy increases | activation hypertrophy cardiomyocytes | cardiac hypertrophy ch [SUMMARY]
[CONTENT] hypertrophy cardiomyocytes hyperplasia | myocardial hypertrophy values | cardiomyocyte hypertrophy increases | activation hypertrophy cardiomyocytes | cardiac hypertrophy ch [SUMMARY]
[CONTENT] hypertrophy cardiomyocytes hyperplasia | myocardial hypertrophy values | cardiomyocyte hypertrophy increases | activation hypertrophy cardiomyocytes | cardiac hypertrophy ch [SUMMARY]
[CONTENT] hypertrophy cardiomyocytes hyperplasia | myocardial hypertrophy values | cardiomyocyte hypertrophy increases | activation hypertrophy cardiomyocytes | cardiac hypertrophy ch [SUMMARY]
[CONTENT] mice | groups | mm | aac | 18 | bw | weight | hw | group | sham [SUMMARY]
[CONTENT] mice | groups | mm | aac | 18 | bw | weight | hw | group | sham [SUMMARY]
[CONTENT] mice | groups | mm | aac | 18 | bw | weight | hw | group | sham [SUMMARY]
[CONTENT] mice | groups | mm | aac | 18 | bw | weight | hw | group | sham [SUMMARY]
[CONTENT] mice | groups | mm | aac | 18 | bw | weight | hw | group | sham [SUMMARY]
[CONTENT] mice | groups | mm | aac | 18 | bw | weight | hw | group | sham [SUMMARY]
[CONTENT] ch | effects | model | hypertrophy | researchers | diseases | researchers high | establish | ch model | rate [SUMMARY]
[CONTENT] mice | placed | end | aorta | left | bw | sections | skin | fixed | views [SUMMARY]
[CONTENT] groups | aac | hw | 18 | significantly | mm | bw | sham | sections | 45 mm [SUMMARY]
[CONTENT] ch | needle | lead ch | mice data | needles lead ch | needles lead | mice data showed 45 | 45mm needle | 45mm needle brings | model causes obvious ch [SUMMARY]
[CONTENT] groups | mm | mice | 18 | hw | bw | aac | ch | end | weight [SUMMARY]
[CONTENT] groups | mm | mice | 18 | hw | bw | aac | ch | end | weight [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] Four | 0.35 | 0.40 | 0.45 | 0.50 mm | AAC ||| 150 | 3 | 18 | 22 | 26 | 50 ||| 5 | sham group | 10 | 4 | 4 | 10 ||| [SUMMARY]
[CONTENT] AAC | 18g/0.35 mm | 22 g/0.35 mm | 26 g/0.35 mm | 22 | 26 ||| AAC | 0.50-mm ||| AAC | 18 g/0.40 mm | 18 g/0.45 mm | 18 | 22 | g/45 mm | 22 | g/0.50 mm | 26 g/0.45 mm | 26 ||| 5.39 | 0.85 | 6.41 | 0.68 | 4.67 | 0.37 | 5.22 | 0.42 | 4.23 ± 0.28 | 5.41 | 0.14 | 4.02 [SUMMARY]
[CONTENT] 0.45-mm | 0.40-mm | 0.50-mm | 18 [SUMMARY]
[CONTENT] ||| Four | 0.35 | 0.40 | 0.45 | 0.50 mm | AAC ||| 150 | 3 | 18 | 22 | 26 | 50 ||| 5 | sham group | 10 | 4 | 4 | 10 ||| ||| ||| AAC | 18g/0.35 mm | 22 g/0.35 mm | 26 g/0.35 mm | 22 | 26 ||| AAC | 0.50-mm ||| AAC | 18 g/0.40 mm | 18 g/0.45 mm | 18 | 22 | g/45 mm | 22 | g/0.50 mm | 26 g/0.45 mm | 26 ||| 5.39 | 0.85 | 6.41 | 0.68 | 4.67 | 0.37 | 5.22 | 0.42 | 4.23 ± 0.28 | 5.41 | 0.14 | 4.02 ||| 0.45-mm | 0.40-mm | 0.50-mm | 18 [SUMMARY]
[CONTENT] ||| Four | 0.35 | 0.40 | 0.45 | 0.50 mm | AAC ||| 150 | 3 | 18 | 22 | 26 | 50 ||| 5 | sham group | 10 | 4 | 4 | 10 ||| ||| ||| AAC | 18g/0.35 mm | 22 g/0.35 mm | 26 g/0.35 mm | 22 | 26 ||| AAC | 0.50-mm ||| AAC | 18 g/0.40 mm | 18 g/0.45 mm | 18 | 22 | g/45 mm | 22 | g/0.50 mm | 26 g/0.45 mm | 26 ||| 5.39 | 0.85 | 6.41 | 0.68 | 4.67 | 0.37 | 5.22 | 0.42 | 4.23 ± 0.28 | 5.41 | 0.14 | 4.02 ||| 0.45-mm | 0.40-mm | 0.50-mm | 18 [SUMMARY]
Cyanidin 3-glucoside modulated cell cycle progression in liver precancerous lesion,
33911466
Cyanidin-3-O-glucoside (cyan) exhibits antioxidant and anticancer properties. Cell cycle proteins and antimitotic drugs might be promising therapeutic targets in hepatocellular carcinoma.
BACKGROUND
In vivo, rats with DEN/2-AAF-induced hepatic PCL were treated with three doses of cyan (10, 15, and 20 mg/kg/d, for four consecutive days per week for 16 wk). Blood and liver tissue samples were collected for measurement of the following: alpha fetoprotein (AFP) and liver function, and differential expression of the RNA panel was evaluated via real-time polymerase chain reaction. Histopathological examination of liver sections stained with H&E and an immunohistochemical study using glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) antibodies were also performed.
METHODS
Cyan administration mitigated the effect of DEN/2-AAF-induced PCL, decreased AFP levels, and improved liver function. Remarkably, treatment with cyan dose-dependently decreased the long non-coding RNA MALAT1 and tubulin gamma 1 mRNA expressions and increased the levels of miR-125b, all of which are involved in the cell cycle and mitotic spindle assembly. Of note, cyan decreased the GSTP foci percent area and the number of PCNA-positively stained nuclei.
RESULTS
Our results indicated that cyan could be used as a potential therapeutic agent to inhibit liver carcinogenesis in a rat model via modulation of the cell cycle.
CONCLUSION
[ "Animals", "Anthocyanins", "Diethylnitrosamine", "Female", "Glucosides", "Glutathione Transferase", "Liver", "Liver Neoplasms", "Liver Neoplasms, Experimental", "Precancerous Conditions", "Pregnancy", "Rats", "Rats, Wistar" ]
8047539
INTRODUCTION
Hepatocellular carcinoma (HCC) is the commonest primary malignancy of the liver. Its main risk factors include hepatitis B, hepatitis C, and non-alcoholic steatohepatitis. HCC is associated with high mortality and morbidity. Its incidence has increased from 1.4 to 6.2 per 100000 cases per year within the last 30 years[1]. HCC burden in Egypt is high due to the high prevalence of HCV, where liver cancers account for 11.75% of all gastrointestinal cancers and 1.68% of all malignancies. In addition, over 70% of all liver malignancies in Egypt are HCC[2]. To date, HCC is holding its place as an unresolved dilemma; its resistance to systemic chemotherapy and radiation makes the cure approachable only in very early stages according to Milan criteria, that is, where the tumor size is < 5 cm (when only one lesion is present) or < 3 cm (when 2-3 lesions are present)[3]. Genes involved in cell cycle control are usually mutated in presence of cancer. Dysregulated mitosis results in genomic instability, which leads to tumor aggressiveness[4]. Microtubules are a key component of mitotic spindles, and thus, a crucial part of the process of mitosis. They form an important target in cancer chemotherapy, inducing arrest of mitosis and cell death. On the other hand, while the overexpression of γ-tubulin (γ-TUBG) showed improved survival among small cell lung cancer patients[5], the overexpression of γ-TUBG is found to be characteristic feature of thyroid and breast cancers[6]. Such variation may arise due to differences in gene transcription and check point regulations in different cancer tissues[7]. Cyanidin-3-glucoside (cyan) is a potential chemotherapeutic and chemo-protective agent. Through its antioxidant activity, it scavenges radical oxygen species, which decreases the overall number of tumorous cells (malignant and benign) in in vivo studies[8]. Cyan has been implicated in some beneficial health activities[9], including reducing age-associated oxidative stress[10], improving cognitive brain function, and exhibiting anti-diabetic[11], anti-inflammation[12], anti-atherogenic[13], and anti-obesity activities[14]. Previously, it has been reported that cyan mediates caspase-3 cleavage with DNA fragmentation. Cyan has also been associated with induction of autophagy, a key element involved in cancer elimination, via induction of autophagy-related gene 5 and microtubule-associated protein 1 light chain 3-II[15]. Cyan has drawn increasing attention because of its potential anti-cancer properties. Cyan may offer a novel avenue for treating HCC. a potential antiproliferative effect on HepG2 cells[16]. The present study was conducted to evaluate the effect of cyan in three different doses on cell cycle progression and mitotic assembly disorder in hepatic precancerous lesion (PCL) via assessment of levels of long non-coding RNA (lncRNA) MALAT1, miR-125b, and tubulin 1. We also conducted histopathological and immuno-histochemical examination using H&E staining assessment of levels of glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) antibodies in male Wistar rat model of diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF)-induced hepatic PCL.
MATERIALS AND METHODS
Chemicals and drugs: DEN was obtained as a 1 g solution in a serum bottle diluted in 0.9% NaCl. 2-AAF was obtained as a white powder, while cyan was obtained as a black powder. Both 2-AAF and cyan were dissolved in 0.9% NaCl. All chemicals were purchased from Sigma-Aldrich, Cairo, Egypt. Experimental animals: All animal procedures were approved by the Institutional Animal Ethics Committee of Ain Shams University, Faculty of Medicine. Thirty male Wistar rats weighing 200-250 g were purchased from Nile Pharmaceuticals Company (Cairo, Egypt). Animals were housed in an animal room with a temperature of 25 ± 2 °C and a 12 h light/dark cycle. An adaptation period of 1 wk was allowed before initiation of the experimental protocol. Experimental procedures: Induction of hepatic PCL: DEN was injected intraperitoneally (i.p.) at a dose of 100 mg/kg once weekly for three consecutive weeks. One week after the last DEN injection, a single dose (300 mg/kg) of 2-AAF was injected. Animal groups: Rats were divided into five groups (6 rats/each group), including: Naïve group: Rats were injected with a vehicle (0.9% NaCl); PCL group: Rats were injected with DEN and 2-AAF; and three cyan groups (cyan-10, cyan-15, and cyan-20 groups): Rats were injected with DEN and 2-AAF and orally administered cyan through gastric gavage in doses of 10, 15, and 20 mg/kg, respectively, for four consecutive days per week for 16 wk[17]. Assessment of the effects of treatment: Biochemical and molecular studies: After sacrificing the animals, blood was withdrawn from each rat from the retro-orbital vein and incubated for about half an hour for clotting. Then, the blood sample was centrifuged for 20 min at 5000 RPM to obtain serum samples. Livers were dissected. The right lobe of the liver was removed, cut into longitudinal sections 2-4 mm in thickness, and kept in 10% formalin for histopathological and immunohistochemical examination. The other lobe was kept frozen at -80 °C for molecular analyses. The samples were analyzed for the following parameters: (1) alpha fetoprotein (AFP) levels: AFP level was analyzed using a quantitative sandwich rat AFP ELISA kit purchased from MyBiosource Inc.
(San Diego, United States) with a sensitivity range of 0.625-20 ng/mL; (2) serum alanine aminotransferase (ALT) levels: Serum ALT was measured using kits purchased from Diamond Diagnostic (Cairo, Egypt). ALT catalyzes the transfer of amino groups from specific amino acids to ketoglutaric acid, yielding glutamic acid and oxaloacetic or pyruvic acid, respectively. Ketoacids were then determined colorimetrically after their reaction with 2,4-dinitrophenylhydrazine; and (3) determination of serum total bilirubin and direct bilirubin: Alkaline methanolysis of bilirubin was followed by chloroform extraction of bilirubin methyl esters and, later, separation of these esters by chromatography and spectrophotometric determination at 430 nm. Determination of serum albumin: The serum albumin level was quantified using a rat albumin ELISA kit. The absorbance at 450 nm is a measure of the concentration of albumin in the test sample.
Bioinformatics-based selection of the cell cycle-, microtubule assembly-, and HCC-specific RNA panel: The RNA panel was chosen in three steps: (1) Tubulin, gamma 1 (TUBG1) was retrieved as a crucial player in microtubule formation and progression of the cell cycle through the gene expression atlas database (available at https://www.ebi.ac.uk/gxa/home), alongside verification of the chosen gene in the Protein Atlas database (available at https://www.proteinatlas.org/), followed by analysis of TUBG1 gene ontology through the GeneCards database (available at https://www.genecards.org/) to ensure that it is linked to the G2-M transition and mitotic spindle assembly, which are closely linked to cancer development; (2) we selected miR-125b-1-3p targeting TUBG1 mRNA through the miRWalk database (available at http://mirwalk.umm.uni-heidelberg.de/), followed by pathway enrichment through the Diana database (available at http://diana.imis.athena-innovation.gr/DianaTools/index.php), which revealed that miR-125b-1-3p is linked to MAP kinase and ubiquitin-mediated proteolysis (strongly correlated with mitotic assembly and cell cycle progression); and (3) lncRNA metastasis-associated lung adenocarcinoma transcript 1 (lncRNA MALAT1) was obtained as a lncRNA specific to HCC and targeting miR-125b-1-3p using the NONCODE database (available at http://www.noncode.org/) and the European Bioinformatics Institute database (available at https://www.ebi.ac.uk). Extraction of total RNA, including lncRNA and miRNA, from liver tissue: Total RNA was extracted from liver tissues using a miRNeasy Mini kit (Cat No. 217004, Qiagen, United States) according to the manufacturer's protocol. The extracted RNA concentration and integrity were assessed using a NanoDrop instrument (NanoDrop Technologies/Thermo Scientific, Wilmington, DE, United States), with RNA purities of 1.8-2. The extracted total RNA was reverse transcribed into cDNA using a miScript II RT kit (Cat No. 218160, 218161, Qiagen, United States) on a thermal cycler (Bio-Rad; Hercules, CA, United States), according to the manufacturer's protocol. Real time-polymerase chain reaction quantification of the RNA panel in liver tissues: The relative expression of TUBG1 mRNA and lncRNA-MALAT1 in liver tissue was measured using a Quantitect SYBR Green Master Mix kit and an RT2 SYBR Green ROX real time-polymerase chain reaction (qPCR) Mastermix (Qiagen), respectively, on a 7500 qPCR System (Applied Biosystems, Foster City, CA, United States) with specific primers (Accession: NM_001128148, NM_003234, NR_002819, and ENST00000534336, respectively) supplied by Qiagen. GAPDH (Accession NM_002046.7) was used as a housekeeping gene. miR-125b-1-3p expression in liver tissue was quantified by PCR using a miScript SYBR Green kit (Qiagen), a miScript universal primer, and a miRNA-specific forward primer (mir-125b-1-3p miScript Primer Assay) (Accession: MIMAT0004592; 5'-ACGGGUUAGGCUCUUGGGAGCU). All steps followed the manufacturer's suggested protocol, and SNORD68 was used as an internal control. Ct values greater than 36 were considered negative. The specificities of the amplicons for the SYBR Green-based PCR amplification were affirmed by the melting curves. The 2^−ΔΔCt technique was used to measure the relative expression of the HCC-specific RNA panel. Histopathological and immunohistochemical studies: Liver sections were cut at 5-μm thickness and stained with H&E. The immunohistochemistry S-P method was used to detect GSTP and PCNA levels. Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min and then incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti-GSTP or anti-PCNA antibody was added to adjacent tissue sections, respectively, and incubated overnight at 4 °C; (3) biotin-conjugated secondary antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and then 2,4-diaminobutyric acid was used for the color reaction.
The tissue sections were washed with phosphate-buffered saline (PBS; 0.01 mol/L, pH 7.4) between each step. Positive and negative controls were used simultaneously to ensure the specificity and reliability of the staining process. A known positive section was used as the positive control. In the negative control, PBS was used to replace the primary antibody. GSTP positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the foci percent area was carried out using Leica Q win V.3 software after capturing the images using a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are represented as the expression of positively stained nuclei (10 fields/slide at 400×) as follows: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States). Statistical analysis: All results were expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. An unpaired t-test was used to compare the naïve and PCL groups. One-way ANOVA, followed by Tukey's test, was carried out to compare all groups. P values < 0.05 were considered statistically significant.
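The RNA panel above is quantified with the 2^−ΔΔCt method, with GAPDH and SNORD68 as internal controls and Ct > 36 treated as negative. The sketch below is a minimal, hypothetical illustration of that calculation (it is not the authors' analysis script, and the Ct values are invented).

```python
import math

CT_CUTOFF = 36.0  # Ct values above this were treated as negative in the Methods

def relative_expression(ct_target, ct_reference, ct_target_calibrator, ct_reference_calibrator):
    """Fold change by the 2^-ddCt method: the target gene is normalized to a
    reference gene, then to a calibrator (e.g., naive-group) sample."""
    if max(ct_target, ct_reference) > CT_CUTOFF:
        return math.nan  # treated as negative / not detected
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Invented Ct values for illustration: TUBG1 vs GAPDH in one PCL-group sample,
# calibrated against the mean Ct values of the naive group.
rq = relative_expression(ct_target=24.1, ct_reference=18.0,
                         ct_target_calibrator=26.0, ct_reference_calibrator=18.2)
print(f"RQ (fold change vs naive) = {rq:.2f}")
```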
null
null
CONCLUSION
Further larger in vitro and in vivo studies are needed to elucidate the mechanism of cyanidin cytotoxicity in hepatocellular carcinoma.
[ "INTRODUCTION", "Chemicals and drugs ", "Experimental animals", "Experimental procedures", "Assessment of the effects of treatment", "Determination of serum albumin", "Bioinformatics-based selection of cell cycle-microtubule assembly- and HCC-specific RNA panel ", "Extraction of total RNA, including lncRNA and miRNA from liver tissue", "Real time-polymerase chain reaction quantification of the RNA panel in liver tissues", "Histopathological and immunohistochemical studies", "Statistical analysis", "RESULTS", "Effect on AFP levels and liver function tests", "Effect on expression of lncRNA MALAT1, miR-125b, and TUBG1 (G2/M transition of mitotic cell cycle) in the liver tissue", "Histopathological and immunohistochemical studies", "DISCUSSION", "CONCLUSION" ]
[ "Hepatocellular carcinoma (HCC) is the commonest primary malignancy of the liver. Its main risk factors include hepatitis B, hepatitis C, and non-alcoholic steatohepatitis. HCC is associated with high mortality and morbidity. Its incidence has increased from 1.4 to 6.2 per 100000 cases per year within the last 30 years[1]. HCC burden in Egypt is high due to the high prevalence of HCV, where liver cancers account for 11.75% of all gastrointestinal cancers and 1.68% of all malignancies. In addition, over 70% of all liver malignancies in Egypt are HCC[2].\nTo date, HCC is holding its place as an unresolved dilemma; its resistance to systemic chemotherapy and radiation makes the cure approachable only in very early stages according to Milan criteria, that is, where the tumor size is < 5 cm (when only one lesion is present) or < 3 cm (when 2-3 lesions are present)[3].\nGenes involved in cell cycle control are usually mutated in presence of cancer. Dysregulated mitosis results in genomic instability, which leads to tumor aggressiveness[4]. Microtubules are a key component of mitotic spindles, and thus, a crucial part of the process of mitosis. They form an important target in cancer chemotherapy, inducing arrest of mitosis and cell death. On the other hand, while the overexpression of γ-tubulin (γ-TUBG) showed improved survival among small cell lung cancer patients[5], the overexpression of γ-TUBG is found to be characteristic feature of thyroid and breast cancers[6]. Such variation may arise due to differences in gene transcription and check point regulations in different cancer tissues[7].\nCyanidin-3-glucoside (cyan) is a potential chemotherapeutic and chemo-protective agent. Through its antioxidant activity, it scavenges radical oxygen species, which decreases the overall number of tumorous cells (malignant and benign) in in vivo studies[8]. Cyan has been implicated in some beneficial health activities[9], including reducing age-associated oxidative stress[10], improving cognitive brain function, and exhibiting anti-diabetic[11], anti-inflammation[12], anti-atherogenic[13], and anti-obesity activities[14]. Previously, it has been reported that cyan mediates caspase-3 cleavage with DNA fragmentation. Cyan has also been associated with induction of autophagy, a key element involved in cancer elimination, via induction of autophagy-related gene 5 and microtubule-associated protein 1 light chain 3-II[15]. Cyan has drawn increasing attention because of its potential anti-cancer properties. Cyan may offer a novel avenue for treating HCC. a potential antiproliferative effect on HepG2 cells[16].\nThe present study was conducted to evaluate the effect of cyan in three different doses on cell cycle progression and mitotic assembly disorder in hepatic precancerous lesion (PCL) via assessment of levels of long non-coding RNA (lncRNA) MALAT1, miR-125b, and tubulin 1. We also conducted histopathological and immuno-histochemical examination using H&E staining assessment of levels of glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) antibodies in male Wistar rat model of diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF)-induced hepatic PCL. ", "DEN was obtained as 1 g solution in serum bottle diluted in 0.9% NaCl. 2-AAF was obtained as white powder, while cyan was obtained as black powder. Both 2-AAF and cyan were dissolved in 0.9% NaCl. 
All chemicals were purchased from Sigma-Aldrich, Cairo, Egypt.", "All animal procedures were approved by the Institutional Animal Ethics Committee of Ain Shams University, Faculty of Medicine. Thirty male Wistar rats weighing 200-250 g were purchased from Nile Pharmaceuticals Company (Cairo, Egypt). Animals were housed in an animal room with temperature 25 ± 2 °C and 12 h light/dark cycle controls. An adaptation period of 1 wk was allowed before initiation of the experimental protocol.", "\nInduction of hepatic PCL: DEN was injected intraperitoneally (i.p.) at a dose of 100 mg/kg once weekly for three consecutive weeks. One week after the last DEN injection, a single dose (300 mg/kg) of 2-AAF was injected.\n\nAnimal groups: Rats were divided into five groups (6 rats/each group), including: Naïve group: Rats were injected with a vehicle (0.9% NaCl), PCL group: Rats were injected with DEN and 2-AAF, and three cyan groups (cyan-10, cyan-15, and cyan-20 groups): Rats were injected with DEN and 2-AAF, and orally administered through gastric gavage with cyan in doses of 10, 15, and 20 mg/kg respectively, for four consecutive days per week for 16 wk[17].", "\nBiochemical and molecular studies: After scarifying the animals, blood was withdrawn from each rat from the retro orbital vein and incubated for about half an hour for clotting. Then, the blood sample was centrifuged for 20 min at 5000 RPM to get serum samples. Livers were dissected. The right lobe of the livers was removed, cut into longitudinal sections 2-4 mm in thickness and kept in 10% formalin for histopathological and immunohistochemical examination. The other lobe was kept frozen at -80 °C for molecular analyses. The samples were analyzed for the following parameters: (1) alpha fetoprotein (AFP) levels: AFP level was analyzed using quantitative sandwich rat AFP ELISA kit purchased from MyBiosource Inc. (San Diego, United States) with a sensitivity range of 0.625-20 ng/mL; (2) serum alanine aminotransferase (ALT) levels: The serum ALT was measured according to method described by using kits purchased from Diamond Diagnostic (Cairo, Egypt). ALT catalyzes the transfer of amino groups from specific amino acids to ketoglutaric acid, yielding glutamic acid and oxaloacetic or pyruvic acid, respectively. Ketoacids were then determined calorimetrically after their reaction with 2, 4- dinitrophenylhydrazine; and (3) determination of serum total bilirubin and direct bilirubin: Alkaline methanolysis of bilirubin followed by chloroform extraction of bilirubin methyl esters, and later, separation of these esters by chromatography and spectrophotometric determination at 430 nm.", "Quantitative determination of serum albumin level was quantified using rat albumin ELISA kit. 
The absorbance at 450 nm is a measure of the concentration of albumin in the test sample.", "The RNA panel was chosen in three steps: (1) Tubulin, gamma 1 (TUBG1) was retrieved as a crucial player in microtubule formation and progression of the cell cycle through gene atlas expression database (available at https://www.ebi.ac.uk/gxa/home) alongside with verification of the chosen gene from protein Atlas database (available at https://www.proteinatlas.org/), followed by analysis of TUBG1 gene ontology to ensure that it is linked to G2-M transition and mitotic spindle assembly, which is closely linked to cancer development through gene card database (available at https://www.genecards.org/); (2) we selected miR-125b-1-3p targeting TUBG1 mRNA though) miRWALK database available at http://mirwalk.umm.uni-heidelberg.de/), followed by pathway enrichment, which revealed that miR-125b-1-3p is linked to MAP kinase and ubiquitin-mediated proteolysis (strongly correlated with mitotic assembly and cell cycle progression) through (Diana database available at http://diana.imis.athena-innovation.gr/DianaTools/index.php); and (3) lncRNA metastasis-associated lung adenocarcinoma transcript 1, lncRNA MALAT1, was obtained as a lncRNA specific to HCC and targeting miR-125b-1-3p using noncode database available at http://www.noncode.org/ and European Bioinformatics institute database available at https://www.ebi.ac.uk.", "Total RNA was extracted from liver tissues using miRNeasy Mini kit (Cat No. 217004, Qiagen, United States) according to manufacturer’s protocol. The extracted RNA concentration and integrity were assessed using [(NanoDrop Technologies/Thermo Scientific, Wilmington, DE, United States)], with RNA purities were 1.8-2. The extracted total RNA was reverse transcribed into cDNA using a miScript II RT kit (Cat No. 218160, 218161, Qiagen, United States) on thermal cycler (Bio-Rad; Hercules, CA, United States), according to the manufacturer’s protocol. ", "The relative expression of TUBG1 mRNA and lncRNA-MALAT1 in liver tissue were measured using a Quantitect SYBR Green Master Mix kit and an RT2 SYBR Green ROX real time-polymerase chain reaction (qPCR) Mastermix (Qiagen), respectively, using a 7500 qPCR Systems (Applied Biosystems, Foster City, CA, United States) detection system) and specific primers (Accession: NM_001128148, NM_003234, NR_002819, and ENST00000534336, respectively) supplied by Qiagen. GAPDH (Accession NM_002046.7) was used as a housekeeping gene. MiR-3163 expression in liver tissue was quantified by PCR using a miScript SYBR Green kit (Qiagen), a miScript universal primer, and a miRNA-specific forward primer (mir-125b-1-3p miScript Primer Assay) (Accession: MIMAT0004592 (5'-ACGGGUUAGGCUCUUGGGAGCU). All steps followed the manufacturer’s suggested protocol, and SNORD68 was used as an internal control. Ct values more than 36 were considered as negative. The specificities of the amplicons for the SYBR Green−based PCR amplification were affirmed by the melting curves. The 2−ΔΔCt technique was used to measure the relative expression of the HCC-specific RNA panel.", "Liver sections were cut at 5-μm thickness and stained with H&E. Immunohistochemistry S-P method was used to detect GSTP and PCNA levels. 
Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min, and then, incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti GSTP or PCNA antibody was added to adjacent tissue sections respectively and incubated overnight at 4 °C; (3) biotin-conjugated second antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and then, 2,4-diaminobutyric acid was used for the color reaction. The tissue sections were washed with Poly (butylene succinate) (PBS) (0.01 mol/L, pH 7.4) between each step. Positive and negative controls were simultaneously used to ensure specificity and reliability of the staining process. A positive section was taken as positive control. In negative control, PBS was used to replace the first antibody. GSTP-positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the foci percent area was carried out using Leica Q win V.3 software after capturing the images using a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are represented as the expression of positively-stained nuclei (10 fields/slide at 400 ×) as shown below: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States).", "All results were expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. Unpaired t-test was done to compare between naïve and PCL groups. One-way ANOVA, followed by Tukey’s test, was carried out to compare between all groups. P values < 0.05 were considered statistically significant. ", "Effect on AFP levels and liver function tests As shown in Figure 1, following induction of PCL, rats exhibited a significant increase in AFP levels. AFP levels were significantly decreased in cyan-10, cyan-15, and cyan-20 groups compared to PCL group. There was no significant difference between the AFP levels of the three cyan groups. Liver function tests were affected after receiving DEN/2-AAF in which rats exhibited a significant increase in levels of ALT and total and direct bilirubin, and a significant decrease in the serum albumin (Figure 2). As shown in Figure 2A, rats received cyan. in different doses exhibited a significant decrease in serum ALT compared to PCL group. Cyan-15 group showed significant decrease in serum ALT compared to cyan-10 group. Moreover, cyan-20 group showed significant decrease in serum ALT compared to cyan-10 group. In As depicted in Figure 2B, compared to PCL group, the rats in the three cyan groups exhibited a significant decrease in serum total bilirubin. Furthermore, cyan-20 group showed significant decrease in serum total bilirubin compared to cyan-10 group. Figure 2C: Serum direct bilirubin was significantly decreased in the three cyan groups compared to the PCL group. Moreover, cyan-20 group showed significant decrease in serum direct bilirubin compared to cyan-10 group. Administration of cyan in doses 5, 10 and 20 mg/kg/d produced a significant increase in serum albumin compared to PCL group. 
However, there was no significant difference between the serum albumin levels of the three cyan groups (Figure 2D).\n\nEffect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) on alpha fetoprotein in the serum of rats. Values are mean ± SEM; number of animals = 6 rats/each group. aP < 0.05 significant differences compared to naïve group, unpaired t test; bP < 0.05 significant differences compared to precancerous lesion group. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; AFP: Alpha fetoprotein.\n\nEffect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) in rats. A: Alanine aminotransferase; B: Total bilirubin; C: Direct bilirubin; D: Serum albumin. Values are mean ± SEM; number of animals = 6 rats/each group. bP < 0.05 significant differences compared to precancerous lesion group; cP < 0.05 significant differences between the 2 selected groups, One-way ANOVA followed by Tukey’s test. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; ALT: Alanine aminotransferase.\nAs shown in Figure 1, following induction of PCL, rats exhibited a significant increase in AFP levels. AFP levels were significantly decreased in cyan-10, cyan-15, and cyan-20 groups compared to PCL group. There was no significant difference between the AFP levels of the three cyan groups. Liver function tests were affected after receiving DEN/2-AAF in which rats exhibited a significant increase in levels of ALT and total and direct bilirubin, and a significant decrease in the serum albumin (Figure 2). As shown in Figure 2A, rats received cyan. in different doses exhibited a significant decrease in serum ALT compared to PCL group. Cyan-15 group showed significant decrease in serum ALT compared to cyan-10 group. Moreover, cyan-20 group showed significant decrease in serum ALT compared to cyan-10 group. In As depicted in Figure 2B, compared to PCL group, the rats in the three cyan groups exhibited a significant decrease in serum total bilirubin. Furthermore, cyan-20 group showed significant decrease in serum total bilirubin compared to cyan-10 group. Figure 2C: Serum direct bilirubin was significantly decreased in the three cyan groups compared to the PCL group. Moreover, cyan-20 group showed significant decrease in serum direct bilirubin compared to cyan-10 group. Administration of cyan in doses 5, 10 and 20 mg/kg/d produced a significant increase in serum albumin compared to PCL group. However, there was no significant difference between the serum albumin levels of the three cyan groups (Figure 2D).\n\nEffect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) on alpha fetoprotein in the serum of rats. Values are mean ± SEM; number of animals = 6 rats/each group. aP < 0.05 significant differences compared to naïve group, unpaired t test; bP < 0.05 significant differences compared to precancerous lesion group. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; AFP: Alpha fetoprotein.\n\nEffect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) in rats. A: Alanine aminotransferase; B: Total bilirubin; C: Direct bilirubin; D: Serum albumin. Values are mean ± SEM; number of animals = 6 rats/each group. bP < 0.05 significant differences compared to precancerous lesion group; cP < 0.05 significant differences between the 2 selected groups, One-way ANOVA followed by Tukey’s test. 
Effect on expression of lncRNA MALAT1, miR-125b, and TUBG1 (G2/M transition of the mitotic cell cycle) in the liver tissue
As shown in Figure 3, induction of PCL resulted in a significant increase in the relative quantity (RQ) of lncRNA MALAT1, a significant decrease in the RQ of miR-125b, and a significant increase in the RQ of TUBG1 in comparison to the naïve group. The RQ of lncRNA MALAT1 in the liver tissue was significantly decreased in the cyan-10, cyan-15 and cyan-20 groups compared to the PCL group; furthermore, the cyan-20 group showed a significant decrease in the RQ of lncRNA MALAT1 compared to the cyan-10 group (Figure 3A). On the contrary, the RQ of miR-125b in the liver tissue was significantly increased in the cyan-10, cyan-15 and cyan-20 groups compared to the PCL group; in addition, the cyan-20 group showed a significant increase in the RQ of miR-125b compared to the cyan-10 group (Figure 3B). Furthermore, the RQ of TUBG1 in the liver tissue was significantly decreased in the cyan-10, cyan-15 and cyan-20 groups compared to the PCL group, and the cyan-20 group showed a significant decrease in the RQ of TUBG1 compared to the cyan-10 group (Figure 3C).

Figure 3: Effect on expression of long non-coding RNA MALAT1, miR-125b and tubulin gamma 1 (G2/M transition of the mitotic cell cycle) in the liver tissue. A: RQ of long non-coding RNA MALAT1; B: RQ of miR-125b; C: RQ of TUBG1. Values are mean ± SEM; number of animals = 6 rats/each group. aP < 0.05 significant differences compared to naïve group; bP < 0.05 significant differences compared to precancerous lesion group; cP < 0.05 significant differences between the 2 selected groups, One-way ANOVA followed by Tukey's test. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; TUBG: Tubulin gamma 1; lncRNA: Long non-coding RNA.
Histopathological and immunohistochemical studies
Light microscopic examination of H&E-stained sections of naïve rats' liver showed normal hepatic architecture: cords of hepatocytes radiating from the central vein, portal triads present in between, polygonal hepatocytes with central rounded vesicular nuclei, and hepatic sinusoids in between (Figure 4A and B). Liver sections obtained from rats that received DEN/2-AAF (Figure 4C-E) showed disruption of the normal hepatic lobular architecture along with large, well-demarcated dysplastic nodules compressing the surrounding liver tissue, with high-grade dysplastic cells displaying an increased nuclear:cytoplasmic ratio (Figure 4E), nuclear hyperchromatosis, and basophilic cytoplasm. Liver sections of rats treated with the different doses of cyanidin (10, 15, and 20 mg/kg) showed small and less well-demarcated dysplastic nodules (Figure 4F-H).

Figure 4: Photomicrographs of liver sections stained with H&E. A and B: Naïve group liver sections showed normal hepatic architecture, cords of hepatocytes radiating from the central vein with portal triads present in between, polygonal hepatocytes with central rounded vesicular nuclei, and hepatic sinusoids in between; C-E: Liver sections of rats that received diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF) showed larger, well-demarcated dysplastic nodules (dotted shapes) compressing the surrounding liver tissue, with disruption of the normal hepatic lobular architecture; D: Liver sections of rats that received DEN/2-AAF showed eosinophilic foci of cellular alteration consisting of enlarged hepatocytes with increased acidophilic staining and vacuolated nuclei (arrowhead); E: Liver sections of rats that received DEN/2-AAF showed clear cell foci formed of hepatocytes with variable degrees of cytoplasmic vacuolation and ballooning with pyknotic nuclei (arrow); F-H: Liver sections of rats treated with the different doses of cyanidin (10, 15, 20 mg/kg), respectively, showing small and less well-demarcated dysplastic nodules (dotted shapes). (Magnification: A, C, F, G, H × 1000; B, D, E × 400). CV: Central vein; PV: Portal vein; Pt: Portal triads; S: Sinusoid; H: Hepatocytes.

Immunohistochemical staining with the GSTP antibody demonstrated large hyperplastic nodules by their brown coloration. Figure 5A and B show the liver sections of the naïve group, while Figure 5C shows the large, positively stained GSTP foci demonstrated in the PCL group. Liver sections from rats of all cyan-treated groups exhibited small GSTP-positive hepatic foci of different sizes scattered in between negatively stained hepatic parenchyma (Figure 5D-F). The percent area of GSTP foci was significantly increased in the PCL group compared to the naïve group, while all cyan-treated groups showed a significant decrease in the GSTP percent area. Furthermore, cyan-15 and cyan-20 showed a significant decrease in the GSTP percent area compared to cyan-10 (Table 1). PCNA immunohistochemical analysis showed an elevated expression of PCNA in the group that received DEN/2-AAF compared to the naïve group (Figure 6A and B). Liver sections obtained from rats that received cyan at its three doses showed a decrease in PCNA-positively stained nuclei (Figure 6C-E and Table 1).
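The percent area of GSTP-positive foci summarized in Table 1 was measured with the Leica Q win software; as a purely illustrative sketch of what that quantity represents, the same percentage can be computed from a binary positive/negative mask (the mask below is hypothetical, not derived from the study images):

import numpy as np

def percent_area(mask):
    """Percent of the field occupied by positive (True) pixels in a binary mask."""
    mask = np.asarray(mask, dtype=bool)
    return 100.0 * mask.sum() / mask.size

# Hypothetical 200 x 200 field in which a single 40 x 40 focus stains positive.
field = np.zeros((200, 200), dtype=bool)
field[80:120, 80:120] = True
print(f"GSTP-positive area: {percent_area(field):.1f}%")  # 4.0%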
Figure 5: Photomicrographs of rat liver sections immunohistochemically stained with glutathione S-transferase placental antibody. A and B: Naïve group; C: Precancerous lesion group showing multiple glutathione S-transferase placental (GSTP)-positive large hepatic nodules (brown-stained nodules) occupying most of the section; D-F: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg) showing GSTP-positive small hepatic foci (brown-stained cells, arrow) of different sizes scattered in between negatively stained hepatic parenchyma. (Magnification: × 40).

Figure 6: Photomicrographs of liver sections immunohistochemically stained with proliferating cell nuclear antigen. A: Negative reaction of the control group; B: Diethylnitrosamine + 100 mg 2-acetylaminofluorene group showing positively stained nuclei scattered all over the field; C-E: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg), respectively, showing few positive hepatocytes sporadically distributed over the field (arrow). (Magnification: × 100).

Table 1: Expression rate of hepatocytes positive for glutathione S-transferase placental and proliferating cell nuclear antigen, calculated as the number of fields with positive expression out of 10 fields per rat liver tissue. Values are mean ± SD; number of animals = 6 rats/each group; 10 fields examined per liver section. aP < 0.05 significant differences compared to naïve group; bP < 0.05 significant differences compared to precancerous lesion group; cP < 0.05 significant differences compared to cyanidin-3-O-glucoside-10, One-way ANOVA followed by Tukey's test. +: Positive expression found in 1-3 fields; ++: Positive expression found in 4-6 fields; +++: Positive expression found in 7-10 fields. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen.
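The +/++/+++ grades in Table 1 map directly onto the count of positive fields out of the 10 examined per section; a small illustrative sketch of that mapping follows (the handling of zero positive fields is an assumption, since the scale above starts at 1-3 fields):

def pcna_grade(positive_fields, examined_fields=10):
    """Map the number of PCNA-positive fields (out of 10) to the +/++/+++ scale."""
    if not 0 <= positive_fields <= examined_fields:
        raise ValueError("positive_fields must be between 0 and examined_fields")
    if positive_fields == 0:
        return "-"          # no positive fields; not defined by the scale above
    if positive_fields <= 3:
        return "+"
    if positive_fields <= 6:
        return "++"
    return "+++"

print(pcna_grade(5))  # "++"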
DISCUSSION
Altered expression of mitotic spindle assembly genes has been detected in many cancers. For example, MAD2 gene levels are downregulated in breast carcinoma[18], while mitotic checkpoint serine/threonine kinase BUB1 gene expression is dysregulated in colorectal carcinoma[19]. Impairment of spindle assembly genes is frequently detected in HCC[20,21]. Moreover, inhibition of the mitotic assembly genes has been found to be lethal to cancer cells and has a promising therapeutic effect in cancer treatment[22,23].
In this study, we used several public microarray databases to construct a simple genetic-epigenetic network specific to HCC and linked to the cell cycle and mitotic spindle formation. We assessed the antiproliferative effect of cyan in an HCC animal model via modulation of the lncRNA MALAT1-miR-125b-TUBG1 mRNA axis.

Microtubules are involved both in the maintenance of cell shape and motility and in mitotic spindle formation[24]. γ-TUBG is a major player that supports microtubule nucleation. γ-TUBG acts as a binding site for the α/β-tubulin dimer, and mammals have two γ-TUBG genes, TUBG1 and TUBG2[25]. TUBG levels in the centrosome change during cell cycle progression and usually increase at the start of mitosis; at the end of mitosis, TUBG levels decrease to interphase levels[26]. Overexpression and altered compartmentalization of γ-TUBG may lead to carcinogenesis[27-29]. Interestingly, γ-TUBG interacts with the tumor suppressor protein C53 in mammalian interphase cells of different cellular origins[30]. γ-TUBG also interacts with E2F transcription factors, resulting in upregulation of the E2F target genes PCNA and CDKB1[31] and of glutathione S-transferase activity[32]. On the other hand, the mRNA and protein levels of TUBA1C are upregulated in HCC. Clinically, TUBA1C affects patient survival and the level of metastatic burden along with portal vein thrombosis. In addition, it activates cellular proliferation and migration and is a potential therapeutic target for HCC[33]. Recent studies have shed light on the efficacy of microtubule-binding agents in the treatment of HCC, especially in combination with mammalian target of rapamycin inhibitors[34].

One of the few miRNAs that regulate the spindle assembly checkpoint (SAC) is miR-125b. It suppresses the expression of Mad1 (a core SAC protein), which is responsible for inhibiting entry into anaphase until metaphase defects are corrected[35]. Importantly, miR-125b has been found to be associated with colorectal cancer[36], multiple myeloma[37], and breast cancer[38]. Song et al[39] demonstrated that the expression of miR-125b in hepatocytes was protective and reflected better survival in HCC tissue samples due to restoration of SIRT6 expression[39]. miR-125b-5p inhibits HCC proliferation by targeting TXNRD1, the terminal step of nucleotide biosynthesis[40].

Deregulation of lncRNAs has been shown to be crucial in cancer progression, especially in HCC[41,42]. Hung et al[43] demonstrated the involvement of lncRNAs in the regulation of the expression of cell cycle-related genes, the p53 gene regulatory pathway, and apoptosis[43,44]. The lncRNA MALAT1 is overexpressed in several solid tumors and is linked with tumor recurrence[45-47]. MALAT1 coordinates RNA polymerase II transcription, pre-mRNA splicing, and mRNA export[48]. Thus, this lncRNA holds great potential to influence the local concentration of a specific splicing factor during specific stages of the cell cycle. Depletion of MALAT1 in cancer cells inhibits tumorigenicity[49]. MALAT1 regulates E2F1 transcription factor activity, highlighting its pro-proliferative function[50]. MALAT1 is upregulated in many cancer types, including breast cancer[51], cervical cancer, lung cancer[52], and hepatocarcinoma[53-55]. In a previous study, hypoxia enhanced MALAT1 expression, leading to increased proliferative and invasive activity of Hep3B cells due to a negative interaction with miR-200a[56].
Huang and his group explored the role of specificity protein 1/3 in the regulation of MALAT1 expression in HCC cells[57].

Cyanidin, an anthocyanin (a polyphenolic pigment found in plants), confers several pharmacological benefits, including anticancer properties[58-62]. Cyan mediates cytotoxicity against human monocytic leukemia cells via G2/M phase arrest and induction of apoptosis[63]. Tsai et al[64] evaluated anthocyanins (HAs) as promising anticancer or chemopreventive agents, showing that HAs induce G2/M cell cycle arrest in leukemia cells via modulation of the ATM-Chk1/2-Cdc25C axis[64]. Anthocyanins have also been found to inhibit the proliferation of HCC cells[65,66].

In our study, the effect of cyan on the precancerous tissue was dose dependent (10, 15 and 20 mg/kg/d); that is, its effect increased with increasing dose. This was also supported by the correlated significant decreases in the percent area of GSTP foci and in PCNA expression, which are targets of TUBG1 mRNA, and by the changes in the molecular epigenetic markers associated with HCC (lncRNA MALAT1 and miR-125b-1-3p) in the liver tissue of rats (Figure 7). Thus, we can hypothesize that administration of cyan in PCL may result in downregulation of lncRNA MALAT1, with subsequent release of free miR-125b-1-3p that binds to TUBG1 mRNA and downregulates its expression[65]. The changes in this epigenetic-genetic network affect its target genes PCNA and GSTP.

Figure 7: Concept map of study design. GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen; TUBG: Tubulin; lncRNA: Long non-coding RNA.

Lastly, the cyan-mediated modulation of the cell cycle and mitotic spindle assembly that regulate HCC proliferation might affect the capability of HCC cells to overcome oxidative stress[66,67]. Our data provide a rationale for designing antimitotic approaches that combine conventional cytotoxic drugs with phytochemical extracts to inhibit cell cycle progression in cancer. More in vitro functional studies are required to explore the functional mechanism of the chosen RNA panel and to validate its role as a drug target biomarker.

CONCLUSION
Cyanidin is a natural molecule that holds great potential for unraveling the mystery of cytotoxic pharmacy in the future. It can be used to devise novel antimitotic drugs that target the cell cycle in HCC. Our results also indicate the potential of the TUBG1 mRNA-miR-125b-1-3p-lncRNA MALAT1 axis as a drug target biomarker for further investigation.
[ "INTRODUCTION", "MATERIALS AND METHODS", "Chemicals and drugs ", "Experimental animals", "Experimental procedures", "Assessment of the effects of treatment", "Determination of serum albumin", "Bioinformatics-based selection of cell cycle-microtubule assembly- and HCC-specific RNA panel ", "Extraction of total RNA, including lncRNA and miRNA from liver tissue", "Real time-polymerase chain reaction quantification of the RNA panel in liver tissues", "Histopathological and immunohistochemical studies", "Statistical analysis", "RESULTS", "Effect on AFP levels and liver function tests", "Effect on expression of lncRNA MALAT1, miR-125b, and TUBG1 (G2/M transition of mitotic cell cycle) in the liver tissue", "Histopathological and immunohistochemical studies", "DISCUSSION", "CONCLUSION" ]
[ "Hepatocellular carcinoma (HCC) is the commonest primary malignancy of the liver. Its main risk factors include hepatitis B, hepatitis C, and non-alcoholic steatohepatitis. HCC is associated with high mortality and morbidity. Its incidence has increased from 1.4 to 6.2 per 100000 cases per year within the last 30 years[1]. HCC burden in Egypt is high due to the high prevalence of HCV, where liver cancers account for 11.75% of all gastrointestinal cancers and 1.68% of all malignancies. In addition, over 70% of all liver malignancies in Egypt are HCC[2].\nTo date, HCC is holding its place as an unresolved dilemma; its resistance to systemic chemotherapy and radiation makes the cure approachable only in very early stages according to Milan criteria, that is, where the tumor size is < 5 cm (when only one lesion is present) or < 3 cm (when 2-3 lesions are present)[3].\nGenes involved in cell cycle control are usually mutated in presence of cancer. Dysregulated mitosis results in genomic instability, which leads to tumor aggressiveness[4]. Microtubules are a key component of mitotic spindles, and thus, a crucial part of the process of mitosis. They form an important target in cancer chemotherapy, inducing arrest of mitosis and cell death. On the other hand, while the overexpression of γ-tubulin (γ-TUBG) showed improved survival among small cell lung cancer patients[5], the overexpression of γ-TUBG is found to be characteristic feature of thyroid and breast cancers[6]. Such variation may arise due to differences in gene transcription and check point regulations in different cancer tissues[7].\nCyanidin-3-glucoside (cyan) is a potential chemotherapeutic and chemo-protective agent. Through its antioxidant activity, it scavenges radical oxygen species, which decreases the overall number of tumorous cells (malignant and benign) in in vivo studies[8]. Cyan has been implicated in some beneficial health activities[9], including reducing age-associated oxidative stress[10], improving cognitive brain function, and exhibiting anti-diabetic[11], anti-inflammation[12], anti-atherogenic[13], and anti-obesity activities[14]. Previously, it has been reported that cyan mediates caspase-3 cleavage with DNA fragmentation. Cyan has also been associated with induction of autophagy, a key element involved in cancer elimination, via induction of autophagy-related gene 5 and microtubule-associated protein 1 light chain 3-II[15]. Cyan has drawn increasing attention because of its potential anti-cancer properties. Cyan may offer a novel avenue for treating HCC. a potential antiproliferative effect on HepG2 cells[16].\nThe present study was conducted to evaluate the effect of cyan in three different doses on cell cycle progression and mitotic assembly disorder in hepatic precancerous lesion (PCL) via assessment of levels of long non-coding RNA (lncRNA) MALAT1, miR-125b, and tubulin 1. We also conducted histopathological and immuno-histochemical examination using H&E staining assessment of levels of glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) antibodies in male Wistar rat model of diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF)-induced hepatic PCL. ", "Chemicals and drugs DEN was obtained as 1 g solution in serum bottle diluted in 0.9% NaCl. 2-AAF was obtained as white powder, while cyan was obtained as black powder. Both 2-AAF and cyan were dissolved in 0.9% NaCl. 
All chemicals were purchased from Sigma-Aldrich, Cairo, Egypt.\nDEN was obtained as 1 g solution in serum bottle diluted in 0.9% NaCl. 2-AAF was obtained as white powder, while cyan was obtained as black powder. Both 2-AAF and cyan were dissolved in 0.9% NaCl. All chemicals were purchased from Sigma-Aldrich, Cairo, Egypt.\nExperimental animals All animal procedures were approved by the Institutional Animal Ethics Committee of Ain Shams University, Faculty of Medicine. Thirty male Wistar rats weighing 200-250 g were purchased from Nile Pharmaceuticals Company (Cairo, Egypt). Animals were housed in an animal room with temperature 25 ± 2 °C and 12 h light/dark cycle controls. An adaptation period of 1 wk was allowed before initiation of the experimental protocol.\nAll animal procedures were approved by the Institutional Animal Ethics Committee of Ain Shams University, Faculty of Medicine. Thirty male Wistar rats weighing 200-250 g were purchased from Nile Pharmaceuticals Company (Cairo, Egypt). Animals were housed in an animal room with temperature 25 ± 2 °C and 12 h light/dark cycle controls. An adaptation period of 1 wk was allowed before initiation of the experimental protocol.\nExperimental procedures \nInduction of hepatic PCL: DEN was injected intraperitoneally (i.p.) at a dose of 100 mg/kg once weekly for three consecutive weeks. One week after the last DEN injection, a single dose (300 mg/kg) of 2-AAF was injected.\n\nAnimal groups: Rats were divided into five groups (6 rats/each group), including: Naïve group: Rats were injected with a vehicle (0.9% NaCl), PCL group: Rats were injected with DEN and 2-AAF, and three cyan groups (cyan-10, cyan-15, and cyan-20 groups): Rats were injected with DEN and 2-AAF, and orally administered through gastric gavage with cyan in doses of 10, 15, and 20 mg/kg respectively, for four consecutive days per week for 16 wk[17].\n\nInduction of hepatic PCL: DEN was injected intraperitoneally (i.p.) at a dose of 100 mg/kg once weekly for three consecutive weeks. One week after the last DEN injection, a single dose (300 mg/kg) of 2-AAF was injected.\n\nAnimal groups: Rats were divided into five groups (6 rats/each group), including: Naïve group: Rats were injected with a vehicle (0.9% NaCl), PCL group: Rats were injected with DEN and 2-AAF, and three cyan groups (cyan-10, cyan-15, and cyan-20 groups): Rats were injected with DEN and 2-AAF, and orally administered through gastric gavage with cyan in doses of 10, 15, and 20 mg/kg respectively, for four consecutive days per week for 16 wk[17].\nAssessment of the effects of treatment \nBiochemical and molecular studies: After scarifying the animals, blood was withdrawn from each rat from the retro orbital vein and incubated for about half an hour for clotting. Then, the blood sample was centrifuged for 20 min at 5000 RPM to get serum samples. Livers were dissected. The right lobe of the livers was removed, cut into longitudinal sections 2-4 mm in thickness and kept in 10% formalin for histopathological and immunohistochemical examination. The other lobe was kept frozen at -80 °C for molecular analyses. The samples were analyzed for the following parameters: (1) alpha fetoprotein (AFP) levels: AFP level was analyzed using quantitative sandwich rat AFP ELISA kit purchased from MyBiosource Inc. 
(San Diego, United States) with a sensitivity range of 0.625-20 ng/mL; (2) serum alanine aminotransferase (ALT) levels: The serum ALT was measured according to method described by using kits purchased from Diamond Diagnostic (Cairo, Egypt). ALT catalyzes the transfer of amino groups from specific amino acids to ketoglutaric acid, yielding glutamic acid and oxaloacetic or pyruvic acid, respectively. Ketoacids were then determined calorimetrically after their reaction with 2, 4- dinitrophenylhydrazine; and (3) determination of serum total bilirubin and direct bilirubin: Alkaline methanolysis of bilirubin followed by chloroform extraction of bilirubin methyl esters, and later, separation of these esters by chromatography and spectrophotometric determination at 430 nm.\n\nBiochemical and molecular studies: After scarifying the animals, blood was withdrawn from each rat from the retro orbital vein and incubated for about half an hour for clotting. Then, the blood sample was centrifuged for 20 min at 5000 RPM to get serum samples. Livers were dissected. The right lobe of the livers was removed, cut into longitudinal sections 2-4 mm in thickness and kept in 10% formalin for histopathological and immunohistochemical examination. The other lobe was kept frozen at -80 °C for molecular analyses. The samples were analyzed for the following parameters: (1) alpha fetoprotein (AFP) levels: AFP level was analyzed using quantitative sandwich rat AFP ELISA kit purchased from MyBiosource Inc. (San Diego, United States) with a sensitivity range of 0.625-20 ng/mL; (2) serum alanine aminotransferase (ALT) levels: The serum ALT was measured according to method described by using kits purchased from Diamond Diagnostic (Cairo, Egypt). ALT catalyzes the transfer of amino groups from specific amino acids to ketoglutaric acid, yielding glutamic acid and oxaloacetic or pyruvic acid, respectively. Ketoacids were then determined calorimetrically after their reaction with 2, 4- dinitrophenylhydrazine; and (3) determination of serum total bilirubin and direct bilirubin: Alkaline methanolysis of bilirubin followed by chloroform extraction of bilirubin methyl esters, and later, separation of these esters by chromatography and spectrophotometric determination at 430 nm.\nDetermination of serum albumin Quantitative determination of serum albumin level was quantified using rat albumin ELISA kit. The absorbance at 450 nm is a measure of the concentration of albumin in the test sample.\nQuantitative determination of serum albumin level was quantified using rat albumin ELISA kit. 
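ELISA absorbances such as the 450 nm readings above are converted to concentrations with the kit's own standard curve. The sketch below is purely illustrative: the calibrator concentrations and absorbances are hypothetical rather than the kit's actual standards, and simple log-log interpolation stands in for the kit's recommended curve fit:

import numpy as np

# Hypothetical calibrator points (ng/mL vs absorbance at 450 nm); real values
# come from the kit insert and each plate's own standards.
std_conc = np.array([0.625, 1.25, 2.5, 5.0, 10.0, 20.0])   # ng/mL
std_abs  = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.70])  # A450, assumed

def conc_from_absorbance(a450):
    """Interpolate concentration from absorbance on a log-log standard curve."""
    # np.interp requires increasing x values; absorbance rises with concentration here.
    log_conc = np.interp(np.log(a450), np.log(std_abs), np.log(std_conc))
    return float(np.exp(log_conc))

print(conc_from_absorbance(0.40))  # e.g. a test-sample well, in ng/mL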
Bioinformatics-based selection of a cell cycle-, microtubule assembly- and HCC-specific RNA panel
The RNA panel was chosen in three steps: (1) tubulin gamma 1 (TUBG1) was retrieved as a crucial player in microtubule formation and cell cycle progression through the Expression Atlas database (available at https://www.ebi.ac.uk/gxa/home), with verification of the chosen gene in the Protein Atlas database (available at https://www.proteinatlas.org/), followed by analysis of TUBG1 gene ontology through the GeneCards database (available at https://www.genecards.org/) to ensure that it is linked to the G2-M transition and mitotic spindle assembly, which are closely linked to cancer development; (2) miR-125b-1-3p was selected as targeting TUBG1 mRNA through the miRWalk database (available at http://mirwalk.umm.uni-heidelberg.de/), and pathway enrichment through the DIANA database (available at http://diana.imis.athena-innovation.gr/DianaTools/index.php) revealed that miR-125b-1-3p is linked to MAP kinase signaling and ubiquitin-mediated proteolysis, which are strongly correlated with mitotic assembly and cell cycle progression; and (3) lncRNA metastasis-associated lung adenocarcinoma transcript 1 (lncRNA MALAT1) was obtained as a lncRNA specific to HCC and targeting miR-125b-1-3p using the NONCODE database (available at http://www.noncode.org/) and the European Bioinformatics Institute database (available at https://www.ebi.ac.uk).

Extraction of total RNA, including lncRNA and miRNA, from liver tissue
Total RNA was extracted from liver tissues using a miRNeasy Mini kit (Cat No. 217004, Qiagen, United States) according to the manufacturer's protocol. The extracted RNA concentration and integrity were assessed spectrophotometrically (NanoDrop Technologies/Thermo Scientific, Wilmington, DE, United States), with RNA purities of 1.8-2.0. The extracted total RNA was reverse transcribed into cDNA using a miScript II RT kit (Cat No. 218160, 218161, Qiagen, United States) on a thermal cycler (Bio-Rad, Hercules, CA, United States), according to the manufacturer's protocol.
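The RNA purity window quoted above (A260/A280 of 1.8-2.0) can be screened programmatically; a minimal sketch with made-up sample names and readings:

# Flag extracted RNA samples whose A260/A280 ratio falls outside the 1.8-2.0
# window reported above; the sample IDs and readings here are hypothetical.
samples = {"naive_1": 1.92, "pcl_3": 1.65, "cyan20_2": 2.05}

def passes_purity(ratio, low=1.8, high=2.0):
    return low <= ratio <= high

for name, ratio in samples.items():
    status = "OK" if passes_purity(ratio) else "re-extract"
    print(f"{name}: A260/A280 = {ratio:.2f} -> {status}")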
Real time-polymerase chain reaction quantification of the RNA panel in liver tissues
The relative expression of TUBG1 mRNA and lncRNA MALAT1 in liver tissue was measured using a QuantiTect SYBR Green Master Mix kit and an RT2 SYBR Green ROX qPCR Mastermix (Qiagen), respectively, on a 7500 real time-polymerase chain reaction (qPCR) system (Applied Biosystems, Foster City, CA, United States) with specific primers (Accession: NM_001128148, NM_003234, NR_002819, and ENST00000534336, respectively) supplied by Qiagen. GAPDH (Accession NM_002046.7) was used as a housekeeping gene. miR-125b-1-3p expression in liver tissue was quantified by qPCR using a miScript SYBR Green kit (Qiagen), a miScript universal primer, and a miRNA-specific forward primer (miR-125b-1-3p miScript Primer Assay; Accession: MIMAT0004592, 5'-ACGGGUUAGGCUCUUGGGAGCU). All steps followed the manufacturer's suggested protocol, and SNORD68 was used as an internal control. Ct values greater than 36 were considered negative. The specificity of the amplicons in the SYBR Green-based PCR amplification was confirmed by the melting curves. The 2^-ΔΔCt technique was used to measure the relative expression of the HCC-specific RNA panel.
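The 2^-ΔΔCt calculation named above can be written out explicitly. The sketch below is illustrative only: the Ct values are hypothetical, GAPDH serves as the reference gene (SNORD68 would play the same role for the miRNA), and Ct values above 36 are treated as not detected, as stated above:

CT_CUTOFF = 36  # Ct values above this were considered negative

def relative_quantity(ct_target_sample, ct_ref_sample,
                      ct_target_control, ct_ref_control):
    """2^-ddCt relative expression of a sample versus the control (naive) group."""
    for ct in (ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
        if ct > CT_CUTOFF:
            return None  # treated as not detected
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: MALAT1 vs GAPDH in a PCL liver versus a naive liver.
print(relative_quantity(24.1, 18.0, 27.3, 18.2))  # RQ > 1 indicates upregulation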
Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min, and then, incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti GSTP or PCNA antibody was added to adjacent tissue sections respectively and incubated overnight at 4 °C; (3) biotin-conjugated second antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and then, 2,4-diaminobutyric acid was used for the color reaction. The tissue sections were washed with Poly (butylene succinate) (PBS) (0.01 mol/L, pH 7.4) between each step. Positive and negative controls were simultaneously used to ensure specificity and reliability of the staining process. A positive section was taken as positive control. In negative control, PBS was used to replace the first antibody. GSTP-positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the foci percent area was carried out using Leica Q win V.3 software after capturing the images using a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are represented as the expression of positively-stained nuclei (10 fields/slide at 400 ×) as shown below: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States).\nLiver sections were cut at 5-μm thickness and stained with H&E. Immunohistochemistry S-P method was used to detect GSTP and PCNA levels. Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min, and then, incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti GSTP or PCNA antibody was added to adjacent tissue sections respectively and incubated overnight at 4 °C; (3) biotin-conjugated second antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and then, 2,4-diaminobutyric acid was used for the color reaction. The tissue sections were washed with Poly (butylene succinate) (PBS) (0.01 mol/L, pH 7.4) between each step. Positive and negative controls were simultaneously used to ensure specificity and reliability of the staining process. A positive section was taken as positive control. In negative control, PBS was used to replace the first antibody. GSTP-positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the foci percent area was carried out using Leica Q win V.3 software after capturing the images using a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are represented as the expression of positively-stained nuclei (10 fields/slide at 400 ×) as shown below: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States).\nStatistical analysis All results were expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. Unpaired t-test was done to compare between naïve and PCL groups. One-way ANOVA, followed by Tukey’s test, was carried out to compare between all groups. 
P values < 0.05 were considered statistically significant. \nAll results were expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. Unpaired t-test was done to compare between naïve and PCL groups. One-way ANOVA, followed by Tukey’s test, was carried out to compare between all groups. P values < 0.05 were considered statistically significant. ", "DEN was obtained as 1 g solution in serum bottle diluted in 0.9% NaCl. 2-AAF was obtained as white powder, while cyan was obtained as black powder. Both 2-AAF and cyan were dissolved in 0.9% NaCl. All chemicals were purchased from Sigma-Aldrich, Cairo, Egypt.", "All animal procedures were approved by the Institutional Animal Ethics Committee of Ain Shams University, Faculty of Medicine. Thirty male Wistar rats weighing 200-250 g were purchased from Nile Pharmaceuticals Company (Cairo, Egypt). Animals were housed in an animal room with temperature 25 ± 2 °C and 12 h light/dark cycle controls. An adaptation period of 1 wk was allowed before initiation of the experimental protocol.", "\nInduction of hepatic PCL: DEN was injected intraperitoneally (i.p.) at a dose of 100 mg/kg once weekly for three consecutive weeks. One week after the last DEN injection, a single dose (300 mg/kg) of 2-AAF was injected.\n\nAnimal groups: Rats were divided into five groups (6 rats/each group), including: Naïve group: Rats were injected with a vehicle (0.9% NaCl), PCL group: Rats were injected with DEN and 2-AAF, and three cyan groups (cyan-10, cyan-15, and cyan-20 groups): Rats were injected with DEN and 2-AAF, and orally administered through gastric gavage with cyan in doses of 10, 15, and 20 mg/kg respectively, for four consecutive days per week for 16 wk[17].", "\nBiochemical and molecular studies: After scarifying the animals, blood was withdrawn from each rat from the retro orbital vein and incubated for about half an hour for clotting. Then, the blood sample was centrifuged for 20 min at 5000 RPM to get serum samples. Livers were dissected. The right lobe of the livers was removed, cut into longitudinal sections 2-4 mm in thickness and kept in 10% formalin for histopathological and immunohistochemical examination. The other lobe was kept frozen at -80 °C for molecular analyses. The samples were analyzed for the following parameters: (1) alpha fetoprotein (AFP) levels: AFP level was analyzed using quantitative sandwich rat AFP ELISA kit purchased from MyBiosource Inc. (San Diego, United States) with a sensitivity range of 0.625-20 ng/mL; (2) serum alanine aminotransferase (ALT) levels: The serum ALT was measured according to method described by using kits purchased from Diamond Diagnostic (Cairo, Egypt). ALT catalyzes the transfer of amino groups from specific amino acids to ketoglutaric acid, yielding glutamic acid and oxaloacetic or pyruvic acid, respectively. Ketoacids were then determined calorimetrically after their reaction with 2, 4- dinitrophenylhydrazine; and (3) determination of serum total bilirubin and direct bilirubin: Alkaline methanolysis of bilirubin followed by chloroform extraction of bilirubin methyl esters, and later, separation of these esters by chromatography and spectrophotometric determination at 430 nm.", "Quantitative determination of serum albumin level was quantified using rat albumin ELISA kit. 
The absorbance at 450 nm is a measure of the concentration of albumin in the test sample.", "The RNA panel was chosen in three steps: (1) Tubulin, gamma 1 (TUBG1) was retrieved as a crucial player in microtubule formation and progression of the cell cycle through gene atlas expression database (available at https://www.ebi.ac.uk/gxa/home) alongside with verification of the chosen gene from protein Atlas database (available at https://www.proteinatlas.org/), followed by analysis of TUBG1 gene ontology to ensure that it is linked to G2-M transition and mitotic spindle assembly, which is closely linked to cancer development through gene card database (available at https://www.genecards.org/); (2) we selected miR-125b-1-3p targeting TUBG1 mRNA though) miRWALK database available at http://mirwalk.umm.uni-heidelberg.de/), followed by pathway enrichment, which revealed that miR-125b-1-3p is linked to MAP kinase and ubiquitin-mediated proteolysis (strongly correlated with mitotic assembly and cell cycle progression) through (Diana database available at http://diana.imis.athena-innovation.gr/DianaTools/index.php); and (3) lncRNA metastasis-associated lung adenocarcinoma transcript 1, lncRNA MALAT1, was obtained as a lncRNA specific to HCC and targeting miR-125b-1-3p using noncode database available at http://www.noncode.org/ and European Bioinformatics institute database available at https://www.ebi.ac.uk.", "Total RNA was extracted from liver tissues using miRNeasy Mini kit (Cat No. 217004, Qiagen, United States) according to manufacturer’s protocol. The extracted RNA concentration and integrity were assessed using [(NanoDrop Technologies/Thermo Scientific, Wilmington, DE, United States)], with RNA purities were 1.8-2. The extracted total RNA was reverse transcribed into cDNA using a miScript II RT kit (Cat No. 218160, 218161, Qiagen, United States) on thermal cycler (Bio-Rad; Hercules, CA, United States), according to the manufacturer’s protocol. ", "The relative expression of TUBG1 mRNA and lncRNA-MALAT1 in liver tissue were measured using a Quantitect SYBR Green Master Mix kit and an RT2 SYBR Green ROX real time-polymerase chain reaction (qPCR) Mastermix (Qiagen), respectively, using a 7500 qPCR Systems (Applied Biosystems, Foster City, CA, United States) detection system) and specific primers (Accession: NM_001128148, NM_003234, NR_002819, and ENST00000534336, respectively) supplied by Qiagen. GAPDH (Accession NM_002046.7) was used as a housekeeping gene. MiR-3163 expression in liver tissue was quantified by PCR using a miScript SYBR Green kit (Qiagen), a miScript universal primer, and a miRNA-specific forward primer (mir-125b-1-3p miScript Primer Assay) (Accession: MIMAT0004592 (5'-ACGGGUUAGGCUCUUGGGAGCU). All steps followed the manufacturer’s suggested protocol, and SNORD68 was used as an internal control. Ct values more than 36 were considered as negative. The specificities of the amplicons for the SYBR Green−based PCR amplification were affirmed by the melting curves. The 2−ΔΔCt technique was used to measure the relative expression of the HCC-specific RNA panel.", "Liver sections were cut at 5-μm thickness and stained with H&E. Immunohistochemistry S-P method was used to detect GSTP and PCNA levels. 
Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min, and then, incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti GSTP or PCNA antibody was added to adjacent tissue sections respectively and incubated overnight at 4 °C; (3) biotin-conjugated second antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and then, 2,4-diaminobutyric acid was used for the color reaction. The tissue sections were washed with Poly (butylene succinate) (PBS) (0.01 mol/L, pH 7.4) between each step. Positive and negative controls were simultaneously used to ensure specificity and reliability of the staining process. A positive section was taken as positive control. In negative control, PBS was used to replace the first antibody. GSTP-positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the foci percent area was carried out using Leica Q win V.3 software after capturing the images using a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are represented as the expression of positively-stained nuclei (10 fields/slide at 400 ×) as shown below: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States).", "All results were expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. Unpaired t-test was done to compare between naïve and PCL groups. One-way ANOVA, followed by Tukey’s test, was carried out to compare between all groups. P values < 0.05 were considered statistically significant. ", "Effect on AFP levels and liver function tests As shown in Figure 1, following induction of PCL, rats exhibited a significant increase in AFP levels. AFP levels were significantly decreased in cyan-10, cyan-15, and cyan-20 groups compared to PCL group. There was no significant difference between the AFP levels of the three cyan groups. Liver function tests were affected after receiving DEN/2-AAF in which rats exhibited a significant increase in levels of ALT and total and direct bilirubin, and a significant decrease in the serum albumin (Figure 2). As shown in Figure 2A, rats received cyan. in different doses exhibited a significant decrease in serum ALT compared to PCL group. Cyan-15 group showed significant decrease in serum ALT compared to cyan-10 group. Moreover, cyan-20 group showed significant decrease in serum ALT compared to cyan-10 group. In As depicted in Figure 2B, compared to PCL group, the rats in the three cyan groups exhibited a significant decrease in serum total bilirubin. Furthermore, cyan-20 group showed significant decrease in serum total bilirubin compared to cyan-10 group. Figure 2C: Serum direct bilirubin was significantly decreased in the three cyan groups compared to the PCL group. Moreover, cyan-20 group showed significant decrease in serum direct bilirubin compared to cyan-10 group. Administration of cyan in doses 5, 10 and 20 mg/kg/d produced a significant increase in serum albumin compared to PCL group. 
However, there was no significant difference between the serum albumin levels of the three cyan groups (Figure 2D).\n\nEffect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) on alpha fetoprotein in the serum of rats. Values are mean ± SEM; number of animals = 6 rats/each group. aP < 0.05 significant differences compared to naïve group, unpaired t test; bP < 0.05 significant differences compared to precancerous lesion group. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; AFP: Alpha fetoprotein.\n\nEffect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) in rats. A: Alanine aminotransferase; B: Total bilirubin; C: Direct bilirubin; D: Serum albumin. Values are mean ± SEM; number of animals = 6 rats/each group. bP < 0.05 significant differences compared to precancerous lesion group; cP < 0.05 significant differences between the 2 selected groups, One-way ANOVA followed by Tukey's test. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; ALT: Alanine aminotransferase.
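The figure legends above cite an unpaired t test (naïve vs PCL) and one-way ANOVA followed by Tukey's test for the multi-group comparisons. The sketch below shows how such comparisons can be reproduced in Python with SciPy and statsmodels; the ALT-like values, group means, and random seed are hypothetical illustrations, not data from this study.

```python
# Illustrative re-analysis of the statistics cited in the legends above:
# unpaired t-test (naive vs PCL) and one-way ANOVA followed by Tukey's test.
# All values below are hypothetical, not data from this study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {                                   # hypothetical means/SDs, n = 6 rats/group
    "naive":   rng.normal(45, 5, 6),
    "PCL":     rng.normal(120, 15, 6),
    "cyan-10": rng.normal(95, 12, 6),
    "cyan-15": rng.normal(80, 10, 6),
    "cyan-20": rng.normal(70, 9, 6),
}

# Unpaired t-test: naive vs PCL (the "a" annotation in the legends)
t, p = stats.ttest_ind(groups["naive"], groups["PCL"])
print(f"naive vs PCL: t = {t:.2f}, P = {p:.4f}")

# One-way ANOVA across all five groups
f, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, P = {p_anova:.4f}")

# Tukey's HSD pairwise comparisons at alpha = 0.05 (the "b" and "c" annotations)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05).summary())
```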
Effect on expression of lncRNA MALAT1, miR-125b, and TUBG1 (G2/M transition of mitotic cell cycle) in the liver tissue: As shown in Figure 3, induction of PCL resulted in a significant increase in lncRNA MALAT1 expression, a significant decrease in the relative quantity (RQ) of miR-125b, and a significant increase in RQ of TUBG1 in comparison to the naïve group. RQ of lncRNA MALAT1 in the liver tissue was significantly decreased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. Furthermore, the cyan-20 group showed a significant decrease in RQ of lncRNA MALAT1 in the liver tissue compared to the cyan-10 group (Figure 3A). On the contrary, RQ of miR-125b in the liver tissue was significantly increased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. In addition, the cyan-20 group showed a significant increase in RQ of miR-125b in the liver tissue compared to the cyan-10 group (Figure 3B). Furthermore, RQ of TUBG1 in the liver tissue was significantly decreased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. Also, the cyan-20 group showed a significant decrease in RQ of TUBG1 in the liver tissue compared to the cyan-10 group (Figure 3C).\n\nEffect on expression of long non-coding RNA MALAT1, miR-125b, and tubulin gamma 1 (G2/M transition of mitotic cell cycle) in the liver tissue. A: RQ of long non-coding RNA MALAT1; B: RQ miR-125b; C: RQ TUBG1. Values are mean ± SEM; number of animals = 6 rats/each group. aP < 0.05 significant differences compared to naïve group; bP < 0.05 significant differences compared to precancerous lesion group; cP < 0.05 significant differences between the 2 selected groups, One-way ANOVA followed by Tukey's test. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; TUBG: Tubulin gamma 1; lncRNA: Long non-coding RNA.
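The RQ values in Figure 3 come from the 2−ΔΔCt method described in the Methods (GAPDH or SNORD68 as reference, Ct > 36 treated as negative), with the naïve group taken as the implied calibrator. A minimal sketch of that calculation is shown below; the Ct numbers are hypothetical.

```python
# Minimal sketch of the 2^-DeltaDeltaCt calculation behind the RQ values in Figure 3.
# Ct numbers are hypothetical; the reference gene and the Ct > 36 negativity
# cut-off follow the Methods description, and the naive group is the calibrator.
import math

def relative_quantity(ct_target, ct_reference, ct_target_cal, ct_reference_cal,
                      negative_cutoff=36.0):
    """Return 2^-DeltaDeltaCt for one sample relative to a calibrator sample."""
    if ct_target > negative_cutoff:              # Ct > 36 treated as negative
        return float("nan")
    delta_ct_sample = ct_target - ct_reference            # e.g. MALAT1 - GAPDH
    delta_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical example: MALAT1 in a PCL liver relative to a naive calibrator
print(relative_quantity(ct_target=24.1, ct_reference=18.0,
                        ct_target_cal=27.5, ct_reference_cal=18.2))
```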
Histopathological and immunohistochemical studies: Light microscopic examination of H&E-stained sections of naïve rats' liver showed normal hepatic architecture, cords of hepatocytes radiating from the central vein, portal triads present in between, polygonal hepatocytes with centrally located rounded vesicular nuclei, and hepatic sinusoids in between (Figure 4A and B). Liver sections obtained from rats that received DEN/2-AAF (Figure 4C-E) showed disruption of the normal hepatic lobular architecture along with the presence of large, well-demarcated dysplastic nodules compressing the surrounding liver tissue, with high-grade dysplastic cells displaying an increased nuclear:cytoplasmic ratio (Figure 4E), nuclear hyperchromatosis, and basophilic cytoplasm. Liver sections of rats treated with different doses of cyanidin (10, 15, and 20 mg/kg), respectively, showed small and less demarcated dysplastic nodules (Figure 4F-H).\n\nPhotomicrographs of liver sections stained with H&E. A and B: Naïve group liver sections showed normal hepatic architecture, cords of hepatocytes radiating from the central vein, portal triads present in-between, polygonal hepatocytes with central rounded vesicular nuclei, and hepatic sinusoids in-between; C-E: Liver sections of rats that received diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF) showed larger, well-demarcated dysplastic nodules (dotted shapes) compressing the surrounding liver tissue with disruption of normal hepatic lobular architecture; D: Liver sections of rats that received DEN/2-AAF showed eosinophilic foci of cellular alteration consisting of enlarged hepatocytes with increased acidophilic staining and vacuolated nuclei (arrow head); E: Liver sections of rats that received DEN/2-AAF showed clear-cell foci formed of hepatocytes showing variable degrees of cytoplasmic vacuolation and ballooning with pyknotic nuclei (arrow); F-H: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg), respectively, showing small and less demarcated dysplastic nodules (dotted shapes). (Magnification: A, C, F, G, H × 1000; B, D, E × 400). CV: Central vein; PV: Portal vein; Pt: Portal triads; S: Sinusoid; H: Hepatocytes.\nOn immunohistochemical examination with the GSTP antibody, large hyperplastic nodules were demonstrated by brown coloration. Figure 5A and B shows the liver sections of the naïve group, while Figure 5C shows large positively stained GSTP foci in the PCL group. Liver sections from rats of all cyan-treated groups exhibited small GSTP-positive hepatic foci of different sizes scattered in-between negatively stained hepatic parenchyma (Figure 5D-F). The percent area of GSTP foci was significantly increased in the PCL group compared to the naïve group, while all cyan-treated groups showed a significant decrease in the GSTP percent area. Furthermore, cyan-15 and cyan-20 showed a significant decrease in the GSTP percent area compared with cyan-10 (Table 1). PCNA immunohistochemical analysis showed elevated expression of PCNA in the group that received DEN/2-AAF compared to the naïve group (Figure 6A and B). Liver sections obtained from rats that received cyan at its three doses showed a decrease in PCNA-positively stained nuclei (Figure 6C-E and Table 1).\n\nPhotomicrographs of liver sections of rats immunohistochemically stained with glutathione S-transferase placental antibody. 
A and B: Naive group; C: Precancerous lesion group showing multiple glutathione S-transferase placental (GSTP)-positive large hepatic nodules (brown-stained nodules) occupying most of the section; D-F: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg) showing GSTP-positive small hepatic foci (brown-stained cells = arrow) of different sizes scattered in-between negatively stained hepatic parenchyma. (Magnification: × 40)\n\nPhotomicrographs of liver sections immunohistochemically stained with proliferating cell nuclear antigen. A: Negative reaction of the control group; B: Diethylnitrosamine + 100 mg 2-acetylaminofluorene group showing positively stained nuclei scattered all over the field; C-E: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg), respectively, showing few positive hepatocytes sporadically distributed over the field (arrow). (Magnification × 100).\nExpression rates of hepatocytes positive for glutathione S-transferase placental and proliferating cell nuclear antigen were calculated as the number of fields with positive expression out of 10 fields per rat liver tissue\nValues are mean ± SD; number of animals = 6 rats/each group. Examined field; 10 fields/liver section.\n\nP < 0.05 significant differences compared to naïve group.\n\nP < 0.05 significant differences compared to precancerous lesion group.\n\nP < 0.05 significant compared to cyanidin-3-O-glucoside-10, One-way ANOVA followed by Tukey's test. +: Positive expression found in 1-3 fields; ++: Positive expression found in 4-6 fields; +++: Positive expression found in 7-10 fields; PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen.
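Table 1 scores PCNA immunostaining by the number of positive fields out of 10 per section (+, ++, +++) and reports the GSTP foci percent area as mean ± SD. The snippet below is an illustrative implementation of that scoring and summary; the field counts and area values are hypothetical.

```python
# Illustrative implementation of the Table 1 scoring: PCNA-positive fields out of
# 10 per section mapped to the +/++/+++ scale, and GSTP foci percent area
# summarized as mean +/- SD. Counts and area values below are hypothetical.
import statistics

def pcna_score(positive_fields: int) -> str:
    """Map the number of positive fields (0-10) to the semiquantitative scale."""
    if positive_fields <= 0:
        return "-"
    if positive_fields <= 3:
        return "+"
    if positive_fields <= 6:
        return "++"
    return "+++"

positive_fields_per_group = {"naive": 0, "PCL": 9, "cyan-10": 5, "cyan-15": 3, "cyan-20": 2}
for group, n_fields in positive_fields_per_group.items():
    print(group, pcna_score(n_fields))

# GSTP foci percent area for one group (hypothetical values, 6 rats)
gstp_area = [12.4, 10.8, 13.1, 11.6, 12.9, 11.2]
print(f"GSTP foci area: {statistics.mean(gstp_area):.1f} +/- {statistics.stdev(gstp_area):.1f} %")
```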
CV: Central vein; PV: Portal vein; Pt: Portal triads; S: Sinusoid; H: Hepatocytes.\nImmunohistochemical examination with GSTP antibody appeared as large hyperplastic nodules were demonstrated by the brown color. Figure 5A and B shows the liver sections of naïve group, while Figure 5C shows large-positively stained GSTP foci demonstrated in PCL group. Liver sections from rats of all cyan-treated groups exhibited small GSTP-positive small hepatic foci of different sizes scattered in-between negatively-stained hepatic parenchyma (Figure 5D-F). The percent area of GSTP foci was significantly increased in the PCL group compared to naïve group, while all cyan-treated groups showed significant decrease in the GSTP percent area. Furthermore, cyan-15 and -20 showed significant decrease in the GSTP percent area over cyan-10 (Table 1). PCNA immunohistochemical analysis showed an elevated expression of PCNA in the group that received DEN/2-AAF as compared to naïve group (Figure 6A and B). Liver sections obtained from rats received cyan. in its three doses showed decrease in PCNA-positively stained nuclei (Figure 6C-E and Table 1).\n\nPhotomicrographs of liver sections of rats immunohistochemical stained with glutathione S-transferase placental antibody. A and B: Naive group; C: Precancerous lesion group showing multiple glutathione S-transferase placental (GSTP)-positive large hepatic nodules (brown stained nodules) occupying most of section; D-F: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg) showing GSTP positive small hepatic foci (brown stained cells = arrow) of different size scattered in-between negatively stained hepatic parenchyma. (Magnification: × 40)\n\nPhotomicrographs of liver sections Immunohistochemical stained with proliferating cell nuclear antigen. A: Negative reaction of control group; B: Diethylnitrosamine + 100 mg 2-acetylaminofluorene group showing positive stained nuclei scattered all over the field; C-E: Liver sections of rats treated with different doses of cyamidine (10, 15, 20 mg/kg) respectively, show few positive hepatocytes sporadically distributed over the field (arrow). (Magnification × 100). \nExpression rate of hepatocytes positive for glutathione S-transferase placental and proliferating cell nuclear antigen were calculated as number of positive field expression in 10 fields per rat liver tissue\nValues are mean ± SD; number of animals = 6 rats/each group. Examined field; 10 fields/liver section. \n\nP < 0.05 significant differences compared to naïve group.\n\nP < 0.05 significant differences compared to precancerous lesion group.\n\nP < 0.05 significant compared to cyanidin-3-O-glucoside-10, One-way ANOVA followed by Tukey’s test. +: Positive expression found in 1-3 fields; ++: Positive expression found in 4-6 fields; +++: Positive expression found in 7-10 field; PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen.", "Altered expression of mitotic spindle assembly genes has been detected in many cancers. For example, MAD2 gene levels are downregulated in breast carcinoma[18], while mitotic checkpoint serine/BUB1 gene expression is dysregulated in colorectal carcinoma[19]. The impairment of spindle assembly genes are frequently detected in HCC[20,21]. Moreover, inhibition of the mitotic assembly genes has been found to be lethal to cancer cells, and has promising therapeutic effect in cancer treatment[22,23]. 
\nIn this study, we used several public microarray databases to construct a simple genetic-epigenetic network specific to HCC and linked to the cell cycle and mitotic spindle formation. We assessed the antiproliferative effect of cyan in an HCC animal model via modulation of the lncRNA MALAT1-miR-125b-TUBG1 mRNA axis.\nMicrotubules are involved both in the maintenance of cell shape and motility and in mitotic spindle formation[24]. γ-TUBG is a major player that supports microtubule nucleation. It acts as a binding site for the α/β-tubulin dimer, and there are two γ-TUBG genes, TUBG1 and TUBG2, in mammals[25]. TUBG levels in the centrosome change during cell cycle progression and usually increase at the start of mitosis. At the end of mitosis, TUBG levels decrease to interphase levels[26]. Overexpression and altered compartmentalization of γ-TUBG may lead to carcinogenesis[27-29]. γ-TUBG associates with the tumor suppressor protein C53 in mammalian interphase cells of different cellular origins[30]. Interestingly, γ-TUBG also interacts with E2F transcription factors, resulting in upregulation of the E2F target genes PCNA and CDKB1[31] and of glutathione S-transferase activity[32]. On the other hand, the mRNA and protein levels of TUBA1C are upregulated in HCC. Clinically, it affects patient survival and the level of metastatic burden along with portal vein thrombosis. In addition, it activates cellular proliferation and migration and is a potential therapeutic target for HCC[33]. Recent studies have shed light on the efficacy of microtubule-binding agents in the treatment of HCC, especially in combination with mammalian target of rapamycin inhibitors[34].\nOne of the few miRNAs that regulate the spindle assembly checkpoint (SAC) is miR-125b. It suppresses the expression of Mad1 (a core SAC protein), which is responsible for inhibiting entry into anaphase until metaphase defects are corrected[35]. Importantly, miRNA-125b has been found to be associated with colorectal cancer[36], multiple myeloma[37], and breast cancer[38]. Song et al[39] demonstrated that the expression of miR-125b in hepatocytes was protective and reflected better survival in HCC tissue samples due to restoration of SIRT6 expression[39]. miR-125b-5p inhibits HCC proliferation by targeting TXNRD1, which catalyzes the terminal step of nucleotide biosynthesis[40].\nDeregulation of lncRNAs has been shown to be crucial in cancer progression, especially in HCC[41,42]. Hung et al[43] demonstrated the involvement of lncRNAs in the regulation of the expression of cell cycle-related genes, the p53 gene regulatory pathway, and apoptosis[43,44]. The lncRNA MALAT1 is overexpressed in several solid tumors and is linked with tumor recurrence[45-47]. MALAT1 coordinates RNA polymerase II transcription, pre-mRNA splicing, and mRNA export[48]. Thus, this lncRNA holds great potential in influencing the local concentration of a specific splicing factor during specific stages of the cell cycle. Depletion of MALAT1 in cancer cells inhibits tumorigenicity[49]. MALAT1 regulates E2F1 transcription factor activity, highlighting its pro-proliferative function[50]. MALAT1 is upregulated in many cancer types, including breast cancer[51], cervical cancer, lung cancer[52], and hepatocarcinoma[53-55]. In a previous study, hypoxia enhanced MALAT1 expression, leading to increased proliferative and invasive activity of Hep3B cells due to negative interaction with miR-200a[56]. 
Huang and his group explored the role of specificity protein 1/3 in the regulation of MALAT1 expression in HCC cells[57].\nCyanidin or anthocyanin (a polyphenolic pigment found in plants) confers several pharmacological benefits, including anticancer properties[58-62]. Cyan mediates cytotoxicity against human monocytic leukemia cells via G2/M phase arrest and induction of apoptosis[63]. Tsai et al[64] evaluated anthocyanins as promising anticancer or chemopreventive agents that induce G2/M cell cycle arrest in leukemia cells via modulation of the ATM-Chk1/2-Cdc25C axis[64]. They also found that anthocyanin inhibits the proliferation of HCC cells[65,66].\nIn our study, the effect of cyan on cancerous tissue was dose-dependent (10, 15, and 20 mg/kg/d); that is, its effect increased with increasing dose. This was also reflected in the correlated significant decreases in the percent area of GSTP foci and in PCNA expression, which are downstream targets of TUBG1 mRNA, as well as in the molecular epigenetic markers associated with HCC (lncRNA MALAT1 and miR-125b-1-3p) in the liver tissue of rats (Figure 7). Thus, we can hypothesize that administration of cyan in PCL may result in downregulation of lncRNA MALAT1 with subsequent release of free miR-125b-1-3p, which binds to TUBG1 mRNA and downregulates its expression[65]. The changes in this epigenetic-genetic network affect their target genes PCNA and GSTP.\n\nConcept map of study design. GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen; TUBG: Tubulin; lncRNA: Long non-coding RNA.\nLastly, the cyan-mediated modulation of the cell cycle and mitotic spindle assembly that regulate HCC proliferation might affect the capability of HCC cells to overcome oxidative stress[66,67]. Our data provide a rationale for designing an antimitotic approach combining conventional cytotoxic drugs with phytochemical extracts to inhibit cell cycle progression in cancer. More in vitro functional studies are required to explore the functional mechanism of the chosen RNA panel and validate its role as a drug-target biomarker.", "Cyanidin is a natural molecule that holds great potential for unraveling the mystery of cytotoxic pharmacology in the future. It can be used to devise novel antimitotic drugs that target the cell cycle in HCC. Our results also indicate the lncRNA MALAT1-miR-125b-1-3p-TUBG1 mRNA axis as a potential drug-target biomarker for further investigation." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Hepatocellular carcinoma therapy", "Hepatocellular-carcinoma growth", "Hepatocellular-carcinoma model", "Hepatocellular-carcinoma size" ]
INTRODUCTION: Hepatocellular carcinoma (HCC) is the commonest primary malignancy of the liver. Its main risk factors include hepatitis B, hepatitis C, and non-alcoholic steatohepatitis. HCC is associated with high mortality and morbidity. Its incidence has increased from 1.4 to 6.2 cases per 100000 per year within the last 30 years[1]. The HCC burden in Egypt is high due to the high prevalence of HCV; liver cancers account for 11.75% of all gastrointestinal cancers and 1.68% of all malignancies, and over 70% of all liver malignancies in Egypt are HCC[2]. To date, HCC remains an unresolved dilemma; its resistance to systemic chemotherapy and radiation makes cure approachable only in very early stages according to the Milan criteria, that is, where the tumor size is < 5 cm (when only one lesion is present) or < 3 cm (when 2-3 lesions are present)[3]. Genes involved in cell cycle control are usually mutated in cancer. Dysregulated mitosis results in genomic instability, which leads to tumor aggressiveness[4]. Microtubules are a key component of mitotic spindles, and thus a crucial part of the process of mitosis. They form an important target in cancer chemotherapy, inducing arrest of mitosis and cell death. On the other hand, while the overexpression of γ-tubulin (γ-TUBG) was associated with improved survival among small cell lung cancer patients[5], overexpression of γ-TUBG is a characteristic feature of thyroid and breast cancers[6]. Such variation may arise due to differences in gene transcription and checkpoint regulation in different cancer tissues[7]. Cyanidin-3-glucoside (cyan) is a potential chemotherapeutic and chemo-protective agent. Through its antioxidant activity, it scavenges reactive oxygen species, which decreases the overall number of tumorous cells (malignant and benign) in in vivo studies[8]. Cyan has been implicated in several beneficial health activities[9], including reducing age-associated oxidative stress[10], improving cognitive brain function, and exhibiting anti-diabetic[11], anti-inflammatory[12], anti-atherogenic[13], and anti-obesity activities[14]. Previously, it has been reported that cyan mediates caspase-3 cleavage with DNA fragmentation. Cyan has also been associated with induction of autophagy, a key element involved in cancer elimination, via induction of autophagy-related gene 5 and microtubule-associated protein 1 light chain 3-II[15]. Cyan has drawn increasing attention because of its potential anti-cancer properties, including a potential antiproliferative effect on HepG2 cells[16], and may offer a novel avenue for treating HCC. The present study was conducted to evaluate the effect of cyan at three different doses on cell cycle progression and mitotic assembly disorder in hepatic precancerous lesion (PCL) via assessment of the levels of long non-coding RNA (lncRNA) MALAT1, miR-125b, and tubulin gamma 1. We also conducted histopathological examination using H&E staining and immunohistochemical assessment of the levels of glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) in a male Wistar rat model of diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF)-induced hepatic PCL. MATERIALS AND METHODS: Chemicals and drugs: DEN was obtained as a 1 g solution in a serum bottle diluted in 0.9% NaCl. 2-AAF was obtained as a white powder, while cyan was obtained as a black powder. Both 2-AAF and cyan were dissolved in 0.9% NaCl. All chemicals were purchased from Sigma-Aldrich, Cairo, Egypt. 
Experimental animals: All animal procedures were approved by the Institutional Animal Ethics Committee of Ain Shams University, Faculty of Medicine. Thirty male Wistar rats weighing 200-250 g were purchased from Nile Pharmaceuticals Company (Cairo, Egypt). Animals were housed in an animal room at a temperature of 25 ± 2 °C with a 12 h light/dark cycle. An adaptation period of 1 wk was allowed before initiation of the experimental protocol. Experimental procedures: Induction of hepatic PCL: DEN was injected intraperitoneally (i.p.) at a dose of 100 mg/kg once weekly for three consecutive weeks. One week after the last DEN injection, a single dose (300 mg/kg) of 2-AAF was injected. Animal groups: Rats were divided into five groups (6 rats/each group): the naïve group, injected with vehicle (0.9% NaCl); the PCL group, injected with DEN and 2-AAF; and three cyan groups (cyan-10, cyan-15, and cyan-20), injected with DEN and 2-AAF and orally administered cyan through gastric gavage at doses of 10, 15, and 20 mg/kg, respectively, for four consecutive days per week for 16 wk[17]. 
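For the oral dosing just described (10, 15, or 20 mg/kg cyan by gastric gavage), a small helper like the following can convert dose and body weight into a gavage volume. It is illustrative only; the 2 mg/mL stock concentration of cyan in 0.9% NaCl is an assumption for the example and is not stated in the protocol.

```python
# Illustrative helper (not part of the published protocol) for the oral dosing above:
# converts a mg/kg dose and body weight into a gavage volume. The 2 mg/mL stock
# concentration is an assumption for the example only.
def gavage_volume_ml(dose_mg_per_kg: float, body_weight_g: float,
                     stock_mg_per_ml: float = 2.0) -> float:
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0   # dose for this animal in mg
    return dose_mg / stock_mg_per_ml                    # mL of stock to administer

for dose in (10, 15, 20):                               # the three cyan groups, mg/kg/d
    print(dose, "mg/kg ->", round(gavage_volume_ml(dose, body_weight_g=225.0), 2), "mL")
```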
Assessment of the effects of treatment: Biochemical and molecular studies: After sacrificing the animals, blood was withdrawn from each rat from the retro-orbital vein and incubated for about half an hour for clotting. Then, the blood sample was centrifuged for 20 min at 5000 RPM to obtain serum samples. Livers were dissected. The right lobe of the liver was removed, cut into longitudinal sections 2-4 mm in thickness, and kept in 10% formalin for histopathological and immunohistochemical examination. The other lobe was kept frozen at -80 °C for molecular analyses. The samples were analyzed for the following parameters: (1) alpha fetoprotein (AFP) levels: AFP level was analyzed using a quantitative sandwich rat AFP ELISA kit purchased from MyBiosource Inc. (San Diego, United States) with a sensitivity range of 0.625-20 ng/mL; (2) serum alanine aminotransferase (ALT) levels: serum ALT was measured using kits purchased from Diamond Diagnostic (Cairo, Egypt). ALT catalyzes the transfer of amino groups from specific amino acids to ketoglutaric acid, yielding glutamic acid and oxaloacetic or pyruvic acid, respectively. Ketoacids were then determined colorimetrically after their reaction with 2,4-dinitrophenylhydrazine; and (3) determination of serum total bilirubin and direct bilirubin: alkaline methanolysis of bilirubin was followed by chloroform extraction of bilirubin methyl esters, and later, separation of these esters by chromatography and spectrophotometric determination at 430 nm. Determination of serum albumin: Serum albumin level was quantified using a rat albumin ELISA kit. The absorbance at 450 nm is a measure of the concentration of albumin in the test sample. 
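Converting the 450 nm absorbance readings from the ELISA into concentrations requires interpolation against a standard curve. The sketch below shows one common way to do this; the standard concentrations and absorbances are hypothetical, and a four-parameter logistic fit could be substituted for the log-linear fit used here.

```python
# Sketch of converting A450 readings to concentrations via a standard curve, as
# implied by the ELISA description above. Standards and sample absorbances are
# hypothetical placeholders.
import numpy as np

std_conc = np.array([0.625, 1.25, 2.5, 5.0, 10.0, 20.0])   # standards (arbitrary units)
std_a450 = np.array([0.08, 0.15, 0.29, 0.55, 1.02, 1.80])  # hypothetical A450 readings

# Fit absorbance against log(concentration), then invert for unknown samples
slope, intercept = np.polyfit(np.log(std_conc), std_a450, 1)

def a450_to_concentration(a450: float) -> float:
    return float(np.exp((a450 - intercept) / slope))

for sample_a450 in (0.22, 0.70, 1.40):
    print(sample_a450, "->", round(a450_to_concentration(sample_a450), 2))
```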
Bioinformatics-based selection of cell cycle-microtubule assembly- and HCC-specific RNA panel: The RNA panel was chosen in three steps: (1) Tubulin, gamma 1 (TUBG1) was retrieved as a crucial player in microtubule formation and progression of the cell cycle through the Expression Atlas database (available at https://www.ebi.ac.uk/gxa/home), with verification of the chosen gene in the Human Protein Atlas database (available at https://www.proteinatlas.org/), followed by analysis of TUBG1 gene ontology in the GeneCards database (available at https://www.genecards.org/) to ensure that it is linked to the G2-M transition and mitotic spindle assembly, which are closely linked to cancer development; (2) miR-125b-1-3p was selected as targeting TUBG1 mRNA through the miRWalk database (available at http://mirwalk.umm.uni-heidelberg.de/), followed by pathway enrichment in the DIANA database (available at http://diana.imis.athena-innovation.gr/DianaTools/index.php), which revealed that miR-125b-1-3p is linked to MAP kinase signaling and ubiquitin-mediated proteolysis (strongly correlated with mitotic assembly and cell cycle progression); and (3) lncRNA metastasis-associated lung adenocarcinoma transcript 1 (lncRNA MALAT1) was obtained as a lncRNA specific to HCC and targeting miR-125b-1-3p using the NONCODE database (available at http://www.noncode.org/) and the European Bioinformatics Institute database (available at https://www.ebi.ac.uk). 
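Step (2) of the panel selection amounts to intersecting TUBG1-targeting miRNAs predicted by miRWalk with miRNAs linked to cell cycle-related pathways in a DIANA enrichment export. The sketch below illustrates that intersection on hypothetical CSV exports of those databases (file and column names are placeholders); no live database API is called.

```python
# Conceptual sketch of step (2): intersect TUBG1-targeting miRNAs (miRWalk export)
# with miRNAs enriched in cell cycle-related pathways (DIANA export). File and
# column names are hypothetical placeholders for locally downloaded tables.
import pandas as pd

mirwalk = pd.read_csv("mirwalk_TUBG1_targets.csv")       # assumed columns: miRNA, gene
diana = pd.read_csv("diana_pathway_enrichment.csv")      # assumed columns: miRNA, pathway

tubg1_mirnas = set(mirwalk.loc[mirwalk["gene"] == "TUBG1", "miRNA"])
pathway_mirnas = set(
    diana.loc[diana["pathway"].isin(
        ["MAPK signaling pathway", "Ubiquitin mediated proteolysis"]), "miRNA"]
)

candidates = sorted(tubg1_mirnas & pathway_mirnas)
print(candidates)   # e.g. ['hsa-miR-125b-1-3p'] if present in both exports
```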
Extraction of total RNA, including lncRNA and miRNA, from liver tissue: Total RNA was extracted from liver tissues using the miRNeasy Mini kit (Cat No. 217004, Qiagen, United States) according to the manufacturer's protocol. The extracted RNA concentration and integrity were assessed using a NanoDrop spectrophotometer (NanoDrop Technologies/Thermo Scientific, Wilmington, DE, United States), with RNA purity ratios of 1.8-2.0. The extracted total RNA was reverse transcribed into cDNA using a miScript II RT kit (Cat No. 218160, 218161, Qiagen, United States) on a thermal cycler (Bio-Rad; Hercules, CA, United States), according to the manufacturer's protocol. Real time-polymerase chain reaction quantification of the RNA panel in liver tissues: The relative expression of TUBG1 mRNA and lncRNA MALAT1 in liver tissue was measured using a Quantitect SYBR Green Master Mix kit and an RT2 SYBR Green ROX real time-polymerase chain reaction (qPCR) Mastermix (Qiagen), respectively, on a 7500 qPCR System (Applied Biosystems, Foster City, CA, United States) with specific primers (Accession: NM_001128148, NM_003234, NR_002819, and ENST00000534336, respectively) supplied by Qiagen. GAPDH (Accession NM_002046.7) was used as a housekeeping gene. MiR-125b-1-3p expression in liver tissue was quantified by PCR using a miScript SYBR Green kit (Qiagen), a miScript universal primer, and a miRNA-specific forward primer (mir-125b-1-3p miScript Primer Assay; Accession: MIMAT0004592, 5'-ACGGGUUAGGCUCUUGGGAGCU). All steps followed the manufacturer's suggested protocol, and SNORD68 was used as an internal control. Ct values greater than 36 were considered negative. The specificities of the amplicons for the SYBR Green-based PCR amplification were confirmed by melting curves. The 2−ΔΔCt method was used to measure the relative expression of the HCC-specific RNA panel. Histopathological and immunohistochemical studies: Liver sections were cut at 5-μm thickness and stained with H&E. The S-P immunohistochemistry method was used to detect GSTP and PCNA levels. Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min, and then incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti-GSTP or anti-PCNA antibody was added to adjacent tissue sections respectively and incubated overnight at 4 °C; (3) biotin-conjugated secondary antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and then 3,3'-diaminobenzidine (DAB) was used for the color reaction. 
The tissue sections were washed with phosphate-buffered saline (PBS; 0.01 mol/L, pH 7.4) between each step. Positive and negative controls were run simultaneously to ensure the specificity and reliability of the staining process. A known positive section was used as the positive control. In the negative control, PBS was used in place of the primary antibody. GSTP positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the foci percent area was carried out using Leica Q win V.3 software after capturing the images with a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are expressed as the number of fields containing positively stained nuclei (10 fields/slide at 400 ×), scored as follows: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States). Statistical analysis: All results were expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. An unpaired t-test was used to compare the naïve and PCL groups. One-way ANOVA, followed by Tukey's test, was carried out to compare all groups. P values < 0.05 were considered statistically significant. 
Histopathological and immunohistochemical studies: Liver sections were cut at 5-μm thickness and stained with H&E. The S-P immunohistochemical method was used to detect GSTP and PCNA. Briefly, the protocol was as follows: (1) the tissues were treated with endogenous peroxidase blocking solution at room temperature for 10 min and then incubated in normal nonimmune serum at room temperature for 10 min; (2) mouse anti-GSTP or anti-PCNA antibody was added to adjacent tissue sections and incubated overnight at 4 °C; (3) biotin-conjugated secondary antibody was added to the sections and incubated at room temperature for 10 min; and (4) S-P complex was added at room temperature for 10 min, and 3,3'-diaminobenzidine (DAB) was then used for the color reaction. The tissue sections were washed with phosphate-buffered saline (PBS; 0.01 mol/L, pH 7.4) between each step. Positive and negative controls were run in parallel to ensure the specificity and reliability of the staining process. A known positive section served as the positive control; in the negative control, PBS replaced the primary antibody. GSTP positivity was indicated by brown coloration of the cytoplasm. Morphometric analysis of the percent area of foci was carried out using Leica Qwin V.3 software after capturing the images with a Leica DM2500 microscope (Leica, Wetzlar, Germany). The PCNA labeling indices are expressed according to the number of fields containing positively stained nuclei (10 fields/slide at 400 ×), as follows: +, positive expression found in 1-3 fields; ++, positive expression found in 4-6 fields; and +++, positive expression found in 7-10 fields. Rabbit monoclonal PCNA and GSTP antibodies were purchased from Abcam (San Francisco, United States). Statistical analysis: All results are expressed as mean ± SD. Statistical analysis was carried out using GraphPad Prism version 6.01. An unpaired t-test was used to compare the naïve and PCL groups. One-way ANOVA followed by Tukey's test was carried out to compare all groups. P values < 0.05 were considered statistically significant.
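The statistical workflow described above (unpaired t-test for naïve vs PCL, then one-way ANOVA with Tukey's post hoc test at P < 0.05) can be reproduced along the following lines. The group values are random placeholders, not the study's data, and SciPy/statsmodels are assumed to be available rather than being the software actually used (the authors used GraphPad Prism).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Placeholder serum ALT values (U/L) for the five groups, n = 6 rats each.
groups = {
    "naive":   rng.normal(40, 5, 6),
    "PCL":     rng.normal(120, 10, 6),
    "cyan-10": rng.normal(95, 10, 6),
    "cyan-15": rng.normal(80, 10, 6),
    "cyan-20": rng.normal(70, 10, 6),
}

# Unpaired t-test: naive vs PCL.
t_stat, p_t = stats.ttest_ind(groups["naive"], groups["PCL"])
print(f"naive vs PCL: t = {t_stat:.2f}, p = {p_t:.4f}")

# One-way ANOVA across all groups, followed by Tukey's HSD post hoc test.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05).summary())
```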
RESULTS: Effect on AFP levels and liver function tests: As shown in Figure 1, following induction of PCL, rats exhibited a significant increase in AFP levels. AFP levels were significantly decreased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. There was no significant difference between the AFP levels of the three cyan groups. Liver function tests were affected after receiving DEN/2-AAF: rats exhibited a significant increase in levels of ALT and total and direct bilirubin, and a significant decrease in serum albumin (Figure 2). As shown in Figure 2A, rats that received cyan at the different doses exhibited a significant decrease in serum ALT compared to the PCL group. The cyan-15 group showed a significant decrease in serum ALT compared to the cyan-10 group. Moreover, the cyan-20 group showed a significant decrease in serum ALT compared to the cyan-10 group. As depicted in Figure 2B, compared to the PCL group, the rats in the three cyan groups exhibited a significant decrease in serum total bilirubin. Furthermore, the cyan-20 group showed a significant decrease in serum total bilirubin compared to the cyan-10 group. As shown in Figure 2C, serum direct bilirubin was significantly decreased in the three cyan groups compared to the PCL group. Moreover, the cyan-20 group showed a significant decrease in serum direct bilirubin compared to the cyan-10 group. Administration of cyan at doses of 10, 15, and 20 mg/kg/d produced a significant increase in serum albumin compared to the PCL group. However, there was no significant difference between the serum albumin levels of the three cyan groups (Figure 2D). Figure 1: Effect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) on alpha fetoprotein in the serum of rats. Values are mean ± SEM; number of animals = 6 rats/group. aP < 0.05, significant difference compared to the naïve group, unpaired t test; bP < 0.05, significant difference compared to the precancerous lesion group. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; AFP: Alpha fetoprotein. Figure 2: Effect of cyanidin 3-glucoside at different doses (10, 15 and 20 mg/kg/d) in rats. A: Alanine aminotransferase; B: Total bilirubin; C: Direct bilirubin; D: Serum albumin. Values are mean ± SEM; number of animals = 6 rats/group. bP < 0.05, significant difference compared to the precancerous lesion group; cP < 0.05, significant difference between the two selected groups, one-way ANOVA followed by Tukey's test. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; ALT: Alanine aminotransferase.
Effect on expression of lncRNA MALAT1, miR-125b, and TUBG1 (G2/M transition of the mitotic cell cycle) in the liver tissue: As shown in Figure 3, induction of PCL resulted in a significant increase in lncRNA MALAT1, a significant decrease in RQ of miR-125b, and a significant increase in RQ of TUBG1 expression in comparison to the naïve group. RQ of lncRNA MALAT1 in the liver tissue was significantly decreased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. Furthermore, the cyan-20 group showed a significant decrease in RQ of lncRNA MALAT1 in the liver tissue compared to the cyan-10 group (Figure 3A). On the contrary, RQ of miR-125b in the liver tissue was significantly increased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. In addition, the cyan-20 group showed a significant increase in RQ of miR-125b in the liver tissue compared to the cyan-10 group (Figure 3B). Furthermore, RQ of TUBG1 in the liver tissue was significantly decreased in the cyan-10, cyan-15, and cyan-20 groups compared to the PCL group. Also, the cyan-20 group showed a significant decrease in RQ of TUBG1 in the liver tissue compared to the cyan-10 group (Figure 3C). Figure 3: Effect on expression of long non-coding RNA MALAT1, miR-125b, and tubulin gamma 1 (G2/M transition of the mitotic cell cycle) in the liver tissue. A: RQ of long non-coding RNA MALAT1; B: RQ of miR-125b; C: RQ of TUBG1. Values are mean ± SEM; number of animals = 6 rats/group. aP < 0.05, significant difference compared to the naïve group; bP < 0.05, significant difference compared to the precancerous lesion group; cP < 0.05, significant difference between the two selected groups, one-way ANOVA followed by Tukey's test. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; TUBG: Tubulin gamma 1; lncRNA: Long non-coding RNA.
Histopathological and immunohistochemical studies: Light microscopic examination of H&E-stained sections of naïve rats' livers showed normal hepatic architecture, cords of hepatocytes radiating from the central vein, portal triads present in between, polygonal hepatocytes with central rounded vesicular nuclei, and hepatic sinusoids in between (Figure 4A and B). Liver sections obtained from rats that received DEN/2-AAF (Figure 4C-E) showed disruption of the normal hepatic lobular architecture along with the presence of large, well-demarcated dysplastic nodules compressing the surrounding liver tissue, with high-grade dysplastic cells displaying an increased nuclear:cytoplasmic ratio (Figure 4E), nuclear hyperchromatosis, and basophilic cytoplasm. Liver sections of rats treated with the different doses of cyanidin (10, 15, and 20 mg/kg) showed small and less demarcated dysplastic nodules (Figure 4F-H). Figure 4: Photomicrographs of liver sections stained with H&E. A and B: Naïve group liver sections showed normal hepatic architecture, cords of hepatocytes radiating from the central vein with portal triads present in between, polygonal hepatocytes with central rounded vesicular nuclei, and hepatic sinusoids in between; C-E: Liver sections of rats that received diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF) showed larger, well-demarcated dysplastic nodules (dotted shapes) compressing the surrounding liver tissue with disruption of the normal hepatic lobular architecture; D: Liver sections of rats that received DEN/2-AAF showed eosinophilic foci of cellular alteration consisting of enlarged hepatocytes with increased acidophilic staining and vacuolated nuclei (arrowhead); E: Liver sections of rats that received DEN/2-AAF showed clear cell foci formed of hepatocytes showing variable degrees of cytoplasmic vacuolation and ballooning with pyknotic nuclei (arrow); F-H: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg), respectively, showing small and less demarcated dysplastic nodules (dotted shapes). (Magnification: A, C, F, G, H × 1000; B, D, E × 400). CV: Central vein; PV: Portal vein; Pt: Portal triads; S: Sinusoid; H: Hepatocytes. On immunohistochemical examination with the GSTP antibody, large hyperplastic nodules were demonstrated by brown coloration. Figure 5A and B show the liver sections of the naïve group, while Figure 5C shows large, positively stained GSTP foci in the PCL group. Liver sections from rats of all cyan-treated groups exhibited small GSTP-positive hepatic foci of different sizes scattered in between negatively stained hepatic parenchyma (Figure 5D-F). The percent area of GSTP foci was significantly increased in the PCL group compared to the naïve group, while all cyan-treated groups showed a significant decrease in the GSTP percent area. Furthermore, cyan-15 and cyan-20 showed a significant decrease in the GSTP percent area compared to cyan-10 (Table 1). PCNA immunohistochemical analysis showed elevated expression of PCNA in the group that received DEN/2-AAF compared to the naïve group (Figure 6A and B). Liver sections obtained from rats that received cyan at its three doses showed a decrease in PCNA-positively stained nuclei (Figure 6C-E and Table 1). Figure 5: Photomicrographs of liver sections of rats immunohistochemically stained with glutathione S-transferase placental antibody.
A and B: Naïve group; C: Precancerous lesion group showing multiple glutathione S-transferase placental (GSTP)-positive large hepatic nodules (brown-stained nodules) occupying most of the section; D-F: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg) showing GSTP-positive small hepatic foci (brown-stained cells, arrow) of different sizes scattered in between negatively stained hepatic parenchyma. (Magnification: × 40). Figure 6: Photomicrographs of liver sections immunohistochemically stained with proliferating cell nuclear antigen. A: Negative reaction of the control group; B: Diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF) group showing positively stained nuclei scattered all over the field; C-E: Liver sections of rats treated with different doses of cyanidin (10, 15, 20 mg/kg), respectively, showing few positive hepatocytes sporadically distributed over the field (arrow). (Magnification: × 100). Table 1: Expression rates of hepatocytes positive for glutathione S-transferase placental and proliferating cell nuclear antigen, calculated as the number of fields with positive expression among 10 fields per rat liver section. Values are mean ± SD; number of animals = 6 rats/group; 10 fields examined per liver section. P < 0.05, significant difference compared to the naïve group. P < 0.05, significant difference compared to the precancerous lesion group. P < 0.05, significant difference compared to cyanidin-3-O-glucoside-10, one-way ANOVA followed by Tukey's test. +: Positive expression found in 1-3 fields; ++: Positive expression found in 4-6 fields; +++: Positive expression found in 7-10 fields. PCL: Precancerous lesion; Cyan: Cyanidin-3-O-glucoside; GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen.
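The semiquantitative +/++/+++ grading used in Table 1 (number of positive fields out of 10 examined per section) can be written as a small helper function; this is only an illustration of the stated scoring rule, not part of the original analysis.

```python
def labeling_index_grade(positive_fields: int, examined_fields: int = 10) -> str:
    """Map the number of GSTP/PCNA-positive fields (out of the 10 fields examined
    per liver section) to the +/++/+++ grades used in Table 1."""
    if not 0 <= positive_fields <= examined_fields:
        raise ValueError("positive_fields must lie between 0 and examined_fields")
    if positive_fields == 0:
        return "-"        # no positive field (not graded in the table)
    if positive_fields <= 3:
        return "+"
    if positive_fields <= 6:
        return "++"
    return "+++"

print([labeling_index_grade(n) for n in (0, 2, 5, 9)])  # ['-', '+', '++', '+++']
```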
DISCUSSION: Altered expression of mitotic spindle assembly genes has been detected in many cancers. For example, MAD2 gene levels are downregulated in breast carcinoma[18], while mitotic checkpoint serine/threonine kinase BUB1 gene expression is dysregulated in colorectal carcinoma[19]. Impairment of spindle assembly genes is frequently detected in HCC[20,21]. Moreover, inhibition of mitotic assembly genes has been found to be lethal to cancer cells and has a promising therapeutic effect in cancer treatment[22,23]. In this study, we used several public microarray databases to construct a simple genetic-epigenetic network specific to HCC and linked to the cell cycle and mitotic spindle formation. We assessed the antiproliferative effect of cyan in an HCC animal model via modulation of lncRNA MALAT1-miR-125b-TUBG1 mRNA axis expression. Microtubules are involved both in the maintenance of cell shape and motility and in mitotic spindle formation[24]. γ-TUBG is a major player supporting microtubule nucleation; it acts as a binding site for the α/β-tubulin dimer, and mammals have two γ-TUBG genes, TUBG1 and TUBG2[25]. TUBG levels in the centrosome change during cell cycle progression and usually increase at the start of mitosis. At the end of mitosis, TUBG levels decrease to interphase levels[26]. Overexpression and altered compartmentalization of γ-TUBG may lead to carcinogenesis[27-29]. Interestingly, γ-TUBG interacts with the tumor suppressor protein C53 in mammalian interphase cells of different cellular origins[30]. γ-TUBG also interacts with E2F transcription factors, resulting in upregulation of the E2F target genes PCNA and CDKB1[31] and of glutathione S-transferase activity[32]. On the other hand, the mRNA and protein levels of TUBA1C are upregulated in HCC. Clinically, this affects patient survival and the level of metastatic burden along with portal vein thrombosis. In addition, TUBA1C activates cellular proliferation and migration and is a potential therapeutic target for HCC[33]. Recent studies have shed light on the efficacy of microtubule-binding agents in the treatment of HCC, especially in combination with mammalian target of rapamycin inhibitors[34]. One of the few miRNAs that regulate the spindle assembly checkpoint (SAC) is miR-125b. It suppresses the expression of Mad1 (a core SAC protein), which is responsible for inhibiting entry into anaphase until metaphase defects are corrected[35]. Importantly, miRNA-125b has been found to be associated with colorectal cancer[36], multiple myeloma[37], and breast cancer[38].
Song et al[39] demonstrated that the expression of miR-125b in hepatocytes was protective and reflected better survival in HCC tissue samples due to restoration of SIRT6 expression[39]. miR-125b-5p inhibits HCC proliferation by targeting TXNRD1, which is involved in the terminal step of nucleotide biosynthesis[40]. Deregulation of lncRNAs has been shown to be crucial in cancer progression, especially HCC[41,42]. Hung et al[43] demonstrated the involvement of lncRNAs in regulation of the expression of cell cycle-related genes, the p53 gene regulatory pathway, and apoptosis[43,44]. The lncRNA MALAT1 is overexpressed in several solid tumors and is linked with tumor recurrence[45-47]. MALAT1 coordinates RNA polymerase II transcription, pre-mRNA splicing, and mRNA export[48]. Thus, this lncRNA holds great potential in influencing the local concentration of a specific splicing factor during specific stages of the cell cycle. Depletion of MALAT1 in cancer cells inhibits tumorigenicity[49]. MALAT1 regulates E2F1 transcription factor activity, highlighting its pro-proliferative function[50]. MALAT1 is upregulated in many cancer types, including breast cancer[51], cervical cancer, lung cancer[52], and hepatocarcinoma[53-55]. In a previous study, hypoxia enhanced MALAT1 expression, leading to an increase in the proliferating and invading activity of Hep3B cells due to negative interaction with miR-200a[56]. Huang and his group explored the role of specificity protein 1/3 in the regulation of MALAT1 expression in HCC cells[57]. Cyanidin, an anthocyanin (a polyphenolic pigment found in plants), confers several pharmacological benefits, including anticancer properties[58-62]. Cyan mediates cytotoxicity against human monocytic leukemia cells via G2/M phase arrest and induction of apoptosis[63]. Tsai et al[64] evaluated anthocyanins (HAs) as promising anticancer or chemopreventive agents that induce G2/M cell cycle arrest in leukemia cells via modulation of the ATM-Chk1/2-Cdc25C axis[64]. They also found that anthocyanin inhibits the proliferation of HCC cells[65,66]. In our study, the effect of cyan on cancerous tissue was dose dependent (10, 15, and 20 mg/kg/d); that is, its effect increased with increasing drug concentration. This was also supported by the correlated significant decreases in the percent area of GSTP foci and in PCNA expression, which are targets of TUBG1 mRNA, together with the changes in the molecular epigenetic markers associated with HCC (lncRNA MALAT1 and miR-125b-1-3p) in the liver tissue of rats in our study (Figure 7). Thus, we can hypothesize that administration of cyan in PCL may result in downregulation of lncRNA MALAT1 with subsequent release of free miR-125b-1-3p, which binds to TUBG1 mRNA and downregulates its expression[65]. The changes in this epigenetic-genetic network affect their target genes PCNA and GSTP. Figure 7: Concept map of the study design. GSTP: Glutathione S-transferase placental; PCNA: Proliferating cell nuclear antigen; TUBG: Tubulin; lncRNA: Long non-coding RNA. Lastly, the cyan-mediated modulation of the cell cycle and mitotic spindle assembly that regulate HCC proliferation might affect the capability of HCC cells to overcome oxidative stress[66,67]. Our data provide the rationale for designing an antimitotic approach combining conventional cytotoxic drugs with phytochemical extracts to inhibit cell cycle progression in cancer. More in vitro functional studies are required to explore the functional mechanism of the chosen RNA panel and validate its role as a drug target biomarker.
CONCLUSION: Cyanidin is a natural molecule that holds great potential for unraveling the mystery of cytotoxic pharmacy in the future. It could be used to devise novel antimitotic drugs that target the cell cycle in HCC. Our results also indicate the great potential of the lncRNA MALAT1-miR-125b-1-3p-TUBG1 mRNA axis as a drug target biomarker for further investigation.
Background: Cyanidin-3-O-glucoside (cyan) exhibits antioxidant and anticancer properties. Cell cycle proteins and antimitotic drugs might be promising therapeutic targets in hepatocellular carcinoma. Methods: In vivo, in DEN/2-AAF-induced hepatic precancerous lesions (PCL), rats were treated with three doses of cyan (10, 15, and 20 mg/kg/d, for four consecutive days per week for 16 wk). Blood and liver tissue samples were collected for measurement of the following: alpha fetoprotein (AFP) and liver function tests; differential expression of the RNA panel was evaluated via real-time polymerase chain reaction. Histopathological examination of liver sections stained with H&E and immunohistochemical study using glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) antibodies were assessed. Results: Cyan administration mitigated the effect of DEN/2-AAF-induced PCL, decreased AFP levels, and improved liver function. Remarkably, treatment with cyan dose-dependently decreased long non-coding RNA MALAT1 and tubulin gamma 1 mRNA expression and increased the levels of miR-125b, all of which are involved in the cell cycle and mitotic spindle assembly. Of note, cyan decreased the GSTP foci percent area and PCNA-positively stained nuclei. Conclusions: Our results indicate that cyan could be used as a potential therapeutic agent to inhibit liver carcinogenesis in a rat model via modulation of the cell cycle.
INTRODUCTION: Hepatocellular carcinoma (HCC) is the commonest primary malignancy of the liver. Its main risk factors include hepatitis B, hepatitis C, and non-alcoholic steatohepatitis. HCC is associated with high mortality and morbidity. Its incidence has increased from 1.4 to 6.2 per 100000 cases per year within the last 30 years[1]. The HCC burden in Egypt is high due to the high prevalence of HCV; liver cancers account for 11.75% of all gastrointestinal cancers and 1.68% of all malignancies. In addition, over 70% of all liver malignancies in Egypt are HCC[2]. To date, HCC remains an unresolved dilemma; its resistance to systemic chemotherapy and radiation makes cure approachable only in very early stages according to the Milan criteria, that is, where the tumor size is < 5 cm (when only one lesion is present) or < 3 cm (when 2-3 lesions are present)[3]. Genes involved in cell cycle control are usually mutated in cancer. Dysregulated mitosis results in genomic instability, which leads to tumor aggressiveness[4]. Microtubules are a key component of mitotic spindles, and thus a crucial part of the process of mitosis. They form an important target in cancer chemotherapy, inducing arrest of mitosis and cell death. On the other hand, while overexpression of γ-tubulin (γ-TUBG) was associated with improved survival among small cell lung cancer patients[5], γ-TUBG overexpression is found to be a characteristic feature of thyroid and breast cancers[6]. Such variation may arise from differences in gene transcription and checkpoint regulation in different cancer tissues[7]. Cyanidin-3-glucoside (cyan) is a potential chemotherapeutic and chemo-protective agent. Through its antioxidant activity, it scavenges radical oxygen species, which decreases the overall number of tumorous cells (malignant and benign) in in vivo studies[8]. Cyan has been implicated in some beneficial health activities[9], including reducing age-associated oxidative stress[10], improving cognitive brain function, and exhibiting anti-diabetic[11], anti-inflammatory[12], anti-atherogenic[13], and anti-obesity activities[14]. Previously, it has been reported that cyan mediates caspase-3 cleavage with DNA fragmentation. Cyan has also been associated with induction of autophagy, a key element involved in cancer elimination, via induction of autophagy-related gene 5 and microtubule-associated protein 1 light chain 3-II[15]. Cyan has drawn increasing attention because of its potential anti-cancer properties and may offer a novel avenue for treating HCC, having shown a potential antiproliferative effect on HepG2 cells[16]. The present study was conducted to evaluate the effect of cyan at three different doses on cell cycle progression and mitotic assembly disorder in hepatic precancerous lesions (PCL) via assessment of the levels of long non-coding RNA (lncRNA) MALAT1, miR-125b, and tubulin gamma 1. We also conducted histopathological examination using H&E staining and immunohistochemical examination using glutathione S-transferase placental (GSTP) and proliferating cell nuclear antigen (PCNA) antibodies in a male Wistar rat model of diethylnitrosamine/2-acetylaminofluorene (DEN/2-AAF)-induced hepatic PCL. CONCLUSION: Further larger in vitro and in vivo studies are needed to elucidate the mechanism of cyanidin cytotoxicity in hepatocellular carcinoma.
11,831
257
[ 594, 59, 79, 157, 267, 32, 206, 113, 209, 339, 63, 3530, 476, 321, 947, 1081, 65 ]
18
[ "cyan", "group", "liver", "10", "significant", "rats", "compared", "20", "sections", "pcl" ]
[ "introduction hepatocellular carcinoma", "carcinoma hcc commonest", "hcv liver cancers", "liver malignancies egypt", "malignancies egypt hcc" ]
null
[CONTENT] Hepatocellular carcinoma therapy | Hepatocellular-carcinoma growth | Hepatocellular-carcinoma model | Hepatocellular-carcinoma size [SUMMARY]
[CONTENT] Hepatocellular carcinoma therapy | Hepatocellular-carcinoma growth | Hepatocellular-carcinoma model | Hepatocellular-carcinoma size [SUMMARY]
null
[CONTENT] Hepatocellular carcinoma therapy | Hepatocellular-carcinoma growth | Hepatocellular-carcinoma model | Hepatocellular-carcinoma size [SUMMARY]
[CONTENT] Hepatocellular carcinoma therapy | Hepatocellular-carcinoma growth | Hepatocellular-carcinoma model | Hepatocellular-carcinoma size [SUMMARY]
[CONTENT] Hepatocellular carcinoma therapy | Hepatocellular-carcinoma growth | Hepatocellular-carcinoma model | Hepatocellular-carcinoma size [SUMMARY]
[CONTENT] Animals | Anthocyanins | Diethylnitrosamine | Female | Glucosides | Glutathione Transferase | Liver | Liver Neoplasms | Liver Neoplasms, Experimental | Precancerous Conditions | Pregnancy | Rats | Rats, Wistar [SUMMARY]
[CONTENT] Animals | Anthocyanins | Diethylnitrosamine | Female | Glucosides | Glutathione Transferase | Liver | Liver Neoplasms | Liver Neoplasms, Experimental | Precancerous Conditions | Pregnancy | Rats | Rats, Wistar [SUMMARY]
null
[CONTENT] Animals | Anthocyanins | Diethylnitrosamine | Female | Glucosides | Glutathione Transferase | Liver | Liver Neoplasms | Liver Neoplasms, Experimental | Precancerous Conditions | Pregnancy | Rats | Rats, Wistar [SUMMARY]
[CONTENT] Animals | Anthocyanins | Diethylnitrosamine | Female | Glucosides | Glutathione Transferase | Liver | Liver Neoplasms | Liver Neoplasms, Experimental | Precancerous Conditions | Pregnancy | Rats | Rats, Wistar [SUMMARY]
[CONTENT] Animals | Anthocyanins | Diethylnitrosamine | Female | Glucosides | Glutathione Transferase | Liver | Liver Neoplasms | Liver Neoplasms, Experimental | Precancerous Conditions | Pregnancy | Rats | Rats, Wistar [SUMMARY]
[CONTENT] introduction hepatocellular carcinoma | carcinoma hcc commonest | hcv liver cancers | liver malignancies egypt | malignancies egypt hcc [SUMMARY]
[CONTENT] introduction hepatocellular carcinoma | carcinoma hcc commonest | hcv liver cancers | liver malignancies egypt | malignancies egypt hcc [SUMMARY]
null
[CONTENT] introduction hepatocellular carcinoma | carcinoma hcc commonest | hcv liver cancers | liver malignancies egypt | malignancies egypt hcc [SUMMARY]
[CONTENT] introduction hepatocellular carcinoma | carcinoma hcc commonest | hcv liver cancers | liver malignancies egypt | malignancies egypt hcc [SUMMARY]
[CONTENT] introduction hepatocellular carcinoma | carcinoma hcc commonest | hcv liver cancers | liver malignancies egypt | malignancies egypt hcc [SUMMARY]
[CONTENT] cyan | group | liver | 10 | significant | rats | compared | 20 | sections | pcl [SUMMARY]
[CONTENT] cyan | group | liver | 10 | significant | rats | compared | 20 | sections | pcl [SUMMARY]
null
[CONTENT] cyan | group | liver | 10 | significant | rats | compared | 20 | sections | pcl [SUMMARY]
[CONTENT] cyan | group | liver | 10 | significant | rats | compared | 20 | sections | pcl [SUMMARY]
[CONTENT] cyan | group | liver | 10 | significant | rats | compared | 20 | sections | pcl [SUMMARY]
[CONTENT] cancer | anti | hcc | cyan | associated | cell | mitosis | cancers | present | potential [SUMMARY]
[CONTENT] database available | database | available | united states | united | states | injected | www | positive | serum [SUMMARY]
null
[CONTENT] potential | great potential | great | target | pharmacy future devise novel | future devise | future devise novel | future devise novel antimitotic | pharmacy future devise | pharmacy future [SUMMARY]
[CONTENT] cyan | group | significant | 10 | rats | compared | liver | serum | groups | 20 [SUMMARY]
[CONTENT] cyan | group | significant | 10 | rats | compared | liver | serum | groups | 20 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] DEN/2-AAF | PCL | three | 10 | 15 | 20 mg/kg | four consecutive days per week | 16 ||| AFP | RNA ||| H&E [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| DEN/2-AAF | PCL | three | 10 | 15 | 20 mg/kg | four consecutive days per week | 16 ||| AFP | RNA ||| H&E ||| Cyan | PCL | AFP ||| RNA MALAT1 | 1 ||| GSTP foci percent | PCNA ||| [SUMMARY]
[CONTENT] ||| ||| DEN/2-AAF | PCL | three | 10 | 15 | 20 mg/kg | four consecutive days per week | 16 ||| AFP | RNA ||| H&E ||| Cyan | PCL | AFP ||| RNA MALAT1 | 1 ||| GSTP foci percent | PCNA ||| [SUMMARY]
Lipoteichoic acid induces surfactant protein-A biosynthesis in human alveolar type II epithelial cells through activating the MEK1/2-ERK1/2-NF-κB pathway.
23031213
Lipoteichoic acid (LTA), a gram-positive bacterial outer membrane component, can cause septic shock. Our previous studies showed that the gram-negative endotoxin, lipopolysaccharide (LPS), could induce surfactant protein-A (SP-A) production in human alveolar epithelial (A549) cells.
BACKGROUND
A549 cells were exposed to LTA. Levels of SP-A, nuclear factor (NF)-κB, extracellular signal-regulated kinase 1/2 (ERK1/2), and mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1 were determined.
METHODS
Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability. Meanwhile, when exposed to 30 μg/ml LTA for 1, 6, and 24 h, the biosynthesis of SP-A mRNA and protein in A549 cells significantly increased. As to the mechanism, LTA enhanced cytosolic and nuclear NF-κB levels in a time-dependent manner. Pretreatment with BAY 11-7082, an inhibitor of NF-κB activation, significantly inhibited LTA-induced SP-A mRNA expression. Sequentially, LTA time-dependently augmented phosphorylation of ERK1/2. In addition, levels of phosphorylated MEK1 were augmented following treatment with LTA.
RESULTS
Therefore, this study showed that LTA can increase SP-A synthesis in human alveolar type II epithelial cells through sequentially activating the MEK1-ERK1/2-NF-κB-dependent pathway.
CONCLUSIONS
[ "Alveolar Epithelial Cells", "Cell Culture Techniques", "Cell Survival", "Humans", "Immunoblotting", "Lipopolysaccharides", "Mitogen-Activated Protein Kinase 1", "Mitogen-Activated Protein Kinase 3", "NF-kappa B", "Pulmonary Surfactant-Associated Protein A", "Real-Time Polymerase Chain Reaction", "Signal Transduction", "Teichoic Acids" ]
3492077
Background
Sepsis can lead to multiorgan failure and death and appears to be triggered by bacterial products, such as lipopolysaccharide (LPS) from gram-negative bacteria and lipoteichoic acid (LTA) from gram-positive ones [1-3]. Infection of the respiratory tract caused by gram-positive bacteria and pneumonia combined with acute lung injury (ALI) are usually the leading causes of mortality from sepsis [4]. In the past few decades, the incidences of sepsis and septic shock have been increasing [5]. Although endotoxin-activated events are clearly important in gram-negative infection, gram-positive bacteria also have crucial roles, but less is known about host responses to them [6]. The increasing prevalence of sepsis from gram-positive bacterial pathogens necessitates a reevaluation of the basic assumptions about the molecular pathogenesis of ALI. Alveolar epithelial type II cells contribute to the maintenance of mucosal integrity by modulating the production of surfactants [7]. Pulmonary surfactants play important roles in protecting the lung during endotoxin-induced injury and infection [8,9]. Surfactant protein (SP)-A is the most abundant pulmonary surfactant protein. Levels of SP-A in bronchiolar lavage fluid are modulated in lung diseases caused by gram-negative or -positive bacteria, including severe pneumonia, acute respiratory distress syndrome, and cardiogenic lung edema [10]. Thus, altered lung SP-A levels can be an effective indicator of pulmonary infection and inflammation. Our previous study showed that LPS selectively induced spa gene expression in human alveolar epithelial A549 cells [11]. LTA, an outer membrane component of gram-positive bacteria, was shown to be one of the critical factors participating in the pathogenesis of sepsis [12,13]. LTA can stimulate inflammatory responses in the lung [14,15]. Therefore, understanding the mechanisms that regulate LTA-mediated cell activation is crucial for the diagnosis, treatment, or prognosis of lung inflammatory diseases. In response to stimuli, LTA can activate macrophages to produce massive amounts of inflammatory factors that exhibit systemic effects in the general circulation [16]. LTA can induce the secretion of various cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor (TNF)-α [17]. These data suggest that LTA can selectively modify gene transcription in various cell types and sequentially augment, and possibly initiate, tissue inflammation. Mitogen-activated protein kinases (MAPKs) are serine/threonine kinases. The first MAPK isoforms to be cloned and characterized were the extracellular signal-regulated kinases 1 and 2 (ERK1/2) [18,19]. ERK1/2 are well documented to be activated by a family of dual-specificity kinases known as the mitogen-activated/ERK kinases (MEKs) [16,20]. A previous study demonstrated that LTA can selectively activate the ERK pathway in the cornea [21]. Our previous study showed that LTA induced TNF-α and IL-6 expression by stimulating phosphorylation of ERK1/2 in macrophages [16]. In addition, LTA also triggered translocation of nuclear factor (NF)-κB from the cytoplasm to nuclei and enhanced its transactivation activity. Meanwhile, the mechanisms responsible for LTA-induced spa gene expression in alveolar epithelial cells are still unknown. In this study, we attempted to evaluate the effects of LTA on SP-A synthesis in human alveolar type II epithelial cells and its possible mechanisms.
null
null
Results
Toxicity of LTA to A549 cells: Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown). LTA-induced enhancement of SP-A biosynthesis in A549 cells: The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure 1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure 1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased the amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure 1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h caused significant 176%, 230%, and 270% increases in SP-A levels, respectively. Figure 1: Effects of lipoteichoic acid (LTA) on the production of surfactant protein (SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.
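The percentage changes quoted for the immunoblots throughout these results come from densitometric quantification of band intensities normalized to the loading control and expressed relative to the untreated control lane. A minimal sketch of that normalization is shown below with made-up arbitrary-unit intensities; it is not the authors' quantification software.

```python
def relative_density(band, loading, band_ctrl, loading_ctrl):
    """Densitometric ratio (target band / loading-control band), expressed
    as a percentage of the untreated control lane."""
    return 100.0 * (band / loading) / (band_ctrl / loading_ctrl)

# Made-up arbitrary-unit intensities for an SP-A blot normalized to beta-actin.
lanes = {"control": (1200, 5000), "LTA 1 h": (2150, 5100),
         "LTA 6 h": (2800, 5050), "LTA 24 h": (3300, 4950)}
ctrl_band, ctrl_actin = lanes["control"]
for name, (band, actin) in lanes.items():
    pct = relative_density(band, actin, ctrl_band, ctrl_actin)
    print(f"{name}: {pct:.0f}% of control")
```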
LTA-induced SP-A mRNA expression in A549 cells: Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure 2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure 2). Pretreatment of A549 cells with BAY 11-7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11-7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure 2). Figure 2: Effects of lipoteichoic acid (LTA) on induction of surfactant protein (SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11-7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively. Augmentation of NF-κB expression and translocation by LTA: Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures 3 and 4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure 3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure 3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively. Figure 3: Effects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Figure 4: Effects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h.
Augmentation of NF-κB expression and translocation by LTA
Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures 3 and 4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure 3A, top panel, lane 2), and the expression of cytosolic NF-κB was further augmented after treatment for 6 and 24 h (lanes 3 and 4). β-Actin was immunodetected as the loading control (Figure 3A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively.

Figure 3. Effects of lipoteichoic acid (LTA) on the expression of the transcription factor nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.

Figure 4. Effects of lipoteichoic acid (LTA) on translocation of the transcription factor nuclear factor (NF)-κB from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.

Treatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure 4A, top panel, lane 2). When cells were exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei increased notably (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected as the loading control (Figure 4A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 4B). Exposure of A549 cells to LTA for 1, 6, and 24 h caused significant 176%, 340%, and 530% enhancements in levels of nuclear NF-κB, respectively.
LTA-enhanced phosphorylation of ERK1/2
The mechanism by which LTA enhanced NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure 5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure 5A, top panel, lane 2), and levels of phosphorylated ERK1/2 were raised further after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure 5A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. In comparison, levels of phosphorylated ERK2 were augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h, respectively (Figure 5B).

Figure 5. Effects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunoreactive protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.
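Note that the ERK1 results are reported as percent increases over control while the ERK2 results are reported as fold changes; the two scales are related by fold = 1 + percent/100. The trivial conversion below is offered only as a reading aid and is not part of the original analysis.

```python
def percent_to_fold(percent_increase):
    """Convert a percent increase over control into a fold change."""
    return 1.0 + percent_increase / 100.0

print([percent_to_fold(p) for p in (259, 170, 334)])
# ERK1: ~3.6-, 2.7-, and 4.3-fold of control at 1, 6, and 24 h
```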
LTA-induced activation of MEK1
Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure 6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure 6A, top panel, lane 1). Exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2), and the amounts of phosphorylated MEK1 increased markedly after exposure for 6 and 24 h (lanes 3 and 4). β-Actin in A549 cells was immunodetected (Figure 6A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 6B). Treatment of A549 cells with LTA for 1, 6, and 24 h caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1, respectively.

Figure 6. Effects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunoreactive protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.
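Throughout these results, values are means ± SEM with significance declared at p < 0.05 versus control; the Methods specify Duncan's multiple-range test and a two-way ANOVA. Duncan's test is not implemented in SciPy, so the sketch below substitutes a one-way ANOVA followed by Tukey's HSD from statsmodels as an illustrative stand-in, applied to hypothetical normalized band intensities (n = 6 per group), not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical normalized band intensities (n = 6 per group)
groups = {
    "control": rng.normal(1.0, 0.10, 6),
    "LTA_1h":  rng.normal(1.8, 0.15, 6),
    "LTA_6h":  rng.normal(3.3, 0.20, 6),
    "LTA_24h": rng.normal(3.7, 0.20, 6),
}

# Mean +/- SEM, as plotted in the bar graphs
for name, values in groups.items():
    print(f"{name}: {values.mean():.2f} +/- {stats.sem(values):.2f}")

# One-way ANOVA across groups, then post hoc pairwise comparisons
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

data = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(data, labels, alpha=0.05))
```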
Conclusions
In summary, we used an alveolar epithelial cell model to study the immunomodulatory responses to LTA. Our results revealed that LTA induces inflammatory responses in alveolar epithelial A549 cells by enhancing SP-A mRNA and protein synthesis. The signal-transducing mechanism underlying LTA-induced regulation of SP-A expression involves sequential phosphorylation of MEK1 and ERK1/2, followed by increased NF-κB expression and translocation to the nucleus. LTA-induced SP-A production in alveolar type II epithelial cells may therefore reflect the status of gram-positive bacteria-induced septic shock and acute lung injury, although additional molecular pathways remain to be investigated. This study has certain limitations, including the use of A549 cells, which are derived from a human lung carcinoma; the effects of LTA on A549 cells may differ from those on normal alveolar epithelial cells. We therefore plan a translational study to evaluate the effects of LTA on alveolar epithelial cells of animals with acute lung injury.
[ "Background", "Cell culture and drug treatment", "Assay of cell viability", "Immunoblotting analyses of SP-A, NF-κB, and phosphorylated and non-phosphorylated ERK1/2 and MEK1", "Extraction of nuclear proteins and immunodetection", "Real-time polymerase chain reaction (PCR) assays", "Statistical analysis", "Toxicity of LTA to A549 cells", "LTA-induced enhancement of SP-A biosynthesis in A549 cells", "LTA-induced SP-A mRNA expression in A549 cells", "Augmentation of NF-κB expression and translocation by LTA", "LTA-enhanced phosphorylation of ERK1/2", "LTA-induced activation of MEK1", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Sepsis can lead to multiorgan failure and death and appears to be triggered by bacterial products, such as lipopolysaccharide (LPS) from gram-negative bacteria and lipoteichoic acid (LTA) from gram-positive ones \n[1-3]. Infection of the respiratory tract caused by gram-positive bacteria and pneumonia combined with acute lung injury (ALI) are usually the leading causes of mortality by sepsis \n[4]. In the past few decades, the incidences of sepsis and septic shock have been increasing \n[5]. Although endotoxin-activated events are clearly important in gram-negative infection, gram-positive bacteria also have crucial roles, but less is known about host responses to them \n[6]. The increasing prevalence of sepsis from gram-positive bacterial pathogens necessitates a reevaluation of the basic assumptions about the molecular pathogenesis of ALI.\nAlveolar epithelial type II cells contribute to the maintenance of mucosal integrity by modulating the production of surfactants \n[7]. Pulmonary surfactants play important roles in protecting the lung during endotoxin-induced injury and infection \n[8,9]. Surfactant protein (SP)-A is the most abundant pulmonary surfactant protein. Levels of SP-A in bronchiolar lavage fluid are modulated in gram-negative or -positive bacteria-caused lung diseases, including severe pneumonia, acute respiratory distress syndrome, and cardiogenic lung edema \n[10]. Thus, altering lung SP-A levels can be an effective indicator for pulmonary infection and inflammation. Our previous study showed that LPS selectively induced spa gene expression in human alveolar epithelial A549 cells \n[11].\nLTA, an outer membrane component of gram-positive bacteria, was shown to be one of the critical factors participating in the pathogenesis of sepsis \n[12,13]. LTA can stimulate inflammatory responses in the lung \n[14,15]. Therefore, understanding the mechanisms that regulate LTA-mediated cell activation is crucial for diagnosis, treatment, or prognosis of lung inflammatory diseases. In response to stimuli, LTA can activate macrophages to produce massive amounts of inflammatory factors that exhibit systemic effects in the general circulation \n[16]. LTA can induce the secretion of various cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor (TNF)-α \n[17]. These data suggest that LTA can selectively modify gene transcription of various cell types and sequentially augment and possibly initiate tissue inflammation.\nMitogen-activated protein kinases (MAPKs) are serine/threonine kinases. The first MAPK isoforms to be cloned and characterized were the extracellular signal-regulated kinase 1 and 2 (ERK 1/2) \n[18,19]. ERK 1/2 are well documented to be activated by a family of dual-specificity kinases known as the mitogen-activated/ERK kinases (MEKs) \n[16,20]. A previous study demonstrated that LTA can selectively activate the ERK pathway in the cornea \n[21]. Our previous study showed that LTA induced TNF-α and IL-6 expressions by means of stimulating phosphorylation of ERK1/2 in macrophages \n[16]. In addition, LTA also triggered translocation of nuclear factor (NF)-κB from the cytoplasm to nuclei and its transactivation activity. Meanwhile, the mechanisms responsible for LTA-induced spa gene expression in alveolar epithelial cells are still unknown. 
In this study, we attempted to evaluate the effects of LTA on SP-1 synthesis in human alveolar type II epithelial cells and its possible mechanisms.", "A human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method \n[3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham’s F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma.", "Cell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described \n[22]. Briefly, A549 cells (104 cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, macrophages were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. The blue formazan products in the macrophages were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm.", "Protein levels were immunodetected according to a previously described method \n[11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia).", "Amounts of nuclear transcription factors were quantified following a previously described method \n[20]. After drug treatment, nuclear extracts of macrophages were prepared. Protein concentrations were quantified by a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. 
Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer).", "Messenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGA AAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'- GTCTACATGTCTCGATCCCACTTA A -3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-5' for β-actin mRNA. A quantitative PCR analysis was carried out using iQSYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously \n[11].", "Statistical differences were considered significant when the p value of Duncan’s multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA).", "Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown).", "The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure \n1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure \n1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure \n1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels.\nEffects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.", "Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure \n2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure \n2). Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure \n2).\nEffects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. 
In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively.", "Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures \n3 and \n4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure \n3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure \n3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively.\nEffects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nEffects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nTreatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure \n4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure \n4A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n4B). Exposure of A549 cells to LTA for 1, 6, and 24 h respectively caused significant 176%, 340%, 530% enhancements in levels of nuclear NF-κB.", "The reason why LTA improved NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure \n5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure \n5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were obviously raised after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure \n5A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. 
In comparison, levels of phosphorylated ERK2 were respectively augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h (Figure \n5B).\nEffects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.", "Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure \n6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure \n6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had obviously increased (lanes 3 and 4). β-actin in A549 cells was immunodetected (Figure \n6A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n6B). Treatment of A549 cells with LTA for 1, 6, and 24 h respectively caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1.\nEffects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.", "ALI: Acute lung injury; ERK1/2: Extracellular signal-regulated kinase 1/2; IL: Interleukin; LPS: Lipopolysaccharide; LTA: Lipoteichoic acid; NF-κB: Nuclear factor-κB; MAPKs: Mitogen-activated protein kinases; MEK1: Mitogen-activated/extracellular signal-regulated kinase kinase 1; PCNA: Proliferating cell nuclear antigen; SDS-PAGE: Sodium dodecylsulfate polyacrylamide gel electrophoresis; SP-A: Surfactant protein-A; TLR2: Toll-like receptor 2; TNF-α: Tumor necrosis factor-α.", "The authors declare that they have no competing interests.", "FLL, CYC, and RMC visualized experimental design. YTT and HLT refined the experimental approach. TGC did the statistical analysis. TLC had significant intellectual input into the development of this work, and added to the Discussion. All authors reviewed data and results, and had significant input into the writing of the final manuscript. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and methods", "Cell culture and drug treatment", "Assay of cell viability", "Immunoblotting analyses of SP-A, NF-κB, and phosphorylated and non-phosphorylated ERK1/2 and MEK1", "Extraction of nuclear proteins and immunodetection", "Real-time polymerase chain reaction (PCR) assays", "Statistical analysis", "Results", "Toxicity of LTA to A549 cells", "LTA-induced enhancement of SP-A biosynthesis in A549 cells", "LTA-induced SP-A mRNA expression in A549 cells", "Augmentation of NF-κB expression and translocation by LTA", "LTA-enhanced phosphorylation of ERK1/2", "LTA-induced activation of MEK1", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Sepsis can lead to multiorgan failure and death and appears to be triggered by bacterial products, such as lipopolysaccharide (LPS) from gram-negative bacteria and lipoteichoic acid (LTA) from gram-positive ones \n[1-3]. Infection of the respiratory tract caused by gram-positive bacteria and pneumonia combined with acute lung injury (ALI) are usually the leading causes of mortality by sepsis \n[4]. In the past few decades, the incidences of sepsis and septic shock have been increasing \n[5]. Although endotoxin-activated events are clearly important in gram-negative infection, gram-positive bacteria also have crucial roles, but less is known about host responses to them \n[6]. The increasing prevalence of sepsis from gram-positive bacterial pathogens necessitates a reevaluation of the basic assumptions about the molecular pathogenesis of ALI.\nAlveolar epithelial type II cells contribute to the maintenance of mucosal integrity by modulating the production of surfactants \n[7]. Pulmonary surfactants play important roles in protecting the lung during endotoxin-induced injury and infection \n[8,9]. Surfactant protein (SP)-A is the most abundant pulmonary surfactant protein. Levels of SP-A in bronchiolar lavage fluid are modulated in gram-negative or -positive bacteria-caused lung diseases, including severe pneumonia, acute respiratory distress syndrome, and cardiogenic lung edema \n[10]. Thus, altering lung SP-A levels can be an effective indicator for pulmonary infection and inflammation. Our previous study showed that LPS selectively induced spa gene expression in human alveolar epithelial A549 cells \n[11].\nLTA, an outer membrane component of gram-positive bacteria, was shown to be one of the critical factors participating in the pathogenesis of sepsis \n[12,13]. LTA can stimulate inflammatory responses in the lung \n[14,15]. Therefore, understanding the mechanisms that regulate LTA-mediated cell activation is crucial for diagnosis, treatment, or prognosis of lung inflammatory diseases. In response to stimuli, LTA can activate macrophages to produce massive amounts of inflammatory factors that exhibit systemic effects in the general circulation \n[16]. LTA can induce the secretion of various cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor (TNF)-α \n[17]. These data suggest that LTA can selectively modify gene transcription of various cell types and sequentially augment and possibly initiate tissue inflammation.\nMitogen-activated protein kinases (MAPKs) are serine/threonine kinases. The first MAPK isoforms to be cloned and characterized were the extracellular signal-regulated kinase 1 and 2 (ERK 1/2) \n[18,19]. ERK 1/2 are well documented to be activated by a family of dual-specificity kinases known as the mitogen-activated/ERK kinases (MEKs) \n[16,20]. A previous study demonstrated that LTA can selectively activate the ERK pathway in the cornea \n[21]. Our previous study showed that LTA induced TNF-α and IL-6 expressions by means of stimulating phosphorylation of ERK1/2 in macrophages \n[16]. In addition, LTA also triggered translocation of nuclear factor (NF)-κB from the cytoplasm to nuclei and its transactivation activity. Meanwhile, the mechanisms responsible for LTA-induced spa gene expression in alveolar epithelial cells are still unknown. 
In this study, we attempted to evaluate the effects of LTA on SP-1 synthesis in human alveolar type II epithelial cells and its possible mechanisms.", " Cell culture and drug treatment A human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method \n[3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham’s F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma.\nA human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method \n[3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham’s F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma.\n Assay of cell viability Cell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described \n[22]. Briefly, A549 cells (104 cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, macrophages were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. The blue formazan products in the macrophages were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm.\nCell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described \n[22]. Briefly, A549 cells (104 cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, macrophages were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. The blue formazan products in the macrophages were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm.\n Immunoblotting analyses of SP-A, NF-κB, and phosphorylated and non-phosphorylated ERK1/2 and MEK1 Protein levels were immunodetected according to a previously described method \n[11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. 
Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia).\nProtein levels were immunodetected according to a previously described method \n[11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia).\n Extraction of nuclear proteins and immunodetection Amounts of nuclear transcription factors were quantified following a previously described method \n[20]. After drug treatment, nuclear extracts of macrophages were prepared. Protein concentrations were quantified by a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer).\nAmounts of nuclear transcription factors were quantified following a previously described method \n[20]. After drug treatment, nuclear extracts of macrophages were prepared. Protein concentrations were quantified by a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). 
A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer).\n Real-time polymerase chain reaction (PCR) assays Messenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGA AAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'- GTCTACATGTCTCGATCCCACTTA A -3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-5' for β-actin mRNA. A quantitative PCR analysis was carried out using iQSYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously \n[11].\nMessenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGA AAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'- GTCTACATGTCTCGATCCCACTTA A -3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-5' for β-actin mRNA. A quantitative PCR analysis was carried out using iQSYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously \n[11].\n Statistical analysis Statistical differences were considered significant when the p value of Duncan’s multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA).\nStatistical differences were considered significant when the p value of Duncan’s multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA).", "A human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method \n[3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham’s F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma.", "Cell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described \n[22]. Briefly, A549 cells (104 cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, macrophages were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. 
The blue formazan products in the macrophages were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm.", "Protein levels were immunodetected according to a previously described method \n[11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia).", "Amounts of nuclear transcription factors were quantified following a previously described method \n[20]. After drug treatment, nuclear extracts of macrophages were prepared. Protein concentrations were quantified by a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer).", "Messenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGA AAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'- GTCTACATGTCTCGATCCCACTTA A -3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-5' for β-actin mRNA. A quantitative PCR analysis was carried out using iQSYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously \n[11].", "Statistical differences were considered significant when the p value of Duncan’s multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA).", " Toxicity of LTA to A549 cells Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. 
Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown).\nCell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown).\n LTA-induced enhancement of SP-A biosynthesis in A549 cells The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure \n1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure \n1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure \n1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels.\nEffects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nThe effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure \n1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure \n1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure \n1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels.\nEffects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\n LTA-induced SP-A mRNA expression in A549 cells Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure \n2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. 
Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure \n2). Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure \n2).\nEffects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively.\nInduction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure \n2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure \n2). Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure \n2).\nEffects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively.\n Augmentation of NF-κB expression and translocation by LTA Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures \n3 and \n4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure \n3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure \n3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively.\nEffects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nEffects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). 
Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nTreatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure \n4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure \n4A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n4B). Exposure of A549 cells to LTA for 1, 6, and 24 h respectively caused significant 176%, 340%, 530% enhancements in levels of nuclear NF-κB.\nMechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures \n3 and \n4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure \n3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure \n3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively.\nEffects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nEffects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nTreatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure \n4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure \n4A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n4B). Exposure of A549 cells to LTA for 1, 6, and 24 h respectively caused significant 176%, 340%, 530% enhancements in levels of nuclear NF-κB.\n LTA-enhanced phosphorylation of ERK1/2 The reason why LTA improved NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure \n5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure \n5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were obviously raised after exposure to LTA for 6 and 24 h (lanes 3 and 4). 
Amounts of β-actin in A549 cells were immunodetected (Figure \n5A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. In comparison, levels of phosphorylated ERK2 were respectively augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h (Figure \n5B).\nEffects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.\nThe reason why LTA improved NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure \n5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure \n5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were obviously raised after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure \n5A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. In comparison, levels of phosphorylated ERK2 were respectively augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h (Figure \n5B).\nEffects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.\n LTA-induced activation of MEK1 Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure \n6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure \n6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had obviously increased (lanes 3 and 4). β-actin in A549 cells was immunodetected (Figure \n6A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n6B). Treatment of A549 cells with LTA for 1, 6, and 24 h respectively caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1.\nEffects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. 
An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.\nPhosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure \n6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure \n6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had obviously increased (lanes 3 and 4). β-actin in A549 cells was immunodetected (Figure \n6A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n6B). Treatment of A549 cells with LTA for 1, 6, and 24 h respectively caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1.\nEffects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.", "Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown).", "The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure \n1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure \n1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure \n1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels.\nEffects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.", "Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure \n2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure \n2). Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). 
However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure \n2).\nEffects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively.", "Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures \n3 and \n4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure \n3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure \n3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively.\nEffects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nEffects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit.\nTreatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure \n4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure \n4A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n4B). Exposure of A549 cells to LTA for 1, 6, and 24 h respectively caused significant 176%, 340%, 530% enhancements in levels of nuclear NF-κB.", "The reason why LTA improved NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure \n5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure \n5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were obviously raised after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure \n5A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. 
In comparison, levels of phosphorylated ERK2 were respectively augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h (Figure \n5B).\nEffects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.", "Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure \n6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure \n6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had obviously increased (lanes 3 and 4). β-actin in A549 cells was immunodetected (Figure \n6A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure \n6B). Treatment of A549 cells with LTA for 1, 6, and 24 h respectively caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1.\nEffects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit.", "LTA represents a class of amphiphilic molecules anchored to the outer face of the cytoplasmic membrane in gram-positive bacteria and is commonly released during cell growth, especially under antibiotic therapy \n[1,2]. It can cause cytokine induction in mononuclear phagocytes \n[17]. In previous studies, LTA concentrations of 0.2~50 μg/ml were detected and stimulated activity of polymononuclear leucocyte functions and release of TNF-α in peripheral blood mononuclear cells \n[23,24]. Meanwhile, LTA levels at the infectious site can reach a high level of 26,694 ng/mL \n[25]. The concentration of LTA used in this study was < 50 μg/ml. Therefore, our results show that LTA at clinically relevant concentrations can activate alveolar type II epithelial cells by stimulating production of surfactants.\nDuring bacterial infection, endotoxins, including LTA and LPS, increase capillary permeability and enhance expressions of cellular adhesion molecules, proinflammatory cytokines, and chemokines \n[1,15]. These endotoxins can lead to most of the clinical manifestations of bacterial infection and are associated with ALI \n[4,5]. In addition, LTA can trigger lung inflammation and causes neutrophil influx into the lungs \n[15,26]. This study showed that in response to LTA stimulation, levels of SP-A mRNA and protein in alveolar A549 cells were time-dependently augmented. SP-A contributes to the pulmonary host defense \n[10,16,27]. A previous study reported that when spa gene expression was knocked-out, susceptibility of the lungs to pathogenic infection was simultaneously raised \n[28]. 
Our previous study also showed that LPS-mediated toll-like receptor (TLR) 2 signaling in human alveolar epithelial cells might increase SP-A biosynthesis and subsequently lead to an inflammatory response in the lungs \n[3]. As a result, SP-A could be an effective biomarker for detecting pulmonary infection by gram-negative or -positive bacteria.\nThis study showed that LTA increased the expression of NF-κB and its translocation from the cytoplasm to nuclei. NF-κB is a typical transcription factor in response to stimulation by LTA \n[16]. LTA can bind CD14 and then stimulates TLR activation \n[16,29]. After LTA associates with TLR2, NF-κB can be activated by protein kinases and is then translocated to nuclei from the cytoplasm \n[11]. NF-κB regulates certain gene expressions to control cell proliferation, differentiation, and death \n[30,31]. A previous study showed that LTA induced cyclooxygenase-2 expression in epithelial cells via IκB degradation and successive p65 NF-κB translocation \n[32]. LTA could induce SP-A mRNA expression in A549 cells. Our bioinformatic search revealed that NF-κB-DNA-binding motifs were found in the promoter regions of the spa gene. Suppressing NF-κB activation using BAY 11–7082 simultaneously inhibited LTA-induced SP-A mRNA expression. Thus, LTA transcriptionally induces SP-A expression through inducing NF-κB expression and translocation.\nOur present results revealed that the phosphorylation of ERK1/2 was associated with NF-κB activation. Sequentially, ERK1/2-activated IκBα kinase can phosphorylate IκB at two conserved serine residues in the N-terminus, triggering the degradation of this inhibitor and allowing for the rapid translocation of NF-κB into nuclei \n[16,20]. Accordingly, LTA-induced activation of A549 cells is mainly due to the improvement in ERK1/2 phosphorylation. Roles of ERK1 and ERK2 in LTA-induced SP-A expression were not determined in this study but will be validated using RNA interference in our next study. There is growing evidence that the ERK signaling pathway, which contributes to regulating inflammatory events \n[33]. Therefore, LTA regulates SP-A expression in alveolar type II epithelial cells in the course of eliciting ERK1/2 phosphorylation and subsequent activation of the transcription factor, NF-κB.\nERK activation is mediated by at least three different pathways: a Raf/MEK-dependent pathway, a PI3K/Raf-independent pathway that strongly activates MEK, and a third undetermined pathway that directly activates ERK proteins \n[34]. This study showed that LTA time-dependently increased levels of phosphorylated MEK1. Thus, one of the possible reasons explaining why LTA stimulates ERK1/2 activation is the increase in MEK1 phosphorylation. MAPK-regulating signals place this family of protein kinases in an apparently linear signaling cascade downstream of growth factor receptors, adaptor proteins, guanine-nucleotide exchange factors, Ras, Raf, and MEK \n[19]. The present study demonstrates that LTA can induce SP-A expression via MEK-dependent activation of the protein kinase ERK1/2-signaling pathway.", "In summary, we used an alveolar epithelial cell model to study the immunomodulatory responses of LTA. Our results revealed that LTA can induce inflammatory responses in alveolar epithelial A549 cells by means of enhancing SP-A mRNA and protein syntheses. Moreover, the signal-transducing mechanisms of LTA-caused regulation of SP-A expression arise through the cascade phosphorylations of MEK1 and ERK1/2. 
In succession, LTA increased NF-κB expression and translocation. LTA-induced SP-A production in alveolar type II epithelial cells may indicate the status of gram-positive bacteria-caused septic shock and acute lung injury. More molecular pathways should be investigated and proven in the future. However, there are certain limitations of this study, including the use of A549 cells, which are derived from human lung carcinoma. The effects of LTA on A549 cells may differ from those on normal alveolar epithelial cells. Thus, we will perform a translational study to evaluate the effects of LTA on alveolar epithelial cells of animals with acute lung injury.", "ALI: Acute lung injury; ERK1/2: Extracellular signal-regulated kinase 1/2; IL: Interleukin; LPS: Lipopolysaccharide; LTA: Lipoteichoic acid; NF-κB: Nuclear factor-κB; MAPKs: Mitogen-activated protein kinases; MEK1: Mitogen-activated/extracellular signal-regulated kinase kinase 1; PCNA: Proliferating cell nuclear antigen; SDS-PAGE: Sodium dodecylsulfate polyacrylamide gel electrophoresis; SP-A: Surfactant protein-A; TLR2: Toll-like receptor 2; TNF-α: Tumor necrosis factor-α.", "The authors declare that they have no competing interests.", "FLL, CYC, and RMC visualized experimental design. YTT and HLT refined the experimental approach. TGC did the statistical analysis. TLC had significant intellectual input into the development of this work, and added to the Discussion. All authors reviewed data and results, and had significant input into the writing of the final manuscript. All authors read and approved the final manuscript." ]
[ null, "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions", null, null, null ]
[ "Lipoteichoic acid", "Alveolar epithelial cells", "Surfactant protein-A", "MEK/ERK/NF-κB" ]
Background: Sepsis can lead to multiorgan failure and death and appears to be triggered by bacterial products, such as lipopolysaccharide (LPS) from gram-negative bacteria and lipoteichoic acid (LTA) from gram-positive ones [1-3]. Infection of the respiratory tract caused by gram-positive bacteria and pneumonia combined with acute lung injury (ALI) are usually the leading causes of sepsis-related mortality [4]. In the past few decades, the incidences of sepsis and septic shock have been increasing [5]. Although endotoxin-activated events are clearly important in gram-negative infection, gram-positive bacteria also have crucial roles, but less is known about host responses to them [6]. The increasing prevalence of sepsis from gram-positive bacterial pathogens necessitates a reevaluation of the basic assumptions about the molecular pathogenesis of ALI. Alveolar epithelial type II cells contribute to the maintenance of mucosal integrity by modulating the production of surfactants [7]. Pulmonary surfactants play important roles in protecting the lung during endotoxin-induced injury and infection [8,9]. Surfactant protein (SP)-A is the most abundant pulmonary surfactant protein. Levels of SP-A in bronchiolar lavage fluid are modulated in gram-negative or -positive bacteria-caused lung diseases, including severe pneumonia, acute respiratory distress syndrome, and cardiogenic lung edema [10]. Thus, changes in lung SP-A levels can be an effective indicator of pulmonary infection and inflammation. Our previous study showed that LPS selectively induced spa gene expression in human alveolar epithelial A549 cells [11]. LTA, a cell wall component of gram-positive bacteria, was shown to be one of the critical factors participating in the pathogenesis of sepsis [12,13]. LTA can stimulate inflammatory responses in the lung [14,15]. Therefore, understanding the mechanisms that regulate LTA-mediated cell activation is crucial for the diagnosis, treatment, and prognosis of lung inflammatory diseases. In response to stimuli, LTA can activate macrophages to produce massive amounts of inflammatory factors that exhibit systemic effects in the general circulation [16]. LTA can induce the secretion of various cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor (TNF)-α [17]. These data suggest that LTA can selectively modify gene transcription in various cell types and sequentially augment, and possibly initiate, tissue inflammation. Mitogen-activated protein kinases (MAPKs) are serine/threonine kinases. The first MAPK isoforms to be cloned and characterized were extracellular signal-regulated kinases 1 and 2 (ERK1/2) [18,19]. ERK1/2 are well documented to be activated by a family of dual-specificity kinases known as the mitogen-activated/ERK kinases (MEKs) [16,20]. A previous study demonstrated that LTA can selectively activate the ERK pathway in the cornea [21]. Our previous study showed that LTA induced TNF-α and IL-6 expression by stimulating phosphorylation of ERK1/2 in macrophages [16]. In addition, LTA also triggered translocation of nuclear factor (NF)-κB from the cytoplasm to nuclei and its transactivation activity. However, the mechanisms responsible for LTA-induced spa gene expression in alveolar epithelial cells are still unknown. In this study, we evaluated the effects of LTA on SP-A synthesis in human alveolar type II epithelial cells and the possible underlying mechanisms. 
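The signaling sequence outlined in this Background (LTA recognition at the cell surface, MEK-dependent ERK1/2 activation, and NF-κB translocation driving spa transcription) is easier to follow when written out as an ordered list. The snippet below is only a schematic of the hypothesis tested in this study, expressed as plain Python data; the receptor step (CD14/TLR2) follows the cited reports, and none of this is code used by the authors.

```python
# Schematic of the LTA -> SP-A signaling hypothesis examined in this study.
# This is an illustration of the proposed order of events, not analysis code
# from the paper; receptor usage (CD14/TLR2) follows the cited references.

CASCADE = [
    ("stimulus",      "LTA (lipoteichoic acid, gram-positive cell wall)"),
    ("recognition",   "CD14 / TLR2 on alveolar type II epithelial cells"),
    ("MAP2K",         "MEK1 phosphorylation"),
    ("MAPK",          "ERK1/2 phosphorylation"),
    ("transcription", "NF-kB expression and nuclear translocation"),
    ("output",        "SP-A (spa) mRNA and protein synthesis"),
]

if __name__ == "__main__":
    for step, (stage, event) in enumerate(CASCADE, start=1):
        print(f"{step}. [{stage}] {event}")
```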
Materials and methods: Cell culture and drug treatment A human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method [3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham’s F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma. A human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method [3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham’s F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma. Assay of cell viability Cell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described [22]. Briefly, A549 cells (104 cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, macrophages were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. The blue formazan products in the macrophages were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm. Cell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described [22]. Briefly, A549 cells (104 cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, macrophages were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. The blue formazan products in the macrophages were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm. Immunoblotting analyses of SP-A, NF-κB, and phosphorylated and non-phosphorylated ERK1/2 and MEK1 Protein levels were immunodetected according to a previously described method [11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). 
Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia). Protein levels were immunodetected according to a previously described method [11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia). Extraction of nuclear proteins and immunodetection Amounts of nuclear transcription factors were quantified following a previously described method [20]. After drug treatment, nuclear extracts of macrophages were prepared. Protein concentrations were quantified by a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer). Amounts of nuclear transcription factors were quantified following a previously described method [20]. After drug treatment, nuclear extracts of macrophages were prepared. Protein concentrations were quantified by a bicinchonic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. 
Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer). Real-time polymerase chain reaction (PCR) assays Messenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGAAAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'-GTCTACATGTCTCGATCCCACTTAA-3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-3' for β-actin mRNA. A quantitative PCR analysis was carried out using iQ SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously [11]. Messenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGAAAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'-GTCTACATGTCTCGATCCCACTTAA-3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-3' for β-actin mRNA. A quantitative PCR analysis was carried out using iQ SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously [11]. Statistical analysis Statistical differences were considered significant when the p value of Duncan's multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA). Statistical differences were considered significant when the p value of Duncan's multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA). Cell culture and drug treatment: A human lung carcinoma type II epithelial cell line (A549) was cultured following a previous method [3]. A549 cells were grown in Dulbecco's modified Eagle's medium (DMEM)/Ham's F12 culture medium (Sigma, St. Louis, MO, USA), supplemented with 10% (v/v) heat-inactivated fetal calf serum, 100 U/ml penicillin G, 100 μg/ml streptomycin, and 2 mM l-glutamine. A549 cells were seeded in 75-cm2 culture flasks at 37 °C in a humidified atmosphere of 5% CO2. Cells were grown to confluence before drug treatment. LTA purchased from Sigma was dissolved in phosphate-buffered saline (PBS) (0.14 M NaCl, 2.6 mM KCl, 8 mM Na2HPO4, and 1.5 mM KH2PO4). BAY 11–7082, an inhibitor of NF-κB activation, was also purchased from Sigma. Assay of cell viability: Cell viability was determined using a colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay as previously described [22]. Briefly, A549 cells (10⁴ cells/well) were seeded in 96-well tissue culture plates overnight. After drug treatment, the cells were cultured in new medium containing 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide for a further 3 h. The blue formazan products in the cells were dissolved in dimethyl sulfoxide and spectrophotometrically measured at a wavelength of 550 nm. 
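Readings from the MTT assay described above are conventionally reduced to percent-of-control viability. The sketch below shows that calculation on invented A550 values; the blank-correction step is an assumption for illustration, since the Methods do not mention one, and this is not the authors' analysis script.

```python
import statistics

# Hypothetical A550 readings from the MTT assay (n = 6 wells per group).
# Values are invented for illustration; the blank subtraction is an
# assumption, not a detail given in the Methods.
blank = 0.05
control = [0.82, 0.79, 0.85, 0.81, 0.84, 0.80]
lta_24h = [0.83, 0.78, 0.86, 0.80, 0.82, 0.81]   # 30 ug/ml LTA, 24 h

def percent_viability(treated, untreated, blank):
    """Mean blank-corrected absorbance of treated wells as % of control."""
    t = statistics.mean(a - blank for a in treated)
    c = statistics.mean(a - blank for a in untreated)
    return 100.0 * t / c

print(f"Viability (LTA 24 h): {percent_viability(lta_24h, control, blank):.1f}% of control")
```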
Immunoblotting analyses of SP-A, NF-κB, and phosphorylated and non-phosphorylated ERK1/2 and MEK1: Protein levels were immunodetected according to a previously described method [11]. After drug treatment, cell lysates were prepared in ice-cold radioimmunoprecipitation assay buffer (25 mM Tris–HCl (pH 7.2), 0.1% sodium dodecylsulfate (SDS), 1% Triton X-100, 1% sodium deoxycholate, 0.15 M NaCl, and 1 mM EDTA). Protein concentrations were quantified using a bicinchoninic acid protein assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were subjected to sodium dodecylsulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. Immunodetection of SP-A and NF-κB was carried out using rabbit polyclonal antibodies against human SP-A and NF-κB (Santa Cruz Biotechnology, Santa Cruz, CA, USA). Cellular β-actin protein was immunodetected using a mouse monoclonal antibody (mAb) against mouse β-actin (Sigma) as the internal standard. These protein bands were quantified using a digital imaging system (UVtec, Cambridge, UK). Phosphorylated ERK1/2 and MEK1 were immunodetected using a rabbit polyclonal antibody against phosphorylated residues of ERK1/2 and MEK1 (Cell Signaling, Danvers, MA, USA). Nonphosphorylated ERK2 and MEK1 were immunodetected as the internal controls (Cell Signaling). Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer, Melbourne, Australia). Extraction of nuclear proteins and immunodetection: Amounts of nuclear transcription factors were quantified following a previously described method [20]. After drug treatment, nuclear extracts of A549 cells were prepared. Protein concentrations were quantified by a bicinchoninic acid protein assay kit (Pierce, Rockford, IL, USA). Nuclear proteins (50 μg/well) were subjected to SDS-PAGE and transferred to nitrocellulose membranes. After blocking, NF-κB was immunodetected using a rabbit polyclonal antibody against mouse NF-κB p65 (Santa Cruz Biotechnology). A proliferating cell nuclear antigen (PCNA) was detected using a mouse mAb against the rat PCNA protein (Santa Cruz Biotechnology) as the internal standard. Intensities of the immunoreactive bands were determined using a digital imaging system (Wallac Victor 1420, PerkinElmer). Real-time polymerase chain reaction (PCR) assays: Messenger (m)RNA from A549 cells exposed to LTA were prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Oligonucleotides for the PCR analyses of SP-A mRNA and β-actin mRNA were designed and synthesized by Clontech Laboratories (Palo Alto, CA, USA). The oligonucleotide sequences of the upstream and downstream primers for these mRNA analyses were respectively 5'-TGAAAGGGAGTTCTAGCATCTCACAGA-3' and 5'-ACATATGCCTATGTAGGCCTGACTGAG-3' for SP-A mRNA, and 5'-GTCTACATGTCTCGATCCCACTTAA-3' and 5'-GGTCTTTCTCTCTCATCGCGCTC-3' for β-actin mRNA. A quantitative PCR analysis was carried out using iQ SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) and the MyiQ Single-Color Real-Time PCR Detection System (Bio-Rad) as described previously [11]. Statistical analysis: Statistical differences were considered significant when the p value of Duncan's multiple-range test was <0.05. Statistical analysis between groups over time was carried out by a two-way analysis of variance (ANOVA). Results: Toxicity of LTA to A549 cells Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. 
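Group comparisons throughout the Results rest on the treatment-by-time design described in the statistical-analysis paragraph above (two-way ANOVA with p < 0.05). The sketch below illustrates that layout on invented values using statsmodels; Duncan's multiple-range test has no implementation in the common Python statistics packages, so no post-hoc step is shown, and this is an illustration rather than the authors' analysis.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented normalized measurements for a treatment x time design
# (control vs. 30 ug/ml LTA at 1, 6, and 24 h); n = 6 per cell in the study.
rows = []
for time in (1, 6, 24):
    for group, base in (("control", 1.0), ("LTA", 1.0 + 0.1 * time)):
        for rep in range(6):
            rows.append({"group": group, "time_h": time,
                         "value": base + 0.05 * (rep - 2.5)})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of treatment and time plus their interaction.
model = ols("value ~ C(group) * C(time_h)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)
print("significant terms (p < 0.05):",
      list(table.index[table["PR(>F)"] < 0.05]))
```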
Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown). Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown). LTA-induced enhancement of SP-A biosynthesis in A549 cells The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure 1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure 1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure 1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels. Effects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure 1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure 1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure 1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels. Effects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. LTA-induced SP-A mRNA expression in A549 cells Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure 2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure 2). 
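Fold changes like these come from real-time PCR values normalized to β-actin; the Methods do not state the quantification model, so the sketch below uses the widely applied 2^(-ΔΔCt) calculation with invented Ct values purely for illustration.

```python
# Relative quantification of SP-A mRNA by the 2^(-ddCt) method.
# The study reports beta-actin-normalized real-time PCR fold changes but does
# not state the exact model; ddCt is shown here as the usual choice, and all
# Ct values below are invented placeholders.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # SP-A vs. beta-actin, LTA
    d_ct_control = ct_target_control - ct_ref_control   # SP-A vs. beta-actin, control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a ddCt of about -1.9 corresponds to a ~3.7-fold induction,
# comparable to the value reported at 24 h (Ct values are placeholders).
print(f"{fold_change(24.1, 16.0, 26.3, 16.3):.1f}-fold")
```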
Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure 2). Effects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively. Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure 2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure 2). Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure 2). Effects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate that the value significantly (p < 0.05) differed from the respective control and LTA-treated group, respectively. Augmentation of NF-κB expression and translocation by LTA Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures 3 and 4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure 3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure 3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively. Effects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Effects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). 
Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Treatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure 4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure 4A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 4B). Exposure of A549 cells to LTA for 1, 6, and 24 h respectively caused significant 176%, 340%, 530% enhancements in levels of nuclear NF-κB. Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures 3 and 4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure 3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was obviously augmented (lanes 3 and 4). β-Actin was immunodetected (Figure 3A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively. Effects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Effects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Treatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure 4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure 4A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 4B). Exposure of A549 cells to LTA for 1, 6, and 24 h respectively caused significant 176%, 340%, 530% enhancements in levels of nuclear NF-κB. LTA-enhanced phosphorylation of ERK1/2 The reason why LTA improved NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure 5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure 5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were obviously raised after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure 5A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 5B). 
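For blots of this kind, quantification typically means dividing each target band by its internal-standard band and expressing the ratio relative to the untreated control (mean ± SEM). The sketch below illustrates that calculation with invented densitometry values for phosphorylated ERK1 against the ERK2 internal standard; it is not the study's raw data or analysis code.

```python
import statistics

# Invented densitometry readings (arbitrary units) for one blot:
# phospho-ERK1 and the ERK2 internal standard, control vs. 30 ug/ml LTA, n = 6.
p_erk1   = {"control": [10, 11,  9, 10, 12, 10], "LTA_24h": [35, 33, 38, 36, 34, 37]}
erk2_std = {"control": [50, 52, 49, 51, 50, 48], "LTA_24h": [51, 50, 52, 49, 50, 51]}

def normalized(group):
    """Target band / internal-standard band, lane by lane."""
    return [t / s for t, s in zip(p_erk1[group], erk2_std[group])]

ctrl = normalized("control")
lta = normalized("LTA_24h")
ctrl_mean = statistics.mean(ctrl)

relative = [x / ctrl_mean for x in lta]           # fold of control, per lane
mean_fold = statistics.mean(relative)
sem = statistics.stdev(relative) / len(relative) ** 0.5

print(f"{mean_fold:.2f}-fold of control, SEM = {sem:.2f}")
```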
Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. In comparison, levels of phosphorylated ERK2 were respectively augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h (Figure 5B). Effects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit. The reason why LTA improved NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure 5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure 5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were obviously raised after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure 5A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. In comparison, levels of phosphorylated ERK2 were respectively augmented by 8.2-, 6.4-, and 7.8-fold following LTA administration for 1, 6, and 24 h (Figure 5B). Effects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit. LTA-induced activation of MEK1 Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure 6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure 6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had obviously increased (lanes 3 and 4). β-actin in A549 cells was immunodetected (Figure 6A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 6B). Treatment of A549 cells with LTA for 1, 6, and 24 h respectively caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1. Effects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit. Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure 6). 
Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure 6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had obviously increased (lanes 3 and 4). β-actin in A549 cells was immunodetected (Figure 6A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 6B). Treatment of A549 cells with LTA for 1, 6, and 24 h respectively caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1. Effects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunorelated protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit. Toxicity of LTA to A549 cells: Cell morphology and viability were assayed to evaluate the toxicity of LTA to human alveolar epithelial A549 cells. Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability (data not shown). When exposed to 30 μg/ml LTA for 1, 6, and 24 h, the viability of A549 cells was not influenced. Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h did not alter cell morphology (data not shown). LTA-induced enhancement of SP-A biosynthesis in A549 cells: The effects of LTA on SP-A levels in A549 cells were evaluated by an immunoblotting analysis (Figure 1). In untreated A549 cells, low levels of SP-A were immunodetected (Figure 1A, top panel, lane 1). After exposure to 30 μg/ml LTA for 1 h, levels of SP-A were found to be augmented (lane 2). When treated for 6 and 24 h, LTA obviously increased amounts of SP-A in A549 cells. β-Actin was immunodetected (Figure 1A, bottom panel). These immunorelated protein bands were quantified and analyzed (Figure 1B). Exposure of A549 cells to 30 μg/ml LTA for 1, 6, and 24 h respectively caused significant 176%, 230%, and 270% increases in SP-A levels. Effects of lipoteichoic acid (LTA) on the production of surfactant protein(SP)-A. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h (A). Cellular proteins were prepared for the immunoblotting analyses. Amounts of SP-A were immunodetected (A, top pane). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. LTA-induced SP-A mRNA expression in A549 cells: Induction of SP-A mRNA expression by LTA was quantified using a real-time PCR analysis (Figure 2). After exposure to LTA for 1 h, the levels of SP-A mRNA in A549 cells were increased by 2.1-fold. Exposure of A549 cells to LTA for 6 and 24 h caused 2.8- and 3.7-fold increases in the levels of SP-A mRNA, respectively (Figure 2). Pretreatment of A549 cells with BAY 11–7082, an inhibitor of NF-κB activation, for 1 h did not change SP-A mRNA expression (data not shown). However, BAY 11–7082 significantly inhibited LTA-induced SP-A mRNA production by 70% (Figure 2). Effects of lipoteichoic acid (LTA) on induction of surfactant protein(SP)-A mRNA. 
A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. In addition, A549 cells were pretreated with BAY 11–7082 (BAY), an inhibitor of NF-κB activation, for 1 h and then exposed to LTA for another 24 h. mRNA was prepared for real-time PCR analyses of SP-A mRNA and β-actin mRNA. Each value represents the mean ± SEM for n = 3. Symbols * and # indicate values that significantly (p < 0.05) differed from the control and the LTA-treated group, respectively. Augmentation of NF-κB expression and translocation by LTA: Mechanisms of LTA-induced SP-A augmentation were evaluated by analyses of NF-κB expression and translocation (Figures 3 and 4). Exposure of A549 cells to LTA for 1 h enhanced levels of cytosolic NF-κB (Figure 3A, top panel, lane 1). After treatment for 6 and 24 h, the expression of cytosolic NF-κB was markedly augmented (lanes 3 and 4). β-Actin was immunodetected (Figure 3A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 3B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased NF-κB production by 181%, 200%, and 230%, respectively. Effects of lipoteichoic acid (LTA) on the expression of the transcription factor, nuclear factor (NF)-κB. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Levels of cytosolic NF-κB p65 (cNF-κB) were immunodetected (A, top panel). β-Actin was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Effects of lipoteichoic acid (LTA) on translocation of the transcription factor, nuclear factor (NF)-κB, from the cytoplasm to nuclei. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Amounts of nuclear NF-κB p65 (nNF-κB) were immunodetected (A, top panel). Proliferating cell nuclear antigen (PCNA) was detected as the internal standard (bottom panel). These protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control groups, p < 0.05. AU, arbitrary unit. Treatment of A549 cells with LTA for 1 h increased levels of nuclear NF-κB (Figure 4A, top panel, lane 2). When exposed for 6 and 24 h, translocation of NF-κB from the cytoplasm to nuclei notably increased (lanes 3 and 4). Amounts of PCNA in A549 cells were immunodetected (Figure 4A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 4B). Exposure of A549 cells to LTA for 1, 6, and 24 h caused significant 176%, 340%, and 530% increases in levels of nuclear NF-κB, respectively. LTA-enhanced phosphorylation of ERK1/2: The mechanism by which LTA enhanced NF-κB activation was further investigated by assaying ERK1/2 phosphorylation (Figure 5). Treatment of A549 cells with LTA for 1 h increased the amounts of phosphorylated ERK1/2 (Figure 5A, top panel, lane 2). Levels of phosphorylated ERK1/2 in A549 cells were markedly elevated after exposure to LTA for 6 and 24 h (lanes 3 and 4). Amounts of β-actin in A549 cells were immunodetected (Figure 5A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 5B). Exposure of A549 cells to LTA for 1, 6, and 24 h significantly increased ERK1 phosphorylation by 259%, 170%, and 334%, respectively. 
In comparison, levels of phosphorylated ERK2 were augmented by 8.2-, 6.4-, and 7.8-fold, respectively, following LTA administration for 1, 6, and 24 h (Figure 5B). Effects of lipoteichoic acid (LTA) on the phosphorylation of extracellular signal-regulated kinase (ERK)1/2. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated ERK1/2 (p-ERK1/2) were immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunoreactive protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit. LTA-induced activation of MEK1: Phosphorylation of MEK1 was assayed to determine the mechanism of LTA-induced ERK1/2 activation (Figure 6). Low levels of phosphorylated MEK1 were detected in untreated A549 cells (Figure 6A, top panel, lane 1). However, exposure of A549 cells to LTA for 1 h stimulated MEK1 phosphorylation (lane 2). After exposure for 6 and 24 h, the amounts of phosphorylated MEK1 had markedly increased (lanes 3 and 4). β-Actin in A549 cells was immunodetected (Figure 6A, bottom panel). These immunoreactive protein bands were quantified and analyzed (Figure 6B). Treatment of A549 cells with LTA for 1, 6, and 24 h caused significant 82%, 330%, and 370% increases in levels of phosphorylated MEK1, respectively. Effects of lipoteichoic acid (LTA) on the phosphorylation of mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1. A549 cells were exposed to 30 μg/ml LTA for 1, 6, and 24 h. Phosphorylated MEK1 (p-MEK1) was immunodetected (A, top panel). ERK2 was detected as the internal standard (bottom panel). These immunoreactive protein bands were quantified and analyzed (B). Each value represents the mean ± SEM for n = 6. An asterisk (*) indicates that a value significantly differed from the control group, p < 0.05. AU, arbitrary unit. Discussion: LTA represents a class of amphiphilic molecules anchored to the outer face of the cytoplasmic membrane in gram-positive bacteria and is commonly released during cell growth, especially under antibiotic therapy [1,2]. It can cause cytokine induction in mononuclear phagocytes [17]. In previous studies, LTA concentrations of 0.2~50 μg/ml were detected and were shown to stimulate polymorphonuclear leukocyte functions and the release of TNF-α from peripheral blood mononuclear cells [23,24]. Meanwhile, LTA levels at the site of infection can reach as high as 26,694 ng/ml [25]. The concentration of LTA used in this study was < 50 μg/ml. Therefore, our results show that LTA at clinically relevant concentrations can activate alveolar type II epithelial cells by stimulating production of surfactants. During bacterial infection, endotoxins, including LTA and LPS, increase capillary permeability and enhance the expression of cellular adhesion molecules, proinflammatory cytokines, and chemokines [1,15]. These endotoxins can lead to most of the clinical manifestations of bacterial infection and are associated with ALI [4,5]. In addition, LTA can trigger lung inflammation and cause neutrophil influx into the lungs [15,26]. This study showed that in response to LTA stimulation, levels of SP-A mRNA and protein in alveolar A549 cells were time-dependently augmented. SP-A contributes to the pulmonary host defense [10,16,27]. 
A previous study reported that when spa gene expression was knocked out, the susceptibility of the lungs to pathogenic infection increased [28]. Our previous study also showed that LPS-mediated toll-like receptor (TLR) 2 signaling in human alveolar epithelial cells might increase SP-A biosynthesis and subsequently lead to an inflammatory response in the lungs [3]. As a result, SP-A could be an effective biomarker for detecting pulmonary infection by gram-negative or -positive bacteria. This study showed that LTA increased the expression of NF-κB and its translocation from the cytoplasm to nuclei. NF-κB is a typical transcription factor activated in response to stimulation by LTA [16]. LTA can bind CD14 and then stimulate TLR activation [16,29]. After LTA associates with TLR2, NF-κB can be activated by protein kinases and is then translocated to nuclei from the cytoplasm [11]. NF-κB regulates the expression of certain genes to control cell proliferation, differentiation, and death [30,31]. A previous study showed that LTA induced cyclooxygenase-2 expression in epithelial cells via IκB degradation and successive p65 NF-κB translocation [32]. LTA could induce SP-A mRNA expression in A549 cells. Our bioinformatic search revealed that NF-κB-DNA-binding motifs were found in the promoter regions of the spa gene. Suppressing NF-κB activation using BAY 11–7082 simultaneously inhibited LTA-induced SP-A mRNA expression. Thus, LTA transcriptionally induces SP-A expression by promoting NF-κB expression and translocation. Our present results revealed that the phosphorylation of ERK1/2 was associated with NF-κB activation. Sequentially, ERK1/2-activated IκBα kinase can phosphorylate IκB at two conserved serine residues in the N-terminus, triggering the degradation of this inhibitor and allowing for the rapid translocation of NF-κB into nuclei [16,20]. Accordingly, LTA-induced activation of A549 cells is mainly due to the increase in ERK1/2 phosphorylation. The respective roles of ERK1 and ERK2 in LTA-induced SP-A expression were not determined in this study but will be validated using RNA interference in our next study. There is growing evidence that the ERK signaling pathway contributes to regulating inflammatory events [33]. Therefore, LTA regulates SP-A expression in alveolar type II epithelial cells by eliciting ERK1/2 phosphorylation and subsequent activation of the transcription factor NF-κB. ERK activation is mediated by at least three different pathways: a Raf/MEK-dependent pathway, a PI3K/Raf-independent pathway that strongly activates MEK, and a third undetermined pathway that directly activates ERK proteins [34]. This study showed that LTA time-dependently increased levels of phosphorylated MEK1. Thus, one possible explanation for LTA-stimulated ERK1/2 activation is the increase in MEK1 phosphorylation. MAPK-regulating signals place this family of protein kinases in an apparently linear signaling cascade downstream of growth factor receptors, adaptor proteins, guanine-nucleotide exchange factors, Ras, Raf, and MEK [19]. The present study demonstrates that LTA can induce SP-A expression via MEK-dependent activation of the ERK1/2 signaling pathway. Conclusions: In summary, we used an alveolar epithelial cell model to study the immunomodulatory responses of LTA. Our results revealed that LTA can induce inflammatory responses in alveolar epithelial A549 cells by enhancing SP-A mRNA and protein synthesis. 
Moreover, the signal-transducing mechanism underlying LTA-induced regulation of SP-A expression involves the cascade phosphorylation of MEK1 and ERK1/2. Subsequently, LTA increased NF-κB expression and translocation. LTA-induced SP-A production in alveolar type II epithelial cells may indicate the status of gram-positive bacterial septic shock and acute lung injury. Additional molecular pathways should be investigated and validated in the future. However, there are certain limitations of this study, including the use of A549 cells, which are derived from a human lung carcinoma. The effects of LTA on A549 cells may differ from those on normal alveolar epithelial cells. Thus, we will perform a translational study to evaluate the effects of LTA on alveolar epithelial cells of animals with acute lung injury. Abbreviations: ALI: Acute lung injury; ERK1/2: Extracellular signal-regulated kinase 1/2; IL: Interleukin; LPS: Lipopolysaccharide; LTA: Lipoteichoic acid; NF-κB: Nuclear factor-κB; MAPKs: Mitogen-activated protein kinases; MEK1: Mitogen-activated/extracellular signal-regulated kinase kinase 1; PCNA: Proliferating cell nuclear antigen; SDS-PAGE: Sodium dodecylsulfate polyacrylamide gel electrophoresis; SP-A: Surfactant protein-A; TLR2: Toll-like receptor 2; TNF-α: Tumor necrosis factor-α. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: FLL, CYC, and RMC conceived the experimental design. YTT and HLT refined the experimental approach. TGC performed the statistical analysis. TLC had significant intellectual input into the development of this work and added to the Discussion. All authors reviewed the data and results and had significant input into the writing of the final manuscript. All authors read and approved the final manuscript.
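For context on the normalisation described above (SP-A mRNA measured by real-time PCR against a β-actin reference), the short R sketch below illustrates one common way such data are reduced to fold changes, the 2^-ΔΔCt method. This is an illustration only: the Ct values are hypothetical and the excerpt does not state which quantification formula the authors actually used.

```r
# Illustrative only: relative quantification of SP-A mRNA against a beta-actin
# reference using the common 2^-deltadeltaCt approach. The Ct values below are
# hypothetical; the paper's own calculation is not shown in this excerpt.
ct <- data.frame(
  group = c("control", "LTA_1h", "LTA_6h", "LTA_24h"),
  sp_a  = c(27.0, 26.1, 25.3, 24.6),   # hypothetical Ct values for SP-A
  actb  = c(18.0, 18.1, 17.9, 18.0)    # hypothetical Ct values for beta-actin
)

dct  <- ct$sp_a - ct$actb                 # delta Ct = target - reference
ddct <- dct - dct[ct$group == "control"]  # delta-delta Ct relative to control
ct$fold_change <- 2^(-ddct)               # expression relative to untreated control

print(ct[, c("group", "fold_change")])
```

The same normalise-then-compare logic applies to the densitometric immunoblot data in this study, where band intensities (arbitrary units) are expressed relative to the internal standard (β-actin, ERK2, or PCNA for nuclear fractions) before being compared with the control group.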
Background: Lipoteichoic acid (LTA), a gram-positive bacterial outer membrane component, can cause septic shock. Our previous studies showed that the gram-negative endotoxin, lipopolysaccharide (LPS), could induce surfactant protein-A (SP-A) production in human alveolar epithelial (A549) cells. Methods: A549 cells were exposed to LTA. Levels of SP-A, nuclear factor (NF)-κB, extracellular signal-regulated kinase 1/2 (ERK1/2), and mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1 were determined. Results: Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability. However, when exposed to 30 μg/ml LTA for 1, 6, and 24 h, the biosynthesis of SP-A mRNA and protein in A549 cells significantly increased. As to the mechanism, LTA enhanced cytosolic and nuclear NF-κB levels in a time-dependent manner. Pretreatment with BAY 11-7082, an inhibitor of NF-κB activation, significantly inhibited LTA-induced SP-A mRNA expression. Sequentially, LTA time-dependently augmented phosphorylation of ERK1/2. In addition, levels of phosphorylated MEK1 were augmented following treatment with LTA. Conclusions: Therefore, this study showed that LTA can increase SP-A synthesis in human alveolar type II epithelial cells through sequentially activating the MEK1-ERK1/2-NF-κB-dependent pathway.
Background: Sepsis can lead to multiorgan failure and death and appears to be triggered by bacterial products, such as lipopolysaccharide (LPS) from gram-negative bacteria and lipoteichoic acid (LTA) from gram-positive ones [1-3]. Infection of the respiratory tract caused by gram-positive bacteria and pneumonia combined with acute lung injury (ALI) are among the leading causes of sepsis mortality [4]. In the past few decades, the incidences of sepsis and septic shock have been increasing [5]. Although endotoxin-activated events are clearly important in gram-negative infection, gram-positive bacteria also have crucial roles, but less is known about host responses to them [6]. The increasing prevalence of sepsis from gram-positive bacterial pathogens necessitates a reevaluation of the basic assumptions about the molecular pathogenesis of ALI. Alveolar epithelial type II cells contribute to the maintenance of mucosal integrity by modulating the production of surfactants [7]. Pulmonary surfactants play important roles in protecting the lung during endotoxin-induced injury and infection [8,9]. Surfactant protein (SP)-A is the most abundant pulmonary surfactant protein. Levels of SP-A in bronchiolar lavage fluid are modulated in lung diseases caused by gram-negative or gram-positive bacteria, including severe pneumonia, acute respiratory distress syndrome, and cardiogenic lung edema [10]. Thus, altered lung SP-A levels can be an effective indicator of pulmonary infection and inflammation. Our previous study showed that LPS selectively induced spa gene expression in human alveolar epithelial A549 cells [11]. LTA, an outer membrane component of gram-positive bacteria, was shown to be one of the critical factors participating in the pathogenesis of sepsis [12,13]. LTA can stimulate inflammatory responses in the lung [14,15]. Therefore, understanding the mechanisms that regulate LTA-mediated cell activation is crucial for the diagnosis, treatment, or prognosis of lung inflammatory diseases. In response to stimuli, LTA can activate macrophages to produce massive amounts of inflammatory factors that exhibit systemic effects in the general circulation [16]. LTA can induce the secretion of various cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor (TNF)-α [17]. These data suggest that LTA can selectively modify gene transcription in various cell types and subsequently augment, and possibly initiate, tissue inflammation. Mitogen-activated protein kinases (MAPKs) are serine/threonine kinases. The first MAPK isoforms to be cloned and characterized were extracellular signal-regulated kinases 1 and 2 (ERK1/2) [18,19]. ERK1/2 are well documented to be activated by a family of dual-specificity kinases known as the mitogen-activated/ERK kinases (MEKs) [16,20]. A previous study demonstrated that LTA can selectively activate the ERK pathway in the cornea [21]. Our previous study showed that LTA induced TNF-α and IL-6 expression by stimulating phosphorylation of ERK1/2 in macrophages [16]. In addition, LTA also triggered translocation of nuclear factor (NF)-κB from the cytoplasm to nuclei and its transactivation activity. However, the mechanisms responsible for LTA-induced spa gene expression in alveolar epithelial cells are still unknown. In this study, we attempted to evaluate the effects of LTA on SP-A synthesis in human alveolar type II epithelial cells and the possible mechanisms involved. 
Conclusions: In summary, we used an alveolar epithelial cell model to study the immunomodulatory responses of LTA. Our results revealed that LTA can induce inflammatory responses in alveolar epithelial A549 cells by enhancing SP-A mRNA and protein synthesis. Moreover, the signal-transducing mechanism underlying LTA-induced regulation of SP-A expression involves the cascade phosphorylation of MEK1 and ERK1/2. Subsequently, LTA increased NF-κB expression and translocation. LTA-induced SP-A production in alveolar type II epithelial cells may indicate the status of gram-positive bacterial septic shock and acute lung injury. Additional molecular pathways should be investigated and validated in the future. However, there are certain limitations of this study, including the use of A549 cells, which are derived from a human lung carcinoma. The effects of LTA on A549 cells may differ from those on normal alveolar epithelial cells. Thus, we will perform a translational study to evaluate the effects of LTA on alveolar epithelial cells of animals with acute lung injury.
Background: Lipoteichoic acid (LTA), a gram-positive bacterial outer membrane component, can cause septic shock. Our previous studies showed that the gram-negative endotoxin, lipopolysaccharide (LPS), could induce surfactant protein-A (SP-A) production in human alveolar epithelial (A549) cells. Methods: A549 cells were exposed to LTA. Levels of SP-A, nuclear factor (NF)-κB, extracellular signal-regulated kinase 1/2 (ERK1/2), and mitogen-activated/extracellular signal-regulated kinase kinase (MEK)1 were determined. Results: Exposure of A549 cells to 10, 30, and 50 μg/ml LTA for 24 h did not affect cell viability. However, when exposed to 30 μg/ml LTA for 1, 6, and 24 h, the biosynthesis of SP-A mRNA and protein in A549 cells significantly increased. As to the mechanism, LTA enhanced cytosolic and nuclear NF-κB levels in a time-dependent manner. Pretreatment with BAY 11-7082, an inhibitor of NF-κB activation, significantly inhibited LTA-induced SP-A mRNA expression. Sequentially, LTA time-dependently augmented phosphorylation of ERK1/2. In addition, levels of phosphorylated MEK1 were augmented following treatment with LTA. Conclusions: Therefore, this study showed that LTA can increase SP-A synthesis in human alveolar type II epithelial cells through sequentially activating the MEK1-ERK1/2-NF-κB-dependent pathway.
9,975
280
[ 646, 169, 95, 268, 143, 148, 41, 103, 282, 265, 515, 290, 270, 103, 10, 70 ]
20
[ "lta", "cells", "a549", "a549 cells", "sp", "κb", "nf κb", "nf", "protein", "24" ]
[ "surfactants pulmonary surfactants", "sepsis gram positive", "participating pathogenesis sepsis", "infection surfactant protein", "protecting lung endotoxin" ]
null
[CONTENT] Lipoteichoic acid | Alveolar epithelial cells | Surfactant protein-A | MEK/ERK/NF-κB [SUMMARY]
null
[CONTENT] Lipoteichoic acid | Alveolar epithelial cells | Surfactant protein-A | MEK/ERK/NF-κB [SUMMARY]
[CONTENT] Lipoteichoic acid | Alveolar epithelial cells | Surfactant protein-A | MEK/ERK/NF-κB [SUMMARY]
[CONTENT] Lipoteichoic acid | Alveolar epithelial cells | Surfactant protein-A | MEK/ERK/NF-κB [SUMMARY]
[CONTENT] Lipoteichoic acid | Alveolar epithelial cells | Surfactant protein-A | MEK/ERK/NF-κB [SUMMARY]
[CONTENT] Alveolar Epithelial Cells | Cell Culture Techniques | Cell Survival | Humans | Immunoblotting | Lipopolysaccharides | Mitogen-Activated Protein Kinase 1 | Mitogen-Activated Protein Kinase 3 | NF-kappa B | Pulmonary Surfactant-Associated Protein A | Real-Time Polymerase Chain Reaction | Signal Transduction | Teichoic Acids [SUMMARY]
null
[CONTENT] Alveolar Epithelial Cells | Cell Culture Techniques | Cell Survival | Humans | Immunoblotting | Lipopolysaccharides | Mitogen-Activated Protein Kinase 1 | Mitogen-Activated Protein Kinase 3 | NF-kappa B | Pulmonary Surfactant-Associated Protein A | Real-Time Polymerase Chain Reaction | Signal Transduction | Teichoic Acids [SUMMARY]
[CONTENT] Alveolar Epithelial Cells | Cell Culture Techniques | Cell Survival | Humans | Immunoblotting | Lipopolysaccharides | Mitogen-Activated Protein Kinase 1 | Mitogen-Activated Protein Kinase 3 | NF-kappa B | Pulmonary Surfactant-Associated Protein A | Real-Time Polymerase Chain Reaction | Signal Transduction | Teichoic Acids [SUMMARY]
[CONTENT] Alveolar Epithelial Cells | Cell Culture Techniques | Cell Survival | Humans | Immunoblotting | Lipopolysaccharides | Mitogen-Activated Protein Kinase 1 | Mitogen-Activated Protein Kinase 3 | NF-kappa B | Pulmonary Surfactant-Associated Protein A | Real-Time Polymerase Chain Reaction | Signal Transduction | Teichoic Acids [SUMMARY]
[CONTENT] Alveolar Epithelial Cells | Cell Culture Techniques | Cell Survival | Humans | Immunoblotting | Lipopolysaccharides | Mitogen-Activated Protein Kinase 1 | Mitogen-Activated Protein Kinase 3 | NF-kappa B | Pulmonary Surfactant-Associated Protein A | Real-Time Polymerase Chain Reaction | Signal Transduction | Teichoic Acids [SUMMARY]
[CONTENT] surfactants pulmonary surfactants | sepsis gram positive | participating pathogenesis sepsis | infection surfactant protein | protecting lung endotoxin [SUMMARY]
null
[CONTENT] surfactants pulmonary surfactants | sepsis gram positive | participating pathogenesis sepsis | infection surfactant protein | protecting lung endotoxin [SUMMARY]
[CONTENT] surfactants pulmonary surfactants | sepsis gram positive | participating pathogenesis sepsis | infection surfactant protein | protecting lung endotoxin [SUMMARY]
[CONTENT] surfactants pulmonary surfactants | sepsis gram positive | participating pathogenesis sepsis | infection surfactant protein | protecting lung endotoxin [SUMMARY]
[CONTENT] surfactants pulmonary surfactants | sepsis gram positive | participating pathogenesis sepsis | infection surfactant protein | protecting lung endotoxin [SUMMARY]
[CONTENT] lta | cells | a549 | a549 cells | sp | κb | nf κb | nf | protein | 24 [SUMMARY]
null
[CONTENT] lta | cells | a549 | a549 cells | sp | κb | nf κb | nf | protein | 24 [SUMMARY]
[CONTENT] lta | cells | a549 | a549 cells | sp | κb | nf κb | nf | protein | 24 [SUMMARY]
[CONTENT] lta | cells | a549 | a549 cells | sp | κb | nf κb | nf | protein | 24 [SUMMARY]
[CONTENT] lta | cells | a549 | a549 cells | sp | κb | nf κb | nf | protein | 24 [SUMMARY]
[CONTENT] gram | sepsis | lta | positive | lung | gram positive | bacteria | infection | study | positive bacteria [SUMMARY]
null
[CONTENT] lta | figure | cells | a549 cells | a549 | panel | 24 | lta 24 | exposure | sp [SUMMARY]
[CONTENT] alveolar | epithelial | lta | alveolar epithelial | cells | epithelial cells | study | lung | responses | lung injury [SUMMARY]
[CONTENT] lta | cells | a549 | a549 cells | sp | figure | mrna | 24 | κb | panel [SUMMARY]
[CONTENT] lta | cells | a549 | a549 cells | sp | figure | mrna | 24 | κb | panel [SUMMARY]
[CONTENT] ||| gram | LPS [SUMMARY]
null
[CONTENT] 10 | 30 | 50 μg/ml | 24 ||| 30 μg/ml | 1 | 6 | 24 | SP-A mRNA | A549 ||| LTA | NF-κB ||| NF-κB ||| LTA | ERK1/2 ||| LTA [SUMMARY]
[CONTENT] LTA | II | MEK1-ERK1/2-NF-κB-dependent [SUMMARY]
[CONTENT] ||| gram | LPS ||| LTA ||| 1/2 ||| ||| 10 | 30 | 50 μg/ml | 24 ||| 30 μg/ml | 1 | 6 | 24 | SP-A mRNA | A549 ||| LTA | NF-κB ||| NF-κB ||| LTA | ERK1/2 ||| LTA ||| LTA | II | MEK1-ERK1/2-NF-κB-dependent [SUMMARY]
[CONTENT] ||| gram | LPS ||| LTA ||| 1/2 ||| ||| 10 | 30 | 50 μg/ml | 24 ||| 30 μg/ml | 1 | 6 | 24 | SP-A mRNA | A549 ||| LTA | NF-κB ||| NF-κB ||| LTA | ERK1/2 ||| LTA ||| LTA | II | MEK1-ERK1/2-NF-κB-dependent [SUMMARY]
Ecological niche modelling of Hemipteran insects in Cameroon; the paradox of a vector-borne transmission for Mycobacterium ulcerans, the causative agent of Buruli ulcer.
25344052
The mode of transmission of the emerging neglected disease Buruli ulcer is unknown. Several potential transmission pathways have been proposed, such as transmission by amoebae or through food webs. Several lines of evidence have suggested that biting aquatic insects, Naucoridae and Belostomatidae, may act as vectors; however, this proposal remains controversial.
BACKGROUND
Herein, based on sampling in Cameroon, we construct an ecological niche model of these insects to describe their spatial distribution. We predict their distribution across West Africa, describe important environmental drivers of their abundance, and examine the correlation between their abundance and Buruli ulcer prevalence in the context of the Bradford-Hill guidelines.
MATERIALS AND METHODS
We find a significant positive correlation between the abundance of the insects and the prevalence of Buruli ulcer. This correlation changes in space and time: it is significant in one Cameroonian study region (Akonolinga) but not in the other (Bankim). We discuss notable environmental differences between these regions.
RESULTS
We interpret the presence of, and change in, this correlation as evidence (though not proof) that these insects may be locally important in the environmental persistence, or transmission, of Mycobacterium ulcerans. This is consistent with the idea of M. ulcerans as a pathogen transmitted by multiple modes of infection, with the importance of any one pathway changing from region to region depending on local environmental conditions.
CONCLUSION
[ "Animals", "Buruli Ulcer", "Cameroon", "Ecosystem", "Geographic Mapping", "Hemiptera", "Humans", "Insect Vectors", "Mycobacterium ulcerans" ]
4213541
Background
The Buruli ulcer is an emerging neglected tropical disease affecting more than 5,000 people per year in West and Central Africa, French Guiana, Latin America and Australia [1]. The disease burden is highest in Africa, where it predominantly affects children under the age of 15, and due to damage to the skin, muscle and bone, can result in severe scarring and crippling deformities if left untreated. The disease is caused by the environmental pathogen Mycobacterium ulcerans. The mode of transmission of M. ulcerans, the method by which it infects humans, is unknown. Many routes of transmission have been proposed, such as transmission by aerosol [2], vector transmission by amoebae [3] or through aquatic networks [4]. During a study of the association between M. ulcerans and aquatic plants in Ghana and Benin, aquatic insects were accidentally collected during the sampling procedure and unexpectedly found to test positive for M. ulcerans [1]. The authors proposed that, given that these insects occasionally bite humans, they may be implicated in transmission of M. ulcerans. Aquatic insects have been further implicated after a series of laboratory experiments demonstrated the competency of Naucoridae to act as vectors. Naucoridae are able to acquire M. ulcerans from their diet, and then transmit the pathogen to mice resulting in Buruli-like symptoms [5–8]. Buruli ulcer is commonly associated with lowland, stagnant water [9], and human behaviours associated with water bodies appear to be risk factors for Buruli ulcer infection, which would lend support to the idea of infection occurring in an aquatic context. However, the role of these insects has been disputed for several reasons. A two-year study of Buruli ulcer endemic and non-endemic sites in Ghana [10] found no evidence for a role in transmission. The population of these insects, and the prevalence of M. ulcerans infection in them, was not significantly different between Buruli ulcer endemic and Buruli ulcer non-endemic sites [10]. The authors argued that, if these insects are vectors, we would expect them to have a higher abundance in Buruli ulcer endemic sites, that rates of M. ulcerans infection of the insects should be higher in Buruli ulcer endemic sites, and that the rate of infection of these insects should be higher than that of other species of invertebrates. These expectations are based on the Bradford-Hill guidelines for associating insect vectors with human vector-borne disease [11, 12]. These guidelines provide a general framework to explore the association between vectors and disease, based on the consistency, specificity, plausibility and coherence of the proposed mode of transmission. Consistency refers to the expectation that the rate of infection of the proposed vectors should be consistently and strongly positively associated, in time and space, with the prevalence of human cases. This also implies that human cases should not occur in the absence of the proposed vector. The proposed vector should have a demonstrated capacity to physically transmit the pathogen, which has been demonstrated for M. ulcerans in the lab [5]. The interaction between the proposed vector and human infection must be specific, and alternative explanations of human infection should be ruled out (though see [4] for alternative explanations). That is, human infection must be demonstrated not to have resulted from other potential modes of transmission. We note that this criterion must be applied with care in cases of multi-host transmission. 
Additionally, the proposed vector must plausibly be able to be a vector of the pathogen. This criterion is often controversial as it is highly dependent upon the experience of the researcher, and their opinion about what is, and is not, plausible as opposed to merely possible [13]. Most authors in Buruli ulcer research would agree with some basic facts: the waterbugs are infected in the environment [14], they bite humans occasionally, and they are able to transmit the bacteria to mice in the lab [5]. However, waterbugs are not known to be vectors of other pathogens, and related mycobacteria (Mycobacterium tuberculosis, M. leprae, M. marinum) are not known to be vector-borne [10, 12]. The plausibility of this proposed route of transmission is still debated. The final criterion, coherence, is based on what we already know about the pathogen, the vector and the host. Does the proposed method of transmission fit well with our current understanding of its biology? As our understanding of the biology of M. ulcerans improves, this criterion will be answered. Given this framework, how likely is it that Naucoridae and/or Belostomatidae are vectors of Buruli ulcer disease? Herein, we explore the correlation in time and space between the proposed vectors, Naucoridae and Belostomatidae, and Buruli ulcer prevalence. We discuss the other Bradford-Hill criteria, but do not focus on them specifically, as this was not directly within the scope of this work. Based on sampling in Cameroon, we characterise the set of suitable habitats within which species of the families Belostomatidae and Naucoridae can maintain a population (their ecological niche) and describe the spatial distribution of these suitable habitats across West Africa. We then explore any correlations between habitat suitability and Buruli ulcer prevalence at multiple spatial scales. 
null
null
Results
Distribution of suitable habitat, and its relationship to Buruli ulcer: In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either wet or dry seasons or at any buffer distance (Table 3).
Table 2. Spearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species (Bonferroni-corrected p values by buffer):
Species | Season | 10 km | 5 km | 4 km | 3 km | 2 km | 1 km | Village
Belostomatidae | Dry | - | - | - | - | - | - | -
Belostomatidae | Wet | 0.0005 | 0.0144 | 0.0233 | - | - | - | -
Naucoridae | Dry | - | - | - | - | - | - | -
Naucoridae | Wet | 0.0015 | 0.0134 | 0.0143 | - | - | - | -
The buffer is the distance, in km, around the village centre within which habitat suitability is considered; the buffer labelled Village uses village borders as the buffer (Figure 3). Bonferroni's p value is the significance of the correlation between the insect and the disease; for clarity only significant values (<0.05) are presented, and non-significant values are marked "-". Significant positive correlations were observed between Buruli ulcer prevalence and both Belostomatidae and Naucoridae in the wet season, but not in the dry season.
Figure 4. Correlation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (black) or locally weighted scatterplot smoothing (red), different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman's rank correlation coefficient was significant. For Belostomatidae, with Buruli ulcer absent villages p = 0.08 and without BU absent villages p = 0.03; for Naucoridae, with BU absent villages p = 0.04 and without BU absent villages p = 0.02.
Table 3. Spearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Bankim, for both seasons and species. Buffers are defined as in Table 2; no significant correlations were observed in Bankim at any buffer distance, in either season (all values "-").
Ecologically important variables in the distribution of the aquatic insect families: Variable importance was evaluated using jackknife variable removal, which removes a variable and evaluates the effect of its removal on the model. In the dry season Belostomatidae and Naucoridae responded in broadly similar fashions; the variable whose removal had the largest effect was GLC 5 km (Figure 5). The land cover categories most suitable for both Belostomatidae and Naucoridae are water bodies, artificial areas, rain-fed croplands and forest/grassland mosaics (Figure 6). If one of these categories is the dominant category within a 5 km radius, in the dry season, the likelihood of encountering the insect is higher. Unsuitable categories were forest and vegetation/cropland mosaic. In the wet season precipitation is more important than land cover. Precipitation suitability peaks at approximately 300 millimeters per month, and diminishes above or below this (Figure 6). For the dry season there is a simple increase in habitat suitability with increasing precipitation.
Figure 5. Importance of each variable according to jackknife AUC for wet and dry seasons. A high value indicates the variable is important; however this is sensitive to correlation within the variables. For both insects the most important variable in the dry season was the land cover in the flight radius (GLC 5 km); in the wet season precipitation was the most important variable.
Figure 6. Habitat suitability for Belostomatidae (1st and 2nd rows) and Naucoridae (3rd and 4th rows) in the wet and dry seasons. Both insects have a negative relationship with flow accumulation, and a positive relationship with wetness index; an unexpected relationship given that increasing flow accumulation normally means increasing wetness index.
Flow accumulation had a negative association with habitat suitability, and wetness index had a positive association, regardless of season, for both Belostomatidae and Naucoridae.
Model performance: AICc for Naucoridae adults (dry season) was 14.6, and 14.2 in the wet season, as in Table 2. For Belostomatidae adults (dry season) the AICc was 12.5, and 12.2 in the wet season. Scores of overfitting are relative; these scores indicate the Belostomatidae model was less prone to overfitting than the Naucoridae model. The AG data set was also used in model validation. In the dry season Naucoridae adults had an AG AUC of 0.83, and 0.80 in the wet season. Belostomatidae adults had an AG AUC of 0.80 in the dry season, and 0.86 in the wet season. These scores indicate that the models are able to describe the distribution of the insects with good accuracy; the model based on the SME dataset is able to accurately replicate the independently collected AG dataset.
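As a rough illustration of the buffer-based analysis reported above (average habitat suitability around each village correlated with village Buruli ulcer prevalence using Spearman's rank correlation, with Bonferroni correction across buffer distances), a minimal R sketch follows. The raster file name, village coordinates and prevalence values are placeholders, not data from the study.

```r
# Minimal sketch, not the study's analysis script: correlate village-level
# Buruli ulcer prevalence with habitat suitability averaged within circular
# buffers around each village, then apply a Bonferroni correction across the
# buffer distances tested. All inputs below are hypothetical placeholders.
library(raster)

suitability <- raster("belostomatidae_wet_suitability.tif")  # hypothetical Maxent output

villages <- data.frame(                       # hypothetical villages (lon/lat, prevalence)
  lon = c(12.20, 12.25, 12.31, 12.28, 12.35),
  lat = c(3.72, 3.77, 3.80, 3.69, 3.75),
  prevalence = c(0.012, 0.004, 0.020, 0.000, 0.009)
)

buffers_m <- c(1000, 2000, 3000, 4000, 5000, 10000)   # 1-10 km circular buffers

p_values <- sapply(buffers_m, function(b) {
  # Mean suitability within the buffer; with a lon/lat raster, 'buffer' is in metres.
  mean_suit <- raster::extract(suitability, as.matrix(villages[, c("lon", "lat")]),
                               buffer = b, fun = mean, na.rm = TRUE)
  cor.test(mean_suit, villages$prevalence, method = "spearman")$p.value
})

# Bonferroni correction for the multiple buffer distances tested
print(data.frame(buffer_m = buffers_m,
                 p_bonferroni = p.adjust(p_values, method = "bonferroni")))
```

The study's additional village-border buffer would replace the circular buffer with extraction over the village polygon (for example, passing a SpatialPolygons object to extract()); the correlation step is unchanged.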
null
null
[ "Background", "Distribution of Belostomatidae and Naucoridae aquatic insect families", "Environmental parameters used in ecological niche modelling", "Data preparation, niche modelling, and prediction of spatial distribution of suitable habitat", "Evaluating model performance", "Identifying the relationship between habitat suitability and Buruli ulcer prevalence", "Distribution of suitable habitat, and its relationship to Buruli ulcer", "Ecologically important variables in the distribution of the aquatic insect families", "Model performance" ]
[ "The Buruli ulcer is an emerging neglected tropical disease affecting more than 5,000 people per year in West and Central Africa, French Guiana, Latin America and Australia [1]. The disease burden is highest in Africa, where it predominantly affects children under the age of 15, and due to damage to the skin, muscle and bone, can result in severe scarring and crippling deformities if left untreated. The disease is caused by the environmental pathogen Mycobacterium ulcerans.\nThe mode of transmission of M. ulcerans, the method by which it infects humans, is unknown. Many routes of transmission have been proposed, such as transmission by aerosol [2], vector transmission by amoebae [3] or through aquatic networks [4]. During a study of the association between M. ulcerans and aquatic plants in Ghana and Benin aquatic insects were accidentally collected during the sampling procedure, and unexpectedly found to test positive for M. ulcerans\n[1]. The authors proposed that, given that these insects occasionally bite humans, they may be implicated in transmission of M. ulcerans. Aquatic insects have been further implicated after a series of laboratory experiments demonstrated the competency of Naucoridae to act as vectors. Naucoridae are able to acquire M. ulcerans from their diet, and then transmit the pathogen to mice resulting in Buruli-like symptoms [5–8]. Buruli ulcer is commonly associated with lowland, stagnant water [9] and human behaviours associated with water bodies appear to be risk factors for Buruli ulcer infection, which would lend support to the idea of infection occurring in an aquatic context.\nHowever, the role of these insects has been disputed for several reasons. In a two year study of Buruli ulcer endemic and non-endemic sites in Ghana [10], found no evidence for a role in transmission. The population of these insects, and the prevalence of M. ulcerans infection in them, was not significantly different between Buruli ulcer endemic and Buruli ulcer non-endemic sites [10]. The authors argued that, if these insects are vectors, we would expect them to have a higher abundance in Buruli ulcer endemic sites, that rates of M. ulcerans infection of the insects should be higher in Buruli ulcer endemic sites, and that the rate of infection of these insects should be higher than other species of invertebrates.\nThese expectations are based on the Bradford-Hill guidelines for associating insect vectors with human vector-borne disease [11, 12]. These guidelines provide a general framework to explore the association between vectors and disease, based on the consistency, specificity, plausibility and coherence of the proposed mode of transmission.\nConsistency refers to the expectation that the rate of infection of the proposed vectors, which should be consistently strongly positively associated, in time and space, with prevalence of human cases. This also implies human cases should not occur in absence of the proposed vector. The proposed vector should have a demonstrated capacity to physically transmit the pathogen, which has been demonstrated for M. ulcerans in the lab [5]. The interaction between the proposed vector and human infection must be specific and alternative explanations of human infection should be ruled out (though see [4] for alternative explanations). That is, human infection must be demonstrated to not have been the result of other potential modes of transmission. 
We note that this criterion must be applied with care in cases of multi-host transmission.\nAdditionally, the proposed vector must plausibly be able to be a vector of the pathogen. This criterion is often controversial as it is highly dependent upon the experience of the researcher, and their opinion about what is, and is not, plausible as opposed to merely possible [13]. Most authors in Buruli ulcer research would agree with some basic facts; the waterbugs are infected in the environment [14], they bite humans occasionally, and are able to transmit the bacteria to mice in the lab [5]. However, waterbugs are not known to be vectors of other pathogens, and related Mycobacteria (Mycobacterium tuberculosis, M. leprae, M. marnium) are not known to be vector-borne diseases [10, 12]. The plausibility of this proposed route of transmission is still debated. The final criterion, coherence, is based on what we already know about the pathogen, the vector and the host. Does the proposed method of transmission fit well with our current understanding of its biology? As our understanding of the biology of M. ulcerans improves, this criterion will be answered.\nGiven this framework, how likely is it that Naucoridae and/or Belostomatidae are vectors of the Buruli ulcer disease? Herein, we explore the correlation in time and space between the proposed vectors, Naucoridae and Belostomatidae, and the Buruli ulcer prevalence. We discuss the other Bradford-Hill criteria, but do not focus on them specifically, as it was not directly within the scope of this work. Based on sampling in Cameroon, we characterise the set of suitable habitats within which species of the Families Belostomatidae and Naucoridae can maintain a population (their ecological niche) and describe the spatial distribution of these suitable habitats across West Africa. We then explore any correlations between habitat suitability and Buruli ulcer prevalence at multiple spatial scales.", "Data were collected as described in [15] hereafter referred to as the SME dataset. In brief, 36 sample sites in Cameroon were visited monthly from September 2012 to February 2013 (Figure 1), a period including both wet and dry seasons. Dip net sampling was conducted at all sites. Due to limitations of current taxonomic keys, the aquatic insects of interest were only identifiable to the phylogenetic division of Family. A second dataset was used in model validation, using data collected separately by A. Garchitorena (Figure 1). This dataset was collected as described in [16], and is hereafter referred to as the AG dataset. Niche models of Belostomatidae and Naucoridae are constructed using the SME dataset, and then tested on their ability to reproduce the independent AG dataset.Figure 1\nStudy sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.\n\nStudy sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.", "Five ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. 
Rainfall was highly seasonal, so we divide models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1 (a wetness index computed from flow accumulation, cell size and slope), where FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero wetness index had no value. Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. This is termed GLC-5 K.\nTable 1. Environmental variables used in ecological niche modelling\nVariable | Units | Original resolution | Source\nWetness index | m2 | 15 arc-sec (approx 450 m2) | SRTM\nFlow accumulation | m2 | 15 arc-sec (approx 450 m2) | SRTM\nPrecipitation in wettest season | millimeter | 30 arc-sec (approx 1 km2) | Bioclim 13\nPrecipitation in driest season | millimeter | 30 arc-sec (approx 1 km2) | Bioclim 14\nGLC-AP | Unitless | 300 m2 | GLC 2009\nGLC-5 K | Unitless | 300 m2 | GLC 2009\nAll data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library 'raster' of the software R.", "Insect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman's correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library 'raster' of the software R [24].\nAcross the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. 
For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged.\nIn order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.\n\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. 
This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.", "We evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). We used two methods to evaluate the models. First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent.\nThe purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species.", "Buruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n\n\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n", "In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. 
", "In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either wet or dry seasons or at any buffer distance (Table 3).Table 2\nSpearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species\nSpecies | Season | Buffer: Bonferroni p value\nBelostomatidae | Dry | 10 km: -, 5 km: -, 4 km: -, 3 km: -, 2 km: -, 1 km: -, Village: -\nBelostomatidae | Wet | 10 km: 0.000, 5 km: 0.014, 4 km: 0.023, 3 km: -, 2 km: -, 1 km: -, Village: -\nNaucoridae | Dry | 10 km: -, 5 km: -, 4 km: -, 3 km: -, 2 km: -, 1 km: -, Village: -\nNaucoridae | Wet | 10 km: 0.001, 5 km: 0.013, 4 km: 0.014, 3 km: -, 2 km: -, 1 km: -, Village: -\nThe column labelled Buffer is the distance, in km, around the village centre within which habitat suitability is considered; the buffer labelled Village uses the village borders as the buffer (Figure 3). Bonferroni's p value is the significance of the correlation between the insect and the disease; for clarity only significant values (<0.05) are presented, and non-significant values are marked "-". Significant positive correlations were observed between Buruli ulcer prevalence and both Belostomatidae and Naucoridae in the wet season, but not in the dry season.Figure 4\nCorrelation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (black) or locally weighted scatterplot smoothing (red), two different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman's rank correlation coefficient was significant. For Belostomatidae, with Buruli ulcer absent villages p = 0.08, without BU absent villages p = 0.03. For Naucoridae, with BU absent villages p = 0.04, without BU absent villages p = 0.02.\n\n
Table 3\nSpearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Bankim, for both seasons and species\n\nThe column labelled Buffer is the distance, in km, around the village centre within which habitat suitability is considered; the buffer labelled Village uses the village borders as the buffer (Figure 3). Bonferroni's p value is the significance of the correlation between the insect and the disease; for clarity only significant values (<0.05) are presented, and non-significant values are marked "-". No significant correlations were observed in Bankim.", "Variable importance was evaluated using jackknife variable removal. The jackknife removes a variable and evaluates the effect of that removal on the model. In the dry season Belostomatidae and Naucoridae responded in broadly similar fashions; the variable whose removal had the largest effect was GLC 5 km (Figure 5). The land cover categories most suitable for both Belostomatidae and Naucoridae are water bodies, artificial areas, rain-fed croplands and forest/grassland mosaics (Figure 6). If one of these categories is the dominant category within a 5 km radius, in the dry season, the likelihood of encountering the insect is higher. Unsuitable categories were forest and vegetation/cropland mosaic.\nIn the wet season precipitation is more important than land cover. Precipitation suitability peaks at approximately 300 millimeters per month, and diminishes above or below this (Figure 6). For the dry season there is a simple increase in habitat suitability with increasing precipitation.Figure 5\nImportance of each variable according to Jack-knife AUC for wet and dry seasons. A high value indicates the variable is important; however, this is sensitive to correlation within the variables. For both insects the most important variable in the dry season was the land cover in the flight radius (GLC 5 km); in the wet season precipitation was the most important variable.Figure 6\nHabitat suitability for Belostomatidae (1st and 2nd rows) and Naucoridae (3rd and 4th rows) in the wet and dry seasons. Both insects have a negative relationship with flow accumulation, and a positive relationship with wetness index; an unexpected relationship given that increasing flow accumulation normally means increasing wetness index.\nFlow accumulation had a negative association with habitat suitability, and wetness index had a positive association, regardless of season, for both Belostomatidae and Naucoridae.", "AICc for Naucoridae adults (dry season) was 14.6, and 14.2 in the wet season, as in Table 2. For Belostomatidae adults (dry season) the AICc was 12.5, and 12.2 in the wet season.
AICc scores of overfitting are relative; these scores indicate the Belostomatidae model was less prone to overfitting than the Naucoridae model.\nThe AG dataset was also used in model validation. In the dry season Naucoridae adults had an AG AUC of 0.83, and 0.80 in the wet season. Belostomatidae adults had an AG AUC of 0.80 in the dry season, and 0.86 in the wet season. These scores indicate that the models are able to describe the distribution of the insects with good accuracy; the model based on the SME dataset is able to accurately reproduce the independently collected AG dataset." ]
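To make the fitting-and-validation workflow concrete, here is a minimal R sketch using the dismo interface to Maxent and an independent presence/absence set playing the role of the AG data. File names, layer names and the AICc comment are illustrative assumptions; the authors used the stand-alone Maxent 3.3.3k application rather than this exact code.

# Sketch only: fit Maxent on SME-style presence points and score it against an
# independent presence/absence dataset. Requires rJava and maxent.jar in the
# dismo java folder. All file and object names are hypothetical.
library(dismo)
library(raster)

predictors <- stack(c("wetness.tif", "flowacc.tif", "precip.tif",
                      "glc_ap.tif", "glc_5k.tif"))       # environmental layers
presence <- read.csv("sme_presence.csv")[, c("lon", "lat")]

me <- maxent(x = predictors, p = presence,
             factors = c("glc_ap", "glc_5k"))            # categorical land cover layers

# Independent validation: true presences and true absences (AG-style data)
ag_pres <- read.csv("ag_presence.csv")[, c("lon", "lat")]
ag_abs  <- read.csv("ag_absence.csv")[, c("lon", "lat")]
ev <- evaluate(p = ag_pres, a = ag_abs, model = me, x = predictors)
ev@auc                                                   # the "AG AUC"

# AICc penalises model complexity: AICc = 2k - 2*ln(L) + 2k(k+1)/(n-k-1), with L
# the model likelihood at the n presence points and k the number of parameters.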
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and methods", "Distribution of Belostomatidae and Naucoridae aquatic insect families", "Environmental parameters used in ecological niche modelling", "Data preparation, niche modelling, and prediction of spatial distribution of suitable habitat", "Evaluating model performance", "Identifying the relationship between habitat suitability and Buruli ulcer prevalence", "Results", "Distribution of suitable habitat, and its relationship to Buruli ulcer", "Ecologically important variables in the distribution of the aquatic insect families", "Model performance", "Discussion" ]
[ "The Buruli ulcer is an emerging neglected tropical disease affecting more than 5,000 people per year in West and Central Africa, French Guiana, Latin America and Australia [1]. The disease burden is highest in Africa, where it predominantly affects children under the age of 15, and due to damage to the skin, muscle and bone, can result in severe scarring and crippling deformities if left untreated. The disease is caused by the environmental pathogen Mycobacterium ulcerans.\nThe mode of transmission of M. ulcerans, the method by which it infects humans, is unknown. Many routes of transmission have been proposed, such as transmission by aerosol [2], vector transmission by amoebae [3] or through aquatic networks [4]. During a study of the association between M. ulcerans and aquatic plants in Ghana and Benin aquatic insects were accidentally collected during the sampling procedure, and unexpectedly found to test positive for M. ulcerans\n[1]. The authors proposed that, given that these insects occasionally bite humans, they may be implicated in transmission of M. ulcerans. Aquatic insects have been further implicated after a series of laboratory experiments demonstrated the competency of Naucoridae to act as vectors. Naucoridae are able to acquire M. ulcerans from their diet, and then transmit the pathogen to mice resulting in Buruli-like symptoms [5–8]. Buruli ulcer is commonly associated with lowland, stagnant water [9] and human behaviours associated with water bodies appear to be risk factors for Buruli ulcer infection, which would lend support to the idea of infection occurring in an aquatic context.\nHowever, the role of these insects has been disputed for several reasons. In a two year study of Buruli ulcer endemic and non-endemic sites in Ghana [10], found no evidence for a role in transmission. The population of these insects, and the prevalence of M. ulcerans infection in them, was not significantly different between Buruli ulcer endemic and Buruli ulcer non-endemic sites [10]. The authors argued that, if these insects are vectors, we would expect them to have a higher abundance in Buruli ulcer endemic sites, that rates of M. ulcerans infection of the insects should be higher in Buruli ulcer endemic sites, and that the rate of infection of these insects should be higher than other species of invertebrates.\nThese expectations are based on the Bradford-Hill guidelines for associating insect vectors with human vector-borne disease [11, 12]. These guidelines provide a general framework to explore the association between vectors and disease, based on the consistency, specificity, plausibility and coherence of the proposed mode of transmission.\nConsistency refers to the expectation that the rate of infection of the proposed vectors, which should be consistently strongly positively associated, in time and space, with prevalence of human cases. This also implies human cases should not occur in absence of the proposed vector. The proposed vector should have a demonstrated capacity to physically transmit the pathogen, which has been demonstrated for M. ulcerans in the lab [5]. The interaction between the proposed vector and human infection must be specific and alternative explanations of human infection should be ruled out (though see [4] for alternative explanations). That is, human infection must be demonstrated to not have been the result of other potential modes of transmission. 
We note that this criterion must be applied with care in cases of multi-host transmission.\nAdditionally, the proposed vector must plausibly be able to be a vector of the pathogen. This criterion is often controversial as it is highly dependent upon the experience of the researcher, and their opinion about what is, and is not, plausible as opposed to merely possible [13]. Most authors in Buruli ulcer research would agree with some basic facts; the waterbugs are infected in the environment [14], they bite humans occasionally, and are able to transmit the bacteria to mice in the lab [5]. However, waterbugs are not known to be vectors of other pathogens, and related Mycobacteria (Mycobacterium tuberculosis, M. leprae, M. marnium) are not known to be vector-borne diseases [10, 12]. The plausibility of this proposed route of transmission is still debated. The final criterion, coherence, is based on what we already know about the pathogen, the vector and the host. Does the proposed method of transmission fit well with our current understanding of its biology? As our understanding of the biology of M. ulcerans improves, this criterion will be answered.\nGiven this framework, how likely is it that Naucoridae and/or Belostomatidae are vectors of the Buruli ulcer disease? Herein, we explore the correlation in time and space between the proposed vectors, Naucoridae and Belostomatidae, and the Buruli ulcer prevalence. We discuss the other Bradford-Hill criteria, but do not focus on them specifically, as it was not directly within the scope of this work. Based on sampling in Cameroon, we characterise the set of suitable habitats within which species of the Families Belostomatidae and Naucoridae can maintain a population (their ecological niche) and describe the spatial distribution of these suitable habitats across West Africa. We then explore any correlations between habitat suitability and Buruli ulcer prevalence at multiple spatial scales.", " Distribution of Belostomatidae and Naucoridae aquatic insect families Data were collected as described in [15] hereafter referred to as the SME dataset. In brief, 36 sample sites in Cameroon were visited monthly from September 2012 to February 2013 (Figure 1), a period including both wet and dry seasons. Dip net sampling was conducted at all sites. Due to limitations of current taxonomic keys, the aquatic insects of interest were only identifiable to the phylogenetic division of Family. A second dataset was used in model validation, using data collected separately by A. Garchitorena (Figure 1). This dataset was collected as described in [16], and is hereafter referred to as the AG dataset. Niche models of Belostomatidae and Naucoridae are constructed using the SME dataset, and then tested on their ability to reproduce the independent AG dataset.Figure 1\nStudy sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.\n\nStudy sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.\nData were collected as described in [15] hereafter referred to as the SME dataset. In brief, 36 sample sites in Cameroon were visited monthly from September 2012 to February 2013 (Figure 1), a period including both wet and dry seasons. Dip net sampling was conducted at all sites. 
Due to limitations of current taxonomic keys, the aquatic insects of interest were only identifiable to the phylogenetic division of Family. A second dataset was used in model validation, using data collected separately by A. Garchitorena (Figure 1). This dataset was collected as described in [16], and is hereafter referred to as the AG dataset. Niche models of Belostomatidae and Naucoridae are constructed using the SME dataset, and then tested on their ability to reproduce the independent AG dataset.Figure 1\nStudy sites in Cameroon against local land cover. Data from the SME dataset are shown in red, and the AG dataset in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.\n Environmental parameters used in ecological niche modelling Five ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. Rainfall was highly seasonal, so we divided the models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1.Table 1\nEnvironmental variables used in ecological niche modelling\nVariable | Units | Original resolution | Source\nWetness index | m2 | 15 arc-sec (approx 450 m2) | SRTM\nFlow accumulation | m2 | 15 arc-sec (approx 450 m2) | SRTM\nPrecipitation in wettest season | millimeter | 30 arc-sec (approx 1 km2) | Bioclim 13\nPrecipitation in driest season | millimeter | 30 arc-sec (approx 1 km2) | Bioclim 14\nGLC-AP | Unitless | 300 m2 | GLC 2009\nGLC-5 K | Unitless | 300 m2 | GLC 2009\nAll data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R.\n\nEquation 1: wetness index = ln((FA × 500) / tan(S)),\nwhere FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero the wetness index had no value.
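A minimal R sketch of the wetness-index derivation follows, assuming the formulation given above (the natural log of flow accumulation times the 500 m cell size, divided by the tangent of slope) and a pre-computed flow-accumulation grid; the authors did this step with ArcMap, so the code is indicative only and all file names are hypothetical.

# Sketch: wetness index of the form WI = ln((FA * 500) / tan(S)), computed from
# a flow-accumulation grid (assumed already derived, e.g. in ArcMap) and SRTM
# elevation. Slope is computed in radians so that tan() can be applied directly.
library(raster)

elev <- raster("srtm_elevation.tif")        # SRTM elevation (hypothetical file)
fa   <- raster("flow_accumulation.tif")     # flow accumulation on the same grid

slope <- terrain(elev, opt = "slope", unit = "radians")
wi <- log((fa * 500) / tan(slope))          # 500 m cell size, as in the text
wi[slope == 0] <- NA                        # undefined where the slope is zero

# Resample onto the ~300 m (0.004 decimal degree) grid used for Maxent
template <- raster(fa)
res(template) <- 0.004
wi_300m <- resample(wi, template, method = "bilinear")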
Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. This is termed GLC-5 K.\nFive ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. Rainfall was highly seasonal, so we divide models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1,Table 1\nEnvironmental variables used in ecological niche modelling\nVariableUnitsOriginal resolutionSourceWetness indexm2\n15 arc-sec (approx 450 m2)SRTMFlow accumulationm2\n15 arc-sec (approx 450 m2)SRTMPrecipitation in wettest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 13Precipitation in driest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 14GLC-APUnitless300 m2\nGLC 2009GLC-5 KUnitless300 m2\nGLC 2009All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R.\n\nEnvironmental variables used in ecological niche modelling\n\nAll data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R.\n\n\n1\n\n\n\nwhere FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero wetness index had no value. Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. 
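One plausible way to build this 5 km modal land-cover summary (the GLC-5 K layer) in R is sketched below; the text does not state which tool was actually used, so the window construction and file name are assumptions.

# Sketch: most common (modal) land cover category within a ~5 km radius of each
# cell, one plausible derivation of the GLC-5 K layer (tooling is an assumption).
library(raster)

glc <- raster("glc2009.tif")                      # GLC 2009 land cover, hypothetical path

# Circular window; for an unprojected (lon/lat) grid, 5 km is roughly 0.045 degrees
w <- focalWeight(glc, d = 0.045, type = "circle")
w[w > 0] <- 1                                     # keep cells inside the circle
w[w == 0] <- NA                                   # discard cells outside it

# focal() multiplies cell values by the weights before applying fun, so 1/NA
# weights preserve category codes inside the window and drop the rest
glc_5k <- focal(glc, w = w, fun = modal, na.rm = TRUE)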
This is termed GLC-5 K.\n Data preparation, niche modelling, and prediction of spatial distribution of suitable habitat Insect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman’s correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library ‘raster’ of the software R [24].\nAcross the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged.\nIn order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. 
This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.\n\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.\nInsect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman’s correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library ‘raster’ of the software R [24].\nAcross the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged.\nIn order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. 
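A minimal sketch of this delineation step with dismo::mess(), reusing the hypothetical predictor stack and sample-site file from the earlier sketches:

# Sketch: MESS surface from the predictor stack and the environmental values at
# the SME sample sites, then exclusion of ecologically extrapolated areas.
# Categorical layers may need separate handling; all names are hypothetical.
library(dismo)
library(raster)

predictors <- stack(c("wetness.tif", "flowacc.tif", "precip.tif",
                      "glc_ap.tif", "glc_5k.tif"))
sites <- read.csv("sme_sites.csv")[, c("lon", "lat")]

ref_vals <- extract(predictors, sites)            # conditions observed at the sample sites
mess_map <- mess(predictors, ref_vals, full = FALSE)

# Keep only the interpolative region (MESS >= 0)
interpolative <- mess_map >= 0
predictors_study <- mask(predictors, interpolative, maskvalue = 0)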
A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.\n\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.\n Evaluating model performance We evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). We used two methods to evaluate the models. First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent.\nThe purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species.\nWe evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). 
We used two methods to evaluate the models. First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent.\nThe purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species.\n Identifying the relationship between habitat suitability and Buruli ulcer prevalence Buruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n\n\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n\nBuruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. 
To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n\n\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n", "Data were collected as described in [15] hereafter referred to as the SME dataset. In brief, 36 sample sites in Cameroon were visited monthly from September 2012 to February 2013 (Figure 1), a period including both wet and dry seasons. Dip net sampling was conducted at all sites. Due to limitations of current taxonomic keys, the aquatic insects of interest were only identifiable to the phylogenetic division of Family. A second dataset was used in model validation, using data collected separately by A. Garchitorena (Figure 1). This dataset was collected as described in [16], and is hereafter referred to as the AG dataset. Niche models of Belostomatidae and Naucoridae are constructed using the SME dataset, and then tested on their ability to reproduce the independent AG dataset.Figure 1\nStudy sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.\n\nStudy sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category.", "Five ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. Rainfall was highly seasonal, so we divide models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1,Table 1\nEnvironmental variables used in ecological niche modelling\nVariableUnitsOriginal resolutionSourceWetness indexm2\n15 arc-sec (approx 450 m2)SRTMFlow accumulationm2\n15 arc-sec (approx 450 m2)SRTMPrecipitation in wettest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 13Precipitation in driest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 14GLC-APUnitless300 m2\nGLC 2009GLC-5 KUnitless300 m2\nGLC 2009All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. 
Resampling used resample() in the library ‘raster’ of the software R.\n\nEnvironmental variables used in ecological niche modelling\n\nAll data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R.\n\n\n1\n\n\n\nwhere FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero wetness index had no value. Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. This is termed GLC-5 K.", "Insect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman’s correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library ‘raster’ of the software R [24].\nAcross the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged.\nIn order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. 
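Once the interpolative region is delineated, the fitted model can be projected onto it; a short sketch, reusing the hypothetical objects (me, predictors, mess_map) defined in the earlier sketches:

# Sketch: project habitat suitability across the interpolative region only,
# reusing hypothetical objects from the earlier sketches (me, predictors, mess_map).
library(dismo)
library(raster)

suitability <- predict(me, predictors)                   # RasterLayer of habitat suitability
suitability <- mask(suitability, mess_map >= 0, maskvalue = 0)

writeRaster(suitability, "suitability.tif", overwrite = TRUE)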
A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.\n\nDelineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous.", "We evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). We used two methods to evaluate the models. First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent.\nThe purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species.", "Buruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. 
Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n\n\nSpatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon.\n", " Distribution of suitable habitat, and its relationship to Buruli ulcer In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either wet or dry seasons or at any buffer distance (Table 3).Table 2\nSpearman’s rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species\nSpeciesSeasonBufferBonferroni pvalueBelostomatidaeDry10-5-4-3-2-1-Village-Wet100.00050.01440.0233-2-1-Village-NaucoridaeDry10-5-4-3-2-1-Village-Wet100.00150.01340.0143-2-1-Village-The column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. Significant positive correlations were observed between both Belostomatidae and Naucoridae in the wet season and Buruli ulcer prevalence, but not the dry season.Figure 4\nCorrelation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (Black) or locally weighted scatterplot smoothing (Red), different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman’s rank correlation coefficient was significant. For Belostomatidae with Buruli ulcer absent villages p = 0.08, without BU absent villages p = 0.03. 
For Naucoridae with BU absent villages p = 0.04, without BU absent villages p = 0.02.\n\nSpearman’s rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species\n\nThe column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. Significant positive correlations were observed between both Belostomatidae and Naucoridae in the wet season and Buruli ulcer prevalence, but not the dry season.\n\nCorrelation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (Black) or locally weighted scatterplot smoothing (Red), different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman’s rank correlation coefficient was significant. For Belostomatidae with Buruli ulcer absent villages p = 0.08, without BU absent villages p = 0.03. For Naucoridae with BU absent villages p = 0.04, without BU absent villages p = 0.02.\n\nSpearman rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Bankim, for both seasons and species\n\nThe column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. No significant correlations were observed in Bankim.\nIn Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either wet or dry seasons or at any buffer distance (Table 3).Table 2\nSpearman’s rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species\nSpeciesSeasonBufferBonferroni pvalueBelostomatidaeDry10-5-4-3-2-1-Village-Wet100.00050.01440.0233-2-1-Village-NaucoridaeDry10-5-4-3-2-1-Village-Wet100.00150.01340.0143-2-1-Village-The column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. 
Significant positive correlations were observed between both Belostomatidae and Naucoridae in the wet season and Buruli ulcer prevalence, but not the dry season.Figure 4\nCorrelation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (Black) or locally weighted scatterplot smoothing (Red), different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman’s rank correlation coefficient was significant. For Belostomatidae with Buruli ulcer absent villages p = 0.08, without BU absent villages p = 0.03. For Naucoridae with BU absent villages p = 0.04, without BU absent villages p = 0.02.\n\nSpearman’s rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species\n\nThe column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. Significant positive correlations were observed between both Belostomatidae and Naucoridae in the wet season and Buruli ulcer prevalence, but not the dry season.\n\nCorrelation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (Black) or locally weighted scatterplot smoothing (Red), different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman’s rank correlation coefficient was significant. For Belostomatidae with Buruli ulcer absent villages p = 0.08, without BU absent villages p = 0.03. For Naucoridae with BU absent villages p = 0.04, without BU absent villages p = 0.02.\n\nSpearman rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Bankim, for both seasons and species\n\nThe column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. No significant correlations were observed in Bankim.\n Ecologically important variables in the distribution of the aquatic insect families Variable importance was evaluated using Jackknife variable removal. Jacknife removes a variable and evaluates the effect of variable removal on the model. 
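A leave-one-variable-out loop of the kind described here can be sketched as follows, again reusing the hypothetical objects from the earlier sketches; in practice the stand-alone Maxent application offers this as its built-in jackknife option.

# Sketch: jackknife variable removal, refitting the model without each predictor
# in turn and recording the AUC on the independent validation set.
library(dismo)
library(raster)

vars <- names(predictors)
jackknife_auc <- sapply(vars, function(v) {
  reduced <- dropLayer(predictors, v)
  m <- maxent(x = reduced, p = presence,
              factors = intersect(c("glc_ap", "glc_5k"), names(reduced)))
  evaluate(p = ag_pres, a = ag_abs, model = m, x = reduced)@auc
})

# Variables whose removal lowers the AUC the most are the most important
sort(jackknife_auc)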
In the dry season Belostomatidae and Naucoridae responded in broadly similar fashions; the variable whose removal had the largest effect was GLC 5 km (Figure 5). The land cover categories most suitable for both Belostomatidae and Naucoridae are water bodies, artificial areas, rain fed croplands and forest/grassland mosaics (Figure 6). If one of these categories is the dominant category in 5 km radius, in the dry season, the likelihood of encountering the insect is higher. Unsuitable categories were forest and vegetation/cropland mosaic.In the wet season precipitation is more important than land cover. Precipitation suitability peaks at approximately 300 millimeters per month, and diminishes above or below this (Figure 6). For the dry season there is a simple increase in habitat suitability with increasing precipitation.Figure 5\nImportance of each variable according to Jack-knife AUC for wet and dry seasons. A high value indicates the variable is important; however this is sensitive to correlation within the variables. For both insects the most important variable in the dry season was the land cover in the flight radius (GLC 5 km), in the wet season precipitation was the most important variable.Figure 6\nHabitat suitability for Belostomatidae (1\nst\nand 2\nnd\nrows) and Naucoridae (3\nrd\nand 4\nth\nrows) in the wet and dry seasons. Both insects have a negative relationship with flow accumulation, and a positive relationship with wetness index; an unexpected relationship given that increasing flow accumulation normally means increasing wetness index.\n\nImportance of each variable according to Jack-knife AUC for wet and dry seasons. A high value indicates the variable is important; however this is sensitive to correlation within the variables. For both insects the most important variable in the dry season was the land cover in the flight radius (GLC 5 km), in the wet season precipitation was the most important variable.\n\nHabitat suitability for Belostomatidae (1\nst\nand 2\nnd\nrows) and Naucoridae (3\nrd\nand 4\nth\nrows) in the wet and dry seasons. Both insects have a negative relationship with flow accumulation, and a positive relationship with wetness index; an unexpected relationship given that increasing flow accumulation normally means increasing wetness index.\nFlow accumulation had a negative association with habitat suitability, and wetness index had a positive association, regardless of season, for both Belostomatidae and Naucoridae.\nVariable importance was evaluated using Jackknife variable removal. Jacknife removes a variable and evaluates the effect of variable removal on the model. In the dry season Belostomatidae and Naucoridae responded in broadly similar fashions; the variable whose removal had the largest effect was GLC 5 km (Figure 5). The land cover categories most suitable for both Belostomatidae and Naucoridae are water bodies, artificial areas, rain fed croplands and forest/grassland mosaics (Figure 6). If one of these categories is the dominant category in 5 km radius, in the dry season, the likelihood of encountering the insect is higher. Unsuitable categories were forest and vegetation/cropland mosaic.In the wet season precipitation is more important than land cover. Precipitation suitability peaks at approximately 300 millimeters per month, and diminishes above or below this (Figure 6). 
Model performance
The AICc for Naucoridae adults was 14.6 in the dry season and 14.2 in the wet season, as in Table 2. For Belostomatidae adults the AICc was 12.5 in the dry season and 12.2 in the wet season. Scores of overfitting are relative; these scores indicate that the Belostomatidae model was less prone to overfitting than the Naucoridae model.
The AG dataset was also used in model validation. Naucoridae adults had an AG AUC of 0.83 in the dry season and 0.80 in the wet season; Belostomatidae adults had an AG AUC of 0.80 in the dry season and 0.86 in the wet season. These scores indicate that the models are able to describe the distribution of the insects with good accuracy; the model based on the SME dataset is able to accurately replicate the independently collected AG dataset.",
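A minimal sketch of how an AG AUC of this kind can be computed, assuming `suitability` is the predicted habitat-suitability raster and `ag_pres` / `ag_abs` are coordinates of presences and true absences from the independent AG survey (object names are illustrative):

```r
# Hedged sketch: validating predictions against an independent presence/absence set.
library(dismo)
library(raster)

pred_pres <- extract(suitability, ag_pres)   # predicted suitability at AG presences
pred_abs  <- extract(suitability, ag_abs)    # predicted suitability at AG true absences

ev <- evaluate(p = pred_pres, a = pred_abs)  # ROC/AUC on the predicted values
ev@auc
```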
"In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either the wet or dry season or at any buffer distance (Table 3).
Table 2
Spearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species
Species | Season | Buffer | Bonferroni p value
Belostomatidae | Dry | 10, 5, 4, 3, 2, 1, Village | -
Belostomatidae | Wet | 10 | 0.0005
Belostomatidae | Wet | 5 | 0.014
Belostomatidae | Wet | 4 | 0.023
Belostomatidae | Wet | 3, 2, 1, Village | -
Naucoridae | Dry | 10, 5, 4, 3, 2, 1, Village | -
Naucoridae | Wet | 10 | 0.0015
Naucoridae | Wet | 5 | 0.013
Naucoridae | Wet | 4 | 0.014
Naucoridae | Wet | 3, 2, 1, Village | -
The column labelled Buffer gives the distance, in km, around the village centre within which habitat suitability is averaged; the buffer labelled Village uses the village borders as the buffer (Figure 3). The Bonferroni-corrected p value is the significance of the correlation between the insect and the disease; for clarity only significant values (<0.05) are shown, and non-significant values are marked "-". Significant positive correlations were observed between Buruli ulcer prevalence and both Belostomatidae and Naucoridae in the wet season, but not in the dry season.
Figure 4
Correlation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (black) or locally weighted scatterplot smoothing (red), two different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman's rank correlation coefficient was significant: for Belostomatidae, p = 0.08 with Buruli ulcer absent villages and p = 0.03 without them; for Naucoridae, p = 0.04 with Buruli ulcer absent villages and p = 0.02 without them.
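A minimal sketch of the buffer analysis behind Table 2, assuming `suitability` is a habitat-suitability raster in a longitude/latitude CRS and `villages` is a data.frame with columns lon, lat and prevalence (object and column names are illustrative):

```r
# Hedged sketch: mean habitat suitability in buffers around village centres,
# correlated with Buruli ulcer prevalence (Spearman), then Bonferroni-corrected.
library(raster)

buffers_m <- c(1, 2, 3, 4, 5, 10) * 1000          # buffer radii in metres
p_raw <- sapply(buffers_m, function(b) {
  mean_suit <- extract(suitability, villages[, c("lon", "lat")],
                       buffer = b, fun = mean, na.rm = TRUE)
  cor.test(mean_suit, villages$prevalence, method = "spearman")$p.value
})
p.adjust(p_raw, method = "bonferroni")            # correct for the multiple buffer sizes
```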
Table 3
Spearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Bankim, for both seasons and species
The column labelled Buffer gives the distance, in km, around the village centre within which habitat suitability is averaged; the buffer labelled Village uses the village borders as the buffer (Figure 3). The Bonferroni-corrected p value is the significance of the correlation between the insect and the disease; for clarity only significant values (<0.05) are shown, and non-significant values are marked "-". No significant correlations were observed in Bankim.", "Variable importance was evaluated using jackknife variable removal: the jackknife removes each variable in turn and evaluates the effect of its removal on the model. In the dry season, Belostomatidae and Naucoridae responded in broadly similar fashions; the variable whose removal had the largest effect was GLC 5 km (Figure 5). The land cover categories most suitable for both Belostomatidae and Naucoridae are water bodies, artificial areas, rain-fed croplands and forest/grassland mosaics (Figure 6). If one of these categories is the dominant category within a 5 km radius in the dry season, the likelihood of encountering the insect is higher. Unsuitable categories were forest and vegetation/cropland mosaic. In the wet season, precipitation is more important than land cover: precipitation suitability peaks at approximately 300 millimetres per month and diminishes above or below this (Figure 6), whereas in the dry season habitat suitability simply increases with increasing precipitation.
Figure 5
Importance of each variable according to jackknife AUC for the wet and dry seasons. A high value indicates the variable is important; however, this measure is sensitive to correlation among the variables. For both insects the most important variable in the dry season was the land cover in the flight radius (GLC 5 km); in the wet season precipitation was the most important variable.
Figure 6
Habitat suitability for Belostomatidae (1st and 2nd rows) and Naucoridae (3rd and 4th rows) in the wet and dry seasons. Both insects have a negative relationship with flow accumulation and a positive relationship with wetness index; an unexpected relationship, given that increasing flow accumulation normally means increasing wetness index.
Flow accumulation had a negative association with habitat suitability, and wetness index had a positive association, regardless of season, for both Belostomatidae and Naucoridae.",
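To see why flow accumulation and wetness index normally move together, recall that the wetness index is computed from flow accumulation and slope. A minimal sketch, assuming `dem` is the SRTM elevation raster, `flowacc` a flow-accumulation raster on the same grid (e.g. exported from ArcMap's Flow Accumulation tool), and assuming the standard ln(a / tan(slope)) form of the index with the 500 m cell size described for Equation 1 in the Methods:

```r
# Hedged sketch: wetness index from flow accumulation and slope.
# The ln(a / tan(slope)) form is an assumption about the exact formula,
# consistent with the description of Equation 1 (flat slopes and large flow
# accumulation give high values; the index is undefined where slope == 0).
library(raster)

slope   <- terrain(dem, opt = "slope", unit = "radians")
wetness <- log((flowacc * 500) / tan(slope))
```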
"AICc for Naucoridae adults was 14.6 in the dry season and 14.2 in the wet season, as in Table 2. For Belostomatidae adults the AICc was 12.5 in the dry season and 12.2 in the wet season. Scores of overfitting are relative; these scores indicate that the Belostomatidae model was less prone to overfitting than the Naucoridae model.
The AG dataset was also used in model validation. Naucoridae adults had an AG AUC of 0.83 in the dry season and 0.80 in the wet season; Belostomatidae adults had an AG AUC of 0.80 in the dry season and 0.86 in the wet season. These scores indicate that the models are able to describe the distribution of the insects with good accuracy; the model based on the SME dataset is able to accurately replicate the independently collected AG dataset.", "We explored the correlation between the distribution of Belostomatidae and Naucoridae and the prevalence of Buruli ulcer. We have found a positive gradient between the habitat suitability of Naucoridae and Belostomatidae and Buruli ulcer prevalence. Correlation does not imply causation; this result is not proof that the insects are vectors. However, understanding the reasons for the temporal and spatial changes in this correlation will enable a better understanding of the reasons for changes in Buruli ulcer prevalence.
There are significant temporal changes in this correlation between habitat suitability of the insects and Buruli ulcer prevalence; in Akonolinga, Buruli ulcer prevalence is correlated with Naucoridae and Belostomatidae distribution in the wet season, but not in the dry season. Buruli ulcer is known to have complex temporal changes in prevalence [30, 33, 34], as is M. ulcerans [14, 35]. It is therefore unsurprising that, if these insects are implicated in maintaining M. ulcerans in the environment, or involved in transmission to humans in some way (either as host vectors or host carriers), the correlation between Buruli ulcer prevalence and their abundance would change over time.
We also observe spatial changes in the correlation; water bug habitat suitability is not correlated with Buruli ulcer prevalence in Bankim, 457 km north of Akonolinga. Speculatively, other routes of transmission may be more important in this region, for example contact with infected plant biofilms, as suggested in Ghana [36].
How do we interpret this result in terms of the Bradford-Hill guidelines? Herein we have focused on the correlation between these insects and the prevalence of the disease in both space and time. While there is a significant positive correlation between the predicted abundance of the aquatic insects and the prevalence of Buruli ulcer, this correlation is not consistent from region to region. Previous research has proposed that M. ulcerans is transmitted within a multi-host transmission network [4]. In such a situation of multiple hosts, the relative importance of any given mode of transmission may be expected to vary in time or space, and our results are consistent with, though not proof of, this hypothesis. The lack of a clear signal between water bugs and Buruli ulcer in Bankim would suggest that they may not be key vectors or host carriers in that region. Recent studies have found notable changes in community composition relevant to M. ulcerans distribution in the Greater Accra, Ashanti and Volta regions of Ghana, for both plants [35] and aquatic insects [37]. This would support the importance of changing biotic communities as a key factor in changing Buruli ulcer prevalence, a priority for future work.
We have found that the prevalence of Buruli ulcer is correlated with Belostomatidae and Naucoridae abundance in Akonolinga but not in Bankim; we do not know whether the plant community composition of these regions, or the wider aquatic insect community, also correlates with changes in disease prevalence on a similar scale.
The ecological niches of both naucorid and belostomatid water bugs in West Africa are predominantly determined by the distribution of suitable land cover within a 5 km radius, with a preference for water bodies, artificial areas and rain-fed croplands. The specific land cover at the point of the site (GLC-AP) was less informative. The observation that the most suitable regional land cover class is water bodies is not surprising, but the high suitability of urban areas is curious. Ecologically this could have a variety of causes: there may be changes in the chemical composition of water in these habitats, a reduction in predation pressure, or a greater abundance of food. The specific reasons will require further research.
Our study has been limited in certain respects. First, the low taxonomic resolution of the insects is a current limitation. Secondly, an important limitation is that the distribution of M. ulcerans in these insects in these areas is unknown; the distribution of Naucoridae and Belostomatidae infected by M. ulcerans may differ from the distribution of Naucoridae and Belostomatidae generally. However, the insects are known to host the bacillus on their carapace, in their body [10, 14, 16] and in their salivary glands [14] in the wild, and the distribution of the insect necessarily sets a limit to the distribution of infected insects. A related limitation is the incubation time of M. ulcerans: the time from infection to presentation at the hospital remains unknown. Finally, we have only addressed a single criterion of the Bradford-Hill guidelines: correlation. We have not aimed for a full discussion of the other criteria, and our findings should not be interpreted as proof of the role of these insects as vectors or key host carriers. Rather, we have discussed the existence of, and change in, a correlation between these insects and Buruli ulcer. Future work aims to explore spatial variation in the correlation between Buruli ulcer and the entire plant and animal communities, identifying any similarities between regions where the correlation exists, expanding on previous studies [37] which have focused on the differences in community assemblage between M. ulcerans endemic and non-endemic regions of Ghana.
Despite these limitations, these results are consistent with previous research, which has shown that in Akonolinga the Nyong river is a risk factor for Buruli ulcer [30]. Our results agree with this conclusion; the main focus of suitable habitat for the insects in Akonolinga is the Nyong river, where large plants near the river banks provide appropriate habitat for Naucoridae and Belostomatidae to forage, develop and reproduce. Previous research has also implicated aquatic insects as important vectors in Akonolinga [14], including detection of M. ulcerans in the saliva of the insects.
In conclusion, we find a positive correlation between the abundance of suitable habitat for Naucoridae and Belostomatidae and Buruli ulcer prevalence. This correlation is not constant, and changes in time and space. We interpret this as evidence consistent with the idea that Naucoridae and Belostomatidae may be locally important host carriers of M.
ulcerans in certain conditions, their importance changing as environmental conditions change, as would be expected under multi-host transmission." ]
[ null, "materials|methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Ecological niche modelling", "Naucoridae", "Belostomatidae", "Spatial distribution", "Habitat suitability", "Buruli ulcer", "\nMycobacterium ulcerans\n", "Vector-borne transmission", "Environmentally-acquired disease" ]
Background: The Buruli ulcer is an emerging neglected tropical disease affecting more than 5,000 people per year in West and Central Africa, French Guiana, Latin America and Australia [1]. The disease burden is highest in Africa, where it predominantly affects children under the age of 15 and, due to damage to the skin, muscle and bone, can result in severe scarring and crippling deformities if left untreated. The disease is caused by the environmental pathogen Mycobacterium ulcerans. The mode of transmission of M. ulcerans, the method by which it infects humans, is unknown. Many routes of transmission have been proposed, such as transmission by aerosol [2], vector transmission by amoebae [3] or transmission through aquatic networks [4]. During a study of the association between M. ulcerans and aquatic plants in Ghana and Benin, aquatic insects were accidentally collected during the sampling procedure and unexpectedly found to test positive for M. ulcerans [1]. The authors proposed that, given that these insects occasionally bite humans, they may be implicated in transmission of M. ulcerans. Aquatic insects have been further implicated after a series of laboratory experiments demonstrated the competency of Naucoridae to act as vectors: Naucoridae are able to acquire M. ulcerans from their diet and then transmit the pathogen to mice, resulting in Buruli-like symptoms [5–8]. Buruli ulcer is commonly associated with lowland, stagnant water [9], and human behaviours associated with water bodies appear to be risk factors for Buruli ulcer infection, which would lend support to the idea of infection occurring in an aquatic context. However, the role of these insects has been disputed for several reasons. A two-year study of Buruli ulcer endemic and non-endemic sites in Ghana found no evidence for a role in transmission: the population of these insects, and the prevalence of M. ulcerans infection in them, was not significantly different between Buruli ulcer endemic and non-endemic sites [10]. The authors argued that, if these insects are vectors, we would expect them to have a higher abundance in Buruli ulcer endemic sites, that rates of M. ulcerans infection of the insects should be higher in Buruli ulcer endemic sites, and that the rate of infection of these insects should be higher than that of other invertebrate species. These expectations are based on the Bradford-Hill guidelines for associating insect vectors with human vector-borne disease [11, 12]. These guidelines provide a general framework to explore the association between vectors and disease, based on the consistency, specificity, plausibility and coherence of the proposed mode of transmission. Consistency refers to the expectation that the rate of infection of the proposed vectors should be consistently and strongly positively associated, in time and space, with the prevalence of human cases; this also implies that human cases should not occur in the absence of the proposed vector. The proposed vector should have a demonstrated capacity to physically transmit the pathogen, which has been demonstrated for M. ulcerans in the lab [5]. The interaction between the proposed vector and human infection must be specific, and alternative explanations of human infection should be ruled out (though see [4] for alternative explanations); that is, human infection must be demonstrated not to have been the result of other potential modes of transmission. We note that this criterion must be applied with care in cases of multi-host transmission.
Additionally, the proposed vector must plausibly be able to act as a vector of the pathogen. This criterion is often controversial, as it is highly dependent upon the experience of the researcher and their opinion about what is, and is not, plausible as opposed to merely possible [13]. Most authors in Buruli ulcer research would agree on some basic facts: the water bugs are infected in the environment [14], they bite humans occasionally, and they are able to transmit the bacteria to mice in the lab [5]. However, water bugs are not known to be vectors of other pathogens, and the diseases caused by related mycobacteria (Mycobacterium tuberculosis, M. leprae, M. marinum) are not known to be vector-borne [10, 12]. The plausibility of this proposed route of transmission is still debated. The final criterion, coherence, is based on what we already know about the pathogen, the vector and the host: does the proposed method of transmission fit well with our current understanding of its biology? As our understanding of the biology of M. ulcerans improves, this criterion will be answered. Given this framework, how likely is it that Naucoridae and/or Belostomatidae are vectors of the Buruli ulcer disease? Herein, we explore the correlation in time and space between the proposed vectors, Naucoridae and Belostomatidae, and Buruli ulcer prevalence. We discuss the other Bradford-Hill criteria, but do not focus on them specifically, as this was not directly within the scope of this work. Based on sampling in Cameroon, we characterise the set of suitable habitats within which species of the families Belostomatidae and Naucoridae can maintain a population (their ecological niche) and describe the spatial distribution of these suitable habitats across West Africa. We then explore any correlations between habitat suitability and Buruli ulcer prevalence at multiple spatial scales. Materials and methods: Distribution of Belostomatidae and Naucoridae aquatic insect families Data were collected as described in [15], hereafter referred to as the SME dataset. In brief, 36 sample sites in Cameroon were visited monthly from September 2012 to February 2013 (Figure 1), a period including both wet and dry seasons. Dip net sampling was conducted at all sites.
Due to limitations of current taxonomic keys, the aquatic insects of interest were only identifiable to the phylogenetic division of Family. A second dataset was used in model validation, using data collected separately by A. Garchitorena (Figure 1). This dataset was collected as described in [16], and is hereafter referred to as the AG dataset. Niche models of Belostomatidae and Naucoridae are constructed using the SME dataset, and then tested on their ability to reproduce the independent AG dataset.Figure 1 Study sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category. Study sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category. Environmental parameters used in ecological niche modelling Five ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. Rainfall was highly seasonal, so we divide models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1,Table 1 Environmental variables used in ecological niche modelling VariableUnitsOriginal resolutionSourceWetness indexm2 15 arc-sec (approx 450 m2)SRTMFlow accumulationm2 15 arc-sec (approx 450 m2)SRTMPrecipitation in wettest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 13Precipitation in driest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 14GLC-APUnitless300 m2 GLC 2009GLC-5 KUnitless300 m2 GLC 2009All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R. Environmental variables used in ecological niche modelling All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R. 1 where FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero wetness index had no value. 
Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. This is termed GLC-5 K. Five ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. Rainfall was highly seasonal, so we divide models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1,Table 1 Environmental variables used in ecological niche modelling VariableUnitsOriginal resolutionSourceWetness indexm2 15 arc-sec (approx 450 m2)SRTMFlow accumulationm2 15 arc-sec (approx 450 m2)SRTMPrecipitation in wettest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 13Precipitation in driest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 14GLC-APUnitless300 m2 GLC 2009GLC-5 KUnitless300 m2 GLC 2009All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R. Environmental variables used in ecological niche modelling All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R. 1 where FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero wetness index had no value. Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. This is termed GLC-5 K. 
Data preparation, niche modelling, and prediction of spatial distribution of suitable habitat Insect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman’s correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library ‘raster’ of the software R [24]. Across the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged. In order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2 Delineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. 
This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous. Delineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous. Insect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman’s correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library ‘raster’ of the software R [24]. Across the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged. In order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. 
MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2 Delineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous. Delineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous. Evaluating model performance We evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). We used two methods to evaluate the models. First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent. The purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species. We evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). We used two methods to evaluate the models. 
First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent. The purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species. Identifying the relationship between habitat suitability and Buruli ulcer prevalence Buruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3 Spatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon. Spatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon. Buruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. 
To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3 Spatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon. Spatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon. Distribution of Belostomatidae and Naucoridae aquatic insect families: Data were collected as described in [15] hereafter referred to as the SME dataset. In brief, 36 sample sites in Cameroon were visited monthly from September 2012 to February 2013 (Figure 1), a period including both wet and dry seasons. Dip net sampling was conducted at all sites. Due to limitations of current taxonomic keys, the aquatic insects of interest were only identifiable to the phylogenetic division of Family. A second dataset was used in model validation, using data collected separately by A. Garchitorena (Figure 1). This dataset was collected as described in [16], and is hereafter referred to as the AG dataset. Niche models of Belostomatidae and Naucoridae are constructed using the SME dataset, and then tested on their ability to reproduce the independent AG dataset.Figure 1 Study sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category. Study sites Cameroon against local land cover. Data from the SME dataset is in red, and AG data set is in blue. The sample sites span the extent of Cameroon, sampling from every major land cover category. Environmental parameters used in ecological niche modelling: Five ecological parameters were used to describe the distribution of suitable habitats: rainfall, flow accumulation, wetness index, land cover at the sample site, and land cover within the flight range of adult insects. These variables were selected on the basis of their likelihood to influence the distribution and condition of water, and are summarised in Table 1. Rainfall was highly seasonal, so we divide models by the season of collection. Models constructed using species distribution data from the dry season used the precipitation in the driest season; models constructed using species distribution data from the wet season used the precipitation in the wettest season. These two variables were taken from the Worldclim database, as BIO13 Precipitation of Wettest Month and BIO14 Precipitation of Driest Month [17]. Flow accumulation was derived using elevation data [18]. Flow accumulation is the surface area contributing water to a particular point, and indicates the potential amount of water available, which is then determined by rainfall. Using the SRTM elevation, flow accumulation was derived using the Fill, Flow Direction and Flow Accumulation tools in ArcMap 10.1 [19]. Wetness index has previously been shown to be associated with the Buruli ulcer [9]. 
In ecological terms it indicates the topological potential for water to accumulate, and was derived according to Equation 1,Table 1 Environmental variables used in ecological niche modelling VariableUnitsOriginal resolutionSourceWetness indexm2 15 arc-sec (approx 450 m2)SRTMFlow accumulationm2 15 arc-sec (approx 450 m2)SRTMPrecipitation in wettest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 13Precipitation in driest seasonmillimeter30 arc-sec (approx 1 km2)Bioclim 14GLC-APUnitless300 m2 GLC 2009GLC-5 KUnitless300 m2 GLC 2009All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R. Environmental variables used in ecological niche modelling All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for use in Maxent. Resampling used resample() in the library ‘raster’ of the software R. 1 where FA was the flow accumulation, 500 was the cell size in meters, and S was the surface slope in degrees. Large flow accumulation values and flat slopes resulted in high wetness index values, and indicate areas where water is likely to stagnate. In areas where the slope is zero wetness index had no value. Land cover was derived from the Global Land Cover Map 2009 [20]. GLC-At Point (hereafter GLC-AP) is the land cover at the sample site. In a 5 km radius around the site the most common (modal) land cover category was described. For example, a sample site may be in savannah, but surrounded by forest. 5 km was selected as the approximate flight radius of the insects [21–23]. This is termed GLC-5 K. Data preparation, niche modelling, and prediction of spatial distribution of suitable habitat: Insect distribution data from the 36 sample sites (Figure 1) were explored to identify normality, homogeneity of variance and correlation in the five environmental parameters used. Two sites were excluded from analysis as apparent outliers in flow accumulation, otherwise the data were normally distributed and homogenous. Correlation was observed between flow accumulation and wetness index and between precipitation in the driest and in the wettest seasons, however this was not significant (p > 0.05) using a Spearman’s correlation test. All data were resampled to a spatial resolution of 0.004 decimal degrees (~300 m2) for modelling using resample() in the library ‘raster’ of the software R [24]. Across the scale of West Africa we assume that absence data is not reliable, as it is more likely to indicate failure of detection rather than evidence of absence. For this reason, we chose to conduct presence-only modelling, and the specific method selected was Maximum Entropy [25]. We used the software Maxent (Maxent 3.3.3 k, [25]) to construct these models. Maxent has been used several times in the past to model the ecological niche of disease vectors [26]. Maximum entropy modelling minimises the divergence between the distribution of the environmental parameters and the species distribution, assuming the species is distributed in the most ecologically efficient manner possible. Maxent describes the environmental parameters across the study region, and is sensitive to the size of the study region. Herein, we generated background points in every raster cell within the extent of our study (that is, every environment was described). The models were replicated 100 times and averaged. 
In order to select an appropriate extent to the area considered for modelling we chose to confine the spatial extent of our study to the region ecologically interpolated by the sample sites (Figure 2). The sample sites of the SME dataset describe a particular range of environmental conditions for the five environmental parameters (Table 1). We excluded conditions that were not studied by the SME regime (much larger or lower values of rainfall, land cover categories not sampled, flow accumulation not observed at the study sites) to avoid extrapolating beyond the range of ecological parameter values studied. To select this interpolative region we constructed a Multivariate Environmental Similarity Surface (MESS map [27]), excluding areas with negative MESS values from our study region. A MESS map was constructed for each model; models with more samples can be interpolated into larger ecological regions. MESS maps were constructed with the function mess() available in the library dismo in R.Figure 2 Delineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous. Delineation of the study area. We exclude areas of ecological extrapolation. Across West Africa the red-green coloured region is ecologically interpolative within our sites, according to a multivariate environmental similarity surface (MESS). Regions where we extrapolate beyond the ecology of our study sites are identified using MESS values less than 0. Colour values are MESS values, indicating similarity to sample sites; a value of 100 is identical to the median of the sample sites. The spatial projection of this interpolative region is shown against a map of country borders of West Africa, for Belostomatidae adults in the dry season. This projection changes for each family, developmental stage and season, and though it is spatially discontinuous it is ecologically homogenous. Evaluating model performance: We evaluated the performance of these models according to their ability to correctly predict the distribution of the insects (with a separate dataset, the AG dataset) as efficiently as possible (avoiding overfitting). We used two methods to evaluate the models. First, the Akaike information criterion, corrected for small sample size (AICc), was used. AICc was generated as in [28], and penalises complex models to avoid over-fitting the data, low AICc scores indicate the model is not over-fitting. Second, we tested the models ability to predict the AG dataset. The Area Under Curve (AUC) is often used to evaluate Maxent models, and has been criticized previously [29]. We used a modified version of the AUC for model evaluation, termed here AG AUC. The model predictions were compared to the known values as collected by AG. 
AG AUC values range from 0 to 1, values close to 1 indicate good performance, 0.5 is no better than random. Use of the AG dataset allowed us to use true absence data for model validation, avoiding the problem of pseudo-absence data in Maxent. The purpose of these two different metrics is to consider different aspects of performance, neither were without limitations. Use of the AG dataset allowed a degree of validation across methodologies, indicating the extent to which our results were dependent on a particular sampling regime. AICc is traditionally used to indicate over-fitting in models, however it is sensitive to the size of the ecological niche of the species. Identifying the relationship between habitat suitability and Buruli ulcer prevalence: Buruli ulcer prevalence data was collected for two endemic regions in Cameroon, Akonolinga [30] and Bankim [31], as shown in Figure 3. Around the centre of each village a buffer was created, and average habitat suitability in this buffer was correlated to the village Buruli ulcer prevalence using Spearman’s rank correlation coefficient. Seven buffers were used to explore the effect of buffer size and shape. Around the centre of the village circular buffers of 1, 2, 3, 4, 5 and 10 km were selected, and average habitat suitability recorded. We also used a buffer defined by the borders of the village (Figure 3). This buffer changes in size for each village, and represents the approximate extent of the village area. 5 km is approximately the flight radius of the insects, and the distance easily walkable by local people on an average day in Akonolinga [32]. To deal with this multiple testing, Bonferonni’s correction of the p-values was used.Figure 3 Spatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon. Spatial distribution of Buruli ulcer prevalence in Akonolianga and Bankim, two endemic regions within Cameroon. Results: Distribution of suitable habitat, and its relationship to Buruli ulcer In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either wet or dry seasons or at any buffer distance (Table 3).Table 2 Spearman’s rank correlation coefficients for correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species SpeciesSeasonBufferBonferroni pvalueBelostomatidaeDry10-5-4-3-2-1-Village-Wet100.00050.01440.0233-2-1-Village-NaucoridaeDry10-5-4-3-2-1-Village-Wet100.00150.01340.0143-2-1-Village-The column labelled Buffer is the distance, in km, around the village centre that the habitat suitability is considered, the buffer labelled village uses village boarders as a buffer (Figure 3). Bonferonni’s p value is the significance of the correlation between the insect and the disease, for clarity only significant values (<0.05) are presented, non-significant values are marked “-”. 
Results
Distribution of suitable habitat, and its relationship to Buruli ulcer: In Akonolinga there was a significantly positive correlation between Buruli ulcer prevalence and average habitat suitability, for both Naucoridae and Belostomatidae, in the wet season (Table 2, Figure 4). This relationship was significant at multiple buffer distances. In contrast, in Bankim there was no significant correlation between Buruli ulcer prevalence and Belostomatidae or Naucoridae average habitat suitability, in either the wet or the dry season or at any buffer distance (Table 3). Significant positive correlations were therefore observed between Buruli ulcer prevalence and both Belostomatidae and Naucoridae in the wet season, but not the dry season.
Table 2 Spearman's rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Akonolinga, for both seasons and species (Bonferroni-corrected p values by buffer)
Buffer: 10 km | 5 km | 4 km | 3 km | 2 km | 1 km | Village
Belostomatidae, dry season: - | - | - | - | - | - | -
Belostomatidae, wet season: 0.000 | 0.014 | 0.023 | - | - | - | -
Naucoridae, dry season: - | - | - | - | - | - | -
Naucoridae, wet season: 0.001 | 0.013 | 0.014 | - | - | - | -
The column labelled Buffer is the distance, in km, around the village centre within which habitat suitability is considered; the buffer labelled Village uses the village borders as a buffer (Figure 3). The Bonferroni p value is the significance of the correlation between the insect and the disease; for clarity only significant values (<0.05) are presented, and non-significant values are marked "-".
Figure 4 Correlation between the prevalence of Buruli ulcer and habitat suitability of Belostomatidae (left) and Naucoridae (right) in the wet season in Akonolinga. Colour indicates use of a linear model (black) or locally weighted scatterplot smoothing (red), two different ways of viewing the correlation. Buruli ulcer was absent from certain villages (grey dots) where habitat suitability for the insects is high. Because this can skew any correlation between habitat suitability and prevalence, we explored the effect of including (thin lines) or excluding (thick lines) these villages. We note that in either case Spearman's rank correlation coefficient was significant. For Belostomatidae, p = 0.08 with the Buruli ulcer-absent villages and p = 0.03 without them; for Naucoridae, p = 0.04 with the BU-absent villages and p = 0.02 without them.
Table 3 Spearman rank correlation coefficients for the correlation between Buruli ulcer prevalence and habitat suitability in Bankim, for both seasons and species. The buffer and p value columns are defined as for Table 2. No significant correlations were observed in Bankim.
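The comparison shown in Figure 4 can be reproduced in outline with base R; the vectors suit and prev (mean wet-season suitability around each village and village BU prevalence) are hypothetical stand-ins for the underlying data.

# suit: mean wet-season habitat suitability around each village; prev: BU prevalence
plot(suit, prev, xlab = "Habitat suitability", ylab = "Buruli ulcer prevalence")
abline(lm(prev ~ suit))                  # linear fit (black in Figure 4)
lines(lowess(suit, prev), col = "red")   # locally weighted scatterplot smoothing (red)
cor.test(suit, prev, method = "spearman")                        # rank correlation, all villages
cor.test(suit[prev > 0], prev[prev > 0], method = "spearman")    # excluding BU-absent villages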
Ecologically important variables in the distribution of the aquatic insect families: Variable importance was evaluated using jackknife variable removal, in which a variable is removed and the effect of its removal on the model is evaluated. In the dry season Belostomatidae and Naucoridae responded in broadly similar fashions; the variable whose removal had the largest effect was GLC 5 km (Figure 5). The land cover categories most suitable for both Belostomatidae and Naucoridae are water bodies, artificial areas, rain-fed croplands and forest/grassland mosaics (Figure 6). If one of these categories is the dominant category within a 5 km radius in the dry season, the likelihood of encountering the insect is higher. Unsuitable categories were forest and vegetation/cropland mosaic. In the wet season precipitation is more important than land cover. Precipitation suitability peaks at approximately 300 millimeters per month and diminishes above or below this (Figure 6); for the dry season there is a simple increase in habitat suitability with increasing precipitation. Flow accumulation had a negative association with habitat suitability, and wetness index had a positive association, regardless of season, for both Belostomatidae and Naucoridae.
Figure 5 Importance of each variable according to jack-knife AUC for the wet and dry seasons. A high value indicates the variable is important; however, this is sensitive to correlation within the variables. For both insects the most important variable in the dry season was the land cover in the flight radius (GLC 5 km); in the wet season precipitation was the most important variable.
Figure 6 Habitat suitability for Belostomatidae (1st and 2nd rows) and Naucoridae (3rd and 4th rows) in the wet and dry seasons. Both insects have a negative relationship with flow accumulation and a positive relationship with wetness index; an unexpected combination given that increasing flow accumulation normally means an increasing wetness index.
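A rough R sketch of this leave-one-variable-out procedure is given below. It assumes objects env_stack (the predictor stack), sme_pres (SME presence points) and ag_pres/ag_abs (presence and absence points from the AG dataset); these names are illustrative, not the authors' code, and maxent() additionally requires rJava and the MaxEnt Java file used by dismo.

library(raster)
library(dismo)
full_model <- maxent(env_stack, sme_pres)
full_auc   <- evaluate(p = ag_pres, a = ag_abs, model = full_model, x = env_stack)@auc
drop_auc <- sapply(names(env_stack), function(v) {
  reduced <- dropLayer(env_stack, v)                  # jackknife: remove one variable
  m <- maxent(reduced, sme_pres)
  evaluate(p = ag_pres, a = ag_abs, model = m, x = reduced)@auc
})
full_auc - drop_auc   # a large drop in AUC marks an influential variable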
Model performance: AICc for Naucoridae adults was 14.6 in the dry season and 14.2 in the wet season; for Belostomatidae adults the AICc was 12.5 in the dry season and 12.2 in the wet season. Scores of overfitting are relative; these scores indicate that the Belostomatidae model was less prone to overfitting than the Naucoridae model. The AG dataset was also used in model validation. In the dry season Naucoridae adults had an AG AUC of 0.83, and 0.80 in the wet season; Belostomatidae adults had an AG AUC of 0.80 in the dry season and 0.86 in the wet season. These scores indicate that the models are able to describe the distribution of the insects with good accuracy; the models based on the SME dataset are able to accurately replicate the independently collected AG dataset.
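For reference, a minimal sketch of the small-sample corrected AIC used to flag over-fitting is shown below; this is the generic AICc formula rather than the authors' exact Maxent-specific implementation [28], and the function name is illustrative.

# AICc = -2*logLik + 2k + 2k(k + 1)/(n - k - 1), with k model parameters and n records
aicc <- function(loglik, k, n) {
  -2 * loglik + 2 * k + (2 * k * (k + 1)) / (n - k - 1)
}
# lower AICc favours models that fit the data well without excess complexity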
Previous research has proposed that M. ulcerans is transmitted within a multi-host transmission network [4]. In such a situation of multiple hosts, the relative importance of any given mode of transmission may be expected to vary in time or space, and our results are consistent with, though not proof of, this hypothesis. The lack of a clear signal between water bugs and Buruli ulcer in Bankim would suggest that they may not be key vectors or host carriers in that region. Recent studies have found notable changes in community composition relevant to M. ulcerans distribution in the Greater Accra, Ashanti and Volta regions of Ghana, for both plants [35] and aquatic insects [37]. This would support the importance of changing biotic communities as a key factor in changing Buruli ulcer prevalence, a priority for future work. We have found that the prevalence of Buruli ulcer is correlated with Belostomatidae and Naucoridae abundance in Akonolinga but not in Bankim; we do not know whether the plant community composition of these regions, and the wider aquatic insect community, also correlate with changes in disease prevalence on a similar scale. The ecological niches of both naucorid and belostomatid water bugs in West Africa are predominantly determined by the distribution of suitable land cover within a 5 km radius, with water bodies, artificial areas and rain-fed croplands preferred. The specific land cover at the point of the site (GLC-AP) was less informative. The observation that the most suitable regional land cover class is water bodies is not surprising, but the high suitability of urban areas is curious. Ecologically this could have a variety of causes: there may be changes in the chemical composition of water in these habitats, a reduction in predation pressure, or a greater abundance of food. The specific reasons will require further research. Our study has certain limitations. First, the taxonomic resolution of the insects is low. Secondly, the distribution of M. ulcerans within these insects in these areas is unknown; the distribution of Naucoridae and Belostomatidae infected by M. ulcerans may differ from the distribution of Naucoridae and Belostomatidae generally. However, the insects are known to host the bacillus on their carapace, in their body [10, 14, 16] and in their salivary glands [14] in the wild, and the distribution of the insect necessarily sets a limit to the distribution of infected insects. A related limitation is the unknown incubation time of M. ulcerans, that is, the time from infection to presentation at the hospital. Finally, we have only addressed a single criterion of the Bradford-Hill guidelines: correlation. We have not aimed for a full discussion of the other criteria, and our findings should not be interpreted as proof of the role of these insects as vectors or key host carriers. Rather, we have discussed the existence of, and change in, a correlation between these insects and Buruli ulcer. Future work aims to explore spatial variation in the correlation between Buruli ulcer and the entire plant and animal communities, identifying any similarities between regions where the correlation exists, expanding on previous studies [37] which have focused on the differences in community assemblage between M. ulcerans endemic and non-endemic regions of Ghana. Despite these limitations, these results are consistent with previous research, which has shown that in Akonolinga the Nyong river is a risk factor for Buruli ulcer [30].
Our results agree with this conclusion; the main focus of suitable habitat for the insects in Akonolinga is the Nyong river, where the presence of large plants near the river banks provides appropriate habitat for Naucoridae and Belostomatidae to forage, develop and reproduce. Previous research has also implicated aquatic insects as important vectors in Akonolinga [14], including detection of M. ulcerans in the saliva of the insects. In conclusion, we find a positive correlation between the abundance of suitable habitat for Naucoridae and Belostomatidae and Buruli ulcer prevalence. This correlation is not constant; it changes in time and space. We interpret this as evidence consistent with the idea that Naucoridae and Belostomatidae may be locally important host carriers of M. ulcerans under certain conditions, their importance changing as the environmental conditions change, as would be expected in a situation of multi-host transmission.
Background: The mode of transmission of the emerging neglected disease Buruli ulcer is unknown. Several potential transmission pathways have been proposed, such as amoebae, or transmission through food webs. Several lines of evidence have suggested that biting aquatic insects, Naucoridae and Belostomatidae, may act as vectors; however, this proposal remains controversial. Methods: Herein, based on sampling in Cameroon, we construct an ecological niche model of these insects to describe their spatial distribution. We predict their distribution across West Africa, describe important environmental drivers of their abundance, and examine the correlation between their abundance and Buruli ulcer prevalence in the context of the Bradford-Hill guidelines. Results: We find a significant positive correlation between the abundance of the insects and the prevalence of Buruli ulcer. This correlation changes in space and time: it is significant in one Cameroonian study region (Akonolinga) and not the other (Bankim). We discuss notable environmental differences between these regions. Conclusions: We interpret the presence of, and change in, this correlation as evidence (though not proof) that these insects may be locally important in the environmental persistence, or transmission, of Mycobacterium ulcerans. This is consistent with the idea of M. ulcerans as a pathogen transmitted by multiple modes of infection, the importance of any one pathway changing from region to region, depending on the local environmental conditions.
null
null
13,021
265
[ 993, 238, 559, 770, 294, 232, 819, 496, 153 ]
12
[ "season", "correlation", "buruli", "ulcer", "buruli ulcer", "habitat", "belostomatidae", "suitability", "naucoridae", "habitat suitability" ]
[ "association ulcerans aquatic", "bugs buruli ulcer", "pathogen mycobacterium ulcerans", "transmission ulcerans aquatic", "distribution ulcerans insects" ]
null
null
null
[CONTENT] Ecological niche modelling | Naucoridae | Belostomatidae | Spatial distribution | Habitat suitability | Buruli ulcer | Mycobacterium ulcerans | Vector-borne transmission | Environmentally-acquired disease [SUMMARY]
null
[CONTENT] Ecological niche modelling | Naucoridae | Belostomatidae | Spatial distribution | Habitat suitability | Buruli ulcer | Mycobacterium ulcerans | Vector-borne transmission | Environmentally-acquired disease [SUMMARY]
null
[CONTENT] Ecological niche modelling | Naucoridae | Belostomatidae | Spatial distribution | Habitat suitability | Buruli ulcer | Mycobacterium ulcerans | Vector-borne transmission | Environmentally-acquired disease [SUMMARY]
null
[CONTENT] Animals | Buruli Ulcer | Cameroon | Ecosystem | Geographic Mapping | Hemiptera | Humans | Insect Vectors | Mycobacterium ulcerans [SUMMARY]
null
[CONTENT] Animals | Buruli Ulcer | Cameroon | Ecosystem | Geographic Mapping | Hemiptera | Humans | Insect Vectors | Mycobacterium ulcerans [SUMMARY]
null
[CONTENT] Animals | Buruli Ulcer | Cameroon | Ecosystem | Geographic Mapping | Hemiptera | Humans | Insect Vectors | Mycobacterium ulcerans [SUMMARY]
null
[CONTENT] association ulcerans aquatic | bugs buruli ulcer | pathogen mycobacterium ulcerans | transmission ulcerans aquatic | distribution ulcerans insects [SUMMARY]
null
[CONTENT] association ulcerans aquatic | bugs buruli ulcer | pathogen mycobacterium ulcerans | transmission ulcerans aquatic | distribution ulcerans insects [SUMMARY]
null
[CONTENT] association ulcerans aquatic | bugs buruli ulcer | pathogen mycobacterium ulcerans | transmission ulcerans aquatic | distribution ulcerans insects [SUMMARY]
null
[CONTENT] season | correlation | buruli | ulcer | buruli ulcer | habitat | belostomatidae | suitability | naucoridae | habitat suitability [SUMMARY]
null
[CONTENT] season | correlation | buruli | ulcer | buruli ulcer | habitat | belostomatidae | suitability | naucoridae | habitat suitability [SUMMARY]
null
[CONTENT] season | correlation | buruli | ulcer | buruli ulcer | habitat | belostomatidae | suitability | naucoridae | habitat suitability [SUMMARY]
null
[CONTENT] transmission | proposed | vector | ulcerans | infection | human | buruli | buruli ulcer | ulcer | vectors [SUMMARY]
null
[CONTENT] season | variable | villages | correlation | suitability | habitat | habitat suitability | village | absent | wet [SUMMARY]
null
[CONTENT] season | buruli | buruli ulcer | ulcer | sites | ag | correlation | village | dataset | prevalence [SUMMARY]
null
[CONTENT] Buruli ||| amoebae ||| Naucoridae | Belostomatidae [SUMMARY]
null
[CONTENT] Buruli ||| one | Camerounese | Akonolinga | Bankim ||| [SUMMARY]
null
[CONTENT] Buruli ||| amoebae ||| Naucoridae | Belostomatidae ||| Cameroon ||| West Africa | Buruli | Bradford ||| Buruli ||| one | Camerounese | Akonolinga | Bankim ||| ||| Mycobacterium ||| ||| M. [SUMMARY]
null
Feature-specific terrain park-injury rates and risk factors in snowboarders: a case-control study.
24184587
Snowboarding is a popular albeit risky sport and terrain park (TP) injuries are more severe than regular slope injuries. TPs contain man-made features that facilitate aerial manoeuvres. The objectives of this study were to determine overall and feature-specific injury rates and the potential risk factors for TP injuries.
BACKGROUND
Case-control study with exposure estimation, conducted in an Alberta TP during two ski seasons. Cases were snowboarders injured in the TP who presented to ski patrol and/or local emergency departments. Controls were uninjured snowboarders in the same TP. κ Statistics were used to measure the reliability of reported risk factor information. Injury rates were calculated and adjusted logistic regression was used to calculate the feature-specific odds of injury.
METHODS
Overall, 333 cases and 1261 controls were enrolled. Reliability of risk factor information was κ>0.60 for 21/24 variables. The overall injury rate was 0.75/1000 runs. Rates were highest for jumps and half-pipe (both 2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs). Compared with rails, there were increased odds of injury for half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12).
RESULTS
Higher feature-specific injury rates and increased odds of injury were associated with features that promote aerial manoeuvres or a large drop to the ground. Further research is required to determine ways to increase snowboarder safety in the TP.
CONCLUSIONS
[ "Adolescent", "Adult", "Alberta", "Case-Control Studies", "Child", "Female", "Humans", "Incidence", "Injury Severity Score", "Male", "Risk Factors", "Skiing" ]
3888610
Introduction
Snowboarding is a popular sport1 2 and the risk of injury is higher for snowboarding than for skiing.3 4 Being a beginner,5–8 poor weather conditions9 and not wearing protective equipment10–12 increase injury risk. Ski areas often include terrain parks (TPs) with man-made features (eg, jumps, rails and half-pipes) for performing tricks and aerial manoeuvres. In November 2007, Resorts of the Canadian Rockies (RCR) removed all man-made jumps from their TPs because they believed jumps increased injury risk.13 Definitions of common features can be found at: http://www.snowboard-coach.com/freestyle-snowboarding-features.html and in appendix 1. Between 5% and 27% of skiing and snowboarding injuries occur in TPs2 14–19 and are more severe than regular slope injuries.15 16 18 At the 2012 Winter Youth Olympic Games, 35% of all snowboard half-pipe and slope-style competitors were injured.20 Those injured in TPs tend to be snowboarders, male, 13–24 years old, to fall from higher heights15 or to be self-perceived experts.16 There is a dearth of research examining injury rates and intrinsic and extrinsic risk factors for snowboarders in TPs in relation to injury mechanism—a comprehensive approach recommended by sport injury prevention research leaders.21 22 Therefore, the study objectives were to calculate overall and feature-specific TP injury rates, determine potential risk factors for injury in the TP and assess the reliability of the data collection methods.
Methods
Definition of cases and controls: This unmatched case-control study was conducted in one Alberta TP during the 2008–2009 and 2009–2010 seasons. There were approximately 290 000 skier–snowboarder visits to the resort annually. Except for the half-pipe and mushroom, the overall TP layout and number of features changed once each season, resulting in four configurations containing all seven feature types. The resort did not assign a difficulty rating to individual features. Helmets were mandatory in the TP. Cases were snowboarders injured in the TP who presented to the ski patrol and/or one of two nearby emergency departments (EDs), both Level 1 trauma centres (one adult, one paediatric). Controls were uninjured snowboarders using the same TP. Cases were ‘severe’ if they presented to an ED, and ‘minor’ if they presented to ski patrol only or to ski patrol and a non-emergent healthcare provider. Injuries presenting to the ED represent the public health burden, and snowboarders injured in TPs near hospitals may place a strain on EDs.
Data collection: Case data were collected from ski patrol Accident Report Forms (ARFs) and ED medical records. ARFs are completed by the ski patrol for anyone who is injured and presents to them. ARFs record demographics (age, sex and five-point self-reported ability), injured body region, injury type (fracture, dislocation, sprain/strain, bruise/abrasion/laceration and concussion), environment (temperature, light and snow (groomed/ungroomed)) and contact information. ARFs have previously been used in research14 18 19 23–25 and were collected from the resort biweekly. Snowboarders who presented to the ED were identified from the Regional Emergency Department Information System. Following verbal consent (parent/guardian if the snowboarder was <14 years) and confirmation that they were injured at the TP of interest, additional self-reported data were collected by telephone. The previously listed information was captured for snowboarders presenting only to the ED, along with years of snowboarding and TP experience, listening to music, wearing wrist guards, previous snowboarding injury and the feature used (jump, kicker, box, rail, quarter-pipe, half-pipe or mushroom) when injured. If the case presented to the ski patrol and could not be contacted or did not consent to the telephone interview, only the ARF data were included.
To collect feature use among controls, trained research assistants (RAs) at the bottom of the hill observed snowboarders' TP runs in 3 h time slots, 3–4 times a week at various times during the day, and recorded feature use on a map. Data were collected each week from 1 January until the resort closed (end of March in Season 1 and mid-April in Season 2) and included all four TP layouts. RAs approached the first snowboarder, obtained consent and asked the same risk factor questions as were asked of cases. Snowboarders indicated feature use on the map for features not fully visible to the RA. After each interview, the next snowboarder closest to the RA was approached. Temperature (smartphone Weather Network application), light and snow conditions were recorded on an hourly basis.
Injury rate denominator data were collected in the same 3 h slots. Snowboarder runs were counted by an RA at the top of the TP, which was the only run to the right of the chairlift and the only run serviced by that chairlift. Age-group (<12, 12–17 or >17) and sex were visually assessed. To determine the accuracy of observed age-group and sex classifications, RAs independently estimated the age-group and sex of a snowboarder entering the TP and then approached them to confirm. This was repeated in 10 min intervals for 3 h blocks and included 337 snowboarders. Ethical approval was granted by the University of Calgary Conjoint Health Research Ethics Board.
Analysis
Reliability: Three pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine the accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated. At the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering, and weighted κ (κw) with 95% CIs for ordinal variables.27
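As an illustration, test-retest agreement of this kind could be computed in R roughly as follows; the original analyses were run in Stata/SE, and the data frame reint and its columns are hypothetical.

library(irr)   # kappa2() gives two-rating agreement
# reint: one row per reinterviewed case, with the original (t1) and repeat (t2) answers
kappa2(reint[, c("wrist_guards_t1", "wrist_guards_t2")])           # unweighted kappa for a yes/no item
kappa2(reint[, c("ability_t1", "ability_t2")], weight = "equal")   # weighted kappa for ordinal ability (linear weights, for illustration)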
Rates: Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total number of runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was the number of snowboarders who presented to the ED. Age-group-specific, sex-specific and age–sex-specific injury rates were calculated. Feature-specific injury and severe injury rates were also calculated; the denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28 Overall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate, possibly due to their smaller drop to the ground.
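For illustration, exact Poisson confidence intervals and a feature-versus-rails rate ratio could be obtained in R as sketched below; the analyses in the paper were run in Stata/SE, and the counts shown here are hypothetical.

# hypothetical counts: injuries and estimated runs on jumps and on rails
jump_injuries <- 60;  jump_runs <- 23400
rail_injuries <- 30;  rail_runs <- 70000
poisson.test(jump_injuries, T = jump_runs)         # jump injury rate per run, exact 95% CI
poisson.test(c(jump_injuries, rail_injuries),
             T = c(jump_runs, rail_runs))          # rate ratio, jumps versus rails
# multiply the estimated rates by 1000 to express them per 1000 runs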
To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27 Rates Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated. Feature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28 Overall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground. Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated. Feature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28 Overall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground. Risk factors The distributions of potential risk factors between cases and controls, severe cases and controls and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated. 
Risk factors
The distributions of potential risk factors between cases and controls, severe cases and controls, and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated.
Logistic regression with backwards elimination was used to estimate the association between injury (vs no injury) and feature use.29 Potential confounders were: age (continuous), sex (male/female), previous injury (yes/no), self-reported ability (beginner-novice/intermediate/advanced/expert), wrist guard use (yes/no), music use (yes/no), temperature (>10°C, 0–10°C, −10°C to 0°C, <−10°C), light (sunny/cloudy/night) and snow (groomed/ungroomed). These were entered into the model containing feature (jumps, kickers, half-pipe, quarter-pipe, box, mushroom, rails). The confounder producing the smallest change in the feature-specific ORs was removed, provided that change was <15%, and the process was repeated until every confounder whose removal changed no feature-specific OR by ≥15% had been removed.30 The 95% CIs were adjusted for the clustering effect of multiple feature use within controls.
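A minimal Python sketch of the change-in-estimate backwards elimination just described is given below. It assumes a hypothetical complete-case data frame with one row per observed run and columns named injured, feature, rider_id and the candidate confounders; it illustrates the general approach rather than reproducing the authors' Stata code.

```python
# Sketch of change-in-estimate backwards elimination with cluster-robust CIs
# (hypothetical column names; assumes complete cases so that the cluster groups
# align with the rows used in each model fit).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

FEATURE_TERM = "C(feature, Treatment('rails'))"   # rails as the reference feature
CONFOUNDERS = ["age", "C(sex)", "C(ability)", "C(previous_injury)",
               "C(wrist_guards)", "C(music)", "C(temperature)", "C(light)", "C(snow)"]

def feature_ors(df: pd.DataFrame, confounders: list[str]) -> pd.Series:
    """Fit a logistic model of injury on feature plus confounders and return the
    feature ORs, with cluster-robust SEs for repeated feature use within riders."""
    formula = "injured ~ " + " + ".join([FEATURE_TERM] + confounders)
    fit = smf.logit(formula, data=df).fit(
        disp=0, cov_type="cluster", cov_kwds={"groups": df["rider_id"]})
    params = fit.params[fit.params.index.str.startswith("C(feature")]
    return np.exp(params)

def backwards_eliminate(df: pd.DataFrame, threshold: float = 0.15) -> list[str]:
    """Repeatedly drop the confounder whose removal changes the feature ORs least,
    as long as the largest relative change stays below the 15% threshold."""
    kept = list(CONFOUNDERS)
    while kept:
        reference = feature_ors(df, kept)
        changes = {}
        for c in kept:
            reduced = feature_ors(df, [k for k in kept if k != c])
            changes[c] = float(np.max(np.abs(reduced - reference) / reference))
        candidate = min(changes, key=changes.get)
        if changes[candidate] >= threshold:
            break           # removing anything else would shift an OR by >=15%
        kept.remove(candidate)
    return kept             # confounders retained in the final model
```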
To compare severe injuries with no injuries, and severe injuries with minor injuries, forward selection was used because of the smaller sample sizes.31 Potential confounders were added one at a time to the crude model, and the confounder that produced the greatest percentage change in a feature OR was retained. This was repeated until either the addition of another confounder no longer changed any of the TP feature estimates by >15% or there was one confounder for every 10 cases.31 The 95% CIs were adjusted for the clustering effect of multiple feature use within uninjured controls.
For the injury versus no injury comparison, a sensitivity analysis using multiple imputation by chained equations with five imputed datasets was conducted32 to address missing data and to test for effect modification of feature by age or sex. Feature, age, sex, ability, previous injury, music, wrist guards, temperature, light and snow were used for imputation. Effect modification was assessed using an omnibus test (p>0.05 indicating no evidence of effect modification), and backwards elimination multiple logistic regression was conducted on the imputed data. Analyses were conducted in Stata/SE V.11.33
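A simplified Python sketch of the multiple-imputation sensitivity analysis is shown below, assuming numerically encoded variables in a hypothetical data frame. Scikit-learn's IterativeImputer stands in for full chained equations, and estimates from the imputed datasets are pooled with Rubin's rules; the Barnard-Rubin degrees-of-freedom correction is omitted for brevity.

```python
# Simplified multiple-imputation sketch (hypothetical, numerically encoded data;
# not the authors' Stata implementation of chained equations).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

def pooled_logistic(df: pd.DataFrame, outcome: str, predictors: list[str], m: int = 5):
    """Impute m datasets, fit a logistic model on each, and pool with Rubin's rules."""
    estimates, variances = [], []
    cols = [outcome] + predictors
    for seed in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        completed = pd.DataFrame(imputer.fit_transform(df[cols]), columns=cols)
        completed[outcome] = completed[outcome].round().clip(0, 1)  # keep outcome binary
        X = sm.add_constant(completed[predictors])
        fit = sm.Logit(completed[outcome], X).fit(disp=0)
        estimates.append(fit.params)
        variances.append(fit.bse ** 2)
    q_bar = pd.concat(estimates, axis=1).mean(axis=1)       # pooled coefficients
    w_bar = pd.concat(variances, axis=1).mean(axis=1)       # within-imputation variance
    b = pd.concat(estimates, axis=1).var(axis=1, ddof=1)    # between-imputation variance
    se = np.sqrt(w_bar + (1 + 1 / m) * b)                   # Rubin's rules total variance
    return pd.DataFrame({"OR": np.exp(q_bar),
                         "lower": np.exp(q_bar - 1.96 * se),
                         "upper": np.exp(q_bar + 1.96 * se)})
```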
Results
A total of 333 cases (107 ski patrol, 174 ski patrol and ED, 18 ski patrol and healthcare provider and 34 ED only) and 1261 controls were included. The consent rate was 79% for cases and 94% for controls (figure 1).
Figure 1: Recruitment of included snowboarders.

Reliability
Based on initial RA observation and follow-up confirmation, the RAs correctly classified age-group in 78% (κ=0.71; 95% CI 0.49 to 0.93) and sex in 100% of uninjured snowboarders (κ=1.00; 95% CI 0.69 to 1.00). There was significant agreement for age and sex classification between the RAs (p>0.05 for each of the three pairs). For interview responses, κ=0.72 (95% CI 0.60 to 0.85) for feature use at the time of injury, and this improved when follow-up interviews were conducted within 2 weeks (κ=0.86; 95% CI 0.64 to 1.00). The overall κ were ≥0.60 except for body region of the second and third injuries and diagnosis of the second injury.

Injuries
Overall, 62.5% of cases went to the ED. The most commonly injured body regions were the wrist (20%) and head (14%), while the most common injury type was fracture (36%). Table 1 describes feature use among injured and uninjured TP snowboarders.
Table 1: Features used by all injured, severely injured and uninjured snowboarders in the terrain park. *Weighted for time of day, weekday versus weekend and exposure opportunity.

Females aged 12–17 and >17 had higher rates of injury than males, and higher rates of severe injury at all ages (table 2).
Table 2: Overall, sex-specific and age-specific injury rates (per 1000 runs and 95% CI).

Features that promoted aerial manoeuvres or resulted in a greater drop to the ground typically had higher injury rates than features with a small drop (table 3). Jumps and the half-pipe had the highest overall and severe injury rates. Compared with rails, rates of all injuries were significantly higher on the half-pipe, jumps and boxes (table 4).
Table 3: All and severe injury rates (per 1000 feature exposures and 95% CI)
Feature: all injuries* rate (95% CI); severe injuries* rate (95% CI)
Aerial manoeuvre or substantial drop to the ground:
  Half-pipe: all injuries 2.6 (1.5 to 4.0); severe injuries 1.8 (0.9 to 3.0)
  Jumps: all injuries 2.6 (2.1 to 3.2); severe injuries 1.8 (1.4 to 2.3)
  Kickers: all injuries 0.6 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)
  Mushroom: all injuries 0.5 (0.3 to 0.8); severe injuries 0.2 (0.1 to 0.5)
Small drop to the ground:
  Boxes: all injuries 0.7 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)
  Rails: all injuries 0.4 (0.3 to 0.6); severe injuries 0.2 (0.1 to 0.4)
  Quarter-pipes: all injuries 0.2 (0.1 to 0.4); severe injuries 0.1 (0.1 to 0.3)
*Injury rates were adjusted for weekday versus weekend, time of day and exposure opportunity (ie, incorporate days when the terrain park or certain features were closed).

Table 4: Feature-specific injury rate ratios and 95% CI. N, number; RR, rate ratio. *Four snowboarders could not recall the feature used at the time of injury and 13 snowboarders were not using a feature at the time of injury but were within the boundaries of the terrain park (TP), for a total of 333 TP injuries. †Two snowboarders could not recall the feature used at the time of injury and nine snowboarders were not using a feature at the time of injury but were within the boundaries of the TP, for a total of 208 severe TP injuries. Bold refers to statistically significant results.

Baseline characteristics are presented in table 5. The odds of injury were significantly higher when it was −10°C to 0°C compared with 0–10°C, and at night versus in sunny weather. Beginners/novices had significantly lower odds of injury than intermediates; odds were also significantly lower for those with a previous snowboarding injury, those listening to music, and when the temperature was above 10°C versus 0–10°C. When severely injured snowboarders were compared with uninjured snowboarders, the same patterns were observed except that music and temperatures above 10°C were not significant. There were no significant differences between severe and minor injuries.
Table 5: Summary of the characteristics of all injured and uninjured snowboarders (crude OR and 95% CI). *Odds of injury increase with each additional year of age. †Compared with no injuries. ‡Compared with minor injuries. N, numbers; N/A, not applicable.

The crude associations between injury and feature use showed significantly greater odds of injury for jumps and the half-pipe compared with rails, and significantly lower odds of injury for quarter-pipes (table 6). For severe injury versus no injury, the crude odds of severe injury were significantly greater on jumps, the half-pipe and kickers compared with rails.
Table 6: Association between injury and feature type after controlling for confounders (OR and 95% CI). *Adjusted for previous injury, ability and temperature; age, sex, listening to music, wearing wrist guards, and light and snow conditions did not change any of the feature-specific estimates by more than 15%. †Adjusted for music and light; age, sex, ability, wrist guard use, and temperature and snow conditions did not change any of the feature-specific estimates by more than 15%.

There were significant increases in the adjusted odds of injury for the half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12) versus rails. The adjusted odds of severe injury versus no injury were significantly higher for the half-pipe, jumps and kickers compared with rails. After accounting for clustering, the 95% CI widths increased marginally but significance did not change.
Using the imputed dataset, there was no evidence of effect modification of feature by age or sex (p=0.41); versus rails, there were significantly increased adjusted odds of injury on the half-pipe (OR 5.88; 95% CI 3.25 to 10.63) and jumps (OR 4.78; 95% CI 3.22 to 7.11) and significantly decreased odds of injury on quarter-pipes (OR 0.49; 95% CI 0.25 to 0.97).
Conclusion
Feature-specific injury rates ranged from 2.56 injuries/1000 runs (jumps and half-pipe) to 0.24 injuries/1000 runs (quarter-pipe). Half-pipe, jumps and kickers were significant risk factors for any injury and for severe injury. Recommendations have been made for the development of prevention strategies to reduce TP injury risk. These strategies will require rigorous evaluation.

What are the new findings?
In this study, the overall injury rate for snowboarding terrain park (TP) injuries is 0.75 injuries/1000 runs.
The injury rates are highest for jumps (2.56/1000 runs) and half-pipe (2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs).
Compared with rails, the odds of injury are significantly higher on the half-pipe (OR 9.63; 95% CI 4.80 to 19.32) and jumps (OR 4.29; 95% CI 2.72 to 6.76).

How might it impact on clinical practice in the near future?
Terrain parks (TPs) are popular, and the majority of snowboarders injured in the TP present to the emergency department.
With the identification of potential risk factors for TP injuries among snowboarders, injury prevention programmes can be tailored to those at greatest risk of injury.
Should these programmes be effective, clinicians can expect to treat fewer snowboarders injured in the TP.
[ "Introduction", "Definition of cases and controls", "Data collection", "Analysis", "Reliability", "Rates", "Risk factors", "Reliability", "Injuries", "Limitations" ]
[ "Snowboarding is a popular sport1\n2 and the risk of injury is higher for snowboarding than for skiing.3\n4 Being a beginner,5–8 poor weather conditions9 and not wearing protective equipment10–12 increase injury risk. Ski areas often include terrain parks (TPs) with man-made features (eg, jumps, rails and half-pipes) for performing tricks and aerial manoeuvres. In November 2007, Resorts of the Canadian Rockies (RCR) removed all man-made jumps from their TPs because they believed jumps increased injury risk.13 Definitions of common features can be found at: http://www.snowboard-coach.com/freestyle-snowboarding-features.html and in appendix 1.\nBetween 5% and 27% of skiing and snowboarding injuries occur in TPs2\n14–19 and are more severe than regular slope injuries.15\n16\n18 At the 2012 Winter Youth Olympic Games, 35% of all snowboard half-pipe and slope-style competitors were injured.20 Those injured in TPs tend to be snowboarders, male, 13–24 years old, fall from higher heights15 or self-perceived experts.16 There is a dearth of research examining injury rates and intrinsic and extrinsic risk factors for snowboarders in TPs in relation to injury mechanism—a comprehensive approach recommended by sport injury prevention research leaders.21\n22 Therefore, the study objectives were to calculate overall and feature-specific TP injury rates, determine potential risk factors for injury in the TP and assess the reliability of the data collection methods.", "This unmatched case-control study was conducted in one Alberta TP during the 2008–2009 and 2009–2010 seasons. There were approximately 290 000 skier–snowboarder visits to the resort annually. Except for the half-pipe and mushroom, the overall TP layout and number of features changed once each season, resulting in four configurations containing all seven feature types. The resort did not assign a difficulty rating to individual features. Helmets were mandatory in the TP.\nCases were snowboarders injured in the TP who presented to either the ski patrol and/or one of two nearby emergency departments (EDs), both Level 1 trauma centres (one adult, one paediatric). Controls were uninjured snowboarders using the same TP. Cases were ‘severe’ if they presented to an ED, and ‘minor’ if they presented to ski patrol only or to ski patrol and a non-emergent healthcare provider. Injuries presenting to the ED represent the public health burden and snowboarders injured in the TPs near hospitals may place a strain on EDs.", "Case data were collected from ski patrol Accident Report Forms (ARF) and ED medical records. ARFs are completed by the ski patrol for anyone injured and presented to them. ARFs record demographics (age, sex and five-point self-reported ability), injured body region, injury type (fracture, dislocation, sprain/strain, bruise/abrasion/laceration and concussion), environment (temperature, light and snow (groomed/ungroomed)) and contact information. ARFs have previously been used in research14\n18\n19\n23–25 and were collected from the resort biweekly. Snowboarders who presented to the ED were identified from the Regional Emergency Department Information System. Following verbal consent (parent/guardian if snowboarder was <14 years) and confirmation they were injured at the TP of interest, additional self-reported data were collected by telephone. 
The previously listed information was captured for snowboarders presenting only to the ED, along with years of snowboarding and TP experience, listening to music, wearing wrist guards, previous snowboarding injury and feature used (jump, kicker, box, rail, quarter-pipe, half-pipe or mushroom) when injured. If the case presented to the ski patrol and could not be contacted or did not consent to the telephone interview, only the ARF data were included.\nTo collect feature use among controls, trained research assistants (RAs) at the bottom of the hill observed snowboarders' TP runs in 3 h time slots, 3–4 times a week at various times during the day, and recorded feature use on a map. Data were collected each week from 1 January until the resort closed (end of March in Season 1 and mid-April in Season 2) and included all four TP layouts. RAs approached the first snowboarder, obtained consent and asked the same risk factor information as asked to cases. Snowboarders indicated feature use on the map for features not fully visible to the RA. After each interview, the next snowboarder closest to the RA was approached. Temperature (smartphone Weather Network application), light and snow conditions were recorded on an hourly basis.\nInjury rate denominator data were collected in the same 3 h slots. Snowboarder runs were counted by an RA at the top of the TP, which was the only run to the right of the chairlift and only serviced by that chairlift. Age-group (<12, 12–17 or >17) and sex were visually assessed. To determine accuracy of observed age-group and sex classifications, RAs independently estimated the age-group and sex of a snowboarder entering the TP and then approached them to confirm. This was repeated in 10 min intervals for 3 h blocks and included 337 snowboarders.\nEthical approval was granted by the University of Calgary Conjoint Health Research Ethics Board.", " Reliability Three pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated.\nAt the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27\nThree pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated.\nAt the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27\n Rates Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. 
This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated.\nFeature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28\nOverall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground.\nInjury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated.\nFeature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28\nOverall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground.\n Risk factors The distributions of potential risk factors between cases and controls, severe cases and controls and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated.\nLogistic regression was used to calculate the association between injury versus no injury and feature use using a backwards elimination.29 Potential confounders were: age (continuous), sex (male/female), previous injury (yes/no), self-reported ability (beginner-novice/intermediate/advanced/expert), wrist guard use (yes/no), music use (yes/no), temperature (>10C, 0–10C, −10–0C, <−10C), light (sunny/cloud/night) and snow (groomed/ungroomed). These were entered into the model containing feature (jumps, kickers, half-pipe, quarter-pipe, box, mushroom, rails). 
Whichever confounder produced the smallest change in the feature-specific ORs was removed, provided the change was <15% and repeated until all potential confounders were removed.30 The 95% CIs were adjusted for the clustering effect of multiple feature use within controls.\nTo compare severe injuries with no injuries and severe injuries with minor injuries, forward selection was used because of smaller sample size.31 Potential confounders were added one at a time to the crude model; the confounder that produced the greatest percent change in a feature OR was retained. This was repeated until either the addition of another confounder no longer changed any of the TP feature estimates by >15% or there was one confounder for every 10 cases.31 The 95% CIs were adjusted for clustering effect of multiple feature use within uninjured controls.\nFor the injury versus no injury comparison, a sensitivity analysis using multiple imputation by chained equations with five imputed datasets was conducted32 to address missing data and test for effect modification by age or sex and feature. Feature, age, sex, ability, previous injury, music, wrist guards, temperature, light and snow were used for imputation. Effect modification was assessed using an omnibus test (p>0.05 indicating no evidence of effect modification), and backwards elimination multiple logistic regression was conducted on the imputed data. Analyses were conducted in Stata/SE V.11.33\nThe distributions of potential risk factors between cases and controls, severe cases and controls and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated.\nLogistic regression was used to calculate the association between injury versus no injury and feature use using a backwards elimination.29 Potential confounders were: age (continuous), sex (male/female), previous injury (yes/no), self-reported ability (beginner-novice/intermediate/advanced/expert), wrist guard use (yes/no), music use (yes/no), temperature (>10C, 0–10C, −10–0C, <−10C), light (sunny/cloud/night) and snow (groomed/ungroomed). These were entered into the model containing feature (jumps, kickers, half-pipe, quarter-pipe, box, mushroom, rails). Whichever confounder produced the smallest change in the feature-specific ORs was removed, provided the change was <15% and repeated until all potential confounders were removed.30 The 95% CIs were adjusted for the clustering effect of multiple feature use within controls.\nTo compare severe injuries with no injuries and severe injuries with minor injuries, forward selection was used because of smaller sample size.31 Potential confounders were added one at a time to the crude model; the confounder that produced the greatest percent change in a feature OR was retained. This was repeated until either the addition of another confounder no longer changed any of the TP feature estimates by >15% or there was one confounder for every 10 cases.31 The 95% CIs were adjusted for clustering effect of multiple feature use within uninjured controls.\nFor the injury versus no injury comparison, a sensitivity analysis using multiple imputation by chained equations with five imputed datasets was conducted32 to address missing data and test for effect modification by age or sex and feature. Feature, age, sex, ability, previous injury, music, wrist guards, temperature, light and snow were used for imputation. 
Effect modification was assessed using an omnibus test (p>0.05 indicating no evidence of effect modification), and backwards elimination multiple logistic regression was conducted on the imputed data. Analyses were conducted in Stata/SE V.11.33", "Three pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated.\nAt the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27", "Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated.\nFeature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28\nOverall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground.", "The distributions of potential risk factors between cases and controls, severe cases and controls and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated.\nLogistic regression was used to calculate the association between injury versus no injury and feature use using a backwards elimination.29 Potential confounders were: age (continuous), sex (male/female), previous injury (yes/no), self-reported ability (beginner-novice/intermediate/advanced/expert), wrist guard use (yes/no), music use (yes/no), temperature (>10C, 0–10C, −10–0C, <−10C), light (sunny/cloud/night) and snow (groomed/ungroomed). These were entered into the model containing feature (jumps, kickers, half-pipe, quarter-pipe, box, mushroom, rails). 
Whichever confounder produced the smallest change in the feature-specific ORs was removed, provided the change was <15% and repeated until all potential confounders were removed.30 The 95% CIs were adjusted for the clustering effect of multiple feature use within controls.\nTo compare severe injuries with no injuries and severe injuries with minor injuries, forward selection was used because of smaller sample size.31 Potential confounders were added one at a time to the crude model; the confounder that produced the greatest percent change in a feature OR was retained. This was repeated until either the addition of another confounder no longer changed any of the TP feature estimates by >15% or there was one confounder for every 10 cases.31 The 95% CIs were adjusted for clustering effect of multiple feature use within uninjured controls.\nFor the injury versus no injury comparison, a sensitivity analysis using multiple imputation by chained equations with five imputed datasets was conducted32 to address missing data and test for effect modification by age or sex and feature. Feature, age, sex, ability, previous injury, music, wrist guards, temperature, light and snow were used for imputation. Effect modification was assessed using an omnibus test (p>0.05 indicating no evidence of effect modification), and backwards elimination multiple logistic regression was conducted on the imputed data. Analyses were conducted in Stata/SE V.11.33", "Based on initial RA observation and follow-up confirmation, the RAs correctly classified age-group in 78% (κ=0.71; 95% CI 0.49 to 0.93) and sex in 100% of uninjured snowboarders (κ=1.00; 95% CI 0.69 to 1.00). There was significant agreement for age and sex classification between the RAs (p>0.05 for each of the three pairs). For interview responses, κ=0.72 (95% CI 0.60 to 0.85) for feature use at the time of injury and this improved when follow-up interviews were conducted within 2 weeks (κ=0.86; 95% CI 0.64 to 1.00). The overall κ were ≥0.60 except for body region of the second and third injuries and diagnosis of the second injury.", "Overall, 62.5% went to the ED. The most commonly injured body regions were the wrist (20%), and head (14%), while the most common injury type was fracture (36%). 
Table 1 describes feature use among injured and uninjured TP snowboarders.\nFeatures used by all injured, severely injured snowboarders, and uninjured snowboarders in the terrain park\n*Weighted for time of day, weekday versus weekend and exposure opportunity.\nFemales aged 12–17 and >17 had higher rates of injuries than males, and higher rates of severe injuries at all ages (table 2).\nOverall, sex-specific and age-specific injury rates (per 1000 runs and 95% CI)\nFeatures that promoted aerial manoeuvres or resulted in a greater drop to the ground typically had higher injury rates than features with a small drop (table 3).\nTable 3: All and severe injury rates (per 1000 feature exposures and 95% CI)\nFeature: all injuries* rate (95% CI); severe injuries* rate (95% CI)\nAerial manoeuvre or substantial drop to the ground:\nHalf-pipe: all injuries 2.6 (1.5 to 4.0); severe injuries 1.8 (0.9 to 3.0)\nJumps: all injuries 2.6 (2.1 to 3.2); severe injuries 1.8 (1.4 to 2.3)\nKickers: all injuries 0.6 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)\nMushroom: all injuries 0.5 (0.3 to 0.8); severe injuries 0.2 (0.1 to 0.5)\nSmall drop to the ground:\nBoxes: all injuries 0.7 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)\nRails: all injuries 0.4 (0.3 to 0.6); severe injuries 0.2 (0.1 to 0.4)\nQuarter-pipes: all injuries 0.2 (0.1 to 0.4); severe injuries 0.1 (0.1 to 0.3)\n*Injury rates were adjusted for weekday versus weekend, time of day and exposure opportunity (ie, incorporates days when the terrain park or certain features were closed). Jumps and half-pipe had the highest overall and severe injury rates. Compared with rails, rates of all injuries were significantly higher on the half-pipe, jumps and boxes (table 4).\nAll and severe injury rates (per 1000 feature exposures and 95% CI)\n*Injury rates were adjusted for weekday versus weekend, time of day and exposure opportunity (ie, incorporates days when the terrain park or certain features were closed).\nFeature-specific injury rate ratios and 95% CI\nN, number; RR, rate ratio.\n*Four snowboarders could not recall feature used at time of injury and 13 snowboarders were not using a feature at the time of injury but were in the boundaries of the terrain park (TP) for a total of 333 TP injuries.\n†Two snowboarders could not recall feature used at time of injury and nine snowboarders were not using a feature at the time of injury but were in the boundaries of the TP for a total of 208 severe TP injuries. Bold refers to statistically significant results.\nBaseline characteristics are presented in table 5. The odds of injury were significantly higher when it was −10°C to 0°C compared with 0–10°C, and at night versus sunny weather. Beginners/novices had significantly lower odds of injury than intermediates, as did those with a previous snowboarding injury, music use, or when it was above 10°C versus 0–10°C. When comparing severely injured with uninjured snowboarders, the same patterns were observed except that music or temperatures above 10°C were not significant. There were no significant differences between severe versus minor injuries.\nSummary of the characteristics of all injured and uninjured snowboarders (crude OR and 95% CI)\n*Odds of injury increases for every increase in year.\n†Compared with no injuries.\n‡Compared with minor injuries.\nN, numbers; N/A, not applicable.\nThe crude associations between injury and feature use showed significantly greater odds of injury for jumps and half-pipe, compared with rails and significantly lower odds of injury for quarter-pipes (table 6). 
For severe injury versus no injury, the crude odds of severe injury were significantly greater on jumps, half-pipe and kickers compared with rails.\nAssociation between injury and feature type after controlling for confounders (OR and 95% CI)\n*Adjusted for previous injury, ability and temperature. Age, sex, listening to music, wearing wrist guards and light and snow conditions did not change any of the feature-specific estimates more than 15%.\n†Adjusted for music and light. Age, sex, ability, wrist guard use and temperature and snow conditions did not change any of the feature-specific estimates more than 15%.\nThere were significant increases in the adjusted odds of injury for half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12) versus rails. The adjusted odds of severe injury versus no injury were significantly higher for half-pipe, jumps and kickers compared with rails. After accounting for clustering, the 95% CI width increased marginally but significance did not change.\nUsing the imputed dataset, there was no evidence of effect modification of feature by age or sex (p=0.41) versus rails, significantly increased adjusted odds of injury on half-pipe (OR 5.88; 95% CI 3.25 to 10.63) and jumps (OR 4.78; 95% CI 3.22 to 7.11) and significantly decreased odds of injury on quarter-pipes (OR 0.49; 95% CI 0.25 to 0.97).", "Snowboarders who climbed up to reattempt features were counted as one run, which would underestimate the number of TP runs. Injured snowboarders who did not present to the ski patrol or either of the two nearest EDs were missed. This selection bias resulted in rate underestimation and observed associations between feature and injury would be overestimated if missed snowboarders were injured on rails. However, only 47 (14%) of the included snowboarders said they saw the ski patrol and a non-participating healthcare provider, indicating that most sought treatment at a participating ED.\nThere was potential misclassification by feature if the injured snowboarder could not recall the feature; however, this occurred only four times. Controls may not have correctly reported their feature use when the RA could not see the entire TP and it was not possible to determine the reliability of controls’ feature use. This could have an unpredictable effect on the ORs if it operated differently by feature type. Only age-group and sex of the uninjured controls could be assessed for accuracy and our observations were found to be valid. There was potential for misclassification of severe injury because factors other than severity could predict presentation to the ED, such as parental fear or anxiety influenced by a recent celebrity skiing death.39 It was unknown if non-consenting injured snowboarders presented to a non-study ED and were incorrectly classified as a minor injury. Fortunately, few who saw the ski patrol went to a non-study ED (8%).\nIt is possible that some important confounders were overlooked, such as first attempt at a new feature, manoeuvre performed, speed, fatigue, height or weight. There may be behavioural confounders, such as peer pressure to attempt difficult features or manoeuvres. However, some of these potential confounders were likely accounted for by other variables such as ability, age and sex. 
Although this study was conducted at only one resort, the TP layout changed four times during the two seasons and this enhances the generalisability of the results." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Definition of cases and controls", "Data collection", "Analysis", "Reliability", "Rates", "Risk factors", "Results", "Reliability", "Injuries", "Discussion", "Limitations", "Conclusion", "Supplementary Material" ]
[ "Snowboarding is a popular sport1\n2 and the risk of injury is higher for snowboarding than for skiing.3\n4 Being a beginner,5–8 poor weather conditions9 and not wearing protective equipment10–12 increase injury risk. Ski areas often include terrain parks (TPs) with man-made features (eg, jumps, rails and half-pipes) for performing tricks and aerial manoeuvres. In November 2007, Resorts of the Canadian Rockies (RCR) removed all man-made jumps from their TPs because they believed jumps increased injury risk.13 Definitions of common features can be found at: http://www.snowboard-coach.com/freestyle-snowboarding-features.html and in appendix 1.\nBetween 5% and 27% of skiing and snowboarding injuries occur in TPs2\n14–19 and are more severe than regular slope injuries.15\n16\n18 At the 2012 Winter Youth Olympic Games, 35% of all snowboard half-pipe and slope-style competitors were injured.20 Those injured in TPs tend to be snowboarders, male, 13–24 years old, fall from higher heights15 or self-perceived experts.16 There is a dearth of research examining injury rates and intrinsic and extrinsic risk factors for snowboarders in TPs in relation to injury mechanism—a comprehensive approach recommended by sport injury prevention research leaders.21\n22 Therefore, the study objectives were to calculate overall and feature-specific TP injury rates, determine potential risk factors for injury in the TP and assess the reliability of the data collection methods.", " Definition of cases and controls This unmatched case-control study was conducted in one Alberta TP during the 2008–2009 and 2009–2010 seasons. There were approximately 290 000 skier–snowboarder visits to the resort annually. Except for the half-pipe and mushroom, the overall TP layout and number of features changed once each season, resulting in four configurations containing all seven feature types. The resort did not assign a difficulty rating to individual features. Helmets were mandatory in the TP.\nCases were snowboarders injured in the TP who presented to either the ski patrol and/or one of two nearby emergency departments (EDs), both Level 1 trauma centres (one adult, one paediatric). Controls were uninjured snowboarders using the same TP. Cases were ‘severe’ if they presented to an ED, and ‘minor’ if they presented to ski patrol only or to ski patrol and a non-emergent healthcare provider. Injuries presenting to the ED represent the public health burden and snowboarders injured in the TPs near hospitals may place a strain on EDs.\nThis unmatched case-control study was conducted in one Alberta TP during the 2008–2009 and 2009–2010 seasons. There were approximately 290 000 skier–snowboarder visits to the resort annually. Except for the half-pipe and mushroom, the overall TP layout and number of features changed once each season, resulting in four configurations containing all seven feature types. The resort did not assign a difficulty rating to individual features. Helmets were mandatory in the TP.\nCases were snowboarders injured in the TP who presented to either the ski patrol and/or one of two nearby emergency departments (EDs), both Level 1 trauma centres (one adult, one paediatric). Controls were uninjured snowboarders using the same TP. Cases were ‘severe’ if they presented to an ED, and ‘minor’ if they presented to ski patrol only or to ski patrol and a non-emergent healthcare provider. 
Injuries presenting to the ED represent the public health burden and snowboarders injured in the TPs near hospitals may place a strain on EDs.\n Data collection Case data were collected from ski patrol Accident Report Forms (ARF) and ED medical records. ARFs are completed by the ski patrol for anyone injured and presented to them. ARFs record demographics (age, sex and five-point self-reported ability), injured body region, injury type (fracture, dislocation, sprain/strain, bruise/abrasion/laceration and concussion), environment (temperature, light and snow (groomed/ungroomed)) and contact information. ARFs have previously been used in research14\n18\n19\n23–25 and were collected from the resort biweekly. Snowboarders who presented to the ED were identified from the Regional Emergency Department Information System. Following verbal consent (parent/guardian if snowboarder was <14 years) and confirmation they were injured at the TP of interest, additional self-reported data were collected by telephone. The previously listed information was captured for snowboarders presenting only to the ED, along with years of snowboarding and TP experience, listening to music, wearing wrist guards, previous snowboarding injury and feature used (jump, kicker, box, rail, quarter-pipe, half-pipe or mushroom) when injured. If the case presented to the ski patrol and could not be contacted or did not consent to the telephone interview, only the ARF data were included.\nTo collect feature use among controls, trained research assistants (RAs) at the bottom of the hill observed snowboarders' TP runs in 3 h time slots, 3–4 times a week at various times during the day, and recorded feature use on a map. Data were collected each week from 1 January until the resort closed (end of March in Season 1 and mid-April in Season 2) and included all four TP layouts. RAs approached the first snowboarder, obtained consent and asked the same risk factor information as asked to cases. Snowboarders indicated feature use on the map for features not fully visible to the RA. After each interview, the next snowboarder closest to the RA was approached. Temperature (smartphone Weather Network application), light and snow conditions were recorded on an hourly basis.\nInjury rate denominator data were collected in the same 3 h slots. Snowboarder runs were counted by an RA at the top of the TP, which was the only run to the right of the chairlift and only serviced by that chairlift. Age-group (<12, 12–17 or >17) and sex were visually assessed. To determine accuracy of observed age-group and sex classifications, RAs independently estimated the age-group and sex of a snowboarder entering the TP and then approached them to confirm. This was repeated in 10 min intervals for 3 h blocks and included 337 snowboarders.\nEthical approval was granted by the University of Calgary Conjoint Health Research Ethics Board.\nCase data were collected from ski patrol Accident Report Forms (ARF) and ED medical records. ARFs are completed by the ski patrol for anyone injured and presented to them. ARFs record demographics (age, sex and five-point self-reported ability), injured body region, injury type (fracture, dislocation, sprain/strain, bruise/abrasion/laceration and concussion), environment (temperature, light and snow (groomed/ungroomed)) and contact information. ARFs have previously been used in research14\n18\n19\n23–25 and were collected from the resort biweekly. 
Snowboarders who presented to the ED were identified from the Regional Emergency Department Information System. Following verbal consent (parent/guardian if snowboarder was <14 years) and confirmation they were injured at the TP of interest, additional self-reported data were collected by telephone. The previously listed information was captured for snowboarders presenting only to the ED, along with years of snowboarding and TP experience, listening to music, wearing wrist guards, previous snowboarding injury and feature used (jump, kicker, box, rail, quarter-pipe, half-pipe or mushroom) when injured. If the case presented to the ski patrol and could not be contacted or did not consent to the telephone interview, only the ARF data were included.\nTo collect feature use among controls, trained research assistants (RAs) at the bottom of the hill observed snowboarders' TP runs in 3 h time slots, 3–4 times a week at various times during the day, and recorded feature use on a map. Data were collected each week from 1 January until the resort closed (end of March in Season 1 and mid-April in Season 2) and included all four TP layouts. RAs approached the first snowboarder, obtained consent and asked the same risk factor information as asked to cases. Snowboarders indicated feature use on the map for features not fully visible to the RA. After each interview, the next snowboarder closest to the RA was approached. Temperature (smartphone Weather Network application), light and snow conditions were recorded on an hourly basis.\nInjury rate denominator data were collected in the same 3 h slots. Snowboarder runs were counted by an RA at the top of the TP, which was the only run to the right of the chairlift and only serviced by that chairlift. Age-group (<12, 12–17 or >17) and sex were visually assessed. To determine accuracy of observed age-group and sex classifications, RAs independently estimated the age-group and sex of a snowboarder entering the TP and then approached them to confirm. This was repeated in 10 min intervals for 3 h blocks and included 337 snowboarders.\nEthical approval was granted by the University of Calgary Conjoint Health Research Ethics Board.\n Analysis Reliability Three pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated.\nAt the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27\nThree pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated.\nAt the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. 
To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27\n Rates Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated.\nFeature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28\nOverall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground.\nInjury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated.\nFeature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28\nOverall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate possibly due to their smaller drop to the ground.\n Risk factors The distributions of potential risk factors between cases and controls, severe cases and controls and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. 
Risk factors
The distributions of potential risk factors between cases and controls, severe cases and controls, and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated.
Logistic regression was used to estimate the association between injury versus no injury and feature use, using backwards elimination.29 Potential confounders were: age (continuous), sex (male/female), previous injury (yes/no), self-reported ability (beginner-novice/intermediate/advanced/expert), wrist guard use (yes/no), music use (yes/no), temperature (>10°C, 0–10°C, −10°C to 0°C, <−10°C), light (sunny/cloud/night) and snow (groomed/ungroomed). These were entered into the model containing feature (jumps, kickers, half-pipe, quarter-pipe, box, mushroom, rails). Whichever confounder produced the smallest change in the feature-specific ORs was removed, provided the change was <15%; this was repeated until no remaining confounder could be removed without changing a feature-specific OR by ≥15%, or until all potential confounders were removed.30 The 95% CIs were adjusted for the clustering effect of multiple feature use within controls.
To compare severe injuries with no injuries and severe injuries with minor injuries, forward selection was used because of the smaller sample size.31 Potential confounders were added one at a time to the crude model; the confounder that produced the greatest percent change in a feature OR was retained. This was repeated until either the addition of another confounder no longer changed any of the TP feature estimates by >15% or there was one confounder for every 10 cases.31 The 95% CIs were adjusted for the clustering effect of multiple feature use within uninjured controls.
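The change-in-estimate selection described above can be sketched as follows. This is a simplified re-implementation on a synthetic dataset with hypothetical variable names, not the authors' Stata code; it implements only the backwards rule (drop the confounder whose removal changes the feature-specific ORs least, while that change stays under 15%) and ignores the clustering adjustment and the forward-selection variant.

```python
# Illustrative sketch of change-in-estimate backwards elimination on synthetic data;
# the published analysis was run in Stata and also adjusted CIs for clustering of
# feature use within controls, which is not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1500
df = pd.DataFrame({
    "injured": rng.integers(0, 2, n),
    "feature": rng.choice(["rails", "jumps", "kickers", "half_pipe",
                           "quarter_pipe", "box", "mushroom"], n),
    "age": rng.integers(10, 40, n),
    "sex": rng.choice(["male", "female"], n),
    "previous_injury": rng.choice(["yes", "no"], n),
    "ability": rng.choice(["beginner", "intermediate", "advanced", "expert"], n),
    "wrist_guards": rng.choice(["yes", "no"], n),
    "music": rng.choice(["yes", "no"], n),
    "temperature": rng.choice(["over10", "0to10", "minus10to0", "underminus10"], n),
    "light": rng.choice(["sunny", "cloud", "night"], n),
    "snow": rng.choice(["groomed", "ungroomed"], n),
})

FEATURE = "C(feature, Treatment('rails'))"
CONFOUNDERS = ["age", "sex", "previous_injury", "ability", "wrist_guards",
               "music", "temperature", "light", "snow"]

def feature_ors(confounders):
    """Fit the logistic model and return the feature-specific ORs (vs rails)."""
    rhs = " + ".join([FEATURE] + list(confounders))
    fit = smf.logit(f"injured ~ {rhs}", data=df).fit(disp=0)
    return {name: np.exp(b) for name, b in fit.params.items() if "feature" in name}

def backwards_elimination(confounders, threshold=0.15):
    confounders = list(confounders)
    while confounders:
        full = feature_ors(confounders)
        # Relative change in the feature ORs caused by dropping each confounder.
        change = {}
        for c in confounders:
            reduced = feature_ors([x for x in confounders if x != c])
            change[c] = max(abs(reduced[k] - full[k]) / full[k] for k in full)
        weakest = min(change, key=change.get)
        if change[weakest] >= threshold:
            break                    # every remaining confounder shifts an OR by >= 15%
        confounders.remove(weakest)  # drop the confounder that matters least
    return confounders

print("retained confounders:", backwards_elimination(CONFOUNDERS))
```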
For the injury versus no injury comparison, a sensitivity analysis using multiple imputation by chained equations with five imputed datasets was conducted32 to address missing data and to test for effect modification of the feature association by age or sex. Feature, age, sex, ability, previous injury, music, wrist guards, temperature, light and snow were used for imputation. Effect modification was assessed using an omnibus test (p>0.05 indicating no evidence of effect modification), and backwards elimination multiple logistic regression was conducted on the imputed data. Analyses were conducted in Stata/SE V.11.33
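After fitting the model on each of the five imputed datasets, the per-imputation estimates must be pooled. A minimal sketch of Rubin's rules for a single coefficient is shown below, with placeholder numbers and a normal approximation for the interval; the chained-equations imputation itself and the clustering adjustment are not reproduced.

```python
# Minimal sketch of Rubin's rules for pooling one log-odds ratio across m imputed
# datasets (placeholder per-imputation estimates, not study results).
import math

estimates = [1.55, 1.62, 1.48, 1.70, 1.58]   # hypothetical log ORs from 5 imputations
std_errors = [0.21, 0.23, 0.20, 0.24, 0.22]  # hypothetical standard errors
m = len(estimates)

pooled = sum(estimates) / m                                   # pooled point estimate
within = sum(se ** 2 for se in std_errors) / m                # mean within-imputation variance
between = sum((q - pooled) ** 2 for q in estimates) / (m - 1)
total_var = within + (1 + 1 / m) * between                    # Rubin's total variance
se_pooled = math.sqrt(total_var)

# Normal approximation for the interval; a stricter analysis would use the
# Barnard-Rubin degrees of freedom instead of 1.96.
low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")
```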
Data collection
Case data were collected from ski patrol Accident Report Forms (ARF) and ED medical records. ARFs are completed by the ski patrol for anyone who is injured and presents to them. ARFs record demographics (age, sex and five-point self-reported ability), injured body region, injury type (fracture, dislocation, sprain/strain, bruise/abrasion/laceration and concussion), environment (temperature, light and snow (groomed/ungroomed)) and contact information. ARFs have previously been used in research14 18 19 23–25 and were collected from the resort biweekly.
Results
A total of 333 cases (107 ski patrol, 174 ski patrol and ED, 18 ski patrol and healthcare provider and 34 ED only) and 1261 controls were included. The consent rate was 79% for cases and 94% for controls (figure 1).
Figure 1: Recruitment of included snowboarders.

Reliability
Based on initial RA observation and follow-up confirmation, the RAs correctly classified age-group in 78% (κ=0.71; 95% CI 0.49 to 0.93) and sex in 100% of uninjured snowboarders (κ=1.00; 95% CI 0.69 to 1.00). There was no significant disagreement in age or sex classification between the RAs (Stuart-Maxwell p>0.05 for each of the three pairs). For interview responses, κ=0.72 (95% CI 0.60 to 0.85) for feature use at the time of injury, and this improved when follow-up interviews were conducted within 2 weeks (κ=0.86; 95% CI 0.64 to 1.00).
The overall κ were ≥0.60 except for body region of the second and third injuries and diagnosis of the second injury.

Injuries
Overall, 62.5% of cases went to the ED. The most commonly injured body regions were the wrist (20%) and head (14%), while the most common injury type was fracture (36%). Table 1 describes feature use among injured and uninjured TP snowboarders.
Table 1: Features used by all injured, severely injured and uninjured snowboarders in the terrain park. *Weighted for time of day, weekday versus weekend and exposure opportunity.
Females aged 12–17 and >17 had higher rates of injuries than males, and higher rates of severe injuries at all ages (table 2).
Table 2: Overall, sex-specific and age-specific injury rates (per 1000 runs and 95% CI).
Features that promoted aerial manoeuvres or resulted in a greater drop to the ground typically had higher injury rates than features with a small drop (table 3). Jumps and half-pipe had the highest overall and severe injury rates.
Table 3: All and severe injury rates (per 1000 feature exposures and 95% CI)
Aerial manoeuvre or substantial drop to the ground
  Half-pipe: all injuries 2.6 (1.5 to 4.0); severe injuries 1.8 (0.9 to 3.0)
  Jumps: all injuries 2.6 (2.1 to 3.2); severe injuries 1.8 (1.4 to 2.3)
  Kickers: all injuries 0.6 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)
  Mushroom: all injuries 0.5 (0.3 to 0.8); severe injuries 0.2 (0.1 to 0.5)
Small drop to the ground
  Boxes: all injuries 0.7 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)
  Rails: all injuries 0.4 (0.3 to 0.6); severe injuries 0.2 (0.1 to 0.4)
  Quarter-pipes: all injuries 0.2 (0.1 to 0.4); severe injuries 0.1 (0.1 to 0.3)
*Injury rates were adjusted for weekday versus weekend, time of day and exposure opportunity (ie, incorporating days when the terrain park or certain features were closed).
Compared with rails, rates of all injuries were significantly higher on the half-pipe, jumps and boxes (table 4).
Table 4: Feature-specific injury rate ratios and 95% CI. N, number; RR, rate ratio. *Four snowboarders could not recall the feature used at the time of injury and 13 snowboarders were not using a feature at the time of injury but were within the boundaries of the terrain park (TP), for a total of 333 TP injuries. †Two snowboarders could not recall the feature used at the time of injury and nine snowboarders were not using a feature at the time of injury but were within the boundaries of the TP, for a total of 208 severe TP injuries. Bold refers to statistically significant results.
Baseline characteristics are presented in table 5. The odds of injury were significantly higher when it was −10°C to 0°C compared with 0–10°C, and at night versus sunny weather.
Beginners/novices had significantly lower odds of injury than intermediates, as did snowboarders with a previous snowboarding injury, those listening to music and those riding when it was above 10°C versus 0–10°C. When comparing severely injured with uninjured snowboarders, the same patterns were observed, except that music and temperatures above 10°C were not significant. There were no significant differences between severe and minor injuries.
Table 5: Summary of the characteristics of all injured and uninjured snowboarders (crude OR and 95% CI). *Odds of injury increase for every additional year. †Compared with no injuries. ‡Compared with minor injuries. N, numbers; N/A, not applicable.
The crude associations between injury and feature use showed significantly greater odds of injury for jumps and the half-pipe compared with rails, and significantly lower odds of injury for quarter-pipes (table 6). For severe injury versus no injury, the crude odds of severe injury were significantly greater on jumps, the half-pipe and kickers compared with rails.
Table 6: Association between injury and feature type after controlling for confounders (OR and 95% CI). *Adjusted for previous injury, ability and temperature; age, sex, listening to music, wearing wrist guards, and light and snow conditions did not change any of the feature-specific estimates by more than 15%. †Adjusted for music and light; age, sex, ability, wrist guard use, and temperature and snow conditions did not change any of the feature-specific estimates by more than 15%.
There were significant increases in the adjusted odds of injury for the half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12) versus rails. The adjusted odds of severe injury versus no injury were significantly higher for the half-pipe, jumps and kickers compared with rails. After accounting for clustering, the 95% CI widths increased marginally but significance did not change.
Using the imputed dataset, there was no evidence of effect modification of feature by age or sex (p=0.41); versus rails, the adjusted odds of injury were significantly increased on the half-pipe (OR 5.88; 95% CI 3.25 to 10.63) and jumps (OR 4.78; 95% CI 3.22 to 7.11) and significantly decreased on quarter-pipes (OR 0.49; 95% CI 0.25 to 0.97).
Discussion
To our knowledge, this is the first study to examine feature-specific injury rates and potential risk factors for TP snowboarding injuries. Data collection methods were found to be accurate and reliable for age and sex information. There was 'substantial' to 'perfect' agreement for most risk factors for injured snowboarders, including feature use.27 Self-reported risk factors were confirmed with reliable sources where possible. The overall injury rate of TP snowboarders was estimated at 0.75/1000 runs. Feature-specific injury rates were higher for features that supported aerial manoeuvres or a large drop to the ground. Aerial features facilitate more air time, and we hypothesise that snowboarders may have more opportunity to lose their sense of body position/orientation and land with more force, which increases the likelihood of injury.
Research suggests that TP injuries are more severe than those on regular slopes.15 16 18 One study reported an overall TP injury rate of 0.62/1000 ski and snowboard days, including both skiers and snowboarders, but only ski patrol injuries.15 If each skier and snowboarder took only one TP run during their day, this rate is lower than our observed rate. It is unknown if the TPs were similar in size or number of features. A literature review34 reported that ski patrol-reported injury rates among snowboarders varied from 2.1 to 7.0 per 1000 outings;35 36 however, a TP-specific injury rate or the precise injury definition was not provided.
Half-pipes and jumps significantly predicted injury. Torjussen and Bahr37 found that professional snowboarders competing in half-pipe and big air competitions had a significantly higher injury risk compared with giant slalom, where the snowboarder does not leave the ground.
Overall, 34% of snowboarders listened to music through a personal music player. The unadjusted result indicated that listening to music reduced the odds of injury.
In a laboratory setting, sport students listening to music while wearing a helmet with built-in speakers had similar mean reaction times to a peripheral stimulus as those who were wearing a helmet but not listening to music.38 Similar to non-TP research, snowboarding in suboptimal environmental conditions (bad weather/visibility) affected the odds of injury.9 In contrast to previous research,5–8 we found beginners had significantly reduced odds of injury; perhaps beginners realise they are in an environment beyond their skill level and choose easier features or do not use features as intended (eg, not leaving the ground when going over a jump).
Future research should identify and evaluate ways to reduce injuries without sacrificing participation, motor learning or skill development. Injury prevention should focus on risk mitigation. Possible strategies include marking landings, smaller TPs for progression to larger features, marking feature difficulty, controlling speed and slope for take-offs and landings, enforcing TP etiquette rules or reducing the height of aerial features. Further research is needed to develop guidelines regarding optimal TP design. The efficacy of these strategies should be investigated.

Limitations
Snowboarders who climbed up to reattempt features were counted as one run, which would underestimate the number of TP runs. Injured snowboarders who did not present to the ski patrol or either of the two nearest EDs were missed. This selection bias resulted in rate underestimation, and the observed associations between feature and injury would be overestimated if missed snowboarders were injured on rails. However, only 47 (14%) of the included snowboarders said they saw the ski patrol and a non-participating healthcare provider, indicating that most sought treatment at a participating ED.
There was potential misclassification by feature if the injured snowboarder could not recall the feature; however, this occurred only four times. Controls may not have correctly reported their feature use when the RA could not see the entire TP, and it was not possible to determine the reliability of controls' feature use. This could have an unpredictable effect on the ORs if it operated differently by feature type. Only the age-group and sex of the uninjured controls could be assessed for accuracy, and our observations were found to be valid. There was potential for misclassification of severe injury because factors other than severity could predict presentation to the ED, such as parental fear or anxiety influenced by a recent celebrity skiing death.39 It was unknown if non-consenting injured snowboarders presented to a non-study ED and were incorrectly classified as a minor injury. Fortunately, few who saw the ski patrol went to a non-study ED (8%).
It is possible that some important confounders were overlooked, such as first attempt at a new feature, manoeuvre performed, speed, fatigue, height or weight. There may be behavioural confounders, such as peer pressure to attempt difficult features or manoeuvres. However, some of these potential confounders were likely accounted for by other variables such as ability, age and sex. Although this study was conducted at only one resort, the TP layout changed four times during the two seasons, which enhances the generalisability of the results.
Conclusions
Feature-specific injury rates ranged from 2.56 injuries/1000 runs (jumps and half-pipe) to 0.24 injuries/1000 runs (quarter-pipe). Half-pipe, jumps and kickers were significant risk factors for any injury and for severe injury. Recommendations have been made for prevention strategy development to reduce TP injury risk. These strategies will require rigorous evaluation.

What are the new findings?
In this study, the overall injury rate for snowboarding terrain park (TP) injuries is 0.75 injuries/1000 runs.
The injury rates are highest for jumps (2.56/1000 runs) and half-pipe (2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs).
Compared with rails, the odds of injury are significantly higher on the half-pipe (OR 9.63; 95% CI 4.80 to 19.32) and jumps (OR 4.29; 95% CI 2.72 to 6.76).

How might it impact on clinical practice in the near future?
Terrain parks (TPs) are popular and the majority of snowboarders injured in the TP present to the emergency department.
With the identification of potential risk factors for TP injuries among snowboarders, injury prevention programmes can be tailored to those at greatest risk of injury.
Should these programmes be effective, clinicians can expect to treat fewer snowboarders injured in the TP.
[ null, "methods", null, null, null, null, null, null, "results", null, null, "discussion", null, "conclusions", "supplementary-material" ]
[ "Epidemiology", "Injury Prevention" ]
Introduction: Snowboarding is a popular sport1 2 and the risk of injury is higher for snowboarding than for skiing.3 4 Being a beginner,5–8 poor weather conditions9 and not wearing protective equipment10–12 increase injury risk. Ski areas often include terrain parks (TPs) with man-made features (eg, jumps, rails and half-pipes) for performing tricks and aerial manoeuvres. In November 2007, Resorts of the Canadian Rockies (RCR) removed all man-made jumps from their TPs because they believed jumps increased injury risk.13 Definitions of common features can be found at http://www.snowboard-coach.com/freestyle-snowboarding-features.html and in appendix 1. Between 5% and 27% of skiing and snowboarding injuries occur in TPs2 14–19 and these are more severe than regular slope injuries.15 16 18 At the 2012 Winter Youth Olympic Games, 35% of all snowboard half-pipe and slope-style competitors were injured.20 Those injured in TPs tend to be snowboarders, male, 13–24 years old, to fall from greater heights15 or to be self-perceived experts.16 There is a dearth of research examining injury rates and intrinsic and extrinsic risk factors for snowboarders in TPs in relation to injury mechanism, a comprehensive approach recommended by sport injury prevention research leaders.21 22 Therefore, the study objectives were to calculate overall and feature-specific TP injury rates, determine potential risk factors for injury in the TP and assess the reliability of the data collection methods.

Methods:

Definition of cases and controls: This unmatched case-control study was conducted in one Alberta TP during the 2008–2009 and 2009–2010 seasons. There were approximately 290 000 skier–snowboarder visits to the resort annually. Except for the half-pipe and mushroom, the overall TP layout and number of features changed once each season, resulting in four configurations containing all seven feature types. The resort did not assign a difficulty rating to individual features. Helmets were mandatory in the TP. Cases were snowboarders injured in the TP who presented to the ski patrol and/or one of two nearby emergency departments (EDs), both Level 1 trauma centres (one adult, one paediatric). Controls were uninjured snowboarders using the same TP. Cases were ‘severe’ if they presented to an ED, and ‘minor’ if they presented to ski patrol only or to ski patrol and a non-emergent healthcare provider. Injuries presenting to the ED represent the public health burden, and snowboarders injured in TPs near hospitals may place a strain on EDs.

Data collection: Case data were collected from ski patrol Accident Report Forms (ARFs) and ED medical records. ARFs are completed by the ski patrol for anyone who is injured and presents to them. ARFs record demographics (age, sex and five-point self-reported ability), injured body region, injury type (fracture, dislocation, sprain/strain, bruise/abrasion/laceration and concussion), environment (temperature, light and snow (groomed/ungroomed)) and contact information. ARFs have previously been used in research14 18 19 23–25 and were collected from the resort biweekly. Snowboarders who presented to the ED were identified from the Regional Emergency Department Information System. Following verbal consent (parent/guardian if the snowboarder was <14 years) and confirmation that they were injured at the TP of interest, additional self-reported data were collected by telephone. The previously listed information was captured for snowboarders presenting only to the ED, along with years of snowboarding and TP experience, listening to music, wearing wrist guards, previous snowboarding injury and feature used (jump, kicker, box, rail, quarter-pipe, half-pipe or mushroom) when injured. If the case presented to the ski patrol and could not be contacted or did not consent to the telephone interview, only the ARF data were included.

To collect feature use among controls, trained research assistants (RAs) at the bottom of the hill observed snowboarders' TP runs in 3 h time slots, 3–4 times a week at various times during the day, and recorded feature use on a map. Data were collected each week from 1 January until the resort closed (end of March in Season 1 and mid-April in Season 2) and included all four TP layouts. RAs approached the first snowboarder, obtained consent and asked for the same risk factor information as was asked of cases. Snowboarders indicated feature use on the map for features not fully visible to the RA. After each interview, the next snowboarder closest to the RA was approached. Temperature (smartphone Weather Network application), light and snow conditions were recorded on an hourly basis.

Injury rate denominator data were collected in the same 3 h slots. Snowboarder runs were counted by an RA at the top of the TP, which was the only run to the right of the chairlift and was serviced only by that chairlift. Age-group (<12, 12–17 or >17) and sex were visually assessed. To determine the accuracy of observed age-group and sex classifications, RAs independently estimated the age-group and sex of a snowboarder entering the TP and then approached them to confirm. This was repeated in 10 min intervals for 3 h blocks and included 337 snowboarders. Ethical approval was granted by the University of Calgary Conjoint Health Research Ethics Board.
Analysis:

Reliability: Three pairs of RAs independently classified uninjured snowboarders entering the TP by age-group and sex. The Stuart-Maxwell test for overall marginal homogeneity assessed the reliability within the three pairings.26 To determine accuracy of observed age-group and sex classifications, unweighted κ statistics and 95% CIs were calculated. At the end of Season 1, cases interviewed within the last month were reinterviewed using the original questions. To measure reliability, unweighted κ with 95% CIs were calculated for variables without ordering and weighted κ (κw) with 95% CIs for ordinal variables.27
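As an illustration of the agreement statistics above, the following is a minimal Python sketch; the study's analyses were run in Stata, so this is not the authors' code. It computes unweighted and linearly weighted Cohen's κ with a simple percentile-bootstrap 95% CI on simulated stand-in ratings (the Stuart-Maxwell marginal homogeneity test is not shown).

```python
# Minimal sketch of the agreement statistics described above (unweighted and
# linearly weighted Cohen's kappa with a bootstrap 95% CI). Illustrative only:
# the study's analyses were run in Stata, and the ratings below are simulated.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical paired age-group classifications (0: <12, 1: 12-17, 2: >17)
# for 337 snowboarders, with roughly 85% raw agreement.
rater_a = rng.integers(0, 3, size=337)
rater_b = np.where(rng.random(337) < 0.85, rater_a, rng.integers(0, 3, size=337))

def kappa_with_ci(a, b, weights=None, n_boot=2000, seed=1):
    """Cohen's kappa (optionally weighted) with a percentile bootstrap 95% CI."""
    point = cohen_kappa_score(a, b, weights=weights)
    boot_rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = boot_rng.integers(0, len(a), size=len(a))
        boots.append(cohen_kappa_score(a[idx], b[idx], weights=weights))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi

# Unweighted kappa for nominal variables; linearly weighted kappa for ordinal ones.
print("unweighted kappa: %.2f (95%% CI %.2f to %.2f)" % kappa_with_ci(rater_a, rater_b))
print("weighted kappa:   %.2f (95%% CI %.2f to %.2f)" % kappa_with_ci(rater_a, rater_b, weights="linear"))
```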
Rates: Injury rates were presented as injuries per 1000 runs; the numerator was the number of injured snowboarders over the two seasons and the denominator was the estimated total runs. The denominator was extrapolated from the observed number of snowboarders entering the TP by multiplying the number of runs per 3 h time slot by the number of time slots the TP was open each season. This resulted in a denominator that was representative of participation at different times of the day during weekends and weekdays. The severe injury rate numerator was snowboarders who presented to the ED. Age-group, sex and age–sex-specific injury rates were calculated. Feature-specific injury and severe injury rates were calculated. The denominator was the total number of runs taken on that type of feature, extrapolated in the aforementioned manner to reflect exposure opportunity. For example, there were seven opportunities to go over a box during one run but only one opportunity to use a mushroom. The 95% CIs were calculated using the Poisson distribution.28 Overall, sex-specific and age-specific rate ratios and 95% CIs were calculated comparing each feature with rails. Rails were the reference feature as they were still permitted in RCR TPs13 and were hypothesised to have a lower injury rate, possibly due to their smaller drop to the ground.
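As a concrete illustration of the rate calculations described above, the short Python sketch below computes an injury rate per 1000 exposures with an exact Poisson 95% CI, and a rate ratio against a reference feature with a log-scale Wald 95% CI. The counts are made up for illustration and are not the study's data; the study's analyses were run in Stata.

```python
# Illustrative rate and rate-ratio calculations with made-up counts
# (not study data; the study's analyses were run in Stata).
import math
from scipy.stats import chi2, norm

def rate_per_1000(events, runs, alpha=0.05):
    """Injuries per 1000 runs with an exact (chi-square-based) Poisson 95% CI."""
    lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 1000.0 / runs
    return events * scale, lower * scale, upper * scale

def rate_ratio(events_1, runs_1, events_0, runs_0, alpha=0.05):
    """Rate ratio (feature vs reference) with a log-scale Wald 95% CI."""
    rr = (events_1 / runs_1) / (events_0 / runs_0)
    se = math.sqrt(1.0 / events_1 + 1.0 / events_0)
    z = norm.ppf(1 - alpha / 2)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical example: 40 injuries over 16 000 extrapolated jump exposures
# versus 25 injuries over 60 000 extrapolated rail exposures.
print("rate/1000 exposures: %.2f (%.2f to %.2f)" % rate_per_1000(40, 16_000))
print("rate ratio vs reference: %.2f (%.2f to %.2f)" % rate_ratio(40, 16_000, 25, 60_000))
```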
Risk factors: The distributions of potential risk factors between cases and controls, severe cases and controls, and severe cases and minor cases were compared using proportions for dichotomous/polytomous risk factors and means with SDs for continuous risk factors. Unadjusted ORs with 95% CIs were calculated.

Logistic regression was used to calculate the association between injury versus no injury and feature use, using backwards elimination.29 Potential confounders were: age (continuous), sex (male/female), previous injury (yes/no), self-reported ability (beginner-novice/intermediate/advanced/expert), wrist guard use (yes/no), music use (yes/no), temperature (>10°C, 0–10°C, −10°C to 0°C, <−10°C), light (sunny/cloud/night) and snow (groomed/ungroomed). These were entered into the model containing feature (jumps, kickers, half-pipe, quarter-pipe, box, mushroom, rails). Whichever confounder produced the smallest change in the feature-specific ORs was removed, provided the change was <15%; this was repeated until removing any remaining confounder would have changed a feature-specific OR by ≥15%, or until all potential confounders had been removed.30 The 95% CIs were adjusted for the clustering effect of multiple feature use within controls.
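The change-in-estimate backwards elimination described above can be sketched as follows. This is a minimal Python illustration (the study used Stata/SE 11, so this is not the authors' code), assuming a hypothetical data frame `df` with a binary `injured` outcome, dummy-coded feature-use columns and numeric confounder columns. It drops, one at a time, the confounder whose removal perturbs the feature ORs least, stopping once any removal would shift a feature OR by 15% or more.

```python
# Illustrative change-in-estimate backwards elimination (not the authors' Stata code).
# Assumes a pandas DataFrame `df` with a binary outcome column, dummy-coded
# feature-use columns and numeric candidate confounder columns.
import numpy as np
import statsmodels.api as sm

def backwards_elimination(df, outcome, feature_cols, confounders, threshold=0.15):
    kept = list(confounders)
    while kept:
        full = sm.Logit(df[outcome], sm.add_constant(df[feature_cols + kept])).fit(disp=0)
        full_or = np.exp(full.params[feature_cols])
        best_drop, best_change = None, None
        for c in kept:
            reduced_cols = feature_cols + [k for k in kept if k != c]
            reduced = sm.Logit(df[outcome], sm.add_constant(df[reduced_cols])).fit(disp=0)
            # Largest relative change in any feature OR caused by dropping `c`.
            change = np.max(np.abs(np.exp(reduced.params[feature_cols]) - full_or) / full_or)
            if best_change is None or change < best_change:
                best_drop, best_change = c, change
        if best_change >= threshold:   # every removal shifts some feature OR by >=15%
            break
        kept.remove(best_drop)
    return kept                        # confounders retained in the final model

# Example call with hypothetical column names:
# retained = backwards_elimination(df, "injured",
#                                  ["feature_jump", "feature_halfpipe", "feature_kicker"],
#                                  ["age", "male", "previous_injury"])
```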
To compare severe injuries with no injuries and severe injuries with minor injuries, forward selection was used because of the smaller sample size.31 Potential confounders were added one at a time to the crude model; the confounder that produced the greatest percent change in a feature OR was retained. This was repeated until either the addition of another confounder no longer changed any of the TP feature estimates by >15% or there was one confounder for every 10 cases.31 The 95% CIs were adjusted for the clustering effect of multiple feature use within uninjured controls.

For the injury versus no injury comparison, a sensitivity analysis using multiple imputation by chained equations with five imputed datasets was conducted32 to address missing data and to test for effect modification by age or sex and feature. Feature, age, sex, ability, previous injury, music, wrist guards, temperature, light and snow were used for imputation. Effect modification was assessed using an omnibus test (p>0.05 indicating no evidence of effect modification), and backwards elimination multiple logistic regression was conducted on the imputed data. Analyses were conducted in Stata/SE V.11.33
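The sensitivity analysis above fits the model on several imputed datasets and combines the results. Below is a minimal Python sketch of that idea, chained-equations imputation followed by pooling on the log-odds scale with Rubin's rules; it is illustrative only (the study's analyses were run in Stata) and all column names are hypothetical.

```python
# Illustrative multiple imputation by chained equations with pooling by
# Rubin's rules (not the study's Stata code; column names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

def pooled_logit(df, outcome, predictors, m=5, seed=0):
    """Impute m datasets, fit a logistic model on each and pool with Rubin's rules.

    Assumes a numeric (dummy-coded) DataFrame; the outcome must be complete."""
    betas, variances = [], []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed + i)
        X = pd.DataFrame(imputer.fit_transform(df[predictors]),
                         columns=predictors, index=df.index)
        fit = sm.Logit(df[outcome], sm.add_constant(X)).fit(disp=0)
        betas.append(fit.params)
        variances.append(fit.bse ** 2)
    betas, variances = pd.concat(betas, axis=1), pd.concat(variances, axis=1)
    pooled = betas.mean(axis=1)                      # pooled log-odds
    within = variances.mean(axis=1)                  # within-imputation variance
    between = betas.var(axis=1, ddof=1)              # between-imputation variance
    total_se = np.sqrt(within + (1 + 1 / m) * between)
    return pd.DataFrame({"OR": np.exp(pooled),
                         "lower": np.exp(pooled - 1.96 * total_se),
                         "upper": np.exp(pooled + 1.96 * total_se)})

# Example call with hypothetical column names:
# pooled_logit(df, "injured", ["feature_jump", "feature_halfpipe", "age", "male"])
```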
Results: A total of 333 cases (107 ski patrol, 174 ski patrol and ED, 18 ski patrol and healthcare provider and 34 ED only) and 1261 controls were included. The consent rate was 79% for cases and 94% for controls (figure 1).

Figure 1: Recruitment of included snowboarders.

Reliability: Based on initial RA observation and follow-up confirmation, the RAs correctly classified age-group in 78% (κ=0.71; 95% CI 0.49 to 0.93) and sex in 100% of uninjured snowboarders (κ=1.00; 95% CI 0.69 to 1.00). There was significant agreement for age and sex classification between the RAs (p>0.05 for each of the three pairs). For interview responses, κ=0.72 (95% CI 0.60 to 0.85) for feature use at the time of injury, and this improved when follow-up interviews were conducted within 2 weeks (κ=0.86; 95% CI 0.64 to 1.00). The overall κ were ≥0.60 except for body region of the second and third injuries and diagnosis of the second injury.

Injuries: Overall, 62.5% of cases went to the ED. The most commonly injured body regions were the wrist (20%) and head (14%), while the most common injury type was fracture (36%). Table 1 describes feature use among injured and uninjured TP snowboarders.

Table 1: Features used by all injured, severely injured and uninjured snowboarders in the terrain park. *Weighted for time of day, weekday versus weekend and exposure opportunity.

Females aged 12–17 and >17 had higher rates of injuries than males, and higher rates of severe injuries at all ages (table 2).

Table 2: Overall, sex-specific and age-specific injury rates (per 1000 runs and 95% CI).

Features that promoted aerial manoeuvres or resulted in a greater drop to the ground typically had higher injury rates than features with a small drop (table 3).

Table 3: All and severe injury rates (per 1000 feature exposures and 95% CI)*
Aerial manoeuvre or substantial drop to the ground:
  Half-pipe: all injuries 2.6 (1.5 to 4.0); severe injuries 1.8 (0.9 to 3.0)
  Jumps: all injuries 2.6 (2.1 to 3.2); severe injuries 1.8 (1.4 to 2.3)
  Kickers: all injuries 0.6 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)
  Mushroom: all injuries 0.5 (0.3 to 0.8); severe injuries 0.2 (0.1 to 0.5)
Small drop to the ground:
  Boxes: all injuries 0.7 (0.5 to 0.8); severe injuries 0.4 (0.3 to 0.5)
  Rails: all injuries 0.4 (0.3 to 0.6); severe injuries 0.2 (0.1 to 0.4)
  Quarter-pipes: all injuries 0.2 (0.1 to 0.4); severe injuries 0.1 (0.1 to 0.3)
*Injury rates were adjusted for weekday versus weekend, time of day and exposure opportunity (ie, incorporates days when the terrain park or certain features were closed).

Jumps and half-pipe had the highest overall and severe injury rates. Compared with rails, rates of all injuries were significantly higher on the half-pipe, jumps and boxes (table 4).

Table 4: Feature-specific injury rate ratios and 95% CI. N, number; RR, rate ratio. *Four snowboarders could not recall the feature used at the time of injury and 13 snowboarders were not using a feature at the time of injury but were in the boundaries of the terrain park (TP), for a total of 333 TP injuries. †Two snowboarders could not recall the feature used at the time of injury and nine snowboarders were not using a feature at the time of injury but were in the boundaries of the TP, for a total of 208 severe TP injuries. Bold refers to statistically significant results.

Baseline characteristics are presented in table 5. The odds of injury were significantly higher when it was −10°C to 0°C compared with 0–10°C, and at night versus sunny weather. Beginners/novices had significantly lower odds of injury than intermediates, as did those with a previous snowboarding injury and those listening to music; the odds were also lower when it was above 10°C versus 0–10°C. When comparing severely injured with uninjured snowboarders, the same patterns were observed, except that music and temperatures above 10°C were not significant. There were no significant differences between severe and minor injuries.

Table 5: Summary of the characteristics of all injured and uninjured snowboarders (crude OR and 95% CI). *Odds of injury increase with each additional year of age. †Compared with no injuries. ‡Compared with minor injuries. N, numbers; N/A, not applicable.

The crude associations between injury and feature use showed significantly greater odds of injury for jumps and half-pipe compared with rails, and significantly lower odds of injury for quarter-pipes (table 6). For severe injury versus no injury, the crude odds of severe injury were significantly greater on jumps, half-pipe and kickers compared with rails.

Table 6: Association between injury and feature type after controlling for confounders (OR and 95% CI). *Adjusted for previous injury, ability and temperature; age, sex, listening to music, wearing wrist guards and light and snow conditions did not change any of the feature-specific estimates by more than 15%. †Adjusted for music and light; age, sex, ability, wrist guard use and temperature and snow conditions did not change any of the feature-specific estimates by more than 15%.

There were significant increases in the adjusted odds of injury for half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12) versus rails. The adjusted odds of severe injury versus no injury were significantly higher for half-pipe, jumps and kickers compared with rails. After accounting for clustering, the 95% CI widths increased marginally but significance did not change. Using the imputed dataset, there was no evidence of effect modification of feature by age or sex (p=0.41); versus rails, there were significantly increased adjusted odds of injury on the half-pipe (OR 5.88; 95% CI 3.25 to 10.63) and jumps (OR 4.78; 95% CI 3.22 to 7.11) and significantly decreased odds of injury on quarter-pipes (OR 0.49; 95% CI 0.25 to 0.97).
‡Compared with minor injuries. N, numbers; N/A, not applicable. The crude associations between injury and feature use showed significantly greater odds of injury for jumps and half-pipe, compared with rails and significantly lower odds of injury for quarter-pipes (table 6). For severe injury versus no injury, the crude odds of severe injury were significantly greater on jumps, half-pipe and kickers compared with rails. Association between injury and feature type after controlling for confounders (OR and 95% CI) *Adjusted for previous injury, ability and temperature. Age, sex, listening to music, wearing wrist guards and light and snow conditions did not change any of the feature-specific estimates more than 15%. †Adjusted for music and light. Age, sex, ability, wrist guard use and temperature and snow conditions did not change any of the feature-specific estimates more than 15%. There were significant increases in the adjusted odds of injury for half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12) versus rails. The adjusted odds of severe injury versus no injury were significantly higher for half-pipe, jumps and kickers compared with rails. After accounting for clustering, the 95% CI width increased marginally but significance did not change. Using the imputed dataset, there was no evidence of effect modification of feature by age or sex (p=0.41); versus rails, there were significantly increased adjusted odds of injury on half-pipe (OR 5.88; 95% CI 3.25 to 10.63) and jumps (OR 4.78; 95% CI 3.22 to 7.11) and significantly decreased odds of injury on quarter-pipes (OR 0.49; 95% CI 0.25 to 0.97). Discussion: To our knowledge this is the first study to examine feature-specific injury rates and potential risk factors for TP snowboarding injuries. Data collection methods were found to be accurate and reliable for age and sex information. There was ‘substantial’ to ‘perfect’ agreement for most risk factors for injured snowboarders, including feature use.27 Self-reported risk factors were confirmed with reliable sources where possible. The overall injury rate of TP snowboarders was estimated at 0.75/1000 runs. Feature-specific injury rates were higher for features that supported aerial manoeuvres or a large drop to the ground. Aerial features facilitate more air time; we hypothesise that snowboarders therefore have more opportunity to lose their sense of body position/orientation and to land with more force, which increases the likelihood of injury. Research suggests that TP injuries are more severe than those on regular slopes.15 16 18 One study reported an overall TP injury rate of 0.62/1000 ski and snowboard days, including both skiers and snowboarders, but captured only ski patrol-reported injuries.15 If each skier and snowboarder took only one TP run during their day, this rate is lower than our observed rate. It is unknown if the TPs were similar in size or number of features. A literature review34 reported that ski patrol-reported injuries among snowboarders varied from 2.1 to 7.0 per 1000 outings35 36; however, a TP-specific injury rate or the precise injury definition was not provided. Half-pipes and jumps significantly predicted injury. Torjussen and Bahr37 found professional snowboarders competing in half-pipe and big air competitions had a significantly higher injury risk compared with giant-slalom, where the snowboarder does not leave the ground.
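The adjusted odds ratios above come from multivariable logistic regression with rails as the reference feature. The study's data and code are not available here, so the following is only a rough sketch of that type of analysis: simulated records, assumed column names, and a statsmodels fit whose coefficients are exponentiated into ORs with 95% CIs.

```python
# Illustrative sketch of a feature-specific adjusted logistic regression
# (OR and 95% CI vs. a 'rails' reference). Simulated data; the column names
# are assumptions, not the study's variables or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "feature": rng.choice(["rails", "jumps", "half_pipe", "kickers"], size=n),
    "ability": rng.choice(["beginner", "intermediate", "advanced"], size=n),
    "prev_injury": rng.integers(0, 2, size=n),
    "temp_band": rng.choice(["minus10to0", "0to10", "over10"], size=n),
})
# Simulate higher injury odds on jumps/half-pipe so the example has signal.
base = (-2.0
        + np.where(df["feature"].eq("jumps"), 1.2, 0.0)
        + np.where(df["feature"].eq("half_pipe"), 1.8, 0.0))
df["injured"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-base)))

model = smf.logit(
    "injured ~ C(feature, Treatment('rails')) + C(ability) + prev_injury + C(temp_band)",
    data=df,
).fit(disp=0)

ci = model.conf_int()
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(or_table.round(2))
```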
Overall 34% of snowboarders listened to music through a personal music player. The unadjusted result indicated that listening to music reduced the odds of injury. In a laboratory setting, sport students listening to music while wearing a helmet with built-in speakers had similar mean reaction time to peripheral stimulus as those who were wearing a helmet but not listening to music.38 Similar to non-TP research, snowboarding in suboptimal environmental (bad weather/visibility) conditions affected the odds of injury.9 In contrast to previous research,5–8 we found beginners had significantly reduced odds of injury; perhaps beginners realise they are in an environment beyond their skill level and choose easier features or do not use features as intended (eg, not leaving the ground when going over a jump). Future research should identify and evaluate ways to reduce injuries without sacrificing participation, motor learning or skill development. Injury prevention should focus on risk mitigation. Possible strategies include marking landings, smaller TPs for progression to larger features, marking feature difficulty, controlling speed and slope for take-offs and landings, enforcing TP etiquette rules or reducing the height of aerial features. Further research is needed to develop guidelines regarding optimal TP design. The efficacy of these strategies should be investigated. Limitations Snowboarders who climbed up to reattempt features were counted as one run, which would underestimate the number of TP runs. Injured snowboarders who did not present to the ski patrol or either of the two nearest EDs were missed. This selection bias resulted in rate underestimation and observed associations between feature and injury would be overestimated if missed snowboarders were injured on rails. However, only 47 (14%) of the included snowboarders said they saw the ski patrol and a non-participating healthcare provider, indicating that most sought treatment at a participating ED. There was potential misclassification by feature if the injured snowboarder could not recall the feature; however, this occurred only four times. Controls may not have correctly reported their feature use when the RA could not see the entire TP and it was not possible to determine the reliability of controls’ feature use. This could have an unpredictable effect on the ORs if it operated differently by feature type. Only age-group and sex of the uninjured controls could be assessed for accuracy and our observations were found to be valid. There was potential for misclassification of severe injury because factors other than severity could predict presentation to the ED, such as parental fear or anxiety influenced by a recent celebrity skiing death.39 It was unknown if non-consenting injured snowboarders presented to a non-study ED and were incorrectly classified as a minor injury. Fortunately, few who saw the ski patrol went to a non-study ED (8%). It is possible that some important confounders were overlooked, such as first attempt at a new feature, manoeuvre performed, speed, fatigue, height or weight. There may be behavioural confounders, such as peer pressure to attempt difficult features or manoeuvres. However, some of these potential confounders were likely accounted for by other variables such as ability, age and sex. Although this study was conducted at only one resort, the TP layout changed four times during the two seasons and this enhances the generalisability of the results. 
Snowboarders who climbed up to reattempt features were counted as one run, which would underestimate the number of TP runs. Injured snowboarders who did not present to the ski patrol or either of the two nearest EDs were missed. This selection bias resulted in rate underestimation and observed associations between feature and injury would be overestimated if missed snowboarders were injured on rails. However, only 47 (14%) of the included snowboarders said they saw the ski patrol and a non-participating healthcare provider, indicating that most sought treatment at a participating ED. There was potential misclassification by feature if the injured snowboarder could not recall the feature; however, this occurred only four times. Controls may not have correctly reported their feature use when the RA could not see the entire TP and it was not possible to determine the reliability of controls’ feature use. This could have an unpredictable effect on the ORs if it operated differently by feature type. Only age-group and sex of the uninjured controls could be assessed for accuracy and our observations were found to be valid. There was potential for misclassification of severe injury because factors other than severity could predict presentation to the ED, such as parental fear or anxiety influenced by a recent celebrity skiing death.39 It was unknown if non-consenting injured snowboarders presented to a non-study ED and were incorrectly classified as a minor injury. Fortunately, few who saw the ski patrol went to a non-study ED (8%). It is possible that some important confounders were overlooked, such as first attempt at a new feature, manoeuvre performed, speed, fatigue, height or weight. There may be behavioural confounders, such as peer pressure to attempt difficult features or manoeuvres. However, some of these potential confounders were likely accounted for by other variables such as ability, age and sex. Although this study was conducted at only one resort, the TP layout changed four times during the two seasons and this enhances the generalisability of the results. Limitations: Snowboarders who climbed up to reattempt features were counted as one run, which would underestimate the number of TP runs. Injured snowboarders who did not present to the ski patrol or either of the two nearest EDs were missed. This selection bias resulted in rate underestimation and observed associations between feature and injury would be overestimated if missed snowboarders were injured on rails. However, only 47 (14%) of the included snowboarders said they saw the ski patrol and a non-participating healthcare provider, indicating that most sought treatment at a participating ED. There was potential misclassification by feature if the injured snowboarder could not recall the feature; however, this occurred only four times. Controls may not have correctly reported their feature use when the RA could not see the entire TP and it was not possible to determine the reliability of controls’ feature use. This could have an unpredictable effect on the ORs if it operated differently by feature type. Only age-group and sex of the uninjured controls could be assessed for accuracy and our observations were found to be valid. 
There was potential for misclassification of severe injury because factors other than severity could predict presentation to the ED, such as parental fear or anxiety influenced by a recent celebrity skiing death.39 It was unknown if non-consenting injured snowboarders presented to a non-study ED and were incorrectly classified as a minor injury. Fortunately, few who saw the ski patrol went to a non-study ED (8%). It is possible that some important confounders were overlooked, such as first attempt at a new feature, manoeuvre performed, speed, fatigue, height or weight. There may be behavioural confounders, such as peer pressure to attempt difficult features or manoeuvres. However, some of these potential confounders were likely accounted for by other variables such as ability, age and sex. Although this study was conducted at only one resort, the TP layout changed four times during the two seasons and this enhances the generalisability of the results. Conclusion: Feature-specific injury rates ranged from 2.56 injuries/1000 runs (jumps and half-pipe) to 0.24 injuries/1000 runs (quarter-pipe). Half-pipe, jumps and kickers were significant risk factors for any injury and severe injury. Recommendations have been made for prevention strategy development to reduce TP injury risk. These strategies will require rigorous evaluation. What are the new findings?In this study, the overall injury rate for snowboarding terrain park (TP) injuries is 0.75 injuries/1000 runs.The injury rates are highest for jumps (2.56/1000 runs) and half-pipe (2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs).Compared with rails, the odds of injury is significantly higher on the half-pipe (OR 9.63; 95% CI 4.80 to 19.32) and jumps (OR 4.29; 95% CI 2.72 to 6.76). How might it impact on clinical practice in the near future?Terrain parks (TPs) are popular and the majority of snowboarders injured in the TP present to the emergency department.With the identification of potential risk factors for TP injuries among snowboarders, injury prevention programmes can be tailored to those at greatest risk of injury.Should these programmes be effective, clinicians can expect to treat fewer snowboarders injured in the TP. In this study, the overall injury rate for snowboarding terrain park (TP) injuries is 0.75 injuries/1000 runs. The injury rates are highest for jumps (2.56/1000 runs) and half-pipe (2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs). Compared with rails, the odds of injury is significantly higher on the half-pipe (OR 9.63; 95% CI 4.80 to 19.32) and jumps (OR 4.29; 95% CI 2.72 to 6.76). Terrain parks (TPs) are popular and the majority of snowboarders injured in the TP present to the emergency department. With the identification of potential risk factors for TP injuries among snowboarders, injury prevention programmes can be tailored to those at greatest risk of injury. Should these programmes be effective, clinicians can expect to treat fewer snowboarders injured in the TP. Supplementary Material:
Background: Snowboarding is a popular albeit risky sport and terrain park (TP) injuries are more severe than regular slope injuries. TPs contain man-made features that facilitate aerial manoeuvres. The objectives of this study were to determine overall and feature-specific injury rates and the potential risk factors for TP injuries. Methods: Case-control study with exposure estimation, conducted in an Alberta TP during two ski seasons. Cases were snowboarders injured in the TP who presented to ski patrol and/or local emergency departments. Controls were uninjured snowboarders in the same TP. κ Statistics were used to measure the reliability of reported risk factor information. Injury rates were calculated and adjusted logistic regression was used to calculate the feature-specific odds of injury. Results: Overall, 333 cases and 1261 controls were enrolled. Reliability of risk factor information was κ>0.60 for 21/24 variables. The overall injury rate was 0.75/1000 runs. Rates were highest for jumps and half-pipe (both 2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs). Compared with rails, there were increased odds of injury for half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12). Conclusions: Higher feature-specific injury rates and increased odds of injury were associated with features that promote aerial manoeuvres or a large drop to the ground. Further research is required to determine ways to increase snowboarder safety in the TP.
Introduction: Snowboarding is a popular sport1 2 and the risk of injury is higher for snowboarding than for skiing.3 4 Being a beginner,5–8 poor weather conditions9 and not wearing protective equipment10–12 increase injury risk. Ski areas often include terrain parks (TPs) with man-made features (eg, jumps, rails and half-pipes) for performing tricks and aerial manoeuvres. In November 2007, Resorts of the Canadian Rockies (RCR) removed all man-made jumps from their TPs because they believed jumps increased injury risk.13 Definitions of common features can be found at: http://www.snowboard-coach.com/freestyle-snowboarding-features.html and in appendix 1. Between 5% and 27% of skiing and snowboarding injuries occur in TPs2 14–19 and are more severe than regular slope injuries.15 16 18 At the 2012 Winter Youth Olympic Games, 35% of all snowboard half-pipe and slope-style competitors were injured.20 Those injured in TPs tend to be snowboarders, male, 13–24 years old, fall from higher heights15 or self-perceived experts.16 There is a dearth of research examining injury rates and intrinsic and extrinsic risk factors for snowboarders in TPs in relation to injury mechanism—a comprehensive approach recommended by sport injury prevention research leaders.21 22 Therefore, the study objectives were to calculate overall and feature-specific TP injury rates, determine potential risk factors for injury in the TP and assess the reliability of the data collection methods. Conclusion: Feature-specific injury rates ranged from 2.56 injuries/1000 runs (jumps and half-pipe) to 0.24 injuries/1000 runs (quarter-pipe). Half-pipe, jumps and kickers were significant risk factors for any injury and severe injury. Recommendations have been made for prevention strategy development to reduce TP injury risk. These strategies will require rigorous evaluation. What are the new findings?In this study, the overall injury rate for snowboarding terrain park (TP) injuries is 0.75 injuries/1000 runs.The injury rates are highest for jumps (2.56/1000 runs) and half-pipe (2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs).Compared with rails, the odds of injury is significantly higher on the half-pipe (OR 9.63; 95% CI 4.80 to 19.32) and jumps (OR 4.29; 95% CI 2.72 to 6.76). How might it impact on clinical practice in the near future?Terrain parks (TPs) are popular and the majority of snowboarders injured in the TP present to the emergency department.With the identification of potential risk factors for TP injuries among snowboarders, injury prevention programmes can be tailored to those at greatest risk of injury.Should these programmes be effective, clinicians can expect to treat fewer snowboarders injured in the TP. In this study, the overall injury rate for snowboarding terrain park (TP) injuries is 0.75 injuries/1000 runs. The injury rates are highest for jumps (2.56/1000 runs) and half-pipe (2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs). Compared with rails, the odds of injury is significantly higher on the half-pipe (OR 9.63; 95% CI 4.80 to 19.32) and jumps (OR 4.29; 95% CI 2.72 to 6.76). Terrain parks (TPs) are popular and the majority of snowboarders injured in the TP present to the emergency department. With the identification of potential risk factors for TP injuries among snowboarders, injury prevention programmes can be tailored to those at greatest risk of injury. 
Should these programmes be effective, clinicians can expect to treat fewer snowboarders injured in the TP.
Background: Snowboarding is a popular albeit risky sport and terrain park (TP) injuries are more severe than regular slope injuries. TPs contain man-made features that facilitate aerial manoeuvres. The objectives of this study were to determine overall and feature-specific injury rates and the potential risk factors for TP injuries. Methods: Case-control study with exposure estimation, conducted in an Alberta TP during two ski seasons. Cases were snowboarders injured in the TP who presented to ski patrol and/or local emergency departments. Controls were uninjured snowboarders in the same TP. κ Statistics were used to measure the reliability of reported risk factor information. Injury rates were calculated and adjusted logistic regression was used to calculate the feature-specific odds of injury. Results: Overall, 333 cases and 1261 controls were enrolled. Reliability of risk factor information was κ>0.60 for 21/24 variables. The overall injury rate was 0.75/1000 runs. Rates were highest for jumps and half-pipe (both 2.56/1000 runs) and lowest for rails (0.43/1000 runs) and quarter-pipes (0.24/1000 runs). Compared with rails, there were increased odds of injury for half-pipe (OR 9.63; 95% CI 4.80 to 19.32), jumps (OR 4.29; 95% CI 2.72 to 6.76), mushroom (OR 2.30; 95% CI 1.20 to 4.41) and kickers (OR 1.99; 95% CI 1.27 to 3.12). Conclusions: Higher feature-specific injury rates and increased odds of injury were associated with features that promote aerial manoeuvres or a large drop to the ground. Further research is required to determine ways to increase snowboarder safety in the TP.
13,827
317
[ 261, 193, 539, 1591, 106, 246, 437, 140, 1048, 379 ]
15
[ "injury", "feature", "95", "snowboarders", "tp", "age", "sex", "injuries", "severe", "use" ]
[ "skiing snowboarding injuries", "tp injuries snowboarders", "snowboarders injury prevention", "injury snowboarders feature", "injury rate snowboarding" ]
[CONTENT] Epidemiology | Injury Prevention [SUMMARY]
[CONTENT] Epidemiology | Injury Prevention [SUMMARY]
[CONTENT] Epidemiology | Injury Prevention [SUMMARY]
[CONTENT] Epidemiology | Injury Prevention [SUMMARY]
[CONTENT] Epidemiology | Injury Prevention [SUMMARY]
[CONTENT] Epidemiology | Injury Prevention [SUMMARY]
[CONTENT] Adolescent | Adult | Alberta | Case-Control Studies | Child | Female | Humans | Incidence | Injury Severity Score | Male | Risk Factors | Skiing [SUMMARY]
[CONTENT] Adolescent | Adult | Alberta | Case-Control Studies | Child | Female | Humans | Incidence | Injury Severity Score | Male | Risk Factors | Skiing [SUMMARY]
[CONTENT] Adolescent | Adult | Alberta | Case-Control Studies | Child | Female | Humans | Incidence | Injury Severity Score | Male | Risk Factors | Skiing [SUMMARY]
[CONTENT] Adolescent | Adult | Alberta | Case-Control Studies | Child | Female | Humans | Incidence | Injury Severity Score | Male | Risk Factors | Skiing [SUMMARY]
[CONTENT] Adolescent | Adult | Alberta | Case-Control Studies | Child | Female | Humans | Incidence | Injury Severity Score | Male | Risk Factors | Skiing [SUMMARY]
[CONTENT] Adolescent | Adult | Alberta | Case-Control Studies | Child | Female | Humans | Incidence | Injury Severity Score | Male | Risk Factors | Skiing [SUMMARY]
[CONTENT] skiing snowboarding injuries | tp injuries snowboarders | snowboarders injury prevention | injury snowboarders feature | injury rate snowboarding [SUMMARY]
[CONTENT] skiing snowboarding injuries | tp injuries snowboarders | snowboarders injury prevention | injury snowboarders feature | injury rate snowboarding [SUMMARY]
[CONTENT] skiing snowboarding injuries | tp injuries snowboarders | snowboarders injury prevention | injury snowboarders feature | injury rate snowboarding [SUMMARY]
[CONTENT] skiing snowboarding injuries | tp injuries snowboarders | snowboarders injury prevention | injury snowboarders feature | injury rate snowboarding [SUMMARY]
[CONTENT] skiing snowboarding injuries | tp injuries snowboarders | snowboarders injury prevention | injury snowboarders feature | injury rate snowboarding [SUMMARY]
[CONTENT] skiing snowboarding injuries | tp injuries snowboarders | snowboarders injury prevention | injury snowboarders feature | injury rate snowboarding [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | tp | age | sex | injuries | severe | use [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | tp | age | sex | injuries | severe | use [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | tp | age | sex | injuries | severe | use [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | tp | age | sex | injuries | severe | use [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | tp | age | sex | injuries | severe | use [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | tp | age | sex | injuries | severe | use [SUMMARY]
[CONTENT] injury | risk | tps | snowboarding | man | 16 | snowboard | slope | skiing | injury risk [SUMMARY]
[CONTENT] feature | injury | 95 cis | cis | calculated | cases | sex | age | 95 | tp [SUMMARY]
[CONTENT] 95 ci | ci | injury | 95 | odds | versus | significantly | table | odds injury | feature [SUMMARY]
[CONTENT] 1000 runs | 1000 | runs | injury | 56 | programmes | 56 1000 runs | 56 1000 | injuries | tp [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | 95 ci | ci | tp | sex | age | 95 cis [SUMMARY]
[CONTENT] injury | feature | 95 | snowboarders | 95 ci | ci | tp | sex | age | 95 cis [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Alberta TP | two ski seasons ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] 333 | 1261 ||| κ>0.60 | 21/24 ||| 0.75/1000 ||| half | 2.56/1000 | 0.43/1000 | quarter | 0.24/1000 ||| half | 9.63 | 95% | CI | 4.80 | 19.32 | 4.29 | 95% | 2.72 | 2.30 | 95% | 1.20 | 4.41 | 1.99 | 95% | CI | 1.27 | 3.12 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| Alberta TP | two ski seasons ||| ||| ||| ||| ||| ||| ||| 333 | 1261 ||| κ>0.60 | 21/24 ||| 0.75/1000 ||| half | 2.56/1000 | 0.43/1000 | quarter | 0.24/1000 ||| half | 9.63 | 95% | CI | 4.80 | 19.32 | 4.29 | 95% | 2.72 | 2.30 | 95% | 1.20 | 4.41 | 1.99 | 95% | CI | 1.27 | 3.12 ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| Alberta TP | two ski seasons ||| ||| ||| ||| ||| ||| ||| 333 | 1261 ||| κ>0.60 | 21/24 ||| 0.75/1000 ||| half | 2.56/1000 | 0.43/1000 | quarter | 0.24/1000 ||| half | 9.63 | 95% | CI | 4.80 | 19.32 | 4.29 | 95% | 2.72 | 2.30 | 95% | 1.20 | 4.41 | 1.99 | 95% | CI | 1.27 | 3.12 ||| ||| [SUMMARY]
Traditional medicines and their common uses in central region of Syria: Hama and Homs - an ethnomedicinal survey.
34165371
Since ancient times, traditional Arabic medicine (TAM) has been used to treat various diseases in Syria. These remedies are cost-effective, have fewer side effects and are more suitable for long-term use than chemically synthesized medicines. In addition, this survey is of scientific importance because it verifies and documents these traditional medicines and their common uses.
CONTEXT
Information was collected from 2019 to 2021 from the cities of Homs and Hama and their villages, two governorates located in central Syria, through interviews with traditional practitioners called Attarin and many other people. Plant specimens were collected according to different references concerning the medicinal plants of Syria; to document the traditional uses of each plant, at least two traditional healers and three other people were asked.
METHODS
In this survey, we list, in alphabetical order, 76 medicinal plants belonging to 39 families, together with the parts used and the method of preparation for each therapeutic use; these plants are used to treat 106 ailments.
RESULTS
Many of the uses of medicinal plants mentioned in this survey are still under study. This study provides new data that could contribute to further pharmacological discoveries; additional pharmacological work to identify the active ingredients and their mechanisms of action is needed to confirm the reported biological activities of these plants.
CONCLUSIONS
[ "Adult", "Aged", "Ethnobotany", "Ethnopharmacology", "Female", "Humans", "Male", "Medicine, Traditional", "Middle Aged", "Plants, Medicinal", "Surveys and Questionnaires", "Syria" ]
8231352
Introduction
Traditional medicine (TM), as defined by the World Health Organization (WHO), is the sum total of the knowledge, skills and practices based on the theories, beliefs and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness (WHO 2002a, 2002b). Some TM systems are supported by huge volumes of literature and records of the theoretical concepts and practical skills; others pass down from generation to generation through verbal teaching. To date, in some parts of the world, the majority of the population continue to rely on their own TM to meet their primary health care needs. When adopted outside of its traditional culture, TM is often referred as complementary and alternative medicine (CAM) (Che et al. 2017). Although modern medicine is currently available in many developing countries, large proportions of the population in these countries still rely to a large extent on traditional practitioners and medicinal plants for therapeutic purposes. TM is often the first choice for providing primary health care in developing countries, and the WHO estimates that more than 80% of healthcare needs in these countries are met by traditional health care practices, being the cheapest and most accessible (WHO 2002a, 2002b, 2004, 2005). The WHO pays special attention to TM and CAM. Resolution (no. WHA13-6) issued by the World Health Assembly (WHA) in 2009 emphasized the need to update the global TM strategy (WHO 2009), so the WHO issued its strategy for traditional (folk medicine) 2014–2023 (WHO 2013). The United Nations Educational Scientific and Cultural Organization (UNESCO) works to document the medical heritage of peoples within its documentation of the living intangible cultural heritage of peoples and civilizations, through the documentation of TM practices that were included in the UNESCO convention (2003), the convention on biological diversity (1992) and the UNESCO universal declaration on cultural diversity (2001) and the United Nations declaration on the rights of indigenous peoples (2007), and according to the meeting of the UNESCO International Bioethics Committee (IBC) working group on TM and its ethical effects in Paris (2010), it stressed the need to conduct studies that illustrate the use of TM in various regions around the world, and its evaluation in clinical practice and in research and policy (IBC 2013). In 2017, the International Center for Information and Networks for Intangible Cultural Heritage in the Asia-Pacific region (ICHCAP) under the auspices of UNESCO, issued a book entitled Traditional Medicine in which the Syria Trust for Development in Section VII included TM in Syria (Falk 2017). Ethno-botany is the scientific study of the relationships between people and plants. It was first coined in 1896 by the US botanist John Harshberger; however, the history of ethnobotany began long before that (Campbell et al. 2002; Amjad et al. 2015). It plays an important role in understanding the dynamic relationships between biological diversity and social and cultural systems (Husain et al. 2008; Amjad et al. 2013, 2015). Plants are essential for human beings as they provide food, and medicines (Alam et al. 2011; Ahmad et al. 2012). Traditional Arabic medicine (TAM) is one of the famous traditional medical systems, which is occasionally called Unani medicine, Graeco-Arabic medicine, humoral medicine or Islamic medicine. 
The subject of TM in Syria has received little attention in the literature, and very little is known about the traditional medicinal substances used nowadays by the Syrian population to treat the most common diseases. Throughout ancient times in Syria, as part of the Levantine Nations (Bilad al-Sham), and other lands in the region, humans used various natural materials as sources of medicines (Jaddouh 2004). In the western countryside of Hama, there is a natural reserve for medicinal plants, which is called the Abu Qubais Protected area in Al-Ghab region (which protect the biodiversity rights of indigenous people and affiliated to the general commission for Al-Ghab administration and development), 509 plant species belonging to 72 families have been recorded (Al-Mahmoud and Al-Shater 2010). For these reasons, the present investigation gathered the uses of medicinal plants in the centre region of Syria (Homs and Hama), as a supplement for a national survey, and documents the information concerning the uses of medicinal plants, which may serve as the basis of knowledge for a more intensive scientific research.
Methods
Study area Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate. This area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021). Historically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021). Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate. This area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021). Historically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021). 
Field work and data collection Field surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)? Name and surname Age and sex Educational level Are plants collected in your region? Do you have any contact with plants? Can you show the plants you use in your region? Can you tell the names of local plants you use in your region? In which season do you collect the plants you use in your region? When collecting plant, which parts of the plant do you collect and how do you collect them? Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.). How do you prepare and administrate the plants’ parts? How did you diagnose the disease (by physician, by traditional healers, self-treating)? How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)? Only the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected. 
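The 40% frequency rule described above is essentially a tabulate-and-filter step. The pandas sketch below shows one possible way to apply such a cut-off; the column names, toy rows and the interpretation of the threshold (share of a plant's informants reporting a given use) are assumptions for illustration, not the authors' actual procedure.

```python
# Illustrative sketch of a frequency cut-off for use reports.
# Interpretation of the 40% rule, column names and rows are assumptions.
import pandas as pd

reports = pd.DataFrame({
    "plant":     ["Olea europaea", "Olea europaea", "Olea europaea", "Allium sativum"],
    "use":       ["use A", "use A", "use B", "use C"],   # placeholder uses
    "informant": [1, 2, 3, 4],
})

# Share of a plant's informants that reported each use.
per_use = reports.groupby(["plant", "use"])["informant"].nunique()
per_plant = reports.groupby("plant")["informant"].nunique()
freq = per_use.div(per_plant, level="plant").rename("frequency").reset_index()

kept = freq[freq["frequency"] >= 0.40]  # retain uses meeting the 40% threshold
print(kept)
```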
Information about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C. Field surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)? Name and surname Age and sex Educational level Are plants collected in your region? Do you have any contact with plants? Can you show the plants you use in your region? Can you tell the names of local plants you use in your region? In which season do you collect the plants you use in your region? When collecting plant, which parts of the plant do you collect and how do you collect them? Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.). How do you prepare and administrate the plants’ parts? How did you diagnose the disease (by physician, by traditional healers, self-treating)? 
How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)? Only the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected. Information about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C. Taxonomic identification of the species Medicinal plants being mentioned by the Informants were recorded with local names and photographed. Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order. Medicinal plants being mentioned by the Informants were recorded with local names and photographed. Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order. Ethics approval and consent to participate The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006). The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. 
We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006). Data analysis The data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021). The data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021). Use value and use reports The UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012): UR = Ni/n where Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned. The UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003): UV=∑ URi/N where ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species. The UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021). The UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012): UR = Ni/n where Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned. The UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003): UV=∑ URi/N where ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species. The UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021).
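The UR and UV indices defined above translate directly into a short calculation. The authors analysed their data in SPSS; the sketch below simply renders the two formulas in Python, with invented counts chosen so that the UV values land near those reported later in the text.

```python
# Illustrative calculation of the use report (UR) and use value (UV) indices
# as defined above. Toy counts, not the survey data; the study itself used SPSS.
mentions = {            # Ni: times each species was mentioned
    "Cichorium intybus": 46,
    "Eucalyptus globulus": 46,
    "Plumbago europaea": 5,
}
use_reports = {         # sum of URi: total use reports recorded per species
    "Cichorium intybus": 139,
    "Eucalyptus globulus": 139,
    "Plumbago europaea": 10,
}
N = 151                 # informants interviewed (assumed denominator here)

n_total = sum(mentions.values())        # n: mentions over all species
for species, Ni in mentions.items():
    UR = Ni / n_total                   # UR = Ni / n
    UV = use_reports[species] / N       # UV = sum(URi) / N
    print(f"{species}: UR = {UR:.3f}, UV = {UV:.3f}")
```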
Results
Demographic data of informants In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines. Age and gender distribution. In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines. Age and gender distribution. Ethnobotanical and ethnomedicinal uses of plant species The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study. A total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses. Ethnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama). Ap: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant. C: cultivated plants, W: wild plants. Herbarium no. The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study. A total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses. Ethnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama). 
Ap: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant. C: cultivated plants, W: wild plants. Herbarium no. Botanical families of plants used The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1). Plant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama). The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1). Plant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama). Use value of the plants Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066). Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066). Medicinal parts of the plant used The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). Then, the others parts of plant are rarely used (Figure 2). Medicinal parts of the plants used for ethnomedicinal purposes in the study. The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). Then, the others parts of plant are rarely used (Figure 2). Medicinal parts of the plants used for ethnomedicinal purposes in the study. 
Modes and conditions of medicine preparation The analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). Considering according to results, most of the plant preparations are used orally (Table 2). Modes of ethnomedicines preparation in Homs and Hama. The analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). Considering according to results, most of the plant preparations are used orally (Table 2). Modes of ethnomedicines preparation in Homs and Hama. Ethnomedicinal information about treatment the different diseases The results of questionnaires showed that 20% of the informants were diagnosed with their diseases by a doctor, and 45% were diagnosed with a conventional therapist, and 35% self-diagnosed their diseases, while the results of the questionnaires showed that the evaluation of the treatment by informants as following (58% relied on the disappearance of symptoms, and 24% through the results of laboratory analysis, 18% adopted other methods such as chest radiography, adopting the attending physician’s opinion and clinical observation of the improvement of skin diseases, and some of them depended on psychological comfort during treatment as evidence of improvement). Among these studied plants, 62 are used to treat digestive disorders, 41 for respiratory diseases, including asthma, bronchitis and coughs, 40 for skin diseases, 16 for diabetes, 36 for kidney and urinary tract disorders, 22 for nervous system disorders, six for enhance the body's immunity, two for haemorrhoids, five for fever, eight for heart disorders, five for infertility and impotence, six for treating several types of cancer, two for increasing breast milk production, five for losing weight, four for lowering cholesterol, and two for increasing weight, and six for anaemia, 15 for blood disorder, two anti-toxicant, 19 for arthritis and pain, one for typhoid disorder, eight for infections, six for gynaecological diseases, one for eye inflammation, two anti-toxicant and four for mouth sores. Many of them are still used today, especially those plants recommended for internal uses such as traditional medicinal teas, which mainly consist of remedies for obesity, weight loss, colds, colds, digestive disorders, abdominal pain, constipation and some skin diseases (Figure 4). Ethno-medicinal information about treating different diseases related to central region in Syria (Homs and Hama). 
Conclusions
Many of the uses of medicinal plants reported in Syria are still under study. This study was conducted to generate new leads that could guide further pharmacological work, namely identifying the active ingredients and their mechanisms of action in order to confirm the claimed biological activities of these plants.
[ "Study area", "Field work and data collection", "Taxonomic identification of the species", "Ethics approval and consent to participate", "Data analysis", "Use value and use reports", "Demographic data of informants", "Ethnobotanical and ethnomedicinal uses of plant species", "Botanical families of plants used", "Use value of the plants", "Medicinal parts of the plant used", "Modes and conditions of medicine preparation", "Ethnomedicinal information about treatment the different diseases", "Limitations" ]
[ "Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate.\nThis area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021).\nHistorically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021).", "Field surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). 
During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nName and surname\nAge and sex\nEducational level\nAre plants collected in your region?\nDo you have any contact with plants?\nCan you show the plants you use in your region?\nCan you tell the names of local plants you use in your region?\nIn which season do you collect the plants you use in your region?\nWhen collecting plant, which parts of the plant do you collect and how do you collect them?\nWhich parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).\nHow do you prepare and administrate the plants’ parts?\nHow did you diagnose the disease (by physician, by traditional healers, self-treating)?\nHow did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nOnly the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected.\nInformation about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C.", "Medicinal plants being mentioned by the Informants were recorded with local names and photographed. Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order.", "The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. 
We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006).", "The data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021).", "The UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012):\nUR = Ni/n\nwhere Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned.\nThe UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003):\nUV=∑ URi/N\nwhere ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species.\nThe UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021).", "In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines.\nAge and gender distribution.", "The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study.\nA total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. 
The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses.\nEthnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama).\nAp: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant.\nC: cultivated plants, W: wild plants.\nHerbarium no.", "The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1).\nPlant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama).", "Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066).", "The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). Then, the others parts of plant are rarely used (Figure 2).\nMedicinal parts of the plants used for ethnomedicinal purposes in the study.", "The analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). 
Considering according to results, most of the plant preparations are used orally (Table 2).\nModes of ethnomedicines preparation in Homs and Hama.", "The results of questionnaires showed that 20% of the informants were diagnosed with their diseases by a doctor, and 45% were diagnosed with a conventional therapist, and 35% self-diagnosed their diseases, while the results of the questionnaires showed that the evaluation of the treatment by informants as following (58% relied on the disappearance of symptoms, and 24% through the results of laboratory analysis, 18% adopted other methods such as chest radiography, adopting the attending physician’s opinion and clinical observation of the improvement of skin diseases, and some of them depended on psychological comfort during treatment as evidence of improvement).\nAmong these studied plants, 62 are used to treat digestive disorders, 41 for respiratory diseases, including asthma, bronchitis and coughs, 40 for skin diseases, 16 for diabetes, 36 for kidney and urinary tract disorders, 22 for nervous system disorders, six for enhance the body's immunity, two for haemorrhoids, five for fever, eight for heart disorders, five for infertility and impotence, six for treating several types of cancer, two for increasing breast milk production, five for losing weight, four for lowering cholesterol, and two for increasing weight, and six for anaemia, 15 for blood disorder, two anti-toxicant, 19 for arthritis and pain, one for typhoid disorder, eight for infections, six for gynaecological diseases, one for eye inflammation, two anti-toxicant and four for mouth sores. Many of them are still used today, especially those plants recommended for internal uses such as traditional medicinal teas, which mainly consist of remedies for obesity, weight loss, colds, colds, digestive disorders, abdominal pain, constipation and some skin diseases (Figure 4).\nEthno-medicinal information about treating different diseases related to central region in Syria (Homs and Hama).", "There is insufficient information about the pharmacokinetic efficacy of the medicinal plant species in this study. These herbs have reportedly and traditionally been used as adjuvant to relieve and treat some diseases." ]
[ "Introduction", "Methods", "Study area", "Field work and data collection", "Taxonomic identification of the species", "Ethics approval and consent to participate", "Data analysis", "Use value and use reports", "Results", "Demographic data of informants", "Ethnobotanical and ethnomedicinal uses of plant species", "Botanical families of plants used", "Use value of the plants", "Medicinal parts of the plant used", "Modes and conditions of medicine preparation", "Ethnomedicinal information about treatment the different diseases", "Discussion", "Limitations", "Conclusions" ]
[ "Traditional medicine (TM), as defined by the World Health Organization (WHO), is the sum total of the knowledge, skills and practices based on the theories, beliefs and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness (WHO 2002a, 2002b). Some TM systems are supported by huge volumes of literature and records of the theoretical concepts and practical skills; others pass down from generation to generation through verbal teaching. To date, in some parts of the world, the majority of the population continue to rely on their own TM to meet their primary health care needs. When adopted outside of its traditional culture, TM is often referred as complementary and alternative medicine (CAM) (Che et al. 2017).\nAlthough modern medicine is currently available in many developing countries, large proportions of the population in these countries still rely to a large extent on traditional practitioners and medicinal plants for therapeutic purposes. TM is often the first choice for providing primary health care in developing countries, and the WHO estimates that more than 80% of healthcare needs in these countries are met by traditional health care practices, being the cheapest and most accessible (WHO 2002a, 2002b, 2004, 2005).\nThe WHO pays special attention to TM and CAM. Resolution (no. WHA13-6) issued by the World Health Assembly (WHA) in 2009 emphasized the need to update the global TM strategy (WHO 2009), so the WHO issued its strategy for traditional (folk medicine) 2014–2023 (WHO 2013).\nThe United Nations Educational Scientific and Cultural Organization (UNESCO) works to document the medical heritage of peoples within its documentation of the living intangible cultural heritage of peoples and civilizations, through the documentation of TM practices that were included in the UNESCO convention (2003), the convention on biological diversity (1992) and the UNESCO universal declaration on cultural diversity (2001) and the United Nations declaration on the rights of indigenous peoples (2007), and according to the meeting of the UNESCO International Bioethics Committee (IBC) working group on TM and its ethical effects in Paris (2010), it stressed the need to conduct studies that illustrate the use of TM in various regions around the world, and its evaluation in clinical practice and in research and policy (IBC 2013).\nIn 2017, the International Center for Information and Networks for Intangible Cultural Heritage in the Asia-Pacific region (ICHCAP) under the auspices of UNESCO, issued a book entitled Traditional Medicine in which the Syria Trust for Development in Section VII included TM in Syria (Falk 2017).\nEthno-botany is the scientific study of the relationships between people and plants. It was first coined in 1896 by the US botanist John Harshberger; however, the history of ethnobotany began long before that (Campbell et al. 2002; Amjad et al. 2015). It plays an important role in understanding the dynamic relationships between biological diversity and social and cultural systems (Husain et al. 2008; Amjad et al. 2013, 2015). Plants are essential for human beings as they provide food, and medicines (Alam et al. 2011; Ahmad et al. 2012).\nTraditional Arabic medicine (TAM) is one of the famous traditional medical systems, which is occasionally called Unani medicine, Graeco-Arabic medicine, humoral medicine or Islamic medicine. 
The subject of TM in Syria has received little attention in the literature, and very little is known about the traditional medicinal substances used nowadays by the Syrian population to treat the most common diseases. Throughout ancient times in Syria, as part of the Levantine Nations (Bilad al-Sham), and other lands in the region, humans used various natural materials as sources of medicines (Jaddouh 2004). In the western countryside of Hama, there is a natural reserve for medicinal plants, which is called the Abu Qubais Protected area in Al-Ghab region (which protect the biodiversity rights of indigenous people and affiliated to the general commission for Al-Ghab administration and development), 509 plant species belonging to 72 families have been recorded (Al-Mahmoud and Al-Shater 2010).\nFor these reasons, the present investigation gathered the uses of medicinal plants in the centre region of Syria (Homs and Hama), as a supplement for a national survey, and documents the information concerning the uses of medicinal plants, which may serve as the basis of knowledge for a more intensive scientific research.", "Study area Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate.\nThis area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021).\nHistorically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021).\nSyria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate.\nThis area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. 
Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021).\nHistorically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021).\nField work and data collection Field surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? 
(flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nName and surname\nAge and sex\nEducational level\nAre plants collected in your region?\nDo you have any contact with plants?\nCan you show the plants you use in your region?\nCan you tell the names of local plants you use in your region?\nIn which season do you collect the plants you use in your region?\nWhen collecting plant, which parts of the plant do you collect and how do you collect them?\nWhich parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).\nHow do you prepare and administrate the plants’ parts?\nHow did you diagnose the disease (by physician, by traditional healers, self-treating)?\nHow did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nOnly the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected.\nInformation about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C.\nField surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). 
During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nName and surname\nAge and sex\nEducational level\nAre plants collected in your region?\nDo you have any contact with plants?\nCan you show the plants you use in your region?\nCan you tell the names of local plants you use in your region?\nIn which season do you collect the plants you use in your region?\nWhen collecting plant, which parts of the plant do you collect and how do you collect them?\nWhich parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).\nHow do you prepare and administrate the plants’ parts?\nHow did you diagnose the disease (by physician, by traditional healers, self-treating)?\nHow did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nOnly the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected.\nInformation about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C.\nTaxonomic identification of the species Medicinal plants being mentioned by the Informants were recorded with local names and photographed. Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order.\nMedicinal plants being mentioned by the Informants were recorded with local names and photographed. 
Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order.\nEthics approval and consent to participate The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006).\nThe study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006).\nData analysis The data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021).\nThe data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021).\nUse value and use reports The UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012):\nUR = Ni/n\nwhere Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned.\nThe UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 
2003):\nUV=∑ URi/N\nwhere ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species.\nThe UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021).\nThe UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012):\nUR = Ni/n\nwhere Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned.\nThe UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003):\nUV=∑ URi/N\nwhere ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species.\nThe UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021).", "Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate.\nThis area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021).\nHistorically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021).", "Field surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). 
The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nName and surname\nAge and sex\nEducational level\nAre plants collected in your region?\nDo you have any contact with plants?\nCan you show the plants you use in your region?\nCan you tell the names of local plants you use in your region?\nIn which season do you collect the plants you use in your region?\nWhen collecting plant, which parts of the plant do you collect and how do you collect them?\nWhich parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).\nHow do you prepare and administrate the plants’ parts?\nHow did you diagnose the disease (by physician, by traditional healers, self-treating)?\nHow did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)?\nOnly the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected.\nInformation about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. 
The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C.", "Medicinal plants being mentioned by the Informants were recorded with local names and photographed. Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order.", "The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006).", "The data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021).", "The UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012):\nUR = Ni/n\nwhere Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned.\nThe UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003):\nUV=∑ URi/N\nwhere ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species.\nThe UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021).", "Demographic data of informants In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines.\nAge and gender distribution.\nIn total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. 
Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines.\nAge and gender distribution.\nEthnobotanical and ethnomedicinal uses of plant species The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study.\nA total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses.\nEthnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama).\nAp: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant.\nC: cultivated plants, W: wild plants.\nHerbarium no.\nThe study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study.\nA total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. 
The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses.\nEthnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama).\nAp: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant.\nC: cultivated plants, W: wild plants.\nHerbarium no.\nBotanical families of plants used The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1).\nPlant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama).\nThe most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1).\nPlant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama).\nUse value of the plants Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066).\nMedicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066).\nMedicinal parts of the plant used The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). Then, the others parts of plant are rarely used (Figure 2).\nMedicinal parts of the plants used for ethnomedicinal purposes in the study.\nThe analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). 
Then, the others parts of plant are rarely used (Figure 2).\nMedicinal parts of the plants used for ethnomedicinal purposes in the study.\nModes and conditions of medicine preparation The analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). Considering according to results, most of the plant preparations are used orally (Table 2).\nModes of ethnomedicines preparation in Homs and Hama.\nThe analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). Considering according to results, most of the plant preparations are used orally (Table 2).\nModes of ethnomedicines preparation in Homs and Hama.\nEthnomedicinal information about treatment the different diseases The results of questionnaires showed that 20% of the informants were diagnosed with their diseases by a doctor, and 45% were diagnosed with a conventional therapist, and 35% self-diagnosed their diseases, while the results of the questionnaires showed that the evaluation of the treatment by informants as following (58% relied on the disappearance of symptoms, and 24% through the results of laboratory analysis, 18% adopted other methods such as chest radiography, adopting the attending physician’s opinion and clinical observation of the improvement of skin diseases, and some of them depended on psychological comfort during treatment as evidence of improvement).\nAmong these studied plants, 62 are used to treat digestive disorders, 41 for respiratory diseases, including asthma, bronchitis and coughs, 40 for skin diseases, 16 for diabetes, 36 for kidney and urinary tract disorders, 22 for nervous system disorders, six for enhance the body's immunity, two for haemorrhoids, five for fever, eight for heart disorders, five for infertility and impotence, six for treating several types of cancer, two for increasing breast milk production, five for losing weight, four for lowering cholesterol, and two for increasing weight, and six for anaemia, 15 for blood disorder, two anti-toxicant, 19 for arthritis and pain, one for typhoid disorder, eight for infections, six for gynaecological diseases, one for eye inflammation, two anti-toxicant and four for mouth sores. 
Demographic data of informants: In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in the traditional uses of plants; Table 1 shows their age and gender distribution. All of them were interviewed using semi-structured questionnaires. In general, both genders in Syria showed an interest in herbal medicines.
Table 1: Age and gender distribution.
Ethnobotanical and ethnomedicinal uses of plant species: The study area is considered one of the Syrian areas rich in medicinal plants. In its east lies the desert (badia), dominated by desert and thorny plants, while in its west aromatic plants spread over the mountains near the eastern coast of the Mediterranean Sea. People in some of these rural areas suffer from poverty, so they rely heavily on folk remedies; folk healers in these areas provide their expertise at low cost, because medicinal herbs are cheaper than chemical medicines and most of the medicinal recipes are readily available. However, a large portion of the uses of medicinal plants mentioned in this research are still under study. A total of 76 medicinal plant species (57.9% wild and 42.1% cultivated) belonging to 39 families were recorded in the present study; they are used for a variety of purposes by the native people.
Discussion: The use of TAM to treat various diseases has been widespread in Syria since ancient times. These remedies are cost-effective, have fewer side effects and are more suitable for long-term use than chemically synthesized medicines. The ethnobotanical categories indicate that medicinal herbs are used extensively in the study area, and most of them are wild. Medicinal plants are increasingly exploited by the local population, collectors and dealers of herbal medicines, in line with demand from the pharmaceutical industry, and this has caused a sharp decrease in the occurrence and yield of medicinal plants. Grazing, deforestation through the cutting of trees for heating, and fires were mainly responsible for the reduction of medicinal plants, which is why the government is working on strategies to conserve wild plant diversity. Some people collect medicinal plants for income, uprooting and collecting every part of the plants in a non-scientific way. To date, only a few articles have been devoted to the TM of Syria, such as a study of folk medicine in Aleppo Governorate (Alachkar et al. 2011), a study on the use of ‘Zahraa’ (a Syrian traditional tisane) (Carmona et al. 2005), and a third on the medicinal plants of Golan (Said et al.
2002), which is an occupied Syrian territory. This research aspires to contribute genuinely useful information towards conserving and sustaining the natural resources of the area.
The perspectives in the questionnaire were compared with other ethnomedicine studies from the countries surrounding Syria, such as Lebanon (Taha et al. 2013), Jordan (Lev and Amar 2002; Al-Qura'n 2009), Palestine (Friedman et al. 1986; Kaileh et al. 2007), Iraq (Al-Douri 2000) and Turkey (Yeşilada et al. 1995; Sezik et al. 2001). Similarities in various traditional uses were observed among Syria, Lebanon, Palestine and Jordan, mainly due to the shared history of these areas, previously called the Levantine Nations (Bilad al-Sham) (Lev 2002); there is some similarity in a smaller number of folk uses between Syria and Iraq, but a clear difference in the folk uses described between Syria and Turkey (Korkmaz et al. 2016; Yerebasan et al. 2020). We compared the folk uses of plants reported in related ethnomedicine articles from these regions with the plants studied in our research to determine the extent of agreement or difference in the uses of similar plants.
We did not record significant differences in phytomedicine consumption customs between interviewees of different religions; in general, phytomedicine consumption was often explained and justified by interviewees as family tradition. Nor did we detect any gender-related differences in phytomedicine consumption or in the common traditional use of medicinal plants. The ethnomedicine data presented herein imply that medicinal plants are important to various local people as food and, particularly, as medicine (traditional healing). While chemical medicinal treatments are becoming commonplace, traditional medications remain of huge importance in many rural, poor and remote places.
This study will provide new data that could contribute to further pharmacological discoveries by identifying the active ingredients of these plants and their mechanisms of effect through pharmacological work confirming their alleged biological activities; the possibility of developing new pharmaceutical formulations based on Syrian medicinal plants and their folk uses cannot be excluded. The discovery of artemisinin from Artemisia annua on the basis of ethnobotanical information (Acton and Klayman 1985) serves as evidence that new and effective medicines can be found using data from TM.
Limitations: There is insufficient information about the pharmacokinetic efficacy of the medicinal plant species in this study. These herbs have reportedly and traditionally been used as adjuvants to relieve and treat some diseases.
Conclusions: Many of the uses of medicinal plants mentioned in Syria are still under study. This study has been conducted with the aim of generating new concepts that could supplement pharmacological work and potentially lead to further pharmacological discoveries, that is, by identifying the active ingredients and their mechanisms of effect to confirm the alleged biological activities of these plants.
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", null, "conclusions" ]
[ "Traditional Arabic medicine (TAM)", "herbal medicine", "Mediterranean", "phytotherapy", "medicinal plants", "folk uses", "ethnobotanical", "ethnopharmacology" ]
Introduction: Traditional medicine (TM), as defined by the World Health Organization (WHO), is the sum total of the knowledge, skills and practices based on the theories, beliefs and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness (WHO 2002a, 2002b). Some TM systems are supported by huge volumes of literature and records of their theoretical concepts and practical skills; others are passed down from generation to generation through verbal teaching. To date, in some parts of the world, the majority of the population continues to rely on its own TM to meet primary health care needs. When adopted outside its traditional culture, TM is often referred to as complementary and alternative medicine (CAM) (Che et al. 2017). Although modern medicine is currently available in many developing countries, large proportions of their populations still rely to a large extent on traditional practitioners and medicinal plants for therapeutic purposes. TM is often the first choice for primary health care in developing countries, and the WHO estimates that more than 80% of healthcare needs in these countries are met by traditional health care practices, these being the cheapest and most accessible (WHO 2002a, 2002b, 2004, 2005).
The WHO pays special attention to TM and CAM. Resolution no. WHA13-6, issued by the World Health Assembly (WHA) in 2009, emphasized the need to update the global TM strategy (WHO 2009), and the WHO subsequently issued its traditional (folk) medicine strategy for 2014–2023 (WHO 2013). The United Nations Educational, Scientific and Cultural Organization (UNESCO) works to document the medical heritage of peoples as part of its documentation of living intangible cultural heritage, through the TM practices covered by the UNESCO convention (2003), the convention on biological diversity (1992), the UNESCO universal declaration on cultural diversity (2001) and the United Nations declaration on the rights of indigenous peoples (2007). The meeting of the UNESCO International Bioethics Committee (IBC) working group on TM and its ethical implications, held in Paris (2010), stressed the need for studies that illustrate the use of TM in various regions of the world and its evaluation in clinical practice, research and policy (IBC 2013). In 2017, the International Center for Information and Networks for Intangible Cultural Heritage in the Asia-Pacific region (ICHCAP), under the auspices of UNESCO, issued a book entitled Traditional Medicine, in which Section VII, contributed by the Syria Trust for Development, covers TM in Syria (Falk 2017).
Ethnobotany is the scientific study of the relationships between people and plants. The term was first coined in 1896 by the US botanist John Harshberger; however, the history of ethnobotany began long before that (Campbell et al. 2002; Amjad et al. 2015). It plays an important role in understanding the dynamic relationships between biological diversity and social and cultural systems (Husain et al. 2008; Amjad et al. 2013, 2015). Plants are essential for human beings as they provide food and medicines (Alam et al. 2011; Ahmad et al. 2012). Traditional Arabic medicine (TAM) is one of the famous traditional medical systems; it is occasionally called Unani medicine, Graeco-Arabic medicine, humoral medicine or Islamic medicine.
The subject of TM in Syria has received little attention in the literature, and very little is known about the traditional medicinal substances used nowadays by the Syrian population to treat the most common diseases. Throughout ancient times in Syria, as part of the Levantine Nations (Bilad al-Sham) and other lands in the region, humans used various natural materials as sources of medicines (Jaddouh 2004). In the western countryside of Hama there is a natural reserve for medicinal plants, the Abu Qubais protected area in the Al-Ghab region (which protects the biodiversity rights of indigenous people and is affiliated to the General Commission for Al-Ghab Administration and Development); 509 plant species belonging to 72 families have been recorded there (Al-Mahmoud and Al-Shater 2010). For these reasons, the present investigation gathered the uses of medicinal plants in the central region of Syria (Homs and Hama) as a supplement to a national survey, and documents information concerning the uses of medicinal plants that may serve as a knowledge base for more intensive scientific research.
Methods
Study area: Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. It is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria, comprising Homs Governorate and Hama Governorate. The area has a Mediterranean climate with a long dry season from May to October, and there is some light summer rain in the extreme northwest. In the western region, summers are hot, with mean daily maximum temperatures in the low to mid-80s °F, while the mild winters have daily mean minimum temperatures in the low 50s °F. Only above about 1500 m are the summers relatively cool. Inland, the climate becomes arid, with colder winters and hotter summers; in the desert at Tadmur, summer maximum temperatures average from the upper 90s to the low 100s °F, with extremes around 110 °F. In rural areas, work follows the seasonal rhythm of agriculture, and women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, including fruits and vegetables such as onions, olives and grapes. Commercially important forest plants include pistachio, valued for its oil-rich fruit, as well as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021). Historically, ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate); an oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world from the first to the second century, according to the World Heritage List (UNESCO 2021).
Field work and data collection: Field surveys were conducted between December 2019 and January 2021 to document ethnobotanical information through oral interviews and a semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted, and 151 of those with ethnobotanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (holding a general medicine specialty). Queries were repeated to increase the reliability of the data; interviews with men were usually carried out in the ‘Mukhtar’ house where they gather, and with women in their homes, bazaars and gardens. The Syria Trust for Development (a national development organization whose ‘Mashrouie’ programme runs innovative microcredit schemes that encourage economic growth in disadvantaged areas) helped us with data collection. The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific names of the species, plant parts used, modes of use, conservation method, administration mode and toxicity; all documented data were then translated into English and Latin. Information that had been carried into the region from outside and that was not used or confirmed locally was not recorded (Weckerle et al. 2018). During the interviews, the participants were asked the following questions:
Name and surname
Age and sex
Educational level
Are plants collected in your region?
Do you have any contact with plants?
Can you show the plants you use in your region?
Can you tell the names of local plants you use in your region?
In which season do you collect the plants you use in your region?
When collecting a plant, which parts do you collect and how do you collect them?
Which parts of the plants do you use (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.)?
How do you prepare and administer the plants’ parts?
How did you diagnose the disease (by physician, by traditional healers, self-treating)?
How did you check the effectiveness of the treatment (disappearance of symptoms, laboratory analysis, other methods)?
Only single-herb medications were included, while mixtures and multi-herb recipes were disregarded. Answers given with doubt were not recorded; we adopted information that reached a 40% frequency in the collected data, and data below that threshold were neglected. Information about diseases diagnosed specifically by physicians and traditional healers was approved, whereas self-treated diseases were regarded as auxiliary in treatment; nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. When analysing the data, the information about diseases was divided into three groups (group A: acute diseases; group B: chronic diseases; group C: simple symptoms and signs), and the focus was only on groups B and C.
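One possible reading of this 40% rule is that a reported use was retained only when at least 40% of the collected answers mentioned it. A minimal sketch under that assumption, with invented tallies (the species and counts below are placeholders, not survey data):

    # Placeholder tallies: number of answers supporting each reported use,
    # out of a hypothetical pool of 151 informants.
    N_INFORMANTS = 151
    THRESHOLD = 0.40   # uses reported below this frequency were neglected

    use_report_counts = {
        "species A - digestive complaints": 120,
        "species B - skin conditions": 70,
        "species C - headache": 25,
    }

    retained = {
        use: count
        for use, count in use_report_counts.items()
        if count / N_INFORMANTS >= THRESHOLD
    }

    print(retained)   # keeps the first two uses, drops the third (25/151 ~ 17%)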
Taxonomic identification of the species: The medicinal plants mentioned by the informants were recorded with their local names and photographed. Each reported medicinal plant species was collected, pressed, dried, mounted on herbarium sheets and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD, Botanical Taxonomy) and Dr. Bayan Tiba (PhD, Botanical Taxonomy) of Aleppo University. As far as possible, plant names were updated by consulting the latest literature; generic and species names follow The Plant List (http://www.theplantlist.org). All voucher specimens were preserved during documentation and deposited in the Pharmacognosy Labs Herbarium, Faculty of Pharmacy, Damascus University, for future reference; serial numbers (1ch to 76ch) were assigned in alphabetical order.
Ethics approval and consent to participate: The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission, first site-wide and then individually before each interview. We also informed the informants that this was an academic project and that the investigation was for research purposes only, with no financial or other benefits. Informants provided verbal informed consent to participate in the study and were free to withdraw their information at any time; they freely accepted the interviews. All steps of the research are consistent with the Ethnobiology Code of Ethics (ISE 2006).
Data analysis: The data collected through the interviews were classified and examined with IBM SPSS Statistics 26 (IBM, Armonk, NY) to determine the proportions of different variables in the ethnopharmacological data. Quantitative indices were analysed using standard quantitative tools, namely the use reports (UR) of a species and the use value (UV) (Chaachouaya et al. 2021).
Use value and use reports: The UR of a species, reflecting its importance in the culture of a community, is denoted by its mention frequency among informants. The UR of each plant species was evaluated using the formula (Tenté et al. 2012): UR = Ni/n, where Ni is the number of times a particular species was mentioned and n is the total number of times that all species were mentioned. The UV of each recorded medicinal plant was determined by applying the formula (Tabuti et al. 2003): UV = ΣURi/N, where ΣURi is the total number of use reports for the plant and N is the total number of interviewees questioned about the given medicinal species. The UV will be higher when there are many use reports for a species, implying that the plant is significant, and will approach 0 when there are few reports of its use (Yaseen 2015; Chaachouaya et al. 2021).
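To make the two indices concrete, here is a minimal Python sketch of the UR and UV calculations as defined above. The interview records are invented for illustration, ΣURi is taken as the species' mention count, and N is taken as the total number of informants, a common simplification when every informant is asked about every species:

    from collections import Counter

    # Invented use reports: (informant, species cited for a medicinal use).
    use_reports = [
        ("inf_01", "Cichorium intybus"),
        ("inf_01", "Olea europaea"),
        ("inf_02", "Cichorium intybus"),
        ("inf_03", "Cichorium intybus"),
        ("inf_03", "Plumbago europaea"),
    ]

    mentions = Counter(species for _, species in use_reports)   # Ni per species
    n = sum(mentions.values())                                  # total mentions of all species
    N = len({informant for informant, _ in use_reports})        # informants questioned

    UR = {species: ni / n for species, ni in mentions.items()}  # UR = Ni / n
    UV = {species: ni / N for species, ni in mentions.items()}  # UV = (use reports for species) / N

    for species in sorted(UV, key=UV.get, reverse=True):
        print(f"{species}: UR = {UR[species]:.3f}, UV = {UV[species]:.3f}")

Under these assumptions, a species cited by every informant approaches UV = 1, while a species cited only once stays close to 0, matching the interpretation given in the text.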
The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012): UR = Ni/n where Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned. The UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003): UV=∑ URi/N where ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species. The UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021). Study area: Syria is a country located on the east coast of the Mediterranean Sea in southwestern Asia. Syria is bounded by Turkey to the north, Iraq to the east and southeast, Jordan to the south and Lebanon to the southwest. The study area is the middle region of Syria including Homs Governorate and Hama Governorate. This area has a Mediterranean climate with a long dry season from May to October. In the extreme northwest, there is some light summer rain. On the western region, summers are hot, with mean daily maximum temperatures ranging from low to mid 80 °F, while the mild winters have daily mean minimum reaching temperatures low level of 50 °F. Only above about 1500 m are the summers relatively cool. In inland the climate becomes arid, with colder winters and hotter summers. In the desert, at Tadmur maximum temperatures in the summer, temperature reaches averages in the ranges of upper 90s to low 100 °F, with extremes in the 110 °F. In rural areas, work takes place according to the seasonal rhythm of agriculture. Women generally share in much of the agricultural labour. Agriculture constitutes an important source of income, and fruits and vegetables including onions, olives and grapes. Commercially important forest plants include: pistachio, which is important for its oil-rich fruit, and plants such as olive trees, grapevines, apricot trees and cumin (Hamidé et al. 2021). Historically, the ancient Palmyra, also called Tadmur, is an ancient city in south-central Syria (Homs Governorate). An oasis in the Syrian desert, Palmyra contains the monumental ruins of a great city that was one of the most important cultural centres of the ancient world, from the first to the second century according to World Heritage List (UNESCO 2021). Field work and data collection: Field surveys were conducted between December 2019 and January 2021, to document ethnobotanical information through oral interviews and designed semi-structured questionnaire. Thirty-five villages and 59 districts were visited for field research; 235 people were contacted and 151 of those with ethno-botanical experience agreed to become our informants, including 35 local herbalists (Tabib Arabi) and 10 physicians (who hold a general medicine specialty). The queries were repeatedly made to increase the reliability of the data; interviews with the men were usually carried out in the ‘Mukhtar’ house where they come together, and with women in their homes, bazaars and gardens. The Syria Trust for Development (which is a national development organization, and has a program called ‘Mashrouie’ that runs innovative microcredit programs that encourage economic growth in disadvantaged areas) helped us in data collecting. 
The information gathered during the present study included socio-demographic characteristics of the interviewed informants (age, gender) and ethnopharmacological information, including the local and scientific name of the species, local names, plant parts used, modes of use, conservation method, administration mode and toxicity, all documented data were then translated into English and Latin. Information that had been carried to the region from the outside and that was not used or confirmed was not included and recorded (Weckerle et al. 2018). During the interviews, questions about the following were asked to the participants:Name and surnameAge and sexEducational levelAre plants collected in your region?Do you have any contact with plants?Can you show the plants you use in your region?Can you tell the names of local plants you use in your region?In which season do you collect the plants you use in your region?When collecting plant, which parts of the plant do you collect and how do you collect them?Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.).How do you prepare and administrate the plants’ parts?How did you diagnose the disease (by physician, by traditional healers, self-treating)?How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)? Name and surname Age and sex Educational level Are plants collected in your region? Do you have any contact with plants? Can you show the plants you use in your region? Can you tell the names of local plants you use in your region? In which season do you collect the plants you use in your region? When collecting plant, which parts of the plant do you collect and how do you collect them? Which parts of the plants do you use? (flowers, fruits, leaves, roots, tubers, young shoots, branches, stems, aerial parts, etc.). How do you prepare and administrate the plants’ parts? How did you diagnose the disease (by physician, by traditional healers, self-treating)? How did you check the effectiveness of the treatment (disappearance of symptoms, by laboratory analysis, other methods)? Only the medication of single herbs was included, while mixtures and multiple recipes were disregarded. The answers that were given with doubt to the questions above were not recorded, and we adopted the information that gave 40% frequency in the collected data, and the data that gave less than that were neglected. Information about diseases diagnosed specifically by physicians and traditional healers was approved, while self-treating diseases were referred to as auxiliary in treatment, nutritional uses were clearly indicated, and some plants were used to prevent disease and improve public health. The information about diseases was divided into three groups when analysing the data (group A: acute diseases, group B: chronic diseases, group C: simple symptoms and signs), and the focus was only on groups B and C. Taxonomic identification of the species: Medicinal plants being mentioned by the Informants were recorded with local names and photographed. Each reported medicinal plant species was gathered, compressed, dehydrated, mounted on herbarium sheets, and identified; the taxonomic identity of the plants was confirmed by Prof. Abdel Aleem Bello (PhD/Botanical Taxonomy) and Dr. Bayan Tiba (PhD/Botanical Taxonomy) Aleppo University. 
As far as possible, the name of the plants was updated by consulting the latest literature; generic and species names followed the plant list (http://www.theplantlist.org). All voucher specimens have been preserved during documentation and deposited in the Damascus University, Faculty of Pharmacy, Pharmacognosy Labs Herbarium for future reference, serial numbers were taken from (1ch to 76ch) according to its alphabetical order. Ethics approval and consent to participate: The study was approved by the Ethics Committee of the University of Damascus. Before beginning data collection, we obtained verbal informed permission in each case site-wide and then individually before each interview. We also informed informants that it was an academic project and that the investigation was for research purposes only, and not for any financial or other benefits. Informants provided verbal informed consent to participate in this study; they were free to withdraw their information at any time. These informants freely accepted the interview. All steps of research are consistent with Ethnobiology Code of Ethics (ISE 2006). Data analysis: The data collected through interviews of the informants were classified and examined with the statistical program IBM® SPSS® Statistics 26 (IBM, Armonk, NY), to determine the proportions of different variables such as ethnopharmacological data. Quantitative value indices were analysed using different statistical quantitative tools, i.e., the use reports (UR) of a species, and use value (UV) (Chaachouaya et al. 2021). Use value and use reports: The UR of a species or its importance in the culture of a community is denoted by its mentioning rate or its mention frequency by informants. The UR of the species of plants being utilized was evaluated using the formula (Tenté et al 2012): UR = Ni/n where Ni is the number of times a particular species was mentioned; n is the total number of times that all species were mentioned. The UV of recorded medicinal plants was determined by applying the following formula (Tabuti et al. 2003): UV=∑ URi/N where ∑ URi is the total number of UR per plants; N is the total of interviewees questioned for given medicinal species. The UV rate will be more important if there are several useful records for a species, implying that the plant is significant, whereas they will be near 0 if there are few reports compared to its use (Yaseen 2015; Chaachouaya et al. 2021). Results: Demographic data of informants In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines. Age and gender distribution. In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines. Age and gender distribution. Ethnobotanical and ethnomedicinal uses of plant species The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. 
It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study. A total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses. Ethnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama). Ap: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant. C: cultivated plants, W: wild plants. Herbarium no. The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study. A total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses. Ethnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama). Ap: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant. C: cultivated plants, W: wild plants. Herbarium no. Botanical families of plants used The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1). Plant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama). The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1). Plant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama). 
Use value of the plants Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066). Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066). Medicinal parts of the plant used The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). Then, the others parts of plant are rarely used (Figure 2). Medicinal parts of the plants used for ethnomedicinal purposes in the study. The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. Ethnobotanical use categories showed that leaves were commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%) and seeds (15.6%) then roots (10.4%), flowers (9.36%) and aerial parts (4.16%). Then, the others parts of plant are rarely used (Figure 2). Medicinal parts of the plants used for ethnomedicinal purposes in the study. Modes and conditions of medicine preparation The analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). Considering according to results, most of the plant preparations are used orally (Table 2). Modes of ethnomedicines preparation in Homs and Hama. The analysis of the ethnobotanical data showed that the recipes in the most cases were obtained from single herb, but some of recipes were prepared together, and there is a famous local mixture called Damask tisane (zhourate Shamieh). A mode of TM preparation reported was a decoction (30%), followed by infusion (23%), and then by other method such as fresh herbs, juice, cooked, powder, vinegar and oils (47%) (Figure 3). Considering according to results, most of the plant preparations are used orally (Table 2). Modes of ethnomedicines preparation in Homs and Hama. 
Ethnomedicinal information about treatment the different diseases The results of questionnaires showed that 20% of the informants were diagnosed with their diseases by a doctor, and 45% were diagnosed with a conventional therapist, and 35% self-diagnosed their diseases, while the results of the questionnaires showed that the evaluation of the treatment by informants as following (58% relied on the disappearance of symptoms, and 24% through the results of laboratory analysis, 18% adopted other methods such as chest radiography, adopting the attending physician’s opinion and clinical observation of the improvement of skin diseases, and some of them depended on psychological comfort during treatment as evidence of improvement). Among these studied plants, 62 are used to treat digestive disorders, 41 for respiratory diseases, including asthma, bronchitis and coughs, 40 for skin diseases, 16 for diabetes, 36 for kidney and urinary tract disorders, 22 for nervous system disorders, six for enhance the body's immunity, two for haemorrhoids, five for fever, eight for heart disorders, five for infertility and impotence, six for treating several types of cancer, two for increasing breast milk production, five for losing weight, four for lowering cholesterol, and two for increasing weight, and six for anaemia, 15 for blood disorder, two anti-toxicant, 19 for arthritis and pain, one for typhoid disorder, eight for infections, six for gynaecological diseases, one for eye inflammation, two anti-toxicant and four for mouth sores. Many of them are still used today, especially those plants recommended for internal uses such as traditional medicinal teas, which mainly consist of remedies for obesity, weight loss, colds, colds, digestive disorders, abdominal pain, constipation and some skin diseases (Figure 4). Ethno-medicinal information about treating different diseases related to central region in Syria (Homs and Hama). The results of questionnaires showed that 20% of the informants were diagnosed with their diseases by a doctor, and 45% were diagnosed with a conventional therapist, and 35% self-diagnosed their diseases, while the results of the questionnaires showed that the evaluation of the treatment by informants as following (58% relied on the disappearance of symptoms, and 24% through the results of laboratory analysis, 18% adopted other methods such as chest radiography, adopting the attending physician’s opinion and clinical observation of the improvement of skin diseases, and some of them depended on psychological comfort during treatment as evidence of improvement). Among these studied plants, 62 are used to treat digestive disorders, 41 for respiratory diseases, including asthma, bronchitis and coughs, 40 for skin diseases, 16 for diabetes, 36 for kidney and urinary tract disorders, 22 for nervous system disorders, six for enhance the body's immunity, two for haemorrhoids, five for fever, eight for heart disorders, five for infertility and impotence, six for treating several types of cancer, two for increasing breast milk production, five for losing weight, four for lowering cholesterol, and two for increasing weight, and six for anaemia, 15 for blood disorder, two anti-toxicant, 19 for arthritis and pain, one for typhoid disorder, eight for infections, six for gynaecological diseases, one for eye inflammation, two anti-toxicant and four for mouth sores. 
Many of them are still used today, especially those plants recommended for internal uses such as traditional medicinal teas, which mainly consist of remedies for obesity, weight loss, colds, colds, digestive disorders, abdominal pain, constipation and some skin diseases (Figure 4). Ethno-medicinal information about treating different diseases related to central region in Syria (Homs and Hama). Demographic data of informants: In total, 151 local inhabitants of 35 villages and 59 districts were selected based on their experience in traditional uses of plants. Table 1 shows the age and gender wise distribution. All of them were interrogated using semi-structured questionnaires. Generally in Syria, both genders were interested in herbal medicines. Age and gender distribution. Ethnobotanical and ethnomedicinal uses of plant species: The study area is considered one of the Syrian areas rich in medicinal plants. In the east of it there is a desert (badia), which is dominated by desert and thorny plants, and in the west of it, aromatic plants are spread in the mountains near the eastern coast of the Mediterranean Sea. It is a fact that people in some of these rural areas suffer from poverty, so they depend a lot on folk remedies, and folk healers in these areas provide their expertise at small costs, because the medicinal herbs are cheaper than chemical medicines, and most of the medicinal recipes are available around. However, a large portion of the uses of medicinal plants mentioned in the research are still under study. A total of 76 medicinal plant species (57.9% are wild and 42.1% are cultivated plants) belonging to 39 families are recorded in the present study; they are being used for a variety of purposes by native people. The detailed inventory is provided in Table 2, which includes botanical names, followed by local name, family and ethnobotanical uses. Ethnobotanical uses of plant species according to ethnomedicinal survey of central region in Syria (Homs and Hama). Ap: aerial parts; Bk: bark; Br: branches; Bu: buds; Bb: bulb; Ca: capitulum; Co: cones; Fl: flower; F: fruit; G: gum; Lx: latex; L: leaves; Ps: pedicles; Pe: peel; Ph: phloem; P: pods; Rs: resin; Rm: rhizome; R: root; Sd: seeds; S: stem; St: stylus; T: tubers; W: whole plant. C: cultivated plants, W: wild plants. Herbarium no. Botanical families of plants used: The most commonly mentioned family is Asteraceae (11.84%), followed by Lamiaceae (10.52%), then Rosaceae (7.89%) and Apiaceae (6.57%), Poaceae (5.26%), Anacardiaceae and Fabaceae (3.94%), Fagaceae, Liliaceae, Myrtaceae and Oleaceae (2.63%), then all the other families (1.31%) (Figure 1). Plant families commonly used in ethnomedicinal survey of central region in Syria (Homs and Hama). Use value of the plants: Medicinal use plants (UV) are utilized to find the most frequently used plant species in the study area. Its value ranged from 0.066 to 0.92 (Table 2). The calculated results of UV showed that Cichorium intybus L., Eucalyptus globulus Labill. was ranked first (UV = 0.92) followed by Fraxinus syriaca Boiss. (UV = 0.907), Olea europaea L. (UV = 0.9), then Allium sativum L. (UV = 0.894), Lepidium sativum L. (UV = 0.867), Coriandrum sativum L. (UV = 0.86), Glycyrrhiza glabra L. (UV = 0.854), Dittrichia viscosa (L.) Greuter (UV = 0.841), while the lowest value was found for Plumbago europaea L. (UV = 0.066). Medicinal parts of the plant used: The analysis of the ethnobotanical data showed that central region was best suited to the medicinal plant and rangeland. 
Medicinal parts of the plant used: The analysis of the ethnobotanical data showed that the central region is well suited to medicinal plants and rangeland. Ethnobotanical use categories showed that leaves were the most commonly used parts for making indigenous recipes (28.08%), followed by fruits (19.76%), seeds (15.6%), roots (10.4%), flowers (9.36%) and aerial parts (4.16%); the other plant parts are rarely used (Figure 2). Medicinal parts of the plants used for ethnomedicinal purposes in the study. Modes and conditions of medicine preparation: The analysis of the ethnobotanical data showed that most recipes were obtained from a single herb, but some recipes combined several herbs, and there is a famous local mixture called Damask tisane (zhourate Shamieh). The most commonly reported mode of TM preparation was decoction (30%), followed by infusion (23%) and then other methods such as fresh herbs, juice, cooked preparations, powder, vinegar and oils (47%) (Figure 3). According to the results, most of the plant preparations are taken orally (Table 2). Modes of ethnomedicine preparation in Homs and Hama. Ethnomedicinal information about the treatment of different diseases: The results of the questionnaires showed that 20% of the informants were diagnosed by a doctor, 45% were diagnosed by a conventional therapist and 35% self-diagnosed their diseases. Informants evaluated their treatment as follows: 58% relied on the disappearance of symptoms, 24% on the results of laboratory analysis and 18% adopted other methods such as chest radiography, the attending physician’s opinion and clinical observation of the improvement of skin diseases; some of them regarded psychological comfort during treatment as evidence of improvement. Among the studied plants, 62 are used to treat digestive disorders, 41 for respiratory diseases (including asthma, bronchitis and coughs), 40 for skin diseases, 16 for diabetes, 36 for kidney and urinary tract disorders, 22 for nervous system disorders, six for enhancing the body's immunity, two for haemorrhoids, five for fever, eight for heart disorders, five for infertility and impotence, six for treating several types of cancer, two for increasing breast milk production, five for losing weight, four for lowering cholesterol, two for increasing weight, six for anaemia, 15 for blood disorders, two as anti-toxicants, 19 for arthritis and pain, one for typhoid, eight for infections, six for gynaecological diseases, one for eye inflammation and four for mouth sores. Many of them are still used today, especially those plants recommended for internal uses such as traditional medicinal teas, which mainly consist of remedies for obesity and weight loss, colds, digestive disorders, abdominal pain, constipation and some skin diseases (Figure 4). Ethno-medicinal information about treating different diseases in the central region of Syria (Homs and Hama). Discussion: The use of TAM to treat various diseases has spread in Syria since ancient times. Traditional remedies are cost-effective, have fewer side effects and are more suitable for long-term use compared with chemically synthesized medicines. The ethnobotanical categories indicated that there is large use of medicinal herbs in the study area, most of them wild. There is increased exploitation of medicinal plants by the local population, collectors and dealers of herbal medicines, in line with the demand from the pharmaceutical industry. This has caused a sharp decrease in the occurrence and products of medicinal plants.
Grazing, deforestation by cutting down trees for heating, and fires were mainly responsible for the reduction of medicinal plants. That is why the government is working on developing strategies to conserve wild plant diversity. Some people collect medicinal plants for an income; they uproot and collect each part of the medicinal plants in a non-scientific way. To date, only a few articles have been devoted to the TM of Syria, such as a study of folk medicine in Aleppo Governorate (Alachkar et al. 2011), a study about the use of ‘Zahraa’ (a Syrian traditional tisane) (Carmona et al. 2005), and a third one on the medicinal plants of Golan (Said et al. 2002), which is an occupied Syrian territory. This research aspires to contribute useful information on conserving and sustaining the natural resources in the area. The perspectives in the questionnaire were compared with other ethnomedicine studies in the countries surrounding Syria, such as Lebanon (Taha et al. 2013), Jordan (Lev and Amar 2002; Al-Qura'n 2009), Palestine (Friedman et al. 1986; Kaileh et al. 2007), Iraq (Al-Douri 2000) and Turkey (Yeşilada et al. 1995; Sezik et al. 2001). Similarities in various traditional uses in Syria, Lebanon, Palestine and Jordan were observed. This is mainly due to the mutual history of these areas, previously called the Levantine Nations (Bilad al-Sham) (Lev 2002); there is some similarity in a smaller number of folk uses between Syria and Iraq, but a difference in the folk uses described between Syria and Turkey (Korkmaz et al. 2016; Yerebasan et al. 2020). We compared the folk uses of plants mentioned in related articles on ethnomedicine in these regions with the plants studied in our research to find out the extent of congruence or difference in the uses of similar plants. We did not record significant differences in phytomedicine consumption customs between interviewees of different religions; in general, phytomedicine consumption was often explained and justified by interviewees as family tradition. We also did not detect any gender-related differences in phytomedicine consumption or in the common traditional use of medicinal plants. The ethnomedicine data presented herein imply that medicinal plants are important as food and particularly as medicine (traditional healing) for various local people. While chemical medicinal treatments are becoming commonplace, traditional medications are still of huge importance in many rural, poor and remote places. This study will undoubtedly provide new data that could contribute to further pharmacological discoveries by identifying the active ingredients and their mechanism of effect, through additional pharmacological work to confirm the alleged biological activities of these plants; the possibility of developing new pharmaceutical formulas based on Syrian medicinal plants and their folk uses cannot be excluded, as the discovery of artemisinin from Artemisia annua based on ethnobotanical information (Acton and Klayman 1985) serves as evidence that it is possible to find new and effective medicines using data from TM. Limitations: There is insufficient information about the pharmacokinetic efficacy of the medicinal plant species in this study. These herbs have reportedly and traditionally been used as adjuvants to relieve and treat some diseases. Conclusions: Many of the uses of medicinal plants mentioned in Syria are still under study.
This study has been conducted with the aim of generating new concepts that could supplement pharmacological work and potentially lead to further pharmacological discoveries; that is, by identifying the active ingredients and their mechanisms of effect to confirm the alleged biological activities of these plants.
Background: Since ancient times, traditional Arabic medicine (TAM) has been used to treat various diseases in Syria. TAM remedies are cost-effective, have fewer side effects and are more suitable for long-term use compared with chemically synthesized medicines. In addition, the scientific importance of this survey lies in verifying and documenting these traditional medicines and their common uses. Methods: Information was collected from 2019 to 2021 from the cities of Homs and Hama and their villages, two governorates located in central Syria, through interviews with traditional practitioners called Attarin and many other people. Plant specimens were collected according to different references concerning the medicinal plants of Syria; to document the traditional uses of plants, at least two traditional healers and three other people were asked. Results: In this survey, we listed 76 medicinal plants belonging to 39 families in alphabetical order, with the parts used and the method of preparation according to their therapeutic use; they are used to treat 106 ailments. Conclusions: Many of the uses of medicinal plants mentioned in this survey are still under study. There is no doubt that this study will provide new data that could contribute to further pharmacological discoveries by identifying the active ingredients and their mechanism of effect through additional pharmacological work to confirm the alleged biological activities of these plants.
Introduction: Traditional medicine (TM), as defined by the World Health Organization (WHO), is the sum total of the knowledge, skills and practices based on the theories, beliefs and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness (WHO 2002a, 2002b). Some TM systems are supported by huge volumes of literature and records of theoretical concepts and practical skills; others pass down from generation to generation through verbal teaching. To date, in some parts of the world, the majority of the population continue to rely on their own TM to meet their primary health care needs. When adopted outside of its traditional culture, TM is often referred to as complementary and alternative medicine (CAM) (Che et al. 2017). Although modern medicine is currently available in many developing countries, large proportions of the population in these countries still rely to a large extent on traditional practitioners and medicinal plants for therapeutic purposes. TM is often the first choice for providing primary health care in developing countries, and the WHO estimates that more than 80% of healthcare needs in these countries are met by traditional health care practices, being the cheapest and most accessible (WHO 2002a, 2002b, 2004, 2005). The WHO pays special attention to TM and CAM. Resolution no. WHA13-6, issued by the World Health Assembly (WHA) in 2009, emphasized the need to update the global TM strategy (WHO 2009), so the WHO issued its strategy for traditional (folk) medicine 2014–2023 (WHO 2013). The United Nations Educational, Scientific and Cultural Organization (UNESCO) works to document the medical heritage of peoples within its documentation of the living intangible cultural heritage of peoples and civilizations, through the documentation of TM practices that were included in the UNESCO convention (2003), the convention on biological diversity (1992), the UNESCO universal declaration on cultural diversity (2001) and the United Nations declaration on the rights of indigenous peoples (2007). The meeting of the UNESCO International Bioethics Committee (IBC) working group on TM and its ethical effects, held in Paris (2010), stressed the need to conduct studies that illustrate the use of TM in various regions around the world, and its evaluation in clinical practice and in research and policy (IBC 2013). In 2017, the International Center for Information and Networks for Intangible Cultural Heritage in the Asia-Pacific region (ICHCAP), under the auspices of UNESCO, issued a book entitled Traditional Medicine, in which the Syria Trust for Development covered TM in Syria in Section VII (Falk 2017). Ethnobotany is the scientific study of the relationships between people and plants. The term was first coined in 1896 by the US botanist John Harshberger; however, the history of ethnobotany began long before that (Campbell et al. 2002; Amjad et al. 2015). It plays an important role in understanding the dynamic relationships between biological diversity and social and cultural systems (Husain et al. 2008; Amjad et al. 2013, 2015). Plants are essential for human beings as they provide food and medicines (Alam et al. 2011; Ahmad et al. 2012). Traditional Arabic medicine (TAM) is one of the famous traditional medical systems; it is occasionally called Unani medicine, Graeco-Arabic medicine, humoral medicine or Islamic medicine.
The subject of TM in Syria has received little attention in the literature, and very little is known about the traditional medicinal substances used nowadays by the Syrian population to treat the most common diseases. Throughout ancient times in Syria, as part of the Levantine Nations (Bilad al-Sham) and other lands in the region, humans used various natural materials as sources of medicines (Jaddouh 2004). In the western countryside of Hama there is a natural reserve for medicinal plants, called the Abu Qubais Protected Area in the Al-Ghab region (which protects the biodiversity rights of indigenous people and is affiliated with the general commission for Al-Ghab administration and development), where 509 plant species belonging to 72 families have been recorded (Al-Mahmoud and Al-Shater 2010). For these reasons, the present investigation gathered the uses of medicinal plants in the central region of Syria (Homs and Hama), as a supplement to a national survey, and documents the information concerning the uses of medicinal plants, which may serve as a basis of knowledge for more intensive scientific research. Conclusions: Many of the uses of medicinal plants mentioned in Syria are still under study. This study has been conducted with the aim of generating new concepts that could supplement pharmacological work and potentially lead to further pharmacological discoveries; that is, by identifying the active ingredients and their mechanisms of effect to confirm the alleged biological activities of these plants.
Background: Since ancient times, traditional Arabic medicine (TAM) has been used to treat various diseases in Syria. TAM remedies are cost-effective, have fewer side effects and are more suitable for long-term use compared with chemically synthesized medicines. In addition, the scientific importance of this survey lies in verifying and documenting these traditional medicines and their common uses. Methods: Information was collected from 2019 to 2021 from the cities of Homs and Hama and their villages, two governorates located in central Syria, through interviews with traditional practitioners called Attarin and many other people. Plant specimens were collected according to different references concerning the medicinal plants of Syria; to document the traditional uses of plants, at least two traditional healers and three other people were asked. Results: In this survey, we listed 76 medicinal plants belonging to 39 families in alphabetical order, with the parts used and the method of preparation according to their therapeutic use; they are used to treat 106 ailments. Conclusions: Many of the uses of medicinal plants mentioned in this survey are still under study. There is no doubt that this study will provide new data that could contribute to further pharmacological discoveries by identifying the active ingredients and their mechanism of effect through additional pharmacological work to confirm the alleged biological activities of these plants.
10,363
261
[ 349, 744, 139, 111, 80, 188, 63, 341, 92, 142, 101, 121, 348, 34 ]
19
[ "plants", "medicinal", "plant", "use", "region", "uv", "species", "parts", "diseases", "data" ]
[ "traditional practitioners", "met traditional health", "traditional health care", "entitled traditional medicine", "traditional medicine tm" ]
[CONTENT] Traditional Arabic medicine (TAM) | herbal medicine | Mediterranean | phytotherapy | medicinal plants | folk uses | ethnobotanical | ethnopharmacology [SUMMARY]
[CONTENT] Traditional Arabic medicine (TAM) | herbal medicine | Mediterranean | phytotherapy | medicinal plants | folk uses | ethnobotanical | ethnopharmacology [SUMMARY]
[CONTENT] Traditional Arabic medicine (TAM) | herbal medicine | Mediterranean | phytotherapy | medicinal plants | folk uses | ethnobotanical | ethnopharmacology [SUMMARY]
[CONTENT] Traditional Arabic medicine (TAM) | herbal medicine | Mediterranean | phytotherapy | medicinal plants | folk uses | ethnobotanical | ethnopharmacology [SUMMARY]
[CONTENT] Traditional Arabic medicine (TAM) | herbal medicine | Mediterranean | phytotherapy | medicinal plants | folk uses | ethnobotanical | ethnopharmacology [SUMMARY]
[CONTENT] Traditional Arabic medicine (TAM) | herbal medicine | Mediterranean | phytotherapy | medicinal plants | folk uses | ethnobotanical | ethnopharmacology [SUMMARY]
[CONTENT] Adult | Aged | Ethnobotany | Ethnopharmacology | Female | Humans | Male | Medicine, Traditional | Middle Aged | Plants, Medicinal | Surveys and Questionnaires | Syria [SUMMARY]
[CONTENT] Adult | Aged | Ethnobotany | Ethnopharmacology | Female | Humans | Male | Medicine, Traditional | Middle Aged | Plants, Medicinal | Surveys and Questionnaires | Syria [SUMMARY]
[CONTENT] Adult | Aged | Ethnobotany | Ethnopharmacology | Female | Humans | Male | Medicine, Traditional | Middle Aged | Plants, Medicinal | Surveys and Questionnaires | Syria [SUMMARY]
[CONTENT] Adult | Aged | Ethnobotany | Ethnopharmacology | Female | Humans | Male | Medicine, Traditional | Middle Aged | Plants, Medicinal | Surveys and Questionnaires | Syria [SUMMARY]
[CONTENT] Adult | Aged | Ethnobotany | Ethnopharmacology | Female | Humans | Male | Medicine, Traditional | Middle Aged | Plants, Medicinal | Surveys and Questionnaires | Syria [SUMMARY]
[CONTENT] Adult | Aged | Ethnobotany | Ethnopharmacology | Female | Humans | Male | Medicine, Traditional | Middle Aged | Plants, Medicinal | Surveys and Questionnaires | Syria [SUMMARY]
[CONTENT] traditional practitioners | met traditional health | traditional health care | entitled traditional medicine | traditional medicine tm [SUMMARY]
[CONTENT] traditional practitioners | met traditional health | traditional health care | entitled traditional medicine | traditional medicine tm [SUMMARY]
[CONTENT] traditional practitioners | met traditional health | traditional health care | entitled traditional medicine | traditional medicine tm [SUMMARY]
[CONTENT] traditional practitioners | met traditional health | traditional health care | entitled traditional medicine | traditional medicine tm [SUMMARY]
[CONTENT] traditional practitioners | met traditional health | traditional health care | entitled traditional medicine | traditional medicine tm [SUMMARY]
[CONTENT] traditional practitioners | met traditional health | traditional health care | entitled traditional medicine | traditional medicine tm [SUMMARY]
[CONTENT] plants | medicinal | plant | use | region | uv | species | parts | diseases | data [SUMMARY]
[CONTENT] plants | medicinal | plant | use | region | uv | species | parts | diseases | data [SUMMARY]
[CONTENT] plants | medicinal | plant | use | region | uv | species | parts | diseases | data [SUMMARY]
[CONTENT] plants | medicinal | plant | use | region | uv | species | parts | diseases | data [SUMMARY]
[CONTENT] plants | medicinal | plant | use | region | uv | species | parts | diseases | data [SUMMARY]
[CONTENT] plants | medicinal | plant | use | region | uv | species | parts | diseases | data [SUMMARY]
[CONTENT] tm | medicine | traditional | health | unesco | cultural | countries | world | health care | peoples [SUMMARY]
[CONTENT] plants | plants use | use | region | parts | data | plants use region | use region | species | collect [SUMMARY]
[CONTENT] uv | plants | diseases | medicinal | disorders | showed | plant | results | parts | ethnobotanical [SUMMARY]
[CONTENT] pharmacological | pharmacological work potentially | new concepts supplement | plants mentioned syria | study conducted | study conducted aim | study conducted aim generate | study study | study study conducted | study study conducted aim [SUMMARY]
[CONTENT] plants | uv | medicinal | species | plant | parts | use | diseases | study | region [SUMMARY]
[CONTENT] plants | uv | medicinal | species | plant | parts | use | diseases | study | region [SUMMARY]
[CONTENT] Arabic | TAM | Syria ||| ||| [SUMMARY]
[CONTENT] 2019 | 2021 | Homs | Hama | two | Syria | Attarin ||| Syria | at least two | three [SUMMARY]
[CONTENT] 76 | 39 | 106 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Arabic | TAM | Syria ||| ||| ||| 2019 | 2021 | Homs | Hama | two | Syria | Attarin ||| Syria | at least two | three ||| ||| 76 | 39 | 106 ||| ||| [SUMMARY]
[CONTENT] Arabic | TAM | Syria ||| ||| ||| 2019 | 2021 | Homs | Hama | two | Syria | Attarin ||| Syria | at least two | three ||| ||| 76 | 39 | 106 ||| ||| [SUMMARY]
Mental health and suicidal ideation in US military veterans with histories of COVID-19 infection.
34035155
There have been reports of increased prevalence of psychiatric conditions in non-veteran survivors of COVID-19. To date, however, no known study has examined the prevalence, risk and protective factors of psychiatric conditions among US military veterans who survived COVID-19.
INTRODUCTION
Data were analysed from the 2019 to 2020 National Health and Resilience in Veterans Study, which surveyed a nationally representative, prospective cohort of 3078 US veterans. Prepandemic and 1-year peripandemic risk and protective factors associated with positive screens for peripandemic internalising (major depressive, generalised anxiety and/or posttraumatic stress disorders) and externalising psychiatric disorders (alcohol and/or drug use disorders) and suicidal ideation were examined using bivariate and multivariate logistic regression analyses.
METHODS
A total of 233 veterans (8.6%) reported having been infected with COVID-19. Relative to veterans who were not infected, veterans who were infected were more likely to screen positive for internalising disorders (20.5% vs 13.9%, p=0.005), externalising disorders (23.2% vs 14.8%, p=0.001) and current suicidal ideation (12.0% vs 7.6%, p=0.015) at peripandemic. Multivariable analyses revealed that greater prepandemic psychiatric symptom severity and COVID-related stressors were the strongest independent predictors of peripandemic internalising disorders, while prepandemic trauma burden was protective. Prepandemic suicidal ideation, greater loneliness and lower household income were the strongest independent predictors of peripandemic suicidal ideation, whereas prepandemic community integration was protective.
RESULTS
Psychiatric symptoms and suicidal ideation are prevalent in veterans who have survived COVID-19. Veterans with greater prepandemic psychiatric and substance use problems, COVID-related stressors and fewer psychosocial resources may be at increased risk of these outcomes.
CONCLUSION
[ "COVID-19", "Depressive Disorder, Major", "Humans", "Mental Health", "Prospective Studies", "SARS-CoV-2", "Suicidal Ideation", "Veterans" ]
8154290
Introduction
The COVID-19 pandemic has been linked to increased social isolation, economic recession and psychological distress.1 Numerous studies have reported a substantial increase in the prevalence of depression and anxiety in the general public during the pandemic.2 3 Previous psychiatric diagnoses, COVID-related stressors, such as worries about being infected with COVID-19, financial stressors and actual COVID-19 infection have been proposed as possible risk factors of both the development and worsening of mental illness during the pandemic.1 Veterans may be a highly vulnerable group to the negative mental health effects of the pandemic due to their older age, and higher prepandemic prevalence of physical and psychiatric risk factors and conditions relative to the general population.4 5i Further, previous studies on veterans have found that stressful events such as combat deployments were associated with increased rates of depression and substance use disorders in veterans.6 As of April 2021, in the USA, over 31 million people have been infected with COVID-19 and over half a million people have died of COVID-19-related complications.7 Previous studies on survivors of COVID-19 have reported high prevalence of mental disorders, including depression, anxiety and post-traumatic stress disorder (PTSD8–10). Thus, there has been increasing attention in closely monitoring for psychiatric symptoms and functioning in those who have been infected with COVID-19 in order to mitigate distress and risk of suicide. However, due to the high burden of COVID-19 infections in the USA and the limited capacity of mental health treatment services, it is challenging to monitor everyone who has been infected. These challenges have been further complicated by infection-control measures implemented by almost all mental health service providers during the pandemic.11 One potentially effective way of informing the allocation of limited mental health treatment resources may be to identify subgroups that are at high risk of developing or experiencing a worsening of psychiatric conditions after COVID-19 infection. Identifying potential risk and protective factors of psychiatric conditions among COVID-19 survivors in populations with elevated risk, such as US military veterans, may help inform more targeted population-based prevention, screening and intervention strategies. Toward this end, we analysed data from the National Health and Resilience in Veterans Study (NHRVS12) to examine (1) the prevalence of psychiatric conditions in US veterans who have and have not been infected with COVID-19, and (2) prepandemic and peripandemic risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection.
Methods
Participants Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses.
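As a rough illustration of how poststratification weights like those described above enter prevalence estimates, the sketch below computes a weighted proportion with numpy; the outcome values and weights are simulated placeholders, not the NHRVS weighting scheme.

```python
# Minimal sketch of applying poststratification weights when estimating a
# prevalence. Weights and outcomes are hypothetical; the actual NHRVS weights
# come from Census-based demographic benchmarks not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n = 3078
screened_positive = rng.integers(0, 2, size=n)   # 0/1 outcome, hypothetical
weights = rng.uniform(0.5, 2.0, size=n)          # per-respondent weights, hypothetical

unweighted = screened_positive.mean()
weighted = np.average(screened_positive, weights=weights)
print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```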
Assessments Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection. Data analysis χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in three steps. First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13
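A minimal sketch of the modelling steps just described, using statsmodels on simulated data; the predictor names are placeholders, and the "importance" step shown is a simplified leave-one-out pseudo-R² comparison, not the specific variance-partitioning method cited in the study.

```python
# Sketch: multivariable logistic regression plus a crude importance comparison,
# on simulated data. Predictor names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 233  # veterans reporting COVID-19 infection
df = pd.DataFrame({
    "psych_severity": rng.normal(size=n),
    "loneliness": rng.normal(size=n),
    "social_support": rng.normal(size=n),
})
logits = 0.8 * df["psych_severity"] + 0.5 * df["loneliness"] - 0.4 * df["social_support"]
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Multivariable logistic regression on predictors assumed to have passed
# the p<0.05 bivariate screen.
predictors = ["psych_severity", "loneliness", "social_support"]
X = sm.add_constant(df[predictors])
full = sm.Logit(df["outcome"], X).fit(disp=False)
print(full.summary2().tables[1][["Coef.", "P>|z|"]])

# Crude importance: drop each predictor and record the fall in pseudo-R^2.
for col in predictors:
    reduced_X = sm.add_constant(df[predictors].drop(columns=col))
    reduced = sm.Logit(df["outcome"], reduced_X).fit(disp=False)
    print(col, round(full.prsquared - reduced.prsquared, 4))
```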
Results
The final sample included 3078 veterans who completed both prepandemic and 1-year peripandemic assessments. The average age of the sample was 62.2 years old (SD=15.7, range 22–99); the majority was male (90.2%) and Caucasian (78.1%); and 35.0% were combat veterans. A total of 233 veterans (8.6%) reported that they had been infected with COVID-19. The majority (70.5%) reported mild-to-moderate symptoms; 6.3% reported requiring a visit to the emergency room; 4.5% reported severe symptoms with serious concerns about not surviving; 1.9% reported being admitted to an intensive care unit; 1.2% reported requiring an overnight stay at a hospital, and 0.8% reported requiring intubation/mechanical ventilation; the remaining 14.8% of the sample did not provide details about the severity of their illness. Prevalence of peripandemic psychiatric conditions of veterans who were infected by COVID-19 Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0%, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant.
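These group comparisons can be approximately reproduced from the reported percentages. The sketch below rebuilds the 2x2 table for internalising disorders, with counts rounded from 20.5% of 233 infected and 13.9% of the 2845 non-infected veterans (so they are approximations), and runs an uncorrected chi-square test with scipy.

```python
# Sketch: chi-square comparison of positive-screen rates between infected and
# non-infected veterans. Counts are reconstructed from reported percentages,
# so the statistic only roughly matches the value reported in the text.
from scipy.stats import chi2_contingency

infected_pos, infected_n = 48, 233             # ~20.5% screened positive
not_infected_pos, not_infected_n = 395, 2845   # ~13.9% screened positive

table = [
    [infected_pos, infected_n - infected_pos],
    [not_infected_pos, not_infected_n - not_infected_pos],
]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # roughly chi2 ≈ 7.9, p ≈ 0.005
```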
Correlates of peripandemic psychiatric conditions In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes. Correlates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection *P<0.05, **P<0.01, ***P<0.001. r, Pearson correlation coefficient. Multivariable logistic regression models Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder. Prepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively. Prepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance.
null
null
[ "Participants", "Assessments", "Data analysis", "Prevalence of peripandemic psychiatric conditions of veterans who were infected by COVID-19", "Correlates of peripandemic psychiatric conditions", "Multivariable logistic regression models" ]
[ "Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses.", "Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection.\n\n\n", "χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13", "Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). 
With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant.", "In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes.\nCorrelates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection\n*P<0.05, **P<0.01, ***P<0.001.\nr, Pearson correlation coefficient.", "Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder.\nPrepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively.\nPrepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance." ]
[ null, null, null, null, null, null ]
[ "Introduction", "Methods", "Participants", "Assessments", "Data analysis", "Results", "Prevalence of peripandemic psychiatric conditions of veterans who were infected by COVID-19", "Correlates of peripandemic psychiatric conditions", "Multivariable logistic regression models", "Discussion" ]
[ "The COVID-19 pandemic has been linked to increased social isolation, economic recession and psychological distress.1 Numerous studies have reported a substantial increase in the prevalence of depression and anxiety in the general public during the pandemic.2 3 Previous psychiatric diagnoses, COVID-related stressors, such as worries about being infected with COVID-19, financial stressors and actual COVID-19 infection have been proposed as possible risk factors of both the development and worsening of mental illness during the pandemic.1\nVeterans may be a highly vulnerable group to the negative mental health effects of the pandemic due to their older age, and higher prepandemic prevalence of physical and psychiatric risk factors and conditions relative to the general population.4 5i Further, previous studies on veterans have found that stressful events such as combat deployments were associated with increased rates of depression and substance use disorders in veterans.6\nAs of April 2021, in the USA, over 31 million people have been infected with COVID-19 and over half a million people have died of COVID-19-related complications.7 Previous studies on survivors of COVID-19 have reported high prevalence of mental disorders, including depression, anxiety and post-traumatic stress disorder (PTSD8–10). Thus, there has been increasing attention in closely monitoring for psychiatric symptoms and functioning in those who have been infected with COVID-19 in order to mitigate distress and risk of suicide. However, due to the high burden of COVID-19 infections in the USA and the limited capacity of mental health treatment services, it is challenging to monitor everyone who has been infected. These challenges have been further complicated by infection-control measures implemented by almost all mental health service providers during the pandemic.11\nOne potentially effective way of informing the allocation of limited mental health treatment resources may be to identify subgroups that are at high risk of developing or experiencing a worsening of psychiatric conditions after COVID-19 infection. Identifying potential risk and protective factors of psychiatric conditions among COVID-19 survivors in populations with elevated risk, such as US military veterans, may help inform more targeted population-based prevention, screening and intervention strategies.\nToward this end, we analysed data from the National Health and Resilience in Veterans Study (NHRVS12) to examine (1) the prevalence of psychiatric conditions in US veterans who have and have not been infected with COVID-19, and (2) prepandemic and peripandemic risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection.", "Participants Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. 
Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses.\nData were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses.\nAssessments Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection.\n\n\n\nOnline supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection.\n\n\n\nData analysis χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. 
These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13\nχ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13", "Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses.", "Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection.\n\n\n", "χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. 
First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13", "The final sample included 3078 veterans who completed both prepandemic and 1-year peripandemic assessments. The average age of the sample was 62.2 years old (SD=15.7, range 22–99); the majority was male (90.2%) and Caucasian (78.1%); and 35.0% were combat veterans. A total of 233 veterans (8.6%) reported that they had been infected with COVID-19. The majority (70.5%) reported mild-to-moderate symptoms; 6.3% reported requiring a visit to the emergency room; 4.5% reported severe symptoms with serious concerns about not surviving; 1.9% reported being admitted to an intensive care unit; 1.2% reported requiring an overnight stay at a hospital, and 0.8% reported requiring intubation/mechanical ventilation; the remaining 14.8% of the sample did not provide details about the severity of their illness.\nPrevalence of peripandemic psychiatric conditions of veterans who were infected by COVID-19 Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant.\nRelative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). 
With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant.\nCorrelates of peripandemic psychiatric conditions In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes.\nCorrelates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection\n*P<0.05, **P<0.01, ***P<0.001.\nr, Pearson correlation coefficient.\nIn table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes.\nCorrelates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection\n*P<0.05, **P<0.01, ***P<0.001.\nr, Pearson correlation coefficient.\nMultivariable logistic regression models Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder.\nPrepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively.\nPrepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. 
Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance.\nMultivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder.\nPrepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively.\nPrepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance.", "Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant.", "In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. 
With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes.\nCorrelates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection\n*P<0.05, **P<0.01, ***P<0.001.\nr, Pearson correlation coefficient.", "Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder.\nPrepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively.\nPrepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance.", "This study compared the prevalence of peripandemic psychiatric disorders and suicidal ideation between US veterans with and without histories of COVID-19 infection, and examined prepandemic and peripandemic risk and protective factors for these outcomes among COVID-19 survivors. Results revealed that veterans who were infected with COVID-19 had a significantly higher prevalence of positive screens for psychiatric conditions, as well as suicidal ideation relative to those who were not infected. Prepandemic greater psychiatric symptom severity and COVID-related stressors were the strongest predictors of peripandemic internalising psychiatric disorders in veterans who survived COVID-19, while previous trauma exposure was found to be protective. Prepandemic suicidal ideation, loneliness and impulsivity were the strongest independent risk factors of peripandemic suicidal ideation, while greater prepandemic community integration and higher annual household income were protective.\nResults of this study are largely consistent with previous studies of non-representative patient samples who were hospitalised for COVID-19 infection, which observed a higher prevalence of psychiatric conditions, including depression, anxiety and PTSD.8 It is possible that being infected with COVID-19 may have aggravated psychological distress during the pandemic. 
Further, it is also plausible that those who had prepandemic psychiatric conditions and more psychosocial difficulties may have been at higher risk of infection given less resources and protective measures to avoid infection.9 Of note, the strongest predictors of peripandemic internalising disorders and externalising disorders in those who survived COVID-19 were prepandemic psychiatric symptom and substance use severity, respectively. This finding suggests that veterans with greater severity of prepandemic psychiatric distress and substance use may sensitise veterans to the deleterious mental health effects of COVID-19 infection.\nA growing body of literature suggests that COVID-related stressors may contribute to risk of pandemic-related psychiatric distress.14 15 Results of the current study extend this work to suggest that COVID-related worries, financial distress and social restriction stress were independently associated with increased risk of screening positive for any internalising psychiatric disorders in veterans who survived COVID-19 infection.\nPrevious research has identified previous suicidal behaviour as a strong predictor of future suicidal behaviour.16 The results of this study are consistent with these findings, as prepandemic suicidal ideation was the strongest predictor of peripandemic suicidal ideation. The negative impact of loneliness on mental health outcomes, including suicidal ideation, has been well established.17 In addition, due to the increased social restrictions imposed during the pandemic, increasing loneliness has been suggested as a possible contributor to worsening mental health.1 In this study, greater prepandemic loneliness was the second strongest predictor of peripandemic suicidal ideation. This finding suggests that loneliness may be a clinically relevant intervention target to help mitigate risk of suicidal behaviour in veterans during the pandemic. Moreover, impulsivity was a strong predictor of peripandemic suicidal ideation, which is consistent with previous research that reported strong positive association between impulsivity and suicidal behaviour.18 Of note, all of these factors were assessed prior to the pandemic, suggesting that they may represent increased vulnerability for suicidal ideation during the pandemic in US military veterans who survived COVID-19 infection.\nGreater prepandemic community integration was protective against peripandemic suicidal ideation, even after controlling for other variables. This finding accords with previous research describing the protective effects of community integration in promoting psychological resilience and mitigation of risk for suicidal behaviour.19 20 Further, higher prepandemic household income was protective against risk of peripandemic suicidal ideation. This finding, which is consistent with the conservation of resources theory,21 suggests that fewer prepandemic financial resources may have led to increased peripandemic distress and suicidal ideation. This finding underscores the importance of assessing, monitoring and ameliorating financial distress brought on by the pandemic in veterans with histories of COVID-19 infection.\nGreater trauma burden was identified as a protective factor to internalising psychiatric disorders during the pandemic. 
While previous trauma is a well known risk factor for psychiatric distress,22 previous exposure to traumatic events may also have ‘stress inoculating’ effects and help protect against risk of adverse mental health outcomes.23 Specifically, exposure to prior traumas may help promote engagement in adaptive coping strategies and may give rise to positive psychological changes (eg, greater sense of personal strength)24 25 that may in turn help foster resilience to new stressors, such as COVID-19 infection.\nFurther, it is noteworthy that perceived social support was found to be associated with increased risk of both internalising psychiatric disorders and suicidal ideation during the pandemic. One interpretation of this finding is that greater prepandemic severity of psychiatric distress may have increased support-seeking behaviours during the pandemic. Further research is needed to elucidate prospective associations between aspects of social support and risk for pandemic-related psychiatric conditions, particularly given the imposition of social restrictions during the pandemic.\nThis study has several limitations. First, history of COVID-19 infection was based on self-report and may be subject to response bias, as well as access to and use of testing for COVID-19 infection. Thus, it is possible that the prevalence of self-reported COVID-19 infection may not capture the accurate prevalence of infections in US veterans. Second, due to the survey design of this study, we used screening instruments to assess psychiatric conditions. Further research using diagnostic instruments such as structured clinical interviews are needed to replicate the results reported herein. Third, while nationally representative, our sample was composed entirely of US military veterans who are predominantly older, male and white, which makes it difficult to generalise results to non-veteran and more diverse veteran populations. Fourth, although this was a prospective cohort study, given the cross-sectional nature of the surveys and limited follow-up time frame, it is difficult to draw conclusions regarding temporal/causal associations between independent variables, and peripandemic psychiatric disorders and suicidal ideation.\nDespite these limitations, this study provides the first known nationally representative data on the prevalence and prepandemic and pandemic-related risk and protective factors of peripandemic psychiatric outcomes in US military veterans who survived COVID-19 infection. Given the enormous toll of COVID-19 infections in the USA,4 this information may help inform targeted population-based prevention, monitoring and intervention strategies. For example, US veterans who are COVID-19 survivors may be at elevated risk of peripandemic psychiatric conditions, and those with less socioeconomic resources and greater symptom severity at baseline may require higher clinical attention. Further, veterans who endorse greater COVID-related stressors may be at increased risk of peripandemic psychiatric disorders and suicidal ideation. Further research is needed to replicate and extend these results to other populations who are also at higher risk of psychiatric conditions; identify mechanisms leading to increased risk of psychiatric conditions during the pandemic; and assess the efficacy of interventions targeting such evidence-based risk and protective factors in mitigating risk of psychiatric conditions in veterans and other populations." ]
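Editor's note: the Data analysis description above specifies χ2 comparisons of prevalence between infected and non-infected veterans, with poststratification weights applied in inferential analyses. A minimal sketch of such a comparison is given below; the 2×2 counts are back-calculated from the reported sample sizes and percentages for current suicidal ideation, so this unweighted statistic only approximates the published, survey-weighted value.

```python
# Sketch of the chi-square prevalence comparison described in the Data analysis text.
# Counts are reconstructed from the reported figures (233 infected vs 2845 non-infected
# veterans; 12.0% vs 7.6% screening positive for current suicidal ideation), so the
# result approximates, but does not reproduce, the published weighted statistic.
import numpy as np
from scipy.stats import chi2_contingency

n_infected, n_not_infected = 233, 3078 - 233
si_infected = round(0.120 * n_infected)          # ~28 positive screens
si_not_infected = round(0.076 * n_not_infected)  # ~216 positive screens

table = np.array([
    [si_infected, n_infected - si_infected],
    [si_not_infected, n_not_infected - si_not_infected],
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # approximately chi2 = 5.78, p = 0.016

# Poststratification weights (described in the Methods) would enter at this point,
# e.g. via a design-based (Rao-Scott) chi-square from a survey analysis package.
```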
[ "intro", "methods", null, null, null, "results", null, null, null, "discussion" ]
[ "epidemiology", "mental health", "depression & mood disorders", "substance misuse", "suicide & self-harm", "adult psychiatry" ]
Introduction: The COVID-19 pandemic has been linked to increased social isolation, economic recession and psychological distress.1 Numerous studies have reported a substantial increase in the prevalence of depression and anxiety in the general public during the pandemic.2 3 Previous psychiatric diagnoses, COVID-related stressors, such as worries about being infected with COVID-19, financial stressors and actual COVID-19 infection have been proposed as possible risk factors of both the development and worsening of mental illness during the pandemic.1 Veterans may be a highly vulnerable group to the negative mental health effects of the pandemic due to their older age, and higher prepandemic prevalence of physical and psychiatric risk factors and conditions relative to the general population.4 5i Further, previous studies on veterans have found that stressful events such as combat deployments were associated with increased rates of depression and substance use disorders in veterans.6 As of April 2021, in the USA, over 31 million people have been infected with COVID-19 and over half a million people have died of COVID-19-related complications.7 Previous studies on survivors of COVID-19 have reported high prevalence of mental disorders, including depression, anxiety and post-traumatic stress disorder (PTSD8–10). Thus, there has been increasing attention in closely monitoring for psychiatric symptoms and functioning in those who have been infected with COVID-19 in order to mitigate distress and risk of suicide. However, due to the high burden of COVID-19 infections in the USA and the limited capacity of mental health treatment services, it is challenging to monitor everyone who has been infected. These challenges have been further complicated by infection-control measures implemented by almost all mental health service providers during the pandemic.11 One potentially effective way of informing the allocation of limited mental health treatment resources may be to identify subgroups that are at high risk of developing or experiencing a worsening of psychiatric conditions after COVID-19 infection. Identifying potential risk and protective factors of psychiatric conditions among COVID-19 survivors in populations with elevated risk, such as US military veterans, may help inform more targeted population-based prevention, screening and intervention strategies. Toward this end, we analysed data from the National Health and Resilience in Veterans Study (NHRVS12) to examine (1) the prevalence of psychiatric conditions in US veterans who have and have not been infected with COVID-19, and (2) prepandemic and peripandemic risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection. Methods: Participants Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. 
Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses. Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses. Assessments Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection. Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection. Data analysis χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. 
These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13 χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13 Participants: Data were analysed from the 2019 to 2020 NHRVS, which surveyed a nationally representative, prospective sample of US military veterans. A total of 4069 veterans completed the prepandemic survey (median completion date: 21 November 2019) prior to the first documented COVID-19 cases in the USA (19 January 2020), and 3078 (75.6%) completed a 1-year peripandemic follow-up assessment (median completion date 14 November 2020). Both surveys were online, 45–60 min, self-report surveys. All participants received 15 000 points (equivalent to $15) for participating in the prepandemic survey, and 20 000 points (equivalent to $20) for participating in the peripandemic survey. Details of the study have been described previously.12 Briefly, the NHRVS sample was drawn from KnowledgePanel, a survey research panel of more than 50 000 US households maintained by Ipsos, a survey research firm. To promote generalisability of the results to the US veteran population, poststratification weights based on the demographic distribution of veterans in the contemporaneous US Census Current Population Survey Veterans Supplement were applied in inferential analyses. Assessments: Online supplemental table 1 presents a detailed description of measures used to assess potential risk and protective factors of peripandemic psychiatric conditions in veterans who survived COVID-19 infection. Data analysis: χ2 analyses were conducted to compare the prevalence of psychiatric disorders and suicidal ideation between veterans who were and were not infected with COVID-19. Subsequent analyses focused on veterans who self-reported a history of COVID-19 infection at the peripandemic assessment. Analyses in this subcohort proceeded in two steps. 
First, we conducted Pearson correlations between participant characteristics and peripandemic internalising psychiatric disorders (ie, major depressive disorder, generalised anxiety disorder and/or pandemic-related stress symptoms); externalising disorders (ie, alcohol and/or drug use disorder); and suicidal ideation; internalising and externalising disorders were grouped together to increase statistical power. Second, we conducted multivariable binary logistic regression analyses to identify factors that independently predicted the aforementioned three categories of peripandemic psychiatric conditions. Variables that differed at the p<0.05 level in bivariate analyses for each set of comparisons were entered into each regression analysis. Third, we conducted relative importance analyses to determine the relative contribution of each significant variable in predicting positive screens for the three outcomes. These analyses partitioned the explained variance in the study outcome variables that is explained by each independent variable while simultaneously accounting for intercorrelations among these independent variables.13 Results: The final sample included 3078 veterans who completed both prepandemic and 1-year peripandemic assessments. The average age of the sample was 62.2 years old (SD=15.7, range 22–99); the majority was male (90.2%) and Caucasian (78.1%); and 35.0% were combat veterans. A total of 233 veterans (8.6%) reported that they had been infected with COVID-19. The majority (70.5%) reported mild-to-moderate symptoms; 6.3% reported requiring a visit to the emergency room; 4.5% reported severe symptoms with serious concerns about not surviving; 1.9% reported being admitted to an intensive care unit; 1.2% reported requiring an overnight stay at a hospital, and 0.8% reported requiring intubation/mechanical ventilation; the remaining 14.8% of the sample did not provide details about the severity of their illness. Prevalence of peripandemic psychiatric conditions of veterans who were infected by COVID-19 Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant. Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). 
With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant. Correlates of peripandemic psychiatric conditions In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes. Correlates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection *P<0.05, **P<0.01, ***P<0.001. r, Pearson correlation coefficient. In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes. Correlates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection *P<0.05, **P<0.01, ***P<0.001. r, Pearson correlation coefficient. Multivariable logistic regression models Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder. Prepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively. Prepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. 
Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance. Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder. Prepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively. Prepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance. Prevalence of peripandemic psychiatric conditions of veterans who were infected by COVID-19: Relative to veterans who were not infected with COVID-19, veterans who were infected were significantly more likely to screen positive at the peripandemic assessment for any internalising psychiatric disorder (20.5% vs 13.9%, χ2=7.85, p=0.005), alcohol or drug use disorder (23.2% vs 14.8%, χ2=11.51, p=0.001) and current suicidal ideation (12.0% vs 7.6%, χ2=5.92, p=0.015). With regard to specific disorders, veterans with histories of COVID-19 infection were more likely to screen positive for generalised anxiety disorder (14.3% vs 9.1%, χ2=6.99, p=0.008), pandemic-related stress symptoms (7.3% vs 4.0 %, χ2=5.99, p=0.014), alcohol use disorder (14.8% vs 9.2%, χ2=8.03, p=0.005) and drug use disorder (13.9% vs 7.8%, χ2=10.27, p=0.001) relative to veterans who were not infected. The differences in major depressive disorder (11.8% vs 8.6%, χ2=2.93, p=0.087) and past-year suicide plan/attempt (4.9% vs 4.6%, χ2=1.04, p=0.59) were not significant. Correlates of peripandemic psychiatric conditions: In table 1, columns 2–4 show bivariate correlates of peripandemic positive screens for psychiatric conditions. 
Prepandemic psychiatric symptom severity, alcohol use problem severity, psychosocial difficulties and loneliness were positively correlated with all three outcomes, while perceived social support and protective psychosocial characteristics were negatively correlated with these outcomes. With regard to COVID-related variables, pandemic-related stress symptoms were positively correlated with all three outcomes. Correlates of positive screens for peripandemic psychiatric outcomes among US veterans with histories of COVID-19 infection *P<0.05, **P<0.01, ***P<0.001. r, Pearson correlation coefficient. Multivariable logistic regression models: Multivariable logistic regression analyses (table 1, columns 5–7) revealed that greater prepandemic psychiatric symptom severity, non-prescription drug use, adverse childhood experiences, perceived social support, and COVID-related worries, social restriction stress and financial stress were independent risk factors for peripandemic internalising psychiatric disorders, whereas greater trauma burden was protective. Relative importance analysis indicated that prepandemic psychiatric symptom severity (34.9%), trauma burden (18.6%), COVID-related worries (14.9%), financial stress (14.5%) and social restriction stress (11.6%) explained the majority of variance in this outcome, with drug use severity (5.5%) explaining the remainder. Prepandemic severity of alcohol and non-prescription drug use were the only independent risk factors for peripandemic externalising psychiatric disorders, each explaining 48.0% and 52.0% of the variance, respectively. Prepandemic alcohol use severity, past-year suicidal ideation, loneliness, impulsivity, perceived social support and having a household member infected with COVID-19 were independent risk factors for peripandemic suicidal ideation, whereas greater protective psychosocial characteristics (specifically greater community integration: OR=0.51, 95% CI 0.34 to 0.79) and higher household income were protective. Relative importance analysis showed that prepandemic suicidal ideation (29.3%), loneliness (25.2%), household income (12.0%), community integration (10.8%) and impulsivity (9.4%) explained the majority of variance in this outcome, with household member COVID-19 infection (5.0%), prepandemic social support (4.9%) and alcohol use severity (3.4%) explaining the remaining variance. Discussion: This study compared the prevalence of peripandemic psychiatric disorders and suicidal ideation between US veterans with and without histories of COVID-19 infection, and examined prepandemic and peripandemic risk and protective factors for these outcomes among COVID-19 survivors. Results revealed that veterans who were infected with COVID-19 had a significantly higher prevalence of positive screens for psychiatric conditions, as well as suicidal ideation relative to those who were not infected. Prepandemic greater psychiatric symptom severity and COVID-related stressors were the strongest predictors of peripandemic internalising psychiatric disorders in veterans who survived COVID-19, while previous trauma exposure was found to be protective. Prepandemic suicidal ideation, loneliness and impulsivity were the strongest independent risk factors of peripandemic suicidal ideation, while greater prepandemic community integration and higher annual household income were protective. 
Results of this study are largely consistent with previous studies of non-representative patient samples who were hospitalised for COVID-19 infection, which observed a higher prevalence of psychiatric conditions, including depression, anxiety and PTSD.8 It is possible that being infected with COVID-19 may have aggravated psychological distress during the pandemic. Further, it is also plausible that those who had prepandemic psychiatric conditions and more psychosocial difficulties may have been at higher risk of infection given less resources and protective measures to avoid infection.9 Of note, the strongest predictors of peripandemic internalising disorders and externalising disorders in those who survived COVID-19 were prepandemic psychiatric symptom and substance use severity, respectively. This finding suggests that veterans with greater severity of prepandemic psychiatric distress and substance use may sensitise veterans to the deleterious mental health effects of COVID-19 infection. A growing body of literature suggests that COVID-related stressors may contribute to risk of pandemic-related psychiatric distress.14 15 Results of the current study extend this work to suggest that COVID-related worries, financial distress and social restriction stress were independently associated with increased risk of screening positive for any internalising psychiatric disorders in veterans who survived COVID-19 infection. Previous research has identified previous suicidal behaviour as a strong predictor of future suicidal behaviour.16 The results of this study are consistent with these findings, as prepandemic suicidal ideation was the strongest predictor of peripandemic suicidal ideation. The negative impact of loneliness on mental health outcomes, including suicidal ideation, has been well established.17 In addition, due to the increased social restrictions imposed during the pandemic, increasing loneliness has been suggested as a possible contributor to worsening mental health.1 In this study, greater prepandemic loneliness was the second strongest predictor of peripandemic suicidal ideation. This finding suggests that loneliness may be a clinically relevant intervention target to help mitigate risk of suicidal behaviour in veterans during the pandemic. Moreover, impulsivity was a strong predictor of peripandemic suicidal ideation, which is consistent with previous research that reported strong positive association between impulsivity and suicidal behaviour.18 Of note, all of these factors were assessed prior to the pandemic, suggesting that they may represent increased vulnerability for suicidal ideation during the pandemic in US military veterans who survived COVID-19 infection. Greater prepandemic community integration was protective against peripandemic suicidal ideation, even after controlling for other variables. This finding accords with previous research describing the protective effects of community integration in promoting psychological resilience and mitigation of risk for suicidal behaviour.19 20 Further, higher prepandemic household income was protective against risk of peripandemic suicidal ideation. This finding, which is consistent with the conservation of resources theory,21 suggests that fewer prepandemic financial resources may have led to increased peripandemic distress and suicidal ideation. This finding underscores the importance of assessing, monitoring and ameliorating financial distress brought on by the pandemic in veterans with histories of COVID-19 infection. 
Greater trauma burden was identified as a protective factor to internalising psychiatric disorders during the pandemic. While previous trauma is a well known risk factor for psychiatric distress,22 previous exposure to traumatic events may also have ‘stress inoculating’ effects and help protect against risk of adverse mental health outcomes.23 Specifically, exposure to prior traumas may help promote engagement in adaptive coping strategies and may give rise to positive psychological changes (eg, greater sense of personal strength)24 25 that may in turn help foster resilience to new stressors, such as COVID-19 infection. Further, it is noteworthy that perceived social support was found to be associated with increased risk of both internalising psychiatric disorders and suicidal ideation during the pandemic. One interpretation of this finding is that greater prepandemic severity of psychiatric distress may have increased support-seeking behaviours during the pandemic. Further research is needed to elucidate prospective associations between aspects of social support and risk for pandemic-related psychiatric conditions, particularly given the imposition of social restrictions during the pandemic. This study has several limitations. First, history of COVID-19 infection was based on self-report and may be subject to response bias, as well as access to and use of testing for COVID-19 infection. Thus, it is possible that the prevalence of self-reported COVID-19 infection may not capture the accurate prevalence of infections in US veterans. Second, due to the survey design of this study, we used screening instruments to assess psychiatric conditions. Further research using diagnostic instruments such as structured clinical interviews are needed to replicate the results reported herein. Third, while nationally representative, our sample was composed entirely of US military veterans who are predominantly older, male and white, which makes it difficult to generalise results to non-veteran and more diverse veteran populations. Fourth, although this was a prospective cohort study, given the cross-sectional nature of the surveys and limited follow-up time frame, it is difficult to draw conclusions regarding temporal/causal associations between independent variables, and peripandemic psychiatric disorders and suicidal ideation. Despite these limitations, this study provides the first known nationally representative data on the prevalence and prepandemic and pandemic-related risk and protective factors of peripandemic psychiatric outcomes in US military veterans who survived COVID-19 infection. Given the enormous toll of COVID-19 infections in the USA,4 this information may help inform targeted population-based prevention, monitoring and intervention strategies. For example, US veterans who are COVID-19 survivors may be at elevated risk of peripandemic psychiatric conditions, and those with less socioeconomic resources and greater symptom severity at baseline may require higher clinical attention. Further, veterans who endorse greater COVID-related stressors may be at increased risk of peripandemic psychiatric disorders and suicidal ideation. 
Further research is needed to replicate and extend these results to other populations who are also at higher risk of psychiatric conditions; identify mechanisms leading to increased risk of psychiatric conditions during the pandemic; and assess the efficacy of interventions targeting such evidence-based risk and protective factors in mitigating risk of psychiatric conditions in veterans and other populations.
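Editor's note: the relative importance analyses described in the Methods, whose results are reported above as percentages of explained variance, partition the variance explained in an outcome across correlated predictors. One common way to do this is an averaging-over-orderings (LMG/Shapley-style) decomposition; the sketch below illustrates the idea on synthetic data with a linear model for simplicity, whereas the published analysis followed its reference 13 and operated on binary screening outcomes. All variable names and effect sizes here are invented.

```python
# Toy relative importance analysis: the explained variance (R^2) is partitioned across
# correlated predictors by averaging each predictor's incremental R^2 over every
# possible order of entry into the model (an LMG/Shapley-style decomposition).
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
n = 500
prepandemic_symptoms = rng.normal(size=n)
loneliness = 0.5 * prepandemic_symptoms + rng.normal(size=n)   # deliberately correlated
covid_worries = rng.normal(size=n)
y = 0.6 * prepandemic_symptoms + 0.3 * loneliness + 0.2 * covid_worries + rng.normal(size=n)

X = {
    "prepandemic_symptoms": prepandemic_symptoms,
    "loneliness": loneliness,
    "covid_worries": covid_worries,
}

def r_squared(columns):
    """R^2 of an ordinary least squares fit of y on the given predictors (plus intercept)."""
    if not columns:
        return 0.0
    A = np.column_stack([np.ones(n)] + [X[c] for c in columns])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

names = list(X)
shares = dict.fromkeys(names, 0.0)
orderings = list(permutations(names))
for order in orderings:
    for i, name in enumerate(order):
        shares[name] += r_squared(order[: i + 1]) - r_squared(order[:i])

total = r_squared(names)
for name in names:
    share = shares[name] / len(orderings)
    print(f"{name}: {100 * share / total:.1f}% of explained variance")
```

The shares sum to 100% of the model's R^2, which mirrors how the Results express each predictor's contribution to each outcome.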
Background: There have been reports of increased prevalence in psychiatric conditions in non-veteran survivors of COVID-19. To date, however, no known study has examined the prevalence, risk and protective factors of psychiatric conditions among US military veterans who survived COVID-19. Methods: Data were analysed from the 2019 to 2020 National Health and Resilience in Veterans Study, which surveyed a nationally representative, prospective cohort of 3078 US veterans. Prepandemic and 1-year peripandemic risk and protective factors associated with positive screens for peripandemic internalising (major depressive, generalised anxiety and/or posttraumatic stress disorders) and externalising psychiatric disorders (alcohol and/or drug use disorders) and suicidal ideation were examined using bivariate and multivariate logistic regression analyses. Results: A total of 233 veterans (8.6%) reported having been infected with COVID-19. Relative to veterans who were not infected, veterans who were infected were more likely to screen positive for internalising disorders (20.5% vs 13.9%, p=0.005), externalising disorders (23.2% vs 14.8%, p=0.001) and current suicidal ideation (12.0% vs 7.6%, p=0.015) at peripandemic. Multivariable analyses revealed that greater prepandemic psychiatric symptom severity and COVID-related stressors were the strongest independent predictors of peripandemic internalising disorders, while prepandemic trauma burden was protective. Prepandemic suicidal ideation, greater loneliness and lower household income were the strongest independent predictors of peripandemic suicidal ideation, whereas prepandemic community integration was protective. Conclusions: Psychiatric symptoms and suicidal ideation are prevalent in veterans who have survived COVID-19. Veterans with greater prepandemic psychiatric and substance use problems, COVID-related stressors and fewer psychosocial resources may be at increased risk of these outcomes.
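Editor's note: the abstract's Methods state that outcomes were examined with bivariate analyses followed by multivariable logistic regression. A minimal sketch of that two-step strategy (screen candidate predictors at p < 0.05, then enter the survivors into a multivariable binary logistic model and report odds ratios) is shown below on synthetic data; all variable names and effect sizes are assumptions made for illustration only.

```python
# Two-step strategy from the Methods: bivariate screening at p < 0.05, then a
# multivariable binary logistic regression on the retained predictors. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 233  # size of the COVID-19-infected subcohort
df = pd.DataFrame({
    "prepandemic_symptoms": rng.normal(size=n),
    "loneliness": rng.normal(size=n),
    "household_income": rng.normal(size=n),
    "unrelated_noise": rng.normal(size=n),
})
logit = -1.5 + 0.8 * df["prepandemic_symptoms"] + 0.6 * df["loneliness"]
df["suicidal_ideation"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Step 1: bivariate screening (Pearson correlations with the binary outcome).
candidates = ["prepandemic_symptoms", "loneliness", "household_income", "unrelated_noise"]
retained = [c for c in candidates if pearsonr(df[c], df["suicidal_ideation"])[1] < 0.05]

# Step 2: multivariable logistic regression on the retained predictors.
model = sm.Logit(df["suicidal_ideation"], sm.add_constant(df[retained])).fit(disp=0)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```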
null
null
5,128
315
[ 210, 30, 211, 202, 112, 303 ]
10
[ "covid", "psychiatric", "19", "veterans", "covid 19", "peripandemic", "prepandemic", "suicidal", "risk", "ideation" ]
[ "pandemic previous psychiatric", "mental illness pandemic", "illness pandemic veterans", "covid related stressors", "stressors covid 19" ]
null
null
[CONTENT] epidemiology | mental health | depression & mood disorders | substance misuse | suicide & self-harm | adult psychiatry [SUMMARY]
[CONTENT] epidemiology | mental health | depression & mood disorders | substance misuse | suicide & self-harm | adult psychiatry [SUMMARY]
[CONTENT] epidemiology | mental health | depression & mood disorders | substance misuse | suicide & self-harm | adult psychiatry [SUMMARY]
null
[CONTENT] epidemiology | mental health | depression & mood disorders | substance misuse | suicide & self-harm | adult psychiatry [SUMMARY]
null
[CONTENT] COVID-19 | Depressive Disorder, Major | Humans | Mental Health | Prospective Studies | SARS-CoV-2 | Suicidal Ideation | Veterans [SUMMARY]
[CONTENT] COVID-19 | Depressive Disorder, Major | Humans | Mental Health | Prospective Studies | SARS-CoV-2 | Suicidal Ideation | Veterans [SUMMARY]
[CONTENT] COVID-19 | Depressive Disorder, Major | Humans | Mental Health | Prospective Studies | SARS-CoV-2 | Suicidal Ideation | Veterans [SUMMARY]
null
[CONTENT] COVID-19 | Depressive Disorder, Major | Humans | Mental Health | Prospective Studies | SARS-CoV-2 | Suicidal Ideation | Veterans [SUMMARY]
null
[CONTENT] pandemic previous psychiatric | mental illness pandemic | illness pandemic veterans | covid related stressors | stressors covid 19 [SUMMARY]
[CONTENT] pandemic previous psychiatric | mental illness pandemic | illness pandemic veterans | covid related stressors | stressors covid 19 [SUMMARY]
[CONTENT] pandemic previous psychiatric | mental illness pandemic | illness pandemic veterans | covid related stressors | stressors covid 19 [SUMMARY]
null
[CONTENT] pandemic previous psychiatric | mental illness pandemic | illness pandemic veterans | covid related stressors | stressors covid 19 [SUMMARY]
null
[CONTENT] covid | psychiatric | 19 | veterans | covid 19 | peripandemic | prepandemic | suicidal | risk | ideation [SUMMARY]
[CONTENT] covid | psychiatric | 19 | veterans | covid 19 | peripandemic | prepandemic | suicidal | risk | ideation [SUMMARY]
[CONTENT] covid | psychiatric | 19 | veterans | covid 19 | peripandemic | prepandemic | suicidal | risk | ideation [SUMMARY]
null
[CONTENT] covid | psychiatric | 19 | veterans | covid 19 | peripandemic | prepandemic | suicidal | risk | ideation [SUMMARY]
null
[CONTENT] mental | covid | covid 19 | 19 | health | risk | mental health | high | veterans | psychiatric [SUMMARY]
[CONTENT] analyses | survey | conducted | veterans | 2020 | 000 | peripandemic | disorders | 19 | variables [SUMMARY]
[CONTENT] vs | χ2 | vs χ2 | severity | use | prepandemic | psychiatric | disorder | social | covid [SUMMARY]
null
[CONTENT] psychiatric | covid | veterans | 19 | vs | covid 19 | peripandemic | χ2 | risk | prepandemic [SUMMARY]
null
[CONTENT] COVID-19 ||| US | COVID-19 [SUMMARY]
[CONTENT] 2019 | Veterans Study | 3078 | US ||| 1-year | anxiety and/or posttraumatic stress disorders [SUMMARY]
[CONTENT] 233 | 8.6% | COVID-19 ||| 20.5% | 13.9% | p=0.005 | 23.2% | 14.8% | 12.0% | 7.6% ||| ||| [SUMMARY]
null
[CONTENT] COVID-19 ||| US | COVID-19 ||| 2019 | Veterans Study | 3078 | US ||| 1-year | anxiety and/or posttraumatic stress disorders ||| 233 | 8.6% | COVID-19 ||| 20.5% | 13.9% | p=0.005 | 23.2% | 14.8% | 12.0% | 7.6% ||| ||| ||| COVID-19 ||| Veterans [SUMMARY]
null
Acute Caffeine Intake Enhances Mean Power Output and Bar Velocity during the Bench Press Throw in Athletes Habituated to Caffeine.
32033103
The main objective of the current investigation was to evaluate the effects of caffeine on power output and bar velocity during an explosive bench press throw in athletes habituated to caffeine.
BACKGROUND
Twelve resistance trained individuals habituated to caffeine ingestion participated in a randomized double-blind experimental design. Each participant performed three identical experimental sessions 60 min after the intake of a placebo, 3, and 6 mg/kg/b.m. of caffeine. In each experimental session, the participants performed 5 sets of 2 repetitions of the bench press throw (with a load equivalent to 30% repetition maximum (RM), measured in a familiarization trial) on a Smith machine, while bar velocity and power output were registered with a rotatory encoder.
METHODS
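Editor's note: the methods report that bar velocity and power output were registered with a rotatory encoder. This excerpt does not describe the device's signal processing, so the sketch below only illustrates, in general terms, how velocity and power can be derived from bar displacement samples; the sampling rate, load and displacement profile are assumed values, not measurements from the study.

```python
# Generic derivation of bar velocity and power from encoder displacement samples.
# Sampling rate, load and the synthetic displacement profile are illustrative only.
import numpy as np

fs = 1000.0                    # assumed sampling rate (Hz)
load_kg = 30.0                 # assumed bar load (~30% 1RM for a trained lifter)
g = 9.81                       # gravitational acceleration (m/s^2)

t = np.arange(0.0, 0.6, 1.0 / fs)                            # one ~0.6 s concentric push
displacement = 0.45 * (1.0 - np.cos(np.pi * t / 0.6)) / 2.0  # synthetic 0 -> 0.45 m lift

velocity = np.gradient(displacement, t)       # m/s
acceleration = np.gradient(velocity, t)       # m/s^2
force = load_kg * (g + acceleration)          # N: bar weight plus inertial component
power = force * velocity                      # W

print(f"mean velocity: {velocity.mean():.2f} m/s, peak velocity: {velocity.max():.2f} m/s")
print(f"mean power:    {power.mean():.0f} W,    peak power:    {power.max():.0f} W")
```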
In comparison to the placebo, the intake of caffeine increased mean bar velocity during 5 sets of the bench press throw (1.37 ± 0.05 vs. 1.41 ± 0.05 and 1.41 ± 0.06 m/s for placebo, 3, and 6 mg/kg/b.m., respectively; p < 0.01), as well as mean power output (545 ± 117 vs. 562 ± 118 and 560 ± 107 W; p < 0.01). However, caffeine was not effective at increasing either peak velocity (p = 0.09) or peak power output (p = 0.07) during the explosive exercise.
RESULTS
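Editor's note: as a quick sanity check, the condition means reported above imply improvements over placebo of roughly 3% in both mean bar velocity and mean power output; a few lines of arithmetic make that explicit.

```python
# Relative improvements over placebo implied by the reported condition means.
placebo = {"mean velocity (m/s)": 1.37, "mean power (W)": 545}
caffeine = {
    "3 mg/kg": {"mean velocity (m/s)": 1.41, "mean power (W)": 562},
    "6 mg/kg": {"mean velocity (m/s)": 1.41, "mean power (W)": 560},
}

for dose, values in caffeine.items():
    for metric, value in values.items():
        gain = 100.0 * (value - placebo[metric]) / placebo[metric]
        print(f"{dose}, {metric}: +{gain:.1f}% vs placebo")
```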
Acute doses of caffeine taken before resistance exercise may increase mean power output and mean bar velocity during a bench press throw training session in a group of habitual caffeine users. Thus, caffeine ingested prior to ballistic exercises may enhance performance during a power-specific resistance training session.
CONCLUSION
[ "Adult", "Athletes", "Athletic Performance", "Caffeine", "Cross-Over Studies", "Double-Blind Method", "Healthy Volunteers", "Humans", "Male", "Performance-Enhancing Substances", "Resistance Training", "Weight Lifting" ]
7071256
1. Introduction
Caffeine (CAF) is one of the most commonly used substances in sport for enhancing physical performance [1]. Although CAF may affect various body tissues [2,3], there is a growing body of evidence from animal [4] and human models [5] supporting the ability of CAF to act as an adenosine antagonist as the main mechanism behind its ergogenic effect. Current research recommends doses of CAF ranging from 3 to 6 mg/kg body mass to elicit ergogenic benefits, while the time of ingestion (from 30 to 90 min before exercise) and the form of ingestion (pills, liquid, chewing gum) are less significant because CAF is rapidly absorbed after ingestion [6]. However, the optimal protocol of CAF supplementation may differ based on the type and duration of exercise, previous habituation to CAF and the type of muscle contraction [6,7,8,9,10]. Acute CAF intake produces a slightly different response in upper and lower body exercise [10], although the mechanism explaining this phenomenon remains unclear. A recent meta-analysis of the effects of CAF on muscle strength and power output found that CAF significantly improved upper but not lower body strength [7]. Although the outcomes of the first investigations were contradictory [11,12,13], more recent studies have found a clear effect of CAF on several forms of upper body muscle performance [14,15,16] when the dose used was at least 3 mg/kg/b.m. [17]. Interestingly, the effect of CAF on upper body muscle performance may be partially dampened in athletes habituated to CAF, because regular use of CAF-containing products [18] may produce tolerance to this substance. Notably, this reduced ergogenic effect of CAF is not removed even with the ingestion of doses >9 mg/kg/b.m. [19,20]. Most studies on the acute effects of CAF on upper body muscle performance used the bench press exercise. The bench press exercise is widely used as a means of developing strength and power of the upper limbs [21,22]. However, other authors recommend ballistic exercises, such as the bench press throw (BPT), for the development of upper body power [23,24]. Specifically, to increase power output, loads ranging from 0% to 50% of one-repetition maximum (1 RM) moved at maximal speed are recommended as the most potent loading stimuli [23]. However, this type of routine can only be performed on a Smith machine using the BPT exercise (i.e., maximal bar velocity is obtained at the moment of the throw). The traditional free-weight bench press exercise does not allow for the attainment of maximal velocity of execution (i.e., velocity equals 0 m/s at maximal arm extension). Thus, ballistic exercises could be an optimal choice for power training, as they allow for greater velocity and muscle activation in comparison with similar traditional resistance training routines [21]. Perhaps the main asset of the BPT is the maximal acceleration of a given load, which ultimately produces high movement velocity in a short time [25]. In this respect, the loads applied in ballistic exercises during training will depend on the specific requirements of particular sport disciplines and will determine success in numerous power-based competitions [23]. Furthermore, BPT performance has been associated with overall performance in different sport-specific tasks [26,27,28,29]. Therefore, it seems reasonable to use the BPT as a means of testing upper-body ballistic performance.
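Editor's note: the recommendation above, that loads of roughly 0% to 50% of 1 RM moved at maximal speed are the most potent stimuli for power, follows from the force-velocity relationship. The toy sketch below uses a simple linear force-velocity assumption with arbitrary parameter values (not data from this study) to show why mechanical power peaks at intermediate loads.

```python
# Why light-to-moderate loads maximise mechanical power: under a linear force-velocity
# model F(v) = F0 * (1 - v / v0), power P = F * v peaks at an intermediate load.
# F0 and v0 are arbitrary illustrative values.
import numpy as np

F0 = 1000.0   # maximal isometric force (N)
v0 = 2.5      # maximal unloaded velocity (m/s)

relative_loads = np.linspace(0.05, 0.95, 19)     # fraction of F0 (rough proxy for %1RM)
velocities = v0 * (1.0 - relative_loads)         # velocity attainable against each load
power = (relative_loads * F0) * velocities       # P = F * v

best = relative_loads[np.argmax(power)]
print(f"power peaks near {best:.0%} of maximal force ({power.max():.0f} W in this toy model)")
```

With a more realistic hyperbolic force-velocity curve the optimum shifts toward lighter loads, which is consistent with the 0-50% 1 RM range cited above.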
Although the BPT is indicated as the most effective exercise for developing power of the upper limbs [30], previous studies have not determined the acute effects of CAF on power output and bar velocity during this type of exercise. Burke [31] has suggested that, in the current literature, there is a lack of data about the practical use of supplements in competitive sports because experimental protocols are often different from sports practice. In the case of CAF, most studies on the acute effects of this supplement assessed performance on the basis of only a single set of an exercise [7,8,32], while real resistance training sessions in trained individuals rarely contain only one set of a particular exercise [31]. On the other hand, investigations analyzing the acute effects of CAF on performance during successive sets of resistance exercises are scarce. Bowtell et al. [3] showed that pre-exercise CAF (6 mg/kg/b.m.) intake improved total exercise time during 5 sets of one-legged knee extensions performed to failure in comparison to a placebo (PLAC). It is worth noting that this ergogenic effect was achieved despite significantly lower muscle phosphocreatine concentration (PCr) and pH in the latter sets of the exercise in the CAF trial. Further, CAF ingestion (6 mg/kg/b.m.) attenuated the increase in interstitial potassium during one-legged knee extensions at 20 W (10 min) and 50 W (3 × 3 min) measured using microdialysis [33], which resulted in a 16% improvement in high-intensity exercise capacity. Most studies related to the acute impact of CAF intake on power output and bar velocity have used participants unhabituated to CAF or individuals with low-to-moderate daily consumption of this stimulant [11,16,34]. However, the analysis of urinary CAF concentrations after official competitions suggests that CAF is widely employed before or during exercise to enhance performance [35,36]. This means it is highly likely that some athletes are habituated to CAF due to daily consumption of caffeine-containing products. Athletes habituated to CAF may be particularly common in sports such as cycling, rowing, triathlon, athletics, and weightlifting, disciplines that benefit from the use of ballistic exercises during training to increase power output. Habitual CAF intake modifies physiological responses to acute ingestion of this stimulant through the up-regulation of adenosine receptors [37,38]. This effect would produce a progressive reduction of CAF ergogenicity in athletes consuming CAF on a regular basis, because the newly created adenosine receptors may bind to adenosine and induce fatigue. However, the evidence for habituation to the ergogenic benefits of CAF is still inconclusive. The studies by Dodd et al. [39] and de Souza [40] showed similar responses to endurance exercise after acute CAF intake between low and habitual CAF consumers, although this is not always the case [41]. Considering the above, the use of cross-sectional investigations including participants with different degrees of habituation to CAF may explain the lack of consistency when concluding about tolerance to the ergogenic effect of CAF. The crossover design of Lara et al. [42] showed that the ergogenicity of CAF was reduced when the substance was consumed daily for 20 days, yet afterwards the ergogenic properties of CAF were maintained. On the contrary, tolerance to some of the side-effects associated with CAF has been observed in habitual consumers of CAF [43].
However, only two previous studies analyzed the impact of CAF intake in habitual CAF consumers using resistance exercise test protocols [18,19,20]. These investigations indicate that the ergogenic effect of CAF on power output is mostly reduced in individuals habituated to CAF, while only high doses (>9 mg/kg/b.m.) may exert some benefit in maximal strength [19,20]. However, to date, there are no available data regarding the influence of acute CAF intake on power output and bar velocity during ballistic exercises in athletes habitually consuming CAF. Given the widespread use of the BPT exercise as a means of developing power output in the upper limbs [44,45] and the widespread use of CAF in sport, it would be interesting to investigate whether acute CAF intake affects power output and bar velocity in athletes habituated to CAF. For this reason, the aim of the present study was to evaluate the effects of the acute intake of 3 and 6 mg/kg/b.m. of CAF on power output and bar velocity during five sets of the BPT in participants habituated to CAF. It was hypothesized that acute CAF intake would increase power output and bar velocity during the BPT training session when compared to a control situation, even in participants habituated to CAF.
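The relative dosing described above (3 and 6 mg/kg/b.m., ingested about an hour before exercise) translates into absolute amounts as in the minimal sketch below; the function name is illustrative, and the body mass used is the participants' mean reported in the Methods, so this is only an arithmetic illustration rather than part of the study protocol.

    def caffeine_dose_mg(body_mass_kg, dose_mg_per_kg):
        """Absolute caffeine dose (mg) for a given body mass and relative dose."""
        return body_mass_kg * dose_mg_per_kg

    # Mean body mass of the study sample (88.4 kg) under the two experimental doses.
    for dose in (3.0, 6.0):
        print(f"{dose:.0f} mg/kg x 88.4 kg = {caffeine_dose_mg(88.4, dose):.0f} mg")

For the average participant this corresponds to roughly 265 mg (CAF-3) and 530 mg (CAF-6) of caffeine per session.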
null
null
3. Results
The two-way repeated measures ANOVA indicated no significant substance × set interaction effect for MP (F = 1.19; p = 0.32), MV (F = 1.18; p = 0.32), PP (F = 1.05; p = 0.40), or PV (F = 1.09; p = 0.38). However, there was a significant main effect of substance on MP (F = 7.27; p < 0.01) and MV (F = 6.75; p < 0.01). No statistically significant main effect of substance was revealed for PP (F = 2.91; p = 0.07) or PV (F = 2.63; p = 0.09; Table 1). Post hoc analyses for the main effect of substance indicated significant increases in MP (p < 0.01; ES = 0.14) and MV (p = 0.01; ES = 0.78) in the BPT (mean of the 5 sets) after the intake of CAF-3 compared to PLAC, as well as significant increases in MP (p = 0.01; ES = 0.13) and MV (p = 0.01; ES = 0.72) in the BPT (mean of the 5 sets) after the intake of CAF-6 compared to PLAC. There were no significant differences in MP and MV between the two doses of CAF (CAF-3 vs. CAF-6). The results for individual sets in MP, MV, PP, and PV, as well as the ES between PLAC and CAF-3 or CAF-6 for each set, are presented in Table 2. Figure 2 and Figure 3 represent the individual responses induced by CAF-3 and CAF-6, in comparison to the placebo, for MP and MV. Eleven out of 12 participants showed an increase in MP and MV after the ingestion of CAF-3, while CAF-6 produced higher values for MP and MV in 10 out of 12 participants. (Figure caption: the y-axis represents the per-participant difference in mean bar velocity during the 5 sets of the BPT between PLAC and CAF-3 and between PLAC and CAF-6.)
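A minimal sketch of how the reported effect sizes map onto the Cohen's d bands defined in the Statistical Analysis section; the d values below are the ones reported above, while the helper function is an illustration, not the authors' code.

    def classify_effect(d):
        """Effect-size bands from the Statistical Analysis section (Cohen's d)."""
        d = abs(d)
        if d >= 0.80:
            return "large"
        if d >= 0.50:
            return "moderate"
        if d >= 0.20:
            return "small"
        return "trivial"

    # Effect sizes reported for the mean of the 5 sets versus PLAC.
    reported = {
        "MP, CAF-3 vs PLAC": 0.14,
        "MP, CAF-6 vs PLAC": 0.13,
        "MV, CAF-3 vs PLAC": 0.78,
        "MV, CAF-6 vs PLAC": 0.72,
    }
    for label, d in reported.items():
        print(f"{label}: d = {d:.2f} -> {classify_effect(d)}")

Read this way, the significant MP gains are trivial in magnitude, whereas the MV gains are moderate, which matches the pattern described above.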
5. Conclusions
The results of the present study indicate that acute doses of CAF, between 3 and 6 mg/kg/b.m., ingested before the onset of an explosive resistance exercise improved mean power output and mean bar velocity during a BPT training session in a group of habitual CAF users. The improvement in mean power and bar velocity was found across several sets of the trial, which may indicate that CAF was effective in increasing performance over the whole training session. In contrast, no significant changes were observed for peak power output and peak bar velocity. These results suggest that the ingestion of CAF prior to ballistic exercise can enhance the outcomes of resistance training. However, the results of our study refer only to power output and bar velocity of the upper limbs during the BPT with an external load of 30% 1 RM, and further investigations should consider the effect of CAF with different loads or the use of lower-body exercises.
[ "2. Materials and Methods", "2.2. Habitual Caffeine Intake Assessment", "2.3. Familiarization Session and One Repetition Maximum Test", "2.4. Experimental Sessions", "2.5. Statistical Analysis" ]
[ "A randomized, double-blind, PLAC-controlled crossover design was used for this investigation. Initially, the participants performed a familiarization session with the experimental protocols that included a 1 RM measurement for the bench press exercise. Afterwards, they performed 3 different experimental sessions with a one-week interval between sessions to allow for complete recovery and a wash-out of ingested substances (Figure 1). During the 3 experimental sessions, the study participants either ingested a PLAC, 3 mg/kg/b.m. of CAF (CAF-3) or 6 mg/kg/b.m. of CAF (CAF-6). One hour after ingestion of CAF or PLAC, they performed 5 sets of 2 repetitions of the BPT exercise at 30% 1 RM. Both CAF and PLAC were administered orally to allow for peak blood CAF concentration during the training session and at least 2 h after the last meal to avoid any interference of the diet with the absorption of the experimental substances. CAF supplementation was provided to participants in the form of unidentifiable capsules (Caffeine Kick®, Olimp Laboratories, Debica, Poland). The manufacturer of the CAF capsules also provided identical PLAC capsules filled with all-purpose flour. Participants refrained from strenuous physical activity the day before the experimental trials but they maintained their training routines during the duration of the experiment to avoid any performance decrement due to inactivity. Additionally, the participants maintained their dietary habits during the study period, including daily CAF intake. They received a list of products containing CAF which could not be consumed within 12 h of each experimental trial. Compliance was tested verbally and by using dietary records. Additionally, the participants were required to refrain from alcohol and tobacco, medications or dietary supplements for two weeks prior to the experiment. All subjects registered their calorie intake using the “Myfitness pal” software [46] (Under Armour, Baltimore, MD, USA) every 24 h before the testing procedure, to ensure that the caloric intake was similar between experimental sessions.\n 2.1. Participants Twelve healthy strength-trained male athletes (age: 25.3 ± 1.7 years., body mass: 88.4 ± 16.5 kg, body mass index (BMI): 26.5 ± 4.7, bench press 1 RM: 128.6 ± 36.0 kg; mean ± SD) volunteered to participate in the study. All participants completed a written consent form after they had been informed of the risks and benefits of the study protocols. The participants had a minimum of 3 years of strength training experience (4.4 ± 1.6 years). All of them were classified as high habitual CAF consumers according to the classification recently proposed by de Souza Gonçalves et al. [40]. They self-reported their daily ingestion of CAF (5.0 ± 0.95 mg/kg/b.m./day, 443 ± 142 mg/day) based on a Food Frequency Questionnaire (FFQ). The inclusion criteria were as follows: (a) free from neuromuscular and musculoskeletal disorders, (b) 1 RM bench press performance with a load of at least 120% body mass, (c) habitual CAF intake in the range of 4–6 mg/day/kg/b.m. The athletes were excluded from the study when they suffered from any pathology or injury or when they were unable to perform the exercise protocol at the maximum effort. 
The investigation protocols were approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice (March 2019), Poland, according to the ethical standards of the latest version of the Declaration of Helsinki, 2013.\n 2.2. Habitual Caffeine Intake Assessment Daily CAF intake was measured by an adapted version of the Food Frequency Questionnaire (FFQ) proposed by Bühler et al. [47]. Household measures were employed to individually assess the amount of food consumed during a day, week and month. The list was composed of dietary products with moderate-to-high CAF content including different types of coffee, tea, energy drinks, cocoa products, popular beverages, medications, and CAF supplements. Nutritional tables were used for database construction [48,49,50] and an experienced nutritionist calculated the daily CAF intake for each participant.\n 2.3. Familiarization Session and One Repetition Maximum Test A familiarization session with the experimental procedures preceded 1 RM testing in the bench press exercise. In this session, the athletes arrived at the laboratory between 9:00 and 10:00 am. and cycled on an ergometer for 5 min. Afterwards, they performed 15 repetitions at 20% of their estimated 1 RM in the barbell bench press exercise followed by 10 repetitions at 40% 1 RM, 5 repetitions at 60% 1 RM and 3 repetitions at 80% 1 RM. Then they executed single repetitions of the bench press exercise with a 5 min rest interval between successful attempts. 
The load for each subsequent attempt was increased by 2.5 to 10 kg, and the process was repeated until failure. Hand placement on the barbell was individually selected grip width (~150% individual bi-acromial distance). After completing the 1 RM test in the bench press exercise, the participants performed a maximal BPT on a Smith machine with a load of 30% 1 RM from 1 RM bench press test result, with a maximal tempo of movement.\n 2.4. Experimental Sessions During experimental sessions, the athletes participated in three identical training trials. All trials took place between 9.00 and 11.00 am. to avoid the effect of circadian variations on the outcomes of the investigation. After replicating the warm-up procedures of the familiarization trial, the athletes performed 5 sets of the 2 BPT repetitions at 30% 1 RM on the Smith machine. The repetitions were performed without rest to produce a ballistic movement while the rest interval between sets was 3 min. The participants were encouraged to produce maximal velocity during both the eccentric and concentric phase of the BPT movement. 
Two spotters were present on each side of the bar during the exercise protocol to ensure safety. To standardize the exercise protocol for all trials, each BPT was performed without bouncing the barbell off the chest, with the lower back in contact with the bench and without any pause between the eccentric and concentric phases of the movement. A rotatory encoder (Tendo Power Analyzer, Tendo Sport Machines, Trencin, Slovakia) was used for instantaneous recording of bar velocity during the whole range of motion, as in previous investigations [51]. During each BPT, peak power output (PP, in W) mean power output (MP, in W); peak bar velocity (PV, in m/s); and mean bar velocity (MV, in m/s) were registered. MP and MV were obtained as the mean of the two repetitions while PP and PV were obtained from the best repetition.\n 2.5. Statistical Analysis Data are presented as the mean ± SD. All variables presented a normal distribution according to the Shapiro-Wilk test. Verification of differences in peak power output (PP), mean power output (MP), peak bar velocity (PV), and mean bar velocity (MV) was performed using a two-way (substance × set) analysis of variance (ANOVA) with repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using the Tukey’s test. Percent changes and 95% confidence intervals were also calculated. Effect sizes (Cohen’s d) were reported where appropriate and interpreted as large (d ≥ 0.80); moderate (d between 0.79 and 0.50); small (d between 0.49 and 0.20); and trivial (d < 0.20); [52].", "Daily CAF intake was measured by an adapted version of the Food Frequency Questionnaire (FFQ) proposed by Bühler et al. [47]. Household measures were employed to individually assess the amount of food consumed during a day, week and month. The list was composed of dietary products with moderate-to-high CAF content including different types of coffee, tea, energy drinks, cocoa products, popular beverages, medications, and CAF supplements. Nutritional tables were used for database construction [48,49,50] and an experienced nutritionist calculated the daily CAF intake for each participant.", "A familiarization session with the experimental procedures preceded 1 RM testing in the bench press exercise. In this session, the athletes arrived at the laboratory between 9:00 and 10:00 am. and cycled on an ergometer for 5 min. Afterwards, they performed 15 repetitions at 20% of their estimated 1 RM in the barbell bench press exercise followed by 10 repetitions at 40% 1 RM, 5 repetitions at 60% 1 RM and 3 repetitions at 80% 1 RM. Then they executed single repetitions of the bench press exercise with a 5 min rest interval between successful attempts. The load for each subsequent attempt was increased by 2.5 to 10 kg, and the process was repeated until failure. 
Hand placement on the barbell was individually selected grip width (~150% individual bi-acromial distance). After completing the 1 RM test in the bench press exercise, the participants performed a maximal BPT on a Smith machine with a load of 30% 1 RM from 1 RM bench press test result, with a maximal tempo of movement.", "During experimental sessions, the athletes participated in three identical training trials. All trials took place between 9.00 and 11.00 am. to avoid the effect of circadian variations on the outcomes of the investigation. After replicating the warm-up procedures of the familiarization trial, the athletes performed 5 sets of the 2 BPT repetitions at 30% 1 RM on the Smith machine. The repetitions were performed without rest to produce a ballistic movement while the rest interval between sets was 3 min. The participants were encouraged to produce maximal velocity during both the eccentric and concentric phase of the BPT movement. Two spotters were present on each side of the bar during the exercise protocol to ensure safety. To standardize the exercise protocol for all trials, each BPT was performed without bouncing the barbell off the chest, with the lower back in contact with the bench and without any pause between the eccentric and concentric phases of the movement. A rotatory encoder (Tendo Power Analyzer, Tendo Sport Machines, Trencin, Slovakia) was used for instantaneous recording of bar velocity during the whole range of motion, as in previous investigations [51]. During each BPT, peak power output (PP, in W) mean power output (MP, in W); peak bar velocity (PV, in m/s); and mean bar velocity (MV, in m/s) were registered. MP and MV were obtained as the mean of the two repetitions while PP and PV were obtained from the best repetition.", "Data are presented as the mean ± SD. All variables presented a normal distribution according to the Shapiro-Wilk test. Verification of differences in peak power output (PP), mean power output (MP), peak bar velocity (PV), and mean bar velocity (MV) was performed using a two-way (substance × set) analysis of variance (ANOVA) with repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using the Tukey’s test. Percent changes and 95% confidence intervals were also calculated. Effect sizes (Cohen’s d) were reported where appropriate and interpreted as large (d ≥ 0.80); moderate (d between 0.79 and 0.50); small (d between 0.49 and 0.20); and trivial (d < 0.20); [52]." ]
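As a rough illustration of the outcome extraction and analysis described in Sections 2.4 and 2.5, the sketch below aggregates repetition-level encoder output into MP, MV, PP, and PV per set and indicates where the two-way repeated-measures ANOVA would be run. The column names, toy numbers, and the pandas/statsmodels workflow are assumptions made for illustration; the paper does not state which software the authors used.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Toy repetition-level encoder output for one subject, one substance, one set
    # (two repetitions per set, as in the protocol). All values are placeholders.
    reps = pd.DataFrame({
        "subject":    [1, 1],
        "substance":  ["PLAC", "PLAC"],
        "set":        [1, 1],
        "mean_power": [520.0, 508.0],   # W, per repetition
        "peak_power": [780.0, 755.0],   # W, per repetition
        "mean_vel":   [1.18, 1.15],     # m/s, per repetition
        "peak_vel":   [1.95, 1.90],     # m/s, per repetition
    })

    # MP and MV: mean of the two repetitions; PP and PV: best repetition.
    per_set = (reps.groupby(["subject", "substance", "set"], as_index=False)
                   .agg(MP=("mean_power", "mean"), MV=("mean_vel", "mean"),
                        PP=("peak_power", "max"),  PV=("peak_vel", "max")))
    print(per_set)

    # With the full balanced design (12 subjects x 3 substances x 5 sets),
    # the substance x set repeated-measures ANOVA of Section 2.5 would be:
    # AnovaRM(data=per_set, depvar="MP", subject="subject",
    #         within=["substance", "set"]).fit().anova_table

The ANOVA call is left commented out because it requires the complete balanced data set; a follow-up pairwise comparison (the paper used Tukey's test) would then be applied to any significant main effect.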
[ null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Participants", "2.2. Habitual Caffeine Intake Assessment", "2.3. Familiarization Session and One Repetition Maximum Test", "2.4. Experimental Sessions", "2.5. Statistical Analysis", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "Caffeine (CAF) is one of the most common substances used in sport which enhances physical performance [1]. Although CAF may affect various body tissues [2,3], there is a growing body of evidence in animal [4] and human models [5] sustaining the ability of CAF to act as an adenosine antagonist, as the main mechanism behind CAF ergogenic effect. Current research recommends doses of CAF ranging from 3 to 6 mg/kg body mass to elicit ergogenic benefits, while the time of ingestion (from 30 to 90 min before exercise) and the form of ingestion (pills, liquid, chewing gum) are less significant as CAF is rapidly absorbed after ingestion [6]. However, the optimal protocol of CAF supplementation may differ based on the type and duration of exercise, previous habituation to CAF and the type of muscle contraction [6,7,8,9,10].\nAcute CAF intake causes a slightly different response to upper and lower body exercise [10], despite the lack of a mechanisms explaining this phenomenon. A recent meta-analysis about the effects of CAF on muscle strength and power output found that CAF significantly improved upper but not lower body strength [7]. Although the outcomes of the first investigations were contradictory [11,12,13], more recent studies have found a clear effect of CAF on several forms of upper body muscle performance [14,15,16], when the used dose was at least 3 mg/kg/b.m. [17]. Interestingly, the effect of CAF on upper body muscle performance may be partially dampened in athletes habituated to CAF because the regular use of CAF-containing products [18] may develop tolerance to this substance. It must be noticed that this decreased ergogenic effect of CAF is not removed even with the ingestion of doses >9 mg/kg/b.m. [19,20].\nMost studies on the acute effects of CAF on upper body muscle performance used the bench press exercise. The bench press exercise is widely used as a means of developing strength and power of the upper limbs [21,22]. However, other authors, recommend the use of ballistic exercises for the development of upper body power, such as the bench press throw (BPT) [23,24]. Specifically, to increase power output, the loads ranging from 0% to 50% of one-repetition maximum (1 RM) moved at maximal speed are recommended as the most potent loading stimuli for improving power output [23]. However, this type of routine can only be performed on a Smith machine while using the BPT exercise (i.e., maximal bar velocity is obtained at the moment of throw). The traditional free-weight bench press exercise does not allow for the attainment of maximal velocity of execution (i.e., velocity equals 0 m/s at maximal arm extension). Thus, ballistic exercises could be an optimal choice for power training as they allow for greater velocity, and muscle activation in comparison to similar traditional resistance training routines [21]. Perhaps, the main asset of the BPT is the maximum acceleration of a given load, which ultimately produces high movement velocity in a short time [25]. In this respect, the loads applied in ballistic exercises during training will depend on the specific requirements of particular sport disciplines and will determine success in numerous power-based competitions [23]. Furthermore, the BPT performance has been associated with overall performance in different sport-specific tasks [26,27,28,29]. Therefore, it seems reasonable to use the BPT as a means of testing upper-body ballistic performance. 
Although the BPT is indicated as the most effective exercise for developing power of the upper limbs [30], previous studies have not determined the acute effects of CAF on power output and bar velocity during this type of exercise.\nBurke [31] has suggested that, in the current literature, there is a lack of data about the practical use of supplements in competitive sports because experimental protocols are often different from sports practice. In case of CAF, most studies considering the acute effects of this supplement were assessed on the basis of only a single set of an exercise [7,8,32], while real resistance training sessions in trained individuals rarely contain only one set of a particular exercise [31]. On the other hand, the investigations analyzing the acute effects of CAF on performance during successive sets of resistance exercises are scarce. Bowtell et al. [3] showed that pre-exercise CAF (6 mg/kg/b.m.) intake improved total exercise time during 5 sets of one-legged knee extensions performed to failure in comparison to a placebo (PLAC). It is worth noting that this ergogenic effect was achieved despite significantly lower muscle phosphocreatine concentration (PCr) and pH in the latter sets of an exercise in the CAF trial. Further, CAF ingestion (6 mg/kg/b.m.) attenuated the increase in interstitial potassium during one-legged knee extensions at 20 W (10 min) and 50 W (3 × 3 min) measured using microdialysis [33], which resulted in a 16% improvement in high-intensity exercise capacity.\nMost studies related to the acute impact of CAF intake on power output and bar velocity have used participants unhabituated to CAF or individuals with low-to-moderate daily consumption of this stimulant [11,16,34]. However, the analysis of urinary CAF concentrations after official competitions suggests that CAF is widely employed before or during exercise to enhance performance [35,36]. This would mean that it is highly likely that some athletes are habituated to CAF due to daily consumption of caffeine-containing products. The existence of athletes habituated to CAF may be particularly common in sports such as cycling, rowing, triathlon, athletics, and weightlifting, sport disciplines that benefit from the use of ballistic exercises during training to increase power output. Habitual CAF intake modifies physiological responses to acute ingestion of this stimulant by the up-regulation of adenosine receptors [37,38]. This effect would produce a progressive reduction of CAF ergogenicity in those athletes consuming CAF on a regular basis, because the newly created adenosine receptors may bind to adenosine and induce fatigue. However, the fact of habituation to the ergogenic benefits of CAF is still inconclusive. The studies by Dodd et al. [39] and de Souza [40] showed similar responses to endurance exercise after acute CAF intake between low and habitual CAF consumers, although this is not always the case [41]. Considering the above, the use of cross-sectional investigations including participants with different degrees of habituation to CAF may explain the lack of consistency when concluding about tolerance to the ergogenic effect of CAF. Lara’s et al. [42] crossover design showed that the ergogenicity of CAF was reduced when the substance was consumed daily for 20 days, yet afterwards the ergogenic properties of CAF were maintained. On the contrary, tolerance to some of the side-effects associated to CAF has been observed in habitual consumers of CAF [43]. 
However, only two previous studies analyzed the impact of CAF intake in habitual CAF consumers using resistance exercise test protocols [18,19,20]. These investigations indicate that CAF ergogenicity to power output is mostly reduced in individuals habituated to CAF, while only high doses (>9 mg/kg/b.m.) may exert some benefit in maximal strength [19,20]. However, to date, there is no available data regarding the influence of acute CAF intake on power output and bar velocity during ballistic exercises in athletes habitually consuming CAF.\nGiven the widespread use of the BPT exercise as a mean of developing power output in the upper limbs [44,45] and the widespread use of CAF in sport, it would be interesting to investigate whether acute CAF intake affects power output and bar velocity in athletes habituated to CAF. For this reason, the aim of the present study was to evaluate the effects of the acute intake of 3 and 6 mg/kg/b.m. of CAF on power output and bar velocity during five sets of the BPT in participants habituated to CAF. It was hypothesized that acute CAF intake would increase power output and bar velocity during the BPT training session when compared to a control situation, even in participants habituated to CAF.", "A randomized, double-blind, PLAC-controlled crossover design was used for this investigation. Initially, the participants performed a familiarization session with the experimental protocols that included a 1 RM measurement for the bench press exercise. Afterwards, they performed 3 different experimental sessions with a one-week interval between sessions to allow for complete recovery and a wash-out of ingested substances (Figure 1). During the 3 experimental sessions, the study participants either ingested a PLAC, 3 mg/kg/b.m. of CAF (CAF-3) or 6 mg/kg/b.m. of CAF (CAF-6). One hour after ingestion of CAF or PLAC, they performed 5 sets of 2 repetitions of the BPT exercise at 30% 1 RM. Both CAF and PLAC were administered orally to allow for peak blood CAF concentration during the training session and at least 2 h after the last meal to avoid any interference of the diet with the absorption of the experimental substances. CAF supplementation was provided to participants in the form of unidentifiable capsules (Caffeine Kick®, Olimp Laboratories, Debica, Poland). The manufacturer of the CAF capsules also provided identical PLAC capsules filled with all-purpose flour. Participants refrained from strenuous physical activity the day before the experimental trials but they maintained their training routines during the duration of the experiment to avoid any performance decrement due to inactivity. Additionally, the participants maintained their dietary habits during the study period, including daily CAF intake. They received a list of products containing CAF which could not be consumed within 12 h of each experimental trial. Compliance was tested verbally and by using dietary records. Additionally, the participants were required to refrain from alcohol and tobacco, medications or dietary supplements for two weeks prior to the experiment. All subjects registered their calorie intake using the “Myfitness pal” software [46] (Under Armour, Baltimore, MD, USA) every 24 h before the testing procedure, to ensure that the caloric intake was similar between experimental sessions.\n 2.1. 
Participants Twelve healthy strength-trained male athletes (age: 25.3 ± 1.7 years., body mass: 88.4 ± 16.5 kg, body mass index (BMI): 26.5 ± 4.7, bench press 1 RM: 128.6 ± 36.0 kg; mean ± SD) volunteered to participate in the study. All participants completed a written consent form after they had been informed of the risks and benefits of the study protocols. The participants had a minimum of 3 years of strength training experience (4.4 ± 1.6 years). All of them were classified as high habitual CAF consumers according to the classification recently proposed by de Souza Gonçalves et al. [40]. They self-reported their daily ingestion of CAF (5.0 ± 0.95 mg/kg/b.m./day, 443 ± 142 mg/day) based on a Food Frequency Questionnaire (FFQ). The inclusion criteria were as follows: (a) free from neuromuscular and musculoskeletal disorders, (b) 1 RM bench press performance with a load of at least 120% body mass, (c) habitual CAF intake in the range of 4–6 mg/day/kg/b.m. The athletes were excluded from the study when they suffered from any pathology or injury or when they were unable to perform the exercise protocol at the maximum effort. The investigation protocols were approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice (March 2019), Poland, according to the ethical standards of the latest version of the Declaration of Helsinki, 2013.\nTwelve healthy strength-trained male athletes (age: 25.3 ± 1.7 years., body mass: 88.4 ± 16.5 kg, body mass index (BMI): 26.5 ± 4.7, bench press 1 RM: 128.6 ± 36.0 kg; mean ± SD) volunteered to participate in the study. All participants completed a written consent form after they had been informed of the risks and benefits of the study protocols. The participants had a minimum of 3 years of strength training experience (4.4 ± 1.6 years). All of them were classified as high habitual CAF consumers according to the classification recently proposed by de Souza Gonçalves et al. [40]. They self-reported their daily ingestion of CAF (5.0 ± 0.95 mg/kg/b.m./day, 443 ± 142 mg/day) based on a Food Frequency Questionnaire (FFQ). The inclusion criteria were as follows: (a) free from neuromuscular and musculoskeletal disorders, (b) 1 RM bench press performance with a load of at least 120% body mass, (c) habitual CAF intake in the range of 4–6 mg/day/kg/b.m. The athletes were excluded from the study when they suffered from any pathology or injury or when they were unable to perform the exercise protocol at the maximum effort. The investigation protocols were approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice (March 2019), Poland, according to the ethical standards of the latest version of the Declaration of Helsinki, 2013.\n 2.2. Habitual Caffeine Intake Assessment Daily CAF intake was measured by an adapted version of the Food Frequency Questionnaire (FFQ) proposed by Bühler et al. [47]. Household measures were employed to individually assess the amount of food consumed during a day, week and month. The list was composed of dietary products with moderate-to-high CAF content including different types of coffee, tea, energy drinks, cocoa products, popular beverages, medications, and CAF supplements. Nutritional tables were used for database construction [48,49,50] and an experienced nutritionist calculated the daily CAF intake for each participant.\nDaily CAF intake was measured by an adapted version of the Food Frequency Questionnaire (FFQ) proposed by Bühler et al. [47]. 
Household measures were employed to individually assess the amount of food consumed during a day, week and month. The list was composed of dietary products with moderate-to-high CAF content including different types of coffee, tea, energy drinks, cocoa products, popular beverages, medications, and CAF supplements. Nutritional tables were used for database construction [48,49,50] and an experienced nutritionist calculated the daily CAF intake for each participant.\n 2.3. Familiarization Session and One Repetition Maximum Test A familiarization session with the experimental procedures preceded 1 RM testing in the bench press exercise. In this session, the athletes arrived at the laboratory between 9:00 and 10:00 am. and cycled on an ergometer for 5 min. Afterwards, they performed 15 repetitions at 20% of their estimated 1 RM in the barbell bench press exercise followed by 10 repetitions at 40% 1 RM, 5 repetitions at 60% 1 RM and 3 repetitions at 80% 1 RM. Then they executed single repetitions of the bench press exercise with a 5 min rest interval between successful attempts. The load for each subsequent attempt was increased by 2.5 to 10 kg, and the process was repeated until failure. Hand placement on the barbell was individually selected grip width (~150% individual bi-acromial distance). After completing the 1 RM test in the bench press exercise, the participants performed a maximal BPT on a Smith machine with a load of 30% 1 RM from 1 RM bench press test result, with a maximal tempo of movement.\nA familiarization session with the experimental procedures preceded 1 RM testing in the bench press exercise. In this session, the athletes arrived at the laboratory between 9:00 and 10:00 am. and cycled on an ergometer for 5 min. Afterwards, they performed 15 repetitions at 20% of their estimated 1 RM in the barbell bench press exercise followed by 10 repetitions at 40% 1 RM, 5 repetitions at 60% 1 RM and 3 repetitions at 80% 1 RM. Then they executed single repetitions of the bench press exercise with a 5 min rest interval between successful attempts. The load for each subsequent attempt was increased by 2.5 to 10 kg, and the process was repeated until failure. Hand placement on the barbell was individually selected grip width (~150% individual bi-acromial distance). After completing the 1 RM test in the bench press exercise, the participants performed a maximal BPT on a Smith machine with a load of 30% 1 RM from 1 RM bench press test result, with a maximal tempo of movement.\n 2.4. Experimental Sessions During experimental sessions, the athletes participated in three identical training trials. All trials took place between 9.00 and 11.00 am. to avoid the effect of circadian variations on the outcomes of the investigation. After replicating the warm-up procedures of the familiarization trial, the athletes performed 5 sets of the 2 BPT repetitions at 30% 1 RM on the Smith machine. The repetitions were performed without rest to produce a ballistic movement while the rest interval between sets was 3 min. The participants were encouraged to produce maximal velocity during both the eccentric and concentric phase of the BPT movement. Two spotters were present on each side of the bar during the exercise protocol to ensure safety. To standardize the exercise protocol for all trials, each BPT was performed without bouncing the barbell off the chest, with the lower back in contact with the bench and without any pause between the eccentric and concentric phases of the movement. 
A rotatory encoder (Tendo Power Analyzer, Tendo Sport Machines, Trencin, Slovakia) was used for instantaneous recording of bar velocity during the whole range of motion, as in previous investigations [51]. During each BPT, peak power output (PP, in W) mean power output (MP, in W); peak bar velocity (PV, in m/s); and mean bar velocity (MV, in m/s) were registered. MP and MV were obtained as the mean of the two repetitions while PP and PV were obtained from the best repetition.\nDuring experimental sessions, the athletes participated in three identical training trials. All trials took place between 9.00 and 11.00 am. to avoid the effect of circadian variations on the outcomes of the investigation. After replicating the warm-up procedures of the familiarization trial, the athletes performed 5 sets of the 2 BPT repetitions at 30% 1 RM on the Smith machine. The repetitions were performed without rest to produce a ballistic movement while the rest interval between sets was 3 min. The participants were encouraged to produce maximal velocity during both the eccentric and concentric phase of the BPT movement. Two spotters were present on each side of the bar during the exercise protocol to ensure safety. To standardize the exercise protocol for all trials, each BPT was performed without bouncing the barbell off the chest, with the lower back in contact with the bench and without any pause between the eccentric and concentric phases of the movement. A rotatory encoder (Tendo Power Analyzer, Tendo Sport Machines, Trencin, Slovakia) was used for instantaneous recording of bar velocity during the whole range of motion, as in previous investigations [51]. During each BPT, peak power output (PP, in W) mean power output (MP, in W); peak bar velocity (PV, in m/s); and mean bar velocity (MV, in m/s) were registered. MP and MV were obtained as the mean of the two repetitions while PP and PV were obtained from the best repetition.\n 2.5. Statistical Analysis Data are presented as the mean ± SD. All variables presented a normal distribution according to the Shapiro-Wilk test. Verification of differences in peak power output (PP), mean power output (MP), peak bar velocity (PV), and mean bar velocity (MV) was performed using a two-way (substance × set) analysis of variance (ANOVA) with repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using the Tukey’s test. Percent changes and 95% confidence intervals were also calculated. Effect sizes (Cohen’s d) were reported where appropriate and interpreted as large (d ≥ 0.80); moderate (d between 0.79 and 0.50); small (d between 0.49 and 0.20); and trivial (d < 0.20); [52].\nData are presented as the mean ± SD. All variables presented a normal distribution according to the Shapiro-Wilk test. Verification of differences in peak power output (PP), mean power output (MP), peak bar velocity (PV), and mean bar velocity (MV) was performed using a two-way (substance × set) analysis of variance (ANOVA) with repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using the Tukey’s test. Percent changes and 95% confidence intervals were also calculated. 
Effect sizes (Cohen’s d) were reported where appropriate and interpreted as large (d ≥ 0.80); moderate (d between 0.79 and 0.50); small (d between 0.49 and 0.20); and trivial (d < 0.20); [52].", "Twelve healthy strength-trained male athletes (age: 25.3 ± 1.7 years., body mass: 88.4 ± 16.5 kg, body mass index (BMI): 26.5 ± 4.7, bench press 1 RM: 128.6 ± 36.0 kg; mean ± SD) volunteered to participate in the study. All participants completed a written consent form after they had been informed of the risks and benefits of the study protocols. The participants had a minimum of 3 years of strength training experience (4.4 ± 1.6 years). All of them were classified as high habitual CAF consumers according to the classification recently proposed by de Souza Gonçalves et al. [40]. They self-reported their daily ingestion of CAF (5.0 ± 0.95 mg/kg/b.m./day, 443 ± 142 mg/day) based on a Food Frequency Questionnaire (FFQ). The inclusion criteria were as follows: (a) free from neuromuscular and musculoskeletal disorders, (b) 1 RM bench press performance with a load of at least 120% body mass, (c) habitual CAF intake in the range of 4–6 mg/day/kg/b.m. The athletes were excluded from the study when they suffered from any pathology or injury or when they were unable to perform the exercise protocol at the maximum effort. The investigation protocols were approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice (March 2019), Poland, according to the ethical standards of the latest version of the Declaration of Helsinki, 2013.", "Daily CAF intake was measured by an adapted version of the Food Frequency Questionnaire (FFQ) proposed by Bühler et al. [47]. Household measures were employed to individually assess the amount of food consumed during a day, week and month. The list was composed of dietary products with moderate-to-high CAF content including different types of coffee, tea, energy drinks, cocoa products, popular beverages, medications, and CAF supplements. Nutritional tables were used for database construction [48,49,50] and an experienced nutritionist calculated the daily CAF intake for each participant.", "A familiarization session with the experimental procedures preceded 1 RM testing in the bench press exercise. In this session, the athletes arrived at the laboratory between 9:00 and 10:00 am. and cycled on an ergometer for 5 min. Afterwards, they performed 15 repetitions at 20% of their estimated 1 RM in the barbell bench press exercise followed by 10 repetitions at 40% 1 RM, 5 repetitions at 60% 1 RM and 3 repetitions at 80% 1 RM. Then they executed single repetitions of the bench press exercise with a 5 min rest interval between successful attempts. The load for each subsequent attempt was increased by 2.5 to 10 kg, and the process was repeated until failure. Hand placement on the barbell was individually selected grip width (~150% individual bi-acromial distance). After completing the 1 RM test in the bench press exercise, the participants performed a maximal BPT on a Smith machine with a load of 30% 1 RM from 1 RM bench press test result, with a maximal tempo of movement.", "During experimental sessions, the athletes participated in three identical training trials. All trials took place between 9.00 and 11.00 am. to avoid the effect of circadian variations on the outcomes of the investigation. 
After replicating the warm-up procedures of the familiarization trial, the athletes performed 5 sets of the 2 BPT repetitions at 30% 1 RM on the Smith machine. The repetitions were performed without rest to produce a ballistic movement while the rest interval between sets was 3 min. The participants were encouraged to produce maximal velocity during both the eccentric and concentric phase of the BPT movement. Two spotters were present on each side of the bar during the exercise protocol to ensure safety. To standardize the exercise protocol for all trials, each BPT was performed without bouncing the barbell off the chest, with the lower back in contact with the bench and without any pause between the eccentric and concentric phases of the movement. A rotatory encoder (Tendo Power Analyzer, Tendo Sport Machines, Trencin, Slovakia) was used for instantaneous recording of bar velocity during the whole range of motion, as in previous investigations [51]. During each BPT, peak power output (PP, in W) mean power output (MP, in W); peak bar velocity (PV, in m/s); and mean bar velocity (MV, in m/s) were registered. MP and MV were obtained as the mean of the two repetitions while PP and PV were obtained from the best repetition.", "Data are presented as the mean ± SD. All variables presented a normal distribution according to the Shapiro-Wilk test. Verification of differences in peak power output (PP), mean power output (MP), peak bar velocity (PV), and mean bar velocity (MV) was performed using a two-way (substance × set) analysis of variance (ANOVA) with repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using the Tukey’s test. Percent changes and 95% confidence intervals were also calculated. Effect sizes (Cohen’s d) were reported where appropriate and interpreted as large (d ≥ 0.80); moderate (d between 0.79 and 0.50); small (d between 0.49 and 0.20); and trivial (d < 0.20); [52].", "The two-way repeated measures ANOVA indicated no significant substance × set main interaction effect for MP (F = 1.19; p = 0.32); MV (F = 1.18; p = 0.32); PP (F = 1.05; p = 0.40); PV (F = 1.09; p = 0.38). However, there was a significant main effect of substance in MP (F = 7.27; p < 0.01) and MV (F = 6.75; p < 0.01). No statistically significant main effect of substance was revealed in PP (F = 2.91; p = 0.07) and PV (F = 2.63; p = 0.09; Table 1).\nPost hoc analyses for main effect of substance indicated significant increases in MP (p < 0.01; ES = 0.14) and MV (p = 0.01; ES = 0.78) in BPT (mean of the 5 sets) after the intake of CAF-3 compared to PLAC as well as significant increases in MP (p = 0.01; ES = 0.13) and MV (p = 0.01; ES = 0.72) in the BPT (mean of the 5 sets) after the intake of CAF-6 compared to PLAC. There were no significant differences in MP and MV between the two doses of CAF (CAF-3 vs. CAF-6). The results of particular sets in MP, MV, PP, and PV as well ES between PLAC and CAF-3, CAF-6 in each set are presented in Table 2.\nFigure 2 and Figure 3 represent the individual responses induced by CAF-3 and CAF-6, in comparison to the placebo, for MP and MV. 
Eleven out of 12 participants showed an increase in MP and MV after the ingestion of CAF-3, while CAF-6 produced higher values for MP and MV in 10 out of 12 participants.\n(Figure caption: the y-axis represents the per-participant difference in mean bar velocity during the 5 sets of the BPT between PLAC and CAF-3 and between PLAC and CAF-6.)", "The main finding of this study was that acute CAF intake has a positive effect on MP and MV during a training session of the BPT performed at 30% 1 RM. Interestingly, both the 3 and 6 mg/kg/b.m. doses of CAF had similar effectiveness in enhancing performance when compared to PLAC. Additionally, the ergogenic effect of CAF on MP and MV was evident in most participants, as all of them responded by improving performance with either CAF-3 or CAF-6, even though they were catalogued as individuals habituated to CAF (Figure 2 and Figure 3). However, the study did not show significant changes in PP and PV after CAF intake with either dose of CAF (3 and 6 mg/kg/b.m.) compared to PLAC. These outcomes suggest that acute CAF intake in a moderate dose (from 3 to 6 mg/kg/b.m.) is effective in increasing mean power and bar velocity during the BPT without a significant influence on peak values of these variables. These results suggest that CAF can be effectively used to acutely improve this power-specific training routine even in individuals habituated to CAF, although the long-term training effects with CAF should be further investigated.\n(Figure caption: the y-axis represents the per-participant difference in mean power output during the 5 sets of the BPT between PLAC and CAF-3 and between PLAC and CAF-6.)\nPrevious research showed that acute CAF intake increases power output during the bench press exercise [8,14,15,16]. However, most of these studies included only one set of the exercise, which is not the habitual practice during sports training, where several sets of a particular exercise are performed in order to obtain significant adaptations derived from training. In the present study, the main effect of an increase in MP and MV after the intake of CAF-3 and CAF-6 over the placebo occurred across a training session consisting of several sets (Table 2). The ergogenic effect of CAF observed during the BPT is partly consistent with previous findings [8,16]. However, it should be emphasized that this is the first study investigating the effects of CAF during a training session that includes several sets of a ballistic exercise. Experimental procedures in which CAF is used with more than one set of an upper body exercise are scarce [18,53,54]. The study of Lane and Byrd [53] showed that the intake of 300 mg of CAF, representing 3.5 mg/kg/b.m., increased peak velocity during 10 sets of the bench press exercise at 80% 1 RM compared to PLAC. Wilk et al. [18] did not show significant changes after the intake of different doses of CAF (3, 6, and 9 mg/kg/b.m.) in either mean or peak power output and bar velocity during the BP exercise at 50% 1 RM (3 sets of 5 repetitions), although this investigation was carried out in athletes habituated to CAF. No changes in mean and peak bar velocity after a CAF intake of 150 mg, representing 1.74 mg/kg/b.m., were observed in a study by Lane et al. [54] where 10 sets of 3 repetitions of the bench press exercise were performed at 80% 1 RM. 
The current study is quite innovative because it is the first investigation designed to assess the effects of CAF intake on power output using a ballistic upper body exercise with a low external load (30% 1 RM), as used for power training of athletes [55]. The current state of the literature indicates that CAF is useful in increasing power during one or multiple sets of the bench press exercise when the dose ingested is >3 mg/kg/b.m., but it seems particularly effective when using low and moderate loads during an explosive exercise, such as the BPT.\nThe overall increase of MP and MV during the training session of the BPT after ingestion of CAF-3 and CAF-6 can also be attributed to increased pre-exercise central excitability. Specifically, the pre-exercise ingestion of CAF would allow the athletes to maintain a certain amount of force even in the presence of biochemical changes within the working muscle that lead to fatigue [3]. Under this theory, CAF intake would allow higher physical performance because it would help to maintain the neural response even in the presence of metabolic perturbations such as low muscle pH. This effect may be accompanied by reductions in interstitial potassium accumulation found after CAF intake [2], which ultimately leads to the maintenance of excitability during exercise [33,56]. In the central nervous system (CNS), CAF binds to adenosine receptors that influence the release of neurotransmitters, such as noradrenaline and acetylcholine [4,57,58,59], consequently increasing muscle tension [60]. However, in the current investigation, this purported effect of CAF on the CNS was not sufficient to enhance PP and PV during the BPT at 30% 1 RM. Thus, reduced fatigue through CAF-induced modulation of both peripheral and central neural processes may explain the obtained results and the higher MP and MV of the bar during the BPT training session. Nevertheless, attributing the observed ergogenic effect to these mechanisms remains speculative at this point because no measurements were carried out to test the origin of caffeine’s ergogenic effects.\nIt should be taken into consideration that the participants in the current study were habitual CAF users. In contrast, most investigations aimed at determining the ergogenic effect of CAF on muscle performance have selected individuals unhabituated to this stimulant or with low-to-moderate daily consumption of CAF (e.g., from 58 to 250 mg/day) [11,16,34], to avoid the effects that tolerance to CAF may have on the results. However, CAF is an ergogenic aid frequently used in training and competition, and it is likely that some athletes seeking the ergogenic benefits of CAF are already habituated to this substance due to the chronic use of caffeine-containing supplements during training and competition. In fact, previous investigations have suggested that between 75% and 90% of athletes use CAF in competitive and training settings [35,36,61], suggesting that studies on the effect of acute CAF intake on physical performance during real training and competition settings are particularly important in athletes habituated to CAF. 
In this respect, previous research using well-controlled CAF treatments has suggested that the habitual intake of this stimulant may progressively reduce its ergogenic effect on exercise performance [42,62]; thus, it has been speculated that the ergogenic effect of CAF could be dampened in habitual CAF users.\nTo the authors’ knowledge, only three previous studies analyzed power output of the upper limbs in a group of participants habituated to CAF [9,18,19,20]. The study of Sabol et al. [9] showed an increase in medicine ball throwing distance after the acute intake of 6 mg/kg/b.m. of CAF, but the doses of 2 and 4 mg/kg/b.m. did not show any differences from the PLAC. The study by Wilk et al. [18] did not show increases in power output and bar velocity during the bench press exercise in high habitual CAF users who ingested from 3 to 9 mg/kg/b.m. Although it has been theorized that the reduction in the ergogenic effects of CAF in habitual users can be mitigated by using doses greater than the daily habitual intake [63], previous investigations indicate that athletes habituated to CAF do not benefit from the acute ingestion of CAF in doses above their habitual intake, while the prevalence of side effects is greatly increased [19,20]. Interestingly, participants in the presented study self-reported a daily ingestion of CAF that amounted to 5.0 ± 0.95 mg/kg/b.m. (443 ± 142 mg of CAF per day), so the acute CAF doses (especially CAF-3) were close to or below this habitual intake, and some performance enhancements were obtained even when the dose of CAF did not exceed the value of habitual consumption. In any case, although the current investigation found a positive effect of CAF on mean power output and mean bar velocity during the BPT in athletes habituated to CAF, it is still possible that the effect of this substance is higher in unhabituated individuals.\nIn addition to its strengths, the current study presents limitations that should be addressed. Although the results showed a significant main effect on MP and MV after CAF intake, the direct causes of these changes cannot be determined and explained. The study did not include biochemical analyses that could explain the obtained results. In addition, blood samples were not obtained; thus, we have no data about serum CAF concentrations with each of the dosages of CAF employed in this investigation. Further, we did not analyze genetic intolerance to CAF in the tested subjects. However, the participants of this study did not report any side effects after consuming CAF in the six months prior to the experiment. Because the response to CAF is related to individual tolerance of this substance [42], the dose [19,20], and gender [64], the results of this study should only be translated to males habituated to CAF who use low to moderate CAF doses to enhance performance. Another limitation of the study was that the 1 RM test was performed using the barbell bench press exercise while the BPT was performed on a Smith machine during the experimental trials to increase the safety of participants and investigators. Although there is a high transfer between the results obtained in both types of exercise, the calculation of loading would be more reliable if both evaluations were performed on the same resistance exercise.
In any case, this limitation did not affect the outcomes of the investigation because the load was the same for all experimental trials.", "The results of the present study indicate that acute doses of CAF, between 3 and 6 mg/kg/b.m., ingested before the onset of an explosive resistance exercise produced an overall effect on mean power output and mean bar velocity during a BPT training session in a group of habitual CAF users. The main effect in mean power and bar velocity was found in several sets during the trial, which may indicate that the use of CAF was effective in increasing performance in the whole training session. In contrast, no significant changes were observed for peak power output and peak bar velocity. These results suggest that the ingestion of CAF prior to ballistic exercise can enhance the outcomes of resistance training. However, the results of our study refer only to power output and bar velocity of the upper limbs during the BPT with an external load of 30% 1 RM, and further investigations should consider the effect of CAF with different loads or the use of lower-body exercises." ]
[ "intro", null, "subjects", null, null, null, null, "results", "discussion", "conclusions" ]
[ "ballistic exercise", "upper limbs", "resistance exercise", "ergogenic substances", "sport performance" ]
1. Introduction: Caffeine (CAF) is one of the most common substances used in sport which enhances physical performance [1]. Although CAF may affect various body tissues [2,3], there is a growing body of evidence in animal [4] and human models [5] sustaining the ability of CAF to act as an adenosine antagonist, as the main mechanism behind CAF ergogenic effect. Current research recommends doses of CAF ranging from 3 to 6 mg/kg body mass to elicit ergogenic benefits, while the time of ingestion (from 30 to 90 min before exercise) and the form of ingestion (pills, liquid, chewing gum) are less significant as CAF is rapidly absorbed after ingestion [6]. However, the optimal protocol of CAF supplementation may differ based on the type and duration of exercise, previous habituation to CAF and the type of muscle contraction [6,7,8,9,10]. Acute CAF intake causes a slightly different response to upper and lower body exercise [10], despite the lack of a mechanisms explaining this phenomenon. A recent meta-analysis about the effects of CAF on muscle strength and power output found that CAF significantly improved upper but not lower body strength [7]. Although the outcomes of the first investigations were contradictory [11,12,13], more recent studies have found a clear effect of CAF on several forms of upper body muscle performance [14,15,16], when the used dose was at least 3 mg/kg/b.m. [17]. Interestingly, the effect of CAF on upper body muscle performance may be partially dampened in athletes habituated to CAF because the regular use of CAF-containing products [18] may develop tolerance to this substance. It must be noticed that this decreased ergogenic effect of CAF is not removed even with the ingestion of doses >9 mg/kg/b.m. [19,20]. Most studies on the acute effects of CAF on upper body muscle performance used the bench press exercise. The bench press exercise is widely used as a means of developing strength and power of the upper limbs [21,22]. However, other authors, recommend the use of ballistic exercises for the development of upper body power, such as the bench press throw (BPT) [23,24]. Specifically, to increase power output, the loads ranging from 0% to 50% of one-repetition maximum (1 RM) moved at maximal speed are recommended as the most potent loading stimuli for improving power output [23]. However, this type of routine can only be performed on a Smith machine while using the BPT exercise (i.e., maximal bar velocity is obtained at the moment of throw). The traditional free-weight bench press exercise does not allow for the attainment of maximal velocity of execution (i.e., velocity equals 0 m/s at maximal arm extension). Thus, ballistic exercises could be an optimal choice for power training as they allow for greater velocity, and muscle activation in comparison to similar traditional resistance training routines [21]. Perhaps, the main asset of the BPT is the maximum acceleration of a given load, which ultimately produces high movement velocity in a short time [25]. In this respect, the loads applied in ballistic exercises during training will depend on the specific requirements of particular sport disciplines and will determine success in numerous power-based competitions [23]. Furthermore, the BPT performance has been associated with overall performance in different sport-specific tasks [26,27,28,29]. Therefore, it seems reasonable to use the BPT as a means of testing upper-body ballistic performance. 
Although the BPT is indicated as the most effective exercise for developing power of the upper limbs [30], previous studies have not determined the acute effects of CAF on power output and bar velocity during this type of exercise. Burke [31] has suggested that, in the current literature, there is a lack of data about the practical use of supplements in competitive sports because experimental protocols are often different from sports practice. In case of CAF, most studies considering the acute effects of this supplement were assessed on the basis of only a single set of an exercise [7,8,32], while real resistance training sessions in trained individuals rarely contain only one set of a particular exercise [31]. On the other hand, the investigations analyzing the acute effects of CAF on performance during successive sets of resistance exercises are scarce. Bowtell et al. [3] showed that pre-exercise CAF (6 mg/kg/b.m.) intake improved total exercise time during 5 sets of one-legged knee extensions performed to failure in comparison to a placebo (PLAC). It is worth noting that this ergogenic effect was achieved despite significantly lower muscle phosphocreatine concentration (PCr) and pH in the latter sets of an exercise in the CAF trial. Further, CAF ingestion (6 mg/kg/b.m.) attenuated the increase in interstitial potassium during one-legged knee extensions at 20 W (10 min) and 50 W (3 × 3 min) measured using microdialysis [33], which resulted in a 16% improvement in high-intensity exercise capacity. Most studies related to the acute impact of CAF intake on power output and bar velocity have used participants unhabituated to CAF or individuals with low-to-moderate daily consumption of this stimulant [11,16,34]. However, the analysis of urinary CAF concentrations after official competitions suggests that CAF is widely employed before or during exercise to enhance performance [35,36]. This would mean that it is highly likely that some athletes are habituated to CAF due to daily consumption of caffeine-containing products. The existence of athletes habituated to CAF may be particularly common in sports such as cycling, rowing, triathlon, athletics, and weightlifting, sport disciplines that benefit from the use of ballistic exercises during training to increase power output. Habitual CAF intake modifies physiological responses to acute ingestion of this stimulant by the up-regulation of adenosine receptors [37,38]. This effect would produce a progressive reduction of CAF ergogenicity in those athletes consuming CAF on a regular basis, because the newly created adenosine receptors may bind to adenosine and induce fatigue. However, the fact of habituation to the ergogenic benefits of CAF is still inconclusive. The studies by Dodd et al. [39] and de Souza [40] showed similar responses to endurance exercise after acute CAF intake between low and habitual CAF consumers, although this is not always the case [41]. Considering the above, the use of cross-sectional investigations including participants with different degrees of habituation to CAF may explain the lack of consistency when concluding about tolerance to the ergogenic effect of CAF. Lara’s et al. [42] crossover design showed that the ergogenicity of CAF was reduced when the substance was consumed daily for 20 days, yet afterwards the ergogenic properties of CAF were maintained. On the contrary, tolerance to some of the side-effects associated to CAF has been observed in habitual consumers of CAF [43]. 
However, only two previous studies analyzed the impact of CAF intake in habitual CAF consumers using resistance exercise test protocols [18,19,20]. These investigations indicate that CAF ergogenicity to power output is mostly reduced in individuals habituated to CAF, while only high doses (>9 mg/kg/b.m.) may exert some benefit in maximal strength [19,20]. However, to date, there is no available data regarding the influence of acute CAF intake on power output and bar velocity during ballistic exercises in athletes habitually consuming CAF. Given the widespread use of the BPT exercise as a mean of developing power output in the upper limbs [44,45] and the widespread use of CAF in sport, it would be interesting to investigate whether acute CAF intake affects power output and bar velocity in athletes habituated to CAF. For this reason, the aim of the present study was to evaluate the effects of the acute intake of 3 and 6 mg/kg/b.m. of CAF on power output and bar velocity during five sets of the BPT in participants habituated to CAF. It was hypothesized that acute CAF intake would increase power output and bar velocity during the BPT training session when compared to a control situation, even in participants habituated to CAF. 2. Materials and Methods: A randomized, double-blind, PLAC-controlled crossover design was used for this investigation. Initially, the participants performed a familiarization session with the experimental protocols that included a 1 RM measurement for the bench press exercise. Afterwards, they performed 3 different experimental sessions with a one-week interval between sessions to allow for complete recovery and a wash-out of ingested substances (Figure 1). During the 3 experimental sessions, the study participants either ingested a PLAC, 3 mg/kg/b.m. of CAF (CAF-3) or 6 mg/kg/b.m. of CAF (CAF-6). One hour after ingestion of CAF or PLAC, they performed 5 sets of 2 repetitions of the BPT exercise at 30% 1 RM. Both CAF and PLAC were administered orally to allow for peak blood CAF concentration during the training session and at least 2 h after the last meal to avoid any interference of the diet with the absorption of the experimental substances. CAF supplementation was provided to participants in the form of unidentifiable capsules (Caffeine Kick®, Olimp Laboratories, Debica, Poland). The manufacturer of the CAF capsules also provided identical PLAC capsules filled with all-purpose flour. Participants refrained from strenuous physical activity the day before the experimental trials but they maintained their training routines during the duration of the experiment to avoid any performance decrement due to inactivity. Additionally, the participants maintained their dietary habits during the study period, including daily CAF intake. They received a list of products containing CAF which could not be consumed within 12 h of each experimental trial. Compliance was tested verbally and by using dietary records. Additionally, the participants were required to refrain from alcohol and tobacco, medications or dietary supplements for two weeks prior to the experiment. All subjects registered their calorie intake using the “Myfitness pal” software [46] (Under Armour, Baltimore, MD, USA) every 24 h before the testing procedure, to ensure that the caloric intake was similar between experimental sessions. 2.1. 
Participants: Twelve healthy strength-trained male athletes (age: 25.3 ± 1.7 years, body mass: 88.4 ± 16.5 kg, body mass index (BMI): 26.5 ± 4.7, bench press 1 RM: 128.6 ± 36.0 kg; mean ± SD) volunteered to participate in the study. All participants completed a written consent form after they had been informed of the risks and benefits of the study protocols. The participants had a minimum of 3 years of strength training experience (4.4 ± 1.6 years). All of them were classified as high habitual CAF consumers according to the classification recently proposed by de Souza Gonçalves et al. [40]. They self-reported their daily ingestion of CAF (5.0 ± 0.95 mg/kg/b.m./day, 443 ± 142 mg/day) based on a Food Frequency Questionnaire (FFQ). The inclusion criteria were as follows: (a) free from neuromuscular and musculoskeletal disorders, (b) 1 RM bench press performance with a load of at least 120% body mass, (c) habitual CAF intake in the range of 4–6 mg/kg/b.m./day. The athletes were excluded from the study when they suffered from any pathology or injury or when they were unable to perform the exercise protocol at the maximum effort. The investigation protocols were approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice (March 2019), Poland, according to the ethical standards of the latest version of the Declaration of Helsinki, 2013. 2.2. Habitual Caffeine Intake Assessment: Daily CAF intake was measured by an adapted version of the Food Frequency Questionnaire (FFQ) proposed by Bühler et al. [47]. Household measures were employed to individually assess the amount of food consumed during a day, week and month. The list was composed of dietary products with moderate-to-high CAF content including different types of coffee, tea, energy drinks, cocoa products, popular beverages, medications, and CAF supplements. Nutritional tables were used for database construction [48,49,50] and an experienced nutritionist calculated the daily CAF intake for each participant. 2.3. Familiarization Session and One Repetition Maximum Test: A familiarization session with the experimental procedures preceded 1 RM testing in the bench press exercise. In this session, the athletes arrived at the laboratory between 9:00 and 10:00 a.m. and cycled on an ergometer for 5 min. Afterwards, they performed 15 repetitions at 20% of their estimated 1 RM in the barbell bench press exercise followed by 10 repetitions at 40% 1 RM, 5 repetitions at 60% 1 RM and 3 repetitions at 80% 1 RM. Then they executed single repetitions of the bench press exercise with a 5 min rest interval between successful attempts. The load for each subsequent attempt was increased by 2.5 to 10 kg, and the process was repeated until failure. Hand placement on the barbell was at an individually selected grip width (~150% of the individual bi-acromial distance). After completing the 1 RM test in the bench press exercise, the participants performed a maximal BPT on a Smith machine with a load of 30% 1 RM from the 1 RM bench press test result, with a maximal tempo of movement. 2.4. Experimental Sessions: During experimental sessions, the athletes participated in three identical training trials. All trials took place between 9:00 and 11:00 a.m. to avoid the effect of circadian variations on the outcomes of the investigation. After replicating the warm-up procedures of the familiarization trial, the athletes performed 5 sets of 2 BPT repetitions at 30% 1 RM on the Smith machine. The repetitions were performed without rest to produce a ballistic movement while the rest interval between sets was 3 min. The participants were encouraged to produce maximal velocity during both the eccentric and concentric phase of the BPT movement. Two spotters were present on each side of the bar during the exercise protocol to ensure safety. To standardize the exercise protocol for all trials, each BPT was performed without bouncing the barbell off the chest, with the lower back in contact with the bench and without any pause between the eccentric and concentric phases of the movement. A rotatory encoder (Tendo Power Analyzer, Tendo Sport Machines, Trencin, Slovakia) was used for instantaneous recording of bar velocity during the whole range of motion, as in previous investigations [51]. During each BPT, peak power output (PP, in W), mean power output (MP, in W), peak bar velocity (PV, in m/s), and mean bar velocity (MV, in m/s) were registered. MP and MV were obtained as the mean of the two repetitions while PP and PV were obtained from the best repetition. 2.5. Statistical Analysis: Data are presented as the mean ± SD. All variables presented a normal distribution according to the Shapiro-Wilk test. Verification of differences in peak power output (PP), mean power output (MP), peak bar velocity (PV), and mean bar velocity (MV) was performed using a two-way (substance × set) analysis of variance (ANOVA) with repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using Tukey's test. Percent changes and 95% confidence intervals were also calculated. Effect sizes (Cohen's d) were reported where appropriate and interpreted as large (d ≥ 0.80); moderate (d between 0.79 and 0.50); small (d between 0.49 and 0.20); and trivial (d < 0.20) [52]. 3. Results: The two-way repeated measures ANOVA indicated no significant substance × set interaction effect for MP (F = 1.19; p = 0.32); MV (F = 1.18; p = 0.32); PP (F = 1.05; p = 0.40); PV (F = 1.09; p = 0.38). However, there was a significant main effect of substance in MP (F = 7.27; p < 0.01) and MV (F = 6.75; p < 0.01). No statistically significant main effect of substance was revealed in PP (F = 2.91; p = 0.07) and PV (F = 2.63; p = 0.09; Table 1). Post hoc analyses for the main effect of substance indicated significant increases in MP (p < 0.01; ES = 0.14) and MV (p = 0.01; ES = 0.78) in the BPT (mean of the 5 sets) after the intake of CAF-3 compared to PLAC, as well as significant increases in MP (p = 0.01; ES = 0.13) and MV (p = 0.01; ES = 0.72) in the BPT (mean of the 5 sets) after the intake of CAF-6 compared to PLAC. There were no significant differences in MP and MV between the two doses of CAF (CAF-3 vs. CAF-6). The results of particular sets in MP, MV, PP, and PV, as well as the ES between PLAC and CAF-3 or CAF-6 in each set, are presented in Table 2. Figure 2 and Figure 3 represent the individual responses induced by CAF-3 and CAF-6, in comparison to the placebo, for MP and MV. Eleven out of 12 participants showed an increase in MP and MV after the ingestion of CAF-3, while CAF-6 produced higher values for MP and MV in 10 out of 12 participants. The y-axis represents the difference in mean bar velocity output during the 5 sets of BPT between PLAC–CAF-3; PLAC–CAF-6 for each individual. 4. Discussion: The main finding of this study was that acute CAF intake has a positive effect on MP and MV during a training session of the BPT performed at 30% 1 RM. Interestingly, both 3 and 6 mg/kg/b.m.
doses of CAF had similar effectiveness in enhancing performance when compared to PLAC. Additionally, the ergogenic effect of CAF on MP and MV was evident in most participants, as all of them responded by improving performance with either CAF-3 or CAF-6, even though they were catalogued as individuals habituated to CAF (Figure 2 and Figure 3). However, the study did not show significant changes in PP and PV after CAF intake with either dose of CAF (3 and 6 mg/kg/b.m.) compared to PLAC. These outcomes suggest that acute CAF intake in a moderate dose (from 3 to 6 mg/kg/b.m.) is effective in increasing mean power and bar velocity during the BPT without a significant influence on peak values of these variables. These results suggest that CAF can be effectively used to acutely improve this power-specific training routine even in individuals habituated to CAF, although the long-term training effects with CAF should be further investigated. The y-axis represents the difference in mean power output during the 5 sets of BPT between PLAC–CAF-3; PLAC–CAF-6 for each individual. Previous research showed that acute CAF intake increases power output during the bench press exercise [8,14,15,16]. However, most of these studies included only one set of the exercise, which is not the habitual practice during sports training, where several sets of a particular exercise are performed in order to obtain significant adaptations derived from training. In the present study, the main effect of an increase in MP and MV after the intake of CAF-3 and CAF-6 over the placebo occurred across a training session consisting of several sets (Table 2). The ergogenic effect of CAF observed during the BPT is partly consistent with previous findings [8,16]. However, it should be emphasized that this is the first study investigating the effects of CAF during a training session that includes several sets of a ballistic exercise. Experimental procedures in which CAF is used with more than one set of an upper body exercise are scarce [18,53,54]. The study of Lane and Byrd [53] showed that the intake of 300 mg of CAF, representing 3.5 mg/kg/b.m., increased peak velocity during 10 sets of the bench press exercise at 80% 1 RM compared to PLAC. Wilk et al. [18] did not show significant changes after the intake of different doses of CAF (3, 6, and 9 mg/kg/b.m.) in both mean and peak power output and bar velocity during the BP exercise at 50% 1 RM (3 sets of 5 repetitions), although this investigation was carried out in athletes habituated to CAF. No changes in mean and peak bar velocity after a CAF intake of 150 mg, representing 1.74 mg/kg/b.m., were observed in a study by Lane et al. [54] where 10 sets of 3 repetitions of the bench press exercise were performed at 80% 1 RM. The current study is novel because it is the first investigation designed to assess the effects of CAF intake on power output using a ballistic upper body exercise with a low external load (30% 1 RM), which is geared toward the power training of athletes [55]. The current state of the literature indicates that CAF is useful in increasing power during one or multiple sets of the bench press exercise when the dose ingested is >3 mg/kg/b.m., but it seems particularly effective when using low and moderate loads during an explosive exercise, such as the BPT. The overall increase of MP and MV during the training session of the BPT after ingestion of CAF-3 and CAF-6 can also be attributed to increased pre-exercise central excitability.
Specifically, the pre-exercise ingestion of CAF would allow the athletes to maintain a certain amount of force even in the presence of biochemical changes within the working muscle that lead to fatigue [3]. Under this theory, CAF intake would allow a higher physical performance because it would help to maintain the neural response even in the presence of metabolic perturbations such as low muscle pH. This effect may be accompanied by reductions in interstitial potassium accumulation found after CAF intake [2], which ultimately leads to the maintenance of excitability during exercise [33,56]. In the central nervous system (CNS), CAF binds to adenosine receptors that influence the release of neurotransmitters, such as noradrenaline and acetylcholine [4,57,58,59], and consequently increases muscle tension [60]. However, in the current investigation, this purported effect of CAF on the CNS was not sufficient to enhance PP and PV during the BPT at 30% 1 RM. Thus, reduced fatigue through CAF-induced modulation of both peripheral and central neural processes may explain the obtained results and the higher MP and MV of the bar during the BPT training session. Nevertheless, the association of the ergogenic effect with these mechanisms is speculative at this moment because no measurements were carried out to test the origin of caffeine’s ergogenic effects. It should be taken into consideration that the participants in the current study were habitual CAF users. In contrast, most of the investigations aimed at determining the ergogenic effect of CAF on muscle performance have selected individuals unhabituated to this stimulant or with low-to-moderate daily consumption of CAF (e.g., from 58 to 250 mg/day) [11,16,34], to avoid the effects that tolerance to CAF may produce. However, CAF is an ergogenic aid frequently used in training and competition, and it is likely that some athletes seeking the ergogenic benefits of CAF are already habituated to this substance due to the chronic use of caffeine-containing supplements during training and competition. In fact, previous investigations have suggested that between 75% and 90% of athletes use CAF in competitive and training settings [35,36,61], which indicates that studies on the effect of acute CAF intake on physical performance during real training and competition settings are particularly important in athletes habituated to CAF. In this respect, previous research using well-controlled CAF treatments has suggested that the habitual intake of this stimulant may progressively reduce its ergogenic effect on exercise performance [42,62]; thus, it has been speculated that the ergogenic effect of CAF could be dampened in habitual CAF users. To the authors’ knowledge, only three previous studies analyzed power output of the upper limbs in a group of participants habituated to CAF [9,18,19,20]. The study of Sabol et al. [9] showed an increase in medicine ball throwing distance after the acute intake of 6 mg/kg/b.m. of CAF, but the doses of 2 and 4 mg/kg/b.m. did not show any differences from the PLAC. The study by Wilk et al. [18] did not show increases in power output and bar velocity during the bench press exercise in high habitual CAF users who ingested from 3 to 9 mg/kg/b.m.
Although it has been theorized that the reduction in the ergogenic effects of CAF in habitual users can be mitigated by using doses greater than the daily habitual intake [63], previous investigations indicate that athletes habituated to CAF do not benefit from the acute ingestion of CAF in doses above their habitual intake, while the prevalence of side effects is greatly increased [19,20]. Interestingly, participants in the presented study self-reported a daily ingestion of CAF that amounted to 5.0 ± 0.95 mg/kg/b.m. (443 ± 142 mg of CAF per day), so the acute CAF doses (especially CAF-3) were close to or below this habitual intake, and some performance enhancements were obtained even when the dose of CAF did not exceed the value of habitual consumption. In any case, although the current investigation found a positive effect of CAF on mean power output and mean bar velocity during the BPT in athletes habituated to CAF, it is still possible that the effect of this substance is higher in unhabituated individuals. In addition to its strengths, the current study presents limitations that should be addressed. Although the results showed a significant main effect on MP and MV after CAF intake, the direct causes of these changes cannot be determined and explained. The study did not include biochemical analyses that could explain the obtained results. In addition, blood samples were not obtained; thus, we have no data about serum CAF concentrations with each of the dosages of CAF employed in this investigation. Further, we did not analyze genetic intolerance to CAF in the tested subjects. However, the participants of this study did not report any side effects after consuming CAF in the six months prior to the experiment. Because the response to CAF is related to individual tolerance of this substance [42], the dose [19,20], and gender [64], the results of this study should only be translated to males habituated to CAF who use low to moderate CAF doses to enhance performance. Another limitation of the study was that the 1 RM test was performed using the barbell bench press exercise while the BPT was performed on a Smith machine during the experimental trials to increase the safety of participants and investigators. Although there is a high transfer between the results obtained in both types of exercise, the calculation of loading would be more reliable if both evaluations were performed on the same resistance exercise. In any case, this limitation did not affect the outcomes of the investigation because the load was the same for all experimental trials. 5. Conclusions: The results of the present study indicate that acute doses of CAF, between 3 and 6 mg/kg/b.m., ingested before the onset of an explosive resistance exercise produced an overall effect on mean power output and mean bar velocity during a BPT training session in a group of habitual CAF users. The main effect in mean power and bar velocity was found in several sets during the trial, which may indicate that the use of CAF was effective in increasing performance in the whole training session. In contrast, no significant changes were observed for peak power output and peak bar velocity. These results suggest that the ingestion of CAF prior to ballistic exercise can enhance the outcomes of resistance training.
However, the results of our study refer only to power output and bar velocity of the upper limbs during the BPT with an external load of 30% 1 RM, and further investigations should consider the effect of CAF with different loads or the use of lower-body exercises.
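As a rough illustration of the analysis described in the Statistical Analysis subsection above (a two-way substance × set repeated-measures ANOVA followed by post-hoc comparisons and Cohen's d effect sizes), the sketch below uses Python with pandas, SciPy, and statsmodels. The column names, the synthetic data, and the use of simple paired t-tests in place of the Tukey procedure named in the paper are all assumptions for illustration; the authors' actual software and scripts are not reported here.

```python
# Hypothetical sketch of the analysis described in "2.5. Statistical Analysis".
# Column names (participant, substance, set_no, MP) and the data values are
# assumptions; they are not the study's data.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x substance x set,
# with the set-average mean power (MP) as the dependent variable.
rng = np.random.default_rng(0)
rows = []
for participant in range(1, 13):                 # 12 athletes
    for substance in ("PLAC", "CAF-3", "CAF-6"): # three crossover conditions
        for set_no in range(1, 6):               # 5 sets of the BPT
            rows.append({"participant": participant,
                         "substance": substance,
                         "set_no": set_no,
                         "MP": 545 + rng.normal(0, 20)})
df = pd.DataFrame(rows)

# Two-way (substance x set) repeated-measures ANOVA.
anova = AnovaRM(df, depvar="MP", subject="participant",
                within=["substance", "set_no"]).fit()
print(anova.anova_table)

# Pairwise follow-up: paired t-tests (a stand-in for the Tukey post hoc named
# in the paper) plus a paired-samples Cohen's d for each caffeine dose vs. placebo.
means = df.groupby(["participant", "substance"])["MP"].mean().unstack()
for caf in ("CAF-3", "CAF-6"):
    diff = means[caf] - means["PLAC"]
    t, p = stats.ttest_rel(means[caf], means["PLAC"])
    d = diff.mean() / diff.std(ddof=1)
    print(f"{caf} vs PLAC: t={t:.2f}, p={p:.3f}, Cohen's d={d:.2f}")
```

With real data, the `anova_table` would report the substance, set, and interaction F-tests that correspond to the values quoted in the Results section.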
Background: The main objective of the current investigation was to evaluate the effects of caffeine on power output and bar velocity during an explosive bench press throw in athletes habituated to caffeine. Methods: Twelve resistance trained individuals habituated to caffeine ingestion participated in a randomized double-blind experimental design. Each participant performed three identical experimental sessions 60 min after the intake of a placebo, 3, and 6 mg/kg/b.m. of caffeine. In each experimental session, the participants performed 5 sets of 2 repetitions of the bench press throw (with a load equivalent to 30% repetition maximum (RM), measured in a familiarization trial) on a Smith machine, while bar velocity and power output were registered with a rotatory encoder. Results: In comparison to the placebo, the intake of caffeine increased mean bar velocity during 5 sets of the bench press throw (1.37 ± 0.05 vs. 1.41 ± 0.05 and 1.41 ± 0.06 m/s for placebo, 3, and 6 mg/kg/b.m., respectively; p < 0.01), as well as mean power output (545 ± 117 vs. 562 ± 118 and 560 ± 107 W; p < 0.01). However, caffeine was not effective at increasing peak velocity (p = 0.09) nor peak power output (p = 0.07) during the explosive exercise. Conclusions: The acute doses of caffeine before resistance exercise may increase mean power output and mean bar velocity during the bench press throw training session in a group of habitual caffeine users. Thus, caffeine prior to ballistic exercises enhances performance during a power-specific resistance training session.
1. Introduction: Caffeine (CAF) is one of the most common substances used in sport which enhances physical performance [1]. Although CAF may affect various body tissues [2,3], there is a growing body of evidence in animal [4] and human models [5] sustaining the ability of CAF to act as an adenosine antagonist, as the main mechanism behind CAF ergogenic effect. Current research recommends doses of CAF ranging from 3 to 6 mg/kg body mass to elicit ergogenic benefits, while the time of ingestion (from 30 to 90 min before exercise) and the form of ingestion (pills, liquid, chewing gum) are less significant as CAF is rapidly absorbed after ingestion [6]. However, the optimal protocol of CAF supplementation may differ based on the type and duration of exercise, previous habituation to CAF and the type of muscle contraction [6,7,8,9,10]. Acute CAF intake causes a slightly different response to upper and lower body exercise [10], despite the lack of a mechanisms explaining this phenomenon. A recent meta-analysis about the effects of CAF on muscle strength and power output found that CAF significantly improved upper but not lower body strength [7]. Although the outcomes of the first investigations were contradictory [11,12,13], more recent studies have found a clear effect of CAF on several forms of upper body muscle performance [14,15,16], when the used dose was at least 3 mg/kg/b.m. [17]. Interestingly, the effect of CAF on upper body muscle performance may be partially dampened in athletes habituated to CAF because the regular use of CAF-containing products [18] may develop tolerance to this substance. It must be noticed that this decreased ergogenic effect of CAF is not removed even with the ingestion of doses >9 mg/kg/b.m. [19,20]. Most studies on the acute effects of CAF on upper body muscle performance used the bench press exercise. The bench press exercise is widely used as a means of developing strength and power of the upper limbs [21,22]. However, other authors, recommend the use of ballistic exercises for the development of upper body power, such as the bench press throw (BPT) [23,24]. Specifically, to increase power output, the loads ranging from 0% to 50% of one-repetition maximum (1 RM) moved at maximal speed are recommended as the most potent loading stimuli for improving power output [23]. However, this type of routine can only be performed on a Smith machine while using the BPT exercise (i.e., maximal bar velocity is obtained at the moment of throw). The traditional free-weight bench press exercise does not allow for the attainment of maximal velocity of execution (i.e., velocity equals 0 m/s at maximal arm extension). Thus, ballistic exercises could be an optimal choice for power training as they allow for greater velocity, and muscle activation in comparison to similar traditional resistance training routines [21]. Perhaps, the main asset of the BPT is the maximum acceleration of a given load, which ultimately produces high movement velocity in a short time [25]. In this respect, the loads applied in ballistic exercises during training will depend on the specific requirements of particular sport disciplines and will determine success in numerous power-based competitions [23]. Furthermore, the BPT performance has been associated with overall performance in different sport-specific tasks [26,27,28,29]. Therefore, it seems reasonable to use the BPT as a means of testing upper-body ballistic performance. 
Although the BPT is indicated as the most effective exercise for developing power of the upper limbs [30], previous studies have not determined the acute effects of CAF on power output and bar velocity during this type of exercise. Burke [31] has suggested that, in the current literature, there is a lack of data about the practical use of supplements in competitive sports because experimental protocols are often different from sports practice. In case of CAF, most studies considering the acute effects of this supplement were assessed on the basis of only a single set of an exercise [7,8,32], while real resistance training sessions in trained individuals rarely contain only one set of a particular exercise [31]. On the other hand, the investigations analyzing the acute effects of CAF on performance during successive sets of resistance exercises are scarce. Bowtell et al. [3] showed that pre-exercise CAF (6 mg/kg/b.m.) intake improved total exercise time during 5 sets of one-legged knee extensions performed to failure in comparison to a placebo (PLAC). It is worth noting that this ergogenic effect was achieved despite significantly lower muscle phosphocreatine concentration (PCr) and pH in the latter sets of an exercise in the CAF trial. Further, CAF ingestion (6 mg/kg/b.m.) attenuated the increase in interstitial potassium during one-legged knee extensions at 20 W (10 min) and 50 W (3 × 3 min) measured using microdialysis [33], which resulted in a 16% improvement in high-intensity exercise capacity. Most studies related to the acute impact of CAF intake on power output and bar velocity have used participants unhabituated to CAF or individuals with low-to-moderate daily consumption of this stimulant [11,16,34]. However, the analysis of urinary CAF concentrations after official competitions suggests that CAF is widely employed before or during exercise to enhance performance [35,36]. This would mean that it is highly likely that some athletes are habituated to CAF due to daily consumption of caffeine-containing products. The existence of athletes habituated to CAF may be particularly common in sports such as cycling, rowing, triathlon, athletics, and weightlifting, sport disciplines that benefit from the use of ballistic exercises during training to increase power output. Habitual CAF intake modifies physiological responses to acute ingestion of this stimulant by the up-regulation of adenosine receptors [37,38]. This effect would produce a progressive reduction of CAF ergogenicity in those athletes consuming CAF on a regular basis, because the newly created adenosine receptors may bind to adenosine and induce fatigue. However, the fact of habituation to the ergogenic benefits of CAF is still inconclusive. The studies by Dodd et al. [39] and de Souza [40] showed similar responses to endurance exercise after acute CAF intake between low and habitual CAF consumers, although this is not always the case [41]. Considering the above, the use of cross-sectional investigations including participants with different degrees of habituation to CAF may explain the lack of consistency when concluding about tolerance to the ergogenic effect of CAF. Lara’s et al. [42] crossover design showed that the ergogenicity of CAF was reduced when the substance was consumed daily for 20 days, yet afterwards the ergogenic properties of CAF were maintained. On the contrary, tolerance to some of the side-effects associated to CAF has been observed in habitual consumers of CAF [43]. 
However, only two previous studies analyzed the impact of CAF intake in habitual CAF consumers using resistance exercise test protocols [18,19,20]. These investigations indicate that CAF ergogenicity to power output is mostly reduced in individuals habituated to CAF, while only high doses (>9 mg/kg/b.m.) may exert some benefit in maximal strength [19,20]. However, to date, there is no available data regarding the influence of acute CAF intake on power output and bar velocity during ballistic exercises in athletes habitually consuming CAF. Given the widespread use of the BPT exercise as a mean of developing power output in the upper limbs [44,45] and the widespread use of CAF in sport, it would be interesting to investigate whether acute CAF intake affects power output and bar velocity in athletes habituated to CAF. For this reason, the aim of the present study was to evaluate the effects of the acute intake of 3 and 6 mg/kg/b.m. of CAF on power output and bar velocity during five sets of the BPT in participants habituated to CAF. It was hypothesized that acute CAF intake would increase power output and bar velocity during the BPT training session when compared to a control situation, even in participants habituated to CAF. 5. Conclusions: The results of the present study indicate that acute doses of CAF, between 3 and 6 mg/kg/b.m., ingested before the onset of an explosive resistance exercise produced an overall effect on mean power output and mean bar velocity during a BPT training session in a group of habitual CAF users. The main effect in mean power and bar velocity was found in several sets during the trial which may indicate that the use of CAF was effective in increasing performance in the whole training session. In contrast, no significant changes were observed for peak power output and peak bar velocity. These results suggest that the ingestion of CAF prior to ballistic exercise can enhance the outcomes of resistance training. However, the results of our study refer only to power output and bar velocity of the upper limbs during the BPT with an external load of 30% 1 RM and further investigations should consider the effect of CAF with different loads or the use of lower-body exercises.
Background: The main objective of the current investigation was to evaluate the effects of caffeine on power output and bar velocity during an explosive bench press throw in athletes habituated to caffeine. Methods: Twelve resistance trained individuals habituated to caffeine ingestion participated in a randomized double-blind experimental design. Each participant performed three identical experimental sessions 60 min after the intake of a placebo, 3, and 6 mg/kg/b.m. of caffeine. In each experimental session, the participants performed 5 sets of 2 repetitions of the bench press throw (with a load equivalent to 30% repetition maximum (RM), measured in a familiarization trial) on a Smith machine, while bar velocity and power output were registered with a rotatory encoder. Results: In comparison to the placebo, the intake of caffeine increased mean bar velocity during 5 sets of the bench press throw (1.37 ± 0.05 vs. 1.41 ± 0.05 and 1.41 ± 0.06 m/s for placebo, 3, and 6 mg/kg/b.m., respectively; p < 0.01), as well as mean power output (545 ± 117 vs. 562 ± 118 and 560 ± 107 W; p < 0.01). However, caffeine was not effective at increasing peak velocity (p = 0.09) nor peak power output (p = 0.07) during the explosive exercise. Conclusions: The acute doses of caffeine before resistance exercise may increase mean power output and mean bar velocity during the bench press throw training session in a group of habitual caffeine users. Thus, caffeine prior to ballistic exercises enhances performance during a power-specific resistance training session.
7,580
312
[ 2491, 108, 195, 284, 160 ]
10
[ "caf", "exercise", "power", "rm", "intake", "bpt", "velocity", "effect", "kg", "bar" ]
[ "caf ingestion", "caf muscle performance", "caffeine ergogenic effects", "effect caf muscle", "exercise ingestion caf" ]
null
[CONTENT] ballistic exercise | upper limbs | resistance exercise | ergogenic substances | sport performance [SUMMARY]
null
[CONTENT] ballistic exercise | upper limbs | resistance exercise | ergogenic substances | sport performance [SUMMARY]
[CONTENT] ballistic exercise | upper limbs | resistance exercise | ergogenic substances | sport performance [SUMMARY]
[CONTENT] ballistic exercise | upper limbs | resistance exercise | ergogenic substances | sport performance [SUMMARY]
[CONTENT] ballistic exercise | upper limbs | resistance exercise | ergogenic substances | sport performance [SUMMARY]
[CONTENT] Adult | Athletes | Athletic Performance | Caffeine | Cross-Over Studies | Double-Blind Method | Healthy Volunteers | Humans | Male | Performance-Enhancing Substances | Resistance Training | Weight Lifting [SUMMARY]
null
[CONTENT] Adult | Athletes | Athletic Performance | Caffeine | Cross-Over Studies | Double-Blind Method | Healthy Volunteers | Humans | Male | Performance-Enhancing Substances | Resistance Training | Weight Lifting [SUMMARY]
[CONTENT] Adult | Athletes | Athletic Performance | Caffeine | Cross-Over Studies | Double-Blind Method | Healthy Volunteers | Humans | Male | Performance-Enhancing Substances | Resistance Training | Weight Lifting [SUMMARY]
[CONTENT] Adult | Athletes | Athletic Performance | Caffeine | Cross-Over Studies | Double-Blind Method | Healthy Volunteers | Humans | Male | Performance-Enhancing Substances | Resistance Training | Weight Lifting [SUMMARY]
[CONTENT] Adult | Athletes | Athletic Performance | Caffeine | Cross-Over Studies | Double-Blind Method | Healthy Volunteers | Humans | Male | Performance-Enhancing Substances | Resistance Training | Weight Lifting [SUMMARY]
[CONTENT] caf ingestion | caf muscle performance | caffeine ergogenic effects | effect caf muscle | exercise ingestion caf [SUMMARY]
null
[CONTENT] caf ingestion | caf muscle performance | caffeine ergogenic effects | effect caf muscle | exercise ingestion caf [SUMMARY]
[CONTENT] caf ingestion | caf muscle performance | caffeine ergogenic effects | effect caf muscle | exercise ingestion caf [SUMMARY]
[CONTENT] caf ingestion | caf muscle performance | caffeine ergogenic effects | effect caf muscle | exercise ingestion caf [SUMMARY]
[CONTENT] caf ingestion | caf muscle performance | caffeine ergogenic effects | effect caf muscle | exercise ingestion caf [SUMMARY]
[CONTENT] caf | exercise | power | rm | intake | bpt | velocity | effect | kg | bar [SUMMARY]
null
[CONTENT] caf | exercise | power | rm | intake | bpt | velocity | effect | kg | bar [SUMMARY]
[CONTENT] caf | exercise | power | rm | intake | bpt | velocity | effect | kg | bar [SUMMARY]
[CONTENT] caf | exercise | power | rm | intake | bpt | velocity | effect | kg | bar [SUMMARY]
[CONTENT] caf | exercise | power | rm | intake | bpt | velocity | effect | kg | bar [SUMMARY]
[CONTENT] caf | exercise | power | acute | upper | power output | ergogenic | habituated | studies | habituated caf [SUMMARY]
null
[CONTENT] caf | 01 | mv | mp | es | 01 es | plac | mp mv | significant | main effect substance [SUMMARY]
[CONTENT] caf | results | power | effect mean power | effect mean | bar | bar velocity | velocity | power output | training [SUMMARY]
[CONTENT] caf | exercise | power | rm | velocity | bar | repetitions | mean | effect | bpt [SUMMARY]
[CONTENT] caf | exercise | power | rm | velocity | bar | repetitions | mean | effect | bpt [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] 5 | 1.37 | 1.41 | 0.05 | 1.41 | 0.06 | 3 | 6 mg/kg/b.m | 545 | 117 | 562 | 118 | 560 | 107 ||| 0.09 | 0.07 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| Twelve ||| three | 60 | 3 | 6 mg/kg/b.m ||| 5 | 2 | 30% | RM | Smith ||| ||| 5 | 1.37 | 1.41 | 0.05 | 1.41 | 0.06 | 3 | 6 mg/kg/b.m | 545 | 117 | 562 | 118 | 560 | 107 ||| 0.09 | 0.07 ||| ||| [SUMMARY]
[CONTENT] ||| Twelve ||| three | 60 | 3 | 6 mg/kg/b.m ||| 5 | 2 | 30% | RM | Smith ||| ||| 5 | 1.37 | 1.41 | 0.05 | 1.41 | 0.06 | 3 | 6 mg/kg/b.m | 545 | 117 | 562 | 118 | 560 | 107 ||| 0.09 | 0.07 ||| ||| [SUMMARY]
The varicella vaccination pattern among children under 5 years old in selected areas in China.
28487494
Vaccination is the most effective way to protect susceptible children from varicella. However, little published literature and few reports on varicella vaccination of Chinese children exist. Thus, in order to obtain specific information on varicella vaccination in this population, we conducted this survey.
BACKGROUND
We first used purposive sampling to select 10 counties in 6 provinces from the eastern, central and western parts of China with a high-quality Immunization Information Management System (IIMS), then randomly selected children from the population registered in the IIMS, and finally checked their vaccination certificates on-site.
METHODOLOGY
Based on the varicella vaccination information collected from 481 children's vaccination certificates in all ten selected counties in China, overall coverage of the first dose of varicella vaccine was 73.6%. There was a positive linear correlation between per capita GDP and vaccine coverage at the county level (r = 0.929, P < 0.01). The cumulative vaccine coverage among children at 1 year, 2 years and ≥3 years old was 67.6%, 71.9% and 73.6%, respectively (χ2 = 4.53, P = 0.10). Vaccination was mainly concentrated at 12-17 months of age.
PRINCIPAL FINDINGS
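As an illustration of the cumulative-coverage comparison reported in the findings above, the sketch below reconstructs the chi-square test from the published percentages. The vaccinated counts (325, 346 and 354 of 481 children) are back-calculated from 67.6%, 71.9% and 73.6%, so this is a verification sketch rather than the authors' original analysis.

```python
# Verification sketch: Pearson chi-square on cumulative varicella coverage
# by age group (1 y, 2 y, >=3 y). Counts are back-calculated from the
# reported percentages of 481 surveyed children, not taken from raw data.
from scipy.stats import chi2_contingency

n_children = 481
vaccinated = [325, 346, 354]          # ~67.6%, 71.9%, 73.6% of 481
table = [[v, n_children - v] for v in vaccinated]

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
# Matches the reported chi2 = 4.53, p = 0.10.
```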
The coverage rate of the first dose of varicella vaccine in the selected areas was lower than that recommended by the WHO position paper. The coverage rate was relatively low in areas of low socioeconomic status. The cumulative coverage showed no statistically significant difference among age groups. Most children received the varicella vaccine before 3 years of age. We suggest introducing the varicella vaccine into the routine immunization program to ensure universally high coverage among children in China. We also suggest that varicella vaccination records be checked before school entry, in order to control and prevent varicella outbreaks in schools.
CONCLUSIONS
[ "Chickenpox", "Chickenpox Vaccine", "Child, Preschool", "China", "Geography, Medical", "Humans", "Infant", "Infant, Newborn", "Population Surveillance", "Seasons", "Socioeconomic Factors", "Vaccination" ]
5542212
INTRODUCTION
Varicella is an infectious disease with high transmissibility that is common in children [1]. The disease is prone to causing large-scale outbreaks in institutional settings such as nurseries, kindergartens, and schools [2]. The varicella vaccine has demonstrated good safety and efficacy [3], and varicella vaccination is the most effective measure to protect susceptible people from the disease. Although varicella case information has been collected nationally since 2005 through the National Disease Supervision Information Management System (NDSIMS), limitations in the quality of local clinic reporting pose a challenge to accurate analysis and evaluation of the true situation in China [4]. Based on the estimation of varicella incidence in Shandong, Gansu and Hunan provinces, an estimated 4,705,000 cases occurred in China in 2007, and the estimated cost in 2007 was 2.31 billion RMB for outpatients and 103 million RMB for inpatients [5, 6]. The “Varicella and Herpes Zoster Vaccines: WHO Position Paper” recommends that countries where varicella is an important public health burden could consider introducing varicella vaccination into the routine childhood immunization programme. Resources should be sufficient to ensure reaching and sustaining vaccine coverage ≥ 80%. Vaccine coverage that remains below 80% over the long term is expected to shift varicella infection to older ages in some settings, which may result in an increase in morbidity and mortality despite a reduction in the total number of cases [3]. The varicella vaccine was licensed in China in 1996 [7]. Currently, the varicella vaccine is sold on the private market, meaning that vaccination is voluntary and must be self-paid. There is no nationally recommended immunization schedule for the varicella vaccine, and the vaccine is administered to eligible children at vaccination clinics according to the instructions provided by the vaccine manufacturers. Under the immunization regulations, the vaccination record is kept by both the parents and the clinic [8-9]. In this study, we investigated varicella vaccination among children under 5 years old in selected areas of China, in order to understand the varicella vaccination situation for children in China. This information will provide evidence for the Chinese health authorities when considering the introduction of the varicella vaccine into the national routine immunization program.
MATERIALS AND METHODS
Sampling methods: We used purposive sampling to select counties (districts) from Shanghai, Jiangsu, Heilongjiang, Jiangxi, Chongqing and Gansu provinces. We selected these investigation sites based on two factors: (i) representation of the eastern, central and western parts of China; and (ii) good records of private vaccinations in children's vaccination certificates together with a high-quality Immunization Information Management System (IIMS). From Nov 1st to Nov 15th, 2013, we used simple random sampling to select children from the population whose birth date was between Jan 1st, 2008 and Dec 31st, 2012 in the IIMS in the selected counties (districts); then, from January 1st to July 15th, 2014, we checked hand-held vaccination certificates on-site to collect varicella vaccination information.
We used the following formula to calculate the sample size: $n = \left(\frac{t_{\alpha/2}}{\Delta_p}\right)^2 P(1-P)$, with $t_{\alpha/2} = 1.96$, $\Delta_p = 0.045$ and $P = 0.45$, where n is the sample size and P is the estimated coverage of the varicella vaccine; based on past experience, we used 45%. After calculation, a sample size of 470 was needed for this survey. Considering a loss to follow-up of about 10%, a total of 520 children needed to be investigated. The sample size was divided by 10 (10 counties), so 52 children in each county were selected to participate in the investigation. Finally, 39 children were excluded because their parents did not have a hand-held vaccination certificate or we were unable to contact them.
Calculation of per capita gross domestic product (GDP): Per capita GDP = GDP / domicile population. GDP and household population data came from the “2012 Per Capita National Economic and Social Development Statistical Bulletin” on the selected county's or district's Bureau of Statistics website. The data were not available for Jianye in Jiangsu, Ning'an in Heilongjiang, and Chengguan and Qilihe in Gansu, so the per capita GDP at the corresponding city level (the higher administrative level) was used for these counties.
Survey content and data processing: We conducted this survey to obtain basic information about the children (such as gender and date of birth) and their varicella vaccination (such as inoculation dates and doses). We used Excel to enter the data, ArcGIS to draw the map, and Epi Info software to analyze the data.
Ethics statement: We obtained oral informed consent from all children's guardians before the investigation, and two investigators checked vaccination certificates on-site together. We continued the investigation only when the children's guardians gave consent. We did not seek institutional review board (IRB) approval for this study because monitoring vaccine coverage is part of routine program work; in this context, there is no risk for participants and IRB approval is not required. However, we strictly protected the private information of participants in the field, such as names and contact information. During the analysis, we also removed personal identification information.
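To make the sample-size formula in the Sampling methods section above easy to reproduce, here is a minimal Python sketch of the same calculation. The helper name and the rounding-up choices are ours, and the allowable error is taken as 0.045 so that the result matches the reported sample of 470.

```python
# Sketch of the survey's sample-size calculation:
#   n = (t_{alpha/2} / delta_p)^2 * P * (1 - P)
# with t = 1.96, delta_p = 0.045 and P = 0.45 (expected coverage).
import math

def sample_size(p: float, delta_p: float, t: float = 1.96) -> int:
    return math.ceil((t / delta_p) ** 2 * p * (1 - p))

n = sample_size(p=0.45, delta_p=0.045)          # ~470 children
n_with_loss = math.ceil(n * 1.10 / 10) * 10     # +10% anticipated loss, rounded up -> 520
per_county = n_with_loss // 10                  # 52 children in each of the 10 counties

print(n, n_with_loss, per_county)  # 470 520 52
```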
null
null
DISCUSSION
We obtained oral informed consent from all children's guardians before the investigation, and two investigators checked vaccination certificates on-site together. We continued the investigation only when the children's guardians gave consent. We did not seek institutional review board (IRB) approval for this study because monitoring vaccine coverage is part of routine program work; in this context, there is no risk for participants and IRB approval is not required. However, we strictly protected the private information of participants in the field, such as names and contact information. During the analysis, we also removed personal identification information.
[ "INTRODUCTION", "RESULTS", "Basic information and vaccine coverage in selected counties", "Relationship between varicella vaccine coverage and the economic situation", "Cumulative coverage of varicella vaccine and distribution of vaccinated age", "DISCUSSION" ]
[ "Varicella is an infectious disease with high transmissibility, common in children [1]. The disease is prone to resulting in large scale outbreak in institutional units, such as nurseries, kindergartens, and schools [2]. Varicella vaccine has been demonstrated good safety and effect [3], and varicella vaccination is the most effective measure to protect susceptible people from the disease. Although varicella case information has been collected nationally since 2005 through National Disease Supervision Information Management System(NDSIMS), its limitation in quality of local clinic reporting poses a challenge in accurate analysis and evaluation of the true situation in China [4]. Based on the estimation of varicella incidence in Shandong, Gansu and Hunan provinces, 4,705,000 cases were reported in 2007 in China, and the cost estimation in 2007 was 2.31 billion RMB for outpatients and 103 million RMB for inpatients [5, 6]. “Varicella and Herpes Zoster Vaccines: WHO Position Paper” recommends that countries where varicella is an important public health burden could consider introducing varicella vaccination in the routine childhood immunization programme. Resources should be sufficient to ensure reaching and sustaining vaccine coverage ≥ 80%. Vaccine coverage that remains <80% over the long term is expected to shift varicella infection to older ages in some settings, which may result in an increase of morbidity and mortality despite reduction in total number of cases [3]. Varicella vaccine was licensed in China in 1996 [7]. Currently, varicella vaccine is sold on the private market, meaning it is voluntary and must be self-paid. There is no national recommended immunization schedule available for varicella vaccine, and the vaccine is administered to eligible children at a vaccination clinic according to instructions provided by the vaccine manufacturers. Based on the immunization regulation, the record of vaccination information is kept by both the parents and the clinic [8–9]. In this study, we investigated the varicella vaccination information among children under 5 years old in selected areas in China, in order to understand the varicella vaccination situation for children in China. This information will provide evidence for authority department of China introducing varicella vaccine into the national routine immunization program.", " Basic information and vaccine coverage in selected counties In total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 1.2% (6/481) received the second dose of varicella vaccine.\nIn total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 
1.2% (6/481) received the second dose of varicella vaccine.\n Relationship between varicella vaccine coverage and the economic situation The median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2).\nThe median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2).\n Cumulative coverage of varicella vaccine and distribution of vaccinated age At age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2).\nNote:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months”\nAmong all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3).\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45\nAt age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2).\nNote:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months”\nAmong all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. 
Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3).\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45", "In total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 1.2% (6/481) received the second dose of varicella vaccine.", "The median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2).", "At age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2).\nNote:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months”\nAmong all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3).\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45", "The basic reproduction number (R0) is about 8 ∼ 10 for varicella, which is higher than influenza [10], it is easy for varicella to cause an outbreak in a nursery, kindergarten or school. Therefore, it is very difficult to block the spreading of varicella disease in areas with low vaccine coverage, and it is necessary to maintain high coverage to prevent the disease from becoming an epidemic. The World Health Organization position paper states that varicella may transit to older age groups when the rate of vaccination coverage is lower than 80% for a significant amount of time, which may result in increased morbidity and mortality [3]. During this survey we found that the coverage of the varicella vaccine was higher in counties with higher per capita GDP, and was relatively lower in counties with lower per capita GDP. 
The coverage of selected areas varies from 44% to 98%, and basing on the linear regression modeling of the relationship between GDP and coverage, per capita GDP of China was 38,354 RMB in 2012 [11], we estimated the average coverage rate for children under 5 years old in China might be about 52%. A national survey conducted in 2011 in China showed that the varicella vaccine coverage rate under 2 years old was 47% [12], this is similarly with our estimates.\nVaricella vaccine is a private vaccine in China, the parents of children voluntarily select and pay for the vaccine for their children. In areas with low coverage rate, a child will be individually protected due to the vaccination, however, herd immunity may not be established at the population level. Furthermore, as a private vaccine, the vaccination fee may become a barrier to vaccinate for children who live in financially poor resource environments [13, 14]. The inequalities of immunization coverage due to socioeconomic differences also exist in other countries [15, 16]. Varicella vaccine has been demonstrated good safety and effect [3], and one study in China showed that introducing varicella vaccination into the routine childhood immunization programme is quite cost effective: an one-dose strategy could save 8 billion RMB in one year [17]. We suggest introducing the varicella vaccine into the routine childhood immunization program as soon as possible to improve the equity in access to immunization services in areas of low socioeconomic status, and ensure a coverage rate of more than 80% in all areas according to the recommendations by WHO.\nOur study showed that the coverage rate for different birth cohorts remained stable. The vaccine coverage rate was 68% at 1 year old, and increased slightly with age, cumulative vaccine coverage rate was 72% at 2 years old and 74% ≥ 3 years old, suggesting that after 3 years of age the access to varicella vaccination service is limited. The peak seasons for incidence of varicella in China are winter and spring seasons, and mainly effect preschool and school-age children of 3 ∼ 10 years old [18]. In China, varicella outbreaks account for a relatively high proportion of the school emergency public health events. For example, reported outbreaks due to varicella in Zhejiang province accounted for 35% in the school emergency public health events from 2005 to 2008 [19]. Therefore, it is necessary to provide an opportunity for access to varicella vaccination service for those children over 3 years of age, who missed the opportunity for vaccination at a younger age. School entry check of immunization certificates and providing vaccination services for eligible children during entry of kindergarten and school may increase vaccine coverage rate among older children and reduce the amount of outbreaks in nurseries and schools [20].\nAt present, the one-dose schedule for varicella vaccine is used in most provinces in China, and the one-dose schedule is used in the 6 provinces selected for this investigation. This survey found that most children completed the schedule before 2 years of age, with the majority (85%) receiving the vaccine at 12-17 months old. Research in the United States showed that the vaccine efficacy decreased significantly (only 73%) during the first year after vaccination if the vaccine is provided before 15 months old [10]. Before 2006, the one-dose schedule was recommended in the United States. 
After 2006, in order to strengthen the control of varicella, a two-dose schedule was introduced, the first dose is administered at 12-15 months old and the second dose administered at preschool age (4-6 years old) [21]. Geometric mean titer (GMT) showed a particularly high boost after the second dose when the interval between doses was more than one year [22]. A two-dose schedule was introduced in Beijing in 2012: the first dose at 18 months of age and the second dose at 4 years of age [7]. To further improve and optimize the varicella vaccine schedule, we suggest further developing the recommendations regarding the varicella immunization schedule by considering disease risk and the effectiveness of vaccines.\nLimitations: We used purposive sampling methods to select provinces and counties (districts) [23], the selected provinces cannot represent all of China. And we obtained the sample based on Immunization Information Management System (IIMS), according to the national requirements, all immunization information for children should be reported to IIMS, however, the quality of the IIMS data varies across areas, it is possible that some migrant children's information was not recorded in IIMS, which may overestimate the coverage rate of varicella vaccine." ]
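The discussion above extrapolates a national coverage estimate from the county-level regression. The short sketch below reproduces that arithmetic; the function name is ours, and the coefficients are taken directly from the reported equation y = 0.0004x + 37.08.

```python
# Reproduces the discussion's back-of-the-envelope national estimate:
# county-level regression of coverage (%) on per capita GDP (RMB),
# y = 0.0004 * x + 37.08, evaluated at China's 2012 per capita GDP.
def predicted_coverage(per_capita_gdp_rmb: float) -> float:
    return 0.0004 * per_capita_gdp_rmb + 37.08

national_gdp_2012 = 38_354  # RMB, per capita GDP of China in 2012
print(f"{predicted_coverage(national_gdp_2012):.1f}%")  # ~52.4%, i.e. about 52%
```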
[ null, null, null, null, null, null ]
[ "INTRODUCTION", "RESULTS", "Basic information and vaccine coverage in selected counties", "Relationship between varicella vaccine coverage and the economic situation", "Cumulative coverage of varicella vaccine and distribution of vaccinated age", "DISCUSSION", "MATERIALS AND METHODS" ]
[ "Varicella is an infectious disease with high transmissibility, common in children [1]. The disease is prone to resulting in large scale outbreak in institutional units, such as nurseries, kindergartens, and schools [2]. Varicella vaccine has been demonstrated good safety and effect [3], and varicella vaccination is the most effective measure to protect susceptible people from the disease. Although varicella case information has been collected nationally since 2005 through National Disease Supervision Information Management System(NDSIMS), its limitation in quality of local clinic reporting poses a challenge in accurate analysis and evaluation of the true situation in China [4]. Based on the estimation of varicella incidence in Shandong, Gansu and Hunan provinces, 4,705,000 cases were reported in 2007 in China, and the cost estimation in 2007 was 2.31 billion RMB for outpatients and 103 million RMB for inpatients [5, 6]. “Varicella and Herpes Zoster Vaccines: WHO Position Paper” recommends that countries where varicella is an important public health burden could consider introducing varicella vaccination in the routine childhood immunization programme. Resources should be sufficient to ensure reaching and sustaining vaccine coverage ≥ 80%. Vaccine coverage that remains <80% over the long term is expected to shift varicella infection to older ages in some settings, which may result in an increase of morbidity and mortality despite reduction in total number of cases [3]. Varicella vaccine was licensed in China in 1996 [7]. Currently, varicella vaccine is sold on the private market, meaning it is voluntary and must be self-paid. There is no national recommended immunization schedule available for varicella vaccine, and the vaccine is administered to eligible children at a vaccination clinic according to instructions provided by the vaccine manufacturers. Based on the immunization regulation, the record of vaccination information is kept by both the parents and the clinic [8–9]. In this study, we investigated the varicella vaccination information among children under 5 years old in selected areas in China, in order to understand the varicella vaccination situation for children in China. This information will provide evidence for authority department of China introducing varicella vaccine into the national routine immunization program.", " Basic information and vaccine coverage in selected counties In total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 1.2% (6/481) received the second dose of varicella vaccine.\nIn total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 
1.2% (6/481) received the second dose of varicella vaccine.\n Relationship between varicella vaccine coverage and the economic situation The median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2).\nThe median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2).\n Cumulative coverage of varicella vaccine and distribution of vaccinated age At age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2).\nNote:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months”\nAmong all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3).\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45\nAt age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2).\nNote:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months”\nAmong all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. 
Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3).\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45", "In total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 1.2% (6/481) received the second dose of varicella vaccine.", "The median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2).", "At age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2).\nNote:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months”\nAmong all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3).\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45", "The basic reproduction number (R0) is about 8 ∼ 10 for varicella, which is higher than influenza [10], it is easy for varicella to cause an outbreak in a nursery, kindergarten or school. Therefore, it is very difficult to block the spreading of varicella disease in areas with low vaccine coverage, and it is necessary to maintain high coverage to prevent the disease from becoming an epidemic. The World Health Organization position paper states that varicella may transit to older age groups when the rate of vaccination coverage is lower than 80% for a significant amount of time, which may result in increased morbidity and mortality [3]. During this survey we found that the coverage of the varicella vaccine was higher in counties with higher per capita GDP, and was relatively lower in counties with lower per capita GDP. 
The coverage of selected areas varies from 44% to 98%, and basing on the linear regression modeling of the relationship between GDP and coverage, per capita GDP of China was 38,354 RMB in 2012 [11], we estimated the average coverage rate for children under 5 years old in China might be about 52%. A national survey conducted in 2011 in China showed that the varicella vaccine coverage rate under 2 years old was 47% [12], this is similarly with our estimates.\nVaricella vaccine is a private vaccine in China, the parents of children voluntarily select and pay for the vaccine for their children. In areas with low coverage rate, a child will be individually protected due to the vaccination, however, herd immunity may not be established at the population level. Furthermore, as a private vaccine, the vaccination fee may become a barrier to vaccinate for children who live in financially poor resource environments [13, 14]. The inequalities of immunization coverage due to socioeconomic differences also exist in other countries [15, 16]. Varicella vaccine has been demonstrated good safety and effect [3], and one study in China showed that introducing varicella vaccination into the routine childhood immunization programme is quite cost effective: an one-dose strategy could save 8 billion RMB in one year [17]. We suggest introducing the varicella vaccine into the routine childhood immunization program as soon as possible to improve the equity in access to immunization services in areas of low socioeconomic status, and ensure a coverage rate of more than 80% in all areas according to the recommendations by WHO.\nOur study showed that the coverage rate for different birth cohorts remained stable. The vaccine coverage rate was 68% at 1 year old, and increased slightly with age, cumulative vaccine coverage rate was 72% at 2 years old and 74% ≥ 3 years old, suggesting that after 3 years of age the access to varicella vaccination service is limited. The peak seasons for incidence of varicella in China are winter and spring seasons, and mainly effect preschool and school-age children of 3 ∼ 10 years old [18]. In China, varicella outbreaks account for a relatively high proportion of the school emergency public health events. For example, reported outbreaks due to varicella in Zhejiang province accounted for 35% in the school emergency public health events from 2005 to 2008 [19]. Therefore, it is necessary to provide an opportunity for access to varicella vaccination service for those children over 3 years of age, who missed the opportunity for vaccination at a younger age. School entry check of immunization certificates and providing vaccination services for eligible children during entry of kindergarten and school may increase vaccine coverage rate among older children and reduce the amount of outbreaks in nurseries and schools [20].\nAt present, the one-dose schedule for varicella vaccine is used in most provinces in China, and the one-dose schedule is used in the 6 provinces selected for this investigation. This survey found that most children completed the schedule before 2 years of age, with the majority (85%) receiving the vaccine at 12-17 months old. Research in the United States showed that the vaccine efficacy decreased significantly (only 73%) during the first year after vaccination if the vaccine is provided before 15 months old [10]. Before 2006, the one-dose schedule was recommended in the United States. 
After 2006, in order to strengthen the control of varicella, a two-dose schedule was introduced, the first dose is administered at 12-15 months old and the second dose administered at preschool age (4-6 years old) [21]. Geometric mean titer (GMT) showed a particularly high boost after the second dose when the interval between doses was more than one year [22]. A two-dose schedule was introduced in Beijing in 2012: the first dose at 18 months of age and the second dose at 4 years of age [7]. To further improve and optimize the varicella vaccine schedule, we suggest further developing the recommendations regarding the varicella immunization schedule by considering disease risk and the effectiveness of vaccines.\nLimitations: We used purposive sampling methods to select provinces and counties (districts) [23], the selected provinces cannot represent all of China. And we obtained the sample based on Immunization Information Management System (IIMS), according to the national requirements, all immunization information for children should be reported to IIMS, however, the quality of the IIMS data varies across areas, it is possible that some migrant children's information was not recorded in IIMS, which may overestimate the coverage rate of varicella vaccine.", " Sampling methods We used purposive sampling methods to select counties (districts) from Shanghai, Jiangsu, Heilongjiang, Jiangxi, Chongqing and Gansu provinces. We selected these investigation sites based on two factors: (i) representation of east, middle and western parts of China; (ii) having good records of private vaccinations in children's vaccination certificates and with a high quality Immunization Information Management System (IIMS). From Nov 1th 2013 to Nov 15th 2013, we used the simple random sampling method to randomly select children from the population whose birth date was between Jan 1st 2008 and Dec 31th 2012 in the IIMS in selected counties (districts), then from January 1th 2014 to July 15th2014, we checked hand-held vaccination certificates on-site to collect varicella vaccination information.\nWe used the following formula to calculate the sample size:\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45\nn: the sample size, P: estimate coverage of varicella vaccine, based on past experience, we used 45%. After calculation, a sample size of 470 was needed for this survey.\nConsidering a loss to follow-up of about 10%, a total of 520 children needed to be investigated. The sample size was divided by 10 (10 counties), and 52 children in each county were selected to participate in the investigation. Finally, 39 children were excluded because their parents did not have a hand-held vaccination certificate or we were unable to contact them.\nWe used purposive sampling methods to select counties (districts) from Shanghai, Jiangsu, Heilongjiang, Jiangxi, Chongqing and Gansu provinces. We selected these investigation sites based on two factors: (i) representation of east, middle and western parts of China; (ii) having good records of private vaccinations in children's vaccination certificates and with a high quality Immunization Information Management System (IIMS). 
From Nov 1th 2013 to Nov 15th 2013, we used the simple random sampling method to randomly select children from the population whose birth date was between Jan 1st 2008 and Dec 31th 2012 in the IIMS in selected counties (districts), then from January 1th 2014 to July 15th2014, we checked hand-held vaccination certificates on-site to collect varicella vaccination information.\nWe used the following formula to calculate the sample size:\nn=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45\nn: the sample size, P: estimate coverage of varicella vaccine, based on past experience, we used 45%. After calculation, a sample size of 470 was needed for this survey.\nConsidering a loss to follow-up of about 10%, a total of 520 children needed to be investigated. The sample size was divided by 10 (10 counties), and 52 children in each county were selected to participate in the investigation. Finally, 39 children were excluded because their parents did not have a hand-held vaccination certificate or we were unable to contact them.\n Calculation of per capita gross domestic product (GDP) Per capita GDP=GDP/domicile population. GDP and household population data came from the “2012 Per Capita National Economic and Social Development Statistical Bulletin” on the selected County or District's Bureau of Statistics website. The data was not available for Jianye in Jiangsu, Ning'an in Heilongjiang, Chengguan and Qilihe in Gansu, so the per capita GDP in the corresponding city level (higher administrative level) was used for these counties.\nPer capita GDP=GDP/domicile population. GDP and household population data came from the “2012 Per Capita National Economic and Social Development Statistical Bulletin” on the selected County or District's Bureau of Statistics website. The data was not available for Jianye in Jiangsu, Ning'an in Heilongjiang, Chengguan and Qilihe in Gansu, so the per capita GDP in the corresponding city level (higher administrative level) was used for these counties.\n Survey content and data processing We conducted this survey to obtain the basic information of children (such as gender, date of birth) and varicella vaccination (such as inoculation date and doses). We used Excel to enter the data, ArcGIS to draw the map, and Epi Info software to analyze the data.\nWe conducted this survey to obtain the basic information of children (such as gender, date of birth) and varicella vaccination (such as inoculation date and doses). We used Excel to enter the data, ArcGIS to draw the map, and Epi Info software to analyze the data.\n Ethics statement We obtained oral informed consent from all children's guardians before the investigation, and two investigators checked vaccination certificates on-site together. We would continue the investigation only when the children's guardians gave consent. We didn't go through the institutional review board approval in the case of this study for ethical issues, because monitoring vaccine coverage is part of routine program work, in this context, there is no risk for participants and it is not required to get IRB approval. However, we strictly protected the private information of participants in the field, such as the name and contact information. During the analysis, we also took out the personal identification information.\nWe obtained oral informed consent from all children's guardians before the investigation, and two investigators checked vaccination certificates on-site together. We would continue the investigation only when the children's guardians gave consent. 
We didn't go through the institutional review board approval in the case of this study for ethical issues, because monitoring vaccine coverage is part of routine program work, in this context, there is no risk for participants and it is not required to get IRB approval. However, we strictly protected the private information of participants in the field, such as the name and contact information. During the analysis, we also took out the personal identification information." ]
[ null, null, null, null, null, null, "methods" ]
[ "varicella vaccine", "vaccination", "coverage", "children", "GDP" ]
INTRODUCTION: Varicella is an infectious disease with high transmissibility, common in children [1]. The disease is prone to resulting in large scale outbreak in institutional units, such as nurseries, kindergartens, and schools [2]. Varicella vaccine has been demonstrated good safety and effect [3], and varicella vaccination is the most effective measure to protect susceptible people from the disease. Although varicella case information has been collected nationally since 2005 through National Disease Supervision Information Management System(NDSIMS), its limitation in quality of local clinic reporting poses a challenge in accurate analysis and evaluation of the true situation in China [4]. Based on the estimation of varicella incidence in Shandong, Gansu and Hunan provinces, 4,705,000 cases were reported in 2007 in China, and the cost estimation in 2007 was 2.31 billion RMB for outpatients and 103 million RMB for inpatients [5, 6]. “Varicella and Herpes Zoster Vaccines: WHO Position Paper” recommends that countries where varicella is an important public health burden could consider introducing varicella vaccination in the routine childhood immunization programme. Resources should be sufficient to ensure reaching and sustaining vaccine coverage ≥ 80%. Vaccine coverage that remains <80% over the long term is expected to shift varicella infection to older ages in some settings, which may result in an increase of morbidity and mortality despite reduction in total number of cases [3]. Varicella vaccine was licensed in China in 1996 [7]. Currently, varicella vaccine is sold on the private market, meaning it is voluntary and must be self-paid. There is no national recommended immunization schedule available for varicella vaccine, and the vaccine is administered to eligible children at a vaccination clinic according to instructions provided by the vaccine manufacturers. Based on the immunization regulation, the record of vaccination information is kept by both the parents and the clinic [8–9]. In this study, we investigated the varicella vaccination information among children under 5 years old in selected areas in China, in order to understand the varicella vaccination situation for children in China. This information will provide evidence for authority department of China introducing varicella vaccine into the national routine immunization program. RESULTS: Basic information and vaccine coverage in selected counties In total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 1.2% (6/481) received the second dose of varicella vaccine. In total, we collected varicella vaccination information from 481 caregivers’ vaccination certificates: 96 in Shanghai, 48 in Jiangsu, 96 in Heilongjiang, 52 in Jiangxi, 93 in Gansu and 96 in Chongqing, the geographical distribution of selected provinces see Figure 1; Male: 263 and female: 218, with a male to female ratio of 1.2:1. The varicella vaccine coverage of the first dose was 73.6% (354/481) for all children surveyed. The coverage in selected counties is described in Table 1. 1.2% (6/481) received the second dose of varicella vaccine. 
Relationship between varicella vaccine coverage and the economic situation The median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2). The median coverage of varicella vaccine in areas with per capita GDP ≥ 100,000 RMB is 92.7% (85.4% - 97.9%), while median coverage of varicella vaccine in areas with per capita GDP < 100,000RMB is 55.9% (44.2% - 80.9%) (Wilcoxon rank sum test P < 0.05). There is a positive linear correlation between per capita GDP and vaccine coverage at county level (r = 0.929, P < 0.01). The equation of linear regression is y = 0.0004x + 37.08 (F = 50.426, P < 0.001) (Figure 2). Cumulative coverage of varicella vaccine and distribution of vaccinated age At age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2). Note:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months” Among all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3). n=(ta/2Δp)2P(1−P),ta/2=1.96,Δp=0.0045,p=0.45 At age of 1 year old(12-23 months), the vaccine coverage were 70.8%, 70.0%, 65.7%, 70.0%, 60.5% among children born in 2008, 2009, 2010, 2011 and 2012 respectively with no significant difference (X 2 = 3.15, P =0.53). The cumulative vaccine coverage ≥3 years old (≥ 36 months) were 79.2%, 78.0% and 74.7 % among children born in 2008, 2009 and 2010 respectively with no significant difference (X 2=0.59, P =0.75). The children born in 2012 were under 2 years of age during investigation for whom the vaccine coverage was lower (60.5%). In total, the cumulative coverage among children at 1 year, 2 years(24-35 months) and≥3 years old were 67.6%, 71.9% and 73.6% (X 2=4.53, P =0.10), with 4.3% increasing at 2 years old, and 1.7% at 3 years old and later compared to 1 year old (Table 2). Note:* 1 year old=”12-23 months”, 2 years old=”24-35 months”, ≥3 years old=”≥36 months” Among all vaccinated children, 91.8% (325/354) received the vaccine at 1 year old, 5.9% (21/354) received at 2 years old, and 2.3% (8/354) at 3 years old and later. Among those receiving the vaccine at 1 year old, the vaccinated age was mainly concentrated in 12-17 months, accounting for 85.3% (302/325) (Figure 3). 
DISCUSSION: The basic reproduction number (R0) of varicella is about 8-10, higher than that of influenza [10], so varicella readily causes outbreaks in nurseries, kindergartens and schools. It is therefore very difficult to block the spread of varicella in areas with low vaccine coverage, and high coverage must be maintained to prevent the disease from becoming epidemic. The World Health Organization position paper states that varicella infection may shift to older age groups when vaccination coverage remains below 80% for a sustained period, which may result in increased morbidity and mortality [3]. In this survey we found that varicella vaccine coverage was higher in counties with higher per capita GDP and relatively lower in counties with lower per capita GDP.
The coverage in the selected areas varied from 44% to 98%. Applying the linear regression model of the relationship between per capita GDP and coverage to China's 2012 national per capita GDP of 38,354 RMB [11], we estimated that the average coverage among children under 5 years old in China might be about 52%. A national survey conducted in China in 2011 found that varicella vaccine coverage among children under 2 years old was 47% [12], which is similar to our estimate. Varicella vaccine is a private, self-paid vaccine in China: parents voluntarily choose and pay for the vaccine for their children. In areas with a low coverage rate, a vaccinated child is individually protected, but herd immunity may not be established at the population level. Furthermore, as a private vaccine, the vaccination fee may be a barrier for children living in financially poor environments [13, 14]. Inequalities in immunization coverage due to socioeconomic differences also exist in other countries [15, 16]. Varicella vaccine has demonstrated good safety and efficacy [3], and one study in China showed that introducing varicella vaccination into the routine childhood immunization programme would be highly cost-effective: a one-dose strategy could save 8 billion RMB in one year [17]. We suggest introducing the varicella vaccine into the routine childhood immunization program as soon as possible to improve equity in access to immunization services in areas of low socioeconomic status, and to ensure a coverage rate of more than 80% in all areas, as recommended by WHO. Our study showed that the coverage rate for different birth cohorts remained stable. Vaccine coverage was 68% at 1 year of age and increased only slightly thereafter, with cumulative coverage of 72% at 2 years and 74% at ≥3 years, suggesting that access to varicella vaccination after 3 years of age is limited. The peak seasons for varicella incidence in China are winter and spring, and the disease mainly affects preschool and school-age children of 3-10 years old [18]. In China, varicella outbreaks account for a relatively high proportion of school emergency public health events; for example, varicella outbreaks accounted for 35% of school emergency public health events in Zhejiang province from 2005 to 2008 [19]. Therefore, it is necessary to provide access to varicella vaccination for children over 3 years of age who missed vaccination at a younger age. Checking immunization certificates at school entry and providing vaccination services to eligible children when they enter kindergarten and school may increase vaccine coverage among older children and reduce the number of outbreaks in nurseries and schools [20]. At present, a one-dose varicella vaccine schedule is used in most provinces in China, including the 6 provinces selected for this investigation. This survey found that most children completed the schedule before 2 years of age, with the majority (85%) receiving the vaccine at 12-17 months of age. Research in the United States showed that vaccine efficacy in the first year after vaccination was significantly lower (only 73%) when the vaccine was given before 15 months of age [10]. Before 2006, a one-dose schedule was recommended in the United States.
After 2006, in order to strengthen varicella control, a two-dose schedule was introduced in the United States, with the first dose administered at 12-15 months of age and the second dose at preschool age (4-6 years) [21]. The geometric mean titer (GMT) showed a particularly strong boost after the second dose when the interval between doses was more than one year [22]. A two-dose schedule was introduced in Beijing in 2012, with the first dose at 18 months of age and the second dose at 4 years of age [7]. To further improve and optimize the varicella vaccine schedule, we suggest developing national recommendations for the varicella immunization schedule that take into account disease risk and vaccine effectiveness. Limitations: We used purposive sampling to select provinces and counties (districts) [23], so the selected provinces cannot represent all of China. In addition, the sample was drawn from the Immunization Information Management System (IIMS). Although national requirements state that all immunization information for children should be reported to the IIMS, the quality of IIMS data varies across areas, and some migrant children's information may not have been recorded, which could lead to an overestimate of varicella vaccine coverage. MATERIALS AND METHODS: Sampling methods We used purposive sampling to select counties (districts) from Shanghai, Jiangsu, Heilongjiang, Jiangxi, Chongqing and Gansu provinces. We selected these investigation sites based on two factors: (i) representation of the eastern, central and western parts of China; and (ii) good records of private vaccinations in children's vaccination certificates together with a high-quality Immunization Information Management System (IIMS). From November 1 to November 15, 2013, we used simple random sampling to select children whose birth date was between January 1, 2008 and December 31, 2012 from the IIMS in the selected counties (districts); then, from January 1 to July 15, 2014, we checked hand-held vaccination certificates on-site to collect varicella vaccination information. We used the following formula to calculate the sample size: n = (t_{α/2}/Δp)^2 × P(1 − P), with t_{α/2} = 1.96, Δp = 0.0045 and P = 0.45, where n is the sample size and P is the estimated coverage of varicella vaccine (45%, based on past experience). After calculation, a sample size of 470 was needed for this survey. Allowing for a loss to follow-up of about 10%, a total of 520 children needed to be investigated. The sample was divided equally among the 10 counties, so 52 children in each county were selected to participate. Finally, 39 children were excluded because their parents did not have a hand-held vaccination certificate or we were unable to contact them.
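The sample-size formula above is the standard expression for estimating a proportion with a given absolute precision. A minimal sketch of the calculation is shown below; note that the reported sample size of 470 is reproduced when the precision Δp is taken as 0.045 (4.5 percentage points), whereas the value printed in the text (0.0045) would give roughly 47,000 children, so 0.045 is assumed here to be the intended figure.

```python
import math

# Sample size for estimating a proportion P with absolute precision delta_p
# at the 95% confidence level: n = (t_{alpha/2} / delta_p)^2 * P * (1 - P).
# delta_p = 0.045 is an assumption of this sketch: it reproduces the reported
# n of about 470, whereas the printed 0.0045 would require ~47,000 children.

def sample_size(p: float, delta_p: float, t: float = 1.96) -> int:
    """Return the required sample size, rounded up, for a proportion estimate."""
    return math.ceil((t / delta_p) ** 2 * p * (1 - p))

n = sample_size(p=0.45, delta_p=0.045)
print(n)  # 470; adding ~10% for loss to follow-up and rounding to 52 children
          # per county across the 10 counties gives the 520 targeted by the survey
```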
Calculation of per capita gross domestic product (GDP) Per capita GDP = GDP / domicile population. GDP and household population data came from the "2012 Per Capita National Economic and Social Development Statistical Bulletin" published on each selected county's or district's Bureau of Statistics website. These data were not available for Jianye in Jiangsu, Ning'an in Heilongjiang, or Chengguan and Qilihe in Gansu, so the per capita GDP of the corresponding city (the next higher administrative level) was used for these counties. Survey content and data processing We conducted this survey to obtain basic information about the children (such as gender and date of birth) and their varicella vaccination (such as inoculation dates and doses). We used Excel to enter the data, ArcGIS to draw the map, and Epi Info software to analyze the data. Ethics statement We obtained oral informed consent from all children's guardians before the investigation, and two investigators checked the vaccination certificates on-site together. We continued the investigation only when the children's guardians gave consent. We did not seek institutional review board (IRB) approval for this study because monitoring vaccine coverage is part of routine program work; in this context there is no risk to participants and IRB approval is not required. However, we strictly protected participants' private information in the field, such as names and contact information, and we removed personal identification information during the analysis.
Background: Vaccination is the most effective way to protect susceptible children from varicella, yet little published literature on varicella vaccination of Chinese children exists. We therefore conducted this survey to obtain specific information on varicella vaccination in this population. Methods: We first used purposive sampling to select 10 counties in 6 provinces from the eastern, central and western parts of China with a high-quality Immunization Information Management System (IIMS), then randomly selected children from the population registered in the IIMS and checked their vaccination certificates on-site. Results: Based on the varicella vaccination information collected from 481 children's vaccination certificates across the ten selected counties, overall coverage of the first dose of varicella vaccine was 73.6%. There was a positive linear correlation between per capita GDP and vaccine coverage at the county level (r = 0.929, P < 0.01). Cumulative vaccine coverage among children at 1 year, 2 years and ≥3 years of age was 67.6%, 71.9% and 73.6%, respectively (χ2 = 4.53, P = 0.10). The age at vaccination was mainly concentrated at 12-17 months. Conclusions: The coverage rate of the first dose of varicella vaccine in the selected areas was lower than that recommended by the WHO position paper, and was relatively low in areas of low socioeconomic status. Cumulative coverage did not differ significantly among age groups, and most children received varicella vaccine before 3 years of age. We suggest introducing the varicella vaccine into the routine immunization program to ensure universally high coverage among children in China, and checking varicella vaccination information at school entry in order to control and prevent varicella outbreaks in schools.
INTRODUCTION: Varicella is a highly transmissible infectious disease that is common in children [1]. It is prone to causing large-scale outbreaks in institutional settings such as nurseries, kindergartens, and schools [2]. Varicella vaccine has demonstrated good safety and efficacy [3], and vaccination is the most effective measure to protect susceptible people from the disease. Although varicella case information has been collected nationally since 2005 through the National Disease Supervision Information Management System (NDSIMS), limitations in the quality of local clinic reporting make it difficult to accurately analyze and evaluate the true situation in China [4]. Based on estimates of varicella incidence in Shandong, Gansu and Hunan provinces, approximately 4,705,000 cases occurred in China in 2007, with estimated costs that year of 2.31 billion RMB for outpatients and 103 million RMB for inpatients [5, 6]. The "Varicella and Herpes Zoster Vaccines: WHO Position Paper" recommends that countries where varicella is an important public health burden consider introducing varicella vaccination into the routine childhood immunization programme, with resources sufficient to reach and sustain vaccine coverage ≥ 80%. Vaccine coverage that remains below 80% over the long term is expected to shift varicella infection to older ages in some settings, which may increase morbidity and mortality despite a reduction in the total number of cases [3]. Varicella vaccine was licensed in China in 1996 [7]. Currently, the vaccine is sold on the private market, meaning vaccination is voluntary and self-paid. There is no national recommended immunization schedule for varicella vaccine; it is administered to eligible children at vaccination clinics according to the instructions provided by the vaccine manufacturers. Under the immunization regulations, vaccination records are kept by both the parents and the clinic [8, 9]. In this study, we investigated varicella vaccination among children under 5 years old in selected areas of China in order to understand the varicella vaccination situation of Chinese children. This information will provide evidence for the Chinese health authorities when considering introduction of the varicella vaccine into the national routine immunization program. DISCUSSION: We obtained oral informed consent from all children's guardians before the investigation, and two investigators checked the vaccination certificates on-site together. We continued the investigation only when the children's guardians gave consent. We did not seek institutional review board (IRB) approval for this study because monitoring vaccine coverage is part of routine program work; in this context there is no risk to participants and IRB approval is not required. However, we strictly protected participants' private information in the field, such as names and contact information, and we removed personal identification information during the analysis.
Background: Vaccination is the most effective way to protect susceptible children from varicella, yet little published literature on varicella vaccination of Chinese children exists. We therefore conducted this survey to obtain specific information on varicella vaccination in this population. Methods: We first used purposive sampling to select 10 counties in 6 provinces from the eastern, central and western parts of China with a high-quality Immunization Information Management System (IIMS), then randomly selected children from the population registered in the IIMS and checked their vaccination certificates on-site. Results: Based on the varicella vaccination information collected from 481 children's vaccination certificates across the ten selected counties, overall coverage of the first dose of varicella vaccine was 73.6%. There was a positive linear correlation between per capita GDP and vaccine coverage at the county level (r = 0.929, P < 0.01). Cumulative vaccine coverage among children at 1 year, 2 years and ≥3 years of age was 67.6%, 71.9% and 73.6%, respectively (χ2 = 4.53, P = 0.10). The age at vaccination was mainly concentrated at 12-17 months. Conclusions: The coverage rate of the first dose of varicella vaccine in the selected areas was lower than that recommended by the WHO position paper, and was relatively low in areas of low socioeconomic status. Cumulative coverage did not differ significantly among age groups, and most children received varicella vaccine before 3 years of age. We suggest introducing the varicella vaccine into the routine immunization program to ensure universally high coverage among children in China, and checking varicella vaccination information at school entry in order to control and prevent varicella outbreaks in schools.
4,206
326
[ 404, 1076, 109, 111, 300, 1051 ]
7
[ "vaccine", "varicella", "coverage", "old", "children", "years", "vaccination", "varicella vaccine", "years old", "vaccine coverage" ]
[ "varicella vaccine national", "estimates varicella vaccine", "schools varicella vaccine", "china varicella outbreaks", "varicella vaccination situation" ]
null
[CONTENT] varicella vaccine | vaccination | coverage | children | GDP [SUMMARY]
[CONTENT] varicella vaccine | vaccination | coverage | children | GDP [SUMMARY]
null
[CONTENT] varicella vaccine | vaccination | coverage | children | GDP [SUMMARY]
[CONTENT] varicella vaccine | vaccination | coverage | children | GDP [SUMMARY]
[CONTENT] varicella vaccine | vaccination | coverage | children | GDP [SUMMARY]
[CONTENT] Chickenpox | Chickenpox Vaccine | Child, Preschool | China | Geography, Medical | Humans | Infant | Infant, Newborn | Population Surveillance | Seasons | Socioeconomic Factors | Vaccination [SUMMARY]
[CONTENT] Chickenpox | Chickenpox Vaccine | Child, Preschool | China | Geography, Medical | Humans | Infant | Infant, Newborn | Population Surveillance | Seasons | Socioeconomic Factors | Vaccination [SUMMARY]
null
[CONTENT] Chickenpox | Chickenpox Vaccine | Child, Preschool | China | Geography, Medical | Humans | Infant | Infant, Newborn | Population Surveillance | Seasons | Socioeconomic Factors | Vaccination [SUMMARY]
[CONTENT] Chickenpox | Chickenpox Vaccine | Child, Preschool | China | Geography, Medical | Humans | Infant | Infant, Newborn | Population Surveillance | Seasons | Socioeconomic Factors | Vaccination [SUMMARY]
[CONTENT] Chickenpox | Chickenpox Vaccine | Child, Preschool | China | Geography, Medical | Humans | Infant | Infant, Newborn | Population Surveillance | Seasons | Socioeconomic Factors | Vaccination [SUMMARY]
[CONTENT] varicella vaccine national | estimates varicella vaccine | schools varicella vaccine | china varicella outbreaks | varicella vaccination situation [SUMMARY]
[CONTENT] varicella vaccine national | estimates varicella vaccine | schools varicella vaccine | china varicella outbreaks | varicella vaccination situation [SUMMARY]
null
[CONTENT] varicella vaccine national | estimates varicella vaccine | schools varicella vaccine | china varicella outbreaks | varicella vaccination situation [SUMMARY]
[CONTENT] varicella vaccine national | estimates varicella vaccine | schools varicella vaccine | china varicella outbreaks | varicella vaccination situation [SUMMARY]
[CONTENT] varicella vaccine national | estimates varicella vaccine | schools varicella vaccine | china varicella outbreaks | varicella vaccination situation [SUMMARY]
[CONTENT] vaccine | varicella | coverage | old | children | years | vaccination | varicella vaccine | years old | vaccine coverage [SUMMARY]
[CONTENT] vaccine | varicella | coverage | old | children | years | vaccination | varicella vaccine | years old | vaccine coverage [SUMMARY]
null
[CONTENT] vaccine | varicella | coverage | old | children | years | vaccination | varicella vaccine | years old | vaccine coverage [SUMMARY]
[CONTENT] vaccine | varicella | coverage | old | children | years | vaccination | varicella vaccine | years old | vaccine coverage [SUMMARY]
[CONTENT] vaccine | varicella | coverage | old | children | years | vaccination | varicella vaccine | years old | vaccine coverage [SUMMARY]
[CONTENT] varicella | china | vaccine | disease | vaccination | clinic | immunization | information | varicella vaccine | varicella vaccination [SUMMARY]
[CONTENT] sample size | size | children | data | sample | vaccination | information | date | gdp | population [SUMMARY]
null
[CONTENT] rate | varicella | coverage rate | dose | vaccine | age | coverage | schedule | china | old [SUMMARY]
[CONTENT] varicella | old | vaccine | years | coverage | children | years old | vaccination | months | varicella vaccine [SUMMARY]
[CONTENT] varicella | old | vaccine | years | coverage | children | years old | vaccination | months | varicella vaccine [SUMMARY]
[CONTENT] Vaccine ||| Chinese ||| [SUMMARY]
[CONTENT] 6 | 10 | eastern | China | Immunization Information Management System [SUMMARY]
null
[CONTENT] first ||| ||| ||| 3 years old ||| China ||| [SUMMARY]
[CONTENT] Vaccine ||| Chinese ||| ||| 6 | 10 | eastern | China | Immunization Information Management System ||| 481 | ten | China | first | 73.6% ||| 0.01 ||| 1 year | 2 years | years | 67.6% | 71.9% | 73.6% | X2=4.53 | 0.10 ||| 12-17 months ||| first ||| ||| ||| 3 years old ||| China ||| [SUMMARY]
[CONTENT] Vaccine ||| Chinese ||| ||| 6 | 10 | eastern | China | Immunization Information Management System ||| 481 | ten | China | first | 73.6% ||| 0.01 ||| 1 year | 2 years | years | 67.6% | 71.9% | 73.6% | X2=4.53 | 0.10 ||| 12-17 months ||| first ||| ||| ||| 3 years old ||| China ||| [SUMMARY]
Ischemia-Selective Cardioprotection by Malonate for Ischemia/Reperfusion Injury.
35959683
Inhibiting SDH (succinate dehydrogenase), with the competitive inhibitor malonate, has shown promise in ameliorating ischemia/reperfusion injury. However, key for translation to the clinic is understanding the mechanism of malonate entry into cells to enable inhibition of SDH, its mitochondrial target, as malonate itself poorly permeates cellular membranes. The possibility of malonate selectively entering the at-risk heart tissue on reperfusion, however, remains unexplored.
BACKGROUND
C57BL/6J mice, C2C12 and H9c2 myoblasts, and HeLa cells were used to elucidate the mechanism of selective malonate uptake into the ischemic heart upon reperfusion. Cells were treated with malonate while varying pH or together with transport inhibitors. Mouse hearts were either perfused ex vivo (Langendorff) or subjected to in vivo left anterior descending coronary artery ligation as models of ischemia/reperfusion injury. Succinate and malonate levels were assessed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) and, in vivo, by mass spectrometry imaging, and infarct size was assessed by TTC (2,3,5-triphenyl-2H-tetrazolium chloride) staining.
METHODS
Malonate was robustly protective against cardiac ischemia/reperfusion injury, but only if administered at reperfusion and not when infused before ischemia. The extent of malonate uptake into the heart was proportional to the duration of ischemia. Malonate entry into cardiomyocytes in vivo and in vitro was dramatically increased at the low pH (≈6.5) associated with ischemia. This increased uptake of malonate was blocked by selective inhibition of MCT1 (monocarboxylate transporter 1). Reperfusion of the ischemic heart region with malonate led to selective SDH inhibition in the at-risk region. Acid formulation greatly enhances the cardioprotective potency of malonate.
RESULTS
Cardioprotection by malonate is dependent on its entry into cardiomyocytes. This is facilitated by the local decrease in pH that occurs during ischemia, leading to its selective uptake upon reperfusion into the at-risk tissue, via MCT1. Thus, malonate's preferential uptake in reperfused tissue means it is an at-risk tissue-selective drug that protects against cardiac ischemia/reperfusion injury.
CONCLUSIONS
[ "Animals", "Chromatography, Liquid", "HeLa Cells", "Humans", "Ischemia", "Malonates", "Mice", "Mice, Inbred C57BL", "Myocardial Reperfusion Injury", "Myocytes, Cardiac", "Tandem Mass Spectrometry" ]
9426742
null
null
Methods
Data Availability Detailed methods and Major Resources Table can be found in the Supplemental Material. Data will be made available upon reasonable request, by contacting a corresponding author.
Results
Malonate Is Protective in an MI Model and Is Taken Up Selectively into the Ischemic Region of the Heart DSM is protective against cardiac IR injury, but its mechanism of protection remains uncertain. To address this gap in our knowledge we first tested the therapeutic range of DSM in IR injury. We infused DSM at a range of concentrations (1.6–160 mg/kg equivalent to ≈11-1100 µmol/kg) at reperfusion in an in vivo murine LAD (left anterior descending coronary artery) ligation model of MI. DSM infusion during this clinically relevant reperfusion period led to a dose-dependent decrease in infarct size (Figure 1C), with 160 and 16 mg/kg showing robust cardioprotection. However, when DSM was infused before the onset of ischemia it was not protective (Figure 1D), even at 160 mg/kg which gave the smallest infarct when administered at reperfusion. This contrasted with the cell-permeable dimethyl malonate, which can diffuse into the tissue and generate malonate, preventing succinate accumulation and thereby reducing IR injury.18,25 This suggests that either DSM is minimally taken up by tissues during normoxia and is only taken up by tissues after ischemia or that DSM protects against acute IR injury by an extracellular mechanism. We next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model. This showed that malonate was indeed taken up into cells within the infarct region to a level of ≈400 pmol/mg tissue and to a far greater extent than into cells in the healthy tissue (Figure 1E). Furthermore, reperfusing with malonate slowed succinate oxidation within the infarct region with succinate remaining significantly elevated in the infarct region following 1-minute reperfusion, compared with control reperfusion with saline (Figure 1F). Limiting succinate oxidation at reperfusion with malonate also blunted ROS production in the at-risk tissue (Figure 1G). We next assessed how SDH inhibition compared to direct inhibition of the mitochondrial permeability transition pore with CsA (cyclosporin A). Here we found that the cardioprotection from malonate was additive to cyclosporin A alone (Figure 1H), hence targeting the upstream mechanism of permeability transition pore opening is an increasingly attractive option for IR injury treatment. To better understand the metabolic differences between the healthy and infarcted tissue upon malonate treatment, we investigated the in vivo MI model using mass spectrometry imaging. To do this, we subjected hearts to either 30-minute ischemia, 30-minute ischemia before reperfusing the heart for 15-minute, or 30-minute ischemia before reperfusing with DSM for 15 minutes (to mimic the cardioprotective malonate infusion) before snap freezing and processing for mass spectrometry imaging (Figure 2A). Infarct lesions were demarcated using hematoxylin and eosin (H&E), silver infarct staining and metabolite principal component analysis to differentiate the risk and nonrisk regions. mass spectrometry imaging coupled with the demarcated risk areas showed striking changes in the levels of succinate in the infarct region. During ischemia, succinate was significantly elevated in the at-risk tissue, with considerable succinate accumulation in the core of the infarct (Figure 2B and 2C). 
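As a back-of-the-envelope check on the dosing quoted above, the sketch below converts the DSM doses from mg/kg to µmol/kg using the molar mass of disodium malonate, and reproduces the peak blood concentration estimate for the protective 16 mg/kg dose using the 1.5 mL blood volume per 25 g mouse assumed later in the text. The molar mass value is an assumption of this illustration (anhydrous disodium malonate, ≈148 g/mol); it is not taken from the paper.

```python
# Convert disodium malonate (DSM) doses from mg/kg to umol/kg and estimate the
# maximum blood concentration reached if the whole dose ended up in the blood.
# Assumptions of this sketch: anhydrous DSM molar mass ~148.03 g/mol, and the
# 1.5 mL blood volume per 25 g mouse used in the text's own estimate.

DSM_MOLAR_MASS_G_PER_MOL = 148.03   # Na2C3H2O4 (anhydrous), assumed
MOUSE_MASS_G = 25.0
BLOOD_VOLUME_L = 1.5e-3             # 1.5 mL

for dose_mg_per_kg in (1.6, 16.0, 160.0):
    umol_per_kg = dose_mg_per_kg / DSM_MOLAR_MASS_G_PER_MOL * 1000.0
    dose_umol = umol_per_kg * MOUSE_MASS_G / 1000.0          # total dose for one mouse
    max_blood_mmol_per_l = dose_umol / 1e3 / BLOOD_VOLUME_L  # upper-bound concentration
    print(f"{dose_mg_per_kg:>6} mg/kg ~ {umol_per_kg:7.0f} umol/kg; "
          f"max blood conc ~ {max_blood_mmol_per_l:.2f} mmol/L")
# 1.6-160 mg/kg corresponds to ~11-1100 umol/kg, and 16 mg/kg gives ~1.8 mmol/L,
# matching the figures quoted in the text.
```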
After 15-minute reperfusion, the succinate levels in the infarct lesion returned to healthy tissue levels, due to succinate oxidation and efflux.21 When malonate was infused at the time of reperfusion, succinate levels remained higher in all regions of the heart than in the nonmalonate treated heart, consistent with the prevention of its oxidation by SDH inhibition by malonate. Localization of succinate accumulation in heart tissue identified by mass spectrometry imaging. A, Outline of experimental groups for mass spectrometry imaging. B, Representative images of succinate abundance detected by mass spectrometry imaging in myocardial sections. Black outer line indicates the edge of the tissue slice, white inner line indicates infarct region. C, Quantification of succinate (C) identified by MSI (mean±SEM, n=3 biological replicates, statistics: Friedman paired test for healthy vs lesion and Kruskal-Wallis with Dunn post hoc test between conditions). au indicates arbitrary units; DSM, disodium malonate; H, healthy tissue; IR, ischemia/reperfusion; L, infarct lesion; and rep, reperfusion. We conclude that malonate is indeed taken up by the heart, but this is greatly enhanced in ischemic region upon reperfusion. Therefore, malonate provides protection against cardiac IR injury in vivo over a range of concentrations following its selective entry into the infarct region, where it slows the oxidation of succinate during reperfusion, preventing the production of RET-derived ROS. DSM is protective against cardiac IR injury, but its mechanism of protection remains uncertain. To address this gap in our knowledge we first tested the therapeutic range of DSM in IR injury. We infused DSM at a range of concentrations (1.6–160 mg/kg equivalent to ≈11-1100 µmol/kg) at reperfusion in an in vivo murine LAD (left anterior descending coronary artery) ligation model of MI. DSM infusion during this clinically relevant reperfusion period led to a dose-dependent decrease in infarct size (Figure 1C), with 160 and 16 mg/kg showing robust cardioprotection. However, when DSM was infused before the onset of ischemia it was not protective (Figure 1D), even at 160 mg/kg which gave the smallest infarct when administered at reperfusion. This contrasted with the cell-permeable dimethyl malonate, which can diffuse into the tissue and generate malonate, preventing succinate accumulation and thereby reducing IR injury.18,25 This suggests that either DSM is minimally taken up by tissues during normoxia and is only taken up by tissues after ischemia or that DSM protects against acute IR injury by an extracellular mechanism. We next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model. This showed that malonate was indeed taken up into cells within the infarct region to a level of ≈400 pmol/mg tissue and to a far greater extent than into cells in the healthy tissue (Figure 1E). Furthermore, reperfusing with malonate slowed succinate oxidation within the infarct region with succinate remaining significantly elevated in the infarct region following 1-minute reperfusion, compared with control reperfusion with saline (Figure 1F). Limiting succinate oxidation at reperfusion with malonate also blunted ROS production in the at-risk tissue (Figure 1G). We next assessed how SDH inhibition compared to direct inhibition of the mitochondrial permeability transition pore with CsA (cyclosporin A). 
Here we found that the cardioprotection from malonate was additive to cyclosporin A alone (Figure 1H), hence targeting the upstream mechanism of permeability transition pore opening is an increasingly attractive option for IR injury treatment. To better understand the metabolic differences between the healthy and infarcted tissue upon malonate treatment, we investigated the in vivo MI model using mass spectrometry imaging. To do this, we subjected hearts to either 30-minute ischemia, 30-minute ischemia before reperfusing the heart for 15-minute, or 30-minute ischemia before reperfusing with DSM for 15 minutes (to mimic the cardioprotective malonate infusion) before snap freezing and processing for mass spectrometry imaging (Figure 2A). Infarct lesions were demarcated using hematoxylin and eosin (H&E), silver infarct staining and metabolite principal component analysis to differentiate the risk and nonrisk regions. mass spectrometry imaging coupled with the demarcated risk areas showed striking changes in the levels of succinate in the infarct region. During ischemia, succinate was significantly elevated in the at-risk tissue, with considerable succinate accumulation in the core of the infarct (Figure 2B and 2C). After 15-minute reperfusion, the succinate levels in the infarct lesion returned to healthy tissue levels, due to succinate oxidation and efflux.21 When malonate was infused at the time of reperfusion, succinate levels remained higher in all regions of the heart than in the nonmalonate treated heart, consistent with the prevention of its oxidation by SDH inhibition by malonate. Localization of succinate accumulation in heart tissue identified by mass spectrometry imaging. A, Outline of experimental groups for mass spectrometry imaging. B, Representative images of succinate abundance detected by mass spectrometry imaging in myocardial sections. Black outer line indicates the edge of the tissue slice, white inner line indicates infarct region. C, Quantification of succinate (C) identified by MSI (mean±SEM, n=3 biological replicates, statistics: Friedman paired test for healthy vs lesion and Kruskal-Wallis with Dunn post hoc test between conditions). au indicates arbitrary units; DSM, disodium malonate; H, healthy tissue; IR, ischemia/reperfusion; L, infarct lesion; and rep, reperfusion. We conclude that malonate is indeed taken up by the heart, but this is greatly enhanced in ischemic region upon reperfusion. Therefore, malonate provides protection against cardiac IR injury in vivo over a range of concentrations following its selective entry into the infarct region, where it slows the oxidation of succinate during reperfusion, preventing the production of RET-derived ROS. Malonate Uptake into Cells Is Inefficient at pH 7.4 To explore the mechanism of malonate uptake in vitro, we incubated C2C12 and H9c2 myoblasts with DSM and measured intracellular malonate levels by LC-MS/MS (liquid chromatography-tandem mass spectrometry). Incubation with DSM for 15 minutes at pH 7.4, led to a dose-dependent increase in intracellular malonate (Figure S1A and S1D). Intracellular succinate also accumulated in a malonate-dependent manner (Figure S1B and S1E), confirming that once malonate enters the cell it is rapidly transported into mitochondria and inhibits SDH. However, incubating cells with 1 mmol/L DSM led to variable malonate uptake and little SDH inhibition, indicating that malonate uptake across biological membranes is inefficient. 
Moreover, the intracellular levels of malonate achieved by 250 µmol/L of the malonate ester prodrug diacetoxymethyl malonate, are 80-fold more than that generated by incubation with 5 mmol/L DSM.25 As DSM was protective in the LAD model when infused at 16 mg/kg, corresponding to a maximum possible blood malonate concentration of ≈1.8 mmol/L (assuming a blood volume of 1.5 mL in a 25 g mouse),29 this compares to the dose range tested in cells (furthermore, far more malonate is available for uptake in vitro than in vivo due to the large reservoir in the incubation medium). Even so, the intracellular malonate levels were far lower with DSM compared to diacetoxymethyl malonate.25 The kinetics of cell uptake upon incubation with 5 mmol/L DSM showed time-dependent uptake, although extended incubation times were required for succinate elevation (Figure S1C and S1F). This is likely due to the initial concentrations of malonate entering the cell being insufficient to inhibit SDH. These data are consistent with the lack of protection by DSM delivered before ischemia in vivo, as well as its limited uptake into normoxic tissues. Together these suggest that exposure to ischemia may facilitate malonate entry into the heart. To explore the mechanism of malonate uptake in vitro, we incubated C2C12 and H9c2 myoblasts with DSM and measured intracellular malonate levels by LC-MS/MS (liquid chromatography-tandem mass spectrometry). Incubation with DSM for 15 minutes at pH 7.4, led to a dose-dependent increase in intracellular malonate (Figure S1A and S1D). Intracellular succinate also accumulated in a malonate-dependent manner (Figure S1B and S1E), confirming that once malonate enters the cell it is rapidly transported into mitochondria and inhibits SDH. However, incubating cells with 1 mmol/L DSM led to variable malonate uptake and little SDH inhibition, indicating that malonate uptake across biological membranes is inefficient. Moreover, the intracellular levels of malonate achieved by 250 µmol/L of the malonate ester prodrug diacetoxymethyl malonate, are 80-fold more than that generated by incubation with 5 mmol/L DSM.25 As DSM was protective in the LAD model when infused at 16 mg/kg, corresponding to a maximum possible blood malonate concentration of ≈1.8 mmol/L (assuming a blood volume of 1.5 mL in a 25 g mouse),29 this compares to the dose range tested in cells (furthermore, far more malonate is available for uptake in vitro than in vivo due to the large reservoir in the incubation medium). Even so, the intracellular malonate levels were far lower with DSM compared to diacetoxymethyl malonate.25 The kinetics of cell uptake upon incubation with 5 mmol/L DSM showed time-dependent uptake, although extended incubation times were required for succinate elevation (Figure S1C and S1F). This is likely due to the initial concentrations of malonate entering the cell being insufficient to inhibit SDH. These data are consistent with the lack of protection by DSM delivered before ischemia in vivo, as well as its limited uptake into normoxic tissues. Together these suggest that exposure to ischemia may facilitate malonate entry into the heart. Malonate Uptake into Cells and the Heart Can Be Modulated by pH Malonate is a dicarboxylate at physiological pH (Figure 1A, pKa=2.83 and 5.69), suggesting that the pH decrease in ischemic tissue9,11,30 may enhance malonate uptake into the heart during early reperfusion, by increasing the concentration of its monocarboxylic form. 
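To illustrate why a modest acidification changes the picture, the fraction of malonate carrying a single negative charge (the monocarboxylate form) can be estimated from its two pKa values with Henderson-Hasselbalch ratios. The short sketch below is only an equilibrium speciation calculation under the stated pKa values (activity effects ignored); it reproduces the ≈16% monocarboxylate figure quoted further on for pH 6.4 and shows how small the fraction is at pH 7.4.

```python
# Equilibrium speciation of malonate (pKa1 = 2.83, pKa2 = 5.69) as a function
# of pH. Relative abundances of H2A (fully protonated), HA- (monocarboxylate)
# and A2- (dicarboxylate) are computed from Henderson-Hasselbalch ratios.

PKA1, PKA2 = 2.83, 5.69

def monocarboxylate_fraction(ph: float) -> float:
    """Fraction of total malonate present as the singly charged HA- species."""
    h2a = 1.0                         # arbitrary reference amount of H2A
    ha = h2a * 10 ** (ph - PKA1)      # [HA-]/[H2A] = 10^(pH - pKa1)
    a2 = ha * 10 ** (ph - PKA2)       # [A2-]/[HA-] = 10^(pH - pKa2)
    return ha / (h2a + ha + a2)

for ph in (7.4, 6.5, 6.4, 6.0):
    print(f"pH {ph}: {100 * monocarboxylate_fraction(ph):.1f}% monocarboxylate")
# ~2% at pH 7.4 rising to ~16% at pH 6.4 and ~33% at pH 6.0, consistent with
# the enhanced, proton-gradient-driven uptake reported at ischemic pH.
```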
Incubating cells with DSM at either pH 6, 7.4, or 8 for 15 minutes, led to large differences in malonate uptake (Figure 3A and Figure S2A). At pH 6, the levels of malonate in the cell were significantly higher than at pH 7.4 or 8, thus malonate uptake is favored by acidic pH. Succinate levels mirrored those of malonate, with greater succinate accumulation as a result of increased malonate-dependent SDH inhibition at low pH (Figure 3B and Figure S2B). Low pH alone had no effect on succinate levels (Figure 3B and Figure S2B), suggesting that it was due to malonate entry into the cells followed by SDH inhibition. Malonate uptake is enhanced at low pH. A and B, C2C12 cells were incubated with disodium malonate (DSM; 0, 1, or 5 mmol/L) for 15 min at either pH 6, 7.4, or 8 before measuring intracellular malonate (A) and succinate (B) by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). C, Malonate levels in C2C12 cells after incubation with DSM (5 mmol/L) for 15 min at various (patho)physiological pH (mean±SEM, n=3 biological replicates). D, Malonate levels in murine isolated Langendorff-perfused hearts treated with 5 mmol/L DSM infused at either pH 7.4 or 6 for 5 min (mean±SEM, n=4 biological replicates, statistics: unpaired, 2-tailed Mann-Whitney U test). E to H, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at either pH 6 or 7.4 in the presence of FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone) (E), gramicidin (F), nigericin (G), or BAM15 (N5,N6-bis(2-Fluorophenyl)-[1,2,5]oxadiazolo[3,4-b]pyrazine-5,6-diamine) (H) before measuring intracellular malonate by LC-MS/MS (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). I, Structures of malonic acid and malonamic acid. J, Quantification of levels of malonate and malonamate in C2C12 cells after incubation (5 mmol/L DSM or disodium malonamate; 15 min) at pH 7.4 (n=6 biological replicates). K, C2C12 cells were incubated with either DSM or disodium malonamate (both 5 mmol/L) at pH 6 or 7.4 for 15 min and the intracellular levels measured by LC-MS/MS. (mean±SEM of the fold-enhancement of uptake at pH 6 vs pH 7.4, n=6 biological replicates). Statistics for J and K: unpaired, 2-tailed Student t test). Incubating cells with malonate over a range of pH values between pH 7.4 and 6 clearly showed the pH-dependent uptake of malonate, in line with the increase in monocarboxylate malonate proportion (Figure 3C). This was mirrored by succinate levels (Figure S2C). Therefore, even a small drop in pH can lead to a substantial increase in malonate uptake, and thus is likely to be relevant for its entry into ischemic tissue upon reperfusion. As the cultured cells used are noncontractile and substantially differ in their properties from in situ contracting cardiomyocytes,15 importantly, we next assessed whether decreasing the pH also enhanced malonate uptake into the ex vivo perfused Langendorff heart. Malonate was infused into the heart at either pH 7.4 or 6 for 5 minutes, before briefly flushing at pH 7.4 to remove nonmyocardial malonate. Remarkably, the levels of malonate were significantly elevated when infused at pH 6 compared to 7.4, confirming that the in vitro results translate to the heart (Figure 3D). We conclude that low pH conditions facilitate the entry of malonate into cardiomyocytes. 
Malonate is a dicarboxylate at physiological pH (Figure 1A, pKa=2.83 and 5.69), suggesting that the pH decrease in ischemic tissue9,11,30 may enhance malonate uptake into the heart during early reperfusion, by increasing the concentration of its monocarboxylic form. Incubating cells with DSM at either pH 6, 7.4, or 8 for 15 minutes, led to large differences in malonate uptake (Figure 3A and Figure S2A). At pH 6, the levels of malonate in the cell were significantly higher than at pH 7.4 or 8, thus malonate uptake is favored by acidic pH. Succinate levels mirrored those of malonate, with greater succinate accumulation as a result of increased malonate-dependent SDH inhibition at low pH (Figure 3B and Figure S2B). Low pH alone had no effect on succinate levels (Figure 3B and Figure S2B), suggesting that it was due to malonate entry into the cells followed by SDH inhibition. Malonate uptake is enhanced at low pH. A and B, C2C12 cells were incubated with disodium malonate (DSM; 0, 1, or 5 mmol/L) for 15 min at either pH 6, 7.4, or 8 before measuring intracellular malonate (A) and succinate (B) by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). C, Malonate levels in C2C12 cells after incubation with DSM (5 mmol/L) for 15 min at various (patho)physiological pH (mean±SEM, n=3 biological replicates). D, Malonate levels in murine isolated Langendorff-perfused hearts treated with 5 mmol/L DSM infused at either pH 7.4 or 6 for 5 min (mean±SEM, n=4 biological replicates, statistics: unpaired, 2-tailed Mann-Whitney U test). E to H, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at either pH 6 or 7.4 in the presence of FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone) (E), gramicidin (F), nigericin (G), or BAM15 (N5,N6-bis(2-Fluorophenyl)-[1,2,5]oxadiazolo[3,4-b]pyrazine-5,6-diamine) (H) before measuring intracellular malonate by LC-MS/MS (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). I, Structures of malonic acid and malonamic acid. J, Quantification of levels of malonate and malonamate in C2C12 cells after incubation (5 mmol/L DSM or disodium malonamate; 15 min) at pH 7.4 (n=6 biological replicates). K, C2C12 cells were incubated with either DSM or disodium malonamate (both 5 mmol/L) at pH 6 or 7.4 for 15 min and the intracellular levels measured by LC-MS/MS. (mean±SEM of the fold-enhancement of uptake at pH 6 vs pH 7.4, n=6 biological replicates). Statistics for J and K: unpaired, 2-tailed Student t test). Incubating cells with malonate over a range of pH values between pH 7.4 and 6 clearly showed the pH-dependent uptake of malonate, in line with the increase in monocarboxylate malonate proportion (Figure 3C). This was mirrored by succinate levels (Figure S2C). Therefore, even a small drop in pH can lead to a substantial increase in malonate uptake, and thus is likely to be relevant for its entry into ischemic tissue upon reperfusion. As the cultured cells used are noncontractile and substantially differ in their properties from in situ contracting cardiomyocytes,15 importantly, we next assessed whether decreasing the pH also enhanced malonate uptake into the ex vivo perfused Langendorff heart. Malonate was infused into the heart at either pH 7.4 or 6 for 5 minutes, before briefly flushing at pH 7.4 to remove nonmyocardial malonate. 
Remarkably, the levels of malonate were significantly elevated when infused at pH 6 compared to 7.4, confirming that the in vitro results translate to the heart (Figure 3D). We conclude that low pH conditions facilitate the entry of malonate into cardiomyocytes. Malonate Uptake Can Be Perturbed by Modulating the Plasma Membrane H+ Gradient As pH modulated malonate entry into cells in vitro and in the heart, we next probed the factors affecting malonate uptake into cells. Malonate entry into cells at either pH 7.4 or 6 was blocked at 4 °C and was associated with negligible succinate levels (Figure S2D and S2E). That a decrease in extracellular pH increased malonate entry into cells, suggested uptake driven by the proton gradient. Therefore, we next abolished the plasma membrane proton gradient using ionophores21 which prevented the cellular uptake of malonate (Figure 3E–3G and Figure S3A) and subsequent increase in succinate levels (Figure S3B through S3E). However, the uncoupler BAM15, which is selective for the mitochondrial inner membrane over the plasma membrane,31 had little effect, consistent with the plasma membrane proton gradient driving malonate uptake (Figure 3H and Figure S3F). We next used nonspecific transport inhibitors to assess whether we could block malonate uptake at low pH. DIDS (4,4’-Diisothiocyano-2,2’-stilbenedisulfonic acid), an irreversible inhibitor of chloride/bicarbonate exchange that inhibits malonate uptake into erythrocytes,32,33 led to a dose-dependent inhibition of malonate uptake, although succinate remained high (Figure S3G and S3H). Thus, malonate uptake into cardiomyocytes at low pH is driven by a proton gradient through a transporter-dependent process. Malonate’s pKa’s are 2.83 and 5.69, so at pH 6.4, ≈16% of the malonate would be in its monocarboxylate form. To determine whether the uptake of malonate was dependent on its protonation to a monocarboxylate form, we assessed the uptake of a compound that mimics the monocarboxylate malonate. In 3-amino-3-oxopropionate (malonamate; Figure 3I), one carboxylic acid has been replaced with a neutral amido group leaving a single carboxylic acid of pKa ≈4.75. Thus, at pH 7.4 malonamate resembles the monocarboxylate form of malonate. This led to more of malonamate being taken up into cells at pH 7.4 than malonate (Figure 3J). Lowering the pH to 6 led to ≈15-fold increase in malonate uptake while malonamate uptake changed negligibly (Figure 3K) because it will remain as a monocarboxylate across this pH range. Together, these data support transport of the monoanionic form of malonate, under the conditions that occur during early reperfusion. Therefore, a pH gradient can drive the uptake of malonate via protonation and transport in its monocarboxylate form. As pH modulated malonate entry into cells in vitro and in the heart, we next probed the factors affecting malonate uptake into cells. Malonate entry into cells at either pH 7.4 or 6 was blocked at 4 °C and was associated with negligible succinate levels (Figure S2D and S2E). That a decrease in extracellular pH increased malonate entry into cells, suggested uptake driven by the proton gradient. Therefore, we next abolished the plasma membrane proton gradient using ionophores21 which prevented the cellular uptake of malonate (Figure 3E–3G and Figure S3A) and subsequent increase in succinate levels (Figure S3B through S3E). 
However, the uncoupler BAM15, which is selective for the mitochondrial inner membrane over the plasma membrane,31 had little effect, consistent with the plasma membrane proton gradient driving malonate uptake (Figure 3H and Figure S3F). We next used nonspecific transport inhibitors to assess whether we could block malonate uptake at low pH. DIDS (4,4’-Diisothiocyano-2,2’-stilbenedisulfonic acid), an irreversible inhibitor of chloride/bicarbonate exchange that inhibits malonate uptake into erythrocytes,32,33 led to a dose-dependent inhibition of malonate uptake, although succinate remained high (Figure S3G and S3H). Thus, malonate uptake into cardiomyocytes at low pH is driven by a proton gradient through a transporter-dependent process. Malonate’s pKa’s are 2.83 and 5.69, so at pH 6.4, ≈16% of the malonate would be in its monocarboxylate form. To determine whether the uptake of malonate was dependent on its protonation to a monocarboxylate form, we assessed the uptake of a compound that mimics the monocarboxylate malonate. In 3-amino-3-oxopropionate (malonamate; Figure 3I), one carboxylic acid has been replaced with a neutral amido group leaving a single carboxylic acid of pKa ≈4.75. Thus, at pH 7.4 malonamate resembles the monocarboxylate form of malonate. This led to more of malonamate being taken up into cells at pH 7.4 than malonate (Figure 3J). Lowering the pH to 6 led to ≈15-fold increase in malonate uptake while malonamate uptake changed negligibly (Figure 3K) because it will remain as a monocarboxylate across this pH range. Together, these data support transport of the monoanionic form of malonate, under the conditions that occur during early reperfusion. Therefore, a pH gradient can drive the uptake of malonate via protonation and transport in its monocarboxylate form. Malonate Uptake Under Reperfusion Conditions Is Dependent on MCT1 The inhibitory effect of DIDS, which also inhibits MCT1,34 raised the possibility of the uptake of the protonated form of malonate being catalyzed by a monocarboxylate transporter (MCT; SLC16 (solute carrier family member 16)). MCT1 (SLC16A1) is the principal transporter of lactate, which is carried in symport with a proton, both of which are elevated during ischemia, making MCT1 an attractive candidate transporter for malonate uptake upon reperfusion. Additionally, it was recently shown that MCT1 mediates the efflux of succinate from the ischemic heart during reperfusion and from exercising muscle.21,35 As MCT1 is a lactate transporter, we first assessed whether high concentrations of lactate in the incubation medium compete with malonate. Excess lactate led to a concentration-dependent decrease in malonate uptake into the cell and a corresponding decrease in succinate accumulation (Figure 4A and Figure S4A). This suggested that at a low pH, malonate uptake into the cells is via MCT1, which is highly expressed in cardiomyocytes.36 Inhibition of MCT1 (monocarboxylate transporter 1) prevents the enhanced uptake of malonate at lowered pH. A, Malonate uptake (5 mmol/L disodium malonate [DSM], 15 min) in C2C12 cells in the presence of lactate (0, 10 or 50 mmol/L). B, Lactate levels in C2C12 cells after 15 min treatment with varying concentrations of MCT1 inhibitors. C and D, Effect of MCT1 inhibition by AR-C141990 or AZD3965 on malonate (5 mmol/L DSM, 15 min) uptake (C) at pH 6 and subsequent succinate levels (D). E, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at various pH±10 µmol/L AR-C141990. 
F, MCT1 inhibition by AR-C141990 on malonate (5 mmol/L DSM, 15 min) uptake at pH 7.4 (A to F, mean±SEM, n=3 biological replicates, statistics: (A) Kruskal-Wallis with Dunn post hoc test). G and H, Effect of malonate (5 mmol/L DSM) on cellular oxygen consumption at pH 7.4 (G) or 6 (H) ±MCT1 inhibitor (10 µmol/L AR-C141990; data presented as nonmitochondrial respiration normalized mean oxygen consumption rate (OCR)±SEM of 3 biological replicates (n=12–16 technical replicates per biological replicate), statistics: Kruskal-Wallis with Dunn post hoc test). ATP indicates OCR in the presence of 1.5 µmol/L oligomycin; BL, baseline OCR; MAX, OCR in the presence of 1 µmol/L FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone); and NM, OCR in the presence of 4 µg/mL rotenone and 10 µmol/L antimycin A. To confirm that MCT1 was the malonate transporter, we assessed the potent and selective MCT1 inhibitors, AR-C141990 and AZD3965.21,35,37,38 When cells were incubated with these MCT1 inhibitors, they led to a dramatic dose-dependent increase in lactate levels within the cell, consistent with preventing lactate efflux via MCT1 (Figure 4B). MCT1 inhibition by either AR-C141990 or AZD3965 led to a profound dose-dependent decrease in malonate uptake (Figure 4C). Inhibition of malonate uptake by MCT1 inhibition led to a corresponding decrease in succinate accumulation within cells (Figure 4D). Additionally, a time course of malonate uptake at low pH, showed that MCT1 inhibition largely abolished the increase in malonate over time, along with the corresponding increase in succinate (Figure S4B and S4C). MCT1 inhibition blocked the pH-dependent increase in malonate uptake (Figure 4E) and in parallel prevented succinate accumulation (Figure 4D). The MCT1 inhibitor led to a dose-dependent inhibition of malonate uptake even at pH 7.4 (Figure 4F). Therefore, malonate can be transported by MCT1 at pH 7.4, but this is greatly enhanced at lower pH due to the increased proportion of malonate in its monocarboxylate form. We next assessed the impact of the malonate taken up into cells on mitochondrial function by measuring respiration. Lowering pH itself had little effect; however, addition of malonate severely reduced cellular respiration (Figure 4G and H and Figure S4E and S4F) and this effect was rescued by MCT1 inhibition (Figure 4G and H and Figure S4E and S4F). MCT1 inhibition led to a small increase in oxygen consumption at baseline, which may be due to lactate accumulation increasing the NADH/NAD+ ratio driving mitochondrial respiration.39 To confirm the pharmacological effects of MCT1 inhibition on malonate uptake genetically, we knocked down (KD) MCT1 using siRNA in both HeLa and C2C12 cells (Figure S5A through S5D). Consequently, lactate levels in MCT1 KD cells were significantly elevated compared to the control siRNA (Figure 5A). When MCT1 KD cells were incubated with malonate at low pH, malonate uptake was dramatically blocked (Figure 5B through 5E and Figure S5C), confirming MCT1 is directly responsible for malonate transport at low pH. Residual malonate uptake in the MCT1 KD cells was inhibited by cotreatment with an MCT1 inhibitor, further confirming its MCT1 dependence (Figure 5F and Figure S5F). Additionally, succinate was elevated in the MCT1 KD cells compared with control siRNA cells, a result not seen with acute pharmacological MCT1 inhibition (Figure 5D). 
To confirm the pharmacological effects of MCT1 inhibition on malonate uptake genetically, we knocked down (KD) MCT1 using siRNA in both HeLa and C2C12 cells (Figure S5A through S5D). Consequently, lactate levels in MCT1 KD cells were significantly elevated compared with control siRNA (Figure 5A). When MCT1 KD cells were incubated with malonate at low pH, malonate uptake was dramatically blocked (Figure 5B through 5E and Figure S5C), confirming that MCT1 is directly responsible for malonate transport at low pH. Residual malonate uptake in the MCT1 KD cells was inhibited by cotreatment with an MCT1 inhibitor, further confirming its MCT1 dependence (Figure 5F and Figure S5F). Additionally, succinate was elevated in the MCT1 KD cells compared with control siRNA cells, a result not seen with acute pharmacological MCT1 inhibition (Figure 5D). This suggests that MCT1 may also play an important role in normoxic succinate metabolism and homeostasis, and not just during myocardial reperfusion and intense exercise.21,35 Intriguingly, the succinate levels after malonate treatment over time differed between the 2 cell types, with succinate remaining low in the C2C12 KD cells but elevated in the HeLa MCT1 KD cells compared with controls, suggesting a reliance on MCT1 for succinate efflux (Figure 5G and Figure S5G).

Figure 5. Genetic knockdown of MCT1 (monocarboxylate transporter 1) prevents the uptake of malonate. A, Relative lactate levels in C2C12 cells treated with control or MCT1 siRNA (mean±SEM of lactate levels relative to control, n=6 biological replicates, statistics: 1-way ANOVA with Bonferroni post hoc test). B to E, Incubation of malonate (5 mmol/L disodium malonate [DSM]) in MCT1 KD cells at pH 7.4 (B and D) or 6 (C and E) for 15 min±MCT1i (10 µmol/L MCT1 inhibitor AR-C141990). Levels of malonate (B and C) and succinate (D and E) quantified by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). F and G, Time course of malonate uptake (5 mmol/L DSM) at pH 6 in MCT1 KD cells (F) and corresponding succinate levels (G) (mean±SEM, n=3 biological replicates). H, Malonate levels in murine Langendorff hearts perfused at pH 6 for 5 min with 5 mmol/L DSM±lactate (50 mmol/L) or MCT1 inhibitor (10 µmol/L AR-C141990; MCT1i) (mean±SEM, n=4, statistics: unpaired, 2-tailed Mann-Whitney U test vs control).

As HeLa cells constitutively express MCT4, which is under the control of HIF (hypoxia-inducible factor)-1α and may be implicated in the transport of metabolites in IR, we knocked down MCT4 independently of MCT1 (Figure S5D and S5E). MCT4 KD had little effect on malonate uptake, with malonate and succinate levels mirroring those of control siRNA-treated cells (Figure S5F and S5G). Thus, MCT1 is the main driver of malonate uptake at lowered pH.

Overall, diminishing MCT1 activity, either by pharmacological inhibition or genetic knockdown, impairs malonate uptake at both acidic and normal physiological pH. In addition, this is the first evidence that MCT1 can transport a 3-carbon dicarboxylate, which may also have implications in normal physiology and other pathologies.

Finally, we assessed whether MCT1 was responsible for malonate uptake at lowered pH in the heart. Malonate uptake into the Langendorff-perfused heart was blocked by either excess lactate or the MCT1 inhibitor AR-C141990 in the perfusion medium (Figure 5H). Together, these findings confirm that MCT1 is responsible for the low pH uptake of malonate, both in cells and in the intact heart.
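Most of the group comparisons in Figures 4 and 5 are reported as Kruskal-Wallis tests with Dunn post hoc correction, with Mann-Whitney U tests for two-group comparisons. A minimal open-source equivalent of that workflow is sketched below; the original analyses were presumably run in dedicated statistics software, and the replicate values used here are hypothetical placeholders.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # third-party package providing Dunn's post hoc test

rng = np.random.default_rng(0)

# Hypothetical malonate-uptake values (arbitrary units) for three groups,
# standing in for, e.g., control, +AR-C141990, and +AZD3965 at pH 6.
control = rng.normal(100, 10, size=3)
arc = rng.normal(40, 8, size=3)
azd = rng.normal(35, 8, size=3)

# Omnibus nonparametric comparison across the three groups.
h_stat, p_kw = stats.kruskal(control, arc, azd)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")

# Dunn post hoc test with multiple-comparison correction.
p_matrix = sp.posthoc_dunn([control, arc, azd], p_adjust="bonferroni")
print(p_matrix)

# Two-group comparisons (e.g., Langendorff hearts ± MCT1 inhibitor) use an
# unpaired, two-tailed Mann-Whitney U test.
u_stat, p_mw = stats.mannwhitneyu(control, arc, alternative="two-sided")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_mw:.3f}")
```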
Ischemic Conditions Are Sufficient to Drive Malonate Uptake into the Heart Upon Reperfusion

That low pH enhanced malonate entry into heart cells was consistent with the protection afforded by DSM in IR injury being due to the local decrease in pH during ischemia and initial reperfusion. To test this hypothesis, Langendorff-perfused hearts were held ischemic for various times before DSM was infused for 5 minutes, followed by flushing and measurement of malonate. The malonate levels in heart tissue were dependent on ischemia time, with the highest levels occurring after 20 minutes of ischemia, ≈10-fold greater than in the normoxic heart (Figure 6A). This malonate uptake into the ischemic Langendorff-perfused heart was dramatically reduced by the MCT1 inhibitor AR-C141990 (Figure 6B). This confirmed that malonate uptake into the ischemic heart upon reperfusion is a selective process, driven by the low pH and facilitated by MCT1.

Figure 6. Ischemia drives malonate protonation, uptake, and cardioprotection. A, Langendorff-perfused murine hearts were held ischemic for 0, 5, 10, or 20 min and reperfused with 5 mmol/L disodium malonate (DSM; pH 7.4) before malonate levels were measured in the heart by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 (control, 5 min) or 6 (10 and 20 min) biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). B, Malonate levels in murine Langendorff hearts exposed to 20 min ischemia and reperfused with 5 mmol/L malonate (pH 7.4)±MCT1 (monocarboxylate transporter 1) inhibitor (10 or 50 µmol/L AR-C141990) (mean±SEM, n=4–5 biological replicates, statistical significance assessed by unpaired, 2-tailed Mann-Whitney U test vs control). C, Model of potential lactate and malonate exchange during reperfusion. D, Lactate levels in the Langendorff heart after 20 min ischemia and 1 min reperfusion±5 mmol/L DSM (mean, n=4 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test). E, Infarct size in the murine LAD (left anterior descending coronary artery) ligation MI model with a 100 µL bolus of 8 mg/kg DSM, pH 4 acid control, or 8 mg/kg pH 4–formulated malonate at reperfusion after 30 min ischemia (mean, n=5 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test vs acid malonate).

During ischemia, the pH of ischemic tissue falls and, in addition, lactate accumulates and upon reperfusion can be transported out of cardiomyocytes by MCT1. This could enhance malonate uptake, as MCT1 may then act as a lactate-H+/monocarboxylate malonate-H+ exchanger (Figure 6C). To assess this, we measured lactate levels in malonate-perfused IR Langendorff hearts. Compared with reperfusion alone, lactate levels decreased in the malonate-treated hearts (Figure 6D). Furthermore, malonate treatment decreased lactate levels in cells that had been treated with the mitochondrial inhibitor antimycin A to enhance lactate production (Figure S6A). This suggests that exchange of extracellular malonate plus a proton for intracellular lactate helps drive malonate entry into cardiomyocytes upon reperfusion and contributes to its at-risk tissue-selective effect.
Therefore, malonate uptake into the heart upon reperfusion is dependent on the tissue having first undergone a period of ischemia, leading to a drop in pH and an accumulation of lactate.
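The exchange model in Figure 6C can be illustrated with a deliberately simple toy simulation: a 1:1 antiporter whose net flux follows the product of the outward lactate gradient and the inward monocarboxylate-malonate gradient. All concentrations, rate constants, and units below are arbitrary illustrative choices, not measured values; the sketch is only meant to show qualitatively why an ischemic cell loaded with lactate favors net malonate influx.

```python
def simulate_exchange(mal_out, lac_in, mal_in=0.0, lac_out=0.0,
                      k=0.05, dt=0.01, steps=2000):
    """Toy 1:1 lactate/malonate(-H+) antiport with mass-action kinetics.

    Net inward malonate flux is proportional to
    [mal_out]*[lac_in] - [mal_in]*[lac_out]; every malonate that enters is
    matched by one lactate that leaves. Constants and units are arbitrary.
    """
    for _ in range(steps):
        flux = k * (mal_out * lac_in - mal_in * lac_out)  # net inward malonate
        mal_in += flux * dt
        lac_in -= flux * dt
        mal_out -= flux * dt
        lac_out += flux * dt
    return mal_in, lac_in

# Ischemic-like start: high intracellular lactate, malonate only outside.
mal_in, lac_in = simulate_exchange(mal_out=5.0, lac_in=20.0)
print(f"intracellular malonate: {mal_in:.2f}, intracellular lactate: {lac_in:.2f}")
# -> malonate rises toward ~4 while lactate falls toward ~16 (arbitrary units)
```

In this caricature the cell exports lactate and imports malonate until the opposing gradients balance, which is the qualitative behavior proposed for MCT1 at early reperfusion.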
Low pH Formulation Improves Cardioprotection by Malonate

The mechanism of malonate uptake into cardiac tissue suggested that lowering the pH of the malonate infusion would increase the proportion in the monocarboxylate form and thereby increase its potency as a cardioprotective agent. To assess this, we used a malonate dose (8 mg/kg) that was not protective when administered at neutral pH in the in vivo LAD MI model. When this dose of malonate was reformulated at pH 4 (a pH currently used in Food and Drug Administration–approved parenteral formulations)40 and administered as a bolus, it conferred significant protection that was not due to the low pH alone (Figure 6E and Figure S6B). Furthermore, although neutral malonate administered before ischemia was not protective, acidified malonate drove its uptake into cardiomyocytes and significantly reduced infarct size when infused before ischemia (Figure S6C).
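As a rough sanity check on the doses discussed above, the conversion from a mg/kg bolus of disodium malonate to an approximate peak blood concentration follows the same arithmetic the article uses elsewhere (a blood volume of ≈1.5 mL per 25 g mouse). The sketch below is illustrative only; it ignores distribution, metabolism, and excretion, and the molar mass is that of anhydrous disodium malonate.

```python
DSM_MOLAR_MASS = 148.03  # g/mol, disodium malonate (C3H2Na2O4)

def peak_blood_concentration_mM(dose_mg_per_kg, body_mass_g=25.0,
                                blood_volume_mL=1.5):
    """Upper-bound blood concentration assuming the whole bolus stays in blood."""
    dose_mg = dose_mg_per_kg * body_mass_g / 1000.0   # mg given to the mouse
    dose_umol = dose_mg / DSM_MOLAR_MASS * 1000.0     # µmol
    return dose_umol / blood_volume_mL                # µmol/mL == mmol/L

for dose in (8, 16, 160):
    print(f"{dose:>3} mg/kg -> ~{peak_blood_concentration_mM(dose):.1f} mmol/L "
          f"({dose / DSM_MOLAR_MASS * 1000:.0f} µmol/kg)")

# Approximate output:
#   8 mg/kg -> ~0.9 mmol/L (54 µmol/kg)
#  16 mg/kg -> ~1.8 mmol/L (108 µmol/kg)
# 160 mg/kg -> ~18.0 mmol/L (1081 µmol/kg)
```

The 16 mg/kg row reproduces the ≈1.8 mmol/L maximum blood concentration quoted earlier for a protective dose, which is why the cell experiments above used malonate in the low millimolar range.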
Conclusions
We have shown that malonate is an ischemia-selective drug: the lowered pH and lactate accumulation of ischemic tissue drive its uptake via MCT1 (Figure 7). Furthermore, we show that a low pH formulation of malonate enhances its therapeutic potency. Malonate is the first at-risk tissue-selective cardioprotective drug and represents a significant step toward the treatment of IR injury.
[ "What Is Known?", "What New Information Does This Article Contribute?", "Malonate Is Protective in an MI Model and Is Taken Up Selectively into the Ischemic Region of the Heart", "Malonate Uptake into Cells Is Inefficient at pH 7.4", "Malonate Uptake into Cells and the Heart Can Be Modulated by pH", "Malonate Uptake Can Be Perturbed by Modulating the Plasma Membrane H+ Gradient", "Malonate Uptake Under Reperfusion Conditions Is Dependent on MCT1", "Ischemic Conditions Are Sufficient to Drive Malonate Uptake into the Heart Upon Reperfusion", "Low pH Formulation Improves Cardioprotection by Malonate", "Article Information", "Acknowledgments", "Sources of Funding", "Supplemental Materials" ]
[ "Extensive succinate accumulation during ischemia and its subsequent rapid oxidation on reperfusion drives ischemia/reperfusion injury.\nPreventing succinate accumulation during ischemia reduces damage on reperfusion.\nInhibiting succinate dehydrogenase with malonate is protective against ischemia/reperfusion injury, although its mechanism of entry into cardiomyocytes is undefined.", "The cardioprotective effect of malonate is dependent on its selective uptake into cardiomyocytes on reperfusion after an ischemic period.\nMalonate entry into cardiomyocytes upon reperfusion is facilitated by the lowered pH and lactate exchange, which selectively drives malonate into cardiomyocytes as a monoanion via the monocarboxylate transporter MCT1 (monocarboxylate transporter 1). This is the first time malonate has been shown to be a substrate for MCT1.\nMalonate selectively enters the at-risk tissue, sparing the nonischemic area on reperfusion. Thus, malonate is the first example of an at-risk tissue-selective, cardioprotective drug.\nDetermining the molecular basis for selective malonate entry via MCT1 into the ischemic heart upon reperfusion is a significant step toward treating cardiac ischemia/reperfusion injury. Malonate is cardioprotective in small and large animal models and MCT1 is highly expressed in the human heart. Furthermore, this mechanism can be exploited to increase malonate potency using an acidic formulation. The next step is to assess malonate as a treatment for cardiac ischemia/reperfusion injury in patients.", "DSM is protective against cardiac IR injury, but its mechanism of protection remains uncertain. To address this gap in our knowledge we first tested the therapeutic range of DSM in IR injury. We infused DSM at a range of concentrations (1.6–160 mg/kg equivalent to ≈11-1100 µmol/kg) at reperfusion in an in vivo murine LAD (left anterior descending coronary artery) ligation model of MI. DSM infusion during this clinically relevant reperfusion period led to a dose-dependent decrease in infarct size (Figure 1C), with 160 and 16 mg/kg showing robust cardioprotection. However, when DSM was infused before the onset of ischemia it was not protective (Figure 1D), even at 160 mg/kg which gave the smallest infarct when administered at reperfusion. This contrasted with the cell-permeable dimethyl malonate, which can diffuse into the tissue and generate malonate, preventing succinate accumulation and thereby reducing IR injury.18,25 This suggests that either DSM is minimally taken up by tissues during normoxia and is only taken up by tissues after ischemia or that DSM protects against acute IR injury by an extracellular mechanism.\nWe next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model. This showed that malonate was indeed taken up into cells within the infarct region to a level of ≈400 pmol/mg tissue and to a far greater extent than into cells in the healthy tissue (Figure 1E). Furthermore, reperfusing with malonate slowed succinate oxidation within the infarct region with succinate remaining significantly elevated in the infarct region following 1-minute reperfusion, compared with control reperfusion with saline (Figure 1F). Limiting succinate oxidation at reperfusion with malonate also blunted ROS production in the at-risk tissue (Figure 1G). We next assessed how SDH inhibition compared to direct inhibition of the mitochondrial permeability transition pore with CsA (cyclosporin A). 
Article Information

Acknowledgments
The authors thank Stephen Large, Fouad Taghavi, and Margaret M. Huang (Department of Surgery, University of Cambridge) for obtaining human heart tissue and Benjamin Thackray (Department of Medicine, University of Cambridge) for assistance with initial experiments.

Sources of Funding
This work was supported by the British Heart Foundation (PG/20/10025 to T. Krieg); the Medical Research Council (MC_UU_00015/3 to M.P. Murphy and MR/P000320/1 to T. Krieg); the Wellcome Trust (220257/Z/20/Z to M.P. Murphy and 221604/Z/20/Z to D. Aksentijevic); and Barts Charity (MRC0215 to D. Aksentijevic).

Disclosures
Some authors currently hold a patent on the use of malonate esters in cardiac ischemia/reperfusion (IR) injury (M.P. Murphy and T. Krieg) and have submitted patent applications on the use of malonate in IR injury associated with ischemic stroke (M.P. Murphy and T. Krieg) and the pH-enhancement of malonate described in this article (H.A. Prag, M.P. Murphy, and T. Krieg). The other authors report no conflicts.

Data Availability
Detailed methods and a Major Resources Table can be found in the Supplemental Material. Data will be made available upon reasonable request by contacting a corresponding author.

Supplemental Materials
Supplemental Methods
Major Resources Table
Figures S1–S6
References 49–56
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "What Is Known?", "What New Information Does This Article Contribute?", "Methods", "Data Availability", "Results", "Malonate Is Protective in an MI Model and Is Taken Up Selectively into the Ischemic Region of the Heart", "Malonate Uptake into Cells Is Inefficient at pH 7.4", "Malonate Uptake into Cells and the Heart Can Be Modulated by pH", "Malonate Uptake Can Be Perturbed by Modulating the Plasma Membrane H+ Gradient", "Malonate Uptake Under Reperfusion Conditions Is Dependent on MCT1", "Ischemic Conditions Are Sufficient to Drive Malonate Uptake into the Heart Upon Reperfusion", "Low pH Formulation Improves Cardioprotection by Malonate", "Discussion", "Conclusions", "Article Information", "Acknowledgments", "Sources of Funding", "Disclosures", "Supplemental Materials", "Supplementary Material" ]
[ "Extensive succinate accumulation during ischemia and its subsequent rapid oxidation on reperfusion drives ischemia/reperfusion injury.\nPreventing succinate accumulation during ischemia reduces damage on reperfusion.\nInhibiting succinate dehydrogenase with malonate is protective against ischemia/reperfusion injury, although its mechanism of entry into cardiomyocytes is undefined.", "The cardioprotective effect of malonate is dependent on its selective uptake into cardiomyocytes on reperfusion after an ischemic period.\nMalonate entry into cardiomyocytes upon reperfusion is facilitated by the lowered pH and lactate exchange, which selectively drives malonate into cardiomyocytes as a monoanion via the monocarboxylate transporter MCT1 (monocarboxylate transporter 1). This is the first time malonate has been shown to be a substrate for MCT1.\nMalonate selectively enters the at-risk tissue, sparing the nonischemic area on reperfusion. Thus, malonate is the first example of an at-risk tissue-selective, cardioprotective drug.\nDetermining the molecular basis for selective malonate entry via MCT1 into the ischemic heart upon reperfusion is a significant step toward treating cardiac ischemia/reperfusion injury. Malonate is cardioprotective in small and large animal models and MCT1 is highly expressed in the human heart. Furthermore, this mechanism can be exploited to increase malonate potency using an acidic formulation. The next step is to assess malonate as a treatment for cardiac ischemia/reperfusion injury in patients.", " Data Availability Detailed methods and Major Resources Table can be found in the Supplemental Material. Data will be made available upon reasonable request, by contacting a corresponding author.\nDetailed methods and Major Resources Table can be found in the Supplemental Material. Data will be made available upon reasonable request, by contacting a corresponding author.", "Detailed methods and Major Resources Table can be found in the Supplemental Material. Data will be made available upon reasonable request, by contacting a corresponding author.", " Malonate Is Protective in an MI Model and Is Taken Up Selectively into the Ischemic Region of the Heart DSM is protective against cardiac IR injury, but its mechanism of protection remains uncertain. To address this gap in our knowledge we first tested the therapeutic range of DSM in IR injury. We infused DSM at a range of concentrations (1.6–160 mg/kg equivalent to ≈11-1100 µmol/kg) at reperfusion in an in vivo murine LAD (left anterior descending coronary artery) ligation model of MI. DSM infusion during this clinically relevant reperfusion period led to a dose-dependent decrease in infarct size (Figure 1C), with 160 and 16 mg/kg showing robust cardioprotection. However, when DSM was infused before the onset of ischemia it was not protective (Figure 1D), even at 160 mg/kg which gave the smallest infarct when administered at reperfusion. This contrasted with the cell-permeable dimethyl malonate, which can diffuse into the tissue and generate malonate, preventing succinate accumulation and thereby reducing IR injury.18,25 This suggests that either DSM is minimally taken up by tissues during normoxia and is only taken up by tissues after ischemia or that DSM protects against acute IR injury by an extracellular mechanism.\nWe next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model. 
We next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model. This showed that malonate was indeed taken up into cells within the infarct region, reaching ≈400 pmol/mg tissue, and to a far greater extent than into cells in the healthy tissue (Figure 1E). Furthermore, reperfusing with malonate slowed succinate oxidation within the infarct region, with succinate remaining significantly elevated after 1 minute of reperfusion compared with control reperfusion with saline (Figure 1F). Limiting succinate oxidation at reperfusion with malonate also blunted ROS production in the at-risk tissue (Figure 1G). We next assessed how SDH inhibition compared with direct inhibition of the mitochondrial permeability transition pore with CsA (cyclosporin A). We found that the cardioprotection from malonate was additive to that of CsA alone (Figure 1H); hence, targeting the upstream mechanism of permeability transition pore opening is an increasingly attractive option for treating IR injury.\nTo better understand the metabolic differences between the healthy and infarcted tissue upon malonate treatment, we investigated the in vivo MI model using mass spectrometry imaging. To do this, we subjected hearts to 30 minutes of ischemia alone, 30 minutes of ischemia followed by 15 minutes of reperfusion, or 30 minutes of ischemia followed by 15 minutes of reperfusion with DSM (to mimic the cardioprotective malonate infusion), before snap freezing and processing for mass spectrometry imaging (Figure 2A). Infarct lesions were demarcated using hematoxylin and eosin (H&E) staining, silver infarct staining, and metabolite principal component analysis to differentiate the risk and nonrisk regions. Mass spectrometry imaging coupled with the demarcated risk areas showed striking changes in succinate levels in the infarct region. During ischemia, succinate was significantly elevated in the at-risk tissue, with considerable succinate accumulation in the core of the infarct (Figure 2B and 2C). After 15 minutes of reperfusion, succinate levels in the infarct lesion returned to those of healthy tissue, due to succinate oxidation and efflux.21 When malonate was infused at the time of reperfusion, succinate levels remained higher in all regions of the heart than in the non-malonate-treated heart, consistent with prevention of succinate oxidation by malonate inhibition of SDH.\nLocalization of succinate accumulation in heart tissue identified by mass spectrometry imaging. A, Outline of experimental groups for mass spectrometry imaging. B, Representative images of succinate abundance detected by mass spectrometry imaging in myocardial sections. Black outer line indicates the edge of the tissue slice; white inner line indicates the infarct region. C, Quantification of succinate identified by mass spectrometry imaging (mean±SEM, n=3 biological replicates, statistics: Friedman paired test for healthy vs lesion and Kruskal-Wallis with Dunn post hoc test between conditions). au indicates arbitrary units; DSM, disodium malonate; H, healthy tissue; IR, ischemia/reperfusion; L, infarct lesion; and rep, reperfusion.\nWe conclude that malonate is indeed taken up by the heart, but that this uptake is greatly enhanced in the ischemic region upon reperfusion. Therefore, malonate provides protection against cardiac IR injury in vivo over a range of concentrations following its selective entry into the infarct region, where it slows the oxidation of succinate during reperfusion, preventing the production of RET-derived ROS.\n Malonate Uptake into Cells Is Inefficient at pH 7.4\nTo explore the mechanism of malonate uptake in vitro, we incubated C2C12 and H9c2 myoblasts with DSM and measured intracellular malonate levels by LC-MS/MS (liquid chromatography-tandem mass spectrometry). Incubation with DSM for 15 minutes at pH 7.4 led to a dose-dependent increase in intracellular malonate (Figure S1A and S1D). Intracellular succinate also accumulated in a malonate-dependent manner (Figure S1B and S1E), confirming that once malonate enters the cell it is rapidly transported into mitochondria and inhibits SDH. However, incubating cells with 1 mmol/L DSM led to variable malonate uptake and little SDH inhibition, indicating that malonate uptake across biological membranes is inefficient. Moreover, the intracellular levels of malonate achieved with 250 µmol/L of the malonate ester prodrug diacetoxymethyl malonate are 80-fold higher than those generated by incubation with 5 mmol/L DSM.25 DSM was protective in the LAD model when infused at 16 mg/kg, corresponding to a maximum possible blood malonate concentration of ≈1.8 mmol/L (assuming a blood volume of 1.5 mL in a 25 g mouse),29 which is comparable to the dose range tested in cells (and far more malonate is available for uptake in vitro than in vivo, owing to the large reservoir in the incubation medium). Even so, intracellular malonate levels were far lower with DSM than with diacetoxymethyl malonate.25\nThe kinetics of cell uptake upon incubation with 5 mmol/L DSM showed time-dependent uptake, although extended incubation times were required for succinate elevation (Figure S1C and S1F). This is likely because the initial concentrations of malonate entering the cell were insufficient to inhibit SDH. These data are consistent with the lack of protection by DSM delivered before ischemia in vivo, as well as its limited uptake into normoxic tissues. Together, these data suggest that exposure to ischemia may facilitate malonate entry into the heart.
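The upper-bound blood concentration used in the comparison above (≈1.8 mmol/L from a 16 mg/kg dose) follows directly from the stated assumptions (25 g mouse, 1.5 mL blood volume, the entire dose remaining in the blood). A minimal sketch, again assuming a DSM molar mass of ≈148.03 g/mol:

```python
# Upper-bound estimate of blood malonate concentration after a 16 mg/kg DSM bolus.
# Assumptions from the text: 25 g mouse, 1.5 mL blood, entire dose stays in the blood.
MW_DSM = 148.03           # g/mol (assumed anhydrous disodium malonate)
dose_mg_per_kg = 16.0     # cardioprotective dose in the LAD model
body_mass_kg = 0.025      # 25 g mouse
blood_volume_l = 0.0015   # 1.5 mL

dose_mg = dose_mg_per_kg * body_mass_kg        # 0.4 mg delivered
dose_mmol = dose_mg / MW_DSM                   # ~0.0027 mmol
conc_mmol_per_l = dose_mmol / blood_volume_l   # ~1.8 mmol/L
print(f"maximum blood [malonate] ~= {conc_mmol_per_l:.1f} mmol/L")
```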
 Malonate Uptake into Cells and the Heart Can Be Modulated by pH\nMalonate is a dicarboxylate at physiological pH (Figure 1A, pKa=2.83 and 5.69), suggesting that the pH decrease in ischemic tissue9,11,30 may enhance malonate uptake into the heart during early reperfusion by increasing the concentration of its monocarboxylate form. Incubating cells with DSM at either pH 6, 7.4, or 8 for 15 minutes led to large differences in malonate uptake (Figure 3A and Figure S2A). At pH 6, the levels of malonate in the cell were significantly higher than at pH 7.4 or 8; thus, malonate uptake is favored by acidic pH. Succinate levels mirrored those of malonate, with greater succinate accumulation as a result of increased malonate-dependent SDH inhibition at low pH (Figure 3B and Figure S2B). Low pH alone had no effect on succinate levels (Figure 3B and Figure S2B), suggesting that the succinate accumulation was due to malonate entry into the cells followed by SDH inhibition.\nMalonate uptake is enhanced at low pH. A and B, C2C12 cells were incubated with disodium malonate (DSM; 0, 1, or 5 mmol/L) for 15 min at either pH 6, 7.4, or 8 before measuring intracellular malonate (A) and succinate (B) by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). C, Malonate levels in C2C12 cells after incubation with DSM (5 mmol/L) for 15 min at various (patho)physiological pH (mean±SEM, n=3 biological replicates). 
D, Malonate levels in murine isolated Langendorff-perfused hearts treated with 5 mmol/L DSM infused at either pH 7.4 or 6 for 5 min (mean±SEM, n=4 biological replicates, statistics: unpaired, 2-tailed Mann-Whitney U test). E to H, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at either pH 6 or 7.4 in the presence of FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone) (E), gramicidin (F), nigericin (G), or BAM15 (N5,N6-bis(2-fluorophenyl)-[1,2,5]oxadiazolo[3,4-b]pyrazine-5,6-diamine) (H) before measuring intracellular malonate by LC-MS/MS (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). I, Structures of malonic acid and malonamic acid. J, Quantification of levels of malonate and malonamate in C2C12 cells after incubation (5 mmol/L DSM or disodium malonamate; 15 min) at pH 7.4 (n=6 biological replicates). K, C2C12 cells were incubated with either DSM or disodium malonamate (both 5 mmol/L) at pH 6 or 7.4 for 15 min and the intracellular levels measured by LC-MS/MS (mean±SEM of the fold enhancement of uptake at pH 6 vs pH 7.4, n=6 biological replicates). Statistics for J and K: unpaired, 2-tailed Student t test.\nIncubating cells with malonate over a range of pH values between pH 7.4 and 6 clearly showed the pH-dependent uptake of malonate, in line with the increase in the proportion of monocarboxylate malonate (Figure 3C). This was mirrored by succinate levels (Figure S2C). Therefore, even a small drop in pH can lead to a substantial increase in malonate uptake, which is likely to be relevant for its entry into ischemic tissue upon reperfusion.\nBecause the cultured cells used are noncontractile and differ substantially in their properties from in situ contracting cardiomyocytes,15 we next assessed whether decreasing the pH also enhanced malonate uptake into the ex vivo Langendorff-perfused heart. Malonate was infused into the heart at either pH 7.4 or 6 for 5 minutes, before briefly flushing at pH 7.4 to remove nonmyocardial malonate. Remarkably, the levels of malonate were significantly elevated when infused at pH 6 compared with pH 7.4, confirming that the in vitro results translate to the heart (Figure 3D). We conclude that low-pH conditions facilitate the entry of malonate into cardiomyocytes.\n Malonate Uptake Can Be Perturbed by Modulating the Plasma Membrane H+ Gradient\nAs pH modulated malonate entry into cells in vitro and in the heart, we next probed the factors affecting malonate uptake into cells. Malonate entry into cells at either pH 7.4 or 6 was blocked at 4 °C and was associated with negligible succinate levels (Figure S2D and S2E). That a decrease in extracellular pH increased malonate entry into cells suggested uptake driven by the proton gradient. Therefore, we next abolished the plasma membrane proton gradient using ionophores,21 which prevented the cellular uptake of malonate (Figure 3E–3G and Figure S3A) and the subsequent increase in succinate levels (Figure S3B through S3E). 
However, the uncoupler BAM15, which is selective for the mitochondrial inner membrane over the plasma membrane,31 had little effect, consistent with the plasma membrane proton gradient driving malonate uptake (Figure 3H and Figure S3F).\nWe next used nonspecific transport inhibitors to assess whether we could block malonate uptake at low pH. DIDS (4,4’-diisothiocyano-2,2’-stilbenedisulfonic acid), an irreversible inhibitor of chloride/bicarbonate exchange that inhibits malonate uptake into erythrocytes,32,33 led to a dose-dependent inhibition of malonate uptake, although succinate remained high (Figure S3G and S3H). Thus, malonate uptake into cardiomyocytes at low pH is driven by a proton gradient through a transporter-dependent process.\nMalonate’s pKa values are 2.83 and 5.69, so at pH 6.4 ≈16% of the malonate would be in its monocarboxylate form. To determine whether the uptake of malonate was dependent on its protonation to a monocarboxylate form, we assessed the uptake of a compound that mimics monocarboxylate malonate. In 3-amino-3-oxopropionate (malonamate; Figure 3I), one carboxylic acid has been replaced with a neutral amido group, leaving a single carboxylic acid of pKa ≈4.75. Thus, at pH 7.4 malonamate resembles the monocarboxylate form of malonate. Consistent with this, more malonamate than malonate was taken up into cells at pH 7.4 (Figure 3J). Lowering the pH to 6 led to a ≈15-fold increase in malonate uptake, whereas malonamate uptake changed negligibly (Figure 3K), because malonamate remains a monocarboxylate across this pH range.\nTogether, these data support transport of the monoanionic form of malonate under the conditions that occur during early reperfusion. Therefore, a pH gradient can drive the uptake of malonate via its protonation and transport in the monocarboxylate form.
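The speciation figures used in this section (≈16% monocarboxylate at pH 6.4, and a much larger monoanion fraction at pH 6 than at pH 7.4) follow from a standard diprotic-acid calculation with the two quoted pKa values. A minimal illustrative sketch, assuming ideal-solution behavior (activities equal to concentrations):

```python
# Fraction of malonate present as the monocarboxylate monoanion (HA-) at a given pH,
# from diprotic speciation with pKa1 = 2.83 and pKa2 = 5.69.
# Ideal-solution sketch: no ionic-strength or activity corrections.
PKA1, PKA2 = 2.83, 5.69

def monoanion_fraction(ph: float) -> float:
    """alpha(HA-) = 1 / (1 + [H+]/Ka1 + Ka2/[H+])."""
    h = 10.0 ** (-ph)
    return 1.0 / (1.0 + h / 10.0 ** (-PKA1) + 10.0 ** (-PKA2) / h)

for ph in (7.4, 6.4, 6.0):
    print(f"pH {ph}: {100.0 * monoanion_fraction(ph):4.1f}% monocarboxylate")
# Prints ~1.9% at pH 7.4, ~16% at pH 6.4 (as quoted in the text), and ~33% at pH 6.0.
# The pH 6 vs pH 7.4 ratio (~17-fold) is of the same order as the ~15-fold
# enhancement of malonate uptake observed at pH 6, consistent with uptake of the monoanion.
```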
 Malonate Uptake Under Reperfusion Conditions Is Dependent on MCT1\nThe inhibitory effect of DIDS, which also inhibits MCT1,34 raised the possibility that uptake of the protonated form of malonate is catalyzed by a monocarboxylate transporter (MCT; SLC16, solute carrier family 16). MCT1 (SLC16A1) is the principal transporter of lactate, which is carried in symport with a proton; both lactate and protons are elevated during ischemia, making MCT1 an attractive candidate transporter for malonate uptake upon reperfusion. Additionally, it was recently shown that MCT1 mediates the efflux of succinate from the ischemic heart during reperfusion and from exercising muscle.21,35 As MCT1 is a lactate transporter, we first assessed whether high concentrations of lactate in the incubation medium compete with malonate. Excess lactate led to a concentration-dependent decrease in malonate uptake into the cell and a corresponding decrease in succinate accumulation (Figure 4A and Figure S4A). This suggested that at low pH, malonate uptake into cells is via MCT1, which is highly expressed in cardiomyocytes.36\nInhibition of MCT1 (monocarboxylate transporter 1) prevents the enhanced uptake of malonate at lowered pH. A, Malonate uptake (5 mmol/L disodium malonate [DSM], 15 min) in C2C12 cells in the presence of lactate (0, 10, or 50 mmol/L). B, Lactate levels in C2C12 cells after 15 min treatment with varying concentrations of MCT1 inhibitors. C and D, Effect of MCT1 inhibition by AR-C141990 or AZD3965 on malonate (5 mmol/L DSM, 15 min) uptake (C) at pH 6 and subsequent succinate levels (D). E, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at various pH±10 µmol/L AR-C141990. F, MCT1 inhibition by AR-C141990 on malonate (5 mmol/L DSM, 15 min) uptake at pH 7.4 (A to F, mean±SEM, n=3 biological replicates, statistics: (A) Kruskal-Wallis with Dunn post hoc test). G and H, Effect of malonate (5 mmol/L DSM) on cellular oxygen consumption at pH 7.4 (G) or 6 (H) ±MCT1 inhibitor (10 µmol/L AR-C141990; data presented as nonmitochondrial respiration-normalized mean oxygen consumption rate (OCR)±SEM of 3 biological replicates (n=12–16 technical replicates per biological replicate), statistics: Kruskal-Wallis with Dunn post hoc test). 
ATP indicates OCR in the presence of 1.5 µmol/L oligomycin; BL, baseline OCR; MAX, OCR in the presence of 1 µmol/L FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone); and NM, OCR in the presence of 4 µg/mL rotenone and 10 µmol/L antimycin A.\nTo confirm that MCT1 was the malonate transporter, we assessed the potent and selective MCT1 inhibitors, AR-C141990 and AZD3965.21,35,37,38 When cells were incubated with these MCT1 inhibitors, they led to a dramatic dose-dependent increase in lactate levels within the cell, consistent with preventing lactate efflux via MCT1 (Figure 4B). MCT1 inhibition by either AR-C141990 or AZD3965 led to a profound dose-dependent decrease in malonate uptake (Figure 4C). Inhibition of malonate uptake by MCT1 inhibition led to a corresponding decrease in succinate accumulation within cells (Figure 4D). Additionally, a time course of malonate uptake at low pH, showed that MCT1 inhibition largely abolished the increase in malonate over time, along with the corresponding increase in succinate (Figure S4B and S4C).\nMCT1 inhibition blocked the pH-dependent increase in malonate uptake (Figure 4E) and in parallel prevented succinate accumulation (Figure 4D). The MCT1 inhibitor led to a dose-dependent inhibition of malonate uptake even at pH 7.4 (Figure 4F). Therefore, malonate can be transported by MCT1 at pH 7.4, but this is greatly enhanced at lower pH due to the increased proportion of malonate in its monocarboxylate form.\nWe next assessed the impact of the malonate taken up into cells on mitochondrial function by measuring respiration. Lowering pH itself had little effect; however, addition of malonate severely reduced cellular respiration (Figure 4G and H and Figure S4E and S4F) and this effect was rescued by MCT1 inhibition (Figure 4G and H and Figure S4E and S4F). MCT1 inhibition led to a small increase in oxygen consumption at baseline, which may be due to lactate accumulation increasing the NADH/NAD+ ratio driving mitochondrial respiration.39\nTo confirm the pharmacological effects of MCT1 inhibition on malonate uptake genetically, we knocked down (KD) MCT1 using siRNA in both HeLa and C2C12 cells (Figure S5A through S5D). Consequently, lactate levels in MCT1 KD cells were significantly elevated compared to the control siRNA (Figure 5A). When MCT1 KD cells were incubated with malonate at low pH, malonate uptake was dramatically blocked (Figure 5B through 5E and Figure S5C), confirming MCT1 is directly responsible for malonate transport at low pH. Residual malonate uptake in the MCT1 KD cells was inhibited by cotreatment with an MCT1 inhibitor, further confirming its MCT1 dependence (Figure 5F and Figure S5F). Additionally, succinate was elevated in the MCT1 KD cells compared with control siRNA cells, a result not seen with acute pharmacological MCT1 inhibition (Figure 5D). This suggests that MCT1 may also play an important role in normoxic succinate metabolism and homeostasis and not just during myocardial reperfusion and intense exercise.21,35 Intriguingly, the succinate levels after malonate treatment over time differed between the 2 cell types, with succinate levels remaining low in C2C12 KD but elevated in HeLa MCT1 KDs compared to controls, suggesting a reliance on MCT1 for succinate efflux (Figure 5G and Figure S5G).\nGenetic knockdown of MCT1 (monocarboxylate transporter 1) prevents the uptake of malonate. 
A, Relative lactate levels in C2C12 cells treated with control or MCT1 siRNA (mean±SEM of lactate levels relative to control, n=6 biological replicates, statistics: 1-way ANOVA with Bonferroni post hoc test). B to E, Incubation of malonate (5 mmol/L disodium malonate [DSM]) in MCT1 KD cells at pH 7.4 (B and D) or 6 (C and E) for 15 min±MCT1i (10 µmol/L, MCT1 inhibitor AR-C141990). Levels of malonate (B and C) and succinate (D and E) quantified by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). F and G, Time course of malonate uptake (5 mmol/L DSM) at pH 6 in MCT1 KD cells (F) and corresponding succinate levels (G; mean±SEM, n=3 biological replicates). H, Malonate levels in murine Langendorff hearts perfused at pH 6 for 5 min with 5 mmol/L DSM±lactate (50 mmol/L) or MCT1 inhibitor (10 µmol/L AR-C141990; MCT1i; mean±SEM, n=4, statistics: unpaired, 2-tailed Mann-Whitney U test vs control).\nAs HeLa cells constitutively express MCT4, which is under the control of HIF (hypoxia-inducible factor)-1α and may be implicated in the transport of metabolites in IR, we knocked down MCT4 independently of MCT1 (Figure S5D and S5E). MCT4 KD had little effect on malonate uptake, with malonate and succinate levels mirroring those of control siRNA-treated cells (Figure S5F and S5G). Thus, MCT1 is the main driver of malonate uptake at lowered pH.\nOverall, diminishing MCT1 activity, either by pharmacological inhibition or genetic knockdown, impairs malonate uptake at both acidic and normal physiological pH. In addition, this is the first evidence that MCT1 can transport a 3-carbon dicarboxylate, which may also have implications for normal physiology and other pathologies.\nFinally, we assessed whether MCT1 was responsible for malonate uptake at lowered pH in the heart. Malonate uptake into the Langendorff-perfused heart was blocked by either excess lactate or the MCT1 inhibitor AR-C141990 in the perfusion medium (Figure 5H). Together, these findings confirm that MCT1 is responsible for the low-pH uptake of malonate, both in cells and in the intact heart.\n Ischemic Conditions Are Sufficient to Drive Malonate Uptake into the Heart Upon Reperfusion\nThat low pH enhanced malonate entry into heart cells was consistent with the protection afforded by DSM in IR injury being due to the local decrease in pH during ischemia and initial reperfusion. To test this hypothesis, Langendorff-perfused hearts were held ischemic for various times before DSM was infused for 5 minutes, followed by flushing and measurement of malonate. 
Malonate levels in heart tissue depended on the duration of ischemia, with the highest levels occurring after 20 minutes of ischemia and being ≈10-fold greater than in the normoxic heart (Figure 6A). This malonate uptake into the ischemic Langendorff-perfused heart was dramatically reduced by the MCT1 inhibitor AR-C141990 (Figure 6B). This confirmed that malonate uptake into the ischemic heart upon reperfusion is a selective process, driven by the low pH and facilitated by MCT1.\nIschemia drives malonate protonation, uptake, and cardioprotection. A, Langendorff-perfused murine hearts were held ischemic for either 0, 5, 10, or 20 min and reperfused with 5 mmol/L disodium malonate (DSM; pH 7.4) before malonate levels were measured in the heart by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 (control, 5 min) or 6 (10 and 20 min) biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). B, Malonate levels in murine Langendorff hearts exposed to 20 min ischemia and reperfused with 5 mmol/L malonate (pH 7.4)±MCT1 (monocarboxylate transporter 1) inhibitor (10 or 50 µmol/L AR-C141990; mean±SEM, n=4–5 biological replicates, statistical significance assessed by unpaired, 2-tailed Mann-Whitney U test vs control). C, Model of potential lactate and malonate exchange during reperfusion. D, Lactate levels in the Langendorff heart after 20 min ischemia and 1 min reperfusion±5 mmol/L DSM (mean, n=4 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test). E, Infarct size in murine LAD (left anterior descending coronary artery) ligation MI model with a 100 µL bolus of 8 mg/kg DSM, pH 4 acid control, or 8 mg/kg pH 4-formulated malonate at reperfusion after 30 min ischemia (mean, n=5 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test vs acid malonate).\nDuring ischemia, not only does the pH of the tissue fall, but lactate also accumulates and, upon reperfusion, can be transported out of cardiomyocytes by MCT1. This could enhance malonate uptake, as MCT1 may then act as a lactate-H+/monocarboxylate malonate-H+ exchanger (Figure 6C). To assess this, we measured lactate levels in malonate-perfused IR Langendorff hearts. Compared with reperfusion alone, lactate levels decreased in the malonate-treated hearts (Figure 6D). Furthermore, malonate treatment decreased lactate levels in cells that had been treated with the mitochondrial inhibitor antimycin A to enhance lactate production (Figure S6A). This suggests that exchange of extracellular malonate and a proton for intracellular lactate helps drive malonate entry into cardiomyocytes upon reperfusion and contributes to its at-risk tissue-selective effect.\nTherefore, malonate uptake into the heart upon reperfusion is dependent on the tissue having first undergone a period of ischemia, leading to a drop in pH and an accumulation of lactate.
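As a rough, purely illustrative way to see why the exchange model in Figure 6C favors at-risk tissue, the equilibrium accumulation ratios for two limiting transport modes can be written down directly. The pH and lactate values below are assumptions chosen for illustration, not measurements from this study.

```python
# Illustrative driving-force estimates for the Figure 6C model; all numbers are assumptions.
pH_out, pH_in = 6.4, 7.1        # assumed extracellular vs intracellular pH at early reperfusion
lac_in, lac_out = 20.0, 2.0     # assumed lactate concentrations, mmol/L (ischemic cell vs perfusate)

# Limiting case 1: MCT1 as an electroneutral monocarboxylate-H+ symporter.
# At equilibrium, [malonate monoanion]in / [malonate monoanion]out = [H+]out / [H+]in.
symport_ratio = 10.0 ** (pH_in - pH_out)     # ~5-fold inward accumulation from the pH gradient

# Limiting case 2: MCT1 acting as a strict lactate-H+/malonate-H+ exchanger.
# The proton terms cancel, and the outward lactate gradient sets the ratio.
exchange_ratio = lac_in / lac_out            # ~10-fold

print(f"symport limit ~{symport_ratio:.1f}x, exchange limit ~{exchange_ratio:.0f}x inward accumulation")
# Inside the cell (higher pH) most of the monoanion deprotonates to the dianion, which is
# presumably a poor MCT1 substrate, further favoring intracellular retention of malonate.
```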
 Low pH Formulation Improves Cardioprotection by Malonate\nThe mechanism of malonate uptake into cardiac tissue suggested that lowering the pH of the malonate infusion would increase the proportion of malonate in the monocarboxylate form and thereby increase its potency as a cardioprotective agent. To assess this, we used a malonate dose (8 mg/kg) that was not protective when administered at neutral pH in the in vivo LAD MI model. When this dose was reformulated at pH 4 (a pH currently used in Food and Drug Administration–approved parenteral formulations)40 and administered as a bolus, it conferred significant protection that was not due to the low pH alone (Figure 6E and Figure S6B). Furthermore, although neutral malonate administered before ischemia was not protective, acidified malonate drove its uptake into cardiomyocytes and significantly reduced infarct size when infused before ischemia (Figure S6C)."
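The rationale for the pH 4 formulation can also be illustrated with the same speciation arithmetic used earlier: at pH 4 the monocarboxylate fraction is roughly 92%, versus about 2% at pH 7.4. A self-contained sketch (ideal-solution assumption; in practice the bolus is rapidly diluted and buffered by blood, so the elevated monoanion fraction is transient and local):

```python
# Monocarboxylate (monoanion) fraction of malonate at formulation vs physiological pH,
# using pKa1 = 2.83 and pKa2 = 5.69 (ideal-solution sketch, activities ignored).
def monoanion_fraction(ph: float, pka1: float = 2.83, pka2: float = 5.69) -> float:
    h = 10.0 ** (-ph)
    return 1.0 / (1.0 + h / 10.0 ** (-pka1) + 10.0 ** (-pka2) / h)

print(f"pH 4.0 formulation: {100.0 * monoanion_fraction(4.0):.0f}% monoanion")  # ~92%
print(f"pH 7.4 plasma:      {100.0 * monoanion_fraction(7.4):.0f}% monoanion")  # ~2%
```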
, "To explore the mechanism of malonate uptake in vitro, we incubated C2C12 and H9c2 myoblasts with DSM and measured intracellular malonate levels by LC-MS/MS (liquid chromatography-tandem mass spectrometry). Incubation with DSM for 15 minutes at pH 7.4, led to a dose-dependent increase in intracellular malonate (Figure S1A and S1D). Intracellular succinate also accumulated in a malonate-dependent manner (Figure S1B and S1E), confirming that once malonate enters the cell it is rapidly transported into mitochondria and inhibits SDH. However, incubating cells with 1 mmol/L DSM led to variable malonate uptake and little SDH inhibition, indicating that malonate uptake across biological membranes is inefficient. 
Moreover, the intracellular levels of malonate achieved by 250 µmol/L of the malonate ester prodrug diacetoxymethyl malonate, are 80-fold more than that generated by incubation with 5 mmol/L DSM.25 As DSM was protective in the LAD model when infused at 16 mg/kg, corresponding to a maximum possible blood malonate concentration of ≈1.8 mmol/L (assuming a blood volume of 1.5 mL in a 25 g mouse),29 this compares to the dose range tested in cells (furthermore, far more malonate is available for uptake in vitro than in vivo due to the large reservoir in the incubation medium). Even so, the intracellular malonate levels were far lower with DSM compared to diacetoxymethyl malonate.25\nThe kinetics of cell uptake upon incubation with 5 mmol/L DSM showed time-dependent uptake, although extended incubation times were required for succinate elevation (Figure S1C and S1F). This is likely due to the initial concentrations of malonate entering the cell being insufficient to inhibit SDH. These data are consistent with the lack of protection by DSM delivered before ischemia in vivo, as well as its limited uptake into normoxic tissues. Together these suggest that exposure to ischemia may facilitate malonate entry into the heart.", "Malonate is a dicarboxylate at physiological pH (Figure 1A, pKa=2.83 and 5.69), suggesting that the pH decrease in ischemic tissue9,11,30 may enhance malonate uptake into the heart during early reperfusion, by increasing the concentration of its monocarboxylic form. Incubating cells with DSM at either pH 6, 7.4, or 8 for 15 minutes, led to large differences in malonate uptake (Figure 3A and Figure S2A). At pH 6, the levels of malonate in the cell were significantly higher than at pH 7.4 or 8, thus malonate uptake is favored by acidic pH. Succinate levels mirrored those of malonate, with greater succinate accumulation as a result of increased malonate-dependent SDH inhibition at low pH (Figure 3B and Figure S2B). Low pH alone had no effect on succinate levels (Figure 3B and Figure S2B), suggesting that it was due to malonate entry into the cells followed by SDH inhibition.\nMalonate uptake is enhanced at low pH. A and B, C2C12 cells were incubated with disodium malonate (DSM; 0, 1, or 5 mmol/L) for 15 min at either pH 6, 7.4, or 8 before measuring intracellular malonate (A) and succinate (B) by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). C, Malonate levels in C2C12 cells after incubation with DSM (5 mmol/L) for 15 min at various (patho)physiological pH (mean±SEM, n=3 biological replicates). D, Malonate levels in murine isolated Langendorff-perfused hearts treated with 5 mmol/L DSM infused at either pH 7.4 or 6 for 5 min (mean±SEM, n=4 biological replicates, statistics: unpaired, 2-tailed Mann-Whitney U test). E to H, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at either pH 6 or 7.4 in the presence of FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone) (E), gramicidin (F), nigericin (G), or BAM15 (N5,N6-bis(2-Fluorophenyl)-[1,2,5]oxadiazolo[3,4-b]pyrazine-5,6-diamine) (H) before measuring intracellular malonate by LC-MS/MS (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). I, Structures of malonic acid and malonamic acid. J, Quantification of levels of malonate and malonamate in C2C12 cells after incubation (5 mmol/L DSM or disodium malonamate; 15 min) at pH 7.4 (n=6 biological replicates). 
K, C2C12 cells were incubated with either DSM or disodium malonamate (both 5 mmol/L) at pH 6 or 7.4 for 15 min and the intracellular levels measured by LC-MS/MS. (mean±SEM of the fold-enhancement of uptake at pH 6 vs pH 7.4, n=6 biological replicates). Statistics for J and K: unpaired, 2-tailed Student t test).\nIncubating cells with malonate over a range of pH values between pH 7.4 and 6 clearly showed the pH-dependent uptake of malonate, in line with the increase in monocarboxylate malonate proportion (Figure 3C). This was mirrored by succinate levels (Figure S2C). Therefore, even a small drop in pH can lead to a substantial increase in malonate uptake, and thus is likely to be relevant for its entry into ischemic tissue upon reperfusion.\nAs the cultured cells used are noncontractile and substantially differ in their properties from in situ contracting cardiomyocytes,15 importantly, we next assessed whether decreasing the pH also enhanced malonate uptake into the ex vivo perfused Langendorff heart. Malonate was infused into the heart at either pH 7.4 or 6 for 5 minutes, before briefly flushing at pH 7.4 to remove nonmyocardial malonate. Remarkably, the levels of malonate were significantly elevated when infused at pH 6 compared to 7.4, confirming that the in vitro results translate to the heart (Figure 3D). We conclude that low pH conditions facilitate the entry of malonate into cardiomyocytes.", "As pH modulated malonate entry into cells in vitro and in the heart, we next probed the factors affecting malonate uptake into cells. Malonate entry into cells at either pH 7.4 or 6 was blocked at 4 °C and was associated with negligible succinate levels (Figure S2D and S2E). That a decrease in extracellular pH increased malonate entry into cells, suggested uptake driven by the proton gradient. Therefore, we next abolished the plasma membrane proton gradient using ionophores21 which prevented the cellular uptake of malonate (Figure 3E–3G and Figure S3A) and subsequent increase in succinate levels (Figure S3B through S3E). However, the uncoupler BAM15, which is selective for the mitochondrial inner membrane over the plasma membrane,31 had little effect, consistent with the plasma membrane proton gradient driving malonate uptake (Figure 3H and Figure S3F).\nWe next used nonspecific transport inhibitors to assess whether we could block malonate uptake at low pH. DIDS (4,4’-Diisothiocyano-2,2’-stilbenedisulfonic acid), an irreversible inhibitor of chloride/bicarbonate exchange that inhibits malonate uptake into erythrocytes,32,33 led to a dose-dependent inhibition of malonate uptake, although succinate remained high (Figure S3G and S3H). Thus, malonate uptake into cardiomyocytes at low pH is driven by a proton gradient through a transporter-dependent process.\nMalonate’s pKa’s are 2.83 and 5.69, so at pH 6.4, ≈16% of the malonate would be in its monocarboxylate form. To determine whether the uptake of malonate was dependent on its protonation to a monocarboxylate form, we assessed the uptake of a compound that mimics the monocarboxylate malonate. In 3-amino-3-oxopropionate (malonamate; Figure 3I), one carboxylic acid has been replaced with a neutral amido group leaving a single carboxylic acid of pKa ≈4.75. Thus, at pH 7.4 malonamate resembles the monocarboxylate form of malonate. This led to more of malonamate being taken up into cells at pH 7.4 than malonate (Figure 3J). 
Lowering the pH to 6 led to ≈15-fold increase in malonate uptake while malonamate uptake changed negligibly (Figure 3K) because it will remain as a monocarboxylate across this pH range.\nTogether, these data support transport of the monoanionic form of malonate, under the conditions that occur during early reperfusion. Therefore, a pH gradient can drive the uptake of malonate via protonation and transport in its monocarboxylate form.", "The inhibitory effect of DIDS, which also inhibits MCT1,34 raised the possibility of the uptake of the protonated form of malonate being catalyzed by a monocarboxylate transporter (MCT; SLC16 (solute carrier family member 16)). MCT1 (SLC16A1) is the principal transporter of lactate, which is carried in symport with a proton, both of which are elevated during ischemia, making MCT1 an attractive candidate transporter for malonate uptake upon reperfusion. Additionally, it was recently shown that MCT1 mediates the efflux of succinate from the ischemic heart during reperfusion and from exercising muscle.21,35 As MCT1 is a lactate transporter, we first assessed whether high concentrations of lactate in the incubation medium compete with malonate. Excess lactate led to a concentration-dependent decrease in malonate uptake into the cell and a corresponding decrease in succinate accumulation (Figure 4A and Figure S4A). This suggested that at a low pH, malonate uptake into the cells is via MCT1, which is highly expressed in cardiomyocytes.36\nInhibition of MCT1 (monocarboxylate transporter 1) prevents the enhanced uptake of malonate at lowered pH. A, Malonate uptake (5 mmol/L disodium malonate [DSM], 15 min) in C2C12 cells in the presence of lactate (0, 10 or 50 mmol/L). B, Lactate levels in C2C12 cells after 15 min treatment with varying concentrations of MCT1 inhibitors. C and D, Effect of MCT1 inhibition by AR-C141990 or AZD3965 on malonate (5 mmol/L DSM, 15 min) uptake (C) at pH 6 and subsequent succinate levels (D). E, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at various pH±10 µmol/L AR-C141990. F, MCT1 inhibition by AR-C141990 on malonate (5 mmol/L DSM, 15 min) uptake at pH 7.4 (A to F, mean±SEM, n=3 biological replicates, statistics: (A) Kruskal-Wallis with Dunn post hoc test). G and H, Effect of malonate (5 mmol/L DSM) on cellular oxygen consumption at pH 7.4 (G) or 6 (H) ±MCT1 inhibitor (10 µmol/L AR-C141990; data presented as nonmitochondrial respiration normalized mean oxygen consumption rate (OCR)±SEM of 3 biological replicates (n=12–16 technical replicates per biological replicate), statistics: Kruskal-Wallis with Dunn post hoc test). ATP indicates OCR in the presence of 1.5 µmol/L oligomycin; BL, baseline OCR; MAX, OCR in the presence of 1 µmol/L FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone); and NM, OCR in the presence of 4 µg/mL rotenone and 10 µmol/L antimycin A.\nTo confirm that MCT1 was the malonate transporter, we assessed the potent and selective MCT1 inhibitors, AR-C141990 and AZD3965.21,35,37,38 When cells were incubated with these MCT1 inhibitors, they led to a dramatic dose-dependent increase in lactate levels within the cell, consistent with preventing lactate efflux via MCT1 (Figure 4B). MCT1 inhibition by either AR-C141990 or AZD3965 led to a profound dose-dependent decrease in malonate uptake (Figure 4C). Inhibition of malonate uptake by MCT1 inhibition led to a corresponding decrease in succinate accumulation within cells (Figure 4D). 
Additionally, a time course of malonate uptake at low pH, showed that MCT1 inhibition largely abolished the increase in malonate over time, along with the corresponding increase in succinate (Figure S4B and S4C).\nMCT1 inhibition blocked the pH-dependent increase in malonate uptake (Figure 4E) and in parallel prevented succinate accumulation (Figure 4D). The MCT1 inhibitor led to a dose-dependent inhibition of malonate uptake even at pH 7.4 (Figure 4F). Therefore, malonate can be transported by MCT1 at pH 7.4, but this is greatly enhanced at lower pH due to the increased proportion of malonate in its monocarboxylate form.\nWe next assessed the impact of the malonate taken up into cells on mitochondrial function by measuring respiration. Lowering pH itself had little effect; however, addition of malonate severely reduced cellular respiration (Figure 4G and H and Figure S4E and S4F) and this effect was rescued by MCT1 inhibition (Figure 4G and H and Figure S4E and S4F). MCT1 inhibition led to a small increase in oxygen consumption at baseline, which may be due to lactate accumulation increasing the NADH/NAD+ ratio driving mitochondrial respiration.39\nTo confirm the pharmacological effects of MCT1 inhibition on malonate uptake genetically, we knocked down (KD) MCT1 using siRNA in both HeLa and C2C12 cells (Figure S5A through S5D). Consequently, lactate levels in MCT1 KD cells were significantly elevated compared to the control siRNA (Figure 5A). When MCT1 KD cells were incubated with malonate at low pH, malonate uptake was dramatically blocked (Figure 5B through 5E and Figure S5C), confirming MCT1 is directly responsible for malonate transport at low pH. Residual malonate uptake in the MCT1 KD cells was inhibited by cotreatment with an MCT1 inhibitor, further confirming its MCT1 dependence (Figure 5F and Figure S5F). Additionally, succinate was elevated in the MCT1 KD cells compared with control siRNA cells, a result not seen with acute pharmacological MCT1 inhibition (Figure 5D). This suggests that MCT1 may also play an important role in normoxic succinate metabolism and homeostasis and not just during myocardial reperfusion and intense exercise.21,35 Intriguingly, the succinate levels after malonate treatment over time differed between the 2 cell types, with succinate levels remaining low in C2C12 KD but elevated in HeLa MCT1 KDs compared to controls, suggesting a reliance on MCT1 for succinate efflux (Figure 5G and Figure S5G).\nGenetic knockdown of MCT1 (monocarboxylate transporter 1) prevents the uptake of malonate. A, Relative lactate levels in C2C12 cells treated with control or MCT1 siRNA (mean±SEM of lactate levels relative to control, n=6 biological replicates, statistics: 1-way ANOVA with Bonferroni post hoc test). B to E, Incubation of malonate (5 mmol/L disodium malonate [DSM]) in MCT1 KD cells at pH 7.4 (B and D) or 6 (C and E) for 15 min±MCT1i (10 µmol/L, MCT1 inhibitor AR-C141990). Levels of malonate (B and C) and succinate (D and E) quantified by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). F and G, Time course of malonate uptake (5 mmol/L DSM) at pH 6 in MCT1 KD cells (F) and corresponding succinate levels (G; mean±SEM, n=3 biological replicates). 
H, Malonate levels in murine Langendorff hearts perfused at pH 6 for 5 min with 5 mmol/L DSM±lactate (50 mmol/L) or MCT1 inhibitor (10 µmol/L AR-C141990; MCT1i; mean±SEM, n=4, statistics: unpaired, 2-tailed Mann-Whitney U test vs control).\nAs HeLa cells constitutively express MCT4, which is under the control of HIF (hypoxia-inducible factor)-1α and may be implicated in the transport of metabolites in IR, we KD MCT4 independently of MCT1 (Figure S5D and S5E). MCT4 KD had little effect on malonate uptake, with malonate and succinate levels mirroring those of control siRNA-treated cells (Figure S5F and S5G). Thus, MCT1 is the main driver of malonate uptake at lowered pH.\nOverall, diminishing MCT1 activity, either by pharmacological inhibition or genetic knockdown, impairs malonate uptake at both acidic and normal physiological pH. In addition, this is the first evidence that MCT1 can transport a 3-carbon chain length dicarboxylate, which may also have implications in normal physiology and other pathologies.\nFinally, we assessed if MCT1 was responsible for malonate uptake at lowered pH in the heart. Malonate uptake into the Langendorff-perfused heart was blocked by either excess lactate, or the MCT1 inhibitor AR-C141990, in the perfusion medium (Figure 5H). Together these findings confirm that MCT1 is responsible for the low pH uptake of malonate, both in cells and in the intact heart.", "That low pH enhanced malonate entry into heart cells was consistent with the protection afforded by DSM in IR injury being due to the local decrease in pH during ischemia and initial reperfusion. To test this hypothesis, Langendorff-perfused hearts were held ischemic for various times before DSM was infused for 5 minutes, followed by flushing, and measurement of malonate. The malonate levels in heart tissue were dependent on ischemia-time, with the highest occurring after 20 minutes ischemia and being ≈10-fold greater than in the normoxic heart (Figure 6A). This malonate uptake into the ischemic Langendorff-perfused heart was dramatically reduced by the MCT1 inhibitor AR-C141990 (Figure 6B). This confirmed that malonate uptake into the ischemic heart upon reperfusion is a selective process, driven by the low pH and facilitated by the MCT1.\nIschemia drives malonate protonation, uptake, and cardioprotection. A, Langendorff-perfused murine hearts were held ischemic for either 0, 5, 10, or 20 min and reperfused with 5 mmol/L disodium malonate (DSM; pH 7.4) before malonate levels measured in the heart by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 (control, 5 min) or 6 (10 and 20 min) biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). B, Malonate levels in murine Langendorff hearts exposed to 20 min ischemia and reperfused with 5 mmol/L malonate (pH 7.4)±MCT1 (monocarboxylate transporter 1) inhibitor (10 or 50 µmol/L AR-C141990; mean±SEM, n=4–5 biological replicates, statistical significance assessed by unpaired, 2-tailed Mann-Whitney U test vs control). C, Model of potential lactate and malonate exchange during reperfusion. D, Lactate levels in the Langendorff heart after 20 min ischemia and 1 min reperfusion±5 mmol/L DSM (mean, n=4 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test). 
E, Infarct size in murine LAD (left anterior descending coronary artery) ligation MI model with 100 µl bolus of 8 mg/kg DSM, pH 4 acid control or 8 mg/kg pH 4 formulated malonate at reperfusion after 30 min ischemia (mean, n=5 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test vs acid malonate).
During ischemia the pH of the ischemic tissue falls and, in addition, lactate accumulates and upon reperfusion can be transported out of cardiomyocytes by MCT1. This could enhance malonate uptake, as the MCT1 may then act as a lactate-H+/monocarboxylate malonate-H+ exchanger (Figure 6C). To assess this, we measured lactate levels in malonate-perfused IR Langendorff hearts. We found that, compared to reperfusion alone, lactate levels decreased in the malonate-treated hearts (Figure 6D). Furthermore, malonate treatment decreased lactate levels in cells that had been treated with the mitochondrial inhibitor antimycin A to enhance lactate production (Figure S6A). This suggests that the exchange of extracellular malonate and a proton for intracellular lactate helps drive malonate entry into cardiomyocytes upon reperfusion and contributes to its at-risk tissue-selective effect.
Therefore, malonate uptake into the heart upon reperfusion is dependent on the tissue having first undergone a period of ischemia, leading to a drop in pH and an accumulation of lactate.
The mechanism of malonate uptake into cardiac tissue suggested that lowering the pH of the malonate infusion would increase the proportion in the monocarboxylate form and thereby increase its potency as a cardioprotective agent. To assess this, we used a malonate dose that was not protective (8 mg/kg) when administered at a neutral pH in the in vivo LAD MI model. When this dose of malonate was reformulated at pH 4 (a pH currently used in Food and Drug Administration–approved parenteral formulations)40 and administered as a bolus, it conferred significant protection that was not due to the low pH alone (Figure 6E and Figure S6B). Furthermore, although neutral malonate administered before ischemia was not protective, acidified malonate drove its uptake into cardiomyocytes and significantly reduced infarct size when infused before ischemia (Figure S6C).
No medicine is currently available that can be given at reperfusion to prevent cardiac IR injury.1,6 Drugs that prevent cardiac IR injury should both reduce MI damage and the subsequent development of heart failure.1,12 Targeting succinate metabolism has been shown to be a promising therapeutic approach. Inhibiting SDH using the reversible, competitive inhibitor malonate reduces infarct size in small and large animal models of cardiac IR injury, despite its mechanism of entry into heart tissue being unknown.26,27
Here, we found that malonate uptake into cells and the heart at pH 7.4 was inefficient. However, the uptake of malonate into cells was dramatically enhanced at the lower levels of pH that occur during ischemia and by lactate accumulation within cells. Thus, ischemia provides an environment that will protonate malonate and thereby enable its uptake by the MCT1 transporter. Interestingly, this transporter undergoes a trans-acceleration phenomenon, whereby its transition from the outward-facing to the inward-facing conformation occurs more rapidly in the presence of a trans-substrate.41,42 In this case, the uptake of malonate on the extracellular side of the plasma membrane may be accelerated by lactate efflux.
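The speciation argument running through these results can be made quantitative with the Henderson-Hasselbalch relation and the pKa values quoted in the article (2.83 and 5.69). The short calculation below is an illustrative back-of-envelope sketch, not part of the authors' analysis; it simply reproduces the quoted ≈16% monocarboxylate fraction at pH 6.4 and shows how steeply that fraction grows as the pH falls toward the value used in the acidified formulation.

```python
# Illustrative speciation of malonate (a diprotic acid) using the
# Henderson-Hasselbalch relation and the pKa values quoted in the text
# (pKa1 = 2.83, pKa2 = 5.69). Not part of the original analysis.

PKA1, PKA2 = 2.83, 5.69

def malonate_fractions(ph):
    """Return (H2A, HA-, A2-) mole fractions at a given pH."""
    h = 10.0 ** (-ph)
    ka1, ka2 = 10.0 ** (-PKA1), 10.0 ** (-PKA2)
    # Relative abundances, taking the monoanion HA- as the reference species:
    h2a = h / ka1   # fully protonated dicarboxylic acid
    ha = 1.0        # monocarboxylate (the proposed MCT1 substrate)
    a2 = ka2 / h    # dicarboxylate (dominant at pH 7.4)
    total = h2a + ha + a2
    return h2a / total, ha / total, a2 / total

for ph in (7.4, 6.4, 6.0, 4.0):
    h2a, ha, a2 = malonate_fractions(ph)
    print(f"pH {ph}: monocarboxylate {ha:5.1%}, dicarboxylate {a2:5.1%}, neutral acid {h2a:6.3%}")
```

This gives roughly 2% monocarboxylate at pH 7.4, 16% at pH 6.4, 33% at pH 6, and over 90% at pH 4. The ratio of the fractions at pH 6 and pH 7.4 (roughly 17-fold) is of the same order as the ≈15-fold enhancement of cellular uptake reported above, although measured uptake will also depend on transporter kinetics and on lactate exchange.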
In the LAD MI model of IR injury, malonate is readily available at the point of reperfusion, thus a proportion of the malonate would be protonated and accessible for transport by MCT1. This enables malonate entry into the heart in an at-risk tissue-selective manner.
Additionally, as well as the inhibition of SDH, accelerating lactate efflux from the heart may also play a role in the reduction of IR injury. By shifting the equilibrium to facilitate anaerobic ATP production, this may promote the early restoration of ionic gradients through ATP-dependent transporters.43,44 Furthermore, facilitating lactate efflux may promote the extrusion of protons from the intracellular environment, thus reducing the activity of H+ transporters such as the Na+-H+ exchanger, though further investigation into these mechanisms is warranted.43,45
This is the first account of an ischemia-selective cardioprotective agent, utilizing the pathological differences between risk and nonrisk tissue to drive uptake. As MCT1 is highly expressed in the hearts of mice, rats, pigs, and humans (Figure S6D) and the characteristics of ischemia are conserved between these species,19,24 these drivers for malonate uptake and the plasma membrane transporter may facilitate malonate cardioprotection in humans. In addition to its place in the treatment of IR injury in MI, the mechanism of ischemia-enhanced delivery of malonate may provide a novel treatment option for IR injury under many other circumstances.
There is much interest in targeted drug delivery and the ability to engage the intended site while reducing off-target effects. Here, we have shown that malonate is not only a cardioprotective agent but also one whose entry into the rest of the heart is limited. Thus, malonate is a potent and ischemic tissue-targeted drug, explaining how large doses can be delivered acutely with minimal toxic effects, which is likely to be important bearing in mind the many comorbidities associated with MI.46 Furthermore, malonate can be efficiently metabolised28,47 and has the potential to promote cardiomyocyte regeneration.48
Malonate is robustly protective in acute IR injury, though further work is now required to understand the tractability of malonate treatment in chronic IR injury models; in particular, conducting a double-blind chronic large animal IR injury study.49 This would provide the greatest insight into the cardioprotection capabilities of malonate post-MI and its effect on the development of heart failure and define its potential for translation to the clinic.
Schematic of ischemia-dependent malonate uptake via MCT1 (monocarboxylate transporter 1). The accumulation of lactate and protons in ischemic tissue and equilibration with the extracellular space facilitates protonation of malonate to its monocarboxylate form. This enables it to be an MCT1 substrate and enter cardiomyocytes upon reperfusion. Here, the malonate is transported into mitochondria by the mitochondrial dicarboxylate carrier where it can subsequently go on to inhibit SDH (succinate dehydrogenase). SDH inhibition reduces succinate oxidation, reactive oxygen species (ROS) production by reverse electron transport (RET) through complex I and opening of the mitochondrial permeability transition pore, thereby reducing cell death from ischemia/reperfusion (IR) injury.
CxI indicates complex I; DIC, mitochondrial dicarboxylate carrier (SLC25A10); mPTP, mitochondrial permeability transition pore; and TCA, tricarboxylic acid.
We have shown that malonate is an ischemia-selective drug, due to the lowered pH and lactate accumulation of ischemic tissue driving its uptake via the MCT1 (Figure 7). Furthermore, we show that a low pH formulation of malonate enhances its therapeutic potency. Malonate is the first at-risk tissue-selective cardioprotective drug and represents a significant step toward the treatment of IR injury.
Acknowledgments
The authors thank Stephen Large, Fouad Taghavi, and Margaret M. Huang (Department of Surgery, University of Cambridge) for obtaining human heart tissue and Benjamin Thackray (Department of Medicine, University of Cambridge) for assistance with initial experiments.
Sources of Funding
This work was supported by the British Heart Foundation (PG/20/10025 to T. Krieg); the Medical Research Council (MC_UU_00015/3 to M.P. Murphy and MR/P000320/1 to T. Krieg), the Wellcome Trust (220257/Z/20/Z to M.P. Murphy, 221604/Z/20/Z to D. Aksentijevic), Barts Charity (MRC0215 to D. Aksentijevic).
Disclosures
Some authors currently hold a patent on the use of malonate esters in cardiac ischemia/reperfusion (IR) injury (M.P. Murphy and T. Krieg) and have submitted patent applications on the use of malonate in IR injury associated with ischemic stroke (M.P. Murphy and T. Krieg) and the pH-enhancement of malonate described in this article (H.A. Prag, M.P. Murphy, and T. Krieg). The other authors report no conflicts.
Supplemental Materials
Supplemental Methods
Major Resources Table
Figures S1–S6
References 49–56
[ "ischemia", "mitochondria", "myocardial infarction", "reactive oxygen species", "reperfusion" ]
What Is Known?
Extensive succinate accumulation during ischemia and its subsequent rapid oxidation on reperfusion drives ischemia/reperfusion injury. Preventing succinate accumulation during ischemia reduces damage on reperfusion. Inhibiting succinate dehydrogenase with malonate is protective against ischemia/reperfusion injury, although its mechanism of entry into cardiomyocytes is undefined.
What New Information Does This Article Contribute?
The cardioprotective effect of malonate is dependent on its selective uptake into cardiomyocytes on reperfusion after an ischemic period. Malonate entry into cardiomyocytes upon reperfusion is facilitated by the lowered pH and lactate exchange, which selectively drives malonate into cardiomyocytes as a monoanion via the monocarboxylate transporter MCT1 (monocarboxylate transporter 1). This is the first time malonate has been shown to be a substrate for MCT1. Malonate selectively enters the at-risk tissue, sparing the nonischemic area on reperfusion. Thus, malonate is the first example of an at-risk tissue-selective, cardioprotective drug. Determining the molecular basis for selective malonate entry via MCT1 into the ischemic heart upon reperfusion is a significant step toward treating cardiac ischemia/reperfusion injury. Malonate is cardioprotective in small and large animal models and MCT1 is highly expressed in the human heart. Furthermore, this mechanism can be exploited to increase malonate potency using an acidic formulation. The next step is to assess malonate as a treatment for cardiac ischemia/reperfusion injury in patients.
Methods: Data Availability
Detailed methods and Major Resources Table can be found in the Supplemental Material. Data will be made available upon reasonable request, by contacting a corresponding author.
Results: Malonate Is Protective in an MI Model and Is Taken Up Selectively into the Ischemic Region of the Heart
DSM is protective against cardiac IR injury, but its mechanism of protection remains uncertain. To address this gap in our knowledge we first tested the therapeutic range of DSM in IR injury. We infused DSM at a range of concentrations (1.6–160 mg/kg, equivalent to ≈11–1100 µmol/kg) at reperfusion in an in vivo murine LAD (left anterior descending coronary artery) ligation model of MI. DSM infusion during this clinically relevant reperfusion period led to a dose-dependent decrease in infarct size (Figure 1C), with 160 and 16 mg/kg showing robust cardioprotection. However, when DSM was infused before the onset of ischemia it was not protective (Figure 1D), even at 160 mg/kg which gave the smallest infarct when administered at reperfusion. This contrasted with the cell-permeable dimethyl malonate, which can diffuse into the tissue and generate malonate, preventing succinate accumulation and thereby reducing IR injury.18,25 This suggests that either DSM is minimally taken up by tissues during normoxia and is only taken up by tissues after ischemia or that DSM protects against acute IR injury by an extracellular mechanism. We next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model.
This showed that malonate was indeed taken up into cells within the infarct region to a level of ≈400 pmol/mg tissue and to a far greater extent than into cells in the healthy tissue (Figure 1E). Furthermore, reperfusing with malonate slowed succinate oxidation within the infarct region, with succinate remaining significantly elevated in the infarct region following 1-minute reperfusion, compared with control reperfusion with saline (Figure 1F). Limiting succinate oxidation at reperfusion with malonate also blunted ROS production in the at-risk tissue (Figure 1G). We next assessed how SDH inhibition compared to direct inhibition of the mitochondrial permeability transition pore with CsA (cyclosporin A). Here we found that the cardioprotection from malonate was additive to cyclosporin A alone (Figure 1H), hence targeting the upstream mechanism of permeability transition pore opening is an increasingly attractive option for IR injury treatment.
To better understand the metabolic differences between the healthy and infarcted tissue upon malonate treatment, we investigated the in vivo MI model using mass spectrometry imaging. To do this, we subjected hearts to either 30 minutes of ischemia alone, 30 minutes of ischemia followed by 15 minutes of reperfusion, or 30 minutes of ischemia followed by 15 minutes of reperfusion with DSM (to mimic the cardioprotective malonate infusion), before snap freezing and processing for mass spectrometry imaging (Figure 2A). Infarct lesions were demarcated using hematoxylin and eosin (H&E), silver infarct staining and metabolite principal component analysis to differentiate the risk and nonrisk regions. Mass spectrometry imaging coupled with the demarcated risk areas showed striking changes in the levels of succinate in the infarct region. During ischemia, succinate was significantly elevated in the at-risk tissue, with considerable succinate accumulation in the core of the infarct (Figure 2B and 2C). After 15-minute reperfusion, the succinate levels in the infarct lesion returned to healthy tissue levels, due to succinate oxidation and efflux.21 When malonate was infused at the time of reperfusion, succinate levels remained higher in all regions of the heart than in the nonmalonate-treated heart, consistent with the prevention of its oxidation by SDH inhibition by malonate.
Localization of succinate accumulation in heart tissue identified by mass spectrometry imaging. A, Outline of experimental groups for mass spectrometry imaging. B, Representative images of succinate abundance detected by mass spectrometry imaging in myocardial sections. Black outer line indicates the edge of the tissue slice, white inner line indicates infarct region. C, Quantification of succinate identified by MSI (mean±SEM, n=3 biological replicates, statistics: Friedman paired test for healthy vs lesion and Kruskal-Wallis with Dunn post hoc test between conditions). au indicates arbitrary units; DSM, disodium malonate; H, healthy tissue; IR, ischemia/reperfusion; L, infarct lesion; and rep, reperfusion.
We conclude that malonate is indeed taken up by the heart, but this is greatly enhanced in the ischemic region upon reperfusion. Therefore, malonate provides protection against cardiac IR injury in vivo over a range of concentrations following its selective entry into the infarct region, where it slows the oxidation of succinate during reperfusion, preventing the production of RET-derived ROS.
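The unit conversions quoted in this section, and the upper-bound blood concentration cited elsewhere in the Results (≈1.8 mmol/L for the 16 mg/kg dose), can be checked with a few lines of arithmetic. The sketch below is illustrative only: it assumes anhydrous disodium malonate (Na2C3H2O4, molar mass ≈148 g/mol, hydrated salts differ) and reuses the blood-volume estimate given in the article (≈1.5 mL for a 25 g mouse); it is not part of the authors' analysis.

```python
# Sketch of the unit conversions quoted in the text, assuming anhydrous
# disodium malonate (~148.03 g/mol) and the article's blood-volume estimate
# of ~1.5 mL for a 25 g mouse. Illustrative only.

MW_DSM = 148.03  # g/mol, disodium malonate (anhydrous)

# 1) Dose range: mg/kg -> umol/kg (mg/kg divided by g/mol gives mmol/kg)
for dose_mg_per_kg in (1.6, 16, 160):
    umol_per_kg = dose_mg_per_kg / MW_DSM * 1000.0
    print(f"{dose_mg_per_kg:5.1f} mg/kg  ≈ {umol_per_kg:6.0f} µmol/kg")

# 2) Upper-bound blood concentration for the 16 mg/kg dose in a 25 g mouse,
#    assuming the entire dose remains in ~1.5 mL of blood (a deliberate maximum).
dose_mg = 16 * 0.025                      # mg given to a 0.025 kg mouse
dose_umol = dose_mg / MW_DSM * 1000.0     # ~2.7 µmol
blood_ml = 1.5
print(f"peak blood malonate ≈ {dose_umol / blood_ml:.1f} mmol/L")  # µmol/mL == mmol/L
```

Both results match the figures quoted in the text (≈11–1100 µmol/kg and ≈1.8 mmol/L); a realistic pharmacokinetic estimate would of course be lower once distribution and clearance are taken into account.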
Malonate Uptake into Cells Is Inefficient at pH 7.4
To explore the mechanism of malonate uptake in vitro, we incubated C2C12 and H9c2 myoblasts with DSM and measured intracellular malonate levels by LC-MS/MS (liquid chromatography-tandem mass spectrometry). Incubation with DSM for 15 minutes at pH 7.4 led to a dose-dependent increase in intracellular malonate (Figure S1A and S1D). Intracellular succinate also accumulated in a malonate-dependent manner (Figure S1B and S1E), confirming that once malonate enters the cell it is rapidly transported into mitochondria and inhibits SDH. However, incubating cells with 1 mmol/L DSM led to variable malonate uptake and little SDH inhibition, indicating that malonate uptake across biological membranes is inefficient.
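Throughout the figure legends in this article, group comparisons use nonparametric tests (Kruskal-Wallis with Dunn post hoc; unpaired, 2-tailed Mann-Whitney U). For readers wishing to run the same style of analysis on their own measurements, a minimal sketch is shown below. The package choice (SciPy plus the third-party scikit-posthocs library) is an assumption, not taken from the article, and the arrays are placeholders for illustration rather than data from this study.

```python
# Minimal sketch of the nonparametric tests named in the figure legends,
# using SciPy and the third-party scikit-posthocs package (assumed installed).
# The arrays below are placeholders, NOT data from this study.
from scipy.stats import kruskal, mannwhitneyu
import scikit_posthocs as sp

ph6  = [4.1, 3.8, 4.5, 4.0]   # e.g. intracellular malonate, arbitrary units
ph74 = [1.2, 0.9, 1.1, 1.0]
ph8  = [0.8, 1.0, 0.7, 0.9]

# Kruskal-Wallis across the three pH groups...
h_stat, p_kw = kruskal(ph6, ph74, ph8)

# ...followed by Dunn's post hoc test for pairwise comparisons
# (p_adjust method is a choice, not specified in the article).
p_dunn = sp.posthoc_dunn([ph6, ph74, ph8], p_adjust="bonferroni")

# Two-group comparisons (e.g. pH 7.4 vs pH 6 perfused hearts) use the
# unpaired, two-tailed Mann-Whitney U test.
u_stat, p_mwu = mannwhitneyu(ph6, ph74, alternative="two-sided")

print(f"Kruskal-Wallis p = {p_kw:.3g}; Mann-Whitney p = {p_mwu:.3g}")
print(p_dunn)
```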
This suggests that MCT1 may also play an important role in normoxic succinate metabolism and homeostasis and not just during myocardial reperfusion and intense exercise.21,35 Intriguingly, the succinate levels after malonate treatment over time differed between the 2 cell types, with succinate levels remaining low in C2C12 KD but elevated in HeLa MCT1 KDs compared to controls, suggesting a reliance on MCT1 for succinate efflux (Figure 5G and Figure S5G). Genetic knockdown of MCT1 (monocarboxylate transporter 1) prevents the uptake of malonate. A, Relative lactate levels in C2C12 cells treated with control or MCT1 siRNA (mean±SEM of lactate levels relative to control, n=6 biological replicates, statistics: 1-way ANOVA with Bonferroni post hoc test). B to E, Incubation of malonate (5 mmol/L disodium malonate [DSM]) in MCT1 KD cells at pH 7.4 (B and D) or 6 (C and E) for 15 min±MCT1i (10 µmol/L, MCT1 inhibitor AR-C141990). Levels of malonate (B and C) and succinate (D and E) quantified by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). F and G, Time course of malonate uptake (5 mmol/L DSM) at pH 6 in MCT1 KD cells (F) and corresponding succinate levels (G; mean±SEM, n=3 biological replicates). H, Malonate levels in murine Langendorff hearts perfused at pH 6 for 5 min with 5 mmol/L DSM±lactate (50 mmol/L) or MCT1 inhibitor (10 µmol/L AR-C141990; MCT1i; mean±SEM, n=4, statistics: unpaired, 2-tailed Mann-Whitney U test vs control). As HeLa cells constitutively express MCT4, which is under the control of HIF (hypoxia-inducible factor)-1α and may be implicated in the transport of metabolites in IR, we KD MCT4 independently of MCT1 (Figure S5D and S5E). MCT4 KD had little effect on malonate uptake, with malonate and succinate levels mirroring those of control siRNA-treated cells (Figure S5F and S5G). Thus, MCT1 is the main driver of malonate uptake at lowered pH. Overall, diminishing MCT1 activity, either by pharmacological inhibition or genetic knockdown, impairs malonate uptake at both acidic and normal physiological pH. In addition, this is the first evidence that MCT1 can transport a 3-carbon chain length dicarboxylate, which may also have implications in normal physiology and other pathologies. Finally, we assessed if MCT1 was responsible for malonate uptake at lowered pH in the heart. Malonate uptake into the Langendorff-perfused heart was blocked by either excess lactate, or the MCT1 inhibitor AR-C141990, in the perfusion medium (Figure 5H). Together these findings confirm that MCT1 is responsible for the low pH uptake of malonate, both in cells and in the intact heart. Ischemic Conditions Are Sufficient to Drive Malonate Uptake into the Heart Upon Reperfusion That low pH enhanced malonate entry into heart cells was consistent with the protection afforded by DSM in IR injury being due to the local decrease in pH during ischemia and initial reperfusion. To test this hypothesis, Langendorff-perfused hearts were held ischemic for various times before DSM was infused for 5 minutes, followed by flushing, and measurement of malonate. The malonate levels in heart tissue were dependent on ischemia-time, with the highest occurring after 20 minutes ischemia and being ≈10-fold greater than in the normoxic heart (Figure 6A). This malonate uptake into the ischemic Langendorff-perfused heart was dramatically reduced by the MCT1 inhibitor AR-C141990 (Figure 6B). 
This confirmed that malonate uptake into the ischemic heart upon reperfusion is a selective process, driven by the low pH and facilitated by the MCT1. Ischemia drives malonate protonation, uptake, and cardioprotection. A, Langendorff-perfused murine hearts were held ischemic for either 0, 5, 10, or 20 min and reperfused with 5 mmol/L disodium malonate (DSM; pH 7.4) before malonate levels measured in the heart by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 (control, 5 min) or 6 (10 and 20 min) biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). B, Malonate levels in murine Langendorff hearts exposed to 20 min ischemia and reperfused with 5 mmol/L malonate (pH 7.4)±MCT1 (monocarboxylate transporter 1) inhibitor (10 or 50 µmol/L AR-C141990; mean±SEM, n=4–5 biological replicates, statistical significance assessed by unpaired, 2-tailed Mann-Whitney U test vs control). C, Model of potential lactate and malonate exchange during reperfusion. D, Lactate levels in the Langendorff heart after 20 min ischemia and 1 min reperfusion±5 mmol/L DSM (mean, n=4 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test). E, Infarct size in murine LAD (left anterior descending coronary artery) ligation MI model with 100 µl bolus of 8 mg/kg DSM, pH 4 acid control or 8 mg/kg pH 4 formulated malonate at reperfusion after 30 min ischemia (mean, n=5 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test vs acid malonate). During ischemia, the pH of ischemic tissue lowers but in addition, lactate accumulates and upon reperfusion can be transported out of cardiomyocytes by MCT1. This could enhance malonate uptake, as the MCT1 may then act as a lactate-H+/monocarboxylate malonate-H+ exchanger (Figure 6C). To assess this, we measured lactate levels in malonate-perfused IR Langendorff hearts. We found that compared to reperfusion alone, lactate levels decreased in the malonate-treated hearts (Figure 6D). Furthermore, malonate treatment decreased lactate levels in cells that had been treated with the mitochondrial inhibitor antimycin A to enhance lactate production (Figure S6A). This suggests that extracellular malonate and a proton exchange for intracellular lactate, help drive malonate entry into cardiomyocytes upon reperfusion and contribute to its at-risk tissue-selective effect. Therefore, malonate uptake into the heart upon reperfusion is dependent on the tissue having first undergone a period of ischemia, leading to a drop in pH and an accumulation of lactate. That low pH enhanced malonate entry into heart cells was consistent with the protection afforded by DSM in IR injury being due to the local decrease in pH during ischemia and initial reperfusion. To test this hypothesis, Langendorff-perfused hearts were held ischemic for various times before DSM was infused for 5 minutes, followed by flushing, and measurement of malonate. The malonate levels in heart tissue were dependent on ischemia-time, with the highest occurring after 20 minutes ischemia and being ≈10-fold greater than in the normoxic heart (Figure 6A). This malonate uptake into the ischemic Langendorff-perfused heart was dramatically reduced by the MCT1 inhibitor AR-C141990 (Figure 6B). This confirmed that malonate uptake into the ischemic heart upon reperfusion is a selective process, driven by the low pH and facilitated by the MCT1. Ischemia drives malonate protonation, uptake, and cardioprotection. 
A, Langendorff-perfused murine hearts were held ischemic for either 0, 5, 10, or 20 min and reperfused with 5 mmol/L disodium malonate (DSM; pH 7.4) before malonate levels measured in the heart by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 (control, 5 min) or 6 (10 and 20 min) biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). B, Malonate levels in murine Langendorff hearts exposed to 20 min ischemia and reperfused with 5 mmol/L malonate (pH 7.4)±MCT1 (monocarboxylate transporter 1) inhibitor (10 or 50 µmol/L AR-C141990; mean±SEM, n=4–5 biological replicates, statistical significance assessed by unpaired, 2-tailed Mann-Whitney U test vs control). C, Model of potential lactate and malonate exchange during reperfusion. D, Lactate levels in the Langendorff heart after 20 min ischemia and 1 min reperfusion±5 mmol/L DSM (mean, n=4 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test). E, Infarct size in murine LAD (left anterior descending coronary artery) ligation MI model with 100 µl bolus of 8 mg/kg DSM, pH 4 acid control or 8 mg/kg pH 4 formulated malonate at reperfusion after 30 min ischemia (mean, n=5 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test vs acid malonate). During ischemia, the pH of ischemic tissue lowers but in addition, lactate accumulates and upon reperfusion can be transported out of cardiomyocytes by MCT1. This could enhance malonate uptake, as the MCT1 may then act as a lactate-H+/monocarboxylate malonate-H+ exchanger (Figure 6C). To assess this, we measured lactate levels in malonate-perfused IR Langendorff hearts. We found that compared to reperfusion alone, lactate levels decreased in the malonate-treated hearts (Figure 6D). Furthermore, malonate treatment decreased lactate levels in cells that had been treated with the mitochondrial inhibitor antimycin A to enhance lactate production (Figure S6A). This suggests that extracellular malonate and a proton exchange for intracellular lactate, help drive malonate entry into cardiomyocytes upon reperfusion and contribute to its at-risk tissue-selective effect. Therefore, malonate uptake into the heart upon reperfusion is dependent on the tissue having first undergone a period of ischemia, leading to a drop in pH and an accumulation of lactate. Low pH Formulation Improves Cardioprotection by Malonate The mechanism of malonate uptake into cardiac tissue suggested that lowering the pH of the malonate infusion would increase the proportion in the monocarboxylate form and thereby increase its potency as a cardioprotective agent. To assess this, we used a malonate dose that was not protective (8 mg/kg) when administered at a neutral pH in the in vivo LAD MI model. When this dose of malonate was reformulated at pH 4 (a pH currently used in Food and Drug Administration–approved parenteral formulations)40 and administered as a bolus, this conferred significant protection that was not due to the low pH alone (Figure 6E and Figure S6B). Furthermore, although neutral malonate administered before ischemia was not protective, acidified malonate drove its uptake into cardiomyocytes and significantly reduced infarct size when infused before ischemia (Figure S6C). The mechanism of malonate uptake into cardiac tissue suggested that lowering the pH of the malonate infusion would increase the proportion in the monocarboxylate form and thereby increase its potency as a cardioprotective agent. 
Malonate Is Protective in an MI Model and Is Taken Up Selectively into the Ischemic Region of the Heart: DSM is protective against cardiac IR injury, but its mechanism of protection remains uncertain. To address this gap in our knowledge, we first tested the therapeutic range of DSM in IR injury. We infused DSM at a range of concentrations (1.6–160 mg/kg, equivalent to ≈11–1100 µmol/kg) at reperfusion in an in vivo murine LAD (left anterior descending coronary artery) ligation model of MI. DSM infusion during this clinically relevant reperfusion period led to a dose-dependent decrease in infarct size (Figure 1C), with 160 and 16 mg/kg showing robust cardioprotection. However, when DSM was infused before the onset of ischemia it was not protective (Figure 1D), even at 160 mg/kg, which gave the smallest infarct when administered at reperfusion. This contrasted with the cell-permeable dimethyl malonate, which can diffuse into the tissue and generate malonate, preventing succinate accumulation and thereby reducing IR injury.18,25 This suggests either that DSM is minimally taken up by tissues during normoxia and is only taken up after ischemia, or that DSM protects against acute IR injury by an extracellular mechanism. We next measured malonate uptake into the ischemic and healthy heart tissue upon reperfusion in the LAD MI model. This showed that malonate was indeed taken up into cells within the infarct region, to a level of ≈400 pmol/mg tissue, and to a far greater extent than into cells in the healthy tissue (Figure 1E). Furthermore, reperfusing with malonate slowed succinate oxidation within the infarct region, with succinate remaining significantly elevated in the infarct region following 1-minute reperfusion, compared with control reperfusion with saline (Figure 1F). Limiting succinate oxidation at reperfusion with malonate also blunted ROS production in the at-risk tissue (Figure 1G). We next assessed how SDH inhibition compared to direct inhibition of the mitochondrial permeability transition pore with CsA (cyclosporin A). Here, we found that the cardioprotection from malonate was additive to that of cyclosporin A alone (Figure 1H); hence, targeting the upstream mechanism of permeability transition pore opening is an increasingly attractive option for IR injury treatment. To better understand the metabolic differences between the healthy and infarcted tissue upon malonate treatment, we investigated the in vivo MI model using mass spectrometry imaging. To do this, we subjected hearts to either 30-minute ischemia alone, 30-minute ischemia followed by 15-minute reperfusion, or 30-minute ischemia followed by 15-minute reperfusion with DSM (to mimic the cardioprotective malonate infusion), before snap freezing and processing for mass spectrometry imaging (Figure 2A). 
Infarct lesions were demarcated using hematoxylin and eosin (H&E), silver infarct staining, and metabolite principal component analysis to differentiate the risk and nonrisk regions. Mass spectrometry imaging, combined with the demarcated risk areas, showed striking changes in the levels of succinate in the infarct region. During ischemia, succinate was significantly elevated in the at-risk tissue, with considerable succinate accumulation in the core of the infarct (Figure 2B and 2C). After 15-minute reperfusion, the succinate levels in the infarct lesion returned to healthy tissue levels, due to succinate oxidation and efflux.21 When malonate was infused at the time of reperfusion, succinate levels remained higher in all regions of the heart than in the nonmalonate-treated heart, consistent with the prevention of succinate oxidation by malonate-mediated SDH inhibition. Localization of succinate accumulation in heart tissue identified by mass spectrometry imaging. A, Outline of experimental groups for mass spectrometry imaging. B, Representative images of succinate abundance detected by mass spectrometry imaging in myocardial sections. Black outer line indicates the edge of the tissue slice; white inner line indicates the infarct region. C, Quantification of succinate identified by MSI (mean±SEM, n=3 biological replicates, statistics: Friedman paired test for healthy vs lesion and Kruskal-Wallis with Dunn post hoc test between conditions). au indicates arbitrary units; DSM, disodium malonate; H, healthy tissue; IR, ischemia/reperfusion; L, infarct lesion; and rep, reperfusion. We conclude that malonate is indeed taken up by the heart, but that this is greatly enhanced in the ischemic region upon reperfusion. Therefore, malonate provides protection against cardiac IR injury in vivo over a range of concentrations following its selective entry into the infarct region, where it slows the oxidation of succinate during reperfusion, preventing the production of RET-derived ROS. Malonate Uptake into Cells Is Inefficient at pH 7.4: To explore the mechanism of malonate uptake in vitro, we incubated C2C12 and H9c2 myoblasts with DSM and measured intracellular malonate levels by LC-MS/MS (liquid chromatography-tandem mass spectrometry). Incubation with DSM for 15 minutes at pH 7.4 led to a dose-dependent increase in intracellular malonate (Figure S1A and S1D). Intracellular succinate also accumulated in a malonate-dependent manner (Figure S1B and S1E), confirming that once malonate enters the cell it is rapidly transported into mitochondria and inhibits SDH. However, incubating cells with 1 mmol/L DSM led to variable malonate uptake and little SDH inhibition, indicating that malonate uptake across biological membranes is inefficient. Moreover, the intracellular levels of malonate achieved with 250 µmol/L of the malonate ester prodrug diacetoxymethyl malonate are 80-fold higher than those generated by incubation with 5 mmol/L DSM.25 DSM was protective in the LAD model when infused at 16 mg/kg, corresponding to a maximum possible blood malonate concentration of ≈1.8 mmol/L (assuming a blood volume of 1.5 mL in a 25 g mouse),29 which is comparable to the dose range tested in cells (furthermore, far more malonate is available for uptake in vitro than in vivo, owing to the large reservoir in the incubation medium). 
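As a rough check on the ≈1.8 mmol/L figure, the arithmetic can be reproduced directly from the stated assumptions (16 mg/kg DSM, a 25 g mouse, 1.5 mL blood volume); the illustrative sketch below is not from the original article, and the molar mass used for disodium malonate (≈148 g/mol) is our assumption rather than a value given in the text.

```python
# Back-of-the-envelope estimate of the maximum blood malonate concentration
# after a 16 mg/kg disodium malonate (DSM) bolus in a 25 g mouse with ~1.5 mL
# of blood. The DSM molar mass (~148 g/mol) is assumed for illustration.

DOSE_MG_PER_KG = 16.0                 # DSM dose
MOUSE_MASS_KG = 0.025                 # 25 g mouse
BLOOD_VOLUME_L = 0.0015               # 1.5 mL blood
DSM_MOLAR_MASS_G_PER_MOL = 148.03     # Na2(C3H2O4), assumed value

dose_g = DOSE_MG_PER_KG * MOUSE_MASS_KG / 1000.0      # grams of DSM delivered
dose_mol = dose_g / DSM_MOLAR_MASS_G_PER_MOL          # moles of malonate
conc_mmol_per_l = dose_mol / BLOOD_VOLUME_L * 1000.0  # blood concentration

print(f"Estimated maximum blood malonate: {conc_mmol_per_l:.1f} mmol/L")  # ~1.8
```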
Even so, the intracellular malonate levels were far lower with DSM compared to diacetoxymethyl malonate.25 The kinetics of cell uptake upon incubation with 5 mmol/L DSM showed time-dependent uptake, although extended incubation times were required for succinate elevation (Figure S1C and S1F). This is likely due to the initial concentrations of malonate entering the cell being insufficient to inhibit SDH. These data are consistent with the lack of protection by DSM delivered before ischemia in vivo, as well as its limited uptake into normoxic tissues. Together these suggest that exposure to ischemia may facilitate malonate entry into the heart. Malonate Uptake into Cells and the Heart Can Be Modulated by pH: Malonate is a dicarboxylate at physiological pH (Figure 1A, pKa=2.83 and 5.69), suggesting that the pH decrease in ischemic tissue9,11,30 may enhance malonate uptake into the heart during early reperfusion, by increasing the concentration of its monocarboxylic form. Incubating cells with DSM at either pH 6, 7.4, or 8 for 15 minutes, led to large differences in malonate uptake (Figure 3A and Figure S2A). At pH 6, the levels of malonate in the cell were significantly higher than at pH 7.4 or 8, thus malonate uptake is favored by acidic pH. Succinate levels mirrored those of malonate, with greater succinate accumulation as a result of increased malonate-dependent SDH inhibition at low pH (Figure 3B and Figure S2B). Low pH alone had no effect on succinate levels (Figure 3B and Figure S2B), suggesting that it was due to malonate entry into the cells followed by SDH inhibition. Malonate uptake is enhanced at low pH. A and B, C2C12 cells were incubated with disodium malonate (DSM; 0, 1, or 5 mmol/L) for 15 min at either pH 6, 7.4, or 8 before measuring intracellular malonate (A) and succinate (B) by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). C, Malonate levels in C2C12 cells after incubation with DSM (5 mmol/L) for 15 min at various (patho)physiological pH (mean±SEM, n=3 biological replicates). D, Malonate levels in murine isolated Langendorff-perfused hearts treated with 5 mmol/L DSM infused at either pH 7.4 or 6 for 5 min (mean±SEM, n=4 biological replicates, statistics: unpaired, 2-tailed Mann-Whitney U test). E to H, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at either pH 6 or 7.4 in the presence of FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone) (E), gramicidin (F), nigericin (G), or BAM15 (N5,N6-bis(2-Fluorophenyl)-[1,2,5]oxadiazolo[3,4-b]pyrazine-5,6-diamine) (H) before measuring intracellular malonate by LC-MS/MS (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). I, Structures of malonic acid and malonamic acid. J, Quantification of levels of malonate and malonamate in C2C12 cells after incubation (5 mmol/L DSM or disodium malonamate; 15 min) at pH 7.4 (n=6 biological replicates). K, C2C12 cells were incubated with either DSM or disodium malonamate (both 5 mmol/L) at pH 6 or 7.4 for 15 min and the intracellular levels measured by LC-MS/MS. (mean±SEM of the fold-enhancement of uptake at pH 6 vs pH 7.4, n=6 biological replicates). Statistics for J and K: unpaired, 2-tailed Student t test). Incubating cells with malonate over a range of pH values between pH 7.4 and 6 clearly showed the pH-dependent uptake of malonate, in line with the increase in monocarboxylate malonate proportion (Figure 3C). 
This was mirrored by succinate levels (Figure S2C). Therefore, even a small drop in pH can lead to a substantial increase in malonate uptake, and thus is likely to be relevant for its entry into ischemic tissue upon reperfusion. As the cultured cells used are noncontractile and substantially differ in their properties from in situ contracting cardiomyocytes,15 importantly, we next assessed whether decreasing the pH also enhanced malonate uptake into the ex vivo perfused Langendorff heart. Malonate was infused into the heart at either pH 7.4 or 6 for 5 minutes, before briefly flushing at pH 7.4 to remove nonmyocardial malonate. Remarkably, the levels of malonate were significantly elevated when infused at pH 6 compared to 7.4, confirming that the in vitro results translate to the heart (Figure 3D). We conclude that low pH conditions facilitate the entry of malonate into cardiomyocytes. Malonate Uptake Can Be Perturbed by Modulating the Plasma Membrane H+ Gradient: As pH modulated malonate entry into cells in vitro and in the heart, we next probed the factors affecting malonate uptake into cells. Malonate entry into cells at either pH 7.4 or 6 was blocked at 4 °C and was associated with negligible succinate levels (Figure S2D and S2E). That a decrease in extracellular pH increased malonate entry into cells, suggested uptake driven by the proton gradient. Therefore, we next abolished the plasma membrane proton gradient using ionophores21 which prevented the cellular uptake of malonate (Figure 3E–3G and Figure S3A) and subsequent increase in succinate levels (Figure S3B through S3E). However, the uncoupler BAM15, which is selective for the mitochondrial inner membrane over the plasma membrane,31 had little effect, consistent with the plasma membrane proton gradient driving malonate uptake (Figure 3H and Figure S3F). We next used nonspecific transport inhibitors to assess whether we could block malonate uptake at low pH. DIDS (4,4’-Diisothiocyano-2,2’-stilbenedisulfonic acid), an irreversible inhibitor of chloride/bicarbonate exchange that inhibits malonate uptake into erythrocytes,32,33 led to a dose-dependent inhibition of malonate uptake, although succinate remained high (Figure S3G and S3H). Thus, malonate uptake into cardiomyocytes at low pH is driven by a proton gradient through a transporter-dependent process. Malonate’s pKa’s are 2.83 and 5.69, so at pH 6.4, ≈16% of the malonate would be in its monocarboxylate form. To determine whether the uptake of malonate was dependent on its protonation to a monocarboxylate form, we assessed the uptake of a compound that mimics the monocarboxylate malonate. In 3-amino-3-oxopropionate (malonamate; Figure 3I), one carboxylic acid has been replaced with a neutral amido group leaving a single carboxylic acid of pKa ≈4.75. Thus, at pH 7.4 malonamate resembles the monocarboxylate form of malonate. This led to more of malonamate being taken up into cells at pH 7.4 than malonate (Figure 3J). Lowering the pH to 6 led to ≈15-fold increase in malonate uptake while malonamate uptake changed negligibly (Figure 3K) because it will remain as a monocarboxylate across this pH range. Together, these data support transport of the monoanionic form of malonate, under the conditions that occur during early reperfusion. Therefore, a pH gradient can drive the uptake of malonate via protonation and transport in its monocarboxylate form. 
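The ≈16% figure quoted above follows from standard acid-base speciation: because the pH values of interest are far above pKa1 (2.83), essentially no fully protonated malonic acid is present, and the monocarboxylate fraction is approximately 1/(1 + 10^(pH − pKa2)). The short sketch below (illustrative only, using the pKa values given in the text) computes the full diprotic speciation.

```python
# Fraction of malonate present as the monocarboxylate (HA-) species at a given
# pH, using full diprotic speciation with pKa1 = 2.83 and pKa2 = 5.69.

PKA1, PKA2 = 2.83, 5.69

def monocarboxylate_fraction(ph: float) -> float:
    h = 10.0 ** (-ph)
    ka1 = 10.0 ** (-PKA1)
    ka2 = 10.0 ** (-PKA2)
    denominator = h * h + h * ka1 + ka1 * ka2  # H2A + HA- + A2- terms
    return (h * ka1) / denominator             # HA- (monocarboxylate) term

for ph in (7.4, 6.4, 6.0):
    print(f"pH {ph}: {100 * monocarboxylate_fraction(ph):.0f}% monocarboxylate")
# Approximate output: pH 7.4: 2%, pH 6.4: 16%, pH 6.0: 33%
```

These proportions are consistent with the pH-dependent uptake described above, with only ≈2% of malonate in the monocarboxylate form at pH 7.4 versus roughly a third at pH 6.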
Malonate Uptake Under Reperfusion Conditions Is Dependent on MCT1: The inhibitory effect of DIDS, which also inhibits MCT1,34 raised the possibility of the uptake of the protonated form of malonate being catalyzed by a monocarboxylate transporter (MCT; SLC16 (solute carrier family member 16)). MCT1 (SLC16A1) is the principal transporter of lactate, which is carried in symport with a proton, both of which are elevated during ischemia, making MCT1 an attractive candidate transporter for malonate uptake upon reperfusion. Additionally, it was recently shown that MCT1 mediates the efflux of succinate from the ischemic heart during reperfusion and from exercising muscle.21,35 As MCT1 is a lactate transporter, we first assessed whether high concentrations of lactate in the incubation medium compete with malonate. Excess lactate led to a concentration-dependent decrease in malonate uptake into the cell and a corresponding decrease in succinate accumulation (Figure 4A and Figure S4A). This suggested that at a low pH, malonate uptake into the cells is via MCT1, which is highly expressed in cardiomyocytes.36 Inhibition of MCT1 (monocarboxylate transporter 1) prevents the enhanced uptake of malonate at lowered pH. A, Malonate uptake (5 mmol/L disodium malonate [DSM], 15 min) in C2C12 cells in the presence of lactate (0, 10 or 50 mmol/L). B, Lactate levels in C2C12 cells after 15 min treatment with varying concentrations of MCT1 inhibitors. C and D, Effect of MCT1 inhibition by AR-C141990 or AZD3965 on malonate (5 mmol/L DSM, 15 min) uptake (C) at pH 6 and subsequent succinate levels (D). E, C2C12 cells were incubated with DSM (5 mmol/L) for 15 min at various pH±10 µmol/L AR-C141990. F, MCT1 inhibition by AR-C141990 on malonate (5 mmol/L DSM, 15 min) uptake at pH 7.4 (A to F, mean±SEM, n=3 biological replicates, statistics: (A) Kruskal-Wallis with Dunn post hoc test). G and H, Effect of malonate (5 mmol/L DSM) on cellular oxygen consumption at pH 7.4 (G) or 6 (H) ±MCT1 inhibitor (10 µmol/L AR-C141990; data presented as nonmitochondrial respiration normalized mean oxygen consumption rate (OCR)±SEM of 3 biological replicates (n=12–16 technical replicates per biological replicate), statistics: Kruskal-Wallis with Dunn post hoc test). ATP indicates OCR in the presence of 1.5 µmol/L oligomycin; BL, baseline OCR; MAX, OCR in the presence of 1 µmol/L FCCP (carbonyl cyanide-p-trifluoromethoxyphenylhydrazone); and NM, OCR in the presence of 4 µg/mL rotenone and 10 µmol/L antimycin A. To confirm that MCT1 was the malonate transporter, we assessed the potent and selective MCT1 inhibitors, AR-C141990 and AZD3965.21,35,37,38 When cells were incubated with these MCT1 inhibitors, they led to a dramatic dose-dependent increase in lactate levels within the cell, consistent with preventing lactate efflux via MCT1 (Figure 4B). MCT1 inhibition by either AR-C141990 or AZD3965 led to a profound dose-dependent decrease in malonate uptake (Figure 4C). Inhibition of malonate uptake by MCT1 inhibition led to a corresponding decrease in succinate accumulation within cells (Figure 4D). Additionally, a time course of malonate uptake at low pH, showed that MCT1 inhibition largely abolished the increase in malonate over time, along with the corresponding increase in succinate (Figure S4B and S4C). MCT1 inhibition blocked the pH-dependent increase in malonate uptake (Figure 4E) and in parallel prevented succinate accumulation (Figure 4D). 
The MCT1 inhibitor led to a dose-dependent inhibition of malonate uptake even at pH 7.4 (Figure 4F). Therefore, malonate can be transported by MCT1 at pH 7.4, but this is greatly enhanced at lower pH due to the increased proportion of malonate in its monocarboxylate form. We next assessed the impact of the malonate taken up into cells on mitochondrial function by measuring respiration. Lowering pH itself had little effect; however, addition of malonate severely reduced cellular respiration (Figure 4G and H and Figure S4E and S4F) and this effect was rescued by MCT1 inhibition (Figure 4G and H and Figure S4E and S4F). MCT1 inhibition led to a small increase in oxygen consumption at baseline, which may be due to lactate accumulation increasing the NADH/NAD+ ratio driving mitochondrial respiration.39 To confirm the pharmacological effects of MCT1 inhibition on malonate uptake genetically, we knocked down (KD) MCT1 using siRNA in both HeLa and C2C12 cells (Figure S5A through S5D). Consequently, lactate levels in MCT1 KD cells were significantly elevated compared to the control siRNA (Figure 5A). When MCT1 KD cells were incubated with malonate at low pH, malonate uptake was dramatically blocked (Figure 5B through 5E and Figure S5C), confirming MCT1 is directly responsible for malonate transport at low pH. Residual malonate uptake in the MCT1 KD cells was inhibited by cotreatment with an MCT1 inhibitor, further confirming its MCT1 dependence (Figure 5F and Figure S5F). Additionally, succinate was elevated in the MCT1 KD cells compared with control siRNA cells, a result not seen with acute pharmacological MCT1 inhibition (Figure 5D). This suggests that MCT1 may also play an important role in normoxic succinate metabolism and homeostasis and not just during myocardial reperfusion and intense exercise.21,35 Intriguingly, the succinate levels after malonate treatment over time differed between the 2 cell types, with succinate levels remaining low in C2C12 KD but elevated in HeLa MCT1 KDs compared to controls, suggesting a reliance on MCT1 for succinate efflux (Figure 5G and Figure S5G). Genetic knockdown of MCT1 (monocarboxylate transporter 1) prevents the uptake of malonate. A, Relative lactate levels in C2C12 cells treated with control or MCT1 siRNA (mean±SEM of lactate levels relative to control, n=6 biological replicates, statistics: 1-way ANOVA with Bonferroni post hoc test). B to E, Incubation of malonate (5 mmol/L disodium malonate [DSM]) in MCT1 KD cells at pH 7.4 (B and D) or 6 (C and E) for 15 min±MCT1i (10 µmol/L, MCT1 inhibitor AR-C141990). Levels of malonate (B and C) and succinate (D and E) quantified by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=3 biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). F and G, Time course of malonate uptake (5 mmol/L DSM) at pH 6 in MCT1 KD cells (F) and corresponding succinate levels (G; mean±SEM, n=3 biological replicates). H, Malonate levels in murine Langendorff hearts perfused at pH 6 for 5 min with 5 mmol/L DSM±lactate (50 mmol/L) or MCT1 inhibitor (10 µmol/L AR-C141990; MCT1i; mean±SEM, n=4, statistics: unpaired, 2-tailed Mann-Whitney U test vs control). As HeLa cells constitutively express MCT4, which is under the control of HIF (hypoxia-inducible factor)-1α and may be implicated in the transport of metabolites in IR, we KD MCT4 independently of MCT1 (Figure S5D and S5E). 
MCT4 KD had little effect on malonate uptake, with malonate and succinate levels mirroring those of control siRNA-treated cells (Figure S5F and S5G). Thus, MCT1 is the main driver of malonate uptake at lowered pH. Overall, diminishing MCT1 activity, either by pharmacological inhibition or genetic knockdown, impairs malonate uptake at both acidic and normal physiological pH. In addition, this is the first evidence that MCT1 can transport a 3-carbon chain length dicarboxylate, which may also have implications in normal physiology and other pathologies. Finally, we assessed if MCT1 was responsible for malonate uptake at lowered pH in the heart. Malonate uptake into the Langendorff-perfused heart was blocked by either excess lactate, or the MCT1 inhibitor AR-C141990, in the perfusion medium (Figure 5H). Together these findings confirm that MCT1 is responsible for the low pH uptake of malonate, both in cells and in the intact heart. Ischemic Conditions Are Sufficient to Drive Malonate Uptake into the Heart Upon Reperfusion: That low pH enhanced malonate entry into heart cells was consistent with the protection afforded by DSM in IR injury being due to the local decrease in pH during ischemia and initial reperfusion. To test this hypothesis, Langendorff-perfused hearts were held ischemic for various times before DSM was infused for 5 minutes, followed by flushing, and measurement of malonate. The malonate levels in heart tissue were dependent on ischemia-time, with the highest occurring after 20 minutes ischemia and being ≈10-fold greater than in the normoxic heart (Figure 6A). This malonate uptake into the ischemic Langendorff-perfused heart was dramatically reduced by the MCT1 inhibitor AR-C141990 (Figure 6B). This confirmed that malonate uptake into the ischemic heart upon reperfusion is a selective process, driven by the low pH and facilitated by the MCT1. Ischemia drives malonate protonation, uptake, and cardioprotection. A, Langendorff-perfused murine hearts were held ischemic for either 0, 5, 10, or 20 min and reperfused with 5 mmol/L disodium malonate (DSM; pH 7.4) before malonate levels measured in the heart by LC-MS/MS (liquid chromatography-tandem mass spectrometry) (mean±SEM, n=4 (control, 5 min) or 6 (10 and 20 min) biological replicates, statistics: Kruskal-Wallis with Dunn post hoc test). B, Malonate levels in murine Langendorff hearts exposed to 20 min ischemia and reperfused with 5 mmol/L malonate (pH 7.4)±MCT1 (monocarboxylate transporter 1) inhibitor (10 or 50 µmol/L AR-C141990; mean±SEM, n=4–5 biological replicates, statistical significance assessed by unpaired, 2-tailed Mann-Whitney U test vs control). C, Model of potential lactate and malonate exchange during reperfusion. D, Lactate levels in the Langendorff heart after 20 min ischemia and 1 min reperfusion±5 mmol/L DSM (mean, n=4 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test). E, Infarct size in murine LAD (left anterior descending coronary artery) ligation MI model with 100 µl bolus of 8 mg/kg DSM, pH 4 acid control or 8 mg/kg pH 4 formulated malonate at reperfusion after 30 min ischemia (mean, n=5 biological replicates, statistics: 2-tailed, unpaired Mann-Whitney U test vs acid malonate). During ischemia, the pH of ischemic tissue lowers but in addition, lactate accumulates and upon reperfusion can be transported out of cardiomyocytes by MCT1. This could enhance malonate uptake, as the MCT1 may then act as a lactate-H+/monocarboxylate malonate-H+ exchanger (Figure 6C). 
To assess this, we measured lactate levels in malonate-perfused IR Langendorff hearts. We found that, compared with reperfusion alone, lactate levels decreased in the malonate-treated hearts (Figure 6D). Furthermore, malonate treatment decreased lactate levels in cells that had been treated with the mitochondrial inhibitor antimycin A to enhance lactate production (Figure S6A). This suggests that the exchange of extracellular malonate and a proton for intracellular lactate helps drive malonate entry into cardiomyocytes upon reperfusion and contributes to its at-risk tissue-selective effect. Therefore, malonate uptake into the heart upon reperfusion is dependent on the tissue having first undergone a period of ischemia, leading to a drop in pH and an accumulation of lactate. Low pH Formulation Improves Cardioprotection by Malonate: The mechanism of malonate uptake into cardiac tissue suggested that lowering the pH of the malonate infusion would increase the proportion of malonate in its monocarboxylate form and thereby increase its potency as a cardioprotective agent. To assess this, we used a malonate dose that was not protective (8 mg/kg) when administered at a neutral pH in the in vivo LAD MI model. When this dose of malonate was reformulated at pH 4 (a pH currently used in Food and Drug Administration–approved parenteral formulations)40 and administered as a bolus, it conferred significant protection that was not due to the low pH alone (Figure 6E and Figure S6B). Furthermore, although neutral malonate administered before ischemia was not protective, acidified malonate drove its uptake into cardiomyocytes and significantly reduced infarct size when infused before ischemia (Figure S6C). Discussion: No medicine is currently available that can be given at reperfusion to prevent cardiac IR injury.1,6 Drugs that prevent cardiac IR injury should reduce both MI damage and the subsequent development of heart failure.1,12 Targeting succinate metabolism has been shown to be a promising therapeutic approach. Inhibiting SDH using the reversible, competitive inhibitor malonate reduces infarct size in small and large animal models of cardiac IR injury, despite its mechanism of entry into heart tissue being unknown.26,27 Here, we found that malonate uptake into cells and the heart at pH 7.4 was inefficient. However, the uptake of malonate into cells was dramatically enhanced at the lower pH that occurs during ischemia and by lactate accumulation within cells. Thus, ischemia provides an environment that will protonate malonate and thereby enable its uptake by the MCT1 transporter. Interestingly, this transporter undergoes a trans-acceleration phenomenon, whereby its transition from the outward-facing to the inward-facing conformation occurs more rapidly in the presence of a trans-substrate.41,42 In this case, the uptake of malonate on the extracellular side of the plasma membrane may be accelerated by lactate efflux. In the LAD MI model of IR injury, malonate is readily available at the point of reperfusion; thus, a proportion of the malonate would be protonated and accessible for transport by MCT1. This enables malonate entry into the heart in an at-risk tissue-selective manner. Additionally, beyond the inhibition of SDH, accelerating lactate efflux from the heart may also play a role in the reduction of IR injury. 
By shifting the equilibrium to facilitate anaerobic ATP production, enhanced lactate efflux may promote the early restoration of ionic gradients through ATP-dependent transporters.43,44 Furthermore, facilitating lactate efflux may promote the extrusion of protons from the intracellular environment, thus reducing the activity of H+ transporters such as the Na+-H+ exchanger, though further investigation into these mechanisms is warranted.43,45 This is the first account of an ischemia-selective cardioprotective agent, utilizing the pathological differences between risk and nonrisk tissue to drive uptake. As MCT1 is highly expressed in the hearts of mice, rats, pigs, and humans (Figure S6D), and the characteristics of ischemia are conserved between these species,19,24 these drivers for malonate uptake and the plasma membrane transporter may facilitate malonate cardioprotection in humans. In addition to its place in the treatment of IR injury in MI, the mechanism of ischemia-enhanced delivery of malonate may provide a novel treatment option for IR injury under many other circumstances. There is much interest in targeted drug delivery and the ability to engage the intended site while reducing off-target effects. Here, we have shown that malonate is not only a cardioprotective agent, but also one that acts while its entry into the rest of the heart is limited. Thus, malonate is a potent, ischemic tissue-targeted drug, explaining how large doses can be delivered acutely with minimal toxic effects, which is likely to be important bearing in mind the many comorbidities associated with MI.46 Furthermore, malonate can be efficiently metabolised28,47 and has the potential to promote cardiomyocyte regeneration.48 Malonate is robustly protective in acute IR injury, though further work is now required to understand the tractability of malonate treatment in chronic IR injury models, in particular by conducting a double-blind chronic large animal IR injury study.49 This would provide the greatest insight into the cardioprotective capabilities of malonate post-MI and its effect on the development of heart failure, and would define its potential for translation to the clinic. Schematic of ischemia-dependent malonate uptake via MCT1 (monocarboxylate transporter 1). The accumulation of lactate and protons in ischemic tissue, and their equilibration with the extracellular space, facilitates protonation of malonate to its monocarboxylate form. This enables it to be an MCT1 substrate and enter cardiomyocytes upon reperfusion. Here, the malonate is transported into mitochondria by the mitochondrial dicarboxylate carrier, where it can subsequently inhibit SDH (succinate dehydrogenase). SDH inhibition reduces succinate oxidation, reactive oxygen species (ROS) production by reverse electron transport (RET) through complex I, and opening of the mitochondrial permeability transition pore, thereby reducing cell death from ischemia/reperfusion (IR) injury. CxI indicates complex I; DIC, mitochondrial dicarboxylate carrier (SLC25A10); mPTP, mitochondrial permeability transition pore; and TCA, tricarboxylic acid. Conclusions: We have shown that malonate is an ischemia-selective drug, with the lowered pH and lactate accumulation of ischemic tissue driving its uptake via MCT1 (Figure 7). Furthermore, we show that a low pH formulation of malonate enhances its therapeutic potency. Malonate is the first at-risk tissue-selective cardioprotective drug and represents a significant step toward the treatment of IR injury. 
Article Information: Acknowledgments: The authors thank Stephen Large, Fouad Taghavi, and Margaret M. Huang (Department of Surgery, University of Cambridge) for obtaining human heart tissue and Benjamin Thackray (Department of Medicine, University of Cambridge) for assistance with initial experiments. Sources of Funding: This work was supported by the British Heart Foundation (PG/20/10025 to T. Krieg); the Medical Research Council (MC_UU_00015/3 to M.P. Murphy and MR/P000320/1 to T. Krieg); the Wellcome Trust (220257/Z/20/Z to M.P. Murphy, 221604/Z/20/Z to D. Aksentijevic); and Barts Charity (MRC0215 to D. Aksentijevic). Disclosures: Some authors currently hold a patent on the use of malonate esters in cardiac ischemia/reperfusion (IR) injury (M.P. Murphy and T. Krieg) and have submitted patent applications on the use of malonate in IR injury associated with ischemic stroke (M.P. Murphy and T. Krieg) and the pH-enhancement of malonate described in this article (H.A. Prag, M.P. Murphy, and T. Krieg). The other authors report no conflicts. Supplemental Materials: Supplemental Methods; Major Resources Table; Figures S1–S6; References 49–56.
Background: Inhibiting SDH (succinate dehydrogenase), with the competitive inhibitor malonate, has shown promise in ameliorating ischemia/reperfusion injury. However, key for translation to the clinic is understanding the mechanism of malonate entry into cells to enable inhibition of SDH, its mitochondrial target, as malonate itself poorly permeates cellular membranes. The possibility of malonate selectively entering the at-risk heart tissue on reperfusion, however, remains unexplored. Methods: C57BL/6J mice, C2C12 and H9c2 myoblasts, and HeLa cells were used to elucidate the mechanism of selective malonate uptake into the ischemic heart upon reperfusion. Cells were treated with malonate while varying pH or together with transport inhibitors. Mouse hearts were either perfused ex vivo (Langendorff) or subjected to in vivo left anterior descending coronary artery ligation as models of ischemia/reperfusion injury. Succinate and malonate levels were assessed by liquid chromatography-tandem mass spectrometry LC-MS/MS, in vivo by mass spectrometry imaging, and infarct size by TTC (2,3,5-triphenyl-2H-tetrazolium chloride) staining. Results: Malonate was robustly protective against cardiac ischemia/reperfusion injury, but only if administered at reperfusion and not when infused before ischemia. The extent of malonate uptake into the heart was proportional to the duration of ischemia. Malonate entry into cardiomyocytes in vivo and in vitro was dramatically increased at the low pH (≈6.5) associated with ischemia. This increased uptake of malonate was blocked by selective inhibition of MCT1 (monocarboxylate transporter 1). Reperfusion of the ischemic heart region with malonate led to selective SDH inhibition in the at-risk region. Acid-formulation greatly enhances the cardioprotective potency of malonate. Conclusions: Cardioprotection by malonate is dependent on its entry into cardiomyocytes. This is facilitated by the local decrease in pH that occurs during ischemia, leading to its selective uptake upon reperfusion into the at-risk tissue, via MCT1. Thus, malonate's preferential uptake in reperfused tissue means it is an at-risk tissue-selective drug that protects against cardiac ischemia/reperfusion injury.
null
null
16,271
388
[ 53, 192, 829, 356, 759, 442, 1546, 633, 152, 433, 45, 66, 14 ]
20
[ "malonate", "ph", "uptake", "figure", "mct1", "malonate uptake", "cells", "succinate", "levels", "dsm" ]
[ "ischemia facilitate malonate", "facilitate malonate cardioprotection", "cardioprotection malonate mechanism", "malonate ischemia selective", "cardiomyocytes reperfusion malonate" ]
null
null
null
[CONTENT] ischemia | mitochondria | myocardial infarction | reactive oxygen species | reperfusion [SUMMARY]
[CONTENT] ischemia | mitochondria | myocardial infarction | reactive oxygen species | reperfusion [SUMMARY]
[CONTENT] ischemia | mitochondria | myocardial infarction | reactive oxygen species | reperfusion [SUMMARY]
[CONTENT] ischemia | mitochondria | myocardial infarction | reactive oxygen species | reperfusion [SUMMARY]
null
null
[CONTENT] Animals | Chromatography, Liquid | HeLa Cells | Humans | Ischemia | Malonates | Mice | Mice, Inbred C57BL | Myocardial Reperfusion Injury | Myocytes, Cardiac | Tandem Mass Spectrometry [SUMMARY]
[CONTENT] Animals | Chromatography, Liquid | HeLa Cells | Humans | Ischemia | Malonates | Mice | Mice, Inbred C57BL | Myocardial Reperfusion Injury | Myocytes, Cardiac | Tandem Mass Spectrometry [SUMMARY]
[CONTENT] Animals | Chromatography, Liquid | HeLa Cells | Humans | Ischemia | Malonates | Mice | Mice, Inbred C57BL | Myocardial Reperfusion Injury | Myocytes, Cardiac | Tandem Mass Spectrometry [SUMMARY]
[CONTENT] Animals | Chromatography, Liquid | HeLa Cells | Humans | Ischemia | Malonates | Mice | Mice, Inbred C57BL | Myocardial Reperfusion Injury | Myocytes, Cardiac | Tandem Mass Spectrometry [SUMMARY]
null
null
[CONTENT] ischemia facilitate malonate | facilitate malonate cardioprotection | cardioprotection malonate mechanism | malonate ischemia selective | cardiomyocytes reperfusion malonate [SUMMARY]
[CONTENT] ischemia facilitate malonate | facilitate malonate cardioprotection | cardioprotection malonate mechanism | malonate ischemia selective | cardiomyocytes reperfusion malonate [SUMMARY]
[CONTENT] ischemia facilitate malonate | facilitate malonate cardioprotection | cardioprotection malonate mechanism | malonate ischemia selective | cardiomyocytes reperfusion malonate [SUMMARY]
[CONTENT] ischemia facilitate malonate | facilitate malonate cardioprotection | cardioprotection malonate mechanism | malonate ischemia selective | cardiomyocytes reperfusion malonate [SUMMARY]
null
null
[CONTENT] malonate | ph | uptake | figure | mct1 | malonate uptake | cells | succinate | levels | dsm [SUMMARY]
[CONTENT] malonate | ph | uptake | figure | mct1 | malonate uptake | cells | succinate | levels | dsm [SUMMARY]
[CONTENT] malonate | ph | uptake | figure | mct1 | malonate uptake | cells | succinate | levels | dsm [SUMMARY]
[CONTENT] malonate | ph | uptake | figure | mct1 | malonate uptake | cells | succinate | levels | dsm [SUMMARY]
null
null
[CONTENT] data | request contacting corresponding author | reasonable | contacting corresponding | contacting | author | table found | table found supplemental | table found supplemental material | major resources table found [SUMMARY]
[CONTENT] malonate | mct1 | ph | figure | uptake | dsm | malonate uptake | cells | succinate | levels [SUMMARY]
[CONTENT] drug | malonate | selective | ph lactate accumulation | drug lowered ph lactate | driving uptake | driving uptake mct1 | driving uptake mct1 figure | shown malonate ischemia | shown malonate ischemia selective [SUMMARY]
[CONTENT] malonate | ph | uptake | figure | mct1 | reperfusion | succinate | dsm | ischemia | malonate uptake [SUMMARY]
null
null
[CONTENT] C2C12 | HeLa ||| ||| Mouse | Langendorff ||| LC-MS/MS | TTC [SUMMARY]
[CONTENT] Malonate ||| ||| Malonate ||| MCT1 | 1 ||| ||| [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| SDH ||| ||| C2C12 | HeLa ||| ||| Mouse | Langendorff ||| LC-MS/MS | TTC ||| ||| ||| Malonate ||| MCT1 | 1 ||| ||| ||| ||| ||| [SUMMARY]
null
The Evolution of HIV Patient Retention and Care in French Guiana: A Broader View From the Système National des Données de Santé.
35252098
Although the simplification of antiretroviral (ARV) treatment regimens and follow-up has led to fewer constraints for patients with HIV, their follow-up remains of paramount importance to optimize ARV therapy, to detect and prevent HIV-related morbidity, and to prevent secondary infections. The problem of follow-up interruption in French Guiana has been persistent and seemingly impervious to efforts to alleviate it.
BACKGROUND
Using the complementary lenses of the hospital HIV cohort and the health insurance information system, we looked at the incidence of follow-up interruption and the proportion of patients followed by private practitioners.
METHOD
We tallied 803 persons who were not known to have died and who were lost to follow-up. Over time, hospital outpatients were lost to follow-up significantly sooner. By contrast, there was a significant trend of more and more patients being exclusively followed by private practitioners.
RESULTS
While hospital outpatient care remains by far the most common mode of patient care, there seems to be a gradual erosion of this model in favor of private practice.
CONCLUSION
[ "Anti-Retroviral Agents", "Cohort Studies", "French Guiana", "HIV Infections", "Humans", "Incidence" ]
8891454
Introduction
French Guiana is the French overseas territory where the HIV epidemic is most prevalent (1). Recent modeling using the European Centre for Disease Prevention and Control (ECDC) modeling tool estimated the number of persons living with HIV to be over 3,200, 3,000 of whom knew their diagnosis (2). HIV transmission in French Guiana is mostly heterosexual, and three of four patients are foreign citizens (3). French Guiana has the highest gross domestic product (GDP) per capita in South America and, as such, it is attractive to poor Caribbean and South American populations in search of a better life. In a context of precariousness and sexual vulnerability, a large proportion of infected immigrants actually acquire the virus after their arrival in French Guiana (4, 5). The standards of health care are those of mainland France. Since 2013, antiretroviral (ARV) treatment has been recommended for all patients irrespective of their CD4 count. All persons living with HIV receive free or paid ARV treatment regardless of their origin or socio-economic level. Undocumented immigrants with HIV are eligible for residence permits for medical reasons and health insurance coverage. Although the simplification of treatment regimens and follow-up has led to fewer constraints for patients, their follow-up remains of paramount importance to optimize antiretroviral therapy, to detect and prevent HIV-related morbidity, and to prevent secondary infections. In 2006, we studied follow-up interruption and its risk factors in French Guiana (6). Younger patients, foreigners, untreated patients, and patients who were not immunosuppressed at the time of diagnosis were more likely to interrupt follow-up. In addition, the greatest risk of interruption was usually within the first 6 months after diagnosis. Since then, nearly all patients have been given ARVs, with regimens that are more potent but better tolerated, which usually simplifies medical follow-up. In the post 90-90-90 context, French Guiana aims to optimize the cascade of care by improving early testing and treatment and by retaining patients in the healthcare system so they can continue to benefit from virological suppression. Despite progress, retention remains a problem in France (7, 8). For the specific case of French Guiana, our hypothesis was that different and novel forces were at play and may have modified adherence to outpatient follow-up. First, given the widespread early treatment with powerful and well-tolerated drugs, the simplification of follow-up would predict that patient retention should increase over time. Second, because patient follow-up is much simpler than before, general practitioners are more inclined to accept such a task, and patients, always wary of being spotted in a specialized HIV outpatient care center, may prefer to be followed close to their home, in the "stigma-free" practice of their family physician. We therefore looked at the proportion of patients lost to follow-up, from the point of view of hospital outpatient care, in different time periods. We also looked, for the first time, at data from the health insurance system (9), which notably reimburses all ARV drugs, to determine the proportion of patients for whom ambulatory care was given by a specialized hospital department and the proportion who received ambulatory care only from a private practitioner.
Methods
Patients From the Hospital Outpatient Clinics
The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study.
Lost to Follow-Up
This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system.
Determination of the Proportion of Patients Receiving Treatment in a Public Hospital or in a Private Practice
Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé (SNDS), which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure, in private practice, or both.
Selection of ARV Treatments
The ARV treatments were selected using the code identifiant de présentation (CIP) from the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance.
Patient Selection
The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for poor French residents or legal migrants (CMUC/CSS) and of the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana. A subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME and CMUC/CSS reflect poverty, these selection criteria allowed us to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations.
Description of the Evolution of the Type of Care
The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were prescribed only by the hospital sector; exclusively private when they were prescribed only by the private sector; and mixed when they were prescribed by both the hospital and the private sector. The annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the subpopulation restricted to CMUC/CSS or AME patients (precarious persons) at the regional level.
Data Analysis
Using the Dat'AIDS cohort data, single-failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as the failure event, for patients whose current follow-up status was unknown. Kaplan-Meier curves were plotted and log-rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. For the health insurance data, a linear trend chi-square test was used to test the statistical significance of the annual change in the different types of care observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual.
Regulatory and Ethical Issues
Anonymized individual data from the Dat'AIDS database were used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structures, the anonymized data from the Système National des Données de Santé (SNDS) are accessible to certain organizations with a public service mission. In particular, these organizations, listed by decree of the Conseil d'Etat issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of academic hospitals and of the Institut National de la Santé et de la Recherche Médicale (Inserm).
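The patient selection and type-of-care classification described above lend themselves to a compact record-level sketch. The code below is illustrative only: the column names (beneficiary_id, year, cip_code, prescriber_sector, coverage) and the CIP codes are hypothetical placeholders, not the actual SNDS/DCIR variable names and not the authors' code.

```python
# Illustrative sketch of the patient selection and type-of-care classification
# described above. Column names and CIP codes are hypothetical placeholders.
import pandas as pd

ARV_CIP_CODES = {"3400930000001", "3400930000002"}  # hypothetical CIP codes for ARVs


def classify_care(reimbursements: pd.DataFrame) -> pd.DataFrame:
    """Return one row per beneficiary and year with the type of ARV care.

    `reimbursements` is assumed to hold one row per reimbursed drug delivery with:
      - beneficiary_id    : pseudonymized patient identifier
      - year              : calendar year of the delivery
      - cip_code          : CIP code of the dispensed drug
      - prescriber_sector : "hospital" or "private"
      - coverage          : e.g. "AME", "CMUC/CSS", or "other"
    """
    # Keep only deliveries of ARVs (at least one per year qualifies a patient).
    arv = reimbursements[reimbursements["cip_code"].isin(ARV_CIP_CODES)]

    def care_type(sectors: pd.Series) -> str:
        s = set(sectors)
        if s == {"hospital"}:
            return "hospital only"
        if s == {"private"}:
            return "private only"
        return "mixed"

    per_patient_year = (
        arv.groupby(["beneficiary_id", "year"])
        .agg(
            care=("prescriber_sector", care_type),
            precarious=("coverage", lambda c: c.isin(["AME", "CMUC/CSS"]).any()),
        )
        .reset_index()
    )
    return per_patient_year
```

The yearly headcounts by type of care then follow from `per_patient_year.groupby(["year", "care"]).size()`.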
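The survival analysis of time to follow-up interruption can be sketched with the lifelines package. This is not the authors' implementation; the column names (months_to_event, interrupted, diagnosis_period) are assumptions made for illustration.

```python
# Illustrative sketch of the time-to-follow-up-interruption analysis described
# above: Kaplan-Meier curves per period, compared with a log-rank test.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test


def follow_up_interruption_analysis(cohort: pd.DataFrame) -> None:
    """`cohort` is assumed to hold one row per patient with:
       - months_to_event  : months from entry to interruption or censoring
       - interrupted      : 1 if follow-up was interrupted >12 months (not by death), else 0
       - diagnosis_period : label of the calendar period being compared
    """
    kmf = KaplanMeierFitter()
    for period, grp in cohort.groupby("diagnosis_period"):
        # One survival curve per period, as in Figure 1.
        kmf.fit(grp["months_to_event"], event_observed=grp["interrupted"], label=str(period))
        kmf.plot_survival_function()

    # Log-rank test across all periods at once.
    result = multivariate_logrank_test(
        cohort["months_to_event"], cohort["diagnosis_period"], cohort["interrupted"]
    )
    print(f"log-rank chi2 = {result.test_statistic:.2f}, p = {result.p_value:.4f}")
```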
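The linear trend chi-square test on annual proportions can be implemented directly with a Cochran-Armitage-type statistic. The sketch below is a generic illustration under that assumption; the yearly counts in the example are placeholders, not the study data.

```python
# Illustrative chi-square test for linear trend across ordered years, of the
# kind used for the annual change in type of care. Example counts are
# placeholders, not the actual study figures.
import numpy as np
from scipy.stats import chi2


def chi2_linear_trend(events, totals, scores=None):
    """Cochran-Armitage-type trend test for proportions events/totals across
    ordered groups (e.g., years). Returns (chi2 statistic with 1 df, p-value)."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    if scores is None:
        scores = np.arange(len(events), dtype=float)
    n_total, n_events = totals.sum(), events.sum()
    p_bar = n_events / n_total
    s_bar = (scores * totals).sum() / n_total
    numerator = (events * (scores - s_bar)).sum() ** 2
    denominator = p_bar * (1 - p_bar) * (totals * (scores - s_bar) ** 2).sum()
    stat = numerator / denominator
    return stat, chi2.sf(stat, df=1)


# Hypothetical example: patients followed only in private practice out of all
# patients on ARVs, per year.
stat, p = chi2_linear_trend(events=[250, 270, 300, 340], totals=[2300, 2450, 2650, 2900])
print(f"chi2 trend = {stat:.2f}, p = {p:.4f}")
```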
Results
Hospital Follow-Up in Time
We tallied 803 persons who were not known to have died and who were lost to follow-up. Figure 1 shows that, over time, patients tended to be lost to follow-up increasingly early (log-rank test, p < 0.001, Table 1).
Figure 1. Incidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort.
Table 1. Log-rank test comparing the incidence of follow-up interruption for different consecutive years. Chi2 (4) = 307.45; p < 0.0001.
The Total Number of Persons on ARV in French Guiana
When looking at the health insurance data, in 2020 there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The overall increase over the period was 33.35%, with an average annual growth rate of 7.46%. However, the increase in the number of patients was more moderate in 2020, a year in which activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns.
Table 2. Distribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in French Guiana between 2016 and 2020. Source: SNDS/DCIR, beneficiaries of the RG (CMU/CSS and AME), RSI, and MSA affiliated in French Guiana, excluding complementary insurances not managed by the national information system.
Hospital and/or Private Follow-Up
In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector, 16.31% (n = 497) were treated exclusively in the private sector, and 26.26% (n = 800) had mixed treatment prescription (private and hospital) (Table 2). Until 2019, the number of patients followed up increased whatever the type of care. In 2020, only the number of patients followed exclusively in the private sector continued to increase (+138 patients), whereas the numbers followed in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed exclusively by the hospital sector decreased over the last 3 years, particularly in 2020; the change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (a statistically significant trend; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared with 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 led to dramatic changes in 2020 (lockdowns, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019; the linear increase in the trend toward private follow-up remained significant (p = 0.004 and p = 0.002, respectively).
Focus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat). In 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or AME (i.e., 61.47% of all patients on ARVs), the insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 CMUC/CSS or AME patients on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased in 2017, increased over 2018, 2019, and 2020. The proportion of precarious patients on ARVs followed exclusively in the private sector thus rose to 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (a combination of public and private follow-up) increased by 2 percentage points between 2016 and 2020 (a statistically significant trend over the period; p = 0.0036), reaching more than 28% in 2020 (Table 3).
Table 3. Distribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients. Source: SNDS/DCIR, beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana. Linear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001; linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003. Because of trend distortions resulting from COVID-19, the trends were recalculated for 2016–2019, excluding 2020: *linear trend chi2 (private vs. hospital only, 2016–2019): p = 0.004; **linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2019): p = 0.002.
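The average annual growth rates quoted above follow from the overall changes by compounding over the four annual steps from 2016 to 2020. A minimal arithmetic check, using only the figures reported in this section:

```python
# Compound annual growth rate implied by the overall change over 2016-2020
# (four annual steps), reproducing the average annual rates quoted above.
def annual_rate(overall_change: float, steps: int) -> float:
    return (1 + overall_change) ** (1 / steps) - 1


print(f"{annual_rate(0.3335, 4):.2%}")  # all patients on ARVs: ~7.46% per year
print(f"{annual_rate(0.2453, 4):.2%}")  # CMUC/CSS or AME patients: ~5.64% per year

# The 2020 breakdown is internally consistent with the reported total:
assert 1750 + 497 + 800 == 3047
```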
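Given a per-patient-year classification like the one sketched after the Methods section, the yearly distributions in Tables 2 and 3 reduce to a cross-tabulation. The sketch below is illustrative only and assumes the hypothetical columns (year, care, precarious) introduced in that earlier sketch.

```python
# Illustrative cross-tabulation of the type of care by year, expressed as
# percentages, mirroring the layout of Tables 2 and 3. `per_patient_year` is
# assumed to carry the hypothetical columns from the classification sketch.
import pandas as pd


def care_distribution(per_patient_year: pd.DataFrame, precarious_only: bool = False) -> pd.DataFrame:
    data = per_patient_year
    if precarious_only:
        # Restrict to CMUC/CSS or AME beneficiaries, as for Table 3.
        data = data[data["precarious"]]
    counts = pd.crosstab(data["year"], data["care"])
    # Convert yearly counts to row percentages.
    return counts.div(counts.sum(axis=1), axis=0).mul(100).round(2)
```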
null
null
[ "Patients From the Hospital Outpatient Clinics", "Lost to Follow-Up", "Determination of the Proportion of Patients Receiving Treatment in a Public Hospital or in a Private Practice", "Selection of ARV Treatments", "Patient Selection", "Description of the Evolution of the Type of Care", "Data Analysis", "Regulatory and Ethical Issues", "Hospital Follow-Up in Time", "The Total Number of Persons on ARV in French Guiana", "Hospital and/or Private Follow-Up", "Ethics Statement", "Author Contributions" ]
[ "The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study.", "This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system.", "Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both.", "The ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance.", "The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana.\nA subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations.", "The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector.\nThe annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level.", "Using the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. 
For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual.", "Anonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm).", "We tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1).\nIncidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort.\nLog-rank test comparing the incidence of follow-up interruption for different consecutive years.\nCHI2 (4) = 307.45; p = 0.0000.", "When looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns.\nDistribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020.\nSource: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME).\nRSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer.", "In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1).\nUntil 2019, the number of patients followed up increased whatever the type of care. In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. 
However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively.\nFocus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat).\nIn 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3).\nDistribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients.\nSource: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana.\nLinear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001;\nLinear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003.\nBecause of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020.\n*Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.002.", "The studies involving human participants were reviewed and approved by Commission Nationale Informatique et Libertés. The patients/participants provided their written informed consent to participate in this study.", "MN: study design. HD and NV: data analysis. HD and MN: first draft writing. CS, LA, AL, FH, LM, JW, NV, EP, and AA: review and editing. All authors contributed to the article and approved the submitted version." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Patients From the Hospital Outpatient Clinics", "Lost to Follow-Up", "Determination of the Proportion of Patients Receiving Treatment in a Public Hospital or in a Private Practice", "Selection of ARV Treatments", "Patient Selection", "Description of the Evolution of the Type of Care", "Data Analysis", "Regulatory and Ethical Issues", "Results", "Hospital Follow-Up in Time", "The Total Number of Persons on ARV in French Guiana", "Hospital and/or Private Follow-Up", "Discussion", "Data Availability Statement", "Ethics Statement", "Author Contributions", "Conflict of Interest", "Publisher's Note" ]
[ "French Guiana is the French overseas territory where the HIV epidemic is most prevalent (1). Recent modeling using the European Center for Disease Control (ECDC) modeling tool estimated the number of persons living with HIV to be over 3,200, 3,000 of whom knew their diagnosis (2). HIV transmission in French Guiana is mostly heterosexual and three of four patients are foreign citizens (3). French Guiana has the highest gross domestic product (GDP) per capita in South America and, as such, it is attractive for the poor Caribbean and South American populations in search of a better life. In a context of precariousness and sexual vulnerability, a large proportion of infected immigrants actually acquire the virus after their arrival in French Guiana (4, 5). The standards of health care are those of mainland France. Since 2013, antiretroviral (AVR) treatment is recommended for all patients irrespective of their CD4 count. All persons living with HIV receive free or paid AVR treatments regardless of their origin or socio-economic level. Undocumented immigrants with HIV are eligible for residence permits for medical reasons and health insurance coverage. Although the simplification of treatment regimens and follow-up has led to fewer constraints for the patients, their follow-up remains of paramount importance to optimize anti-retroviral therapy to detect and prevent HIV-related morbidity and prevent secondary infections. In 2006, we had studied follow-up interruption and its risk factors in French Guiana (6). Younger patients, foreigners, untreated patients, and non-immunosuppressed patients at the time of diagnosis were more likely to interrupt follow-up. In addition, the greatest risk of interruption was usually within the first 6 months after diagnosis. Since then, nearly all patients are given AVRs with regimens that are more potent but better tolerated, which usually simplifies medical follow-up. In the post 90-90-90 context, French Guiana aims to optimize the cascade of care by improving early testing and treatment and retaining patients in the healthcare system so they can continue to benefit from virological suppression. Despite progress, this remains a problem in France (7, 8). For the specific case of French Guiana, our hypothesis was that different and novel forces were at play and may have modified the adherence to outpatient follow-up. First, in the widespread early treatment with powerful and well-tolerated drugs, the simplification of follow-up would predict that patient retention should increase over time. Second, because patient follow-up is much simpler than before, general practitioners are more inclined to accept such a task, and patients –always weary of being spotted in a specialized HIV outpatient care center—may prefer to be followed closely to their home, in the “stigma-free” practice of their family physician. We therefore looked at the proportion of patients lost to follow-up –from the point of view of hospital outpatient care in different time periods. 
We also looked, for the first time, at the data from the health insurance system (9), which notably reimburses all antiretroviral drugs (ARVs) to determine the proportion of patients for whom ambulatory care was given from a specialized hospital department and those who only received ambulatory care from a private practitioner.", " Patients From the Hospital Outpatient Clinics The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study.\nThe information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study.\n Lost to Follow-Up This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system.\nThis cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system.\n Determination of the Proportion of Patients Receiving Treatment in a Public Hospital or in a Private Practice Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both.\nSince 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both.\n Selection of ARV Treatments The ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance.\nThe ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance.\n Patient Selection The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. 
The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana.\nA subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations.\nThe data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana.\nA subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations.\n Description of the Evolution of the Type of Care The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector.\nThe annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level.\nThe type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector.\nThe annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level.\n Data Analysis Using the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not 
due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual.\nUsing the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual.\n Regulatory and Ethical Issues Anonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm).\nAnonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm).", "The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study.", "This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system.", "Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). 
For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both.", "The ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance.", "The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana.\nA subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations.", "The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector.\nThe annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level.", "Using the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual.", "Anonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. 
This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm).", " Hospital Follow-Up in Time We tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1).\nIncidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort.\nLog-rank test comparing the incidence of follow-up interruption for different consecutive years.\nCHI2 (4) = 307.45; p = 0.0000.\nWe tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1).\nIncidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort.\nLog-rank test comparing the incidence of follow-up interruption for different consecutive years.\nCHI2 (4) = 307.45; p = 0.0000.\n The Total Number of Persons on ARV in French Guiana When looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns.\nDistribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020.\nSource: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME).\nRSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer.\nWhen looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns.\nDistribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020.\nSource: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME).\nRSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer.\n Hospital and/or Private Follow-Up In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1).\nUntil 2019, the number of patients followed up increased whatever the type of care. In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. 
The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively.\nFocus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat).\nIn 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3).\nDistribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients.\nSource: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana.\nLinear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001;\nLinear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003.\nBecause of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020.\n*Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.002.\nIn 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1).\nUntil 2019, the number of patients followed up increased whatever the type of care. 
In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively.\nFocus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat).\nIn 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3).\nDistribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients.\nSource: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana.\nLinear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001;\nLinear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003.\nBecause of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020.\n*Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. 
hospital only, 2016–2020): p = 0.002.", "We tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1).\nIncidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort.\nLog-rank test comparing the incidence of follow-up interruption for different consecutive years.\nCHI2 (4) = 307.45; p = 0.0000.", "When looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns.\nDistribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020.\nSource: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME).\nRSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer.", "In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1).\nUntil 2019, the number of patients followed up increased whatever the type of care. In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively.\nFocus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat).\nIn 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. 
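The average annual growth rates quoted above (7.46% for all patients on ARVs, 5.64% for CMUC/CSS or AME beneficiaries) are consistent with a compound annual growth rate computed from the overall change across the four yearly steps from 2016 to 2020. A minimal sketch of that calculation is given below; the function name is illustrative, not from the source data.

```python
# Compound annual growth rate (CAGR) check for the figures quoted above.
# total_change is the overall relative increase over the period; n_steps is
# the number of yearly transitions (2016 -> 2020 = 4 steps).

def average_annual_growth(total_change: float, n_steps: int) -> float:
    """Return the constant annual rate that compounds to `total_change`."""
    return (1.0 + total_change) ** (1.0 / n_steps) - 1.0

print(f"{average_annual_growth(0.3335, 4):.2%}")  # ~7.46% (all patients on ARVs)
print(f"{average_annual_growth(0.2453, 4):.2%}")  # ~5.64% (CMUC/CSS or AME beneficiaries)
```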
In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3).\nDistribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients.\nSource: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana.\nLinear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001;\nLinear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003.\nBecause of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020.\n*Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.002.", "For the first time, the use of health insurance data allowed us to see the whole picture of HIV care in French Guiana. The actual number of persons benefiting from antiretroviral treatment in French Guiana was actually very close to the recent estimations using HIV-cohort data for the ECDC modeling tool (1). The number of persons on AVR treatment has substantially increased over the study time but slowed down in 2020. Given the simultaneous disruption of information systems and HIV testing and care, it is difficult to tease out whether this reflects the slowing down of an epidemic where increasing numbers of persons are virologically controlled, or the COVID-19-related gap in new diagnoses and patient referrals. Data from 2021 is also likely to be impacted by COVID-19, so only future post-COVID-19 years may allow us to find out a better explanation for our observation.\nWhile the organization of care and virological results had gradually improved over the years, we were initially surprised to see the rate of persons lost to follow-up from the hospital outpatient cohort actually increased over time. However, clinicians had perceived the feedback from patients increasingly tempted by a follow-up by their private practitioner for proximity reasons and because going to specialized HIV outpatient departments was more stigmatizing than a waiting room of a general practitioner. At the same time, treatment regimens and follow-up are far simpler than they used to be, which expands the number of physicians willing to take-on the task of being the referent physician of patients with HIV. 
Furthermore, the aging HIV-cohort (39.5% aged over 50 years) increasingly requires general practitioners–not infectious disease specialists—to manage comorbidities, which may be another incentive to have an integral follow-up by one's family physician instead of a specialist. This perception seems to be vindicated by the health insurance data, which shows a growing proportion of persons choosing to be exclusively treated by their private practitioner. Furthermore, about a quarter used both the private and public sectors, suggesting pragmatic convenience was behind this behavior. The COVID-19 disruption of HIV follow-up may have further tilted follow-up toward private practice in 2020, and it remains to be seen whether this will be corrected in the future or if it has accelerated an ongoing trend toward decentralization of HIV care.\nAt first view, one could have hypothesized that this trend toward private practice did not include the most precarious cases which in French Guiana's HIV cohort, represents 2/3rds of patients if we only look at the type of Health Insurance. However, this was refuted by the data which shows the exact same trend among the more precarious and shows that, contrary to widespread beliefs, poor populations do have access to private practice and use it readily as soon as they have health coverage. Since 2000, the Public Health Law has ensured that the most precarious cases do not have to advance any fees for medical consultations, and thus there are no financial obstacles to consult a private practitioner instead of a hospital physician.\nThe study has a number of limitations. First, the use of health insurance regimens as a proxy for poverty is legitimate, but it may actually underestimate the degree of precariousness. Particularly, the SNDS did not provide other indicators, such as country of birth or nationality, which may have served as another proxy for precariousness. Another limitation is that the simultaneous accelerating rate of persons being lost to follow-up from the hospital system and the increasing proportion of patients exclusively followed by private practitioners do not formally prove that those that do not come back in the outpatient ward are actually consulting private practitioners. Instead, they could have left the territory, or stopped treatment and follow-up, or died. However, for deaths or treatment interruptions, this seems largely unlikely because opportunistic infections and deaths –which eventually are counted—have substantially decreased over time suggesting that indeed many of the patients who do not come back to the hospital outpatient clinic are receiving care in private practices.\nIn conclusion, while hospital outpatient care remains by far the most common mode of patient care, there seems to be a gradual erosion of this model in favor of private practice. For the first time, the Health Insurance information system allows obtaining a broader view of patient care than hospital cohort data alone. In the rapidly evolving therapeutic and conceptual landscape of HIV care, in the context of aging cohorts and shifting patient preferences, coordinating continuous medical education with private practitioners seems to be crucially important so that every caregiver is up to date with the current guidelines. 
This slow gradual process may even have accelerated in the transformational context of the COVID-19 pandemic.", "The data analyzed in this study is subject to the following licenses/restrictions: The DATAIDS data may be shared upon reasonable request. The health Insurance data requires additional permissions. Requests to access these datasets should be directed to corevih@ch-cayenne.", "The studies involving human participants were reviewed and approved by Commission Nationale Informatique et Libertés. The patients/participants provided their written informed consent to participate in this study.", "MN: study design. HD and NV: data analysis. HD and MN: first draft writing. CS, LA, AL, FH, LM, JW, NV, EP, and AA: review and editing. All authors contributed to the article and approved the submitted version.", "The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.", "All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher." ]
[ "intro", "methods", null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "data-availability", null, null, "COI-statement", "disclaimer" ]
[ "HIV", "cascade of care", "follow-up interruption", "private practice", "French Guiana" ]
Introduction: French Guiana is the French overseas territory where the HIV epidemic is most prevalent (1). Recent modeling using the European Center for Disease Control (ECDC) modeling tool estimated the number of persons living with HIV to be over 3,200, 3,000 of whom knew their diagnosis (2). HIV transmission in French Guiana is mostly heterosexual and three of four patients are foreign citizens (3). French Guiana has the highest gross domestic product (GDP) per capita in South America and, as such, it is attractive for the poor Caribbean and South American populations in search of a better life. In a context of precariousness and sexual vulnerability, a large proportion of infected immigrants actually acquire the virus after their arrival in French Guiana (4, 5). The standards of health care are those of mainland France. Since 2013, antiretroviral (AVR) treatment is recommended for all patients irrespective of their CD4 count. All persons living with HIV receive free or paid AVR treatments regardless of their origin or socio-economic level. Undocumented immigrants with HIV are eligible for residence permits for medical reasons and health insurance coverage. Although the simplification of treatment regimens and follow-up has led to fewer constraints for the patients, their follow-up remains of paramount importance to optimize anti-retroviral therapy to detect and prevent HIV-related morbidity and prevent secondary infections. In 2006, we had studied follow-up interruption and its risk factors in French Guiana (6). Younger patients, foreigners, untreated patients, and non-immunosuppressed patients at the time of diagnosis were more likely to interrupt follow-up. In addition, the greatest risk of interruption was usually within the first 6 months after diagnosis. Since then, nearly all patients are given AVRs with regimens that are more potent but better tolerated, which usually simplifies medical follow-up. In the post 90-90-90 context, French Guiana aims to optimize the cascade of care by improving early testing and treatment and retaining patients in the healthcare system so they can continue to benefit from virological suppression. Despite progress, this remains a problem in France (7, 8). For the specific case of French Guiana, our hypothesis was that different and novel forces were at play and may have modified the adherence to outpatient follow-up. First, in the widespread early treatment with powerful and well-tolerated drugs, the simplification of follow-up would predict that patient retention should increase over time. Second, because patient follow-up is much simpler than before, general practitioners are more inclined to accept such a task, and patients –always weary of being spotted in a specialized HIV outpatient care center—may prefer to be followed closely to their home, in the “stigma-free” practice of their family physician. We therefore looked at the proportion of patients lost to follow-up –from the point of view of hospital outpatient care in different time periods. We also looked, for the first time, at the data from the health insurance system (9), which notably reimburses all antiretroviral drugs (ARVs) to determine the proportion of patients for whom ambulatory care was given from a specialized hospital department and those who only received ambulatory care from a private practitioner. 
Methods: Patients From the Hospital Outpatient Clinics The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study. The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study. Lost to Follow-Up This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system. This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system. Determination of the Proportion of Patients Receiving Treatment in a Public Hospital or in a Private Practice Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both. Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both. Selection of ARV Treatments The ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance. The ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance. Patient Selection The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana. 
A subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations. The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana. A subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations. Description of the Evolution of the Type of Care The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector. The annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level. The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector. The annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level. Data Analysis Using the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. 
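As a concrete illustration of the survival analysis described above (follow-up interruption of more than 12 months treated as the failure event, Kaplan-Meier curves, log-rank test across periods), here is a minimal sketch using the Python lifelines package. The input file and column names are assumptions for illustration only; they are not the actual DAT'AIDS export.

```python
# Minimal sketch of a time-to-follow-up-interruption analysis.
# Assumed CSV with one row per patient and columns:
#   months_to_event - months from inclusion to interruption or censoring
#   interrupted     - 1 if follow-up was interrupted (>12 months without a visit), 0 if censored
#   period          - calendar period of inclusion used to compare cohorts
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("followup_interruption.csv")  # hypothetical file name

# Kaplan-Meier curve per inclusion period
ax = None
for period, grp in df.groupby("period"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["months_to_event"], event_observed=grp["interrupted"], label=str(period))
    ax = kmf.plot_survival_function(ax=ax)

# Log-rank test comparing the periods (analogous to the chi2(4) reported in the Results)
result = multivariate_logrank_test(df["months_to_event"], df["period"], df["interrupted"])
print(result.test_statistic, result.p_value)
```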
For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual. Using the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual. Regulatory and Ethical Issues Anonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm). Anonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm). Patients From the Hospital Outpatient Clinics: The information systems for HIV in French Guiana, mandatory reporting of new HIV infections and AIDS cases, and the French Hospital Database of HIV, a cohort for which specifically trained research technicians have collected data on HIV patients since 1989 {first in the DMI2 government program until 2008, then in eNADIS/DATAIDS [Dat'AIDS cohort (clinicaltrials.gov ref. NCT02898987)]}, were used for the study. Lost to Follow-Up: This cohort allowed us to obtain the number and proportion of patients lost to follow-up (not having consulted in more than 12 months) from the point of view of the hospital system. Determination of the Proportion of Patients Receiving Treatment in a Public Hospital or in a Private Practice: Since 2017, health administrations and researchers may, with permission, access data from the Système National des Données de Santé which now combines health care reimbursement data (SNIIRAM), hospital data (PMSI), and death certificate data (CepiDC). 
For French Guiana, this allowed us to obtain, for the first time, the proportion of persons treated for HIV in a public hospital structure or in private practice, or both. Selection of ARV Treatments: The ARV treatments were selected from the code identifiant de présentation (CIP) code of the Drug Database [base de données publique des médicaments (BDM)], the reference database of drugs reimbursed by the French Health Insurance. Patient Selection: The data used to select the people on ARVs were taken from the inter-scheme consumption datamart (DCIR). All the health insurance beneficiaries were included in the study if they received at least one reimbursement for the delivery of ARVs during the year, even when the treatments were prescribed and/or delivered outside French Guiana. The specific insurance regimens included the General Health Insurance Scheme, including beneficiaries of the free complementary health insurance coverage for the poor French residents or legal migrants (CMUC/CSS) and the state medical aid for undocumented foreigners (AME); the social security system for the self-employed (RSI); and the agricultural social mutual insurance (MSA) of French Guiana. A subsample was also made up of patients who received at least one reimbursement for ARVs under the AME or CMUC/CSS during the year. Since AME or CMUC/CSS reflect poverty, these selection criteria allowed to test the hypothesis that trends for private or public follow-up differed between socially precarious and non-precarious populations. Description of the Evolution of the Type of Care: The type of care was defined as follows: exclusively hospital-based when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the hospital sector, exclusively private when the ARV treatments for which the patient was reimbursed during the year were exclusively prescribed by the private sector and mixed when the ARV treatments for which the patient was reimbursed during the year are prescribed by the hospital sector and the private sector. The annual evolution of the distribution of the type of care was studied over the 2016–2020 period in the population of all patients on ARVs at the regional level and in the population, including only CMUC/CSS or AME patients (precarious persons) at the regional level. Data Analysis: Using the DAT'AIDS cohort data, single failure survival analyses were performed with follow-up interruption of more than 12 months (not due to death) as a failure event for a patient without knowledge of his or her present follow-up status. Kaplan Meier curves were plotted. Log Rank tests were performed. These analyses allowed us to quantify and visualize the time until follow-up interruption for different periods. For the health insurance data, a linear trend Chi-square test was used to test the statistical significance of the annual change in the different types of care, observed graphically over the period studied. This trend analysis aimed to determine whether the process was gradual. Regulatory and Ethical Issues: Anonymized individual data from the DAT'AIDS database was used (clinicaltrials.gov ref. NCT02898987). For the proportion followed up in private or public structure, the anonymized data from the système national des données de santé (SNDS) is accessible for certain organizations with a public service mission. 
Particularly, these organizations, listed by decree by the Conseil d'Etat, issued after the opinion of the Commission Nationale Informatique et Libertés (CNIL), may access certain data on a permanent basis in order to carry out their missions. This is the case for researchers of Academic hospitals and the Institut National de la Santé et de la Recherche Médicale (Inserm). Results: Hospital Follow-Up in Time We tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1). Incidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort. Log-rank test comparing the incidence of follow-up interruption for different consecutive years. CHI2 (4) = 307.45; p = 0.0000. We tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1). Incidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort. Log-rank test comparing the incidence of follow-up interruption for different consecutive years. CHI2 (4) = 307.45; p = 0.0000. The Total Number of Persons on ARV in French Guiana When looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns. Distribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020. Source: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME). RSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer. When looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns. Distribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020. Source: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME). RSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer. Hospital and/or Private Follow-Up In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1). Until 2019, the number of patients followed up increased whatever the type of care. 
In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively. Focus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat). In 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3). Distribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients. Source: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana. Linear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001; Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003. Because of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020. *Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.002. 
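The linear-trend chi-square statistics reported above can be reproduced from yearly counts of the kind shown in Tables 2 and 3 with a Cochran-Armitage-type test for trend in proportions. The sketch below implements the standard formula directly; the yearly counts in the example are placeholders, not the actual table values.

```python
# Cochran-Armitage-style chi-square test for a linear trend in proportions,
# e.g. "private only" vs. "hospital only" follow-up across the years 2016-2020.
import numpy as np
from scipy.stats import chi2

def chi2_linear_trend(successes, totals, scores=None):
    """Chi-square (1 df) test for a linear trend in successes/totals across ordered groups."""
    x = np.asarray(successes, dtype=float)   # e.g. patients followed in private practice only
    n = np.asarray(totals, dtype=float)      # e.g. private-only + hospital-only patients per year
    t = np.arange(len(x), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    N, S = n.sum(), x.sum()
    p = S / N
    num = (np.sum(t * x) - S * np.sum(t * n) / N) ** 2
    den = p * (1 - p) * (np.sum(n * t ** 2) - np.sum(t * n) ** 2 / N)
    stat = num / den
    return stat, chi2.sf(stat, df=1)

# Placeholder yearly counts (2016..2020), for illustration only:
private_only = [250, 280, 330, 400, 497]
denominator = [1600, 1700, 1800, 1900, 2247]   # private-only + hospital-only
print(chi2_linear_trend(private_only, denominator))
```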
In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1). Until 2019, the number of patients followed up increased whatever the type of care. In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively. Focus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat). In 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3). Distribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients. Source: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana. Linear trend chi2 (private vs. 
hospital only, 2016–2020): p < 0.001; Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003. Because of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020. *Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.002. Hospital Follow-Up in Time: We tallied 803 persons that were not known to have died and who were lost to follow-up. Figure 1 shows that over time, patients seemed to be lost to follow-up increasingly sooner (Log-rank test, p < 0.001, Table 1). Incidence of follow-up interruption in French Guiana from the point of view of the hospital HIV outpatient cohort. Log-rank test comparing the incidence of follow-up interruption for different consecutive years. CHI2 (4) = 307.45; p = 0.0000. The Total Number of Persons on ARV in French Guiana: When looking at the health insurance data, in 2020, there were 3,047 patients who benefitted from health insurance payments for ARV treatment. This total number has been steadily increasing since 2016 (see Table 2). The evolution rate over the period was 33.35% with an average annual evolution rate of 7.46%. However, the evolution of the number of patients was more moderate in 2020, a year when activity was completely disorganized by coronavirus disease 2019 (COVID-19) and lockdowns. Distribution of the type of antiretroviral (ARV) follow-up (private, hospital, or mixed) in Guiana between 2016 and 2020. Source: SNDS/DCIR—Beneficiaries of the RG (CMU/CSS and AME). RSI and MSA affiliated in French Guiana, excluding complementary insurances not managed by the computer. Hospital and/or Private Follow-Up: In 2020, 57.43% (n = 1,750) of patients were followed exclusively by the hospital sector. The proportion of patients treated exclusively in the private sector was 16.31% (n = 497), and that of patients with mixed treatment prescription (private and hospital) was 26.26% (n = 800) (Table 1). Until 2019, the number of patients followed up increased whatever the type of care. In 2020, only the number of patients followed up exclusively in the private sector continued to increase (+138 patients), whereas the number of patients followed up in the hospital sector or with mixed follow-up decreased slightly (see Table 2). Furthermore, the proportion of patients followed up exclusively by the hospital sector has decreased over the last 3 years, particularly in 2020. The change over the entire period was statistically significant (p < 0.0001). Conversely, the proportion of patients followed exclusively by the private sector has been steadily increasing since 2018 (the trend was statistically significant; p < 0.0001). Finally, the proportion of patients with follow-up in both the hospital and private sectors has varied little since 2016. However, there was a slight decrease in 2020 compared to 2018 and 2019, years in which the proportion had stabilized at around 28% of patients (the trend over the whole period being statistically significant; p = 0.006). Because COVID-19 in 2020 led to dramatic changes (lock-downs, cancelation of hospital consultations, etc.), we also computed the trends for 2016–2019, and the observation of a linear increase of the trend toward outpatients was still significant: p = 0.004 and p = 0.002, respectively. Focus on patients with indicators of precariousness (Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire or Aide Médicale Etat). 
In 2020, 1,873 patients who had been reimbursed for ARVs were beneficiaries of the CMUC/CSS or the AME (i.e., 61.47% of all patients on ARVs) insurance plans for socially precarious persons. The number of patients in this situation increased until 2019 but decreased in 2020. Overall, the increase over the whole period was 24.53%, with an average annual growth rate of 5.64%. In 2020, 1,082 of the 1,873 patients receiving CMUC/CSS or AME on ARVs had exclusive hospital follow-up (i.e., 57.77%). This proportion steadily decreased over the last 3 years after an initial increase in 2017 (the change over the whole period being statistically significant; p < 0.0001). Conversely, the proportions of patients managed exclusively by the private sector or benefiting from mixed public and private follow-up, after having decreased over the year 2017, increased over the years 2018, 2019, and 2020. The proportion of precarious patients on ARVs exclusively followed up in the private sector thus rose to reach 14.15% in 2020 (a statistically significant trend for the period 2016–2020; p < 0.0001). The proportion of patients with mixed management (combination of public and private follow-up) increased by + 2 points between 2016 and 2020 (the trend was statistically significant over the period; p = 0.0036) to reach more than 28% in 2020 (Table 3). Distribution of the type of antiretroviral follow-up (private practice, hospital, or mixed) in French Guiana between 2016 and 2020 among precarious patients. Source: SNDS/DCIR—Beneficiaries of Couverture Maladie Universelle Complémentaire/Complémentaire Santé Solidaire and Aide Médicale Etat in French Guiana. Linear trend chi2 (private vs. hospital only, 2016–2020): p < 0.001; Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.003. Because of trend distortions resulting from COVID-19, the trend was recalculated for 2016–2019, excluding 2020. *Linear trend chi2 (private vs. hospital only, 2016–2009): p = 0.004; **Linear trend chi2 (mixed private+hospital vs. hospital only, 2016–2020): p = 0.002. Discussion: For the first time, the use of health insurance data allowed us to see the whole picture of HIV care in French Guiana. The actual number of persons benefiting from antiretroviral treatment in French Guiana was actually very close to the recent estimations using HIV-cohort data for the ECDC modeling tool (1). The number of persons on AVR treatment has substantially increased over the study time but slowed down in 2020. Given the simultaneous disruption of information systems and HIV testing and care, it is difficult to tease out whether this reflects the slowing down of an epidemic where increasing numbers of persons are virologically controlled, or the COVID-19-related gap in new diagnoses and patient referrals. Data from 2021 is also likely to be impacted by COVID-19, so only future post-COVID-19 years may allow us to find out a better explanation for our observation. While the organization of care and virological results had gradually improved over the years, we were initially surprised to see the rate of persons lost to follow-up from the hospital outpatient cohort actually increased over time. However, clinicians had perceived the feedback from patients increasingly tempted by a follow-up by their private practitioner for proximity reasons and because going to specialized HIV outpatient departments was more stigmatizing than a waiting room of a general practitioner. 
At the same time, treatment regimens and follow-up are far simpler than they used to be, which expands the number of physicians willing to take-on the task of being the referent physician of patients with HIV. Furthermore, the aging HIV-cohort (39.5% aged over 50 years) increasingly requires general practitioners–not infectious disease specialists—to manage comorbidities, which may be another incentive to have an integral follow-up by one's family physician instead of a specialist. This perception seems to be vindicated by the health insurance data, which shows a growing proportion of persons choosing to be exclusively treated by their private practitioner. Furthermore, about a quarter used both the private and public sectors, suggesting pragmatic convenience was behind this behavior. The COVID-19 disruption of HIV follow-up may have further tilted follow-up toward private practice in 2020, and it remains to be seen whether this will be corrected in the future or if it has accelerated an ongoing trend toward decentralization of HIV care. At first view, one could have hypothesized that this trend toward private practice did not include the most precarious cases which in French Guiana's HIV cohort, represents 2/3rds of patients if we only look at the type of Health Insurance. However, this was refuted by the data which shows the exact same trend among the more precarious and shows that, contrary to widespread beliefs, poor populations do have access to private practice and use it readily as soon as they have health coverage. Since 2000, the Public Health Law has ensured that the most precarious cases do not have to advance any fees for medical consultations, and thus there are no financial obstacles to consult a private practitioner instead of a hospital physician. The study has a number of limitations. First, the use of health insurance regimens as a proxy for poverty is legitimate, but it may actually underestimate the degree of precariousness. Particularly, the SNDS did not provide other indicators, such as country of birth or nationality, which may have served as another proxy for precariousness. Another limitation is that the simultaneous accelerating rate of persons being lost to follow-up from the hospital system and the increasing proportion of patients exclusively followed by private practitioners do not formally prove that those that do not come back in the outpatient ward are actually consulting private practitioners. Instead, they could have left the territory, or stopped treatment and follow-up, or died. However, for deaths or treatment interruptions, this seems largely unlikely because opportunistic infections and deaths –which eventually are counted—have substantially decreased over time suggesting that indeed many of the patients who do not come back to the hospital outpatient clinic are receiving care in private practices. In conclusion, while hospital outpatient care remains by far the most common mode of patient care, there seems to be a gradual erosion of this model in favor of private practice. For the first time, the Health Insurance information system allows obtaining a broader view of patient care than hospital cohort data alone. In the rapidly evolving therapeutic and conceptual landscape of HIV care, in the context of aging cohorts and shifting patient preferences, coordinating continuous medical education with private practitioners seems to be crucially important so that every caregiver is up to date with the current guidelines. 
This slow gradual process may even have accelerated in the transformational context of the COVID-19 pandemic. Data Availability Statement: The data analyzed in this study is subject to the following licenses/restrictions: The DATAIDS data may be shared upon reasonable request. The health Insurance data requires additional permissions. Requests to access these datasets should be directed to corevih@ch-cayenne. Ethics Statement: The studies involving human participants were reviewed and approved by Commission Nationale Informatique et Libertés. The patients/participants provided their written informed consent to participate in this study. Author Contributions: MN: study design. HD and NV: data analysis. HD and MN: first draft writing. CS, LA, AL, FH, LM, JW, NV, EP, and AA: review and editing. All authors contributed to the article and approved the submitted version. Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Background: Although the simplification of antiretroviral (ARV) treatment regimens and follow-up has led to fewer constraints for patients with HIV, their follow-up remains of paramount importance to optimize ARV therapy, to detect and prevent HIV-related morbidity, and to prevent secondary infections. The problem of follow-up interruption in French Guiana has been persistent and seemingly impervious to efforts to alleviate it. Methods: Using the complementary lenses of the hospital HIV cohort and the health insurance information system, we looked at the incidence of follow-up interruption and the proportion of patients followed by private practitioners. Results: We tallied 803 persons who were not known to have died and who were lost to follow-up. Over time, hospital outpatients were lost to follow-up significantly sooner. By contrast, there was a significant trend toward more and more patients being exclusively followed by private practitioners. Conclusions: While hospital outpatient care remains by far the most common mode of patient care, there seems to be a gradual erosion of this model in favor of private practice.
null
null
7,477
208
[ 76, 37, 82, 43, 195, 134, 128, 121, 104, 156, 757, 31, 55 ]
20
[ "patients", "hospital", "private", "follow", "2020", "data", "french", "trend", "proportion", "2016" ]
[ "antiretroviral treatment french", "systems hiv french", "care french guiana", "hiv transmission french", "guiana hiv cohort" ]
null
null
[CONTENT] HIV | cascade of care | follow-up interruption | private practice | French Guiana [SUMMARY]
[CONTENT] HIV | cascade of care | follow-up interruption | private practice | French Guiana [SUMMARY]
[CONTENT] HIV | cascade of care | follow-up interruption | private practice | French Guiana [SUMMARY]
null
[CONTENT] HIV | cascade of care | follow-up interruption | private practice | French Guiana [SUMMARY]
null
[CONTENT] Anti-Retroviral Agents | Cohort Studies | French Guiana | HIV Infections | Humans | Incidence [SUMMARY]
[CONTENT] Anti-Retroviral Agents | Cohort Studies | French Guiana | HIV Infections | Humans | Incidence [SUMMARY]
[CONTENT] Anti-Retroviral Agents | Cohort Studies | French Guiana | HIV Infections | Humans | Incidence [SUMMARY]
null
[CONTENT] Anti-Retroviral Agents | Cohort Studies | French Guiana | HIV Infections | Humans | Incidence [SUMMARY]
null
[CONTENT] antiretroviral treatment french | systems hiv french | care french guiana | hiv transmission french | guiana hiv cohort [SUMMARY]
[CONTENT] antiretroviral treatment french | systems hiv french | care french guiana | hiv transmission french | guiana hiv cohort [SUMMARY]
[CONTENT] antiretroviral treatment french | systems hiv french | care french guiana | hiv transmission french | guiana hiv cohort [SUMMARY]
null
[CONTENT] antiretroviral treatment french | systems hiv french | care french guiana | hiv transmission french | guiana hiv cohort [SUMMARY]
null
[CONTENT] patients | hospital | private | follow | 2020 | data | french | trend | proportion | 2016 [SUMMARY]
[CONTENT] patients | hospital | private | follow | 2020 | data | french | trend | proportion | 2016 [SUMMARY]
[CONTENT] patients | hospital | private | follow | 2020 | data | french | trend | proportion | 2016 [SUMMARY]
null
[CONTENT] patients | hospital | private | follow | 2020 | data | french | trend | proportion | 2016 [SUMMARY]
null
[CONTENT] patients | follow | hiv | french | guiana | french guiana | care | diagnosis | 90 | treatment [SUMMARY]
[CONTENT] data | de | hospital | treatments | arv treatments | insurance | health | french | private | prescribed [SUMMARY]
[CONTENT] 2020 | patients | 2016 | hospital | private | trend | significant | follow | sector | 2019 [SUMMARY]
null
[CONTENT] patients | follow | hospital | data | private | 2020 | hiv | health | french | insurance [SUMMARY]
null
[CONTENT] AVR | AVR ||| French | Guiana [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 803 ||| ||| [SUMMARY]
null
[CONTENT] ||| AVR | AVR ||| French | Guiana ||| ||| ||| 803 ||| ||| ||| [SUMMARY]
null
Spectral content (colour) of noise exposure affects work efficiency.
33243964
As part of an effort to enhance the efficiency of workers, experiments involving three types of noise exposure were conducted. Previous studies have shown that pink noise can cause brain waves to reach a lower potential. In this study, we used physical methods in cognitive experiments to understand the impact that three colour noises have on working efficiency.
INTRODUCTION
All 22 participants were exposed to quiet, red-noise, pink-noise and white-noise sound environments in turn. After the laboratory experiment, measures of psychomotor speed, continuous performance, executive function and working memory were recorded.
SUBJECTS AND METHODS
In the psychomotor speed test, red, pink and white noise were significantly better than the quiet environment. In the continuous performance test, pink noise gave the only significantly positive result. Red, pink and white noise resulted in better executive function test performance. Red and pink noise showed significantly positive improvement, while white noise was significantly better than the quiet environment in the working memory test. In addition, the results from the comfort questionnaires showed that red and pink noise increase the likelihood of better judgment, implementation, and overall environment.
RESULTS
At present, noise is generally considered to have negative effects on hearing and health. However, our experimental results show that certain noises can enhance environmental comfort. In the future, it is feasible to use knowledge of colour noises to improve productivity in the workplace while keeping the environment healthy.
CONCLUSION
[ "Audiometry, Pure-Tone", "Efficiency", "Executive Function", "Female", "Healthy Volunteers", "Hearing", "Humans", "Male", "Memory, Short-Term", "Noise", "Psychomotor Performance", "Reaction Time", "Sound", "Work", "Young Adult" ]
7986458
INTRODUCTION
Exposure to high levels of noise in industry factory is associated with a number of effects on health manifested in various psychosocial responses such as annoyance, disturbance of daily activities, sleep and performance and in physical responses, such as hearing loss, hypertension and ischemic heart disease.[12] While the open-plan office is now widespread in the workplace, it appears that noise is a major nuisance factor in open-plan offices in spite of a relatively low noise level(less than 65 dB(A)).[3] In general, the higher the decibel numbers of sound, the greater the health hazard. However, when the number of sound decibels is not high, the degree of influence on people may vary depending on the sound characteristics. In audio engineering, physics, and many other fields, the colour of noise refers to the power spectrum of a noise signal. The practice of naming kinds of noise after colours started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was assumed to have such a flat power spectrum over the visible range. Other colour names, like pink, red, and blue were then given to noise with other spectral profiles, often (but not always) in reference to the colour of light with similar spectra. Noise often has detrimental effects on performance. Under most circumstances, information processing is disturbed by environmental noise and other non-task compatible distracters.[45] However, researchers have recently reported that under certain circumstances individuals with attention problems appear to benefit from the addition of specific forms of environmental noise. Typically, this facilitative effect has been limited to non-vocal background music on simple arithmetic task performance,[6] but Stansfeld et al.[7] found just that under certain conditions even road traffic noise can improve performance on episodic memory tasks, particularly in children at risk of attention problems and academic underachievement. Furthermore, Söderlund et al.[8] have demonstrated that adding auditory white noise to the environment enhanced the memory performance of children with Attention Deficit Hyperactivity Disorder (ADHD)-type problems but disrupted that of non-ADHD control children. Stochastic resonance, a phenomenon whereby signal processing is enhanced by the addition of random noise, has been widely demonstrated across various modalities, including visual, auditory, tactile and cross-modal forms of processing.[910] Of particular interest are findings that auditory noise has the capacity to enhance some aspects of human cognitive performance, such as the speed of arithmetical computations.[11] Meanwhile, some recent studies showed related evidence. For example, in 2010, light music, an example belonging to pink noise, is proved beneficial for elder people to improve their sleep quality as a long term effect.[12] Moreover, in a previous study,[13] the steady pink noise with intensity of 35 dB, 40 dB, 50 dB and 60 dB was used to stimulate four participants during sleep and they believed that pink noise could improve the sleep quality of participants. However, it is clear that this conclusion was based on significant increase of light sleep period (especially stage 2) accompanied with declined duration of rapid eye movement. 
In the brain synchronization study, it has demonstrated that the pink noise could synchronize brain wave and induce brain activity into a specialized state, that is, in a lower complexity level.[14] Sezici’s study showed that playing of white noise significantly decreased the daily crying durations and increased the sleeping durations of the colicky babies compared to swinging in both groups.[15] In the literature, parents emphasized that playing of white noise was quite beneficial in relieving pain among crying colicky babies.[16] In their studies, Kucukoglu et al.[17] and Karakoc and Turker[18] revealed that the pain scores of the newborns who were made to listen to white noise were observed to decrease compared with those of the newborns in the control group. White noise encompasses all characteristics of sounds within the range of human hearing. It has been used in the treatment of tinnitus, insomnia, masking unwanted sounds, and provision of relaxation.[1920] Zhou et al.[14] demonstrated that steady pink noise has significant effect on reducing brain wave complexity and inducing more stable sleep time to improve sleep quality of individuals. In total, 42 elderly people (21 using music and 21 controls) completed the study, and there was some indication that soft slow music yielder higher improvement on some of the parameters, which are worthy of further investigation.[12] A tabulated list of the effects observed in previous studies with colour noise exposures is shown in Table 1. A tabulated list of the effects observed in previous studies with colour noise exposures Several experiments have demonstrated that the detrimental effect of office noise on worker attitude and performance can be significantly reduced through the use of masking noise.[2122232425] For instance the experimental study performed by Haka[24] demonstrated that the performance of operation span tasks, serial recall and long term memory tasks were all improved when masking noise was used; and the field study performed by Hongisto[23] indicated that masking noise significantly reduced disturbance of worker attitudes towards concentration caused by office noise. Masking noise achieves this improvement through reducing the intelligibility of nearby speech.[25] In addition, Vassie et al.[26] indicated that participants rejected the brown masking noise delivered through earphones as it caused irritation and discomfort. Future studies should also consider other types of masking noise and should measure the level and duration of the masking noise.[26] It is well-known that any human action results from brain activity, which is affected by light and sound. Too much acoustic energy causes hearing loss, impaired human nerves, and emotional disorders.[2728] However, sound play also very important role for people. According to previous researchers, not only white noise can create shading effect, but pink noise can improve sleep quality. Most people in the workplace are in the office or in the service industry, not in the noisy working environment for the factory. Creating a quiet environment is sometimes not easy. However, adding some sounds to enhance the work efficiency and reduce the work pressure of low-noise environment should be an issue worth exploring. Therefore, this study wanted to understand the effects of three different colours noise exposure. In the past, there were not many studies in this regard. 
Therefore, the goal of the current study is to test the hypothesis that adding another sound to a low-background-noise environment may increase productivity and comfort.
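The noise colours discussed above can be made concrete with a short, hypothetical NumPy sketch that shapes a flat spectrum by f^(−α/2), consistent with the spectral definitions given later in the methods (white: flat power spectrum; pink: power ∝ 1/f, i.e. −3 dB per octave; red: power ∝ 1/f², i.e. −6 dB per octave). This is only an illustration of those definitions, not the authors' stimulus-generation code; the function name, sampling rate and seed are my own assumptions.

# Hypothetical sketch (not the study's stimulus code): synthesize white, pink
# and red noise by spectral shaping, so that power density follows 1/f^alpha.
import numpy as np

def colour_noise(colour: str, n_samples: int, fs: int = 44100, seed: int = 0) -> np.ndarray:
    """Return a noise signal whose power spectral density is proportional to 1/f^alpha."""
    alpha = {"white": 0.0, "pink": 1.0, "red": 2.0}[colour]
    rng = np.random.default_rng(seed)
    # Start from white Gaussian noise and scale its amplitude spectrum by f^(-alpha/2),
    # since power scales with the square of amplitude.
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                      # avoid dividing by zero at DC
    spectrum *= freqs ** (-alpha / 2.0)
    signal = np.fft.irfft(spectrum, n=n_samples)
    return signal / np.max(np.abs(signal))   # normalize to the range [-1, 1]

if __name__ == "__main__":
    for colour in ("white", "pink", "red"):
        x = colour_noise(colour, n_samples=44100)   # one second of noise
        print(colour, round(float(np.std(x)), 3))

In practice the playback level would still have to be calibrated separately with a sound level meter, as was done in the study, where the colour noise was presented at about 47 dBA over a 44 dBA background.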
null
null
RESULTS
The results of the four tests conducted in this current study are summarized in Table 2. For each simple reaction time test, 10 sets of tests were completed. In the simple reaction time test, the average response time was better in red noise exposure scenario (0.565 seconds) than in the pink noise exposure scenario (0.602 seconds) and in the white noise exposure scenario (0.609 seconds). The variances of red noise exposure scenario (0.005 seconds) and pink noise exposure scenario (0.008 seconds) were lower than the white noise exposure scenario (0.013 seconds) and the quiet scenario (0.023 seconds). From the average of 10 test results, there was a very significant difference between the three noise exposure scenarios, red noise exposure (P < 0.001), pink noise exposure (P < 0.001), white noise exposure (P < 0.001) and the quiet scenario, as shown in Figure 2. Means and variances of SRT, CPT-IP, TMT, TOENST in four different sound field environments Paired t-test analysis of Simple Reaction Time (SRT) in four different sound field exposures. Compared with the average correct rate in the continuous performance test, 93.6% of the pink noise exposure situation is better than 91.7% of the red noise exposure situation and 91.1% of the white noise exposure situation, and the three kinds of noise exposure are better than 88.8% on the quiet situation. Among the 22 participants, half of the respondents achieved 93.3% correct rate on red noise exposure, 93.3% on pink noise exposure, 93.3% on white noise exposure, and they all scored better than 90.0% in the quiet situation. The average correct rate of the test showed that there was a slight improvement over the quiet scenario for all of the red noise scenario, the pink noise scenario, and the white noise scenario. The paired t-test showed significant differences (P < 0.05) between the pink noise exposure scenario and the quiet scenario in the continuous performance test, Figure 3. Paired t-test analysis of continuous performance test-identical pairs (CPT-IP) in four different sound field exposures. In the executive function tests, comparing the average time it takes to complete the test, it shows the red noise exposure scenario (21.37 seconds) is superior to the pink noise exposure scenario (22.31 seconds) and the white noise exposure scenario (23.11 seconds), all of which are better than the quiet scenario (24.54 seconds). The best test results (minimum) showed that the red noise exposure scenario (12.00 seconds), pink noise exposure scenario (10.73 seconds), white noise exposure scenario (13.13 seconds) were also better than the quiet scenario (15.33 seconds). The average finishing time of the tests showed that the red noise exposure situation, the pink noise exposure situation and the white noise exposure situation are better than the quiet situation. As shown on Figure 4, the paired t-test of executive function tests showed significant differences between the red noise exposure scenario (P < 0.001), the pink noise exposure scenario (P < 0.001), white noise exposure scenario (P < 0.001) and the quiet scenario. Paired t-test analysis of Trail Making Test (TMT) in four different sound field exposures. The average score of the working memory test showed that the red noise exposure situation (18.8 points) was superior to the pink noise exposure situation (17.3 points) and the white noise exposure situation (15.9 points), all of which were better than the quiet situation (14.4 points). 
In a comparison of variance, the red noise exposure (5.1 points) and the pink noise exposure (6.9 points) were significantly lower than the white noise exposure (11.2 points) and the quiet exposure (9.3 points). The paired t-test analysis showed that in the working memory test, the red noise exposure (P < 0.001), the pink noise exposure (P < 0.001) and the white noise exposure (P < 0.01) all differed significantly from the quiet situation, as shown in Figure 5. Paired t-test analysis of Taiwan’s Odd-Even Number Sequencing Test (TOENST) in four different sound field exposures. As shown in Table 3, compared with the quiet situation, both the red noise and the pink noise exposures showed significant differences (P < 0.001) in the simple reaction time test, the executive function test and the working memory test. The white noise exposure was very significantly different (P < 0.001) in the simple reaction time test and significantly different (P < 0.01) in the working memory test. However, only pink noise exposure was significantly different (P < 0.05) in the continuous performance test. Paired t-test analysis of three colour noise exposure situations compared with quiet situations. A statistical analysis of the sound field comfort questionnaire showed that the questionnaire had a Cronbach’s alpha score of 0.772; a questionnaire with a Cronbach’s alpha between 0.7 and 0.9 is generally considered to have good reliability. As shown in Table 4, the subjective comfort evaluations of the sound field environment ranged from 0 to −2 (slightly worse) in the quiet scenario, from −1 to 3 (slightly worse to good) in the red noise scenario, and from −1 to 2 (slightly worse to better) in the pink noise scenario, whereas scores from −3 to 1 (worse to slightly better) were recorded for the white noise scenario. The sums of the subjective evaluation scores for the sound field were −40 (average rating −0.26), +81 (average rating 0.53), +38 (average rating 0.25) and −94 (average rating −0.61) for the quiet, red noise, pink noise and white noise scenarios, respectively. Subjective comfort evaluations of four sound environments
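The paired comparisons reported above were run in IBM SPSS. As a rough illustration of the same within-subject analysis, the following Python/SciPy sketch applies a paired-sample t-test to placeholder reaction-time arrays; the data below are simulated and the variable names are my own, so this is only a sketch of the method, not a reproduction of the study's results.

# Illustrative sketch only: the study used paired-sample t-tests in SPSS; the
# equivalent within-subject comparison in SciPy, on simulated placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants = 22

# Hypothetical simple-reaction-time scores (seconds) for the same 22 people
# under the quiet condition and under red-noise exposure.
quiet = rng.normal(loc=0.65, scale=0.15, size=n_participants)
red_noise = quiet - rng.normal(loc=0.08, scale=0.05, size=n_participants)

t_stat, p_value = stats.ttest_rel(quiet, red_noise)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

The reported average comfort ratings also appear to be the summed scores divided by the 154 individual ratings (22 participants × 7 questionnaire items); for example, +81 / 154 ≈ 0.53 for the red noise scenario and −94 / 154 ≈ −0.61 for the white noise scenario.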
CONCLUSION
The results of this study show that, under colour noise exposure, participants performed significantly better in the Psychomotor Speed Test, Continuous Performance Test, Executive Function Test and Working Memory Test than in the quiet condition. Therefore, colour noise may be used in the future to improve the workplace sound field environment, giving white-collar workers a better working environment. Using physical stimuli of red, pink and white noise to improve worker productivity may prove to be a new approach that has fewer side effects than traditional methods such as drinking refreshing beverages. The introduction of these sounds represents a simple, low-cost, non-invasive improvement. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
[ "Participants", "Presentation of noise", "Outcome measures", "Experimental protocol", "Statistical analysis of data", "Financial support and sponsorship" ]
[ "Twenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants.", "During this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax.\nThe signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence.", "Over the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. 
The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. Participants completed them on individual laptops in a laboratory with one of the testers present. The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2.\nThe comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on.", "Participants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. 
During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. During the filling in process, the sound continued. The flowchart in Figure 1 illustrates the experimental protocol.\nFlow chart of the experimental protocol.\nA within-subject design was used in the experiment with one independent variable, i.e. sound condition. Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study.", "The study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS.", "Nil." ]
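The TOENST scoring rule described in the outcome measures above (read out the odd numbers in ascending order, then the even numbers in descending order) can be expressed as a small helper function. This is only a sketch of the rule as stated in the text; the function name is mine and no such code was used in the study.

# Hypothetical helper implementing the TOENST answer rule described above:
# odd numbers in ascending order, then even numbers in descending order.
def toenst_answer(numbers):
    odds = sorted(n for n in numbers if n % 2 == 1)
    evens = sorted((n for n in numbers if n % 2 == 0), reverse=True)
    return odds + evens

# Worked example from the text: 7, 2, 8, 6, 3 -> 3 7 8 6 2
assert toenst_answer([7, 2, 8, 6, 3]) == [3, 7, 8, 6, 2]
print(toenst_answer([7, 2, 8, 6, 3]))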
[ null, null, null, null, null, null ]
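The exposure levels described in the presentation-of-noise text (a 44 dBA background rising to about 47 dBA when a colour noise of equal acoustic energy is added) follow from standard decibel arithmetic, since a 3 dB increase corresponds to a doubling of sound energy. The sketch below, using the levels reported in the methods as assumed inputs, simply reproduces that calculation; the function name is my own.

# Sketch of the decibel arithmetic behind the reported exposure levels: adding
# a colour-noise source with the same energy as the 44 dBA background raises
# the combined level by 10*log10(2) ~= 3 dB, i.e. to roughly 47 dBA.
import math

def combine_levels(levels_db):
    """Combine incoherent sound sources by summing their energies."""
    total_energy = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_energy)

background = 44.0                                     # dBA, without colour noise
combined = combine_levels([background, background])   # equal-energy added source
print(round(combined, 1))                             # ~47.0 dBA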
[ "INTRODUCTION", "SUBJECTS AND METHODS", "Participants", "Presentation of noise", "Outcome measures", "Experimental protocol", "Statistical analysis of data", "RESULTS", "DISCUSSION", "CONCLUSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Exposure to high levels of noise in industry factory is associated with a number of effects on health manifested in various psychosocial responses such as annoyance, disturbance of daily activities, sleep and performance and in physical responses, such as hearing loss, hypertension and ischemic heart disease.[12] While the open-plan office is now widespread in the workplace, it appears that noise is a major nuisance factor in open-plan offices in spite of a relatively low noise level(less than 65 dB(A)).[3] In general, the higher the decibel numbers of sound, the greater the health hazard. However, when the number of sound decibels is not high, the degree of influence on people may vary depending on the sound characteristics.\nIn audio engineering, physics, and many other fields, the colour of noise refers to the power spectrum of a noise signal. The practice of naming kinds of noise after colours started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was assumed to have such a flat power spectrum over the visible range. Other colour names, like pink, red, and blue were then given to noise with other spectral profiles, often (but not always) in reference to the colour of light with similar spectra.\nNoise often has detrimental effects on performance. Under most circumstances, information processing is disturbed by environmental noise and other non-task compatible distracters.[45] However, researchers have recently reported that under certain circumstances individuals with attention problems appear to benefit from the addition of specific forms of environmental noise. Typically, this facilitative effect has been limited to non-vocal background music on simple arithmetic task performance,[6] but Stansfeld et al.[7] found just that under certain conditions even road traffic noise can improve performance on episodic memory tasks, particularly in children at risk of attention problems and academic underachievement. Furthermore, Söderlund et al.[8] have demonstrated that adding auditory white noise to the environment enhanced the memory performance of children with Attention Deficit Hyperactivity Disorder (ADHD)-type problems but disrupted that of non-ADHD control children. Stochastic resonance, a phenomenon whereby signal processing is enhanced by the addition of random noise, has been widely demonstrated across various modalities, including visual, auditory, tactile and cross-modal forms of processing.[910] Of particular interest are findings that auditory noise has the capacity to enhance some aspects of human cognitive performance, such as the speed of arithmetical computations.[11]\n\nMeanwhile, some recent studies showed related evidence. For example, in 2010, light music, an example belonging to pink noise, is proved beneficial for elder people to improve their sleep quality as a long term effect.[12] Moreover, in a previous study,[13] the steady pink noise with intensity of 35 dB, 40 dB, 50 dB and 60 dB was used to stimulate four participants during sleep and they believed that pink noise could improve the sleep quality of participants. However, it is clear that this conclusion was based on significant increase of light sleep period (especially stage 2) accompanied with declined duration of rapid eye movement. 
In the brain synchronization study, it has demonstrated that the pink noise could synchronize brain wave and induce brain activity into a specialized state, that is, in a lower complexity level.[14]\n\nSezici’s study showed that playing of white noise significantly decreased the daily crying durations and increased the sleeping durations of the colicky babies compared to swinging in both groups.[15] In the literature, parents emphasized that playing of white noise was quite beneficial in relieving pain among crying colicky babies.[16] In their studies, Kucukoglu et al.[17] and Karakoc and Turker[18] revealed that the pain scores of the newborns who were made to listen to white noise were observed to decrease compared with those of the newborns in the control group. White noise encompasses all characteristics of sounds within the range of human hearing. It has been used in the treatment of tinnitus, insomnia, masking unwanted sounds, and provision of relaxation.[1920]\n\nZhou et al.[14] demonstrated that steady pink noise has significant effect on reducing brain wave complexity and inducing more stable sleep time to improve sleep quality of individuals. In total, 42 elderly people (21 using music and 21 controls) completed the study, and there was some indication that soft slow music yielder higher improvement on some of the parameters, which are worthy of further investigation.[12] A tabulated list of the effects observed in previous studies with colour noise exposures is shown in Table 1.\nA tabulated list of the effects observed in previous studies with colour noise exposures\nSeveral experiments have demonstrated that the detrimental effect of office noise on worker attitude and performance can be significantly reduced through the use of masking noise.[2122232425] For instance the experimental study performed by Haka[24] demonstrated that the performance of operation span tasks, serial recall and long term memory tasks were all improved when masking noise was used; and the field study performed by Hongisto[23] indicated that masking noise significantly reduced disturbance of worker attitudes towards concentration caused by office noise. Masking noise achieves this improvement through reducing the intelligibility of nearby speech.[25] In addition, Vassie et al.[26] indicated that participants rejected the brown masking noise delivered through earphones as it caused irritation and discomfort. Future studies should also consider other types of masking noise and should measure the level and duration of the masking noise.[26]\n\nIt is well-known that any human action results from brain activity, which is affected by light and sound. Too much acoustic energy causes hearing loss, impaired human nerves, and emotional disorders.[2728] However, sound play also very important role for people. According to previous researchers, not only white noise can create shading effect, but pink noise can improve sleep quality. Most people in the workplace are in the office or in the service industry, not in the noisy working environment for the factory. Creating a quiet environment is sometimes not easy. However, adding some sounds to enhance the work efficiency and reduce the work pressure of low-noise environment should be an issue worth exploring. Therefore, this study wanted to understand the effects of three different colours noise exposure. In the past, there were not many studies in this regard. 
Therefore, the goal of the current study is to test the hypothesis that adding another sound in a low background noise environment may increases productivity and comfort.", " Participants Twenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants.\nTwenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants.\n Presentation of noise During this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax.\nThe signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. 
Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence.\nDuring this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax.\nThe signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence.\n Outcome measures Over the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. Participants completed them on individual laptops in a laboratory with one of the testers present. The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. 
The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2.\nThe comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on.\nOver the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. 
Participants completed them on individual laptops in a laboratory with one of the testers present. The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2.\nThe comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on.\n Experimental protocol Participants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. 
During the filling in process, the sound continued. The flowchart in Figure 1 illustrates the experimental protocol.\nFlow chart of the experimental protocol.\nA within-subject design was used in the experiment with one independent variable, i.e. sound condition. Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study.\nParticipants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. During the filling in process, the sound continued. The flowchart in Figure 1 illustrates the experimental protocol.\nFlow chart of the experimental protocol.\nA within-subject design was used in the experiment with one independent variable, i.e. sound condition. Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study.\n Statistical analysis of data The study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS.\nThe study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS.", "Twenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. 
A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants.", "During this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax.\nThe signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence.", "Over the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. 
Participants completed them on individual laptops in a laboratory with one of the testers present. The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2.\nThe comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on.", "Participants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. 
During the filling in process, the sound continued. The flowchart in Figure 1 illustrates the experimental protocol.\nFlow chart of the experimental protocol.\nA within-subject design was used in the experiment with one independent variable, i.e. sound condition. Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study.", "The study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS.", "The results of the four tests conducted in this current study are summarized in Table 2. For each simple reaction time test, 10 sets of tests were completed. In the simple reaction time test, the average response time was better in red noise exposure scenario (0.565 seconds) than in the pink noise exposure scenario (0.602 seconds) and in the white noise exposure scenario (0.609 seconds). The variances of red noise exposure scenario (0.005 seconds) and pink noise exposure scenario (0.008 seconds) were lower than the white noise exposure scenario (0.013 seconds) and the quiet scenario (0.023 seconds). From the average of 10 test results, there was a very significant difference between the three noise exposure scenarios, red noise exposure (P < 0.001), pink noise exposure (P < 0.001), white noise exposure (P < 0.001) and the quiet scenario, as shown in Figure 2.\nMeans and variances of SRT, CPT-IP, TMT, TOENST in four different sound field environments\nPaired t-test analysis of Simple Reaction Time (SRT) in four different sound field exposures.\nCompared with the average correct rate in the continuous performance test, 93.6% of the pink noise exposure situation is better than 91.7% of the red noise exposure situation and 91.1% of the white noise exposure situation, and the three kinds of noise exposure are better than 88.8% on the quiet situation. Among the 22 participants, half of the respondents achieved 93.3% correct rate on red noise exposure, 93.3% on pink noise exposure, 93.3% on white noise exposure, and they all scored better than 90.0% in the quiet situation. The average correct rate of the test showed that there was a slight improvement over the quiet scenario for all of the red noise scenario, the pink noise scenario, and the white noise scenario. The paired t-test showed significant differences (P < 0.05) between the pink noise exposure scenario and the quiet scenario in the continuous performance test, Figure 3.\nPaired t-test analysis of continuous performance test-identical pairs (CPT-IP) in four different sound field exposures.\nIn the executive function tests, comparing the average time it takes to complete the test, it shows the red noise exposure scenario (21.37 seconds) is superior to the pink noise exposure scenario (22.31 seconds) and the white noise exposure scenario (23.11 seconds), all of which are better than the quiet scenario (24.54 seconds). 
The best test results (minimum) showed that the red noise exposure scenario (12.00 seconds), pink noise exposure scenario (10.73 seconds), white noise exposure scenario (13.13 seconds) were also better than the quiet scenario (15.33 seconds). The average finishing time of the tests showed that the red noise exposure situation, the pink noise exposure situation and the white noise exposure situation are better than the quiet situation. As shown on Figure 4, the paired t-test of executive function tests showed significant differences between the red noise exposure scenario (P < 0.001), the pink noise exposure scenario (P < 0.001), white noise exposure scenario (P < 0.001) and the quiet scenario.\nPaired t-test analysis of Trail Making Test (TMT) in four different sound field exposures.\nThe average score of the working memory test showed that the red noise exposure situation (18.8 points) was superior to the pink noise exposure situation (17.3 points) and the white noise exposure situation (15.9 points), all of which were better than the quiet situation (14.4 points). In a comparison of variance, the red noise exposure (5.1 points) and the pink noise exposure (6.9 points) were significantly lower than the white noise exposure (11.2 points) and noiseless exposure (9.3 points). The paired t-test analysis showed that in the working memory test, the red noise exposure (P < 0.001), the pink noise exposure (P < 0.001), the white noise exposure (P < 0.01), the three noise exposure all had a significant difference as compared with the quiet situation, as shown on Figure 5.\nPaired t-test analysis of Taiwan’s Odd-Even Number Sequencing Test (TOENST) in four different sound field exposures.\nAs shown in Table 3, compared with the quiet situation, both the red noise and the pink noise exposure in the simple reaction time test, the executive function test and the working memory test all showed significant differences (P < 0.001). The white noise exposure was very significantly different (P < 0.001) in the simple reaction time test, and there was a significant difference (P < 0.01) in the working memory test. However, only pink noise exposure was significantly different (P < 0.05) in the continuous performance test.\nPaired t-test analysis of three colour noise exposure situations compared with quiet situations\nA statistical analysis of the sound field comfort questionnaire showed that, the questionnaire had a Cronbach’s alpha score of 0.772. Generally, a questionnaire with a Cronbach’s alpha score between 0.7 and 0.9 is considered to be of good reliability. As shown in Table 4, the subjective comfort evaluation statistics of the sound field environment indicated a highest and lowest score of 0 and −2, a little worse in the quiet scenario, while the scores between −1 and 3 were slightly worse to good in the red noise scenario; the scores between −1 and 2 were slightly worse to better in pink noise scenario; however, the scores between −3 and 1 represented a worse to slightly better result for the white noise scenario. 
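The reliability figure quoted above (Cronbach's alpha = 0.772) can be computed from a participants-by-items score matrix as sketched below. The random matrix here is only a placeholder, so its alpha value is not meaningful; this is not the output of the study's SPSS analysis.

```python
# Minimal sketch of Cronbach's alpha for a participants x items score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = participants, columns = questionnaire items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(-3, 4, size=(22, 7))          # 22 participants, 7 items, scores -3..+3
print(round(cronbach_alpha(demo), 3))             # random data: value is not meaningful
```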
The sum of the scores of the subjective evaluation in the sound field were −40 (average rating −0.26), +81 (average rating 0.53), +38 (average rating 0.25), −94 (average rating −0.61) for quiet, red noise, pink noise, and white noise scenarios, respectively.\nSubjective comfort evaluations of four sound environments", "Of the four trials, there were three significant (P < 0.001) increases in scores for red noise exposure and pink noise exposure, possibly reflecting improvements in work efficiency. Literature explains that white noise exposure can serve as a masking sound, and this may improve work efficiency. Two tests showed a significant (P < 0.001) increase in performance, with a significant (P < 0.01) increase in one trial also corroborating previous studies. In brain synchronization studies, pink noise exposure has been shown to synchronize the brain and induce brain activity to lower its potential,[13] and to exclude the effects of the outside world for purposes of concentrated processing. Except for the slight significant difference (P < 0.05) in white noise exposure, the continuous performance test was the only one that did not show very significant results. Participants may all have been preoccupied with the test, so the average correct rates are above 90%. Even the correct rates of two participants reached 100% when exposed to all four sound situations, and there were three participants with a 100% correct rate when exposed to three scenarios.\nIn the continuous performance test, the correct rate of non-exposure was about 88.8%, which of red noise, pink noise, and white noise were 91.7%, 93.6%, and 91.1%, respectively. Half of the 22 participants achieved a correct rate of 93.3% under the conditions of red noise exposure, pink noise exposure and white noise exposure, all of which were higher than the 88.8% correct rate of no noise exposure. The average score of the test results showed that a red noise exposure scenario, pink noise exposure scenario, and white noise exposure scenario are slightly better than the quiet scenario, which means that with the exposure to colour noise, the correct rate will rise and attention will be slightly improved.\nBecause the exposure of the participant’s living environment to the noise itself, might affect the questionnaire data, the type of accommodation should be indicated on the comfort questionnaire, and whether or not there is any noise exposure should be indicated in the closed questionnaire. In this study, there were three home participants exposed to noise, there were three home participants who sometimes had noise exposure. Of these six participants, five of the participants had different answers from the others, indicating a larger numerical increase in white noise.\nThe scores of the questionnaire results in the comfort of colour noises environment did not fully match the test data. Participants were randomly interviewed after the experiment. They said that with the additional sound increases, they felt that the sound interference, and the selection value should be less favourable. The laboratory background sound level was 44 dBA in the quiet scenario. The participants experienced significant changes to sound of 47 dBA when exposed to colour noise. Moreover, because the experimental exposure scenario was determined in a random manner, it was difficult to remember which previous test scores were selected. 
Scenario comfort evaluation is felt on a subjective basis, but participants may have had preconceptions when completing the questionnaire, thus affecting the final score. Although the results of the environmental comfort questionnaire were not as expected, the overall percentage of participants who felt slightly better or above were 48.8%, 32%, 9% for participants exposed to red, pink, white noise, respectively.\nThe results of this study show that exposure to the three kinds of colour noises, red, pink and white noise yields significantly better results in the Psychomotor Speed Test, Continuous Performance Test, Executive Function Test and Working Memory Test, which proves that these three kinds of sound field environments on the office work-type job site might improve work efficiency. The sum of the questionnaire scores on subjective comfort of the sound field environment showed that red, pink, and white noise exposure, were better than the original no noise exposure of the sound field, and this result implies that the three colour noise sound fields seem to make the white-collar work patterns a better workplaces not only by improving productivity, but also by making workers feel comfortable. Because white noise is equal energy at each frequency, it can cover the sounds that do not want to be heard around the environment. Pink noise might be due to neurons in the hypothalamus synchronously resonating with pink noises in the low oscillation frequency band, making slow rhythms and dominating brain waves. In addition, the power density of high frequency in red noise is even lower than these in pink noise and may therefore affect neurons in the hypothalamus even more, leading to information more speedily reaching the cerebral cortex through the hypothalamus and to faster transmission of motor messages through the hypothalamus.\nThe concept of the open-plan office is now widespread in the workplace. While offering many advantages in terms of layout and facilitating communication between colleagues, this way of organising the workspace had two major disadvantages. The feeling of privacy is lessened and noise level is increased causing discomfort for individuals working in this type of environment. It appears that noise is a major nuisance factor in open-plan offices in spite of a relatively low noise level. The noise nuisance experienced in open-plan offices affects work satisfaction, and this exposure to noise can reduce employee performance depending on the types of task to be carried out and the characteristics of the noise present in the workplace. The current research suggests that creating a colour noise environment through a loudspeaker or headphone might improve some aspects of cognitive performance in different workplaces. Many studies in the past have proved that noise has a negative influence on human hearing, cardiovascular systems, and emotions and so on. However, the three colour noises are applied to the original sound field environment. Therefore, if the background noise of the workplace is already too high, it would not be in order, for health reasons, to further add colour noise.", "The results of this study show that participants perform significantly better in Psychomotor Speed Test, Continuous Performance Test, Executive Function Test and Working Memory Test. Therefore, colour noise may be used in the future to improve a workplace sound field environment, so that white-collar workers have a better working environment. 
Using physical stimuli of red, pink and white noise to improve worker productivity may prove to be a new approach, one that may have fewer side effects than traditional methods such as drinking refreshing beverages. The introduction of these sounds represents a simple, low-cost, non-invasive improvement.\n Financial support and sponsorship Nil.\nNil.\n Conflicts of interest There are no conflicts of interest.\nThere are no conflicts of interest.", "Nil.", "There are no conflicts of interest." ]
[ "intro", "subjects|methods", null, null, null, null, null, "results", "discussion", "conclusion", null, "COI-statement" ]
[ "Human performance", "pink noise", "red noise", "subjective comfort", "white noise" ]
INTRODUCTION: Exposure to high levels of noise in industry factory is associated with a number of effects on health manifested in various psychosocial responses such as annoyance, disturbance of daily activities, sleep and performance and in physical responses, such as hearing loss, hypertension and ischemic heart disease.[12] While the open-plan office is now widespread in the workplace, it appears that noise is a major nuisance factor in open-plan offices in spite of a relatively low noise level(less than 65 dB(A)).[3] In general, the higher the decibel numbers of sound, the greater the health hazard. However, when the number of sound decibels is not high, the degree of influence on people may vary depending on the sound characteristics. In audio engineering, physics, and many other fields, the colour of noise refers to the power spectrum of a noise signal. The practice of naming kinds of noise after colours started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was assumed to have such a flat power spectrum over the visible range. Other colour names, like pink, red, and blue were then given to noise with other spectral profiles, often (but not always) in reference to the colour of light with similar spectra. Noise often has detrimental effects on performance. Under most circumstances, information processing is disturbed by environmental noise and other non-task compatible distracters.[45] However, researchers have recently reported that under certain circumstances individuals with attention problems appear to benefit from the addition of specific forms of environmental noise. Typically, this facilitative effect has been limited to non-vocal background music on simple arithmetic task performance,[6] but Stansfeld et al.[7] found just that under certain conditions even road traffic noise can improve performance on episodic memory tasks, particularly in children at risk of attention problems and academic underachievement. Furthermore, Söderlund et al.[8] have demonstrated that adding auditory white noise to the environment enhanced the memory performance of children with Attention Deficit Hyperactivity Disorder (ADHD)-type problems but disrupted that of non-ADHD control children. Stochastic resonance, a phenomenon whereby signal processing is enhanced by the addition of random noise, has been widely demonstrated across various modalities, including visual, auditory, tactile and cross-modal forms of processing.[910] Of particular interest are findings that auditory noise has the capacity to enhance some aspects of human cognitive performance, such as the speed of arithmetical computations.[11] Meanwhile, some recent studies showed related evidence. For example, in 2010, light music, an example belonging to pink noise, is proved beneficial for elder people to improve their sleep quality as a long term effect.[12] Moreover, in a previous study,[13] the steady pink noise with intensity of 35 dB, 40 dB, 50 dB and 60 dB was used to stimulate four participants during sleep and they believed that pink noise could improve the sleep quality of participants. However, it is clear that this conclusion was based on significant increase of light sleep period (especially stage 2) accompanied with declined duration of rapid eye movement. 
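The stochastic resonance phenomenon mentioned above can be illustrated with a toy threshold detector: a sub-threshold signal is missed in silence, tracked best at a moderate noise level, and swamped when the noise is too strong. The parameters below are arbitrary and the sketch is only a conceptual demonstration, not a model used in any of the cited studies.

```python
# Toy illustration of stochastic resonance: a sub-threshold sine wave is missed
# by a hard threshold detector, but adding a moderate amount of random noise
# makes the detector's output correlate with the signal.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 2000)
signal = 0.4 * np.sin(t)          # amplitude below the detection threshold of 1.0
threshold = 1.0

def detector_correlation(noise_sd):
    noisy = signal + rng.normal(0.0, noise_sd, size=t.size)
    detected = (noisy > threshold).astype(float)   # 1 when the threshold is crossed
    if detected.std() == 0:                        # nothing detected at all
        return 0.0
    return float(np.corrcoef(signal, detected)[0, 1])

for sd in (0.0, 0.3, 0.6, 1.2, 3.0):
    print(f"noise sd = {sd:.1f}  signal/detector correlation = {detector_correlation(sd):.2f}")
```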
In the brain synchronization study, it has demonstrated that the pink noise could synchronize brain wave and induce brain activity into a specialized state, that is, in a lower complexity level.[14] Sezici’s study showed that playing of white noise significantly decreased the daily crying durations and increased the sleeping durations of the colicky babies compared to swinging in both groups.[15] In the literature, parents emphasized that playing of white noise was quite beneficial in relieving pain among crying colicky babies.[16] In their studies, Kucukoglu et al.[17] and Karakoc and Turker[18] revealed that the pain scores of the newborns who were made to listen to white noise were observed to decrease compared with those of the newborns in the control group. White noise encompasses all characteristics of sounds within the range of human hearing. It has been used in the treatment of tinnitus, insomnia, masking unwanted sounds, and provision of relaxation.[1920] Zhou et al.[14] demonstrated that steady pink noise has significant effect on reducing brain wave complexity and inducing more stable sleep time to improve sleep quality of individuals. In total, 42 elderly people (21 using music and 21 controls) completed the study, and there was some indication that soft slow music yielder higher improvement on some of the parameters, which are worthy of further investigation.[12] A tabulated list of the effects observed in previous studies with colour noise exposures is shown in Table 1. A tabulated list of the effects observed in previous studies with colour noise exposures Several experiments have demonstrated that the detrimental effect of office noise on worker attitude and performance can be significantly reduced through the use of masking noise.[2122232425] For instance the experimental study performed by Haka[24] demonstrated that the performance of operation span tasks, serial recall and long term memory tasks were all improved when masking noise was used; and the field study performed by Hongisto[23] indicated that masking noise significantly reduced disturbance of worker attitudes towards concentration caused by office noise. Masking noise achieves this improvement through reducing the intelligibility of nearby speech.[25] In addition, Vassie et al.[26] indicated that participants rejected the brown masking noise delivered through earphones as it caused irritation and discomfort. Future studies should also consider other types of masking noise and should measure the level and duration of the masking noise.[26] It is well-known that any human action results from brain activity, which is affected by light and sound. Too much acoustic energy causes hearing loss, impaired human nerves, and emotional disorders.[2728] However, sound play also very important role for people. According to previous researchers, not only white noise can create shading effect, but pink noise can improve sleep quality. Most people in the workplace are in the office or in the service industry, not in the noisy working environment for the factory. Creating a quiet environment is sometimes not easy. However, adding some sounds to enhance the work efficiency and reduce the work pressure of low-noise environment should be an issue worth exploring. Therefore, this study wanted to understand the effects of three different colours noise exposure. In the past, there were not many studies in this regard. 
Therefore, the goal of the current study is to test the hypothesis that adding another sound in a low background noise environment may increases productivity and comfort. SUBJECTS AND METHODS: Participants Twenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants. Twenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants. Presentation of noise During this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax. The signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence. 
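The spectral definitions above (power density proportional to 1/f⁰ for white noise, 1/f for pink noise and 1/f² for red noise) can be approximated by shaping the spectrum of Gaussian white noise, as in the sketch below. This is an illustrative generator only, not the stimulus software used in the study, and its output would still need level calibration against a sound level meter (44 dBA background, about 47 dBA with a colour noise added). Since sound level is 10·log₁₀ of the power ratio, that 3 dB step corresponds to a doubling of sound energy, as stated above.

```python
# Generate white, pink and red (Brownian) colour noise by shaping the amplitude
# spectrum of Gaussian white noise: power density proportional to 1/f^0, 1/f and
# 1/f^2, i.e. amplitude proportional to f^0, f^-0.5 and f^-1. Illustrative only.
import numpy as np

def colour_noise(n_samples, exponent, fs=44100, rng=None):
    """exponent: 0 -> white, 1 -> pink (-3 dB/octave), 2 -> red (-6 dB/octave)."""
    rng = rng or np.random.default_rng()
    white = rng.normal(size=n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                       # avoid division by zero at DC
    spectrum *= freqs ** (-exponent / 2.0)    # amplitude scaling = sqrt of power scaling
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.max(np.abs(noise))      # normalize to +/-1 for playback

white = colour_noise(44100, 0)
pink = colour_noise(44100, 1)
red = colour_noise(44100, 2)
```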
During this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax. The signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence. Outcome measures Over the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. Participants completed them on individual laptops in a laboratory with one of the testers present. The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. 
It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2. The comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on. Over the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. Participants completed them on individual laptops in a laboratory with one of the testers present. The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. 
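As a rough illustration of the simple reaction time paradigm just described (a known stimulus presented after a random foreperiod, with the response latency recorded), the console sketch below measures keyboard reaction times. It is a toy demonstration only, not the laptop test battery used in the study, and keyboard and terminal latency would add noise to real measurements.

```python
# Toy console version of a Simple Reaction Time (SRT) trial.
import random
import time

def srt_trial():
    input("Press Enter to start the trial, then wait for the prompt...")
    time.sleep(random.uniform(1.0, 3.0))      # random foreperiod before the stimulus
    t0 = time.monotonic()
    input(">>> PRESS ENTER NOW <<<")          # the known stimulus
    return time.monotonic() - t0              # response latency in seconds

if __name__ == "__main__":
    times = [srt_trial() for _ in range(3)]
    print(f"mean reaction time: {sum(times) / len(times):.3f} s")
```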
The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2. The comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on. Experimental protocol Participants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. During the filling in process, the sound continued. The flowchart in Figure 1 illustrates the experimental protocol. Flow chart of the experimental protocol. A within-subject design was used in the experiment with one independent variable, i.e. sound condition. 
Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study. Participants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. During the filling in process, the sound continued. The flowchart in Figure 1 illustrates the experimental protocol. Flow chart of the experimental protocol. A within-subject design was used in the experiment with one independent variable, i.e. sound condition. Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study. Statistical analysis of data The study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS. The study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS. Participants: Twenty-two healthy college students were enrolled in this study. The mean age of the participants was 22 years. All participants reported no medical history of neurological disease, hypertension or heart disease. A machine called an audiometer was used to produce sounds at various volumes and frequencies. The participants being tested listened to sounds through headphones and responded when they hear them by pressing a button. 
A pure tone audiogram was performed by a trained health care professional at the beginning of the study for each participant, and the hearing acuity of the participants was determined to be normal, with the thresholds of 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz being less than 15 dB. The participants received financial compensation for their participation in the experiment. Informed consent was obtained from all participants. Presentation of noise: During this experiment, four sound conditions were used, relating to the three types of masking sound (red noise, pink noise, white noise) and a condition, which consisted of a wide band background noise. White noise is a signal with a flat frequency spectrum when plotted as a linear function of frequency. Red noise will refer to a power density which decreases 6 dB per octave with increasing frequency (density proportional to 1/f2). The frequency spectrum of pink noise is linear in logarithmic scale, and the spectral power density, compared with white noise, decreases by 3 dB per octave (density proportional to 1/f). The noise level within the room used in the study was measured using a Cirrus Type 832C sound level meter. Thirty second SPL samples, LAeq,30s, were measured over a period of an hour. Noise levels ranged from an Lmin of 41 dBA to an Lmax of 50 dBA. Measurements performed on successive days identified similar ranges of Lmin and Lmax. The signal source of masking sound was sent through a desktop computer to a Bluetooth speaker (BOSE SoundLink Mini). The sound level of each colour noise exposure was about 47 dBA when turning on the speaker and the background noise (without adding colour noise) was 44 dBA. Background noise had a decibel (dBA) value of three decibels (dBA) difference compared to the noise exposure scenario of three different colours, so that means that the sound energy of the colour noise exposure scenario was double that of the original silence. Outcome measures: Over the years, different scholars have given their definitions of efficiency, and different scholars also have different views and explanations for it. Therefore, it is not easy to offer a precise definition of work efficiency. Among many offerings, the definition of work efficiency includes “work response time”, the time when the work instruction is received plus the response time, and “work execution processing time” the time it takes for each task from the reaction to the completion. “Working memory” refers to the processing of tasks for the content of the current short-term memories. Cognitive function is concept that covers several functions of the brain such as attention, executive function, processing speed, learning and memory. Information processing and response speed are basic cognitive function, which is needed for more complex functions such as working memory. We measured cognitive performance via a battery of standardized cognitive tests. If improvements were observed in cognitive performance, this would not only have important benefits for employees, but benefits for the organization in terms of the potential for increased productivity. The main outcome measurers for this study were those of four laboratory tests: Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. These tests were presented in a random order. Participants completed them on individual laptops in a laboratory with one of the testers present. 
The Psychomotor Speed Test measures Simple Reaction Time (SRT), general alertness and motor speed through delivery of a known stimulus to a known location to elicit a known response. The Continuous Performance Test − Identical Pairs (CPT-IP) requires the identification of identical stimulus pairs within a continuously presented series of stimuli. The Trail Making Test (TMT) is a widely used test to assess executive function, and it is a neuropsychological test of visual attention and task switching. It consists of two parts in which the participant is instructed to connect a set of 25 dots as quickly as possible while still maintaining accuracy. The test can provide information about visual search speed, scanning, speed of processing, mental flexibility, as well as executive functioning. Currently, the Wechsler Adult Intelligence Scale (WAIS) is the most commonly used instrument in the armamentarium of clinical neuropsychologists.[2930] This test involves letter sequences and tests one’s ability to think logically and analytically. However, the Wechsler Adult Intelligence Scale-III and IV Letter Number Sequencing are not appropriate for non-alphabetic cultures. The Taiwan’s Odd-Even Number Sequencing Test (TOENST), as proposed by the Department of Psychology, National Taiwan University,[31] was adopted to replace the Letter Number Test. Participants need to rearrange and read the Arabic numerals in a series. The way to read out the series is: “read out the odd numbers in ascending order and then the even numbers in descending order.” For example, the random numbers 7, 2, 8, 6, 3 is incremented by an odd number (3, 7) and decrements by an even number (8, 6, 2), so the answer is 3 7 8 6 2. The comfort of the sound field environment was evaluated by a closed questionnaire, and the subjective comfort of the participant’s sound field environment was measured by the semantic differential method. There were mainly seven closed questionnaires in the questionnaire design, with “no feeling” in the middle (0 point). From middle to the left indicates that the ability deteriorates and worsens. Each left one grid is −1, the left two grids are −2, and so on. The middle to the right means that the ability to become better, every right one grid is +1, the right two grids is +2, and so on. Experimental protocol: Participants were tested in the same laboratory. Subject to the participants’ available time, the experiment was conducted under conditions of quiet (without noise), red noise, pink noise and white noise exposure. Before each experiment, the participants adopted a random method to determine the sound field exposure situation and the participants were not informed of the kind of sound scenario. The order of the four sound conditions (quiet, red, pink, white) was randomized. Ten minutes before the experiment was started, the participants were physiologically and psychologically adapted to the laboratory environment before they started the experiment. Each participant had to perform four tests and complete questionnaires in each of the four sessions in the experiment, and the order of the four tests was randomized. There was a five-minute break between each test. During the break, the sound exposure was on and the sound exposure was not terminated until all sound field exposure conditions had ended. After finishing four tests, participants filled in the sound field comfort evaluation questionnaire. During the filling in process, the sound continued. 
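The seven-item semantic differential scale described above (grid positions coded −3 to +3, with "no feeling" scored 0) could be tallied as in the sketch below. The item labels are hypothetical, since the questionnaire wording is not reproduced in the text; only judgment, implementation and overall environment are dimensions mentioned elsewhere in the paper.

```python
# Sketch of coding one participant's seven semantic differential responses for a
# single sound condition: left of centre -> -1, -2, -3; right of centre -> +1, +2, +3.
# Item labels are hypothetical placeholders.
responses = {
    "judgment": +1,
    "implementation": +2,
    "concentration": 0,
    "relaxation": -1,
    "overall_environment": +1,
    "willingness_to_stay": 0,
    "perceived_quietness": -2,
}

assert all(-3 <= v <= 3 for v in responses.values())
participant_total = sum(responses.values())
print(f"participant total for this sound condition: {participant_total:+d}")
```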
The flowchart in Figure 1 illustrates the experimental protocol. Flow chart of the experimental protocol. A within-subject design was used in the experiment with one independent variable, i.e. sound condition. Five dependent variables, including four objective parameters (psychomotor speed test, continuous performance test, executive function test, and working memory test) and one subjective perception, were collected to explore work efficiency in this study. Statistical analysis of data: The study was interested in evaluating the effectiveness of colour noise exposure. The approach considered was to measure the performance of a sample of participants before and after colour noise exposure, and analyse the differences using a paired sample t-test. IBM SPSS Statistics Version 20 was used to determine whether there were any statistically significant differences between the means of independent groups. Data from the completed questionnaires were stored in a database and transported in and subsequently analysed using the statistical program SPSS. RESULTS: The results of the four tests conducted in this current study are summarized in Table 2. For each simple reaction time test, 10 sets of tests were completed. In the simple reaction time test, the average response time was better in red noise exposure scenario (0.565 seconds) than in the pink noise exposure scenario (0.602 seconds) and in the white noise exposure scenario (0.609 seconds). The variances of red noise exposure scenario (0.005 seconds) and pink noise exposure scenario (0.008 seconds) were lower than the white noise exposure scenario (0.013 seconds) and the quiet scenario (0.023 seconds). From the average of 10 test results, there was a very significant difference between the three noise exposure scenarios, red noise exposure (P < 0.001), pink noise exposure (P < 0.001), white noise exposure (P < 0.001) and the quiet scenario, as shown in Figure 2. Means and variances of SRT, CPT-IP, TMT, TOENST in four different sound field environments Paired t-test analysis of Simple Reaction Time (SRT) in four different sound field exposures. Compared with the average correct rate in the continuous performance test, 93.6% of the pink noise exposure situation is better than 91.7% of the red noise exposure situation and 91.1% of the white noise exposure situation, and the three kinds of noise exposure are better than 88.8% on the quiet situation. Among the 22 participants, half of the respondents achieved 93.3% correct rate on red noise exposure, 93.3% on pink noise exposure, 93.3% on white noise exposure, and they all scored better than 90.0% in the quiet situation. The average correct rate of the test showed that there was a slight improvement over the quiet scenario for all of the red noise scenario, the pink noise scenario, and the white noise scenario. The paired t-test showed significant differences (P < 0.05) between the pink noise exposure scenario and the quiet scenario in the continuous performance test, Figure 3. Paired t-test analysis of continuous performance test-identical pairs (CPT-IP) in four different sound field exposures. In the executive function tests, comparing the average time it takes to complete the test, it shows the red noise exposure scenario (21.37 seconds) is superior to the pink noise exposure scenario (22.31 seconds) and the white noise exposure scenario (23.11 seconds), all of which are better than the quiet scenario (24.54 seconds). 
The best test results (minimum) showed that the red noise exposure scenario (12.00 seconds), pink noise exposure scenario (10.73 seconds), white noise exposure scenario (13.13 seconds) were also better than the quiet scenario (15.33 seconds). The average finishing time of the tests showed that the red noise exposure situation, the pink noise exposure situation and the white noise exposure situation are better than the quiet situation. As shown on Figure 4, the paired t-test of executive function tests showed significant differences between the red noise exposure scenario (P < 0.001), the pink noise exposure scenario (P < 0.001), white noise exposure scenario (P < 0.001) and the quiet scenario. Paired t-test analysis of Trail Making Test (TMT) in four different sound field exposures. The average score of the working memory test showed that the red noise exposure situation (18.8 points) was superior to the pink noise exposure situation (17.3 points) and the white noise exposure situation (15.9 points), all of which were better than the quiet situation (14.4 points). In a comparison of variance, the red noise exposure (5.1 points) and the pink noise exposure (6.9 points) were significantly lower than the white noise exposure (11.2 points) and noiseless exposure (9.3 points). The paired t-test analysis showed that in the working memory test, the red noise exposure (P < 0.001), the pink noise exposure (P < 0.001), the white noise exposure (P < 0.01), the three noise exposure all had a significant difference as compared with the quiet situation, as shown on Figure 5. Paired t-test analysis of Taiwan’s Odd-Even Number Sequencing Test (TOENST) in four different sound field exposures. As shown in Table 3, compared with the quiet situation, both the red noise and the pink noise exposure in the simple reaction time test, the executive function test and the working memory test all showed significant differences (P < 0.001). The white noise exposure was very significantly different (P < 0.001) in the simple reaction time test, and there was a significant difference (P < 0.01) in the working memory test. However, only pink noise exposure was significantly different (P < 0.05) in the continuous performance test. Paired t-test analysis of three colour noise exposure situations compared with quiet situations A statistical analysis of the sound field comfort questionnaire showed that, the questionnaire had a Cronbach’s alpha score of 0.772. Generally, a questionnaire with a Cronbach’s alpha score between 0.7 and 0.9 is considered to be of good reliability. As shown in Table 4, the subjective comfort evaluation statistics of the sound field environment indicated a highest and lowest score of 0 and −2, a little worse in the quiet scenario, while the scores between −1 and 3 were slightly worse to good in the red noise scenario; the scores between −1 and 2 were slightly worse to better in pink noise scenario; however, the scores between −3 and 1 represented a worse to slightly better result for the white noise scenario. The sum of the scores of the subjective evaluation in the sound field were −40 (average rating −0.26), +81 (average rating 0.53), +38 (average rating 0.25), −94 (average rating −0.61) for quiet, red noise, pink noise, and white noise scenarios, respectively. 
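The average ratings reported above appear to be each condition's summed score divided by the 154 individual responses (22 participants × 7 items); the quick check below reproduces −0.26, +0.53, +0.25 and −0.61. This division is our inference from the reported numbers rather than a calculation stated explicitly in the text.

```python
# Reproduce the reported average comfort ratings from the summed scores.
sums = {"quiet": -40, "red": +81, "pink": +38, "white": -94}
n_responses = 22 * 7   # 22 participants x 7 questionnaire items
for condition, total in sums.items():
    print(f"{condition:>5}: {total:+4d} / {n_responses} = {total / n_responses:+.2f}")
```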
Subjective comfort evaluations of four sound environments DISCUSSION: Of the four trials, there were three significant (P < 0.001) increases in scores for red noise exposure and pink noise exposure, possibly reflecting improvements in work efficiency. Literature explains that white noise exposure can serve as a masking sound, and this may improve work efficiency. Two tests showed a significant (P < 0.001) increase in performance, with a significant (P < 0.01) increase in one trial also corroborating previous studies. In brain synchronization studies, pink noise exposure has been shown to synchronize the brain and induce brain activity to lower its potential,[13] and to exclude the effects of the outside world for purposes of concentrated processing. Except for the slight significant difference (P < 0.05) in white noise exposure, the continuous performance test was the only one that did not show very significant results. Participants may all have been preoccupied with the test, so the average correct rates are above 90%. Even the correct rates of two participants reached 100% when exposed to all four sound situations, and there were three participants with a 100% correct rate when exposed to three scenarios. In the continuous performance test, the correct rate of non-exposure was about 88.8%, which of red noise, pink noise, and white noise were 91.7%, 93.6%, and 91.1%, respectively. Half of the 22 participants achieved a correct rate of 93.3% under the conditions of red noise exposure, pink noise exposure and white noise exposure, all of which were higher than the 88.8% correct rate of no noise exposure. The average score of the test results showed that a red noise exposure scenario, pink noise exposure scenario, and white noise exposure scenario are slightly better than the quiet scenario, which means that with the exposure to colour noise, the correct rate will rise and attention will be slightly improved. Because the exposure of the participant’s living environment to the noise itself, might affect the questionnaire data, the type of accommodation should be indicated on the comfort questionnaire, and whether or not there is any noise exposure should be indicated in the closed questionnaire. In this study, there were three home participants exposed to noise, there were three home participants who sometimes had noise exposure. Of these six participants, five of the participants had different answers from the others, indicating a larger numerical increase in white noise. The scores of the questionnaire results in the comfort of colour noises environment did not fully match the test data. Participants were randomly interviewed after the experiment. They said that with the additional sound increases, they felt that the sound interference, and the selection value should be less favourable. The laboratory background sound level was 44 dBA in the quiet scenario. The participants experienced significant changes to sound of 47 dBA when exposed to colour noise. Moreover, because the experimental exposure scenario was determined in a random manner, it was difficult to remember which previous test scores were selected. Scenario comfort evaluation is felt on a subjective basis, but participants may have had preconceptions when completing the questionnaire, thus affecting the final score. 
Although the results of the environmental comfort questionnaire were not as expected, the overall percentage of participants who felt slightly better or above were 48.8%, 32%, 9% for participants exposed to red, pink, white noise, respectively. The results of this study show that exposure to the three kinds of colour noises, red, pink and white noise yields significantly better results in the Psychomotor Speed Test, Continuous Performance Test, Executive Function Test and Working Memory Test, which proves that these three kinds of sound field environments on the office work-type job site might improve work efficiency. The sum of the questionnaire scores on subjective comfort of the sound field environment showed that red, pink, and white noise exposure, were better than the original no noise exposure of the sound field, and this result implies that the three colour noise sound fields seem to make the white-collar work patterns a better workplaces not only by improving productivity, but also by making workers feel comfortable. Because white noise is equal energy at each frequency, it can cover the sounds that do not want to be heard around the environment. Pink noise might be due to neurons in the hypothalamus synchronously resonating with pink noises in the low oscillation frequency band, making slow rhythms and dominating brain waves. In addition, the power density of high frequency in red noise is even lower than these in pink noise and may therefore affect neurons in the hypothalamus even more, leading to information more speedily reaching the cerebral cortex through the hypothalamus and to faster transmission of motor messages through the hypothalamus. The concept of the open-plan office is now widespread in the workplace. While offering many advantages in terms of layout and facilitating communication between colleagues, this way of organising the workspace had two major disadvantages. The feeling of privacy is lessened and noise level is increased causing discomfort for individuals working in this type of environment. It appears that noise is a major nuisance factor in open-plan offices in spite of a relatively low noise level. The noise nuisance experienced in open-plan offices affects work satisfaction, and this exposure to noise can reduce employee performance depending on the types of task to be carried out and the characteristics of the noise present in the workplace. The current research suggests that creating a colour noise environment through a loudspeaker or headphone might improve some aspects of cognitive performance in different workplaces. Many studies in the past have proved that noise has a negative influence on human hearing, cardiovascular systems, and emotions and so on. However, the three colour noises are applied to the original sound field environment. Therefore, if the background noise of the workplace is already too high, it would not be in order, for health reasons, to further add colour noise. CONCLUSION: The results of this study show that participants perform significantly better in Psychomotor Speed Test, Continuous Performance Test, Executive Function Test and Working Memory Test. Therefore, colour noise may be used in the future to improve a workplace sound field environment, so that white-collar workers have a better working environment. 
Using physical stimuli of red, pink and white noise to improve worker productivity may prove to be a new approach, one that may have fewer side effects than traditional methods such as drinking refreshing beverages. The introduction of these sounds represents a simple, low-cost, non-invasive improvement. Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: As part of an effort to enhance worker efficiency, experiments involving three types of colour noise exposure were conducted. Previous studies have shown that pink noise can shift brain waves towards a lower potential. In this study, we used physical stimuli, in cognitive experiments, to understand the impact that three colour noises have on working efficiency. Methods: All 22 participants were exposed in turn to quiet, red noise, pink noise and white noise sound environments. During the laboratory experiment, measures of psychomotor speed, continuous performance, executive function and working memory were recorded. Results: Red, pink and white noise produced significantly better results than the quiet environment on the psychomotor speed test. On the continuous performance test, pink noise gave the only significant improvement. Red, pink and white noise all led to better executive function test performance. On the working memory test, red and pink noise showed significant improvement, and white noise was also significantly better than the quiet environment. In addition, the results from the comfort questionnaires showed that red and pink noise increase the likelihood of better judgment, implementation, and overall environment. Conclusions: At present, noise is generally considered to have negative effects on hearing and health. However, these experimental results show that certain noises can enhance environmental comfort. It is feasible, in the future, to use knowledge of colour noises to improve productivity in a workplace while maintaining a healthy environment.
INTRODUCTION: Exposure to high levels of noise in industrial factories is associated with a number of health effects, manifested in psychosocial responses such as annoyance, disturbance of daily activities, sleep and performance, and in physical responses such as hearing loss, hypertension, and ischemic heart disease.[12] While the open-plan office is now widespread in the workplace, noise appears to be a major nuisance factor in open-plan offices despite a relatively low noise level (less than 65 dB(A)).[3] In general, the higher the decibel level of a sound, the greater the health hazard. However, when the sound level is not high, the degree of influence on people may vary depending on the sound's characteristics. In audio engineering, physics, and many other fields, the colour of noise refers to the power spectrum of a noise signal. The practice of naming kinds of noise after colours started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was assumed to have such a flat power spectrum over the visible range. Other colour names, such as pink, red, and blue, were then given to noise with other spectral profiles, often (but not always) in reference to the colour of light with a similar spectrum. Noise often has detrimental effects on performance. Under most circumstances, information processing is disturbed by environmental noise and other non-task-compatible distracters.[45] However, researchers have recently reported that under certain circumstances individuals with attention problems appear to benefit from the addition of specific forms of environmental noise. Typically, this facilitative effect has been limited to non-vocal background music during simple arithmetic tasks,[6] but Stansfeld et al.[7] found that under certain conditions even road traffic noise can improve performance on episodic memory tasks, particularly in children at risk of attention problems and academic underachievement. Furthermore, Söderlund et al.[8] demonstrated that adding auditory white noise to the environment enhanced the memory performance of children with Attention Deficit Hyperactivity Disorder (ADHD)-type problems but disrupted that of non-ADHD control children. Stochastic resonance, a phenomenon whereby signal processing is enhanced by the addition of random noise, has been widely demonstrated across modalities, including visual, auditory, tactile, and cross-modal forms of processing.[910] Of particular interest are findings that auditory noise can enhance some aspects of human cognitive performance, such as the speed of arithmetical computations.[11] Some recent studies provide related evidence. For example, in 2010, light music, an example of pink noise, was shown to improve the sleep quality of elderly people as a long-term effect.[12] Moreover, in a previous study,[13] steady pink noise at intensities of 35 dB, 40 dB, 50 dB, and 60 dB was used to stimulate four participants during sleep, and the authors concluded that pink noise could improve the participants' sleep quality. However, this conclusion was based on a significant increase in the light sleep period (especially stage 2) accompanied by a decline in the duration of rapid eye movement sleep.
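As an illustration of the spectral definitions above (not part of the original study), the following Python sketch generates white, pink, and red noise by shaping the spectrum of Gaussian white noise; the sampling rate, duration, exponents, and normalisation are arbitrary choices for demonstration only.

    import numpy as np

    def colour_noise(n_samples, exponent, fs=44100, rng=None):
        # Noise whose power spectral density falls off as 1/f**exponent:
        # exponent = 0 -> white, 1 -> pink, 2 -> red (Brownian) noise.
        rng = np.random.default_rng() if rng is None else rng
        white = rng.standard_normal(n_samples)           # flat-spectrum source
        spectrum = np.fft.rfft(white)
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        scale = np.ones_like(freqs)
        scale[1:] = freqs[1:] ** (-exponent / 2.0)       # amplitude ~ f^(-exponent/2)
        shaped = np.fft.irfft(spectrum * scale, n=n_samples)
        return shaped / np.std(shaped)                   # normalise to unit variance

    one_second = 44100
    white, pink, red = (colour_noise(one_second, e) for e in (0, 1, 2))

Played back at matched levels, the three signals differ only in how their energy is distributed across frequency, which is the property the colour names refer to.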
In a brain synchronization study, it was demonstrated that pink noise could synchronize brain waves and induce brain activity into a specialized state, that is, a lower complexity level.[14] Sezici's study showed that playing white noise significantly decreased the daily crying durations and increased the sleeping durations of colicky babies compared to swinging in both groups.[15] In the literature, parents emphasized that playing white noise was quite beneficial in relieving pain among crying colicky babies.[16] In their studies, Kucukoglu et al.[17] and Karakoc and Turker[18] revealed that the pain scores of newborns who were made to listen to white noise decreased compared with those of newborns in the control group. White noise encompasses all characteristics of sounds within the range of human hearing. It has been used in the treatment of tinnitus and insomnia, for masking unwanted sounds, and to provide relaxation.[1920] Zhou et al.[14] demonstrated that steady pink noise has a significant effect on reducing brain wave complexity and inducing more stable sleep, thereby improving the sleep quality of individuals. In total, 42 elderly people (21 using music and 21 controls) completed one study, and there was some indication that soft, slow music yielded greater improvement on some of the parameters, which is worthy of further investigation.[12] A tabulated list of the effects observed in previous studies with colour noise exposures is shown in Table 1. Several experiments have demonstrated that the detrimental effect of office noise on worker attitude and performance can be significantly reduced through the use of masking noise.[2122232425] For instance, the experimental study performed by Haka[24] demonstrated that performance on operation span, serial recall, and long-term memory tasks all improved when masking noise was used, and the field study performed by Hongisto[23] indicated that masking noise significantly reduced workers' reported disturbance of concentration caused by office noise. Masking noise achieves this improvement by reducing the intelligibility of nearby speech.[25] In addition, Vassie et al.[26] indicated that participants rejected brown masking noise delivered through earphones because it caused irritation and discomfort. Future studies should also consider other types of masking noise and should measure the level and duration of the masking noise.[26] It is well known that any human action results from brain activity, which is affected by light and sound. Too much acoustic energy causes hearing loss, nerve damage, and emotional disorders.[2728] However, sound also plays a very important role for people. According to previous research, not only can white noise create a masking effect, but pink noise can also improve sleep quality. Most people in the workforce are in offices or the service industry, not in noisy factory environments. Creating a quiet environment is sometimes not easy; however, adding certain sounds to a low-noise environment to enhance work efficiency and reduce work pressure is an issue worth exploring. Therefore, this study aimed to understand the effects of exposure to three different colour noises, a topic on which few previous studies exist.
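The stochastic resonance effect cited earlier can be illustrated with a toy model (again, only a sketch, not the paradigm used in the cited studies): a weak periodic signal that never crosses a detection threshold on its own becomes detectable once a moderate amount of noise is added, while too much noise washes the signal out.

    import numpy as np

    def signal_detector_correlation(noise_sd, rng, threshold=1.0, n=20000):
        # Correlation between a sub-threshold sine wave and the output of a
        # simple threshold detector, for a given level of added white noise.
        t = np.arange(n)
        signal = 0.6 * np.sin(2 * np.pi * t / 200.0)   # peak 0.6 < threshold 1.0
        noisy = signal + rng.normal(0.0, noise_sd, n)
        detected = (noisy > threshold).astype(float)   # 1 whenever threshold is crossed
        return np.corrcoef(signal, detected)[0, 1]

    rng = np.random.default_rng(0)
    for sd in (0.2, 0.5, 1.0, 3.0, 8.0):
        print(f"noise sd = {sd}: correlation = {signal_detector_correlation(sd, rng):.3f}")

Sweeping the noise level in this toy model reproduces the characteristic inverted-U relationship: a moderate amount of added noise helps the detector track the signal, whereas too little or too much does not.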
Therefore, the goal of the current study is to test the hypothesis that adding another sound to a low-background-noise environment may increase productivity and comfort. CONCLUSION: The results of this study show that participants performed significantly better on the Psychomotor Speed Test, Continuous Performance Test, Executive Function Test, and Working Memory Test. Colour noise may therefore be used in the future to improve the workplace sound field environment, giving white-collar workers a better working environment. Using physical stimuli of red, pink, and white noise to improve worker productivity may prove to be a new approach with fewer side effects than traditional methods such as drinking refreshing beverages. The introduction of these sounds represents a simple, low-cost, non-invasive improvement. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: As part of an effort to enhance worker efficiency, experiments involving three types of noise exposure were conducted. Previous studies have shown that pink noise can cause brain waves to reach a lower potential. In this study, we used physical methods, in cognitive experiments, to understand the impact that three colour noises have on working efficiency. Methods: All 22 participants were exposed to a quiet sound environment and to red, pink, and white noise, respectively. After the laboratory experiment, results for psychomotor speed, continuous performance, executive function, and working memory were recorded. Results: Red, pink, and white noise produced significantly better results than the quiet environment on the psychomotor speed test. On the continuous performance test, pink noise gave the only significantly positive result. Red, pink, and white noise all yielded better executive function test results. On the working memory test, red and pink noise showed significant improvement, and white noise was also significantly better than the quiet environment. In addition, the comfort questionnaires showed that red and pink noise increased the likelihood of better ratings for judgment, implementation, and the overall environment. Conclusions: At present, noise is generally considered to have negative effects on hearing and health. However, these experimental results show that certain noises can enhance environmental comfort. It is feasible, in the future, to use knowledge of colour noises to improve productivity in a workplace while maintaining a healthy environment.
8,294
284
[ 153, 294, 707, 283, 90, 2 ]
12
[ "noise", "test", "exposure", "noise exposure", "sound", "participants", "white", "pink", "scenario", "white noise" ]
[ "noise workplace", "white noise beneficial", "kinds noise colours", "noise workplace high", "colour noise refers" ]
null
[CONTENT] Human performance | pink noise | red noise | subjective comfort | white noise [SUMMARY]
null
[CONTENT] Human performance | pink noise | red noise | subjective comfort | white noise [SUMMARY]
[CONTENT] Human performance | pink noise | red noise | subjective comfort | white noise [SUMMARY]
[CONTENT] Human performance | pink noise | red noise | subjective comfort | white noise [SUMMARY]
[CONTENT] Human performance | pink noise | red noise | subjective comfort | white noise [SUMMARY]
[CONTENT] Audiometry, Pure-Tone | Efficiency | Executive Function | Female | Healthy Volunteers | Hearing | Humans | Male | Memory, Short-Term | Noise | Psychomotor Performance | Reaction Time | Sound | Work | Young Adult [SUMMARY]
null
[CONTENT] Audiometry, Pure-Tone | Efficiency | Executive Function | Female | Healthy Volunteers | Hearing | Humans | Male | Memory, Short-Term | Noise | Psychomotor Performance | Reaction Time | Sound | Work | Young Adult [SUMMARY]
[CONTENT] Audiometry, Pure-Tone | Efficiency | Executive Function | Female | Healthy Volunteers | Hearing | Humans | Male | Memory, Short-Term | Noise | Psychomotor Performance | Reaction Time | Sound | Work | Young Adult [SUMMARY]
[CONTENT] Audiometry, Pure-Tone | Efficiency | Executive Function | Female | Healthy Volunteers | Hearing | Humans | Male | Memory, Short-Term | Noise | Psychomotor Performance | Reaction Time | Sound | Work | Young Adult [SUMMARY]
[CONTENT] Audiometry, Pure-Tone | Efficiency | Executive Function | Female | Healthy Volunteers | Hearing | Humans | Male | Memory, Short-Term | Noise | Psychomotor Performance | Reaction Time | Sound | Work | Young Adult [SUMMARY]
[CONTENT] noise workplace | white noise beneficial | kinds noise colours | noise workplace high | colour noise refers [SUMMARY]
null
[CONTENT] noise workplace | white noise beneficial | kinds noise colours | noise workplace high | colour noise refers [SUMMARY]
[CONTENT] noise workplace | white noise beneficial | kinds noise colours | noise workplace high | colour noise refers [SUMMARY]
[CONTENT] noise workplace | white noise beneficial | kinds noise colours | noise workplace high | colour noise refers [SUMMARY]
[CONTENT] noise workplace | white noise beneficial | kinds noise colours | noise workplace high | colour noise refers [SUMMARY]
[CONTENT] noise | test | exposure | noise exposure | sound | participants | white | pink | scenario | white noise [SUMMARY]
null
[CONTENT] noise | test | exposure | noise exposure | sound | participants | white | pink | scenario | white noise [SUMMARY]
[CONTENT] noise | test | exposure | noise exposure | sound | participants | white | pink | scenario | white noise [SUMMARY]
[CONTENT] noise | test | exposure | noise exposure | sound | participants | white | pink | scenario | white noise [SUMMARY]
[CONTENT] noise | test | exposure | noise exposure | sound | participants | white | pink | scenario | white noise [SUMMARY]
[CONTENT] noise | sleep | masking noise | demonstrated | masking | studies | effect | people | light | white [SUMMARY]
null
[CONTENT] noise | exposure | noise exposure | scenario | seconds | pink noise exposure | test | situation | noise exposure scenario | exposure scenario [SUMMARY]
[CONTENT] conflicts interest | conflicts | interest | conflicts interest conflicts interest | interest conflicts interest | interest conflicts | conflicts interest conflicts | test | nil | improve [SUMMARY]
[CONTENT] noise | nil | exposure | test | conflicts interest | conflicts | sound | interest | noise exposure | participants [SUMMARY]
[CONTENT] noise | nil | exposure | test | conflicts interest | conflicts | sound | interest | noise exposure | participants [SUMMARY]
[CONTENT] three ||| ||| three [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] three ||| ||| three ||| 22 ||| ||| ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] three ||| ||| three ||| 22 ||| ||| ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
Diverting colostomy is an effective and reversible option for severe hemorrhagic radiation proctopathy.
32148382
Severe chronic radiation proctopathy (CRP) is difficult to treat.
BACKGROUND
To assess the efficacy of colostomy in CRP, patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled. Patients with tumor recurrence, rectal-vaginal fistula or other types of rectal fistulas, or who were lost to follow-up were excluded. Rectal bleeding, hemoglobin (Hb), endoscopic features, endo-ultrasound, rectal manometry, and magnetic resonance imaging findings were recorded. Quality of life before stoma creation and after stoma reversal was scored with questionnaires. Anorectal functions were assessed using the CRP symptom scale, which contains the following items: Watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence.
METHODS
A total of 738 continual CRP patients were screened. After exclusion, 14 patients in the colostomy group and 25 in the conservative group were included in the final analysis. Preoperative Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. All 14 patients in the former group achieved complete remission of bleeding, and the colostomy was successfully reversed in 13 of 14 (93%), excepting one very old patient. The median duration of stoma was 16 (range: 9-53) mo. The Hb level increased gradually from 75 g/L at 3 mo, 99 g/L at 6 mo, and 107 g/L at 9 mo to 111 g/L at 1 year and 117 g/L at 2 years after the stoma, but no bleeding cessation or significant increase in Hb levels was observed in the conservative group. Endoscopic telangiectasia and bleeding were greatly improved. Endo-ultrasound showed decreased vascularity, and magnetic resonance imaging revealed an increasing presarcal space and thickened rectal wall. Anorectal functions and quality of life were significantly improved after stoma reversal, when compared to those before stoma creation.
RESULTS
Diverting colostomy is a very effective method for achieving remission of refractory hemorrhagic CRP. The stoma can be reversed, and anorectal function can recover after reversal.
CONCLUSION
[ "Aged", "Colostomy", "Female", "Gastrointestinal Hemorrhage", "Humans", "Male", "Middle Aged", "Quality of Life", "Radiation Injuries", "Rectal Diseases", "Rectum", "Retrospective Studies", "Surgical Stomas", "Treatment Outcome" ]
7052535
INTRODUCTION
Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. A permanent change in bowel habits may occur in 90% of patients. After pelvic radiotherapy, 20%-50% of patients will develop difficult bowel function, affecting their quality of life (QOL)[1,2]. CRP can cause more than 20 different symptoms. Rectal bleeding is the most common symptom and occurs in > 50% of CRP patients[1,3]. Transfusion-dependent severe bleeding occurs in 1%-5% of patients[1]. Pathologically, ischemia in the submucosa due to obliterative endarteritis and progressive fibrosis in macroscopic changes are the main causes[4]. Medical treatment consists of topical sucralfate enemas[5], oral or topical sulfasalazine[6], and rebamipide[7]. However, these reagents are only effective in acute or mild CRP, whereas their efficacy is usually disappointing in CRP patients with moderate to severe hemorrhage. More invasive modalities include endoscopic argon plasma coagulation (APC)[8], topical 4%-10% formalin[9-12], radiofrequency ablation[13], and hyperbaric oxygen therapy[14,15], and are currently popular optional treatments for CRP. Most of these modalities lack randomized trial evidence. These treatment options are reported to be effective in controlling mild to moderate bleeding in most of the literature. Nonetheless, severe and refractory bleeding is difficult to manage[1,16]. Furthermore, APC or topical formalin requires multiple sessions for severe bleeding and can cause severe side effects, including perforation, strictures, and perianal pain[17]. Fecal diversion is reported to be effective in the management of severe CRP bleeding[16,18,19], as it can reduce irritation injury to the lesions to decrease hemorrhage[16]. However, the usage of fecal diversion is not adopted as widely as is APC or formalin. Most of the fecal diversion data are from the 1980s, and this option has not been well studied to date. We previously reported one retrospective cohort of CRP patients with severe hemorrhage who received colostomy[16]. The results showed that colostomy can bring much higher remission of severe bleeding (94%) than can conservative treatment (11%) with APC or topical formalin. Pathologically, chronic inflammation and progressive fibrosis are observed after stoma. Diverting stoma is usually thought to be permanent according to Ayerdi et al[20]. Anorectal function and QOL after stoma reversal remain unclear. During the past 3 years, we have successfully performed colostomy reversals in a large cohort of CRP patients with severe bleeding and have follow-up data after colostomy reversal. Here, we report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversal for this series of CRP patients with severe bleeding.
MATERIALS AND METHODS
Patients and ethical statements Patients with hemorrhagic CRP who were treated after admission at The Sixth Affiliated Hospital of Sun Yat-Sen University from March 2008 to December 2018 were enrolled. Medical records and imaging and clinicopathological data were extracted from our electronic database. This study was approved by the Ethical Committee of The Sixth Affiliated Hospital of Sun Yat-Sen University and was performed according to the provisions of the World Medical Association’s Declaration of Helsinki in 1995 (updated in Tokyo, 2004). Due to the nature of a retrospective study, informed consent was waived. Inclusion and exclusion criteria CRP patients with refractory rectal bleeding who received diverting colostomy or conservative treatments for severe anemia and who needed transfusions were enrolled. Patients who were lost to follow-up, had tumor recurrence, rectal-vaginal fistula or other types of rectal fistulas, or underwent rectal resection with preventive colostomy were excluded because these conditions would affect the evaluation of remission of rectal bleeding. Diagnosis, definitions, and scores CRP was diagnosed by the combination of pelvic radiation history for malignancies, symptoms such as rectal bleeding, and endoscopic findings of CRP-specific changes such as telangiectasia and active bleeding in the rectum, as well as exclusion of other bleeding diseases. Hemorrhagic problems, such as tumor recurrence and hemorrhoids, were excluded. Refractory severe bleeding was defined as frequent bleeding and severe anemia with the need for transfusions. At admission, CRP patients in our center were first referred to noninvasive enemas, and if there was no response or recurrence, invasive treatment, such as APC (1-2 rounds) or formalin topical irrigation, was utilized. Conservative treatment failure was defined as no response or recurrence of frequent or severe rectal bleeding after invasive treatments. In this study, we used a modified subjective-objective management analysis (known as mSOMA) system that we designed previously to assess the severity of rectal bleeding[16]. The advantages of the mSOMA system include both subjective complaints of patients and objective hemoglobin (Hb) levels according to laboratory tests. Remission of bleeding was defined as complete cessation or occasional bleeding that needed no further treatment. The CTCAE score of bleeding was used to assess its severity. Indications for fecal diversion and stoma closure The indications for fecal diversion in hemorrhagic CRP comprised the following conditions: Recurrent bleeding unresponsive to conservative treatments such as endoscopic APC or topical formalin, accompanying severe anemia, and the need for transfusions. Transverse loop colostomy and "gunsight" or "cross" type of stoma closure were created according to the standard protocols used in our previous studies[16]. The criteria for stoma closure were as follows: Remission or only occasional rectal bleeding, regressive edema in the rectal mucosa, good performance of anorectal function, exclusion of tumor recurrence, and no severe CRP complications such as fistula, stricture, and deep ulcer[16]. Follow-up Patients were scheduled for follow-up through outpatient visits or telephone questionnaires at 6 mo, 9 mo, 1 year, 1.5 years, and 2 years after colostomy. QOL after stoma closure was evaluated according to EORTC QLQ C30 questionnaires[21]. We focused on the following items in this study: The remission rate of bleeding, the rate of stoma reversal, QOL after stoma reversal, dynamic changes of Hb levels, stoma-related complications, and severe CRP complications. Anorectal functions In our previous study, tenesmus, stool frequency, and anorectal pain were common symptoms in active CRP[22]. Because there are no standard scales or scores to assess anorectal function precisely in CRP patients, we developed a new scale system that is a patient self-reported scale of outcomes, namely, the chronic radiation proctopathy symptom scale (CRPSS). The CRPSS considers the following items: Watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence. The details are provided in Supplementary File 1. For CRP patients treated by colostomy, CRPSS scores were evaluated before stoma and at reversal. Statistical analysis Analyses for continuous variables were performed by Student’s t-test. The χ2 test was used for categorical variables. Fisher’s exact test was conducted when appropriate. For non-parametric variables, the Wilcoxon rank sum tests were used. All statistical analyses were performed using SPSS version 23.0 (Chicago, IL, United States). P < 0.05 (two-tailed) was considered statistically significant.
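As a rough guide for readers who want to run the same kinds of comparisons outside SPSS, the sketch below shows corresponding tests in Python with SciPy; the arrays are invented placeholders rather than the study's data, and the choice of test per variable type follows the description above.

    from scipy import stats

    # Placeholder measurements standing in for the two groups (not real data).
    colostomy_hb = [63, 70, 58, 81, 55, 66, 49, 72]
    conservative_hb = [88, 92, 79, 95, 84, 90, 101, 76]

    # Continuous variables: Student's t-test (two-tailed, P < 0.05 significant).
    t_stat, t_p = stats.ttest_ind(colostomy_hb, conservative_hb)

    # Categorical variables: chi-square test; Fisher's exact test when expected
    # counts are small (e.g., a 2 x 2 table of transfusion yes/no by group).
    table = [[12, 2], [6, 19]]
    chi2, chi_p, dof, expected = stats.chi2_contingency(table)
    odds_ratio, fisher_p = stats.fisher_exact(table)

    # Non-parametric variables (e.g., questionnaire scores): Wilcoxon rank-sum test.
    qol_colostomy = [83, 75, 92, 67, 83, 75, 100, 58]
    qol_conservative = [50, 42, 58, 50, 33, 58, 42, 67]
    z_stat, rank_p = stats.ranksums(qol_colostomy, qol_conservative)

    print(t_p, chi_p, fisher_p, rank_p)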
null
null
Research conclusions
We thank Dr. Jervoise Andreyev (a famous international leader in radiation proctitis, Consultant Gastroenterologist in Pelvic Radiation Disease, Royal Marsden NHS Foundation Trust, London SW3 6JJ, United Kingdom) for critical revisions of our paper.
[ "INTRODUCTION", "Patients and ethical statements", "Inclusion and exclusion criteria", "Diagnosis, definitions, and scores", "Indications for fecal diversion and stoma closure", "Follow-up", "Anorectal functions", "Statistical analysis", "RESULTS", "Demographics and clinical outcomes", "Dynamically increasing hemoglobin levels after colostomy", "Stoma closure and endoscopic features", "Ultrasound findings before stoma and at reversal", "Improved magnetic resonance imaging features at stoma reversal", "Good restored anorectal functions after stoma reversal", "Improved quality of life at colostomy reversal", "DISCUSSION", "ARTICLE HIGHLIGHTS", "Research background", "Research motivation", "Research objectives", "Research methods", "Research results", "Research conclusions" ]
[ "Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. A permanent change in bowel habits may occur in 90% of patients. After pelvic radiotherapy, 20%-50% of patients will develop difficult bowel function, affecting their quality of life (QOL)[1,2]. CRP can cause more than 20 different symptoms. Rectal bleeding is the most common symptom and occurs in > 50% of CRP patients[1,3]. Transfusion-dependent severe bleeding occurs in 1%-5% of patients[1]. Pathologically, ischemia in the submucosa due to obliterative endarteritis and progressive fibrosis in macroscopic changes are the main causes[4].\nMedical treatment consists of topical sucralfate enemas[5], oral or topical sulfasalazine[6], and rebamipide[7]. However, these reagents are only effective in acute or mild CRP, whereas their efficacy is usually disappointing in CRP patients with moderate to severe hemorrhage. More invasive modalities include endoscopic argon plasma coagulation (APC)[8], topical 4%-10% formalin[9-12], radiofrequency ablation[13], and hyperbaric oxygen therapy[14,15], and are currently popular optional treatments for CRP. Most of these modalities lack randomized trial evidence. These treatment options are reported to be effective in controlling mild to moderate bleeding in most of the literature. Nonetheless, severe and refractory bleeding is difficult to manage[1,16]. Furthermore, APC or topical formalin requires multiple sessions for severe bleeding and can cause severe side effects, including perforation, strictures, and perianal pain[17].\nFecal diversion is reported to be effective in the management of severe CRP bleeding[16,18,19], as it can reduce irritation injury to the lesions to decrease hemorrhage[16]. However, the usage of fecal diversion is not adopted as widely as is APC or formalin. Most of the fecal diversion data are from the 1980s, and this option has not been well studied to date. We previously reported one retrospective cohort of CRP patients with severe hemorrhage who received colostomy[16]. The results showed that colostomy can bring much higher remission of severe bleeding (94%) than can conservative treatment (11%) with APC or topical formalin. Pathologically, chronic inflammation and progressive fibrosis are observed after stoma. Diverting stoma is usually thought to be permanent according to Ayerdi et al[20].\nAnorectal function and QOL after stoma reversal remain unclear. During the past 3 years, we have successfully performed colostomy reversals in a large cohort of CRP patients with severe bleeding and have follow-up data after colostomy reversal. Here, we report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversal for this series of CRP patients with severe bleeding.", "Patients with hemorrhagic CRP who were treated after admission at The Sixth Affiliated Hospital of Sun Yat-Sen University from March 2008 to December 2018 were enrolled. Medical records and imaging and clinicopathological data were extracted from our electronic database. This study was approved by the Ethical Committee of The Sixth Affiliated Hospital of Sun Yat-Sen University and was performed according to the provisions of the World Medical Association’s Declaration of Helsinki in 1995 (updated in Tokyo, 2004). 
Due to the nature of a retrospective study, informed consent was waived.", "CRP patients with refractory rectal bleeding who received diverting colostomy or conservative treatments for severe anemia and who needed transfusions were enrolled. Patients who were lost to follow-up, had tumor recurrence, rectal-vaginal fistula or other types of rectal fistulas, or underwent rectal resection with preventive colostomy were excluded because these conditions would affect the evaluation of remission of rectal bleeding.", "CRP was diagnosed by the combination of pelvic radiation history for malignancies, symptoms such as rectal bleeding, and endoscopic findings of CRP-specific changes such as telangiectasia and active bleeding in the rectum, as well as exclusion of other bleeding diseases. Hemorrhagic problems, such as tumor recurrence and hemorrhoids, were excluded. Refractory severe bleeding was defined as frequent bleeding and severe anemia with the need for transfusions. At admission, CRP patients in our center were first referred to noninvasive enemas, and if there was no response or recurrence, invasive treatment, such as APC (1-2 rounds) or formalin topical irrigation, was utilized. Conservative treatment failure was defined as no response or recurrence of frequent or severe rectal bleeding after invasive treatments.\nIn this study, we used a modified subjective-objective management analysis (known as mSOMA) system that we designed previously to assess the severity of rectal bleeding[16]. The advantages of the mSOMA system include both subjective complaints of patients and objective hemoglobin (Hb) levels according to laboratory tests. Remission of bleeding was defined as complete cessation or occasional bleeding that needed no further treatment. The CTCAE score of bleeding was used to assess its severity.", "The indications for diverting diversion in hemorrhagic CRP contained the following conditions: Recurrent bleeding and unresponsive to conservative treatments such as endoscopic APC or topical formalin, accompanying severe anemia, and transfusions needed. Transverse loop colostomy and “gunsight” or “cross‘’ type of stoma closure were created according to the standard protocols used in our previous studies[16]. The criteria of stoma closure were as follows: Remission or occasional rectal bleeding, regressive edema in rectal mucosa, good performance of anorectal function, exclusion of tumor recurrence, and no severe CRP complications such as fistula, stricture, and deep ulcer[16].", "Patients were scheduled for follow-up through outpatient visits or telephone questionnaires at 6 mo, 9 mo, 1 year, 1.5 years, and 2 years after colostomy. QOL after stoma closure was evaluated according to EORTC QLQ C30 questionnaires[21]. We focused on the following items in this study: The remission rate of bleeding, the rate of stoma reversal, QOL after stoma reversal, dynamic changes of Hb levels, stoma-related complications, and severe CRP complications.", "In our previous study, tenesmus, stool frequency, and anorectal pain were common symptoms in active CRP[22]. Because there are no standard scales or scores to assess anorectal function precisely in CRP patients, we developed a new scale system that is a patient self-reported scale of outcomes, namely, the chronic radiation proctopathy symptom scale (CRPSS). The CRPSS considers the following items: Watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence. The details are provided in Supplementary File 1. 
For CRP patients treated by colostomy, CRPSS scores were evaluated before stoma and at reversal.", "Analyses for continuous variables were performed by Student’s t-test. The χ2 test was used for categorical variables. Fisher’s exact test was conducted when appropriate. For non-parameter variables, the Wilcoxon rank sum tests were used. All statistical analyses were performed using SPSS version 23.0 (Chicago, IL, United States). P < 0.05 (two-tailed) was considered statistically significant.", " Demographics and clinical outcomes From March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).\nDemographics and characteristics\nFisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.\nFlow chart of patient selection.\nIn the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. Higher bleeding scores (P = 0.033) and relatively decreased preoperative Hb levels (P = 0.051, no significant difference) were found in the diverting colostomy group compared to the ITT group because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers instead of severe bleeding.\nFrom March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. 
Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).\nDemographics and characteristics\nFisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.\nFlow chart of patient selection.\nIn the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. Higher bleeding scores (P = 0.033) and relatively decreased preoperative Hb levels (P = 0.051, no significant difference) were found in the diverting colostomy group compared to the ITT group because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers instead of severe bleeding.\n Dynamically increasing hemoglobin levels after colostomy In the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusion before stoma compared to 8.0 mo ± 1.8 mo (P = 0.042) of bleeding duration and 6 (24%) (P < 0.001) transfusions in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatments. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion due to unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (Hb of 99 g/L), and 9 mo (Hb of 107 g/L) to 1 year (Hb 111 of g/L) and 2 years (Hb 117 g/L) after colostomy (Figure 2). Postoperative transfusions were conducted in 3 of 14 patients (transfusion requirements for 2 patients in other hospitals was unknown) in the colostomy group. The dynamic remission rates between the colostomy and conservative groups after surgery were as follows: 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1).\nClinical details and outcomes of 14 colostomy patients\nMedian value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation.\nDynamic changes in hemoglobin levels. Pre-Op: Pre-operative.\nIn the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusion before stoma compared to 8.0 mo ± 1.8 mo (P = 0.042) of bleeding duration and 6 (24%) (P < 0.001) transfusions in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatments. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. 
One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion due to unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (Hb of 99 g/L), and 9 mo (Hb of 107 g/L) to 1 year (Hb 111 of g/L) and 2 years (Hb 117 g/L) after colostomy (Figure 2). Postoperative transfusions were conducted in 3 of 14 patients (transfusion requirements for 2 patients in other hospitals was unknown) in the colostomy group. The dynamic remission rates between the colostomy and conservative groups after surgery were as follows: 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1).\nClinical details and outcomes of 14 colostomy patients\nMedian value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation.\nDynamic changes in hemoglobin levels. Pre-Op: Pre-operative.\n Stoma closure and endoscopic features Among the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure, and the remaining patient did not undergo stoma reversal due to concerns of surgical risks because of old age (83 years old). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 para-stoma hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure.\nEndoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared to before stoma creation (Figure 3). The bleeding score by CTCAE decreased from Pre-Op 2.7 points ± 0.5 points to 0.8 points ± 0.5 points (P < 0.001) at 1 year after stoma in the colostomy group; the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatments in the conservative group (Table 1 and Figure 4).\nEndoscopic changes before stoma and at reversal.\nCTC bleeding scores before stoma and at stoma reversal.\nAmong the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure, and the remaining patient did not undergo stoma reversal due to concerns of surgical risks because of old age (83 years old). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 para-stoma hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure.\nEndoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared to before stoma creation (Figure 3). 
The bleeding score by CTCAE decreased from Pre-Op 2.7 points ± 0.5 points to 0.8 points ± 0.5 points (P < 0.001) at 1 year after stoma in the colostomy group; the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatments in the conservative group (Table 1 and Figure 4).\nEndoscopic changes before stoma and at reversal.\nCTC bleeding scores before stoma and at stoma reversal.\n Ultrasound findings before stoma and at reversal Endorectal ultrasound (EUS) was analyzed retrospectively before stoma and at reversal in 5 patients; the remaining patients did not receive EUS at either of these two time points. In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features in CRP[24]. In this study, we found similar features before stoma creation and a tremendous decrease in superficial vascularity, which can explain the cessation of bleeding (Figure 5).\nEndoscopic ultrasound features before stoma and at stoma reversal.\nEndorectal ultrasound (EUS) was analyzed retrospectively before stoma and at reversal in 5 patients; the remaining patients did not receive EUS at either of these two time points. In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features in CRP[24]. In this study, we found similar features before stoma creation and a tremendous decrease in superficial vascularity, which can explain the cessation of bleeding (Figure 5).\nEndoscopic ultrasound features before stoma and at stoma reversal.\n Improved magnetic resonance imaging features at stoma reversal To evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, width of the presacral space, and thicknesses of the levator ani, the gluteus maximus muscle, the obturator intemus, and the distal part of the sigmoid colon (Table 3), which was referred to in our previous study of radiation injury to the pelvis[25]. MRI scans both at pre-colostomy and at reversal were analyzed in 5 patients, as the remaining 9 did not receive MRI scans both pre-colostomy and at reversal. The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3).\nMagnetic resonance imaging findings at post-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy\nTo evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, width of the presacral space, and thicknesses of the levator ani, the gluteus maximus muscle, the obturator intemus, and the distal part of the sigmoid colon (Table 3), which was referred to in our previous study of radiation injury to the pelvis[25]. MRI scans both at pre-colostomy and at reversal were analyzed in 5 patients, as the remaining 9 did not receive MRI scans both pre-colostomy and at reversal. 
The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3).\nMagnetic resonance imaging findings at post-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy\n Good restored anorectal functions after stoma reversal Anorectal functions before stoma and after reversal were evaluated by CRPSS scores in all 12 patients in the colostomy group. The results showed that rectal bleeding, peritoneal pain, tenesmus, urgency and fecal/gas incontinency were significantly improved after stoma reversal compared to those before stoma creation. Additionally, all of these scores were < 2 points, which indicated that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good performance of anorectal function compared to the normal population (scores ≤ 1 point) were observed. We also analyzed anorectal manometry to objectively assess anorectal function before reversal in 6 of the 14 patients who received it. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed. Decreased sphincter functions were found in 2 of 6 cases (Table 4). These patients fulfilled the indications for reversal, and we found that these patients obtained good performance of anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed.\nRectal manometry assessment of anorectal functions before stoma reversals\nAnorectal functions before stoma and after stoma reversal.\nAnorectal functions before stoma and after reversal were evaluated by CRPSS scores in all 12 patients in the colostomy group. The results showed that rectal bleeding, peritoneal pain, tenesmus, urgency and fecal/gas incontinency were significantly improved after stoma reversal compared to those before stoma creation. Additionally, all of these scores were < 2 points, which indicated that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good performance of anorectal function compared to the normal population (scores ≤ 1 point) were observed. We also analyzed anorectal manometry to objectively assess anorectal function before reversal in 6 of the 14 patients who received it. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed. Decreased sphincter functions were found in 2 of 6 cases (Table 4). These patients fulfilled the indications for reversal, and we found that these patients obtained good performance of anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed.\nRectal manometry assessment of anorectal functions before stoma reversals\nAnorectal functions before stoma and after stoma reversal.\n Improved quality of life at colostomy reversal QOL after stoma was assessed in the colostomy and conservative groups using EORTC QLQ-C30 questionnaires. As there are no similar reports in the Chinese population, we referred to the normal German population[26]. Osoba et al[26] suggested that a difference of ≥ 20 points in global health was considered to be clinically relevant. 
In the panel of patients who underwent stoma closure, QOL was dramatically improved compared to Pre-Op baseline QOL before stoma creation, including improved global health (difference of 40, P < 0.001), physical function (difference of 36.4, P < 0.001), role function (difference of 55, P < 0.001), emotional function (difference of 39.4, P < 0.001), social function (difference of 36.3, P < 0.001), and improved symptoms, such as fatigue (difference of -62.5, P < 0.001), pain (difference of -34.9, P < 0.001), dyspnea (difference of -37.8, P < 0.001), insomnia (difference of -36.2, P < 0.001), diarrhea (difference of -23.8, P < 0.001), and financial problems (difference of -35.5, P < 0.001). However, in the conservative group, no improved QOL variables with a difference of ≥ 20 were observed (Table 5). QOL after stoma closure was also compared to the normal population reference, with similar scores for functions and symptoms (Table 5).\nQuality of life in the colostomy group and conservative group of chronic radiation proctopathy patients based on the EORTC QLQ-C30 scale\nWilcoxon rank-sum test. \nPoint (follow-up) – point (pretreatment). Δ(FU): Change in follow-up; SD: Standard deviation.\nQOL after stoma was assessed in the colostomy and conservative groups using EORTC QLQ-C30 questionnaires. As there are no similar reports in the Chinese population, we referred to the normal German population[26]. Osoba et al[26] suggested that a difference of ≥ 20 points in global health was considered to be clinically relevant. In the panel of patients who underwent stoma closure, QOL was dramatically improved compared to Pre-Op baseline QOL before stoma creation, including improved global health (difference of 40, P < 0.001), physical function (difference of 36.4, P < 0.001), role function (difference of 55, P < 0.001), emotional function (difference of 39.4, P < 0.001), social function (difference of 36.3, P < 0.001), and improved symptoms, such as fatigue (difference of -62.5, P < 0.001), pain (difference of -34.9, P < 0.001), dyspnea (difference of -37.8, P < 0.001), insomnia (difference of -36.2, P < 0.001), diarrhea (difference of -23.8, P < 0.001), and financial problems (difference of -35.5, P < 0.001). However, in the conservative group, no improved QOL variables with a difference of ≥ 20 were observed (Table 5). QOL after stoma closure was also compared to the normal population reference, with similar scores for functions and symptoms (Table 5).\nQuality of life in the colostomy group and conservative group of chronic radiation proctopathy patients based on the EORTC QLQ-C30 scale\nWilcoxon rank-sum test. \nPoint (follow-up) – point (pretreatment). Δ(FU): Change in follow-up; SD: Standard deviation.", "From March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. 
For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).\nDemographics and characteristics\nFisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.\nFlow chart of patient selection.\nIn the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. Higher bleeding scores (P = 0.033) and relatively decreased preoperative Hb levels (P = 0.051, no significant difference) were found in the diverting colostomy group compared to the ITT group because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers instead of severe bleeding.", "In the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusion before stoma compared to 8.0 mo ± 1.8 mo (P = 0.042) of bleeding duration and 6 (24%) (P < 0.001) transfusions in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatments. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion due to unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (Hb of 99 g/L), and 9 mo (Hb of 107 g/L) to 1 year (Hb 111 of g/L) and 2 years (Hb 117 g/L) after colostomy (Figure 2). Postoperative transfusions were conducted in 3 of 14 patients (transfusion requirements for 2 patients in other hospitals was unknown) in the colostomy group. The dynamic remission rates between the colostomy and conservative groups after surgery were as follows: 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1).\nClinical details and outcomes of 14 colostomy patients\nMedian value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation.\nDynamic changes in hemoglobin levels. Pre-Op: Pre-operative.", "Among the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure, and the remaining patient did not undergo stoma reversal due to concerns of surgical risks because of old age (83 years old). 
Stoma closure and endoscopic features

Among the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma creation to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Of the 14 patients, 13 (93%) underwent stoma closure; the remaining patient did not undergo stoma reversal because of concerns about surgical risk at an advanced age (83 years). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 parastomal hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by the Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure.

Endoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared with before stoma creation (Figure 3). The CTCAE bleeding score decreased from 2.7 ± 0.5 points pre-operatively to 0.8 ± 0.5 points (P < 0.001) at 1 year after stoma creation in the colostomy group; in the conservative group, the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatment (Table 1 and Figure 4).

Endoscopic changes before stoma and at reversal (Figure 3).

CTCAE bleeding scores before stoma and at stoma reversal (Figure 4).

Ultrasound findings before stoma and at reversal

Endorectal ultrasound (EUS) was analyzed retrospectively before stoma creation and at reversal in 5 patients; the remaining patients did not undergo EUS at both of these time points. In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features of CRP[24]. In this study, we found similar features before stoma creation and a marked decrease in superficial vascularity at reversal, which can explain the cessation of bleeding (Figure 5).

Endoscopic ultrasound features before stoma and at stoma reversal (Figure 5).

Improved magnetic resonance imaging features at stoma reversal

To evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, the width of the presacral space, and the thicknesses of the levator ani, gluteus maximus, obturator internus, and distal sigmoid colon (Table 3), as referred to in our previous study of radiation injury to the pelvis[25]. MRI scans at both pre-colostomy and reversal were available in 5 patients; the remaining 9 did not receive MRI at both time points. The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3).

Magnetic resonance imaging findings at pre-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy (Table 3).
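The rectal-wall comparison above is a small paired analysis (n = 5). The sketch below shows, purely as an illustration, how such a paired P value can be computed; the five value pairs are invented to roughly match the reported means, and the study does not state which paired test produced P = 0.047, so both common options are shown.

```python
# Illustrative sketch only: a paired comparison of rectal wall thickness before
# colostomy and at reversal, with invented values (means ~10.7 vs ~8.9).
from scipy.stats import ttest_rel, wilcoxon

pre_colostomy = [9.8, 17.9, 8.2, 9.1, 8.5]   # hypothetical
at_reversal   = [8.0, 11.4, 7.8, 9.0, 8.4]   # hypothetical

t_stat, p_t = ttest_rel(pre_colostomy, at_reversal)
w_stat, p_w = wilcoxon(pre_colostomy, at_reversal)
print(f"Paired t-test: t = {t_stat:.2f}, P = {p_t:.3f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, P = {p_w:.3f}")
```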
Good restored anorectal functions after stoma reversal

Anorectal function before stoma creation and after reversal was evaluated with CRPSS scores in 12 patients in the colostomy group. Rectal bleeding, perianal pain, tenesmus, urgency, and fecal/gas incontinence were all significantly improved after stoma reversal compared with before stoma creation. Additionally, all of these scores were < 2 points, indicating that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good anorectal function relative to the normal population (scores ≤ 1 point) were observed. Anorectal manometry, performed in 6 of the 14 patients, was also analyzed to objectively assess anorectal function before reversal. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed, and decreased sphincter function was found in 2 of the 6 cases (Table 4). These patients fulfilled the indications for reversal and achieved good anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed.

Rectal manometry assessment of anorectal functions before stoma reversal (Table 4).

Anorectal functions before stoma and after stoma reversal (Figure 6).
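To make the before-after symptom comparison concrete, the sketch below applies a Wilcoxon signed-rank test to paired scores for a single CRPSS item. The item name follows the scale described in the Methods, but the per-patient values are invented for demonstration and are not the study data.

```python
# Illustrative sketch only: paired per-item CRPSS scores before stoma creation
# and after reversal compared with a Wilcoxon signed-rank test.
from scipy.stats import wilcoxon

before_stoma   = [4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 4]   # hypothetical rectal bleeding scores
after_reversal = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # hypothetical

stat, p_value = wilcoxon(before_stoma, after_reversal)
print(f"Rectal bleeding item: W = {stat:.1f}, P = {p_value:.4f}")
```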
DISCUSSION

The literature reports that 1%-6% of CRP patients have transfusion-dependent bleeding; many of these cases can be treated with topical formalin, APC, or hyperbaric oxygen therapy[1]. Surgical resection of the rectal lesion is usually reserved as a last resort for hemorrhagic CRP because it is associated with high morbidity and mortality[3,13,19]. In our previous study, fecal diversion was found to be a simple, effective, and safe procedure for severe hemorrhagic CRP[16]. Theoretically, fecal diversion reduces the irritation of stool, accelerates the course of fibrosis, and thus relieves CRP bleeding rapidly[16].

In this study, we report the results of diverting colostomy and of conservative treatment (enemas, topical formalin, or APC) in a large cohort of patients with severe hemorrhagic CRP followed for at least 1 year. The results showed that stomas could be reversed in 93% of CRP cases with severe hemorrhage. Increased Hb and rapid cessation of bleeding were found in the colostomy group, whereas bleeding and Hb levels did not change in the conservative group. Moreover, anorectal function was much better after stoma reversal than before stoma creation, reaching the level of the normal population. It is important to note that colostomy was performed in a very select population of CRP patients; patients with CRP stricture or ulceration were excluded. Stoma complications occurred in 21% of cases, including 1 parastomal hernia due to a weakened abdominal wall in the parastomal zone, 1 stoma prolapse due to an intestine pulled out too far at stoma creation, and 1 stoma obstruction due to stricture. All of these complications resolved after stoma reversal. According to the literature, the complication rate of stoma is approximately 20%-50%, so the complication rate in this study is acceptable. Thus, colostomy is a better option that brings more benefit to CRP patients with severe bleeding than conservative treatment.

In our previous study, we found complete remission of telangiectasia and edema, recovery of mucosal integrity, and massive fibrosis in the submucosa at 3-4 years after diversion[16]. In this study, anorectal manometry conducted in some stoma patients showed good rectal compliance, resting pressure, and contraction. Rectal defecography was performed at reversal for all colostomy patients, and no rectal stricture was found. Thus, temporary diverting colostomy with reversal at an appropriate time can be a useful option for these patients.

In this study, most patients in the colostomy group presented with severe anemia and needed multiple transfusions. One patient, with an Hb level of 78 g/L, did not receive a transfusion before colostomy; in China, the blood supply is limited, and transfusion is generally performed in large central hospitals only when Hb is < 60 g/L. At 3 to 6 mo after diverting colostomy, bleeding gradually remitted and Hb levels increased. By 9 mo to 1 year after colostomy, most patients had achieved complete remission of bleeding, and Hb had almost returned to normal. Stoma reversal was conducted at 1 year after colostomy in 4 (31%) of 13 patients. In 5 cases, reversal was performed more than 2 years after colostomy because the patients did not know the colostomy could be reversed until we contacted them by telephone for follow-up; thus, the median duration of stoma was prolonged to 16 mo. In fact, most stomas in this cohort could be reversed at 1 year after colostomy.

Based on the results of our retrospective study[16], we started a prospective clinical trial enrolling severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No. NCT03397901). All patients were followed and evaluated for stoma closure for 1 year after colostomy. The indication for diverting colostomy was extended to hemorrhagic CRP with moderate anemia in our series, because preventative colostomy can relieve refractory bleeding before it progresses to severe, life-threatening anemia, and patients can obtain dramatic benefit from preventative fecal diversion.

To date, it has been unclear whether anorectal function can be preserved after stoma closure. In this study, we found that anorectal functions, including anal control and the release of gas and stool, could be restored, which also indicates that CRP became inactive after a period of diverting colostomy. Although increased rectal sensitivity and decreased rectal volume were observed at reversal by anorectal manometry, anorectal function after reversal was good compared with the normal population and improved compared with before stoma creation. No additional pelvic biofeedback treatments were required in these patients.
For some patients, an additional one to two rounds of APC were required to help control bleeding during the first 6 mo after colostomy. Nevertheless, the temporary stoma not only enabled the remission of anal symptoms but also allowed the biological functions of the anus to be restored, providing evidence to support fecal diversion as a helpful option for patients with severe hemorrhagic CRP.

In some studies, bile acid malabsorption and small-bowel bacterial overgrowth have been thought to cause diarrhea during acute and chronic radiation enteritis[1]; diverting colostomy cannot reverse these changes. However, we focused on hemorrhage in CRP, and diarrhea, which is common in small-bowel radiation injury, is different from tenesmus and is not the major symptom in most CRP cases.

According to our previous study, patients with severe hemorrhagic CRP usually experience poor global health and fatigue due to moderate to severe anemia[16]. Their social and emotional functions are also impaired by frequent stools and anxiety about bleeding, and they may avoid social activities because of the constant concern of finding a lavatory[1]. In this study, after the remission of anal symptoms, patients had good QOL, especially after stoma reversal, and were able to return to almost the same life as the normal population.

Anorectal function scored by the CRPSS was assessed before stoma creation and after reversal. We also analyzed EUS and pelvic MRI before stoma creation and at reversal to evaluate the severity of CRP. To objectively evaluate anorectal function before reversal, anorectal manometry was performed and retrospectively analyzed, though only in 6 patients, because manometry is not a routine test before stoma reversal.

This study presents our serial results of diverting colostomy for severe hemorrhagic CRP, which showed significant superiority over conservative treatment. However, there are several limitations. First, the study was retrospective, with potential recall bias. Second, the sample size was relatively small, which limits the grade of evidence, and a larger cohort is needed. Finally, EUS, MRI, and anorectal manometry were performed only in subsets of the patients undergoing colostomy reversal. We have therefore started a prospective cohort of diverting colostomy for severe hemorrhagic CRP that will provide more reliable evidence in the future.

In conclusion, diverting colostomy is a very effective and rapid method for achieving remission of severe bleeding in CRP patients. The stoma can be reversed, and good anorectal function can be restored after reversal in most patients.

ARTICLE HIGHLIGHTS

Research background

Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies, and severe, refractory bleeding is hard to manage. In our previous study, fecal diversion was very effective and rapid in achieving remission of rectal bleeding in severe CRP, but a diverting stoma is usually thought to be permanent according to the literature. Anorectal function and quality of life after stoma reversal remain unclear.
Research motivation

During the past 3 years, we have successfully performed colostomy reversals in a large cohort of CRP patients with severe bleeding and have followed these patients after reversal. In this series, we report the efficacy of colostomy, the rate of stoma closure after diverting colostomy, and anorectal function after reversal.

Research objectives

The aim of this study was to evaluate the efficacy of colostomy and the rate of stoma reversal in severe hemorrhagic CRP.

Research methods

Patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled retrospectively. Rectal bleeding, hemoglobin (Hb), endoscopic features, endorectal ultrasound (EUS), rectal manometry, and MRI scans were recorded. Anorectal function and quality of life before stoma creation and after stoma reversal were scored with the CRPSS and EORTC QLQ-C30 questionnaires.

Research results

After screening 738 consecutive CRP patients, 14 patients in the colostomy group and 25 patients in the conservative group (as controls) were enrolled. Hb gradually increased to normal levels within two years after colostomy, whereas no significant increase was observed in the conservative group. All 14 patients obtained complete remission of bleeding, and the colostomy was successfully reversed in 13 of 14 (93%); the only exception was one patient of very advanced age. Improved endoscopic telangiectasia and bleeding and decreased vascularity on EUS were observed; on MRI, the increased presacral space and thickened rectal wall characteristic of CRP were seen, with rectal wall thickness decreasing significantly at reversal. Anorectal function and quality of life were significantly improved after stoma reversal.

Research conclusions

Diverting colostomy is a very effective method for achieving remission of refractory hemorrhagic CRP. The stoma can be reversed, and anorectal function recovers after reversal.
Research perspectives

Preventative diverting colostomy can relieve refractory bleeding before it progresses to severe, life-threatening anemia, and patients can obtain dramatic benefit. Thus, we have started a prospective clinical trial enrolling severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No. NCT03397901), which will provide more evidence for the use of colostomy in severe CRP patients.
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients and ethical statements", "Inclusion and exclusion criteria", "Diagnosis, definitions, and scores", "Indications for fecal diversion and stoma closure", "Follow-up", "Anorectal functions", "Statistical analysis", "RESULTS", "Demographics and clinical outcomes", "Dynamically increasing hemoglobin levels after colostomy", "Stoma closure and endoscopic features", "Ultrasound findings before stoma and at reversal", "Improved magnetic resonance imaging features at stoma reversal", "Good restored anorectal functions after stoma reversal", "Improved quality of life at colostomy reversal", "DISCUSSION", "ARTICLE HIGHLIGHTS", "Research background", "Research motivation", "Research objectives", "Research methods", "Research results", "Research conclusions" ]
[ "Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. A permanent change in bowel habits may occur in 90% of patients. After pelvic radiotherapy, 20%-50% of patients will develop difficult bowel function, affecting their quality of life (QOL)[1,2]. CRP can cause more than 20 different symptoms. Rectal bleeding is the most common symptom and occurs in > 50% of CRP patients[1,3]. Transfusion-dependent severe bleeding occurs in 1%-5% of patients[1]. Pathologically, ischemia in the submucosa due to obliterative endarteritis and progressive fibrosis in macroscopic changes are the main causes[4].\nMedical treatment consists of topical sucralfate enemas[5], oral or topical sulfasalazine[6], and rebamipide[7]. However, these reagents are only effective in acute or mild CRP, whereas their efficacy is usually disappointing in CRP patients with moderate to severe hemorrhage. More invasive modalities include endoscopic argon plasma coagulation (APC)[8], topical 4%-10% formalin[9-12], radiofrequency ablation[13], and hyperbaric oxygen therapy[14,15], and are currently popular optional treatments for CRP. Most of these modalities lack randomized trial evidence. These treatment options are reported to be effective in controlling mild to moderate bleeding in most of the literature. Nonetheless, severe and refractory bleeding is difficult to manage[1,16]. Furthermore, APC or topical formalin requires multiple sessions for severe bleeding and can cause severe side effects, including perforation, strictures, and perianal pain[17].\nFecal diversion is reported to be effective in the management of severe CRP bleeding[16,18,19], as it can reduce irritation injury to the lesions to decrease hemorrhage[16]. However, the usage of fecal diversion is not adopted as widely as is APC or formalin. Most of the fecal diversion data are from the 1980s, and this option has not been well studied to date. We previously reported one retrospective cohort of CRP patients with severe hemorrhage who received colostomy[16]. The results showed that colostomy can bring much higher remission of severe bleeding (94%) than can conservative treatment (11%) with APC or topical formalin. Pathologically, chronic inflammation and progressive fibrosis are observed after stoma. Diverting stoma is usually thought to be permanent according to Ayerdi et al[20].\nAnorectal function and QOL after stoma reversal remain unclear. During the past 3 years, we have successfully performed colostomy reversals in a large cohort of CRP patients with severe bleeding and have follow-up data after colostomy reversal. Here, we report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversal for this series of CRP patients with severe bleeding.", " Patients and ethical statements Patients with hemorrhagic CRP who were treated after admission at The Sixth Affiliated Hospital of Sun Yat-Sen University from March 2008 to December 2018 were enrolled. Medical records and imaging and clinicopathological data were extracted from our electronic database. This study was approved by the Ethical Committee of The Sixth Affiliated Hospital of Sun Yat-Sen University and was performed according to the provisions of the World Medical Association’s Declaration of Helsinki in 1995 (updated in Tokyo, 2004). 
Due to the nature of a retrospective study, informed consent was waived.

Inclusion and exclusion criteria

CRP patients with refractory rectal bleeding who received diverting colostomy or conservative treatment for severe anemia requiring transfusion were enrolled. Patients who were lost to follow-up, had tumor recurrence, had a recto-vaginal fistula or other type of rectal fistula, or underwent rectal resection with preventive colostomy were excluded, because these conditions would affect the evaluation of the remission of rectal bleeding.
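A minimal sketch of this selection logic is shown below; the Patient record and its field names are hypothetical and are not taken from the study's database.

```python
# A minimal sketch of the cohort-selection logic described above (not the
# authors' code). The Patient record and its field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    lost_to_follow_up: bool = False
    tumor_recurrence: bool = False
    rectal_fistula: bool = False            # recto-vaginal or other rectal fistula
    resection_with_preventive_colostomy: bool = False

def eligible(p: Patient) -> bool:
    """True if none of the exclusion criteria apply."""
    return not (p.lost_to_follow_up or p.tumor_recurrence
                or p.rectal_fistula or p.resection_with_preventive_colostomy)

screened = [Patient(), Patient(tumor_recurrence=True), Patient(rectal_fistula=True)]
print([eligible(p) for p in screened])   # [True, False, False]
```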
Diagnosis, definitions, and scores

CRP was diagnosed by the combination of a history of pelvic radiation for malignancy, symptoms such as rectal bleeding, and endoscopic findings of CRP-specific changes such as telangiectasia and active bleeding in the rectum, together with the exclusion of other bleeding diseases. Other hemorrhagic causes, such as tumor recurrence and hemorrhoids, were excluded. Refractory severe bleeding was defined as frequent bleeding with severe anemia requiring transfusion. At admission, CRP patients in our center were first treated with noninvasive enemas; if there was no response, or bleeding recurred, invasive treatment such as APC (1-2 rounds) or topical formalin irrigation was used. Conservative treatment failure was defined as no response, or recurrence of frequent or severe rectal bleeding, after invasive treatment.

In this study, we used the modified subjective-objective management analysis (mSOMA) system that we designed previously to assess the severity of rectal bleeding[16]. The advantage of the mSOMA system is that it incorporates both the patients' subjective complaints and objective hemoglobin (Hb) levels from laboratory tests. Remission of bleeding was defined as complete cessation, or occasional bleeding that needed no further treatment. The CTCAE bleeding score was used to assess severity.

Indications for fecal diversion and stoma closure

The indications for diverting colostomy in hemorrhagic CRP were recurrent bleeding unresponsive to conservative treatments such as endoscopic APC or topical formalin, accompanied by severe anemia requiring transfusion. Transverse loop colostomy and the “gunsight” or “cross” type of stoma closure were performed according to the standard protocols used in our previous studies[16]. The criteria for stoma closure were as follows: remission of bleeding or only occasional rectal bleeding, regression of rectal mucosal edema, good anorectal function, exclusion of tumor recurrence, and no severe CRP complications such as fistula, stricture, or deep ulcer[16].

Follow-up

Patients were scheduled for follow-up through outpatient visits or telephone questionnaires at 6 mo, 9 mo, 1 year, 1.5 years, and 2 years after colostomy. QOL after stoma closure was evaluated with the EORTC QLQ-C30 questionnaire[21]. We focused on the following items in this study: the remission rate of bleeding, the rate of stoma reversal, QOL after stoma reversal, dynamic changes in Hb levels, stoma-related complications, and severe CRP complications.
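For context, QLQ-C30 item responses are conventionally converted to 0-100 scale scores by a linear transformation, as described in the EORTC scoring manual. The sketch below shows that transformation with hypothetical item responses; it is not part of the study's analysis.

```python
# Sketch of the conventional EORTC QLQ-C30 linear transformation to 0-100
# scores (per the EORTC scoring manual); item responses below are hypothetical.
def qlq_c30_score(item_responses, item_range, functional=True):
    """Convert raw QLQ-C30 item responses to a 0-100 scale score.

    item_responses: item answers (1-4, or 1-7 for the global health items)
    item_range:     max response minus min response (3 or 6)
    functional:     True for functional scales (higher raw score means more
                    difficulty, hence a lower function score); False for
                    symptom scales and global health (non-inverted formula).
    """
    raw = sum(item_responses) / len(item_responses)
    if functional:
        return (1 - (raw - 1) / item_range) * 100
    return ((raw - 1) / item_range) * 100

# Hypothetical examples: two 7-point global health items answered 6 and 5,
# and two 4-point physical-function items answered 1 and 2.
print(round(qlq_c30_score([6, 5], item_range=6, functional=False), 1))  # 75.0
print(round(qlq_c30_score([1, 2], item_range=3, functional=True), 1))   # 83.3
```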
Anorectal functions

In our previous study, tenesmus, stool frequency, and anorectal pain were common symptoms of active CRP[22]. Because there are no standard scales or scores to assess anorectal function precisely in CRP patients, we developed a new patient-reported outcome scale, the chronic radiation proctopathy symptom scale (CRPSS). The CRPSS covers the following items: watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence. The details are provided in Supplementary File 1. For CRP patients treated by colostomy, CRPSS scores were evaluated before stoma creation and at reversal.

Statistical analysis

Analyses of continuous variables were performed with Student's t-test. The χ2 test was used for categorical variables, and Fisher's exact test was used when appropriate. For non-parametric comparisons, the Wilcoxon rank-sum test was used. All statistical analyses were performed using SPSS version 23.0 (Chicago, IL, United States). P < 0.05 (two-tailed) was considered statistically significant.
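As an illustrative companion to the statistical methods above, the same families of tests are available in open-source tools (the study itself used SPSS). The sketch below runs each test named above on invented placeholder data.

```python
# Illustrative sketch only: SciPy equivalents of the tests described above,
# applied to invented placeholder values.
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency, fisher_exact, ranksums

hb_a = np.array([63, 55, 72, 48, 80, 61, 66])   # hypothetical Hb values, group A (g/L)
hb_b = np.array([88, 95, 79, 92, 84, 90, 87])   # hypothetical Hb values, group B (g/L)

t_stat, p_t = ttest_ind(hb_a, hb_b)                                   # continuous variables
chi2, p_chi2, dof, expected = chi2_contingency([[10, 4], [8, 17]])    # categorical variables
odds, p_fisher = fisher_exact([[3, 1], [1, 20]])                      # small expected counts
w_stat, p_w = ranksums(hb_a, hb_b)                                    # non-parametric comparison

print(f"t-test P = {p_t:.4f}, chi-square P = {p_chi2:.4f}, "
      f"Fisher P = {p_fisher:.4f}, rank-sum P = {p_w:.4f}")
```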
At admission, CRP patients in our center were first referred to noninvasive enemas, and if there was no response or recurrence, invasive treatment, such as APC (1-2 rounds) or formalin topical irrigation, was utilized. Conservative treatment failure was defined as no response or recurrence of frequent or severe rectal bleeding after invasive treatments.\nIn this study, we used a modified subjective-objective management analysis (known as mSOMA) system that we designed previously to assess the severity of rectal bleeding[16]. The advantages of the mSOMA system include both subjective complaints of patients and objective hemoglobin (Hb) levels according to laboratory tests. Remission of bleeding was defined as complete cessation or occasional bleeding that needed no further treatment. The CTCAE score of bleeding was used to assess its severity.", "The indications for diverting diversion in hemorrhagic CRP contained the following conditions: Recurrent bleeding and unresponsive to conservative treatments such as endoscopic APC or topical formalin, accompanying severe anemia, and transfusions needed. Transverse loop colostomy and “gunsight” or “cross‘’ type of stoma closure were created according to the standard protocols used in our previous studies[16]. The criteria of stoma closure were as follows: Remission or occasional rectal bleeding, regressive edema in rectal mucosa, good performance of anorectal function, exclusion of tumor recurrence, and no severe CRP complications such as fistula, stricture, and deep ulcer[16].", "Patients were scheduled for follow-up through outpatient visits or telephone questionnaires at 6 mo, 9 mo, 1 year, 1.5 years, and 2 years after colostomy. QOL after stoma closure was evaluated according to EORTC QLQ C30 questionnaires[21]. We focused on the following items in this study: The remission rate of bleeding, the rate of stoma reversal, QOL after stoma reversal, dynamic changes of Hb levels, stoma-related complications, and severe CRP complications.", "In our previous study, tenesmus, stool frequency, and anorectal pain were common symptoms in active CRP[22]. Because there are no standard scales or scores to assess anorectal function precisely in CRP patients, we developed a new scale system that is a patient self-reported scale of outcomes, namely, the chronic radiation proctopathy symptom scale (CRPSS). The CRPSS considers the following items: Watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence. The details are provided in Supplementary File 1. For CRP patients treated by colostomy, CRPSS scores were evaluated before stoma and at reversal.", "Analyses for continuous variables were performed by Student’s t-test. The χ2 test was used for categorical variables. Fisher’s exact test was conducted when appropriate. For non-parameter variables, the Wilcoxon rank sum tests were used. All statistical analyses were performed using SPSS version 23.0 (Chicago, IL, United States). P < 0.05 (two-tailed) was considered statistically significant.", " Demographics and clinical outcomes From March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. 
Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).\nDemographics and characteristics\nFisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.\nFlow chart of patient selection.\nIn the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. Higher bleeding scores (P = 0.033) and relatively decreased preoperative Hb levels (P = 0.051, no significant difference) were found in the diverting colostomy group compared to the ITT group because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers instead of severe bleeding.\nFrom March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).\nDemographics and characteristics\nFisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.\nFlow chart of patient selection.\nIn the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. 
Higher bleeding scores (P = 0.033) and relatively decreased preoperative Hb levels (P = 0.051, no significant difference) were found in the diverting colostomy group compared to the ITT group because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers instead of severe bleeding.\n Dynamically increasing hemoglobin levels after colostomy In the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusion before stoma compared to 8.0 mo ± 1.8 mo (P = 0.042) of bleeding duration and 6 (24%) (P < 0.001) transfusions in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatments. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion due to unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (Hb of 99 g/L), and 9 mo (Hb of 107 g/L) to 1 year (Hb 111 of g/L) and 2 years (Hb 117 g/L) after colostomy (Figure 2). Postoperative transfusions were conducted in 3 of 14 patients (transfusion requirements for 2 patients in other hospitals was unknown) in the colostomy group. The dynamic remission rates between the colostomy and conservative groups after surgery were as follows: 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1).\nClinical details and outcomes of 14 colostomy patients\nMedian value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation.\nDynamic changes in hemoglobin levels. Pre-Op: Pre-operative.\nIn the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusion before stoma compared to 8.0 mo ± 1.8 mo (P = 0.042) of bleeding duration and 6 (24%) (P < 0.001) transfusions in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatments. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion due to unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (Hb of 99 g/L), and 9 mo (Hb of 107 g/L) to 1 year (Hb 111 of g/L) and 2 years (Hb 117 g/L) after colostomy (Figure 2). Postoperative transfusions were conducted in 3 of 14 patients (transfusion requirements for 2 patients in other hospitals was unknown) in the colostomy group. 
The dynamic remission rates between the colostomy and conservative groups after surgery were as follows: 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1).\nClinical details and outcomes of 14 colostomy patients\nMedian value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation.\nDynamic changes in hemoglobin levels. Pre-Op: Pre-operative.\n Stoma closure and endoscopic features Among the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure, and the remaining patient did not undergo stoma reversal due to concerns of surgical risks because of old age (83 years old). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 para-stoma hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure.\nEndoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared to before stoma creation (Figure 3). The bleeding score by CTCAE decreased from Pre-Op 2.7 points ± 0.5 points to 0.8 points ± 0.5 points (P < 0.001) at 1 year after stoma in the colostomy group; the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatments in the conservative group (Table 1 and Figure 4).\nEndoscopic changes before stoma and at reversal.\nCTC bleeding scores before stoma and at stoma reversal.\nAmong the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure, and the remaining patient did not undergo stoma reversal due to concerns of surgical risks because of old age (83 years old). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 para-stoma hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure.\nEndoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared to before stoma creation (Figure 3). The bleeding score by CTCAE decreased from Pre-Op 2.7 points ± 0.5 points to 0.8 points ± 0.5 points (P < 0.001) at 1 year after stoma in the colostomy group; the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatments in the conservative group (Table 1 and Figure 4).\nEndoscopic changes before stoma and at reversal.\nCTC bleeding scores before stoma and at stoma reversal.\n Ultrasound findings before stoma and at reversal Endorectal ultrasound (EUS) was analyzed retrospectively before stoma and at reversal in 5 patients; the remaining patients did not receive EUS at either of these two time points. 
In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features in CRP[24]. In this study, we found similar features before stoma creation and a tremendous decrease in superficial vascularity, which can explain the cessation of bleeding (Figure 5).\nEndoscopic ultrasound features before stoma and at stoma reversal.\nEndorectal ultrasound (EUS) was analyzed retrospectively before stoma and at reversal in 5 patients; the remaining patients did not receive EUS at either of these two time points. In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features in CRP[24]. In this study, we found similar features before stoma creation and a tremendous decrease in superficial vascularity, which can explain the cessation of bleeding (Figure 5).\nEndoscopic ultrasound features before stoma and at stoma reversal.\n Improved magnetic resonance imaging features at stoma reversal To evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, width of the presacral space, and thicknesses of the levator ani, the gluteus maximus muscle, the obturator intemus, and the distal part of the sigmoid colon (Table 3), which was referred to in our previous study of radiation injury to the pelvis[25]. MRI scans both at pre-colostomy and at reversal were analyzed in 5 patients, as the remaining 9 did not receive MRI scans both pre-colostomy and at reversal. The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3).\nMagnetic resonance imaging findings at post-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy\nTo evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, width of the presacral space, and thicknesses of the levator ani, the gluteus maximus muscle, the obturator intemus, and the distal part of the sigmoid colon (Table 3), which was referred to in our previous study of radiation injury to the pelvis[25]. MRI scans both at pre-colostomy and at reversal were analyzed in 5 patients, as the remaining 9 did not receive MRI scans both pre-colostomy and at reversal. The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3).\nMagnetic resonance imaging findings at post-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy\n Good restored anorectal functions after stoma reversal Anorectal functions before stoma and after reversal were evaluated by CRPSS scores in all 12 patients in the colostomy group. The results showed that rectal bleeding, peritoneal pain, tenesmus, urgency and fecal/gas incontinency were significantly improved after stoma reversal compared to those before stoma creation. Additionally, all of these scores were < 2 points, which indicated that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good performance of anorectal function compared to the normal population (scores ≤ 1 point) were observed. 
We also analyzed anorectal manometry to objectively assess anorectal function before reversal in 6 of the 14 patients who received it. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed. Decreased sphincter functions were found in 2 of 6 cases (Table 4). These patients fulfilled the indications for reversal, and we found that these patients obtained good performance of anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed.\nRectal manometry assessment of anorectal functions before stoma reversals\nAnorectal functions before stoma and after stoma reversal.\nAnorectal functions before stoma and after reversal were evaluated by CRPSS scores in all 12 patients in the colostomy group. The results showed that rectal bleeding, peritoneal pain, tenesmus, urgency and fecal/gas incontinency were significantly improved after stoma reversal compared to those before stoma creation. Additionally, all of these scores were < 2 points, which indicated that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good performance of anorectal function compared to the normal population (scores ≤ 1 point) were observed. We also analyzed anorectal manometry to objectively assess anorectal function before reversal in 6 of the 14 patients who received it. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed. Decreased sphincter functions were found in 2 of 6 cases (Table 4). These patients fulfilled the indications for reversal, and we found that these patients obtained good performance of anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed.\nRectal manometry assessment of anorectal functions before stoma reversals\nAnorectal functions before stoma and after stoma reversal.\n Improved quality of life at colostomy reversal QOL after stoma was assessed in the colostomy and conservative groups using EORTC QLQ-C30 questionnaires. As there are no similar reports in the Chinese population, we referred to the normal German population[26]. Osoba et al[26] suggested that a difference of ≥ 20 points in global health was considered to be clinically relevant. In the panel of patients who underwent stoma closure, QOL was dramatically improved compared to Pre-Op baseline QOL before stoma creation, including improved global health (difference of 40, P < 0.001), physical function (difference of 36.4, P < 0.001), role function (difference of 55, P < 0.001), emotional function (difference of 39.4, P < 0.001), social function (difference of 36.3, P < 0.001), and improved symptoms, such as fatigue (difference of -62.5, P < 0.001), pain (difference of -34.9, P < 0.001), dyspnea (difference of -37.8, P < 0.001), insomnia (difference of -36.2, P < 0.001), diarrhea (difference of -23.8, P < 0.001), and financial problems (difference of -35.5, P < 0.001). However, in the conservative group, no improved QOL variables with a difference of ≥ 20 were observed (Table 5). QOL after stoma closure was also compared to the normal population reference, with similar scores for functions and symptoms (Table 5).\nQuality of life in the colostomy group and conservative group of chronic radiation proctopathy patients based on the EORTC QLQ-C30 scale\nWilcoxon rank-sum test. \nPoint (follow-up) – point (pretreatment). 
Δ(FU): Change in follow-up; SD: Standard deviation.\nQOL after stoma was assessed in the colostomy and conservative groups using EORTC QLQ-C30 questionnaires. As there are no similar reports in the Chinese population, we referred to the normal German population[26]. Osoba et al[26] suggested that a difference of ≥ 20 points in global health was considered to be clinically relevant. In the panel of patients who underwent stoma closure, QOL was dramatically improved compared to Pre-Op baseline QOL before stoma creation, including improved global health (difference of 40, P < 0.001), physical function (difference of 36.4, P < 0.001), role function (difference of 55, P < 0.001), emotional function (difference of 39.4, P < 0.001), social function (difference of 36.3, P < 0.001), and improved symptoms, such as fatigue (difference of -62.5, P < 0.001), pain (difference of -34.9, P < 0.001), dyspnea (difference of -37.8, P < 0.001), insomnia (difference of -36.2, P < 0.001), diarrhea (difference of -23.8, P < 0.001), and financial problems (difference of -35.5, P < 0.001). However, in the conservative group, no improved QOL variables with a difference of ≥ 20 were observed (Table 5). QOL after stoma closure was also compared to the normal population reference, with similar scores for functions and symptoms (Table 5).\nQuality of life in the colostomy group and conservative group of chronic radiation proctopathy patients based on the EORTC QLQ-C30 scale\nWilcoxon rank-sum test. \nPoint (follow-up) – point (pretreatment). Δ(FU): Change in follow-up; SD: Standard deviation.", "From March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).\nDemographics and characteristics\nFisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.\nFlow chart of patient selection.\nIn the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. 
DISCUSSION

The literature reports that 1%-6% of CRP patients have transfusion-dependent bleeding; many of these cases can be treated with topical formalin, APC, or hyperbaric oxygen[1]. Surgical resection of rectal lesions is usually reserved as a last resort for hemorrhagic CRP because it is associated with high morbidity and mortality[3,13,19]. In our previous study, fecal diversion was found to be a simple, effective, and safe procedure for severe hemorrhagic CRP[16]. Theoretically, fecal diversion reduces the irritation caused by stool and accelerates the course of fibrosis, thereby relieving CRP bleeding rapidly[16].

In this study, we report results for diverting colostomy and for conservative treatment with enemas, topical formalin, or APC in a large cohort of patients with severe hemorrhagic CRP who were followed for at least 1 year. The results showed that the stoma could be reversed in 93% of CRP cases with severe hemorrhage. Increased Hb and rapid cessation of bleeding were found in the colostomy group, whereas bleeding and Hb levels did not change in the conservative group. Moreover, anorectal function was greatly improved after stoma reversal compared with before stoma creation, reaching the level of the normal population. It is also important to note that colostomy was performed in a highly selected population of CRP patients; patients with CRP stricture and ulceration were excluded. Stoma complications occurred in 21% of cases, including 1 parastomal hernia due to a weakened abdominal wall in the parastomal zone, 1 stoma prolapse due to an intestine pulled too far at stoma creation, and 1 stoma obstruction due to stricture. All of these complications resolved after stoma reversal. According to the literature, the complication rate of stoma formation is approximately 20%-50%, so the complication rate in this study is acceptable. Thus, colostomy is a better option and can bring more benefit to CRP patients with severe bleeding than conservative treatment.

In our previous study, we found complete remission of telangiectasia and edema, recovery of mucosal integrity, and massive fibrosis in the submucosa at 3-4 years after diversion[16].
In this study, anorectal manometry was conducted in some stoma patients and showed good compliance, resting pressure, and contraction. Rectal defecography was performed at reversal in all colostomy patients, and no rectal stricture was found. Thus, temporary diverting colostomy with reversal at an appropriate time can be a useful option for these patients.

In this study, most patients in the colostomy group presented with severe anemia and needed multiple transfusions. One patient, with an Hb level of 78 g/L, did not receive a transfusion before colostomy. In China, the blood supply is limited, and transfusion is generally performed in large central hospitals only when the Hb is < 60 g/L. At 3 to 6 mo after diverting colostomy, bleeding gradually remitted and Hb levels increased. By 9 mo to 1 year after colostomy, most patients had achieved complete remission of bleeding and the Hb level had almost returned to normal. Stoma reversal was performed at 1 year after colostomy in 4 (31%) of 13 patients. In 5 cases, reversal was performed more than 2 years after colostomy because the patients did not know the colostomy could be reversed until we contacted them by telephone for follow-up. Thus, the median duration of stoma was prolonged to 16 mo. In fact, most stomas in this cohort could have been reversed at 1 year after colostomy.

Based on the results of our retrospective study[16], we started a prospective clinical trial to enroll severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No. NCT03397901). All patients were followed and evaluated for stoma closure for 1 year after colostomy. The indication for diverting colostomy was extended to hemorrhagic CRP with moderate anemia in our series because preventive colostomy can relieve refractory bleeding before it progresses to severe life-threatening anemia, and patients can obtain dramatic benefit from preventive fecal diversion.

To date, it is still unclear whether anorectal function can be preserved after stoma closure. In this study, we found that anorectal functions, including anal control and release of gas/stool, could be restored, which also indicated that CRP was inactive after a period of diverting colostomy. Although increased rectal sensitivity and decreased rectal volume were observed at reversal by anorectal manometry, anorectal function after reversal was good compared with the normal population and improved compared with that before stoma creation. No additional pelvic biofeedback treatment was required in these patients.

For some patients, an additional one to two rounds of APC were required to help control bleeding during the first 6 mo after colostomy. The temporary stoma not only enabled remission of anal symptoms but also restored the biological functions of the anus, providing evidence to support fecal diversion as a helpful option for severe hemorrhagic CRP patients.

In some studies, bile acid malabsorption and small-bowel bacterial overgrowth have been thought to cause diarrhea during acute and chronic radiation enteritis[1]. Diverting colostomy cannot reverse these changes. We focused on hemorrhage in CRP; diarrhea, which is common in small-bowel radiation injury, differs from tenesmus and is not the major symptom in most CRP cases.

According to our previous study, severe hemorrhagic CRP patients usually experience poor global health and fatigue due to moderate to severe anemia[16].
Their social and emotional functions are also impaired by frequent stools and anxiety about bleeding. They cannot attend social activities because of the constant concern of needing to find a lavatory[1]. In this study, after remission of the anal symptoms, we found that patients had good QOL, especially after stoma reversal, and were able to return to almost the same life as the normal population.

Anorectal functions scored by the CRPSS were assessed before stoma creation and after reversal. We also analyzed EUS and pelvic MRI before stoma creation and at reversal to evaluate the severity of CRP. To objectively evaluate anorectal function before reversal, anorectal manometry was performed and retrospectively analyzed, although only in 6 patients, because manometry is not a routine test before stoma reversal.

This study presents our serial results of diverting colostomy for severe hemorrhagic CRP. The results showed significant superiority over conservative treatments. However, this study has several limitations. First, it was retrospective, with potential recall bias. Second, the sample size of this cohort was relatively small, which affects the grade of evidence, and a larger cohort is essential. Finally, EUS, MRI, and anorectal manometry were performed only in patients undergoing colostomy reversal. Thus, we have started a prospective cohort study of diverting colostomy for severe hemorrhagic CRP that will provide more reliable evidence in the future.

In conclusion, diverting colostomy is a very effective and rapid method for the remission of severe bleeding in CRP patients. The stoma can be reversed, and good anorectal function can be restored after reversal in most patients.

Research background

Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. Severe and refractory bleeding is hard to manage. In our previous study, fecal diversion was very effective and rapid in achieving remission of rectal bleeding in severe hemorrhagic CRP, but a diverting stoma is usually thought to be permanent according to the literature. The anorectal function and quality of life after stoma reversal remain unclear.

Research motivation

During the past 3 years, we have successfully performed colostomy reversals in a larger cohort of CRP patients with severe bleeding and followed the patients after colostomy reversal. In this series, we report the efficacy of colostomy, the rate of stoma closure after diverting colostomy, and anorectal function after reversal.
Research objectives

The aim of this study was to evaluate the efficacy of colostomy and the rate of stoma reversal in severe hemorrhagic CRP.

Research methods

Patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled retrospectively. Rectal bleeding, hemoglobin (Hb), endoscopic features, endorectal ultrasound (EUS), rectal manometry, and MRI scans were recorded. Anorectal function and quality of life before stoma creation and after stoma reversal were scored with the EORTC QLQ-C30 questionnaire.

Research results

After screening 738 consecutive CRP patients, 14 patients in the colostomy group and 25 patients in the conservative group (as controls) were enrolled. Hb gradually increased to normal levels within two years after colostomy, whereas no significant increase was observed in the conservative group. All 14 patients obtained complete remission of bleeding, and the colostomy was successfully reversed in 13 of 14 (93%); the exception was one patient of very advanced age. Improved endoscopic telangiectasia and bleeding, decreased vascularity on EUS, an increased presacral space, and a thickened rectal wall on MRI were observed. Anorectal function and quality of life were significantly improved after stoma reversal.

Research conclusions

Diverting colostomy is a very effective method for the remission of refractory hemorrhagic CRP. Meanwhile, the stoma can be reversed, and anorectal function can be recovered after reversal.

Research perspectives

Preventive diverting colostomy can relieve refractory bleeding before it progresses to severe life-threatening anemia, and patients can obtain dramatic benefit. Thus, we have started a prospective clinical trial to enroll severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No.
NCT03397901), which will provide more evidence for the use of colostomy in severe CRP patients.
[ "Chronic radiation proctitis", "Hemorrhage", "Colostomy", "Anorectal function", "Quality of life" ]
INTRODUCTION

Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. A permanent change in bowel habits may occur in 90% of patients. After pelvic radiotherapy, 20%-50% of patients will develop difficult bowel function, affecting their quality of life (QOL)[1,2]. CRP can cause more than 20 different symptoms. Rectal bleeding is the most common symptom and occurs in > 50% of CRP patients[1,3]. Transfusion-dependent severe bleeding occurs in 1%-5% of patients[1]. Pathologically, submucosal ischemia due to obliterative endarteritis and progressive fibrosis are the main underlying changes[4]. Medical treatment consists of topical sucralfate enemas[5], oral or topical sulfasalazine[6], and rebamipide[7]. However, these agents are only effective in acute or mild CRP, and their efficacy is usually disappointing in CRP patients with moderate to severe hemorrhage. More invasive modalities, including endoscopic argon plasma coagulation (APC)[8], topical 4%-10% formalin[9-12], radiofrequency ablation[13], and hyperbaric oxygen therapy[14,15], are currently popular treatment options for CRP. Most of these modalities lack randomized trial evidence. They are reported in most of the literature to be effective in controlling mild to moderate bleeding. Nonetheless, severe and refractory bleeding is difficult to manage[1,16]. Furthermore, APC or topical formalin requires multiple sessions for severe bleeding and can cause severe side effects, including perforation, strictures, and perianal pain[17]. Fecal diversion is reported to be effective in the management of severe CRP bleeding[16,18,19], as it reduces irritation of the lesions and thereby decreases hemorrhage[16]. However, fecal diversion has not been adopted as widely as APC or formalin. Most of the fecal diversion data are from the 1980s, and this option has not been well studied to date. We previously reported a retrospective cohort of CRP patients with severe hemorrhage who received colostomy[16]. The results showed that colostomy achieved a much higher remission rate for severe bleeding (94%) than conservative treatment (11%) with APC or topical formalin. Pathologically, chronic inflammation and progressive fibrosis are observed after stoma creation. A diverting stoma is usually thought to be permanent according to Ayerdi et al[20]. Anorectal function and QOL after stoma reversal remain unclear. During the past 3 years, we have successfully performed colostomy reversals in a large cohort of CRP patients with severe bleeding and have follow-up data after colostomy reversal. Here, we report the efficacy of colostomy, the rate of stoma closure after diverting colostomy, and anorectal function after reversal in this series of CRP patients with severe bleeding.

MATERIALS AND METHODS

Patients and ethical statements

Patients with hemorrhagic CRP who were treated after admission at The Sixth Affiliated Hospital of Sun Yat-Sen University from March 2008 to December 2018 were enrolled. Medical records and imaging and clinicopathological data were extracted from our electronic database. This study was approved by the Ethical Committee of The Sixth Affiliated Hospital of Sun Yat-Sen University and was performed according to the provisions of the World Medical Association's Declaration of Helsinki in 1995 (updated in Tokyo, 2004). Owing to the retrospective nature of the study, informed consent was waived.
Inclusion and exclusion criteria

CRP patients with refractory rectal bleeding who received diverting colostomy or conservative treatment for severe anemia and who needed transfusions were enrolled. Patients who were lost to follow-up, had tumor recurrence or a recto-vaginal fistula or other type of rectal fistula, or who underwent rectal resection with preventive colostomy were excluded, because these conditions would affect the evaluation of remission of rectal bleeding.

Diagnosis, definitions, and scores

CRP was diagnosed by the combination of a history of pelvic radiation for malignancy, symptoms such as rectal bleeding, and endoscopic findings of CRP-specific changes such as telangiectasia and active bleeding in the rectum, together with exclusion of other bleeding diseases. Other hemorrhagic problems, such as tumor recurrence and hemorrhoids, were excluded. Refractory severe bleeding was defined as frequent bleeding with severe anemia requiring transfusion. At admission, CRP patients in our center were first offered noninvasive enemas; if there was no response, or bleeding recurred, invasive treatment such as APC (1-2 rounds) or topical formalin irrigation was used. Conservative treatment failure was defined as no response, or recurrence of frequent or severe rectal bleeding, after invasive treatment. In this study, we used a modified subjective-objective management analysis (mSOMA) system that we designed previously to assess the severity of rectal bleeding[16]. The advantage of the mSOMA system is that it includes both the subjective complaints of patients and objective hemoglobin (Hb) levels from laboratory tests. Remission of bleeding was defined as complete cessation, or occasional bleeding that needed no further treatment. The CTCAE bleeding score was used to assess severity.
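To make the grading workflow concrete, the sketch below pairs a subjective CTCAE-style bleeding grade with the objective Hb level, in the spirit of the mSOMA assessment described above. The thresholds, field names, and labels are hypothetical illustrations only; the actual mSOMA criteria are defined in the authors' earlier work[16] and in Supplementary material, not here.

```python
# Hypothetical illustration of combining a subjective bleeding grade with objective Hb,
# in the spirit of the mSOMA assessment; the cut-offs below are NOT the published criteria.
from dataclasses import dataclass

@dataclass
class BleedingAssessment:
    ctcae_grade: int   # clinician-assigned CTCAE-style bleeding grade (0-4)
    hb_g_per_l: float  # laboratory hemoglobin, g/L

    def severity(self) -> str:
        """Label severity; the anemia cut-offs here are illustrative assumptions."""
        if self.ctcae_grade >= 3 or self.hb_g_per_l < 60:
            return "severe (transfusion usually required)"
        if self.ctcae_grade == 2 or self.hb_g_per_l < 90:
            return "moderate"
        return "mild"

# Example: a patient resembling the colostomy group's mean pre-operative Hb of 63 g/L.
print(BleedingAssessment(ctcae_grade=3, hb_g_per_l=63).severity())
```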
Indications for fecal diversion and stoma closure

The indications for fecal diversion in hemorrhagic CRP were as follows: recurrent bleeding unresponsive to conservative treatments such as endoscopic APC or topical formalin, accompanied by severe anemia requiring transfusion. A transverse loop colostomy, with a "gunsight" or "cross" type of stoma closure, was created according to the standard protocols used in our previous studies[16]. The criteria for stoma closure were as follows: remission of, or only occasional, rectal bleeding; regression of rectal mucosal edema; good anorectal function; exclusion of tumor recurrence; and no severe CRP complications such as fistula, stricture, or deep ulcer[16].

Follow-up

Patients were scheduled for follow-up through outpatient visits or telephone questionnaires at 6 mo, 9 mo, 1 year, 1.5 years, and 2 years after colostomy. QOL after stoma closure was evaluated with the EORTC QLQ-C30 questionnaire[21]. We focused on the following items in this study: the remission rate of bleeding, the rate of stoma reversal, QOL after stoma reversal, dynamic changes in Hb levels, stoma-related complications, and severe CRP complications.

Anorectal functions

In our previous study, tenesmus, stool frequency, and anorectal pain were common symptoms in active CRP[22]. Because there are no standard scales or scores to assess anorectal function precisely in CRP patients, we developed a new patient-reported outcome scale, the chronic radiation proctopathy symptom scale (CRPSS). The CRPSS considers the following items: watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence.
The details are provided in Supplementary File 1. For CRP patients treated by colostomy, CRPSS scores were evaluated before stoma creation and at reversal.

Statistical analysis

Continuous variables were compared with Student's t-test. The χ2 test was used for categorical variables, with Fisher's exact test when appropriate. For non-parametric variables, the Wilcoxon rank-sum test was used. All statistical analyses were performed using SPSS version 23.0 (Chicago, IL, United States). P < 0.05 (two-tailed) was considered statistically significant.
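As a rough illustration of how these tests map onto standard tooling, the snippet below reproduces the same choices (Student's t-test for continuous variables, Fisher's exact test for small categorical tables, and the Wilcoxon rank-sum test for non-parametric comparisons) with SciPy on made-up numbers. The study itself used SPSS 23.0, so this is only a sketch of the analysis logic, not the authors' code.

```python
# Illustrative SciPy equivalents of the tests described above; all data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Continuous variable (e.g., age) between two groups: Student's t-test.
group_a = rng.normal(60, 8, 14)
group_b = rng.normal(58, 8, 25)
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Small 2x2 categorical table (e.g., transfusion yes/no per group): Fisher's exact test.
odds_ratio, p_fisher = stats.fisher_exact([[12, 2], [6, 19]])

# Non-parametric comparison (e.g., symptom scores): Wilcoxon rank-sum test.
scores_a = rng.integers(0, 5, 14)
scores_b = rng.integers(0, 5, 25)
w_stat, p_w = stats.ranksums(scores_a, scores_b)

print(f"t-test P = {p_t:.3f}, Fisher P = {p_fisher:.3f}, rank-sum P = {p_w:.3f}")
```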
RESULTS

Demographics and clinical outcomes

From March 2008 to February 2019, 738 consecutive CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverting colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded because of recto-vaginal fistulas or deep rectal ulcers with refractory perianal pain before stoma creation. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up.
The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up; thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including for the intention-to-treat (ITT) group of 50 colostomies. Among the enrolled patients, 14 comprised the diverting colostomy group and 25 comprised the conservative treatment group. The primary tumors in most patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1).

Demographics and characteristics

Fisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin.

Flow chart of patient selection.

In the ITT group, no postoperative follow-up was conducted other than for the 14 investigated colostomy patients. Higher bleeding scores (P = 0.033) and relatively lower preoperative Hb levels (P = 0.051, not significant) were found in the diverting colostomy group compared with the ITT group, because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers rather than severe bleeding.
Dynamically increasing hemoglobin levels after colostomy

In the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusions before stoma creation, compared with a bleeding duration of 8.0 mo ± 1.8 mo (P = 0.042) and 6 (24%) transfusions (P < 0.001) in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatment. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared with 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion because of the unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (99 g/L), and 9 mo (107 g/L) to 1 year (111 g/L) and 2 years (117 g/L) after colostomy (Figure 2). Postoperative transfusions were given to 3 of 14 patients in the colostomy group (the transfusion requirements of 2 patients treated at other hospitals were unknown). The dynamic remission rates in the colostomy vs conservative groups after surgery were 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1).

Clinical details and outcomes of 14 colostomy patients

Median value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation.

Dynamic changes in hemoglobin levels. Pre-Op: Pre-operative.
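For illustration, the 3-month remission comparison can be reconstructed as a 2 x 2 table and tested with Fisher's exact test. The counts below are back-calculated from the reported percentages (86% of 14 is about 12; 4% of 25 is 1), so they are approximate and not taken from the raw data.

```python
# Approximate reconstruction of the 3-month remission comparison (86% vs 4%);
# counts are back-calculated from the reported percentages, not taken from raw data.
from scipy.stats import fisher_exact

#                   remission, no remission
colostomy_3mo    = [12, 2]    # ~86% of 14 patients
conservative_3mo = [1, 24]    # 4% of 25 patients

odds_ratio, p_value = fisher_exact([colostomy_3mo, conservative_3mo])
print(f"Fisher's exact test: P = {p_value:.2g}")  # consistent with the reported P < 0.001
```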
Stoma closure and endoscopic features

Among the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma creation to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure; the remaining patient did not undergo stoma reversal because of concerns about surgical risk at an advanced age (83 years). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 parastomal hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by the Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure.

Endoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared with before stoma creation (Figure 3). The CTCAE bleeding score in the colostomy group decreased from 2.7 points ± 0.5 points Pre-Op to 0.8 points ± 0.5 points (P < 0.001) at 1 year after stoma creation; in the conservative group, the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatment (Table 1 and Figure 4).

Endoscopic changes before stoma and at reversal.

CTC bleeding scores before stoma and at stoma reversal.

Ultrasound findings before stoma and at reversal

Endorectal ultrasound (EUS) was analyzed retrospectively before stoma creation and at reversal in 5 patients; the remaining patients did not undergo EUS at both time points. In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features of CRP[24]. In this study, we found similar features before stoma creation and a marked decrease in superficial vascularity at reversal, which can explain the cessation of bleeding (Figure 5).
Endoscopic ultrasound features before stoma and at stoma reversal.

Improved magnetic resonance imaging features at stoma reversal

To evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, the width of the presacral space, and the thicknesses of the levator ani, the gluteus maximus, the obturator internus, and the distal sigmoid colon (Table 3), as described in our previous study of radiation injury to the pelvis[25]. MRI scans at both pre-colostomy and reversal were available in 5 patients; the remaining 9 did not undergo MRI at both time points. The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3).

Magnetic resonance imaging findings at post-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy

Good restored anorectal functions after stoma reversal

Anorectal functions before stoma creation and after reversal were evaluated by CRPSS scores in all 12 patients in the colostomy group. The results showed that rectal bleeding, perianal pain, tenesmus, urgency, and fecal/gas incontinence were significantly improved after stoma reversal compared with before stoma creation. Additionally, all of these scores were < 2 points, indicating that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good anorectal function compared with the normal population (scores ≤ 1 point) were observed. We also analyzed anorectal manometry, performed in 6 of the 14 patients, to objectively assess anorectal function before reversal. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed, and decreased sphincter function was found in 2 of the 6 cases (Table 4).
These patients fulfilled the indications for reversal and achieved good anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed.

Rectal manometry assessment of anorectal functions before stoma reversal

Anorectal functions before stoma and after stoma reversal.

Improved quality of life at colostomy reversal

QOL after stoma was assessed in the colostomy and conservative groups using the EORTC QLQ-C30 questionnaire. As there are no similar reports in the Chinese population, we referred to the normal German population[26]. Osoba et al[26] suggested that a difference of ≥ 20 points in global health is clinically relevant.
Demographics and clinical outcomes: From March 2008 to Feb 2019, 738 continual CRP patients treated in our center were screened. After exclusion of 644 patients with mild or moderate CRP, 94 with refractory bleeding and moderate to severe anemia were further screened. Among them, 50 patients were treated by diverted colostomy and 44 patients were treated with conservative therapies. Among the 50 patients treated by colostomy, 32 were excluded due to recto-vaginal fistulas or deep rectal ulcer with refractory perianal pain before stoma. The remaining 18 patients with refractory bleeding were enrolled. Among them, two died from cancer recurrence and two were lost to follow-up. The remaining 14 patients were enrolled in the final analysis as the colostomy group. For the conservative group, 19 patients were excluded, including 11 with tumor relapse and 8 who were lost to follow-up. Thus, 25 were enrolled in the final analysis (Figure 1). Demographic and baseline characteristics were collected, including the intention-to-treat (ITT) group of 50 colostomies. In the enrolled patients, 14 patients comprised the diverting colostomy group, and 25 cases comprised the conservative treatment group. The primary tumors in most of the patients (33/39, 85%) were cervical cancers. No significant differences in age, sex, type of primary tumor, or radiation dosage were found between the diverting colostomy group and the conservative treatment group or between the ITT colostomy group and the colostomy group (Table 1). Demographics and characteristics Fisher's exact test. ITT: Intention-to-treat; Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin. Flow chart of patient selection. In the ITT group, no postoperative follow-ups were conducted other than for the 14 investigated colostomy patients. Higher bleeding scores (P = 0.033) and relatively decreased preoperative Hb levels (P = 0.051, no significant difference) were found in the diverting colostomy group compared to the ITT group because 36 patients in the ITT group underwent colostomy for recto-vaginal fistulas or deep rectal ulcers instead of severe bleeding. 
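The between-group comparisons above (Table 1) rely on Fisher's exact test for categorical outcomes. As a hedged illustration only, not the authors' analysis code, and with counts reconstructed from the reported proportions (roughly 12 of 14 colostomy patients vs 1 of 25 conservative patients in remission at 3 mo), such a comparison can be run in Python with SciPy:

```python
# Hedged illustration (not the authors' analysis code): Fisher's exact test on a
# 2 x 2 table of remission vs no remission at 3 mo, with counts reconstructed from
# the reported proportions (86% of 14 colostomy vs 4% of 25 conservative patients).
from scipy.stats import fisher_exact

table = [[12, 2],    # colostomy group: remission, no remission
         [1, 24]]    # conservative group: remission, no remission

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.1e}")   # P < 0.001, consistent with the reported rates
```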
Dynamically increasing hemoglobin levels after colostomy: In the colostomy group, the median duration of bleeding was 13.0 mo ± 4.3 mo, and 12 (86%) patients received transfusion before stoma compared to 8.0 mo ± 1.8 mo (P = 0.042) of bleeding duration and 6 (24%) (P < 0.001) transfusions in the conservative group. In the conservative group, 8 patients received formalin irrigation and 4 patients received APC treatments. The preoperative (Pre-Op) Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. One patient with moderate anemia (Hb of 87 g/L) received colostomy for perianal pain. Another patient with severe anemia (Hb of 78 g/L) did not receive a transfusion due to unavailability of a blood supply before colostomy. The details of the 14 patients are listed in Table 2. Increasing Hb levels were observed from 3 mo (median Hb of 75 g/L), 6 mo (Hb of 99 g/L), and 9 mo (Hb of 107 g/L) to 1 year (Hb 111 of g/L) and 2 years (Hb 117 g/L) after colostomy (Figure 2). Postoperative transfusions were conducted in 3 of 14 patients (transfusion requirements for 2 patients in other hospitals was unknown) in the colostomy group. The dynamic remission rates between the colostomy and conservative groups after surgery were as follows: 86% vs 4% at 3 mo (P < 0.001), 86% vs 12% at 6 mo (P < 0.001), 93% vs 20% at 9 mo (P < 0.001), 100% vs 24% at 1 year (P < 0.001), and 100% vs 20% at 2 years (P < 0.001) (Table 1). Clinical details and outcomes of 14 colostomy patients Median value. Pre-Op: Pre-operative; Post-Op: Post-operative; Hb: Hemoglobin; RTD: Rectal telangiectasia density; APC: Argon plasma coagulation. Dynamic changes in hemoglobin levels. Pre-Op: Pre-operative. Stoma closure and endoscopic features: Among the 14 patients with severe bleeding treated by colostomy, all obtained complete remission of bleeding during follow-up. Only 3 (23%) of 13 patients received one round of APC after stoma to control bleeding, and 1 patient received 4% formalin irrigation after colostomy (Table 2). Among the 14 patients, 13 (93%) underwent stoma closure, and the remaining patient did not undergo stoma reversal due to concerns of surgical risks because of old age (83 years old). The median duration of stoma was 16 mo (range: 9-53 mo). Stoma complications occurred in 3 (21%) cases, including 1 para-stoma hernia, 1 stoma prolapse, and 1 stoma obstruction (Grade II by Clavien-Dindo classification)[23]. All 3 patients recovered well after stoma closure. Endoscopic findings of bleeding, confluent telangiectasia, and congested mucosa improved dramatically at stoma reversal compared to before stoma creation (Figure 3). The bleeding score by CTCAE decreased from Pre-Op 2.7 points ± 0.5 points to 0.8 points ± 0.5 points (P < 0.001) at 1 year after stoma in the colostomy group; the bleeding score was 2.0 ± 0.5 at initial diagnosis and 1.7 ± 0.9 (P = 0.1282) at 1 year after treatments in the conservative group (Table 1 and Figure 4). Endoscopic changes before stoma and at reversal. CTC bleeding scores before stoma and at stoma reversal. Ultrasound findings before stoma and at reversal: Endorectal ultrasound (EUS) was analyzed retrospectively before stoma and at reversal in 5 patients; the remaining patients did not receive EUS at either of these two time points. In our previous study, thickening of the rectal wall, blurred wall stratification, and increased vascularity were EUS features in CRP[24]. 
In this study, we found similar features before stoma creation and a tremendous decrease in superficial vascularity, which can explain the cessation of bleeding (Figure 5). Endoscopic ultrasound features before stoma and at stoma reversal. Improved magnetic resonance imaging features at stoma reversal: To evaluate the severity of the effect on the rectal wall and pelvic floor in the colostomy group, we also report magnetic resonance imaging (MRI) parameters related to CRP, including the thickness of the rectal wall, width of the presacral space, and thicknesses of the levator ani, the gluteus maximus muscle, the obturator intemus, and the distal part of the sigmoid colon (Table 3), which was referred to in our previous study of radiation injury to the pelvis[25]. MRI scans both at pre-colostomy and at reversal were analyzed in 5 patients, as the remaining 9 did not receive MRI scans both pre-colostomy and at reversal. The thickness of the rectal wall was significantly decreased at reversal (8.91 ± 1.61) compared with pre-colostomy (10.7 ± 4.29) (P = 0.047) (Table 3). Magnetic resonance imaging findings at post-colostomy and at reversal compared in 5 patients with chronic radiation proctopathy Good restored anorectal functions after stoma reversal: Anorectal functions before stoma and after reversal were evaluated by CRPSS scores in all 12 patients in the colostomy group. The results showed that rectal bleeding, peritoneal pain, tenesmus, urgency and fecal/gas incontinency were significantly improved after stoma reversal compared to those before stoma creation. Additionally, all of these scores were < 2 points, which indicated that anorectal function recovered very well after stoma reversal (Figure 6). Complete remission of bleeding and good performance of anorectal function compared to the normal population (scores ≤ 1 point) were observed. We also analyzed anorectal manometry to objectively assess anorectal function before reversal in 6 of the 14 patients who received it. Increased sensitivity of the rectal mucosa and decreased rectal volume were observed. Decreased sphincter functions were found in 2 of 6 cases (Table 4). These patients fulfilled the indications for reversal, and we found that these patients obtained good performance of anorectal function after reversal. Rectal defecography was also conducted before reversal, and no stricture was observed. Rectal manometry assessment of anorectal functions before stoma reversals Anorectal functions before stoma and after stoma reversal. Improved quality of life at colostomy reversal: QOL after stoma was assessed in the colostomy and conservative groups using EORTC QLQ-C30 questionnaires. As there are no similar reports in the Chinese population, we referred to the normal German population[26]. Osoba et al[26] suggested that a difference of ≥ 20 points in global health was considered to be clinically relevant. 
In the panel of patients who underwent stoma closure, QOL was dramatically improved compared to Pre-Op baseline QOL before stoma creation, including improved global health (difference of 40, P < 0.001), physical function (difference of 36.4, P < 0.001), role function (difference of 55, P < 0.001), emotional function (difference of 39.4, P < 0.001), social function (difference of 36.3, P < 0.001), and improved symptoms, such as fatigue (difference of -62.5, P < 0.001), pain (difference of -34.9, P < 0.001), dyspnea (difference of -37.8, P < 0.001), insomnia (difference of -36.2, P < 0.001), diarrhea (difference of -23.8, P < 0.001), and financial problems (difference of -35.5, P < 0.001). However, in the conservative group, no improved QOL variables with a difference of ≥ 20 were observed (Table 5). QOL after stoma closure was also compared to the normal population reference, with similar scores for functions and symptoms (Table 5). Quality of life in the colostomy group and conservative group of chronic radiation proctopathy patients based on the EORTC QLQ-C30 scale Wilcoxon rank-sum test. Point (follow-up) – point (pretreatment). Δ(FU): Change in follow-up; SD: Standard deviation. DISCUSSION: The literature reports that 1%-6% of CRP patients have transfusion-dependent bleeding; many of these cases can be treated with topical formalin, APC, or hyperbaric oxygen treatment[1]. Surgical resection of rectal lesions is usually reserved as the last resort for hemorrhagic CRP because it is associated with high morbidity and mortality[3,13,19]. In our previous study, fecal diversion was found to be a simple, effective, and safe procedure for severe hemorrhagic CRP[16]. Theoretically, fecal diversion can reduce the irritation of stool and accelerate the course of fibrosis and thus relieve CRP bleeding rapidly[16]. In this study, we report results for diverting colostomy and conservative treatments of enemas, topical formalin, or APC in a large cohort of patients with severe hemorrhagic CRP and who were followed for at least 1 year. The results showed that stomas can be reversed in 93% of CRP cases with severe hemorrhage. Increased Hb and rapid cessation of bleeding were found in the colostomy group, whereas bleeding and Hb levels were not changed in the conservative group. Moreover, anorectal function was greatly improved after stoma reversal than before stoma creation, reaching the levels of the normal population. In addition, it is important to note that colostomy was performed in a very select population of CRP patients and those patients with CRP stricture and ulcerations were excluded. Stoma complications occurred in 21% of cases, including 1 case of parastomal hernia due to a weakened abdominal wall of the parastomal zone, 1 case of stoma prolapse due to an intestine overly pulled at stoma creation, and 1 case of stoma obstruction due to stricture. All of these complications were recovered after stoma reversal. According to the literature, the complication rate of stoma is approximately 20%-50%, and the complication rate of stoma in this study is thus acceptable. Thus, colostomy is a better option and can bring more benefits to CRP patients with severe bleeding than can conservative treatments. In our previous study, we found complete remission of telangiectasia and edema, recovery of mucosal integrity, and massive fibrosis in the submucosa at 3-4 years after diversion[16]. 
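The QOL analysis above compares change scores (follow-up minus pretreatment) between groups with a Wilcoxon rank-sum test and treats a difference of ≥ 20 points in global health as clinically relevant (Osoba et al, as cited above). The sketch below outlines that computation; the change-score vectors are invented placeholders, not study data.

```python
# Hedged sketch of the QOL comparison: per-patient change scores (follow-up minus
# pretreatment) on an EORTC QLQ-C30 scale, compared between groups with a Wilcoxon
# rank-sum test. The vectors below are invented placeholders, not study data.
import numpy as np
from scipy.stats import ranksums

colostomy_change = np.array([40, 33, 50, 25, 42, 38, 45, 30, 36, 44, 29, 41])  # hypothetical
conservative_change = np.array([5, -8, 10, 0, 3, -2, 7, 4, 1, 6, -5, 2])       # hypothetical

stat, p = ranksums(colostomy_change, conservative_change)
clinically_relevant = colostomy_change.mean() >= 20   # >= 20-point threshold for global health
print(f"rank-sum statistic = {stat:.2f}, P = {p:.3g}, clinically relevant: {clinically_relevant}")
```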
In this study, anorectal manometry was conducted in some stoma patients and resulted in good compliance, resting pressure, and contraction. Rectal defecography was performed at reversal for all colostomy patients, and no rectal stricture was found. Thus, temporary diverting colostomy and reversal at an appropriate time can be a useful option for these patients. In this study, most patients in the colostomy group presented with severe anemia and needed multiple transfusions. One patient did not receive transfusion, with an Hb level of 78, before colostomy. In China, blood supply is limited, and transfusion is only conducted in large central hospitals when Hb is < 60. At 3 to 6 mo after diverting colostomy, bleeding gradually remitted and increased Hb levels were observed. By 9 mo to 1 year after colostomy, most patients achieved complete remission of bleeding and the Hb level had almost returned to normal. Stoma reversal was conducted at 1 year after colostomy in 4 (31%) of 13 patients. In 5 cases, reversal was performed at more than 2 years after colostomy because the patients did not know colostomy could be reversed until we contacted them via telephone for follow-up. Thus, the median duration of stoma was prolonged to 16 mo. In fact, most stomas in this cohort could be reversed at 1 year after colostomy. Based on the results of our retrospective study[16], we started a prospective clinical trial to enroll severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No. NCT03397901). All patients were followed and evaluated for stoma closure for 1 year after colostomy. The indication for diverting colostomy was extended to hemorrhagic CRP with moderate anemia in our series because preventative colostomy can relieve refractory bleeding before it progresses to severe life-threatening anemia and patients can obtain dramatic benefit from preventative fecal diversion. To date, it is still unclear whether anorectal function can be preserved after stoma closure. In this study, we found that anorectal functions, including anal control and release of gas/stool, could be restored, which also indicated that CRP was inactive after a period of diverting colostomy. Although increased rectal sensitivity and decreased rectal volume were observed at reversal by anorectal manometry, good anorectal function was achieved after reversal compared to the normal population and improved compared to that before stoma creation. No additional pelvic biofeedback treatments were required in these patients. For some patients, an additional one to two rounds of APC were required to help control bleeding during the first 6 mo after colostomy. The temporary stoma not only enabled the remission of anal symptoms but also restored the biological functions of the anus, providing evidence to support fecal diversion as a helpful option for severe hemorrhagic CRP patients. In some studies, bile acid malabsorption and small bowel bacterial overgrowth have been thought to be the cause of diarrhea during acute and chronic radiation enteritis[1]. Diverting colostomy cannot reverse these changes. We focused on hemorrhage in CRP and diarrhea, common in small-bowel radiation injury, which is different from tenesmus and is not the major symptom in most CRP cases. According to our previous study, severe hemorrhagic CRP patients usually experience poor global health and fatigue due to moderate to severe anemia[16]. Their social and emotional functions are also impacted by frequent stool and anxiety of bleeding. 
They cannot attend social activities due to concern of looking for a lavatory[1]. In this study, after the remission of anal symptoms, we found that patients had good QOL, especially after stoma reversal, and were able to return to almost the same life as the normal population. Anorectal functions scored by the CRPSS were assessed before stoma and after reversal. We also analyzed EUS and pelvic MRI before stoma and at reversal to evaluate the severity of CRP. To objectively evaluate anorectal function before reversal, anorectal manometry was performed and retrospectively analyzed in some patients, though only in 6 patients because manometry is not a routine test before stoma reversal. This study presents our serial results of diverting colostomy for severe hemorrhagic CRP. The results showed significant superiority over conservative treatments. However, there are several limitations to this study. First, this study was retrospective, with potential recall bias. Second, the sample size of this cohort was relatively small, which will affect the grade of evidence, and a larger cohort is essential. Finally, EUS, MRI, and anorectal manometry were only performed on patients undergoing colostomy reversal as a limitation. Thus, we started a prospective cohort of diverting colostomy for severe hemorrhagic CRP and will provide more reliable evidence in the future. In conclusion, diverting colostomy is a very effective and rapid method for the remission of severe bleeding in CRP patients. The stoma can be reversed and good anorectal function can be restored after reversal in most patients. ARTICLE HIGHLIGHTS: Research background Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. Severe and refractory bleeding is hard to manage. Diverting fecal diversion is very effective and fast in remission of rectal bleeding in the management of severe CRP bleeding in our previous study, but diverting stoma is usually thought to be permanent according to the literature. The anorectal function and quality of life after stoma reversal remain unclear. Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. Severe and refractory bleeding is hard to manage. Diverting fecal diversion is very effective and fast in remission of rectal bleeding in the management of severe CRP bleeding in our previous study, but diverting stoma is usually thought to be permanent according to the literature. The anorectal function and quality of life after stoma reversal remain unclear. Research motivation During the past 3 years, we have successfully performed colostomy reversals in a larger cohort of CRP patients with severe bleeding and followed patients after colostomy reversals. In this series study, we will report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversals. During the past 3 years, we have successfully performed colostomy reversals in a larger cohort of CRP patients with severe bleeding and followed patients after colostomy reversals. In this series study, we will report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversals. Research objectives The aim of this study was to evaluate the efficacy of colostomy and the rate of stoma reversals in severe hemorrhagic CRP. 
The aim of this study was to evaluate the efficacy of colostomy and the rate of stoma reversals in severe hemorrhagic CRP. Research methods Patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled retrospectively. Rectal bleeding, hemoglobin (Hb), endoscopic features, endo-ultrasound (EUS), rectal manometry, and MRI scans were recorded. Anorectal functions and quality of life before stoma and after stoma reversal were scored with EORTC-QOL-C30 questionnaires. Patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled retrospectively. Rectal bleeding, hemoglobin (Hb), endoscopic features, endo-ultrasound (EUS), rectal manometry, and MRI scans were recorded. Anorectal functions and quality of life before stoma and after stoma reversal were scored with EORTC-QOL-C30 questionnaires. Research results After screening 738 continual CRP patients, 14 patients in the colostomy group and 25 patients in the conservative group as controls were enrolled. The Hb was gradually increased to normal levels in two years after colostomy, while no significant increase was observed in the conservative group. All of 14 patients obtained complete remission of bleeding and colostomy was successfully reversed in 13 of 14 (93%), except one with very old age. Improved endoscopic telangiectasia and bleeding, decreased vascularity by EUS, increased presarcal space, and thickened rectal wall by MRI were observed. Anorectal functions and quality of life were significantly improved after stoma reverse. After screening 738 continual CRP patients, 14 patients in the colostomy group and 25 patients in the conservative group as controls were enrolled. The Hb was gradually increased to normal levels in two years after colostomy, while no significant increase was observed in the conservative group. All of 14 patients obtained complete remission of bleeding and colostomy was successfully reversed in 13 of 14 (93%), except one with very old age. Improved endoscopic telangiectasia and bleeding, decreased vascularity by EUS, increased presarcal space, and thickened rectal wall by MRI were observed. Anorectal functions and quality of life were significantly improved after stoma reverse. Research conclusions Diverting colostomy is a very effective method in the remission of refractory hemorrhagic CRP. Meanwhile, stoma can be reversed and anorectal functions can be recovered after reversal. Diverting colostomy is a very effective method in the remission of refractory hemorrhagic CRP. Meanwhile, stoma can be reversed and anorectal functions can be recovered after reversal. Research perspectives Preventative diverting colostomy can relieve refractory bleeding before it progresses to severe life-threatening anemia and patients can obtain dramatic benefit. Thus, we have started a prospective clinical trial to enroll severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No. NCT03397901), which will provide more evidence to the usage of colostomy in severe CRP patients. Preventative diverting colostomy can relieve refractory bleeding before it progresses to severe life-threatening anemia and patients can obtain dramatic benefit. Thus, we have started a prospective clinical trial to enroll severe hemorrhagic CRP patients who are suitable for diverting colostomy (Clinical Trial No. NCT03397901), which will provide more evidence to the usage of colostomy in severe CRP patients. 
Research background: Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. Severe and refractory bleeding is hard to manage. Diverting fecal diversion is very effective and fast in remission of rectal bleeding in the management of severe CRP bleeding in our previous study, but diverting stoma is usually thought to be permanent according to the literature. The anorectal function and quality of life after stoma reversal remain unclear. Research motivation: During the past 3 years, we have successfully performed colostomy reversals in a larger cohort of CRP patients with severe bleeding and followed patients after colostomy reversals. In this series study, we will report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversals. Research objectives: The aim of this study was to evaluate the efficacy of colostomy and the rate of stoma reversals in severe hemorrhagic CRP. Research methods: Patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled retrospectively. Rectal bleeding, hemoglobin (Hb), endoscopic features, endo-ultrasound (EUS), rectal manometry, and MRI scans were recorded. Anorectal functions and quality of life before stoma and after stoma reversal were scored with EORTC-QOL-C30 questionnaires. Research results: After screening 738 continual CRP patients, 14 patients in the colostomy group and 25 patients in the conservative group as controls were enrolled. The Hb was gradually increased to normal levels in two years after colostomy, while no significant increase was observed in the conservative group. All of 14 patients obtained complete remission of bleeding and colostomy was successfully reversed in 13 of 14 (93%), except one with very old age. Improved endoscopic telangiectasia and bleeding, decreased vascularity by EUS, increased presarcal space, and thickened rectal wall by MRI were observed. Anorectal functions and quality of life were significantly improved after stoma reverse. Research conclusions: Diverting colostomy is a very effective method in the remission of refractory hemorrhagic CRP. Meanwhile, stoma can be reversed and anorectal functions can be recovered after reversal.
Background: Severe chronic radiation proctopathy (CRP) is difficult to treat. Methods: To assess the efficacy of colostomy in CRP, patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled. Patients with tumor recurrence, rectal-vaginal fistula or other types of rectal fistulas, or who were lost to follow-up were excluded. Rectal bleeding, hemoglobin (Hb), endoscopic features, endo-ultrasound, rectal manometry, and magnetic resonance imaging findings were recorded. Quality of life before stoma and after stoma reversal was scored with questionnaires. Anorectal functions were assessed using the CRP symptom scale, which contains the following items: Watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence. Results: A total of 738 continual CRP patients were screened. After exclusion, 14 patients in the colostomy group and 25 in the conservative group were included in the final analysis. Preoperative Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. All 14 patients in the former group achieved complete remission of bleeding, and the colostomy was successfully reversed in 13 of 14 (93%), except for one very old patient. The median duration of stoma was 16 (range: 9-53) mo. The Hb level increased gradually from 75 g/L at 3 mo, 99 g/L at 6 mo, and 107 g/L at 9 mo to 111 g/L at 1 year and 117 g/L at 2 years after the stoma, but no bleeding cessation or significant increase in Hb levels was observed in the conservative group. Endoscopic telangiectasia and bleeding were greatly improved. Endo-ultrasound showed decreased vascularity, and magnetic resonance imaging revealed an increased presacral space and thickened rectal wall. Anorectal functions and quality of life were significantly improved after stoma reversal, when compared to those before stoma creation. Conclusions: Diverting colostomy is a very effective method in the remission of refractory hemorrhagic CRP. The stoma can be reversed, and anorectal functions can be recovered after reversal.
INTRODUCTION: Chronic radiation proctopathy (CRP) is a common and sometimes difficult issue after radiotherapy for pelvic malignancies. A permanent change in bowel habits may occur in 90% of patients. After pelvic radiotherapy, 20%-50% of patients will develop difficult bowel function, affecting their quality of life (QOL)[1,2]. CRP can cause more than 20 different symptoms. Rectal bleeding is the most common symptom and occurs in > 50% of CRP patients[1,3]. Transfusion-dependent severe bleeding occurs in 1%-5% of patients[1]. Pathologically, ischemia in the submucosa due to obliterative endarteritis and progressive fibrosis in macroscopic changes are the main causes[4]. Medical treatment consists of topical sucralfate enemas[5], oral or topical sulfasalazine[6], and rebamipide[7]. However, these reagents are only effective in acute or mild CRP, whereas their efficacy is usually disappointing in CRP patients with moderate to severe hemorrhage. More invasive modalities include endoscopic argon plasma coagulation (APC)[8], topical 4%-10% formalin[9-12], radiofrequency ablation[13], and hyperbaric oxygen therapy[14,15], and are currently popular optional treatments for CRP. Most of these modalities lack randomized trial evidence. These treatment options are reported to be effective in controlling mild to moderate bleeding in most of the literature. Nonetheless, severe and refractory bleeding is difficult to manage[1,16]. Furthermore, APC or topical formalin requires multiple sessions for severe bleeding and can cause severe side effects, including perforation, strictures, and perianal pain[17]. Fecal diversion is reported to be effective in the management of severe CRP bleeding[16,18,19], as it can reduce irritation injury to the lesions to decrease hemorrhage[16]. However, the usage of fecal diversion is not adopted as widely as is APC or formalin. Most of the fecal diversion data are from the 1980s, and this option has not been well studied to date. We previously reported one retrospective cohort of CRP patients with severe hemorrhage who received colostomy[16]. The results showed that colostomy can bring much higher remission of severe bleeding (94%) than can conservative treatment (11%) with APC or topical formalin. Pathologically, chronic inflammation and progressive fibrosis are observed after stoma. Diverting stoma is usually thought to be permanent according to Ayerdi et al[20]. Anorectal function and QOL after stoma reversal remain unclear. During the past 3 years, we have successfully performed colostomy reversals in a large cohort of CRP patients with severe bleeding and have follow-up data after colostomy reversal. Here, we report the efficacy of colostomy, the rate of stoma closure of diverting colostomy, and anorectal function after reversal for this series of CRP patients with severe bleeding. Research conclusions: We thank Dr. Jervoise Andreyev (a famous international leader in radiation proctitis, Consultant Gastroenterologist in Pelvic Radiation Disease, Royal Marsden NHS Foundation Trust, London SW3 6JJ, United Kingdom) for critical revisions of our paper.
Background: Severe chronic radiation proctopathy (CRP) is difficult to treat. Methods: To assess the efficacy of colostomy in CRP, patients with severe hemorrhagic CRP who underwent colostomy or conservative treatment were enrolled. Patients with tumor recurrence, rectal-vaginal fistula or other types of rectal fistulas, or who were lost to follow-up were excluded. Rectal bleeding, hemoglobin (Hb), endoscopic features, endo-ultrasound, rectal manometry, and magnetic resonance imaging findings were recorded. Quality of life before stoma and after stoma reversal was scored with questionnaires. Anorectal functions were assessed using the CRP symptom scale, which contains the following items: Watery stool, urgency, perianal pain, tenesmus, rectal bleeding, and fecal/gas incontinence. Results: A total of 738 continual CRP patients were screened. After exclusion, 14 patients in the colostomy group and 25 in the conservative group were included in the final analysis. Preoperative Hb was only 63 g/L ± 17.8 g/L in the colostomy group compared to 88.2 g/L ± 19.3 g/L (P < 0.001) in the conservative group. All 14 patients in the former group achieved complete remission of bleeding, and the colostomy was successfully reversed in 13 of 14 (93%), except for one very old patient. The median duration of stoma was 16 (range: 9-53) mo. The Hb level increased gradually from 75 g/L at 3 mo, 99 g/L at 6 mo, and 107 g/L at 9 mo to 111 g/L at 1 year and 117 g/L at 2 years after the stoma, but no bleeding cessation or significant increase in Hb levels was observed in the conservative group. Endoscopic telangiectasia and bleeding were greatly improved. Endo-ultrasound showed decreased vascularity, and magnetic resonance imaging revealed an increased presacral space and thickened rectal wall. Anorectal functions and quality of life were significantly improved after stoma reversal, when compared to those before stoma creation. Conclusions: Diverting colostomy is a very effective method in the remission of refractory hemorrhagic CRP. The stoma can be reversed, and anorectal functions can be recovered after reversal.
11,506
418
[ 497, 102, 69, 226, 114, 89, 119, 76, 3903, 400, 425, 277, 99, 181, 210, 327, 1307, 911, 80, 57, 23, 64, 117, 30 ]
25
[ "patients", "stoma", "colostomy", "bleeding", "crp", "reversal", "group", "rectal", "severe", "anorectal" ]
[ "patients pelvic radiotherapy", "pelvic radiotherapy", "radiation proctopathy good", "bowel radiation injury", "radiation proctopathy crp" ]
null
[CONTENT] Chronic radiation proctitis | Hemorrhage | Colostomy | Anorectal function | Quality of life [SUMMARY]
[CONTENT] Chronic radiation proctitis | Hemorrhage | Colostomy | Anorectal function | Quality of life [SUMMARY]
null
[CONTENT] Chronic radiation proctitis | Hemorrhage | Colostomy | Anorectal function | Quality of life [SUMMARY]
[CONTENT] Chronic radiation proctitis | Hemorrhage | Colostomy | Anorectal function | Quality of life [SUMMARY]
[CONTENT] Chronic radiation proctitis | Hemorrhage | Colostomy | Anorectal function | Quality of life [SUMMARY]
[CONTENT] Aged | Colostomy | Female | Gastrointestinal Hemorrhage | Humans | Male | Middle Aged | Quality of Life | Radiation Injuries | Rectal Diseases | Rectum | Retrospective Studies | Surgical Stomas | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Colostomy | Female | Gastrointestinal Hemorrhage | Humans | Male | Middle Aged | Quality of Life | Radiation Injuries | Rectal Diseases | Rectum | Retrospective Studies | Surgical Stomas | Treatment Outcome [SUMMARY]
null
[CONTENT] Aged | Colostomy | Female | Gastrointestinal Hemorrhage | Humans | Male | Middle Aged | Quality of Life | Radiation Injuries | Rectal Diseases | Rectum | Retrospective Studies | Surgical Stomas | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Colostomy | Female | Gastrointestinal Hemorrhage | Humans | Male | Middle Aged | Quality of Life | Radiation Injuries | Rectal Diseases | Rectum | Retrospective Studies | Surgical Stomas | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Colostomy | Female | Gastrointestinal Hemorrhage | Humans | Male | Middle Aged | Quality of Life | Radiation Injuries | Rectal Diseases | Rectum | Retrospective Studies | Surgical Stomas | Treatment Outcome [SUMMARY]
[CONTENT] patients pelvic radiotherapy | pelvic radiotherapy | radiation proctopathy good | bowel radiation injury | radiation proctopathy crp [SUMMARY]
[CONTENT] patients pelvic radiotherapy | pelvic radiotherapy | radiation proctopathy good | bowel radiation injury | radiation proctopathy crp [SUMMARY]
null
[CONTENT] patients pelvic radiotherapy | pelvic radiotherapy | radiation proctopathy good | bowel radiation injury | radiation proctopathy crp [SUMMARY]
[CONTENT] patients pelvic radiotherapy | pelvic radiotherapy | radiation proctopathy good | bowel radiation injury | radiation proctopathy crp [SUMMARY]
[CONTENT] patients pelvic radiotherapy | pelvic radiotherapy | radiation proctopathy good | bowel radiation injury | radiation proctopathy crp [SUMMARY]
[CONTENT] patients | stoma | colostomy | bleeding | crp | reversal | group | rectal | severe | anorectal [SUMMARY]
[CONTENT] patients | stoma | colostomy | bleeding | crp | reversal | group | rectal | severe | anorectal [SUMMARY]
null
[CONTENT] patients | stoma | colostomy | bleeding | crp | reversal | group | rectal | severe | anorectal [SUMMARY]
[CONTENT] patients | stoma | colostomy | bleeding | crp | reversal | group | rectal | severe | anorectal [SUMMARY]
[CONTENT] patients | stoma | colostomy | bleeding | crp | reversal | group | rectal | severe | anorectal [SUMMARY]
[CONTENT] severe | topical | crp | bleeding | severe bleeding | patients | hemorrhage | crp patients | 16 | difficult [SUMMARY]
[CONTENT] bleeding | rectal | crp | rectal bleeding | recurrence | patients | severe | defined | stoma | system [SUMMARY]
null
[CONTENT] colostomy effective method | remission refractory hemorrhagic | hemorrhagic crp stoma reversed | hemorrhagic crp stoma | method remission refractory | method remission refractory hemorrhagic | anorectal functions recovered | crp stoma reversed anorectal | crp stoma reversed | crp stoma [SUMMARY]
[CONTENT] stoma | colostomy | patients | bleeding | crp | reversal | rectal | severe | group | anorectal [SUMMARY]
[CONTENT] stoma | colostomy | patients | bleeding | crp | reversal | rectal | severe | group | anorectal [SUMMARY]
[CONTENT] CRP [SUMMARY]
[CONTENT] CRP | CRP ||| ||| ||| ||| CRP [SUMMARY]
null
[CONTENT] CRP ||| [SUMMARY]
[CONTENT] CRP ||| CRP | CRP ||| ||| ||| ||| CRP ||| 738 | CRP ||| 14 | 25 ||| Preoperative Hb | 17.8 ||| g/L | 88.2 | 19.3 | g/L ||| 14 | 13 | 14 | 93% | one ||| 16 | 9-53 ||| Hb | 75 | 3 mo, | 99 | 6 | 107 | 111 | 1 year | 117 | 2 years | Hb ||| ||| ||| ||| CRP ||| [SUMMARY]
[CONTENT] CRP ||| CRP | CRP ||| ||| ||| ||| CRP ||| 738 | CRP ||| 14 | 25 ||| Preoperative Hb | 17.8 ||| g/L | 88.2 | 19.3 | g/L ||| 14 | 13 | 14 | 93% | one ||| 16 | 9-53 ||| Hb | 75 | 3 mo, | 99 | 6 | 107 | 111 | 1 year | 117 | 2 years | Hb ||| ||| ||| ||| CRP ||| [SUMMARY]
null
28480430
The present study evaluates the effect of Spatholobus suberectus stem extract (SS), alone and in combination with heparin, in the management of pancreatitis.
BACKGROUND
Pancreatitis was induced by cerulean (50 μg/kg, i.p.) administered five times at 1-h intervals, without any drug pretreatment. Rats were then treated with SS (100 and 200 mg/kg, p.o.) and heparin (150 U/kg, i.p.), alone and in combination, for one week. Pancreatic weight and blood flow were then estimated, and biochemical parameters, including the concentrations of D-dimer and interleukin 1β (IL-1β) and the activities of amylase and lipase, were determined in the blood of rats with pancreatitis. The effects of drug treatment on pancreatic DNA synthesis and histopathology were also assessed in cerulean-induced pancreatitis rats.
MATERIAL AND METHODS
The results of this study suggest that treatment with SS, alone and in combination with heparin, significantly increased prothrombin time and pancreatic blood flow compared with the negative control group. Significant decreases in the concentrations of IL-1β and D-dimer and in the activities of amylase and lipase were found in the SS- and heparin-treated groups compared with the negative control group. Pancreatic DNA synthesis was also found to be increased in the groups treated with SS and heparin, alone and in combination, compared with the negative control group. Histopathological examination further revealed that treatment with SS and heparin, alone and in combination, reduced edema, hemorrhage, and leukocyte infiltration in the TS of pancreatic tissues.
RESULT
The present study concludes that treatment with SS alone effectively manages pancreatitis by suppressing the inflammatory pathway, and that SS potentiates the effect of heparin in the management of pancreatitis.
CONCLUSION
[ "Amylases", "Animals", "Anticoagulants", "Ceruletide", "Drug Therapy, Combination", "Fabaceae", "Fibrin Fibrinogen Degradation Products", "Heparin", "Interleukin-1beta", "Lipase", "Male", "Nucleic Acid Synthesis Inhibitors", "Pancreas", "Pancreatitis", "Phytotherapy", "Plant Extracts", "Plant Stems", "Protective Agents", "Prothrombin Time", "Rats", "Rats, Wistar" ]
5412224
Introduction
Pancreatitis is closely associated with coagulation disturbances, and inflammation and coagulation are closely related to each other. Coagulation is induced by inflammation, and activation of the inflammatory process is equally responsible for coagulation in the form of thrombosis (Levi et al, 2012). Reported studies suggest that disturbances in the microcirculation result in the formation of pro-inflammatory cytokines and oxygen-derived free radicals, the release of proteolytic enzymes, and the activation of leukocytes (Salomone et al, 2003). Thrombin is formed as inflammatory cytokines enhance the expression of tissue factor on the endothelium and monocytes (Maeda et al, 2006). Reported studies suggest that heparin/anticoagulants show a protective effect in pancreatitis in animal as well as clinical studies. Pretreatment with heparin prevents the development of pancreatitis induced by cerulean, taurocholate and bile in various animal studies (Gabryelewicz et al, 1969). Moreover, heparin restores pancreatic function in cerulean-induced pancreatitis when it is administered after the induction of pancreatitis (Dobosz et al, 1998; Qiu et al, 2007). A further study suggested that heparin manages hyperlipidemia-induced pancreatitis when co-administered with insulin, and that in severe pancreatitis it protects against encephalopathy (Alagözlü et al, 2006; Kyriakidis et al, 2006). Heparin manages pancreatitis by inhibiting the formation of thrombin through the heparin-antithrombin III complex (Warzecha et al, 2010). However, the effect of heparin on inflammatory parameters or pathways is not described in the literature. Spatholobus suberectus, which belongs to the family Leguminosae, is used traditionally in China for the management of several disorders such as rheumatism, anemia and abnormal menstruation (Lam et al, 2000; Li et al, 2003; Yen, 1992). Reported studies indicate that several compounds, including butin, liquiritigenin, dihydroquercetin, plathymenin, eriodictyol and neoisoliquiritigenin, have been isolated from the stem of SS (Lee et al, 2006). A study on the stem extract of SS showed that it effectively manages cerebral ischemia by attenuating NF-κB p65 and cytokines, thereby preventing DNA damage (Zhang et al, 2016). Moreover, anti-rheumatic, anti-inflammatory, antioxidant and anti-platelet activities of SS are reported in the literature (Zhang et al, 2016). Thus, this study evaluates the synergistic effect of Spatholobus suberectus extract when combined with heparin in pancreatitis.
null
null
null
null
Conclusion
The present study concludes that treatment with SS alone effectively manages pancreatitis by suppressing the inflammatory pathway. Moreover, SS in combination with heparin produces a synergistic effect, as the combination acts on the inflammatory and coagulation pathways simultaneously in pancreatitis.
[ "Animals", "Plant Extract", "Induction of Pancreatitis", "Estimation of pancreatic blood flow", "Estimation of Biochemical parameters", "Estimation of pancreatic DNA synthesis", "Histopathology study", "Statistical analysis", "Result", "Effect of Spatholobus suberectus and heparin on prothrombin time, weight of pancreas and pancreatic blood flow", "Effect of Spatholobus suberectus and heparin on biochemical parameters", "Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis", "Effect of Spatholobus suberectus and heparin on histopathology of pancreas" ]
[ "Healthy male Wistar rats of 180-200 g body weight were used for the pharmacological screening in the present study. The animals were housed at 25 ±2°C temperature, 12 h light/dark cycle and 60 ± 5 % of relative humidity. Rats were feed with standard diet and water ad libitum. Protocols of the present investigation for all the animal studies were approved by Capital University of Medical Science, Beijing.", "Spatholobus suberectus was procured from the local botanist. S. suberectus stem cut into small pieces and dried it under the shade. Later it was boiled for specific duration of time in distilled water and then filters the extract by using filter paper (150 μm). Thereafter the extract was lyophilized. During the experiment at the time of dosing extract of SS was dissolved in distilled H2O and for 5 min centrifuge it at 10000 RPM.", "All the rats were separated in to seven different group (n=8) such as Control group which recives only saline; Negative control group which receives cerulean (50 μg/kg, i.p.) five times at an interval of 1 h whithout any pretreatment of drug; Heparin treated group receives 150 U/kg of heparin 1 day after the cerulean by subcutaneous injection twice a day; SS treated group receives 100 and 200 mg/kg of SS extract p. o. 1 day after the cerulean injection; Heparin +SS treated group receives heparin (150 U/kg, s. c.) and SS (100 and 200 mg/kg) 1 day after the cerulean injection for the duration of one week (Baczy’ nska et al, 2004).", "All the animals were anesthetized by ketamine (50mg/kg, i.p.) at the end of protocol and pancreatic blood flow was estimated as per previously reported methods in exposed pancreas by using a laser Doppler flowmeter. Interpretation of data was represented as percent change from value obtained in control group without induction of pancreatitis.", "Later abdominal aorta was used for the collection of fresh blood sample and prothrombin time was estimated in it as international normalized ratio using test strip (Alere San Diego, Inc., USA).\nImmunoturbidimetric assay method was used for the estimation of concentration of plasma D-Dimer by using automatic coagulation analyzer (BCS XP System, Simens Healthcare Diagnostics, Garmany). Kodak Ectachem DT II System analyzer was used for the estimation of activity of amylase and lipase in serum using using Amyl and Lipa DT Slides. Interlukin 1β (IL-Ιβ) concentration was estimated in the serum by using IL-Ιβ Platinum Elisa (Konturek et al, 1994).", "Pancreas isolated from all the rats was weighed. Later for histological examination and pancreatic DNA synthesis was done for the pancreatic tissue. Assessment of labeled thymidine in to DNA was done for the estimation of rate of pancreatic DNA synthesis as per previously reported study. Rate of DNA synthesis was expressed as disintegrations of labeled thymidine per minute per microgram DNA (dpm^g DNA) (Dembi’ nski et al, 2006).", "Hematoxylin and eosin (H&E) staining was done for the examination of damage of pancreatic tissue as given in previous report. A scale of range from 0 to 3 was used for histological grading of necrosis, hemorrhages, vacuolization of acinar cells, leukocyte inflammatory infiltration and edema (Tomaszewska et al, 2000).", "Data of present study given as mean ± SD (n=8). Statistically analysis was done by one way ANOVA (Dunnett.) 
In this study values p<0.05 was considered as significant.", " Effect of Spatholobus suberectus and heparin on prothrombin time, weight of pancreas and pancreatic blood flow Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time in cerulean induced pancreatitis as shown in Figure 1. There was significant (p<0.05, p<0.01) increase in the prothrombin time in heparin, SS 100 & 200 mg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated group than negative control group.\nWeight of pancreas was significantly increases in cerulean induced pancreatitis group i.e. negative control group than control group. It was also observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases (p<0.05, p<0.01) the weight of pancreas compared to negative control group (Figure 2). Moreover treatment with Spatholobus suberectus and heparin alone and in combination increases the pancreatic blood flow which is calculated as % of control group than negative control group (Figure 3).\nEffect of Spatholobus suberectus in combination with heparin was observed on prothrombin time as international normalized ratio (INR) in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on pancreatic weight in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on pancreatic blood flow (% of control) in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on prothrombin time in cerulean induced pancreatitis as shown in Figure 1. There was significant (p<0.05, p<0.01) increase in the prothrombin time in heparin, SS 100 & 200 mg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated group than negative control group.\nWeight of pancreas was significantly increases in cerulean induced pancreatitis group i.e. negative control group than control group. It was also observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases (p<0.05, p<0.01) the weight of pancreas compared to negative control group (Figure 2). Moreover treatment with Spatholobus suberectus and heparin alone and in combination increases the pancreatic blood flow which is calculated as % of control group than negative control group (Figure 3).\nEffect of Spatholobus suberectus in combination with heparin was observed on prothrombin time as international normalized ratio (INR) in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on pancreatic weight in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on pancreatic blood flow (% of control) in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\n Effect of Spatholobus suberectus and heparin on biochemical parameters Effect of Spatholobus suberectus and heparin alone and in combination on the concentration of D-dimer in cerulean induced pancreatitis as shown Figure 4. 
It was observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases the concentration of D-dimer in cerulean induced pancreatitis than negative control group.\nEffect of Spatholobus suberectus in combination with heparin was observed on the concentration of D-dimer in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nIn cerulean induced pancreatitis rats the activity of lipase and amylase enzyme and concentration of IL-1β significantly increases than control group. There were significant decrease in the activity of lipase and amylase enzyme and concentration of IL-1β in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 5 a, b & c.\nEffect of Spatholobus suberectus and heparin on biochemical parameter in cerulean induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interleukin 1β\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\n Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis Effect of Spatholobus suberectus and heparin alone and in combination on pancreatic DNA synthesis was shown in Figure 6. There were significant (p<0.05, p<0.01) increase in the pancreatic DNA synthesis in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 6.\nEffect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus and heparin alone and in combination on pancreatic DNA synthesis was shown in Figure 6. There were significant (p<0.05, p<0.01) increase in the pancreatic DNA synthesis in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 6.\nEffect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean induced pancreatitis. 
Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\n Effect of Spatholobus suberectus and heparin on histopathology of pancreas Histopathology study was performed on Spatholobus suberectus and heparin treated cerulean induced pancreatitis rats as shown in Table 1 and Figure 7. It was observed that pancreas of cerulean induced pancreatitis rats shows severe edema, moderate diffuse infiltration of inflammatory leukocyte and hemorrhages. Acinar cell shows more than 50% of vacuolization and signs of necrosis was absent in the TS of pancreas. Treatment with Spatholobus suberectus and heparin alone and in combination attenuates the development of pancreatitis. Histopathological representation shows that heparin in combination with SS significant decline in vacuolization of acinar cells, inflammatory infiltration and pancreatic edema.\nEffect of Spatholobus suberectus and heparin on histopathology signs of pancreas\nEffect of drug treatment on histopathology of pancreatic tissue in cerulean induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg.\nHistopathology study was performed on Spatholobus suberectus and heparin treated cerulean induced pancreatitis rats as shown in Table 1 and Figure 7. It was observed that pancreas of cerulean induced pancreatitis rats shows severe edema, moderate diffuse infiltration of inflammatory leukocyte and hemorrhages. Acinar cell shows more than 50% of vacuolization and signs of necrosis was absent in the TS of pancreas. Treatment with Spatholobus suberectus and heparin alone and in combination attenuates the development of pancreatitis. Histopathological representation shows that heparin in combination with SS significant decline in vacuolization of acinar cells, inflammatory infiltration and pancreatic edema.\nEffect of Spatholobus suberectus and heparin on histopathology signs of pancreas\nEffect of drug treatment on histopathology of pancreatic tissue in cerulean induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg.", "Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time in cerulean induced pancreatitis as shown in Figure 1. There was significant (p<0.05, p<0.01) increase in the prothrombin time in heparin, SS 100 & 200 mg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated group than negative control group.\nWeight of pancreas was significantly increases in cerulean induced pancreatitis group i.e. negative control group than control group. It was also observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases (p<0.05, p<0.01) the weight of pancreas compared to negative control group (Figure 2). 
Moreover treatment with Spatholobus suberectus and heparin alone and in combination increases the pancreatic blood flow which is calculated as % of control group than negative control group (Figure 3).\nEffect of Spatholobus suberectus in combination with heparin was observed on prothrombin time as international normalized ratio (INR) in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on pancreatic weight in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin was observed on pancreatic blood flow (% of control) in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control", "Effect of Spatholobus suberectus and heparin alone and in combination on the concentration of D-dimer in cerulean induced pancreatitis as shown Figure 4. It was observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases the concentration of D-dimer in cerulean induced pancreatitis than negative control group.\nEffect of Spatholobus suberectus in combination with heparin was observed on the concentration of D-dimer in cerulean induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nIn cerulean induced pancreatitis rats the activity of lipase and amylase enzyme and concentration of IL-1β significantly increases than control group. There were significant decrease in the activity of lipase and amylase enzyme and concentration of IL-1β in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 5 a, b & c.\nEffect of Spatholobus suberectus and heparin on biochemical parameter in cerulean induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interleukin 1β\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control", "Effect of Spatholobus suberectus and heparin alone and in combination on pancreatic DNA synthesis was shown in Figure 6. There were significant (p<0.05, p<0.01) increase in the pancreatic DNA synthesis in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 6.\nEffect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control", "Histopathology study was performed on Spatholobus suberectus and heparin treated cerulean induced pancreatitis rats as shown in Table 1 and Figure 7. It was observed that pancreas of cerulean induced pancreatitis rats shows severe edema, moderate diffuse infiltration of inflammatory leukocyte and hemorrhages. Acinar cell shows more than 50% of vacuolization and signs of necrosis was absent in the TS of pancreas. Treatment with Spatholobus suberectus and heparin alone and in combination attenuates the development of pancreatitis. 
Histopathological representation shows that heparin in combination with SS significant decline in vacuolization of acinar cells, inflammatory infiltration and pancreatic edema.\nEffect of Spatholobus suberectus and heparin on histopathology signs of pancreas\nEffect of drug treatment on histopathology of pancreatic tissue in cerulean induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Material and Methods", "Animals", "Plant Extract", "Induction of Pancreatitis", "Estimation of pancreatic blood flow", "Estimation of Biochemical parameters", "Estimation of pancreatic DNA synthesis", "Histopathology study", "Statistical analysis", "Result", "Effect of Spatholobus suberectus and heparin on prothrombin time, weight of pancreas and pancreatic blood flow", "Effect of Spatholobus suberectus and heparin on biochemical parameters", "Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis", "Effect of Spatholobus suberectus and heparin on histopathology of pancreas", "Discussion", "Conclusion" ]
[ "Pancreatitis is a coagulation-like disorder. However, inflammation and coagulation is closely related to each other. Coagulation is induced by inflammation and activation of process of inflammation equally responsible for the coagulation as thrombosis (Levi et al, 2012). Reported study suggested that disturbance in microcirculation results in formation of pro-inflammatory cytokines; oxygen derived free radicals, release of proteolytic enzyme and activation of leukocytes (Salomone et al, 2003). Thrombin forms as inflammatory cytokines enhances the expression of tissue factor on endothelium and monocytes (Maeda et al, 2006).\nReported study suggested that heparin/anticoagulant shows protective effect in pancreatitis in animal as well as clinical study. Pretreatment with heparin ceases the development of pancreatitis induced by cerulean, taurocholate and bile on various animal studies (Gabryelewicz et al, 1969). Moreover heparin restores the pancreatic function in cerulean induced pancreatitis if it is administered after the induction of pancreatitis (Dobosz et al, 1998; Qiu et al, 2007). Moreover a study suggested that heparin manages the hyperlipidemia induced pancreatitis when co administered with insulin and in sever pancreatitis it protects the encephalopathy (Alagözlü et al, 2006; Kyriakidis et al, 2006). Heparin manages the pancreatitis by inhibiting the formation of thrombin through heparin-antithrombin III complex (Warzecha et al, 2010). However, effect of heparin over inflammatory parameters or pathway is not given in the literatures.\nSpatholobus suberectus belongs to family: Leguminosae used for the management of several disorders such as rheumatism, anemia and abnormal menstruation traditionally in China (Lam et al, 2000; Li et al, 2003; Yen, 1992). Reported study suggested that several compounds like butin, liquiritigenin, dihydroquercetin, plathymenin, eriodictyol and neoisoliquiritigenin were been isolated from stem of SS (Lee et al, 2006). A study on stem extract of SS effectively manages cerebral ischemia by attenuating NF-κΒ p65 and cytokines and thereby prevents the DNA damage (Zhang et al, 2016). Moreover studies like anti rheumatic, anti inflammatory, antioxidant and anti platelet activity of SS are reported in the literature (Zhang et al, 2016). Thus this study evaluates the synergistic effect of Spatholobus suberectus extract when treated with heparin in pancreatitis.", " Animals Healthy male Wistar rats of 180-200 g body weight were used for the pharmacological screening in the present study. The animals were housed at 25 ±2°C temperature, 12 h light/dark cycle and 60 ± 5 % of relative humidity. Rats were feed with standard diet and water ad libitum. Protocols of the present investigation for all the animal studies were approved by Capital University of Medical Science, Beijing.\nHealthy male Wistar rats of 180-200 g body weight were used for the pharmacological screening in the present study. The animals were housed at 25 ±2°C temperature, 12 h light/dark cycle and 60 ± 5 % of relative humidity. Rats were feed with standard diet and water ad libitum. Protocols of the present investigation for all the animal studies were approved by Capital University of Medical Science, Beijing.\n Plant Extract Spatholobus suberectus was procured from the local botanist. S. suberectus stem cut into small pieces and dried it under the shade. Later it was boiled for specific duration of time in distilled water and then filters the extract by using filter paper (150 μm). 
"Healthy male Wistar rats weighing 180-200 g were used for pharmacological screening in the present study. The animals were housed at 25 ± 2°C, a 12 h light/dark cycle and 60 ± 5% relative humidity. Rats were fed a standard diet with water ad libitum. The protocols of the present investigation for all animal studies were approved by Capital University of Medical Science, Beijing.", "Spatholobus suberectus was procured from a local botanist. The S. suberectus stem was cut into small pieces and dried under shade. It was then boiled in distilled water for a specified duration, and the extract was filtered through filter paper (150 μm) and lyophilized. At the time of dosing, the SS extract was dissolved in distilled H2O and centrifuged at 10,000 rpm for 5 min.", "The rats were separated into seven groups (n=8): a control group receiving only saline; a negative control group receiving cerulean (50 μg/kg, i.p.) five times at an interval of 1 h without any drug pretreatment; a heparin-treated group receiving heparin 150 U/kg by subcutaneous injection twice a day starting 1 day after cerulean; SS-treated groups receiving 100 or 200 mg/kg of SS extract p.o. starting 1 day after the cerulean injection; and heparin+SS-treated groups receiving heparin (150 U/kg, s.c.) and SS (100 or 200 mg/kg) starting 1 day after the cerulean injection, for a duration of one week (Baczyńska et al, 2004).", "At the end of the protocol, all animals were anesthetized with ketamine (50 mg/kg, i.p.) and pancreatic blood flow was estimated in the exposed pancreas using a laser Doppler flowmeter, as per previously reported methods. Data were expressed as the percent change from the value obtained in the control group without induction of pancreatitis.",
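As a worked illustration of the percent-of-control normalization described above, the following sketch uses made-up laser-Doppler flux values; only the normalization step reflects the method, the numbers do not.

```python
import statistics

# Made-up perfusion readings (arbitrary units), n=8 per group.
control_flux = [412, 398, 405, 420, 390, 415, 402, 408]   # untreated control group
treated_flux = [377, 362, 381, 359, 370, 368, 374, 365]   # any treated group

# Express each treated animal's flow as a percentage of the control-group mean.
control_mean = statistics.mean(control_flux)
percent_of_control = [100.0 * f / control_mean for f in treated_flux]

print(f"mean = {statistics.mean(percent_of_control):.1f} % of control, "
      f"SD = {statistics.stdev(percent_of_control):.1f}")
```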
"Fresh blood samples were then collected from the abdominal aorta, and prothrombin time was estimated as the international normalized ratio using a test strip (Alere San Diego, Inc., USA).\nAn immunoturbidimetric assay was used to estimate the plasma D-dimer concentration on an automatic coagulation analyzer (BCS XP System, Siemens Healthcare Diagnostics, Germany). A Kodak Ectachem DT II System analyzer was used to estimate serum amylase and lipase activity using Amyl and Lipa DT slides. The serum interleukin 1β (IL-1β) concentration was estimated using an IL-1β Platinum ELISA (Konturek et al, 1994).",
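The test strip reports prothrombin time directly as an INR, so no calculation is needed in practice; the sketch below only illustrates the standard PT-to-INR relationship, with an assumed mean normal PT and ISI.

```python
# Illustrative only: the point-of-care strip outputs INR directly. This just shows
# the conventional relationship INR = (PT_test / MNPT) ** ISI, where MNPT (mean
# normal prothrombin time) and ISI (international sensitivity index) are
# reagent-specific constants; the values used here are assumptions.
def inr(pt_seconds: float, mnpt_seconds: float = 12.5, isi: float = 1.0) -> float:
    """Convert a prothrombin time in seconds to an INR."""
    return (pt_seconds / mnpt_seconds) ** isi

print(f"{inr(18.0):.2f}")  # a prolonged PT of 18 s -> INR ~1.44 under these assumptions
```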
"The pancreas isolated from each rat was weighed, and pancreatic tissue was used for histological examination and measurement of pancreatic DNA synthesis. The rate of pancreatic DNA synthesis was estimated from the incorporation of labeled thymidine into DNA, as per a previously reported study, and expressed as disintegrations of labeled thymidine per minute per microgram of DNA (dpm/μg DNA) (Dembiński et al, 2006).", "Hematoxylin and eosin (H&E) staining was performed to examine pancreatic tissue damage as described in a previous report. A scale ranging from 0 to 3 was used for histological grading of necrosis, hemorrhages, vacuolization of acinar cells, leukocyte inflammatory infiltration and edema (Tomaszewska et al, 2000).", "Data of the present study are given as mean ± SD (n=8). Statistical analysis was done by one-way ANOVA followed by Dunnett's test; p<0.05 was considered significant.",
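A minimal sketch of the stated analysis (one-way ANOVA with Dunnett's comparisons against the negative control, significance at p<0.05) is shown below; it assumes SciPy ≥ 1.11 for scipy.stats.dunnett and uses simulated placeholder data rather than the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data standing in for one endpoint (e.g. serum amylase), n=8 per group.
negative_control = rng.normal(100, 10, 8)   # cerulean only
heparin          = rng.normal(85, 10, 8)
ss_200           = rng.normal(80, 10, 8)
heparin_ss_200   = rng.normal(70, 10, 8)

# One-way ANOVA across the negative control and treated groups.
f_stat, p_anova = stats.f_oneway(negative_control, heparin, ss_200, heparin_ss_200)

# Dunnett's test: each treatment compared against the negative control.
dunnett = stats.dunnett(heparin, ss_200, heparin_ss_200, control=negative_control)

print(f"ANOVA p = {p_anova:.4f}")
for name, p in zip(["heparin", "SS 200", "heparin+SS 200"], dunnett.pvalue):
    mark = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    print(f"{name} vs negative control: p = {p:.4f} ({mark})")
```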
" Effect of Spatholobus suberectus and heparin on prothrombin time, weight of pancreas and pancreatic blood flow The effect of Spatholobus suberectus in combination with heparin on prothrombin time in cerulean-induced pancreatitis is shown in Figure 1. There was a significant (p<0.05, p<0.01) increase in prothrombin time in the heparin, SS 100 and 200 mg/kg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated groups compared with the negative control group.\nThe weight of the pancreas increased significantly in the cerulean-induced pancreatitis (negative control) group compared with the control group. Treatment with Spatholobus suberectus and heparin, alone and in combination, significantly decreased (p<0.05, p<0.01) the weight of the pancreas compared with the negative control group (Figure 2). Moreover, treatment with Spatholobus suberectus and heparin, alone and in combination, increased pancreatic blood flow, calculated as a percentage of the control group, compared with the negative control group (Figure 3).\nEffect of Spatholobus suberectus in combination with heparin on prothrombin time, expressed as international normalized ratio (INR), in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin on pancreatic weight in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin on pancreatic blood flow (% of control) in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\n Effect of Spatholobus suberectus and heparin on biochemical parameters The effect of Spatholobus suberectus and heparin, alone and in combination, on the concentration of D-dimer in cerulean-induced pancreatitis is shown in Figure 4. Treatment with Spatholobus suberectus and heparin, alone and in combination, significantly decreased the concentration of D-dimer in cerulean-induced pancreatitis compared with the negative control group.\nEffect of Spatholobus suberectus in combination with heparin on the concentration of D-dimer in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nIn cerulean-induced pancreatitis rats, the activity of the lipase and amylase enzymes and the concentration of IL-1β increased significantly compared with the control group. There was a significant decrease in the activity of lipase and amylase and in the concentration of IL-1β in the groups treated with Spatholobus suberectus and heparin, alone and in combination, compared with the negative control group, as shown in Figure 5 a, b and c.\nEffect of Spatholobus suberectus and heparin on biochemical parameters in cerulean-induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interleukin 1β\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\n Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis The effect of Spatholobus suberectus and heparin, alone and in combination, on pancreatic DNA synthesis is shown in Figure 6. There was a significant (p<0.05, p<0.01) increase in pancreatic DNA synthesis in the groups treated with Spatholobus suberectus and heparin, alone and in combination, compared with the negative control group.\nEffect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean-induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\n Effect of Spatholobus suberectus and heparin on histopathology of pancreas Histopathology was performed on the pancreas of cerulean-induced pancreatitis rats treated with Spatholobus suberectus and heparin, as shown in Table 1 and Figure 7. The pancreas of cerulean-induced pancreatitis rats showed severe edema, moderate diffuse infiltration of inflammatory leukocytes and hemorrhages. Acinar cells showed more than 50% vacuolization, while signs of necrosis were absent in the TS of the pancreas. Treatment with Spatholobus suberectus and heparin, alone and in combination, attenuated the development of pancreatitis. The histopathological sections show that heparin in combination with SS significantly reduced the vacuolization of acinar cells, inflammatory infiltration and pancreatic edema.\nEffect of Spatholobus suberectus and heparin on histopathological signs of the pancreas\nEffect of drug treatment on histopathology of pancreatic tissue in the cerulean-induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg.",
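The grades summarized in Table 1 come from the 0-3 histological scale defined in the methods; the following sketch shows one plausible way such per-animal grades could be tabulated, with invented scores and a hypothetical data layout.

```python
from statistics import median

# Illustrative tabulation of the 0-3 histological grading (edema, inflammatory
# infiltration, hemorrhage, acinar vacuolization, necrosis). The per-animal
# scores below are invented for demonstration only.
FEATURES = ["edema", "infiltration", "hemorrhage", "vacuolization", "necrosis"]

negative_control_scores = [
    {"edema": 3, "infiltration": 2, "hemorrhage": 2, "vacuolization": 3, "necrosis": 0},
    {"edema": 3, "infiltration": 2, "hemorrhage": 1, "vacuolization": 2, "necrosis": 0},
    # ... one dict per animal (n=8)
]

for feature in FEATURES:
    scores = [animal[feature] for animal in negative_control_scores]
    print(f"{feature}: median grade {median(scores)} (range {min(scores)}-{max(scores)})")
```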
"The effect of Spatholobus suberectus in combination with heparin on prothrombin time in cerulean-induced pancreatitis is shown in Figure 1. There was a significant (p<0.05, p<0.01) increase in prothrombin time in the heparin, SS 100 and 200 mg/kg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated groups compared with the negative control group.\nThe weight of the pancreas increased significantly in the cerulean-induced pancreatitis (negative control) group compared with the control group. Treatment with Spatholobus suberectus and heparin, alone and in combination, significantly decreased (p<0.05, p<0.01) the weight of the pancreas compared with the negative control group (Figure 2). Moreover, treatment with Spatholobus suberectus and heparin, alone and in combination, increased pancreatic blood flow, calculated as a percentage of the control group, compared with the negative control group (Figure 3).\nEffect of Spatholobus suberectus in combination with heparin on prothrombin time, expressed as international normalized ratio (INR), in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin on pancreatic weight in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nEffect of Spatholobus suberectus in combination with heparin on pancreatic blood flow (% of control) in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control", "The effect of Spatholobus suberectus and heparin, alone and in combination, on the concentration of D-dimer in cerulean-induced pancreatitis is shown in Figure 4. Treatment with Spatholobus suberectus and heparin, alone and in combination, significantly decreased the concentration of D-dimer in cerulean-induced pancreatitis compared with the negative control group.\nEffect of Spatholobus suberectus in combination with heparin on the concentration of D-dimer in cerulean-induced pancreatitis\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control\nIn cerulean-induced pancreatitis rats, the activity of the lipase and amylase enzymes and the concentration of IL-1β increased significantly compared with the control group. There was a significant decrease in the activity of lipase and amylase and in the concentration of IL-1β in the groups treated with Spatholobus suberectus and heparin, alone and in combination, compared with the negative control group, as shown in Figure 5 a, b and c.\nEffect of Spatholobus suberectus and heparin on biochemical parameters in cerulean-induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interleukin 1β\nMeans ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control", "The effect of Spatholobus suberectus and heparin, alone and in combination, on pancreatic DNA synthesis is shown in Figure 6. There was a significant (p<0.05, p<0.01) increase in pancreatic DNA synthesis in the groups treated with Spatholobus suberectus and heparin, alone and in combination, compared with the negative control group.\nEffect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean-induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control",
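The DNA-synthesis endpoint plotted in Figure 6 is simply labeled-thymidine disintegrations per minute normalized to DNA content, as defined in the methods; the sketch below illustrates that unit with invented numbers.

```python
# Minimal sketch of the unit used in Figure 6: disintegrations per minute of
# incorporated labeled thymidine normalized to the DNA content of the same
# sample. All numbers are illustrative placeholders.
def dna_synthesis_rate(dpm_counted: float, dna_micrograms: float) -> float:
    """Return labeled-thymidine incorporation in dpm per microgram of DNA."""
    return dpm_counted / dna_micrograms

sample_dpm = 5.4e4     # scintillation counts from the precipitated fraction
sample_dna_ug = 38.0   # DNA measured in the same homogenate
print(f"{dna_synthesis_rate(sample_dpm, sample_dna_ug):.0f} dpm/ug DNA")
```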
"Histopathology was performed on the pancreas of cerulean-induced pancreatitis rats treated with Spatholobus suberectus and heparin, as shown in Table 1 and Figure 7. The pancreas of cerulean-induced pancreatitis rats showed severe edema, moderate diffuse infiltration of inflammatory leukocytes and hemorrhages. Acinar cells showed more than 50% vacuolization, while signs of necrosis were absent in the TS of the pancreas. Treatment with Spatholobus suberectus and heparin, alone and in combination, attenuated the development of pancreatitis. The histopathological sections show that heparin in combination with SS significantly reduced the vacuolization of acinar cells, inflammatory infiltration and pancreatic edema.\nEffect of Spatholobus suberectus and heparin on histopathological signs of the pancreas\nEffect of drug treatment on histopathology of pancreatic tissue in the cerulean-induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg.",
"Heparin is reported to have a beneficial effect in pancreatitis. Heparin manages pancreatitis by inhibiting the formation of thrombin through the heparin-antithrombin III complex, but its effect on the inflammatory pathway has not yet been evaluated (Warzecha et al, 2010). Moreover, Spatholobus suberectus is reported to possess anti-inflammatory, antioxidant and antiplatelet activity (Zhang et al, 2016). Thus, an attempt was made to evaluate the beneficial effect of Spatholobus suberectus in cerulean-induced pancreatitis and the synergistic effect of SS with heparin in the cerulean-induced pancreatitis rat model. This was evaluated by estimating prothrombin time, pancreatic weight and pancreatic blood flow in cerulean-induced pancreatitis. Biochemical parameters such as the concentrations of IL-1β and D-dimer and the activity of amylase and lipase, together with DNA synthesis, were estimated in the cerulean-induced pancreatitis rat model. Moreover, a histopathology study was also performed.\nThe results of the given study suggest that SS alone and in combination with heparin significantly increased prothrombin time and pancreatic blood flow in the cerulean-induced pancreatitis rat model. The literature suggests that in pancreatitis, pancreatic blood flow and prothrombin time decrease significantly, and anticoagulants are reported to enhance blood flow and thereby effectively manage pancreatitis (Ceranowicz et al, 2015).\nThere is activation of leukocytes, which results in the release of inflammatory cytokines that develop the inflammation in pancreatitis. In pancreatitis, cytokines such as IL-1β are responsible for the inflammatory response and activate prostaglandins, platelet activating factor and TNF-α. These mediators cease the synthesis of DNA and thereby damage pancreatic cells (Fink and Norman, 1996; Vollmar and Menger, 2003). Histopathology of pancreatic tissue reveals the presence of edema, leukocyte infiltration, hemorrhages and vacuolization in pancreatitis.\nThe investigation reveals that SS alone and in the presence of heparin effectively decreased the concentrations of IL-1β and D-dimer in blood and the activity of amylase and lipase compared with the negative control group. Moreover, SS alone and in combination with heparin reduced DNA synthesis in pancreatitis rats. The effect of SS and heparin, alone and in combination, on the histopathology of pancreatic tissue attenuated pancreatitis by reducing edema, hemorrhages and leukocyte infiltration.", "The present study concludes that treatment with SS alone effectively manages pancreatitis by ceasing the inflammatory pathway. Moreover, SS in combination with heparin gives a synergistic effect, as the combination acts on the inflammatory and coagulation pathways simultaneously in pancreatitis." ]
[ "intro", "material|methods", null, null, null, null, null, null, null, null, null, null, null, null, null, "discussion", "conclusion" ]
[ "\nSpatholobus suberectus\n", "Pancreatitis", "Heparin", "Cerulean" ]
Introduction: Pancreatitis is a coagulation-like disorder. However, inflammation and coagulation is closely related to each other. Coagulation is induced by inflammation and activation of process of inflammation equally responsible for the coagulation as thrombosis (Levi et al, 2012). Reported study suggested that disturbance in microcirculation results in formation of pro-inflammatory cytokines; oxygen derived free radicals, release of proteolytic enzyme and activation of leukocytes (Salomone et al, 2003). Thrombin forms as inflammatory cytokines enhances the expression of tissue factor on endothelium and monocytes (Maeda et al, 2006). Reported study suggested that heparin/anticoagulant shows protective effect in pancreatitis in animal as well as clinical study. Pretreatment with heparin ceases the development of pancreatitis induced by cerulean, taurocholate and bile on various animal studies (Gabryelewicz et al, 1969). Moreover heparin restores the pancreatic function in cerulean induced pancreatitis if it is administered after the induction of pancreatitis (Dobosz et al, 1998; Qiu et al, 2007). Moreover a study suggested that heparin manages the hyperlipidemia induced pancreatitis when co administered with insulin and in sever pancreatitis it protects the encephalopathy (Alagözlü et al, 2006; Kyriakidis et al, 2006). Heparin manages the pancreatitis by inhibiting the formation of thrombin through heparin-antithrombin III complex (Warzecha et al, 2010). However, effect of heparin over inflammatory parameters or pathway is not given in the literatures. Spatholobus suberectus belongs to family: Leguminosae used for the management of several disorders such as rheumatism, anemia and abnormal menstruation traditionally in China (Lam et al, 2000; Li et al, 2003; Yen, 1992). Reported study suggested that several compounds like butin, liquiritigenin, dihydroquercetin, plathymenin, eriodictyol and neoisoliquiritigenin were been isolated from stem of SS (Lee et al, 2006). A study on stem extract of SS effectively manages cerebral ischemia by attenuating NF-κΒ p65 and cytokines and thereby prevents the DNA damage (Zhang et al, 2016). Moreover studies like anti rheumatic, anti inflammatory, antioxidant and anti platelet activity of SS are reported in the literature (Zhang et al, 2016). Thus this study evaluates the synergistic effect of Spatholobus suberectus extract when treated with heparin in pancreatitis. Material and Methods: Animals Healthy male Wistar rats of 180-200 g body weight were used for the pharmacological screening in the present study. The animals were housed at 25 ±2°C temperature, 12 h light/dark cycle and 60 ± 5 % of relative humidity. Rats were feed with standard diet and water ad libitum. Protocols of the present investigation for all the animal studies were approved by Capital University of Medical Science, Beijing. Healthy male Wistar rats of 180-200 g body weight were used for the pharmacological screening in the present study. The animals were housed at 25 ±2°C temperature, 12 h light/dark cycle and 60 ± 5 % of relative humidity. Rats were feed with standard diet and water ad libitum. Protocols of the present investigation for all the animal studies were approved by Capital University of Medical Science, Beijing. Plant Extract Spatholobus suberectus was procured from the local botanist. S. suberectus stem cut into small pieces and dried it under the shade. 
Later it was boiled for specific duration of time in distilled water and then filters the extract by using filter paper (150 μm). Thereafter the extract was lyophilized. During the experiment at the time of dosing extract of SS was dissolved in distilled H2O and for 5 min centrifuge it at 10000 RPM. Spatholobus suberectus was procured from the local botanist. S. suberectus stem cut into small pieces and dried it under the shade. Later it was boiled for specific duration of time in distilled water and then filters the extract by using filter paper (150 μm). Thereafter the extract was lyophilized. During the experiment at the time of dosing extract of SS was dissolved in distilled H2O and for 5 min centrifuge it at 10000 RPM. Induction of Pancreatitis All the rats were separated in to seven different group (n=8) such as Control group which recives only saline; Negative control group which receives cerulean (50 μg/kg, i.p.) five times at an interval of 1 h whithout any pretreatment of drug; Heparin treated group receives 150 U/kg of heparin 1 day after the cerulean by subcutaneous injection twice a day; SS treated group receives 100 and 200 mg/kg of SS extract p. o. 1 day after the cerulean injection; Heparin +SS treated group receives heparin (150 U/kg, s. c.) and SS (100 and 200 mg/kg) 1 day after the cerulean injection for the duration of one week (Baczy’ nska et al, 2004). All the rats were separated in to seven different group (n=8) such as Control group which recives only saline; Negative control group which receives cerulean (50 μg/kg, i.p.) five times at an interval of 1 h whithout any pretreatment of drug; Heparin treated group receives 150 U/kg of heparin 1 day after the cerulean by subcutaneous injection twice a day; SS treated group receives 100 and 200 mg/kg of SS extract p. o. 1 day after the cerulean injection; Heparin +SS treated group receives heparin (150 U/kg, s. c.) and SS (100 and 200 mg/kg) 1 day after the cerulean injection for the duration of one week (Baczy’ nska et al, 2004). Estimation of pancreatic blood flow All the animals were anesthetized by ketamine (50mg/kg, i.p.) at the end of protocol and pancreatic blood flow was estimated as per previously reported methods in exposed pancreas by using a laser Doppler flowmeter. Interpretation of data was represented as percent change from value obtained in control group without induction of pancreatitis. All the animals were anesthetized by ketamine (50mg/kg, i.p.) at the end of protocol and pancreatic blood flow was estimated as per previously reported methods in exposed pancreas by using a laser Doppler flowmeter. Interpretation of data was represented as percent change from value obtained in control group without induction of pancreatitis. Estimation of Biochemical parameters Later abdominal aorta was used for the collection of fresh blood sample and prothrombin time was estimated in it as international normalized ratio using test strip (Alere San Diego, Inc., USA). Immunoturbidimetric assay method was used for the estimation of concentration of plasma D-Dimer by using automatic coagulation analyzer (BCS XP System, Simens Healthcare Diagnostics, Garmany). Kodak Ectachem DT II System analyzer was used for the estimation of activity of amylase and lipase in serum using using Amyl and Lipa DT Slides. Interlukin 1β (IL-Ιβ) concentration was estimated in the serum by using IL-Ιβ Platinum Elisa (Konturek et al, 1994). 
Later abdominal aorta was used for the collection of fresh blood sample and prothrombin time was estimated in it as international normalized ratio using test strip (Alere San Diego, Inc., USA). Immunoturbidimetric assay method was used for the estimation of concentration of plasma D-Dimer by using automatic coagulation analyzer (BCS XP System, Simens Healthcare Diagnostics, Garmany). Kodak Ectachem DT II System analyzer was used for the estimation of activity of amylase and lipase in serum using using Amyl and Lipa DT Slides. Interlukin 1β (IL-Ιβ) concentration was estimated in the serum by using IL-Ιβ Platinum Elisa (Konturek et al, 1994). Estimation of pancreatic DNA synthesis Pancreas isolated from all the rats was weighed. Later for histological examination and pancreatic DNA synthesis was done for the pancreatic tissue. Assessment of labeled thymidine in to DNA was done for the estimation of rate of pancreatic DNA synthesis as per previously reported study. Rate of DNA synthesis was expressed as disintegrations of labeled thymidine per minute per microgram DNA (dpm^g DNA) (Dembi’ nski et al, 2006). Pancreas isolated from all the rats was weighed. Later for histological examination and pancreatic DNA synthesis was done for the pancreatic tissue. Assessment of labeled thymidine in to DNA was done for the estimation of rate of pancreatic DNA synthesis as per previously reported study. Rate of DNA synthesis was expressed as disintegrations of labeled thymidine per minute per microgram DNA (dpm^g DNA) (Dembi’ nski et al, 2006). Histopathology study Hematoxylin and eosin (H&E) staining was done for the examination of damage of pancreatic tissue as given in previous report. A scale of range from 0 to 3 was used for histological grading of necrosis, hemorrhages, vacuolization of acinar cells, leukocyte inflammatory infiltration and edema (Tomaszewska et al, 2000). Hematoxylin and eosin (H&E) staining was done for the examination of damage of pancreatic tissue as given in previous report. A scale of range from 0 to 3 was used for histological grading of necrosis, hemorrhages, vacuolization of acinar cells, leukocyte inflammatory infiltration and edema (Tomaszewska et al, 2000). Statistical analysis Data of present study given as mean ± SD (n=8). Statistically analysis was done by one way ANOVA (Dunnett.) In this study values p<0.05 was considered as significant. Data of present study given as mean ± SD (n=8). Statistically analysis was done by one way ANOVA (Dunnett.) In this study values p<0.05 was considered as significant. Animals: Healthy male Wistar rats of 180-200 g body weight were used for the pharmacological screening in the present study. The animals were housed at 25 ±2°C temperature, 12 h light/dark cycle and 60 ± 5 % of relative humidity. Rats were feed with standard diet and water ad libitum. Protocols of the present investigation for all the animal studies were approved by Capital University of Medical Science, Beijing. Plant Extract: Spatholobus suberectus was procured from the local botanist. S. suberectus stem cut into small pieces and dried it under the shade. Later it was boiled for specific duration of time in distilled water and then filters the extract by using filter paper (150 μm). Thereafter the extract was lyophilized. During the experiment at the time of dosing extract of SS was dissolved in distilled H2O and for 5 min centrifuge it at 10000 RPM. 
Induction of Pancreatitis: All the rats were separated in to seven different group (n=8) such as Control group which recives only saline; Negative control group which receives cerulean (50 μg/kg, i.p.) five times at an interval of 1 h whithout any pretreatment of drug; Heparin treated group receives 150 U/kg of heparin 1 day after the cerulean by subcutaneous injection twice a day; SS treated group receives 100 and 200 mg/kg of SS extract p. o. 1 day after the cerulean injection; Heparin +SS treated group receives heparin (150 U/kg, s. c.) and SS (100 and 200 mg/kg) 1 day after the cerulean injection for the duration of one week (Baczy’ nska et al, 2004). Estimation of pancreatic blood flow: All the animals were anesthetized by ketamine (50mg/kg, i.p.) at the end of protocol and pancreatic blood flow was estimated as per previously reported methods in exposed pancreas by using a laser Doppler flowmeter. Interpretation of data was represented as percent change from value obtained in control group without induction of pancreatitis. Estimation of Biochemical parameters: Later abdominal aorta was used for the collection of fresh blood sample and prothrombin time was estimated in it as international normalized ratio using test strip (Alere San Diego, Inc., USA). Immunoturbidimetric assay method was used for the estimation of concentration of plasma D-Dimer by using automatic coagulation analyzer (BCS XP System, Simens Healthcare Diagnostics, Garmany). Kodak Ectachem DT II System analyzer was used for the estimation of activity of amylase and lipase in serum using using Amyl and Lipa DT Slides. Interlukin 1β (IL-Ιβ) concentration was estimated in the serum by using IL-Ιβ Platinum Elisa (Konturek et al, 1994). Estimation of pancreatic DNA synthesis: Pancreas isolated from all the rats was weighed. Later for histological examination and pancreatic DNA synthesis was done for the pancreatic tissue. Assessment of labeled thymidine in to DNA was done for the estimation of rate of pancreatic DNA synthesis as per previously reported study. Rate of DNA synthesis was expressed as disintegrations of labeled thymidine per minute per microgram DNA (dpm^g DNA) (Dembi’ nski et al, 2006). Histopathology study: Hematoxylin and eosin (H&E) staining was done for the examination of damage of pancreatic tissue as given in previous report. A scale of range from 0 to 3 was used for histological grading of necrosis, hemorrhages, vacuolization of acinar cells, leukocyte inflammatory infiltration and edema (Tomaszewska et al, 2000). Statistical analysis: Data of present study given as mean ± SD (n=8). Statistically analysis was done by one way ANOVA (Dunnett.) In this study values p<0.05 was considered as significant. Result: Effect of Spatholobus suberectus and heparin on prothrombin time, weight of pancreas and pancreatic blood flow Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time in cerulean induced pancreatitis as shown in Figure 1. There was significant (p<0.05, p<0.01) increase in the prothrombin time in heparin, SS 100 & 200 mg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated group than negative control group. Weight of pancreas was significantly increases in cerulean induced pancreatitis group i.e. negative control group than control group. It was also observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases (p<0.05, p<0.01) the weight of pancreas compared to negative control group (Figure 2). 
Moreover treatment with Spatholobus suberectus and heparin alone and in combination increases the pancreatic blood flow which is calculated as % of control group than negative control group (Figure 3). Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time as international normalized ratio (INR) in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on pancreatic weight in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on pancreatic blood flow (% of control) in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time in cerulean induced pancreatitis as shown in Figure 1. There was significant (p<0.05, p<0.01) increase in the prothrombin time in heparin, SS 100 & 200 mg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated group than negative control group. Weight of pancreas was significantly increases in cerulean induced pancreatitis group i.e. negative control group than control group. It was also observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases (p<0.05, p<0.01) the weight of pancreas compared to negative control group (Figure 2). Moreover treatment with Spatholobus suberectus and heparin alone and in combination increases the pancreatic blood flow which is calculated as % of control group than negative control group (Figure 3). Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time as international normalized ratio (INR) in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on pancreatic weight in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on pancreatic blood flow (% of control) in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin on biochemical parameters Effect of Spatholobus suberectus and heparin alone and in combination on the concentration of D-dimer in cerulean induced pancreatitis as shown Figure 4. It was observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases the concentration of D-dimer in cerulean induced pancreatitis than negative control group. Effect of Spatholobus suberectus in combination with heparin was observed on the concentration of D-dimer in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control In cerulean induced pancreatitis rats the activity of lipase and amylase enzyme and concentration of Π^-1β significantly increases than control group. There were significant decrease in the activity of lipase and amylase enzyme and concentration of Π^-1β in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 5 a, b & c. 
Effect of Spatholobus suberectus and heparin on biochemical parameter in cerulean induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interlukin 1 β Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin alone and in combination on the concentration of D-dimer in cerulean induced pancreatitis as shown Figure 4. It was observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases the concentration of D-dimer in cerulean induced pancreatitis than negative control group. Effect of Spatholobus suberectus in combination with heparin was observed on the concentration of D-dimer in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control In cerulean induced pancreatitis rats the activity of lipase and amylase enzyme and concentration of Π^-1β significantly increases than control group. There were significant decrease in the activity of lipase and amylase enzyme and concentration of Π^-1β in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 5 a, b & c. Effect of Spatholobus suberectus and heparin on biochemical parameter in cerulean induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interlukin 1 β Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis Effect of Spatholobus suberectus and heparin alone and in combination on pancreatic DNA synthesis was shown in Figure 6. There were significant (p<0.05, p<0.01) increase in the pancreatic DNA synthesis in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 6. Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin alone and in combination on pancreatic DNA synthesis was shown in Figure 6. There were significant (p<0.05, p<0.01) increase in the pancreatic DNA synthesis in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 6. Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin on histopathology of pancreas Histopathology study was performed on Spatholobus suberectus and heparin treated cerulean induced pancreatitis rats as shown in Table 1 and Figure 7. It was observed that pancreas of cerulean induced pancreatitis rats shows severe edema, moderate diffuse infiltration of inflammatory leukocyte and hemorrhages. Acinar cell shows more than 50% of vacuolization and signs of necrosis was absent in the TS of pancreas. Treatment with Spatholobus suberectus and heparin alone and in combination attenuates the development of pancreatitis. Histopathological representation shows that heparin in combination with SS significant decline in vacuolization of acinar cells, inflammatory infiltration and pancreatic edema. 
Effect of Spatholobus suberectus and heparin on histopathology signs of pancreas Effect of drug treatment on histopathology of pancreatic tissue in cerulean induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg. Histopathology study was performed on Spatholobus suberectus and heparin treated cerulean induced pancreatitis rats as shown in Table 1 and Figure 7. It was observed that pancreas of cerulean induced pancreatitis rats shows severe edema, moderate diffuse infiltration of inflammatory leukocyte and hemorrhages. Acinar cell shows more than 50% of vacuolization and signs of necrosis was absent in the TS of pancreas. Treatment with Spatholobus suberectus and heparin alone and in combination attenuates the development of pancreatitis. Histopathological representation shows that heparin in combination with SS significant decline in vacuolization of acinar cells, inflammatory infiltration and pancreatic edema. Effect of Spatholobus suberectus and heparin on histopathology signs of pancreas Effect of drug treatment on histopathology of pancreatic tissue in cerulean induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg. Effect of Spatholobus suberectus and heparin on prothrombin time, weight of pancreas and pancreatic blood flow: Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time in cerulean induced pancreatitis as shown in Figure 1. There was significant (p<0.05, p<0.01) increase in the prothrombin time in heparin, SS 100 & 200 mg, heparin+SS 100 mg/kg and heparin+SS 200 mg/kg treated group than negative control group. Weight of pancreas was significantly increases in cerulean induced pancreatitis group i.e. negative control group than control group. It was also observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases (p<0.05, p<0.01) the weight of pancreas compared to negative control group (Figure 2). Moreover treatment with Spatholobus suberectus and heparin alone and in combination increases the pancreatic blood flow which is calculated as % of control group than negative control group (Figure 3). Effect of Spatholobus suberectus in combination with heparin was observed on prothrombin time as international normalized ratio (INR) in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on pancreatic weight in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus in combination with heparin was observed on pancreatic blood flow (% of control) in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin on biochemical parameters: Effect of Spatholobus suberectus and heparin alone and in combination on the concentration of D-dimer in cerulean induced pancreatitis as shown Figure 4. It was observed that treatment with Spatholobus suberectus and heparin alone and in combination significantly decreases the concentration of D-dimer in cerulean induced pancreatitis than negative control group. 
Effect of Spatholobus suberectus in combination with heparin was observed on the concentration of D-dimer in cerulean induced pancreatitis Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control In cerulean induced pancreatitis rats the activity of lipase and amylase enzyme and concentration of Π^-1β significantly increases than control group. There were significant decrease in the activity of lipase and amylase enzyme and concentration of Π^-1β in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 5 a, b & c. Effect of Spatholobus suberectus and heparin on biochemical parameter in cerulean induced pancreatitis. (a) Amylase activity; (b) Lipase activity; (c) Interlukin 1 β Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis: Effect of Spatholobus suberectus and heparin alone and in combination on pancreatic DNA synthesis was shown in Figure 6. There were significant (p<0.05, p<0.01) increase in the pancreatic DNA synthesis in Spatholobus suberectus and heparin alone and in combination treated group of rats compared to negative control group as shown in Figure 6. Effect of Spatholobus suberectus and heparin on pancreatic DNA synthesis in cerulean induced pancreatitis. Means ± SD (n=8), ##p < 0.01 Vs control, *p<0.05, **p<0.01 Vs Negative control Effect of Spatholobus suberectus and heparin on histopathology of pancreas: Histopathology study was performed on Spatholobus suberectus and heparin treated cerulean induced pancreatitis rats as shown in Table 1 and Figure 7. It was observed that pancreas of cerulean induced pancreatitis rats shows severe edema, moderate diffuse infiltration of inflammatory leukocyte and hemorrhages. Acinar cell shows more than 50% of vacuolization and signs of necrosis was absent in the TS of pancreas. Treatment with Spatholobus suberectus and heparin alone and in combination attenuates the development of pancreatitis. Histopathological representation shows that heparin in combination with SS significant decline in vacuolization of acinar cells, inflammatory infiltration and pancreatic edema. Effect of Spatholobus suberectus and heparin on histopathology signs of pancreas Effect of drug treatment on histopathology of pancreatic tissue in cerulean induced pancreatitis rat model. A. Control; B. Negative control; C. SS 200 mg/kg; D. Heparin+SS 200 mg/kg. Discussion: Heparin reported to have beneficial effect on pancreatitis. Heparin manages the pancreatitis by inhibiting the formation of thrombin through heparin-antithrombin III complex but its effect over inflammatory pathway is not evaluated yet (Warzecha et al, 2010). Moreover Spatholobus suberectus reported to posses anti inflammatory, antioxidant and anti platelet activity (Zhang et al, 2016). Thus an attempt was made to evaluate the beneficial effect of Spatholobus suberectus in cerulean induced pancreatitis and synergistic effect of SS with heparin in cerulean induced pancreatitis rat model. It was evaluated by estimating the prothrobin time, pancreatic weight and blood flow in cerulean induced pancreatitis. Biochemical parameter such as concentration of IL-1β D-dimer, activity of amylase and lipase and DAN synthesis was estimated in cerulean induced pancreatitis rat model. Moreover histopathology study was also performed. 
Discussion: Heparin is reported to have a beneficial effect on pancreatitis. Heparin manages pancreatitis by inhibiting the formation of thrombin through the heparin-antithrombin III complex, but its effect on the inflammatory pathway has not yet been evaluated (Warzecha et al, 2010). Moreover, Spatholobus suberectus is reported to possess anti-inflammatory, antioxidant and anti-platelet activity (Zhang et al, 2016). Thus, an attempt was made to evaluate the beneficial effect of Spatholobus suberectus in cerulein-induced pancreatitis and the synergistic effect of SS with heparin in the cerulein-induced pancreatitis rat model. This was evaluated by estimating prothrombin time, pancreatic weight and pancreatic blood flow in cerulein-induced pancreatitis. Biochemical parameters such as the concentrations of IL-1β and D-dimer, the activity of amylase and lipase, and DNA synthesis were estimated in the cerulein-induced pancreatitis rat model. A histopathology study was also performed.
The results of the present study suggest that SS, alone and in combination with heparin, significantly increased prothrombin time and pancreatic blood flow in the cerulein-induced pancreatitis rat model. The literature suggests that pancreatic blood flow and prothrombin time decrease significantly in pancreatitis, and anticoagulants are reported to enhance blood flow and thereby effectively manage pancreatitis (Ceranowicz et al, 2015). Activation of leukocytes results in the release of inflammatory cytokines, which drives the inflammation in pancreatitis. In pancreatitis, cytokines such as IL-1β are responsible for the inflammatory response and activate prostaglandins, platelet-activating factor and TNF-α. These mediators cease the synthesis of DNA and thereby damage the pancreatic cells (Fink and Norman, 1996; Vollmar and Menger, 2003). Histopathology of pancreatic tissue reveals the presence of edema, leukocyte infiltration, hemorrhages and vacuolization in pancreatitis. The present investigation revealed that SS, alone and in the presence of heparin, effectively decreased the concentrations of IL-1β and D-dimer in blood and the activity of amylase and lipase compared with the negative control group. Moreover, SS alone and in combination with heparin increased pancreatic DNA synthesis in pancreatitis rats. The effect of SS and heparin, alone and in combination, on the histopathology of pancreatic tissue attenuated pancreatitis by reducing edema, hemorrhages and leukocyte infiltration.
Conclusion: The present study concludes that treatment with SS alone effectively manages pancreatitis by ceasing the inflammatory pathway. Moreover, SS in combination with heparin gives a synergistic effect, as the combination acts on the inflammatory and coagulation pathways simultaneously in pancreatitis.
Background: The present study evaluates the effect of Spatholobus suberectus stem extract (SS) in the management of pancreatitis, alone and in combination with heparin. Methods: Pancreatitis was induced by cerulein (50 μg/kg, i.p.) administered five times at intervals of 1 h, without any drug pretreatment. Rats were treated with SS (100 and 200 mg/kg, p.o.) and heparin (150 U/kg, i.p.), alone and in combination, for one week. Thereafter, pancreatic weight and blood flow were estimated, and biochemical parameters such as the concentrations of D-dimer and interleukin-1β (IL-1β) and the activity of amylase and lipase were determined in the blood of pancreatitis rats. The effect of drug treatment on DNA synthesis and histopathology was also estimated in cerulein-induced pancreatitis rats. Results: The results of this study suggest that treatment with SS, alone and in combination with heparin, significantly increased prothrombin time and pancreatic blood flow compared with the negative control group. There was a significant decrease in the concentrations of IL-1β and D-dimer and in the activity of amylase and lipase in the SS- and heparin-treated groups compared with the negative control group. Pancreatic DNA synthesis was also found to be increased in the groups treated with SS and heparin, alone and in combination. The histopathology study also revealed that treatment with SS and heparin, alone and in combination, reduced edema, hemorrhages and leukocyte infiltration in the TS of pancreatic tissues. Conclusions: The present study concludes that treatment with SS alone effectively manages pancreatitis by ceasing the inflammatory pathway and potentiates the effect of heparin in the management of pancreatitis.
Introduction: Pancreatitis is a coagulation-like disorder, as inflammation and coagulation are closely related to each other. Coagulation is induced by inflammation, and activation of the inflammatory process is equally responsible for coagulation in the form of thrombosis (Levi et al, 2012). Reported studies suggest that disturbance of the microcirculation results in the formation of pro-inflammatory cytokines and oxygen-derived free radicals, the release of proteolytic enzymes and the activation of leukocytes (Salomone et al, 2003). Thrombin is formed as inflammatory cytokines enhance the expression of tissue factor on the endothelium and monocytes (Maeda et al, 2006). Reported studies suggest that heparin and other anticoagulants show a protective effect in pancreatitis in both animal and clinical studies. Pretreatment with heparin ceases the development of pancreatitis induced by cerulein, taurocholate and bile in various animal studies (Gabryelewicz et al, 1969). Moreover, heparin restores pancreatic function in cerulein-induced pancreatitis when administered after the induction of pancreatitis (Dobosz et al, 1998; Qiu et al, 2007). Studies also suggest that heparin manages hyperlipidemia-induced pancreatitis when co-administered with insulin and protects against encephalopathy in severe pancreatitis (Alagözlü et al, 2006; Kyriakidis et al, 2006). Heparin manages pancreatitis by inhibiting the formation of thrombin through the heparin-antithrombin III complex (Warzecha et al, 2010). However, the effect of heparin on inflammatory parameters or pathways has not been reported in the literature. Spatholobus suberectus, which belongs to the family Leguminosae, is traditionally used in China for the management of several disorders such as rheumatism, anemia and abnormal menstruation (Lam et al, 2000; Li et al, 2003; Yen, 1992). Several compounds, such as butin, liquiritigenin, dihydroquercetin, plathymenin, eriodictyol and neoisoliquiritigenin, have been isolated from the stem of SS (Lee et al, 2006). A study reported that the stem extract of SS effectively manages cerebral ischemia by attenuating NF-κB p65 and cytokines and thereby prevents DNA damage (Zhang et al, 2016). Moreover, anti-rheumatic, anti-inflammatory, antioxidant and anti-platelet activities of SS are reported in the literature (Zhang et al, 2016). Thus, this study evaluates the synergistic effect of Spatholobus suberectus extract combined with heparin in pancreatitis. Conclusion: The present study concludes that treatment with SS alone effectively manages pancreatitis by ceasing the inflammatory pathway. Moreover, SS in combination with heparin gives a synergistic effect, as the combination acts on the inflammatory and coagulation pathways simultaneously in pancreatitis.
Background: The present study evaluates the effect of Spatholobus suberectus stem extract (SS) in the management of pancreatitis, alone and in combination with heparin. Methods: Pancreatitis was induced by cerulein (50 μg/kg, i.p.) administered five times at intervals of 1 h, without any drug pretreatment. Rats were treated with SS (100 and 200 mg/kg, p.o.) and heparin (150 U/kg, i.p.), alone and in combination, for one week. Thereafter, pancreatic weight and blood flow were estimated, and biochemical parameters such as the concentrations of D-dimer and interleukin-1β (IL-1β) and the activity of amylase and lipase were determined in the blood of pancreatitis rats. The effect of drug treatment on DNA synthesis and histopathology was also estimated in cerulein-induced pancreatitis rats. Results: The results of this study suggest that treatment with SS, alone and in combination with heparin, significantly increased prothrombin time and pancreatic blood flow compared with the negative control group. There was a significant decrease in the concentrations of IL-1β and D-dimer and in the activity of amylase and lipase in the SS- and heparin-treated groups compared with the negative control group. Pancreatic DNA synthesis was also found to be increased in the groups treated with SS and heparin, alone and in combination. The histopathology study also revealed that treatment with SS and heparin, alone and in combination, reduced edema, hemorrhages and leukocyte infiltration in the TS of pancreatic tissues. Conclusions: The present study concludes that treatment with SS alone effectively manages pancreatitis by ceasing the inflammatory pathway and potentiates the effect of heparin in the management of pancreatitis.
5,394
303
[ 80, 81, 143, 60, 124, 78, 59, 35, 1612, 294, 229, 98, 156 ]
17
[ "heparin", "control", "pancreatitis", "suberectus", "spatholobus suberectus", "spatholobus", "group", "cerulean", "pancreatic", "induced" ]
[ "inflammatory coagulation pathway", "pancreatitis heparin manages", "effect heparin inflammatory", "introduction pancreatitis coagulation", "coagulation induced inflammation" ]
null
null
[CONTENT] Spatholobus suberectus | Pancreatitis | Heparin | Cerulean [SUMMARY]
null
null
[CONTENT] Spatholobus suberectus | Pancreatitis | Heparin | Cerulean [SUMMARY]
[CONTENT] Spatholobus suberectus | Pancreatitis | Heparin | Cerulean [SUMMARY]
[CONTENT] Spatholobus suberectus | Pancreatitis | Heparin | Cerulean [SUMMARY]
[CONTENT] Amylases | Animals | Anticoagulants | Ceruletide | Drug Therapy, Combination | Fabaceae | Fibrin Fibrinogen Degradation Products | Heparin | Interleukin-1beta | Lipase | Male | Nucleic Acid Synthesis Inhibitors | Pancreas | Pancreatitis | Phytotherapy | Plant Extracts | Plant Stems | Protective Agents | Prothrombin Time | Rats | Rats, Wistar [SUMMARY]
null
null
[CONTENT] Amylases | Animals | Anticoagulants | Ceruletide | Drug Therapy, Combination | Fabaceae | Fibrin Fibrinogen Degradation Products | Heparin | Interleukin-1beta | Lipase | Male | Nucleic Acid Synthesis Inhibitors | Pancreas | Pancreatitis | Phytotherapy | Plant Extracts | Plant Stems | Protective Agents | Prothrombin Time | Rats | Rats, Wistar [SUMMARY]
[CONTENT] Amylases | Animals | Anticoagulants | Ceruletide | Drug Therapy, Combination | Fabaceae | Fibrin Fibrinogen Degradation Products | Heparin | Interleukin-1beta | Lipase | Male | Nucleic Acid Synthesis Inhibitors | Pancreas | Pancreatitis | Phytotherapy | Plant Extracts | Plant Stems | Protective Agents | Prothrombin Time | Rats | Rats, Wistar [SUMMARY]
[CONTENT] Amylases | Animals | Anticoagulants | Ceruletide | Drug Therapy, Combination | Fabaceae | Fibrin Fibrinogen Degradation Products | Heparin | Interleukin-1beta | Lipase | Male | Nucleic Acid Synthesis Inhibitors | Pancreas | Pancreatitis | Phytotherapy | Plant Extracts | Plant Stems | Protective Agents | Prothrombin Time | Rats | Rats, Wistar [SUMMARY]
[CONTENT] inflammatory coagulation pathway | pancreatitis heparin manages | effect heparin inflammatory | introduction pancreatitis coagulation | coagulation induced inflammation [SUMMARY]
null
null
[CONTENT] inflammatory coagulation pathway | pancreatitis heparin manages | effect heparin inflammatory | introduction pancreatitis coagulation | coagulation induced inflammation [SUMMARY]
[CONTENT] inflammatory coagulation pathway | pancreatitis heparin manages | effect heparin inflammatory | introduction pancreatitis coagulation | coagulation induced inflammation [SUMMARY]
[CONTENT] inflammatory coagulation pathway | pancreatitis heparin manages | effect heparin inflammatory | introduction pancreatitis coagulation | coagulation induced inflammation [SUMMARY]
[CONTENT] heparin | control | pancreatitis | suberectus | spatholobus suberectus | spatholobus | group | cerulean | pancreatic | induced [SUMMARY]
null
null
[CONTENT] heparin | control | pancreatitis | suberectus | spatholobus suberectus | spatholobus | group | cerulean | pancreatic | induced [SUMMARY]
[CONTENT] heparin | control | pancreatitis | suberectus | spatholobus suberectus | spatholobus | group | cerulean | pancreatic | induced [SUMMARY]
[CONTENT] heparin | control | pancreatitis | suberectus | spatholobus suberectus | spatholobus | group | cerulean | pancreatic | induced [SUMMARY]
[CONTENT] pancreatitis | heparin | suggested | study suggested | study | 2006 | reported study suggested | coagulation | reported | inflammation [SUMMARY]
null
null
[CONTENT] pathway | combination | inflammatory | treatment ss effectively | pathway simultaneously | concludes | concludes treatment | concludes treatment ss | concludes treatment ss effectively | pathway simultaneously pancreatitis [SUMMARY]
[CONTENT] heparin | pancreatitis | control | group | suberectus | cerulean | spatholobus suberectus | spatholobus | pancreatic | combination [SUMMARY]
[CONTENT] heparin | pancreatitis | control | group | suberectus | cerulean | spatholobus suberectus | spatholobus | pancreatic | combination [SUMMARY]
[CONTENT] Spatholobus | SS [SUMMARY]
null
null
[CONTENT] SS [SUMMARY]
[CONTENT] Spatholobus | SS ||| Pancreatitis | cerulean | 50μg/kg | i.p. ||| five | 1 ||| SS | 100 | 150 | i.p. ||| a week ||| Interleukin 1β ||| ||| SS ||| SS ||| SS ||| SS ||| SS [SUMMARY]
[CONTENT] Spatholobus | SS ||| Pancreatitis | cerulean | 50μg/kg | i.p. ||| five | 1 ||| SS | 100 | 150 | i.p. ||| a week ||| Interleukin 1β ||| ||| SS ||| SS ||| SS ||| SS ||| SS [SUMMARY]
Serum Osteoprotegerin Is an Independent Marker of Metabolic Complications in Non-Dialysis-Dependent Chronic Kidney Disease Patients.
34684610
Osteoprotegerin (OPG) belongs to the tumour necrosis factor superfamily and is known to accelerate endothelial dysfunction and vascular calcification. OPG concentrations are elevated in patients with chronic kidney disease. The aim of this study was to investigate the association between OPG levels and frequent complications of chronic kidney disease (CKD) such as anaemia, protein energy wasting (PEW), inflammation, overhydration, hyperglycaemia and hypertension.
BACKGROUND
One hundred non-dialysis-dependent men with CKD stage 3-5 were included in the study. Bioimpedance spectroscopy (BIS) was used to measure overhydration, fat amount and lean body mass. We also measured the serum concentrations of haemoglobin, albumin, total cholesterol, C-reactive protein (CRP), fibrinogen and glycated haemoglobin (HgbA1c), as well as blood pressure.
METHODS
We observed a significant, positive correlation between OPG and age, serum creatinine, CRP, fibrinogen, HgbA1c concentrations, systolic blood pressure and overhydration. Negative correlations were observed between OPG and glomerular filtration rate (eGFR), serum albumin concentrations and serum haemoglobin level. Logistic regression models revealed that OPG is an independent marker of metabolic complications such as anaemia, PEW, inflammation and poor renal prognosis (including overhydration, uncontrolled diabetes and hypertension) in the studied population.
RESULTS
Our results suggest that OPG can be an independent marker of PEW, inflammation and vascular metabolic disturbances in patients with chronic kidney disease.
CONCLUSION
[ "Aged", "Anemia", "Biomarkers", "Humans", "Inflammation", "Logistic Models", "Male", "Middle Aged", "Osteoprotegerin", "Prognosis", "Renal Dialysis", "Renal Insufficiency, Chronic" ]
8538217
1. Introduction
Osteoprotegerin (OPG) is a molecule belonging to the tumour necrosis factor (TNF) superfamily. It is a decoy receptor for TNF-related apoptosis-inducing ligand (TRAIL). The TNF superfamily regulates immune responses, homeostasis as well as haematopoiesis [1]. Initially, OPG was identified as a bone resorption inhibitor that blocks the binding of RANK (receptor activator of nuclear factor kappa-β) to its ligand RANKL [2]. Further studies revealed that it has an additional impact on the cardiovascular and immune systems [3]. The OPG/RANKL/RANK system is involved in pathological angiogenesis, inflammatory states and cell survival. OPG is a glycoprotein released by vascular smooth muscle cells, endothelial cells, osteoblasts and immune cells. There is a signalling pathway between endothelial cells and osteoblasts during osteogenesis creating a connection between angiogenesis and osteogenesis [4]. The OPG/RANKL/RANK/TRAIL system, via receptors located on the cell surface, sends intracellular signals and modifies gene expression. As a result, monocytes, neutrophils and endothelial cells are recruited through cytokine production and receptor activation [5]. Serum OPG levels are associated with endothelial dysfunction and mediate vascular calcification [6]. Elevated OPG levels were observed in patients with atherosclerosis, heart failure, metabolic syndrome and diabetes [7,8,9]. The role of OPG in kidney pathologies is not well understood. OPG is expressed in kidney samples, cultured tubular cells and urinary exosome-like vesicles. It has been hypothesized that it can play a role in matrix deposition, inflammation and apoptosis [10]. Furthermore, the role of OPG in cardiovascular pathology and vascular calcification, which are common complications of chronic kidney disease, makes OPG an interesting marker also in CKD. When compared to healthy individuals, non-dialysis-dependent CKD patients, haemodialysis and peritoneal dialysis patients and renal transplant recipients present elevated levels of OPG [11,12,13]. As OPG is involved in vascular calcification, it is also associated with cardiovascular mortality in CKD patients [14]. Decreased levels of OPG are observed in nephrotic syndrome, which is probably caused by the loss with urine. Glucocorticosteroid treatment increases RANKL and reduces OPG expression, which also decreases the OPG levels in patients with nephrotic syndrome [15]. The purpose of our study was to investigate the association between OPG levels and the main complications of chronic kidney disease, including anaemia, protein energy wasting, inflammation and poor prognosis factors of CKD progression such as overhydration, hyperglycaemia and hypertension.
2. Methods
2.1. Design A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2. 2.2. Patients Participants were recruited among patients visiting the Nephrological Outpatient Clinic for a routine control during the period between November 2018 and February 2020. If they fulfilled the inclusion criteria, a new visit was arranged for participation in the study. The participants were asked to avoid saunas, physical exertion and alcohol consumption the day before the examination. The visit took place after overnight fasting. One hundred men with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2 were included in the study. The exclusion criteria were: renal replacement therapy or its requirement within the following 3 months, clinical signs of infection and presence of metal parts in the body. The study protocol was accepted by the local ethics committee (Bioethics Committee in Military Institute of Medicine, IRB acceptance number 120/WIM/2018 obtained 22 August 2018). All participants signed an informed consent.
Body composition including overhydration (OH), fat amount and lean body mass were measured by bioimpedance spectroscopy with the use of a Body Composition Monitor (Fresenius Medical Care). While being measured, patients stayed in a supine position after a 5 min rest. Electrodes were placed in a tetrapolar configuration (on one hand and one foot). Blood samples for standard measurements were collected after an overnight fast and were transported immediately to the local Department of Laboratory Diagnostics. Concentrations of high-sensitivity C-reactive protein were determined by a nephelometry assay (BNII Siemens) with a cut-off point of 0.8 mg/dL. Serum creatinine concentrations were measured using the Jaffe method (Gen.2; Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland), and serum albumin levels using a BCP Albumin Assay Kit (Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland). Samples for measuring osteoprotegerin (OPG) levels were kept frozen at −80 °C. OPG levels were assessed using the Luminex MAGPIX platform. GFR was calculated according to the short Modification of Diet in Renal Disease (MDRD) formula:
GFR (mL/min per 1.73 m2) = 175 × SerumCr^(−1.154) × age^(−0.203) × 1.212 (if patient is black) × 0.742 (if female)
2.3. Defining the Complications of Chronic Kidney Disease Complications of chronic kidney disease were defined as follows:
- Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]
- Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]
- Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]
- Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]
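The short MDRD formula and the threshold-based complication definitions above translate directly into code. The sketch below is only an illustration of those published cut-offs; the patient record and creatinine value used in the example are hypothetical.

# Minimal sketch of the short MDRD eGFR formula and the Section 2.3
# complication definitions. The patient values below are hypothetical.

def egfr_mdrd(serum_cr_mg_dl, age_years, is_black=False, is_female=False):
    """Short MDRD eGFR in mL/min/1.73 m2."""
    gfr = 175.0 * serum_cr_mg_dl ** -1.154 * age_years ** -0.203
    if is_black:
        gfr *= 1.212
    if is_female:
        gfr *= 0.742
    return gfr

def complications(p):
    """Classify CKD complications using the thresholds defined in Section 2.3."""
    return {
        "anaemia": p["haemoglobin_g_dl"] < 12,
        "PEW": (p["albumin_g_dl"] < 3.8 or p["cholesterol_mg_dl"] < 100
                or p["bmi_kg_m2"] < 23 or p["fat_percent"] < 10),
        "inflammation": p["crp_mg_dl"] > 0.8 or p["fibrinogen_mg_dl"] > 400,
        "poor_prognosis": (p["hba1c_percent"] > 8 or p["sbp_mmHg"] > 140
                           or p["overhydration_l"] > 4),
    }

patient = {  # hypothetical values for illustration only
    "haemoglobin_g_dl": 11.2, "albumin_g_dl": 3.9, "cholesterol_mg_dl": 160,
    "bmi_kg_m2": 27.5, "fat_percent": 24, "crp_mg_dl": 1.1,
    "fibrinogen_mg_dl": 380, "hba1c_percent": 7.2, "sbp_mmHg": 150,
    "overhydration_l": 2.0,
}
print(f"eGFR: {egfr_mdrd(2.1, 66):.1f} mL/min/1.73 m2")
print(complications(patient))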
2.4. Statistical Analysis The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant.
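The original analysis was run in Statistica, so the sketch below is only an illustrative Python analogue of the model-comparison idea: fit logistic models with and without OPG, compute the ROC AUC for each, and attach the Hanley–McNeil standard error. The z-test at the end is a simplified version that ignores the correlation between the two ROC curves (the full Hanley method corrects for it), and all data are simulated.

# Illustrative sketch (not the original Statistica analysis): AUC with
# Hanley-McNeil SE for logistic models with and without OPG, on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from scipy.stats import norm

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Hanley & McNeil approximation of the standard error of an AUC."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    return np.sqrt(var)

rng = np.random.default_rng(2)
n = 100
age = rng.normal(66, 8, n)
gfr = rng.normal(35, 12, n)
opg = rng.normal(7, 2, n) - 0.05 * gfr           # OPG rises as GFR falls
logit = -4 + 0.03 * age - 0.02 * gfr + 0.4 * opg
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # complication present/absent

X_base = np.column_stack([age, gfr])
X_full = np.column_stack([age, gfr, opg])

aucs = {}
for label, X in [("age + GFR", X_base), ("age + GFR + OPG", X_full)]:
    prob = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    auc = roc_auc_score(y, prob)
    se = hanley_mcneil_se(auc, n_pos=int(y.sum()), n_neg=int((1 - y).sum()))
    aucs[label] = (auc, se)
    print(f"{label}: AUC = {auc:.2f} ± {se:.2f}")

(a1, s1), (a2, s2) = aucs["age + GFR"], aucs["age + GFR + OPG"]
z = (a2 - a1) / np.sqrt(s1 ** 2 + s2 ** 2)        # simplified: curves treated as independent
print(f"z = {z:.2f}, two-sided p = {2 * (1 - norm.cdf(abs(z))):.3f}")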
3. Results
The study population consisted of 100 male patients with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2, non-treated with dialysis. The median age of the studied population was 66 years (59–72). Clinical data are presented in Table 1. We observed significant, positive correlations between OPG and age, serum creatinine concentration, CRP, fibrinogen, HgBA1C, systolic blood pressure and overhydration. Significant, negative correlations were observed between OPG and eGFR, serum albumin concentration as well as serum haemoglobin level. Table 2. Logistic regression models, adjusted for age and GFR, were created to evaluate the influence of the OPG level as an independent marker of complications in chronic kidney disease patients, such as anaemia, PEW, an inflammatory state, poor prognostic factors of CKD progression. 3.1. Anaemia The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001). The model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1. 3.2. Protein Energy Wasting (PEW) OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2. 3.3. Inflammatory State OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3. 3.4. Poor Prognostic Factors (Overhydration, Hyperglycaemia and Hypertension) Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4.
5. Conclusions
CKD is in itself a cardiovascular risk factor, and cardiovascular mortality in CKD patients is high. CVD mortality in CKD patients is even greater in the presence of diabetes and hypertension [30]. Early detection of such complications and the identification of patients at risk of developing them is vital. Therefore, new markers that can predict metabolic disorders and have an unfavourable effect on vascular damage are necessary. OPG seems to be a possible marker associated with PEW, inflammation and vascular metabolic disturbances which is worth further study.
[ "2.1. Design", "2.3. Defining the Complications of Chronic Kidney Disease", "2.4. Statistical Analysis", "3.1. Anaemia", "3.2. Protein Energy Wasting (PEW)", "3.3. Inflammatory State", "3.4. Poor Prognostic Factors (Overhydration, Hyperglycaemia and Hypertension)" ]
[ "A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2.", "Complications of chronic kidney disease were defined as follows:-Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]-Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]-Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]-Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]\nAnaemia, when serum haemoglobin level was lower than 12 g/dL [16]\nProtein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]\nInflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]\nPoor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]", "The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant.", "The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001).\nThe model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1.", "OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2.", "OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3.", "Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). 
The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4." ]
[ null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Design", "2.2. Patients", "2.3. Defining the Complications of Chronic Kidney Disease", "2.4. Statistical Analysis", "3. Results", "3.1. Anaemia", "3.2. Protein Energy Wasting (PEW)", "3.3. Inflammatory State", "3.4. Poor Prognostic Factors (Overhydration, Hyperglycaemia and Hypertension)", "4. Discussion", "5. Conclusions" ]
[ "Osteoprotegerin (OPG) is a molecule belonging to the tumour necrosis factor (TNF) superfamily. It is a decoy receptor for TNF-related apoptosis-inducing ligand (TRAIL). The TNF superfamily regulates immune responses, homeostasis as well as haematopoiesis [1]. Initially, OPG was identified as a bone resorption inhibitor that blocks the binding of RANK (receptor activator of nuclear factor kappa-β) to its ligand RANKL [2]. Further studies revealed that it has an additional impact on the cardiovascular and immune systems [3]. The OPG/RANKL/RANK system is involved in pathological angiogenesis, inflammatory states and cell survival. OPG is a glycoprotein released by vascular smooth muscle cells, endothelial cells, osteoblasts and immune cells. There is a signalling pathway between endothelial cells and osteoblasts during osteogenesis creating a connection between angiogenesis and osteogenesis [4]. The OPG/RANKL/RANK/TRAIL system, via receptors located on the cell surface, sends intracellular signals and modifies gene expression. As a result, monocytes, neutrophils and endothelial cells are recruited through cytokine production and receptor activation [5]. Serum OPG levels are associated with endothelial dysfunction and mediate vascular calcification [6]. Elevated OPG levels were observed in patients with atherosclerosis, heart failure, metabolic syndrome and diabetes [7,8,9].\nThe role of OPG in kidney pathologies is not well understood. OPG is expressed in kidney samples, cultured tubular cells and urinary exosome-like vesicles. It has been hypothesized that it can play a role in matrix deposition, inflammation and apoptosis [10]. Furthermore, the role of OPG in cardiovascular pathology and vascular calcification, which are common complications of chronic kidney disease, makes OPG an interesting marker also in CKD. When compared to healthy individuals, non-dialysis-dependent CKD patients, haemodialysis and peritoneal dialysis patients and renal transplant recipients present elevated levels of OPG [11,12,13]. As OPG is involved in vascular calcification, it is also associated with cardiovascular mortality in CKD patients [14]. Decreased levels of OPG are observed in nephrotic syndrome, which is probably caused by the loss with urine. Glucocorticosteroid treatment increases RANKL and reduces OPG expression, which also decreases the OPG levels in patients with nephrotic syndrome [15].\nThe purpose of our study was to investigate the association between OPG levels and the main complications of chronic kidney disease, including anaemia, protein energy wasting, inflammation and poor prognosis factors of CKD progression such as overhydration, hyperglycaemia and hypertension.", " 2.1. Design A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2.\nA prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2.\n 2.2. Patients Participants were recruited among patients visiting the Nephrological Outpatient Clinic for a routine control during the period between November 2018 and February 2020. If they fulfilled the inclusion criteria, a new visit was arranged for participation in the study. The participants were asked to avoid saunas, physical exertion and alcohol consumption the day before the examination. 
The visit took place after overnight fasting.\nOne hundred men with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2 were included in the study. The exclusion criteria were: renal replacement therapy or its requirement within the following 3 months, clinical signs of infection and presence of metal parts in the body. The study protocol was accepted by the local ethics committee (Bioethics Committee in Military Institute of Medicine, IRB acceptance number 120/WIM/2018 obtained 22 August 2018). All participants signed an informed consent.\nBody composition including overhydration (OH), fat amount and lean body mass were measured by bioimpedance spectroscopy with the use of a Body Composition Monitor (Fresenius Medical Care). While being measured, patients stayed in a supine position after a 5 min rest. Electrodes were placed in a tetrapolar configuration (on one hand and one foot).\nBlood samples for standard measurements were collected after an overnight fast and were transported immediately to the local Department of Laboratory Diagnostics. Concentrations of high-sensitivity C-reactive protein were determined by a nephelometry assay (BNII Siemens) with a cut-off point of 0.8 mg/dL. Serum creatinine concentrations were measured using the Jaffe method (Gen.2; Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland), and serum albumin levels using a BCP Albumin Assay Kit (Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland). Samples for measuring osteoprotegerin (OPG) levels were kept frozen at −80 °C. OPG levels were assessed using the Luminex MAGPIX platform.\nGFR was calculated according to the short Modification of Diet in Renal Disease (MDRD) formula.\nGFR in mL/min per 1.73 m2 = 175 × SerumCr−1.154 × age−0.203 × 1.212 (if patient is black) × 0.742 (if female)\nParticipants were recruited among patients visiting the Nephrological Outpatient Clinic for a routine control during the period between November 2018 and February 2020. If they fulfilled the inclusion criteria, a new visit was arranged for participation in the study. The participants were asked to avoid saunas, physical exertion and alcohol consumption the day before the examination. The visit took place after overnight fasting.\nOne hundred men with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2 were included in the study. The exclusion criteria were: renal replacement therapy or its requirement within the following 3 months, clinical signs of infection and presence of metal parts in the body. The study protocol was accepted by the local ethics committee (Bioethics Committee in Military Institute of Medicine, IRB acceptance number 120/WIM/2018 obtained 22 August 2018). All participants signed an informed consent.\nBody composition including overhydration (OH), fat amount and lean body mass were measured by bioimpedance spectroscopy with the use of a Body Composition Monitor (Fresenius Medical Care). While being measured, patients stayed in a supine position after a 5 min rest. Electrodes were placed in a tetrapolar configuration (on one hand and one foot).\nBlood samples for standard measurements were collected after an overnight fast and were transported immediately to the local Department of Laboratory Diagnostics. Concentrations of high-sensitivity C-reactive protein were determined by a nephelometry assay (BNII Siemens) with a cut-off point of 0.8 mg/dL. 
Serum creatinine concentrations were measured using the Jaffe method (Gen.2; Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland), and serum albumin levels using a BCP Albumin Assay Kit (Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland). Samples for measuring osteoprotegerin (OPG) levels were kept frozen at −80 °C. OPG levels were assessed using the Luminex MAGPIX platform.\nGFR was calculated according to the short Modification of Diet in Renal Disease (MDRD) formula.\nGFR in mL/min per 1.73 m2 = 175 × SerumCr−1.154 × age−0.203 × 1.212 (if patient is black) × 0.742 (if female)\n 2.3. Defining the Complications of Chronic Kidney Disease Complications of chronic kidney disease were defined as follows:-Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]-Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]-Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]-Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]\nAnaemia, when serum haemoglobin level was lower than 12 g/dL [16]\nProtein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]\nInflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]\nPoor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]\nComplications of chronic kidney disease were defined as follows:-Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]-Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]-Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]-Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]\nAnaemia, when serum haemoglobin level was lower than 12 g/dL [16]\nProtein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]\nInflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]\nPoor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]\n 2.4. Statistical Analysis The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). 
Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant.\nThe characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant.", "A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2.", "Participants were recruited among patients visiting the Nephrological Outpatient Clinic for a routine control during the period between November 2018 and February 2020. If they fulfilled the inclusion criteria, a new visit was arranged for participation in the study. The participants were asked to avoid saunas, physical exertion and alcohol consumption the day before the examination. The visit took place after overnight fasting.\nOne hundred men with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2 were included in the study. The exclusion criteria were: renal replacement therapy or its requirement within the following 3 months, clinical signs of infection and presence of metal parts in the body. The study protocol was accepted by the local ethics committee (Bioethics Committee in Military Institute of Medicine, IRB acceptance number 120/WIM/2018 obtained 22 August 2018). All participants signed an informed consent.\nBody composition including overhydration (OH), fat amount and lean body mass were measured by bioimpedance spectroscopy with the use of a Body Composition Monitor (Fresenius Medical Care). While being measured, patients stayed in a supine position after a 5 min rest. Electrodes were placed in a tetrapolar configuration (on one hand and one foot).\nBlood samples for standard measurements were collected after an overnight fast and were transported immediately to the local Department of Laboratory Diagnostics. Concentrations of high-sensitivity C-reactive protein were determined by a nephelometry assay (BNII Siemens) with a cut-off point of 0.8 mg/dL. Serum creatinine concentrations were measured using the Jaffe method (Gen.2; Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland), and serum albumin levels using a BCP Albumin Assay Kit (Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland). Samples for measuring osteoprotegerin (OPG) levels were kept frozen at −80 °C. 
OPG levels were assessed using the Luminex MAGPIX platform.\nGFR was calculated according to the short Modification of Diet in Renal Disease (MDRD) formula.\nGFR in mL/min per 1.73 m2 = 175 × SerumCr−1.154 × age−0.203 × 1.212 (if patient is black) × 0.742 (if female)", "Complications of chronic kidney disease were defined as follows:-Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]-Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]-Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]-Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]\nAnaemia, when serum haemoglobin level was lower than 12 g/dL [16]\nProtein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]\nInflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]\nPoor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18]", "The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant.", "The study population consisted of 100 male patients with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2, non-treated with dialysis. The median age of the studied population was 66 years (59–72). Clinical data are presented in Table 1.\nWe observed significant, positive correlations between OPG and age, serum creatinine concentration, CRP, fibrinogen, HgBA1C, systolic blood pressure and overhydration. Significant, negative correlations were observed between OPG and eGFR, serum albumin concentration as well as serum haemoglobin level. Table 2.\nLogistic regression models, adjusted for age and GFR, were created to evaluate the influence of the OPG level as an independent marker of complications in chronic kidney disease patients, such as anaemia, PEW, an inflammatory state, poor prognostic factors of CKD progression.\n 3.1. Anaemia The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001).\nThe model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. 
However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1.\nThe OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001).\nThe model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1.\n 3.2. Protein Energy Wasting (PEW) OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2.\nOPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2.\n 3.3. Inflammatory State OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3.\nOPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3.\n 3.4. Poor Prognostic Factors (Overhydration, Hyperglycaemia and Hypertension) Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4.\nOverhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4.", "The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001).\nThe model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1.", "OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). 
However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2.", "OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3.", "Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4.", "The role of OPG in chronic kidney disease patients is insufficiently comprehended. We know that the OPG level increases together with a decrease in the glomerular filtration rate, as proved in many studies [20,21]. The results of our study also confirm these observations. In a population of non-dialysis-dependent men with stage 3–5 chronic kidney disease, a significant, inverse correlation between OPG levels and eGFR was noted (R = −0.36; p < 0.001).\nIn our study, we also observed significant correlations between OPG and parameters which indicate CKD complications. The OPG level was correlated with the level of haemoglobins and acted as a marker of anaemia occurrence in the studied population. The logistic regression model including OPG was superior to that including only age and GFR in identifying patients with anaemia. However, the difference was not of statistical significance. OPG, as a member of the TNF superfamily, can influence haematopoiesis. Some researchers have observed that TNF superfamily members can regulate the growth of hematopoietic stem cells and progenitor cells [1,22].\nAside from anaemia, protein energy wasting (PEW) is also a complication of chronic kidney disease. The components of PEW are low serum albumin, cholesterol levels as well as low BMI and fat amount [17]. In our study, the OPG level was a marker of PEW in non-dialysis-dependent CKD men. PEW aetiology in CKD patients is associated with many factors such as hormonal disorders, insulin resistance and a subclinical inflammatory state. In our study, the OPG levels also correlated with the CRP and fibrinogen levels. OPG was also a significant marker of instances of inflammation. The logistic regression model that included OPG proved significantly better in in the identification of subclinical inflammation than the model which only included age and GFR. This finding is unsurprising, because OPG is known for its pro-inflammatory action. OPG has been described as a marker of endothelial damage associated with the inflammatory process, and in vitro studies have shown that OPG is involved in inflammatory cell chemotaxis [23].\nMost studies concerning OPG in CKD patients concentrate on the association between OPG and vascular calcification. OPG levels correlate with vascular calcification in non-dialysis-dependent CKD patients as well as in patients treated with renal replacement therapy [24,25]. In animal studies, mice without OPG developed severe vascular calcification [26]. 
Therefore, it is hypothesized that OPG has a protective role in vascular injury, and the elevation of OPG levels inhibits vascular calcification. However, vascular calcification can also occur in the medial as well as in the intimal layer. In atherosclerosis, the intima is thickened, inflamed and calcified, forming plaques with diffuse localisation along the vessel walls, whereas medial calcification occurs along the elastic lamina, leading to stiffness of the artery wall. Studies in animal models, including Apolipoprotein E-deficient mice, suggest that OPG plays a protective, anti-calcification role in both the medial and the intimal (atherosclerotic) calcification process [27].\nThe OPG levels increase rapidly in the early stages of vascular calcification and subsequently remain almost unchanged [21]. Therefore, OPG appears to be a marker of the onset of atherosclerosis. However, the development of cardiovascular disease (CVD) begins with endothelial dysfunction. The earliest manifestation of CVD is observed in the area of microcirculation, and in CKD patients, this area is affected by the loss of homeostasis, even in cases of non-severe renal impairment [28]. OPG, as a factor released by endothelial cells, can be a marker of endothelial dysfunction and is probably involved in a complex chain of pathophysiological connections between chronic kidney disease and CVD.\nIn our study, we analysed factors of poor prognosis of CKD progression which are associated with vascular damage, such as overhydration, hypertension and glycaemic disturbances expressed by HgbA1c. Positive, significant correlations were observed between OPG levels and all three parameters. In other studies, OPG levels were associated with glycaemic status, and higher OPG levels were observed in patients with poor glycaemic control [29]. In our study, OPG was independently associated with the presence of poor prognostic factors (overhydration, hypertension and glycaemic disturbances) irrespective of age and eGFR.\nThe limitation of our study is its relatively small sample size. An increase in the number of participants could enable a division of the study population into various subgroups according to the different stages of chronic kidney disease. Complications of CKD are more severe in more advanced stages of CKD, and the influence of OPG could also be more significant.", "CKD is in itself a cardiovascular risk factor, and cardiovascular mortality in CKD patients is high. CVD mortality in CKD patients is even greater in the presence of diabetes and hypertension [30]. Early detection of such complications and the identification of patients at risk of developing them is vital. Therefore, new markers that can predict metabolic disorders which adversely affect the vasculature are necessary. OPG seems to be a possible marker associated with PEW, inflammation and vascular metabolic disturbances and is worth further study." ]
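The AUC ± SE comparisons reported above (models with and without OPG, compared with Hanley's method) can be illustrated with a small Python sketch. This is only a hedged example: the function name, the assumed correlation term r and the printed p-value are illustrative, since r has to be estimated from the underlying patient data, which are not reproduced here.

```python
# Hypothetical sketch: comparing two ROC AUCs obtained on the same subjects
# (model with OPG + age + GFR vs. age + GFR), in the spirit of Hanley & McNeil (1983).
# The correlation r between the two AUC estimates is an assumed input; in the
# published analysis it would be derived from the data, so exact p-values differ.
import math

def compare_correlated_aucs(auc1, se1, auc2, se2, r=0.0):
    """Return (z, two-sided p) for the difference between two correlated AUCs."""
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2 - 2.0 * r * se1 * se2)
    z = (auc1 - auc2) / se_diff
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Example with the anaemia models reported above (r = 0.5 is a placeholder).
z, p = compare_correlated_aucs(0.84, 0.05, 0.76, 0.06, r=0.5)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```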
[ "intro", "methods", null, "subjects", null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "chronic kidney disease", "kidney", "inflammation", "protein energy-wasting", "metabolic complications", "bioimpedance spectroscopy" ]
1. Introduction: Osteoprotegerin (OPG) is a molecule belonging to the tumour necrosis factor (TNF) superfamily. It is a decoy receptor for TNF-related apoptosis-inducing ligand (TRAIL). The TNF superfamily regulates immune responses, homeostasis as well as haematopoiesis [1]. Initially, OPG was identified as a bone resorption inhibitor that blocks the binding of RANK (receptor activator of nuclear factor kappa-β) to its ligand RANKL [2]. Further studies revealed that it has an additional impact on the cardiovascular and immune systems [3]. The OPG/RANKL/RANK system is involved in pathological angiogenesis, inflammatory states and cell survival. OPG is a glycoprotein released by vascular smooth muscle cells, endothelial cells, osteoblasts and immune cells. There is a signalling pathway between endothelial cells and osteoblasts during osteogenesis creating a connection between angiogenesis and osteogenesis [4]. The OPG/RANKL/RANK/TRAIL system, via receptors located on the cell surface, sends intracellular signals and modifies gene expression. As a result, monocytes, neutrophils and endothelial cells are recruited through cytokine production and receptor activation [5]. Serum OPG levels are associated with endothelial dysfunction and mediate vascular calcification [6]. Elevated OPG levels were observed in patients with atherosclerosis, heart failure, metabolic syndrome and diabetes [7,8,9]. The role of OPG in kidney pathologies is not well understood. OPG is expressed in kidney samples, cultured tubular cells and urinary exosome-like vesicles. It has been hypothesized that it can play a role in matrix deposition, inflammation and apoptosis [10]. Furthermore, the role of OPG in cardiovascular pathology and vascular calcification, which are common complications of chronic kidney disease, makes OPG an interesting marker also in CKD. When compared to healthy individuals, non-dialysis-dependent CKD patients, haemodialysis and peritoneal dialysis patients and renal transplant recipients present elevated levels of OPG [11,12,13]. As OPG is involved in vascular calcification, it is also associated with cardiovascular mortality in CKD patients [14]. Decreased levels of OPG are observed in nephrotic syndrome, which is probably caused by the loss with urine. Glucocorticosteroid treatment increases RANKL and reduces OPG expression, which also decreases the OPG levels in patients with nephrotic syndrome [15]. The purpose of our study was to investigate the association between OPG levels and the main complications of chronic kidney disease, including anaemia, protein energy wasting, inflammation and poor prognosis factors of CKD progression such as overhydration, hyperglycaemia and hypertension. 2. Methods: 2.1. Design A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2. A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2. 2.2. Patients Participants were recruited among patients visiting the Nephrological Outpatient Clinic for a routine control during the period between November 2018 and February 2020. If they fulfilled the inclusion criteria, a new visit was arranged for participation in the study. The participants were asked to avoid saunas, physical exertion and alcohol consumption the day before the examination. 
The visit took place after overnight fasting. One hundred men with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2 were included in the study. The exclusion criteria were: renal replacement therapy or its requirement within the following 3 months, clinical signs of infection and presence of metal parts in the body. The study protocol was accepted by the local ethics committee (Bioethics Committee in Military Institute of Medicine, IRB acceptance number 120/WIM/2018 obtained 22 August 2018). All participants signed an informed consent. Body composition including overhydration (OH), fat amount and lean body mass were measured by bioimpedance spectroscopy with the use of a Body Composition Monitor (Fresenius Medical Care). While being measured, patients stayed in a supine position after a 5 min rest. Electrodes were placed in a tetrapolar configuration (on one hand and one foot). Blood samples for standard measurements were collected after an overnight fast and were transported immediately to the local Department of Laboratory Diagnostics. Concentrations of high-sensitivity C-reactive protein were determined by a nephelometry assay (BNII Siemens) with a cut-off point of 0.8 mg/dL. Serum creatinine concentrations were measured using the Jaffe method (Gen.2; Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland), and serum albumin levels using a BCP Albumin Assay Kit (Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland). Samples for measuring osteoprotegerin (OPG) levels were kept frozen at −80 °C. OPG levels were assessed using the Luminex MAGPIX platform. GFR was calculated according to the short Modification of Diet in Renal Disease (MDRD) formula: GFR (mL/min per 1.73 m2) = 175 × (serum creatinine)^(−1.154) × (age)^(−0.203) × 1.212 (if the patient is black) × 0.742 (if female). 2.3. Defining the Complications of Chronic Kidney Disease Complications of chronic kidney disease were defined as follows: -Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]; -Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or fat was lower than 10% [17]; -Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]; -Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration (OH) >4 L [18]. 2.4. Statistical Analysis The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. 
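As a side note, the calculations defined in Sections 2.2 and 2.3 can be sketched in a few lines of Python. This is a hypothetical illustration only: the function and field names, and the assumption that serum creatinine is expressed in mg/dL, are not taken from the original analysis, which was performed in Statistica.

```python
# Hypothetical sketch of the short MDRD formula and the CKD complication
# definitions used in this study; variable names and units (creatinine in
# mg/dL) are assumptions, not part of the published analysis.

def egfr_mdrd(serum_cr_mg_dl: float, age_years: float,
              is_black: bool = False, is_female: bool = False) -> float:
    """Short MDRD eGFR in mL/min per 1.73 m2."""
    gfr = 175.0 * serum_cr_mg_dl ** -1.154 * age_years ** -0.203
    if is_black:
        gfr *= 1.212
    if is_female:
        gfr *= 0.742
    return gfr

def complications(hgb, albumin, cholesterol, bmi, fat_pct,
                  crp, fbg, hba1c, sbp, overhydration_l):
    """Flag the CKD complications as defined in Section 2.3."""
    return {
        "anaemia": hgb < 12.0,                                    # g/dL
        "pew": albumin < 3.8 or cholesterol < 100 or bmi < 23 or fat_pct < 10,
        "inflammation": crp > 0.8 or fbg > 400,                   # mg/dL
        "poor_prognosis": hba1c > 8 or sbp > 140 or overhydration_l > 4,
    }

# e.g. a 66-year-old man with serum creatinine 2.1 mg/dL (illustrative values)
print(round(egfr_mdrd(2.1, 66), 1))
```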
The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant. The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant. 2.1. Design: A prospective, observational study in male patients with chronic kidney disease non-treated with dialysis was performed. The inclusion criterion was eGFR lower than 60 mL/min/1.73 m2. 2.2. Patients: Participants were recruited among patients visiting the Nephrological Outpatient Clinic for a routine control during the period between November 2018 and February 2020. If they fulfilled the inclusion criteria, a new visit was arranged for participation in the study. The participants were asked to avoid saunas, physical exertion and alcohol consumption the day before the examination. The visit took place after overnight fasting. One hundred men with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2 were included in the study. The exclusion criteria were: renal replacement therapy or its requirement within the following 3 months, clinical signs of infection and presence of metal parts in the body. The study protocol was accepted by the local ethics committee (Bioethics Committee in Military Institute of Medicine, IRB acceptance number 120/WIM/2018 obtained 22 August 2018). All participants signed an informed consent. Body composition including overhydration (OH), fat amount and lean body mass were measured by bioimpedance spectroscopy with the use of a Body Composition Monitor (Fresenius Medical Care). While being measured, patients stayed in a supine position after a 5 min rest. Electrodes were placed in a tetrapolar configuration (on one hand and one foot). Blood samples for standard measurements were collected after an overnight fast and were transported immediately to the local Department of Laboratory Diagnostics. Concentrations of high-sensitivity C-reactive protein were determined by a nephelometry assay (BNII Siemens) with a cut-off point of 0.8 mg/dL. Serum creatinine concentrations were measured using the Jaffe method (Gen.2; Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland), and serum albumin levels using a BCP Albumin Assay Kit (Roche Diagnostics GmbH, Risch-Rotkreuz, Switzerland). Samples for measuring osteoprotegerin (OPG) levels were kept frozen at −80 °C. OPG levels were assessed using the Luminex MAGPIX platform. GFR was calculated according to the short Modification of Diet in Renal Disease (MDRD) formula. 
GFR in mL/min per 1.73 m2 = 175 × SerumCr−1.154 × age−0.203 × 1.212 (if patient is black) × 0.742 (if female) 2.3. Defining the Complications of Chronic Kidney Disease: Complications of chronic kidney disease were defined as follows:-Anaemia, when serum haemoglobin level was lower than 12 g/dL [16]-Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17]-Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17]-Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18] Anaemia, when serum haemoglobin level was lower than 12 g/dL [16] Protein energy wasting (PEW), when one of the following occurred: serum albumin level was lower than 3.8 g/dL, total cholesterol level was lower than 100 mg/dL, BMI was lower than 23 kg/m2 or Fat was lower than 10% [17] Inflammatory state, when CRP >0.8 mg/dL or Fbg >400 mg/dL [17] Poor prognostic factors of CKD progression, when HbA1c >8% or SBP >140 mmHg or presence of overhydration, i.e., (OH) >4 L [18] 2.4. Statistical Analysis: The characteristics of the study population are presented using medians with interquartile ranges (for continuous data with distribution other than normal, tested with the Shapiro–Wilk test). The correlation between OPG and clinical and laboratory parameters was assessed using the Spearman rank correlation coefficient. Logistic regression was used to investigate the significance of OPG levels as a marker of metabolic complications of CKD. Given the known relationship between OPG, age and GFR, we compared the value of logistic models, with and without OPG, for each metabolic complication (anaemia, PEW, inflammation, uncontrolled factors of CKD progression). Suitability of fit of each model was assessed using the area under the curve (AUC) with standard error (SE) and presented using the Receiver Operating Characteristic (ROC) curve. The models were compared using Hanley’s algorithm [19]. The analysis was performed with Statistica 13.1 with p-values < 0.05 considered statistically significant. 3. Results: The study population consisted of 100 male patients with chronic kidney disease and eGFR lower than 60 mL/min/1.73 m2, non-treated with dialysis. The median age of the studied population was 66 years (59–72). Clinical data are presented in Table 1. We observed significant, positive correlations between OPG and age, serum creatinine concentration, CRP, fibrinogen, HgBA1C, systolic blood pressure and overhydration. Significant, negative correlations were observed between OPG and eGFR, serum albumin concentration as well as serum haemoglobin level. Table 2. Logistic regression models, adjusted for age and GFR, were created to evaluate the influence of the OPG level as an independent marker of complications in chronic kidney disease patients, such as anaemia, PEW, an inflammatory state, poor prognostic factors of CKD progression. 3.1. Anaemia The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001). The model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1. 
The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001). The model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1. 3.2. Protein Energy Wasting (PEW) OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2. OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2. 3.3. Inflammatory State OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3. OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3. 3.4. Poor Prognostic Factors (Overhydration, Hyperglycaemia and Hypertension) Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4. Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4. 3.1. Anaemia: The OPG level was a significant, independent marker of anaemia in the studied population (p < 0.001). The model including OPG, age and GFR (AUC=0.84 ± 0.05) identified CKD patients with anaemia better than the model including only age and GFR (AUC=0.76 ± 0.06), as shown in Table 3. However, the difference was not of statistical significance (p = 0.096), as shown in Figure 1. 3.2. Protein Energy Wasting (PEW): OPG was a significant, independent risk factor for PEW occurrence in the study population (p < 0.001). However, the model including OPG, age and GFR was not significantly better (AUC = 0.79 ± 0.05) than those considering only age and GFR (AUC = 0.71 ± 0.06, p = 0.142), as shown in Table 4 and Figure 2. 3.3. Inflammatory State: OPG was a significant, independent risk factor for subclinical inflammation in the study population (p < 0.001). 
The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model including only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 5 and Figure 3. 3.4. Poor Prognostic Factors (Overhydration, Hyperglycaemia and Hypertension): Overhydration, hyperglycaemia and hypertension were chosen as poor prognostic factors of chronic kidney disease progression and were analysed together in logistic regression analysis. OPG was an independent marker that identified patients with poor prognostic factors (p < 0.001). The model including OPG, age and GFR was significantly better (AUC = 0.77 ± 0.05) than the model which included only age and GFR (AUC = 0.67 ± 0.06; p = 0.041), as shown in Table 6 and Figure 4. 4. Discussion: The role of OPG in chronic kidney disease patients is insufficiently comprehended. We know that the OPG level increases together with a decrease in the glomerular filtration rate, as proved in many studies [20,21]. The results of our study also confirm these observations. In a population of non-dialysis-dependent men with stage 3–5 chronic kidney disease, a significant, inverse correlation between OPG levels and eGFR was noted (R = −0.36; p < 0.001). In our study, we also observed significant correlations between OPG and parameters which indicate CKD complications. The OPG level was correlated with the level of haemoglobins and acted as a marker of anaemia occurrence in the studied population. The logistic regression model including OPG was superior to that including only age and GFR in identifying patients with anaemia. However, the difference was not of statistical significance. OPG, as a member of the TNF superfamily, can influence haematopoiesis. Some researchers have observed that TNF superfamily members can regulate the growth of hematopoietic stem cells and progenitor cells [1,22]. Aside from anaemia, protein energy wasting (PEW) is also a complication of chronic kidney disease. The components of PEW are low serum albumin, cholesterol levels as well as low BMI and fat amount [17]. In our study, the OPG level was a marker of PEW in non-dialysis-dependent CKD men. PEW aetiology in CKD patients is associated with many factors such as hormonal disorders, insulin resistance and a subclinical inflammatory state. In our study, the OPG levels also correlated with the CRP and fibrinogen levels. OPG was also a significant marker of instances of inflammation. The logistic regression model that included OPG proved significantly better in in the identification of subclinical inflammation than the model which only included age and GFR. This finding is unsurprising, because OPG is known for its pro-inflammatory action. OPG has been described as a marker of endothelial damage associated with the inflammatory process, and in vitro studies have shown that OPG is involved in inflammatory cell chemotaxis [23]. Most studies concerning OPG in CKD patients concentrate on the association between OPG and vascular calcification. OPG levels correlate with vascular calcification in non-dialysis-dependent CKD patients as well as in patients treated with renal replacement therapy [24,25]. In animal studies, mice without OPG developed severe vascular calcification [26]. Therefore, it is hypothesized that OPG has a protective role in vascular injury, and the elevation of OPG levels inhibits vascular calcification. However, vascular calcification can also occur in the medial as well as in the intimal layer. 
In atherosclerosis, intima is thickened, inflamed and calcified, forming plaques with diffuse localisation along the vessel walls, whereas medial calcification occurs along the elastic lamina, leading to stiffness of the artery wall. Studies in animal models, including Apolipoprotein E-deficient mice, suggest that OPG plays a protective, anti-calcification role in both the medial and the intimal (atherosclerotic) calcification process [27]. The OPG levels increase rapidly in the early stages of vascular calcification and subsequently remain almost invariant [21]. Therefore, OPG appears to be a marker of the onset of atherosclerosis. However, the development of cardiovascular disease (CVD) begins with endothelial dysfunction. The earliest manifestation of CVD is observed in the area of microcirculation, and in CKD patients, this area is affected by the loss of homeostasis, even in cases of non-severe renal impairment [28]. OPG, as a factor released by endothelial cells, can be a marker of endothelial dysfunction and may probably be involved in a complicated chain of pathophysiological connections between chronic kidney disease and CVD. In our study, we analysed factors of poor prognosis of CKD progression which are associated with vascular damage, such as overhydration, hypertension and glycaemic disturbances expressed by HgbA1c. Positive, significant correlations were observed between OPG levels and all three parameters. In other studies, OPG levels were associated with glycaemic status, and higher OPG levels were observed in patients with poor glycaemic control [29]. In our study, OPG was identified independently, associated with the presence of poor prognostic factors (overhydration, hypertension and glycaemic disturbances) irrespective of age and eGFR. The limitation of our study is its relatively small sample size. An increase in the number of participants could enable a division of the study population into various subgroups according to the different stages of chronic kidney disease. Complications of CKD are more severe in more advanced stages of CKD, and OPG influence could also be more significant. 5. Conclusions: CKD is in itself a cardiovascular risk factor, and cardiovascular mortality in CKD patients is high. CVD mortality in CKD patients is even greater in the presence of diabetes and hypertension [30]. Early detection of such complications and the identification of patients at risk of developing them is vital. Therefore, new markers that can predict metabolic disorders and have an unfavourable effect on vascular damage are necessary. OPG seems to be a possible marker associated with PEW, inflammation and vascular metabolic disturbances which is worth further study.
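For completeness, the with/without-OPG logistic model comparison described in the Methods could be reproduced along the following lines. This is only a sketch under assumed column and file names, not the authors' actual workflow, which used Statistica 13.1 and Hanley's method for the ROC comparison.

```python
# Hypothetical sketch of the logistic-model comparison described in Section 2.4.
# Column names ("opg", "age", "gfr", "anaemia") and the file name are assumptions;
# the original analysis was run in Statistica, not Python.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ckd_cohort.csv")          # hypothetical data file
y = df["anaemia"]                           # 1 = haemoglobin < 12 g/dL

def fitted_auc(columns):
    """Fit a logistic model on the given predictors and return its in-sample AUC."""
    model = LogisticRegression(max_iter=1000).fit(df[columns], y)
    return roc_auc_score(y, model.predict_proba(df[columns])[:, 1])

auc_base = fitted_auc(["age", "gfr"])
auc_opg = fitted_auc(["opg", "age", "gfr"])
print(f"AUC age+GFR: {auc_base:.2f}, AUC OPG+age+GFR: {auc_opg:.2f}")
# The two AUCs would then be compared with a correlated-AUC test
# (e.g. Hanley & McNeil), as in the sketch shown earlier.
```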
Background: Osteoprotegerin (OPG) belongs to the tumour necrosis factor superfamily and is known to accelerate endothelial dysfunction and vascular calcification. OPG concentrations are elevated in patients with chronic kidney disease. The aim of this study was to investigate the association between OPG levels and frequent complications of chronic kidney disease (CKD) such as anaemia, protein energy wasting (PEW), inflammation, overhydration, hyperglycaemia and hypertension. Methods: One hundred non-dialysis-dependent men with CKD stage 3-5 were included in the study. Bioimpedance spectroscopy (BIS) was used to measure overhydration, fat amount and lean body mass. We also measured the serum concentrations of haemoglobin, albumin, total cholesterol, C-reactive protein (CRP), fibrinogen and glycated haemoglobin (HgbA1c), as well as blood pressure. Results: We observed a significant, positive correlation between OPG and age, serum creatinine, CRP, fibrinogen, HgbA1c concentrations, systolic blood pressure and overhydration. Negative correlations were observed between OPG and glomerular filtration rate (eGFR), serum albumin concentrations and serum haemoglobin level. Logistic regression models revealed that OPG is an independent marker of metabolic complications such as anaemia, PEW, inflammation and poor renal prognosis (including overhydration, uncontrolled diabetes and hypertension) in the studied population. Conclusions: Our results suggest that OPG can be an independent marker of PEW, inflammation and vascular metabolic disturbances in patients with chronic kidney disease.
1. Introduction: Osteoprotegerin (OPG) is a molecule belonging to the tumour necrosis factor (TNF) superfamily. It is a decoy receptor for TNF-related apoptosis-inducing ligand (TRAIL). The TNF superfamily regulates immune responses, homeostasis as well as haematopoiesis [1]. Initially, OPG was identified as a bone resorption inhibitor that blocks the binding of RANK (receptor activator of nuclear factor kappa-β) to its ligand RANKL [2]. Further studies revealed that it has an additional impact on the cardiovascular and immune systems [3]. The OPG/RANKL/RANK system is involved in pathological angiogenesis, inflammatory states and cell survival. OPG is a glycoprotein released by vascular smooth muscle cells, endothelial cells, osteoblasts and immune cells. There is a signalling pathway between endothelial cells and osteoblasts during osteogenesis creating a connection between angiogenesis and osteogenesis [4]. The OPG/RANKL/RANK/TRAIL system, via receptors located on the cell surface, sends intracellular signals and modifies gene expression. As a result, monocytes, neutrophils and endothelial cells are recruited through cytokine production and receptor activation [5]. Serum OPG levels are associated with endothelial dysfunction and mediate vascular calcification [6]. Elevated OPG levels were observed in patients with atherosclerosis, heart failure, metabolic syndrome and diabetes [7,8,9]. The role of OPG in kidney pathologies is not well understood. OPG is expressed in kidney samples, cultured tubular cells and urinary exosome-like vesicles. It has been hypothesized that it can play a role in matrix deposition, inflammation and apoptosis [10]. Furthermore, the role of OPG in cardiovascular pathology and vascular calcification, which are common complications of chronic kidney disease, makes OPG an interesting marker also in CKD. When compared to healthy individuals, non-dialysis-dependent CKD patients, haemodialysis and peritoneal dialysis patients and renal transplant recipients present elevated levels of OPG [11,12,13]. As OPG is involved in vascular calcification, it is also associated with cardiovascular mortality in CKD patients [14]. Decreased levels of OPG are observed in nephrotic syndrome, which is probably caused by the loss with urine. Glucocorticosteroid treatment increases RANKL and reduces OPG expression, which also decreases the OPG levels in patients with nephrotic syndrome [15]. The purpose of our study was to investigate the association between OPG levels and the main complications of chronic kidney disease, including anaemia, protein energy wasting, inflammation and poor prognosis factors of CKD progression such as overhydration, hyperglycaemia and hypertension. 5. Conclusions: CKD is in itself a cardiovascular risk factor, and cardiovascular mortality in CKD patients is high. CVD mortality in CKD patients is even greater in the presence of diabetes and hypertension [30]. Early detection of such complications and the identification of patients at risk of developing them is vital. Therefore, new markers that can predict metabolic disorders and have an unfavourable effect on vascular damage are necessary. OPG seems to be a possible marker associated with PEW, inflammation and vascular metabolic disturbances which is worth further study.
Background: Osteoprotegerin (OPG) belongs to the tumour necrosis factor superfamily and is known to accelerate endothelial dysfunction and vascular calcification. OPG concentrations are elevated in patients with chronic kidney disease. The aim of this study was to investigate the association between OPG levels and frequent complications of chronic kidney disease (CKD) such as anaemia, protein energy wasting (PEW), inflammation, overhydration, hyperglycaemia and hypertension. Methods: One hundred non-dialysis-dependent men with CKD stage 3-5 were included in the study. Bioimpedance spectroscopy (BIS) was used to measure overhydration, fat amount and lean body mass. We also measured the serum concentrations of haemoglobin, albumin, total cholesterol, C-reactive protein (CRP), fibrinogen and glycated haemoglobin (HgbA1c), as well as blood pressure. Results: We observed a significant, positive correlation between OPG and age, serum creatinine, CRP, fibrinogen, HgbA1c concentrations, systolic blood pressure and overhydration. Negative correlations were observed between OPG and glomerular filtration rate (eGFR), serum albumin concentrations and serum haemoglobin level. Logistic regression models revealed that OPG is an independent marker of metabolic complications such as anaemia, PEW, inflammation and poor renal prognosis (including overhydration, uncontrolled diabetes and hypertension) in the studied population. Conclusions: Our results suggest that OPG can be an independent marker of PEW, inflammation and vascular metabolic disturbances in patients with chronic kidney disease.
5,272
277
[ 33, 252, 176, 82, 70, 68, 91 ]
13
[ "opg", "lower", "age", "gfr", "patients", "dl", "ckd", "study", "age gfr", "levels" ]
[ "connection angiogenesis osteogenesis", "introduction osteoprotegerin", "measuring osteoprotegerin opg", "osteoprotegerin opg levels", "angiogenesis osteogenesis opg" ]
[CONTENT] chronic kidney disease | kidney | inflammation | protein energy-wasting | metabolic complications | bioimpedance spectroscopy [SUMMARY]
[CONTENT] chronic kidney disease | kidney | inflammation | protein energy-wasting | metabolic complications | bioimpedance spectroscopy [SUMMARY]
[CONTENT] chronic kidney disease | kidney | inflammation | protein energy-wasting | metabolic complications | bioimpedance spectroscopy [SUMMARY]
[CONTENT] chronic kidney disease | kidney | inflammation | protein energy-wasting | metabolic complications | bioimpedance spectroscopy [SUMMARY]
[CONTENT] chronic kidney disease | kidney | inflammation | protein energy-wasting | metabolic complications | bioimpedance spectroscopy [SUMMARY]
[CONTENT] chronic kidney disease | kidney | inflammation | protein energy-wasting | metabolic complications | bioimpedance spectroscopy [SUMMARY]
[CONTENT] Aged | Anemia | Biomarkers | Humans | Inflammation | Logistic Models | Male | Middle Aged | Osteoprotegerin | Prognosis | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Aged | Anemia | Biomarkers | Humans | Inflammation | Logistic Models | Male | Middle Aged | Osteoprotegerin | Prognosis | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Aged | Anemia | Biomarkers | Humans | Inflammation | Logistic Models | Male | Middle Aged | Osteoprotegerin | Prognosis | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Aged | Anemia | Biomarkers | Humans | Inflammation | Logistic Models | Male | Middle Aged | Osteoprotegerin | Prognosis | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Aged | Anemia | Biomarkers | Humans | Inflammation | Logistic Models | Male | Middle Aged | Osteoprotegerin | Prognosis | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] Aged | Anemia | Biomarkers | Humans | Inflammation | Logistic Models | Male | Middle Aged | Osteoprotegerin | Prognosis | Renal Dialysis | Renal Insufficiency, Chronic [SUMMARY]
[CONTENT] connection angiogenesis osteogenesis | introduction osteoprotegerin | measuring osteoprotegerin opg | osteoprotegerin opg levels | angiogenesis osteogenesis opg [SUMMARY]
[CONTENT] connection angiogenesis osteogenesis | introduction osteoprotegerin | measuring osteoprotegerin opg | osteoprotegerin opg levels | angiogenesis osteogenesis opg [SUMMARY]
[CONTENT] connection angiogenesis osteogenesis | introduction osteoprotegerin | measuring osteoprotegerin opg | osteoprotegerin opg levels | angiogenesis osteogenesis opg [SUMMARY]
[CONTENT] connection angiogenesis osteogenesis | introduction osteoprotegerin | measuring osteoprotegerin opg | osteoprotegerin opg levels | angiogenesis osteogenesis opg [SUMMARY]
[CONTENT] connection angiogenesis osteogenesis | introduction osteoprotegerin | measuring osteoprotegerin opg | osteoprotegerin opg levels | angiogenesis osteogenesis opg [SUMMARY]
[CONTENT] connection angiogenesis osteogenesis | introduction osteoprotegerin | measuring osteoprotegerin opg | osteoprotegerin opg levels | angiogenesis osteogenesis opg [SUMMARY]
[CONTENT] opg | lower | age | gfr | patients | dl | ckd | study | age gfr | levels [SUMMARY]
[CONTENT] opg | lower | age | gfr | patients | dl | ckd | study | age gfr | levels [SUMMARY]
[CONTENT] opg | lower | age | gfr | patients | dl | ckd | study | age gfr | levels [SUMMARY]
[CONTENT] opg | lower | age | gfr | patients | dl | ckd | study | age gfr | levels [SUMMARY]
[CONTENT] opg | lower | age | gfr | patients | dl | ckd | study | age gfr | levels [SUMMARY]
[CONTENT] opg | lower | age | gfr | patients | dl | ckd | study | age gfr | levels [SUMMARY]
[CONTENT] opg | cells | rankl | levels | endothelial | vascular | immune | receptor | syndrome | calcification [SUMMARY]
[CONTENT] dl | lower | mg dl | mg | level lower | level | body | serum | m2 | 17 [SUMMARY]
[CONTENT] age | auc | age gfr | gfr | opg | model including | model | age gfr auc | table | gfr auc [SUMMARY]
[CONTENT] mortality ckd | mortality | mortality ckd patients | vascular | cardiovascular | ckd | risk | metabolic | patients | ckd patients [SUMMARY]
[CONTENT] opg | auc | gfr | age | age gfr | model | model including | patients | lower | dl [SUMMARY]
[CONTENT] opg | auc | gfr | age | age gfr | model | model including | patients | lower | dl [SUMMARY]
[CONTENT] Osteoprotegerin | OPG ||| OPG ||| OPG | PEW [SUMMARY]
[CONTENT] One hundred | CKD | 3-5 ||| Bioimpedance | BIS ||| CRP [SUMMARY]
[CONTENT] OPG | CRP | HgbA1c ||| OPG ||| OPG | PEW [SUMMARY]
[CONTENT] OPG | PEW [SUMMARY]
[CONTENT] Osteoprotegerin | OPG ||| OPG ||| OPG | PEW ||| One hundred | CKD | 3-5 ||| Bioimpedance | BIS ||| CRP ||| ||| OPG | CRP | HgbA1c ||| OPG ||| OPG | PEW ||| OPG | PEW [SUMMARY]
[CONTENT] Osteoprotegerin | OPG ||| OPG ||| OPG | PEW ||| One hundred | CKD | 3-5 ||| Bioimpedance | BIS ||| CRP ||| ||| OPG | CRP | HgbA1c ||| OPG ||| OPG | PEW ||| OPG | PEW [SUMMARY]
A pan-Theileria FRET-qPCR survey for Theileria spp. in ruminants from nine provinces of China.
25175751
Theileria spp. are tick-transmitted protozoa that can infect large and small ruminants, causing disease and economic losses. Diagnosis of infections is often challenging, as parasites can be difficult to detect and identify microscopically and serology is unreliable. While there are PCR assays that can identify certain Theileria spp., no single PCR has been designed to identify all recognized species occurring in ruminants, which would greatly simplify the laboratory diagnosis of infections.
BACKGROUND
Primers and probes for a genus-specific pan-Theileria FRET-qPCR were selected by comparing sequences of recognized Theileria spp. in GenBank, and the test was validated using reference organisms. The assay was also tested on whole blood samples from large and small ruminants from nine provinces in China.
METHODS
The pan-Theileria FRET-qPCR detected all recognized species but none of the closely related protozoa. In whole blood samples from animals in China, Theileria spp. DNA was detected in 53.2% of the sheep tested (59/111), 44.4% of the goats (120/270) and 30.8% of the cattle (380/1,235). Water buffaloes (n = 29) were negative. Sequencing of some of the PCR products showed that cattle in China were infected with the T. orientalis/T. sergenti/T. buffeli group, while T. ovis and T. luwenshuni were found in sheep and T. luwenshuni in goats. The prevalence of Theileria DNA was significantly higher in Bos p. indicus than in Bos p. taurus (77.7% vs. 18.3%), and copy numbers were also significantly higher (10^4.88 vs. 10^3.00 Theileria 18S rRNA gene copies per ml of whole blood).
RESULTS
The pan-Theileria FRET-qPCR can detect all recognized Theileria spp. of ruminants in a single reaction. Large and small ruminants in China are commonly infected with a variety of Theileria spp.
CONCLUSIONS
[ "Animals", "DNA, Protozoan", "Molecular Sequence Data", "Polymerase Chain Reaction", "Prevalence", "Ruminants", "Sensitivity and Specificity", "Theileria", "Theileriasis" ]
4262081
Background
Theileria spp. are tick-transmitted, intracellular protozoan parasites infecting leukocytes and erythrocytes of wild and domestic large and small ruminants. Several Theileria spp., transmitted by ixodid ticks of the genera Rhipicephalus, Hyalomma, Amblyomma and Haemaphysalis, have been described in cattle, water buffaloes, sheep and goats in different geographical zones of the world [1–5]. Theileriosis is primarily limited to tropical and sub-tropical areas of the world, with infections mainly reported in Africa and the Middle East but also in southern Europe and northern Asia [6–11]. Infections by Theileria spp. can cause fever, anemia and hemoglobinuria and, in severe cases, death, although many species are benign. Animals recovered from acute or primary infections usually remain persistently infected and may act as reservoirs of infection for ticks [12, 13]. While there have been numerous reports of theileriosis in various animal species in China since 1958 [14–27], many have been published in Chinese, and some were based on microscopic detection of parasites, which can be difficult with low parasitemia and does not allow ready differentiation of species. Serological studies, although sensitive and easy to perform, are not specific, as there is cross-reactivity between Theileria spp. Although molecular studies have been performed, these have been designed to detect Theileria in specific domestic animal species, for example sheep and goats [27]. There have been no highly sensitive and specific molecular methods described which enable studies on various animals from widely divergent areas of China where different Theileria spp. might occur. To address this problem, we developed and validated a highly sensitive, genus-specific Theileria FRET-qPCR that detects the recognized Theileria spp. of domestic animals and investigated the molecular prevalence of Theileria in cattle, water buffaloes, goats and sheep from nine provinces in China.
Methods
Animals and blood collection Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.
Table 1 Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep
Animal species | Subspecies/breed | Province | City | Coordinates of city | Theileria positive/total | %
Cattle (n = 1,235), Bos p. taurus | Simmental | Inner Mongolia | Chifeng | 42.17°N, 118.58°E | 19/132 | 14.4%
Cattle, Bos p. taurus | Bohai black | Shandong | Binzhou | 37.22°N, 118.02°E | 4/66 | 6.1%
Cattle, Bos p. taurus | Luxi | Shandong | Jining | 35.23°N, 116.33°E | 40/40 | 100%
Cattle, Bos p. taurus | Holstein | Jiangsu | Yancheng | 33.22°N, 120.08°E | 72/321 | 22.4%
Cattle, Bos p. taurus | Holstein | Jiangsu | Yangzhou | 32.23°N, 119.26°E | 17/144 | 11.8%
Cattle, Bos p. taurus | Holstein | Shanghai | Shanghai | 31.14°N, 121.29°E | 9/255 | 3.5%
Cattle, Bos p. taurus | Wannan | Anhui | Wuhu | 31.19°N, 118.22°E | 17/17 | 100%
Cattle, Bos p. indicus | Yunling | Yunnan | Kunming | 25.04°N, 102.42°E | 124/161 | 77.0%
Cattle, Bos p. indicus | Minnan | Fujian | Putian | 24.26°N, 119.01°E | 4/25 | 16.0%
Cattle, Bos p. indicus | Leiqiong | Hainan | Haikou | 20.02°N, 110.20°E | 74/74 | 100%
Water buffalo (n = 29) | Haizi | Jiangsu | Yancheng | 33.22°N, 120.08°E | 0/29 | 0%
Goat (n = 270) | Xinjiang | Xinjiang | Urumqi | 43.45°N, 87.36°E | 4/98 | 4.1%
Goat | Yangtse River Delta White | Jiangsu | Yangzhou | 32.23°N, 119.26°E | 116/172 | 67.4%
Sheep (n = 111) | Wuranke | Inner Mongolia | Xilingol | 43.57°N, 116.03°E | 36/72 | 50.0%
Sheep | Sishui Fur | Shandong | Jining | 35.23°N, 116.33°E | 23/39 | 59.0%
DNA extraction DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular grade water were used to detect cross-contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30]. Theileria spp. FRET-qPCR Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). 
The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1 Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). 
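As a rough illustration of the kind of mismatch count summarised in the Figure 1 legend, the following hypothetical Python snippet counts mismatches between an oligonucleotide and the corresponding aligned target region. The example target sequence is a placeholder, not one of the actual GenBank alignments.

```python
# Hypothetical sketch: counting primer/probe mismatches against an aligned
# target region, as summarised in the Figure 1 legend. The target string below
# is a placeholder; the real comparison used the GenBank 18S rRNA alignments
# listed above.
def count_mismatches(oligo: str, aligned_target: str) -> int:
    """Count positions where the aligned target differs from the oligo.

    Gaps ('-') in the target are counted as mismatches.
    """
    assert len(oligo) == len(aligned_target)
    return sum(1 for a, b in zip(oligo.upper(), aligned_target.upper()) if a != b)

forward_primer = "TAGTGACAAGAAATAACAATACGGGGCTT"
example_target = "TAGTGACAAGAAATAACAATACGGGGCTT"  # placeholder: a perfect match
print(count_mismatches(forward_primer, example_target))  # -> 0
```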
The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1 Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C. The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. 
Specificity

PCR products were verified by electrophoresis on 1.5% MetaPhor agarose gels, followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and sequencing (GenScript, Nanjing, Jiangsu, China). The sequences from randomly selected Theileria-positive samples (n = 37) were compared with the existing Theileria sequences in GenBank using BLAST. The specificity of the PCR was further verified by amplification of a T. orientalis rRNA-containing pIDTSMART cloning vector (Integrated DNA Technologies, Coralville, IA, USA) and of 100 DNA copies of the rRNA genes of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitology laboratory of Yangzhou University College of Veterinary Medicine).

Sensitivity

For use as quantitative standards, the PCR products of the DNAs of five Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni and T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). The molarity of each solution was calculated from the estimated molecular mass of the rRNA gene and the DNA concentration measured with the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA), and dilutions were then made to give 10,000, 1,000, 100, 10 and 1 gene copies per PCR reaction. These dilutions, together with further dilutions providing 2, 4, 6 and 8 gene copies per reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-qPCR surveys so that standard curves could be generated for calculating the gene copy numbers in positive samples.
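The conversion from a measured dsDNA concentration to gene copies per reaction follows the usual mass-to-molecules arithmetic (copies = mass in grams × Avogadro's number / molar mass of the standard). The sketch below shows that calculation and the resulting dilution targets; the concentration value, the 149-bp standard length and the 650 g/mol per base pair approximation are illustrative assumptions, not measured values from the study.

```python
# Sketch: convert a measured dsDNA concentration into gene copies per PCR
# reaction and derive the dilution factors for a standard series.
AVOGADRO = 6.022e23
MEAN_BP_MASS = 650.0  # g/mol per base pair of dsDNA (common approximation)

def copies_per_ul(conc_ng_per_ul: float, standard_bp: int) -> float:
    mol_per_ul = (conc_ng_per_ul * 1e-9) / (standard_bp * MEAN_BP_MASS)
    return mol_per_ul * AVOGADRO

# Hypothetical PicoGreen reading for a purified standard of ~149 bp.
stock = copies_per_ul(conc_ng_per_ul=0.5, standard_bp=149)
print(f"stock: {stock:.2e} copies/ul")

# Target standards: 10,000 down to 1 copy per 10 ul of template.
for target in (10_000, 1_000, 100, 10, 1):
    dilution_factor = stock * 10 / target  # 10 ul of template per reaction
    print(f"{target:>6} copies/reaction -> dilute stock 1:{dilution_factor:.2e}")
```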
Identification of Theileria spp. by PCR and sequencing

The amplicon of the pan-Theileria FRET-qPCR we established has a sequence that is highly conserved among the different Theileria species. To differentiate Theileria spp. in positive reactions, we therefore used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides, depending on the Theileria sp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For this PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31].

Statistical analysis

Differences in the positivity of Theileria spp. were analyzed with the Chi-squared test, while the numbers of copies of the Theileria 18S rRNA gene determined by the Theileria FRET-qPCR were log10-transformed and analyzed with Student's t-test. Differences with P < 0.05 were considered statistically significant.
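As a concrete illustration of these two comparisons, the sketch below applies a chi-squared test to the prevalence counts reported in the Results below (59/111 positive sheep versus 380/1,235 positive cattle) and a two-sample t-test to log10-transformed copy numbers. The copy-number arrays are made-up placeholders; only the prevalence counts are taken from this study.

```python
# Sketch: the two statistical comparisons used in the study, illustrated
# with scipy. Prevalence counts are from the Results; the copy-number
# arrays below are placeholders, not real data.
import numpy as np
from scipy import stats

# Chi-squared test on positivity: sheep (59/111) vs. cattle (380/1235).
table = np.array([
    [59, 111 - 59],     # sheep: positive, negative
    [380, 1235 - 380],  # cattle: positive, negative
])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p:.2e}")

# Student's t-test on log10-transformed 18S rRNA copies per ml of blood.
rng = np.random.default_rng(0)
log_copies_cattle = rng.normal(loc=4.3, scale=1.0, size=50)  # placeholder
log_copies_sheep = rng.normal(loc=2.4, scale=1.0, size=30)   # placeholder
t, p = stats.ttest_ind(log_copies_cattle, log_copies_sheep)
print(f"t = {t:.2f}, p = {p:.2e}")
```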
Results
Development of the pan-Theileria FRET-qPCR

Comparison of the 18S rRNA sequences showed that the region we targeted is highly conserved among the Theileria spp. but substantially different in closely related protozoan species (Figure 1). The two primers and two probes we chose for the pan-Theileria FRET-qPCR had 0–4 nucleotide mismatches with the Theileria spp. in GenBank, but 25, 23, 22, 8, 16 and 15 mismatches with B. bovis, B. divergens, B. bigemina, C. felis, H. americanum and T. gondii, respectively (Figure 1). The specificity of the pan-Theileria FRET-qPCR was further confirmed when it gave positive reactions with the T. orientalis control but negative reactions with the DNAs of B. canis, C. felis, H. americanum and T. gondii. The assay gave a specific melting curve (Tm 57.5°C) with Theileria spp. DNA. Using the gel-purified PCR products as quantitative standards, we determined that the detection limit of the pan-Theileria FRET-qPCR was 2 copies of the Theileria 18S rRNA gene per reaction for T. orientalis, T. sergenti, T. luwenshuni, T. buffeli and T. ovis.

Prevalence of Theileria spp. DNA in ruminants

Animals positive for Theileria were found in each of the nine provinces sampled, with several animals of each species positive at each location, except for the water buffaloes, which were all negative at the single site where they were studied. The overall prevalences of Theileria spp. DNA in sheep (53.2%; 59/111) and goats (44.4%; 120/270) were significantly higher than in cattle (30.8%; 380/1,235) (two-tailed Chi-squared test, P < 10^-4). The pan-Theileria FRET-qPCR showed that sheep had an average of 10^2.4 copies of Theileria 18S rRNA per ml of whole blood, significantly lower than the 10^4.3 copies in cattle and 10^5.8 copies in goats (Student's t-test, P < 10^-4). While the prevalence of Theileria spp. DNA in cattle varied greatly, from 3.5% (9/255) in Holsteins from Shanghai to 100% in Luxi cattle from Shandong (40/40) and Leiqiong cattle from Hainan (74/74), the prevalence did not differ significantly between sheep from Inner Mongolia and Shandong (Table 1, Figure 2).

Figure 2. Sites in China where ruminant samples were tested for Theileria spp. DNA. Dots of different colors represent the sites where samples obtained from cattle, water buffalo, goats and sheep in nine provinces were tested by the pan-Theileria FRET-qPCR in this study.

Sequencing of 87 randomly selected amplicons (52 from cattle, 14 from goats and 21 from sheep) from Theileria DNA-positive samples showed that the T. orientalis/T. sergenti/T. buffeli group [29] was present in cattle, while T. luwenshuni was found in goats from Jiangsu province, and T. ovis and T. luwenshuni were found in sheep from Inner Mongolia and Shandong, respectively.
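The copy numbers above are expressed per ml of whole blood, which requires scaling the per-reaction qPCR result by the extraction and template volumes. A minimal sketch of that conversion, assuming the workflow described in the Methods (DNA from 2 ml of blood eluted in 200 μl, with 10 μl of template per reaction), is shown below.

```python
# Sketch: convert copies detected per PCR reaction into copies per ml of
# whole blood, assuming 2 ml blood -> 200 ul DNA eluate and 10 ul of
# template per 20 ul reaction, as described in the Methods.
BLOOD_ML = 2.0
ELUATE_UL = 200.0
TEMPLATE_UL = 10.0

def copies_per_ml_blood(copies_per_reaction: float) -> float:
    reactions_per_eluate = ELUATE_UL / TEMPLATE_UL           # 20 reactions
    copies_per_eluate = copies_per_reaction * reactions_per_eluate
    return copies_per_eluate / BLOOD_ML                      # per ml of blood

# Example: 1,000 copies in a reaction corresponds to 10^4 copies/ml blood.
print(f"{copies_per_ml_blood(1_000):.1e} copies/ml")
```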
Factors associated with the occurrence of theileriosis in cattle

When we analyzed factors that might be associated with the prevalence of Theileria spp. DNA in cattle, we found that Bos p. indicus animals had a significantly higher positivity (77.7% vs. 18.3%; P < 10^-4) and higher copy numbers of the Theileria 18S rRNA gene (10^4.81 vs. 10^3.73 copies per ml of whole blood; P < 10^-4) than Bos p. taurus animals. Similarly, Bos p. indicus cattle were more likely to be positive (65.2% vs. 13.7%; P < 10^-4) and to have higher copy numbers of the Theileria 18S rRNA gene (10^4.88 vs. 10^3.00 copies per ml of whole blood; P < 10^-4) than Bos p. taurus animals. Cattle from southern China had significantly higher Theileria 18S rRNA gene copy numbers (10^4.39 vs. 10^3.87 copies per ml of whole blood; P = 0.02) than those from northern China, but the difference in prevalence was not significant (31.8% vs. 26.5%). In cattle from Yunnan province, where gender information was available, female animals were more commonly positive (76.7% vs. 41.3%; P < 10^-4) and had higher copy numbers (10^5.02 vs. 10^2.93 copies per ml of whole blood; P < 10^-4) than males.

Gene accession numbers

The Theileria rRNA nucleotide sequences obtained in this study that were not identical to existing entries in GenBank were deposited under the following accession numbers: KJ850933 and KJ850938 (T. sergenti); KJ850936 and KJ850940 (T. buffeli); KJ850934, KJ850937, KJ850943, KJ850939 and KJ850941 (T. orientalis); KJ850942 (T. ovis); KJ850935 and KM016463 (T. luwenshuni). The sequences obtained were very similar (0–4 nucleotide mismatches) to Theileria spp. sequences deposited by other laboratories in China, USA, France, Australia and Iran (Table 2).

Table 2. Comparison of isolates identified in this study and similar sequences in GenBank by BLASTN

Theileria sp. | Accession # (this study) | Source/origin (this study) | Accession # (GenBank) | Source/origin (GenBank) | Mismatches/aligned nt
T. orientalis | KJ850934 | Simmental cattle, Inner Mongolia | AP011948 | Cattle, Shintoku, Japan | 0/547
T. orientalis | KJ850939 | Luxi cattle, Shandong | HM538220 | Cattle, Suizhou, China | 0/547
T. orientalis | KJ850937 | Yunling cattle, Yunnan | AB520956 | Cattle, New South Wales, Australia | 2/491
T. orientalis | KJ850941 | Yunling cattle, Yunnan | AB520955 | Cattle, Raymond, Australia | 0/509
T. orientalis | KJ850943 | Holstein cattle, Jiangsu | AB520956 | Cattle, New South Wales, Australia | 0/549
T. sergenti | KJ850933 | Australian Holstein cattle, Jiangsu | JQ723015 | Cattle, Hunan, China | 0/541
T. sergenti | KJ850938 | Yunling cattle, Yunnan | JQ723015 | Cattle, Hunan, China | 0/492
T. buffeli | KJ850936 | Leiqiong cattle, Hainan | HM538196 | Cattle, Hubei, China | 1/508
T. buffeli | KJ850940 | Wannan cattle, Anhui | AY661513 | Cattle, USA | 0/545
T. ovis | KJ850942 | Wuranke sheep, Inner Mongolia | FJ603460 | Sheep, Xinjiang, China | 0/529
T. luwenshuni | KJ850935 | Sishui Fur sheep, Shandong | KC769996, JX469518, JF719831 | Sheep, China | 0/549
T. luwenshuni | KM016463 | Yangtse River Delta White goat, Jiangsu | KC769997 | Goat, Beijing, China | 0/571
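The mismatches/aligned-length column of Table 2 can be converted into percent identity with a one-line calculation, which is a convenient check when comparing entries of different alignment lengths. The sketch below uses two rows from the table.

```python
# Sketch: convert the "mismatches / aligned length" entries of Table 2
# into percent identity.
def percent_identity(mismatches: int, aligned_length: int) -> float:
    return 100.0 * (aligned_length - mismatches) / aligned_length

print(f"KJ850937 vs AB520956: {percent_identity(2, 491):.2f}% identity")  # 99.59%
print(f"KJ850936 vs HM538196: {percent_identity(1, 508):.2f}% identity")  # 99.80%
```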
Conclusion
In summary, our study describes the development and testing of a FRET-qPCR that detects the recognized Theileria spp. of domestic animals. The pan-Theileria FRET-qPCR should be a useful diagnostic tool, as it enables diagnostic laboratories to detect Theileria infections in all domestic species with a single test.
[ "Background", "Animals and blood collection", "DNA extraction", "Theileriaspp.FRET-qPCR", "Primers and probes", "Thermal cycling", "Specificity", "Sensitivity", "Identification of Theileriaspp. by PCR and sequencing", "Statistical analysis", "Development of the pan-TheileriaFRET-PCR", "Prevalence of Theileriaspp. DNA in ruminants", "Factors associated with the occurrence of theileriosis in cattle", "Gene accession numbers" ]
[ "Theileria spp. are tick-transmitted, intracellular protozoan parasites infecting leukocytes and erythrocytes of wild and domestic large and small ruminants. Several Theileria spp., transmitted by ixodid ticks of the genera Rhipicephalus, Hyalomma, Amblyomma and Haemaphysalis, have been described in cattle, water buffaloes, sheep and goats in different geographical zones of the world [1–5]. Theileriosis is primarily limited to tropical and sub-tropical areas of the world, with infections mainly reported in Africa and the Middle East but also in southern Europe and northern Asia [6–11]. Infections by Theileria spp. can cause fever, anemia and hemoglobinuria and, in severe cases, death although many species are benign. Animals recovered from acute or primary infections usually remain persistently infected and may act as reservoirs of infecting ticks [12, 13].\nWhile there have been numerous reports of theileriosis in various animal species in China since 1958 [14–27], many have been reported in Chinese and some were based on microscopic detection of parasites which can be difficult with low parasitemia and does not allow ready differentiation of species. Serological studies, although sensitive and easy to perform, are not specific as there is cross reactivity between Theileria spp. Although molecular studies have been performed, these have been to detect Theileria of specific domestic animal species, for example sheep and goats [27]. There have been no highly sensitive and specific molecular methods described which enable studies on various animals from widely divergent areas of China where different Theileria spp. might occur. To address this problem, we developed and validated a highly sensitive genus-specific Theileria FRET-qPCR that detects the recognized Theileria spp. of domestic animals and investigated the molecular prevalence of Theileria in cattle, water buffaloes, goats and sheep from nine provinces in China.", "Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\nAnimal speciesSubspecies /breedProvinceCityCoordinate of city\nTheileriapositivitypositive /total\n%\nCattle (n = 1235)\nBos p. taurus\nSimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100%\nBos p. 
indicus\nYunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0%\n\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\n", "DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular grade water were used to detect cross- contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30].", " Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). 
The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\nThe 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). 
The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\nThe Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\n Specificity PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\nPCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). 
The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\n Sensitivity For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.\nFor use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.", "The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). 
The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.", "The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.", "PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. 
orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).", "For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.", "The amplicon of the pan-Theileria FRET-qPCR we established has a sequence which is highly conserved among the different Theileria species. To differentiate Theileria spp. in a positive reaction, we used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides for different Theileria spp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For the PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31].", "Differences in positivity of Theileria spp. were analyzed by Chi-squared Test while numbers of copies of the Theileria 18S rRNA gene determined in the Theileria FRET-qPCR were log10-transformed and analyzed using the Student’s T-test. Differences of P < 0.05 were considered statistically significant.", "Comparison of the sequences in the highly conserved region of the Theileria spp. we used showed the region is highly conserved, but is substantially different from those in closely related protozoan species (Figure 1). The two primers and two probes we chose for the pan-Theileria FRET-qPCR had 0–4 nucleotide mismatches with the Theileria spp. in GenBank, but had 25, 23, 22, 8, 16 and 15 mismatches with B. bovis, B. divergens, B. bigemina, C. felis, H. americanum and T. gondii, respectively (Figure 1). The specificity of the pan-Theileria FRET-PCR was further confirmed when it gave positive reactions with the T. orientalis control, but gave negative reactions with DNAs of B. canis, C. felis, H. americanum and T. gondii. The pan-Theileria FRET-qPCR had a specific melting curve (Tm 57.5°C) with Theileria spp. DNA. Using the gel-purified PCR products as quantitative standards, we determined the detection limit of the pan-Theileria FRET-qPCR was 2 copies of the Theileria 18S rRNA gene per reaction for T. orientalis, T. sergenti and T. luwenshuni, T. buffeli and T. ovis.", "Animals positive for Theileria were found in each of the nine provinces sampled with several animals of each species being positive at each location, except in the case of water buffaloes which were all negative in the one site they were studied. The overall prevalences of Theileria spp. 
DNA in sheep (53.2%; 59/111) and goats (44.4%; 120/270) were significantly higher than in cattle (30.8%; 380/1,235) (two-tailed Chi-squared Test, P < 10-4). The pan-Theileria FRET-PCR showed that sheep had an average of 102.4 copies of Theileria 18S rRNA/ml whole blood which was significantly lower than the 104.3 copies in cattle and 105.8 copies in goats (Student’s t Test, P < 10-4). While the prevalence of Theileria spp. DNA varied greatly from 3.5% (9/255) in Holsteins from Shanghai to 100% in Luxi cattle from Shandong (40/40) and Leiqiong cattle from Hainan (74/74), the prevalence did not differ significantly in sheep from Inner Mongolia and Shandong (Table 1, Figure 2).Figure 2\nSites in China where samples ruminants were tested for\nTheileria\nspp. DNAs. Dots of different colors represent sites where samples obtained from cattle, water buffalo, goat and sheep of nine provinces were tested by pan-Theileria FRET-qPCR in this study.\n\nSites in China where samples ruminants were tested for\nTheileria\nspp. DNAs. Dots of different colors represent sites where samples obtained from cattle, water buffalo, goat and sheep of nine provinces were tested by pan-Theileria FRET-qPCR in this study.\nSequencing of 87 randomly selected amplicons (52 from cattle, 14 from goats and 21 from sheep) from Theileria DNA positive samples showed that T. orientalis/ T. sergenti/T. buffeli group [29] were present in cattle while T. luwenshuni was found in goats in Jiangsu province and T. ovis and T. luwenshuni in sheep from Inner Mongolia and Jiangsu province, respectively.", "When we analyzed factors that might be associated with the prevalence of Theileria spp. DNA in cattle we found that Bos p. indicus animals had significantly higher positivity (77.7% vs. 18.3%; P < 10-4) and copy number of the Theileria 18S rRNA gene (104.81 vs. 103.73 copies/per ml whole blood; P < 10-4) than Bos p. taurus animals. Similarly, Bos p. indicus cattle were more likely to be positive (65.2% vs. 13.7%; P < 10-4) and have higher copy numbers of the Theileria 18S rRNA gene (104.88 vs. 103.00 copies/per ml whole blood; P < 10-4) than Bos p. taurus animals. The cattle from southern China had significantly higher Theileria 18S rRNA gene copy numbers (104.39 vs. 103.87 copies/per ml whole blood; P = 0.02) than those from northern China but this difference in the prevalence was not significant (31.8% vs. 26.5%). In cattle from Yunnan province where gender information was available, female animals were more commonly positive (76.7% vs. 41.3%; P < 10-4) and copy numbers (105.02 vs. 102.93Theileria/per ml whole blood; P < 10-4) than males.", "The Theileria rRNA nucleotide sequences obtained in this study that were not identical to existing entries in GenBank were deposited with the following gene accession numbers: KJ850933 and KJ850938 (T. sergenti); KJ850936 and KJ850940 (T. buffeli); KJ850934, KJ850937, KJ850943, KJ850939 and KJ850941 (T. orientalis); KJ850942 (T. ovis); KJ850935 and KM016463 (T. luwenshuni). The sequences obtained were very similar (0–4 nucleotide mismatches) to Theileria spp. sequences deposited by other laboratories in China, USA, France, Australia and Iran (Table 2).Table 2\nComparison of isolates identified in this study and similar sequences in GenBank by BLASTN\nIsolates identified in this studyHighly similar sequences in GenBank\nTheileriaspp.Gene accession #Source/originGene accession #Source/originMismatches\nT. 
Table 2. Comparison of isolates identified in this study and similar sequences in GenBank by BLASTN.\nTheileria spp. | Isolate in this study (accession #; source/origin) | Highly similar GenBank sequence (accession #; source/origin) | Mismatches\nT. orientalis | KJ850934; Simmental cattle, Inner Mongolia | AP011948; cattle, Shintoku, Japan | 0/547\nT. orientalis | KJ850939; Luxi cattle, Shandong | HM538220; cattle, Suizhou, China | 0/547\nT. orientalis | KJ850937; Yunling cattle, Yunnan | AB520956; cattle, New South Wales, Australia | 2/491\nT. orientalis | KJ850941; Yunling cattle, Yunnan | AB520955; cattle, Raymond, Australia | 0/509\nT. orientalis | KJ850943; Holstein cattle, Jiangsu | AB520956; cattle, New South Wales, Australia | 0/549\nT. sergenti | KJ850933; Australian Holstein cattle, Jiangsu | JQ723015; cattle, Hunan, China | 0/541\nT. sergenti | KJ850938; Yunling cattle, Yunnan | JQ723015; cattle, Hunan, China | 0/492\nT. buffeli | KJ850936; Leiqiong cattle, Hainan | HM538196; cattle, Hubei, China | 1/508\nT. buffeli | KJ850940; Wannan cattle, Anhui | AY661513; cattle, USA | 0/545\nT. ovis | KJ850942; Wuranke sheep, Inner Mongolia | FJ603460; sheep, Xinjiang, China | 0/529\nT. luwenshuni | KJ850935; Sishui Fur sheep, Shandong | KC769996, JX469518, JF719831; sheep, China | 0/549\nT. luwenshuni | KM016463; Yangtse River Delta White goat, Jiangsu | KC769997; goat, Beijing, China | 0/571\n" ]
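As an illustration of the analysis described under Statistical analysis, the short Python sketch below re-runs the host-species prevalence comparison using the positive/total counts quoted above and shows the log10 transformation applied to copy numbers before Student's t-test. It is a minimal sketch assuming numpy and scipy are installed; the copy-number arrays are invented placeholders, not data from this study.

import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Theileria spp. DNA positivity by host species (positive, negative), from the text
counts = {
    "sheep":  (59, 111 - 59),
    "goats":  (120, 270 - 120),
    "cattle": (380, 1235 - 380),
}

# Two-tailed chi-squared test on a 2 x 2 table, e.g. sheep versus cattle
chi2, p, _dof, _expected = chi2_contingency([counts["sheep"], counts["cattle"]])
print(f"sheep vs cattle prevalence: chi2 = {chi2:.1f}, P = {p:.1e}")

# Copy numbers were log10-transformed before Student's t-test.
# These arrays are hypothetical examples used only to show the transformation step.
cattle_copies = np.array([8e3, 2e4, 5e4, 1e5, 3e5])   # copies per ml (invented)
goat_copies = np.array([2e5, 6e5, 1e6, 3e6, 8e6])     # copies per ml (invented)
t_stat, p_val = ttest_ind(np.log10(cattle_copies), np.log10(goat_copies))
print(f"log10 copies, cattle vs goats: t = {t_stat:.2f}, P = {p_val:.3f}")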
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Animals and blood collection", "DNA extraction", "Theileriaspp.FRET-qPCR", "Primers and probes", "Thermal cycling", "Specificity", "Sensitivity", "Identification of Theileriaspp. by PCR and sequencing", "Statistical analysis", "Results", "Development of the pan-TheileriaFRET-PCR", "Prevalence of Theileriaspp. DNA in ruminants", "Factors associated with the occurrence of theileriosis in cattle", "Gene accession numbers", "Discussion", "Conclusion" ]
[ "Theileria spp. are tick-transmitted, intracellular protozoan parasites infecting leukocytes and erythrocytes of wild and domestic large and small ruminants. Several Theileria spp., transmitted by ixodid ticks of the genera Rhipicephalus, Hyalomma, Amblyomma and Haemaphysalis, have been described in cattle, water buffaloes, sheep and goats in different geographical zones of the world [1–5]. Theileriosis is primarily limited to tropical and sub-tropical areas of the world, with infections mainly reported in Africa and the Middle East but also in southern Europe and northern Asia [6–11]. Infections by Theileria spp. can cause fever, anemia and hemoglobinuria and, in severe cases, death although many species are benign. Animals recovered from acute or primary infections usually remain persistently infected and may act as reservoirs of infecting ticks [12, 13].\nWhile there have been numerous reports of theileriosis in various animal species in China since 1958 [14–27], many have been reported in Chinese and some were based on microscopic detection of parasites which can be difficult with low parasitemia and does not allow ready differentiation of species. Serological studies, although sensitive and easy to perform, are not specific as there is cross reactivity between Theileria spp. Although molecular studies have been performed, these have been to detect Theileria of specific domestic animal species, for example sheep and goats [27]. There have been no highly sensitive and specific molecular methods described which enable studies on various animals from widely divergent areas of China where different Theileria spp. might occur. To address this problem, we developed and validated a highly sensitive genus-specific Theileria FRET-qPCR that detects the recognized Theileria spp. of domestic animals and investigated the molecular prevalence of Theileria in cattle, water buffaloes, goats and sheep from nine provinces in China.", " Animals and blood collection Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\nAnimal speciesSubspecies /breedProvinceCityCoordinate of city\nTheileriapositivitypositive /total\n%\nCattle (n = 1235)\nBos p. taurus\nSimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100%\nBos p. 
indicus\nYunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0%\n\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\n\nBetween 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\nAnimal speciesSubspecies /breedProvinceCityCoordinate of city\nTheileriapositivitypositive /total\n%\nCattle (n = 1235)\nBos p. taurus\nSimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100%\nBos p. indicus\nYunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0%\n\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\n\n DNA extraction DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular grade water were used to detect cross- contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30].\nDNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. 
The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular grade water were used to detect cross- contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30].\n Theileriaspp.FRET-qPCR Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\nThe 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\nThe Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\n Specificity PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\nPCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\n Sensitivity For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. 
ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.\nFor use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.\n Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\nThe 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\nThe Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\n Specificity PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. 
gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\nPCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\n Sensitivity For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.\nFor use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.\n Identification of Theileriaspp. by PCR and sequencing The amplicon of the pan-Theileria FRET-qPCR we established has a sequence which is highly conserved among the different Theileria species. To differentiate Theileria spp. in a positive reaction, we used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides for different Theileria spp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For the PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31].\nThe amplicon of the pan-Theileria FRET-qPCR we established has a sequence which is highly conserved among the different Theileria species. To differentiate Theileria spp. 
in a positive reaction, we used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides for different Theileria spp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For the PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31].\n Statistical analysis Differences in positivity of Theileria spp. were analyzed by Chi-squared Test while numbers of copies of the Theileria 18S rRNA gene determined in the Theileria FRET-qPCR were log10-transformed and analyzed using the Student’s T-test. Differences of P < 0.05 were considered statistically significant.\nDifferences in positivity of Theileria spp. were analyzed by Chi-squared Test while numbers of copies of the Theileria 18S rRNA gene determined in the Theileria FRET-qPCR were log10-transformed and analyzed using the Student’s T-test. Differences of P < 0.05 were considered statistically significant.", "Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\nAnimal speciesSubspecies /breedProvinceCityCoordinate of city\nTheileriapositivitypositive /total\n%\nCattle (n = 1235)\nBos p. taurus\nSimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100%\nBos p. indicus\nYunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0%\n\nMolecular prevalence of\nTheileria\nspp. in cattle, water buffalo, goat and sheep\n", "DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. 
Negative controls consisting of sterile molecular grade water were used to detect cross- contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30].", " Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. 
bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\nThe 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. 
bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\nThe Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.\n Specificity PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\nPCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).\n Sensitivity For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). 
After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.\nFor use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples.", "The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.\n\nAlignment of oligonucleotides for\nTheileria\nPCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe.", "The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C.", "PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).", "For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. 
These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys so that standard curves could be generated for calculating the gene copy numbers in positive samples.", "The amplicon of the pan-Theileria FRET-qPCR we established has a sequence that is highly conserved among the different Theileria species. To differentiate Theileria spp. in a positive reaction, we therefore used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides for the different Theileria spp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For this PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31].", "Differences in the positivity of Theileria spp. were analyzed with the Chi-squared test, while the numbers of copies of the Theileria 18S rRNA gene determined by the Theileria FRET-qPCR were log10-transformed and analyzed using Student's t-test. Differences with P < 0.05 were considered statistically significant.", " Development of the pan-Theileria FRET-PCR\nComparison of the 18S rRNA sequences of the Theileria spp. we examined confirmed that the target region is highly conserved within the genus but differs substantially from the corresponding regions of closely related protozoan species (Figure 1). The two primers and two probes we chose for the pan-Theileria FRET-qPCR had 0–4 nucleotide mismatches with the Theileria spp. in GenBank, but had 25, 23, 22, 8, 16 and 15 mismatches with B. bovis, B. divergens, B. bigemina, C. felis, H. americanum and T. gondii, respectively (Figure 1). The specificity of the pan-Theileria FRET-qPCR was further confirmed when it gave positive reactions with the T. orientalis control but negative reactions with the DNAs of B. canis, C. felis, H. americanum and T. gondii. The pan-Theileria FRET-qPCR gave a specific melting curve (Tm 57.5°C) with Theileria spp. DNA. Using the gel-purified PCR products as quantitative standards, we determined that the detection limit of the pan-Theileria FRET-qPCR was 2 copies of the Theileria 18S rRNA gene per reaction for T. orientalis, T. sergenti, T. luwenshuni, T. buffeli and T. ovis.\n
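As a rough illustration of how the quantitative standards and standard curves described above can be put to work, the Python sketch below converts a PicoGreen-measured concentration of a gel-purified amplicon into copies per microlitre, fits a standard curve of Cq against log10 copy number from the 10-fold dilutions, and back-calculates the copy number of an unknown sample. It assumes numpy; the concentration and Cq values are invented for illustration and are not taken from this study.

import numpy as np

AVOGADRO = 6.022e23
BP_MASS = 660.0   # average g/mol per double-stranded base pair

def copies_per_ul(conc_ng_per_ul, amplicon_bp):
    """Copy number per microlitre of a purified dsDNA amplicon."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    return grams_per_ul / (amplicon_bp * BP_MASS) * AVOGADRO

# Invented example: 0.85 ng/ul of the 149-bp pan-Theileria amplicon
stock_copies = copies_per_ul(0.85, 149)

# 10-fold dilution series used as standards (copies per reaction)
standards = np.array([1e4, 1e3, 1e2, 1e1, 1e0])
cq = np.array([22.1, 25.6, 29.0, 32.4, 35.9])   # hypothetical Cq values

# Standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(standards), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # ~1.0 (100%) when slope is about -3.32

def copies_from_cq(sample_cq):
    """Estimate copies per reaction for an unknown sample from its Cq."""
    return 10 ** ((sample_cq - intercept) / slope)

print(f"stock: {stock_copies:.2e} copies/ul; efficiency: {efficiency:.0%}")
print(f"Cq 30.2 -> {copies_from_cq(30.2):.1f} copies per reaction")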
Prevalence of Theileria spp. DNA in ruminants
Animals positive for Theileria were found in each of the nine provinces sampled, with several animals of each species being positive at each location, except in the case of water buffaloes, which were all negative at the one site where they were studied. The overall prevalences of Theileria spp. DNA in sheep (53.2%; 59/111) and goats (44.4%; 120/270) were significantly higher than in cattle (30.8%; 380/1,235) (two-tailed Chi-squared test, P < 10^-4). The pan-Theileria FRET-PCR showed that sheep had an average of 10^2.4 copies of Theileria 18S rRNA/ml whole blood, which was significantly lower than the 10^4.3 copies in cattle and 10^5.8 copies in goats (Student's t-test, P < 10^-4). While the prevalence of Theileria spp. DNA varied greatly, from 3.5% (9/255) in Holsteins from Shanghai to 100% in Luxi cattle from Shandong (40/40) and Leiqiong cattle from Hainan (74/74), the prevalence did not differ significantly in sheep from Inner Mongolia and Shandong (Table 1, Figure 2).
Figure 2. Sites in China where samples from ruminants were tested for Theileria spp. DNAs. Dots of different colors represent sites where samples obtained from cattle, water buffalo, goat and sheep of nine provinces were tested by pan-Theileria FRET-qPCR in this study.
Sequencing of 87 randomly selected amplicons (52 from cattle, 14 from goats and 21 from sheep) from Theileria DNA-positive samples showed that the T. orientalis/T. sergenti/T. buffeli group [29] was present in cattle, while T. luwenshuni was found in goats in Jiangsu province, and T. ovis and T. luwenshuni in sheep from Inner Mongolia and Jiangsu province, respectively.
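The species comparison reported above can be reproduced in outline with a chi-squared test on the positive/total counts quoted in the text (sheep 59/111, goats 120/270, cattle 380/1,235). The sketch below uses scipy purely for illustration; it is not the authors' original analysis script.

```python
from scipy.stats import chi2_contingency

# Positive / total counts for the pan-Theileria FRET-qPCR, as reported above.
counts = {
    "sheep": (59, 111),
    "goats": (120, 270),
    "cattle": (380, 1235),
}

# 3 x 2 contingency table: rows = host species, columns = (positive, negative).
table = [[positive, total - positive] for positive, total in counts.values()]

chi2, p, dof, expected = chi2_contingency(table)
for (species, (positive, total)), expected_row in zip(counts.items(), expected):
    print(f"{species:6s}: {positive}/{total} positive "
          f"({100 * positive / total:.1f}%), "
          f"expected ~{expected_row[0]:.0f} positives under independence")
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2e}")
```

Pairwise 2 x 2 tables (for example sheep versus cattle) can be tested the same way.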
Factors associated with the occurrence of theileriosis in cattle
When we analyzed factors that might be associated with the prevalence of Theileria spp. DNA in cattle, we found that Bos p. indicus animals had significantly higher positivity (77.7% vs. 18.3%; P < 10^-4) and copy numbers of the Theileria 18S rRNA gene (10^4.81 vs. 10^3.73 copies per ml whole blood; P < 10^-4) than Bos p. taurus animals. Similarly, Bos p. indicus cattle were more likely to be positive (65.2% vs. 13.7%; P < 10^-4) and to have higher copy numbers of the Theileria 18S rRNA gene (10^4.88 vs. 10^3.00 copies per ml whole blood; P < 10^-4) than Bos p. taurus animals. The cattle from southern China had significantly higher Theileria 18S rRNA gene copy numbers (10^4.39 vs. 10^3.87 copies per ml whole blood; P = 0.02) than those from northern China, but the difference in prevalence was not significant (31.8% vs. 26.5%). In cattle from Yunnan province, where gender information was available, female animals were more commonly positive (76.7% vs. 41.3%; P < 10^-4) and had higher copy numbers (10^5.02 vs. 10^2.93 copies per ml whole blood; P < 10^-4) than males.
Gene accession numbers
The Theileria rRNA nucleotide sequences obtained in this study that were not identical to existing entries in GenBank were deposited with the following gene accession numbers: KJ850933 and KJ850938 (T. sergenti); KJ850936 and KJ850940 (T. buffeli); KJ850934, KJ850937, KJ850943, KJ850939 and KJ850941 (T. orientalis); KJ850942 (T. ovis); KJ850935 and KM016463 (T. luwenshuni). The sequences obtained were very similar (0–4 nucleotide mismatches) to Theileria spp. sequences deposited by other laboratories in China, USA, France, Australia and Iran (Table 2).
Table 2. Comparison of isolates identified in this study and similar sequences in GenBank by BLASTN. Each entry lists the isolate accession with its source/origin, the highly similar GenBank accession with its source/origin, and the number of mismatches:
T. orientalis: KJ850934 (Simmental cattle, Inner Mongolia), AP011948 (Cattle, Shintoku of Japan), 0/547; KJ850939 (Luxi cattle, Shandong), HM538220 (Cattle, Suizhou of China), 0/547; KJ850937 (Yunling cattle, Yunnan), AB520956 (Cattle, New South Wales of Australia), 2/491; KJ850941 (Yunling cattle, Yunnan), AB520955 (Cattle, Raymond of Australia), 0/509; KJ850943 (Holstein cattle, Jiangsu), AB520956 (Cattle, New South Wales of Australia), 0/549.
T. sergenti: KJ850933 (Australian Holstein cattle, Jiangsu), JQ723015 (Cattle, Hunan of China), 0/541; KJ850938 (Yunling cattle, Yunnan), JQ723015 (Cattle, Hunan of China), 0/492.
T. buffeli: KJ850936 (Leiqiong cattle, Hainan), HM538196 (Cattle, Hubei of China), 1/508; KJ850940 (Wannan cattle, Anhui), AY661513 (Cattle, USA), 0/545.
T. ovis: KJ850942 (Wuranke sheep, Inner Mongolia), FJ603460 (Sheep, Xinjiang of China), 0/529.
T. luwenshuni: KJ850935 (Sishui Fur sheep, Shandong), KC769996/JX469518/JF719831 (Sheep, China), 0/549; KM016463 (Yangtse River Delta White goat, Jiangsu), KC769997 (Goat, Beijing of China), 0/571.
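Entries such as 0/547 in Table 2, like the per-species mismatch tallies quoted for the primers and probes, reduce to counting the positions at which two aligned sequences differ. A minimal sketch of that final counting step is shown below; the fragments are placeholders, not the actual isolate or GenBank sequences.

```python
def count_mismatches(query: str, reference: str) -> tuple[int, int]:
    """Count mismatching positions between two pre-aligned sequences.

    Gap characters ('-') introduced by the alignment count as mismatches;
    the second value returned is the aligned length that was compared.
    """
    if len(query) != len(reference):
        raise ValueError("sequences must already be aligned to equal length")
    mismatches = sum(1 for q, r in zip(query.upper(), reference.upper()) if q != r)
    return mismatches, len(query)

# Placeholder aligned fragments (not real accessions) differing at one position.
isolate_fragment   = "ACGTACGTACGTACGTACGT"
reference_fragment = "ACGTACGTTCGTACGTACGT"

mismatches, compared = count_mismatches(isolate_fragment, reference_fragment)
print(f"{mismatches}/{compared} mismatches")  # Table 2 reports values such as 0/547
```

In practice the alignment itself would come from BLASTN or from a multiple-alignment tool such as the Clustal algorithm mentioned in the Methods; the helper above only handles the counting once the sequences are aligned.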
", "Comparison of the sequences in the highly conserved region of the Theileria spp. we used showed the region is highly conserved, but is substantially different from those in closely related protozoan species (Figure 1). The two primers and two probes we chose for the pan-Theileria FRET-qPCR had 0–4 nucleotide mismatches with the Theileria spp. in GenBank, but had 25, 23, 22, 8, 16 and 15 mismatches with B. bovis, B. divergens, B. bigemina, C. felis, H. americanum and T. gondii, respectively (Figure 1). The specificity of the pan-Theileria FRET-PCR was further confirmed when it gave positive reactions with the T.
orientalis control, but gave negative reactions with DNAs of B. canis, C. felis, H. americanum and T. gondii. The pan-Theileria FRET-qPCR had a specific melting curve (Tm 57.5°C) with Theileria spp. DNA. Using the gel-purified PCR products as quantitative standards, we determined the detection limit of the pan-Theileria FRET-qPCR was 2 copies of the Theileria 18S rRNA gene per reaction for T. orientalis, T. sergenti and T. luwenshuni, T. buffeli and T. ovis.", "Animals positive for Theileria were found in each of the nine provinces sampled with several animals of each species being positive at each location, except in the case of water buffaloes which were all negative in the one site they were studied. The overall prevalences of Theileria spp. DNA in sheep (53.2%; 59/111) and goats (44.4%; 120/270) were significantly higher than in cattle (30.8%; 380/1,235) (two-tailed Chi-squared Test, P < 10-4). The pan-Theileria FRET-PCR showed that sheep had an average of 102.4 copies of Theileria 18S rRNA/ml whole blood which was significantly lower than the 104.3 copies in cattle and 105.8 copies in goats (Student’s t Test, P < 10-4). While the prevalence of Theileria spp. DNA varied greatly from 3.5% (9/255) in Holsteins from Shanghai to 100% in Luxi cattle from Shandong (40/40) and Leiqiong cattle from Hainan (74/74), the prevalence did not differ significantly in sheep from Inner Mongolia and Shandong (Table 1, Figure 2).Figure 2\nSites in China where samples ruminants were tested for\nTheileria\nspp. DNAs. Dots of different colors represent sites where samples obtained from cattle, water buffalo, goat and sheep of nine provinces were tested by pan-Theileria FRET-qPCR in this study.\n\nSites in China where samples ruminants were tested for\nTheileria\nspp. DNAs. Dots of different colors represent sites where samples obtained from cattle, water buffalo, goat and sheep of nine provinces were tested by pan-Theileria FRET-qPCR in this study.\nSequencing of 87 randomly selected amplicons (52 from cattle, 14 from goats and 21 from sheep) from Theileria DNA positive samples showed that T. orientalis/ T. sergenti/T. buffeli group [29] were present in cattle while T. luwenshuni was found in goats in Jiangsu province and T. ovis and T. luwenshuni in sheep from Inner Mongolia and Jiangsu province, respectively.", "When we analyzed factors that might be associated with the prevalence of Theileria spp. DNA in cattle we found that Bos p. indicus animals had significantly higher positivity (77.7% vs. 18.3%; P < 10-4) and copy number of the Theileria 18S rRNA gene (104.81 vs. 103.73 copies/per ml whole blood; P < 10-4) than Bos p. taurus animals. Similarly, Bos p. indicus cattle were more likely to be positive (65.2% vs. 13.7%; P < 10-4) and have higher copy numbers of the Theileria 18S rRNA gene (104.88 vs. 103.00 copies/per ml whole blood; P < 10-4) than Bos p. taurus animals. The cattle from southern China had significantly higher Theileria 18S rRNA gene copy numbers (104.39 vs. 103.87 copies/per ml whole blood; P = 0.02) than those from northern China but this difference in the prevalence was not significant (31.8% vs. 26.5%). In cattle from Yunnan province where gender information was available, female animals were more commonly positive (76.7% vs. 41.3%; P < 10-4) and copy numbers (105.02 vs. 
102.93Theileria/per ml whole blood; P < 10-4) than males.", "The Theileria rRNA nucleotide sequences obtained in this study that were not identical to existing entries in GenBank were deposited with the following gene accession numbers: KJ850933 and KJ850938 (T. sergenti); KJ850936 and KJ850940 (T. buffeli); KJ850934, KJ850937, KJ850943, KJ850939 and KJ850941 (T. orientalis); KJ850942 (T. ovis); KJ850935 and KM016463 (T. luwenshuni). The sequences obtained were very similar (0–4 nucleotide mismatches) to Theileria spp. sequences deposited by other laboratories in China, USA, France, Australia and Iran (Table 2).Table 2\nComparison of isolates identified in this study and similar sequences in GenBank by BLASTN\nIsolates identified in this studyHighly similar sequences in GenBank\nTheileriaspp.Gene accession #Source/originGene accession #Source/originMismatches\nT. orientalis\nKJ850934Simmental cattle, Inner MongoliaAP011948Cattle, Shintoku of Japan0/547KJ850939Luxi cattle, ShandongHM538220Cattle, Suizhou of China0/547KJ850937Yunling cattle, YunnanAB520956Cattle, New South Wales of Australia2/491KJ850941Yunling cattle, YunnanAB520955Cattle, Raymond of Australia0/509KJ850943Holstein cattle, JiangsuAB520956Cattle, New South Wales of Australia0/549\nT. sergenti\nKJ850933Australian Holstein cattle, JiangsuJQ723015Cattle, Hunan of China0/541KJ850938Yunling cattle, YunnanJQ723015Cattle, Hunan of China0/492\nT. buffeli\nKJ850936Leiqiong cattle, HainanHM538196Cattle, Hubei of China;1/508KJ850940Wannan cattle, AnhuiAY661513Cattle, USA0/545\nT. ovis\nKJ850942Wuranke sheep, Inner MongoliaFJ603460Sheep, Xinjiang of China;0/529\nT. luwenshuni\nKJ850935from Sishui-Fur sheep, ShandongKC769996, JX469518, JF719831Sheep, China0/549KM016463Yangtse River Delta White goat, JiangsuKC769997Goat, Beijing of China0/571\n\nComparison of isolates identified in this study and similar sequences in GenBank by BLASTN\n", "By systematically aligning the 18S rRNA sequences of representative Theileria spp. and other related protozoa, we identified a highly conserved region to place primers and probes for a pan-Theileria FRET-qPCR. The primers and probes we designed enabled us to amplify all our reference Theileria species with a detection limit of at least 2 Theileria 18S rRNA gene copies per PCR system. None of the other protozoan species tested gave reaction products. To the best of our knowledge, this is the first FRET-qPCR which specifically detects all Theileria species.\nOur data indicate infections with Theileria spp. are very widespread and common in cattle in China. We found positive animals in each province where we tested and an overall average of 30.77% of animals being positive. This relatively high level of positivity is similar to that obtained in the only other molecular survey for Theileria in China which showed 13.46% being positive in the northeast [32]. Sequencing data showed cattle are infected with the T. orientalis/T. sergenti/T. buffeli group which are generally recognized to be benign species [33] although some strains might cause economic losses [34].\nAlthough we sequenced relatively few positive amplicons, we found no evidence of the more pathogenic strains, T. annulata and T. parva, which is not unexpected in the case of T. parva which has only been reported from Africa [8]. T. annulata, however, has been reported in H. asiaticum in northwestern China [35]. Our failure to demonstrate the organism in our study might be because T. 
annulata has an uneven distribution in China.
Although we found no evidence of Theileria spp. in the 29 water buffaloes we sampled, He [36] reported 58/304 (19.1%) positive for T. buffeli in Hubei province, south China. More extensive studies are necessary to determine the epidemiology of theileriosis in water buffaloes.
We found both goats and sheep were infected with T. luwenshuni (Table 2) using the pan-Theileria FRET-PCR and sequencing. This is a highly pathogenic organism that is known to occur in China, where it is transmitted by Haemaphysalis qinghaiensis and may cause significant economic losses. While PCR assays have been described which detect and differentiate T. luwenshuni from T. uilenbergi [37, 38], they do not enable detection of all Theileria spp. in all species and are thus not as versatile as our pan-Theileria FRET-PCR. Yin et al. [37] found the prevalence of T. luwenshuni varied from 0% to 85% in 4 provinces and we also found a wide range of positive values (4.1% to 67.4%). Such variability is probably due to tick prevalence rates, geoclimatic factors and livestock management systems. Further research using sensitive detection methods will be important in determining the best mechanisms to control infections.
T. ovis was first identified in China in 2011 by PCR and sequencing and an infection rate of 78% was found in Xinjiang, but no positive animals were found in twelve other provinces [26, 27, 39]. Our study confirms the presence of the organism in China, although T. ovis is considered benign and economically unimportant as it might only cause signs in some animals which are stressed [40].
It is well recognized that exotic breeds are more susceptible to disease following infection with Theileria spp. than local stocks. Recent studies, however, have indicated that different breeds of animals might have similar susceptibilities to infections with Theileria spp., although some breeds are more capable of controlling the pathogenic effects of the organism [41].
Generally, B. p. taurus breeds are more susceptible to theileriosis than B. p. indicus breeds, which might be associated with innate or immune mechanisms and/or general resistance to ticks. These factors have been mainly investigated with T. parva infections, and thus it is interesting that our pan-Theileria FRET-PCR testing showed that in China it is the B. p. indicus breeds that are more likely to be positive than the B. p. taurus breeds and that the former generally had higher copy numbers, indicating heavier infections. While our sample sizes were small and we cannot exclude the possibility of sample bias, our findings might be due to host genetic factors relating to infections with less pathogenic Theileria spp. or other factors such as differences in tick control and husbandry practices on different farms. Further studies taking these factors into account are needed to more precisely investigate the relationships between infections with benign Theileria spp. and the genetic background of the host.", "In summary, our study has described the development and testing of a FRET-PCR which can detect recognized Theileria spp. The pan-Theileria FRET-PCR should be a useful diagnostic tool as it will enable diagnostic laboratories to detect infections in all domestic species with a single test." ]
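The Methods state that gene copy numbers were log10-transformed and compared with Student's t-test, which is the analysis behind breed comparisons such as the one discussed above (roughly 10^4.8 copies per ml in Bos p. indicus versus 10^3.7 in Bos p. taurus). The sketch below illustrates that test on synthetic per-animal values: the group means echo the reported figures, but the spread, group sizes and random seed are assumptions, because per-animal data are not given in the text.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Synthetic log10(copies per ml whole blood) for two cattle groups, centred on
# the group means reported in the Results; spread and sizes are assumptions.
log10_indicus = rng.normal(loc=4.8, scale=1.0, size=150)
log10_taurus = rng.normal(loc=3.7, scale=1.0, size=150)

# Student's t-test on the log10-transformed values, as described in the Methods.
t_stat, p_value = ttest_ind(log10_indicus, log10_taurus, equal_var=True)
print(f"mean log10 copies/ml: indicus {log10_indicus.mean():.2f}, "
      f"taurus {log10_taurus.mean():.2f}")
print(f"t = {t_stat:.2f}, P = {p_value:.2e}")
```

Because the transformation is applied before testing, the comparison is effectively between geometric means of the copy numbers.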
[ null, "methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
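The array above pairs the section texts of this record with section-type labels (null for untyped fragments, plus "methods", "results", "discussion" and "conclusions"). A minimal sketch of how such a record could be filtered by section type is given below; the field names follow the column list of this dataset and are assumed to hold parallel lists of equal length.

```python
def sections_of_type(record: dict, wanted: str) -> list[str]:
    """Return the section texts whose sec_type label matches `wanted`."""
    texts = record.get("all_sections_texts", [])
    sec_types = record.get("all_sections_sec_types", [])
    return [text for text, sec_type in zip(texts, sec_types) if sec_type == wanted]

# Toy record shaped like this one (texts abbreviated for the example).
toy_record = {
    "all_sections_texts": [
        "...FRET-qPCR thermal cycling parameters...",
        "...prevalence of Theileria spp. DNA in ruminants...",
        "...a useful diagnostic tool with a single test...",
    ],
    "all_sections_sec_types": ["methods", "results", "conclusions"],
}
print(sections_of_type(toy_record, "results"))
```

Any real record would hold the full section texts rather than the abbreviated placeholders used here.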
[ "Theileria spp", "FRET-qPCR", "Prevalence", "Ruminants" ]
Background: Theileria spp. are tick-transmitted, intracellular protozoan parasites infecting leukocytes and erythrocytes of wild and domestic large and small ruminants. Several Theileria spp., transmitted by ixodid ticks of the genera Rhipicephalus, Hyalomma, Amblyomma and Haemaphysalis, have been described in cattle, water buffaloes, sheep and goats in different geographical zones of the world [1–5]. Theileriosis is primarily limited to tropical and sub-tropical areas of the world, with infections mainly reported in Africa and the Middle East but also in southern Europe and northern Asia [6–11]. Infections by Theileria spp. can cause fever, anemia and hemoglobinuria and, in severe cases, death although many species are benign. Animals recovered from acute or primary infections usually remain persistently infected and may act as reservoirs of infecting ticks [12, 13]. While there have been numerous reports of theileriosis in various animal species in China since 1958 [14–27], many have been reported in Chinese and some were based on microscopic detection of parasites which can be difficult with low parasitemia and does not allow ready differentiation of species. Serological studies, although sensitive and easy to perform, are not specific as there is cross reactivity between Theileria spp. Although molecular studies have been performed, these have been to detect Theileria of specific domestic animal species, for example sheep and goats [27]. There have been no highly sensitive and specific molecular methods described which enable studies on various animals from widely divergent areas of China where different Theileria spp. might occur. To address this problem, we developed and validated a highly sensitive genus-specific Theileria FRET-qPCR that detects the recognized Theileria spp. of domestic animals and investigated the molecular prevalence of Theileria in cattle, water buffaloes, goats and sheep from nine provinces in China. Methods: Animals and blood collection Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1 Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep Animal speciesSubspecies /breedProvinceCityCoordinate of city Theileriapositivitypositive /total % Cattle (n = 1235) Bos p. taurus SimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100% Bos p. 
indicus YunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0% Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1 Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep Animal speciesSubspecies /breedProvinceCityCoordinate of city Theileriapositivitypositive /total % Cattle (n = 1235) Bos p. taurus SimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100% Bos p. indicus YunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0% Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep DNA extraction DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular grade water were used to detect cross- contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30]. DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. 
The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular grade water were used to detect cross- contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify if the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30]. Theileriaspp.FRET-qPCR Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1 Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1 Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C. The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C. Specificity PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine). PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine). Sensitivity For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. 
ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples. For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples. Primers and probes The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1 Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. The 18S rRNA sequences for the available recognized Theileria spp. on GenBank and 4 other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemia (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal Multiple Alignment Algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within the conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target with the positions of primers and probes shown in Figure 1: forward primer: 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer: 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe: 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe: 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.Figure 1 Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. 
(0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Alignment of oligonucleotides for Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to primers and probes, and dashes denote absence of the nucleotide. The upstream primer is used as the demonstrated sequences without gaps while the two probes and downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimum mismatching with Theileria spp. (0 mismatch with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but significant numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3-terminal nucleotide of the fluorescein probe, and the LCRed-640 fluorescein label is added via a linker to the 5′-end of the LCRed-640 probe. Thermal cycling The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C. The Theileria FRET-PCR was performed in a LightCycler 480®II real-time PCR platform with 20 μl volumes comprising 10 μl reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2 min denaturation step at 95°C followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The parameters for qPCR were 6 × 12 sec at 64°C, 8 sec at 72°C, 0 sec at 95°C; 9 × 12 sec at 62°C, 8 sec at 72°C, 0 sec at 95°C; 3 × 12 sec at 60°C, 8 sec at 72°C, 0 sec at 95°C; 40 × 8 sec at 54°C and fluorescence acquisition, 8 sec at 72°C, 0 sec at 95°C. Specificity PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. 
gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine). PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and genomic sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected positive Theileria samples (n = 37) were compared with the existing Theileria sequences in the GenBank using BLAST. The specificity of the PCR was further verified with the amplification of T. orientalis rRNA-containing pIDTSMART cloning Vector (Integrated DNA Technologies, Coralville, IA, USA) and 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine). Sensitivity For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples. For use as quantitative standards, the PCR products of DNAs of 5 Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni, T. ovis) were gel purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quanti-iT TM PicoGreen ® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10, 1 gene copies per PCR reaction system. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per PCR reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of the gene copy numbers in positive samples. Identification of Theileriaspp. by PCR and sequencing The amplicon of the pan-Theileria FRET-qPCR we established has a sequence which is highly conserved among the different Theileria species. To differentiate Theileria spp. in a positive reaction, we used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides for different Theileria spp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For the PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31]. The amplicon of the pan-Theileria FRET-qPCR we established has a sequence which is highly conserved among the different Theileria species. To differentiate Theileria spp. 
in a positive reaction, we used a standard PCR to amplify a highly polymorphic region of the 18S rRNA gene (591–594 nucleotides for different Theileria spp.) and sequenced the products (GenScript, Nanjing, Jiangsu, China). For the PCR we designed a forward primer (5′-CCTGAGAAACGGCTACCACATCT-3′) that amplified all Theileria species and used a previously described reverse primer (5′-GGACTACGACGGTATCTGATCG-3′) that also amplified all species [31]. Statistical analysis Differences in positivity of Theileria spp. were analyzed by Chi-squared Test while numbers of copies of the Theileria 18S rRNA gene determined in the Theileria FRET-qPCR were log10-transformed and analyzed using the Student’s T-test. Differences of P < 0.05 were considered statistically significant. Differences in positivity of Theileria spp. were analyzed by Chi-squared Test while numbers of copies of the Theileria 18S rRNA gene determined in the Theileria FRET-qPCR were log10-transformed and analyzed using the Student’s T-test. Differences of P < 0.05 were considered statistically significant. Animals and blood collection: Between 2007 and 2013, whole blood samples (around 6 ml) were collected in EDTA from apparently healthy cattle (n = 1,235), water buffaloes (29), goats (270) and sheep (111) from 9 provinces/municipality of China (Table 1). The Bos primigenius (p.) taurus studied (n = 975) were Holsteins, Simmentals, Bohai blacks, Luxis and Wannans while the Bos. p. indicus (n = 260) were the Yunlings, Minnans, and Leiqiongs (Table 1). The water buffaloes, goats and sheep in the study were bred in China and were indigenous breeds. Gender information was available for cattle from Yunnan province. After collection, the blood samples were frozen at -20°C and shipped on ice (over 2 days) to Yangzhou University where they were frozen at -80°C until thawed at room temperature for DNA extraction as described below. This study was reviewed and approved by the Institutional Animal Care and Use Committee of Yangzhou University and animal owners gave written permissions for blood collection.Table 1 Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep Animal speciesSubspecies /breedProvinceCityCoordinate of city Theileriapositivitypositive /total % Cattle (n = 1235) Bos p. taurus SimmentalInner MongoliaChifeng42.17°N, 118.58°E19/13214.4%Bohai blackShandongBinzhou37.22°N, 118.02°E4/666.1%LuxiShandongJining35.23°N, 116.33°E40/40100%HolsteinJiangsuYancheng33.22°N, 120.08°E72/32122.4%HolsteinJiangsuYangzhou32.23°N, 119.26°E17/14411.8%HolsteinShanghaiShanghai31.14°N, 121.29°E9/2553.5%WannanAnhuiWuhu31.19°N, 118.22°E17/17100% Bos p. indicus YunlingYunnanKunming25.04°N, 102.42°E124/16177.0%MinnanFujianPutian24.26°N, 119.01°E4/2516.0%LeiqiongHainanHaikou20.02°N, 110.20°E74/74100%Water buffalo (n = 29)HaiziJiangsuYancheng33.22°N, 120.08°E0/290%Goat (n = 270)XinjiangXinjiangUrumqi43.45°N, 87.36°E4/984.1%Yangtse River Delta WhiteJiangsuYangzhou32.23°N, 119.26°E116/17267.4%Sheep (n = 111)WurankeInner MongoliaXilingol43.57°N, 116.03°E36/7250.0%Sishui FurShandongJining35.23°N, 116.33°E23/3959.0% Molecular prevalence of Theileria spp. in cattle, water buffalo, goat and sheep DNA extraction: DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml whole blood was used to extract DNA which was resuspended into 200 μl 1 × T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. 
DNA extraction: DNA was extracted from whole blood samples using a standard phenol-chloroform method previously described [28]. Two ml of whole blood was used to extract DNA, which was resuspended in 200 μl of 1× T10E0.1 buffer. The concentration of the extracted DNA was established with a Microscale Ultraviolet Spectrophotometer. Negative controls consisting of sterile molecular-grade water were used to detect cross-contamination during DNA extraction and processing. The HMBS-based FRET-PCR was performed to verify that the extracted DNAs from blood samples were appropriate for molecular detection of tick-borne pathogens [29, 30].

Theileria spp. FRET-qPCR

Primers and probes: The 18S rRNA sequences of the recognized Theileria spp. available on GenBank and of other closely related protozoan species were obtained from GenBank: T. orientalis (HM538222), T. buffeli (HQ840967), T. annulata (KF429799), T. sergenti (EU083804), T. luwenshuni (JX469527), T. velifera (AF097993), T. ovis (AY508458), T. parva (L02366), T. uilenbergi (JF719835), T. equi (AB515310), T. lestoquardi (JQ917458), T. separata (AY260175), T. capreoli (AY726011), T. cervi (AY735119), T. bicornis (AF499604), T. taurotragi (L19082), T. mutans (FJ213585); Babesia bovis (KF928529), B. divergens (LK935835), B. bigemina (LK391709), Hepatozoon americanum (AF176836), Cytauxzoon felis (AY679105) and Toxoplasma gondii (L37415) (Figure 1). The Clustal multiple alignment algorithm was used to identify a highly conserved region of the 18S rRNA gene common to all the above Theileria spp. but significantly different from the other protozoan species (Figure 1). The primers and probes we developed were situated within this conserved region and synthesized by Integrated DNA Technologies (Coralville, IA, USA). The Theileria FRET-qPCR we established amplifies a 149-bp target, with the positions of primers and probes shown in Figure 1: forward primer, 5′-TAGTGACAAGAAATAACAATACGGGGCTT-3′; reverse primer, 5′-CAGCAGAAATTCAACTACGAGCTTTTTAACT-3′; anchor probe, 5′-CCAATTGATACTCTGGAAGAGGTTT-(6-FAM)-3′; reporter probe, 5′-(LCRed640)-AATTCCCATCATTCCAATTACAAGAC-phosphate-3′.

Figure 1. Alignment of oligonucleotides for the Theileria PCR used in this study. Primers and probes are shown at the top of the boxes. Dots indicate nucleotides identical to the primers and probes, and dashes denote absence of the nucleotide. The upstream primer is shown as the sense sequence without gaps, while the two probes and the downstream primer are used as antisense oligonucleotides. The designed oligonucleotides show minimal mismatching with Theileria spp. (0 mismatches with 11 species, 1 mismatch with 3 species, 2 mismatches with 2 species and 4 mismatches with 1 species), but substantial numbers of mismatches with Babesia bovis (25 mismatches), B. divergens (23 mismatches), B. bigemina (22 mismatches), Cytauxzoon felis (8 mismatches), Hepatozoon americanum (16 mismatches) and Toxoplasma gondii (15 mismatches). The 6-FAM label is directly attached to the 3′-terminal nucleotide of the fluorescein probe, and the LCRed-640 label is added via a linker to the 5′-end of the LCRed-640 probe.

Thermal cycling: The Theileria FRET-PCR was performed on a LightCycler® 480 II real-time PCR platform in 20-μl volumes comprising 10 μl of reaction master mix and 10 μl of sample. Thermal cycling consisted of a 2-min denaturation step at 95°C, followed by 18 high-stringency step-down thermal cycles, 40 low-stringency fluorescence acquisition cycles, and melting curve determination between 38°C and 80°C. The step-down parameters were: 6 cycles of 12 sec at 64°C, 8 sec at 72°C and 0 sec at 95°C; 9 cycles of 12 sec at 62°C, 8 sec at 72°C and 0 sec at 95°C; 3 cycles of 12 sec at 60°C, 8 sec at 72°C and 0 sec at 95°C; then 40 cycles of 8 sec at 54°C with fluorescence acquisition, 8 sec at 72°C and 0 sec at 95°C.

Specificity: PCR products were verified using electrophoresis (1.5% MetaPhor agarose gels), followed by purification with a QIAquick PCR Purification Kit (Qiagen, Valencia, CA, USA) and sequencing (GenScript, Nanjing, Jiangsu, China). The sequencing data from randomly selected Theileria-positive samples (n = 37) were compared with existing Theileria sequences in GenBank using BLAST. The specificity of the PCR was further verified with a T. orientalis rRNA-containing pIDTSMART cloning vector (Integrated DNA Technologies, Coralville, IA, USA) and with 100 DNA copies of the rRNA gene of B. canis, H. americanum, C. felis and T. gondii (kindly provided by the parasitological laboratory of Yangzhou University College of Veterinary Medicine).
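The species discrimination summarized in Figure 1 rests on tallying primer/probe-template mismatches across the alignment. The snippet below is an illustrative sketch of that bookkeeping only, not part of the study; the aligned target fragments are shortened, made-up placeholders rather than real GenBank sequences.

```python
# Illustrative sketch only: counting mismatches between a designed oligo and
# aligned target sequences, as summarized in Figure 1.
def count_mismatches(oligo: str, aligned_target: str) -> int:
    """Position-by-position comparison; a gap ('-') in the target counts as a mismatch."""
    return sum(1 for a, b in zip(oligo.upper(), aligned_target.upper()) if a != b)

forward_primer = "TAGTGACAAGAAATAACAATACGGGGCTT"

# Made-up aligned fragments (placeholders only).
aligned_targets = {
    "Theileria sp. (placeholder)": "TAGTGACAAGAAATAACAATACGGGGCTT",
    "Babesia sp. (placeholder)":   "TAGTTACGAGATATCAGATACCGTCGTAG",
}
for name, seq in aligned_targets.items():
    print(f"{name}: {count_mismatches(forward_primer, seq)} mismatches")
```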
Sensitivity: For use as quantitative standards, the PCR products of the DNAs of five Theileria species (T. orientalis, T. sergenti, T. buffeli, T. luwenshuni and T. ovis) were gel-purified using a QIAquick Gel Extraction Kit (Qiagen, Valencia, CA, USA). After using the estimated molecular mass of the rRNA gene and the Quant-iT™ PicoGreen® dsDNA Assay Kit (Invitrogen Corporation, Carlsbad, CA, USA) to calculate the molarity of the solution, dilutions were made to give solutions containing 10,000, 1,000, 100, 10 and 1 gene copies per PCR reaction. These dilutions, and further dilutions providing 2, 4, 6 and 8 gene copies per reaction, were used to determine the minimal detection limit. The 10-fold dilutions were used as quantitative standards in the FRET-PCR surveys to enable standard curves to be developed for the calculation of gene copy numbers in positive samples.

Results: Development of the pan-Theileria FRET-PCR

Comparison of the sequences showed that the region we selected is highly conserved among the Theileria spp. but substantially different from those of closely related protozoan species (Figure 1). The two primers and two probes we chose for the pan-Theileria FRET-qPCR had 0–4 nucleotide mismatches with the Theileria spp. in GenBank, but had 25, 23, 22, 8, 16 and 15 mismatches with B. bovis, B. divergens, B. bigemina, C. felis, H. americanum and T. gondii, respectively (Figure 1). The specificity of the pan-Theileria FRET-PCR was further confirmed when it gave positive reactions with the T. orientalis control but negative reactions with the DNAs of B. canis, C. felis, H. americanum and T. gondii. The pan-Theileria FRET-qPCR had a specific melting curve (Tm 57.5°C) with Theileria spp. DNA. Using the gel-purified PCR products as quantitative standards, we determined that the detection limit of the pan-Theileria FRET-qPCR was 2 copies of the Theileria 18S rRNA gene per reaction for T. orientalis, T. sergenti, T. luwenshuni, T. buffeli and T. ovis.
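To make the quantification step concrete, the sketch below shows how a standard curve from the 10-fold dilution series could be fitted and used to back-calculate copies per reaction for unknowns. It is illustrative only; the Cq values are invented placeholders, and NumPy is assumed.

```python
# Illustrative sketch only: standard curve from 10-fold dilution standards and
# back-calculation of copies/reaction for an unknown sample.
import numpy as np

std_copies = np.array([1e4, 1e3, 1e2, 1e1, 1e0])   # copies per reaction (standards)
std_cq = np.array([22.1, 25.5, 28.9, 32.3, 35.7])  # placeholder quantification cycles

# Linear fit: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1               # ~1.0 corresponds to ~100% efficiency

def copies_from_cq(cq: float) -> float:
    return 10 ** ((cq - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
print(f"unknown with Cq 30.0 ≈ {copies_from_cq(30.0):.0f} copies/reaction")
```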
Prevalence of Theileria spp. DNA in ruminants

Animals positive for Theileria were found in each of the nine provinces sampled, with several animals of each species being positive at each location, except for the water buffaloes, which were all negative at the one site where they were studied. The overall prevalences of Theileria spp. DNA in sheep (53.2%; 59/111) and goats (44.4%; 120/270) were significantly higher than in cattle (30.8%; 380/1,235) (two-tailed chi-squared test, P < 10^-4). The pan-Theileria FRET-PCR showed that sheep had an average of 10^2.4 copies of Theileria 18S rRNA/ml whole blood, which was significantly lower than the 10^4.3 copies in cattle and 10^5.8 copies in goats (Student's t-test, P < 10^-4). While the prevalence of Theileria spp. DNA varied greatly, from 3.5% (9/255) in Holsteins from Shanghai to 100% in Luxi cattle from Shandong (40/40) and Leiqiong cattle from Hainan (74/74), the prevalence did not differ significantly in sheep from Inner Mongolia and Shandong (Table 1, Figure 2).

Figure 2. Sites in China where samples from ruminants were tested for Theileria spp. DNA. Dots of different colors represent sites where samples obtained from cattle, water buffalo, goats and sheep of nine provinces were tested by pan-Theileria FRET-qPCR in this study.

Sequencing of 87 randomly selected amplicons (52 from cattle, 14 from goats and 21 from sheep) from Theileria DNA-positive samples showed that the T. orientalis/T. sergenti/T. buffeli group [29] was present in cattle, while T. luwenshuni was found in goats in Jiangsu province, and T. ovis and T. luwenshuni in sheep from Inner Mongolia and Jiangsu province, respectively.

Factors associated with the occurrence of theileriosis in cattle

When we analyzed factors that might be associated with the prevalence of Theileria spp. DNA in cattle, we found that Bos p. indicus animals had significantly higher positivity (77.7% vs. 18.3%; P < 10^-4) and higher copy numbers of the Theileria 18S rRNA gene (10^4.81 vs. 10^3.73 copies per ml whole blood; P < 10^-4) than Bos p. taurus animals. Similarly, Bos p. indicus cattle were more likely to be positive (65.2% vs. 13.7%; P < 10^-4) and to have higher copy numbers of the Theileria 18S rRNA gene (10^4.88 vs. 10^3.00 copies per ml whole blood; P < 10^-4) than Bos p. taurus animals. The cattle from southern China had significantly higher Theileria 18S rRNA gene copy numbers (10^4.39 vs. 10^3.87 copies per ml whole blood; P = 0.02) than those from northern China, but the difference in prevalence was not significant (31.8% vs. 26.5%). In cattle from Yunnan province, where gender information was available, females were more commonly positive (76.7% vs. 41.3%; P < 10^-4) and had higher copy numbers (10^5.02 vs. 10^2.93 Theileria per ml whole blood; P < 10^-4) than males.

Gene accession numbers

The Theileria rRNA nucleotide sequences obtained in this study that were not identical to existing entries in GenBank were deposited under the following gene accession numbers: KJ850933 and KJ850938 (T. sergenti); KJ850936 and KJ850940 (T. buffeli); KJ850934, KJ850937, KJ850943, KJ850939 and KJ850941 (T. orientalis); KJ850942 (T. ovis); KJ850935 and KM016463 (T. luwenshuni). The sequences obtained were very similar (0–4 nucleotide mismatches) to Theileria spp. sequences deposited by other laboratories in China, USA, France, Australia and Iran (Table 2).

Table 2. Comparison of isolates identified in this study and similar sequences in GenBank by BLASTN

Theileria spp. | Accession # (this study) | Source/origin (this study) | Accession # (GenBank) | Source/origin (GenBank) | Mismatches
T. orientalis | KJ850934 | Simmental cattle, Inner Mongolia | AP011948 | Cattle, Shintoku of Japan | 0/547
T. orientalis | KJ850939 | Luxi cattle, Shandong | HM538220 | Cattle, Suizhou of China | 0/547
T. orientalis | KJ850937 | Yunling cattle, Yunnan | AB520956 | Cattle, New South Wales of Australia | 2/491
T. orientalis | KJ850941 | Yunling cattle, Yunnan | AB520955 | Cattle, Raymond of Australia | 0/509
T. orientalis | KJ850943 | Holstein cattle, Jiangsu | AB520956 | Cattle, New South Wales of Australia | 0/549
T. sergenti | KJ850933 | Australian Holstein cattle, Jiangsu | JQ723015 | Cattle, Hunan of China | 0/541
T. sergenti | KJ850938 | Yunling cattle, Yunnan | JQ723015 | Cattle, Hunan of China | 0/492
T. buffeli | KJ850936 | Leiqiong cattle, Hainan | HM538196 | Cattle, Hubei of China | 1/508
T. buffeli | KJ850940 | Wannan cattle, Anhui | AY661513 | Cattle, USA | 0/545
T. ovis | KJ850942 | Wuranke sheep, Inner Mongolia | FJ603460 | Sheep, Xinjiang of China | 0/529
T. luwenshuni | KJ850935 | Sishui-Fur sheep, Shandong | KC769996, JX469518, JF719831 | Sheep, China | 0/549
T. luwenshuni | KM016463 | Yangtse River Delta White goat, Jiangsu | KC769997 | Goat, Beijing of China | 0/571

Discussion: By systematically aligning the 18S rRNA sequences of representative Theileria spp. and other related protozoa, we identified a highly conserved region in which to place primers and probes for a pan-Theileria FRET-qPCR. The primers and probes we designed enabled us to amplify all our reference Theileria species with a detection limit as low as 2 Theileria 18S rRNA gene copies per PCR reaction. None of the other protozoan species tested gave reaction products. To the best of our knowledge, this is the first FRET-qPCR that specifically detects all Theileria species. Our data indicate that infections with Theileria spp. are very widespread and common in cattle in China. We found positive animals in each province where we tested, with an overall average of 30.77% of animals being positive. This relatively high level of positivity is similar to that obtained in the only other molecular survey for Theileria in China, which showed 13.46% positive in the northeast [32]. Sequencing data showed cattle are infected with the T. orientalis/T. sergenti/T. buffeli group, which are generally recognized to be benign species [33], although some strains might cause economic losses [34]. Although we sequenced relatively few positive amplicons, we found no evidence of the more pathogenic strains, T. annulata and T. parva, which is not unexpected in the case of T. parva, which has only been reported from Africa [8]. T. annulata, however, has been reported in H. asiaticum in northwestern China [35]. Our failure to demonstrate the organism in our study might be because T. annulata has an uneven distribution in China. Although we found no evidence of Theileria spp. in the 29 water buffaloes we sampled, He [36] reported 58/304 (19.1%) positive for T. buffeli in Hubei province, south China. More extensive studies are necessary to determine the epidemiology of theileriosis in water buffaloes. Using the pan-Theileria FRET-PCR and sequencing, we found that both goats and sheep were infected with T. luwenshuni (Table 2). This is a highly pathogenic organism that is known to occur in China, where it is transmitted by Haemaphysalis qinghaiensis and may cause significant economic losses. While PCR assays have been described which detect and differentiate T. luwenshuni from T. uilenbergi [37, 38], they do not enable detection of all Theileria spp. in all host species and are thus not as versatile as our pan-Theileria FRET-PCR. Yin et al. [37] found the prevalence of T. luwenshuni varied from 0% to 85% in four provinces, and we also found a wide range of positive values (4.1% to 67.4%). Such variability is probably due to tick prevalence rates, geoclimatic factors and livestock management systems. Further research using sensitive detection methods will be important in determining the best mechanisms to control infections. T. ovis was first identified in China in 2011 by PCR and sequencing; an infection rate of 78% was found in Xinjiang, but no positive animals were found in twelve other provinces [26, 27, 39]. Our study confirms the presence of the organism in China, although T. ovis is considered benign and economically unimportant, as it might only cause clinical signs in some stressed animals [40]. It is well recognized that exotic breeds are more susceptible to disease following infection with Theileria spp. than local stocks. Recent studies, however, have indicated that different breeds of animals might have similar susceptibilities to infections with Theileria spp., although some breeds are more capable of controlling the pathogenic effects of the organism [41]. Generally, B. p. taurus breeds are more susceptible to theileriosis than B. p. indicus breeds, which might be associated with innate or immune mechanisms and/or general resistance to ticks. These factors have mainly been investigated with T. parva infections, and it is thus interesting that our pan-Theileria FRET-PCR testing showed that in China it is the B. p. indicus breeds that are more likely to be positive than the B. p. taurus breeds, and that the former generally had higher copy numbers, indicating heavier infections. While our sample sizes were small and we cannot exclude the possibility of sample bias, our findings might be due to host genetic factors relating to infections with less pathogenic Theileria spp., or to other factors such as differences in tick control and husbandry practices on different farms. Further studies taking these factors into account are needed to investigate more precisely the relationships between infections with benign Theileria spp. and the genetic background of the host. Conclusion: In summary, our study has described the development and testing of a FRET-PCR which can detect recognized Theileria spp. The pan-Theileria FRET-PCR should be a useful diagnostic tool as it will enable diagnostic laboratories to detect infections in all domestic species with a single test.
Background: Theileria spp. are tick transmitted protozoa that can infect large and small ruminants causing disease and economic losses. Diagnosis of infections is often challenging, as parasites can be difficult to detect and identify microscopically and serology is unreliable. While there are PCR assays which can identify certain Theileria spp., there is no one PCR that has been designed to identify all recognized species that occur in ruminants and which will greatly simplify the laboratory diagnoses of infections. Methods: Primers and probes for a genus-specific pan-Theileria FRET-qPCR were selected by comparing sequences of recognized Theileria spp. in GenBank and the test validated using reference organisms. The assay was also tested on whole blood samples from large and small ruminants from nine provinces in China. Results: The pan-Theileria FRET-qPCR detected all recognized species but none of the closely related protozoa. In whole blood samples from animals in China, Theileria spp. DNA was detected in 53.2% of the sheep tested (59/111), 44.4% of the goats (120/270) and 30.8% of the cattle (380/1,235). Water buffaloes (n = 29) were negative. Sequencing of some of the PCR products showed cattle in China were infected with T. orientalis/T. sergenti/T. buffeli group while T. ovis and T. luwenshuni were found in sheep and T. luwenshuni in goats. The prevalence of Theileria DNA was significantly higher in Bos p. indicus than in Bos p. taurus (77.7% vs. 18.3%) and copy numbers were also significantly higher (10(4.88) vs. 10(3.00) Theileria 18S rRNA gene copies/per ml whole blood). Conclusions: The pan-Theileria FRET-qPCR can detect all recognized Theileria spp. of ruminants in a single reaction. Large and small ruminants in China are commonly infected with a variety of Theileria spp.
Background: Theileria spp. are tick-transmitted, intracellular protozoan parasites infecting leukocytes and erythrocytes of wild and domestic large and small ruminants. Several Theileria spp., transmitted by ixodid ticks of the genera Rhipicephalus, Hyalomma, Amblyomma and Haemaphysalis, have been described in cattle, water buffaloes, sheep and goats in different geographical zones of the world [1–5]. Theileriosis is primarily limited to tropical and sub-tropical areas of the world, with infections mainly reported in Africa and the Middle East but also in southern Europe and northern Asia [6–11]. Infections by Theileria spp. can cause fever, anemia and hemoglobinuria and, in severe cases, death although many species are benign. Animals recovered from acute or primary infections usually remain persistently infected and may act as reservoirs of infecting ticks [12, 13]. While there have been numerous reports of theileriosis in various animal species in China since 1958 [14–27], many have been reported in Chinese and some were based on microscopic detection of parasites which can be difficult with low parasitemia and does not allow ready differentiation of species. Serological studies, although sensitive and easy to perform, are not specific as there is cross reactivity between Theileria spp. Although molecular studies have been performed, these have been to detect Theileria of specific domestic animal species, for example sheep and goats [27]. There have been no highly sensitive and specific molecular methods described which enable studies on various animals from widely divergent areas of China where different Theileria spp. might occur. To address this problem, we developed and validated a highly sensitive genus-specific Theileria FRET-qPCR that detects the recognized Theileria spp. of domestic animals and investigated the molecular prevalence of Theileria in cattle, water buffaloes, goats and sheep from nine provinces in China. Conclusion: In summary, our study has described the development and testing of a FRET-PCR which can detect recognized Theileria spp. The pan-Theileria FRET-PCR should be a useful diagnostic tool as it will enable diagnostic laboratories to detect infections in all domestic species with a single test.
15,204
353
[ 338, 401, 111, 2379, 660, 203, 141, 176, 109, 57, 226, 385, 250, 283 ]
18
[ "theileria", "mismatches", "pcr", "species", "spp", "theileria spp", "sec", "cattle", "gene", "probes" ]
[ "pathogenic theileria spp", "tick borne pathogens", "occurrence theileriosis cattle", "theileria spp widespread", "prevalence theileria cattle" ]
[CONTENT] Theileria spp | FRET-qPCR | Prevalence | Ruminants [SUMMARY]
[CONTENT] Theileria spp | FRET-qPCR | Prevalence | Ruminants [SUMMARY]
[CONTENT] Theileria spp | FRET-qPCR | Prevalence | Ruminants [SUMMARY]
[CONTENT] Theileria spp | FRET-qPCR | Prevalence | Ruminants [SUMMARY]
[CONTENT] Theileria spp | FRET-qPCR | Prevalence | Ruminants [SUMMARY]
[CONTENT] Theileria spp | FRET-qPCR | Prevalence | Ruminants [SUMMARY]
[CONTENT] Animals | DNA, Protozoan | Molecular Sequence Data | Polymerase Chain Reaction | Prevalence | Ruminants | Sensitivity and Specificity | Theileria | Theileriasis [SUMMARY]
[CONTENT] Animals | DNA, Protozoan | Molecular Sequence Data | Polymerase Chain Reaction | Prevalence | Ruminants | Sensitivity and Specificity | Theileria | Theileriasis [SUMMARY]
[CONTENT] Animals | DNA, Protozoan | Molecular Sequence Data | Polymerase Chain Reaction | Prevalence | Ruminants | Sensitivity and Specificity | Theileria | Theileriasis [SUMMARY]
[CONTENT] Animals | DNA, Protozoan | Molecular Sequence Data | Polymerase Chain Reaction | Prevalence | Ruminants | Sensitivity and Specificity | Theileria | Theileriasis [SUMMARY]
[CONTENT] Animals | DNA, Protozoan | Molecular Sequence Data | Polymerase Chain Reaction | Prevalence | Ruminants | Sensitivity and Specificity | Theileria | Theileriasis [SUMMARY]
[CONTENT] Animals | DNA, Protozoan | Molecular Sequence Data | Polymerase Chain Reaction | Prevalence | Ruminants | Sensitivity and Specificity | Theileria | Theileriasis [SUMMARY]
[CONTENT] pathogenic theileria spp | tick borne pathogens | occurrence theileriosis cattle | theileria spp widespread | prevalence theileria cattle [SUMMARY]
[CONTENT] pathogenic theileria spp | tick borne pathogens | occurrence theileriosis cattle | theileria spp widespread | prevalence theileria cattle [SUMMARY]
[CONTENT] pathogenic theileria spp | tick borne pathogens | occurrence theileriosis cattle | theileria spp widespread | prevalence theileria cattle [SUMMARY]
[CONTENT] pathogenic theileria spp | tick borne pathogens | occurrence theileriosis cattle | theileria spp widespread | prevalence theileria cattle [SUMMARY]
[CONTENT] pathogenic theileria spp | tick borne pathogens | occurrence theileriosis cattle | theileria spp widespread | prevalence theileria cattle [SUMMARY]
[CONTENT] pathogenic theileria spp | tick borne pathogens | occurrence theileriosis cattle | theileria spp widespread | prevalence theileria cattle [SUMMARY]
[CONTENT] theileria | mismatches | pcr | species | spp | theileria spp | sec | cattle | gene | probes [SUMMARY]
[CONTENT] theileria | mismatches | pcr | species | spp | theileria spp | sec | cattle | gene | probes [SUMMARY]
[CONTENT] theileria | mismatches | pcr | species | spp | theileria spp | sec | cattle | gene | probes [SUMMARY]
[CONTENT] theileria | mismatches | pcr | species | spp | theileria spp | sec | cattle | gene | probes [SUMMARY]
[CONTENT] theileria | mismatches | pcr | species | spp | theileria spp | sec | cattle | gene | probes [SUMMARY]
[CONTENT] theileria | mismatches | pcr | species | spp | theileria spp | sec | cattle | gene | probes [SUMMARY]
[CONTENT] specific | theileria | studies | sensitive | domestic | infections | spp | theileria spp | infecting | sheep goats [SUMMARY]
[CONTENT] mismatches | sec | species | probes | theileria | pcr | primer | probe | oligonucleotides | primers [SUMMARY]
[CONTENT] cattle | theileria | vs | sheep | 10 | china0 | pan | pan theileria fret | pan theileria | sequences [SUMMARY]
[CONTENT] diagnostic | detect | useful diagnostic tool | laboratories detect infections | summary study | summary study described | summary study described development | development testing | development testing fret | laboratories detect infections domestic [SUMMARY]
[CONTENT] theileria | mismatches | cattle | sec | pcr | species | spp | theileria spp | 10 | fret [SUMMARY]
[CONTENT] theileria | mismatches | cattle | sec | pcr | species | spp | theileria spp | 10 | fret [SUMMARY]
[CONTENT] ||| ||| ||| PCR | Theileria | PCR [SUMMARY]
[CONTENT] pan-Theileria FRET-qPCR | Theileria ||| GenBank ||| nine | China [SUMMARY]
[CONTENT] ||| China | Theileria ||| 53.2% | 59/111 | 44.4% | 120/270 | 30.8% | 380/1,235 ||| 29 ||| PCR | China | T. | T. ovis | T. | T. ||| Theileria | Bos | Bos | 77.7% | 18.3% | 10(3.00 ||| 18S [SUMMARY]
[CONTENT] Theileria ||| ||| China | Theileria [SUMMARY]
[CONTENT] ||| ||| ||| PCR | Theileria | PCR ||| pan-Theileria FRET-qPCR | Theileria ||| GenBank ||| nine | China ||| ||| China | Theileria ||| 53.2% | 59/111 | 44.4% | 120/270 | 30.8% | 380/1,235 ||| 29 ||| PCR | China | T. | T. ovis | T. | T. ||| Theileria | Bos | Bos | 77.7% | 18.3% | 10(3.00 ||| 18S ||| Theileria ||| ||| China | Theileria [SUMMARY]
[CONTENT] ||| ||| ||| PCR | Theileria | PCR ||| pan-Theileria FRET-qPCR | Theileria ||| GenBank ||| nine | China ||| ||| China | Theileria ||| 53.2% | 59/111 | 44.4% | 120/270 | 30.8% | 380/1,235 ||| 29 ||| PCR | China | T. | T. ovis | T. | T. ||| Theileria | Bos | Bos | 77.7% | 18.3% | 10(3.00 ||| 18S ||| Theileria ||| ||| China | Theileria [SUMMARY]
Preterm birth by vacuum extraction and neonatal outcome: a population-based cohort study.
24450413
Very few studies have investigated the neonatal outcomes after vacuum extraction delivery (VE) in the preterm period and the results of these studies are inconclusive. The objective of this study was to describe the use of VE for preterm delivery in Sweden and to compare rates of neonatal complications after preterm delivery by VE to those found after cesarean section during labor (CS) or unassisted vaginal delivery (VD).
BACKGROUND
Data was obtained from Swedish national registers. In a population-based cohort from 1999 to 2010, all live-born, singleton preterm infants in a non-breech presentation at birth, born after onset of labor (either spontaneously, by induction, or by rupture of membranes) by VD, CS, or VE were included, leaving a study population of 40,764 infants. Logistic regression analyses were used to calculate adjusted odds ratios (AOR), using unassisted vaginal delivery as reference group.
METHODS
VE was used in 5.7% of the preterm deliveries, with lower rates in earlier gestations. Overall, intracranial hemorrhage (ICH) occurred in 1.51%, extracranial hemorrhage (ECH) in 0.64%, and brachial plexus injury in 0.13% of infants. Infants delivered by VE had higher risks for ICH (AOR = 1.84 (95% CI: 1.09-3.12)), ECH (AOR = 4.48 (95% CI: 2.84-7.07)) and brachial plexus injury (AOR = 6.21 (95% CI: 2.22-17.4)), while infants delivered by CS during labor had no increased risk for these complications, as compared to VD.
RESULTS
While rates of neonatal complications after VE are generally low, higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries after VE, compared with other modes of delivery, support a continued cautious use of VE for preterm delivery.
CONCLUSION
[ "Adult", "Birth Injuries", "Brachial Plexus", "Cesarean Section", "Cohort Studies", "Female", "Gestational Age", "Humans", "Infant, Newborn", "Infant, Premature", "Intracranial Hemorrhage, Traumatic", "Labor, Obstetric", "Parturition", "Pregnancy", "Premature Birth", "Registries", "Scalp", "Seizures", "Sweden", "Vacuum Extraction, Obstetrical", "Young Adult" ]
3900732
Background
Preterm birth is common [1] but still, the optimal mode of delivery of preterm infants is not known. Although neonatal outcomes in preterm infants delivered vaginally or by cesarean section (CS) [2-5] have been compared, there is no evidence to provide clear guidance on the method of choice [6]. Given the widespread assumption that assisted vaginal delivery could be harmful for fragile infants that are underweight and preterm, very few studies have addressed the use of vacuum extraction (VE) for preterm birth. Delivery by VE is a common obstetrical procedure, and in many countries it has replaced the use of forceps. VE is used to terminate a protracted second stage of labor and as an intervention for fetal or maternal distress. VE requires vertex presentation, a fully dilated cervix and ruptured membranes [7]. A cesarean section, on the other hand, can be performed at any stage of labor and does not require prerequisites of this kind. Most clinical guidelines do not recommend VE before 34 gestational weeks [8-10]. These recommendations are not based on results of randomized controlled trials, but rely on the observation that preterm infants are more likely than term infants to develop ICH, and on extrapolations from studies of term infants showing that VE is associated with an increased risk of ICH and other neonatal complications [11-17]. Only three studies have previously investigated the use and outcomes of VE in preterm births. The first was undertaken over 40 years ago and showed increased mortality and morbidity among preterm infants delivered by VE as compared with term infants delivered by VE [18]. The second study compared neonatal morbidity in preterm infants delivered vaginally with (n = 61) or without VE (n = 122), and found no differences in neonatal morbidity between the two groups [19]. The last study compared VE and forceps for preterm delivery (n = 64) [20]; the neonatal outcomes were similar in both groups. The available data are clearly untimely and hampered by limitations in power and, therefore, current knowledge on safety of preterm vacuum-assisted birth is unsatisfactory. The aim of this study was to 1) describe the use of VE and compare it to rates of CS during labor in preterm deliveries in Sweden from 1999–2010, 2) characterize the distribution of perinatal risk factors associated with each mode of delivery, and 3) compare rates of neonatal intra- and extracranial hemorrhages, as well as occurrence of brachial plexus injury after preterm delivery by VE or CS during labor, using unassisted vaginal birth as a reference.
Methods
This study was based on data from national databases held by the Swedish National Board of Health and Welfare. The national registration number, assigned to each Swedish resident at birth, was used for individual record linkage. We used two registers: the Swedish Medical Birth Register (SMBR), which covers 99% of all births in Sweden, and the Swedish National Inpatient Register (IPR), which covers all public inpatient care. The SMBR includes prospectively collected information on maternal characteristics, reproductive history, and complications during pregnancy, delivery, and the neonatal period. The IPR includes data on each hospital admission and discharge. Study population: During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), deliveries by forceps (n = 257), and deliveries performed with both VE and CS (n = 125). We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction. A number of independent variables were collected; the maternal anthropometrics included age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery was coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9); and preeclampsia – hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates included epidural analgesia (EA; yes/no) and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into two groups: less than 34 weeks (VE not recommended) and 34–36 weeks (VE may be used). GA was recorded in completed weeks and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams.
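A minimal sketch of how the BMI and gestational-age groupings described above could be coded is shown below. It is not from the study; the column names and toy values are assumptions, and pandas is assumed.

```python
# Illustrative sketch only: BMI and gestational-age categorization as described.
import pandas as pd

df = pd.DataFrame({"bmi": [17.9, 22.4, 27.3, 31.0, None],
                   "ga_weeks": [27, 30, 33, 36, 35]})

# BMI: underweight <18.5, normal 18.5-24.9, overweight 25-29.9, obese >=30, plus "missing".
df["bmi_cat"] = pd.cut(df["bmi"],
                       bins=[0, 18.5, 25, 30, float("inf")],
                       right=False,
                       labels=["underweight", "normal", "overweight", "obese"])
df["bmi_cat"] = df["bmi_cat"].cat.add_categories("missing").fillna("missing")

# GA: WHO groups - extremely (<28), very (28-31), moderately (32-36) preterm.
df["ga_cat"] = pd.cut(df["ga_weeks"],
                      bins=[0, 28, 32, 37],
                      right=False,
                      labels=["extremely preterm", "very preterm", "moderately preterm"])
print(df)
```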
Outcome variables

Neonatal diagnoses were classified according to the International Classification of Diseases (ICD), Tenth Revision (1997 and onwards), and identified in the SMBR or the IPR. The following neonatal outcomes (ICD codes) were assessed: intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of the outcomes are described in detail in Table 1.
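A minimal sketch of how the listed ICD-10 codes could be turned into per-infant outcome flags follows; the code-to-outcome mapping mirrors the codes above, while the `diagnoses` input (a list of ICD-10 strings per infant) is a hypothetical stand-in for the register diagnosis fields.

```python
# Illustrative mapping from ICD-10 codes to the binary outcome indicators
# studied in the paper; a sketch only, with hypothetical input structure.
OUTCOME_PREFIXES = {
    "ich_birth_injury": ("P10",),
    "ich_nontraumatic": ("P52",),
    "convulsions": ("P90",),
    "other_cerebral": ("P91",),
    "subgaleal_hematoma": ("P12.2",),
    "cephalhematoma": ("P12.0",),
    "brachial_plexus_injury": ("P14.0", "P14.1", "P14.2", "P14.3"),
}

def flag_outcomes(diagnoses: list[str]) -> dict[str, bool]:
    """Return one True/False flag per studied outcome for a single infant."""
    codes = [c.upper() for c in diagnoses]
    return {
        outcome: any(code.startswith(prefixes) for code in codes)
        for outcome, prefixes in OUTCOME_PREFIXES.items()
    }
```

For example, flag_outcomes(["P52.1", "P90"]) would flag non-traumatic ICH and neonatal convulsions for that infant.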
Table 1. Neonatal outcomes studied in 40,764 preterm infants.

Neonatal diagnoses of intracranial hemorrhage in preterm infants were mainly based on imaging of the brain using ultrasonography; some assessments of the brain at term-equivalent age were instead performed with CT and/or MRI. In infants born moderately or late preterm, brain imaging was performed on clinical indications only, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even when asymptomatic. A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG.

Statistical analysis was performed using proportions and odds ratios (OR) with 95% confidence intervals (CI) for neonatal complications in relation to mode of delivery, with unassisted VD as the reference group (SPSS 20.0 for Windows). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude and two adjusted (Models 1 and 2). The included covariates have previously been shown to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight, and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery; here, intraventricular hemorrhages of grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separate analyses of potential relationships between infant sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31.
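The analysis itself was run in SPSS; as a rough, non-authoritative illustration of the modelling strategy (crude model, Model 1, Model 2, with VD as the reference category and year of birth entered continuously), the sketch below fits equivalent logistic regressions with statsmodels. The DataFrame `cohort` and the categorical columns are the hypothetical ones from the earlier sketch.

```python
# Sketch of the crude and adjusted logistic-regression models described above,
# using statsmodels as a stand-in for the SPSS analysis actually performed.
# 'cohort' and all column names are hypothetical.
import numpy as np
import statsmodels.formula.api as smf

# Crude model: outcome (e.g., ICH) versus mode of delivery, VD as reference
crude = smf.logit("ich ~ C(mode, Treatment('VD'))", data=cohort).fit()

# Model 1: add maternal and infant covariates; year of birth is continuous,
# all other covariates are entered as categories, as in the paper.
base = (
    "ich ~ C(mode, Treatment('VD')) + C(ga_band) + C(parity) + C(maternal_age_cat)"
    " + C(height_cat) + C(bmi_cat) + C(birthweight_cat) + birth_year"
)
model1 = smf.logit(base, data=cohort).fit()

# Model 2: additionally adjust for indication for operative delivery and preeclampsia
model2 = smf.logit(base + " + C(indication) + C(preeclampsia)", data=cohort).fit()

# Odds ratios with 95% confidence intervals for Model 2
or_table = np.exp(
    model2.params.to_frame("OR").join(
        model2.conf_int().rename(columns={0: "2.5%", 1: "97.5%"})
    )
)
```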
Results
Use of VE in relation to gestational age

Among the 40,764 preterm deliveries included in this study (54% of all preterm births), 2,319 (5.7%) infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.8%) by VD. The rate of VE deliveries increased gradually with gestational age (Figure 1).

Figure 1. Mode of delivery in relation to gestational age. Rates (%) of the different modes of delivery by gestational age (in completed weeks); the blue, dotted line represents unassisted vaginal deliveries, the red, dashed line cesarean sections performed after the onset of labor, and the green line vacuum extraction deliveries.

Distribution of risk factors or covariates in relation to mode of delivery

Table 2 shows maternal and perinatal characteristics of the study population in relation to mode of delivery. The VE rate decreased with maternal height, and 80% of the women who delivered by VE were primiparae, compared with 48% of those who underwent CS during labor (not in table). More than 45% of the women who delivered by VE had received epidural analgesia during labor, compared with 22% of women with VD and 12% with CS during labor (not in table). Given the association between GA and VE, infants delivered by VE had higher birthweights.

Table 2. Maternal, pregnancy, delivery, and infant characteristics by mode of delivery in the population-based cohort of 40,764 preterm deliveries. CS = cesarean section, BMI = body mass index (weight in kilograms/height in meters squared), EA = epidural analgesia.

The most common indication for VE was fetal distress (42%), followed by prolonged labor (25%). A non-occipitoanterior position (25%) and fetal distress (17%) were the most common indications for CS during labor, while only 3% in this group had a diagnosis of prolonged labor.
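The rates plotted in Figure 1 are, in essence, row percentages of delivery mode within each completed gestational week; assuming the hypothetical `cohort` DataFrame from the Methods sketches, such a table could be produced as follows.

```python
# Hypothetical illustration of how Figure 1's rates could be tabulated:
# percentage of each delivery mode within each completed gestational week.
import pandas as pd

mode_by_week = (
    pd.crosstab(cohort["ga_weeks"], cohort["delivery_mode"], normalize="index") * 100
).round(1)
print(mode_by_week)  # rows: gestational weeks 22-36; columns: VD, VE, CS during labor
```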
Neonatal outcome in relation to gestational age

The proportion of preterm infants diagnosed with an ICH varied more than a hundred-fold in relation to GA, decreasing from 21.5% among preterm infants born at 22–28 weeks of GA to 0.1% among those born after 36 weeks of gestation. The rate of neonatal convulsions among preterm infants decreased from 2.0% at 22–28 weeks to 0.25% among those born after 36 weeks of gestation (Figure 2). The proportion of preterm infants diagnosed with other disturbances of cerebral status (encephalopathy) decreased with GA, while the proportions of infants with brachial plexus injuries or extracranial hemorrhage (ECH) increased slightly with GA (Figure 3).

Figure 2. Proportion (%) of preterm infants diagnosed with intracranial hemorrhage (ICH) or convulsions by gestational age (in completed weeks); the blue, dotted line represents ICH and the red, dashed line neonatal convulsions.

Figure 3. Proportion (%) of preterm infants diagnosed with extracranial hemorrhage (ECH), encephalopathy (ICD code P91: other disturbances of cerebral status of newborn), or brachial plexus injury by gestational age (in completed weeks); the blue, dotted line represents ECH, the red, dashed line encephalopathy, and the green line brachial plexus injury.

Neonatal outcome in relation to mode of delivery

To report outcomes in relation to mode of delivery, the study cohort was divided according to the current guidelines on the use of VE into preterm births occurring at 34–36 weeks of gestation (VE may be used) and those occurring before 34 gestational weeks (VE not recommended). In our cohort, 33,202 (81.4%) of all preterm births occurred at 34–36 gestational weeks, and 7,562 (18.6%) before 34 weeks of GA.
Table 3 presents neonatal outcomes before and after 34 + 0 weeks of gestation in relation to mode of delivery. Overall, seven preterm infants were classified as having an ICH due to birth injury, corresponding to a rate of 0.02%, and 612 infants were diagnosed with non-traumatic ICH, corresponding to a rate of 1.5%. Diagnoses of neonatal convulsions and other disturbances of neonatal cerebral status were rare, especially in infants at less than 34 weeks of GA, and occurred more frequently after VE and CS than after VD. Cephalic hematoma was the most frequent complication after VE of preterm infants (n = 72, or 3.1%), occurring much more often after VE than after CS (0.16%) or VD (0.49%). Subgaleal hemorrhage was less frequent, with a total of only 18 cases; more than two-thirds of these occurred in the VE group.

Table 3. Neonatal outcomes in preterm infants by mode of delivery and gestational age.

Table 4 shows rates (per 1,000) and crude and adjusted odds ratios for the outcomes by mode of delivery, with infants born by VD as the reference group. After adjustments, preterm infants born by VE had an almost doubled OR for ICH, while those born by CS had no increased risk. Preterm infants delivered by VE also had a 4.5 times higher OR for extracranial hemorrhages, while those delivered by CS had no increased risk. A total of 540 (88.2%) of the ICH diagnoses consisted of intraventricular bleedings (ICD codes P52.0-52.3), the type of ICH most frequently seen in preterm infants. Of these, 438 (71.6%) were graded as mild or moderate (grades 1–2). After excluding grades 1–2 intraventricular hemorrhages from the analyses, the OR for ICH remained significantly increased for infants delivered by VE (OR 2.58, 95% CI 1.27-5.24), but not for those delivered by CS during labor (OR 1.15, 95% CI 0.78-1.70), after adjustment for GA. There was no difference in ICH rates between preterm boys and girls delivered by VE.

Table 4. Logistic regression (odds ratios) for intra- and extracranial hemorrhages, convulsions and other cerebral complications, and brachial plexus injury in preterm infants exposed to different modes of delivery. CS = cesarean section, VE = vacuum extraction. Model 1 is adjusted for year of birth, gestational age, parity, maternal age, height, BMI, and infant birthweight; Model 2 is additionally adjusted for indications for operative delivery.

The ORs for convulsions were almost doubled in both the VE and CS groups after adjustment for the variables in Model 1; however, further adjustment for indication for operative delivery decreased the odds and rendered the associations statistically non-significant. Other disturbances of neonatal cerebral status were significantly increased (two to three times higher) both among infants delivered by VE and among those delivered by CS during labor, although the OR was higher in the VE group. A total of 53 infants were diagnosed with brachial plexus injury; of these, 14 were born by VE, corresponding to a rate in the VE group of 0.6% and an OR of 6.21 (95% CI 2.22-17.4) in the fully adjusted model. In contrast, infants delivered by CS had no increased risk for this complication. Among infants with brachial plexus injury, there were 11 cases of shoulder dystocia (ICD code O66.0), of which five occurred in the VE group.
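For orientation, crude odds ratios and Wald-type 95% confidence intervals of the kind reported in Table 4 follow the standard 2 × 2 construction shown below (a = exposed cases, b = exposed non-cases, c = reference-group cases, d = reference-group non-cases); this is a generic sketch, and the adjusted estimates in the paper come from the multivariable logistic models described in the Methods.

```latex
% Crude odds ratio for an exposure (e.g., VE) versus the reference (VD),
% with a Wald 95% confidence interval on the log-odds scale.
\[
\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc},
\qquad
95\%\ \mathrm{CI} = \exp\!\left( \ln \mathrm{OR} \pm 1.96
\sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}} \right).
\]
```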
Conclusion
The rates of serious birth injuries and complications are generally low, but preterm infants delivered by VE have higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries than those delivered by CS during labor or by unassisted vaginal delivery. We therefore support a continued cautious use of VE in preterm deliveries. Furthermore, the possible causal relationship between mode of delivery and ICH needs further investigation.
[ "Background", "Study population", "Outcome variables", "Use of VE in relation to gestational age", "Distribution of risk factors or covariates in relation to mode of delivery", "Neonatal outcome in relation to gestational age", "Neonatal outcome in relation to mode of delivery", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Preterm birth is common\n[1] but still, the optimal mode of delivery of preterm infants is not known. Although neonatal outcomes in preterm infants delivered vaginally or by cesarean section (CS)\n[2-5] have been compared, there is no evidence to provide clear guidance on the method of choice\n[6]. Given the widespread assumption that assisted vaginal delivery could be harmful for fragile infants that are underweight and preterm, very few studies have addressed the use of vacuum extraction (VE) for preterm birth.\nDelivery by VE is a common obstetrical procedure, and in many countries it has replaced the use of forceps. VE is used to terminate a protracted second stage of labor and as an intervention for fetal or maternal distress. VE requires vertex presentation, a fully dilated cervix and ruptured membranes\n[7]. A cesarean section, on the other hand, can be performed at any stage of labor and does not require prerequisites of this kind.\nMost clinical guidelines do not recommend VE before 34 gestational weeks\n[8-10]. These recommendations are not based on results of randomized controlled trials, but rely on the observation that preterm infants are more likely than term infants to develop ICH, and on extrapolations from studies of term infants showing that VE is associated with an increased risk of ICH and other neonatal complications\n[11-17]. Only three studies have previously investigated the use and outcomes of VE in preterm births. The first was undertaken over 40 years ago and showed increased mortality and morbidity among preterm infants delivered by VE as compared with term infants delivered by VE\n[18]. The second study compared neonatal morbidity in preterm infants delivered vaginally with (n = 61) or without VE (n = 122), and found no differences in neonatal morbidity between the two groups\n[19]. The last study compared VE and forceps for preterm delivery (n = 64)\n[20]; the neonatal outcomes were similar in both groups. The available data are clearly untimely and hampered by limitations in power and, therefore, current knowledge on safety of preterm vacuum-assisted birth is unsatisfactory.\nThe aim of this study was to 1) describe the use of VE and compare it to rates of CS during labor in preterm deliveries in Sweden from 1999–2010, 2) characterize the distribution of perinatal risk factors associated with each mode of delivery, and 3) compare rates of neonatal intra- and extracranial hemorrhages, as well as occurrence of brachial plexus injury after preterm delivery by VE or CS during labor, using unassisted vaginal birth as a reference.", "During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). 
CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction.\nA number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams.", "Neonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table \n1.\nNeonatal outcomes studied in 40 764 preterm infants\nNeonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. 
A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG.\nStatistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31.", "Among the 40,764 (54% of all) preterm deliveries included in this study, 2,319 (5.7%) preterm infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.2%) by VD. The rate of VE deliveries increased gradually with gestational age, Figure \n1.\nMode of delivery in relation to gestational age. Figure\n1 shows rates (%) of different modes of delivery in relation to gestational age (in completed weeks). The blue, dotted line represents unassisted vaginal deliveries, the red, dashed line represents cesarean sections performed after onset of labor, and the green line represents the vacuum extraction deliveries.", "Table \n2 shows maternal and perinatal characteristics of the study population in relation to mode of delivery. The VE rate decreased with maternal height and 80% of the women who delivered by VE were primiparae, compared with 48% of those who underwent CS during labor (not in table). More than 45% of the women who delivered by VE had received epidural analgesia during labor compared with 22% of women with VD, and 12% with CS during labor (not in table). Given the association between GA and VE, infants delivered with VE had higher birthweights.\nMaternal, pregnancy, delivery, and infant characteristics by mode of delivery\nPopulation based cohort consisting of 40 764 preterm deliveries.\nCS = caesarean section, BMI = Body Mass Index (weight in kilograms/height in meters2), EA = Epidural Analgesia.\nThe most common indication for VE was fetal distress (42%), followed by prolonged labor (25%). Having a non-occipitoanterior position (25%) or fetal distress (17%) were the most common indications for CS during labor, while only 3% in this group had a diagnosis of prolonged labor.", "The proportion of preterm infants diagnosed with an ICH varied more than hundred-fold in relation to GA. It decreased from 21.5% among preterm infants born at 22–28 weeks of GA to 0.1% among those born after 36 weeks of gestation. 
The rates of neonatal convulsions among preterm infants decreased from 2.0% at 22–28 weeks to 0.25% among those born after 36 weeks of gestation, Figure \n2. The proportion of preterm infants diagnosed with other disturbances of cerebral status (encephalopathy) decreased with GA, while proportion of infants with brachial plexus injuries or ECH increased slightly with GA, Figure \n3.\nProportion (%) preterm infants diagnosed with intracranial hemorrhage (ICH) or convulsions by gestational age. Figure\n2 shows the proportions (%) of preterm infants diagnosed with ICH and neonatal convulsions in relation to gestational age (in completed weeks). The blue, dotted line represents intracranial hemorrhage (ICH) and the red, dashed line represents neonatal convulsions.\nProportion (%) preterm infants diagnosed with extracranial hemorrhage (ECH), encephalopathy and brachial plexus injury by gestational age. Figure\n3 shows the proportions (%) of ECH, encephalopathy (ICD-code P91: other disturbances of cerebral status of newborn), and brachial plexus injury in relation to gestational age (in completed weeks). The blue, dotted line represents extracranial hemorrhage (ECH), the red, dashed line represents encephalopathy and the green line represents brachial plexus injury.", "To report outcome in relation to mode of delivery, the study cohort was divided according to the current guidelines on the use of VE as either preterm births occurring between 34–36 weeks of gestation (VE may be used), or those occurring before 34 gestational weeks (VE not recommended). In our cohort, 33,202 (81.4%) of all preterm births occurred at 34–36 gestational weeks, and 7,562 (18.6%) before 34 weeks of GA.\nIn Table \n3, neonatal outcomes before and after 34 + 0 weeks of gestation are presented in relation to mode of delivery. Overall, seven preterm infants were classified as having an ICH due to birth injury, corresponding to a rate of 0.02%; and 612 infants were diagnosed with non-traumatic ICH, corresponding to a rate of 1.5%. Diagnoses of neonatal convulsions and other disturbances of neonatal cerebral status were rare, especially in infants at less than 34 weeks of GA, and occurred more frequently after VE and CS than after VD. Cephalic hematoma was the most frequent complication after VE of preterm infants (n = 72 or 3.1%), occurring much more often after VE than after CS (0.16%) and VD (0.49%). Subgaleal hemorrhage was less frequent, with a total of only18 cases. More than two thirds of those cases occurred in the VE group.\nNeonatal outcomes in preterm infants by mode of delivery and gestational age\nTable \n4 shows rates (per 1000), crude and adjusted odds ratios for the outcomes by mode of delivery, and uses infants born by VD as the reference group. After adjustments, preterm infants born by VE had almost doubled OR for ICH, while those born by CS had no increased risk. Preterm infants delivered by VE also had 4.5 times higher OR for extracranial hemorrhages, while those delivered by CS had no increased risk. A total of 540 (88.2%) of the ICH diagnoses consisted of intraventricular bleedings (ICD-codes 52.0-52.3), the type of ICH most frequently seen in preterm infants. Of these, 438 (71.6%) were graded as mild or moderate (grades 1–2). After excluding grades 1–2 of intraventricular hemorrhages from the analyses, the OR for ICH was still significantly increased for infants delivered by VE (OR 2.58, CI 1.27-5.24), but not for those delivered by CS during labor (OR 1.15, CI 0.78-1.70) after adjustment for GA. 
Abbreviations

BMI: Body mass index; CI: Confidence interval; CS: Cesarean section; CT: Computerized tomography; EA: Epidural analgesia; ECH: Extracranial hemorrhage; EEG: Electroencephalography; GA: Gestational age; ICH: Intracranial hemorrhage; LGA: Large for gestational age; OR: Odds ratio; SGA: Small for gestational age; VD: Vaginal delivery; VE: Vacuum extraction.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

CE had the idea for the study, designed it, and carried out the statistical analysis. KÅ participated in the design and wrote the first draft of the manuscript together with CE. MN contributed to the interpretation of the results and the writing of the manuscript. All authors approved the final version of the submitted article.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2393/14/42/prepub
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Outcome variables", "Results", "Use of VE in relation to gestational age", "Distribution of risk factors or covariates in relation to mode of delivery", "Neonatal outcome in relation to gestational age", "Neonatal outcome in relation to mode of delivery", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Preterm birth is common\n[1] but still, the optimal mode of delivery of preterm infants is not known. Although neonatal outcomes in preterm infants delivered vaginally or by cesarean section (CS)\n[2-5] have been compared, there is no evidence to provide clear guidance on the method of choice\n[6]. Given the widespread assumption that assisted vaginal delivery could be harmful for fragile infants that are underweight and preterm, very few studies have addressed the use of vacuum extraction (VE) for preterm birth.\nDelivery by VE is a common obstetrical procedure, and in many countries it has replaced the use of forceps. VE is used to terminate a protracted second stage of labor and as an intervention for fetal or maternal distress. VE requires vertex presentation, a fully dilated cervix and ruptured membranes\n[7]. A cesarean section, on the other hand, can be performed at any stage of labor and does not require prerequisites of this kind.\nMost clinical guidelines do not recommend VE before 34 gestational weeks\n[8-10]. These recommendations are not based on results of randomized controlled trials, but rely on the observation that preterm infants are more likely than term infants to develop ICH, and on extrapolations from studies of term infants showing that VE is associated with an increased risk of ICH and other neonatal complications\n[11-17]. Only three studies have previously investigated the use and outcomes of VE in preterm births. The first was undertaken over 40 years ago and showed increased mortality and morbidity among preterm infants delivered by VE as compared with term infants delivered by VE\n[18]. The second study compared neonatal morbidity in preterm infants delivered vaginally with (n = 61) or without VE (n = 122), and found no differences in neonatal morbidity between the two groups\n[19]. The last study compared VE and forceps for preterm delivery (n = 64)\n[20]; the neonatal outcomes were similar in both groups. The available data are clearly untimely and hampered by limitations in power and, therefore, current knowledge on safety of preterm vacuum-assisted birth is unsatisfactory.\nThe aim of this study was to 1) describe the use of VE and compare it to rates of CS during labor in preterm deliveries in Sweden from 1999–2010, 2) characterize the distribution of perinatal risk factors associated with each mode of delivery, and 3) compare rates of neonatal intra- and extracranial hemorrhages, as well as occurrence of brachial plexus injury after preterm delivery by VE or CS during labor, using unassisted vaginal birth as a reference.", "This study was based on data from national data bases held by the Swedish National Board of Health and Welfare. The national registration number, assigned to each Swedish resident at birth, was used for individual record linkage. We used two registers: The Swedish Medical Birth Register (SMBR) that covers 99% of all births in Sweden, and The Swedish National Inpatient Register (IPR) that covers all public inpatient care. The SMBR includes prospectively collected information on maternal characteristics, reproductive history, and complications during pregnancy, delivery, and the neonatal period. The IPR includes data on each hospital admission and discharge.\n Study population During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). 
We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction.\nA number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams.\nDuring the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). 
CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction.\nA number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams.\n Outcome variables Neonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table \n1.\nNeonatal outcomes studied in 40 764 preterm infants\nNeonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. 
A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG.\nStatistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31.\nNeonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table \n1.\nNeonatal outcomes studied in 40 764 preterm infants\nNeonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG.\nStatistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. 
In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31.", "During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction.\nA number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). 
GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams.", "Neonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table \n1.\nNeonatal outcomes studied in 40 764 preterm infants\nNeonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG.\nStatistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31.", " Use of VE in relation to gestational age Among the 40,764 (54% of all) preterm deliveries included in this study, 2,319 (5.7%) preterm infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.2%) by VD. The rate of VE deliveries increased gradually with gestational age, Figure \n1.\nMode of delivery in relation to gestational age. Figure\n1 shows rates (%) of different modes of delivery in relation to gestational age (in completed weeks). 
" Use of VE in relation to gestational age Among the 40,764 (54% of all) preterm deliveries included in this study, 2,319 (5.7%) preterm infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.2%) by VD. The rate of VE deliveries increased gradually with gestational age, Figure 1.\nMode of delivery in relation to gestational age. Figure 1 shows the rates (%) of the different modes of delivery in relation to gestational age (in completed weeks). The blue, dotted line represents unassisted vaginal deliveries, the red, dashed line represents cesarean sections performed after the onset of labor, and the green line represents vacuum extraction deliveries.\n Distribution of risk factors or covariates in relation to mode of delivery Table 2 shows the maternal and perinatal characteristics of the study population in relation to mode of delivery. The VE rate decreased with increasing maternal height, and 80% of the women who delivered by VE were primiparae, compared with 48% of those who underwent CS during labor (not in table). More than 45% of the women who delivered by VE had received epidural analgesia during labor, compared with 22% of the women with VD and 12% of those with CS during labor (not in table). Given the association between GA and VE, infants delivered by VE had higher birthweights.\nMaternal, pregnancy, delivery, and infant characteristics by mode of delivery\nPopulation-based cohort consisting of 40 764 preterm deliveries.\nCS = caesarean section, BMI = Body Mass Index (weight in kilograms/height in meters squared), EA = Epidural Analgesia.\nThe most common indication for VE was fetal distress (42%), followed by prolonged labor (25%). A non-occipitoanterior position (25%) or fetal distress (17%) were the most common indications for CS during labor, while only 3% in this group had a diagnosis of prolonged labor.\n Neonatal outcome in relation to gestational age The proportion of preterm infants diagnosed with an ICH varied more than a hundred-fold in relation to GA, decreasing from 21.5% among preterm infants born at 22–28 weeks of GA to 0.1% among those born after 36 weeks of gestation. The rates of neonatal convulsions among preterm infants decreased from 2.0% at 22–28 weeks to 0.25% among those born after 36 weeks of gestation, Figure 2. 
The proportion of preterm infants diagnosed with other disturbances of cerebral status (encephalopathy) decreased with GA, while the proportions of infants with brachial plexus injury or ECH increased slightly with GA, Figure 3.\nProportion (%) of preterm infants diagnosed with intracranial hemorrhage (ICH) or convulsions by gestational age. Figure 2 shows the proportions (%) of preterm infants diagnosed with ICH and neonatal convulsions in relation to gestational age (in completed weeks). The blue, dotted line represents intracranial hemorrhage (ICH) and the red, dashed line represents neonatal convulsions.\nProportion (%) of preterm infants diagnosed with extracranial hemorrhage (ECH), encephalopathy, and brachial plexus injury by gestational age. Figure 3 shows the proportions (%) of ECH, encephalopathy (ICD code P91: other disturbances of cerebral status of newborn), and brachial plexus injury in relation to gestational age (in completed weeks). The blue, dotted line represents extracranial hemorrhage (ECH), the red, dashed line represents encephalopathy, and the green line represents brachial plexus injury.\n Neonatal outcome in relation to mode of delivery To report outcomes in relation to mode of delivery, the study cohort was divided, in accordance with the current guidelines on the use of VE, into preterm births occurring at 34–36 weeks of gestation (VE may be used) and those occurring before 34 gestational weeks (VE not recommended). In our cohort, 33,202 (81.4%) of all preterm births occurred at 34–36 gestational weeks, and 7,562 (18.6%) before 34 weeks of GA.\nIn Table 3, neonatal outcomes before and after 34 + 0 weeks of gestation are presented in relation to mode of delivery. Overall, seven preterm infants were classified as having an ICH due to birth injury, corresponding to a rate of 0.02%, and 612 infants were diagnosed with non-traumatic ICH, corresponding to a rate of 1.5%. 
Diagnoses of neonatal convulsions and other disturbances of neonatal cerebral status were rare, especially in infants born at less than 34 weeks of GA, and occurred more frequently after VE and CS than after VD. Cephalhematoma was the most frequent complication after VE in preterm infants (n = 72, or 3.1%), occurring much more often after VE than after CS (0.16%) or VD (0.49%). Subgaleal hemorrhage was less frequent, with a total of only 18 cases; more than two-thirds of these occurred in the VE group.\nNeonatal outcomes in preterm infants by mode of delivery and gestational age\nTable 4 shows rates (per 1000) and crude and adjusted odds ratios for the outcomes by mode of delivery, using infants born by VD as the reference group. After adjustment, preterm infants born by VE had an almost doubled OR for ICH, while those born by CS had no increased risk. Preterm infants delivered by VE also had a 4.5 times higher OR for extracranial hemorrhage, while those delivered by CS had no increased risk. A total of 540 (88.2%) of the ICH diagnoses consisted of intraventricular hemorrhages (ICD codes P52.0-52.3), the type of ICH most frequently seen in preterm infants. Of these, 438 (71.6%) were graded as mild or moderate (grades 1–2). After excluding grades 1–2 of intraventricular hemorrhage from the analyses, the OR for ICH remained significantly increased for infants delivered by VE (OR 2.58, CI 1.27-5.24), but not for those delivered by CS during labor (OR 1.15, CI 0.78-1.70), after adjustment for GA. There was no difference in ICH rates between preterm boys and girls delivered by VE.\nLogistic regression (odds ratios) for intra- and extracranial hemorrhages, convulsions and other cerebral complications, and brachial plexus injury in preterm infants exposed to different modes of delivery\nCS = cesarean section, VE = vacuum extraction.\nModel 1 is adjusted for year of birth, gestational age, parity, maternal age, height, BMI, and infant birthweight.\nModel 2 is also adjusted for indications for operative delivery.\nThe ORs for convulsions were almost doubled in both the VE and CS groups after adjustment for the variables in Model 1. However, further adjustment for the indication for operative delivery decreased the odds and made the associations statistically non-significant. The ORs for other disturbances of neonatal cerebral status were significantly increased (two to three times higher) both among infants delivered by VE and among those delivered by CS during labor, although the OR was higher in the VE group.\nA total of 53 infants were diagnosed with brachial plexus injury. Of these, 14 were born by VE, corresponding to a rate of 0.6% in the VE group and an OR of 6.21 (95% CI: 2.22-17.4) in the fully adjusted model. In contrast, infants delivered by CS had no increased risk for this complication. Among infants with brachial plexus injury, there were 11 cases of shoulder dystocia (ICD code O66.0), of which five occurred in the VE group.", 
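For readers who want to see how a crude odds ratio and its 95% confidence interval relate to the counts behind tables like those above, here is a small worked Python example using the standard Wald approximation on the log odds ratio. The counts in the usage line are invented for illustration and are not taken from the study tables.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts only (not the study's data):
print(odds_ratio_ci(a=30, b=2289, c=250, d=32690))
```

The adjusted ORs in Table 4 come from the logistic regression models rather than from a single 2x2 table, but the crude estimates follow this logic.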
"In this large cohort study of singleton, non-breech preterm births after the onset of labor, we identified three clinically important findings related to mode of delivery. First, VE was used in 5.7% of preterm births and, despite recommendations against its use, 3.3% of preterm infants born before 34 gestational weeks were delivered by VE. 
Second, VE in preterm birth was used more frequently among shorter mothers, primiparae, and women who received EA for pain relief during labor. Finally, after adjusting for potential confounders and covariates, preterm infants delivered by VE had an almost doubled OR for ICH, a four times higher OR for extracranial hemorrhage, and a six-fold higher OR for brachial plexus palsy compared with those delivered by VD. Excluding intraventricular hemorrhage grades 1–2 (the most common form of ICH in preterm infants) from the analysis increased the OR for ICH after VE, indicating that severe hemorrhages were more common among preterm infants delivered by VE.\nAlthough VE was related to significantly increased rates of ICH, it is not clear whether the extraction as such causes the injury, or whether there is an underlying common pathway for both VE-assisted delivery and ICH, i.e., that the relationship is confounded by indication. Since the ORs for ICH were significantly higher in the VE group compared with both the CS and unassisted VD groups, whereas the ORs for other disturbances of cerebral status were slightly higher in both the VE and CS groups compared with VD, different mechanisms may be involved in the development of these two complications. The forces exerted by vacuum extraction could lead to significant vertical stress, which might be avoided with CS. In a case report of MRI findings after birth injuries among infants delivered by VE [21], it was suggested that vertical traction on the skull and brain may produce tentorial lacerations and rupture of intracranial veins. Another explanation for our findings of different outcomes after VE and CS could be that infants delivered by VE may have been exposed to contractions for a longer time than those delivered by CS. A protective effect of CS is indicated by the lower ORs for ICH; however, exposure to contractions is unlikely to be the sole explanation for the increased risks of ICH after VE, as the VD group (presumably the group exposed to the largest forces of labor) exhibited significantly lower odds of hemorrhagic complications than infants delivered by VE.\nDuring the study period, the overall rate of ICH increased from 6% at the beginning to 12% at the end of the period, most likely reflecting the increased access to and use of ultrasonography among Swedish neonatologists in recent years. Improved ultrasound technology and image resolution may also have contributed to this development. Finally, we cannot exclude a contribution from misclassification: a large but normal choroid plexus could have been classified as a small subependymal hemorrhage by less experienced investigators. The finding that the diagnosis of small subependymal hemorrhage (without intraventricular extension; P52.0) increased more than other types of ICH during the study period (from 0.5% to 1.2%) supports this interpretation.\nThe overrepresentation of subgaleal hemorrhage and cephalhematoma after VE is less surprising, since earlier studies have established an association between these diagnoses and the use of VE. The risk of subgaleal hemorrhage seems to be unrelated to GA, as this study demonstrates rates similar to those in previous studies of infants born at term [12,13].\nThe present study showed that preterm infants delivered by VE had a 6- to 7-fold increased risk of brachial plexus injury compared with unassisted VD. This injury is usually associated with large, macrosomic infants and shoulder dystocia [22], not with preterm birth. 
Our results emphasize the importance of gentle maneuvers and of avoiding excessive pressure or traction on the brachial plexus when delivering preterm infants, especially by VE.\nThe major strengths of this study were the large study population, covering all preterm deliveries in Sweden over a twelve-year period, and the high quality of the registers, which made it possible to analyze rare diagnoses and unusual events such as ICH in preterm infants delivered by VE. We were able to include data on risk factors, potential confounders, and outcomes collected independently from one another and without involving the study subjects, thus minimizing various types of bias (e.g., selection and recall bias). Moreover, antenatal and obstetric care is free of charge in Sweden, management routines as well as GA determinations are standardized, and 99% of births take place in public hospitals. This minimizes the risk of confounding by unmeasured socio-demographic factors. Another advantage was the inclusion of the main indications for VE and CS, enabling us to address the question of confounding by indication.\nA major limitation of this study is that the registers do not contain detailed information about many important factors during the VE deliveries. For instance, the registers do not provide specific information about the type of VE instrument used; the level, position, and attitude of the fetal head in the pelvis when applying VE; the location of placement of the vacuum cup; traction work; the skill of the obstetrician; pressure exposure (duration and force); or cup detachments. The registers also lack information about potential confounders such as the use of oxytocin and the application of fundal pressure.\nThere is a general recommendation not to use VE before a GA of 34 weeks. According to the Royal College of Obstetricians and Gynecologists, there is insufficient evidence to establish the safety of VE deliveries in gestations between 34 weeks + 0 days and 36 weeks + 0 days [9]. Our results show that the use of VE is related to rare but serious complications also between gestational weeks 34 and 36.", "The rates of serious birth injuries and complications are generally low, but preterm infants delivered by VE have higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries than those delivered by CS during labor or by unassisted vaginal delivery. We therefore support a continued conservative and cautious use of VE in preterm deliveries. Furthermore, the possible causal relationship between mode of delivery and ICH needs to be investigated further.", "BMI: Body mass index; CI: Confidence interval; CS: Cesarean section; CT: Computerized tomography; EA: Epidural analgesia; ECH: Extracranial hemorrhage; EEG: Electroencephalography; GA: Gestational age; ICH: Intracranial hemorrhage; LGA: Large for gestational age; OR: Odds ratio; SGA: Small for gestational age; VD: Vaginal delivery; VE: Vacuum extraction.", "The authors declare that they have no competing interests.", "CE had the idea for the study, designed it, and carried out the statistical analysis. KÅ participated in the design and wrote the first draft of the manuscript together with CE. MN contributed to the interpretation of the results and the writing of the manuscript. All authors approved the final version of the submitted article.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2393/14/42/prepub\n" ]
[ null, "methods", null, null, "results", null, null, null, null, "discussion", "conclusions", null, null, null, null ]
[ "Mode of delivery", "Preterm delivery", "Intracranial hemorrhage", "Extracranial hemorrhage", "Brachial plexus injury" ]
Background: Preterm birth is common [1] but still, the optimal mode of delivery of preterm infants is not known. Although neonatal outcomes in preterm infants delivered vaginally or by cesarean section (CS) [2-5] have been compared, there is no evidence to provide clear guidance on the method of choice [6]. Given the widespread assumption that assisted vaginal delivery could be harmful for fragile infants that are underweight and preterm, very few studies have addressed the use of vacuum extraction (VE) for preterm birth. Delivery by VE is a common obstetrical procedure, and in many countries it has replaced the use of forceps. VE is used to terminate a protracted second stage of labor and as an intervention for fetal or maternal distress. VE requires vertex presentation, a fully dilated cervix and ruptured membranes [7]. A cesarean section, on the other hand, can be performed at any stage of labor and does not require prerequisites of this kind. Most clinical guidelines do not recommend VE before 34 gestational weeks [8-10]. These recommendations are not based on results of randomized controlled trials, but rely on the observation that preterm infants are more likely than term infants to develop ICH, and on extrapolations from studies of term infants showing that VE is associated with an increased risk of ICH and other neonatal complications [11-17]. Only three studies have previously investigated the use and outcomes of VE in preterm births. The first was undertaken over 40 years ago and showed increased mortality and morbidity among preterm infants delivered by VE as compared with term infants delivered by VE [18]. The second study compared neonatal morbidity in preterm infants delivered vaginally with (n = 61) or without VE (n = 122), and found no differences in neonatal morbidity between the two groups [19]. The last study compared VE and forceps for preterm delivery (n = 64) [20]; the neonatal outcomes were similar in both groups. The available data are clearly untimely and hampered by limitations in power and, therefore, current knowledge on safety of preterm vacuum-assisted birth is unsatisfactory. The aim of this study was to 1) describe the use of VE and compare it to rates of CS during labor in preterm deliveries in Sweden from 1999–2010, 2) characterize the distribution of perinatal risk factors associated with each mode of delivery, and 3) compare rates of neonatal intra- and extracranial hemorrhages, as well as occurrence of brachial plexus injury after preterm delivery by VE or CS during labor, using unassisted vaginal birth as a reference. Methods: This study was based on data from national data bases held by the Swedish National Board of Health and Welfare. The national registration number, assigned to each Swedish resident at birth, was used for individual record linkage. We used two registers: The Swedish Medical Birth Register (SMBR) that covers 99% of all births in Sweden, and The Swedish National Inpatient Register (IPR) that covers all public inpatient care. The SMBR includes prospectively collected information on maternal characteristics, reproductive history, and complications during pregnancy, delivery, and the neonatal period. The IPR includes data on each hospital admission and discharge. Study population During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). 
We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction. A number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams. During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction. 
A number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams. Outcome variables Neonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table  1. Neonatal outcomes studied in 40 764 preterm infants Neonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG. Statistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). 
Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31. Neonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table  1. Neonatal outcomes studied in 40 764 preterm infants Neonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG. Statistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. 
Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31. Study population: During the period of 1999–2010, there were 75,296 (6.2%) preterm births in Sweden. We excluded deliveries by CS before the onset of labor (n = 17,306), forceps (n = 257), or performed with both VE and CS (n = 125). We also excluded stillbirths (fetal deaths occurring before labor or intra partum) (n = 1,839), multiple births (n = 11,088), and births in breech presentation (n = 3,917). Thus, the final study population was restricted to all live-born, preterm singleton infants with a non-breech presentation at birth, delivered after a spontaneous or induced onset of labor followed by CS, vacuum extraction (VE), or by unassisted vaginal delivery (VD) before gestational week 37 + 0 days (N = 40,764). CS during labor was defined as abdominal delivery after the onset of labor, either spontaneously, by rupture of membranes, or by induction. A number of independent variables were collected; the maternal anthropometrics included: age, height, and body mass index (BMI). BMI was calculated from measured height and weight obtained at the first antenatal care visit, which occurred before the 15th week of gestation in more than 95% of the pregnancies. BMI was categorized into underweight (below 18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2), obese (>29.9 kg/m2), or missing. Parity was categorized as primi- or multiparity. Information on complications during pregnancy and delivery were coded according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards). The following pregnancy complications were included: diabetes – both gestational and types 1 and 2 (O24.0-9) preeclampsia- both hypertension, preeclampsia, and eclampsia (O10.0-O15.9). Labor-related risk factors or covariates includedepidural analgesia (EA; yes/no), and induction of labor (yes/no). Indications for operative delivery were classified into four major groups: prolonged labor (O62.0-2, O63.0-9), signs of fetal distress (O68.0-O68.1-9), preeclampsia, and a non-occipitoanterior presentation of the fetus (all presentations except occipitoanterior and breech, registered at birth). Gestational age (GA) for preterm infants was divided into three periods according to the World Health Organization: extremely preterm (before 28 weeks), very preterm (28–31 weeks) and moderately preterm (32–36 weeks). Furthermore, we also divided the preterm gestational period according to guidelines on instrumental delivery into either: less than 34 weeks (VE not recommended), and 34–36 weeks (VE may be used). GA was recorded in completed weeks, and was based on routine ultrasound dating performed at 17 to 18 postmenstrual weeks in 97-98% of all pregnant women. Infant birthweight was categorized as less than 1,500 grams, 1,500-2,000 grams, 2,001-2,500 grams, 2,501-3,000 grams, and 3,001-4,000 grams. Outcome variables: Neonatal diagnoses were classified according to the International Classification of Diseases (ICD) Tenth Revision (1997 and onwards), and identified/collected in the SMBR or in the IPR. 
The following neonatal outcomes (ICD codes) were assessed: Intracranial laceration and hemorrhage due to birth injury (P10), intracranial non-traumatic hemorrhage of fetus and newborn (P52), convulsions of newborn (P90), other disturbances of cerebral status of newborn (P 91), subgaleal hematoma (P12.2), cephalhematoma (P12.0), and brachial plexus injury (P14.0-3). The definitions of outcomes are described in detail in Table  1. Neonatal outcomes studied in 40 764 preterm infants Neonatal diagnoses of intracranial hemorrhages in preterm infants were mainly based on imaging of the brain using ultrasonography; however, some assessments of the brain at term-equivalent age were alternatively performed with CT and/or MRI. Imaging of the brain was performed on clinical indications only in cases born moderately or late preterm, whereas all very preterm infants (born before 32 weeks of gestation) were screened for intracranial lesions, even in asymptomatic infants. A diagnosis of convulsions included infants with clinical signs of convulsions and/or convulsions verified by EEG. Statistical analysis was performed using proportions and odds ratios (OR) with a 95% confidence interval (CI) for neonatal complications in relation to mode of delivery, using unassisted VD as the reference group (SPSS 20.0 for Windows software package). Three models were used to assess the relationship between the different modes of delivery and the risk for neonatal complications: one crude, and two adjusted (Models 1 and 2). The included covariates have been shown previously to be related to instrumental deliveries, and were related to the outcomes in cross tabulations. In Model 1, we adjusted for the following confounders or covariates: maternal age, height, BMI, and parity, as well as infant year of birth, birthweight and GA. In Model 2, we added the indication for operative delivery and preeclampsia. The year of birth was entered as a continuous variable in accordance with a linear secular trend, and all other variables were entered as categories. Furthermore, a separate logistic regression analysis was performed to investigate severe ICH in relation to mode of delivery. Here, intraventricular hemorrhages grades 1–2 were excluded and the analysis was adjusted for GA only. We also conducted separated analyses on potential relationships between sex and ICH in relation to mode of delivery. Missing data were entered as a separate category in the analyses. The study was approved by the Regional Ethical Review Board in Stockholm, Dnr 2008/1322-31. Results: Use of VE in relation to gestational age Among the 40,764 (54% of all) preterm deliveries included in this study, 2,319 (5.7%) preterm infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.2%) by VD. The rate of VE deliveries increased gradually with gestational age, Figure  1. Mode of delivery in relation to gestational age. Figure 1 shows rates (%) of different modes of delivery in relation to gestational age (in completed weeks). The blue, dotted line represents unassisted vaginal deliveries, the red, dashed line represents cesarean sections performed after onset of labor, and the green line represents the vacuum extraction deliveries. Among the 40,764 (54% of all) preterm deliveries included in this study, 2,319 (5.7%) preterm infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.2%) by VD. The rate of VE deliveries increased gradually with gestational age, Figure  1. Mode of delivery in relation to gestational age. 
Figure 1 shows rates (%) of different modes of delivery in relation to gestational age (in completed weeks). The blue, dotted line represents unassisted vaginal deliveries, the red, dashed line represents cesarean sections performed after onset of labor, and the green line represents the vacuum extraction deliveries. Distribution of risk factors or covariates in relation to mode of delivery Table  2 shows maternal and perinatal characteristics of the study population in relation to mode of delivery. The VE rate decreased with maternal height and 80% of the women who delivered by VE were primiparae, compared with 48% of those who underwent CS during labor (not in table). More than 45% of the women who delivered by VE had received epidural analgesia during labor compared with 22% of women with VD, and 12% with CS during labor (not in table). Given the association between GA and VE, infants delivered with VE had higher birthweights. Maternal, pregnancy, delivery, and infant characteristics by mode of delivery Population based cohort consisting of 40 764 preterm deliveries. CS = caesarean section, BMI = Body Mass Index (weight in kilograms/height in meters2), EA = Epidural Analgesia. The most common indication for VE was fetal distress (42%), followed by prolonged labor (25%). Having a non-occipitoanterior position (25%) or fetal distress (17%) were the most common indications for CS during labor, while only 3% in this group had a diagnosis of prolonged labor. Table  2 shows maternal and perinatal characteristics of the study population in relation to mode of delivery. The VE rate decreased with maternal height and 80% of the women who delivered by VE were primiparae, compared with 48% of those who underwent CS during labor (not in table). More than 45% of the women who delivered by VE had received epidural analgesia during labor compared with 22% of women with VD, and 12% with CS during labor (not in table). Given the association between GA and VE, infants delivered with VE had higher birthweights. Maternal, pregnancy, delivery, and infant characteristics by mode of delivery Population based cohort consisting of 40 764 preterm deliveries. CS = caesarean section, BMI = Body Mass Index (weight in kilograms/height in meters2), EA = Epidural Analgesia. The most common indication for VE was fetal distress (42%), followed by prolonged labor (25%). Having a non-occipitoanterior position (25%) or fetal distress (17%) were the most common indications for CS during labor, while only 3% in this group had a diagnosis of prolonged labor. Neonatal outcome in relation to gestational age The proportion of preterm infants diagnosed with an ICH varied more than hundred-fold in relation to GA. It decreased from 21.5% among preterm infants born at 22–28 weeks of GA to 0.1% among those born after 36 weeks of gestation. The rates of neonatal convulsions among preterm infants decreased from 2.0% at 22–28 weeks to 0.25% among those born after 36 weeks of gestation, Figure  2. The proportion of preterm infants diagnosed with other disturbances of cerebral status (encephalopathy) decreased with GA, while proportion of infants with brachial plexus injuries or ECH increased slightly with GA, Figure  3. Proportion (%) preterm infants diagnosed with intracranial hemorrhage (ICH) or convulsions by gestational age. Figure 2 shows the proportions (%) of preterm infants diagnosed with ICH and neonatal convulsions in relation to gestational age (in completed weeks). 
The blue, dotted line represents intracranial hemorrhage (ICH) and the red, dashed line represents neonatal convulsions. Proportion (%) preterm infants diagnosed with extracranial hemorrhage (ECH), encephalopathy and brachial plexus injury by gestational age. Figure 3 shows the proportions (%) of ECH, encephalopathy (ICD-code P91: other disturbances of cerebral status of newborn), and brachial plexus injury in relation to gestational age (in completed weeks). The blue, dotted line represents extracranial hemorrhage (ECH), the red, dashed line represents encephalopathy and the green line represents brachial plexus injury. The proportion of preterm infants diagnosed with an ICH varied more than hundred-fold in relation to GA. It decreased from 21.5% among preterm infants born at 22–28 weeks of GA to 0.1% among those born after 36 weeks of gestation. The rates of neonatal convulsions among preterm infants decreased from 2.0% at 22–28 weeks to 0.25% among those born after 36 weeks of gestation, Figure  2. The proportion of preterm infants diagnosed with other disturbances of cerebral status (encephalopathy) decreased with GA, while proportion of infants with brachial plexus injuries or ECH increased slightly with GA, Figure  3. Proportion (%) preterm infants diagnosed with intracranial hemorrhage (ICH) or convulsions by gestational age. Figure 2 shows the proportions (%) of preterm infants diagnosed with ICH and neonatal convulsions in relation to gestational age (in completed weeks). The blue, dotted line represents intracranial hemorrhage (ICH) and the red, dashed line represents neonatal convulsions. Proportion (%) preterm infants diagnosed with extracranial hemorrhage (ECH), encephalopathy and brachial plexus injury by gestational age. Figure 3 shows the proportions (%) of ECH, encephalopathy (ICD-code P91: other disturbances of cerebral status of newborn), and brachial plexus injury in relation to gestational age (in completed weeks). The blue, dotted line represents extracranial hemorrhage (ECH), the red, dashed line represents encephalopathy and the green line represents brachial plexus injury. Neonatal outcome in relation to mode of delivery To report outcome in relation to mode of delivery, the study cohort was divided according to the current guidelines on the use of VE as either preterm births occurring between 34–36 weeks of gestation (VE may be used), or those occurring before 34 gestational weeks (VE not recommended). In our cohort, 33,202 (81.4%) of all preterm births occurred at 34–36 gestational weeks, and 7,562 (18.6%) before 34 weeks of GA. In Table  3, neonatal outcomes before and after 34 + 0 weeks of gestation are presented in relation to mode of delivery. Overall, seven preterm infants were classified as having an ICH due to birth injury, corresponding to a rate of 0.02%; and 612 infants were diagnosed with non-traumatic ICH, corresponding to a rate of 1.5%. Diagnoses of neonatal convulsions and other disturbances of neonatal cerebral status were rare, especially in infants at less than 34 weeks of GA, and occurred more frequently after VE and CS than after VD. Cephalic hematoma was the most frequent complication after VE of preterm infants (n = 72 or 3.1%), occurring much more often after VE than after CS (0.16%) and VD (0.49%). Subgaleal hemorrhage was less frequent, with a total of only18 cases. More than two thirds of those cases occurred in the VE group. 
Neonatal outcomes in preterm infants by mode of delivery and gestational age Table  4 shows rates (per 1000), crude and adjusted odds ratios for the outcomes by mode of delivery, and uses infants born by VD as the reference group. After adjustments, preterm infants born by VE had almost doubled OR for ICH, while those born by CS had no increased risk. Preterm infants delivered by VE also had 4.5 times higher OR for extracranial hemorrhages, while those delivered by CS had no increased risk. A total of 540 (88.2%) of the ICH diagnoses consisted of intraventricular bleedings (ICD-codes 52.0-52.3), the type of ICH most frequently seen in preterm infants. Of these, 438 (71.6%) were graded as mild or moderate (grades 1–2). After excluding grades 1–2 of intraventricular hemorrhages from the analyses, the OR for ICH was still significantly increased for infants delivered by VE (OR 2.58, CI 1.27-5.24), but not for those delivered by CS during labor (OR 1.15, CI 0.78-1.70) after adjustment for GA. There was no difference in ICH rates between preterm boys and girls delivered by VE. Logistic regression (odds ratios) for intra– and extracranial hemorrhages, convulsions and other cerebral complications, and brachial plexus injury in preterm infants exposed to different modes of delivery CS = cesarean section, VE = vacuum extraction. Model 1 is adjusted for year of birth, gestational age, parity, maternal age, height, BMI, and infant birthweight. Model 2 is also adjusted for indications for operative delivery. The ORs for convulsions were almost doubled in both the VE and CS groups after adjustments for the variables in Model 1. However, further adjustment for indication for operative delivery decreased the odds and made the associations statistically insignificant. Other disturbances of the neonatal cerebral status were significantly increased (two to three times higher) both among infants delivered by VE, and by CS during labor, although the OR was higher in the VE group. A total of 53 infants were diagnosed with brachial plexus injury. Of these, 14 were born by VE, corresponding to a rate in the VE group of 0.6% and an OR of 6.21 (95% CI: 2.22-17.4) in the fully-adjusted model. In contrast, infants delivered by CS had no increased risk for this complication. Among infants with brachial plexus injury, there were 11 cases of shoulder dystocia (ICD-code O66.0), of which five occurred in the VE group. To report outcome in relation to mode of delivery, the study cohort was divided according to the current guidelines on the use of VE as either preterm births occurring between 34–36 weeks of gestation (VE may be used), or those occurring before 34 gestational weeks (VE not recommended). In our cohort, 33,202 (81.4%) of all preterm births occurred at 34–36 gestational weeks, and 7,562 (18.6%) before 34 weeks of GA. In Table  3, neonatal outcomes before and after 34 + 0 weeks of gestation are presented in relation to mode of delivery. Overall, seven preterm infants were classified as having an ICH due to birth injury, corresponding to a rate of 0.02%; and 612 infants were diagnosed with non-traumatic ICH, corresponding to a rate of 1.5%. Diagnoses of neonatal convulsions and other disturbances of neonatal cerebral status were rare, especially in infants at less than 34 weeks of GA, and occurred more frequently after VE and CS than after VD. Cephalic hematoma was the most frequent complication after VE of preterm infants (n = 72 or 3.1%), occurring much more often after VE than after CS (0.16%) and VD (0.49%). 
Subgaleal hemorrhage was less frequent, with a total of only 18 cases. More than two thirds of those cases occurred in the VE group. Neonatal outcomes in preterm infants by mode of delivery and gestational age Table 4 shows rates (per 1000), crude and adjusted odds ratios for the outcomes by mode of delivery, and uses infants born by VD as the reference group. After adjustments, preterm infants born by VE had almost doubled OR for ICH, while those born by CS had no increased risk. Preterm infants delivered by VE also had 4.5 times higher OR for extracranial hemorrhages, while those delivered by CS had no increased risk. A total of 540 (88.2%) of the ICH diagnoses consisted of intraventricular bleedings (ICD-codes 52.0-52.3), the type of ICH most frequently seen in preterm infants. Of these, 438 (71.6%) were graded as mild or moderate (grades 1–2). After excluding grades 1–2 of intraventricular hemorrhages from the analyses, the OR for ICH was still significantly increased for infants delivered by VE (OR 2.58, CI 1.27-5.24), but not for those delivered by CS during labor (OR 1.15, CI 0.78-1.70) after adjustment for GA. There was no difference in ICH rates between preterm boys and girls delivered by VE. Logistic regression (odds ratios) for intra- and extracranial hemorrhages, convulsions and other cerebral complications, and brachial plexus injury in preterm infants exposed to different modes of delivery CS = cesarean section, VE = vacuum extraction. Model 1 is adjusted for year of birth, gestational age, parity, maternal age, height, BMI, and infant birthweight. Model 2 is also adjusted for indications for operative delivery. The ORs for convulsions were almost doubled in both the VE and CS groups after adjustments for the variables in Model 1. However, further adjustment for indication for operative delivery decreased the odds and made the associations statistically insignificant. Other disturbances of the neonatal cerebral status were significantly increased (two to three times higher) both among infants delivered by VE, and by CS during labor, although the OR was higher in the VE group. A total of 53 infants were diagnosed with brachial plexus injury. Of these, 14 were born by VE, corresponding to a rate in the VE group of 0.6% and an OR of 6.21 (95% CI: 2.22-17.4) in the fully-adjusted model. In contrast, infants delivered by CS had no increased risk for this complication. Among infants with brachial plexus injury, there were 11 cases of shoulder dystocia (ICD-code O66.0), of which five occurred in the VE group. Use of VE in relation to gestational age: Among the 40,764 (54% of all) preterm deliveries included in this study, 2,319 (5.7%) preterm infants were delivered by VE, 5,505 (13.5%) by CS during labor, and 32,940 (80.2%) by VD. The rate of VE deliveries increased gradually with gestational age, Figure 1. Mode of delivery in relation to gestational age. Figure 1 shows rates (%) of different modes of delivery in relation to gestational age (in completed weeks). The blue, dotted line represents unassisted vaginal deliveries, the red, dashed line represents cesarean sections performed after onset of labor, and the green line represents the vacuum extraction deliveries. Distribution of risk factors or covariates in relation to mode of delivery: Table 2 shows maternal and perinatal characteristics of the study population in relation to mode of delivery. 
The VE rate decreased with maternal height and 80% of the women who delivered by VE were primiparae, compared with 48% of those who underwent CS during labor (not in table). More than 45% of the women who delivered by VE had received epidural analgesia during labor compared with 22% of women with VD, and 12% with CS during labor (not in table). Given the association between GA and VE, infants delivered by VE had higher birthweights. Maternal, pregnancy, delivery, and infant characteristics by mode of delivery Population-based cohort consisting of 40,764 preterm deliveries. CS = caesarean section, BMI = Body Mass Index (weight in kilograms/height in meters squared), EA = Epidural Analgesia. The most common indication for VE was fetal distress (42%), followed by prolonged labor (25%). Non-occipitoanterior position (25%) and fetal distress (17%) were the most common indications for CS during labor, while only 3% in this group had a diagnosis of prolonged labor. Neonatal outcome in relation to gestational age: The proportion of preterm infants diagnosed with an ICH varied more than a hundred-fold in relation to GA. It decreased from 21.5% among preterm infants born at 22–28 weeks of GA to 0.1% among those born after 36 weeks of gestation. The rates of neonatal convulsions among preterm infants decreased from 2.0% at 22–28 weeks to 0.25% among those born after 36 weeks of gestation, Figure 2. The proportion of preterm infants diagnosed with other disturbances of cerebral status (encephalopathy) decreased with GA, while the proportion of infants with brachial plexus injuries or ECH increased slightly with GA, Figure 3. Proportion (%) of preterm infants diagnosed with intracranial hemorrhage (ICH) or convulsions by gestational age. Figure 2 shows the proportions (%) of preterm infants diagnosed with ICH and neonatal convulsions in relation to gestational age (in completed weeks). The blue, dotted line represents intracranial hemorrhage (ICH) and the red, dashed line represents neonatal convulsions. Proportion (%) of preterm infants diagnosed with extracranial hemorrhage (ECH), encephalopathy and brachial plexus injury by gestational age. Figure 3 shows the proportions (%) of ECH, encephalopathy (ICD-code P91: other disturbances of cerebral status of newborn), and brachial plexus injury in relation to gestational age (in completed weeks). The blue, dotted line represents extracranial hemorrhage (ECH), the red, dashed line represents encephalopathy, and the green line represents brachial plexus injury. Neonatal outcome in relation to mode of delivery: To report outcome in relation to mode of delivery, the study cohort was divided according to the current guidelines on the use of VE as either preterm births occurring between 34–36 weeks of gestation (VE may be used), or those occurring before 34 gestational weeks (VE not recommended). In our cohort, 33,202 (81.4%) of all preterm births occurred at 34–36 gestational weeks, and 7,562 (18.6%) before 34 weeks of GA. In Table 3, neonatal outcomes before and after 34 + 0 weeks of gestation are presented in relation to mode of delivery. Overall, seven preterm infants were classified as having an ICH due to birth injury, corresponding to a rate of 0.02%; and 612 infants were diagnosed with non-traumatic ICH, corresponding to a rate of 1.5%. Diagnoses of neonatal convulsions and other disturbances of neonatal cerebral status were rare, especially in infants at less than 34 weeks of GA, and occurred more frequently after VE and CS than after VD. 
Cephalic hematoma was the most frequent complication after VE of preterm infants (n = 72 or 3.1%), occurring much more often after VE than after CS (0.16%) and VD (0.49%). Subgaleal hemorrhage was less frequent, with a total of only 18 cases. More than two thirds of those cases occurred in the VE group. Neonatal outcomes in preterm infants by mode of delivery and gestational age Table 4 shows rates (per 1000), crude and adjusted odds ratios for the outcomes by mode of delivery, and uses infants born by VD as the reference group. After adjustments, preterm infants born by VE had almost doubled OR for ICH, while those born by CS had no increased risk. Preterm infants delivered by VE also had 4.5 times higher OR for extracranial hemorrhages, while those delivered by CS had no increased risk. A total of 540 (88.2%) of the ICH diagnoses consisted of intraventricular bleedings (ICD-codes 52.0-52.3), the type of ICH most frequently seen in preterm infants. Of these, 438 (71.6%) were graded as mild or moderate (grades 1–2). After excluding grades 1–2 of intraventricular hemorrhages from the analyses, the OR for ICH was still significantly increased for infants delivered by VE (OR 2.58, CI 1.27-5.24), but not for those delivered by CS during labor (OR 1.15, CI 0.78-1.70) after adjustment for GA. There was no difference in ICH rates between preterm boys and girls delivered by VE. Logistic regression (odds ratios) for intra- and extracranial hemorrhages, convulsions and other cerebral complications, and brachial plexus injury in preterm infants exposed to different modes of delivery CS = cesarean section, VE = vacuum extraction. Model 1 is adjusted for year of birth, gestational age, parity, maternal age, height, BMI, and infant birthweight. Model 2 is also adjusted for indications for operative delivery. The ORs for convulsions were almost doubled in both the VE and CS groups after adjustments for the variables in Model 1. However, further adjustment for indication for operative delivery decreased the odds and made the associations statistically insignificant. Other disturbances of the neonatal cerebral status were significantly increased (two to three times higher) both among infants delivered by VE, and by CS during labor, although the OR was higher in the VE group. A total of 53 infants were diagnosed with brachial plexus injury. Of these, 14 were born by VE, corresponding to a rate in the VE group of 0.6% and an OR of 6.21 (95% CI: 2.22-17.4) in the fully-adjusted model. In contrast, infants delivered by CS had no increased risk for this complication. Among infants with brachial plexus injury, there were 11 cases of shoulder dystocia (ICD-code O66.0), of which five occurred in the VE group. Discussion: In this large cohort study of singleton, non-breech preterm births after onset of labor, we identify three clinically important findings related to mode of delivery: First, VE was used in 5.7% of preterm births, and despite recommendations against its use, 3.3% of preterm infants born before 34 gestational weeks were delivered by VE. Second, VE for preterm birth was used more frequently in shorter mothers, in primiparae, and among women treated with EA as pain relief during labor. Finally, after adjusting for potential confounders and covariates, preterm infants delivered by VE had almost doubled OR for ICH, four times higher OR for extracranial hemorrhage, as well as a 6-fold risk for brachial plexus palsy compared with those delivered by VD. 
Exclusion of intraventricular hemorrhage grades I-II (the most common form of ICH in preterm infants) from the analysis increased the OR for ICH after VE, indicating that severe hemorrhages were more common among preterm infants delivered by VE. Although VE was related to significantly increased rates of ICH, it is not clear whether the extraction as such causes the injury, or if there is an underlying common pathway for both VE-assisted delivery and ICH, i.e., that the relationship is confounded by indication. Since the ORs for ICH were significantly higher in the VE group compared with both the CS and unassisted VD groups, whereas the ORs for other disturbances of cerebral status were slightly higher in both the VE and CS groups as compared with VD, different mechanisms may be involved in the development of these two complications. The forces exerted by vacuum extraction could lead to significant vertical stress, which might be avoided with CS. In a case report of MRI findings after birth injuries among infants delivered by VE [21], it was suggested that vertical traction on the skull and brain may produce tentorial lacerations and rupture of intracranial veins. Another explanation for our findings of different outcomes after VE and CS could be that infants delivered by VE may have been exposed to contractions for a longer time than those delivered by CS. A protective effect of CS is indicated by lower ORs for ICH; however, the exposure to contractions as the sole explanation for the increased risks for ICH after VE is less likely, as the VD group – presumably the group exposed to the largest forces of labor – exhibited significantly lower odds for hemorrhagic complications compared with infants delivered by VE. During the study period, the overall rate of ICH increased from 6% at the beginning of the period to 12% at the end, most likely reflecting the increased access and use of ultrasonography among Swedish neonatologists in recent years. Improved ultrasound technology and image resolution may also have contributed to this development. Finally, we cannot exclude a contribution from misclassification: a large but normal choroid plexus could have been classified as a small subependymal hemorrhage by less experienced investigators. The finding that the diagnosis of small subependymal hemorrhage (without intraventricular extension; P52.0) increased most compared to other types of ICH during the study period (from 0.5% to 1.2%) supports these interpretations. The overrepresentation of subgaleal hemorrhage and cephalhematoma after VE is less surprising, since earlier studies have described the association between these diagnoses and the use of VE. The risk of subgaleal hemorrhage seems to be unrelated to GA, as this study demonstrates rates similar to those in previous studies of infants born at term [12,13]. The present study showed that preterm infants delivered by VE had a 6- to 7-fold increase in the risk of injury to the brachial plexus compared with unassisted VD. This injury is usually associated with large macrosomic infants and shoulder dystocia [22] and not with preterm birth. Our result emphasizes the importance of gentle maneuvers and avoiding application of excessive pressure or traction on the brachial plexus even when delivering the preterm infant, especially by VE. 
The major strengths of this study were the large study population covering all preterm deliveries in Sweden during a period of twelve years, and the high quality of the registers, making it possible to analyze rare diagnoses and unusual events such as ICH in preterm infants delivered by VE. We were able to include data on risk factors, potential confounders, and outcomes collected independently from one another and without involving the study subjects, thus minimizing various types of bias (e.g., selection and recall bias). Moreover, antenatal and obstetric care is free of charge in Sweden, management routines as well as GA determinations are standardized, and 99% of births are delivered in public hospitals. This minimizes the risk for confounding by unmeasured socio-demographic factors. Another advantage was the inclusion of the main indications for VE and CS, enabling us to address the question of confounding by indication. A major limitation of this study is that the registers do not contain detailed information about many important factors during the VE deliveries. For instance, the register does not provide specific information about the type of VE instrument used, level, position, and attitude of the fetal head in the pelvis when applying VE, location of placement of the vacuum cup, traction work, skill of the obstetrician, pressure exposure (duration and force), and cup detachments. The register does not provide information about confounders such as use of oxytocin and application of fundal pressure. There is a general recommendation not to use VE before a GA of 34 weeks. According to the Royal College of Obstetricians and Gynecologists, there is insufficient evidence to establish the safety of VE deliveries in gestations between 34 weeks + 0 days and 36 weeks + 0 days [9]. Our results show that the use of VE is related to rare but serious complications even between gestational weeks 34–36. Conclusion: The rates of serious birth injuries and complications are generally low, but preterm infants delivered by VE have higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries than those delivered by CS during labor or by unassisted vaginal delivery. We therefore support a continued conservative/cautious use of VE in preterm deliveries. Furthermore, the possible causal relationship between mode of delivery and ICH needs to be further investigated. Abbreviations: BMI: Body mass index; CI: Confidence interval; CS: Cesarean section; CT: Computerized tomography; EA: Epidural analgesia; ECH: Extracranial hemorrhage; EEG: Electroencephalography; GA: Gestational age; ICH: Intracranial hemorrhage; LGA: Large for gestational age; OR: Odds ratio; SGA: Small for gestational age; VD: Vaginal delivery; VE: Vacuum extraction. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: CE had the idea for the study, designed it and carried out the statistical analysis. KÅ participated in the design and wrote the first draft of the manuscript together with CE. MN contributed to the interpretation of the results and writing of the manuscript. All authors approved the final version of the submitted article. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2393/14/42/prepub
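The adjusted odds ratios quoted in the results and discussion above were obtained from multivariable logistic regression (Model 1 and Model 2). The article does not include analysis code, so the following is only a minimal illustrative sketch of how such adjusted ORs with 95% confidence intervals could be computed; the DataFrame, its file path, and all column names are hypothetical stand-ins for the register variables described in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the register data: one row per preterm infant.
# Assumed columns: ich (0/1 outcome), mode ('VD', 'VE', 'CS'), birth_year,
# ga_weeks, parity, maternal_age, height_cm, bmi, birthweight_g.
births = pd.read_csv("preterm_births.csv")  # placeholder path

# Model 1: mode of delivery with unassisted vaginal delivery (VD) as the
# reference, adjusted for year of birth, gestational age, parity, maternal
# age, height, BMI, and infant birthweight, mirroring the covariate list
# given for Model 1 in the article.
model1 = smf.logit(
    "ich ~ C(mode, Treatment(reference='VD')) + birth_year + ga_weeks"
    " + parity + maternal_age + height_cm + bmi + birthweight_g",
    data=births,
).fit(disp=False)

# Exponentiated coefficients give adjusted odds ratios; exponentiated
# confidence limits give the 95% CIs.
ci = model1.conf_int()
or_table = pd.DataFrame({
    "aOR": np.exp(model1.params),
    "CI 2.5%": np.exp(ci[0]),
    "CI 97.5%": np.exp(ci[1]),
})
print(or_table.filter(like="mode", axis=0))
```

Model 2 would add the indication-for-operative-delivery variables to the same formula, and the same approach would be applied with the other outcome columns (extracranial hemorrhage, convulsions, brachial plexus injury) to obtain the corresponding estimates.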
Background: Very few studies have investigated the neonatal outcomes after vacuum extraction delivery (VE) in the preterm period and the results of these studies are inconclusive. The objective of this study was to describe the use of VE for preterm delivery in Sweden and to compare rates of neonatal complications after preterm delivery by VE to those found after cesarean section during labor (CS) or unassisted vaginal delivery (VD). Methods: Data was obtained from Swedish national registers. In a population-based cohort from 1999 to 2010, all live-born, singleton preterm infants in a non-breech presentation at birth, born after onset of labor (either spontaneously, by induction, or by rupture of membranes) by VD, CS, or VE were included, leaving a study population of 40,764 infants. Logistic regression analyses were used to calculate adjusted odds ratios (AOR), using unassisted vaginal delivery as reference group. Results: VE was used in 5.7% of the preterm deliveries, with lower rates in earlier gestations. Overall, intracranial hemorrhage (ICH) occurred in 1.51%, extracranial hemorrhage (ECH) in 0.64%, and brachial plexus injury in 0.13% of infants. Infants delivered by VE had higher risks for ICH (AOR = 1.84 (95% CI: 1.09-3.12)), ECH (AOR = 4.48 (95% CI: 2.84-7.07)) and brachial plexus injury (AOR = 6.21 (95% CI: 2.22-17.4)), while infants delivered by CS during labor had no increased risk for these complications, as compared to VD. Conclusions: While rates of neonatal complications after VE are generally low, higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries after VE, compared with other modes of delivery, support a continued cautious use of VE for preterm delivery.
Background: Preterm birth is common [1] but still, the optimal mode of delivery of preterm infants is not known. Although neonatal outcomes in preterm infants delivered vaginally or by cesarean section (CS) [2-5] have been compared, there is no evidence to provide clear guidance on the method of choice [6]. Given the widespread assumption that assisted vaginal delivery could be harmful for fragile infants that are underweight and preterm, very few studies have addressed the use of vacuum extraction (VE) for preterm birth. Delivery by VE is a common obstetrical procedure, and in many countries it has replaced the use of forceps. VE is used to terminate a protracted second stage of labor and as an intervention for fetal or maternal distress. VE requires vertex presentation, a fully dilated cervix and ruptured membranes [7]. A cesarean section, on the other hand, can be performed at any stage of labor and does not require prerequisites of this kind. Most clinical guidelines do not recommend VE before 34 gestational weeks [8-10]. These recommendations are not based on results of randomized controlled trials, but rely on the observation that preterm infants are more likely than term infants to develop ICH, and on extrapolations from studies of term infants showing that VE is associated with an increased risk of ICH and other neonatal complications [11-17]. Only three studies have previously investigated the use and outcomes of VE in preterm births. The first was undertaken over 40 years ago and showed increased mortality and morbidity among preterm infants delivered by VE as compared with term infants delivered by VE [18]. The second study compared neonatal morbidity in preterm infants delivered vaginally with (n = 61) or without VE (n = 122), and found no differences in neonatal morbidity between the two groups [19]. The last study compared VE and forceps for preterm delivery (n = 64) [20]; the neonatal outcomes were similar in both groups. The available data are clearly untimely and hampered by limitations in power and, therefore, current knowledge on safety of preterm vacuum-assisted birth is unsatisfactory. The aim of this study was to 1) describe the use of VE and compare it to rates of CS during labor in preterm deliveries in Sweden from 1999–2010, 2) characterize the distribution of perinatal risk factors associated with each mode of delivery, and 3) compare rates of neonatal intra- and extracranial hemorrhages, as well as occurrence of brachial plexus injury after preterm delivery by VE or CS during labor, using unassisted vaginal birth as a reference. Conclusion: The rates of serious birth injuries and complications are generally low, but preterm infants delivered by VE have higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries than those delivered by CS during labor or by unassisted vaginal delivery. We therefore support a continued conservative/cautious use of VE in preterm deliveries. Furthermore, the possible causal relationship between mode of delivery and ICH needs to be further investigated.
Background: Very few studies have investigated the neonatal outcomes after vacuum extraction delivery (VE) in the preterm period and the results of these studies are inconclusive. The objective of this study was to describe the use of VE for preterm delivery in Sweden and to compare rates of neonatal complications after preterm delivery by VE to those found after cesarean section during labor (CS) or unassisted vaginal delivery (VD). Methods: Data was obtained from Swedish national registers. In a population-based cohort from 1999 to 2010, all live-born, singleton preterm infants in a non-breech presentation at birth, born after onset of labor (either spontaneously, by induction, or by rupture of membranes) by VD, CS, or VE were included, leaving a study population of 40,764 infants. Logistic regression analyses were used to calculate adjusted odds ratios (AOR), using unassisted vaginal delivery as reference group. Results: VE was used in 5.7% of the preterm deliveries, with lower rates in earlier gestations. Overall, intracranial hemorrhage (ICH) occurred in 1.51%, extracranial hemorrhage (ECH) in 0.64%, and brachial plexus injury in 0.13% of infants. Infants delivered by VE had higher risks for ICH (AOR = 1.84 (95% CI: 1.09-3.12)), ECH (AOR = 4.48 (95% CI: 2.84-7.07)) and brachial plexus injury (AOR = 6.21 (95% CI: 2.22-17.4)), while infants delivered by CS during labor had no increased risk for these complications, as compared to VD. Conclusions: While rates of neonatal complications after VE are generally low, higher odds ratios for intra- and extracranial hemorrhages and brachial plexus injuries after VE, compared with other modes of delivery, support a continued cautious use of VE for preterm delivery.
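As a back-of-the-envelope illustration of how a crude (unadjusted) odds ratio relates to the rates quoted above, the cephalic hematoma figures from the results (about 3.1% after VE versus 0.49% after VD) can be turned into an OR directly; this arithmetic is mine, not a value reported by the authors, and it ignores all covariate adjustment.

```python
# Crude odds ratio from the quoted cephalic hematoma rates (illustrative only).
p_ve, p_vd = 0.031, 0.0049            # 3.1% after VE vs. 0.49% after VD
crude_or = (p_ve / (1 - p_ve)) / (p_vd / (1 - p_vd))
print(f"crude OR ~ {crude_or:.1f}")   # about 6.5, before any adjustment
```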
9,636
359
[ 509, 607, 499, 130, 233, 282, 759, 74, 10, 59, 16 ]
15
[ "ve", "preterm", "infants", "delivery", "weeks", "preterm infants", "cs", "labor", "gestational", "ich" ]
[ "infants mode delivery", "vaginal delivery ve", "preterm infants delivered", "preterm birth delivery", "vacuum assisted birth" ]
[CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage | Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
[CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage | Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
[CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage | Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
[CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage | Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
[CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage | Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
[CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage | Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
[CONTENT] Adult | Birth Injuries | Brachial Plexus | Cesarean Section | Cohort Studies | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Intracranial Hemorrhage, Traumatic | Labor, Obstetric | Parturition | Pregnancy | Premature Birth | Registries | Scalp | Seizures | Sweden | Vacuum Extraction, Obstetrical | Young Adult [SUMMARY]
[CONTENT] Adult | Birth Injuries | Brachial Plexus | Cesarean Section | Cohort Studies | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Intracranial Hemorrhage, Traumatic | Labor, Obstetric | Parturition | Pregnancy | Premature Birth | Registries | Scalp | Seizures | Sweden | Vacuum Extraction, Obstetrical | Young Adult [SUMMARY]
[CONTENT] Adult | Birth Injuries | Brachial Plexus | Cesarean Section | Cohort Studies | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Intracranial Hemorrhage, Traumatic | Labor, Obstetric | Parturition | Pregnancy | Premature Birth | Registries | Scalp | Seizures | Sweden | Vacuum Extraction, Obstetrical | Young Adult [SUMMARY]
[CONTENT] Adult | Birth Injuries | Brachial Plexus | Cesarean Section | Cohort Studies | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Intracranial Hemorrhage, Traumatic | Labor, Obstetric | Parturition | Pregnancy | Premature Birth | Registries | Scalp | Seizures | Sweden | Vacuum Extraction, Obstetrical | Young Adult [SUMMARY]
[CONTENT] Adult | Birth Injuries | Brachial Plexus | Cesarean Section | Cohort Studies | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Intracranial Hemorrhage, Traumatic | Labor, Obstetric | Parturition | Pregnancy | Premature Birth | Registries | Scalp | Seizures | Sweden | Vacuum Extraction, Obstetrical | Young Adult [SUMMARY]
[CONTENT] Adult | Birth Injuries | Brachial Plexus | Cesarean Section | Cohort Studies | Female | Gestational Age | Humans | Infant, Newborn | Infant, Premature | Intracranial Hemorrhage, Traumatic | Labor, Obstetric | Parturition | Pregnancy | Premature Birth | Registries | Scalp | Seizures | Sweden | Vacuum Extraction, Obstetrical | Young Adult [SUMMARY]
[CONTENT] infants mode delivery | vaginal delivery ve | preterm infants delivered | preterm birth delivery | vacuum assisted birth [SUMMARY]
[CONTENT] infants mode delivery | vaginal delivery ve | preterm infants delivered | preterm birth delivery | vacuum assisted birth [SUMMARY]
[CONTENT] infants mode delivery | vaginal delivery ve | preterm infants delivered | preterm birth delivery | vacuum assisted birth [SUMMARY]
[CONTENT] infants mode delivery | vaginal delivery ve | preterm infants delivered | preterm birth delivery | vacuum assisted birth [SUMMARY]
[CONTENT] infants mode delivery | vaginal delivery ve | preterm infants delivered | preterm birth delivery | vacuum assisted birth [SUMMARY]
[CONTENT] infants mode delivery | vaginal delivery ve | preterm infants delivered | preterm birth delivery | vacuum assisted birth [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | weeks | preterm infants | cs | labor | gestational | ich [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | weeks | preterm infants | cs | labor | gestational | ich [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | weeks | preterm infants | cs | labor | gestational | ich [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | weeks | preterm infants | cs | labor | gestational | ich [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | weeks | preterm infants | cs | labor | gestational | ich [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | weeks | preterm infants | cs | labor | gestational | ich [SUMMARY]
[CONTENT] ve | preterm | infants | neonatal | morbidity | term infants | compared | studies | use | delivery [SUMMARY]
[CONTENT] preterm | delivery | grams | neonatal | labor | weeks | performed | m2 | kg | kg m2 [SUMMARY]
[CONTENT] ve | infants | preterm | preterm infants | cs | weeks | represents | line | line represents | gestational [SUMMARY]
[CONTENT] injuries | low preterm | injuries delivered cs labor | deliveries furthermore possible causal | plexus injuries delivered cs | plexus injuries delivered | ve preterm deliveries furthermore | ve preterm deliveries | injuries complications | injuries complications generally [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | labor | cs | preterm infants | weeks | gestational | delivered [SUMMARY]
[CONTENT] ve | preterm | infants | delivery | labor | cs | preterm infants | weeks | gestational | delivered [SUMMARY]
[CONTENT] VE ||| VE | Sweden | VE | CS [SUMMARY]
[CONTENT] Swedish ||| 1999 | 2010 | VD | CS | VE | 40,764 ||| [SUMMARY]
[CONTENT] VE | 5.7% ||| 1.51% | 0.64% | 0.13% ||| VE | 1.84 | 95% | CI | 1.09-3.12 | ECH | 4.48 | 95% | CI | 2.84-7.07 | 6.21 | 95% | CI | 2.22 | CS | VD [SUMMARY]
[CONTENT] VE | VE | VE [SUMMARY]
[CONTENT] VE ||| VE | Sweden | VE | CS ||| Swedish ||| 1999 | 2010 | VD | CS | VE | 40,764 ||| ||| VE | 5.7% ||| 1.51% | 0.64% | 0.13% ||| VE | 1.84 | 95% | CI | 1.09-3.12 | ECH | 4.48 | 95% | CI | 2.84-7.07 | 6.21 | 95% | CI | 2.22 | CS | VD ||| VE | VE | VE [SUMMARY]
[CONTENT] VE ||| VE | Sweden | VE | CS ||| Swedish ||| 1999 | 2010 | VD | CS | VE | 40,764 ||| ||| VE | 5.7% ||| 1.51% | 0.64% | 0.13% ||| VE | 1.84 | 95% | CI | 1.09-3.12 | ECH | 4.48 | 95% | CI | 2.84-7.07 | 6.21 | 95% | CI | 2.22 | CS | VD ||| VE | VE | VE [SUMMARY]
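All of the annotated_*_prompt fields in this record share one template: a [CONTENT] marker, the relevant terms joined with " | " (some variants additionally use "|||" to separate sentence-level groups), and a closing [SUMMARY] marker. Below is a minimal sketch of how such prompt strings could be assembled; the function name and template string are inferred from the visible fields, not taken from any dataset documentation.

```python
def build_prompt(terms: list[str]) -> str:
    """Join annotation terms into the '[CONTENT] ... [SUMMARY]' prompt template."""
    return "[CONTENT] " + " | ".join(terms) + " [SUMMARY]"

keywords = ["Mode of delivery", "Preterm delivery", "Intracranial hemorrhage",
            "Extracranial hemorrhage", "Brachial plexus injury"]
print(build_prompt(keywords))
# [CONTENT] Mode of delivery | Preterm delivery | Intracranial hemorrhage |
# Extracranial hemorrhage | Brachial plexus injury [SUMMARY]
```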
Positive detection of exfoliated colon cancer cells on linear stapler cartridges was associated with depth of tumor invasion and preoperative bowel preparation in colon cancer.
27577701
The aim of this study was to investigate exfoliated cancer cells (ECCs) on linear stapler cartridges used for anastomotic sites in colon cancer.
BACKGROUND
We prospectively analyzed ECCs on linear stapler cartridges used for anastomosis in 100 colon cancer patients who underwent colectomy. After completion of the functional end-to-end anastomosis, the linear stapler cartridges were irrigated with saline, which was collected for cytological examination; cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining.
METHODS
The detection rate of ECCs on the linear stapler cartridges was 20 %. Positive detection of ECCs was significantly associated with depth of tumor invasion (p = 0.012) and preoperative bowel preparation (p = 0.003). There were no marked differences between ECC-positive and ECC-negative groups in terms of the operation methods, tumor location, histopathological classification, and surgical margins.
RESULTS
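The associations reported in the results above (p = 0.012 for depth of tumor invasion, p = 0.003 for bowel preparation) come from the χ2/Fisher's exact comparisons described in the article's methods. The authors' analysis code is not available, so the sketch below only illustrates how such a 2×2 comparison could be run, using the depth-of-invasion counts given in Table 2 (ECC-positive: 0 Tis/T1 vs. 20 T2–T4; ECC-negative: 24 Tis/T1 vs. 56 T2–T4); the exact p-value depends on the test variant and need not match the reported value.

```python
from scipy import stats

# 2x2 table: rows = ECC-positive / ECC-negative group,
# columns = early (Tis, T1) / advanced (T2-T4) depth of invasion.
table = [[0, 20],
         [24, 56]]

_, p_fisher = stats.fisher_exact(table)                # Fisher's exact test
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)   # chi-square (Yates-corrected)
print(f"Fisher exact p = {p_fisher:.4f}, chi-square p = {p_chi2:.4f}")
```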
Since ECCs were identified on the cartridge of the linear stapler used for anastomosis, preoperative mechanical bowel preparation using polyethylene glycol solution and cleansing at anastomotic sites using tumoricidal agents before anastomosis may be necessary to decrease ECCs in advanced colon cancer.
CONCLUSIONS
[ "Aged", "Aged, 80 and over", "Anastomosis, Surgical", "Colectomy", "Colon", "Colonic Neoplasms", "Enema", "Female", "Humans", "Laxatives", "Male", "Margins of Excision", "Middle Aged", "Neoplasm Invasiveness", "Neoplasm Staging", "Polyethylene Glycols", "Preoperative Care", "Prospective Studies", "Surgical Staplers", "Surgical Stapling" ]
5006528
Background
The cause of suture-line recurrence following curative colorectal cancer surgery is believed to be the presence of exfoliated cancer cells (ECCs) at the anastomotic site [1]. Many studies of suture-line recurrence have been conducted in rectal cancer surgery; however, a greater margin is used for colon cancer than for rectal cancer cases, so that suture-line recurrence is less common in colon cancer patients [2]. As a result, few studies have reported on the presence of ECCs in the colonic lumen of colon cancer patients. However, even in patients with colon cancer, the incidence of suture-line recurrence has been reported to be 0.8–5.9 % [2, 3]. Reconstruction following colon cancer surgery commonly involves the fabrication of a functional end-to-end anastomosis [4] using a linear stapler, as a convenient procedure [5]. In the present study, ECCs in the colonic lumen of colon cancer patients were examined based on the cytological examination of the solutions washing the linear stapler used for functional end-to-end anastomosis, and we further analyzed the relationship between positive detection of ECCs and clinicopathological factors and prognosis.
null
null
Results
Cytological analysis The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %) Cytology from a cartridge of a stapler The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %) Cytology from a cartridge of a stapler The relationship between the status of ECCs and clinicopathological factors All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group p value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %) Comparison of clinicopathological factors in the ECC-positive and ECC-negative groups All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). 
The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group p value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %) Comparison of clinicopathological factors in the ECC-positive and ECC-negative groups Follow-up outcome Among all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence. Among all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence.
Conclusions
We found ECCs on the cartridge of the linear stapler used for anastomosis in 20 % of the colon cancer cases we analyzed. Most of the positive ECCs were identified in advanced colon cancer without PEG. Therefore, preoperative mechanical bowel preparation using PEG and cleansing at anastomotic sites using tumoricidal agents before anastomosis may contribute to decrease ECCs in advanced colon cancer.
[ "Patients", "Preoperative bowel preparation", "Cytology procedures", "Evaluation of data", "Statistical analysis", "Cytological analysis", "The relationship between the status of ECCs and clinicopathological factors", "Follow-up outcome" ]
[ "The study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. Written informed consent was obtained from all patients, after an explanation of the design and aim of this study.", "Full-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed.", "The procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis.", "Cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups.", "In the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. The level of significance was set at p < 0.05.", "The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %)\nCytology from a cartridge of a stapler", "All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). 
The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group\np value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %)\nComparison of clinicopathological factors in the ECC-positive and ECC-negative groups", "Among all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Preoperative bowel preparation", "Cytology procedures", "Evaluation of data", "Statistical analysis", "Results", "Cytological analysis", "The relationship between the status of ECCs and clinicopathological factors", "Follow-up outcome", "Discussion", "Conclusions" ]
[ "The cause of suture-line recurrence following curative colorectal cancer surgery is believed to be the presence of exfoliated cancer cells (ECCs) at the anastomotic site [1]. Many studies of suture-line recurrence have been conducted in rectal cancer surgery; however, a greater margin is used for colon cancer than for rectal cancer cases, so that suture-line recurrence is less common in colon cancer patients [2]. As a result, few studies have reported on the presence of ECCs in the colonic lumen of colon cancer patients. However, even in patients with colon cancer, the incidence of suture-line recurrence has been reported to be 0.8–5.9 % [2, 3]. Reconstruction following colon cancer surgery commonly involves the fabrication of a functional end-to-end anastomosis [4] using a linear stapler, as a convenient procedure [5]. In the present study, ECCs in the colonic lumen of colon cancer patients were examined based on the cytological examination of the solutions washing the linear stapler used for functional end-to-end anastomosis, and we further analyzed the relationship between positive detection of ECCs and clinicopathological factors and prognosis.", " Patients The study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. Written informed consent was obtained from all patients, after an explanation of the design and aim of this study.\nThe study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. 
Written informed consent was obtained from all patients, after an explanation of the design and aim of this study.\n Preoperative bowel preparation Full-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed.\nFull-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed.\n Cytology procedures The procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis.\nThe procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis.\n Evaluation of data Cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups.\nCytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups.\n Statistical analysis In the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. The level of significance was set at p < 0.05.\nIn the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. 
The level of significance was set at p < 0.05.", "The study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. Written informed consent was obtained from all patients, after an explanation of the design and aim of this study.", "Full-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed.", "The procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis.", "Cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups.", "In the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. The level of significance was set at p < 0.05.", " Cytological analysis The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %)\nCytology from a cartridge of a stapler\nThe cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). 
Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %)\nCytology from a cartridge of a stapler\n The relationship between the status of ECCs and clinicopathological factors All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group\np value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %)\nComparison of clinicopathological factors in the ECC-positive and ECC-negative groups\nAll positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group\np value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. 
T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %)\nComparison of clinicopathological factors in the ECC-positive and ECC-negative groups\n Follow-up outcome Among all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence.\nAmong all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence.", "The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %)\nCytology from a cartridge of a stapler", "All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group\np value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %)\nComparison of clinicopathological factors in the ECC-positive and ECC-negative groups", "Among all the cases, 81 were followed by colonoscopy after surgery. 
Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence.", "We found that ECCs were detected on 20 % of the stapler cartridges used for anastomosis in patients with colon cancer. Regarding the relationship between ECC status and clinicopathological factors, the presence of ECCs was associated with the depth of tumor invasion and preoperative bowel preparation. Our study included 86 patients with right-sided colon cancer among the 100 colon cancer patients. Hasegawa et al. [6] reported that 2 (11.1 %) and 10 (55.6 %) of 18 patients, who underwent right hemicolectomy for right-sided colon cancer, had ECCs at the terminal ileum and distal colon anastomosis sites, respectively. They demonstrated that surgical bowel occlusion on both sides of the tumor before resection reduced the number of ECCs. Recently, Maeda et al. [7] have shown that ECCs were detected in the distal and proximal colon within 3 cm of the tumor on each side in almost all patients with sigmoid cancer, and that ECCs decreased with distance from the tumor. They also suggested that bowel ligatures could decrease ECCs and thereby help prevent local recurrence. These findings indicated that the no-touch isolation technique might influence the number of ECCs, as previous reports suggested [8, 9]. In the present study, we performed colectomy according to the no-touch isolation technique. Consequently, the positive rate of ECCs was 20 %. The difference in positive rates might be explained by the operative technique, in addition to other factors such as cleansing methods and bowel preparation methods.\nMost studies of ECCs in the colonic lumen of colorectal cancer patients have involved rectal cancer patients. The incidence of suture-line recurrence has been reported to be 11–18 % [10, 11] in rectal cancer, while that in colon cancer has been reported to be 0.8–5.9 % [2, 3]. The possibility that ECCs can be implanted into freshly cut tissues was first postulated by Sir Ryall in 1907 [12]. This hypothesis was later advanced to explain some cases of suture-line recurrence following resection of colorectal cancer. In order to demonstrate the potential involvement of ECCs in the colonic lumen in suture-line and local recurrences in an animal model, Fermor et al. [13] injected ECCs collected from the colonic lumen of 17 colorectal cancer patients into the caudal vein of immunocompromised mice and showed that six mice developed lung metastases. Several cases of suture-line recurrence following resection of colon cancer, especially of the sigmoid colon, have been reported in Japan [14–17]. These reports considered implantation to be the most important cause of recurrence at anastomotic sites. The frequency of suture-line recurrence in colon cancer was very low, even though ECCs were frequently detected in the colonic lumen. Indeed, we detected ECCs in 20 % of the colon cancer cases we analyzed. However, no suture-line recurrence was observed in our colon cancer series during 5 years of observation after colectomy. The viability of ECCs has been investigated to understand the etiological mechanism of anastomotic recurrence. Rosenberg et al. [18] reported the presence of no viable cells, as assessed by trypan blue exclusion. However, Umpleby et al. [19], using trypan blue exclusion and hydrolysis of fluorescein diacetate, showed that 70 % of patients with colorectal cancer had viable cells exfoliated into the bowel lumen. In general, other factors associated with the potential for implantation are probably required, even if ECCs remain viable.\nIn the present study, we found that the positive rate of ECCs was 11.8 % in patients with PEG, while it was 37.5 % in patients without PEG. Mechanical bowel preparation using PEG may be effective for decreasing the number of ECCs. Previous reports [6, 7] suggested similar findings. Regarding mechanical bowel preparation using PEG, Slim et al. [20] described that anastomotic leakage was significantly more frequent after mechanical bowel preparation (odds ratio 1.75; p = 0.032) and concluded that mechanical bowel preparation using PEG should be omitted before elective colorectal surgery. Consequently, with the wide adoption of the “enhanced recovery after surgery” (ERAS) protocol, mechanical bowel preparation is often not routinely performed in patients undergoing elective colon cancer surgery. Recently, clinical practice guidelines [21] from the USA have recommended the use of both mechanical bowel preparation and oral antibiotic bowel preparation. Many surgeons still conduct mechanical bowel preparation using PEG in the USA [22] and Japan [23]. Mechanical bowel preparation using PEG might help decrease positive ECCs in advanced colorectal cancer, thereby reducing the possibility of anastomotic recurrence in addition to preventing anastomotic leakage and surgical site infection.\nIn a previous study [5], the outcome of reconstruction following colon cancer surgery was compared between hand-sewn and stapled anastomosis. While the safety was comparable, it was radiographically confirmed that suture leakage was more common with hand-sewn anastomoses. Hence, functional end-to-end anastomosis has primarily been performed using a stapler. In the present study, the washing samples of the cartridge of the linear stapler used for functional end-to-end anastomosis were examined cytologically. The rationale for using the washing samples of the cartridge in this study was that they would better reflect the presence of ECCs than the washing solutions of the colonic lumen. In addition, the presence of ECCs on the cartridge was considered to be a cause of suture-line recurrence. Prior to the present study, ECCs had been cytologically identified in the physiological saline used to irrigate the colonic lumen. In the present study, 5 % povidone-iodine was first used as a tumoricidal agent to irrigate the colonic lumen prior to anastomosis [24, 25]. However, since povidone-iodine should not be used in the abdominal cavity because it severely damages normal mucosa, we employed 0.025 % benzalkonium chloride as an alternative. Therefore, the ECC detection rate might be lower in the present study than that reported by Umpleby et al. [19], who used normal saline for bowel irrigation.\nOur results indicated that the detection rate of ECCs, which could be found at anastomotic sites even on the stapler cartridges, might be higher in advanced colon cancer (T2 or deeper) and in elective colon cancer patients without mechanical bowel preparation using PEG, increasing the possibility of suture-line recurrence. To reduce the possibility of suture-line recurrence, we should conduct preoperative mechanical bowel preparation using PEG, perform the surgical operation with the no-touch isolation technique, and irrigate the colonic lumen prior to anastomosis using 0.025 % benzalkonium chloride as a tumoricidal agent.", "We found ECCs on the cartridge of the linear stapler used for anastomosis in 20 % of the colon cancer cases we analyzed. Most of the positive ECCs were identified in advanced colon cancer without PEG. Therefore, preoperative mechanical bowel preparation using PEG and cleansing at anastomotic sites using tumoricidal agents before anastomosis may contribute to decreasing ECCs in advanced colon cancer." ]
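The statistical comparison described in this article's Methods (χ2/Fisher's exact test for frequencies, Mann-Whitney U test for continuous variables, significance at p < 0.05) can be sketched in a few lines of Python. The example below is only a minimal, hypothetical illustration using SciPy: the patient records, variable names, and the ecc_group helper are invented, and the article does not state which software was actually used for its analysis.

```python
# Illustrative sketch (not the authors' code) of the group comparison described
# in the Methods, using hypothetical per-patient data.
from scipy.stats import fisher_exact, mannwhitneyu

# Papanicolaou classes I-III -> ECC-negative group, IV-V -> ECC-positive group
def ecc_group(pap_class: int) -> str:
    return "positive" if pap_class >= 4 else "negative"

# Hypothetical records: (Papanicolaou class, depth of invasion T2-T4?, tumor size in mm)
patients = [(2, False, 35.0), (5, True, 62.0), (1, False, 28.0), (4, True, 70.0)]

pos = [p for p in patients if ecc_group(p[0]) == "positive"]
neg = [p for p in patients if ecc_group(p[0]) == "negative"]

# Categorical factor (e.g., T2-T4 vs. Tis/T1): Fisher's exact test on a 2x2 table
table = [
    [sum(p[1] for p in pos), sum(not p[1] for p in pos)],
    [sum(p[1] for p in neg), sum(not p[1] for p in neg)],
]
odds_ratio, p_categorical = fisher_exact(table)

# Continuous factor (e.g., tumor size): Mann-Whitney U test between the two groups
_, p_continuous = mannwhitneyu(
    [p[2] for p in pos], [p[2] for p in neg], alternative="two-sided"
)

# A factor would be called significant at p < 0.05, as in the article
print(f"Fisher's exact p = {p_categorical:.3f}, Mann-Whitney p = {p_continuous:.3f}")
```

In the study itself, the categorical comparisons (e.g., depth of invasion, preoperative bowel preparation) and the continuous comparisons (e.g., tumor size) correspond to the p values reported in Table 2.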
[ "introduction", "materials|methods", null, null, null, null, null, "results", null, null, null, "discussion", "conclusion" ]
[ "Exfoliated cancer cell", "Colon cancer", "Functional end-to-end anastomosis", "Suture-line recurrence" ]
Background: The cause of suture-line recurrence following curative colorectal cancer surgery is believed to be the presence of exfoliated cancer cells (ECCs) at the anastomotic site [1]. Many studies of suture-line recurrence have been conducted in rectal cancer surgery; however, a greater margin is used for colon cancer than for rectal cancer cases, so that suture-line recurrence is less common in colon cancer patients [2]. As a result, few studies have reported on the presence of ECCs in the colonic lumen of colon cancer patients. However, even in patients with colon cancer, the incidence of suture-line recurrence has been reported to be 0.8–5.9 % [2, 3]. Reconstruction following colon cancer surgery commonly involves the fabrication of a functional end-to-end anastomosis [4] using a linear stapler, as a convenient procedure [5]. In the present study, ECCs in the colonic lumen of colon cancer patients were examined based on the cytological examination of the solutions washing the linear stapler used for functional end-to-end anastomosis, and we further analyzed the relationship between positive detection of ECCs and clinicopathological factors and prognosis. Methods: Patients The study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. Written informed consent was obtained from all patients, after an explanation of the design and aim of this study. The study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. Written informed consent was obtained from all patients, after an explanation of the design and aim of this study. Preoperative bowel preparation Full-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed. Full-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed. 
Cytology procedures The procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis. The procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis. Evaluation of data Cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups. Cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups. Statistical analysis In the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. The level of significance was set at p < 0.05. In the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. The level of significance was set at p < 0.05. Patients: The study subjects were 100 consecutive patients who underwent laparoscopic (n = 61) or open (n = 39) colectomy for colon cancer at our hospital, in whom a functional end-to-end anastomosis using a linear stapler was performed. After completing the anastomosis in each case, the cartridge of the linear stapler was washed with saline, which was collected for cytological examination. Of the 100 patients examined, the localization of tumor was as follows: cecum 17 cases, ascending colon 41, transverse colon 28, descending colon 2, and sigmoid colon 12. The depth of tumor invasion was pTis 8 cases, pT1 16, pT2 7, pT3 47, and pT4 22. 
The histological diagnosis was well-differentiated adenocarcinoma 46 cases, moderately 42, poorly 7, and others 5. All patients were followed up 5 years or more after surgery, or until metastasis was developed or death. Written informed consent was obtained from all patients, after an explanation of the design and aim of this study. Preoperative bowel preparation: Full-laxative cleansing via polyethylene glycol solution (PEG) was routinely used; however, for patients with stenosis, other preparation methods (sodium picosulfate or enema) were employed. Cytology procedures: The procedure for the cytology of the washing samples collected from the stapler cartridges was as follows. After tumor resection, the intestinal lumen, distal and proximal to the anastomotic site, was cleaned five times using cotton balls soaked in either 5 % povidone-iodine or 0.025 % benzalkonium chloride. Afterwards, a functional end-to-end anastomosis was performed. The stapler cartridge (GIA 80-3.8, Covidien, MA, USA) used for anastomosis was washed using 100 ml of physiological saline, and the washing samples were promptly subjected to cytological analysis. Evaluation of data: Cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Classes I, II, and III were included in the negative group, while classes IV and V were included in the positive group. Clinicopathological factors, including age, gender, operation methods (laparoscopic or open surgery), tumor location, depth of tumor invasion, tumor size, lymph node metastasis, distant metastasis, stage classification, histopathological classification, margin from tumor, and preoperative bowel preparation, and clinical outcome were compared between the two groups. Statistical analysis: In the present study, the χ2 test (Fisher’s exact test) was used to compare frequencies between the two groups, and the Mann-Whitney U test was used to compare intergroup differences. The level of significance was set at p < 0.05. Results: Cytological analysis The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %) Cytology from a cartridge of a stapler The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %) Cytology from a cartridge of a stapler The relationship between the status of ECCs and clinicopathological factors All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). 
The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group p value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %) Comparison of clinicopathological factors in the ECC-positive and ECC-negative groups All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group p value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %) Comparison of clinicopathological factors in the ECC-positive and ECC-negative groups Follow-up outcome Among all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence. Among all the cases, 81 were followed by colonoscopy after surgery. 
Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence. Cytological analysis: The cytological analysis results were as follows: class I, n = 17; class II, n = 61; class III, n = 2; class IV, n = 7; and class V, n = 13 (Table 1). Based on the pathological diagnosis, all the patients were assigned to the positive group (n = 20) and the negative group (n = 80).Table 1Cytology from a cartridge of a staplerClass I17 cases (17 %)Class II61 (61 %)Class III2 (2 %)Class IV7 (7 %)Class V13 (13 %) Cytology from a cartridge of a stapler The relationship between the status of ECCs and clinicopathological factors: All positive cases had advanced cancers that had invaded beyond the submucosal layer (T2–T4), while all of the early cancers that had invaded the submucosal layer (Tis, T1) were in the negative group (p = 0.012) (Table 2). Regarding preoperative bowel preparation, the detection rate of ECCs in cases where full-laxative mechanical cleansing with PEG was performed was significantly lower than that in cases where full-laxative mechanical cleansing was not possible (p = 0.003) (Table 2). The two groups did not differ significantly in terms of the operation method, tumor location, histopathological classification, and margin from tumor (Table 2).Table 2Comparison of clinicopathological factors in the ECC-positive and ECC-negative groupsECC-positive groupECC-negative group p value(n = 20)(n = 80)Age67.3 ± 12.0 years old72.0 ± 9.6 years oldN.SGender M843N.S F1237Operation method Laparoscopic surgery10 (50 %)51 (63.8 %)N.S Open surgery10 (50 %)29 (36.2 %)Tumor location Cecum4 (20 %)13 (16.3 %)N.S Ascending colon9 (45 %)32 (40 %) Transverse colon4 (20 %)24 (30 %) Descending colon0 (0 %)2 (2.5 %) Sigmoid colon3 (15 %)9 (11.2 %)Depth of tumor invasion pTis0 (0 %)8 (10 %)0.012 pT10 (0 %)16 (20 %)(Tis, T1 vs. T2–T4) pT22 (10 %)5 (6.2 %) pT316 (80 %)31 (38.8 %) pT42 (10 %)20 (25 %)Tumor size (mean ± SE)58.6 ± 16.7 mm49.2 ± 32.6 mm0.0239Lymph node metastasis pN(−)1348N.S pN(+)732Distant metastasis pM0112N.S pM11968cStage 007N.S I218 II1021 III721 IV113Histopathological classification Well838N.S Moderately933 Poorly16 Others23Margin from tumor (mean ± SE) Oral side16.7 ± 2.5 cm14.6 ± 1.1 cmN.S Anal side12.3 ± 1.5 cm12.8 ± 1.0 cmN.SPreoperative bowel preparation PEG8 (40 %)60 (75 %)0.003 Others12 (60 %)20 (25 %) Comparison of clinicopathological factors in the ECC-positive and ECC-negative groups Follow-up outcome: Among all the cases, 81 were followed by colonoscopy after surgery. Follow-up colonoscopy was performed in 74 patients (74 %) (observation range 3.9–121.4 months; median observation period 64.1 months), and none were found to have developed suture-line recurrence. Discussion: We found that ECC-positive cases were recognized at 20 % on the stapler cartridges used for anastomosis in patients with colon cancer. According to the relationship between the status of ECCs and clinicopathological factors, the presence of ECCs was associated with depth of tumor invasion and preoperative bowel preparation. Our study included 86 patients with right-sided colon cancer in 100 colon cancer patients. Hasegawa et al. [6] reported that 2 (11.1 %) and 10 (55.6 %) of 18 patients, who underwent right hemicolectomy for right-sided colon cancer, had ECCs at the terminal ileum and distal colon anastomosis sites, respectively. 
They demonstrated that surgical bowel occlusion on both sides of the tumor before resection reduced the number of ECCs. Recently, Maeda et al. [7] have shown that ECCs were detected in the distal and proximal colon within 3 cm of the tumor on each side in almost all patients with sigmoid cancer, and that ECCs decreased with distance from the tumor. They also suggested that bowel ligatures could decrease ECCs and thereby help prevent local recurrence. These findings indicated that the no-touch isolation technique might influence the number of ECCs, as previous reports suggested [8, 9]. In the present study, we performed colectomy according to the no-touch isolation technique. Consequently, the positive rate of ECCs was 20 %. The difference in positive rates might be explained by the operative technique, in addition to other factors such as cleansing methods and bowel preparation methods. Most studies of ECCs in the colonic lumen of colorectal cancer patients have involved rectal cancer patients. The incidence of suture-line recurrence has been reported to be 11–18 % [10, 11] in rectal cancer, while that in colon cancer has been reported to be 0.8–5.9 % [2, 3]. The possibility that ECCs can be implanted into freshly cut tissues was first postulated by Sir Ryall in 1907 [12]. This hypothesis was later advanced to explain some cases of suture-line recurrence following resection of colorectal cancer. In order to demonstrate the potential involvement of ECCs in the colonic lumen in suture-line and local recurrences in an animal model, Fermor et al. [13] injected ECCs collected from the colonic lumen of 17 colorectal cancer patients into the caudal vein of immunocompromised mice and showed that six mice developed lung metastases. Several cases of suture-line recurrence following resection of colon cancer, especially of the sigmoid colon, have been reported in Japan [14–17]. These reports considered implantation to be the most important cause of recurrence at anastomotic sites. The frequency of suture-line recurrence in colon cancer was very low, even though ECCs were frequently detected in the colonic lumen. Indeed, we detected ECCs in 20 % of the colon cancer cases we analyzed. However, no suture-line recurrence was observed in our colon cancer series during 5 years of observation after colectomy. The viability of ECCs has been investigated to understand the etiological mechanism of anastomotic recurrence. Rosenberg et al. [18] reported the presence of no viable cells, as assessed by trypan blue exclusion. However, Umpleby et al. [19], using trypan blue exclusion and hydrolysis of fluorescein diacetate, showed that 70 % of patients with colorectal cancer had viable cells exfoliated into the bowel lumen. In general, other factors associated with the potential for implantation are probably required, even if ECCs remain viable. In the present study, we found that the positive rate of ECCs was 11.8 % in patients with PEG, while it was 37.5 % in patients without PEG. Mechanical bowel preparation using PEG may be effective for decreasing the number of ECCs. Previous reports [6, 7] suggested similar findings. Regarding mechanical bowel preparation using PEG, Slim et al. [20] described that anastomotic leakage was significantly more frequent after mechanical bowel preparation (odds ratio 1.75; p = 0.032) and concluded that mechanical bowel preparation using PEG should be omitted before elective colorectal surgery. Consequently, with the wide adoption of the “enhanced recovery after surgery” (ERAS) protocol, mechanical bowel preparation is often not routinely performed in patients undergoing elective colon cancer surgery. Recently, clinical practice guidelines [21] from the USA have recommended the use of both mechanical bowel preparation and oral antibiotic bowel preparation. Many surgeons still conduct mechanical bowel preparation using PEG in the USA [22] and Japan [23]. Mechanical bowel preparation using PEG might help decrease positive ECCs in advanced colorectal cancer, thereby reducing the possibility of anastomotic recurrence in addition to preventing anastomotic leakage and surgical site infection. In a previous study [5], the outcome of reconstruction following colon cancer surgery was compared between hand-sewn and stapled anastomosis. While the safety was comparable, it was radiographically confirmed that suture leakage was more common with hand-sewn anastomoses. Hence, functional end-to-end anastomosis has primarily been performed using a stapler. In the present study, the washing samples of the cartridge of the linear stapler used for functional end-to-end anastomosis were examined cytologically. The rationale for using the washing samples of the cartridge in this study was that they would better reflect the presence of ECCs than the washing solutions of the colonic lumen. In addition, the presence of ECCs on the cartridge was considered to be a cause of suture-line recurrence. Prior to the present study, ECCs had been cytologically identified in the physiological saline used to irrigate the colonic lumen. In the present study, 5 % povidone-iodine was first used as a tumoricidal agent to irrigate the colonic lumen prior to anastomosis [24, 25]. However, since povidone-iodine should not be used in the abdominal cavity because it severely damages normal mucosa, we employed 0.025 % benzalkonium chloride as an alternative. Therefore, the ECC detection rate might be lower in the present study than that reported by Umpleby et al. [19], who used normal saline for bowel irrigation. Our results indicated that the detection rate of ECCs, which could be found at anastomotic sites even on the stapler cartridges, might be higher in advanced colon cancer (T2 or deeper) and in elective colon cancer patients without mechanical bowel preparation using PEG, increasing the possibility of suture-line recurrence. To reduce the possibility of suture-line recurrence, we should conduct preoperative mechanical bowel preparation using PEG, perform the surgical operation with the no-touch isolation technique, and irrigate the colonic lumen prior to anastomosis using 0.025 % benzalkonium chloride as a tumoricidal agent. Conclusions: We found ECCs on the cartridge of the linear stapler used for anastomosis in 20 % of the colon cancer cases we analyzed. Most of the positive ECCs were identified in advanced colon cancer without PEG. Therefore, preoperative mechanical bowel preparation using PEG and cleansing at anastomotic sites using tumoricidal agents before anastomosis may contribute to decreasing ECCs in advanced colon cancer.
Background: The aim of this study was to investigate exfoliated cancer cells (ECCs) on linear stapler cartridges used for anastomotic sites in colon cancer. Methods: We prospectively analyzed ECCs on linear stapler cartridges used for anastomosis in 100 colon cancer patients who underwent colectomy. Having completed the functional end-to-end anastomosis, the linear stapler cartridges were irrigated with saline, which was collected for cytological examination and cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Results: The detection rate of ECCs on the linear stapler cartridges was 20 %. Positive detection of ECCs was significantly associated with depth of tumor invasion (p = 0.012) and preoperative bowel preparation (p = 0.003). There were no marked differences between ECC-positive and ECC-negative groups in terms of the operation methods, tumor location, histopathological classification, and surgical margins. Conclusions: Since ECCs were identified on the cartridge of the linear stapler used for anastomosis, preoperative mechanical bowel preparation using polyethylene glycol solution and cleansing at anastomotic sites using tumoricidal agents before anastomosis may be necessary to decrease ECCs in advanced colon cancer.
Background: The cause of suture-line recurrence following curative colorectal cancer surgery is believed to be the presence of exfoliated cancer cells (ECCs) at the anastomotic site [1]. Many studies of suture-line recurrence have been conducted in rectal cancer surgery; however, a greater margin is used for colon cancer than for rectal cancer cases, so that suture-line recurrence is less common in colon cancer patients [2]. As a result, few studies have reported on the presence of ECCs in the colonic lumen of colon cancer patients. However, even in patients with colon cancer, the incidence of suture-line recurrence has been reported to be 0.8–5.9 % [2, 3]. Reconstruction following colon cancer surgery commonly involves the fabrication of a functional end-to-end anastomosis [4] using a linear stapler, as a convenient procedure [5]. In the present study, ECCs in the colonic lumen of colon cancer patients were examined based on the cytological examination of the solutions washing the linear stapler used for functional end-to-end anastomosis, and we further analyzed the relationship between positive detection of ECCs and clinicopathological factors and prognosis. Conclusions: We found ECCs on the cartridge of the linear stapler used for anastomosis in 20 % of the colon cancer cases we analyzed. Most of the positive ECCs were identified in advanced colon cancer without PEG. Therefore, preoperative mechanical bowel preparation using PEG and cleansing at anastomotic sites using tumoricidal agents before anastomosis may contribute to decreasing ECCs in advanced colon cancer.
Background: The aim of this study was to investigate exfoliated cancer cells (ECCs) on linear stapler cartridges used for anastomotic sites in colon cancer. Methods: We prospectively analyzed ECCs on linear stapler cartridges used for anastomosis in 100 colon cancer patients who underwent colectomy. Having completed the functional end-to-end anastomosis, the linear stapler cartridges were irrigated with saline, which was collected for cytological examination and cytological diagnoses were made by board-certified pathologists based on Papanicolaou staining. Results: The detection rate of ECCs on the linear stapler cartridges was 20 %. Positive detection of ECCs was significantly associated with depth of tumor invasion (p = 0.012) and preoperative bowel preparation (p = 0.003). There were no marked differences between ECC-positive and ECC-negative groups in terms of the operation methods, tumor location, histopathological classification, and surgical margins. Conclusions: Since ECCs were identified on the cartridge of the linear stapler used for anastomosis, preoperative mechanical bowel preparation using polyethylene glycol solution and cleansing at anastomotic sites using tumoricidal agents before anastomosis may be necessary to decrease ECCs in advanced colon cancer.
5,232
223
[ 198, 34, 109, 100, 51, 141, 496, 54 ]
13
[ "tumor", "colon", "patients", "cancer", "eccs", "cases", "bowel", "preparation", "class", "20" ]
[ "tumor resection intestinal", "colectomy colon cancer", "colorectal cancer surgery", "recurrence colon cancer", "colon cancer rectal" ]
null
[CONTENT] Exfoliated cancer cell | Colon cancer | Functional end-to-end anastomosis | Suture-line recurrence [SUMMARY]
null
[CONTENT] Exfoliated cancer cell | Colon cancer | Functional end-to-end anastomosis | Suture-line recurrence [SUMMARY]
[CONTENT] Exfoliated cancer cell | Colon cancer | Functional end-to-end anastomosis | Suture-line recurrence [SUMMARY]
[CONTENT] Exfoliated cancer cell | Colon cancer | Functional end-to-end anastomosis | Suture-line recurrence [SUMMARY]
[CONTENT] Exfoliated cancer cell | Colon cancer | Functional end-to-end anastomosis | Suture-line recurrence [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Colectomy | Colon | Colonic Neoplasms | Enema | Female | Humans | Laxatives | Male | Margins of Excision | Middle Aged | Neoplasm Invasiveness | Neoplasm Staging | Polyethylene Glycols | Preoperative Care | Prospective Studies | Surgical Staplers | Surgical Stapling [SUMMARY]
null
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Colectomy | Colon | Colonic Neoplasms | Enema | Female | Humans | Laxatives | Male | Margins of Excision | Middle Aged | Neoplasm Invasiveness | Neoplasm Staging | Polyethylene Glycols | Preoperative Care | Prospective Studies | Surgical Staplers | Surgical Stapling [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Colectomy | Colon | Colonic Neoplasms | Enema | Female | Humans | Laxatives | Male | Margins of Excision | Middle Aged | Neoplasm Invasiveness | Neoplasm Staging | Polyethylene Glycols | Preoperative Care | Prospective Studies | Surgical Staplers | Surgical Stapling [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Colectomy | Colon | Colonic Neoplasms | Enema | Female | Humans | Laxatives | Male | Margins of Excision | Middle Aged | Neoplasm Invasiveness | Neoplasm Staging | Polyethylene Glycols | Preoperative Care | Prospective Studies | Surgical Staplers | Surgical Stapling [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Anastomosis, Surgical | Colectomy | Colon | Colonic Neoplasms | Enema | Female | Humans | Laxatives | Male | Margins of Excision | Middle Aged | Neoplasm Invasiveness | Neoplasm Staging | Polyethylene Glycols | Preoperative Care | Prospective Studies | Surgical Staplers | Surgical Stapling [SUMMARY]
[CONTENT] tumor resection intestinal | colectomy colon cancer | colorectal cancer surgery | recurrence colon cancer | colon cancer rectal [SUMMARY]
null
[CONTENT] tumor resection intestinal | colectomy colon cancer | colorectal cancer surgery | recurrence colon cancer | colon cancer rectal [SUMMARY]
[CONTENT] tumor resection intestinal | colectomy colon cancer | colorectal cancer surgery | recurrence colon cancer | colon cancer rectal [SUMMARY]
[CONTENT] tumor resection intestinal | colectomy colon cancer | colorectal cancer surgery | recurrence colon cancer | colon cancer rectal [SUMMARY]
[CONTENT] tumor resection intestinal | colectomy colon cancer | colorectal cancer surgery | recurrence colon cancer | colon cancer rectal [SUMMARY]
[CONTENT] tumor | colon | patients | cancer | eccs | cases | bowel | preparation | class | 20 [SUMMARY]
null
[CONTENT] tumor | colon | patients | cancer | eccs | cases | bowel | preparation | class | 20 [SUMMARY]
[CONTENT] tumor | colon | patients | cancer | eccs | cases | bowel | preparation | class | 20 [SUMMARY]
[CONTENT] tumor | colon | patients | cancer | eccs | cases | bowel | preparation | class | 20 [SUMMARY]
[CONTENT] tumor | colon | patients | cancer | eccs | cases | bowel | preparation | class | 20 [SUMMARY]
[CONTENT] cancer | colon | colon cancer | suture line | line | suture line recurrence | line recurrence | recurrence | suture | colon cancer patients [SUMMARY]
null
[CONTENT] class | table | 20 | tumor | negative | ecc | group | positive | 10 | cases [SUMMARY]
[CONTENT] colon cancer | colon | eccs | cancer | advanced colon | advanced colon cancer | advanced | anastomosis | peg | bowel preparation peg cleansing [SUMMARY]
[CONTENT] colon | class | cancer | tumor | patients | eccs | colon cancer | anastomosis | cases | end [SUMMARY]
[CONTENT] colon | class | cancer | tumor | patients | eccs | colon cancer | anastomosis | cases | end [SUMMARY]
[CONTENT] linear [SUMMARY]
null
[CONTENT] 20 % ||| 0.012 | 0.003 ||| ECC [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] linear ||| linear | 100 ||| Papanicolaou ||| ||| 20 % ||| 0.012 | 0.003 ||| ECC ||| [SUMMARY]
[CONTENT] linear ||| linear | 100 ||| Papanicolaou ||| ||| 20 % ||| 0.012 | 0.003 ||| ECC ||| [SUMMARY]
Association between Endocrine Disorders and Severe COVID-19 Disease in Pediatric Patients.
35417912
Though severe illness due to COVID-19 is uncommon in children, there is an urgent need to better determine the risk factors for disease severity in youth. This study aims to determine the impact a preexisting endocrine diagnosis has on severity of COVID-19 presentation in youth.
INTRODUCTION
The cross-sectional chart review study included all patients less than 25 years old with a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021. Electronic medical record data for analysis included patient demographics, BMI percentile, inpatient hospitalization or admission to the pediatric intensive care unit (PICU), and the presence of a preexisting endocrine diagnosis such as diabetes mellitus (type 1 and type 2), adrenal insufficiency, and hypothyroidism. Two outcome measures were analyzed in multivariate analysis: inpatient admission and PICU admission. Adjusted odds ratios with a 95% CI were calculated using binary logistic regression, along with p values after Wald χ2 analysis.
METHODS
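The methods abstract above describes a binary logistic regression yielding adjusted odds ratios with 95% CIs and Wald χ2 p values; the full Methods text later in this record notes the analysis was run in SPSS Version 27. The Python sketch below is only a rough, hypothetical equivalent on simulated data: the column names, coefficients, and simulated cohort are invented for illustration and are not the study's data or the authors' code.

```python
# Hypothetical sketch of a multivariate binary logistic regression producing
# adjusted odds ratios with 95% CIs and Wald p values, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Invented covariates loosely mirroring the variables named in the abstract
df = pd.DataFrame({
    "age_months":     rng.integers(1, 300, n),
    "adi_percentile": rng.integers(1, 100, n),
    "obesity":        rng.integers(0, 2, n),
    "diabetes":       rng.integers(0, 2, n),
    "hypothyroidism": rng.integers(0, 2, n),
})

# Simulate the outcome so the endocrine diagnoses raise the odds of admission
linpred = (-1.0 + 1.5 * df["diabetes"] + 0.8 * df["obesity"]
           + 1.0 * df["hypothyroidism"] + 0.01 * df["adi_percentile"])
df["admitted"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

predictors = ["age_months", "adi_percentile", "obesity", "diabetes", "hypothyroidism"]
X = sm.add_constant(df[predictors].astype(float))   # add intercept term
model = sm.Logit(df["admitted"], X).fit(disp=0)

# Adjusted odds ratios are exp(coefficients); Wald p values come from the fit
ci = model.conf_int()
summary = pd.DataFrame({
    "adj_OR":   np.exp(model.params),
    "ci_lower": np.exp(ci[0]),
    "ci_upper": np.exp(ci[1]),
    "p_value":  model.pvalues,
})
print(summary.round(3))
```

Exponentiating the fitted coefficients and their confidence bounds gives adjusted odds ratios with 95% CIs of the kind reported for inpatient and PICU admission in the article's Tables 2 and 3.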
390 patients were included in the study. Mean age was 123.1 (±82.2) months old. 50.3% of patients were hospitalized, and 12.1% of patients were admitted to intensive care. Preexisting diagnoses of diabetes mellitus, obesity, and hypothyroidism were associated with an increased risk of hospital and ICU admission, independent of socioeconomic status.
RESULTS
This study provides evidence that unvaccinated youth with a preexisting diagnosis of obesity, hypothyroidism, or diabetes mellitus infected with COVID-19 are more likely to have a more severe clinical presentation requiring inpatient hospital admission and/or intensive care.
DISCUSSION/CONCLUSION
[ "Adolescent", "Adult", "Aged, 80 and over", "COVID-19", "Child", "Cross-Sectional Studies", "Diabetes Mellitus, Type 1", "Hospitalization", "Humans", "Hypothyroidism", "Obesity", "Retrospective Studies", "Risk Factors", "SARS-CoV-2" ]
9393781
Introduction
More than 12 million children in the USA have been diagnosed with COVID-19 since the start of the pandemic, representing approximately 18.9% of cumulative cases [1]. Though most children with COVID-19 do not develop severe disease or require specific therapy, children with certain medical conditions may be at increased risk for severe disease, and therefore may be eligible for therapeutic strategies to prevent disease progression [2]. Recent studies have looked at potential risk factors for severe COVID-19 presentation to guide these recommendations and have shown that diabetes mellitus and obesity are endocrine disorders that pose a significant risk for severe disease in both children and adults with COVID-19 [3, 4]. The impact of other preexisting endocrine disorders such as hypothyroidism and adrenal insufficiency on the clinical presentation of COVID-19 in youth has not been evaluated. Such studies may inform practitioners about potential patient groups at increased risk for developing severe infection and guide pediatric-specific recommendations for the treatment of COVID-19. Furthermore, it may help guide prioritization strategies for vaccination and therapeutic interventions in limited resource settings. This study aimed to determine if preexisting endocrine disorders such as adrenal insufficiency, obesity, overweight, diabetes mellitus, and hypothyroidism in youth are associated with higher rates of severe COVID-19.
null
null
Results
As shown in Figure 1, 540 patients had a positive SARS-CoV-2 PCR test at St. Louis Children's Hospital between March 2020 and February 2021. 150 patients were excluded from the analysis: 132 patients were missing BMI data, 11 were admitted to SLCH for either trauma or emergent surgery, and 7 were excluded due to missing socioeconomic data. The remaining 390 patients were included in this study. Population characteristics are shown in Table 1. Mean age was 123.1 months old (SD = 82.2), and 194 (49.7%) of the patients were male. Mean ADI was 63.63 (SD = 25.4). The most prevalent endocrine condition among the patients studied was obesity, followed by overweight and diabetes mellitus (both type 1 and type 2). The mean HbA1c of those with diabetes was 11.0% (SD = 3.2). 158 (40.5%) patients were admitted, with 47 (12.1%) patients admitted to intensive care. Of the 13 patients with a preexisting diagnosis of hypothyroidism, 1 had central hypothyroidism and the rest had primary hypothyroidism. Only 2 of the patients were biochemically hypothyroid at the time of positive COVID-19 testing, and all others were euthyroid. Eleven of the 13 patients were hospitalized, and the 2 patients who were not admitted were euthyroid. Of the patients with endocrinopathies included in the study, 4 patients had coexisting hypothyroidism and adrenal insufficiency, and all 4 of these patients were admitted to the hospital. In addition, one patient had coexisting hypothyroidism and diabetes mellitus, and 2 patients had coexisting diabetes and adrenal insufficiency. National percentile socioeconomic status, diabetes mellitus, obesity, and hypothyroidism were associated with increased odds of inpatient admission (p < 0.05). As shown in Table 2 and Figure 2, after controlling for age, SES, weight, hypothyroidism, and adrenal insufficiency in the multivariate model, children with diabetes had 8.96 times higher odds of admission than children without diabetes. As shown in Table 3 and Figure 3, adrenal insufficiency, diabetes mellitus, hypothyroidism, and obesity were associated with increased odds of PICU admission (p < 0.05). While not statistically significant when considering p values, adrenal insufficiency had an adjusted odds ratio of 1.692 (95% CI: 0.934–15.350), suggesting a possible association between adrenal insufficiency and increased odds of intensive care.
null
null
[ "Statement of Ethics", "Funding Sources", "Author Contributions" ]
[ "This retrospective cohort study was conducted at Washington University School of Medicine and the Saint Louis Children's Hospital (SLCH) in Saint Louis, MO, USA. The study was approved by the University's Human Research Protection Office Committee, approval number 202101185. Chart review was performed on all patients less than 25 years old that had a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021. Written informed consent was not required.", "Funding was obtained through the National Center for Advancing Translational Sciences through the National Institutes of Health (<UL1 TR002345> [to <William Powderly>]). Funding was used to support the CRTC, which N.R.B.'s work was funded by.", "Nicholas R. Banull: formal analysis, investigation, methodology, validation, visualization, writing − original draft, and writing − review & editing. Ana Maria Arbelaez: formal analysis, investigation, methodology, validation, and writing − review & editing. Patrick J. Reich: formal analysis, investigation, methodology, data collection, validation, and writing − review & editing. Dorina Kallogeri: methodology, validation, writing − review & editing, and formal analysis. Carine Anka, Jennifer May, Hope Shimony, and Kathleen Wharton: data collection, investigation, methodology, validation, and writing − review & editing." ]
[ null, null, null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion/Conclusion", "Statement of Ethics", "Conflict of Interest Statement", "Funding Sources", "Author Contributions", "Data Availability Statement" ]
[ "More than 12 million children in the USA have been diagnosed with COVID-19 since the start of the pandemic, representing approximately 18.9% of cumulative cases [1]. Though most children with COVID-19 do not develop severe disease or require specific therapy, children with certain medical conditions may be at increased risk for severe disease, and therefore may be eligible for therapeutic strategies to prevent disease progression [2]. Recent studies have looked at potential risk factors for severe COVID-19 presentation to guide these recommendations and have shown that diabetes mellitus and obesity are endocrine disorders that pose a significant risk for severe disease in both children and adults with COVID-19 [3, 4]. The impact of other preexisting endocrine disorders such as hypothyroidism and adrenal insufficiency on the clinical presentation of COVID-19 in youth has not been evaluated. Such studies may inform practitioners about potential patient groups at increased risk for developing severe infection and guide pediatric-specific recommendations for the treatment of COVID-19. Furthermore, it may help guide prioritization strategies for vaccination and therapeutic interventions in limited resource settings. This study aimed to determine if preexisting endocrine disorders such as adrenal insufficiency, obesity, overweight, diabetes mellitus, and hypothyroidism in youth are associated with higher rates of severe COVID-19.", "This retrospective cohort study was conducted at Washington University School of Medicine and Saint Louis Children's Hospital (SLCH), a 400-bed pediatric hospital with a 40-bed pediatric intensive care unit (PICU), in Saint Louis, MO, USA. The study was approved by the university's Human Research Protection Office Committee. Chart review was performed on all patients less than 25 years old who had a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021.\nAfter chart review, data from the electronic medical record (EPIC) was collected for analysis. These measures included: age, sex, patient's address, BMI percentile, inpatient hospitalization or admission to the PICU, and the history of a preexisting endocrine diagnosis such as diabetes mellitus (type 1 and type 2), adrenal insufficiency, and hypothyroidism. The diagnosis of endocrinopathy in all patients included in this study preceded infection with COVID-19. Patients with preexisting primary or central hypothyroidism, regardless of whether they presented to the hospital biochemically euthyroid, were classified as having a diagnosis of hypothyroidism in this study. Patients were classified as having a history of adrenal insufficiency if they had a preexisting diagnosis of primary or central adrenal insufficiency or if they had a history of chronic use of high dose oral steroids that could cause adrenal suppression and be at risk for disturbed stress response during infectious stress [5]. Patients were classified as normal weight, overweight, or obese based on the Centers for Disease Control and Prevention definition [6]. Obese patients were defined as having a BMI percentile greater than 95, and overweight patients were defined as having a BMI percentile greater than 85 but less than 95. BMI percentiles were calculated using the PediTools Electronic Growth Chart Calculators with the 2000 CDC growth chart used for reference [7, 8]. 
To provide continuity from childhood to adulthood, we used the age 20 reference to calculate BMI percentiles when individuals were 20 years of age or older, as others have done [9, 10, 11]. Socioeconomic status was calculated using the patient's address and the Neighborhood Atlas® software from the University of Wisconsin [12]. This software uses the 2018 American Community Survey to generate a National Percentile Area Deprivation Index (ADI), which allows for an objective comparison of socioeconomic status across the USA. Severity of COVID-19 disease was determined by the need for inpatient admission and PICU admission, which is similar to the definitions in other literature [4].\nDescriptive statistics were performed on all variables. Mean, standard deviations, and confidence intervals were calculated. To examine the association between patients admitted to inpatient care, or PICU, with a preexisting endocrine disorder, binary logistic regression was performed. Two outcome measures were analyzed in multivariate analysis: inpatient admission and PICU admission. ADI score, obesity, diabetes mellitus, adrenal insufficiency, age, and hypothyroidism were included in binary logistic regression due to either statistical significance or clinical importance. Adjusted odds ratios with 95% CI were calculated, along with p values after Wald χ2 analysis. Results were interpreted as statistically significant if p < 0.05 in two-tailed tests. Analyses were performed with statistical package SPSS© Version 27 for Mac. Graphics were produced using Adobe Illustrator and PRISM.", "As shown in Figure 1, 540 patients had a positive SARS-CoV-2 PCR test at St. Louis Children's Hospital between March 2020 and February 2021. 150 patients were excluded from the analysis: 132 patients were missing BMI data, 11 were admitted to SLCH for either trauma or emergent surgery, and 7 were excluded due to missing socioeconomic data.\nThe remaining 390 patients were included in this study. Population characteristics are shown in Table 1. Mean age was 123.1 months old (SD = 82.2), and 194 (49.7%) of the patients were male. Mean ADI was 63.63 (SD = 25.4) The most prevalent endocrine condition among the patients studied was obesity, followed by overweight and diabetes mellitus (both type 1 and type 2). The mean HbA1c of those with diabetes was 11.0% (SD = 3.2.). 158 (40.5%) patients were admitted, with 47 (12.1%) patients admitted to intensive care. Of the 13 patients with a preexisting diagnosis of hypothyroidism, 1 had central hypothyroidism and the rest had primary hypothyroidism. Only 2 of the patients were biochemically hypothyroid at the time of positive COVID-19 testing, and all others were euthyroid. Eleven of the 13 patients were hospitalized, and the two nonadmitted were euthyroid. Of the patients with endocrinopathies included in the study, 4 patients had coexisting hypothyroidism and adrenal insufficiency, and all 4 of these patients were admitted to the hospital. In addition, one patient had coexisting hypothyroidism and diabetes mellitus, and 2 patients had coexisting diabetes and adrenal insufficiency. National percentile socioeconomic status, diabetes mellitus, obesity, and hypothyroidism were associated with an increased odds of inpatient admission (p < 0.05). As shown in Table 2 and Figure 2, after controlling for age, SES, weight, hypothyroidism, and adrenal insufficiency in the multivariate model, children with diabetes had 8.96 times increased odds of being admitted than kids without diabetes. 
As shown in Table 3 and Figure 3, adrenal insufficiency, diabetes mellitus, hypothyroidism, and obesity were associated with an increased odds of PICU admission (p < 0.05). While not statistically significant when considering p values, adrenal insufficiency had an adjusted odds ratio of 1.692 (95% CI: 0.934–15.350), suggesting a possible association between adrenal insufficiency and an increased odds of intensive care.", "Throughout 2020 and 2021, COVID-19 was a leading cause of death in the USA, with a high prevalence of disease in the pediatric population [1, 13]. COVID-19 hospitalization rates in adolescents increased 5-fold over the summer of 2021 with unvaccinated patients making up most admissions [14]. With this increasing number of hospitalized patients, there is an urgent need for a better understanding of predictors of disease severity to determine populations at risk that will benefit from preventative and therapeutic interventions to decrease hospitalization and mortality rates. This study supports the established association between obesity and poorly controlled diabetes mellitus, with severity of COVID-19 presentation in adults [15, 16], and it further demonstrates that pediatric patients with COVID-19 who had a preexisting diagnosis of obesity, hypothyroidism, or diabetes mellitus were more likely to have a severe clinical presentation requiring inpatient hospital admission. This association was independent of socioeconomic status. Furthermore, it demonstrates that pediatric patients with diabetes mellitus, hypothyroidism, and obesity were more likely to require intensive care. Though prior studies have discussed the association between severe COVID-19 disease and preexisting health conditions, many were based on billing code data, and the prevalence of endocrine disorders, such as obesity and diabetes, was not representative of those in the US pediatric population [17, 18, 19]; hence, this study provides evidence that preexisting endocrinopathies are risk factors for predisposing youth to severe COVID-19.\nObesity rates in children and adolescents have been increasing over the past few decades, with a current prevalence of approximately 19.3% in the general population [18]. This trend has been driven by socioeconomic status, with obesity prevalence being higher in low-income communities [20]. The prevalence of obesity in this study population was 23.6%, which is similar to national disease rates [18]. This study confirms prior data demonstrating that obesity is a risk factor for COVID-19 disease severity [4]. Children with obesity, but not overweight, were twice as likely to be admitted to the hospital with COVID-19. However, this association was present independent of socioeconomic status/residence in areas with an increased area deprivation index, age, or sex. Similarly, increased rates of inpatient admission due to other infectious diseases, such as H1N1 and bacterial infections, have been observed among obese populations [21, 22]. Chronic inflammation and altered immunity seen in patients with obesity may explain this relationship [23]. Furthermore, this data emphasizes the importance of strategies to mitigate COVID-19 in pediatric communities with higher rates of obesity regardless of socioeconomic status.\nIncreased risk for comorbidities and serious infections have been documented in patients with either type 1 or type 2 diabetes mellitus [24]. This study confirmed findings by Kompaniyets et al. 
[4] that showed increased hospitalization and ICU admission rates in children with diabetes mellitus and COVID-19. This association could be attributed to poor glycemic control, given that patients with diabetes mellitus in our study had an HbA1c higher than the target HbA1c recommended by the American Diabetes Association [25]. Even though some studies have demonstrated a clear relationship between poor glycemic control and lower socioeconomic status, our study failed to demonstrate that disease severity among patients with diabetes mellitus was dependent on socioeconomic status [26, 27]. This data emphasizes the importance of identifying and applying interventions that help children and young adults with diabetes mellitus reduce their risk for severe COVID-19 disease.\nThere is a paucity of data related to the association of underlying hypothyroidism with COVID-19 disease severity. Two prior studies in adults concluded that hypothyroidism does not predispose patients to a higher risk of hospitalization, ICU admission, or death [28, 29]. However, this has not previously been evaluated in COVID-19 pediatric cohort studies to date. This study showed that youth with hypothyroidism and COVID-19 were 6.9 times more likely to be admitted to a hospital and 11 times more likely to require intensive care than the general population. This data suggests that preexisting thyroid disease will likely make patients more vulnerable to a more severe COVID-19 clinical presentation. The multivariate model in this study accounted for the coexistence of hypothyroidism among patients with diabetes mellitus. Therefore, this data suggests that hypothyroidism is an independent risk factor for disease severity among pediatric patients. The increased risk of inpatient and PICU admission was independent of age, sex, BMI, or SES. Because thyroid hormone regulates multiple organ systems, different mechanisms have been speculated to be involved in the relationship between SARS-CoV-2 and thyroid illness [30]. Though the results from this study suggest that pediatric patients with hypothyroidism are at an increased risk of developing a more severe COVID-19 clinical presentation, it is unclear if preexisting thyroid disease increases their susceptibility to COVID-19 infection.\nPatients with adrenal suppression may be at higher risk of presenting with an adrenal crisis during an acute illness, such as COVID-19 [31]. However, there is currently no clear evidence that adrenal insufficiency patients are more likely to develop a severe course of disease. A study of adult COVID-19 patients early in the pandemic did not demonstrate an association between severe disease and adrenal insufficiency [32]. An abstract presented at the Endocrine Society suggested that youth with adrenal insufficiency and COVID-19 had higher rates of mortality and a more severe clinical presentation [33]. Though our study did not support that claim, prompt stress dosing (as used by the patients in this study) is key to prevent adrenal crises during COVID-19 illness. Further studies with larger sample size are needed to better understand the potential association between history of adrenal insufficiency and COVID-19 disease severity.\nLower socioeconomic status is often associated with limited healthcare access, higher rates of diabetes mellitus and cardiovascular disease, increased utilization of health services, and early death [34]. 
Likewise, lower socioeconomic status may also lead to increased exposure to COVID-19, placing much of the burden of the pandemic on the most disadvantaged communities [26]. In this study, we used ADI scores as a measure of socioeconomic status. ADI is based on census data that has been validated, allowing for an objective ranking of residence by degree of social deprivation. Patients in this study resided in neighborhoods that were more disadvantaged than the national average. Though ADI percentiles were higher in patients that were hospitalized or required intensive care, ADI was not a predictor of disease severity. This suggests that severe disease presentation is driven by BMI, diabetes mellitus, and hypothyroidism and not by socioeconomic status. Given the lower SES of the entire study population, findings from this study highlight the critical need for improved healthcare delivery and disease prevention strategies.\nThe findings in this study are subject to several limitations. As shown in Figure 1, 150 patients from the original sample were excluded from analysis for missing BMI data, admissions primarily for reasons other than COVID-19, and missing ADI scores. Though the exclusion of these patients may have limited the sample size, it prevented the overestimation of COVID-19 hospitalization rates. Endocrine disorders were present in a small number of patients, leading to wide 95% confidence intervals in multivariable analysis. However, the lower 95% confidence intervals for diabetes mellitus, hypothyroidism, and obesity were well above 1, suggesting that our conclusions may be generalizable and that those patients are still at elevated risk for a more severe clinical COVID-19 presentation. Despite the small number of patients with endocrinopathies, diabetes mellitus, adrenal insufficiency, and hypothyroidism had slightly higher prevalence in our study than reported rates in the general population [17, 18, 19]. This may be because SLCH is a tertiary medical center, where many patients with a higher prevalence of comorbidities are transferred from regional hospitals to receive a higher level of care. In this study, few patients had coexisting multiple endocrine disorders. While the binary logistic regression analysis accounts for these co-diagnoses, further statistical analyses in this study are limited due to the small sample size of this cohort. Thus, additional studies of this subset of patients are necessary. Endocrine conditions in this study were not subclassified by etiology; therefore, specific associations between the etiology of endocrinopathies (type 1 DM vs. type 2, or central vs. primary hypothyroidism, primary vs. secondary adrenal insufficiency, etc.) and COVID-19 disease severity may have been missed. Finally, data presented in this study predates COVID-19 vaccinations in youth; therefore, associations between endocrine disorders and COVID-19 severity presented in this study may not extrapolate to a vaccinated population.\nThis study highlights that presence of preexisting diagnosis of hypothyroidism, diabetes mellitus, and obesity predisposes unvaccinated children and adolescents to a more severe clinical presentation of COVID-19. Data from this study suggest that practitioners should consider patients with these endocrinopathies for available therapeutic and preventive strategies that decrease disease progression. 
Further studies are needed to analyze these relationships in the vaccinated population.", "This retrospective cohort study was conducted at Washington University School of Medicine and the Saint Louis Children's Hospital (SLCH) in Saint Louis, MO, USA. The study was approved by the University's Human Research Protection Office Committee, approval number 202101185. Chart review was performed on all patients less than 25 years old that had a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021. Written informed consent was not required.", "The authors have no conflicts of interest to declare.", "Funding was obtained through the National Center for Advancing Translational Sciences through the National Institutes of Health (<UL1 TR002345> [to <William Powderly>]). Funding was used to support the CRTC, which N.R.B.'s work was funded by.", "Nicholas R. Banull: formal analysis, investigation, methodology, validation, visualization, writing − original draft, and writing − review & editing. Ana Maria Arbelaez: formal analysis, investigation, methodology, validation, and writing − review & editing. Patrick J. Reich: formal analysis, investigation, methodology, data collection, validation, and writing − review & editing. Dorina Kallogeri: methodology, validation, writing − review & editing, and formal analysis. Carine Anka, Jennifer May, Hope Shimony, and Kathleen Wharton: data collection, investigation, methodology, validation, and writing − review & editing.", "Requests for de-identified data can be made by contacting the corresponding author (A.M.A.)." ]
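The Methods text above describes fitting binary logistic regressions in SPSS for two outcomes (inpatient admission and PICU admission), with ADI score, obesity, diabetes mellitus, adrenal insufficiency, age, and hypothyroidism as covariates, and reporting adjusted odds ratios with 95% CIs and Wald p values. Purely as an illustration, the sketch below shows how an equivalent model could be fit in Python with statsmodels; the dataframe layout and column names are assumptions, not the study's actual variables.

```python
# Illustrative only -- the study used SPSS. A comparable adjusted-odds-ratio
# analysis in Python/statsmodels, assuming a dataframe `df` with hypothetical
# columns: binary outcomes ('inpatient_admission', 'picu_admission'), binary
# 0/1 predictors ('obese', 'diabetes', 'adrenal_insufficiency',
# 'hypothyroidism'), and continuous 'age_months' and 'adi_percentile'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_odds_ratios(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Binary logistic regression; returns adjusted ORs, 95% CIs, and Wald p values."""
    formula = (
        f"{outcome} ~ age_months + adi_percentile + obese + diabetes"
        " + adrenal_insufficiency + hypothyroidism"
    )
    fit = smf.logit(formula, data=df).fit(disp=False)  # maximum-likelihood fit
    ci = np.exp(fit.conf_int())                        # exponentiate to the odds-ratio scale
    return pd.DataFrame(
        {
            "adj_OR": np.exp(fit.params),
            "ci_lower": ci[0],
            "ci_upper": ci[1],
            "wald_p": fit.pvalues,  # p values from Wald (z / chi-square) tests
        }
    )

# Example usage (one model per outcome, mirroring the two outcome measures):
# print(adjusted_odds_ratios(df, "inpatient_admission"))
# print(adjusted_odds_ratios(df, "picu_admission"))
```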
[ "intro", "materials|methods", "results", "discussion|conclusions", null, "COI-statement", null, null, "data-availability" ]
[ "COVID-19", "Obesity", "Diabetes", "Adrenal insufficiency", "Hypothyroidism" ]
Introduction: More than 12 million children in the USA have been diagnosed with COVID-19 since the start of the pandemic, representing approximately 18.9% of cumulative cases [1]. Though most children with COVID-19 do not develop severe disease or require specific therapy, children with certain medical conditions may be at increased risk for severe disease, and therefore may be eligible for therapeutic strategies to prevent disease progression [2]. Recent studies have looked at potential risk factors for severe COVID-19 presentation to guide these recommendations and have shown that diabetes mellitus and obesity are endocrine disorders that pose a significant risk for severe disease in both children and adults with COVID-19 [3, 4]. The impact of other preexisting endocrine disorders such as hypothyroidism and adrenal insufficiency on the clinical presentation of COVID-19 in youth has not been evaluated. Such studies may inform practitioners about potential patient groups at increased risk for developing severe infection and guide pediatric-specific recommendations for the treatment of COVID-19. Furthermore, it may help guide prioritization strategies for vaccination and therapeutic interventions in limited resource settings. This study aimed to determine if preexisting endocrine disorders such as adrenal insufficiency, obesity, overweight, diabetes mellitus, and hypothyroidism in youth are associated with higher rates of severe COVID-19. Materials and Methods: This retrospective cohort study was conducted at Washington University School of Medicine and Saint Louis Children's Hospital (SLCH), a 400-bed pediatric hospital with a 40-bed pediatric intensive care unit (PICU), in Saint Louis, MO, USA. The study was approved by the university's Human Research Protection Office Committee. Chart review was performed on all patients less than 25 years old who had a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021. After chart review, data from the electronic medical record (EPIC) was collected for analysis. These measures included: age, sex, patient's address, BMI percentile, inpatient hospitalization or admission to the PICU, and the history of a preexisting endocrine diagnosis such as diabetes mellitus (type 1 and type 2), adrenal insufficiency, and hypothyroidism. The diagnosis of endocrinopathy in all patients included in this study preceded infection with COVID-19. Patients with preexisting primary or central hypothyroidism, regardless of whether they presented to the hospital biochemically euthyroid, were classified as having a diagnosis of hypothyroidism in this study. Patients were classified as having a history of adrenal insufficiency if they had a preexisting diagnosis of primary or central adrenal insufficiency or if they had a history of chronic use of high dose oral steroids that could cause adrenal suppression and be at risk for disturbed stress response during infectious stress [5]. Patients were classified as normal weight, overweight, or obese based on the Centers for Disease Control and Prevention definition [6]. Obese patients were defined as having a BMI percentile greater than 95, and overweight patients were defined as having a BMI percentile greater than 85 but less than 95. BMI percentiles were calculated using the PediTools Electronic Growth Chart Calculators with the 2000 CDC growth chart used for reference [7, 8]. 
To provide continuity from childhood to adulthood, we used the age 20 reference to calculate BMI percentiles when individuals were 20 years of age or older, as others have done [9, 10, 11]. Socioeconomic status was calculated using the patient's address and the Neighborhood Atlas® software from the University of Wisconsin [12]. This software uses the 2018 American Community Survey to generate a National Percentile Area Deprivation Index (ADI), which allows for an objective comparison of socioeconomic status across the USA. Severity of COVID-19 disease was determined by the need for inpatient admission and PICU admission, which is similar to the definitions in other literature [4]. Descriptive statistics were performed on all variables. Means, standard deviations, and confidence intervals were calculated. To examine the association between admission to inpatient care or the PICU and a preexisting endocrine disorder, binary logistic regression was performed. Two outcome measures were analyzed in multivariate analysis: inpatient admission and PICU admission. ADI score, obesity, diabetes mellitus, adrenal insufficiency, age, and hypothyroidism were included in binary logistic regression due to either statistical significance or clinical importance. Adjusted odds ratios with 95% CI were calculated, along with p values after Wald χ2 analysis. Results were interpreted as statistically significant if p < 0.05 in two-tailed tests. Analyses were performed with statistical package SPSS© Version 27 for Mac. Graphics were produced using Adobe Illustrator and PRISM. Results: As shown in Figure 1, 540 patients had a positive SARS-CoV-2 PCR test at St. Louis Children's Hospital between March 2020 and February 2021. 150 patients were excluded from the analysis: 132 patients were missing BMI data, 11 were admitted to SLCH for either trauma or emergent surgery, and 7 were excluded due to missing socioeconomic data. The remaining 390 patients were included in this study. Population characteristics are shown in Table 1. Mean age was 123.1 months old (SD = 82.2), and 194 (49.7%) of the patients were male. Mean ADI was 63.63 (SD = 25.4). The most prevalent endocrine condition among the patients studied was obesity, followed by overweight and diabetes mellitus (both type 1 and type 2). The mean HbA1c of those with diabetes was 11.0% (SD = 3.2). 158 (40.5%) patients were admitted, with 47 (12.1%) patients admitted to intensive care. Of the 13 patients with a preexisting diagnosis of hypothyroidism, 1 had central hypothyroidism and the rest had primary hypothyroidism. Only 2 of the patients were biochemically hypothyroid at the time of positive COVID-19 testing, and all others were euthyroid. Eleven of the 13 patients were hospitalized, and the two nonadmitted were euthyroid. Of the patients with endocrinopathies included in the study, 4 patients had coexisting hypothyroidism and adrenal insufficiency, and all 4 of these patients were admitted to the hospital. In addition, one patient had coexisting hypothyroidism and diabetes mellitus, and 2 patients had coexisting diabetes and adrenal insufficiency. National percentile socioeconomic status, diabetes mellitus, obesity, and hypothyroidism were associated with an increased odds of inpatient admission (p < 0.05). As shown in Table 2 and Figure 2, after controlling for age, SES, weight, hypothyroidism, and adrenal insufficiency in the multivariate model, children with diabetes had 8.96 times the odds of being admitted compared with children without diabetes.
As shown in Table 3 and Figure 3, adrenal insufficiency, diabetes mellitus, hypothyroidism, and obesity were associated with an increased odds of PICU admission (p < 0.05). While not statistically significant when considering p values, adrenal insufficiency had an adjusted odds ratio of 1.692 (95% CI: 0.934–15.350), suggesting a possible association between adrenal insufficiency and an increased odds of intensive care. Discussion/Conclusion: Throughout 2020 and 2021, COVID-19 was a leading cause of death in the USA, with a high prevalence of disease in the pediatric population [1, 13]. COVID-19 hospitalization rates in adolescents increased 5-fold over the summer of 2021 with unvaccinated patients making up most admissions [14]. With this increasing number of hospitalized patients, there is an urgent need for a better understanding of predictors of disease severity to determine populations at risk that will benefit from preventative and therapeutic interventions to decrease hospitalization and mortality rates. This study supports the established association between obesity and poorly controlled diabetes mellitus, with severity of COVID-19 presentation in adults [15, 16], and it further demonstrates that pediatric patients with COVID-19 who had a preexisting diagnosis of obesity, hypothyroidism, or diabetes mellitus were more likely to have a severe clinical presentation requiring inpatient hospital admission. This association was independent of socioeconomic status. Furthermore, it demonstrates that pediatric patients with diabetes mellitus, hypothyroidism, and obesity were more likely to require intensive care. Though prior studies have discussed the association between severe COVID-19 disease and preexisting health conditions, many were based on billing code data, and the prevalence of endocrine disorders, such as obesity and diabetes, was not representative of those in the US pediatric population [17, 18, 19]; hence, this study provides evidence that preexisting endocrinopathies are risk factors for predisposing youth to severe COVID-19. Obesity rates in children and adolescents have been increasing over the past few decades, with a current prevalence of approximately 19.3% in the general population [18]. This trend has been driven by socioeconomic status, with obesity prevalence being higher in low-income communities [20]. The prevalence of obesity in this study population was 23.6%, which is similar to national disease rates [18]. This study confirms prior data demonstrating that obesity is a risk factor for COVID-19 disease severity [4]. Children with obesity, but not overweight, were twice as likely to be admitted to the hospital with COVID-19. However, this association was present independent of socioeconomic status/residence in areas with an increased area deprivation index, age, or sex. Similarly, increased rates of inpatient admission due to other infectious diseases, such as H1N1 and bacterial infections, have been observed among obese populations [21, 22]. Chronic inflammation and altered immunity seen in patients with obesity may explain this relationship [23]. Furthermore, this data emphasizes the importance of strategies to mitigate COVID-19 in pediatric communities with higher rates of obesity regardless of socioeconomic status. Increased risk for comorbidities and serious infections have been documented in patients with either type 1 or type 2 diabetes mellitus [24]. This study confirmed findings by Kompaniyets et al. 
[4] that showed increased hospitalization and ICU admission rates in children with diabetes mellitus and COVID-19. This association could be attributed to poor glycemic control, given that patients with diabetes mellitus in our study had an HbA1c higher than the target HbA1c recommended by the American Diabetes Association [25]. Even though some studies have demonstrated a clear relationship between poor glycemic control and lower socioeconomic status, our study failed to demonstrate that disease severity among patients with diabetes mellitus was dependent on socioeconomic status [26, 27]. This data emphasizes the importance of identifying and applying interventions that help children and young adults with diabetes mellitus reduce their risk for severe COVID-19 disease. There is a paucity of data related to the association of underlying hypothyroidism with COVID-19 disease severity. Two prior studies in adults concluded that hypothyroidism does not predispose patients to a higher risk of hospitalization, ICU admission, or death [28, 29]. However, this has not previously been evaluated in COVID-19 pediatric cohort studies to date. This study showed that youth with hypothyroidism and COVID-19 were 6.9 times more likely to be admitted to a hospital and 11 times more likely to require intensive care than the general population. This data suggests that preexisting thyroid disease will likely make patients more vulnerable to a more severe COVID-19 clinical presentation. The multivariate model in this study accounted for the coexistence of hypothyroidism among patients with diabetes mellitus. Therefore, this data suggests that hypothyroidism is an independent risk factor for disease severity among pediatric patients. The increased risk of inpatient and PICU admission was independent of age, sex, BMI, or SES. Because thyroid hormone regulates multiple organ systems, different mechanisms have been speculated to be involved in the relationship between SARS-CoV-2 and thyroid illness [30]. Though the results from this study suggest that pediatric patients with hypothyroidism are at an increased risk of developing a more severe COVID-19 clinical presentation, it is unclear if preexisting thyroid disease increases their susceptibility to COVID-19 infection. Patients with adrenal suppression may be at higher risk of presenting with an adrenal crisis during an acute illness, such as COVID-19 [31]. However, there is currently no clear evidence that adrenal insufficiency patients are more likely to develop a severe course of disease. A study of adult COVID-19 patients early in the pandemic did not demonstrate an association between severe disease and adrenal insufficiency [32]. An abstract presented at the Endocrine Society suggested that youth with adrenal insufficiency and COVID-19 had higher rates of mortality and a more severe clinical presentation [33]. Though our study did not support that claim, prompt stress dosing (as used by the patients in this study) is key to prevent adrenal crises during COVID-19 illness. Further studies with larger sample size are needed to better understand the potential association between history of adrenal insufficiency and COVID-19 disease severity. Lower socioeconomic status is often associated with limited healthcare access, higher rates of diabetes mellitus and cardiovascular disease, increased utilization of health services, and early death [34]. 
Likewise, lower socioeconomic status may also lead to increased exposure to COVID-19, placing much of the burden of the pandemic on the most disadvantaged communities [26]. In this study, we used ADI scores as a measure of socioeconomic status. ADI is based on census data that has been validated, allowing for an objective ranking of residence by degree of social deprivation. Patients in this study resided in neighborhoods that were more disadvantaged than the national average. Though ADI percentiles were higher in patients that were hospitalized or required intensive care, ADI was not a predictor of disease severity. This suggests that severe disease presentation is driven by BMI, diabetes mellitus, and hypothyroidism and not by socioeconomic status. Given the lower SES of the entire study population, findings from this study highlight the critical need for improved healthcare delivery and disease prevention strategies. The findings in this study are subject to several limitations. As shown in Figure 1, 150 patients from the original sample were excluded from analysis for missing BMI data, admissions primarily for reasons other than COVID-19, and missing ADI scores. Though the exclusion of these patients may have limited the sample size, it prevented the overestimation of COVID-19 hospitalization rates. Endocrine disorders were present in a small number of patients, leading to wide 95% confidence intervals in multivariable analysis. However, the lower 95% confidence intervals for diabetes mellitus, hypothyroidism, and obesity were well above 1, suggesting that our conclusions may be generalizable and that those patients are still at elevated risk for a more severe clinical COVID-19 presentation. Despite the small number of patients with endocrinopathies, diabetes mellitus, adrenal insufficiency, and hypothyroidism had slightly higher prevalence in our study than reported rates in the general population [17, 18, 19]. This may be because SLCH is a tertiary medical center, where many patients with a higher prevalence of comorbidities are transferred from regional hospitals to receive a higher level of care. In this study, few patients had coexisting multiple endocrine disorders. While the binary logistic regression analysis accounts for these co-diagnoses, further statistical analyses in this study are limited due to the small sample size of this cohort. Thus, additional studies of this subset of patients are necessary. Endocrine conditions in this study were not subclassified by etiology; therefore, specific associations between the etiology of endocrinopathies (type 1 DM vs. type 2, or central vs. primary hypothyroidism, primary vs. secondary adrenal insufficiency, etc.) and COVID-19 disease severity may have been missed. Finally, data presented in this study predates COVID-19 vaccinations in youth; therefore, associations between endocrine disorders and COVID-19 severity presented in this study may not extrapolate to a vaccinated population. This study highlights that presence of preexisting diagnosis of hypothyroidism, diabetes mellitus, and obesity predisposes unvaccinated children and adolescents to a more severe clinical presentation of COVID-19. Data from this study suggest that practitioners should consider patients with these endocrinopathies for available therapeutic and preventive strategies that decrease disease progression. Further studies are needed to analyze these relationships in the vaccinated population. 
Statement of Ethics: This retrospective cohort study was conducted at Washington University School of Medicine and the Saint Louis Children's Hospital (SLCH) in Saint Louis, MO, USA. The study was approved by the University's Human Research Protection Office Committee, approval number 202101185. Chart review was performed on all patients less than 25 years old that had a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021. Written informed consent was not required. Conflict of Interest Statement: The authors have no conflicts of interest to declare. Funding Sources: Funding was obtained through the National Center for Advancing Translational Sciences through the National Institutes of Health (<UL1 TR002345> [to <William Powderly>]). Funding was used to support the CRTC, which N.R.B.'s work was funded by. Author Contributions: Nicholas R. Banull: formal analysis, investigation, methodology, validation, visualization, writing − original draft, and writing − review & editing. Ana Maria Arbelaez: formal analysis, investigation, methodology, validation, and writing − review & editing. Patrick J. Reich: formal analysis, investigation, methodology, data collection, validation, and writing − review & editing. Dorina Kallogeri: methodology, validation, writing − review & editing, and formal analysis. Carine Anka, Jennifer May, Hope Shimony, and Kathleen Wharton: data collection, investigation, methodology, validation, and writing − review & editing. Data Availability Statement: Requests for de-identified data can be made by contacting the corresponding author (A.M.A.).
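The weight-status definitions in the Methods above (obese: BMI percentile greater than the 95th; overweight: greater than the 85th but less than the 95th) translate directly into a small classification rule. The helper below is a hypothetical illustration of those stated cutoffs, not code from the study; the percentile itself would come from a growth-chart calculation such as the PediTools tool cited in the text.

```python
# Hypothetical helper, not from the study: assign weight status from a CDC
# BMI-for-age percentile using the cutoffs stated in the Methods
# (obese: >95th percentile; overweight: >85th and <95th percentile).
def weight_category(bmi_percentile: float) -> str:
    """Return 'obese', 'overweight', or 'normal weight' for a percentile in [0, 100]."""
    if not 0.0 <= bmi_percentile <= 100.0:
        raise ValueError("BMI percentile must be between 0 and 100")
    if bmi_percentile > 95.0:
        return "obese"
    if bmi_percentile > 85.0:
        return "overweight"
    return "normal weight"

# Examples: weight_category(97.2) -> 'obese'; weight_category(88.0) -> 'overweight'
```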
Background: Though severe illness due to COVID-19 is uncommon in children, there is an urgent need to better determine the risk factors for disease severity in youth. This study aims to determine the impact a preexisting endocrine diagnosis has on severity of COVID-19 presentation in youth. Methods: The cross-sectional chart review study included all patients less than 25 years old with a positive SARS-CoV-2 PCR at St. Louis Children's Hospital between March 2020 and February 2021. Electronic medical record data for analysis included patient demographics, BMI percentile, inpatient hospitalization or admission to the pediatric intensive care unit (PICU), and the presence of a preexisting endocrine diagnosis such as diabetes mellitus (type 1 and type 2), adrenal insufficiency, and hypothyroidism. Two outcome measures were analyzed in multivariate analysis: inpatient admission and PICU admission. Adjusted odds ratios with a 95% CI were calculated using binary logistic regression, along with p values after Wald χ2 analysis. Results: 390 patients were included in the study. Mean age was 123.1 (±82.2) months old. 50.3% of patients were hospitalized, and 12.1% of patients were admitted to intensive care. Preexisting diagnoses of diabetes mellitus, obesity, and hypothyroidism were associated with an increased risk of hospital and ICU admission, independent of socioeconomic status. Conclusions: This study provides evidence that unvaccinated youth with a preexisting diagnosis of obesity, hypothyroidism, or diabetes mellitus infected with COVID-19 are more likely to have a more severe clinical presentation requiring inpatient hospital admission and/or intensive care.
null
null
3,287
293
[ 88, 47, 116 ]
9
[ "patients", "19", "covid", "covid 19", "study", "diabetes", "hypothyroidism", "disease", "diabetes mellitus", "adrenal" ]
[ "hypothyroidism covid 19", "endocrine disorders covid", "covid 19 pediatric", "covid 19 obesity", "youth hypothyroidism covid" ]
null
null
null
[CONTENT] COVID-19 | Obesity | Diabetes | Adrenal insufficiency | Hypothyroidism [SUMMARY]
null
[CONTENT] COVID-19 | Obesity | Diabetes | Adrenal insufficiency | Hypothyroidism [SUMMARY]
null
[CONTENT] COVID-19 | Obesity | Diabetes | Adrenal insufficiency | Hypothyroidism [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged, 80 and over | COVID-19 | Child | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Hospitalization | Humans | Hypothyroidism | Obesity | Retrospective Studies | Risk Factors | SARS-CoV-2 [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged, 80 and over | COVID-19 | Child | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Hospitalization | Humans | Hypothyroidism | Obesity | Retrospective Studies | Risk Factors | SARS-CoV-2 [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged, 80 and over | COVID-19 | Child | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Hospitalization | Humans | Hypothyroidism | Obesity | Retrospective Studies | Risk Factors | SARS-CoV-2 [SUMMARY]
null
[CONTENT] hypothyroidism covid 19 | endocrine disorders covid | covid 19 pediatric | covid 19 obesity | youth hypothyroidism covid [SUMMARY]
null
[CONTENT] hypothyroidism covid 19 | endocrine disorders covid | covid 19 pediatric | covid 19 obesity | youth hypothyroidism covid [SUMMARY]
null
[CONTENT] hypothyroidism covid 19 | endocrine disorders covid | covid 19 pediatric | covid 19 obesity | youth hypothyroidism covid [SUMMARY]
null
[CONTENT] patients | 19 | covid | covid 19 | study | diabetes | hypothyroidism | disease | diabetes mellitus | adrenal [SUMMARY]
null
[CONTENT] patients | 19 | covid | covid 19 | study | diabetes | hypothyroidism | disease | diabetes mellitus | adrenal [SUMMARY]
null
[CONTENT] patients | 19 | covid | covid 19 | study | diabetes | hypothyroidism | disease | diabetes mellitus | adrenal [SUMMARY]
null
[CONTENT] severe | 19 | covid | covid 19 | guide | disease | risk | disorders | endocrine disorders | severe disease [SUMMARY]
null
[CONTENT] patients | diabetes | hypothyroidism | odds | increased odds | adrenal | insufficiency | adrenal insufficiency | admitted | shown table [SUMMARY]
null
[CONTENT] patients | 19 | covid | covid 19 | study | hypothyroidism | diabetes | disease | severe | adrenal [SUMMARY]
null
[CONTENT] COVID-19 ||| COVID-19 [SUMMARY]
null
[CONTENT] 390 ||| 123.1 | ±82.2 | months old ||| 50.3% | 12.1% ||| ICU [SUMMARY]
null
[CONTENT] COVID-19 ||| COVID-19 ||| less than 25 years old | St. Louis Children's Hospital | between March 2020 | February 2021 ||| BMI | PICU | 1 | 2 ||| Two ||| 95% | CI | Wald χ2 ||| 390 ||| 123.1 | ±82.2 | months old ||| 50.3% | 12.1% ||| ICU ||| COVID-19 [SUMMARY]
null
Adult women's blood mercury concentrations vary regionally in the United States: association with patterns of fish consumption (NHANES 1999-2004).
19165386
The current, continuous National Health and Nutrition Examination Survey (NHANES) has included blood mercury (BHg) and fish/shellfish consumption since it began in 1999. NHANES 1999-2004 data form the basis for these analyses.
BACKGROUND
We performed univariate and bivariate analyses to determine the distribution of BHg and fish consumption in the population and to investigate differences by geography, race/ethnicity, and income. We used multivariate analysis (regression) to determine the strongest predictors of BHg among geography, demographic factors, and fish consumption.
METHODS
Elevated BHg occurred more commonly among women of childbearing age living in coastal areas of the United States (approximately one in six women). Regionally, exposures differ across the United States: Northeast > South and West > Midwest. Asian women and women with higher income ate more fish and had higher BHg. Time-trend analyses identified reduced BHg and reduced intake of Hg in the upper percentiles without an overall reduction of fish consumption.
RESULTS
BHg is associated with income, ethnicity, residence (census region and coastal proximity). From 1999 through 2004, BHg decreased without a concomitant decrease in fish consumption. Data are consistent with a shift over this time period in fish species in women's diets.
CONCLUSIONS
[ "Adult", "Animals", "Diet", "Ethnicity", "Female", "Humans", "Mercury", "Nutrition Surveys", "Seafood", "United States" ]
2627864
null
null
null
null
Results
Table 1 shows the distributions of estimates of the number of women with BHg concentrations > 3.5 μg/L and > 5.8 μg/L, by region. Analyses indicate that between 1999 and 2004, the Northeast had the highest percentage of women with BHg concentrations above the 3.5 μg/L level of concern (> 19%), whereas the South had the largest estimated number of women (1.21 million) with ≥ 3.5 μg/L BHg because of elevated population in that region. Geometric means (Figure 1) show similar trends, with the highest BHg concentrations in the Northeast, followed by the West, South, and Midwest census regions. In the Northeast, the highest 5% of BHg concentrations exceeded 8.2 μg/L. [For full distributions, see Supplemental Material, Table 2 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] When we included coastal regions in this analysis, additional spatial heterogeneity in BHg (Figure 2A) and estimated 30-day Hg intake (Figure 2B) was apparent, with elevated exposures in all coastal areas relative to their neighboring inland regions except in the Great Lakes. In the coastal areas, the highest 5% of BHg concentrations exceeded 7.2 μg/L, with the Atlantic Coast exceeding 10.9 μg/L. [For the full distributions, see Supplemental Material, Table 4 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Fish species eaten by survey participants varied by region (Figure 3), with respondents in coastal regions reporting higher frequency of consumption of fish containing higher levels of Hg. BHg concentrations were strongly associated with the frequency of fish consumption (Figure 4). BHg increased with monthly estimated consumption of fish and shellfish over the range of never/rarely to 4 or more times per week. In multiple regression modeling, women from the Atlantic (p < 0.01), Pacific (p < 0.0001), and Gulf (p < 0.0001) coasts had higher BHg concentrations compared with women from the inland West, whereas women from the inland Northeast and inland Midwest had significantly lower BHg levels (p < 0.0001). [For the full regression results, see Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Analysis of temporal trends through simple regression modeling showed no statistically significant difference among the three sets of study years (1999–2000, 2001–2002, and 2003–2004) for BHg (p = 0.07), estimated 30-day Hg intake (p = 0.11), or reported frequency of seafood consumption (p = 0.69). However, in multiple regression modeling, adjusting for covariates including coastal/non-coastal residence, the years 1999–2000 had significantly higher BHg levels (p < 0.0001) compared with 2003–2004, and 2001–2002 had significantly lower BHg levels (p < 0.01) [Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. Although the analyses did not support the conclusion that there was a general downward trend in BHg concentrations over the 6-year study period, there was a decline in the upper percentiles reflecting the most highly exposed women with BHg concentrations greater than established levels of concern [Supplemental Material, Table 9 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. In addition, the percentage of examinees with BHg values ≥ 3.5 μg/L and ≥ 5.8 μg/L was much greater in 1999–2000 compared to 2001–2002 and 2003–2004 (Figure 5). We found no consistent trend in fish consumption across the study years.
We observed a decrease in the 90th percentile of 30-day estimated intake of Hg through seafood consumption across the study years even though there was no similar decrease in the 90th percentile of 30-day estimated consumption of grams of fish and shellfish (Figure 6). This suggests a shift in consumption to seafood containing less Hg. We did not observe a similar pattern at the mean, suggesting that this shift in seafood consumption occurred mainly with the highest fish and shellfish consumers [Supplemental Material, Table 9 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. The RfD for Hg intake is 0.1 μg Hg/kg body weight (μg/kgbw) per day, or 3.0 μg Hg/kgbw per month (30 days). Results also showed that self-selected ethnic identity was associated with total BHg concentrations, estimated 30-day Hg intake, and frequency of either finfish or shellfish consumption. For example, BHg levels, reported frequency of seafood consumption, and 30-day Hg intakes were highest among women who designated themselves as being in the “other” category (mostly people whose ancestry is Asian, Native American, Pacific Islands, and the Caribbean Islands). [See also Supplemental Material, Tables 5–7 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Table 2 presents the percentages of women by race/ethnicity that had ≥ 3.5 and ≥ 5.8 μg/L BHg. We identified statistically significant relationships between higher income and, respectively, increasing BHg concentration (p < 0.0001), estimated 30-day intake of Hg (p = 0.008), and 30-day frequency of finfish and shellfish consumption (p < 0.0001) through bivariate regressions. [For the distributions of blood total Hg, estimated Hg intake, and frequency of finfish and shellfish consumption by annual income, see Supplemental Material, Tables 6–8 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] In addition, women from families reporting incomes of ≥ $75,000 (the reference category) had statistically higher BHg levels than did women from families with incomes of ≤ $55,000 (p < 0.01). In all cases, BHg concentrations were also significantly associated with age and estimated 30-day Hg intake (p < 0.0001). Table 3 presents the percentage of women by annual income with ≥ 3.5 and ≥ 5.8 μg/L BHg. In multiple regression modeling, after adjusting for other factors related to BHg, both race/ethnicity and income remained statistically significant predictors of BHg levels observed in this study [Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. Non-Hispanic blacks (p < 0.0001) and women grouped in the “other” racial category (p = 0.002) had significantly higher BHg concentrations than did non-Hispanic whites.
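The regional results above (weighted geometric means of BHg and the weighted percentage of women at or above the 3.5 and 5.8 μg/L levels of concern) were produced with SAS survey procedures. Purely as a sketch of the arithmetic behind such point estimates, the Python snippet below computes analogous weighted summaries; the column names are assumptions, and it deliberately omits the design-based variance estimation that the SAS Survey procedures provide.

```python
# Schematic point-estimate calculation (not the authors' SAS code): a weighted
# geometric mean of blood Hg and the weighted share of women at or above the
# 3.5 and 5.8 ug/L levels of concern, by region. Column names ('bhg_ugL',
# 'exam_weight', 'region') are assumptions; design-based standard errors would
# additionally require the NHANES stratum/PSU variables.
import numpy as np
import pandas as pd

def weighted_geometric_mean(values: pd.Series, weights: pd.Series) -> float:
    keep = values > 0  # geometric mean is defined for positive values only
    return float(np.exp(np.average(np.log(values[keep]), weights=weights[keep])))

def weighted_share_at_or_above(values: pd.Series, weights: pd.Series, cutoff: float) -> float:
    return float(np.average((values >= cutoff).astype(float), weights=weights))

def summarize_by_region(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for region, sub in df.groupby("region"):
        rows.append(
            {
                "region": region,
                "geom_mean_bhg": weighted_geometric_mean(sub["bhg_ugL"], sub["exam_weight"]),
                "pct_ge_3.5": 100 * weighted_share_at_or_above(sub["bhg_ugL"], sub["exam_weight"], 3.5),
                "pct_ge_5.8": 100 * weighted_share_at_or_above(sub["bhg_ugL"], sub["exam_weight"], 5.8),
            }
        )
    return pd.DataFrame(rows)
```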
Conclusions
Significant geographic differences in BHg concentrations occurred within the United States: We found highest exposures in coastal areas and the Northeast census region. In the Northeast, 19% of women had BHg concentrations ≥ 3.5 μg/L. The highest 5% of BHg concentrations exceeded 8.2 μg/L in the Northeast and 7.2 μg/L in coastal areas, concentrations more than twice the 3.5 μg/L level of concern. BHg levels were predicted by the quantity and type of fish consumed. Over the 6-year period (1999–2004), the frequency of elevated BHg levels among women of childbearing age declined without a significant change in quantities of fish and shellfish consumed. This pattern suggests a more discerning series of choices in type of fish eaten rather than an overall reduction in fish consumption. Within all geographic regions, women at highest risk of elevated Hg exposures were more affluent and more likely to be of Asian or island ethnicity.
[ "Methodology for data analysis", "Methodology for defining coastal and non-coastal areas", "Regional and coastal variation in BHg concentrations and in fish consumption", "Ethnic group variation on fish intake and BHg concentrations", "Income differences in association with fish intake and BHg concentrations", "Interactions between income and ethnic group", "Time trends in Hg exposure absent changes in total fish consumption", "Changes in MeHg exposure over time" ]
[ "We evaluated data for examinees who participated in NHANES during survey years 1999–2004 to assess the statistical association between seafood consumption and BHg by region of residence, race/ethnicity, and annual income. Further, we examined time trends for both BHg and fish consumption. NHANES is an annual survey conducted by the NCHS. The data include BHg levels, 24-hr dietary recall, and 30-day finfish and shellfish consumption frequency for women 16–49 years of age who reside in the United States. The NHANES sampling frame includes all 50 states. The documentation and publicly available data for NHANES can be found online [Centers for Disease Control and Prevention (CDC) 2006b]. The regional data are not publicly available but can be accessed by special request to the National Center for Health Statistics (NCHS) through its Research Data Center. Procedures for submitting a proposal in order to access data that are not publicly available can be found online (CDC 2006c). We performed all analyses using SAS, version 9.1 (SAS Institute Inc., Cary, NC).\nFollowing NHANES analytic guidelines (CDC 2006a), we used SAS procedures that accurately incorporate the stratification and multistage sampling of NHANES: Proc SurveyMeans, Proc SurveyReg, and Proc SurveyFreq (SAS Institute Inc.). The weights provided by NCHS compensate for the oversampling of various subpopulations and adjust for nonresponse bias. We used weights for estimating statistics for coastal and non-coastal regions to retain these adjustments. Because we considered the variables of interest (BHg and fish consumption) to be related to some of the factors that were oversampled, we retained these adjustments to minimize the bias in the estimates. For example, NHANES over-sampled Mexican Americans, who also have lower BHg than do other racial/ethnic groups. If the weights were not used to estimate the distribution of BHg, the results would be biased low. We recognize that some bias may remain within the estimates because the weights were not specifically created for the geographic regions of coastal and noncoastal; however, these subdivisions (e.g., coastal, noncoastal, Pacific, Atlantic) are based on counties, the primary sampling units of NHANES (CDC 2006a). All multivariate analyses were done unweighted because factors associated with the dependent variables and on which over-sampling was based were included as covariates and thus adjusted for in the modeling.\nIn order to estimate long-term Hg intake, we combined data collected through the 24-hr dietary recall and the 30-day fish frequency questionnaire to estimate 30-day Hg intake [for specific methodologies, see Mahaffey and Rice (1997); Mahaffey et al. (2004)]. If we found statistically significant differences in amount of fish consumed per meal from the 24-hr dietary recall by either coastal status (participants who lived in a county that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or the Great Lakes vs. those who did not) or data release (1999–2000, 2001–2002, or 2003–2004), we calculated separate averages. We generated statistically representative estimates from these data using the statistical weights provided by NCHS and following the relevant analytical guidelines published by NCHS (CDC 2006a).", "Fish consumption is generally believed to be a major contributor to BHg concentration. We hypothesized that patterns of fish and shellfish consumption would vary between U.S.
residents who live on or near the coast (within ~ 25–50 miles) and those who live inland. We further hypothesized that fish consumption patterns, and thus BHg concentrations, may also vary by specific coast (e.g., residents near the Atlantic Coast may have different BHg concentrations than those on the coast of the Gulf of Mexico) and specific inland region (e.g., West vs. Midwest). To test these hypotheses, we categorized NHANES respondents as living in either a coastal or a noncoastal county and further categorized them by eight regions: Atlantic Coast, Northeast, Great Lakes, Midwest, South, Gulf of Mexico, West, and Pacific Coast.\nThe geographic unit used by NHANES is a county or county equivalent (CDC 2006a); therefore, we limited our definitions of coastal and noncoastal to follow county boundaries. We defined all counties that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or any of the Great Lakes as coastal. Additionally, we defined counties that bordered estuaries and bays as coastal, as well as counties whose center point was within approximately 25 miles of any coast even if not directly bordering a coast. [For the list of counties, see Supplemental Material, Table 13 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] We then defined the four coastal regions based on nearest body of water; for example, counties in California, Oregon, Washington, Alaska, and Hawaii that we defined as coastal were categorized as Pacific Coast. We separated noncoastal counties into four inland regions using the U.S. Census regions; for example, noncoastal counties in California, Oregon, Washington, and Alaska along with the entire states of Idaho, Montana, Wyoming, Colorado, New Mexico, Arizona, Utah, and Nevada became the West region (we classified all of Hawaii as coastal). We also designated the entire state of Florida as coastal, split between the Atlantic Coast and the Gulf of Mexico. We designated Miami-Dade County as Atlantic Coast, and Monroe County as Gulf of Mexico. These subdivisions run the risk of small sample sizes; however, the definition of coastal was sufficiently broad to avoid single primary sampling units.", "Comparisons of the distribution of BHg data with reference values aimed at protecting the fetal nervous system have been made using national-level data (Mahaffey et al. 2004). NHANES data are often used to make population estimates through application of weighting factors to variables of interest such as BHg. Population estimates for U.S. Census regions and their distribution of BHg concentrations indicate that women living in the Northeast had BHg concentrations exceeding levels of concern more often than did women living in the South and West. The lowest Hg exposures were reported among women living in the Midwest.\nBecause NHANES was not designed to provide population estimates for coastal and noncoastal areas, unbiased estimates for the number of women having BHg concentrations ≥ 3.5 μg/L and ≥ 5.8 μg/L cannot be developed comparing coastal- and noncoastal-residing women. Although the following are not population estimates, they are statistics for a geographic region: Women living in coastal areas were at greater risk of having BHg concentrations ≥ 3.5 μg/L (16.25% for coastal and 5.99% for noncoastal residents) and ≥ 5.8 μg/L (8.11% for coastal and 2.06% for noncoastal residents). Women living near the coastal areas had approximately three to four times greater risk of exceeding acceptable levels of Hg exposure than did noncoastal-dwelling women. 
There may be some bias in these results due to the weighting issues (see “Materials and Methods”); however, we do not believe that this bias is a major factor underlying these great differences.\nMeHg exposures exceeding health-based standards, including U.S. EPA’s RfD (Rice et al. 2003), occurred more commonly among women living in coastal areas. These health-based standards were based on avoiding MeHg-associated delays and deficits in neurologic development of children after in utero exposure to MeHg (Mergler et al. 2007; Rice et al. 2003). At higher exposures to MeHg, including the highest concentrations reported during these survey years, the women themselves may risk adverse neuropsychological and neurobehavioral outcomes (Mergler et al. 2007).\nWithin the United States, people living in coastal areas consume more fish and shellfish than do those living in noncoastal areas and consume fish with higher Hg concentrations. Reports from New York City (McKelvey et al. 2007) and Florida (Denger et al. 1994; Karouna-Renier et al. 2008) support our identification of higher Hg exposures in U.S. coastal areas. This is part of a worldwide pattern. An overall pattern of higher BHg levels has also been reported among people living on U.S. islands [Hawaii (Sato et al. 2006)] and territories [e.g., Puerto Rico (Ortiz-Roque and López-Rivera 2004)]. A similar pattern has been repeated in other islands [Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Fiji (Kumar et al. 2006), Seychelles (Myers et al. 2007), and Tahiti (Chateau-Degat 2005; Dewailly et al. 2008)] compared with inland populations. Among these island populations, BHg concentrations at the upper end of the distribution fall into the range of 50 μg/L (~ 250 nmol/L) and higher (Chateau-Degat 2005). In Bermuda, cord BHg concentrations as high as 160 nmol/L (~ 35 μg/L) have been reported (arithmetic mean, 41.3 ± 4.7 nmol/L or 8.0 ± 1.0 μg/L) (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004).\nHigher BHg concentrations in the U.S. Northeast found in this study reflect, in part, more frequent fish and shellfish consumption. Additional variability may be a function of differences in Hg concentrations among species and geographic regions (Sunderland 2007). For example, recent information on “hot spots” for Hg in wildlife tissues (Evers et al. 2007) could be associated with higher Hg concentrations for locally obtained fish. One limitation of the present analysis was the use of a fish-species–specific mean Hg concentration (i.e., nondistributional values) to estimate individual exposure. Although most fish consumed by the U.S. population is not locally obtained (i.e., commercially obtained from diverse regions and countries) (Sunderland 2007), analytical results showing geographic differences in the distribution of BHg could reflect higher Hg concentrations in locally obtained fish within the Northeast states.", "Ethnic origins were associated with Hg exposures with those designated as “other” (i.e., Asian, Pacific and Caribbean Islander, Native American, Alaska Native, multiracial, and unknown race) having higher BHg concentrations. From additional studies, people of Asian descent whose food choices are influenced by Asian dietary patterns (Kudo et al. 2000; Sechena et al. 2003) tended to consume fish more frequently, in greater variety, and in greater quantity than did non-Asians. The ethnic diversity of the U.S. population is well known. 
As of 1997, 61% of the Asian population living in the United States was foreign-born (Council of Economic Advisors 1999). By comparison with overall U.S. data, higher BHg concentrations among Asians and islanders were reported for Taiwan (Hsu et al. 2007), Cambodia (Agusa et al. 2007), Fiji (Kumar et al. 2006), and Tahiti (Dewailly et al. 2008).\nWithin the United States, fish and shellfish consumption, predicting Hg exposure described previously, varies widely, in part a reflection of ethnicity. For example, Asian countries [e.g., Cambodia (Agusa et al. 2007), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], island nations [e.g., Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Seychelles (Myers et al. 2007), Tahiti (Chateau-Degat 2005), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], and some European countries [e.g., Spain (Falcó et al. 2006; Herreros et al. 2008) and the Faroe Islands (Weihe et al. 1996)] have reported fish/shellfish consumption levels greater than average worldwide consumption (World Health Organization 2008).", "In contrast to some other environmental exposures [e.g., higher blood lead concentrations] (Mahaffey et al. 1982), BHg concentrations increased with income. This is consistent with other studies in which women from higher income groups were at greater risk of MeHg exposure, as were women living in urban areas (Hightower and Moore 2003; Saint-Phard and Van Dorsten 2006).", "A more complex association between income and racial/ethnic group may also exist. According to the 1990 U.S. Census (U.S. Census Bureau 2008), the median family income of Japanese-American families exceeded that of non-Hispanic white families. By contrast, the income of Cambodian-American families was lower than that of black families. We could not address whether there is an interaction between belonging to the category designated as “other” and higher income within the NHANES data on BHg levels available at this time, because of sample size limitations.", "Our analysis of 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. Our evaluation of NHANES fish intake data indicated no differences in the mean frequency or amount of particular fish and shellfish species consumed. However, the estimated 30-day Hg intake decreased at the 90th percentile and higher, whereas total fish consumption did not, which suggests a shift in fish species consumed. The BHg data indicated a reduction of the higher end of the distribution of BHg between the first 2-year interval (the 1999 and 2000 examinees) compared with the subsequent 4-year interval (the 2001–2004 examinees).\nThe basis for these differences could possibly reflect spillover from the federal fish advisory program (U.S. EPA 2008b) in terms of total fish and shellfish consumption. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007).\nThe four fish species listed in the federal advisory [swordfish, shark, tilefish, and king mackerel (U.S. EPA 2008b)] were rarely reported by the 5,465 women in this analysis. It is clear that these four fish species contributed little to Hg exposure in this general population of U.S. women. 
Changes in MeHg exposure over time

During the past decade, the U.S. EPA initiated substantial interventions aimed at reducing Hg releases and exposures (U.S. EPA 2008a) and issued advisories to limit consumption of high-Hg fish (U.S. EPA 2008b). Because of worldwide atmospheric distribution and subsequent deposition of Hg, local conditions and locally caught fish are not the main contributors to Hg intake for most people (Sunderland 2007). Although there are economic indications that consumption of some species of fish may have decreased in response to these advisories (Shimshack et al. 2007), Hg exposures may not follow a similar time trend despite regulatory efforts to reduce them. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). Our analysis of NHANES data calculating 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004.
[ "Materials and Methods", "Methodology for data analysis", "Methodology for defining coastal and non-coastal areas", "Results", "Discussion", "Regional and coastal variation in BHg concentrations and in fish consumption", "Ethnic group variation on fish intake and BHg concentrations", "Income differences in association with fish intake and BHg concentrations", "Interactions between income and ethnic group", "Time trends in Hg exposure absent changes in total fish consumption", "Changes in MeHg exposure over time", "Conclusions" ]
Materials and Methods

Methodology for data analysis

We evaluated data for examinees who participated in NHANES during survey years 1999–2004 to assess the statistical association between seafood consumption and BHg by region of residence, race/ethnicity, and annual income. Further, we examined time trends for both BHg and fish consumption. NHANES is an annual survey conducted by the NCHS. The data include BHg levels, 24-hr dietary recall, and 30-day finfish and shellfish consumption frequency for women 16–49 years of age who reside in the United States. The NHANES sampling frame includes all 50 states. The documentation and publicly available data for NHANES can be found online [Centers for Disease Control and Prevention (CDC) 2006b]. The regional data are not publicly available but can be accessed by special request to the National Center for Health Statistics (NCHS) through its Research Data Center. Procedures for submitting a proposal to access data that are not publicly available can be found online (CDC 2006c). We performed all analyses using SAS, version 9.1 (SAS Institute Inc., Cary, NC).

Following NHANES analytic guidelines (CDC 2006a), we used SAS procedures that accurately incorporate the stratification and multistage sampling of NHANES: Proc SurveyMeans, Proc SurveyReg, and Proc SurveyFreq (SAS Institute Inc.). The weights provided by NCHS compensate for the oversampling of various subpopulations and adjust for nonresponse bias. We used these weights when estimating statistics for coastal and noncoastal regions to retain those adjustments. Because the variables of interest (BHg and fish consumption) are related to some of the oversampled factors, retaining the weights minimizes bias in the estimates. For example, NHANES oversampled Mexican Americans, who also have lower BHg than do other racial/ethnic groups; if the weights were not used, the estimated distribution of BHg would be biased low. We recognize that some bias may remain within the estimates because the weights were not specifically created for the coastal and noncoastal geographic regions; however, these subdivisions (e.g., coastal, noncoastal, Pacific, Atlantic) are based on counties, the primary sampling units of NHANES (CDC 2006a). All multivariate analyses were done unweighted because the factors on which oversampling was based, and which are associated with the dependent variables, were included as covariates and thus adjusted for in the modeling.
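To make the design-based estimation concrete, the SAS sketch below shows how the survey procedures named above are typically set up for NHANES data. It is an illustration under stated assumptions, not the authors' code: the dataset name, the analyst-constructed combined 1999–2004 weight (wt6yr), and the coastal and fishfreq_cat variables are introduced here for illustration, and although sdmvstra, sdmvpsu, and lbxthg are standard public NHANES design and blood total mercury variables, the text does not confirm which weight or variable names the authors actually used.

```sas
/* Minimal sketch (not the authors' code): design-based estimation of   */
/* mean blood total Hg by coastal status, assuming standard NHANES      */
/* design variables and a hypothetical combined 1999-2004 weight.       */
proc surveymeans data=women9904 mean stderr;
   strata  sdmvstra;      /* masked variance stratum                    */
   cluster sdmvpsu;       /* masked variance primary sampling unit      */
   weight  wt6yr;         /* combined examination weight, 1999-2004     */
   var     lbxthg;        /* blood total mercury, ug/L                  */
   domain  coastal;       /* separate estimates, coastal vs. noncoastal */
run;

/* Design-adjusted frequency table of 30-day fish-consumption category  */
proc surveyfreq data=women9904;
   strata  sdmvstra;
   cluster sdmvpsu;
   weight  wt6yr;
   tables  coastal*fishfreq_cat / row;  /* row percentages by residence */
run;
```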
In order to estimate long-term Hg intake, we combined data collected through the 24-hr dietary recall and the 30-day fish frequency questionnaire to estimate 30-day Hg intake [for specific methodologies, see Mahaffey and Rice (1997); Mahaffey et al. (2004)]. If we found statistically significant differences in the amount of fish consumed per meal from the 24-hr dietary recall by either coastal status (participants who lived in a county that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or the Great Lakes vs. those who did not) or data release (1999–2000, 2001–2002, or 2003–2004), we calculated separate averages. We generated statistically representative estimates from these data using the statistical weights provided by NCHS and following the relevant analytical guidelines published by NCHS (CDC 2006a).
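The 30-day intake estimate can be sketched as a simple calculation: multiply each reported species frequency by a portion size and a species-mean Hg concentration, sum within a woman, and divide by body weight. The sketch below is a hedged paraphrase of that idea, not the published algorithm; the dataset layout and all variable names other than seqn and bmxwt (the standard NHANES respondent identifier and body weight) are hypothetical, and Mahaffey and Rice (1997) and Mahaffey et al. (2004) describe the actual methodology.

```sas
/* Sketch only: per-woman 30-day Hg intake in ug per kg body weight.    */
/* FISH_FREQ is assumed to hold one record per woman per fish species.  */
data species_intake;
   set fish_freq;
   /* meals_30d      reported meals of this species in the past 30 days */
   /* grams_per_meal mean portion size from the 24-hr recall, grams     */
   /* hg_ugg         species-specific mean Hg concentration, ug/g       */
   hg_ug_30d = meals_30d * grams_per_meal * hg_ugg;
run;

proc summary data=species_intake nway;
   class seqn;                        /* NHANES respondent identifier   */
   var   hg_ug_30d;
   output out=person_intake (drop=_type_ _freq_) sum=hg_ug_30d_total;
run;

data person_intake;                   /* both inputs sorted by seqn     */
   merge person_intake demo (keep=seqn bmxwt);  /* bmxwt: weight, kg    */
   by seqn;
   /* compare with 3.0 ug/kg body weight per 30 days, i.e., the RfD of  */
   /* 0.1 ug/kg body weight per day                                     */
   hg_ugkgbw_30d = hg_ug_30d_total / bmxwt;
run;
```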
Methodology for defining coastal and non-coastal areas

Fish consumption is generally believed to be a major contributor to BHg concentration. We hypothesized that patterns of fish and shellfish consumption would vary between U.S. residents who live on or near the coast (within ~ 25–50 miles) and those who live inland. We further hypothesized that fish consumption patterns, and thus BHg concentrations, may also vary by specific coast (e.g., residents near the Atlantic Coast may have different BHg concentrations than those on the coast of the Gulf of Mexico) and specific inland region (e.g., West vs. Midwest). To test these hypotheses, we categorized NHANES respondents as living in either a coastal or a noncoastal county and further categorized them by eight regions: Atlantic Coast, Northeast, Great Lakes, Midwest, South, Gulf of Mexico, West, and Pacific Coast.

The geographic unit used by NHANES is a county or county equivalent (CDC 2006a); therefore, we limited our definitions of coastal and noncoastal to follow county boundaries. We defined all counties that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or any of the Great Lakes as coastal. Additionally, we defined counties that bordered estuaries and bays as coastal, as well as counties whose center point was within approximately 25 miles of any coast even if not directly bordering a coast. [For the list of counties, see Supplemental Material, Table 13 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] We then defined the four coastal regions based on the nearest body of water; for example, counties in California, Oregon, Washington, Alaska, and Hawaii that we defined as coastal were categorized as Pacific Coast. We separated noncoastal counties into four inland regions using the U.S. Census regions; for example, noncoastal counties in California, Oregon, Washington, and Alaska, along with the entire states of Idaho, Montana, Wyoming, Colorado, New Mexico, Arizona, Utah, and Nevada, became the West region (we classified all of Hawaii as coastal). We also designated the entire state of Florida as coastal, split between the Atlantic Coast and the Gulf of Mexico: we designated Miami-Dade County as Atlantic Coast and Monroe County as Gulf of Mexico. These subdivisions run the risk of small sample sizes; however, the definition of coastal was sufficiently broad to avoid single primary sampling units.
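The county classification step could be implemented as a simple join against a lookup table of coastal-county FIPS codes built from the criteria above (i.e., the county list in Supplemental Material, Table 13). The table and variable names in this sketch are assumptions; the text does not describe how the authors coded the classification.

```sas
/* Sketch only: flag each respondent's county as coastal (1) or         */
/* noncoastal (0) using a hypothetical lookup table COASTAL_FIPS that   */
/* lists the FIPS codes of counties meeting the coastal definition.     */
proc sql;
   create table women_geo as
   select a.*,
          case when b.fips is missing then 0 else 1 end as coastal
   from women9904 as a
        left join coastal_fips as b
        on a.county_fips = b.fips;
quit;
```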
Results

Table 1 shows the distributions of estimates of the number of women with BHg concentrations > 3.5 μg/L and > 5.8 μg/L, by region. Analyses indicate that between 1999 and 2004, the Northeast had the highest percentage of women with BHg concentrations above the 3.5 μg/L level of concern (> 19%), whereas the South had the largest estimated number of women (1.21 million) with ≥ 3.5 μg/L BHg because of the larger population in that region. Geometric means (Figure 1) show similar trends, with the highest BHg concentrations in the Northeast, followed by the West, South, and Midwest census regions. In the Northeast, the highest 5% of BHg concentrations exceeded 8.2 μg/L. [For full distributions, see Supplemental Material, Table 2 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] When we included coastal regions in this analysis, additional spatial heterogeneity in BHg (Figure 2A) and estimated 30-day Hg intake (Figure 2B) was apparent, with elevated exposures in all coastal areas relative to their neighboring inland regions except in the Great Lakes. In the coastal areas, the highest 5% of BHg concentrations exceeded 7.2 μg/L, with the Atlantic Coast exceeding 10.9 μg/L. [For the full distributions, see Supplemental Material, Table 4 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Fish species eaten by survey participants varied by region (Figure 3), with respondents in coastal regions reporting more frequent consumption of fish containing higher levels of Hg. BHg concentrations were strongly associated with the frequency of fish consumption (Figure 4): BHg increased with monthly estimated consumption of fish and shellfish over the range of never/rarely to 4 or more times per week. In multiple regression modeling, women from the Atlantic (p < 0.01), Pacific (p < 0.0001), and Gulf (p < 0.0001) coasts had higher BHg concentrations compared with women from the inland West, whereas women from the inland Northeast and inland Midwest had significantly lower BHg levels (p < 0.0001). [For the full regression results, see Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf).]

Analysis of temporal trends through simple regression modeling showed no statistically significant difference among the three sets of study years (1999–2000, 2001–2002, and 2003–2004) for BHg (p = 0.07), estimated 30-day Hg intake (p = 0.11), or reported frequency of seafood consumption (p = 0.69). However, in multiple regression modeling adjusting for covariates, including coastal/noncoastal residence, the years 1999–2000 had significantly higher BHg levels (p < 0.0001) than 2003–2004, and 2001–2002 had significantly lower BHg levels (p < 0.01) [Supplemental Material, Table 11]. Although the analyses did not support the conclusion that there was a general downward trend in BHg concentrations over the 6-year study period, there was a decline in the upper percentiles, reflecting the most highly exposed women with BHg concentrations greater than established levels of concern [Supplemental Material, Table 9]. In addition, the percentage of examinees with BHg values ≥ 3.5 μg/L and ≥ 5.8 μg/L was much greater in 1999–2000 than in 2001–2002 and 2003–2004 (Figure 5). We found no consistent trend in fish consumption across the study years. We observed a decrease in the 90th percentile of 30-day estimated intake of Hg through seafood consumption across the study years even though there was no similar decrease in the 90th percentile of 30-day estimated consumption of grams of fish and shellfish (Figure 6). This suggests a shift in consumption toward seafood containing less Hg. We did not observe a similar pattern at the mean, suggesting that this shift in seafood consumption occurred mainly among the highest fish and shellfish consumers [Supplemental Material, Table 9]. The RfD for Hg intake is 0.1 μg Hg/kg body weight (μg/kgbw) per day, or 3.0 μg Hg/kgbw per month (30 days).

Results also showed that self-selected ethnic identity was associated with total BHg concentrations, estimated 30-day Hg intake, and frequency of either finfish or shellfish consumption. For example, BHg levels, reported frequency of seafood consumption, and 30-day Hg intakes were highest among women who designated themselves as being in the “other” category (mostly people whose ancestry is Asian, Native American, Pacific Islands, or the Caribbean Islands). [See also Supplemental Material, Tables 5–7.] Table 2 presents the percentages of women by race/ethnicity who had ≥ 3.5 and ≥ 5.8 μg/L BHg.

We identified statistically significant relationships, through bivariate regressions, between higher income and, respectively, increasing BHg concentration (p < 0.0001), estimated 30-day intake of Hg (p = 0.008), and 30-day frequency of finfish and shellfish consumption (p < 0.0001). [For the distributions of blood total Hg, estimated Hg intake, and frequency of finfish and shellfish consumption by annual income, see Supplemental Material, Tables 6–8.] In addition, women from families reporting incomes of ≥ $75,000 (the reference category) had statistically higher BHg levels than did women from families with incomes of ≤ $55,000 (p < 0.01). In all cases, BHg concentrations were also significantly associated with age and estimated 30-day Hg intake (p < 0.0001). Table 3 presents the percentage of women by annual income with ≥ 3.5 and ≥ 5.8 μg/L BHg.

In multiple regression modeling, after adjusting for other factors related to BHg, both race/ethnicity and income remained statistically significant predictors of BHg levels observed in this study [Supplemental Material, Table 11]. Non-Hispanic blacks (p < 0.0001) and women grouped in the “other” racial category (p = 0.002) had significantly higher BHg concentrations than did non-Hispanic whites.
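As noted in “Materials and Methods,” the multivariate models were fit unweighted, with the oversampling-related factors entered as covariates. The sketch below illustrates that kind of model; the covariate set, category coding, reference levels, and any transformation of the BHg outcome are assumptions for illustration rather than the published specification (Supplemental Material, Table 11, reports the actual results).

```sas
/* Sketch only: unweighted multiple regression of blood total Hg on     */
/* region, race/ethnicity, family income, survey cycle, and age. The    */
/* variable names and covariate list are illustrative assumptions.      */
proc glm data=women_geo;
   class region8 raceeth income_cat cycle;      /* categorical terms    */
   model lbxthg = region8 raceeth income_cat cycle ridageyr / solution;
run;
quit;
```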
Conclusions

Significant geographic differences in BHg concentrations occurred within the United States: we found the highest exposures in coastal areas and in the Northeast census region. In the Northeast, 19% of women had BHg concentrations ≥ 3.5 μg/L. The highest 5% of BHg concentrations exceeded 8.2 μg/L in the Northeast and 7.2 μg/L in coastal areas, concentrations more than twice the 3.5 μg/L level of concern. BHg levels were predicted by the quantity and type of fish consumed. Over the 6-year period (1999–2004), the frequency of elevated BHg levels among women of childbearing age declined without a significant change in the quantities of fish and shellfish consumed. This pattern suggests a more discerning series of choices in the type of fish eaten rather than an overall reduction in fish consumption. Within all geographic regions, women at highest risk of elevated Hg exposures were more affluent and more likely to be of Asian or island ethnicity.
[ "materials|methods", null, null, "results", "discussion", null, null, null, null, null, null, "conclusions" ]
[ "blood", "coastal", "fish", "mercury", "NHANES", "regional" ]
Materials and Methods: Methodology for data analysis We evaluated data for examinees who participated in NHANES during survey years 1999–2004 to assess the statistical association between seafood consumption and BHg by region of residence, race/ethnicity, and annual income. Further, we examined time trends for both BHg and fish consumption. NHANES is an annual survey conducted by the NCHS. The data include BHg levels, 24-hr dietary recall, and 30-day finfish and shellfish consumption frequency for women 16–49 years of age who reside in the United States. The NHANES sampling frame includes all 50 states. The documentation and publicly available data for NHANES can be found online [Centers for Disease Control and Prevention (CDC) 2006b]. The regional data are not publicly available but can be accessed by special request to the National Center for Health Statistics (NCHS) through its Research Data Center. Procedures for submitting a proposal in order to access data that are not publicly available can be found online (CDC 2006c). We performed all analyses using SAS, version 9.1 (SAS Institute Inc., Cary, NC). Following NHANES analytic guidelines (CDC 2006a), we used SAS procedures that accurately incorporate the stratification and multistage sampling of NHANES: Proc SurveyMeans, Proc SurveyReg, and Proc SurveyFreq (SAS Institute Inc.). The weights provided by NCHS compensate for the oversampling of various subpopulations and adjust for nonresponse bias. We used weights for estimating statistics for coastal and non-coastal regions to retain these adjustments. Because we considered the variables of interest (BHg and fish consumption) to be related to some of the factors that were oversampled, we retained these adjustments to minimize the bias in the estimates. For example, NHANES over-sampled Mexican Americans, who also have lower BHg than do other racial/ethnic groups. If the weights were not used to estimate the distribution of BHg, the results would be biased low. We recognize that some bias may remain within the estimates because the weights were not specifically created for the geographic regions of coastal and noncoastal; however, these subdivisions (e.g., coastal, noncoastal, Pacific, Atlantic) are based on counties, the primary sampling units of NHANES (CDC 2006a). All multivariate analyses were done unweighted because factors associated with the dependent variables and for which over-sampling was based were included as covariates and thus adjusted for in the modeling. In order to estimate long-term Hg intake, we combined data collected through the 24-hr dietary recall and the 30-day fish frequency questionnaire to estimate 30-day Hg intake [for specific methodologies, see Mahaffey and Rice (1997); Mahaffey et al. (2004)]. If we found statistically significant differences in amount of fish consumed per meal from the 24-hr dietary recall by either coastal status (participants who lived in a county that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or the Great Lakes vs. those who did not) or data release (1999–2000–2001–2002, or 2003–2004), we calculated separate averages. We generated statistically representative estimates from these data using the statistical weights provided by NCHS and following the relevant analytical guidelines published by NCHS (CDC 2006a). 
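The 30-day Hg intake estimate described above combines the 30-day fish-frequency responses, the mean grams consumed per meal from the 24-hr recall, and a species-specific mean Hg concentration. The Python sketch below illustrates that calculation only; the field names, per-meal portions, and the shrimp concentration are illustrative assumptions, while the tuna concentrations are the values quoted later in the text (Mahaffey et al. 2008). It is not the authors' SAS implementation.

```python
# Minimal sketch of the 30-day Hg intake estimate described above.
# Field names, portion sizes, and the shrimp concentration are illustrative
# assumptions -- not the authors' SAS code or the study's actual inputs.

# Mean Hg concentration by species (ug Hg per g fish).
SPECIES_MEAN_HG_UG_PER_G = {
    "canned_light_tuna": 0.12,     # tuna values quoted in the text (Mahaffey et al. 2008)
    "canned_albacore_tuna": 0.35,
    "fresh_frozen_tuna": 0.38,
    "shrimp": 0.01,                # assumed placeholder value
}

def estimate_30day_hg_intake_ug(meals_per_30d: dict, grams_per_meal: dict) -> float:
    """Estimate 30-day Hg intake (ug) from reported meal frequency and portion size.

    meals_per_30d  -- species -> number of meals reported over 30 days
    grams_per_meal -- species -> mean grams consumed per meal (e.g., from the
                      24-hr recall, possibly stratified by coastal status or
                      data release as the methods describe)
    """
    total_ug = 0.0
    for species, n_meals in meals_per_30d.items():
        conc = SPECIES_MEAN_HG_UG_PER_G.get(species, 0.0)  # species mean (nondistributional)
        total_ug += n_meals * grams_per_meal.get(species, 0.0) * conc
    return total_ug

# Example: 4 light-tuna meals and 2 albacore meals of ~100 g each in 30 days.
intake = estimate_30day_hg_intake_ug(
    {"canned_light_tuna": 4, "canned_albacore_tuna": 2},
    {"canned_light_tuna": 100.0, "canned_albacore_tuna": 100.0},
)
print(f"Estimated 30-day Hg intake: {intake:.1f} ug")  # 4*100*0.12 + 2*100*0.35 = 118.0 ug
```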
We evaluated data for examinees who participated in NHANES during survey years 1999–2004 to assess the statistical association between seafood consumption and BHg by region of residence, race/ethnicity, and annual income. Further, we examined time trends for both BHg and fish consumption. NHANES is an annual survey conducted by the NCHS. The data include BHg levels, 24-hr dietary recall, and 30-day finfish and shellfish consumption frequency for women 16–49 years of age who reside in the United States. The NHANES sampling frame includes all 50 states. The documentation and publicly available data for NHANES can be found online [Centers for Disease Control and Prevention (CDC) 2006b]. The regional data are not publicly available but can be accessed by special request to the National Center for Health Statistics (NCHS) through its Research Data Center. Procedures for submitting a proposal in order to access data that are not publicly available can be found online (CDC 2006c). We performed all analyses using SAS, version 9.1 (SAS Institute Inc., Cary, NC). Following NHANES analytic guidelines (CDC 2006a), we used SAS procedures that accurately incorporate the stratification and multistage sampling of NHANES: Proc SurveyMeans, Proc SurveyReg, and Proc SurveyFreq (SAS Institute Inc.). The weights provided by NCHS compensate for the oversampling of various subpopulations and adjust for nonresponse bias. We used weights for estimating statistics for coastal and non-coastal regions to retain these adjustments. Because we considered the variables of interest (BHg and fish consumption) to be related to some of the factors that were oversampled, we retained these adjustments to minimize the bias in the estimates. For example, NHANES over-sampled Mexican Americans, who also have lower BHg than do other racial/ethnic groups. If the weights were not used to estimate the distribution of BHg, the results would be biased low. We recognize that some bias may remain within the estimates because the weights were not specifically created for the geographic regions of coastal and noncoastal; however, these subdivisions (e.g., coastal, noncoastal, Pacific, Atlantic) are based on counties, the primary sampling units of NHANES (CDC 2006a). All multivariate analyses were done unweighted because factors associated with the dependent variables and for which over-sampling was based were included as covariates and thus adjusted for in the modeling. In order to estimate long-term Hg intake, we combined data collected through the 24-hr dietary recall and the 30-day fish frequency questionnaire to estimate 30-day Hg intake [for specific methodologies, see Mahaffey and Rice (1997); Mahaffey et al. (2004)]. If we found statistically significant differences in amount of fish consumed per meal from the 24-hr dietary recall by either coastal status (participants who lived in a county that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or the Great Lakes vs. those who did not) or data release (1999–2000–2001–2002, or 2003–2004), we calculated separate averages. We generated statistically representative estimates from these data using the statistical weights provided by NCHS and following the relevant analytical guidelines published by NCHS (CDC 2006a). Methodology for defining coastal and non-coastal areas Fish consumption is generally believed to be a major contributor to BHg concentration. We hypothesized that patterns of fish and shellfish consumption would vary between U.S. 
residents who live on or near the coast (within ~ 25–50 miles) and those who live inland. We further hypothesized that fish consumption patterns, and thus BHg concentrations, may also vary by specific coast (e.g., residents near the Atlantic Coast may have different BHg concentrations than those on the coast of the Gulf of Mexico) and specific inland region (e.g., West vs. Midwest). To test these hypotheses, we categorized NHANES respondents as living in either a coastal or a noncoastal county and further categorized them by eight regions: Atlantic Coast, Northeast, Great Lakes, Midwest, South, Gulf of Mexico, West, and Pacific Coast. The geographic unit used by NHANES is a county or county equivalent (CDC 2006a); therefore, we limited our definitions of coastal and noncoastal to follow county boundaries. We defined all counties that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or any of the Great Lakes as coastal. Additionally, we defined counties that bordered estuaries and bays as coastal, as well as counties whose center point was within approximately 25 miles of any coast even if not directly bordering a coast. [For the list of counties, see Supplemental Material, Table 13 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] We then defined the four coastal regions based on nearest body of water; for example, counties in California, Oregon, Washington, Alaska, and Hawaii that we defined as coastal were categorized as Pacific Coast. We separated noncoastal counties into four inland regions using the U.S. Census regions; for example, noncoastal counties in California, Oregon, Washington, and Alaska along with the entire states of Idaho, Montana, Wyoming, Colorado, New Mexico, Arizona, Utah, and Nevada became the West region (we classified all of Hawaii as coastal). We also designated the entire state of Florida as coastal, split between the Atlantic Coast and the Gulf of Mexico. We designated Miami-Dade County as Atlantic Coast, and Monroe County as Gulf of Mexico. These subdivisions run the risk of small sample sizes; however, the definition of coastal was sufficiently broad to avoid single primary sampling units. Fish consumption is generally believed to be a major contributor to BHg concentration. We hypothesized that patterns of fish and shellfish consumption would vary between U.S. residents who live on or near the coast (within ~ 25–50 miles) and those who live inland. We further hypothesized that fish consumption patterns, and thus BHg concentrations, may also vary by specific coast (e.g., residents near the Atlantic Coast may have different BHg concentrations than those on the coast of the Gulf of Mexico) and specific inland region (e.g., West vs. Midwest). To test these hypotheses, we categorized NHANES respondents as living in either a coastal or a noncoastal county and further categorized them by eight regions: Atlantic Coast, Northeast, Great Lakes, Midwest, South, Gulf of Mexico, West, and Pacific Coast. The geographic unit used by NHANES is a county or county equivalent (CDC 2006a); therefore, we limited our definitions of coastal and noncoastal to follow county boundaries. We defined all counties that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or any of the Great Lakes as coastal. Additionally, we defined counties that bordered estuaries and bays as coastal, as well as counties whose center point was within approximately 25 miles of any coast even if not directly bordering a coast. 
[For the list of counties, see Supplemental Material, Table 13 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] We then defined the four coastal regions based on nearest body of water; for example, counties in California, Oregon, Washington, Alaska, and Hawaii that we defined as coastal were categorized as Pacific Coast. We separated noncoastal counties into four inland regions using the U.S. Census regions; for example, noncoastal counties in California, Oregon, Washington, and Alaska along with the entire states of Idaho, Montana, Wyoming, Colorado, New Mexico, Arizona, Utah, and Nevada became the West region (we classified all of Hawaii as coastal). We also designated the entire state of Florida as coastal, split between the Atlantic Coast and the Gulf of Mexico. We designated Miami-Dade County as Atlantic Coast, and Monroe County as Gulf of Mexico. These subdivisions run the risk of small sample sizes; however, the definition of coastal was sufficiently broad to avoid single primary sampling units. Methodology for data analysis: We evaluated data for examinees who participated in NHANES during survey years 1999–2004 to assess the statistical association between seafood consumption and BHg by region of residence, race/ethnicity, and annual income. Further, we examined time trends for both BHg and fish consumption. NHANES is an annual survey conducted by the NCHS. The data include BHg levels, 24-hr dietary recall, and 30-day finfish and shellfish consumption frequency for women 16–49 years of age who reside in the United States. The NHANES sampling frame includes all 50 states. The documentation and publicly available data for NHANES can be found online [Centers for Disease Control and Prevention (CDC) 2006b]. The regional data are not publicly available but can be accessed by special request to the National Center for Health Statistics (NCHS) through its Research Data Center. Procedures for submitting a proposal in order to access data that are not publicly available can be found online (CDC 2006c). We performed all analyses using SAS, version 9.1 (SAS Institute Inc., Cary, NC). Following NHANES analytic guidelines (CDC 2006a), we used SAS procedures that accurately incorporate the stratification and multistage sampling of NHANES: Proc SurveyMeans, Proc SurveyReg, and Proc SurveyFreq (SAS Institute Inc.). The weights provided by NCHS compensate for the oversampling of various subpopulations and adjust for nonresponse bias. We used weights for estimating statistics for coastal and non-coastal regions to retain these adjustments. Because we considered the variables of interest (BHg and fish consumption) to be related to some of the factors that were oversampled, we retained these adjustments to minimize the bias in the estimates. For example, NHANES over-sampled Mexican Americans, who also have lower BHg than do other racial/ethnic groups. If the weights were not used to estimate the distribution of BHg, the results would be biased low. We recognize that some bias may remain within the estimates because the weights were not specifically created for the geographic regions of coastal and noncoastal; however, these subdivisions (e.g., coastal, noncoastal, Pacific, Atlantic) are based on counties, the primary sampling units of NHANES (CDC 2006a). All multivariate analyses were done unweighted because factors associated with the dependent variables and for which over-sampling was based were included as covariates and thus adjusted for in the modeling. 
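As a minimal illustration of how the NHANES examination weights enter the descriptive estimates discussed above: each examinee's value is scaled by the number of women she represents, so oversampled subgroups do not bias the weighted mean or the weighted share above a cutoff. The data and weights below are made up, and proper NHANES variance estimation additionally requires the stratum and PSU design variables (handled by SAS Proc SurveyMeans or the R survey package), which this sketch omits.

```python
# Illustrative sketch of survey-weighted point estimates (weighted mean and
# weighted proportion at or above a cutoff). Values and weights are made up;
# design-based variance estimation (strata/PSUs) is intentionally omitted.
import numpy as np

bhg_ug_per_l = np.array([0.8, 2.1, 4.7, 1.2, 6.3])            # blood Hg, ug/L (hypothetical)
exam_weight  = np.array([5200., 8100., 3900., 7700., 2500.])  # hypothetical MEC weights

def weighted_mean(x, w):
    return float(np.sum(w * x) / np.sum(w))

def weighted_prop_at_or_above(x, w, cutoff):
    return float(np.sum(w * (x >= cutoff)) / np.sum(w))

print("Weighted mean BHg (ug/L):", round(weighted_mean(bhg_ug_per_l, exam_weight), 2))
print("Weighted share >= 3.5 ug/L:",
      round(weighted_prop_at_or_above(bhg_ug_per_l, exam_weight, 3.5), 3))
# An unweighted mean would over-represent oversampled subgroups; the weights
# rescale each examinee to the number of U.S. women she represents.
```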
In order to estimate long-term Hg intake, we combined data collected through the 24-hr dietary recall and the 30-day fish frequency questionnaire to estimate 30-day Hg intake [for specific methodologies, see Mahaffey and Rice (1997); Mahaffey et al. (2004)]. If we found statistically significant differences in amount of fish consumed per meal from the 24-hr dietary recall by either coastal status (participants who lived in a county that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or the Great Lakes vs. those who did not) or data release (1999–2000–2001–2002, or 2003–2004), we calculated separate averages. We generated statistically representative estimates from these data using the statistical weights provided by NCHS and following the relevant analytical guidelines published by NCHS (CDC 2006a). Methodology for defining coastal and non-coastal areas: Fish consumption is generally believed to be a major contributor to BHg concentration. We hypothesized that patterns of fish and shellfish consumption would vary between U.S. residents who live on or near the coast (within ~ 25–50 miles) and those who live inland. We further hypothesized that fish consumption patterns, and thus BHg concentrations, may also vary by specific coast (e.g., residents near the Atlantic Coast may have different BHg concentrations than those on the coast of the Gulf of Mexico) and specific inland region (e.g., West vs. Midwest). To test these hypotheses, we categorized NHANES respondents as living in either a coastal or a noncoastal county and further categorized them by eight regions: Atlantic Coast, Northeast, Great Lakes, Midwest, South, Gulf of Mexico, West, and Pacific Coast. The geographic unit used by NHANES is a county or county equivalent (CDC 2006a); therefore, we limited our definitions of coastal and noncoastal to follow county boundaries. We defined all counties that bordered the Pacific or Atlantic Oceans, the Gulf of Mexico, or any of the Great Lakes as coastal. Additionally, we defined counties that bordered estuaries and bays as coastal, as well as counties whose center point was within approximately 25 miles of any coast even if not directly bordering a coast. [For the list of counties, see Supplemental Material, Table 13 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] We then defined the four coastal regions based on nearest body of water; for example, counties in California, Oregon, Washington, Alaska, and Hawaii that we defined as coastal were categorized as Pacific Coast. We separated noncoastal counties into four inland regions using the U.S. Census regions; for example, noncoastal counties in California, Oregon, Washington, and Alaska along with the entire states of Idaho, Montana, Wyoming, Colorado, New Mexico, Arizona, Utah, and Nevada became the West region (we classified all of Hawaii as coastal). We also designated the entire state of Florida as coastal, split between the Atlantic Coast and the Gulf of Mexico. We designated Miami-Dade County as Atlantic Coast, and Monroe County as Gulf of Mexico. These subdivisions run the risk of small sample sizes; however, the definition of coastal was sufficiently broad to avoid single primary sampling units. Results: Table 1 shows the distributions of estimates of the number of women with BHg concentrations > 3.5 μg/L and > 5.8 μg/L, by region. 
Analyses indicate that between 1999 and 2004, the Northeast had the highest percentage of women with BHg concentrations above the 3.5 μg/L level of concern (> 19%), whereas the South had the largest estimated number of women (1.21 million) with ≥ 3.5 μg/L BHg because of elevated population in that region. Geometric means (Figure 1) show similar trends, with the highest BHg concentrations in the Northeast, followed by the West, South, and Midwest census regions. In the Northeast, the highest 5% of BHg concentrations exceeded 8.2 μg/L. [For full distributions, see Supplemental Material, Table 2 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] When we included coastal regions in this analysis, additional spatial heterogeneity in BHg (Figure 2A) and estimated 30-day Hg intake (Figure 2B) was apparent, with elevated exposures in all coastal areas relative to their neighboring inland regions except in the Great Lakes. In the coastal areas, the highest 5% of BHg concentrations exceeded 7.2 μg/L, with the Atlantic Coast exceeding 10.9 μg/L. [For the full distributions, see Supplemental Material, Table 4 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Fish species eaten by survey participants varied by region (Figure 3), with respondents in coastal regions reporting higher frequency of consumption of fish containing higher levels of Hg. BHg concentrations were strongly associated with the frequency of fish consumption (Figure 4). BHg increased with monthly estimated consumption of fish and shellfish over the range of never/rarely to 4 or more times per week. In multiple regression modeling, women from the Atlantic (p < 0.01), Pacific (p < 0.0001), and Gulf (p < 0.0001) coasts had higher BHg concentrations compared with women from the inland West, whereas women from the inland Northeast and inland Midwest had significantly lower BHg levels (p < 0.0001). [For the full regression results, see Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Analysis of temporal trends through simple regression modeling showed no statistically significant difference among the three sets of study years (1999–2000, 2001–2002, and 2003–2004) for BHg (p = 0.07), estimated 30-day Hg intake (p = 0.11), or reported frequency of seafood consumption (p = 0.69). However, in multiple regression modeling, adjusting for covariates including coastal/non-coastal residence, the years 1999–2000 had significantly higher BHg levels (p < 0.0001) compared with 2003–2004, and 2001–2002 had significantly lower BHg levels (p < 0.01) [Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. Although the analyses did not support the conclusion that there was a general downward trend in BHg concentrations over the 6-year study period, there was a decline in the upper percentiles reflecting the most highly exposed women with BHg concentrations greater than established levels of concern [Supplemental Material, Table 9 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. In addition, the percentage of examinees with BHg values ≥ 3.5 μg/L and ≥ 5.8 μg/L was much greater in 1999–2000 compared to 2001–2002 and 2003–2004 (Figure 5). We found no consistent trend in fish consumption across the study years.
We observed a decrease in the 90th percentile of 30-day estimated intake of Hg through seafood consumption across the study years even though there was no similar decrease in the 90th percentile of 30-day estimated consumption of grams of fish and shellfish (Figure 6). This suggests a shift in consumption to seafood containing less Hg. We did not observe a similar pattern at the mean, suggesting that this shift in seafood consumption occurred mainly with the highest fish and shellfish consumers [Supplemental Material, Table 9 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. The RfD for Hg intake is 0.1 μg Hg/kg body weight (μg/kgbw) per day, or 3.0 μg Hg/kgbw per month (30 days). Results also showed that self-selected ethnic identity was associated with total BHg concentrations, estimated 30-day Hg intake, and frequency of either finfish or shellfish consumption. For example, BHg levels, reported frequency of seafood consumption, and 30-day Hg intakes were highest among women who designated themselves as being in the “other” category (mostly people whose ancestry is Asian, Native American, Pacific Islands, and the Caribbean Islands). [See also Supplemental Material, Tables 5–7 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] Table 2 presents the percentages of women by race/ethnicity that had ≥ 3.5 and ≥ 5.8 μg/L BHg. We identified statistically significant relationships between higher income and, respectively, increasing BHg concentration (p < 0.0001), estimated 30-day intake of Hg (p = 0.008), and 30-day frequency of finfish and shellfish consumption (p < 0.0001) through bivariate regressions. [For the distributions of blood total Hg, estimated Hg intake, and frequency of finfish and shellfish consumption by annual income, see Supplemental Material, Tables 6–8 (http://www.ehponline.org/members/2008/11674/suppl.pdf).] In addition, women from families reporting incomes of ≥ $75,000 (the reference category) had statistically higher BHg levels than did women from families with incomes of ≤ $55,000 (p < 0.01). In all cases, BHg concentrations were also significantly associated with age and estimated 30-day Hg intake (p < 0.0001). Table 3 presents the percentage of women by annual income with ≥ 3.5 and ≥ 5.8 μg/L BHg. In multiple regression modeling, after adjusting for other factors related to BHg, both race/ethnicity and income remained statistically significant predictors of BHg levels observed in this study [Supplemental Material, Table 11 (http://www.ehponline.org/members/2008/11674/suppl.pdf)]. Non-Hispanic blacks (p < 0.0001) and women grouped in the “other” racial category (p = 0.002) had significantly higher BHg concentrations than did non-Hispanic whites. Discussion: Regional and coastal variation in BHg concentrations and in fish consumption Comparisons of the distribution of BHg data with reference values aimed at protecting the fetal nervous system have been made using national-level data (Mahaffey et al. 2004). NHANES data are often used to make population estimates through application of weighting factors to variables of interest such as BHg. Population estimates for U.S. Census regions and their distribution of BHg concentrations indicate that women living in the Northeast had BHg concentrations exceeding levels of concern more often than did women living in the South and West. The lowest Hg exposures were reported among women living in the Midwest. 
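For context on the RfD quoted above (0.1 μg Hg/kg body weight per day, i.e., 3.0 μg/kg bw per 30 days), the short sketch below compares an estimated 30-day intake with the RfD-implied amount for a given body weight; the body weight and intake values are illustrative assumptions, not study data.

```python
# Compare an estimated 30-day Hg intake with the RfD quoted in the text
# (0.1 ug Hg/kg body weight per day, i.e. 3.0 ug/kg bw per 30 days).
# The example body weight and intake are illustrative assumptions.
RFD_UG_PER_KGBW_PER_DAY = 0.1

def rfd_fraction(intake_ug_30d: float, body_weight_kg: float) -> float:
    """Return the estimated 30-day intake as a fraction of the RfD-implied amount."""
    allowed_30d_ug = RFD_UG_PER_KGBW_PER_DAY * body_weight_kg * 30  # 3.0 ug/kg bw per 30 d
    return intake_ug_30d / allowed_30d_ug

# Example: 118 ug over 30 days for a 65-kg woman -> 118 / 195 ~= 0.61 of the RfD.
print(round(rfd_fraction(118.0, 65.0), 2))
```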
Because NHANES was not designed to provide population estimates for coastal and noncoastal areas, unbiased estimates for the number of women having BHg concentrations ≥ 3.5 μg/L and ≥ 5.8 μg/L cannot be developed comparing coastal- and noncoastal-residing women. Although the following are not population estimates, they are statistics for a geographic region: Women living in coastal areas were at greater risk of having BHg concentrations ≥ 3.5 μg/L (16.25% for coastal and 5.99% for noncoastal residents) and ≥ 5.8 μg/L (8.11% for coastal and 2.06% for noncoastal residents). Women living near the coastal areas had approximately three to four times greater risk of exceeding acceptable levels of Hg exposure than did noncoastal-dwelling women. There may be some bias in these results due to the weighting issues (see “Materials and Methods”); however, we do not believe that this bias is a major factor underlying these great differences. MeHg exposures exceeding health-based standards, including U.S. EPA’s RfD (Rice et al. 2003), occurred more commonly among women living in coastal areas. These health-based standards were based on avoiding MeHg-associated delays and deficits in neurologic development of children after in utero exposure to MeHg (Mergler et al. 2007; Rice et al. 2003). At higher exposures to MeHg, including the highest concentrations reported during these survey years, the women themselves may risk adverse neuropsychological and neurobehavioral outcomes (Mergler et al. 2007). Within the United States, people living in coastal areas consume more fish and shellfish than do those living in noncoastal areas and consume fish with higher Hg concentrations. Reports from New York City (McKelvey et al. 2007) and Florida (Denger et al. 1994; Karouna-Renier et al. 2008) support our identification of higher Hg exposures in U.S. coastal areas. This is part of a worldwide pattern. An overall pattern of higher BHg levels has also been reported among people living on U.S. islands [Hawaii (Sato et al. 2006)] and territories [e.g., Puerto Rico (Ortiz-Roque and López-Rivera 2004)]. A similar pattern has been repeated in other islands [Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Fiji (Kumar et al. 2006), Seychelles (Myers et al. 2007), and Tahiti (Chateau-Degat 2005; Dewailly et al. 2008)] compared with inland populations. Among these island populations, BHg concentrations at the upper end of the distribution fall into the range of 50 μg/L (~ 250 nmol/L) and higher (Chateau-Degat 2005). In Bermuda, cord BHg concentrations as high as 160 nmol/L (~ 35 μg/L) have been reported (arithmetic mean, 41.3 ± 4.7 nmol/L or 8.0 ± 1.0 μg/L) (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004). Higher BHg concentrations in the U.S. Northeast found in this study reflect, in part, more frequent fish and shellfish consumption. Additional variability may be a function of differences in Hg concentrations among species and geographic regions (Sunderland 2007). For example, recent information on “hot spots” for Hg in wildlife tissues (Evers et al. 2007) could be associated with higher Hg concentrations for locally obtained fish. One limitation of the present analysis was the use of a fish-species–specific mean Hg concentration (i.e., nondistributional values) to estimate individual exposure. Although most fish consumed by the U.S. 
population is not locally obtained (i.e., commercially obtained from diverse regions and countries) (Sunderland 2007), analytical results showing geographic differences in the distribution of BHg could reflect higher Hg concentrations in locally obtained fish within the Northeast states. Comparisons of the distribution of BHg data with reference values aimed at protecting the fetal nervous system have been made using national-level data (Mahaffey et al. 2004). NHANES data are often used to make population estimates through application of weighting factors to variables of interest such as BHg. Population estimates for U.S. Census regions and their distribution of BHg concentrations indicate that women living in the Northeast had BHg concentrations exceeding levels of concern more often than did women living in the South and West. The lowest Hg exposures were reported among women living in the Midwest. Because NHANES was not designed to provide population estimates for coastal and noncoastal areas, unbiased estimates for the number of women having BHg concentrations ≥ 3.5 μg/L and ≥ 5.8 μg/L cannot be developed comparing coastal- and noncoastal-residing women. Although the following are not population estimates, they are statistics for a geographic region: Women living in coastal areas were at greater risk of having BHg concentrations ≥ 3.5 μg/L (16.25% for coastal and 5.99% for noncoastal residents) and ≥ 5.8 μg/L (8.11% for coastal and 2.06% for noncoastal residents). Women living near the coastal areas had approximately three to four times greater risk of exceeding acceptable levels of Hg exposure than did noncoastal-dwelling women. There may be some bias in these results due to the weighting issues (see “Materials and Methods”); however, we do not believe that this bias is a major factor underlying these great differences. MeHg exposures exceeding health-based standards, including U.S. EPA’s RfD (Rice et al. 2003), occurred more commonly among women living in coastal areas. These health-based standards were based on avoiding MeHg-associated delays and deficits in neurologic development of children after in utero exposure to MeHg (Mergler et al. 2007; Rice et al. 2003). At higher exposures to MeHg, including the highest concentrations reported during these survey years, the women themselves may risk adverse neuropsychological and neurobehavioral outcomes (Mergler et al. 2007). Within the United States, people living in coastal areas consume more fish and shellfish than do those living in noncoastal areas and consume fish with higher Hg concentrations. Reports from New York City (McKelvey et al. 2007) and Florida (Denger et al. 1994; Karouna-Renier et al. 2008) support our identification of higher Hg exposures in U.S. coastal areas. This is part of a worldwide pattern. An overall pattern of higher BHg levels has also been reported among people living on U.S. islands [Hawaii (Sato et al. 2006)] and territories [e.g., Puerto Rico (Ortiz-Roque and López-Rivera 2004)]. A similar pattern has been repeated in other islands [Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Fiji (Kumar et al. 2006), Seychelles (Myers et al. 2007), and Tahiti (Chateau-Degat 2005; Dewailly et al. 2008)] compared with inland populations. Among these island populations, BHg concentrations at the upper end of the distribution fall into the range of 50 μg/L (~ 250 nmol/L) and higher (Chateau-Degat 2005). 
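Blood Hg values in this discussion are quoted both in μg/L and nmol/L; the conversion uses the molar mass of mercury (approximately 200.59 g/mol). The one-line sketch below (function name illustrative) reproduces the "~250 nmol/L" figure quoted for 50 μg/L.

```python
# Convert blood Hg from ug/L to nmol/L using Hg's molar mass (~200.59 g/mol,
# numerically equal to ng/nmol).
HG_MOLAR_MASS_G_PER_MOL = 200.59

def ug_per_l_to_nmol_per_l(ug_per_l: float) -> float:
    # 1 ug/L = 1000 ng/L; dividing by ng per nmol gives nmol/L.
    return ug_per_l * 1000.0 / HG_MOLAR_MASS_G_PER_MOL

print(round(ug_per_l_to_nmol_per_l(50.0)))  # ~249 nmol/L, matching the "~250 nmol/L" above
```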
In Bermuda, cord BHg concentrations as high as 160 nmol/L (~ 35 μg/L) have been reported (arithmetic mean, 41.3 ± 4.7 nmol/L or 8.0 ± 1.0 μg/L) (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004). Higher BHg concentrations in the U.S. Northeast found in this study reflect, in part, more frequent fish and shellfish consumption. Additional variability may be a function of differences in Hg concentrations among species and geographic regions (Sunderland 2007). For example, recent information on “hot spots” for Hg in wildlife tissues (Evers et al. 2007) could be associated with higher Hg concentrations for locally obtained fish. One limitation of the present analysis was the use of a fish-species–specific mean Hg concentration (i.e., nondistributional values) to estimate individual exposure. Although most fish consumed by the U.S. population is not locally obtained (i.e., commercially obtained from diverse regions and countries) (Sunderland 2007), analytical results showing geographic differences in the distribution of BHg could reflect higher Hg concentrations in locally obtained fish within the Northeast states. Ethnic group variation on fish intake and BHg concentrations Ethnic origins were associated with Hg exposures with those designated as “other” (i.e., Asian, Pacific and Caribbean Islander, Native American, Alaska Native, multiracial, and unknown race) having higher BHg concentrations. From additional studies, people of Asian descent whose food choices are influenced by Asian dietary patterns (Kudo et al. 2000; Sechena et al. 2003) tended to consume fish more frequently, in greater variety, and in greater quantity than did non-Asians. The ethnic diversity of the U.S. population is well known. As of 1997, 61% of the Asian population living in the United States was foreign-born (Council of Economic Advisors 1999). By comparison with overall U.S. data, higher BHg concentrations among Asians and islanders were reported for Taiwan (Hsu et al. 2007), Cambodia (Agusa et al. 2007), Fiji (Kumar et al. 2006), and Tahiti (Dewailly et al. 2008). Within the United States, fish and shellfish consumption, predicting Hg exposure described previously, varies widely, in part a reflection of ethnicity. For example, Asian countries [e.g., Cambodia (Agusa et al. 2007), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], island nations [e.g., Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Seychelles (Myers et al. 2007), Tahiti (Chateau-Degat 2005), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], and some European countries [e.g., Spain (Falcó et al. 2006; Herreros et al. 2008) and the Faroe Islands (Weihe et al. 1996)] have reported fish/shellfish consumption levels greater than average worldwide consumption (World Health Organization 2008). Ethnic origins were associated with Hg exposures with those designated as “other” (i.e., Asian, Pacific and Caribbean Islander, Native American, Alaska Native, multiracial, and unknown race) having higher BHg concentrations. From additional studies, people of Asian descent whose food choices are influenced by Asian dietary patterns (Kudo et al. 2000; Sechena et al. 2003) tended to consume fish more frequently, in greater variety, and in greater quantity than did non-Asians. The ethnic diversity of the U.S. population is well known. 
As of 1997, 61% of the Asian population living in the United States was foreign-born (Council of Economic Advisors 1999). By comparison with overall U.S. data, higher BHg concentrations among Asians and islanders were reported for Taiwan (Hsu et al. 2007), Cambodia (Agusa et al. 2007), Fiji (Kumar et al. 2006), and Tahiti (Dewailly et al. 2008). Within the United States, fish and shellfish consumption, predicting Hg exposure described previously, varies widely, in part a reflection of ethnicity. For example, Asian countries [e.g., Cambodia (Agusa et al. 2007), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], island nations [e.g., Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Seychelles (Myers et al. 2007), Tahiti (Chateau-Degat 2005), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], and some European countries [e.g., Spain (Falcó et al. 2006; Herreros et al. 2008) and the Faroe Islands (Weihe et al. 1996)] have reported fish/shellfish consumption levels greater than average worldwide consumption (World Health Organization 2008). Income differences in association with fish intake and BHg concentrations In contrast to some other environmental exposures [e.g., higher blood lead concentrations] (Mahaffey et al. 1982), BHg concentrations increased with income. This is consistent with other studies in which women from higher income groups were at greater risk of MeHg exposure, as were women living in urban areas (Hightower and Moore 2003; Saint-Phard and Van Dorsten 2006). In contrast to some other environmental exposures [e.g., higher blood lead concentrations] (Mahaffey et al. 1982), BHg concentrations increased with income. This is consistent with other studies in which women from higher income groups were at greater risk of MeHg exposure, as were women living in urban areas (Hightower and Moore 2003; Saint-Phard and Van Dorsten 2006). Interactions between income and ethnic group A more complex association between income and racial/ethnic group may also exist. According to the 1990 U.S. Census (U.S. Census Bureau 2008), the median family income of Japanese-American families exceeded that of non-Hispanic white families. By contrast, the income of Cambodian-American families was lower than that of black families. We could not address whether there is an interaction between belonging to the category designated as “other” and higher income within the NHANES data on BHg levels available at this time, because of sample size limitations. A more complex association between income and racial/ethnic group may also exist. According to the 1990 U.S. Census (U.S. Census Bureau 2008), the median family income of Japanese-American families exceeded that of non-Hispanic white families. By contrast, the income of Cambodian-American families was lower than that of black families. We could not address whether there is an interaction between belonging to the category designated as “other” and higher income within the NHANES data on BHg levels available at this time, because of sample size limitations. Time trends in Hg exposure absent changes in total fish consumption Our analysis of 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. 
Our evaluation of NHANES fish intake data indicated no differences in the mean frequency or amount of particular fish and shellfish species consumed. However, the estimated 30-day Hg intake decreased at the 90th percentile and higher, whereas total fish consumption did not, which suggests a shift in fish species consumed. The BHg data indicated a reduction of the higher end of the distribution of BHg between the first 2-year interval (the 1999 and 2000 examinees) compared with the subsequent 4-year interval (the 2001–2004 examinees). The basis for these differences could possibly reflect spillover from the federal fish advisory program (U.S. EPA 2008b) in terms of total fish and shellfish consumption. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). The four fish species listed in the federal advisory [swordfish, shark, tilefish, and king mackerel (U.S. EPA 2008b)] were rarely reported by the 5,465 women in this analysis. It is clear that these four fish species contributed little to Hg exposure in this general population of U.S. women. Individual states with higher Hg exposures [e.g., Hawaii (Sato et al., 2006), Florida (Karouna-Renier et al. 2008)] and greater fish consumption [Florida (Denger et al. 1994)] have substantially broader fish consumption advisories (e.g., Hawaii and Florida) aimed at reducing Hg exposure from high-Hg–containing species obtained locally (Florida Department of Health 2007; U.S. EPA 2008b). Despite the federal advisory’s emphasis on four species of highly contaminated fish (U.S. EPA 2008b) and the states’ emphasis on game-fish, the most commonly consumed finfish in the United States is tuna. Interpretation of Hg exposure from tuna was complicated by the specific wording of the dietary questions asked of the NHANES examinees, which did not differentiate between light or skipjack tuna and albacore tuna. The latter contains approximately three times more Hg than does the former: 0.38 μg/g for frozen and fresh tuna, 0.35 μg/g for canned albacore, and 0.12 μg/g for canned light tuna (Mahaffey et al. 2008). Our analysis of 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. Our evaluation of NHANES fish intake data indicated no differences in the mean frequency or amount of particular fish and shellfish species consumed. However, the estimated 30-day Hg intake decreased at the 90th percentile and higher, whereas total fish consumption did not, which suggests a shift in fish species consumed. The BHg data indicated a reduction of the higher end of the distribution of BHg between the first 2-year interval (the 1999 and 2000 examinees) compared with the subsequent 4-year interval (the 2001–2004 examinees). The basis for these differences could possibly reflect spillover from the federal fish advisory program (U.S. EPA 2008b) in terms of total fish and shellfish consumption. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). The four fish species listed in the federal advisory [swordfish, shark, tilefish, and king mackerel (U.S. EPA 2008b)] were rarely reported by the 5,465 women in this analysis. It is clear that these four fish species contributed little to Hg exposure in this general population of U.S. women. 
Individual states with higher Hg exposures [e.g., Hawaii (Sato et al., 2006), Florida (Karouna-Renier et al. 2008)] and greater fish consumption [Florida (Denger et al. 1994)] have substantially broader fish consumption advisories (e.g., Hawaii and Florida) aimed at reducing Hg exposure from high-Hg–containing species obtained locally (Florida Department of Health 2007; U.S. EPA 2008b). Despite the federal advisory’s emphasis on four species of highly contaminated fish (U.S. EPA 2008b) and the states’ emphasis on game-fish, the most commonly consumed finfish in the United States is tuna. Interpretation of Hg exposure from tuna was complicated by the specific wording of the dietary questions asked of the NHANES examinees, which did not differentiate between light or skipjack tuna and albacore tuna. The latter contains approximately three times more Hg than does the former: 0.38 μg/g for frozen and fresh tuna, 0.35 μg/g for canned albacore, and 0.12 μg/g for canned light tuna (Mahaffey et al. 2008). Changes in MeHg exposure over time During the past decade, the U.S. EPA initiated substantial interventions aimed to reduce Hg releases and exposures (U.S. EPA 2008a) and issued advisories to limit consumption of high-Hg fish (U.S. EPA 2008b). Because of worldwide atmospheric distribution and subsequent deposition of Hg, local conditions and locally caught fish are not the main contributors to Hg intake for most people (Sunderland 2007). Although there are economic indications that consumption of some species of fish may have decreased in response to these advisories (Shimshack et al. 2007), Hg exposures may not follow a similar time trend despite regulatory efforts to reduce Hg exposures. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). Our analysis of NHANES data calculating 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. During the past decade, the U.S. EPA initiated substantial interventions aimed to reduce Hg releases and exposures (U.S. EPA 2008a) and issued advisories to limit consumption of high-Hg fish (U.S. EPA 2008b). Because of worldwide atmospheric distribution and subsequent deposition of Hg, local conditions and locally caught fish are not the main contributors to Hg intake for most people (Sunderland 2007). Although there are economic indications that consumption of some species of fish may have decreased in response to these advisories (Shimshack et al. 2007), Hg exposures may not follow a similar time trend despite regulatory efforts to reduce Hg exposures. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). Our analysis of NHANES data calculating 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. Regional and coastal variation in BHg concentrations and in fish consumption: Comparisons of the distribution of BHg data with reference values aimed at protecting the fetal nervous system have been made using national-level data (Mahaffey et al. 2004). NHANES data are often used to make population estimates through application of weighting factors to variables of interest such as BHg. Population estimates for U.S. 
Census regions and their distribution of BHg concentrations indicate that women living in the Northeast had BHg concentrations exceeding levels of concern more often than did women living in the South and West. The lowest Hg exposures were reported among women living in the Midwest. Because NHANES was not designed to provide population estimates for coastal and noncoastal areas, unbiased estimates for the number of women having BHg concentrations ≥ 3.5 μg/L and ≥ 5.8 μg/L cannot be developed comparing coastal- and noncoastal-residing women. Although the following are not population estimates, they are statistics for a geographic region: Women living in coastal areas were at greater risk of having BHg concentrations ≥ 3.5 μg/L (16.25% for coastal and 5.99% for noncoastal residents) and ≥ 5.8 μg/L (8.11% for coastal and 2.06% for noncoastal residents). Women living near the coastal areas had approximately three to four times greater risk of exceeding acceptable levels of Hg exposure than did noncoastal-dwelling women. There may be some bias in these results due to the weighting issues (see “Materials and Methods”); however, we do not believe that this bias is a major factor underlying these great differences. MeHg exposures exceeding health-based standards, including U.S. EPA’s RfD (Rice et al. 2003), occurred more commonly among women living in coastal areas. These health-based standards were based on avoiding MeHg-associated delays and deficits in neurologic development of children after in utero exposure to MeHg (Mergler et al. 2007; Rice et al. 2003). At higher exposures to MeHg, including the highest concentrations reported during these survey years, the women themselves may risk adverse neuropsychological and neurobehavioral outcomes (Mergler et al. 2007). Within the United States, people living in coastal areas consume more fish and shellfish than do those living in noncoastal areas and consume fish with higher Hg concentrations. Reports from New York City (McKelvey et al. 2007) and Florida (Denger et al. 1994; Karouna-Renier et al. 2008) support our identification of higher Hg exposures in U.S. coastal areas. This is part of a worldwide pattern. An overall pattern of higher BHg levels has also been reported among people living on U.S. islands [Hawaii (Sato et al. 2006)] and territories [e.g., Puerto Rico (Ortiz-Roque and López-Rivera 2004)]. A similar pattern has been repeated in other islands [Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Fiji (Kumar et al. 2006), Seychelles (Myers et al. 2007), and Tahiti (Chateau-Degat 2005; Dewailly et al. 2008)] compared with inland populations. Among these island populations, BHg concentrations at the upper end of the distribution fall into the range of 50 μg/L (~ 250 nmol/L) and higher (Chateau-Degat 2005). In Bermuda, cord BHg concentrations as high as 160 nmol/L (~ 35 μg/L) have been reported (arithmetic mean, 41.3 ± 4.7 nmol/L or 8.0 ± 1.0 μg/L) (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004). Higher BHg concentrations in the U.S. Northeast found in this study reflect, in part, more frequent fish and shellfish consumption. Additional variability may be a function of differences in Hg concentrations among species and geographic regions (Sunderland 2007). For example, recent information on “hot spots” for Hg in wildlife tissues (Evers et al. 2007) could be associated with higher Hg concentrations for locally obtained fish. 
One limitation of the present analysis was the use of a fish-species–specific mean Hg concentration (i.e., nondistributional values) to estimate individual exposure. Although most fish consumed by the U.S. population is not locally obtained (i.e., commercially obtained from diverse regions and countries) (Sunderland 2007), analytical results showing geographic differences in the distribution of BHg could reflect higher Hg concentrations in locally obtained fish within the Northeast states. Ethnic group variation on fish intake and BHg concentrations: Ethnic origins were associated with Hg exposures with those designated as “other” (i.e., Asian, Pacific and Caribbean Islander, Native American, Alaska Native, multiracial, and unknown race) having higher BHg concentrations. From additional studies, people of Asian descent whose food choices are influenced by Asian dietary patterns (Kudo et al. 2000; Sechena et al. 2003) tended to consume fish more frequently, in greater variety, and in greater quantity than did non-Asians. The ethnic diversity of the U.S. population is well known. As of 1997, 61% of the Asian population living in the United States was foreign-born (Council of Economic Advisors 1999). By comparison with overall U.S. data, higher BHg concentrations among Asians and islanders were reported for Taiwan (Hsu et al. 2007), Cambodia (Agusa et al. 2007), Fiji (Kumar et al. 2006), and Tahiti (Dewailly et al. 2008). Within the United States, fish and shellfish consumption, predicting Hg exposure described previously, varies widely, in part a reflection of ethnicity. For example, Asian countries [e.g., Cambodia (Agusa et al. 2007), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], island nations [e.g., Bermuda (Dewailly and Pereg 2004; see also Bermuda Biological Stations for Research 2004), Seychelles (Myers et al. 2007), Tahiti (Chateau-Degat 2005), Taiwan (Hsu et al. 2007; Soong et al. 1991), Japan (Murata et al. 2007; Sakamoto et al. 2007)], and some European countries [e.g., Spain (Falcó et al. 2006; Herreros et al. 2008) and the Faroe Islands (Weihe et al. 1996)] have reported fish/shellfish consumption levels greater than average worldwide consumption (World Health Organization 2008). Income differences in association with fish intake and BHg concentrations: In contrast to some other environmental exposures [e.g., higher blood lead concentrations] (Mahaffey et al. 1982), BHg concentrations increased with income. This is consistent with other studies in which women from higher income groups were at greater risk of MeHg exposure, as were women living in urban areas (Hightower and Moore 2003; Saint-Phard and Van Dorsten 2006). Interactions between income and ethnic group: A more complex association between income and racial/ethnic group may also exist. According to the 1990 U.S. Census (U.S. Census Bureau 2008), the median family income of Japanese-American families exceeded that of non-Hispanic white families. By contrast, the income of Cambodian-American families was lower than that of black families. We could not address whether there is an interaction between belonging to the category designated as “other” and higher income within the NHANES data on BHg levels available at this time, because of sample size limitations. 
Time trends in Hg exposure absent changes in total fish consumption: Our analysis of 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. Our evaluation of NHANES fish intake data indicated no differences in the mean frequency or amount of particular fish and shellfish species consumed. However, the estimated 30-day Hg intake decreased at the 90th percentile and higher, whereas total fish consumption did not, which suggests a shift in fish species consumed. The BHg data indicated a reduction of the higher end of the distribution of BHg between the first 2-year interval (the 1999 and 2000 examinees) compared with the subsequent 4-year interval (the 2001–2004 examinees). The basis for these differences could possibly reflect spillover from the federal fish advisory program (U.S. EPA 2008b) in terms of total fish and shellfish consumption. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). The four fish species listed in the federal advisory [swordfish, shark, tilefish, and king mackerel (U.S. EPA 2008b)] were rarely reported by the 5,465 women in this analysis. It is clear that these four fish species contributed little to Hg exposure in this general population of U.S. women. Individual states with higher Hg exposures [e.g., Hawaii (Sato et al., 2006), Florida (Karouna-Renier et al. 2008)] and greater fish consumption [Florida (Denger et al. 1994)] have substantially broader fish consumption advisories (e.g., Hawaii and Florida) aimed at reducing Hg exposure from high-Hg–containing species obtained locally (Florida Department of Health 2007; U.S. EPA 2008b). Despite the federal advisory’s emphasis on four species of highly contaminated fish (U.S. EPA 2008b) and the states’ emphasis on game-fish, the most commonly consumed finfish in the United States is tuna. Interpretation of Hg exposure from tuna was complicated by the specific wording of the dietary questions asked of the NHANES examinees, which did not differentiate between light or skipjack tuna and albacore tuna. The latter contains approximately three times more Hg than does the former: 0.38 μg/g for frozen and fresh tuna, 0.35 μg/g for canned albacore, and 0.12 μg/g for canned light tuna (Mahaffey et al. 2008). Changes in MeHg exposure over time: During the past decade, the U.S. EPA initiated substantial interventions aimed to reduce Hg releases and exposures (U.S. EPA 2008a) and issued advisories to limit consumption of high-Hg fish (U.S. EPA 2008b). Because of worldwide atmospheric distribution and subsequent deposition of Hg, local conditions and locally caught fish are not the main contributors to Hg intake for most people (Sunderland 2007). Although there are economic indications that consumption of some species of fish may have decreased in response to these advisories (Shimshack et al. 2007), Hg exposures may not follow a similar time trend despite regulatory efforts to reduce Hg exposures. A recent analysis of a nationally representative study specifically addressing fish-consumption patterns did not support this suggestion (Bradbury 2007). Our analysis of NHANES data calculating 30-day Hg intake indicated that there was no consistent trend in fish consumption by women of childbearing age over the 6-year period between 1999 and 2004. 
Conclusions: Significant geographic differences in BHg concentrations occurred within the United States: We found highest exposures in coastal areas and the Northeast census region. In the Northeast, 19% of women had BHg concentrations ≥ 3.5 μg/L. The highest 5% of BHg concentrations exceeded 8.2 μg/L in the Northeast and 7.2 μg/L in coastal areas, concentrations more than twice the 3.5 μg/L level of concern. BHg levels were predicted by the quantity and type of fish consumed. Over the 6-year period (1999–2004), the frequency of elevated BHg levels among women of childbearing age declined without a significant change in quantities of fish and shellfish consumed. This pattern suggests a more discerning series of choices in type of fish eaten rather than an overall reduction in fish consumption. Within all geographic regions, women at highest risk of elevated Hg exposures were more affluent and more likely to be of Asian or island ethnicity.
Background: The current, continuous National Health and Nutrition Examination Survey (NHANES) has included blood mercury (BHg) and fish/shellfish consumption since it began in 1999. NHANES 1999-2004 data form the basis for these analyses. Methods: We performed univariate and bivariate analyses to determine the distribution of BHg and fish consumption in the population and to investigate differences by geography, race/ethnicity, and income. We used multivariate analysis (regression) to determine the strongest predictors of BHg among geography, demographic factors, and fish consumption. Results: Elevated BHg occurred more commonly among women of childbearing age living in coastal areas of the United States (approximately one in six women). Regionally, exposures differ across the United States: Northeast > South and West > Midwest. Asian women and women with higher income ate more fish and had higher BHg. Time-trend analyses identified reduced BHg and reduced intake of Hg in the upper percentiles without an overall reduction of fish consumption. Conclusions: BHg is associated with income, ethnicity, residence (census region and coastal proximity). From 1999 through 2004, BHg decreased without a concomitant decrease in fish consumption. Data are consistent with a shift over this time period in fish species in women's diets.
null
null
10,715
246
[ 603, 435, 842, 381, 72, 104, 452, 179 ]
12
[ "fish", "bhg", "hg", "coastal", "consumption", "concentrations", "women", "2007", "bhg concentrations", "higher" ]
[ "fish consumption patterns", "bhg fish consumption", "fish intake data", "fish consumption nhanes", "seafood consumption study" ]
null
null
null
null
null
[CONTENT] blood | coastal | fish | mercury | NHANES | regional [SUMMARY]
[CONTENT] blood | coastal | fish | mercury | NHANES | regional [SUMMARY]
[CONTENT] blood | coastal | fish | mercury | NHANES | regional [SUMMARY]
null
null
null
[CONTENT] Adult | Animals | Diet | Ethnicity | Female | Humans | Mercury | Nutrition Surveys | Seafood | United States [SUMMARY]
[CONTENT] Adult | Animals | Diet | Ethnicity | Female | Humans | Mercury | Nutrition Surveys | Seafood | United States [SUMMARY]
[CONTENT] Adult | Animals | Diet | Ethnicity | Female | Humans | Mercury | Nutrition Surveys | Seafood | United States [SUMMARY]
null
null
null
[CONTENT] fish consumption patterns | bhg fish consumption | fish intake data | fish consumption nhanes | seafood consumption study [SUMMARY]
[CONTENT] fish consumption patterns | bhg fish consumption | fish intake data | fish consumption nhanes | seafood consumption study [SUMMARY]
[CONTENT] fish consumption patterns | bhg fish consumption | fish intake data | fish consumption nhanes | seafood consumption study [SUMMARY]
null
null
null
[CONTENT] fish | bhg | hg | coastal | consumption | concentrations | women | 2007 | bhg concentrations | higher [SUMMARY]
[CONTENT] fish | bhg | hg | coastal | consumption | concentrations | women | 2007 | bhg concentrations | higher [SUMMARY]
[CONTENT] fish | bhg | hg | coastal | consumption | concentrations | women | 2007 | bhg concentrations | higher [SUMMARY]
null
null
null
[CONTENT] bhg | μg | 0001 | estimated | table | figure | members | material | www | www ehponline [SUMMARY]
[CONTENT] μg | highest | type fish | type | concentrations | bhg | elevated | northeast | fish | bhg concentrations [SUMMARY]
[CONTENT] fish | hg | bhg | 2007 | coastal | concentrations | consumption | women | μg | higher [SUMMARY]
null
null
null
[CONTENT] the United States | six ||| the United States | Northeast | South | West | Midwest ||| Asian ||| [SUMMARY]
[CONTENT] ||| 1999 | 2004 ||| [SUMMARY]
[CONTENT] National Health and Nutrition Examination Survey | NHANES | 1999 ||| NHANES | 1999-2004 ||| BHg ||| ||| the United States | six ||| the United States | Northeast | South | West | Midwest ||| Asian ||| ||| ||| 1999 | 2004 ||| [SUMMARY]
null
Willingness to work in rural areas and the role of intrinsic versus extrinsic professional motivations - a survey of medical students in Ghana.
21827698
Retaining health workers in rural areas is challenging for a number of reasons, ranging from personal preferences to difficult work conditions and low remuneration. This paper assesses the influence of intrinsic and extrinsic motivation on willingness to accept postings to deprived areas among medical students in Ghana.
BACKGROUND
A computer-based survey involving 302 fourth year medical students was conducted from May-August 2009. Logistic regression was used to assess the association between students' willingness to accept rural postings and their professional motivations, rural exposure and family parental professional and educational status (PPES).
METHODS
Over 85% of students were born in urban areas and 57% came from affluent backgrounds. Nearly two-thirds of students reported strong intrinsic motivation to study medicine. After controlling for demographic characteristics and rural exposure, motivational factors did not influence willingness to practice in rural areas. High family PPES was consistently associated with lower willingness to work in rural areas.
RESULTS
Although most Ghanaian medical students are motivated to study medicine by the desire to help others, this does not translate into willingness to work in rural areas. Efforts should be made to build on intrinsic motivation during medical training and in designing rural postings, as well as favour lower PPES students for admission.
CONCLUSIONS
[ "Choice Behavior", "Data Collection", "Female", "Ghana", "Humans", "Logistic Models", "Male", "Motivation", "Professional Practice Location", "Rural Health Services", "Students, Medical", "Young Adult" ]
3170278
Background
The World Health Organization estimates that more than 4 million health workers are needed to fill the health workforce gap globally. This includes 2.4 million physicians, nurses and midwives. Fifty-seven countries are defined as having a critical shortage of health staff; of these, 36 are in sub-Saharan Africa. Of the world's total health workforce of 59.2 million, Africa has only 3% of the world's health workers in spite of having 25% of the global burden of disease [1,2]. The shortage of health staff cripples the health delivery system. It is also a threat to the provision of essential, life-saving interventions such as childhood immunizations, safe water, and safe pregnancy and childbirth services for mothers, as well as access to treatment for AIDS, tuberculosis and malaria. Health workers are critical to the global preparedness for and response to threats posed by emerging and epidemic-prone diseases. Different interventions have been tried to address these shortages [3-5]. Retaining health staff in rural areas has proven extremely difficult, as young professionals increasingly prefer urban postings and health systems do not reward (and through neglect often penalize) rural service [6-8]. For example, Wilson and colleagues (2009), Dovlo (2003) and Kruk and colleagues (2010) found that rural exposure, poor working conditions, low job satisfaction, political and ethnic problems, and sometimes civil strife and poor security in most underserved areas predispose new graduates to select cities [9-11]. Qualitative research has also shown the importance of health care providers' personal characteristics and value systems, such as religious beliefs and sociopolitical convictions, to their motivation towards rural practice. Emigration of skilled professionals to high-income countries is another barrier to adequate staffing of health facilities. A study in Ghana in 2006 on trainee physicians and nurses revealed that the majority had considered emigrating; more physicians (68%) than nurses (57%) considered emigration. These findings imply that achieving improvements in the health status of people living in low-income countries, and particularly in rural areas, will be extremely difficult, and that attainment of United Nations Millennium Development Goals 4, 5, and 6 in Ghana by 2015 is unlikely [12,13]. While previous research has looked at incentives and working conditions to promote uptake of rural posts, few studies have focused on motivation crowding and its effect on willingness to accept postings to rural areas. Motivation crowding [14] is the conflict between external (extrinsic) factors, such as monetary incentives or punishments, and the underlying (intrinsic) desire or willingness to work in the areas needed most. Students may have a mix of extrinsic and intrinsic motivations for studying medicine. Extrinsic factors may either undermine or strengthen intrinsic motivation, led by the belief that medicine has the imperative to help others, as enshrined in the Hippocratic Oath [15-17]. Current monetary incentives, which favour urban practice, may crowd out the intrinsic desire to give back to society by working in underserved areas [18,19]. This could have debilitating effects on health worker retention in rural areas [20-22]. To tackle the maldistribution of human resources for health (HRH), understanding the factors that crowd out the intrinsic motivation of medical students and their willingness to accept postings to rural underserved areas is integral. 
This paper analyzes the effect of extrinsic versus intrinsic motivational factors on stated willingness to accept postings to rural underserved areas in Ghana.
Methods
Study site Ghana is located in the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC). In Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice. Ghana is located in the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC). In Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice. Data collection Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. 
The data collection instruments were developed after seven focus group discussions of 6-8 participants in each group facilitated by trained social scientists were held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussion were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, were then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25], in computer laboratories in UG and KNUST. Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. The data collection instruments were developed after seven focus group discussions of 6-8 participants in each group facilitated by trained social scientists were held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussion were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, were then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25], in computer laboratories in UG and KNUST. Ethical considerations The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents voluntarily participated after the intent and the design of the study had been explained to them and signing informed consent forms. The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents voluntarily participated after the intent and the design of the study had been explained to them and signing informed consent forms. Statistical Analysis The study used STATA v10.1 for statistical analyses [26]. The main outcome of interest was the willingness to work in a deprived area after graduation. Students were asked to rate how likely they were to work in a deprived area on a scale from 1-4, where 1 represented "I will definitely not work in a deprived area;" 2 "I am unlikely to work in a deprived area;" 3 "I am likely to work in a deprived area;" and 4 "I will definitely work in a deprived area." We collapsed this response set to a dichotomous willing (groups 3 or 4) versus unwilling (groups 1 or 2) to practice in a deprived area. We defined deprived area as "a rural area that is distant from the big cities with few social amenities such as schools, roads, pipe-borne water, etc."--as per Ministry of Health definition. Predictors of interest included motivation (intrinsic and extrinsic), demographic characteristics, and rural exposure. Students were asked to identify the top three factors that motivated them to study medicine from a list of twelve factors identified as important by the focus group discussions. 
Five intrinsic motivations included: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one. Seven extrinsic motivation factors included: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting-edge technologies, and research opportunities. Respondents were coded as having "strong" intrinsic or extrinsic motivation if two or more of their motivational factors were intrinsic or extrinsic. Thus, "strong" intrinsic and extrinsic motivation groups were mutually exclusive. Demographic factors included: sex, age, ethnicity (Akan vs. non-Akan), partnership status (in a relationship vs. not in a relationship), and parental professional and educational status (PPES). The Akan peoples (including the Asante, Fante, Kwahu, Akuapem, Bono and others) are the largest ethnic group, representing approximately half of Ghanaians; we grouped all other smaller ethnic groups (Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa) together as "non-Akans" for this analysis. High PPES was defined as having a mother and/or father who is a University-trained professional (e.g. doctor, lawyer, engineer, accountant, technical, etc.), and low PPES was defined as having neither a mother nor a father who is a University-trained professional. Rural exposure factors (rural defined as an area with a population of less than 5,000) included: birth location (urban vs. rural), location of pre-medical studies (urban vs. rural), having ever lived in a rural area (from the age of 5 onwards), and exposure to rural service during medical training (for a minimum of 6 months). Bivariate associations and 95% confidence intervals were estimated using logistic regression. In Model 1, the influence of strong intrinsic/extrinsic motivation on willingness to accept postings to rural areas was assessed. Socio-demographic factors were added to the regression in Model 2, and rural exposure factors were further added in Model 3.
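To make the model-building sequence concrete, the sketch below shows how the outcome dichotomisation, the "strong intrinsic" coding rule, and the three nested logistic models could be reproduced. It is a minimal illustration in Python (statsmodels), not the authors' STATA code; the synthetic data, the variable names, and the reduced covariate set (ethnicity and partnership status are omitted) are assumptions made for the example, so the fitted numbers will not match the paper's tables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # roughly the size of the surveyed cohort

# Synthetic stand-in records; the values are random, only the structure mirrors the text.
students = pd.DataFrame({
    "willing_scale": rng.integers(1, 5, n),        # 1-4 item on willingness to work in a deprived area
    "n_intrinsic_factors": rng.integers(0, 4, n),  # how many of the three chosen factors are intrinsic
    "female": rng.integers(0, 2, n),
    "age": rng.integers(21, 27, n),
    "high_ppes": rng.integers(0, 2, n),
    "rural_exposure": rng.integers(0, 2, n),
})

# Dichotomise the outcome: 3-4 = willing, 1-2 = unwilling.
students["willing"] = (students["willing_scale"] >= 3).astype(int)
# "Strong" intrinsic motivation: two or more of the three listed factors are intrinsic.
students["strong_intrinsic"] = (students["n_intrinsic_factors"] >= 2).astype(int)

# Nested models mirroring Models 1-3 described above.
m1 = smf.logit("willing ~ strong_intrinsic", data=students).fit(disp=0)
m2 = smf.logit("willing ~ strong_intrinsic + female + age + high_ppes",
               data=students).fit(disp=0)
m3 = smf.logit("willing ~ strong_intrinsic + female + age + high_ppes + rural_exposure",
               data=students).fit(disp=0)

# Odds ratios and 95% confidence intervals, the quantities reported in the results tables.
print(np.exp(m3.params))
print(np.exp(m3.conf_int()))

The same pattern applies to the extrinsic-motivation models, with the strong_intrinsic indicator replaced by an analogous strong_extrinsic indicator.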
null
null
Conclusions
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/11/56/prepub
[ "Background", "Study site", "Data collection", "Ethical considerations", "Statistical Analysis", "Results", "Socio-demographic characteristics", "Intrinsic and extrinsic motivation and likelihood of working in underserved area", "Regression analysis of motivations and the willingness to accept postings to rural underserved area after graduation", "Discussion", "Conclusions" ]
[ "The World Health Organization estimates that more than 4 million health workers are needed to fill the health workforce gap globally. This includes 2.4 million physicians, nurses and midwives. Fifty-seven countries are defined as having a critical shortage of health staff; of these, 36 are in sub-Saharan Africa. Of the total world's health work force of 59.2 million, Africa has only 3% of the world's health workers in spite of having 25% of the global burden of disease [1,2]. The shortage of health staff cripples the health delivery system. It is also a threat to provision of essential, life-saving interventions such as childhood immunizations, provision of safe water, safe pregnancy and childbirth services for mothers as well as access to treatment for AIDS, tuberculosis and malaria. Health workers are critical to the global preparedness for and response to threats posed by emerging and epidemic-prone diseases. Different interventions have been tried to address these shortages [3-5].\nRetaining health staff in rural areas has proven extremely difficult as young professionals increasingly prefer urban postings and health systems do not reward (and through neglect often penalize) rural service [6, 7. 8]. For example, Wilson and colleagues, 2009, Dovlo, 2003 and Kruk and colleagues, 2010, found that rural exposure, poor working conditions, low job satisfaction, political and ethnic problems, and sometimes, civil strife and poor security in most underserved areas, predispose new graduates to select cities [9-11]. Qualitative research has also shown the importance of health care providers' personal characteristics and value systems, such as religious beliefs and sociopolitical convictions, to their motivation towards rural practice. Emigration of skilled professionals to high-income countries is another barrier to adequate staffing of health facilities. A study in Ghana in 2006 on trainee physicians and nurses revealed that the majority had considered emigrating. More physicians (68%) than nurses (57%) considered emigration. These findings imply that achieving improvements in the health status of people living in low-income countries, and particularly, in rural areas, will be extremely difficult and the attainment of the United Nations Millennium Development Goals 4, 5, and 6 by 2015 [12,13], in Ghana is unlikely.\nWhile previous research has looked at incentives and working conditions to promote uptake of rural posts, few studies have focused on motivation crowding and its effect on willingness to accept postings to rural area. Motivation crowding [14] is the conflict between external factors (extrinsic), such as monetary incentives or punishments, and the underlying desire or willingness to work (intrinsic) in areas needed most. Students may have a mix of extrinsic and intrinsic motivations for studying medicine. Extrinsic factors may either undermine or strengthen intrinsic motivation, led by the belief that medicine has the imperative to help others, as enshrined in the Hippocratic Oath [15-17]. Current monetary incentives, which favour urban practice, may crowd-out the intrinsic desire to give back to society by working in underserved areas [18,19]. This could have debilitating effects on health worker retention in rural areas [20-22]. To tackle the maldistribution of human resources for health (HRH), understanding the factors that crowd-out the intrinsic motivation of medical students and their willingness to accept postings to rural underserved area is integral. 
This paper analyzes the effect of extrinsic versus intrinsic motivational factors on stated willingness to accept postings to rural underserved areas in Ghana.", "Ghana is located in the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC).\nIn Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice.", "Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. The data collection instruments were developed after seven focus group discussions of 6-8 participants in each group facilitated by trained social scientists were held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussion were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, were then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25], in computer laboratories in UG and KNUST.", "The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents voluntarily participated after the intent and the design of the study had been explained to them and signing informed consent forms.", "The study used STATA v10.1 for statistical analyses [26]. The main outcome of interest was the willingness to work in a deprived area after graduation. 
Students were asked to rate how likely they were to work in a deprived area on a scale from 1-4, where 1 represented \"I will definitely not work in a deprived area;\" 2 \"I am unlikely to work in a deprived area;\" 3 \"I am likely to work in a deprived area;\" and 4 \"I will definitely work in a deprived area.\" We collapsed this response set to a dichotomous willing (groups 3 or 4) versus unwilling (groups 1 or 2) to practice in a deprived area. We defined deprived area as \"a rural area that is distant from the big cities with few social amenities such as schools, roads, pipe-borne water, etc.\"--as per Ministry of Health definition.\nPredictors of interest included motivation (intrinsic and extrinsic), demographic characteristics, and rural exposure. Students were asked to identify the top three factors that motivated them to study medicine from a list of twelve factors identified as important by the focus group discussions. Five intrinsic motivations included: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one. Seven extrinsic motivation factors included: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities. Respondents were coded as having \"strong\" intrinsic or extrinsic motivation if two or more of their motivational factors were intrinsic or extrinsic. Thus, \"strong\" intrinsic and extrinsic motivation groups were mutually exclusive.\nDemographic factors included: sex, age, ethnicity (Akan vs. non-Akan), partnership status (in a relationship vs. not in a relationship), and parental parental professional and educational status (PPES). The Akan peoples (including the Asante, Fante, Kwahu, Akuapem, Bono and others) are the largest ethnic group, representing approximately half of Ghanaians; we grouped all other smaller ethnic groups (Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa) together as \"non-Akans\" for this analysis. High parental professional and educational status (PPES) was defined as having a mother and/or father who is a University-trained professional (e.g. doctor, lawyer, engineer, accountant, technical, etc) and low PPES was defined as neither mother nor father is a University-trained professional. Rural (an area with a population less than 5000) exposure factors included: birth location (urban vs. rural), location of pre-medical studies (urban/vs. rural), having ever lived in rural area (from the age of 5 onwards), exposure to rural service in medical training (for a minimum of 6 months).\nBivariate associations and 95% confidence intervals were estimated using logistic regression. In model 1, the influence of strong intrinsic/extrinsic motivation on willingness to accept postings to rural area were assessed. Socio-demographic factors were added to the regression in Model 2, and rural exposure factors were further added in Model 3.", " Socio-demographic characteristics Of the 310 eligible medical students, 307 participated in the survey (99%). Of these, five survey files were corrupted by viruses or lost due to computer malfunction; thus the analysis was conducted with 302 total records. The socio-demographic characteristics of respondents are presented in Table 1. 
Of the 302 respondents recruited for the study, the majority were male (183 or 60.6%), with a mean age of 22.9 (STD 1.4). Most respondents were born in or around urban areas (264 or 87.4%) and had never lived in rural underserved area (75.8%). In terms of socioeconomic status, 173 (57.3%) students came from high PPES families. About half of the respondents (142 or 47%) were exposed to rural service.\nDemographic characteristics and rural exposure of respondents (N = 302)\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High Family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low Family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents; rural area defined as a place with less than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\nOf the 310 eligible medical students, 307 participated in the survey (99%). Of these, five survey files were corrupted by viruses or lost due to computer malfunction; thus the analysis was conducted with 302 total records. The socio-demographic characteristics of respondents are presented in Table 1. Of the 302 respondents recruited for the study, the majority were male (183 or 60.6%), with a mean age of 22.9 (STD 1.4). Most respondents were born in or around urban areas (264 or 87.4%) and had never lived in rural underserved area (75.8%). In terms of socioeconomic status, 173 (57.3%) students came from high PPES families. About half of the respondents (142 or 47%) were exposed to rural service.\nDemographic characteristics and rural exposure of respondents (N = 302)\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High Family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low Family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents; rural area defined as a place with less than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\n Intrinsic and extrinsic motivation and likelihood of working in underserved area The intensities of current motivational factors are presented in Table 2. Overall, 158 (55.4%) of students stated that they were likely to or definitely would work in an underserved area. More than 6 in 10 respondents (181 or 63.5%) had strong intrinsic motivation. A higher proportion of respondents who had strong intrinsic motivation indicated willingness to work in a rural area, compared to those with weak intrinsic motivation (χ2 = 6.96, p = 0.008). These results were reversed for those with strong extrinsic motivation (χ2 = 6.12, p = 0.013).\nIntrinsic versus extrinsic motivations to study medicine predict reported likelihood to work in an under-served area, 285 Ghanaian Medical Students\n1. Excludes 17 missing values for willingness to work rural and/or current motivation.\n2. 
Intrinsic motivation includes: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one; Pearson χ2(1) = 6.9592, p = 0.008.\n3. Extrinsic motivation includes: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities; Pearson χ2(1) = 6.1208, p = 0.013.\nThe intensities of current motivational factors are presented in Table 2. Overall, 158 (55.4%) of students stated that they were likely to or definitely would work in an underserved area. More than 6 in 10 respondents (181 or 63.5%) had strong intrinsic motivation. A higher proportion of respondents who had strong intrinsic motivation indicated willingness to work in a rural area, compared to those with weak intrinsic motivation (χ2 = 6.96, p = 0.008). These results were reversed for those with strong extrinsic motivation (χ2 = 6.12, p = 0.013).\nIntrinsic versus extrinsic motivations to study medicine predict reported likelihood to work in an under-served area, 285 Ghanaian Medical Students\n1. Excludes 17 missing values for willingness to work rural and/or current motivation.\n2. Intrinsic motivation includes: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one; Pearson χ2(1) = 6.9592, p = 0.008.\n3. Extrinsic motivation includes: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities; Pearson χ2(1) = 6.1208, p = 0.013.\n Regression analysis of motivations and the willingness to accept postings to rural underserved area after graduation We present the multivariate logistic regression results for the strength of intrinsic motivation and willingness to work in a rural underserved area after graduation in Table 3. In the unadjusted model (Model 1) there was a significant association between strong intrinsic motivation and willingness to work in rural underserved area (1.92[95% CI 1.18-3.13]). In Model 2, with demographic factors, female gender and high PPES were associated with reduced willingness to practice in a deprived area (0.50[95% CI 0.29-0.88]) and (0.42[95% CI 0.24, 0.71]), respectively, while age was associated with greater willingness to practice in a rural area (1.23[95% CI 1.00-1.52]). Rural exposure factors were not significant when added to the model with intrinsic motivation and demographic factors (Model 3). In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of intrinsic motivation and willingness to work in a rural underserved area after graduation, Ghanaian medical students\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. 
Participated in outreach or service in a deprived area during medical studies\nIn Table 4 the influence of strong extrinsic motivation on willingness of students to work in rural underserved area is presented. In Model 1, having a strong extrinsic motivation reduced the odds of being willing to accept a job in an underserved area to (0.54[95% CI 0.33-0.88]). In the model adjusting for demographics, Model 2, female gender and high PPES were associated with reduced willingness to practice in underserved areas (0.50[95% CI 0.28-0.87] and (0.42[95% CI 0.24-0.71]) respectively, while age was associated with greater willingness to practice in a rural area (1.24[95% CI 1.00-1.53]). Rural exposure factors in model 3 did not influence the outcome of willingness to work in rural underserved area. In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of extrinsic motivation and willingness to work in a rural underserved area after graduation\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\nWe present the multivariate logistic regression results for the strength of intrinsic motivation and willingness to work in a rural underserved area after graduation in Table 3. In the unadjusted model (Model 1) there was a significant association between strong intrinsic motivation and willingness to work in rural underserved area (1.92[95% CI 1.18-3.13]). In Model 2, with demographic factors, female gender and high PPES were associated with reduced willingness to practice in a deprived area (0.50[95% CI 0.29-0.88]) and (0.42[95% CI 0.24, 0.71]), respectively, while age was associated with greater willingness to practice in a rural area (1.23[95% CI 1.00-1.52]). Rural exposure factors were not significant when added to the model with intrinsic motivation and demographic factors (Model 3). In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of intrinsic motivation and willingness to work in a rural underserved area after graduation, Ghanaian medical students\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\nIn Table 4 the influence of strong extrinsic motivation on willingness of students to work in rural underserved area is presented. In Model 1, having a strong extrinsic motivation reduced the odds of being willing to accept a job in an underserved area to (0.54[95% CI 0.33-0.88]). 
In the model adjusting for demographics, Model 2, female gender and high PPES were associated with reduced willingness to practice in underserved areas (0.50[95% CI 0.28-0.87] and (0.42[95% CI 0.24-0.71]) respectively, while age was associated with greater willingness to practice in a rural area (1.24[95% CI 1.00-1.53]). Rural exposure factors in model 3 did not influence the outcome of willingness to work in rural underserved area. In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of extrinsic motivation and willingness to work in a rural underserved area after graduation\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies", "Of the 310 eligible medical students, 307 participated in the survey (99%). Of these, five survey files were corrupted by viruses or lost due to computer malfunction; thus the analysis was conducted with 302 total records. The socio-demographic characteristics of respondents are presented in Table 1. Of the 302 respondents recruited for the study, the majority were male (183 or 60.6%), with a mean age of 22.9 (STD 1.4). Most respondents were born in or around urban areas (264 or 87.4%) and had never lived in rural underserved area (75.8%). In terms of socioeconomic status, 173 (57.3%) students came from high PPES families. About half of the respondents (142 or 47%) were exposed to rural service.\nDemographic characteristics and rural exposure of respondents (N = 302)\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High Family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low Family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents; rural area defined as a place with less than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies", "The intensities of current motivational factors are presented in Table 2. Overall, 158 (55.4%) of students stated that they were likely to or definitely would work in an underserved area. More than 6 in 10 respondents (181 or 63.5%) had strong intrinsic motivation. A higher proportion of respondents who had strong intrinsic motivation indicated willingness to work in a rural area, compared to those with weak intrinsic motivation (χ2 = 6.96, p = 0.008). These results were reversed for those with strong extrinsic motivation (χ2 = 6.12, p = 0.013).\nIntrinsic versus extrinsic motivations to study medicine predict reported likelihood to work in an under-served area, 285 Ghanaian Medical Students\n1. Excludes 17 missing values for willingness to work rural and/or current motivation.\n2. 
Intrinsic motivation includes: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one; Pearson χ2(1) = 6.9592, p = 0.008.\n3. Extrinsic motivation includes: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities; Pearson χ2(1) = 6.1208, p = 0.013.", "We present the multivariate logistic regression results for the strength of intrinsic motivation and willingness to work in a rural underserved area after graduation in Table 3. In the unadjusted model (Model 1) there was a significant association between strong intrinsic motivation and willingness to work in rural underserved area (1.92[95% CI 1.18-3.13]). In Model 2, with demographic factors, female gender and high PPES were associated with reduced willingness to practice in a deprived area (0.50[95% CI 0.29-0.88]) and (0.42[95% CI 0.24, 0.71]), respectively, while age was associated with greater willingness to practice in a rural area (1.23[95% CI 1.00-1.52]). Rural exposure factors were not significant when added to the model with intrinsic motivation and demographic factors (Model 3). In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of intrinsic motivation and willingness to work in a rural underserved area after graduation, Ghanaian medical students\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\nIn Table 4 the influence of strong extrinsic motivation on willingness of students to work in rural underserved area is presented. In Model 1, having a strong extrinsic motivation reduced the odds of being willing to accept a job in an underserved area to (0.54[95% CI 0.33-0.88]). In the model adjusting for demographics, Model 2, female gender and high PPES were associated with reduced willingness to practice in underserved areas (0.50[95% CI 0.28-0.87] and (0.42[95% CI 0.24-0.71]) respectively, while age was associated with greater willingness to practice in a rural area (1.24[95% CI 1.00-1.53]). Rural exposure factors in model 3 did not influence the outcome of willingness to work in rural underserved area. In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of extrinsic motivation and willingness to work in a rural underserved area after graduation\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. 
Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies", "We found that twice as many students reported high intrinsic motivation compared to high extrinsic motivation to study and practice medicine. This may reflect the underlying altruistic motivation for many students entering a profession focused on serving others [12,13]. There may also be an element of social desirability bias in the students' responses as intrinsic motivation may be thought to be more socially acceptable than extrinsic motivation. Nonetheless, we found that high extrinsic motivation was associated with low self-reported likelihood of rural practice and that the converse was true for high intrinsic motivation [2,3]. Interestingly, this association lost statistical significance at the 95% confidence level in models with demographic and rural exposure confounders, whereas socioeconomic status (PPES) retained a highly influential role, as discussed below.\nIn this study, rural origin did not influence students' willingness to practice in rural areas after controlling for intrinsic/extrinsic motivation and demographic characteristics. This is in contrast with studies which have found rural origin to be an important motivator for rural practice [8,17,23]. Our findings highlight the heterogeneity of trends in motivation dynamics for rural practice and the importance of locally-relevant data for decision making. High PPES, measured using parental education and profession, was consistently associated with lack of willingness to work in rural areas. This is concerning as nearly 6 in 10 medical students in this cohort were from high PPES backgrounds--which is typical for Ghanaian medical schools [7]. These findings suggest that admission policies that favour well-to do applicants may be reducing the pool of students willing to consider rural practice. Female gender was also strongly associated with reduced interest in rural practice for women even after controlling for extrinsic/intrinsic motivation and rural exposure variables. This is consistent with similar studies among health staff which revealed that women are less likely to accept positions in remote areas due to varying family reasons; they would like to live where their husbands jobs are, have difficulties convincing their husbands to follow them to rural areas and want their children to have better education in the urban areas [10,11,22]. The studies further explained that female doctors rarely live in the same village as their assigned post and have higher overall absentee rates in rural practice [19,20]. With increasing representation of female healthcare professionals in many places in sub-Saharan Africa, [10,11], it is likely that the supply of health staff to rural underserved areas will remain a major setback if professional motivations are designed to attract more female students to rural practice. Although our study showed a lower proportion of female medical students in Ghana compared to other areas, they are likely to become a more important cadre in the coming years [11,19]. More research is urgently needed to determine how female healthcare professionals' motivations towards rural practice can be better engaged by policy-makers.\nThe limitations of this study include the possibility of social desirability bias in responses on motivation and likelihood of rural practice, as noted above. 
Despite the fact that study participants were assured of anonymity and confidentiality in responding to the questions, some social desirability bias is likely. For this reason, we selected a measure of high intrinsic and extrinsic motivation for use in the regression models. Research comparing students' stated intentions with their actual career choices during internship is urgently needed, as few studies on matched follow-ups are available. In addition, most of the students participating in the study were young and had not yet tasted the rigors of working in a rural area, which may affect their job preferences. Thus the findings of this study may not be applicable to practicing physicians. Finally, these findings are only generalizable to students in the current medical education system. The findings may be different if selection criteria for medical school admission change.\nThe major strengths of this study are its high response rate of 99% and its ability to capture an entire population of young medical students, who are one of the targets for addressing the rural-urban health staff recruitment imbalance. Surveying practicing physicians would have missed those who had migrated.\nThis study has several implications. First, the majority of students profess high intrinsic motivation for rural service. More research is needed to determine the potency of this motivation source in real-life decision making and how best to engage it via HRH policy. It is possible that emphasizing the community service aspect of medical practice and elevating the status of rural primary care in under-graduate and post-graduate training may help narrow the gap between motivation and eventual career choice in favour of rural areas. In addition, well-supervised and supported rural placements in which students experience the rewards of rural practice may help to persuade students who are largely unfamiliar with rural life. However, the success of these rural rotations is likely to depend heavily on having adequate local infrastructure and mentorship [13,23].\nSecond, admission criteria may need to be reconsidered in light of the strong relationship between high PPES and lack of interest in rural practice. For example, medical school admission slots might be reserved for qualified students from poorer families. These students may not need to come from rural areas, as we found that none of the rural exposure factors were significant after controlling for motivation and demographics.\nThird, our results suggest that programmes to promote and support rural practice after graduation may have some success. Our focus groups and discrete choice experiment suggested that students may be willing to commit to short-term placements of 2 years or less in rural areas [9]. The Ministry of Health may want to consider the possibility of short contracts that rotate physicians in and out of difficult-to-staff rural areas.", "The majority of students still claim high intrinsic motivation, and therefore it is important to appeal to and build on this in medical school curricula and in designing rural postings. However, extrinsic motivation and, perhaps most importantly, PPES will likely continue to be important factors in deciding on job postings. Future research should explore how motivations could be directly supported, how motivations are formed, the influence of contextual factors on motivation among medical students, and motivation crowding among practicing health professionals in rural underserved areas. 
Qualitative work may be especially informative in this effort. Our research also suggests that increasing efforts to recruit medical students from low PPES backgrounds may be the most effective current pathway to increasing the yield of physicians willing to practice in underserved areas." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study site", "Data collection", "Ethical considerations", "Statistical Analysis", "Results", "Socio-demographic characteristics", "Intrinsic and extrinsic motivation and likelihood of working in underserved area", "Regression analysis of motivations and the willingness to accept postings to rural underserved area after graduation", "Discussion", "Conclusions" ]
[ "The World Health Organization estimates that more than 4 million health workers are needed to fill the health workforce gap globally. This includes 2.4 million physicians, nurses and midwives. Fifty-seven countries are defined as having a critical shortage of health staff; of these, 36 are in sub-Saharan Africa. Of the total world's health work force of 59.2 million, Africa has only 3% of the world's health workers in spite of having 25% of the global burden of disease [1,2]. The shortage of health staff cripples the health delivery system. It is also a threat to provision of essential, life-saving interventions such as childhood immunizations, provision of safe water, safe pregnancy and childbirth services for mothers as well as access to treatment for AIDS, tuberculosis and malaria. Health workers are critical to the global preparedness for and response to threats posed by emerging and epidemic-prone diseases. Different interventions have been tried to address these shortages [3-5].\nRetaining health staff in rural areas has proven extremely difficult as young professionals increasingly prefer urban postings and health systems do not reward (and through neglect often penalize) rural service [6, 7. 8]. For example, Wilson and colleagues, 2009, Dovlo, 2003 and Kruk and colleagues, 2010, found that rural exposure, poor working conditions, low job satisfaction, political and ethnic problems, and sometimes, civil strife and poor security in most underserved areas, predispose new graduates to select cities [9-11]. Qualitative research has also shown the importance of health care providers' personal characteristics and value systems, such as religious beliefs and sociopolitical convictions, to their motivation towards rural practice. Emigration of skilled professionals to high-income countries is another barrier to adequate staffing of health facilities. A study in Ghana in 2006 on trainee physicians and nurses revealed that the majority had considered emigrating. More physicians (68%) than nurses (57%) considered emigration. These findings imply that achieving improvements in the health status of people living in low-income countries, and particularly, in rural areas, will be extremely difficult and the attainment of the United Nations Millennium Development Goals 4, 5, and 6 by 2015 [12,13], in Ghana is unlikely.\nWhile previous research has looked at incentives and working conditions to promote uptake of rural posts, few studies have focused on motivation crowding and its effect on willingness to accept postings to rural area. Motivation crowding [14] is the conflict between external factors (extrinsic), such as monetary incentives or punishments, and the underlying desire or willingness to work (intrinsic) in areas needed most. Students may have a mix of extrinsic and intrinsic motivations for studying medicine. Extrinsic factors may either undermine or strengthen intrinsic motivation, led by the belief that medicine has the imperative to help others, as enshrined in the Hippocratic Oath [15-17]. Current monetary incentives, which favour urban practice, may crowd-out the intrinsic desire to give back to society by working in underserved areas [18,19]. This could have debilitating effects on health worker retention in rural areas [20-22]. To tackle the maldistribution of human resources for health (HRH), understanding the factors that crowd-out the intrinsic motivation of medical students and their willingness to accept postings to rural underserved area is integral. 
This paper analyzes the effect of extrinsic versus intrinsic motivational factors on stated willingness to accept postings to rural underserved areas in Ghana.", " Study site Ghana is located on the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with an estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC).\nIn Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc in Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice.
\n Data collection Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. The data collection instruments were developed after seven focus group discussions, each with 6-8 participants and facilitated by trained social scientists, had been held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussions were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, was then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25] in computer laboratories at UG and KNUST.
\n Ethical considerations The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents participated voluntarily after the intent and the design of the study had been explained to them and after they had signed informed consent forms.
\n Statistical Analysis The study used STATA v10.1 for statistical analyses [26]. The main outcome of interest was the willingness to work in a deprived area after graduation. Students were asked to rate how likely they were to work in a deprived area on a scale from 1-4, where 1 represented \"I will definitely not work in a deprived area;\" 2 \"I am unlikely to work in a deprived area;\" 3 \"I am likely to work in a deprived area;\" and 4 \"I will definitely work in a deprived area.\" We collapsed this response set to a dichotomous willing (groups 3 or 4) versus unwilling (groups 1 or 2) to practice in a deprived area. We defined a deprived area as \"a rural area that is distant from the big cities with few social amenities such as schools, roads, pipe-borne water, etc.\"--as per the Ministry of Health definition.\nPredictors of interest included motivation (intrinsic and extrinsic), demographic characteristics, and rural exposure. Students were asked to identify the top three factors that motivated them to study medicine from a list of twelve factors identified as important by the focus group discussions. Five intrinsic motivations included: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one. Seven extrinsic motivation factors included: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities. Respondents were coded as having \"strong\" intrinsic or extrinsic motivation if two or more of their motivational factors were intrinsic or extrinsic. Thus, \"strong\" intrinsic and extrinsic motivation groups were mutually exclusive.\nDemographic factors included: sex, age, ethnicity (Akan vs. non-Akan), partnership status (in a relationship vs. not in a relationship), and parental professional and educational status (PPES). The Akan peoples (including the Asante, Fante, Kwahu, Akuapem, Bono and others) are the largest ethnic group, representing approximately half of Ghanaians; we grouped all other smaller ethnic groups (Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa) together as \"non-Akans\" for this analysis. High PPES was defined as having a mother and/or father who is a University-trained professional (e.g. doctor, lawyer, engineer, accountant, technical, etc.) and low PPES was defined as having neither a mother nor a father who is a University-trained professional. Rural exposure factors (with a rural area defined as one with a population of less than 5000) included: birth location (urban vs. rural), location of pre-medical studies (urban vs. rural), having ever lived in a rural area (from the age of 5 onwards), and exposure to rural service in medical training (for a minimum of 6 months).\nBivariate associations and 95% confidence intervals were estimated using logistic regression. In Model 1, the influence of strong intrinsic/extrinsic motivation on willingness to accept postings to a rural area was assessed. Socio-demographic factors were added to the regression in Model 2, and rural exposure factors were further added in Model 3.", "Ghana is located on the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with an estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. 
There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC).\nIn Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice.", "Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. The data collection instruments were developed after seven focus group discussions of 6-8 participants in each group facilitated by trained social scientists were held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussion were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, were then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25], in computer laboratories in UG and KNUST.", "The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents voluntarily participated after the intent and the design of the study had been explained to them and signing informed consent forms.", "The study used STATA v10.1 for statistical analyses [26]. The main outcome of interest was the willingness to work in a deprived area after graduation. Students were asked to rate how likely they were to work in a deprived area on a scale from 1-4, where 1 represented \"I will definitely not work in a deprived area;\" 2 \"I am unlikely to work in a deprived area;\" 3 \"I am likely to work in a deprived area;\" and 4 \"I will definitely work in a deprived area.\" We collapsed this response set to a dichotomous willing (groups 3 or 4) versus unwilling (groups 1 or 2) to practice in a deprived area. We defined deprived area as \"a rural area that is distant from the big cities with few social amenities such as schools, roads, pipe-borne water, etc.\"--as per Ministry of Health definition.\nPredictors of interest included motivation (intrinsic and extrinsic), demographic characteristics, and rural exposure. Students were asked to identify the top three factors that motivated them to study medicine from a list of twelve factors identified as important by the focus group discussions. 
Five intrinsic motivations included: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one. Seven extrinsic motivation factors included: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities. Respondents were coded as having \"strong\" intrinsic or extrinsic motivation if two or more of their motivational factors were intrinsic or extrinsic. Thus, \"strong\" intrinsic and extrinsic motivation groups were mutually exclusive.\nDemographic factors included: sex, age, ethnicity (Akan vs. non-Akan), partnership status (in a relationship vs. not in a relationship), and parental parental professional and educational status (PPES). The Akan peoples (including the Asante, Fante, Kwahu, Akuapem, Bono and others) are the largest ethnic group, representing approximately half of Ghanaians; we grouped all other smaller ethnic groups (Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa) together as \"non-Akans\" for this analysis. High parental professional and educational status (PPES) was defined as having a mother and/or father who is a University-trained professional (e.g. doctor, lawyer, engineer, accountant, technical, etc) and low PPES was defined as neither mother nor father is a University-trained professional. Rural (an area with a population less than 5000) exposure factors included: birth location (urban vs. rural), location of pre-medical studies (urban/vs. rural), having ever lived in rural area (from the age of 5 onwards), exposure to rural service in medical training (for a minimum of 6 months).\nBivariate associations and 95% confidence intervals were estimated using logistic regression. In model 1, the influence of strong intrinsic/extrinsic motivation on willingness to accept postings to rural area were assessed. Socio-demographic factors were added to the regression in Model 2, and rural exposure factors were further added in Model 3.", " Socio-demographic characteristics Of the 310 eligible medical students, 307 participated in the survey (99%). Of these, five survey files were corrupted by viruses or lost due to computer malfunction; thus the analysis was conducted with 302 total records. The socio-demographic characteristics of respondents are presented in Table 1. Of the 302 respondents recruited for the study, the majority were male (183 or 60.6%), with a mean age of 22.9 (STD 1.4). Most respondents were born in or around urban areas (264 or 87.4%) and had never lived in rural underserved area (75.8%). In terms of socioeconomic status, 173 (57.3%) students came from high PPES families. About half of the respondents (142 or 47%) were exposed to rural service.\nDemographic characteristics and rural exposure of respondents (N = 302)\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High Family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low Family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents; rural area defined as a place with less than 5,000 residents\n4. From age five on\n5. 
Participated in outreach or service in a deprived area during medical studies
\n Intrinsic and extrinsic motivation and likelihood of working in underserved area The intensities of current motivational factors are presented in Table 2. Overall, 158 (55.4%) of the students stated that they were likely to or definitely would work in an underserved area. More than 6 in 10 respondents (181 or 63.5%) had strong intrinsic motivation. A higher proportion of respondents who had strong intrinsic motivation indicated willingness to work in a rural area, compared to those with weak intrinsic motivation (χ2 = 6.96, p = 0.008). These results were reversed for those with strong extrinsic motivation (χ2 = 6.12, p = 0.013).\nIntrinsic versus extrinsic motivations to study medicine predict reported likelihood to work in an under-served area, 285 Ghanaian Medical Students\n1. Excludes 17 missing values for willingness to work in a rural area and/or current motivation.\n2. Intrinsic motivation includes: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one; Pearson χ2(1) = 6.9592, p = 0.008.\n3. Extrinsic motivation includes: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities; Pearson χ2(1) = 6.1208, p = 0.013.
\n Regression analysis of motivations and the willingness to accept postings to rural underserved area after graduation We present the multivariate logistic regression results for the strength of intrinsic motivation and willingness to work in a rural underserved area after graduation in Table 3. In the unadjusted model (Model 1) there was a significant association between strong intrinsic motivation and willingness to work in a rural underserved area (OR 1.92 [95% CI 1.18-3.13]). In Model 2, with demographic factors, female gender and high PPES were associated with reduced willingness to practice in a deprived area (OR 0.50 [95% CI 0.29-0.88] and OR 0.42 [95% CI 0.24-0.71], respectively), while age was associated with greater willingness to practice in a rural area (OR 1.23 [95% CI 1.00-1.52]). Rural exposure factors were not significant when added to the model with intrinsic motivation and demographic factors (Model 3). In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of intrinsic motivation and willingness to work in a rural underserved area after graduation, Ghanaian medical students\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\nIn Table 4 the influence of strong extrinsic motivation on the willingness of students to work in a rural underserved area is presented. In Model 1, having a strong extrinsic motivation reduced the odds of being willing to accept a job in an underserved area (OR 0.54 [95% CI 0.33-0.88]). In the model adjusting for demographics, Model 2, female gender and high PPES were associated with reduced willingness to practice in underserved areas (OR 0.50 [95% CI 0.28-0.87] and OR 0.42 [95% CI 0.24-0.71], respectively), while age was associated with greater willingness to practice in a rural area (OR 1.24 [95% CI 1.00-1.53]). Rural exposure factors in Model 3 did not influence the outcome of willingness to work in a rural underserved area. In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of extrinsic motivation and willingness to work in a rural underserved area after graduation\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies", "Of the 310 eligible medical students, 307 participated in the survey (99%). Of these, five survey files were corrupted by viruses or lost due to computer malfunction; thus the analysis was conducted with 302 total records. The socio-demographic characteristics of respondents are presented in Table 1. Of the 302 respondents recruited for the study, the majority were male (183 or 60.6%), with a mean age of 22.9 (SD 1.4). Most respondents were born in or around urban areas (264 or 87.4%) and had never lived in a rural underserved area (75.8%). In terms of socioeconomic status, 173 (57.3%) students came from high PPES families. About half of the respondents (142 or 47%) were exposed to rural service.\nDemographic characteristics and rural exposure of respondents (N = 302)\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High Family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low Family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents; rural area defined as a place with less than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies", "The intensities of current motivational factors are presented in Table 2. Overall, 158 (55.4%) of the students stated that they were likely to or definitely would work in an underserved area. More than 6 in 10 respondents (181 or 63.5%) had strong intrinsic motivation. A higher proportion of respondents who had strong intrinsic motivation indicated willingness to work in a rural area, compared to those with weak intrinsic motivation (χ2 = 6.96, p = 0.008). These results were reversed for those with strong extrinsic motivation (χ2 = 6.12, p = 0.013).\nIntrinsic versus extrinsic motivations to study medicine predict reported likelihood to work in an under-served area, 285 Ghanaian Medical Students\n1. Excludes 17 missing values for willingness to work in a rural area and/or current motivation.\n2. Intrinsic motivation includes: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one; Pearson χ2(1) = 6.9592, p = 0.008.\n3. Extrinsic motivation includes: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities; Pearson χ2(1) = 6.1208, p = 0.013.", "We present the multivariate logistic regression results for the strength of intrinsic motivation and willingness to work in a rural underserved area after graduation in Table 3. 
In the unadjusted model (Model 1) there was a significant association between strong intrinsic motivation and willingness to work in rural underserved area (1.92[95% CI 1.18-3.13]). In Model 2, with demographic factors, female gender and high PPES were associated with reduced willingness to practice in a deprived area (0.50[95% CI 0.29-0.88]) and (0.42[95% CI 0.24, 0.71]), respectively, while age was associated with greater willingness to practice in a rural area (1.23[95% CI 1.00-1.52]). Rural exposure factors were not significant when added to the model with intrinsic motivation and demographic factors (Model 3). In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of intrinsic motivation and willingness to work in a rural underserved area after graduation, Ghanaian medical students\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies\nIn Table 4 the influence of strong extrinsic motivation on willingness of students to work in rural underserved area is presented. In Model 1, having a strong extrinsic motivation reduced the odds of being willing to accept a job in an underserved area to (0.54[95% CI 0.33-0.88]). In the model adjusting for demographics, Model 2, female gender and high PPES were associated with reduced willingness to practice in underserved areas (0.50[95% CI 0.28-0.87] and (0.42[95% CI 0.24-0.71]) respectively, while age was associated with greater willingness to practice in a rural area (1.24[95% CI 1.00-1.53]). Rural exposure factors in model 3 did not influence the outcome of willingness to work in rural underserved area. In the adjusted models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area.\nMultivariate logistic regression results for strength of extrinsic motivation and willingness to work in a rural underserved area after graduation\n1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples\n2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional\n3. Urban area defined as a place with more than 5,000 residents\n4. From age five on\n5. Participated in outreach or service in a deprived area during medical studies", "We found that twice as many students reported high intrinsic motivation compared to high extrinsic motivation to study and practice medicine. This may reflect the underlying altruistic motivation for many students entering a profession focused on serving others [12,13]. There may also be an element of social desirability bias in the students' responses as intrinsic motivation may be thought to be more socially acceptable than extrinsic motivation. 
Nonetheless, we found that high extrinsic motivation was associated with low self-reported likelihood of rural practice and that the converse was true for high intrinsic motivation [2,3]. Interestingly, this association lost statistical significance at the 95% confidence level in models with demographic and rural exposure confounders, whereas socioeconomic status (PPES) retained a highly influential role, as discussed below.\nIn this study, rural origin did not influence students' willingness to practice in rural areas after controlling for intrinsic/extrinsic motivation and demographic characteristics. This is in contrast with studies which have found rural origin to be an important motivator for rural practice [8,17,23]. Our findings highlight the heterogeneity of trends in motivation dynamics for rural practice and the importance of locally-relevant data for decision making. High PPES, measured using parental education and profession, was consistently associated with lack of willingness to work in rural areas. This is concerning as nearly 6 in 10 medical students in this cohort were from high PPES backgrounds--which is typical for Ghanaian medical schools [7]. These findings suggest that admission policies that favour well-to-do applicants may be reducing the pool of students willing to consider rural practice. Female gender was also strongly associated with reduced interest in rural practice, even after controlling for extrinsic/intrinsic motivation and rural exposure variables. This is consistent with similar studies among health staff which revealed that women are less likely to accept positions in remote areas for a variety of family reasons; they would like to live where their husbands' jobs are, have difficulties convincing their husbands to follow them to rural areas, and want their children to have better education in the urban areas [10,11,22]. The studies further explained that female doctors rarely live in the same village as their assigned post and have higher overall absentee rates in rural practice [19,20]. With increasing representation of female healthcare professionals in many places in sub-Saharan Africa [10,11], it is likely that the supply of health staff to rural underserved areas will remain a major challenge unless professional incentives are designed to attract more female students to rural practice. Although our study showed a lower proportion of female medical students in Ghana compared to other areas, they are likely to become a more important cadre in the coming years [11,19]. More research is urgently needed to determine how female healthcare professionals' motivations towards rural practice can be better engaged by policy-makers.\nThe limitations of this study include the possibility of social desirability bias in responses on motivation and likelihood of rural practice, as noted above. Although study participants were assured of anonymity and confidentiality in responding to the questions, some social desirability bias is likely. For this reason, we selected a measure of high intrinsic and extrinsic motivation for use in the regression models. Research comparing students' stated intentions with their actual career choices during internship is urgently needed as few studies on matched follow-ups are available. In addition, most of the students participating in the study were young and had not yet tasted the rigors of working in a rural area, which may affect their job preferences. 
Thus the findings of this study may not be applicable to practicing physicians. Finally, these findings are only generalizable to students in the current medical education system. The findings may be different if selection criteria for medical school admission change.\nThe major strengths of this study are its high response rate of 99% and its ability to capture an entire population of young medical students, who are one of the key target groups for addressing the rural-urban health staff recruitment imbalance. Surveying practicing physicians would have missed those who had already migrated.\nThis study has several implications. First, the majority of students profess high intrinsic motivation for rural service. More research is needed to determine the potency of this motivation source in real-life decision making and how best to engage it via HRH policy. It is possible that emphasizing the community service aspect of medical practice and elevating the status of rural primary care in undergraduate and postgraduate training may help narrow the gap between motivation and eventual career choice in favour of rural areas. In addition, well-supervised and supported rural placements in which students experience the rewards of rural practice may help to persuade students who are largely unfamiliar with rural life. However, the success of these rural rotations is likely to depend heavily on having adequate local infrastructure and mentorship [13,23].\nSecond, admission criteria may need to be reconsidered in light of the strong relationship between high PPES and lack of interest in rural practice. For example, medical school admission slots might be reserved for qualified students from poorer families. These students may not need to come from rural areas, as we found that none of the rural exposure factors were significant after controlling for motivation and demographics.\nThird, our results suggest that programmes to promote and support rural practice after graduation may have some success. Our focus groups and discrete choice experiment suggested that students may be willing to commit to short-term placements of 2 years or less in rural areas [9]. The Ministry of Health may want to consider the possibility of short contracts that rotate physicians in and out of difficult-to-staff rural areas.", "The majority of students still claim high intrinsic motivation, and it is therefore important to appeal to and build on this in medical school curricula and in designing rural postings. However, extrinsic motivation and, perhaps most importantly, PPES will likely continue to be important factors in decisions about job postings. Future research should explore how motivations could be directly supported, how motivations are formed, the influence of contextual factors on motivation among medical students, and motivation crowding among practicing health professionals in rural underserved areas. Qualitative work may be especially informative in this effort. Our research also suggests that increasing efforts to recruit medical students from low PPES backgrounds may be the most effective current pathway to increasing the yield of physicians willing to practice in underserved areas." ]
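The Statistical Analysis and Results texts above describe two concrete coding rules (collapsing the 1-4 likelihood scale to a binary outcome, and classifying respondents as having "strong" intrinsic or extrinsic motivation when two or more of their top three stated motivations fall in one category) plus a Pearson chi-square comparison (Table 2). The following is a minimal illustrative sketch of those steps in Python; the study itself used Stata v10.1, and all names here (INTRINSIC, EXTRINSIC, likelihood_rating, top3_motivations, strong_intrinsic, willing_rural) are assumptions for illustration, not the authors' code or data layout.

    # Illustrative sketch only; the original analysis was run in Stata v10.1.
    # All identifiers below are assumed names, not the study's actual schema.
    import pandas as pd
    from scipy.stats import chi2_contingency

    INTRINSIC = {"help_others", "give_back", "interest_in_medicine",
                 "role_model", "loss_of_loved_one"}
    EXTRINSIC = {"income", "job_security_lifestyle", "status_prestige",
                 "proposed_by_parents", "international_opportunities",
                 "new_technologies", "research_opportunities"}

    def code_respondent(likelihood_rating, top3_motivations):
        """Apply the coding rules described in the Statistical Analysis section."""
        # Collapse the 1-4 likelihood scale to a binary outcome: 3 or 4 = willing.
        willing_rural = int(likelihood_rating >= 3)
        # "Strong" motivation = at least two of the three named factors are of
        # that type, so the two "strong" groups are mutually exclusive.
        n_intrinsic = sum(m in INTRINSIC for m in top3_motivations)
        n_extrinsic = sum(m in EXTRINSIC for m in top3_motivations)
        return willing_rural, int(n_intrinsic >= 2), int(n_extrinsic >= 2)

    def table2_association(df):
        """Pearson chi-square of strong intrinsic motivation against stated
        willingness to work in a deprived area (the Table 2 comparison)."""
        crosstab = pd.crosstab(df["strong_intrinsic"], df["willing_rural"])
        chi2, p, dof, _ = chi2_contingency(crosstab, correction=False)
        return chi2, p, dof

Applied to coded survey records, table2_association would produce a test of the kind reported in Table 2 (χ2(1) = 6.96, p = 0.008 for intrinsic motivation); the exact values depend entirely on the underlying data.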
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[ "Health Manpower", "Motivation", "Rural Health Services", "Ghana" ]
Background: The World Health Organization estimates that more than 4 million health workers are needed to fill the health workforce gap globally. This includes 2.4 million physicians, nurses and midwives. Fifty-seven countries are defined as having a critical shortage of health staff; of these, 36 are in sub-Saharan Africa. Of the total world's health work force of 59.2 million, Africa has only 3% of the world's health workers in spite of having 25% of the global burden of disease [1,2]. The shortage of health staff cripples the health delivery system. It is also a threat to provision of essential, life-saving interventions such as childhood immunizations, provision of safe water, safe pregnancy and childbirth services for mothers as well as access to treatment for AIDS, tuberculosis and malaria. Health workers are critical to the global preparedness for and response to threats posed by emerging and epidemic-prone diseases. Different interventions have been tried to address these shortages [3-5]. Retaining health staff in rural areas has proven extremely difficult as young professionals increasingly prefer urban postings and health systems do not reward (and through neglect often penalize) rural service [6, 7. 8]. For example, Wilson and colleagues, 2009, Dovlo, 2003 and Kruk and colleagues, 2010, found that rural exposure, poor working conditions, low job satisfaction, political and ethnic problems, and sometimes, civil strife and poor security in most underserved areas, predispose new graduates to select cities [9-11]. Qualitative research has also shown the importance of health care providers' personal characteristics and value systems, such as religious beliefs and sociopolitical convictions, to their motivation towards rural practice. Emigration of skilled professionals to high-income countries is another barrier to adequate staffing of health facilities. A study in Ghana in 2006 on trainee physicians and nurses revealed that the majority had considered emigrating. More physicians (68%) than nurses (57%) considered emigration. These findings imply that achieving improvements in the health status of people living in low-income countries, and particularly, in rural areas, will be extremely difficult and the attainment of the United Nations Millennium Development Goals 4, 5, and 6 by 2015 [12,13], in Ghana is unlikely. While previous research has looked at incentives and working conditions to promote uptake of rural posts, few studies have focused on motivation crowding and its effect on willingness to accept postings to rural area. Motivation crowding [14] is the conflict between external factors (extrinsic), such as monetary incentives or punishments, and the underlying desire or willingness to work (intrinsic) in areas needed most. Students may have a mix of extrinsic and intrinsic motivations for studying medicine. Extrinsic factors may either undermine or strengthen intrinsic motivation, led by the belief that medicine has the imperative to help others, as enshrined in the Hippocratic Oath [15-17]. Current monetary incentives, which favour urban practice, may crowd-out the intrinsic desire to give back to society by working in underserved areas [18,19]. This could have debilitating effects on health worker retention in rural areas [20-22]. To tackle the maldistribution of human resources for health (HRH), understanding the factors that crowd-out the intrinsic motivation of medical students and their willingness to accept postings to rural underserved area is integral. 
This paper analyzes the effect of extrinsic versus intrinsic motivational factors on stated willingness to accept postings to rural underserved areas in Ghana. Methods: Study site Ghana is located in the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC). In Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice. Ghana is located in the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities, and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC). In Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study. All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice. 
Data collection Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. The data collection instruments were developed after seven focus group discussions of 6-8 participants in each group facilitated by trained social scientists were held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussion were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, were then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25], in computer laboratories in UG and KNUST. Data collection was preceded by discussions with the heads of medical training institutions, who informed the content of the questionnaire and provided access to the student population. The data collection instruments were developed after seven focus group discussions of 6-8 participants in each group facilitated by trained social scientists were held with third and fifth year medical students at UG and KNUST. The themes for the focus group discussion were motivation, willingness to work in deprived areas, experience in the field, and the influence of background characteristics on willingness to work in deprived areas. The survey instrument, which included semi-structured questions as well as a discrete choice experiment, were then pretested and finalized for the study. The questionnaires were administered electronically with Sawtooth Software SSI Web CAPI [25], in computer laboratories in UG and KNUST. Ethical considerations The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents voluntarily participated after the intent and the design of the study had been explained to them and signing informed consent forms. The study received ethics approval from the Ghana Health Service Ethical Review Committee; the UG Medical School; the KNUST Committee on Human Research, Publications, and Ethics; and the University of Michigan Institutional Review Board. All respondents voluntarily participated after the intent and the design of the study had been explained to them and signing informed consent forms. Statistical Analysis The study used STATA v10.1 for statistical analyses [26]. The main outcome of interest was the willingness to work in a deprived area after graduation. Students were asked to rate how likely they were to work in a deprived area on a scale from 1-4, where 1 represented "I will definitely not work in a deprived area;" 2 "I am unlikely to work in a deprived area;" 3 "I am likely to work in a deprived area;" and 4 "I will definitely work in a deprived area." We collapsed this response set to a dichotomous willing (groups 3 or 4) versus unwilling (groups 1 or 2) to practice in a deprived area. We defined deprived area as "a rural area that is distant from the big cities with few social amenities such as schools, roads, pipe-borne water, etc."--as per Ministry of Health definition. 
Predictors of interest included motivation (intrinsic and extrinsic), demographic characteristics, and rural exposure. Students were asked to identify the top three factors that motivated them to study medicine from a list of twelve factors identified as important by the focus group discussions. Five intrinsic motivations included: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one. Seven extrinsic motivation factors included: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities. Respondents were coded as having "strong" intrinsic or extrinsic motivation if two or more of their motivational factors were intrinsic or extrinsic. Thus, "strong" intrinsic and extrinsic motivation groups were mutually exclusive. Demographic factors included: sex, age, ethnicity (Akan vs. non-Akan), partnership status (in a relationship vs. not in a relationship), and parental parental professional and educational status (PPES). The Akan peoples (including the Asante, Fante, Kwahu, Akuapem, Bono and others) are the largest ethnic group, representing approximately half of Ghanaians; we grouped all other smaller ethnic groups (Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa) together as "non-Akans" for this analysis. High parental professional and educational status (PPES) was defined as having a mother and/or father who is a University-trained professional (e.g. doctor, lawyer, engineer, accountant, technical, etc) and low PPES was defined as neither mother nor father is a University-trained professional. Rural (an area with a population less than 5000) exposure factors included: birth location (urban vs. rural), location of pre-medical studies (urban/vs. rural), having ever lived in rural area (from the age of 5 onwards), exposure to rural service in medical training (for a minimum of 6 months). Bivariate associations and 95% confidence intervals were estimated using logistic regression. In model 1, the influence of strong intrinsic/extrinsic motivation on willingness to accept postings to rural area were assessed. Socio-demographic factors were added to the regression in Model 2, and rural exposure factors were further added in Model 3. The study used STATA v10.1 for statistical analyses [26]. The main outcome of interest was the willingness to work in a deprived area after graduation. Students were asked to rate how likely they were to work in a deprived area on a scale from 1-4, where 1 represented "I will definitely not work in a deprived area;" 2 "I am unlikely to work in a deprived area;" 3 "I am likely to work in a deprived area;" and 4 "I will definitely work in a deprived area." We collapsed this response set to a dichotomous willing (groups 3 or 4) versus unwilling (groups 1 or 2) to practice in a deprived area. We defined deprived area as "a rural area that is distant from the big cities with few social amenities such as schools, roads, pipe-borne water, etc."--as per Ministry of Health definition. Predictors of interest included motivation (intrinsic and extrinsic), demographic characteristics, and rural exposure. Students were asked to identify the top three factors that motivated them to study medicine from a list of twelve factors identified as important by the focus group discussions. 
Study site: Ghana is located on the west coast of Africa with an estimated population of 23 million [23]. It is mainly an agrarian economy with an estimated per capita income of USD 1500. More than two-thirds of the economy is rural, with the cities of Kumasi and Accra having the highest population densities due to brisk economic activities and relatively strong economic and cultural infrastructure [24]. These attributes make the two main cities destinations for rural-urban migration and create a highly heterogeneous socio-cultural context. There are four medical schools in Ghana: the University of Ghana (UG), Kwame Nkrumah University of Science and Technology (KNUST), University for Development Studies (UDS), and University of Cape Coast (UCC). In Ghana, medical education consists of three years of basic science/para-clinical studies, three years of clinical training at a teaching hospital, and a two-year rotating housemanship. The study was conducted with two public universities in Ghana: University of Ghana (UG) in Accra and Kwame Nkrumah University of Science and Technology (KNUST) in Kumasi. These universities were selected because all the fourth year medical students in the public universities had their clinical training at either UG or KNUST at the time of the study.
All 310 fourth year medical students in the country were invited to participate in the study; no sampling was conducted. Fourth-year medical students were selected because they had completed the BSc. Human Biology and had also been exposed to field work, but had not yet made their final decisions about rural or urban practice.
Results: Socio-demographic characteristics Of the 310 eligible medical students, 307 participated in the survey (99%). Of these, five survey files were corrupted by viruses or lost due to computer malfunction; thus the analysis was conducted with 302 total records. The socio-demographic characteristics of respondents are presented in Table 1. Of the 302 respondents recruited for the study, the majority were male (183 or 60.6%), with a mean age of 22.9 (SD 1.4). Most respondents were born in or around urban areas (264 or 87.4%) and had never lived in a rural underserved area (75.8%). In terms of socioeconomic status, 173 (57.3%) students came from high PPES families. About half of the respondents (142 or 47%) were exposed to rural service. Demographic characteristics and rural exposure of respondents (N = 302) 1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples 2. High Family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low Family PPES: Neither mother nor father is a University-trained Professional 3. Urban area defined as a place with more than 5,000 residents; rural area defined as a place with less than 5,000 residents 4. From age five on 5. Participated in outreach or service in a deprived area during medical studies
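Descriptive frequencies of the kind summarised in Table 1 can be tabulated directly from the survey records. The snippet below is a generic illustration with hypothetical column names and toy data, not the authors' code.

```python
# Generic sketch of tabulating Table 1-style descriptive frequencies.
# Column names and the toy records are hypothetical, for illustration only.
import pandas as pd

df = pd.DataFrame({
    "sex": ["male", "male", "female", "male", "female"],
    "high_ppes": [1, 0, 1, 1, 0],
    "rural_service": [0, 1, 1, 0, 1],
    "age": [22, 23, 24, 22, 25],
})

for col in ["sex", "high_ppes", "rural_service"]:
    counts = df[col].value_counts()
    pct = (100 * counts / len(df)).round(1)
    summary = pd.DataFrame({"n": counts, "percent": pct})
    print(f"\n{col}\n{summary}")

print(f"\nage: mean {df['age'].mean():.1f}, SD {df['age'].std():.1f}")
```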
Intrinsic and extrinsic motivation and likelihood of working in underserved area The intensities of current motivational factors are presented in Table 2. Overall, 158 (55.4%) of students stated that they were likely to or definitely would work in an underserved area. More than 6 in 10 respondents (181 or 63.5%) had strong intrinsic motivation. A higher proportion of respondents who had strong intrinsic motivation indicated willingness to work in a rural area, compared to those with weak intrinsic motivation (χ2 = 6.96, p = 0.008). These results were reversed for those with strong extrinsic motivation (χ2 = 6.12, p = 0.013). Intrinsic versus extrinsic motivations to study medicine predict reported likelihood to work in an under-served area, 285 Ghanaian Medical Students 1. Excludes 17 missing values for willingness to work rural and/or current motivation. 2. Intrinsic motivation includes: desire to help others, desire to give back to their home community or country, interest in medicine as a subject matter, inspiration by a role model, and loss of a loved one; Pearson χ2(1) = 6.9592, p = 0.008. 3. Extrinsic motivation includes: income of physicians, job security and lifestyle, social status/prestige, proposed by parents, opportunities to travel and work internationally, ability to use new cutting edge technologies, and research opportunities; Pearson χ2(1) = 6.1208, p = 0.013.
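For readers who want to reproduce this kind of comparison, the sketch below runs a Pearson chi-square test on a 2x2 cross-tabulation. The individual cell counts are not given in the text, so the values used here are illustrative figures chosen only to be consistent with the reported totals (181 with strong intrinsic motivation, 158 willing, 285 students overall) and to yield approximately the reported statistic.

```python
# Chi-square test of strong intrinsic motivation against stated willingness to
# work in an underserved area. The cell counts below are illustrative values
# consistent with the reported totals, not the published cross-tabulation.
import numpy as np
from scipy.stats import chi2_contingency

#                   willing  unwilling
table = np.array([[111, 70],   # strong intrinsic motivation (n = 181)
                  [47, 57]])   # weak intrinsic motivation (n = 104)

# correction=False gives the uncorrected Pearson chi-square reported in Table 2
chi2_stat, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2_stat:.2f}, p = {p:.3f}")  # ~6.96, p ~ 0.008
```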
Regression analysis of motivations and the willingness to accept postings to rural underserved area after graduation We present the multivariate logistic regression results for the strength of intrinsic motivation and willingness to work in a rural underserved area after graduation in Table 3. In the unadjusted model (Model 1) there was a significant association between strong intrinsic motivation and willingness to work in a rural underserved area (1.92 [95% CI 1.18-3.13]). In Model 2, with demographic factors, female gender and high PPES were associated with reduced willingness to practice in a deprived area (0.50 [95% CI 0.29-0.88]) and (0.42 [95% CI 0.24-0.71]), respectively, while age was associated with greater willingness to practice in a rural area (1.23 [95% CI 1.00-1.52]). Rural exposure factors were not significant when added to the model with intrinsic motivation and demographic factors (Model 3). In the adjusted Models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area. Multivariate logistic regression results for strength of intrinsic motivation and willingness to work in a rural underserved area after graduation, Ghanaian medical students 1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples 2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional 3. Urban area defined as a place with more than 5,000 residents 4. From age five on 5. Participated in outreach or service in a deprived area during medical studies In Table 4 the influence of strong extrinsic motivation on the willingness of students to work in a rural underserved area is presented. In Model 1, having a strong extrinsic motivation reduced the odds of being willing to accept a job in an underserved area (0.54 [95% CI 0.33-0.88]). In the model adjusting for demographics, Model 2, female gender and high PPES were associated with reduced willingness to practice in underserved areas (0.50 [95% CI 0.28-0.87]) and (0.42 [95% CI 0.24-0.71]), respectively, while age was associated with greater willingness to practice in a rural area (1.24 [95% CI 1.00-1.53]). Rural exposure factors in Model 3 did not influence the outcome of willingness to work in a rural underserved area. In the adjusted Models 2 and 3, motivation was no longer a significant predictor of willingness to practice in a deprived area. Multivariate logistic regression results for strength of extrinsic motivation and willingness to work in a rural underserved area after graduation 1. Akan includes Asante, Fante, Kwahu, Akuapim, Bono, etc; Non-Akan includes Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grussi, Gruma, and Hausa peoples 2. High family PPES: Mother and/or father is a University-trained Professional (e.g. doctor, lawyer, engineer, accountant, technical, etc); Low family PPES: Neither mother nor father is a University-trained Professional 3. Urban area defined as a place with more than 5,000 residents 4. From age five on 5. Participated in outreach or service in a deprived area during medical studies
Discussion: We found that twice as many students reported high intrinsic motivation compared to high extrinsic motivation to study and practice medicine. This may reflect the underlying altruistic motivation for many students entering a profession focused on serving others [12,13]. There may also be an element of social desirability bias in the students' responses as intrinsic motivation may be thought to be more socially acceptable than extrinsic motivation. Nonetheless, we found that high extrinsic motivation was associated with low self-reported likelihood of rural practice and that the converse was true for high intrinsic motivation [2,3]. Interestingly, this association lost statistical significance at the 95% confidence level in models with demographic and rural exposure confounders, whereas socioeconomic status (PPES) retained a highly influential role, as discussed below. In this study, rural origin did not influence students' willingness to practice in rural areas after controlling for intrinsic/extrinsic motivation and demographic characteristics. This is in contrast with studies which have found rural origin to be an important motivator for rural practice [8,17,23].
Our findings highlight the heterogeneity of trends in motivation dynamics for rural practice and the importance of locally-relevant data for decision making. High PPES, measured using parental education and profession, was consistently associated with lack of willingness to work in rural areas. This is concerning as nearly 6 in 10 medical students in this cohort were from high PPES backgrounds--which is typical for Ghanaian medical schools [7]. These findings suggest that admission policies that favour well-to-do applicants may be reducing the pool of students willing to consider rural practice. Female gender was also strongly associated with reduced interest in rural practice, even after controlling for extrinsic/intrinsic motivation and rural exposure variables. This is consistent with similar studies among health staff which revealed that women are less likely to accept positions in remote areas for a variety of family reasons: they would like to live where their husbands' jobs are, have difficulties convincing their husbands to follow them to rural areas, and want their children to have better education in the urban areas [10,11,22]. The studies further explained that female doctors rarely live in the same village as their assigned post and have higher overall absentee rates in rural practice [19,20]. With increasing representation of female healthcare professionals in many places in sub-Saharan Africa [10,11], the supply of health staff to rural underserved areas is likely to remain a major challenge unless professional incentives and postings are designed to attract more female graduates to rural practice. Although our study showed a lower proportion of female medical students in Ghana compared to other areas, they are likely to become a more important cadre in the coming years [11,19]. More research is urgently needed to determine how female healthcare professionals' motivations towards rural practice can be better engaged by policy-makers. The limitations of this study include the possibility of social desirability bias in responses on motivation and likelihood of rural practice, as noted above. Despite the fact that study participants were assured of anonymity and confidentiality in responding to the questions, some social desirability bias is likely. For this reason, we selected a measure of high intrinsic and extrinsic motivation for use in the regression models. Research comparing students' stated intentions with their actual career choices during internship is urgently needed as few studies on matched follow-ups are available. In addition, most of the students participating in the study were young and had not yet tasted the rigors of working in a rural area, which may affect their job preferences. Thus the findings of this study may not be applicable to practicing physicians. Finally, these findings are only generalizable to students in the current medical education system. The findings may be different if selection criteria for medical school admission change. The major strengths of this study are its high response rate of 99% and its ability to capture an entire population of young medical students, who are one of the key groups for addressing the rural-urban health staff recruitment imbalance. Surveying practicing physicians would have missed those who had already migrated. This study has several implications. First, the majority of students profess high intrinsic motivation for rural service.
More research is needed to determine the potency of this motivation source in real-life decision making and how to best engage it via HRH policy. It is possible that emphasizing the community service aspect of medical practice and elevating the status of rural primary care in under-graduate and post-graduate training may help narrow the gap between motivation and eventual career choice in favour of rural areas. In addition, well-supervised and supported rural placements in which students experience the rewards of rural practice may help to persuade students who are largely unfamiliar with rural life. However, the success of these rural rotations is likely to depend heavily on having adequate local infrastructure and mentorship [13,23]. Second, admission criteria may need to be reconsidered in light of the strong relationship between high PPES and lack of interest in rural practice. For example, medical school admission slots might be reserved for qualified students from poorer families. These students may not need to come from rural areas as we found that none of the rural exposure factors were significant after controlling for motivation and demographics. Third, our results suggest that programmes to promote and support rural practice after graduation may have some success. Our focus groups and discrete choice experiment suggested that students may be willing to commit to short-term placements of 2 years or less in rural areas [9]. The Ministry of Health may want to consider the possibility of short contracts that rotate physicians in and out of difficult to staff rural areas. Conclusions: The majority of students still claim high intrinsic motivation and therefore it is important to appeal and build on this in medical school curricula and in designing rural postings. However, extrinsic motivation and, perhaps most importantly, PPES, will likely continue to be an important factor in deciding on job postings. Future research should explore how motivations could be directly supported, how motivations are formed, the influence of contextual factors on motivation among medical students, and motivation crowding among practicing health professionals in rural underserved areas. Qualitative work may be especially informative in this effort. Our research also suggests that increasing efforts to recruit medical students from low PPES backgrounds may be the most effective current pathway to increasing the yield of physicians willing to practice in underserved areas.
Background: Retaining health workers in rural areas is challenging for a number of reasons, ranging from personal preferences to difficult work conditions and low remuneration. This paper assesses the influence of intrinsic and extrinsic motivation on willingness to accept postings to deprived areas among medical students in Ghana. Methods: A computer-based survey involving 302 fourth year medical students was conducted from May-August 2009. Logistic regression was used to assess the association between students' willingness to accept rural postings and their professional motivations, rural exposure and family parental professional and educational status (PPES). Results: Over 85% of students were born in urban areas and 57% came from affluent backgrounds. Nearly two-thirds of students reported strong intrinsic motivation to study medicine. After controlling for demographic characteristics and rural exposure, motivational factors did not influence willingness to practice in rural areas. High family PPES was consistently associated with lower willingness to work in rural areas. Conclusions: Although most Ghanaian medical students are motivated to study medicine by the desire to help others, this does not translate into willingness to work in rural areas. Efforts should be made to build on intrinsic motivation during medical training and in designing rural postings, as well as favour lower PPES students for admission.
Background: The World Health Organization estimates that more than 4 million health workers are needed to fill the health workforce gap globally. This includes 2.4 million physicians, nurses and midwives. Fifty-seven countries are defined as having a critical shortage of health staff; of these, 36 are in sub-Saharan Africa. Of the total world's health work force of 59.2 million, Africa has only 3% of the world's health workers in spite of having 25% of the global burden of disease [1,2]. The shortage of health staff cripples the health delivery system. It is also a threat to provision of essential, life-saving interventions such as childhood immunizations, provision of safe water, safe pregnancy and childbirth services for mothers as well as access to treatment for AIDS, tuberculosis and malaria. Health workers are critical to the global preparedness for and response to threats posed by emerging and epidemic-prone diseases. Different interventions have been tried to address these shortages [3-5]. Retaining health staff in rural areas has proven extremely difficult as young professionals increasingly prefer urban postings and health systems do not reward (and through neglect often penalize) rural service [6, 7. 8]. For example, Wilson and colleagues, 2009, Dovlo, 2003 and Kruk and colleagues, 2010, found that rural exposure, poor working conditions, low job satisfaction, political and ethnic problems, and sometimes, civil strife and poor security in most underserved areas, predispose new graduates to select cities [9-11]. Qualitative research has also shown the importance of health care providers' personal characteristics and value systems, such as religious beliefs and sociopolitical convictions, to their motivation towards rural practice. Emigration of skilled professionals to high-income countries is another barrier to adequate staffing of health facilities. A study in Ghana in 2006 on trainee physicians and nurses revealed that the majority had considered emigrating. More physicians (68%) than nurses (57%) considered emigration. These findings imply that achieving improvements in the health status of people living in low-income countries, and particularly, in rural areas, will be extremely difficult and the attainment of the United Nations Millennium Development Goals 4, 5, and 6 by 2015 [12,13], in Ghana is unlikely. While previous research has looked at incentives and working conditions to promote uptake of rural posts, few studies have focused on motivation crowding and its effect on willingness to accept postings to rural area. Motivation crowding [14] is the conflict between external factors (extrinsic), such as monetary incentives or punishments, and the underlying desire or willingness to work (intrinsic) in areas needed most. Students may have a mix of extrinsic and intrinsic motivations for studying medicine. Extrinsic factors may either undermine or strengthen intrinsic motivation, led by the belief that medicine has the imperative to help others, as enshrined in the Hippocratic Oath [15-17]. Current monetary incentives, which favour urban practice, may crowd-out the intrinsic desire to give back to society by working in underserved areas [18,19]. This could have debilitating effects on health worker retention in rural areas [20-22]. To tackle the maldistribution of human resources for health (HRH), understanding the factors that crowd-out the intrinsic motivation of medical students and their willingness to accept postings to rural underserved area is integral. 
This paper analyzes the effect of extrinsic versus intrinsic motivational factors on stated willingness to accept postings to rural underserved areas in Ghana. Conclusions: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6920/11/56/prepub
Background: Retaining health workers in rural areas is challenging for a number of reasons, ranging from personal preferences to difficult work conditions and low remuneration. This paper assesses the influence of intrinsic and extrinsic motivation on willingness to accept postings to deprived areas among medical students in Ghana. Methods: A computer-based survey involving 302 fourth year medical students was conducted from May-August 2009. Logistic regression was used to assess the association between students' willingness to accept rural postings and their professional motivations, rural exposure and family parental professional and educational status (PPES). Results: Over 85% of students were born in urban areas and 57% came from affluent backgrounds. Nearly two-thirds of students reported strong intrinsic motivation to study medicine. After controlling for demographic characteristics and rural exposure, motivational factors did not influence willingness to practice in rural areas. High family PPES was consistently associated with lower willingness to work in rural areas. Conclusions: Although most Ghanaian medical students are motivated to study medicine by the desire to help others, this does not translate into willingness to work in rural areas. Efforts should be made to build on intrinsic motivation during medical training and in designing rural postings, as well as favour lower PPES students for admission.
9,073
243
[ 670, 303, 153, 64, 642, 2418, 297, 257, 635, 1072, 140 ]
12
[ "rural", "area", "motivation", "work", "willingness", "intrinsic", "medical", "students", "extrinsic", "deprived" ]
[ "adequate staffing health", "shortage health", "health professionals rural", "supply health staff", "fill health workforce" ]
null
[CONTENT] Health Manpower | Motivation | Rural Health Services | Ghana [SUMMARY]
[CONTENT] Health Manpower | Motivation | Rural Health Services | Ghana [SUMMARY]
null
[CONTENT] Health Manpower | Motivation | Rural Health Services | Ghana [SUMMARY]
[CONTENT] Health Manpower | Motivation | Rural Health Services | Ghana [SUMMARY]
[CONTENT] Health Manpower | Motivation | Rural Health Services | Ghana [SUMMARY]
[CONTENT] Choice Behavior | Data Collection | Female | Ghana | Humans | Logistic Models | Male | Motivation | Professional Practice Location | Rural Health Services | Students, Medical | Young Adult [SUMMARY]
[CONTENT] Choice Behavior | Data Collection | Female | Ghana | Humans | Logistic Models | Male | Motivation | Professional Practice Location | Rural Health Services | Students, Medical | Young Adult [SUMMARY]
null
[CONTENT] Choice Behavior | Data Collection | Female | Ghana | Humans | Logistic Models | Male | Motivation | Professional Practice Location | Rural Health Services | Students, Medical | Young Adult [SUMMARY]
[CONTENT] Choice Behavior | Data Collection | Female | Ghana | Humans | Logistic Models | Male | Motivation | Professional Practice Location | Rural Health Services | Students, Medical | Young Adult [SUMMARY]
[CONTENT] Choice Behavior | Data Collection | Female | Ghana | Humans | Logistic Models | Male | Motivation | Professional Practice Location | Rural Health Services | Students, Medical | Young Adult [SUMMARY]
[CONTENT] adequate staffing health | shortage health | health professionals rural | supply health staff | fill health workforce [SUMMARY]
[CONTENT] adequate staffing health | shortage health | health professionals rural | supply health staff | fill health workforce [SUMMARY]
null
[CONTENT] adequate staffing health | shortage health | health professionals rural | supply health staff | fill health workforce [SUMMARY]
[CONTENT] adequate staffing health | shortage health | health professionals rural | supply health staff | fill health workforce [SUMMARY]
[CONTENT] adequate staffing health | shortage health | health professionals rural | supply health staff | fill health workforce [SUMMARY]
[CONTENT] rural | area | motivation | work | willingness | intrinsic | medical | students | extrinsic | deprived [SUMMARY]
[CONTENT] rural | area | motivation | work | willingness | intrinsic | medical | students | extrinsic | deprived [SUMMARY]
null
[CONTENT] rural | area | motivation | work | willingness | intrinsic | medical | students | extrinsic | deprived [SUMMARY]
[CONTENT] rural | area | motivation | work | willingness | intrinsic | medical | students | extrinsic | deprived [SUMMARY]
[CONTENT] rural | area | motivation | work | willingness | intrinsic | medical | students | extrinsic | deprived [SUMMARY]
[CONTENT] health | rural | areas | world health | world | countries | incentives | workers | nurses | health workers [SUMMARY]
[CONTENT] work deprived | area | deprived | work deprived area | deprived area | rural | university | included | ghana | ug [SUMMARY]
null
[CONTENT] motivation | increasing | important | postings | underserved areas | students | medical | motivations | ppes | underserved [SUMMARY]
[CONTENT] rural | area | motivation | intrinsic | work | willingness | students | medical | deprived | university [SUMMARY]
[CONTENT] rural | area | motivation | intrinsic | work | willingness | students | medical | deprived | university [SUMMARY]
[CONTENT] ||| Ghana [SUMMARY]
[CONTENT] 302 fourth year | May-August 2009 ||| PPES [SUMMARY]
null
[CONTENT] Ghanaian ||| PPES [SUMMARY]
[CONTENT] ||| Ghana ||| 302 fourth year | May-August 2009 ||| PPES ||| Over 85% | 57% ||| Nearly two-thirds ||| ||| PPES ||| Ghanaian ||| PPES [SUMMARY]
[CONTENT] ||| Ghana ||| 302 fourth year | May-August 2009 ||| PPES ||| Over 85% | 57% ||| Nearly two-thirds ||| ||| PPES ||| Ghanaian ||| PPES [SUMMARY]
Evaluation of the influence of genetic variants in Cereblon gene on the response to the treatment of erythema nodosum leprosum with thalidomide.
36383784
Erythema nodosum leprosum (ENL) is an acute and systemic inflammatory reaction of leprosy characterised by painful nodules and involvement of various organs. Thalidomide is an immunomodulatory and anti-inflammatory drug currently used to treat this condition. Cereblon (CRBN) protein is the primary target of thalidomide, and it has been pointed out as necessary for the efficacy of this drug in other therapeutic settings.
BACKGROUND
A total of 103 ENL patients in treatment with thalidomide were included in this study. DNA samples were obtained from saliva and molecular analysis of the CRBN gene was performed to investigate the variants rs1620675, rs1672770 and rs4183. Different genotypes of CRBN variants were evaluated in relation to their influence on the dose of thalidomide and on the occurrence of adverse effects.
METHODS
No association was found between CRBN variants and thalidomide dose variation. However, the genotypes of rs1672770 showed association with gastrointestinal effects (p = 0.040). Moreover, the haplotype DEL/C/T (rs4183/rs1672770/rs1620675) was also associated with gastrointestinal adverse effects (p = 0.015).
FINDINGS
Our results show that CRBN variants affect the treatment of ENL with thalidomide, especially the adverse effects related to the drug.
MAIN CONCLUSIONS
[ "Humans", "Erythema Nodosum", "Thalidomide", "Leprosy, Lepromatous", "Leprostatic Agents", "Leprosy, Multibacillary" ]
9668341
null
null
null
null
RESULTS
In total, 103 ENL patients were included, being 82 (79.6%) male and 79 (76.7%) presenting LL (Table I). Forty-six (44.7%) patients were using MDT and two of these used an alternative MDT regimen. The mean time of treatment with thalidomide was 226 days.
TABLE I: Clinical and demographic characteristics of ENL patients (n = 103), Characteristic - n (%)
Male - 82 (79.6)
MDT for leprosy - 46 (44.7)
MDT time in months, median (P25/P75)* - 4.3 (1.5/10.9)
Other medications - 81 (78.6)
Thalidomide dose, median (P25/P75) - 100 (100/200)
Days of consultation, mean (min/max) - 108.19 (0/721)
Patient origin: Northeast Region - 89 (86.4); North Region - 14 (13.6)
Leprosy classification: Borderline-lepromatous - 24 (23.3); Lepromatous-leprosy - 79 (76.7)
Adverse effects: Neurological (a) - 31 (30.1); Gastrointestinal (b) - 18 (17.5); Musculoskeletal (c) - 29 (28.2); Ocular (d) - 19 (18.4); Oedema - 16 (15.6); Dermatological (e) - 2 (1.9); Fever - 11 (10.7)
a: paresthesias, dizziness, tremor, neuritis and headache; b: diarrhoea, vomiting, gastric fullness and constipation; c: myalgia, arthralgia and weakness; d: decreased visual acuity and eye irritation; e: pruritus, dry skin and hair loss; *: time of multidrug therapy (MDT) when patients developed erythema nodosum leprosum (ENL) (n = 40).
The genotypic distributions were in Hardy-Weinberg equilibrium and the allelic and genotypic frequencies of the polymorphisms are shown in Table II. A LD [rs4183/rs1672770 (r2 = 0.513), rs4183/rs1620675 (r2 = 0.943), rs1672770/rs1620675 (r2 = 0.567)] between the variants studied was identified (see Supplementary Table) and four haplotypes were identified in the sample (Table III). For all single nucleotide polymorphisms (SNPs), there was no association of the thalidomide dose with time of treatment (Table IV). In addition, no association was found between haplotypes and thalidomide dose.
TABLE II: Genotypic and allelic frequencies of CRBN polymorphisms in ENL patients on treatment with thalidomide, n (%)
rs1620675: AA 31 (30.1); AC 53 (51.5); CC 19 (18.4); allele A 115 (55.8); allele C 91 (44.2)
rs1672770*: AA 31 (30.1); AG 52 (50.5); GG 17 (16.5); allele A 114 (55.3); allele G 86 (44.7)
rs4183: INS/INS 19 (18.4); INS/DEL 56 (54.4); DEL/DEL 28 (27.2); allele INS 94 (45.6); allele DEL 115 (54.4)
DEL: deletion; ENL: erythema nodosum leprosum; INS: insertion; MDT: multidrug therapy; *: n = 100.
TABLE III: Haplotype frequencies of CRBN, n (%)
INS/C/T - 3 (1.5); INS/T/G - 92 (44.7); DEL/C/T - 83 (40.3); DEL/T/T - 28 (13.6)
Haplotypes in the following order: rs4183/rs1672770/rs1620675. DEL: deletion; CRBN: cereblon gene; INS: insertion.
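The Hardy-Weinberg check and the pairwise r2 values quoted above follow standard formulas. The snippet below is an illustration only (the study itself used SPSS, MLocus and PHASE): the Hardy-Weinberg test uses the rs4183 genotype counts from Table II, and the r2 calculation uses approximate allele and haplotype frequencies read from Tables II-III, which roughly reproduces the reported value of 0.513 for rs4183/rs1672770.

```python
# Sketch of a Hardy-Weinberg chi-square test (rs4183 counts from Table II) and
# of the r^2 linkage-disequilibrium statistic from haplotype frequencies.
# Illustration only; not the MLocus/PHASE/SPSS workflow used in the study.
from scipy.stats import chi2

# rs4183 genotype counts (Table II): INS/INS, INS/DEL, DEL/DEL
obs = {"II": 19, "ID": 56, "DD": 28}
n = sum(obs.values())
p_ins = (2 * obs["II"] + obs["ID"]) / (2 * n)   # INS allele frequency (~0.456)
p_del = 1 - p_ins

# Expected genotype counts under Hardy-Weinberg equilibrium
exp = {"II": n * p_ins ** 2, "ID": 2 * n * p_ins * p_del, "DD": n * p_del ** 2}
stat = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
print(f"HWE chi2 = {stat:.3f}, p = {chi2.sf(stat, df=1):.3f}")

# r^2 between two loci: r^2 = D^2 / (pA(1-pA) pB(1-pB)), with D = p_AB - pA*pB
def r_squared(p_ab, p_a, p_b):
    d = p_ab - p_a * p_b
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Approximate frequencies from Tables II-III: DEL allele of rs4183 (~0.544),
# C allele of rs1672770 from haplotype counts ((3 + 83) / 206 ~ 0.417), and
# the DEL/C haplotype (83 / 206 ~ 0.403); this gives r^2 ~ 0.51.
print(f"r2(rs4183, rs1672770) ~ {r_squared(0.403, 0.544, 0.417):.3f}")
```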
TABLE IVAnalysis of interaction between CRBN genotype and time related to estimated thalidomide dose using the GEE model in ENL treatmentPolymorphism Interactionp-valuers1620675rs16206750.079Concomitant MDT for leprosy0.271Other medications0.205Region0.138Time0.001rs1672770rs1620675*Time0.392rs16727700.229Concomitant MDT for leprosy0.296Other medications0.071Region0.085Time0.001rs1672770*Time0.273rs4183rs41830.098Concomitant MDT for leprosy0.293Other medications0.208Region0.117Time0.001rs4183*Time0.669Dependent variable: thalidomide dose. GEE model: region, MDT, other medications, thalidomide dose, genotype, time, genotype*Time). CRBN: cereblon; ENL: erythema nodosum leprosum; GEE: generalised estimating equation model; MDT: multidrug therapy. Dependent variable: thalidomide dose. GEE model: region, MDT, other medications, thalidomide dose, genotype, time, genotype*Time). CRBN: cereblon; ENL: erythema nodosum leprosum; GEE: generalised estimating equation model; MDT: multidrug therapy. In regard to the relationship of CRBN variants and adverse effects of the use of thalidomide, it was found associations between the genotypes of rs1672770 (p = 0.040) and gastrointestinal effects, which include diarrhoea, vomiting, nausea, constipation and inappetence (Tables V and VI). Haplotype analysis was performed with all SNPs because only one combination of SNPs showed r2>0.8 (rs4183/rs1620675 with r2 = 0.943). The three polymorphisms were maintained in the analysis because when combining rs4183/rs1672770 or rs1672770/rs1620675, it showed r2<0.8. The analysis of the association between haplotypes and adverse effects also showed an association of gastrointestinal adverse effects (p = 0.015) with haplotype DEL/C/T (rs4183/rs1672770/rs1620675) (Tables VII and VIII). These effects were present in eight patients (9.6%) who carried this haplotype. TABLE VAnalysis of the association of CRBN variants on the occurrence of gastrointestinal adverse effects a PolymorphismOR b p-value*OR b p-value**rs1672770 AAREF-REF-AG0.325 (0.111-0.951)0.0400.040 (0.111-0.948)0.040GG0.131 (0.015-1.115)0.0630.127(0.015-1.100)0.061Dependent variable: gastrointestinal. CRBN: cereblon gene; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model rs1672770; **: model sex, rs1672770. Dependent variable: gastrointestinal. CRBN: cereblon gene; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model rs1672770; **: model sex, rs1672770. TABLE VIFrequency of gastrointestinal adverse effects a according to CRBN genotypesPolymorphismGenotypeAbsence (n) (% within effect)Presence (n) (% within effect)rs1672770AA23 (67.6)11 (32.4)AG45 (86.5)7 (13.5)GG16 (94.1)1 (5.9) CRBN: cereblon gene; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). CRBN: cereblon gene; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). TABLE VIIAnalysis of the association of CRBN haplotypes on the occurrence of gastrointestinal adverse effects a HaplotypeOR b p-value*OR b p-value**INS/T/GREF-REF-DEL/C/T0.339 (0.142-0.812)0.0150.333 (0.138-0.805)0.015DEL/T/T1.061 (0.398-2.827)0.9061.022 (0.371-2.818)0.966Dependent variable: gastrointestinal. 
DEL: deletion; CRBN: cereblon gene; INS: insertion; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model haplotypes; **: model sex, haplotypes. Dependent variable: gastrointestinal. DEL: deletion; CRBN: cereblon gene; INS: insertion; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model haplotypes; **: model sex, haplotypes. TABLE VIIIFrequency of gastrointestinal adverse effects a according to CRBN haplotypesHaplotypesAbsence (n) (% within effect)Presence (n) (% within effect)INS/T/G70 (76.1)22 (23.9)DEL/C/T75 (90.4)8 (9.6)DEL/T/T21 (75)7 (25)DEL: deletion; CRBN: cereblon gene; INS: insertion; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). DEL: deletion; CRBN: cereblon gene; INS: insertion; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence).
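The odds ratio reported in Table V for AG versus AA carriers can be recovered directly from the counts in Table VI. The short calculation below uses the standard Woolf (log odds ratio) approximation rather than the fitted logistic model, and in this simple unadjusted case it reproduces the reported OR of 0.325 (95% CI 0.111-0.951); it is shown only as a worked illustration of where those numbers come from.

```python
# Worked example: odds ratio of gastrointestinal adverse effects for AG vs. AA
# carriers of rs1672770, using the counts in Table VI (AA: 11 with effect, 23
# without; AG: 7 with effect, 45 without). The CI uses the standard Woolf
# log-OR approximation for a 2x2 table.
import math

a, b = 7, 45    # AG: with effect, without effect
c, d = 11, 23   # AA: with effect, without effect

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# ~0.325 (95% CI ~0.111-0.951), matching Table V
print(f"OR = {odds_ratio:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```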
null
null
[]
[]
[]
[ "SUBJECTS AND METHODS", "RESULTS", "DISCUSSION" ]
[ "\nSample - The sample consisted of 103 ENL patients who were selected from National Reference Center of Sanitary Dermatology Dona Libânia in Fortaleza (state of Ceará, Brazil), Humanized Reference Center of Sanitary Dermatology in Imperatriz and Aquiles Lisboa Hospital in São Luís (Maranhão State) in northeast Brazil, and from the Dermatology Ambulatory of the University of São Paulo in Monte Negro (state of Rondônia, Brazil) in north Brazil. \nThe patients used thalidomide at different doses and had a follow-up of up to six visits (average of 3.6 months). Data from up to six consultations annotated in the patient’s medical record were analysed with the collection of clinical and demographic information, including sex, age and region of origin, history of leprosy (moment of diagnosis and treatment used) and history of ENL (diagnosis, treatment, adverse effects, history of relapse and dose of medications used).\n\nGenetic analyses - DNA was extracted from saliva samples using the Oragene DNA Extraction Kit (DNA Genotek, Ottawa, Canada), according to the manufacturer’s instructions. A pair of primer was designed to amplify a fragment of 682 base pairs containing the region encompassing the three studied CRBN variants: forward 5’-TGTGGTCTTGGCAACCAGCAATTT-3’ and reverse 5’-ACTGCCGTTCATGCTTGTTTCCT-3’. This region was amplified by polymerase chain reaction (PCR). The fragment obtained was visualised on a 2% agarose gel, purified and sequenced using the same primers.\nSequences were visualised and analysed using CodonCodeAligner®, version 3.0.1 (CodonCode Corporation, Dedham, USA). The hg19 sequence deposited in GenBank was used as the reference sequence. When there was doubt about the variant, sequencing was repeated for confirmation.\n\nStatistical analyses - Chi-square test was used to evaluate Hardy-Weinberg equilibrium for all polymorphisms. Generalised estimating equations method (GEE) was used to evaluate the influence of CRBN variants on thalidomide dose. This method is a repeated measures analysis focused on average changes in response over time and on the impact of covariates on these changes. GEE can model the average response of variables as a linear function of covariates of interest through a transformation or link function and can be used in studies where the data is asymmetric or the data distribution is difficult to verify due to the small-size sample.\n24\n\n,\n\n25\n The covariates inserted in the model were place of origin of the patient, concomitant use of multidrug therapy (MDT) for leprosy and use of other medications and other treatments for ENL.\nThe evaluation of the association of CRBN variants and haplotypes in the occurrence of adverse effects due to thalidomide treatment was based on clinical data using logistic regression. Models with and without correction by gender were used. The INS/C/T (rs4183/rs1672770/rs1620675) haplotype was removed from the association analyses because it presented few events and disturbed the analyses. Peripheral polyneuropathy, although is an adverse effect common to the use of thalidomide, has not been evaluated because it is difficult to distinguish it from polyneuropathy caused by ENL and leprosy. 
All statistical analyses were performed with SPSS version 20 (www.spss.com).\nMLocus tool was used to calculate linkage disequilibrium (LD) for the variants,\n26\n and haplotypes were inferred using the Bayesian algorithm of the phase 2.1.1 program.\n27\n\n,\n\n28\n\n\n\nData availability - The data analysed during the current study are not publicly available to maintain patient confidentiality. Moreover, this type of request has not been previously approved by participants nor the human research committee. This data could, however, be available (anonymously) from the corresponding author on reasonable request.\n\nEthics - All participants were informed about the research objectives and signed an informed consent form. This study was approved by the Ethics Committee of the Hospital de Clínicas of Porto Alegre under number CAAE 21184413.0.0000.5327. All research was performed in accordance with Brazilian regulations and informed consent was obtained from all participants. This research has been performed in accordance with the Declaration of Helsinki.", "In total, 103 ENL patients were included, being 82 (79.6%) male and 79 (76.7%) presenting LL (Table I). Forty-six (44.7%) patients were using MDT and two of these used an alternative MDT regimen. The mean time of treatment with thalidomide was 226 days.\n\nTABLE IClinical and demographic characteristics of ENL patients (n = 103)Characteristic n (%)Male 82 (79.6)MDT for leprosy46 (44.7)MDT time in months median (P25/P75)*4.3 (1.5/10.9)Other medications81 (78.6)Thalidomide dose median (P25/P75)100 (100/200)Days of consultation mean (min/max)108.19 (0/721)Patient origin\nNortheast Region89 (86.4)North Region14 (13.6)Leprosy classification\nBorderline-lepromatous 24 (23.3)Lepromatous-leprosy 79 (76.7)Adverse effects\nNeurological\na\n\n31 (30.1)Grastrointestinal\nb\n\n18 (17.5)Musculoskeletal\nc\n\n29 (28.2)Ocular\nd\n\n19 (18.4)Oedema16 (15.6)Dermatological\ne\n\n2 (1.9)Fever11 (10.7)\na: paresthesias, dizziness, tremor, neuritis and headache; b: diarrhoea, vomiting, gastric fullness and constipation; c: myalgia, arthralgia and weakness; d: decreased visual acuity and eye irritation; e: pruritus, dry skin and hair loss; *: time of multidrug therapy (MDT) when patients developed erythema nodosum leprosum (ENL) (n = 40).\n\n\na: paresthesias, dizziness, tremor, neuritis and headache; b: diarrhoea, vomiting, gastric fullness and constipation; c: myalgia, arthralgia and weakness; d: decreased visual acuity and eye irritation; e: pruritus, dry skin and hair loss; *: time of multidrug therapy (MDT) when patients developed erythema nodosum leprosum (ENL) (n = 40).\nThe genotypic distributions were in Hardy-Weinberg equilibrium and the allelic and genotypic frequencies of the polymorphisms are shown in Table II. A LD [rs4183/rs1672770 (r2 = 0.513), rs4183/rs1620675 (r2 = 0.943), rs1672770/rs1620675 (r2 = 0.567)] between the variants studied was identified (see Supplementary Table) and four haplotypes were identified in the sample (Table III). For all single nucleotide polymorphisms (SNPs), there was no association of the thalidomide dose with time of treatment (Table IV). 
In addition, no association was found between haplotypes and thalidomide dose.\n\nTABLE IIGenotypic and allelic frequencies of CRBN polymorphisms in ENL patients on treatment with thalidomidePolymorphismAlleles/genotypesFrequency n (%)rs1620675AA31 (30.1)AC53 (5.5)CC19 (18.4)A115 (55.8)C91 (44.2)\n\nrs1672770*AA31 (30.1)AG52 (50.5)GG17 (16.5)A114 (55.3)G86 (44.7)rs4183INS/INS19 (18.4)INS/DEL56 (54.4)DEL/DEL28 (27.2)INS94 (45.6)DEL115 (54.4)DEL: deletion; ENL: erythema nodosum leprosum; INS: insertion; MDT: multidrug therapy; *: n = 100.\n\nDEL: deletion; ENL: erythema nodosum leprosum; INS: insertion; MDT: multidrug therapy; *: n = 100.\n\nTABLE IIIHaplotype frequencies of CRBN\nHaplotypen(%)INS/C/T31.5INS/T/G9244.7DEL/C/T8340.3DEL/T/T2813.6Haplotypes in the following order: rs4183/rs1672770/rs1620675. DEL: deletion; CRBN: cereblon gene; INS: insertion.\n\nHaplotypes in the following order: rs4183/rs1672770/rs1620675. DEL: deletion; CRBN: cereblon gene; INS: insertion.\n\nTABLE IVAnalysis of interaction between CRBN genotype and time related to estimated thalidomide dose using the GEE model in ENL treatmentPolymorphism Interactionp-valuers1620675rs16206750.079Concomitant MDT for leprosy0.271Other medications0.205Region0.138Time0.001rs1672770rs1620675*Time0.392rs16727700.229Concomitant MDT for leprosy0.296Other medications0.071Region0.085Time0.001rs1672770*Time0.273rs4183rs41830.098Concomitant MDT for leprosy0.293Other medications0.208Region0.117Time0.001rs4183*Time0.669Dependent variable: thalidomide dose. GEE model: region, MDT, other medications, thalidomide dose, genotype, time, genotype*Time). CRBN: cereblon; ENL: erythema nodosum leprosum; GEE: generalised estimating equation model; MDT: multidrug therapy.\n\nDependent variable: thalidomide dose. GEE model: region, MDT, other medications, thalidomide dose, genotype, time, genotype*Time). CRBN: cereblon; ENL: erythema nodosum leprosum; GEE: generalised estimating equation model; MDT: multidrug therapy.\nIn regard to the relationship of CRBN variants and adverse effects of the use of thalidomide, it was found associations between the genotypes of rs1672770 (p = 0.040) and gastrointestinal effects, which include diarrhoea, vomiting, nausea, constipation and inappetence (Tables V and VI). Haplotype analysis was performed with all SNPs because only one combination of SNPs showed r2>0.8 (rs4183/rs1620675 with r2 = 0.943). The three polymorphisms were maintained in the analysis because when combining rs4183/rs1672770 or rs1672770/rs1620675, it showed r2<0.8. The analysis of the association between haplotypes and adverse effects also showed an association of gastrointestinal adverse effects (p = 0.015) with haplotype DEL/C/T (rs4183/rs1672770/rs1620675) (Tables VII and VIII). These effects were present in eight patients (9.6%) who carried this haplotype. \n\nTABLE VAnalysis of the association of CRBN variants on the occurrence of gastrointestinal adverse effects\na\n\nPolymorphismOR\nb\n\np-value*OR\nb\n\np-value**rs1672770\n\n\n\nAAREF-REF-AG0.325 (0.111-0.951)0.0400.040 (0.111-0.948)0.040GG0.131 (0.015-1.115)0.0630.127(0.015-1.100)0.061Dependent variable: gastrointestinal. CRBN: cereblon gene; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model rs1672770; **: model sex, rs1672770.\n\nDependent variable: gastrointestinal. 
CRBN: cereblon gene; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model rs1672770; **: model sex, rs1672770.\n\nTABLE VIFrequency of gastrointestinal adverse effects\na\n according to CRBN genotypesPolymorphismGenotypeAbsence (n) (% within effect)Presence (n) (% within effect)rs1672770AA23 (67.6)11 (32.4)AG45 (86.5)7 (13.5)GG16 (94.1)1 (5.9)\nCRBN: cereblon gene; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence).\n\n\nCRBN: cereblon gene; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence).\n\nTABLE VIIAnalysis of the association of CRBN haplotypes on the occurrence of gastrointestinal adverse effects\na\n\nHaplotypeOR\nb\n\np-value*OR\nb\n\np-value**INS/T/GREF-REF-DEL/C/T0.339 (0.142-0.812)0.0150.333 (0.138-0.805)0.015DEL/T/T1.061 (0.398-2.827)0.9061.022 (0.371-2.818)0.966Dependent variable: gastrointestinal. DEL: deletion; CRBN: cereblon gene; INS: insertion; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model haplotypes; **: model sex, haplotypes.\n\nDependent variable: gastrointestinal. DEL: deletion; CRBN: cereblon gene; INS: insertion; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model haplotypes; **: model sex, haplotypes.\n\nTABLE VIIIFrequency of gastrointestinal adverse effects\na\n according to CRBN haplotypesHaplotypesAbsence (n) (% within effect)Presence (n) (% within effect)INS/T/G70 (76.1)22 (23.9)DEL/C/T75 (90.4)8 (9.6)DEL/T/T21 (75)7 (25)DEL: deletion; CRBN: cereblon gene; INS: insertion; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence).\n\nDEL: deletion; CRBN: cereblon gene; INS: insertion; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence).", "This study aimed to identify genetic variants in CRBN gene that might influence in response to the treatment of ENL with thalidomide. We identified that the SNP rs1672770 and the haplotype DEL/C/T (rs4183/rs1672770/rs1620675) were associated with the manifestation of adverse gastrointestinal effects.\nCRBN acts as a substrate receptor as part of the E3 ubiquitin ligase complex (CRL4CRBN), which controls the expression of target proteins by their ubiquitination and degradation. 
It is necessary for the teratogenic effect of thalidomide and also important for the antiproliferative effect of thalidomide and other IMiDs in MM.\n29\n It is postulated that when thalidomide binds to CRBN, it modifies its function causing teratogenic effects by preventing the degradation proteins and/or by creating neosubtrates for ubiquitination and proteasomal degradation, which play a crucial role in embryonic development.\n30\n\n,\n\n31\n In the case of MM, thalidomide binding to CRBN promotes recruitment of the neosubstrates Ikaros (IKZF1) and Aiolos (IKZF3) to the ubiquitin-ligase complex, resulting in increased ubiquitination and degradation of these transcription factors in T cells and MM cells.\n31\n\n-\n\n33\n In addition, some studies have associated low CRBN mRNA expression with poorer clinical response to IMiDs, suggesting a potential role of CRBN as a predictive biomarker for treatment response.\n17\n\n,\n\n31\n\n,\n\n32\n\n,\n\n34\n Since the 1990s, it is known that one of the main effects of thalidomide is to decrease the TNF mRNA half-life, which explains some of its therapeutic effects. Using CRBN knockdown, it has been shown that the inhibitory effect of IMiDs on TNF-α production was also impaired in the silencing of CRBN.\n16\n\n,\n\n35\n\n\n\nCRBN is composed of 11 exons extending over 30 Kb. Thalidomide binds to CRBN in a region of 104 amino acids (339-442) located in the C-terminal portion, encoded by exons 9, 10 and 11.\n13\n This gene is extremely conserved and a few polymorphisms was found in the coding region.\n23\n\n,\n\n36\n Studies performed with MM showed that variants in non-coding regions of CRBN were associated with response to thalidomide and others IMIDs therapy.\n20\n\n,\n\n21\n\n,\n\n37\n\n\nThe genetic variants evaluated in the present study were identified in a previous study as possible splicing sites.\n23\n The association of these SNPs and the dose of thalidomide could indicate that polymorphisms in these regions could interfere in the expression of the CRBN gene or in the activity of CRBN, being able to modulate the response to treatment with thalidomide. However, in this study, we were unable to identify an association between these variants and the dose variation of thalidomide over time.\nWe found an association between the genetic variant rs1672770 (p = 0.040) and the manifestation of gastrointestinal adverse effects. There was also an association of gastrointestinal adverse effects (p = 0.015) with haplotype DEL/C/T (rs4183/rs1672770/rs1620675). These adverse gastrointestinal effects consisted of diarrhoea, vomiting, nausea and constipation. The most common symptom was constipation, a commonly reported side effect of thalidomide.\n22\n\n,\n\n38\n\n,\n\n39\n The association of adverse effects with the polymorphisms studied may also be related to differences in CRBN expression. Mlak et al.\n22\n found an association between CRBN variants (rs6768972, rs16727) and the risk of polyneuropathy and gastrointestinal disorders in patients with MM treated with thalidomide-based regimens. These variants are located in the CRBN promoter region. Thus, they could influence the expression or activity of CRBN.\n23\n In the haplotype analysis, only one combination of SNPs showed r2>0.8 (rs4183/rs1620675 with r2 = 0.943), as can be observed in the Supplementary Table. 
Taking into account this result, our small sample size, and that this is the first study to evaluate CRBN haplotypes, we decided to maintain the logistic regression analysis considering all SNPs. Further studies should be performed in order to elucidate and confirm the haplotypes and associations found here.\nIn this study, 46 patients were using MDT during the treatment for ENL and only two patients received an alternative MDT regimen. For this reason, they were not divided into a subgroup during the analyses. In addition, the concomitant medications that patients used during treatment for ENL can be diverse. That is why they were not listed in a table. MDT and other drugs were used as covariates in the GEE analysis to avoid bias in the use of these drugs. However, it was not possible to use this correction in the other analyses. The epidemiological data of this study were in agreement with the data found in other studies.\n2\n\n,\n\n40\n Most of the individuals with ENL had LL and were male. These frequencies corroborate that men may be more affected by LL and that the bacillary index is a risk factor for the development of the reaction as demonstrated in other studies.\n2\n\n,\n\n41\n\n,\n\n42\n In addition, many patients were on MDT during treatment for ENL, confirming that the reaction manifests mainly in the first year of illness, during MDT.\n43\n\n,\n\n44\n\n\nSome limitations should be considered during the interpretation of our study. It was not possible to obtain samples of skin lesions from these patients. Thus, the DNA samples analysed in this work were obtained from saliva because it presents a methodology that is easier to obtain. However, this does not influence the results presented because it is about the analysis of genetic variants and not the gene expression. Clearly, further studies could focus on the impact of the CRBN expression in the skin lesions before and after the treatment. In addition, the use of clofazimine and in MDT for leprosy, because of its anti-inflammatory effect, and the other drugs may interfere with both the dose reduction of drugs and in the manifestation of adverse effects.\n2\n\n,\n\n3\n This is also a retrospective study carried out with clinical data of patients, thalidomide dose, and manifestation of adverse effects obtained through the analysis of medical records of up to six consultations. The lack of standardization in the description of this information or even the absence of a description of the adverse effects cannot be ruled out. Furthermore, as ENL is a chronic condition and its treatment is very long, in this study, the mean treatment time during the collection period was 226 days, but some patients had been in treatment for years and remained on thalidomide treatment after the data collection period. Thus, in some cases, it was not possible to completely monitor the treatment of patients. Peripheral neuropathy, one of the main adverse effects of thalidomide,\n3\n\n,\n\n4\n was not evaluated in this study due to its retrospective nature, making it difficult to differentiate neuropathy due to the disease itself or due to the use of thalidomide. Accordingly, the evaluation of the association of variants on the onset of peripheral neuropathy should be performed by a prospective study to reduce potential biases. 
Another limitation was that this study was carried out with individuals from different regions of the country and the heterogeneous genetic background of the Brazilian population might have been underestimated in the analysis and interpretation of the results. In addition, the limited sample size used in this study can become difficult the identification of minor effects from the genetic variants in different outcomes.\nOn basis of the results of this study, we identified some genetic variants of CRBN were associated with adverse gastrointestinal effects. These results indicate that such variants could impact the protein and influence the outcome of treatment with thalidomide. This also indicates that CRBN may also be necessary for the action of thalidomide in ENL, as already described in MM.\n29\n Clearly, more studies evaluating the impact the variants in CRBN, as well as associations with ENL treatment, must be performed in order to confirm this hypothesis. ENL is a chronic and difficult-to-control condition. Thalidomide is an effective drug, but has restrictions on its use due to peripheral neuropathy and its teratogenicity.\n7\n\n,\n\n8\n Therefore, it is important to identify useful biomarkers in predicting treatment response to limit its use to patients who will benefit most from treatment. To our knowledge, this is the first study to evaluate the association of genetic variants of CRBN with thalidomide treatment in ENL patients. There are still many gaps to be filled in our knowledge of the mechanism of action of thalidomide and on how CRBN participates in this process. Thus, this study shows that evaluation of CRBN and its expression may help to understand the action of thalidomide in ENL and perhaps, in the future, be a useful biomarker in ENL." ]
[ "materials|methods", "results", "discussion" ]
[ "CRBN", "leprosy", "pharmacogenomics", "pharmacogenetics", "personalised medicine" ]
SUBJECTS AND METHODS: Sample - The sample consisted of 103 ENL patients who were selected from National Reference Center of Sanitary Dermatology Dona Libânia in Fortaleza (state of Ceará, Brazil), Humanized Reference Center of Sanitary Dermatology in Imperatriz and Aquiles Lisboa Hospital in São Luís (Maranhão State) in northeast Brazil, and from the Dermatology Ambulatory of the University of São Paulo in Monte Negro (state of Rondônia, Brazil) in north Brazil. The patients used thalidomide at different doses and had a follow-up of up to six visits (average of 3.6 months). Data from up to six consultations annotated in the patient’s medical record were analysed with the collection of clinical and demographic information, including sex, age and region of origin, history of leprosy (moment of diagnosis and treatment used) and history of ENL (diagnosis, treatment, adverse effects, history of relapse and dose of medications used). Genetic analyses - DNA was extracted from saliva samples using the Oragene DNA Extraction Kit (DNA Genotek, Ottawa, Canada), according to the manufacturer’s instructions. A pair of primer was designed to amplify a fragment of 682 base pairs containing the region encompassing the three studied CRBN variants: forward 5’-TGTGGTCTTGGCAACCAGCAATTT-3’ and reverse 5’-ACTGCCGTTCATGCTTGTTTCCT-3’. This region was amplified by polymerase chain reaction (PCR). The fragment obtained was visualised on a 2% agarose gel, purified and sequenced using the same primers. Sequences were visualised and analysed using CodonCodeAligner®, version 3.0.1 (CodonCode Corporation, Dedham, USA). The hg19 sequence deposited in GenBank was used as the reference sequence. When there was doubt about the variant, sequencing was repeated for confirmation. Statistical analyses - Chi-square test was used to evaluate Hardy-Weinberg equilibrium for all polymorphisms. Generalised estimating equations method (GEE) was used to evaluate the influence of CRBN variants on thalidomide dose. This method is a repeated measures analysis focused on average changes in response over time and on the impact of covariates on these changes. GEE can model the average response of variables as a linear function of covariates of interest through a transformation or link function and can be used in studies where the data is asymmetric or the data distribution is difficult to verify due to the small-size sample. 24 , 25 The covariates inserted in the model were place of origin of the patient, concomitant use of multidrug therapy (MDT) for leprosy and use of other medications and other treatments for ENL. The evaluation of the association of CRBN variants and haplotypes in the occurrence of adverse effects due to thalidomide treatment was based on clinical data using logistic regression. Models with and without correction by gender were used. The INS/C/T (rs4183/rs1672770/rs1620675) haplotype was removed from the association analyses because it presented few events and disturbed the analyses. Peripheral polyneuropathy, although is an adverse effect common to the use of thalidomide, has not been evaluated because it is difficult to distinguish it from polyneuropathy caused by ENL and leprosy. All statistical analyses were performed with SPSS version 20 (www.spss.com). MLocus tool was used to calculate linkage disequilibrium (LD) for the variants, 26 and haplotypes were inferred using the Bayesian algorithm of the phase 2.1.1 program. 
27 , 28 Data availability - The data analysed during the current study are not publicly available to maintain patient confidentiality. Moreover, this type of request has not been previously approved by participants nor the human research committee. This data could, however, be available (anonymously) from the corresponding author on reasonable request. Ethics - All participants were informed about the research objectives and signed an informed consent form. This study was approved by the Ethics Committee of the Hospital de Clínicas of Porto Alegre under number CAAE 21184413.0.0000.5327. All research was performed in accordance with Brazilian regulations and informed consent was obtained from all participants. This research has been performed in accordance with the Declaration of Helsinki. RESULTS: In total, 103 ENL patients were included, being 82 (79.6%) male and 79 (76.7%) presenting LL (Table I). Forty-six (44.7%) patients were using MDT and two of these used an alternative MDT regimen. The mean time of treatment with thalidomide was 226 days. TABLE IClinical and demographic characteristics of ENL patients (n = 103)Characteristic n (%)Male 82 (79.6)MDT for leprosy46 (44.7)MDT time in months median (P25/P75)*4.3 (1.5/10.9)Other medications81 (78.6)Thalidomide dose median (P25/P75)100 (100/200)Days of consultation mean (min/max)108.19 (0/721)Patient origin Northeast Region89 (86.4)North Region14 (13.6)Leprosy classification Borderline-lepromatous 24 (23.3)Lepromatous-leprosy 79 (76.7)Adverse effects Neurological a 31 (30.1)Grastrointestinal b 18 (17.5)Musculoskeletal c 29 (28.2)Ocular d 19 (18.4)Oedema16 (15.6)Dermatological e 2 (1.9)Fever11 (10.7) a: paresthesias, dizziness, tremor, neuritis and headache; b: diarrhoea, vomiting, gastric fullness and constipation; c: myalgia, arthralgia and weakness; d: decreased visual acuity and eye irritation; e: pruritus, dry skin and hair loss; *: time of multidrug therapy (MDT) when patients developed erythema nodosum leprosum (ENL) (n = 40). a: paresthesias, dizziness, tremor, neuritis and headache; b: diarrhoea, vomiting, gastric fullness and constipation; c: myalgia, arthralgia and weakness; d: decreased visual acuity and eye irritation; e: pruritus, dry skin and hair loss; *: time of multidrug therapy (MDT) when patients developed erythema nodosum leprosum (ENL) (n = 40). The genotypic distributions were in Hardy-Weinberg equilibrium and the allelic and genotypic frequencies of the polymorphisms are shown in Table II. A LD [rs4183/rs1672770 (r2 = 0.513), rs4183/rs1620675 (r2 = 0.943), rs1672770/rs1620675 (r2 = 0.567)] between the variants studied was identified (see Supplementary Table) and four haplotypes were identified in the sample (Table III). For all single nucleotide polymorphisms (SNPs), there was no association of the thalidomide dose with time of treatment (Table IV). In addition, no association was found between haplotypes and thalidomide dose. TABLE IIGenotypic and allelic frequencies of CRBN polymorphisms in ENL patients on treatment with thalidomidePolymorphismAlleles/genotypesFrequency n (%)rs1620675AA31 (30.1)AC53 (5.5)CC19 (18.4)A115 (55.8)C91 (44.2) rs1672770*AA31 (30.1)AG52 (50.5)GG17 (16.5)A114 (55.3)G86 (44.7)rs4183INS/INS19 (18.4)INS/DEL56 (54.4)DEL/DEL28 (27.2)INS94 (45.6)DEL115 (54.4)DEL: deletion; ENL: erythema nodosum leprosum; INS: insertion; MDT: multidrug therapy; *: n = 100. DEL: deletion; ENL: erythema nodosum leprosum; INS: insertion; MDT: multidrug therapy; *: n = 100. 
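The Hardy-Weinberg check reported for these genotype distributions can be illustrated with a short chi-square computation. The sketch below uses the rs4183 counts from Table II and SciPy rather than the SPSS procedure actually used, so treat it as a didactic reproduction, not the original analysis.

```python
from scipy.stats import chi2

# Observed rs4183 genotype counts from Table II: INS/INS, INS/DEL, DEL/DEL.
obs = {"INS/INS": 19, "INS/DEL": 56, "DEL/DEL": 28}
n = sum(obs.values())
p = (2 * obs["INS/INS"] + obs["INS/DEL"]) / (2 * n)   # INS allele frequency
q = 1 - p

exp = {"INS/INS": n * p**2, "INS/DEL": 2 * n * p * q, "DEL/DEL": n * q**2}
chi_sq = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
p_value = chi2.sf(chi_sq, df=1)   # 1 df for a biallelic HWE test
print(round(chi_sq, 2), round(p_value, 2))   # ~0.95, p ~0.33: consistent with HWE
```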
TABLE IIIHaplotype frequencies of CRBN Haplotypen(%)INS/C/T31.5INS/T/G9244.7DEL/C/T8340.3DEL/T/T2813.6Haplotypes in the following order: rs4183/rs1672770/rs1620675. DEL: deletion; CRBN: cereblon gene; INS: insertion. Haplotypes in the following order: rs4183/rs1672770/rs1620675. DEL: deletion; CRBN: cereblon gene; INS: insertion. TABLE IVAnalysis of interaction between CRBN genotype and time related to estimated thalidomide dose using the GEE model in ENL treatmentPolymorphism Interactionp-valuers1620675rs16206750.079Concomitant MDT for leprosy0.271Other medications0.205Region0.138Time0.001rs1672770rs1620675*Time0.392rs16727700.229Concomitant MDT for leprosy0.296Other medications0.071Region0.085Time0.001rs1672770*Time0.273rs4183rs41830.098Concomitant MDT for leprosy0.293Other medications0.208Region0.117Time0.001rs4183*Time0.669Dependent variable: thalidomide dose. GEE model: region, MDT, other medications, thalidomide dose, genotype, time, genotype*Time). CRBN: cereblon; ENL: erythema nodosum leprosum; GEE: generalised estimating equation model; MDT: multidrug therapy. Dependent variable: thalidomide dose. GEE model: region, MDT, other medications, thalidomide dose, genotype, time, genotype*Time). CRBN: cereblon; ENL: erythema nodosum leprosum; GEE: generalised estimating equation model; MDT: multidrug therapy. In regard to the relationship of CRBN variants and adverse effects of the use of thalidomide, it was found associations between the genotypes of rs1672770 (p = 0.040) and gastrointestinal effects, which include diarrhoea, vomiting, nausea, constipation and inappetence (Tables V and VI). Haplotype analysis was performed with all SNPs because only one combination of SNPs showed r2>0.8 (rs4183/rs1620675 with r2 = 0.943). The three polymorphisms were maintained in the analysis because when combining rs4183/rs1672770 or rs1672770/rs1620675, it showed r2<0.8. The analysis of the association between haplotypes and adverse effects also showed an association of gastrointestinal adverse effects (p = 0.015) with haplotype DEL/C/T (rs4183/rs1672770/rs1620675) (Tables VII and VIII). These effects were present in eight patients (9.6%) who carried this haplotype. TABLE VAnalysis of the association of CRBN variants on the occurrence of gastrointestinal adverse effects a PolymorphismOR b p-value*OR b p-value**rs1672770 AAREF-REF-AG0.325 (0.111-0.951)0.0400.040 (0.111-0.948)0.040GG0.131 (0.015-1.115)0.0630.127(0.015-1.100)0.061Dependent variable: gastrointestinal. CRBN: cereblon gene; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model rs1672770; **: model sex, rs1672770. Dependent variable: gastrointestinal. CRBN: cereblon gene; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model rs1672770; **: model sex, rs1672770. TABLE VIFrequency of gastrointestinal adverse effects a according to CRBN genotypesPolymorphismGenotypeAbsence (n) (% within effect)Presence (n) (% within effect)rs1672770AA23 (67.6)11 (32.4)AG45 (86.5)7 (13.5)GG16 (94.1)1 (5.9) CRBN: cereblon gene; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). CRBN: cereblon gene; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). 
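The genotype association in Table V can likewise be reproduced by expanding the Table VI counts into one row per patient and fitting an unadjusted logistic regression. The snippet below uses statsmodels as a stand-in for the SPSS models in the paper; the sex-adjusted variant of the model would additionally need the per-patient sex variable, which is not recoverable from the published tables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Table VI counts for rs1672770: (genotype, gastrointestinal effect present?) -> number of patients.
counts = {
    ("AA", 0): 23, ("AA", 1): 11,
    ("AG", 0): 45, ("AG", 1): 7,
    ("GG", 0): 16, ("GG", 1): 1,
}
rows = [{"genotype": g, "gi_effect": y} for (g, y), n in counts.items() for _ in range(n)]
df = pd.DataFrame(rows)

# Unadjusted logistic regression with AA as the reference genotype.
fit = smf.logit("gi_effect ~ C(genotype, Treatment(reference='AA'))", data=df).fit(disp=0)
print(np.exp(fit.params))      # AG vs AA comes out near the reported OR of 0.325
print(np.exp(fit.conf_int()))  # 95% CI near (0.111, 0.951) for AG vs AA
```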
TABLE VIIAnalysis of the association of CRBN haplotypes on the occurrence of gastrointestinal adverse effects a HaplotypeOR b p-value*OR b p-value**INS/T/GREF-REF-DEL/C/T0.339 (0.142-0.812)0.0150.333 (0.138-0.805)0.015DEL/T/T1.061 (0.398-2.827)0.9061.022 (0.371-2.818)0.966Dependent variable: gastrointestinal. DEL: deletion; CRBN: cereblon gene; INS: insertion; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model haplotypes; **: model sex, haplotypes. Dependent variable: gastrointestinal. DEL: deletion; CRBN: cereblon gene; INS: insertion; OR: odds ratio; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence); b: logistic regression (OR and 95% confidence interval); *: model haplotypes; **: model sex, haplotypes. TABLE VIIIFrequency of gastrointestinal adverse effects a according to CRBN haplotypesHaplotypesAbsence (n) (% within effect)Presence (n) (% within effect)INS/T/G70 (76.1)22 (23.9)DEL/C/T75 (90.4)8 (9.6)DEL/T/T21 (75)7 (25)DEL: deletion; CRBN: cereblon gene; INS: insertion; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). DEL: deletion; CRBN: cereblon gene; INS: insertion; a: gastrointestinal adverse effects (diarrhoea, vomiting, nausea, constipation and inappetence). DISCUSSION: This study aimed to identify genetic variants in CRBN gene that might influence in response to the treatment of ENL with thalidomide. We identified that the SNP rs1672770 and the haplotype DEL/C/T (rs4183/rs1672770/rs1620675) were associated with the manifestation of adverse gastrointestinal effects. CRBN acts as a substrate receptor as part of the E3 ubiquitin ligase complex (CRL4CRBN), which controls the expression of target proteins by their ubiquitination and degradation. It is necessary for the teratogenic effect of thalidomide and also important for the antiproliferative effect of thalidomide and other IMiDs in MM. 29 It is postulated that when thalidomide binds to CRBN, it modifies its function causing teratogenic effects by preventing the degradation proteins and/or by creating neosubtrates for ubiquitination and proteasomal degradation, which play a crucial role in embryonic development. 30 , 31 In the case of MM, thalidomide binding to CRBN promotes recruitment of the neosubstrates Ikaros (IKZF1) and Aiolos (IKZF3) to the ubiquitin-ligase complex, resulting in increased ubiquitination and degradation of these transcription factors in T cells and MM cells. 31 - 33 In addition, some studies have associated low CRBN mRNA expression with poorer clinical response to IMiDs, suggesting a potential role of CRBN as a predictive biomarker for treatment response. 17 , 31 , 32 , 34 Since the 1990s, it is known that one of the main effects of thalidomide is to decrease the TNF mRNA half-life, which explains some of its therapeutic effects. Using CRBN knockdown, it has been shown that the inhibitory effect of IMiDs on TNF-α production was also impaired in the silencing of CRBN. 16 , 35 CRBN is composed of 11 exons extending over 30 Kb. Thalidomide binds to CRBN in a region of 104 amino acids (339-442) located in the C-terminal portion, encoded by exons 9, 10 and 11. 13 This gene is extremely conserved and a few polymorphisms was found in the coding region. 
23 , 36 Studies performed with MM showed that variants in non-coding regions of CRBN were associated with response to thalidomide and others IMIDs therapy. 20 , 21 , 37 The genetic variants evaluated in the present study were identified in a previous study as possible splicing sites. 23 The association of these SNPs and the dose of thalidomide could indicate that polymorphisms in these regions could interfere in the expression of the CRBN gene or in the activity of CRBN, being able to modulate the response to treatment with thalidomide. However, in this study, we were unable to identify an association between these variants and the dose variation of thalidomide over time. We found an association between the genetic variant rs1672770 (p = 0.040) and the manifestation of gastrointestinal adverse effects. There was also an association of gastrointestinal adverse effects (p = 0.015) with haplotype DEL/C/T (rs4183/rs1672770/rs1620675). These adverse gastrointestinal effects consisted of diarrhoea, vomiting, nausea and constipation. The most common symptom was constipation, a commonly reported side effect of thalidomide. 22 , 38 , 39 The association of adverse effects with the polymorphisms studied may also be related to differences in CRBN expression. Mlak et al. 22 found an association between CRBN variants (rs6768972, rs16727) and the risk of polyneuropathy and gastrointestinal disorders in patients with MM treated with thalidomide-based regimens. These variants are located in the CRBN promoter region. Thus, they could influence the expression or activity of CRBN. 23 In the haplotype analysis, only one combination of SNPs showed r2>0.8 (rs4183/rs1620675 with r2 = 0.943), as can be observed in the Supplementary Table. Taking into account this result, our small sample size, and that this is the first study to evaluate CRBN haplotypes, we decided to maintain the logistic regression analysis considering all SNPs. Further studies should be performed in order to elucidate and confirm the haplotypes and associations found here. In this study, 46 patients were using MDT during the treatment for ENL and only two patients received an alternative MDT regimen. For this reason, they were not divided into a subgroup during the analyses. In addition, the concomitant medications that patients used during treatment for ENL can be diverse. That is why they were not listed in a table. MDT and other drugs were used as covariates in the GEE analysis to avoid bias in the use of these drugs. However, it was not possible to use this correction in the other analyses. The epidemiological data of this study were in agreement with the data found in other studies. 2 , 40 Most of the individuals with ENL had LL and were male. These frequencies corroborate that men may be more affected by LL and that the bacillary index is a risk factor for the development of the reaction as demonstrated in other studies. 2 , 41 , 42 In addition, many patients were on MDT during treatment for ENL, confirming that the reaction manifests mainly in the first year of illness, during MDT. 43 , 44 Some limitations should be considered during the interpretation of our study. It was not possible to obtain samples of skin lesions from these patients. Thus, the DNA samples analysed in this work were obtained from saliva because it presents a methodology that is easier to obtain. However, this does not influence the results presented because it is about the analysis of genetic variants and not the gene expression. 
Clearly, further studies could focus on the impact of the CRBN expression in the skin lesions before and after the treatment. In addition, the use of clofazimine in MDT for leprosy, because of its anti-inflammatory effect, and of other drugs may interfere both with the dose reduction and with the manifestation of adverse effects. 2 , 3 This is also a retrospective study carried out with clinical data of patients, thalidomide dose, and manifestation of adverse effects obtained through the analysis of medical records of up to six consultations. The lack of standardization in the description of this information or even the absence of a description of the adverse effects cannot be ruled out. Furthermore, as ENL is a chronic condition and its treatment is very long, in this study, the mean treatment time during the collection period was 226 days, but some patients had been in treatment for years and remained on thalidomide treatment after the data collection period. Thus, in some cases, it was not possible to completely monitor the treatment of patients. Peripheral neuropathy, one of the main adverse effects of thalidomide, 3 , 4 was not evaluated in this study due to its retrospective nature, making it difficult to differentiate neuropathy due to the disease itself from neuropathy due to the use of thalidomide. Accordingly, the evaluation of the association of the variants with the onset of peripheral neuropathy should be performed in a prospective study to reduce potential biases. Another limitation was that this study was carried out with individuals from different regions of the country, and the heterogeneous genetic background of the Brazilian population might have been underestimated in the analysis and interpretation of the results. In addition, the limited sample size used in this study may make it difficult to identify minor effects of the genetic variants on different outcomes. On the basis of the results of this study, we identified that some genetic variants of CRBN were associated with adverse gastrointestinal effects. These results indicate that such variants could impact the protein and influence the outcome of treatment with thalidomide. This also indicates that CRBN may be necessary for the action of thalidomide in ENL, as already described in MM. 29 Clearly, more studies evaluating the impact of the variants in CRBN, as well as associations with ENL treatment, must be performed in order to confirm this hypothesis. ENL is a chronic and difficult-to-control condition. Thalidomide is an effective drug, but has restrictions on its use due to peripheral neuropathy and its teratogenicity. 7 , 8 Therefore, it is important to identify useful biomarkers in predicting treatment response to limit its use to patients who will benefit most from treatment. To our knowledge, this is the first study to evaluate the association of genetic variants of CRBN with thalidomide treatment in ENL patients. There are still many gaps to be filled in our knowledge of the mechanism of action of thalidomide and of how CRBN participates in this process. Thus, this study shows that evaluation of CRBN and its expression may help to understand the action of thalidomide in ENL and perhaps, in the future, be a useful biomarker in ENL.
Background: Erythema nodosum leprosum (ENL) is an acute and systemic inflammatory reaction of leprosy characterised by painful nodules and involvement of various organs. Thalidomide is an immunomodulatory and anti-inflammatory drug currently used to treat this condition. Cereblon (CRBN) protein is the primary target of thalidomide, and it has been pointed out as necessary for the efficacy of this drug in other therapeutic settings. Methods: A total of 103 ENL patients in treatment with thalidomide were included in this study. DNA samples were obtained from saliva and molecular analysis of the CRBN gene was performed to investigate the variants rs1620675, rs1672770 and rs4183. Different genotypes of CRBN variants were evaluated in relation to their influence on the dose of thalidomide and on the occurrence of adverse effects. Results: No association was found between CRBN variants and thalidomide dose variation. However, the genotypes of rs1672770 showed an association with gastrointestinal effects (p = 0.040). Moreover, the haplotype DEL/C/T (rs4183/rs1672770/rs1620675) was also associated with gastrointestinal adverse effects (p = 0.015). Conclusions: Our results show that CRBN variants affect the treatment of ENL with thalidomide, especially with regard to the adverse effects related to the drug.
null
null
3,934
234
[]
3
[ "crbn", "thalidomide", "effects", "adverse", "enl", "adverse effects", "gastrointestinal", "treatment", "mdt", "variants" ]
[ "snps dose thalidomide", "use thalidomide found", "patients thalidomide different", "thalidomide dose genotype", "brazil patients thalidomide" ]
null
null
null
null
null
null
[CONTENT] CRBN | leprosy | pharmacogenomics | pharmacogenetics | personalised medicine [SUMMARY]
null
[CONTENT] CRBN | leprosy | pharmacogenomics | pharmacogenetics | personalised medicine [SUMMARY]
null
null
null
[CONTENT] Humans | Erythema Nodosum | Thalidomide | Leprosy, Lepromatous | Leprostatic Agents | Leprosy, Multibacillary [SUMMARY]
null
[CONTENT] Humans | Erythema Nodosum | Thalidomide | Leprosy, Lepromatous | Leprostatic Agents | Leprosy, Multibacillary [SUMMARY]
null
null
null
[CONTENT] snps dose thalidomide | use thalidomide found | patients thalidomide different | thalidomide dose genotype | brazil patients thalidomide [SUMMARY]
null
[CONTENT] snps dose thalidomide | use thalidomide found | patients thalidomide different | thalidomide dose genotype | brazil patients thalidomide [SUMMARY]
null
null
null
[CONTENT] crbn | thalidomide | effects | adverse | enl | adverse effects | gastrointestinal | treatment | mdt | variants [SUMMARY]
null
[CONTENT] crbn | thalidomide | effects | adverse | enl | adverse effects | gastrointestinal | treatment | mdt | variants [SUMMARY]
null
null
null
[CONTENT] gastrointestinal | crbn cereblon | cereblon | crbn | effects | crbn cereblon gene | cereblon gene | gastrointestinal adverse | table | gastrointestinal adverse effects [SUMMARY]
null
[CONTENT] crbn | thalidomide | effects | adverse | study | enl | gastrointestinal | treatment | adverse effects | variants [SUMMARY]
null
null
null
[CONTENT] CRBN ||| rs1672770 | 0.040 ||| DEL/C/T | 0.015 [SUMMARY]
null
[CONTENT] ENL ||| ||| Cereblon | CRBN ||| 103 | ENL ||| CRBN | rs1672770 | rs4183 ||| CRBN ||| ||| CRBN ||| rs1672770 | 0.040 ||| DEL/C/T | 0.015 ||| CRBN | ENH [SUMMARY]
null
Objective parallel-forms reliability assessment of 3 dimension real time body posture screening tests.
25189936
Screening tests play a significant role in rapid and reliable assessment of normal individual development in the entire population of children and adolescents. Body posture screening tests carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for progressive spinal deformities. This necessitates the search for effective and economically feasible forms of screening diagnosis. The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system (Zebris Medical GmbH).
BACKGROUND
The study enrolled 13-15-year-old pupils attending a junior secondary school (mean age 14.2 years). The study group consisted of 138 participants, including 71 girls and 67 boys, who underwent a clinical evaluation of the body posture and an examination with the Zebris CMS 10 system.
METHODS
Statistically significant discrepancies between the clinical and objective evaluation were noted with regard to lumbar lordosis in boys (n = 67) and thoracic kyphosis in girls (n = 71). No statistically significant differences in both groups were noted for pelvic rotation and trunk position in the frontal plane.
RESULTS
1. The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane. 2. The clinical evaluation of posture is reliable with regard to assessment in the frontal plane. 3. The Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects.
CONCLUSIONS
[ "Adolescent", "Female", "Humans", "Male", "Physical Examination", "Posture", "Reproducibility of Results", "Spinal Curvatures" ]
4169808
Background
Human body posture is a motor habit associated with daily activity with an underlying morphological and functional basis [1]. It reflects the psychophysical status of the individual and is an index of mechanical efficiency of the kinaesthetic sense, muscle balance and musculomotor co-ordination [2]. Normal human posture in the vertical position relies on the spine and its position against the head and pelvis [3, 4]. The spatial relations among and between bony structures and articulations are stabilised by a system of fasciae, ligaments and muscles, while the central nervous system is the superior controller of body posture [5, 6]. Body posture variability depends on age, sex and environmental factors influencing its development during body growth [7, 8]. The following conditions are regarded as postural defects: abnormal shape of the physiological spinal curvatures, asymmetrical positioning of the shoulder or pelvic girdle, disturbance of the knee joint axis and abnormal shape of the foot arches. Screening studies of postural defects carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for scoliosis or other progressive spinal deformities [9–12]. An alarmingly high percentage of these defects are attributable to poor motor activity of children and adolescents, rapid changes taking place in the body during individual development and excessive time spent in the seated position [13]. An early and reliable detection programme for the population of children and adolescents combined with prophylactic measures to prevent the persistent spinal and trunk deformities is an appropriate strategy that can also mininise the medical and financial outcome of the more complex process of future treatment of postural defects and scolioses that might be necessary. The findings of a clinical evaluation of body posture and trunk asymmetry in a child depend on the experience of the examiner, compliance of the child and availability of bedside diagnostic equipment. Screening tests rely mainly on clinical evaluation since screening is supposed to be available to the entire population of children and adolescents. The easy availability and simple procedure used also need to guarantee a high reliability of diagnoses of postural defects. Non-invasive methods that will make diagnosis easier and more comprehensive are being sought to ensure more objective measurements. The Zebris CMS-10, a system for assessing body posture in three planes, offers a non-invasive method for evaluating the spatial positioning of selected topographic reference points in the frontal, sagittal and transverse planes, thus supplying objective data to support a clinical evaluation. The Zebris CMS 10 system demonstrates a high degree of test-retest reliability, intertester reliability and intratester reliability [14, 15]. The inclinometer method demonstrates a high degree of intertester reliability and intratester reliability [16, 17]. A variation of up to 1.5° was allowed using this technique. Measurements were repeated several times in each participant until two consecutive attempts by two independent examiners yielded the same angle values (including the admissible variation of 1.5°), thus complying with the principles of intertester and alternate-forms reliability. The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system.
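The acceptance rule described above (repeat the measurement until two consecutive readings by the two independent examiners fall within the admissible 1.5° variation) can be expressed as a small helper. This is only our reading of the protocol; the decision to average the two concordant readings and the maximum number of attempts are assumptions, not details stated by the authors.

```python
TOLERANCE_DEG = 1.5  # admissible variation quoted in the text

def accepted_angle(measure_examiner_a, measure_examiner_b, max_attempts=10):
    """Repeat paired readings until the two examiners agree within tolerance."""
    for _ in range(max_attempts):
        a = measure_examiner_a()  # placeholder for examiner A's inclinometer reading
        b = measure_examiner_b()  # placeholder for examiner B's inclinometer reading
        if abs(a - b) <= TOLERANCE_DEG:
            return (a + b) / 2.0  # assumption: the concordant readings are averaged
    raise RuntimeError("examiners did not agree within tolerance")
```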
Methods
The methodology was approved by the Ethics Committee of the Rehabilitation Hospital for Children, Olsztyn, Poland. The study enrolled 13-15-year-old pupils attending a junior secondary school. The mean age was 14.2 (±0.6) years. The study group consisted of 138 participants, including 71 girls (mean age 14.1 ± 0.4 years, mean height 160.3 ± 3.4 cm, mean body weight 64.8 ± 3.9 kg) and 67 boys (mean age 14.4 ± 0.8 years, mean height 166.6 ± 2.9 cm, mean body weight 68.1 ± 3.6 kg). The exclusion criteria were a diagnosis of scoliosis and/or status post spinal surgery and/or feeling any pain. The screening test was carried out with the participants in a free standing position, involving specialists in rehabilitation as examiners and a Zebris CMS 10 system. The objective of the examination was not revealed to the examiners. In the first part, reference skeletal landmarks were marked on the body according to the principles of palpation anatomy. Trunk positioning was evaluated clinically in the sagittal and frontal planes. The findings were recorded in the study protocol (Additional file 1). Thoracic kyphosis and lumbar lordosis were evaluated in the sagittal plane with a Saunders inclinometer. Pelvic rotation was also evaluated in the sagittal plane. A Saunders inclinometer was placed in the cervicothoracic junction with the long arm pointing downwards from the spinous process at the apex of the curve and in the lumbosacral junction with the long arm pointing upwards from the spinous process at the apex of the curve (Figures  1, 2). The respective reference ranges assumed for kyphosis and lordosis were 30-40° and 25-35° [18]. The symmetry of position of the shoulder and pelvic girdles was evaluated in the frontal plane.Figure 1 Marking of anatomical skeletal reference landmarks. Figure 2 Clinical examination with Saunders inclinometer. Reference marker is a belt attached below iliac spines. Marking of anatomical skeletal reference landmarks. Clinical examination with Saunders inclinometer. Reference marker is a belt attached below iliac spines. Following the clinical evaluation, the same postural parameters were assessed with a Zebris CMS-10 device (Zebris Medical GmbH). The Zebris CMS 10 uses WinSpine software in the Microsoft Windows XP environment. Measurement error is defined by the manufacturer at 1.96 degrees and 2.2 millimeters for all parameters. Measurement sensitivity is 0.2 millimeters and 0.5 degrees. The software includes a data base of projects, patients and individual measurements. The core component of the testing system is a measuring device, an ultrasound point indicator probe and a reference marker [19]. The testing device is placed on an adjustable-height arm. The point indicator probe, which is placed directly onto skeletal reference landmarks on the patient’s body, has two ultrasound markers with their central points aligned with the tip of the probe. The skeletal reference landmarks are all thoracic and lumbar spinous processes of the spine. The software precisely calculates the position of the probe. A reference marker in the form of a belt is attached laterally below the posterior superior iliac spines and anterior superior iliac spines so as not to cover the measurement sites. The reference marker is used to eliminate changes of position during the examination. The testing unit was composed of a platform with built-in levels, a Zebris CMS-10 system and a computer. 
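The reference ranges quoted above for the sagittal curvatures (30-40° for thoracic kyphosis, 25-35° for lumbar lordosis) map directly onto the three-way classification used later in the results. The sketch below is only illustrative; the function and label names are ours.

```python
# Reference ranges assumed in the clinical evaluation (degrees).
RANGES = {"kyphosis": (30.0, 40.0), "lordosis": (25.0, 35.0)}

def classify_curvature(curve, angle_deg):
    """Classify a measured sagittal curvature against its reference range."""
    low, high = RANGES[curve]
    if angle_deg < low:
        return "reduced"
    if angle_deg > high:
        return "accentuated"
    return "normal"

print(classify_curvature("kyphosis", 44.0))  # accentuated
print(classify_curvature("lordosis", 28.0))  # normal
```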
A transverse valve was mounted at one-third of the length of the platform in order to immobilise the Zebris CMS-10 device. Owing to this, the device could be placed in a fixed position and random movements during use were eliminated. A transverse red line was marked permanently at 80 cm from the transverse valve. One side of a 25 × 25 cm square was drawn on this line. The square was contoured with black lines. The lateral sides of the square were used to indicate where the examinees should place their feet in the standing position. The examinees were also instructed to place their feet in front of the red transverse line (Figure 3). The device was calibrated against the ground before each examination. Figure 3: Examination with Zebris CMS-10. Ultrasound probe recording the position of marked skeletal landmarks. For statistical analysis of the clinical vs. Zebris-based assessment, physiological spinal curvatures in the sagittal plane were assigned a value of 0 and accentuation or reduction of the curvatures in the sagittal plane below 30° or above 40° for thoracic kyphosis and below 25° or above 35° for lumbar lordosis was assigned a value of 1. A symmetrical position of the pelvis was assigned a value of 0, and pelvic rotation, a value of 1. Pelvic rotation was assessed manually as a deficit of rotation of the iliac bone relative to the sacral bone on the left and right side of the body. In the frontal plane, symmetry of the acromions and of the pelvis was assigned a value of 0, and an asymmetry greater than 1 cm in the vertical dimension, a value of 1. The statistical analysis was conducted in Statistica 7 software package, version 10.1 and based on the calculation of means, percentages and the Chi2 test statistics (empirical and expected), and Cramer’s V statistic, which reflects the strength of association of two parameters. The level of significance was set at p < 0.05. The parents of the children in the study group had provided written consent for their children to participate in the study. All the patients gave their written consent prior to their inclusion in the study. The study, funded from a scientific grant, was conducted in the years 2011–2014.
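The chi-square and Cramér's V computation described in this paragraph can be illustrated with SciPy. The example below cross-tabulates the boys' kyphosis counts from Table 1 (CMS-10 versus clinical classification) and, under that reading of the table, reproduces the published statistics. The original analysis was run in Statistica, so this is a didactic re-computation rather than the authors' code.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Boys' kyphosis classification counts from Table 1:
# rows = accentuated / reduced / normal, columns = CMS-10 / clinical evaluation.
table = np.array([[21, 17],
                  [25, 10],
                  [21, 40]])

chi2_stat, p_value, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2_stat / (n * (min(table.shape) - 1)))

print(round(chi2_stat, 1), dof, round(cramers_v, 3))   # 12.8, 2, 0.309 - matching Table 1
print(p_value < 0.05)                                  # True: alternative hypothesis accepted
```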
Results
Cramer’s V values confirmed a significant correlation between the parameters in the case of lordosis and a clear correlation in the case of kyphosis. Thus, the study demonstrated that the Zebris CMS-10 system for three-dimensional analysis of body posture contributed a statistically significant adjustment to the clinical evaluation of the spine in the sagittal view; Cramer’s V was 0.514 for the evaluation of thoracic kyphosis in girls and 0.433 in the evaluation of lumbar lordosis in boys (Figures 4, 5, 6, 7, 8, 9). Figure 4: Results of evaluation of thoracic kyphosis in the sagittal plane in boys. Figure 5: Results of evaluation of lumbar lordosis in the sagittal plane in boys. Figure 6: Results of evaluation of pelvic rotation in the sagittal plane in boys. Figure 7: Results of evaluation of thoracic kyphosis in the sagittal plane in girls. Figure 8: Results of evaluation of lumbar lordosis in the sagittal plane in girls. Figure 9: Results of evaluation of pelvic rotation in the sagittal plane in girls. No statistically significant differences were found with regard to the accuracy of evaluation of pelvic rotation, indicating a similar degree of precision of both techniques, with a Cramer’s V of 0.112 in boys and 0.042 in girls. Similarly, no significant differences were revealed in the frontal plane, also suggesting a similar precision (Tables 1 and 2). These results show that clinical and device-assisted topography is characterised by a similar degree of accuracy. Minor differences were noted with regard to trunk asymmetry in the frontal plane. The differences were not statistically significant (Table 1). Clinical evaluation is thus a reliable method for assessing trunk asymmetry in the frontal plane and does not need to be confirmed by measurement system-based assessment.

Table 1. Trunk assessment in boys, sagittal and frontal planes (significance level p < 0.05 for all tests)
Trunk assessment            CMS-10 evaluation   Clinical evaluation
Kyphosis (calculated Chi2 = 12.8; df = 2; tabular Chi2 = 5.9; alternative hypothesis accepted; Cramer’s V = 0.309)
  Accentuated               21                  17
  Reduced                   25                  10
  Normal                    21                  40
  Total                     67                  67
Lordosis (calculated Chi2 = 25.1; df = 2; tabular Chi2 = 5.9; alternative hypothesis accepted; Cramer’s V = 0.433)
  Accentuated               33                  9
  Reduced                   17                  15
  Normal                    17                  43
  Total                     67                  67
Pelvis (calculated Chi2 = 1.7; df = 1; tabular Chi2 = 3.7; null hypothesis accepted; Cramer’s V = 0.112)
  Rotated                   49                  42
  Not rotated               18                  25
  Total                     67                  67
Asymmetry (calculated Chi2 = 5.88; df = 2; tabular Chi2 = 5.9; null hypothesis accepted; Cramer’s V = 0.173)
  Shoulder girdle           48                  25
  Inferior scapular angles  45                  13
  Pelvic obliqueness        37                  28
  Total                     130                 66

Table 2. Trunk assessment in girls, sagittal and frontal planes (significance level p < 0.05 for all tests)
Trunk assessment            CMS-10 evaluation   Clinical evaluation
Kyphosis (calculated Chi2 = 37.5; df = 2; tabular Chi2 = 5.9; alternative hypothesis accepted; Cramer’s V = 0.514)
  Accentuated               43                  8
  Reduced                   6                   13
  Normal                    22                  50
  Total                     71                  71
Lordosis (calculated Chi2 = 20.1; df = 2; tabular Chi2 = 5.9; alternative hypothesis accepted; Cramer’s V = 0.377)
  Accentuated               23                  7
  Reduced                   22                  12
  Normal                    26                  52
  Total                     71                  71
Pelvis (calculated Chi2 = 0.3; df = 1; tabular Chi2 = 3.7; null hypothesis accepted; Cramer’s V = 0.042)
  Rotated                   37                  40
  Not rotated               34                  31
  Total                     71                  71
Asymmetry (calculated Chi2 = 3.9; df = 2; tabular Chi2 = 5.9; null hypothesis accepted; Cramer’s V = 0.151)
  Shoulder girdle           52                  24
  Inferior scapular angles  42                  18
  Pelvic obliqueness        18                  17
  Total                     112                 59
Conclusions
The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane. The clinical evaluation of posture is reliable with regard to assessment in the frontal plane. The Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects.
[ "Background", "" ]
[ "Human body posture is a motor habit associated with daily activity with an underlying morphological and functional basis\n[1]. It reflects the psychophysical status of the individual and is an index of mechanical efficiency of the kinaesthetic sense, muscle balance and musculomotor co-ordination\n[2]. Normal human posture in the vertical position relies on the spine and its position against the head and pelvis\n[3, 4]. The spatial relations among and between bony structures and articulations are stabilised by a system of fasciae, ligaments and muscles, while the central nervous system is the superior controller of body posture\n[5, 6]. Body posture variability depends on age, sex and environmental factors influencing its development during body growth\n[7, 8]. The following conditions are regarded as postural defects: abnormal shape of the physiological spinal curvatures, asymmetrical positioning of the shoulder or pelvic girdle, disturbance of the knee joint axis and abnormal shape of the foot arches. Screening studies of postural defects carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for scoliosis or other progressive spinal deformities\n[9–12]. An alarmingly high percentage of these defects are attributable to poor motor activity of children and adolescents, rapid changes taking place in the body during individual development and excessive time spent in the seated position\n[13]. An early and reliable detection programme for the population of children and adolescents combined with prophylactic measures to prevent the persistent spinal and trunk deformities is an appropriate strategy that can also minimise the medical and financial outcome of the more complex process of future treatment of postural defects and scolioses that might be necessary. The findings of a clinical evaluation of body posture and trunk asymmetry in a child depend on the experience of the examiner, compliance of the child and availability of bedside diagnostic equipment. Screening tests rely mainly on clinical evaluation since screening is supposed to be available to the entire population of children and adolescents. The easy availability and simple procedure used also need to guarantee a high reliability of diagnoses of postural defects. Non-invasive methods that will make diagnosis easier and more comprehensive are being sought to ensure more objective measurements. The Zebris CMS-10, a system for assessing body posture in three planes, offers a non-invasive method for evaluating the spatial positioning of selected topographic reference points in the frontal, sagittal and transverse planes, thus supplying objective data to support a clinical evaluation. The Zebris CMS 10 system demonstrates a high degree of test-retest reliability, intertester reliability and intratester reliability\n[14, 15]. The inclinometer method demonstrates a high degree of intertester reliability and intratester reliability\n[16, 17]. A variation of up to 1.5° was allowed using this technique. Measurements were repeated several times in each participant until two consecutive attempts by two independent examiners yielded the same angle values (including the admissible variation of 1.5°), thus complying with the principles of intertester and alternate-forms reliability.
The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system.", "Additional file 1:\nSample examination protocol.\n(DOCX 14 KB)" ]
[ null, null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions", "Electronic supplementary material", "" ]
[ "Human body posture is a motor habit associated with daily activity with an underlying morphological and functional basis\n[1]. It reflects the psychophysical status of the individual and is an index of mechanical efficiency of the kinaesthetic sense, muscle balance and musculomotor co-ordination\n[2]. Normal human posture in the vertical position relies on the spine and its position against the head and pelvis\n[3, 4]. The spatial relations among and between bony structures and articulations are stabilised by a system of fasciae, ligaments and muscles, while the central nervous system is the superior controller of body posture\n[5, 6]. Body posture variability depends on age, sex and environmental factors influencing its development during body growth\n[7, 8]. The following conditions are regarded as postural defects: abnormal shape of the physiological spinal curvatures, asymmetrical positioning of the shoulder or pelvic girdle, disturbance of the knee joint axis and abnormal shape of the foot arches. Screening studies of postural defects carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for scoliosis or other progressive spinal deformities\n[9–12]. An alarmingly high percentage of these defects are attributable to poor motor activity of children and adolescents, rapid changes taking place in the body during individual development and excessive time spent in the seated position\n[13]. An early and reliable detection programme for the population of children and adolescents combined with prophylactic measures to prevent the persistent spinal and trunk deformities is an appropriate strategy that can also mininise the medical and financial outcome of the more complex process of future treatment of postural defects and scolioses that might be necessary. The findings of a clinical evaluation of body posture and trunk asymmetry in a child depend on the experience of the examiner, compliance of the child and availability of bedside diagnostic equipment. Screening tests rely mainly on clinical evaluation since screening is supposed to be available to the entire population of children and adolescents. The easy availability and simple procedure used also need to guarantee a high reliability of diagnoses of postural defects. Non-invasive methods that will make diagnosis easier and more comprehensive are being sought to ensure more objective measurements. The Zebris CMS-10, a system for assessing body posture in three planes, offers a non-invasive method for evaluating the spatial positioning of selected topographic reference points in the frontal, sagittal and transverse planes, thus supplying objective data to support a clinical evaluation. The Zebris CMS 10 system demonstrates a high degree of test-retest reliability, intertester reliability and intratester reliability\n[14, 15]. The inclinometer method demonstrates a high degree of intertester reliability and intratester reliability\n[16, 17]. A variation of up to 1.5° was allowed using this technique. Measurements were repeated several times in each participant until two consecutive attempts by two independent examiners yielded the same angle values (including the admissible variation of 1.5°), thus complying with the principles of intertester and alternate-forms reliability. 
The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system.", "The methodology was approved by the Ethics Committee of the Rehabilitation Hospital for Children, Olsztyn, Poland. The study enrolled 13-15-year-old pupils attending a junior secondary school. The mean age was 14.2 (±0.6) years. The study group consisted of 138 participants, including 71 girls (mean age 14.1 ± 0.4 years, mean height 160.3 ± 3.4 cm, mean body weight 64.8 ± 3.9 kg) and 67 boys (mean age 14.4 ± 0.8 years, mean height 166.6 ± 2.9 cm, mean body weight 68.1 ± 3.6 kg). The exclusion criteria were a diagnosis of scoliosis and/or status post spinal surgery and/or feeling any pain. The screening test was carried out with the participants in a free standing position, involving specialists in rehabilitation as examiners and a Zebris CMS 10 system. The objective of the examination was not revealed to the examiners. In the first part, reference skeletal landmarks were marked on the body according to the principles of palpation anatomy. Trunk positioning was evaluated clinically in the sagittal and frontal planes. The findings were recorded in the study protocol (Additional file\n1). Thoracic kyphosis and lumbar lordosis were evaluated in the sagittal plane with a Saunders inclinometer. Pelvic rotation was also evaluated in the sagittal plane. A Saunders inclinometer was placed in the cervicothoracic junction with the long arm pointing downwards from the spinous process at the apex of the curve and in the lumbosacral junction with the long arm pointing upwards from the spinous process at the apex of the curve (Figures \n1,\n2). The respective reference ranges assumed for kyphosis and lordosis were 30-40° and 25-35°\n[18]. The symmetry of position of the shoulder and pelvic girdles was evaluated in the frontal plane.Figure 1\nMarking of anatomical skeletal reference landmarks.\nFigure 2\nClinical examination with Saunders inclinometer. Reference marker is a belt attached below iliac spines.\n\nMarking of anatomical skeletal reference landmarks.\n\n\nClinical examination with Saunders inclinometer. Reference marker is a belt attached below iliac spines.\nFollowing the clinical evaluation, the same postural parameters were assessed with a Zebris CMS-10 device (Zebris Medical GmbH). The Zebris CMS 10 uses WinSpine software in the Microsoft Windows XP environment. Measurement error is defined by the manufacturer at 1.96 degrees and 2.2 millimeters for all parameters. Measurement sensitivity is 0.2 millimeters and 0.5 degrees. The software includes a data base of projects, patients and individual measurements. The core component of the testing system is a measuring device, an ultrasound point indicator probe and a reference marker\n[19]. The testing device is placed on an adjustable-height arm. The point indicator probe, which is placed directly onto skeletal reference landmarks on the patient’s body, has two ultrasound markers with their central points aligned with the tip of the probe. The skeletal reference landmarks are all thoracic and lumbar spinous processes of the spine. The software precisely calculates the position of the probe. A reference marker in the form of a belt is attached laterally below the posterior superior iliac spines and anterior superior iliac spines so as not to cover the measurement sites. The reference marker is used to eliminate changes of position during the examination. 
The testing unit was composed of a platform with built-in levels, a Zebris CMS-10 system and a computer. A transverse valve was mounted at one-third of the length of the platform in order to immobilise the Zebris CMS-10 device. Owing to this, the device could be placed in a fixed position and random movements during use were eliminated. A transverse red line was marked permanently at 80 cm from the transverse valve. One side of a 25 × 25 cm square was drawn on this line. The square was contoured with black lines. The lateral sides of the square were used to indicate where the examinees should place their feet in the standing position. The examinees were also instructed to place their feet in front of the red transverse line (Figure \n3). The device was calibrated against the ground before each examination.Figure 3\nExamination with Zebris CMS-10. Ultrasound probe recording the position of marked skeletal landmarks.\n\nExamination with Zebris CMS-10. Ultrasound probe recording the position of marked skeletal landmarks.\nFor statistical analysis of the clinical vs. Zebris-based assessment, physiological spinal curvatures in the sagittal plane were assigned a value of 0 and accentuation or reduction of the curvatures in the sagittal plane below 30° or above 40° for thoracic kyphosis and below 25° or above 35° for lumbar lordosis was assigned a value of 1. A symmetrical position of the pelvis was assigned a value of - 0, and pelvic rotation, a value of 1. Pelvic rotation was assessed manually as a deficit of rotation of the iliac bone relative to the sacral bone on the left and right side of the body. In the frontal plane, symmetry of the acromions and of the pelvis was assigned a value of - 0, and an asymmetry greater than 1 cm in the vertical dimension, a value of 1. The statistical analysis was conducted in Statistica 7 software package, version 10.1 and based on the calculation of means, percentages and the Chi2 test statistics (empirical and expected), and Cramer’s V statistic, which reflects the strength of association of two parameters. The level of significance was set at p < 0.05.\nThe parents of the children in the study group had provided written consent for their children to participate in the study. All the patients gave their written consent prior to their inclusion in the study.\nThe study, funded from a scientific grant, was conducted in the years 2011–2014.", "Cramer’s V values confirmed a significant correlation between the parameters in the case of lordosis and a clear correlation in the case of kyphosis. 
Thus, the study demonstrated that the Zebris CMS-10 system for three-dimensional analysis of body posture contributed a statistically significant adjustment to the clinical evaluation of the spine in the sagittal view; Cramer’s V was 0.514 for the evaluation of thoracic kyphosis in girls and 0.433 in the evaluation of lumbar lordosis in boys (Figures \n4,\n5,\n6,\n7,\n8,\n9).Figure 4\nResults of evaluation of thoracic kyphosis in the sagittal plane in boys.\nFigure 5\nResults of evaluation of lumbar lordosis in the sagittal plane in boys.\nFigure 6\nResults of evaluation of pelvic rotation in the sagittal plane in boys.\nFigure 7\nResults of evaluation of thoracic kyphosis in the sagittal plane in girls.\nFigure 8\nResults of evaluation of lumbar lordosis in the sagittal plane in girls.\nFigure 9\nResults of evaluation of pelvic rotation in the sagittal plane in girls.\n\n\nResults of evaluation of thoracic kyphosis in the sagittal plane in boys.\n\n\nResults of evaluation of lumbar lordosis in the sagittal plane in boys.\n\n\nResults of evaluation of pelvic rotation in the sagittal plane in boys.\n\n\nResults of evaluation of thoracic kyphosis in the sagittal plane in girls.\n\n\nResults of evaluation of lumbar lordosis in the sagittal plane in girls.\n\n\nResults of evaluation of pelvic rotation in the sagittal plane in girls.\n\nNo statistically significant differences were found with regard to the accuracy of evaluation of pelvic rotation, indicating a similar degree of precision of both techniques, with a Cramer’s V of 0.112 in boys and 0.042 in girls. Similarly, no significant differences were revealed in the frontal plane, also suggesting a similar precision (Tables \n1 and\n2). These results show that clinical and device-assisted topography is characterised by a similar degree of accuracy. Minor differences were noted with regard to trunk asymmetry in the frontal plane. The differences were not statistically significant (Table \n1). Clinical evaluation is thus a reliable method for assessing trunk asymmetry in the frontal plane and does not need to be confirmed by measurement system-based assessment.Table 1\nTrunk assessment in boys, sagittal and frontal planes\nTrunk assessmentCMS -10 evaluationClinical evaluationCalculated Chi\n2\nSignificance level p < 0.05Degrees of freedomTabular Chi\n2\nHypothesis acceptedCramer’s VKyphosis12.80.0525.9alternative0.309Accentuated2117Reduced2510Normal2140Total6767Lordosis25.10.0525.9alternative0.433Accentuated339Reduced1715Normal1743Total6767Pelvis1.70.0513.7null0.112Rotated4942Not rotated1825Total6767Asymmetry5.880.0525.9null0.173Shoulder girdle4825Inferior scapular angles4513Pelvic obliqueness3728Total13066Table 2\nTrunk assessment in girls, sagittal and frontal planes\nTrunk assessmentCMS -10 evaluationClinical evaluationCalculated Chi\n2\nSignificance level p < 0.05Degrees of freedomTabular Chi\n2\nHypothesis acceptedCramer’s VKyphosis37.50.0525.9alternative0.514Accentuated438Reduced613Normal2250Total7171Lordosis20.10.0525.9alternative0.377Accentuated237Reduced2212Normal2652Total7171Pelvis0.30.0513.7null0.042Rotated3740Not rotated3431Total7171Asymmetry3.90.0525.9null0.151Shoulder girdle5224Inferior scapular angles4218Pelvic obliqueness1817Total11259\n\nTrunk assessment in boys, sagittal and frontal planes\n\n\nTrunk assessment in girls, sagittal and frontal planes\n", "Screening tests play a significant role in assessment of normal individual development in the population of children and adolescents. 
Accurate screening allows for selecting children and adolescents at risk for the development of postural defects or spinal and trunk deformities in order to refer them to appropriate specialists\n[20]. A clinical examination is the simplest and also the most common form of postural assessment.\nIn order to make the findings of clinical assessment more objective, measuring devices were gradually introduced in the 20th and 21st centuries, beginning with the Moire method, a photostereometric technique first used by Takasaki in 1970\n[21], followed by raster plots projected onto the object being assessed in raster photogrammetry for massive screening tests\n[22–24]. Modern devices for three-dimensional motion analysis (Metercom system) use the Saunders digital inclinometer and an anthropostereometric technique\n[15]. Techniques of video capturing of body posture are also available. Importantly, studies comparing postural parameters assessed using different devices do not reveal statistically significant differences in either device-to-device or device-to-clinical examination comparisons\n[25].\nNew achievements in objective assessment methods to support clinical examinations based on a mathematical system of three-dimensional body posture analysis were revealed by American and German centres as early as the late 1990’s and in the first decade of this century\n[26]. German studies show that the Zebris CMS-10 is a precise device that produces a detailed analysis of the trunk position based on anatomical skeletal reference landmarks in static positions with an option to expand sequences of functional movement\n[27]. Only static positions were analysed in the present study.\nThe present results confirm that the most difficult aspect of assessment of clinical deformities of the spine is the analysis of pathological spinal curvatures in the sagittal plane, while pelvic rotation and frontal positioning of the trunk are relatively easy to assess clinically. Similar results were obtained by Bibrowicz & Skolimowski\n[22]. Kyphosis and lordosis are subject to considerable interindividual variability and there are also no standards to define reference ranges for angle values in relation to sex and age in adolescents. The present study confirmed discrepancies between the populations of boys and girls. Abnormal spinal curvatures are one of many problems of adolescence\n[23]. The development and monitoring of a correct posture during the development of a child and adolescent is a prolonged process that depends on one’s somatic structure and the pace of individual development\n[3, 8]. The results for the frontal plane showed less discrepancy between the clinical examination and Zebris CMS-10-based assessment. Skolimowski et al. presented similar findings using other research tools\n[22].\nInternational scientific societies emphasise the need to verify clinical and scientific research to make it more objective. The terminological system proposed by SRS (Scoliosis Research Society) in 1994 reflects the three-dimensional nature of scoliosis and other spinal deformities\n[26]. The terminology serves the goal of promoting systematic descriptions of deformities and rationalising and facilitating examinations in clinical practice\n[28]. 
Consensus statements published by SOSORT (Society on Scoliosis Orthopaedic and Rehabilitation Treatment) systematise the level of reliability of diagnostic and research procedures employed in the diagnosis of postural defects\n[2, 10–12].\nOur study shows that the Zebris CMS-10 system provides a detailed analysis of the position of set skeletal reference landmarks, thus representing a valuable adjunct to the clinical examination to increase the intrinsic value of screening tests.", "The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane.\nThe clinical evaluation of posture is reliable with regard to assessment in the frontal plane.\nThe Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects.", "Additional file 1:\nSample examination protocol.\n(DOCX 14 KB)", "Additional file 1:\nSample examination protocol.\n(DOCX 14 KB)" ]
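The methods of this article reduce each measured parameter to a binary code before the chi-square analysis: 0 for a physiological curvature (thoracic kyphosis 30-40°, lumbar lordosis 25-35°) or a symmetric position, and 1 for an accentuated/reduced curvature, pelvic rotation, or an asymmetry greater than 1 cm. A minimal sketch of that coding under the stated thresholds is given below; the function and constant names are illustrative assumptions, not taken from the study.

# Illustrative 0/1 coding of measured values, using the thresholds stated in the
# methods (names are assumptions, not taken from the study).
KYPHOSIS_RANGE = (30.0, 40.0)   # degrees, reference range for thoracic kyphosis
LORDOSIS_RANGE = (25.0, 35.0)   # degrees, reference range for lumbar lordosis
ASYMMETRY_LIMIT_CM = 1.0        # tolerated vertical difference in the frontal plane

def code_curvature(angle_deg: float, reference: tuple[float, float]) -> int:
    """0 = physiological curvature, 1 = accentuated or reduced."""
    low, high = reference
    return 0 if low <= angle_deg <= high else 1

def code_asymmetry(vertical_difference_cm: float) -> int:
    """0 = symmetric, 1 = asymmetry greater than 1 cm."""
    return 1 if vertical_difference_cm > ASYMMETRY_LIMIT_CM else 0

# Example participant: 42 deg kyphosis, 28 deg lordosis, 0.5 cm shoulder asymmetry
print(code_curvature(42, KYPHOSIS_RANGE),   # 1 (accentuated)
      code_curvature(28, LORDOSIS_RANGE),   # 0 (within the reference range)
      code_asymmetry(0.5))                  # 0 (symmetric)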
[ null, "methods", "results", "discussion", "conclusions", "supplementary-material", null ]
[ "Postural defects", "Spinal deformities", "Screening tests", "Topography" ]
Background: Human body posture is a motor habit associated with daily activity with an underlying morphological and functional basis [1]. It reflects the psychophysical status of the individual and is an index of mechanical efficiency of the kinaesthetic sense, muscle balance and musculomotor co-ordination [2]. Normal human posture in the vertical position relies on the spine and its position against the head and pelvis [3, 4]. The spatial relations among and between bony structures and articulations are stabilised by a system of fasciae, ligaments and muscles, while the central nervous system is the superior controller of body posture [5, 6]. Body posture variability depends on age, sex and environmental factors influencing its development during body growth [7, 8]. The following conditions are regarded as postural defects: abnormal shape of the physiological spinal curvatures, asymmetrical positioning of the shoulder or pelvic girdle, disturbance of the knee joint axis and abnormal shape of the foot arches. Screening studies of postural defects carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for scoliosis or other progressive spinal deformities [9–12]. An alarmingly high percentage of these defects are attributable to poor motor activity of children and adolescents, rapid changes taking place in the body during individual development and excessive time spent in the seated position [13]. An early and reliable detection programme for the population of children and adolescents combined with prophylactic measures to prevent the persistent spinal and trunk deformities is an appropriate strategy that can also mininise the medical and financial outcome of the more complex process of future treatment of postural defects and scolioses that might be necessary. The findings of a clinical evaluation of body posture and trunk asymmetry in a child depend on the experience of the examiner, compliance of the child and availability of bedside diagnostic equipment. Screening tests rely mainly on clinical evaluation since screening is supposed to be available to the entire population of children and adolescents. The easy availability and simple procedure used also need to guarantee a high reliability of diagnoses of postural defects. Non-invasive methods that will make diagnosis easier and more comprehensive are being sought to ensure more objective measurements. The Zebris CMS-10, a system for assessing body posture in three planes, offers a non-invasive method for evaluating the spatial positioning of selected topographic reference points in the frontal, sagittal and transverse planes, thus supplying objective data to support a clinical evaluation. The Zebris CMS 10 system demonstrates a high degree of test-retest reliability, intertester reliability and intratester reliability [14, 15]. The inclinometer method demonstrates a high degree of intertester reliability and intratester reliability [16, 17]. A variation of up to 1.5° was allowed using this technique. Measurements were repeated several times in each participant until two consecutive attempts by two independent examiners yielded the same angle values (including the admissible variation of 1.5°), thus complying with the principles of intertester and alternate-forms reliability. The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system. 
Methods: The methodology was approved by the Ethics Committee of the Rehabilitation Hospital for Children, Olsztyn, Poland. The study enrolled 13-15-year-old pupils attending a junior secondary school. The mean age was 14.2 (±0.6) years. The study group consisted of 138 participants, including 71 girls (mean age 14.1 ± 0.4 years, mean height 160.3 ± 3.4 cm, mean body weight 64.8 ± 3.9 kg) and 67 boys (mean age 14.4 ± 0.8 years, mean height 166.6 ± 2.9 cm, mean body weight 68.1 ± 3.6 kg). The exclusion criteria were a diagnosis of scoliosis and/or status post spinal surgery and/or feeling any pain. The screening test was carried out with the participants in a free standing position, involving specialists in rehabilitation as examiners and a Zebris CMS 10 system. The objective of the examination was not revealed to the examiners. In the first part, reference skeletal landmarks were marked on the body according to the principles of palpation anatomy. Trunk positioning was evaluated clinically in the sagittal and frontal planes. The findings were recorded in the study protocol (Additional file 1). Thoracic kyphosis and lumbar lordosis were evaluated in the sagittal plane with a Saunders inclinometer. Pelvic rotation was also evaluated in the sagittal plane. A Saunders inclinometer was placed in the cervicothoracic junction with the long arm pointing downwards from the spinous process at the apex of the curve and in the lumbosacral junction with the long arm pointing upwards from the spinous process at the apex of the curve (Figures  1, 2). The respective reference ranges assumed for kyphosis and lordosis were 30-40° and 25-35° [18]. The symmetry of position of the shoulder and pelvic girdles was evaluated in the frontal plane.Figure 1 Marking of anatomical skeletal reference landmarks. Figure 2 Clinical examination with Saunders inclinometer. Reference marker is a belt attached below iliac spines. Marking of anatomical skeletal reference landmarks. Clinical examination with Saunders inclinometer. Reference marker is a belt attached below iliac spines. Following the clinical evaluation, the same postural parameters were assessed with a Zebris CMS-10 device (Zebris Medical GmbH). The Zebris CMS 10 uses WinSpine software in the Microsoft Windows XP environment. Measurement error is defined by the manufacturer at 1.96 degrees and 2.2 millimeters for all parameters. Measurement sensitivity is 0.2 millimeters and 0.5 degrees. The software includes a data base of projects, patients and individual measurements. The core component of the testing system is a measuring device, an ultrasound point indicator probe and a reference marker [19]. The testing device is placed on an adjustable-height arm. The point indicator probe, which is placed directly onto skeletal reference landmarks on the patient’s body, has two ultrasound markers with their central points aligned with the tip of the probe. The skeletal reference landmarks are all thoracic and lumbar spinous processes of the spine. The software precisely calculates the position of the probe. A reference marker in the form of a belt is attached laterally below the posterior superior iliac spines and anterior superior iliac spines so as not to cover the measurement sites. The reference marker is used to eliminate changes of position during the examination. The testing unit was composed of a platform with built-in levels, a Zebris CMS-10 system and a computer. 
A transverse valve was mounted at one-third of the length of the platform in order to immobilise the Zebris CMS-10 device. Owing to this, the device could be placed in a fixed position and random movements during use were eliminated. A transverse red line was marked permanently at 80 cm from the transverse valve. One side of a 25 × 25 cm square was drawn on this line. The square was contoured with black lines. The lateral sides of the square were used to indicate where the examinees should place their feet in the standing position. The examinees were also instructed to place their feet in front of the red transverse line (Figure  3). The device was calibrated against the ground before each examination.Figure 3 Examination with Zebris CMS-10. Ultrasound probe recording the position of marked skeletal landmarks. Examination with Zebris CMS-10. Ultrasound probe recording the position of marked skeletal landmarks. For statistical analysis of the clinical vs. Zebris-based assessment, physiological spinal curvatures in the sagittal plane were assigned a value of 0 and accentuation or reduction of the curvatures in the sagittal plane below 30° or above 40° for thoracic kyphosis and below 25° or above 35° for lumbar lordosis was assigned a value of 1. A symmetrical position of the pelvis was assigned a value of - 0, and pelvic rotation, a value of 1. Pelvic rotation was assessed manually as a deficit of rotation of the iliac bone relative to the sacral bone on the left and right side of the body. In the frontal plane, symmetry of the acromions and of the pelvis was assigned a value of - 0, and an asymmetry greater than 1 cm in the vertical dimension, a value of 1. The statistical analysis was conducted in Statistica 7 software package, version 10.1 and based on the calculation of means, percentages and the Chi2 test statistics (empirical and expected), and Cramer’s V statistic, which reflects the strength of association of two parameters. The level of significance was set at p < 0.05. The parents of the children in the study group had provided written consent for their children to participate in the study. All the patients gave their written consent prior to their inclusion in the study. The study, funded from a scientific grant, was conducted in the years 2011–2014. Results: Cramer’s V values confirmed a significant correlation between the parameters in the case of lordosis and a clear correlation in the case of kyphosis. Thus, the study demonstrated that the Zebris CMS-10 system for three-dimensional analysis of body posture contributed a statistically significant adjustment to the clinical evaluation of the spine in the sagittal view; Cramer’s V was 0.514 for the evaluation of thoracic kyphosis in girls and 0.433 in the evaluation of lumbar lordosis in boys (Figures  4, 5, 6, 7, 8, 9).Figure 4 Results of evaluation of thoracic kyphosis in the sagittal plane in boys. Figure 5 Results of evaluation of lumbar lordosis in the sagittal plane in boys. Figure 6 Results of evaluation of pelvic rotation in the sagittal plane in boys. Figure 7 Results of evaluation of thoracic kyphosis in the sagittal plane in girls. Figure 8 Results of evaluation of lumbar lordosis in the sagittal plane in girls. Figure 9 Results of evaluation of pelvic rotation in the sagittal plane in girls. Results of evaluation of thoracic kyphosis in the sagittal plane in boys. Results of evaluation of lumbar lordosis in the sagittal plane in boys. Results of evaluation of pelvic rotation in the sagittal plane in boys. 
Results of evaluation of thoracic kyphosis in the sagittal plane in girls. Results of evaluation of lumbar lordosis in the sagittal plane in girls. Results of evaluation of pelvic rotation in the sagittal plane in girls. No statistically significant differences were found with regard to the accuracy of evaluation of pelvic rotation, indicating a similar degree of precision of both techniques, with a Cramer’s V of 0.112 in boys and 0.042 in girls. Similarly, no significant differences were revealed in the frontal plane, also suggesting a similar precision (Tables  1 and 2). These results show that clinical and device-assisted topography is characterised by a similar degree of accuracy. Minor differences were noted with regard to trunk asymmetry in the frontal plane. The differences were not statistically significant (Table  1). Clinical evaluation is thus a reliable method for assessing trunk asymmetry in the frontal plane and does not need to be confirmed by measurement system-based assessment.Table 1 Trunk assessment in boys, sagittal and frontal planes Trunk assessmentCMS -10 evaluationClinical evaluationCalculated Chi 2 Significance level p < 0.05Degrees of freedomTabular Chi 2 Hypothesis acceptedCramer’s VKyphosis12.80.0525.9alternative0.309Accentuated2117Reduced2510Normal2140Total6767Lordosis25.10.0525.9alternative0.433Accentuated339Reduced1715Normal1743Total6767Pelvis1.70.0513.7null0.112Rotated4942Not rotated1825Total6767Asymmetry5.880.0525.9null0.173Shoulder girdle4825Inferior scapular angles4513Pelvic obliqueness3728Total13066Table 2 Trunk assessment in girls, sagittal and frontal planes Trunk assessmentCMS -10 evaluationClinical evaluationCalculated Chi 2 Significance level p < 0.05Degrees of freedomTabular Chi 2 Hypothesis acceptedCramer’s VKyphosis37.50.0525.9alternative0.514Accentuated438Reduced613Normal2250Total7171Lordosis20.10.0525.9alternative0.377Accentuated237Reduced2212Normal2652Total7171Pelvis0.30.0513.7null0.042Rotated3740Not rotated3431Total7171Asymmetry3.90.0525.9null0.151Shoulder girdle5224Inferior scapular angles4218Pelvic obliqueness1817Total11259 Trunk assessment in boys, sagittal and frontal planes Trunk assessment in girls, sagittal and frontal planes Discussion: Screening tests play a significant role in assessment of normal individual development in the population of children and adolescents. Accurate screening allows for selecting children and adolescents at risk for the development of postural defects or spinal and trunk deformities in order to refer them to appropriate specialists [20]. A clinical examination is the simplest and also the most common form of postural assessment. In order to make the findings of clinical assessment more objective, measuring devices were gradually introduced in the 20th and 21st centuries, beginning with the Moire method, a photostereometric technique first used by Takasaki in 1970 [21], followed by raster plots projected onto the object being assessed in raster photogrammetry for massive screening tests [22–24]. Modern devices for three-dimensional motion analysis (Metercom system) use the Saunders digital inclinometer and an anthropostereometric technique [15]. Techniques of video capturing of body posture are also available. Importantly, studies comparing postural parameters assessed using different devices do not reveal statistically significant differences in either device-to-device or device-to-clinical examination comparisons [25]. 
New achievements in objective assessment methods to support clinical examinations based on a mathematical system of three-dimensional body posture analysis were revealed by American and German centres as early as the late 1990’s and in the first decade of this century [26]. German studies show that the Zebris CMS-10 is a precise device that produces a detailed analysis of the trunk position based on anatomical skeletal reference landmarks in static positions with an option to expand sequences of functional movement [27]. Only static positions were analysed in the present study. The present results confirm that the most difficult aspect of assessment of clinical deformities of the spine is the analysis of pathological spinal curvatures in the sagittal plane, while pelvic rotation and frontal positioning of the trunk are relatively easy to assess clinically. Similar results were obtained by Bibrowicz & Skolimowski [22]. Kyphosis and lordosis are subject to considerable interindividual variability and there are also no standards to define reference ranges for angle values in relation to sex and age in adolescents. The present study confirmed discrepancies between the populations of boys and girls. Abnormal spinal curvatures are one of many problems of adolescence [23]. The development and monitoring of a correct posture during the development of a child and adolescent is a prolonged process that depends on one’s somatic structure and the pace of individual development [3, 8]. The results for the frontal plane showed less discrepancy between the clinical examination and Zebris CMS-10-based assessment. Skolimowski et al. presented similar findings using other research tools [22]. International scientific societies emphasise the need to verify clinical and scientific research to make it more objective. The terminological system proposed by SRS (Scoliosis Research Society) in 1994 reflects the three-dimensional nature of scoliosis and other spinal deformities [26]. The terminology serves the goal of promoting systematic descriptions of deformities and rationalising and facilitating examinations in clinical practice [28]. Consensus statements published by SOSORT (Society on Scoliosis Orthopeaedic and Rehabilitation Treatment) systematise the level of reliability of diagnostic and research procedures employed in the diagnosis of postural defects [2, 10–12]. Our study shows that the Zebris CMS-10 system provides a detailed analysis of the position of set skeletal reference landmarks, thus representing a valuable adjunct to the clinical examination to increase the intrinsic value of screening tests. Conclusions: The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane. The clinical evaluation of posture is reliable with regard to assessment in the frontal plane. The Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects. 
Electronic supplementary material: Additional file 1: Sample examination protocol. (DOCX 14 KB)
Background: Screening tests play a significant role in rapid and reliable assessment of normal individual development in the entire population of children and adolescents. Body posture screening tests carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for progressive spinal deformities. This necessitates the search for effective and economically feasible forms of screening diagnosis. The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system (Zebris Medical GmbH). Methods: The study enrolled 13-15-year-old pupils attending a junior secondary school (mean age 14.2 years). The study group consisted of 138 participants, including 71 girls and 67 boys, who underwent a clinical evaluation of the body posture and an examination with the Zebris CMS 10 system. Results: Statistically significant discrepancies between the clinical and objective evaluation were noted with regard to lumbar lordosis in boys (n = 67) and thoracic kyphosis in girls (n = 71). No statistically significant differences in both groups were noted for pelvic rotation and trunk position in the frontal plane. Conclusions: 1. The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane.2. The clinical evaluation of posture is reliable with regard to assessment in the frontal plane.3. The Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects.
Background: Human body posture is a motor habit associated with daily activity with an underlying morphological and functional basis [1]. It reflects the psychophysical status of the individual and is an index of mechanical efficiency of the kinaesthetic sense, muscle balance and musculomotor co-ordination [2]. Normal human posture in the vertical position relies on the spine and its position against the head and pelvis [3, 4]. The spatial relations among and between bony structures and articulations are stabilised by a system of fasciae, ligaments and muscles, while the central nervous system is the superior controller of body posture [5, 6]. Body posture variability depends on age, sex and environmental factors influencing its development during body growth [7, 8]. The following conditions are regarded as postural defects: abnormal shape of the physiological spinal curvatures, asymmetrical positioning of the shoulder or pelvic girdle, disturbance of the knee joint axis and abnormal shape of the foot arches. Screening studies of postural defects carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for scoliosis or other progressive spinal deformities [9–12]. An alarmingly high percentage of these defects are attributable to poor motor activity of children and adolescents, rapid changes taking place in the body during individual development and excessive time spent in the seated position [13]. An early and reliable detection programme for the population of children and adolescents combined with prophylactic measures to prevent the persistent spinal and trunk deformities is an appropriate strategy that can also mininise the medical and financial outcome of the more complex process of future treatment of postural defects and scolioses that might be necessary. The findings of a clinical evaluation of body posture and trunk asymmetry in a child depend on the experience of the examiner, compliance of the child and availability of bedside diagnostic equipment. Screening tests rely mainly on clinical evaluation since screening is supposed to be available to the entire population of children and adolescents. The easy availability and simple procedure used also need to guarantee a high reliability of diagnoses of postural defects. Non-invasive methods that will make diagnosis easier and more comprehensive are being sought to ensure more objective measurements. The Zebris CMS-10, a system for assessing body posture in three planes, offers a non-invasive method for evaluating the spatial positioning of selected topographic reference points in the frontal, sagittal and transverse planes, thus supplying objective data to support a clinical evaluation. The Zebris CMS 10 system demonstrates a high degree of test-retest reliability, intertester reliability and intratester reliability [14, 15]. The inclinometer method demonstrates a high degree of intertester reliability and intratester reliability [16, 17]. A variation of up to 1.5° was allowed using this technique. Measurements were repeated several times in each participant until two consecutive attempts by two independent examiners yielded the same angle values (including the admissible variation of 1.5°), thus complying with the principles of intertester and alternate-forms reliability. The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system. 
Conclusions: The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane. The clinical evaluation of posture is reliable with regard to assessment in the frontal plane. The Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects.
Background: Screening tests play a significant role in rapid and reliable assessment of normal individual development in the entire population of children and adolescents. Body posture screening tests carried out at schools reveal that 50-60% of children and adolescents demonstrate body posture abnormalities, with 10% of this group at risk for progressive spinal deformities. This necessitates the search for effective and economically feasible forms of screening diagnosis. The aim of this study was to assess the reliability of clinical evaluation of body posture compared to objective assessment with the Zebris CMS-10 system (Zebris Medical GmbH). Methods: The study enrolled 13-15-year-old pupils attending a junior secondary school (mean age 14.2 years). The study group consisted of 138 participants, including 71 girls and 67 boys, who underwent a clinical evaluation of the body posture and an examination with the Zebris CMS 10 system. Results: Statistically significant discrepancies between the clinical and objective evaluation were noted with regard to lumbar lordosis in boys (n = 67) and thoracic kyphosis in girls (n = 71). No statistically significant differences in both groups were noted for pelvic rotation and trunk position in the frontal plane. Conclusions: 1. The finding of significant discrepancies between the results of assessment in the sagittal plane obtained in the clinical examination and Zebris CMS-10-based assessment suggests that clinical evaluation should be used to provide a general estimation of accentuation or reduction of spinal curvatures in the sagittal plane.2. The clinical evaluation of posture is reliable with regard to assessment in the frontal plane.3. The Zebris CMS-10 system makes the clinical examination significantly more objective with regard to assessment of the physiological curvatures and may be used to make screening tests more objective with regard to detecting postural defects.
3,178
334
[ 604, 15 ]
7
[ "plane", "clinical", "sagittal", "evaluation", "10", "sagittal plane", "assessment", "zebris", "results", "examination" ]
[ "body posture compared", "body posture available", "body posture variability", "posture abnormalities 10", "correct posture development" ]
[CONTENT] Postural defects | Spinal deformities | Screening tests | Topography [SUMMARY]
[CONTENT] Postural defects | Spinal deformities | Screening tests | Topography [SUMMARY]
[CONTENT] Postural defects | Spinal deformities | Screening tests | Topography [SUMMARY]
[CONTENT] Postural defects | Spinal deformities | Screening tests | Topography [SUMMARY]
[CONTENT] Postural defects | Spinal deformities | Screening tests | Topography [SUMMARY]
[CONTENT] Postural defects | Spinal deformities | Screening tests | Topography [SUMMARY]
[CONTENT] Adolescent | Female | Humans | Male | Physical Examination | Posture | Reproducibility of Results | Spinal Curvatures [SUMMARY]
[CONTENT] Adolescent | Female | Humans | Male | Physical Examination | Posture | Reproducibility of Results | Spinal Curvatures [SUMMARY]
[CONTENT] Adolescent | Female | Humans | Male | Physical Examination | Posture | Reproducibility of Results | Spinal Curvatures [SUMMARY]
[CONTENT] Adolescent | Female | Humans | Male | Physical Examination | Posture | Reproducibility of Results | Spinal Curvatures [SUMMARY]
[CONTENT] Adolescent | Female | Humans | Male | Physical Examination | Posture | Reproducibility of Results | Spinal Curvatures [SUMMARY]
[CONTENT] Adolescent | Female | Humans | Male | Physical Examination | Posture | Reproducibility of Results | Spinal Curvatures [SUMMARY]
[CONTENT] body posture compared | body posture available | body posture variability | posture abnormalities 10 | correct posture development [SUMMARY]
[CONTENT] body posture compared | body posture available | body posture variability | posture abnormalities 10 | correct posture development [SUMMARY]
[CONTENT] body posture compared | body posture available | body posture variability | posture abnormalities 10 | correct posture development [SUMMARY]
[CONTENT] body posture compared | body posture available | body posture variability | posture abnormalities 10 | correct posture development [SUMMARY]
[CONTENT] body posture compared | body posture available | body posture variability | posture abnormalities 10 | correct posture development [SUMMARY]
[CONTENT] body posture compared | body posture available | body posture variability | posture abnormalities 10 | correct posture development [SUMMARY]
[CONTENT] plane | clinical | sagittal | evaluation | 10 | sagittal plane | assessment | zebris | results | examination [SUMMARY]
[CONTENT] plane | clinical | sagittal | evaluation | 10 | sagittal plane | assessment | zebris | results | examination [SUMMARY]
[CONTENT] plane | clinical | sagittal | evaluation | 10 | sagittal plane | assessment | zebris | results | examination [SUMMARY]
[CONTENT] plane | clinical | sagittal | evaluation | 10 | sagittal plane | assessment | zebris | results | examination [SUMMARY]
[CONTENT] plane | clinical | sagittal | evaluation | 10 | sagittal plane | assessment | zebris | results | examination [SUMMARY]
[CONTENT] plane | clinical | sagittal | evaluation | 10 | sagittal plane | assessment | zebris | results | examination [SUMMARY]
[CONTENT] reliability | body | body posture | posture | high | defects | adolescents | children adolescents | intertester | children [SUMMARY]
[CONTENT] reference | mean | position | probe | landmarks | skeletal | cm | reference marker | marker | iliac [SUMMARY]
[CONTENT] results evaluation | evaluation | plane | results | sagittal | sagittal plane | boys | girls | 0525 | sagittal plane boys [SUMMARY]
[CONTENT] regard | assessment | clinical | regard assessment | objective regard | plane | clinical examination | curvatures | clinical evaluation | objective [SUMMARY]
[CONTENT] examination | clinical | plane | examination protocol | file sample examination protocol | file sample examination | file sample | sample | sample examination | sample examination protocol [SUMMARY]
[CONTENT] examination | clinical | plane | examination protocol | file sample examination protocol | file sample examination | file sample | sample | sample examination | sample examination protocol [SUMMARY]
[CONTENT] ||| 50-60% | 10% ||| ||| Zebris | Zebris Medical GmbH [SUMMARY]
[CONTENT] 13 | 15-year-old | age 14.2 years ||| 138 | 71 | 67 | Zebris | 10 [SUMMARY]
[CONTENT] 67 | 71 ||| [SUMMARY]
[CONTENT] 1 ||| Zebris ||| ||| Zebris [SUMMARY]
[CONTENT] ||| 50-60% | 10% ||| ||| Zebris | Zebris Medical GmbH ||| 13 | 15-year-old | age 14.2 years ||| 138 | 71 | 67 | Zebris | 10 ||| 67 | 71 ||| ||| 1 ||| Zebris ||| ||| Zebris [SUMMARY]
[CONTENT] ||| 50-60% | 10% ||| ||| Zebris | Zebris Medical GmbH ||| 13 | 15-year-old | age 14.2 years ||| 138 | 71 | 67 | Zebris | 10 ||| 67 | 71 ||| ||| 1 ||| Zebris ||| ||| Zebris [SUMMARY]
Accelerative effect of topical
33378625
Zataria multiflora Boiss (Lamiaceae) essential oil (ZME) is believed to be a bactericidal herbal medicine and might alleviate the negative effects of infection.
CONTEXT
A full-thickness excisional skin wound was surgically created in each mouse and inoculated with 5 × 10⁷ suspension containing Pseudomonas aeruginosa and Staphylococcus aureus. The BALB/c mice (n = 72) were divided into four groups: (1) negative control that received base ointment (NCG), (2) positive control that daily received Mupirocin® (MG), (3) therapeutic ointment containing 2% ZMEO and (4) therapeutic ointment containing 4% ZMEO, for 21 days. Wound contraction, total bacterial count, histopathological parameters, antioxidant activity, qRT-PCR analysis for expression of IL-1β, TNF-α, VEGF, IGF-1, TGF-β, IL-10, and FGF-2 mRNA levels were assessed on days 3, 7, and 14 following the wounding.
MATERIALS AND METHODS
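The total bacterial count mentioned in the methods above is conventionally reported as colony-forming units per gram of tissue (CFU/g). The sketch below shows only that generic conversion; the study's actual homogenisation and dilution scheme is not given in this excerpt, so all numbers and names are illustrative assumptions.

# Generic CFU/g calculation for a tissue biopsy (assumed protocol; the study's
# dilution scheme and plating volume are not stated in this excerpt).
def cfu_per_gram(colonies_counted: int,
                 dilution_factor: float,
                 plated_volume_ml: float,
                 tissue_weight_g: float,
                 homogenate_volume_ml: float) -> float:
    """CFU/g = colonies x dilution / plated volume, scaled to the whole
    homogenate and divided by the tissue weight."""
    cfu_per_ml = colonies_counted * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml / tissue_weight_g

# Example: 85 colonies on a 10^-4 dilution plate, 0.1 ml plated,
# 0.2 g biopsy homogenised in 2 ml saline -> 8.5e+07 CFU/g.
print(f"{cfu_per_gram(85, 1e4, 0.1, 0.2, 2.0):.2e} CFU/g")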
Topical administration of ZMEO significantly decreased the total bacterial count and wound area, and also the expression of IL-1β and TNF-α, compared to the control groups (p < 0.05) on all days. It also significantly increased the expression of TGF-β, IL-10, IGF-1, FGF-2, and VEGF, as well as angiogenesis, fibroblasts, fibrocytes, the epithelialization ratio, and collagen deposition, and improved antioxidant status compared to the control group (p < 0.05).
RESULTS
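Wound contraction in the results above is usually expressed as the percentage reduction of the initial wound area over time. The sketch below uses that conventional definition as an assumption; the authors' exact formula and the example areas are not taken from the study.

# Conventional percentage wound contraction (assumed definition; areas in mm^2
# are made-up example values, not data from the study).
def wound_contraction_percent(initial_area: float, area_on_day: float) -> float:
    return (initial_area - area_on_day) / initial_area * 100.0

initial = 78.5  # example initial wound area, mm^2
for day, area in [(3, 70.0), (7, 45.0), (14, 12.0)]:
    print(f"day {day}: {wound_contraction_percent(initial, area):.1f}% contraction")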
ZMEO accelerated the healing process of infected wounds by shortening the inflammatory phase and enhancing the proliferative phase. Applying ZMEO alone and/or in combination with chemical agents could be suggested for the treatment of wounds.
DISCUSSION AND CONCLUSION
[ "Administration, Topical", "Animals", "Collagen", "Inflammation", "Lamiaceae", "Mice", "Mice, Inbred BALB C", "Neovascularization, Pathologic", "Oils, Volatile", "Surgical Wound Infection", "Wound Healing" ]
7782911
Introduction
Wound infections are important challenges that cause patient suffering and economic losses (Vittorazzi et al. 2016; Daemi et al. 2019a; Farghali et al. 2019). Skin is a regenerative organ which acts as a barrier between the body and the external environment (Hassan et al. 2014). A wound is defined as a disturbance in the anatomic structure of the skin and its functional integrity (Farghali et al. 2019). The skin prevents the penetration of bacteria and fungi, which are important causes of mortality and morbidity. Pseudomonas aeruginosa and Staphylococcus aureus are opportunistic bacteria that induce infection in patients (Cardona and Wilson 2015). These bacteria are commonly detected in the upper and deepest regions of the wound bed (Serra et al. 2015). Colonization of S. aureus and P. aeruginosa in the wound site postpones the wound healing process (Farahpour et al. 2020; Khezri et al. 2020). The wound healing process comprises several phases, including coagulation, inflammation, epithelialization, granulation tissue formation, and tissue remodelling (Daemi et al. 2019a, 2019b; Farahpour et al. 2020). The inflammatory phase occurs after activation of inflammatory chemokines (Farahpour et al. 2020). Interleukin-1β (IL-1β) is significantly synthesized 12–24 h after infliction of the wound, and its level returns to the basal level after the proliferative stage is completed (Fahey and Doyle 2019). Tumour necrosis factor-α (TNF-α) is significantly synthesized by macrophages and T lymphocytes, and its level increases under inflammation and infection conditions (Xue and Falcon 2019). Following the induction of inflammation, the proliferative phase occurs, in which many genes are involved. Insulin-like growth factor 1 (IGF-1) promotes keratinocyte and fibroblast proliferation and improves re-epithelialization (Daemi et al. 2019b). Decreased endothelial insulin/IGF-1 signalling delays the wound healing process (Aghdam et al. 2012). Vascular endothelial growth factor (VEGF) induces angiogenesis and stimulates cell migration and proliferation (Farahpour et al. 2020). Fibroblast growth factor-2 (FGF-2) is a protein that participates in the wound healing process (Souto et al. 2020). Interleukin-10 (IL-10) reduces the production of pro-inflammatory cytokines, while transforming growth factor-β (TGF-β) has roles in promoting the proliferative phase by improving the proliferation and differentiation of fibroblasts, collagen production, and wound contraction (Khezri et al. 2020). On the other hand, oxidative stress increases under wound disorders and raises the production of reactive oxygen species (ROS) at the wound site (Ustuner et al. 2019). Applying antibacterial and antioxidant agents shortens the inflammatory phase, promotes the proliferative phase and accelerates the wound healing process (Bardaa et al. 2016; Vittorazzi et al. 2016; Daemi et al. 2019a, 2019b). Zataria multiflora Boiss (Lamiaceae) (ZM) has antibacterial activity against Pseudomonas spp. (Barkhori-Mehni et al. 2017). Its antibacterial activity is attributed to its phenolic compounds, which destroy the bacterial cell wall (Barkhori-Mehni et al. 2017). It demonstrates antioxidant properties due to compounds such as thymol and carvacrol (Mojaddar Langroodi et al. 2019). Seemingly, ZM can accelerate the wound healing process due to its antibacterial and antioxidant features, but no studies have yet been conducted to evaluate the effect of Zataria multiflora essential oil (ZME) on the wound healing process.
Therefore, this study was conducted to evaluate the effect of an ointment prepared from ZME on the healing process of infected wound model by S. aureus and P. aeruginosa by assessing wound contraction, total bacterial count, histopathological and immunohistochemical, antioxidant activity, and qRT-PCR analyses.
null
null
Results
Composition of the Zataria multiflora essential oil The analysis of Zataria multiflora essential oil by GC-FID and GC-mass spectrometry (MS) identified 29 compounds, which accounted for 99.6% of the total essential oil composition (Table 1). The major constituents were thymol (52.90%), p-cymene (9.10%), γ-terpinene (8.10%) and carvacrol (6.80%) (Figure 1). GC-MS chromatogram of the essential oil of Zataria multiflora. Chemical constituents of Zataria multiflora essential oil. RI-Cal: retention indices calculated based on C6–C24 n-alkanes on a DB-5 column. RI-Lit: retention indices retrieved from the literature (Adams 2007). Bold values show major compounds in the essential oil. Wound area The effects of ZMEO on the percentage of wound contraction are shown in Figure 2. Topical administration of the therapeutic ointments significantly (p < 0.05) increased the wound contraction rate on days 7 and 14 compared with the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed a higher wound contraction rate in the 4% ZMEO-treated group than in the 2% ZMEO-treated and MG groups (p < 0.05). The effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) on different days. Superscripts (a–d) indicate significant differences within the same day at p < 0.05. The bacteriological count The effects of ZMEO on tissue bacterial counts are presented in Figure 2(C). Topical administration of the therapeutic ointments significantly decreased bacterial colonization on all days compared with the NCG group (p < 0.05), and the higher concentration of ZMEO (4%) decreased the total bacterial count significantly more than the lower concentration (2%) and the MG group (p < 0.05). Histopathological parameters The results for the histopathological parameters are presented in Table 2. The oedema score was significantly higher in the NCG group and significantly lower in the 4% ZMEO-treated group, as well as in the 2% ZMEO and MG-treated groups; the lowest oedema was observed in the 4% ZMEO group, in which no oedema remained on day 14. Immune cell infiltration was significantly higher in the ZMEO groups on day 3 compared with the other groups (p < 0.05) but lower on days 7 and 14 (p < 0.05); that is, ZMEO increased immune cell infiltration on day 3 and decreased it on days 7 and 14. Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups compared with the NCG group, with the highest levels in the 4% ZMEO group. Collagen condensation (Figure 4) and re-epithelialization (Figure 5) were also significantly higher in the ZMEO groups, which supports these findings. Immunohistochemical staining for angiogenesis: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the elevated angiogenesis in the ZMEO-treated groups (arrows) on day 7 after wound induction. CD31 staining, 400×. Cross-section from the wound area: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the well-formed collagen deposition in cross-sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased on day 14 after injury (second row); the third row represents the software analysis of collagen intensity. The cross-sections from ZMEO-treated animals exhibited dense collagen deposition compared with the NCG and MG groups. Masson's trichrome staining and fluorescent staining for collagen, 400× magnification. Cross-section from the wound area: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the advanced re-epithelialization in ZMEO-treated animals: it was initiated on day 7 after wound induction, whereas the cross-sections from the NCG and MG groups show no epithelialization. Note the well-organized dermis and complete epithelialization with well-formed papillae in ZMEO-treated animals in comparison with the NCG and MG groups. Masson's trichrome staining, 100×. The effects of ZMEO on histopathological parameters on different days. Superscripts (a–d) indicate significant differences within the same day at p < 0.05. Molecular analyses The results for gene expression are presented in Figure 6. Gene expression of IL-1β and TNF-α was significantly (p < 0.05) lower in the treated groups on all sampling days compared with the NCG group, with the greatest decrease in the 4% ZMEO-treated animals. Gene expression of IGF-1 and VEGF was significantly (p < 0.05) higher in all treated groups on day 7 after wounding, but the levels decreased significantly on day 14. IL-10 expression increased on day 7 and decreased on day 14 compared with the NCG group (p < 0.05). TGF-β and FGF-2 expression showed no significant difference (p > 0.05) on day 3, increased significantly (p < 0.05) in all treated groups, particularly in the 4% ZMEO-treated animals, on day 7, and was significantly (p < 0.05) reduced on day 14 compared with the NCG group. The effects of topical application of ZMEO on gene expression. Superscripts (a–d) indicate significant differences within the same day at p < 0.05. Antioxidant status Antioxidant status in the different groups is presented in Figure 7. The 2% and 4% ZMEO treatments significantly (p < 0.05) increased TAC and TTM levels compared with the NCG group on all days after wounding, with the highest increase in the 4% ZMEO-treated animals. MDA levels decreased significantly (p < 0.05) in all treated groups compared with the NCG group on all days after wounding, with the greatest decrease in the 4% ZMEO-treated group. The effects of topical application of ZMEO on antioxidant properties. Superscripts (a–d) indicate significant differences within the same day at p < 0.05. A minimal illustrative sketch of the per-day statistical comparison behind these superscripts follows below.
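The per-day comparisons that produce the superscript letters above were reported as one-way ANOVA followed by Dunnett's pair-wise test in GraphPad Prism (see the statistical analysis entry later in this record). As an illustration only, and not the authors' actual analysis, the following Python sketch reproduces that workflow with SciPy; the group values are invented placeholders, and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
# Hedged sketch: one-way ANOVA followed by Dunnett's test against the
# negative control group (NCG), mirroring the per-day comparisons reported
# above. All numbers are invented placeholders, not study data.
import numpy as np
from scipy import stats

# Hypothetical wound-contraction percentages on one sampling day (n = 6 per group).
ncg   = np.array([22.0, 25.1, 19.8, 23.4, 21.7, 24.0])   # negative control (base ointment)
mg    = np.array([38.2, 41.0, 36.5, 39.9, 37.4, 40.3])   # mupirocin
zmeo2 = np.array([41.5, 44.2, 39.8, 43.0, 42.1, 40.7])   # 2% ZMEO
zmeo4 = np.array([52.3, 55.0, 50.9, 54.1, 53.2, 51.8])   # 4% ZMEO

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(ncg, mg, zmeo2, zmeo4)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment compared with the NCG control (SciPy >= 1.11).
res = stats.dunnett(mg, zmeo2, zmeo4, control=ncg)
for name, p in zip(["MG", "2% ZMEO", "4% ZMEO"], res.pvalue):
    print(f"{name} vs NCG: p = {p:.4f}")
```

The superscript letters in the figures group treatments that do not differ within a day; such a grouping can be derived from pair-wise p-values like those above, although the exact lettering procedure is not described in the text.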
Conclusion
Topical application of ZMEO decreased inflammation and the total bacterial count, driving the wound healing process towards the proliferative phase. It increased the expression of proliferative genes, helped to shorten the inflammatory phase and accelerated the proliferative phase, and its antioxidant activity further accelerated healing. Together, these effects hastened the wound healing process, as reflected in the wound area measurements. Accordingly, ZMEO, alone or in combination with chemical agents, could be suggested for the treatment of wounds.
[ "Materials", "Essential oil analysis and identification", "Animals", "Induction of wound", "Wound area", "The investigation of the bacteriological count in the wound area", "Histological analysis", "Immunohistochemical staining (IHC) for angiogenesis (CD31)", "Fluorescent staining for collagen", "RNA isolation and cDNA synthesis", "Antioxidant capacity", "Statistical analysis", "Composition of the Zataria multiflora essential oil", "Wound area", "The bacteriological count", "Histopathological parameters", "Molecular analyses", "Antioxidant status" ]
[ "Zataria multiflora essential oil was purchased from Barij Essence Company (Kashan-Iran) and approved by the same herbarium company (No.1203). Commercial base ointment and mupirocin were prepared from Pars Daruo. Ltd., Tehran, Iran. The commercial base ointment contained 90% soft paraffin, 5% hard paraffin and 5% lanolin. Rodent feeds were prepared from Javaneh Khorasan Company (Khorasan, Iran).", "Gas chromatography (GC) analysis was employed in order to identify the compounds. The components of the essential oil were recognized using calculation of their retention indices under temperature-programmed conditions for n-alkanes (C6–C24) and the oil on DB-5 column.", "Seventy-two BALB/c mice weighing 25 ± 3 g were prepared. The mice had free access to water and pelleted feed for rodents (Javaneh Khorasan Company, Khorasan, Iran). The mice were kept in an ambient temperature of 23 ± 3 °C, at a constant air humidity, and a natural day–night cycle. This study was approved by the Animal Research Committee of the Urmia Islamic Azad University with ethical No. IAU-UB 11044.", "To induce the wound, the mice were primarily anesthetized by intraperitoneal administration route using 50 mg/kg ketamine and 5 mg/kg xylazine. Following the induction of anaesthesia, dorsal region of each mouse was shaven for surgical actions. A circular wound model was induced in a size of 7 mm using a surgical biopsy punch and per wound was inoculated by an aliquot of 5 × 107 suspension containing S. aureus (ATCC 26313) and P. aeruginosa (ATCC 27853) in 50 μL phosphate-buffered saline (Farahpour et al. 2020; Khezri et al. 2019). Subsequently, animals were divided into four groups (n = 18), as (group I) negative control group (NCG) that was administered commercial base ointment, (group II) positive control group that were treated with Mupirocin® (MG), and (groups III and IV) treated groups with therapeutic ointments containing 2 g and 4 g ZME mixed in NCG (2% and 4% ZMEO), respectively. Concerning the colonization of the bacteria, 24 h after the induction of the wounds, ointments were applied on wounds, once a day. In addition, according to the tissue sampling on days 3, 7, and 14 after the wounding, the animals in each group were divided into three subgroups (n = 6).", "The rate of wound closure was estimated as reported previously (Nejati et al. 2015; Khezri et al. 2019). In order to calculate the percentage of wound closure rate, a transparent paper was placed on it and it was calculated based on the following formula:\nPercentage of wound closure = [(Wound area on Day 0 − Wound area on Day X)/Wound area on Day 0] × 100.", "To calculate the bacteriological count, the granulated tissues were excised and 0.1 g of the sample was crushed, minced, and homogenized in a sterile mortar containing 10 mL of sterile saline. It was the diluted in tubes containing 9 mL of sterile saline, was cultured on plate count agar (Merck KGaA, Darmstadt, Germany), incubated at 37 °C for 48 h, calculated as CFU/g of granulation tissue (Farahpour et al. 2020; Khezri et al. 2019).", "Concerning the evaluation of the histological parameters, the mice were euthanized by a special CO2 device and the granulation tissue were excised in along to 1 to 2 mm from surrounding normal skin. 
The samples were fixed in neutral-buffered formalin 10%, routinely processed, and paraffin wax was embedded, sectioned at 5 µm, and stained with Masson’s trichrome and then examined under light microscopy (Olympus CX31RBSF attached cameraman) as reported by Farahpour et al. (2018). Cellular infiltration, edoema and collagen deposition were assessed. An image pro-insight software was utilised for evaluating the collagen deposition. Morphometric lens (Olympus, Germany) was used for assessing the epithelial thickness. edoema was assessed as a 5-score as reported previously (Nejati et al. 2015).", "To assess the IHC, it was conducted as reported by Farahpour et al. (2018). The IHC was conducted based on manufacturer’s protocol (Biocare, Yorba Linda, CA). The tissue sections were rinsed gently in the washing buffer and placed in a buffer bath. A DAB chromogen was employed for assessing the tissue sections and then incubated for 5 min. The sections were then dipped in weak ammonia (0.037 M/L) for 10 times, rinsed with distilled water and cover slipped. Brown stains were considered as positive immunohistochemical.", "To wash the slides, acetic acid 1% was used for several minutes. The slides were stained using acridine-orange. Phosphate buffer was utilized to remove staining. The slides were the washed in distilled water, mounted with buffer drop and analyzed with a fluorescent microscope. The bunds with red and yellowish red colour were considered as collagen bunds (Farahpour et al. 2018).", "The wound tissues were isolated as previously reported and RNA was extracted applying a standard TRIZOL procedure as reported by Farahpour et al. (2018), and assessed by spectrophotometer (260 nm and 260/280 = 1.8–2.0). To prepare the cDNA, it was put in a reaction being mixed with 1 µg RNA, oligo (dT) primer (1 µL), 5 × reaction buffer (4 µL), RNAse inhibitor (1 µL), 10 mM dNTP mix (2 µL) and M-MuLV Reverse Transcriptase (1 µL) as described by producer protocol (Fermentas, GmbH, Germany). Time and temperature were as reported by Farahpour et al. (2018). The used primers included IL-1β, forward (5′-AAC AAA CCC TGC AGT GGT TCG-3′) and reverse (5′-AGCTGCTTCAGACACTTGCAC-3′); TGF-β, forward (5-CCAAACGCCGAAGACTTATCC-3′) and reverse (5′-CTTATTACCGATGGGATGGGATAGCCC-3′); IL-10, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), TNF-α, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), FGF-2, forward (5′-GGAACCCGGCGGGACACGGAC-3′) and reverse (5′-CCGCTGTGGCAGCTCTTG GGG-3′); VEGF, forward (5′-GCTCCGTAGTAGCCGTGGTCT-3′) and reverse (5′-GGAACCCGGCGGGACACGGAC-3′), and IGF-1, forward (5′-TAGGTGGTTGATGAATGGT-3′) and reverse (5′-GAAAGGGCAGGGCTAAT-3′).", "To assess the antioxidant capacity, the wound granulation tissue was homogenized in ice-cold KCl (150 mM) and the mixture was then centrifuged at 3000 g for 10 min. The supernatant was used to evaluate the total antioxidant capacity (TAC), malondialdehyde (MDA) and total tissue thiol molecules (TTM) content. The Lowry method was employed for evaluating the protein content of the samples (Daemi et al. 2019a) and MDA content was used for assessing the lipid peroxidation rate. To assess the TTM, the collected granulation tissue sample was homogenized and the supernatant was added to Tris-EDTA as reported by Daemi et al. (2019a).", "The results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. 
Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance.", "The analysis of Zataria multiflora essential oil with GC-FID and GC-mass spectrometry (MS) identified 29 compounds which accounted for 99.6% of the total essential oil composition (Table 1). It contained thymol (52.90%), p-cymene (9.10), γ-terpinene (8.10%) and carvacrol (6.80%) as major constituents (Figure 1).\nGC-MS chromatogram of the essential oil of Zataria multiflora.\nChemical constituents of Zataria multiflora essential oil.\nRI-Cal: retention indices calculated based on C6–C24\nn-alkenes from a DB-5 column.\nRI-Cal: retention indices retrieved from literature (Adams 2007).\nBold values show major compounds in the essential oil.", "The findings for the effects of ZMEO on the percentage of wound contraction are shown in Figure 2. The results implied that topical administration of therapeutic ointments significantly (p < 0.05) increased wound contraction rate on days 7 and 14 compared to the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed that a higher wound contraction rate was observed in the 4% ZMEO-treated group compared to the 2% ZMEO-treated and MG groups (p < 0.05).\nThe effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) in different days. Superscripts (a–d) indicate significant differences in same day at p < 0.05.", "The results concerning the effects of ZMEO on tissue bacteria counts are represented in Figure 2(C). The results showed that topical administration of therapeutic ointments significantly decreased bacterial colonization in all the days compared to the NCG group (p < 0.05). The comparison between treatment groups demonstrated that a higher level of ZMEO (4%) significantly decreased the total bacterial count compared to that of a lower level of it (2%) and MG groups (p < 0.05).", "The results for histopathological parameters are represented in Table 2. The results showed that edoema score was significantly higher in the NCG group. It was observed to be significantly lower in the animals treated with the 4% ZMEO-treated group and also in 2% ZMEO and MG-treated groups. The findings implied that the lowest edoema was observed in 4% ZMEO in day 14; meanwhile, there was no observed edoema in the 4% ZMEO group on day 14. The results showed that immune cells were significantly lower in ZMEO groups in day 3 compared to other groups (p < 0.05), yet it decreased on days 7 and 14 in the ZMEO groups compared to other groups (p < 0.05). It means that the application of the ZMEO decreases immune cells in days 7 and 14, and increased it on day 3. Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups compared to the NCG group and the highest level was observed in level of the 4%-ZMEO group. The results for collagen condensation (Figure 4) and re-epithelialization (Figure 5) revealed that parameters were significantly higher in the ZMEO groups, which confirms the results.\nImmunohistochemical staining for angiogenesis:; (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See elevated angiogenesis in ZMEO-treated group (arrows) on day 7 after wound induction. CD 31 staining, 400×.\nCross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. 
Note well-formed collagen deposition in cross sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased on day 14 after injury (second row). The third row represents the software analyze for collagen intensity. The cross sections from animals in ZMEO-treated groups exhibited condense collagen deposition versus NCG and MG groups. Masson-trichrome staining and fluorescent staining for collagen, 400× magnification.\nCross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See well re-epithelialization in ZMEO-treated animals. The re-epithelialization initiated on day 7 after wound induction in ZMEO-treated animals. However, the cross sections from NCG and MG groups are not representing epithelialization. Note well-organized dermis and complete epithelialization with well-formed papillae in ZMEO-treated animals in comparison to NCG and MG groups. Masson-trichrome staining, 100×.\nThe effects of ZMEO on histopathological parameters in different days.\nSuperscripts (a–d) indicate significant differences in same day at p < 0.05.", "The results for gene expression are presented in Figure 6. Gene expression of IL-1β and TNF-α was significantly (p < 0.05) lower on all sampling days at treated groups compared to the NCG group. The greatest decrease was observed in the 4% ZMEO-treated animals. Gene expressions of IGF-1 and VEGF was significantly (p < 0.05) higher at all treated groups on day 7 after the wounding, but the levels significantly decreased on day 14. The expression of IL-10 level increased on day 7 whereas it decreased on day 14 compared to the NCG group (p < 0.05). Further analysis of gene expressions of TGF-β and FGF-2 indicate that its level does not have significant difference (p > 0.05) on day 3, yet its level significantly (p < 0.05) increased in all treated groups, particularly at 4% ZMEO-treated animals on day 7. The amount of TGF-β and FGF-2 gene expression levels also were significantly (p < 0.05) reduced on day 14 compared to the NCG group.\nThe effects of topical application of ZMEO on gene expression. Superscripts (a–d) indicate significant differences in same day at p < 0.05.", "Antioxidant status in different groups is represented in Figure 7. Using the 2% and 4%-ZMEO significantly (p < 0.05) increased TAC and TTP levels compared to the NCG group in all of the days after the wounding. Interestingly, the highest increase was observed in the 4% ZMEO-treated animals. Further analysis of MDA level indicates significant (p < 0.05) decrease in all of the treated groups compared to the NCG group in all of the days after the wounding. The highest decrease was observed in the 4% ZMEO-treated group.\nThe effects of topical application of ZMEO on antioxidant properties. Superscripts (a–d) indicate significant differences in same day at p < 0.05." ]
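Two of the methods entries above reduce to simple arithmetic: the wound-closure percentage formula and the CFU/g count obtained from serial dilution and plating. The Python sketch below works through both. It is an illustration under stated assumptions, not the authors' own calculation code; in particular, the volume plated is not given in the text, and the 0.1 mL used here is hypothetical.

```python
# Hedged sketch of two calculations described in the methods entries above.
# Values marked as hypothetical are illustrative and not taken from the study.
import math


def wound_closure_percent(area_day0: float, area_dayx: float) -> float:
    """Percentage of wound closure = [(area on day 0 - area on day X) / area on day 0] x 100."""
    return (area_day0 - area_dayx) / area_day0 * 100.0


def cfu_per_gram(colonies: int, dilution_factor: float, plated_volume_ml: float,
                 homogenate_volume_ml: float, tissue_mass_g: float) -> float:
    """CFU per gram of granulation tissue from a plate count.

    colonies              -- colonies counted on the plate
    dilution_factor       -- total ten-fold dilution applied before plating (e.g. 1e2)
    plated_volume_ml      -- volume spread on the plate (hypothetical here)
    homogenate_volume_ml  -- saline used to homogenize the tissue (10 mL in the text)
    tissue_mass_g         -- tissue mass homogenized (0.1 g in the text)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml / tissue_mass_g


# A 7 mm circular punch wound has an initial area of about 38.5 mm^2.
area_day0 = math.pi * (7 / 2) ** 2
area_day7 = 15.4  # hypothetical traced area on day 7, in mm^2
print(f"Closure on day 7: {wound_closure_percent(area_day0, area_day7):.1f}%")

# 120 colonies on a plate from the 10^-2 dilution of the homogenate
# (0.1 g tissue in 10 mL saline), assuming 0.1 mL was plated (hypothetical).
print(f"Bacterial load: {cfu_per_gram(120, 1e2, 0.1, 10.0, 0.1):.2e} CFU/g")
```

The wound-closure helper is a direct transcription of the formula in the wound-area entry; the CFU/g helper simply chains the dilution, plating and tissue-mass factors that the bacteriological-count entry describes.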
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Materials", "Essential oil analysis and identification", "Animals", "Induction of wound", "Wound area", "The investigation of the bacteriological count in the wound area", "Histological analysis", "Immunohistochemical staining (IHC) for angiogenesis (CD31)", "Fluorescent staining for collagen", "RNA isolation and cDNA synthesis", "Antioxidant capacity", "Statistical analysis", "Results", "Composition of the Zataria multiflora essential oil", "Wound area", "The bacteriological count", "Histopathological parameters", "Molecular analyses", "Antioxidant status", "Discussion", "Conclusion" ]
[ "Wound infections are important challenges that cause the suffering of patients and economic losses (Vittorazzi et al. 2016; Daemi et al. 2019a; Farghali et al. 2019). Skin is a regenerative organ which acts as a barrier between the body and the external environment (Hassan et al. 2014). A wound is defined as a disturbance in anatomic structure of skin and its functional integrity (Farghali et al. 2019). Skin prevents the penetration of bacteria and fungi that are important challenges that induce mortality and morbidity. Pseudomonas aeruginosa and Staphylococcus aureus are opportunistic bacteria that induce infection in patients (Cardona and Wilson 2015). The bacteria are commonly detected in the upper and deepest region of wound bed (Serra et al. 2015). Colonization of S. aureus and P. aeruginosa in the wound site postpones the wound healing process (Farahpour et al. 2020; Khezri et al. 2020). The wound healing process comprises several phases, including coagulation, inflammation, epithelialization, granulation tissue formation, and tissue remodelling (Daemi et al. 2019a, 2019b; Farahpour et al. 2020). Inflammatory phase occurs after activation of inflammatory chemokines (Farahpour et al. 2020). Interleukin-1β (IL-1β) is significantly synthesized 12–24 h after infliction of the wound, and its level returns to basal level after the proliferative stage is completed (Fahey and Doyle 2019). Tumour necrosis factor-α (TNF-α) is significantly synthesized by macrophages and T lymphocytes and its level increases under inflammation and infection conditions (Xue and Falcon 2019). Following the induction of inflammation, proliferative phase occurs in which many genes are involved. Insulin-like growth factor 1 (IGF-1) promotes the production of keratinocyte and fibroblast proliferation and improves the re-epithelialization (Daemi et al. 2019b). Decreased endothelial insulin/IGF-1 signal delays in the wound healing process (Aghdam et al. 2012). Vascular endothelial growth factor (VEGF) induces angiogenesis and stimulates cell migration and proliferation (Farahpour et al. 2020). Fibroblast growth factor-2 (FGF-2) is a protein that participates in the wound healing process (Souto et al. 2020). Interleukin-10 (IL-10) reduces the production of pro-inflammatory cytokine, but tumour growth factor-β (TGF-β) has roles in increasing the proliferative phase improving the proliferation and differentiation of fibroblasts, collagen production, and wound contraction (Khezri et al. 2020). On the other hand, oxidative stress increases under wound disorders and increases the production of reactive oxygen species (ROS) in the site of wound (Ustuner et al. 2019). Applying antibacterial and antioxidant agents shortens the inflammatory phase, promotes the proliferative phase and accelerates the wound healing process (Bardaa et al. 2016; Vittorazzi et al. 2016; Daemi et al. 2019a, 2019b).\nZataria multiflora Boiss (Lamiaceae) (ZM) has antibacterial activity against Pseudomonas spp. (Barkhori-Mehni et al. 2017). Its antibacterial activity is attributed to its phenolic compounds that destroy the cell wall in bacteria (Barkhori-Mehni et al. 2017). It demonstrates antioxidant properties due to its compounds such as thymol and carvacrol (Mojaddar Langroodi et al. 2019). Seemingly, ZM can accelerate the wound healing process due to its antibacterial and antioxidant features, but no studies have been conducted yet to evaluate the Zataria multiflora essential oil (ZME) on the wound healing process. 
Therefore, this study was conducted to evaluate the effect of an ointment prepared from ZME on the healing process of infected wound model by S. aureus and P. aeruginosa by assessing wound contraction, total bacterial count, histopathological and immunohistochemical, antioxidant activity, and qRT-PCR analyses.", " Materials Zataria multiflora essential oil was purchased from Barij Essence Company (Kashan-Iran) and approved by the same herbarium company (No.1203). Commercial base ointment and mupirocin were prepared from Pars Daruo. Ltd., Tehran, Iran. The commercial base ointment contained 90% soft paraffin, 5% hard paraffin and 5% lanolin. Rodent feeds were prepared from Javaneh Khorasan Company (Khorasan, Iran).\nZataria multiflora essential oil was purchased from Barij Essence Company (Kashan-Iran) and approved by the same herbarium company (No.1203). Commercial base ointment and mupirocin were prepared from Pars Daruo. Ltd., Tehran, Iran. The commercial base ointment contained 90% soft paraffin, 5% hard paraffin and 5% lanolin. Rodent feeds were prepared from Javaneh Khorasan Company (Khorasan, Iran).\n Essential oil analysis and identification Gas chromatography (GC) analysis was employed in order to identify the compounds. The components of the essential oil were recognized using calculation of their retention indices under temperature-programmed conditions for n-alkanes (C6–C24) and the oil on DB-5 column.\nGas chromatography (GC) analysis was employed in order to identify the compounds. The components of the essential oil were recognized using calculation of their retention indices under temperature-programmed conditions for n-alkanes (C6–C24) and the oil on DB-5 column.\n Animals Seventy-two BALB/c mice weighing 25 ± 3 g were prepared. The mice had free access to water and pelleted feed for rodents (Javaneh Khorasan Company, Khorasan, Iran). The mice were kept in an ambient temperature of 23 ± 3 °C, at a constant air humidity, and a natural day–night cycle. This study was approved by the Animal Research Committee of the Urmia Islamic Azad University with ethical No. IAU-UB 11044.\nSeventy-two BALB/c mice weighing 25 ± 3 g were prepared. The mice had free access to water and pelleted feed for rodents (Javaneh Khorasan Company, Khorasan, Iran). The mice were kept in an ambient temperature of 23 ± 3 °C, at a constant air humidity, and a natural day–night cycle. This study was approved by the Animal Research Committee of the Urmia Islamic Azad University with ethical No. IAU-UB 11044.\n Induction of wound To induce the wound, the mice were primarily anesthetized by intraperitoneal administration route using 50 mg/kg ketamine and 5 mg/kg xylazine. Following the induction of anaesthesia, dorsal region of each mouse was shaven for surgical actions. A circular wound model was induced in a size of 7 mm using a surgical biopsy punch and per wound was inoculated by an aliquot of 5 × 107 suspension containing S. aureus (ATCC 26313) and P. aeruginosa (ATCC 27853) in 50 μL phosphate-buffered saline (Farahpour et al. 2020; Khezri et al. 2019). Subsequently, animals were divided into four groups (n = 18), as (group I) negative control group (NCG) that was administered commercial base ointment, (group II) positive control group that were treated with Mupirocin® (MG), and (groups III and IV) treated groups with therapeutic ointments containing 2 g and 4 g ZME mixed in NCG (2% and 4% ZMEO), respectively. 
Concerning the colonization of the bacteria, 24 h after the induction of the wounds, ointments were applied on wounds, once a day. In addition, according to the tissue sampling on days 3, 7, and 14 after the wounding, the animals in each group were divided into three subgroups (n = 6).\nTo induce the wound, the mice were primarily anesthetized by intraperitoneal administration route using 50 mg/kg ketamine and 5 mg/kg xylazine. Following the induction of anaesthesia, dorsal region of each mouse was shaven for surgical actions. A circular wound model was induced in a size of 7 mm using a surgical biopsy punch and per wound was inoculated by an aliquot of 5 × 107 suspension containing S. aureus (ATCC 26313) and P. aeruginosa (ATCC 27853) in 50 μL phosphate-buffered saline (Farahpour et al. 2020; Khezri et al. 2019). Subsequently, animals were divided into four groups (n = 18), as (group I) negative control group (NCG) that was administered commercial base ointment, (group II) positive control group that were treated with Mupirocin® (MG), and (groups III and IV) treated groups with therapeutic ointments containing 2 g and 4 g ZME mixed in NCG (2% and 4% ZMEO), respectively. Concerning the colonization of the bacteria, 24 h after the induction of the wounds, ointments were applied on wounds, once a day. In addition, according to the tissue sampling on days 3, 7, and 14 after the wounding, the animals in each group were divided into three subgroups (n = 6).\n Wound area The rate of wound closure was estimated as reported previously (Nejati et al. 2015; Khezri et al. 2019). In order to calculate the percentage of wound closure rate, a transparent paper was placed on it and it was calculated based on the following formula:\nPercentage of wound closure = [(Wound area on Day 0 − Wound area on Day X)/Wound area on Day 0] × 100.\nThe rate of wound closure was estimated as reported previously (Nejati et al. 2015; Khezri et al. 2019). In order to calculate the percentage of wound closure rate, a transparent paper was placed on it and it was calculated based on the following formula:\nPercentage of wound closure = [(Wound area on Day 0 − Wound area on Day X)/Wound area on Day 0] × 100.\n The investigation of the bacteriological count in the wound area To calculate the bacteriological count, the granulated tissues were excised and 0.1 g of the sample was crushed, minced, and homogenized in a sterile mortar containing 10 mL of sterile saline. It was the diluted in tubes containing 9 mL of sterile saline, was cultured on plate count agar (Merck KGaA, Darmstadt, Germany), incubated at 37 °C for 48 h, calculated as CFU/g of granulation tissue (Farahpour et al. 2020; Khezri et al. 2019).\nTo calculate the bacteriological count, the granulated tissues were excised and 0.1 g of the sample was crushed, minced, and homogenized in a sterile mortar containing 10 mL of sterile saline. It was the diluted in tubes containing 9 mL of sterile saline, was cultured on plate count agar (Merck KGaA, Darmstadt, Germany), incubated at 37 °C for 48 h, calculated as CFU/g of granulation tissue (Farahpour et al. 2020; Khezri et al. 2019).\n Histological analysis Concerning the evaluation of the histological parameters, the mice were euthanized by a special CO2 device and the granulation tissue were excised in along to 1 to 2 mm from surrounding normal skin. 
The samples were fixed in neutral-buffered formalin 10%, routinely processed, and paraffin wax was embedded, sectioned at 5 µm, and stained with Masson’s trichrome and then examined under light microscopy (Olympus CX31RBSF attached cameraman) as reported by Farahpour et al. (2018). Cellular infiltration, edoema and collagen deposition were assessed. An image pro-insight software was utilised for evaluating the collagen deposition. Morphometric lens (Olympus, Germany) was used for assessing the epithelial thickness. edoema was assessed as a 5-score as reported previously (Nejati et al. 2015).\nConcerning the evaluation of the histological parameters, the mice were euthanized by a special CO2 device and the granulation tissue were excised in along to 1 to 2 mm from surrounding normal skin. The samples were fixed in neutral-buffered formalin 10%, routinely processed, and paraffin wax was embedded, sectioned at 5 µm, and stained with Masson’s trichrome and then examined under light microscopy (Olympus CX31RBSF attached cameraman) as reported by Farahpour et al. (2018). Cellular infiltration, edoema and collagen deposition were assessed. An image pro-insight software was utilised for evaluating the collagen deposition. Morphometric lens (Olympus, Germany) was used for assessing the epithelial thickness. edoema was assessed as a 5-score as reported previously (Nejati et al. 2015).\n Immunohistochemical staining (IHC) for angiogenesis (CD31) To assess the IHC, it was conducted as reported by Farahpour et al. (2018). The IHC was conducted based on manufacturer’s protocol (Biocare, Yorba Linda, CA). The tissue sections were rinsed gently in the washing buffer and placed in a buffer bath. A DAB chromogen was employed for assessing the tissue sections and then incubated for 5 min. The sections were then dipped in weak ammonia (0.037 M/L) for 10 times, rinsed with distilled water and cover slipped. Brown stains were considered as positive immunohistochemical.\nTo assess the IHC, it was conducted as reported by Farahpour et al. (2018). The IHC was conducted based on manufacturer’s protocol (Biocare, Yorba Linda, CA). The tissue sections were rinsed gently in the washing buffer and placed in a buffer bath. A DAB chromogen was employed for assessing the tissue sections and then incubated for 5 min. The sections were then dipped in weak ammonia (0.037 M/L) for 10 times, rinsed with distilled water and cover slipped. Brown stains were considered as positive immunohistochemical.\n Fluorescent staining for collagen To wash the slides, acetic acid 1% was used for several minutes. The slides were stained using acridine-orange. Phosphate buffer was utilized to remove staining. The slides were the washed in distilled water, mounted with buffer drop and analyzed with a fluorescent microscope. The bunds with red and yellowish red colour were considered as collagen bunds (Farahpour et al. 2018).\nTo wash the slides, acetic acid 1% was used for several minutes. The slides were stained using acridine-orange. Phosphate buffer was utilized to remove staining. The slides were the washed in distilled water, mounted with buffer drop and analyzed with a fluorescent microscope. The bunds with red and yellowish red colour were considered as collagen bunds (Farahpour et al. 2018).\n RNA isolation and cDNA synthesis The wound tissues were isolated as previously reported and RNA was extracted applying a standard TRIZOL procedure as reported by Farahpour et al. 
(2018), and assessed by spectrophotometer (260 nm and 260/280 = 1.8–2.0). To prepare the cDNA, it was put in a reaction being mixed with 1 µg RNA, oligo (dT) primer (1 µL), 5 × reaction buffer (4 µL), RNAse inhibitor (1 µL), 10 mM dNTP mix (2 µL) and M-MuLV Reverse Transcriptase (1 µL) as described by producer protocol (Fermentas, GmbH, Germany). Time and temperature were as reported by Farahpour et al. (2018). The used primers included IL-1β, forward (5′-AAC AAA CCC TGC AGT GGT TCG-3′) and reverse (5′-AGCTGCTTCAGACACTTGCAC-3′); TGF-β, forward (5-CCAAACGCCGAAGACTTATCC-3′) and reverse (5′-CTTATTACCGATGGGATGGGATAGCCC-3′); IL-10, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), TNF-α, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), FGF-2, forward (5′-GGAACCCGGCGGGACACGGAC-3′) and reverse (5′-CCGCTGTGGCAGCTCTTG GGG-3′); VEGF, forward (5′-GCTCCGTAGTAGCCGTGGTCT-3′) and reverse (5′-GGAACCCGGCGGGACACGGAC-3′), and IGF-1, forward (5′-TAGGTGGTTGATGAATGGT-3′) and reverse (5′-GAAAGGGCAGGGCTAAT-3′).\nThe wound tissues were isolated as previously reported and RNA was extracted applying a standard TRIZOL procedure as reported by Farahpour et al. (2018), and assessed by spectrophotometer (260 nm and 260/280 = 1.8–2.0). To prepare the cDNA, it was put in a reaction being mixed with 1 µg RNA, oligo (dT) primer (1 µL), 5 × reaction buffer (4 µL), RNAse inhibitor (1 µL), 10 mM dNTP mix (2 µL) and M-MuLV Reverse Transcriptase (1 µL) as described by producer protocol (Fermentas, GmbH, Germany). Time and temperature were as reported by Farahpour et al. (2018). The used primers included IL-1β, forward (5′-AAC AAA CCC TGC AGT GGT TCG-3′) and reverse (5′-AGCTGCTTCAGACACTTGCAC-3′); TGF-β, forward (5-CCAAACGCCGAAGACTTATCC-3′) and reverse (5′-CTTATTACCGATGGGATGGGATAGCCC-3′); IL-10, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), TNF-α, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), FGF-2, forward (5′-GGAACCCGGCGGGACACGGAC-3′) and reverse (5′-CCGCTGTGGCAGCTCTTG GGG-3′); VEGF, forward (5′-GCTCCGTAGTAGCCGTGGTCT-3′) and reverse (5′-GGAACCCGGCGGGACACGGAC-3′), and IGF-1, forward (5′-TAGGTGGTTGATGAATGGT-3′) and reverse (5′-GAAAGGGCAGGGCTAAT-3′).\n Antioxidant capacity To assess the antioxidant capacity, the wound granulation tissue was homogenized in ice-cold KCl (150 mM) and the mixture was then centrifuged at 3000 g for 10 min. The supernatant was used to evaluate the total antioxidant capacity (TAC), malondialdehyde (MDA) and total tissue thiol molecules (TTM) content. The Lowry method was employed for evaluating the protein content of the samples (Daemi et al. 2019a) and MDA content was used for assessing the lipid peroxidation rate. To assess the TTM, the collected granulation tissue sample was homogenized and the supernatant was added to Tris-EDTA as reported by Daemi et al. (2019a).\nTo assess the antioxidant capacity, the wound granulation tissue was homogenized in ice-cold KCl (150 mM) and the mixture was then centrifuged at 3000 g for 10 min. The supernatant was used to evaluate the total antioxidant capacity (TAC), malondialdehyde (MDA) and total tissue thiol molecules (TTM) content. The Lowry method was employed for evaluating the protein content of the samples (Daemi et al. 2019a) and MDA content was used for assessing the lipid peroxidation rate. 
To assess the TTM, the collected granulation tissue sample was homogenized and the supernatant was added to Tris-EDTA as reported by Daemi et al. (2019a).\n Statistical analysis The results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance.\nThe results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance.", "Zataria multiflora essential oil was purchased from Barij Essence Company (Kashan-Iran) and approved by the same herbarium company (No.1203). Commercial base ointment and mupirocin were prepared from Pars Daruo. Ltd., Tehran, Iran. The commercial base ointment contained 90% soft paraffin, 5% hard paraffin and 5% lanolin. Rodent feeds were prepared from Javaneh Khorasan Company (Khorasan, Iran).", "Gas chromatography (GC) analysis was employed in order to identify the compounds. The components of the essential oil were recognized using calculation of their retention indices under temperature-programmed conditions for n-alkanes (C6–C24) and the oil on DB-5 column.", "Seventy-two BALB/c mice weighing 25 ± 3 g were prepared. The mice had free access to water and pelleted feed for rodents (Javaneh Khorasan Company, Khorasan, Iran). The mice were kept in an ambient temperature of 23 ± 3 °C, at a constant air humidity, and a natural day–night cycle. This study was approved by the Animal Research Committee of the Urmia Islamic Azad University with ethical No. IAU-UB 11044.", "To induce the wound, the mice were primarily anesthetized by intraperitoneal administration route using 50 mg/kg ketamine and 5 mg/kg xylazine. Following the induction of anaesthesia, dorsal region of each mouse was shaven for surgical actions. A circular wound model was induced in a size of 7 mm using a surgical biopsy punch and per wound was inoculated by an aliquot of 5 × 107 suspension containing S. aureus (ATCC 26313) and P. aeruginosa (ATCC 27853) in 50 μL phosphate-buffered saline (Farahpour et al. 2020; Khezri et al. 2019). Subsequently, animals were divided into four groups (n = 18), as (group I) negative control group (NCG) that was administered commercial base ointment, (group II) positive control group that were treated with Mupirocin® (MG), and (groups III and IV) treated groups with therapeutic ointments containing 2 g and 4 g ZME mixed in NCG (2% and 4% ZMEO), respectively. Concerning the colonization of the bacteria, 24 h after the induction of the wounds, ointments were applied on wounds, once a day. In addition, according to the tissue sampling on days 3, 7, and 14 after the wounding, the animals in each group were divided into three subgroups (n = 6).", "The rate of wound closure was estimated as reported previously (Nejati et al. 2015; Khezri et al. 2019). 
In order to calculate the percentage of wound closure rate, a transparent paper was placed on it and it was calculated based on the following formula:\nPercentage of wound closure = [(Wound area on Day 0 − Wound area on Day X)/Wound area on Day 0] × 100.", "To calculate the bacteriological count, the granulated tissues were excised and 0.1 g of the sample was crushed, minced, and homogenized in a sterile mortar containing 10 mL of sterile saline. It was the diluted in tubes containing 9 mL of sterile saline, was cultured on plate count agar (Merck KGaA, Darmstadt, Germany), incubated at 37 °C for 48 h, calculated as CFU/g of granulation tissue (Farahpour et al. 2020; Khezri et al. 2019).", "Concerning the evaluation of the histological parameters, the mice were euthanized by a special CO2 device and the granulation tissue were excised in along to 1 to 2 mm from surrounding normal skin. The samples were fixed in neutral-buffered formalin 10%, routinely processed, and paraffin wax was embedded, sectioned at 5 µm, and stained with Masson’s trichrome and then examined under light microscopy (Olympus CX31RBSF attached cameraman) as reported by Farahpour et al. (2018). Cellular infiltration, edoema and collagen deposition were assessed. An image pro-insight software was utilised for evaluating the collagen deposition. Morphometric lens (Olympus, Germany) was used for assessing the epithelial thickness. edoema was assessed as a 5-score as reported previously (Nejati et al. 2015).", "To assess the IHC, it was conducted as reported by Farahpour et al. (2018). The IHC was conducted based on manufacturer’s protocol (Biocare, Yorba Linda, CA). The tissue sections were rinsed gently in the washing buffer and placed in a buffer bath. A DAB chromogen was employed for assessing the tissue sections and then incubated for 5 min. The sections were then dipped in weak ammonia (0.037 M/L) for 10 times, rinsed with distilled water and cover slipped. Brown stains were considered as positive immunohistochemical.", "To wash the slides, acetic acid 1% was used for several minutes. The slides were stained using acridine-orange. Phosphate buffer was utilized to remove staining. The slides were the washed in distilled water, mounted with buffer drop and analyzed with a fluorescent microscope. The bunds with red and yellowish red colour were considered as collagen bunds (Farahpour et al. 2018).", "The wound tissues were isolated as previously reported and RNA was extracted applying a standard TRIZOL procedure as reported by Farahpour et al. (2018), and assessed by spectrophotometer (260 nm and 260/280 = 1.8–2.0). To prepare the cDNA, it was put in a reaction being mixed with 1 µg RNA, oligo (dT) primer (1 µL), 5 × reaction buffer (4 µL), RNAse inhibitor (1 µL), 10 mM dNTP mix (2 µL) and M-MuLV Reverse Transcriptase (1 µL) as described by producer protocol (Fermentas, GmbH, Germany). Time and temperature were as reported by Farahpour et al. (2018). 
The used primers included IL-1β, forward (5′-AAC AAA CCC TGC AGT GGT TCG-3′) and reverse (5′-AGCTGCTTCAGACACTTGCAC-3′); TGF-β, forward (5-CCAAACGCCGAAGACTTATCC-3′) and reverse (5′-CTTATTACCGATGGGATGGGATAGCCC-3′); IL-10, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), TNF-α, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), FGF-2, forward (5′-GGAACCCGGCGGGACACGGAC-3′) and reverse (5′-CCGCTGTGGCAGCTCTTG GGG-3′); VEGF, forward (5′-GCTCCGTAGTAGCCGTGGTCT-3′) and reverse (5′-GGAACCCGGCGGGACACGGAC-3′), and IGF-1, forward (5′-TAGGTGGTTGATGAATGGT-3′) and reverse (5′-GAAAGGGCAGGGCTAAT-3′).", "To assess the antioxidant capacity, the wound granulation tissue was homogenized in ice-cold KCl (150 mM) and the mixture was then centrifuged at 3000 g for 10 min. The supernatant was used to evaluate the total antioxidant capacity (TAC), malondialdehyde (MDA) and total tissue thiol molecules (TTM) content. The Lowry method was employed for evaluating the protein content of the samples (Daemi et al. 2019a) and MDA content was used for assessing the lipid peroxidation rate. To assess the TTM, the collected granulation tissue sample was homogenized and the supernatant was added to Tris-EDTA as reported by Daemi et al. (2019a).", "The results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance.", " Composition of the Zataria multiflora essential oil The analysis of Zataria multiflora essential oil with GC-FID and GC-mass spectrometry (MS) identified 29 compounds which accounted for 99.6% of the total essential oil composition (Table 1). It contained thymol (52.90%), p-cymene (9.10), γ-terpinene (8.10%) and carvacrol (6.80%) as major constituents (Figure 1).\nGC-MS chromatogram of the essential oil of Zataria multiflora.\nChemical constituents of Zataria multiflora essential oil.\nRI-Cal: retention indices calculated based on C6–C24\nn-alkenes from a DB-5 column.\nRI-Cal: retention indices retrieved from literature (Adams 2007).\nBold values show major compounds in the essential oil.\nThe analysis of Zataria multiflora essential oil with GC-FID and GC-mass spectrometry (MS) identified 29 compounds which accounted for 99.6% of the total essential oil composition (Table 1). It contained thymol (52.90%), p-cymene (9.10), γ-terpinene (8.10%) and carvacrol (6.80%) as major constituents (Figure 1).\nGC-MS chromatogram of the essential oil of Zataria multiflora.\nChemical constituents of Zataria multiflora essential oil.\nRI-Cal: retention indices calculated based on C6–C24\nn-alkenes from a DB-5 column.\nRI-Cal: retention indices retrieved from literature (Adams 2007).\nBold values show major compounds in the essential oil.\n Wound area The findings for the effects of ZMEO on the percentage of wound contraction are shown in Figure 2. The results implied that topical administration of therapeutic ointments significantly (p < 0.05) increased wound contraction rate on days 7 and 14 compared to the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed that a higher wound contraction rate was observed in the 4% ZMEO-treated group compared to the 2% ZMEO-treated and MG groups (p < 0.05).\nThe effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) in different days. 
Superscripts (a–d) indicate significant differences in same day at p < 0.05.\nThe findings for the effects of ZMEO on the percentage of wound contraction are shown in Figure 2. The results implied that topical administration of therapeutic ointments significantly (p < 0.05) increased wound contraction rate on days 7 and 14 compared to the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed that a higher wound contraction rate was observed in the 4% ZMEO-treated group compared to the 2% ZMEO-treated and MG groups (p < 0.05).\nThe effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) in different days. Superscripts (a–d) indicate significant differences in same day at p < 0.05.\n The bacteriological count The results concerning the effects of ZMEO on tissue bacteria counts are represented in Figure 2(C). The results showed that topical administration of therapeutic ointments significantly decreased bacterial colonization in all the days compared to the NCG group (p < 0.05). The comparison between treatment groups demonstrated that a higher level of ZMEO (4%) significantly decreased the total bacterial count compared to that of a lower level of it (2%) and MG groups (p < 0.05).\nThe results concerning the effects of ZMEO on tissue bacteria counts are represented in Figure 2(C). The results showed that topical administration of therapeutic ointments significantly decreased bacterial colonization in all the days compared to the NCG group (p < 0.05). The comparison between treatment groups demonstrated that a higher level of ZMEO (4%) significantly decreased the total bacterial count compared to that of a lower level of it (2%) and MG groups (p < 0.05).\n Histopathological parameters The results for histopathological parameters are represented in Table 2. The results showed that edoema score was significantly higher in the NCG group. It was observed to be significantly lower in the animals treated with the 4% ZMEO-treated group and also in 2% ZMEO and MG-treated groups. The findings implied that the lowest edoema was observed in 4% ZMEO in day 14; meanwhile, there was no observed edoema in the 4% ZMEO group on day 14. The results showed that immune cells were significantly lower in ZMEO groups in day 3 compared to other groups (p < 0.05), yet it decreased on days 7 and 14 in the ZMEO groups compared to other groups (p < 0.05). It means that the application of the ZMEO decreases immune cells in days 7 and 14, and increased it on day 3. Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups compared to the NCG group and the highest level was observed in level of the 4%-ZMEO group. The results for collagen condensation (Figure 4) and re-epithelialization (Figure 5) revealed that parameters were significantly higher in the ZMEO groups, which confirms the results.\nImmunohistochemical staining for angiogenesis:; (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See elevated angiogenesis in ZMEO-treated group (arrows) on day 7 after wound induction. CD 31 staining, 400×.\nCross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. Note well-formed collagen deposition in cross sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased on day 14 after injury (second row). The third row represents the software analyze for collagen intensity. 
Wound area
The effects of ZMEO on the percentage of wound contraction are shown in Figure 2. Topical administration of the therapeutic ointments significantly (p < 0.05) increased the wound contraction rate on days 7 and 14 compared to the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed a higher wound contraction rate in the 4% ZMEO-treated group than in the 2% ZMEO-treated and MG groups (p < 0.05).
Figure 2. The effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) on different days. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.

The bacteriological count
The effects of ZMEO on tissue bacterial counts are presented in Figure 2(C). Topical administration of the therapeutic ointments significantly decreased bacterial colonization on all days compared to the NCG group (p < 0.05). The comparison between treatment groups demonstrated that the higher concentration of ZMEO (4%) decreased the total bacterial count significantly more than the lower concentration (2%) and the MG group (p < 0.05).

Histopathological parameters
The results for the histopathological parameters are presented in Table 2. The oedema score was significantly higher in the NCG group; it was significantly lower in the 4% ZMEO-treated animals and also in the 2% ZMEO- and MG-treated groups. The lowest oedema scores were observed in the 4% ZMEO group, in which no oedema was observed on day 14. Immune cell infiltration was significantly lower in the ZMEO groups on day 3 compared with the other groups (p < 0.05) and decreased further in the ZMEO groups on days 7 and 14 (p < 0.05). Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups compared to the NCG group, with the highest levels observed in the 4% ZMEO group. Collagen condensation (Figure 4) and re-epithelialization (Figure 5) were also significantly higher in the ZMEO groups, consistent with these findings.
Figure 3. Immunohistochemical staining for angiogenesis: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the elevated angiogenesis in the ZMEO-treated groups (arrows) on day 7 after wound induction. CD31 staining, 400×.
Figure 4. Cross sections from the wound area: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the well-formed collagen deposition in cross sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased on day 14 after injury (second row). The third row shows the software analysis of collagen intensity. The cross sections from ZMEO-treated animals exhibited dense collagen deposition compared with the NCG and MG groups. Masson's trichrome staining and fluorescent staining for collagen, 400× magnification.
Figure 5. Cross sections from the wound area: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the advanced re-epithelialization in ZMEO-treated animals, which began on day 7 after wound induction, whereas the cross sections from the NCG and MG groups do not show epithelialization. Note the well-organized dermis and complete epithelialization with well-formed papillae in ZMEO-treated animals in comparison to the NCG and MG groups. Masson's trichrome staining, 100×.
Table 2. The effects of ZMEO on histopathological parameters on different days. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.
Molecular analyses
The gene expression results are presented in Figure 6. The expression of IL-1β and TNF-α was significantly (p < 0.05) lower in the treated groups on all sampling days compared to the NCG group, with the greatest decrease observed in the 4% ZMEO-treated animals. The expression of IGF-1 and VEGF was significantly (p < 0.05) higher in all treated groups on day 7 after wounding, but the levels decreased significantly on day 14. IL-10 expression increased on day 7 and decreased on day 14 compared to the NCG group (p < 0.05). The expression of TGF-β and FGF-2 did not differ significantly (p > 0.05) on day 3, but increased significantly (p < 0.05) in all treated groups, particularly in the 4% ZMEO-treated animals, on day 7; on day 14, TGF-β and FGF-2 expression was significantly (p < 0.05) reduced compared to the NCG group.
Figure 6. The effects of topical application of ZMEO on gene expression. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.

Antioxidant status
The antioxidant status of the different groups is presented in Figure 7. Treatment with 2% and 4% ZMEO significantly (p < 0.05) increased TAC and TTM levels compared to the NCG group on all days after wounding, with the highest increase observed in the 4% ZMEO-treated animals. The MDA level decreased significantly (p < 0.05) in all treated groups compared to the NCG group on all days after wounding, with the greatest decrease observed in the 4% ZMEO-treated group.
Figure 7. The effects of topical application of ZMEO on antioxidant properties. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.
Discussion
Wounds infected by pathogenic bacteria such as S. aureus and P. aeruginosa pose serious challenges with regard to their management and treatment (Manzuoerh et al. 2019). Different agents are employed for the treatment of wounds, and the active compounds of essential oils can play pivotal roles in this respect. The findings indicated that thymol (52.90%), p-cymene (9.10%), γ-terpinene (8.10%) and carvacrol (6.80%) were the major constituents of the essential oil. Similar to our results, previous studies reported that the major compounds of Z. multiflora essential oil were thymol, carvacrol and p-cymene (Aida et al. 2015); thymol, γ-terpinene, p-cymene and carvacrol (Saei-Dehkordi et al. 2010); carvacrol (Misaghi and Basti 2007); and thymol, carvacrol and p-cymene (Sharififar et al. 2007). The results showed that ZMEO accelerated wound healing compared to the NCG group and the synthetic agent mupirocin. Below, the results are discussed step by step in relation to the underlying mechanisms and active compounds.
Inflammation is the first step of the wound healing process, in which different factors and genes are involved. Inflammation is triggered when living tissue is injured by live organisms such as bacteria, by physical injury, or by a faulty immune response. Inflammation removes foreign substances in order to improve the wound healing process (Garrett et al. 2010) and recruits macrophages, neutrophils and other immune cells to initiate the inflammatory response (Oguntibeju 2018). IL-1β and TNF-α are commonly produced by immune cells such as macrophages and mast cells (Thacker et al. 2007). The results showed that the topical application of ZMEO decreased immune cell infiltration and the expression of IL-1β and TNF-α within the first 3 days, implying a positive relationship between the number of immune cells and the expression of IL-1β and TNF-α. Increased inflammation delays the wound healing process. The findings revealed that topical administration of ZMEO decreased inflammation on days 7 and 14 and acted as an antibacterial and anti-inflammatory agent. The antibacterial and anti-inflammatory activity of ZMEO is attributed to its compounds; the essential oil contains thymol and carvacrol, which have antibacterial and anti-inflammatory properties. It was reported that the administration of carvacrol significantly reduced TNF-α levels in pleural lavage (Guimarães et al. 2012). The anti-inflammatory activity of thymol involves prevention of the phosphorylation of extracellular signal-regulated protein kinase, c-Jun N-terminal kinase and nuclear factor-κB (Liang et al. 2014). Carvacrol prevented leukocyte migration, decreased oedema and showed anti-inflammatory effects (Fachini-Queiroz et al. 2012). The results indicated that immune cell numbers and the expression of IL-1β and TNF-α both decreased in the ZMEO groups. The findings also showed that the total bacterial count decreased significantly in the ZMEO groups compared to the NCG group. Mupirocin is used as a bactericidal agent for the treatment of infected wounds; the results indicate that ZMEO, particularly at the higher concentration, has better antibacterial activity than mupirocin. A higher total bacterial count increases inflammation and delays the wound healing process, which implies that topical administration of ZMEO decreases inflammation through its antibacterial activity. It was reported that the antibacterial activity of Zataria multiflora is attributable to its compounds (Shafiee et al. 1999; Barkhori-Mehni et al. 2017). In essence, ZMEO decreased inflammation and inflammation-associated parameters; higher levels of ZMEO provide higher doses of active compounds that act as anti-inflammatory factors, and decreased inflammation leads the wound healing process towards the proliferative phase.
Our findings implied that the topical application of ZMEO improved the proliferative phase. The results showed that the gene expression of VEGF, FGF-2 and IGF-1 increased in the ZMEO groups.
IGF-1 increases the transport of glucose during the wound healing process and improves the proliferative phase (Daemi et al. 2019b). VEGF induces angiogenesis and stimulates cell migration and proliferation (Norton and Popel 2016; Farahpour et al. 2020; Gharaboghaz et al. 2020). FGF-2 is a protein that participates in the wound healing process (Souto et al. 2020) and, together with other growth factors such as VEGF, is reported to increase angiogenesis and thereby indirectly support the supply of nutrients, oxygen and energy to cells (Harding et al. 2002). To date, studies had not investigated the effect of ZMEO on the gene expression of VEGF, FGF-2 and IGF-1. Consistent with the increased VEGF expression, which promotes angiogenesis, vessel numbers were significantly higher in the ZMEO groups. Other genes involved were TGF-β and IL-10, whose expression increased on day 7. IL-10 reduces the production of pro-inflammatory cytokines, whereas TGF-β drives the transition to the proliferative phase by improving the proliferation and differentiation of fibroblasts, collagen production and wound contraction (Khezri et al. 2020). Both genes therefore appear to play important roles in decreasing inflammation and in the passage to the proliferative phase. The staining results indicated improved collagen deposition; the biosynthesis of collagen during the proliferation stage has pivotal roles in dermal maturation, in which several factors are involved (Yakaew et al. 2016; Daemi et al. 2019a). The results also showed that the topical application of ZMEO increased fibroblast numbers, which is likely due to its effect on IGF-1; IGF-1 has been reported to increase fibroblast cells in vitro (Lisa et al. 2011). Improving cellular proliferation and differentiation is essential for shortening the healing time (Oryan et al. 2015; Karimzadeh and Farahpour 2017). The proliferative effect of ZMEO is partly owing to its antioxidant activity: the topical application of ZMEO improved antioxidant activity, and previous studies have reported antioxidant activity of Zataria multiflora essential oil (Fatemi et al. 2012; Karimian et al. 2012; Kavoosi and Teixeira da Silva 2012). Oxidative stress in the wound area increases damage to proteins, nucleotides and lipids (Cano Sanchez et al. 2018; Farahpour et al. 2020; Gharaboghaz et al. 2020) and decreases antioxidant activity; such changes in the antioxidant profile are reflected in increased MDA content (Sharma et al. 2012; Bardaa et al. 2016; Vittorazzi et al. 2016).

Conclusions
The topical application of ZMEO decreased inflammation and the total bacterial count, which moves the wound healing process towards the proliferative phase. It increased the expression of proliferative genes, helped to resolve the inflammatory phase and accelerated the proliferative phase, and the antioxidant activity of ZMEO further accelerated healing. All of these factors hasten the wound healing process, as reflected in the wound area measurements. Accordingly, ZMEO, alone and/or in combination with chemical agents, could be suggested for the treatment of wounds.
[ "intro", "materials", null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Infected wound healing", "inflammatory cytokines", "antimicrobial properties", "thymol" ]
Introduction
Wound infections are important challenges that cause patient suffering and economic losses (Vittorazzi et al. 2016; Daemi et al. 2019a; Farghali et al. 2019). Skin is a regenerative organ that acts as a barrier between the body and the external environment (Hassan et al. 2014). A wound is defined as a disturbance of the anatomic structure of the skin and of its functional integrity (Farghali et al. 2019). Skin prevents the penetration of bacteria and fungi, which are important causes of mortality and morbidity. Pseudomonas aeruginosa and Staphylococcus aureus are opportunistic bacteria that induce infection in patients (Cardona and Wilson 2015). These bacteria are commonly detected in both the superficial and the deepest regions of the wound bed (Serra et al. 2015), and colonization of S. aureus and P. aeruginosa at the wound site postpones the wound healing process (Farahpour et al. 2020; Khezri et al. 2020). The wound healing process comprises several phases, including coagulation, inflammation, epithelialization, granulation tissue formation and tissue remodelling (Daemi et al. 2019a, 2019b; Farahpour et al. 2020). The inflammatory phase occurs after the activation of inflammatory chemokines (Farahpour et al. 2020). Interleukin-1β (IL-1β) is synthesized 12–24 h after infliction of the wound, and its level returns to the basal level after the proliferative stage is completed (Fahey and Doyle 2019). Tumour necrosis factor-α (TNF-α) is synthesized mainly by macrophages and T lymphocytes, and its level increases under inflammatory and infectious conditions (Xue and Falcon 2019). Following the induction of inflammation, the proliferative phase occurs, in which many genes are involved. Insulin-like growth factor 1 (IGF-1) promotes keratinocyte production and fibroblast proliferation and improves re-epithelialization (Daemi et al. 2019b), and decreased endothelial insulin/IGF-1 signalling delays the wound healing process (Aghdam et al. 2012). Vascular endothelial growth factor (VEGF) induces angiogenesis and stimulates cell migration and proliferation (Farahpour et al. 2020). Fibroblast growth factor-2 (FGF-2) is a protein that participates in the wound healing process (Souto et al. 2020). Interleukin-10 (IL-10) reduces the production of pro-inflammatory cytokines, whereas transforming growth factor-β (TGF-β) promotes the proliferative phase by improving the proliferation and differentiation of fibroblasts, collagen production and wound contraction (Khezri et al. 2020). On the other hand, oxidative stress increases in wound disorders and raises the production of reactive oxygen species (ROS) at the wound site (Ustuner et al. 2019). Applying antibacterial and antioxidant agents shortens the inflammatory phase, promotes the proliferative phase and accelerates the wound healing process (Bardaa et al. 2016; Vittorazzi et al. 2016; Daemi et al. 2019a, 2019b). Zataria multiflora Boiss. (Lamiaceae) (ZM) has antibacterial activity against Pseudomonas spp. (Barkhori-Mehni et al. 2017); this activity is attributed to its phenolic compounds, which destroy the bacterial cell wall (Barkhori-Mehni et al. 2017). It also demonstrates antioxidant properties owing to compounds such as thymol and carvacrol (Mojaddar Langroodi et al. 2019). ZM therefore seems able to accelerate the wound healing process through its antibacterial and antioxidant features, but no studies had yet evaluated Zataria multiflora essential oil (ZME) in the wound healing process.
Therefore, this study was conducted to evaluate the effect of an ointment prepared from ZME on the healing of a wound model infected with S. aureus and P. aeruginosa, by assessing wound contraction, total bacterial count, histopathological and immunohistochemical parameters, antioxidant activity and qRT-PCR analyses.

Materials and methods

Materials
Zataria multiflora essential oil was purchased from Barij Essence Company (Kashan, Iran) and approved by the same herbarium company (No. 1203). The commercial base ointment and mupirocin were obtained from Pars Daruo Ltd., Tehran, Iran. The commercial base ointment contained 90% soft paraffin, 5% hard paraffin and 5% lanolin. Rodent feed was obtained from Javaneh Khorasan Company (Khorasan, Iran).

Essential oil analysis and identification
Gas chromatography (GC) analysis was employed to identify the compounds. The components of the essential oil were identified by calculating their retention indices under temperature-programmed conditions for n-alkanes (C6–C24) and the oil on a DB-5 column.

Animals
Seventy-two BALB/c mice weighing 25 ± 3 g were used. The mice had free access to water and pelleted rodent feed (Javaneh Khorasan Company, Khorasan, Iran), and were kept at an ambient temperature of 23 ± 3 °C, at constant air humidity and under a natural day–night cycle. This study was approved by the Animal Research Committee of the Urmia Islamic Azad University (ethical No. IAU-UB 11044).

Induction of wound
To induce the wound, the mice were first anaesthetized by the intraperitoneal route using 50 mg/kg ketamine and 5 mg/kg xylazine. Following the induction of anaesthesia, the dorsal region of each mouse was shaved for surgery. A circular wound 7 mm in diameter was created using a surgical biopsy punch, and each wound was inoculated with an aliquot of a 5 × 10⁷ suspension containing S. aureus (ATCC 26313) and P. aeruginosa (ATCC 27853) in 50 μL of phosphate-buffered saline (Farahpour et al. 2020; Khezri et al. 2019). Subsequently, the animals were divided into four groups (n = 18): (group I) a negative control group (NCG) that received the commercial base ointment; (group II) a positive control group treated with Mupirocin® (MG); and (groups III and IV) groups treated with therapeutic ointments containing 2 g and 4 g ZME mixed into the base ointment (2% and 4% ZMEO), respectively. To allow colonization of the bacteria, the ointments were applied to the wounds once a day starting 24 h after wound induction. In addition, in accordance with the tissue sampling on days 3, 7 and 14 after wounding, the animals in each group were divided into three subgroups (n = 6).

Wound area
The rate of wound closure was estimated as reported previously (Nejati et al. 2015; Khezri et al. 2019). To calculate the percentage of wound closure, a transparent paper was placed over the wound and the area was traced; the closure rate was then calculated with the following formula: Percentage of wound closure = [(wound area on day 0 − wound area on day X)/wound area on day 0] × 100.
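The closure formula above is straightforward to apply programmatically; the short sketch below simply mirrors it for a series of measurement days. The wound areas are hypothetical values in mm² and are not data from this study.

# Percentage of wound closure = [(area on day 0 - area on day X) / area on day 0] * 100

def wound_closure_percent(area_day0: float, area_dayx: float) -> float:
    """Percentage of wound closure relative to the initial wound area."""
    return (area_day0 - area_dayx) / area_day0 * 100.0

# Hypothetical wound areas (mm^2) traced onto transparent paper on days 0, 3, 7 and 14.
areas = {0: 38.5, 3: 30.2, 7: 17.6, 14: 3.1}
for day, area in areas.items():
    print(f"day {day:>2}: {wound_closure_percent(areas[0], area):5.1f}% closed")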
The investigation of the bacteriological count in the wound area
To determine the bacteriological count, the granulation tissue was excised and 0.1 g of the sample was crushed, minced and homogenized in a sterile mortar containing 10 mL of sterile saline. The homogenate was then serially diluted in tubes containing 9 mL of sterile saline, cultured on plate count agar (Merck KGaA, Darmstadt, Germany) and incubated at 37 °C for 48 h, and the counts were expressed as CFU/g of granulation tissue (Farahpour et al. 2020; Khezri et al. 2019).
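For clarity, plate counts from such serial dilutions are converted to CFU per gram of tissue by correcting for the dilution factor, the plated volume and the tissue mass. The sketch below illustrates that arithmetic; only the 0.1 g of tissue homogenized in 10 mL of saline is taken from the protocol above, while the colony count, dilution and plated volume are hypothetical.

# Convert a plate colony count to CFU per gram of granulation tissue (illustrative).

def cfu_per_gram(colonies: int, dilution_factor: float, plated_volume_ml: float,
                 homogenate_volume_ml: float, tissue_mass_g: float) -> float:
    """CFU/g = (colonies / plated volume) * dilution factor * homogenate volume / tissue mass."""
    cfu_per_ml = colonies / plated_volume_ml * dilution_factor
    return cfu_per_ml * homogenate_volume_ml / tissue_mass_g

# Hypothetical example: 86 colonies on a plate spread with 0.1 mL of the 10^-3 dilution
# of 0.1 g tissue homogenized in 10 mL of sterile saline.
print(f"{cfu_per_gram(86, 1e3, 0.1, 10.0, 0.1):.2e} CFU/g")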
Histological analysis
For the evaluation of the histological parameters, the mice were euthanized with a dedicated CO2 device and the granulation tissue was excised together with 1–2 mm of the surrounding normal skin. The samples were fixed in 10% neutral-buffered formalin, routinely processed, embedded in paraffin wax, sectioned at 5 µm, stained with Masson's trichrome and examined under light microscopy (Olympus CX31RBSF with attached camera) as reported by Farahpour et al. (2018). Cellular infiltration, oedema and collagen deposition were assessed. Image Pro-Insight software was used to evaluate collagen deposition, and a morphometric lens (Olympus, Germany) was used to assess epithelial thickness. Oedema was scored on a 5-point scale as reported previously (Nejati et al. 2015).

Immunohistochemical staining (IHC) for angiogenesis (CD31)
IHC was conducted as reported by Farahpour et al. (2018), following the manufacturer's protocol (Biocare, Yorba Linda, CA). The tissue sections were rinsed gently in washing buffer and placed in a buffer bath. A DAB chromogen was applied to the tissue sections, which were then incubated for 5 min. The sections were dipped 10 times in weak ammonia (0.037 M/L), rinsed with distilled water and cover-slipped. Brown staining was considered immunohistochemically positive.

Fluorescent staining for collagen
The slides were washed with 1% acetic acid for several minutes and stained with acridine orange. Phosphate buffer was used to remove excess stain. The slides were then washed in distilled water, mounted with a drop of buffer and analyzed with a fluorescence microscope. Bands with a red or yellowish-red colour were considered collagen bands (Farahpour et al. 2018).
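The collagen-intensity measurements themselves were made with Image Pro-Insight; purely as an illustration of the underlying idea, the sketch below estimates the collagen-positive area fraction of a synthetic fluorescence image by simple intensity thresholding with NumPy. It is not the authors' workflow, and the threshold and image are hypothetical.

# Estimate the collagen-positive area fraction of an image by intensity thresholding.
# Illustrative stand-in only; a real analysis would use calibrated image-analysis software.
import numpy as np

def collagen_area_fraction(channel: np.ndarray, threshold: int = 128) -> float:
    """Fraction of pixels whose intensity exceeds the threshold (treated as collagen-positive)."""
    positive = channel > threshold
    return float(positive.sum()) / positive.size

# Synthetic 256 x 256 "image": dim background plus a brighter collagen-like band.
rng = np.random.default_rng(0)
image = rng.integers(0, 100, size=(256, 256))
image[100:160, :] += 120   # simulated collagen-dense region
print(f"collagen-positive area: {collagen_area_fraction(image) * 100:.1f}%")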
RNA isolation and cDNA synthesis
The wound tissues were isolated as previously reported and RNA was extracted using a standard TRIzol procedure as reported by Farahpour et al. (2018); RNA quality was assessed by spectrophotometry (260 nm and 260/280 = 1.8–2.0). To prepare the cDNA, 1 µg RNA was mixed with oligo(dT) primer (1 µL), 5× reaction buffer (4 µL), RNase inhibitor (1 µL), 10 mM dNTP mix (2 µL) and M-MuLV reverse transcriptase (1 µL) according to the manufacturer's protocol (Fermentas, GmbH, Germany). Times and temperatures were as reported by Farahpour et al. (2018). The primers used included IL-1β, forward (5′-AAC AAA CCC TGC AGT GGT TCG-3′) and reverse (5′-AGCTGCTTCAGACACTTGCAC-3′); TGF-β, forward (5′-CCAAACGCCGAAGACTTATCC-3′) and reverse (5′-CTTATTACCGATGGGATGGGATAGCCC-3′); IL-10, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′); TNF-α, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′); FGF-2, forward (5′-GGAACCCGGCGGGACACGGAC-3′) and reverse (5′-CCGCTGTGGCAGCTCTTG GGG-3′); VEGF, forward (5′-GCTCCGTAGTAGCCGTGGTCT-3′) and reverse (5′-GGAACCCGGCGGGACACGGAC-3′); and IGF-1, forward (5′-TAGGTGGTTGATGAATGGT-3′) and reverse (5′-GAAAGGGCAGGGCTAAT-3′).
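The paper does not spell out its qRT-PCR quantification formula; assuming the widely used 2^-ΔΔCt relative-quantification approach with a housekeeping gene as reference, fold changes could be computed as in the sketch below. The Ct values, the choice of GAPDH as the reference gene and the comparison shown are all hypothetical.

# Relative gene expression by the 2^-ΔΔCt method (assumed here; not stated in the paper).
# dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control); fold change = 2 ** -ddCt

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene in a treated sample relative to a control sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for IL-1beta with GAPDH as the reference gene.
print(f"IL-1beta fold change (4% ZMEO vs NCG): {fold_change(27.8, 18.1, 25.2, 18.0):.2f}")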
Statistical analysis The results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance. The results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance. Materials: Zataria multiflora essential oil was purchased from Barij Essence Company (Kashan-Iran) and approved by the same herbarium company (No.1203). Commercial base ointment and mupirocin were prepared from Pars Daruo. Ltd., Tehran, Iran. The commercial base ointment contained 90% soft paraffin, 5% hard paraffin and 5% lanolin. Rodent feeds were prepared from Javaneh Khorasan Company (Khorasan, Iran). Essential oil analysis and identification: Gas chromatography (GC) analysis was employed in order to identify the compounds. The components of the essential oil were recognized using calculation of their retention indices under temperature-programmed conditions for n-alkanes (C6–C24) and the oil on DB-5 column. Animals: Seventy-two BALB/c mice weighing 25 ± 3 g were prepared. The mice had free access to water and pelleted feed for rodents (Javaneh Khorasan Company, Khorasan, Iran). The mice were kept in an ambient temperature of 23 ± 3 °C, at a constant air humidity, and a natural day–night cycle. This study was approved by the Animal Research Committee of the Urmia Islamic Azad University with ethical No. IAU-UB 11044. Induction of wound: To induce the wound, the mice were primarily anesthetized by intraperitoneal administration route using 50 mg/kg ketamine and 5 mg/kg xylazine. Following the induction of anaesthesia, dorsal region of each mouse was shaven for surgical actions. A circular wound model was induced in a size of 7 mm using a surgical biopsy punch and per wound was inoculated by an aliquot of 5 × 107 suspension containing S. aureus (ATCC 26313) and P. aeruginosa (ATCC 27853) in 50 μL phosphate-buffered saline (Farahpour et al. 2020; Khezri et al. 2019). Subsequently, animals were divided into four groups (n = 18), as (group I) negative control group (NCG) that was administered commercial base ointment, (group II) positive control group that were treated with Mupirocin® (MG), and (groups III and IV) treated groups with therapeutic ointments containing 2 g and 4 g ZME mixed in NCG (2% and 4% ZMEO), respectively. Concerning the colonization of the bacteria, 24 h after the induction of the wounds, ointments were applied on wounds, once a day. In addition, according to the tissue sampling on days 3, 7, and 14 after the wounding, the animals in each group were divided into three subgroups (n = 6). Wound area: The rate of wound closure was estimated as reported previously (Nejati et al. 2015; Khezri et al. 2019). In order to calculate the percentage of wound closure rate, a transparent paper was placed on it and it was calculated based on the following formula: Percentage of wound closure = [(Wound area on Day 0 − Wound area on Day X)/Wound area on Day 0] × 100. 
The investigation of the bacteriological count in the wound area: To calculate the bacteriological count, the granulated tissues were excised and 0.1 g of the sample was crushed, minced, and homogenized in a sterile mortar containing 10 mL of sterile saline. It was the diluted in tubes containing 9 mL of sterile saline, was cultured on plate count agar (Merck KGaA, Darmstadt, Germany), incubated at 37 °C for 48 h, calculated as CFU/g of granulation tissue (Farahpour et al. 2020; Khezri et al. 2019). Histological analysis: Concerning the evaluation of the histological parameters, the mice were euthanized by a special CO2 device and the granulation tissue were excised in along to 1 to 2 mm from surrounding normal skin. The samples were fixed in neutral-buffered formalin 10%, routinely processed, and paraffin wax was embedded, sectioned at 5 µm, and stained with Masson’s trichrome and then examined under light microscopy (Olympus CX31RBSF attached cameraman) as reported by Farahpour et al. (2018). Cellular infiltration, edoema and collagen deposition were assessed. An image pro-insight software was utilised for evaluating the collagen deposition. Morphometric lens (Olympus, Germany) was used for assessing the epithelial thickness. edoema was assessed as a 5-score as reported previously (Nejati et al. 2015). Immunohistochemical staining (IHC) for angiogenesis (CD31): To assess the IHC, it was conducted as reported by Farahpour et al. (2018). The IHC was conducted based on manufacturer’s protocol (Biocare, Yorba Linda, CA). The tissue sections were rinsed gently in the washing buffer and placed in a buffer bath. A DAB chromogen was employed for assessing the tissue sections and then incubated for 5 min. The sections were then dipped in weak ammonia (0.037 M/L) for 10 times, rinsed with distilled water and cover slipped. Brown stains were considered as positive immunohistochemical. Fluorescent staining for collagen: To wash the slides, acetic acid 1% was used for several minutes. The slides were stained using acridine-orange. Phosphate buffer was utilized to remove staining. The slides were the washed in distilled water, mounted with buffer drop and analyzed with a fluorescent microscope. The bunds with red and yellowish red colour were considered as collagen bunds (Farahpour et al. 2018). RNA isolation and cDNA synthesis: The wound tissues were isolated as previously reported and RNA was extracted applying a standard TRIZOL procedure as reported by Farahpour et al. (2018), and assessed by spectrophotometer (260 nm and 260/280 = 1.8–2.0). To prepare the cDNA, it was put in a reaction being mixed with 1 µg RNA, oligo (dT) primer (1 µL), 5 × reaction buffer (4 µL), RNAse inhibitor (1 µL), 10 mM dNTP mix (2 µL) and M-MuLV Reverse Transcriptase (1 µL) as described by producer protocol (Fermentas, GmbH, Germany). Time and temperature were as reported by Farahpour et al. (2018). 
The used primers included IL-1β, forward (5′-AAC AAA CCC TGC AGT GGT TCG-3′) and reverse (5′-AGCTGCTTCAGACACTTGCAC-3′); TGF-β, forward (5-CCAAACGCCGAAGACTTATCC-3′) and reverse (5′-CTTATTACCGATGGGATGGGATAGCCC-3′); IL-10, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), TNF-α, forward (5′-GAAGCTCCCTCAGCGAGGACA-3′) and reverse (5′-TTGGGCCAGTGAGTGAAAGGG-3′), FGF-2, forward (5′-GGAACCCGGCGGGACACGGAC-3′) and reverse (5′-CCGCTGTGGCAGCTCTTG GGG-3′); VEGF, forward (5′-GCTCCGTAGTAGCCGTGGTCT-3′) and reverse (5′-GGAACCCGGCGGGACACGGAC-3′), and IGF-1, forward (5′-TAGGTGGTTGATGAATGGT-3′) and reverse (5′-GAAAGGGCAGGGCTAAT-3′). Antioxidant capacity: To assess the antioxidant capacity, the wound granulation tissue was homogenized in ice-cold KCl (150 mM) and the mixture was then centrifuged at 3000 g for 10 min. The supernatant was used to evaluate the total antioxidant capacity (TAC), malondialdehyde (MDA) and total tissue thiol molecules (TTM) content. The Lowry method was employed for evaluating the protein content of the samples (Daemi et al. 2019a) and MDA content was used for assessing the lipid peroxidation rate. To assess the TTM, the collected granulation tissue sample was homogenized and the supernatant was added to Tris-EDTA as reported by Daemi et al. (2019a). Statistical analysis: The results were reported as mean ± standard deviation and analyzed by Graph Pad Prism Software. One-way ANOVA was utilized to analyze the results. Dunnett’s test for pair-wise comparisons was used for evaluating the effect of treatments and p < 0.05 was considered to be of significance. Results: Composition of the Zataria multiflora essential oil The analysis of Zataria multiflora essential oil with GC-FID and GC-mass spectrometry (MS) identified 29 compounds which accounted for 99.6% of the total essential oil composition (Table 1). It contained thymol (52.90%), p-cymene (9.10), γ-terpinene (8.10%) and carvacrol (6.80%) as major constituents (Figure 1). GC-MS chromatogram of the essential oil of Zataria multiflora. Chemical constituents of Zataria multiflora essential oil. RI-Cal: retention indices calculated based on C6–C24 n-alkenes from a DB-5 column. RI-Cal: retention indices retrieved from literature (Adams 2007). Bold values show major compounds in the essential oil. The analysis of Zataria multiflora essential oil with GC-FID and GC-mass spectrometry (MS) identified 29 compounds which accounted for 99.6% of the total essential oil composition (Table 1). It contained thymol (52.90%), p-cymene (9.10), γ-terpinene (8.10%) and carvacrol (6.80%) as major constituents (Figure 1). GC-MS chromatogram of the essential oil of Zataria multiflora. Chemical constituents of Zataria multiflora essential oil. RI-Cal: retention indices calculated based on C6–C24 n-alkenes from a DB-5 column. RI-Cal: retention indices retrieved from literature (Adams 2007). Bold values show major compounds in the essential oil. Wound area The findings for the effects of ZMEO on the percentage of wound contraction are shown in Figure 2. The results implied that topical administration of therapeutic ointments significantly (p < 0.05) increased wound contraction rate on days 7 and 14 compared to the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed that a higher wound contraction rate was observed in the 4% ZMEO-treated group compared to the 2% ZMEO-treated and MG groups (p < 0.05). 
The effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) in different days. Superscripts (a–d) indicate significant differences in same day at p < 0.05. The findings for the effects of ZMEO on the percentage of wound contraction are shown in Figure 2. The results implied that topical administration of therapeutic ointments significantly (p < 0.05) increased wound contraction rate on days 7 and 14 compared to the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed that a higher wound contraction rate was observed in the 4% ZMEO-treated group compared to the 2% ZMEO-treated and MG groups (p < 0.05). The effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) in different days. Superscripts (a–d) indicate significant differences in same day at p < 0.05. The bacteriological count The results concerning the effects of ZMEO on tissue bacteria counts are represented in Figure 2(C). The results showed that topical administration of therapeutic ointments significantly decreased bacterial colonization in all the days compared to the NCG group (p < 0.05). The comparison between treatment groups demonstrated that a higher level of ZMEO (4%) significantly decreased the total bacterial count compared to that of a lower level of it (2%) and MG groups (p < 0.05). The results concerning the effects of ZMEO on tissue bacteria counts are represented in Figure 2(C). The results showed that topical administration of therapeutic ointments significantly decreased bacterial colonization in all the days compared to the NCG group (p < 0.05). The comparison between treatment groups demonstrated that a higher level of ZMEO (4%) significantly decreased the total bacterial count compared to that of a lower level of it (2%) and MG groups (p < 0.05). Histopathological parameters The results for histopathological parameters are represented in Table 2. The results showed that edoema score was significantly higher in the NCG group. It was observed to be significantly lower in the animals treated with the 4% ZMEO-treated group and also in 2% ZMEO and MG-treated groups. The findings implied that the lowest edoema was observed in 4% ZMEO in day 14; meanwhile, there was no observed edoema in the 4% ZMEO group on day 14. The results showed that immune cells were significantly lower in ZMEO groups in day 3 compared to other groups (p < 0.05), yet it decreased on days 7 and 14 in the ZMEO groups compared to other groups (p < 0.05). It means that the application of the ZMEO decreases immune cells in days 7 and 14, and increased it on day 3. Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups compared to the NCG group and the highest level was observed in level of the 4%-ZMEO group. The results for collagen condensation (Figure 4) and re-epithelialization (Figure 5) revealed that parameters were significantly higher in the ZMEO groups, which confirms the results. Immunohistochemical staining for angiogenesis:; (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See elevated angiogenesis in ZMEO-treated group (arrows) on day 7 after wound induction. CD 31 staining, 400×. Cross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. 
Note well-formed collagen deposition in cross sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased on day 14 after injury (second row). The third row represents the software analyze for collagen intensity. The cross sections from animals in ZMEO-treated groups exhibited condense collagen deposition versus NCG and MG groups. Masson-trichrome staining and fluorescent staining for collagen, 400× magnification. Cross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See well re-epithelialization in ZMEO-treated animals. The re-epithelialization initiated on day 7 after wound induction in ZMEO-treated animals. However, the cross sections from NCG and MG groups are not representing epithelialization. Note well-organized dermis and complete epithelialization with well-formed papillae in ZMEO-treated animals in comparison to NCG and MG groups. Masson-trichrome staining, 100×. The effects of ZMEO on histopathological parameters in different days. Superscripts (a–d) indicate significant differences in same day at p < 0.05. The results for histopathological parameters are represented in Table 2. The results showed that edoema score was significantly higher in the NCG group. It was observed to be significantly lower in the animals treated with the 4% ZMEO-treated group and also in 2% ZMEO and MG-treated groups. The findings implied that the lowest edoema was observed in 4% ZMEO in day 14; meanwhile, there was no observed edoema in the 4% ZMEO group on day 14. The results showed that immune cells were significantly lower in ZMEO groups in day 3 compared to other groups (p < 0.05), yet it decreased on days 7 and 14 in the ZMEO groups compared to other groups (p < 0.05). It means that the application of the ZMEO decreases immune cells in days 7 and 14, and increased it on day 3. Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups compared to the NCG group and the highest level was observed in level of the 4%-ZMEO group. The results for collagen condensation (Figure 4) and re-epithelialization (Figure 5) revealed that parameters were significantly higher in the ZMEO groups, which confirms the results. Immunohistochemical staining for angiogenesis:; (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See elevated angiogenesis in ZMEO-treated group (arrows) on day 7 after wound induction. CD 31 staining, 400×. Cross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. Note well-formed collagen deposition in cross sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased on day 14 after injury (second row). The third row represents the software analyze for collagen intensity. The cross sections from animals in ZMEO-treated groups exhibited condense collagen deposition versus NCG and MG groups. Masson-trichrome staining and fluorescent staining for collagen, 400× magnification. Cross section from wound area: (A) NCG group, (B) MG group, (C and D) 2 and 4% ZMEO-treated groups. See well re-epithelialization in ZMEO-treated animals. The re-epithelialization initiated on day 7 after wound induction in ZMEO-treated animals. However, the cross sections from NCG and MG groups are not representing epithelialization. Note well-organized dermis and complete epithelialization with well-formed papillae in ZMEO-treated animals in comparison to NCG and MG groups. 
Masson-trichrome staining, 100×. The effects of ZMEO on histopathological parameters in different days. Superscripts (a–d) indicate significant differences in same day at p < 0.05. Molecular analyses The results for gene expression are presented in Figure 6. Gene expression of IL-1β and TNF-α was significantly (p < 0.05) lower on all sampling days at treated groups compared to the NCG group. The greatest decrease was observed in the 4% ZMEO-treated animals. Gene expressions of IGF-1 and VEGF was significantly (p < 0.05) higher at all treated groups on day 7 after the wounding, but the levels significantly decreased on day 14. The expression of IL-10 level increased on day 7 whereas it decreased on day 14 compared to the NCG group (p < 0.05). Further analysis of gene expressions of TGF-β and FGF-2 indicate that its level does not have significant difference (p > 0.05) on day 3, yet its level significantly (p < 0.05) increased in all treated groups, particularly at 4% ZMEO-treated animals on day 7. The amount of TGF-β and FGF-2 gene expression levels also were significantly (p < 0.05) reduced on day 14 compared to the NCG group. The effects of topical application of ZMEO on gene expression. Superscripts (a–d) indicate significant differences in same day at p < 0.05. The results for gene expression are presented in Figure 6. Gene expression of IL-1β and TNF-α was significantly (p < 0.05) lower on all sampling days at treated groups compared to the NCG group. The greatest decrease was observed in the 4% ZMEO-treated animals. Gene expressions of IGF-1 and VEGF was significantly (p < 0.05) higher at all treated groups on day 7 after the wounding, but the levels significantly decreased on day 14. The expression of IL-10 level increased on day 7 whereas it decreased on day 14 compared to the NCG group (p < 0.05). Further analysis of gene expressions of TGF-β and FGF-2 indicate that its level does not have significant difference (p > 0.05) on day 3, yet its level significantly (p < 0.05) increased in all treated groups, particularly at 4% ZMEO-treated animals on day 7. The amount of TGF-β and FGF-2 gene expression levels also were significantly (p < 0.05) reduced on day 14 compared to the NCG group. The effects of topical application of ZMEO on gene expression. Superscripts (a–d) indicate significant differences in same day at p < 0.05. Antioxidant status Antioxidant status in different groups is represented in Figure 7. Using the 2% and 4%-ZMEO significantly (p < 0.05) increased TAC and TTP levels compared to the NCG group in all of the days after the wounding. Interestingly, the highest increase was observed in the 4% ZMEO-treated animals. Further analysis of MDA level indicates significant (p < 0.05) decrease in all of the treated groups compared to the NCG group in all of the days after the wounding. The highest decrease was observed in the 4% ZMEO-treated group. The effects of topical application of ZMEO on antioxidant properties. Superscripts (a–d) indicate significant differences in same day at p < 0.05. Antioxidant status in different groups is represented in Figure 7. Using the 2% and 4%-ZMEO significantly (p < 0.05) increased TAC and TTP levels compared to the NCG group in all of the days after the wounding. Interestingly, the highest increase was observed in the 4% ZMEO-treated animals. Further analysis of MDA level indicates significant (p < 0.05) decrease in all of the treated groups compared to the NCG group in all of the days after the wounding. 
Composition of the Zataria multiflora essential oil: The analysis of Zataria multiflora essential oil by GC-FID and GC-mass spectrometry (GC-MS) identified 29 compounds, accounting for 99.6% of the total essential oil composition (Table 1). Thymol (52.90%), p-cymene (9.10%), γ-terpinene (8.10%) and carvacrol (6.80%) were the major constituents (Figure 1). GC-MS chromatogram of the essential oil of Zataria multiflora. Chemical constituents of Zataria multiflora essential oil. RI-Cal: retention indices calculated based on C6–C24 n-alkanes from a DB-5 column. RI-Lit: retention indices retrieved from the literature (Adams 2007). Bold values show major compounds in the essential oil.
Wound area: The effects of ZMEO on the percentage of wound contraction are shown in Figure 2. Topical administration of the therapeutic ointments significantly (p < 0.05) increased the wound contraction rate on days 7 and 14 compared with the NCG group (Figure 2(A,B)). The comparison between treatment groups revealed a higher wound contraction rate in the 4% ZMEO-treated group than in the 2% ZMEO-treated and MG groups (p < 0.05). The effects of topical administration of ZMEO on (B) wound contraction rate and (C) tissue bacterial count (CFU/g) on different days. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.
The bacteriological count: The effects of ZMEO on tissue bacterial counts are presented in Figure 2(C). Topical administration of the therapeutic ointments significantly decreased bacterial colonization on all days compared with the NCG group (p < 0.05). The comparison between treatment groups demonstrated that the higher concentration of ZMEO (4%) decreased the total bacterial count significantly more than the lower concentration (2%) and the MG group (p < 0.05).
Histopathological parameters: The results for the histopathological parameters are presented in Table 2. The oedema score was significantly higher in the NCG group and significantly lower in animals treated with 4% ZMEO, as well as in the 2% ZMEO and MG groups; the lowest oedema was observed with 4% ZMEO, and no oedema remained in the 4% ZMEO group on day 14. Immune cell counts were significantly lower in the ZMEO groups on day 3 compared with the other groups (p < 0.05) and decreased further on days 7 and 14 (p < 0.05), indicating that ZMEO application reduced immune cell infiltration as healing progressed. Angiogenesis (Figure 3) and fibroblast infiltration were significantly higher in all treated groups than in the NCG group, with the highest levels in the 4% ZMEO group. Collagen condensation (Figure 4) and re-epithelialization (Figure 5) were likewise significantly higher in the ZMEO groups, consistent with these findings. Immunohistochemical staining for angiogenesis: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the elevated angiogenesis in the ZMEO-treated groups (arrows) on day 7 after wound induction. CD31 staining, 400×.
Cross-section from the wound area: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the well-formed collagen deposition in cross-sections from ZMEO-treated animals on day 7 after wound induction (first row), which is significantly increased by day 14 after injury (second row). The third row shows the software analysis of collagen intensity. Cross-sections from ZMEO-treated animals exhibited denser collagen deposition than those from the NCG and MG groups. Masson-trichrome staining and fluorescent staining for collagen, 400× magnification. Cross-section from the wound area: (A) NCG group, (B) MG group, (C and D) 2% and 4% ZMEO-treated groups. Note the advanced re-epithelialization in ZMEO-treated animals, which began on day 7 after wound induction, whereas the cross-sections from the NCG and MG groups show no epithelialization at this time. Note the well-organized dermis and complete epithelialization with well-formed papillae in ZMEO-treated animals compared with the NCG and MG groups. Masson-trichrome staining, 100×. The effects of ZMEO on histopathological parameters on different days. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.
Molecular analyses: The gene expression results are presented in Figure 6. Expression of IL-1β and TNF-α was significantly (p < 0.05) lower in the treated groups than in the NCG group on all sampling days, with the greatest decrease in the 4% ZMEO-treated animals. Expression of IGF-1 and VEGF was significantly (p < 0.05) higher in all treated groups on day 7 after wounding but decreased significantly by day 14. IL-10 expression increased on day 7 and decreased on day 14 compared with the NCG group (p < 0.05). TGF-β and FGF-2 expression did not differ significantly on day 3 (p > 0.05), increased significantly (p < 0.05) in all treated groups on day 7, particularly in the 4% ZMEO-treated animals, and was significantly (p < 0.05) reduced on day 14 compared with the NCG group. The effects of topical application of ZMEO on gene expression. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.
Antioxidant status: Antioxidant status in the different groups is presented in Figure 7. Both 2% and 4% ZMEO significantly (p < 0.05) increased TAC and TTP levels compared with the NCG group on all days after wounding, with the largest increase in the 4% ZMEO-treated animals. MDA levels decreased significantly (p < 0.05) in all treated groups compared with the NCG group on all days after wounding, with the greatest decrease in the 4% ZMEO-treated group. The effects of topical application of ZMEO on antioxidant properties. Superscripts (a–d) indicate significant differences within the same day at p < 0.05.
Discussion: Wounds infected by pathogenic bacteria such as S. aureus and P. aeruginosa pose serious challenges for management and treatment (Manzuoerh et al. 2019). Different agents are employed for the treatment of wounds, and the active compounds of essential oils can play pivotal roles in such treatment. The present findings indicated that thymol (52.90%), p-cymene (9.10%), γ-terpinene (8.10%) and carvacrol (6.80%) were the major constituents of the essential oil.
Similar to our results, previous studies reported the major compounds of Z. multiflora essential oil to be thymol, carvacrol and p-cymene (Aida et al. 2015); thymol, γ-terpinene, p-cymene and carvacrol (Saei-Dehkordi et al. 2010); carvacrol (Misaghi and Basti 2007); and thymol, carvacrol and p-cymene (Sharififar et al. 2007). The results showed that ZMEO accelerated wound healing compared with the NCG group and with the synthetic agent mupirocin. Below, the results are discussed step by step in terms of the underlying mechanisms and active compounds. Inflammation is the first step of the wound healing process, and different factors and genes are involved in it. Inflammation is triggered when living tissue is injured by organisms such as bacteria, by physical trauma, or by a faulty immune response. Inflammation removes foreign substances and thereby supports the wound healing process (Garrett et al. 2010); macrophages, neutrophils and other immune cells are recruited to initiate this response (Oguntibeju 2018). IL-1β and TNF-α are commonly produced by immune cells such as macrophages and mast cells (Thacker et al. 2007). The results showed that topical application of ZMEO decreased immune cell counts and the expression of IL-1β and TNF-α within the first 3 days, and implied a positive relationship between the number of immune cells and the expression of IL-1β and TNF-α; that is, immune cells drive the expression of IL-1β and TNF-α in the first 3 days. Increased inflammation delays the wound healing process. The findings revealed that topical administration of ZMEO decreased inflammation on days 7 and 14, acting as an antibacterial and anti-inflammatory agent. The antibacterial and anti-inflammatory activity of ZMEO is attributed to its compounds: the essential oil contains thymol and carvacrol, which have antibacterial and anti-inflammatory properties. Administration of carvacrol has been reported to significantly reduce TNF-α levels in pleural lavage (Guimarães et al. 2012). The anti-inflammatory activity of thymol involves prevention of the phosphorylation of extracellular signal-regulated protein kinase, c-Jun N-terminal kinase, and nuclear factor-κB (Liang et al. 2014). Carvacrol prevented leukocyte migration, decreased oedema and showed anti-inflammatory effects (Fachini-Queiroz et al. 2012). In the present study, immune cell counts and the expression of IL-1β and TNF-α both decreased in the ZMEO groups. The findings also showed that the total bacterial count decreased significantly in the ZMEO groups compared with the NCG group. Mupirocin is used as a bactericidal agent for the treatment of infected wounds; the results indicate that ZMEO, particularly at the higher concentration, has better antibacterial activity than mupirocin. A higher total bacterial count increases inflammation and delays the wound healing process, so topical administration of ZMEO reduces inflammation through its antibacterial activity. The antibacterial activity of Zataria multiflora has been attributed to its compounds (Shafiee et al. 1999; Barkhori-Mehni et al. 2017). In essence, ZMEO decreased inflammation and inflammation-associated parameters, and higher concentrations of ZMEO deliver higher doses of the active compounds that act as anti-inflammatory factors. Decreased inflammation moves the wound healing process towards the proliferative phase, and our findings implied that topical application of ZMEO improved the proliferative phase.
The results showed that the gene expression of VEGF, FGF-2 and IGF-1 increased in the ZMEO groups. IGF-1 increases glucose transport during the wound healing process and improves the proliferative phase (Daemi et al. 2019b). VEGF induces angiogenesis and stimulates cell migration and proliferation (Norton and Popel 2016; Farahpour et al. 2020; Gharaboghaz et al. 2020). FGF-2, a protein that participates in the wound healing process (Souto et al. 2020), together with other growth factors such as VEGF, is reported to increase angiogenesis and indirectly support cellular nutrient, oxygen and energy supply (Harding et al. 2002). Previous studies had not investigated the effect of ZMEO on the gene expression of VEGF, FGF-2 and IGF-1. Consistent with the role of VEGF in angiogenesis, vessel numbers were significantly higher in the ZMEO groups. Other genes involved were TGF-β and IL-10, whose expression increased on day 7. IL-10 reduces the production of pro-inflammatory cytokines, whereas TGF-β drives the transition to the proliferative phase by improving the proliferation and differentiation of fibroblasts, collagen production, and wound contraction (Khezri et al. 2020). Both genes therefore appear to play important roles in resolving inflammation and progressing to the proliferative phase. The staining results indicated improved collagen deposition; the biosynthesis of collagen during the proliferation stage plays a pivotal role in dermal maturation, in which several factors are involved (Yakaew et al. 2016; Daemi et al. 2019a). The results showed that topical application of ZMEO increased fibroblast numbers, which may be due to its effect on IGF-1; IGF-1 has been reported to increase fibroblast numbers in vitro (Lisa et al. 2011). Improving cellular proliferation and differentiation is essential for shortening the healing time (Oryan et al. 2015; Karimzadeh and Farahpour 2017). The proliferative effect of ZMEO is partly attributable to its antioxidant activity, and topical application of ZMEO improved antioxidant status. Previous studies have reported antioxidant activity of Zataria multiflora essential oil (Fatemi et al. 2012; Karimian et al. 2012; Kavoosi and Teixeira da Silva 2012). Oxidative stress in the wound area increases damage to proteins, nucleotides and lipids (Cano Sanchez et al. 2018; Farahpour et al. 2020; Gharaboghaz et al. 2020) and decreases antioxidant activity; these changes in the antioxidant profile contribute to increased MDA levels (Sharma et al. 2012; Bardaa et al. 2016; Vittorazzi et al. 2016).
Conclusion: Topical application of ZMEO decreased inflammation and the total bacterial count, which moves the wound healing process towards the proliferative phase. It increased the expression of proliferative genes, helped to resolve the inflammatory phase and accelerated the proliferative phase, and the antioxidant activity of ZMEO further accelerated healing. Together, these factors hastened the wound healing process, as reflected in the wound area measurements. Accordingly, ZMEO, alone or in combination with chemical agents, could be suggested for the treatment of wounds.
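The percentage wound contraction reported above (Figure 2) is conventionally derived from planimetric wound areas. The excerpt does not state the exact formula used in this study, so the short sketch below assumes the commonly used definition (contraction relative to the initial wound area); the function name and the example areas are hypothetical.

def wound_contraction_percent(area_day0, area_day_t):
    # Commonly used definition (assumed here, not quoted from the paper):
    # contraction (%) = (initial area - area on day t) / initial area * 100
    if area_day0 <= 0:
        raise ValueError("initial wound area must be positive")
    return (area_day0 - area_day_t) / area_day0 * 100.0

# Illustrative, hypothetical planimetric areas (mm^2) for one animal:
print(wound_contraction_percent(100.0, 38.0))   # e.g. 62.0% contraction by day 7
print(wound_contraction_percent(100.0, 7.0))    # e.g. 93.0% contraction by day 14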
Background: Zataria multiflora Boiss (Lamiaceae) essential oil (ZME) is believed to be a bactericidal herbal medicine and might alleviate the negative effects of infection. Methods: A full-thickness excisional skin wound was surgically created in each mouse and inoculated with a 5 × 10⁷ suspension containing Pseudomonas aeruginosa and Staphylococcus aureus. The BALB/c mice (n = 72) were divided into four groups: (1) a negative control that received base ointment (NCG), (2) a positive control that received Mupirocin® daily (MG), (3) a therapeutic ointment containing 2% ZMEO and (4) a therapeutic ointment containing 4% ZMEO, for 21 days. Wound contraction, total bacterial count, histopathological parameters, antioxidant activity, and qRT-PCR analysis of IL-1β, TNF-α, VEGF, IGF-1, TGF-β, IL-10 and FGF-2 mRNA levels were assessed on days 3, 7 and 14 following the wounding. Results: Topical administration of ZMEO significantly decreased the total bacterial count, the wound area and the expression of IL-1β and TNF-α compared with the control groups (p < 0.05) on all days. It also significantly increased the expression of TGF-β, IL-10, IGF-1, FGF-2 and VEGF, as well as angiogenesis, fibroblast and fibrocyte counts, the epithelialization ratio and collagen deposition, and improved antioxidant status compared with the control group (p < 0.05). Conclusions: ZMEO accelerated the healing process of infected wounds by shortening the inflammatory phase and enhancing the proliferative phase. ZMEO, alone or in combination with chemical agents, could be suggested for the treatment of wounds.
Introduction: Wound infections are important challenges that cause patient suffering and economic losses (Vittorazzi et al. 2016; Daemi et al. 2019a; Farghali et al. 2019). Skin is a regenerative organ that acts as a barrier between the body and the external environment (Hassan et al. 2014). A wound is defined as a disturbance of the anatomic structure and functional integrity of the skin (Farghali et al. 2019). Skin prevents the penetration of bacteria and fungi, which otherwise cause substantial mortality and morbidity. Pseudomonas aeruginosa and Staphylococcus aureus are opportunistic bacteria that induce infection in patients (Cardona and Wilson 2015). These bacteria are commonly detected in both the superficial and the deepest regions of the wound bed (Serra et al. 2015). Colonization of S. aureus and P. aeruginosa at the wound site postpones the wound healing process (Farahpour et al. 2020; Khezri et al. 2020). The wound healing process comprises several phases, including coagulation, inflammation, epithelialization, granulation tissue formation, and tissue remodelling (Daemi et al. 2019a, 2019b; Farahpour et al. 2020). The inflammatory phase occurs after activation of inflammatory chemokines (Farahpour et al. 2020). Interleukin-1β (IL-1β) is synthesized in substantial amounts 12–24 h after infliction of the wound, and its level returns to baseline after the proliferative stage is completed (Fahey and Doyle 2019). Tumour necrosis factor-α (TNF-α) is synthesized mainly by macrophages and T lymphocytes, and its level increases under conditions of inflammation and infection (Xue and Falcon 2019). Following the induction of inflammation, the proliferative phase occurs, in which many genes are involved. Insulin-like growth factor 1 (IGF-1) promotes keratinocyte production and fibroblast proliferation and improves re-epithelialization (Daemi et al. 2019b). Decreased endothelial insulin/IGF-1 signalling delays the wound healing process (Aghdam et al. 2012). Vascular endothelial growth factor (VEGF) induces angiogenesis and stimulates cell migration and proliferation (Farahpour et al. 2020). Fibroblast growth factor-2 (FGF-2) is a protein that participates in the wound healing process (Souto et al. 2020). Interleukin-10 (IL-10) reduces the production of pro-inflammatory cytokines, whereas transforming growth factor-β (TGF-β) promotes the proliferative phase by improving the proliferation and differentiation of fibroblasts, collagen production, and wound contraction (Khezri et al. 2020). In addition, oxidative stress increases in wound disorders and raises the production of reactive oxygen species (ROS) at the wound site (Ustuner et al. 2019). Applying antibacterial and antioxidant agents shortens the inflammatory phase, promotes the proliferative phase and accelerates the wound healing process (Bardaa et al. 2016; Vittorazzi et al. 2016; Daemi et al. 2019a, 2019b). Zataria multiflora Boiss (Lamiaceae) (ZM) has antibacterial activity against Pseudomonas spp. (Barkhori-Mehni et al. 2017). Its antibacterial activity is attributed to its phenolic compounds, which disrupt the bacterial cell wall (Barkhori-Mehni et al. 2017). It also demonstrates antioxidant properties due to compounds such as thymol and carvacrol (Mojaddar Langroodi et al. 2019). ZM might therefore accelerate the wound healing process through its antibacterial and antioxidant features, but no studies had yet evaluated the effect of Zataria multiflora essential oil (ZME) on the wound healing process. 
Therefore, this study was conducted to evaluate the effect of an ointment prepared from ZME on the healing of a wound model infected with S. aureus and P. aeruginosa by assessing wound contraction, total bacterial count, histopathological and immunohistochemical parameters, antioxidant activity, and qRT-PCR analyses. Conclusion: Topical application of ZMEO decreased inflammation and the total bacterial count, which moves the wound healing process towards the proliferative phase. It increased the expression of proliferative genes, helped to resolve the inflammatory phase and accelerated the proliferative phase, and the antioxidant activity of ZMEO further accelerated healing. Together, these factors hastened the wound healing process, as reflected in the wound area measurements. Accordingly, ZMEO, alone or in combination with chemical agents, could be suggested for the treatment of wounds.
Background: Zataria multiflora Boiss (Lamiaceae) essential oil (ZME) is believed to be a bactericidal herbal medicine and might alleviate the negative effects of infection. Methods: A full-thickness excisional skin wound was surgically created in each mouse and inoculated with a 5 × 10⁷ suspension containing Pseudomonas aeruginosa and Staphylococcus aureus. The BALB/c mice (n = 72) were divided into four groups: (1) a negative control that received base ointment (NCG), (2) a positive control that received Mupirocin® daily (MG), (3) a therapeutic ointment containing 2% ZMEO and (4) a therapeutic ointment containing 4% ZMEO, for 21 days. Wound contraction, total bacterial count, histopathological parameters, antioxidant activity, and qRT-PCR analysis of IL-1β, TNF-α, VEGF, IGF-1, TGF-β, IL-10 and FGF-2 mRNA levels were assessed on days 3, 7 and 14 following the wounding. Results: Topical administration of ZMEO significantly decreased the total bacterial count, the wound area and the expression of IL-1β and TNF-α compared with the control groups (p < 0.05) on all days. It also significantly increased the expression of TGF-β, IL-10, IGF-1, FGF-2 and VEGF, as well as angiogenesis, fibroblast and fibrocyte counts, the epithelialization ratio and collagen deposition, and improved antioxidant status compared with the control group (p < 0.05). Conclusions: ZMEO accelerated the healing process of infected wounds by shortening the inflammatory phase and enhancing the proliferative phase. ZMEO, alone or in combination with chemical agents, could be suggested for the treatment of wounds.
10,623
321
[ 76, 50, 96, 267, 82, 100, 152, 108, 74, 247, 131, 59, 144, 149, 94, 543, 242, 140 ]
23
[ "zmeo", "wound", "groups", "group", "treated", "day", "05", "ncg", "significantly", "zmeo treated" ]
[ "bacteriological count wound", "aeruginosa wound site", "wound infections important", "wounds infected pathogenic", "wound model aureus" ]
null
[CONTENT] Infected wound healing | inflammatory cytokines | antimicrobial properties | thymol [SUMMARY]
null
[CONTENT] Infected wound healing | inflammatory cytokines | antimicrobial properties | thymol [SUMMARY]
[CONTENT] Infected wound healing | inflammatory cytokines | antimicrobial properties | thymol [SUMMARY]
[CONTENT] Infected wound healing | inflammatory cytokines | antimicrobial properties | thymol [SUMMARY]
[CONTENT] Infected wound healing | inflammatory cytokines | antimicrobial properties | thymol [SUMMARY]
[CONTENT] Administration, Topical | Animals | Collagen | Inflammation | Lamiaceae | Mice | Mice, Inbred BALB C | Neovascularization, Pathologic | Oils, Volatile | Surgical Wound Infection | Wound Healing [SUMMARY]
null
[CONTENT] Administration, Topical | Animals | Collagen | Inflammation | Lamiaceae | Mice | Mice, Inbred BALB C | Neovascularization, Pathologic | Oils, Volatile | Surgical Wound Infection | Wound Healing [SUMMARY]
[CONTENT] Administration, Topical | Animals | Collagen | Inflammation | Lamiaceae | Mice | Mice, Inbred BALB C | Neovascularization, Pathologic | Oils, Volatile | Surgical Wound Infection | Wound Healing [SUMMARY]
[CONTENT] Administration, Topical | Animals | Collagen | Inflammation | Lamiaceae | Mice | Mice, Inbred BALB C | Neovascularization, Pathologic | Oils, Volatile | Surgical Wound Infection | Wound Healing [SUMMARY]
[CONTENT] Administration, Topical | Animals | Collagen | Inflammation | Lamiaceae | Mice | Mice, Inbred BALB C | Neovascularization, Pathologic | Oils, Volatile | Surgical Wound Infection | Wound Healing [SUMMARY]
[CONTENT] bacteriological count wound | aeruginosa wound site | wound infections important | wounds infected pathogenic | wound model aureus [SUMMARY]
null
[CONTENT] bacteriological count wound | aeruginosa wound site | wound infections important | wounds infected pathogenic | wound model aureus [SUMMARY]
[CONTENT] bacteriological count wound | aeruginosa wound site | wound infections important | wounds infected pathogenic | wound model aureus [SUMMARY]
[CONTENT] bacteriological count wound | aeruginosa wound site | wound infections important | wounds infected pathogenic | wound model aureus [SUMMARY]
[CONTENT] bacteriological count wound | aeruginosa wound site | wound infections important | wounds infected pathogenic | wound model aureus [SUMMARY]
[CONTENT] zmeo | wound | groups | group | treated | day | 05 | ncg | significantly | zmeo treated [SUMMARY]
null
[CONTENT] zmeo | wound | groups | group | treated | day | 05 | ncg | significantly | zmeo treated [SUMMARY]
[CONTENT] zmeo | wound | groups | group | treated | day | 05 | ncg | significantly | zmeo treated [SUMMARY]
[CONTENT] zmeo | wound | groups | group | treated | day | 05 | ncg | significantly | zmeo treated [SUMMARY]
[CONTENT] zmeo | wound | groups | group | treated | day | 05 | ncg | significantly | zmeo treated [SUMMARY]
[CONTENT] wound | process | healing | healing process | wound healing process | wound healing | 2020 | factor | phase | growth factor [SUMMARY]
null
[CONTENT] zmeo | treated | groups | group | zmeo treated | 05 | day | significantly | ncg | compared [SUMMARY]
[CONTENT] healing | wound healing | wound | process | phase | wound healing process | proliferative | healing process | inflammation | proliferative phase [SUMMARY]
[CONTENT] zmeo | wound | group | groups | treated | 05 | day | zmeo treated | significantly | ncg [SUMMARY]
[CONTENT] zmeo | wound | group | groups | treated | 05 | day | zmeo treated | significantly | ncg [SUMMARY]
[CONTENT] Boiss | Lamiaceae [SUMMARY]
null
[CONTENT] ZMEO | IL-1β | TNF | 0.05 | all days ||| FGF-2 | VEGF | 0.05 [SUMMARY]
[CONTENT] ZMEO ||| ZMEO [SUMMARY]
[CONTENT] Boiss | Lamiaceae ||| 5 | 107 | Staphylococcus ||| 72 | four | 1 | NCG | 2 | daily | Mupirocin® | 3 | 2% | ZMEO | 4 | 4% | ZMEO | 21 days ||| IL-1β | TNF | VEGF | IGF-1 | IL-10 | FGF-2 | days 3, 7 | 14 ||| ZMEO | IL-1β | TNF | 0.05 | all days ||| FGF-2 | VEGF | 0.05 ||| ||| ZMEO [SUMMARY]
[CONTENT] Boiss | Lamiaceae ||| 5 | 107 | Staphylococcus ||| 72 | four | 1 | NCG | 2 | daily | Mupirocin® | 3 | 2% | ZMEO | 4 | 4% | ZMEO | 21 days ||| IL-1β | TNF | VEGF | IGF-1 | IL-10 | FGF-2 | days 3, 7 | 14 ||| ZMEO | IL-1β | TNF | 0.05 | all days ||| FGF-2 | VEGF | 0.05 ||| ||| ZMEO [SUMMARY]
Submicroscopic malaria infection during pregnancy and the impact of intermittent preventive treatment.
25023697
Malaria during pregnancy results in adverse outcomes for mothers and infants. Intermittent preventive treatment (IPT) with sulphadoxine-pyrimethamine (SP) is the primary intervention aimed at reducing malaria infection during pregnancy. Although submicroscopic infection is common during pregnancy and at delivery, its impact throughout pregnancy on the development of placental malaria and adverse pregnancy outcomes has not been clearly established.
BACKGROUND
Quantitative PCR was used to detect submicroscopic infections in pregnant women enrolled in an observational study in Blantyre, Malawi to determine their effect on maternal, foetal and placental outcomes. The ability of SP to treat and prevent submicroscopic infections was also assessed.
METHODS
2,681 samples from 448 women were analysed and 95 submicroscopic infections were detected in 68 women, a rate of 0.6 episodes per person-year of follow-up. Submicroscopic infections were most often detected at enrolment. The majority of women with submicroscopic infections did not have a microscopically detectable infection detected during pregnancy. Submicroscopic infection was associated with placental malaria even after controlling for microscopically detectable infection and was associated with decreased maternal haemoglobin at the time of detection. However, submicroscopic infection was not associated with adverse maternal or foetal outcomes at delivery. One-third of women with evidence of placental malaria did not have documented peripheral infection during pregnancy. SP was moderately effective in treating submicroscopic infections, but did not prevent the development of new submicroscopic infections in the month after administration.
RESULTS
Submicroscopic malaria infection is common and occurs early in pregnancy. SP-IPT can clear some submicroscopic infections but does not prevent new infections after administration. To effectively control pregnancy-associated malaria, new interventions are required to target women prior to their first antenatal care visit and to effectively treat and prevent all malaria infections.
CONCLUSIONS
[ "Adult", "Antimalarials", "Artemether, Lumefantrine Drug Combination", "Artemisinins", "Asymptomatic Diseases", "DNA, Protozoan", "Drug Administration Schedule", "Drug Combinations", "Drug Resistance", "Erythrocytes", "Ethanolamines", "False Negative Reactions", "Female", "Fetal Diseases", "Fluorenes", "Follow-Up Studies", "Hemeproteins", "Humans", "Infant, Newborn", "Infectious Disease Transmission, Vertical", "Malaria", "Malawi", "Parasitemia", "Placenta", "Plasmodium", "Polymerase Chain Reaction", "Pregnancy", "Pregnancy Complications, Infectious", "Pregnancy Outcome", "Pregnancy Trimesters", "Prevalence", "Pyrimethamine", "Quinine", "Sulfadoxine" ]
4110536
Background
Each year, 25 million pregnant women in sub-Saharan Africa are at risk for malaria infection. Malaria during pregnancy is associated with maternal anaemia and infant low birth weight. Malaria during pregnancy is estimated to result in 100,000 infant deaths annually in Africa [1]. Malaria infections during pregnancy may be identified by microscopic examination of a blood smear or may be submicroscopic, detectable only by more sensitive molecular methods. Studies in a variety of transmission settings have shown that submicroscopic infections can be up to five times more common than microscopic infections during pregnancy [2-7]. While the associations between microscopically detectable malaria during pregnancy and maternal anaemia at delivery and low birth weight have been supported in most studies [1], the impact of submicroscopic infection during pregnancy on these adverse outcomes has not been systematically assessed. Because malaria infections during pregnancy are often asymptomatic, a key intervention to decrease the burden of malaria in pregnancy is intermittent preventive treatment (IPT), providing anti-malarial chemotherapy at routine intervals during pregnancy. Currently, the recommended anti-malarial chemotherapy for IPT in the majority of Africa is sulphadoxine-pyrimethamine (SP), which is intended to cure current infections and to provide a period of post-treatment prophylaxis to prevent future infections. In much of sub-Saharan Africa, increasing resistance to SP [8] is generating concerns that SP-IPT efficacy will be compromised [9]. Understanding the relative importance of the treatment and prophylactic effects of SP-IPT and the differential impact of resistance on these effects may have implications for the selection of a new drug to replace SP for IPT. In the context of an observational study of a large cohort of pregnant women in Malawi, the effect of submicroscopic infections on maternal and foetal outcomes was examined. All women received SP-IPT according to the national policy. At each visit a blood smear was examined and women were treated for malaria infections detected by microscopy. The primary analysis of infections detected solely by microscopy showed an association between infection at enrolment in antenatal care and placental malaria, but only infections detected at the time of delivery were associated with adverse pregnancy outcomes [10]. Because higher rates of submicroscopic infection than microscopically detectable infection were anticipated and submicroscopic infections were not treated, it was hypothesized that submicroscopic infections would be associated with placental malaria, maternal anaemia and infant low birth weight. It was further hypothesized that, due to high rates of SP resistance, SP-IPT would fail to clear and prevent submicroscopic infections.
Methods
Study population Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al. [10].
Laboratory procedures. Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated. Haemoglobin Haemoglobin measurements were obtained from finger-prick blood samples using HemoCue® AB. Placental biopsies Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for the presence of malaria parasites and haemozoin pigment. Molecular detection DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real-time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website [11].
Data analysis Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit. Data analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum tests were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.
Ethical considerations Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.
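To make the outcome definitions and the regression approach in the Data analysis paragraph above concrete, the sketch below restates them in Python. It is not the study's code (the analysis was done in Stata 12.1); the dataframe, column names and model formulas are hypothetical stand-ins mirroring the description, and the statsmodels calls are shown as commented suggestions only.

import pandas as pd

def classify_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the outcome cut-offs stated in the Data analysis section."""
    out = df.copy()
    out["low_birth_weight"] = out["birth_weight_g"] < 2500       # < 2,500 g
    out["preterm"] = out["gest_age_delivery_wk"] < 37            # delivery < 37 weeks
    out["small_for_ga"] = out["weight_for_ga_z"] < -2            # WHO Z-score < -2
    out["fever"] = out["axillary_temp_c"] >= 37.5                # >= 37.5 degrees C
    out["maternal_anaemia"] = out["haemoglobin_g_dl"] < 11.0     # < 11.0 g/dL
    return out

# Illustrative two-row example with hypothetical values:
example = pd.DataFrame({
    "birth_weight_g": [2400, 3100],
    "gest_age_delivery_wk": [36, 39],
    "weight_for_ga_z": [-2.3, -0.4],
    "axillary_temp_c": [37.8, 36.9],
    "haemoglobin_g_dl": [10.2, 12.5],
})
print(classify_outcomes(example))

# Odds ratios and the haemoglobin model could then be fitted along these lines
# (hypothetical formulas and column names, mirroring the text above):
#   import numpy as np, statsmodels.formula.api as smf
#   or_fit = smf.logit("placental_malaria ~ submicroscopic + microscopic", data=df).fit()
#   odds_ratios = np.exp(or_fit.params)
#   hb_fit = smf.ols("haemoglobin_g_dl ~ submicroscopic + gest_age_wk", data=visits).fit(
#       cov_type="cluster", cov_kwds={"groups": visits["woman_id"]})  # robust cluster variance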
Results
Two thousand six hundred and eighty-one peripheral blood samples from 448 women enrolled in the study were screened for submicroscopic infection. Two women were not included in the analysis because they did not have any filter papers available for molecular analysis. Ninety-five incident submicroscopic infections were detected in 68 women. Pregnant women had 0.6 episodes of submicroscopic malaria per person-year of follow-up. As previously published, women in this cohort also experienced 0.6 episodes of microscopically detected infection per person-year of follow-up [10]. Thus, the overall rate of malaria infection was 1.2 episodes per person year of follow up. Among women with submicroscopic infections, the mean number of infections was 1.4 (95% CI 1.2-1.6, range 1–5). Forty-nine women (72%) with submicroscopic infections never had an infection detected by microscopy during pregnancy. Submicroscopic infection was not associated with first versus second pregnancy, age, bed net use, or malaria treatment during pregnancy prior to enrolment, but was associated with enrolment at an earlier gestational age, more frequent visits, and lack of secondary school attendance (Table  1). Gestational age at enrolment was inversely correlated with number of visits.Among 311 placentas with both histological and molecular data available for evaluation, 104 (33%) had evidence of placental malaria (Figure  1). Thirty-three (11%) had both parasites and haemozoin detected microscopically. Three (1%) had no haemozoin but did have parasites detected by histology and confirmed by qPCR. Thirty-eight (12%) had haemozoin without histological or molecular evidence of active parasite infection. Thirty (10%) had submicroscopic infection with no histological evidence of parasites or haemozoin. Enrolment characteristics of women with and without submicroscopic infection detected during pregnancy irrespective of the presence of microscopic infection CI: confidence interval. Frequency of histological and molecular findings of placental malaria. Histological results were not available for six placentas; all had parasites detected by qPCR. Timing and clinical presentation of submicroscopic infection Among women with at least one submicroscopic infection, 69% had submicroscopic infection detected at enrolment (47/68). Submicroscopic infections were detected in each trimester and at delivery. Based on estimated gestational age at enrolment, submicroscopic infection was detected in 12% (6/50) of first trimester, 6% (63/1,050) of second trimester, and 1.4% (19/1,372) of third trimester samples prior to delivery. These infections occurred in six, fifty-six, and eighteen women, respectively. Twelve women had infection in more than one trimester. After excluding the submicroscopic infections present at enrolment, the prevalence of submicroscopic infection was higher in the second trimester than the third trimester [22/650 (3.4%) versus 19/1,372 (1.4%) of samples, p = 0.003]. Nine of 308 women (3%) with peripheral samples obtained at delivery had submicroscopic infection. Two women with submicroscopic infection at delivery also had submicroscopic infection at their last antepartum visit, thus infection at delivery was not considered a new submicroscopic infection. Submicroscopic infection was not associated with documented fever (p = 0.4) or reported fever within 48 hours prior to the visit (p = 0.2). Three women reported cough, sneezing and body pains at a visit during which submicroscopic infection was detected. 
Otherwise, no women reported any symptoms of illness during visits at which submicroscopic infection was detected. However, submicroscopic infection was associated with a 0.6 g/dL decrease in mean haemoglobin at the time of detection (11.5 g/dL vs 12.1 g/dL, p = 0.001). This decrease remained significant after adjusting for gestational age and repeated measurements. Among women with at least one submicroscopic infection, 69% had submicroscopic infection detected at enrolment (47/68). Submicroscopic infections were detected in each trimester and at delivery. Based on estimated gestational age at enrolment, submicroscopic infection was detected in 12% (6/50) of first trimester, 6% (63/1,050) of second trimester, and 1.4% (19/1,372) of third trimester samples prior to delivery. These infections occurred in six, fifty-six, and eighteen women, respectively. Twelve women had infection in more than one trimester. After excluding the submicroscopic infections present at enrolment, the prevalence of submicroscopic infection was higher in the second trimester than the third trimester [22/650 (3.4%) versus 19/1,372 (1.4%) of samples, p = 0.003]. Nine of 308 women (3%) with peripheral samples obtained at delivery had submicroscopic infection. Two women with submicroscopic infection at delivery also had submicroscopic infection at their last antepartum visit, thus infection at delivery was not considered a new submicroscopic infection. Submicroscopic infection was not associated with documented fever (p = 0.4) or reported fever within 48 hours prior to the visit (p = 0.2). Three women reported cough, sneezing and body pains at a visit during which submicroscopic infection was detected. Otherwise, no women reported any symptoms of illness during visits at which submicroscopic infection was detected. However, submicroscopic infection was associated with a 0.6 g/dL decrease in mean haemoglobin at the time of detection (11.5 g/dL vs 12.1 g/dL, p = 0.001). This decrease remained significant after adjusting for gestational age and repeated measurements. Impact of submicroscopic infection Women who had submicroscopic infections were more likely to have placental malaria than those without infections. After controlling for microscopically detectable infection, women with submicroscopic infection were more than seven-fold more likely to have placental malaria than women without any documented infection during pregnancy (Table  2). Each additional episode of submicroscopic infection increased the risk of placental malaria nearly five-fold (OR 4.9, 95% CI 2.7-9.1, p < 0.001). All women with submicroscopic infection detected in peripheral blood at delivery had placental malaria. Thirty-five percent (39/110) of women who had evidence of placental malaria did not have any microscopic or submicroscopic peripheral infections detected during pregnancy or at delivery. Association of submicroscopic infection during pregnancy and placental malaria CI: confidence interval. Among women without microscopic infection during pregnancy, submicroscopic infection detected at enrolment was more strongly associated with placental malaria than submicroscopic infection detected after enrolment. After excluding infections detected at enrolment, third trimester infections were more strongly associated with placental malaria than second trimester infections (Table  3). 
Table 3: Timing of submicroscopic infection during pregnancy and odds of placental malaria among women without microscopically detected infection. *Four women did not have molecular data available from their enrolment visit. CI: confidence interval.
Submicroscopic infection during pregnancy, at delivery or in the placenta was not associated with adverse maternal or foetal outcomes. The overall rates of these adverse outcomes were 18% for low birth weight, 13% for small for gestational age, 15% for preterm delivery and 11% for maternal anaemia at delivery. The prevalence of these outcomes was the same in women who did and did not have submicroscopic infection in their peripheral blood or placenta (Table 4).
Table 4: Maternal and foetal outcomes in singleton, live births from women with only submicroscopic infection compared to women without infection during pregnancy, at delivery or in the placenta. CI: confidence interval. [Hb]: haemoglobin concentration.
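For the outcome comparisons above, the Methods specify chi-squared and Fisher's exact tests for proportions. A minimal sketch of one such comparison (for example, low birth weight by infection group) is shown below; the 2x2 counts are hypothetical placeholders, not the study data.

# Fisher's exact test on a hypothetical 2x2 table of a binary outcome
# (e.g. low birth weight) by infection group; counts are illustrative only.
from scipy.stats import fisher_exact

#         outcome present, outcome absent
table = [[12, 55],    # women with only submicroscopic infection (hypothetical)
         [40, 200]]   # women with no infection detected (hypothetical)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")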
Effect of SP-IPT on submicroscopic infections
The ability of SP to clear submicroscopic infections was evaluated by examining women who had a submicroscopic infection and a subsequent visit within 28 days (N = 79 visits). Women who received a dose of SP at a visit with submicroscopic infection were less likely to have an infection at their next visit within 28 days than women with submicroscopic infection who did not receive SP (23.9 vs 48.5%; p = 0.02).
The prophylactic efficacy of SP was assessed by comparing the rates of malaria infection within 28 days among women who did and did not receive SP when they had no malaria infection. A total of 1,779 visits were included in this analysis. Women who did and did not receive SP had the same rate of parasitaemia within the next 28 days (2.0 vs 2.2%; p = 0.83) (Figure 2).
Figure 2: SP-IPT treatment of submicroscopic infection and prevention of any infection.
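Both SP analyses above are visit-level comparisons of two proportions, tested per the Methods with chi-squared or Fisher's exact tests. The self-contained sketch below instead uses a pooled two-proportion z-test, which is the z-form of the uncorrected chi-squared test on a 2x2 table; the counts are hypothetical placeholders because per-group denominators are not reported in the text.

# Pooled two-proportion z-test (equivalent to the uncorrected chi-squared test
# on a 2x2 table). Counts are hypothetical, for illustration only.
from math import sqrt, erf

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """Return (proportion A, proportion B, two-sided p-value)."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    pooled = (pos_a + pos_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return p_a, p_b, p_value

# Hypothetical example: parasitaemia at the next visit, SP given vs not given
print(two_proportion_z(pos_a=10, n_a=40, pos_b=19, n_b=39))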
Conclusions
Submicroscopic malaria infection is common, occurs early in pregnancy and is associated with placental malaria. SP-IPT appears to treat some submicroscopic infections but does not prevent new infections. To effectively control pregnancy-associated malaria, new interventions are required to target women prior to their first antenatal care visit and to effectively treat and prevent all malaria infections.
[ "Background", "Study population", "Laboratory procedures", "Microscopy", "Haemoglobin", "Placental biopsies", "Molecular detection", "Data analysis", "Ethical considerations", "Timing and clinical presentation of submicroscopic infection", "Impact of submicroscopic infection", "Effect of SP-IPT on submicroscopic infections", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Each year, 25 million pregnant women in sub-Saharan Africa are at risk for malaria infection. Malaria during pregnancy is associated with maternal anaemia and infant low birth weight. Malaria during pregnancy is estimated to result in 100,000 infant deaths annually in Africa\n[1].\nMalaria infections during pregnancy may be identified by microscopic examination of a blood smear or may be submicroscopic, detectable only by more sensitive molecular methods. Studies in a variety of transmission settings have shown that submicroscopic infections can be up to five times more common than microscopic infections during pregnancy\n[2-7]. While the associations between microscopically detectable malaria during pregnancy and maternal anaemia at delivery and low birth weight have been supported in most studies\n[1], the impact of submicroscopic infection during pregnancy on these adverse outcomes has not been systematically assessed.\nBecause malaria infections during pregnancy are often asymptomatic, a key intervention to decrease the burden of malaria in pregnancy is intermittent preventive treatment (IPT), providing anti-malarial chemotherapy at routine intervals during pregnancy. Currently, the recommended anti-malarial chemotherapy for IPT in the majority of Africa is sulphadoxine-pyrimethamine (SP), which is intended to cure current infections and to provide a period of post-treatment prophylaxis to prevent future infections.\nIn much of sub-Saharan Africa, increasing resistance to SP\n[8] is generating concerns that SP-IPT efficacy will be compromised\n[9]. Understanding the relative importance of the treatment and prophylactic effects of SP-IPT and the differential impact of resistance on these effects may have implications for the selection of a new drug to replace SP for IPT.\nIn the context of an observational study of a large cohort of pregnant women in Malawi, the effect of submicroscopic infections on maternal and foetal outcomes was examined. All women received SP-IPT according to the national policy. At each visit a blood smear was examined and women were treated for malaria infections detected by microscopy. The primary analysis of infections detected solely by microscopy showed an association between infection at enrolment in antenatal care and placental malaria, but only infections detected at the time of delivery were associated with adverse pregnancy outcomes\n[10]. Because higher rates of submicroscopic infection than microscopically detectable infection were anticipated and submicroscopic infections were not treated, it was hypothesized that submicroscopic infections would be associated with placental malaria, maternal anaemia and infant low birth weight. It was further hypothesized that, due to high rates of SP resistance, SP-IPT would fail to clear and prevent submicroscopic infections.", "Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. 
After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al.[10].", " Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.\nPeripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.\n Haemoglobin Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.\nHaemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.\n Placental biopsies Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.\nPlacental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.\n Molecular detection DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website\n[11].\nDNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website\n[11].\n Data analysis Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. 
Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.\nPeripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. 
Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.\n Ethical considerations Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.\nEthical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.", "Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.", "Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.", "Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.", "DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. 
Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website\n[11].", "Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.", "Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. 
All data were recorded and analysed anonymously.", "Among women with at least one submicroscopic infection, 69% had submicroscopic infection detected at enrolment (47/68). Submicroscopic infections were detected in each trimester and at delivery. Based on estimated gestational age at enrolment, submicroscopic infection was detected in 12% (6/50) of first trimester, 6% (63/1,050) of second trimester, and 1.4% (19/1,372) of third trimester samples prior to delivery. These infections occurred in six, fifty-six, and eighteen women, respectively. Twelve women had infection in more than one trimester. After excluding the submicroscopic infections present at enrolment, the prevalence of submicroscopic infection was higher in the second trimester than the third trimester [22/650 (3.4%) versus 19/1,372 (1.4%) of samples, p = 0.003].\nNine of 308 women (3%) with peripheral samples obtained at delivery had submicroscopic infection. Two women with submicroscopic infection at delivery also had submicroscopic infection at their last antepartum visit, thus infection at delivery was not considered a new submicroscopic infection.\nSubmicroscopic infection was not associated with documented fever (p = 0.4) or reported fever within 48 hours prior to the visit (p = 0.2). Three women reported cough, sneezing and body pains at a visit during which submicroscopic infection was detected. Otherwise, no women reported any symptoms of illness during visits at which submicroscopic infection was detected. However, submicroscopic infection was associated with a 0.6 g/dL decrease in mean haemoglobin at the time of detection (11.5 g/dL vs 12.1 g/dL, p = 0.001). This decrease remained significant after adjusting for gestational age and repeated measurements.", "Women who had submicroscopic infections were more likely to have placental malaria than those without infections. After controlling for microscopically detectable infection, women with submicroscopic infection were more than seven-fold more likely to have placental malaria than women without any documented infection during pregnancy (Table \n2). Each additional episode of submicroscopic infection increased the risk of placental malaria nearly five-fold (OR 4.9, 95% CI 2.7-9.1, p < 0.001). All women with submicroscopic infection detected in peripheral blood at delivery had placental malaria. Thirty-five percent (39/110) of women who had evidence of placental malaria did not have any microscopic or submicroscopic peripheral infections detected during pregnancy or at delivery.\nAssociation of submicroscopic infection during pregnancy and placental malaria\nCI: confidence interval.\nAmong women without microscopic infection during pregnancy, submicroscopic infection detected at enrolment was more strongly associated with placental malaria than submicroscopic infection detected after enrolment. After excluding infections detected at enrolment, third trimester infections were more strongly associated with placental malaria than second trimester infections (Table \n3).\nTiming of submicroscopic infection during pregnancy and odds of placental malaria among women without microscopically detected infection\n*Four women did not have molecular data available from their enrolment visit.\nCI: confidence interval.\nSubmicroscopic infection during pregnancy, at delivery or in the placenta was not associated with adverse maternal or foetal outcomes. 
The overall rates of these adverse outcomes were: 18% low birth weight, 13% small for gestational age, 15% preterm delivery and 11% maternal anaemia at delivery. The prevalence of these outcomes was the same in women who did and did not have submicroscopic infection in their peripheral blood or placenta (Table \n4).\nMaternal and foetal outcomes in singleton, live births from women with only submicroscopic infection compared to women without infection during pregnancy, at delivery or in the placenta\nCI: confidence interval.\n[Hb]: haemoglobin concentration.", "The ability of SP to clear submicroscopic infections was evaluated by examining women who had submicroscopic infection and a subsequent visit within 28 days (N = 79 visits). Women who received a dose of SP when they had a submicroscopic infection were less likely to have an infection at their next visit within 28 days than women with submicroscopic infections who did not receive SP (23.9 vs 48.5%; p = 0.02).\nThe prophylactic efficacy of SP was assessed by comparing the rates of malaria infection within 28 days in women who did and did not receive SP when they had no malaria infection. A total of 1,779 visits were included in the analysis. Women who did and did not receive SP had the same rate of parasitaemia within the next 28 days (2.0 vs 2.2%; p = 0.83) (Figure \n2).\nSP-IPT treatment of submicroscopic infection and prevention of any infection.", "IPT: Intermittent preventive treatment; SP: Sulphadoxine-pyrimethamine; SP-IPT: Intermittent preventive treatment with sulphadoxine-pyrimethamine; PCR: Polymerase chain reaction; qPCR: Quantitative polymerase chain reaction.", "The authors have declared that they have no financial disclosures or competing interests.", "LKP, TET and MKL conceived of and designed the field study. LKP, TET, MKL, PCT, MM, PW, and GM conducted the clinical study. LMC and MKL designed the experiments. KM, SK and AM completed the histopathology. LMC, SB, RM, SJ, and KBS conducted the experiments. LMC and MKL performed the analysis and led the writing of the manuscript. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Laboratory procedures", "Microscopy", "Haemoglobin", "Placental biopsies", "Molecular detection", "Data analysis", "Ethical considerations", "Results", "Timing and clinical presentation of submicroscopic infection", "Impact of submicroscopic infection", "Effect of SP-IPT on submicroscopic infections", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Each year, 25 million pregnant women in sub-Saharan Africa are at risk for malaria infection. Malaria during pregnancy is associated with maternal anaemia and infant low birth weight. Malaria during pregnancy is estimated to result in 100,000 infant deaths annually in Africa\n[1].\nMalaria infections during pregnancy may be identified by microscopic examination of a blood smear or may be submicroscopic, detectable only by more sensitive molecular methods. Studies in a variety of transmission settings have shown that submicroscopic infections can be up to five times more common than microscopic infections during pregnancy\n[2-7]. While the associations between microscopically detectable malaria during pregnancy and maternal anaemia at delivery and low birth weight have been supported in most studies\n[1], the impact of submicroscopic infection during pregnancy on these adverse outcomes has not been systematically assessed.\nBecause malaria infections during pregnancy are often asymptomatic, a key intervention to decrease the burden of malaria in pregnancy is intermittent preventive treatment (IPT), providing anti-malarial chemotherapy at routine intervals during pregnancy. Currently, the recommended anti-malarial chemotherapy for IPT in the majority of Africa is sulphadoxine-pyrimethamine (SP), which is intended to cure current infections and to provide a period of post-treatment prophylaxis to prevent future infections.\nIn much of sub-Saharan Africa, increasing resistance to SP\n[8] is generating concerns that SP-IPT efficacy will be compromised\n[9]. Understanding the relative importance of the treatment and prophylactic effects of SP-IPT and the differential impact of resistance on these effects may have implications for the selection of a new drug to replace SP for IPT.\nIn the context of an observational study of a large cohort of pregnant women in Malawi, the effect of submicroscopic infections on maternal and foetal outcomes was examined. All women received SP-IPT according to the national policy. At each visit a blood smear was examined and women were treated for malaria infections detected by microscopy. The primary analysis of infections detected solely by microscopy showed an association between infection at enrolment in antenatal care and placental malaria, but only infections detected at the time of delivery were associated with adverse pregnancy outcomes\n[10]. Because higher rates of submicroscopic infection than microscopically detectable infection were anticipated and submicroscopic infections were not treated, it was hypothesized that submicroscopic infections would be associated with placental malaria, maternal anaemia and infant low birth weight. It was further hypothesized that, due to high rates of SP resistance, SP-IPT would fail to clear and prevent submicroscopic infections.", " Study population Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. 
After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al.[10].\nFour-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al.[10].\n Laboratory procedures Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.\nPeripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.\n Haemoglobin Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.\nHaemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.\n Placental biopsies Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.\nPlacental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.\n Molecular detection DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. 
Extraction and qPCR protocols are described on our website\n[11].\nDNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website\n[11].\n Data analysis Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.\nPeripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. 
Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.\n Ethical considerations Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.\nEthical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. 
All data were recorded and analysed anonymously.\n Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.\nPeripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated.\n Haemoglobin Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.\nHaemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB.\n Placental biopsies Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.\nPlacental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment.\n Molecular detection DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website\n[11].\nDNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website\n[11].\n Data analysis Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. 
To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.\nPeripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. 
Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit.\nData analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05.\n Ethical considerations Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.\nEthical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.", "Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al.[10].", " Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. 
Ethical considerations
Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before any study-related activities were conducted. Participants could withdraw from the study at any time. All data were recorded and analysed anonymously.
Results
Two thousand six hundred and eighty-one peripheral blood samples from 448 women enrolled in the study were screened for submicroscopic infection. Two women were excluded from this analysis because no filter papers were available for molecular testing. Ninety-five incident submicroscopic infections were detected in 68 women, corresponding to 0.6 episodes of submicroscopic malaria per person-year of follow-up. As previously published, women in this cohort also experienced 0.6 episodes of microscopically detected infection per person-year of follow-up [10]. The overall rate of malaria infection was therefore 1.2 episodes per person-year of follow-up.
Among women with submicroscopic infections, the mean number of infections was 1.4 (95% CI 1.2-1.6, range 1–5). Forty-nine women (72%) with submicroscopic infections never had an infection detected by microscopy during pregnancy. Submicroscopic infection was not associated with first versus second pregnancy, age, bed net use, or malaria treatment during pregnancy prior to enrolment, but was associated with enrolment at an earlier gestational age, more frequent visits, and lack of secondary school attendance (Table 1). Gestational age at enrolment was inversely correlated with the number of visits.

Table 1: Enrolment characteristics of women with and without submicroscopic infection detected during pregnancy, irrespective of the presence of microscopic infection (CI: confidence interval).

Among 311 placentas with both histological and molecular data available for evaluation, 104 (33%) had evidence of placental malaria (Figure 1). Thirty-three (11%) had both parasites and haemozoin detected microscopically. Three (1%) had no haemozoin but had parasites detected by histology and confirmed by qPCR. Thirty-eight (12%) had haemozoin without histological or molecular evidence of active parasite infection. Thirty (10%) had submicroscopic infection, with no histological evidence of parasites or haemozoin.

Figure 1: Frequency of histological and molecular findings of placental malaria. Histological results were not available for six placentas; all six had parasites detected by qPCR.
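To make the categories shown in Figure 1 concrete, the following is a minimal sketch of how each placenta could be assigned to one of these groups from its histology and qPCR results. The function and field names are hypothetical and do not come from the study's analysis code.

```python
# Minimal sketch of the placental malaria categories described above.
# Names are hypothetical; the study's own analysis was done in Stata.
from typing import Optional


def classify_placenta(histology_parasites: Optional[bool],
                      histology_haemozoin: Optional[bool],
                      qpcr_positive: bool) -> str:
    """Assign a placenta to one of the descriptive categories.

    Histology values may be None when no histological result is available;
    such placentas are still counted as infected if qPCR detects parasites.
    """
    if histology_parasites is None or histology_haemozoin is None:
        return ("placental malaria (qPCR only, histology missing)"
                if qpcr_positive else "indeterminate (histology missing)")
    if histology_parasites and histology_haemozoin:
        return "parasites and haemozoin on histology"
    if histology_parasites:
        confirmation = "confirmed" if qpcr_positive else "not confirmed"
        return f"parasites on histology without haemozoin, {confirmation} by qPCR"
    if histology_haemozoin:
        return "haemozoin only, no evidence of active parasite infection"
    if qpcr_positive:
        return "submicroscopic placental infection (qPCR only)"
    return "no placental malaria"


if __name__ == "__main__":
    print(classify_placenta(True, True, True))    # parasites and haemozoin
    print(classify_placenta(False, False, True))  # submicroscopic (qPCR only)
    print(classify_placenta(False, True, False))  # haemozoin only
    print(classify_placenta(None, None, True))    # missing histology, qPCR positive
```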
Timing and clinical presentation of submicroscopic infection
Among women with at least one submicroscopic infection, 69% (47/68) had a submicroscopic infection detected at enrolment. Submicroscopic infections were detected in every trimester and at delivery. Based on estimated gestational age at enrolment, submicroscopic infection was detected in 12% (6/50) of first trimester, 6% (63/1,050) of second trimester, and 1.4% (19/1,372) of third trimester samples obtained before delivery. These infections occurred in six, fifty-six, and eighteen women, respectively; twelve women had infections in more than one trimester. After excluding the submicroscopic infections present at enrolment, the prevalence of submicroscopic infection was higher in the second trimester than in the third trimester [22/650 (3.4%) versus 19/1,372 (1.4%) of samples, p = 0.003].

Nine of 308 women (3%) with peripheral samples obtained at delivery had submicroscopic infection. Two of these women also had submicroscopic infection at their last antepartum visit, so their infections at delivery were not counted as new submicroscopic infections.

Submicroscopic infection was not associated with documented fever (p = 0.4) or with fever reported in the 48 hours before the visit (p = 0.2). Three women reported cough, sneezing and body pains at a visit at which submicroscopic infection was detected; otherwise, no women reported symptoms of illness at visits at which submicroscopic infection was detected. However, submicroscopic infection was associated with a 0.6 g/dL decrease in mean haemoglobin at the time of detection (11.5 vs 12.1 g/dL, p = 0.001). This decrease remained significant after adjusting for gestational age and repeated measurements.

Impact of submicroscopic infection
Women who had submicroscopic infections were more likely to have placental malaria than women without infections. After controlling for microscopically detectable infection, women with submicroscopic infection were more than seven-fold more likely to have placental malaria than women without any documented infection during pregnancy (Table 2). Each additional episode of submicroscopic infection increased the odds of placental malaria nearly five-fold (OR 4.9, 95% CI 2.7-9.1, p < 0.001). All women with submicroscopic infection detected in peripheral blood at delivery had placental malaria. Thirty-five percent (39/110) of women with evidence of placental malaria had no microscopic or submicroscopic peripheral infection detected during pregnancy or at delivery.

Table 2: Association of submicroscopic infection during pregnancy and placental malaria (CI: confidence interval).

Among women without microscopic infection during pregnancy, submicroscopic infection detected at enrolment was more strongly associated with placental malaria than submicroscopic infection detected after enrolment. After excluding infections detected at enrolment, third trimester infections were more strongly associated with placental malaria than second trimester infections (Table 3).

Table 3: Timing of submicroscopic infection during pregnancy and odds of placental malaria among women without microscopically detected infection (CI: confidence interval). *Four women did not have molecular data available from their enrolment visit.
Submicroscopic infection during pregnancy, at delivery or in the placenta was not associated with adverse maternal or foetal outcomes. The overall rates of these adverse outcomes were 18% for low birth weight, 13% for small for gestational age, 15% for preterm delivery, and 11% for maternal anaemia at delivery. The prevalence of each outcome was the same in women who did and did not have submicroscopic infection in their peripheral blood or placenta (Table 4).

Table 4: Maternal and foetal outcomes in singleton live births from women with only submicroscopic infection compared to women without infection during pregnancy, at delivery or in the placenta (CI: confidence interval; [Hb]: haemoglobin concentration).

Effect of SP-IPT on submicroscopic infections
The ability of SP to clear submicroscopic infections was evaluated among women who had a submicroscopic infection and a subsequent visit within 28 days (N = 79 visits). Women who received a dose of SP at a visit with submicroscopic infection were less likely to have an infection at their next visit within 28 days than women with submicroscopic infection who did not receive SP (23.9 vs 48.5%; p = 0.02).
The prophylactic efficacy of SP was assessed by comparing the rates of malaria infection within 28 days between women who did and did not receive SP at a visit with no malaria infection; 1,779 visits were included in this analysis. Women who did and did not receive SP had the same rate of parasitaemia within the next 28 days (2.0 vs 2.2%; p = 0.83) (Figure 2).

Figure 2: SP-IPT treatment of submicroscopic infection and prevention of any infection.
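The clearance comparison above is a two-by-two comparison of proportions. As an illustration only, the snippet below runs Fisher's exact test on a contingency table whose cell counts are hypothetical (chosen to roughly reproduce the reported 23.9% versus 48.5% over 79 visits); they are not the study's actual data.

```python
# Hypothetical 2x2 example of the proportion comparison described above.
# The counts are illustrative only and are not the study's data.
from scipy.stats import fisher_exact

#              infected at next visit, not infected
received_sp = [11, 35]   # 11/46, roughly 23.9% (hypothetical)
no_sp = [16, 17]         # 16/33, roughly 48.5% (hypothetical)

odds_ratio, p_value = fisher_exact([received_sp, no_sp])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```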
Discussion
Even with administration of SP-IPT three times during pregnancy and active case detection and treatment of malaria, submicroscopic infection of the peripheral blood during pregnancy is frequent, especially at the first antenatal visit. Low-density malaria infections, detectable only by molecular methods, are associated with placental malaria and with a decrease in maternal haemoglobin. SP-IPT appears to treat some submicroscopic infections but does not prevent new infections. These findings have important implications for the development of further interventions for the prevention of malaria in pregnancy.

This is the first report of the effect of submicroscopic infection throughout pregnancy on maternal and foetal outcomes. Prior studies of submicroscopic malaria during pregnancy have been limited by their cross-sectional, rather than longitudinal, design [2,3,6,7,12-15]. Linking antenatal infections to specific delivery and neonatal outcomes is essential for evaluating the importance of submicroscopic infections and for designing strategies to optimize maternal and infant health.
The results are consistent with previous studies indicating that submicroscopic infection is associated with anaemia at the time of detection [12,15]. Two previous studies reported an association between submicroscopic infection at delivery and infant low birth weight [6] or maternal anaemia at delivery [7]. However, these cross-sectional studies took place prior to, or as part of, IPT trials and did not assess previous exposure to or treatment of malaria infection during pregnancy; submicroscopic infection may therefore have been a marker of frequent malaria exposure or of lack of IPT administration before delivery. Because of the detailed follow-up and the treatment of microscopically detectable infection in the present study, its results more accurately reflect the specific contribution of submicroscopic infections to clinical outcomes.

The lack of association between submicroscopic infection during pregnancy and adverse outcomes, despite the strong association with placental malaria, may reflect the design of the study and the clinical care the participants received. Examination of the effect of microscopically detectable infections on maternal and foetal outcomes yielded similar results, as published previously [10]. The combination of three doses of SP-IPT and active case detection and treatment may have protected against adverse effects. It is also possible that other interventions, such as screening for anaemia or appropriate treatment of non-malaria infections, improved the overall health of the women and decreased the frequency of adverse outcomes. Given the low prevalence of adverse outcomes, sample size also limited the ability to detect associations: previous studies have demonstrated a 50% increase in rates of low birth weight [6] and maternal anaemia [7] at delivery associated with malaria during pregnancy, and this study had only 15 and 7% power, respectively, to detect a 50% increase in these outcomes given their baseline rates in this cohort.

The presence of placental malaria in women who had no peripheral malaria infection detected during pregnancy was unexpected. It may be due to transient infections that were not captured during sampling intervals; results from this study indicate that pregnant women frequently clear submicroscopic infections spontaneously. Another possible source of placental infection is infection acquired before enrolment in antenatal care. The strong association between malaria at enrolment and placental infection suggests that early infections, even those already cleared by the time of the first antenatal visit, may have the greatest likelihood of causing placental pathology.
The modest efficacy of SP in clearing submicroscopic infection and its lack of prophylactic efficacy likely reflect the high level of SP resistance among malaria parasites in Malawi [8,16]. This is consistent with studies of SP-IPT in infants, which showed that the protective effect of SP lasts approximately one month and that the duration of protection is inversely related to the presence of in vitro markers of SP resistance [17,18].

Several features of this study may limit the generalizability of the findings. Women had an average of six visits during the study and were screened by microscopy at each encounter, so they received more screening and treatment than they would have during routine antenatal care. Despite this intensive follow-up and medical attention, submicroscopic infection was strongly associated with placental malaria and with anaemia during pregnancy; in routine settings, the association between submicroscopic malaria and adverse outcomes would likely be more pronounced. Having few encounters during the first trimester and sampling only intermittently limited the analysis of the timing of infections leading to placental infection. Women with submicroscopic infection did have more visits during the study, suggesting that more frequent sampling would increase the detection of submicroscopic infection. Alternatively, women with submicroscopic infections may have had symptoms that led them to present to the antenatal clinic more often; however, there was no association between submicroscopic infection and fever, or between submicroscopic infection and routine versus sick visits.

These results have important implications for the development of alternatives to SP-IPT for the prevention of malaria in pregnancy. Two possible strategies are intermittent screening and treatment, or IPT with an anti-malarial more effective than SP. In one trial, intermittent screening and treatment was shown to be equivalent to SP-IPT [19]. The screening arm of that study used antigen-detecting rapid diagnostic tests, whose parasite density threshold for detection, like that of microscopy, is ten- to 100-fold higher than that of qPCR [20,21]; rapid diagnostic tests, like microscopy, will therefore fail to detect infections at the submicroscopic level. A screening-and-treatment strategy based on rapid diagnostic tests will thus miss infections that this study shows are associated with placental malaria. Within the usual antenatal package of four visits during pregnancy, if IPT is not provided, many microscopic and all submicroscopic infections will go untreated; under these conditions, with prolonged and untreated episodes of low-density malaria infection, the adverse effects of placental malaria would be expected to occur [22-24].

If IPT with a more effective anti-malarial is considered, for example in ongoing trials of IPT with dihydroartemisinin-piperaquine [25], azithromycin combined with chloroquine [26], and mefloquine [27], it will be important to characterize both the treatment and the prophylactic effects of candidate drugs. Most drug efficacy trials examine the ability of a drug to cure the infection present at the time of administration; however, IPT is most beneficial in high-transmission settings if it both cures current infections and prevents subsequent infections. The results of this study indicate that the loss of prophylactic efficacy may precede the loss of curative efficacy. Thus, in trials of alternative anti-malarials for IPT, prophylactic efficacy should be included as an outcome measure.
The finding that the highest rates of asymptomatic and submicroscopic malaria were detected at women's first antenatal care visit highlights an essential shortcoming of current policies to prevent pregnancy-associated malaria. Interventions such as the distribution of bed nets at antenatal clinics and the administration of IPT are delayed until women seek antenatal care, which in Malawi, as in other countries in sub-Saharan Africa, typically occurs late in the second trimester or in the third trimester [28]. By the time women have their first antenatal visit, peripheral and placental infection has already been established. Innovative interventions that can target women early in pregnancy, or even before pregnancy, are urgently needed to further decrease the burden of malaria during pregnancy.

Conclusions
Submicroscopic malaria infection is common, occurs early in pregnancy and is associated with placental malaria. SP-IPT appears to treat some submicroscopic infections but does not prevent new infections. To effectively control pregnancy-associated malaria, new interventions are required that target women before their first antenatal care visit and that effectively treat and prevent all malaria infections.

Abbreviations
IPT: Intermittent preventive treatment; SP: Sulphadoxine-pyrimethamine; SP-IPT: Intermittent preventive treatment with sulphadoxine-pyrimethamine; PCR: Polymerase chain reaction; qPCR: Quantitative polymerase chain reaction.

Competing interests
The authors have declared that they have no financial disclosures or competing interests.

Authors' contributions
LKP, TET and MKL conceived of and designed the field study. LKP, TET, MKL, PCT, MM, PW, and GM conducted the clinical study. LMC and MKL designed the experiments. KM, SK and AM completed the histopathology. LMC, SB, RM, SJ, and KBS conducted the experiments. LMC and MKL performed the analysis and led the writing of the manuscript. All authors read and approved the final manuscript.
Keywords: Malaria in pregnancy; Submicroscopic infection; Placental malaria; Sulphadoxine-pyrimethamine intermittent preventive treatment; Sulphadoxine-pyrimethamine resistance
Background: Each year, 25 million pregnant women in sub-Saharan Africa are at risk for malaria infection. Malaria during pregnancy is associated with maternal anaemia and infant low birth weight. Malaria during pregnancy is estimated to result in 100,000 infant deaths annually in Africa [1]. Malaria infections during pregnancy may be identified by microscopic examination of a blood smear or may be submicroscopic, detectable only by more sensitive molecular methods. Studies in a variety of transmission settings have shown that submicroscopic infections can be up to five times more common than microscopic infections during pregnancy [2-7]. While the associations between microscopically detectable malaria during pregnancy and maternal anaemia at delivery and low birth weight have been supported in most studies [1], the impact of submicroscopic infection during pregnancy on these adverse outcomes has not been systematically assessed. Because malaria infections during pregnancy are often asymptomatic, a key intervention to decrease the burden of malaria in pregnancy is intermittent preventive treatment (IPT), providing anti-malarial chemotherapy at routine intervals during pregnancy. Currently, the recommended anti-malarial chemotherapy for IPT in the majority of Africa is sulphadoxine-pyrimethamine (SP), which is intended to cure current infections and to provide a period of post-treatment prophylaxis to prevent future infections. In much of sub-Saharan Africa, increasing resistance to SP [8] is generating concerns that SP-IPT efficacy will be compromised [9]. Understanding the relative importance of the treatment and prophylactic effects of SP-IPT and the differential impact of resistance on these effects may have implications for the selection of a new drug to replace SP for IPT. In the context of an observational study of a large cohort of pregnant women in Malawi, the effect of submicroscopic infections on maternal and foetal outcomes was examined. All women received SP-IPT according to the national policy. At each visit a blood smear was examined and women were treated for malaria infections detected by microscopy. The primary analysis of infections detected solely by microscopy showed an association between infection at enrolment in antenatal care and placental malaria, but only infections detected at the time of delivery were associated with adverse pregnancy outcomes [10]. Because higher rates of submicroscopic infection than microscopically detectable infection were anticipated and submicroscopic infections were not treated, it was hypothesized that submicroscopic infections would be associated with placental malaria, maternal anaemia and infant low birth weight. It was further hypothesized that, due to high rates of SP resistance, SP-IPT would fail to clear and prevent submicroscopic infections. Methods: Study population Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. 
After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al.[10]. Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi between June 2009 and June 2010. All women were in their first or second pregnancy and were less than or equal to 28 weeks gestational age based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, these same specimens and placental blood and tissue samples were collected. After quickening, women received SP-IPT up to three times separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al.[10]. Laboratory procedures Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated. Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated. Haemoglobin Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB. Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB. Placental biopsies Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment. Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment. Molecular detection DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website [11]. 
DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website [11]. Data analysis Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit. Data analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05. Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. 
Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit. Data analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05. Ethical considerations Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously. Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously. Microscopy Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. 
A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated. Peripheral blood slides were Field’s stained and examined using a 100× oil immersion objective to detect and quantify parasitaemia using an estimated white blood cell count of 8,000 per microlitre. A diagnosis of microscopically detectable malaria infection was made when asexual stage malaria parasites were detected on a thick film. A smear was recorded as negative after examining 100 high power fields. Two microscopists read all slides; in cases of disagreement between the readings, a third expert reader adjudicated. Haemoglobin Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB. Haemoglobin measurement was obtained from a finger prick blood samples using HemoCue® AB. Placental biopsies Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment. Placental biopsies were preserved in 10% neutral buffered formalin, embedded in paraffin wax, cut into four micron thick sections, and then stained with haematoxylin and eosin. Slides were examined for presence of malaria parasites and haemozoin pigment. Molecular detection DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website [11]. DNA was extracted from frozen placental samples and dried blood spots of peripheral blood and placental blood. Quantitative real time polymerase chain reaction (qPCR) was used to detect the gene for Plasmodium falciparum lactate dehydrogenase. Extraction and qPCR protocols are described on our website [11]. Data analysis Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. 
Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. Data from twin gestations were included in analyses of placental malaria but excluded from analyses of birth outcomes. Analyses of the effects of SP were restricted to visits that occurred within 28 days after the first visit. Data analysis was performed using STATA version 12.1 software (Stata Corp, College Station, TX, USA). Student’s t-tests or Wilcoxon rank-sum were used for comparisons of normal and non-normal distributions of continuous variables, respectively. Chi-squared and Fisher’s exact tests were used for comparisons of proportions. Odds ratios were calculated using univariate and multivariate logistic regression. Analyses of haemoglobin during pregnancy were controlled for gestational age and adjusted for repeated measurements using robust cluster variance estimation. All P-values are two-sided, and statistical significance was set at P ≤0.05. Peripheral blood infections were categorized as either microscopic (smear positive, confirmed by qPCR) or submicroscopic (smear negative, but qPCR positive). An episode of submicroscopic infection was defined as a positive qPCR and a negative malaria smear obtained at the same time. Sequential episodes of submicroscopic parasitaemia were only counted once. Gestational age at enrolment was calculated based on the last menstrual period or by the fundal height if the last menstrual period was not known. Gestational age at birth was determined based on the last menstrual period and the Ballard score. Fundal height at enrolment was included in the estimate of gestational age at birth if the last menstrual period and Ballard estimates were more than two weeks’ discrepant. The first trimester was defined as conception through 13 weeks and the second trimester was from 14 through 27 weeks. Low birth weight was defined as birth weight less than 2,500 g. To further investigate low birth weight, preterm birth and small for gestational age were analysed separately. If the gestational age at delivery was less than 37 weeks, the delivery was classified as preterm. Infants were considered small for gestational age if the birth weight for gestational age, based on WHO growth curves, was less than a Z-score of −2. Fever was defined as a measured axillary temperature ≥37.5°C. Maternal anaemia was defined as haemoglobin <11.0 g/dL. Placental malaria was classified as the presence of haemozoin pigment or parasites, by either histology or qPCR. Submicroscopic placental malaria was defined as parasites detected by qPCR, but not histology. Only women with both molecular and histological placental results were included in descriptive analyses of placental malaria. However, women missing histological results were included in analysis of the presence or absence of placental malaria because detection of parasites by qPCR alone is sufficient to categorize them as having placental malaria. 
Ethical considerations: Ethical approval was obtained from the University of Malawi College of Medicine Research and Ethics Committee and the University of Maryland Baltimore Institutional Review Board. Written informed consent was obtained from all participants before conducting any study-related activities. Participants had the option to withdraw from the study at any time. All data were recorded and analysed anonymously.
Study population: Four-hundred and fifty pregnant women were enrolled in an observational cohort study of malaria during pregnancy in Blantyre, Malawi, between June 2009 and June 2010. All women were in their first or second pregnancy and were at 28 weeks of gestation or less based on clinical assessment at enrolment. Women were followed monthly during pregnancy and encouraged to come to the clinic if they had intercurrent illness. At each encounter, peripheral blood smears and dried blood spots on filter paper were collected. At delivery, the same specimens plus placental blood and tissue samples were collected. After quickening, women received SP-IPT up to three times, separated by at least four weeks. Women with malaria detectable by blood smear at routine or sick visits were treated for malaria in accordance with the national guidelines (quinine in the first trimester and artemether-lumefantrine in the second and third trimesters). Details are described by Kalilani-Phiri et al. [10].
Results: Two thousand six hundred and eighty-one peripheral blood samples from 448 women enrolled in the study were screened for submicroscopic infection. Two women were not included in the analysis because they did not have any filter papers available for molecular analysis. Ninety-five incident submicroscopic infections were detected in 68 women. Pregnant women had 0.6 episodes of submicroscopic malaria per person-year of follow-up. As previously published, women in this cohort also experienced 0.6 episodes of microscopically detected infection per person-year of follow-up [10]. Thus, the overall rate of malaria infection was 1.2 episodes per person-year of follow-up. Among women with submicroscopic infections, the mean number of infections was 1.4 (95% CI 1.2-1.6, range 1–5). Forty-nine women (72%) with submicroscopic infections never had an infection detected by microscopy during pregnancy.
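The incidence figures above follow directly from the definition of an incidence rate (episodes divided by person-time at risk). The short calculation below is only a back-of-the-envelope consistency check; the implied person-years are an inference from the rounded published rate, not a number reported by the study.

```python
# Incidence rate = number of episodes / person-years of follow-up.
submicroscopic_episodes = 95
rate_per_person_year = 0.6

# Implied follow-up time (approximate, because the published rate is rounded):
implied_person_years = submicroscopic_episodes / rate_per_person_year
print(f"~{implied_person_years:.0f} person-years of follow-up")  # about 158

# Microscopic and submicroscopic episodes occurred at the same rate, so the
# combined rate is simply additive: 0.6 + 0.6 = 1.2 episodes per person-year.
overall_rate = 0.6 + 0.6
```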
Submicroscopic infection was not associated with first versus second pregnancy, age, bed net use, or malaria treatment during pregnancy prior to enrolment, but was associated with enrolment at an earlier gestational age, more frequent visits, and lack of secondary school attendance (Table 1). Gestational age at enrolment was inversely correlated with the number of visits. Among 311 placentas with both histological and molecular data available for evaluation, 104 (33%) had evidence of placental malaria (Figure 1). Thirty-three (11%) had both parasites and haemozoin detected microscopically. Three (1%) had no haemozoin but did have parasites detected by histology and confirmed by qPCR. Thirty-eight (12%) had haemozoin without histological or molecular evidence of active parasite infection. Thirty (10%) had submicroscopic infection with no histological evidence of parasites or haemozoin.
Table 1: Enrolment characteristics of women with and without submicroscopic infection detected during pregnancy, irrespective of the presence of microscopic infection. CI: confidence interval.
Figure 1: Frequency of histological and molecular findings of placental malaria. Histological results were not available for six placentas; all had parasites detected by qPCR.
Timing and clinical presentation of submicroscopic infection: Among women with at least one submicroscopic infection, 69% had submicroscopic infection detected at enrolment (47/68). Submicroscopic infections were detected in each trimester and at delivery. Based on estimated gestational age at enrolment, submicroscopic infection was detected in 12% (6/50) of first trimester, 6% (63/1,050) of second trimester, and 1.4% (19/1,372) of third trimester samples prior to delivery. These infections occurred in six, fifty-six, and eighteen women, respectively. Twelve women had infection in more than one trimester. After excluding the submicroscopic infections present at enrolment, the prevalence of submicroscopic infection was higher in the second trimester than in the third trimester [22/650 (3.4%) versus 19/1,372 (1.4%) of samples, p = 0.003]. Nine of 308 women (3%) with peripheral samples obtained at delivery had submicroscopic infection. Two women with submicroscopic infection at delivery also had submicroscopic infection at their last antepartum visit, so infection at delivery was not considered a new submicroscopic infection. Submicroscopic infection was not associated with documented fever (p = 0.4) or reported fever within 48 hours prior to the visit (p = 0.2). Three women reported cough, sneezing and body pains at a visit during which submicroscopic infection was detected; otherwise, no women reported any symptoms of illness during visits at which submicroscopic infection was detected. However, submicroscopic infection was associated with a 0.6 g/dL decrease in mean haemoglobin at the time of detection (11.5 g/dL vs 12.1 g/dL, p = 0.001). This decrease remained significant after adjusting for gestational age and repeated measurements.
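The second- versus third-trimester comparison above is a standard test of two proportions on the sample counts given in the text. A minimal sketch with scipy is shown below; it is not the authors' Stata analysis, and the exact p-value depends on which test variant is used, but with these counts an uncorrected chi-squared test lands close to the reported p = 0.003.

```python
from scipy.stats import chi2_contingency

# Samples with and without submicroscopic infection, after excluding infections
# already present at enrolment (counts taken from the text above).
second_trimester = [22, 650 - 22]     # 3.4% of 650 samples
third_trimester = [19, 1372 - 19]     # 1.4% of 1,372 samples

chi2, p, dof, _ = chi2_contingency([second_trimester, third_trimester],
                                   correction=False)  # plain Pearson chi-squared
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```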
Impact of submicroscopic infection: Women who had submicroscopic infections were more likely to have placental malaria than those without infections. After controlling for microscopically detectable infection, women with submicroscopic infection were more than seven-fold more likely to have placental malaria than women without any documented infection during pregnancy (Table 2). Each additional episode of submicroscopic infection increased the risk of placental malaria nearly five-fold (OR 4.9, 95% CI 2.7-9.1, p < 0.001). All women with submicroscopic infection detected in peripheral blood at delivery had placental malaria. Thirty-five percent (39/110) of women who had evidence of placental malaria did not have any microscopic or submicroscopic peripheral infections detected during pregnancy or at delivery.
Table 2: Association of submicroscopic infection during pregnancy and placental malaria. CI: confidence interval.
Among women without microscopic infection during pregnancy, submicroscopic infection detected at enrolment was more strongly associated with placental malaria than submicroscopic infection detected after enrolment. After excluding infections detected at enrolment, third trimester infections were more strongly associated with placental malaria than second trimester infections (Table 3).
Table 3: Timing of submicroscopic infection during pregnancy and odds of placental malaria among women without microscopically detected infection. *Four women did not have molecular data available from their enrolment visit. CI: confidence interval.
Submicroscopic infection during pregnancy, at delivery or in the placenta was not associated with adverse maternal or foetal outcomes. The overall rates of these adverse outcomes were: 18% low birth weight, 13% small for gestational age, 15% preterm delivery and 11% maternal anaemia at delivery. The prevalence of these outcomes was the same in women who did and did not have submicroscopic infection in their peripheral blood or placenta (Table 4).
Table 4: Maternal and foetal outcomes in singleton, live births from women with only submicroscopic infection compared to women without infection during pregnancy, at delivery or in the placenta. CI: confidence interval. [Hb]: haemoglobin concentration.
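As a reading aid for the dose-response estimate above: in a logistic regression, the per-episode odds ratio acts multiplicatively on the odds scale. The snippet below simply extrapolates the published point estimate of 4.9; it illustrates how such a coefficient is interpreted and is not an observed result from the cohort.

```python
or_per_episode = 4.9   # published odds ratio per additional submicroscopic episode

# Under the fitted logistic model, k episodes multiply the odds of placental
# malaria by or_per_episode ** k (model-based extrapolation, not observed data).
for k in (1, 2, 3):
    print(f"{k} episode(s): odds of placental malaria multiplied by "
          f"~{or_per_episode ** k:.0f}")
```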
Effect of SP-IPT on submicroscopic infections: The ability of SP to clear submicroscopic infections was evaluated by examining women who had a submicroscopic infection and a subsequent visit within 28 days (N = 79 visits). Women who received a dose of SP when they had a submicroscopic infection were less likely to have an infection at their next visit within 28 days than women with submicroscopic infections who did not receive SP (23.9 vs 48.5%; p = 0.02). The prophylactic efficacy of SP was assessed by comparing the rates of malaria infection within 28 days in women who did and did not receive SP when they had no malaria infection. A total of 1,779 visits were included in the analysis. Women who did and did not receive SP had the same rate of parasitaemia within the next 28 days (2.0 vs 2.2%; p = 0.83) (Figure 2).
Figure 2: SP-IPT treatment of submicroscopic infection and prevention of any infection.
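One way to see why SP is described as only moderately effective at clearing submicroscopic infection, and not protective against new infection, is to re-express the percentages above as crude relative reductions. The snippet below does nothing more than that arithmetic; it is not an additional analysis and carries no confidence intervals.

```python
# Per cent of women parasitaemic at the next visit within 28 days, by SP receipt.
persist_with_sp, persist_without_sp = 23.9, 48.5   # clearance analysis (N = 79 visits)
infect_with_sp, infect_without_sp = 2.0, 2.2       # prophylaxis analysis (N = 1,779 visits)

clearance_reduction = 1 - persist_with_sp / persist_without_sp    # ~51%
prophylaxis_reduction = 1 - infect_with_sp / infect_without_sp    # ~9%

print(f"Relative reduction in persistent infection with SP: {clearance_reduction:.0%}")
print(f"Relative reduction in new infection with SP: {prophylaxis_reduction:.0%}")
```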
Discussion: Even with adherence to SP-IPT administration three times during pregnancy and active case detection and treatment of malaria, submicroscopic infection of the peripheral blood during pregnancy is frequent, especially at the first antenatal visit. Low-density malaria infections, detected only by molecular methods, are associated with placental malaria infection and a decrease in maternal haemoglobin levels. SP-IPT appears to treat some submicroscopic infections but does not prevent new infections. These findings have important implications for the development of further interventions for the prevention of malaria in pregnancy. This is the first report of the effect of submicroscopic infection throughout pregnancy on maternal and foetal outcomes. Prior studies of submicroscopic malaria during pregnancy have been limited by being cross-sectional, rather than longitudinal, in nature [2,3,6,7,12-15]. Linking antenatal infections to specific delivery and neonatal outcomes is essential to evaluating the importance of submicroscopic infections and designing strategies to optimize maternal and infant health. The results are consistent with previous studies indicating that submicroscopic infection is associated with anaemia at the time of detection [12,15]. Two previous studies reported an association between submicroscopic infection at delivery and infant low birth weight [6] or maternal anaemia at the time of delivery [7]. However, these cross-sectional studies took place prior to, or as part of, IPT trials and did not assess previous exposure to or treatment of malaria infection during pregnancy.
Thus, submicroscopic infection may have been a marker of frequent malaria exposure or of a lack of IPT administration prior to delivery. Due to the detailed follow-up and treatment of microscopically detectable infection, the results of this study more accurately reflect the specific contribution of submicroscopic infections to clinical outcomes. The lack of association between submicroscopic infection during pregnancy and adverse outcomes, despite the strong association with placental malaria, may have been a function of the design of the study and the clinical care the participants received. Examination of the effect of microscopically detectable infections on maternal and foetal outcomes yielded similar results, as published previously [10]. The combination of three doses of SP-IPT and active case detection and treatment may have protected against adverse effects. It is also possible that other interventions, such as screening for anaemia or the appropriate treatment of non-malaria infections, might have improved the overall health of women, decreasing the frequency of adverse outcomes. Given the low prevalence of adverse outcomes, sample size also limited the ability to detect associations. Previous studies have demonstrated a 50% increase in rates of low birth weight [6] and maternal anaemia [7] at delivery associated with malaria during pregnancy. This study had only 15 and 7% power to detect a 50% increase in low birth weight and maternal anaemia at delivery, respectively, given the baseline rates of these outcomes in this cohort.
The presence of placental malaria infection in women who did not have any peripheral malaria infection detected during pregnancy was unexpected. This phenomenon may be due to transient infections that were not captured during sampling intervals. Results from this study indicate that pregnant women frequently clear submicroscopic infections spontaneously. Another source of placental malaria may be infection that occurred prior to enrolment in antenatal care. The strong association between malaria at enrolment and placental infection suggests that early infections, even those that had been cleared by the time of the first antenatal visit, may have the greatest likelihood of causing placental pathology. The modest efficacy of SP in clearing submicroscopic infection and the lack of prophylactic efficacy likely reflect the high level of SP resistance among malaria parasites in Malawi [8,16]. This is consistent with studies of SP-IPT in infants, which showed that the protective effect of SP lasts approximately one month and that the duration of this protective efficacy is inversely related to the presence of in vitro markers of SP resistance [17,18].
There are several important features of this study that may limit the generalizability of the findings. Women had an average of six visits during the study and were screened with microscopy at each encounter. Thus, women received more screening and treatment while participating in the study than they would have during routine antenatal care. Despite the intensive follow-up and medical attention provided in the study, submicroscopic infection is strongly associated with placental malaria and anaemia during pregnancy. In real-life settings, the association between submicroscopic malaria and adverse outcomes would likely have been more pronounced. Having few encounters during the first trimester and having samples obtained intermittently limited analysis of the timing of infection leading to placental infection. Women with submicroscopic infection did have more visits during the study, suggesting that increasing sampling frequency would increase detection of submicroscopic infection. Alternatively, women with submicroscopic infections may have symptoms that lead them to present to the antenatal clinic more frequently. However, there was no association between submicroscopic infection and fever, or between routine and sick visits.
These results have important implications for the development of alternatives to SP-IPT for the prevention of malaria in pregnancy. Two possible strategies are intermittent screening and treatment, or IPT with an anti-malarial more effective than SP. In one trial, intermittent screening and treatment was shown to be equivalent to SP-IPT [19]. The intermittent screening and treatment arm in that study used antigen-detecting rapid diagnostic tests for screening. The parasite density threshold for detection by rapid diagnostic tests, like that for microscopy, is ten- to 100-fold higher than for qPCR [20,21]. Rapid diagnostic tests, like microscopy, will therefore fail to detect infection at the submicroscopic level. Thus, a rapid diagnostic test-based screening and treatment strategy will fail to detect infections that this study demonstrates are associated with placental malaria. Within the usual antenatal package of four visits in pregnancy, if IPT is not provided, many microscopic and all submicroscopic infections will go untreated. Under these conditions, with prolonged and untreated episodes of low-density malaria infection, the adverse effects of placental malaria would be expected to occur [22-24]. If IPT with a more effective anti-malarial is considered, for example as in ongoing trials of IPT with dihydroartemisinin-piperaquine [25], azithromycin combined with chloroquine [26] and mefloquine [27], it will be important to characterize both the treatment and the prophylactic effect of candidate drugs. Most drug efficacy trials examine the ability of a drug to cure the infection present at the time of drug administration. However, IPT is most beneficial in high-transmission settings if it cures current infections and prevents subsequent infections. The results of this study indicate that the loss of prophylactic efficacy may precede the loss of curative efficacy. Thus, in trials of alternative anti-malarials for IPT, prophylactic efficacy should be included as an outcome measure.
The finding that the highest rates of asymptomatic and submicroscopic malaria are detected in women at their first antenatal care visit highlights an essential shortcoming of current policies to prevent pregnancy-associated malaria. Interventions such as the distribution of bed nets at antenatal clinics and the administration of IPT are delayed until women seek antenatal care. In Malawi, as in other countries in sub-Saharan Africa, this is typically late in the second trimester or into the third trimester [28]. By the time women have their first antenatal visit, peripheral and placental infection has already been established. Innovative interventions that can target women early in pregnancy, or even prior to pregnancy, are urgently needed to further decrease the burden of malaria during pregnancy.
Conclusions: Submicroscopic malaria infection is common, occurs early in pregnancy and is associated with placental malaria. SP-IPT appears to treat some submicroscopic infections but does not prevent new infections.
To effectively control pregnancy-associated malaria, new interventions are required to target women prior to their first antenatal care visit and to effectively treat and prevent all malaria infections. Abbreviations: IPT: Intermittent preventive treatment; SP: Sulphadoxine-pyrimethamine; SP-IPT: Intermittent preventive treatment with sulphadoxine-pyrimethamine; PCR: Polymerase chain reaction; qPCR: Quantitative polymerase chain reaction. Competing interests: The authors have declared that they have no financial disclosures or competing interests. Authors’ contributions: LKP, TET and MKL conceived of and designed the field study. LKP, TET, MKL, PCT, MM, PW, and GM conducted the clinical study. LMC and MKL designed the experiments. KM, SK and AM completed the histopathology. LMC, SB, RM, SJ, and KBS conducted the experiments. LMC and MKL performed the analysis and led the writing of the manuscript. All authors read and approved the final manuscript.
Background: Malaria during pregnancy results in adverse outcomes for mothers and infants. Intermittent preventive treatment (IPT) with sulphadoxine-pyrimethamine (SP) is the primary intervention aimed at reducing malaria infection during pregnancy. Although submicroscopic infection is common during pregnancy and at delivery, its impact throughout pregnancy on the development of placental malaria and adverse pregnancy outcomes has not been clearly established. Methods: Quantitative PCR was used to detect submicroscopic infections in pregnant women enrolled in an observational study in Blantyre, Malawi to determine their effect on maternal, foetal and placental outcomes. The ability of SP to treat and prevent submicroscopic infections was also assessed. Results: 2,681 samples from 448 women were analysed and 95 submicroscopic infections were detected in 68 women, a rate of 0.6 episodes per person-year of follow-up. Submicroscopic infections were most often detected at enrolment. The majority of women with submicroscopic infections did not have a microscopically detectable infection detected during pregnancy. Submicroscopic infection was associated with placental malaria even after controlling for microscopically detectable infection and was associated with decreased maternal haemoglobin at the time of detection. However, submicroscopic infection was not associated with adverse maternal or foetal outcomes at delivery. One-third of women with evidence of placental malaria did not have documented peripheral infection during pregnancy. SP was moderately effective in treating submicroscopic infections, but did not prevent the development of new submicroscopic infections in the month after administration. Conclusions: Submicroscopic malaria infection is common and occurs early in pregnancy. SP-IPT can clear some submicroscopic infections but does not prevent new infections after administration. To effectively control pregnancy-associated malaria, new interventions are required to target women prior to their first antenatal care visit and to effectively treat and prevent all malaria infections.
Background: Each year, 25 million pregnant women in sub-Saharan Africa are at risk for malaria infection. Malaria during pregnancy is associated with maternal anaemia and infant low birth weight. Malaria during pregnancy is estimated to result in 100,000 infant deaths annually in Africa [1]. Malaria infections during pregnancy may be identified by microscopic examination of a blood smear or may be submicroscopic, detectable only by more sensitive molecular methods. Studies in a variety of transmission settings have shown that submicroscopic infections can be up to five times more common than microscopic infections during pregnancy [2-7]. While the associations between microscopically detectable malaria during pregnancy and maternal anaemia at delivery and low birth weight have been supported in most studies [1], the impact of submicroscopic infection during pregnancy on these adverse outcomes has not been systematically assessed. Because malaria infections during pregnancy are often asymptomatic, a key intervention to decrease the burden of malaria in pregnancy is intermittent preventive treatment (IPT), providing anti-malarial chemotherapy at routine intervals during pregnancy. Currently, the recommended anti-malarial chemotherapy for IPT in the majority of Africa is sulphadoxine-pyrimethamine (SP), which is intended to cure current infections and to provide a period of post-treatment prophylaxis to prevent future infections. In much of sub-Saharan Africa, increasing resistance to SP [8] is generating concerns that SP-IPT efficacy will be compromised [9]. Understanding the relative importance of the treatment and prophylactic effects of SP-IPT and the differential impact of resistance on these effects may have implications for the selection of a new drug to replace SP for IPT. In the context of an observational study of a large cohort of pregnant women in Malawi, the effect of submicroscopic infections on maternal and foetal outcomes was examined. All women received SP-IPT according to the national policy. At each visit a blood smear was examined and women were treated for malaria infections detected by microscopy. The primary analysis of infections detected solely by microscopy showed an association between infection at enrolment in antenatal care and placental malaria, but only infections detected at the time of delivery were associated with adverse pregnancy outcomes [10]. Because higher rates of submicroscopic infection than microscopically detectable infection were anticipated and submicroscopic infections were not treated, it was hypothesized that submicroscopic infections would be associated with placental malaria, maternal anaemia and infant low birth weight. It was further hypothesized that, due to high rates of SP resistance, SP-IPT would fail to clear and prevent submicroscopic infections. Conclusions: Submicroscopic malaria infection is common, occurs early in pregnancy and is associated with placental malaria. SP-IPT appears to treat some submicroscopic infections but does not prevent new infections. To effectively control pregnancy-associated malaria, new interventions are required to target women prior to their first antenatal care visit and to effectively treat and prevent all malaria infections.
Background: Malaria during pregnancy results in adverse outcomes for mothers and infants. Intermittent preventive treatment (IPT) with sulphadoxine-pyrimethamine (SP) is the primary intervention aimed at reducing malaria infection during pregnancy. Although submicroscopic infection is common during pregnancy and at delivery, its impact throughout pregnancy on the development of placental malaria and adverse pregnancy outcomes has not been clearly established. Methods: Quantitative PCR was used to detect submicroscopic infections in pregnant women enrolled in an observational study in Blantyre, Malawi to determine their effect on maternal, foetal and placental outcomes. The ability of SP to treat and prevent submicroscopic infections was also assessed. Results: 2,681 samples from 448 women were analysed and 95 submicroscopic infections were detected in 68 women, a rate of 0.6 episodes per person-year of follow-up. Submicroscopic infections were most often detected at enrolment. The majority of women with submicroscopic infections did not have a microscopically detectable infection detected during pregnancy. Submicroscopic infection was associated with placental malaria even after controlling for microscopically detectable infection and was associated with decreased maternal haemoglobin at the time of detection. However, submicroscopic infection was not associated with adverse maternal or foetal outcomes at delivery. One-third of women with evidence of placental malaria did not have documented peripheral infection during pregnancy. SP was moderately effective in treating submicroscopic infections, but did not prevent the development of new submicroscopic infections in the month after administration. Conclusions: Submicroscopic malaria infection is common and occurs early in pregnancy. SP-IPT can clear some submicroscopic infections but does not prevent new infections after administration. To effectively control pregnancy-associated malaria, new interventions are required to target women prior to their first antenatal care visit and to effectively treat and prevent all malaria infections.
11,226
334
[ 486, 178, 1550, 86, 15, 43, 53, 502, 62, 324, 374, 183, 37, 14, 85 ]
19
[ "infection", "submicroscopic", "malaria", "women", "placental", "submicroscopic infection", "placental malaria", "infections", "blood", "age" ]
[ "pregnancy associated malaria", "detectable malaria pregnancy", "study malaria pregnancy", "malaria infections pregnancy", "submicroscopic malaria pregnancy" ]
[CONTENT] Malaria in pregnancy | Submicroscopic infection | Placental malaria | Sulphadoxine-pyrimethamine intermittent preventive treatment | Sulphadoxine-pyrimethamine resistance [SUMMARY]
[CONTENT] Malaria in pregnancy | Submicroscopic infection | Placental malaria | Sulphadoxine-pyrimethamine intermittent preventive treatment | Sulphadoxine-pyrimethamine resistance [SUMMARY]
[CONTENT] Malaria in pregnancy | Submicroscopic infection | Placental malaria | Sulphadoxine-pyrimethamine intermittent preventive treatment | Sulphadoxine-pyrimethamine resistance [SUMMARY]
[CONTENT] Malaria in pregnancy | Submicroscopic infection | Placental malaria | Sulphadoxine-pyrimethamine intermittent preventive treatment | Sulphadoxine-pyrimethamine resistance [SUMMARY]
[CONTENT] Malaria in pregnancy | Submicroscopic infection | Placental malaria | Sulphadoxine-pyrimethamine intermittent preventive treatment | Sulphadoxine-pyrimethamine resistance [SUMMARY]
[CONTENT] Malaria in pregnancy | Submicroscopic infection | Placental malaria | Sulphadoxine-pyrimethamine intermittent preventive treatment | Sulphadoxine-pyrimethamine resistance [SUMMARY]
[CONTENT] Adult | Antimalarials | Artemether, Lumefantrine Drug Combination | Artemisinins | Asymptomatic Diseases | DNA, Protozoan | Drug Administration Schedule | Drug Combinations | Drug Resistance | Erythrocytes | Ethanolamines | False Negative Reactions | Female | Fetal Diseases | Fluorenes | Follow-Up Studies | Hemeproteins | Humans | Infant, Newborn | Infectious Disease Transmission, Vertical | Malaria | Malawi | Parasitemia | Placenta | Plasmodium | Polymerase Chain Reaction | Pregnancy | Pregnancy Complications, Infectious | Pregnancy Outcome | Pregnancy Trimesters | Prevalence | Pyrimethamine | Quinine | Sulfadoxine [SUMMARY]
[CONTENT] Adult | Antimalarials | Artemether, Lumefantrine Drug Combination | Artemisinins | Asymptomatic Diseases | DNA, Protozoan | Drug Administration Schedule | Drug Combinations | Drug Resistance | Erythrocytes | Ethanolamines | False Negative Reactions | Female | Fetal Diseases | Fluorenes | Follow-Up Studies | Hemeproteins | Humans | Infant, Newborn | Infectious Disease Transmission, Vertical | Malaria | Malawi | Parasitemia | Placenta | Plasmodium | Polymerase Chain Reaction | Pregnancy | Pregnancy Complications, Infectious | Pregnancy Outcome | Pregnancy Trimesters | Prevalence | Pyrimethamine | Quinine | Sulfadoxine [SUMMARY]
[CONTENT] Adult | Antimalarials | Artemether, Lumefantrine Drug Combination | Artemisinins | Asymptomatic Diseases | DNA, Protozoan | Drug Administration Schedule | Drug Combinations | Drug Resistance | Erythrocytes | Ethanolamines | False Negative Reactions | Female | Fetal Diseases | Fluorenes | Follow-Up Studies | Hemeproteins | Humans | Infant, Newborn | Infectious Disease Transmission, Vertical | Malaria | Malawi | Parasitemia | Placenta | Plasmodium | Polymerase Chain Reaction | Pregnancy | Pregnancy Complications, Infectious | Pregnancy Outcome | Pregnancy Trimesters | Prevalence | Pyrimethamine | Quinine | Sulfadoxine [SUMMARY]
[CONTENT] Adult | Antimalarials | Artemether, Lumefantrine Drug Combination | Artemisinins | Asymptomatic Diseases | DNA, Protozoan | Drug Administration Schedule | Drug Combinations | Drug Resistance | Erythrocytes | Ethanolamines | False Negative Reactions | Female | Fetal Diseases | Fluorenes | Follow-Up Studies | Hemeproteins | Humans | Infant, Newborn | Infectious Disease Transmission, Vertical | Malaria | Malawi | Parasitemia | Placenta | Plasmodium | Polymerase Chain Reaction | Pregnancy | Pregnancy Complications, Infectious | Pregnancy Outcome | Pregnancy Trimesters | Prevalence | Pyrimethamine | Quinine | Sulfadoxine [SUMMARY]
[CONTENT] Adult | Antimalarials | Artemether, Lumefantrine Drug Combination | Artemisinins | Asymptomatic Diseases | DNA, Protozoan | Drug Administration Schedule | Drug Combinations | Drug Resistance | Erythrocytes | Ethanolamines | False Negative Reactions | Female | Fetal Diseases | Fluorenes | Follow-Up Studies | Hemeproteins | Humans | Infant, Newborn | Infectious Disease Transmission, Vertical | Malaria | Malawi | Parasitemia | Placenta | Plasmodium | Polymerase Chain Reaction | Pregnancy | Pregnancy Complications, Infectious | Pregnancy Outcome | Pregnancy Trimesters | Prevalence | Pyrimethamine | Quinine | Sulfadoxine [SUMMARY]
[CONTENT] Adult | Antimalarials | Artemether, Lumefantrine Drug Combination | Artemisinins | Asymptomatic Diseases | DNA, Protozoan | Drug Administration Schedule | Drug Combinations | Drug Resistance | Erythrocytes | Ethanolamines | False Negative Reactions | Female | Fetal Diseases | Fluorenes | Follow-Up Studies | Hemeproteins | Humans | Infant, Newborn | Infectious Disease Transmission, Vertical | Malaria | Malawi | Parasitemia | Placenta | Plasmodium | Polymerase Chain Reaction | Pregnancy | Pregnancy Complications, Infectious | Pregnancy Outcome | Pregnancy Trimesters | Prevalence | Pyrimethamine | Quinine | Sulfadoxine [SUMMARY]
[CONTENT] pregnancy associated malaria | detectable malaria pregnancy | study malaria pregnancy | malaria infections pregnancy | submicroscopic malaria pregnancy [SUMMARY]
[CONTENT] pregnancy associated malaria | detectable malaria pregnancy | study malaria pregnancy | malaria infections pregnancy | submicroscopic malaria pregnancy [SUMMARY]
[CONTENT] pregnancy associated malaria | detectable malaria pregnancy | study malaria pregnancy | malaria infections pregnancy | submicroscopic malaria pregnancy [SUMMARY]
[CONTENT] pregnancy associated malaria | detectable malaria pregnancy | study malaria pregnancy | malaria infections pregnancy | submicroscopic malaria pregnancy [SUMMARY]
[CONTENT] pregnancy associated malaria | detectable malaria pregnancy | study malaria pregnancy | malaria infections pregnancy | submicroscopic malaria pregnancy [SUMMARY]
[CONTENT] pregnancy associated malaria | detectable malaria pregnancy | study malaria pregnancy | malaria infections pregnancy | submicroscopic malaria pregnancy [SUMMARY]
[CONTENT] infection | submicroscopic | malaria | women | placental | submicroscopic infection | placental malaria | infections | blood | age [SUMMARY]
[CONTENT] infection | submicroscopic | malaria | women | placental | submicroscopic infection | placental malaria | infections | blood | age [SUMMARY]
[CONTENT] infection | submicroscopic | malaria | women | placental | submicroscopic infection | placental malaria | infections | blood | age [SUMMARY]
[CONTENT] infection | submicroscopic | malaria | women | placental | submicroscopic infection | placental malaria | infections | blood | age [SUMMARY]
[CONTENT] infection | submicroscopic | malaria | women | placental | submicroscopic infection | placental malaria | infections | blood | age [SUMMARY]
[CONTENT] infection | submicroscopic | malaria | women | placental | submicroscopic infection | placental malaria | infections | blood | age [SUMMARY]
[CONTENT] infections | pregnancy | malaria | sp | submicroscopic | ipt | africa | infections pregnancy | submicroscopic infections | malaria pregnancy [SUMMARY]
[CONTENT] placental | malaria | gestational | age | gestational age | defined | birth | qpcr | blood | analyses [SUMMARY]
[CONTENT] infection | submicroscopic | submicroscopic infection | women | detected | women submicroscopic | infections | submicroscopic infection detected | malaria | infection detected [SUMMARY]
[CONTENT] effectively | treat | malaria | prevent | pregnancy associated | infections | new | associated | pregnancy | placental malaria sp [SUMMARY]
[CONTENT] submicroscopic | infection | malaria | women | submicroscopic infection | placental | infections | blood | pregnancy | placental malaria [SUMMARY]
[CONTENT] submicroscopic | infection | malaria | women | submicroscopic infection | placental | infections | blood | pregnancy | placental malaria [SUMMARY]
[CONTENT] Malaria ||| IPT ||| [SUMMARY]
[CONTENT] Blantyre | Malawi ||| SP [SUMMARY]
[CONTENT] 2,681 | 448 | 95 | 68 | 0.6 ||| ||| ||| ||| ||| One-third ||| the month [SUMMARY]
[CONTENT] ||| SP-IPT ||| first [SUMMARY]
[CONTENT] Malaria ||| IPT ||| ||| Blantyre | Malawi ||| SP ||| 2,681 | 448 | 95 | 68 | 0.6 ||| ||| ||| ||| ||| One-third ||| the month ||| ||| SP-IPT ||| first [SUMMARY]
[CONTENT] Malaria ||| IPT ||| ||| Blantyre | Malawi ||| SP ||| 2,681 | 448 | 95 | 68 | 0.6 ||| ||| ||| ||| ||| One-third ||| the month ||| ||| SP-IPT ||| first [SUMMARY]
Effect of Transport Time on the Use of Reperfusion Therapy for Patients with Acute Ischemic Stroke in Korea.
33754510
We investigated the association between geographic proximity to hospitals and the administration rate of reperfusion therapy for acute ischemic stroke.
BACKGROUND
From a prospective nationwide multicenter stroke registry, we identified patients with acute ischemic stroke who visited the hospital within 12 hours of symptom onset. Reperfusion therapy was classified as intravenous thrombolysis (IVT), endovascular therapy (EVT), or combined therapy. The association between the proportion of patients who were treated with reperfusion therapy and the ground transport time was evaluated using a spline regression analysis adjusted for patient-level characteristics. We also estimated the proportion of the Korean population that lived within each 30-minute incremental service area from 67 stroke centers accredited by the Korean Stroke Society.
METHODS
Of 12,172 patients (mean age, 68 ± 13 years; men, 59.7%) who met the eligibility criteria, 96.5% lived within 90 minutes of ground transport time from the admitting hospital. The proportion of patients treated with IVT decreased significantly when stroke patients lived beyond 90 minutes of transport time (P = 0.006). The proportion treated with EVT showed a similar trend with transport time. Based on residential areas, 98.4% of the Korean population could reach one of the 67 stroke centers within 90 minutes.
RESULTS
The use of reperfusion therapy for acute stroke decreased when patients lived beyond 90 minutes of ground transport time from the hospital. More than 95% of the South Korean population could reach one of the 67 stroke centers within 90 minutes of ground transport time.
CONCLUSION
[ "Administration, Intravenous", "Aged", "Aged, 80 and over", "Combined Modality Therapy", "Endovascular Procedures", "Female", "Fibrinolytic Agents", "Humans", "Ischemic Stroke", "Male", "Middle Aged", "Registries", "Republic of Korea", "Thrombolytic Therapy", "Time Factors", "Time-to-Treatment" ]
7985286
INTRODUCTION
Geographic access to hospitals that offer acute care for patients with trauma or acute myocardial infarction can be crucial because timely treatment can improve outcomes.1234 For patients with acute ischemic stroke, access to hospitals that provide reperfusion therapy is important because reperfusion therapy for acute ischemic stroke is also highly time-sensitive.56 Several previous studies have suggested that the probability of receiving reperfusion therapies for acute ischemic stroke increased when patients lived in close geographic proximity to hospitals that provide the therapy; however, other studies did not find a significant association between the transport time from home to hospital and the use of reperfusion therapy.78910 For example, longer driving time to the treating hospital was significantly associated with longer onset-to-arrival time and lower odds of receiving intravenous thrombolysis (IVT) in a retrospective study of 118,683 acute stroke patients admitted to 1,489 US hospitals.10 For endovascular therapy (EVT), the proportion of stroke patients receiving EVT decreased significantly when patients lived beyond a 1-hour ground transport time from hospitals that offer EVT in California during 2009 and 2010.7 Among patients with ST-segment elevation myocardial infarction, a wide regional variation exists in the rate of reperfusion therapy and the median time from symptom onset to treatment due to differences in geography, local resources, and organization of the regional health system.11 Likewise, the association between geographic proximity to a hospital that offers reperfusion therapy for acute stroke patients and the probability of receiving the therapy can vary depending on population density, geography, and systems of care.1213 Therefore, each region needs to develop optimal geospatial modeling of acute care centers to provide maximal population access to such centers.114 In this study, we investigated the association between geographic proximity to a hospital that offers reperfusion therapy and the administration rate of the therapy for acute ischemic stroke in South Korea, one of the world's most densely populated countries with a population density of 529 people per square kilometer in 2018.15
METHODS
Patients The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008, and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset. The last known normal time was used when the onset of stroke symptoms was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and the groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture times. We limited our analyses to addresses within the same provincial area as the treating hospital because transferring acute stroke patients across provincial lines is not considered usual practice. Reperfusion therapy was classified as either IVT alone, or EVT alone or combined IVT and EVT. For patients who were treated with combined IVT and EVT, we identified those who had received intravenous tPA at an outside hospital and were then transferred to the CRCS-K center for EVT, i.e., the drip-and-ship paradigm. For the initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE).
Travel distance, ground transport time, and service area In this study, we classified the patients' residences as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospital using a web-based map service which is optimized for actual traffic conditions in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 91–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on the intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea, including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses.
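As a rough illustration of the 30-minute banding described above, the following is a minimal Python sketch that bins estimated drive times into the same service bands and tabulates the share of patients per band. The column names and example values are hypothetical; the study itself used a Korean web map service for routing and ArcGIS Pro for the geospatial work, neither of which is reproduced here.

```python
import pandas as pd

# Hypothetical input: one row per patient with an estimated drive time (minutes)
# from home address to admitting hospital, obtained from any routing service.
patients = pd.DataFrame({"transport_min": [12, 25, 48, 95, 130, 18, 62, 88]})

# The same 30-minute service bands used in the study (0-30, 31-60, 61-90, 91-120, >120).
bands = [0, 30, 60, 90, 120, float("inf")]
labels = ["0-30", "31-60", "61-90", "91-120", ">120"]
patients["band"] = pd.cut(patients["transport_min"], bins=bands, labels=labels)

# Share of patients per band, and cumulative coverage (analogous to
# "x% lived within 30/60/90/120 minutes of the admitting hospital").
share = patients["band"].value_counts(normalize=True).reindex(labels)
print(share)
print(share.cumsum())
```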
Statistical analysis Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at 30, 60, 90, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). In all analyses, a two-tailed test with a P value of less than 0.05 was considered significant. Ethics statement This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants.
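The spline analysis itself was done in R with the rms package, whose exact model specification is not given here. Purely as an illustration of the idea, the sketch below builds Harrell-style restricted cubic spline terms with knots at 30, 60, 90, and 120 minutes and fits a logistic model for a binary treatment indicator against transport time in Python. The simulated data and variable names are hypothetical, covariate adjustment is omitted, and the unnormalized basis is only assumed to be equivalent up to scaling to what rms::rcs produces.

```python
import numpy as np
import statsmodels.api as sm

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's formulation, unnormalized):
    the linear term plus k-2 nonlinear terms for k knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    pos3 = lambda u: np.clip(u, 0.0, None) ** 3  # truncated cubic (u)+^3
    cols = [x]
    for j in range(k - 2):
        term = (pos3(x - t[j])
                - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(term)
    return np.column_stack(cols)

# Hypothetical data: transport time in minutes and a binary "treated with IVT" outcome
# whose probability drops slightly beyond 90 minutes.
rng = np.random.default_rng(0)
transport_min = rng.uniform(0, 150, size=2000)
p_treat = 0.2 - 0.0005 * np.clip(transport_min - 90, 0, None)
treated = rng.binomial(1, p_treat)

X = sm.add_constant(rcs_basis(transport_min, knots=[30, 60, 90, 120]))
fit = sm.Logit(treated, X).fit(disp=False)
print(fit.summary())
```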
RESULTS
Of 27,122 acute stroke patients who had been admitted to CRCS-K hospitals between April 2008 and August 2014, 12,172 patients met the eligibility criteria (Supplementary Fig. 1). Mean age was 68 ± 13 years, and 59.7% were men. Of the 12,172 patients, 2,216 (18.2%) were treated with IVT alone and 1,238 (10.2%) were treated with either EVT alone or combined IVT and EVT. The patients who received reperfusion therapy were more likely to be older, to have atrial fibrillation, and to have a higher initial NIHSS score (Supplementary Table 1). In this study, 48.2% of the patients' residences were classified as metropolitan, 41.3% as urban, and 10.5% as rural. The median estimated ground transport time from home to hospital was 23 minutes (interquartile range [IQR], 14–39), and the median travel distance was 9.4 km (IQR, 4.5–25.1). In this study, 64.4% of the patients lived within 30 minutes of the admitting hospital, 88.7% within 60 minutes, 96.5% within 90 minutes, and 99.2% within 120 minutes. Ground transport times of 30, 60, 90, and 120 minutes corresponded to travel distances of 20.4, 53.5, 86.6, and 119.8 km, respectively. Patients who lived farther from admitting hospitals were more likely to be older, to have atrial fibrillation and the CE subtype, and to have a higher initial NIHSS score, but were less likely to have hypertension, diabetes, hyperlipidemia, or coronary artery disease, and had a lower initial systolic blood pressure (Table 1). The ground transport time did not differ between patients treated with IVT only and patients untreated with reperfusion therapy (rank sum test P = 0.450). However, the ground transport time was slightly but significantly greater for patients treated with EVT than for patients untreated with reperfusion therapy (24 minutes [14–44] vs. 23 minutes [14–39], rank sum test P = 0.015; Fig. 1 and Supplementary Fig. 2). (Table 1 footnote: data are presented as number (%) or mean ± standard deviation. IQR = interquartile range, NIHSS = National Institutes of Health Stroke Scale, LAA = large artery atherosclerosis, CE = cardioembolism, SVO = small-vessel occlusion, UDE = undetermined etiology, ODE = other determined etiology, LDL = low-density lipoprotein, BUN = blood urea nitrogen, IVT = intravenous thrombolysis, EVT = endovascular therapy, CRCS-K = The Clinical Research Collaboration for Stroke in Korea.) Exact treatment time was available in 1,669 patients (83.4%) treated with IVT alone. The median door-to-needle time was 40 minutes (30–53) and the median onset-to-needle time was 124 minutes (88–170). As expected, the onset-to-needle time was positively correlated with ground transport time (Pearson coefficient 0.201, P < 0.001) since it took more time to arrive at the hospital. However, the door-to-needle time was inversely associated with ground transport time (Pearson coefficient −0.108, P < 0.001). Regarding EVT, exact treatment time was available for 1,077 patients (87.0%). The median door-to-puncture time was 107 minutes (85–135) and the median onset-to-puncture time was 240 minutes (170–340). The onset-to-puncture time was positively correlated with ground transport time (Pearson coefficient 0.192, P < 0.001). Like the door-to-needle time in IVT, the door-to-puncture time showed a negative association with ground transport time in patients treated with EVT (Pearson coefficient −0.089, P < 0.001) (Supplementary Fig. 3).
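The correlation checks reported above (onset-to-needle and door-to-needle times against transport time) are simple Pearson correlations; a minimal sketch with purely hypothetical arrays, not the study data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements per treated patient (minutes).
transport_min   = np.array([10, 20, 35, 50, 70, 90, 110])
onset_to_needle = np.array([95, 110, 130, 150, 160, 190, 210])
door_to_needle  = np.array([45, 44, 40, 38, 36, 33, 30])

r1, p1 = pearsonr(transport_min, onset_to_needle)  # expected positive correlation
r2, p2 = pearsonr(transport_min, door_to_needle)   # expected negative correlation
print(f"onset-to-needle: r={r1:.3f}, p={p1:.3g}")
print(f"door-to-needle:  r={r2:.3f}, p={p2:.3g}")
```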
The proportion of patients treated with IVT ranged from 17.1% to 22.1% across the groups defined by 30-minute increments of ground transport time, and the proportion treated with EVT ranged from 7.5% to 12.4% (Supplementary Fig. 4). In the restricted cubic spline analyses, the proportion of patients treated with IVT decreased significantly when the patients lived beyond a 90-minute ground transport time from the hospital (unadjusted P = 0.003; adjusted P = 0.006). The proportion of stroke patients treated with EVT showed a similar trend with estimated ground transport time in the restricted cubic spline analyses, but it did not reach statistical significance in either the unadjusted or the adjusted analysis (unadjusted P = 0.086; adjusted P = 0.105) (Fig. 2). (Fig. 2 legend: (A) IVT, unadjusted analysis (P = 0.003); (B) IVT, adjusted analysis (P = 0.006); adjusted for age, hypertension, diabetes, atrial fibrillation, stroke history, coronary artery disease, use of an antiplatelet agent, oral anticoagulants or statins before the index stroke, diastolic blood pressure, fasting blood glucose, low-density lipoprotein cholesterol, platelet count, onset-to-arrival time, initial NIHSS score, and TOAST classification. (C) EVT, unadjusted analysis (P = 0.090); (D) EVT, adjusted analysis (P = 0.100); adjusted for age, sex, hypertension, diabetes, atrial fibrillation, hyperlipidemia, smoking, coronary artery disease, use of an antiplatelet agent or oral anticoagulants before the index stroke, systolic blood pressure, hemoglobin, platelet count, blood urea nitrogen, onset-to-arrival time, initial NIHSS score, and TOAST classification. IVT = intravenous thrombolysis, EVT = endovascular therapy, NIHSS = National Institutes of Health Stroke Scale, TOAST = Trial of Org 10172 in Acute Stroke Treatment.) Of 779 patients who had been treated with combined IVT-EVT, the exact referral pattern could be identified in 439 patients (56.4%). Among these 439 patients, 69 (15.7%) were treated via the drip-and-ship paradigm. The use of the drip-and-ship paradigm increased significantly with greater ground transport time (P < 0.001) (Supplementary Table 2). For national coverage from the 67 stroke centers, only 9.3% of the total land area of South Korea was reachable within 30 minutes, while 75.0% of the land was accessible within 90 minutes. In contrast, 73.0% of the residential area was reachable within 30 minutes of ground transport time, and 98.4% was accessible within 90 minutes (Fig. 3 and Table 2). (Fig. 3 and Table 2 footnote: CRCS-K = The Clinical Research Collaboration for Stroke in Korea.)
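The national coverage figures quoted above come from intersecting drive-time service areas with district residential areas in ArcGIS Pro; that geospatial step is not reproduced here. Assuming the intersection has already been exported as the fraction of each district's residential area falling inside each 30-minute service band, the population-weighting step reduces to simple arithmetic; a sketch with hypothetical district names and numbers:

```python
import pandas as pd

# Hypothetical export: one row per district x service band, with district population
# and the fraction of its residential area inside that band.
cov = pd.DataFrame({
    "district":   ["A", "A", "B", "B", "C"],
    "band_min":   [30, 60, 30, 90, 120],          # smallest band covering that slice
    "population": [500_000, 500_000, 80_000, 80_000, 20_000],
    "res_frac":   [0.9, 0.1, 0.4, 0.6, 1.0],
})

# Population attributed to each slice, total population counted once per district.
cov["pop_covered"] = cov["population"] * cov["res_frac"]
total_pop = cov.groupby("district")["population"].first().sum()

# Cumulative share of the population reachable within each band
# (analogous to "x% accessible within 30/60/90/120 minutes").
by_band = cov.groupby("band_min")["pop_covered"].sum().sort_index().cumsum() / total_pop
print(by_band)
```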
null
null
[ "Patients", "Travel distance, ground transport time, and service area", "Statistical analysis", "Ethics statement" ]
[ "The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE).", "In this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. 
Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses.", "Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). In all analyses, a two-tailed test with P value of less than 0.05 was considered significant.", "This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants." ]
[ null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients", "Travel distance, ground transport time, and service area", "Statistical analysis", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Geographic access to hospitals that offer acute care for patients with trauma or acute myocardial infarction can be crucial because timely treatment can improve outcomes.1234 For patients with acute ischemic stroke, access to hospitals that provide reperfusion therapy is important because the reperfusion therapy for acute ischemic stroke is also highly time-sensitive.56 Several previous studies have suggested the probability of receiving reperfusion therapies for acute ischemic stroke increased when the patients lived in close geographic proximity to hospitals that provide the therapy, however other studies did not find a significant association between the transport time from home to hospital and the use of reperfusion therapy.78910 For example, longer driving time to the treating hospital was significantly associated with longer onset-to-arrival time to the hospitals and lower odds of receiving intravenous thrombolysis (IVT) in a retrospective study of 118,683 acute stroke patients admitted to 1,489 US hospitals.10 For endovascular therapy (EVT) for acute ischemic stroke, the proportion of stroke patients receiving EVT decreased significantly when patients lived beyond a 1-hour ground transport time from hospitals that offer EVT in California during 2009 and 2010.7\nAmong patients with ST-segment elevation myocardial infarction, a wide regional variation exists in the rate of reperfusion therapy and median time from symptom onset to treatment due to differences in geography, local resource, and organization of regional health system.11 Likewise the association between geographic proximity to a hospital that offers the reperfusion therapy for acute stroke patients and the probability of receiving the therapy can vary depending on population density, geography, and systems of care.1213 Therefore, each region needs to develop an optimal geospatial modeling of acute care centers to provide maximal population access to such centers.114\nIn this study, we investigated the association between geographic proximity to a hospital that offers reperfusion therapy and the administration rate of the therapy for acute ischemic stroke in South Korea, one of the planet's most densely populated countries with a population density of 529 people per square kilometer in 2018.15", "Patients The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. 
For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE).\nThe Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE).\nTravel distance, ground transport time, and service area In this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). 
To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses.\nIn this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses.\nStatistical analysis Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). 
In all analyses, a two-tailed test with P value of less than 0.05 was considered significant.\nBaseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). In all analyses, a two-tailed test with P value of less than 0.05 was considered significant.\nEthics statement This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants.\nThis study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants.", "The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. 
For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE).", "In this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses.", "Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). In all analyses, a two-tailed test with P value of less than 0.05 was considered significant.", "This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants.", "Of 27,122 acute stroke patients who had been admitted to CRCS-K hospitals between April 2008 and August 2014, 12,172 patients met the eligibility criteria (Supplementary Fig. 1). 
Mean age was 68 ± 13 years, and 59.7% were men. Of 12,172 patients, 2,216 patients (18.2%) were treated with IVT alone and 1238 patients (10.2%) were treated with either EVT alone or combined IVT and EVT. The patients who received reperfusion therapy were more likely to be older, to have atrial fibrillation and higher initial NIHSS score (Supplementary Table 1). In this study, 48.2% of the patients' residence were classified as metropolitan, 41.3% as urban, and 10.5% as rural. The average estimated ground transport time from home to hospital was 23 minutes (interquartile range [IQR], 14–39), and median travel distance was 9.4 km (IQR, 4.5–25.1). In this study, 64.4% of the patients lived within 30 minutes or less to the admitting hospital, 88.7% within 60 minutes or less, 96.5% within 90 minutes or less, and 99.2% with 120 minutes or less. Ground transport times of 30, 60, 90, and 120 minutes corresponded to travel distances of 20.4, 53.5, 86.6, and 119.8 km, respectively. Patients that lived farther from admitting hospitals were more likely to be older, to have atrial fibrillation and CE subtype, and to have a greater initial NIHSS score, but were less likely to have hypertension, diabetes, hyperlipidemia, coronary artery disease and had a lower initial systolic blood pressure (Table 1). The ground transport time did not differ between patients treated with IVT only and patients untreated with reperfusion therapy (rank sum test P = 0.450). However, the ground transport time was slightly but significantly greater for patients treated with EVT compared with patients untreated with reperfusion therapy (24 minutes [14–44] vs. 23 minutes [14–39], rank sum test P = 0.015; Fig. 1 and Supplementary Fig. 2).\nData are presented as number (%) or mean ± standard deviation.\nIQR = interquartile range, NIHSS = National Institutes of Health Stroke Scale, LAA = large artery atherosclerosis, CE = cardioembolism, SVO = small-vessel occlusion, UDE = undetermined etiology, ODE = other determined etiology, LDL = low-density lipoprotein, BUN = blood urea nitrogen, IVT = intravenous thrombolysis, EVT = endovascular therapy.\nCRCS-K = The Clinical Research Collaboration for Stroke in Korea, IVT = intravenous thrombolysis, EVT = endovascular therapy.\nExact treatment time was available in 1,669 patients (83.4%) treated with IVT alone. The median door-to-needle time was 40 minutes (30–53) and the median onset-to-needle time was 124 minutes (88–170). As expected, the onset-to-needle time was positively correlated with ground transport time (Pearson coefficient 0.201, P < 0.001) since it took more time to arrive at the hospital. However, the door-to-needle was inversely associated with ground transport time (Pearson coefficient −0.108, P < 0.001). Regarding EVT, exact treatment time for patients treated with EVT was available for 1,077 patients (87.0%). The median door-to-puncture time was 107 minutes (85–135) and the median onset-to-puncture time was 240 minutes (170–340). The onset-to-puncture time was positively correlated with ground transport time (Pearson coefficient 0.192, P < 0.001). Like the door-to-needle time in IVT, the door-to-puncture showed a negative association with ground transport time in patients treated with EVT (Pearson coefficient −0.089, P < 0.001) (Supplementary Fig. 3).\nThe proportion of patients treated with IVT ranged from 17.1% to 22.1% among the groups with 30-minute increments of ground transport time, and proportion of EVT ranged from 7.5% to 12.4% (Supplementary Fig. 4). 
In the restricted cubic spline analyses, the proportion of patients treated with IVT decreased significantly when the patients lived beyond a 90-minute ground transport time from the hospital (unadjusted P = 0.003; adjusted P = 0.006). The proportion of stroke patients treated with EVT showed a similar trend with estimated ground transport time in the restricted cubic spline analyses, but it did not reach statistical significance in both unadjusted and adjusted analyses (unadjusted P = 0.086; adjusted P = 0.105) (Fig. 2).\n(A) IVT, unadjusted analysis (P = 0.003); (B) IVT, adjusted analysis (P = 0.006); adjusted for age, hypertension, diabetes, atrial fibrillation, stroke history, coronary artery disease, use of an antiplatelet agent, oral anticoagulants or statins before the index stroke, diastolic blood pressure, fasting blood glucose, low-density lipoprotein cholesterol, platelet count, onset-to-arrival time, initial NIHSS score, and TOAST classification. (C) EVT, unadjusted analysis (P = 0.090); (D) EVT, adjusted analysis (P = 0.100); adjusted for age, sex, hypertension, diabetes, atrial fibrillation, hyperlipidemia, smoking, coronary artery disease, use of an antiplatelet agent or oral anticoagulants before the index stroke, systolic blood pressure, hemoglobin, platelet count, blood urea nitrogen, onset-to-arrival time, initial NIHSS score, and TOAST classification.\nIVT = intravenous thrombolysis, EVT = endovascular therapy, NIHSS = National Institutes of Health Stroke Scale, TOAST = Trial of Org 10172 in Acute Stroke Treatment.\nOf 779 patients who had been treated with combined IVT-EVT, we could identify exact referral pattern in 439 patients (56.4%). Among 439 patients, 69 patients (15.7%) were treated by drip- and-ship paradigm. The use of drip-and-ship paradigm increased significantly with greater ground transport time (P < 0.001) (Supplementary Table 2).\nFor national coverage from the 67 stroke centers, only 9.3% of the total land area of South Korea was reachable within 30 minutes while 75.0% of the land was accessible within 90 minutes. In contrast, 73.0% of the residential area was reachable within 30 minutes of the ground transport time, and 98.4% of the area was accessible within 90 minutes (Fig. 3 and Table 2).\nCRCS-K = The Clinical Research Collaboration for Stroke in Korea.", "In this nationwide analysis, we found that more than 95% of patients with acute ischemic stroke who visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset had less than 90 minutes of ground transport time. The proportion of patients treated with IVT decreased significantly when patients had more than 90 minutes of the ground transport time from the hospital. The proportion of stroke patients treated with EVT also showed a similar trend with the transport time. From a national viewpoint, more than 98% of the residential area was accessible within 90 minutes of ground transport time from 67 accredited stroke centers in South Korea.\nGeographic access to hospitals offering reperfusion therapy can be crucial to successful reperfusion therapy because the time spent on symptom recognition to decision to medical seeking and the transport time from the scene to the hospital are important contributors to prehospital delays. Living close to treating hospitals was associated with a significantly higher chance of receiving thrombolytic therapy in a study conducted in St. 
Louis in the US.8 Interestingly, the increased use of thrombolytic therapy was not explained by earlier arrival time in that study, which indicates the presence of other factors such as the use of ambulance transport and health-seeking behavior. Other previous research that used claims data from more than 100,000 Japanese stroke patients did not find an association between driving time and administration of tissue plasminogen activator.9 Instead, they found a significant association between the use of ambulance and the use of thrombolytic therapy regardless of driving time and population density of the area. However, a recent large study enrolling 118,683 patients from the Get With The Guidelines-Stroke Registry (GWTG-Stroke) in the US found a significant association between a longer driving time to hospital and longer onset to arrival time and lower odds of IVT.10 In our study, the proportion of patients treated with IVT decreased significantly when patients had more than 90 minutes of the ground transport time from the hospital. Our findings suggest that the distance to the hospital will not be a major barrier to the use of IVT in South Korea as more than 98% of the South Korean population lives within 90 min from 67 accredited stroke centers in this study.\nAlthough several recent clinical trials have extended the treatment time window for EVT using various perfusion imaging studies, the time interval between stroke onset and reperfusion therapy is still the most important factor affecting eligibility and outcomes for reperfusion therapy in most patients with acute ischemic stroke.212223 For the association between the use of EVT and distance from the hospital, there is not much evidence from the previous literature despite the prehospital delay due to greater transport time could be also crucial to the timely performance of this life-saving therapy for acute ischemic stroke due to large vessel occlusion. In our previous study conducted in California, 94% of the stroke patients lived within a 2-hour ground transport time to the hospital offering the treatment. However, those who lived within 1-hour ground transport time had a significantly increased chance of receiving EVT compared with those who lived beyond 1-hour ground transport time (0.9% vs. 0.2%, P < 0.001).7 In this study, the proportion of stroke patients treated with EVT also decreased when the patients lived beyond a 90-minute ground transport time, however the association did not reach statistical significance. This finding is possibly due to selective referral of the patients who were candidates for EVT from other hospitals although we could not verify this due to lack of information on referral patterns. As with IVT, the ground transport time will not be an important obstacle to receive EVT in South Korea because more than 98% of the population lives within 90 min from 67 stroke centers where most of the EVT procedures would take place.\nIn addition to personal factors such as health-seeking behavior, or recognition of stroke symptoms, various systemic factors can be associated with thrombolysis administration rates in acute ischemic stroke. 
Studies have found that urban location, centralized acute stroke care system, use of ambulance, neurologist staffing, and use of acute stroke protocol had a significant impact on the increased use of thrombolysis.24 According to the recent report on the national averages for acute stroke care in Korea during 2013–2014, median arrival time to the hospital was 6 h and only 50% of patients with ischemic stroke used ambulance.25 Overall, IVT was used in 10.7% and EVT in 3.6% of patients with acute ischemic stroke. IVT seems to have been used more frequently than other countries although the drip-and-ship paradigm was used only in 1%. However, significant regional disparities in the rates of reperfusion therapy were found and substantial number of acute stroke patients were treated with reperfusion therapy at low-volume hospitals.2526 In another research conducted with Korean national stroke audit data, only one-third of patients who were EVT candidates were initially taken to EVT-capable hospitals and initial routing to EVT-capable hospitals was associated with more than two-folds increased chance of receiving EVT compared with initial routing to primary stroke hospital.27 Since South Korea has a universal national health insurance system with a high population density, it is advantageous for treating acute stroke. Therefore, there are a lot of opportunities to improve the rates of reperfusion therapy for stroke patients by enhancing collaboration between stroke centers and emergency medical system, and redefining roles of comprehensive and primary stroke centers with special care for rural residence.\nAs of 2018, 81.4% of the Korean population live in the urban area according to the world bank data.28 Of 12,172 patients treated at 12 CRCS-K hospitals, 1,272 patients (10.5%) lived in rural region. Therefore, the rural population was underrepresented in our study population and the actual proportion of the stroke patients who received IVT or EVT could be lower in rural regions compared with the urban area. In this study, drip-and-ship paradigm was used in only 15.7% of patients treated with combined IVT-EVT and the use of the paradigm significantly correlated with increased ground transport time. This finding is consistent with our previous work that also reported infrequent use of the paradigm (13.2%) among 1843 patients treated with intravenous tPA at the CRCS-K hospitals.29 From a national perspective, the drip-and-ship paradigm was used in only 1.0% of acute stroke patients and only one-third of EVT candidates were initially routed to EVT-capable hospitals.2527 Additionally, interhospital transfer of patients initially routed to primary stroke hospitals to EVT-capable hospital occurred in only 17.4% in Korea. Because the stroke patients who live in the rural region would be the most probable patients who need more than 90 min of ground transportation time to stroke centers, placing EVT-capable hospitals in strategic locations readily accessible to the rural population should be considered in planning nationwide coverage for stroke systems of care. The nationwide stroke systems of care should ensure effective interhospital transfer of patients for further treatment and monitoring whenever it is needed and direct routing of patients suspected of large vessel occlusion to EVT-capable hospitals.\nThis study has several limitations. First, we used the data from 2008 to 2014 when the collection of patients' home addresses was permitted. 
Therefore, the current study may not reflect the exact nature of the EVT population because the use of a stentriever or aspiration catheter was not as common during most of this period. Second, we do not have information on referrals and ambulance use, which could be associated with increased use of reperfusion therapy. Referral bias might play an important role in estimating the proportion of EVT among the patients who lived farther from the admitting hospital. Third, various personal and systemic factors could affect the rate of reperfusion therapy, and many of them were not measured in our registry. Fourth, we do not have exact information on the annual volume of IVT or EVT at the 67 stroke centers, and therefore we could not separately estimate the accessibility of the Korean population to high-volume centers. Lastly, we also do not have exact information on the place where the patient suffered the stroke that led to the current admission. Patients could have developed stroke symptoms at other places, and therefore the estimated ground transport time between the patient's residence and the admitting hospital serves only as a proxy for geographic access to treating hospitals.\nIn conclusion, more than 95% of the patients with acute ischemic stroke who visited the hospitals within 12 hours of symptom onset lived within 90 minutes of ground transport time from the hospital in this nationwide study. The use of reperfusion therapy decreased when patients lived beyond 90 minutes of transport time from the hospital. From a national perspective, more than 98% of the South Korean population had access to the 67 stroke centers within 90 minutes of ground transport time." ]
[ "intro", "methods", null, null, null, null, "results", "discussion" ]
[ "Ischemic Stroke", "Reperfusion", "Thrombolysis", "Endovascular Treatment", "Utilization" ]
INTRODUCTION: Geographic access to hospitals that offer acute care for patients with trauma or acute myocardial infarction can be crucial because timely treatment can improve outcomes.1234 For patients with acute ischemic stroke, access to hospitals that provide reperfusion therapy is important because the reperfusion therapy for acute ischemic stroke is also highly time-sensitive.56 Several previous studies have suggested the probability of receiving reperfusion therapies for acute ischemic stroke increased when the patients lived in close geographic proximity to hospitals that provide the therapy, however other studies did not find a significant association between the transport time from home to hospital and the use of reperfusion therapy.78910 For example, longer driving time to the treating hospital was significantly associated with longer onset-to-arrival time to the hospitals and lower odds of receiving intravenous thrombolysis (IVT) in a retrospective study of 118,683 acute stroke patients admitted to 1,489 US hospitals.10 For endovascular therapy (EVT) for acute ischemic stroke, the proportion of stroke patients receiving EVT decreased significantly when patients lived beyond a 1-hour ground transport time from hospitals that offer EVT in California during 2009 and 2010.7 Among patients with ST-segment elevation myocardial infarction, a wide regional variation exists in the rate of reperfusion therapy and median time from symptom onset to treatment due to differences in geography, local resource, and organization of regional health system.11 Likewise the association between geographic proximity to a hospital that offers the reperfusion therapy for acute stroke patients and the probability of receiving the therapy can vary depending on population density, geography, and systems of care.1213 Therefore, each region needs to develop an optimal geospatial modeling of acute care centers to provide maximal population access to such centers.114 In this study, we investigated the association between geographic proximity to a hospital that offers reperfusion therapy and the administration rate of the therapy for acute ischemic stroke in South Korea, one of the planet's most densely populated countries with a population density of 529 people per square kilometer in 2018.15 METHODS: Patients The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. 
For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE). The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE). Travel distance, ground transport time, and service area In this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). 
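The individual steps of this coverage estimation are detailed in the next paragraph. Purely as a rough, hypothetical sketch of the same overlay-and-apportionment idea (the study itself used ArcGIS Pro service areas; the file names, column names, CRS choice, and the assumption that each district's population is spread evenly over its residential polygon are illustrative only, not the authors' actual workflow), a geopandas version might look like this:

```python
# Hypothetical illustration: estimate the share of the population living inside
# each ground-transport-time service area by overlaying service-area polygons
# on district residential areas. All inputs are assumed placeholders.
import geopandas as gpd

districts = gpd.read_file("districts_residential.gpkg")  # assumed: geometry + "pop" column
service = gpd.read_file("service_area_bands.gpkg")       # assumed: one polygon per band, "band" column (30, 60, 90, ...)

# Reproject to a projected CRS for Korea so polygon areas are in square metres.
districts = districts.to_crs(epsg=5179)
service = service.to_crs(epsg=5179)

districts["full_area"] = districts.geometry.area
total_pop = districts["pop"].sum()

for _, row in service.iterrows():
    band = gpd.GeoDataFrame(geometry=[row.geometry], crs=service.crs)
    inter = gpd.overlay(districts, band, how="intersection")
    # Apportion each district's population by the fraction of its residential area covered.
    covered_pop = (inter.geometry.area / inter["full_area"] * inter["pop"]).sum()
    print(f"within {row['band']} min: {100 * covered_pop / total_pop:.1f}% of population")
```

Running the same loop with whole-territory district polygons instead of residential-only polygons would correspond to the secondary, land-area-based analysis described below.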
To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses. In this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses. Statistical analysis Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). 
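The spline model just described was fitted in R with the rms package; as a minimal, hypothetical illustration of the same idea in Python (the column names, the simplified covariate list, and the input file are assumptions, not the registry's actual schema), a logistic model with a Harrell-style restricted cubic spline of ground transport time could be sketched as follows:

```python
# Hypothetical sketch: logistic model of IVT use against a restricted cubic
# spline of ground transport time with knots at 30/60/90/120 minutes,
# mirroring the idea of rms::rcs + lrm. Inputs below are assumed placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def rcs_basis(x, knots):
    """Harrell's restricted cubic spline basis: the linear term plus k-2 nonlinear terms."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    norm = (t[-1] - t[0]) ** 2
    cols = [x]
    for j in range(len(t) - 2):
        term = (np.clip(x - t[j], 0, None) ** 3
                - np.clip(x - t[-2], 0, None) ** 3 * (t[-1] - t[j]) / (t[-1] - t[-2])
                + np.clip(x - t[-1], 0, None) ** 3 * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(term / norm)
    return np.column_stack(cols)

df = pd.read_csv("registry_extract.csv")                       # assumed one-row-per-patient extract
spline = rcs_basis(df["transport_min"], [30, 60, 90, 120])     # 4 knots -> linear term + 2 spline terms
covars = df[["age", "nihss", "onset_to_door_min"]].to_numpy()  # simplified stand-in covariates
X = sm.add_constant(np.column_stack([spline, covars]))
fit = sm.Logit(df["ivt"].to_numpy().astype(float), X).fit(disp=0)
print(fit.summary())
```

A joint test of the spline coefficients (for example, a likelihood-ratio test against the model without the transport-time terms) would play the role of the P values reported for the spline curves in the results.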
In all analyses, a two-tailed test with P value of less than 0.05 was considered significant. Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). In all analyses, a two-tailed test with P value of less than 0.05 was considered significant. Ethics statement This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants. This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants. Patients: The Clinical Research Collaboration for Stroke in Korea (CRCS-K) is a web-based prospective nationwide multicenter stroke registry.1617 The registry started enrolling patients in April 2008 and the collection of each patient's home address was permitted until August 2014. From the registry, we identified patients with acute ischemic stroke who were over 18 years old and visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset time. The last known normal time was used when the onset of stroke symptom was unclear. Records were excluded if address information was incorrect or if symptom onset or hospital arrival time was unavailable. We also collected the exact time of intravenous tissue plasminogen activator (tPA) administration and groin-puncture time, and calculated onset-to-door, door-to-needle, and door-to-puncture time. We limited our analyses to the addresses within the same provincial area of the treating hospital because transferring acute stroke patients across provincial line is not considered a usual practice. Reperfusion therapy was classified as either IVT alone, vs. EVT or combined IVT and EVT. For patients who were treated with combined IVT and EVT, we identified patients who had received intravenous tPA at an outside hospital and then transferred to the CRCS-K center for EVT, i.e., drip-and-ship paradigm. For an initial stroke severity measurement, we used the National Institutes of Health Stroke Scale (NIHSS) score. For the subtyping of ischemic stroke, we employed the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification with minor modification1819 and classified the subtype as large-artery atherosclerosis (LAA), small-vessel occlusion (SVO), cardioembolism (CE), or undetermined etiology (UDE)/other determined etiology (ODE). 
Travel distance, ground transport time, and service area: In this study, we classified the patients' residence as metropolitan (city with a population of more than 500,000), urban (city with a population of more than 50,000), or rural region according to the local government act of Korea. After geocoding of the home address and treating hospital for each patient, ground transport time was estimated based on the driving time along the best route from the patient's address to the admitting hospitals using a web-based map service which is optimized for actual traffic condition in Korea (https://map.kakao.com/). For analyses, the travel time was divided into 30-minute increments of ground transport time (i.e., 0–30, 31–60, 61–90, 90–120, and > 120 minutes). To project the findings from CRCS-K hospitals to the national level, we estimated the proportion of the Korean population that lived within each incremental service area from 67 stroke centers accredited by the Korean Stroke Society in 2018 (https://www.stroke.or.kr:4454/hospital/index2.php). To do this, we first obtained the service area from 67 stroke centers using the 30-minute incremental ground transport time threshold. Then, we overlaid these service area maps on the residential areas of 255 administrative districts of South Korea. Finally, we calculated the proportion of the population from each administrative district of South Korea within each service area based on intersection of the service area maps and residential areas. In addition, we repeated the same analysis using the whole territorial area of South Korea including both residential and nonresidential areas. Population data for each administrative district in South Korea were from the Korean National Statistical Office as of December 2019.20 ArcGIS Pro (Version 2.4.0; Esri Inc, Redlands, CA, USA) was used for all geographic analyses. Statistical analysis: Baseline characteristics of patients were compared using a test for trend across the ordered groups divided into the 30-minute increments of ground transport time. Regression analyses were used to explore the relationship between ground transport time and the proportion of patients that received reperfusion therapies (either IVT alone or EVT). For the non-linear relationship, a restricted cubic spline function with 4 knots defined at the 30 minutes, 60 minutes, 90 minutes, and 120 minutes of ground transport time was used. In the regression analyses, covariates entered the regression models when they had P values less than 0.10 in the univariate analyses. The regression analyses were performed with R software using the “rms” package (version 3.6.0, R Foundation for Statistical Computing, Vienna, Austria). In all analyses, a two-tailed test with P value of less than 0.05 was considered significant. Ethics statement: This study was approved by the Institutional Review Boards of Jeju National University Hospital (approval No. JEJUNUH 2018-09-003) and all other participating centers with a waiver of informed consent of individual patients because of study subject anonymity and minimal risk to participants. RESULTS: Of 27,122 acute stroke patients who had been admitted to CRCS-K hospitals between April 2008 and August 2014, 12,172 patients met the eligibility criteria (Supplementary Fig. 1). Mean age was 68 ± 13 years, and 59.7% were men. Of 12,172 patients, 2,216 patients (18.2%) were treated with IVT alone and 1238 patients (10.2%) were treated with either EVT alone or combined IVT and EVT. 
The patients who received reperfusion therapy were more likely to be older, to have atrial fibrillation and higher initial NIHSS score (Supplementary Table 1). In this study, 48.2% of the patients' residence were classified as metropolitan, 41.3% as urban, and 10.5% as rural. The average estimated ground transport time from home to hospital was 23 minutes (interquartile range [IQR], 14–39), and median travel distance was 9.4 km (IQR, 4.5–25.1). In this study, 64.4% of the patients lived within 30 minutes or less to the admitting hospital, 88.7% within 60 minutes or less, 96.5% within 90 minutes or less, and 99.2% with 120 minutes or less. Ground transport times of 30, 60, 90, and 120 minutes corresponded to travel distances of 20.4, 53.5, 86.6, and 119.8 km, respectively. Patients that lived farther from admitting hospitals were more likely to be older, to have atrial fibrillation and CE subtype, and to have a greater initial NIHSS score, but were less likely to have hypertension, diabetes, hyperlipidemia, coronary artery disease and had a lower initial systolic blood pressure (Table 1). The ground transport time did not differ between patients treated with IVT only and patients untreated with reperfusion therapy (rank sum test P = 0.450). However, the ground transport time was slightly but significantly greater for patients treated with EVT compared with patients untreated with reperfusion therapy (24 minutes [14–44] vs. 23 minutes [14–39], rank sum test P = 0.015; Fig. 1 and Supplementary Fig. 2). Data are presented as number (%) or mean ± standard deviation. IQR = interquartile range, NIHSS = National Institutes of Health Stroke Scale, LAA = large artery atherosclerosis, CE = cardioembolism, SVO = small-vessel occlusion, UDE = undetermined etiology, ODE = other determined etiology, LDL = low-density lipoprotein, BUN = blood urea nitrogen, IVT = intravenous thrombolysis, EVT = endovascular therapy. CRCS-K = The Clinical Research Collaboration for Stroke in Korea, IVT = intravenous thrombolysis, EVT = endovascular therapy. Exact treatment time was available in 1,669 patients (83.4%) treated with IVT alone. The median door-to-needle time was 40 minutes (30–53) and the median onset-to-needle time was 124 minutes (88–170). As expected, the onset-to-needle time was positively correlated with ground transport time (Pearson coefficient 0.201, P < 0.001) since it took more time to arrive at the hospital. However, the door-to-needle was inversely associated with ground transport time (Pearson coefficient −0.108, P < 0.001). Regarding EVT, exact treatment time for patients treated with EVT was available for 1,077 patients (87.0%). The median door-to-puncture time was 107 minutes (85–135) and the median onset-to-puncture time was 240 minutes (170–340). The onset-to-puncture time was positively correlated with ground transport time (Pearson coefficient 0.192, P < 0.001). Like the door-to-needle time in IVT, the door-to-puncture showed a negative association with ground transport time in patients treated with EVT (Pearson coefficient −0.089, P < 0.001) (Supplementary Fig. 3). The proportion of patients treated with IVT ranged from 17.1% to 22.1% among the groups with 30-minute increments of ground transport time, and proportion of EVT ranged from 7.5% to 12.4% (Supplementary Fig. 4). 
In the restricted cubic spline analyses, the proportion of patients treated with IVT decreased significantly when the patients lived beyond a 90-minute ground transport time from the hospital (unadjusted P = 0.003; adjusted P = 0.006). The proportion of stroke patients treated with EVT showed a similar trend with estimated ground transport time in the restricted cubic spline analyses, but it did not reach statistical significance in both unadjusted and adjusted analyses (unadjusted P = 0.086; adjusted P = 0.105) (Fig. 2). (A) IVT, unadjusted analysis (P = 0.003); (B) IVT, adjusted analysis (P = 0.006); adjusted for age, hypertension, diabetes, atrial fibrillation, stroke history, coronary artery disease, use of an antiplatelet agent, oral anticoagulants or statins before the index stroke, diastolic blood pressure, fasting blood glucose, low-density lipoprotein cholesterol, platelet count, onset-to-arrival time, initial NIHSS score, and TOAST classification. (C) EVT, unadjusted analysis (P = 0.090); (D) EVT, adjusted analysis (P = 0.100); adjusted for age, sex, hypertension, diabetes, atrial fibrillation, hyperlipidemia, smoking, coronary artery disease, use of an antiplatelet agent or oral anticoagulants before the index stroke, systolic blood pressure, hemoglobin, platelet count, blood urea nitrogen, onset-to-arrival time, initial NIHSS score, and TOAST classification. IVT = intravenous thrombolysis, EVT = endovascular therapy, NIHSS = National Institutes of Health Stroke Scale, TOAST = Trial of Org 10172 in Acute Stroke Treatment. Of 779 patients who had been treated with combined IVT-EVT, we could identify exact referral pattern in 439 patients (56.4%). Among 439 patients, 69 patients (15.7%) were treated by drip- and-ship paradigm. The use of drip-and-ship paradigm increased significantly with greater ground transport time (P < 0.001) (Supplementary Table 2). For national coverage from the 67 stroke centers, only 9.3% of the total land area of South Korea was reachable within 30 minutes while 75.0% of the land was accessible within 90 minutes. In contrast, 73.0% of the residential area was reachable within 30 minutes of the ground transport time, and 98.4% of the area was accessible within 90 minutes (Fig. 3 and Table 2). CRCS-K = The Clinical Research Collaboration for Stroke in Korea. DISCUSSION: In this nationwide analysis, we found that more than 95% of patients with acute ischemic stroke who visited one of the 12 CRCS-K hospitals within 12 hours of symptom onset had less than 90 minutes of ground transport time. The proportion of patients treated with IVT decreased significantly when patients had more than 90 minutes of the ground transport time from the hospital. The proportion of stroke patients treated with EVT also showed a similar trend with the transport time. From a national viewpoint, more than 98% of the residential area was accessible within 90 minutes of ground transport time from 67 accredited stroke centers in South Korea. Geographic access to hospitals offering reperfusion therapy can be crucial to successful reperfusion therapy because the time spent on symptom recognition to decision to medical seeking and the transport time from the scene to the hospital are important contributors to prehospital delays. Living close to treating hospitals was associated with a significantly higher chance of receiving thrombolytic therapy in a study conducted in St. 
Louis in the US.8 Interestingly, the increased use of thrombolytic therapy was not explained by earlier arrival time in that study, which indicates the presence of other factors such as the use of ambulance transport and health-seeking behavior. Other previous research that used claims data from more than 100,000 Japanese stroke patients did not find an association between driving time and administration of tissue plasminogen activator.9 Instead, they found a significant association between the use of ambulance and the use of thrombolytic therapy regardless of driving time and population density of the area. However, a recent large study enrolling 118,683 patients from the Get With The Guidelines-Stroke Registry (GWTG-Stroke) in the US found a significant association between a longer driving time to hospital and longer onset to arrival time and lower odds of IVT.10 In our study, the proportion of patients treated with IVT decreased significantly when patients had more than 90 minutes of the ground transport time from the hospital. Our findings suggest that the distance to the hospital will not be a major barrier to the use of IVT in South Korea as more than 98% of the South Korean population lives within 90 min from 67 accredited stroke centers in this study. Although several recent clinical trials have extended the treatment time window for EVT using various perfusion imaging studies, the time interval between stroke onset and reperfusion therapy is still the most important factor affecting eligibility and outcomes for reperfusion therapy in most patients with acute ischemic stroke.212223 For the association between the use of EVT and distance from the hospital, there is not much evidence from the previous literature despite the prehospital delay due to greater transport time could be also crucial to the timely performance of this life-saving therapy for acute ischemic stroke due to large vessel occlusion. In our previous study conducted in California, 94% of the stroke patients lived within a 2-hour ground transport time to the hospital offering the treatment. However, those who lived within 1-hour ground transport time had a significantly increased chance of receiving EVT compared with those who lived beyond 1-hour ground transport time (0.9% vs. 0.2%, P < 0.001).7 In this study, the proportion of stroke patients treated with EVT also decreased when the patients lived beyond a 90-minute ground transport time, however the association did not reach statistical significance. This finding is possibly due to selective referral of the patients who were candidates for EVT from other hospitals although we could not verify this due to lack of information on referral patterns. As with IVT, the ground transport time will not be an important obstacle to receive EVT in South Korea because more than 98% of the population lives within 90 min from 67 stroke centers where most of the EVT procedures would take place. In addition to personal factors such as health-seeking behavior, or recognition of stroke symptoms, various systemic factors can be associated with thrombolysis administration rates in acute ischemic stroke. 
Studies have found that urban location, centralized acute stroke care system, use of ambulance, neurologist staffing, and use of acute stroke protocol had a significant impact on the increased use of thrombolysis.24 According to the recent report on the national averages for acute stroke care in Korea during 2013–2014, median arrival time to the hospital was 6 h and only 50% of patients with ischemic stroke used ambulance.25 Overall, IVT was used in 10.7% and EVT in 3.6% of patients with acute ischemic stroke. IVT seems to have been used more frequently than other countries although the drip-and-ship paradigm was used only in 1%. However, significant regional disparities in the rates of reperfusion therapy were found and substantial number of acute stroke patients were treated with reperfusion therapy at low-volume hospitals.2526 In another research conducted with Korean national stroke audit data, only one-third of patients who were EVT candidates were initially taken to EVT-capable hospitals and initial routing to EVT-capable hospitals was associated with more than two-folds increased chance of receiving EVT compared with initial routing to primary stroke hospital.27 Since South Korea has a universal national health insurance system with a high population density, it is advantageous for treating acute stroke. Therefore, there are a lot of opportunities to improve the rates of reperfusion therapy for stroke patients by enhancing collaboration between stroke centers and emergency medical system, and redefining roles of comprehensive and primary stroke centers with special care for rural residence. As of 2018, 81.4% of the Korean population live in the urban area according to the world bank data.28 Of 12,172 patients treated at 12 CRCS-K hospitals, 1,272 patients (10.5%) lived in rural region. Therefore, the rural population was underrepresented in our study population and the actual proportion of the stroke patients who received IVT or EVT could be lower in rural regions compared with the urban area. In this study, drip-and-ship paradigm was used in only 15.7% of patients treated with combined IVT-EVT and the use of the paradigm significantly correlated with increased ground transport time. This finding is consistent with our previous work that also reported infrequent use of the paradigm (13.2%) among 1843 patients treated with intravenous tPA at the CRCS-K hospitals.29 From a national perspective, the drip-and-ship paradigm was used in only 1.0% of acute stroke patients and only one-third of EVT candidates were initially routed to EVT-capable hospitals.2527 Additionally, interhospital transfer of patients initially routed to primary stroke hospitals to EVT-capable hospital occurred in only 17.4% in Korea. Because the stroke patients who live in the rural region would be the most probable patients who need more than 90 min of ground transportation time to stroke centers, placing EVT-capable hospitals in strategic locations readily accessible to the rural population should be considered in planning nationwide coverage for stroke systems of care. The nationwide stroke systems of care should ensure effective interhospital transfer of patients for further treatment and monitoring whenever it is needed and direct routing of patients suspected of large vessel occlusion to EVT-capable hospitals. This study has several limitations. First, we used the data from 2008 to 2014 when the collection of patients' home addresses was permitted. 
Therefore, the current study may not reflect the exact nature of the EVT population because the use of a stentriever or aspiration catheter was not as common during most of this period. Second, we do not have information on referrals and use of ambulance which could be associated with the increased use of reperfusion therapy. Referral bias might play an important role in the estimating the proportion of EVT among the patients who lived farther from the admitting hospital. Third, various personal and systemic factors could affect the rate of reperfusion therapy and many of them were not measured in our registry. Fourth, we do not have exact information for the annual volume of IVT or EVT at 67 stroke centers, and therefore we could not estimate the accessibility of the Korean population for high-volume centers separately. Lastly, we also do not have exact information on the place where the patient had suffered the stroke that led to the current admission. Patients could have developed stroke symptoms at other places and therefore the estimation of the ground transport time between the patients' residence and the admitting hospital serves only as a proxy for geographic access to treating hospitals. In conclusion, more than 95% of the patients with acute ischemic stroke who visited the hospitals within 12 hours of symptom onset lived within 90 minutes of ground transport time from the hospital in this nationwide study. The use of reperfusion therapy decreased when the patients lived beyond 90 minutes of transport time from the hospital. From a national perspective, more than 98% of the South Korean population was accessible to 67 stroke centers within 90 minutes of the ground transport time.
Background: We investigated the association between geographic proximity to hospitals and the administration rate of reperfusion therapy for acute ischemic stroke. Methods: We identified patients with acute ischemic stroke who visited the hospital within 12 hours of symptom onset from a prospective nationwide multicenter stroke registry. Reperfusion therapy was classified as intravenous thrombolysis (IVT), endovascular therapy (EVT), or combined therapy. The association between the proportion of patients who were treated with reperfusion therapy and the ground transport time was evaluated using a spline regression analysis adjusted for patient-level characteristics. We also estimated the proportion of Korean population that lived within each 30-minute incremental service area from 67 stroke centers accredited by the Korean Stroke Society. Results: Of 12,172 patients (mean age, 68 ± 13 years; men, 59.7%) who met the eligibility criteria, 96.5% lived within 90 minutes of ground transport time from the admitting hospital. The proportion of patients treated with IVT decreased significantly when stroke patients lived beyond 90 minutes of the transport time (P = 0.006). The proportion treated with EVT also showed a similar trend with the transport time. Based on the residential area, 98.4% of Korean population was accessible to 67 stroke centers within 90 minutes. Conclusions: The use of reperfusion therapy for acute stroke decreased when patients lived beyond 90 minutes of the ground transport time from the hospital. More than 95% of the South Korean population was accessible to 67 stroke centers within 90 minutes of the ground transport time.
null
null
5,940
291
[ 334, 324, 165, 50 ]
8
[ "time", "patients", "stroke", "evt", "transport", "transport time", "ground", "ground transport", "ground transport time", "hospital" ]
[ "reperfusion therapy time", "rates reperfusion therapy", "stroke access hospitals", "stroke patients receiving", "transferring acute stroke" ]
null
null
[CONTENT] Ischemic Stroke | Reperfusion | Thrombolysis | Endovascular Treatment | Utilization [SUMMARY]
[CONTENT] Ischemic Stroke | Reperfusion | Thrombolysis | Endovascular Treatment | Utilization [SUMMARY]
[CONTENT] Ischemic Stroke | Reperfusion | Thrombolysis | Endovascular Treatment | Utilization [SUMMARY]
null
[CONTENT] Ischemic Stroke | Reperfusion | Thrombolysis | Endovascular Treatment | Utilization [SUMMARY]
null
[CONTENT] Administration, Intravenous | Aged | Aged, 80 and over | Combined Modality Therapy | Endovascular Procedures | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Registries | Republic of Korea | Thrombolytic Therapy | Time Factors | Time-to-Treatment [SUMMARY]
[CONTENT] Administration, Intravenous | Aged | Aged, 80 and over | Combined Modality Therapy | Endovascular Procedures | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Registries | Republic of Korea | Thrombolytic Therapy | Time Factors | Time-to-Treatment [SUMMARY]
[CONTENT] Administration, Intravenous | Aged | Aged, 80 and over | Combined Modality Therapy | Endovascular Procedures | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Registries | Republic of Korea | Thrombolytic Therapy | Time Factors | Time-to-Treatment [SUMMARY]
null
[CONTENT] Administration, Intravenous | Aged | Aged, 80 and over | Combined Modality Therapy | Endovascular Procedures | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Registries | Republic of Korea | Thrombolytic Therapy | Time Factors | Time-to-Treatment [SUMMARY]
null
[CONTENT] reperfusion therapy time | rates reperfusion therapy | stroke access hospitals | stroke patients receiving | transferring acute stroke [SUMMARY]
[CONTENT] reperfusion therapy time | rates reperfusion therapy | stroke access hospitals | stroke patients receiving | transferring acute stroke [SUMMARY]
[CONTENT] reperfusion therapy time | rates reperfusion therapy | stroke access hospitals | stroke patients receiving | transferring acute stroke [SUMMARY]
null
[CONTENT] reperfusion therapy time | rates reperfusion therapy | stroke access hospitals | stroke patients receiving | transferring acute stroke [SUMMARY]
null
[CONTENT] time | patients | stroke | evt | transport | transport time | ground | ground transport | ground transport time | hospital [SUMMARY]
[CONTENT] time | patients | stroke | evt | transport | transport time | ground | ground transport | ground transport time | hospital [SUMMARY]
[CONTENT] time | patients | stroke | evt | transport | transport time | ground | ground transport | ground transport time | hospital [SUMMARY]
null
[CONTENT] time | patients | stroke | evt | transport | transport time | ground | ground transport | ground transport time | hospital [SUMMARY]
null
[CONTENT] therapy | acute | stroke | reperfusion | reperfusion therapy | receiving | patients | ischemic | ischemic stroke | acute ischemic stroke [SUMMARY]
[CONTENT] stroke | time | service | service area | analyses | area | patients | korea | regression | transport [SUMMARY]
[CONTENT] time | patients | minutes | treated | fig | adjusted | evt | ground | ivt | transport [SUMMARY]
null
[CONTENT] stroke | time | patients | evt | transport | transport time | ground | ground transport | ground transport time | minutes [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] 12 hours ||| IVT | EVT ||| ||| Korean | 30-minute | 67 | the Korean Stroke Society [SUMMARY]
[CONTENT] 12,172 | 68 | 13 years | 59.7% | 96.5% | 90 minutes ||| IVT | 90 minutes | 0.006 ||| EVT ||| 98.4% | Korean | 67 | 90 minutes [SUMMARY]
null
[CONTENT] ||| 12 hours ||| IVT | EVT ||| ||| Korean | 30-minute | 67 | the Korean Stroke Society ||| 12,172 | 68 | 13 years | 59.7% | 96.5% | 90 minutes ||| IVT | 90 minutes | 0.006 ||| EVT ||| 98.4% | Korean | 67 | 90 minutes ||| 90 minutes ||| More than 95% | South Korean | 67 | 90 minutes [SUMMARY]
null
Medical science students' experiences of test anxiety: a phenomenological study.
35906665
Studies show that test anxiety is a common disorder in students and a cause of academic failure. There are not enough studies or a specific theoretical background on test anxiety and ways to deal with it, so the purpose of this study was to conduct a qualitative study to fully understand the ways medical sciences students deal with test anxiety.
INTRODUCTION
This is a qualitative study. The participants were students in the last 2 years of pharmacy, medicine and dentistry at Isfahan University of Medical Sciences. Ten students were selected by purposeful sampling, and interviews continued until data saturation was reached and no new data emerged. The data were analyzed using the seven-step Colaizzi method.
MATERIALS AND METHODS
After analyzing the data, about 50 codes were extracted. These codes were divided into 16 subclasses, from which five main themes were ultimately extracted: "Prayer and dialogue with God", "Interaction and communication with friends and relatives", "Studying strategies", "Finding ways to relax and self-care" and "Negative strategies".
FINDINGS
The results of this study showed that students most often try to use positive strategies to overcome test anxiety, but some students resort to undesirable strategies, such as misuse of prescription drugs and preparing cheat sheets, which lead to many complex problems. The educational system should make its utmost effort to empower students to manage test anxiety by learning the best strategies.
CONCLUSIONS
[ "Anxiety", "Humans", "Learning", "Qualitative Research", "Students, Medical", "Test Anxiety" ]
9336078
Introduction
One of the concerns of the educational system in universities is test anxiety [1]. Test anxiety is a special type of anxiety that is characterized by physical, cognitive, and behavioral symptoms when preparing for and taking a test; it becomes a problem when high levels of anxiety interfere with preparing for and taking the test. Test anxiety is characterized by severe fear of poor performance in tests [2]. In fact, test anxiety is an emotional experience of feelings and worry in situations in which a person feels that his or her performance is being evaluated [3]. Test anxiety is defined as the experience of fear, apprehension, and worry before, during, or after a test that can lead to mental distraction, memory impairment, and physical symptoms such as nausea, headache, and tachycardia [4]. This type of anxiety emerges between the ages of 10 and 12, increases with age, and peaks in higher education [5]. According to estimates by researchers in the United States, 15% of students experience different levels of anxiety annually [6]. In the study by Tsegay et al. [7] it was reported that the rate of test anxiety in medical students was 52.3% and significantly higher in female students. Test anxiety is important because it affects test success [8]. Test anxiety can cause academic failure by reducing intrinsic motivation and existing cognitive ability [9]. Sarason [10] considers test anxiety a self-preoccupation characterized by self-defeat and doubt about one's abilities. This anxiety often leads to negative cognitive assessment, undesirable physiological reactions, and a decline in academic performance. As a result, there is a significant inverse relationship between anxiety scores and test scores [10]. Various studies have shown that test anxiety has a significant and important effect on the academic performance of students. Cassady [11] showed in his study that students with higher test anxiety had weaker academic performance [11]. Students try to cope with anxiety in order to manage it. Coping is the effort to control and manage situations that seem dangerous and tense [12]. There are two main strategies for coping with anxiety, known as problem-focused coping and emotion-focused coping. In problem-focused coping, the main objective is to master the situation and change the source of anxiety, whereas in emotion-focused coping the main objective is to reduce or transform the emotional disturbance quickly [13]. Narimany et al. stated that students manage their anxiety through conventional methods such as thinking about good memories of the past, praying, and eliminating negative thoughts and beliefs [14]. Previous descriptive studies have focused more on estimating the extent to which positive methods have been used, and less on the overall evaluation of all coping strategies that students use. Also, in this study, an attempt has been made to study the strategies of medical students for dealing with test anxiety in an Eastern and Islamic country. In Eastern countries, beliefs differ from those in some other countries where previous studies have been conducted, so these findings can help improve knowledge in this area. On the other hand, this study selected medical students as the target population because these students face a high volume of difficult course material. They may also choose different strategies to deal with test anxiety. 
In this regard, the present study was designed as a qualitative study in this target group and conducted with the aim of identifying strategies for dealing with test anxiety in medical students.
null
null
null
null
Conclusions
The results of this study showed that students most often try to use positive strategies to overcome test anxiety, but some students resort to undesirable strategies, such as misuse of prescription drugs and preparing cheat sheets, which lead to many complex problems. Given that tests are an important part of academic life and that every student is repeatedly involved in tests and academic evaluation, the educational system should make its utmost effort to empower students to manage anxiety by learning the best strategies.
[ "Findings" ]
[ "Ten students participated in the study, of which 7 persons were females and 3 persons were males. Students were selected of pharmacy, medicine and dentistry.\nAfter analyzing the data, about 50 codes were\nextracted. These codes are divided into 16 subclasses, and among them,\nultimately, five main themes, called “prayer\nand dialogue with God”, “Interaction and communication with friends and relatives”,\n“studying strategies”, “Finding ways to relax and self-care” and “Negative\nstrategies” were extracted.\nPrayer and dialogue with God: Pray to God and trust in God led to the extraction of this theme. Participant No. 3 stated: “I am asking the mother to pray for me to take my test successfully and reduce my anxiety”. Participant No. 7 stated that “I believe in the Jafar e Tayyar prayers (it is a special continuous prayer for one’s requests from God) and at night before of test, and I would be very calm”.\nParticipant No. 1 stated that “I already have a good relationship with the Quran (Islam’s book), and I read a Quran page before the test and it calmed me”.\nThe other subtheme was trusting in God. A student on this subtheme stated that “I vow about tests that I have a lot of anxiety, and this creates peace in me”.\nParticipant No.3 said: “I trust in God and I ask him to help me, and thereby keep calm down and begin studying with greater focus”.\n\nInteraction and communication with friends and relatives: This theme was included two subthemes: “communication with the family” and “communication with friends”. Participant No. 4 about communication with the family said, “I talk to my parents over the phone at the test night in the dormitory, and they will calm me with their words”.\nOn communication with friends, Participant No. 10 stated: “I talk to friends who are very intimate and express my anxiety and this reduces my anxiety”. Participant No. 2 said, “I talk to classmates who have a joint test, and talk about the test. This will reduce my anxiety”.\n\nStudying strategies: This theme includes two subthemes: “More effort in education” and “Applying different study strategies”. For “Studying earlier” Participant No. 1 stated that “I’ve been studying for weeks before the test, this will reduce my anxiety”.\nParticipant No. 4 about using the different strategies of study, said that “I will try to study with other friends (group study) for any test that I have anxiety”. Participant No. 3 stated: “I am studying similar questions to reduce my anxiety”. Participant No. 7 said that; “Reading the summary of the important content that other friends extracting from my booklet and book reduces my anxiety”.\n\nFinding ways to relax and self-care: it was another major theme that was included subthemes: “relaxing activities”, “exercise”, and “consuming Caffeinated beverages”.\nRegarding this theme, Participant No. 5 stated that; “When I have anxiety due to the test, I try to listen to my favorite music. This will manage my thoughts and also do not sleep”. Or Participant No. 1 said that; “Walking in the open air helps me become more fluent and calm down”. Participant No. 3 noted that “I usually drink coffee or Nescafe when I have hard and difficult test, this will reduce my anxiety”.\n\n“Negative strategies” This theme has two subthemes: “drug abuse” and “rely on cheating”. Participant No. 9 about rely to the cheating said that “I am cheating to reduce the anxiety of the test, even if I do not intend to use it, this makes calm down me”. Participant No. 
8 about drug abuse stated that, “I’m eating a Propranolol 10 when I having an overwhelming anxiety”. Participant No. 6 stated that, “I used the Ritalin tablet when I fear from test and anxietyed me, although it did not work as a result of my test”." ]
[ null ]
[ "Introduction", "Materials and methods", "Findings", "Discussion", "Conclusions" ]
[ "One of the concerns of the educational system in universities is the test anxiety [1]. Test anxiety is a special type of anxiety that is characterized by physical, cognitive, and behavioral symptoms when preparing for the test and performing test, It becomes a problem when high levels of anxiety interfere with preparing for and taking the test. Test anxiety is characterized by severe fear of poor performance in tests [2]. In fact, the test anxiety is an emotional experience, feelings and anxiety in situations in which a person feels that his/her performance is evaluated [3]. Test anxiety is defined as the experience of fear, apprehension, and worry before, during, or after a test that can lead to mental distraction, memory impairment, and physical symptoms such as nausea, headache, and tachycardia [4].\nThis type of anxiety occurs between the ages of 10 and 12 and increases with increasing age and maximizes in higher education [5]. According to estimates by researchers in the United States, 15% of students experience different levels of anxiety annually [6]. In the study by Tsegay et al. [7] it was reported that the rate of test anxiety in medical students was 52.3% and significantly higher in female students. Test anxiety is important because it affects test success [8]. Test anxiety can cause academic failure by reducing intrinsic motivation and existing cognitive ability [9]. Sarason [10] considers test anxiety as a self-preoccupation, which is characterized by self-defeat and doubt about his/her abilities. this anxiety often leads to negative cognitive assessment, undesirable physiological reactions and the decline of academic performance. As a result, there is a significant reverse relationship between anxiety scores and test scores [10]. Various studies have shown that test anxiety has a significant and important effect on the academic performance of students. Cassady [11] showed in his study that students with higher test anxiety had a weaker academic performance [11].\nStudents try to cope with anxiety in order to manage it. Coping is the effort to control and manage situations that seem dangerous and tense [12]. There are two main strategies for coping with anxiety that are known as problem-focused coping and emotion-focused coping strategies. The problem-focused coping strategy in which the main objective is to dominate the position and change in the source of anxiety, and emotion-focused coping strategies in which the main objective is to reduce or transform the emotional disturbance quickly [13].\nNarimany and et al. stated that students manage their anxiety through conventional methods such as: they think about good memories of the past, praying and eliminating negative thoughts and beliefs [14].\nIn previous studies, descriptive studies have focused more on estimating the extent to which positive methods have been used, and less on the overall evaluation of all coping strategies that students use. Also, in this study, an attempt has been made to study the strategies of medical students to deal with test anxiety in Eastern and Islamic countries. In Eastern countries, beliefs are different from some other countries where previous studies have been conducted, so its findings can help improve knowledge in this area. On the other hand, this study has selected medical students as the target population because these students face a high volume of difficult. They may also choose different strategies to deal with test anxiety. 
In this regard, the present study was designed in qualitative study in this target group and conducted with the aim of identifying strategies to deal with test anxiety in medical students.", "This study is qualitative. The qualitative research method adopted for this study is interpretative phenomenological analysis (IPA), because IPA is a qualitative thematic approach developed within psychology, focusing on the subjective lived experiences of individuals. The participants were students of pharmacy, medicine and dentistry at Isfahan University of Medical Sciences. A purposeful method was used for targeted sampling. Inclusion criteria included studying in the last 2 years of dentistry, medicine and pharmacy, as well as having the desire to participate in the study. Sampling was continued until data saturation.\nThis study received ethical approval from the Institutional Review Board (IRB) to which the researchers are affiliated. All study protocols were performed in accordance with the Declaration of Helsinki. This study considered ethical considerations such as the confidentiality of the interviewees’ names and the written consent of interviewees. Interviews were conducted in 2020. Informed consent of every participant was obtained after clearly explaining the objectives as well as the significance of the study for each participant. We advised the participants about the right to participate as well as refuse or discontinue participation at any time they want and the chance to ask anything about the study. The participants were also advised that all data collected would remain confidential.\nData collection was done in a semi-structured interview. The questions were about the test anxiety manage strategies used by students. Interviews continued until data saturation and lack of access to new data. Interview questions were semi-structured and probing questions were asked. Every interview lasted roughly between 30 and 70 min. Interviews were recorded by audio tape and at the earliest opportunity verbatim transcription of interview data was done. Statements were written word-by-word and then were manually coded.\nData was analyzed by the seven-level Colaizzi method. The Colaizzi steps were performed as follows; (1) Transcribing all the participants’ descriptions. participant narratives transcribed from the audio-taped interviews. We didn’t use any software. (2) Extracting significant statements, statements that directly relate to the test anxiety. The researchers repeated all participants’ descriptions and in order to understand these concepts, it was felt by them, then extracted the sentences and vocabulary related to the phenomenon under study and gave a special meaning to each of the extracted sentences. (3) Creating formulated meanings. In this stage, each significant statement is extracted from the participant’s narratives. (4) Aggregating formulated meanings into theme clusters. We organized formulated meanings into groups of similar type. (5) Developing an exhaustive description. An exhaustive description developed through a synthesis of all theme clusters and associated formulated meanings explicated by the researchers. (6) After articulation of the symbolic representation which occurred during the interview. Researchers did an interpretative analysis of symbolic representations for test anxiety. 
(7) We tried to identify the fundamental structure of test anxiety by explication, through a rigorous analysis of the exhaustive description of it.\nIn order to ensure the accuracy of the data, rigor and trustworthiness were established based on the Guba and Lincoln (1994) criteria, which include credibility, dependability, confirmability and transferability [15, 16]. Therefore, we used member checking, researcher credibility, prolonged engagement (semi-structured interviews) with the participants, and peer debriefing. A follow-up appointment was made between the researcher and each participant for the purpose of validating the essence of the phenomenon with the students.\nTo further ensure the rigor and trustworthiness of the data, we established comfortable interactions at the beginning of the interviews, which were maintained until the end of each interview. Participants were also surveyed about the codes for approval after each interview. The data, coding and themes were also reviewed by an expert in this subject.", "Ten students participated in the study, of whom 7 were female and 3 were male. Students were selected from pharmacy, medicine and dentistry.\nAfter analyzing the data, about 50 codes were extracted. These codes were divided into 16 subclasses, and from them, ultimately, five main themes were extracted: “prayer and dialogue with God”, “Interaction and communication with friends and relatives”, “studying strategies”, “Finding ways to relax and self-care” and “Negative strategies”.\nPrayer and dialogue with God: Praying to God and trusting in God led to the extraction of this theme. Participant No. 3 stated: “I am asking the mother to pray for me to take my test successfully and reduce my anxiety”. Participant No. 7 stated that “I believe in the Jafar e Tayyar prayers (it is a special continuous prayer for one’s requests from God) and at night before of test, and I would be very calm”.\nParticipant No. 1 stated that “I already have a good relationship with the Quran (Islam’s book), and I read a Quran page before the test and it calmed me”.\nThe other subtheme was trusting in God. A student on this subtheme stated that “I vow about tests that I have a lot of anxiety, and this creates peace in me”.\nParticipant No. 3 said: “I trust in God and I ask him to help me, and thereby keep calm down and begin studying with greater focus”.\n\nInteraction and communication with friends and relatives: This theme included two subthemes: “communication with the family” and “communication with friends”. Regarding communication with the family, Participant No. 4 said, “I talk to my parents over the phone at the test night in the dormitory, and they will calm me with their words”.\nOn communication with friends, Participant No. 10 stated: “I talk to friends who are very intimate and express my anxiety and this reduces my anxiety”. Participant No. 2 said, “I talk to classmates who have a joint test, and talk about the test. This will reduce my anxiety”.\n\nStudying strategies: This theme includes two subthemes: “More effort in education” and “Applying different study strategies”. Regarding studying earlier, Participant No. 1 stated that “I’ve been studying for weeks before the test, this will reduce my anxiety”.\nParticipant No. 4, about using different study strategies, said that “I will try to study with other friends (group study) for any test that I have anxiety”. Participant No. 3 stated: “I am studying similar questions to reduce my anxiety”. Participant No. 
7 said that; “Reading the summary of the important content that other friends extracting from my booklet and book reduces my anxiety”.\n\nFinding ways to relax and self-care: it was another major theme that was included subthemes: “relaxing activities”, “exercise”, and “consuming Caffeinated beverages”.\nRegarding this theme, Participant No. 5 stated that; “When I have anxiety due to the test, I try to listen to my favorite music. This will manage my thoughts and also do not sleep”. Or Participant No. 1 said that; “Walking in the open air helps me become more fluent and calm down”. Participant No. 3 noted that “I usually drink coffee or Nescafe when I have hard and difficult test, this will reduce my anxiety”.\n\n“Negative strategies” This theme has two subthemes: “drug abuse” and “rely on cheating”. Participant No. 9 about rely to the cheating said that “I am cheating to reduce the anxiety of the test, even if I do not intend to use it, this makes calm down me”. Participant No. 8 about drug abuse stated that, “I’m eating a Propranolol 10 when I having an overwhelming anxiety”. Participant No. 6 stated that, “I used the Ritalin tablet when I fear from test and anxietyed me, although it did not work as a result of my test”.", "In the present study, with a qualitative approach and interviews with students, researchers tried to identify and describe the experiences of strategies for coping with the anxiety of the test among students in dentistry, medicine and pharmacy.\nOne of the main themes that students use to cope test anxiety is “prayer and dialogue with God”. Students use prayer and dialogue with God to reduce their anxiety. Among the religious and spiritual sources, the greatest source used to reduce anxiety and anxiety is “prayer”. The prayer is derived from the Latin word of precarious meaning “obtained by pray and pleasure” [17]. The other studies showed praying is an effective strategy to cope with test anxiety. Adroishi et al. [18] showed that listening to pray could significantly reduce the test anxiety in students. The study of Masomi et al. showed that listening to the Quran sound, such as the sound of music before the test, is an effective strategy for the anxiety test, and the Quran sound is more effective in reducing student test anxiety [19]. In a study conducted by Narimany et al., 44.5% and 27.5% of the students used the use of prayer as a strategy for coping with anxiety [14]. Also, training religious values [20] and spiritual training, including prayer, forgiveness, transcendental and spiritual meditation [21] have also been reported to reduce the depression and anxiety of students. Ganji et al. also reported that religious beliefs are related with anxiety levels and reduce it [22]. In Papazisis et al. research, strong religious and spiritual beliefs have positive relationships increasing, the self-confidence, and a negative relationship with depression, anxiety and anxiety as a personality trait [23].\nBased on the findings of various studies, spiritual health determines the integrity of the person, and is the only force that harmonizes the physical, psychological and social dimensions. Religious and spiritual beliefs make a person calm and play a vital role in adapting to tension [24]. Most people believe that the influence of uncontrollable positions can be controlled through relay to God [25]. 
Religion, spirituality, and existential health are important predictors of mental health [26, 27].\nAnother strategy students used to calm down and manage their anxiety was to communicate with people to whom they had some kind of attachment. In fact, communication with people such as parents or intimate friends relaxed them and helped them better control the tension of the test. Cassidy and Shaver, in their justification of the relationship between attachment style and mental health, stated that the consequence of a safe attachment process is the creation of a sense of safety in a person, whereas the consequence of an unsafe attachment process is the creation of fear [28]. Roberts et al., in their justification of this relationship, believe that the psychological consequence of unsafe attachment styles in tense situations is anxiety and depression, whereas the psychological consequence of a safe attachment style in such situations is mental relaxation [29]. By studying 314 adult survivors of the Bam earthquake, Rahimian Bouger et al. found a significant positive relationship between a safe attachment style and mental health, and a significant negative relationship between avoidant and ambivalent attachment styles and mental health [30]. The study of Besharat et al. revealed that subjects with a safe attachment style had fewer interpersonal problems than those with an unsafe style, and that those with an avoidant attachment style had fewer interpersonal problems than those with an ambivalent style. The results of that study point to safe attachment as a primary requirement and to its transfer to subsequent generations [31]. Safford has shown that people with an unsafe attachment style are more likely to experience anxiety and depression [32].\nThe study by Morey and Taylor [33], which was similar to the present study in being qualitative with in-depth interviews, found that exercise and talking to friends are two important strategies for students to deal with stress. Fujii [34] reported that among the ways students deal with English test anxiety are “cooperation with others” and “building confidence”.\n\nStudying strategies, which were among the students’ strategies to cope with test anxiety, can be considered a kind of sensitization to manage the anxiety and the situation. These are effective and positive strategies that can help the student, in a constructive way, to gain more control, to manage negative thoughts, and to reduce their own anxiety through such actions. Motevalli et al. [35] reported that teaching new and practical study skills helps students manage test anxiety. They found that learning skills such as time management for studying, proper reviewing and summarizing, and how to answer multiple-choice, correct/incorrect and descriptive questions can control test anxiety [35]. Yusefzadeh et al. [36] reported that, based on the findings of a training and evaluation program (in which the teacher bases 10–35% of the final course grade on activity during the semester), the implementation of such programs significantly reduces students’ test anxiety. Ozbiçakçi et al. [37] also believe that the teaching method can reduce test anxiety. However, L. Hsu mentioned that further research is needed to determine best practices for alleviating student stress and anxiety [38].\n\n“Finding ways to relax and self-care” is the other main theme for coping with test anxiety. It included relaxing activities, consuming caffeine and soothing food. Findings of the study by Mojarrab et al. 
[39] showed that the use of relaxation techniques along with playing soothing music reduces nursing students’ test anxiety and also improves their clinical test scores. Morey and Taylor [33] reported that many students believed exercise would distract them from test anxiety and lead to be calm. Walking, running and yoga were among the activities that students focused on, with more emphasis on walking [33].\nIn this study, negative strategies were also used to manage the anxiety by the students, such as Unusual use of some medications. Evidences show that the consumption of stimulants and non-drug use of a variety of drugs is one of the threats to students. Several evidences suggest that consuming drug abuse among young prople, especially students, is increasing. Methylphenidate or Ritalin is one of the most widely used drugs that has recently been abused among adolescents, especially students [40]. Many students take this medicine to stay awake for several hours and to maintain their unusual focuses for a long time. During the tests, the use of such drugs, including Ritalin, increases. Ritalin is the most common prescribed psychotropic drug for children in the USA [14]. Currently, non-medical consumption of prescription stimulants, including Methylphenidate, is a growing problem among students in the USA. Statistics have shown that 7% of American students have at least once used this drug over their lifetime, and the prevalence of Methylphenidate consumption among students over the course of 1 year was 3% [13]. In this regard, several studies have reported 3–35% of Ritalin abuse among students [41–46]. The most common side effects of consuming Ritalin are insomnia, nervousness, anxiety, headache, and loss of appetite. In excessive consumption, restlessness, delirium, psychosis, hypertension, seizure, hyperthermia, and arrhythmias may occur [47].\n\nWriting cheating was one of the other negative strategies that students used to reduce their test anxiety, although they had no intention of using this cheating during the test. The student, by writing cheat, wants to create a reliance point on himself/herself and thereby calm himself down and overcome his anxiety. In fact, cheating as a reliance point of its existence is an important part of assurance, not to applying it.", "The result of this study showed that students often use positive strategies to overcome the test anxiety and try to use positive strategies, but some students are advised of undesirable strategies such as misuse of authorized drug and writing cheating that lead to a lot of complex problems. Given that the test are an important part of academic life and that every student is always involved in a test and study evaluation, the educational system should do its utmost effort to empower students to manage anxiety by learning the best strategies." ]
[ "introduction", "materials|methods", null, "discussion", "conclusion" ]
[ "Test anxiety", "Higher education", "Coping strategies", "Qualitative study" ]
Introduction: One of the concerns of the educational system in universities is the test anxiety [1]. Test anxiety is a special type of anxiety that is characterized by physical, cognitive, and behavioral symptoms when preparing for the test and performing test, It becomes a problem when high levels of anxiety interfere with preparing for and taking the test. Test anxiety is characterized by severe fear of poor performance in tests [2]. In fact, the test anxiety is an emotional experience, feelings and anxiety in situations in which a person feels that his/her performance is evaluated [3]. Test anxiety is defined as the experience of fear, apprehension, and worry before, during, or after a test that can lead to mental distraction, memory impairment, and physical symptoms such as nausea, headache, and tachycardia [4]. This type of anxiety occurs between the ages of 10 and 12 and increases with increasing age and maximizes in higher education [5]. According to estimates by researchers in the United States, 15% of students experience different levels of anxiety annually [6]. In the study by Tsegay et al. [7] it was reported that the rate of test anxiety in medical students was 52.3% and significantly higher in female students. Test anxiety is important because it affects test success [8]. Test anxiety can cause academic failure by reducing intrinsic motivation and existing cognitive ability [9]. Sarason [10] considers test anxiety as a self-preoccupation, which is characterized by self-defeat and doubt about his/her abilities. this anxiety often leads to negative cognitive assessment, undesirable physiological reactions and the decline of academic performance. As a result, there is a significant reverse relationship between anxiety scores and test scores [10]. Various studies have shown that test anxiety has a significant and important effect on the academic performance of students. Cassady [11] showed in his study that students with higher test anxiety had a weaker academic performance [11]. Students try to cope with anxiety in order to manage it. Coping is the effort to control and manage situations that seem dangerous and tense [12]. There are two main strategies for coping with anxiety that are known as problem-focused coping and emotion-focused coping strategies. The problem-focused coping strategy in which the main objective is to dominate the position and change in the source of anxiety, and emotion-focused coping strategies in which the main objective is to reduce or transform the emotional disturbance quickly [13]. Narimany and et al. stated that students manage their anxiety through conventional methods such as: they think about good memories of the past, praying and eliminating negative thoughts and beliefs [14]. In previous studies, descriptive studies have focused more on estimating the extent to which positive methods have been used, and less on the overall evaluation of all coping strategies that students use. Also, in this study, an attempt has been made to study the strategies of medical students to deal with test anxiety in Eastern and Islamic countries. In Eastern countries, beliefs are different from some other countries where previous studies have been conducted, so its findings can help improve knowledge in this area. On the other hand, this study has selected medical students as the target population because these students face a high volume of difficult. They may also choose different strategies to deal with test anxiety. 
In this regard, the present study was designed in qualitative study in this target group and conducted with the aim of identifying strategies to deal with test anxiety in medical students. Materials and methods: This study is qualitative. The qualitative research method adopted for this study is interpretative phenomenological analysis (IPA), because IPA is a qualitative thematic approach developed within psychology, focusing on the subjective lived experiences of individuals. The participants were students of pharmacy, medicine and dentistry at Isfahan University of Medical Sciences. A purposeful method was used for targeted sampling. Inclusion criteria included studying in the last 2 years of dentistry, medicine and pharmacy, as well as having the desire to participate in the study. Sampling was continued until data saturation. This study received ethical approval from the Institutional Review Board (IRB) to which the researchers are affiliated. All study protocols were performed in accordance with the Declaration of Helsinki. This study considered ethical considerations such as the confidentiality of the interviewees’ names and the written consent of interviewees. Interviews were conducted in 2020. Informed consent of every participant was obtained after clearly explaining the objectives as well as the significance of the study for each participant. We advised the participants about the right to participate as well as refuse or discontinue participation at any time they want and the chance to ask anything about the study. The participants were also advised that all data collected would remain confidential. Data collection was done in a semi-structured interview. The questions were about the test anxiety manage strategies used by students. Interviews continued until data saturation and lack of access to new data. Interview questions were semi-structured and probing questions were asked. Every interview lasted roughly between 30 and 70 min. Interviews were recorded by audio tape and at the earliest opportunity verbatim transcription of interview data was done. Statements were written word-by-word and then were manually coded. Data was analyzed by the seven-level Colaizzi method. The Colaizzi steps were performed as follows; (1) Transcribing all the participants’ descriptions. participant narratives transcribed from the audio-taped interviews. We didn’t use any software. (2) Extracting significant statements, statements that directly relate to the test anxiety. The researchers repeated all participants’ descriptions and in order to understand these concepts, it was felt by them, then extracted the sentences and vocabulary related to the phenomenon under study and gave a special meaning to each of the extracted sentences. (3) Creating formulated meanings. In this stage, each significant statement is extracted from the participant’s narratives. (4) Aggregating formulated meanings into theme clusters. We organized formulated meanings into groups of similar type. (5) Developing an exhaustive description. An exhaustive description developed through a synthesis of all theme clusters and associated formulated meanings explicated by the researchers. (6) After articulation of the symbolic representation which occurred during the interview. Researchers did an interpretative analysis of symbolic representations for test anxiety. (7) We tried to identify the fundamental structure of the test anxiety by explication’ through a rigorous analysis of the exhaustive description of it. 
In order to ensure the accuracy of the data, rigor and trustworthiness was determined based on Guba and Lincholn criteria (1994) which include Credibility, Dependability, Confirmability and Transferability [15, 16]. Therefore, we used member checking, researcher creditability, prolonged engagement (semi-structured interview) with the participants, the use of peer debriefing. A follow-up appointment was made between the researcher and each participant for the purpose of validating the essence of the phenomenon with students. In order to get rigor and trustworthiness data, we established comfortable interactions at the beginning of the interviews, which was maintained until the end of the interview. Participants were also surveyed about the codes for approval after each interview. Data, coding and themes were also reviewed by an expert in this subject. Findings: Ten students participated in the study, of which 7 persons were females and 3 persons were males. Students were selected of pharmacy, medicine and dentistry. After analyzing the data, about 50 codes were extracted. These codes are divided into 16 subclasses, and among them, ultimately, five main themes, called “prayer and dialogue with God”, “Interaction and communication with friends and relatives”, “studying strategies”, “Finding ways to relax and self-care” and “Negative strategies” were extracted. Prayer and dialogue with God: Pray to God and trust in God led to the extraction of this theme. Participant No. 3 stated: “I am asking the mother to pray for me to take my test successfully and reduce my anxiety”. Participant No. 7 stated that “I believe in the Jafar e Tayyar prayers (it is a special continuous prayer for one’s requests from God) and at night before of test, and I would be very calm”. Participant No. 1 stated that “I already have a good relationship with the Quran (Islam’s book), and I read a Quran page before the test and it calmed me”. The other subtheme was trusting in God. A student on this subtheme stated that “I vow about tests that I have a lot of anxiety, and this creates peace in me”. Participant No.3 said: “I trust in God and I ask him to help me, and thereby keep calm down and begin studying with greater focus”. Interaction and communication with friends and relatives: This theme was included two subthemes: “communication with the family” and “communication with friends”. Participant No. 4 about communication with the family said, “I talk to my parents over the phone at the test night in the dormitory, and they will calm me with their words”. On communication with friends, Participant No. 10 stated: “I talk to friends who are very intimate and express my anxiety and this reduces my anxiety”. Participant No. 2 said, “I talk to classmates who have a joint test, and talk about the test. This will reduce my anxiety”. Studying strategies: This theme includes two subthemes: “More effort in education” and “Applying different study strategies”. For “Studying earlier” Participant No. 1 stated that “I’ve been studying for weeks before the test, this will reduce my anxiety”. Participant No. 4 about using the different strategies of study, said that “I will try to study with other friends (group study) for any test that I have anxiety”. Participant No. 3 stated: “I am studying similar questions to reduce my anxiety”. Participant No. 7 said that; “Reading the summary of the important content that other friends extracting from my booklet and book reduces my anxiety”. 
Finding ways to relax and self-care: it was another major theme that was included subthemes: “relaxing activities”, “exercise”, and “consuming Caffeinated beverages”. Regarding this theme, Participant No. 5 stated that; “When I have anxiety due to the test, I try to listen to my favorite music. This will manage my thoughts and also do not sleep”. Or Participant No. 1 said that; “Walking in the open air helps me become more fluent and calm down”. Participant No. 3 noted that “I usually drink coffee or Nescafe when I have hard and difficult test, this will reduce my anxiety”. “Negative strategies” This theme has two subthemes: “drug abuse” and “rely on cheating”. Participant No. 9 about rely to the cheating said that “I am cheating to reduce the anxiety of the test, even if I do not intend to use it, this makes calm down me”. Participant No. 8 about drug abuse stated that, “I’m eating a Propranolol 10 when I having an overwhelming anxiety”. Participant No. 6 stated that, “I used the Ritalin tablet when I fear from test and anxietyed me, although it did not work as a result of my test”. Discussion: In the present study, with a qualitative approach and interviews with students, researchers tried to identify and describe the experiences of strategies for coping with the anxiety of the test among students in dentistry, medicine and pharmacy. One of the main themes that students use to cope test anxiety is “prayer and dialogue with God”. Students use prayer and dialogue with God to reduce their anxiety. Among the religious and spiritual sources, the greatest source used to reduce anxiety and anxiety is “prayer”. The prayer is derived from the Latin word of precarious meaning “obtained by pray and pleasure” [17]. The other studies showed praying is an effective strategy to cope with test anxiety. Adroishi et al. [18] showed that listening to pray could significantly reduce the test anxiety in students. The study of Masomi et al. showed that listening to the Quran sound, such as the sound of music before the test, is an effective strategy for the anxiety test, and the Quran sound is more effective in reducing student test anxiety [19]. In a study conducted by Narimany et al., 44.5% and 27.5% of the students used the use of prayer as a strategy for coping with anxiety [14]. Also, training religious values [20] and spiritual training, including prayer, forgiveness, transcendental and spiritual meditation [21] have also been reported to reduce the depression and anxiety of students. Ganji et al. also reported that religious beliefs are related with anxiety levels and reduce it [22]. In Papazisis et al. research, strong religious and spiritual beliefs have positive relationships increasing, the self-confidence, and a negative relationship with depression, anxiety and anxiety as a personality trait [23]. Based on the findings of various studies, spiritual health determines the integrity of the person, and is the only force that harmonizes the physical, psychological and social dimensions. Religious and spiritual beliefs make a person calm and play a vital role in adapting to tension [24]. Most people believe that the influence of uncontrollable positions can be controlled through relay to God [25]. Religion, spirituality, and existential health are important predictors of mental health [26, 27]. Another strategy for students to calm down and manage their anxiety was to communicate with those who had some kind of attachment to them. 
In fact, this communication with people such as parents or intimate friends has made them relaxed and helped them better control the tension of the test. Cassidy and Shaver, in their justification of the relationship between attachment style and mental health, stated that the consequence of a safe attachment process is creating a sense of safety in a person, and the consequence of an unsafe attachment process is to create fear in a person [28]. Roberts et al. in their justification of this relationship, believe that the psychological consequence of unsafe attachment styles in tension situations is anxiety and depression. The psychological consequence of a safe attachment style in such situations is mental relaxation [29]. By studying 314 surviving adults from Bam earthquake, Rahimian Bouger et al. found that there was a significant positive relationship between safe attachment style and mental health, and a significant negative relationship between avoidance and ambivalent attachment styles with mental health [30]. The study of Besharat et al. revealed that the subjects with a safe attachment style rather than an unsafe type and those with an avoidant attachment style had fewer interpersonal problems than ambivalent styles. The results of this study refer to authenticity of safe attachment to the first requirement and its transfer to subsequent generations [31]. Safford has shown that people with an unsafe attachment style are more likely to experience anxiety and depression [32]. Morey and Taylor [33] study, which was similar to the present study in terms of quality with in-depth interviews, found that exercise and talking to friends are two important strategies for students to deal with stress. Fujii [34] reported that one of the ways to deal with English test anxiety in students is “cooperation with others” and “building confidence”. Studying strategies which were students’ strategies to cope with test anxiety, can be considered as a kind of sensitization to manage anxiety and manage of the situation. These strategies are effective and positive strategies that can help the student in a constructive way to gain more control and to manage negative thoughts, and reduce his own anxiety through such actions. Motevalli et al. [35] reported that teaching new and practical study skills helps students manage test anxiety. They found that learning some skills such as time managing for studying, properly review and summarize, how to answer multiple-choice questions, correct/incorrect and descriptive questions can control test anxiety [35]. Yusefzadeh et al. [36] reported that based on the findings of a training and evaluation program (teacher 10–35% of the final grade of the course is based on activity during the semester), the implementation of such programs significantly reduces anxiety Students are tested. Ozbiçakçi et al. [37] also believe that teaching method reduces test anxiety. But L. Hsu mentioned that further research is needed to determine best practices for alleviating student stress and anxiety [38]. “Finding ways to relax and self-care” is the other main theme for coping with test anxiety. It included Relaxing activities, consuming caffeine and Soothing food. Findings of the study by Mojarrab et al. [39] showed that the use of relaxation techniques along with playing soothing music reduces nursing students’ test anxiety and also improves their clinical test scores. 
Morey and Taylor [33] reported that many students believed exercise would distract them from test anxiety and lead to be calm. Walking, running and yoga were among the activities that students focused on, with more emphasis on walking [33]. In this study, negative strategies were also used to manage the anxiety by the students, such as Unusual use of some medications. Evidences show that the consumption of stimulants and non-drug use of a variety of drugs is one of the threats to students. Several evidences suggest that consuming drug abuse among young prople, especially students, is increasing. Methylphenidate or Ritalin is one of the most widely used drugs that has recently been abused among adolescents, especially students [40]. Many students take this medicine to stay awake for several hours and to maintain their unusual focuses for a long time. During the tests, the use of such drugs, including Ritalin, increases. Ritalin is the most common prescribed psychotropic drug for children in the USA [14]. Currently, non-medical consumption of prescription stimulants, including Methylphenidate, is a growing problem among students in the USA. Statistics have shown that 7% of American students have at least once used this drug over their lifetime, and the prevalence of Methylphenidate consumption among students over the course of 1 year was 3% [13]. In this regard, several studies have reported 3–35% of Ritalin abuse among students [41–46]. The most common side effects of consuming Ritalin are insomnia, nervousness, anxiety, headache, and loss of appetite. In excessive consumption, restlessness, delirium, psychosis, hypertension, seizure, hyperthermia, and arrhythmias may occur [47]. Writing cheating was one of the other negative strategies that students used to reduce their test anxiety, although they had no intention of using this cheating during the test. The student, by writing cheat, wants to create a reliance point on himself/herself and thereby calm himself down and overcome his anxiety. In fact, cheating as a reliance point of its existence is an important part of assurance, not to applying it. Conclusions: The result of this study showed that students often use positive strategies to overcome the test anxiety and try to use positive strategies, but some students are advised of undesirable strategies such as misuse of authorized drug and writing cheating that lead to a lot of complex problems. Given that the test are an important part of academic life and that every student is always involved in a test and study evaluation, the educational system should do its utmost effort to empower students to manage anxiety by learning the best strategies.
Background: Studies show that test anxiety is a common disorder in students that causes academic failure. There are not enough studies or a specific theoretical background on test anxiety and ways to deal with it, so the purpose of this study was to conduct a qualitative study to fully understand the ways medical sciences students deal with test anxiety. Methods: This is a qualitative study. The participants were students in the last 2 years of pharmacy, medicine and dentistry at Isfahan University of Medical Sciences. Ten students were selected by purposeful sampling, and interviews continued until the data saturation stage and the lack of access to new data. The data were analyzed by the seven-step Colaizzi method. Results: After analyzing the data, about 50 codes were extracted. These codes were divided into 16 subclasses, from which ultimately five main themes were extracted: "Prayer and Dialogue with God", "Interaction and communication with friends and relatives", "studying strategies", "Finding ways to relax and self-care" and "Negative strategies". Conclusions: The results of this study showed that students often use, and try to use, positive strategies to overcome test anxiety, but some students resort to undesirable strategies, such as misuse of authorized drugs and writing cheat sheets, which lead to many complex problems. The educational system should make its utmost effort to empower students to manage this anxiety by learning the best strategies.
Introduction: One of the concerns of the educational system in universities is the test anxiety [1]. Test anxiety is a special type of anxiety that is characterized by physical, cognitive, and behavioral symptoms when preparing for the test and performing test, It becomes a problem when high levels of anxiety interfere with preparing for and taking the test. Test anxiety is characterized by severe fear of poor performance in tests [2]. In fact, the test anxiety is an emotional experience, feelings and anxiety in situations in which a person feels that his/her performance is evaluated [3]. Test anxiety is defined as the experience of fear, apprehension, and worry before, during, or after a test that can lead to mental distraction, memory impairment, and physical symptoms such as nausea, headache, and tachycardia [4]. This type of anxiety occurs between the ages of 10 and 12 and increases with increasing age and maximizes in higher education [5]. According to estimates by researchers in the United States, 15% of students experience different levels of anxiety annually [6]. In the study by Tsegay et al. [7] it was reported that the rate of test anxiety in medical students was 52.3% and significantly higher in female students. Test anxiety is important because it affects test success [8]. Test anxiety can cause academic failure by reducing intrinsic motivation and existing cognitive ability [9]. Sarason [10] considers test anxiety as a self-preoccupation, which is characterized by self-defeat and doubt about his/her abilities. this anxiety often leads to negative cognitive assessment, undesirable physiological reactions and the decline of academic performance. As a result, there is a significant reverse relationship between anxiety scores and test scores [10]. Various studies have shown that test anxiety has a significant and important effect on the academic performance of students. Cassady [11] showed in his study that students with higher test anxiety had a weaker academic performance [11]. Students try to cope with anxiety in order to manage it. Coping is the effort to control and manage situations that seem dangerous and tense [12]. There are two main strategies for coping with anxiety that are known as problem-focused coping and emotion-focused coping strategies. The problem-focused coping strategy in which the main objective is to dominate the position and change in the source of anxiety, and emotion-focused coping strategies in which the main objective is to reduce or transform the emotional disturbance quickly [13]. Narimany and et al. stated that students manage their anxiety through conventional methods such as: they think about good memories of the past, praying and eliminating negative thoughts and beliefs [14]. In previous studies, descriptive studies have focused more on estimating the extent to which positive methods have been used, and less on the overall evaluation of all coping strategies that students use. Also, in this study, an attempt has been made to study the strategies of medical students to deal with test anxiety in Eastern and Islamic countries. In Eastern countries, beliefs are different from some other countries where previous studies have been conducted, so its findings can help improve knowledge in this area. On the other hand, this study has selected medical students as the target population because these students face a high volume of difficult. They may also choose different strategies to deal with test anxiety. 
In this regard, the present study was designed in qualitative study in this target group and conducted with the aim of identifying strategies to deal with test anxiety in medical students. Conclusions: The result of this study showed that students often use positive strategies to overcome the test anxiety and try to use positive strategies, but some students are advised of undesirable strategies such as misuse of authorized drug and writing cheating that lead to a lot of complex problems. Given that the test are an important part of academic life and that every student is always involved in a test and study evaluation, the educational system should do its utmost effort to empower students to manage anxiety by learning the best strategies.
Background: Studies show that test anxiety is a common disorder in students that causes academic failure. There are not enough studies or a specific theoretical background on test anxiety and ways to deal with it, so the purpose of this study was to conduct a qualitative study to fully understand the ways medical sciences students deal with test anxiety. Methods: This is a qualitative study. The participants were students in the last 2 years of pharmacy, medicine and dentistry at Isfahan University of Medical Sciences. Ten students were selected by purposeful sampling, and interviews continued until the data saturation stage and the lack of access to new data. The data were analyzed by the seven-step Colaizzi method. Results: After analyzing the data, about 50 codes were extracted. These codes were divided into 16 subclasses, from which ultimately five main themes were extracted: "Prayer and Dialogue with God", "Interaction and communication with friends and relatives", "studying strategies", "Finding ways to relax and self-care" and "Negative strategies". Conclusions: The results of this study showed that students often use, and try to use, positive strategies to overcome test anxiety, but some students resort to undesirable strategies, such as misuse of authorized drugs and writing cheat sheets, which lead to many complex problems. The educational system should make its utmost effort to empower students to manage this anxiety by learning the best strategies.
3,824
283
[ 825 ]
5
[ "anxiety", "test", "students", "test anxiety", "study", "strategies", "participant", "reduce", "use", "attachment" ]
[ "study test anxiety", "test anxiety cause", "test anxiety characterized", "student test anxiety", "students test anxiety" ]
null
null
[CONTENT] Test anxiety | Higher education | Coping strategies | Qualitative study [SUMMARY]
null
null
[CONTENT] Test anxiety | Higher education | Coping strategies | Qualitative study [SUMMARY]
[CONTENT] Test anxiety | Higher education | Coping strategies | Qualitative study [SUMMARY]
[CONTENT] Test anxiety | Higher education | Coping strategies | Qualitative study [SUMMARY]
[CONTENT] Anxiety | Humans | Learning | Qualitative Research | Students, Medical | Test Anxiety [SUMMARY]
null
null
[CONTENT] Anxiety | Humans | Learning | Qualitative Research | Students, Medical | Test Anxiety [SUMMARY]
[CONTENT] Anxiety | Humans | Learning | Qualitative Research | Students, Medical | Test Anxiety [SUMMARY]
[CONTENT] Anxiety | Humans | Learning | Qualitative Research | Students, Medical | Test Anxiety [SUMMARY]
[CONTENT] study test anxiety | test anxiety cause | test anxiety characterized | student test anxiety | students test anxiety [SUMMARY]
null
null
[CONTENT] study test anxiety | test anxiety cause | test anxiety characterized | student test anxiety | students test anxiety [SUMMARY]
[CONTENT] study test anxiety | test anxiety cause | test anxiety characterized | student test anxiety | students test anxiety [SUMMARY]
[CONTENT] study test anxiety | test anxiety cause | test anxiety characterized | student test anxiety | students test anxiety [SUMMARY]
[CONTENT] anxiety | test | students | test anxiety | study | strategies | participant | reduce | use | attachment [SUMMARY]
null
null
[CONTENT] anxiety | test | students | test anxiety | study | strategies | participant | reduce | use | attachment [SUMMARY]
[CONTENT] anxiety | test | students | test anxiety | study | strategies | participant | reduce | use | attachment [SUMMARY]
[CONTENT] anxiety | test | students | test anxiety | study | strategies | participant | reduce | use | attachment [SUMMARY]
[CONTENT] anxiety | test | test anxiety | students | coping | performance | focused | focused coping | medical students | strategies [SUMMARY]
null
null
[CONTENT] use positive | use positive strategies | strategies | positive strategies | test | students | positive | overcome test anxiety try | overcome test anxiety | educational system utmost effort [SUMMARY]
[CONTENT] anxiety | test | students | participant | study | strategies | test anxiety | data | interview | attachment [SUMMARY]
[CONTENT] anxiety | test | students | participant | study | strategies | test anxiety | data | interview | attachment [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
null
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| the last 2 years | Isfahan University of Medical Sciences ||| Ten ||| seven ||| about 50 ||| 16 | five ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| the last 2 years | Isfahan University of Medical Sciences ||| Ten ||| seven ||| about 50 ||| 16 | five ||| ||| [SUMMARY]
Peer Reviewers in Central Asia: Publons Based Analysis.
34184435
The five Central Asian republics comprise Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. Their research and publication activities are gradually improving, but there is limited data on how good their peer-reviewing practices are.
BACKGROUND
We used the Publons database to extract information on the registered reviewers, including the number of verified reviews, Publons award winners, and top universities in the domain of peer reviewing. This has been analysed overall and country-wise.
METHODS
Of 15,764 researchers registered on Publons, only 370 (11.7%) have verified records of peer-reviewing. There are 8 Publons award winners. There is great heterogeneity in the number of active reviewers across the five countries. Kazakhstan and Uzbekistan account for more than 90% of verified reviewers. Only Kazakhstan has more than 100 active reviewers and 6 Publons award recipients. Amongst the top 20 reviewers from Central Asia, half are from Nazarbayev University, Nur-Sultan, Kazakhstan. Three countries have fewer than 10 universities registered on Publons.
RESULTS
Central Asia has a good number of peer reviewers on Publons though only a minority of researchers are involved in peer reviewing. However, the heterogeneity between the nations can be best dealt with by promoting awareness and international networking including e-learning and mentoring programs.
CONCLUSION
[ "Asia, Central", "Databases, Bibliographic", "Humans", "Peer Review", "Periodicals as Topic", "Publications" ]
8239425
INTRODUCTION
Central Asia (CA) has a rich scientific temperament but has reduced visibility in the field of scientific literature. This may stem from the previous dependence on the Russian language and reduced exposure to international science.1 Things are slowly changing. Some countries are doing well and some have yet to catch up in terms of publications.2 Previous analyses have focussed on publications from these regions.234 The increasing number of manuscripts portrays growing scientific curiosity, research intent and the increasing ability to understand the intricacies and ethics of publication. Complementary to scientific publishing, another integral requirement for the development of scientific rationale and thinking is peer-reviewing.5 Peer reviewing is an integral part of the research publication process, and often it is considered a thankless job.6 However, mainly thanks to the Publons initiative, more and more researchers are being recognised for their contributions to peer review.7 The coronavirus disease 2019 pandemic has stressed the peer review process8 and might be the refining fire that forges the next generation of peer reviewers. A peer reviewer, who is below par, may disrupt the trust that underlies this privileged gate-keeping function with some offhand remark.9 Thus, it is pertinent to have peer reviewers at par with the best in the world. Overall, Asia is estimated to have a good pool of upcoming reviewers.10 In the field of publishing, Kazakhstan seems to be doing well.11 A recent analysis of the top institutes in CA and the neighbouring region has shown dominance by China in various Publons ratings.4 The same Publons platform provides a unique opportunity to explore the reviewing experiences and capacities of these nations.7 Publons was initially launched to provide credit for reviewers. However, it has expanded laterally and now also includes editorial records. After being brought under Clarivate Analytics, it is now also laterally integrated with the Web of Science from which publication information is synchronized. The ‘Publons Reviewer Connect’ is an artificial intelligence based program that can help editors identify potential reviewers.12 It can also synchronize records with the ORCID database.13 Thus, it is one of the best available tools to analyse reviewer profiles, in relationship to the reviewers' publications and editorial activities. Thus, we have attempted to analyse the numbers and expertise of peer reviews as well as top universities involved in peer reviewing from various Central Asian countries using Publons.
METHODS
There are 5 Central Asian republics with diverse research evaluation and publication strategies: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan. The inbuilt search engine of Publons was used to extract data on May the 10th, 2021. Data import First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted. aCountry code is provided for Kazakhstan (“181”) as an example. First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted. aCountry code is provided for Kazakhstan (“181”) as an example. Data analysis Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant. Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant. Software Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization. Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization.
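The Methods above name the tools (Excel, R 4.0.3, dplyr, ggplot2, the Shapiro-Wilk test) but do not show the analysis itself. The following is a minimal, illustrative R sketch of that workflow under stated assumptions: the file name `publons_central_asia.csv` and its column names are hypothetical stand-ins for the spreadsheet the authors describe, and the code is a sketch of one plausible descriptive analysis, not the study's actual script.

```r
# Illustrative sketch only (R >= 4.0), using a subset of the packages named in the Methods.
# The CSV file name and column names below are assumptions, not the study's real data export.
library(dplyr)
library(ggplot2)

# Country-level counts exported from the Publons search interface into a spreadsheet,
# saved as CSV with columns: country, registered, verified_reviewers, reviews_total, reviews_12m
publons <- read.csv("publons_central_asia.csv", stringsAsFactors = FALSE)

# Descriptive measure: share of registered researchers with at least one verified review
publons <- publons %>%
  mutate(pct_verified = 100 * verified_reviewers / registered)

# Shapiro-Wilk test for normality of the total review counts, as stated in the Methods
shapiro.test(publons$reviews_total)

# Bar chart of verified reviewers per country (one of the described visualizations)
ggplot(publons, aes(x = reorder(country, -verified_reviewers), y = verified_reviewers)) +
  geom_col() +
  labs(x = "Country", y = "Reviewers with at least 1 verified review") +
  theme_minimal()
```

With only five countries, the Shapiro-Wilk test runs but has very low power, which is consistent with the paper relying mainly on descriptive statistics and graphical presentation.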
RESULTS
The total number of researchers from CA registered on Publons was 15,764 and the countrywide distribution is mentioned in Table 2. However, only 370 (11.7%) have at least 1 verified review (Fig. 1). This varies amongst the countries (Fig. 2). Six reviewers from Kazakhstan and one each from Uzbekistan and Kyrgyzstan have achieved the distinction of being awarded the Publons Reviewer Award. The top 20 reviewers across CA with the maximum number of verified reviews are listed in Table 3. Ten of the top 20 reviewers belong to Nazarbayev University, Nur-Sultan, Kazakhstan. Seven of the top reviewers are also active editors with a different number of editorial records to their names. The top 10 universities with the maximum number of reviews per country have been listed in Table 4. However, this list is abbreviated for 3 countries that have less than 10 universities registered on Publons. Fig. 3 shows the relationship between the number of verified reviews and publication from these 29 universities across the five countries.
null
null
[ "Data import", "Data analysis", "Software" ]
[ "First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted.\naCountry code is provided for Kazakhstan (“181”) as an example.", "Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant.", "Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization." ]
[ null, null, null ]
[ "INTRODUCTION", "METHODS", "Data import", "Data analysis", "Software", "RESULTS", "DISCUSSION" ]
[ "Central Asia (CA) has a rich scientific temperament but has reduced visibility in the field of scientific literature. This may stem from the previous dependence on the Russian language and reduced exposure to international science.1 Things are slowly changing. Some countries are doing well and some have yet to catch up in terms of publications.2 Previous analyses have focussed on publications from these regions.234 The increasing number of manuscripts portrays growing scientific curiosity, research intent and the increasing ability to understand the intricacies and ethics of publication. Complementary to scientific publishing, another integral requirement for the development of scientific rationale and thinking is peer-reviewing.5\nPeer reviewing is an integral part of the research publication process, and often it is considered a thankless job.6 However, mainly thanks to the Publons initiative, more and more researchers are being recognised for their contributions to peer review.7 The coronavirus disease 2019 pandemic has stressed the peer review process8 and might be the refining fire that forges the next generation of peer reviewers. A peer reviewer, who is below par, may disrupt the trust that underlies this privileged gate-keeping function with some offhand remark.9 Thus, it is pertinent to have peer reviewers at par with the best in the world.\nOverall, Asia is estimated to have a good pool of upcoming reviewers.10 In the field of publishing, Kazakhstan seems to be doing well.11 A recent analysis of the top institutes in CA and the neighbouring region has shown dominance by China in various Publons ratings.4 The same Publons platform provides a unique opportunity to explore the reviewing experiences and capacities of these nations.7 Publons was initially launched to provide credit for reviewers. However, it has expanded laterally and now also includes editorial records. After being brought under Clarivate Analytics, it is now also laterally integrated with the Web of Science from which publication information is synchronized. The ‘Publons Reviewer Connect’ is an artificial intelligence based program that can help editors identify potential reviewers.12 It can also synchronize records with the ORCID database.13 Thus, it is one of the best available tools to analyse reviewer profiles, in relationship to the reviewers' publications and editorial activities. Thus, we have attempted to analyse the numbers and expertise of peer reviews as well as top universities involved in peer reviewing from various Central Asian countries using Publons.", "There are 5 Central Asian republics with diverse research evaluation and publication strategies: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan. The inbuilt search engine of Publons was used to extract data on May the 10th, 2021.\nData import First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. 
Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted.\naCountry code is provided for Kazakhstan (“181”) as an example.\nFirst, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted.\naCountry code is provided for Kazakhstan (“181”) as an example.\nData analysis Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant.\nNormality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant.\nSoftware Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization.\nData was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization.", "First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted.\naCountry code is provided for Kazakhstan (“181”) as an example.", "Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant.", "Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization.", "The total number of researchers from CA registered on Publons was 15,764 and the countrywide distribution is mentioned in Table 2. However, only 370 (11.7%) have at least 1 verified review (Fig. 1). This varies amongst the countries (Fig. 2). 
Six reviewers from Kazakhstan and one each from Uzbekistan and Kyrgyzstan have achieved the distinction of being awarded the Publons Reviewer Award.\nThe top 20 reviewers across CA with the maximum number of verified reviews are listed in Table 3. Ten of the top 20 reviewers belong to Nazarbayev University, Nur-Sultan, Kazakhstan. Seven of the top reviewers are also active editors with varying numbers of editorial records to their names.\nThe top 10 universities with the maximum number of reviews per country have been listed in Table 4. However, this list is abbreviated for 3 countries that have fewer than 10 universities registered on Publons. Fig. 3 shows the relationship between the number of verified reviews and publications from these 29 universities across the five countries.", "This analysis of the reviewers from the core CA countries reveals that Kazakhstan seems to top the list while two specific countries need a major boost. The number of active reviewers as well as institutes in Kazakhstan reveals its potential as a rising star in the region. Uzbekistan also seems to be doing well. These two countries represent more than 90% of researchers registered on Publons as well as those involved in active peer-reviewing.\nKazakhstan has natural resources and is a major contributor to the economy of CA.14 Similarly, Uzbekistan has been gradually improving its economy and contribution to education in the last decade. These factors may be helping them make rapid strides in research and publications.4 Similarly, the increase in publications can also improve knowledge of publication ethics and promote peer-reviewing practices.\nLooking at the performance of their top institutes, Kazakhstan has a good ratio of peer reviews to publications. This ratio is lower for all the others, including Uzbekistan. Also, the eight reviewers from the region who have received recognition from Publons as top reviewers highlight the point that the potential of this region is second to none. This recognition can also help stimulate their peers to achieve such distinctions.\nPeer reviewing not only trains the mind to be analytical but also teaches many aspects of publication ethics. These include understanding plagiarism,15 authorship criteria,16 declaring potential conflicts of interest17 and also the skill of scientific writing. Thus, it has been proposed that peer-reviewing should be a part of ongoing medical education.18\nThere are different metrics to measure researcher and author impact.13 However, for peer reviewers, there are no other metrics beyond what is present in Publons: the number of journals reviewed for, and the number of verified reviews (overall and in the last 12 months). Also, Publons has an option by which an editor can recommend someone as an excellent reviewer, and it also has its own mechanism for awarding top reviewer recognition.\nThe most pertinent question is how to mine the potential of Central Asian reviewers. The Publons Academy is an initiative that enables experienced peer reviewers to mentor novice and upcoming reviewers. There is a need both to involve experts in this initiative, encouraging them to devote time to mentoring, and to spread awareness among CA reviewers so that they register for such mentoring. Journal editors can help organize online or offline courses on peer-reviewing for these nations. Online education has the added advantage of bringing together experts from varied regions of the world. Nevertheless, it cannot be a one-way process. 
The CA countries, especially the underperforming ones, will also need to pull themselves up by their bootstraps. Knowledge is freely accessible, learning platforms are open, and there is no limit to striving for excellence. The other countries can take a leaf out of Kazakhstan's book and push for more open networking and learning resources.\nThe leading university of Kazakhstan, Nazarbayev University, contributes half of the top reviewers in the region, as well as the highest proportion of verified reviews. Thus, it has an unparalleled capacity for the region and can take the initiative to reach out to other universities in the region and disseminate its expertise.\nThere are limitations to this analysis. Firstly, some active reviewers may not be registered on Publons, and all registered reviewers might not be active on Publons. Secondly, predatory journals may also be registered on Publons, and the verified peer reviews may not carry any minimum quality assurance.19\nThere seems to be no practical way to include reviewers who are not on Publons in our analysis. We presume they would be in equal proportion in each country; however, a greater proportion may be from Tajikistan and Turkmenistan, possibly due to reduced awareness of Publons. The CA countries have different levels of access to the internet and social media.20 Also, Publons is evolving and trying out different strategies to exclude fake or poor reviews.12\nThus, there is heterogeneity in the distribution of peer reviewers across the CA countries. There seems to be a good number of upcoming reviewers in a couple of countries, while the others may need more awareness. This region may benefit much from international networking, symposia and e-learning platforms dedicated to promoting peer-reviewing." ]
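The Data analysis and Software passages above describe the workflow only in general terms (Shapiro-Wilk normality testing, descriptive statistics, and visualisation with R packages such as dplyr and ggplot2). The lines below are a minimal R sketch of that kind of workflow, not the authors' actual code; the country counts are illustrative placeholders rather than the study's data, which would instead be exported from Publons into a spreadsheet and read into R.

library(dplyr)
library(ggplot2)

# Hypothetical reviewer counts per country (placeholders, not the study's data)
reviewers <- data.frame(
  country  = c("Kazakhstan", "Kyrgyzstan", "Tajikistan", "Turkmenistan", "Uzbekistan"),
  verified = c(200, 20, 5, 2, 80)  # reviewers with at least one verified review
)

# Normality of the count data (Shapiro-Wilk test, as stated in the Methods)
shapiro.test(reviewers$verified)

# Descriptive statistics
reviewers %>%
  summarise(median_verified = median(verified),
            iqr_verified    = IQR(verified))

# Country-wise distribution as a bar chart
ggplot(reviewers, aes(x = country, y = verified)) +
  geom_col() +
  labs(x = NULL, y = "Reviewers with at least one verified review")

A scatter-plot of verified reviews against Web of Science publications, of the kind reported for the 29 universities, could be drawn in the same way with geom_point().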
[ "intro", "methods", null, null, null, "results", "discussion" ]
[ "Central Asia", "Peer Review", "Publications", "Publons", "Universities" ]
INTRODUCTION: Central Asia (CA) has a rich scientific temperament but has reduced visibility in the field of scientific literature. This may stem from the previous dependence on the Russian language and reduced exposure to international science.1 Things are slowly changing. Some countries are doing well and some have yet to catch up in terms of publications.2 Previous analyses have focussed on publications from these regions.234 The increasing number of manuscripts portrays growing scientific curiosity, research intent and the increasing ability to understand the intricacies and ethics of publication. Complementary to scientific publishing, another integral requirement for the development of scientific rationale and thinking is peer-reviewing.5 Peer reviewing is an integral part of the research publication process, and often it is considered a thankless job.6 However, mainly thanks to the Publons initiative, more and more researchers are being recognised for their contributions to peer review.7 The coronavirus disease 2019 pandemic has stressed the peer review process8 and might be the refining fire that forges the next generation of peer reviewers. A peer reviewer, who is below par, may disrupt the trust that underlies this privileged gate-keeping function with some offhand remark.9 Thus, it is pertinent to have peer reviewers at par with the best in the world. Overall, Asia is estimated to have a good pool of upcoming reviewers.10 In the field of publishing, Kazakhstan seems to be doing well.11 A recent analysis of the top institutes in CA and the neighbouring region has shown dominance by China in various Publons ratings.4 The same Publons platform provides a unique opportunity to explore the reviewing experiences and capacities of these nations.7 Publons was initially launched to provide credit for reviewers. However, it has expanded laterally and now also includes editorial records. After being brought under Clarivate Analytics, it is now also laterally integrated with the Web of Science from which publication information is synchronized. The ‘Publons Reviewer Connect’ is an artificial intelligence based program that can help editors identify potential reviewers.12 It can also synchronize records with the ORCID database.13 Thus, it is one of the best available tools to analyse reviewer profiles, in relationship to the reviewers' publications and editorial activities. Thus, we have attempted to analyse the numbers and expertise of peer reviews as well as top universities involved in peer reviewing from various Central Asian countries using Publons. METHODS: There are 5 Central Asian republics with diverse research evaluation and publication strategies: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan. The inbuilt search engine of Publons was used to extract data on May the 10th, 2021. Data import First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. 
Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted. aCountry code is provided for Kazakhstan (“181”) as an example. First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted. aCountry code is provided for Kazakhstan (“181”) as an example. Data analysis Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant. Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant. Software Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization. Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization. Data import: First, the total number of researchers registered for each of these countries was noted. Then, the total number of reviewers with at least one verified review, with more than one verified review, the number of Publons recognized Top Reviewers, and the number of reviewers with at least one verified review in the last 12 months was extracted (Table 1). Also, the total number of reviews for each country as well as the total number of reviews for each country in the preceding 12 months was totalled. Finally, the top institutes of CA having the most number of reviews to their name, and the top 10 institutes per country were imported from Publons. Additionally, the number of Web of Science publications for the reviewers and the top institutes were also noted. aCountry code is provided for Kazakhstan (“181”) as an example. Data analysis: Normality of data was assessed by the Shapiro-Wilk test. Descriptive statistics are described along with graphical presentations including bar-graphs, pie-charts and scatter-plots as relevant. Software: Data was formatted into spreadsheets in MS Excel and then imported into the R statistical environment version 4.0.3. Both Excel and R packages ggpubr, dplyr, GGally, ggplot2 were used for data analysis and data visualization. RESULTS: The total number of researchers from CA registered on Publons was 15,764 and the countrywide distribution is mentioned in Table 2. However, only 370 (11.7%) have at least 1 verified review (Fig. 1). This varies amongst the countries (Fig. 2). Six reviewers from Kazakhstan and one each from Uzbekistan and Kyrgyzstan have achieved the distinction of being awarded the Publons Reviewer Award. 
The top 20 reviewers across CA with the maximum number of verified reviews are listed in Table 3. Ten of the top 20 reviewers belong to Nazarbayev University, Nur-Sultan, Kazakhstan. Seven of the top reviewers are also active editors with a different number of editorial records to their names. The top 10 universities with the maximum number of reviews per country have been listed in Table 4. However, this list is abbreviated for 3 countries that have less than 10 universities registered on Publons. Fig. 3 shows the relationship between the number of verified reviews and publication from these 29 universities across the five countries. DISCUSSION: This analysis of the reviewers from the core CA countries reveals that Kazakhstan seems to top the list while two specific countries need a major boost. The number of active reviewers as well institutes in Kazakhstan reveal its potential as a rising star in the region. Uzbekistan also seems to be doing well. These two countries represent more than 90% of researchers registered on Publons as well as those involved in active peer-reviewing. Kazakhstan has natural resources and is a major contributor to the economy of CA.14 Similarly, Uzbekistan has been gradually improving its economy and contribution to education in the last decade. These can be helping them made rapid strides in research and publications.4 Similarly, the increase in publications can also influence knowledge of publication ethics and promote peer reviewing practices. Looking at the performance of their top institutes, Kazakhstan has a good ratio of peer-reviews to publications. This ratio is less for all others including Uzbekistan. Also, the eight reviewers from the region who have received recognition from Publons as top reviewers highlight the point that the potential of this region is second to none. This recognition can also help stimulate their peer to achieve such distinctions. Peer reviewing not only trains the mind to be analytical but also teaches many aspects of publication ethics. These include understanding plagiarism,15 authorship criteria,16 declaring potential conflicts of interest17 and also the skill of scientific writing. Thus, it has been proposed that peer-reviewing should be a part of ongoing medical education.18 There are different metrics to measure researcher and author impact.13 However, for peer reviewers, they are no other metrics beyond what is present in Publons: the number of journals reviewed for, and the number of verified reviews (overall and in the last 12 months). Also, Publons has an option in which Editor can recommend someone as an excellent reviewer and also, it has its mechanism to award top reviewer recognition. The most pertinent question is how to mine the potential of Central Asian reviewers. The Publons Academy is an initiation that enables experienced peer reviewers to mentor novice and upcoming reviewers. There is a need to involve both experts on this initiative to devote their time for mentoring and also spread awareness amongst the CA reviewers to get registered for such mentoring. Journal editors can help organize online or offline courses on peer-reviewing for these nations. Online education has added the advantage of bringing together experts from varied regions of the world. Nevertheless, it cannot be a one-way process. The CA countries, especially the underperforming ones, will also need to pull up their bootstraps. 
There has been freedom of knowledge, an open platform for learning and limitless striving to yearn for perfection. The other countries can take a leaf out of the book from Kazakhstan and stress for more open networking and learning resources. The leading university of Kazakhstan, the Nazarbayev University, is contributing to half the top reviewers in the region, as well as the highest proportion of verified reviews. Thus, it has an unparalleled capacity for the region and can take an initiative to bridge across to other universities in the region and disseminate their expertise. There are limitations to this analysis. Firstly, some active reviewers may not be registered on Publons. All registered reviewers also might not be active on Publons. Secondly, there may be predatory journals also registered on Publons and all the peer reviews verified may not have any minimum quality assurance.19 There seems to be no practical way to include reviewers not on Publons, in our analysis. We presume they would be in equal proportion in each country. However, a greater proportion may be from Tajikistan and Turkmenistan possible due to reduced awareness about Publons. The CA countries have different access to the internet and social media.20 Also, Publons is evolving and trying out different strategies to exclude fake or poor reviews.12 Thus, there is heterogeneity in the distribution of peer reviewer in the CA countries. There seems to be a good number of upcoming reviewers in a couple of countries while the others may need more awareness. This region may benefit much from international networking, symposia and e-learning platforms dedicated to promoting peer-reviewing.
Background: The five Central Asian republics comprise Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. Their research and publication activities are gradually improving, but there are limited data on how well developed their peer-reviewing practices are. Methods: We used the Publons database to extract information on the registered reviewers, including the number of verified reviews, Publons award winners, and the top universities in the domain of peer reviewing. These data were analysed overall and country-wise. Results: Of 15,764 researchers registered on Publons, only 370 (11.7%) have verified records of peer-reviewing. There are 8 Publons award winners. There is great heterogeneity in the number of active reviewers across the five countries. Kazakhstan and Uzbekistan account for more than 90% of verified reviewers. Only Kazakhstan has more than 100 active reviewers and 6 Publons award recipients. Amongst the top 20 reviewers from Central Asia, half are from Nazarbayev University, Nur-Sultan, Kazakhstan. Three countries have fewer than 10 universities registered on Publons. Conclusions: Central Asia has a good number of peer reviewers on Publons, though only a minority of researchers are involved in peer reviewing. The heterogeneity between the nations can best be addressed by promoting awareness and international networking, including e-learning and mentoring programs.
null
null
2,200
252
[ 162, 35, 40 ]
7
[ "reviewers", "number", "publons", "peer", "reviews", "data", "countries", "verified", "kazakhstan", "total number" ]
[ "peer reviewer", "peer reviews publications", "experienced peer reviewers", "peer reviewing nations", "pertinent peer reviewers" ]
null
null
[CONTENT] Central Asia | Peer Review | Publications | Publons | Universities [SUMMARY]
[CONTENT] Central Asia | Peer Review | Publications | Publons | Universities [SUMMARY]
[CONTENT] Central Asia | Peer Review | Publications | Publons | Universities [SUMMARY]
null
[CONTENT] Central Asia | Peer Review | Publications | Publons | Universities [SUMMARY]
null
[CONTENT] Asia, Central | Databases, Bibliographic | Humans | Peer Review | Periodicals as Topic | Publications [SUMMARY]
[CONTENT] Asia, Central | Databases, Bibliographic | Humans | Peer Review | Periodicals as Topic | Publications [SUMMARY]
[CONTENT] Asia, Central | Databases, Bibliographic | Humans | Peer Review | Periodicals as Topic | Publications [SUMMARY]
null
[CONTENT] Asia, Central | Databases, Bibliographic | Humans | Peer Review | Periodicals as Topic | Publications [SUMMARY]
null
[CONTENT] peer reviewer | peer reviews publications | experienced peer reviewers | peer reviewing nations | pertinent peer reviewers [SUMMARY]
[CONTENT] peer reviewer | peer reviews publications | experienced peer reviewers | peer reviewing nations | pertinent peer reviewers [SUMMARY]
[CONTENT] peer reviewer | peer reviews publications | experienced peer reviewers | peer reviewing nations | pertinent peer reviewers [SUMMARY]
null
[CONTENT] peer reviewer | peer reviews publications | experienced peer reviewers | peer reviewing nations | pertinent peer reviewers [SUMMARY]
null
[CONTENT] reviewers | number | publons | peer | reviews | data | countries | verified | kazakhstan | total number [SUMMARY]
[CONTENT] reviewers | number | publons | peer | reviews | data | countries | verified | kazakhstan | total number [SUMMARY]
[CONTENT] reviewers | number | publons | peer | reviews | data | countries | verified | kazakhstan | total number [SUMMARY]
null
[CONTENT] reviewers | number | publons | peer | reviews | data | countries | verified | kazakhstan | total number [SUMMARY]
null
[CONTENT] peer | scientific | reviewing | reviewers | publons | peer reviewing | reviewer | increasing | par | best [SUMMARY]
[CONTENT] number | data | total number | total | reviewers | number reviews | verified review | country | institutes | verified [SUMMARY]
[CONTENT] fig | number | reviewers | table | universities | maximum | maximum number | listed table | listed | 10 universities [SUMMARY]
null
[CONTENT] number | reviewers | data | peer | publons | total number | total | verified | reviews | excel [SUMMARY]
null
[CONTENT] five | Central Asian | Kazakhstan | Kyrgyzstan | Tajikistan | Turkmenistan | Uzbekistan ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 15,764 | Publons | only 370 | 11.7% ||| 8 ||| five ||| Kazakhstan | Uzbekistan | more than 90% ||| Kazakhstan | more than 100 | 6 ||| 20 | Central Asia | half | the Nazarbayev University | Nur-Sultan | Kazakhstan ||| Three | less than 10 | Publons [SUMMARY]
null
[CONTENT] five | Central Asian | Kazakhstan | Kyrgyzstan | Tajikistan | Turkmenistan | Uzbekistan ||| ||| ||| ||| 15,764 | Publons | only 370 | 11.7% ||| 8 ||| five ||| Kazakhstan | Uzbekistan | more than 90% ||| Kazakhstan | more than 100 | 6 ||| 20 | Central Asia | half | the Nazarbayev University | Nur-Sultan | Kazakhstan ||| Three | less than 10 | Publons ||| Central Asia | Publons ||| [SUMMARY]
null
Laboratory and field evaluation of MAÏA
34016099
Malaria vector control relies upon the use of insecticide-treated nets and indoor residual spraying. However, as insecticide resistance in malaria vectors grows, the effectiveness of these measures could be limited. Alternative tools are needed. In this context, repellents can play an important role against exophagic and exophilic mosquitoes. This study evaluated the efficacy of MAÏA®, a novel repellent ointment, in laboratory and field conditions in Burkina Faso.
BACKGROUND
For laboratory and field assessment, 20 volunteers were enrolled and trained for nocturnal collection of mosquitoes using human landing catches (HLC). In the laboratory tests, 2 mg/cm² of treatment (either MAÏA® or 20% DEET) was used to assess the median complete protection time (CPT) against two species, Anopheles gambiae and Aedes aegypti, following WHO guidelines. For both species, two strains were used: a susceptible strain and a local strain. The susceptible strains were Kisumu and Bora Bora for An. gambiae and Ae. aegypti, respectively. For the field test, the median CPT of MAÏA® was compared with that of a negative control (70% ethanol) and a positive control (20% DEET) after carrying out HLCs in rural Burkina Faso in both indoor and outdoor settings.
METHODS
Laboratory tests showed median Kaplan-Meier CPTs of 6 h 30 min for An. gambiae (Kisumu), 5 h 30 min for An. gambiae (Goden, local strain), and 4 h for Ae. aegypti for both the local and susceptible strains. These laboratory results suggest that MAÏA® is a good repellent against both mosquito species tested. During the field tests, a total of 3,979 mosquitoes were caught. In this population, anophelines represented 98.5%, with culicines (Aedes) making up the remaining 1.5%. Among anopheline mosquitoes, 95% belonged to the An. gambiae complex, followed by Anopheles funestus and Anopheles pharoensis. The median CPTs of 20% DEET and MAÏA® were similar (8 h) and much longer than that of the negative control (2 h).
RESULTS
Results from the present studies showed that MAÏA® offers high protection against anophelines biting indoors and outdoors and could play an important role in malaria prevention in Africa.
CONCLUSIONS
[ "Adult", "Aedes", "Animals", "Anopheles", "Burkina Faso", "DEET", "Female", "Humans", "Insect Repellents", "Malaria", "Male", "Ointments", "Young Adult" ]
8139107
Background
Malaria is one of the deadliest diseases in many low- and middle-income countries, affecting mainly children and pregnant women in sub-Saharan Africa [1]. Long-lasting insecticide-treated nets (LLINs) have been regarded as the most effective method for controlling mosquitoes transmitting malaria parasites. Since 2000, about one billion nets have been distributed in Africa, resulting in a significant decline in malaria-related deaths on the continent between 2000 and 2015 [2–4].\nHowever, the massive use of insecticides in public health, in addition to that in agriculture, causes concern regarding insecticide resistance [5–7] and changing behaviour [8, 9] of the malaria vectors. For example, a study conducted in Papua New Guinea showed a shift in mosquito biting from night to earlier hours in the evening after a nationwide distribution of LLINs [10]. Similar changes in the behaviour of Anopheles funestus have been observed in Benin and Senegal after LLIN distribution achieved a high level of coverage [9, 10]. Furthermore, studies suggest that the scaling up of LLIN distribution and indoor residual spraying (IRS) have led to more outdoor biting by Anopheles gambiae sensu lato (s.l.), commonly considered endophagic mosquitoes [11–13]. A recent study in the Cascades region of Burkina Faso showed a high level of insecticide resistance [14] where more than 50% of the major vector, An. gambiae s.l., were collected biting outdoors [15]. These altered patterns of outdoor, early evening and morning biting by anophelines, combined with resistance to insecticides, appear to be caused by the mass distribution of LLINs and imply the inexorable loss of efficacy of these interventions [16, 17]. A recent study highlighted that an increase in early evening biting could increase transmission not only because people are unprotected by nets, but also because there is a higher chance of malaria vectors becoming infectious [18]. The development of new vector control tools, in addition to LLINs, is therefore necessary to protect people when they are not under a bed net.\nTopical repellents could play an important role in addressing this problem if they are effective and accepted by the population. A systematic review of repellent interventions and mathematical modeling has shown that user compliance is indeed one of the most decisive factors for the success of this intervention [19]. In sub-Saharan Africa, ointments are used primarily by mothers and children to moisturize their skin. In Burkina Faso, ointments are applied to 80% of children every evening, when mosquitoes start biting (Kadidia Ouedraogo et al., in prep). Maïa Africa, a company based in Burkina Faso, has developed a mosquito repellent ointment, MAÏA®, uniquely designed with local mothers, to be used daily within their families. The underlying idea is to leverage the existing habits of mothers to protect their families from infectious bites whenever they are not under a net. Affordability is a key criterion for the product's adoption and use. MAÏA® has been industrially produced since June 2020 in Côte d'Ivoire and integrates a large share of ingredients sourced in West Africa. The ointment was officially launched in August 2020 in Burkina Faso and over 50,000 units were sold in the first four months. In March 2021, over 500 points of sale (mainly general stores, pharmacies and kiosks) distributed the product in the country. 
If MAÏA® proves to be both effective and accepted by the population, it could play a key role in reducing the probability of children experiencing infectious bites during the evening and be positioned as a complementary intervention to LLINs.\nThe aim of this study is to evaluate the effectiveness of MAÏA® in both laboratory and field conditions, especially the median complete protection time (CPT) offered by the product. Results from these evaluations are important for validating how effective this new repellent is, since behavioural responses to repellents differ between wild and laboratory-reared mosquito populations [20].
null
null
Results
Laboratory tests Overall, under laboratory conditions the relative repellency (median CPT) was higher for both MAA and 20% DEET against An. gambiae compare to Ae. aegypti (Table1). MAA performed well in repelling the four mosquito species used in this study. The median CPTs were, respectively, 6.5h for An. gambiae (Kisumu), 5.5h for An. gambiae (Goden, local strain) and 4h for Ae. aegypti for both the local and sensitive strain. There was no significant difference between the two treatments for each of the experiment (Kisumu: 2=2.1, p value=0.14; Goden: 2=0.8, p value=0.36; Bora bora: 2=1.7, p value=0.19; Ae. aegypti (local strain): 2=0.9, p value=0. 35) indicating that both MAA and 20% DEET have equal repellency for these strains. The Kaplan-Meier curves for MAA and 20% DEET, respectively, for Kisumu, An-Goden, Bora bora and Loc-Aedes are shown in Fig.2. Table 1 Median complete protection times (CPT) in minutes and their 95% confidence intervals (CI) against mosquito strains, according to treatments 20% DEET and MAA, under laboratory conditions An. gambiae Kisumu An. gambiae Goden Ae. aegypti Bora bora Ae. aegypti LocalDEETMAADEETMAADEETMAADEETMAAMedian. CPT390390300330270240240240Lower CI368334272216252239212225Upper CI412446328444288241268255Fig. 2Kaplan-Meier plots for 20% DEET and MAA tested against the four species on five volunteers Median complete protection times (CPT) in minutes and their 95% confidence intervals (CI) against mosquito strains, according to treatments 20% DEET and MAA, under laboratory conditions Kaplan-Meier plots for 20% DEET and MAA tested against the four species on five volunteers Overall, under laboratory conditions the relative repellency (median CPT) was higher for both MAA and 20% DEET against An. gambiae compare to Ae. aegypti (Table1). MAA performed well in repelling the four mosquito species used in this study. The median CPTs were, respectively, 6.5h for An. gambiae (Kisumu), 5.5h for An. gambiae (Goden, local strain) and 4h for Ae. aegypti for both the local and sensitive strain. There was no significant difference between the two treatments for each of the experiment (Kisumu: 2=2.1, p value=0.14; Goden: 2=0.8, p value=0.36; Bora bora: 2=1.7, p value=0.19; Ae. aegypti (local strain): 2=0.9, p value=0. 35) indicating that both MAA and 20% DEET have equal repellency for these strains. The Kaplan-Meier curves for MAA and 20% DEET, respectively, for Kisumu, An-Goden, Bora bora and Loc-Aedes are shown in Fig.2. Table 1 Median complete protection times (CPT) in minutes and their 95% confidence intervals (CI) against mosquito strains, according to treatments 20% DEET and MAA, under laboratory conditions An. gambiae Kisumu An. gambiae Goden Ae. aegypti Bora bora Ae. aegypti LocalDEETMAADEETMAADEETMAADEETMAAMedian. CPT390390300330270240240240Lower CI368334272216252239212225Upper CI412446328444288241268255Fig. 2Kaplan-Meier plots for 20% DEET and MAA tested against the four species on five volunteers Median complete protection times (CPT) in minutes and their 95% confidence intervals (CI) against mosquito strains, according to treatments 20% DEET and MAA, under laboratory conditions Kaplan-Meier plots for 20% DEET and MAA tested against the four species on five volunteers Field test Mosquito species composition and biting behaviours A total of 3,979 mosquitoes, stratified by treatment and species (Table2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. 
Among anopheline species, 99.6% belonged to An. gambiae complex, followed by An. funestus (0.1%), and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table2). The hourly mosquito biting rate varied significantly between treatments (df=2, 2=426.22, p<0.0001). An average of 0.68 (95% CI: 0.510.91) mosquito bites were received per person per hour for MAA compared to 1.01 (95% CI: 0.761.33) for 20% DEET, and 8.98 (95% CI: 6.5612.29) for the 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors, df=2, 2=1.703, p=0.42). Overall, the ratio outdoors:indoors biting was 1.26 (95% CI: 1.251.27) showing that more bites were taking place outdoors compared to indoors (df=1, 2=5.79, p=0.016).Table 2Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70%Mosquito speciesTreatmentDEETEthanolMAA Anopheles gambiae sensu lato (s.l.) 6862660480 Anopheles funestus 121Other Anopheles193Culicines710623 Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70% A total of 3,979 mosquitoes, stratified by treatment and species (Table2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among anopheline species, 99.6% belonged to An. gambiae complex, followed by An. funestus (0.1%), and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table2). The hourly mosquito biting rate varied significantly between treatments (df=2, 2=426.22, p<0.0001). An average of 0.68 (95% CI: 0.510.91) mosquito bites were received per person per hour for MAA compared to 1.01 (95% CI: 0.761.33) for 20% DEET, and 8.98 (95% CI: 6.5612.29) for the 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors, df=2, 2=1.703, p=0.42). Overall, the ratio outdoors:indoors biting was 1.26 (95% CI: 1.251.27) showing that more bites were taking place outdoors compared to indoors (df=1, 2=5.79, p=0.016).Table 2Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70%Mosquito speciesTreatmentDEETEthanolMAA Anopheles gambiae sensu lato (s.l.) 6862660480 Anopheles funestus 121Other Anopheles193Culicines710623 Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70% Repellency against mosquitoes Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (6h after application), the percentage of repellency varied from 100 to 90% for MAA and DEET. Between 00:00 to 03:00 (9h after application), the percentage was between 90 and 80% (Fig.3). After 03:00 (10h after application), this percentage was under 80% for 20% DEET, but MAA was over 80%. MAAgave a high percentage of repellency thoughout, however during the first 9h after applications no difference in the repellency was observed between MAA and the positive control. Fig. 3Repellency of 20% DEET and MAA indoor and outdoor collection Repellency of 20% DEET and MAA indoor and outdoor collection When data were stratified by location of mosquito biting, the trend was the same for indoors and outdoors. No difference was observed during the first 9h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors. 
Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (6h after application), the percentage of repellency varied from 100 to 90% for MAA and DEET. Between 00:00 to 03:00 (9h after application), the percentage was between 90 and 80% (Fig.3). After 03:00 (10h after application), this percentage was under 80% for 20% DEET, but MAA was over 80%. MAAgave a high percentage of repellency thoughout, however during the first 9h after applications no difference in the repellency was observed between MAA and the positive control. Fig. 3Repellency of 20% DEET and MAA indoor and outdoor collection Repellency of 20% DEET and MAA indoor and outdoor collection When data were stratified by location of mosquito biting, the trend was the same for indoors and outdoors. No difference was observed during the first 9h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors. Complete protection time The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). 
Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Mosquito species composition and biting behaviours A total of 3,979 mosquitoes, stratified by treatment and species (Table2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among anopheline species, 99.6% belonged to An. gambiae complex, followed by An. funestus (0.1%), and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table2). The hourly mosquito biting rate varied significantly between treatments (df=2, 2=426.22, p<0.0001). An average of 0.68 (95% CI: 0.510.91) mosquito bites were received per person per hour for MAA compared to 1.01 (95% CI: 0.761.33) for 20% DEET, and 8.98 (95% CI: 6.5612.29) for the 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors, df=2, 2=1.703, p=0.42). Overall, the ratio outdoors:indoors biting was 1.26 (95% CI: 1.251.27) showing that more bites were taking place outdoors compared to indoors (df=1, 2=5.79, p=0.016).Table 2Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70%Mosquito speciesTreatmentDEETEthanolMAA Anopheles gambiae sensu lato (s.l.) 6862660480 Anopheles funestus 121Other Anopheles193Culicines710623 Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70% A total of 3,979 mosquitoes, stratified by treatment and species (Table2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among anopheline species, 99.6% belonged to An. gambiae complex, followed by An. funestus (0.1%), and Anopheles pharoensis (0.3%). 
The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table2). The hourly mosquito biting rate varied significantly between treatments (df=2, 2=426.22, p<0.0001). An average of 0.68 (95% CI: 0.510.91) mosquito bites were received per person per hour for MAA compared to 1.01 (95% CI: 0.761.33) for 20% DEET, and 8.98 (95% CI: 6.5612.29) for the 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors, df=2, 2=1.703, p=0.42). Overall, the ratio outdoors:indoors biting was 1.26 (95% CI: 1.251.27) showing that more bites were taking place outdoors compared to indoors (df=1, 2=5.79, p=0.016).Table 2Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70%Mosquito speciesTreatmentDEETEthanolMAA Anopheles gambiae sensu lato (s.l.) 6862660480 Anopheles funestus 121Other Anopheles193Culicines710623 Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70% Repellency against mosquitoes Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (6h after application), the percentage of repellency varied from 100 to 90% for MAA and DEET. Between 00:00 to 03:00 (9h after application), the percentage was between 90 and 80% (Fig.3). After 03:00 (10h after application), this percentage was under 80% for 20% DEET, but MAA was over 80%. MAAgave a high percentage of repellency thoughout, however during the first 9h after applications no difference in the repellency was observed between MAA and the positive control. Fig. 3Repellency of 20% DEET and MAA indoor and outdoor collection Repellency of 20% DEET and MAA indoor and outdoor collection When data were stratified by location of mosquito biting, the trend was the same for indoors and outdoors. No difference was observed during the first 9h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors. Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (6h after application), the percentage of repellency varied from 100 to 90% for MAA and DEET. Between 00:00 to 03:00 (9h after application), the percentage was between 90 and 80% (Fig.3). After 03:00 (10h after application), this percentage was under 80% for 20% DEET, but MAA was over 80%. MAAgave a high percentage of repellency thoughout, however during the first 9h after applications no difference in the repellency was observed between MAA and the positive control. Fig. 3Repellency of 20% DEET and MAA indoor and outdoor collection Repellency of 20% DEET and MAA indoor and outdoor collection When data were stratified by location of mosquito biting, the trend was the same for indoors and outdoors. No difference was observed during the first 9h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors. Complete protection time The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). 
However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. 
for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B)
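The results above are built around Kaplan-Meier estimates of the median complete protection time (CPT) per treatment arm, chi-square comparisons between arms, and biting rates per person per hour. The lines below are a minimal R sketch of how such estimates could be obtained with the survival package; the data frame and its values are hypothetical placeholders rather than the trial data, and the log-rank test is shown only as one common choice of between-arm comparison, not necessarily the exact test the authors used.

library(survival)

# One row per volunteer-collection: minutes from application to the first
# mosquito landing, event = 1 if a landing occurred, 0 if censored at the end
# of the collection night, arm = treatment applied. Placeholder values only.
cpt <- data.frame(
  time  = c(480, 450, 480, 60, 120, 150, 480, 420, 460),
  event = c(1, 1, 0, 1, 1, 1, 1, 1, 0),
  arm   = c("MAIA", "MAIA", "MAIA",
            "ethanol", "ethanol", "ethanol",
            "DEET", "DEET", "DEET")
)

fit <- survfit(Surv(time, event) ~ arm, data = cpt)
summary(fit)$table                              # median CPT and 95% CI per arm
survdiff(Surv(time, event) ~ arm, data = cpt)   # log-rank comparison of arms

plot(fit, col = 1:3, xlab = "Minutes after application",
     ylab = "Probability of no mosquito landing")
legend("bottomleft", legend = names(fit$strata), col = 1:3, lty = 1)

# Percentage repellency relative to the negative control, using the common
# 100 * (C - T) / C formula (assumed here; the paper does not restate it),
# with the reported bites/person/hour: C = 8.98 (ethanol), T = 0.68 (MAIA)
100 * (8.98 - 0.68) / 8.98   # roughly 92 % repellency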
Conclusions
MAA, a novel ointment formulated with shea butter widely used in West Africa to moisturize the skin of children, has shown high repellency against laboratory-reared and wild malaria vectors. In the context of widespread vector resistance to insecticide and growing tendency of mosquitoes to bite outside houses, there is a need to add MAA ointment to the vector control tools used in sub-Saharan countries with high malaria burden.
[ "Background", "Methods", "Study area", "Human volunteer preparation", "Repellents", "Strains of mosquitoes", "Evaluation in the laboratory", "Field evaluation", "Side effects", "Ethical clearance", "Data analysis", "Laboratory tests", "Field test", "Mosquito species composition and biting behaviours", "Repellency against mosquitoes", "Complete protection time" ]
[ "Malaria is one of the deadliest diseases in many low- and middle-income countries, affecting mainly children and pregnant women in sub-Saharan Africa [1]. Long-lasting insecticide-treated nets (LLINs) have been regarded as the most effective method for controlling mosquitoes transmitting malaria parasites. Since 2000, about one billion nets have been distributed in Africa, resulting in a significant decline in malaria-related deaths on the continent between 2000 and 2015 [24].\nHowever, the massive use of insecticides in public health in addition to that in agriculture causes concern regarding insecticide resistance [57] and changing behaviour [8, 9] of the malaria vectors. For example, a study conducted in Papua New Guinea showed a shift in mosquito biting from night to earlier hours in the evening after a nationwide distribution of LLINs [10]. Similar changes in the behaviour of Anopheles funestus have been observed in Benin and Senegal after LLIN distribution achieved a high level of coverage [9, 10]. Furthermore, studies suggest that the scaling up of LLIN distribution and indoor residual spraying (IRS) have led to more outdoor biting by Anopheles gambiae sensu lato (s.l.), commonly considered endophagic mosquitoes [1113]. A recent study in the Cascades region of Burkina Faso showed a high level of insecticide resistance [14] where more than 50% of the major vector, An. gambiae s.l., were collected biting outdoors [15]. These altered patterns of outdoors, early evening and morning biting, by anophelines, combined with resistance to insecticides appear to be caused by the mass distribution of LLINs and imply the inexorable loss of efficacy of these interventions [16, 17]. A recent study highlighted that an increase in early evening biting could increase transmission not only because people are unprotected by nets, but also because there is a higher chance of malaria vectors becoming infectious [18]. The development of new vector control tools, in addition to LLINs, is therefore necessary to protect people, when they are not under a bed net.\nTopical repellents could play an important role in addressing this problem if they are effective and accepted by the population. A systematic review of repellent interventions and mathematical modeling has shown that user compliance is indeed one of the most decisive factors for the success of this intervention [19]. In sub-Saharan Africa, ointments are used primarily by mothers and children to moisturize their skins. In Burkina Faso, ointments are applied to 80% of children every evening, when mosquitoes start biting (Kadidia Ouedraogo et al., in prep). Maa Africa, a company based in Burkina Faso, has developed a mosquito repellent ointment, MAA, uniquely designed with local mothers, to be used daily within their families. The underlying idea is to leverage the existing habits of mothers to protect their families from infectious bites whenever they are not under a net. Affordability is a key criterion for the products adoption and use. MAA has been industrially produced since June 2020 in Cte dIvoire and integrates a large share of ingredients sourced in West Africa. The ointment was officially launched in August 2020 in Burkina Faso and over 50,000 units were sold in the first four months. In March 2021, over 500 points of sales (mainly general stores, pharmacies and kiosks) distributed the product in the country. 
If MAÏA proves to be both effective and accepted by the population, it could play a key role in reducing the probability of children experiencing infectious bites during the evening and be positioned as a complementary intervention to LLINs.\nThe aim of this study is to evaluate the effectiveness of MAÏA in both laboratory and field conditions, in particular the median complete protection time (CPT) offered by the product. Results from these evaluations are important for validating how effective this new repellent is, since behavioural responses to repellents differ between wild and laboratory-reared mosquito populations [20].", "Study area Laboratory tests were conducted in May 2019 in the insectary of the Centre National de Recherche et de Formation sur le Paludisme (CNRFP) in Ouagadougou, Burkina Faso. Field tests were carried out at Goden (12°25'N, 1°20'W), a site located 15 km northeast of Ouagadougou, the capital city of Burkina Faso (Fig. 1). Goden is a rural village with a Sudanian savanna climate and annual rainfall under 900 mm. The ~800 inhabitants mainly belong to the Mossi ethnic group, and are mostly devoted to agriculture and to raising pigs, dogs, goats and chickens within their compounds. LLINs were distributed in 2016 to ~90% of the population. Goden is known for its high density of malaria vectors due to its proximity to the Massili River. The field study was carried out during the rainy season (August to November 2019), corresponding to high vector density and high malaria transmission. A preliminary assessment of the mosquito density at the collection site was carried out using human landing catches (HLCs) before the tests started.\n\nFig. 1 Study area\nHuman volunteer preparation Healthy adult male volunteers aged between 18 and 40 years were enrolled in this study. The volunteers were instructed not to use fragranced soaps, perfume, tobacco or alcohol 12 h before the start of and throughout their participation. To establish the amount of repellent required for application, the surface area of the arm (for laboratory tests) or the leg (for field tests) of each volunteer was determined using the following formula: Area = ½ (Cw + Ce) × Dwe.\nHere Cw is the circumference of the wrist or ankle in cm, Ce is the circumference at the elbow (cubital fossa) or the knee in cm, and Dwe is the distance in cm between Ce and Cw [21]. The amount of ointment needed by each volunteer was determined from the surface area of their forearm or leg. The quantity of product left in the bottles was weighed using a precision weighing balance (KERN & SOHN GmbH, Balingen, Germany) to determine the amount applied by each volunteer.
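As a rough illustration of the dosing arithmetic described above, the short Python sketch below computes a limb's surface area with the average-circumference formula and the corresponding ointment mass at 2 mg per square centimetre. The measurements and function names are invented for illustration, not taken from the study; the halving of the summed circumferences follows the WHO trapezoidal approximation cited in the text [21].

```python
# Sketch only: estimating the treated surface area and the corresponding
# ointment dose at 2 mg per square centimetre. Measurements below are
# invented for illustration, not taken from the study.
def limb_area_cm2(c_wrist_or_ankle: float, c_elbow_or_knee: float,
                  distance: float) -> float:
    """Area = 1/2 * (Cw + Ce) * Dwe, all measurements in centimetres."""
    return 0.5 * (c_wrist_or_ankle + c_elbow_or_knee) * distance

def ointment_dose_g(area_cm2: float, dose_mg_per_cm2: float = 2.0) -> float:
    """Total ointment mass in grams needed to reach the target dose."""
    return area_cm2 * dose_mg_per_cm2 / 1000.0

area = limb_area_cm2(24.0, 38.0, 38.0)   # hypothetical lower-leg measurements
print(round(area))                        # 1178 cm2, close to the ~1189 sq cm reported later for legs
print(round(ointment_dose_g(area), 2))    # 2.36 g, consistent with the ~2.4 g of ointment applied per leg
```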
Repellents MAÏA, a shea butter-based ointment containing 15% DEET (N,N-diethyl-3-methylbenzamide), was received from Maïa Africa SAS. It was tested against an ethanolic solution of 20% DEET as positive control and 70% ethanol as negative control. DEET is the standard reference repellent.\nStrains of mosquitoes Four strains of mosquitoes were used in the laboratory tests, including Kisumu (F57) and Bora Bora (F58), susceptible strains of An. gambiae and Aedes aegypti, respectively. In addition, local strains colonized in the laboratory from rural areas of Goden, Burkina Faso, were used, hereafter named An-Goden (local An. gambiae strain, F418) and Loc-Aedes (local Ae. aegypti, F318). These mosquitoes were maintained under a 12:12 h (light:dark) photoperiod. During rearing, larvae were fed on fish food while glucose was used for adults. The temperature and relative humidity in the rearing room were 25–28 °C and 60–80%, respectively. The mosquitoes used in these experiments were 5- to 10-day-old nulliparous females starved of sugar solution for 12 h before the experiment.\nEvaluation in the laboratory The laboratory experiments were conducted following the WHO guidelines for the arm-in-cage test [21]. Cages were 45 × 45 × 45 cm screen enclosures. Two test cages were used, one for the repellent candidate and the other for the positive control. The test cages contained 200 females, aged 5 to 10 days, of one of the four mosquito strains: Kisumu, An-Goden, Bora Bora and Loc-Aedes. The experiment in the laboratory was carried out at temperatures ranging between 25 and 28 °C, with relative humidity between 60 and 80%.\nOverall, 2 mg of ointment was applied per square centimetre of the left forearm of each volunteer, a dose estimated from the average amount applied by 5 laboratory volunteers asked to apply the ointment to their left forearm as they would normally do in real life. In all subsequent repellent trials, volunteers were supplied with a total amount that would achieve this dose over the surface area of their forearms and/or legs (determined as described above). A steel spatula was used to apply the ointment to the forearm of each volunteer prior to each experiment, to avoid spreading ointment onto non-targeted areas of skin. The positive control consisted of 1 ml of 20% DEET solution applied to the right forearm of each volunteer.\nNegative control and MAÏA test arms were prepared by first washing the left forearm with odorless soap, drying, rinsing with 70% ethanol solution and then drying again. All volunteers wore latex gloves to protect their hands from mosquitoes. To assess the readiness of the mosquitoes to land, both the left and right cleaned forearms of each volunteer were exposed in the experimental cages for 30 s (or until 10 mosquito landings were counted). Then, for each volunteer, the right forearm was treated from wrist to elbow with 1 ml of the 20% DEET solution while the left forearm was treated from wrist to elbow with MAÏA ointment. Thirty minutes after application of the repellents, the volunteer exposed their treated forearm in the test cage for 3 min. The procedure was repeated every 30 min until the first bite occurred, and the elapsed time to the first bite was recorded. The test was performed three times for each volunteer per mosquito species. Considering the difference in the biting periods of the two mosquito species, the tests using the Ae. aegypti strains were carried out between 09:00 and 18:00, whereas those for An. gambiae were conducted between 17:00 and 05:00 [21].
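Because the arm-in-cage readout is simply the elapsed time to the first bite over repeated 3-min exposures, it maps naturally onto right-censored time-to-event data. The sketch below (Python, with an invented exposure log and a helper name of ours, not the authors' code) shows one way such logs could be reduced to a CPT value per replicate before survival analysis.

```python
# Sketch only (not from the paper): deriving complete protection time (CPT)
# from arm-in-cage exposure logs. Each exposure is (minutes after application,
# whether at least one bite/landing occurred during the 3-min exposure).
from typing import List, Tuple

def cpt_from_exposures(exposures: List[Tuple[int, bool]]) -> Tuple[int, bool]:
    """Return (time_in_minutes, event_observed).

    event_observed is True if a first bite was seen (CPT = that exposure time),
    False if the volunteer remained unbitten (right-censored at last exposure).
    """
    for minutes, bitten in sorted(exposures):
        if bitten:
            return minutes, True
    last = max(t for t, _ in exposures)
    return last, False

# Hypothetical log: exposures every 30 min starting 30 min after application.
log = [(30, False), (60, False), (90, False), (120, False), (150, True)]
print(cpt_from_exposures(log))  # (150, True) -> CPT of 150 min for this replicate
```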
Field evaluation The lower legs of the volunteers were washed with neutral soap, rinsed with 70% ethanol solution and dried naturally. Once their legs were treated, volunteers were asked to avoid rubbing, touching or wetting the repellent-treated area. Two mg of MAÏA per sq cm (2.4 ± 0.2 g per 1189 ± 79.2 sq cm) and 2 ml of 20% DEET (2 ± 0.1 ml per 1189 ± 79.2 sq cm, as a positive control) were applied to the volunteers' lower legs, from knee to ankle. A total of 20 volunteers were recruited from Goden village and trained for nocturnal mosquito collection using HLC. Each volunteer was randomly allocated to one of five groups of four volunteers (2 groups for MAÏA, 2 for the positive control, 1 for the negative control) according to the repellent received. Each night of collection, the experiment took place at five different households, located at least 20 m apart as per WHO guidelines [21], in order to avoid biases in attractiveness to the mosquitoes.\nMosquito collection started 30 min after treatment. Volunteers acting as bait sat on chairs in pairs (one indoors and one outdoors) and actively collected the mosquitoes that landed on their treated lower leg using a mouth aspirator and a torch [22] for 45 min, followed by a 15-min break. Volunteers wore long-sleeved shirts buttoned at the wrist, long trousers, closed shoes, latex gloves and a hat, but with the treated lower leg exposed to mosquitoes by rolling the trousers up to the knee. During these experiments, mosquitoes were collected simultaneously indoors and outdoors between 19:00 and 06:00. To avoid biases introduced by individual attractiveness and skill [23, 24], volunteers at the same household rotated between indoors and outdoors hourly. In each household, two groups of two people rotated, collecting from 18:00 to 24:00 and from 00:00 to 06:00, following a Williams balanced Latin square design.\nCollected mosquitoes were transferred into plastic cups covered with a piece of untreated net with a small hole at the bottom, allowing mosquitoes to be easily aspirated into them. After collection, mosquitoes were brought to the entomological laboratory of CNRFP and morphologically identified using a stereomicroscope and identification keys [25].\nSide effects No side effects were observed or reported by any of the volunteers throughout the period of tests, both in the laboratory and in the field.\nEthical clearance Written informed consent was obtained from all volunteers and household owners recruited in this study. The study was approved by the institutional ethics committee of CNRFP under 2019/000008/MS/SG/CNRFP/CIB.\nData analysis All data were collected on standard forms and entered twice into a database by different people. The databases were compared using Epi Info 3.5.3, and inconsistencies were verified using the printed forms and corrected. The performance of the repellent was measured by calculating the repulsive efficacy and the median complete protection time. A generalized linear mixed model (GLMM) was used to further analyse the effect of location (indoors vs. outdoors) on the performance of the treatments. Variation in the average number of bites received between treatments was also assessed.\nThe median CPT is defined as the interval of time between the beginning of the collection/test and the first mosquito landing. To estimate the median CPT of each treatment, a Kaplan-Meier survival analysis was performed for each vector species and strain used in the laboratory experiments and on the field data, using the survival functions of R version 3.5.0 (2018-04-23). For the field test, the analysis was performed only on An. gambiae s.l., as it was the most abundant species collected (~96% of the total collection). The analysis consisted of assessing the median CPT and the repulsive efficacy. The repulsive efficacy was calculated as a percentage of repellency (% R) according to the formula % R = ((C − T) / C) × 100, where C is the number of mosquitoes collected on the treated legs of volunteers in each of the two control treatments (taken separately) and T is the total number of mosquito bite attempts on the legs of volunteers treated with the test product [21].", "Laboratory tests were conducted in May 2019 in the insectary of the Centre National de Recherche et de Formation sur le Paludisme (CNRFP) in Ouagadougou, Burkina Faso. Field tests were carried out at Goden (12°25'N, 1°20'W), a site located 15 km northeast of Ouagadougou, the capital city of Burkina Faso (Fig. 1). Goden is a rural village with a Sudanian savanna climate and annual rainfall under 900 mm. The ~800 inhabitants mainly belong to the Mossi ethnic group, and are mostly devoted to agriculture and to raising pigs, dogs, goats and chickens within their compounds. LLINs were distributed in 2016 to ~90% of the population. Goden is known for its high density of malaria vectors due to its proximity to the Massili River. 
The field study was carried out during the rainy season (August to November 2019), corresponding to high vector density and high malaria transmission. A preliminary assessment of the mosquito density at the collection site was carried out using human landing catches (HLCs) before the tests started.\n\nFig. 1 Study area", "Healthy adult male volunteers aged between 18 and 40 years were enrolled in this study. The volunteers were instructed not to use fragranced soaps, perfume, tobacco or alcohol 12 h before the start of and throughout their participation. To establish the amount of repellent required for application, the surface area of the arm (for laboratory tests) or the leg (for field tests) of each volunteer was determined using the following formula: Area = ½ (Cw + Ce) × Dwe.\nHere Cw is the circumference of the wrist or ankle in cm, Ce is the circumference at the elbow (cubital fossa) or the knee in cm, and Dwe is the distance in cm between Ce and Cw [21]. The amount of ointment needed by each volunteer was determined from the surface area of their forearm or leg. The quantity of product left in the bottles was weighed using a precision weighing balance (KERN & SOHN GmbH, Balingen, Germany) to determine the amount applied by each volunteer.", "MAÏA, a shea butter-based ointment containing 15% DEET (N,N-diethyl-3-methylbenzamide), was received from Maïa Africa SAS. It was tested against an ethanolic solution of 20% DEET as positive control and 70% ethanol as negative control. DEET is the standard reference repellent.", "Four strains of mosquitoes were used in the laboratory tests, including Kisumu (F57) and Bora Bora (F58), susceptible strains of An. gambiae and Aedes aegypti, respectively. In addition, local strains colonized in the laboratory from rural areas of Goden, Burkina Faso, were used, hereafter named An-Goden (local An. gambiae strain, F418) and Loc-Aedes (local Ae. aegypti, F318). These mosquitoes were maintained under a 12:12 h (light:dark) photoperiod. During rearing, larvae were fed on fish food while glucose was used for adults. The temperature and relative humidity in the rearing room were 25–28 °C and 60–80%, respectively. The mosquitoes used in these experiments were 5- to 10-day-old nulliparous females starved of sugar solution for 12 h before the experiment.", "The laboratory experiments were conducted following the WHO guidelines for the arm-in-cage test [21]. Cages were 45 × 45 × 45 cm screen enclosures. Two test cages were used, one for the repellent candidate and the other for the positive control. The test cages contained 200 females, aged 5 to 10 days, of one of the four mosquito strains: Kisumu, An-Goden, Bora Bora and Loc-Aedes. The experiment in the laboratory was carried out at temperatures ranging between 25 and 28 °C, with relative humidity between 60 and 80%.\nOverall, 2 mg of ointment was applied per square centimetre of the left forearm of each volunteer, a dose estimated from the average amount applied by 5 laboratory volunteers asked to apply the ointment to their left forearm as they would normally do in real life. In all subsequent repellent trials, volunteers were supplied with a total amount that would achieve this dose over the surface area of their forearms and/or legs (determined as described above). A steel spatula was used to apply the ointment to the forearm of each volunteer prior to each experiment, to avoid spreading ointment onto non-targeted areas of skin. The positive control consisted of 1 ml of 20% DEET solution applied to the right forearm of each volunteer.\nNegative control and MAÏA test arms were prepared by first washing the left forearm with odorless soap, drying, rinsing with 70% ethanol solution and then drying again. All volunteers wore latex gloves to protect their hands from mosquitoes. To assess the readiness of the mosquitoes to land, both the left and right cleaned forearms of each volunteer were exposed in the experimental cages for 30 s (or until 10 mosquito landings were counted). Then, for each volunteer, the right forearm was treated from wrist to elbow with 1 ml of the 20% DEET solution while the left forearm was treated from wrist to elbow with MAÏA ointment. Thirty minutes after application of the repellents, the volunteer exposed their treated forearm in the test cage for 3 min. The procedure was repeated every 30 min until the first bite occurred, and the elapsed time to the first bite was recorded. The test was performed three times for each volunteer per mosquito species. Considering the difference in the biting periods of the two mosquito species, the tests using the Ae. aegypti strains were carried out between 09:00 and 18:00, whereas those for An. gambiae were conducted between 17:00 and 05:00 [21].", "The lower legs of the volunteers were washed with neutral soap, rinsed with 70% ethanol solution and dried naturally. Once their legs were treated, volunteers were asked to avoid rubbing, touching or wetting the repellent-treated area. Two mg of MAÏA per sq cm (2.4 ± 0.2 g per 1189 ± 79.2 sq cm) and 2 ml of 20% DEET (2 ± 0.1 ml per 1189 ± 79.2 sq cm, as a positive control) were applied to the volunteers' lower legs, from knee to ankle. A total of 20 volunteers were recruited from Goden village and trained for nocturnal mosquito collection using HLC. Each volunteer was randomly allocated to one of five groups of four volunteers (2 groups for MAÏA, 2 for the positive control, 1 for the negative control) according to the repellent received. Each night of collection, the experiment took place at five different households, located at least 20 m apart as per WHO guidelines [21], in order to avoid biases in attractiveness to the mosquitoes.\nMosquito collection started 30 min after treatment. Volunteers acting as bait sat on chairs in pairs (one indoors and one outdoors) and actively collected the mosquitoes that landed on their treated lower leg using a mouth aspirator and a torch [22] for 45 min, followed by a 15-min break. Volunteers wore long-sleeved shirts buttoned at the wrist, long trousers, closed shoes, latex gloves and a hat, but with the treated lower leg exposed to mosquitoes by rolling the trousers up to the knee. During these experiments, mosquitoes were collected simultaneously indoors and outdoors between 19:00 and 06:00. To avoid biases introduced by individual attractiveness and skill [23, 24], volunteers at the same household rotated between indoors and outdoors hourly. In each household, two groups of two people rotated, collecting from 18:00 to 24:00 and from 00:00 to 06:00, following a Williams balanced Latin square design.\nCollected mosquitoes were transferred into plastic cups covered with a piece of untreated net with a small hole at the bottom, allowing mosquitoes to be easily aspirated into them. 
After collection, mosquitoes were brought to the entomological laboratory of CNRFP and morphologically identified using a stereomicroscope and identification keys [25].", "No side effects were observed or reported by any of the volunteers throughout the period of tests, both in the laboratory and in the field.", "Written informed consent was obtained from all volunteers and household owners recruited in this study. The study was approved by the institutional ethics committee of CNRFP under 2019/000008/MS/SG/CNRFP/CIB.", "All data were collected on standard forms and entered twice into a database by different people. The databases were compared using Epi Info 3.5.3, and inconsistencies were verified using the printed forms and corrected. The performance of the repellent was measured by calculating the repulsive efficacy and the median complete protection time. A generalized linear mixed model (GLMM) was used to further analyse the effect of location (indoors vs. outdoors) on the performance of the treatments. Variation in the average number of bites received between treatments was also assessed.\nThe median CPT is defined as the interval of time between the beginning of the collection/test and the first mosquito landing. To estimate the median CPT of each treatment, a Kaplan-Meier survival analysis was performed for each vector species and strain used in the laboratory experiments and on the field data, using the survival functions of R version 3.5.0 (2018-04-23). For the field test, the analysis was performed only on An. gambiae s.l., as it was the most abundant species collected (~96% of the total collection). The analysis consisted of assessing the median CPT and the repulsive efficacy. The repulsive efficacy was calculated as a percentage of repellency (% R) according to the formula % R = ((C − T) / C) × 100, where C is the number of mosquitoes collected on the treated legs of volunteers in each of the two control treatments (taken separately) and T is the total number of mosquito bite attempts on the legs of volunteers treated with the test product [21].",
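As a quick illustration of the repellency formula above, the Python sketch below implements % R = ((C − T) / C) × 100 and evaluates it on made-up counts (not the study's data); the function name is ours.

```python
# Sketch only: percentage repellency as defined in the Data analysis section.
def percent_repellency(control_count: int, treated_count: int) -> float:
    """% R = ((C - T) / C) * 100, where C is the count on control legs
    and T is the count on legs treated with the test product."""
    if control_count == 0:
        raise ValueError("Control count must be > 0 to compute % repellency")
    return (control_count - treated_count) / control_count * 100

# Hypothetical hourly counts, for illustration only (not the study's data):
print(percent_repellency(control_count=50, treated_count=4))   # 92.0
print(percent_repellency(control_count=50, treated_count=11))  # 78.0
```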
"Overall, under laboratory conditions, the relative repellency (median CPT) of both MAÏA and 20% DEET was higher against An. gambiae than against Ae. aegypti (Table 1). MAÏA performed well in repelling the four mosquito strains used in this study. The median CPTs were, respectively, 6.5 h for An. gambiae (Kisumu), 5.5 h for An. gambiae (Goden, local strain) and 4 h for Ae. aegypti (both the local and the susceptible strain). There was no significant difference between the two treatments in any of the experiments (Kisumu: χ2 = 2.1, p = 0.14; Goden: χ2 = 0.8, p = 0.36; Bora Bora: χ2 = 1.7, p = 0.19; Ae. aegypti local strain: χ2 = 0.9, p = 0.35), indicating that MAÏA and 20% DEET have equal repellency against these strains. The Kaplan-Meier curves for MAÏA and 20% DEET against Kisumu, An-Goden, Bora Bora and Loc-Aedes are shown in Fig. 2.\n\nTable 1 Median complete protection times (CPT) in minutes, with 95% confidence intervals (CI), against each mosquito strain for 20% DEET and MAÏA under laboratory conditions: An. gambiae Kisumu - DEET 390 (368–412), MAÏA 390 (334–446); An. gambiae Goden - DEET 300 (272–328), MAÏA 330 (216–444); Ae. aegypti Bora Bora - DEET 270 (252–288), MAÏA 240 (239–241); Ae. aegypti local - DEET 240 (212–268), MAÏA 240 (225–255).\nFig. 2 Kaplan-Meier plots for 20% DEET and MAÏA tested against the four mosquito strains on five volunteers",
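The original analysis estimated median CPTs with Kaplan-Meier survival functions in R 3.5.0. As a rough illustration of how median CPT can be obtained from right-censored time-to-first-landing data, here is a sketch in Python using the lifelines package; the durations and event flags below are invented, not the study's data.

```python
# Sketch only: estimating median complete protection time (CPT) from
# time-to-first-landing data. The study used R's survival functions; this
# illustration uses Python's lifelines instead. Times and events are invented.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    # minutes from application to first landing (or end of follow-up)
    "time":  [480, 450, 510, 600, 390, 480],
    # 1 = a mosquito landed (event observed), 0 = right-censored
    "event": [1, 1, 1, 0, 1, 1],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["time"], event_observed=df["event"], label="MAIA (example)")
print(kmf.median_survival_time_)        # median CPT estimate in minutes
print(kmf.confidence_interval_.tail())  # pointwise 95% CI of the survival curve
```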
"Mosquito species composition and biting behaviours A total of 3,979 mosquitoes, stratified by treatment and species (Table 2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among the anopheline species, 99.6% belonged to the An. gambiae complex, followed by An. funestus (0.1%) and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table 2). The hourly mosquito biting rate varied significantly between treatments (df = 2, χ2 = 426.22, p < 0.0001). An average of 0.68 (95% CI: 0.51–0.91) mosquito bites was received per person per hour for MAÏA, compared to 1.01 (95% CI: 0.76–1.33) for 20% DEET and 8.98 (95% CI: 6.56–12.29) for 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors; df = 2, χ2 = 1.703, p = 0.42). Overall, the outdoors:indoors biting ratio was 1.26 (95% CI: 1.25–1.27), showing that more bites were taking place outdoors than indoors (df = 1, χ2 = 5.79, p = 0.016).\nTable 2 Total number of mosquitoes collected after treatment with 20% DEET, 70% ethanol and MAÏA: Anopheles gambiae sensu lato (s.l.) - DEET 686, ethanol 2660, MAÏA 480; Anopheles funestus - DEET 1, ethanol 2, MAÏA 1; other Anopheles - DEET 1, ethanol 9, MAÏA 3; culicines - DEET 7, ethanol 106, MAÏA 23.\nRepellency against mosquitoes Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (up to 6 h after application), the percentage repellency varied from 100 to 90% for both MAÏA and DEET. Between 00:00 and 03:00 (up to 9 h after application), the percentage was between 90 and 80% (Fig. 3). After 03:00 (10 h after application), this percentage fell below 80% for 20% DEET, whereas it remained above 80% for MAÏA. MAÏA gave a high percentage of repellency throughout; however, during the first 9 h after application no difference in repellency was observed between MAÏA and the positive control.\nFig. 3 Repellency of 20% DEET and MAÏA in indoor and outdoor collections\nWhen the data were stratified by location of mosquito biting, the trend was the same indoors and outdoors. No difference was observed during the first 9 h between MAÏA and 20% DEET. These results show that MAÏA can protect both indoors and outdoors.\nComplete protection time The overall median CPTs of 20% DEET and MAÏA (Table 3) were estimated at 480 min (8 h), against 120 min (2 h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120 min for 20% DEET, MAÏA and the negative control, respectively (Table 4). For indoor collections, these estimates were 480, 480 and 60 min, respectively, for 20% DEET, MAÏA and ethanol (Table 4). Statistical analyses showed no difference in median CPT between 20% DEET and MAÏA (df = 1, χ2 = 0.2, p = 0.7). However, there was a significant difference between the median CPT estimated for MAÏA and that of the negative control (df = 2, χ2 = 106, p < 0.0001, Fig. 4). When the collections were stratified by location, this difference persisted both indoors (Fig. 5A; df = 2, χ2 = 41.6, p < 0.0001) and outdoors (Fig. 5B; df = 2, χ2 = 66.7, p < 0.0001).\nTable 3 Estimated complete protection time (min) with 95% CI against Anopheles gambiae s.l.: 20% DEET 480 (454–506), 70% ethanol 120 (91–149), MAÏA 480 (448–512).\nTable 4 Estimated complete protection times (min) with 95% CI against Anopheles gambiae s.l., indoors and outdoors: indoors - 20% DEET 480 (440–521), 70% ethanol 60 (<60–NA), MAÏA 480 (440–521); outdoors - 20% DEET 480 (447–513), 70% ethanol 120 (95–145), MAÏA 450 (428–472).\nFig. 4 Overall estimated probabilities of no mosquito landing for each treatment according to time of collection\nFig. 5 Estimated probabilities of no mosquito landing for each treatment according to time, at indoor (A) and outdoor (B) locations",
"A total of 3,979 mosquitoes, stratified by treatment and species (Table 2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among the anopheline species, 99.6% belonged to the An. gambiae complex, followed by An. funestus (0.1%) and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table 2). The hourly mosquito biting rate varied significantly between treatments (df = 2, χ2 = 426.22, p < 0.0001). An average of 0.68 (95% CI: 0.51–0.91) mosquito bites was received per person per hour for MAÏA, compared to 1.01 (95% CI: 0.76–1.33) for 20% DEET and 8.98 (95% CI: 6.56–12.29) for 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors; df = 2, χ2 = 1.703, p = 0.42). Overall, the outdoors:indoors biting ratio was 1.26 (95% CI: 1.25–1.27), showing that more bites were taking place outdoors than indoors (df = 1, χ2 = 5.79, p = 0.016).\nTable 2 Total number of mosquitoes collected after treatment with 20% DEET, 70% ethanol and MAÏA: Anopheles gambiae sensu lato (s.l.) - DEET 686, ethanol 2660, MAÏA 480; Anopheles funestus - DEET 1, ethanol 2, MAÏA 1; other Anopheles - DEET 1, ethanol 9, MAÏA 3; culicines - DEET 7, ethanol 106, MAÏA 23.", "Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (up to 6 h after application), the percentage repellency varied from 100 to 90% for both MAÏA and DEET. Between 00:00 and 03:00 (up to 9 h after application), the percentage was between 90 and 80% (Fig. 3). After 03:00 (10 h after application), this percentage fell below 80% for 20% DEET, whereas it remained above 80% for MAÏA. MAÏA gave a high percentage of repellency throughout; however, during the first 9 h after application no difference in repellency was observed between MAÏA and the positive control.\nFig. 3 Repellency of 20% DEET and MAÏA in indoor and outdoor collections\nWhen the data were stratified by location of mosquito biting, the trend was the same indoors and outdoors. No difference was observed during the first 9 h between MAÏA and 20% DEET. These results show that MAÏA can protect both indoors and outdoors.", "The overall median CPTs of 20% DEET and MAÏA (Table 3) were estimated at 480 min (8 h), against 120 min (2 h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120 min for 20% DEET, MAÏA and the negative control, respectively (Table 4). For indoor collections, these estimates were 480, 480 and 60 min, respectively, for 20% DEET, MAÏA and ethanol (Table 4). Statistical analyses showed no difference in median CPT between 20% DEET and MAÏA (df = 1, χ2 = 0.2, p = 0.7). However, there was a significant difference between the median CPT estimated for MAÏA and that of the negative control (df = 2, χ2 = 106, p < 0.0001, Fig. 4). When the collections were stratified by location, this difference persisted both indoors (Fig. 5A; df = 2, χ2 = 41.6, p < 0.0001) and outdoors (Fig. 5B; df = 2, χ2 = 66.7, p < 0.0001).\nTable 3 Estimated complete protection time (min) with 95% CI against Anopheles gambiae s.l.: 20% DEET 480 (454–506), 70% ethanol 120 (91–149), MAÏA 480 (448–512).\nTable 4 Estimated complete protection times (min) with 95% CI against Anopheles gambiae s.l., indoors and outdoors: indoors - 20% DEET 480 (440–521), 70% ethanol 60 (<60–NA), MAÏA 480 (440–521); outdoors - 20% DEET 480 (447–513), 70% ethanol 120 (95–145), MAÏA 450 (428–472).\nFig. 4 Overall estimated probabilities of no mosquito landing for each treatment according to time of collection\nFig. 5 Estimated probabilities of no mosquito landing for each treatment according to time, at indoor (A) and outdoor (B) locations" ]
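For the treatment and location comparisons of hourly bite counts reported in the field-test entries above, the authors fitted a GLMM in R. The sketch below is a simplified stand-in in Python: a plain Poisson GLM via statsmodels on an invented data frame, without the random effects (volunteer, household, night) that a full GLMM would include; the column names and factor levels are ours, not the authors'.

```python
# Sketch only: comparing hourly bite counts between treatments and locations.
# The study fitted a generalized linear mixed model (GLMM); this simplified
# stand-in fits a plain Poisson GLM with statsmodels and omits the random
# effects a full GLMM would include. The data frame below is hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "bites":     [0, 1, 2, 9, 12, 7, 1, 0, 10, 2],
    "treatment": ["MAIA", "MAIA", "DEET", "ethanol", "ethanol",
                  "ethanol", "DEET", "MAIA", "ethanol", "DEET"],
    "location":  ["indoor", "outdoor", "indoor", "outdoor", "indoor",
                  "outdoor", "outdoor", "indoor", "outdoor", "indoor"],
})

model = smf.glm("bites ~ C(treatment, Treatment(reference='ethanol')) + location",
                data=df, family=sm.families.Poisson()).fit()
# Treatment coefficients below zero (on the log scale) indicate fewer bites
# than the ethanol reference; the location term tests indoor vs. outdoor.
print(model.summary())
```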
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study area", "Human volunteer preparation", "Repellents", "Strains of mosquitoes", "Evaluation in the laboratory", "Field evaluation", "Side effects", "Ethical clearance", "Data analysis", "Results", "Laboratory tests", "Field test", "Mosquito species composition and biting behaviours", "Repellency against mosquitoes", "Complete protection time", "Discussion", "Conclusions" ]
[ "Malaria is one of the deadliest diseases in many low- and middle-income countries, affecting mainly children and pregnant women in sub-Saharan Africa [1]. Long-lasting insecticide-treated nets (LLINs) have been regarded as the most effective method for controlling mosquitoes transmitting malaria parasites. Since 2000, about one billion nets have been distributed in Africa, resulting in a significant decline in malaria-related deaths on the continent between 2000 and 2015 [24].\nHowever, the massive use of insecticides in public health in addition to that in agriculture causes concern regarding insecticide resistance [57] and changing behaviour [8, 9] of the malaria vectors. For example, a study conducted in Papua New Guinea showed a shift in mosquito biting from night to earlier hours in the evening after a nationwide distribution of LLINs [10]. Similar changes in the behaviour of Anopheles funestus have been observed in Benin and Senegal after LLIN distribution achieved a high level of coverage [9, 10]. Furthermore, studies suggest that the scaling up of LLIN distribution and indoor residual spraying (IRS) have led to more outdoor biting by Anopheles gambiae sensu lato (s.l.), commonly considered endophagic mosquitoes [1113]. A recent study in the Cascades region of Burkina Faso showed a high level of insecticide resistance [14] where more than 50% of the major vector, An. gambiae s.l., were collected biting outdoors [15]. These altered patterns of outdoors, early evening and morning biting, by anophelines, combined with resistance to insecticides appear to be caused by the mass distribution of LLINs and imply the inexorable loss of efficacy of these interventions [16, 17]. A recent study highlighted that an increase in early evening biting could increase transmission not only because people are unprotected by nets, but also because there is a higher chance of malaria vectors becoming infectious [18]. The development of new vector control tools, in addition to LLINs, is therefore necessary to protect people, when they are not under a bed net.\nTopical repellents could play an important role in addressing this problem if they are effective and accepted by the population. A systematic review of repellent interventions and mathematical modeling has shown that user compliance is indeed one of the most decisive factors for the success of this intervention [19]. In sub-Saharan Africa, ointments are used primarily by mothers and children to moisturize their skins. In Burkina Faso, ointments are applied to 80% of children every evening, when mosquitoes start biting (Kadidia Ouedraogo et al., in prep). Maa Africa, a company based in Burkina Faso, has developed a mosquito repellent ointment, MAA, uniquely designed with local mothers, to be used daily within their families. The underlying idea is to leverage the existing habits of mothers to protect their families from infectious bites whenever they are not under a net. Affordability is a key criterion for the products adoption and use. MAA has been industrially produced since June 2020 in Cte dIvoire and integrates a large share of ingredients sourced in West Africa. The ointment was officially launched in August 2020 in Burkina Faso and over 50,000 units were sold in the first four months. In March 2021, over 500 points of sales (mainly general stores, pharmacies and kiosks) distributed the product in the country. 
If MAA proves that it is both effective and accepted by the population, it could play a key role in reducing the probability of children experiencing infectious bites during the evening and be positioned as a complementary intervention to LLINs.\nThe aim of this study is to evaluate the effectiveness of MAA in both laboratory and field conditions, especially the median complete protection time (CPT) offered by the product. Results from these evaluations are important for validating how effective this new repellent is; behavioral responses to repellent differ between wild mosquito and laboratory-reared mosquito populations [20].", "Study area Laboratory tests were conducted in May 2019 in the insectary of Centre National de Recherche et de Formation sur le Paludisme (CNRFP) in Ouagadougou, Burkina Faso. Field tests were carried out at Goden (1225N, 120W), a site located at 15km northeast of Ouagadougou, the capital city of Burkina Faso (Fig.1). Goden is a rural village with a Sudanian savanna climate and rainfall under 900 mm annually. The ~800 inhabitants mainly belong to the Mossi ethnic group, and are mostly devoted to agriculture and raising pigs, dogs, goats, and chickens within their compounds. LLINs were distributed in 2016 to ~90% of the population. Goden is known for its high density of malaria vectors due to its proximity to the Massili River. The field study was carried out during the rainy season (August to November 2019) corresponding to high vector density and high malaria transmission. A preliminary assessment of the mosquito density on the collection site was carried out using human landing catches (HLCs) before the tests started.\n\nFig. 1Study area\nStudy area\nLaboratory tests were conducted in May 2019 in the insectary of Centre National de Recherche et de Formation sur le Paludisme (CNRFP) in Ouagadougou, Burkina Faso. Field tests were carried out at Goden (1225N, 120W), a site located at 15km northeast of Ouagadougou, the capital city of Burkina Faso (Fig.1). Goden is a rural village with a Sudanian savanna climate and rainfall under 900 mm annually. The ~800 inhabitants mainly belong to the Mossi ethnic group, and are mostly devoted to agriculture and raising pigs, dogs, goats, and chickens within their compounds. LLINs were distributed in 2016 to ~90% of the population. Goden is known for its high density of malaria vectors due to its proximity to the Massili River. The field study was carried out during the rainy season (August to November 2019) corresponding to high vector density and high malaria transmission. A preliminary assessment of the mosquito density on the collection site was carried out using human landing catches (HLCs) before the tests started.\n\nFig. 1Study area\nStudy area\nHuman volunteer preparation Healthy adult male volunteers aged between 18 and 40 years were enrolled in this study. The volunteers were instructed not to use fragranced soaps, perfume, tobacco, or alcohol 12h before the start and throughout their participation. To establish the amount of repellent required for application, the surface area of the arm (for laboratory tests) or the leg (for field tests) of volunteers was determined using the following formula: Area= (Cw+ Ce) Dwe.\nHere Cw is the circumference of the wrist or ankle in cm, Ce is the elbow cubital fossa or the knee circumference in cm, and Dwe is the distance in cm between Ce and Cw [21]. The amount of ointment needed for each volunteer was determined depending on the area of their forearm or leg length. 
The quantity of product left in bottles was weighed using a precision weighing balance (KERN & SOHN GmbH, Balingen, Germany) to determine the amount applied by each volunteer.\nHealthy adult male volunteers aged between 18 and 40 years were enrolled in this study. The volunteers were instructed not to use fragranced soaps, perfume, tobacco, or alcohol 12h before the start and throughout their participation. To establish the amount of repellent required for application, the surface area of the arm (for laboratory tests) or the leg (for field tests) of volunteers was determined using the following formula: Area= (Cw+ Ce) Dwe.\nHere Cw is the circumference of the wrist or ankle in cm, Ce is the elbow cubital fossa or the knee circumference in cm, and Dwe is the distance in cm between Ce and Cw [21]. The amount of ointment needed for each volunteer was determined depending on the area of their forearm or leg length. The quantity of product left in bottles was weighed using a precision weighing balance (KERN & SOHN GmbH, Balingen, Germany) to determine the amount applied by each volunteer.\nRepellents MAA, a shea butter-based ointment containing 15% DEET (N, N-diethyl-3-methylbenzamide) was received from Maa Africa SAS. It was tested against an ethanolic solution of 20% DEET as positive control and a negative control of 70% ethanol. DEET is known as the standard repellent reference.\nMAA, a shea butter-based ointment containing 15% DEET (N, N-diethyl-3-methylbenzamide) was received from Maa Africa SAS. It was tested against an ethanolic solution of 20% DEET as positive control and a negative control of 70% ethanol. DEET is known as the standard repellent reference.\nStrains of mosquitoes Four strains of mosquitoes were used in the laboratory tests, including Kisumu F57 and Bora bora F58 susceptible strains of respectively An. gambiae and Aedes aegypti. In addition, local strains laboratory-colonized from rural areas of Goden, in Burkina Faso, were used: hereafter named An-Goden (An. gambiae local strain, F418) and Loc-Aedes (local Ae. aegypti, F318). These species were maintained under a 12:12h (light: dark) photoperiod. During rearing, larvae were fed on fish food while glucose was used for adults. The temperature and relative humidity in the rearing room were 2528C and 6080%, respectively. Individual mosquitoes used in these experiments were 5 to 10 days old nulliparous females starved from sugar solution for 12h before the experiment.\nFour strains of mosquitoes were used in the laboratory tests, including Kisumu F57 and Bora bora F58 susceptible strains of respectively An. gambiae and Aedes aegypti. In addition, local strains laboratory-colonized from rural areas of Goden, in Burkina Faso, were used: hereafter named An-Goden (An. gambiae local strain, F418) and Loc-Aedes (local Ae. aegypti, F318). These species were maintained under a 12:12h (light: dark) photoperiod. During rearing, larvae were fed on fish food while glucose was used for adults. The temperature and relative humidity in the rearing room were 2528C and 6080%, respectively. Individual mosquitoes used in these experiments were 5 to 10 days old nulliparous females starved from sugar solution for 12h before the experiment.\nEvaluation in the laboratory The laboratory experiments were conducted following WHO guidelines for the arm-in-cage test [21]. Cages were 454545cm screen enclosures. Two test cages were used, one for the repellent candidate and the other for the positive control. 
The test cages contained 200 females aged 5 to 10 days of one of the four mosquito strains: Kisumu, An-Goden, Bora bora and loc-Aedes. The experiment in the laboratory was carried out at temperatures ranging between 25 and 28C, with relative humidity between 60 and 80%.\nOverall, 2mg of ointment were applied per sq cm on the left forearm of each volunteer, a concentration estimated from the average of 5 volunteers from the laboratory asked to apply ointment on their left forearm as they would normally do in real life. In all subsequent repellent trials, volunteers were then supplied with a total volume that would achieve this concentration over the surface area of their forearms and/or legs (determined as described above). A steel spatula was used to apply the ointment on the forearm of each volunteer prior to each experiment to avoid absorbing ointment on non-targeted areas of skin. Positive control consisted of 1 ml of 20% DEET solution applied to the right forearms of volunteers.\nNegative controls and MAA test arms were prepared by first washing left forearms with odorless soap, drying and rinsing with 70% ethanol solution and then drying again. All volunteers wore latex gloves to protect their hands from mosquitoes. To assess the readiness of the mosquitoes to land, both left and right cleaned forearms of volunteers were exposed in the experimental cages for 30s (or until 10 landings of mosquito were counted). Then, for each volunteer, the right forearm was treated from wrist to elbow using 1 ml of the 20% DEET solution whilst the left forearm was treated from the wrist to elbow with MAA ointment. Thirty mins after application of the repellents, the volunteer exposed their treated forearm in the test cage for 3min. The procedure was repeated every 30min until the first bite occurred and the elapsed time to the first bite was recorded. The test was performed three times for each volunteer per mosquito species. Considering the difference in the relative periods of biting activity of each mosquito species, the tests using Ae. aegypti strains were carried out between 09:00 and 18:00, whereas those for An. gambiae were conducted between 17:00 and 05:00 [21].\nThe laboratory experiments were conducted following WHO guidelines for the arm-in-cage test [21]. Cages were 454545cm screen enclosures. Two test cages were used, one for the repellent candidate and the other for the positive control. The test cages contained 200 females aged 5 to 10 days of one of the four mosquito strains: Kisumu, An-Goden, Bora bora and loc-Aedes. The experiment in the laboratory was carried out at temperatures ranging between 25 and 28C, with relative humidity between 60 and 80%.\nOverall, 2mg of ointment were applied per sq cm on the left forearm of each volunteer, a concentration estimated from the average of 5 volunteers from the laboratory asked to apply ointment on their left forearm as they would normally do in real life. In all subsequent repellent trials, volunteers were then supplied with a total volume that would achieve this concentration over the surface area of their forearms and/or legs (determined as described above). A steel spatula was used to apply the ointment on the forearm of each volunteer prior to each experiment to avoid absorbing ointment on non-targeted areas of skin. 
Field evaluation
The lower legs of volunteers were washed with neutral soap, rinsed with 70% ethanol solution and air dried. Once their legs were treated, volunteers were asked to avoid rubbing, touching or wetting the repellent-treated area. MAA was applied at 2 mg per sq cm (2.4 ± 0.2 g per 1189 ± 79.2 sq cm) and 20% DEET (2 ± 0.1 ml per 1189 ± 79.2 sq cm) was applied as a positive control to volunteers' lower legs, from knee to ankle. A total of 20 volunteers were recruited from Goden village and trained for nocturnal mosquito collection using HLC. Each volunteer was randomly allocated to one of five groups of four volunteers (2 groups for MAA, 2 for the positive control, 1 for the negative control) according to the repellent received. Each night of collection, the experiment took place at five different households located at least 20 m apart, as per WHO guidelines [21], in order to avoid biases in attractiveness to the mosquitoes.
Mosquito collection started 30 min after treatment. Volunteers acting as bait sat on chairs in pairs (one indoors and one outdoors) and actively collected mosquitoes that landed on their treated lower leg using a mouth aspirator and a torch [22] for 45 min, followed by a 15-min break. Volunteers wore long-sleeved shirts buttoned at the wrist, long trousers, closed shoes, latex gloves and a hat, with the treated lower leg left exposed to mosquitoes by rolling the trousers up to the knee. During these experiments, mosquitoes were collected simultaneously indoors and outdoors between 19:00 and 06:00. To avoid biases introduced by individual attractiveness and skill [23, 24], volunteers at the same household rotated between indoors and outdoors hourly. In each household, two groups of two people rotated, collecting from 18:00 to 24:00 and from 00:00 to 06:00, following a Williams balanced Latin square design.
Collected mosquitoes were transferred into plastic cups covered with a piece of untreated net with a small hole at the bottom, allowing mosquitoes to be easily aspirated into them. After collection, mosquitoes were brought to the entomological laboratory of CNRFP and morphologically identified using a stereo microscope and identification keys [25].
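For the rotation scheme described above, a Williams balanced Latin square ensures that every condition (for example a household position or a collection slot) follows every other exactly once across rows. The sketch below is the textbook construction for an even number of conditions; it is illustrative only and is not the authors' actual collection schedule.

```python
# Illustrative only: textbook Williams balanced Latin square for an even
# number of conditions n. Every condition follows every other exactly once
# across rows, which balances first-order carryover effects.

def williams_square(n):
    if n % 2 != 0:
        raise ValueError("this simple construction requires an even n")
    # First row follows the zig-zag pattern 0, 1, n-1, 2, n-2, 3, ...
    first = [0]
    lo, hi = 1, n - 1
    while len(first) < n:
        first.append(lo)
        lo += 1
        if len(first) < n:
            first.append(hi)
            hi -= 1
    # Each subsequent row shifts every entry by +1 modulo n.
    return [[(x + r) % n for x in first] for r in range(n)]

if __name__ == "__main__":
    for night, row in enumerate(williams_square(4), start=1):
        print(f"night {night}: order of conditions {row}")
```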
Side effects
No side effects were observed or reported by any of the volunteers throughout the test periods, either in the laboratory or in the field.

Ethical clearance
Written informed consent was obtained from all volunteers and household owners recruited in this study. The study was approved by the institutional ethics committee of CNRFP under 2019/000008/MS/SG/CNRFP/CIB.
Data analysis
All data were collected on standard forms and entered twice into a database by different people. The databases were compared using Epi Info 3.5.3, and inconsistencies were checked against the printed forms and corrected. The performance of the repellent was measured by calculating the repulsive efficacy and the median complete protection time (CPT). A generalized linear mixed model (GLMM) was used to further analyse the effect of location (indoors vs. outdoors) on the performance of the treatments. Variation in the average number of bites received between treatments was also assessed.
The median CPT is defined as the interval of time between the beginning of the collection/test and the first mosquito landing. To estimate the median CPT of each treatment, a Kaplan-Meier survival analysis was performed for each vector species and strain used in the laboratory experiments, and on the field data, using the survival functions of R version 3.5.0 (2018-04-23). For the field test, however, the analysis was restricted to An. gambiae s.l., as it was by far the most abundant species collected (~96% of the total collection). The analysis assessed the median CPT and the repulsive efficacy. The repulsive efficacy was calculated as a percentage of repulsion (%R) according to the formula %R = ((C − T) / C) × 100, where C is the number of mosquitoes collected on the legs of volunteers given each of the two control treatments (considered separately) and T is the total number of mosquito bite attempts on the legs of volunteers treated with the test product [21].
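The authors estimated median CPTs with the survival routines of R 3.5.0. As a language-neutral illustration of the same idea, the sketch below computes a Kaplan-Meier median from first-landing times, treating replicates that ended without a landing as right-censored. The data are invented and this is not the analysis code used in the study.

```python
# Minimal Kaplan-Meier median for the complete protection time (CPT).
# One record per volunteer-replicate: time (minutes) of the first landing and
# whether a landing was actually observed (False = replicate ended bite-free,
# i.e. right-censored). Invented data; not the R analysis used in the study.

def km_median(times, observed):
    """Smallest time at which the Kaplan-Meier survival estimate drops to 0.5 or below."""
    data = sorted(zip(times, observed))
    n_at_risk = len(data)
    survival = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        events_at_t = sum(1 for tt, ev in data[i:] if tt == t and ev)
        records_at_t = sum(1 for tt, _ in data[i:] if tt == t)
        if events_at_t:
            survival *= 1.0 - events_at_t / n_at_risk
            if survival <= 0.5:
                return t
        n_at_risk -= records_at_t
        i += records_at_t
    return None  # fewer than half of the replicates ever saw a landing

if __name__ == "__main__":
    first_landing_min = [330, 360, 390, 390, 420, 480, 480]
    landing_observed = [True, True, True, True, True, True, False]
    print("median CPT (min):", km_median(first_landing_min, landing_observed))
```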
Study area
Laboratory tests were conducted in May 2019 in the insectary of the Centre National de Recherche et de Formation sur le Paludisme (CNRFP) in Ouagadougou, Burkina Faso. Field tests were carried out at Goden (12°25'N, 1°20'W), a site located 15 km northeast of Ouagadougou, the capital city of Burkina Faso (Fig. 1). Goden is a rural village with a Sudanian savanna climate and annual rainfall under 900 mm. Its ~800 inhabitants mainly belong to the Mossi ethnic group and are mostly devoted to agriculture and to raising pigs, dogs, goats and chickens within their compounds. LLINs were distributed in 2016 to ~90% of the population. Goden is known for its high density of malaria vectors due to its proximity to the Massili River. The field study was carried out during the rainy season (August to November 2019), corresponding to high vector density and high malaria transmission. A preliminary assessment of mosquito density at the collection site was carried out using human landing catches (HLCs) before the tests started.

Fig. 1  Study area
Laboratory tests
Overall, under laboratory conditions the relative repellency (median CPT) was higher for both MAA and 20% DEET against An. gambiae compared with Ae. aegypti (Table 1). MAA performed well in repelling the four mosquito strains used in this study. The median CPTs were 6.5 h for An. gambiae (Kisumu), 5.5 h for An. gambiae (Goden, local strain) and 4 h for Ae. aegypti, for both the local and the susceptible strain. There was no significant difference between the two treatments in any of the experiments (Kisumu: χ² = 2.1, p = 0.14; Goden: χ² = 0.8, p = 0.36; Bora bora: χ² = 1.7, p = 0.19; Ae. aegypti local strain: χ² = 0.9, p = 0.35), indicating that MAA and 20% DEET have equal repellency against these strains. The Kaplan-Meier curves for MAA and 20% DEET for Kisumu, An-Goden, Bora bora and Loc-Aedes are shown in Fig. 2.

Table 1  Median complete protection times (CPT) in minutes and their 95% confidence intervals (CI) against mosquito strains, by treatment (20% DEET and MAA), under laboratory conditions

              An. gambiae Kisumu   An. gambiae Goden   Ae. aegypti Bora bora   Ae. aegypti Local
              DEET    MAA          DEET    MAA         DEET    MAA             DEET    MAA
Median CPT    390     390          300     330         270     240             240     240
Lower CI      368     334          272     216         252     239             212     225
Upper CI      412     446          328     444         288     241             268     255

Fig. 2  Kaplan-Meier plots for 20% DEET and MAA tested against the four strains on five volunteers
Field test

Mosquito species composition and biting behaviours
A total of 3,979 mosquitoes, stratified by treatment and species (Table 2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among the anophelines, 99.6% belonged to the An. gambiae complex, followed by An. funestus (0.1%) and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table 2). The hourly mosquito biting rate varied significantly between treatments (df = 2, χ² = 426.22, p < 0.0001). An average of 0.68 (95% CI: 0.51–0.91) mosquito bites were received per person per hour for MAA, compared with 1.01 (95% CI: 0.76–1.33) for 20% DEET and 8.98 (95% CI: 6.56–12.29) for 70% ethanol. There was no variation between treatments according to location (outdoors vs. indoors, df = 2, χ² = 1.703, p = 0.42). Overall, the outdoors:indoors biting ratio was 1.26 (95% CI: 1.25–1.27), showing that more bites took place outdoors than indoors (df = 1, χ² = 5.79, p = 0.016).

Table 2  Total number of mosquitoes collected after treatment with 20% DEET, MAA and 70% ethanol

Mosquito species                      DEET   Ethanol   MAA
Anopheles gambiae sensu lato (s.l.)    686      2660   480
Anopheles funestus                       1         2     1
Other Anopheles                          1         9     3
Culicines                                7       106    23
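The per-person-hour biting rates above come from the GLMM. As a rough cross-check only (not the authors' model), an exact Poisson interval for a count of bites over a known exposure can be computed as below; the landing count is taken from Table 2, while the total person-hours of collection is an assumed, illustrative figure.

```python
# Rough cross-check only: exact (Garwood) Poisson confidence interval for a
# biting rate (bites per person-hour). This is NOT the GLMM fitted by the
# authors; the person-hours figure below is assumed for illustration.

from scipy.stats import chi2

def poisson_rate_ci(count, person_hours, alpha=0.05):
    lower = 0.0 if count == 0 else chi2.ppf(alpha / 2, 2 * count) / (2 * person_hours)
    upper = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / (2 * person_hours)
    return count / person_hours, (lower, upper)

if __name__ == "__main__":
    # 480 An. gambiae s.l. landings on MAA (Table 2) over an assumed 640 person-hours.
    rate, (lo, hi) = poisson_rate_ci(count=480, person_hours=640.0)
    print(f"An. gambiae s.l. on MAA: {rate:.2f} bites/person-hour (95% CI {lo:.2f}-{hi:.2f})")
```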
Repellency against mosquitoes
Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (up to 6 h after application), the percentage repellency varied between 100 and 90% for both MAA and DEET. Between 00:00 and 03:00 (up to 9 h after application), the percentage was between 90 and 80% (Fig. 3). After 03:00 (10 h after application), this percentage fell below 80% for 20% DEET, whereas MAA remained above 80%. MAA gave a high percentage of repellency throughout; however, during the first 9 h after application no difference in repellency was observed between MAA and the positive control.

Fig. 3  Repellency of 20% DEET and MAA, indoor and outdoor collections

When the data were stratified by location of mosquito biting, the trend was the same indoors and outdoors. No difference was observed during the first 9 h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors.
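The hour-by-hour repellency above follows the %R formula given in the data-analysis section. A small sketch of that calculation is shown below; the counts are invented and are not the field data.

```python
# Sketch of the hourly repellency calculation, %R = ((C - T) / C) * 100, where
# C is the landing count on the control and T the count on the treated
# collectors in the same hour. The counts below are invented for illustration.

def percent_repellency(control_count, treated_count):
    if control_count == 0:
        raise ValueError("repellency is undefined when the control count is zero")
    return (control_count - treated_count) / control_count * 100.0

if __name__ == "__main__":
    hourly_counts = {  # hour -> (control landings, MAA landings); invented numbers
        "19:00": (42, 0),
        "22:00": (55, 3),
        "01:00": (60, 7),
        "04:00": (38, 8),
    }
    for hour, (c, t) in hourly_counts.items():
        print(f"{hour}: %R = {percent_repellency(c, t):.1f}%")
```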
Complete protection time
The overall median CPTs of 20% DEET and MAA (Table 3) were both estimated at 480 min (8 h), against 120 min (2 h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120 min for 20% DEET, MAA and the negative control, respectively (Table 4). For indoor collections, these estimates were 480, 480 and 60 min for 20% DEET, MAA and ethanol, respectively (Table 4). Statistical analyses showed no difference in median CPT between 20% DEET and MAA (df = 1, χ² = 0.2, p = 0.7). However, there was a significant difference between the median CPT estimated for MAA and that of the negative control (df = 2, χ² = 106, p < 0.0001, Fig. 4). When the collection was stratified by location, this difference persisted in both indoor (Fig. 5A; df = 2, χ² = 41.6, p < 0.0001) and outdoor collections (Fig. 5B; df = 2, χ² = 66.7, p < 0.0001).

Table 3  Estimated complete protection time (min) with 95% CI against Anopheles gambiae s.l. for 20% DEET, MAA and 70% ethanol

              DEET   Ethanol   MAA
Median CPT     480       120   480
Lower CI       454        91   448
Upper CI       506       149   512

Table 4  Estimated complete protection times (min) with 95% CI against Anopheles gambiae s.l. for 20% DEET, MAA and 70% ethanol, indoors and outdoors

              Indoors                       Outdoors
              20% DEET   Ethanol   MAA      20% DEET   Ethanol   MAA
Median CPT         480        60   480           480       120   450
Lower CI           440       <60   440           447        95   428
Upper CI           521        NA   521           513       145   472

Fig. 4  Overall estimated probabilities of no mosquito landing for each treatment according to time of collection
Fig. 5  Estimated probabilities of no mosquito landing for each treatment according to time, at indoor (A) and outdoor (B) locations
Discussion
The results of this study demonstrated that MAA, a shea butter-based ointment with 15% DEET, provides high protection against mosquitoes in Goden, a rural area of Burkina Faso. Tests under both field and laboratory conditions suggested that MAA has the same repellent effect as a 20% DEET ethanolic solution over the collection period. Similar results were found both indoors and outdoors. The percentage repellency of MAA varied between 100 and 80% over the main malaria vector biting period, which occurs between 18:00 and 06:00. The median CPTs were also similar, estimated at around 480 min. Both MAA and the 20% DEET ethanolic solution provided up to 90% repellency during the first 6 h after application. Furthermore, the results suggested that with MAA the average number of bites received per hour was significantly lower (less than 1 bite per hour) than with either 20% DEET or the negative control. One limitation is that biting activity on the control arms was not checked at the end of the laboratory experiments to confirm consistent mosquito avidity, but this did not affect the overall result. Overall, it can be argued that MAA could protect people before they go to bed.

Previous studies in the same locality comparing three different repellents found that DEET, IR3535 and KBR 3023 were effective against An. gambiae s.l. and other Afrotropical vector mosquitoes [26]. The authors showed that protection from KBR 3023, DEET and IR3535 remained high against anophelines for up to 10 h post-exposure. In comparison, results from the current study indicated that relative repellency was 100% for ~8 h. These results were similar to those of a recent study in Ethiopia comparing DEET (N,N-diethyl-3-methylbenzamide) and MyggA (p-menthane diol) with other laboratory products (20% neem oil and 20% chinaberry oil), in which the mean CPT was 8 h for DEET and an estimated 6 h for MyggA [27].
Eight hours of repellency may suffice to protect against early vector biting, both indoors and outdoors, before residents are protected by the insecticide-treated nets deployed indoors.
To date, 11 countries across the world are classified as having a high burden of malaria [2]. In these countries malaria vector control is still based on the use of insecticides, either in the form of indoor spraying or through the large-scale distribution of LLINs. These strategies can effectively reduce the number of malaria cases [28]; however, the major challenges are the resistance of malaria vectors to different classes of insecticides and the shifts in their feeding and resting behaviours, with a tendency to bite and rest outdoors. For example, a study in the Cascades region of Burkina Faso indicated that, in addition to insecticide resistance, more than 50% of malaria vector biting was taking place outdoors. Therefore, new and supplementary methods are urgently needed to complement these tools in the perspective of malaria elimination [29]. In accordance with the spirit of locally adapted, integrated vector and disease control [30], repellents can usefully complement existing control strategies and provide an additional tool in the management of insecticide resistance. In the context of widespread vector resistance to insecticides and the tendency of mosquitoes to bite outside houses, there is a need to add MAA ointment to the vector control tools used in malaria-burdened sub-Saharan countries.
The originality of MAA comes from its formulation based on a local butter extensively used in rural areas of West Africa, whose production contributes to women's economic income. Promoting the use of local, endogenous strategies can sustain malaria control and also improve the economic situation of African women. Additionally, it has been shown that shea butter is a source of anti-inflammatory and anti-tumour-promoting compounds [31]. Another interesting compound in shea butter is cinnamic acid, which is known for its antibacterial, antifungal and antiviral properties [32]. Shea butter both moisturizes and heals the skin, and clinical studies have shown it to be safe for skin [33]. MAA ointment would therefore not only protect from mosquito-borne diseases but could also offer protection against other micro-organisms.
Besides the level of protection offered by repellents, daily compliance and appropriate use seem to be major obstacles to achieving the potential impact on malaria [34]. An efficacy study carried out in Tanzania showed that volunteers preferred MAA ointment to a more classical 20% DEET solution [35]. Sales by Maa Africa SAS of 50,000 units in local stores in Burkina Faso between August and November 2021 further illustrate its desirability. However, more data are needed to understand who is likely to use the product and whether its usage is appropriate in terms of frequency and application in order to have an impact on mosquito-borne diseases such as malaria.", "MAA, a novel ointment formulated with shea butter, which is widely used in West Africa to moisturize the skin of children, has shown high repellency against laboratory-reared and wild malaria vectors. In the context of widespread vector resistance to insecticides and the growing tendency of mosquitoes to bite outside houses, there is a need to add MAA ointment to the vector control tools used in sub-Saharan countries with a high malaria burden." ]
[ null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Malaria", "Mosquito", "Anopheles gambiae", "Aedes aegypti", "Repellent", "MAA", "Burkina Faso" ]
Background: Malaria is one of the deadliest diseases in many low- and middle-income countries, affecting mainly children and pregnant women in sub-Saharan Africa [1]. Long-lasting insecticide-treated nets (LLINs) have been regarded as the most effective method for controlling the mosquitoes transmitting malaria parasites. Since 2000, about one billion nets have been distributed in Africa, resulting in a significant decline in malaria-related deaths on the continent between 2000 and 2015 [2–4]. However, the massive use of insecticides in public health, in addition to that in agriculture, causes concern regarding insecticide resistance [5–7] and the changing behaviour [8, 9] of the malaria vectors. For example, a study conducted in Papua New Guinea showed a shift in mosquito biting from night to earlier hours in the evening after a nationwide distribution of LLINs [10]. Similar changes in the behaviour of Anopheles funestus have been observed in Benin and Senegal after LLIN distribution achieved a high level of coverage [9, 10]. Furthermore, studies suggest that the scaling up of LLIN distribution and indoor residual spraying (IRS) has led to more outdoor biting by Anopheles gambiae sensu lato (s.l.), commonly considered endophagic mosquitoes [11–13]. A recent study in the Cascades region of Burkina Faso showed a high level of insecticide resistance [14], and more than 50% of the major vector, An. gambiae s.l., were collected biting outdoors [15]. These altered patterns of outdoor, early evening and morning biting by anophelines, combined with resistance to insecticides, appear to be driven by the mass distribution of LLINs and imply an inexorable loss of efficacy of these interventions [16, 17]. A recent study highlighted that an increase in early evening biting could increase transmission not only because people are unprotected by nets, but also because there is a higher chance of malaria vectors becoming infectious [18]. The development of new vector control tools, in addition to LLINs, is therefore necessary to protect people when they are not under a bed net. Topical repellents could play an important role in addressing this problem if they are effective and accepted by the population. A systematic review of repellent interventions and mathematical modeling has shown that user compliance is indeed one of the most decisive factors for the success of this intervention [19]. In sub-Saharan Africa, ointments are used primarily by mothers and children to moisturize their skin. In Burkina Faso, ointments are applied to 80% of children every evening, when mosquitoes start biting (Kadidia Ouedraogo et al., in prep). Maa Africa, a company based in Burkina Faso, has developed a mosquito repellent ointment, MAA, uniquely designed with local mothers, to be used daily within their families. The underlying idea is to leverage the existing habits of mothers to protect their families from infectious bites whenever they are not under a net. Affordability is a key criterion for the product's adoption and use. MAA has been industrially produced since June 2020 in Côte d'Ivoire and integrates a large share of ingredients sourced in West Africa. The ointment was officially launched in August 2020 in Burkina Faso and over 50,000 units were sold in the first four months. In March 2021, over 500 points of sale (mainly general stores, pharmacies and kiosks) distributed the product in the country.
If MAA proves that it is both effective and accepted by the population, it could play a key role in reducing the probability of children experiencing infectious bites during the evening and be positioned as a complementary intervention to LLINs. The aim of this study is to evaluate the effectiveness of MAA under both laboratory and field conditions, especially the median complete protection time (CPT) offered by the product. Results from these evaluations are important for validating how effective this new repellent is, since behavioural responses to repellents differ between wild and laboratory-reared mosquito populations [20].
Methods:
Study area: Laboratory tests were conducted in May 2019 in the insectary of the Centre National de Recherche et de Formation sur le Paludisme (CNRFP) in Ouagadougou, Burkina Faso. Field tests were carried out at Goden (12°25′N, 1°20′W), a site located 15 km northeast of Ouagadougou, the capital city of Burkina Faso (Fig. 1). Goden is a rural village with a Sudanian savanna climate and annual rainfall under 900 mm. The ~800 inhabitants mainly belong to the Mossi ethnic group and are mostly devoted to agriculture and to raising pigs, dogs, goats and chickens within their compounds. LLINs were distributed in 2016 to ~90% of the population. Goden is known for its high density of malaria vectors due to its proximity to the Massili River. The field study was carried out during the rainy season (August to November 2019), corresponding to high vector density and high malaria transmission. A preliminary assessment of mosquito density at the collection site was carried out using human landing catches (HLCs) before the tests started. Fig. 1 Study area.
Human volunteer preparation: Healthy adult male volunteers aged between 18 and 40 years were enrolled in this study. The volunteers were instructed not to use fragranced soaps, perfume, tobacco or alcohol 12 h before the start and throughout their participation. To establish the amount of repellent required for application, the surface area of the arm (for laboratory tests) or the leg (for field tests) of each volunteer was determined using the following formula: Area = (Cw + Ce) × Dwe, where Cw is the circumference of the wrist or ankle in cm, Ce is the circumference at the elbow (cubital fossa) or the knee in cm, and Dwe is the distance in cm between Ce and Cw [21]. The amount of ointment needed for each volunteer was determined from the area of their forearm or lower leg. The quantity of product left in bottles was weighed using a precision weighing balance (KERN & SOHN GmbH, Balingen, Germany) to determine the amount applied by each volunteer.
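The dosing arithmetic can be illustrated with a short sketch. This is not the authors' code: the function names and the example measurements are hypothetical, the area formula is taken exactly as written above, and the 2 mg per square cm target dose is the one used in the repellent evaluations described below.

```r
# Minimal sketch (not the authors' code): how much ointment to weigh out for a
# volunteer, using the area formula as written above and a 2 mg/cm^2 target dose.
# Function names and the example measurements are hypothetical.
treated_area_cm2 <- function(c_wrist, c_elbow, d_we) {
  (c_wrist + c_elbow) * d_we          # circumferences (cm) and wrist-to-elbow distance (cm)
}
ointment_needed_g <- function(area_cm2, dose_mg_per_cm2 = 2) {
  area_cm2 * dose_mg_per_cm2 / 1000   # convert mg to g
}

area <- treated_area_cm2(c_wrist = 17, c_elbow = 26, d_we = 27)  # hypothetical forearm
ointment_needed_g(area)               # ~2.3 g of ointment for this surface
```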
Repellents: MAA, a shea butter-based ointment containing 15% DEET (N,N-diethyl-3-methylbenzamide), was received from Maa Africa SAS. It was tested against an ethanolic solution of 20% DEET as positive control and 70% ethanol as negative control. DEET is the standard reference repellent.
Strains of mosquitoes: Four strains of mosquitoes were used in the laboratory tests, including Kisumu F57 and Bora bora F58, susceptible strains of An. gambiae and Aedes aegypti, respectively. In addition, local strains colonized in the laboratory from the rural area of Goden, Burkina Faso, were used, hereafter named An-Goden (An. gambiae local strain, F418) and Loc-Aedes (local Ae. aegypti, F318). These colonies were maintained under a 12:12 h (light:dark) photoperiod. During rearing, larvae were fed on fish food, while glucose was used for adults. The temperature and relative humidity in the rearing room were 25–28°C and 60–80%, respectively. The mosquitoes used in these experiments were 5- to 10-day-old nulliparous females starved of sugar solution for 12 h before the experiment.
Evaluation in the laboratory: The laboratory experiments were conducted following the WHO guidelines for the arm-in-cage test [21]. Cages were 45 × 45 × 45 cm screen enclosures. Two test cages were used, one for the repellent candidate and the other for the positive control. The test cages contained 200 females, aged 5 to 10 days, of one of the four mosquito strains: Kisumu, An-Goden, Bora bora and Loc-Aedes. The laboratory experiment was carried out at temperatures ranging between 25 and 28°C, with relative humidity between 60 and 80%. Overall, 2 mg of ointment was applied per square cm of the left forearm of each volunteer, a concentration estimated from the average of 5 laboratory volunteers asked to apply ointment to their left forearm as they would normally do in real life. In all subsequent repellent trials, volunteers were then supplied with a total volume that would achieve this concentration over the surface area of their forearms and/or legs (determined as described above). A steel spatula was used to apply the ointment to the forearm of each volunteer prior to each experiment, to avoid the ointment being absorbed by non-targeted areas of skin. The positive control consisted of 1 ml of 20% DEET solution applied to the right forearm of each volunteer. Negative control and MAA test arms were prepared by first washing the left forearm with odourless soap, drying, rinsing with 70% ethanol solution and then drying again. All volunteers wore latex gloves to protect their hands from mosquitoes. To assess the readiness of the mosquitoes to land, both cleaned forearms of each volunteer were exposed in the experimental cages for 30 s (or until 10 mosquito landings were counted). Then, for each volunteer, the right forearm was treated from wrist to elbow with 1 ml of the 20% DEET solution, whilst the left forearm was treated from wrist to elbow with MAA ointment. Thirty minutes after application of the repellents, the volunteer exposed the treated forearm in the test cage for 3 min. The procedure was repeated every 30 min until the first bite occurred, and the elapsed time to the first bite was recorded. The test was performed three times for each volunteer per mosquito species. Considering the difference in the relative periods of biting activity of each mosquito species, the tests using the Ae. aegypti strains were carried out between 09:00 and 18:00, whereas those for An. gambiae were conducted between 17:00 and 05:00 [21].
Field evaluation: The lower legs of volunteers were washed with neutral soap, rinsed with 70% ethanol solution and left to dry naturally. Once their legs were treated, volunteers were asked to avoid rubbing, touching or wetting the repellent-treated area. Two mg of MAA per square cm (2.4 ± 0.2 g per 1189 ± 79.2 cm²) and 2 ml of 20% DEET (2 ± 0.1 ml per 1189 ± 79.2 cm², as positive control) were applied to volunteers' lower legs, from knee to ankle. A total of 20 volunteers were recruited from Goden village and trained for nocturnal mosquito collection using HLC. Each volunteer was later randomly allocated to one of five groups of four volunteers (2 groups for MAA, 2 for the positive control, 1 for the negative control) according to the repellent received. Each night of collection, the experiment took place at five different households, located at least 20 m apart as per WHO guidelines [21], in order to avoid biases in attractiveness to the mosquitoes. Mosquito collection started 30 min after treatment. Volunteers acting as bait sat on chairs in pairs (one indoors and one outdoors) and actively collected the mosquitoes that landed on their treated lower leg using a mouth aspirator and a torch [22] for 45 min, followed by a 15-min break. Volunteers wore long-sleeved shirts buttoned at the wrist, long trousers, closed shoes and latex gloves, with a hat on their head, but with the treated lower leg exposed to mosquitoes by rolling the trousers up to the knee. During these experiments, mosquitoes were collected simultaneously indoors and outdoors between 19:00 and 06:00. To avoid biases introduced by individual attractiveness and skill [23, 24], volunteers at the same household rotated between indoors and outdoors hourly. In each household, two groups of two people rotated, collecting from 18:00 to 24:00 and from 00:00 to 06:00, following a Williams balanced Latin square design. Collected mosquitoes were transferred into plastic cups covered with a piece of untreated net, with a small hole at the bottom to allow mosquitoes to be easily aspirated into them. After collection, mosquitoes were brought to the entomological laboratory of CNRFP and morphologically identified using a stereo microscope and identification keys [25].
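To make the rotation scheme concrete, the sketch below builds a Williams-type balanced Latin square. It is illustrative only and not the authors' allocation code; the mapping of rows to volunteers and of columns to collection rounds is a hypothetical example of how such a balanced rotation can be generated.

```r
# Illustrative sketch (not the authors' code): a Williams balanced Latin square,
# in which each position follows every other position exactly once across rows.
# For n = 4, rows could stand for volunteers and columns for successive rounds.
williams_square <- function(n) {
  stopifnot(n %% 2 == 0)                 # a single square suffices for even n
  first <- integer(n); first[1] <- 1
  lo <- 2; hi <- n
  for (i in 2:n) {
    if (i %% 2 == 0) { first[i] <- lo; lo <- lo + 1 } else { first[i] <- hi; hi <- hi - 1 }
  }
  t(sapply(0:(n - 1), function(k) ((first + k - 1) %% n) + 1))
}
williams_square(4)   # each ordered pair of positions occurs once in adjacent columns
```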
Side effects: No side effects were observed or reported by any of the volunteers throughout the period of the tests, either in the laboratory or in the field.
Ethical clearance: Written informed consent was obtained from all volunteers and household owners recruited in this study. The study was approved by the institutional ethics committee of CNRFP under 2019/000008/MS/SG/CNRFP/CIB.
Data analysis: All data were collected on standard forms and entered twice into a database by different people. The databases were compared using Epi Info 3.5.3, and inconsistencies were verified using the printed forms and corrected. The performance of the repellents was measured by calculating the repulsive efficacy and the median complete protection time. A generalized linear mixed model (GLMM) was used to further analyse the effect of location (indoors vs. outdoors) on the performance of the treatments. Variation in the average number of bites received between treatments was also assessed. The median CPT is defined as the interval of time between the beginning of the collection/test and the first mosquito landing. To estimate the median CPT of each treatment, a Kaplan-Meier survival analysis was performed for each vector species and strain used in the laboratory experiments, and on the field data, using the survival functions of R software version 3.5.0 (2018-04-23). For the field test, however, the analysis was performed only on An. gambiae s.l., as it was the most abundant species collected (~96% of the total collection). The analysis consisted of assessing the median CPT and the repulsive efficacy. The repulsive efficacy was calculated as a percentage of repulsion (%R) according to the formula %R = ((C - T) / C) × 100, where C is the number of mosquitoes collected on the treated legs of each of the two control treatments separately, and T is the total number of mosquito bite attempts on the volunteers' legs treated with the test product [21].
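A minimal sketch of this analysis is given below. It is not the authors' script: the data frame and its column names (time to first landing in minutes, an event indicator and treatment) are hypothetical placeholders, and the worked %R call simply reuses counts reported in Table 2 to show the formula in action.

```r
# Minimal sketch (not the authors' script): median CPT via Kaplan-Meier and the
# percentage repulsion. Assumes a data frame `d` with hypothetical columns
# `time` (minutes to first landing), `event` (1 = landing observed) and `treatment`.
library(survival)

fit <- survfit(Surv(time, event) ~ treatment, data = d)
summary(fit)$table                                  # median CPT with 95% CI per treatment
survdiff(Surv(time, event) ~ treatment, data = d)   # log-rank comparison of treatments

# Percentage repulsion: %R = ((C - T) / C) * 100
percent_repulsion <- function(C, T) 100 * (C - T) / C
percent_repulsion(C = 2660, T = 480)                # illustrative use of Table 2 totals
```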
Results:
Laboratory tests: Overall, under laboratory conditions the relative repellency (median CPT) was higher for both MAA and 20% DEET against An. gambiae compared to Ae. aegypti (Table 1). MAA performed well in repelling the four mosquito strains used in this study.
The median CPTs were, respectively, 6.5 h for An. gambiae (Kisumu), 5.5 h for An. gambiae (Goden, local strain) and 4 h for Ae. aegypti (both the local and the susceptible strain). There was no significant difference between the two treatments for any of the experiments (Kisumu: χ² = 2.1, p = 0.14; Goden: χ² = 0.8, p = 0.36; Bora bora: χ² = 1.7, p = 0.19; Ae. aegypti local strain: χ² = 0.9, p = 0.35), indicating that MAA and 20% DEET have equal repellency against these strains. The Kaplan-Meier curves for MAA and 20% DEET for Kisumu, An-Goden, Bora bora and Loc-Aedes are shown in Fig. 2.
Table 1 Median complete protection times (CPT, min) with 95% confidence intervals (CI) against mosquito strains, according to treatments 20% DEET and MAA, under laboratory conditions
            | An. gambiae Kisumu | An. gambiae Goden | Ae. aegypti Bora bora | Ae. aegypti local
            | DEET | MAA         | DEET | MAA        | DEET | MAA            | DEET | MAA
Median CPT  | 390  | 390         | 300  | 330        | 270  | 240            | 240  | 240
Lower CI    | 368  | 334         | 272  | 216        | 252  | 239            | 212  | 225
Upper CI    | 412  | 446         | 328  | 444        | 288  | 241            | 268  | 255
Fig. 2 Kaplan-Meier plots for 20% DEET and MAA tested against the four strains on five volunteers.
Field test:
Mosquito species composition and biting behaviours: A total of 3,979 mosquitoes, stratified by treatment and species (Table 2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among anopheline species, 99.6% belonged to the An. gambiae complex, followed by An. funestus (0.1%) and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table 2). The hourly mosquito biting rate varied significantly between treatments (df = 2, χ² = 426.22, p < 0.0001). An average of 0.68 (95% CI: 0.51–0.91) mosquito bites was received per person per hour for MAA, compared to 1.01 (95% CI: 0.76–1.33) for 20% DEET and 8.98 (95% CI: 6.56–12.29) for 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors; df = 2, χ² = 1.703, p = 0.42). Overall, the outdoor:indoor biting ratio was 1.26 (95% CI: 1.25–1.27), showing that more bites took place outdoors than indoors (df = 1, χ² = 5.79, p = 0.016).
Table 2 Total number of common mosquitoes collected after treatment with 20% DEET, ethanol 70% and MAA
Mosquito species                     | DEET | Ethanol | MAA
Anopheles gambiae sensu lato (s.l.)  | 686  | 2,660   | 480
Anopheles funestus                   | 1    | 2       | 1
Other Anopheles                      | 1    | 9       | 3
Culicines                            | 7    | 106     | 23
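The biting-rate comparison above can be sketched as a mixed model along the lines described in the data analysis section. This is not the authors' model specification: the data frame, its column names and the random-effect structure are assumptions made for illustration only.

```r
# Minimal sketch (not the authors' model): hourly bite counts compared between
# treatments and locations with a Poisson GLMM. `bites` and its columns
# (n_bites per person-hour, treatment, location, volunteer, household) are hypothetical.
library(lme4)

m <- glmer(n_bites ~ treatment + location + (1 | volunteer) + (1 | household),
           family = poisson, data = bites)
summary(m)
exp(fixef(m))   # rate ratios, e.g. MAA or DEET relative to the ethanol control
```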
Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. 
for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Mosquito species composition and biting behaviours A total of 3,979 mosquitoes, stratified by treatment and species (Table2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among anopheline species, 99.6% belonged to An. gambiae complex, followed by An. funestus (0.1%), and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table2). The hourly mosquito biting rate varied significantly between treatments (df=2, 2=426.22, p<0.0001). An average of 0.68 (95% CI: 0.510.91) mosquito bites were received per person per hour for MAA compared to 1.01 (95% CI: 0.761.33) for 20% DEET, and 8.98 (95% CI: 6.5612.29) for the 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors, df=2, 2=1.703, p=0.42). Overall, the ratio outdoors:indoors biting was 1.26 (95% CI: 1.251.27) showing that more bites were taking place outdoors compared to indoors (df=1, 2=5.79, p=0.016).Table 2Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70%Mosquito speciesTreatmentDEETEthanolMAA Anopheles gambiae sensu lato (s.l.) 6862660480 Anopheles funestus 121Other Anopheles193Culicines710623 Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70% A total of 3,979 mosquitoes, stratified by treatment and species (Table2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among anopheline species, 99.6% belonged to An. gambiae complex, followed by An. funestus (0.1%), and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table2). The hourly mosquito biting rate varied significantly between treatments (df=2, 2=426.22, p<0.0001). An average of 0.68 (95% CI: 0.510.91) mosquito bites were received per person per hour for MAA compared to 1.01 (95% CI: 0.761.33) for 20% DEET, and 8.98 (95% CI: 6.5612.29) for the 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors, df=2, 2=1.703, p=0.42). Overall, the ratio outdoors:indoors biting was 1.26 (95% CI: 1.251.27) showing that more bites were taking place outdoors compared to indoors (df=1, 2=5.79, p=0.016).Table 2Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70%Mosquito speciesTreatmentDEETEthanolMAA Anopheles gambiae sensu lato (s.l.) 6862660480 Anopheles funestus 121Other Anopheles193Culicines710623 Total number of common mosquitoes collected after treatment of 20% DEET, MAA and ethanol 70% Repellency against mosquitoes Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (6h after application), the percentage of repellency varied from 100 to 90% for MAA and DEET. Between 00:00 to 03:00 (9h after application), the percentage was between 90 and 80% (Fig.3). After 03:00 (10h after application), this percentage was under 80% for 20% DEET, but MAA was over 80%. 
MAAgave a high percentage of repellency thoughout, however during the first 9h after applications no difference in the repellency was observed between MAA and the positive control. Fig. 3Repellency of 20% DEET and MAA indoor and outdoor collection Repellency of 20% DEET and MAA indoor and outdoor collection When data were stratified by location of mosquito biting, the trend was the same for indoors and outdoors. No difference was observed during the first 9h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors. Repellency against An. gambiae s.l. was stratified by time of collection. From 18:00 to 24:00 (6h after application), the percentage of repellency varied from 100 to 90% for MAA and DEET. Between 00:00 to 03:00 (9h after application), the percentage was between 90 and 80% (Fig.3). After 03:00 (10h after application), this percentage was under 80% for 20% DEET, but MAA was over 80%. MAAgave a high percentage of repellency thoughout, however during the first 9h after applications no difference in the repellency was observed between MAA and the positive control. Fig. 3Repellency of 20% DEET and MAA indoor and outdoor collection Repellency of 20% DEET and MAA indoor and outdoor collection When data were stratified by location of mosquito biting, the trend was the same for indoors and outdoors. No difference was observed during the first 9h between MAA and 20% DEET. These results show that MAA can protect both indoors and outdoors. Complete protection time The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. 
for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) The overall median CPTs of 20% DEET and MAA (Table3) were estimated at 480min (8h) against 120min (2h) for the negative control. For outdoor collections, the median CPTs were 480, 450 and 120min for 20% DEET, MAA and negative control respectively (Table4). For indoor collections, these estimates were 480, 480 and 60min, respectively, for 20% DEET, MAA and ethanol (Table4). Statistical analyses showed that there was no difference in the median CPT between 20% DEET and MAA (df=1, 2=0.2, p=0.7). However, there was a significant difference between median CPT as estimated for MAA and the negative control (df=2, 2=106, p<0.0001, Fig.4). Even when the collection was stratified by location this difference still occurred in both indoor (Fig.5A; df=2, 2=41.6, p<0.0001) and outdoor collections; df=2, 2=66.7, p<0.0001) Fig.5B). Table 3Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% Anopheles gambiae s.l. DEETEthanolMAAMedian CPT480120480Lower CI45491448Upper CI506149512Table 4The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Anopheles gambiae s.l Indoors Outdoors 20% DEETEthanolMAA20% DEETEthanolMAAMedian CPT48060480480120450Lower CI440<6044044795428Upper CI521NA521513145472Fig. 4Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collectionsFig. 5Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Estimated complete protection time (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAAand ethanol 70% The estimated complete protection times (mins) with 95% CI, against Anopheles gambiae s.l. for 20% DEET, MAA and ethanol 70%, indoors and outdoors Overall estimated probabilities of no mosquitoes landing for each treatment according to time of collections Estimated probabilities of no mosquitoes landing for each treatment according to the time at indoor (A) and outdoor locations (B) Laboratory tests: Overall, under laboratory conditions the relative repellency (median CPT) was higher for both MAA and 20% DEET against An. gambiae compare to Ae. aegypti (Table1). MAA performed well in repelling the four mosquito species used in this study. The median CPTs were, respectively, 6.5h for An. gambiae (Kisumu), 5.5h for An. gambiae (Goden, local strain) and 4h for Ae. aegypti for both the local and sensitive strain. There was no significant difference between the two treatments for each of the experiment (Kisumu: 2=2.1, p value=0.14; Goden: 2=0.8, p value=0.36; Bora bora: 2=1.7, p value=0.19; Ae. aegypti (local strain): 2=0.9, p value=0. 35) indicating that both MAA and 20% DEET have equal repellency for these strains. The Kaplan-Meier curves for MAA and 20% DEET, respectively, for Kisumu, An-Goden, Bora bora and Loc-Aedes are shown in Fig.2. Table 1 Median complete protection times (CPT) in minutes and their 95% confidence intervals (CI) against mosquito strains, according to treatments 20% DEET and MAA, under laboratory conditions An. gambiae Kisumu An. gambiae Goden Ae. aegypti Bora bora Ae. 
Laboratory tests: Overall, under laboratory conditions the relative repellency (median CPT) was higher for both MAA and 20% DEET against An. gambiae compared to Ae. aegypti (Table 1). MAA performed well in repelling the four mosquito strains used in this study. The median CPTs were, respectively, 6.5 h for An. gambiae (Kisumu), 5.5 h for An. gambiae (Goden, local strain) and 4 h for Ae. aegypti for both the local and the susceptible strain. There was no significant difference between the two treatments in any of the experiments (Kisumu: χ²=2.1, p=0.14; Goden: χ²=0.8, p=0.36; Bora Bora: χ²=1.7, p=0.19; Ae. aegypti local strain: χ²=0.9, p=0.35), indicating that MAA and 20% DEET have equal repellency against these strains. The Kaplan-Meier curves for MAA and 20% DEET for the Kisumu, Goden, Bora Bora and local Aedes strains are shown in Fig. 2 (Kaplan-Meier plots for 20% DEET and MAA tested against the four strains on five volunteers).

Table 1. Median complete protection times (CPT, min) and their 95% confidence intervals (CI) against mosquito strains for 20% DEET and MAA under laboratory conditions:
             An. gambiae Kisumu   An. gambiae Goden   Ae. aegypti Bora Bora   Ae. aegypti local
             DEET     MAA         DEET     MAA        DEET     MAA            DEET     MAA
Median CPT   390      390         300      330        270      240            240      240
Lower CI     368      334         272      216        252      239            212      225
Upper CI     412      446         328      444        288      241            268      255
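To make the per-strain comparisons above concrete, here is a small, hypothetical sketch using the Python lifelines package: it estimates a median CPT with its confidence interval for each treatment arm and compares the arms with a log-rank test, which is the kind of test behind chi-square statistics like those quoted for Kisumu, Goden and Bora Bora. The toy durations, event flags and variable names are invented; this is not the study's analysis code.

```python
# Hypothetical Kaplan-Meier / log-rank sketch (not the authors' analysis code).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import median_survival_times

# Toy data: minutes until the first mosquito landing per volunteer test;
# event=1 means a landing was observed, event=0 means censored at end of the test.
df = pd.DataFrame({
    "treatment": ["MAA"] * 5 + ["DEET20"] * 5,
    "minutes":   [450, 480, 510, 480, 420, 480, 450, 510, 480, 440],
    "event":     [1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
})

for name, grp in df.groupby("treatment"):
    kmf = KaplanMeierFitter(label=name)
    kmf.fit(grp["minutes"], event_observed=grp["event"])
    ci = median_survival_times(kmf.confidence_interval_)
    print(name, "median CPT:", kmf.median_survival_time_, "min; 95% CI:", ci.values.ravel())

# Log-rank comparison of the two treatments.
maa, deet = df[df.treatment == "MAA"], df[df.treatment == "DEET20"]
res = logrank_test(maa["minutes"], deet["minutes"], maa["event"], deet["event"])
print("log-rank chi2 =", round(res.test_statistic, 2), "p =", round(res.p_value, 3))
```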
Field test: Mosquito species composition and biting behaviours. A total of 3,979 mosquitoes, stratified by treatment and species (Table 2), were caught using HLC. Anophelines represented 98.5% of the total catch, with culicines (Aedes) making up the remaining 1.5%. Among the anopheline species, 99.6% belonged to the An. gambiae complex, followed by An. funestus (0.1%) and Anopheles pharoensis (0.3%). The frequency of mosquitoes landing on treated collectors, compared with control subjects, varied according to the repellent used (Table 2). The hourly mosquito biting rate varied significantly between treatments (df=2, χ²=426.22, p<0.0001). An average of 0.68 (95% CI: 0.51–0.91) mosquito bites was received per person per hour for MAA, compared to 1.01 (95% CI: 0.76–1.33) for 20% DEET and 8.98 (95% CI: 6.56–12.29) for 70% ethanol. In addition, there was no variation between treatments according to location (outdoors and indoors; df=2, χ²=1.703, p=0.42). Overall, the outdoors:indoors biting ratio was 1.26 (95% CI: 1.25–1.27), showing that more bites took place outdoors than indoors (df=1, χ²=5.79, p=0.016).

Table 2. Total number of common mosquitoes collected after treatment with 20% DEET, ethanol 70% and MAA:
Mosquito species                        DEET   Ethanol   MAA
Anopheles gambiae sensu lato (s.l.)     686    2660      480
Anopheles funestus                      1      2         1
Other Anopheles                         1      9         3
Culicines                               7      106       23
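As a worked illustration of the kind of comparisons behind the biting-rate figures above, the sketch below runs a chi-square test on the species-by-treatment counts of Table 2 with SciPy and computes bites per person per hour with an exact (Garwood) Poisson confidence interval. The person-hour denominator is an invented placeholder, since it is not given in this section, so the resulting rates are illustrative only and will not reproduce the published values.

```python
# Hypothetical sketch of the biting-rate and chi-square comparisons (not the study's code).
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Counts per treatment from Table 2 (rows: An. gambiae s.l., An. funestus,
# other Anopheles, culicines; columns: DEET, ethanol, MAA).
counts = np.array([
    [686, 2660, 480],
    [1,   2,    1],
    [1,   9,    3],
    [7,   106,  23],
])
chi2_stat, p, dof, _ = chi2_contingency(counts)
print(f"species x treatment: chi2={chi2_stat:.1f}, df={dof}, p={p:.3g}")

# Bites per person per hour with an exact (Garwood) Poisson 95% CI.
def rate_with_ci(bites: int, person_hours: float):
    lo = chi2.ppf(0.025, 2 * bites) / 2 / person_hours if bites > 0 else 0.0
    hi = chi2.ppf(0.975, 2 * (bites + 1)) / 2 / person_hours
    return bites / person_hours, lo, hi

# Column totals of Table 2; person_hours = 720 is an assumed denominator.
for name, bites in {"20% DEET": 695, "70% ethanol": 2777, "MAA": 507}.items():
    rate, lo, hi = rate_with_ci(bites, person_hours=720.0)
    print(f"{name}: {rate:.2f} bites/person/hour (95% CI {lo:.2f}-{hi:.2f})")
```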
Discussion: The results of this study demonstrated that MAA, a shea butter-based ointment with 15% DEET, provides high protection against mosquitoes in Goden, a rural area of Burkina Faso. Both the field and laboratory tests suggested that MAA has a repellent effect equal to that of the 20% DEET ethanolic solution during the period of collection. Similar results were also found both indoors and outdoors. The percentage of repellency when using MAA varied between 100 and 80% over the major malaria vector biting period, which occurs between 18:00 and 06:00. The median CPTs were also similar, estimated at around 480 min. Both MAA and the 20% DEET ethanolic solution were found to provide up to 90% repellency during the first 6 h after their application. Furthermore, the results suggested that with MAA the average number of hourly bites received was significantly lower (less than 1 bite per hour) compared to that of both 20% DEET and the negative control. However, one of the limitations is that biting activity on control arms was not checked at the end of the laboratory experiments to validate the consistency of mosquito avidity, although this did not affect the overall result. Overall, it can be argued that MAA could provide protection to people before they go to bed. Previous studies in the same locality comparing the repellency of three different repellents found that DEET, IR3535 and KBR 3023 were effective against An. gambiae s.l. and other Afrotropical vector mosquitoes [26]. In that study the authors showed that protection from KBR 3023, DEET and IR3535 was still high against anophelines for up to 10 h post-exposure. In contrast, results from the current study indicated that the relative repellency was 100% for ~8 h. Results were similar to those from a recent study in Ethiopia comparing DEET (N,N-diethyl-3-methylbenzamide), MyggA (p-menthane diol) and other laboratory products (20% neem oil and 20% chinaberry oil), where the mean CPT was 8 h for DEET and an estimated 6 h for MyggA [27]. Eight hours of repellency may suffice to protect against earlier vector biting, both indoors and outdoors, before residents take protection from insecticide-treated nets deployed indoors. To date, 11 countries across the world are classified as having a high burden of malaria [2]. In these countries malaria vector control is still based on the use of insecticides, either in the form of indoor spraying or through the large-scale distribution of LLINs. These strategies can effectively reduce the number of malaria cases [28]; however, the major challenges are the resistance of malaria vectors to different classes of insecticides and the shifts in their feeding and resting behaviours, with a tendency to bite and rest outdoors. For example, a study in the Cascades region of Burkina Faso indicated that, in addition to insecticide resistance, more than 50% of malaria vector biting was taking place outdoors. Therefore, new and supplementary methods are urgently needed to complement these tools in the perspective of malaria elimination [29].
In accordance with the spirit of locally adapted, integrated vector and disease control [30], repellents can usefully complement existing control strategies and provide an additional tool in the management of insecticide resistance. In the context of widespread vector resistance to insecticides and the tendency of mosquitoes to bite outside houses, there is a need to add MAA ointment to vector control tools in sub-Saharan, malaria-burdened countries. The originality of MAA comes from its formulation based on a local butter extensively used in rural areas of West Africa, one which contributes to women's economic income. Promoting the use of such local, endogenous strategies can sustain malaria control and also improve the economic situation of African women. Additionally, it has been shown that shea butter is a source of anti-inflammatory and anti-tumour-promoting compounds [31]. Another interesting compound in shea butter is cinnamic acid, which is known for its antibacterial, antifungal and antiviral properties [32]. Shea butter both moisturizes and heals the skin, and clinical studies have shown it to be safe for the skin [33]. MAA ointment will thus not only protect against mosquito-borne diseases but also against other micro-organisms. Besides the level of protection offered by repellents, daily compliance and appropriate use seem to be major obstacles to achieving the potential impact on malaria [34]. An efficacy study carried out in Tanzania has shown that volunteers preferred MAA ointment to a more classical 20% DEET solution [35]. Sales by Maa Africa SAS of 50,000 units in local stores in Burkina Faso between August and November 2021 further illustrate its desirability. However, more data are needed to understand who is likely to use the product and whether its usage is appropriate in terms of frequency and application in order to have an impact on mosquito-borne diseases, such as malaria. Conclusions: MAA, a novel ointment formulated with shea butter widely used in West Africa to moisturize the skin of children, has shown high repellency against laboratory-reared and wild malaria vectors. In the context of widespread vector resistance to insecticide and growing tendency of mosquitoes to bite outside houses, there is a need to add MAA ointment to the vector control tools used in sub-Saharan countries with high malaria burden.
Background: Malaria vector control relies upon the use of insecticide-treated nets and indoor residual spraying. However, as the emergence of insecticide resistance in malaria vectors grows, the effectiveness of these measures could be limited. Alternative tools are needed. In this context, repellents can play an important role against exophagic and exophilic mosquitoes. This study evaluated the efficacy of MAÏA®, a novel repellent ointment, in laboratory and field conditions in Burkina Faso. Methods: For laboratory and field assessment, 20 volunteers were enrolled and trained for nocturnal collection of mosquitoes using human landing catches (HLC). In the laboratory tests, 2 mg/sq cm of treatment (either MAÏA® or 20 % DEET) were used to assess median complete protection time (CPT) against two species: Anopheles gambiae and Aedes aegypti, following WHO guidelines. For both species, two strains consisting of susceptible and local strains were used. The susceptible strains were Kisumu and Bora Bora for An. gambiae and Ae. aegypti, respectively. For the field test, the median CPT of MAÏA® was compared to that of a negative control (70 % ethanol) and a positive control (20 % DEET) after carrying out HLCs in rural Burkina Faso in both indoor and outdoor settings. Results: Laboratory tests showed median Kaplan-Meier CPT of 6 h 30 min for An. gambiae (Kisumu), 5 h 30 min for An. gambiae (Goden, local strain), and 4 h for Ae. aegypti for both the local and sensitive strain. These laboratory results suggest that MAÏA® is a good repellent against the three mosquito species. During these field tests, a total of 3979 mosquitoes were caught. In this population, anophelines represented 98.5 %, with culicines (Aedes) making up the remaining 1.5 %. Among anopheline mosquitoes, 95 % belonged to the An. gambiae complex, followed by Anopheles funestus and Anopheles pharoensis. The median CPTs of 20 % DEET and MAÏA® were similar (8 h) and much longer than that of the negative control (2 h). Conclusions: Results from the present studies showed that MAÏA® offers high protection against anophelines biting indoors and outdoors and could play an important role in malaria prevention in Africa.
Background: Malaria is one of the deadliest diseases in many low- and middle-income countries, affecting mainly children and pregnant women in sub-Saharan Africa [1]. Long-lasting insecticide-treated nets (LLINs) have been regarded as the most effective method for controlling the mosquitoes transmitting malaria parasites. Since 2000, about one billion nets have been distributed in Africa, resulting in a significant decline in malaria-related deaths on the continent between 2000 and 2015 [2–4]. However, the massive use of insecticides in public health, in addition to that in agriculture, causes concern regarding insecticide resistance [5–7] and changing behaviour [8, 9] of the malaria vectors. For example, a study conducted in Papua New Guinea showed a shift in mosquito biting from night to earlier hours in the evening after a nationwide distribution of LLINs [10]. Similar changes in the behaviour of Anopheles funestus have been observed in Benin and Senegal after LLIN distribution achieved a high level of coverage [9, 10]. Furthermore, studies suggest that the scaling up of LLIN distribution and indoor residual spraying (IRS) has led to more outdoor biting by Anopheles gambiae sensu lato (s.l.), commonly considered endophagic mosquitoes [11–13]. A recent study in the Cascades region of Burkina Faso showed a high level of insecticide resistance [14], where more than 50% of the major vector, An. gambiae s.l., were collected biting outdoors [15]. These altered patterns of outdoor, early evening and morning biting by anophelines, combined with resistance to insecticides, appear to be caused by the mass distribution of LLINs and imply the inexorable loss of efficacy of these interventions [16, 17]. A recent study highlighted that an increase in early evening biting could increase transmission not only because people are unprotected by nets, but also because there is a higher chance of malaria vectors becoming infectious [18]. The development of new vector control tools, in addition to LLINs, is therefore necessary to protect people when they are not under a bed net. Topical repellents could play an important role in addressing this problem if they are effective and accepted by the population. A systematic review of repellent interventions and mathematical modeling has shown that user compliance is indeed one of the most decisive factors for the success of this intervention [19]. In sub-Saharan Africa, ointments are used primarily by mothers and children to moisturize their skin. In Burkina Faso, ointments are applied to 80% of children every evening, when mosquitoes start biting (Kadidia Ouedraogo et al., in prep). Maa Africa, a company based in Burkina Faso, has developed a mosquito repellent ointment, MAA, uniquely designed with local mothers, to be used daily within their families. The underlying idea is to leverage the existing habits of mothers to protect their families from infectious bites whenever they are not under a net. Affordability is a key criterion for the product's adoption and use. MAA has been industrially produced since June 2020 in Côte d'Ivoire and integrates a large share of ingredients sourced in West Africa. The ointment was officially launched in August 2020 in Burkina Faso and over 50,000 units were sold in the first four months. In March 2021, over 500 points of sale (mainly general stores, pharmacies and kiosks) distributed the product in the country.
If MAA proves that it is both effective and accepted by the population, it could play a key role in reducing the probability of children experiencing infectious bites during the evening and be positioned as a complementary intervention to LLINs. The aim of this study is to evaluate the effectiveness of MAA in both laboratory and field conditions, especially the median complete protection time (CPT) offered by the product. Results from these evaluations are important for validating how effective this new repellent is; behavioral responses to repellent differ between wild mosquito and laboratory-reared mosquito populations [20]. Conclusions: MAA, a novel ointment formulated with shea butter widely used in West Africa to moisturize the skin of children, has shown high repellency against laboratory-reared and wild malaria vectors. In the context of widespread vector resistance to insecticide and growing tendency of mosquitoes to bite outside houses, there is a need to add MAA ointment to the vector control tools used in sub-Saharan countries with high malaria burden.
Background: Malaria vector control relies upon the use of insecticide-treated nets and indoor residual spraying. However, as the emergence of insecticide resistance in malaria vectors grows, the effectiveness of these measures could be limited. Alternative tools are needed. In this context, repellents can play an important role against exophagic and exophilic mosquitoes. This study evaluated the efficacy of MAÏA®, a novel repellent ointment, in laboratory and field conditions in Burkina Faso. Methods: For laboratory and field assessment, 20 volunteers were enrolled and trained for nocturnal collection of mosquitoes using human landing catches (HLC). In the laboratory tests, 2 mg/sq cm of treatment (either MAÏA® or 20 % DEET) were used to assess median complete protection time (CPT) against two species: Anopheles gambiae and Aedes aegypti, following WHO guidelines. For both species, two strains consisting of susceptible and local strains were used. The susceptible strains were Kisumu and Bora Bora for An. gambiae and Ae. aegypti, respectively. For the field test, the median CPT of MAÏA® was compared to that of a negative control (70 % ethanol) and a positive control (20 % DEET) after carrying out HLCs in rural Burkina Faso in both indoor and outdoor settings. Results: Laboratory tests showed median Kaplan-Meier CPT of 6 h 30 min for An. gambiae (Kisumu), 5 h 30 min for An. gambiae (Goden, local strain), and 4 h for Ae. aegypti for both the local and sensitive strain. These laboratory results suggest that MAÏA® is a good repellent against the three mosquito species. During these field tests, a total of 3979 mosquitoes were caught. In this population, anophelines represented 98.5 %, with culicines (Aedes) making up the remaining 1.5 %. Among anopheline mosquitoes, 95 % belonged to the An. gambiae complex, followed by Anopheles funestus and Anopheles pharoensis. The median CPTs of 20 % DEET and MAÏA® were similar (8 h) and much longer than that of the negative control (2 h). Conclusions: Results from the present studies showed that MAÏA® offers high protection against anophelines biting indoors and outdoors and could play an important role in malaria prevention in Africa.
14,461
447
[ 747, 3692, 202, 182, 60, 148, 458, 420, 26, 38, 293, 327, 1751, 276, 189, 402 ]
19
[ "maa", "deet", "20", "20 deet", "mosquitoes", "gambiae", "20 deet maa", "deet maa", "00", "mosquito" ]
[ "mosquitoes repellency gambiae", "malaria vector control", "impact malaria", "biting anopheles gambiae", "insecticide tendency mosquitoes" ]
null
[CONTENT] Malaria | Mosquito | Anopheles gambiae | Aedes aegypti | Repellent | MAA | Burkina Faso [SUMMARY]
null
[CONTENT] Malaria | Mosquito | Anopheles gambiae | Aedes aegypti | Repellent | MAA | Burkina Faso [SUMMARY]
[CONTENT] Malaria | Mosquito | Anopheles gambiae | Aedes aegypti | Repellent | MAA | Burkina Faso [SUMMARY]
[CONTENT] Malaria | Mosquito | Anopheles gambiae | Aedes aegypti | Repellent | MAA | Burkina Faso [SUMMARY]
[CONTENT] Malaria | Mosquito | Anopheles gambiae | Aedes aegypti | Repellent | MAA | Burkina Faso [SUMMARY]
[CONTENT] Adult | Aedes | Animals | Anopheles | Burkina Faso | DEET | Female | Humans | Insect Repellents | Malaria | Male | Ointments | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aedes | Animals | Anopheles | Burkina Faso | DEET | Female | Humans | Insect Repellents | Malaria | Male | Ointments | Young Adult [SUMMARY]
[CONTENT] Adult | Aedes | Animals | Anopheles | Burkina Faso | DEET | Female | Humans | Insect Repellents | Malaria | Male | Ointments | Young Adult [SUMMARY]
[CONTENT] Adult | Aedes | Animals | Anopheles | Burkina Faso | DEET | Female | Humans | Insect Repellents | Malaria | Male | Ointments | Young Adult [SUMMARY]
[CONTENT] Adult | Aedes | Animals | Anopheles | Burkina Faso | DEET | Female | Humans | Insect Repellents | Malaria | Male | Ointments | Young Adult [SUMMARY]
[CONTENT] mosquitoes repellency gambiae | malaria vector control | impact malaria | biting anopheles gambiae | insecticide tendency mosquitoes [SUMMARY]
null
[CONTENT] mosquitoes repellency gambiae | malaria vector control | impact malaria | biting anopheles gambiae | insecticide tendency mosquitoes [SUMMARY]
[CONTENT] mosquitoes repellency gambiae | malaria vector control | impact malaria | biting anopheles gambiae | insecticide tendency mosquitoes [SUMMARY]
[CONTENT] mosquitoes repellency gambiae | malaria vector control | impact malaria | biting anopheles gambiae | insecticide tendency mosquitoes [SUMMARY]
[CONTENT] mosquitoes repellency gambiae | malaria vector control | impact malaria | biting anopheles gambiae | insecticide tendency mosquitoes [SUMMARY]
[CONTENT] maa | deet | 20 | 20 deet | mosquitoes | gambiae | 20 deet maa | deet maa | 00 | mosquito [SUMMARY]
null
[CONTENT] maa | deet | 20 | 20 deet | mosquitoes | gambiae | 20 deet maa | deet maa | 00 | mosquito [SUMMARY]
[CONTENT] maa | deet | 20 | 20 deet | mosquitoes | gambiae | 20 deet maa | deet maa | 00 | mosquito [SUMMARY]
[CONTENT] maa | deet | 20 | 20 deet | mosquitoes | gambiae | 20 deet maa | deet maa | 00 | mosquito [SUMMARY]
[CONTENT] maa | deet | 20 | 20 deet | mosquitoes | gambiae | 20 deet maa | deet maa | 00 | mosquito [SUMMARY]
[CONTENT] evening | llins | effective | distribution | children | malaria | africa | biting | mothers | infectious [SUMMARY]
null
[CONTENT] deet | maa | 20 deet | 20 | deet maa | 20 deet maa | anopheles | 95 | ci | 95 ci [SUMMARY]
[CONTENT] malaria | vector | ointment | high | butter widely west | malaria vectors context widespread | malaria vectors context | butter widely west africa | novel ointment formulated shea | west africa moisturize [SUMMARY]
[CONTENT] maa | deet | 20 | 20 deet | volunteers | 00 | mosquitoes | deet maa | 20 deet maa | gambiae [SUMMARY]
[CONTENT] maa | deet | 20 | 20 deet | volunteers | 00 | mosquitoes | deet maa | 20 deet maa | gambiae [SUMMARY]
[CONTENT] Malaria ||| ||| ||| ||| Burkina Faso [SUMMARY]
null
[CONTENT] 6 | 30 | 5 | 30 | Goden | 4 ||| MAÏA | three ||| 3979 ||| 98.5 % | 1.5 % ||| 95 % ||| 20 % | MAÏA | 2 [SUMMARY]
[CONTENT] MAÏA | Africa [SUMMARY]
[CONTENT] Malaria ||| ||| ||| ||| Burkina Faso ||| 20 | HLC ||| 2 | MAIA | 20 % | two | Aedes | WHO ||| two ||| Kisumu | Bora Bora ||| 70 % | 20 % | Burkina Faso ||| ||| 6 | 30 | 5 | 30 | Goden | 4 ||| MAÏA | three ||| 3979 ||| 98.5 % | 1.5 % ||| 95 % ||| 20 % | MAÏA | 2 ||| MAÏA | Africa [SUMMARY]
[CONTENT] Malaria ||| ||| ||| ||| Burkina Faso ||| 20 | HLC ||| 2 | MAIA | 20 % | two | Aedes | WHO ||| two ||| Kisumu | Bora Bora ||| 70 % | 20 % | Burkina Faso ||| ||| 6 | 30 | 5 | 30 | Goden | 4 ||| MAÏA | three ||| 3979 ||| 98.5 % | 1.5 % ||| 95 % ||| 20 % | MAÏA | 2 ||| MAÏA | Africa [SUMMARY]
The role of histological subtype in hormone receptor positive metastatic breast cancer: similar survival but different therapeutic approaches.
27121067
This study describes the differences between the two largest histological breast cancer subtypes (invasive ductal carcinoma (IDC) and invasive (mixed) lobular carcinoma (ILC)) with respect to patient and tumor characteristics, treatment-choices and outcome in metastatic breast cancer.
INTRODUCTION
We included 437 patients with hormone receptor-positive IDC and 131 patients with hormone receptor-positive ILC, all diagnosed with metastatic breast cancer between 2007-2009, irrespective of date of the primary diagnosis. Patient and tumor characteristics and data on treatment and outcome were collected. Survival curves were obtained using the Kaplan-Meier method.
MATERIALS AND METHODS
Patients with ILC were older at diagnosis of primary breast cancer and had more often initial bone metastasis (46.5% versus 34.8%, P = 0.01) and less often multiple metastatic sites compared to IDC (23.7% versus 30.9%, P = 0.11). Six months after diagnosis of metastatic breast cancer, 28.1% of patients with ILC and 39.8% of patients with IDC had received chemotherapy with a longer median time to first chemotherapy for those with ILC (P = 0.001). After six months 84.8% of patients with ILC had received endocrine therapy versus 72.5% of patients with IDC (P = 0.0001). Median overall survival was 29 months for ILC and 25 months for IDC (P = 0.53).
RESULTS
Treatment strategies of hormone receptor-positive metastatic breast cancer were remarkably different for patients with ILC and IDC. Further research is required to understand tumor behavior and treatment-choices in real-life.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Antineoplastic Agents", "Breast Neoplasms", "Carcinoma, Ductal, Breast", "Carcinoma, Lobular", "Disease-Free Survival", "Female", "Follow-Up Studies", "Humans", "Kaplan-Meier Estimate", "Middle Aged", "Receptor, ErbB-2", "Receptors, Estrogen", "Receptors, Progesterone", "Treatment Outcome" ]
5045405
INTRODUCTION
The two most frequent histological subtypes of breast cancer are invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC), with IDC comprising 75–80% and ILC 5–15% of all breast cancer cases. ILC is being associated with larger tumor size at presentation, more bilateral and multifocal involvement and with a different pattern of metastatic spread compared with IDC [1, 2]. Furthermore, ILC is more often HR-positive, HER2-negative, with lower S-phase fraction and less often positive for the tumor suppressor gene p53, compared with IDC [1]. Also, treatment response is known to be different. In a combined analysis on a number of retrospective series, the pathological complete response rate of neo-adjuvant chemotherapy was significantly lower in ILC (1.7%) than in IDC (11.6%) [3]. In the adjuvant setting, retrospective data suggest a higher efficacy of endocrine therapy for ILC than IDC [4]. More recently, a large retrospective study strongly suggested that HR-positive breast cancer patients with ILC do not seem to benefit from adjuvant chemotherapy in addition to endocrine therapy [5]. Whether these differences have a prognostic impact is controversial. Some studies found no effect of histology on survival [1, 6, 7], others found a better prognosis for patients with ILC compared to those with IDC [8–10] and some found a change in prognosis over time with a better prognosis for ILC during the first years of follow-up and a worse prognosis during later years [2, 11]. So far, studies on histological subtypes consider early breast cancer and not metastatic breast cancer. The aim of this study was to describe the differences between IDC and ILC with respect to patient and tumor characteristics, treatment-choices and outcome in metastatic breast cancer. In order to account for the effect of the hormone receptor (HR) on outcome and treatment-decision making, we only included patients with HR-positive breast cancer.
MATERIALS AND METHODS
Patient selection
We identified all patients diagnosed with metastatic breast cancer between 2007–2009 in eight hospitals in the South-East of the Netherlands. All patients with metastatic breast cancer were selected irrespective of the date of primary breast cancer diagnosis (also including patients with de novo metastatic breast cancer; 18.5% of patients with IDC and 19.9% of patients with ILC), with the exception of patients with a diagnosis of primary breast cancer before 1990, due to limitations in the availability of data on the primary tumor and initial treatment. Histology was classified according to the International Classification of Diseases for Oncology (ICD-O) [23]. ILC and mixed histology were defined as codes 8520 and 8522. IDC was defined as code 8500. From the total of 815 metastatic breast cancer patients, we excluded 76 patients with either unknown histology or histological subtypes other than IDC or ILC. Of the remaining 739 metastatic breast cancer patients, we excluded 171 patients with HR-negative tumors (27% of patients with IDC and 8% of patients with ILC) in order to rule out the impact of HR status. In total, our study population consisted of 568 patients divided in two groups based on histology; one group of 437 patients with IDC and the other group of 131 patients with ILC.
Data collection
Information was collected on patient and tumor characteristics, treatment and outcome. Tumors were characterized by the sixth edition of the TNM classification of malignant tumors [24] and Scarff Bloom Richardson (SBR) histological grading [25]. Estrogen receptor (ER) and progesterone receptor (PR) positivity was defined as positive nuclear staining of ≥ 10%. HER2 positivity was defined as an immunohistochemistry score of 3+ or 2+ with positive FISH. In case of missing HER2 status, a dedicated pathologist centrally reviewed the missing data when material was available. Initial sites of metastasis were categorized as: bone, visceral (including lung, liver, pleural, peritoneal, pericardial and lymphangitic carcinomatosis), brain (including leptomeningeal and CNS), skin and lymph nodes, and multiple metastases (more than one of the metastatic sites).
Statistical analysis
Baseline characteristics between the two histological groups were compared using chi-square tests for categorical variables and Wilcoxon rank sum tests for continuous variables. Metastatic-free interval was defined as time between date of primary diagnosis and date of first distant metastasis. Overall survival after diagnosis of metastatic breast cancer was defined as time between date of first distant metastasis and date of death.
Survival curves and time to first palliative systemic therapy (either chemotherapy or endocrine therapy) were estimated using the Kaplan-Meier method and compared using log-rank tests. All patients still alive were censored at the date of last follow-up of each individual patient. Patients who died without palliative therapy were censored at the date of death in the analysis of time to first palliative therapy. To explore the association of palliative systemic therapy with the survival of patients with metastatic breast cancer for both histological subtypes a Cox proportional hazards model was performed with palliative chemotherapy and endocrine therapy as a time-dependent covariate, since the administration of treatment can change over time and is dependent on the time available for each patient to receive the treatment. We did not explore the association between palliative targeted therapy, such as trastuzumab and bevacizumab, and survival since the number of patients with IDC and ILC receiving targeted therapy was very low. Since all these patients received targeted therapy with chemotherapy, these patients were included in the analysis on initial chemotherapy. In addition, the landmark method was used to estimate survival after a specific time-point, the so-called residual survival [26]. As we were interested to learn about the obtained survival in relation to the initial palliative treatment choices, we chose six months after diagnosis of metastatic breast cancer as landmark. Consequently, patients who already died within 6 months were excluded for the residual survival curves. All analyses were performed using SAS version 9.2. All reported P-values are two-sided and P-value ≤ 0.05 was considered statistically significant.
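To make the two survival analyses described above more concrete, the sketch below shows one way they could be set up in Python with the lifelines package: a Cox model with treatment as a time-dependent covariate (long format with start/stop intervals) and a 6-month landmark Kaplan-Meier estimate of residual survival. The data frames and column names are invented placeholders, and the study itself used SAS 9.2, so this is only an illustrative translation, not the original code.

```python
# Hypothetical illustration of a time-dependent Cox model and a 6-month landmark
# analysis (the study used SAS 9.2; this lifelines sketch is not the original code).
import pandas as pd
from lifelines import CoxTimeVaryingFitter, KaplanMeierFitter

# Long-format data: one row per interval during which covariate values are constant.
# 'chemo' switches from 0 to 1 in the month palliative chemotherapy starts.
long_df = pd.DataFrame({
    "id":    [1, 1, 2, 3, 3, 4, 5, 5, 6],
    "start": [0, 6, 0, 0, 8, 0, 0, 3, 0],
    "stop":  [6, 20, 15, 8, 28, 26, 3, 12, 36],
    "chemo": [0, 1, 0, 0, 1, 0, 0, 1, 0],
    "event": [0, 1, 1, 0, 0, 1, 0, 1, 0],   # death indicator at the end of each interval
})
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for (time-dependent) chemotherapy exposure

# Landmark (residual survival) analysis: keep only patients still alive at 6 months
# and measure survival from that landmark onward.
surv = pd.DataFrame({
    "months": [5, 12, 20, 30, 44],
    "death":  [1, 1, 0, 1, 0],
    "chemo_first_6m": [1, 1, 0, 0, 0],
})
landmark = surv[surv["months"] > 6].assign(residual=lambda d: d["months"] - 6)
for flag, grp in landmark.groupby("chemo_first_6m"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["residual"], event_observed=grp["death"])
    print(f"chemo within 6 months = {flag}: median residual survival =",
          kmf.median_survival_time_, "months")
```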
null
null
DISCUSSION
[ "INTRODUCTION", "RESULTS", "Patient characteristics", "Palliative systemic treatment", "Outcome", "Residual survival: six months after diagnosis", "DISCUSSION" ]
[ "The two most frequent histological subtypes of breast cancer are invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC), with IDC comprising 75–80% and ILC 5–15% of all breast cancer cases. ILC is being associated with larger tumor size at presentation, more bilateral and multifocal involvement and with a different pattern of metastatic spread compared with IDC [1, 2]. Furthermore, ILC is more often HR-positive, HER2-negative, with lower S-phase fraction and less often positive for the tumor suppressor gene p53, compared with IDC [1]. Also, treatment response is known to be different. In a combined analysis on a number of retrospective series, the pathological complete response rate of neo-adjuvant chemotherapy was significantly lower in ILC (1.7%) than in IDC (11.6%) [3]. In the adjuvant setting, retrospective data suggest a higher efficacy of endocrine therapy for ILC than IDC [4]. More recently, a large retrospective study strongly suggested that HR-positive breast cancer patients with ILC do not seem to benefit from adjuvant chemotherapy in addition to endocrine therapy [5].\nWhether these differences have a prognostic impact is controversial. Some studies found no effect of histology on survival [1, 6, 7], others found a better prognosis for patients with ILC compared to those with IDC [8–10] and some found a change in prognosis over time with a better prognosis for ILC during the first years of follow-up and a worse prognosis during later years [2, 11].\nSo far, studies on histological subtypes consider early breast cancer and not metastatic breast cancer. The aim of this study was to describe the differences between IDC and ILC with respect to patient and tumor characteristics, treatment-choices and outcome in metastatic breast cancer. In order to account for the effect of the hormone receptor (HR) on outcome and treatment-decision making, we only included patients with HR-positive breast cancer.", " Patient characteristics Of the 568 patients with HR-positive metastatic breast cancer, 23% had (mixed type) ILC (131 patients), hereafter referred to as ILC and 77% had IDC (437 patients) (Table 1).\nAbbreviations; HR+ hormone receptor positive, N number, SBR Scarff Bloom Richardson, MFI metastatic-free interval * ≥ 1 of the aforementioned initial sites of metastasis.\nMetastatic breast cancer patients with ILC were older at the time of primary breast cancer diagnosis, compared with patients with IDC (median age at diagnosis 62 years for ILC versus 58 years for IDC, P = 0.03). At initial presentation, patients with HR-positive ILC had larger tumors (T2-3, 62.5% versus 49.7%, P = 0.002) and slightly more node-positive disease (66% versus 56.9%, P = 0.10), but with less often a HER2 positive status (7.6% versus 15.3%, P = 0.02) and lower grade (not significant) compared with patients with HR-positive IDC. No differences were seen in use of adjuvant chemotherapy or endocrine therapy between the histological subtypes.\nBone was the most common initial site of distant metastasis in both HR-positive histological subtypes, although bone metastasis as initial site was more frequently observed for patients with ILC compared to those with IDC (46.5% versus 34.8%, P = 0.01). 
Fewer patients with ILC were diagnosed with multiple sites of distant metastasis compared to those with IDC (23.7% versus 30.9%, P = 0.11).\nOf the 568 patients with HR-positive metastatic breast cancer, 23% had (mixed type) ILC (131 patients), hereafter referred to as ILC and 77% had IDC (437 patients) (Table 1).\nAbbreviations; HR+ hormone receptor positive, N number, SBR Scarff Bloom Richardson, MFI metastatic-free interval * ≥ 1 of the aforementioned initial sites of metastasis.\nMetastatic breast cancer patients with ILC were older at the time of primary breast cancer diagnosis, compared with patients with IDC (median age at diagnosis 62 years for ILC versus 58 years for IDC, P = 0.03). At initial presentation, patients with HR-positive ILC had larger tumors (T2-3, 62.5% versus 49.7%, P = 0.002) and slightly more node-positive disease (66% versus 56.9%, P = 0.10), but with less often a HER2 positive status (7.6% versus 15.3%, P = 0.02) and lower grade (not significant) compared with patients with HR-positive IDC. No differences were seen in use of adjuvant chemotherapy or endocrine therapy between the histological subtypes.\nBone was the most common initial site of distant metastasis in both HR-positive histological subtypes, although bone metastasis as initial site was more frequently observed for patients with ILC compared to those with IDC (46.5% versus 34.8%, P = 0.01). Fewer patients with ILC were diagnosed with multiple sites of distant metastasis compared to those with IDC (23.7% versus 30.9%, P = 0.11).\n Palliative systemic treatment Median follow-up after diagnosis of metastatic disease was 37.1 months (range 5.2–54.6) with 239 patients (42%) alive at the end of the follow-up period.\nTime between diagnosis of HR-positive metastatic breast cancer and start of palliative chemotherapy was significantly longer for patients with ILC (median not yet reached) compared with IDC (median 16.9 months, 95% Confidence Interval (CI) 9.6-22.3, P = 0.001) (Figure 1A). Six months after diagnosis of HR-positive metastatic breast cancer less patients with ILC had received palliative chemotherapy compared with IDC (28.1% versus 39.8% respectively).\nTime to first palliative chemotherapy (A) and endocrine therapy (B) by histological subtype.\nTime between diagnosis of HR-positive metastatic breast cancer and start of palliative endocrine therapy was significantly shorter for patients with ILC (median, 0.6 months, 95% CI 0.5–0.8) compared with patients with IDC (median, 1.1 months, 95% CI 1.0–1.5, P = 0.0001) (Figure 1B). Six months after diagnosis of metastatic breast cancer 84.8% of patients with ILC had received palliative endocrine therapy compared with 72.5% of patients with IDC.\nMedian follow-up after diagnosis of metastatic disease was 37.1 months (range 5.2–54.6) with 239 patients (42%) alive at the end of the follow-up period.\nTime between diagnosis of HR-positive metastatic breast cancer and start of palliative chemotherapy was significantly longer for patients with ILC (median not yet reached) compared with IDC (median 16.9 months, 95% Confidence Interval (CI) 9.6-22.3, P = 0.001) (Figure 1A). 
Six months after diagnosis of HR-positive metastatic breast cancer less patients with ILC had received palliative chemotherapy compared with IDC (28.1% versus 39.8% respectively).\nTime to first palliative chemotherapy (A) and endocrine therapy (B) by histological subtype.\nTime between diagnosis of HR-positive metastatic breast cancer and start of palliative endocrine therapy was significantly shorter for patients with ILC (median, 0.6 months, 95% CI 0.5–0.8) compared with patients with IDC (median, 1.1 months, 95% CI 1.0–1.5, P = 0.0001) (Figure 1B). Six months after diagnosis of metastatic breast cancer 84.8% of patients with ILC had received palliative endocrine therapy compared with 72.5% of patients with IDC.\n Outcome Median overall survival was 29.4 months (95% CI 22.5–36.6) for patients with HR-positive ILC and 25.4 months (95% CI 21.8–31.7) for patients with HR-positive IDC (P = 0.53).\nIn multivariable analysis for patients with ILC with palliative endocrine therapy and palliative chemotherapy as time-dependent covariates, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.8, 95% CI 1.7–4.6, P < .0001) compared to no palliative chemotherapy during the observation period. Conversely, early initiation of palliative endocrine therapy was associated with a favorable survival (hazard ratio 0.4, 95% CI 0.2–0.8, P = 0.005) compared to no palliative endocrine therapy during the observation period.\nIn multivariable analysis for patients with IDC, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.1, 95% CI 1.6–2.7. P < .0001) when compared with no chemotherapy, whereas early treatment with palliative endocrine therapy was not associated with survival (hazard ratio 0.9, 95% CI 0.6–1.2, P = 0.4).\nMedian overall survival was 29.4 months (95% CI 22.5–36.6) for patients with HR-positive ILC and 25.4 months (95% CI 21.8–31.7) for patients with HR-positive IDC (P = 0.53).\nIn multivariable analysis for patients with ILC with palliative endocrine therapy and palliative chemotherapy as time-dependent covariates, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.8, 95% CI 1.7–4.6, P < .0001) compared to no palliative chemotherapy during the observation period. Conversely, early initiation of palliative endocrine therapy was associated with a favorable survival (hazard ratio 0.4, 95% CI 0.2–0.8, P = 0.005) compared to no palliative endocrine therapy during the observation period.\nIn multivariable analysis for patients with IDC, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.1, 95% CI 1.6–2.7. P < .0001) when compared with no chemotherapy, whereas early treatment with palliative endocrine therapy was not associated with survival (hazard ratio 0.9, 95% CI 0.6–1.2, P = 0.4).\n Residual survival: six months after diagnosis For ILC, the residual survival was significantly longer for patients not treated with palliative chemotherapy within the first six months after diagnosis of metastatic breast cancer (median 44.0 months, 95% CI 30.2-not yet reached) versus patients treated with palliative chemotherapy (median 15.2 months, 95% CI 7.8–19.2) (Figure 2A). 
Residual survival: six months after diagnosis
For ILC, residual survival was significantly longer for patients not treated with palliative chemotherapy within the first six months after diagnosis of metastatic breast cancer (median 44.0 months, 95% CI 30.2–not yet reached) than for patients treated with palliative chemotherapy (median 15.2 months, 95% CI 7.8–19.2) (Figure 2A). For IDC, residual survival was 41.8 months (95% CI 33.3–not yet reached) for patients not treated with palliative chemotherapy compared with 16.8 months (95% CI 13.6–22.5) for patients treated with palliative chemotherapy within the first six months after diagnosis of metastatic breast cancer (Figure 2B).
Figure 2. Residual survival for patients with HR-positive ILC (A) and HR-positive IDC (B) treated with or without any palliative chemotherapy during the first six months after diagnosis of metastatic breast cancer.
Conversely, for patients with ILC who received palliative endocrine therapy within the first six months after diagnosis of metastatic breast cancer, residual survival was 37.5 months (95% CI 26.9–not yet reached), compared with 11.2 months (95% CI 3.0–20.4) for patients with HR-positive metastatic ILC who did not receive palliative endocrine therapy within the first six months (Figure 3A). For patients with HR-positive metastatic IDC treated with palliative endocrine therapy during the first six months, residual survival was 33.5 months (95% CI 25.7–not yet reached), compared with 18.9 months (95% CI 14.0–26.8) for patients with HR-positive metastatic IDC without palliative endocrine therapy in the first six months of metastatic disease (Figure 3B).
Figure 3. Residual survival for patients with HR-positive ILC (A) and HR-positive IDC (B) treated with or without any palliative endocrine therapy during the first six months after diagnosis of metastatic breast cancer.
DISCUSSION
To our knowledge, the role of histological subtype in the palliative systemic treatment of metastatic breast cancer has not been studied before. In a cohort of 568 patients with HR-positive metastatic breast cancer, we showed that patients with HR-positive ILC more often had bone, and less often multiple sites, as the initial site of distant metastasis compared with those with HR-positive IDC.
Six months after metastatic diagnosis, patients with HR-positive ILC had received chemotherapy significantly less often (28% versus 40%) and endocrine therapy more often (85% versus 73%) than patients with HR-positive IDC. Patients starting palliative chemotherapy during the first six months had a significantly shorter median residual survival thereafter than those who did not (15 versus 44 months for ILC, and 17 versus 42 months for IDC). Conversely, patients treated with endocrine therapy in the first six months had a longer median residual survival than those who did not (37 versus 11 months for ILC, and 33 versus 19 months for IDC). Our results confirm that, in addition to HR status, histology correlates with the presentation of metastatic disease by number and type of distant metastatic sites, and thereby affects treatment decision-making in real life.
The intriguing scientific question is how histological subtype influences metastatic spread in HR-positive breast cancer. It is hypothesized that the loss of E-cadherin, a cell-cell adhesion molecule, in ILC results in less adhesive tumor cells, which may therefore disseminate and infiltrate certain distant locations more easily [1].
In this HR-positive metastatic breast cancer cohort we also examined the association of palliative systemic treatment with survival for both histological subtypes. Given the favorable presentation of distant metastasis by number and location in patients with HR-positive ILC compared with HR-positive IDC, one would expect a more indolent disease course and a better outcome. However, we showed that for patients with HR-positive breast cancer, overall disease outcome was comparable irrespective of histology and irrespective of the different treatment choices made in real life, as discussed above. Interestingly, the efficacy of chemotherapy, once that treatment choice was made, was comparable irrespective of histology. Treatment with palliative endocrine therapy, however, was associated with a favorable prognosis only in patients with HR-positive ILC and not in patients with HR-positive IDC, again suggesting an impact of histology beyond HR status. This is in concordance with the early breast cancer setting, in which there is evidence suggesting differences in the efficacy of systemic therapy between the two histological subtypes [3–5, 12].
Although there is some evidence that histology could be helpful in treatment decision-making [13–15], current guidelines do not include histological subtype as an indicator for the use of systemic treatment in general, or for a specific regimen [16, 17]. Interestingly, in the adjuvant setting, histology has been shown to be of importance when deciding between breast-conserving surgery and mastectomy after neo-adjuvant chemotherapy [18]. The current study shows that histological subtype can be of predictive value in HR-positive metastatic breast cancer with regard to the effectiveness of early initiation of palliative endocrine therapy. It may be that, in patients with HR-positive lobular breast cancer, the higher incidence of bone metastases as the initial metastatic site partly accounts for the initial choice of endocrine therapy.
This underlines the relevance of acknowledging a different metastatic pattern, and thereby different initial treatment choices, between breast cancer patients with ductal versus lobular histology.
Much more than in the adjuvant setting, treatment decisions in the metastatic setting are based on the observed and anticipated clinical course of the disease, which is not determined by tumor characteristics alone. Age, performance status, previous therapies and toxicities, comorbidity, and patient and physician preferences also play a role. The complexity of this process, together with the retrospective design of our study, makes it impossible to identify and rule out confounding by indication. Furthermore, the localization of the metastases (visceral versus bone metastasis, for instance) could be related to symptomatology and thereby to the timing of detection, and could therefore have introduced lead-time bias.
In this study, patients with pure lobular carcinoma and mixed lobular carcinoma were combined for the analyses. In other studies on histological subtypes of breast cancer, mixed and lobular carcinoma had similar outcomes [14, 19]. Even after combining these subgroups, the proportion of HER2-positive tumors was too low to further analyze anti-HER2 therapy. In the adjuvant setting, the magnitude of benefit from adjuvant trastuzumab was shown not to differ between patients with ILC and IDC [20]. Another limitation of our study is that patient numbers were too small to perform analyses of specific chemotherapy regimens. Microarray analyses have demonstrated that ILC and IDC can be distinguished on the basis of genomic and expression profiles [21]. The increasing knowledge of genomic differences between ILC and IDC can help answer questions on in vivo chemosensitivity. For example, topoisomerase-IIα gene amplification is a predictive biomarker for response to anthracyclines, and this amplification is lacking in ILC, which could help explain the poor responsiveness of ILC to neo-adjuvant chemotherapy, including anthracyclines [22]. This genetic information can guide further research and may eventually be useful in making treatment decisions for histological breast cancer subtypes.
In conclusion, to our knowledge, this is the first study to investigate the role of histological subtype in HR-positive metastatic breast cancer. Although survival was comparable for the two histological subtypes, it was achieved with different treatment strategies. As patients with HR-positive ILC were less likely to receive chemotherapy than those with HR-positive IDC, histology may be a relevant factor in treatment decision-making. For a more definite conclusion on the role of histology, we recommend incorporating histological subtype as a stratification factor in future clinical trials.
[ "INTRODUCTION", "RESULTS", "Patient characteristics", "Palliative systemic treatment", "Outcome", "Residual survival: six months after diagnosis", "DISCUSSION", "MATERIALS AND METHODS" ]
[ "The two most frequent histological subtypes of breast cancer are invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC), with IDC comprising 75–80% and ILC 5–15% of all breast cancer cases. ILC is being associated with larger tumor size at presentation, more bilateral and multifocal involvement and with a different pattern of metastatic spread compared with IDC [1, 2]. Furthermore, ILC is more often HR-positive, HER2-negative, with lower S-phase fraction and less often positive for the tumor suppressor gene p53, compared with IDC [1]. Also, treatment response is known to be different. In a combined analysis on a number of retrospective series, the pathological complete response rate of neo-adjuvant chemotherapy was significantly lower in ILC (1.7%) than in IDC (11.6%) [3]. In the adjuvant setting, retrospective data suggest a higher efficacy of endocrine therapy for ILC than IDC [4]. More recently, a large retrospective study strongly suggested that HR-positive breast cancer patients with ILC do not seem to benefit from adjuvant chemotherapy in addition to endocrine therapy [5].\nWhether these differences have a prognostic impact is controversial. Some studies found no effect of histology on survival [1, 6, 7], others found a better prognosis for patients with ILC compared to those with IDC [8–10] and some found a change in prognosis over time with a better prognosis for ILC during the first years of follow-up and a worse prognosis during later years [2, 11].\nSo far, studies on histological subtypes consider early breast cancer and not metastatic breast cancer. The aim of this study was to describe the differences between IDC and ILC with respect to patient and tumor characteristics, treatment-choices and outcome in metastatic breast cancer. In order to account for the effect of the hormone receptor (HR) on outcome and treatment-decision making, we only included patients with HR-positive breast cancer.", " Patient characteristics Of the 568 patients with HR-positive metastatic breast cancer, 23% had (mixed type) ILC (131 patients), hereafter referred to as ILC and 77% had IDC (437 patients) (Table 1).\nAbbreviations; HR+ hormone receptor positive, N number, SBR Scarff Bloom Richardson, MFI metastatic-free interval * ≥ 1 of the aforementioned initial sites of metastasis.\nMetastatic breast cancer patients with ILC were older at the time of primary breast cancer diagnosis, compared with patients with IDC (median age at diagnosis 62 years for ILC versus 58 years for IDC, P = 0.03). At initial presentation, patients with HR-positive ILC had larger tumors (T2-3, 62.5% versus 49.7%, P = 0.002) and slightly more node-positive disease (66% versus 56.9%, P = 0.10), but with less often a HER2 positive status (7.6% versus 15.3%, P = 0.02) and lower grade (not significant) compared with patients with HR-positive IDC. No differences were seen in use of adjuvant chemotherapy or endocrine therapy between the histological subtypes.\nBone was the most common initial site of distant metastasis in both HR-positive histological subtypes, although bone metastasis as initial site was more frequently observed for patients with ILC compared to those with IDC (46.5% versus 34.8%, P = 0.01). 
MATERIALS AND METHODS
Patient selection
We identified all patients diagnosed with metastatic breast cancer between 2007 and 2009 in eight hospitals in the South-East of the Netherlands. All patients with metastatic breast cancer were selected irrespective of the date of primary breast cancer diagnosis (including patients with de novo metastatic breast cancer; 18.5% of patients with IDC and 19.9% of patients with ILC), with the exception of patients diagnosed with primary breast cancer before 1990, because of limitations in the availability of data on the primary tumor and initial treatment.
Histology was classified according to the International Classification of Diseases for Oncology (ICD-O) [23].
ILC and mixed histology were defined as codes 8520 and 8522; IDC was defined as code 8500. From the total of 815 metastatic breast cancer patients, we excluded 76 patients with either unknown histology or histological subtypes other than IDC or ILC. Of the remaining 739 metastatic breast cancer patients, we excluded 171 patients with HR-negative tumors (27% of patients with IDC and 8% of patients with ILC) in order to rule out the impact of HR status. In total, our study population consisted of 568 patients divided into two groups based on histology: 437 patients with IDC and 131 patients with ILC.
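As a rough illustration of this selection step, the following sketch applies the same ICD-O and HR filters to a hypothetical registry export; the column names are invented, and the actual selection was performed on registry data rather than with this code.

    import pandas as pd

    patients = pd.read_csv("registry_mbc_1990_2009.csv")  # hypothetical export

    # Keep ILC/mixed (ICD-O 8520, 8522) and IDC (8500); drop other or unknown histology.
    histology_map = {8500: "IDC", 8520: "ILC", 8522: "ILC"}
    cohort = patients[patients["icdo_morphology"].isin(histology_map.keys())].copy()
    cohort["histology"] = cohort["icdo_morphology"].map(histology_map)

    # Exclude HR-negative tumors to rule out the impact of HR status.
    cohort = cohort[cohort["hr_status"] == "positive"]

    print(cohort["histology"].value_counts())  # expected split in this study: 437 IDC, 131 ILC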
Data collection
Information was collected on patient and tumor characteristics, treatment, and outcome. Tumors were characterized according to the sixth edition of the TNM classification of malignant tumors [24] and the Scarff-Bloom-Richardson (SBR) histological grading system [25]. Estrogen receptor (ER) and progesterone receptor (PR) positivity was defined as positive nuclear staining of ≥ 10%. HER2 positivity was defined as an immunohistochemistry score of 3+, or 2+ with positive FISH. In case of missing HER2 status, a dedicated pathologist centrally reviewed the missing data when material was available. Initial sites of metastasis were categorized as bone, visceral (including lung, liver, pleural, peritoneal, pericardial, and lymphangitic carcinomatosis), brain (including leptomeningeal and CNS), skin and lymph nodes, and multiple metastases (more than one of these metastatic sites).

Statistical analysis
Baseline characteristics of the two histological groups were compared using chi-square tests for categorical variables and Wilcoxon rank sum tests for continuous variables.
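As an illustration of these baseline comparisons, the sketch below applies the corresponding SciPy tests to a hypothetical cohort table; the column names are invented, and the published analysis was performed in SAS.

    import pandas as pd
    from scipy.stats import chi2_contingency, ranksums

    cohort = pd.read_csv("cohort.csv")  # e.g. the table produced by the selection sketch above

    # Chi-square test for a categorical baseline variable, e.g. T-stage by histology.
    table = pd.crosstab(cohort["histology"], cohort["t_stage"])
    chi2, p_cat, dof, _ = chi2_contingency(table)

    # Wilcoxon rank sum test for a continuous variable, e.g. age at primary diagnosis.
    ilc_age = cohort.loc[cohort["histology"] == "ILC", "age_at_diagnosis"]
    idc_age = cohort.loc[cohort["histology"] == "IDC", "age_at_diagnosis"]
    stat, p_cont = ranksums(ilc_age, idc_age)

    print(p_cat, p_cont)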
Metastatic-free interval was defined as the time between the date of primary diagnosis and the date of first distant metastasis. Overall survival after diagnosis of metastatic breast cancer was defined as the time between the date of first distant metastasis and the date of death. Survival curves and time to first palliative systemic therapy (either chemotherapy or endocrine therapy) were estimated using the Kaplan-Meier method and compared using log-rank tests. All patients still alive were censored at the date of their last follow-up. Patients who died without palliative therapy were censored at the date of death in the analysis of time to first palliative therapy.
To explore the association of palliative systemic therapy with the survival of patients with metastatic breast cancer for both histological subtypes, a Cox proportional hazards model was fitted with palliative chemotherapy and endocrine therapy as time-dependent covariates, since the administration of treatment can change over time and depends on the time available for each patient to receive the treatment. We did not explore the association between palliative targeted therapy, such as trastuzumab and bevacizumab, and survival, since the number of patients with IDC and ILC receiving targeted therapy was very low. Because all of these patients received targeted therapy together with chemotherapy, they were included in the analysis of initial chemotherapy. In addition, the landmark method was used to estimate survival after a specific time-point, the so-called residual survival [26]. As we were interested in the survival obtained in relation to the initial palliative treatment choices, we chose six months after diagnosis of metastatic breast cancer as the landmark. Consequently, patients who died within six months were excluded from the residual survival curves.
All analyses were performed using SAS version 9.2. All reported P-values are two-sided, and a P-value ≤ 0.05 was considered statistically significant.
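The time-dependent Cox analysis described above can be outlined as follows. This is an illustrative sketch only: it assumes a hypothetical long-format table with one row per patient per interval of constant covariate values (column names invented), and it uses lifelines' CoxTimeVaryingFitter in place of the SAS procedure actually used.

    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # Long format: one row per interval during which the covariates are constant.
    # 'chemo' and 'endocrine' flip from 0 to 1 at the interval in which that
    # palliative treatment starts; 'event' marks death at the end of follow-up.
    long_df = pd.read_csv("cohort_long_format.csv")
    # columns: patient_id, start, stop, chemo, endocrine, age, event

    ctv = CoxTimeVaryingFitter()
    ctv.fit(long_df,
            id_col="patient_id",
            start_col="start",
            stop_col="stop",
            event_col="event")
    ctv.print_summary()  # exp(coef) for 'chemo' and 'endocrine' are the hazard ratios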
[ null, null, null, null, null, null, null, "methods" ]
[ "metastatic breast cancer", "histology", "invasive lobular carcinoma", "invasive ductal carcinoma", "treatment" ]
INTRODUCTION: The two most frequent histological subtypes of breast cancer are invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC), with IDC comprising 75–80% and ILC 5–15% of all breast cancer cases. ILC is being associated with larger tumor size at presentation, more bilateral and multifocal involvement and with a different pattern of metastatic spread compared with IDC [1, 2]. Furthermore, ILC is more often HR-positive, HER2-negative, with lower S-phase fraction and less often positive for the tumor suppressor gene p53, compared with IDC [1]. Also, treatment response is known to be different. In a combined analysis on a number of retrospective series, the pathological complete response rate of neo-adjuvant chemotherapy was significantly lower in ILC (1.7%) than in IDC (11.6%) [3]. In the adjuvant setting, retrospective data suggest a higher efficacy of endocrine therapy for ILC than IDC [4]. More recently, a large retrospective study strongly suggested that HR-positive breast cancer patients with ILC do not seem to benefit from adjuvant chemotherapy in addition to endocrine therapy [5]. Whether these differences have a prognostic impact is controversial. Some studies found no effect of histology on survival [1, 6, 7], others found a better prognosis for patients with ILC compared to those with IDC [8–10] and some found a change in prognosis over time with a better prognosis for ILC during the first years of follow-up and a worse prognosis during later years [2, 11]. So far, studies on histological subtypes consider early breast cancer and not metastatic breast cancer. The aim of this study was to describe the differences between IDC and ILC with respect to patient and tumor characteristics, treatment-choices and outcome in metastatic breast cancer. In order to account for the effect of the hormone receptor (HR) on outcome and treatment-decision making, we only included patients with HR-positive breast cancer. RESULTS: Patient characteristics Of the 568 patients with HR-positive metastatic breast cancer, 23% had (mixed type) ILC (131 patients), hereafter referred to as ILC and 77% had IDC (437 patients) (Table 1). Abbreviations; HR+ hormone receptor positive, N number, SBR Scarff Bloom Richardson, MFI metastatic-free interval * ≥ 1 of the aforementioned initial sites of metastasis. Metastatic breast cancer patients with ILC were older at the time of primary breast cancer diagnosis, compared with patients with IDC (median age at diagnosis 62 years for ILC versus 58 years for IDC, P = 0.03). At initial presentation, patients with HR-positive ILC had larger tumors (T2-3, 62.5% versus 49.7%, P = 0.002) and slightly more node-positive disease (66% versus 56.9%, P = 0.10), but with less often a HER2 positive status (7.6% versus 15.3%, P = 0.02) and lower grade (not significant) compared with patients with HR-positive IDC. No differences were seen in use of adjuvant chemotherapy or endocrine therapy between the histological subtypes. Bone was the most common initial site of distant metastasis in both HR-positive histological subtypes, although bone metastasis as initial site was more frequently observed for patients with ILC compared to those with IDC (46.5% versus 34.8%, P = 0.01). Fewer patients with ILC were diagnosed with multiple sites of distant metastasis compared to those with IDC (23.7% versus 30.9%, P = 0.11). 
Of the 568 patients with HR-positive metastatic breast cancer, 23% had (mixed type) ILC (131 patients), hereafter referred to as ILC and 77% had IDC (437 patients) (Table 1). Abbreviations; HR+ hormone receptor positive, N number, SBR Scarff Bloom Richardson, MFI metastatic-free interval * ≥ 1 of the aforementioned initial sites of metastasis. Metastatic breast cancer patients with ILC were older at the time of primary breast cancer diagnosis, compared with patients with IDC (median age at diagnosis 62 years for ILC versus 58 years for IDC, P = 0.03). At initial presentation, patients with HR-positive ILC had larger tumors (T2-3, 62.5% versus 49.7%, P = 0.002) and slightly more node-positive disease (66% versus 56.9%, P = 0.10), but with less often a HER2 positive status (7.6% versus 15.3%, P = 0.02) and lower grade (not significant) compared with patients with HR-positive IDC. No differences were seen in use of adjuvant chemotherapy or endocrine therapy between the histological subtypes. Bone was the most common initial site of distant metastasis in both HR-positive histological subtypes, although bone metastasis as initial site was more frequently observed for patients with ILC compared to those with IDC (46.5% versus 34.8%, P = 0.01). Fewer patients with ILC were diagnosed with multiple sites of distant metastasis compared to those with IDC (23.7% versus 30.9%, P = 0.11). Palliative systemic treatment Median follow-up after diagnosis of metastatic disease was 37.1 months (range 5.2–54.6) with 239 patients (42%) alive at the end of the follow-up period. Time between diagnosis of HR-positive metastatic breast cancer and start of palliative chemotherapy was significantly longer for patients with ILC (median not yet reached) compared with IDC (median 16.9 months, 95% Confidence Interval (CI) 9.6-22.3, P = 0.001) (Figure 1A). Six months after diagnosis of HR-positive metastatic breast cancer less patients with ILC had received palliative chemotherapy compared with IDC (28.1% versus 39.8% respectively). Time to first palliative chemotherapy (A) and endocrine therapy (B) by histological subtype. Time between diagnosis of HR-positive metastatic breast cancer and start of palliative endocrine therapy was significantly shorter for patients with ILC (median, 0.6 months, 95% CI 0.5–0.8) compared with patients with IDC (median, 1.1 months, 95% CI 1.0–1.5, P = 0.0001) (Figure 1B). Six months after diagnosis of metastatic breast cancer 84.8% of patients with ILC had received palliative endocrine therapy compared with 72.5% of patients with IDC. Median follow-up after diagnosis of metastatic disease was 37.1 months (range 5.2–54.6) with 239 patients (42%) alive at the end of the follow-up period. Time between diagnosis of HR-positive metastatic breast cancer and start of palliative chemotherapy was significantly longer for patients with ILC (median not yet reached) compared with IDC (median 16.9 months, 95% Confidence Interval (CI) 9.6-22.3, P = 0.001) (Figure 1A). Six months after diagnosis of HR-positive metastatic breast cancer less patients with ILC had received palliative chemotherapy compared with IDC (28.1% versus 39.8% respectively). Time to first palliative chemotherapy (A) and endocrine therapy (B) by histological subtype. 
Time between diagnosis of HR-positive metastatic breast cancer and start of palliative endocrine therapy was significantly shorter for patients with ILC (median, 0.6 months, 95% CI 0.5–0.8) compared with patients with IDC (median, 1.1 months, 95% CI 1.0–1.5, P = 0.0001) (Figure 1B). Six months after diagnosis of metastatic breast cancer 84.8% of patients with ILC had received palliative endocrine therapy compared with 72.5% of patients with IDC. Outcome Median overall survival was 29.4 months (95% CI 22.5–36.6) for patients with HR-positive ILC and 25.4 months (95% CI 21.8–31.7) for patients with HR-positive IDC (P = 0.53). In multivariable analysis for patients with ILC with palliative endocrine therapy and palliative chemotherapy as time-dependent covariates, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.8, 95% CI 1.7–4.6, P < .0001) compared to no palliative chemotherapy during the observation period. Conversely, early initiation of palliative endocrine therapy was associated with a favorable survival (hazard ratio 0.4, 95% CI 0.2–0.8, P = 0.005) compared to no palliative endocrine therapy during the observation period. In multivariable analysis for patients with IDC, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.1, 95% CI 1.6–2.7. P < .0001) when compared with no chemotherapy, whereas early treatment with palliative endocrine therapy was not associated with survival (hazard ratio 0.9, 95% CI 0.6–1.2, P = 0.4). Median overall survival was 29.4 months (95% CI 22.5–36.6) for patients with HR-positive ILC and 25.4 months (95% CI 21.8–31.7) for patients with HR-positive IDC (P = 0.53). In multivariable analysis for patients with ILC with palliative endocrine therapy and palliative chemotherapy as time-dependent covariates, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.8, 95% CI 1.7–4.6, P < .0001) compared to no palliative chemotherapy during the observation period. Conversely, early initiation of palliative endocrine therapy was associated with a favorable survival (hazard ratio 0.4, 95% CI 0.2–0.8, P = 0.005) compared to no palliative endocrine therapy during the observation period. In multivariable analysis for patients with IDC, early initiation of palliative chemotherapy was associated with an unfavorable survival (hazard ratio 2.1, 95% CI 1.6–2.7. P < .0001) when compared with no chemotherapy, whereas early treatment with palliative endocrine therapy was not associated with survival (hazard ratio 0.9, 95% CI 0.6–1.2, P = 0.4). Residual survival: six months after diagnosis For ILC, the residual survival was significantly longer for patients not treated with palliative chemotherapy within the first six months after diagnosis of metastatic breast cancer (median 44.0 months, 95% CI 30.2-not yet reached) versus patients treated with palliative chemotherapy (median 15.2 months, 95% CI 7.8–19.2) (Figure 2A). For IDC, residual survival when not treated with palliative chemotherapy was 41.8 (95% CI 33.3-not yet reached) compared with 16.8 months (95% CI 13.6–22.5) for patients treated with palliative chemotherapy within the first six months after diagnosis of metastatic breast cancer (Figure 2B). Residual survival for patients with HR-positive ILC (A) and HR-positive IDC (B) treated with or without any palliative chemotherapy during the first six months after diagnosis of metastatic breast cancer. 
Conversely, for patients with ILC treated with palliative endocrine therapy within the first six months after diagnosis of metastatic breast cancer, residual survival was 37.5 months (95% CI 26.9–not yet reached) compared with 11.2 months (95% CI 3.0–20.4) for patients with HR-positive metastatic ILC not treated with palliative endocrine therapy within the first six months (Figure 3A). For patients with HR-positive metastatic IDC treated with palliative endocrine therapy during the first six months, residual survival was 33.5 months (95% CI 25.7–not yet reached) compared with 18.9 months (95% CI 14.0–26.8) for patients with HR-positive metastatic IDC without palliative endocrine therapy in the first six months of metastatic disease (Figure 3B). Figure 3. Residual survival for patients with HR-positive ILC (A) and HR-positive IDC (B) treated with or without any palliative endocrine therapy during the first six months after diagnosis of metastatic breast cancer.
DISCUSSION: To our knowledge, the role of histological subtype in palliative systemic treatment of metastatic breast cancer has not been studied before. In a cohort of 568 HR-positive metastatic breast cancer patients, we showed that patients with HR-positive ILC more often had bone, and less often multiple sites, as the initial site of distant metastasis compared to those with HR-positive IDC. Six months after the metastatic diagnosis, patients with HR-positive ILC had received chemotherapy significantly less often (28% versus 40%) and endocrine therapy more often (85% versus 73%) than patients with HR-positive IDC. Patients starting palliative chemotherapy during the first six months had a significantly shorter median residual survival thereafter than those who did not (15 versus 44 months for ILC, and 17 versus 42 months for IDC, respectively). Conversely, patients treated with endocrine therapy in the first six months had a longer median residual survival than those who did not (37 versus 11 months for ILC, and 33 versus 19 months for IDC, respectively). Our results confirm that, in addition to HR status, histology correlates with the presentation of metastatic disease by number and type of distant metastatic sites, and thereby affects treatment decision-making in real life. The intriguing scientific question is how histological subtype influences metastatic spread in HR-positive breast cancer. It is hypothesized that the loss of E-cadherin, a cell-cell adhesion molecule, in ILC reduces the adhesiveness of the tumor cells, allowing them to disseminate to and infiltrate certain distant locations more easily [1]. In this HR-positive metastatic breast cancer cohort we also examined the association of palliative systemic treatment with survival for both histological subtypes. Given the more favorable presentation of distant metastasis, by number and location, in patients with HR-positive ILC compared with HR-positive IDC, one would expect a more indolent disease course and a better outcome.
However, we showed that for patients with HR-positive breast cancer, overall disease outcome was comparable irrespective of histology and irrespective of the different treatment choices made in real life, as discussed above. Interestingly, the efficacy of chemotherapy, once the treatment choice was made, was comparable irrespective of histology. However, treatment with palliative endocrine therapy was associated with a favorable prognosis only in patients with HR-positive ILC and not in patients with HR-positive IDC, again suggesting the impact of histology over HR status. This is in concordance with the early breast cancer setting, in which there is evidence suggesting differences in the efficacy of systemic therapy between the two histological subtypes [3–5, 12]. Although there is some evidence that histology could be helpful in treatment decision-making [13–15], the current guidelines do not include histological subtype as an indicator for the use of systemic treatment in general, or for a specific regimen [16, 17]. Interestingly, in the adjuvant setting, histology has been shown to be of importance when deciding between breast-conserving surgery and mastectomy after neo-adjuvant chemotherapy [18]. The current study shows that histological subtype can be of predictive value in HR-positive metastatic breast cancer with regard to the effectiveness of early initiation of palliative endocrine therapy. It is possible that in patients with HR-positive lobular breast cancer, the higher incidence of bone metastases as the initial metastatic site partly accounts for the initial choice of endocrine therapy. This underlines the relevance of acknowledging a different metastatic pattern, and thereby different initial treatment choices, between breast cancer patients with ductal versus lobular histology. Much more than in the adjuvant setting, treatment decisions in the metastatic setting are based on the observed and anticipated clinical course of the disease, which is determined not only by the tumor characteristics described earlier, but also by age, performance status, previous therapies and toxicities, comorbidity, and patient and doctor preferences. The complexity of this process, together with the retrospective design of our study, makes it impossible to identify and rule out confounding by indication. Furthermore, the localization of the metastases (for instance, visceral versus bone metastasis) could be related to symptomatology and thereby to the timing of detection, and could therefore have introduced lead-time bias. In this study, patients with pure lobular carcinoma and mixed lobular carcinomas were combined for the analyses. In other studies on histological subtypes of breast cancer, mixed and lobular carcinoma had similar outcomes [14, 19]. Even after combining these subgroups, the proportion of HER2-positive tumors was too low to further analyze anti-HER2 therapy. In the adjuvant treatment setting, the magnitude of benefit from adjuvant trastuzumab was shown not to differ between patients with ILC and IDC [20]. Another limitation of our study is that patient numbers were too small to perform analyses of specific chemotherapy regimens. Microarray analyses have demonstrated that ILC and IDC can be distinguished on the basis of genomic and expression profiles [21]. The increasing knowledge of genomic differences between ILC and IDC can help to answer questions on in vivo chemosensitivity.
For example, topoisomerase-IIα gene amplification is a predictive biomarker for response to anthracyclines; this amplification is lacking in ILC, which could help explain the poor responsiveness of ILC to neo-adjuvant chemotherapy, including anthracyclines [22]. This genetic information can help guide further research and may eventually be useful in making treatment decisions for histological breast cancer subtypes. In conclusion, to our knowledge, this is the first study to investigate the role of histological subtype in HR-positive metastatic breast cancer. Although survival was comparable for the two histological subtypes, this was achieved with different treatment strategies. As patients with HR-positive ILC were less likely to receive chemotherapy than those with HR-positive IDC, histology may be a relevant factor in treatment decision-making. For a more definite conclusion on the role of histology, we recommend incorporating histological subtype as a stratification factor in future clinical trials. MATERIALS AND METHODS: Patient selection: We identified all patients diagnosed with metastatic breast cancer between 2007 and 2009 in eight hospitals in the South-East of the Netherlands. All patients with metastatic breast cancer were selected irrespective of the date of primary breast cancer diagnosis (including patients with de novo metastatic breast cancer; 18.5% of patients with IDC and 19.9% of patients with ILC), with the exception of patients with a diagnosis of primary breast cancer before 1990, owing to limitations in the availability of data on the primary tumor and initial treatment. Histology was classified according to the International Classification of Diseases for Oncology (ICD-O) [23]. ILC and mixed histology were defined as codes 8520 and 8522; IDC was defined as code 8500. From the total of 815 metastatic breast cancer patients, we excluded 76 patients with either unknown histology or histological subtypes other than IDC or ILC. Of the remaining 739 metastatic breast cancer patients, we excluded 171 patients with HR-negative tumors (27% of patients with IDC and 8% of patients with ILC) in order to rule out the impact of HR status. In total, our study population consisted of 568 patients divided into two groups based on histology: one group of 437 patients with IDC and the other of 131 patients with ILC.
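The selection steps above amount to a simple filter on diagnosis years, ICD-O morphology codes and HR status. Purely as an illustration, a minimal sketch in Python follows; the DataFrame `registry` and its column names are hypothetical and do not reflect the registries' actual schema.

```python
# Illustrative sketch of the cohort selection described above; the DataFrame
# `registry` and its column names are hypothetical.
import pandas as pd

ILC_CODES = {"8520", "8522"}   # ICD-O: lobular and mixed ductal-lobular histology
IDC_CODE = "8500"              # ICD-O: invasive ductal carcinoma

def select_cohort(registry: pd.DataFrame) -> pd.DataFrame:
    df = registry.copy()
    # metastatic breast cancer diagnosed 2007-2009, primary diagnosis from 1990 onward
    df = df[df["metastatic_dx_year"].between(2007, 2009) &
            (df["primary_dx_year"] >= 1990)]
    # keep only IDC and (mixed) ILC histology, discard unknown/other subtypes
    df = df[df["icdo_morphology"].isin(ILC_CODES | {IDC_CODE})]
    df["histology"] = df["icdo_morphology"].map(
        lambda code: "ILC" if code in ILC_CODES else "IDC")
    # restrict to hormone receptor-positive tumors to rule out the impact of HR status
    return df[df["hr_status"] == "positive"]
```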
Data collection: Information was collected on patient and tumor characteristics, treatment, and outcome. Tumors were characterized according to the sixth edition of the TNM classification of malignant tumors [24] and the Scarff Bloom Richardson (SBR) histological grading system [25]. Estrogen receptor (ER) and progesterone receptor (PR) positivity was defined as positive nuclear staining of ≥ 10%. HER2 positivity was defined as an immunohistochemistry score of 3+, or 2+ with positive FISH. In case of missing HER2 status, a dedicated pathologist centrally reviewed the material when it was available. Initial sites of metastasis were categorized as bone, visceral (including lung, liver, pleural, peritoneal, pericardial and lymphangitic carcinomatosis), brain (including leptomeningeal and CNS), skin and lymph nodes, or multiple metastases (more than one of the metastatic sites).
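The receptor and metastatic-site definitions above translate into simple classification rules. The following sketch illustrates them; the function and field names are hypothetical and are not taken from the study.

```python
# Illustrative encoding of the definitions above; names are hypothetical.
VISCERAL = {"lung", "liver", "pleural", "peritoneal", "pericardial",
            "lymphangitic carcinomatosis"}
BRAIN = {"brain", "leptomeningeal", "CNS"}

def er_pr_positive(nuclear_staining_pct):
    # ER/PR positivity: nuclear staining of at least 10%
    return nuclear_staining_pct >= 10.0

def her2_positive(ihc_score, fish_positive=False):
    # HER2 positivity: IHC 3+, or IHC 2+ with a positive FISH result
    return ihc_score == "3+" or (ihc_score == "2+" and fish_positive)

def site_category(site):
    if site in VISCERAL:
        return "visceral"
    if site in BRAIN:
        return "brain"
    return site  # bone, skin, lymph nodes

def initial_metastasis_category(sites):
    # more than one distinct category counts as "multiple metastases"
    categories = {site_category(s) for s in sites}
    return "multiple" if len(categories) > 1 else categories.pop()
```

For example, `initial_metastasis_category({"bone", "liver"})` would return "multiple", whereas `{"lung", "liver"}` would still count as a single visceral site.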
Statistical analysis: Baseline characteristics of the two histological groups were compared using chi-square tests for categorical variables and Wilcoxon rank-sum tests for continuous variables. Metastatic-free interval was defined as the time between the date of primary diagnosis and the date of first distant metastasis. Overall survival after diagnosis of metastatic breast cancer was defined as the time between the date of first distant metastasis and the date of death. Survival curves and time to first palliative systemic therapy (either chemotherapy or endocrine therapy) were estimated using the Kaplan-Meier method and compared using log-rank tests. All patients still alive were censored at the date of last follow-up. Patients who died without palliative therapy were censored at the date of death in the analysis of time to first palliative therapy. To explore the association of palliative systemic therapy with survival for both histological subtypes, a Cox proportional hazards model was fitted with palliative chemotherapy and endocrine therapy as time-dependent covariates, since the administration of treatment can change over time and depends on the time available for each patient to receive the treatment. We did not explore the association between palliative targeted therapy, such as trastuzumab and bevacizumab, and survival, since the number of patients with IDC and ILC receiving targeted therapy was very low. Since all of these patients received targeted therapy with chemotherapy, they were included in the analysis of initial chemotherapy. In addition, the landmark method was used to estimate survival after a specific time point, the so-called residual survival [26]. As we were interested in the survival obtained in relation to the initial palliative treatment choices, we chose six months after diagnosis of metastatic breast cancer as the landmark. Consequently, patients who died within six months were excluded from the residual survival curves. All analyses were performed using SAS version 9.2. All reported P-values are two-sided, and a P-value ≤ 0.05 was considered statistically significant.
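The analyses described above were performed in SAS 9.2. Purely as an illustration of the same ideas (Kaplan-Meier estimation with a log-rank comparison, a Cox model with treatment as a time-dependent covariate, and the six-month landmark for residual survival), here is a minimal sketch using the Python lifelines package; the per-patient table `df` and its column names are hypothetical.

```python
# Illustration only: the study used SAS 9.2. Hypothetical DataFrame `df` with one
# row per patient and columns:
#   os_months   - overall survival from first distant metastasis (months)
#   death       - 1 if the patient died, 0 if censored at last follow-up
#   histology   - "ILC" or "IDC"
#   chemo_start - months from metastatic diagnosis to first palliative
#                 chemotherapy (NaN if never given during follow-up)
import pandas as pd
from lifelines import KaplanMeierFitter, CoxTimeVaryingFitter
from lifelines.statistics import logrank_test

# Kaplan-Meier curves by histology, compared with a log-rank test
kmf = KaplanMeierFitter()
for subtype, grp in df.groupby("histology"):
    kmf.fit(grp["os_months"], event_observed=grp["death"], label=subtype)
    kmf.plot_survival_function()
ilc, idc = df[df["histology"] == "ILC"], df[df["histology"] == "IDC"]
print(logrank_test(ilc["os_months"], idc["os_months"],
                   event_observed_A=ilc["death"],
                   event_observed_B=idc["death"]).p_value)

# Cox model with palliative chemotherapy as a time-dependent covariate:
# split each patient's follow-up into (start, stop] intervals and switch the
# covariate from 0 to 1 at the start of chemotherapy.
rows = []
for pid, r in df.iterrows():
    stop, event, t_chemo = r["os_months"], r["death"], r["chemo_start"]
    if pd.isna(t_chemo) or t_chemo >= stop:
        rows.append(dict(id=pid, start=0.0, stop=stop, chemo=0, death=event))
    elif t_chemo <= 0:
        rows.append(dict(id=pid, start=0.0, stop=stop, chemo=1, death=event))
    else:
        rows.append(dict(id=pid, start=0.0, stop=t_chemo, chemo=0, death=0))
        rows.append(dict(id=pid, start=t_chemo, stop=stop, chemo=1, death=event))
long_df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="death", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for the time-dependent chemotherapy covariate

# Six-month landmark ("residual") survival: keep patients alive at the landmark
# and measure survival from that point onward.
landmark = 6.0
alive = df[df["os_months"] > landmark].copy()
alive["residual_months"] = alive["os_months"] - landmark
```

Modelling the start of treatment as a switch in the long-format data, rather than as a baseline variable, mirrors the time-dependent covariate approach described above and avoids immortal-time bias; restricting to patients alive at six months serves the same purpose for the residual-survival curves.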
Background: This study describes the differences between the two largest histological breast cancer subtypes (invasive ductal carcinoma (IDC) and invasive (mixed) lobular carcinoma (ILC)) with respect to patient and tumor characteristics, treatment choices and outcome in metastatic breast cancer. Methods: We included 437 patients with hormone receptor-positive IDC and 131 patients with hormone receptor-positive ILC, all diagnosed with metastatic breast cancer between 2007 and 2009, irrespective of the date of the primary diagnosis. Patient and tumor characteristics and data on treatment and outcome were collected. Survival curves were obtained using the Kaplan-Meier method. Results: Patients with ILC were older at diagnosis of primary breast cancer and more often had initial bone metastasis (46.5% versus 34.8%, P = 0.01) and less often multiple metastatic sites compared to IDC (23.7% versus 30.9%, P = 0.11). Six months after diagnosis of metastatic breast cancer, 28.1% of patients with ILC and 39.8% of patients with IDC had received chemotherapy, with a longer median time to first chemotherapy for those with ILC (P = 0.001). After six months, 84.8% of patients with ILC had received endocrine therapy versus 72.5% of patients with IDC (P = 0.0001). Median overall survival was 29 months for ILC and 25 months for IDC (P = 0.53). Conclusions: Treatment strategies for hormone receptor-positive metastatic breast cancer were remarkably different for patients with ILC and IDC. Further research is required to understand tumor behavior and treatment choices in real life.
INTRODUCTION: The two most frequent histological subtypes of breast cancer are invasive ductal carcinoma (IDC) and invasive lobular carcinoma (ILC), with IDC comprising 75–80% and ILC 5–15% of all breast cancer cases. ILC is associated with larger tumor size at presentation, more bilateral and multifocal involvement, and a different pattern of metastatic spread compared with IDC [1, 2]. Furthermore, ILC is more often HR-positive and HER2-negative, with a lower S-phase fraction, and is less often positive for the tumor suppressor gene p53, compared with IDC [1]. Treatment response is also known to be different. In a combined analysis of a number of retrospective series, the pathological complete response rate to neo-adjuvant chemotherapy was significantly lower in ILC (1.7%) than in IDC (11.6%) [3]. In the adjuvant setting, retrospective data suggest a higher efficacy of endocrine therapy for ILC than for IDC [4]. More recently, a large retrospective study strongly suggested that HR-positive breast cancer patients with ILC do not seem to benefit from adjuvant chemotherapy in addition to endocrine therapy [5]. Whether these differences have a prognostic impact is controversial. Some studies found no effect of histology on survival [1, 6, 7], others found a better prognosis for patients with ILC than for those with IDC [8–10], and some found a change in prognosis over time, with a better prognosis for ILC during the first years of follow-up and a worse prognosis during later years [2, 11]. So far, studies on histological subtypes have considered early breast cancer rather than metastatic breast cancer. The aim of this study was to describe the differences between IDC and ILC with respect to patient and tumor characteristics, treatment choices and outcome in metastatic breast cancer. In order to account for the effect of hormone receptor (HR) status on outcome and treatment decision-making, we only included patients with HR-positive breast cancer. DISCUSSION: Baseline characteristics of the two histological groups were compared using chi-square tests for categorical variables and Wilcoxon rank-sum tests for continuous variables. Metastatic-free interval was defined as the time between the date of primary diagnosis and the date of first distant metastasis. Overall survival after diagnosis of metastatic breast cancer was defined as the time between the date of first distant metastasis and the date of death. Survival curves and time to first palliative systemic therapy (either chemotherapy or endocrine therapy) were estimated using the Kaplan-Meier method and compared using log-rank tests. All patients still alive were censored at the date of last follow-up. Patients who died without palliative therapy were censored at the date of death in the analysis of time to first palliative therapy. To explore the association of palliative systemic therapy with survival for both histological subtypes, a Cox proportional hazards model was fitted with palliative chemotherapy and endocrine therapy as time-dependent covariates, since the administration of treatment can change over time and depends on the time available for each patient to receive the treatment. We did not explore the association between palliative targeted therapy, such as trastuzumab and bevacizumab, and survival, since the number of patients with IDC and ILC receiving targeted therapy was very low.
Since all of these patients received targeted therapy with chemotherapy, they were included in the analysis of initial chemotherapy. In addition, the landmark method was used to estimate survival after a specific time point, the so-called residual survival [26]. As we were interested in the survival obtained in relation to the initial palliative treatment choices, we chose six months after diagnosis of metastatic breast cancer as the landmark. Consequently, patients who died within six months were excluded from the residual survival curves. All analyses were performed using SAS version 9.2. All reported P-values are two-sided, and a P-value ≤ 0.05 was considered statistically significant.
Background: This study describes the differences between the two largest histological breast cancer subtypes (invasive ductal carcinoma (IDC) and invasive (mixed) lobular carcinoma (ILC)) with respect to patient and tumor characteristics, treatment choices and outcome in metastatic breast cancer. Methods: We included 437 patients with hormone receptor-positive IDC and 131 patients with hormone receptor-positive ILC, all diagnosed with metastatic breast cancer between 2007 and 2009, irrespective of the date of the primary diagnosis. Patient and tumor characteristics and data on treatment and outcome were collected. Survival curves were obtained using the Kaplan-Meier method. Results: Patients with ILC were older at diagnosis of primary breast cancer and more often had initial bone metastasis (46.5% versus 34.8%, P = 0.01) and less often multiple metastatic sites compared to IDC (23.7% versus 30.9%, P = 0.11). Six months after diagnosis of metastatic breast cancer, 28.1% of patients with ILC and 39.8% of patients with IDC had received chemotherapy, with a longer median time to first chemotherapy for those with ILC (P = 0.001). After six months, 84.8% of patients with ILC had received endocrine therapy versus 72.5% of patients with IDC (P = 0.0001). Median overall survival was 29 months for ILC and 25 months for IDC (P = 0.53). Conclusions: Treatment strategies for hormone receptor-positive metastatic breast cancer were remarkably different for patients with ILC and IDC. Further research is required to understand tumor behavior and treatment choices in real life.
6,360
301
[ 382, 2159, 297, 229, 203, 338, 1154 ]
8
[ "patients", "palliative", "ilc", "metastatic", "months", "positive", "idc", "hr", "breast", "breast cancer" ]
[ "adjuvant chemotherapy significantly", "lobular carcinoma ilc", "metastatic breast cancer", "breast cancer subtypes", "chemotherapy compared idc" ]
null
[CONTENT] metastatic breast cancer | histology | invasive lobular carcinoma | invasive ductal carcinoma | treatment [SUMMARY]
[CONTENT] metastatic breast cancer | histology | invasive lobular carcinoma | invasive ductal carcinoma | treatment [SUMMARY]
null
[CONTENT] metastatic breast cancer | histology | invasive lobular carcinoma | invasive ductal carcinoma | treatment [SUMMARY]
[CONTENT] metastatic breast cancer | histology | invasive lobular carcinoma | invasive ductal carcinoma | treatment [SUMMARY]
[CONTENT] metastatic breast cancer | histology | invasive lobular carcinoma | invasive ductal carcinoma | treatment [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Lobular | Disease-Free Survival | Female | Follow-Up Studies | Humans | Kaplan-Meier Estimate | Middle Aged | Receptor, ErbB-2 | Receptors, Estrogen | Receptors, Progesterone | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Lobular | Disease-Free Survival | Female | Follow-Up Studies | Humans | Kaplan-Meier Estimate | Middle Aged | Receptor, ErbB-2 | Receptors, Estrogen | Receptors, Progesterone | Treatment Outcome [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Lobular | Disease-Free Survival | Female | Follow-Up Studies | Humans | Kaplan-Meier Estimate | Middle Aged | Receptor, ErbB-2 | Receptors, Estrogen | Receptors, Progesterone | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Lobular | Disease-Free Survival | Female | Follow-Up Studies | Humans | Kaplan-Meier Estimate | Middle Aged | Receptor, ErbB-2 | Receptors, Estrogen | Receptors, Progesterone | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Antineoplastic Agents | Breast Neoplasms | Carcinoma, Ductal, Breast | Carcinoma, Lobular | Disease-Free Survival | Female | Follow-Up Studies | Humans | Kaplan-Meier Estimate | Middle Aged | Receptor, ErbB-2 | Receptors, Estrogen | Receptors, Progesterone | Treatment Outcome [SUMMARY]
[CONTENT] adjuvant chemotherapy significantly | lobular carcinoma ilc | metastatic breast cancer | breast cancer subtypes | chemotherapy compared idc [SUMMARY]
[CONTENT] adjuvant chemotherapy significantly | lobular carcinoma ilc | metastatic breast cancer | breast cancer subtypes | chemotherapy compared idc [SUMMARY]
null
[CONTENT] adjuvant chemotherapy significantly | lobular carcinoma ilc | metastatic breast cancer | breast cancer subtypes | chemotherapy compared idc [SUMMARY]
[CONTENT] adjuvant chemotherapy significantly | lobular carcinoma ilc | metastatic breast cancer | breast cancer subtypes | chemotherapy compared idc [SUMMARY]
[CONTENT] adjuvant chemotherapy significantly | lobular carcinoma ilc | metastatic breast cancer | breast cancer subtypes | chemotherapy compared idc [SUMMARY]
[CONTENT] patients | palliative | ilc | metastatic | months | positive | idc | hr | breast | breast cancer [SUMMARY]
[CONTENT] patients | palliative | ilc | metastatic | months | positive | idc | hr | breast | breast cancer [SUMMARY]
null
[CONTENT] patients | palliative | ilc | metastatic | months | positive | idc | hr | breast | breast cancer [SUMMARY]
[CONTENT] patients | palliative | ilc | metastatic | months | positive | idc | hr | breast | breast cancer [SUMMARY]
[CONTENT] patients | palliative | ilc | metastatic | months | positive | idc | hr | breast | breast cancer [SUMMARY]
[CONTENT] ilc | prognosis | idc | breast cancer | breast | cancer | found | retrospective | ilc idc | tumor [SUMMARY]
[CONTENT] patients | date | defined | cancer | metastatic | breast | breast cancer | survival | therapy | metastatic breast [SUMMARY]
null
[CONTENT] hr positive | hr | positive | treatment | histology | patients | breast | cancer | metastatic | breast cancer [SUMMARY]
[CONTENT] patients | palliative | months | ilc | metastatic | positive | idc | breast | breast cancer | cancer [SUMMARY]
[CONTENT] patients | palliative | months | ilc | metastatic | positive | idc | breast | breast cancer | cancer [SUMMARY]
[CONTENT] two [SUMMARY]
[CONTENT] 437 | 131 | ILC | between 2007-2009 ||| ||| [SUMMARY]
null
[CONTENT] ILC | IDC ||| [SUMMARY]
[CONTENT] two ||| 437 | 131 | ILC | between 2007-2009 ||| ||| ||| ILC | 46.5% | 34.8% | 0.01 | IDC | 23.7% | 30.9% | 0.11 ||| Six months | 28.1% | ILC | 39.8% | IDC | first | ILC | 0.001 ||| six months | 84.8% | ILC | 72.5% | 0.0001 ||| Median | 29 months | ILC | 25 months | 0.53 ||| ILC | IDC ||| [SUMMARY]
[CONTENT] two ||| 437 | 131 | ILC | between 2007-2009 ||| ||| ||| ILC | 46.5% | 34.8% | 0.01 | IDC | 23.7% | 30.9% | 0.11 ||| Six months | 28.1% | ILC | 39.8% | IDC | first | ILC | 0.001 ||| six months | 84.8% | ILC | 72.5% | 0.0001 ||| Median | 29 months | ILC | 25 months | 0.53 ||| ILC | IDC ||| [SUMMARY]
The Impact of Inflammatory Profile on Selenium Levels in Hemodialysis Patients.
30666918
Hemodialysis is one of the treatments of choice for patients with chronic kidney disease. The inflammatory process resulting from this disease is exacerbated by hemodialysis itself. Selenium (Se) is an essential trace element that can participate in the inhibition of pro-oxidant and pro-inflammatory processes, and its level could be considered a marker of the progression of chronic kidney disease and inflammation.
INTRODUCTION
This is an observational cross-sectional study conducted at the Faculdade de Medicina do ABC in patients undergoing hemodialysis three times a week for at least six months. The eligible group was composed of 21 patients, who filled out forms and underwent biochemical tests (colorimetric enzyme methods, flow cytometry, turbidimetric method and mass spectrometry).
METHODS
The study population comprised women (70%) and men (30%) with a mean age of 47 ± 17 years; 36% were Caucasian and 64% non-Caucasian, 68% were hypertensive, and 53% were smokers and 64% non-smokers. Systemic arterial hypertension (SAH) was more prevalent (68.1%) than diabetes mellitus (DM) (50%). Pre- and post-hemodialysis (HD) selenemia differed with statistical significance, which was not the case for C-reactive protein. There was a predominance of females in our sample; pre- and post-HD selenemia were within the normal range of the reference values; there was a statistically significant correlation between pre- and post-HD selenemia; and there was no statistically significant correlation between pre- and post-HD C-reactive protein values.
RESULTS
Our data showed no direct relationship between pre- and post-HD inflammation and pre- and post-HD selenemia.
CONCLUSION
[ "Adult", "C-Reactive Protein", "Female", "Humans", "Inflammation", "Male", "Middle Aged", "Renal Dialysis", "Renal Insufficiency, Chronic", "Selenium" ]
7460750
INTRODUCTION
Current evidence suggests that systemic arterial hypertension (SAH) and type II diabetes mellitus (DM) are the main causes of chronic kidney disease (CKD) worldwide [1-3]. CKD has a direct relationship with cardiovascular diseases and is considered a high risk factor for death and comorbidities [4-6]. Therefore, renal dysfunction is an additional target for intervention and prevention of cardiopathies [5]. Both SAH and DM are progressive and, in relation to renal function, are the main causes that contribute to the high risk of progression to end-stage renal disease. This condition requires long-term maintenance of survival therapies such as hemodialysis or renal transplantation [6]. Studies that evaluated the prevalence and incidence of CKD in the world are based on patients who started renal replacement therapy (RRT) [6-8]. In America, a 1.1% increase in incidence was detected, representing 116,395 new cases in the terminal stage [9]. In Japan, the European Union, India and sub-Saharan Africa, the number of terminal-stage patients reaches 2000, 800 and 100 per million population, respectively [1]. In Brazil, prevalence and incidence increase each year, concurrent with the increase in the number of patients on RRT [8, 9]. In the last Brazilian dialysis census, covering 2011-2014, dialysis patients numbered 112,004 [10]. Inflammation in CKD constitutes an independent risk factor for morbidity and mortality. Studies have shown that inflammation increases cardiovascular risk and mortality in patients with terminal CKD [11, 12]. Inflammation is characterized by altered levels of C-reactive protein (CRP), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), albumin and pre-albumin, which are common findings in dialysis patients. It is also postulated that the traditional risk factors for cardiovascular disease, such as hypertension, DM, dyslipidemia and obesity, are not sufficient to explain the high incidence of cardiovascular complications in terminal CKD [12, 13]. Recent findings in the literature attribute the responsibility for a worse prognosis in kidney diseases to inflammation, oxidative stress, malnutrition, accelerated atherosclerosis, and imbalance of the renin-angiotensin-aldosterone system. These findings have established an association between inflammation, malnutrition and atherosclerosis in terminal CKD [14, 15]. Beneficial effects are attributed to selenium (Se) in areas such as immunocompetence, cardioprotection and cancer prevention, and low selenium status has been associated with an increased risk of mortality and with cognitive decline [16, 17]. Tissue levels depend on daily intake and the form in which Se is consumed, as well as on the chemical form found and its metabolism [18, 19]. In addition, selenemia in people from seleniferous regions surpasses that of people from non-seleniferous areas, a difference especially noted in relation to antitumor actions [20-22]. The biological effect of Se is due to the amino acid selenocysteine, an active component of glutathione peroxidase (GPX), whose main function is to neutralize hydrogen peroxide and oxidative damage [23-25]. The chronic renal patient has deficient intestinal absorption of Se; on the other hand, there is increased urinary and dialytic loss, in addition to abnormal binding to the proteins involved in its transport. Added to that, there is a reduction in Se intake associated with restricted consumption of protein foods and with the urinary loss of albumin. 
The kidneys are responsible for the synthesis of the enzyme glutathione peroxidase (GSH-Px), a factor that contributes to the low plasma levels of this enzyme in the more advanced stages of CKD [26-28]. Zachara and collaborators have shown that reduced blood levels of Se, in association with the decrease in the plasma GSH-Px enzyme, correlate with the progression of kidney disease to the terminal stage [24]. Oxidative stress, a classical risk factor for cardiovascular events, and deficiency of selenium-dependent antioxidant mechanisms are recurrent findings in the literature on dialysis. Reduced levels of Se can contribute to endothelial dysfunction, affect coronary flow, and promote accelerated atherosclerosis. In addition to these findings, serum levels of Se are known to be inversely associated with the risk of mortality in patients on HD [17, 24, 25, 27].
null
null
RESULTS
In this study, the following variables were evaluated: blood urea, CRP and plasma Se before and after hemodialysis. The magnitude of inflammatory impairment was assessed through CRP levels, and that of renal impairment through urea levels, in addition to the variation in Se levels. Table 1 shows the baseline characteristics of the individuals evaluated in this study: women (70%), hypertensive (68%), non-Caucasian (64%) and non-smokers (64%), with a mean age of 47 ± 17 years; men accounted for 30% of the sample. SAH was more prevalent (68.1%) than DM (50%). We observed a statistically significant reduction in plasma hemoglobin and selenium levels after the hemodialysis session (post-HD) compared with pre-HD values, as shown in Table 2. We did not observe alterations in the other biochemical parameters studied.
CONCLUSION
Summing up the data of this study, we verified a predominance of females in our sample. Pre- and post-HD selenemia values were within the normal range of the literature reference values. There was a correlation between pre- and post-HD selenemia, but no correlation between either pre- or post-HD selenemia and CRP values. This suggests that the concentration of selenium does not affect CRP levels in pre- and post-HD patients.
[ "Inclusion Criteria", "Exclusion Criteria" ]
[ "Patients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included.", "Patients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded.\nThe sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease.\nDuplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP).\nThe samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine.\nUrea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer.\nAll parameters were performed following a practical clinical laboratory analysis.\nResults are shown as mean ± standard deviation of the mean. The data were analyzed by unpaired T test by STATA 15 software. The level of statistical significance was defined as p<0.05." ]
[ null, null ]
[ "INTRODUCTION", "OBJECTIVES", "MATERIALS AND METHODS", "Inclusion Criteria", "Exclusion Criteria", "RESULTS", "DISCUSSION", "CONCLUSION", "ETHICS APPROVAL AND CONSENT TO PARTICIPATE", "HUMAN AND ANIMAL RIGHTS", "CONSENT FOR PUBLICATION", "AVAILABILITY OF DATA AND MATERIALS", "FUNDING" ]
[ "Current evidence suggests that systemic arterial hypertension (SAH) and type II diabetes mellitus (DM) are the main causes of chronic kidney disease (CKD) worldwide [1-3]. CKD has a direct relationship with cardiovascular diseases and is considered a high risk factor for death and comorbidities [4-6]. Therefore, renal dysfunction is an additional target for intervention and prevention of cardiopathies [5].\nBoth SAH and DM are progressive, and in relation to renal function, are the main causes that contribute to the high risk of progression to end-stage renal disease. This condition requires long-term maintenance of survival therapies such as hemodialysis or renal transplantation [6].\nStudies that evaluated the prevalence and incidence of CKD in the world are based on patients who started renal replacement therapy (RRT) [6-8]. In America, a 1.1% increase in incidence was detected, representing 116.395 new cases in the terminal stage [9]. In Japan, the European Union, India and sub-Saharan Africa, the number of TS patients reaches 2000, 800 and 100 per million of the population, respectively [1]. In Brazil, prevalence and incidence increase each year, concurrent with the increase in the number of patients in RRT [8, 9]. In the last Brazilian census of dialysis of 2011-2014, dialytics accounted for 112,004 [10].\nPatients with inflammatory CKD constitute an independent risk factor for morbidity and mortality. Studies have shown that inflammation increases cardiovascular risk and mortality in patients with terminal CKD [11, 12]. Inflammation is characterized by elevated levels of C-reactive protein (CRP), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), albumin and pre-albumin that constitute common findings in dialysis patients. It is also postulated that the traditional risk factors for cardiovascular disease such as hypertension, DM, dyslipidemia and obesity would not be sufficient to explain the high incidence of cardiovascular complications in terminal CKD [12, 13].\nRecent findings in the literature attribute to inflammation, oxidative stress, malnutrition, accelerated atherosclerosis as well as renin angiotensin aldosterone system imbalance the responsibility for worse prognosis in kidney diseases. These findings have established an association between inflammation, malnutrition and atherosclerosis in terminal CKD [14, 15]. Beneficial effects are attributed to selenium (Se) in areas such as immuno-competence, cardio-protection and cancer prevention. And low selenium status has been associated with increased risk of mortality, and cognitive decline [16, 17].\nThe tissue levels depend on daily intake and the form in which Se is consumed, as well as on the chemical form found and metabolism [18, 19]. Associated with this is also the case that selenemia in people from seleniferous regions surpasses those of non-seleniferous areas, especially when observed with antitumor actions [20-22].\nThe biological effect of Se is due to the amino acid selenocysteine, an active component of glutathione peroxidase (GPX), whose main function is to neutralize hydrogen peroxide and oxidative damage [23-25].\nThe chronic renal patient has deficient intestinal absorption of Se, on the other hand, there is increased urinary and dialytic loss, in addition to abnormal binding to the proteins involved in its transport. In addition to that, there is a reduction in the intake of Se associated with restricted consumption of protein foods and the urinary loss of albumin. 
The kidneys are responsible for the synthesis of the enzyme glutathione peroxidase (GSH-Px), a factor that contributes to the low plasma levels of this enzyme in the more advanced stages of CKF [26-28].\nZachara and collaborators have shown that reduced blood levels of Se in association with the decrease of plasma GSH-Px enzyme are correlated with the progression of the kidney disease to the terminal stage [24]. Oxidative stress, a classical risk factor for cardiovascular events and deficiency of selenium antioxidant mechanisms, compose reports from the literature on dialysis. Reduced levels of Se can contribute to endothelial dysfunction, affect coronary flow, and promote accelerated atherosclerosis. In addition to these findings, serum levels of Se are known to be inversely associated with the risk of mortality in patients in HD [17, 24, 25, 27].", "Our aim was to investigate serum levels of selenium in hemodialysis patients and to establish the correlation between the inflammatory marker and selenemia in hemodialysis patients.", "This is an observational cross-sectional study of patients at the Reference Center for Nephrology at the Mário Covas State Hospital of the Faculdade de Medicina do ABC submitted to hemodialysis 3 times a week for at least 6 months to investigate the correlation between serum Se levels and inflammatory markers.\nThe eligible group was comprised of 22 patients who filled out forms and submitted to laboratory tests. Individual blood samples were collected for the analysis of inflammatory markers of the serum concentration of Se.\n Inclusion Criteria Patients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included.\nPatients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included.\n Exclusion Criteria Patients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded.\nThe sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease.\nDuplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP).\nThe samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine.\nUrea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer.\nAll parameters were performed following a practical clinical laboratory analysis.\nResults are shown as mean ± standard deviation of the mean. The data were analyzed by unpaired T test by STATA 15 software. 
The level of statistical significance was defined as p<0.05.\nPatients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded.\nThe sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease.\nDuplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP).\nThe samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine.\nUrea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer.\nAll parameters were performed following a practical clinical laboratory analysis.\nResults are shown as mean ± standard deviation of the mean. The data were analyzed by unpaired T test by STATA 15 software. The level of statistical significance was defined as p<0.05.", "Patients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included.", "Patients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded.\nThe sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease.\nDuplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP).\nThe samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine.\nUrea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer.\nAll parameters were performed following a practical clinical laboratory analysis.\nResults are shown as mean ± standard deviation of the mean. The data were analyzed by unpaired T test by STATA 15 software. 
The level of statistical significance was defined as p<0.05.", "In this study, the following variables were evaluated: blood urea, CRP and plasma Se before and after hemodialysis. The magnitude of inflammatory impairment was assessed through CRP levels and of the renal impairment through the levels of urea, besides the variation in the levels of Se.\nTable 1 shows the baseline characteristics of the individuals evaluated in this study, the percentage values were: women (70%), hypertensive (68%), non-Caucasian (64%) and non-smokers (64%), with a mean age of 47 ± 17 years, contrasting with men (30%), non-caucasian, smokers. There was a hegemonic prevalence of SAH 68.1% in relation to DM (50%).\nWe observed a statistical reduction in plasma hemoglobin and selenium levels after the hemodialysis session post HD compared to pre HD as shown in Table 2. We did not observe an alteration in the other biochemical parameters studied.", "The demographic characteristics of the study differed from those of other studies conducted in other countries [28-34].\nA prevalence of 68% of SAH was found in this study, a result that was lower than several published studies such as Portolés et al. [34], which showed a percentage of 75.8% in Spain within a decade, the study by Ohsawa et al. [33] in Japan (87%), and the study by Agarwal et al. [35] in the USA, in which 86% of the patients were hypertensive.\nTwo meta-analyses emphasize the importance of treating hypertension in hemodialysis patients, with the reduction of all-cause mortality and morbidity in those treated with antihypertensive drugs [36, 37].\nA prevalence of 50% of DM in this study was higher than the USRDS report, which showed that 38.5% of patients undergoing hemodialysis in the USA are diabetic [38]. Other study [39], this prevalence ranged from 26% to 43%, with the exception of the CHOICE study [40] with a prevalence of 54% of diabetics. Among the individual cardiovascular risk factors, smoking has a significant association with the incidence of cardiac events.\nIn our study, the prevalence of smoking was 36%, differing from others in the literature which ranged from 40% to 61% in CHOICE [40] and 65% in ANSWER [39]. It is possible that this difference between the data found in this study and others already described present different population, in diet and period studied. We did not identify differences in selenium or CRP levels when we separated gender.\nIn the population sample of ABC, pre-HD selenemia was at the lower limit, wherein the selenemia reduced before hemodialysis. The variability of these values stems from innumerable factors, which can be due to the selenium content in the soil of each region, which determines the concentration of this element in foods of plant origin and subsequently those of animal origin, both terrestrial and aquatic [21].\nThe values in the northeastern region has consubstantiated to the consumption of Brazil nuts, which contain high amounts of selenium due to the high concentration of this micronutrient in the soil. Stockler-Pinto et al. [20-22], determined the selenium content in foods consumed in Brazil from samples taken from the retail trade in several states and showed higher levels of selenium in plasma of hemodialysis patients. Selenium contents in foods of plant origin were observed to be generally below 5.0μg/100g. In Brazil, the presence of fish, mainly, and other products of animal origin in the diet is important to guarantee the recommended levels of selenium. 
This variability evidences the lack of homogeneity of the results due to the factors, already reported, that influence the concentration of Se, especially dietary habits.\nNormal values of Se are needed to saturate GPX, resulting in maximum activity of the enzyme [41-43]. What remains unknown in this sample are the participants' eating habits, since these were not captured by the applied questionnaire. There was a statistically significant reduction of selenium between pre- and post-HD, which is consistent with the literature showing that HD promotes a decrease in this micronutrient. Selenium deficiency is a constant in dialysis patients (Table 3).\nSerum selenium levels are markedly low in hemodialysis patients [44] compared to non-dialysis patients; in view of the essential role of selenium in GPX activity, selenium deficiency is associated with increased cancer [45] and coronary artery disease [46] in the general population. Selenium is an essential micronutrient with known antioxidant properties [42, 43]. Prabhu et al. (2002) suggested that dietary Se supplementation can be used to prevent and/or treat inflammatory diseases mediated by oxidative stress, considering that HD patients present an oxidative imbalance and increased inflammation. It is possible that selenium supplementation is a viable treatment in this case [42]. Furthermore, strongly positive correlations between selenium levels and nutritional markers such as serum albumin have been reported [44, 47].\nTherefore, selenium deficiency may also contribute to malnutrition in patients with HD [48]. The decrease in post-HD levels of selenium did not promote any clinically detectable infectious alterations during the observational period, since pre- and post-HD values of C-reactive protein did not show statistical significance. Thus, our data paradoxically showed no correlation between selenemia and inflammation.", "Summing up the data of this study, we verified that there was a predominance of females in our sample. Pre- and post-HD selenemia values were within the normal range of the literature reference values. There is a correlation between pre- and post-HD selenemia, but not between pre-HD and post-HD selenemia and CRP values. Therefore, this suggests that the concentration of selenium does not affect CRP levels in pre- and post-HD patients.", "This study was approved by the Institutional Research Ethics Committee of Faculdade de Medicina do ABC, Brazil, under Protocol no. 2.302.284.", "The present study was carried out in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), and with the Helsinki Declaration of 1975, as revised in 2013.", "Patients who agreed to participate in the study were given a free and informed consent form (FICF) with the appropriate explanations of the adopted protocols and the informed consent forms were signed by the participants.", "The data that support the findings of this study are available from the corresponding author, [G.L. da V.] upon reasonable request.", "None." ]
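The methods and results above describe the statistical analysis only in outline (unpaired t-test on pre- versus post-HD values, significance at p<0.05, run in STATA 15, results reported as mean ± standard deviation). As a minimal sketch only, not the authors' code, the same comparison could be reproduced in Python; the lists `se_pre_hd` and `se_post_hd` below are hypothetical placeholder values, not study data, and SciPy is used simply as one common implementation of the test.

```python
# Minimal sketch of the reported analysis: unpaired t-test, alpha = 0.05.
# se_pre_hd / se_post_hd are hypothetical example values, NOT data from the study.
from statistics import mean, stdev
from scipy import stats

se_pre_hd = [62.1, 58.4, 70.3, 55.9, 66.0, 61.7]   # illustrative pre-HD selenium values
se_post_hd = [54.2, 50.1, 63.5, 49.8, 58.7, 53.9]  # illustrative post-HD selenium values

# The article reports results as mean +/- standard deviation.
for label, values in (("pre-HD", se_pre_hd), ("post-HD", se_post_hd)):
    print(f"{label}: {mean(values):.1f} +/- {stdev(values):.1f}")

# Unpaired (independent-samples) t-test, as stated in the methods.
t_stat, p_value = stats.ttest_ind(se_pre_hd, se_post_hd)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```

For pre/post measurements taken on the same patients a paired test would also be a natural design choice; the unpaired test is shown here only because that is what the methods report.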
[ "intro", "other1", "materials|methods", null, null, "results", "discussion", "conclusions", "other2", "other3", "other4", "data-availability", "other5" ]
[ "Chronic kidney disease", "hemodialysis", "selenium", "pro-oxidant", "inflammation", "systemic arterial hypertension" ]
INTRODUCTION: Current evidence suggests that systemic arterial hypertension (SAH) and type II diabetes mellitus (DM) are the main causes of chronic kidney disease (CKD) worldwide [1-3]. CKD has a direct relationship with cardiovascular diseases and is considered a high risk factor for death and comorbidities [4-6]. Therefore, renal dysfunction is an additional target for intervention and prevention of cardiopathies [5]. Both SAH and DM are progressive, and in relation to renal function, are the main causes that contribute to the high risk of progression to end-stage renal disease. This condition requires long-term maintenance of survival therapies such as hemodialysis or renal transplantation [6]. Studies that evaluated the prevalence and incidence of CKD in the world are based on patients who started renal replacement therapy (RRT) [6-8]. In America, a 1.1% increase in incidence was detected, representing 116.395 new cases in the terminal stage [9]. In Japan, the European Union, India and sub-Saharan Africa, the number of TS patients reaches 2000, 800 and 100 per million of the population, respectively [1]. In Brazil, prevalence and incidence increase each year, concurrent with the increase in the number of patients in RRT [8, 9]. In the last Brazilian census of dialysis of 2011-2014, dialytics accounted for 112,004 [10]. Patients with inflammatory CKD constitute an independent risk factor for morbidity and mortality. Studies have shown that inflammation increases cardiovascular risk and mortality in patients with terminal CKD [11, 12]. Inflammation is characterized by elevated levels of C-reactive protein (CRP), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), albumin and pre-albumin that constitute common findings in dialysis patients. It is also postulated that the traditional risk factors for cardiovascular disease such as hypertension, DM, dyslipidemia and obesity would not be sufficient to explain the high incidence of cardiovascular complications in terminal CKD [12, 13]. Recent findings in the literature attribute to inflammation, oxidative stress, malnutrition, accelerated atherosclerosis as well as renin angiotensin aldosterone system imbalance the responsibility for worse prognosis in kidney diseases. These findings have established an association between inflammation, malnutrition and atherosclerosis in terminal CKD [14, 15]. Beneficial effects are attributed to selenium (Se) in areas such as immuno-competence, cardio-protection and cancer prevention. And low selenium status has been associated with increased risk of mortality, and cognitive decline [16, 17]. The tissue levels depend on daily intake and the form in which Se is consumed, as well as on the chemical form found and metabolism [18, 19]. Associated with this is also the case that selenemia in people from seleniferous regions surpasses those of non-seleniferous areas, especially when observed with antitumor actions [20-22]. The biological effect of Se is due to the amino acid selenocysteine, an active component of glutathione peroxidase (GPX), whose main function is to neutralize hydrogen peroxide and oxidative damage [23-25]. The chronic renal patient has deficient intestinal absorption of Se, on the other hand, there is increased urinary and dialytic loss, in addition to abnormal binding to the proteins involved in its transport. In addition to that, there is a reduction in the intake of Se associated with restricted consumption of protein foods and the urinary loss of albumin. 
The kidneys are responsible for the synthesis of the enzyme glutathione peroxidase (GSH-Px), a factor that contributes to the low plasma levels of this enzyme in the more advanced stages of CKF [26-28]. Zachara and collaborators have shown that reduced blood levels of Se in association with the decrease of plasma GSH-Px enzyme are correlated with the progression of the kidney disease to the terminal stage [24]. Oxidative stress, a classical risk factor for cardiovascular events and deficiency of selenium antioxidant mechanisms, compose reports from the literature on dialysis. Reduced levels of Se can contribute to endothelial dysfunction, affect coronary flow, and promote accelerated atherosclerosis. In addition to these findings, serum levels of Se are known to be inversely associated with the risk of mortality in patients in HD [17, 24, 25, 27]. OBJECTIVES: Our aim was to investigate serum levels of selenium in hemodialysis patients and to establish the correlation between the inflammatory marker and selenemia in hemodialysis patients. MATERIALS AND METHODS: This is an observational cross-sectional study of patients at the Reference Center for Nephrology at the Mário Covas State Hospital of the Faculdade de Medicina do ABC submitted to hemodialysis 3 times a week for at least 6 months to investigate the correlation between serum Se levels and inflammatory markers. The eligible group was comprised of 22 patients who filled out forms and submitted to laboratory tests. Individual blood samples were collected for the analysis of inflammatory markers of the serum concentration of Se. Inclusion Criteria Patients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included. Patients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included. Exclusion Criteria Patients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded. The sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease. Duplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP). The samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine. Urea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer. All parameters were performed following a practical clinical laboratory analysis. Results are shown as mean ± standard deviation of the mean. 
The data were analyzed by unpaired T test by STATA 15 software. The level of statistical significance was defined as p<0.05. Patients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded. The sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease. Duplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP). The samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine. Urea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer. All parameters were performed following a practical clinical laboratory analysis. Results are shown as mean ± standard deviation of the mean. The data were analyzed by unpaired T test by STATA 15 software. The level of statistical significance was defined as p<0.05. Inclusion Criteria: Patients over 21 years of age who underwent hemodialysis 3 times a week for at least 6 months and who signed the informed consent form were included. Exclusion Criteria: Patients undergoing antibiotic therapy, with chronic and acute inflammatory disorders, zinc supplementation, who were discharged from the hospital 30 days prior to collection, patients with human immunodeficiency virus or acquired immunodeficiency syndrome, and those with viral hepatitis B and C, who had rheumatoid arthritis, scleroderma, systemic lupus erythematosus and who were undergoing continuous corticosteroid therapy were excluded. The sample size was calculated for convenience because it did not have the expected value for the difference between patients with dialytic renal disease. Duplicate, individualized blood samples were collected from eligible patients for the analysis of routine parameters: urea, creatinine, serum albumin, complete blood count, in addition to serum Se concentration and C-reactive protein (CRP). The samples were kept in a freezer at -20°C and later processed in automatic analyzers from the Laboratory of Clinical Analysis of the School of Medicine. Urea, creatinine and albumin were evaluated by using colorimetric enzyme methods in Cobas 6000® apparatus (Roche®). The total blood counting cells were performed using flow cytometer in ABX Penta 120® apparatus (HORIBA®). CRP ultrasensitive was measured by a turbidimetric method in Immulite 1000® (Siemens®). Finally, selenium concentration was determined by ICP-MS plasma inductively coupled to mass spectrometer. All parameters were performed following a practical clinical laboratory analysis. Results are shown as mean ± standard deviation of the mean. 
The data were analyzed by unpaired T test by STATA 15 software. The level of statistical significance was defined as p<0.05. RESULTS: In this study, the following variables were evaluated: blood urea, CRP and plasma Se before and after hemodialysis. The magnitude of inflammatory impairment was assessed through CRP levels and that of renal impairment through urea levels, in addition to the variation in Se levels. Table 1 shows the baseline characteristics of the individuals evaluated in this study; the percentage values were: women (70%), hypertensive (68%), non-Caucasian (64%) and non-smokers (64%), with a mean age of 47 ± 17 years, contrasting with men (30%), non-Caucasian, smokers. There was a predominant prevalence of SAH (68.1%) in relation to DM (50%). We observed a statistically significant reduction in plasma hemoglobin and selenium levels after the hemodialysis session (post-HD) compared to pre-HD, as shown in Table 2. We did not observe an alteration in the other biochemical parameters studied. DISCUSSION: The demographic characteristics of the study differed from those of other studies conducted in other countries [28-34]. A prevalence of 68% of SAH was found in this study, a result that was lower than several published studies such as Portolés et al. [34], which showed a percentage of 75.8% in Spain within a decade, the study by Ohsawa et al. [33] in Japan (87%), and the study by Agarwal et al. [35] in the USA, in which 86% of the patients were hypertensive. Two meta-analyses emphasize the importance of treating hypertension in hemodialysis patients, with the reduction of all-cause mortality and morbidity in those treated with antihypertensive drugs [36, 37]. A prevalence of 50% of DM in this study was higher than the USRDS report, which showed that 38.5% of patients undergoing hemodialysis in the USA are diabetic [38]. In another study [39], this prevalence ranged from 26% to 43%, with the exception of the CHOICE study [40] with a prevalence of 54% of diabetics. Among the individual cardiovascular risk factors, smoking has a significant association with the incidence of cardiac events. In our study, the prevalence of smoking was 36%, differing from others in the literature, which ranged from 40% to 61% in CHOICE [40] and 65% in ANSWER [39]. It is possible that this difference between the data found in this study and those already described reflects differences in population, diet and period studied. We did not identify differences in selenium or CRP levels when we stratified by gender. In the ABC population sample, pre-HD selenemia was already at the lower limit, and selenemia was further reduced by hemodialysis. The variability of these values stems from innumerable factors, which can be due to the selenium content in the soil of each region, which determines the concentration of this element in foods of plant origin and subsequently those of animal origin, both terrestrial and aquatic [21]. The higher values in the northeastern region have been attributed to the consumption of Brazil nuts, which contain high amounts of selenium due to the high concentration of this micronutrient in the soil. Stockler-Pinto et al. [20-22] determined the selenium content in foods consumed in Brazil from samples taken from the retail trade in several states and showed higher levels of selenium in the plasma of hemodialysis patients. Selenium contents in foods of plant origin were observed to be generally below 5.0μg/100g. 
In Brazil, the presence of fish, mainly, and other products of animal origin in the diet is important to guarantee the recommended levels of selenium. This variability evidences the lack of homogeneity of the results due to the factors, already reported, that influence the concentration of Se, especially dietary habits. Normal values of Se are needed to saturate GPX, resulting in maximum activity of the enzyme [41-43]. What remains unknown in this sample are the participants' eating habits, since these were not captured by the applied questionnaire. There was a statistically significant reduction of selenium between pre- and post-HD, which is consistent with the literature showing that HD promotes a decrease in this micronutrient. Selenium deficiency is a constant in dialysis patients (Table 3). Serum selenium levels are markedly low in hemodialysis patients [44] compared to non-dialysis patients; in view of the essential role of selenium in GPX activity, selenium deficiency is associated with increased cancer [45] and coronary artery disease [46] in the general population. Selenium is an essential micronutrient with known antioxidant properties [42, 43]. Prabhu et al. (2002) suggested that dietary Se supplementation can be used to prevent and/or treat inflammatory diseases mediated by oxidative stress, considering that HD patients present an oxidative imbalance and increased inflammation. It is possible that selenium supplementation is a viable treatment in this case [42]. Furthermore, strongly positive correlations between selenium levels and nutritional markers such as serum albumin have been reported [44, 47]. Therefore, selenium deficiency may also contribute to malnutrition in patients with HD [48]. The decrease in post-HD levels of selenium did not promote any clinically detectable infectious alterations during the observational period, since pre- and post-HD values of C-reactive protein did not show statistical significance. Thus, our data paradoxically showed no correlation between selenemia and inflammation. CONCLUSION: Summing up the data of this study, we verified that there was a predominance of females in our sample. Pre- and post-HD selenemia values were within the normal range of the literature reference values. There is a correlation between pre- and post-HD selenemia, but not between pre-HD and post-HD selenemia and CRP values. Therefore, this suggests that the concentration of selenium does not affect CRP levels in pre- and post-HD patients. ETHICS APPROVAL AND CONSENT TO PARTICIPATE: This study was approved by the Institutional Research Ethics Committee of Faculdade de Medicina do ABC, Brazil, under Protocol no. 2.302.284. HUMAN AND ANIMAL RIGHTS: The present study was carried out in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), and with the Helsinki Declaration of 1975, as revised in 2013. CONSENT FOR PUBLICATION: Patients who agreed to participate in the study were given a free and informed consent form (FICF) with the appropriate explanations of the adopted protocols and the informed consent forms were signed by the participants. AVAILABILITY OF DATA AND MATERIALS: The data that support the findings of this study are available from the corresponding author, [G.L. da V.] upon reasonable request. FUNDING: None.
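The conclusion above refers to correlations between pre- and post-HD selenemia, and between selenemia and CRP. Purely as an illustrative sketch (not the authors' code; all values below are hypothetical placeholders, and Pearson's r is assumed as the correlation measure since the article does not specify one), such correlations could be computed as follows:

```python
# Illustrative sketch: correlations between pre-/post-HD selenemia and CRP.
# All values are hypothetical placeholders, NOT data from the study.
from scipy.stats import pearsonr

se_pre  = [62.1, 58.4, 70.3, 55.9, 66.0, 61.7]   # pre-HD selenium, illustrative
se_post = [54.2, 50.1, 63.5, 49.8, 58.7, 53.9]   # post-HD selenium, illustrative
crp_pre = [4.8, 6.1, 3.2, 7.4, 5.0, 4.1]         # pre-HD CRP (mg/L), illustrative

r_se, p_se = pearsonr(se_pre, se_post)     # pre- vs. post-HD selenemia
r_crp, p_crp = pearsonr(se_pre, crp_pre)   # pre-HD selenemia vs. pre-HD CRP

print(f"pre- vs post-HD selenemia: r = {r_se:.2f}, p = {p_se:.3f}")
print(f"pre-HD selenemia vs CRP:   r = {r_crp:.2f}, p = {p_crp:.3f}")
```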
Background: Hemodialysis stands out as an eligible treatment for patients with chronic kidney disease. The subsequent inflammatory process resulting from this disease and hemodialysis per se is exacerbated in this therapy. Selenium (Se) is an essential trace element that can participate in the inhibition of pro-oxidant and pro-inflammatory processes and could be considered a measurement that indicates the progression of chronic kidney disease and inflammation. Methods: This is an observational cross-sectional study of the Faculdade de Medicina do ABC in patients submitted to hemodialysis three times a week for at least six months. The eligible group was composed of 21 patients, who filled out forms and underwent biochemical tests (colorimetric enzyme methods, flow cytometry, turbidimetric method and mass spectrometry). Results: The study population comprised women (70%) and men (30%) with a mean age of 47 ± 17 years, Caucasians (36%) and non-Caucasians (64%), hypertensive (68%), smokers (53%) and non-smokers (64%). There was a predominant prevalence of systemic arterial hypertension (SAH) (68.1%) in relation to diabetes mellitus (DM) (50%). Pre- and post-hemodialysis (HD) selenemia differed with statistical significance, which did not occur with C-reactive protein. There was a predominance of females in our sample; the pre- and post-HD selenemia were within the normal range of the reference values; there was a statistically significant correlation between pre- and post-HD selenemia; there was no correlation with statistical significance between values of pre- and post-HD C-reactive protein. Conclusions: Our data showed that there was no direct relationship between pre- and post-HD inflammation and pre- and post-HD selenemia.
INTRODUCTION: Current evidence suggests that systemic arterial hypertension (SAH) and type II diabetes mellitus (DM) are the main causes of chronic kidney disease (CKD) worldwide [1-3]. CKD has a direct relationship with cardiovascular diseases and is considered a high risk factor for death and comorbidities [4-6]. Therefore, renal dysfunction is an additional target for intervention and prevention of cardiopathies [5]. Both SAH and DM are progressive, and in relation to renal function, are the main causes that contribute to the high risk of progression to end-stage renal disease. This condition requires long-term maintenance of survival therapies such as hemodialysis or renal transplantation [6]. Studies that evaluated the prevalence and incidence of CKD in the world are based on patients who started renal replacement therapy (RRT) [6-8]. In America, a 1.1% increase in incidence was detected, representing 116.395 new cases in the terminal stage [9]. In Japan, the European Union, India and sub-Saharan Africa, the number of TS patients reaches 2000, 800 and 100 per million of the population, respectively [1]. In Brazil, prevalence and incidence increase each year, concurrent with the increase in the number of patients in RRT [8, 9]. In the last Brazilian census of dialysis of 2011-2014, dialytics accounted for 112,004 [10]. Patients with inflammatory CKD constitute an independent risk factor for morbidity and mortality. Studies have shown that inflammation increases cardiovascular risk and mortality in patients with terminal CKD [11, 12]. Inflammation is characterized by elevated levels of C-reactive protein (CRP), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), albumin and pre-albumin that constitute common findings in dialysis patients. It is also postulated that the traditional risk factors for cardiovascular disease such as hypertension, DM, dyslipidemia and obesity would not be sufficient to explain the high incidence of cardiovascular complications in terminal CKD [12, 13]. Recent findings in the literature attribute to inflammation, oxidative stress, malnutrition, accelerated atherosclerosis as well as renin angiotensin aldosterone system imbalance the responsibility for worse prognosis in kidney diseases. These findings have established an association between inflammation, malnutrition and atherosclerosis in terminal CKD [14, 15]. Beneficial effects are attributed to selenium (Se) in areas such as immuno-competence, cardio-protection and cancer prevention. And low selenium status has been associated with increased risk of mortality, and cognitive decline [16, 17]. The tissue levels depend on daily intake and the form in which Se is consumed, as well as on the chemical form found and metabolism [18, 19]. Associated with this is also the case that selenemia in people from seleniferous regions surpasses those of non-seleniferous areas, especially when observed with antitumor actions [20-22]. The biological effect of Se is due to the amino acid selenocysteine, an active component of glutathione peroxidase (GPX), whose main function is to neutralize hydrogen peroxide and oxidative damage [23-25]. The chronic renal patient has deficient intestinal absorption of Se, on the other hand, there is increased urinary and dialytic loss, in addition to abnormal binding to the proteins involved in its transport. In addition to that, there is a reduction in the intake of Se associated with restricted consumption of protein foods and the urinary loss of albumin. 
The kidneys are responsible for the synthesis of the enzyme glutathione peroxidase (GSH-Px), a factor that contributes to the low plasma levels of this enzyme in the more advanced stages of CKF [26-28]. Zachara and collaborators have shown that reduced blood levels of Se, in association with the decrease of the plasma GSH-Px enzyme, are correlated with the progression of the kidney disease to the terminal stage [24]. Oxidative stress, a classical risk factor for cardiovascular events, and deficiency of selenium-dependent antioxidant mechanisms are recurrent findings in the literature on dialysis. Reduced levels of Se can contribute to endothelial dysfunction, affect coronary flow, and promote accelerated atherosclerosis. In addition to these findings, serum levels of Se are known to be inversely associated with the risk of mortality in patients on HD [17, 24, 25, 27]. CONCLUSION: Summing up the data of this study, we verified that there was a predominance of females in our sample. Pre- and post-HD selenemia values were within the normal range of the literature reference values. There is a correlation between pre- and post-HD selenemia, but not between pre-HD and post-HD selenemia and CRP values. Therefore, this suggests that the concentration of selenium does not affect CRP levels in pre- and post-HD patients.
Background: Hemodialysis stands out as an eligible treatment for patients with chronic kidney disease. The subsequent inflammatory process resulting from this disease and hemodialysis per se is exacerbated in this therapy. Selenium (Se) is an essential trace element that can participate in the inhibition of pro-oxidant and pro-inflammatory processes and could be considered a measurement that indicates the progression of chronic kidney disease and inflammation. Methods: This is an observational cross-sectional study of the Faculdade de Medicina do ABC in patients submitted to hemodialysis three times a week for at least six months. The eligible group was composed of 21 patients, who filled out forms and underwent biochemical tests (colorimetric enzyme methods, flow cytometry, turbidimetric method and mass spectrometry). Results: The study population comprised women (70%) and men (30%) with a mean age of 47 ± 17 years, Caucasians (36%) and non-Caucasians (64%), hypertensive (68%), smokers (53%) and non-smokers (64%). There was a predominant prevalence of systemic arterial hypertension (SAH) (68.1%) in relation to diabetes mellitus (DM) (50%). Pre- and post-hemodialysis (HD) selenemia differed with statistical significance, which did not occur with C-reactive protein. There was a predominance of females in our sample; the pre- and post-HD selenemia were within the normal range of the reference values; there was a statistically significant correlation between pre- and post-HD selenemia; there was no correlation with statistical significance between values of pre- and post-HD C-reactive protein. Conclusions: Our data showed that there was no direct relationship between pre- and post-HD inflammation and pre- and post-HD selenemia.
3,244
339
[ 28, 292 ]
13
[ "patients", "selenium", "levels", "se", "study", "hd", "hemodialysis", "crp", "blood", "serum" ]
[ "hemodialysis usa diabetic", "causes chronic kidney", "comorbidities renal dysfunction", "treating hypertension hemodialysis", "prognosis kidney diseases" ]
null
[CONTENT] Chronic kidney disease | hemodialysis | selenium | pro-oxidant | inflammation | systemic arterial hypertension [SUMMARY]
null
[CONTENT] Chronic kidney disease | hemodialysis | selenium | pro-oxidant | inflammation | systemic arterial hypertension [SUMMARY]
[CONTENT] Chronic kidney disease | hemodialysis | selenium | pro-oxidant | inflammation | systemic arterial hypertension [SUMMARY]
[CONTENT] Chronic kidney disease | hemodialysis | selenium | pro-oxidant | inflammation | systemic arterial hypertension [SUMMARY]
[CONTENT] Chronic kidney disease | hemodialysis | selenium | pro-oxidant | inflammation | systemic arterial hypertension [SUMMARY]
[CONTENT] Adult | C-Reactive Protein | Female | Humans | Inflammation | Male | Middle Aged | Renal Dialysis | Renal Insufficiency, Chronic | Selenium [SUMMARY]
null
[CONTENT] Adult | C-Reactive Protein | Female | Humans | Inflammation | Male | Middle Aged | Renal Dialysis | Renal Insufficiency, Chronic | Selenium [SUMMARY]
[CONTENT] Adult | C-Reactive Protein | Female | Humans | Inflammation | Male | Middle Aged | Renal Dialysis | Renal Insufficiency, Chronic | Selenium [SUMMARY]
[CONTENT] Adult | C-Reactive Protein | Female | Humans | Inflammation | Male | Middle Aged | Renal Dialysis | Renal Insufficiency, Chronic | Selenium [SUMMARY]
[CONTENT] Adult | C-Reactive Protein | Female | Humans | Inflammation | Male | Middle Aged | Renal Dialysis | Renal Insufficiency, Chronic | Selenium [SUMMARY]
[CONTENT] hemodialysis usa diabetic | causes chronic kidney | comorbidities renal dysfunction | treating hypertension hemodialysis | prognosis kidney diseases [SUMMARY]
null
[CONTENT] hemodialysis usa diabetic | causes chronic kidney | comorbidities renal dysfunction | treating hypertension hemodialysis | prognosis kidney diseases [SUMMARY]
[CONTENT] hemodialysis usa diabetic | causes chronic kidney | comorbidities renal dysfunction | treating hypertension hemodialysis | prognosis kidney diseases [SUMMARY]
[CONTENT] hemodialysis usa diabetic | causes chronic kidney | comorbidities renal dysfunction | treating hypertension hemodialysis | prognosis kidney diseases [SUMMARY]
[CONTENT] hemodialysis usa diabetic | causes chronic kidney | comorbidities renal dysfunction | treating hypertension hemodialysis | prognosis kidney diseases [SUMMARY]
[CONTENT] patients | selenium | levels | se | study | hd | hemodialysis | crp | blood | serum [SUMMARY]
null
[CONTENT] patients | selenium | levels | se | study | hd | hemodialysis | crp | blood | serum [SUMMARY]
[CONTENT] patients | selenium | levels | se | study | hd | hemodialysis | crp | blood | serum [SUMMARY]
[CONTENT] patients | selenium | levels | se | study | hd | hemodialysis | crp | blood | serum [SUMMARY]
[CONTENT] patients | selenium | levels | se | study | hd | hemodialysis | crp | blood | serum [SUMMARY]
[CONTENT] ckd | risk | se | terminal | factor | cardiovascular | renal | inflammation | incidence | associated [SUMMARY]
null
[CONTENT] levels | non | impairment | non caucasian | 64 | smokers | caucasian | table | 68 | urea [SUMMARY]
[CONTENT] hd | post hd | post | pre | pre post | pre post hd | values | selenemia | post hd selenemia | pre post hd selenemia [SUMMARY]
[CONTENT] patients | study | hd | selenium | hemodialysis | levels | consent | informed | informed consent | post hd [SUMMARY]
[CONTENT] patients | study | hd | selenium | hemodialysis | levels | consent | informed | informed consent | post hd [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 70% | 30% | 17 years | Caucasians | 36% | non-Caucasian | 64% | 68% | 53% | 64% ||| SAH | 68.1% | 50% ||| Creactive ||| [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| the Faculdade de Medicina | ABC | three | at least six months ||| 21 ||| ||| 70% | 30% | 17 years | Caucasians | 36% | non-Caucasian | 64% | 68% | 53% | 64% ||| SAH | 68.1% | 50% ||| Creactive ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| the Faculdade de Medicina | ABC | three | at least six months ||| 21 ||| ||| 70% | 30% | 17 years | Caucasians | 36% | non-Caucasian | 64% | 68% | 53% | 64% ||| SAH | 68.1% | 50% ||| Creactive ||| ||| [SUMMARY]
Nutrition Interventions for Children with Cerebral Palsy in Low- and Middle-Income Countries: A Scoping Review.
35334869
Malnutrition is substantially higher among children with cerebral palsy (CP) in low- and middle-income countries (LMICs) when compared with the general population. Access to appropriate interventions is crucial for better management of malnutrition and nutritional outcomes of those children. We aimed to review the existing evidence on nutrition interventions for children with CP in LMICs.
BACKGROUND
Online databases, i.e., PubMed and Scopus, and Google Scholar were searched up to 10 January 2022, to identify peer-reviewed publications/evidence on LMIC focused nutritional management guidelines/interventions. Following title screening and abstract review, full articles that met the inclusion/exclusion criteria were retained for data charting. Information about the study characteristics, nutrition interventions, and their effectiveness were extracted. Descriptive data were reported.
METHODS
Eight articles published between 2008 and 2019 were included with data from a total of n = 252 children with CP (age range: 1 y 0 m-18 y 7 m, 42% female). Five studies followed an experimental design; n = 6 were conducted in hospital/clinic/center-based settings. Four studies focused on parental/caregiver training; n = 2 studies had surgical interventions (i.e., gastrostomy) and n = 1 provided a neurodevelopmental therapy feeding intervention. Dietary modification as an intervention (or component) was reported in n = 5 studies and had a better effect on the nutritional outcomes of children with CP compared to interventions focused on feeding skills or other behavioral modifications. Surgical interventions improved nutritional outcomes in both studies; however, neither documented any adverse consequences of the surgical interventions.
RESULTS
There is a substantial knowledge gap on nutrition interventions for children with CP in LMICs. This hinders the development of best practice guidelines for the nutritional management of children with CP in those settings. Findings suggest that interventions directly related to the growth/feeding of children had better outcomes than behavioral interventions. This should be considered in the planning of nutrition-focused interventions or comprehensive services for children with CP in LMICs.
CONCLUSION
[ "Cerebral Palsy", "Child", "Developing Countries", "Female", "Humans", "Income", "Infant", "Male", "Malnutrition", "Poverty" ]
8951851
1. Introduction
Undernutrition is a major global public health challenge, and children with disability, such as cerebral palsy (CP), often suffer from undernutrition, especially in low- and middle-income countries (LMICs). The proportion of malnutrition is substantially higher among children with CP in LMICs compared to the general population [1,2,3,4,5]. Ensuring availability and accessibility to evidence-based nutrition interventions is essential to avert adverse outcomes of malnutrition among those vulnerable children [6]. Nutritional management of children with CP is complex as several interlinked factors interfere with their growth directly and indirectly [7]. While a few factors have been identified as common predictors of malnutrition (e.g., gross motor and oromotor function limitations) [1,8,9], the conceptual framework of malnutrition in children with CP is not yet clearly understood, especially in LMICs. To date, several clinical nutrition guidelines based on current evidence on surgical and non-surgical intervention outcomes have been published [6,10]. However, most of those studies were conducted in high-income countries (HICs) [6], whereas 85% of children with disabilities live in LMICs and the majority have no or limited access to any rehabilitation services [11,12]. With the extreme shortage of trained professionals including dietitians, in addition to the limited availability and accessibility to institutionalized intervention programs in LMICs [13], it is likely that not all the interventions found effective in HIC settings may be applicable in LMIC settings. In absence of optimal management, these children are at high risk of malnutrition, which can in turn impact their functional outcomes, quality of life, and survival [14,15,16]. Any available evidence in the context of LMICs could guide resource mobilization, assist in understanding the need and cost-effectiveness as well as to identify interventions that may improve the nutritional status of children with CP in low-resource settings. This is important to establish a platform for developing and implementing best practice CP-specific nutrition intervention guidelines in LMICs. In this scoping review, we aimed to systematically map the existing evidence on nutritional interventions for children with CP living in LMIC settings. The following research questions were explored: (i) What is known about the available nutrition interventions for children with CP in LMICs? (ii) What are the outcomes of those interventions on the nutritional status of participating children with CP in LMICs?
null
null
3. Results
A total of n = 4885 citations were identified from the databases after deduplication. Following title screening, n = 132 abstracts were reviewed; of those, n = 26 articles were selected for full review. However, of those, n = 5 full articles were not available and an additional n = 4 were identified from handsearching the bibliographies of selected articles. So, a total of n = 25 full articles met the inclusion criteria. However, of those n = 13 were conducted in high-income settings, n = 1 did not report information for children with CP separately, thus the outcome could not be differentiated from other study participants, and n = 3 were conducted in HICs and did not report data on CP separately. Hence, those n = 17 were excluded. Finally, n = 8 articles published between the years 2008 and 2019 were included in the review for data charting. A flow diagram of the study selection procedure is shown in Figure 1. A summary of the excluded studies is also available in Supplementary Table S2. The details about study characteristics, study participants, intervention details, and outcomes of the included studies are summarized in the subsequent sections. 3.1. Study Characteristics The characteristics of the included studies have been summarized in Table 1. 3.1.1. Study Design Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27]. Characteristics of the included study. CP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System. Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27]. Characteristics of the included study. CP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System. 3.1.2. Study Location Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23]. Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23]. 3.1.3. 
Study Settings Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26]. Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26]. 3.1.4. Study Participants and Sample Size Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24]. Overall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25]. A summary of individual study design, locations, settings, participants, and sample size is presented in Table 1. Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24]. Overall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25]. A summary of individual study design, locations, settings, participants, and sample size is presented in Table 1. The characteristics of the included studies have been summarized in Table 1. 3.1.1. Study Design Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27]. Characteristics of the included study. CP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System. 
Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27]. Characteristics of the included study. CP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System. 3.1.2. Study Location Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23]. Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23]. 3.1.3. Study Settings Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26]. Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26]. 3.1.4. Study Participants and Sample Size Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24]. 
Overall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25]. A summary of individual study design, locations, settings, participants, and sample size is presented in Table 1. Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24]. Overall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25]. A summary of individual study design, locations, settings, participants, and sample size is presented in Table 1. 3.2. Intervention Details Of the eight studies, n = 6 had single type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. Of all, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. Furthermore, dietary modification (e.g., modification of calorie and nutrient density/balanced diet/frequency/consistency/nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, position, and carrying [20,21,22,24,27]. The number of sessions varied between 5–11 [20,21,22,24], and the follow-up duration ranged between 1–18 months [20,21,22,23,24,26,27]. The study implementation team was documented only in one study by Pike et al. (2016) where a team of physiotherapists, occupational therapists, and speech-language therapists provided need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP, trained the caregivers on recommended NDT feeding intervention and dietary modifications for children with CP in a hospital-based/institution-based setting in South Africa [22]. The details of different interventions provided in each of the studies included in this review are summarized in Table 2. Of the eight studies, n = 6 had single type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. Of all, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. Furthermore, dietary modification (e.g., modification of calorie and nutrient density/balanced diet/frequency/consistency/nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. 
The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, position, and carrying [20,21,22,24,27]. The number of sessions varied between 5–11 [20,21,22,24], and the follow-up duration ranged between 1–18 months [20,21,22,23,24,26,27]. The study implementation team was documented only in one study by Pike et al. (2016) where a team of physiotherapists, occupational therapists, and speech-language therapists provided need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP, trained the caregivers on recommended NDT feeding intervention and dietary modifications for children with CP in a hospital-based/institution-based setting in South Africa [22]. The details of different interventions provided in each of the studies included in this review are summarized in Table 2. 3.3. Outcome Measures Different anthropometric measurements were used to evaluate changes in the nutritional status of participating children. The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26]. In all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP to national standards/general population/WHO reference population [20,21,22,23,24,25,26,27]. However, one study also compared the anthropometric measurements with CP specific growth charts [23]. All but one article had other outcome measures in addition to the nutritional assessment; these include (i) dietary intake practices (n = 3) [20,25,27], feeding skills/feeding practices/feeding profiles (n = 5) [20,21,22,26,27], outcomes related to caregivers’ quality of life, knowledge and confidence about child care, perception about child’s health, feelings about child’s feeding difficulties, and compliance with the training [20,24], and child’s health-related quality of life [22]. Any adverse outcomes related to intervention were reported in n = 3 articles [20,23,24]; these outcomes included infections (one study) [23], chest health (one study) [20], and mortality (two studies) [20,24] (Table 3). Different anthropometric measurements were used to evaluate changes in the nutritional status of participating children. The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26]. In all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP to national standards/general population/WHO reference population [20,21,22,23,24,25,26,27]. However, one study also compared the anthropometric measurements with CP specific growth charts [23]. 
All but one article had other outcome measures in addition to the nutritional assessment; these include (i) dietary intake practices (n = 3) [20,25,27], feeding skills/feeding practices/feeding profiles (n = 5) [20,21,22,26,27], outcomes related to caregivers’ quality of life, knowledge and confidence about child care, perception about child’s health, feelings about child’s feeding difficulties, and compliance with the training [20,24], and child’s health-related quality of life [22]. Any adverse outcomes related to intervention were reported in n = 3 articles [20,23,24]; these outcomes included infections (one study) [23], chest health (one study) [20], and mortality (two studies) [20,24] (Table 3). 3.4. Effect of Different Interventions on Nutritional Status of Children with CP Of the eight studies, seven included follow-up data of the participating children with CP (i.e., two or more data points for each participant) [20,21,22,23,24,26,27] and the other one compared between two groups (i.e., intervention vs. control group) [25]. Of those n = 7 studies with longitudinal data [20,21,22,23,24,26,27], n = 4 showed improvement in nutritional status among children with CP following intervention [20,22,23,26], one showed no change [21], whereas two showed deterioration [24,27] in the nutritional status following respective interventions among children with CP in selected LMICs. Four out of the five studies that focused on dietary modifications showed improvement in nutritional status of participating children [20,22,23,26]. Studies that focused on improving feeding skills (n = 3) had both positive and negative nutritional outcomes [20,21,24]. Both studies with GTF as an intervention showed a positive effect of GTF on the nutritional outcome of children with CP [25,26], however, of those none reported any adverse outcome of GTF [25,26]. (Table 3) Of the eight studies, seven included follow-up data of the participating children with CP (i.e., two or more data points for each participant) [20,21,22,23,24,26,27] and the other one compared between two groups (i.e., intervention vs. control group) [25]. Of those n = 7 studies with longitudinal data [20,21,22,23,24,26,27], n = 4 showed improvement in nutritional status among children with CP following intervention [20,22,23,26], one showed no change [21], whereas two showed deterioration [24,27] in the nutritional status following respective interventions among children with CP in selected LMICs. Four out of the five studies that focused on dietary modifications showed improvement in nutritional status of participating children [20,22,23,26]. Studies that focused on improving feeding skills (n = 3) had both positive and negative nutritional outcomes [20,21,24]. Both studies with GTF as an intervention showed a positive effect of GTF on the nutritional outcome of children with CP [25,26], however, of those none reported any adverse outcome of GTF [25,26]. (Table 3)
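Several of the included studies judged nutritional status by comparing anthropometric measurements (weight, length/height, MUAC) against WHO or national reference populations, and one used CP-specific growth charts. As a purely illustrative sketch of how such a comparison is typically operationalized (the included studies do not publish their code, and the L, M, S reference parameters below are made-up placeholders rather than actual WHO tables), a z-score can be derived with the standard LMS formula:

```python
# Illustrative LMS z-score calculation for comparing a child's anthropometric
# measurement against a growth reference. The L, M, S values below are
# hypothetical placeholders, NOT actual WHO reference parameters.
import math

def lms_zscore(x: float, L: float, M: float, S: float) -> float:
    """Standard LMS transformation: z = ((x/M)**L - 1) / (L*S); ln(x/M)/S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Example: weight-for-age for a hypothetical child.
weight_kg = 11.2               # observed weight (illustrative)
L, M, S = -0.35, 14.1, 0.11    # made-up reference parameters for the child's age/sex

z = lms_zscore(weight_kg, L, M, S)
print(f"weight-for-age z-score = {z:.2f}")
if z < -2:
    print("More than 2 SD below the reference median: typically classified as underweight.")
```

In practice, the L, M, S parameters would be looked up from the relevant growth reference (for example the WHO standards, or CP-specific charts as one included study did) for the child's exact age and sex.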
5. Conclusions
The current review highlights the knowledge gap and the lack of available evidence on nutrition interventions for children with CP in LMICs. Strong evidence is essential to identify and determine best practices and nutritional management guidelines for children with CP, especially for optimal utilization of the limited resources available in LMICs. National and international stakeholders should therefore make this a priority for future research and services. Considering the high proportion of children with CP who are undernourished in LMICs, existing interventions and services for children with CP should integrate a nutrition component. The current limited evidence suggests that nutrition-specific programs have a more positive effect on the growth of children with CP than other strategies. This should be taken into consideration when planning nutrition-focused and other comprehensive services for children with CP in LMICs.
[ "2. Materials and Methods", "2.1. Study Design", "2.2. Database Searching", "2.3. Inclusion and Exclusion Criteria", "2.4. Data Charting Process", "2.5. Assessment of Risk of Bias and Synthesis of Results", "2.6. Ethics", "3.1. Study Characteristics", "3.1.1. Study Design", "3.1.2. Study Location", "3.1.3. Study Settings", "3.2. Intervention Details", "3.3. Outcome Measures", "3.4. Effect of Different Interventions on Nutritional Status of Children with CP" ]
[ "2.1. Study Design We conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.\nWe conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.\n2.2. Database Searching The key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms used included “cerebral palsy”/”neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle income countries (LMICs)”, and individual country listed as LMICs according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper-middle income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).\nThe key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms used included “cerebral palsy”/”neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle income countries (LMICs)”, and individual country listed as LMICs according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper-middle income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).\n2.3. 
Inclusion and Exclusion Criteria Articles that met the following criteria were included in the review: (i) the study participants were children with CP, (ii) the outcome measures included nutritional status as determined by anthropometric measurements, e.g., weight, height/length of children with CP and/or body composition, and (iii) followed analytical study design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or descriptive study with a control group or comparison group (e.g., children with gastrostomy versus children fed orally). Articles were excluded (i) if the study was conducted in HICs, (ii) if data about children with CP could not be differentiated and was reported together with children with other form of impairments, and (iii) if they were not peer-reviewed publications and were protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.\nArticles that met the following criteria were included in the review: (i) the study participants were children with CP, (ii) the outcome measures included nutritional status as determined by anthropometric measurements, e.g., weight, height/length of children with CP and/or body composition, and (iii) followed analytical study design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or descriptive study with a control group or comparison group (e.g., children with gastrostomy versus children fed orally). Articles were excluded (i) if the study was conducted in HICs, (ii) if data about children with CP could not be differentiated and was reported together with children with other form of impairments, and (iii) if they were not peer-reviewed publications and were protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.\n2.4. Data Charting Process Data charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcome reported) were extracted as available. Any missing information was documented as “not reported”.\nData charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcome reported) were extracted as available. Any missing information was documented as “not reported”.\n2.5. Assessment of Risk of Bias and Synthesis of Results The risk of bias of the included studies were not assessed. Descriptive findings from individual studies including study characteristics (name and economic classification of the country according to the World Bank definitions in 2020, study locations, settings, study design, and study period), participants (sample size, age, sex, gross motor function classification system (GMFCS) level [19]), and outcome measures were reported. 
The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical test was used considering the study objectives.\nThe risk of bias of the included studies were not assessed. Descriptive findings from individual studies including study characteristics (name and economic classification of the country according to the World Bank definitions in 2020, study locations, settings, study design, and study period), participants (sample size, age, sex, gross motor function classification system (GMFCS) level [19]), and outcome measures were reported. The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical test was used considering the study objectives.\n2.6. Ethics This study did not require ethics approval as data were collected from existing publications (i.e., secondary data) and no humans were directly contacted to collect/gather any information.\nThis study did not require ethics approval as data were collected from existing publications (i.e., secondary data) and no humans were directly contacted to collect/gather any information.", "We conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.", "The key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms used included “cerebral palsy”/”neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle income countries (LMICs)”, and individual country listed as LMICs according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper-middle income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. 
Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).", "Articles that met the following criteria were included in the review: (i) the study participants were children with CP, (ii) the outcome measures included nutritional status as determined by anthropometric measurements, e.g., weight, height/length of children with CP and/or body composition, and (iii) followed analytical study design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or descriptive study with a control group or comparison group (e.g., children with gastrostomy versus children fed orally). Articles were excluded (i) if the study was conducted in HICs, (ii) if data about children with CP could not be differentiated and was reported together with children with other form of impairments, and (iii) if they were not peer-reviewed publications and were protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.", "Data charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcome reported) were extracted as available. Any missing information was documented as “not reported”.", "The risk of bias of the included studies were not assessed. Descriptive findings from individual studies including study characteristics (name and economic classification of the country according to the World Bank definitions in 2020, study locations, settings, study design, and study period), participants (sample size, age, sex, gross motor function classification system (GMFCS) level [19]), and outcome measures were reported. The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical test was used considering the study objectives.", "This study did not require ethics approval as data were collected from existing publications (i.e., secondary data) and no humans were directly contacted to collect/gather any information.", "The characteristics of the included studies have been summarized in Table 1.\n3.1.1. Study Design Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.\nFive out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.\n3.1.2. 
Study Location Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].\nOf the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].\n3.1.3. Study Settings Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].\nSix of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].\n3.1.4. Study Participants and Sample Size Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26].\nA total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). 
The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24].\nOverall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25].\nA summary of individual study design, locations, settings, participants, and sample size is presented in Table 1.\nFive studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26].\nA total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24].\nOverall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25].\nA summary of individual study design, locations, settings, participants, and sample size is presented in Table 1.", "Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.", "Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].", "Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].", "Of the eight studies, n = 6 had single type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. Of all, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. 
Furthermore, dietary modification (e.g., modification of calorie and nutrient density/balanced diet/frequency/consistency/nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, position, and carrying [20,21,22,24,27]. The number of sessions varied between 5–11 [20,21,22,24], and the follow-up duration ranged between 1–18 months [20,21,22,23,24,26,27]. The study implementation team was documented only in one study by Pike et al. (2016) where a team of physiotherapists, occupational therapists, and speech-language therapists provided need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP, trained the caregivers on recommended NDT feeding intervention and dietary modifications for children with CP in a hospital-based/institution-based setting in South Africa [22].\nThe details of different interventions provided in each of the studies included in this review are summarized in Table 2.", "Different anthropometric measurements were used to evaluate changes in the nutritional status of participating children. The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26].\nIn all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP to national standards/general population/WHO reference population [20,21,22,23,24,25,26,27]. However, one study also compared the anthropometric measurements with CP specific growth charts [23].\nAll but one article had other outcome measures in addition to the nutritional assessment; these include (i) dietary intake practices (n = 3) [20,25,27], feeding skills/feeding practices/feeding profiles (n = 5) [20,21,22,26,27], outcomes related to caregivers’ quality of life, knowledge and confidence about child care, perception about child’s health, feelings about child’s feeding difficulties, and compliance with the training [20,24], and child’s health-related quality of life [22]. Any adverse outcomes related to intervention were reported in n = 3 articles [20,23,24]; these outcomes included infections (one study) [23], chest health (one study) [20], and mortality (two studies) [20,24] (Table 3).", "Of the eight studies, seven included follow-up data of the participating children with CP (i.e., two or more data points for each participant) [20,21,22,23,24,26,27] and the other one compared between two groups (i.e., intervention vs. control group) [25]. Of those n = 7 studies with longitudinal data [20,21,22,23,24,26,27], n = 4 showed improvement in nutritional status among children with CP following intervention [20,22,23,26], one showed no change [21], whereas two showed deterioration [24,27] in the nutritional status following respective interventions among children with CP in selected LMICs.\nFour out of the five studies that focused on dietary modifications showed improvement in nutritional status of participating children [20,22,23,26]. 
Studies that focused on improving feeding skills (n = 3) had both positive and negative nutritional outcomes [20,21,24].\nBoth studies with GTF as an intervention showed a positive effect of GTF on the nutritional outcome of children with CP [25,26], however, of those none reported any adverse outcome of GTF [25,26]. (Table 3)" ]
[ "Undernutrition is a major global public health challenge, and children with disability, such as cerebral palsy (CP), often suffer from undernutrition, especially in low- and middle-income countries (LMICs). The proportion of malnutrition is substantially higher among children with CP in LMICs compared to the general population [1,2,3,4,5]. Ensuring availability and accessibility to evidence-based nutrition interventions is essential to avert adverse outcomes of malnutrition among those vulnerable children [6].\nNutritional management of children with CP is complex as several interlinked factors interfere with their growth directly and indirectly [7]. While a few factors have been identified as common predictors of malnutrition (e.g., gross motor and oromotor function limitations) [1,8,9], the conceptual framework of malnutrition in children with CP is not yet clearly understood, especially in LMICs. To date, several clinical nutrition guidelines based on current evidence on surgical and non-surgical intervention outcomes have been published [6,10]. However, most of those studies were conducted in high-income countries (HICs) [6], whereas 85% of children with disabilities live in LMICs and the majority have no or limited access to any rehabilitation services [11,12]. With the extreme shortage of trained professionals including dietitians, in addition to the limited availability and accessibility to institutionalized intervention programs in LMICs [13], it is likely that not all the interventions found effective in HIC settings may be applicable in LMIC settings. In absence of optimal management, these children are at high risk of malnutrition, which can in turn impact their functional outcomes, quality of life, and survival [14,15,16].\nAny available evidence in the context of LMICs could guide resource mobilization, assist in understanding the need and cost-effectiveness as well as to identify interventions that may improve the nutritional status of children with CP in low-resource settings. This is important to establish a platform for developing and implementing best practice CP-specific nutrition intervention guidelines in LMICs. In this scoping review, we aimed to systematically map the existing evidence on nutritional interventions for children with CP living in LMIC settings. The following research questions were explored: (i) What is known about the available nutrition interventions for children with CP in LMICs? (ii) What are the outcomes of those interventions on the nutritional status of participating children with CP in LMICs?", "2.1. Study Design We conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.\nWe conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.\n2.2. 
Database Searching The key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms used included “cerebral palsy”/”neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle income countries (LMICs)”, and individual country listed as LMICs according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper-middle income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).\nThe key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms used included “cerebral palsy”/”neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle income countries (LMICs)”, and individual country listed as LMICs according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper-middle income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).\n2.3. Inclusion and Exclusion Criteria Articles that met the following criteria were included in the review: (i) the study participants were children with CP, (ii) the outcome measures included nutritional status as determined by anthropometric measurements, e.g., weight, height/length of children with CP and/or body composition, and (iii) followed analytical study design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or descriptive study with a control group or comparison group (e.g., children with gastrostomy versus children fed orally). 
Articles were excluded (i) if the study was conducted in HICs, (ii) if data about children with CP could not be differentiated and was reported together with children with other form of impairments, and (iii) if they were not peer-reviewed publications and were protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.\nArticles that met the following criteria were included in the review: (i) the study participants were children with CP, (ii) the outcome measures included nutritional status as determined by anthropometric measurements, e.g., weight, height/length of children with CP and/or body composition, and (iii) followed analytical study design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or descriptive study with a control group or comparison group (e.g., children with gastrostomy versus children fed orally). Articles were excluded (i) if the study was conducted in HICs, (ii) if data about children with CP could not be differentiated and was reported together with children with other form of impairments, and (iii) if they were not peer-reviewed publications and were protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.\n2.4. Data Charting Process Data charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcome reported) were extracted as available. Any missing information was documented as “not reported”.\nData charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcome reported) were extracted as available. Any missing information was documented as “not reported”.\n2.5. Assessment of Risk of Bias and Synthesis of Results The risk of bias of the included studies were not assessed. Descriptive findings from individual studies including study characteristics (name and economic classification of the country according to the World Bank definitions in 2020, study locations, settings, study design, and study period), participants (sample size, age, sex, gross motor function classification system (GMFCS) level [19]), and outcome measures were reported. The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical test was used considering the study objectives.\nThe risk of bias of the included studies were not assessed. Descriptive findings from individual studies including study characteristics (name and economic classification of the country according to the World Bank definitions in 2020, study locations, settings, study design, and study period), participants (sample size, age, sex, gross motor function classification system (GMFCS) level [19]), and outcome measures were reported. 
The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical test was used considering the study objectives.\n2.6. Ethics This study did not require ethics approval as data were collected from existing publications (i.e., secondary data) and no humans were directly contacted to collect/gather any information.\nThis study did not require ethics approval as data were collected from existing publications (i.e., secondary data) and no humans were directly contacted to collect/gather any information.", "We conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.", "The key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms used included “cerebral palsy”/”neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle income countries (LMICs)”, and individual country listed as LMICs according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper-middle income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).", "Articles that met the following criteria were included in the review: (i) the study participants were children with CP, (ii) the outcome measures included nutritional status as determined by anthropometric measurements, e.g., weight, height/length of children with CP and/or body composition, and (iii) followed analytical study design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or descriptive study with a control group or comparison group (e.g., children with gastrostomy versus children fed orally). Articles were excluded (i) if the study was conducted in HICs, (ii) if data about children with CP could not be differentiated and was reported together with children with other form of impairments, and (iii) if they were not peer-reviewed publications and were protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.", "Data charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. 
Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcome reported) were extracted as available. Any missing information was documented as “not reported”.", "The risk of bias of the included studies were not assessed. Descriptive findings from individual studies including study characteristics (name and economic classification of the country according to the World Bank definitions in 2020, study locations, settings, study design, and study period), participants (sample size, age, sex, gross motor function classification system (GMFCS) level [19]), and outcome measures were reported. The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical test was used considering the study objectives.", "This study did not require ethics approval as data were collected from existing publications (i.e., secondary data) and no humans were directly contacted to collect/gather any information.", "A total of n = 4885 citations were identified from the databases after deduplication. Following title screening, n = 132 abstracts were reviewed; of those, n = 26 articles were selected for full review. However, of those, n = 5 full articles were not available and an additional n = 4 were identified from handsearching the bibliographies of selected articles. So, a total of n = 25 full articles met the inclusion criteria. However, of those n = 13 were conducted in high-income settings, n = 1 did not report information for children with CP separately, thus the outcome could not be differentiated from other study participants, and n = 3 were conducted in HICs and did not report data on CP separately. Hence, those n = 17 were excluded. Finally, n = 8 articles published between the years 2008 and 2019 were included in the review for data charting. A flow diagram of the study selection procedure is shown in Figure 1. A summary of the excluded studies is also available in Supplementary Table S2.\nThe details about study characteristics, study participants, intervention details, and outcomes of the included studies are summarized in the subsequent sections.\n3.1. Study Characteristics The characteristics of the included studies have been summarized in Table 1.\n3.1.1. Study Design Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.\nFive out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.\n3.1.2. 
Study Location Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].\nOf the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].\n3.1.3. Study Settings Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].\nSix of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].\n3.1.4. Study Participants and Sample Size Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26].\nA total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). 
The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24].\nOverall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25].\nA summary of individual study design, locations, settings, participants, and sample size is presented in Table 1.\nFive studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26].\nA total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24].\nOverall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25].\nA summary of individual study design, locations, settings, participants, and sample size is presented in Table 1.\nThe characteristics of the included studies have been summarized in Table 1.\n3.1.1. Study Design Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.\nFive out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].\nCharacteristics of the included study.\nCP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System.\n3.1.2. Study Location Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].\nOf the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. 
As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23].\n3.1.3. Study Settings Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].\nSix of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].\n3.1.4. Study Participants and Sample Size Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26].\nA total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24].\nOverall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25].\nA summary of individual study design, locations, settings, participants, and sample size is presented in Table 1.\nFive studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26].\nA total of 252 children with CP were included in the selected studies (age ranged between 1 year (y) and 18 y 7months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in hospital/clinic/institution-based settings in Mexico [26], and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24].\nOverall, 12%, n = 23/198 children, had GMFCS level I–II and the 88%, n = 175/198 children, had GMFCS level III–V [20,21,22,23,26,27]; one study did not report the GMFCS level of the participants [24,25].\nA summary of individual study design, locations, settings, participants, and sample size is presented in Table 1.\n3.2. Intervention Details Of the eight studies, n = 6 had single type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. 
Of all, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. Furthermore, dietary modification (e.g., modification of calorie and nutrient density/balanced diet/frequency/consistency/nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, position, and carrying [20,21,22,24,27]. The number of sessions varied between 5–11 [20,21,22,24], and the follow-up duration ranged between 1–18 months [20,21,22,23,24,26,27]. The study implementation team was documented only in one study by Pike et al. (2016) where a team of physiotherapists, occupational therapists, and speech-language therapists provided need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP, trained the caregivers on recommended NDT feeding intervention and dietary modifications for children with CP in a hospital-based/institution-based setting in South Africa [22].\nThe details of different interventions provided in each of the studies included in this review are summarized in Table 2.\nOf the eight studies, n = 6 had single type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. Of all, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. Furthermore, dietary modification (e.g., modification of calorie and nutrient density/balanced diet/frequency/consistency/nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, position, and carrying [20,21,22,24,27]. The number of sessions varied between 5–11 [20,21,22,24], and the follow-up duration ranged between 1–18 months [20,21,22,23,24,26,27]. The study implementation team was documented only in one study by Pike et al. (2016) where a team of physiotherapists, occupational therapists, and speech-language therapists provided need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP, trained the caregivers on recommended NDT feeding intervention and dietary modifications for children with CP in a hospital-based/institution-based setting in South Africa [22].\nThe details of different interventions provided in each of the studies included in this review are summarized in Table 2.\n3.3. Outcome Measures Different anthropometric measurements were used to evaluate changes in the nutritional status of participating children. 
The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26].\nIn all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP to national standards/general population/WHO reference population [20,21,22,23,24,25,26,27]. However, one study also compared the anthropometric measurements with CP specific growth charts [23].\nAll but one article had other outcome measures in addition to the nutritional assessment; these include (i) dietary intake practices (n = 3) [20,25,27], feeding skills/feeding practices/feeding profiles (n = 5) [20,21,22,26,27], outcomes related to caregivers’ quality of life, knowledge and confidence about child care, perception about child’s health, feelings about child’s feeding difficulties, and compliance with the training [20,24], and child’s health-related quality of life [22]. Any adverse outcomes related to intervention were reported in n = 3 articles [20,23,24]; these outcomes included infections (one study) [23], chest health (one study) [20], and mortality (two studies) [20,24] (Table 3).\nDifferent anthropometric measurements were used to evaluate changes in the nutritional status of participating children. The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26].\nIn all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP to national standards/general population/WHO reference population [20,21,22,23,24,25,26,27]. However, one study also compared the anthropometric measurements with CP specific growth charts [23].\nAll but one article had other outcome measures in addition to the nutritional assessment; these include (i) dietary intake practices (n = 3) [20,25,27], feeding skills/feeding practices/feeding profiles (n = 5) [20,21,22,26,27], outcomes related to caregivers’ quality of life, knowledge and confidence about child care, perception about child’s health, feelings about child’s feeding difficulties, and compliance with the training [20,24], and child’s health-related quality of life [22]. Any adverse outcomes related to intervention were reported in n = 3 articles [20,23,24]; these outcomes included infections (one study) [23], chest health (one study) [20], and mortality (two studies) [20,24] (Table 3).\n3.4. Effect of Different Interventions on Nutritional Status of Children with CP Of the eight studies, seven included follow-up data of the participating children with CP (i.e., two or more data points for each participant) [20,21,22,23,24,26,27] and the other one compared between two groups (i.e., intervention vs. control group) [25]. 
3.4. Effect of Different Interventions on Nutritional Status of Children with CP
Of the eight studies, seven included follow-up data for the participating children with CP (i.e., two or more data points for each participant) [20,21,22,23,24,26,27], and the remaining one compared two groups (i.e., intervention vs. control group) [25]. Of those n = 7 studies with longitudinal data [20,21,22,23,24,26,27], n = 4 showed improvement in the nutritional status of children with CP following the intervention [20,22,23,26], one showed no change [21], and two showed deterioration [24,27].
Four out of the five studies that focused on dietary modifications showed improvement in the nutritional status of participating children [20,22,23,26]. Studies that focused on improving feeding skills (n = 3) had both positive and negative nutritional outcomes [20,21,24].
Both studies with gastrostomy tube feeding (GTF) as an intervention showed a positive effect of GTF on the nutritional outcomes of children with CP [25,26]; neither reported any adverse outcome of GTF [25,26] (Table 3).
4. Discussion
In this review we provide a summary of the available evidence on different nutrition interventions for children with CP in LMICs. Our search identified only a few relevant studies conducted in the resource-constrained settings of LMICs: only eight articles were identified, none of which had representation from low-income countries. Such an evidence gap from LMICs, where the majority of children with CP reside, is concerning [11]. Considering the high burden of malnourished children with CP in LMICs [1,2,3,4,5,8,9], there is a dire need to reduce this evidence gap, and integration of nutrition interventions into the existing service plans for children with CP is essential.
There are global efforts toward reducing the burden of malnutrition among children, especially in LMICs. The United Nations Sustainable Development Goals (UN SDGs) place a strong emphasis on childhood nutrition for human growth and development [28]. The SDGs also strongly advocate for disability inclusiveness, especially the goals related to growth, health, education, employment, and addressing inequality globally [29]. National and international partners are working together to develop standard guidelines, strategies, and interventions addressing both the immediate determinants (i.e., nutrition-specific interventions) and the underlying or root causes (i.e., nutrition-sensitive interventions) of malnutrition to improve the overall nutritional status of children, especially in LMICs [30,31]. These interventions should be disability-inclusive, and priority should be given to documenting the effectiveness of different nutrition interventions on the growth and nutrition of children with disabilities, including CP.
While reviewing the limited available studies, we observed that the majority of the interventions were trialed or implemented in institution-based settings. Although interventions implemented in institution-based settings have been shown to result in better outcomes in the past [6], the evidence suggests that the majority of children with CP in LMICs lack access to any rehabilitation services for numerous reasons, including limited availability of services in their neighborhoods, transport difficulties, and financial constraints [12,32]. Furthermore, the severe shortage of trained health professionals, e.g., rehabilitation service providers and dietitians, limits the scope to implement and scale up institution-based interventions for children with CP in LMICs [13]. In this context, community-based approaches are highly recommended, yet there is a lack of evidence on community-based nutrition interventions for children with CP in LMICs. Nevertheless, in one recent study from Bangladesh, a community-based, parent-led intervention was found to be highly successful in improving functional outcomes of children with CP [33]. Emphasis should be given to the capacity development of mid-level service providers on growth monitoring and promotion, early identification, blanket interventions, and referral to prevent and treat malnutrition among children with CP in LMICs.
In addition to the settings, the sample sizes of the studies were relatively low.
Although the included studies provided valuable insights on different nutritional interventions, the small sample sizes limit the strength and generalizability of the findings. Moreover, most children with CP in the selected studies had severe functional motor limitations. This could be due to selection bias related to more severely affected institutional samples, as well as to the more severe clinical spectrum of CP in LMICs, likely exacerbated by delayed age of diagnosis and lack of access to rehabilitation services [12,32,34]. Studies also show that the likelihood of receiving intervention and rehabilitation is higher among those with severe CP (e.g., GMFCS level III–V) than among those with milder forms (e.g., GMFCS I–II) [12]. However, it is also known that severe functional motor limitations (e.g., GMFCS level III–V) in children with CP are often accompanied by severe oromotor dysfunction, e.g., dysphagia [35]. It therefore remains unclear whether the interventions studied in children with more severe motor limitations are also appropriate for children with CP who are GMFCS I–II.
Most of the interventions identified in this review improved the nutritional status of participating children with CP. Importantly, nutrition-specific interventions (i.e., interventions that aim to correct the immediate causes of malnutrition, such as inadequate diet, disease severity, and caring practices) had comparatively better outcomes than other intervention strategies [20,22,24,30]. We observed that interventions that included dietary modifications (alone, or as one component of multiple approaches) had a better effect on the nutritional outcomes of children with CP. Studies in which participants received surgical interventions (e.g., GTF) also showed a positive effect, whereas studies that provided behavioral intervention on feeding skills only did not show any significant change.
To the best of our knowledge, this is one of the first reviews conducted on nutrition interventions for children with CP in LMICs. Although our search identified several studies that aimed to improve the nutritional status of children with CP in HICs, it revealed only a few reviews (including one systematic review) that summarized findings from nutrition intervention studies among children with CP in HICs [6,36,37]. The data provided in our study therefore contribute important information to the existing knowledge gap. Nevertheless, this review has several limitations. First, we focused our search on two databases, so a risk remains that other relevant publications were available through other databases; however, our search revealed a large number of duplicates, so we are confident that we have covered all or almost all of the studies published online. Second, we only searched for articles that had our key words in the title, abstract, or author-specified key words, which may have reduced the number of articles initially identified for title screening. Third, we did not assess the risk of bias or the quality of evidence of the articles finally included in the review, which may have introduced bias into the information reported. Finally, due to differences in reporting format and data availability, we could only include a few common variables and could not report on some of the important characteristics of the participants.
For the same reason, we had to rely on reporting descriptive findings only.
5. Conclusions
The current review highlights the knowledge gap and the lack of available evidence on nutrition interventions for children with CP in LMICs. Strong evidence is essential to identify and determine best practices and nutritional management guidelines for children with CP, especially for optimal utilization of the limited resources available in LMICs. National and international stakeholders should therefore make this a priority for future research and services. Considering the high proportion of children with CP with undernutrition in LMICs, existing interventions and services for children with CP should integrate a nutrition component. The current limited evidence suggests that nutrition-specific programs have a more positive effect on the growth of children with CP than other strategies. This should be taken into consideration when planning any nutrition-focused and other comprehensive services for children with CP in LMICs.
[ "intro", null, null, null, null, null, null, null, "results", null, null, null, null, "subjects", null, null, null, "discussion", "conclusions" ]
[ "nutrition", "intervention", "children", "cerebral palsy", "disability", "LMICs" ]
1. Introduction
Undernutrition is a major global public health challenge, and children with disabilities, such as cerebral palsy (CP), often suffer from undernutrition, especially in low- and middle-income countries (LMICs). The proportion of malnutrition is substantially higher among children with CP in LMICs than in the general population [1,2,3,4,5]. Ensuring the availability of and accessibility to evidence-based nutrition interventions is essential to avert the adverse outcomes of malnutrition among these vulnerable children [6]. Nutritional management of children with CP is complex, as several interlinked factors interfere with their growth directly and indirectly [7]. While a few factors have been identified as common predictors of malnutrition (e.g., gross motor and oromotor function limitations) [1,8,9], the conceptual framework of malnutrition in children with CP is not yet clearly understood, especially in LMICs. To date, several clinical nutrition guidelines based on current evidence on surgical and non-surgical intervention outcomes have been published [6,10]. However, most of those studies were conducted in high-income countries (HICs) [6], whereas 85% of children with disabilities live in LMICs and the majority have no or limited access to any rehabilitation services [11,12]. Given the extreme shortage of trained professionals, including dietitians, and the limited availability of and accessibility to institution-based intervention programs in LMICs [13], it is likely that not all interventions found effective in HIC settings are applicable in LMIC settings. In the absence of optimal management, these children are at high risk of malnutrition, which can in turn impact their functional outcomes, quality of life, and survival [14,15,16]. Any available evidence in the context of LMICs could guide resource mobilization, assist in understanding need and cost-effectiveness, and help identify interventions that may improve the nutritional status of children with CP in low-resource settings. This is important for establishing a platform for developing and implementing best-practice, CP-specific nutrition intervention guidelines in LMICs. In this scoping review, we aimed to systematically map the existing evidence on nutritional interventions for children with CP living in LMIC settings. The following research questions were explored: (i) What is known about the available nutrition interventions for children with CP in LMICs? (ii) What are the outcomes of those interventions on the nutritional status of participating children with CP in LMICs?
2. Materials and Methods
2.1. Study Design
We conducted a scoping review to summarize the available evidence and provide an overview of different intervention programs and their outcomes related to the nutritional status of children with CP in LMICs. A protocol was developed following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) extension for scoping reviews guideline [17] and is available upon request from the corresponding author.
2.2. Database Searching
The key words and search strategy were developed by two of the authors (I.J. and G.K.) and pretested prior to the initiation of the search. PubMed (up to 15 December 2021) and Scopus (up to 10 January 2022) were searched to identify relevant articles. Google Scholar was searched for any additional relevant articles or scientific guidelines. The search was not restricted to any specific language. The key terms included “cerebral palsy”/“neurodevelopmental disorder”; children/adolescent/infant; parent/mother/caregiver; intervention/trial/outcome; training/education/feeding/technique/surgical/teaching; nutrition/malnutrition/growth/health; “low- and middle-income countries (LMICs)”; and each individual country listed as an LMIC according to the 2020 World Bank country classifications by income level (i.e., low-income, lower middle-income, and upper middle-income countries) [18]. The electronic search strategy for Scopus is outlined in Supplementary Table S1. Two reviewers (I.J. and R.S.) independently searched the databases and included articles following the inclusion and exclusion criteria. Any disagreement was resolved by discussion between the reviewers and in consultation with a third reviewer (G.K.).
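The authors’ full Scopus strategy is provided in Supplementary Table S1 and is not reproduced here. Purely as an illustration of how term groups like those listed above are typically combined into a Boolean query, the following sketch composes a hypothetical search string; the groups and truncation patterns are assumptions for the example, not the authors’ actual syntax.

```python
# Illustrative only: composing a Boolean query from term groups similar to
# those described above. This is NOT the authors' actual search string.
term_groups = [
    ['"cerebral palsy"', '"neurodevelopmental disorder"'],
    ["child*", "adolescen*", "infant*"],
    ["intervention", "trial", "outcome"],
    ["nutrition*", "malnutrition", "growth", "feeding"],
    ['"low- and middle-income countries"', "LMIC*", "Bangladesh", "Ghana"],  # country list truncated
]

# Each group is OR-combined, and the groups are AND-combined.
query = " AND ".join("(" + " OR ".join(group) + ")" for group in term_groups)
print(query)
# e.g., ("cerebral palsy" OR "neurodevelopmental disorder") AND (child* OR adolescen* OR infant*) AND ...
```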
2.3. Inclusion and Exclusion Criteria
Articles were included in the review if they met the following criteria: (i) the study participants were children with CP; (ii) the outcome measures included nutritional status as determined by anthropometric measurements (e.g., weight, height/length) and/or body composition; and (iii) the study followed an analytical design (e.g., experimental/quasi-experimental, pre- and post-intervention study) or a descriptive design with a control or comparison group (e.g., children with gastrostomy versus children fed orally). Articles were excluded (i) if the study was conducted in HICs, (ii) if data on children with CP could not be differentiated from data on children with other forms of impairment, and (iii) if they were not peer-reviewed publications, i.e., protocols, guidelines, book chapters, conference presentations, forewords, or replies to commentaries.
2.4. Data Charting Process
Data charting was primarily completed by two independent reviewers (I.J. and R.S.) using an a priori template developed by I.J. and G.K. Information on study design, settings, country, study participants (age, sex, motor function severity, sample size), intervention provided (type, settings, intervention contents, number of sessions as reported, follow-up period), and outcome (measures used and outcomes reported) was extracted as available. Any missing information was documented as “not reported”.
2.5. Assessment of Risk of Bias and Synthesis of Results
The risk of bias of the included studies was not assessed. Descriptive findings from individual studies were reported, including study characteristics (name and economic classification of the country according to the 2020 World Bank definitions, study location, settings, study design, and study period), participants (sample size, age, sex, Gross Motor Function Classification System (GMFCS) level [19]), and outcome measures. The different types of interventions were reported under broad headings, e.g., “training to parents/caregivers”, “gastrostomy tube placement/feeding/nasogastric tube feeding”, “nutritional rehabilitation/therapy”, and “dietary modification”. No statistical tests were used, considering the study objectives.
2.6. Ethics
This study did not require ethics approval, as data were collected from existing publications (i.e., secondary data) and no humans were contacted directly to collect any information.
3. Results
A total of n = 4885 citations were identified from the databases after deduplication. Following title screening, n = 132 abstracts were reviewed, of which n = 26 articles were selected for full-text review. Of those, n = 5 full articles were not available, and an additional n = 4 articles were identified by handsearching the bibliographies of the selected articles, so a total of n = 25 full articles were assessed against the inclusion criteria. Of these, n = 13 were conducted in high-income settings, n = 1 did not report information for children with CP separately (so the outcome could not be differentiated from that of other study participants), and n = 3 both were conducted in HICs and did not report data on CP separately; these n = 17 articles were excluded. Finally, n = 8 articles published between 2008 and 2019 were included in the review for data charting. A flow diagram of the study selection procedure is shown in Figure 1. A summary of the excluded studies is available in Supplementary Table S2.
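The selection counts reported above can be tallied as follows; this is simply the arithmetic from the text (and Figure 1) restated as a sketch, with paraphrased stage labels.

```python
# Tallying the study-selection flow as reported in the text; stage labels are
# paraphrased, and the counts are taken directly from the review.
selected_for_full_review = 26
full_text_unavailable = 5
added_from_handsearching = 4
excluded_at_full_text = 17  # 13 HIC settings + 1 CP data not separable + 3 both

full_texts_assessed = selected_for_full_review - full_text_unavailable + added_from_handsearching
included_studies = full_texts_assessed - excluded_at_full_text

assert full_texts_assessed == 25 and included_studies == 8
print(f"{full_texts_assessed} full texts assessed, {included_studies} studies included")
```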
The details of the study characteristics, study participants, interventions, and outcomes of the included studies are summarized in the subsequent sections.
3.1. Study Characteristics
The characteristics of the included studies are summarized in Table 1 (CP, cerebral palsy; MIC, middle-income country; GMFCS, Gross Motor Function Classification System).
3.1.1. Study Design
Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27].
3.1.2. Study Location
Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26] and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. In terms of geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 in the Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 in the South Asia region (Bangladesh) [20], n = 1 in the Middle East and North Africa region (Egypt) [21], and n = 1 in the Europe and Central Asia region (Turkey) [23].
3.1.3. Study Settings
Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings in Ghana (both evaluated the impact of the same intervention model on different outcome measures); one followed an experimental study design and the other a qualitative approach [24,27]. Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26].
3.1.4. Study Participants and Sample Size
Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (ages ranged between 1 year and 18 years 7 months).
The lowest sample size was n = 13 (a descriptive analytical study conducted in a hospital/clinic/institution-based setting in Mexico) [26], and the highest was n = 64 (a community-based experimental study conducted in Ghana) [24]. Overall, 12% (n = 23/198) of the children had GMFCS level I–II and 88% (n = 175/198) had GMFCS level III–V [20,21,22,23,26,27]; the remaining studies did not report the GMFCS level of the participants [24,25]. A summary of the individual study designs, locations, settings, participants, and sample sizes is presented in Table 1.
3.2. Intervention Details
Of the eight studies, n = 6 had single-type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. Of all, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. Furthermore, dietary modification (e.g., modification of calorie and nutrient density, balanced diet, feeding frequency, food consistency, or nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, positioning, and carrying [20,21,22,24,27]. The number of sessions varied between 5 and 11 [20,21,22,24], and the follow-up duration ranged from 1 to 18 months [20,21,22,23,24,26,27]. The study implementation team was documented in only one study, by Pike et al.
(2016), in which a team of physiotherapists, occupational therapists, and speech-language therapists delivered need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP and trained caregivers on the recommended NDT feeding intervention and dietary modifications in a hospital/institution-based setting in South Africa [22]. The details of the interventions provided in each of the included studies are summarized in Table 2.
3.3. Outcome Measures
Different anthropometric measurements were used to evaluate changes in the nutritional status of participating children. The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26]. In all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP against national standards, the general population, or the WHO reference population [20,21,22,23,24,25,26,27]. However, one study also compared the anthropometric measurements with CP-specific growth charts [23].
3.1.4. Study Participants and Sample Size Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (age range: 1 year (y) to 18 y 7 months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in a hospital/clinic/institution-based setting in Mexico [26]), and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24]. Overall, 12% (n = 23/198) of the children had GMFCS level I–II and 88% (n = 175/198) had GMFCS level III–V [20,21,22,23,26,27]; two studies did not report the GMFCS level of the participants [24,25]. A summary of individual study design, locations, settings, participants, and sample size is presented in Table 1. 3.1.1. Study Design: Five out of eight studies were experimental in design (three conducted in lower middle-income countries (lower MICs) and two in upper middle-income countries (UMICs)) [20,21,22,23,24], two followed a descriptive analytical study design [25,26], and the remaining one was a qualitative study [27]. Characteristics of the included studies. CP, Cerebral Palsy; MIC, Middle-income country; GMFCS, Gross Motor Function Classification System. 3.1.2. Study Location: Of the eight studies, n = 4 were from UMICs (Brazil, South Africa, Turkey, Mexico) [22,23,25,26], and n = 4 were from lower MICs (Bangladesh, Ghana × 2, Egypt) [20,21,24,27]; no studies from low-income countries (LICs) were identified. As per the geographical distribution, n = 3 studies were conducted in sub-Saharan Africa (Ghana × 2 and South Africa) [22,24,27], n = 2 were from Latin America and the Caribbean region (Brazil and Mexico) [25,26], n = 1 from the South Asia region (Bangladesh) [20], n = 1 from the Middle East and the North Africa region (Egypt) [21], and n = 1 from the Europe and Central Asia region (Turkey) [23]. 3.1.3. Study Settings: Six of the eight studies were conducted in hospital/clinic/institution-based settings [20,21,22,23,25,26]. The remaining two were conducted in community-based settings (both evaluated the impact of the same intervention model on different outcome measures) in Ghana; one followed an experimental study design and the other a qualitative approach [24,27].
Among the hospital/clinic/institution-based studies, four were experimental studies conducted in Bangladesh, Egypt, South Africa, and Turkey [20,21,22,23], and two were descriptive analytical studies conducted in Brazil and Mexico [25,26]. 3.1.4. Study Participants and Sample Size: Five studies had children with CP and their primary caregivers as study participants [20,21,22,24,27], and three included children with quadriplegic CP only [23,25,26]. A total of 252 children with CP were included in the selected studies (age range: 1 year (y) to 18 y 7 months (m)). The lowest sample size was n = 13 (a descriptive analytical study conducted in a hospital/clinic/institution-based setting in Mexico [26]), and the highest sample size was n = 64 (a community-based experimental study conducted in Ghana) [24]. Overall, 12% (n = 23/198) of the children had GMFCS level I–II and 88% (n = 175/198) had GMFCS level III–V [20,21,22,23,26,27]; two studies did not report the GMFCS level of the participants [24,25]. A summary of individual study design, locations, settings, participants, and sample size is presented in Table 1. 3.2. Intervention Details: Of the eight studies, n = 6 had single-type interventions [20,21,23,24,25,27] and the remaining n = 2 had multiple interventions [22,26]. Of these, n = 4 provided training to parents/caregivers [20,21,24,27] and n = 2 involved surgical interventions (e.g., gastrostomy or nasogastric tube feeding) [25,26]. Furthermore, dietary modification (e.g., modification of calorie and nutrient density/balanced diet/frequency/consistency/nutritional adequacy) was an intervention component in n = 5 studies [20,21,22,23,26]. Both studies with surgical interventions were descriptive in design (i.e., comparative study, before-after study, case series) and were conducted in Brazil [25] and Mexico [26]. The studies that emphasized parental/caregiver training covered a wide range of content, such as therapy, feeding skills, dietary modifications, positioning, and carrying [20,21,22,24,27]. The number of sessions varied between 5 and 11 [20,21,22,24], and the follow-up duration ranged between 1 and 18 months [20,21,22,23,24,26,27]. The study implementation team was documented in only one study, by Pike et al. (2016), where a team of physiotherapists, occupational therapists, and speech-language therapists provided need-based neurodevelopmental therapy (NDT) feeding intervention to children with CP and trained the caregivers on the recommended NDT feeding intervention and dietary modifications for children with CP in a hospital-based/institution-based setting in South Africa [22]. The details of the different interventions provided in each of the studies included in this review are summarized in Table 2. 3.3. Outcome Measures: Different anthropometric measurements were used to evaluate changes in the nutritional status of participating children. The most commonly used anthropometric measurements in the selected articles were weight (n = 8) [20,21,22,23,24,25,26,27], length/height (n = 7) [21,22,23,24,25,26,27], mid-upper arm circumference (MUAC) (n = 7) [20,21,22,23,24,26,27], and at least one skin-fold thickness measurement (n = 4) [22,23,25,26]. In all studies, nutritional status was determined by comparing the anthropometric measurements of children with CP to national standards/general population/WHO reference population [20,21,22,23,24,25,26,27]. In addition, one study compared the anthropometric measurements with CP-specific growth charts [23].
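Operationally, the comparison to a reference population described above is usually expressed as anthropometric z-scores (for example, weight-for-age). The sketch below is a minimal illustration under stated assumptions: the reference median and SD are placeholder numbers rather than actual WHO growth-standard or CP-specific growth-chart values, and real WHO standards use age- and sex-specific LMS parameters rather than a single median/SD pair.

```python
# Minimal sketch of a weight-for-age z-score against a reference population.
# Placeholder reference values only; actual WHO standards use age- and
# sex-specific LMS (Box-Cox) parameters, not a plain median/SD pair.

def weight_for_age_z(weight_kg: float, ref_median_kg: float, ref_sd_kg: float) -> float:
    """z = (observed weight - reference median) / reference SD."""
    return (weight_kg - ref_median_kg) / ref_sd_kg

# Hypothetical child: 11.2 kg against a placeholder reference median of 14.0 kg (SD 1.6 kg).
z = weight_for_age_z(11.2, ref_median_kg=14.0, ref_sd_kg=1.6)
print(f"weight-for-age z-score: {z:.2f}")  # -1.75; below -2 is conventionally classed as underweight
```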
All but one article had other outcome measures in addition to the nutritional assessment; these included (i) dietary intake practices (n = 3) [20,25,27]; (ii) feeding skills/feeding practices/feeding profiles (n = 5) [20,21,22,26,27]; (iii) outcomes related to caregivers’ quality of life, knowledge and confidence about child care, perception of the child’s health, feelings about the child’s feeding difficulties, and compliance with the training [20,24]; and (iv) the child’s health-related quality of life [22]. Adverse outcomes related to the interventions were reported in n = 3 articles [20,23,24]; these outcomes included infections (one study) [23], chest health (one study) [20], and mortality (two studies) [20,24] (Table 3). 3.4. Effect of Different Interventions on Nutritional Status of Children with CP: Of the eight studies, seven included follow-up data of the participating children with CP (i.e., two or more data points for each participant) [20,21,22,23,24,26,27] and the other one compared two groups (i.e., intervention vs. control group) [25]. Of those n = 7 studies with longitudinal data [20,21,22,23,24,26,27], n = 4 showed improvement in nutritional status among children with CP following intervention [20,22,23,26], one showed no change [21], whereas two showed deterioration [24,27] in the nutritional status following the respective interventions among children with CP in selected LMICs. Four out of the five studies that focused on dietary modifications showed improvement in the nutritional status of participating children [20,22,23,26]. Studies that focused on improving feeding skills (n = 3) had both positive and negative nutritional outcomes [20,21,24]. Both studies with GTF as an intervention showed a positive effect of GTF on the nutritional outcome of children with CP [25,26]; however, neither study reported on potential adverse outcomes of GTF [25,26] (Table 3). 4. Discussion: In this review we provide a summary of the available evidence on different nutrition interventions for children with CP in LMICs. Our search identified only a few relevant studies on the topic that were conducted in resource-constrained settings of LMICs. Only eight articles were identified, none of which had representation from low-income countries. Such an evidence gap from LMICs, where the majority of children with CP reside, is concerning [11]. Considering the high burden of malnourished children with CP in LMICs [1,2,3,4,5,8,9], there is a dire need to reduce this evidence gap, and integration of nutrition interventions in the existing service plan for children with CP is essential. There are global efforts toward reducing the burden of malnutrition among children, especially in LMICs. The United Nations Sustainable Development Goals (UN SDGs) place an emphasis on childhood nutrition for human growth and development [28]. The SDGs also strongly advocate for disability inclusiveness, especially those SDGs related to growth, health, education, employment, and addressing inequality globally [29]. National and international partners are working together to develop standard guidelines, strategies, and interventions addressing the immediate determinants (i.e., nutrition-specific interventions) and the underlying or root causes (i.e., nutrition-sensitive interventions) to improve the overall nutritional status of children, especially in LMICs [30,31].
These interventions should be disability-inclusive, and priority should be given to reporting/documentation of the effectiveness of different nutrition interventions on growth and nutrition of children with disabilities, including CP. While reviewing the limited available studies, we observed that a majority of the interventions were trialed or implemented in institution-based settings. Though interventions implemented in the institution-based settings have been proven to result in better outcomes in the past [6], our evidence suggests the majority of children with CP in LMICs lack access to any rehabilitation services for numerous reasons, including limited availability of services in their neighborhoods, transport difficulties, and financial constraints [12,32]. Furthermore, the severe shortage of trained health professionals, e.g., rehabilitation service providers and dietitians, limits the scope to implement and scale-up different institution-based interventions for children with CP in LMICs [13]. In such a context, community-based approaches are highly recommended and there is a lack of evidence on community-based nutrition interventions for children with CP in LMICs. Nevertheless, in one recent study from Bangladesh, community-based parent-led intervention was found to be highly successful in improving functional outcomes of children with CP [33]. Emphasis should be given to the capacity development of mid-level service providers on growth monitoring and promotion, early identification, different blanket interventions, and referral to prevent and treat malnutrition among children with CP in LMICs. In addition to the settings, the sample sizes of the studies were relatively low. Although the included studies provided valuable insights on different nutritional interventions, the small sample size limits the strength and generalizability of the study findings. Moreover, most children with CP in the selected studies had severe functional motor limitations. These could be due to selection bias related to more severe institutional samples, as well as the more severe end of the clinical spectrum of CP in LMICs, likely exacerbated by delayed age of diagnosis and lack of access to rehabilitation services [12,32,34]. Studies also show that the likelihood of receiving intervention and rehabilitation is higher among those with severe CP (e.g., GMFCS level III–V) compared to milder forms (e.g., GMFCS I–II) [12]. However, it is also known that severe functional motor limitations (e.g., GMFCS level III–V) in children with CP are often accompanied by severe oromotor dysfunction, e.g., dysphagia [35]. It remains unclear whether the interventions for children with more severe motor limitations are also appropriate for children with CP who are GMFCS I–II. Most of the interventions identified in this review improved the nutritional status of participating children with CP. Importantly, nutrition-specific interventions (i.e., interventions that aim to intervene in or correct the immediate causes of malnutrition, such as inadequate diet, disease severity and caring practices) had comparatively better outcomes than other intervention strategies [20,22,24,30]. We observed that the interventions that included dietary modifications (only, or as a component of multiple approaches) had better effect on nutritional outcomes of children with CP. 
Studies where participants received surgical interventions (e.g., GTF) also showed a positive effect, whereas studies that provided behavioral intervention on feeding skills only did not show any significant change. To the best of our knowledge, this is one of the first reviews conducted on nutrition interventions for children with CP in LMICs. Although during our search we identified several studies that aimed to improve the nutritional status of children with CP in HICs, our search revealed only a few reviews (including one systematic review) that summarized findings from different nutrition intervention studies among children with CP in HICs [6,36,37]. The data provided in our study, therefore, contribute important information to the existing knowledge gap. Nevertheless, this review has several limitations. We focused our search on two databases, so a risk remains that there may have been other publications available through other databases. However, our search revealed a large number of duplicates; therefore, we are confident that we have covered all or almost all of the studies published online. Second, we only searched for articles that had our key words in title/abstracts/author specified key words in the articles. This may have reduced the number of articles initially identified for title screening. Third, we did not assess the risk of bias and quality of evidence of the articles finally included in the review, which may have posed bias in the information reported. Finally, due to the differences in reporting format and data availability, we could only include a few common variables and could not report on some of the important characteristics of the participants. For the same reason, we had to rely on reporting the descriptive findings only. 5. Conclusions: The current review highlights the knowledge gap and lack of available evidence on nutrition interventions for children with CP in LMICs. Strong evidence is essential to identify and determine the best practices and nutritional management guidelines for children with CP, especially for optimal utilization of the limited available resources in LMICs. National and international stakeholders, therefore, should make this a priority for future research and services. Considering the high proportion of children with CP with undernutrition in LMICs, existing interventions and services for children with CP should integrate a nutrition component. The current limited evidence suggests that nutrition-specific programs have more of a positive effect on the growth of children with CP than other strategies. This should be taken under consideration when planning for any nutrition focus and other comprehensive services for children with CP in LMICs.
Background: Malnutrition is substantially higher among children with cerebral palsy (CP) in low- and middle-income countries (LMICs) when compared with the general population. Access to appropriate interventions is crucial for better management of malnutrition and nutritional outcomes of those children. We aimed to review the existing evidence on nutrition interventions for children with CP in LMICs. Methods: Online databases, i.e., PubMed and Scopus, and Google Scholar were searched up to 10 January 2022, to identify peer-reviewed publications/evidence on LMIC focused nutritional management guidelines/interventions. Following title screening and abstract review, full articles that met the inclusion/exclusion criteria were retained for data charting. Information about the study characteristics, nutrition interventions, and their effectiveness were extracted. Descriptive data were reported. Results: Eight articles published between 2008 and 2019 were included with data from a total of n = 252 children with CP (age range: 1 y 0 m-18 y 7 m, 42% female). Five studies followed experimental design; n = 6 were conducted in hospital/clinic/center-based settings. Four studies focused on parental/caregiver training; n = 2 studies had surgical interventions (i.e., gastrostomy) and n = 1 provided neurodevelopmental therapy feeding intervention. Dietary modification as an intervention (or component) was reported in n = 5 studies and had better effect on the nutritional outcomes of children with CP compared to interventions focused on feeding skills or other behavioral modifications. Surgical interventions improved nutritional outcomes in both studies; however, none documented any adverse consequences of the surgical interventions. Conclusions: There is a substantial knowledge gap on nutrition interventions for children with CP in LMICs. This hinders the development of best practice guidelines for the nutritional management of children with CP in those settings. Findings suggest interventions directly related to growth/feeding of children had a better outcome than behavioral interventions. This should be considered in planning of nutrition-focused intervention or comprehensive services for children with CP in LMICs.
1. Introduction: Undernutrition is a major global public health challenge, and children with disability, such as cerebral palsy (CP), often suffer from undernutrition, especially in low- and middle-income countries (LMICs). The proportion of malnutrition is substantially higher among children with CP in LMICs compared to the general population [1,2,3,4,5]. Ensuring availability and accessibility to evidence-based nutrition interventions is essential to avert adverse outcomes of malnutrition among those vulnerable children [6]. Nutritional management of children with CP is complex as several interlinked factors interfere with their growth directly and indirectly [7]. While a few factors have been identified as common predictors of malnutrition (e.g., gross motor and oromotor function limitations) [1,8,9], the conceptual framework of malnutrition in children with CP is not yet clearly understood, especially in LMICs. To date, several clinical nutrition guidelines based on current evidence on surgical and non-surgical intervention outcomes have been published [6,10]. However, most of those studies were conducted in high-income countries (HICs) [6], whereas 85% of children with disabilities live in LMICs and the majority have no or limited access to any rehabilitation services [11,12]. With the extreme shortage of trained professionals including dietitians, in addition to the limited availability and accessibility to institutionalized intervention programs in LMICs [13], it is likely that not all the interventions found effective in HIC settings may be applicable in LMIC settings. In absence of optimal management, these children are at high risk of malnutrition, which can in turn impact their functional outcomes, quality of life, and survival [14,15,16]. Any available evidence in the context of LMICs could guide resource mobilization, assist in understanding the need and cost-effectiveness as well as to identify interventions that may improve the nutritional status of children with CP in low-resource settings. This is important to establish a platform for developing and implementing best practice CP-specific nutrition intervention guidelines in LMICs. In this scoping review, we aimed to systematically map the existing evidence on nutritional interventions for children with CP living in LMIC settings. The following research questions were explored: (i) What is known about the available nutrition interventions for children with CP in LMICs? (ii) What are the outcomes of those interventions on the nutritional status of participating children with CP in LMICs? 5. Conclusions: The current review highlights the knowledge gap and lack of available evidence on nutrition interventions for children with CP in LMICs. Strong evidence is essential to identify and determine the best practices and nutritional management guidelines for children with CP, especially for optimal utilization of the limited available resources in LMICs. National and international stakeholders, therefore, should make this a priority for future research and services. Considering the high proportion of children with CP with undernutrition in LMICs, existing interventions and services for children with CP should integrate a nutrition component. The current limited evidence suggests that nutrition-specific programs have more of a positive effect on the growth of children with CP than other strategies. 
This should be taken under consideration when planning for any nutrition focus and other comprehensive services for children with CP in LMICs.
Background: Malnutrition is substantially higher among children with cerebral palsy (CP) in low- and middle-income countries (LMICs) when compared with the general population. Access to appropriate interventions is crucial for better management of malnutrition and nutritional outcomes of those children. We aimed to review the existing evidence on nutrition interventions for children with CP in LMICs. Methods: Online databases, i.e., PubMed and Scopus, and Google Scholar were searched up to 10 January 2022, to identify peer-reviewed publications/evidence on LMIC focused nutritional management guidelines/interventions. Following title screening and abstract review, full articles that met the inclusion/exclusion criteria were retained for data charting. Information about the study characteristics, nutrition interventions, and their effectiveness were extracted. Descriptive data were reported. Results: Eight articles published between 2008 and 2019 were included with data from a total of n = 252 children with CP (age range: 1 y 0 m-18 y 7 m, 42% female). Five studies followed experimental design; n = 6 were conducted in hospital/clinic/center-based settings. Four studies focused on parental/caregiver training; n = 2 studies had surgical interventions (i.e., gastrostomy) and n = 1 provided neurodevelopmental therapy feeding intervention. Dietary modification as an intervention (or component) was reported in n = 5 studies and had better effect on the nutritional outcomes of children with CP compared to interventions focused on feeding skills or other behavioral modifications. Surgical interventions improved nutritional outcomes in both studies; however, none documented any adverse consequences of the surgical interventions. Conclusions: There is a substantial knowledge gap on nutrition interventions for children with CP in LMICs. This hinders the development of best practice guidelines for the nutritional management of children with CP in those settings. Findings suggest interventions directly related to growth/feeding of children had a better outcome than behavioral interventions. This should be considered in planning of nutrition-focused intervention or comprehensive services for children with CP in LMICs.
10,720
385
[ 1506, 70, 229, 169, 95, 135, 33, 1134, 89, 160, 109, 297, 287, 205 ]
19
[ "study", "studies", "children", "20", "26", "cp", "22", "23", "24", "21" ]
[ "nutritional outcomes children", "malnutrition vulnerable children", "framework malnutrition children", "nutrition children disabilities", "children cp undernutrition" ]
null
[CONTENT] nutrition | intervention | children | cerebral palsy | disability | LMICs [SUMMARY]
null
[CONTENT] nutrition | intervention | children | cerebral palsy | disability | LMICs [SUMMARY]
[CONTENT] nutrition | intervention | children | cerebral palsy | disability | LMICs [SUMMARY]
[CONTENT] nutrition | intervention | children | cerebral palsy | disability | LMICs [SUMMARY]
[CONTENT] nutrition | intervention | children | cerebral palsy | disability | LMICs [SUMMARY]
[CONTENT] Cerebral Palsy | Child | Developing Countries | Female | Humans | Income | Infant | Male | Malnutrition | Poverty [SUMMARY]
null
[CONTENT] Cerebral Palsy | Child | Developing Countries | Female | Humans | Income | Infant | Male | Malnutrition | Poverty [SUMMARY]
[CONTENT] Cerebral Palsy | Child | Developing Countries | Female | Humans | Income | Infant | Male | Malnutrition | Poverty [SUMMARY]
[CONTENT] Cerebral Palsy | Child | Developing Countries | Female | Humans | Income | Infant | Male | Malnutrition | Poverty [SUMMARY]
[CONTENT] Cerebral Palsy | Child | Developing Countries | Female | Humans | Income | Infant | Male | Malnutrition | Poverty [SUMMARY]
[CONTENT] nutritional outcomes children | malnutrition vulnerable children | framework malnutrition children | nutrition children disabilities | children cp undernutrition [SUMMARY]
null
[CONTENT] nutritional outcomes children | malnutrition vulnerable children | framework malnutrition children | nutrition children disabilities | children cp undernutrition [SUMMARY]
[CONTENT] nutritional outcomes children | malnutrition vulnerable children | framework malnutrition children | nutrition children disabilities | children cp undernutrition [SUMMARY]
[CONTENT] nutritional outcomes children | malnutrition vulnerable children | framework malnutrition children | nutrition children disabilities | children cp undernutrition [SUMMARY]
[CONTENT] nutritional outcomes children | malnutrition vulnerable children | framework malnutrition children | nutrition children disabilities | children cp undernutrition [SUMMARY]
[CONTENT] study | studies | children | 20 | 26 | cp | 22 | 23 | 24 | 21 [SUMMARY]
null
[CONTENT] study | studies | children | 20 | 26 | cp | 22 | 23 | 24 | 21 [SUMMARY]
[CONTENT] study | studies | children | 20 | 26 | cp | 22 | 23 | 24 | 21 [SUMMARY]
[CONTENT] study | studies | children | 20 | 26 | cp | 22 | 23 | 24 | 21 [SUMMARY]
[CONTENT] study | studies | children | 20 | 26 | cp | 22 | 23 | 24 | 21 [SUMMARY]
[CONTENT] lmics | children | cp | malnutrition | interventions | children cp | evidence | nutrition | outcomes | settings [SUMMARY]
null
[CONTENT] 26 | 20 | 23 | 22 | 21 | 24 | studies | 25 | study | 27 [SUMMARY]
[CONTENT] nutrition | children cp | children | cp | services | lmics | evidence | services children | services children cp | current [SUMMARY]
[CONTENT] study | children | 26 | studies | cp | 20 | 23 | 22 | 21 | 24 [SUMMARY]
[CONTENT] study | children | 26 | studies | cp | 20 | 23 | 22 | 21 | 24 [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] Eight | between 2008 and 2019 | 252 | 1 | 7 | 42% ||| Five | 6 ||| Four | 2 | 1 ||| 5 ||| [SUMMARY]
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| PubMed | Google Scholar | 10 January 2022 ||| ||| ||| ||| ||| Eight | between 2008 and 2019 | 252 | 1 | 7 | 42% ||| Five | 6 ||| Four | 2 | 1 ||| 5 ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| PubMed | Google Scholar | 10 January 2022 ||| ||| ||| ||| ||| Eight | between 2008 and 2019 | 252 | 1 | 7 | 42% ||| Five | 6 ||| Four | 2 | 1 ||| 5 ||| ||| ||| ||| ||| [SUMMARY]
Predicting necrosis in residual mass analysis after retroperitoneal lymph node dissection: a retrospective study.
23021209
Recent studies have demonstrated that pathological analysis of retroperitoneal residual masses of patients with testicular germ cell tumors revealed findings of necrotic debris or fibrosis in up to 50% of patients. We aimed at pursuing a clinical and pathological review of patients undergoing post chemotherapy retroperitoneal lymph node dissection (PC-RPLND) in order to identify variables that may help predict necrosis in the retroperitoneum.
BACKGROUND
We performed a retrospective analysis of all patients who underwent PC-RPLND at the University Hospital of the University of São Paulo and Cancer Institute of Sao Paulo between January 2005 and September 2011. Clinical and pathological data were obtained and consisted basically of: measures of retroperitoneal masses, histology of the orchiectomy specimen, serum tumor marker and retroperitoneal nodal size before and after chemotherapy.
METHODS
We gathered a total of 32 patients with a mean age of 29.7; pathological analysis in our series demonstrated that 15 (47%) had necrosis in residual retroperitoneal masses, 15 had teratoma (47%) and 2 (6.4%) had viable germ cell tumors (GCT). The mean size of the retroperitoneal mass was 4.94 cm in our sample, without a difference between the groups (P = 0.176). From all studied variables, relative changes in retroperitoneal lymph node size (P = 0.04), the absence of teratoma in the orchiectomy specimen (P = 0.03) and the presence of choriocarcinoma in the testicular analysis after orchiectomy (P = 0.03) were statistically significant predictors of the presence of necrosis. A reduction level of 35% was therefore suggested to be the best cutoff for predicting the absence of tumor in the retroperitoneum with a sensitivity of 73.3% and specificity of 82.4%.
RESULTS
Even though retroperitoneal lymph node dissection remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCT or teratoma in the retroperitoneum and a surveillance program could be considered.
CONCLUSIONS
[ "Adolescent", "Adult", "Humans", "Lymph Node Excision", "Male", "Middle Aged", "Necrosis", "Neoplasms, Germ Cell and Embryonal", "Retroperitoneal Neoplasms", "Retrospective Studies", "Teratoma", "Testicular Neoplasms" ]
3502267
Background
Testicular cancer has become one of the most curable solid neoplasms and serves as a paradigm for the multimodal treatment of malignancies. The appropriate integration of chemotherapy, retroperitoneal lymph node dissection (RPLND) and observation for the management of testis cancer has resulted in overall survival rates greater than 90% [1,2]. RPLND plays an important role in the management of patients with testicular germ cell tumors (GCTs), especially in those with residual masses after chemotherapy [3]. To date, series have demonstrated that pathological analysis of these masses reveals findings of necrotic debris or fibrosis in 40% to 50% of patients, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [4]. Even though early recognition and resection of teratoma in the retroperitoneum after chemotherapy have been accompanied by excellent prognosis, once it was initially thought to represent a benign course when present in the retroperitoneal space, but the untreated disease may have a lethal potential due to progressive local growth or malignant transformation, not to mention its classical unresponsiveness to conventional cisplatin-based chemotherapy regimens [3-6]. As for residual viable GCT, the consequence of its incomplete resection is certain disease progression [7]. Therefore, a more aggressive approach should always be considered when treating patients with teratoma or viable GCT in the retroperitoneum [7,8]. Thus, the appropriate approach to residual masses following chemotherapy remains a controversial issue, since the literature has shown that as many as half of resected masses are basically composed of necrosis or fibrotic tissue and any sort of adjuvant therapy could be waived [9]. In order to avoid a great number of apparently unnecessary post-chemotherapy RPLND (PC-RPLND), many studies have tried to develop algorithms to predict the presence of necrosis in the retroperitoneum. However, currently predictive models and imaging modalities cannot reliably predict the pathological finding of necrosis/fibrosis at PC-RPLND [9]. Some authors have established that patients without teratoma in the primary tumor and a shrinkage of 90% or more in retroperitoneal mass had little chance of having viable GCT or teratoma and could be safely put under a surveillance program with periodic imaging scans [10]. However, prospective analyses have demonstrated that approximately 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy computed tomography (CT) results and no teratoma in the primary tumor [9]. The purpose of this work is to pursue a clinical and pathological review of patients undergoing PC-RPLND at a reference university oncology center in Brazil, in order to identify variables that may help predict the histological finding of necrosis in the retroperitoneum and perhaps establish a differentiated surveillance protocol.
Methods
We performed a retrospective analysis of all patients from our computerized database who underwent PC-RPLND at our service between January 2005 and May 2011. Patients were operated on after having undergone three to four cycles of primary chemotherapy with bleomycin, etoposide and cisplatin. Clinical and pathological data were obtained and consisted basically of measures of retroperitoneal masses, serum tumor markers, histology of the orchiectomy specimen, tumor marker values and retroperitoneal nodal size before and after chemotherapy. The presence of either immature or mature teratoma in the resected specimen, as well as teratoma with malignant transformation, was considered part of the same group. Choriocarcinoma, yolk sac tumors, embryonal carcinoma and seminoma were considered viable GCTs. All histological findings were submitted to quantitative and qualitative analysis. Post-chemotherapy alpha-fetoprotein (AFP) and lactate dehydrogenase (LDH) levels were registered as a continuous variable while human chorionic gonadotropin (hCG) levels were considered a categorical variable ranging from undetectable to detectable when serum concentrations were greater than 3 mIU. Retroperitoneal nodal size before and after chemotherapy was determined by the longest transverse diameter of the largest mass on CT imaging. Relative change in nodal size before and after chemotherapy was calculated by dividing post-chemotherapy nodal size by pre-chemotherapy nodal size and was analyzed as a continuous variable. Multiple variables were analyzed independently in order to establish any predictive value for finding necrotic tissue in the retroperitoneum. Statistical analysis was performed using the Statistical Package for Social Sciences (SPSS, version 12.0, Chicago, IL, USA), applying the Mann–Whitney U test for non-parametric variables and the Fisher's exact test for categorical variables, with the level of significance set at P < 0.05. A cut-off level for predicting necrosis at retroperitoneal residual mass analysis was sought by constructing a receiver operating characteristic (ROC) curve of all significant variables, which were generated using graphical visualization in our statistical software. This study was carried-out in accordance with the Ethics Committee regulations.
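The relative-change calculation and the univariate tests described in this section map directly onto standard statistical tooling. The sketch below uses SciPy with made-up values as a rough illustration; the original analysis was performed in SPSS, and none of the numbers are the study's patient data.

```python
# Sketch of the analysis steps described above: relative nodal change (post/pre),
# the Mann-Whitney U test for continuous variables, and Fisher's exact test for
# categorical variables. All numbers are illustrative, not the study's data.
from scipy.stats import mannwhitneyu, fisher_exact

def relative_change(pre_cm: float, post_cm: float) -> float:
    """Post-chemotherapy nodal size divided by pre-chemotherapy nodal size.
    A value of 0.65 corresponds to a 35% relative reduction."""
    return post_cm / pre_cm

# Continuous variable compared between pathology groups (illustrative values):
necrosis = [relative_change(5.0, 2.0), 0.45, 0.30, 0.62, 0.48]  # mostly strong shrinkage
teratoma_or_gct = [0.80, 0.95, 0.70, 1.10, 0.85]                # little or no shrinkage
u_stat, p_cont = mannwhitneyu(necrosis, teratoma_or_gct, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p_cont:.3f}")

# Categorical variable as a hypothetical 2x2 table (teratoma in the primary tumor
# vs. pathology group); counts are made up for illustration.
table = [[5, 13],   # teratoma present in primary: necrosis vs. teratoma/viable GCT
         [10, 4]]   # teratoma absent in primary
odds_ratio, p_cat = fisher_exact(table)
print(f"Fisher's exact p-value: {p_cat:.3f}")
```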
null
null
Conclusions
Even though RPLND remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCTs or teratoma in the retroperitoneum.
[ "Background", "Results and discussion", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Testicular cancer has become one of the most curable solid neoplasms and serves as a paradigm for the multimodal treatment of malignancies. The appropriate integration of chemotherapy, retroperitoneal lymph node dissection (RPLND) and observation for the management of testis cancer has resulted in overall survival rates greater than 90% [1,2].\nRPLND plays an important role in the management of patients with testicular germ cell tumors (GCTs), especially in those with residual masses after chemotherapy [3]. To date, series have demonstrated that pathological analysis of these masses reveals findings of necrotic debris or fibrosis in 40% to 50% of patients, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [4].\nEven though early recognition and resection of teratoma in the retroperitoneum after chemotherapy have been accompanied by excellent prognosis, once it was initially thought to represent a benign course when present in the retroperitoneal space, but the untreated disease may have a lethal potential due to progressive local growth or malignant transformation, not to mention its classical unresponsiveness to conventional cisplatin-based chemotherapy regimens [3-6]. As for residual viable GCT, the consequence of its incomplete resection is certain disease progression [7]. Therefore, a more aggressive approach should always be considered when treating patients with teratoma or viable GCT in the retroperitoneum [7,8].\nThus, the appropriate approach to residual masses following chemotherapy remains a controversial issue, since the literature has shown that as many as half of resected masses are basically composed of necrosis or fibrotic tissue and any sort of adjuvant therapy could be waived [9]. In order to avoid a great number of apparently unnecessary post-chemotherapy RPLND (PC-RPLND), many studies have tried to develop algorithms to predict the presence of necrosis in the retroperitoneum. However, currently predictive models and imaging modalities cannot reliably predict the pathological finding of necrosis/fibrosis at PC-RPLND [9].\nSome authors have established that patients without teratoma in the primary tumor and a shrinkage of 90% or more in retroperitoneal mass had little chance of having viable GCT or teratoma and could be safely put under a surveillance program with periodic imaging scans [10]. However, prospective analyses have demonstrated that approximately 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy computed tomography (CT) results and no teratoma in the primary tumor [9].\nThe purpose of this work is to pursue a clinical and pathological review of patients undergoing PC-RPLND at a reference university oncology center in Brazil, in order to identify variables that may help predict the histological finding of necrosis in the retroperitoneum and perhaps establish a differentiated surveillance protocol.", "We gathered information on a total of 32 patients, who were 18- to 49-years old (mean age 30.5) and harbored seminomatous, nonseminomatous or mixed tumors in the testis tissue in different clinical stages. Three patients (9.4%) had seminoma, five (16%) had pure nonseminoma (choriocarcinoma, yolk sac tumors, embryonal carcinoma, teratoma, teratocarcinoma) and 24 (75%) harbored mixed nonseminomatous GCT (NSGCT). 
None of the patients with seminoma had normal AFP levels, indicating that these patients may have had tumors with similar biological behavior to those with NSGCT.\nAt diagnosis, seven (22%) patients were classified as having a stage I disease, while 21 (66%) were stage II and four (12.5%) were stage III. Clinical stage I patients were individuals with longer follow-up, who presented with retroperitoneal disease nonresponsive to chemotherapy and ultimately underwent PC-RPLND.\nPathological analysis in our series demonstrated that 15 (47%) patients had necrosis in residual retroperitoneal masses, 15 had teratoma (47%) and two (6%) had viable GCT: one seminoma and one yolk sac tumor. For statistical reasons and in alignment with the aims of this study, we divided those patients into two groups, assembling patients with viable GCT and teratoma and analyzing them as one group. Mean size of the retroperitoneal mass was 4.94 cm in our sample, 3.79 cm in the group of necrosis and 5.96 cm in the group of teratoma and viable GCT (P = 0.176). There was also no difference between groups regarding stratification of nodal size, as shown in Table 1.\n\nDistribution according to nodal size in men undergoing PC-RPLND\nData are shown as number (%). GCT, germ cell tumor; PC-RPLND, post-chemotherapy retroperitoneal lymph node dissection.\nPrimary tumor histology revealed embryonal cell carcinoma in 56%, seminomatous elements in 16%, yolk sac tumor in 19% and teratomatous elements in 56% of the patients. Of the 18 patients with teratomatous elements in the primary tumor, 13 (72%) had teratoma in the retroperitoneum at PC-RPLND. Even in the absence of teratoma in the primary tumor, teratoma was present in the retroperitoneum in five patients (16%).\nWhen comparing pathological analysis of primary tumor specimens, we found a statistical difference when comparing the prevalence of teratoma and choriocarcinoma. Other findings, such as the presence of seminoma, yolk sac tumor, embryonal carcinoma, endodermal sinus tumor lymph vascular invasion (LVI), rete testis invasion and spermatic invasion were similar between groups.\nUnivariate analysis of quantitative components at orchiectomy histology, retroperitoneal node size with its relative measures and serum markers are shown in Table 2.\n\nUnivariate analysis predicting necrosis at PC-RPLND\n** unable to calculate. Data expressed as mean ± standard deviation. AFP, alpha-fetoprotein; GCT, germ cell tumor; hCG, human chorionic gonadotropin; LDH, lactate dehydrogenase; PC-RPLND, post-chemotherapy retroperitoneal lymph node dissection; RP, retroperitoneal.\nEven though the presence of teratoma in the primary tumor was an important negative predictor for the finding of necrosis, the quantitative analysis has proven to be statistically irrelevant.\nWhile relative reduction in mass size after chemotherapy has been shown to be an important predictor of necrosis, even when considering patients not responding to chemotherapy, absolute reduction and enlargement of the residual mass did not show a significant difference between groups.\nThere was a statistical difference in AFP levels between groups; however, when comparing relative changes in AFP levels after chemotherapy, no difference was found. Comparison of LDH levels and their relative changes was nonsignificant. 
We were unable to compare hCG levels between groups because we only had two patients whose hCG levels were not undetectable.\nOf all the studied variables, relative changes in retroperitoneal lymph node size (P =0.04), the absence of teratoma in the orchiectomy specimen (P =0.03) and the presence of choriocarcinoma in the testicular analysis after orchiectomy (P =0.03) were statistically significant predictors of the presence of necrosis in the retroperitoneum.\nROC curves were built for variables that were independent predictors of necrosis at PC-RPLND. Even though the size of the retroperitoneal mass after chemotherapy showed no statistical difference, we also built a curve for it. The variable relative reduction after chemotherapy was the only one with predictive value according to the Youlden index. An area under the curve (AUC) of 0.710 was obtained and a reduction level of 35% was therefore suggested to be the best cutoff for predicting the absence of tumor in the retroperitoneum with a sensitivity of 73% and a specificity of 82%. ROC curves of isolated size of retroperitoneal mass, absence of teratoma and presence of choriocarcinoma did not indicate significant findings.\nIn our institution, relative change in retroperitoneal node size, absence of teratoma in the orchiectomy specimen and the presence of choriocarcinoma in the testicular analysis after orchiectomy were statistically significant predictors of the presence of necrosis in the retroperitoneum and a surveillance program could be considered, given the uncertainty in predicting the histology of residual masses after chemotherapy in patients with metastatic testicular tumors.\nAnalysis of residual retroperitoneal masses after chemotherapy is being increasingly regarded as a fundamental issue, not only for orienting adjuvant therapies, but also because it has prognostic implications [11]. Outcome assessments in patients with NSGCT have demonstrated that incomplete resection of residual retroperitoneal masses, the size of residual retroperitoneal masses and the finding of teratoma and viable GCT at RPLND independently predict disease progression and relapse [11,12].\nThe size of residual retroperitoneal masses after chemotherapy has traditionally been considered when choosing the subsequent treatment modality [9]. Studies have shown that residual masses smaller than 2 cm are considered one of the most significant predictors for finding necrosis at PC-RPLND at logistic regression [4]. Furthermore, a number of investigators continue to base the decision to perform PC-RPLND on residual mass size alone, obviating PC-RPLND in patients with residual masses of 1 cm or less [13].\nHowever, recent studies have shown that after chemotherapy a third of retroperitoneal masses of <2 cm harbored either teratoma or viable GCT [14]. Even though we found a trend of having only necrosis in the retroperitoneum for masses <2 cm (P = 0.09), in the present study residual mass size alone has not appeared to be a good predictor of necrosis, with nonsignificant size differences between groups and an underrated AUC in ROC curves. In addition, one case of teratoma in a 1 cm mass was registered in our sample. Therefore, we advocate that the decision of not operating on patients cannot be based on mass size alone due to the lack of consensus in the literature [9,10,13,14].\nTraditional series report necrotic debris or fibrosis in 40% to 50%, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [3]. 
In more recent analyses, the incidence of residual microscopic teratoma in the retroperitoneum has decreased to approximately 20% to 25%, with an increase of necrotic tissue findings of up to 60% and stable rates of viable GCT [15]. It has been reported that an increase in the proportion of necrosis is generally attributed to stage migration and the use of more effective chemotherapy regimens, especially in patients achieving a complete response to chemotherapy [15].\nIn our series we found a distribution pattern similar to former studies, with almost 50% of patients harboring teratoma in the retroperitoneal histology at RPLND, which may suggest that our chemotherapy regimens have been having inferior rates of complete responders or simply because our group is composed of patients in more advanced stages.\nTeratoma-negative primary tumor and volumetric regression of at least 90% after chemotherapy have been described as being highly predictive of harboring necrosis only at PC-RPLND [9,16]. Other series include normal serum tumor markers, and node size < 2 cm, the presence of yolk sac tumor or embryonal carcinoma on primary tumor and lymph vascular invasion, among others [9,10,15-17]. On the other hand, some authors have been unable to identify variables to be highly predictive of harboring necrosis only at PC-RPLND [15,18]. Nevertheless, currently predictive models fail to accurately predict necrosis in the retroperitoneum, since almost 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy CT and no teratoma in the primary tumor [9]. A study carried out at the Memorial Sloan-Kettering Cancer Center revealed that 26% of patients had teratoma or viable GCT, even those with radiographically normal retroperitoneum who underwent PC-RPLND [9].\nThe relatively small number of patients when compared to other institutional retrospective studies is a limitation of the present study; however, other studies with similar sample sizes have also come to significant findings.\nIn the present study, relative changes in retroperitoneal node size stood out as the single best predictor of the presence of necrosis in the retroperitoneum, which is in accordance with actual logistic regression models [9,10]. While other series suggest 90% as a shrinkage cutoff value to reliably predict necrosis, our quantitative analysis have demonstrated a cutoff level of 35% with approximate sensitivity and specificity [9].\nThe absence of teratoma has also been demonstrated to be significant in qualitative analysis, but not in quantitative analysis. The presence of choriocarcinoma in the testicular histology predicted necrosis in the retroperitoneum, despite only four (12.5%) patients who had such a finding. This association is an interesting finding that has not been previously reported. We reviewed the literature and no possible explanations were found. 
We believe further investigation is necessary to confirm this finding, since the number of patients in our sample is relatively small.", "AFP: Alpha-fetoprotein; AUC: Area under the curve; CT: Computed tomography; GCT: Germ cell tumor; hCG: Human chorionic gonadotropin; LDH: Lactate dehydrogenase; LVI: Lymph vascular invasion; NSGCT: Nonseminomatous germ cell tumor; PC-RPLND: Post chemotherapy retroperitoneal lymph node dissection; ROC: Receiver operating characteristic; RPLND: Retroperitoneal lymph node dissection.", "The authors declare that they have no competing interests.", "EPM and DKA gathered and compiled data and have been involved in drafting the manuscript. AJN and STL participated in the design of the study and performed the statistical analysis. ACS and MS revised the manuscript critically for important intellectual content. MS and MFD have given final approval of the version to be published. All authors read and approved the final manuscript." ]
[ null, null, null, null, null ]
[ "Background", "Methods", "Results and discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Testicular cancer has become one of the most curable solid neoplasms and serves as a paradigm for the multimodal treatment of malignancies. The appropriate integration of chemotherapy, retroperitoneal lymph node dissection (RPLND) and observation for the management of testis cancer has resulted in overall survival rates greater than 90% [1,2].\nRPLND plays an important role in the management of patients with testicular germ cell tumors (GCTs), especially in those with residual masses after chemotherapy [3]. To date, series have demonstrated that pathological analysis of these masses reveals findings of necrotic debris or fibrosis in 40% to 50% of patients, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [4].\nEven though early recognition and resection of teratoma in the retroperitoneum after chemotherapy have been accompanied by excellent prognosis, once it was initially thought to represent a benign course when present in the retroperitoneal space, but the untreated disease may have a lethal potential due to progressive local growth or malignant transformation, not to mention its classical unresponsiveness to conventional cisplatin-based chemotherapy regimens [3-6]. As for residual viable GCT, the consequence of its incomplete resection is certain disease progression [7]. Therefore, a more aggressive approach should always be considered when treating patients with teratoma or viable GCT in the retroperitoneum [7,8].\nThus, the appropriate approach to residual masses following chemotherapy remains a controversial issue, since the literature has shown that as many as half of resected masses are basically composed of necrosis or fibrotic tissue and any sort of adjuvant therapy could be waived [9]. In order to avoid a great number of apparently unnecessary post-chemotherapy RPLND (PC-RPLND), many studies have tried to develop algorithms to predict the presence of necrosis in the retroperitoneum. However, currently predictive models and imaging modalities cannot reliably predict the pathological finding of necrosis/fibrosis at PC-RPLND [9].\nSome authors have established that patients without teratoma in the primary tumor and a shrinkage of 90% or more in retroperitoneal mass had little chance of having viable GCT or teratoma and could be safely put under a surveillance program with periodic imaging scans [10]. However, prospective analyses have demonstrated that approximately 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy computed tomography (CT) results and no teratoma in the primary tumor [9].\nThe purpose of this work is to pursue a clinical and pathological review of patients undergoing PC-RPLND at a reference university oncology center in Brazil, in order to identify variables that may help predict the histological finding of necrosis in the retroperitoneum and perhaps establish a differentiated surveillance protocol.", "We performed a retrospective analysis of all patients from our computerized database who underwent PC-RPLND at our service between January 2005 and May 2011. 
Patients were operated on after having undergone three to four cycles of primary chemotherapy with bleomycin, etoposide and cisplatin.\nClinical and pathological data were obtained and consisted basically of measures of retroperitoneal masses, serum tumor markers, histology of the orchiectomy specimen, tumor marker values and retroperitoneal nodal size before and after chemotherapy.\nThe presence of either immature or mature teratoma in the resected specimen, as well as teratoma with malignant transformation, was considered part of the same group. Choriocarcinoma, yolk sac tumors, embryonal carcinoma and seminoma were considered viable GCTs. All histological findings were submitted to quantitative and qualitative analysis.\nPost-chemotherapy alpha-fetoprotein (AFP) and lactate dehydrogenase (LDH) levels were registered as a continuous variable while human chorionic gonadotropin (hCG) levels were considered a categorical variable ranging from undetectable to detectable when serum concentrations were greater than 3 mIU.\nRetroperitoneal nodal size before and after chemotherapy was determined by the longest transverse diameter of the largest mass on CT imaging. Relative change in nodal size before and after chemotherapy was calculated by dividing post-chemotherapy nodal size by pre-chemotherapy nodal size and was analyzed as a continuous variable.\nMultiple variables were analyzed independently in order to establish any predictive value for finding necrotic tissue in the retroperitoneum. Statistical analysis was performed using the Statistical Package for Social Sciences (SPSS, version 12.0, Chicago, IL, USA), applying the Mann–Whitney U test for non-parametric variables and the Fisher's exact test for categorical variables, with the level of significance set at P < 0.05.\nA cut-off level for predicting necrosis at retroperitoneal residual mass analysis was sought by constructing a receiver operating characteristic (ROC) curve of all significant variables, which were generated using graphical visualization in our statistical software. This study was carried-out in accordance with the Ethics Committee regulations.", "We gathered information on a total of 32 patients, who were 18- to 49-years old (mean age 30.5) and harbored seminomatous, nonseminomatous or mixed tumors in the testis tissue in different clinical stages. Three patients (9.4%) had seminoma, five (16%) had pure nonseminoma (choriocarcinoma, yolk sac tumors, embryonal carcinoma, teratoma, teratocarcinoma) and 24 (75%) harbored mixed nonseminomatous GCT (NSGCT). None of the patients with seminoma had normal AFP levels, indicating that these patients may have had tumors with similar biological behavior to those with NSGCT.\nAt diagnosis, seven (22%) patients were classified as having a stage I disease, while 21 (66%) were stage II and four (12.5%) were stage III. Clinical stage I patients were individuals with longer follow-up, who presented with retroperitoneal disease nonresponsive to chemotherapy and ultimately underwent PC-RPLND.\nPathological analysis in our series demonstrated that 15 (47%) patients had necrosis in residual retroperitoneal masses, 15 had teratoma (47%) and two (6%) had viable GCT: one seminoma and one yolk sac tumor. For statistical reasons and in alignment with the aims of this study, we divided those patients into two groups, assembling patients with viable GCT and teratoma and analyzing them as one group. 
Mean size of the retroperitoneal mass was 4.94 cm in our sample, 3.79 cm in the necrosis group and 5.96 cm in the teratoma/viable GCT group (P = 0.176). There was also no difference between groups regarding stratification of nodal size, as shown in Table 1.\n\nDistribution according to nodal size in men undergoing PC-RPLND\nData are shown as number (%). GCT, germ cell tumor; PC-RPLND, post-chemotherapy retroperitoneal lymph node dissection.\nPrimary tumor histology revealed embryonal cell carcinoma in 56%, seminomatous elements in 16%, yolk sac tumor in 19% and teratomatous elements in 56% of the patients. Of the 18 patients with teratomatous elements in the primary tumor, 13 (72%) had teratoma in the retroperitoneum at PC-RPLND. Even in the absence of teratoma in the primary tumor, teratoma was present in the retroperitoneum in five patients (16%).\nOn pathological analysis of the primary tumor specimens, we found a statistical difference between groups in the prevalence of teratoma and of choriocarcinoma. Other findings, such as the presence of seminoma, yolk sac tumor, embryonal carcinoma, endodermal sinus tumor, lymph vascular invasion (LVI), rete testis invasion and spermatic invasion, were similar between groups.\nUnivariate analysis of quantitative components at orchiectomy histology, retroperitoneal node size with its relative measures and serum markers are shown in Table 2.\n\nUnivariate analysis predicting necrosis at PC-RPLND\n** unable to calculate. Data expressed as mean ± standard deviation. AFP, alpha-fetoprotein; GCT, germ cell tumor; hCG, human chorionic gonadotropin; LDH, lactate dehydrogenase; PC-RPLND, post-chemotherapy retroperitoneal lymph node dissection; RP, retroperitoneal.\nEven though the presence of teratoma in the primary tumor was an important negative predictor of the finding of necrosis, its quantitative analysis did not reach statistical significance.\nWhile relative reduction in mass size after chemotherapy was shown to be an important predictor of necrosis, even when considering patients not responding to chemotherapy, neither absolute reduction nor enlargement of the residual mass showed a significant difference between groups.\nThere was a statistical difference in AFP levels between groups; however, when comparing relative changes in AFP levels after chemotherapy, no difference was found. Comparison of LDH levels and their relative changes was nonsignificant. We were unable to compare hCG levels between groups because only two patients had detectable hCG levels.\nOf all the studied variables, relative change in retroperitoneal lymph node size (P = 0.04), the absence of teratoma in the orchiectomy specimen (P = 0.03) and the presence of choriocarcinoma in the testicular analysis after orchiectomy (P = 0.03) were statistically significant predictors of the presence of necrosis in the retroperitoneum.\nROC curves were built for variables that were independent predictors of necrosis at PC-RPLND. Even though the size of the retroperitoneal mass after chemotherapy showed no statistical difference, we also built a curve for it. The variable relative reduction after chemotherapy was the only one with predictive value according to the Youden index. An area under the curve (AUC) of 0.710 was obtained, and a reduction level of 35% was therefore suggested to be the best cutoff for predicting the absence of tumor in the retroperitoneum, with a sensitivity of 73% and a specificity of 82%. 
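For readers unfamiliar with how such a cutoff is derived, the sketch below shows one common way to obtain a Youden-index-optimal threshold from a ROC curve. The data are hypothetical and the snippet is illustrative only; it is not a reproduction of the study analysis, which was performed in SPSS.

```python
# Illustrative derivation of a Youden-index cutoff from a ROC curve (hypothetical data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# 1 = necrosis only at PC-RPLND, 0 = teratoma/viable GCT (hypothetical labels)
necrosis = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0])
# Relative reduction in retroperitoneal mass size after chemotherapy (0-1, hypothetical)
reduction = np.array([0.9, 0.6, 0.2, 0.75, 0.3, 0.1, 0.5, 0.35, 0.8, 0.15, 0.55, 0.25])

fpr, tpr, thresholds = roc_curve(necrosis, reduction)
auc = roc_auc_score(necrosis, reduction)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR;
# the threshold that maximises J is taken as the optimal cutoff.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"Optimal cutoff = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```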
ROC curves of the isolated size of the retroperitoneal mass, absence of teratoma and presence of choriocarcinoma did not indicate significant findings.\nIn our institution, relative change in retroperitoneal node size, absence of teratoma in the orchiectomy specimen and the presence of choriocarcinoma in the testicular analysis after orchiectomy were statistically significant predictors of the presence of necrosis in the retroperitoneum. Given the uncertainty in predicting the histology of residual masses after chemotherapy in patients with metastatic testicular tumors, a surveillance program could be considered for such patients.\nAnalysis of residual retroperitoneal masses after chemotherapy is being increasingly regarded as a fundamental issue, not only for orienting adjuvant therapies, but also because it has prognostic implications [11]. Outcome assessments in patients with NSGCT have demonstrated that incomplete resection of residual retroperitoneal masses, the size of residual retroperitoneal masses and the finding of teratoma and viable GCT at RPLND independently predict disease progression and relapse [11,12].\nThe size of residual retroperitoneal masses after chemotherapy has traditionally been considered when choosing the subsequent treatment modality [9]. Studies have shown that residual masses smaller than 2 cm are one of the most significant predictors of finding necrosis at PC-RPLND on logistic regression [4]. Furthermore, a number of investigators continue to base the decision to perform PC-RPLND on residual mass size alone, obviating PC-RPLND in patients with residual masses of 1 cm or less [13].\nHowever, recent studies have shown that after chemotherapy a third of retroperitoneal masses of <2 cm harbored either teratoma or viable GCT [14]. Even though we found a trend toward having only necrosis in the retroperitoneum for masses <2 cm (P = 0.09), in the present study residual mass size alone did not appear to be a good predictor of necrosis, with nonsignificant size differences between groups and a low AUC on ROC analysis. In addition, one case of teratoma in a 1 cm mass was registered in our sample. Therefore, we advocate that the decision not to operate cannot be based on mass size alone, given the lack of consensus in the literature [9,10,13,14].\nTraditional series report necrotic debris or fibrosis in 40% to 50%, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [3]. In more recent analyses, the incidence of residual microscopic teratoma in the retroperitoneum has decreased to approximately 20% to 25%, with an increase in findings of necrotic tissue of up to 60% and stable rates of viable GCT [15]. The increase in the proportion of necrosis is generally attributed to stage migration and the use of more effective chemotherapy regimens, especially in patients achieving a complete response to chemotherapy [15].\nIn our series we found a distribution pattern similar to former studies, with almost 50% of patients harboring teratoma on retroperitoneal histology at RPLND, which may suggest that our chemotherapy regimens have achieved lower rates of complete response, or simply that our group comprised patients at more advanced stages.\nTeratoma-negative primary tumor and volumetric regression of at least 90% after chemotherapy have been described as being highly predictive of harboring necrosis only at PC-RPLND [9,16]. 
Other series have included normal serum tumor markers, node size < 2 cm, the presence of yolk sac tumor or embryonal carcinoma in the primary tumor and lymph vascular invasion as predictors, among others [9,10,15-17]. On the other hand, some authors have been unable to identify variables that are highly predictive of harboring necrosis only at PC-RPLND [15,18]. Nevertheless, current predictive models fail to accurately predict necrosis in the retroperitoneum, since almost 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy CT and no teratoma in the primary tumor [9]. A study carried out at the Memorial Sloan-Kettering Cancer Center revealed that 26% of patients had teratoma or viable GCT, even those with a radiographically normal retroperitoneum who underwent PC-RPLND [9].\nThe relatively small number of patients compared to other institutional retrospective studies is a limitation of the present study; however, other studies with similar sample sizes have also reached significant findings.\nIn the present study, relative change in retroperitoneal node size stood out as the single best predictor of the presence of necrosis in the retroperitoneum, which is in accordance with current logistic regression models [9,10]. While other series suggest 90% as a shrinkage cutoff value to reliably predict necrosis, our quantitative analysis demonstrated a cutoff level of 35% with similar sensitivity and specificity [9].\nThe absence of teratoma was also demonstrated to be significant in the qualitative analysis, but not in the quantitative analysis. The presence of choriocarcinoma in the testicular histology predicted necrosis in the retroperitoneum, although only four (12.5%) patients had such a finding. This association is an interesting finding that has not been previously reported. We reviewed the literature and no possible explanations were found. We believe further investigation is necessary to confirm this finding, since the number of patients in our sample is relatively small.", "Even though RPLND remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCTs or teratoma in the retroperitoneum.", "AFP: Alpha-fetoprotein; AUC: Area under the curve; CT: Computed tomography; GCT: Germ cell tumor; hCG: Human chorionic gonadotropin; LDH: Lactate dehydrogenase; LVI: Lymph vascular invasion; NSGCT: Nonseminomatous germ cell tumor; PC-RPLND: Post chemotherapy retroperitoneal lymph node dissection; ROC: Receiver operating characteristic; RPLND: Retroperitoneal lymph node dissection.", "The authors declare that they have no competing interests.", "EPM and DKA gathered and compiled data and have been involved in drafting the manuscript. AJN and STL participated in the design of the study and performed the statistical analysis. ACS and MS revised the manuscript critically for important intellectual content. MS and MFD have given final approval of the version to be published. All authors read and approved the final manuscript." ]
[ null, "methods", null, "conclusions", null, null, null ]
[ "Testicular cancer", "Retroperitoneal lymph node dissection", "Necrosis", "Teratoma" ]
Background: Testicular cancer has become one of the most curable solid neoplasms and serves as a paradigm for the multimodal treatment of malignancies. The appropriate integration of chemotherapy, retroperitoneal lymph node dissection (RPLND) and observation for the management of testis cancer has resulted in overall survival rates greater than 90% [1,2]. RPLND plays an important role in the management of patients with testicular germ cell tumors (GCTs), especially in those with residual masses after chemotherapy [3]. To date, series have demonstrated that pathological analysis of these masses reveals findings of necrotic debris or fibrosis in 40% to 50% of patients, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [4]. Even though early recognition and resection of teratoma in the retroperitoneum after chemotherapy have been accompanied by excellent prognosis, once it was initially thought to represent a benign course when present in the retroperitoneal space, but the untreated disease may have a lethal potential due to progressive local growth or malignant transformation, not to mention its classical unresponsiveness to conventional cisplatin-based chemotherapy regimens [3-6]. As for residual viable GCT, the consequence of its incomplete resection is certain disease progression [7]. Therefore, a more aggressive approach should always be considered when treating patients with teratoma or viable GCT in the retroperitoneum [7,8]. Thus, the appropriate approach to residual masses following chemotherapy remains a controversial issue, since the literature has shown that as many as half of resected masses are basically composed of necrosis or fibrotic tissue and any sort of adjuvant therapy could be waived [9]. In order to avoid a great number of apparently unnecessary post-chemotherapy RPLND (PC-RPLND), many studies have tried to develop algorithms to predict the presence of necrosis in the retroperitoneum. However, currently predictive models and imaging modalities cannot reliably predict the pathological finding of necrosis/fibrosis at PC-RPLND [9]. Some authors have established that patients without teratoma in the primary tumor and a shrinkage of 90% or more in retroperitoneal mass had little chance of having viable GCT or teratoma and could be safely put under a surveillance program with periodic imaging scans [10]. However, prospective analyses have demonstrated that approximately 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy computed tomography (CT) results and no teratoma in the primary tumor [9]. The purpose of this work is to pursue a clinical and pathological review of patients undergoing PC-RPLND at a reference university oncology center in Brazil, in order to identify variables that may help predict the histological finding of necrosis in the retroperitoneum and perhaps establish a differentiated surveillance protocol. Methods: We performed a retrospective analysis of all patients from our computerized database who underwent PC-RPLND at our service between January 2005 and May 2011. Patients were operated on after having undergone three to four cycles of primary chemotherapy with bleomycin, etoposide and cisplatin. Clinical and pathological data were obtained and consisted basically of measures of retroperitoneal masses, serum tumor markers, histology of the orchiectomy specimen, tumor marker values and retroperitoneal nodal size before and after chemotherapy. 
The presence of either immature or mature teratoma in the resected specimen, as well as teratoma with malignant transformation, was considered part of the same group. Choriocarcinoma, yolk sac tumors, embryonal carcinoma and seminoma were considered viable GCTs. All histological findings were submitted to quantitative and qualitative analysis. Post-chemotherapy alpha-fetoprotein (AFP) and lactate dehydrogenase (LDH) levels were registered as a continuous variable while human chorionic gonadotropin (hCG) levels were considered a categorical variable ranging from undetectable to detectable when serum concentrations were greater than 3 mIU. Retroperitoneal nodal size before and after chemotherapy was determined by the longest transverse diameter of the largest mass on CT imaging. Relative change in nodal size before and after chemotherapy was calculated by dividing post-chemotherapy nodal size by pre-chemotherapy nodal size and was analyzed as a continuous variable. Multiple variables were analyzed independently in order to establish any predictive value for finding necrotic tissue in the retroperitoneum. Statistical analysis was performed using the Statistical Package for Social Sciences (SPSS, version 12.0, Chicago, IL, USA), applying the Mann–Whitney U test for non-parametric variables and the Fisher's exact test for categorical variables, with the level of significance set at P < 0.05. A cut-off level for predicting necrosis at retroperitoneal residual mass analysis was sought by constructing a receiver operating characteristic (ROC) curve of all significant variables, which were generated using graphical visualization in our statistical software. This study was carried-out in accordance with the Ethics Committee regulations. Results and discussion: We gathered information on a total of 32 patients, who were 18- to 49-years old (mean age 30.5) and harbored seminomatous, nonseminomatous or mixed tumors in the testis tissue in different clinical stages. Three patients (9.4%) had seminoma, five (16%) had pure nonseminoma (choriocarcinoma, yolk sac tumors, embryonal carcinoma, teratoma, teratocarcinoma) and 24 (75%) harbored mixed nonseminomatous GCT (NSGCT). None of the patients with seminoma had normal AFP levels, indicating that these patients may have had tumors with similar biological behavior to those with NSGCT. At diagnosis, seven (22%) patients were classified as having a stage I disease, while 21 (66%) were stage II and four (12.5%) were stage III. Clinical stage I patients were individuals with longer follow-up, who presented with retroperitoneal disease nonresponsive to chemotherapy and ultimately underwent PC-RPLND. Pathological analysis in our series demonstrated that 15 (47%) patients had necrosis in residual retroperitoneal masses, 15 had teratoma (47%) and two (6%) had viable GCT: one seminoma and one yolk sac tumor. For statistical reasons and in alignment with the aims of this study, we divided those patients into two groups, assembling patients with viable GCT and teratoma and analyzing them as one group. Mean size of the retroperitoneal mass was 4.94 cm in our sample, 3.79 cm in the group of necrosis and 5.96 cm in the group of teratoma and viable GCT (P = 0.176). There was also no difference between groups regarding stratification of nodal size, as shown in Table 1. Distribution according to nodal size in men undergoing PC-RPLND Data are shown as number (%). GCT, germ cell tumor; PC-RPLND, post-chemotherapy retroperitoneal lymph node dissection. 
Primary tumor histology revealed embryonal cell carcinoma in 56%, seminomatous elements in 16%, yolk sac tumor in 19% and teratomatous elements in 56% of the patients. Of the 18 patients with teratomatous elements in the primary tumor, 13 (72%) had teratoma in the retroperitoneum at PC-RPLND. Even in the absence of teratoma in the primary tumor, teratoma was present in the retroperitoneum in five patients (16%). When comparing pathological analysis of primary tumor specimens, we found a statistical difference when comparing the prevalence of teratoma and choriocarcinoma. Other findings, such as the presence of seminoma, yolk sac tumor, embryonal carcinoma, endodermal sinus tumor lymph vascular invasion (LVI), rete testis invasion and spermatic invasion were similar between groups. Univariate analysis of quantitative components at orchiectomy histology, retroperitoneal node size with its relative measures and serum markers are shown in Table 2. Univariate analysis predicting necrosis at PC-RPLND ** unable to calculate. Data expressed as mean ± standard deviation. AFP, alpha-fetoprotein; GCT, germ cell tumor; hCG, human chorionic gonadotropin; LDH, lactate dehydrogenase; PC-RPLND, post-chemotherapy retroperitoneal lymph node dissection; RP, retroperitoneal. Even though the presence of teratoma in the primary tumor was an important negative predictor for the finding of necrosis, the quantitative analysis has proven to be statistically irrelevant. While relative reduction in mass size after chemotherapy has been shown to be an important predictor of necrosis, even when considering patients not responding to chemotherapy, absolute reduction and enlargement of the residual mass did not show a significant difference between groups. There was a statistical difference in AFP levels between groups; however, when comparing relative changes in AFP levels after chemotherapy, no difference was found. Comparison of LDH levels and their relative changes was nonsignificant. We were unable to compare hCG levels between groups because we only had two patients whose hCG levels were not undetectable. Of all the studied variables, relative changes in retroperitoneal lymph node size (P =0.04), the absence of teratoma in the orchiectomy specimen (P =0.03) and the presence of choriocarcinoma in the testicular analysis after orchiectomy (P =0.03) were statistically significant predictors of the presence of necrosis in the retroperitoneum. ROC curves were built for variables that were independent predictors of necrosis at PC-RPLND. Even though the size of the retroperitoneal mass after chemotherapy showed no statistical difference, we also built a curve for it. The variable relative reduction after chemotherapy was the only one with predictive value according to the Youlden index. An area under the curve (AUC) of 0.710 was obtained and a reduction level of 35% was therefore suggested to be the best cutoff for predicting the absence of tumor in the retroperitoneum with a sensitivity of 73% and a specificity of 82%. ROC curves of isolated size of retroperitoneal mass, absence of teratoma and presence of choriocarcinoma did not indicate significant findings. 
In our institution, relative change in retroperitoneal node size, absence of teratoma in the orchiectomy specimen and the presence of choriocarcinoma in the testicular analysis after orchiectomy were statistically significant predictors of the presence of necrosis in the retroperitoneum and a surveillance program could be considered, given the uncertainty in predicting the histology of residual masses after chemotherapy in patients with metastatic testicular tumors. Analysis of residual retroperitoneal masses after chemotherapy is being increasingly regarded as a fundamental issue, not only for orienting adjuvant therapies, but also because it has prognostic implications [11]. Outcome assessments in patients with NSGCT have demonstrated that incomplete resection of residual retroperitoneal masses, the size of residual retroperitoneal masses and the finding of teratoma and viable GCT at RPLND independently predict disease progression and relapse [11,12]. The size of residual retroperitoneal masses after chemotherapy has traditionally been considered when choosing the subsequent treatment modality [9]. Studies have shown that residual masses smaller than 2 cm are considered one of the most significant predictors for finding necrosis at PC-RPLND at logistic regression [4]. Furthermore, a number of investigators continue to base the decision to perform PC-RPLND on residual mass size alone, obviating PC-RPLND in patients with residual masses of 1 cm or less [13]. However, recent studies have shown that after chemotherapy a third of retroperitoneal masses of <2 cm harbored either teratoma or viable GCT [14]. Even though we found a trend of having only necrosis in the retroperitoneum for masses <2 cm (P = 0.09), in the present study residual mass size alone has not appeared to be a good predictor of necrosis, with nonsignificant size differences between groups and an underrated AUC in ROC curves. In addition, one case of teratoma in a 1 cm mass was registered in our sample. Therefore, we advocate that the decision of not operating on patients cannot be based on mass size alone due to the lack of consensus in the literature [9,10,13,14]. Traditional series report necrotic debris or fibrosis in 40% to 50%, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [3]. In more recent analyses, the incidence of residual microscopic teratoma in the retroperitoneum has decreased to approximately 20% to 25%, with an increase of necrotic tissue findings of up to 60% and stable rates of viable GCT [15]. It has been reported that an increase in the proportion of necrosis is generally attributed to stage migration and the use of more effective chemotherapy regimens, especially in patients achieving a complete response to chemotherapy [15]. In our series we found a distribution pattern similar to former studies, with almost 50% of patients harboring teratoma in the retroperitoneal histology at RPLND, which may suggest that our chemotherapy regimens have been having inferior rates of complete responders or simply because our group is composed of patients in more advanced stages. Teratoma-negative primary tumor and volumetric regression of at least 90% after chemotherapy have been described as being highly predictive of harboring necrosis only at PC-RPLND [9,16]. Other series include normal serum tumor markers, and node size < 2 cm, the presence of yolk sac tumor or embryonal carcinoma on primary tumor and lymph vascular invasion, among others [9,10,15-17]. 
On the other hand, some authors have been unable to identify variables to be highly predictive of harboring necrosis only at PC-RPLND [15,18]. Nevertheless, currently predictive models fail to accurately predict necrosis in the retroperitoneum, since almost 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy CT and no teratoma in the primary tumor [9]. A study carried out at the Memorial Sloan-Kettering Cancer Center revealed that 26% of patients had teratoma or viable GCT, even those with radiographically normal retroperitoneum who underwent PC-RPLND [9]. The relatively small number of patients when compared to other institutional retrospective studies is a limitation of the present study; however, other studies with similar sample sizes have also come to significant findings. In the present study, relative changes in retroperitoneal node size stood out as the single best predictor of the presence of necrosis in the retroperitoneum, which is in accordance with actual logistic regression models [9,10]. While other series suggest 90% as a shrinkage cutoff value to reliably predict necrosis, our quantitative analysis have demonstrated a cutoff level of 35% with approximate sensitivity and specificity [9]. The absence of teratoma has also been demonstrated to be significant in qualitative analysis, but not in quantitative analysis. The presence of choriocarcinoma in the testicular histology predicted necrosis in the retroperitoneum, despite only four (12.5%) patients who had such a finding. This association is an interesting finding that has not been previously reported. We reviewed the literature and no possible explanations were found. We believe further investigation is necessary to confirm this finding, since the number of patients in our sample is relatively small. Conclusions: Even though RPLND remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCTs or teratoma in the retroperitoneum. Abbreviations: AFP: Alphs-fetoprotein; AUC: Area under the curve; CT: Computed tomography; GCT: Germ cell tumor; hCG: Human chorionic gonadotropin; LDH: Lactate dehydrogenase; LVI: Lymph vascular invasion; NSGCT: Nonseminomatous germ cell tumor; PC-RPLND: Post chemotherapy retroperitoneal lymph node dissection; ROC: Receiver operating characteristic; RPLND: Retroperitoneal lymph node dissection. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: EPM and DKA gathered and compiled data and have been involved in drafting the manuscript. AJN and STL participated in the design of the study and performed the statistical analysis. ACS and MS revised the manuscript critically for important intellectual content. MS and MFD have given final approval of the version to be published. All authors read and approved the final manuscript.
Background: Recent studies have demonstrated that pathological analysis of retroperitoneal residual masses of patients with testicular germ cell tumors revealed findings of necrotic debris or fibrosis in up to 50% of patients. We aimed at pursuing a clinical and pathological review of patients undergoing post chemotherapy retroperitoneal lymph node dissection (PC-RPLND) in order to identify variables that may help predict necrosis in the retroperitoneum. Methods: We performed a retrospective analysis of all patients who underwent PC-RPLND at the University Hospital of the University of São Paulo and Cancer Institute of Sao Paulo between January 2005 and September 2011. Clinical and pathological data were obtained and consisted basically of: measures of retroperitoneal masses, histology of the orchiectomy specimen, serum tumor marker and retroperitoneal nodal size before and after chemotherapy. Results: We gathered a total of 32 patients with a mean age of 29.7; pathological analysis in our series demonstrated that 15 (47%) had necrosis in residual retroperitoneal masses, 15 had teratoma (47%) and 2 (6.4%) had viable germ cell tumors (GCT). The mean size of the retroperitoneal mass was 4.94 cm in our sample, without a difference between the groups (P = 0.176). From all studied variables, relative changes in retroperitoneal lymph node size (P = 0.04), the absence of teratoma in the orchiectomy specimen (P = 0.03) and the presence of choriocarcinoma in the testicular analysis after orchiectomy (P = 0.03) were statistically significant predictors of the presence of necrosis. A reduction level of 35% was therefore suggested to be the best cutoff for predicting the absence of tumor in the retroperitoneum with a sensitivity of 73.3% and specificity of 82.4%. Conclusions: Even though retroperitoneal lymph node dissection remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCT or teratoma in the retroperitoneum and a surveillance program could be considered.
Background: Testicular cancer has become one of the most curable solid neoplasms and serves as a paradigm for the multimodal treatment of malignancies. The appropriate integration of chemotherapy, retroperitoneal lymph node dissection (RPLND) and observation for the management of testis cancer has resulted in overall survival rates greater than 90% [1,2]. RPLND plays an important role in the management of patients with testicular germ cell tumors (GCTs), especially in those with residual masses after chemotherapy [3]. To date, series have demonstrated that pathological analysis of these masses reveals findings of necrotic debris or fibrosis in 40% to 50% of patients, teratoma in 35% to 40% and viable malignant cells in 10% to 15% of patients [4]. Even though early recognition and resection of teratoma in the retroperitoneum after chemotherapy have been accompanied by excellent prognosis, once it was initially thought to represent a benign course when present in the retroperitoneal space, but the untreated disease may have a lethal potential due to progressive local growth or malignant transformation, not to mention its classical unresponsiveness to conventional cisplatin-based chemotherapy regimens [3-6]. As for residual viable GCT, the consequence of its incomplete resection is certain disease progression [7]. Therefore, a more aggressive approach should always be considered when treating patients with teratoma or viable GCT in the retroperitoneum [7,8]. Thus, the appropriate approach to residual masses following chemotherapy remains a controversial issue, since the literature has shown that as many as half of resected masses are basically composed of necrosis or fibrotic tissue and any sort of adjuvant therapy could be waived [9]. In order to avoid a great number of apparently unnecessary post-chemotherapy RPLND (PC-RPLND), many studies have tried to develop algorithms to predict the presence of necrosis in the retroperitoneum. However, currently predictive models and imaging modalities cannot reliably predict the pathological finding of necrosis/fibrosis at PC-RPLND [9]. Some authors have established that patients without teratoma in the primary tumor and a shrinkage of 90% or more in retroperitoneal mass had little chance of having viable GCT or teratoma and could be safely put under a surveillance program with periodic imaging scans [10]. However, prospective analyses have demonstrated that approximately 30% of patients will harbor teratoma or viable malignancy even with normal post-chemotherapy computed tomography (CT) results and no teratoma in the primary tumor [9]. The purpose of this work is to pursue a clinical and pathological review of patients undergoing PC-RPLND at a reference university oncology center in Brazil, in order to identify variables that may help predict the histological finding of necrosis in the retroperitoneum and perhaps establish a differentiated surveillance protocol. Conclusions: Even though RPLND remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCTs or teratoma in the retroperitoneum.
Background: Recent studies have demonstrated that pathological analysis of retroperitoneal residual masses of patients with testicular germ cell tumors revealed findings of necrotic debris or fibrosis in up to 50% of patients. We aimed at pursuing a clinical and pathological review of patients undergoing post chemotherapy retroperitoneal lymph node dissection (PC-RPLND) in order to identify variables that may help predict necrosis in the retroperitoneum. Methods: We performed a retrospective analysis of all patients who underwent PC-RPLND at the University Hospital of the University of São Paulo and Cancer Institute of Sao Paulo between January 2005 and September 2011. Clinical and pathological data were obtained and consisted basically of: measures of retroperitoneal masses, histology of the orchiectomy specimen, serum tumor marker and retroperitoneal nodal size before and after chemotherapy. Results: We gathered a total of 32 patients with a mean age of 29.7; pathological analysis in our series demonstrated that 15 (47%) had necrosis in residual retroperitoneal masses, 15 had teratoma (47%) and 2 (6.4%) had viable germ cell tumors (GCT). The mean size of the retroperitoneal mass was 4.94 cm in our sample, without a difference between the groups (P = 0.176). From all studied variables, relative changes in retroperitoneal lymph node size (P = 0.04), the absence of teratoma in the orchiectomy specimen (P = 0.03) and the presence of choriocarcinoma in the testicular analysis after orchiectomy (P = 0.03) were statistically significant predictors of the presence of necrosis. A reduction level of 35% was therefore suggested to be the best cutoff for predicting the absence of tumor in the retroperitoneum with a sensitivity of 73.3% and specificity of 82.4%. Conclusions: Even though retroperitoneal lymph node dissection remains the gold standard for patients with residual masses, those without teratoma in the primary tumor and a shrinkage of 35% or more in retroperitoneal mass have a considerably smaller chance of having viable GCT or teratoma in the retroperitoneum and a surveillance program could be considered.
3,022
392
[ 520, 1891, 72, 10, 67 ]
7
[ "patients", "teratoma", "chemotherapy", "retroperitoneal", "rplnd", "tumor", "necrosis", "size", "pc", "pc rplnd" ]
[ "management testis cancer", "tumors testis tissue", "testicular cancer curable", "metastatic testicular tumors", "background testicular cancer" ]
null
[CONTENT] Testicular cancer | Retroperitoneal lymph node dissection | Necrosis | Teratoma [SUMMARY]
[CONTENT] Testicular cancer | Retroperitoneal lymph node dissection | Necrosis | Teratoma [SUMMARY]
null
[CONTENT] Testicular cancer | Retroperitoneal lymph node dissection | Necrosis | Teratoma [SUMMARY]
[CONTENT] Testicular cancer | Retroperitoneal lymph node dissection | Necrosis | Teratoma [SUMMARY]
[CONTENT] Testicular cancer | Retroperitoneal lymph node dissection | Necrosis | Teratoma [SUMMARY]
[CONTENT] Adolescent | Adult | Humans | Lymph Node Excision | Male | Middle Aged | Necrosis | Neoplasms, Germ Cell and Embryonal | Retroperitoneal Neoplasms | Retrospective Studies | Teratoma | Testicular Neoplasms [SUMMARY]
[CONTENT] Adolescent | Adult | Humans | Lymph Node Excision | Male | Middle Aged | Necrosis | Neoplasms, Germ Cell and Embryonal | Retroperitoneal Neoplasms | Retrospective Studies | Teratoma | Testicular Neoplasms [SUMMARY]
null
[CONTENT] Adolescent | Adult | Humans | Lymph Node Excision | Male | Middle Aged | Necrosis | Neoplasms, Germ Cell and Embryonal | Retroperitoneal Neoplasms | Retrospective Studies | Teratoma | Testicular Neoplasms [SUMMARY]
[CONTENT] Adolescent | Adult | Humans | Lymph Node Excision | Male | Middle Aged | Necrosis | Neoplasms, Germ Cell and Embryonal | Retroperitoneal Neoplasms | Retrospective Studies | Teratoma | Testicular Neoplasms [SUMMARY]
[CONTENT] Adolescent | Adult | Humans | Lymph Node Excision | Male | Middle Aged | Necrosis | Neoplasms, Germ Cell and Embryonal | Retroperitoneal Neoplasms | Retrospective Studies | Teratoma | Testicular Neoplasms [SUMMARY]
[CONTENT] management testis cancer | tumors testis tissue | testicular cancer curable | metastatic testicular tumors | background testicular cancer [SUMMARY]
[CONTENT] management testis cancer | tumors testis tissue | testicular cancer curable | metastatic testicular tumors | background testicular cancer [SUMMARY]
null
[CONTENT] management testis cancer | tumors testis tissue | testicular cancer curable | metastatic testicular tumors | background testicular cancer [SUMMARY]
[CONTENT] management testis cancer | tumors testis tissue | testicular cancer curable | metastatic testicular tumors | background testicular cancer [SUMMARY]
[CONTENT] management testis cancer | tumors testis tissue | testicular cancer curable | metastatic testicular tumors | background testicular cancer [SUMMARY]
[CONTENT] patients | teratoma | chemotherapy | retroperitoneal | rplnd | tumor | necrosis | size | pc | pc rplnd [SUMMARY]
[CONTENT] patients | teratoma | chemotherapy | retroperitoneal | rplnd | tumor | necrosis | size | pc | pc rplnd [SUMMARY]
null
[CONTENT] patients | teratoma | chemotherapy | retroperitoneal | rplnd | tumor | necrosis | size | pc | pc rplnd [SUMMARY]
[CONTENT] patients | teratoma | chemotherapy | retroperitoneal | rplnd | tumor | necrosis | size | pc | pc rplnd [SUMMARY]
[CONTENT] patients | teratoma | chemotherapy | retroperitoneal | rplnd | tumor | necrosis | size | pc | pc rplnd [SUMMARY]
[CONTENT] patients | teratoma | chemotherapy | rplnd | viable | necrosis | patients teratoma | viable gct | predict | masses [SUMMARY]
[CONTENT] chemotherapy | nodal size | nodal | size | nodal size chemotherapy | variables | variable | size chemotherapy | analysis | retroperitoneal [SUMMARY]
null
[CONTENT] teratoma | having viable gcts teratoma | smaller chance having | mass considerably smaller chance | considerably smaller | considerably smaller chance | considerably smaller chance having | primary tumor shrinkage 35 | patients residual masses teratoma | retroperitoneal mass considerably smaller [SUMMARY]
[CONTENT] teratoma | chemotherapy | patients | retroperitoneal | rplnd | tumor | declare competing interests | competing interests | competing | interests [SUMMARY]
[CONTENT] teratoma | chemotherapy | patients | retroperitoneal | rplnd | tumor | declare competing interests | competing interests | competing | interests [SUMMARY]
[CONTENT] up to 50% ||| [SUMMARY]
[CONTENT] the University Hospital of the University of São Paulo | Cancer Institute of Sao Paulo | between January 2005 | September 2011 ||| [SUMMARY]
null
[CONTENT] node | 35% | GCT [SUMMARY]
[CONTENT] ||| up to 50% ||| ||| the University Hospital of the University of São Paulo | Cancer Institute of Sao Paulo | between January 2005 | September 2011 ||| ||| ||| 32 | 29.7 | 15 | 47% | 15 | 47% | 2 | 6.4% ||| 4.94 cm | 0.176 ||| 0.04 | 0.03 | 0.03 ||| 35% | 73.3% | 82.4% ||| node | 35% | GCT [SUMMARY]
[CONTENT] ||| up to 50% ||| ||| the University Hospital of the University of São Paulo | Cancer Institute of Sao Paulo | between January 2005 | September 2011 ||| ||| ||| 32 | 29.7 | 15 | 47% | 15 | 47% | 2 | 6.4% ||| 4.94 cm | 0.176 ||| 0.04 | 0.03 | 0.03 ||| 35% | 73.3% | 82.4% ||| node | 35% | GCT [SUMMARY]
Uncovering the treatable burden of severe aortic stenosis in Australia: current and future projections within an ageing population.
34376198
We aimed to address the paucity of information describing the treatable burden of disease associated with severe aortic stenosis (AS) within Australia's ageing population.
BACKGROUND
A contemporary model of the population prevalence of symptomatic, severe AS and treatment pathways in Europe and North America was applied to the 2019 Australian population aged ≥ 55 years (7 million people) on an age-specific basis. Applying Australian-specific data, these estimates were used to further calculate the total number of associated deaths and incident cases of severe AS per annum.
METHODS
Based on an overall point prevalence of 1.48 % among those aged ≥ 55 years, we estimate that a minimum of 97,000 Australians are living with severe AS. With a 2-fold increased risk of mortality without undergoing aortic valve replacement (AVR), more than half of these individuals (∼56,000) will die within 5-years. From a clinical management perspective, among those with concurrent symptoms (68.3 %, 66,500 [95 % CI 59,000-74,000] cases) more than half (58.4 %, 38,800 [95 % CI 35,700 - 42,000] cases) would be potentially considered for surgical AVR (SAVR) - comprising 2,400, 5,400 and 31,000 cases assessed as high-, medium- or low peri-operative mortality risk, respectively. A further 17,000/27,700 (41.6 % [95 % CI 11,600 - 22,600]) of such individuals would be potentially considered to a transthoracic AVR (TAVR). During the subsequent 5-year period (2020-2024), each year, we estimate an additional 9,300 Australians aged ≥ 60 years will subsequently develop severe AS (6,300 of whom will experience concurrent symptoms). Of these symptomatic cases, an estimated 3,700 and 1,600 cases/annum, will be potentially suitable for SAVR and TAVR, respectively.
RESULTS
These data suggest there is likely to be a substantive burden of individuals living with severe AS in Australia. Many of these cases may not have been diagnosed and/or received appropriate treatment (based on the evidence-based application of SAVR and TAVR) to reduce their high-risk of subsequent mortality.
CONCLUSIONS
[ "Aging", "Aortic Valve Stenosis", "Australia", "Heart Valve Prosthesis Implantation", "Humans", "Risk Factors", "Transcatheter Aortic Valve Replacement", "Treatment Outcome" ]
8356417
Introduction
One of the most common cardiac conditions affecting the progressively aging populations of high-income countries such as Australia is aortic stenosis (AS) [1]. Without timely intervention, severe AS is associated with a very poor prognosis [2]. However, like many other countries, there has been a paucity of reports focusing on the overall prevalence and treatable burden of AS in Australia. A recent AS report from National Echocardiography Database of Australia (NEDA) [3] that assessed the severity of AS and subsequent pattern of survival among 122,809 men and 118,494 women with a mean age of 62 ± 18 years highlighted an urgent need to better understand the burden imposed by this potentially deadly condition. Overall, the indicative prevalence of severe low or high gradient AS among adults being investigated with echocardiography during the overall study period of 2000–2019, was 1.1 and 2.1 % (3.2 % combined), respectively. Actual 5-year mortality ranged from 56 to 67 % in those cases with a native aortic valve and no indication of surgical intervention [3]. Historically, surgical aortic valve replacement (SAVR) has been the preferred intervention for severe AS [4]. However, transcatheter aortic valve replacement (TAVR) has been successfully applied in those with severe AS with high/prohibitive surgical risk [5–7]. Moreover, two head-to-head randomized trials have now reported non-inferiority [8] and superiority [9] comparing TAVR to SAVR in low-risk patients with severe AS, in respect to mortality and subsequent risk of stroke. Consistent with these data, for most of the 6,050 cases within the NEDA cohort who underwent AVR, their post-procedure AV hemodynamic profile and survival outcomes were favourable [10]. Overall, these data suggest more Australians might benefit from AVR. However, without reliable estimates of the treatable burden of AS, this critical number (for health service and resource planning) remains unknown. A series of modelled studies, first published in 2013 [11] and then an updated version in 2018 [12], applied the best available epidemiological and registry data to estimate the following for Europe and North America – 1) the overall proportion of older individuals affected by AS and more specifically severe AS; and 2) proportion of these individuals who had and/or would benefit from a valve replacement procedure (SAVR or TAVR). Given the geographical focus and source data used, these estimates now provide these target regions with (moderate) reliable estimates of the treatable burden of AS. To date, there are no equivalent burden of disease estimates for Australia. Study aims We sought, for the first time, to generate reliable (but inherently conservative) estimates on the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the same robust models recently used to generate contemporary AS-specific projections for Europe and North America [12] as highlighted above with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS). 
Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and by applying age- and sex-specific, incident, and prevalent estimates of AS informed by the recent NEDA Study of AS [3].
null
null
Results
Prevalent cases of severe AS: As shown in Fig. 2, we conservatively estimate that around 97,300 Australians are living with severe AS (symptomatic or otherwise). Moreover, assuming just over two-thirds of these cases experience concurrent symptoms linked to the condition, according to contemporary clinical recommendations/best practice around 66,500 (95 % CI 59,200 to 74,000) people might be considered for an AVR procedure at any one time. (Fig. 2: Estimated Point Prevalence and Distribution of Severe AS in Australia.)
Aortic valve replacement: Figure 3 summarises the overall treatable burden/management of severe AS when assuming 58.4 % of cases with symptomatic, severe AS would be referred for SAVR (primary replacement or revision) and the remainder (41.6 %) for potential TAVR. On this basis, we estimate that around 38,800 (95 % CI 35,700 to 42,000) Australians aged ≥ 55 years with severe, symptomatic AS might be considered for SAVR, based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score [17]. An additional 17,000 cases (95 % CI 11,600 to 22,600) of the approximately 27,700 people not eligible for SAVR due to high peri-operative risk could be potentially considered for TAVR instead. (Fig. 3: Estimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases).)
AS-related mortality: Based on the age- and sex-specific rates of actual mortality (ranging from 17 to 84 % for those aged 55–64 years to ≥ 85 years) observed in the NEDA cohort, we estimate that 56,300 (95 % CI 56,100 to 56,500) of the prevalent population with severe AS of their native valve will subsequently die within the next 5 years (see Fig. 4). (Fig. 4: Severe AS-Related Mortality.)
Incident cases: As shown in Fig. 5, each year, we further estimate that around 9,300 (95 % CI 9,000 to 9,500) more Australians aged ≥ 60 years will subsequently develop severe AS. Assuming the same pattern of potential AVR procedures (once again based on peri-operative risk status), 3,700 (95 % CI 3,400 to 4,000) and 2,600 (95 % CI 2,300 to 2,900) of the approximate de novo 6,300 (95 % CI 5,600 to 7,000) cases with concurrent symptoms linked to the condition would be potentially managed with/eligible for a SAVR or TAVR procedure, respectively (see Supplementary Figure S2). (Fig. 5: Annual Incident Cases of Severe AS among Australians Aged ≥ 55 years.)
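The headline point estimates above follow directly from the decision-tree proportions reported in the study (68.3 % of prevalent cases symptomatic; 58.4 % of symptomatic cases considered for SAVR). The short sketch below simply reproduces that arithmetic for the point estimates; it is illustrative only, and the published model additionally propagates 95 % CIs at each step, which are not reproduced here.

```python
# Back-of-the-envelope reproduction of the headline burden arithmetic (point estimates only).
prevalent_severe_as = 97_300      # Australians aged >=55 y living with severe AS
symptomatic_rate    = 0.683       # proportion with concurrent symptoms
savr_referral_rate  = 0.584       # proportion of symptomatic cases considered for SAVR

symptomatic = prevalent_severe_as * symptomatic_rate      # ~66,500
savr        = symptomatic * savr_referral_rate            # ~38,800
not_savr    = symptomatic - savr                          # ~27,700
tavr        = 17_000              # subset of the non-SAVR group deemed TAVR-eligible (from the text)

print(f"Symptomatic severe AS: {symptomatic:,.0f}")
print(f"Potential SAVR candidates: {savr:,.0f}")
print(f"Not eligible for SAVR: {not_savr:,.0f} (of whom ~{tavr:,} potentially TAVR)")
```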
Conclusions
These unique estimates provide an important insight into the current and future treatable burden of severe AS in Australia. At the most conservative level, the likely number of currently affected Australians aged 55 years and over will soon rise to 100,000 people. Moreover, the number of new cases with this potentially deadly condition is likely approaching 10,000 per annum. Based on current clinical practice/recommendations, around two-thirds of such cases should be actively managed due to their symptomatic status (predominantly with a combination of SAVR and TAVR) and a high-risk of mortality. However, it is unclear if that is truly the case. Whether there is sufficient clinical awareness of AS and pro-active referral patterns (particularly for Australian women) for active management is yet to be determined.
[ "Study aims", "Methods", "Study design", "Study setting", "Study data", "Estimating the prevalence & treatable burden of severe AS", "Mortality", "Incident cases of severe AS", "Statistical analyses", "Prevalent cases of severe AS", "Aortic valve replacement", "AS-related mortality", "Incident cases", "Limitations", "" ]
[ "We sought, for the first time, to generate reliable (but inherently conservative) estimates on the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the same robust models recently used to generate contemporary AS-specific projections for Europe and North America [12] as highlighted above with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS). Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and by applying age- and sex-specific, incident, and prevalent estimates of AS informed by the recent NEDA Study of AS [3].", "Study design Consistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.\nConsistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.\nStudy setting Projections were applied to the entire Australian population and then each State and Territory according to their currently estimated demographic profile. ABS data projecting the future demographic structure and profile were also used to derive future projections on the treatable burden of AS. (https://www.abs.gov.au/Population - Accessed January 2021).\nProjections were applied to the entire Australian population and then each State and Territory according to their currently estimated demographic profile. ABS data projecting the future demographic structure and profile were also used to derive future projections on the treatable burden of AS. (https://www.abs.gov.au/Population - Accessed January 2021).\nStudy data The primary analyses presented in this report used age-, sex- and geographic-specific demography data for Australian men and women aged ≥ 55 years for the calendar year 2019. Specifically, the ABS currently provides population projections derived from the official 2016 Census - http://www.abs.gov.au/ausstats/abs@.nsf/mf/3101.0 - Accessed January 2021).\nAs indicated above, the primary basis for this report is the recently updated study of the number of treatable and treated patients with severe AS published by Durko and colleagues in 2018 [12]. Specifically, based on an expanded analysis of 37 relevant studies involving 26,402 cases/patients, this study provides a flow-chart (see Fig. 1) of the key estimates/parameters (with 95 % confidence interval, CI) that can be applied (as a primary analysis) to the latest Australian population data (on an age-, sex- and geographic-specific basis) to estimate the pool of treatable severe AS patients per annum. 
It can also be applied to broad population projections for an increasing pool of older Australians over time; noting that over a decade (from 2006 to 2016) the proportion of Australians aged > 65 years had increased by around 4 % and this proportion (and absolute numbers) will steadily increase.\nFig. 1 Model/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nSpecifically, the model developed and subsequently applied in this report provides overall estimates (based on a combination of European and North American cohort studies) on the proportion of patients with severe AS who – 1) are symptomatic and, under current expert guidelines, are largely excluded from surgical management; 2) are symptomatic and unsuitable for SAVR but might safely undergo TAVR (the alternative being medical therapy); or 3) are symptomatic and will undergo SAVR; a proportion of whom (particularly due to high-to-medium surgical risk) may benefit from TAVR. The range of derived estimates for each parameter, despite efforts to smooth out inevitable heterogeneity, is indicative of the methodological weaknesses/biases inherent to the source data. It is important to note these treatment pathways are continually evolving as new, lower risk valve interventions are developed and lower the likely threshold for AVR. Without Australian-specific data to correct the rates of intervention applied, it is generally acknowledged that intervention rates for severe AS broadly follow those applied in North America.\nWhere possible, we were able to improve our assumptions and therefore our projections by considering primary profiling and outcome data derived from NEDA [3]. This unique study has now captured echocardiographic data on > 750,000 Australians (with no exclusion criteria) being routinely investigated with echocardiography from > 25 centres Australia-wide. With individual data linkage, NEDA also generates real-world, short- to long-term survival data on those affected by common cardiac conditions including AS [3].\nEstimating the prevalence & treatable burden of severe AS For these analyses, we applied a point prevalence of 3.5 % for severe AS among individuals aged ≥ 75 years. This small adjustment to the original European model [12] reflects recent reports of an increasing incidence of AS in other high-income countries such as the UK [15] and is only slightly higher than that of the NEDA cohort of actively investigated patients [3]. 
To derive specific prevalence rates for those aged < 75 years, we used the ratio of cases of severe AS observed in each age-band of the NEDA cohort [3]. This resulted in the following age-specific prevalence estimates being applied: ≥75 years, 3.5 %; 70–74 years, 1.2 %; 65–69 years, 0.7 %; 60–64 years, 0.5 %; and 55–59 years, 0.4 %. When applied to the Australian population aged ≥ 55 years (see Supplementary Figure S1), the overall estimated point prevalence of severe AS is 1.48 % for 2019; a figure that is broadly consistent with that published and applied previously [16]. The same rates were applied to men and women given that, on an age-specific basis, there were minimal differences based on sex within the NEDA cohort [3].\nMortality To understand the potential consequences of no AVR intervention in the setting of severe AS, age- and sex-specific, actual 5-year mortality rates were applied to the estimated case prevalence of severe AS by applying those observed rates within the equivalent patient cohort identified within NEDA who had a native AV throughout follow-up [3]. Although these data only reflect cases referred for echocardiography, they provide both a discussion point and a means to validate prevalence estimates when combined with incidence rates (see below).\nIncident cases of severe AS In the absence of specific population incidence data, we used the differential in prevalence for each 5-year age group (i.e., how many more at-risk individuals would develop severe AS over 5 years to reflect the number of prevalent cases in that older age group) to calculate how many individuals would develop severe AS each year. By necessity this means incident cases were only calculated for those aged ≥ 60 years. 
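Before turning to incidence, the age-band prevalence weighting described above can be illustrated with a minimal Python sketch (not the authors' code). The prevalence rates are taken from the text; the population counts, dictionary keys and function names are illustrative assumptions rather than actual ABS figures.

```python
# Minimal sketch of the age-band prevalence weighting described above.
# Prevalence rates are quoted in the text; population counts are placeholders only.

AGE_BAND_PREVALENCE = {          # proportion with severe AS, per the text
    "55-59": 0.004,
    "60-64": 0.005,
    "65-69": 0.007,
    "70-74": 0.012,
    "75+":   0.035,
}

def prevalent_cases(population_by_band: dict) -> dict:
    """Apply the age-specific prevalence to the population count of each band."""
    return {band: population_by_band[band] * rate
            for band, rate in AGE_BAND_PREVALENCE.items()}

def overall_point_prevalence(population_by_band: dict) -> float:
    """Population-weighted point prevalence across all bands aged >= 55 years."""
    cases = prevalent_cases(population_by_band)
    return sum(cases.values()) / sum(population_by_band.values())

if __name__ == "__main__":
    # Hypothetical population counts, for illustration only.
    population = {"55-59": 1_500_000, "60-64": 1_400_000, "65-69": 1_250_000,
                  "70-74": 1_050_000, "75+": 1_600_000}
    print(prevalent_cases(population))
    print(f"Overall point prevalence: {overall_point_prevalence(population):.2%}")
```

With the 2019 ABS estimated resident population substituted for the placeholders, this weighting should reproduce the overall point prevalence of approximately 1.48 % quoted above.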
Applying conservatively derived data from the NEDA cohort [3], the following age-specific annual incident rates were applied to the Australian population aged ≥ 60 years for the subsequent (i.e. beyond 2019) 5-year period 2020-24: aged ≥ 75 years, 460 cases; 70–74 years, 40 cases; 65–69 years, 40 cases; and 60–64 years, 20 cases per 100,000 population per annum. When combined, these rates generated an annual incidence of severe AS of 182 cases per 100,000 population per annum within the Australian population aged ≥ 60 years.\nStatistical analyses All projections are reported as whole numbers and proportions with 95 % CI where appropriate. All statistical analyses are descriptive in nature and population-based, with no inferential statistics applied. Exact estimates are provided in the figures, whilst figures rounded to the nearest hundred are provided in the text.", "Consistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.", "Projections were applied to the entire Australian population and then each State and Territory according to their currently estimated demographic profile. ABS data projecting the future demographic structure and profile were also used to derive future projections on the treatable burden of AS (https://www.abs.gov.au/Population - Accessed January 2021).", "The primary analyses presented in this report used age-, sex- and geographic-specific demography data for Australian men and women aged ≥ 55 years for the calendar year 2019. Specifically, the ABS currently provides population projections derived from the official 2016 Census (http://www.abs.gov.au/ausstats/abs@.nsf/mf/3101.0 - Accessed January 2021).\nAs indicated above, the primary basis for this report is the recently updated study of the number of treatable and treated patients with severe AS published by Durko and colleagues in 2018 [12]. 
Specifically, based on an expanded analysis of 37 relevant studies involving 26,402 cases/patients, this study provides a flow-chart (see Fig. 1) of the key estimates/parameters (with 95 % confidence interval, CI) that can be applied (as a primary analysis) to the latest Australian population data (on an age-, sex- and geographic-specific basis) to estimate the pool of treatable severe AS patients per annum. It can also be applied to broad population projections for an increasing pool of older Australians over time; noting that over a decade (from 2006 to 2016) the proportion of Australians aged > 65 years had increased by around 4 % and this proportion (and absolute numbers) will steadily increase.\nFig. 1Model/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nModel/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nSpecifically, the model developed and subsequently applied in this report provides overall estimates (based on a combination of European and North American cohort studies) on the proportion of patients with severe AS who – 1) are symptomatic and under current expert guidelines, are largely excluded from surgical management; 2) are symptomatic but unsuitable for SAVR but might safely undergo TAVR (the alternative being medical therapy); or 3) are symptomatic and will undergo SAVR; a proportion of whom (particularly due to high-to-medium surgical risk) may benefit from TAVR. The range of derived estimates for each parameter, despite efforts to smooth-out inevitable heterogeneity, are indicative of the methodological weaknesses/biases inherent to source data. It is important to note these treatment pathways are continually evolving as new, lower risk valve interventions are developed and lower the likely threshold of AVR. Without Australian-specific to correct the rates of intervention applied, it is generally acknowledged that intervention rates for severe AS broadly follow those applied in North America.\nWhere possible, we were able to improve our assumptions and therefore our projections by considering primary profiling and outcome data derived from (NEDA) [3]. This unique study has now captured echocardiographic data on > 750,000 Australians (with no exclusion criteria) being routinely investigated with echocardiography from > 25 centres Australia-wide. With individual data linkage, NEDA also generates real-world, short- to long-term survival data on those affected by common cardiac conditions including AS [3].", "For these analyses, we applied a point prevalence of 3.5 % for severe AS among individuals aged ≥ 75 years, This small adjustment to the original European model [12] reflects recent reports of an increasing incidence of AS in other high-income countries such as the UK [15] and is only slightly higher than that of the NEDA cohort of actively investigated patients [3]. 
To derive specific prevalence rates for those aged < 75 years, we used the ratio of cases of severe AS observed in each age-band of the NEDA cohort [3]. This resulted in the following age-specific prevalence estimates being applied: ≥75 years, 3.5 %; 70–74 years, 1.2 %; 65–69 years, 0.7 %; 60–64 years, 0.5 %; and 55–59 years, 0.4 %. When applied to the Australian population aged ≥ 55 years (see Supplementary Figure S1), the overall estimated point prevalence of severe AS is 1.48 % for 2019; a figure that is broadly consistent with that published and applied previously [16]. The same rates were applied to men and women given that, on an age-specific basis, there were minimum differences based on sex within the NEDA cohort [3].", "To understand the potential consequences of no AVR intervention in the setting of severe AS, age- and sex-specific, actual 5-year mortality rates were applied to the estimated case prevalence of severe AS by applying those observed rates within the equivalent patient cohort identified within NEDA who had a native AV throughout follow-up [3]. Although these data only reflect cases referred for echocardiography, they provide both a discussion point and means to validate prevalence estimates when combined with incidence rates (see below).", "In the absence of specific population incidence data, we used the differential in prevalence for each 5-year age group (i.e., how many more at risk individuals would develop severe AS over 5-years to reflect the number of prevalent cases in that older age group) to calculate how many individuals would develop severe AS each year. By necessity this means incident cases were only calculated for those aged ≥ 60 years. Applying conservatively derived data from the NEDA cohort [3], the following age-specific, annual incident rates, were applied to the Australian population aged ≥ 60 years for the subsequent (i.e. beyond 2019) 5-year period 2020-24: aged ≥ 75 years, 460 cases; 70–74 years, 40 cases; 65–69 years, 40 cases; and 60–64 years, 20 cases per 100,000 population per annum. When combined, these rates generated an annual incidence of severe AS of 182 cases per 100,000 population per annum within the Australian population aged ≥ 60 years.", "All projections are reported as whole numbers and proportions with 95 % CI where appropriate. All statistical analyses are descriptive in nature and population-based; with no inferential statistics applied. Exact estimates are provided in the figures whilst rounded up (to the nearest hundred) figures are provided in text.", "As shown in Fig. 2, we conservatively estimate that around 97,300 Australians are living with severe AS (symptomatic or otherwise). Moreover, assuming just over two-thirds of these cases experience concurrent symptoms linked to the condition, according to contemporary clinical recommendations/ best practice around 66,500 (95 % CI 59,200 to 74,000) people might be considered for an AVR procedure at any one time.\nFig. 2Estimated Point Prevalence and Distribution of Severe AS in Australia\nEstimated Point Prevalence and Distribution of Severe AS in Australia", "Figure 3 summarises the overall treatable burden/management of severe AS when assuming 58.4 % of cases with symptomatic, severe AS would be referred for SAVR (primary replacement or revision) and the remainder (41.6 %) for potential TAVR. 
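As a rough illustration of the allocation just described, the following minimal sketch (assumed parameter and function names, not the published decision-tree code) applies the quoted 58.4 %/41.6 % split to a pool of prevalent cases; the symptomatic fraction of 0.68 is an assumption consistent with the "just over two-thirds" figure given above.

```python
# Minimal sketch of splitting prevalent severe AS cases into SAVR-referred and
# potential-TAVR pools. The 58.4 % SAVR referral fraction is quoted in the text;
# the symptomatic fraction below is an assumption ("just over two-thirds").

def allocate_treatable_burden(prevalent_cases: float,
                              symptomatic_fraction: float = 0.68,   # assumption
                              savr_referral_fraction: float = 0.584):
    """Allocate prevalent severe AS cases to treatment pathways."""
    symptomatic = prevalent_cases * symptomatic_fraction
    savr_referred = symptomatic * savr_referral_fraction
    potential_tavr = symptomatic - savr_referred      # the remaining 41.6 %
    return {"symptomatic": symptomatic,
            "savr_referred": savr_referred,
            "potential_tavr_pool": potential_tavr}

if __name__ == "__main__":
    # Using the ~97,300 prevalent cases estimated above as input.
    print(allocate_treatable_burden(97_300))
```

Using the approximately 97,300 prevalent cases estimated above as input, this returns figures close to the roughly 66,500 symptomatic, 38,800 SAVR-referred and 27,700 potential-TAVR cases reported in the Results.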
On this basis, we estimate that around 38,800 (95 % CI 35,700 to 42,000) Australians aged ≥ 55 years with severe, symptomatic AS might be considered for SAVR; based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score [17]. An additional 17,000 cases (95 % CI 11,600 to 22,600) of the approximately 27,700 people not eligible for SAVR due to high peri-operative risk could be potentially considered for TAVR instead.\nFig. 3 Estimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)", "Based on the age- and sex-specific rates of actual mortality (ranging from 17 to 84 % for those aged 55–64 years to ≥ 85 years) observed in the NEDA cohort, we estimate that 56,300 (95 % CI 56,100 to 56,500) of the prevalent population with severe AS of their native valve will subsequently die within the next 5 years - see Fig. 4.\nFig. 4 Severe AS Related Mortality", "As shown in Fig. 5, each year, we further estimate that around 9,300 (95 % CI 9,000 to 9,500) more Australians aged ≥ 60 years will subsequently develop severe AS. Assuming the same pattern of potential AVR procedures (once again based on peri-operative risk status), 3,700 (95 % CI 3,400 to 4,000) and 2,600 (95 % CI 2,300 to 2,900) of the approximately 6,300 (95 % CI 5,600 to 7,000) de novo cases with concurrent symptoms linked to the condition would be potentially managed with/eligible for a SAVR or TAVR procedure, respectively (see Supplementary Figure S2).\nFig. 5 Annual Incident Cases of Severe AS among Australians Aged ≥55 years", "As noted, these data do not completely rely on Australian-specific data (other than population estimates/demographic structure and NEDA-derived adjustments to original estimates). To partially address this, we have used a conservative estimate of the point prevalence of severe AS and provide 95 % CIs for lower and higher estimates. However, we acknowledge the dynamics of medical management of severe AS (i.e., conservative treatment versus TAVR versus SAVR) in Australia are likely to differ from those reported in Europe/North America. The differential clinical uptake and reimbursement of different procedures from a public health to private health perspective within Australia’s increasingly complex and hybrid health care system is particularly relevant - as is the variable population dynamics of each jurisdiction across the country. Beyond the broader demographic features of Australia and its major jurisdictions, we did not consider other important factors such as socio-economic status and the concentration of particularly high-risk groups (e.g. predominantly younger Indigenous Australians in Central Australia who experience much higher levels of valvular disease and heart failure [25]); nor did we consider the cost-burden of treating severe AS. Applying the growing resources of NEDA, we hope to address most of these issues/limitations in the future, in order to provide greater clarity around the full spectrum of AS in Australia. Finally, it is important to acknowledge that our mortality estimates (even when considering that they are focussed on those with native valves) are discordant with the low rates of mortality reported in trials such as the PARTNER 3 Trial [9]. 
This discordance only reinforces the benefits of early recognition and expert management of this otherwise deadly condition.", "\nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australia (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum)." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Study aims", "Methods", "Study design", "Study setting", "Study data", "Estimating the prevalence & treatable burden of severe AS", "Mortality", "Incident cases of severe AS", "Statistical analyses", "Results", "Prevalent cases of severe AS", "Aortic valve replacement", "AS-related mortality", "Incident cases", "Discussion", "Limitations", "Conclusions", "Supplementary Information", "" ]
[ "One of the most common cardiac conditions affecting the progressively aging populations of high-income countries such as Australia is aortic stenosis (AS) [1]. Without timely intervention, severe AS is associated with a very poor prognosis [2]. However, like many other countries, there has been a paucity of reports focusing on the overall prevalence and treatable burden of AS in Australia. A recent AS report from National Echocardiography Database of Australia (NEDA) [3] that assessed the severity of AS and subsequent pattern of survival among 122,809 men and 118,494 women with a mean age of 62 ± 18 years highlighted an urgent need to better understand the burden imposed by this potentially deadly condition. Overall, the indicative prevalence of severe low or high gradient AS among adults being investigated with echocardiography during the overall study period of 2000–2019, was 1.1 and 2.1 % (3.2 % combined), respectively. Actual 5-year mortality ranged from 56 to 67 % in those cases with a native aortic valve and no indication of surgical intervention [3].\nHistorically, surgical aortic valve replacement (SAVR) has been the preferred intervention for severe AS [4]. However, transcatheter aortic valve replacement (TAVR) has been successfully applied in those with severe AS with high/prohibitive surgical risk [5–7]. Moreover, two head-to-head randomized trials have now reported non-inferiority [8] and superiority [9] comparing TAVR to SAVR in low-risk patients with severe AS, in respect to mortality and subsequent risk of stroke. Consistent with these data, for most of the 6,050 cases within the NEDA cohort who underwent AVR, their post-procedure AV hemodynamic profile and survival outcomes were favourable [10].\nOverall, these data suggest more Australians might benefit from AVR. However, without reliable estimates of the treatable burden of AS, this critical number (for health service and resource planning) remains unknown. A series of modelled studies, first published in 2013 [11] and then an updated version in 2018 [12], applied the best available epidemiological and registry data to estimate the following for Europe and North America – 1) the overall proportion of older individuals affected by AS and more specifically severe AS; and 2) proportion of these individuals who had and/or would benefit from a valve replacement procedure (SAVR or TAVR). Given the geographical focus and source data used, these estimates now provide these target regions with (moderate) reliable estimates of the treatable burden of AS. To date, there are no equivalent burden of disease estimates for Australia.\nStudy aims We sought, for the first time, to generate reliable (but inherently conservative) estimates on the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the same robust models recently used to generate contemporary AS-specific projections for Europe and North America [12] as highlighted above with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS). 
Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and by applying age- and sex-specific, incident, and prevalent estimates of AS informed by the recent NEDA Study of AS [3].\nWe sought, for the first time, to generate reliable (but inherently conservative) estimates on the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the same robust models recently used to generate contemporary AS-specific projections for Europe and North America [12] as highlighted above with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS). Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and by applying age- and sex-specific, incident, and prevalent estimates of AS informed by the recent NEDA Study of AS [3].", "We sought, for the first time, to generate reliable (but inherently conservative) estimates on the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the same robust models recently used to generate contemporary AS-specific projections for Europe and North America [12] as highlighted above with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS). Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and by applying age- and sex-specific, incident, and prevalent estimates of AS informed by the recent NEDA Study of AS [3].", "Study design Consistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.\nConsistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.\nStudy setting Projections were applied to the entire Australian population and then each State and Territory according to their currently estimated demographic profile. ABS data projecting the future demographic structure and profile were also used to derive future projections on the treatable burden of AS. 
(https://www.abs.gov.au/Population - Accessed January 2021).\nProjections were applied to the entire Australian population and then each State and Territory according to their currently estimated demographic profile. ABS data projecting the future demographic structure and profile were also used to derive future projections on the treatable burden of AS. (https://www.abs.gov.au/Population - Accessed January 2021).\nStudy data The primary analyses presented in this report used age-, sex- and geographic-specific demography data for Australian men and women aged ≥ 55 years for the calendar year 2019. Specifically, the ABS currently provides population projections derived from the official 2016 Census - http://www.abs.gov.au/ausstats/abs@.nsf/mf/3101.0 - Accessed January 2021).\nAs indicated above, the primary basis for this report is the recently updated study of the number of treatable and treated patients with severe AS published by Durko and colleagues in 2018 [12]. Specifically, based on an expanded analysis of 37 relevant studies involving 26,402 cases/patients, this study provides a flow-chart (see Fig. 1) of the key estimates/parameters (with 95 % confidence interval, CI) that can be applied (as a primary analysis) to the latest Australian population data (on an age-, sex- and geographic-specific basis) to estimate the pool of treatable severe AS patients per annum. It can also be applied to broad population projections for an increasing pool of older Australians over time; noting that over a decade (from 2006 to 2016) the proportion of Australians aged > 65 years had increased by around 4 % and this proportion (and absolute numbers) will steadily increase.\nFig. 1Model/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nModel/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nSpecifically, the model developed and subsequently applied in this report provides overall estimates (based on a combination of European and North American cohort studies) on the proportion of patients with severe AS who – 1) are symptomatic and under current expert guidelines, are largely excluded from surgical management; 2) are symptomatic but unsuitable for SAVR but might safely undergo TAVR (the alternative being medical therapy); or 3) are symptomatic and will undergo SAVR; a proportion of whom (particularly due to high-to-medium surgical risk) may benefit from TAVR. The range of derived estimates for each parameter, despite efforts to smooth-out inevitable heterogeneity, are indicative of the methodological weaknesses/biases inherent to source data. It is important to note these treatment pathways are continually evolving as new, lower risk valve interventions are developed and lower the likely threshold of AVR. 
Without Australian-specific to correct the rates of intervention applied, it is generally acknowledged that intervention rates for severe AS broadly follow those applied in North America.\nWhere possible, we were able to improve our assumptions and therefore our projections by considering primary profiling and outcome data derived from (NEDA) [3]. This unique study has now captured echocardiographic data on > 750,000 Australians (with no exclusion criteria) being routinely investigated with echocardiography from > 25 centres Australia-wide. With individual data linkage, NEDA also generates real-world, short- to long-term survival data on those affected by common cardiac conditions including AS [3].\nThe primary analyses presented in this report used age-, sex- and geographic-specific demography data for Australian men and women aged ≥ 55 years for the calendar year 2019. Specifically, the ABS currently provides population projections derived from the official 2016 Census - http://www.abs.gov.au/ausstats/abs@.nsf/mf/3101.0 - Accessed January 2021).\nAs indicated above, the primary basis for this report is the recently updated study of the number of treatable and treated patients with severe AS published by Durko and colleagues in 2018 [12]. Specifically, based on an expanded analysis of 37 relevant studies involving 26,402 cases/patients, this study provides a flow-chart (see Fig. 1) of the key estimates/parameters (with 95 % confidence interval, CI) that can be applied (as a primary analysis) to the latest Australian population data (on an age-, sex- and geographic-specific basis) to estimate the pool of treatable severe AS patients per annum. It can also be applied to broad population projections for an increasing pool of older Australians over time; noting that over a decade (from 2006 to 2016) the proportion of Australians aged > 65 years had increased by around 4 % and this proportion (and absolute numbers) will steadily increase.\nFig. 1Model/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nModel/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nSpecifically, the model developed and subsequently applied in this report provides overall estimates (based on a combination of European and North American cohort studies) on the proportion of patients with severe AS who – 1) are symptomatic and under current expert guidelines, are largely excluded from surgical management; 2) are symptomatic but unsuitable for SAVR but might safely undergo TAVR (the alternative being medical therapy); or 3) are symptomatic and will undergo SAVR; a proportion of whom (particularly due to high-to-medium surgical risk) may benefit from TAVR. The range of derived estimates for each parameter, despite efforts to smooth-out inevitable heterogeneity, are indicative of the methodological weaknesses/biases inherent to source data. 
It is important to note these treatment pathways are continually evolving as new, lower risk valve interventions are developed and lower the likely threshold of AVR. Without Australian-specific to correct the rates of intervention applied, it is generally acknowledged that intervention rates for severe AS broadly follow those applied in North America.\nWhere possible, we were able to improve our assumptions and therefore our projections by considering primary profiling and outcome data derived from (NEDA) [3]. This unique study has now captured echocardiographic data on > 750,000 Australians (with no exclusion criteria) being routinely investigated with echocardiography from > 25 centres Australia-wide. With individual data linkage, NEDA also generates real-world, short- to long-term survival data on those affected by common cardiac conditions including AS [3].\nEstimating the prevalence & treatable burden of severe AS For these analyses, we applied a point prevalence of 3.5 % for severe AS among individuals aged ≥ 75 years, This small adjustment to the original European model [12] reflects recent reports of an increasing incidence of AS in other high-income countries such as the UK [15] and is only slightly higher than that of the NEDA cohort of actively investigated patients [3]. To derive specific prevalence rates for those aged < 75 years, we used the ratio of cases of severe AS observed in each age-band of the NEDA cohort [3]. This resulted in the following age-specific prevalence estimates being applied: ≥75 years, 3.5 %; 70–74 years, 1.2 %; 65–69 years, 0.7 %; 60–64 years, 0.5 %; and 55–59 years, 0.4 %. When applied to the Australian population aged ≥ 55 years (see Supplementary Figure S1), the overall estimated point prevalence of severe AS is 1.48 % for 2019; a figure that is broadly consistent with that published and applied previously [16]. The same rates were applied to men and women given that, on an age-specific basis, there were minimum differences based on sex within the NEDA cohort [3].\nFor these analyses, we applied a point prevalence of 3.5 % for severe AS among individuals aged ≥ 75 years, This small adjustment to the original European model [12] reflects recent reports of an increasing incidence of AS in other high-income countries such as the UK [15] and is only slightly higher than that of the NEDA cohort of actively investigated patients [3]. To derive specific prevalence rates for those aged < 75 years, we used the ratio of cases of severe AS observed in each age-band of the NEDA cohort [3]. This resulted in the following age-specific prevalence estimates being applied: ≥75 years, 3.5 %; 70–74 years, 1.2 %; 65–69 years, 0.7 %; 60–64 years, 0.5 %; and 55–59 years, 0.4 %. When applied to the Australian population aged ≥ 55 years (see Supplementary Figure S1), the overall estimated point prevalence of severe AS is 1.48 % for 2019; a figure that is broadly consistent with that published and applied previously [16]. The same rates were applied to men and women given that, on an age-specific basis, there were minimum differences based on sex within the NEDA cohort [3].\nMortality To understand the potential consequences of no AVR intervention in the setting of severe AS, age- and sex-specific, actual 5-year mortality rates were applied to the estimated case prevalence of severe AS by applying those observed rates within the equivalent patient cohort identified within NEDA who had a native AV throughout follow-up [3]. 
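A minimal sketch of this mortality step is given below (not the authors' code). Only the endpoints of the 5-year mortality range, 17 % for those aged 55 to 64 years and 84 % for those aged 85 years and over, are quoted in the text; the intermediate rates, age bands and names used here are illustrative assumptions only.

```python
# Minimal sketch of applying age-specific 5-year mortality to prevalent severe AS
# cases with a native aortic valve. Only the 17 % and 84 % endpoints are quoted in
# the text; the intermediate rates below are placeholders, not NEDA-derived values.

FIVE_YEAR_MORTALITY = {
    "55-64": 0.17,   # quoted in the text
    "65-74": 0.40,   # placeholder
    "75-84": 0.65,   # placeholder
    "85+":   0.84,   # quoted in the text
}

def projected_deaths(prevalent_cases_by_band: dict) -> float:
    """Expected 5-year deaths among prevalent severe AS cases, by age band."""
    return sum(prevalent_cases_by_band[band] * rate
               for band, rate in FIVE_YEAR_MORTALITY.items())
```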
Although these data only reflect cases referred for echocardiography, they provide both a discussion point and means to validate prevalence estimates when combined with incidence rates (see below).\nTo understand the potential consequences of no AVR intervention in the setting of severe AS, age- and sex-specific, actual 5-year mortality rates were applied to the estimated case prevalence of severe AS by applying those observed rates within the equivalent patient cohort identified within NEDA who had a native AV throughout follow-up [3]. Although these data only reflect cases referred for echocardiography, they provide both a discussion point and means to validate prevalence estimates when combined with incidence rates (see below).\nIncident cases of severe AS In the absence of specific population incidence data, we used the differential in prevalence for each 5-year age group (i.e., how many more at risk individuals would develop severe AS over 5-years to reflect the number of prevalent cases in that older age group) to calculate how many individuals would develop severe AS each year. By necessity this means incident cases were only calculated for those aged ≥ 60 years. Applying conservatively derived data from the NEDA cohort [3], the following age-specific, annual incident rates, were applied to the Australian population aged ≥ 60 years for the subsequent (i.e. beyond 2019) 5-year period 2020-24: aged ≥ 75 years, 460 cases; 70–74 years, 40 cases; 65–69 years, 40 cases; and 60–64 years, 20 cases per 100,000 population per annum. When combined, these rates generated an annual incidence of severe AS of 182 cases per 100,000 population per annum within the Australian population aged ≥ 60 years.\nIn the absence of specific population incidence data, we used the differential in prevalence for each 5-year age group (i.e., how many more at risk individuals would develop severe AS over 5-years to reflect the number of prevalent cases in that older age group) to calculate how many individuals would develop severe AS each year. By necessity this means incident cases were only calculated for those aged ≥ 60 years. Applying conservatively derived data from the NEDA cohort [3], the following age-specific, annual incident rates, were applied to the Australian population aged ≥ 60 years for the subsequent (i.e. beyond 2019) 5-year period 2020-24: aged ≥ 75 years, 460 cases; 70–74 years, 40 cases; 65–69 years, 40 cases; and 60–64 years, 20 cases per 100,000 population per annum. When combined, these rates generated an annual incidence of severe AS of 182 cases per 100,000 population per annum within the Australian population aged ≥ 60 years.\nStatistical analyses All projections are reported as whole numbers and proportions with 95 % CI where appropriate. All statistical analyses are descriptive in nature and population-based; with no inferential statistics applied. Exact estimates are provided in the figures whilst rounded up (to the nearest hundred) figures are provided in text.\nAll projections are reported as whole numbers and proportions with 95 % CI where appropriate. All statistical analyses are descriptive in nature and population-based; with no inferential statistics applied. 
Exact estimates are provided in the figures whilst rounded up (to the nearest hundred) figures are provided in text.", "Consistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.", "Projections were applied to the entire Australian population and then each State and Territory according to their currently estimated demographic profile. ABS data projecting the future demographic structure and profile were also used to derive future projections on the treatable burden of AS. (https://www.abs.gov.au/Population - Accessed January 2021).", "The primary analyses presented in this report used age-, sex- and geographic-specific demography data for Australian men and women aged ≥ 55 years for the calendar year 2019. Specifically, the ABS currently provides population projections derived from the official 2016 Census - http://www.abs.gov.au/ausstats/abs@.nsf/mf/3101.0 - Accessed January 2021).\nAs indicated above, the primary basis for this report is the recently updated study of the number of treatable and treated patients with severe AS published by Durko and colleagues in 2018 [12]. Specifically, based on an expanded analysis of 37 relevant studies involving 26,402 cases/patients, this study provides a flow-chart (see Fig. 1) of the key estimates/parameters (with 95 % confidence interval, CI) that can be applied (as a primary analysis) to the latest Australian population data (on an age-, sex- and geographic-specific basis) to estimate the pool of treatable severe AS patients per annum. It can also be applied to broad population projections for an increasing pool of older Australians over time; noting that over a decade (from 2006 to 2016) the proportion of Australians aged > 65 years had increased by around 4 % and this proportion (and absolute numbers) will steadily increase.\nFig. 1Model/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nModel/Decision Tree to Determine Treatable Burden of Severe AS in Australia (Adapted from original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement, High, Medium and Low Risk based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score of >8%, 4-8% and <4% for SAVR-related mortality, respectively\nSpecifically, the model developed and subsequently applied in this report provides overall estimates (based on a combination of European and North American cohort studies) on the proportion of patients with severe AS who – 1) are symptomatic and under current expert guidelines, are largely excluded from surgical management; 2) are symptomatic but unsuitable for SAVR but might safely undergo TAVR (the alternative being medical therapy); or 3) are symptomatic and will undergo SAVR; a proportion of whom (particularly due to high-to-medium surgical risk) may benefit from TAVR. 
The range of derived estimates for each parameter, despite efforts to smooth-out inevitable heterogeneity, are indicative of the methodological weaknesses/biases inherent to source data. It is important to note these treatment pathways are continually evolving as new, lower risk valve interventions are developed and lower the likely threshold of AVR. Without Australian-specific to correct the rates of intervention applied, it is generally acknowledged that intervention rates for severe AS broadly follow those applied in North America.\nWhere possible, we were able to improve our assumptions and therefore our projections by considering primary profiling and outcome data derived from (NEDA) [3]. This unique study has now captured echocardiographic data on > 750,000 Australians (with no exclusion criteria) being routinely investigated with echocardiography from > 25 centres Australia-wide. With individual data linkage, NEDA also generates real-world, short- to long-term survival data on those affected by common cardiac conditions including AS [3].", "For these analyses, we applied a point prevalence of 3.5 % for severe AS among individuals aged ≥ 75 years, This small adjustment to the original European model [12] reflects recent reports of an increasing incidence of AS in other high-income countries such as the UK [15] and is only slightly higher than that of the NEDA cohort of actively investigated patients [3]. To derive specific prevalence rates for those aged < 75 years, we used the ratio of cases of severe AS observed in each age-band of the NEDA cohort [3]. This resulted in the following age-specific prevalence estimates being applied: ≥75 years, 3.5 %; 70–74 years, 1.2 %; 65–69 years, 0.7 %; 60–64 years, 0.5 %; and 55–59 years, 0.4 %. When applied to the Australian population aged ≥ 55 years (see Supplementary Figure S1), the overall estimated point prevalence of severe AS is 1.48 % for 2019; a figure that is broadly consistent with that published and applied previously [16]. The same rates were applied to men and women given that, on an age-specific basis, there were minimum differences based on sex within the NEDA cohort [3].", "To understand the potential consequences of no AVR intervention in the setting of severe AS, age- and sex-specific, actual 5-year mortality rates were applied to the estimated case prevalence of severe AS by applying those observed rates within the equivalent patient cohort identified within NEDA who had a native AV throughout follow-up [3]. Although these data only reflect cases referred for echocardiography, they provide both a discussion point and means to validate prevalence estimates when combined with incidence rates (see below).", "In the absence of specific population incidence data, we used the differential in prevalence for each 5-year age group (i.e., how many more at risk individuals would develop severe AS over 5-years to reflect the number of prevalent cases in that older age group) to calculate how many individuals would develop severe AS each year. By necessity this means incident cases were only calculated for those aged ≥ 60 years. Applying conservatively derived data from the NEDA cohort [3], the following age-specific, annual incident rates, were applied to the Australian population aged ≥ 60 years for the subsequent (i.e. beyond 2019) 5-year period 2020-24: aged ≥ 75 years, 460 cases; 70–74 years, 40 cases; 65–69 years, 40 cases; and 60–64 years, 20 cases per 100,000 population per annum. 
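The incident rates just listed can be applied to population counts as in the minimal sketch below (not the authors' code); the rates come from the text, while the data structure and function names are assumptions for illustration. With the 2019 ABS counts for those aged ≥ 60 years substituted in, it should return approximately the combined incidence and annual case numbers quoted in the text.

```python
# Minimal sketch applying the age-specific annual incident rates quoted above
# (new cases of severe AS per 100,000 population per annum) to population counts.

INCIDENT_RATE_PER_100K = {
    "60-64": 20,
    "65-69": 40,
    "70-74": 40,
    "75+":   460,
}

def annual_incident_cases(population_by_band: dict) -> float:
    """Expected new cases of severe AS per year among those aged >= 60 years."""
    return sum(population_by_band[band] * rate / 100_000
               for band, rate in INCIDENT_RATE_PER_100K.items())

def combined_incidence_per_100k(population_by_band: dict) -> float:
    """Overall annual incidence per 100,000 population aged >= 60 years."""
    total_pop = sum(population_by_band.values())
    return annual_incident_cases(population_by_band) / total_pop * 100_000
```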
When combined, these rates generated an annual incidence of severe AS of 182 cases per 100,000 population per annum within the Australian population aged ≥ 60 years.", "All projections are reported as whole numbers and proportions with 95 % CI where appropriate. All statistical analyses are descriptive in nature and population-based; with no inferential statistics applied. Exact estimates are provided in the figures whilst rounded up (to the nearest hundred) figures are provided in text.", "Prevalent cases of severe AS As shown in Fig. 2, we conservatively estimate that around 97,300 Australians are living with severe AS (symptomatic or otherwise). Moreover, assuming just over two-thirds of these cases experience concurrent symptoms linked to the condition, according to contemporary clinical recommendations/ best practice around 66,500 (95 % CI 59,200 to 74,000) people might be considered for an AVR procedure at any one time.\nFig. 2Estimated Point Prevalence and Distribution of Severe AS in Australia\nEstimated Point Prevalence and Distribution of Severe AS in Australia\nAs shown in Fig. 2, we conservatively estimate that around 97,300 Australians are living with severe AS (symptomatic or otherwise). Moreover, assuming just over two-thirds of these cases experience concurrent symptoms linked to the condition, according to contemporary clinical recommendations/ best practice around 66,500 (95 % CI 59,200 to 74,000) people might be considered for an AVR procedure at any one time.\nFig. 2Estimated Point Prevalence and Distribution of Severe AS in Australia\nEstimated Point Prevalence and Distribution of Severe AS in Australia\nAortic valve replacement Figure 3 summarises the overall treatable burden/management of severe AS when assuming 58.4 % of cases with symptomatic, severe AS would be referred for SAVR (primary replacement or revision) and the remainder (41.6 %) for potential TAVR. On this basis, we estimate that around 38,800 (95 % CI 35,700 to 42,000) Australians aged ≥ 55 years with severe, symptomatic AS might be considered for SAVR; based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score [17]. An additional 17,000 cases (95 % CI 11,600 to 22,600) of the approximately 27,700 people not eligible for SAVR due to high peri-operative risk could be potentially considered for TAVR instead.\nFig. 3Estimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)\nEstimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)\nFigure 3 summarises the overall treatable burden/management of severe AS when assuming 58.4 % of cases with symptomatic, severe AS would be referred for SAVR (primary replacement or revision) and the remainder (41.6 %) for potential TAVR. On this basis, we estimate that around 38,800 (95 % CI 35,700 to 42,000) Australians aged ≥ 55 years with severe, symptomatic AS might be considered for SAVR; based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score [17]. An additional 17,000 cases (95 % CI 11,600 to 22,600) of the approximately 27,700 people not eligible for SAVR due to high peri-operative risk could be potentially considered for TAVR instead.\nFig. 
3Estimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)\nEstimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)\nAS-related mortality Based on the age- and sex-specific rates of actual mortality (ranging from 17 to 84 % for those aged 55–64 years to ≥ 85 years) observed in the NEDA cohort, we estimate that 56,300 (95 % CI 56,100 to 56,500) of the prevalent population with severe AS of their native valve will subsequently die within the next 5 years - see Fig. 4.\nFig. 4Severe AS Related Mortality\nSevere AS Related Mortality\nBased on the age- and sex-specific rates of actual mortality (ranging from 17 to 84 % for those aged 55–64 years to ≥ 85 years) observed in the NEDA cohort, we estimate that 56,300 (95 % CI 56,100 to 56,500) of the prevalent population with severe AS of their native valve will subsequently die within the next 5 years - see Fig. 4.\nFig. 4Severe AS Related Mortality\nSevere AS Related Mortality\nIncident cases As shown in Fig. 5, each year, we further estimate that around 9,300 (95 % CI 9,000 to 9,500) more Australians aged ≥ 60 years will subsequently develop severe AS. Assuming the same pattern of potential AVR procedures (once again based on peri-operative risk status), 3,700 (95 % CI 3,400 to 4,000) and 2,600 (95 % CI 2,300 to 2,900) of the approximate de novo 6,300 (95 % CI 5,600 to 7,000) cases with concurrent symptoms linked to the condition, would be potentially managed with/eligible for a SAVR or TAVR procedure, respectively (see Supplementary Figure S2).\nFig. 5Annual Incident Cases of Severe AS among Australians Aged ≥55 years\nAnnual Incident Cases of Severe AS among Australians Aged ≥55 years\nAs shown in Fig. 5, each year, we further estimate that around 9,300 (95 % CI 9,000 to 9,500) more Australians aged ≥ 60 years will subsequently develop severe AS. Assuming the same pattern of potential AVR procedures (once again based on peri-operative risk status), 3,700 (95 % CI 3,400 to 4,000) and 2,600 (95 % CI 2,300 to 2,900) of the approximate de novo 6,300 (95 % CI 5,600 to 7,000) cases with concurrent symptoms linked to the condition, would be potentially managed with/eligible for a SAVR or TAVR procedure, respectively (see Supplementary Figure S2).\nFig. 5Annual Incident Cases of Severe AS among Australians Aged ≥55 years\nAnnual Incident Cases of Severe AS among Australians Aged ≥55 years", "As shown in Fig. 2, we conservatively estimate that around 97,300 Australians are living with severe AS (symptomatic or otherwise). Moreover, assuming just over two-thirds of these cases experience concurrent symptoms linked to the condition, according to contemporary clinical recommendations/ best practice around 66,500 (95 % CI 59,200 to 74,000) people might be considered for an AVR procedure at any one time.\nFig. 2Estimated Point Prevalence and Distribution of Severe AS in Australia\nEstimated Point Prevalence and Distribution of Severe AS in Australia", "Figure 3 summarises the overall treatable burden/management of severe AS when assuming 58.4 % of cases with symptomatic, severe AS would be referred for SAVR (primary replacement or revision) and the remainder (41.6 %) for potential TAVR. On this basis, we estimate that around 38,800 (95 % CI 35,700 to 42,000) Australians aged ≥ 55 years with severe, symptomatic AS might be considered for SAVR; based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score [17]. 
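For reference, the surgical risk stratification used throughout (see the Fig. 1 legend) maps the STS Predicted Risk of Mortality score to risk bands as in this minimal sketch; the threshold values are taken from the legend, while the function name and band labels are ours.

```python
# Minimal sketch of the STS Predicted Risk of Mortality (PROM) stratification
# given in the Fig. 1 legend: >8 % high, 4-8 % medium, <4 % low surgical risk.

def sts_risk_band(prom_percent: float) -> str:
    if prom_percent > 8.0:
        return "high"
    if prom_percent >= 4.0:
        return "medium"
    return "low"

assert sts_risk_band(9.5) == "high"
assert sts_risk_band(5.0) == "medium"
assert sts_risk_band(2.1) == "low"
```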
An additional 17,000 cases (95 % CI 11,600 to 22,600) of the approximately 27,700 people not eligible for SAVR due to high peri-operative risk could be potentially considered for TAVR instead.\nFig. 3 Estimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)", "Based on the age- and sex-specific rates of actual mortality (ranging from 17 to 84 % for those aged 55–64 years to ≥ 85 years) observed in the NEDA cohort, we estimate that 56,300 (95 % CI 56,100 to 56,500) of the prevalent population with severe AS of their native valve will subsequently die within the next 5 years - see Fig. 4.\nFig. 4 Severe AS Related Mortality", "As shown in Fig. 5, each year, we further estimate that around 9,300 (95 % CI 9,000 to 9,500) more Australians aged ≥ 60 years will subsequently develop severe AS. Assuming the same pattern of potential AVR procedures (once again based on peri-operative risk status), 3,700 (95 % CI 3,400 to 4,000) and 2,600 (95 % CI 2,300 to 2,900) of the approximately 6,300 (95 % CI 5,600 to 7,000) de novo cases with concurrent symptoms linked to the condition would be potentially managed with/eligible for a SAVR or TAVR procedure, respectively (see Supplementary Figure S2).\nFig. 5 Annual Incident Cases of Severe AS among Australians Aged ≥55 years", "In the setting of specific surgical registry reports [18] and recent insightful data generated by the NEDA Study [3] but little else, we present a unique analysis of the potential treatable burden of severe AS in Australia. As with many substantive public health issues that routinely affect a large proportion of the population and are associated with costly treatment and historically poor health outcomes [13, 14], unfortunately there is a paucity of specific burden of disease data to guide health resources and clinical practice. Indeed, worldwide, the natural history and impact of AS remains poorly characterised [19]. This relative lack of data is exacerbated by the rapid progression of disease and high mortality in those affected [20]. It was on this basis we chose to use the best available modelling and projections on the population prevalence and treatable pattern/burden of severe AS [12] and then applied them to the Australian population aged ≥ 55 years with further adjustments/improvements based on NEDA Study data [3]. Overall, our analyses suggest that close to 100,000 Australians in this at-risk age-group are currently living with this potentially deadly condition. Accordingly, in the next 5 years, more than half of these individuals will die without having undergone an AVR procedure – their risk of dying being two-fold higher on an adjusted basis than their counterparts without severe AS [3]. Without any change in its natural history (there being strong evidence that its prevalence will rise within our increasingly sedentary and obese population [15, 21]), this number will likely rise substantially within Australia’s ageing population. At minimum, we estimate that an additional 9,000 Australians aged ≥ 60 years will develop severe AS each year.\nOverall, based on contemporary management practices in Europe and North America (noting this remains a highly evolving field), we estimated that just under 56,000 prevalent cases would be potentially eligible for a SAVR or TAVR procedure. 
As also shown by the NEDA Study, successful restoration of AV function with AVR is associated with markedly improved survival [10]. Due to population dynamics (including greater longevity among Australian women) more women than men are likely affected by severe AS, but many of these are aged > 80 years and may have comorbidities that will favour more conservative management options.\nRegardless, of the relative accuracy of these projections (noting our critical corrections of the original model [12] based on the very large and robust data derived from the NEDA cohort [3] – see below), these data provide an important context to the largely hidden but substantive burden of disease imposed by AS in Australia. From an individual to societal perspective, it seems clear that due to Australia’s progressively ageing population, a clear strategy to detect and then optimally manage an increasing burden of AS is urgently required. Outcome data derived from close to 350,000 Australians investigated with echocardiography and collectively followed-up > 1 million person-years as part of the NEDA Study [3] further reinforces the need to prioritise AS management. In that study we reaffirmed that severe AS had very poor survival rates (two-thirds dead within 5-years). We also confirmed that when applied, AVR was largely associated with optimal AV hemodynamic profiles and lower mortality [10].\nA recent analysis of 18,147 patients (mean age 72 years and 64 % men) with AS who underwent SAVR captured by the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) database during 2002–2015, showed that this procedure accounted for 20 % of all adult cardiac surgeries by the end of the study period [18]. In recent years, TAVR has been successfully applied to patients with severe AS and with high/prohibitive surgical risk [5–7]; with two randomized trials reporting the non-inferiority [8] and superiority [9] of TAVR in respect to mortality and subsequent risk of stroke, respectively, when compared to SAVR in low-risk patients with severe AS. A recently reported Australian trial of TAVR applied to 199 intermediate-risk cases of severe AS, is indicative of the changing clinical landscape in this regard [22].\nDespite the type of data described above, determining the actual proportion of Australians at risk of poor outcomes associated with AS and then actively treated with SAVR or TAVR, remains problematic; even when reconciling the largely concordant data around the size of the likely active patient population derived from the NEDA Study and those formal projections. However, there does appear to be a disconnect between the number of Australians with severe AS who might benefit either SAVR or TAVR and their subsequent access to these procedures. A recent report from the ANZSCTS database recorded ~ 4,000 SAVR procedures overall in Australia during the period 2009–2015 [18]. Even when accounting for the fact that registry did become truly national until 2015, there appears to be a large shortfall in the expected number of such procedures per annum relative to our projections. This may well be explained by the demographic profile of those undergoing SAVR in Australia. As reported [18], the mean age of SAVR cases is around 72 years of age and only 37 % were female. Alternatively, the mean age of those with severe AS (during a similar timeframe of surveillance) within the NEDA cohort is around 80 years of age and, consistent with this report, more than half of cases were women. 
Anecdotally, referral for SAVR in the Australian population has typically paralleled North American trends and the presumptions of the modelling (shown in Fig. 1) are based on historical data from a combination of North American and European centres. Given the lag in referral behaviour to new low-risk interventions (e.g., TAVR), by definition, all these assumptions for the modelling will be conservative. Critically, the availability of these new valve technologies has developed rapidly in Australia in the last 5 years and, other than NEDA, there is a paucity of resources to track these changes in real-time.\nThe recently reported NEDA Study data suggesting that even mild-to-moderate forms of AS are associated with high-levels of mortality approaching that of severe AS within 5-years of follow-up [3], when combined with other contemporary reports [23, 24], are likely to change the landscape of AS management. Specifically, this will likely reflect a recognition that a “watchful wait” approach to determine the transition from moderate to severe AS and also from asymptomatic to symptomatic status [4] may be associated with unacceptably high mortality rates [5]. For example, reflective of concerns around intermediate-to-high risk of surgical mortality (> 4 %) and the evolving efficacy of TAVR versus the more costly and invasive SAVR, in the current analyses, it was estimated that around 9,000 SAVR cases could potentially be replaced with TAVR. However, determining how and where to invest in dedicated screening programs and apply the latest evidence to prolong the lives of those affected in Australia (noting our estimate of approximately 9,000 new cases per annum – of whom > 4,000 would be aged < 75 years) is futile without firm evidence of the number of individuals involved. The noted “disconnect” between the potential and actual number of Australians who derive survival benefits from an AVR, therefore, is likely to increase over time.\nLimitations As noted, these data do not completely rely on Australian specific data (other than population estimates/demographic structure and NEDA-derived adjustments to original estimates). To partially address this, we have used a conservative estimate of the point prevalence of severe AS and provide 95 % CI for lower and higher and estimates. However, we acknowledge the dynamics of medical management of severe AS (i.e., conservative treatment versus TAVR versus SAVR) in Australia is likely to be different than reported in Europe/North America. The differential clinical uptake and reimbursement of different procedures from a public health to private health perspective within Australia’s increasingly complex and hybrid health care system, is particularly relevant - as is the variable population dynamics of each jurisdiction across the country. Beyond the broader demographic features of Australia and its major jurisdictions, we did not consider other important factors such as socio-economic status and the concentration of particularly high-risk groups (e.g. predominantly younger Indigenous Australians in Central Australia who experience much higher levels of valvular disease and heart failure [25]); nor did we consider the cost-burden of treating severe AS. Applying the growing resources of NEDA, we hope to address most of these issues/limitations in the future, in order to provide greater clarity around the full spectrum of AS in Australia. 
Finally, it is important to acknowledge that our mortality estimates (even when considering that they are focussed on those with native valves) are discordant with the low rates of mortality reported in trials such as the PARTNER 3 Trial [9]. This discordance only reinforces the benefits of early recognition and expert management of this otherwise deadly condition.\nAs noted, these data do not completely rely on Australian specific data (other than population estimates/demographic structure and NEDA-derived adjustments to original estimates). To partially address this, we have used a conservative estimate of the point prevalence of severe AS and provide 95 % CI for lower and higher and estimates. However, we acknowledge the dynamics of medical management of severe AS (i.e., conservative treatment versus TAVR versus SAVR) in Australia is likely to be different than reported in Europe/North America. The differential clinical uptake and reimbursement of different procedures from a public health to private health perspective within Australia’s increasingly complex and hybrid health care system, is particularly relevant - as is the variable population dynamics of each jurisdiction across the country. Beyond the broader demographic features of Australia and its major jurisdictions, we did not consider other important factors such as socio-economic status and the concentration of particularly high-risk groups (e.g. predominantly younger Indigenous Australians in Central Australia who experience much higher levels of valvular disease and heart failure [25]); nor did we consider the cost-burden of treating severe AS. Applying the growing resources of NEDA, we hope to address most of these issues/limitations in the future, in order to provide greater clarity around the full spectrum of AS in Australia. Finally, it is important to acknowledge that our mortality estimates (even when considering that they are focussed on those with native valves) are discordant with the low rates of mortality reported in trials such as the PARTNER 3 Trial [9]. This discordance only reinforces the benefits of early recognition and expert management of this otherwise deadly condition.", "As noted, these data do not completely rely on Australian specific data (other than population estimates/demographic structure and NEDA-derived adjustments to original estimates). To partially address this, we have used a conservative estimate of the point prevalence of severe AS and provide 95 % CI for lower and higher and estimates. However, we acknowledge the dynamics of medical management of severe AS (i.e., conservative treatment versus TAVR versus SAVR) in Australia is likely to be different than reported in Europe/North America. The differential clinical uptake and reimbursement of different procedures from a public health to private health perspective within Australia’s increasingly complex and hybrid health care system, is particularly relevant - as is the variable population dynamics of each jurisdiction across the country. Beyond the broader demographic features of Australia and its major jurisdictions, we did not consider other important factors such as socio-economic status and the concentration of particularly high-risk groups (e.g. predominantly younger Indigenous Australians in Central Australia who experience much higher levels of valvular disease and heart failure [25]); nor did we consider the cost-burden of treating severe AS. 
Applying the growing resources of NEDA, we hope to address most of these issues/limitations in the future, in order to provide greater clarity around the full spectrum of AS in Australia. Finally, it is important to acknowledge that our mortality estimates (even when considering that they are focussed on those with native valves) are discordant with the low rates of mortality reported in trials such as the PARTNER 3 Trial [9]. This discordance only reinforces the benefits of early recognition and expert management of this otherwise deadly condition.", "These unique estimates provide an important insight into the current and future treatable burden of severe AS in Australia. At the most conservative level, the likely number of currently affected Australians aged 55 years and over will soon rise to 100,000 people. Moreover, the number of new cases with this potentially deadly condition is likely approaching 10,000 per annum. Based on current clinical practice/recommendations, around two-thirds of such cases should be actively managed due to their symptomatic status (predominantly with a combination of SAVR and TAVR) and a high-risk of mortality. However, it is unclear if that is truly the case. Whether there is sufficient clinical awareness of AS and pro-active referral patterns (particularly for Australian women) for active management is yet to be determined.", " \nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australian (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum).\n\nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australian (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum).\n\nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australian (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum).\n\nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australian (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum).", "\nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. 
Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australian (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum).\n\nAdditional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australian (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD).\nAdditional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum)." ]
[ "introduction", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, "conclusion", "supplementary-material", null ]
[ "Aortic stenosis", "Aortic valve replacement", "Health-services" ]
Introduction: One of the most common cardiac conditions affecting the progressively ageing populations of high-income countries such as Australia is aortic stenosis (AS) [1]. Without timely intervention, severe AS is associated with a very poor prognosis [2]. However, as in many other countries, there has been a paucity of reports focusing on the overall prevalence and treatable burden of AS in Australia. A recent AS report from the National Echocardiography Database of Australia (NEDA) [3], which assessed the severity of AS and the subsequent pattern of survival among 122,809 men and 118,494 women with a mean age of 62 ± 18 years, highlighted an urgent need to better understand the burden imposed by this potentially deadly condition. Overall, the indicative prevalence of severe low- or high-gradient AS among adults being investigated with echocardiography during the overall study period of 2000–2019 was 1.1 and 2.1 % (3.2 % combined), respectively. Actual 5-year mortality ranged from 56 to 67 % in those cases with a native aortic valve and no indication of surgical intervention [3]. Historically, surgical aortic valve replacement (SAVR) has been the preferred intervention for severe AS [4]. However, transcatheter aortic valve replacement (TAVR) has been successfully applied in those with severe AS and high/prohibitive surgical risk [5–7]. Moreover, two head-to-head randomized trials have now reported the non-inferiority [8] and superiority [9] of TAVR compared with SAVR in low-risk patients with severe AS, with respect to mortality and subsequent risk of stroke, respectively. Consistent with these data, for most of the 6,050 cases within the NEDA cohort who underwent AVR, post-procedure AV hemodynamic profiles and survival outcomes were favourable [10]. Overall, these data suggest more Australians might benefit from AVR. However, without reliable estimates of the treatable burden of AS, this critical number (for health service and resource planning) remains unknown. A series of modelled studies, first published in 2013 [11] and then updated in 2018 [12], applied the best available epidemiological and registry data to estimate the following for Europe and North America: 1) the overall proportion of older individuals affected by AS and, more specifically, severe AS; and 2) the proportion of these individuals who had, and/or would, benefit from a valve replacement procedure (SAVR or TAVR). Given the geographical focus and source data used, these estimates now provide these target regions with moderately reliable estimates of the treatable burden of AS. To date, there are no equivalent burden of disease estimates for Australia.
Study aims: We sought, for the first time, to generate reliable (but inherently conservative) estimates of the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the robust models recently used to generate contemporary AS-specific projections for Europe and North America [12], as highlighted above, with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS).
Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and to apply age- and sex-specific incident and prevalent estimates of AS informed by the recent NEDA Study of AS [3].
Methods: Study design: Consistent with previous reports of this type specifically focusing on atrial fibrillation [13] and heart failure [14], we combined official national population data with the best available epidemiological and clinical data to generate estimates of the treatable burden of disease relating to AS. Given the anonymous source and nature of the study data, no ethical approvals were required.
Study setting: Projections were applied to the entire Australian population and then to each State and Territory according to their currently estimated demographic profiles. ABS data projecting the future demographic structure and profile were also used to derive future projections of the treatable burden of AS (https://www.abs.gov.au/Population - accessed January 2021).
Study data: The primary analyses presented in this report used age-, sex- and geographic-specific demography data for Australian men and women aged ≥ 55 years for the calendar year 2019. Specifically, the ABS currently provides population projections derived from the official 2016 Census (http://www.abs.gov.au/ausstats/abs@.nsf/mf/3101.0 - accessed January 2021). As indicated above, the primary basis for this report is the recently updated study of the number of treatable and treated patients with severe AS published by Durko and colleagues in 2018 [12]. Specifically, based on an expanded analysis of 37 relevant studies involving 26,402 cases/patients, this study provides a flow-chart (see Fig. 1) of the key estimates/parameters (with 95 % confidence intervals, CI) that can be applied (as a primary analysis) to the latest Australian population data (on an age-, sex- and geographic-specific basis) to estimate the pool of treatable severe AS patients per annum. It can also be applied to broad population projections for an increasing pool of older Australians over time; noting that over a decade (from 2006 to 2016) the proportion of Australians aged > 65 years increased by around 4 %, and this proportion (and the absolute numbers) will steadily increase.
Fig. 1 Model/Decision Tree to Determine Treatable Burden of Severe AS in Australia (adapted from the original [12]). Legend: AS = Aortic Stenosis, SAVR = Surgical Aortic Valve Replacement, TAVR = Transcatheter Aortic Valve Replacement; High, Medium and Low Risk are based on a Society of Thoracic Surgeons Predicted Risk of Mortality Score of > 8 %, 4–8 % and < 4 % for SAVR-related mortality, respectively.
Specifically, the model developed and subsequently applied in this report provides overall estimates (based on a combination of European and North American cohort studies) of the proportion of patients with severe AS who: 1) are symptomatic and, under current expert guidelines, are largely excluded from surgical management; 2) are symptomatic, are unsuitable for SAVR, but might safely undergo TAVR (the alternative being medical therapy); or 3) are symptomatic and will undergo SAVR, a proportion of whom (particularly due to high-to-medium surgical risk) may benefit from TAVR. The range of derived estimates for each parameter, despite efforts to smooth out inevitable heterogeneity, is indicative of the methodological weaknesses/biases inherent to the source data. It is important to note that these treatment pathways are continually evolving as new, lower-risk valve interventions are developed and lower the likely threshold for AVR. Without Australian-specific data to correct the rates of intervention applied, it is generally acknowledged that intervention rates for severe AS broadly follow those applied in North America.
Where possible, we were able to improve our assumptions, and therefore our projections, by considering primary profiling and outcome data derived from NEDA [3]. This unique study has now captured echocardiographic data on > 750,000 Australians (with no exclusion criteria) being routinely investigated with echocardiography from > 25 centres Australia-wide. With individual data linkage, NEDA also generates real-world, short- to long-term survival data on those affected by common cardiac conditions including AS [3].
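To make the structure of this calculation concrete, a minimal sketch of the arithmetic implied by Fig. 1 is shown below (Python). This is an illustration only, not the published model: the symptomatic fraction is approximated from the figures quoted in the Results, the 58.4 % SAVR referral split is taken from this report, and the share of SAVR-ineligible cases suitable for TAVR is a placeholder standing in for the corresponding parameter in [12].

# Sketch of the Fig. 1 decision tree applied to a prevalent pool of severe AS.
# Values marked "placeholder"/"approximate" are illustrative assumptions, not published parameters.
def treatable_burden(prevalent_cases,
                     symptomatic_fraction=0.684,      # approximate ("just over two-thirds")
                     savr_referral_fraction=0.584,    # 58.4 % referred for SAVR (this report)
                     tavr_share_of_ineligible=0.61):  # placeholder
    symptomatic = prevalent_cases * symptomatic_fraction
    savr_candidates = symptomatic * savr_referral_fraction
    savr_ineligible = symptomatic - savr_candidates   # high peri-operative risk group
    tavr_candidates = savr_ineligible * tavr_share_of_ineligible
    return {"symptomatic": round(symptomatic),
            "potential SAVR": round(savr_candidates),
            "not eligible for SAVR": round(savr_ineligible),
            "potential TAVR": round(tavr_candidates)}

print(treatable_burden(97_300))  # broadly reproduces the prevalent-case estimates reported in the Results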
Estimating the prevalence & treatable burden of severe AS: For these analyses, we applied a point prevalence of 3.5 % for severe AS among individuals aged ≥ 75 years. This small adjustment to the original European model [12] reflects recent reports of an increasing incidence of AS in other high-income countries such as the UK [15] and is only slightly higher than that of the NEDA cohort of actively investigated patients [3]. To derive specific prevalence rates for those aged < 75 years, we used the ratio of cases of severe AS observed in each age-band of the NEDA cohort [3]. This resulted in the following age-specific prevalence estimates being applied: ≥ 75 years, 3.5 %; 70–74 years, 1.2 %; 65–69 years, 0.7 %; 60–64 years, 0.5 %; and 55–59 years, 0.4 %. When applied to the Australian population aged ≥ 55 years (see Supplementary Figure S1), the overall estimated point prevalence of severe AS is 1.48 % for 2019; a figure that is broadly consistent with that published and applied previously [16]. The same rates were applied to men and women given that, on an age-specific basis, there were minimal differences based on sex within the NEDA cohort [3].
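As a concrete illustration of how these age-specific rates translate into case numbers, the sketch below applies them to a set of population counts. The counts are round placeholder values only; the actual calculation used ABS 2019 estimates on an age-, sex- and State/Territory-specific basis.

# Age-specific point prevalence of severe AS (rates from this report); population
# counts below are illustrative placeholders, not ABS figures.
prevalence_rate = {"55-59": 0.004, "60-64": 0.005, "65-69": 0.007, "70-74": 0.012, "75+": 0.035}
population = {"55-59": 1_500_000, "60-64": 1_400_000, "65-69": 1_200_000,
              "70-74": 1_000_000, "75+": 1_600_000}  # placeholder counts

cases = {band: prevalence_rate[band] * population[band] for band in prevalence_rate}
total_cases = sum(cases.values())
overall_prevalence = total_cases / sum(population.values())
# With ABS 2019 data the authors obtain ~97,300 prevalent cases and an overall prevalence of 1.48 %.
print(round(total_cases), f"{overall_prevalence:.2%}")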
Mortality: To understand the potential consequences of no AVR intervention in the setting of severe AS, the age- and sex-specific, actual 5-year mortality rates observed within the equivalent patient cohort identified within NEDA who had a native AV throughout follow-up [3] were applied to the estimated case prevalence of severe AS. Although these data only reflect cases referred for echocardiography, they provide both a discussion point and a means to validate prevalence estimates when combined with incidence rates (see below).
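A minimal sketch of this step, assuming illustrative inputs, is shown below. Only the 17 % and 84 % anchors of the mortality range quoted in the Results are taken from this report; the intermediate rates and the case counts are placeholders, whereas the actual analysis used the age- and sex-specific rates observed in the NEDA cohort.

# Project 5-year deaths by applying age-band-specific 5-year mortality to prevalent cases.
# Only the 17 % (55-64 years) and 84 % (>= 85 years) rates are from this report; the rest are placeholders.
mortality_5yr = {"55-64": 0.17, "65-74": 0.45, "75-84": 0.65, "85+": 0.84}
prevalent_cases = {"55-64": 13_000, "65-74": 20_000, "75-84": 40_000, "85+": 24_000}  # placeholders

deaths_5yr = sum(mortality_5yr[band] * prevalent_cases[band] for band in mortality_5yr)
print(round(deaths_5yr))  # the authors estimate ~56,300 deaths within 5 years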
Incident cases of severe AS: In the absence of specific population incidence data, we used the differential in prevalence for each 5-year age group (i.e., how many more at-risk individuals would need to develop severe AS over 5 years to produce the number of prevalent cases in the next older age group) to calculate how many individuals would develop severe AS each year. By necessity, this means incident cases were only calculated for those aged ≥ 60 years. Applying conservatively derived data from the NEDA cohort [3], the following age-specific annual incident rates were applied to the Australian population aged ≥ 60 years for the subsequent (i.e., beyond 2019) 5-year period 2020–24: ≥ 75 years, 460 cases; 70–74 years, 40 cases; 65–69 years, 40 cases; and 60–64 years, 20 cases per 100,000 population per annum. When combined, these rates generated an annual incidence of severe AS of 182 cases per 100,000 population per annum within the Australian population aged ≥ 60 years.
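The combined figure of 182 cases per 100,000 is the population-weighted average of the age-specific rates. The short sketch below shows that calculation; the population shares used are approximate, illustrative values (the actual weighting used ABS 2019 estimates).

# Population-weighted annual incidence of severe AS among Australians aged >= 60 years.
# Rates are from this report; the population shares are approximate, illustrative values.
incidence_per_100k = {"60-64": 20, "65-69": 40, "70-74": 40, "75+": 460}
population_share = {"60-64": 0.25, "65-69": 0.21, "70-74": 0.19, "75+": 0.35}  # shares of the 60+ population

weighted_rate = sum(incidence_per_100k[b] * population_share[b] for b in incidence_per_100k)
print(round(weighted_rate))  # ~182 cases per 100,000 per annum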
Statistical analyses: All projections are reported as whole numbers and proportions, with 95 % CIs where appropriate. All statistical analyses are descriptive in nature and population-based, with no inferential statistics applied. Exact estimates are provided in the figures, whilst figures rounded to the nearest hundred are provided in the text.
Results: Prevalent cases of severe AS: As shown in Fig. 2, we conservatively estimate that around 97,300 Australians are living with severe AS (symptomatic or otherwise). Moreover, assuming just over two-thirds of these cases experience concurrent symptoms linked to the condition, according to contemporary clinical recommendations/best practice around 66,500 (95 % CI 59,200 to 74,000) people might be considered for an AVR procedure at any one time.
Fig. 2 Estimated Point Prevalence and Distribution of Severe AS in Australia
Aortic valve replacement: Figure 3 summarises the overall treatable burden/management of severe AS when assuming 58.4 % of cases with symptomatic, severe AS would be referred for SAVR (primary replacement or revision) and the remainder (41.6 %) for potential TAVR. On this basis, we estimate that around 38,800 (95 % CI 35,700 to 42,000) Australians aged ≥ 55 years with severe, symptomatic AS might be considered for SAVR, based on the Society of Thoracic Surgeons Predicted Risk of Mortality Score [17]. An additional 17,000 cases (95 % CI 11,600 to 22,600) of the approximately 27,700 people not eligible for SAVR due to high peri-operative risk could be potentially considered for TAVR instead.
Fig. 3 Estimated Treatable Burden/Management of Severe AS in Australia (Based on Prevalent Cases)
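As a rough cross-check (an approximate reconstruction from the quoted proportions, not the model output itself, which also carries the confidence intervals shown), these point estimates chain together as follows.

# Approximate reconstruction of the prevalent-case estimates from the quoted proportions.
symptomatic = 97_300 * 0.684                     # "just over two-thirds" (approximate) -> ~66,500
potential_savr = symptomatic * 0.584             # 58.4 % referred for SAVR -> ~38,800
savr_ineligible = symptomatic - potential_savr   # -> ~27,700 at high peri-operative risk
print(round(symptomatic), round(potential_savr), round(savr_ineligible))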
AS-related mortality: Based on the age- and sex-specific rates of actual mortality (ranging from 17 % for those aged 55–64 years to 84 % for those aged ≥ 85 years) observed in the NEDA cohort, we estimate that 56,300 (95 % CI 56,100 to 56,500) of the prevalent population with severe AS of their native valve will subsequently die within the next 5 years (see Fig. 4).
Fig. 4 Severe AS Related Mortality
Incident cases: As shown in Fig. 5, each year we further estimate that around 9,300 (95 % CI 9,000 to 9,500) more Australians aged ≥ 60 years will subsequently develop severe AS. Assuming the same pattern of potential AVR procedures (once again based on peri-operative risk status), 3,700 (95 % CI 3,400 to 4,000) and 2,600 (95 % CI 2,300 to 2,900) of the approximately 6,300 (95 % CI 5,600 to 7,000) de novo cases with concurrent symptoms linked to the condition would be potentially managed with/eligible for a SAVR or TAVR procedure, respectively (see Supplementary Figure S2).
Fig. 5 Annual Incident Cases of Severe AS among Australians Aged ≥ 55 years
Discussion: In the setting of specific surgical registry reports [18] and recent insightful data generated by the NEDA Study [3], but little else, we present a unique analysis of the potential treatable burden of severe AS in Australia. As with many substantive public health issues that routinely affect a large proportion of the population and are associated with costly treatment and historically poor health outcomes [13, 14], unfortunately there is a paucity of specific burden of disease data to guide health resources and clinical practice. Indeed, worldwide, the natural history and impact of AS remains poorly characterised [19]. This relative lack of data is exacerbated by the rapid progression of disease and high mortality in those affected [20]. It was on this basis we chose to use the best available modelling and projections of the population prevalence and treatable pattern/burden of severe AS [12] and then applied them to the Australian population aged ≥ 55 years, with further adjustments/improvements based on NEDA Study data [3]. Overall, our analyses suggest that close to 100,000 Australians in this at-risk age-group are currently living with this potentially deadly condition. Accordingly, in the next 5 years, more than half of these individuals will die without having undergone an AVR procedure; their risk of dying is two-fold higher on an adjusted basis than that of their counterparts without severe AS [3]. Without any change in its natural history (there being strong evidence that its prevalence will rise within our increasingly sedentary and obese population [15, 21]), this number will likely rise substantially within Australia’s ageing population. At minimum, we estimate that an additional 9,000 Australians aged ≥ 60 years will develop severe AS each year. Overall, based on contemporary management practices in Europe and North America (noting this remains a highly evolving field), we estimated that just under 56,000 prevalent cases would be potentially eligible for a SAVR or TAVR procedure. As also shown by the NEDA Study, successful restoration of AV function with AVR is associated with markedly improved survival [10].
Due to population dynamics (including greater longevity among Australian women), more women than men are likely to be affected by severe AS, but many of these are aged > 80 years and may have comorbidities that will favour more conservative management options. Regardless of the relative accuracy of these projections (noting our critical corrections of the original model [12] based on the very large and robust data derived from the NEDA cohort [3]; see below), these data provide an important context for the largely hidden but substantive burden of disease imposed by AS in Australia. From an individual to societal perspective, it seems clear that, due to Australia’s progressively ageing population, a clear strategy to detect and then optimally manage an increasing burden of AS is urgently required. Outcome data derived from close to 350,000 Australians investigated with echocardiography and collectively followed up for > 1 million person-years as part of the NEDA Study [3] further reinforce the need to prioritise AS management. In that study we reaffirmed that severe AS had very poor survival rates (two-thirds dead within 5 years). We also confirmed that, when applied, AVR was largely associated with optimal AV hemodynamic profiles and lower mortality [10]. A recent analysis of 18,147 patients (mean age 72 years and 64 % men) with AS who underwent SAVR captured by the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) database during 2002–2015 showed that this procedure accounted for 20 % of all adult cardiac surgeries by the end of the study period [18]. In recent years, TAVR has been successfully applied to patients with severe AS and high/prohibitive surgical risk [5–7], with two randomized trials reporting the non-inferiority [8] and superiority [9] of TAVR with respect to mortality and subsequent risk of stroke, respectively, when compared to SAVR in low-risk patients with severe AS. A recently reported Australian trial of TAVR applied to 199 intermediate-risk cases of severe AS is indicative of the changing clinical landscape in this regard [22]. Despite the type of data described above, determining the actual proportion of Australians at risk of poor outcomes associated with AS and then actively treated with SAVR or TAVR remains problematic; even when reconciling the largely concordant data around the size of the likely active patient population derived from the NEDA Study and those formal projections. However, there does appear to be a disconnect between the number of Australians with severe AS who might benefit from either SAVR or TAVR and their subsequent access to these procedures. A recent report from the ANZSCTS database recorded ~ 4,000 SAVR procedures overall in Australia during the period 2009–2015 [18]. Even when accounting for the fact that the registry did not become truly national until 2015, there appears to be a large shortfall in the expected number of such procedures per annum relative to our projections. This may well be explained by the demographic profile of those undergoing SAVR in Australia. As reported [18], the mean age of SAVR cases is around 72 years and only 37 % were female. Alternatively, the mean age of those with severe AS (during a similar timeframe of surveillance) within the NEDA cohort is around 80 years and, consistent with this report, more than half of cases were women.
Anecdotally, referral for SAVR in the Australian population has typically paralleled North American trends, and the presumptions of the modelling (shown in Fig. 1) are based on historical data from a combination of North American and European centres. Given the lag in referral behaviour to new low-risk interventions (e.g., TAVR), by definition, all these assumptions for the modelling will be conservative. Critically, the availability of these new valve technologies has developed rapidly in Australia in the last 5 years and, other than NEDA, there is a paucity of resources to track these changes in real-time. The recently reported NEDA Study data suggesting that even mild-to-moderate forms of AS are associated with high levels of mortality approaching that of severe AS within 5 years of follow-up [3], when combined with other contemporary reports [23, 24], are likely to change the landscape of AS management. Specifically, this will likely reflect a recognition that a “watchful wait” approach to determine the transition from moderate to severe AS, and also from asymptomatic to symptomatic status [4], may be associated with unacceptably high mortality rates [5]. For example, reflective of concerns around intermediate-to-high risk of surgical mortality (> 4 %) and the evolving efficacy of TAVR versus the more costly and invasive SAVR, in the current analyses it was estimated that around 9,000 SAVR cases could potentially be replaced with TAVR. However, determining how and where to invest in dedicated screening programs and apply the latest evidence to prolong the lives of those affected in Australia (noting our estimate of approximately 9,000 new cases per annum, of whom > 4,000 would be aged < 75 years) is futile without firm evidence of the number of individuals involved. The noted “disconnect” between the potential and actual number of Australians who derive survival benefits from an AVR, therefore, is likely to increase over time.
Limitations: As noted, these data do not completely rely on Australian-specific data (other than population estimates/demographic structure and NEDA-derived adjustments to original estimates). To partially address this, we have used a conservative estimate of the point prevalence of severe AS and provide 95 % CIs for lower and higher estimates. However, we acknowledge that the dynamics of medical management of severe AS (i.e., conservative treatment versus TAVR versus SAVR) in Australia are likely to differ from those reported in Europe/North America. The differential clinical uptake and reimbursement of different procedures from a public health to private health perspective within Australia’s increasingly complex and hybrid health care system is particularly relevant, as are the variable population dynamics of each jurisdiction across the country. Beyond the broader demographic features of Australia and its major jurisdictions, we did not consider other important factors such as socio-economic status and the concentration of particularly high-risk groups (e.g., predominantly younger Indigenous Australians in Central Australia who experience much higher levels of valvular disease and heart failure [25]); nor did we consider the cost-burden of treating severe AS. Applying the growing resources of NEDA, we hope to address most of these issues/limitations in the future, in order to provide greater clarity around the full spectrum of AS in Australia.
Finally, it is important to acknowledge that our mortality estimates (even when considering that they are focussed on those with native valves) are discordant with the low rates of mortality reported in trials such as the PARTNER 3 Trial [9]. This discordance only reinforces the benefits of early recognition and expert management of this otherwise deadly condition.
Conclusions: These unique estimates provide an important insight into the current and future treatable burden of severe AS in Australia. At the most conservative level, the likely number of currently affected Australians aged 55 years and over will soon rise to 100,000 people. Moreover, the number of new cases with this potentially deadly condition is likely approaching 10,000 per annum. Based on current clinical practice/recommendations, around two-thirds of such cases should be actively managed due to their symptomatic status (predominantly with a combination of SAVR and TAVR) and a high risk of mortality. However, it is unclear if that is truly the case. Whether there is sufficient clinical awareness of AS and pro-active referral patterns (particularly for Australian women) for active management is yet to be determined.
Supplementary Information: Additional file 1: Supplementary Figure 1. Population Distribution of Australians Aged ≥55 years. Legend: Australia is a federated country, comprising the main populated States (Northern Territory and Capital Territory not shown) of Western Australia (WA), South Australia (SA), Victoria (VIC), Tasmania (TAS), New South Wales (NSW) and Queensland (QLD). Additional file 2: Supplementary Figure 2. Estimated Treatable Burden/Management of Severe AS in Australia (Based on Incident Cases per Annum).
Background: We aimed to address the paucity of information describing the treatable burden of disease associated with severe aortic stenosis (AS) within Australia's ageing population. Methods: A contemporary model of the population prevalence of symptomatic, severe AS and treatment pathways in Europe and North America was applied to the 2019 Australian population aged ≥ 55 years (7 million people) on an age-specific basis. Applying Australian-specific data, these estimates were used to further calculate the total number of associated deaths and incident cases of severe AS per annum. Results: Based on an overall point prevalence of 1.48 % among those aged ≥ 55 years, we estimate that a minimum of 97,000 Australians are living with severe AS. With a 2-fold increased risk of mortality without undergoing aortic valve replacement (AVR), more than half of these individuals (∼56,000) will die within 5-years. From a clinical management perspective, among those with concurrent symptoms (68.3 %, 66,500 [95 % CI 59,000-74,000] cases) more than half (58.4 %, 38,800 [95 % CI 35,700 - 42,000] cases) would be potentially considered for surgical AVR (SAVR) - comprising 2,400, 5,400 and 31,000 cases assessed as high-, medium- or low peri-operative mortality risk, respectively. A further 17,000/27,700 (41.6 % [95 % CI 11,600 - 22,600]) of such individuals would be potentially considered for a transcatheter AVR (TAVR). During the subsequent 5-year period (2020-2024), each year, we estimate an additional 9,300 Australians aged ≥ 60 years will subsequently develop severe AS (6,300 of whom will experience concurrent symptoms). Of these symptomatic cases, an estimated 3,700 and 1,600 cases/annum will be potentially suitable for SAVR and TAVR, respectively. Conclusions: These data suggest there is likely to be a substantive burden of individuals living with severe AS in Australia. Many of these cases may not have been diagnosed and/or received appropriate treatment (based on the evidence-based application of SAVR and TAVR) to reduce their high-risk of subsequent mortality.
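For readers who want to trace the arithmetic behind these headline figures, the short sketch below reconstructs them from the inputs quoted in the abstract itself. It is illustrative only: the published estimates were built on an age-specific basis, so the crude product of population and prevalence overshoots the reported minimum of 97,000, and every number used is taken from the abstract rather than any additional source.

```python
# Back-of-envelope reconstruction of the headline estimates quoted in the abstract above.
# All inputs are taken from the abstract; the published figures were derived on an
# age-specific basis, so the crude product of population x prevalence only approximates them.

POP_55_PLUS = 7_000_000        # 2019 Australian population aged >= 55 years
POINT_PREVALENCE = 0.0148      # overall point prevalence of severe AS (1.48 %)
SYMPTOMATIC_FRACTION = 0.683   # proportion of prevalent cases with concurrent symptoms
SAVR_FRACTION = 0.584          # proportion of symptomatic cases potentially considered for SAVR

crude_prevalent = POP_55_PLUS * POINT_PREVALENCE   # ~103,600: overshoots the age-specific figure
reported_prevalent = 97_000                        # minimum prevalent cases reported in the abstract

symptomatic = reported_prevalent * SYMPTOMATIC_FRACTION   # ~66,250 vs 66,500 reported
savr_considered = symptomatic * SAVR_FRACTION             # ~38,700 vs 38,800 reported

print(f"Crude prevalent cases (population x prevalence): {crude_prevalent:,.0f}")
print(f"Symptomatic cases (68.3 % of 97,000):            {symptomatic:,.0f}")
print(f"Potentially considered for SAVR (58.4 %):        {savr_considered:,.0f}")
```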
Introduction: One of the most common cardiac conditions affecting the progressively aging populations of high-income countries such as Australia is aortic stenosis (AS) [1]. Without timely intervention, severe AS is associated with a very poor prognosis [2]. However, as in many other countries, there has been a paucity of reports focusing on the overall prevalence and treatable burden of AS in Australia. A recent AS report from the National Echocardiography Database of Australia (NEDA) [3] that assessed the severity of AS and subsequent pattern of survival among 122,809 men and 118,494 women with a mean age of 62 ± 18 years highlighted an urgent need to better understand the burden imposed by this potentially deadly condition. Overall, the indicative prevalence of severe low or high gradient AS among adults being investigated with echocardiography during the overall study period of 2000–2019 was 1.1 and 2.1 % (3.2 % combined), respectively. Actual 5-year mortality ranged from 56 to 67 % in those cases with a native aortic valve and no indication of surgical intervention [3]. Historically, surgical aortic valve replacement (SAVR) has been the preferred intervention for severe AS [4]. However, transcatheter aortic valve replacement (TAVR) has been successfully applied in those with severe AS with high/prohibitive surgical risk [5–7]. Moreover, two head-to-head randomized trials comparing TAVR to SAVR in low-risk patients with severe AS have now reported non-inferiority [8] and superiority [9] with respect to mortality and subsequent risk of stroke, respectively. Consistent with these data, for most of the 6,050 cases within the NEDA cohort who underwent AVR, their post-procedure AV hemodynamic profile and survival outcomes were favourable [10]. Overall, these data suggest more Australians might benefit from AVR. However, without reliable estimates of the treatable burden of AS, this critical number (for health service and resource planning) remains unknown. A series of modelled studies, first published in 2013 [11] and then an updated version in 2018 [12], applied the best available epidemiological and registry data to estimate the following for Europe and North America – 1) the overall proportion of older individuals affected by AS and, more specifically, severe AS; and 2) the proportion of these individuals who had and/or would benefit from a valve replacement procedure (SAVR or TAVR). Given the geographical focus and source data used, these estimates now provide these target regions with moderately reliable estimates of the treatable burden of AS. To date, there are no equivalent burden of disease estimates for Australia.
Study aims: We sought, for the first time, to generate reliable (but inherently conservative) estimates on the prevalence and treatable patient population with severe, symptomatic AS in Australia. Specifically, our aim was to replicate the same robust models recently used to generate contemporary AS-specific projections for Europe and North America [12], as highlighted above, with specific modifications relevant to the Australian context. Firstly, this included applying population profiling (denominator) data derived from the Australian Bureau of Statistics (ABS).
Secondly, given the Australian context, we aimed to expand our burden of disease estimates to those aged ≥ 55 years (noting the original study applied a single estimate of the overall prevalence of severe AS (numerator) to those aged ≥ 75 years [12]) and to apply age- and sex-specific, incident, and prevalent estimates of AS informed by the recent NEDA Study of AS [3].
Conclusions: These unique estimates provide an important insight into the current and future treatable burden of severe AS in Australia. At the most conservative level, the likely number of currently affected Australians aged 55 years and over will soon rise to 100,000 people. Moreover, the number of new cases with this potentially deadly condition is likely approaching 10,000 per annum. Based on current clinical practice/recommendations, around two-thirds of such cases should be actively managed due to their symptomatic status (predominantly with a combination of SAVR and TAVR) and a high risk of mortality. However, it is unclear if that is truly the case. Whether there is sufficient clinical awareness of AS and pro-active referral patterns (particularly for Australian women) for active management is yet to be determined.
10,123
425
[ 177, 2864, 66, 53, 698, 247, 96, 197, 57, 100, 174, 88, 153, 316, 206 ]
20
[ "severe", "years", "cases", "australia", "population", "data", "aged", "applied", "savr", "estimates" ]
[ "report national echocardiography", "aortic stenosis timely", "aortic stenosis", "countries australia aortic", "australians investigated echocardiography" ]
null
[CONTENT] Aortic stenosis | Aortic valve replacement | Health-services [SUMMARY]
null
[CONTENT] Aortic stenosis | Aortic valve replacement | Health-services [SUMMARY]
[CONTENT] Aortic stenosis | Aortic valve replacement | Health-services [SUMMARY]
[CONTENT] Aortic stenosis | Aortic valve replacement | Health-services [SUMMARY]
[CONTENT] Aortic stenosis | Aortic valve replacement | Health-services [SUMMARY]
[CONTENT] Aging | Aortic Valve Stenosis | Australia | Heart Valve Prosthesis Implantation | Humans | Risk Factors | Transcatheter Aortic Valve Replacement | Treatment Outcome [SUMMARY]
null
[CONTENT] Aging | Aortic Valve Stenosis | Australia | Heart Valve Prosthesis Implantation | Humans | Risk Factors | Transcatheter Aortic Valve Replacement | Treatment Outcome [SUMMARY]
[CONTENT] Aging | Aortic Valve Stenosis | Australia | Heart Valve Prosthesis Implantation | Humans | Risk Factors | Transcatheter Aortic Valve Replacement | Treatment Outcome [SUMMARY]
[CONTENT] Aging | Aortic Valve Stenosis | Australia | Heart Valve Prosthesis Implantation | Humans | Risk Factors | Transcatheter Aortic Valve Replacement | Treatment Outcome [SUMMARY]
[CONTENT] Aging | Aortic Valve Stenosis | Australia | Heart Valve Prosthesis Implantation | Humans | Risk Factors | Transcatheter Aortic Valve Replacement | Treatment Outcome [SUMMARY]
[CONTENT] report national echocardiography | aortic stenosis timely | aortic stenosis | countries australia aortic | australians investigated echocardiography [SUMMARY]
null
[CONTENT] report national echocardiography | aortic stenosis timely | aortic stenosis | countries australia aortic | australians investigated echocardiography [SUMMARY]
[CONTENT] report national echocardiography | aortic stenosis timely | aortic stenosis | countries australia aortic | australians investigated echocardiography [SUMMARY]
[CONTENT] report national echocardiography | aortic stenosis timely | aortic stenosis | countries australia aortic | australians investigated echocardiography [SUMMARY]
[CONTENT] report national echocardiography | aortic stenosis timely | aortic stenosis | countries australia aortic | australians investigated echocardiography [SUMMARY]
[CONTENT] severe | years | cases | australia | population | data | aged | applied | savr | estimates [SUMMARY]
null
[CONTENT] severe | years | cases | australia | population | data | aged | applied | savr | estimates [SUMMARY]
[CONTENT] severe | years | cases | australia | population | data | aged | applied | savr | estimates [SUMMARY]
[CONTENT] severe | years | cases | australia | population | data | aged | applied | savr | estimates [SUMMARY]
[CONTENT] severe | years | cases | australia | population | data | aged | applied | savr | estimates [SUMMARY]
[CONTENT] estimates | overall | study | reliable | australian context | severe | context | 12 | data | prevalence [SUMMARY]
null
[CONTENT] severe | 95 ci | 95 | ci | fig | 300 | cases | 600 | 000 | years [SUMMARY]
[CONTENT] active | current | likely | number | clinical | 000 | risk mortality unclear | awareness pro active referral | awareness pro active | awareness pro [SUMMARY]
[CONTENT] severe | years | cases | australia | data | population | aged | estimates | applied | prevalence [SUMMARY]
[CONTENT] severe | years | cases | australia | data | population | aged | estimates | applied | prevalence [SUMMARY]
[CONTENT] Australia [SUMMARY]
null
[CONTENT] 1.48 % | ≥ | 55 years | 97,000 | Australians ||| 2-fold | AVR | more than half | ∼56,000 | 5-years ||| 68.3 % | 66,500 | 95 % | 59,000-74,000 | more than half | 58.4 % | 38,800 | 95 % | CI | 35,700 | 42,000 | 2,400 | 5,400 | 31,000 ||| 17,000/27,700 | 41.6 % ||| 95 % | CI | 11,600 | 22,600 | AVR ||| 5-year | 2020-2024 | each year | Australians | ≥ | 60 years | 6,300 ||| an estimated 3,700 | 1,600 | SAVR | TAVR [SUMMARY]
[CONTENT] Australia ||| SAVR [SUMMARY]
[CONTENT] Australia ||| Europe | North America | 2019 | Australian | ≥ | 55 years | 7 million ||| Australian ||| 1.48 % | ≥ | 55 years | 97,000 | Australians ||| 2-fold | AVR | more than half | ∼56,000 | 5-years ||| 68.3 % | 66,500 | 95 % | 59,000-74,000 | more than half | 58.4 % | 38,800 | 95 % | CI | 35,700 | 42,000 | 2,400 | 5,400 | 31,000 ||| 17,000/27,700 | 41.6 % ||| 95 % | CI | 11,600 | 22,600 | AVR ||| 5-year | 2020-2024 | each year | Australians | ≥ | 60 years | 6,300 ||| an estimated 3,700 | 1,600 | SAVR | TAVR ||| Australia ||| SAVR [SUMMARY]
[CONTENT] Australia ||| Europe | North America | 2019 | Australian | ≥ | 55 years | 7 million ||| Australian ||| 1.48 % | ≥ | 55 years | 97,000 | Australians ||| 2-fold | AVR | more than half | ∼56,000 | 5-years ||| 68.3 % | 66,500 | 95 % | 59,000-74,000 | more than half | 58.4 % | 38,800 | 95 % | CI | 35,700 | 42,000 | 2,400 | 5,400 | 31,000 ||| 17,000/27,700 | 41.6 % ||| 95 % | CI | 11,600 | 22,600 | AVR ||| 5-year | 2020-2024 | each year | Australians | ≥ | 60 years | 6,300 ||| an estimated 3,700 | 1,600 | SAVR | TAVR ||| Australia ||| SAVR [SUMMARY]
Using CF11 cellulose columns to inexpensively and effectively remove human DNA from Plasmodium falciparum-infected whole blood samples.
22321373
Genome and transcriptome studies of Plasmodium nucleic acids obtained from parasitized whole blood are greatly improved by depletion of human DNA or enrichment of parasite DNA prior to next-generation sequencing and microarray hybridization. The most effective method currently used is a two-step procedure to deplete leukocytes: centrifugation using density gradient media followed by filtration through expensive, commercially available columns. This method is not easily implemented in field studies that collect hundreds of samples and simultaneously process samples for multiple laboratory analyses. Inexpensive syringes, hand-packed with CF11 cellulose powder, were recently shown to improve ex vivo cultivation of Plasmodium vivax obtained from parasitized whole blood. This study was undertaken to determine whether CF11 columns could be adapted to isolate Plasmodium falciparum DNA from parasitized whole blood and achieve current quantity and purity requirements for Illumina sequencing.
BACKGROUND
The CF11 procedure was compared with the current two-step standard of leukocyte depletion using parasitized red blood cells cultured in vitro and parasitized blood obtained ex vivo from Cambodian patients with malaria. Procedural variations in centrifugation and column size were tested, along with a range of blood volumes and parasite densities.
METHODS
CF11 filtration reliably produces 500 nanograms of DNA with less than 50% human DNA contamination, which is comparable to that obtained by the two-step method and falls within the current quality control requirements for Illumina sequencing. In addition, a centrifuge-free version of the CF11 filtration method to isolate P. falciparum DNA at remote and minimally equipped field sites in malaria-endemic areas was validated.
RESULTS
CF11 filtration is a cost-effective, scalable, one-step approach to remove human DNA from P. falciparum-infected whole blood samples.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Blood", "Cambodia", "Child", "Child, Preschool", "Chromatography", "DNA, Protozoan", "Humans", "Infant", "Malaria", "Middle Aged", "Parasitology", "Plasmodium falciparum", "Sensitivity and Specificity", "Specimen Handling", "Young Adult" ]
3295709
Background
Recent technological advances have enabled next-generation genomic and transcriptomic analysis of Plasmodium falciparum from parasitized whole blood samples without the need for culturing. High-density genotyping of parasites obtained directly from patients with malaria has improved our understanding of population structure and genomic and transcriptional variation [1,2]. Highly parallel sequencing is currently being used to identify the genetic determinants of clinically relevant phenotypes including drug resistance, vaccine escape and disease severity. Importantly, genomic characterization of parasite populations in patients captures intra-host diversity and prevents the potential loss of sequence data from phenotype-conferring parasite isolates during their culture adaptation. The performance of highly parallel sequencing platforms, such as Illumina, is greatly enhanced in sample preparations with a high parasite-to-human nucleic acid ratio [3]. Such high ratios can be achieved by either selectively capturing parasite nucleic acids or by removing human material from the sample. Hybrid selection [4] using RNA 'baits' complementary to the P. falciparum genome can achieve over 40-fold enrichment of parasite DNA and can be performed at any time following DNA extraction [5]; however, at $250 USD per sample, this technique may be prohibitively costly for epidemiological or population-level studies. The alternative, leukocyte depletion, must be performed in field sites soon after blood collection and before transport and storage. Commercially available magnetic columns have been used to rapidly isolate parasitized red blood cells (RBCs) from uninfected RBCs and leukocytes [6]. However, magnetic purification depends on short-term culturing of patient blood to transform ring-stage parasites to haemozoin-rich trophozoites and schizonts, which requires equipment for in vitro parasite cultivation. Furthermore, parasites obtained directly from patients mature at different rates in culture, resulting in inconvenient time-points for purification. The current standard for leukocyte depletion of parasitized blood for P. falciparum nucleotide sequencing is a two-step process: centrifugation using Lymphoprep or another sucrose density gradient solution, followed by filtration using Plasmodipur filters [3]. This 'LP' method is effective but difficult to scale-up in field settings because it requires extensive handling and transfer of blood, training to perform sensitive steps, and costly commercial reagents and materials. Filtration with hand-made columns has been used as an inexpensive and less time-consuming alternative for leukocyte depletion of Plasmodium-infected blood for decades [7,8]. Recently, non-woven fabric filters [9] and plastic syringes filled with CF11 cellulose powder (Whatman) [10] were shown to effectively remove leukocytes and platelets from Plasmodium vivax-infected blood, where their phagocytic and degranulating activities may interfere with some ex vivo studies. CF11 filtration has also been used to improve microarray-based transcriptome analysis of P. vivax-infected blood [2,11]. CF11 cellulose is thought to work by trapping leukocytes by size exclusion and/or interactions between cellulose hydroxyl groups and leukocyte surface molecules. In this study, laboratory-adapted P. falciparum clones were used to adapt CF11 columns to remove human DNA from P. 
falciparum-infected blood for genomic studies, compare CF11 filtration to the established LP method, and validate a centrifuge-free option for CF11 filtration. Both methods were also compared in a field setting using P. falciparum isolates obtained directly from patients with malaria. Furthermore, routine CF11 filtration of parasitized blood collected from patients in a second field site with minimal facilities was successfully implemented.
Methods
Blood samples: Experiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration. Experiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl). These samples were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November to December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia. Blood samples were obtained from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities ≥ 10,000/μl. These samples were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage. All blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant as it is thought to inhibit Taq polymerase during DNA amplification. Blood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute for Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians.
Leukocyte depletion: Lymphoprep (Axis-shield, Oslo, Norway) + Plasmodipur (Euro-diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood/parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through the column. The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12]. Packed CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B).
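As a quick reference, the CF11 column geometries reported in this study (the standard column above, plus the reduced-size variants tested in the experimental design that follows) can be summarised as a simple data structure. This is a descriptive summary of the configurations actually used here, not a validated scaling rule; the loading volumes simply reflect the 1:1 dilution of blood with PBS described above.

```python
# Quick-reference summary of the CF11 column geometries used in this study, taken from
# the leukocyte depletion and experimental design descriptions. These record what was
# done here; they are not a general scaling rule for other blood volumes.

CF11_COLUMN_CONFIGS = [
    {"blood_ml": 3.0, "loose_fill_ml": 10.0, "packed_ml": 5.5, "note": "standard column"},
    {"blood_ml": 3.0, "loose_fill_ml": 8.0,  "packed_ml": 4.0, "note": "shortened-column test"},
    {"blood_ml": 1.5, "loose_fill_ml": 5.0,  "packed_ml": 2.5, "note": "low-volume test"},
]

PBS_WASH_ML = 5  # post-load wash volume used for every column

for cfg in CF11_COLUMN_CONFIGS:
    # Blood is diluted with an equal volume of PBS before loading, so the load volume
    # is roughly twice the blood volume.
    load_ml = 2 * cfg["blood_ml"]
    print(f"{cfg['note']}: {cfg['blood_ml']:.1f} ml blood -> load ~{load_ml:.0f} ml diluted blood "
          f"onto a column packed to {cfg['packed_ml']} ml CF11, then wash with {PBS_WASH_ML} ml PBS")
```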
Experimental design
CF11 versus LP filtration, without modifications: To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.
CF11 filtration, with procedural modifications: To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs to pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.
Modifications in blood volume and parasite density: To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.
Routine blood collection for further validation and sequencing: To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.
DNA quantification: DNA was extracted from leukocyte-depleted and unfiltered control samples using Qiamp DNA Blood Midi Kits (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. Total DNA from experiments using P. falciparum clones was estimated with a SpectraMax M2 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA) using the Quant-IT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). The standard curve ranged from 0.20 to 50 ng/μl. Total DNA from experiments using P. falciparum isolates was estimated with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) using both the dsDNA HS and BR Assay kits. Quantitative real-time PCR (qPCR) was used to measure the relative amounts of human and parasite DNA in each sample. Primers targeting the human LRAP (F: 5'-ACGTTGGATGAATTTTCCACTGGATTCCAT-3'; R: 5'-ACGTTGGATGTGAACCATGCTCCTTGCATC-3') and TLR9 (F: 5'-ACGTTGGATGCAAAGGGCTGGCTGTTGTAG-3'; R: 5'-ACGTTGGATGTCTACCACGAGCACTCATTC-3') genes and the P. falciparum AMA-1 gene (F: 5'-ACGTTGGATGGATTCTCTTTCGATTTCTTTC-3'; R: 5'-ACGTTGGATGTGCTACTACTGCTTTGTCCC-3') were used to amplify DNA from samples, negative controls and standards in duplicate. Amplification occurred in an Applied Biosystems 7300 or StepOne real-time PCR machine. Human placental and P. falciparum (3D7 clone) genomic DNA standards ranged from 0.20 to 50 ng/μl. For each 25 μl reaction, 12.5 μl SYBR Green qPCR Master Mix (Applied Biosystems, Foster City, CA, USA) was mixed with 1.5 μl of each forward and reverse primers at 2 μM concentrations, 7.5-8.5 μl water, and 1-2 μl of template DNA. Reaction conditions were previously reported [3]: an initial denaturing step of 95°C for 10 minutes, five cycles of 94°C for 45 seconds, 56°C for 45 seconds, 72°C for 45 seconds, followed by 30 cycles of 94°C for 45 seconds, 65°C for 45 seconds and 72°C for 45 seconds and one dissociation cycle (conditions vary by real-time PCR instrument). Results were analysed using Applied Biosystems 7300 System SDS or StepOne v2.0 software. qPCR quantification of AMA-1 and TLR9 or LRAP was used to determine the proportions of P. falciparum and human DNA in the sample. Total DNA was estimated by multiplying the concentration given by Spectramax or Qubit by the extracted sample volume.
Statistical comparisons: Means, variance, standard errors and statistical comparisons of means were calculated using Stata 11 software. Boxplots indicating medians, interquartile ranges and outliers were also generated using Stata 11.
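To make the downstream calculation concrete, the sketch below shows how the fluorometer concentration, extraction volume and qPCR-derived human (TLR9/LRAP) and parasite (AMA-1) quantities are combined into a total DNA yield and a per-cent human DNA figure, and checked against the thresholds cited in this study for Illumina sequencing (at least 500 ng of DNA with less than 50 % human contamination). The helper function and the input readings are hypothetical illustrations, not part of the published analysis pipeline.

```python
# Minimal sketch of the quantification described above: a fluorometer (PicoGreen/Qubit)
# gives total dsDNA concentration, qPCR of AMA-1 (parasite) and TLR9 or LRAP (human)
# gives the relative make-up, and the two are combined to judge whether a
# leukocyte-depleted sample meets the quality thresholds cited in this study.
# All input numbers below are hypothetical, for illustration only.

def assess_sample(fluorometer_ng_per_ul: float,
                  extracted_volume_ul: float,
                  qpcr_human_ng_per_ul: float,
                  qpcr_parasite_ng_per_ul: float) -> dict:
    """Combine fluorometer and qPCR readings into total yield and % human DNA."""
    total_dna_ng = fluorometer_ng_per_ul * extracted_volume_ul
    human_fraction = qpcr_human_ng_per_ul / (qpcr_human_ng_per_ul + qpcr_parasite_ng_per_ul)
    return {
        "total_dna_ng": total_dna_ng,
        "percent_human": 100 * human_fraction,
        # Thresholds cited in this study for Illumina sequencing: >= 500 ng, < 50 % human
        "passes_qc": total_dna_ng >= 500 and human_fraction < 0.5,
    }

# Hypothetical CF11-filtered sample: 5 ng/ul over a 200 ul eluate, with qPCR suggesting
# roughly twice as much parasite as human DNA.
print(assess_sample(fluorometer_ng_per_ul=5.0, extracted_volume_ul=200.0,
                    qpcr_human_ng_per_ul=1.2, qpcr_parasite_ng_per_ul=2.6))
```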
null
null
Conclusions
MV, CA, SC, OK, PL, SA and SU designed the study and collected the data. MV, CA and SC analysed the data. DS, DPK, RMF and CVP provided guidance and coordination for study design, data collection and analysis. MV wrote the first draft of the manuscript. RMF, CA, SC and CVP edited and revised the manuscript. All authors read and approved the final manuscript.
[ "Background", "Blood samples", "Leukocyte depletion", "Experimental design", "CF11 versus LP filtration, without modifications", "Experimental design", "CF11 versus LP filtration, without modifications", "CF11 filtration, with procedural modifications", "Modifications in blood volume and parasite density", "Routine blood collection for further validation and sequencing", "DNA quantification", "Statistical comparisons", "Results", "Comparison of unmodified LP and CF11 filtration for leukocyte depletion", "Procedural modifications of CF11 filtration method", "Blood volume and parasite density", "Validation of routine blood collection for illumina sequencing", "Discussion", "Conclusions" ]
[ "Recent technological advances have enabled next-generation genomic and transcriptomic analysis of Plasmodium falciparum from parasitized whole blood samples without the need for culturing. High-density genotyping of parasites obtained directly from patients with malaria has improved our understanding of population structure and genomic and transcriptional variation [1,2]. Highly parallel sequencing is currently being used to identify the genetic determinants of clinically relevant phenotypes including drug resistance, vaccine escape and disease severity. Importantly, genomic characterization of parasite populations in patients captures intra-host diversity and prevents the potential loss of sequence data from phenotype-conferring parasite isolates during their culture adaptation.\nThe performance of highly parallel sequencing platforms, such as Illumina, is greatly enhanced in sample preparations with a high parasite-to-human nucleic acid ratio [3]. Such high ratios can be achieved by either selectively capturing parasite nucleic acids or by removing human material from the sample. Hybrid selection [4] using RNA 'baits' complementary to the P. falciparum genome can achieve over 40-fold enrichment of parasite DNA and can be performed at any time following DNA extraction [5]; however, at $250 USD per sample, this technique may be prohibitively costly for epidemiological or population-level studies.\nThe alternative, leukocyte depletion, must be performed in field sites soon after blood collection and before transport and storage. Commercially available magnetic columns have been used to rapidly isolate parasitized red blood cells (RBCs) from uninfected RBCs and leukocytes [6]. However, magnetic purification depends on short-term culturing of patient blood to transform ring-stage parasites to haemozoin-rich trophozoites and schizonts, which requires equipment for in vitro parasite cultivation. Furthermore, parasites obtained directly from patients mature at different rates in culture, resulting in inconvenient time-points for purification. The current standard for leukocyte depletion of parasitized blood for P. falciparum nucleotide sequencing is a two-step process: centrifugation using Lymphoprep or another sucrose density gradient solution, followed by filtration using Plasmodipur filters [3]. This 'LP' method is effective but difficult to scale-up in field settings because it requires extensive handling and transfer of blood, training to perform sensitive steps, and costly commercial reagents and materials.\nFiltration with hand-made columns has been used as an inexpensive and less time-consuming alternative for leukocyte depletion of Plasmodium-infected blood for decades [7,8]. Recently, non-woven fabric filters [9] and plastic syringes filled with CF11 cellulose powder (Whatman) [10] were shown to effectively remove leukocytes and platelets from Plasmodium vivax-infected blood, where their phagocytic and degranulating activities may interfere with some ex vivo studies. CF11 filtration has also been used to improve microarray-based transcriptome analysis of P. vivax-infected blood [2,11]. CF11 cellulose is thought to work by trapping leukocytes by size exclusion and/or interactions between cellulose hydroxyl groups and leukocyte surface molecules. In this study, laboratory-adapted P. falciparum clones were used to adapt CF11 columns to remove human DNA from P. 
falciparum-infected blood for genomic studies, compare CF11 filtration to the established LP method, and validate a centrifuge-free option for CF11 filtration. Both methods were also compared in a field setting using P. falciparum isolates obtained directly from patients with malaria. Furthermore, routine CF11 filtration of parasitized blood collected from patients in a second field site with minimal facilities was successfully implemented.", "Experiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration.\nExperiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl). These samples were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November to December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia. Blood samples were obtained from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities ≥ 10,000/μl. These samples were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage.\nAll blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant as it thought to inhibit Taq polymerase during DNA amplification.\nBlood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute for Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians.", "Lymphoprep (Axis-shield, Oslo, Norway) + Plasmodipur (Euro-diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood/parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through the column. The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. 
RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12].\nPacked CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B).", " CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method\nTo compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method", "To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method", " CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.\nTo compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.\n CF11 filtration, with procedural modifications To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.\nTo test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. 
To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.\n Modifications in blood volume and parasite density To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.\nTo investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.\n Routine blood collection for further validation and sequencing To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.\nTo determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.", "To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.", "To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. 
To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.", "To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.", "To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.", "DNA was extracted from leukocyte-depleted and unfiltered control samples using Qiamp DNA Blood Midi Kits (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. Total DNA from experiments using P. falciparum clones was estimated with a SpectraMax M2 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA) using the Quant-IT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). The standard curve ranged from 0.20 to 50 ng/μl. Total DNA from experiments using P. falciparum isolates was estimated with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) using both the dsDNA HS and BR Assay kits.\nQuantitative real-time PCR (qPCR) was used to measure the relative amounts of human and parasite DNA in each sample. Primers targeting the human LRAP (F: 5'-ACGTTGGATGAATTTTCCACTGGATTCCAT-'3; R: 5'-ACGTTGGATGTGAACCATGCTCCTTGCATC-'3) and TLR9 (F: 5'-ACGTTGGATGCAAAGGGCTGGCTGTTGTAG-'3; R: 5'- ACGTTGGATGTCTACCACGAGCACTCATTC-'3) genes and the P. falciparum AMA-1 gene (F: 5'-ACGTTGGATGGATTCTCTTTCGATTTCTTTC-'3; R: 5'-ACGTTGGATGTGCTACTACTGCTTTGTCCC-'3) were used to amplify DNA from samples, negative controls and standards in duplicate. Amplification occurred in an Applied Biosystems 7300 or StepOne real-time PCR machine. Human placental and P. falciparum (3D7 clone) genomic DNA standards ranged from 0.20 to 50 ng/μl. For each 25 μl reaction, 12.5 μl SYBR Green qPCR Master Mix (Applied Biosystems, Foster City, CA, USA) was mixed with 1.5 μl of each forward and reverse primers at 2 μM concentrations, 7.5-8.5 μl water, and 1-2 μl of template DNA. Reaction conditions were previously reported [3]: an initial denaturing step of 95°C for 10 minutes, five cycles of 94°C for 45 seconds, 56°C for 45 seconds, 72°C for 45 seconds, followed by 30 cycles of 94°C for 45 seconds, 65°C for 45 seconds and 72°C for 45 seconds and one dissociation cycle (conditions vary by real-time PCR instrument). Results were analysed using Applied Biosystems 7300 System SDS or StepOne v2.0 software.\nqPCR quantification of AMA-1 and TLR9 or LRAP was used to determine the proportions of P. 
falciparum and human DNA in the sample. Total DNA was estimated by multiplying the concentration given by Spectramax or Qubit by the extracted sample volume.", "Means, variance, standard errors and statistical comparisons of means were calculated using Stata 11 software. Boxplots indicating medians, interquartile ranges and outliers were also generated using Stata 11.", " Comparison of unmodified LP and CF11 filtration for leukocyte depletion Filtration of blood-parasite mixtures using unmodified LP and CF11 methods in parallel successfully depleted human DNA to 1% and 6% of total DNA, respectively (Figure 2A), well below the current Illumina sequencing threshold of < 50%. CF11 was more effective than LP in recovering total DNA from blood-parasite mixtures (mean 1.11 μg vs. 0.37 μg; p = 0.03) (Figure 2B). All 15 CF11-filtered parasitized blood samples and 12 of 14 LP-filtered parasitized blood samples yielded < 50% human DNA contamination (Figure 3A). Human DNA contamination was lower for parasitized blood filtered by CF11 than by LP (mean 2.4% vs. 15.9%; p = 0.03) (Figure 3B). Recovery of total DNA was comparably high in both sets of samples (mean 4.64 μg for CF11 vs. 2.86 μg for LP; p = 0.46) One LP-filtered sample failed to produce a human DNA estimate by qPCR and was omitted from the analysis.\nPercent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of blood-parasite mixtures. The volume and parasite density of blood-parasite mixtures were 3 ml and 10,000/μl except where stated otherwise. Three replicates for each filtration method were performed. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column, CF11 O/N = CF11 filtrate held overnight prior to removal of supernatant, CF11 4 ml = filtration using a 4-ml CF11 column, CF11 low vol = filtration of a 1.5-ml blood-parasite mixture containing 10,000 parasites/μl, CF11 low para = filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl, NF = no filtration, NF low para = no filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl. Box plots show median and interquartile range.\nPercent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of Plasmodium falciparum-infected blood samples that vary in parasite density. Light gray shading indicates samples with < 50,000 parasites/μl, medium gray shading indicates samples with 50,000-100,000 parasites/μl, and dark gray shading indicates samples with parasite density > 100,000 parasites/μl. All sample volumes were 4 ml. n = sample size for each method within each range of parasite density. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column. Box plots show median and interquartile range. Whiskers span the range of data points within 1.5 times the interquartile range of the lower and upper quartiles. Outliers are shown as points.\nFiltration of blood-parasite mixtures using unmodified LP and CF11 methods in parallel successfully depleted human DNA to 1% and 6% of total DNA, respectively (Figure 2A), well below the current Illumina sequencing threshold of < 50%. CF11 was more effective than LP in recovering total DNA from blood-parasite mixtures (mean 1.11 μg vs. 0.37 μg; p = 0.03) (Figure 2B). 
All 15 CF11-filtered parasitized blood samples and 12 of 14 LP-filtered parasitized blood samples yielded < 50% human DNA contamination (Figure 3A). Human DNA contamination was lower for parasitized blood filtered by CF11 than by LP (mean 2.4% vs. 15.9%; p = 0.03) (Figure 3B). Recovery of total DNA was comparably high in both sets of samples (mean 4.64 μg for CF11 vs. 2.86 μg for LP; p = 0.46) One LP-filtered sample failed to produce a human DNA estimate by qPCR and was omitted from the analysis.\nPercent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of blood-parasite mixtures. The volume and parasite density of blood-parasite mixtures were 3 ml and 10,000/μl except where stated otherwise. Three replicates for each filtration method were performed. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column, CF11 O/N = CF11 filtrate held overnight prior to removal of supernatant, CF11 4 ml = filtration using a 4-ml CF11 column, CF11 low vol = filtration of a 1.5-ml blood-parasite mixture containing 10,000 parasites/μl, CF11 low para = filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl, NF = no filtration, NF low para = no filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl. Box plots show median and interquartile range.\nPercent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of Plasmodium falciparum-infected blood samples that vary in parasite density. Light gray shading indicates samples with < 50,000 parasites/μl, medium gray shading indicates samples with 50,000-100,000 parasites/μl, and dark gray shading indicates samples with parasite density > 100,000 parasites/μl. All sample volumes were 4 ml. n = sample size for each method within each range of parasite density. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column. Box plots show median and interquartile range. Whiskers span the range of data points within 1.5 times the interquartile range of the lower and upper quartiles. Outliers are shown as points.\n Procedural modifications of CF11 filtration method To explore whether CF11 filtrates could be processed without centrifugation, CF11-filtered blood-parasite mixtures were refrigerated overnight. The RBC pellet formed during overnight settling was not as tightly packed as that formed during centrifugation and required careful handling when removing supernatant to prevent disruption. This modification resulted in adequate human DNA depletion (mean contamination 11%) (Figure 2A) and total DNA recovery (mean 1.22 μg) (Figure 2B). Columns with less CF11 powder, packed to the 4-ml mark rather than to the 5.5-ml mark, also yielded high total DNA (mean 1.56 μg) and a sufficiently low proportion of human DNA (mean 30%) (Figure 2A and 2B).\nTo explore whether CF11 filtrates could be processed without centrifugation, CF11-filtered blood-parasite mixtures were refrigerated overnight. The RBC pellet formed during overnight settling was not as tightly packed as that formed during centrifugation and required careful handling when removing supernatant to prevent disruption. This modification resulted in adequate human DNA depletion (mean contamination 11%) (Figure 2A) and total DNA recovery (mean 1.22 μg) (Figure 2B). 
Columns with less CF11 powder, packed to the 4-ml mark rather than to the 5.5-ml mark, also yielded high total DNA (mean 1.56 μg) and a sufficiently low proportion of human DNA (mean 30%) (Figure 2A and 2B).\n Blood volume and parasite density All CF11-filtered blood-parasite mixtures (3 ml volume, 10,000/μl parasite density) consistently yielded samples with < 30% human DNA contamination and > 500 ng total DNA (Figure 2A and 2B). However, the ex vivo study of P. falciparum isolates with interesting phenotypes can be difficult if blood volumes are limited and parasite densities are low. To determine whether such limitations might significantly reduce the quality of DNA samples, CF11 filtrations were repeated using blood-parasite mixtures of lower volume (1.5 ml) or lower parasite densities (5,000/μl). CF11 filtration of low-volume samples (1.5 ml; 10,000 parasites/μl) produced acceptable human DNA contamination (mean 28%) and mean total DNA (0.51 μg), but individual yields of total DNA were highly variable, ranging from 0.15 to 1.09 μg (Figure 2A and 2B). CF11 filtration of low parasite density samples (3 ml; 5,000 parasites/μl) produced adequate amounts of total DNA (mean 2.68 μg) but unacceptable human DNA contamination (mean 77%) (Figure 2A and 2B). Thus, reducing sample volumes or parasite densities by half failed to reliably produce DNA samples acceptable for Illumina sequencing.\nFiltration of P. falciparum-infected blood obtained from patients with malaria (Pursat, Cambodia) using the CF11 and LP methods consistently achieved high-quality results at parasite densities ranging from 20,000 to 475,000/μl (Figure 3A and 3B). At parasite densities < 50,000/μl and 50,000-100,000/μl, both filtration methods depleted human DNA contamination to ≤ 30% and produced mean DNA yields > 1 μg. Samples with > 100,000 parasites/μl yielded several micrograms of DNA with < 5% human DNA contamination.\nAll CF11-filtered blood-parasite mixtures (3 ml volume, 10,000/μl parasite density) consistently yielded samples with < 30% human DNA contamination and > 500 ng total DNA (Figure 2A and 2B). However, the ex vivo study of P. falciparum isolates with interesting phenotypes can be difficult if blood volumes are limited and parasite densities are low. To determine whether such limitations might significantly reduce the quality of DNA samples, CF11 filtrations were repeated using blood-parasite mixtures of lower volume (1.5 ml) or lower parasite densities (5,000/μl). CF11 filtration of low-volume samples (1.5 ml; 10,000 parasites/μl) produced acceptable human DNA contamination (mean 28%) and mean total DNA (0.51 μg), but individual yields of total DNA were highly variable, ranging from 0.15 to 1.09 μg (Figure 2A and 2B). CF11 filtration of low parasite density samples (3 ml; 5,000 parasites/μl) produced adequate amounts of total DNA (mean 2.68 μg) but unacceptable human DNA contamination (mean 77%) (Figure 2A and 2B). Thus, reducing sample volumes or parasite densities by half failed to reliably produce DNA samples acceptable for Illumina sequencing.\nFiltration of P. falciparum-infected blood obtained from patients with malaria (Pursat, Cambodia) using the CF11 and LP methods consistently achieved high-quality results at parasite densities ranging from 20,000 to 475,000/μl (Figure 3A and 3B). At parasite densities < 50,000/μl and 50,000-100,000/μl, both filtration methods depleted human DNA contamination to ≤ 30% and produced mean DNA yields > 1 μg. 
Samples with > 100,000 parasites/μl yielded several micrograms of DNA with < 5% human DNA contamination.\n Validation of routine blood collection for illumina sequencing The 51 parasitized blood samples that were collected and CF11-filtered in Ratanakiri, Cambodia, ranged in parasite density from 15,000 to 290,000/μl and yielded a mean 0.64 μg total DNA with 34% human DNA contamination. All 51 samples met quality standards and have been successfully sequenced on the Illumina platform at the Wellcome Trust Sanger Institute (Hinxton, UK).\nThe 51 parasitized blood samples that were collected and CF11-filtered in Ratanakiri, Cambodia, ranged in parasite density from 15,000 to 290,000/μl and yielded a mean 0.64 μg total DNA with 34% human DNA contamination. All 51 samples met quality standards and have been successfully sequenced on the Illumina platform at the Wellcome Trust Sanger Institute (Hinxton, UK).", "Filtration of blood-parasite mixtures using unmodified LP and CF11 methods in parallel successfully depleted human DNA to 1% and 6% of total DNA, respectively (Figure 2A), well below the current Illumina sequencing threshold of < 50%. CF11 was more effective than LP in recovering total DNA from blood-parasite mixtures (mean 1.11 μg vs. 0.37 μg; p = 0.03) (Figure 2B). All 15 CF11-filtered parasitized blood samples and 12 of 14 LP-filtered parasitized blood samples yielded < 50% human DNA contamination (Figure 3A). Human DNA contamination was lower for parasitized blood filtered by CF11 than by LP (mean 2.4% vs. 15.9%; p = 0.03) (Figure 3B). Recovery of total DNA was comparably high in both sets of samples (mean 4.64 μg for CF11 vs. 2.86 μg for LP; p = 0.46) One LP-filtered sample failed to produce a human DNA estimate by qPCR and was omitted from the analysis.\nPercent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of blood-parasite mixtures. The volume and parasite density of blood-parasite mixtures were 3 ml and 10,000/μl except where stated otherwise. Three replicates for each filtration method were performed. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column, CF11 O/N = CF11 filtrate held overnight prior to removal of supernatant, CF11 4 ml = filtration using a 4-ml CF11 column, CF11 low vol = filtration of a 1.5-ml blood-parasite mixture containing 10,000 parasites/μl, CF11 low para = filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl, NF = no filtration, NF low para = no filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl. Box plots show median and interquartile range.\nPercent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of Plasmodium falciparum-infected blood samples that vary in parasite density. Light gray shading indicates samples with < 50,000 parasites/μl, medium gray shading indicates samples with 50,000-100,000 parasites/μl, and dark gray shading indicates samples with parasite density > 100,000 parasites/μl. All sample volumes were 4 ml. n = sample size for each method within each range of parasite density. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column. Box plots show median and interquartile range. 
Whiskers span the range of data points within 1.5 times the interquartile range of the lower and upper quartiles. Outliers are shown as points.", "To explore whether CF11 filtrates could be processed without centrifugation, CF11-filtered blood-parasite mixtures were refrigerated overnight. The RBC pellet formed during overnight settling was not as tightly packed as that formed during centrifugation and required careful handling when removing supernatant to prevent disruption. This modification resulted in adequate human DNA depletion (mean contamination 11%) (Figure 2A) and total DNA recovery (mean 1.22 μg) (Figure 2B). Columns with less CF11 powder, packed to the 4-ml mark rather than to the 5.5-ml mark, also yielded high total DNA (mean 1.56 μg) and a sufficiently low proportion of human DNA (mean 30%) (Figure 2A and 2B).", "All CF11-filtered blood-parasite mixtures (3 ml volume, 10,000/μl parasite density) consistently yielded samples with < 30% human DNA contamination and > 500 ng total DNA (Figure 2A and 2B). However, the ex vivo study of P. falciparum isolates with interesting phenotypes can be difficult if blood volumes are limited and parasite densities are low. To determine whether such limitations might significantly reduce the quality of DNA samples, CF11 filtrations were repeated using blood-parasite mixtures of lower volume (1.5 ml) or lower parasite densities (5,000/μl). CF11 filtration of low-volume samples (1.5 ml; 10,000 parasites/μl) produced acceptable human DNA contamination (mean 28%) and mean total DNA (0.51 μg), but individual yields of total DNA were highly variable, ranging from 0.15 to 1.09 μg (Figure 2A and 2B). CF11 filtration of low parasite density samples (3 ml; 5,000 parasites/μl) produced adequate amounts of total DNA (mean 2.68 μg) but unacceptable human DNA contamination (mean 77%) (Figure 2A and 2B). Thus, reducing sample volumes or parasite densities by half failed to reliably produce DNA samples acceptable for Illumina sequencing.\nFiltration of P. falciparum-infected blood obtained from patients with malaria (Pursat, Cambodia) using the CF11 and LP methods consistently achieved high-quality results at parasite densities ranging from 20,000 to 475,000/μl (Figure 3A and 3B). At parasite densities < 50,000/μl and 50,000-100,000/μl, both filtration methods depleted human DNA contamination to ≤ 30% and produced mean DNA yields > 1 μg. Samples with > 100,000 parasites/μl yielded several micrograms of DNA with < 5% human DNA contamination.", "The 51 parasitized blood samples that were collected and CF11-filtered in Ratanakiri, Cambodia, ranged in parasite density from 15,000 to 290,000/μl and yielded a mean 0.64 μg total DNA with 34% human DNA contamination. All 51 samples met quality standards and have been successfully sequenced on the Illumina platform at the Wellcome Trust Sanger Institute (Hinxton, UK).", "As next-generation sequencing technologies improve, both costs and stringency of DNA sample quality requirements continue to fall. Between 2009 and 2011, requirements for Illumina sequencing of P. falciparum DNA by the Wellcome Trust Sanger Institute have relaxed from 30% to 50% human DNA contamination. Moreover, the possibility of sequencing multiple DNA samples in a single lane of a flow cell ('multiplex sequencing') has decreased costs immensely while yielding ~2 GB of P. falciparum genomic sequence data per sample, with further improvements anticipated in coming years. 
To take advantage of these increasingly accessible tools, researchers need scalable procedures for DNA sample preparation in minimally equipped field sites.\nThis study shows that CF11 columns can be used to effectively deplete human DNA from parasitized blood and achieve Illumina sequencing requirements for total DNA yield and percent human DNA contamination, using sample volumes and parasite densities comparable to those used in LP filtration experiments [3]. A major advantage of CF11 over LP filtration is cost-effectiveness. CF11 columns cost approximately one USD each, compared to 10-50 USD for the Plasmodipur filter alone. Additionally, CF11 filtration is a very simple one-step procedure that does not require specialized equipment. A centrifugation step is only used to pellet RBCs after CF11 filtration and can be replaced with a convenient overnight period of refrigeration to allow the RBCs to pellet by gravity. These advantages should enable a large share of the malaria research community to produce parasite DNA samples appropriate for highly parallel sequencing and microarray technologies.\nBecause CF11 columns are hand-made rather than commercially produced, variation in the length of the column is likely to occur. Samples filtered in columns packed with less CF11 powder exhibited higher human DNA contamination but still met the threshold level for sequencing, indicating that some variation in column preparation can be tolerated. Samples with low blood volumes (1.5 ml) or parasite densities (5,000/μl) did not consistently achieve sufficient human DNA depletion or total DNA recovery when filtered with either the CF11 or LP method. Whole-genome amplification (WGA) after DNA isolation [13] could boost the amount of starting material in cases where human DNA removal is satisfactory but total DNA yield is low. WGA may therefore be useful in enabling genomic characterization of CF11-filtered blood samples with low starting volume, such as those collected from very young children and patients with severe malarial anaemia. It may also be useful in processing blood samples with low parasite densities, as is often the case in patients with a recrudescent parasitaemia after anti-malarial treatment and those with high levels of naturally acquired immunity.\nAlthough specific parameters have not been rigorously tested, observations in the field have indicated that CF11 and parasitized blood can be effectively stored and used in a variety of environmental conditions. CF11 kept in an airtight container with regularly replaced desiccant and stored in a cool place can be used for at least three months, even in climates with high heat and humidity (unpublished observation). EDTA-treated blood stored at 4°C for up to 24 hours prior to CF11 filtration was successfully processed for Illumina DNA sequencing (unpublished observation), but for RNA analysis, no more than six hours of refrigeration is suggested (ZB Bozdech, personal communication). Updates to the range of storage conditions will be made to the WWARN protocol [12] as more information becomes available.", "CF11 filtration currently offers the best cost-effective, one-step approach to remove human DNA from P. falciparum-infected whole blood samples and can be successfully implemented in large genome-wide sequencing studies of P. falciparum isolates from patients with malaria." ]
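The Results and Discussion above repeatedly apply two acceptance rules stated in the article: total DNA is estimated as the fluorometer concentration multiplied by the extracted sample volume, the human fraction comes from qPCR of a human target (TLR9 or LRAP) versus P. falciparum AMA-1, and a sample qualifies for Illumina sequencing when it has < 50% human DNA and > 500 ng total DNA. The short Python sketch below only restates that bookkeeping; the function names and the example numbers are illustrative and do not come from the article.

```python
# Illustrative sketch of the DNA-quantification bookkeeping described in the
# Methods/Results. Thresholds (<50% human DNA, >500 ng total DNA) are the
# Illumina criteria quoted in the text; all names and numbers are hypothetical.

def total_dna_ng(concentration_ng_per_ul: float, extracted_volume_ul: float) -> float:
    """Total DNA = fluorometer concentration x extracted sample volume (per the Methods)."""
    return concentration_ng_per_ul * extracted_volume_ul

def percent_human_dna(human_qpcr_ng: float, parasite_qpcr_ng: float) -> float:
    """Human fraction from qPCR of a human target (TLR9/LRAP) vs. P. falciparum AMA-1."""
    return 100.0 * human_qpcr_ng / (human_qpcr_ng + parasite_qpcr_ng)

def passes_illumina_criteria(pct_human: float, total_ng: float,
                             max_pct_human: float = 50.0, min_total_ng: float = 500.0) -> bool:
    """Acceptance rule used throughout the Results: <50% human DNA and >500 ng total DNA."""
    return pct_human < max_pct_human and total_ng > min_total_ng

if __name__ == "__main__":
    # Made-up values in the range of the reported means (~1.11 ug total, ~6% human).
    total = total_dna_ng(concentration_ng_per_ul=11.1, extracted_volume_ul=100.0)
    pct = percent_human_dna(human_qpcr_ng=0.6, parasite_qpcr_ng=9.4)
    print(total, pct, passes_illumina_criteria(pct, total))
```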
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
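The Methods quantify human and parasite DNA against genomic DNA standards spanning 0.20 to 50 ng/μl, with runs analysed in the Applied Biosystems SDS/StepOne software rather than custom code. For readers reproducing the quantification outside that software, the sketch below shows the usual log-linear standard-curve interpolation; this is an assumption about standard practice, not the authors' analysis, and the Ct values are invented for illustration.

```python
# Sketch of a qPCR standard curve (standard log-linear fit; an assumption, not
# the article's own analysis). Standards span 0.20-50 ng/ul as in the Methods.
import numpy as np

standard_conc_ng_ul = np.array([0.2, 0.8, 3.2, 12.5, 50.0])
standard_ct         = np.array([33.1, 31.0, 28.9, 26.8, 24.7])  # hypothetical Cts

# Fit Ct = slope * log10(conc) + intercept, then invert for unknown samples.
slope, intercept = np.polyfit(np.log10(standard_conc_ng_ul), standard_ct, deg=1)

def ct_to_conc(ct: float) -> float:
    """Interpolate an unknown's concentration (ng/ul) from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

print(ct_to_conc(27.5))  # e.g. a human-target Ct from a filtered sample
```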
[ "Background", "Methods", "Blood samples", "Leukocyte depletion", "Experimental design", "CF11 versus LP filtration, without modifications", "Experimental design", "CF11 versus LP filtration, without modifications", "CF11 filtration, with procedural modifications", "Modifications in blood volume and parasite density", "Routine blood collection for further validation and sequencing", "DNA quantification", "Statistical comparisons", "Results", "Comparison of unmodified LP and CF11 filtration for leukocyte depletion", "Procedural modifications of CF11 filtration method", "Blood volume and parasite density", "Validation of routine blood collection for illumina sequencing", "Discussion", "Conclusions" ]
[ "Recent technological advances have enabled next-generation genomic and transcriptomic analysis of Plasmodium falciparum from parasitized whole blood samples without the need for culturing. High-density genotyping of parasites obtained directly from patients with malaria has improved our understanding of population structure and genomic and transcriptional variation [1,2]. Highly parallel sequencing is currently being used to identify the genetic determinants of clinically relevant phenotypes including drug resistance, vaccine escape and disease severity. Importantly, genomic characterization of parasite populations in patients captures intra-host diversity and prevents the potential loss of sequence data from phenotype-conferring parasite isolates during their culture adaptation.\nThe performance of highly parallel sequencing platforms, such as Illumina, is greatly enhanced in sample preparations with a high parasite-to-human nucleic acid ratio [3]. Such high ratios can be achieved by either selectively capturing parasite nucleic acids or by removing human material from the sample. Hybrid selection [4] using RNA 'baits' complementary to the P. falciparum genome can achieve over 40-fold enrichment of parasite DNA and can be performed at any time following DNA extraction [5]; however, at $250 USD per sample, this technique may be prohibitively costly for epidemiological or population-level studies.\nThe alternative, leukocyte depletion, must be performed in field sites soon after blood collection and before transport and storage. Commercially available magnetic columns have been used to rapidly isolate parasitized red blood cells (RBCs) from uninfected RBCs and leukocytes [6]. However, magnetic purification depends on short-term culturing of patient blood to transform ring-stage parasites to haemozoin-rich trophozoites and schizonts, which requires equipment for in vitro parasite cultivation. Furthermore, parasites obtained directly from patients mature at different rates in culture, resulting in inconvenient time-points for purification. The current standard for leukocyte depletion of parasitized blood for P. falciparum nucleotide sequencing is a two-step process: centrifugation using Lymphoprep or another sucrose density gradient solution, followed by filtration using Plasmodipur filters [3]. This 'LP' method is effective but difficult to scale-up in field settings because it requires extensive handling and transfer of blood, training to perform sensitive steps, and costly commercial reagents and materials.\nFiltration with hand-made columns has been used as an inexpensive and less time-consuming alternative for leukocyte depletion of Plasmodium-infected blood for decades [7,8]. Recently, non-woven fabric filters [9] and plastic syringes filled with CF11 cellulose powder (Whatman) [10] were shown to effectively remove leukocytes and platelets from Plasmodium vivax-infected blood, where their phagocytic and degranulating activities may interfere with some ex vivo studies. CF11 filtration has also been used to improve microarray-based transcriptome analysis of P. vivax-infected blood [2,11]. CF11 cellulose is thought to work by trapping leukocytes by size exclusion and/or interactions between cellulose hydroxyl groups and leukocyte surface molecules. In this study, laboratory-adapted P. falciparum clones were used to adapt CF11 columns to remove human DNA from P. 
falciparum-infected blood for genomic studies, compare CF11 filtration to the established LP method, and validate a centrifuge-free option for CF11 filtration. Both methods were also compared in a field setting using P. falciparum isolates obtained directly from patients with malaria. Furthermore, routine CF11 filtration of parasitized blood collected from patients in a second field site with minimal facilities was successfully implemented.", " Blood samples Experiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration.\nExperiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl). These samples were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November to December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia. Blood samples were obtained from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities ≥ 10,000/μl. These samples were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage.\nAll blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant as it thought to inhibit Taq polymerase during DNA amplification.\nBlood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute for Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians.\nExperiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration.\nExperiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl). These samples were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November to December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia. Blood samples were obtained from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities ≥ 10,000/μl. 
These samples were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage.\nAll blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant as it thought to inhibit Taq polymerase during DNA amplification.\nBlood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute for Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians.\n Leukocyte depletion Lymphoprep (Axis-shield, Oslo, Norway) + Plasmodipur (Euro-diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood/parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through the column. The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12].\nPacked CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B).\nLymphoprep (Axis-shield, Oslo, Norway) + Plasmodipur (Euro-diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood/parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through the column. 
The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12].\nPacked CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B).\n Experimental design CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method\nTo compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method\n CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method\nTo compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method\n Experimental design CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.\nTo compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.\n CF11 filtration, with procedural modifications To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.\nTo test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. 
To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.\n Modifications in blood volume and parasite density To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.\nTo investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.\n Routine blood collection for further validation and sequencing To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.\nTo determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.\n CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.\nTo compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. 
Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method.\n CF11 filtration, with procedural modifications To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.\nTo test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.\n Modifications in blood volume and parasite density To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.\nTo investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl.\n Routine blood collection for further validation and sequencing To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. 
Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.\nTo determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.\n DNA quantification DNA was extracted from leukocyte-depleted and unfiltered control samples using Qiamp DNA Blood Midi Kits (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. Total DNA from experiments using P. falciparum clones was estimated with a SpectraMax M2 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA) using the Quant-IT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). The standard curve ranged from 0.20 to 50 ng/μl. Total DNA from experiments using P. falciparum isolates was estimated with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) using both the dsDNA HS and BR Assay kits.\nQuantitative real-time PCR (qPCR) was used to measure the relative amounts of human and parasite DNA in each sample. Primers targeting the human LRAP (F: 5'-ACGTTGGATGAATTTTCCACTGGATTCCAT-'3; R: 5'-ACGTTGGATGTGAACCATGCTCCTTGCATC-'3) and TLR9 (F: 5'-ACGTTGGATGCAAAGGGCTGGCTGTTGTAG-'3; R: 5'- ACGTTGGATGTCTACCACGAGCACTCATTC-'3) genes and the P. falciparum AMA-1 gene (F: 5'-ACGTTGGATGGATTCTCTTTCGATTTCTTTC-'3; R: 5'-ACGTTGGATGTGCTACTACTGCTTTGTCCC-'3) were used to amplify DNA from samples, negative controls and standards in duplicate. Amplification occurred in an Applied Biosystems 7300 or StepOne real-time PCR machine. Human placental and P. falciparum (3D7 clone) genomic DNA standards ranged from 0.20 to 50 ng/μl. For each 25 μl reaction, 12.5 μl SYBR Green qPCR Master Mix (Applied Biosystems, Foster City, CA, USA) was mixed with 1.5 μl of each forward and reverse primers at 2 μM concentrations, 7.5-8.5 μl water, and 1-2 μl of template DNA. Reaction conditions were previously reported [3]: an initial denaturing step of 95°C for 10 minutes, five cycles of 94°C for 45 seconds, 56°C for 45 seconds, 72°C for 45 seconds, followed by 30 cycles of 94°C for 45 seconds, 65°C for 45 seconds and 72°C for 45 seconds and one dissociation cycle (conditions vary by real-time PCR instrument). Results were analysed using Applied Biosystems 7300 System SDS or StepOne v2.0 software.\nqPCR quantification of AMA-1 and TLR9 or LRAP was used to determine the proportions of P. falciparum and human DNA in the sample. Total DNA was estimated by multiplying the concentration given by Spectramax or Qubit by the extracted sample volume.\nDNA was extracted from leukocyte-depleted and unfiltered control samples using Qiamp DNA Blood Midi Kits (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. Total DNA from experiments using P. falciparum clones was estimated with a SpectraMax M2 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA) using the Quant-IT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). The standard curve ranged from 0.20 to 50 ng/μl. Total DNA from experiments using P. 
falciparum isolates was estimated with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) using both the dsDNA HS and BR Assay kits.\nQuantitative real-time PCR (qPCR) was used to measure the relative amounts of human and parasite DNA in each sample. Primers targeting the human LRAP (F: 5'-ACGTTGGATGAATTTTCCACTGGATTCCAT-'3; R: 5'-ACGTTGGATGTGAACCATGCTCCTTGCATC-'3) and TLR9 (F: 5'-ACGTTGGATGCAAAGGGCTGGCTGTTGTAG-'3; R: 5'- ACGTTGGATGTCTACCACGAGCACTCATTC-'3) genes and the P. falciparum AMA-1 gene (F: 5'-ACGTTGGATGGATTCTCTTTCGATTTCTTTC-'3; R: 5'-ACGTTGGATGTGCTACTACTGCTTTGTCCC-'3) were used to amplify DNA from samples, negative controls and standards in duplicate. Amplification occurred in an Applied Biosystems 7300 or StepOne real-time PCR machine. Human placental and P. falciparum (3D7 clone) genomic DNA standards ranged from 0.20 to 50 ng/μl. For each 25 μl reaction, 12.5 μl SYBR Green qPCR Master Mix (Applied Biosystems, Foster City, CA, USA) was mixed with 1.5 μl of each forward and reverse primers at 2 μM concentrations, 7.5-8.5 μl water, and 1-2 μl of template DNA. Reaction conditions were previously reported [3]: an initial denaturing step of 95°C for 10 minutes, five cycles of 94°C for 45 seconds, 56°C for 45 seconds, 72°C for 45 seconds, followed by 30 cycles of 94°C for 45 seconds, 65°C for 45 seconds and 72°C for 45 seconds and one dissociation cycle (conditions vary by real-time PCR instrument). Results were analysed using Applied Biosystems 7300 System SDS or StepOne v2.0 software.\nqPCR quantification of AMA-1 and TLR9 or LRAP was used to determine the proportions of P. falciparum and human DNA in the sample. Total DNA was estimated by multiplying the concentration given by Spectramax or Qubit by the extracted sample volume.\n Statistical comparisons Means, variance, standard errors and statistical comparisons of means were calculated using Stata 11 software. Boxplots indicating medians, interquartile ranges and outliers were also generated using Stata 11.\nMeans, variance, standard errors and statistical comparisons of means were calculated using Stata 11 software. Boxplots indicating medians, interquartile ranges and outliers were also generated using Stata 11.", "Experiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration.\nExperiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl). These samples were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November to December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia. Blood samples were obtained from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities ≥ 10,000/μl. These samples were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage.\nAll blood samples were collected in EDTA or CPDA-1 tubes. 
Blood samples

Experiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration.

Experiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl); they were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November and December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia, from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities (≥ 10,000/μl); they were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage.

All blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant because it is thought to inhibit Taq polymerase during DNA amplification.

Blood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute of Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians.

Leukocyte depletion

Lymphoprep (Axis-Shield, Oslo, Norway) + Plasmodipur (Euro-Diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate-buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood-parasite mixtures were diluted with an equal volume of PBS, added to the CF11 column, and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through. The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12].

Figure 1. Packed CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B).
Experimental design

CF11 versus LP filtration, without modifications

To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood were obtained from each of 15 patients with falciparum malaria and split into two 4-ml aliquots; each aliquot was processed by either the CF11 or the LP method.

CF11 filtration, with procedural modifications

To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow the RBCs to pellet by gravity; supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark.

Modifications in blood volume and parasite density

To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000 to 475,000/μl.

Routine blood collection for further validation and sequencing

To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from each of 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets.
Comparison of unmodified LP and CF11 filtration for leukocyte depletion

Filtration of blood-parasite mixtures using the unmodified LP and CF11 methods in parallel successfully depleted human DNA to 1% and 6% of total DNA, respectively (Figure 2A), well below the current Illumina sequencing threshold of < 50%. CF11 was more effective than LP in recovering total DNA from blood-parasite mixtures (mean 1.11 μg vs. 0.37 μg; p = 0.03) (Figure 2B). All 15 CF11-filtered parasitized blood samples and 12 of 14 LP-filtered parasitized blood samples yielded < 50% human DNA contamination (Figure 3A). Human DNA contamination was lower for parasitized blood filtered by CF11 than by LP (mean 2.4% vs. 15.9%; p = 0.03) (Figure 3B). Recovery of total DNA was comparably high in both sets of samples (mean 4.64 μg for CF11 vs. 2.86 μg for LP; p = 0.46). One LP-filtered sample failed to produce a human DNA estimate by qPCR and was omitted from the analysis.

Figure 2. Percent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of blood-parasite mixtures. The volume and parasite density of blood-parasite mixtures were 3 ml and 10,000/μl except where stated otherwise. Three replicates were performed for each filtration method. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur; CF11 = filtration using a 5.5-ml CF11 column; CF11 O/N = CF11 filtrate held overnight prior to removal of supernatant; CF11 4 ml = filtration using a 4-ml CF11 column; CF11 low vol = filtration of a 1.5-ml blood-parasite mixture containing 10,000 parasites/μl; CF11 low para = filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl; NF = no filtration; NF low para = no filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl. Box plots show median and interquartile range.

Figure 3. Percent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of Plasmodium falciparum-infected blood samples that vary in parasite density. Light gray shading indicates samples with < 50,000 parasites/μl, medium gray shading indicates 50,000-100,000 parasites/μl, and dark gray shading indicates > 100,000 parasites/μl. All sample volumes were 4 ml; n = sample size for each method within each parasite-density range. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur; CF11 = filtration using a 5.5-ml CF11 column. Box plots show median and interquartile range; whiskers span the range of data points within 1.5 times the interquartile range of the lower and upper quartiles; outliers are shown as points.
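Because the two sequencing criteria shown as dashed lines recur throughout these results, a minimal sketch of the corresponding quality check is given below. The thresholds come from the text; the sample names and values are invented for illustration.

# Minimal check of the sequencing criteria referenced above:
# < 50% human DNA and > 500 ng (0.5 μg) total DNA.
MAX_HUMAN_DNA_PCT = 50.0
MIN_TOTAL_DNA_NG = 500.0

def passes_illumina_criteria(human_dna_pct, total_dna_ng):
    return human_dna_pct < MAX_HUMAN_DNA_PCT and total_dna_ng > MIN_TOTAL_DNA_NG

samples = {
    "CF11-01": (2.4, 4640.0),        # (% human DNA, total DNA in ng)
    "LP-01": (15.9, 2860.0),
    "CF11-lowvol-01": (28.0, 510.0),
}
for name, (pct_human, ng_total) in samples.items():
    verdict = "PASS" if passes_illumina_criteria(pct_human, ng_total) else "FAIL"
    print(f"{name}: {verdict}")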
Procedural modifications of CF11 filtration method

To explore whether CF11 filtrates could be processed without centrifugation, CF11-filtered blood-parasite mixtures were refrigerated overnight. The RBC pellet formed during overnight settling was not as tightly packed as that formed during centrifugation and required careful handling when removing the supernatant to prevent disruption. This modification nonetheless resulted in adequate human DNA depletion (mean contamination 11%) (Figure 2A) and total DNA recovery (mean 1.22 μg) (Figure 2B). Columns with less CF11 powder, packed to the 4-ml mark rather than to the 5.5-ml mark, also yielded high total DNA (mean 1.56 μg) and a sufficiently low proportion of human DNA (mean 30%) (Figures 2A and 2B).

Blood volume and parasite density

All CF11-filtered blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) consistently yielded samples with < 30% human DNA contamination and > 500 ng total DNA (Figures 2A and 2B). However, ex vivo study of P. falciparum isolates with interesting phenotypes can be difficult when blood volumes are limited and parasite densities are low. To determine whether such limitations might significantly reduce the quality of DNA samples, CF11 filtrations were repeated using blood-parasite mixtures of lower volume (1.5 ml) or lower parasite density (5,000/μl). CF11 filtration of low-volume samples (1.5 ml; 10,000 parasites/μl) produced acceptable human DNA contamination (mean 28%) and mean total DNA (0.51 μg), but individual yields of total DNA were highly variable, ranging from 0.15 to 1.09 μg (Figures 2A and 2B). CF11 filtration of low-parasite-density samples (3 ml; 5,000 parasites/μl) produced adequate amounts of total DNA (mean 2.68 μg) but unacceptable human DNA contamination (mean 77%) (Figures 2A and 2B). Thus, reducing sample volumes or parasite densities by half failed to reliably produce DNA samples acceptable for Illumina sequencing.

Filtration of P. falciparum-infected blood obtained from patients with malaria (Pursat, Cambodia) using the CF11 and LP methods consistently achieved high-quality results at parasite densities ranging from 20,000 to 475,000/μl (Figures 3A and 3B). At parasite densities < 50,000/μl and 50,000-100,000/μl, both filtration methods depleted human DNA contamination to ≤ 30% and produced mean DNA yields > 1 μg. Samples with > 100,000 parasites/μl yielded several micrograms of DNA with < 5% human DNA contamination.
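The parasite-density stratification used in Figure 3 amounts to a simple grouping step. The sketch below assumes a table of per-sample results (all values invented) and uses the density bins from the figure legend; it is illustrative only and does not reproduce the published data.

# Group per-sample results into the Figure 3 parasite-density bins and
# summarise mean human DNA contamination and mean total DNA yield per bin.
import pandas as pd

results = pd.DataFrame({                 # hypothetical per-sample results
    "parasites_per_ul": [22000, 48000, 75000, 96000, 150000, 475000],
    "human_dna_pct": [30.0, 18.5, 12.0, 9.5, 4.0, 1.2],
    "total_dna_ug": [1.1, 1.4, 2.0, 2.6, 3.8, 6.5],
})

bins = [0, 50_000, 100_000, float("inf")]
labels = ["< 50,000/ul", "50,000-100,000/ul", "> 100,000/ul"]
results["density_bin"] = pd.cut(results["parasites_per_ul"], bins=bins, labels=labels)

summary = results.groupby("density_bin", observed=True)[
    ["human_dna_pct", "total_dna_ug"]
].mean()
print(summary)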
Validation of routine blood collection for Illumina sequencing

The 51 parasitized blood samples that were collected and CF11-filtered in Ratanakiri, Cambodia, ranged in parasite density from 15,000 to 290,000/μl and yielded a mean of 0.64 μg total DNA with 34% human DNA contamination. All 51 samples met quality standards and have been successfully sequenced on the Illumina platform at the Wellcome Trust Sanger Institute (Hinxton, UK).
Discussion

As next-generation sequencing technologies improve, both costs and the stringency of DNA sample quality requirements continue to fall. Between 2009 and 2011, the requirements for Illumina sequencing of P. falciparum DNA at the Wellcome Trust Sanger Institute relaxed from 30% to 50% allowable human DNA contamination. Moreover, the possibility of sequencing multiple DNA samples in a single lane of a flow cell ('multiplex sequencing') has decreased costs immensely while still yielding ~2 GB of P. falciparum genomic sequence data per sample, with further improvements anticipated in coming years. To take advantage of these increasingly accessible tools, researchers need scalable procedures for DNA sample preparation in minimally equipped field sites.

This study shows that CF11 columns can be used to effectively deplete human DNA from parasitized blood and meet Illumina sequencing requirements for total DNA yield and percent human DNA contamination, using sample volumes and parasite densities comparable to those used in LP filtration experiments [3]. A major advantage of CF11 over LP filtration is cost-effectiveness: CF11 columns cost approximately one USD each, compared with 10-50 USD for the Plasmodipur filter alone. Additionally, CF11 filtration is a simple one-step procedure that does not require specialized equipment; the centrifugation step used to pellet RBCs after CF11 filtration can be replaced with a convenient overnight period of refrigeration that allows the RBCs to pellet by gravity. These advantages should enable a large share of the malaria research community to produce parasite DNA samples appropriate for highly parallel sequencing and microarray technologies.

Because CF11 columns are hand-made rather than commercially produced, variation in column length is likely to occur. Samples filtered in columns packed with less CF11 powder exhibited higher human DNA contamination but still met the threshold for sequencing, indicating that some variation in column preparation can be tolerated.
Samples with low blood volumes (1.5 ml) or low parasite densities (5,000/μl) did not consistently achieve sufficient human DNA depletion or total DNA recovery with either the CF11 or LP method. Whole-genome amplification (WGA) after DNA isolation [13] could boost the amount of starting material in cases where human DNA removal is satisfactory but total DNA yield is low. WGA may therefore be useful for the genomic characterization of CF11-filtered blood samples with low starting volume, such as those collected from very young children and patients with severe malarial anaemia. It may also help in processing blood samples with low parasite densities, as is often the case in patients with recrudescent parasitaemia after anti-malarial treatment and in those with high levels of naturally acquired immunity.

Although specific parameters have not been rigorously tested, observations in the field indicate that CF11 and parasitized blood can be effectively stored and used under a variety of environmental conditions. CF11 kept in an airtight container with regularly replaced desiccant and stored in a cool place can be used for at least three months, even in climates with high heat and humidity (unpublished observation). EDTA-treated blood stored at 4°C for up to 24 hours prior to CF11 filtration has been successfully processed for Illumina DNA sequencing (unpublished observation), but for RNA analysis no more than six hours of refrigeration is suggested (ZB Bozdech, personal communication). Updates to the range of storage conditions will be made to the WWARN protocol [12] as more information becomes available.

Conclusions

CF11 filtration currently offers the most cost-effective, one-step approach to removing human DNA from P. falciparum-infected whole blood samples and can be successfully implemented in large genome-wide sequencing studies of P. falciparum isolates from patients with malaria.
Keywords: CF11; Cellulose powder; Leukocyte depletion; Plasmodium falciparum; Malaria; Next-generation sequencing
These samples were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage. All blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant as it thought to inhibit Taq polymerase during DNA amplification. Blood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute for Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians. Leukocyte depletion Lymphoprep (Axis-shield, Oslo, Norway) + Plasmodipur (Euro-diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood/parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through the column. The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12]. Packed CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B). Lymphoprep (Axis-shield, Oslo, Norway) + Plasmodipur (Euro-diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood/parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops pushed through the column. 
The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12]. Packed CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B). Experimental design CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method Experimental design CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method. To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method. CF11 filtration, with procedural modifications To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark. To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. 
To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark. Modifications in blood volume and parasite density To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl. To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl. Routine blood collection for further validation and sequencing To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets. To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets. CF11 versus LP filtration, without modifications To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method. To compare the CF11 and LP methods for leukocyte depletion in a laboratory setting, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared and filtered in parallel using each method. 
Unfiltered blood-parasite mixtures were prepared as control samples. To compare the two methods in a field setting (Pursat, Cambodia), 8 ml blood from 15 patients with falciparum malaria were obtained and split into two 4-ml aliquots. Each aliquot was processed by either the CF11 or LP method. CF11 filtration, with procedural modifications To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark. To test a centrifuge-free method of CF11 filtration, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were prepared, filtered through CF11 columns, and held overnight at 4°C to allow RBCs pellet by gravity. Supernatants were removed the following day. To investigate the effect of decreased CF11 column length on leukocyte depletion, three replicates of blood-parasite mixtures (3-ml volume; 10,000 parasites/μl) were filtered through columns loosely filled with CF11 to the 8-ml mark and packed to the 4-ml mark. Modifications in blood volume and parasite density To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl. To investigate the effect of low sample volume on CF11 filtration, three replicates of 1.5-ml blood-parasite mixtures containing 10,000 parasites/μl were prepared and filtered in parallel through CF11 columns loosely filled to the 5-ml mark and packed to the 2.5-ml mark. To investigate the effect of low parasite density on CF11 filtration, three replicates of 3-ml blood-parasite mixtures containing 5,000 parasites/μl were prepared and filtered in parallel through CF11 columns packed to the 5.5-ml mark. In both experiments, unfiltered blood-parasite mixtures were prepared as control samples. The effect of parasite density on CF11 and LP filtration was tested in Pursat, Cambodia, using P. falciparum isolates obtained from patients with malaria who had parasite densities ranging from 20,000-475,000/μl. Routine blood collection for further validation and sequencing To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets. 
To determine how CF11 filtration performed during a routine blood collection protocol, 3 ml blood were obtained from 51 patients presenting with falciparum malaria in Ratanakiri, Cambodia. Blood samples were passed through CF11 columns and the filtrates were transported on ice to the laboratory in Phnom Penh, where the samples were centrifuged to obtain RBC pellets. DNA quantification DNA was extracted from leukocyte-depleted and unfiltered control samples using Qiamp DNA Blood Midi Kits (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. Total DNA from experiments using P. falciparum clones was estimated with a SpectraMax M2 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA) using the Quant-IT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). The standard curve ranged from 0.20 to 50 ng/μl. Total DNA from experiments using P. falciparum isolates was estimated with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) using both the dsDNA HS and BR Assay kits. Quantitative real-time PCR (qPCR) was used to measure the relative amounts of human and parasite DNA in each sample. Primers targeting the human LRAP (F: 5'-ACGTTGGATGAATTTTCCACTGGATTCCAT-'3; R: 5'-ACGTTGGATGTGAACCATGCTCCTTGCATC-'3) and TLR9 (F: 5'-ACGTTGGATGCAAAGGGCTGGCTGTTGTAG-'3; R: 5'- ACGTTGGATGTCTACCACGAGCACTCATTC-'3) genes and the P. falciparum AMA-1 gene (F: 5'-ACGTTGGATGGATTCTCTTTCGATTTCTTTC-'3; R: 5'-ACGTTGGATGTGCTACTACTGCTTTGTCCC-'3) were used to amplify DNA from samples, negative controls and standards in duplicate. Amplification occurred in an Applied Biosystems 7300 or StepOne real-time PCR machine. Human placental and P. falciparum (3D7 clone) genomic DNA standards ranged from 0.20 to 50 ng/μl. For each 25 μl reaction, 12.5 μl SYBR Green qPCR Master Mix (Applied Biosystems, Foster City, CA, USA) was mixed with 1.5 μl of each forward and reverse primers at 2 μM concentrations, 7.5-8.5 μl water, and 1-2 μl of template DNA. Reaction conditions were previously reported [3]: an initial denaturing step of 95°C for 10 minutes, five cycles of 94°C for 45 seconds, 56°C for 45 seconds, 72°C for 45 seconds, followed by 30 cycles of 94°C for 45 seconds, 65°C for 45 seconds and 72°C for 45 seconds and one dissociation cycle (conditions vary by real-time PCR instrument). Results were analysed using Applied Biosystems 7300 System SDS or StepOne v2.0 software. qPCR quantification of AMA-1 and TLR9 or LRAP was used to determine the proportions of P. falciparum and human DNA in the sample. Total DNA was estimated by multiplying the concentration given by Spectramax or Qubit by the extracted sample volume. DNA was extracted from leukocyte-depleted and unfiltered control samples using Qiamp DNA Blood Midi Kits (Qiagen, Venlo, Netherlands) according to the manufacturer's protocol. Total DNA from experiments using P. falciparum clones was estimated with a SpectraMax M2 Microplate Reader (Molecular Devices, Sunnyvale, CA, USA) using the Quant-IT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). The standard curve ranged from 0.20 to 50 ng/μl. Total DNA from experiments using P. falciparum isolates was estimated with a Qubit 2.0 Fluorometer (Invitrogen, Carlsbad, CA, USA) using both the dsDNA HS and BR Assay kits. Quantitative real-time PCR (qPCR) was used to measure the relative amounts of human and parasite DNA in each sample. 
Statistical comparisons: Means, variances, standard errors and statistical comparisons of means were calculated using Stata 11 software. Boxplots indicating medians, interquartile ranges and outliers were also generated using Stata 11.

Blood samples: Experiments using laboratory-adapted P. falciparum clones were conducted at the University of Maryland, Baltimore. Blood obtained from healthy volunteers was mixed with purified P. falciparum parasites (3D7 or Dd2 clones) to achieve parasite densities of 10,000/μl and 5,000/μl. These samples were stored at 4°C for up to two hours prior to CF11 or LP filtration. Experiments using non-adapted P. falciparum isolates were conducted in August 2010 at Sampov Meas Referral Hospital, Pursat, Cambodia. Blood samples for these experiments were collected from patients with falciparum malaria who were ≥ 10 years old and had a wide range of parasite densities (≥ 10,000/μl). These samples were held at 4°C for up to six hours prior to CF11 or LP filtration. Additional blood samples were collected in November to December 2010 at Lumphat District Health Centre, Ratanakiri, Cambodia. These samples were obtained from patients with falciparum malaria who were ≥ 1 year old and had a wide range of parasite densities (≥ 10,000/μl), and were held at 4°C for up to six hours prior to CF11 filtration. Following CF11 filtration, samples were stored on ice for up to six hours and transported on ice for up to 12 hours to Phnom Penh for centrifugation and RBC pellet storage. All blood samples were collected in EDTA or CPDA-1 tubes. Heparin was not used as an anticoagulant because it is thought to inhibit Taq polymerase during DNA amplification.
Blood samples from healthy volunteers were collected under a protocol approved by the Institutional Review Board of the University of Maryland School of Medicine. Blood samples from patients with falciparum malaria were collected under protocols approved by the Institutional Review Board of the National Institute of Allergy and Infectious Diseases and the Cambodian National Ethics Committee for Health Research (http://ClinicalTrials.gov identifiers: NCT00341003 and NCT01240603). Written, informed consent was obtained from all study participants or their parents or guardians.

Leukocyte depletion: Lymphoprep (Axis-Shield, Oslo, Norway) + Plasmodipur (Euro-Diagnostica, Arnhem, Netherlands) filtration ('LP') was performed as previously described [3]. CF11 columns were loosely filled to the 10-ml mark, packed to the 5.5-ml mark with CF11 cellulose powder (Whatman, Kent, UK), and wetted with isotonic phosphate-buffered saline (PBS), as described [10]. Initial tests indicated that plasma separation prior to CF11 filtration of blood did not affect human DNA depletion and that washing with PBS after CF11 filtration improved recovery of parasite DNA. CF11 filtration was therefore carried out as follows. Parasitized blood or blood-parasite mixtures were diluted with an equal volume of PBS. These samples were added to the CF11 column and allowed to flow through by gravity. To wash the sample, 5 ml PBS was added to the column and allowed to pass through by gravity. A plunger was then inserted into the top of the column and the final few drops were pushed through the column. The filtrates were centrifuged for 10 minutes at 1,000 x g and the supernatants removed. RBC pellets from all filtrates in all experiments were stored at -20°C until DNA extraction. A packed CF11 column and filtration of blood diluted with PBS are shown in Figures 1A and 1B. The full CF11 filtration procedure is available on the WWARN website [12].

Figure 1. Packed CF11 column ready for storage, shipping or use (A) and filtration of blood-PBS mixture through CF11 column into collection tube (B).
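The pelleting step above is specified as a relative centrifugal force (1,000 x g for 10 minutes) rather than a rotor speed, so the rpm setting depends on the centrifuge at hand. A small helper for the standard conversion is sketched below; the rotor radii in the example are arbitrary illustrations, not equipment used in the study.

```python
# Convert the 1,000 x g specification to a rotor speed for a given rotor radius.
# Standard conversion: RCF = 1.118e-5 * r(cm) * rpm^2
import math

def rpm_for_rcf(rcf_g: float, rotor_radius_cm: float) -> float:
    return math.sqrt(rcf_g / (1.118e-5 * rotor_radius_cm))

if __name__ == "__main__":
    for radius_cm in (10.0, 15.0, 20.0):   # example rotor radii, not study equipment
        print(f"r = {radius_cm:4.1f} cm -> {rpm_for_rcf(1000, radius_cm):,.0f} rpm for 1,000 x g")
```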
Results: Comparison of unmodified LP and CF11 filtration for leukocyte depletion
Filtration of blood-parasite mixtures using unmodified LP and CF11 methods in parallel successfully depleted human DNA to 1% and 6% of total DNA, respectively (Figure 2A), well below the current Illumina sequencing threshold of < 50%. CF11 was more effective than LP in recovering total DNA from blood-parasite mixtures (mean 1.11 μg vs. 0.37 μg; p = 0.03) (Figure 2B). All 15 CF11-filtered parasitized blood samples and 12 of 14 LP-filtered parasitized blood samples yielded < 50% human DNA contamination (Figure 3A). Human DNA contamination was lower for parasitized blood filtered by CF11 than by LP (mean 2.4% vs. 15.9%; p = 0.03) (Figure 3B). Recovery of total DNA was comparably high in both sets of samples (mean 4.64 μg for CF11 vs. 2.86 μg for LP; p = 0.46). One LP-filtered sample failed to produce a human DNA estimate by qPCR and was omitted from the analysis.

Figure 2. Percent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of blood-parasite mixtures. The volume and parasite density of blood-parasite mixtures were 3 ml and 10,000/μl except where stated otherwise. Three replicates for each filtration method were performed. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column, CF11 O/N = CF11 filtrate held overnight prior to removal of supernatant, CF11 4 ml = filtration using a 4-ml CF11 column, CF11 low vol = filtration of a 1.5-ml blood-parasite mixture containing 10,000 parasites/μl, CF11 low para = filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl, NF = no filtration, NF low para = no filtration of a 3-ml blood-parasite mixture containing 5,000 parasites/μl. Box plots show median and interquartile range.

Figure 3. Percent human DNA (A) and total DNA (B) as estimated by qPCR after leukocyte depletion of Plasmodium falciparum-infected blood samples that vary in parasite density. Light gray shading indicates samples with < 50,000 parasites/μl, medium gray shading indicates samples with 50,000-100,000 parasites/μl, and dark gray shading indicates samples with parasite density > 100,000 parasites/μl. All sample volumes were 4 ml. n = sample size for each method within each range of parasite density. Dashed lines indicate current criteria for Illumina sequencing: < 50% human DNA and > 500 ng total DNA. LP = Lymphoprep + Plasmodipur, CF11 = filtration using a 5.5-ml CF11 column. Box plots show median and interquartile range. Whiskers span the range of data points within 1.5 times the interquartile range of the lower and upper quartiles. Outliers are shown as points.
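The group comparisons reported above (for example, mean total DNA recovered by CF11 versus LP) were computed in Stata 11. For readers without Stata, a roughly equivalent two-sample comparison can be sketched in Python; the replicate values below are placeholders only, not the study measurements.

```python
# Illustrative re-implementation (not the authors' Stata code) of a two-group
# comparison of mean total DNA yield; the arrays below are placeholders, not study data.
import numpy as np
from scipy import stats

cf11_yield_ug = np.array([1.0, 1.1, 1.2])   # hypothetical CF11 replicate yields (ug)
lp_yield_ug   = np.array([0.3, 0.4, 0.4])   # hypothetical LP replicate yields (ug)

for name, x in (("CF11", cf11_yield_ug), ("LP", lp_yield_ug)):
    print(f"{name}: mean={x.mean():.2f} ug, SE={x.std(ddof=1)/np.sqrt(len(x)):.2f}")

t, p = stats.ttest_ind(cf11_yield_ug, lp_yield_ug, equal_var=False)  # Welch's t-test
print(f"Welch t = {t:.2f}, p = {p:.3f}")
```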
Procedural modifications of CF11 filtration method: To explore whether CF11 filtrates could be processed without centrifugation, CF11-filtered blood-parasite mixtures were refrigerated overnight. The RBC pellet formed during overnight settling was not as tightly packed as that formed during centrifugation and required careful handling when removing the supernatant to prevent disruption. This modification resulted in adequate human DNA depletion (mean contamination 11%) (Figure 2A) and total DNA recovery (mean 1.22 μg) (Figure 2B). Columns with less CF11 powder, packed to the 4-ml mark rather than to the 5.5-ml mark, also yielded high total DNA (mean 1.56 μg) and a sufficiently low proportion of human DNA (mean 30%) (Figure 2A and 2B).

Blood volume and parasite density: All CF11-filtered blood-parasite mixtures (3 ml volume, 10,000/μl parasite density) consistently yielded samples with < 30% human DNA contamination and > 500 ng total DNA (Figure 2A and 2B). However, the ex vivo study of P. falciparum isolates with interesting phenotypes can be difficult if blood volumes are limited and parasite densities are low. To determine whether such limitations might significantly reduce the quality of DNA samples, CF11 filtrations were repeated using blood-parasite mixtures of lower volume (1.5 ml) or lower parasite density (5,000/μl). CF11 filtration of low-volume samples (1.5 ml; 10,000 parasites/μl) produced acceptable human DNA contamination (mean 28%) and mean total DNA (0.51 μg), but individual yields of total DNA were highly variable, ranging from 0.15 to 1.09 μg (Figure 2A and 2B). CF11 filtration of low parasite density samples (3 ml; 5,000 parasites/μl) produced adequate amounts of total DNA (mean 2.68 μg) but unacceptable human DNA contamination (mean 77%) (Figure 2A and 2B). Thus, reducing sample volumes or parasite densities by half failed to reliably produce DNA samples acceptable for Illumina sequencing. Filtration of P. falciparum-infected blood obtained from patients with malaria (Pursat, Cambodia) using the CF11 and LP methods consistently achieved high-quality results at parasite densities ranging from 20,000 to 475,000/μl (Figure 3A and 3B). At parasite densities < 50,000/μl and 50,000-100,000/μl, both filtration methods depleted human DNA contamination to ≤ 30% and produced mean DNA yields > 1 μg. Samples with > 100,000 parasites/μl yielded several micrograms of DNA with < 5% human DNA contamination.

Validation of routine blood collection for Illumina sequencing: The 51 parasitized blood samples that were collected and CF11-filtered in Ratanakiri, Cambodia, ranged in parasite density from 15,000 to 290,000/μl and yielded a mean of 0.64 μg total DNA with 34% human DNA contamination. All 51 samples met quality standards and have been successfully sequenced on the Illumina platform at the Wellcome Trust Sanger Institute (Hinxton, UK).
Discussion: As next-generation sequencing technologies improve, both costs and stringency of DNA sample quality requirements continue to fall. Between 2009 and 2011, requirements for Illumina sequencing of P. falciparum DNA by the Wellcome Trust Sanger Institute have relaxed from 30% to 50% human DNA contamination. Moreover, the possibility of sequencing multiple DNA samples in a single lane of a flow cell ('multiplex sequencing') has decreased costs immensely while yielding ~2 GB of P. falciparum genomic sequence data per sample, with further improvements anticipated in coming years. To take advantage of these increasingly accessible tools, researchers need scalable procedures for DNA sample preparation in minimally equipped field sites. This study shows that CF11 columns can be used to effectively deplete human DNA from parasitized blood and achieve Illumina sequencing requirements for total DNA yield and percent human DNA contamination, using sample volumes and parasite densities comparable to those used in LP filtration experiments [3]. A major advantage of CF11 over LP filtration is cost-effectiveness. CF11 columns cost approximately one USD each, compared to 10-50 USD for the Plasmodipur filter alone. Additionally, CF11 filtration is a very simple one-step procedure that does not require specialized equipment. A centrifugation step is only used to pellet RBCs after CF11 filtration and can be replaced with a convenient overnight period of refrigeration to allow the RBCs to pellet by gravity. These advantages should enable a large share of the malaria research community to produce parasite DNA samples appropriate for highly parallel sequencing and microarray technologies.
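To put the ~2 GB per-sample figure in context, a rough fold-coverage estimate can be computed against the parasite genome. The genome size used below (~23.3 Mb) is a commonly cited approximation rather than a number given in the text, and the calculation ignores human reads and mapping losses.

```python
# Rough expected coverage from ~2 GB of sequence per multiplexed sample.
GENOME_BP = 23.3e6      # approximate P. falciparum haploid genome size (assumption)
YIELD_BP = 2e9          # ~2 GB of sequence data per sample, as quoted above
print(f"approximate fold coverage: {YIELD_BP / GENOME_BP:.0f}x")
```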
Because CF11 columns are hand-made rather than commercially produced, variation in the length of the column is likely to occur. Samples filtered in columns packed with less CF11 powder exhibited higher human DNA contamination but still met the threshold level for sequencing, indicating that some variation in column preparation can be tolerated. Samples with low blood volumes (1.5 ml) or parasite densities (5,000/μl) did not consistently achieve sufficient human DNA depletion or total DNA recovery when filtered with either the CF11 or LP method. Whole-genome amplification (WGA) after DNA isolation [13] could boost the amount of starting material in cases where human DNA removal is satisfactory but total DNA yield is low. WGA may therefore be useful in enabling genomic characterization of CF11-filtered blood samples with low starting volume, such as those collected from very young children and patients with severe malarial anaemia. It may also be useful in processing blood samples with low parasite densities, as is often the case in patients with a recrudescent parasitaemia after anti-malarial treatment and those with high levels of naturally acquired immunity. Although specific parameters have not been rigorously tested, observations in the field have indicated that CF11 and parasitized blood can be effectively stored and used in a variety of environmental conditions. CF11 kept in an airtight container with regularly replaced desiccant and stored in a cool place can be used for at least three months, even in climates with high heat and humidity (unpublished observation). EDTA-treated blood stored at 4°C for up to 24 hours prior to CF11 filtration was successfully processed for Illumina DNA sequencing (unpublished observation), but for RNA analysis, no more than six hours of refrigeration is suggested (ZB Bozdech, personal communication). Updates to the range of storage conditions will be made to the WWARN protocol [12] as more information becomes available. Conclusions: CF11 filtration currently offers the best cost-effective, one-step approach to remove human DNA from P. falciparum-infected whole blood samples and can be successfully implemented in large genome-wide sequencing studies of P. falciparum isolates from patients with malaria.
Background: Genome and transcriptome studies of Plasmodium nucleic acids obtained from parasitized whole blood are greatly improved by depletion of human DNA or enrichment of parasite DNA prior to next-generation sequencing and microarray hybridization. The most effective method currently used is a two-step procedure to deplete leukocytes: centrifugation using density gradient media followed by filtration through expensive, commercially available columns. This method is not easily implemented in field studies that collect hundreds of samples and simultaneously process samples for multiple laboratory analyses. Inexpensive syringes, hand-packed with CF11 cellulose powder, were recently shown to improve ex vivo cultivation of Plasmodium vivax obtained from parasitized whole blood. This study was undertaken to determine whether CF11 columns could be adapted to isolate Plasmodium falciparum DNA from parasitized whole blood and achieve current quantity and purity requirements for Illumina sequencing. Methods: The CF11 procedure was compared with the current two-step standard of leukocyte depletion using parasitized red blood cells cultured in vitro and parasitized blood obtained ex vivo from Cambodian patients with malaria. Procedural variations in centrifugation and column size were tested, along with a range of blood volumes and parasite densities. Results: CF11 filtration reliably produces 500 nanograms of DNA with less than 50% human DNA contamination, which is comparable to that obtained by the two-step method and falls within the current quality control requirements for Illumina sequencing. In addition, a centrifuge-free version of the CF11 filtration method to isolate P. falciparum DNA at remote and minimally equipped field sites in malaria-endemic areas was validated. Conclusions: CF11 filtration is a cost-effective, scalable, one-step approach to remove human DNA from P. falciparum-infected whole blood samples.
Background: Recent technological advances have enabled next-generation genomic and transcriptomic analysis of Plasmodium falciparum from parasitized whole blood samples without the need for culturing. High-density genotyping of parasites obtained directly from patients with malaria has improved our understanding of population structure and genomic and transcriptional variation [1,2]. Highly parallel sequencing is currently being used to identify the genetic determinants of clinically relevant phenotypes including drug resistance, vaccine escape and disease severity. Importantly, genomic characterization of parasite populations in patients captures intra-host diversity and prevents the potential loss of sequence data from phenotype-conferring parasite isolates during their culture adaptation. The performance of highly parallel sequencing platforms, such as Illumina, is greatly enhanced in sample preparations with a high parasite-to-human nucleic acid ratio [3]. Such high ratios can be achieved by either selectively capturing parasite nucleic acids or by removing human material from the sample. Hybrid selection [4] using RNA 'baits' complementary to the P. falciparum genome can achieve over 40-fold enrichment of parasite DNA and can be performed at any time following DNA extraction [5]; however, at $250 USD per sample, this technique may be prohibitively costly for epidemiological or population-level studies. The alternative, leukocyte depletion, must be performed in field sites soon after blood collection and before transport and storage. Commercially available magnetic columns have been used to rapidly isolate parasitized red blood cells (RBCs) from uninfected RBCs and leukocytes [6]. However, magnetic purification depends on short-term culturing of patient blood to transform ring-stage parasites to haemozoin-rich trophozoites and schizonts, which requires equipment for in vitro parasite cultivation. Furthermore, parasites obtained directly from patients mature at different rates in culture, resulting in inconvenient time-points for purification. The current standard for leukocyte depletion of parasitized blood for P. falciparum nucleotide sequencing is a two-step process: centrifugation using Lymphoprep or another sucrose density gradient solution, followed by filtration using Plasmodipur filters [3]. This 'LP' method is effective but difficult to scale-up in field settings because it requires extensive handling and transfer of blood, training to perform sensitive steps, and costly commercial reagents and materials. Filtration with hand-made columns has been used as an inexpensive and less time-consuming alternative for leukocyte depletion of Plasmodium-infected blood for decades [7,8]. Recently, non-woven fabric filters [9] and plastic syringes filled with CF11 cellulose powder (Whatman) [10] were shown to effectively remove leukocytes and platelets from Plasmodium vivax-infected blood, where their phagocytic and degranulating activities may interfere with some ex vivo studies. CF11 filtration has also been used to improve microarray-based transcriptome analysis of P. vivax-infected blood [2,11]. CF11 cellulose is thought to work by trapping leukocytes by size exclusion and/or interactions between cellulose hydroxyl groups and leukocyte surface molecules. In this study, laboratory-adapted P. falciparum clones were used to adapt CF11 columns to remove human DNA from P. 
falciparum-infected blood for genomic studies, compare CF11 filtration to the established LP method, and validate a centrifuge-free option for CF11 filtration. Both methods were also compared in a field setting using P. falciparum isolates obtained directly from patients with malaria. Furthermore, routine CF11 filtration of parasitized blood collected from patients in a second field site with minimal facilities was successfully implemented. Conclusions: MV, CA, SC, OK, PL, SA and SU designed the study and collected the data. MV, CA and SC analysed the data. DS, DPK, RMF and CVP provided guidance and coordination for study design, data collection and analysis. MV wrote the first draft of the manuscript. RMF, CA, SC and CVP edited and revised the manuscript. All authors read and approved the final manuscript.
11,595
322
[ 645, 361, 297, 94, 42, 888, 101, 112, 150, 61, 438, 33, 2215, 551, 134, 333, 69, 639, 47 ]
20
[ "cf11", "blood", "parasite", "dna", "ml", "filtration", "000", "μl", "samples", "blood parasite" ]
[ "enrichment parasite dna", "parasite dna sample", "plasmodium falciparum parasitized", "isolates patients malaria", "transcriptomic analysis plasmodium" ]
null
[CONTENT] CF11 | Cellulose powder | Leukocyte depletion | Plasmodium falciparum | Malaria | Next-generation sequencing [SUMMARY]
[CONTENT] CF11 | Cellulose powder | Leukocyte depletion | Plasmodium falciparum | Malaria | Next-generation sequencing [SUMMARY]
null
[CONTENT] CF11 | Cellulose powder | Leukocyte depletion | Plasmodium falciparum | Malaria | Next-generation sequencing [SUMMARY]
[CONTENT] CF11 | Cellulose powder | Leukocyte depletion | Plasmodium falciparum | Malaria | Next-generation sequencing [SUMMARY]
[CONTENT] CF11 | Cellulose powder | Leukocyte depletion | Plasmodium falciparum | Malaria | Next-generation sequencing [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Blood | Cambodia | Child | Child, Preschool | Chromatography | DNA, Protozoan | Humans | Infant | Malaria | Middle Aged | Parasitology | Plasmodium falciparum | Sensitivity and Specificity | Specimen Handling | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Blood | Cambodia | Child | Child, Preschool | Chromatography | DNA, Protozoan | Humans | Infant | Malaria | Middle Aged | Parasitology | Plasmodium falciparum | Sensitivity and Specificity | Specimen Handling | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Blood | Cambodia | Child | Child, Preschool | Chromatography | DNA, Protozoan | Humans | Infant | Malaria | Middle Aged | Parasitology | Plasmodium falciparum | Sensitivity and Specificity | Specimen Handling | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Blood | Cambodia | Child | Child, Preschool | Chromatography | DNA, Protozoan | Humans | Infant | Malaria | Middle Aged | Parasitology | Plasmodium falciparum | Sensitivity and Specificity | Specimen Handling | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Blood | Cambodia | Child | Child, Preschool | Chromatography | DNA, Protozoan | Humans | Infant | Malaria | Middle Aged | Parasitology | Plasmodium falciparum | Sensitivity and Specificity | Specimen Handling | Young Adult [SUMMARY]
[CONTENT] enrichment parasite dna | parasite dna sample | plasmodium falciparum parasitized | isolates patients malaria | transcriptomic analysis plasmodium [SUMMARY]
[CONTENT] enrichment parasite dna | parasite dna sample | plasmodium falciparum parasitized | isolates patients malaria | transcriptomic analysis plasmodium [SUMMARY]
null
[CONTENT] enrichment parasite dna | parasite dna sample | plasmodium falciparum parasitized | isolates patients malaria | transcriptomic analysis plasmodium [SUMMARY]
[CONTENT] enrichment parasite dna | parasite dna sample | plasmodium falciparum parasitized | isolates patients malaria | transcriptomic analysis plasmodium [SUMMARY]
[CONTENT] enrichment parasite dna | parasite dna sample | plasmodium falciparum parasitized | isolates patients malaria | transcriptomic analysis plasmodium [SUMMARY]
[CONTENT] cf11 | blood | parasite | dna | ml | filtration | 000 | μl | samples | blood parasite [SUMMARY]
[CONTENT] cf11 | blood | parasite | dna | ml | filtration | 000 | μl | samples | blood parasite [SUMMARY]
null
[CONTENT] cf11 | blood | parasite | dna | ml | filtration | 000 | μl | samples | blood parasite [SUMMARY]
[CONTENT] cf11 | blood | parasite | dna | ml | filtration | 000 | μl | samples | blood parasite [SUMMARY]
[CONTENT] cf11 | blood | parasite | dna | ml | filtration | 000 | μl | samples | blood parasite [SUMMARY]
[CONTENT] blood | obtained directly patients | directly patients | directly | leukocytes | obtained directly | genomic | infected | field | infected blood [SUMMARY]
[CONTENT] cf11 | ml | blood | parasite | μl | filtration | 000 | prepared | samples | blood parasite mixtures [SUMMARY]
null
[CONTENT] samples successfully | effective step | effective step approach remove | filtration currently | cost effective | cost effective step | cost effective step approach | studies falciparum isolates patients | isolates patients malaria | isolates patients [SUMMARY]
[CONTENT] cf11 | dna | blood | ml | parasite | 000 | μl | filtration | samples | blood parasite [SUMMARY]
[CONTENT] cf11 | dna | blood | ml | parasite | 000 | μl | filtration | samples | blood parasite [SUMMARY]
[CONTENT] Plasmodium ||| two ||| hundreds ||| Plasmodium ||| Plasmodium | Illumina [SUMMARY]
[CONTENT] two | Cambodian ||| [SUMMARY]
null
[CONTENT] one [SUMMARY]
[CONTENT] Plasmodium ||| two ||| hundreds ||| Plasmodium ||| Plasmodium | Illumina ||| two | Cambodian ||| ||| 500 | less than 50% | two | Illumina ||| ||| one [SUMMARY]
[CONTENT] Plasmodium ||| two ||| hundreds ||| Plasmodium ||| Plasmodium | Illumina ||| two | Cambodian ||| ||| 500 | less than 50% | two | Illumina ||| ||| one [SUMMARY]
Undiagnosed Depression among Hypertensive Individuals in Gaza: A Cross-sectional Survey from Palestine.
34158786
The aim of this study was to estimate the prevalence and to determine the associated factors of undiagnosed depression amongst hypertensive patients (HTNP) at primary health care centers (PHCC) in Gaza.
BACKGROUND
A cross-sectional survey was conducted including 538 HTNP as a recruitment phase of a clustered randomized controlled trial. Data were collected through face-to-face structured interview, and depression status was assessed by Beck's Depression Inventory (BDI-II). Data were analyzed by STATA version 14 using standard complex survey analyses, accounted for unresponsiveness and clustering approach. Generalized linear regression analysis was performed to assess associations.
METHODS
The prevalence of undiagnosed clinical depression was 11.6% (95% confidence interval [CI]: 8.1, 16.3). Moreover, prevalence of 15.4% (95% CI: 10.8, 21.6) was found for mild depression symptoms. We found that non-adherence to antihypertensive medications (AHTNM) (β = 0.9, 95% CI: 0.17, 1.7), having more health-care system support (β = 2.8, 95% CI: 1.6, 3.9) and number of AHTNM (β = 1.5, 95% CI: 0.6, 2.5) remain significantly positively associated with BDI-II score. On the other hand, older age (β = -0.1, 95% CI: -0.2, -0.02), having better social support (β = -6.8, 95% CI: -8.9, -4.7) and having stronger patient-doctor relationship (β = -4.1, 95% CI: -6.9, -1.2) kept significantly negative association.
RESULTS
The prevalence of undiagnosed depression was about one-quarter of all cases; half of them were moderate to severe. Routine screening of depression status should be a part of the care of HTNP in PHCC.
CONCLUSION
[ "Aged", "Antihypertensive Agents", "Cross-Sectional Studies", "Depression", "Depressive Disorder", "Humans", "Hypertension", "Prevalence" ]
8188075
Introduction
Globally, depression affects 350 million people; it contributes strongly to the burden of disease and is expected to account for 5.7% of the global burden of disease and to rank second after ischemic heart disease by 2020 (1). Notably, episodes of depression often accompany other chronic diseases, especially hypertension (2,3). Hypertension affects one quarter of adults globally and is likely to affect one third by 2025 (4). People with hypertension and subclinical depression are at additional risk of complications such as cerebrovascular stroke and cardiovascular and kidney diseases (3, 5–7). Moreover, depression is an important predictor of non-adherence to treatment among hypertensive individuals (8). Many studies have demonstrated an increased co-occurrence of depression with hypertension in different countries (9); however, little is known about the prevalence of depression among hypertensive patients in the Gaza Strip (GS). This appears to be the first study aiming to estimate the prevalence and determine the associated factors of undiagnosed depression among hypertensive patients attending primary healthcare clinics in GS.
Methods and Participants
A cross-sectional survey was conducted between 1st August and 30th December 2018 as the recruitment phase of a clustered randomized controlled trial. We recruited 538 hypertensive persons seeking primary health care from ten primary health centers using two-stage cluster random sampling. Initially, centers were randomly selected by a stratified simple random sampling approach to obtain two centers from each governorate. Then, participants from each center were proportionally selected through systematic random sampling based on the eligibility criteria and agreement to take part in the study. All eligible participants who agreed to participate were enrolled regardless of their antihypertensive medication adherence status. Psychological status was initially classified according to the BDI-II score, adherence to antihypertensive medication was then determined, and the intervention of the clustered randomized controlled trial was subsequently implemented based on the adherence results.

Measures and data collection: Face-to-face interviews with a structured questionnaire were used to collect data from participants. The interviews lasted fifteen minutes and took place during clinic hours (8 am to 2 pm, five days a week). The exposure variables of the study included participants' characteristics (age, gender, marital status, employment, education level) as well as health status variables including smoking status, comorbidities, body mass index (BMI), blood pressure (BP) and medication adherence status. Other predictors of interest, such as social support, the patient-doctor relationship and health system support, were also assessed. Depressive symptoms, the outcome variable, were assessed using the validated Arabic-language Beck's Depression Inventory (BDI-II) (10). Blood pressure was measured on the right arm with a mercury sphygmomanometer after completing the interview and recorded in mmHg. BMI was calculated using the WHO chart based on weight and height, which were measured with a mechanical weighing machine with a height rod (Health o meter 402LB Physician Beam Scale with Height Rod).

Instrument: The questionnaire consisted of items about the baseline characteristics of participants, clinical health history, adherence status, the patient-doctor relationship, healthcare system support, perceived social support and depressive symptoms. Depressive symptoms were assessed with the BDI-II, a 21-item self-report inventory designed to assess the presence and severity of depressive symptoms. Each item is rated on a four-point Likert-type scale ranging from 0 to 3, based on severity over the last two weeks. The total score ranges from 0 to 63, with higher scores indicating more severe depressive symptoms. Scores were stratified into four groups: BDI-II 1–13, ups and downs considered normal; 14–19, mood disturbance or mild depression; 20–28, moderate depression; and more than 29, severe depression (11). We provided psychological counseling by a psychotherapist for those who scored from 11 to 19. Special precautions were taken for individuals who endorsed suicidal ideation; in these cases we involved the family and coordinated urgent counseling at the relevant psychological health center. Moreover, all participants who scored 20 or above were referred to psychological health centers. Participants were then stratified into two groups: BDI-II ≥ 20, indicating clinical depression, and BDI-II < 20, indicating no clinical depression (12,13).
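A minimal sketch of the scoring and triage logic described above is given below; the cut-offs follow the bands in the text, while the function names and triage labels are illustrative rather than part of the study protocol.

```python
# Classify a BDI-II total score (0-63) into the severity bands used in this study,
# and flag the follow-up actions described in the text. The paper lists 20-28 as
# moderate and "more than 29" as severe; a score of 29 is grouped with severe here.

def bdi_severity(score: int) -> str:
    if not 0 <= score <= 63:
        raise ValueError("BDI-II total score must be between 0 and 63")
    if score <= 13:
        return "normal ups and downs"
    if score <= 19:
        return "mild depression / mood disturbance"
    if score <= 28:
        return "moderate depression"
    return "severe depression"

def study_action(score: int) -> str:
    if score >= 20:
        return "clinical depression: refer to psychological health center"
    if 11 <= score <= 19:
        return "on-site counseling by psychotherapist"
    return "no action"

for s in (5, 12, 16, 24, 40):
    print(s, "->", bdi_severity(s), "|", study_action(s))
```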
Adherence status was assessed with the Morisky, Green and Levine Adherence Scale (MGL), a validated and reliable self-report medication adherence scale (14). Medical comorbidities were assessed using the Charlson comorbidity index, a validated and widely used weighted index that classifies comorbidity as low or high (15). Social support was assessed with the Multidimensional Scale of Perceived Social Support (MSPSS), a valid and reliable questionnaire that measures perceptions of support from three sources: family, friends, and a significant other (16). Arabic translation was performed following the WHO five-step process for the translation and adaptation of instruments (17). A validated and reliable Arabic version of the Patient-Doctor Relationship Questionnaire-9 (PDRQ-9) was used to assess the relationship between patients and physicians (6,18). Likewise, a healthcare system support questionnaire was used with slight modifications (18,19). The content and face validity of the full Arabic questionnaire were reviewed by a panel of experts. After a pilot study, the required changes were made to clarify any ambiguity and to ensure comprehension by Palestinian participants; detailed information about the questionnaire is available in our previously published article (20).

Eligibility criteria: Palestinian citizens attending Gaza governmental primary health centers who were aged above 18 years, had been registered as hypertensive patients for at least one year and were taking at least one antihypertensive medication were eligible to participate in the study. Patients with a diagnosis of cognitive impairment, a history of depression or current antidepressant use, as reported by their primary care physician, were excluded from the study.

Data analysis: Standard complex survey data analysis was performed with STATA version 14. We accounted for clustering and non-response using the STATA PSU option, and for unequal probability of selection using a sample weight variable and post-stratification weights for each age and sex stratum. Furthermore, since the BDI-II score did not satisfy the normality assumption, a Generalized Linear Model (GLM) with Gaussian family and log link was run. Linearized standard errors, which are robust to non-uniformity of variance, were used. Data were described using descriptive statistics, and categorical variables were compared using the Chi-squared test. After checking the assumptions of linear regression, univariable analysis followed by multiple linear regression was performed to assess the association between depression status and all other independent variables, including participants' characteristics. Variables significant at the 0.1 level were entered into the multiple regression model, and variables were subsequently excluded by the backward stepwise elimination method.
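The analysis above was run in Stata 14 with its complex-survey machinery. Purely as an illustration of the model family, a roughly analogous specification (Gaussian GLM with a log link and cluster-robust standard errors, without the survey weights or post-stratification) can be sketched in Python with statsmodels; the file name, variable names and data are placeholders, not the study data.

```python
# Illustrative sketch (not the authors' Stata code): Gaussian GLM with log link for the
# BDI-II score, with cluster-robust standard errors by health center. The data frame
# below is a placeholder; survey weights/post-stratification are not reproduced here.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("hypertension_survey.csv")   # hypothetical file with the fields used below

model = smf.glm(
    "bdi_score ~ age + non_adherent + n_antihypertensives + social_support "
    "+ doctor_relationship + health_system_support",
    data=df,
    family=sm.families.Gaussian(link=sm.families.links.Log()),
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["health_center"]})
print(result.summary())
```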
The purposes of the study were explained to participants, who were reassured about the confidentiality of their data. Each participant signed a written consent form before participation.
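The BDI-II banding and the two-group split described above are simple threshold rules. The short Python sketch below only illustrates the cut-offs reported in this section; it is not part of the study's analysis (which used STATA), and the function names are ours.

```python
def bdi_ii_band(score: int) -> str:
    """Map a BDI-II total score (0-63) to the severity band used in this study."""
    if not 0 <= score <= 63:
        raise ValueError("BDI-II total scores range from 0 to 63")
    if score <= 13:
        return "normal ups and downs"
    if score <= 19:
        return "mood disturbance / mild depression"
    if score <= 28:
        return "moderate depression"
    return "severe depression"


def has_clinical_depression(score: int) -> bool:
    """Dichotomization used for the analysis: BDI-II >= 20 counts as clinical depression."""
    return score >= 20


# Example: a participant scoring 22 falls in the moderate band and is
# classified as having clinical depression (and would be referred onward).
print(bdi_ii_band(22), has_clinical_depression(22))
```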
Results
Participants' characteristics: Five hundred and thirty-eight participants from the five governorates of the Gaza Strip were included in the survey, with a response rate of 96%. The overall mean age was 57.1 years (95% confidence interval [CI]: 55.9, 58.2) and more than half (60.9%) were female. The majority were literate (90.2%), married (90.4%), unemployed (86.5%) and non-smokers (81.4%). Obesity (BMI above 30 kg/m2) was common in the study population, with a mean BMI of 32.3 (95% CI: 31.9, 32.6). More than half of the participants (57.2%) had been diagnosed with hypertension for more than five years, with a mean duration of 8.44 years (95% CI: 7.3, 9.4). Almost two-thirds (64.4%) were treated with a single antihypertensive medication, taken once a day (64.8%), while 35.6% were treated with two or more medications taken twice a day or more (35.2%). Only 14.43% were classified as having two or more comorbidities. The mean depression score based on the BDI-II was 10 (95% CI: 8.4, 11.6) (Table 1).
Table 1. Participants' characteristics (n = 538).
Depression status: The mean BDI-II score for all participants was 10 (95% CI: 8.4, 11.6), and participants were categorized into three groups: 1) normal (BDI-II 0–13), with a prevalence proportion of 73% (95% CI: 65.6, 79.3); 2) mild clinical depression (BDI-II 14–19), with a prevalence proportion of 15.4% (95% CI: 10.8, 21.6), who received psychological counseling from a psychotherapist in the same clinic; and 3) clinical depression (BDI-II ≥20), with a prevalence of 11.6% (95% CI: 8.1, 16.3); these undiagnosed depression cases were all referred to psychological health centers for further evaluation and treatment (Table 2).
Table 2. Depression status prevalence proportion (n = 538).
Relationship of depression status and predictors: Under the complex survey design, the Chi-squared test was used to compare depression status across categorical explanatory variables. Table 3 shows that depression status was associated with level of education and BMI category.
Table 3. Relation between depression status and participants' characteristics.
Univariable linear regression analysis (Table 4) showed that higher BMI (β = 0.11, 95% CI: 0.01, 0.21), non-adherence to antihypertensive medications (β = 1.3, 95% CI: 0.5, 2.1) and greater healthcare system support (β = 2.8, 95% CI: 1.2, 4.4) were significantly associated with a higher BDI-II score. In contrast, older age (β = -1.5, 95% CI: -0.29, -0.01), longer duration of hypertension (β = -0.12, 95% CI: -0.19, -0.04), a stronger patient-doctor relationship (β = -3.7, 95% CI: -6.4, -1.1) and greater social support (β = -7.1, 95% CI: -9.3, -4.9) were significantly associated with a lower BDI-II score.
Table 4. Univariable linear regression analysis of depression status and predictors; statistically significant variables were carried forward to the multivariable analysis.
Multivariable linear regression analysis (Table 5) gave similar results, except for BMI and duration of hypertension, which were no longer significant. Non-adherence to antihypertensive medications (β = 0.9, 95% CI: 0.2, 1.7), greater healthcare system support (β = 2.8, 95% CI: 1.6, 3.9) and the number of antihypertensive medications (β = 1.5, 95% CI: 0.6, 2.5) remained significantly positively associated with the BDI-II score.
In contrast, older age (β = -0.11, 95% CI: -0.20, -0.02), greater social support (β = -6.8, 95% CI: -8.9, -4.7) and a stronger relationship with the physician (β = -4.1, 95% CI: -6.9, -1.2) remained significantly negatively associated with the BDI-II score.
Table 5. Multivariable linear regression analysis of depression status and predictors.
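As a rough illustration of the modelling strategy reported above (a Gaussian GLM with a log link on the BDI-II score), the sketch below uses Python's statsmodels on invented data. It is not the authors' STATA code: the variable names and data are hypothetical, and robust (HC1) standard errors are used only as a crude stand-in for the design-based, linearized standard errors produced by survey commands.

```python
# Minimal sketch (not the authors' analysis): Gaussian GLM with log link on made-up data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 538
df = pd.DataFrame({
    "age": rng.normal(57, 10, n),
    "non_adherent": rng.integers(0, 2, n),
    "social_support": rng.normal(5, 1.5, n),  # hypothetical scale
})
# Hypothetical outcome loosely mimicking the signs reported in the paper.
mu = np.exp(2.3 + 0.08 * df["non_adherent"] - 0.005 * df["age"] - 0.05 * df["social_support"])
df["bdi_ii"] = np.clip(mu + rng.normal(0, 3, n), 0, 63)

X = sm.add_constant(df[["age", "non_adherent", "social_support"]])
model = sm.GLM(df["bdi_ii"], X,
               family=sm.families.Gaussian(link=sm.families.links.Log()))
# HC1 robust errors: a simple approximation, not the survey-linearized errors used in the study.
result = model.fit(cov_type="HC1")
print(result.summary())
```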
null
null
[]
[]
[]
[ "Introduction", "Methods and Participants", "Results", "Discussion" ]
[ "Globally, depression affects 350 million people around the world; it is strongly contributes to the burden of disease, and is expected to increase by 5.7% of global burden of disease and become the second after ischemic heart disease by 2020 (1). Notably, episodes of depression are accompanied with other chronic diseases especially hypertension (2,3). Hypertension affects globally one fourth of adults, and is likely to increase to one third by 2025 (4). People with hypertension and subclinical depression are at extra risk of complications such as cerebrovascular stroke, cardiovascular and kidney diseases (3, 5–7). Moreover, depression represents an important predictor among hypertensive individual whom are non-adherence to their treatment (8).\nMany studies demonstrated an increased co-occurrence of depression with hypertension in different countries (9); however, little is known about depression prevalence among hypertensive patients in Gaza Strip (GS). This study seems to be the first study aimed to estimate the prevalence and to determine the associated factors of undiagnosed depression among hypertensive patients attending primary healthcare clinics in GS.", "A cross-sectional survey as the recruitment phase of a clustered randomized controlled trial was conducted between 1st August and 30th December, 2018. We recruited 538 hypertensive persons seeking primary health care by two stages cluster random sampling from ten primary health centers. Initially, centers were randomly selected by stratified simple random sampling approach to get two centers from each governorate. Then, participants from each center were proportionally selected through a systematic random sampling method based on eligibility criteria and agreement to take part in the study.\nThis cross-sectional survey was conducted as the recruitment phase of a clustered randomized controlled trial. All eligible participants who agreed to participate in the study were enrolled regardless of their antihypertensive medication adherence status. Psychological status was initially classified according to the BDI-II score. Later, adherent status to antihypertensive medication was determined. Subsequently, the intervention of the clustered randomized controlled trial was implemented based on the adherent status results.\nMeasures and data collection: Face-to-face interview was used to collect data from participants using a structured questionnaire. The interviews lasted fifteen minutes during the clinic hours (8 am to 2 pm, five days a week). The exposure variables of the study included participants' characteristics (age, gender, marital status, employment, education level), as well as participants' health status variables including smoking status, comorbidities, body mass index (BMI), blood pressure (BP) measurement, and medication adherence status. Other predictors of interest such as social support, relationships between patients and physician and health system support were also assessed. Moreover, depressive symptoms as the outcome variable were assessed using the validated Arabic language Beck's Depression Inventory (BDI-II) (10).\nBlood pressure was measured on the right arm using mercury sphygmomanometer after completing the interview; the value was recorded in terms of mm of mercury. BMI was calculated by using the WHO chart based on weight and height. 
Weight and height were measured by mechanical weighing machine with height rod (Health o meter 402LB Physician Beam Scale, Height Rod).\nInstrument: The questionnaire consisted of items about the baseline characteristics of participants, clinical health history, adherence status, patient-doctor relationship, healthcare system support, perceived social support and psychological status of depressive symptoms.\nDepressive symptoms were assessed by the aid of BDI-II. A 21-item self-report inventory was designed to assess the presence and severity of depressive symptoms. Each item is rated on a four-point Likert-type scale ranging from 0 to 3, based on the severity in the last two weeks. The total score ranges from 0 to 63, with higher scores indicating more severe depressive symptoms. The results were stratified into four groups; BDI-II: 1–13 these ups and downs are considered normal, BDI-II: 14–19 are mood disturbance or mild depression, BDI-II: 20–28 are moderate depression, and BDI-II: more than 29 are severe depression (11). We provided psychological counseling by psychotherapist for those who scored from 11 to 19. Special precautions were taken for those individuals who endorsed having suicidal ideation. Mainly, we involved the family and coordinated for urgent counseling in the related psychological health center. Moreover, all cases scored 20 and above were referred to psychology health centers. Consequently, stratification into two groups was done; considering BDI-II ≥20 who have clinical depression and BDI-II <20 who do not have clinical depression (12,13).\nAdherence status was assessed by Morisky, Green, and Levine Adherence Scale (MGL), a known validated and reliable self-report medication adherence score (14). Similarly, medical comorbidities were assessed using Charlson comorbidity index, a validated and widely used weighted-index designed scoring as low and high comorbidities (15). Likewise, social support was assessed by multidimensional Scale of Perceived Social Support (MSPSS); a known valid and reliable questionnaire which measures perceptions of support from three sources: family, friends, and a significant other (16). Arabic translation was performed based on WHO five steps for process of translation and adaptation of instrument (17).\nAn Arabic validated and reliable version of patient-doctor relationship questionnaire-9 (PDRQ-9) was used to assess the relationship between patients and physicians (6,18). Likewise, healthcare system support questionnaire was used with little modifications (18, 19). The whole Arabic questionnaire content and face validity were reviewed by panel of experts. Required changes were made to clarify any ambiguity and to ensure comprehension of Palestinian participants after pilot study, detailed information about the questionnaire are available in our published article elsewhere (20).\nEligibility criteria: Palestinian citizens attending Gaza governmental primary health centers, aged above 18 years, registered as hypertension patient since at least one year and taking at least one antihypertensive medication were eligible to participate in the study. Patients with a diagnosis of cognitive impairment, history of depression or being currently on antidepressants as reported by their primary care physician were excluded from the study.\nData analysis: Standard complex survey data analysis method was performed by STATA version 14. 
We accounted for clustering and unresponsiveness using STATA PSU option and unequal probability of selection using sample weight variable analysis and post stratification weight for each age and sex group strata. Furthermore, since the BDI-II score did not follow normality assumption, Generalized Linear Model (GLM) with Gaussian family and log link was run. Moreover, linearized standard error which is quite robust to non-uniformity of variance was used.\nData were described using descriptive statistics; categorical variables were compared using the Chi-squared test. After checking assumption of linear regression, univariable analysis followed by multiple linear regression were performed to assess the association between depression status and all other independent variables including participants' characteristic. Statistically significant variables were included in multiple regression analysis model based on 0.1 level. However, variables had been excluded by backward stepwise elimination method.\nEthics approval and consent to participate: Prior to conducting this research study, ethical approval from Tehran University of Medical Sciences Ethical Committee (code number: IR.TUMS.SPH.REC.1396.4828) was obtained. Approvals from the Palestinian Health Research Council(PHRC/HC/322/18) and Research Committee at the Palestinian Ministry of Health were also obtained. Purposes of the study were explained to participants, and they were reassured about confidentiality of data. Each participant was asked to sign a consent paper prior to participation.", "Participants' characteristics: Five hundred and thirty-eight participants were included in the survey from the five governorates of Gaza Strip by response rate of 96%. The overall mean age was 57.1 years (95% confidence interval [CI]: 55.9, 58.2) and more than half of them (60.9%) were females. In addition, the majority of them were literate (90.2%), married (90.4%), unemployed (86.5%), and not smokers (81.4%). Since obesity is known to have BMI more than 30 kg/m2, it was most common among study population as BMI mean reported 32.3 (95% CI: 31.9, 32.6). Moreover, more than half of the participants (57.2%) had been diagnosed with hypertension for more than five years with a mean of 8.44 years (95% CI: 7.3, 9.4). Almost, two third of them (64.4%) were treated with only one antihypertensive medication once a day (64.8%); however, 35.6% were treated with two or more medications with a frequency of twice or more (35.2%). Only 14.43% of them were classified with two or more comorbidities. 
In addition, the participants reported a mean of depression status based on BDI-II score of 10 (95%CI: 8.4, 11.6%) (Table 1).\nParticipants characteristics (n= 538)\nDepression status: The BDI-II mean score for all participants was 10 (95% CI: 8.4, 11.6); and they were categorized into three groups: 1) normal (BDI-II 0–13) which had a prevalence proportion of 73% (95% CI: 65.6, 79.3), 2), mild clinical depression (BDI-II: 14–19) with a prevalence proportion of 15.4% (95% CI: 10.8, 21.6), in which they received psychological counseling in the same clinic by psychotherapist; and 3) clinical depression BDI-II ≥20 with a prevalence of 11.6% (95% CI: 8.1, 16.3) of undiagnosed depression cases which all were referred to psychology health centers for further evaluation and treatment (Table 2).\nDepression status prevalence proportion (n= 538)\n\nRelationship of depression status and predictors\n\nUnder standard complex survey data setting, the Chi-squared test was used to compare categorical variables of depression status and other explanatory variables. Table 3 shows that depression status had relations with the categorical variables: level of education and BMI categories.\nRelation between depression status and participants characteristics\nUnivariable linear regression analysis presented in Table 4 shows that increased BMI (β = 0.11, 95% CI: 0.01, 0.21), being non-adherent to antihypertensive medications (β = 1.3, 95% CI: 0.5, 2.1) and having better healthcare system support (β = 2.8, 95% CI: 1.2, 4.4) were significantly associated with increased BDI-II score. In contrast, older age (β = -1.5, 95% CI: - 0.29, -0.01), longer duration of hypertension (β = -0.12, 95% CI: -0.19, -0.04), stronger patient-doctor relationship (β = -3.7, 95% CI: -6.4, -1.1), and having superior social support (β = -7.1, 95% CI: -9.3, -4.9) were significantly associated with reduced BDI-II score.\nUnivariate linear regression analysis of depression status and predictors\nStatistically significant variables for multivariable analysis\nMultivariable linear regression analysis in Table 5 revealed similar results except for increased BMI and duration of hypertension. Thus, non-adherence to antihypertensive medications (β = 0.9, 95% CI: 0.2, 1.7), having more healthcare system support (β = 2.8, 95% CI: 1.6, 3.9) and number of antihypertensive medications (β = 1.5, 95% CI: 0.6, 2.5) remain significantly positively associated with BDI-II score. On the other hand, older age (β = -0.11, 95% CI: -0.20, -0.02), having better social support (β = -6.8, 95% CI: -8.9, -4.7) and stronger relation with physician (β = -4.1, 95% CI: -6.9, -1.2) remain significantly negatively associated with BDI-II score.\nMultivariable linear regression analysis of depression status and predictors", "To best of our knowledge, this is the first cross-sectional survey from Gaza to document the prevalence of undiagnosed depression among hypertensive patients. We found that the prevalence of undiagnosed clinical depression in our hypertensive population is almost higher than what is observed in Norway (6.2%) and South Africa (6%), and lower than in the US(44.9%), China (44.2%), Mexico (57.5%), Pakistan (40.1%), Croatia (29.6%) and Nigeria (26.7%). However, it was similar to that observed in Brazil (12.1%), Ghana (10.5%) and Netherlands (11.4%) (9, 21). 
In addition, this prevalence was lower than the rate reported in a systematic review and meta-analysis (26.8%) which summarized the prevalence of depression among hypertensive individuals in 41 studies (9).\nUnfortunately, very limited data is available from Arabic regional countries. One study from Saudi Arabia found a prevalence of 20.7% (22). Still, no local prevalence was found from other parts of Palestine. Our study showed that the prevalence of depression among hypertensive patients was lower than the prevalence of patients with type two diabetic in the West Bank of Palestine (40%) (23).\nThe associated factors found by multiple linear regression in this study were: age, number of antihypertensive medication, adherent status, healthcare system support, patient-doctor relationship and perceived social support. Increasing age was found to be an associated factor with increased depressive symptom in other studies (21, 22, 24, 25), although, our results predicted a negative relationship between age and depressive status. An explanation for this could be related to the nature of aged people in Gaza, since they have more acceptance and are adapted to their disease more than younger adults.\nHowever, gender and smoking (21, 22, 24, 26) found to be associated factors in previous studies, did not reach statistical significant level in our study. Longer hypertension duration has been previously found to be a significantly associated factor in an Indian study (26); and was associated factor in our univariable analysis, although it could not reach significant level in multivariable analysis. In addition, in line with what has been found on a systematic review (8), we found a negative relationship between depression and adherence to antihypertensive medications. Furthermore, statistically significant association with number of antihypertensive medication and depression was observed in our study, which was supported by previous findings (24, 27).\nIn this study, we have investigated the relation of depression status with healthcare system support, patient-doctor relationship and perceived social support. Interestingly, a positive relationship between depression and healthcare system support was found. This could be explained by the frequent visit of the patient to the primary healthcare clinic, and his increased need to its supportive aids. On the other hand, it was a negative relationship with patient-doctor relationship and social support. It is highly believed to be negative relationship between depression and perceived social support since social support is a significant predictor of depression and hypertension treatment programs (28, 29).\nWe shed the light on the undiagnosed depression cases among hypertensive persons receiving their usual primary health care in primary healthcare clinics. The estimated prevalence proportion of undiagnosed subclinical depression cases was about one-quarter of all cases, in which half of them had moderate to severe depression status. Furthermore, age, number of antihypertensive medication, adherence status, patient-physician relationship, health-care system support and social support were associated with depression status. We suggest periodically screening of depression and adherence status as a part of routine care for hypertensive primary healthcare seekers, particularly younger hypertensive patients. 
We also call for exploring the ways to promote social support among them.\nHowever, since this is the first study in Gaza Strip which assessed the psychological status among hypertensive patients, several limitations were observed. First, alcohol consumption is an important factor related to depression and adherence and may confound the reported relationships, although it has not been measured in this study. Our justification is that alcohol consumption is illegal and prohibited to be used in Gaza as Muslim citizens, and so it was difficult to get trustful answers from the participants. Thus, more than 98% of them answered no. Indeed, even if he/she was a user he/she would not tell the truth because of fear of legal prosecutions in spite of assurance about data confidentiality. As a result, we decided to omit this variable as a predictor from this study.\nThe second limitation is that some antihypertensive medications may be related with depression. However, we were concerned only about the number of antihypertensive medications and the frequency and did not discuss the type and classes of the medications.\nAnother limitation of this study is that the prevalence of depression would be underestimated because we excluded the already known patients with a history of depression or currently on antidepressant medications. In addition, BDI-II is a depression screening tool only and not a diagnostic tool. This will surely affect the true prevalence of depression and therefore this could be another limitation regarding the estimation of the true prevalence. However, BDI-II as a screening tool gave a good impression of the depression prevalence among hypertensive patients and was able to discover many unknown cases that were referred for further diagnostic procedures. Despite the limitations, the findings support the routine screening of depression and medication adherence status as a part of care for hypertensive patients in PHCCs." ]
[ "intro", "methods", "results", "discussion" ]
[ "Hypertension", "Depression", "Prevalence", "Medication adherence" ]
Introduction: Globally, depression affects 350 million people around the world; it is strongly contributes to the burden of disease, and is expected to increase by 5.7% of global burden of disease and become the second after ischemic heart disease by 2020 (1). Notably, episodes of depression are accompanied with other chronic diseases especially hypertension (2,3). Hypertension affects globally one fourth of adults, and is likely to increase to one third by 2025 (4). People with hypertension and subclinical depression are at extra risk of complications such as cerebrovascular stroke, cardiovascular and kidney diseases (3, 5–7). Moreover, depression represents an important predictor among hypertensive individual whom are non-adherence to their treatment (8). Many studies demonstrated an increased co-occurrence of depression with hypertension in different countries (9); however, little is known about depression prevalence among hypertensive patients in Gaza Strip (GS). This study seems to be the first study aimed to estimate the prevalence and to determine the associated factors of undiagnosed depression among hypertensive patients attending primary healthcare clinics in GS. Methods and Participants: A cross-sectional survey as the recruitment phase of a clustered randomized controlled trial was conducted between 1st August and 30th December, 2018. We recruited 538 hypertensive persons seeking primary health care by two stages cluster random sampling from ten primary health centers. Initially, centers were randomly selected by stratified simple random sampling approach to get two centers from each governorate. Then, participants from each center were proportionally selected through a systematic random sampling method based on eligibility criteria and agreement to take part in the study. This cross-sectional survey was conducted as the recruitment phase of a clustered randomized controlled trial. All eligible participants who agreed to participate in the study were enrolled regardless of their antihypertensive medication adherence status. Psychological status was initially classified according to the BDI-II score. Later, adherent status to antihypertensive medication was determined. Subsequently, the intervention of the clustered randomized controlled trial was implemented based on the adherent status results. Measures and data collection: Face-to-face interview was used to collect data from participants using a structured questionnaire. The interviews lasted fifteen minutes during the clinic hours (8 am to 2 pm, five days a week). The exposure variables of the study included participants' characteristics (age, gender, marital status, employment, education level), as well as participants' health status variables including smoking status, comorbidities, body mass index (BMI), blood pressure (BP) measurement, and medication adherence status. Other predictors of interest such as social support, relationships between patients and physician and health system support were also assessed. Moreover, depressive symptoms as the outcome variable were assessed using the validated Arabic language Beck's Depression Inventory (BDI-II) (10). Blood pressure was measured on the right arm using mercury sphygmomanometer after completing the interview; the value was recorded in terms of mm of mercury. BMI was calculated by using the WHO chart based on weight and height. Weight and height were measured by mechanical weighing machine with height rod (Health o meter 402LB Physician Beam Scale, Height Rod). 
Instrument: The questionnaire consisted of items about the baseline characteristics of participants, clinical health history, adherence status, patient-doctor relationship, healthcare system support, perceived social support and psychological status of depressive symptoms. Depressive symptoms were assessed by the aid of BDI-II. A 21-item self-report inventory was designed to assess the presence and severity of depressive symptoms. Each item is rated on a four-point Likert-type scale ranging from 0 to 3, based on the severity in the last two weeks. The total score ranges from 0 to 63, with higher scores indicating more severe depressive symptoms. The results were stratified into four groups; BDI-II: 1–13 these ups and downs are considered normal, BDI-II: 14–19 are mood disturbance or mild depression, BDI-II: 20–28 are moderate depression, and BDI-II: more than 29 are severe depression (11). We provided psychological counseling by psychotherapist for those who scored from 11 to 19. Special precautions were taken for those individuals who endorsed having suicidal ideation. Mainly, we involved the family and coordinated for urgent counseling in the related psychological health center. Moreover, all cases scored 20 and above were referred to psychology health centers. Consequently, stratification into two groups was done; considering BDI-II ≥20 who have clinical depression and BDI-II <20 who do not have clinical depression (12,13). Adherence status was assessed by Morisky, Green, and Levine Adherence Scale (MGL), a known validated and reliable self-report medication adherence score (14). Similarly, medical comorbidities were assessed using Charlson comorbidity index, a validated and widely used weighted-index designed scoring as low and high comorbidities (15). Likewise, social support was assessed by multidimensional Scale of Perceived Social Support (MSPSS); a known valid and reliable questionnaire which measures perceptions of support from three sources: family, friends, and a significant other (16). Arabic translation was performed based on WHO five steps for process of translation and adaptation of instrument (17). An Arabic validated and reliable version of patient-doctor relationship questionnaire-9 (PDRQ-9) was used to assess the relationship between patients and physicians (6,18). Likewise, healthcare system support questionnaire was used with little modifications (18, 19). The whole Arabic questionnaire content and face validity were reviewed by panel of experts. Required changes were made to clarify any ambiguity and to ensure comprehension of Palestinian participants after pilot study, detailed information about the questionnaire are available in our published article elsewhere (20). Eligibility criteria: Palestinian citizens attending Gaza governmental primary health centers, aged above 18 years, registered as hypertension patient since at least one year and taking at least one antihypertensive medication were eligible to participate in the study. Patients with a diagnosis of cognitive impairment, history of depression or being currently on antidepressants as reported by their primary care physician were excluded from the study. Data analysis: Standard complex survey data analysis method was performed by STATA version 14. We accounted for clustering and unresponsiveness using STATA PSU option and unequal probability of selection using sample weight variable analysis and post stratification weight for each age and sex group strata. 
Furthermore, since the BDI-II score did not follow normality assumption, Generalized Linear Model (GLM) with Gaussian family and log link was run. Moreover, linearized standard error which is quite robust to non-uniformity of variance was used. Data were described using descriptive statistics; categorical variables were compared using the Chi-squared test. After checking assumption of linear regression, univariable analysis followed by multiple linear regression were performed to assess the association between depression status and all other independent variables including participants' characteristic. Statistically significant variables were included in multiple regression analysis model based on 0.1 level. However, variables had been excluded by backward stepwise elimination method. Ethics approval and consent to participate: Prior to conducting this research study, ethical approval from Tehran University of Medical Sciences Ethical Committee (code number: IR.TUMS.SPH.REC.1396.4828) was obtained. Approvals from the Palestinian Health Research Council(PHRC/HC/322/18) and Research Committee at the Palestinian Ministry of Health were also obtained. Purposes of the study were explained to participants, and they were reassured about confidentiality of data. Each participant was asked to sign a consent paper prior to participation. Results: Participants' characteristics: Five hundred and thirty-eight participants were included in the survey from the five governorates of Gaza Strip by response rate of 96%. The overall mean age was 57.1 years (95% confidence interval [CI]: 55.9, 58.2) and more than half of them (60.9%) were females. In addition, the majority of them were literate (90.2%), married (90.4%), unemployed (86.5%), and not smokers (81.4%). Since obesity is known to have BMI more than 30 kg/m2, it was most common among study population as BMI mean reported 32.3 (95% CI: 31.9, 32.6). Moreover, more than half of the participants (57.2%) had been diagnosed with hypertension for more than five years with a mean of 8.44 years (95% CI: 7.3, 9.4). Almost, two third of them (64.4%) were treated with only one antihypertensive medication once a day (64.8%); however, 35.6% were treated with two or more medications with a frequency of twice or more (35.2%). Only 14.43% of them were classified with two or more comorbidities. In addition, the participants reported a mean of depression status based on BDI-II score of 10 (95%CI: 8.4, 11.6%) (Table 1). Participants characteristics (n= 538) Depression status: The BDI-II mean score for all participants was 10 (95% CI: 8.4, 11.6); and they were categorized into three groups: 1) normal (BDI-II 0–13) which had a prevalence proportion of 73% (95% CI: 65.6, 79.3), 2), mild clinical depression (BDI-II: 14–19) with a prevalence proportion of 15.4% (95% CI: 10.8, 21.6), in which they received psychological counseling in the same clinic by psychotherapist; and 3) clinical depression BDI-II ≥20 with a prevalence of 11.6% (95% CI: 8.1, 16.3) of undiagnosed depression cases which all were referred to psychology health centers for further evaluation and treatment (Table 2). Depression status prevalence proportion (n= 538) Relationship of depression status and predictors Under standard complex survey data setting, the Chi-squared test was used to compare categorical variables of depression status and other explanatory variables. Table 3 shows that depression status had relations with the categorical variables: level of education and BMI categories. 
Relation between depression status and participants characteristics Univariable linear regression analysis presented in Table 4 shows that increased BMI (β = 0.11, 95% CI: 0.01, 0.21), being non-adherent to antihypertensive medications (β = 1.3, 95% CI: 0.5, 2.1) and having better healthcare system support (β = 2.8, 95% CI: 1.2, 4.4) were significantly associated with increased BDI-II score. In contrast, older age (β = -1.5, 95% CI: - 0.29, -0.01), longer duration of hypertension (β = -0.12, 95% CI: -0.19, -0.04), stronger patient-doctor relationship (β = -3.7, 95% CI: -6.4, -1.1), and having superior social support (β = -7.1, 95% CI: -9.3, -4.9) were significantly associated with reduced BDI-II score. Univariate linear regression analysis of depression status and predictors Statistically significant variables for multivariable analysis Multivariable linear regression analysis in Table 5 revealed similar results except for increased BMI and duration of hypertension. Thus, non-adherence to antihypertensive medications (β = 0.9, 95% CI: 0.2, 1.7), having more healthcare system support (β = 2.8, 95% CI: 1.6, 3.9) and number of antihypertensive medications (β = 1.5, 95% CI: 0.6, 2.5) remain significantly positively associated with BDI-II score. On the other hand, older age (β = -0.11, 95% CI: -0.20, -0.02), having better social support (β = -6.8, 95% CI: -8.9, -4.7) and stronger relation with physician (β = -4.1, 95% CI: -6.9, -1.2) remain significantly negatively associated with BDI-II score. Multivariable linear regression analysis of depression status and predictors Discussion: To best of our knowledge, this is the first cross-sectional survey from Gaza to document the prevalence of undiagnosed depression among hypertensive patients. We found that the prevalence of undiagnosed clinical depression in our hypertensive population is almost higher than what is observed in Norway (6.2%) and South Africa (6%), and lower than in the US(44.9%), China (44.2%), Mexico (57.5%), Pakistan (40.1%), Croatia (29.6%) and Nigeria (26.7%). However, it was similar to that observed in Brazil (12.1%), Ghana (10.5%) and Netherlands (11.4%) (9, 21). In addition, this prevalence was lower than the rate reported in a systematic review and meta-analysis (26.8%) which summarized the prevalence of depression among hypertensive individuals in 41 studies (9). Unfortunately, very limited data is available from Arabic regional countries. One study from Saudi Arabia found a prevalence of 20.7% (22). Still, no local prevalence was found from other parts of Palestine. Our study showed that the prevalence of depression among hypertensive patients was lower than the prevalence of patients with type two diabetic in the West Bank of Palestine (40%) (23). The associated factors found by multiple linear regression in this study were: age, number of antihypertensive medication, adherent status, healthcare system support, patient-doctor relationship and perceived social support. Increasing age was found to be an associated factor with increased depressive symptom in other studies (21, 22, 24, 25), although, our results predicted a negative relationship between age and depressive status. An explanation for this could be related to the nature of aged people in Gaza, since they have more acceptance and are adapted to their disease more than younger adults. However, gender and smoking (21, 22, 24, 26) found to be associated factors in previous studies, did not reach statistical significant level in our study. 
Longer hypertension duration has been previously found to be a significantly associated factor in an Indian study (26); and was associated factor in our univariable analysis, although it could not reach significant level in multivariable analysis. In addition, in line with what has been found on a systematic review (8), we found a negative relationship between depression and adherence to antihypertensive medications. Furthermore, statistically significant association with number of antihypertensive medication and depression was observed in our study, which was supported by previous findings (24, 27). In this study, we have investigated the relation of depression status with healthcare system support, patient-doctor relationship and perceived social support. Interestingly, a positive relationship between depression and healthcare system support was found. This could be explained by the frequent visit of the patient to the primary healthcare clinic, and his increased need to its supportive aids. On the other hand, it was a negative relationship with patient-doctor relationship and social support. It is highly believed to be negative relationship between depression and perceived social support since social support is a significant predictor of depression and hypertension treatment programs (28, 29). We shed the light on the undiagnosed depression cases among hypertensive persons receiving their usual primary health care in primary healthcare clinics. The estimated prevalence proportion of undiagnosed subclinical depression cases was about one-quarter of all cases, in which half of them had moderate to severe depression status. Furthermore, age, number of antihypertensive medication, adherence status, patient-physician relationship, health-care system support and social support were associated with depression status. We suggest periodically screening of depression and adherence status as a part of routine care for hypertensive primary healthcare seekers, particularly younger hypertensive patients. We also call for exploring the ways to promote social support among them. However, since this is the first study in Gaza Strip which assessed the psychological status among hypertensive patients, several limitations were observed. First, alcohol consumption is an important factor related to depression and adherence and may confound the reported relationships, although it has not been measured in this study. Our justification is that alcohol consumption is illegal and prohibited to be used in Gaza as Muslim citizens, and so it was difficult to get trustful answers from the participants. Thus, more than 98% of them answered no. Indeed, even if he/she was a user he/she would not tell the truth because of fear of legal prosecutions in spite of assurance about data confidentiality. As a result, we decided to omit this variable as a predictor from this study. The second limitation is that some antihypertensive medications may be related with depression. However, we were concerned only about the number of antihypertensive medications and the frequency and did not discuss the type and classes of the medications. Another limitation of this study is that the prevalence of depression would be underestimated because we excluded the already known patients with a history of depression or currently on antidepressant medications. In addition, BDI-II is a depression screening tool only and not a diagnostic tool. 
This will surely affect the true prevalence of depression and therefore this could be another limitation regarding the estimation of the true prevalence. However, BDI-II as a screening tool gave a good impression of the depression prevalence among hypertensive patients and was able to discover many unknown cases that were referred for further diagnostic procedures. Despite the limitations, the findings support the routine screening of depression and medication adherence status as a part of care for hypertensive patients in PHCCs.
Background: The aim of this study was to estimate the prevalence and to determine the associated factors of undiagnosed depression amongst hypertensive patients (HTNP) at primary health care centers (PHCC) in Gaza. Methods: A cross-sectional survey was conducted including 538 HTNP as a recruitment phase of a clustered randomized controlled trial. Data were collected through face-to-face structured interview, and depression status was assessed by Beck's Depression Inventory (BDI-II). Data were analyzed by STATA version 14 using standard complex survey analyses, accounted for unresponsiveness and clustering approach. Generalized linear regression analysis was performed to assess associations. Results: The prevalence of undiagnosed clinical depression was 11.6% (95% confidence interval [CI]: 8.1, 16.3). Moreover, prevalence of 15.4% (95% CI: 10.8, 21.6) was found for mild depression symptoms. We found that non-adherence to antihypertensive medications (AHTNM) (β = 0.9, 95% CI: 0.17, 1.7), having more health-care system support (β = 2.8, 95% CI: 1.6, 3.9) and number of AHTNM (β = 1.5, 95% CI: 0.6, 2.5) remain significantly positively associated with BDI-II score. On the other hand, older age (β = -0.1, 95% CI: -0.2, -0.02), having better social support (β = -6.8, 95% CI: -8.9, -4.7) and having stronger patient-doctor relationship (β = -4.1, 95% CI: -6.9, -1.2) kept significantly negative association. Conclusions: The prevalence of undiagnosed depression was about one-quarter of all cases; half of them were moderate to severe. Routine screening of depression status should be a part of the care of HTNP in PHCC.
null
null
3,348
350
[]
4
[ "depression", "status", "support", "study", "ii", "95", "bdi ii", "bdi", "ci", "95 ci" ]
[ "clinical depression hypertensive", "predictor depression hypertension", "depression hypertension different", "occurrence depression hypertension", "prevalence depression hypertensive" ]
null
null
[CONTENT] Hypertension | Depression | Prevalence | Medication adherence [SUMMARY]
[CONTENT] Hypertension | Depression | Prevalence | Medication adherence [SUMMARY]
[CONTENT] Hypertension | Depression | Prevalence | Medication adherence [SUMMARY]
null
[CONTENT] Hypertension | Depression | Prevalence | Medication adherence [SUMMARY]
null
[CONTENT] Aged | Antihypertensive Agents | Cross-Sectional Studies | Depression | Depressive Disorder | Humans | Hypertension | Prevalence [SUMMARY]
[CONTENT] Aged | Antihypertensive Agents | Cross-Sectional Studies | Depression | Depressive Disorder | Humans | Hypertension | Prevalence [SUMMARY]
[CONTENT] Aged | Antihypertensive Agents | Cross-Sectional Studies | Depression | Depressive Disorder | Humans | Hypertension | Prevalence [SUMMARY]
null
[CONTENT] Aged | Antihypertensive Agents | Cross-Sectional Studies | Depression | Depressive Disorder | Humans | Hypertension | Prevalence [SUMMARY]
null
[CONTENT] clinical depression hypertensive | predictor depression hypertension | depression hypertension different | occurrence depression hypertension | prevalence depression hypertensive [SUMMARY]
[CONTENT] clinical depression hypertensive | predictor depression hypertension | depression hypertension different | occurrence depression hypertension | prevalence depression hypertensive [SUMMARY]
[CONTENT] clinical depression hypertensive | predictor depression hypertension | depression hypertension different | occurrence depression hypertension | prevalence depression hypertensive [SUMMARY]
null
[CONTENT] clinical depression hypertensive | predictor depression hypertension | depression hypertension different | occurrence depression hypertension | prevalence depression hypertensive [SUMMARY]
null
[CONTENT] depression | status | support | study | ii | 95 | bdi ii | bdi | ci | 95 ci [SUMMARY]
[CONTENT] depression | status | support | study | ii | 95 | bdi ii | bdi | ci | 95 ci [SUMMARY]
[CONTENT] depression | status | support | study | ii | 95 | bdi ii | bdi | ci | 95 ci [SUMMARY]
null
[CONTENT] depression | status | support | study | ii | 95 | bdi ii | bdi | ci | 95 ci [SUMMARY]
null
[CONTENT] depression | disease | hypertension | gs | diseases | burden disease | burden | increase | globally | affects [SUMMARY]
[CONTENT] status | health | questionnaire | bdi ii | bdi | ii | participants | support | depressive symptoms | symptoms [SUMMARY]
[CONTENT] ci | 95 | 95 ci | depression | status | ii | bdi | bdi ii | depression status | mean [SUMMARY]
null
[CONTENT] depression | ci | 95 | 95 ci | status | support | prevalence | study | bdi | bdi ii [SUMMARY]
null
[CONTENT] Gaza [SUMMARY]
[CONTENT] 538 ||| Beck's Depression Inventory | BDI-II ||| 14 ||| [SUMMARY]
[CONTENT] 11.6% | 95% ||| CI | 8.1 | 16.3 ||| 15.4% | 95% | CI | 10.8 | 21.6 ||| 0.9 | 95% | CI | 0.17 | 1.7 | 2.8 | 95% | CI | 1.6 | 3.9 | AHTNM | 1.5 | 95% | CI | 0.6 | 2.5 | BDI-II ||| 95% | CI | 95% | CI | 95% | CI ||| [SUMMARY]
null
[CONTENT] Gaza ||| 538 ||| Beck's Depression Inventory | BDI-II ||| 14 ||| ||| ||| 11.6% | 95% ||| CI | 8.1 | 16.3 ||| 15.4% | 95% | CI | 10.8 | 21.6 ||| 0.9 | 95% | CI | 0.17 | 1.7 | 2.8 | 95% | CI | 1.6 | 3.9 | AHTNM | 1.5 | 95% | CI | 0.6 | 2.5 | BDI-II ||| 95% | CI | 95% | CI | 95% | CI ||| ||| one-quarter | half ||| PHCC [SUMMARY]
null
The magnitude of syphilis: from prevalence to vertical transmission.
29267586
In 2013, the World Health Organization (WHO) reported that 1.9 million pregnant women were infected with syphilis worldwide, and that 66.5% of untreated cases resulted in adverse fetal outcomes. Congenital syphilis contributes significantly to infant mortality, accounting for 305,000 perinatal deaths worldwide annually.
INTRODUCTION
A cross-sectional study was conducted with data collected from 2,041 parturients treated between 2012 and 2014 in the maternity unit of the Pedro Ernesto Hospital of the State University of Rio de Janeiro, in the metropolitan area of Rio de Janeiro. The inclusion criterion was a positive VDRL and treponemal test in the hospital setting.
MATERIAL AND METHODS
The prevalence of syphilis in pregnant women was 4.1% in 2012, 3.1% in 2013 and 5% in 2014, with official reporting of 15.6%, 25.0% and 48.1%, respectively. The incidence of congenital syphilis (CS) was 22/1,000 live births (LB) in 2012, 17/1,000 LB in 2013 and 44.8/1,000 LB in 2014. CS underreporting during the period was 6.7%. Vertical transmission occurred in 65.8% of infants from infected mothers. In 34.6% of the CS cases, maternal VDRL titers were ≤ 1/4.
RESULTS
The results demonstrate the magnitude of the disease, the fragility of the reporting system in assessing the actual prevalence, and the impact on perinatal outcomes; they are a warning about the real situation of syphilis, which is still underestimated in the State.
CONCLUSION
[ "Brazil", "Cross-Sectional Studies", "Female", "Humans", "Incidence", "Infant, Newborn", "Infectious Disease Transmission, Vertical", "Pregnancy", "Pregnancy Complications, Infectious", "Prevalence", "Syphilis", "Syphilis, Congenital" ]
5738763
INTRODUCTION
Syphilis is a sexually transmitted disease caused by the bacterium Treponema pallidum. The infection can be transmitted to the unborn child through the placenta at any gestational stage. In exceptional cases, syphilis can be transmitted at birth through the child's contact with the birth canal, when there are maternal genital lesions (direct transmission). Syphilis may only be transmitted through breastfeeding if there are syphilitic breast lesions (1). Congenital infection is associated with adverse outcomes, including perinatal death, preterm birth, low birth weight, congenital anomalies, active syphilis in the newborn (NB), and long-term sequelae such as deafness and neurological impairment (2). There has been a significant global increase in the prevalence of syphilis in pregnant women, despite all government efforts to control the disease. In 2013, the WHO reported 1.9 million pregnant women infected with syphilis worldwide, with 66.5% of adverse fetal outcomes occurring in cases of untreated syphilis (3). Congenital syphilis significantly contributes to infant mortality, accounting for 305,000 perinatal deaths per year worldwide (4). More newborns are affected by syphilis than by any other neonatal infection. An estimated 520,000 adverse fetal outcomes occur worldwide each year, making congenital syphilis more common than congenital HIV infection (5). In areas where syphilis is prevalent, about half of the stillbirth rate may be attributable to this infection alone (6). The direct medical cost associated with adverse fetal outcomes resulting from syphilis is US$ 309 million a year (4). High rates of syphilis in parturients mean a high incidence of congenital syphilis and millions of missed opportunities to save lives during prenatal care. Over the past ten years, the infant mortality rate due to syphilis increased by 150% in Brazil, from 2.2 per 100,000 live births (LB) in 2004 to 5.5 per 100,000 LB in 2013. By 2015, the State of Rio de Janeiro had the highest rate of syphilis during pregnancy (2.2%) and of congenital syphilis in the country (16 cases per 1,000 LB) (7, 8). The most recent syphilis monitoring study in pregnant women, performed in Brazil in 2014 with a statistical sample from public and private hospitals, estimated the prevalence of syphilis at 1.02%, with no significant regional differences (9). According to the prediction curve of the disease, the Ministry of Health (MH) expected 40,000 cases in pregnant women in Brazil in 2016, and an epidemic of the disease in the country was confirmed by the Ministry of Health in 2016. The increasing incidence of syphilis in pregnant women is alarming (8). In 2015, according to the MH, the States of Rio de Janeiro and Mato Grosso do Sul were the most affected in the country, reaching a syphilis rate in pregnant women of 2.2% (8). National studies show differences in the prevalence of the disease according to the region studied, with rates of around 0.4% in Vitória (Espírito Santo) (10) and Itajaí (Santa Catarina) (11), and 7.7% in Fortaleza (Ceará) (12). International studies show that the prevalence of syphilis in pregnant women reaches 7.6% in countries such as Haiti (13), while in other countries, such as Canada, syphilis has already been eliminated (14). In 2013, the WHO certified Cuba as the first country to eliminate the vertical transmission of HIV and syphilis during pregnancy. 
Six other countries, including Canada and the United States, are currently in a position to request WHO validation of this double elimination (15). Aim: To estimate the prevalence and the reporting coverage of syphilis in parturients, the incidence of congenital syphilis and its underreporting, and the rate of vertical transmission of the infection.
null
null
RESULTS
A total of 2,041 parturients were studied at HUPE: 735 in 2012, 664 in 2013 and 642 in 2014. After the exclusion of 19 pregnant women because of serological scarring or false-positive results (negative treponemal test), 79 parturients with syphilis were identified: 32 in 2012, 20 in 2013 and 27 in 2014. Results of the treponemal and non-treponemal tests for syphilis were obtained for all pregnant women admitted during the period. The search in SINASC, with HUPE as the place of birth, totaled 1,956 live births at HUPE over the three years studied: 773 in 2012, 647 in 2013 and 536 in 2014. The prevalence of syphilis at HUPE, using the number of live births officially recorded in SINASC as the denominator, was 4.1% in 2012, 3.1% in 2013 and 5.0% in 2014; the prevalence of the disease therefore increased between 2012 and 2014 (Table 1).
Table 1 legend: live births (LB); live birth information system (SINASC); confidence limit (CL).
The search in SINAN identified 23 cases of syphilis at HUPE for the three years studied (5 in 2012, 5 in 2013 and 13 in 2014), whereas the present study identified 79 cases (32 in 2012, 20 in 2013 and 27 in 2014). Therefore, 15.6% of cases were reported in 2012, 25.0% in 2013 and 48.1% in 2014, a substantial increase over the period. Three parturients (3.8%) from HUPE were counted twice in the prevalence calculation because they were admitted with syphilis at the delivery of subsequent pregnancies. One of the three had a VDRL titer ≤ 1/4 that progressed to 1/128 in the next gestation; the other two had titers of 1/32 in the first gestation and titers of 1/32 and 1/128, respectively, in subsequent pregnancies. The incidence of CS was 22/1,000 LB in 2012, 17/1,000 LB in 2013 and 44.8/1,000 LB in 2014. Notification of congenital syphilis reached 92.3% of cases, and only four cases were not officially reported. The present study demonstrated a vertical transmission rate of 65.8%, with 52 newborns affected by congenital syphilis. Among the parturients with vertical transmission of the disease, 18 mothers had VDRL titers ≤ 1/4, corresponding to 34.6% of the congenital syphilis cases. Vertical transmission occurred in two consecutive pregnancies in one case within the low-titer group (≤ 1/4).
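The percentages quoted above follow directly from the reported counts. The small Python check below uses only numbers stated in this section and is provided purely so readers can trace how the proportions were obtained; it is not part of the study's analysis.

```python
# Back-of-the-envelope check using counts reported in this section.
cases = {2012: 32, 2013: 20, 2014: 27}           # parturients with syphilis identified at HUPE
live_births = {2012: 773, 2013: 647, 2014: 536}  # SINASC live births at HUPE
reported = {2012: 5, 2013: 5, 2014: 13}          # cases found in SINAN

for year in cases:
    prevalence_pct = 100 * cases[year] / live_births[year]
    coverage_pct = 100 * reported[year] / cases[year]
    print(f"{year}: prevalence {prevalence_pct:.1f}%, reporting coverage {coverage_pct:.1f}%")

# Vertical transmission: 52 congenital syphilis cases out of 79 infected mothers (~65.8%).
print(f"vertical transmission {100 * 52 / 79:.1f}%")
# Low-titer mothers among congenital syphilis cases: 18 of 52 (~34.6%).
print(f"low-titer share {100 * 18 / 52:.1f}%")
```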
CONCLUSIONS
The estimated prevalence of syphilis in pregnant women in Rio de Janeiro is at least twice as high as previously reported by regional studies and epidemiological surveillance data in the State, reaching levels found in African countries and exceeding the figures for other Latin American countries. These values are a warning about the magnitude of the disease in the State of Rio de Janeiro, its upward curve and its consequences. Our reporting system has proven fragile and unable to capture the current prevalence of syphilis in Rio de Janeiro, which remains underestimated. Because of the high prevalence of syphilis among parturients, there was a high rate of CS and of vertical transmission, which occurred even in pregnant women with low VDRL titers, underscoring the importance of considering low titers in the management of these patients. It is the responsibility of the health professionals who deal with this sad reality daily to strive for excellence in the care of pregnant women. Even in the 21st century, the high rates of syphilis in parturients and of CS in Brazil are still ignored and underestimated.
[ "Aim", "Study design", "Studied population", "Data collection", "Inclusion criterion", "Exclusion criterion", "Variable definition", "Epidemiological definitions", "Data analysis", "Considerations on ethical aspects" ]
[ "To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection.", "Cross-sectional study.", "A total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ).", "At HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System).", "The following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams.\nThe following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted.", "The following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery).", "The following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS\n7\n.", "According to the recommendation of the strategic management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given 
period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year\n16\n. The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system).\nFor the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator.\nTo evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100.\nFollowing the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator.\nIn order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100.", "The sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil\n9\n. To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI.\nCollected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program.", "The research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "Aim", "MATERIALS AND METHODS", "Study design", "Studied population", "Data collection", "Inclusion criterion", "Exclusion criterion", "Variable definition", "Epidemiological definitions", "Data analysis", "Considerations on ethical aspects", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Syphilis is a sexually transmitted disease caused by the bacterium Treponema pallidum. Infection can be transmitted to the unborn child through the placenta at any gestational stage. In exceptional cases, syphilis can be transmitted at birth through the child's contact with the birth canal, when there are maternal genital lesions (direct transmission). Syphilis may only be transmitted through breastfeeding if there are breast syphilis lesions\n1\n.\nCongenital infection is associated with adverse outcomes, including: perinatal death, preterm birth, low birth weight, congenital anomalies, active syphilis in the newborn (NB), and long-term sequelae, such as deafness and neurological impairment\n2\n.\nThere is a significant global increase in the prevalence of syphilis in pregnant women, despite all efforts from governments to control the disease. In 2013, the WHO reported 1.9 million pregnant women infected with syphilis worldwide, with 66.5% of adverse fetal outcomes occurring in cases of untreated syphilis\n3\n. Congenital syphilis significantly contributes to infant mortality, accounting for 305,000 perinatal deaths per year worldwide\n4\n. There are more newborns affected by syphilis than by any other neonatal infection. It is estimated that 520,000 adverse fetal outcomes occur worldwide each year, making congenital syphilis more common than congenital HIV infection\n5\n. In areas where syphilis is prevalent, about half of the stillbirth rate may have been caused solely by this infection\n6\n. The direct medical cost associated with adverse fetal outcomes resulting from syphilis is U$ 309 million dollars a year\n4\n.\nHigh rates of syphilis in parturients mean a high incidence of congenital syphilis and millions of missed opportunities to save lives during pre-natal care. Over the past ten years, the infant mortality rate due to syphilis increased 150% in Brazil, from 2.2 per 100,000 live births (LB) in 2004 to 5.5 per 100,000 LB in 2013. By 2015, the State of Rio de Janeiro had the highest rate of syphilis during pregnancy (2.2%) and of congenital syphilis in the country (16 cases per 1,000 LB)\n7\n\n,\n\n8\n.\nThe last syphilis monitoring study in pregnant women performed in Brazil in 2014, with a statistical sample from public and private hospitals, estimated the prevalence of syphilis at 1.02%, with no significant regional differences\n9\n.\nAccording to the prediction curve of the disease, the Ministry of Health (MH) expected 40,000 cases in pregnant women in Brazil in 2016. An epidemic of the disease in the country was confirmed by the Ministry of Health in 2016. The increased incidence of syphilis in pregnant women is alarming\n8\n. In 2015, according to the MH, the States of Rio de Janeiro and Mato Grosso do Sul were the most affected in the country, reaching a syphilis rate in pregnant women of 2.2%\n8\n.\nThe national studies show differences in the prevalence of the disease according to the region studied, with rates ranging around 0.4% in Vitória (Espírito Santo)\n10\n, Itajaí (Santa Catarina)\n11\n and 7.7% in Fortaleza (Ceará)\n12\n.\nThe analysis of international studies on the subject in countries such as Haiti\n13\n shows that the prevalence of syphilis in pregnant women reaches 7.6%, while in other countries, such as Canada, syphilis has already been eliminated\n14\n. The WHO certified Cuba in 2013 as the first country to eliminate the vertical transmission of HIV and syphilis during pregnancy. 
Six other countries are currently able to request the WHO to validate the double elimination, including Canada and the United States\n15\n.\n Aim To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection.\nTo estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection.", "To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection.", " Study design Cross-sectional study.\nCross-sectional study.\n Studied population A total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ).\nA total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ).\n Data collection At HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System).\nAt HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. 
A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System).\n Inclusion criterion The following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams.\nThe following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted.\nThe following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams.\nThe following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted.\n Exclusion criterion The following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery).\nThe following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery).\n Variable definition The following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS\n7\n.\nThe following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS\n7\n.\n Epidemiological definitions According to the recommendation of the strategic 
management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year\n16\n. The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system).\nFor the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator.\nTo evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100.\nFollowing the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator.\nIn order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100.\nAccording to the recommendation of the strategic management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year\n16\n. The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system).\nFor the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator.\nTo evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100.\nFollowing the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator.\nIn order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100.\n Data analysis The sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil\n9\n. To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI.\nCollected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. 
The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program.\nThe sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil\n9\n. To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI.\nCollected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program.\n Considerations on ethical aspects The research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012.\nThe research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012.", "Cross-sectional study.", "A total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ).", "At HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. 
A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System).", "The following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams.\nThe following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted.", "The following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery).", "The following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS\n7\n.", "According to the recommendation of the strategic management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year\n16\n. The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system).\nFor the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator.\nTo evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100.\nFollowing the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator.\nIn order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100.", "The sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil\n9\n. 
To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI.\nCollected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program.", "The research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012.", "A total of 2,041 parturients were studied in HUPE: 735 in 2012, 664 in 2013 and 642 in 2014. After the exclusion of 19 pregnant women due to serological scarring or false-positive results (treponemal negative test), 79 cases of parturients with syphilis were identified: 32 cases in 2012, 20 in 2013 and 27 in 2014. Access to the result of the treponemal and non-treponemal tests for syphilis was obtained from all the pregnant women admitted during the period.\nThe search on SINASC, considering HUPE as the place of birth, totaled 1,956 live births in HUPE for the three years studied, with 773 live births in 2012, 647 in 2013 and 536 in 2014. The prevalence of syphilis in the HUPE, using the number of live births officially described in SINASC as the denominator, was 4.1% in 2012, 3.1% in 2013 and 5.0% in 2014. Between 2012 and 2014, there was an increase on the prevalence of the disease (Table 1).\nLegend: live births (LB); live birth information system (SINASC); confidence limit (CL)\nThe search on SINAN came up with 23 cases of syphilis in HUPE for the three years studied (2012 = 5, 2013 = 5 and 2014 = 13). The research showed 79 cases in the three years (2012 = 32, 2013 = 20 and 2014 = 27).\nTherefore, 15.6% of cases in 2012, 25.0% in 2013 and 48.1% in 2014 were reported, which shows a significant increase for the period.\nThree parturients (3.8%) from HUPE were double counted in the prevalence calculation, when they were admitted with syphilis at the time of delivery of subsequent pregnancies. One of the three parturients had VDRL titer =1/4 and progressed to 1/128 in the next gestation. The other two had 1/32 titers in the first gestation and had titers of 1/32 and 1/128, respectively, in subsequent pregnancies.\nThe incidence of CS was 22/1,000 LB in 2012, 17/1,000 LB in 2013 and 44.8/ 1,000 LB in 2014. The notification of congenital syphilis reached 92.3% of the cases and only four cases were not officially reported. The present study demonstrated a vertical transmission rate of 65.8%, with 52 newborns affected by congenital syphilis.\nFrom the group of parturients with vertical transmission of the disease, 18 mothers had VDRL titers = 1/4, corresponding to 34.6% of the cases of congenital syphilis. The vertical transmission occurred in two consecutive pregnancies in one case that was within the low titers group (= 1/4).", "The latest national studies in public or public-private hospitals that monitor the prevalence of syphilis in the country demonstrated a reduction in the prevalence of syphilis among parturients in Brazil. Sentinel studies revealed a prevalence of 1.6%\n17\n in 2004 and 1.02%\n9\n in 2014.\nHowever, the present study does not confirm this trend and demonstrates that the disease is not under control in the State of Rio de Janeiro. 
In 2014, the prevalence rate of 5.0% was four-fold higher than the one registered in the country in 2014 by Domingues et al.\n\n19\n and twice higher than the one registered in the State of Rio de Janeiro in 2008 by the same author\n18\n.\nThe results are similar to those described by the MH regarding the ascending curve of the disease in recent years, but they express a significantly higher prevalence when compared with data from the epidemiological surveillance agencies for the same period. In 2014, although the State of Rio de Janeiro had the highest syphilis rate in pregnant women in the country (2.2%), the prevalence registered at HUPE was 2.3 times higher than the official one\n10\n.\nThe prevalence registered in 2012 was similar to the one of the Republic of Congo in Africa (4.2%)\n19\n. It was superior to levels in other Latin American countries in the three years studied and it is a warning regarding the epidemiological situation of syphilis in the State\n20\n.\nFor some authors, the increase in the disease prevalence in the country may be associated with the improvement in the diagnosis and reporting systems, and not necessarily with the rise in the total number of cases. However, for some infectologists, the rising prevalence is due to the increasing practice of unprotected sex, which would intensify the number of cases of syphilis in the adult population and, consequently, during pregnancy. For others, there are flaws in the gestation treatment of prenatal services, in both diagnosis and treatment of the pregnant women and their partners\n7\n\n,\n\n21\n.\nA possible limitation of the present study is the selection bias, since the prevalence can be overestimated by including parturients with serological scarring (<1/8) in their calculation, even after the meticulous evaluation of the medical records and considering epidemiological history. The MH considers low VDRL titers when diagnosing syphilis, due to the variations in readings and the lack of detection by the tests. All the national studies consulted, including the sentinel studies in parturients, used any VDRL title for the calculation of the prevalence in its methodology\n9\n\n,\n\n22\n. Furthermore, several authors have already described the clinical significance of low titers of VDRL in the diagnosis of congenital syphilis\n23\n.\nAlthough there has been a substantial increase in the reporting of syphilis in parturients in HUPE between 2012 and 2014, it has not reached half of the cases, which demonstrates how fragile the epidemiological surveillance system is in identifying and reporting all cases. The considerable increase in the coverage of notifications follows the increased number of notifications observed in Brazil, which, according to the MH, is related to the Rede Cegonha program, which increased the coverage of pregnant women testing and follow-up of cases\n24\n. In Brazil, between 2012 and 2013, there was a 25% increase in the number of syphilis reports in pregnant women, and 23% if only the Southeast region is considered, which is responsible for 45.9% of the country's notifications. The increase of the number of notifications in the State of Rio de Janeiro was 16%. 
In the country, São Paulo is the State with the highest number of notifications, followed by the State of Rio de Janeiro\n25\n.\nAlthough the surveillance agencies considered underreporting for the purposes of disease monitoring, the present study shows that the level of underreporting was higher than estimated and should not be used as a basis for calculation. Without the proper notification, there is a tendency to underestimate the problem. However, the underreporting found in HUPE in 2013 is similar to the one described by Cavalcante et al.\n\n26\n in Fortaleza and Domingues et al.\n\n18\n in SUS units (state health clinics and hospitals) in the city of Rio de Janeiro, where underreporting was 73%, 70.3% and 76.1%, respectively, demonstrating the fragility of the information services in Brazil to work as a basis for promoting measures to combat the disease.\nThe high prevalence of the disease among the parturients of the present study resulted in an incidence of congenital syphilis (26.6 per 1,000 LB) greater than that reported in the State of Rio de Janeiro in 2015 (16 per 1,000 LB)\n10\n and at least five-fold greater than the incidence of the disease in Brazil, according to the Nascer Study (3.51 per 1,000 LB)\n27\n.\nRegarding the notification of congenital syphilis, the present study reached more than 90%, a result far superior to that reported by the MH, which estimates notification at only 17.4% of the cases of CS in the country\n24\n. HUPE is located in the capital of the State of Rio de Janeiro and, according to the epidemiological bulletin of the State Health Department, it is the city with the highest number of notifications of CS in the State (63.5%)\n25\n.\nThe high vertical transmission rate of syphilis in this study (65.8%) shows the high prevalence of syphilis among parturients, since among the various diseases that can be transmitted during the puerperal-pregnancy cycle, syphilis is the one that presents the greater chances of transmission\n28\n. Kupek et al.\n\n11\n showed similar rates (68.9%). Domingues et al.\n18\n and the MH\n7\n, however, estimated lower rates of vertical transmission, 34.8% in Rio de Janeiro and 25% in Brazil, respectively.\nThe occurrence of maternal syphilis in a subsequent gestation demonstrated that a prior gestation with syphilis did not eliminate the risk of the disease in future pregnancies. The recurrence of syphilis in two distinct pregnancies is the major sign of the inefficiency of the health system in properly conducting these cases.\nFetal infection occurred even in the presence of low VDRL maternal titers. The lack of proper consideration of low titers of VDRL by the health professionals is still a barrier to the control of congenital syphilis. In a study with health professionals, it was observed that only 48% had adequate knowledge about the management of syphilis, especially regarding the treatment of pregnant women with low titration\n29\n.\nSince syphilis is an easily diagnosed disease that has an established treatment and generates low costs for testing and treatment during pregnancy, it should be possible to control the disease. However, the disease continues to advance in Brazil in alarming numbers.", "The estimated prevalence of syphilis in pregnant women in Rio de Janeiro is at least twice as high as previously reported by regional studies and epidemiological surveillance data in the State, reaching levels found in African countries and higher than the figures for other countries in Latin America. 
These values are a warning about the magnitude of the disease in the State of Rio de Janeiro, its upward curve and its consequences. Our reporting system has proven to be fragile and unable to capture the current prevalence of syphilis in Rio de Janeiro, which is therefore underestimated. Because of the high prevalence of syphilis among parturients, there were high rates of CS and of vertical transmission, which occurred even in pregnant women with low VDRL titers, emphasizing the importance of taking low titers into account in the management of these patients.\nIt is the responsibility of health professionals, who deal with this sad reality daily, to strive for excellence in the care of pregnant women. Even in the 21st century, the high rates of syphilis in parturients and of CS in Brazil are still ignored and underestimated." ]
[ "intro", null, "materials|methods", null, null, null, null, null, null, null, null, null, "results", "discussion", "conclusions" ]
[ "Syphilis", "Gestation", "Congenital syphilis", "Prevalence", "VDRL", "Treponemal tests", "Treponema pallidum" ]
INTRODUCTION: Syphilis is a sexually transmitted disease caused by the bacterium Treponema pallidum. Infection can be transmitted to the unborn child through the placenta at any gestational stage. In exceptional cases, syphilis can be transmitted at birth through the child's contact with the birth canal, when there are maternal genital lesions (direct transmission). Syphilis may only be transmitted through breastfeeding if there are breast syphilis lesions 1 . Congenital infection is associated with adverse outcomes, including: perinatal death, preterm birth, low birth weight, congenital anomalies, active syphilis in the newborn (NB), and long-term sequelae, such as deafness and neurological impairment 2 . There is a significant global increase in the prevalence of syphilis in pregnant women, despite all efforts from governments to control the disease. In 2013, the WHO reported 1.9 million pregnant women infected with syphilis worldwide, with 66.5% of adverse fetal outcomes occurring in cases of untreated syphilis 3 . Congenital syphilis significantly contributes to infant mortality, accounting for 305,000 perinatal deaths per year worldwide 4 . There are more newborns affected by syphilis than by any other neonatal infection. It is estimated that 520,000 adverse fetal outcomes occur worldwide each year, making congenital syphilis more common than congenital HIV infection 5 . In areas where syphilis is prevalent, about half of the stillbirth rate may have been caused solely by this infection 6 . The direct medical cost associated with adverse fetal outcomes resulting from syphilis is U$ 309 million dollars a year 4 . High rates of syphilis in parturients mean a high incidence of congenital syphilis and millions of missed opportunities to save lives during pre-natal care. Over the past ten years, the infant mortality rate due to syphilis increased 150% in Brazil, from 2.2 per 100,000 live births (LB) in 2004 to 5.5 per 100,000 LB in 2013. By 2015, the State of Rio de Janeiro had the highest rate of syphilis during pregnancy (2.2%) and of congenital syphilis in the country (16 cases per 1,000 LB) 7 , 8 . The last syphilis monitoring study in pregnant women performed in Brazil in 2014, with a statistical sample from public and private hospitals, estimated the prevalence of syphilis at 1.02%, with no significant regional differences 9 . According to the prediction curve of the disease, the Ministry of Health (MH) expected 40,000 cases in pregnant women in Brazil in 2016. An epidemic of the disease in the country was confirmed by the Ministry of Health in 2016. The increased incidence of syphilis in pregnant women is alarming 8 . In 2015, according to the MH, the States of Rio de Janeiro and Mato Grosso do Sul were the most affected in the country, reaching a syphilis rate in pregnant women of 2.2% 8 . The national studies show differences in the prevalence of the disease according to the region studied, with rates ranging around 0.4% in Vitória (Espírito Santo) 10 , Itajaí (Santa Catarina) 11 and 7.7% in Fortaleza (Ceará) 12 . The analysis of international studies on the subject in countries such as Haiti 13 shows that the prevalence of syphilis in pregnant women reaches 7.6%, while in other countries, such as Canada, syphilis has already been eliminated 14 . The WHO certified Cuba in 2013 as the first country to eliminate the vertical transmission of HIV and syphilis during pregnancy. 
Six other countries are currently able to request the WHO to validate the double elimination, including Canada and the United States 15 . Aim To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection. To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection. Aim: To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection. MATERIALS AND METHODS: Study design Cross-sectional study. Cross-sectional study. Studied population A total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ). A total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ). Data collection At HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System). At HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. 
An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System). Inclusion criterion The following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams. The following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted. The following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams. The following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted. Exclusion criterion The following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery). The following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery). Variable definition The following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS 7 . 
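Taken together, the inclusion, exclusion and CS definitions above amount to a simple classification rule. The sketch below restates that logic in code purely as an illustration; every field name in it (vdrl_positive, tpha_positive, adequately_treated_previous_syphilis, and so on) is hypothetical and not taken from the study database.

def maternal_syphilis_case(vdrl_positive, tpha_positive,
                           adequately_treated_previous_syphilis,
                           newborn_reported_as_cs):
    # Exclusion criterion: a positive VDRL explained by adequately treated previous syphilis.
    if adequately_treated_previous_syphilis:
        return False
    # Case definition: positive VDRL at admission (any titer) confirmed by the treponemal test,
    # or the newborn reported as congenital syphilis in any information system consulted.
    return (vdrl_positive and tpha_positive) or newborn_reported_as_cs

def congenital_syphilis_case(reported_as_early_cs, newborn_titer, maternal_titer,
                             clinical_or_lab_evidence):
    # CS definition: reported as early congenital syphilis in any information system,
    # newborn VDRL titer higher than the maternal one,
    # or clinical manifestations / complementary tests pointing to CS.
    higher_titer = (newborn_titer is not None and maternal_titer is not None
                    and newborn_titer > maternal_titer)
    return reported_as_early_cs or higher_titer or clinical_or_lab_evidence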
The following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS 7 . Epidemiological definitions According to the recommendation of the strategic management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year 16 . The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system). For the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator. To evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100. Following the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator. In order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100. According to the recommendation of the strategic management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year 16 . The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system). For the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator. To evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100. Following the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator. In order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100. 
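For clarity, the indicators defined above can be written as formulas. The notation is introduced here and is not part of the original text: G is the number of cases of syphilis detected in pregnant women, G_SINAN the number of those cases reported in SINAN, C the number of congenital syphilis cases identified in the study, C_SINAN the number of CS cases reported in SINAN, and N the number of live births in the same place and period (from SINASC). To match the way the results are expressed (the percentage of cases that were notified), both underreporting evaluations are written here as reporting coverage, that is, notified cases divided by identified cases.

\text{Prevalence per 1,000 LB} = \frac{G}{N} \times 1000, \qquad \text{Prevalence (\%)} = \frac{G}{N} \times 100

\text{Reporting coverage of maternal syphilis (\%)} = \frac{G_{\mathrm{SINAN}}}{G} \times 100

\text{CS incidence per 1,000 LB} = \frac{C}{N} \times 1000, \qquad \text{CS reporting coverage (\%)} = \frac{C_{\mathrm{SINAN}}}{C} \times 100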
Data analysis The sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil 9 . To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI. Collected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program. The sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil 9 . To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI. Collected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program. Considerations on ethical aspects The research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012. The research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012. Study design: Cross-sectional study. Studied population: A total of 2,041 pregnant women were recruited after admission to a public hospital in the metropolitan region of Rio de Janeiro, between January 2012 and December 2014. The institution involved in the study was the Pedro Ernesto Hospital of the State University of Rio de Janeiro (HUPE / UERJ). Data collection: At HUPE, blood samples are routinely collected for syphilis investigation upon admission, with both non-treponemal testing (VDRL-Venereal Disease Research Laboratory) and confirmatory treponemal test (TPHA-Treponema Pallidum Hemagglutination). The confirmation via the treponemal test is important due to the possibility of a false-positive result. All the tests were performed in the clinical analysis laboratory of HUPE. According to the protocol of the Brazilian Ministry of Health, blood samples are collected for syphilis testing from all parturients. An active search of syphilis serology results (treponemal and non-treponemal tests) with the respective titrations was conducted by accessing the database of the hospital laboratory. A thorough review of the medical records of pregnant women and their newborns was also conducted. Data from the HUPE epidemiology service were evaluated in order to identify the total number of infected pregnant women and newborns, in order to avoid underestimating the results. A survey was conducted on the website of the State's Department of Health website to collect data on syphilis reports during pregnancy, as well as reports of congenital syphilis on the SINAN (National Disease Notification System) and the number of live births in the SINASC (Live Births Notification System). 
Inclusion criterion: The following parturients were eligible for the study: pregnant women admitted for delivery with a live fetus of any gestational age and weight, stillbirths with gestational age equal to22 weeks or weight equal to 500 grams. The following cases were defined as syphilis during pregnancy: all cases in which the parturient was admitted with laboratory evidence of positive VDRL (any titer) collected at the time of admission and confirmed by the treponemal test; all cases in which the parturient's newborn (stillbirth or live birth) has been reported as a case of congenital syphilis (CS), identified in any of the information systems consulted. Exclusion criterion: The following parturients were excluded: the ones with positive VDRL resulting from adequately treated previous syphilis (complete treatment with benzathine penicillin, according to the clinical stage of the disease, complete treatment of the partner, documentation confirming the couple's treatment, drop in VDRL titers after adequate treatment, treatment completed more than 30 days before delivery). Variable definition: The following situations were included as CS cases: all the gestation products (live births or stillborn) identified in any of the information systems as premature congenital syphilis; all newborns with VDRL titers higher than the maternal ones; all newborns with clinical manifestations suggestive of syphilis or complementary tests pointing to CS 7 . Epidemiological definitions: According to the recommendation of the strategic management of the MH, the prevalence of syphilis in pregnant women is the ratio of the number of cases of syphilis detected in pregnant women to every 1,000 live births, within a geographical space and for a given period. For the purpose of calculating the prevalence, we used as the numerator the number of cases of syphilis detected in pregnant women, in a given notification year and place of residence multiplied by 1,000 and divided by the total number of live births in the same place and in the same reported year 16 . The numerator could be found in the SINAN (national disease notification system) and the denominator in the SINASC (live births information system). For the calculation of the prevalence of parturients as a percentage, the number of cases of syphilis in parturients identified in the study was used as the numerator; the number of live births in that place and period, multiplied by 100 as the denominator. To evaluate underreporting, we divided the number of parturients with syphilis in SINAN by the number of parturients with syphilis in hospitals x 100. Following the MH recommendation for the calculation of the CS incidence, the number of CS cases identified in the study was used as the numerator and the number of live births at that location and period, multiplied by 1000, as the denominator. In order to evaluate the underreporting of congenital syphilis, we divided the number of CS cases by the number of cases reported in SINAN x 100. Data analysis: The sample size was calculated considering an expected prevalence of syphilis in pregnant women according to the MH in public hospitals of 1.1% in 2006, in Brazil 9 . To perform this calculation, assuming an error of 5% and a confidence interval (CI) of 95% the sample size was calculated to be 17 for 95% CI and 46 for 99,9% CI. Collected variables were used in the comparative analyzes to identify prevalence factors. The results were expressed as percentages. 
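The sample-size figures quoted above are consistent with the standard formula for estimating a single proportion, n = z^2 p (1 - p) / d^2, with an expected prevalence p = 1.1% and an absolute error d = 5%. The snippet below is only a sketch that assumes this standard formula was the one used; it reproduces the 95% figure exactly and the 99.9% figure approximately (47-48 rather than 46, a difference attributable to rounding of the critical value).

import math
from statistics import NormalDist

def sample_size_for_proportion(expected_prevalence, absolute_error, confidence):
    # n = z^2 * p * (1 - p) / d^2 for estimating a proportion with a given absolute error.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    p = expected_prevalence
    return math.ceil(z ** 2 * p * (1 - p) / absolute_error ** 2)

print(sample_size_for_proportion(0.011, 0.05, 0.95))   # 17, as reported in the text
print(sample_size_for_proportion(0.011, 0.05, 0.999))  # about 48; the text reports 46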
The process of entering and analyzing statistical data was performed through the EPIINFO 3.5.2 version 3.0.1 computing program. Considerations on ethical aspects: The research project was carried out within the standards required by the Declaration of Helsinki and Resolution 466 of December 12, 2012, and approved by the Research Ethics Committee of the UERJ (COEP) in July 2012, process N° 034.3.2012 and sponsored by FAPERJ, process N° E-26/110.351/2012. RESULTS: A total of 2,041 parturients were studied in HUPE: 735 in 2012, 664 in 2013 and 642 in 2014. After the exclusion of 19 pregnant women due to serological scarring or false-positive results (treponemal negative test), 79 cases of parturients with syphilis were identified: 32 cases in 2012, 20 in 2013 and 27 in 2014. Access to the result of the treponemal and non-treponemal tests for syphilis was obtained from all the pregnant women admitted during the period. The search on SINASC, considering HUPE as the place of birth, totaled 1,956 live births in HUPE for the three years studied, with 773 live births in 2012, 647 in 2013 and 536 in 2014. The prevalence of syphilis in the HUPE, using the number of live births officially described in SINASC as the denominator, was 4.1% in 2012, 3.1% in 2013 and 5.0% in 2014. Between 2012 and 2014, there was an increase on the prevalence of the disease (Table 1). Legend: live births (LB); live birth information system (SINASC); confidence limit (CL) The search on SINAN came up with 23 cases of syphilis in HUPE for the three years studied (2012 = 5, 2013 = 5 and 2014 = 13). The research showed 79 cases in the three years (2012 = 32, 2013 = 20 and 2014 = 27). Therefore, 15.6% of cases in 2012, 25.0% in 2013 and 48.1% in 2014 were reported, which shows a significant increase for the period. Three parturients (3.8%) from HUPE were double counted in the prevalence calculation, when they were admitted with syphilis at the time of delivery of subsequent pregnancies. One of the three parturients had VDRL titer =1/4 and progressed to 1/128 in the next gestation. The other two had 1/32 titers in the first gestation and had titers of 1/32 and 1/128, respectively, in subsequent pregnancies. The incidence of CS was 22/1,000 LB in 2012, 17/1,000 LB in 2013 and 44.8/ 1,000 LB in 2014. The notification of congenital syphilis reached 92.3% of the cases and only four cases were not officially reported. The present study demonstrated a vertical transmission rate of 65.8%, with 52 newborns affected by congenital syphilis. From the group of parturients with vertical transmission of the disease, 18 mothers had VDRL titers = 1/4, corresponding to 34.6% of the cases of congenital syphilis. The vertical transmission occurred in two consecutive pregnancies in one case that was within the low titers group (= 1/4). DISCUSSION: The latest national studies in public or public-private hospitals that monitor the prevalence of syphilis in the country demonstrated a reduction in the prevalence of syphilis among parturients in Brazil. Sentinel studies revealed a prevalence of 1.6% 17 in 2004 and 1.02% 9 in 2014. However, the present study does not confirm this trend and demonstrates that the disease is not under control in the State of Rio de Janeiro. In 2014, the prevalence rate of 5.0% was four-fold higher than the one registered in the country in 2014 by Domingues et al. 19 and twice higher than the one registered in the State of Rio de Janeiro in 2008 by the same author 18 . 
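As a consistency check, the headline indicators in the Results section above can be re-derived from the counts reported there (79 maternal cases and 52 CS cases against 1,956 live births in SINASC, with 5, 5 and 13 cases reported in SINAN per year). The short calculation below uses only those published numbers; it is an illustration and not part of the original analysis.

# Counts taken from the Results section.
cases_by_year = {2012: 32, 2013: 20, 2014: 27}           # maternal syphilis identified in the study
sinan_by_year = {2012: 5, 2013: 5, 2014: 13}             # maternal syphilis reported in SINAN
live_births_by_year = {2012: 773, 2013: 647, 2014: 536}  # live births in SINASC
cs_cases, total_cases, total_live_births = 52, 79, 1956

for year in cases_by_year:
    prevalence = 100 * cases_by_year[year] / live_births_by_year[year]
    coverage = 100 * sinan_by_year[year] / cases_by_year[year]
    print(year, round(prevalence, 1), round(coverage, 1))
# 2012: 4.1 and 15.6; 2013: 3.1 and 25.0; 2014: 5.0 and 48.1, matching the reported values.

print(round(100 * cs_cases / total_cases, 1))         # 65.8, the vertical transmission rate (%)
print(round(1000 * cs_cases / total_live_births, 1))  # 26.6 CS cases per 1,000 LB, as quoted in the Discussion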
The results are similar to those described by the MH regarding the ascending curve of the disease in recent years, but they express a significantly higher prevalence when compared with data from the epidemiological surveillance agencies for the same period. In 2014, although the State of Rio de Janeiro had the highest syphilis rate in pregnant women in the country (2.2%), the prevalence registered at HUPE was 2.3 times higher than the official one 10 . The prevalence registered in 2012 was similar to that of the Republic of Congo in Africa (4.2%) 19 . It was higher than the levels of other Latin American countries in the three years studied, which is a warning regarding the epidemiological situation of syphilis in the State 20 . For some authors, the increase in the disease prevalence in the country may be associated with the improvement in the diagnosis and reporting systems, and not necessarily with a rise in the total number of cases. However, for some infectologists, the rising prevalence is due to the increasing practice of unprotected sex, which would increase the number of cases of syphilis in the adult population and, consequently, during pregnancy. For others, there are flaws in prenatal care services, in both the diagnosis and the treatment of pregnant women and their partners 7 , 21 . A possible limitation of the present study is selection bias, since the prevalence may be overestimated by the inclusion of parturients with serological scarring (<1/8) in its calculation, even after meticulous evaluation of the medical records and of the epidemiological history. The MH considers low VDRL titers when diagnosing syphilis, due to variations in readings and the possibility of non-detection by the tests. All the national studies consulted, including the sentinel studies in parturients, used any VDRL titer for the calculation of the prevalence in their methodology 9 , 22 . Furthermore, several authors have already described the clinical significance of low VDRL titers in the diagnosis of congenital syphilis 23 . Although there has been a substantial increase in the reporting of syphilis in parturients in HUPE between 2012 and 2014, it has not reached half of the cases, which demonstrates how fragile the epidemiological surveillance system is in identifying and reporting all cases. The considerable increase in the coverage of notifications follows the increased number of notifications observed in Brazil, which, according to the MH, is related to the Rede Cegonha program, which expanded the testing of pregnant women and the follow-up of cases 24 . In Brazil, between 2012 and 2013, there was a 25% increase in the number of syphilis reports in pregnant women, and 23% if only the Southeast region is considered, which is responsible for 45.9% of the country's notifications. The increase in the number of notifications in the State of Rio de Janeiro was 16%. In the country, São Paulo is the State with the highest number of notifications, followed by the State of Rio de Janeiro 25 . Although the surveillance agencies account for underreporting for the purposes of disease monitoring, the present study shows that the level of underreporting was higher than estimated and should not be used as a basis for calculation. Without proper notification, there is a tendency to underestimate the problem. However, the underreporting found in HUPE in 2013 is similar to that described by Cavalcante et al. 26 in Fortaleza and Domingues et al.
18 in SUS units (state health clinics and hospitals) in the city of Rio de Janeiro, where underreporting was 73%, 70.3% and 76.1%, respectively, demonstrating how fragile the information services in Brazil are as a basis for promoting measures to combat the disease. The high prevalence of the disease among the parturients of the present study resulted in an incidence of congenital syphilis (26.6 per 1,000 LB) greater than that reported in the State of Rio de Janeiro in 2015 (16 per 1,000 LB) 10 and at least five-fold greater than the incidence of the disease in Brazil, according to the Nascer Study (3.51 per 1,000 LB) 27 . Regarding the notification of congenital syphilis, the present study reached more than 90%, a result far superior to that reported by the MH, which estimates notification at only 17.4% of the cases of CS in the country 24 . HUPE is located in the capital of the State of Rio de Janeiro, which, according to the epidemiological bulletin of the State Health Department, is the city with the highest number of notifications of CS in the State (63.5%) 25 . The high vertical transmission rate of syphilis in this study (65.8%) reflects the high prevalence of syphilis among parturients, since among the various diseases that can be transmitted during the pregnancy-puerperal cycle, syphilis is the one that presents the greatest chance of transmission 28 . Kupek et al. 11 showed similar rates (68.9%). Domingues et al. 18 and the MH 7 , however, estimated lower rates of vertical transmission, 34.8% in Rio de Janeiro and 25% in Brazil, respectively. The occurrence of maternal syphilis in a subsequent gestation demonstrated that a prior gestation with syphilis did not eliminate the risk of the disease in future pregnancies. The recurrence of syphilis in two distinct pregnancies is a major sign of the inefficiency of the health system in properly managing these cases. Fetal infection occurred even in the presence of low maternal VDRL titers. The lack of proper consideration of low VDRL titers by health professionals is still a barrier to the control of congenital syphilis. In a study with health professionals, it was observed that only 48% had adequate knowledge about the management of syphilis, especially regarding the treatment of pregnant women with low titers 29 . Since syphilis is an easily diagnosed disease that has an established treatment and generates low costs for testing and treatment during pregnancy, it should be possible to control the disease. However, the disease continues to advance in Brazil in alarming numbers. CONCLUSIONS: The estimated prevalence of syphilis in pregnant women in Rio de Janeiro is at least twice as high as previously reported by regional studies and epidemiological surveillance data in the State, reaching levels found in African countries and higher than the figures for other countries in Latin America. These values are a warning on the magnitude of the disease in the State of Rio de Janeiro, its upward curve and its consequences. Our reporting system has proven to be fragile and unable to assess the current prevalence of syphilis in Rio de Janeiro, which is underestimated. Due to the high prevalence of syphilis among parturients, there was a high rate of CS and vertical transmission that occurred even in pregnant women with low VDRL titers, emphasizing the importance of evaluating low titers in the management of these patients.
It is a responsibility of the health professionals, who daily handle this sad reality, to strive for excellence in caring for pregnant women. Even in the 21st century, the high rates of syphilis in parturients and CS in Brazil are still ignored and underestimated.
Background: In 2013, the World Health Organization (WHO) reported that 1.9 million pregnant women were infected with syphilis worldwide, of which 66.5% had adverse fetal effects in cases of untreated syphilis. Congenital syphilis contributes significantly to infant mortality, accounting for 305,000 perinatal deaths worldwide annually. Methods: A cross-sectional study with data collected from 2,041 parturients treated between 2012 and 2014 in the maternity section of the Pedro Ernesto Hospital of the State University of Rio de Janeiro, in the metropolitan area of Rio de Janeiro. The inclusion criterion was a positive VDRL and treponemal test in the hospital environment. Results: The prevalence of syphilis in pregnant women was 4.1% in 2012, 3.1% in 2013 and 5% in 2014, with official reporting of 15.6%, 25.0% and 48.1%, respectively. The incidence of congenital syphilis (CS) was 22/1,000 live births (LB) in 2012, 17/1,000 LB in 2013 and 44.8/1,000 LB in 2014. CS underreporting during the period was 6.7%. Vertical transmission occurred in 65.8% of infants from infected mothers. In 34.6% of the CS cases, maternal VDRL titers were ≤ 1/4. Conclusions: The results demonstrate the magnitude of the disease, the fragility of the reporting system in assessing the actual prevalence, and the impact on perinatal outcomes; they are a warning about the real situation of syphilis, which is still underestimated in the State.
INTRODUCTION: Syphilis is a sexually transmitted disease caused by the bacterium Treponema pallidum. Infection can be transmitted to the unborn child through the placenta at any gestational stage. In exceptional cases, syphilis can be transmitted at birth through the child's contact with the birth canal, when there are maternal genital lesions (direct transmission). Syphilis may only be transmitted through breastfeeding if there are breast syphilis lesions 1 . Congenital infection is associated with adverse outcomes, including: perinatal death, preterm birth, low birth weight, congenital anomalies, active syphilis in the newborn (NB), and long-term sequelae, such as deafness and neurological impairment 2 . There is a significant global increase in the prevalence of syphilis in pregnant women, despite all efforts from governments to control the disease. In 2013, the WHO reported 1.9 million pregnant women infected with syphilis worldwide, with 66.5% of adverse fetal outcomes occurring in cases of untreated syphilis 3 . Congenital syphilis significantly contributes to infant mortality, accounting for 305,000 perinatal deaths per year worldwide 4 . There are more newborns affected by syphilis than by any other neonatal infection. It is estimated that 520,000 adverse fetal outcomes occur worldwide each year, making congenital syphilis more common than congenital HIV infection 5 . In areas where syphilis is prevalent, about half of the stillbirth rate may have been caused solely by this infection 6 . The direct medical cost associated with adverse fetal outcomes resulting from syphilis is U$ 309 million dollars a year 4 . High rates of syphilis in parturients mean a high incidence of congenital syphilis and millions of missed opportunities to save lives during pre-natal care. Over the past ten years, the infant mortality rate due to syphilis increased 150% in Brazil, from 2.2 per 100,000 live births (LB) in 2004 to 5.5 per 100,000 LB in 2013. By 2015, the State of Rio de Janeiro had the highest rate of syphilis during pregnancy (2.2%) and of congenital syphilis in the country (16 cases per 1,000 LB) 7 , 8 . The last syphilis monitoring study in pregnant women performed in Brazil in 2014, with a statistical sample from public and private hospitals, estimated the prevalence of syphilis at 1.02%, with no significant regional differences 9 . According to the prediction curve of the disease, the Ministry of Health (MH) expected 40,000 cases in pregnant women in Brazil in 2016. An epidemic of the disease in the country was confirmed by the Ministry of Health in 2016. The increased incidence of syphilis in pregnant women is alarming 8 . In 2015, according to the MH, the States of Rio de Janeiro and Mato Grosso do Sul were the most affected in the country, reaching a syphilis rate in pregnant women of 2.2% 8 . The national studies show differences in the prevalence of the disease according to the region studied, with rates ranging around 0.4% in Vitória (Espírito Santo) 10 , Itajaí (Santa Catarina) 11 and 7.7% in Fortaleza (Ceará) 12 . The analysis of international studies on the subject in countries such as Haiti 13 shows that the prevalence of syphilis in pregnant women reaches 7.6%, while in other countries, such as Canada, syphilis has already been eliminated 14 . The WHO certified Cuba in 2013 as the first country to eliminate the vertical transmission of HIV and syphilis during pregnancy. 
Six other countries are currently able to request the WHO to validate the double elimination, including Canada and the United States 15 . Aim To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection. To estimate the prevalence and coverage of syphilis reporting in parturients, the incidence of congenital syphilis and its corresponding underreporting, as well as the rate of vertical transmission of infection. CONCLUSIONS: The estimated prevalence of syphilis in pregnant women in Rio de Janeiro is at least twice as high as previously reported by regional studies and epidemiological surveillance data in the State, reaching levels found in African countries and higher than the figures for other countries in Latin America. These values are a warning on the magnitude of the disease in the State of Rio de Janeiro, its upward curve and its consequences. Our reporting system has proven to be fragile and unable to assess the current prevalence of syphilis in Rio de Janeiro, which is underestimated. Due to the high prevalence of syphilis among parturients, there was a high rate of CS and vertical transmission that occurred even in pregnant women with low VDRL titers, emphasizing the importance of evaluating low titers in the management of these patients. It is a responsibility of the health professionals, who daily handle this sad reality, to strive for excellence in caring for pregnant women. Even in the 21st century, the high rates of syphilis in parturients and CS in Brazil are still ignored and underestimated.
Background: In 2013, the World Health Organization (WHO) reported that 1.9 million pregnant women were infected with syphilis worldwide, of which 66.5% had adverse fetal effects in cases of untreated syphilis. Congenital syphilis contributes significantly to infant mortality, accounting for 305,000 perinatal deaths worldwide annually. Methods: A cross-sectional study with data collected from 2,041 parturients treated between 2012 and 2014 in the maternity section of the Pedro Ernesto Hospital of the State University of Rio de Janeiro, in the metropolitan area of Rio de Janeiro. The inclusion criterion was a positive VDRL and treponemal test in the hospital environment. Results: The prevalence of syphilis in pregnant women was 4.1% in 2012, 3.1% in 2013 and 5% in 2014, with official reporting of 15.6%, 25.0% and 48.1%, respectively. The incidence of congenital syphilis (CS) was 22/1,000 live births (LB) in 2012, 17/1,000 LB in 2013 and 44.8/1,000 LB in 2014. CS underreporting during the period was 6.7%. Vertical transmission occurred in 65.8% of infants from infected mothers. In 34.6% of the CS cases, maternal VDRL titers were ≤ 1/4. Conclusions: The results demonstrate the magnitude of the disease, the fragility of the reporting system in assessing the actual prevalence, and the impact on perinatal outcomes; they are a warning about the real situation of syphilis, which is still underestimated in the State.
5,870
281
[ 33, 5, 55, 227, 117, 64, 60, 287, 113, 55 ]
15
[ "syphilis", "cases", "number", "prevalence", "pregnant", "pregnant women", "women", "parturients", "live", "disease" ]
[ "maternal syphilis", "affected syphilis neonatal", "underreporting congenital syphilis", "syphilis transmitted birth", "newborns affected syphilis" ]
null
[CONTENT] Syphilis | Gestation | Congenital syphilis | Prevalence | VDRL | Treponemal tests | Treponema pallidum [SUMMARY]
null
[CONTENT] Syphilis | Gestation | Congenital syphilis | Prevalence | VDRL | Treponemal tests | Treponema pallidum [SUMMARY]
[CONTENT] Syphilis | Gestation | Congenital syphilis | Prevalence | VDRL | Treponemal tests | Treponema pallidum [SUMMARY]
[CONTENT] Syphilis | Gestation | Congenital syphilis | Prevalence | VDRL | Treponemal tests | Treponema pallidum [SUMMARY]
[CONTENT] Syphilis | Gestation | Congenital syphilis | Prevalence | VDRL | Treponemal tests | Treponema pallidum [SUMMARY]
[CONTENT] Brazil | Cross-Sectional Studies | Female | Humans | Incidence | Infant, Newborn | Infectious Disease Transmission, Vertical | Pregnancy | Pregnancy Complications, Infectious | Prevalence | Syphilis | Syphilis, Congenital [SUMMARY]
null
[CONTENT] Brazil | Cross-Sectional Studies | Female | Humans | Incidence | Infant, Newborn | Infectious Disease Transmission, Vertical | Pregnancy | Pregnancy Complications, Infectious | Prevalence | Syphilis | Syphilis, Congenital [SUMMARY]
[CONTENT] Brazil | Cross-Sectional Studies | Female | Humans | Incidence | Infant, Newborn | Infectious Disease Transmission, Vertical | Pregnancy | Pregnancy Complications, Infectious | Prevalence | Syphilis | Syphilis, Congenital [SUMMARY]
[CONTENT] Brazil | Cross-Sectional Studies | Female | Humans | Incidence | Infant, Newborn | Infectious Disease Transmission, Vertical | Pregnancy | Pregnancy Complications, Infectious | Prevalence | Syphilis | Syphilis, Congenital [SUMMARY]
[CONTENT] Brazil | Cross-Sectional Studies | Female | Humans | Incidence | Infant, Newborn | Infectious Disease Transmission, Vertical | Pregnancy | Pregnancy Complications, Infectious | Prevalence | Syphilis | Syphilis, Congenital [SUMMARY]
[CONTENT] maternal syphilis | affected syphilis neonatal | underreporting congenital syphilis | syphilis transmitted birth | newborns affected syphilis [SUMMARY]
null
[CONTENT] maternal syphilis | affected syphilis neonatal | underreporting congenital syphilis | syphilis transmitted birth | newborns affected syphilis [SUMMARY]
[CONTENT] maternal syphilis | affected syphilis neonatal | underreporting congenital syphilis | syphilis transmitted birth | newborns affected syphilis [SUMMARY]
[CONTENT] maternal syphilis | affected syphilis neonatal | underreporting congenital syphilis | syphilis transmitted birth | newborns affected syphilis [SUMMARY]
[CONTENT] maternal syphilis | affected syphilis neonatal | underreporting congenital syphilis | syphilis transmitted birth | newborns affected syphilis [SUMMARY]
[CONTENT] syphilis | cases | number | prevalence | pregnant | pregnant women | women | parturients | live | disease [SUMMARY]
null
[CONTENT] syphilis | cases | number | prevalence | pregnant | pregnant women | women | parturients | live | disease [SUMMARY]
[CONTENT] syphilis | cases | number | prevalence | pregnant | pregnant women | women | parturients | live | disease [SUMMARY]
[CONTENT] syphilis | cases | number | prevalence | pregnant | pregnant women | women | parturients | live | disease [SUMMARY]
[CONTENT] syphilis | cases | number | prevalence | pregnant | pregnant women | women | parturients | live | disease [SUMMARY]
[CONTENT] syphilis | infection | congenital | adverse | outcomes | 000 | rate | transmitted | country | pregnant women [SUMMARY]
null
[CONTENT] 2013 | 2014 | 2012 | cases | 32 | hupe | syphilis | lb | live | pregnancies [SUMMARY]
[CONTENT] high | underestimated | janeiro | de | rio | rio de janeiro | rio de | de janeiro | prevalence syphilis | syphilis [SUMMARY]
[CONTENT] syphilis | cases | number | 2012 | prevalence | study | live | treatment | parturients | pregnant women [SUMMARY]
[CONTENT] syphilis | cases | number | 2012 | prevalence | study | live | treatment | parturients | pregnant women [SUMMARY]
[CONTENT] 2013 | the World Health Organization (WHO | 1.9 million | 66.5% ||| 305,000 | annually [SUMMARY]
null
[CONTENT] 4.1% | 2012 | 3.1% | 2013 | 5% | 2014 | 15.6% | 25.0% | 48.1% ||| CS | 22/1,000 | LB | 2012 | LB | 2013 | 44.8/1,000 | LB | 2014 ||| CS | 6.7% ||| 65.8% ||| 34.6% | CS | VDRL | 1/4 [SUMMARY]
[CONTENT] State [SUMMARY]
[CONTENT] 2013 | the World Health Organization (WHO | 1.9 million | 66.5% ||| 305,000 | annually ||| 2041 | between 2012 and 2014 | the Pedro Ernesto Hospital | the State University of Rio de Janeiro | Rio de Janeiro ||| VDRL ||| 4.1% | 2012 | 3.1% | 2013 | 5% | 2014 | 15.6% | 25.0% | 48.1% ||| CS | 22/1,000 | LB | 2012 | LB | 2013 | 44.8/1,000 | LB | 2014 ||| CS | 6.7% ||| 65.8% ||| 34.6% | CS | VDRL | 1/4 ||| State [SUMMARY]
[CONTENT] 2013 | the World Health Organization (WHO | 1.9 million | 66.5% ||| 305,000 | annually ||| 2041 | between 2012 and 2014 | the Pedro Ernesto Hospital | the State University of Rio de Janeiro | Rio de Janeiro ||| VDRL ||| 4.1% | 2012 | 3.1% | 2013 | 5% | 2014 | 15.6% | 25.0% | 48.1% ||| CS | 22/1,000 | LB | 2012 | LB | 2013 | 44.8/1,000 | LB | 2014 ||| CS | 6.7% ||| 65.8% ||| 34.6% | CS | VDRL | 1/4 ||| State [SUMMARY]
null
34712033
Helicobacter pylori (H. pylori) is a spiral-shaped bacterium responsible for the development of chronic gastritis, gastric ulcer, gastric cancer (GC), and MALT-lymphoma of the stomach. H. pylori can be present in the gastric mucosa (GM) in both spiral and coccoid forms. However, it is not known whether the severity of GM contamination by various vegetative forms of H. pylori is associated with clinical and morphological characteristics and long-term results of GC treatment.
BACKGROUND
A total of 109 patients with GC were included in a prospective cohort study. H. pylori in the GM and in the tumor was determined by the rapid urease test and immunohistochemically, using an antibody to H. pylori. The results obtained were compared with the clinical and morphological characteristics and the prognosis of GC. Statistical analysis was performed using the Statistica 10.0 software.
METHODS
H. pylori was detected in the GM adjacent to the tumor in 84.5% of cases, of which a high degree of contamination was noted in 50.4% of the samples. Coccoid forms of H. pylori were detected in 93.4% of infected patients, and only coccoid forms in 68.9%. A high degree of GM contamination by the coccoid forms of H. pylori was observed significantly more often in the diffuse type of GC (P = 0.024), in poorly differentiated GC (P = 0.011), in stage T3-4 (P = 0.04) and in N1 (P = 0.011). In cases of moderate and marked concentrations of H. pylori in the GM, a decrease in 10-year relapse-free and overall survival from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). No relationship was found between the severity of GM contamination by the spiral-shaped forms of H. pylori and the clinical and morphological characteristics and prognosis of GC.
RESULTS
The data obtained indicates that H. pylori may be associated not only with induction but also with the progression of GC.
CONCLUSION
[ "Gastric Mucosa", "Helicobacter Infections", "Helicobacter pylori", "Humans", "Neoplasm Recurrence, Local", "Prospective Studies", "Stomach Neoplasms" ]
8515796
INTRODUCTION
Gastric cancer (GC) continues to be one of the most common malignant diseases in the world[1,2]. Despite a decreasing trend in the incidence of GC in most countries of the world, the treatment results of this pathology cannot be considered satisfactory. In the structure of mortality from malignant neoplasms, this pathology firmly occupies 2nd place in most developed countries of the world, and the 5-year survival rate of radically operated patients does not exceed 15%-30%[3,4]. It is important to note that it is impossible to improve the long-term results of malignant neoplasms treatment without knowledge of the mechanisms associated with their progression[3]. Clinical studies in recent years indicate that inflammatory infiltration of the tumor stroma and surrounding tissues can have an important prognostic value and affect the long-term results of malignant neoplasm treatment[5-9]. A number of studies have shown that inflammatory infiltration of the tumor stroma is associated with the body’s adequate immune response to the tumor, and may be a favorable prognosis factor in various malignant neoplasms[10,11], including GC[12,13]. At the same time, the data obtained by other researchers indicates that pronounced inflammatory infiltration of the tumor stroma, especially T-reg lymphocytes and macrophages, may be a factor contributing to the progression of malignant neoplasms[14,15]. There was a decrease in the overall survival (OS) and relapse-free survival (RFS) of GC patients with a high content of Foxp3 + T-reg in the tumor stroma and regional metastases[16-18] and macrophages[16,19]. It is believed that inflammatory infiltration of the tumor stroma can contribute to tumor progression by activating the mechanisms of angiogenesis, expression of E- and L-selectins, formation of the products of lipid peroxidation and free radicals, destruction of connective tissue matrix and basement membranes of epithelia by proteolytic enzymes, and activation of epithelial-mesenchymal transformation[20-23]. When studying the role of inflammation in the progression of GC, it is impossible to ignore the problem of Helicobacter pylori (H. pylori) infection. H. pylori is a gram-negative, spiral-shaped bacterium, the habitat of which is the gastric mucosa (GM) and duodenum. H. pylori differs from other bacteria in a set of properties that make it possible to colonize the GM and persist for a long time under conditions that are unfavorable for other microorganisms[24,25]. These include: (1) The ability to produce a special enzyme-urease; (2) Synthesis of lytic enzymes that cause the depolymerization and dissolution of gastric mucus, consisting mainly of mucin; (3) The mobility of the bacterium, which is ensured by the presence of 5-6 flagella; (4) The high adhesiveness of bacteria to GM epithelial cells of the GM and elements of connective tissue due to the interaction of bacterial ligands with the corresponding cells receptors; (5) Production of various exotoxins (VacA, CagA, and others); (6) Instability of the H. pylori genome; (7) The presence of vegetative and coccoid forms of bacteria; and (8) possibility of intracellular persistence and translocation outside the GM [26-29]. It should be noted that despite the huge number of studies devoted to H. pylori, it is still not clear whether H. pylori is involved only in the initiation of the tumor process in the stomach, or whether it can affect the mechanisms of tumor progression. The relationship between the severity of H. 
pylori infection and the clinical and morphological characteristics of GC and long-term results of this pathology treatment remains poorly studied, and in this connection, the question of the expediency of anti-Helicobacter therapy in patients with invasive GC remains open. Objective To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results. To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.
MATERIALS AND METHODS
The patients One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg). Clinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1. The distribution of patients according to the clinical and pathological characteristics of gastric cancer GC: Gastric cancer. When interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs. The long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive. 
One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg). Clinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1. The distribution of patients according to the clinical and pathological characteristics of gastric cancer GC: Gastric cancer. When interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs. The long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive. Detection H. pylori infection H. pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori. H. 
pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori. RUT After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. According to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. According to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. Immunohistochemistry The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry. The sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. The visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner. The concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. 
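The RUT grading described above is an ordinal scale based on the time and intensity of the colour change. A possible encoding is sketched below; it is purely illustrative and not part of the study protocol (the function name is ours, and judging by time alone is a simplification, since the protocol also weighs staining intensity).

def grade_rut(minutes_to_blue):
    # 0 = negative, 1 = mild (+), 2 = moderate (++), 3 = marked (+++)
    if minutes_to_blue is None or minutes_to_blue > 3:
        return 0   # no colour change (or a dirty grey colour) within three minutes
    if minutes_to_blue <= 1:
        return 3   # bright staining in the first minute
    if minutes_to_blue <= 2:
        return 2   # staining of average intensity within two minutes
    return 1       # weak staining within three minutes

# The IHC concentration of H. pylori is graded on the same 1+/2+/3+ scale (Sydney system).
print(grade_rut(0.5))  # 3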
All sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. The data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS. The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry. The sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. The visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner. The concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. All sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. The data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS. Statistics Statistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant. Statistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant.
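The statistical workflow described above (gamma and Spearman correlations, chi-square, Mann-Whitney U, and Kaplan-Meier survival with log-rank comparison) can also be reproduced with open-source tools. The sketch below uses Python with SciPy and lifelines rather than Statistica 10.0, and all arrays are invented toy data, shown only to illustrate the calls.

import numpy as np
from scipy.stats import spearmanr, chi2_contingency, mannwhitneyu
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def goodman_kruskal_gamma(x, y):
    # Gamma correlation for ordinal data: (concordant - discordant) / (concordant + discordant)
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            d = (x[i] - x[j]) * (y[i] - y[j])
            if d > 0:
                concordant += 1
            elif d < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Toy ordinal data: H. pylori grade (0-3) against T status (1-4)
hp_grade = [0, 1, 1, 2, 2, 3, 3, 2, 1, 3]
t_status = [1, 1, 2, 2, 3, 4, 3, 3, 2, 4]
print(goodman_kruskal_gamma(hp_grade, t_status))
print(spearmanr(hp_grade, t_status))

# Chi-square test on a toy 2x2 contingency table (e.g. infection grade by N status)
chi2, p, dof, expected = chi2_contingency(np.array([[12, 8], [5, 15]]))
print(chi2, p)

# Mann-Whitney U test for a quantitative variable in two groups
print(mannwhitneyu([55, 60, 62, 70], [45, 50, 52, 58]))

# Kaplan-Meier survival and log-rank comparison between two toy subgroups
t_a, e_a = [12, 30, 45, 60, 84], [1, 1, 0, 1, 0]   # follow-up in months, event observed
t_b, e_b = [10, 20, 25, 40, 50], [1, 1, 1, 1, 0]
KaplanMeierFitter().fit(t_a, event_observed=e_a, label="mild infection")
print(logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b).p_value)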
null
null
CONCLUSION
The results obtained do not allow one to draw unambiguous conclusions about the role of H. pylori in the progression of GC. Further appropriate prospective studies regarding the role of H. pylori in the progression of GC are obviously advisable.
[ "INTRODUCTION", "Objective", "The patients", "Detection H. pylori infection", "RUT", "Immunohistochemistry ", "Statistics", "RESULTS", "The features of H. pylori infection in GC", "Correlations of clinical and pathological characteristics of GC with the severity of H. pylori infection according to RUT data ", "Correlations of clinical and pathological characteristics of GC with the concentration of spiral and coccoid forms of H. pylori in the GM ", "DISCUSSION", "CONCLUSION" ]
[ "Gastric cancer (GC) continues to be one of the most common malignant diseases in the world[1,2]. Despite a decreasing trend in the incidence of GC in most countries of the world, the treatment results of this pathology cannot be considered satisfactory. In the structure of mortality from malignant neoplasms, this pathology firmly occupies 2nd place in most developed countries of the world, and the 5-year survival rate of radically operated patients does not exceed 15%-30%[3,4]. \nIt is important to note that it is impossible to improve the long-term results of malignant neoplasms treatment without knowledge of the mechanisms associated with their progression[3]. Clinical studies in recent years indicate that inflammatory infiltration of the tumor stroma and surrounding tissues can have an important prognostic value and affect the long-term results of malignant neoplasm treatment[5-9]. A number of studies have shown that inflammatory infiltration of the tumor stroma is associated with the body’s adequate immune response to the tumor, and may be a favorable prognosis factor in various malignant neoplasms[10,11], including GC[12,13]. At the same time, the data obtained by other researchers indicates that pronounced inflammatory infiltration of the tumor stroma, especially T-reg lymphocytes and macrophages, may be a factor contributing to the progression of malignant neoplasms[14,15]. There was a decrease in the overall survival (OS) and relapse-free survival (RFS) of GC patients with a high content of Foxp3 + T-reg in the tumor stroma and regional metastases[16-18] and macrophages[16,19]. It is believed that inflammatory infiltration of the tumor stroma can contribute to tumor progression by activating the mechanisms of angiogenesis, expression of E- and L-selectins, formation of the products of lipid peroxidation and free radicals, destruction of connective tissue matrix and basement membranes of epithelia by proteolytic enzymes, and activation of epithelial-mesenchymal transformation[20-23].\nWhen studying the role of inflammation in the progression of GC, it is impossible to ignore the problem of Helicobacter pylori (H. pylori) infection. H. pylori is a gram-negative, spiral-shaped bacterium, the habitat of which is the gastric mucosa (GM) and duodenum. H. pylori differs from other bacteria in a set of properties that make it possible to colonize the GM and persist for a long time under conditions that are unfavorable for other microorganisms[24,25]. These include: (1) The ability to produce a special enzyme-urease; (2) Synthesis of lytic enzymes that cause the depolymerization and dissolution of gastric mucus, consisting mainly of mucin; (3) The mobility of the bacterium, which is ensured by the presence of 5-6 flagella; (4) The high adhesiveness of bacteria to GM epithelial cells of the GM and elements of connective tissue due to the interaction of bacterial ligands with the corresponding cells receptors; (5) Production of various exotoxins (VacA, CagA, and others); (6) Instability of the H. pylori genome; (7) The presence of vegetative and coccoid forms of bacteria; and (8) possibility of intracellular persistence and translocation outside the GM [26-29].\nIt should be noted that despite the huge number of studies devoted to H. pylori, it is still not clear whether H. pylori is involved only in the initiation of the tumor process in the stomach, or whether it can affect the mechanisms of tumor progression. The relationship between the severity of H. 
pylori infection and the clinical and morphological characteristics of GC and long-term results of this pathology treatment remains poorly studied, and in this connection, the question of the expediency of anti-Helicobacter therapy in patients with invasive GC remains open.\nObjective To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.\nTo assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.", "To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.", "One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg).\nClinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1.\nThe distribution of patients according to the clinical and pathological characteristics of gastric cancer\nGC: Gastric cancer.\nWhen interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. 
We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs.\nThe long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive.", "\nH. pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori.", "After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. \nAccording to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. ", "The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry.\nThe sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. \nThe visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner.\nThe concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. 
pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. \nAll sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. \nThe data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS.", "Statistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant.", "The features of H. pylori infection in GC Of 93 patients (84.5%) demonstrated positive RUT. According to RUT the concentration of H. pylori in GM was mild (1+) in 38 (34.8%) patients, moderate (2+)-in 37 (33.9%) and marked (3+)–in 8 (16.5%). RUT was negative in 16 patients (14.7%).\nIt was found that urease activity 2+ and 3+ were significantly more frequent in GM than in tumors (in 55.5% and 13.3% of cases, respectively, P = 0.01). In more than half the cases (53.3%), the urease activity in the tumor was lower than in the adjacent GM.\nForty-six samples of GM were stained by IHC. The coccoid forms of H. pylori were found to prevail in GM adjacent to the tumor (Figure 1A). Coccoid forms of H. pylori only were identified in 31 (63.1%) patients, and coccoid and spiral in 12 (30.4%) patients (Figure 1B). No signs of infection were found in 3 (6.5%) patients. The concentration of coccoid and spiral forms of H. pylori in GM according to IGH was 1+ in 24 (52.2%) and 1 (19.6%) patients, 2+-in 11 (23.9%) and 4 (8.7%), and 3+-in 8 (17.4%) and 1 (2.2%) patients. A positive correlation between the concentration of coccoid and spiral forms of H. pylori (gamma = 0.642, P < 0.0001) was noted.\n\nThe features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. The some bacteria within the cytoplasm of epithelium cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. \nThe localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). 
\nThe features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. Some bacteria are within the cytoplasm of epithelial cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of the superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of the stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of the gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. \nThe localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% of samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% of samples), as well as of the intraepithelial lymphocytes (Figure 1F). A close relationship between the presence of point inclusions in immune cells and in epithelial cells (gamma = 0.642, P < 0.0001) was noted. Individual cocci or their small clusters, and sometimes short rod-shaped bacteria, were also often detected in the lamina propria of the GM. We also found bacteria in the omentum and lymph nodes. In the omentum the bacteria were represented predominantly by cocci between 0.5 and 1 μm in diameter (Figure 2A). The cocci were most commonly arranged in small compact groups of up to 10-15 cells in the immediate vicinity of the lymph node capsule. The bacteria were located mainly between the adipocytes; however, it was not clear whether the bacteria were outside the cells or within a narrow rim of cytoplasm of the fat cells. \n\nThe features of Helicobacter pylori localization in omentum and lymph node in patients with gastric cancer. A: A small group of cocci located in the central part of an omentum adipocyte; B: Accumulations of bacteria around the nuclei of lymphocytes in the paracortical area of a lymph node. Immunoperoxidase staining with anti-Helicobacter pylori antibody, immersion. Bars: 10 μm. \nIn the tissue of lymph nodes we usually observed small accumulations of 10-15 cocci (Figure 2B). The bacteria were located between lymphocytes of the paracortical area, but not as compactly as in the omentum. Quite often the bacteria were concentrated around the nuclei of cells, which may indicate their intracellular localization.\nCorrelations of clinical and pathological characteristics of GC with the severity of H. pylori infection according to RUT data\nThe gamma correlation coefficient test (gamma) showed that the severity of H. pylori infection in GM according to the RUT positively correlated with T status (gamma = 0.537, P < 0.00001), N status (gamma = 0.371, P = 0.0007) and stage (gamma = 0.520, P < 0.00001), and negatively correlated with the presence of AT in the anamnesis (gamma = -0.418, P = 0.003). The marked (+++) and moderate (++) degrees of H. pylori infection were more often observed in Grade 2 and Grade 3 tumors, in T3-4 status, in N1 status, in T3-4N1-2 stages, and in the absence of AT in the anamnesis (Table 2). Correlations of H. pylori concentration in GM according to the RUT with 10-year OS and RFS of GC patients were not determined.
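The 10-year OS and RFS comparisons reported below (Figures 3-5) are Kaplan-Meier estimates compared with the log-rank test, as described in the Statistics subsection. As an illustration only, a minimal Python sketch of such a comparison is given below; the lifelines/pandas stack, the column names and the toy follow-up data are assumptions of this sketch, not part of the study, which used Statistica 10.0.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time in months, event = 1 if death (or relapse) was observed,
# at_before_surgery = 1 if the patient received antibiotic therapy 1-1.5 mo before surgery
df = pd.DataFrame({
    "months": [12, 34, 120, 56, 120, 9, 77, 120, 45, 103],
    "event": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "at_before_surgery": [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

kmf = KaplanMeierFitter()
for label, grp in df.groupby("at_before_surgery"):
    # Kaplan-Meier estimate of the survival curve for each subgroup
    kmf.fit(grp["months"], event_observed=grp["event"], label=f"AT={label}")
    print(kmf.survival_function_.tail(1))

treated = df[df["at_before_surgery"] == 1]
untreated = df[df["at_before_surgery"] == 0]
result = logrank_test(treated["months"], untreated["months"],
                      event_observed_A=treated["event"],
                      event_observed_B=untreated["event"])
print(f"log-rank P = {result.p_value:.3f}")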
\nClinical and pathological characteristics of gastric cancer depending on the rapid urease test data\nRUT: Rapid urease test; GC: Gastric cancer.\nIt is important to note that the presence of AT 1-1.5 mo before surgery was associated with a significant improvement in RFS and OS (Figure 3); however, this applied only to patients with local GC (T1-3N0). In advanced GC (T3-4N1 and T3-4N2) there were no significant differences in patient survival (Figure 4).\n\n10-year overall survival and relapse-free survival of patients with gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.02). A: 10-year overall survival; B: Relapse-free survival.\n\n10-year overall survival and relapse-free survival of patients with T3-4N1-2M0 stages of gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.78). A: 10-year overall survival; B: Relapse-free survival.\nCorrelations of clinical and pathological characteristics of GC with the concentration of spiral and coccoid forms of H. pylori in the GM\nIt is important to note that no correlations were found between the clinical and pathological characteristics of GC and the concentration of H. pylori spiral forms in GM. However, the concentration of H. pylori coccoid forms correlated with age (ρ = -0.502, P = 0.0006), histology (gamma = 0.550, P = 0.0004), T status (gamma = 0.709, P = 0.0001), N status (gamma = 0.509, P = 0.002), stage (gamma = 0.636, P = 0.0002), 10-year RFS (gamma = -0.521, P = 0.008) and OS (gamma = -0.500, P = 0.044). In cases with a moderate or marked concentration (2+ or 3+) of H. pylori coccoid forms in GM, compared to cases with a low concentration (1+ or no infection), the patients were younger (57.9 ± 2.5 years vs 66.2 ± 1.4 years, P = 0.004), and the diffuse type of GC, poorly differentiated tumors (G3), T3-4 status and N1 status were more often observed (Table 3). In cases of moderate and marked concentrations of H. pylori in GM, a decrease in 10-year progression-free survival (PFS) and OS from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). PFS and OS curves, depending on the concentration of coccoid forms of H. pylori in GM, are presented in Figure 5.\n\n10-year overall survival and relapse-free survival of patients with gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa (P = 0.02). A: 10-year overall survival; B: Relapse-free survival.\nClinical and pathological characteristics of gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa\nRUT: Rapid urease test; GC: Gastric cancer.\nThere were no correlations between the presence of point inclusions in the cytoplasm of epithelial cells of deep gastric glands, in the stromal immune cells, or in the intraepithelial lymphocytes and the clinical and pathological characteristics of GC.", "A positive RUT was demonstrated in 93 patients (84.5%). According to the RUT, the concentration of H. pylori in GM was mild (1+) in 38 (34.8%) patients, moderate (2+) in 37 (33.9%), and marked (3+) in 8 (16.5%).
RUT was negative in 16 patients (14.7%).\nIt was found that urease activity 2+ and 3+ were significantly more frequent in GM than in tumors (in 55.5% and 13.3% of cases, respectively, P = 0.01). In more than half the cases (53.3%), the urease activity in the tumor was lower than in the adjacent GM.\nForty-six samples of GM were stained by IHC. The coccoid forms of H. pylori were found to prevail in GM adjacent to the tumor (Figure 1A). Coccoid forms of H. pylori only were identified in 31 (63.1%) patients, and coccoid and spiral in 12 (30.4%) patients (Figure 1B). No signs of infection were found in 3 (6.5%) patients. The concentration of coccoid and spiral forms of H. pylori in GM according to IGH was 1+ in 24 (52.2%) and 1 (19.6%) patients, 2+-in 11 (23.9%) and 4 (8.7%), and 3+-in 8 (17.4%) and 1 (2.2%) patients. A positive correlation between the concentration of coccoid and spiral forms of H. pylori (gamma = 0.642, P < 0.0001) was noted.\n\nThe features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. The some bacteria within the cytoplasm of epithelium cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. \nThe localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% samples), as well as of the intraepithelial lymphocytes (Figure 1F). The close relationship between the presence of point inclusions in immune cells and epithelial cells (gamma = 0.642, P < 0.0001) was noted. The individual cocci or their small clusters, and sometimes the short rod bacterium, were also often detected in the lamina propria of the GM. We also found bacteria in the omentum and lymph nodes. In the omentum the bacteria were presented predominantly by cocci between 0.5 and 1 μm in diameter (Figure 2A). Cocci were arranged most commonly by the small compact groups up to 10-15 cells in the immediate vicinity of the LN capsule. The bacteria were located mainly between the adipocytes. However, it was not clear whether bacteria were outside of the cells or in a narrow rim of cytoplasm of the fat cells. \n\nThe features of Helicobacter pylori localization in omentum and lymph node in patients with gastric cancer. 
A: The small group of cocci located in the central part of the omentum adipocyte; B: The congestions of bacteria around the nucleus of lymphocytes in the paracortical area of lymph node. Immunoperoxidase staining with anti-Helicobacter pylori antibody, immersion, Bars: 10 μm. \nIn the tissue of lymph nodes we usually observed the small accumulations of cocci between 10 and 15 cells (Figure 2B). Bacteria were located between lymphocytes of the paracortical area, but not so compact, as in the omentum. Quite often the concentration of bacteria was observed around the nuclei of cells that can testify to their intracellular localization.", "The gamma correlation coefficient test (gamma) showed that the severity of H. pylori in GM according to RUT positively correlated with the T status (gamma = 0.537, P < 0.00001), N status (gamma = 0.371, P = 0.0007) and stage (gamma = 0.520, P < 0.00001), and negatively correlated with the presence of AT in anamnesis (gamma = -0.418, P = 0.003). The marked (+++) and moderate (++) degrees of H. pylori infection were more often observed in Grade 2 and Grade 3, in T3-4 status, in N1 status, in the T3-4N1-2 stage, and in the absence of AT in anamnesis (Table 2). Correlations of H. pylori concentration in GM according to RUT with 10-year OS and RFS of GC patients were not determined. \nClinical and pathological characteristics of gastric cancer depending on the data rapid urease test\nRUT: Rapid urease test; GC: Gastric cancer.\nIt is important to note that the presence in AT 1-1.5 mo before surgery was associated with a significant improvement in RFS and OS (Figure 3), however, this applied only to patients with local GC (T1-3N0). In advanced GC (T3-4N1 and T3-4N2) there were no significant differences in patient survival (Figure 4).\n\n10-year overall surviving and relapse-free surviving of patients with gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.02). A: 10-year overall surviving; B: Relapse-free surviving.\n\n10-year overall surviving and relapse-free surviving of patients with T3-4N1-2M0 stages of gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.78). A: 10-year overall surviving; B: Relapse-free surviving.", "It is important to note that correlations between the clinical and pathological characteristics of GC and the concentration of H. pylori spiral forms in GM were not found. However, the concentration of H. pylori coccoid forms correlated with age (ρ = -0.502, P = 0.0006), histology (gamma = 0.550, P = 0.0004), T status (gamma = 0.709, P = 0.0001), N status (gamma = 0.509, P = 0.002) stage (gamma = 0.636, P = 0.0002), and 10-year RFS (gamma = -0.521, P = 0.008) and OS (gamma = -0.500, P = 0.044). In cases with a moderate and marked concentration (2+ or 3+) of H. pylori coccoid forms in GM compared to cases with a low concentration (1+ or without infection) the patients were younger (57.9 ± 2.5 years vs 66.2 ± 1.4 years, respectively, P = 0.004) and the diffuse type of GC, poorly differentiated tumors (G3), T3-4 stage and N1 stage of GC were more often observed (Table 3). In cases of moderate and marked concentrations of H. pylori in GM, a decrease in 10-year progression-free survival (PFS) and OS survival from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). PFS and OS curves, depending on the concentration of coccoid forms of H. 
pylori in GM, are presented in Figure 5.\n\n10-year overall surviving and relapse-free surviving of patients with gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa (P = 0.02). A: 10-year overall surviving; B: Relapse-free surviving.\nClinical and pathological characteristics of gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa\nRUT: Rapid urease test; GC: Gastric cancer.\nThere were no correlations between the presence of point inclusions in the cytoplasm of epithelial cells of deep gastric glands, in the stroma immune cells, and in the intraepithelial lymphocytes with the clinical and pathological characteristics of GC.", "A large amount of clinical and experimental data testifies to the important role of H. pylori in the occurrence of GC[31-34], but, there is less research into the features of H. pylori infection in patients with GC and its role in tumor progression, and the results are quite contradictory. These contradictions relate to many aspects, such as: (1) The frequency of infection in patients with GC. This data varies widely and ranges from 36% to 100%[35-40]; (2) The relation of infection with GC prognosis. Some researchers have noted an improvement in the long-term results of GC treatment in infected patients[41,42], while others, on the contrary, found that the presence of H. pylori infection was associated with a decrease in patient survival[43,44]; and (3) The connection between the infection and a histologic type of GC. In some studies, it was noted that patients with an intestinal type of GC are more often infected with H. pylori than patients with the diffuse type of GC[45,46]. Other researchers did not find a difference in H. pylori infection in patients with different histological types of tumors[47].\nIt is believed that the differences noted are associated with the fact that to reveal the infection the authors used the methods that were significantly different in their sensitivity, and primarily, to coccoid forms of bacteria. Most of the studies were carried out without considering coccoid forms of H. pylori and the concentration of bacteria in GM. The use of the biochemical method for the detection of urease activity and immunohistochemistry for visualization of bacteria in our study allowed us not only to assess the presence of infection in patients with GC, but also to mark some of its features associated with the localization of bacteria in the stomach, with a ratio of cocci and spiral forms, and the degree of bacterial contamination of GM.\nOur data for the Orenburg region recorded a high rate of H. pylori infection in patients with GC (84.5%). The coccoid forms of H. pylori, preserving a high degree of urease activity, dominated in GM in patients with GC. They were found in 93.4% of infected patients, with only coccoid forms of H. pylori-in 68.9%. \nIt is known that the coccoid forms of H. pylori can arise in response to unfavorable environmental factors, such as AT [47,48]. These forms are resistant to AT[49,50] and are able to form biofilms[51] and avoid the immune system[50]. They express a higher rate of cagE mRNA than their spiral counterparts[52], and by increasing the synthesis of tumor necrosis factor-alpha (TNF-α)-inducing protein (Hps), which is introduced into the cytosol and cell nuclei, they can activate nuclear factor-kappaB (NF-κB) and the expression of TNF-α and other cytokines involved in carcinogenesis[53,54]. 
The effect of H. pylori coccoid forms on the proliferation of gastric epithelial cells is also stronger than that of the helical forms[55], and they can induce the expression of VEGF-A and transforming growth factor-β[56]. The ability to transform into coccoid forms was also found to be characteristic of the most virulent strains of H. pylori[50,54].\nIt should be noted that higher infection by coccoid forms of H. pylori in patients with GC, compared to patients with gastritis or gastric ulcer, has also been reported by other researchers[57]. A number of studies have shown that coccoid forms of H. pylori retain urease activity[58] and the expression of antigens such as CagA, UreA, porin, components of the Cag type IV secretion system (TFSS), the blood group antigen-binding adhesin BabA and others[59,60].\nThe use of immunohistochemistry in this study made it possible to detect the bacteria not only in the gastric mucus and on the surface of epithelial cells, but also within the cytoplasm of epithelial and immune cells of GM. Such intracellular expression was characterized by point inclusions giving a specific reaction with antibodies to H. pylori.\nThe intracellular persistence of H. pylori has been demonstrated by many investigators, who found H. pylori in the cytoplasm of epithelial cells, in intercellular spaces, in the lamina propria of GM, and in the lumen of small vessels[61-63]. We assume that the point inclusions in the cytoplasm of epithelial and immune cells giving a positive reaction with antibodies to H. pylori are similar to the particle-rich cytoplasmic structures (PaCS) described earlier in the human superficial-foveolar epithelium and its metaplastic or dysplastic foci[64]. The authors found that PaCS represent a colocalization of VacA, CagA, urease, and outer membrane proteins with the NOD1 receptor, ubiquitin-activating enzyme E1, polyubiquitinated proteins, proteasome components, and potentially oncogenic proteins such as SHP2 and ERKs[64]. They believe that the PaCS is a novel, proteasome-enriched structure arising in ribosome-rich cytoplasm at sites of H. pylori product accumulation. \nWe believe that the immune cells with point inclusions in the lamina propria of GM are likely to be macrophages. The data obtained by several researchers support this conclusion[65,66]. It has been noted that even phagocytosed bacteria retain their viability in macrophages, which may be associated with impaired phagosome maturation[66-68]. Confocal microscopy has made it possible to associate the intracellular localization of the bacteria with endosomal and lysosomal markers, and has shown that H. pylori can use autophagic vesicles for its own replication[63,69]. \nThe study found that the concentration of H. pylori coccoid forms in GM was the most significant clinical factor: It was associated with tumor histology, T status, N status, stage, and 10-year PFS and OS. Moderate and marked concentrations of coccoid forms of H. pylori were more often found in the diffuse type of GC (P = 0.024) and in T3-4 status (P = 0.04). Interestingly, a high concentration of H. pylori was more frequent in N1 than in N2 status (90.0% vs 53.1%, P = 0.024).
The moderate and marked concentrations of coccoid forms of H. pylori represented a prognostic factor associated with a decrease in 10-year RFS and OS from 55.6% to 26.3% (P = 0.02 and P = 0.07, respectively).\nIt should be noted that the results of this study do not allow us to judge unambiguously the effect of H. pylori on GC progression. The decrease in OS and RFS in patients with moderate and marked concentrations of H. pylori coccoid forms in the GM may be due to the fact that these patients had more advanced stages and more aggressive forms of GC. Meanwhile, a growing number of studies show that H. pylori infection can promote GC progression by activating the NF-κB signaling pathway and inducing interleukin-8 secretion[70], by activating epithelial-mesenchymal transformation[71-74] and angiogenesis[75,76], and by increasing the invasive properties of tumor cells[77]. It can be assumed that the administration of AT before surgery contributes to a reduction in the activity of the inflammatory process and to normalization of the adhesive properties of tumor cells, which in turn decreases the risk of metastasis and improves the long-term results of GC treatment. Literature data on the improvement of the long-term results of malignant tumor treatment with antibacterial drugs support this hypothesis[78-80].", "The data obtained indicate that H. pylori may be associated not only with the induction but also with the progression of GC. It can be assumed that the prevalence of coccoid forms of the bacteria and their intracellular persistence can affect the mechanisms of tumor progression. Further studies of the role of H. pylori in the progression of GC are clearly warranted." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "Objective", "MATERIALS AND METHODS", "The patients", "Detection H. pylori infection", "RUT", "Immunohistochemistry ", "Statistics", "RESULTS", "The features of H. pylori infection in GC", "Correlations of clinical and pathological characteristics of GC with the severity of H. pylori infection according to RUT data ", "Correlations of clinical and pathological characteristics of GC with the concentration of spiral and coccoid forms of H. pylori in the GM ", "DISCUSSION", "CONCLUSION" ]
[ "Gastric cancer (GC) continues to be one of the most common malignant diseases in the world[1,2]. Despite a decreasing trend in the incidence of GC in most countries of the world, the treatment results of this pathology cannot be considered satisfactory. In the structure of mortality from malignant neoplasms, this pathology firmly occupies 2nd place in most developed countries of the world, and the 5-year survival rate of radically operated patients does not exceed 15%-30%[3,4]. \nIt is important to note that it is impossible to improve the long-term results of malignant neoplasms treatment without knowledge of the mechanisms associated with their progression[3]. Clinical studies in recent years indicate that inflammatory infiltration of the tumor stroma and surrounding tissues can have an important prognostic value and affect the long-term results of malignant neoplasm treatment[5-9]. A number of studies have shown that inflammatory infiltration of the tumor stroma is associated with the body’s adequate immune response to the tumor, and may be a favorable prognosis factor in various malignant neoplasms[10,11], including GC[12,13]. At the same time, the data obtained by other researchers indicates that pronounced inflammatory infiltration of the tumor stroma, especially T-reg lymphocytes and macrophages, may be a factor contributing to the progression of malignant neoplasms[14,15]. There was a decrease in the overall survival (OS) and relapse-free survival (RFS) of GC patients with a high content of Foxp3 + T-reg in the tumor stroma and regional metastases[16-18] and macrophages[16,19]. It is believed that inflammatory infiltration of the tumor stroma can contribute to tumor progression by activating the mechanisms of angiogenesis, expression of E- and L-selectins, formation of the products of lipid peroxidation and free radicals, destruction of connective tissue matrix and basement membranes of epithelia by proteolytic enzymes, and activation of epithelial-mesenchymal transformation[20-23].\nWhen studying the role of inflammation in the progression of GC, it is impossible to ignore the problem of Helicobacter pylori (H. pylori) infection. H. pylori is a gram-negative, spiral-shaped bacterium, the habitat of which is the gastric mucosa (GM) and duodenum. H. pylori differs from other bacteria in a set of properties that make it possible to colonize the GM and persist for a long time under conditions that are unfavorable for other microorganisms[24,25]. These include: (1) The ability to produce a special enzyme-urease; (2) Synthesis of lytic enzymes that cause the depolymerization and dissolution of gastric mucus, consisting mainly of mucin; (3) The mobility of the bacterium, which is ensured by the presence of 5-6 flagella; (4) The high adhesiveness of bacteria to GM epithelial cells of the GM and elements of connective tissue due to the interaction of bacterial ligands with the corresponding cells receptors; (5) Production of various exotoxins (VacA, CagA, and others); (6) Instability of the H. pylori genome; (7) The presence of vegetative and coccoid forms of bacteria; and (8) possibility of intracellular persistence and translocation outside the GM [26-29].\nIt should be noted that despite the huge number of studies devoted to H. pylori, it is still not clear whether H. pylori is involved only in the initiation of the tumor process in the stomach, or whether it can affect the mechanisms of tumor progression. The relationship between the severity of H. 
pylori infection and the clinical and morphological characteristics of GC and long-term results of this pathology treatment remains poorly studied, and in this connection, the question of the expediency of anti-Helicobacter therapy in patients with invasive GC remains open.\nObjective To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.\nTo assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.", "To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results.", "The patients One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg).\nClinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1.\nThe distribution of patients according to the clinical and pathological characteristics of gastric cancer\nGC: Gastric cancer.\nWhen interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. 
We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs.\nThe long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) had left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphoma in three, prostate cancer in one, lung cancer in two, laryngeal cancer in one, and breast cancer in one patient. With the exception of one patient with non-Hodgkin's lymphoma, these patients died from the progression of these diseases. In the remaining patients, the causes of death were not associated with malignant tumors. During the period 2020-2021, seven patients contracted coronavirus disease-19; one of them died from the disease, but the rest are alive.\nDetection of H. pylori infection\nH. pylori in the GM and tumor was determined by the rapid urease test (RUT) and by immunohistochemistry (IHC) using an antibody to H. pylori.\nRUT\nAfter removal of the stomach (within 30 min) the greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. \nAccording to the intensity and the time of appearance of the blue color, we distinguished three degrees of infection: 3+: Marked (+++), bright staining within the first minute of the study; 2+: Moderate (++), staining of average intensity within 2 min; and 1+: Mild (+), weak staining within three minutes. If the color of the indicator did not change, or became dirty gray, and the same result was obtained on repeat testing, the test was evaluated as negative. The same samples were later used for histological analysis and IHC.
Immunohistochemistry\nThe presence and features of H. pylori infection were studied by immunohistochemistry in samples of the GM adjacent to the tumor, in the tumor tissue, and in the omentum and regional lymph nodes of 46 patients.\nThe sections for IHC were dewaxed and rehydrated by sequential immersion in xylene, graded ethanol and water. For antigen retrieval, the sections were boiled for 10 min in citrate buffer (pH 6), and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature for 30 min with the anti-H. pylori rabbit polyclonal antibody (RB-9070, Thermo Fisher Scientific; the immunogen is purified H. pylori) diluted 1:1000. \nThe visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline, and the sections were processed in the same manner.\nThe concentration of H. pylori in the GM detected by IHC was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM, as well as in the omentum and lymph nodes, was taken into account. \nAll sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. \nThe data obtained were compared with the clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS.\nStatistics\nStatistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation.
Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant.\nStatistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant.", "One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg).\nClinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1.\nThe distribution of patients according to the clinical and pathological characteristics of gastric cancer\nGC: Gastric cancer.\nWhen interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs.\nThe long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. 
As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive.", "\nH. pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori.", "After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. \nAccording to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. ", "The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry.\nThe sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. \nThe visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner.\nThe concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. \nAll sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. 
\nThe data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS.", "Statistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant.", "The features of H. pylori infection in GC Of 93 patients (84.5%) demonstrated positive RUT. According to RUT the concentration of H. pylori in GM was mild (1+) in 38 (34.8%) patients, moderate (2+)-in 37 (33.9%) and marked (3+)–in 8 (16.5%). RUT was negative in 16 patients (14.7%).\nIt was found that urease activity 2+ and 3+ were significantly more frequent in GM than in tumors (in 55.5% and 13.3% of cases, respectively, P = 0.01). In more than half the cases (53.3%), the urease activity in the tumor was lower than in the adjacent GM.\nForty-six samples of GM were stained by IHC. The coccoid forms of H. pylori were found to prevail in GM adjacent to the tumor (Figure 1A). Coccoid forms of H. pylori only were identified in 31 (63.1%) patients, and coccoid and spiral in 12 (30.4%) patients (Figure 1B). No signs of infection were found in 3 (6.5%) patients. The concentration of coccoid and spiral forms of H. pylori in GM according to IGH was 1+ in 24 (52.2%) and 1 (19.6%) patients, 2+-in 11 (23.9%) and 4 (8.7%), and 3+-in 8 (17.4%) and 1 (2.2%) patients. A positive correlation between the concentration of coccoid and spiral forms of H. pylori (gamma = 0.642, P < 0.0001) was noted.\n\nThe features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. The some bacteria within the cytoplasm of epithelium cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. \nThe localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% samples), as well as of the intraepithelial lymphocytes (Figure 1F). 
\nThe localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% of samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% of samples), as well as of the intraepithelial lymphocytes (Figure 1F). A close relationship between the presence of point inclusions in immune cells and epithelial cells (gamma = 0.642, P < 0.0001) was noted. Individual cocci or their small clusters, and sometimes short rod-shaped bacteria, were also often detected in the lamina propria of the GM. We also found bacteria in the omentum and lymph nodes. In the omentum the bacteria were represented predominantly by cocci 0.5-1 μm in diameter (Figure 2A). Cocci were most commonly arranged in small compact groups of up to 10-15 cells in the immediate vicinity of the lymph node capsule. The bacteria were located mainly between the adipocytes. However, it was not clear whether the bacteria were outside the cells or within a narrow rim of cytoplasm of the fat cells. \n\nThe features of Helicobacter pylori localization in omentum and lymph node in patients with gastric cancer. A: A small group of cocci located in the central part of an omentum adipocyte; B: Clusters of bacteria around the nuclei of lymphocytes in the paracortical area of a lymph node. Immunoperoxidase staining with anti-Helicobacter pylori antibody, immersion. Bars: 10 μm. \nIn the tissue of lymph nodes we usually observed small accumulations of 10-15 cocci (Figure 2B). The bacteria were located between lymphocytes of the paracortical area, but not as compactly as in the omentum. Quite often the bacteria were concentrated around the nuclei of cells, which may indicate their intracellular localization.\nCorrelations of clinical and pathological characteristics of GC with the severity of H. pylori infection according to RUT data: The gamma correlation coefficient test (gamma) showed that the severity of H. pylori infection in GM according to the RUT positively correlated with T status (gamma = 0.537, P < 0.00001), N status (gamma = 0.371, P = 0.0007) and stage (gamma = 0.520, P < 0.00001), and negatively correlated with the presence of AT in the anamnesis (gamma = -0.418, P = 0.003). The marked (+++) and moderate (++) degrees of H. pylori infection were more often observed in Grade 2 and Grade 3 tumors, in T3-4 status, in N1 status, in the T3-4N1-2 stage, and in the absence of AT in the anamnesis (Table 2). No correlations of H. pylori concentration in GM according to the RUT with 10-year OS and RFS of GC patients were found. \nClinical and pathological characteristics of gastric cancer depending on the rapid urease test data\nRUT: Rapid urease test; GC: Gastric cancer.
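The grade-by-stage comparisons summarized in Table 2 rest on chi-square tests of the categorized data. A minimal sketch with scipy.stats is given below; the 2 x 2 counts are placeholders chosen for illustration, not the published table.

```python
# Chi-square test for a contingency table of H. pylori severity vs stage group.
# The counts are invented placeholders, not the data behind Table 2.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: RUT grade (negative/1+ vs 2+/3+); columns: stage group (T1-2N0 vs T3-4N1-2).
table = np.array([[30, 14],
                  [16, 49]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p_value:.4f}")
```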
\nIt is important to note that the presence of AT 1-1.5 mo before surgery was associated with a significant improvement in RFS and OS (Figure 3); however, this applied only to patients with local GC (T1-3N0). In advanced GC (T3-4N1 and T3-4N2) there were no significant differences in patient survival (Figure 4).\n\n10-year overall survival and relapse-free survival of patients with gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.02). A: 10-year overall survival; B: Relapse-free survival.\n\n10-year overall survival and relapse-free survival of patients with T3-4N1-2M0 stages of gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.78). A: 10-year overall survival; B: Relapse-free survival.\nCorrelations of clinical and pathological characteristics of GC with the concentration of spiral and coccoid forms of H. pylori in the GM: It is important to note that no correlations were found between the clinical and pathological characteristics of GC and the concentration of H. pylori spiral forms in GM. However, the concentration of H. pylori coccoid forms correlated with age (ρ = -0.502, P = 0.0006), histology (gamma = 0.550, P = 0.0004), T status (gamma = 0.709, P = 0.0001), N status (gamma = 0.509, P = 0.002), stage (gamma = 0.636, P = 0.0002), and 10-year RFS (gamma = -0.521, P = 0.008) and OS (gamma = -0.500, P = 0.044). In cases with a moderate or marked concentration (2+ or 3+) of H. pylori coccoid forms in GM, compared with cases with a low concentration (1+ or no infection), the patients were younger (57.9 ± 2.5 years vs 66.2 ± 1.4 years, respectively, P = 0.004), and the diffuse type of GC, poorly differentiated tumors (G3), T3-4 status, and N1 status were more often observed (Table 3). In cases of moderate and marked concentrations of H. pylori in GM, a decrease in 10-year progression-free survival (PFS) and OS from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). The PFS and OS curves, depending on the concentration of coccoid forms of H. pylori in GM, are presented in Figure 5.\n\n10-year overall survival and relapse-free survival of patients with gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa (P = 0.02). A: 10-year overall survival; B: Relapse-free survival.\nClinical and pathological characteristics of gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa\nRUT: Rapid urease test; GC: Gastric cancer.\nNo correlations were found between the presence of point inclusions in the cytoplasm of epithelial cells of deep gastric glands, in the stromal immune cells, or in the intraepithelial lymphocytes and the clinical and pathological characteristics of GC.", "Ninety-three patients (84.5%) demonstrated a positive RUT. According to the RUT, the concentration of H. pylori in GM was mild (1+) in 38 (34.8%) patients, moderate (2+) in 37 (33.9%), and marked (3+) in 18 (16.5%). The RUT was negative in 16 patients (14.7%).\nIt was found that urease activity of 2+ and 3+ was significantly more frequent in GM than in tumors (in 55.5% and 13.3% of cases, respectively, P = 0.01). In more than half the cases (53.3%), the urease activity in the tumor was lower than in the adjacent GM.\nForty-six samples of GM were stained by IHC. The coccoid forms of H. pylori were found to prevail in GM adjacent to the tumor (Figure 1A). Only coccoid forms of H. pylori were identified in 31 (63.1%) patients, and both coccoid and spiral forms in 12 (30.4%) patients (Figure 1B). No signs of infection were found in 3 (6.5%) patients. The concentration of coccoid and spiral forms of H. 
pylori in GM according to IGH was 1+ in 24 (52.2%) and 1 (19.6%) patients, 2+-in 11 (23.9%) and 4 (8.7%), and 3+-in 8 (17.4%) and 1 (2.2%) patients. A positive correlation between the concentration of coccoid and spiral forms of H. pylori (gamma = 0.642, P < 0.0001) was noted.\n\nThe features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. The some bacteria within the cytoplasm of epithelium cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. \nThe localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% samples), as well as of the intraepithelial lymphocytes (Figure 1F). The close relationship between the presence of point inclusions in immune cells and epithelial cells (gamma = 0.642, P < 0.0001) was noted. The individual cocci or their small clusters, and sometimes the short rod bacterium, were also often detected in the lamina propria of the GM. We also found bacteria in the omentum and lymph nodes. In the omentum the bacteria were presented predominantly by cocci between 0.5 and 1 μm in diameter (Figure 2A). Cocci were arranged most commonly by the small compact groups up to 10-15 cells in the immediate vicinity of the LN capsule. The bacteria were located mainly between the adipocytes. However, it was not clear whether bacteria were outside of the cells or in a narrow rim of cytoplasm of the fat cells. \n\nThe features of Helicobacter pylori localization in omentum and lymph node in patients with gastric cancer. A: The small group of cocci located in the central part of the omentum adipocyte; B: The congestions of bacteria around the nucleus of lymphocytes in the paracortical area of lymph node. Immunoperoxidase staining with anti-Helicobacter pylori antibody, immersion, Bars: 10 μm. \nIn the tissue of lymph nodes we usually observed the small accumulations of cocci between 10 and 15 cells (Figure 2B). Bacteria were located between lymphocytes of the paracortical area, but not so compact, as in the omentum. Quite often the concentration of bacteria was observed around the nuclei of cells that can testify to their intracellular localization.", "The gamma correlation coefficient test (gamma) showed that the severity of H. 
pylori in GM according to RUT positively correlated with the T status (gamma = 0.537, P < 0.00001), N status (gamma = 0.371, P = 0.0007) and stage (gamma = 0.520, P < 0.00001), and negatively correlated with the presence of AT in anamnesis (gamma = -0.418, P = 0.003). The marked (+++) and moderate (++) degrees of H. pylori infection were more often observed in Grade 2 and Grade 3, in T3-4 status, in N1 status, in the T3-4N1-2 stage, and in the absence of AT in anamnesis (Table 2). Correlations of H. pylori concentration in GM according to RUT with 10-year OS and RFS of GC patients were not determined. \nClinical and pathological characteristics of gastric cancer depending on the data rapid urease test\nRUT: Rapid urease test; GC: Gastric cancer.\nIt is important to note that the presence in AT 1-1.5 mo before surgery was associated with a significant improvement in RFS and OS (Figure 3), however, this applied only to patients with local GC (T1-3N0). In advanced GC (T3-4N1 and T3-4N2) there were no significant differences in patient survival (Figure 4).\n\n10-year overall surviving and relapse-free surviving of patients with gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.02). A: 10-year overall surviving; B: Relapse-free surviving.\n\n10-year overall surviving and relapse-free surviving of patients with T3-4N1-2M0 stages of gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.78). A: 10-year overall surviving; B: Relapse-free surviving.", "It is important to note that correlations between the clinical and pathological characteristics of GC and the concentration of H. pylori spiral forms in GM were not found. However, the concentration of H. pylori coccoid forms correlated with age (ρ = -0.502, P = 0.0006), histology (gamma = 0.550, P = 0.0004), T status (gamma = 0.709, P = 0.0001), N status (gamma = 0.509, P = 0.002) stage (gamma = 0.636, P = 0.0002), and 10-year RFS (gamma = -0.521, P = 0.008) and OS (gamma = -0.500, P = 0.044). In cases with a moderate and marked concentration (2+ or 3+) of H. pylori coccoid forms in GM compared to cases with a low concentration (1+ or without infection) the patients were younger (57.9 ± 2.5 years vs 66.2 ± 1.4 years, respectively, P = 0.004) and the diffuse type of GC, poorly differentiated tumors (G3), T3-4 stage and N1 stage of GC were more often observed (Table 3). In cases of moderate and marked concentrations of H. pylori in GM, a decrease in 10-year progression-free survival (PFS) and OS survival from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). PFS and OS curves, depending on the concentration of coccoid forms of H. pylori in GM, are presented in Figure 5.\n\n10-year overall surviving and relapse-free surviving of patients with gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa (P = 0.02). A: 10-year overall surviving; B: Relapse-free surviving.\nClinical and pathological characteristics of gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa\nRUT: Rapid urease test; GC: Gastric cancer.\nThere were no correlations between the presence of point inclusions in the cytoplasm of epithelial cells of deep gastric glands, in the stroma immune cells, and in the intraepithelial lymphocytes with the clinical and pathological characteristics of GC.", "A large amount of clinical and experimental data testifies to the important role of H. 
pylori in the occurrence of GC[31-34], but there is less research into the features of H. pylori infection in patients with GC and its role in tumor progression, and the results are quite contradictory. These contradictions relate to many aspects, such as: (1) The frequency of infection in patients with GC. These data vary widely, ranging from 36% to 100%[35-40]; (2) The relation of infection to GC prognosis. Some researchers have noted an improvement in the long-term results of GC treatment in infected patients[41,42], while others, on the contrary, found that the presence of H. pylori infection was associated with a decrease in patient survival[43,44]; and (3) The connection between the infection and the histologic type of GC. In some studies, it was noted that patients with the intestinal type of GC are more often infected with H. pylori than patients with the diffuse type of GC[45,46]. Other researchers did not find a difference in H. pylori infection between patients with different histological types of tumors[47].\nThese differences are believed to arise because the methods used to detect the infection differed considerably in sensitivity, primarily with respect to the coccoid forms of the bacteria. Most of the studies were carried out without considering the coccoid forms of H. pylori or the concentration of bacteria in GM. The use of a biochemical method for the detection of urease activity and of immunohistochemistry for visualization of bacteria in our study allowed us not only to assess the presence of infection in patients with GC, but also to characterize some of its features: the localization of bacteria in the stomach, the ratio of coccoid to spiral forms, and the degree of bacterial colonization of the GM.\nOur data for the Orenburg region showed a high rate of H. pylori infection in patients with GC (84.5%). The coccoid forms of H. pylori, which preserve a high degree of urease activity, dominated in the GM of patients with GC. They were found in 93.4% of infected patients, and only coccoid forms were present in 68.9%. \nIt is known that the coccoid forms of H. pylori can arise in response to unfavorable environmental factors, such as AT[47,48]. These forms are resistant to AT[49,50], are able to form biofilms[51], and avoid the immune system[50]. They express a higher rate of cagE mRNA than their spiral counterparts[52], and by increasing the synthesis of tumor necrosis factor-alpha (TNF-α)-inducing protein (Hps), which is introduced into the cytosol and cell nuclei, they can activate nuclear factor-kappaB (NF-κB) and the expression of TNF-α and other cytokines involved in carcinogenesis[53,54]. The effect of H. pylori coccoid forms on the proliferation of gastric epithelial cells is also stronger than that of the helical forms[55], and they can induce the expression of VEGF-A and transforming growth factor-β[56]. The ability to transform into coccoid forms was also found to be characteristic of the most virulent strains of H. pylori[50,54].\nIt should be noted that a higher rate of infection by coccoid forms of H. pylori in patients with GC, compared with patients with gastritis or gastric ulcer, has been mentioned by other researchers[57]. A number of studies have shown that coccoid forms of H. 
pylori retained urease activity[58] and the expression of such antigens as CagA, UreA, porin, components of the Cag type IV secretion system (TFSS), antigen-binding adhesin of the blood group BabA and others[59,60].\nThe use of immunohistochemistry in this study made it possible to detect the bacteria not only in the gastric mucus and on the surface of epithelial cells, but also within the cytoplasm of epithelial and immune cells of GM. Such intracellular expression was characterized by point inclusions giving a specific reaction with antibodies to H. pylori.\nThe intracellular persistence of H. pylori has been demonstrated by many investigators. They found H. pylori in the cytoplasm of epithelial cells, intercellular spaces, in the lamina propria of GM, and in the lumen of small vessels[61-63]. We assume that the point inclusions in the cytoplasm of epithelial and immune cells giving a positive reaction with antibodies to H. pylori is similar to those particle-rich cytoplasmic structure (PaCS) described earlier in the human superficial-foveolar epithelium and its metaplastic or dysplastic foci[64]. The authors found that the PaCS are a colocalization of VacA, CagA, urease, and outer membrane proteins with NOD1 receptor, ubiquitin-activating enzyme E1, polyubiquitinated proteins, proteasome components, and potentially oncogenic proteins like SHP2 and ERKs[64]. They believe that PaCS is a novel, proteasome-enriched structure arising in ribosome-rich cytoplasm at sites of H. pylori product accumulation. \nWe believe that the immune cells with point inclusions in the lamina propria of GM are likely to be macrophages. The data obtained by several researchers suggests this conclusion[65,66]. It is noted that even the absorbed bacteria retains their viability in macrophages, which may be associated with the violation of the phagosome maturation[66-68]. The use of confocal microscopy enabled the localization of the bacteria within the cells to be associated with the endosomal and lysosomal markers, and found that H. pylori could use the vesicles of autophagosomes (autophagic vesicles) for its own replication[63,69]. \nThe study found that the concentration of H. pylori coccoid forms in GM was the most significant clinical factor. This factor was associated with the tumor histology, T status, N status, stage, 10-year PFS, and OS. The moderate and marked concentrations of coccoid forms of H. pylori were more often found in the diffuse type of GC (P = 0.024) and T3-4 (P = 0.04) stage. Interestingly, the high concentration of H. pylori is more frequent in Stage N1 than in N2 (at 90.0% and 53.1%, respectively, P = 0.024).\nThe moderate and marked concentrations of coccoid forms of H. pylori represented a prognostic factor associated with the decrease of 10-year RFS and OS from 55.6% to 26.3% (P = 0.02 and P = 0.07, respectively).\nIt should be noted that the results of this study do not allow us to unambiguously judge the effect of H. pylori on GC progression. A decrease in OS and DFS in patients with moderate and marked concentrations of H. pylori coccoid forms in the GM may be due to the fact that these patients had more advanced stages and more aggressive forms of GC. Meanwhile, there are more and more studies showing that H. 
pylori infection can promote GC progression by activating the NF-κB signaling pathway and inducing interleukin-8 secretion[70], by activating epithelial-mesenchymal transformation[71-74] and angiogenesis[75,76], and by increasing the invasive properties of tumor cells[77]. It can be assumed that the administration of AT before surgery contributes to a reduction in the activity of the inflammatory process and normalization of the adhesive properties of tumor cells, which in turn decreases the risk of metastasis and improves the long-term results of GC treatment. Literature data on the improvement of the long-term results of malignant tumor treatment with the use of antibacterial drugs support this hypothesis[78-80].", "The data obtained indicate that H. pylori may be associated not only with the induction but also with the progression of GC. It can be assumed that the prevalence of coccoid forms of the bacteria and their intracellular persistence can affect the mechanisms of tumor progression. Further studies of the role of H. pylori in the progression of GC are clearly warranted." ]
[ null, null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[ "Gastric cancer", "\nHelicobacter pylori\n", "Coccoid and spiral forms of bacteria", "Rapid urease test", "Relapse free survival", "Overall survival" ]
INTRODUCTION: Gastric cancer (GC) continues to be one of the most common malignant diseases in the world[1,2]. Despite a decreasing trend in the incidence of GC in most countries of the world, the treatment results of this pathology cannot be considered satisfactory. In the structure of mortality from malignant neoplasms, this pathology firmly occupies 2nd place in most developed countries of the world, and the 5-year survival rate of radically operated patients does not exceed 15%-30%[3,4]. It is important to note that it is impossible to improve the long-term results of malignant neoplasms treatment without knowledge of the mechanisms associated with their progression[3]. Clinical studies in recent years indicate that inflammatory infiltration of the tumor stroma and surrounding tissues can have an important prognostic value and affect the long-term results of malignant neoplasm treatment[5-9]. A number of studies have shown that inflammatory infiltration of the tumor stroma is associated with the body’s adequate immune response to the tumor, and may be a favorable prognosis factor in various malignant neoplasms[10,11], including GC[12,13]. At the same time, the data obtained by other researchers indicates that pronounced inflammatory infiltration of the tumor stroma, especially T-reg lymphocytes and macrophages, may be a factor contributing to the progression of malignant neoplasms[14,15]. There was a decrease in the overall survival (OS) and relapse-free survival (RFS) of GC patients with a high content of Foxp3 + T-reg in the tumor stroma and regional metastases[16-18] and macrophages[16,19]. It is believed that inflammatory infiltration of the tumor stroma can contribute to tumor progression by activating the mechanisms of angiogenesis, expression of E- and L-selectins, formation of the products of lipid peroxidation and free radicals, destruction of connective tissue matrix and basement membranes of epithelia by proteolytic enzymes, and activation of epithelial-mesenchymal transformation[20-23]. When studying the role of inflammation in the progression of GC, it is impossible to ignore the problem of Helicobacter pylori (H. pylori) infection. H. pylori is a gram-negative, spiral-shaped bacterium, the habitat of which is the gastric mucosa (GM) and duodenum. H. pylori differs from other bacteria in a set of properties that make it possible to colonize the GM and persist for a long time under conditions that are unfavorable for other microorganisms[24,25]. These include: (1) The ability to produce a special enzyme-urease; (2) Synthesis of lytic enzymes that cause the depolymerization and dissolution of gastric mucus, consisting mainly of mucin; (3) The mobility of the bacterium, which is ensured by the presence of 5-6 flagella; (4) The high adhesiveness of bacteria to GM epithelial cells of the GM and elements of connective tissue due to the interaction of bacterial ligands with the corresponding cells receptors; (5) Production of various exotoxins (VacA, CagA, and others); (6) Instability of the H. pylori genome; (7) The presence of vegetative and coccoid forms of bacteria; and (8) possibility of intracellular persistence and translocation outside the GM [26-29]. It should be noted that despite the huge number of studies devoted to H. pylori, it is still not clear whether H. pylori is involved only in the initiation of the tumor process in the stomach, or whether it can affect the mechanisms of tumor progression. The relationship between the severity of H. 
pylori infection and the clinical and morphological characteristics of GC and long-term results of this pathology treatment remains poorly studied, and in this connection, the question of the expediency of anti-Helicobacter therapy in patients with invasive GC remains open. Objective To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results. To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results. Objective: To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results. MATERIALS AND METHODS: The patients One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg). Clinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1. The distribution of patients according to the clinical and pathological characteristics of gastric cancer GC: Gastric cancer. When interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs. 
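The antibiotic-therapy criterion described above (AT started 1-1.5 mo before surgery, lasting at least seven days, with two or more antibacterial drugs) amounts to a simple eligibility filter. A hedged sketch of how such a filter could be encoded is shown below; the column names and the 30-45 day window are assumptions made for illustration, not part of the study records.

```python
# Hypothetical encoding of the AT-before-surgery criterion; column names are assumed.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "days_from_at_start_to_surgery": [40, 10, 35],
    "at_duration_days": [10, 14, 5],
    "n_antibacterial_drugs": [2, 2, 3],
})

at_counted = (
    patients["days_from_at_start_to_surgery"].between(30, 45)  # roughly 1-1.5 mo
    & (patients["at_duration_days"] >= 7)
    & (patients["n_antibacterial_drugs"] >= 2)
)
patients["at_before_surgery"] = at_counted
print(patients)
```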
The long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive. One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg). Clinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1. The distribution of patients according to the clinical and pathological characteristics of gastric cancer GC: Gastric cancer. When interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs. The long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. 
As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive. Detection H. pylori infection H. pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori. H. pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori. RUT After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. According to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. According to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. Immunohistochemistry The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry. The sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. 
For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. The visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner. The concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. All sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. The data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS. The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry. The sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. The visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner. The concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. All sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. The data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS. Statistics Statistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. 
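For the nonparametric tests named in this statistics description, scipy.stats provides direct equivalents of the Statistica procedures. The sketch below uses invented values and is only meant to show the calls (Spearman rank correlation and the Mann-Whitney U test).

```python
# Illustrative calls for the nonparametric tests described in the text (toy data).
from scipy.stats import spearmanr, mannwhitneyu

# Spearman rank correlation, e.g., coccoid-form grade vs patient age.
coccoid_grade = [0, 1, 1, 2, 2, 3, 3, 2]
age = [70, 68, 65, 60, 58, 55, 52, 61]
rho, p_rho = spearmanr(coccoid_grade, age)

# Mann-Whitney U test, e.g., age in low vs high coccoid-concentration groups.
age_low = [70, 68, 66, 72, 65]
age_high = [58, 55, 60, 57, 62]
u_stat, p_u = mannwhitneyu(age_low, age_high, alternative="two-sided")

print(f"Spearman rho = {rho:.2f} (P = {p_rho:.3f}); Mann-Whitney U = {u_stat} (P = {p_u:.3f})")
```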
The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant. Statistical analysis was performed using the Statistica 10.0 software. The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant. The patients: One hundred and nine patients with GC who had undergone radical surgery (R0) between May 2007 and March 2010 at the Orenburg Regional Clinical Oncology Center, were included in this prospective cohort pilot study. Study inclusion criteria were: Histologically proven invasive GC; no evidence of distant metastases; radical surgery (R0); no prior gastric surgery; no previous chemotherapy or radiotherapy. The study did not include patients with decompensation of cardiovascular and renal diseases, exacerbation of chronic inflammatory processes, severe allergic processes, or who received glucocorticoids, antihistamines, and non-steroidal anti-inflammatory drugs. The study was performed in accordance with the Helsinki Declaration and internationally recognized guidelines, and the privacy of patients was protected by decoding the data according to the privacy regulations of the Orenburg Regional Clinical Oncologic Center (Russia, Orenburg). All patients provided written informed consent. The protocol was approved by the Institutional Review Board of the Orenburg State Medical University (Russia, Orenburg). Clinical and pathological data including age, tumor localization, stage, type of surgery, histology, the presence of AT before surgery, postoperative therapy, and long-term results of treatment were retrieved from the routine reports for analyses. The distribution of patients according to the clinical and pathological characteristics of GC is presented in Table 1. The distribution of patients according to the clinical and pathological characteristics of gastric cancer GC: Gastric cancer. When interviewing patients, it was found that 45 patients received AT before admission to the clinic due to a preliminary wrong diagnosis of gastric ulcer or chronic gastritis. The following combinations of antibacterial drugs were most commonly ordered: Amoxicillin + clarithromycin (34 patients), amoxicillin + clarithromycin + metronidazole (6 patients), amoxicillin + metronidazole (3 patients), other drugs (2 patients). Information about the antibacterial drugs, the timing and duration of their intake was entered into the primary patient documentation and then taken into account in the analysis. We considered only those patients who underwent AT in the period from 1 to 1.5 mo before the operation, lasting at least seven days, and using two or more antibacterial drugs. The long-term results of treatment were assessed for the period from May 12, 2007 to April 12, 2021. The median follow-up period was 86.2 mo. As of April 12, 2021, 26 (24.5%) patients were alive, 54 (50.9%) had died from the progression of GC, 20 (18.9%) had died from causes other than GC, and six (5.6%) left the region at different follow-up periods. 
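The follow-up statuses listed above translate into event indicators for the OS and RFS analyses. The exact coding rules are not spelled out in the text, so the sketch below is an assumption-laden illustration: death from any cause is treated as an OS event, relapse or death as an RFS event, and patients who left the region are censored at last contact.

```python
# Hypothetical OS/RFS event coding; statuses, times, and rules are illustrative only.
import pandas as pd

follow_up = pd.DataFrame({
    "status": ["alive", "died_gc", "died_other_cause", "left_region"],
    "months": [120, 34, 78, 45],
    "relapse": [False, True, False, False],
})

follow_up["os_event"] = follow_up["status"].isin(["died_gc", "died_other_cause"])
follow_up["rfs_event"] = follow_up["relapse"] | follow_up["os_event"]
print(follow_up)
```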
Malignant tumors of other localizations were diagnosed in 8 patients at different times after the operation: Non-Hodgkin's lymphomas-in three, prostate cancer-in one, lung cancer-in two, laryngeal cancer-in one, and breast cancer-in one patient. With the exception of one patient with non-Hodgkin's lymphoma, the other patients died from the progression of these diseases. The causes of death were not associated with malignant tumors for the other patients. During the period 2020-2021, seven patients contracted coronavirus disease-19, one of whom died from the disease, but the rest are alive. Detection H. pylori infection: H. pylori in the GM and tumor was determined by rapid urease test (RUT) and by immunohistochemically (IHC) using the antibody to H. pylori. RUT: After removal of the stomach (within 30 min) a greater curvature of the organ was opened and biopsy samples were taken from the tumor and the adjacent macroscopically non-tumorous GM at a distance of 3-5 cm from the tumor margin. The samples were placed on test strips (HELPIL–test, “АМА”, Russia) for three minutes. The presence and the severity of the infection were evaluated by the color change of the indicator from yellow to blue. According to the intensity and the time of the appearance of a blue color, we distinguished three degrees of infection: 3+: Marked (+++)-bright staining in the first minute of the study; 2+: Moderate (++)–for an average intensity of staining for 2 min and 1+ mild (+)-weak staining for three minutes. If the color of the indicator did not change, or became dirty gray, and if after repeated research the same result was received, the test was evaluated as negative. The same samples were later used for histological analysis and IGH. Immunohistochemistry : The presence and features of H. pylori infection were studied in samples of the GM adjacent to the tumor, in the tumor tissue, in the omentum, and regional lymph nodes of 46 patients, by immunohistochemistry. The sections for IGH were dewaxed and rehydrated by sequential immersion in xylene and graded ethanol and water. For antigen retrieval, the sections boiling for 10 min in citrate buffer (pH 6) and endogenous peroxidase activity was blocked with 30 mL/L hydrogen peroxide solution. Slides were incubated at room temperature with the anti-H. pylori (RB-9070, Thermo Fisher Scientific, the immunogen is purified H. pylori) rabbit polyclonal antibodies in diluted at 1:1000 for 30 min. The visualization system included DAB (UltraVision LP Detection System HRP Polymer & DAB Plus Chromogen) and hematoxylin counterstaining. For negative control sections, the primary antibody was replaced with phosphate-buffered saline and processed in the same manner. The concentration of H. pylori in the GM detected by IGH was graded as 1+ for mild, 2+ for moderate, and 3+ for marked according to the Sydney system[30]. The presence of point inclusions giving a positive reaction with antibodies to H. pylori in the cytoplasm of epithelial cells of deep gastric glands and of the lymphoid cells of the lamina propria of GM as well as in omentum and lymph node, were taken into account. All sections were carefully and completely scanned by two of the authors (MS and OT) without knowledge of the clinical and pathological data. The data obtained was compared with clinical features of GC: Stage, localization, histology, the presence of AT before surgery, and 10-year OS and RFS. Statistics: Statistical analysis was performed using the Statistica 10.0 software. 
The correlations between different data were evaluated using the nonparametric Spearman's rank correlation or gamma correlation. Chi-square tests were carried out to analyze the difference of distribution among the categorized data. Mann–Whitney U nonparametric test was used to compare the value of the quantitative data. The survival was analyzed using the Kaplan-Meier method. The log-rank test was used to compare survival curves between subgroups of patients. A value of P < 0.05 was considered statistically significant. RESULTS: The features of H. pylori infection in GC Of 93 patients (84.5%) demonstrated positive RUT. According to RUT the concentration of H. pylori in GM was mild (1+) in 38 (34.8%) patients, moderate (2+)-in 37 (33.9%) and marked (3+)–in 8 (16.5%). RUT was negative in 16 patients (14.7%). It was found that urease activity 2+ and 3+ were significantly more frequent in GM than in tumors (in 55.5% and 13.3% of cases, respectively, P = 0.01). In more than half the cases (53.3%), the urease activity in the tumor was lower than in the adjacent GM. Forty-six samples of GM were stained by IHC. The coccoid forms of H. pylori were found to prevail in GM adjacent to the tumor (Figure 1A). Coccoid forms of H. pylori only were identified in 31 (63.1%) patients, and coccoid and spiral in 12 (30.4%) patients (Figure 1B). No signs of infection were found in 3 (6.5%) patients. The concentration of coccoid and spiral forms of H. pylori in GM according to IGH was 1+ in 24 (52.2%) and 1 (19.6%) patients, 2+-in 11 (23.9%) and 4 (8.7%), and 3+-in 8 (17.4%) and 1 (2.2%) patients. A positive correlation between the concentration of coccoid and spiral forms of H. pylori (gamma = 0.642, P < 0.0001) was noted. The features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. The some bacteria within the cytoplasm of epithelium cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. The localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% samples), as well as of the intraepithelial lymphocytes (Figure 1F). The close relationship between the presence of point inclusions in immune cells and epithelial cells (gamma = 0.642, P < 0.0001) was noted. 
The individual cocci or their small clusters, and sometimes the short rod bacterium, were also often detected in the lamina propria of the GM. We also found bacteria in the omentum and lymph nodes. In the omentum the bacteria were presented predominantly by cocci between 0.5 and 1 μm in diameter (Figure 2A). Cocci were arranged most commonly by the small compact groups up to 10-15 cells in the immediate vicinity of the LN capsule. The bacteria were located mainly between the adipocytes. However, it was not clear whether bacteria were outside of the cells or in a narrow rim of cytoplasm of the fat cells. The features of Helicobacter pylori localization in omentum and lymph node in patients with gastric cancer. A: The small group of cocci located in the central part of the omentum adipocyte; B: The congestions of bacteria around the nucleus of lymphocytes in the paracortical area of lymph node. Immunoperoxidase staining with anti-Helicobacter pylori antibody, immersion, Bars: 10 μm. In the tissue of lymph nodes we usually observed the small accumulations of cocci between 10 and 15 cells (Figure 2B). Bacteria were located between lymphocytes of the paracortical area, but not so compact, as in the omentum. Quite often the concentration of bacteria was observed around the nuclei of cells that can testify to their intracellular localization. Of 93 patients (84.5%) demonstrated positive RUT. According to RUT the concentration of H. pylori in GM was mild (1+) in 38 (34.8%) patients, moderate (2+)-in 37 (33.9%) and marked (3+)–in 8 (16.5%). RUT was negative in 16 patients (14.7%). It was found that urease activity 2+ and 3+ were significantly more frequent in GM than in tumors (in 55.5% and 13.3% of cases, respectively, P = 0.01). In more than half the cases (53.3%), the urease activity in the tumor was lower than in the adjacent GM. Forty-six samples of GM were stained by IHC. The coccoid forms of H. pylori were found to prevail in GM adjacent to the tumor (Figure 1A). Coccoid forms of H. pylori only were identified in 31 (63.1%) patients, and coccoid and spiral in 12 (30.4%) patients (Figure 1B). No signs of infection were found in 3 (6.5%) patients. The concentration of coccoid and spiral forms of H. pylori in GM according to IGH was 1+ in 24 (52.2%) and 1 (19.6%) patients, 2+-in 11 (23.9%) and 4 (8.7%), and 3+-in 8 (17.4%) and 1 (2.2%) patients. A positive correlation between the concentration of coccoid and spiral forms of H. pylori (gamma = 0.642, P < 0.0001) was noted. The features of Helicobacter pylori localization in gastric mucosa in patients with gastric cancer. A: Coccoid forms of Helicobacter pylori (H. pylori) in the gastric pit. The some bacteria within the cytoplasm of epithelium cells (arrows); B: The spiral (black arrows) and coccoid (orange arrows) forms of H. pylori on the surface of superficial-foveolar gastric epithelium; C: The bacteria in the surface mucous gel layer of stomach (arrows); D: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of epithelial cells of deep gastric glands (arrows); E: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of the immune cells of the lamina propria of gastric mucosa (arrows); F: The point inclusions giving a positive reaction with antibodies to H. pylori within the cytoplasm of intraepithelial lymphocytes (arrows). Immunoperoxidase staining with anti-H. pylori antibody, immersion. Bars: A: 20 μm; B: 10 μm; C: 20 μm; D-F: 10 μm. 
The localization of H. pylori in GM was observed not only in the surface mucous gel layer of the stomach (Figure 1C) but also within the cytoplasm of the gastric superficial-foveolar epithelium (Figure 1D). The point inclusions giving a positive reaction with antibodies to H. pylori were also revealed in the cytoplasm of the epithelial cells of deep gastric glands (in 41.3% samples) and of the immune cells (Figure 1E) of the lamina propria of GM (in 43.5% samples), as well as of the intraepithelial lymphocytes (Figure 1F). The close relationship between the presence of point inclusions in immune cells and epithelial cells (gamma = 0.642, P < 0.0001) was noted. The individual cocci or their small clusters, and sometimes the short rod bacterium, were also often detected in the lamina propria of the GM. We also found bacteria in the omentum and lymph nodes. In the omentum the bacteria were presented predominantly by cocci between 0.5 and 1 μm in diameter (Figure 2A). Cocci were arranged most commonly by the small compact groups up to 10-15 cells in the immediate vicinity of the LN capsule. The bacteria were located mainly between the adipocytes. However, it was not clear whether bacteria were outside of the cells or in a narrow rim of cytoplasm of the fat cells. The features of Helicobacter pylori localization in omentum and lymph node in patients with gastric cancer. A: The small group of cocci located in the central part of the omentum adipocyte; B: The congestions of bacteria around the nucleus of lymphocytes in the paracortical area of lymph node. Immunoperoxidase staining with anti-Helicobacter pylori antibody, immersion, Bars: 10 μm. In the tissue of lymph nodes we usually observed the small accumulations of cocci between 10 and 15 cells (Figure 2B). Bacteria were located between lymphocytes of the paracortical area, but not so compact, as in the omentum. Quite often the concentration of bacteria was observed around the nuclei of cells that can testify to their intracellular localization. Correlations of clinical and pathological characteristics of GC with the severity of H. pylori infection according to RUT data The gamma correlation coefficient test (gamma) showed that the severity of H. pylori in GM according to RUT positively correlated with the T status (gamma = 0.537, P < 0.00001), N status (gamma = 0.371, P = 0.0007) and stage (gamma = 0.520, P < 0.00001), and negatively correlated with the presence of AT in anamnesis (gamma = -0.418, P = 0.003). The marked (+++) and moderate (++) degrees of H. pylori infection were more often observed in Grade 2 and Grade 3, in T3-4 status, in N1 status, in the T3-4N1-2 stage, and in the absence of AT in anamnesis (Table 2). Correlations of H. pylori concentration in GM according to RUT with 10-year OS and RFS of GC patients were not determined. Clinical and pathological characteristics of gastric cancer depending on the data rapid urease test RUT: Rapid urease test; GC: Gastric cancer. It is important to note that the presence in AT 1-1.5 mo before surgery was associated with a significant improvement in RFS and OS (Figure 3), however, this applied only to patients with local GC (T1-3N0). In advanced GC (T3-4N1 and T3-4N2) there were no significant differences in patient survival (Figure 4). 10-year overall surviving and relapse-free surviving of patients with gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.02). A: 10-year overall surviving; B: Relapse-free surviving. 
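The survival comparisons behind Figures 3 and 4 are Kaplan-Meier curves compared with the log-rank test. A sketch of the same kind of analysis with the lifelines package is given below; the small data frame is invented and the package choice is an assumption (the study used Statistica).

```python
# Kaplan-Meier curves and log-rank comparison by antibiotic-therapy group (toy data).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [12, 24, 30, 45, 50, 60, 80, 95, 110, 120],
    "event": [1, 1, 1, 0, 1, 1, 0, 1, 0, 0],          # 1 = death, 0 = censored
    "at_before_surgery": [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
})

kmf = KaplanMeierFitter()
for label, group in df.groupby("at_before_surgery"):
    kmf.fit(group["months"], group["event"], label=f"AT = {label}")
    print(kmf.survival_function_.tail(1))

result = logrank_test(
    df.loc[df.at_before_surgery == 1, "months"],
    df.loc[df.at_before_surgery == 0, "months"],
    event_observed_A=df.loc[df.at_before_surgery == 1, "event"],
    event_observed_B=df.loc[df.at_before_surgery == 0, "event"],
)
print(f"log-rank P = {result.p_value:.3f}")
```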
10-year overall survival and relapse-free survival of patients with T3-4N1-2M0 stage gastric cancer depending on the presence of antibiotic therapy 1-1.5 mo before surgery (P = 0.78). A: 10-year overall survival; B: Relapse-free survival. Correlations of clinical and pathological characteristics of GC with the concentration of spiral and coccoid forms of H. pylori in the GM: No correlations were found between the clinical and pathological characteristics of GC and the concentration of H. pylori spiral forms in GM. However, the concentration of H. pylori coccoid forms correlated with age (ρ = -0.502, P = 0.0006), histology (gamma = 0.550, P = 0.0004), T status (gamma = 0.709, P = 0.0001), N status (gamma = 0.509, P = 0.002), stage (gamma = 0.636, P = 0.0002), and 10-year RFS (gamma = -0.521, P = 0.008) and OS (gamma = -0.500, P = 0.044). In cases with a moderate or marked concentration (2+ or 3+) of H. pylori coccoid forms in GM, compared to cases with a low concentration (1+ or no infection), the patients were younger (57.9 ± 2.5 years vs 66.2 ± 1.4 years, P = 0.004), and the diffuse type of GC, poorly differentiated tumors (G3), T3-4 status and N1 status were observed more often (Table 3). In cases of moderate and marked concentrations of H. pylori in GM, a decrease in 10-year relapse-free survival (RFS) and OS from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). The RFS and OS curves according to the concentration of coccoid forms of H. pylori in GM are presented in Figure 5. 10-year overall survival and relapse-free survival of patients with gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa (P = 0.02). A: 10-year overall survival; B: Relapse-free survival.
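The ordinal associations reported above were assessed with the Goodman-Kruskal gamma coefficient. As a rough illustration only, the sketch below (plain Python with hypothetical grades, not the study data, which were analyzed in Statistica 10.0) shows how such a coefficient is computed from concordant and discordant pairs.

```python
# Minimal sketch of the Goodman-Kruskal gamma statistic used for the ordinal
# correlations reported above (e.g., H. pylori grade vs. T status).
# The example data below are hypothetical, not taken from the study.

from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """Gamma = (C - D) / (C + D), where C and D are the numbers of
    concordant and discordant pairs; tied pairs are ignored."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        product = (x1 - x2) * (y1 - y2)
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical example: H. pylori grade (0-3) and T status (1-4) per patient
hp_grade = [0, 1, 1, 2, 2, 3, 3, 2, 1, 0]
t_status = [1, 1, 2, 2, 3, 4, 3, 3, 2, 1]
print(round(goodman_kruskal_gamma(hp_grade, t_status), 3))
```

A gamma close to +1 means that higher H. pylori grades tend to accompany higher T status, while values near 0 indicate no ordinal association.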
Clinical and pathological characteristics of gastric cancer depending on the concentration of Helicobacter pylori coccoid forms in the gastric mucosa. RUT: Rapid urease test; GC: Gastric cancer. There were no correlations between the presence of point inclusions in the cytoplasm of the epithelial cells of deep gastric glands, the stromal immune cells, or the intraepithelial lymphocytes and the clinical and pathological characteristics of GC.
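The 10-year OS and RFS comparisons reported above are Kaplan-Meier-type survival analyses. The sketch below is only an illustration of such a comparison; it assumes the Python lifelines package and entirely hypothetical follow-up data, and is not the authors' Statistica workflow.

```python
# Illustrative sketch: comparing 10-year OS between patients with low (0/1+)
# vs. moderate-marked (2+/3+) coccoid-form concentration using Kaplan-Meier
# estimates and a log-rank test. All values below are hypothetical.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# months to death or censoring (capped at 120 mo = 10 years) for two groups
t_low = rng.exponential(90, 30).clip(max=120)   # low coccoid concentration
t_high = rng.exponential(45, 30).clip(max=120)  # moderate/marked concentration
e_low = (t_low < 120).astype(int)               # 1 = death observed, 0 = censored
e_high = (t_high < 120).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(t_low, e_low, label="coccoid 0/1+")
print(kmf.survival_function_.tail(1))           # ~10-year survival estimate

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank P = {res.p_value:.3f}")
```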
DISCUSSION: A large amount of clinical and experimental data testifies to the important role of H. pylori in the occurrence of GC[31-34]; however, there is less research into the features of H.
pylori infection in patients with GC and its role in tumor progression, and the available results are quite contradictory. These contradictions relate to several aspects: (1) The frequency of infection in patients with GC, which varies widely, from 36% to 100%[35-40]; (2) The relation of infection to GC prognosis. Some researchers have noted an improvement in the long-term results of GC treatment in infected patients[41,42], while others, on the contrary, found that the presence of H. pylori infection was associated with a decrease in patient survival[43,44]; and (3) The connection between the infection and the histologic type of GC. Some studies noted that patients with the intestinal type of GC are more often infected with H. pylori than patients with the diffuse type[45,46], whereas other researchers found no difference in H. pylori infection between patients with different histological types of tumors[47]. These differences are believed to arise because the methods used to detect the infection differed markedly in their sensitivity, primarily to coccoid forms of the bacteria. Most of the studies were carried out without considering coccoid forms of H. pylori or the concentration of bacteria in GM. The use of a biochemical method for the detection of urease activity together with immunohistochemistry for visualization of bacteria in our study allowed us not only to assess the presence of infection in patients with GC, but also to characterize some of its features, including the localization of bacteria in the stomach, the ratio of coccoid to spiral forms, and the degree of bacterial contamination of the GM. Our data for the Orenburg region recorded a high rate of H. pylori infection in patients with GC (84.5%). Coccoid forms of H. pylori, which preserved a high degree of urease activity, dominated in the GM of patients with GC. They were found in 93.4% of infected patients, and only coccoid forms of H. pylori were found in 68.9%. It is known that coccoid forms of H. pylori can arise in response to unfavorable environmental factors, such as AT[47,48]. These forms are resistant to AT[49,50], are able to form biofilms[51], and can evade the immune system[50]. They express cagE mRNA at a higher rate than their spiral counterparts[52], and by increasing the synthesis of tumor necrosis factor-alpha (TNF-α)-inducing protein (Hps), which is introduced into the cytosol and cell nuclei, they can activate nuclear factor-kappaB (NF-κB) and the expression of TNF-α and other cytokines involved in carcinogenesis[53,54]. The effect of H. pylori coccoid forms on the proliferation of gastric epithelial cells is also stronger than that of the helical forms[55], and they can induce the expression of VEGF-A and transforming growth factor-β[56]. The ability to transform into coccoid forms was also found to be characteristic of the most virulent strains of H. pylori[50,54]. It should be noted that higher infection by coccoid forms of H. pylori in patients with GC, compared with patients with gastritis or gastric ulcer, has been reported by other researchers[57]. A number of studies have shown that coccoid forms of H. pylori retain urease activity[58] and the expression of antigens such as CagA, UreA, porin, components of the Cag type IV secretion system (TFSS), the blood group antigen-binding adhesin BabA, and others[59,60].
The use of immunohistochemistry in this study made it possible to detect the bacteria not only in the gastric mucus and on the surface of epithelial cells, but also within the cytoplasm of epithelial and immune cells of the GM. Such intracellular expression was characterized by point inclusions giving a specific reaction with antibodies to H. pylori. The intracellular persistence of H. pylori has been demonstrated by many investigators, who found H. pylori in the cytoplasm of epithelial cells, in intercellular spaces, in the lamina propria of the GM, and in the lumen of small vessels[61-63]. We assume that the point inclusions in the cytoplasm of epithelial and immune cells giving a positive reaction with antibodies to H. pylori are similar to the particle-rich cytoplasmic structures (PaCS) described earlier in the human superficial-foveolar epithelium and its metaplastic or dysplastic foci[64]. The authors found that PaCS represent a colocalization of VacA, CagA, urease, and outer membrane proteins with the NOD1 receptor, ubiquitin-activating enzyme E1, polyubiquitinated proteins, proteasome components, and potentially oncogenic proteins such as SHP2 and ERKs[64]. They believe that the PaCS is a novel, proteasome-enriched structure arising in ribosome-rich cytoplasm at sites of H. pylori product accumulation. We believe that the immune cells with point inclusions in the lamina propria of the GM are likely to be macrophages; the data obtained by several researchers support this conclusion[65,66]. It has been noted that even engulfed bacteria retain their viability in macrophages, which may be associated with impaired phagosome maturation[66-68]. Confocal microscopy has made it possible to relate the localization of the bacteria within cells to endosomal and lysosomal markers and has shown that H. pylori can use autophagosome vesicles (autophagic vesicles) for its own replication[63,69]. The study found that the concentration of H. pylori coccoid forms in GM was the most significant clinical factor. This factor was associated with tumor histology, T status, N status, stage, and 10-year RFS and OS. Moderate and marked concentrations of coccoid forms of H. pylori were found more often in the diffuse type of GC (P = 0.024) and in T3-4 status (P = 0.04). Interestingly, a high concentration of H. pylori was more frequent in N1 than in N2 status (90.0% and 53.1%, respectively, P = 0.024). Moderate and marked concentrations of coccoid forms of H. pylori represented a prognostic factor associated with a decrease in 10-year RFS and OS from 55.6% to 26.3% (P = 0.02 and P = 0.07, respectively). It should be noted that the results of this study do not allow us to judge unambiguously the effect of H. pylori on GC progression. The decrease in OS and RFS in patients with moderate and marked concentrations of H. pylori coccoid forms in the GM may be due to the fact that these patients had more advanced stages and more aggressive forms of GC. Meanwhile, a growing number of studies show that H. pylori infection can promote GC progression by activating the NF-κB signaling pathway and inducing interleukin-8 secretion[70], by activating epithelial-mesenchymal transformation[71-74] and angiogenesis[75,76], and by increasing the invasive properties of tumor cells[77].
It can be assumed that the administration of AT before surgery helps to reduce the activity of the inflammatory process and to normalize the adhesive properties of tumor cells, which in turn decreases the risk of metastasis and improves the long-term results of GC treatment. Literature data on the improvement of long-term results of malignant tumor treatment with the use of antibacterial drugs testify in favor of this hypothesis[78-80]. CONCLUSION: The data obtained indicate that H. pylori may be associated not only with the induction but also with the progression of GC. It can be assumed that the prevalence of coccoid forms of the bacteria and their intracellular persistence can affect the mechanisms of tumor progression. Further appropriate studies of the role of H. pylori in the progression of GC are clearly advisable.
Background: Helicobacter pylori (H. pylori) is a spiral-shaped bacterium responsible for the development of chronic gastritis, gastric ulcer, gastric cancer (GC), and MALT lymphoma of the stomach. H. pylori can be present in the gastric mucosa (GM) in both spiral and coccoid forms. However, it is not known whether the severity of GM contamination by the various vegetative forms of H. pylori is associated with the clinical and morphological characteristics and long-term results of GC treatment. Methods: A total of 109 patients with GC were included in a prospective cohort study. H. pylori in the GM and tumor was detected by the rapid urease test and immunohistochemically using an antibody to H. pylori. The results obtained were compared with the clinical and morphological characteristics and prognosis of GC. Statistical analysis was performed using the Statistica 10.0 software. Results: H. pylori was detected in the GM adjacent to the tumor in 84.5% of cases, of which a high degree of contamination was noted in 50.4% of the samples. Coccoid forms of H. pylori were detected in 93.4% of infected patients, and only coccoid forms in 68.9%. A high degree of GM contamination by the coccoid forms of H. pylori was observed significantly more often in the diffuse type of GC (P = 0.024), in poorly differentiated GC (P = 0.011), in stage T3-4 (P = 0.04) and in N1 (P = 0.011). In cases of moderate and marked concentrations of H. pylori in the GM, a decrease in 10-year relapse-free and overall survival from 55.6% to 26.3% was observed (P = 0.02 and P = 0.07, respectively). No relationship was revealed between the severity of GM contamination by the spiral-shaped forms of H. pylori and the clinical and morphological characteristics and prognosis of GC. Conclusions: The data obtained indicate that H. pylori may be associated not only with the induction but also with the progression of GC.
INTRODUCTION: Gastric cancer (GC) continues to be one of the most common malignant diseases in the world[1,2]. Despite a decreasing trend in the incidence of GC in most countries of the world, the treatment results of this pathology cannot be considered satisfactory. In the structure of mortality from malignant neoplasms, this pathology firmly occupies 2nd place in most developed countries of the world, and the 5-year survival rate of radically operated patients does not exceed 15%-30%[3,4]. It is important to note that it is impossible to improve the long-term results of malignant neoplasms treatment without knowledge of the mechanisms associated with their progression[3]. Clinical studies in recent years indicate that inflammatory infiltration of the tumor stroma and surrounding tissues can have an important prognostic value and affect the long-term results of malignant neoplasm treatment[5-9]. A number of studies have shown that inflammatory infiltration of the tumor stroma is associated with the body’s adequate immune response to the tumor, and may be a favorable prognosis factor in various malignant neoplasms[10,11], including GC[12,13]. At the same time, the data obtained by other researchers indicates that pronounced inflammatory infiltration of the tumor stroma, especially T-reg lymphocytes and macrophages, may be a factor contributing to the progression of malignant neoplasms[14,15]. There was a decrease in the overall survival (OS) and relapse-free survival (RFS) of GC patients with a high content of Foxp3 + T-reg in the tumor stroma and regional metastases[16-18] and macrophages[16,19]. It is believed that inflammatory infiltration of the tumor stroma can contribute to tumor progression by activating the mechanisms of angiogenesis, expression of E- and L-selectins, formation of the products of lipid peroxidation and free radicals, destruction of connective tissue matrix and basement membranes of epithelia by proteolytic enzymes, and activation of epithelial-mesenchymal transformation[20-23]. When studying the role of inflammation in the progression of GC, it is impossible to ignore the problem of Helicobacter pylori (H. pylori) infection. H. pylori is a gram-negative, spiral-shaped bacterium, the habitat of which is the gastric mucosa (GM) and duodenum. H. pylori differs from other bacteria in a set of properties that make it possible to colonize the GM and persist for a long time under conditions that are unfavorable for other microorganisms[24,25]. These include: (1) The ability to produce a special enzyme-urease; (2) Synthesis of lytic enzymes that cause the depolymerization and dissolution of gastric mucus, consisting mainly of mucin; (3) The mobility of the bacterium, which is ensured by the presence of 5-6 flagella; (4) The high adhesiveness of bacteria to GM epithelial cells of the GM and elements of connective tissue due to the interaction of bacterial ligands with the corresponding cells receptors; (5) Production of various exotoxins (VacA, CagA, and others); (6) Instability of the H. pylori genome; (7) The presence of vegetative and coccoid forms of bacteria; and (8) possibility of intracellular persistence and translocation outside the GM [26-29]. It should be noted that despite the huge number of studies devoted to H. pylori, it is still not clear whether H. pylori is involved only in the initiation of the tumor process in the stomach, or whether it can affect the mechanisms of tumor progression. The relationship between the severity of H. 
pylori infection and the clinical and morphological characteristics of GC and the long-term results of treatment of this pathology remains poorly studied; in this connection, the question of the expediency of anti-Helicobacter therapy in patients with invasive GC remains open. Objective: To assess the features of H. pylori infection in patients with Stage I-IIIB GC and their correlation with the clinical and morphological characteristics of the disease, the presence of antibiotic therapy (AT) before surgery, and long-term treatment results. CONCLUSION: The results obtained do not allow unambiguous conclusions to be drawn about the role of H. pylori in the progression of GC. Further appropriate prospective studies of the role of H. pylori in the progression of GC are clearly advisable.
11,332
383
[ 794, 47, 616, 30, 209, 319, 101, 3404, 894, 371, 409, 1399, 64 ]
14
[ "pylori", "patients", "gc", "gastric", "gm", "cells", "forms", "10", "coccoid", "cancer" ]
[ "characteristics gastric cancer", "gastric cancer correlations", "gastric cancer important", "gc gastric cancer", "gastric cancer gc" ]
null
[CONTENT] Gastric cancer | Helicobacter pylori | Coccoid and spiral forms of bacteria | Rapid urease test | Relapse free survival | Overall survival [SUMMARY]
[CONTENT] Gastric cancer | Helicobacter pylori | Coccoid and spiral forms of bacteria | Rapid urease test | Relapse free survival | Overall survival [SUMMARY]
null
[CONTENT] Gastric cancer | Helicobacter pylori | Coccoid and spiral forms of bacteria | Rapid urease test | Relapse free survival | Overall survival [SUMMARY]
[CONTENT] Gastric cancer | Helicobacter pylori | Coccoid and spiral forms of bacteria | Rapid urease test | Relapse free survival | Overall survival [SUMMARY]
[CONTENT] Gastric cancer | Helicobacter pylori | Coccoid and spiral forms of bacteria | Rapid urease test | Relapse free survival | Overall survival [SUMMARY]
[CONTENT] Gastric Mucosa | Helicobacter Infections | Helicobacter pylori | Humans | Neoplasm Recurrence, Local | Prospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] Gastric Mucosa | Helicobacter Infections | Helicobacter pylori | Humans | Neoplasm Recurrence, Local | Prospective Studies | Stomach Neoplasms [SUMMARY]
null
[CONTENT] Gastric Mucosa | Helicobacter Infections | Helicobacter pylori | Humans | Neoplasm Recurrence, Local | Prospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] Gastric Mucosa | Helicobacter Infections | Helicobacter pylori | Humans | Neoplasm Recurrence, Local | Prospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] Gastric Mucosa | Helicobacter Infections | Helicobacter pylori | Humans | Neoplasm Recurrence, Local | Prospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] characteristics gastric cancer | gastric cancer correlations | gastric cancer important | gc gastric cancer | gastric cancer gc [SUMMARY]
[CONTENT] characteristics gastric cancer | gastric cancer correlations | gastric cancer important | gc gastric cancer | gastric cancer gc [SUMMARY]
null
[CONTENT] characteristics gastric cancer | gastric cancer correlations | gastric cancer important | gc gastric cancer | gastric cancer gc [SUMMARY]
[CONTENT] characteristics gastric cancer | gastric cancer correlations | gastric cancer important | gc gastric cancer | gastric cancer gc [SUMMARY]
[CONTENT] characteristics gastric cancer | gastric cancer correlations | gastric cancer important | gc gastric cancer | gastric cancer gc [SUMMARY]
[CONTENT] pylori | patients | gc | gastric | gm | cells | forms | 10 | coccoid | cancer [SUMMARY]
[CONTENT] pylori | patients | gc | gastric | gm | cells | forms | 10 | coccoid | cancer [SUMMARY]
null
[CONTENT] pylori | patients | gc | gastric | gm | cells | forms | 10 | coccoid | cancer [SUMMARY]
[CONTENT] pylori | patients | gc | gastric | gm | cells | forms | 10 | coccoid | cancer [SUMMARY]
[CONTENT] pylori | patients | gc | gastric | gm | cells | forms | 10 | coccoid | cancer [SUMMARY]
[CONTENT] tumor stroma | malignant | pylori | neoplasms | inflammatory infiltration | malignant neoplasms | inflammatory infiltration tumor stroma | inflammatory infiltration tumor | infiltration | infiltration tumor [SUMMARY]
[CONTENT] patients | orenburg | drugs | study | sections | died | period | data | clinical | test [SUMMARY]
null
[CONTENT] progression | progression gc | tumor progression appropriate studies | tumor progression appropriate | progression appropriate studies role | prevalence coccoid forms bacteria | prevalence coccoid forms | prevalence coccoid | prevalence | advisable [SUMMARY]
[CONTENT] pylori | patients | gc | gm | gastric | forms | coccoid | cells | surviving | gamma [SUMMARY]
[CONTENT] pylori | patients | gc | gm | gastric | forms | coccoid | cells | surviving | gamma [SUMMARY]
[CONTENT] GC | MALT ||| GM ||| GM | GC [SUMMARY]
[CONTENT] 109 | GC ||| GM ||| GC ||| Statistica | 10.0 [SUMMARY]
null
[CONTENT] GC [SUMMARY]
[CONTENT] GC | MALT ||| GM ||| GM | GC ||| 109 | GC ||| GM ||| GC ||| Statistica | 10.0 ||| GM | 84.5% | 50.4% ||| 93.4% | 68.9% ||| GM | GC | 0.024 | GC | 0.011 | T3-4 | 0.04 | N1 | 0.011 ||| GM | 10-year | 55.6% | 26.3% | 0.02 | 0.07 ||| GM | GC ||| GC [SUMMARY]
[CONTENT] GC | MALT ||| GM ||| GM | GC ||| 109 | GC ||| GM ||| GC ||| Statistica | 10.0 ||| GM | 84.5% | 50.4% ||| 93.4% | 68.9% ||| GM | GC | 0.024 | GC | 0.011 | T3-4 | 0.04 | N1 | 0.011 ||| GM | 10-year | 55.6% | 26.3% | 0.02 | 0.07 ||| GM | GC ||| GC [SUMMARY]
Anti-inflammatory effect of sulforaphane on LPS-stimulated RAW 264.7 cells and ob/ob mice.
33263238
Sulforaphane (SFN) is an isothiocyanate compound present in cruciferous vegetables. Although the anti-inflammatory effects of SFN have been reported, the precise mechanism related to the inflammatory genes is poorly understood.
BACKGROUND
Nitric oxide (NO) level was measured using a Griess assay. The inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) expression levels were analyzed by Western blot analysis. Pro-inflammatory cytokines (tumor necrosis factor [TNF]-α, interleukin [IL]-1β, and IL-6) were measured by enzyme-linked immunosorbent assay (ELISA). RNA sequencing analysis was performed to evaluate the differential gene expression in the liver of ob/ob mice.
METHODS
The SFN treatment significantly attenuated the iNOS and COX-2 expression levels and inhibited NO, TNF-α, IL-1β, and IL-6 production in lipopolysaccharide (LPS)-stimulated RAW 264.7 cells. RNA sequencing analysis showed that the expression levels of 28 genes related to inflammation were up-regulated (> 2-fold), and six genes were down-regulated (< 0.6-fold) in the control ob/ob mice compared to normal mice. In contrast, the gene expression levels were restored to the normal level by SFN. The protein-protein interaction (PPI) network showed that chemokine ligand (Cxcl14, Ccl1, Ccl3, Ccl4, Ccl17) and chemokine receptor (Ccr3, Cxcr1, Ccr10) were located in close proximity and formed a "functional cluster" in the middle of the network.
RESULTS
The overall results suggest that SFN has a potent anti-inflammatory effect by normalizing the expression levels of the genes related to inflammation that were perturbed in ob/ob mice.
CONCLUSIONS
[ "Animals", "Anti-Inflammatory Agents", "Gene Expression", "Isothiocyanates", "Lipopolysaccharides", "Male", "Mice", "RAW 264.7 Cells", "Random Allocation", "Sulfoxides" ]
7710464
INTRODUCTION
Inflammation is the most commonly identified condition in the clinical field, which involves protecting the body from infection and tissue damage [1]. Macrophages are one of the major groups of the immune system, which perform a critical role in response to injury and infection [2]. Nuclear factor-κB (NF-κB) is a key regulatory element in macrophages [3]. The activation of NF-κB is essential for the expression of nitric oxide (NO), inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and pro-inflammatory cytokines, including tumor necrosis factor (TNF)-α, interleukin (IL)-6, and IL-1β [4]. Cytokines released from the inflammatory tissue disturb the metabolic functions of many organs, including the liver [5]. In particular, chronic inflammation is closely related to the progression of many metabolic diseases, including obesity and insulin resistance [6]. Obesity is associated with low-grade inflammation that causes oxidative stress, which leads to insulin resistance and non-alcoholic fatty liver disease (NAFLD) [7]. In the present study, ob/ob mice were selected as the appropriate in vivo model because they exhibit severe disturbances of the immune functions [89]. In particular, ob/ob mice have high levels of circulating endotoxin, which contributes to the development of inflammation by activating Toll-like receptor 4 (TLR4) signaling in the liver [8]. The inflammatory state of obesity is the augmented infiltration of T cells and macrophages into the metabolic tissues, including the liver [10]. In addition, ob/ob mice display hepatic lipid accumulation that induces inflammation, leading to NAFLD development [9]. Therefore, the liver of ob/ob mice was used to observe the effects of sulforaphane (SFN) on inflammatory gene expression. SFN is an isothiocyanate present in cruciferous vegetables, such as cabbage, cauliflower, and broccoli [11]. Glucoraphanin (GRN), one of the main glucosinolates in cruciferous vegetables, is converted to SFN by the gut microbiota-derived myrosinase in both rodents and humans [12]. A previous study found that GRN ameliorates obesity-induced inflammation in high-fat diet-treated obese mice [13]. In addition, bioactivated GRN with myrosinase reduced pro-inflammatory signaling related to a spinal cord injury in an experimental mouse model [14]. The supplementation of GRN-rich broccoli sprout extract reduced inflammatory reactions in endothelial cells [15]. Furthermore, synthetic GRN exhibited anti-inflammatory activity by reducing TNF-α secretion in lipopolysaccharide (LPS)-stimulated THP-1 cells [16]. Because GRN has an anti-inflammatory effect in vivo and in vitro, the present study used it as a control to compare the anti-inflammatory effect with SFN on both RAW 264.7 cells and ob/ob mice. SFN shows high chemical reactivity because of the electrophilicity of its isothiocyanate group [17]. Previous studies reported that SFN prevents oxidative stress-induced inflammation [18]. Furthermore, the SFN treatment prevented nod-like receptor protein 3 (NLRP3) inflammasome-induced NAFLD in obese mice [19]. Despite the anti-inflammatory and antioxidant effects of SFN, its exact mechanism related to the inflammatory genes is not completely understood. The present study examined the anti-inflammatory activity of SFN on LPS-stimulated RAW 264.7 cells and ob/ob mice. The effects of SFN on the expression levels of pro-inflammatory mediators, such as NO, COX-2, iNOS, TNF-α, IL-6, and IL-1β, were analyzed in LPS-stimulated RAW 264.7 cells. 
In addition, the effects of SFN on obesity-related inflammation were identified from the expression levels of inflammation-related genes in the livers of ob/ob mice.
null
null
RESULTS
DPPH radical scavenging activity of SFN The antioxidant activity of SFN was evaluated using the DPPH assay. The DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN, whereas GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity at 500 µg/mL. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4,163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100 g, respectively (Table 1). These results indicate that SFN has antioxidant potential through free radical scavenging. DPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0005 compared to the control. Values represent mean ± SE. IC50, the concentration at which 50% of the free radical is scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin.
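The percent inhibition, IC50, and AEAC values above follow the formulas given in the methods (% inhibition = [(A_blank − A_sample)/A_blank] × 100; AEAC = IC50(ascorbic acid)/IC50(sample) × 10^5). The small sketch below, with hypothetical absorbance readings, only illustrates that arithmetic; it is not the study's analysis script.

```python
# Rough sketch of the antioxidant calculations: percent DPPH inhibition,
# an interpolated IC50, and the ascorbic-acid-equivalent antioxidant capacity.
# All absorbance values and the reference IC50 below are hypothetical.

import numpy as np

a_blank = 0.80                                  # DPPH absorbance without sample
conc = np.array([62.5, 125, 250, 500])          # ug/mL SFN (hypothetical)
a_sample = np.array([0.70, 0.61, 0.48, 0.34])   # absorbance at 517 nm with sample

inhibition = (a_blank - a_sample) / a_blank * 100
print(dict(zip(conc, inhibition.round(1))))

# linear interpolation of the concentration giving 50% inhibition
ic50_sample = np.interp(50, inhibition, conc)
ic50_ascorbic = 40.0                            # hypothetical reference value
aeac = ic50_ascorbic / ic50_sample * 1e5        # mg AA / 100 g dry weight
print(f"IC50 ~ {ic50_sample:.0f} ug/mL, AEAC ~ {aeac:.0f} mg AA/100 g")
```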
Inhibition of LPS-stimulated NO production by SFN To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure that the experiments were performed under the right conditions. As shown in Fig. 2A, the morphology of the RAW 264.7 cells was not changed after treatment with SFN or GRN plus LPS compared to the control cells, suggesting that neither SFN nor GRN at the concentrations used alters RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment produced no significant change in NO production. LPS-stimulated NO production was reduced significantly, by 42% and 78% at 2.5 µM and 5 µM SFN, respectively, compared to the LPS-treated control cells. These results confirmed that SFN inhibits NO production in LPS-stimulated RAW 264.7 cells. NO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0001 compared to the control. Effects of SFN on COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells The inhibitory effects of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS-treated control cells. In contrast, GRN treatment did not significantly affect COX-2 or iNOS expression (Fig. 3). These results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2. **p < 0.005, ***p < 0.0005 compared to the control. Inhibitory effects of SFN on TNF-α, IL-6, and IL-1β production The effects of SFN on LPS-stimulated production of the pro-inflammatory cytokines TNF-α, IL-6, and IL-1β were investigated in RAW 264.7 cells. SFN treatment significantly inhibited (p < 0.05) TNF-α, IL-6, and IL-1β production, whereas GRN did not significantly affect the production of these cytokines in LPS-stimulated RAW 264.7 cells (Fig. 4). SFN inhibited LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, and 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits pro-inflammatory cytokine production in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay. **p < 0.005, ***p < 0.0005 compared to the control.
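Group differences such as these were tested with one-way ANOVA followed by Tukey's test, as stated in the statistical analysis methods. The sketch below illustrates that kind of comparison with hypothetical cytokine values using scipy and statsmodels; the study itself used IBM SPSS Statistics.

```python
# Sketch of a one-way ANOVA followed by Tukey's post-hoc test, mirroring the
# statistical approach described in the methods. Cytokine values are hypothetical.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

tnf = {
    "LPS": [410, 395, 430],       # LPS-treated control (pg/mL, hypothetical)
    "LPS+SFN5": [280, 270, 290],  # LPS + 5 uM SFN
    "LPS+GRN5": [400, 415, 405],  # LPS + 5 uM GRN
}

f, p = f_oneway(*tnf.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate(list(tnf.values()))
groups = np.repeat(list(tnf.keys()), [len(v) for v in tnf.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```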
Effect of SFN on TG accumulation in ob/ob mice liver Because excessive TG accumulation is a key factor in liver inflammation, the hepatic TG content of ob/ob mice was measured. The ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse livers (Fig. 5). In particular, the ob/ob mice treated with SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice. SFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin. *p < 0.05, **p < 0.05 compared with the control ob/ob group. Differential gene expression analysis of ob/ob mice liver RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN- and GRN-treated ob/ob mice compared to normal mice. The 28 up-regulated (> 2-fold) genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and the 6 down-regulated (< 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using the STRING analysis to understand the relationships among the genes normalized by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). These results confirmed that SFN restored the expression levels of the genes related to inflammation, which were up- or down-regulated in ob/ob mice, to the normal level.
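The fold-change screen described above (> 2-fold up, < 0.6-fold down, then checking restoration by SFN) can be illustrated with a few lines of pandas. The expression values below are hypothetical, and the 1.5-fold "restored" cut-off is an arbitrary assumption for the example, not a threshold stated by the authors.

```python
# Minimal sketch (hypothetical data, not the study's pipeline) of a
# fold-change screen: flag genes perturbed in control ob/ob vs. normal mice,
# then check whether the SFN-treated group moved back toward the normal level.

import pandas as pd

expr = pd.DataFrame(
    {"normal": [100, 80, 50], "ob_control": [260, 40, 55], "ob_sfn": [120, 75, 52]},
    index=["Ccl1", "Fn1", "GeneX"],
)

fc_ob = expr["ob_control"] / expr["normal"]
perturbed = expr[(fc_ob > 2) | (fc_ob < 0.6)]            # up > 2-fold or down < 0.6-fold

# "restored" here means the SFN group is within 1.5-fold of normal (assumed cut-off)
fc_sfn = perturbed["ob_sfn"] / perturbed["normal"]
restored = perturbed[(fc_sfn < 1.5) & (fc_sfn > 1 / 1.5)]
print(restored.index.tolist())
```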
SFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor. SFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B. SFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase. PPI, protein-protein interaction; SFN, sulforaphane.
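The STRING/Cytoscape network described above is essentially a graph of genes (nodes) and interactions (edges). As an illustration only, the following sketch builds a toy network with networkx and ranks nodes by degree to expose a central chemokine cluster; the edges are hypothetical and are not the STRING output.

```python
# Illustrative sketch of a small protein-protein interaction graph and of how
# the most connected "functional cluster" members can be identified.
# Edge list is hypothetical, not taken from the STRING analysis.

import networkx as nx

edges = [
    ("Ccl1", "Ccr3"), ("Ccl3", "Ccr3"), ("Ccl4", "Ccr3"),
    ("Ccl17", "Ccr10"), ("Ccl1", "Cxcr1"), ("Ccl4", "Cxcr1"),
    ("Fn1", "Itgb2l"),
]
g = nx.Graph(edges)

# rank nodes by degree to see which ligands/receptors sit at the network centre
print(sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:5])
print(list(nx.connected_components(g)))
```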
null
null
[ "DPPH radical scavenging activity", "Measurement of NO production", "ELISA", "Measurement of COX-2 and iNOS protein expression", "Animals experiments", "Sample treatment", "Measurement of triglyceride (TG) content in liver tissues of ob/ob mice", "Total RNA isolation, library preparation, sequencing, and data analysis", "Statistical analysis", "DPPH radical scavenging activity of SFN", "Inhibition of LPS-stimulated NO production by SFN", "Effects of SFN on COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells", "Inhibitory effects of SFN on TNF-α, IL-6, and IL-1β production", "Effect of SFN on TG accumulation in ob/ob mice liver", "Differential gene expression analysis of ob/ob mice liver" ]
[ "The antioxidant activity of the SFN was evaluated using stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) according to a slight modification of a previously described method [20]. Various concentrations of SFN (Sigma, USA) and GRN (Cayman Chemicals, USA) were diluted in dimethyl sulfoxide (DMSO) (Sigma) and incubated with ethanolic 0.1 mM DPPH (Sigma) at 37ºC for 30 min. The absorbance was measured at 517 nm using a UV spectrophotometer (Mecasys, Korea). The blanks were prepared by replacing the sample volumes with DMSO. The results were calculated as percentages of the control (100%), and the concentration of extracts required to decrease the initial DPPH concentration by 50% was expressed as the IC50 value. The radical scavenging ability (%) of the samples was calculated as % inhibition = [(ABlank − Asample)/ABlank] × 100, where ABlank = absorbance of the DPPH without the sample and Asample = absorbance of the SFN or GRN. The antioxidant activity of the SFN is expressed as the ascorbic acid (AA) equivalent antioxidant capacity (AEAC) as AA/100 g dry weight (DW) using the equation, AEAC = IC50 (Ascorbic acid)/IC50 (Sample) × 105.", "The murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean cell line bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM), containing 10% fetal bovine serum (FBS), 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured within two-day intervals. For the experiment, the cells were cultured in 96 well plates. After 24 h incubation, the cells were pretreated with SFN (2.5 and 5 µM) and GRN (5 µM) for 1 h, and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with SFN. The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria).", "RAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions.", "RAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). 
Equal amounts of protein were mixed with 20% loading buffer, separated on a Tris-glycine polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech), and subjected to Western blotting with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A chemiluminescence bioimaging instrument (NeoScience Co., Korea) was used to detect the proteins of interest.", "Male ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain originated from Jackson Laboratory (USA) [22] and was developed into C57BL/6JHamSlc-ob at Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50–55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051).", "The ob/ob mice were allocated randomly into three groups (n = 5): a control ob/ob group and two sample-treated groups, which were orally administered SFN (0.5 mg/kg) or GRN (2.5 mg/kg) every day in their drinking water, assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. The liver tissues were collected and stored at −80°C for further experiments.", "Hepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 × g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard.", "The total RNA was purified from liver tissues using an Easy-blue RNA extraction kit (iNtRON Biotechnology, Korea) according to the manufacturer's protocol. The RNA quality and quantity were analyzed on an Agilent 2100 Bioanalyzer using the RNA 6000 Nano Chip (Agilent Technologies, Netherlands) and an ND-2000 spectrophotometer (Thermofisher), respectively. The control and test RNA libraries were constructed using the QuantSeq 3′ mRNA-Seq Library Prep Kit (Lexogen, Austria). Briefly, 500 ng of the total RNA was prepared, and an oligo-dT primer containing an Illumina-compatible sequence at its 5′ end was hybridized to the RNA. Reverse transcription was then performed. Following degradation of the RNA template, complementary strand synthesis was initiated by a random primer containing an Illumina-compatible linker sequence at its 5′ end. Magnetic beads were used to eliminate all the reaction components. The library was amplified to add the complete adaptor sequences required for cluster generation. 
The constructed library was purified from the polymerase chain reaction mixture. High-throughput sequencing was performed as single-end 75 bp reads using an Illumina NextSeq 500 (Illumina, USA). The QuantSeq 3′ mRNA-Seq reads were aligned using Bowtie2 version 2.1.0. The Bowtie2 indices were generated either from representative transcript sequences or from the genome assembly sequence to align the reads with the transcriptome and genome, respectively. The alignments were used to assemble transcripts, estimate their abundances, and detect differentially expressed genes. The differentially expressed genes were determined based on the counts from unique and multiple alignments using R version 3.2.2 and Bioconductor version 3.0. The read count data were analyzed based on the quantile normalization method using Genowiz™ version 4.0.5.6 (Ocimum Biosolutions, India). The PPI network was analyzed using the STRING application tool. Cytoscape (version 2.7), a bioinformatics platform developed at the Institute for Systems Biology (USA), was used to construct the network diagrams. Gene classification was performed using the Medline database (National Center for Biotechnology Information, USA).", "Values are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver. 17.0; USA). The statistical differences between the groups were assessed with one-way analysis of variance (ANOVA) followed by Tukey's test (a minimal analysis sketch in Python is provided after this block of section texts). Values of p < 0.05, p < 0.005, and p < 0.0005 were considered to indicate statistically significant differences from the control group.", "The antioxidant activity of SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity at 500 µg/mL. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100 g, respectively (Table 1). These results indicate that SFN showed antioxidant potential through free radical scavenging.\nDPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0005 compared to the control.\nValues are presented as mean ± SE.\nIC50, the concentration at which 50% of the free radicals are scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin.", "To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treatment with SFN or GRN together with LPS compared to the control cells. The results suggested that neither SFN nor GRN used in this study altered the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). SFN treatment inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. 
LPS-stimulated NO production was decreased significantly, by 42% and 78% at 2.5 µM and 5 µM SFN, respectively, compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells.\nNO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0001 compared to the control.", "The inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells was investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS-treated control cells. In contrast, GRN treatment did not significantly affect COX-2 or iNOS expression compared to the control cells (Fig. 3). The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2.\n**p < 0.005, ***p < 0.0005 compared to the control.", "The effects of SFN on the LPS-stimulated production of the pro-inflammatory cytokines TNF-α, IL-6, and IL-1β were investigated in RAW 264.7 cells. SFN treatment significantly inhibited (p < 0.05) TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, and 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay.\n**p < 0.005, ***p < 0.0005 compared to the control.", "Because excessive TG accumulation is a key factor in liver inflammation, the present study measured the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse livers (Fig. 5). In particular, the ob/ob mice treated with SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice.\nSFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin.\n*p < 0.05, **p < 0.005 compared with the control ob/ob group.", "RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN- and GRN-treated ob/ob mice compared to normal mice. The 28 up-regulated (> 2-fold) genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and the 6 down-regulated (< 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using STRING analysis to understand the relationships among the genes normalized by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. 
Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). The present results confirmed that SFN restored the expression levels of the inflammation-related genes that were up- or down-regulated in ob/ob mice to the normal level.\nSFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor.\nSFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B.\nSFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase.\nPPI, protein–protein interaction; SFN, sulforaphane." ]
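The DPPH percent-inhibition and AEAC formulas quoted in the methods text above lend themselves to a short worked example. The sketch below is a minimal, hypothetical Python illustration: the absorbance values, concentrations, and the ascorbic-acid IC50 are invented placeholders rather than data from the study, and linear interpolation is only one simple way to estimate an IC50.

```python
# Minimal sketch of the DPPH calculations described in the methods.
# All absorbances, concentrations, and the ascorbic-acid IC50 below are
# made-up placeholders, not data from the study.
import numpy as np

def percent_inhibition(a_blank: float, a_sample: float) -> float:
    """% inhibition = [(A_blank - A_sample) / A_blank] * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

def ic50_by_interpolation(concs_ug_ml, inhibitions_pct) -> float:
    """Estimate the concentration giving 50% inhibition by linear
    interpolation between the bracketing dose-response points."""
    concs = np.asarray(concs_ug_ml, dtype=float)
    inhib = np.asarray(inhibitions_pct, dtype=float)  # must be increasing
    return float(np.interp(50.0, inhib, concs))

def aeac_mg_aa_per_100g(ic50_ascorbic: float, ic50_sample: float) -> float:
    """AEAC = IC50(ascorbic acid) / IC50(sample) * 1e5 (mg AA / 100 g DW)."""
    return ic50_ascorbic / ic50_sample * 1e5

# Hypothetical dose-response for one compound (µg/mL vs. % inhibition)
concs = [62.5, 125, 250, 500]
inhib = [percent_inhibition(0.80, a) for a in [0.72, 0.62, 0.48, 0.34]]
ic50_sample = ic50_by_interpolation(concs, inhib)
print(f"IC50 ~ {ic50_sample:.1f} µg/mL, "
      f"AEAC ~ {aeac_mg_aa_per_100g(10.4, ic50_sample):.0f} mg AA/100 g")
```

With these placeholder absorbances the interpolated IC50 comes out near 400 µg/mL, which is only meant to show the arithmetic of the two formulas, not to reproduce the reported values.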
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
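The statistical workflow described in the methods above (one-way ANOVA followed by Tukey's post-hoc test) can also be illustrated with a minimal sketch. The study itself used IBM SPSS; the Python version below with SciPy and statsmodels is an assumed stand-in, and the NO-production values for the four groups are invented placeholders.

```python
# Minimal sketch of one-way ANOVA followed by Tukey's post-hoc test.
# The group values below are invented placeholders, not study data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":        [2.1, 2.4, 1.9],
    "LPS":            [18.5, 20.1, 19.2],
    "LPS+SFN_2.5uM":  [11.0, 10.4, 11.8],
    "LPS+SFN_5uM":    [4.6, 5.1, 4.2],
}

# Overall one-way ANOVA across the treatment groups
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD for pairwise comparisons between groups
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```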
[ "INTRODUCTION", "MATERIALS AND METHODS", "DPPH radical scavenging activity", "Measurement of NO production", "ELISA", "Measurement of COX-2 and iNOS protein expression", "Animals experiments", "Sample treatment", "Measurement of triglyceride (TG) content in liver tissues of ob/ob mice", "Total RNA isolation, library preparation, sequencing, and data analysis", "Statistical analysis", "RESULTS", "DPPH radical scavenging activity of SFN", "Inhibition of LPS-stimulated NO production by SFN", "Effects of SFN on COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells", "Inhibitory effects of SFN on TNF-α, IL-6, and IL-1β production", "Effect of SFN on TG accumulation in ob/ob mice liver", "Differential gene expression analysis of ob/ob mice liver", "DISCUSSION" ]
[ "Inflammation is the most commonly identified condition in the clinical field, which involves protecting the body from infection and tissue damage [1]. Macrophages are one of the major groups of the immune system, which perform a critical role in response to injury and infection [2]. Nuclear factor-κB (NF-κB) is a key regulatory element in macrophages [3]. The activation of NF-κB is essential for the expression of nitric oxide (NO), inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and pro-inflammatory cytokines, including tumor necrosis factor (TNF)-α, interleukin (IL)-6, and IL-1β [4]. Cytokines released from the inflammatory tissue disturb the metabolic functions of many organs, including the liver [5]. In particular, chronic inflammation is closely related to the progression of many metabolic diseases, including obesity and insulin resistance [6].\nObesity is associated with low-grade inflammation that causes oxidative stress, which leads to insulin resistance and non-alcoholic fatty liver disease (NAFLD) [7]. In the present study, ob/ob mice were selected as the appropriate in vivo model because they exhibit severe disturbances of the immune functions [89]. In particular, ob/ob mice have high levels of circulating endotoxin, which contributes to the development of inflammation by activating Toll-like receptor 4 (TLR4) signaling in the liver [8]. The inflammatory state of obesity is the augmented infiltration of T cells and macrophages into the metabolic tissues, including the liver [10]. In addition, ob/ob mice display hepatic lipid accumulation that induces inflammation, leading to NAFLD development [9]. Therefore, the liver of ob/ob mice was used to observe the effects of sulforaphane (SFN) on inflammatory gene expression.\nSFN is an isothiocyanate present in cruciferous vegetables, such as cabbage, cauliflower, and broccoli [11]. Glucoraphanin (GRN), one of the main glucosinolates in cruciferous vegetables, is converted to SFN by the gut microbiota-derived myrosinase in both rodents and humans [12]. A previous study found that GRN ameliorates obesity-induced inflammation in high-fat diet-treated obese mice [13]. In addition, bioactivated GRN with myrosinase reduced pro-inflammatory signaling related to a spinal cord injury in an experimental mouse model [14]. The supplementation of GRN-rich broccoli sprout extract reduced inflammatory reactions in endothelial cells [15]. Furthermore, synthetic GRN exhibited anti-inflammatory activity by reducing TNF-α secretion in lipopolysaccharide (LPS)-stimulated THP-1 cells [16]. Because GRN has an anti-inflammatory effect in vivo and in vitro, the present study used it as a control to compare the anti-inflammatory effect with SFN on both RAW 264.7 cells and ob/ob mice. SFN shows high chemical reactivity because of the electrophilicity of its isothiocyanate group [17]. Previous studies reported that SFN prevents oxidative stress-induced inflammation [18]. Furthermore, the SFN treatment prevented nod-like receptor protein 3 (NLRP3) inflammasome-induced NAFLD in obese mice [19]. Despite the anti-inflammatory and antioxidant effects of SFN, its exact mechanism related to the inflammatory genes is not completely understood.\nThe present study examined the anti-inflammatory activity of SFN on LPS-stimulated RAW 264.7 cells and ob/ob mice. The effects of SFN on the expression levels of pro-inflammatory mediators, such as NO, COX-2, iNOS, TNF-α, IL-6, and IL-1β, were analyzed in LPS-stimulated RAW 264.7 cells. 
In addition, the effects of SFN on obesity-related inflammation was identified from the expression levels of genes related to inflammation in ob/ob mouse livers.", " DPPH radical scavenging activity The antioxidant activity of the SFN was evaluated using stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) according to a slight modification of a previously described method [20]. Various concentrations of SFN (Sigma, USA) and GRN (Cayman Chemicals, USA) were diluted in dimethyl sulfoxide (DMSO) (Sigma) and incubated with ethanolic 0.1 mM DPPH (Sigma) at 37ºC for 30 min. The absorbance was measured at 517 nm using a UV spectrophotometer (Mecasys, Korea). The blanks were prepared by replacing the sample volumes with DMSO. The results were calculated as percentages of the control (100%), and the concentration of extracts required to decrease the initial DPPH concentration by 50% was expressed as the IC50 value. The radical scavenging ability (%) of the samples was calculated as % inhibition = [(ABlank − Asample)/ABlank] × 100, where ABlank = absorbance of the DPPH without the sample and Asample = absorbance of the SFN or GRN. The antioxidant activity of the SFN is expressed as the ascorbic acid (AA) equivalent antioxidant capacity (AEAC) as AA/100 g dry weight (DW) using the equation, AEAC = IC50 (Ascorbic acid)/IC50 (Sample) × 105.\nThe antioxidant activity of the SFN was evaluated using stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) according to a slight modification of a previously described method [20]. Various concentrations of SFN (Sigma, USA) and GRN (Cayman Chemicals, USA) were diluted in dimethyl sulfoxide (DMSO) (Sigma) and incubated with ethanolic 0.1 mM DPPH (Sigma) at 37ºC for 30 min. The absorbance was measured at 517 nm using a UV spectrophotometer (Mecasys, Korea). The blanks were prepared by replacing the sample volumes with DMSO. The results were calculated as percentages of the control (100%), and the concentration of extracts required to decrease the initial DPPH concentration by 50% was expressed as the IC50 value. The radical scavenging ability (%) of the samples was calculated as % inhibition = [(ABlank − Asample)/ABlank] × 100, where ABlank = absorbance of the DPPH without the sample and Asample = absorbance of the SFN or GRN. The antioxidant activity of the SFN is expressed as the ascorbic acid (AA) equivalent antioxidant capacity (AEAC) as AA/100 g dry weight (DW) using the equation, AEAC = IC50 (Ascorbic acid)/IC50 (Sample) × 105.\n Measurement of NO production The murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean cell line bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM), containing 10% fetal bovine serum (FBS), 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured within two-day intervals. For the experiment, the cells were cultured in 96 well plates. After 24 h incubation, the cells were pretreated with SFN (2.5 and 5 µM) and GRN (5 µM) for 1 h, and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with SFN. 
The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria).\nThe murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean cell line bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM), containing 10% fetal bovine serum (FBS), 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured within two-day intervals. For the experiment, the cells were cultured in 96 well plates. After 24 h incubation, the cells were pretreated with SFN (2.5 and 5 µM) and GRN (5 µM) for 1 h, and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with SFN. The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria).\n ELISA RAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions.\nRAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions.\n Measurement of COX-2 and iNOS protein expression RAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). 
The equal amount of protein was mixed with 20% of loading buffer and separated by Tris-Glycine-Polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech) and subjected to Western blot with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A Chemi-luminescence Bioimaging Instrument (NeoScience Co., Korea) was used to detect the proteins of interest.\nRAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). The equal amount of protein was mixed with 20% of loading buffer and separated by Tris-Glycine-Polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech) and subjected to Western blot with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A Chemi-luminescence Bioimaging Instrument (NeoScience Co., Korea) was used to detect the proteins of interest.\n Animals experiments Male ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain was originated from Jackson Laboratory (USA) [22], and was developed to C57BL/6JHamSlc-ob in Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50-55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051).\nMale ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain was originated from Jackson Laboratory (USA) [22], and was developed to C57BL/6JHamSlc-ob in Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50-55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051).\n Sample treatment The ob/ob mice were allocated randomly into three groups (n = 5): control ob/ob group and 2 sample-treated groups were orally administrated SFN (0.5 mg/kg), and GRN (2.5 mg/kg) every day in their drinking water assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. 
The liver tissues were collected and stored at −80°C for further experiments.\nThe ob/ob mice were allocated randomly into three groups (n = 5): control ob/ob group and 2 sample-treated groups were orally administrated SFN (0.5 mg/kg), and GRN (2.5 mg/kg) every day in their drinking water assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. The liver tissues were collected and stored at −80°C for further experiments.\n Measurement of triglyceride (TG) content in liver tissues of ob/ob mice Hepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 ×g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard.\nHepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 ×g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard.\n Total RNA isolation, library preparation, sequencing, and data analysis The total RNA was purified from liver tissues using an Easy-blue RNA extraction kit (iNtRON Biotechnology, Korea) according to the manufacturer's protocol. The RNA quality and quantity were analyzed on an Agilent 2100 bioanalyzer using the RNA 6000 Nano Chip (Agilent Technologies, Netherlands) and ND-2000 Spectrophotometer (Thermofisher), respectively. The control and test RNA libraries were constructed using Ouantseq 3′ mRNA-Seq Library Prep Kit (Lexogen, Austria). Briefly, 500 ng of the total RNA was prepared, and an oligo-dT primer containing an Illumina-compatible sequence at its 5′ end was hybridized to the RNA. Reverse transcription was then performed. Following degradation of the RNA template, complementary strand synthesis was initiated by a random primer containing an Illumina-compatible linker sequence at its 5′ end. Magnetic beads were used to eliminate all the reaction components. The library was amplified to add the complete adaptor sequences required for cluster generation. The constructed library was purified from the polymerase chain reaction mixture. High-throughput sequencing was performed as single-end 75 sequencings using Illumina NextSeq 500 (Illumina, USA). The QuantSeq 3′ mRNA-Seq reads were aligned using Bowtie2 version 2.1.0. 
The Bowtie2 indices were either generated from representative transcript sequences or the genome assembly sequence to align with the transcriptome and genome. The alignments were used to assemble transcripts, estimating their abundances, and detecting differential expression of genes. The differentially expressed genes were determined based on the counts from unique and multiple alignments using R version 3.2.2 and Bioconductor version 3.0. The Read count (RT) data were analyzed based on the Quantile normalization method using the Genowiz™ version 4.0.5.6 (Ocimum Biosolutions, India). The PPI network was analyzed using the STRING application tool. Cytoscape (version 2.7), a bioinformatics platform at the Institute of System Biology, USA, was used to construct the network diagrams. Gene classification was performed using the Medline database (National Center for Biotechnology Information, USA).\nThe total RNA was purified from liver tissues using an Easy-blue RNA extraction kit (iNtRON Biotechnology, Korea) according to the manufacturer's protocol. The RNA quality and quantity were analyzed on an Agilent 2100 bioanalyzer using the RNA 6000 Nano Chip (Agilent Technologies, Netherlands) and ND-2000 Spectrophotometer (Thermofisher), respectively. The control and test RNA libraries were constructed using Ouantseq 3′ mRNA-Seq Library Prep Kit (Lexogen, Austria). Briefly, 500 ng of the total RNA was prepared, and an oligo-dT primer containing an Illumina-compatible sequence at its 5′ end was hybridized to the RNA. Reverse transcription was then performed. Following degradation of the RNA template, complementary strand synthesis was initiated by a random primer containing an Illumina-compatible linker sequence at its 5′ end. Magnetic beads were used to eliminate all the reaction components. The library was amplified to add the complete adaptor sequences required for cluster generation. The constructed library was purified from the polymerase chain reaction mixture. High-throughput sequencing was performed as single-end 75 sequencings using Illumina NextSeq 500 (Illumina, USA). The QuantSeq 3′ mRNA-Seq reads were aligned using Bowtie2 version 2.1.0. The Bowtie2 indices were either generated from representative transcript sequences or the genome assembly sequence to align with the transcriptome and genome. The alignments were used to assemble transcripts, estimating their abundances, and detecting differential expression of genes. The differentially expressed genes were determined based on the counts from unique and multiple alignments using R version 3.2.2 and Bioconductor version 3.0. The Read count (RT) data were analyzed based on the Quantile normalization method using the Genowiz™ version 4.0.5.6 (Ocimum Biosolutions, India). The PPI network was analyzed using the STRING application tool. Cytoscape (version 2.7), a bioinformatics platform at the Institute of System Biology, USA, was used to construct the network diagrams. Gene classification was performed using the Medline database (National Center for Biotechnology Information, USA).\n Statistical analysis Values are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver.17.0; USA). The statistical differences between the groups were observed with one-way analysis of variance (ANOVA) followed by a Turkey's test. 
The p < 0.05, p < 0.005, and p < 0.0005 indicate statistically significant differences from the control group.\nValues are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver.17.0; USA). The statistical differences between the groups were observed with one-way analysis of variance (ANOVA) followed by a Turkey's test. The p < 0.05, p < 0.005, and p < 0.0005 indicate statistically significant differences from the control group.", "The antioxidant activity of the SFN was evaluated using stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) according to a slight modification of a previously described method [20]. Various concentrations of SFN (Sigma, USA) and GRN (Cayman Chemicals, USA) were diluted in dimethyl sulfoxide (DMSO) (Sigma) and incubated with ethanolic 0.1 mM DPPH (Sigma) at 37ºC for 30 min. The absorbance was measured at 517 nm using a UV spectrophotometer (Mecasys, Korea). The blanks were prepared by replacing the sample volumes with DMSO. The results were calculated as percentages of the control (100%), and the concentration of extracts required to decrease the initial DPPH concentration by 50% was expressed as the IC50 value. The radical scavenging ability (%) of the samples was calculated as % inhibition = [(ABlank − Asample)/ABlank] × 100, where ABlank = absorbance of the DPPH without the sample and Asample = absorbance of the SFN or GRN. The antioxidant activity of the SFN is expressed as the ascorbic acid (AA) equivalent antioxidant capacity (AEAC) as AA/100 g dry weight (DW) using the equation, AEAC = IC50 (Ascorbic acid)/IC50 (Sample) × 105.", "The murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean cell line bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM), containing 10% fetal bovine serum (FBS), 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured within two-day intervals. For the experiment, the cells were cultured in 96 well plates. After 24 h incubation, the cells were pretreated with SFN (2.5 and 5 µM) and GRN (5 µM) for 1 h, and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with SFN. The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria).", "RAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. 
The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions.", "RAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). The equal amount of protein was mixed with 20% of loading buffer and separated by Tris-Glycine-Polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech) and subjected to Western blot with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A Chemi-luminescence Bioimaging Instrument (NeoScience Co., Korea) was used to detect the proteins of interest.", "Male ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain was originated from Jackson Laboratory (USA) [22], and was developed to C57BL/6JHamSlc-ob in Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50-55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051).", "The ob/ob mice were allocated randomly into three groups (n = 5): control ob/ob group and 2 sample-treated groups were orally administrated SFN (0.5 mg/kg), and GRN (2.5 mg/kg) every day in their drinking water assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. The liver tissues were collected and stored at −80°C for further experiments.", "Hepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 ×g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard.", "The total RNA was purified from liver tissues using an Easy-blue RNA extraction kit (iNtRON Biotechnology, Korea) according to the manufacturer's protocol. 
The RNA quality and quantity were analyzed on an Agilent 2100 bioanalyzer using the RNA 6000 Nano Chip (Agilent Technologies, Netherlands) and ND-2000 Spectrophotometer (Thermofisher), respectively. The control and test RNA libraries were constructed using Ouantseq 3′ mRNA-Seq Library Prep Kit (Lexogen, Austria). Briefly, 500 ng of the total RNA was prepared, and an oligo-dT primer containing an Illumina-compatible sequence at its 5′ end was hybridized to the RNA. Reverse transcription was then performed. Following degradation of the RNA template, complementary strand synthesis was initiated by a random primer containing an Illumina-compatible linker sequence at its 5′ end. Magnetic beads were used to eliminate all the reaction components. The library was amplified to add the complete adaptor sequences required for cluster generation. The constructed library was purified from the polymerase chain reaction mixture. High-throughput sequencing was performed as single-end 75 sequencings using Illumina NextSeq 500 (Illumina, USA). The QuantSeq 3′ mRNA-Seq reads were aligned using Bowtie2 version 2.1.0. The Bowtie2 indices were either generated from representative transcript sequences or the genome assembly sequence to align with the transcriptome and genome. The alignments were used to assemble transcripts, estimating their abundances, and detecting differential expression of genes. The differentially expressed genes were determined based on the counts from unique and multiple alignments using R version 3.2.2 and Bioconductor version 3.0. The Read count (RT) data were analyzed based on the Quantile normalization method using the Genowiz™ version 4.0.5.6 (Ocimum Biosolutions, India). The PPI network was analyzed using the STRING application tool. Cytoscape (version 2.7), a bioinformatics platform at the Institute of System Biology, USA, was used to construct the network diagrams. Gene classification was performed using the Medline database (National Center for Biotechnology Information, USA).", "Values are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver.17.0; USA). The statistical differences between the groups were observed with one-way analysis of variance (ANOVA) followed by a Turkey's test. The p < 0.05, p < 0.005, and p < 0.0005 indicate statistically significant differences from the control group.", " DPPH radical scavenging activity of SFN The antioxidant activity of the SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity with 500 µg/mL SFN. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100g, respectively (Table 1). 
These results indicate that SFN showed antioxidant potential through the free radical scavenging.\nDPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0005 compared to the control.\nValues represent as mean ± SE.\nIC50, the concentration at which 50% of free radical scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin.\nThe antioxidant activity of the SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity with 500 µg/mL SFN. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100g, respectively (Table 1). These results indicate that SFN showed antioxidant potential through the free radical scavenging.\nDPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0005 compared to the control.\nValues represent as mean ± SE.\nIC50, the concentration at which 50% of free radical scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin.\n Inhibition of LPS-stimulated NO production by SFN To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treating the SFN and GRN with LPS compared to the control cells. The results suggested that both SFN and GRN used in this study do not alter the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). The treatment of SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. The inhibition level of LPS-stimulated NO production was decreased significantly to 42% and 78% at 2.5 µM and 5 µM of SFN compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells.\nNO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0001 compared to the control.\nTo examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treating the SFN and GRN with LPS compared to the control cells. The results suggested that both SFN and GRN used in this study do not alter the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). 
The treatment of SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. The inhibition level of LPS-stimulated NO production was decreased significantly to 42% and 78% at 2.5 µM and 5 µM of SFN compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells.\nNO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0001 compared to the control.\n Effects of SFN on COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells The inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS treated control cells. In contrast, the GRN-treated cells did not significantly affect COX-2 or iNOS expression (Fig. 3) compared to the control cells. The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2.\n**p < 0.005, ***p < 0.0005 compared to the control.\nThe inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS treated control cells. In contrast, the GRN-treated cells did not significantly affect COX-2 or iNOS expression (Fig. 3) compared to the control cells. The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2.\n**p < 0.005, ***p < 0.0005 compared to the control.\n Inhibitory effects of SFN on TNF-α, IL-6, and IL-1β production The effects of SFN on the LPS-stimulated pro-inflammatory cytokines, TNF-α, IL-6, and IL-1β production were investigated in RAW 264.7 cells. Treatment of SFN showed significant inhibition (p < 0.05) of TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay.\n**p < 0.005, ***p < 0.0005 compared to the control.\nThe effects of SFN on the LPS-stimulated pro-inflammatory cytokines, TNF-α, IL-6, and IL-1β production were investigated in RAW 264.7 cells. Treatment of SFN showed significant inhibition (p < 0.05) of TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, 53%, respectively, at 5 µM compared to the control. 
These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay.\n**p < 0.005, ***p < 0.0005 compared to the control.\n Effect of SFN on TG accumulation in ob/ob mice liver Because excessive TG accumulation is a key factor in liver inflammation, the present study showed the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse liver (Fig. 5). In particular, the ob/ob mice treated with both SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice.\nSFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin.\n*p < 0.05, **p < 0.05 compared with control ob/ob group.\nBecause excessive TG accumulation is a key factor in liver inflammation, the present study showed the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse liver (Fig. 5). In particular, the ob/ob mice treated with both SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice.\nSFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin.\n*p < 0.05, **p < 0.05 compared with control ob/ob group.\n Differential gene expression analysis of ob/ob mice liver RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN and GRN-treated ob/ob mice compared to normal mice. The up-regulated (> 2-fold) 28 genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and 6 down-regulated (lower than 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using the STRING analysis to understand the relationship between normalized genes by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). 
The present results confirmed that SFN restored the expression levels of the genes related to inflammation to the normal level, which were up or down-regulated in ob/ob mice.\nSFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor.\nSFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B.\nSFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase.\nPPI, protein-protein-interaction; SFN, sulforaphane.\nRNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN and GRN-treated ob/ob mice compared to normal mice. The up-regulated (> 2-fold) 28 genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and 6 down-regulated (lower than 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using the STRING analysis to understand the relationship between normalized genes by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). The present results confirmed that SFN restored the expression levels of the genes related to inflammation to the normal level, which were up or down-regulated in ob/ob mice.\nSFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor.\nSFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B.\nSFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase.\nPPI, protein-protein-interaction; SFN, sulforaphane.", "The antioxidant activity of the SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity with 500 µg/mL SFN. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100g, respectively (Table 1). These results indicate that SFN showed antioxidant potential through the free radical scavenging.\nDPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0005 compared to the control.\nValues represent as mean ± SE.\nIC50, the concentration at which 50% of free radical scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin.", "To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. 
In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treating the SFN and GRN with LPS compared to the control cells. The results suggested that both SFN and GRN used in this study do not alter the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). The treatment of SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. The inhibition level of LPS-stimulated NO production was decreased significantly to 42% and 78% at 2.5 µM and 5 µM of SFN compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells.\nNO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin.\n***p < 0.0001 compared to the control.", "The inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS treated control cells. In contrast, the GRN-treated cells did not significantly affect COX-2 or iNOS expression (Fig. 3) compared to the control cells. The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2.\n**p < 0.005, ***p < 0.0005 compared to the control.", "The effects of SFN on the LPS-stimulated pro-inflammatory cytokines, TNF-α, IL-6, and IL-1β production were investigated in RAW 264.7 cells. Treatment of SFN showed significant inhibition (p < 0.05) of TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells.\nSFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay.\n**p < 0.005, ***p < 0.0005 compared to the control.", "Because excessive TG accumulation is a key factor in liver inflammation, the present study showed the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse liver (Fig. 5). In particular, the ob/ob mice treated with both SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice.\nSFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin.\n*p < 0.05, **p < 0.05 compared with control ob/ob group.", "RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 
6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN and GRN-treated ob/ob mice compared to normal mice. The up-regulated (> 2-fold) 28 genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and 6 down-regulated (lower than 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using the STRING analysis to understand the relationship between normalized genes by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). The present results confirmed that SFN restored the expression levels of the genes related to inflammation to the normal level, which were up or down-regulated in ob/ob mice.\nSFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor.\nSFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B.\nSFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase.\nPPI, protein-protein-interaction; SFN, sulforaphane.", "This study examined the anti-inflammatory effects of SFN on LPS-stimulated RAW 264.7 macrophage cells. The results showed that SFN increased the DPPH free radical scavenging activity. In addition, SFN suppressed the expression of iNOS, COX-2, and pro-inflammatory cytokines (TNF-α, IL-6, and IL-1β) in LPS-stimulated RAW 264.7 cells. In particular, SFN showed anti-inflammatory effects by normalizing the expression of the genes related to inflammation, including chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10), which were perturbed in ob/ob mice liver.\nIn the present study, SFN showed antioxidant potential through a DPPH assay in a concentration-dependent manner. Previous studies reported that isothiocyanates extracted from broccoli were strongly associated with the DPPH radical scavenging activity [24]. Similarly, in the present study, SFN had potent free radical scavenging activity compared to the GRN. Because oxidative stress induces inflammation [25], the antioxidant activity of SFN might improve the healing procedure of inflammation.\nIn this study, SFN inhibited the levels of COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells, with concomitant decreases in NO production. The production of NO by iNOS leads to the activation of macrophages against microorganisms [26]. On the other hand, overexpression of NO, a key biomarker of oxidative stress, causes oxidative damage during the inflammatory process [27]. COX-2 is involved in the production of prostaglandins, which causes an increase in chemotaxis, blood flow, and subsequent dysfunction of tissues during inflammation [28]. The previous study reported that SFN enriched broccoli florets extract suppressed NO production and iNOS expression by inhibiting the NF-κB activity in LPS-stimulated RAW 264.7 cells [29]. In addition, SFN decreased the levels of COX-2 and prostaglandin E2 in LPS-stimulated RAW 264.7 cells [30]. 
Consistent with previous findings, the results confirmed that the biological activity of SFN is firmly attributed to the inhibitory effects on COX-2, iNOS, and NO production in LPS-stimulated RAW 264.7 cells. In addition, GRN, without enzymatic conversion to SFN, did not show significant anti-inflammatory effects on LPS-stimulated RAW 264.7 cells compared to SFN.\nIn the present study, SFN exhibited its anti-inflammatory effects by inhibiting the production of pro-inflammatory cytokines, including TNF-α, IL-6, and IL-1β in LPS-stimulated RAW 264.7 cells. A recent study reported that SFN inhibited the production of TNF-α, IL-6, and IL-1β through activation of the Nrf2/HO-1 pathway and inhibition of the JNK/AP-1/NF-κB pathway in LPS-activated microglia [31]. Therefore the anti-inflammatory effects of SFN might result from the inhibitory effects on the production of pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells.\nIn the present study, RNA sequencing analysis was conducted to reveal the molecular mechanism underlying the anti-inflammatory effect of SFN on the obesity-induced inflammation in ob/ob mice. The gene expression results showed that SFN normalized the expression levels of up-regulated genes related to the inflammation, including Cxcl14, Ccl1, Ccl3, Ccl4, Ccl17, Cxcr1, Ccr3, Ccr10, and Ifng genes in the ob/ob mice liver. Cxcl14 mediated leukocyte migration and differentiation [32]. In addition, Cxcl14 involves the obesity-associated infiltration of macrophages into tissues and hepatic steatosis in obese mice [33]. Ccl1 acts as a chemoattractant for monocytes, immature B cells, and dendritic cells [34]. The inhibition of Ccl1 reduces liver inflammation and fibrosis progression [35]. Ccl3 promotes the recruitment of CD4+ T cells to the liver, and increased Ccl3 expression was observed in the patient with liver injury [36]. Ccl17 shows chemotactic activity for CD4+ T cells and plays a role in trafficking and activation of mature T cells [37]. Furthermore, chemokine receptors (Ccr1, Ccr10, and Ccr3) interact with their specific chemokine ligands, which cause various cell responses, such as chemotaxis [38]. In the liver, IFN-γ activates resident macrophages and stimulates hepatocyte apoptosis by increasing ROS production [39].\nRemarkably, in the present study, chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were identified as hub genes because they formed a “functional cluster” within the PPI network. These results suggest that chemokine proteins might play key roles in the obesity-induced inflammation in ob/ob mice liver. Therefore, the treatment of SFN can normalize liver inflammation by normalizing the up-regulated genes related to chemokines in ob/ob mice liver.\nIn addition, the current results showed that SFN normalized the down-regulated genes related to inflammation, including Fn1, Itgb21, and Pik3cd in ob/ob mice liver. Fn1 produces soluble plasma fibronectin-1, which is involved mainly in blood clotting, wound healing, and protects against excessive liver fibrosis [40]. Itgb2 encodes CD18, which inhibits the capability of the immune system to fight off infection [41]. Pik3cd regulates the immune cell metabolism through the PI3K‐AKT‐mTOR signaling pathway [42]. 
These findings suggest that downregulation of the Fn1, Itgb2l, and Pik3cd genes might contribute to the progression of inflammation by dysregulating immune responses in the ob/ob mouse liver.\nIn conclusion, SFN has potent anti-inflammatory activity, as demonstrated by its ability to inhibit NO production and the expression of COX-2, iNOS, and pro-inflammatory cytokines (TNF-α, IL-6, and IL-1β) in LPS-stimulated RAW 264.7 cells. In particular, gene expression analysis showed that SFN ameliorates obesity-induced inflammation by normalizing the genes related to chemokine signaling, including chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) in ob/ob mouse livers. Overall, SFN exerts a potent anti-inflammatory effect by normalizing the expression levels of the genes related to inflammation that had been perturbed in ob/ob mice." ]
[ "intro", "materials|methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion" ]
[ "Sulforaphane", "anti-inflammatory activity", "RNA sequencing analysis", "differential gene expression", "ob/ob mice" ]
INTRODUCTION: Inflammation is the most commonly identified condition in the clinical field, which involves protecting the body from infection and tissue damage [1]. Macrophages are one of the major groups of the immune system, which perform a critical role in response to injury and infection [2]. Nuclear factor-κB (NF-κB) is a key regulatory element in macrophages [3]. The activation of NF-κB is essential for the expression of nitric oxide (NO), inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and pro-inflammatory cytokines, including tumor necrosis factor (TNF)-α, interleukin (IL)-6, and IL-1β [4]. Cytokines released from the inflammatory tissue disturb the metabolic functions of many organs, including the liver [5]. In particular, chronic inflammation is closely related to the progression of many metabolic diseases, including obesity and insulin resistance [6]. Obesity is associated with low-grade inflammation that causes oxidative stress, which leads to insulin resistance and non-alcoholic fatty liver disease (NAFLD) [7]. In the present study, ob/ob mice were selected as the appropriate in vivo model because they exhibit severe disturbances of the immune functions [89]. In particular, ob/ob mice have high levels of circulating endotoxin, which contributes to the development of inflammation by activating Toll-like receptor 4 (TLR4) signaling in the liver [8]. The inflammatory state of obesity is the augmented infiltration of T cells and macrophages into the metabolic tissues, including the liver [10]. In addition, ob/ob mice display hepatic lipid accumulation that induces inflammation, leading to NAFLD development [9]. Therefore, the liver of ob/ob mice was used to observe the effects of sulforaphane (SFN) on inflammatory gene expression. SFN is an isothiocyanate present in cruciferous vegetables, such as cabbage, cauliflower, and broccoli [11]. Glucoraphanin (GRN), one of the main glucosinolates in cruciferous vegetables, is converted to SFN by the gut microbiota-derived myrosinase in both rodents and humans [12]. A previous study found that GRN ameliorates obesity-induced inflammation in high-fat diet-treated obese mice [13]. In addition, bioactivated GRN with myrosinase reduced pro-inflammatory signaling related to a spinal cord injury in an experimental mouse model [14]. The supplementation of GRN-rich broccoli sprout extract reduced inflammatory reactions in endothelial cells [15]. Furthermore, synthetic GRN exhibited anti-inflammatory activity by reducing TNF-α secretion in lipopolysaccharide (LPS)-stimulated THP-1 cells [16]. Because GRN has an anti-inflammatory effect in vivo and in vitro, the present study used it as a control to compare the anti-inflammatory effect with SFN on both RAW 264.7 cells and ob/ob mice. SFN shows high chemical reactivity because of the electrophilicity of its isothiocyanate group [17]. Previous studies reported that SFN prevents oxidative stress-induced inflammation [18]. Furthermore, the SFN treatment prevented nod-like receptor protein 3 (NLRP3) inflammasome-induced NAFLD in obese mice [19]. Despite the anti-inflammatory and antioxidant effects of SFN, its exact mechanism related to the inflammatory genes is not completely understood. The present study examined the anti-inflammatory activity of SFN on LPS-stimulated RAW 264.7 cells and ob/ob mice. The effects of SFN on the expression levels of pro-inflammatory mediators, such as NO, COX-2, iNOS, TNF-α, IL-6, and IL-1β, were analyzed in LPS-stimulated RAW 264.7 cells. 
In addition, the effects of SFN on obesity-related inflammation were identified from the expression levels of genes related to inflammation in ob/ob mouse livers. MATERIALS AND METHODS: DPPH radical scavenging activity: The antioxidant activity of SFN was evaluated using the stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) according to a slight modification of a previously described method [20]. Various concentrations of SFN (Sigma, USA) and GRN (Cayman Chemicals, USA) were diluted in dimethyl sulfoxide (DMSO) (Sigma) and incubated with ethanolic 0.1 mM DPPH (Sigma) at 37°C for 30 min. The absorbance was measured at 517 nm using a UV spectrophotometer (Mecasys, Korea). The blanks were prepared by replacing the sample volumes with DMSO. The results were calculated as percentages of the control (100%), and the concentration of extract required to decrease the initial DPPH concentration by 50% was expressed as the IC50 value. The radical scavenging ability (%) of the samples was calculated as % inhibition = [(ABlank − ASample)/ABlank] × 100, where ABlank = absorbance of the DPPH without the sample and ASample = absorbance of the DPPH with SFN or GRN. The antioxidant activity of SFN is expressed as the ascorbic acid (AA) equivalent antioxidant capacity (AEAC) in mg AA/100 g dry weight (DW) using the equation AEAC = IC50 (ascorbic acid)/IC50 (sample) × 10⁵. Measurement of NO production: The murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean Cell Line Bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured at two-day intervals. For the experiment, the cells were cultured in 96-well plates. After 24 h of incubation, the cells were pretreated with SFN (2.5 and 5 µM) or GRN (5 µM) for 1 h and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with that of SFN. 
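The % inhibition and AEAC formulas given above translate directly into code. The minimal sketch below only illustrates those two calculations; the absorbance readings are made-up placeholders, and the ascorbic-acid IC50 used in the AEAC example is an assumed value chosen to be consistent with the numbers reported later in the paper, not a measured result.

```python
def dpph_inhibition(a_blank: float, a_sample: float) -> float:
    """% inhibition = [(A_blank - A_sample) / A_blank] x 100."""
    return (a_blank - a_sample) / a_blank * 100

def aeac(ic50_ascorbic_acid: float, ic50_sample: float) -> float:
    """AEAC (mg AA/100 g DW) = IC50(ascorbic acid) / IC50(sample) x 10^5."""
    return ic50_ascorbic_acid / ic50_sample * 1e5

# Illustrative absorbance readings (not data from this study): ~58% inhibition.
print(dpph_inhibition(a_blank=0.90, a_sample=0.38))

# Using the reported SFN IC50 (405.79 µg/mL) and an assumed ascorbic-acid IC50
# of ~10.4 µg/mL gives an AEAC close to the ~2,558 mg AA/100 g reported for SFN.
print(aeac(ic50_ascorbic_acid=10.4, ic50_sample=405.79))
```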
The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria). The murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean cell line bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM), containing 10% fetal bovine serum (FBS), 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured within two-day intervals. For the experiment, the cells were cultured in 96 well plates. After 24 h incubation, the cells were pretreated with SFN (2.5 and 5 µM) and GRN (5 µM) for 1 h, and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with SFN. The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria). ELISA RAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions. RAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions. Measurement of COX-2 and iNOS protein expression RAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). 
The equal amount of protein was mixed with 20% of loading buffer and separated by Tris-Glycine-Polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech) and subjected to Western blot with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A Chemi-luminescence Bioimaging Instrument (NeoScience Co., Korea) was used to detect the proteins of interest. RAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). The equal amount of protein was mixed with 20% of loading buffer and separated by Tris-Glycine-Polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech) and subjected to Western blot with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A Chemi-luminescence Bioimaging Instrument (NeoScience Co., Korea) was used to detect the proteins of interest. Animals experiments Male ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain was originated from Jackson Laboratory (USA) [22], and was developed to C57BL/6JHamSlc-ob in Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50-55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051). Male ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain was originated from Jackson Laboratory (USA) [22], and was developed to C57BL/6JHamSlc-ob in Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50-55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051). Sample treatment The ob/ob mice were allocated randomly into three groups (n = 5): control ob/ob group and 2 sample-treated groups were orally administrated SFN (0.5 mg/kg), and GRN (2.5 mg/kg) every day in their drinking water assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. The liver tissues were collected and stored at −80°C for further experiments. 
The ob/ob mice were allocated randomly into three groups (n = 5): control ob/ob group and 2 sample-treated groups were orally administrated SFN (0.5 mg/kg), and GRN (2.5 mg/kg) every day in their drinking water assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. The liver tissues were collected and stored at −80°C for further experiments. Measurement of triglyceride (TG) content in liver tissues of ob/ob mice Hepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 ×g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard. Hepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 ×g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard. Total RNA isolation, library preparation, sequencing, and data analysis The total RNA was purified from liver tissues using an Easy-blue RNA extraction kit (iNtRON Biotechnology, Korea) according to the manufacturer's protocol. The RNA quality and quantity were analyzed on an Agilent 2100 bioanalyzer using the RNA 6000 Nano Chip (Agilent Technologies, Netherlands) and ND-2000 Spectrophotometer (Thermofisher), respectively. The control and test RNA libraries were constructed using Ouantseq 3′ mRNA-Seq Library Prep Kit (Lexogen, Austria). Briefly, 500 ng of the total RNA was prepared, and an oligo-dT primer containing an Illumina-compatible sequence at its 5′ end was hybridized to the RNA. Reverse transcription was then performed. Following degradation of the RNA template, complementary strand synthesis was initiated by a random primer containing an Illumina-compatible linker sequence at its 5′ end. Magnetic beads were used to eliminate all the reaction components. The library was amplified to add the complete adaptor sequences required for cluster generation. The constructed library was purified from the polymerase chain reaction mixture. High-throughput sequencing was performed as single-end 75 sequencings using Illumina NextSeq 500 (Illumina, USA). The QuantSeq 3′ mRNA-Seq reads were aligned using Bowtie2 version 2.1.0. 
The Bowtie2 indices were either generated from representative transcript sequences or from the genome assembly sequence to align reads with the transcriptome and genome, respectively. The alignments were used to assemble transcripts, estimate their abundances, and detect the differential expression of genes. The differentially expressed genes were determined based on the counts from unique and multiple alignments using R version 3.2.2 and Bioconductor version 3.0. The read count (RT) data were analyzed based on the quantile normalization method using Genowiz™ version 4.0.5.6 (Ocimum Biosolutions, India). The PPI network was analyzed using the STRING application tool. Cytoscape (version 2.7), a bioinformatics platform from the Institute for Systems Biology, USA, was used to construct the network diagrams. Gene classification was performed using the Medline database (National Center for Biotechnology Information, USA). Statistical analysis: Values are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver. 17.0; USA). The statistical differences between the groups were assessed by one-way analysis of variance (ANOVA) followed by Tukey's test. p < 0.05, p < 0.005, and p < 0.0005 indicate statistically significant differences from the control group. 
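The ANOVA-plus-Tukey workflow described above can also be reproduced outside SPSS. The sketch below is only an illustration of that analysis using SciPy and statsmodels; the group labels and triplicate measurements are placeholders, not values from this study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder triplicate measurements for three hypothetical treatment groups.
control = [100.0, 98.5, 101.2]
lps     = [310.4, 295.8, 322.1]
lps_sfn = [180.2, 172.5, 190.3]

# One-way ANOVA across the groups.
f_stat, p_value = f_oneway(control, lps, lps_sfn)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD post-hoc test on the pooled observations.
values = np.concatenate([control, lps, lps_sfn])
groups = ["control"] * 3 + ["LPS"] * 3 + ["LPS+SFN"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```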
Values are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver.17.0; USA). The statistical differences between the groups were observed with one-way analysis of variance (ANOVA) followed by a Turkey's test. The p < 0.05, p < 0.005, and p < 0.0005 indicate statistically significant differences from the control group. DPPH radical scavenging activity: The antioxidant activity of the SFN was evaluated using stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) according to a slight modification of a previously described method [20]. Various concentrations of SFN (Sigma, USA) and GRN (Cayman Chemicals, USA) were diluted in dimethyl sulfoxide (DMSO) (Sigma) and incubated with ethanolic 0.1 mM DPPH (Sigma) at 37ºC for 30 min. The absorbance was measured at 517 nm using a UV spectrophotometer (Mecasys, Korea). The blanks were prepared by replacing the sample volumes with DMSO. The results were calculated as percentages of the control (100%), and the concentration of extracts required to decrease the initial DPPH concentration by 50% was expressed as the IC50 value. The radical scavenging ability (%) of the samples was calculated as % inhibition = [(ABlank − Asample)/ABlank] × 100, where ABlank = absorbance of the DPPH without the sample and Asample = absorbance of the SFN or GRN. The antioxidant activity of the SFN is expressed as the ascorbic acid (AA) equivalent antioxidant capacity (AEAC) as AA/100 g dry weight (DW) using the equation, AEAC = IC50 (Ascorbic acid)/IC50 (Sample) × 105. Measurement of NO production: The murine macrophage RAW 264.7 cell line (KCLB 40071) was purchased from the Korean cell line bank (KCLB, Korea). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM), containing 10% fetal bovine serum (FBS), 1% penicillin-streptomycin (Gibco, Thermo Fisher Scientific, USA), and kept at 37°C in a humidified atmosphere containing 5% CO2. After reaching 70–80% confluence, the cells were sub-cultured within two-day intervals. For the experiment, the cells were cultured in 96 well plates. After 24 h incubation, the cells were pretreated with SFN (2.5 and 5 µM) and GRN (5 µM) for 1 h, and then stimulated with LPS (Escherichia coli 0111:B4; Sigma-Aldrich, USA) for 24 h. The concentration of SFN was determined according to a previous study [21], and the same concentration (5 µM) of GRN was used to compare its activity with SFN. The level of NO production by LPS-stimulated RAW 264.7 macrophage cells was determined by measuring the nitrite level in the culture media using Griess reagent (Promega, USA). Briefly, the 50 µL of culture media from each well was mixed with 50 µL of N-(1-naphthyl)ethylenediamine dihydrochloride (NED) and 50 µL of a sulfanilamide solution. After incubating the mixture for 10 min at room temperature, the absorbance was read at 550 nm in an enzyme-linked immunosorbent assay (ELISA) microplate reader (TECAN, Austria). ELISA: RAW 264.7 macrophage cells were cultured in 96 well plates (1 × 105 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h of incubation, the cells were pretreated with SFN and GRN and then stimulated with LPS for 24 h. The concentrations of TNF-α, IL-6, and IL-β in the culture media were measured using commercially available ELISA kits (Koma Biotech, Korea) according to the manufacture instructions. 
Measurement of COX-2 and iNOS protein expression: RAW 264.7 macrophage cells were cultured in six-well plates (1 × 106 cells/well) in DMEM, 10% FBS, and 1% penicillin-streptomycin. After 24 h incubation, the cells were pretreated with SFN and GRN for 1 h and then stimulated with LPS for 24 h. The expression levels of the COX-2 and iNOS proteins were observed by Western blot analysis. The cells were lysed with RIPA buffer containing a protease inhibitor mixture. The supernatant was separated, and the protein concentrations were evaluated using the Bradford assay (Bio-Rad Laboratories, USA). The equal amount of protein was mixed with 20% of loading buffer and separated by Tris-Glycine-Polyacrylamide, non-sodium dodecyl sulfate precast gel (10%, Koma Biotech) and subjected to Western blot with COX-2 (Sigma), iNOS (Sigma), and β-actin (Thermofisher, USA) antibodies. A Chemi-luminescence Bioimaging Instrument (NeoScience Co., Korea) was used to detect the proteins of interest. Animals experiments: Male ob/ob mice (6 weeks old) were purchased from Japan SLC Inc. (Japan). The mouse strain was originated from Jackson Laboratory (USA) [22], and was developed to C57BL/6JHamSlc-ob in Japan SLC Inc. (Japan). Male C57BL/6 mice (6 weeks old) supplied by Orient Bio (Korea) were used as the non-obesity control group. The mice were housed at a controlled temperature (24°C ± 1°C) and 50-55% humidity with a 12 h light/12 h dark cycle. All experiments were carried out according to the National Institute of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee of Jeju National University (ACUCC; approval No. 2018-0051). Sample treatment: The ob/ob mice were allocated randomly into three groups (n = 5): control ob/ob group and 2 sample-treated groups were orally administrated SFN (0.5 mg/kg), and GRN (2.5 mg/kg) every day in their drinking water assuming that each mouse drinks 20 mL of water per day. The concentrations of SFN and GRN were selected based on previous studies [23]. The samples were diluted with distilled water, and the mice were given access to water and food ad libitum. The samples were replaced with freshly prepared solutions every day to compensate for the degradative loss of the active compounds. After 6 weeks of sample treatment, the mice were fasted overnight and sacrificed. The liver tissues were collected and stored at −80°C for further experiments. Measurement of triglyceride (TG) content in liver tissues of ob/ob mice: Hepatic TG contents were quantified using a commercially available TG colorimetric assay kit (BioAssay Systems, USA). Briefly, the liver tissues were homogenized with 5% Triton X-100 (Bio-Rad Laboratories) and centrifuged at 13,000 ×g for 10 min to separate the fat layer. The TG and protein contents of the diluted supernatants were analyzed according to the manufacturer's instructions. The protein concentration of each sample was measured using a Bio-Rad DC protein assay (Bio-Rad Laboratories). The TG contents were normalized to the respective protein concentration, employing bovine serum albumin (Sigma) as the calibration standard. Total RNA isolation, library preparation, sequencing, and data analysis: The total RNA was purified from liver tissues using an Easy-blue RNA extraction kit (iNtRON Biotechnology, Korea) according to the manufacturer's protocol. 
The RNA quality and quantity were analyzed on an Agilent 2100 bioanalyzer using the RNA 6000 Nano Chip (Agilent Technologies, Netherlands) and ND-2000 Spectrophotometer (Thermofisher), respectively. The control and test RNA libraries were constructed using Ouantseq 3′ mRNA-Seq Library Prep Kit (Lexogen, Austria). Briefly, 500 ng of the total RNA was prepared, and an oligo-dT primer containing an Illumina-compatible sequence at its 5′ end was hybridized to the RNA. Reverse transcription was then performed. Following degradation of the RNA template, complementary strand synthesis was initiated by a random primer containing an Illumina-compatible linker sequence at its 5′ end. Magnetic beads were used to eliminate all the reaction components. The library was amplified to add the complete adaptor sequences required for cluster generation. The constructed library was purified from the polymerase chain reaction mixture. High-throughput sequencing was performed as single-end 75 sequencings using Illumina NextSeq 500 (Illumina, USA). The QuantSeq 3′ mRNA-Seq reads were aligned using Bowtie2 version 2.1.0. The Bowtie2 indices were either generated from representative transcript sequences or the genome assembly sequence to align with the transcriptome and genome. The alignments were used to assemble transcripts, estimating their abundances, and detecting differential expression of genes. The differentially expressed genes were determined based on the counts from unique and multiple alignments using R version 3.2.2 and Bioconductor version 3.0. The Read count (RT) data were analyzed based on the Quantile normalization method using the Genowiz™ version 4.0.5.6 (Ocimum Biosolutions, India). The PPI network was analyzed using the STRING application tool. Cytoscape (version 2.7), a bioinformatics platform at the Institute of System Biology, USA, was used to construct the network diagrams. Gene classification was performed using the Medline database (National Center for Biotechnology Information, USA). Statistical analysis: Values are expressed as the means ± SE of three independent experiments. The data were statistically analyzed using IBM SPSS Statistics (Ver.17.0; USA). The statistical differences between the groups were observed with one-way analysis of variance (ANOVA) followed by a Turkey's test. The p < 0.05, p < 0.005, and p < 0.0005 indicate statistically significant differences from the control group. RESULTS: DPPH radical scavenging activity of SFN The antioxidant activity of the SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity with 500 µg/mL SFN. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100g, respectively (Table 1). These results indicate that SFN showed antioxidant potential through the free radical scavenging. DPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0005 compared to the control. Values represent as mean ± SE. IC50, the concentration at which 50% of free radical scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin. 
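IC50 values such as those reported above are typically read off the concentration-inhibition curve at 50% scavenging. The sketch below shows one simple way to do this by linear interpolation; the concentration series and inhibition percentages are illustrative assumptions rather than the measured dose-response data.

```python
import numpy as np

# Illustrative DPPH dose-response: concentrations (µg/mL) and % inhibition.
conc = np.array([62.5, 125.0, 250.0, 500.0])
inhibition = np.array([12.0, 22.0, 35.0, 58.0])

# Interpolate the concentration that gives 50% radical scavenging.
# With these made-up points the estimate lands near the ~406 µg/mL reported for SFN.
ic50 = np.interp(50.0, inhibition, conc)
print(f"Estimated IC50 ≈ {ic50:.1f} µg/mL")
```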
The antioxidant activity of the SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity with 500 µg/mL SFN. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100g, respectively (Table 1). These results indicate that SFN showed antioxidant potential through the free radical scavenging. DPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0005 compared to the control. Values represent as mean ± SE. IC50, the concentration at which 50% of free radical scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin. Inhibition of LPS-stimulated NO production by SFN To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treating the SFN and GRN with LPS compared to the control cells. The results suggested that both SFN and GRN used in this study do not alter the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). The treatment of SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. The inhibition level of LPS-stimulated NO production was decreased significantly to 42% and 78% at 2.5 µM and 5 µM of SFN compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells. NO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0001 compared to the control. To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treating the SFN and GRN with LPS compared to the control cells. The results suggested that both SFN and GRN used in this study do not alter the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). The treatment of SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. The inhibition level of LPS-stimulated NO production was decreased significantly to 42% and 78% at 2.5 µM and 5 µM of SFN compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells. NO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0001 compared to the control. 
Effects of SFN on COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells The inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS treated control cells. In contrast, the GRN-treated cells did not significantly affect COX-2 or iNOS expression (Fig. 3) compared to the control cells. The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2. **p < 0.005, ***p < 0.0005 compared to the control. The inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS treated control cells. In contrast, the GRN-treated cells did not significantly affect COX-2 or iNOS expression (Fig. 3) compared to the control cells. The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2. **p < 0.005, ***p < 0.0005 compared to the control. Inhibitory effects of SFN on TNF-α, IL-6, and IL-1β production The effects of SFN on the LPS-stimulated pro-inflammatory cytokines, TNF-α, IL-6, and IL-1β production were investigated in RAW 264.7 cells. Treatment of SFN showed significant inhibition (p < 0.05) of TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay. **p < 0.005, ***p < 0.0005 compared to the control. The effects of SFN on the LPS-stimulated pro-inflammatory cytokines, TNF-α, IL-6, and IL-1β production were investigated in RAW 264.7 cells. Treatment of SFN showed significant inhibition (p < 0.05) of TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay. **p < 0.005, ***p < 0.0005 compared to the control. Effect of SFN on TG accumulation in ob/ob mice liver Because excessive TG accumulation is a key factor in liver inflammation, the present study showed the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse liver (Fig. 5). 
In particular, the ob/ob mice treated with both SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice. SFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin. *p < 0.05, **p < 0.05 compared with control ob/ob group. Because excessive TG accumulation is a key factor in liver inflammation, the present study showed the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse liver (Fig. 5). In particular, the ob/ob mice treated with both SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice. SFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin. *p < 0.05, **p < 0.05 compared with control ob/ob group. Differential gene expression analysis of ob/ob mice liver RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN and GRN-treated ob/ob mice compared to normal mice. The up-regulated (> 2-fold) 28 genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and 6 down-regulated (lower than 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using the STRING analysis to understand the relationship between normalized genes by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). The present results confirmed that SFN restored the expression levels of the genes related to inflammation to the normal level, which were up or down-regulated in ob/ob mice. SFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor. SFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B. SFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase. PPI, protein-protein-interaction; SFN, sulforaphane. RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN and GRN-treated ob/ob mice compared to normal mice. The up-regulated (> 2-fold) 28 genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and 6 down-regulated (lower than 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). 
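The 2-fold/0.6-fold selection described above amounts to a simple filtering step on normalized expression values. The sketch below assumes a hypothetical table with columns named normal, obob_control, and obob_sfn; the column names, gene rows, and numbers are assumptions for illustration and do not reproduce the study's data.

```python
import pandas as pd

# Hypothetical normalized expression values (arbitrary units).
df = pd.DataFrame(
    {
        "normal": [10.0, 12.0, 80.0],
        "obob_control": [35.0, 30.0, 40.0],
        "obob_sfn": [11.0, 13.0, 75.0],
    },
    index=["Ccl1", "Ccl4", "Fn1"],
)

# Fold change of control ob/ob mice relative to normal mice.
fc_obob = df["obob_control"] / df["normal"]

up_regulated = df[fc_obob > 2.0]    # > 2-fold up in ob/ob vs. normal
down_regulated = df[fc_obob < 0.6]  # < 0.6-fold down in ob/ob vs. normal

# Check whether SFN treatment brings expression back toward the normal level.
restored = (df["obob_sfn"] / df["normal"]).between(0.6, 2.0)
print(up_regulated.index.tolist(), down_regulated.index.tolist())
print(restored)
```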
The PPI network was constructed using the STRING analysis to understand the relationship between normalized genes by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). The present results confirmed that SFN restored the expression levels of the genes related to inflammation to the normal level, which were up or down-regulated in ob/ob mice. SFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor. SFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B. SFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase. PPI, protein-protein-interaction; SFN, sulforaphane. DPPH radical scavenging activity of SFN: The antioxidant activity of the SFN was evaluated based on the DPPH assay. According to the results, the DPPH free radical scavenging activity was increased significantly (p < 0.0005) by SFN. In contrast, GRN did not exhibit any free radical scavenging activity compared to the control (Fig. 1). SFN showed up to 58% radical scavenging activity with 500 µg/mL SFN. The calculated IC50 values of SFN and GRN were 405.79 ± 14.6 and 4163.5 ± 167 µg/mL, respectively (Table 1). In addition, the AEAC of SFN and GRN was 2,557.97 ± 89.08 and 249.309 ± 8.22 mg AA/100g, respectively (Table 1). These results indicate that SFN showed antioxidant potential through the free radical scavenging. DPPH, 2,2-diphenyl-1-picrylhydrazyl; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0005 compared to the control. Values represent as mean ± SE. IC50, the concentration at which 50% of free radical scavenged; AEAC, AA equivalent antioxidant capacity expressed as AA/100 g dry weight; AA, ascorbic acid; SFN, sulforaphane; GRN, glucoraphanin. Inhibition of LPS-stimulated NO production by SFN: To examine the effects of SFN on LPS-stimulated NO production, the amount of NO released from the cells was determined using a Griess assay. In addition, the cell morphology was observed to ensure the experiments were performed under the right conditions. As shown in Fig. 2A, the morphologies of the RAW 264.7 cells were not changed after treating the SFN and GRN with LPS compared to the control cells. The results suggested that both SFN and GRN used in this study do not alter the RAW 264.7 cell morphology. Based on the Griess assay results, NO production was increased significantly after the LPS treatment in RAW 264.7 cells compared to the normal cells (Fig. 2B). The treatment of SFN inhibited LPS-stimulated NO production in a concentration-dependent manner, whereas the GRN treatment showed no significant change in NO production. The inhibition level of LPS-stimulated NO production was decreased significantly to 42% and 78% at 2.5 µM and 5 µM of SFN compared to the LPS-treated control cells. The result confirmed that SFN inhibited NO production in LPS-stimulated RAW 264.7 cells. NO, nitric oxide; LPS, lipopolysaccharide; SFN, sulforaphane; GRN, glucoraphanin. ***p < 0.0001 compared to the control. 
Effects of SFN on COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells: The inhibitory effect of SFN on the expression levels of the COX-2 and iNOS proteins in LPS-stimulated RAW 264.7 cells were investigated by Western blot analysis. The cells treated with SFN showed significant decreases in the levels of COX-2 and iNOS compared to the LPS treated control cells. In contrast, the GRN-treated cells did not significantly affect COX-2 or iNOS expression (Fig. 3) compared to the control cells. The results suggest that SFN regulates inflammation by inhibiting iNOS and COX-2 in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; iNOS, inducible nitric oxide synthase; LPS, lipopolysaccharide; GRN, glucoraphanin; COX-2, cyclooxygenase-2. **p < 0.005, ***p < 0.0005 compared to the control. Inhibitory effects of SFN on TNF-α, IL-6, and IL-1β production: The effects of SFN on the LPS-stimulated pro-inflammatory cytokines, TNF-α, IL-6, and IL-1β production were investigated in RAW 264.7 cells. Treatment of SFN showed significant inhibition (p < 0.05) of TNF-α, IL-6, and IL-1β production. In contrast, GRN did not significantly affect the production of those cytokines in LPS-stimulated RAW cells (Fig. 4). SFN inhibited the LPS-stimulated TNF-α, IL-6, and IL-1β production by 32%, 31%, 53%, respectively, at 5 µM compared to the control. These results confirmed that SFN inhibits the pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells. SFN, sulforaphane; LPS, lipopolysaccharide; GRN, glucoraphanin; IL, interleukin; TNF, tumor necrosis factor; ELISA, enzyme-linked immunosorbent assay. **p < 0.005, ***p < 0.0005 compared to the control. Effect of SFN on TG accumulation in ob/ob mice liver: Because excessive TG accumulation is a key factor in liver inflammation, the present study showed the hepatic TG content in ob/ob mice. Based on the results, the ob/ob mouse livers showed higher levels of TG accumulation than the normal mouse liver (Fig. 5). In particular, the ob/ob mice treated with both SFN and GRN showed 24% and 32% lower hepatic TG contents, respectively, compared to the control ob/ob mice. These results suggest that SFN reduces hepatic TG accumulation, which might normalize liver inflammation in ob/ob mice. SFN, sulforaphane; TG, triglyceride; GRN, glucoraphanin. *p < 0.05, **p < 0.05 compared with control ob/ob group. Differential gene expression analysis of ob/ob mice liver: RNA sequencing analysis was performed to observe the effects of SFN on the expression levels of the genes related to inflammation in ob/ob mice. The functional annotation of the genes was evaluated by Gene Ontology (GO) analysis. As shown in Fig. 6, a large portion of the genes related to inflammation was up- or down-regulated in the SFN and GRN-treated ob/ob mice compared to normal mice. The up-regulated (> 2-fold) 28 genes, including Ccl1, Ccl4, Cxcr1, Ccr3, and Ifng, and 6 down-regulated (lower than 0.6-fold) genes, including Fn1, Itgb2l, Pik3cd, and Adora2a, were normalized to the control level by the SFN treatment (Tables 2 and 3). The PPI network was constructed using the STRING analysis to understand the relationship between normalized genes by SFN. The PPI network was visualized as nodes (genes) and edges (interactions between the genes), as shown in Fig. 7. Among the proteins related to inflammation, the chemokine ligands (Ccl1, Ccl4, Ccl3, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were closely associated and formed a large “functional cluster” in the middle of the network (Fig. 7). 
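To make the idea of a "functional cluster" concrete, the sketch below builds a small graph from a toy edge list and ranks nodes by degree, which is one common way hub candidates are pulled out of a STRING-style PPI network. The edge list is an assumption for illustration only, not the STRING output analyzed in this study.

```python
import networkx as nx

# Toy protein-protein interaction edges among chemokine ligands and receptors.
edges = [
    ("Ccl1", "Ccr3"), ("Ccl4", "Ccr3"), ("Ccl3", "Ccr3"),
    ("Ccl17", "Ccr10"), ("Ccl1", "Cxcr1"), ("Ccl4", "Cxcr1"),
    ("Ccl3", "Ccl4"), ("Ccr3", "Cxcr1"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Rank proteins by degree; the most connected nodes are the hub candidates.
hubs = sorted(G.degree, key=lambda item: item[1], reverse=True)
print(hubs[:3])

# Degree centrality gives the same ranking on a 0-1 scale.
print(nx.degree_centrality(G))
```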
The present results confirmed that SFN restored the expression levels of the genes related to inflammation to the normal level, which were up or down-regulated in ob/ob mice. SFN, sulforaphane; GO, Gene Ontology; GRN, glucoraphanin; NF, nuclear factor. SFN, sulforaphane; GRN, glucoraphanin; NAD(P)H, nicotinamide adenine dinucleotide phosphate; PDGF, platelet-derived growth factor; RELB, v-rel reticuloendotheliosis viral oncogene homolog B. SFN, sulforaphane; GRN, glucoraphanin; PI3K, phosphoinositide-3-kinase. PPI, protein-protein-interaction; SFN, sulforaphane. DISCUSSION: This study examined the anti-inflammatory effects of SFN on LPS-stimulated RAW 264.7 macrophage cells. The results showed that SFN increased the DPPH free radical scavenging activity. In addition, SFN suppressed the expression of iNOS, COX-2, and pro-inflammatory cytokines (TNF-α, IL-6, and IL-1β) in LPS-stimulated RAW 264.7 cells. In particular, SFN showed anti-inflammatory effects by normalizing the expression of the genes related to inflammation, including chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10), which were perturbed in ob/ob mice liver. In the present study, SFN showed antioxidant potential through a DPPH assay in a concentration-dependent manner. Previous studies reported that isothiocyanates extracted from broccoli were strongly associated with the DPPH radical scavenging activity [24]. Similarly, in the present study, SFN had potent free radical scavenging activity compared to the GRN. Because oxidative stress induces inflammation [25], the antioxidant activity of SFN might improve the healing procedure of inflammation. In this study, SFN inhibited the levels of COX-2 and iNOS expression in LPS-stimulated RAW 264.7 cells, with concomitant decreases in NO production. The production of NO by iNOS leads to the activation of macrophages against microorganisms [26]. On the other hand, overexpression of NO, a key biomarker of oxidative stress, causes oxidative damage during the inflammatory process [27]. COX-2 is involved in the production of prostaglandins, which causes an increase in chemotaxis, blood flow, and subsequent dysfunction of tissues during inflammation [28]. The previous study reported that SFN enriched broccoli florets extract suppressed NO production and iNOS expression by inhibiting the NF-κB activity in LPS-stimulated RAW 264.7 cells [29]. In addition, SFN decreased the levels of COX-2 and prostaglandin E2 in LPS-stimulated RAW 264.7 cells [30]. Consistent with previous findings, the results confirmed that the biological activity of SFN is firmly attributed to the inhibitory effects on COX-2, iNOS, and NO production in LPS-stimulated RAW 264.7 cells. In addition, GRN, without enzymatic conversion to SFN, did not show significant anti-inflammatory effects on LPS-stimulated RAW 264.7 cells compared to SFN. In the present study, SFN exhibited its anti-inflammatory effects by inhibiting the production of pro-inflammatory cytokines, including TNF-α, IL-6, and IL-1β in LPS-stimulated RAW 264.7 cells. A recent study reported that SFN inhibited the production of TNF-α, IL-6, and IL-1β through activation of the Nrf2/HO-1 pathway and inhibition of the JNK/AP-1/NF-κB pathway in LPS-activated microglia [31]. Therefore the anti-inflammatory effects of SFN might result from the inhibitory effects on the production of pro-inflammatory cytokines in LPS-stimulated RAW 264.7 cells. 
In the present study, RNA sequencing analysis was conducted to reveal the molecular mechanism underlying the anti-inflammatory effect of SFN on obesity-induced inflammation in ob/ob mice. The gene expression results showed that SFN normalized the expression levels of the up-regulated inflammation-related genes, including Cxcl14, Ccl1, Ccl3, Ccl4, Ccl17, Cxcr1, Ccr3, Ccr10, and Ifng, in the ob/ob mouse liver. Cxcl14 mediates leukocyte migration and differentiation [32]. In addition, Cxcl14 is involved in the obesity-associated infiltration of macrophages into tissues and in hepatic steatosis in obese mice [33]. Ccl1 acts as a chemoattractant for monocytes, immature B cells, and dendritic cells [34]. The inhibition of Ccl1 reduces liver inflammation and fibrosis progression [35]. Ccl3 promotes the recruitment of CD4+ T cells to the liver, and increased Ccl3 expression was observed in patients with liver injury [36]. Ccl17 shows chemotactic activity for CD4+ T cells and plays a role in the trafficking and activation of mature T cells [37]. Furthermore, chemokine receptors (Cxcr1, Ccr10, and Ccr3) interact with their specific chemokine ligands, which triggers various cellular responses, such as chemotaxis [38]. In the liver, IFN-γ activates resident macrophages and stimulates hepatocyte apoptosis by increasing ROS production [39]. Remarkably, in the present study, chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10) were identified as hub genes because they formed a “functional cluster” within the PPI network. These results suggest that chemokine proteins might play key roles in obesity-induced inflammation in the ob/ob mouse liver. Therefore, SFN treatment may alleviate liver inflammation by normalizing the up-regulated chemokine-related genes in the ob/ob mouse liver. In addition, the current results showed that SFN normalized the down-regulated genes related to inflammation, including Fn1, Itgb2l, and Pik3cd, in the ob/ob mouse liver. Fn1 encodes soluble plasma fibronectin-1, which is involved mainly in blood clotting and wound healing and protects against excessive liver fibrosis [40]. Itgb2 encodes CD18, the loss of which impairs the capability of the immune system to fight off infection [41]. Pik3cd regulates immune cell metabolism through the PI3K-AKT-mTOR signaling pathway [42]. These findings suggest that the downregulation of the Fn1, Itgb2l, and Pik3cd genes might be involved in the progression of inflammation by dysregulating immune responses in the ob/ob mouse liver. In conclusion, SFN has potent anti-inflammatory activity, as demonstrated by its ability to inhibit NO production and the expression of COX-2, iNOS, and pro-inflammatory cytokines (TNF-α, IL-6, and IL-1β) in LPS-stimulated RAW 264.7 cells. In particular, gene expression analysis showed that SFN attenuates obesity-induced inflammation by normalizing the genes related to chemokine signaling, including chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, and Ccl17) and chemokine receptors (Cxcr1, Ccr3, and Ccr10), in the ob/ob mouse liver. Overall, SFN has a potent anti-inflammatory effect by normalizing the expression levels of the genes related to inflammation that had been perturbed in ob/ob mice.
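The hub-gene reasoning above (chemokine ligands and receptors forming a densely connected "functional cluster" in the PPI network) amounts to ranking nodes by connectivity and grouping connected nodes. The sketch below is only an illustrative reading of that idea, not the STRING workflow used in the study; the edge list is a hypothetical toy example and the top-5 cut-off is arbitrary.

```python
import networkx as nx

# Toy protein-protein interaction edges; in practice these would come from a
# STRING export for the genes normalized by SFN.
edges = [
    ("Ccl1", "Ccl3"), ("Ccl1", "Ccl4"), ("Ccl1", "Ccr3"),
    ("Ccl3", "Ccr3"), ("Ccl3", "Ccr10"), ("Ccl4", "Cxcr1"),
    ("Ccl17", "Ccr3"), ("Cxcl14", "Cxcr1"), ("Ccl4", "Ccr10"),
    ("Fn1", "Itgb2l"), ("Pik3cd", "Itgb2l"),
]

g = nx.Graph(edges)

# Rank nodes by degree; highly connected nodes are candidate "hub" genes.
degree = dict(g.degree())
hubs = sorted(degree, key=degree.get, reverse=True)[:5]
print("Candidate hub genes:", hubs)

# Connected components give a rough view of functional clusters: in this toy
# example the chemokine ligands/receptors fall into one component, separate
# from the Fn1/Itgb2l/Pik3cd group.
for component in nx.connected_components(g):
    print(sorted(component))
```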
Background: Sulforaphane (SFN) is an isothiocyanate compound present in cruciferous vegetables. Although the anti-inflammatory effects of SFN have been reported, the precise mechanism related to the inflammatory genes is poorly understood. Methods: Nitric oxide (NO) level was measured using a Griess assay. The inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) expression levels were analyzed by Western blot analysis. Pro-inflammatory cytokines (tumor necrosis factor [TNF]-α, interleukin [IL]-1β, and IL-6) were measured by enzyme-linked immunosorbent assay (ELISA). RNA sequencing analysis was performed to evaluate the differential gene expression in the liver of ob/ob mice. Results: The SFN treatment significantly attenuated the iNOS and COX-2 expression levels and inhibited NO, TNF-α, IL-1β, and IL-6 production in lipopolysaccharide (LPS)-stimulated RAW 264.7 cells. RNA sequencing analysis showed that the expression levels of 28 genes related to inflammation were up-regulated (> 2-fold), and six genes were down-regulated (< 0.6-fold) in the control ob/ob mice compared to normal mice. In contrast, these gene expression levels were restored to the normal level by the SFN treatment. The protein-protein interaction (PPI) network showed that chemokine ligands (Cxcl14, Ccl1, Ccl3, Ccl4, Ccl17) and chemokine receptors (Ccr3, Cxcr1, Ccr10) were located in close proximity and formed a "functional cluster" in the middle of the network. Conclusions: The overall results suggest that SFN has a potent anti-inflammatory effect by normalizing the expression levels of the genes related to inflammation that were perturbed in ob/ob mice.
null
null
11,135
321
[ 233, 294, 90, 194, 154, 152, 116, 375, 75, 218, 240, 141, 180, 143, 362 ]
19
[ "sfn", "ob", "cells", "grn", "lps", "mice", "stimulated", "ob ob", "control", "raw" ]
[ "inflammatory cytokines tnf", "inhibiting nf κb", "factor liver inflammation", "macrophages activation nf", "obesity related inflammation" ]
null
null
null
[CONTENT] Sulforaphane | anti-inflammatory activity | RNA sequencing analysis | differential gene expression | ob/ob mice [SUMMARY]
null
[CONTENT] Sulforaphane | anti-inflammatory activity | RNA sequencing analysis | differential gene expression | ob/ob mice [SUMMARY]
null
[CONTENT] Sulforaphane | anti-inflammatory activity | RNA sequencing analysis | differential gene expression | ob/ob mice [SUMMARY]
null
[CONTENT] Animals | Anti-Inflammatory Agents | Gene Expression | Isothiocyanates | Lipopolysaccharides | Male | Mice | RAW 264.7 Cells | Random Allocation | Sulfoxides [SUMMARY]
null
[CONTENT] Animals | Anti-Inflammatory Agents | Gene Expression | Isothiocyanates | Lipopolysaccharides | Male | Mice | RAW 264.7 Cells | Random Allocation | Sulfoxides [SUMMARY]
null
[CONTENT] Animals | Anti-Inflammatory Agents | Gene Expression | Isothiocyanates | Lipopolysaccharides | Male | Mice | RAW 264.7 Cells | Random Allocation | Sulfoxides [SUMMARY]
null
[CONTENT] inflammatory cytokines tnf | inhibiting nf κb | factor liver inflammation | macrophages activation nf | obesity related inflammation [SUMMARY]
null
[CONTENT] inflammatory cytokines tnf | inhibiting nf κb | factor liver inflammation | macrophages activation nf | obesity related inflammation [SUMMARY]
null
[CONTENT] inflammatory cytokines tnf | inhibiting nf κb | factor liver inflammation | macrophages activation nf | obesity related inflammation [SUMMARY]
null
[CONTENT] sfn | ob | cells | grn | lps | mice | stimulated | ob ob | control | raw [SUMMARY]
null
[CONTENT] sfn | ob | cells | grn | lps | mice | stimulated | ob ob | control | raw [SUMMARY]
null
[CONTENT] sfn | ob | cells | grn | lps | mice | stimulated | ob ob | control | raw [SUMMARY]
null
[CONTENT] inflammatory | ob | inflammation | anti | anti inflammatory | mice | sfn | ob ob | obesity | related [SUMMARY]
null
[CONTENT] sfn | ob | lps | cells | compared | grn | production | compared control | lps stimulated | sfn sulforaphane [SUMMARY]
null
[CONTENT] sfn | ob | cells | lps | grn | mice | ob ob | stimulated | il | raw [SUMMARY]
null
[CONTENT] SFN ||| SFN [SUMMARY]
null
[CONTENT] SFN | COX-2 | TNF | IL-1β | 264.7 ||| RNA | 28 | 2-fold | six | 0.6-fold ||| SFN ||| Ccl1 | Ccl3 | Ccl4 | Ccr3 [SUMMARY]
null
[CONTENT] SFN ||| SFN ||| Griess ||| COX-2 ||| IL]-1β | ELISA ||| RNA ||| SFN | COX-2 | TNF | IL-1β | 264.7 ||| RNA | 28 | 2-fold | six | 0.6-fold ||| SFN ||| Ccl1 | Ccl3 | Ccl4 | Ccr3 ||| SFN [SUMMARY]
null
Stem cell injection for complex anal fistula in Crohn's disease: A single-center experience.
34239275
Despite tremendous progress in medical therapy and optimization of surgical strategies, considerable failure rates after surgery for complex anal fistula in Crohn's disease have been reported. Therefore, stem cell therapy for the treatment of complex perianal fistula can be an innovative option with potential long-term healing.
BACKGROUND
All patients with complex anal fistulas associated with Crohn's disease who were amenable for definite fistula closure within a defined observation period were potential candidates for stem cell injection (darvadstrocel) if at least one conventional or surgical attempt to close the fistula had failed. Darvadstrocel was only indicated in patients without active Crohn's disease and without presence of anorectal abscess. Local injection of darvadstrocel was performed as a standardized procedure under general anesthesia including single-shot antibiotic prophylaxis, removal of seton drainage, fistula curettage, closure of the internal openings and local stem cell injection. Data collection focusing on healing rates, occurrence of abscess and follow-up was performed on a regular basis of quality control and patient care. Data were retrospectively analyzed.
METHODS
Between July 2018 and January 2021, 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of the 12 patients had horse-shoe fistula and 3 had one complex fistula. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). All patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. At a mean follow-up of 14.3 (range: 3-30) mo, a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 (range: 6-30) wk. Within follow-up, 4 patients required reoperation due to perianal abscess (33.3%). Focusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.
RESULTS
Data of this single-center experience are promising but limited due to the small number of patients and the retrospective analysis.
CONCLUSION
[ "Crohn Disease", "Female", "Humans", "Male", "Rectal Fistula", "Retrospective Studies", "Stem Cells", "Treatment Outcome" ]
8240051
INTRODUCTION
Both medical and surgical options for complex perianal fistulas associated with Crohn’s disease remain difficult and challenging[1-5]. Despite recent progress in medical treatment including biologicals and actual trends in sphincter-preserving surgical techniques, considerable recurrence rates after definite surgery for Crohn’s anal fistula have been documented[2,3,5]. As an interdisciplinary treatment regimen is the prerequisite for disease control and potential long-term remission of perianal fistulizing Crohn’s disease, therapeutic goals include symptom improvement (e.g., reduction of secretion, absence of pain), prevention of recurrent perianal abscess requiring further surgery, preservation of continence, improvement of quality of life and, finally, healing of the fistula. Focusing on surgical procedures for complex anal fistulas, conventional procedures such as advancement flap repair have shown considerable failure rates for Crohn’s fistulas, whereas innovative surgical options such as ligation of the intersphincteric fistula tract (LIFT), biomaterials (e.g., plug) or video-assisted anal fistula treatment seem to have a 50% healing rate after 12 mo without significant impairment of continence[6]. Recently, several reports and randomized studies have demonstrated that stem cell therapy for Crohn’s complex anal fistula has raised the healing rates after 12 mo[2,7,8]. Impressed by the encouraging healing rates reported from the ADMIRE trial[7,8], it was the aim of this retrospective single-center study to evaluate the current results of stem cell therapy for complex anal fistulas in Crohn’s disease with special focus on indication, patient selection and long-term outcome.
MATERIALS AND METHODS
Clinical setting: Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had been conditioned by curettage or seton drainage. All potential candidates for stem cell injection gave fully informed consent regarding the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team).
Study population: All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to the American Gastroenterological Association and Parks classifications[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to the Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1 (Inclusion and exclusion criteria for administration of darvadstrocel; MRI: magnetic resonance imaging).
Study design and outcome evaluation: To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, and outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not for primary study purpose but for quality assurance and patient care. To assess outcome, the criteria of healing included that both internal and external openings were closed, no abscess or fluid collection was present, and the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2 (Definitions of outcome).
Special consideration for the coronavirus disease 2019 pandemic: Due to the coronavirus disease pandemic at the beginning of 2020, the coronavirus disease 2019 (COVID-19) outbreak had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patients had elective surgery for Crohn’s-associated complex anal fistulas at our institution between March 2020 and July 2020 according to governmental restrictions (German Ministry of Health). Moreover, according to a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021. Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021.
Preoperative assessment: All patients amenable for stem cell injection were seen in the proctological office 2-4 wk prior to darvadstrocel treatment. Informed consent was given, including the efficacy data from the ADMIRE study, the innovation of the current technique, monitoring of side effects or adverse events, the surgical technique and follow-up observation. Proctological examination included clinical examination and proctorectoscopy to rule out abscess or proctitis in the immediate preoperative phase. No antibiotic treatment was administered; setons were confirmed to be correctly in place. No routine curettage of fistulas drained by setons was performed in the preoperative period, and no specific change in medical treatment was made.
Surgical technique: All surgical procedures were performed under general anesthesia. The patient was placed in the lithotomy position. A single-shot antibiotic prophylaxis was routinely administered (cefuroxime/metronidazole). Initially, a careful examination under anesthesia was performed to ensure that there was no presence of abscess or proctitis; moreover, assessment of fistula length, anatomy and topography according to the Parks classification[10] was performed. After removal of setons, a vigorous curettage of the fistula tracts was performed by using a metallic curette. Additionally, injection of a sodium chloride solution (0.9%) was administered after curettage. No excision of the fistula tracts was conducted; however, the external openings were sparingly excised. Afterwards, the internal openings were closed by either direct suturing (absorbable suture, Vicryl 3/0; Ethicon EndoSurgery, Norderstedt, Germany) or by mucosal flap (absorbable suture, Vicryl 3/0) if necessary. After preparation and resuspension of the stem cells and gentle aspiration by using a syringe and injection needles (22-G), darvadstrocel was injected according to the manufacturer’s recommendations: Two vials were injected around the internal openings (transanal approach), and the other two vials were injected along the fistula tracts, creating small deposits of the cell suspension along the fistulas from the external openings (perianal approach). After injection, a soft massage along the fistula region was performed; finally, a sterile bandage was placed around the anal region. Postoperatively, no specific restrictions related to feeding and mobility were present; after wound control on the first postoperative day, the patient was discharged from hospital. No specific wound management was proposed (only clear water twice a day). Maintenance therapy was given as planned prior to surgery. Neither intravenous nor oral antibiotic therapy was given in the postoperative period.
Follow-up: Regular follow-up examination was performed 2, 4 and 6 wk after surgery in the proctological office to obtain clinical data related to quality assurance and patient care. Moreover, follow-up examination was advised at 6, 12 and 24 mo after stem cell injection. Additionally, gastroenterological monitoring was advised to provide regular monitoring of Crohn’s disease. As clinical follow-up was primarily indicated for routine quality control and was not indicated for study purpose, regular follow-up did not include postoperative MRI as a routine examination.
null
null
CONCLUSION
Based on the current results, local stem cell injection could be a new “puzzle piece” for effective treatment of complex anal fistulas associated with Crohn’s disease.
[ "INTRODUCTION", "Clinical setting", "Study population", "Study design and outcome evaluation", "Special consideration for coronavirus disease 2019 pandemic", "Preoperative assessment", "Surgical technique", "Follow-up", "RESULTS", "Study population and patient characteristics", "Fistula characterization", "Surgery", "Outcome", "DISCUSSION", "CONCLUSION" ]
[ "Both medical and surgical options for complex perianal fistulas associated with Crohn’s disease remain difficult and challenging[1-5]. Despite recent progress in medical treatment including biologicals and actual trends in sphincter-preserving surgical techniques, considerable recurrence rates after definite surgery for Crohn’s anal fistula have been documented[2,3,5]. As an interdisciplinary treatment regimen is the prerequisite for disease control and potential long-term remission of perianal fistulizing Crohn’s disease, therapeutic goals include symptom improvement (e.g., reduction of secretion, absence of pain), prevention of recurrent perianal abscess requiring further surgery, preservation of continence, improvement of quality of life and, finally, healing of the fistula. \nFocusing on surgical procedures for complex anal fistulas, conventional procedures such as advancement flap repair have shown considerable failure rates for Crohn’s fistulas, whereas innovative surgical options such as ligation of the intersphincteric fistula tract (LIFT), biomaterials (e.g., plug) or video-assisted anal fistula treatment seem to have a 50% healing rate after 12 mo without significant impairment of continence[6]. Recently, several reports and randomized studies have demonstrated that stem cell therapy for Crohn’s complex anal fistula has raised the healing rates after 12 mo[2,7,8]. \nImpressed by the encouraging healing rates reported from the ADMIRE trial[7,8], it was the aim of this retrospective single-center study to evaluate the current results of stem cell therapy for complex anal fistulas in Crohn’s disease with special focus on indication, patient selection and long-term outcome.", "Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had conditioning by curettage or seton drainage. All potential candidates for stem cell injection had fully informed consent of the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). ", "All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to American Gastroenterological Association and Parks classification[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). 
According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1. \nInclusion and exclusion criteria for administration of darvadstrocel\nMRI: Magnetic resonance imaging.", "To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not for primary study purpose but for quality assurance and patient care. To assess outcome, criteria of healing included that both internal and external openings were closed, no abscess or fluid collection was present and if the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2.\nDefinitions of outcome", "Due to the coronavirus disease pandemic in the beginning of 2020, the coronavirus disease 2019 (COVID-19) outbreak had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patients had elective surgery for Crohn’s associated complex anal fistulas at our institution between March 2020 and July 2020 according to governmental restrictions (German Ministry of Health). Moreover, according to a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021. Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021.", "All patients amenable for stem cell injection were seen in the proctological office 2-4 wk prior to darvadstrocel treatment. Informed consent was given, including for efficacy from the ADMIRE study, innovation of the current technique, and monitoring of side effects or adverse events, surgical technique and follow-up observation. Proctological examination included clinical examination and proctorectoscopy to rule out abscess or proctitis in the immediate preoperative phase. No antibiotic treatment was administered; setons were controlled to be correct in place. No routine curettage of fistulas drained by setons was performed in the preoperative period, and no specific change for medical treatment was performed.", "All surgical procedures were performed under general anesthesia. The patient was placed in lithotomy position. A single-shot antibiotic prophylaxis was routinely administered (cefuroxime/metronidazole). Initially, a careful examination under anesthesia was performed to ensure that there was no presence of abscess or proctitis; moreover, assessment of fistula length, anatomy and topography according to Parks classification[10] was performed. 
After removal of setons, a vigorous curettage of the fistula tracts was performed by using a metallic curette. Additionally, injection of a sodium chloride solution (0.9%) was administered after curettage. No excision of the fistula tracts were conducted; however, the external openings were sparingly excised. Afterwards, the internal openings were closed by either direct suturing (absorbable suture, Vicryl 3/0; Ethicon EndoSurgery, Norderstedt, Germany) or by mucosal flap (absorbable suture, Vicryl 3/0) if necessary.\nAfter preparing and resuspension of the stem cells and gentle aspiration by using a syringe and injection needles (22-G), darvastrocel was injected according to the manufacturer`s recommendations: Two vials were injected around the internal openings (transanal approach), and the other two vials were injected along the fistula tracts creating small deposits of the cell suspension along the fistulas from external openings (perianal approach). After injection, a soft massage along the fistula region was performed; finally, a sterile bandage was placed around the anal region.\nPostoperatively, no specific restrictions related to feeding and mobility were present; after wound control on the first postoperative day, the patient was discharged from hospital. No specific wound management was proposed (only clear water twice a day). Maintenance therapy was given as planned prior to surgery. Neither intravenous nor oral antibiotic therapy was given in the postoperative period.", "Regular follow-up examination was performed 2, 4 and 6 wk after surgery in the proctological office, to obtain clinical data related to quality assurance and patient care. Moreover, follow-up examination was advised at 6, 12 and 24 mo after stem cell injection. Additionally, gastroenterological monitoring was advised to provide regular monitoring of Crohn’s disease. As clinical follow-up was primarily indicated for routine quality control and was not indicated for study purpose, regular follow-up did not include postoperative MRI as routine examination.", "Study population and patient characteristics Between July 2018 and January 2021, a total of 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy for complex anal fistula associated with Crohn’s disease. All patients had medical therapy (infliximab, adalimumab, vedolizumab or azathioprine), and complex fistulas did not respond to either medical or surgical treatment prior to stem cell therapy. Mean duration of Crohn’s disease was 19.3 years (range: 7-36 years), and median duration of perianal fistulizing Crohn’s disease was 8.7 years (range: 1-16 years). All fistulas had at least one conventional surgical approach to close the fistula (except seton drainage) prior to stem cell therapy; mean number of surgical attempts to close the fistula was 2.0 (range: 1-4). Details on study population and patients’ characteristics are detailed in Table 3. \nPatient population\nCD: Crohn’s disease; CPF: Crohn’s perianal fistula.\nBetween July 2018 and January 2021, a total of 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy for complex anal fistula associated with Crohn’s disease. All patients had medical therapy (infliximab, adalimumab, vedolizumab or azathioprine), and complex fistulas did not respond to either medical or surgical treatment prior to stem cell therapy. 
Mean duration of Crohn’s disease was 19.3 years (range: 7-36 years), and median duration of perianal fistulizing Crohn’s disease was 8.7 years (range: 1-16 years). All fistulas had at least one conventional surgical approach to close the fistula (except seton drainage) prior to stem cell therapy; mean number of surgical attempts to close the fistula was 2.0 (range: 1-4). Details on study population and patients’ characteristics are detailed in Table 3. \nPatient population\nCD: Crohn’s disease; CPF: Crohn’s perianal fistula.\nFistula characterization All patients underwent fistula conditioning by seton drainage for a mean duration of 31.5 (range: 6-72) wk, and 4 patients had undergone fecal diversion due to severe perianal sepsis. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of 12 patients had horse-shoe fistula, and 3 of 12 had one complex fistula. In total, 21 fistula tracts were documented in 12 patients. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). \nAll patients underwent fistula conditioning by seton drainage for a mean duration of 31.5 (range: 6-72) wk, and 4 patients had undergone fecal diversion due to severe perianal sepsis. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of 12 patients had horse-shoe fistula, and 3 of 12 had one complex fistula. In total, 21 fistula tracts were documented in 12 patients. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). \nSurgery All patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. No intraoperative complications or side effects were noted. Postoperative morbidity included 1 patient with fewer (without signs of perianal sepsis) on the second postoperative day. No serious adverse events were documented. \nAll patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. No intraoperative complications or side effects were noted. Postoperative morbidity included 1 patient with fewer (without signs of perianal sepsis) on the second postoperative day. No serious adverse events were documented. \nOutcome Details on efficacy evaluation and outcome are outlined in Table 4. At a mean follow-up of 14.3 mo (range: 3-30 mo), a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 wk (range: 6-30 wk). Four patients had either fistula persistence (n = 2, 16.7%) or fistula recurrence (n = 2; 16.7%). Within follow-up, 4 patients developed perianal abscess (33.3%) and required reoperation. From 4 patients with ileostomy, one stoma reversal was performed 2 years after fistula healing. \nEfficacy evaluation and outcomes\nMRI: Magnetic resonance imaging.\nFocusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.\nDetails on efficacy evaluation and outcome are outlined in Table 4. At a mean follow-up of 14.3 mo (range: 3-30 mo), a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 wk (range: 6-30 wk). Four patients had either fistula persistence (n = 2, 16.7%) or fistula recurrence (n = 2; 16.7%). 
Within follow-up, 4 patients developed perianal abscess (33.3%) and required reoperation. From 4 patients with ileostomy, one stoma reversal was performed 2 years after fistula healing. \nEfficacy evaluation and outcomes\nMRI: Magnetic resonance imaging.\nFocusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.", "Between July 2018 and January 2021, a total of 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy for complex anal fistula associated with Crohn’s disease. All patients had medical therapy (infliximab, adalimumab, vedolizumab or azathioprine), and complex fistulas did not respond to either medical or surgical treatment prior to stem cell therapy. Mean duration of Crohn’s disease was 19.3 years (range: 7-36 years), and median duration of perianal fistulizing Crohn’s disease was 8.7 years (range: 1-16 years). All fistulas had at least one conventional surgical approach to close the fistula (except seton drainage) prior to stem cell therapy; mean number of surgical attempts to close the fistula was 2.0 (range: 1-4). Details on study population and patients’ characteristics are detailed in Table 3. \nPatient population\nCD: Crohn’s disease; CPF: Crohn’s perianal fistula.", "All patients underwent fistula conditioning by seton drainage for a mean duration of 31.5 (range: 6-72) wk, and 4 patients had undergone fecal diversion due to severe perianal sepsis. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of 12 patients had horse-shoe fistula, and 3 of 12 had one complex fistula. In total, 21 fistula tracts were documented in 12 patients. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). ", "All patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. No intraoperative complications or side effects were noted. Postoperative morbidity included 1 patient with fewer (without signs of perianal sepsis) on the second postoperative day. No serious adverse events were documented. ", "Details on efficacy evaluation and outcome are outlined in Table 4. At a mean follow-up of 14.3 mo (range: 3-30 mo), a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 wk (range: 6-30 wk). Four patients had either fistula persistence (n = 2, 16.7%) or fistula recurrence (n = 2; 16.7%). Within follow-up, 4 patients developed perianal abscess (33.3%) and required reoperation. From 4 patients with ileostomy, one stoma reversal was performed 2 years after fistula healing. \nEfficacy evaluation and outcomes\nMRI: Magnetic resonance imaging.\nFocusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.", "Despite significant progress in medical treatment, including biologicals and cell-based therapies, development of sphincter-saving surgical techniques and interdisciplinary co-working, failure rates after definite surgery for perianal fistulizing Crohn’s disease are still high[3,11-13]. Moreover, in patients with severe and refractory perianal disease, the decision to perform fecal diversion or even proctectomy has a tremendous impact on quality of life, particularly in younger patients. 
Impressed by the preliminary results of limited single-center studies and the encouraging data of the ADMIRE study[7,8], this was a retrospective single-center study analyzing routine clinical data on the application of allogenic, adipose-derived mesenchymal stem cells (darvadstrocel) for complex perianal fistula associated with Crohn’s disease, providing structured inclusion and exclusion criteria in 12 patients.\nIn general, the optimal surgical treatment of complex anal fistulas in patients with Crohn’s disease remains challenging. As the majority of patients need surgery for perianal fistulizing Crohn’s disease and a high proportion of patients need further surgery due to abscess and recurrent fistula, the integrity of the anal sphincter is essential for preservation of continence[3]. Therefore, the risk of deterioration of continence status increases with the number of surgical procedures and the invasiveness of procedures performed. For complex anal fistulas according to American Gastroenterological Association and Parks classification[9,10], transrectal flap procedures (advancement or mucosal flap repair) and the LIFT procedure can be considered as effective surgical options in stable disease and absence of proctitis and/or anorectal stenosis with acceptable healing rates[11-18]; however, fecal incontinence and soiling is reported in approximately 10% of patients after endorectal flap procedures[6,11,18]. Focusing on the LIFT procedure, the risk of incontinence is low, but about half of the patients need additional surgery due to recurrence[14-16]. The main “disadvantage” or limitation of both procedures are high complex fistulas with suprasphincteric course or multi-tract fistulas with two internal and external openings (“multiple branching tracts”). In these patients, long-term seton drainage is the favored option; alternatively, proctectomy can be considered. Based on this “surgical dilemma”, the novel therapeutic approach of local stem cell therapy seems to be an alternative for highly selected patients with multi-tract, complex fistulas. \nIn the meantime, a variety of limited studies demonstrated that local injection of mesenchymal stem cells can induce long-term fistula healing without the risk of incontinence and without serious adverse advents related to the mesenchymal stem cells themselves[19-22]. Following the results of the multicenter, phase III randomized control trial (ADMIRE), the application of allogenic, adipose-derived, mesenchymal stem cells combined with transanal closure of the internal opening and fistula curettage, a 50% healing rate of stem cell therapy compared with a 34% success in the placebo group was documented after 24 wk, and healing rates were sustained to 52 wk[7,8]. \nImpressed by these results and personally searching for the “best option” in patients with multi-branching or two-tract complex fistula, it was the aim of the current single-center experience to evaluate the outcome of local injection of darvadstrocel in a highly selected group of patients. Following the current single-center experience with stem cell therapy, strict inclusion and exclusion criteria were defined. In accordance with the inclusion criteria of the ADMIRE study[7,8], only patients with complex fistulas associated with Crohn’s disease–in the majority two complex fistula, suprasphincteric fistula and horse-shoe-fistula, without active luminal Crohn’s disease (stable disease under medical treatment) and no evidence of perianal sepsis were included. 
In contrast, simple fistulas or rectovaginal fistulas were excluded for stem cell therapy. Moreover, in the current study, patients with single-tract intersphincteric or transsphincteric fistulas as well as relatively short single-tract fistulas (fistula length less than 3 cm) were not candidates for stem cell therapy; in these patients, advancement flap repair, mucosal flap repair or LIFT procedure was the preferred option. Within the observation period, a total of 28 patients underwent flap or LIFT procedure for complex fistula associated with Crohn’s disease (data not shown).\nAll patients had drainage of fistulas by seton for a minimum of 6 wk. In contrast to the ADMIRE study[7,8], 4 patients who underwent fecal diversion due to perianal sepsis related to complex fistulas years before admission to our center were included in the current study after interdisciplinary discussion, as the only alternative surgical treatment would have been a proctectomy in these patients, and patients refused this option. Finally, all 12 patients had at least one attempt to surgically close the fistula with no success; thus, all fistulas treated with stem cell therapy were recurrent complex fistula.\nFocusing on surgical technique and administration of darvadstrocel, no difficulties or intraoperative morbidity were documented. One patient had fewer on the second postoperative day without any signs of perianal sepsis. No serious adverse events in the immediate postoperative course were noted. \nAfter a mean follow-up of 14.3 mo (range: 3-30 mo), healing rate was 66.7% (8 of 12 patients). Healing was based on strict criteria and was assessed by clinical examination and proctoscopy in the follow-up period. In terms of postoperative MRI (5/12 patients), 1 female patient with clinical evidence of fistula healing also had radiological healing as documented by MRI, whereas 3 patients had radiological recurrence (2) or persistence (1). One patient with clinical healing had radiologic persistence of fistula without signs of contrast enhancement. Focusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.\nThe occurrence of perianal abscess during follow-up was relatively frequent in the current collective. Four patients (33.3%) developed perianal abscess in the postoperative course, and surgery (abscess excision) was required (1 patient had abscess formation twice). Interestingly, the occurrence of perianal abscess was late (3, 4, 7, 9 and 22 mo after darvadstrocel administration), and abscess localization was near to one former external opening. The incidence of abscess was related to recurrence in 1 patient (supralevatoric abscess with proctosigmoiditis), occurred with exacerbation of systemic disease (subcutaneous abscess with active inflammation of ileocolic region and proctitis), and was associated with fistula persistence in 2 patients. As a consequence of the relatively high incidence of perianal abscess following stem cell therapy in the current series, a more wide or radical excision of the external opening should be recommended in future procedures to prevent perianal abscess. Moreover, patients with the occurrence of abscess could not be defined “in remission”; however, there was no change in medical treatment. 
Therefore, the differentiation of abscess as a local problem following fistula surgery or as a problem of systemic disease with direct implications for medical treatment should be more in focus between gastroenterologists and surgeons.\nAnalyzing the further course of patients with long-term healing in terms of medical therapy, 1 patient had ceased maintenance therapy after recommendation of the gastroenterologist. Focusing on patients who had fecal diversion in the past, one female patient underwent stoma reversal 2 years after stem cell injection. In the other 3 patients, follow-up is still too short, as stoma reversal should be advised not prior to a minimum period of 6 mo after definite fistula surgery (1 patient), and the other two patients had recurrence.\nSpecifically addressing the observation period related to the COVID-19 pandemic, stem cell therapy was not performed within March 2020 and July 2020 and within a second period starting in November 2020 due to general restrictions concerning elective operations in Germany. Based on governmental restrictions and in-hospital limitations (e.g., reduced capacity in the surgical theater) as well as logistic reasons (e.g., transportation of stem cells under specific conditions) patients with stable disease and without signs of active perianal disease were postponed. This was in accordance with other European and United States experiences related to the management of patients with inflammatory bowel disease[23,24]. \nThe current results clearly demonstrate that an innovative sphincter-sparing surgical approach including fistula curettage, transanal closure of internal openings and local injection of darvadstrocel leads to promising long-term healing rates in patients with no “effective” alternative, such as flap repair or LIFT procedure. However, fundamental limitations of this single-center experience are the small number of patients, the retrospective design, and the absence of a control group. Therefore, the definite role of local application of mesenchymal stem cells has to be discussed within multidisciplinary round tables or consensus conferences to have clear position statements[25-28]. Actually, we are still in the episode of a “learning curve” in terms of patient selection, ideal surgical technique, interdisciplinary treatment discussion, role of maintenance therapy and evaluation of outcomes, among others. Therefore, generally accepted indications and pathways of stem cell therapy for complex fistulas in Crohn’s disease cannot be derived; however, the current results should be a plea for further standardization and interdisciplinary consensus.", "These single-center data demonstrate that local injection of adipose-derived mesenchymal stem cells (darvadstrocel) is safe and effective in patients suffering from perianal fistulizing Crohn’s disease. Providing long-term healing in 66.7% of patients, stem cell therapy seems to be an innovative and promising surgical therapy for complex anal fistula associated with Crohn’s disease. However, this single-center experience is limited due to the small number of patients and the retrospective assessment of routine clinical data related to quality control. As a technical consequence of a high incidence of perianal abscess during follow-up, a wide excision of the external opening should be recommended for future procedures. 
Finally, further interdisciplinary efforts including controlled studies are necessary to evaluate the definite role of stem cell therapy for complex anal fistula in Crohn’s disease." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Clinical setting", "Study population", "Study design and outcome evaluation", "Special consideration for coronavirus disease 2019 pandemic", "Preoperative assessment", "Surgical technique", "Follow-up", "RESULTS", "Study population and patient characteristics", "Fistula characterization", "Surgery", "Outcome", "DISCUSSION", "CONCLUSION" ]
[ "Both medical and surgical options for complex perianal fistulas associated with Crohn’s disease remain difficult and challenging[1-5]. Despite recent progress in medical treatment including biologicals and actual trends in sphincter-preserving surgical techniques, considerable recurrence rates after definite surgery for Crohn’s anal fistula have been documented[2,3,5]. As an interdisciplinary treatment regimen is the prerequisite for disease control and potential long-term remission of perianal fistulizing Crohn’s disease, therapeutic goals include symptom improvement (e.g., reduction of secretion, absence of pain), prevention of recurrent perianal abscess requiring further surgery, preservation of continence, improvement of quality of life and, finally, healing of the fistula. \nFocusing on surgical procedures for complex anal fistulas, conventional procedures such as advancement flap repair have shown considerable failure rates for Crohn’s fistulas, whereas innovative surgical options such as ligation of the intersphincteric fistula tract (LIFT), biomaterials (e.g., plug) or video-assisted anal fistula treatment seem to have a 50% healing rate after 12 mo without significant impairment of continence[6]. Recently, several reports and randomized studies have demonstrated that stem cell therapy for Crohn’s complex anal fistula has raised the healing rates after 12 mo[2,7,8]. \nImpressed by the encouraging healing rates reported from the ADMIRE trial[7,8], it was the aim of this retrospective single-center study to evaluate the current results of stem cell therapy for complex anal fistulas in Crohn’s disease with special focus on indication, patient selection and long-term outcome.", "Clinical setting Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had conditioning by curettage or seton drainage. All potential candidates for stem cell injection had fully informed consent of the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). \nStem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). 
Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had conditioning by curettage or seton drainage. All potential candidates for stem cell injection had fully informed consent of the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). \nStudy population All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to American Gastroenterological Association and Parks classification[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1. \nInclusion and exclusion criteria for administration of darvadstrocel\nMRI: Magnetic resonance imaging.\nAll patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to American Gastroenterological Association and Parks classification[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1. \nInclusion and exclusion criteria for administration of darvadstrocel\nMRI: Magnetic resonance imaging.\nStudy design and outcome evaluation To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. 
In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not for primary study purpose but for quality assurance and patient care. To assess outcome, criteria of healing included that both internal and external openings were closed, no abscess or fluid collection was present and the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2.\nDefinitions of outcome\nSpecial consideration for coronavirus disease 2019 pandemic Due to the coronavirus disease pandemic in the beginning of 2020, the coronavirus disease 2019 (COVID-19) outbreak had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patients had elective surgery for Crohn’s associated complex anal fistulas at our institution between March 2020 and July 2020 according to governmental restrictions (German Ministry of Health). Moreover, according to a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021. Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021.\nPreoperative assessment All patients amenable for stem cell injection were seen in the proctological office 2-4 wk prior to darvadstrocel treatment. Informed consent was obtained, covering the efficacy data from the ADMIRE study, the innovative nature of the technique, monitoring of side effects or adverse events, the surgical technique and follow-up observation. Proctological examination included clinical examination and proctorectoscopy to rule out abscess or proctitis in the immediate preoperative phase. No antibiotic treatment was administered; setons were checked to be correctly in place. No routine curettage of fistulas drained by setons was performed in the preoperative period, and no specific change for medical treatment was performed.\nSurgical technique All surgical procedures were performed under general anesthesia. The patient was placed in lithotomy position. A single-shot antibiotic prophylaxis was routinely administered (cefuroxime/metronidazole). Initially, a careful examination under anesthesia was performed to ensure that there was no presence of abscess or proctitis; moreover, assessment of fistula length, anatomy and topography according to Parks classification[10] was performed. After removal of setons, a vigorous curettage of the fistula tracts was performed by using a metallic curette. Additionally, injection of a sodium chloride solution (0.9%) was administered after curettage. No excision of the fistula tracts was conducted; however, the external openings were sparingly excised. Afterwards, the internal openings were closed by either direct suturing (absorbable suture, Vicryl 3/0; Ethicon EndoSurgery, Norderstedt, Germany) or by mucosal flap (absorbable suture, Vicryl 3/0) if necessary.\nAfter preparing and resuspension of the stem cells and gentle aspiration by using a syringe and injection needles (22-G), darvadstrocel was injected according to the manufacturer’s recommendations: Two vials were injected around the internal openings (transanal approach), and the other two vials were injected along the fistula tracts creating small deposits of the cell suspension along the fistulas from external openings (perianal approach). After injection, a soft massage along the fistula region was performed; finally, a sterile bandage was placed around the anal region.\nPostoperatively, no specific restrictions related to feeding and mobility were present; after wound control on the first postoperative day, the patient was discharged from hospital. No specific wound management was proposed (only clear water twice a day). Maintenance therapy was given as planned prior to surgery. Neither intravenous nor oral antibiotic therapy was given in the postoperative period.\nFollow-up Regular follow-up examination was performed 2, 4 and 6 wk after surgery in the proctological office, to obtain clinical data related to quality assurance and patient care. Moreover, follow-up examination was advised at 6, 12 and 24 mo after stem cell injection. Additionally, gastroenterological monitoring was advised to provide regular monitoring of Crohn’s disease. As clinical follow-up was primarily indicated for routine quality control and was not indicated for study purpose, regular follow-up did not include postoperative MRI as routine examination.", "Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). 
Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had conditioning by curettage or seton drainage. All potential candidates for stem cell injection had fully informed consent of the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). ", "All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to American Gastroenterological Association and Parks classification[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1. \nInclusion and exclusion criteria for administration of darvadstrocel\nMRI: Magnetic resonance imaging.", "To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not for primary study purpose but for quality assurance and patient care. To assess outcome, criteria of healing included that both internal and external openings were closed, no abscess or fluid collection was present and if the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2.\nDefinitions of outcome", "Due to the coronavirus disease pandemic in the beginning of 2020, the coronavirus disease 2019 (COVID-19) outbreak had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patients had elective surgery for Crohn’s associated complex anal fistulas at our institution between March 2020 and July 2020 according to governmental restrictions (German Ministry of Health). 
Moreover, according to a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021. Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021.", "All patients amenable for stem cell injection were seen in the proctological office 2-4 wk prior to darvadstrocel treatment. Informed consent was obtained, covering the efficacy data from the ADMIRE study, the innovative nature of the technique, monitoring of side effects or adverse events, the surgical technique and follow-up observation. Proctological examination included clinical examination and proctorectoscopy to rule out abscess or proctitis in the immediate preoperative phase. No antibiotic treatment was administered; setons were checked to be correctly in place. No routine curettage of fistulas drained by setons was performed in the preoperative period, and no specific change for medical treatment was performed.", "All surgical procedures were performed under general anesthesia. The patient was placed in lithotomy position. A single-shot antibiotic prophylaxis was routinely administered (cefuroxime/metronidazole). Initially, a careful examination under anesthesia was performed to ensure that there was no presence of abscess or proctitis; moreover, assessment of fistula length, anatomy and topography according to Parks classification[10] was performed. After removal of setons, a vigorous curettage of the fistula tracts was performed by using a metallic curette. Additionally, injection of a sodium chloride solution (0.9%) was administered after curettage. No excision of the fistula tracts was conducted; however, the external openings were sparingly excised. Afterwards, the internal openings were closed by either direct suturing (absorbable suture, Vicryl 3/0; Ethicon EndoSurgery, Norderstedt, Germany) or by mucosal flap (absorbable suture, Vicryl 3/0) if necessary.\nAfter preparing and resuspension of the stem cells and gentle aspiration by using a syringe and injection needles (22-G), darvadstrocel was injected according to the manufacturer’s recommendations: Two vials were injected around the internal openings (transanal approach), and the other two vials were injected along the fistula tracts creating small deposits of the cell suspension along the fistulas from external openings (perianal approach). After injection, a soft massage along the fistula region was performed; finally, a sterile bandage was placed around the anal region.\nPostoperatively, no specific restrictions related to feeding and mobility were present; after wound control on the first postoperative day, the patient was discharged from hospital. No specific wound management was proposed (only clear water twice a day). Maintenance therapy was given as planned prior to surgery. Neither intravenous nor oral antibiotic therapy was given in the postoperative period.", "Regular follow-up examination was performed 2, 4 and 6 wk after surgery in the proctological office, to obtain clinical data related to quality assurance and patient care. Moreover, follow-up examination was advised at 6, 12 and 24 mo after stem cell injection. Additionally, gastroenterological monitoring was advised to provide regular monitoring of Crohn’s disease. 
As clinical follow-up was primarily indicated for routine quality control and was not indicated for study purpose, regular follow-up did not include postoperative MRI as routine examination.", "Study population and patient characteristics Between July 2018 and January 2021, a total of 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy for complex anal fistula associated with Crohn’s disease. All patients had medical therapy (infliximab, adalimumab, vedolizumab or azathioprine), and complex fistulas did not respond to either medical or surgical treatment prior to stem cell therapy. Mean duration of Crohn’s disease was 19.3 years (range: 7-36 years), and median duration of perianal fistulizing Crohn’s disease was 8.7 years (range: 1-16 years). All fistulas had at least one conventional surgical approach to close the fistula (except seton drainage) prior to stem cell therapy; mean number of surgical attempts to close the fistula was 2.0 (range: 1-4). Details on study population and patients’ characteristics are presented in Table 3. \nPatient population\nCD: Crohn’s disease; CPF: Crohn’s perianal fistula.\nFistula characterization All patients underwent fistula conditioning by seton drainage for a mean duration of 31.5 (range: 6-72) wk, and 4 patients had undergone fecal diversion due to severe perianal sepsis. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of 12 patients had horse-shoe fistula, and 3 of 12 had one complex fistula. In total, 21 fistula tracts were documented in 12 patients. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). \nSurgery All patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. No intraoperative complications or side effects were noted. Postoperative morbidity included 1 patient with fever (without signs of perianal sepsis) on the second postoperative day. No serious adverse events were documented. \nOutcome Details on efficacy evaluation and outcome are outlined in Table 4. At a mean follow-up of 14.3 mo (range: 3-30 mo), a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 wk (range: 6-30 wk). Four patients had either fistula persistence (n = 2, 16.7%) or fistula recurrence (n = 2; 16.7%). Within follow-up, 4 patients developed perianal abscess (33.3%) and required reoperation. From 4 patients with ileostomy, one stoma reversal was performed 2 years after fistula healing. \nEfficacy evaluation and outcomes\nMRI: Magnetic resonance imaging.\nFocusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.", "Between July 2018 and January 2021, a total of 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy for complex anal fistula associated with Crohn’s disease. All patients had medical therapy (infliximab, adalimumab, vedolizumab or azathioprine), and complex fistulas did not respond to either medical or surgical treatment prior to stem cell therapy. Mean duration of Crohn’s disease was 19.3 years (range: 7-36 years), and median duration of perianal fistulizing Crohn’s disease was 8.7 years (range: 1-16 years). All fistulas had at least one conventional surgical approach to close the fistula (except seton drainage) prior to stem cell therapy; mean number of surgical attempts to close the fistula was 2.0 (range: 1-4). Details on study population and patients’ characteristics are presented in Table 3. \nPatient population\nCD: Crohn’s disease; CPF: Crohn’s perianal fistula.", "All patients underwent fistula conditioning by seton drainage for a mean duration of 31.5 (range: 6-72) wk, and 4 patients had undergone fecal diversion due to severe perianal sepsis. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of 12 patients had horse-shoe fistula, and 3 of 12 had one complex fistula. In total, 21 fistula tracts were documented in 12 patients. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). ", "All patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. No intraoperative complications or side effects were noted. Postoperative morbidity included 1 patient with fever (without signs of perianal sepsis) on the second postoperative day. No serious adverse events were documented. ", "Details on efficacy evaluation and outcome are outlined in Table 4. At a mean follow-up of 14.3 mo (range: 3-30 mo), a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 wk (range: 6-30 wk). Four patients had either fistula persistence (n = 2, 16.7%) or fistula recurrence (n = 2; 16.7%). Within follow-up, 4 patients developed perianal abscess (33.3%) and required reoperation. From 4 patients with ileostomy, one stoma reversal was performed 2 years after fistula healing. \nEfficacy evaluation and outcomes\nMRI: Magnetic resonance imaging.\nFocusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.", "Despite significant progress in medical treatment, including biologicals and cell-based therapies, development of sphincter-saving surgical techniques and interdisciplinary co-working, failure rates after definite surgery for perianal fistulizing Crohn’s disease are still high[3,11-13]. Moreover, in patients with severe and refractory perianal disease, the decision to perform fecal diversion or even proctectomy has a tremendous impact on quality of life, particularly in younger patients. Impressed by the preliminary results of limited single-center studies and the encouraging data of the ADMIRE study[7,8], this was a retrospective single-center study analyzing routine clinical data on the application of allogenic, adipose-derived mesenchymal stem cells (darvadstrocel) for complex perianal fistula associated with Crohn’s disease, providing structured inclusion and exclusion criteria in 12 patients.\nIn general, the optimal surgical treatment of complex anal fistulas in patients with Crohn’s disease remains challenging. As the majority of patients need surgery for perianal fistulizing Crohn’s disease and a high proportion of patients need further surgery due to abscess and recurrent fistula, the integrity of the anal sphincter is essential for preservation of continence[3]. Therefore, the risk of deterioration of continence status increases with the number of surgical procedures and the invasiveness of procedures performed. For complex anal fistulas according to American Gastroenterological Association and Parks classification[9,10], transrectal flap procedures (advancement or mucosal flap repair) and the LIFT procedure can be considered as effective surgical options in stable disease and absence of proctitis and/or anorectal stenosis with acceptable healing rates[11-18]; however, fecal incontinence and soiling are reported in approximately 10% of patients after endorectal flap procedures[6,11,18]. Focusing on the LIFT procedure, the risk of incontinence is low, but about half of the patients need additional surgery due to recurrence[14-16]. The main “disadvantage” or limitation of both procedures concerns highly complex fistulas with suprasphincteric course or multi-tract fistulas with two internal and external openings (“multiple branching tracts”). In these patients, long-term seton drainage is the favored option; alternatively, proctectomy can be considered. 
Based on this “surgical dilemma”, the novel therapeutic approach of local stem cell therapy seems to be an alternative for highly selected patients with multi-tract, complex fistulas. \nIn the meantime, a variety of limited studies demonstrated that local injection of mesenchymal stem cells can induce long-term fistula healing without the risk of incontinence and without serious adverse events related to the mesenchymal stem cells themselves[19-22]. Following the results of the multicenter, phase III randomized controlled trial (ADMIRE) of allogenic, adipose-derived mesenchymal stem cells combined with transanal closure of the internal opening and fistula curettage, a 50% healing rate for stem cell therapy compared with a 34% success in the placebo group was documented after 24 wk, and healing rates were sustained to 52 wk[7,8]. \nImpressed by these results and personally searching for the “best option” in patients with multi-branching or two-tract complex fistula, it was the aim of the current single-center experience to evaluate the outcome of local injection of darvadstrocel in a highly selected group of patients. Following the current single-center experience with stem cell therapy, strict inclusion and exclusion criteria were defined. In accordance with the inclusion criteria of the ADMIRE study[7,8], only patients with complex fistulas associated with Crohn’s disease (in the majority two complex fistulas, suprasphincteric fistulas and horse-shoe fistulas), without active luminal Crohn’s disease (stable disease under medical treatment) and with no evidence of perianal sepsis were included. In contrast, simple fistulas or rectovaginal fistulas were excluded from stem cell therapy. Moreover, in the current study, patients with single-tract intersphincteric or transsphincteric fistulas as well as relatively short single-tract fistulas (fistula length less than 3 cm) were not candidates for stem cell therapy; in these patients, advancement flap repair, mucosal flap repair or LIFT procedure was the preferred option. Within the observation period, a total of 28 patients underwent flap or LIFT procedure for complex fistula associated with Crohn’s disease (data not shown).\nAll patients had drainage of fistulas by seton for a minimum of 6 wk. In contrast to the ADMIRE study[7,8], 4 patients who underwent fecal diversion due to perianal sepsis related to complex fistulas years before admission to our center were included in the current study after interdisciplinary discussion, as the only alternative surgical treatment would have been a proctectomy in these patients, and patients refused this option. Finally, all 12 patients had at least one attempt to surgically close the fistula with no success; thus, all fistulas treated with stem cell therapy were recurrent complex fistulas.\nFocusing on surgical technique and administration of darvadstrocel, no difficulties or intraoperative morbidity were documented. One patient had fever on the second postoperative day without any signs of perianal sepsis. No serious adverse events in the immediate postoperative course were noted. \nAfter a mean follow-up of 14.3 mo (range: 3-30 mo), healing rate was 66.7% (8 of 12 patients). Healing was based on strict criteria and was assessed by clinical examination and proctoscopy in the follow-up period. 
In terms of postoperative MRI (5/12 patients), 1 female patient with clinical evidence of fistula healing also had radiological healing as documented by MRI, whereas 3 patients had radiological recurrence (2) or persistence (1). One patient with clinical healing had radiologic persistence of fistula without signs of contrast enhancement. Focusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.\nThe occurrence of perianal abscess during follow-up was relatively frequent in the current collective. Four patients (33.3%) developed perianal abscess in the postoperative course, and surgery (abscess excision) was required (1 patient had abscess formation twice). Interestingly, the occurrence of perianal abscess was late (3, 4, 7, 9 and 22 mo after darvadstrocel administration), and abscess localization was near one former external opening. The incidence of abscess was related to recurrence in 1 patient (supralevatoric abscess with proctosigmoiditis), occurred with exacerbation of systemic disease in 1 patient (subcutaneous abscess with active inflammation of ileocolic region and proctitis), and was associated with fistula persistence in 2 patients. As a consequence of the relatively high incidence of perianal abscess following stem cell therapy in the current series, a wider or more radical excision of the external opening should be recommended in future procedures to prevent perianal abscess. Moreover, patients with the occurrence of abscess could not be considered “in remission”; however, there was no change in medical treatment. Therefore, the differentiation of abscess as a local problem following fistula surgery or as a problem of systemic disease with direct implications for medical treatment should be more in the focus of discussions between gastroenterologists and surgeons.\nAnalyzing the further course of patients with long-term healing in terms of medical therapy, 1 patient had ceased maintenance therapy after recommendation of the gastroenterologist. Focusing on patients who had fecal diversion in the past, one female patient underwent stoma reversal 2 years after stem cell injection. In the other 3 patients, stoma reversal has not yet been possible: in 1 patient, follow-up is still too short, as stoma reversal should not be advised prior to a minimum period of 6 mo after definite fistula surgery, and the other two patients had recurrence.\nSpecifically addressing the observation period related to the COVID-19 pandemic, stem cell therapy was not performed between March 2020 and July 2020 and during a second period starting in November 2020 due to general restrictions concerning elective operations in Germany. Based on governmental restrictions and in-hospital limitations (e.g., reduced capacity in the surgical theater) as well as logistic reasons (e.g., transportation of stem cells under specific conditions), patients with stable disease and without signs of active perianal disease were postponed. This was in accordance with other European and United States experiences related to the management of patients with inflammatory bowel disease[23,24]. \nThe current results clearly demonstrate that an innovative sphincter-sparing surgical approach including fistula curettage, transanal closure of internal openings and local injection of darvadstrocel leads to promising long-term healing rates in patients with no “effective” alternative, such as flap repair or LIFT procedure. 
However, fundamental limitations of this single-center experience are the small number of patients, the retrospective design, and the absence of a control group. Therefore, the definite role of local application of mesenchymal stem cells has to be discussed within multidisciplinary round tables or consensus conferences to establish clear position statements[25-28]. Currently, we are still in the phase of a “learning curve” in terms of patient selection, ideal surgical technique, interdisciplinary treatment discussion, role of maintenance therapy and evaluation of outcomes, among others. Therefore, generally accepted indications and pathways of stem cell therapy for complex fistulas in Crohn’s disease cannot yet be derived; however, the current results should be a plea for further standardization and interdisciplinary consensus.", "These single-center data demonstrate that local injection of adipose-derived mesenchymal stem cells (darvadstrocel) is safe and effective in patients suffering from perianal fistulizing Crohn’s disease. Providing long-term healing in 66.7% of patients, stem cell therapy seems to be an innovative and promising surgical therapy for complex anal fistula associated with Crohn’s disease. However, this single-center experience is limited due to the small number of patients and the retrospective assessment of routine clinical data related to quality control. As a technical consequence of a high incidence of perianal abscess during follow-up, a wide excision of the external opening should be recommended for future procedures. Finally, further interdisciplinary efforts including controlled studies are necessary to evaluate the definite role of stem cell therapy for complex anal fistula in Crohn’s disease." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Complex anal fistula", "Crohn’s disease", "Stem cell therapy", "Mesenchymal stem cells", "Darvadstrocel", "Treatment", "Surgery", "Outcomes" ]
INTRODUCTION: Both medical and surgical options for complex perianal fistulas associated with Crohn’s disease remain difficult and challenging[1-5]. Despite recent progress in medical treatment including biologicals and actual trends in sphincter-preserving surgical techniques, considerable recurrence rates after definite surgery for Crohn’s anal fistula have been documented[2,3,5]. As an interdisciplinary treatment regimen is the prerequisite for disease control and potential long-term remission of perianal fistulizing Crohn’s disease, therapeutic goals include symptom improvement (e.g., reduction of secretion, absence of pain), prevention of recurrent perianal abscess requiring further surgery, preservation of continence, improvement of quality of life and, finally, healing of the fistula. Focusing on surgical procedures for complex anal fistulas, conventional procedures such as advancement flap repair have shown considerable failure rates for Crohn’s fistulas, whereas innovative surgical options such as ligation of the intersphincteric fistula tract (LIFT), biomaterials (e.g., plug) or video-assisted anal fistula treatment seem to have a 50% healing rate after 12 mo without significant impairment of continence[6]. Recently, several reports and randomized studies have demonstrated that stem cell therapy for Crohn’s complex anal fistula has raised the healing rates after 12 mo[2,7,8]. Impressed by the encouraging healing rates reported from the ADMIRE trial[7,8], it was the aim of this retrospective single-center study to evaluate the current results of stem cell therapy for complex anal fistulas in Crohn’s disease with special focus on indication, patient selection and long-term outcome. MATERIALS AND METHODS: Clinical setting Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had conditioning by curettage or seton drainage. All potential candidates for stem cell injection had fully informed consent of the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). 
Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. Darvadstrocel was only administered if the fistulas had conditioning by curettage or seton drainage. All potential candidates for stem cell injection had fully informed consent of the innovative method and underwent a selection process by multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained by an educational and hands-on workshop for the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). Study population All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to American Gastroenterological Association and Parks classification[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1. Inclusion and exclusion criteria for administration of darvadstrocel MRI: Magnetic resonance imaging. All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistula according to American Gastroenterological Association and Parks classification[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients had interdisciplinary discussion prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1. Inclusion and exclusion criteria for administration of darvadstrocel MRI: Magnetic resonance imaging. Study design and outcome evaluation To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. 
In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not for primary study purpose but for quality assurance and patient care. To assess outcome, criteria of healing included that both internal and external openings were closed, no abscess or fluid collection was present and if the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2. Definitions of outcome To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not for primary study purpose but for quality assurance and patient care. To assess outcome, criteria of healing included that both internal and external openings were closed, no abscess or fluid collection was present and if the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2. Definitions of outcome Special consideration for coronavirus disease 2019 pandemic Due to the coronavirus disease pandemic in the beginning of 2020, the coronavirus disease 2019 (COVID-19) outbreak had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patients had elective surgery for Crohn’s associated complex anal fistulas at our institution between March 2020 and July 2020 according to governmental restrictions (German Ministry of Health). Moreover, according to a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021. Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021. Due to the coronavirus disease pandemic in the beginning of 2020, the coronavirus disease 2019 (COVID-19) outbreak had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patients had elective surgery for Crohn’s associated complex anal fistulas at our institution between March 2020 and July 2020 according to governmental restrictions (German Ministry of Health). Moreover, according to a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021. Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021. Preoperative assessment All patients amenable for stem cell injection were seen in the proctological office 2-4 wk prior to darvadstrocel treatment. 
Informed consent was obtained, covering the efficacy data from the ADMIRE study, the innovative nature of the technique, monitoring of side effects or adverse events, the surgical technique and follow-up observation. Proctological examination included clinical examination and proctorectoscopy to rule out abscess or proctitis in the immediate preoperative phase. No antibiotic treatment was administered; setons were checked to be correctly in place. No routine curettage of fistulas drained by setons was performed in the preoperative period, and no specific change for medical treatment was performed. Surgical technique All surgical procedures were performed under general anesthesia. The patient was placed in lithotomy position. A single-shot antibiotic prophylaxis was routinely administered (cefuroxime/metronidazole). Initially, a careful examination under anesthesia was performed to ensure that there was no presence of abscess or proctitis; moreover, assessment of fistula length, anatomy and topography according to Parks classification[10] was performed. After removal of setons, a vigorous curettage of the fistula tracts was performed by using a metallic curette. Additionally, injection of a sodium chloride solution (0.9%) was administered after curettage. No excision of the fistula tracts was conducted; however, the external openings were sparingly excised. Afterwards, the internal openings were closed by either direct suturing (absorbable suture, Vicryl 3/0; Ethicon EndoSurgery, Norderstedt, Germany) or by mucosal flap (absorbable suture, Vicryl 3/0) if necessary. After preparing and resuspension of the stem cells and gentle aspiration by using a syringe and injection needles (22-G), darvadstrocel was injected according to the manufacturer’s recommendations: Two vials were injected around the internal openings (transanal approach), and the other two vials were injected along the fistula tracts creating small deposits of the cell suspension along the fistulas from external openings (perianal approach). After injection, a soft massage along the fistula region was performed; finally, a sterile bandage was placed around the anal region. Postoperatively, no specific restrictions related to feeding and mobility were present; after wound control on the first postoperative day, the patient was discharged from hospital. No specific wound management was proposed (only clear water twice a day). Maintenance therapy was given as planned prior to surgery. Neither intravenous nor oral antibiotic therapy was given in the postoperative period. Follow-up Regular follow-up examination was performed 2, 4 and 6 wk after surgery in the proctological office, to obtain clinical data related to quality assurance and patient care. Moreover, follow-up examination was advised at 6, 12 and 24 mo after stem cell injection. Additionally, gastroenterological monitoring was advised to provide regular monitoring of Crohn’s disease. As clinical follow-up was primarily indicated for routine quality control and was not indicated for study purpose, regular follow-up did not include postoperative MRI as routine examination. Clinical setting: Stem cell injection was performed by administration of human, allogenic, expanded adipose-derived mesenchymal stem cells (darvadstrocel) for complex anal fistulas associated with Crohn’s disease in adult patients. For the current study, Alofisel® (5 million cells/mL) was used (Takeda GmbH, Konstanz, Germany). Upon authorization of the European Medicines Agency (initial authorization on December 14, 2017), Alofisel® is indicated in patients with non-active or mildly active Crohn’s disease when fistulas show an inadequate response (failure either due to persistence or recurrence) to at least one conventional, biological or surgical therapy. 
Darvadstrocel was only administered if the fistulas had been conditioned by curettage or seton drainage. All potential candidates for stem cell injection gave fully informed consent regarding the innovative method and underwent a selection process based on multidisciplinary evaluation and discussion (gastroenterology and proctology); finally, the center was trained in an educational and hands-on workshop on the use of darvadstrocel in complex fistulas in Crohn’s disease to ensure that only specialist physicians perform the procedure. Moreover, intrainstitutional education was conducted for darvadstrocel administration (e.g., pharmacy, surgical team). Study population: All patients with complex anorectal fistulas associated with Crohn’s disease who were amenable for definite fistula closure were potential candidates for darvadstrocel administration if at least one conventional or surgical attempt to close the fistula had failed. All patients suffered from complex fistulas according to the American Gastroenterological Association and Parks classifications[9,10]. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). According to Parks classification, 76% of the fistulas were transsphincteric and 14% were suprasphincteric. Darvadstrocel was only indicated in patients without active Crohn’s disease, ruled out by ileocolonoscopy, and without the presence of anorectal abscess, assessed by clinical examination and/or pelvic magnetic resonance imaging (MRI). Additionally, all patients were discussed in an interdisciplinary setting prior to surgical treatment. Specific inclusion and exclusion criteria are outlined in Table 1 (Inclusion and exclusion criteria for administration of darvadstrocel; MRI: magnetic resonance imaging). Study design and outcome evaluation: To evaluate the outcome of patients who underwent local stem cell injection by the application of darvadstrocel, a retrospective analysis of existing routine clinical data was performed. In detail, regular clinical data of patient care and quality control (including patients’ characteristics, type of therapy, and outcome evaluation during routine clinical follow-up) were retrospectively analyzed in an anonymized fashion. Patients underwent darvadstrocel administration not primarily for study purposes but for quality assurance and patient care. To assess outcome, healing required that both the internal and external openings were closed, that no abscess or fluid collection was present and that the patient was free of symptoms (e.g., pain, secretion). Healing was assessed by clinical examination and proctoscopy. Strict criteria and definitions of persistence and recurrence are outlined in Table 2 (Definitions of outcome). Special consideration for the coronavirus disease 2019 pandemic: The coronavirus disease 2019 (COVID-19) outbreak at the beginning of 2020 had direct implications for patients with perianal fistulizing Crohn’s disease regarding their schedule for definite surgery. As a result of the primary outbreak in March 2020, no patient had elective surgery for Crohn’s-associated complex anal fistulas at our institution between March 2020 and July 2020 in accordance with governmental restrictions (German Ministry of Health). Moreover, during a second restriction period for elective procedures at our institution starting in November 2020, potential procedures were postponed to 2021.
Finally, a relevant number of patients with fistulizing perianal Crohn’s disease under immunosuppression and/or maintenance therapy (e.g., biologicals) postponed their surgery to 2021.
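The outcome definitions described in the study design subsection above reduce to a simple classification rule per follow-up assessment. The following Python sketch illustrates how such a rule could be encoded for a retrospective chart review; the field names and the example record are hypothetical illustrations, not items from the study database.

def classify_outcome(visit):
    """Return 'healed', 'persistence' or 'recurrence' for one follow-up assessment."""
    healed_now = (
        visit["internal_opening_closed"]
        and visit["external_opening_closed"]
        and not visit["abscess_or_fluid_collection"]
        and not visit["symptoms"]  # e.g., pain or secretion
    )
    if healed_now:
        return "healed"
    # A fistula that reopens after documented healing counts as recurrence;
    # a fistula that never closed counts as persistence.
    return "recurrence" if visit["previously_healed"] else "persistence"

example_visit = {
    "internal_opening_closed": True,
    "external_opening_closed": True,
    "abscess_or_fluid_collection": False,
    "symptoms": False,
    "previously_healed": False,
}
print(classify_outcome(example_visit))  # -> healed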
RESULTS: Study population and patient characteristics Between July 2018 and January 2021, a total of 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy for complex anal fistula associated with Crohn’s disease. All patients had medical therapy (infliximab, adalimumab, vedolizumab or azathioprine), and the complex fistulas did not respond to either medical or surgical treatment prior to stem cell therapy. Mean duration of Crohn’s disease was 19.3 years (range: 7-36 years), and median duration of perianal fistulizing Crohn’s disease was 8.7 years (range: 1-16 years). All patients had undergone at least one conventional surgical attempt to close the fistula (other than seton drainage) prior to stem cell therapy; the mean number of surgical attempts to close the fistula was 2.0 (range: 1-4). Details on the study population and patients’ characteristics are provided in Table 3 (Patient population; CD: Crohn’s disease; CPF: Crohn’s perianal fistula). Fistula characterization All patients underwent fistula conditioning by seton drainage for a mean duration of 31.5 (range: 6-72) wk, and 4 patients had undergone fecal diversion due to severe perianal sepsis. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of 12 patients had horse-shoe fistula, and 3 of 12 had one complex fistula. In total, 21 fistula tracts were documented in 12 patients. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). Surgery All patients underwent removal of the seton, fistula curettage, transanal closure of the internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. No intraoperative complications or side effects were noted. Postoperative morbidity included 1 patient with fever (without signs of perianal sepsis) on the second postoperative day. No serious adverse events were documented.
Outcome Details on efficacy evaluation and outcome are outlined in Table 4 (Efficacy evaluation and outcomes; MRI: magnetic resonance imaging). At a mean follow-up of 14.3 mo (range: 3-30 mo), healing was documented in 66.7% (8/12); mean duration to achieve healing was 12 wk (range: 6-30 wk). Four patients had either fistula persistence (n = 2, 16.7%) or fistula recurrence (n = 2, 16.7%). During follow-up, 4 patients (33.3%) developed a perianal abscess and required reoperation. Of the 4 patients with an ileostomy, one underwent stoma reversal 2 years after fistula healing. Focusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively.
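As a simple illustration of how the summary figures above can be reproduced from follow-up records, the short Python sketch below computes a healing rate and a mean time to healing from a small list of patient records. The records are invented for illustration only and are not the study data.

# Hypothetical follow-up records: healed (True/False) and weeks to healing (None if not healed).
patients = [
    {"healed": True, "weeks_to_healing": 6},
    {"healed": True, "weeks_to_healing": 12},
    {"healed": False, "weeks_to_healing": None},
    {"healed": True, "weeks_to_healing": 30},
]

healed = [p for p in patients if p["healed"]]
healing_rate = 100.0 * len(healed) / len(patients)
mean_weeks = sum(p["weeks_to_healing"] for p in healed) / len(healed)

print(f"Healing rate: {healing_rate:.1f}% ({len(healed)}/{len(patients)})")
print(f"Mean time to healing: {mean_weeks:.1f} wk")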
DISCUSSION: Despite significant progress in medical treatment, including biologicals and cell-based therapies, the development of sphincter-saving surgical techniques and interdisciplinary co-working, failure rates after definite surgery for perianal fistulizing Crohn’s disease are still high[3,11-13]. Moreover, in patients with severe and refractory perianal disease, the decision to perform fecal diversion or even proctectomy has a tremendous impact on quality of life, particularly in younger patients. Prompted by the preliminary results of limited single-center studies and the encouraging data of the ADMIRE study[7,8], this retrospective single-center study analyzed routine clinical data on the application of allogenic, adipose-derived mesenchymal stem cells (darvadstrocel) for complex perianal fistula associated with Crohn’s disease, providing structured inclusion and exclusion criteria in 12 patients. In general, the optimal surgical treatment of complex anal fistulas in patients with Crohn’s disease remains challenging. As the majority of patients need surgery for perianal fistulizing Crohn’s disease and a high proportion of patients need further surgery due to abscess and recurrent fistula, the integrity of the anal sphincter is essential for preservation of continence[3]. Therefore, the risk of deterioration of continence status increases with the number of surgical procedures and the invasiveness of the procedures performed. For complex anal fistulas according to the American Gastroenterological Association and Parks classifications[9,10], transrectal flap procedures (advancement or mucosal flap repair) and the LIFT procedure can be considered effective surgical options in stable disease and in the absence of proctitis and/or anorectal stenosis, with acceptable healing rates[11-18]; however, fecal incontinence and soiling are reported in approximately 10% of patients after endorectal flap procedures[6,11,18]. Focusing on the LIFT procedure, the risk of incontinence is low, but about half of the patients need additional surgery due to recurrence[14-16]. The main “disadvantage” or limitation of both procedures is high complex fistulas with a suprasphincteric course or multi-tract fistulas with two internal and external openings (“multiple branching tracts”). In these patients, long-term seton drainage is the favored option; alternatively, proctectomy can be considered.
Based on this “surgical dilemma”, the novel therapeutic approach of local stem cell therapy seems to be an alternative for highly selected patients with multi-tract, complex fistulas. In the meantime, a variety of limited studies have demonstrated that local injection of mesenchymal stem cells can induce long-term fistula healing without the risk of incontinence and without serious adverse events related to the mesenchymal stem cells themselves[19-22]. In the multicenter, phase III randomized controlled trial (ADMIRE), in which allogenic, adipose-derived mesenchymal stem cells were applied in combination with transanal closure of the internal opening and fistula curettage, a 50% healing rate with stem cell therapy compared with a 34% success rate in the placebo group was documented after 24 wk, and healing rates were sustained to 52 wk[7,8]. Impressed by these results and personally searching for the “best option” in patients with multi-branching or two-tract complex fistula, the aim of the current single-center experience was to evaluate the outcome of local injection of darvadstrocel in a highly selected group of patients. For the current single-center experience with stem cell therapy, strict inclusion and exclusion criteria were defined. In accordance with the inclusion criteria of the ADMIRE study[7,8], only patients with complex fistulas associated with Crohn’s disease (in the majority two complex fistulas, suprasphincteric fistulas or horse-shoe fistulas), without active luminal Crohn’s disease (stable disease under medical treatment) and without evidence of perianal sepsis were included. In contrast, simple fistulas or rectovaginal fistulas were excluded from stem cell therapy. Moreover, in the current study, patients with single-tract intersphincteric or transsphincteric fistulas as well as relatively short single-tract fistulas (fistula length less than 3 cm) were not candidates for stem cell therapy; in these patients, advancement flap repair, mucosal flap repair or the LIFT procedure was the preferred option. Within the observation period, a total of 28 patients underwent a flap or LIFT procedure for complex fistula associated with Crohn’s disease (data not shown). All patients had drainage of fistulas by seton for a minimum of 6 wk. In contrast to the ADMIRE study[7,8], 4 patients who had undergone fecal diversion due to perianal sepsis related to complex fistulas years before admission to our center were included in the current study after interdisciplinary discussion, as the only alternative surgical treatment would have been a proctectomy in these patients, and the patients refused this option. Finally, all 12 patients had had at least one attempt to surgically close the fistula with no success; thus, all fistulas treated with stem cell therapy were recurrent complex fistulas. Focusing on surgical technique and administration of darvadstrocel, no difficulties or intraoperative morbidity were documented. One patient had fever on the second postoperative day without any signs of perianal sepsis. No serious adverse events in the immediate postoperative course were noted. After a mean follow-up of 14.3 mo (range: 3-30 mo), the healing rate was 66.7% (8 of 12 patients). Healing was based on strict criteria and was assessed by clinical examination and proctoscopy in the follow-up period.
In terms of postoperative MRI (5/12 patients), 1 female patient with clinical evidence of fistula healing also had radiological healing as documented by MRI, whereas 3 patients had radiological recurrence (n = 2) or persistence (n = 1). One patient with clinical healing had radiological persistence of the fistula without signs of contrast enhancement. Focusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively. The occurrence of perianal abscess during follow-up was relatively frequent in the current collective. Four patients (33.3%) developed perianal abscess in the postoperative course, and surgery (abscess excision) was required (1 patient had abscess formation twice). Interestingly, the occurrence of perianal abscess was late (3, 4, 7, 9 and 22 mo after darvadstrocel administration), and the abscess localization was close to a former external opening. The occurrence of abscess was related to fistula recurrence in 1 patient (supralevatoric abscess with proctosigmoiditis), to exacerbation of systemic disease in another patient (subcutaneous abscess with active inflammation of the ileocolic region and proctitis), and to fistula persistence in 2 patients. As a consequence of the relatively high incidence of perianal abscess following stem cell therapy in the current series, a wider or more radical excision of the external opening should be recommended in future procedures to prevent perianal abscess. Moreover, patients with the occurrence of abscess could not be considered “in remission”; however, there was no change in medical treatment. Therefore, the differentiation of abscess as a local problem following fistula surgery or as a problem of systemic disease with direct implications for medical treatment should be a greater focus of discussion between gastroenterologists and surgeons. Analyzing the further course of patients with long-term healing in terms of medical therapy, 1 patient had ceased maintenance therapy on the recommendation of the gastroenterologist. Focusing on patients who had fecal diversion in the past, one female patient underwent stoma reversal 2 years after stem cell injection. In the other 3 patients, stoma reversal has not yet been performed: in 1 patient the follow-up is still too short, as stoma reversal should not be advised prior to a minimum period of 6 mo after definite fistula surgery, and the other two patients had recurrence. Specifically addressing the observation period related to the COVID-19 pandemic, stem cell therapy was not performed between March 2020 and July 2020 and during a second period starting in November 2020 due to general restrictions concerning elective operations in Germany. Based on governmental restrictions and in-hospital limitations (e.g., reduced capacity in the surgical theater) as well as logistic reasons (e.g., transportation of stem cells under specific conditions), surgery in patients with stable disease and without signs of active perianal disease was postponed. This was in accordance with other European and United States experiences related to the management of patients with inflammatory bowel disease[23,24]. The current results clearly demonstrate that an innovative sphincter-sparing surgical approach including fistula curettage, transanal closure of the internal openings and local injection of darvadstrocel leads to promising long-term healing rates in patients with no “effective” alternative, such as flap repair or the LIFT procedure.
However, fundamental limitations of this single-center experience are the small number of patients, the retrospective design, and the absence of a control group. Therefore, the definite role of local application of mesenchymal stem cells has to be discussed within multidisciplinary round tables or consensus conferences in order to reach clear position statements[25-28]. In fact, we are still on the “learning curve” in terms of patient selection, ideal surgical technique, interdisciplinary treatment discussion, the role of maintenance therapy and the evaluation of outcomes, among others. Therefore, generally accepted indications and pathways of stem cell therapy for complex fistulas in Crohn’s disease cannot yet be derived; however, the current results should be a plea for further standardization and interdisciplinary consensus. CONCLUSION: These single-center data demonstrate that local injection of adipose-derived mesenchymal stem cells (darvadstrocel) is safe and effective in patients suffering from perianal fistulizing Crohn’s disease. Providing long-term healing in 66.7% of patients, stem cell therapy seems to be an innovative and promising surgical therapy for complex anal fistula associated with Crohn’s disease. However, this single-center experience is limited due to the small number of patients and the retrospective assessment of routine clinical data related to quality control. As a technical consequence of the high incidence of perianal abscess during follow-up, a wide excision of the external opening should be recommended for future procedures. Finally, further interdisciplinary efforts including controlled studies are necessary to evaluate the definite role of stem cell therapy for complex anal fistula in Crohn’s disease.
Background: Despite tremendous progress in medical therapy and optimization of surgical strategies, considerable failure rates after surgery for complex anal fistula in Crohn's disease have been reported. Therefore, stem cell therapy for the treatment of complex perianal fistula can be an innovative option with potential long-term healing. Methods: All patients with complex anal fistulas associated with Crohn's disease who were amenable for definite fistula closure within a defined observation period were potential candidates for stem cell injection (darvadstrocel) if at least one conventional or surgical attempt to close the fistula had failed. Darvadstrocel was only indicated in patients without active Crohn's disease and without presence of anorectal abscess. Local injection of darvadstrocel was performed as a standardized procedure under general anesthesia including single-shot antibiotic prophylaxis, removal of seton drainage, fistula curettage, closure of the internal openings and local stem cell injection. Data collection focusing on healing rates, occurrence of abscess and follow-up was performed on a regular basis of quality control and patient care. Data were retrospectively analyzed. Results: Between July 2018 and January 2021, 12 patients (6 females, 6 males) with a mean age of 42.5 (range: 26-61) years underwent stem cell therapy. All patients had a minimum of one complex fistula, including patients with two complex fistulas in 58.3% (7/12). Two of the 12 patients had horse-shoe fistula and 3 had one complex fistula. According to Parks classification, the majority of fistulas were transsphincteric (76%) or suprasphincteric (14%). All patients underwent removal of seton, fistula curettage, transanal closure of internal opening by suture (11/12) or mucosal flap (1/12) and stem cell injection. At a mean follow-up of 14.3 (range: 3-30) mo, a healing rate was documented in 66.7% (8/12); mean duration to achieve healing was 12 (range: 6-30) wk. Within follow-up, 4 patients required reoperation due to perianal abscess (33.3%). Focusing on patients with a minimum follow-up of 12 mo (6/12) or 24 mo (4/12), long-term healing rates were 66.7% (4/6) and 50.0% (2/4), respectively. Conclusions: Data of this single-center experience are promising but limited due to the small number of patients and the retrospective analysis.
INTRODUCTION: Both medical and surgical options for complex perianal fistulas associated with Crohn’s disease remain difficult and challenging[1-5]. Despite recent progress in medical treatment including biologicals and actual trends in sphincter-preserving surgical techniques, considerable recurrence rates after definite surgery for Crohn’s anal fistula have been documented[2,3,5]. As an interdisciplinary treatment regimen is the prerequisite for disease control and potential long-term remission of perianal fistulizing Crohn’s disease, therapeutic goals include symptom improvement (e.g., reduction of secretion, absence of pain), prevention of recurrent perianal abscess requiring further surgery, preservation of continence, improvement of quality of life and, finally, healing of the fistula. Focusing on surgical procedures for complex anal fistulas, conventional procedures such as advancement flap repair have shown considerable failure rates for Crohn’s fistulas, whereas innovative surgical options such as ligation of the intersphincteric fistula tract (LIFT), biomaterials (e.g., plug) or video-assisted anal fistula treatment seem to have a 50% healing rate after 12 mo without significant impairment of continence[6]. Recently, several reports and randomized studies have demonstrated that stem cell therapy for Crohn’s complex anal fistula has raised the healing rates after 12 mo[2,7,8]. Impressed by the encouraging healing rates reported from the ADMIRE trial[7,8], it was the aim of this retrospective single-center study to evaluate the current results of stem cell therapy for complex anal fistulas in Crohn’s disease with special focus on indication, patient selection and long-term outcome. CONCLUSION: Based on the current results, local stem cell injection could be a new “puzzle piece” for effective treatment of complex anal fistulas associated with Crohn’s disease.
Keywords: Complex anal fistula; Crohn’s disease; Stem cell therapy; Mesenchymal stem cells; Darvadstrocel; Treatment; Surgery; Outcomes
MeSH terms: Crohn Disease; Female; Humans; Male; Rectal Fistula; Retrospective Studies; Stem Cells; Treatment Outcome
Impairment of pulmonary vascular reserve and right ventricular systolic reserve in pulmonary arterial hypertension.
PMID: 24762000
BACKGROUND: Exercise capacity is impaired in pulmonary arterial hypertension (PAH). We hypothesized that cardiovascular reserve abnormalities would be associated with impaired hemodynamic response to pharmacological stress and worse outcome in PAH.
METHODS: Eighteen PAH patients (p), group 1, NYHA class II/III, and ten controls underwent simultaneous right cardiac catheterization and intravascular ultrasound at rest and during low-dose dobutamine (10 mcg/kg/min) with Trendelenburg (DST). We estimated cardiac output (CO), pulmonary vascular resistance (PVR) and capacitance (PC), and PA elastic modulus (EM). We concomitantly measured tricuspid annular plane systolic excursion (TAPSE), RV myocardial peak systolic velocity (Sm) and isovolumic myocardial acceleration (IVA) in PAH patients. Based on the rounded mean + 2 SD of the increase in mPAP in our healthy control group during DST (2.8 + 1.8 mm Hg), PAH p were divided into two groups according to mean PA pressure (mPAP) response during DST, 1: ΔmPAP > 5 mm Hg and 2: ΔmPAP ≤ 5 mm Hg. Cardiovascular reserve was estimated as the change (delta, Δ) during DST compared with rest, including ΔmPAP with respect to ΔCO (ΔmPAP/ΔCO). All patients were prospectively followed up for 2 years.
RESULTS: PAH p showed a significantly lower heart rate and CO increase than controls during DST, with a significant mPAP and pulse PAP increase and higher ΔmPAP/ΔCO (p < 0.05). Neither hemodynamic, IVUS nor echocardiographic data differed between both PAH groups at rest. In group 1, DST caused a higher ΔEM, ΔmPAP/ΔCO, ΔPVR, and ΔTAPSE than group 2, with a lower IVA increase and a negative ΔSV (p < 0.05). TAPSE correlated with mPAP and RVP (p < 0.05), and IVA and Sm correlated with CO (p < 0.05). ΔEM correlated with ΔmPAP and ΔIVA with ΔCO (p < 0.05). There were two deaths/pulmonary transplantations in group 1 and one death in group 2 during the follow-up (p > 0.05).
CONCLUSIONS: Pulmonary vascular reserve and RV systolic reserve are significantly impaired in patients with PAH. The lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and it could be associated with a poor clinical outcome.
MeSH terms: Airway Resistance; Cardiac Catheterization; Case-Control Studies; Echocardiography; Echocardiography, Stress; Exercise Tolerance; Female; Functional Residual Capacity; Hemodynamics; Humans; Hypertension, Pulmonary; Male; Middle Aged; Prognosis; Reference Values; Risk Assessment; Severity of Illness Index; Stroke Volume; Survival Rate; Ultrasonography, Interventional; Vascular Resistance; Ventricular Dysfunction, Right
PMCID: 4007147
Background
Normal pulmonary circulation is characterized by low pressure and low vascular resistance. Mean pulmonary arterial pressure (mPAP) at rest is virtually independent of age and rarely exceeds 20 mm Hg (14 ± 3.3 mm Hg). In healthy individuals, passive distension of the compliant pulmonary circulation and active flow-mediated vasodilation allow the pulmonary vasculature to accommodate increased cardiac output (CO) with only a modest increase in mPAP and a fall in pulmonary vascular resistance (PVR) [1-3]. Invasive hemodynamic monitoring during incremental exercise testing is technically difficult to perform and not routinely incorporated into clinical exercise testing. A recent systematic review has reported an age-dependent increase of mPAP that may exceed 30 mm Hg, particularly in subjects aged ≥ 50 years, making it difficult to clearly define normal mPAP values during exercise [2]. In idiopathic pulmonary arterial hypertension (PAH), exercise capacity is markedly impaired due to inefficient lung gas exchange (ventilation/perfusion mismatching with increased dead space ventilation) and the inability of the heart to adequately increase pulmonary blood flow during exercise [4]. The pathophysiological mechanisms leading to an abnormal exercise response include an intrinsic abnormality in the pulmonary vasculature due to pulmonary arterial (PA) wall remodeling [5] and a reduction in stroke volume and right ventricular (RV) ejection fraction [6]. Laskey et al. have demonstrated that both the steady and pulsatile components of the PA vascular hydraulic load have a considerable impact on the exercise response in primary pulmonary hypertension [7]. It has already been reported that improvement in exercise tolerance in PAH patients under chronic therapy is independently related to improvements in pulmonary hemodynamics measured during exercise but not in resting conditions, suggesting an improvement in the vascular reserve [8]. Lau et al. did not observe any significant beneficial effects of bosentan on arterial stiffness following 6 months of therapy [9]. Exercise echo-Doppler is being used with increasing frequency in the assessment of patients with known or suspected pulmonary vascular disease, focusing on the change in Doppler-estimated PAP with exercise. However, there are surprisingly few data about RV function during exercise, especially considering that impaired RV functional reserve could be involved in the mechanism of exercise limitation in PAH and other forms of pulmonary vascular disease [10]. Recently, Blumberg et al. showed that the ability to increase the cardiac index during exercise is an important determinant of exercise capacity and is linked to survival in patients with PH [11]. We hypothesized that abnormalities in cardiovascular reserve would be associated with impaired hemodynamic response to pharmacological stress and worse outcome in PAH. Therefore, the first aim of the present study was to perform RV systolic function assessment (echocardiography) and hemodynamic monitoring (right heart catheterization), including assessment of the local elastic properties of the proximal PA wall (intravascular ultrasound, IVUS), during dobutamine stress in patients with PAH. The second aim was to evaluate the association between cardiovascular reserve and outcome during a two-year follow-up.
Methods
Ethics statement The investigation conforms with the principles outlined in the Declaration of Helsinki. The study protocol was approved by the Institutional Ethics Committee of the Hospital Universitari Vall d’Hebron (Barcelona), and all patients gave written informed consent. Study population Eighteen consecutive patients with PAH (Dana Point group 1) under specific drug therapy who underwent a follow-up cardiac catheterization at our institution were included in the study from January 2007 to September 2009. The patients were in NYHA functional class II-III, with no clinical and pharmacological changes in the last 4–6 months. Exclusion criteria were refusal to participate in the study or being in NYHA functional class IV. The diagnosis of PAH was made according to the standard algorithm, including a right heart catheterization [12]. Causes of PAH were idiopathic PH (n = 12), PH related to connective tissue disease (n = 3), surgically corrected congenital heart disease (n = 1), PH associated with human immunodeficiency virus (n = 1) and porto-pulmonary hypertension (n = 1). Chronic medication included oral anticoagulants, diuretics on demand, bosentan, sildenafil, inhaled iloprost and epoprostenol, as well as combination therapies on clinical judgement. Age- and sex-matched control subjects, initially referred for cardiac catheterization due to clinically suspected PAH and without any other heart or lung disease, were recruited. They underwent the same invasive study protocol after documentation of normal pulmonary arterial hemodynamics. All subjects underwent a routine right heart catheterization and simultaneous IVUS of an inferior lobe medium-sized elastic pulmonary artery in the supine position while breathing room air. A transthoracic echocardiographic study was performed concomitantly in PAH patients by a single experienced examiner. All variables were obtained at rest and during a dobutamine stress test with simultaneous Trendelenburg (DST). DST consisted of a low-dose dobutamine infusion (10 mcg/kg/min), in order to increase myocardial contractility and heart rate, and a 30° Trendelenburg position, in order to increase venous return (preload), both during 10 minutes, unless symptoms (shortness of breath, chest pain, systemic hypertension with systolic blood pressure ≥ 180 mm Hg or tachyarrhythmia other than sinus tachycardia) were observed. These three variables (venous return or preload, heart rate and myocardial contractility) are the leading factors responsible for the increase in CO during exercise [1]. We chose a low-dose dobutamine stress test with simultaneous Trendelenburg as an easy maneuver to induce a purely passive increase of pulmonary flow and a change in cardiac contractility, in order to assess the cardiovascular reserve [2]. PAH patients were prospectively followed up for 2 years. Physicians who carried out the clinical follow-up were blinded to the hemodynamic, IVUS and echocardiographic results.
Hemodynamic and IVUS studies A 7 F Swan-Ganz catheter (Edwards Lifesciences, USA) was inserted into a brachial vein and a 5 F end-hole catheter was inserted into the right radial artery to monitor systemic arterial pressure. Both catheters were connected to fluid-filled transducers, which were positioned at the anterior axillary line level and zeroed at atmospheric pressure. Right atrial, PAP and pulmonary capillary wedge pressures were all measured at end-expiration. CO was calculated using the Fick method. In patients with tricuspid regurgitation and low CO, such as those with PAH, the thermodilution method has not been reported to be more accurate than the Fick method [13]; however, in this series no patient presented tricuspid regurgitation greater than mild in the echocardiographic assessment, either at rest or at peak stress. PVR was calculated as (mPAP - pulmonary capillary wedge pressure)/CO and total pulmonary resistance (TPR) as mPAP/CO. Pulmonary vascular capacitance (PC) was estimated by the stroke volume/pulse pressure ratio (SV/pPAP) [14].
The changes in mPAP were normalized by the changes in CO (ΔmPAP/ΔCO) during pharmacological stress, in order to interpret exercise-induced increases in mPAP relative to increases in blood flow [15]. IVUS examination was performed with an Eagle Eye Gold catheter 20 MHz, 3.5 F (Volcano Corporation, USA) with an axial resolution of 200 μm and an automatic pullback of 0.5 mm/s. The images were obtained from a segmental PA of the inferior lobe (elastic PA between 2–3 mm) [16-18] and stored in digital form. Both diastolic and systolic cross-sectional areas of the studied segment were analyzed off-line by two observers unaware of clinical and hemodynamic findings. We estimated IVUS pulsatility (IVUSp) as: (systolic-diastolic lumen area)/diastolic lumen area × 100. The physiological adaptation of the vessel wall to stress was estimated by the elastic modulus (EM) or pressure/elastic strain index (pulse pressure/IVUSp), an expression of the intrinsic PA wall viscoelastic properties and buffering function. Intra- and inter-observer validation of IVUS measurements in our laboratory has been previously published [16,18].
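The derived indices defined in this and the preceding paragraph follow directly from the measured quantities. The short Python sketch below restates those formulas (PVR, TPR, pulmonary capacitance, ΔmPAP/ΔCO, IVUS pulsatility and elastic modulus); the numerical values are invented for illustration and are not patient data, and the unit conventions (mm Hg, L/min, mL, mm²) are assumptions.

def derived_indices(mpap, pcwp, co, sv, ppap, lumen_sys, lumen_dia):
    """mpap/pcwp/ppap in mm Hg, co in L/min, sv in mL, lumen areas in mm^2 (assumed units)."""
    pvr = (mpap - pcwp) / co                               # pulmonary vascular resistance
    tpr = mpap / co                                        # total pulmonary resistance
    pc = sv / ppap                                         # pulmonary capacitance, SV/pulse pressure
    ivus_p = (lumen_sys - lumen_dia) / lumen_dia * 100.0   # IVUS pulsatility (%)
    em = ppap / ivus_p                                     # elastic modulus, pulse pressure/IVUSp
    return {"PVR": pvr, "TPR": tpr, "PC": pc, "IVUSp": ivus_p, "EM": em}

rest = derived_indices(mpap=50, pcwp=10, co=4.0, sv=55, ppap=40, lumen_sys=5.2, lumen_dia=5.0)
stress = derived_indices(mpap=60, pcwp=10, co=5.0, sv=55, ppap=50, lumen_sys=5.3, lumen_dia=5.1)
delta_mpap_over_delta_co = (60 - 50) / (5.0 - 4.0)  # mm Hg per L/min
print(rest, stress, delta_mpap_over_delta_co)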
Transthoracic echocardiography-Doppler study Baseline and stress echocardiography was performed using commercially available equipment (Vivid 7 digital GE Medical System) with a standard 2D broad-band phased array M4S transducer and tissue Doppler imaging software. The transducer was maximally aligned to optimize endocardial visualization and spectral displays of Doppler profiles. Real-time 2-D and colour Doppler myocardial imaging were performed in the apical 4-chamber view as well as the parasternal short-axis and subcostal views. The predominantly longitudinal contractile pattern of the RV can be exploited to assess RV systolic function [19]. We estimated the global RV systolic longitudinal function by the tricuspid annular plane systolic excursion (TAPSE) measured from the systolic displacement of the RV free wall-tricuspid annular plane junction in the apical 4-chamber view M-mode recordings. Myocardial peak velocity during ejection phase (Sm) was assessed by tissue Doppler imaging in the basal segment of the RV free wall using spectral pulsed wave tissue Doppler recorded at a sweep speed of 100–150 mm/s. Myocardial acceleration during isovolumic contraction (IVA) was calculated as the maximal isovolumetric myocardial velocity divided by the time to peak systolic velocity, as previously described by Vogel et al. [20]. This method seems to be less load-dependent compared to the other two indices. Patients were required to hold their breath and images were obtained immediately after expiration for better image quality. All patients were in sinus rhythm, and an average of 3 to 5 measurements from consecutive cardiac cycles were employed for data analysis. All examinations were recorded digitally for subsequent blinded off-line analysis on EchoPAC GE Medical System. The estimation of intraobserver and interobserver reproducibility was analyzed.
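Since IVA is defined above as the peak isovolumic myocardial velocity divided by the time to that peak, it can be computed directly from the tissue Doppler velocity trace. The Python sketch below shows this calculation on an invented velocity sample; the sampling interval and values are assumptions for illustration only.

# Isovolumic acceleration (IVA) from a tissue Doppler velocity trace (illustrative values).
# Velocities in m/s, assumed to be sampled every 5 ms during isovolumic contraction.
velocities_m_per_s = [0.00, 0.02, 0.05, 0.09, 0.12, 0.14]
dt_s = 0.005

peak_velocity = max(velocities_m_per_s)
time_to_peak_s = velocities_m_per_s.index(peak_velocity) * dt_s

iva_m_per_s2 = peak_velocity / time_to_peak_s
print(f"IVA = {iva_m_per_s2:.1f} m/s^2")  # 0.14 / 0.025 = 5.6 m/s^2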
Cardiovascular reserve analysis
Cardiovascular reserve was expressed as the change (increase or decrease) in heart rate (ΔHR, chronotropic reserve), RV systolic function (ΔIVA, systolic reserve) and pulmonary vascular function (ΔEM and ΔPVR, vascular reserve) during DST compared with rest [21].
Statistical analysis
Continuous variables are expressed as mean ± SEM. Based on the rounded mean + 2 SD of the increase in mPAP in our healthy control group during DST (2.8 + 1.8 mm Hg), PAH patients were divided, prior to analysis, into two groups according to their hemodynamic response to pharmacological stress: group 1 comprised patients whose mPAP increased > 5 mm Hg during stress with respect to the resting value, and group 2 comprised patients with an mPAP increase ≤ 5 mm Hg. Independent-sample t-tests were used to compare the control and PAH groups, and paired t-tests were used to compare the effects of the stress maneuver within each group. The chi-squared test was used to compare proportions of patients. Intergroup variation was analyzed using one-way ANOVA. Associations between the hemodynamic response (ΔCO, ΔmPAP) and cardiovascular reserve (ΔIVA, ΔEM, ΔPVR) were explored using linear regression analysis (Pearson coefficient). A two-sided P value < 0.05 was regarded as significant. Data analysis was carried out with SPSS 17.0 software for Windows 7.
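As an illustration of this analysis workflow, the sketch below reproduces the main steps (pre-specified dichotomization by ΔmPAP, paired and independent t-tests, chi-squared comparison of proportions, Pearson correlation) with made-up placeholder data. It is not the authors' SPSS analysis; only the 2×2 table borrows the PVR-response proportions reported later in the Results as an example.

```python
# Illustrative sketch of the statistical workflow described above (not the authors'
# SPSS analysis). The per-patient arrays are made-up placeholders.
import numpy as np
from scipy import stats

# Placeholder stress-induced changes for 18 hypothetical PAH patients
delta_mpap = np.array([8.1, 6.4, 9.0, 7.2, 10.5, 6.8, 8.9, 7.7, 9.4,
                       1.2, 3.4, -0.5, 4.1, 2.8, 0.9, 3.9, 2.2, 4.6])   # mm Hg
delta_co = np.array([0.9, 1.1, 0.7, 1.3, 0.8, 1.2, 1.0, 0.6, 1.1,
                     1.8, 2.1, 1.6, 2.3, 1.9, 1.7, 2.0, 2.2, 1.5])      # L/min
mpap_rest = 45.0 + np.arange(18)                                        # mm Hg
mpap_stress = mpap_rest + delta_mpap

# Pre-specified dichotomization: group 1 if mPAP rises > 5 mm Hg with stress
group1 = delta_mpap > 5
group2 = ~group1

# Paired t-test: effect of the stress maneuver within a group
t_paired, p_paired = stats.ttest_rel(mpap_stress[group1], mpap_rest[group1])

# Independent-samples t-test: compare the two groups (or controls vs. PAH)
t_ind, p_ind = stats.ttest_ind(delta_co[group1], delta_co[group2])

# Chi-squared test for proportions: patients whose PVR fell (2/9 vs. 8/9)
chi2, p_chi2, dof, _ = stats.chi2_contingency(np.array([[2, 7], [8, 1]]))

# Pearson correlation between hemodynamic response and a reserve index
r, p_r = stats.pearsonr(delta_mpap, delta_co)

print(p_paired, p_ind, p_chi2, r, p_r)
```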
Results
Comparison between PAH patients and control subjects at rest and during pharmacological and positional stress
The age and gender of PAH subjects and control subjects were well matched (Table 1). Table 2 shows the hemodynamic and IVUS data of control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant differences in heart rate, SV, CO, right atrial pressure or pulmonary capillary wedge pressure between the two groups.
Table 1. Demographic, anthropometric and clinical data of control subjects and patients with PAH. BSA: body surface area; 6MWD: six minute walking distance.
Table 2. Hemodynamic and IVUS data of control subjects and patients with PAH at rest and during the stress maneuver. *p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.
During DST, healthy controls showed an increase in CO, SV and heart rate (P < 0.05), with a significant reduction in PVR and TPR and improvement of IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of the control subjects exceeded an mPAP of 20 mm Hg at rest or 30 mm Hg during stress.
All PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study and no complications were observed. No patient included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. Only six of the 18 PAH patients increased SV, and their heart rate increase was significantly lower than that of control subjects (29 ± 3.8 vs. 41 ± 2 bpm, P = 0.034); therefore the CO increment was mainly dependent on the heart rate increase. mPAP, pPAP and EM increased significantly and PC decreased significantly during stress (Table 2). However, PAH patients showed an increase in all RV systolic function indices during stress (TAPSE 16.8 ± 1.3 vs. 20.2 ± 1.1 mm, P = 0.02; Sm 12.0 ± 0.5 vs. 14.7 ± 0.8 cm ⋅ s-1, P = 0.003; IVA 3.8 ± 0.3 vs. 7.8 ± 0.9 m ⋅ s-2, P = 0.0003). The CO increase was less marked than in healthy controls (P < 0.05) and, consequently, ΔmPAP/ΔCO was higher in PAH patients than in controls (9.6 ± 3.1 vs. 0.7 ± 0.1 mm Hg/L/min, P = 0.046) (Figure 1). High-quality RV velocity curves were obtained both at rest and during stress in 16 of 18 patients.
Figure 1. Relationship between mean pulmonary artery pressure (mPAP) and cardiac output at rest and during pharmacological and positional stress. (Filled circle: PAH patients; filled square: control patients).
Changes in hemodynamic, IVUS and echocardiographic data in PAH patients according to ΔmPAP during pharmacological and positional stress
Nine patients increased mPAP > 5 mm Hg (group 1) and nine patients changed mPAP ≤ 5 mm Hg (group 2) during stress. The etiology in group 1 was IPAH in 5, scleroderma-associated PAH in 2, congenital cardiac shunt in 1 and HIV-associated PH in 1; in group 2 it was IPAH in 7, scleroderma-associated PAH in 1 and porto-pulmonary hypertension in 1. Neither demographic nor clinical differences between PAH group 1 and PAH group 2 were found (Table 1). Accordingly, neither hemodynamic nor IVUS data showed differences between the two PAH groups at rest (Table 3).
Table 3. Hemodynamic, IVUS and echocardiographic data at rest and during the stress maneuver in both PAH groups. *p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; PCWP: pulmonary capillary wedge pressure; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; RAP: right atrial pressure; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance.
Both PAH groups increased CO during DST (P < 0.05), although without significant differences between them (1.14 ± 0.3 vs. 1.9 ± 0.3 L/min, NS). Only PAH group 1 showed a significant increase in mPAP, with a significant decrease in PC and increase in EM and no change in PVR or TPR. PAH group 2 decreased PVR and TPR without significant change in PC, IVUSp or EM (Table 3). PVR decreased in 2/9 patients in group 1 and in 8/9 in group 2 (P < 0.05). PC decreased in 9/9 patients in group 1 and in only 4/9 in group 2 (P = 0.08). Starting from a similar EM at rest, the EM of PAH group 1 was significantly higher than that of PAH group 2 during DST (362 ± 55 vs. 187 ± 23 mm Hg, P < 0.05, Table 3).
RV systolic function indices were similar between the two PAH groups at rest. DST unmasked a significantly lower increase in IVA in PAH group 1 than in PAH group 2 (5.9 ± 0.7 vs. 9.9 ± 1.5 m ⋅ s-2) (Table 3). Concomitantly, SV decreased (P < 0.05) in PAH group 1 during stress.
Cardiovascular reserve responses during pharmacological and positional stress
Control subjects showed a higher chronotropic (Δheart rate) and systolic reserve (measured by ΔSV) than PAH patients (P < 0.05). The negative change in ΔEM and ΔPVR during stress revealed an increased vascular reserve associated with a low ΔmPAP/ΔCO ratio (Table 4).
Table 4. Cardiovascular reserve response of control subjects and both PAH groups. †p < 0.05, PAH 1 vs. PAH 2; *p < 0.05, Control vs. PAH 1; §p < 0.05, Control vs. PAH 2. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance. Δ = delta.
Considering all PAH patients, resting EM, but neither PVR nor PC, was correlated with ΔmPAP (r = 0.49, P < 0.005) and ΔCO (r = -0.72, P < 0.0001).
Global cardiovascular reserve was impaired in PAH group 1, which showed the greater increase in ΔEM and the higher ΔmPAP/ΔCO ratio, with a negative change in ΔSV and a positive change in ΔPVR and ΔTPR. By contrast, PAH group 2 retained some degree of cardiovascular reserve, illustrated by the changes in ΔEM, ΔPVR, the ΔmPAP/ΔCO ratio and ΔSV with respect to control subjects (Table 4).
In the PAH patients in whom RV systolic function was analyzed, TAPSE correlated with mPAP and PVR (r = 0.58 and r = 0.51, respectively; P < 0.05), whereas IVA and Sm correlated with CO (r = 0.32 and r = 0.5, respectively; P < 0.05) (Table 5). Finally, ΔEM correlated only with ΔmPAP (r = 0.56, P < 0.05), and ΔIVA correlated with ΔCO (r = 0.5, P < 0.05) (Figure 2).
Table 5. Correlations between right ventricular systolic tissue Doppler variables and hemodynamics during both conditions (rest and stress maneuver). CO: cardiac output; IVA: myocardial isovolumic acceleration; mPAP: mean pulmonary arterial pressure; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; TAPSE: tricuspid annular plane systolic excursion. (r: Pearson coefficient).
Figure 2. Relationship between hemodynamic response and cardiovascular reserve. A. Correlation between delta elastic modulus (ΔEM) and delta mean pulmonary artery pressure (ΔmPAP); B. correlation between ΔEM and delta cardiac output (ΔCO); C. correlation between delta myocardial isovolumic acceleration (ΔIVA) and ΔmPAP; D. correlation between ΔIVA and ΔCO in PAH patients. (delta = value during dobutamine stress test and Trendelenburg minus value at rest).
The interobserver and intraobserver variabilities for IVA measurements were 4.4% and 3.4%, respectively.
In the 2-year clinical follow-up there were two deaths/pulmonary transplantations in PAH group 1 and one death in PAH group 2 (P > 0.05).
Conclusions
Pulmonary vascular reserve and RV systolic reserve are impaired in PAH patients. PA wall remodeling, impaired pulmonary buffering function and reduced RV contractility appear to be the main determinants of cardiovascular reserve dysfunction in these patients. A lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and could be associated with a poor clinical outcome. Further study is needed to elucidate whether cardiovascular reserve dysfunction adds independent prognostic information in a multivariate analysis, and whether improving cardiovascular reserve could be a therapeutic target in patients with established pulmonary hypertension.
[ "Background", "Ethics statement", "Study population", "Hemodynamic and IVUS studies", "Transthoracic echocardiography-Doppler study", "Cardiovascular reserve analysis", "Statistical analysis", "Comparison between PAH patients and control subjects at rest and during pharmacological and positional stress", "Changes in hemodynamic, IVUS and echocardiographic data in PAH patients according to ΔmPAP during pharmacological and positional stress", "Cardiovascular reserve responses during pharmacological and positional stress", "Pharmacological and positional stress", "Cardiovascular reserve in PAH versus control patients", "Study limitations", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Normal pulmonary circulation is characterized by low pressure and low vascular resistance. Mean pulmonary arterial pressure (mPAP) at rest is virtually independent of age and rarely exceeds 20 mm Hg (14 ± 3.3 mm Hg). In healthy individuals, passive distension of compliant pulmonary circulation and active flow-mediated vasodilation allows the pulmonary vasculature to accommodate increased cardiac output (CO) with only a modest increase in mPAP and a fall in pulmonary vascular resistance (PVR) [1-3]. Invasive hemodynamic monitoring during incremental exercise testing is technically difficult to perform and not routinely incorporated into clinical exercise testing. A recent systematic review has reported an age-dependent increase of mPAP that may exceed 30 mm Hg particularly in subjects aged ≥ 50 years, making it difficult to clearly define normal mPAP values during exercise [2]. In idiopathic pulmonary arterial hypertension (PAH), exercise capacity is markedly impaired due to an inefficient lung gas exchange (ventilation/perfusion mismatching with an increased dead space ventilation) and the inability of the heart to adequately increase pulmonary blood flow during exercise [4]. The pathophysiological mechanisms leading to an abnormal exercise response include an intrinsic abnormality in the pulmonary vasculature due to the pulmonary arterial (PA) wall remodeling [5] and a reduction in stroke volume and right ventricular (RV) ejection fraction [6]. Laskey et al. have demonstrated that both steady and pulsatile components of the PA vascular hydraulic load have considerable impact on exercise response in primary pulmonary hypertension [7]. It has already been reported that improvement in exercise tolerance in PAH patients with chronic therapy is independently related to improvements in pulmonary hemodynamics measured in exercise but not in resting conditions, suggesting an improve in the vascular reserve [8]. Lau et al. did not observe any significant beneficial effects of bosentan on arterial stiffness following 6-months of therapy [9].\nExercise echo-Doppler is being used with increased frequency in the assessment of patients with known or suspected pulmonary vascular disease, focusing on the change in Doppler-estimated PAP with exercise. However, there are surprisingly few data about RV function at exercise, especially considering that the impaired RV functional reserve could get involved in the mechanism of exercise limitation in PAH and other forms of pulmonary vascular diseases [10]. Recently, Blumberg et al. showed that the ability to increase the cardiac index during exercise is an important determinant of exercise capacity and it is linked to survival in patients with PH [11].\nWe hypothesized that abnormalities in cardiovascular reserve would be associated with impaired hemodynamic response to pharmacological stress and worse outcome in PAH. Therefore, the first aim of the present study was to perform RV systolic function assessment (echocardiography) and hemodynamic monitoring (right heart catheterization) including local elastic properties of proximal PA wall (intravascular ultrasound, IVUS) during dobutamine stress in patients with PAH. The second aim was to evaluate the association between the cardiovascular reserve and the outcome during two years follow-up.", "The investigation conforms with the principles outlined in the Declaration of Helsinki. 
The study protocol was approved by the Institutional Ethics Committee of the Hospital Universitari Vall d’Hebron (Barcelona), and all patients gave written informed consent.", "Eighteen consecutive patients with PAH (Dana Point group 1) under specific drug therapy who underwent a follow-up cardiac catheterization at our institution were included in the study from January 2007 to September 2009. The patients were in NYHA function class II-III, with no clinical and pharmacological changes in the last 4–6 months. Exclusion criteria were: refusal to participate in the study or being in NYHA function class IV. The diagnosis of PAH was made according to the standard algorithm including a right heart catheterization [12]. Causes of PAH were idiopathic PH (n = 12), PH related to connective tissue disease (n = 3), surgically corrected congenital heart disease (n = 1), PH associated with human immunodeficiency virus (n = 1) and porto-pulmonary hypertension (n = 1). Chronic medication included oral anticoagulants, diuretics on demand, bosentan, sildenafil, inhaled iloprost and epoprostenol, as well as combination therapies on clinical judgement. Age and sex matched control subjects were recruited initially referred for cardiac catheterization due to clinically suspected PAH, without any other heart or lung disease. They underwent the same invasive study protocol after documentation of normal pulmonary arterial hemodynamics.\nAll subjects underwent a routine right heart catheterization and simultaneous inferior lobe medium-sized elastic pulmonary artery IVUS in the supine position and breathing room air. A transthoracic echocardiographic study was performed concomitantly in PAH patients by a single experienced examiner. All variables were obtained at rest and during dobutamine stress test with simultaneous Trendelenburg (DST). DST consisted of low-dose dobutamine infusion (10 mcg/kg/min) in order to increase myocardial contractility and heart rate, and 30° Trendelenburg position in order to increase venous return (preload), both during 10 minutes, unless symptoms (shortness of breath, chest pain, systemic hypertension with systolic blood pressure ≥ 180 mm Hg or tachyarrhythmia other than sinus tachycardia) were observed. These three variables (venous return or preload, heart rate and myocardial contractility) are the leading factors responsible for the increase in CO during exercise [1]. We choose a low-dose dobutamine stress test with simultaneous Trendelenburg as an easy maneuver to induce a purely passive increase of pulmonary flow and a change in cardiac contractility, in order to assess the cardiovascular reserve [2].\nPAH patients were prospectively followed up for 2 years. Physicians who carried out the clinical follow-up were blinded to the hemodynamic, IVUS and echocardiographic results.", "A 7 F Swan-Ganz catheter (Edwards Lifesciences, USA) was inserted into a brachial vein and a 5 F end-hole catheter was inserted into the right radial artery to monitor systemic arterial pressure. Both catheters were connected to fluid-filled transducers, which were positioned at the anterior axillary line level and zeroed at atmospheric pressure. Right atrial, PAP and pulmonary capillary wedge pressures were all measured at end-expiration. CO was calculated using the Fick method. 
In patients with tricuspid regurgitation and low CO, such as those with PAH, the thermodilution method has not been reported to be more accurate than the Fick method [13], however in this series no patient presented tricuspid regurgitation greater than mild in the echocardiographic assessment nor at rest neither at peak stress. PVR was calculated as: (mPAP-pulmonary capillary wedge pressure)/CO and total pulmonary resistance (TPR) as: mPAP/CO. Pulmonary vascular capacitance (PC) was estimated by the stroke volume/pulse pressure ratio (SV/pPAP) [14]. The changes in mPAP were normalized by the changes in CO (ΔmPAP/ΔCO) during pharmacological stress, in order to interpret exercise-induced increases in mPAP relative to increases in blood flow [15].\nIVUS examination was performed with an Eagle Eye Gold catheter 20 MHz, 3.5 F (Volcano Corporation, USA) with an axial resolution of 200 μm and an automatic pullback of 0.5 mm/s. The images were obtained from a segmental PA of the inferior lobe (elastic PA between 2–3 mm) [16-18] and stored in digital form. Both diastolic and systolic cross-sectional areas of the studied segment were analyzed off-line by two observers unaware of clinical and hemodynamic findings. We estimated IVUS pulsatility (IVUSp) as: (systolic-diastolic lumen area)/diastolic lumen area × 100. The physiological adaptation of the vessel wall to stress was estimated by the elastic modulus (EM) or pressure/elastic strain index (pulse pressure/IVUSp), an expression of the intrinsic PA wall viscoelastic properties and buffering function. Intra- and inter-observer validation of IVUS measurements in our laboratory has been previously published [16,18].", "Baseline and stress echocardiography was performed using commercially available equipment (Vivid 7 digital GE Medical System) with a standard 2D broad-band phased array M4S transducer and tissue Doppler imaging software. The transducer was maximally aligned to optimize endocardial visualization and spectral displays of Doppler profiles. Real-time 2-D and colour Doppler myocardial imaging were performed in the apical 4-chamber view as well as the parasternal short-axis and subcostal views. The predominantly longitudinal contractile pattern of the RV can be exploited to assess RV systolic function [19]. We estimated the global RV systolic longitudinal function by the tricuspid annular plane systolic excursion (TAPSE) measured from the systolic displacement of the RV free wall-tricuspid annular plane junction in the apical 4-chamber view M-mode recordings. Myocardial peak velocity during ejection phase (Sm) was assessed by tissue Doppler imaging in the basal segment of the RV free wall using spectral pulsed wave tissue Doppler recorded at a sweep speed of 100–150 mm/s. Myocardial acceleration during isovolumic contraction (IVA) was calculated as the maximal isovolumetric myocardial velocity divided by the time to peak systolic velocity, as previously described by Vogel et al. [20]. This method seems to be less load-dependent compared to the other two indices. Patients were required to hold their breath and images were obtained immediately after expiration for better image quality. All patients were in sinus rhythm, and an average of 3 to 5 measurements from consecutive cardiac cycles were employed for data analysis. All examinations were recorded digitally for subsequent blinded off-line analysis on EchoPAC GE Medical System. 
The estimation of intraobserver and interobserver reproducibility was analyzed.", "Cardiovascular reserve was expressed as the change (increase or decrease) in heart rate (ΔHR, chronotropic reserve), RV systolic function (ΔIVA, systolic reserve) and pulmonary vascular function (ΔEM and ΔPVR, vascular reserve) during DST when compared with rest [21].", "Continuous variables are expressed as mean ± SEM. Based on the rounded mean + 2SD of the increase in mPAP in our healthy control group during DST (2.8 + 1.8 mm Hg), PAH patients were divided, prior to analysis, into two groups according to their hemodynamic response to pharmacological stress: group 1 included those patients whose mPAP during stress increased > 5 mm Hg with respect to the resting value, and group 2 comprised those patients with a mPAP increase ≤ 5 mm Hg. Independent sample t-tests were used to compare differences between the control and PAH groups; and paired t-tests were used to compare the effects of stress maneuver within each group. Chi-squared was used for comparing proportions of patients. Intergroup variation was analyzed using one-way ANOVA.\nThe association between hemodynamic response (ΔCO, ΔmPAP) and cardiovascular reserve (ΔIVA, ΔEM, ΔPVR) were explored using linear regression analysis (Pearson coefficient). A two-sided P value < 0.05 was regarded as significant. Data analysis were carried out using SPSS 17.0 for Windows 7 software.", "The age and gender of PAH subjects and control subjects were well matched (Table 1). Table 2 shows hemodynamic and IVUS data of both, control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant difference in heart rate, SV, CO, right atrial pressure and pulmonary capillary wedge pressure between both groups.\nDemographic, anthropometric and clinical data of control subjects and patients with PAH\nBSA: body surface area; 6MWD: six minute walking distance.\nHemodynamic and IVUS data of control subjects and patients with PAH at rest and during stress maneuver\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.\nDuring DST healthy controls showed an increase of CO, SV and heart rate (P < 0.05) with a significant reduction in PVR and TPR, and improvement of IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of control subjects exceeded 20 mm Hg of mPAP at rest and 30 mm Hg during stress.\nAll PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study and no complications were observed. No patients included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. Only six of 18 PAH patients increased SV, and the heart rate increase was significantly lower than control subjects (29 ± 3.8 vs. 
41 ± 2 bpm, P = 0.034), therefore the CO increment was mainly dependent on heart rate increase. mPAP, pPAP and EM increased significantly and PC significantly decreased during stress (Table 2). However, they showed an increase in all the RV systolic function indexes during stress (TAPSE 16.8 ± 1.3 vs. 20.2 ± 1.1 mm, P = 0.02; Sm 12.0 ± 0.5 vs. 14.7 ± 0.8 cm ⋅ s-1, P = 0.003; IVA 3.8 ± 0.3 vs. 7.8 ± 0.9 m ⋅ s-2, P = 0.0003). The CO increase was less marked than in healthy controls (P < 0.05), and consequently, ΔmPAP/ΔCO was higher (9.6 ± 3.1 vs. 0.7 ± 0.1 mm Hg/L/min, P = 0.046) in PAH patients than in controls (Figure 1). High quality RV velocity curves were obtained both at rest and stress in 16 out of 18 patients.\nRelationship between mean pulmonary artery pressure (mPAP) and cardiac output at rest and during pharmacological and positional stress. (Filled circle: PAH patients; filled square: control patients).", "Nine patients increased mPAP > 5 mm Hg (group 1) and nine patients changed mPAP ≤ 5 mm Hg (group 2) during stress. Etiology of group 1 was 5 IPAH, 2 scleroderma-associated PAH, 1 congenital cardiac shunt and 1 HIV PH. Etiology of group 2 was 7 IPAH, 1 scleroderma-associated PAH and 1 porto-pulmonary hypertension. Neither demographic nor clinical differences between PAH group 1 and PAH group 2 were found (Table 1). Accordingly, neither hemodynamic nor IVUS data showed differences between both PAH patients groups at rest (Table 3).\nHemodynamic, IVUS and echocardiographic data at rest and during stress maneuver of both PAH groups\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; PCWP: pulmonary capillary wedge pressure; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; RAP: right atrial pressure; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance.\nBoth PAH groups increased CO during DST (P < 0.05), although without significant differences between them (1.14 ± 0.3 vs. 1.9 ± 0.3 L/min, NS). Only PAH group 1 showed a significant increase in mPAP, decreasing PC and increasing EM significantly, with no change in PVR and TPR. PAH group 2 decreased PVR and TPR without significant change in PC, IVUSp and EM (Table 3). PVR decreased in 2/9 patients in group 1 and in 8/9 in group 2 (P < 0.05). PC decreased in 9/9 patients in group 1 and in only 4/9 in group 2 (P = 0.08). Starting from a similar EM at rest, the EM of PAH group 1 was significantly higher than PAH group 2 (362 ± 55 vs. 187 ± 23 mm Hg, P < 0.05, Table 3) during DST.\nRV systolic function indexes were similar between both PAH patients groups at rest. DST unmasked a significant lower increase of IVA of PAH group 1 with respect to PAH group 2 (5.9 ± 0.7 vs. 9.9 ± 1.5 m ⋅ s-2) (Table 3). Concomitantly, SV decreased (P < 0.05) in PAH group 1 during stress.", "Control subjects showed a higher chronotropic (Δheart rate) and systolic reserve (measured by ΔSV) than PAH patients (P < 0.05). The negative change in ΔEM and ΔPVR during stress revealed an increased vascular reserve associated with a low ΔmPAP/ΔCO ratio (Table 4).\nCardiovascular reserve response of control subjects and both PAH groups\n†p < 0.05, PAH 1 vs. PAH 2; *p < 0.05, Control vs. PAH 1; §p < 0.05, Control vs. 
PAH 2.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance. Δ = delta.\nConsidering all PAH patients, resting EM, but neither PVR nor PC, was correlated with ΔmPAP (r = 0.49, P < 0.005) and ΔCO (r = -0.72, P < 0.0001).\nGlobal cardiovascular reserve was impaired in PAH group 1, showing the higher increase in ΔEM, higher ΔmPAP/ΔCO ratio, with a negative change in ΔSV and a positive change in ΔPVR and ΔTPR. By contrast PAH group 2, showed some extent of cardiovascular reserve, illustrated by the changes of ΔEM, ΔPVR, ΔmPAP/ΔCO ratio and ΔSV with respect to control patients (Table 4).\nIn PAH patients, in whom RV systolic function was analyzed, TAPSE correlated with mPAP and PVR (r = 0.58 and r = 0.51, respectively; P < 0.05), whereas, IVA and Sm were correlated with CO (r = 0.32 and r = 0.5, respectively; P < 0.05) (Table 5). Finally, ΔEM only correlated with ΔmPAP (r = 0.56, P < 0.05) and ΔIVA was correlated with ΔCO (r = 0.5, P < 0.05) (Figure 2).\nCorrelations between right ventricular systolic tissue Doppler variables and hemodynamics during both conditions (rest and stress maneuver)\nCO: cardiac output; IVA: myocardial isovolumic acceleration; mPAP: mean and pulse arterial; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; TAPSE: tricuspid annular plane systolic excursion. (r: Pearson coefficient).\nRelationship between hemodynamic response and cardiovascular reserve. A. Correlation between delta elastic modulus (ΔEM) and delta mean pulmonary artery pressure (ΔmPAP); B. correlation between ΔEM and delta cardiac output (ΔCO); C. correlation between delta myocardial isovolumic acceleration (ΔIVA) and ΔmPAP; D. correlation between ΔIVA and ΔCO in PAH patients. (delta = value during dobutamine stress test and Trendelenburg minus value at rest).\nThe interobserver and intraobserver variabilities for IVA measurements were 4.4% and 3.4% respectively.\nIn the 2-year clinical follow-up there were two deaths/pulmonary transplantations in PAH group 1 and one death in PAH group 2 (P > 0.05).", "Cardiovascular reserve is emerging as a strong predictor of outcome in different cardiovascular diseases [21]. From a physiological point of view, cardiovascular reserve is a measure of cardiovascular response to exercise or pharmacological stresses (dobutamine infusion between 4 and 10 mcg/kg/min) [22]. Although exercise stress is the gold standard to evaluate of the pulmonary vascular pressure-flow relationships, exercise hemodynamics in PAH patients have been poorly studied, and factors that may have an impact on PAP response to exercise, such as exercise method, exercise intensity, position and age, have not been accounted for. The stress maneuver used in our study provided by low-dose dobutamine plus 30° Trendelenburg position works by a purely passive increasing in CO without directly influence on the PA wall viscoelastic properties. In experimental pulmonary hypertension, dobutamine infusion at a rate of 10 mcg/kg/min has no flow-independent effects on the normal or acutely hypertensive circulation. 
Higher doses may have a constricting or dilating effect depending on the pre-existing vascular tone [22-24].\nTaking into account the extent of heart rate and CO reached by healthy subjects during pharmacological and positional stress, we achieved a cardiovascular stress level similar to a slight/moderate exercise (heart rate 100–110 bpm and cardiac output 10–14 L/min) [2]. According to the data reviewed by Kovacs et al., mPAP values during slight exercise in healthy subjects were 29.4 ± 8.4 mm Hg, 20.0 ± 4.7 mm Hg and 18.2 ± 5.1 mm Hg in subjects aged ≥ 50 years, 30–50 years and less than 30 years, respectively [2]. Our healthy controls were aged 51 ± 6 years (range 40–60 years; 50% ≤ 50 years) and showed a similar mPAP (18 ± 4 mm Hg) with a similar CO increase (doubled) during DST.", "In the control group, the marked increase in CO during DST did not cause any significant change in mPAP and determined a low ΔmPAP/ΔCO ratio (0.7 ± 0.2 mm Hg/L/min). This value corresponds well to the cohort of Kovacs et al. which reported a ΔmPAP/ΔCO ~1.06 mm Hg/L/min [25].\nThe modest increment in mPAP relative to CO during pharmacological stress is attributable to passive recruitment and distension of a normally compliant pulmonary circulation with active flow-mediated vasodilation, decreasing PVR and TPR [15]. Studies into the regulation of pulmonary vascular tone during exercise demonstrate the importance of nitric oxide in the exercise-induced pulmonary vasodilatation, which is mediated in part via blunting of the vasoconstrictor influence of endothelin [26]. Concomitantly, pharmacological stress produced an increase in arterial pulsatility (estimated by IVUSp), and a decrease of EM, expressing a preserved buffering function. It is accepted that the arterial wall buffering function is determined not only by arterial elastic properties, but also by the viscous properties of the wall. The characterization of wall buffering function has been estimated by means of the ratio between viscous index/elastic index [27]. Considering our stress condition as a mainly passive condition (with no significant change in viscous index), a decreased EM with a negative ΔEM, would be associated with a preserved buffering function and buffering function reserve, respectively. The negative change in ΔPVR with a low ΔmPAP/ΔCO ratio reflects a preserved pulmonary vasodilator reserve. In clinical practice, ventricular systolic reserve is usually defined by a change in ejection fraction or SV during exercise or dobutamine infusion. Even though we did not assess RV function indices in the control group, the observed CO increase was composed by a 56% increase in heart rate (chronotropic reserve) and 20% (13 mL) increase in SV (systolic reserve).\nAlthough, both controls and PAH patients had similar heart rate at rest, PAH cohort showed an impaired chronotropic response during stress maneuver. This chronotropic incompetence has been previously documented by Provencher et al. and may reflect the loss of normal physiological reserve secondary to significant autonomic nervous system abnormalities and probably as a result of down-regulation of β-adrenoreceptors [28,29].\nBoth PAH patient groups had similar resting hemodynamics and chronotropic reserve. However, the higher increase in mPAP and pPAP with similar increase of CO during stress in PAH group 1 with respect to PAH group 2, would be related to the reduced recruitability and distensibility of more highly remodeled pulmonary vessels. 
This illustrates a lack of physiological adaptation of the PA wall to increased flow in relation with a lower vasodilation reserve (positive ΔPVR), a higher PA wall remodeling (higher resting EM) and a concomitantly lower buffering function (higher ΔEM). By contrast, PAH group 2 preserved some extent of vascular reserve secondary to a vasodilation response (negative ΔPVR) and a lower impairment of buffering function (lower positive ΔEM). This would explained the significant lower ΔmPAP/ΔCO ratio than group 1. Accordingly, we have previously reported that PAH patients with higher IVUSp and lesser EM displayed an absolute PA vasodilation during acute vasoreactivity testing [18]. We cannot discard the presence of alterations in the control of pulmonary vascular tone during DST, resulting in blunted pulmonary vasodilation. Since both PAH groups have neither demographic (age, gender or body surface area) nor clinical differences (functional class, 6 minutes walking distance, etiology of PAH), we can speculate that PAH group 1 could have higher endothelial dysfunction with higher imbalance between vasodilators and vasoconstrictors than group 2, explaining the significant higher ΔmPAP/ΔCO ratio [26].\nIn accordance with previous data, resting hemodynamic measurements are poorly correlated with the response to pharmacological stress [30]. However, EM at rest was significantly correlated with ΔmPAP and ΔCO. Accordingly, Kubo et al. showed that the percentage of wall thickness was highly correlated with ∆mPAP during exercise in patients with severe emphysema [31]. The correlation between ΔmPAP and ΔEM (Figure 2) suggests that PA wall remodeling and buffering function impairment would be associated with the lower vascular reserve.\nIn the context of PAH, evidence of RV dysfunction and clinical right-sided heart failure at rest have been shown to be the most important determinants of morbidity and mortality, independently of PAP values. We used three measures of longitudinal RV shortening in an effort to characterize simple and reproducible measurements of global RV systolic function [19]. The mean reference value of TAPSE is 23 mm (16–30), Sm 15 cm ⋅ s-1 (10–19) and IVA 3.7 m ⋅ s-2 (2.2-5.2) [19]. Among them, TAPSE, a simple and clinically useful tool to estimate RV function in PAH patients, has been shown to predict survival in PAH [32]. Preliminary evidence suggests that a decrease in TAPSE with exercise was strongly associated with adverse clinical events in PAH patients within one year of follow up [33]. However, Giusca et al. suggested that tricuspid ring motion is only loosely related to RV systolic function, being highly dependent on afterload and overall motion of the heart, thus failing to reflect RV longitudinal function accurately [34]. This may explain why TAPSE changes are more significantly related to changes in mPAP and PVR than to true changes in RV systolic function such as CO. Myocardial deformation parameters provide a more accurate picture of the contractile status of the RV free wall [19,34]. IVA appears as a relatively load-independent estimator of the RV systolic response to stress, probably reflecting true changes in contractility and in CO induced by DST. Sm of RV basal free wall is also better related to CO than TAPSE. However, in this work it showed a lower ability to identify systolic reserve than IVA, since there were no Sm differences between both PAH patients groups either at rest or during stress. 
Although both PAH patients groups showed similar resting RV function, PAH group 1 showed a lower RV systolic reserve than PAH group 2, estimated by a lower increase in IVA and a decrease in SV during DST (P < 0.05). Systolic reserve is dependent on several factors, such as ventricular contractility, ventricular remodeling and, myocardial interstitial fibrosis. The correlation between ΔCO and ΔIVA (Figure 2) suggests that RV contractility impairment would explain a lower systolic RV reserve. However, we cannot discard a stress-induced ischemia and attenuated oxygen supply to the right myocardium during stress maneuver in more severe PAH patients that could explain their impaired RV systolic reserve [35,36].\nRecently, Blumberg et al. correlated exercise hemodynamics with peak oxygen uptake and determined their prognostic significance in PAH patients. Among hemodynamic variables, only exercise cardiac index and the slope of the pulmonary pressure/flow relationship were significant prognostic indicators [11]. Therefore, the exaggerated increase in mPAP with no concomitant increase in CO (abnormal slope of the pressure/flow relationship) during DST despite similar resting hemodynamics, allows speculating a worse outcome of PAH group 1 with respect to group 2 [11].", "Although care must be taken when comparing hemodynamic response induced by physical exercise with pharmacological stress produced by low-dose dobutamine infusion, the similar response to exercise and to dobutamine infusion at 10 mcg/kg/min in patients with PH following the Mustard operation is compelling [37].\nIn addition, our pharmacological stress was a step closer to real exercise, since increased preload was achieved with the addition of 30° Trendelenburg. In fact, our stress maneuver doubled the CO in the control population. The stress with dobutamine and Trendelenburg works by a purely passive effect on PVR and PC, mimicking the CO response to moderate exercise without interfere with PA vascular tone. Invasive recordings of exercise hemodynamics in PAH require an intensive protocol, best performed by an experienced team, and thus it does not belong in the routine evaluation of PAH patients [8,24]. Our safe stressor protocol should be viewed as an easier and more reproducible maneuver than physical exercise in the catheterization laboratory.\nThe relative contributions of longitudinal and transverse shortening to overall RV function have been quantified recently. Although, we only assess RV systolic reserve by longitudinal shortening indices, Brown et al. showed that improved RV function following pulmonary vasodilator therapy occurs solely from improvements in longitudinal contraction, suggesting that longitudinal shortening may represent the afterload-responsive element of RV functional recovery [38]. 
Finally, we do not estimate a possible contribution of impaired diastolic reserve in the cardiovascular adaptation to the stress maneuver.", "CO: Cardiac output; DST: Dobutamine stress test with simultaneous 30° Trendelenburg; EM: Elastic modulus; IVA: Myocardial acceleration during isovolumic contraction; IVUS: Intravascular ultrasound; IVUSp: IVUS pulsatility; mPAP: Mean pulmonary arterial pressure; PA: Pulmonary artery; PAH: Pulmonary arterial hypertension; PC: Pulmonary capacitance; pPAP: Pulse PAP; PVR: Pulmonary vascular resistance; RV: Right ventricle; Sm: Myocardial peak velocity during ejection phase; TAPSE: Tricuspid annular plane systolic excursion; TPR: Total pulmonary resistance.", "The authors declare that they have no competing interests.", "ED and JCG conceived of the study, participated in its design, conducted the study, analyze the data, and wrote the manuscript. RA participated in the design of the study, conducted the study and helped write the manuscript. CA and NB conducted the study and analyze the data. MLM conceived of the study and participated in its design. AR conceived of the study, participated in its design, and wrote the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2466/14/69/prepub\n" ]
[ "Background", "Methods", "Ethics statement", "Study population", "Hemodynamic and IVUS studies", "Transthoracic echocardiography-Doppler study", "Cardiovascular reserve analysis", "Statistical analysis", "Results", "Comparison between PAH patients and control subjects at rest and during pharmacological and positional stress", "Changes in hemodynamic, IVUS and echocardiographic data in PAH patients according to ΔmPAP during pharmacological and positional stress", "Cardiovascular reserve responses during pharmacological and positional stress", "Discussion", "Pharmacological and positional stress", "Cardiovascular reserve in PAH versus control patients", "Study limitations", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Normal pulmonary circulation is characterized by low pressure and low vascular resistance. Mean pulmonary arterial pressure (mPAP) at rest is virtually independent of age and rarely exceeds 20 mm Hg (14 ± 3.3 mm Hg). In healthy individuals, passive distension of compliant pulmonary circulation and active flow-mediated vasodilation allows the pulmonary vasculature to accommodate increased cardiac output (CO) with only a modest increase in mPAP and a fall in pulmonary vascular resistance (PVR) [1-3]. Invasive hemodynamic monitoring during incremental exercise testing is technically difficult to perform and not routinely incorporated into clinical exercise testing. A recent systematic review has reported an age-dependent increase of mPAP that may exceed 30 mm Hg particularly in subjects aged ≥ 50 years, making it difficult to clearly define normal mPAP values during exercise [2]. In idiopathic pulmonary arterial hypertension (PAH), exercise capacity is markedly impaired due to an inefficient lung gas exchange (ventilation/perfusion mismatching with an increased dead space ventilation) and the inability of the heart to adequately increase pulmonary blood flow during exercise [4]. The pathophysiological mechanisms leading to an abnormal exercise response include an intrinsic abnormality in the pulmonary vasculature due to the pulmonary arterial (PA) wall remodeling [5] and a reduction in stroke volume and right ventricular (RV) ejection fraction [6]. Laskey et al. have demonstrated that both steady and pulsatile components of the PA vascular hydraulic load have considerable impact on exercise response in primary pulmonary hypertension [7]. It has already been reported that improvement in exercise tolerance in PAH patients with chronic therapy is independently related to improvements in pulmonary hemodynamics measured in exercise but not in resting conditions, suggesting an improve in the vascular reserve [8]. Lau et al. did not observe any significant beneficial effects of bosentan on arterial stiffness following 6-months of therapy [9].\nExercise echo-Doppler is being used with increased frequency in the assessment of patients with known or suspected pulmonary vascular disease, focusing on the change in Doppler-estimated PAP with exercise. However, there are surprisingly few data about RV function at exercise, especially considering that the impaired RV functional reserve could get involved in the mechanism of exercise limitation in PAH and other forms of pulmonary vascular diseases [10]. Recently, Blumberg et al. showed that the ability to increase the cardiac index during exercise is an important determinant of exercise capacity and it is linked to survival in patients with PH [11].\nWe hypothesized that abnormalities in cardiovascular reserve would be associated with impaired hemodynamic response to pharmacological stress and worse outcome in PAH. Therefore, the first aim of the present study was to perform RV systolic function assessment (echocardiography) and hemodynamic monitoring (right heart catheterization) including local elastic properties of proximal PA wall (intravascular ultrasound, IVUS) during dobutamine stress in patients with PAH. The second aim was to evaluate the association between the cardiovascular reserve and the outcome during two years follow-up.", " Ethics statement The investigation conforms with the principles outlined in the Declaration of Helsinki. 
The study protocol was approved by the Institutional Ethics Committee of the Hospital Universitari Vall d’Hebron (Barcelona), and all patients gave written informed consent.\n Study population Eighteen consecutive patients with PAH (Dana Point group 1) under specific drug therapy who underwent a follow-up cardiac catheterization at our institution were included in the study from January 2007 to September 2009. The patients were in NYHA functional class II-III, with no clinical or pharmacological changes in the previous 4–6 months. Exclusion criteria were refusal to participate in the study or NYHA functional class IV. The diagnosis of PAH was made according to the standard algorithm, including a right heart catheterization [12]. Causes of PAH were idiopathic PH (n = 12), PH related to connective tissue disease (n = 3), surgically corrected congenital heart disease (n = 1), PH associated with human immunodeficiency virus (n = 1) and porto-pulmonary hypertension (n = 1). Chronic medication included oral anticoagulants, diuretics on demand, bosentan, sildenafil, inhaled iloprost and epoprostenol, as well as combination therapies on clinical judgement. Age- and sex-matched control subjects, initially referred for cardiac catheterization because of clinically suspected PAH but without any other heart or lung disease, were recruited. They underwent the same invasive study protocol after documentation of normal pulmonary arterial hemodynamics.\nAll subjects underwent a routine right heart catheterization and simultaneous inferior lobe medium-sized elastic pulmonary artery IVUS in the supine position and breathing room air. A transthoracic echocardiographic study was performed concomitantly in PAH patients by a single experienced examiner. All variables were obtained at rest and during a dobutamine stress test with simultaneous Trendelenburg (DST). DST consisted of low-dose dobutamine infusion (10 mcg/kg/min) in order to increase myocardial contractility and heart rate, and 30° Trendelenburg position in order to increase venous return (preload), both during 10 minutes, unless symptoms (shortness of breath, chest pain, systemic hypertension with systolic blood pressure ≥ 180 mm Hg or tachyarrhythmia other than sinus tachycardia) were observed. These three variables (venous return or preload, heart rate and myocardial contractility) are the leading factors responsible for the increase in CO during exercise [1]. We chose a low-dose dobutamine stress test with simultaneous Trendelenburg as an easy maneuver to induce a purely passive increase of pulmonary flow and a change in cardiac contractility, in order to assess the cardiovascular reserve [2].\nPAH patients were prospectively followed up for 2 years. Physicians who carried out the clinical follow-up were blinded to the hemodynamic, IVUS and echocardiographic results.\n Hemodynamic and IVUS studies A 7 F Swan-Ganz catheter (Edwards Lifesciences, USA) was inserted into a brachial vein and a 5 F end-hole catheter was inserted into the right radial artery to monitor systemic arterial pressure. Both catheters were connected to fluid-filled transducers, which were positioned at the anterior axillary line level and zeroed at atmospheric pressure. Right atrial, PAP and pulmonary capillary wedge pressures were all measured at end-expiration. CO was calculated using the Fick method. In patients with tricuspid regurgitation and low CO, such as those with PAH, the thermodilution method has not been reported to be more accurate than the Fick method [13]; however, in this series no patient presented tricuspid regurgitation greater than mild on echocardiographic assessment, either at rest or at peak stress. PVR was calculated as (mPAP - pulmonary capillary wedge pressure)/CO and total pulmonary resistance (TPR) as mPAP/CO. Pulmonary vascular capacitance (PC) was estimated by the stroke volume/pulse pressure ratio (SV/pPAP) [14]. The changes in mPAP were normalized by the changes in CO (ΔmPAP/ΔCO) during pharmacological stress, in order to interpret exercise-induced increases in mPAP relative to increases in blood flow [15].
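The resistance and capacitance indices above are simple ratios of the measured pressures and flows. As a minimal illustrative sketch (not the study's analysis code; the function name and example values are invented), they could be computed as follows in Python:

    def hemodynamic_indices(mpap, pcwp, spap, dpap, co, hr):
        """Pressures in mm Hg, CO in L/min, HR in beats/min."""
        sv = co / hr * 1000.0        # stroke volume, mL per beat
        ppap = spap - dpap           # pulse pulmonary arterial pressure, mm Hg
        pvr = (mpap - pcwp) / co     # pulmonary vascular resistance, mm Hg/L/min (Wood units)
        tpr = mpap / co              # total pulmonary resistance, mm Hg/L/min
        pc = sv / ppap               # pulmonary vascular capacitance, mL/mm Hg
        return {"SV": sv, "pPAP": ppap, "PVR": pvr, "TPR": tpr, "PC": pc}

    rest = hemodynamic_indices(mpap=50, pcwp=10, spap=80, dpap=30, co=4.5, hr=75)
    stress = hemodynamic_indices(mpap=58, pcwp=11, spap=92, dpap=34, co=5.5, hr=100)
    dmpap_dco = (58 - 50) / (5.5 - 4.5)   # ΔmPAP/ΔCO slope for this example, mm Hg/L/min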
IVUS examination was performed with an Eagle Eye Gold catheter 20 MHz, 3.5 F (Volcano Corporation, USA) with an axial resolution of 200 μm and an automatic pullback of 0.5 mm/s. The images were obtained from a segmental PA of the inferior lobe (elastic PA between 2–3 mm) [16-18] and stored in digital form. Both diastolic and systolic cross-sectional areas of the studied segment were analyzed off-line by two observers unaware of clinical and hemodynamic findings. We estimated IVUS pulsatility (IVUSp) as: (systolic-diastolic lumen area)/diastolic lumen area × 100. The physiological adaptation of the vessel wall to stress was estimated by the elastic modulus (EM) or pressure/elastic strain index (pulse pressure/IVUSp), an expression of the intrinsic PA wall viscoelastic properties and buffering function. Intra- and inter-observer validation of IVUS measurements in our laboratory has been previously published [16,18].
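For illustration only (invented numbers, not patient data), the IVUS-derived pulsatility and elastic modulus defined above reduce to two small functions:

    def ivus_pulsatility(systolic_area_mm2, diastolic_area_mm2):
        # IVUSp = (systolic - diastolic lumen area) / diastolic lumen area x 100
        return (systolic_area_mm2 - diastolic_area_mm2) / diastolic_area_mm2 * 100.0

    def elastic_modulus(pulse_pressure_mmhg, ivusp_percent):
        # EM (pressure/elastic strain index) = pulse pressure / IVUSp
        return pulse_pressure_mmhg / ivusp_percent

    ivusp = ivus_pulsatility(5.8, 5.0)      # 16% pulsatility
    em = elastic_modulus(50.0, ivusp)       # EM in mm Hg per % of area change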
 Transthoracic echocardiography-Doppler study Baseline and stress echocardiography was performed using commercially available equipment (Vivid 7 digital GE Medical System) with a standard 2D broad-band phased array M4S transducer and tissue Doppler imaging software. The transducer was maximally aligned to optimize endocardial visualization and spectral displays of Doppler profiles. Real-time 2-D and colour Doppler myocardial imaging were performed in the apical 4-chamber view as well as the parasternal short-axis and subcostal views. The predominantly longitudinal contractile pattern of the RV can be exploited to assess RV systolic function [19]. We estimated the global RV systolic longitudinal function by the tricuspid annular plane systolic excursion (TAPSE), measured from the systolic displacement of the RV free wall-tricuspid annular plane junction in the apical 4-chamber view M-mode recordings. Myocardial peak velocity during the ejection phase (Sm) was assessed by tissue Doppler imaging in the basal segment of the RV free wall using spectral pulsed wave tissue Doppler recorded at a sweep speed of 100–150 mm/s. Myocardial acceleration during isovolumic contraction (IVA) was calculated as the maximal isovolumic myocardial velocity divided by the time to peak systolic velocity, as previously described by Vogel et al. [20]. This method seems to be less load-dependent than the other two indices. Patients were required to hold their breath, and images were obtained immediately after expiration for better image quality. All patients were in sinus rhythm, and an average of 3 to 5 measurements from consecutive cardiac cycles was employed for data analysis. All examinations were recorded digitally for subsequent blinded off-line analysis on EchoPAC GE Medical System. The estimation of intraobserver and interobserver reproducibility was analyzed.
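As a sketch of the IVA calculation only (a hypothetical helper, not the EchoPAC software), the index is the peak isovolumic myocardial velocity divided by the time from velocity onset to that peak:

    def isovolumic_acceleration(velocities_m_s, times_s):
        """Velocities (m/s) sampled over the isovolumic contraction wave and their
        time stamps (s), with t = 0 at the onset of the isovolumic velocity spike."""
        peak = max(velocities_m_s)
        time_to_peak = times_s[velocities_m_s.index(peak)]
        return peak / time_to_peak    # IVA, m/s^2

    iva = isovolumic_acceleration([0.00, 0.04, 0.09, 0.12], [0.00, 0.01, 0.02, 0.03])  # 4.0 m/s^2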
 Cardiovascular reserve analysis Cardiovascular reserve was expressed as the change (increase or decrease) in heart rate (ΔHR, chronotropic reserve), RV systolic function (ΔIVA, systolic reserve) and pulmonary vascular function (ΔEM and ΔPVR, vascular reserve) during DST when compared with rest [21].\n Statistical analysis Continuous variables are expressed as mean ± SEM. Based on the rounded mean + 2SD of the increase in mPAP in our healthy control group during DST (2.8 + 1.8 mm Hg), PAH patients were divided, prior to analysis, into two groups according to their hemodynamic response to pharmacological stress: group 1 included those patients whose mPAP during stress increased > 5 mm Hg with respect to the resting value, and group 2 comprised those patients with an mPAP increase ≤ 5 mm Hg. Independent-sample t-tests were used to compare differences between the control and PAH groups, and paired t-tests were used to compare the effects of the stress maneuver within each group. The chi-squared test was used to compare proportions of patients. Intergroup variation was analyzed using one-way ANOVA.\nThe associations between the hemodynamic response (ΔCO, ΔmPAP) and cardiovascular reserve (ΔIVA, ΔEM, ΔPVR) were explored using linear regression analysis (Pearson coefficient). A two-sided P value < 0.05 was regarded as significant. Data analyses were carried out using SPSS 17.0 for Windows 7 software.",
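A minimal sketch of how these comparisons could be reproduced outside SPSS (for example with SciPy); the arrays are placeholders rather than study data, and the grouping rule is the ΔmPAP > 5 mm Hg cutoff defined above:

    import numpy as np
    from scipy import stats

    rest_mpap   = np.array([48.0, 52.0, 55.0, 60.0, 45.0, 58.0])   # invented resting mPAP, mm Hg
    stress_mpap = np.array([55.0, 53.0, 63.0, 66.0, 47.0, 60.0])   # invented mPAP during DST, mm Hg
    delta_mpap  = stress_mpap - rest_mpap
    group1 = delta_mpap > 5.0                                      # group 1: ΔmPAP > 5 mm Hg

    t_paired, p_paired = stats.ttest_rel(stress_mpap, rest_mpap)                # stress effect within a group
    t_ind, p_ind = stats.ttest_ind(rest_mpap[group1], rest_mpap[~group1])       # between-group comparison
    chi2, p_chi, _, _ = stats.chi2_contingency([[2, 7], [8, 1]])                # proportions (e.g., PVR responders)
    f_stat, p_anova = stats.f_oneway(rest_mpap[group1], rest_mpap[~group1], rest_mpap)  # one-way ANOVA
    r, p_r = stats.pearsonr(delta_mpap, stress_mpap)                            # Pearson correlation (e.g., ΔmPAP vs. ΔEM)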
"The investigation conforms with the principles outlined in the Declaration of Helsinki. The study protocol was approved by the Institutional Ethics Committee of the Hospital Universitari Vall d’Hebron (Barcelona), and all patients gave written informed consent.", "Eighteen consecutive patients with PAH (Dana Point group 1) under specific drug therapy who underwent a follow-up cardiac catheterization at our institution were included in the study from January 2007 to September 2009. The patients were in NYHA functional class II-III, with no clinical or pharmacological changes in the previous 4–6 months. Exclusion criteria were refusal to participate in the study or NYHA functional class IV. The diagnosis of PAH was made according to the standard algorithm, including a right heart catheterization [12]. Causes of PAH were idiopathic PH (n = 12), PH related to connective tissue disease (n = 3), surgically corrected congenital heart disease (n = 1), PH associated with human immunodeficiency virus (n = 1) and porto-pulmonary hypertension (n = 1). Chronic medication included oral anticoagulants, diuretics on demand, bosentan, sildenafil, inhaled iloprost and epoprostenol, as well as combination therapies on clinical judgement. Age- and sex-matched control subjects, initially referred for cardiac catheterization because of clinically suspected PAH but without any other heart or lung disease, were recruited. They underwent the same invasive study protocol after documentation of normal pulmonary arterial hemodynamics.\nAll subjects underwent a routine right heart catheterization and simultaneous inferior lobe medium-sized elastic pulmonary artery IVUS in the supine position and breathing room air. A transthoracic echocardiographic study was performed concomitantly in PAH patients by a single experienced examiner. All variables were obtained at rest and during a dobutamine stress test with simultaneous Trendelenburg (DST). DST consisted of low-dose dobutamine infusion (10 mcg/kg/min) in order to increase myocardial contractility and heart rate, and 30° Trendelenburg position in order to increase venous return (preload), both during 10 minutes, unless symptoms (shortness of breath, chest pain, systemic hypertension with systolic blood pressure ≥ 180 mm Hg or tachyarrhythmia other than sinus tachycardia) were observed. These three variables (venous return or preload, heart rate and myocardial contractility) are the leading factors responsible for the increase in CO during exercise [1]. We chose a low-dose dobutamine stress test with simultaneous Trendelenburg as an easy maneuver to induce a purely passive increase of pulmonary flow and a change in cardiac contractility, in order to assess the cardiovascular reserve [2].\nPAH patients were prospectively followed up for 2 years. Physicians who carried out the clinical follow-up were blinded to the hemodynamic, IVUS and echocardiographic results.", "A 7 F Swan-Ganz catheter (Edwards Lifesciences, USA) was inserted into a brachial vein and a 5 F end-hole catheter was inserted into the right radial artery to monitor systemic arterial pressure. Both catheters were connected to fluid-filled transducers, which were positioned at the anterior axillary line level and zeroed at atmospheric pressure. Right atrial, PAP and pulmonary capillary wedge pressures were all measured at end-expiration. CO was calculated using the Fick method. In patients with tricuspid regurgitation and low CO, such as those with PAH, the thermodilution method has not been reported to be more accurate than the Fick method [13]; however, in this series no patient presented tricuspid regurgitation greater than mild on echocardiographic assessment, either at rest or at peak stress. PVR was calculated as (mPAP - pulmonary capillary wedge pressure)/CO and total pulmonary resistance (TPR) as mPAP/CO. Pulmonary vascular capacitance (PC) was estimated by the stroke volume/pulse pressure ratio (SV/pPAP) [14]. The changes in mPAP were normalized by the changes in CO (ΔmPAP/ΔCO) during pharmacological stress, in order to interpret exercise-induced increases in mPAP relative to increases in blood flow [15].\nIVUS examination was performed with an Eagle Eye Gold catheter 20 MHz, 3.5 F (Volcano Corporation, USA) with an axial resolution of 200 μm and an automatic pullback of 0.5 mm/s. The images were obtained from a segmental PA of the inferior lobe (elastic PA between 2–3 mm) [16-18] and stored in digital form. Both diastolic and systolic cross-sectional areas of the studied segment were analyzed off-line by two observers unaware of clinical and hemodynamic findings. We estimated IVUS pulsatility (IVUSp) as: (systolic-diastolic lumen area)/diastolic lumen area × 100. The physiological adaptation of the vessel wall to stress was estimated by the elastic modulus (EM) or pressure/elastic strain index (pulse pressure/IVUSp), an expression of the intrinsic PA wall viscoelastic properties and buffering function. Intra- and inter-observer validation of IVUS measurements in our laboratory has been previously published [16,18].", "Baseline and stress echocardiography was performed using commercially available equipment (Vivid 7 digital GE Medical System) with a standard 2D broad-band phased array M4S transducer and tissue Doppler imaging software. The transducer was maximally aligned to optimize endocardial visualization and spectral displays of Doppler profiles. Real-time 2-D and colour Doppler myocardial imaging were performed in the apical 4-chamber view as well as the parasternal short-axis and subcostal views. The predominantly longitudinal contractile pattern of the RV can be exploited to assess RV systolic function [19]. We estimated the global RV systolic longitudinal function by the tricuspid annular plane systolic excursion (TAPSE), measured from the systolic displacement of the RV free wall-tricuspid annular plane junction in the apical 4-chamber view M-mode recordings. Myocardial peak velocity during the ejection phase (Sm) was assessed by tissue Doppler imaging in the basal segment of the RV free wall using spectral pulsed wave tissue Doppler recorded at a sweep speed of 100–150 mm/s. Myocardial acceleration during isovolumic contraction (IVA) was calculated as the maximal isovolumic myocardial velocity divided by the time to peak systolic velocity, as previously described by Vogel et al. [20]. This method seems to be less load-dependent than the other two indices. Patients were required to hold their breath, and images were obtained immediately after expiration for better image quality. All patients were in sinus rhythm, and an average of 3 to 5 measurements from consecutive cardiac cycles was employed for data analysis. All examinations were recorded digitally for subsequent blinded off-line analysis on EchoPAC GE Medical System. 
The estimation of intraobserver and interobserver reproducibility was analyzed.", "Cardiovascular reserve was expressed as the change (increase or decrease) in heart rate (ΔHR, chronotropic reserve), RV systolic function (ΔIVA, systolic reserve) and pulmonary vascular function (ΔEM and ΔPVR, vascular reserve) during DST when compared with rest [21].", "Continuous variables are expressed as mean ± SEM. Based on the rounded mean + 2SD of the increase in mPAP in our healthy control group during DST (2.8 + 1.8 mm Hg), PAH patients were divided, prior to analysis, into two groups according to their hemodynamic response to pharmacological stress: group 1 included those patients whose mPAP during stress increased > 5 mm Hg with respect to the resting value, and group 2 comprised those patients with a mPAP increase ≤ 5 mm Hg. Independent sample t-tests were used to compare differences between the control and PAH groups; and paired t-tests were used to compare the effects of stress maneuver within each group. Chi-squared was used for comparing proportions of patients. Intergroup variation was analyzed using one-way ANOVA.\nThe association between hemodynamic response (ΔCO, ΔmPAP) and cardiovascular reserve (ΔIVA, ΔEM, ΔPVR) were explored using linear regression analysis (Pearson coefficient). A two-sided P value < 0.05 was regarded as significant. Data analysis were carried out using SPSS 17.0 for Windows 7 software.", " Comparison between PAH patients and control subjects at rest and during pharmacological and positional stress The age and gender of PAH subjects and control subjects were well matched (Table 1). Table 2 shows hemodynamic and IVUS data of both, control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant difference in heart rate, SV, CO, right atrial pressure and pulmonary capillary wedge pressure between both groups.\nDemographic, anthropometric and clinical data of control subjects and patients with PAH\nBSA: body surface area; 6MWD: six minute walking distance.\nHemodynamic and IVUS data of control subjects and patients with PAH at rest and during stress maneuver\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.\nDuring DST healthy controls showed an increase of CO, SV and heart rate (P < 0.05) with a significant reduction in PVR and TPR, and improvement of IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of control subjects exceeded 20 mm Hg of mPAP at rest and 30 mm Hg during stress.\nAll PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study and no complications were observed. No patients included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. 
Only six of 18 PAH patients increased SV, and the heart rate increase was significantly lower than control subjects (29 ± 3.8 vs. 41 ± 2 bpm, P = 0.034), therefore the CO increment was mainly dependent on heart rate increase. mPAP, pPAP and EM increased significantly and PC significantly decreased during stress (Table 2). However, they showed an increase in all the RV systolic function indexes during stress (TAPSE 16.8 ± 1.3 vs. 20.2 ± 1.1 mm, P = 0.02; Sm 12.0 ± 0.5 vs. 14.7 ± 0.8 cm ⋅ s-1, P = 0.003; IVA 3.8 ± 0.3 vs. 7.8 ± 0.9 m ⋅ s-2, P = 0.0003). The CO increase was less marked than in healthy controls (P < 0.05), and consequently, ΔmPAP/ΔCO was higher (9.6 ± 3.1 vs. 0.7 ± 0.1 mm Hg/L/min, P = 0.046) in PAH patients than in controls (Figure 1). High quality RV velocity curves were obtained both at rest and stress in 16 out of 18 patients.\nRelationship between mean pulmonary artery pressure (mPAP) and cardiac output at rest and during pharmacological and positional stress. (Filled circle: PAH patients; filled square: control patients).\nThe age and gender of PAH subjects and control subjects were well matched (Table 1). Table 2 shows hemodynamic and IVUS data of both, control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant difference in heart rate, SV, CO, right atrial pressure and pulmonary capillary wedge pressure between both groups.\nDemographic, anthropometric and clinical data of control subjects and patients with PAH\nBSA: body surface area; 6MWD: six minute walking distance.\nHemodynamic and IVUS data of control subjects and patients with PAH at rest and during stress maneuver\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.\nDuring DST healthy controls showed an increase of CO, SV and heart rate (P < 0.05) with a significant reduction in PVR and TPR, and improvement of IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of control subjects exceeded 20 mm Hg of mPAP at rest and 30 mm Hg during stress.\nAll PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study and no complications were observed. No patients included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. Only six of 18 PAH patients increased SV, and the heart rate increase was significantly lower than control subjects (29 ± 3.8 vs. 41 ± 2 bpm, P = 0.034), therefore the CO increment was mainly dependent on heart rate increase. mPAP, pPAP and EM increased significantly and PC significantly decreased during stress (Table 2). However, they showed an increase in all the RV systolic function indexes during stress (TAPSE 16.8 ± 1.3 vs. 20.2 ± 1.1 mm, P = 0.02; Sm 12.0 ± 0.5 vs. 14.7 ± 0.8 cm ⋅ s-1, P = 0.003; IVA 3.8 ± 0.3 vs. 7.8 ± 0.9 m ⋅ s-2, P = 0.0003). 
The CO increase was less marked than in healthy controls (P < 0.05), and consequently, ΔmPAP/ΔCO was higher (9.6 ± 3.1 vs. 0.7 ± 0.1 mm Hg/L/min, P = 0.046) in PAH patients than in controls (Figure 1). High quality RV velocity curves were obtained both at rest and stress in 16 out of 18 patients.\nRelationship between mean pulmonary artery pressure (mPAP) and cardiac output at rest and during pharmacological and positional stress. (Filled circle: PAH patients; filled square: control patients).\n Changes in hemodynamic, IVUS and echocardiographic data in PAH patients according to ΔmPAP during pharmacological and positional stress Nine patients increased mPAP > 5 mm Hg (group 1) and nine patients changed mPAP ≤ 5 mm Hg (group 2) during stress. Etiology of group 1 was 5 IPAH, 2 scleroderma-associated PAH, 1 congenital cardiac shunt and 1 HIV PH. Etiology of group 2 was 7 IPAH, 1 scleroderma-associated PAH and 1 porto-pulmonary hypertension. Neither demographic nor clinical differences between PAH group 1 and PAH group 2 were found (Table 1). Accordingly, neither hemodynamic nor IVUS data showed differences between both PAH patients groups at rest (Table 3).\nHemodynamic, IVUS and echocardiographic data at rest and during stress maneuver of both PAH groups\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; PCWP: pulmonary capillary wedge pressure; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; RAP: right atrial pressure; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance.\nBoth PAH groups increased CO during DST (P < 0.05), although without significant differences between them (1.14 ± 0.3 vs. 1.9 ± 0.3 L/min, NS). Only PAH group 1 showed a significant increase in mPAP, decreasing PC and increasing EM significantly, with no change in PVR and TPR. PAH group 2 decreased PVR and TPR without significant change in PC, IVUSp and EM (Table 3). PVR decreased in 2/9 patients in group 1 and in 8/9 in group 2 (P < 0.05). PC decreased in 9/9 patients in group 1 and in only 4/9 in group 2 (P = 0.08). Starting from a similar EM at rest, the EM of PAH group 1 was significantly higher than PAH group 2 (362 ± 55 vs. 187 ± 23 mm Hg, P < 0.05, Table 3) during DST.\nRV systolic function indexes were similar between both PAH patients groups at rest. DST unmasked a significant lower increase of IVA of PAH group 1 with respect to PAH group 2 (5.9 ± 0.7 vs. 9.9 ± 1.5 m ⋅ s-2) (Table 3). Concomitantly, SV decreased (P < 0.05) in PAH group 1 during stress.\nNine patients increased mPAP > 5 mm Hg (group 1) and nine patients changed mPAP ≤ 5 mm Hg (group 2) during stress. Etiology of group 1 was 5 IPAH, 2 scleroderma-associated PAH, 1 congenital cardiac shunt and 1 HIV PH. Etiology of group 2 was 7 IPAH, 1 scleroderma-associated PAH and 1 porto-pulmonary hypertension. Neither demographic nor clinical differences between PAH group 1 and PAH group 2 were found (Table 1). 
Accordingly, neither hemodynamic nor IVUS data showed differences between both PAH patients groups at rest (Table 3).\nHemodynamic, IVUS and echocardiographic data at rest and during stress maneuver of both PAH groups\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; PCWP: pulmonary capillary wedge pressure; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; RAP: right atrial pressure; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance.\nBoth PAH groups increased CO during DST (P < 0.05), although without significant differences between them (1.14 ± 0.3 vs. 1.9 ± 0.3 L/min, NS). Only PAH group 1 showed a significant increase in mPAP, decreasing PC and increasing EM significantly, with no change in PVR and TPR. PAH group 2 decreased PVR and TPR without significant change in PC, IVUSp and EM (Table 3). PVR decreased in 2/9 patients in group 1 and in 8/9 in group 2 (P < 0.05). PC decreased in 9/9 patients in group 1 and in only 4/9 in group 2 (P = 0.08). Starting from a similar EM at rest, the EM of PAH group 1 was significantly higher than PAH group 2 (362 ± 55 vs. 187 ± 23 mm Hg, P < 0.05, Table 3) during DST.\nRV systolic function indexes were similar between both PAH patients groups at rest. DST unmasked a significant lower increase of IVA of PAH group 1 with respect to PAH group 2 (5.9 ± 0.7 vs. 9.9 ± 1.5 m ⋅ s-2) (Table 3). Concomitantly, SV decreased (P < 0.05) in PAH group 1 during stress.\n Cardiovascular reserve responses during pharmacological and positional stress Control subjects showed a higher chronotropic (Δheart rate) and systolic reserve (measured by ΔSV) than PAH patients (P < 0.05). The negative change in ΔEM and ΔPVR during stress revealed an increased vascular reserve associated with a low ΔmPAP/ΔCO ratio (Table 4).\nCardiovascular reserve response of control subjects and both PAH groups\n†p < 0.05, PAH 1 vs. PAH 2; *p < 0.05, Control vs. PAH 1; §p < 0.05, Control vs. PAH 2.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance. Δ = delta.\nConsidering all PAH patients, resting EM, but neither PVR nor PC, was correlated with ΔmPAP (r = 0.49, P < 0.005) and ΔCO (r = -0.72, P < 0.0001).\nGlobal cardiovascular reserve was impaired in PAH group 1, showing the higher increase in ΔEM, higher ΔmPAP/ΔCO ratio, with a negative change in ΔSV and a positive change in ΔPVR and ΔTPR. By contrast PAH group 2, showed some extent of cardiovascular reserve, illustrated by the changes of ΔEM, ΔPVR, ΔmPAP/ΔCO ratio and ΔSV with respect to control patients (Table 4).\nIn PAH patients, in whom RV systolic function was analyzed, TAPSE correlated with mPAP and PVR (r = 0.58 and r = 0.51, respectively; P < 0.05), whereas, IVA and Sm were correlated with CO (r = 0.32 and r = 0.5, respectively; P < 0.05) (Table 5). 
Finally, ΔEM only correlated with ΔmPAP (r = 0.56, P < 0.05) and ΔIVA was correlated with ΔCO (r = 0.5, P < 0.05) (Figure 2).\nCorrelations between right ventricular systolic tissue Doppler variables and hemodynamics during both conditions (rest and stress maneuver)\nCO: cardiac output; IVA: myocardial isovolumic acceleration; mPAP: mean and pulse arterial; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; TAPSE: tricuspid annular plane systolic excursion. (r: Pearson coefficient).\nRelationship between hemodynamic response and cardiovascular reserve. A. Correlation between delta elastic modulus (ΔEM) and delta mean pulmonary artery pressure (ΔmPAP); B. correlation between ΔEM and delta cardiac output (ΔCO); C. correlation between delta myocardial isovolumic acceleration (ΔIVA) and ΔmPAP; D. correlation between ΔIVA and ΔCO in PAH patients. (delta = value during dobutamine stress test and Trendelenburg minus value at rest).\nThe interobserver and intraobserver variabilities for IVA measurements were 4.4% and 3.4% respectively.\nIn the 2-year clinical follow-up there were two deaths/pulmonary transplantations in PAH group 1 and one death in PAH group 2 (P > 0.05).\nControl subjects showed a higher chronotropic (Δheart rate) and systolic reserve (measured by ΔSV) than PAH patients (P < 0.05). The negative change in ΔEM and ΔPVR during stress revealed an increased vascular reserve associated with a low ΔmPAP/ΔCO ratio (Table 4).\nCardiovascular reserve response of control subjects and both PAH groups\n†p < 0.05, PAH 1 vs. PAH 2; *p < 0.05, Control vs. PAH 1; §p < 0.05, Control vs. PAH 2.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance. Δ = delta.\nConsidering all PAH patients, resting EM, but neither PVR nor PC, was correlated with ΔmPAP (r = 0.49, P < 0.005) and ΔCO (r = -0.72, P < 0.0001).\nGlobal cardiovascular reserve was impaired in PAH group 1, showing the higher increase in ΔEM, higher ΔmPAP/ΔCO ratio, with a negative change in ΔSV and a positive change in ΔPVR and ΔTPR. By contrast PAH group 2, showed some extent of cardiovascular reserve, illustrated by the changes of ΔEM, ΔPVR, ΔmPAP/ΔCO ratio and ΔSV with respect to control patients (Table 4).\nIn PAH patients, in whom RV systolic function was analyzed, TAPSE correlated with mPAP and PVR (r = 0.58 and r = 0.51, respectively; P < 0.05), whereas, IVA and Sm were correlated with CO (r = 0.32 and r = 0.5, respectively; P < 0.05) (Table 5). Finally, ΔEM only correlated with ΔmPAP (r = 0.56, P < 0.05) and ΔIVA was correlated with ΔCO (r = 0.5, P < 0.05) (Figure 2).\nCorrelations between right ventricular systolic tissue Doppler variables and hemodynamics during both conditions (rest and stress maneuver)\nCO: cardiac output; IVA: myocardial isovolumic acceleration; mPAP: mean and pulse arterial; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; TAPSE: tricuspid annular plane systolic excursion. (r: Pearson coefficient).\nRelationship between hemodynamic response and cardiovascular reserve. A. Correlation between delta elastic modulus (ΔEM) and delta mean pulmonary artery pressure (ΔmPAP); B. 
correlation between ΔEM and delta cardiac output (ΔCO); C. correlation between delta myocardial isovolumic acceleration (ΔIVA) and ΔmPAP; D. correlation between ΔIVA and ΔCO in PAH patients. (delta = value during dobutamine stress test and Trendelenburg minus value at rest).\nThe interobserver and intraobserver variabilities for IVA measurements were 4.4% and 3.4% respectively.\nIn the 2-year clinical follow-up there were two deaths/pulmonary transplantations in PAH group 1 and one death in PAH group 2 (P > 0.05).", "The age and gender of PAH subjects and control subjects were well matched (Table 1). Table 2 shows hemodynamic and IVUS data of both, control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant difference in heart rate, SV, CO, right atrial pressure and pulmonary capillary wedge pressure between both groups.\nDemographic, anthropometric and clinical data of control subjects and patients with PAH\nBSA: body surface area; 6MWD: six minute walking distance.\nHemodynamic and IVUS data of control subjects and patients with PAH at rest and during stress maneuver\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.\nDuring DST healthy controls showed an increase of CO, SV and heart rate (P < 0.05) with a significant reduction in PVR and TPR, and improvement of IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of control subjects exceeded 20 mm Hg of mPAP at rest and 30 mm Hg during stress.\nAll PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study and no complications were observed. No patients included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. Only six of 18 PAH patients increased SV, and the heart rate increase was significantly lower than control subjects (29 ± 3.8 vs. 41 ± 2 bpm, P = 0.034), therefore the CO increment was mainly dependent on heart rate increase. mPAP, pPAP and EM increased significantly and PC significantly decreased during stress (Table 2). However, they showed an increase in all the RV systolic function indexes during stress (TAPSE 16.8 ± 1.3 vs. 20.2 ± 1.1 mm, P = 0.02; Sm 12.0 ± 0.5 vs. 14.7 ± 0.8 cm ⋅ s-1, P = 0.003; IVA 3.8 ± 0.3 vs. 7.8 ± 0.9 m ⋅ s-2, P = 0.0003). The CO increase was less marked than in healthy controls (P < 0.05), and consequently, ΔmPAP/ΔCO was higher (9.6 ± 3.1 vs. 0.7 ± 0.1 mm Hg/L/min, P = 0.046) in PAH patients than in controls (Figure 1). High quality RV velocity curves were obtained both at rest and stress in 16 out of 18 patients.\nRelationship between mean pulmonary artery pressure (mPAP) and cardiac output at rest and during pharmacological and positional stress. 
(Filled circle: PAH patients; filled square: control patients).", "Nine patients increased mPAP > 5 mm Hg (group 1) and nine patients changed mPAP ≤ 5 mm Hg (group 2) during stress. Etiology of group 1 was 5 IPAH, 2 scleroderma-associated PAH, 1 congenital cardiac shunt and 1 HIV PH. Etiology of group 2 was 7 IPAH, 1 scleroderma-associated PAH and 1 porto-pulmonary hypertension. Neither demographic nor clinical differences between PAH group 1 and PAH group 2 were found (Table 1). Accordingly, neither hemodynamic nor IVUS data showed differences between both PAH patients groups at rest (Table 3).\nHemodynamic, IVUS and echocardiographic data at rest and during stress maneuver of both PAH groups\n*p < 0.05 between both groups in the same condition; §p < 0.05 between both conditions in the same group.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; PCWP: pulmonary capillary wedge pressure; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; RAP: right atrial pressure; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance.\nBoth PAH groups increased CO during DST (P < 0.05), although without significant differences between them (1.14 ± 0.3 vs. 1.9 ± 0.3 L/min, NS). Only PAH group 1 showed a significant increase in mPAP, decreasing PC and increasing EM significantly, with no change in PVR and TPR. PAH group 2 decreased PVR and TPR without significant change in PC, IVUSp and EM (Table 3). PVR decreased in 2/9 patients in group 1 and in 8/9 in group 2 (P < 0.05). PC decreased in 9/9 patients in group 1 and in only 4/9 in group 2 (P = 0.08). Starting from a similar EM at rest, the EM of PAH group 1 was significantly higher than PAH group 2 (362 ± 55 vs. 187 ± 23 mm Hg, P < 0.05, Table 3) during DST.\nRV systolic function indexes were similar between both PAH patients groups at rest. DST unmasked a significant lower increase of IVA of PAH group 1 with respect to PAH group 2 (5.9 ± 0.7 vs. 9.9 ± 1.5 m ⋅ s-2) (Table 3). Concomitantly, SV decreased (P < 0.05) in PAH group 1 during stress.", "Control subjects showed a higher chronotropic (Δheart rate) and systolic reserve (measured by ΔSV) than PAH patients (P < 0.05). The negative change in ΔEM and ΔPVR during stress revealed an increased vascular reserve associated with a low ΔmPAP/ΔCO ratio (Table 4).\nCardiovascular reserve response of control subjects and both PAH groups\n†p < 0.05, PAH 1 vs. PAH 2; *p < 0.05, Control vs. PAH 1; §p < 0.05, Control vs. PAH 2.\nCO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse arterial pulmonary pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance. Δ = delta.\nConsidering all PAH patients, resting EM, but neither PVR nor PC, was correlated with ΔmPAP (r = 0.49, P < 0.005) and ΔCO (r = -0.72, P < 0.0001).\nGlobal cardiovascular reserve was impaired in PAH group 1, showing the higher increase in ΔEM, higher ΔmPAP/ΔCO ratio, with a negative change in ΔSV and a positive change in ΔPVR and ΔTPR. 
By contrast, PAH group 2 showed some degree of cardiovascular reserve, illustrated by the changes in ΔEM, ΔPVR, the ΔmPAP/ΔCO ratio and ΔSV with respect to control patients (Table 4).\nIn PAH patients, in whom RV systolic function was analyzed, TAPSE correlated with mPAP and PVR (r = 0.58 and r = 0.51, respectively; P < 0.05), whereas IVA and Sm were correlated with CO (r = 0.32 and r = 0.5, respectively; P < 0.05) (Table 5). Finally, ΔEM only correlated with ΔmPAP (r = 0.56, P < 0.05) and ΔIVA was correlated with ΔCO (r = 0.5, P < 0.05) (Figure 2).\nCorrelations between right ventricular systolic tissue Doppler variables and hemodynamics during both conditions (rest and stress maneuver)\nCO: cardiac output; IVA: myocardial isovolumic acceleration; mPAP: mean pulmonary arterial pressure; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; TAPSE: tricuspid annular plane systolic excursion. (r: Pearson coefficient).\nRelationship between hemodynamic response and cardiovascular reserve. A. Correlation between delta elastic modulus (ΔEM) and delta mean pulmonary artery pressure (ΔmPAP); B. correlation between ΔEM and delta cardiac output (ΔCO); C. correlation between delta myocardial isovolumic acceleration (ΔIVA) and ΔmPAP; D. correlation between ΔIVA and ΔCO in PAH patients. (delta = value during dobutamine stress test and Trendelenburg minus value at rest).\nThe interobserver and intraobserver variabilities for IVA measurements were 4.4% and 3.4%, respectively.\nIn the 2-year clinical follow-up there were two deaths/pulmonary transplantations in PAH group 1 and one death in PAH group 2 (P > 0.05).", "This is the first study evaluating the cardiovascular reserve in PAH patients. We show that the hemodynamic response to pharmacological stress with low-dose dobutamine plus the 30° Trendelenburg position is significantly impaired in patients with PAH, and that this impairment is associated with a low RV systolic reserve and pulmonary vascular reserve. The lower cardiovascular reserve is significantly related to a worse hemodynamic adaptation to DST and could be associated with a poor clinical outcome.\n Pharmacological and positional stress Cardiovascular reserve is emerging as a strong predictor of outcome in different cardiovascular diseases [21]. From a physiological point of view, cardiovascular reserve is a measure of the cardiovascular response to exercise or pharmacological stresses (dobutamine infusion between 4 and 10 mcg/kg/min) [22]. Although exercise stress is the gold standard for evaluating pulmonary vascular pressure-flow relationships, exercise hemodynamics in PAH patients have been poorly studied, and factors that may have an impact on the PAP response to exercise, such as exercise method, exercise intensity, position and age, have not been accounted for. The stress maneuver used in our study, low-dose dobutamine plus the 30° Trendelenburg position, acts through a purely passive increase in CO without a direct influence on the PA wall viscoelastic properties. In experimental pulmonary hypertension, dobutamine infusion at a rate of 10 mcg/kg/min has no flow-independent effects on the normal or acutely hypertensive circulation. 
Higher doses may have a constricting or dilating effect depending on the pre-existing vascular tone [22-24].\nTaking into account the extent of heart rate and CO reached by healthy subjects during pharmacological and positional stress, we achieved a cardiovascular stress level similar to a slight/moderate exercise (heart rate 100–110 bpm and cardiac output 10–14 L/min) [2]. According to the data reviewed by Kovacs et al., mPAP values during slight exercise in healthy subjects were 29.4 ± 8.4 mm Hg, 20.0 ± 4.7 mm Hg and 18.2 ± 5.1 mm Hg in subjects aged ≥ 50 years, 30–50 years and less than 30 years, respectively [2]. Our healthy controls were aged 51 ± 6 years (range 40–60 years; 50% ≤ 50 years) and showed a similar mPAP (18 ± 4 mm Hg) with a similar CO increase (doubled) during DST.\nCardiovascular reserve is emerging as a strong predictor of outcome in different cardiovascular diseases [21]. From a physiological point of view, cardiovascular reserve is a measure of cardiovascular response to exercise or pharmacological stresses (dobutamine infusion between 4 and 10 mcg/kg/min) [22]. Although exercise stress is the gold standard to evaluate of the pulmonary vascular pressure-flow relationships, exercise hemodynamics in PAH patients have been poorly studied, and factors that may have an impact on PAP response to exercise, such as exercise method, exercise intensity, position and age, have not been accounted for. The stress maneuver used in our study provided by low-dose dobutamine plus 30° Trendelenburg position works by a purely passive increasing in CO without directly influence on the PA wall viscoelastic properties. In experimental pulmonary hypertension, dobutamine infusion at a rate of 10 mcg/kg/min has no flow-independent effects on the normal or acutely hypertensive circulation. Higher doses may have a constricting or dilating effect depending on the pre-existing vascular tone [22-24].\nTaking into account the extent of heart rate and CO reached by healthy subjects during pharmacological and positional stress, we achieved a cardiovascular stress level similar to a slight/moderate exercise (heart rate 100–110 bpm and cardiac output 10–14 L/min) [2]. According to the data reviewed by Kovacs et al., mPAP values during slight exercise in healthy subjects were 29.4 ± 8.4 mm Hg, 20.0 ± 4.7 mm Hg and 18.2 ± 5.1 mm Hg in subjects aged ≥ 50 years, 30–50 years and less than 30 years, respectively [2]. Our healthy controls were aged 51 ± 6 years (range 40–60 years; 50% ≤ 50 years) and showed a similar mPAP (18 ± 4 mm Hg) with a similar CO increase (doubled) during DST.\n Cardiovascular reserve in PAH versus control patients In the control group, the marked increase in CO during DST did not cause any significant change in mPAP and determined a low ΔmPAP/ΔCO ratio (0.7 ± 0.2 mm Hg/L/min). This value corresponds well to the cohort of Kovacs et al. which reported a ΔmPAP/ΔCO ~1.06 mm Hg/L/min [25].\nThe modest increment in mPAP relative to CO during pharmacological stress is attributable to passive recruitment and distension of a normally compliant pulmonary circulation with active flow-mediated vasodilation, decreasing PVR and TPR [15]. Studies into the regulation of pulmonary vascular tone during exercise demonstrate the importance of nitric oxide in the exercise-induced pulmonary vasodilatation, which is mediated in part via blunting of the vasoconstrictor influence of endothelin [26]. 
Concomitantly, pharmacological stress produced an increase in arterial pulsatility (estimated by IVUSp) and a decrease in EM, reflecting preserved buffering function. It is accepted that the arterial wall buffering function is determined not only by the arterial elastic properties, but also by the viscous properties of the wall. Wall buffering function has been characterized by means of the viscous index/elastic index ratio [27]. Considering our stress condition to be mainly passive (with no significant change in the viscous index), a decreased EM and a negative ΔEM would be associated with preserved buffering function and buffering function reserve, respectively. The negative change in ΔPVR with a low ΔmPAP/ΔCO ratio reflects a preserved pulmonary vasodilator reserve. In clinical practice, ventricular systolic reserve is usually defined by a change in ejection fraction or SV during exercise or dobutamine infusion. Even though we did not assess RV function indices in the control group, the observed CO increase comprised a 56% increase in heart rate (chronotropic reserve) and a 20% (13 mL) increase in SV (systolic reserve).\nAlthough both controls and PAH patients had similar heart rates at rest, the PAH cohort showed an impaired chronotropic response during the stress maneuver. This chronotropic incompetence has been previously documented by Provencher et al. and may reflect the loss of normal physiological reserve secondary to significant autonomic nervous system abnormalities, probably as a result of down-regulation of β-adrenoreceptors [28,29].\nBoth PAH patient groups had similar resting hemodynamics and chronotropic reserve. However, the higher increase in mPAP and pPAP with a similar increase in CO during stress in PAH group 1 with respect to PAH group 2 would be related to the reduced recruitability and distensibility of more highly remodeled pulmonary vessels. This illustrates a lack of physiological adaptation of the PA wall to increased flow, in relation to a lower vasodilation reserve (positive ΔPVR), greater PA wall remodeling (higher resting EM) and a concomitantly lower buffering function (higher ΔEM). By contrast, PAH group 2 preserved some degree of vascular reserve secondary to a vasodilation response (negative ΔPVR) and a lower impairment of buffering function (lower positive ΔEM). This would explain its significantly lower ΔmPAP/ΔCO ratio compared with group 1. Accordingly, we have previously reported that PAH patients with higher IVUSp and lower EM displayed an absolute PA vasodilation during acute vasoreactivity testing [18]. We cannot rule out alterations in the control of pulmonary vascular tone during DST, resulting in blunted pulmonary vasodilation. Since the two PAH groups differed in neither demographic (age, gender or body surface area) nor clinical characteristics (functional class, 6-minute walking distance, etiology of PAH), we can speculate that PAH group 1 could have greater endothelial dysfunction, with a greater imbalance between vasodilators and vasoconstrictors than group 2, explaining its significantly higher ΔmPAP/ΔCO ratio [26].\nIn accordance with previous data, resting hemodynamic measurements are poorly correlated with the response to pharmacological stress [30]. However, EM at rest was significantly correlated with ΔmPAP and ΔCO. Accordingly, Kubo et al. showed that the percentage of wall thickness was highly correlated with ∆mPAP during exercise in patients with severe emphysema [31]. 
Although both controls and PAH patients had similar heart rates at rest, the PAH cohort showed an impaired chronotropic response during the stress maneuver. This chronotropic incompetence has been previously documented by Provencher et al. and may reflect the loss of normal physiological reserve secondary to significant autonomic nervous system abnormalities, probably as a result of down-regulation of β-adrenoreceptors [28,29].

Both PAH patient groups had similar resting hemodynamics and chronotropic reserve. However, the higher increase in mPAP and pPAP with a similar increase in CO during stress in PAH group 1 with respect to PAH group 2 would be related to the reduced recruitability and distensibility of more highly remodeled pulmonary vessels. This illustrates a lack of physiological adaptation of the PA wall to increased flow, in relation with a lower vasodilation reserve (positive ΔPVR), greater PA wall remodeling (higher resting EM) and a concomitantly lower buffering function (higher ΔEM). By contrast, PAH group 2 preserved some extent of vascular reserve secondary to a vasodilation response (negative ΔPVR) and a lesser impairment of buffering function (lower positive ΔEM). This would explain its significantly lower ΔmPAP/ΔCO ratio compared with group 1. Accordingly, we have previously reported that PAH patients with higher IVUSp and lower EM displayed an absolute PA vasodilation during acute vasoreactivity testing [18]. We cannot rule out alterations in the control of pulmonary vascular tone during DST resulting in blunted pulmonary vasodilation. Since the two PAH groups differed neither in demographics (age, gender or body surface area) nor in clinical characteristics (functional class, 6-minute walking distance, etiology of PAH), we can speculate that PAH group 1 could have greater endothelial dysfunction, with a larger imbalance between vasodilators and vasoconstrictors than group 2, explaining its significantly higher ΔmPAP/ΔCO ratio [26].

In accordance with previous data, resting hemodynamic measurements are poorly correlated with the response to pharmacological stress [30]. However, EM at rest was significantly correlated with ΔmPAP and ΔCO. Accordingly, Kubo et al. showed that the percentage of wall thickness was highly correlated with ΔmPAP during exercise in patients with severe emphysema [31]. The correlation between ΔmPAP and ΔEM (Figure 2) suggests that PA wall remodeling and buffering function impairment are associated with the lower vascular reserve.
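The correlation analyses referred to here (EM at rest versus ΔmPAP and ΔCO, and ΔmPAP versus ΔEM in Figure 2) reduce to a Pearson coefficient over per-patient changes. A self-contained sketch follows; it is illustrative only, and any arrays passed to it would be placeholders rather than the study data.

```python
# Illustrative helper (not the study's code): Pearson correlation between two
# per-patient series, e.g. delta_mpap and delta_em. No study data are reproduced here.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

# Usage (placeholder names): r = pearson_r(delta_mpap_per_patient, delta_em_per_patient)
```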
In the context of PAH, evidence of RV dysfunction and clinical right-sided heart failure at rest has been shown to be the most important determinant of morbidity and mortality, independently of PAP values. We used three measures of longitudinal RV shortening in an effort to characterize simple and reproducible measurements of global RV systolic function [19]. The mean reference value of TAPSE is 23 mm (16–30), of Sm 15 cm·s⁻¹ (10–19) and of IVA 3.7 m·s⁻² (2.2–5.2) [19]. Among them, TAPSE, a simple and clinically useful tool to estimate RV function in PAH patients, has been shown to predict survival in PAH [32]. Preliminary evidence suggests that a decrease in TAPSE with exercise is strongly associated with adverse clinical events in PAH patients within one year of follow-up [33]. However, Giusca et al. suggested that tricuspid ring motion is only loosely related to RV systolic function, being highly dependent on afterload and overall motion of the heart, and thus failing to reflect RV longitudinal function accurately [34]. This may explain why TAPSE changes are more significantly related to changes in mPAP and PVR than to true changes in RV systolic function such as CO. Myocardial deformation parameters provide a more accurate picture of the contractile status of the RV free wall [19,34]. IVA appears to be a relatively load-independent estimator of the RV systolic response to stress, probably reflecting true changes in contractility and in CO induced by DST. Sm of the RV basal free wall is also better related to CO than TAPSE; however, in this work it showed a lower ability to identify systolic reserve than IVA, since there were no Sm differences between the two PAH patient groups either at rest or during stress. Although both PAH patient groups showed similar resting RV function, PAH group 1 showed a lower RV systolic reserve than PAH group 2, estimated by a lower increase in IVA and a decrease in SV during DST (P < 0.05). Systolic reserve depends on several factors, such as ventricular contractility, ventricular remodeling and myocardial interstitial fibrosis. The correlation between ΔCO and ΔIVA (Figure 2) suggests that RV contractility impairment would explain the lower systolic RV reserve. However, we cannot rule out stress-induced ischemia with attenuated oxygen supply to the right ventricular myocardium during the stress maneuver in more severe PAH patients, which could explain their impaired RV systolic reserve [35,36].

Recently, Blumberg et al. correlated exercise hemodynamics with peak oxygen uptake and determined their prognostic significance in PAH patients. Among hemodynamic variables, only exercise cardiac index and the slope of the pulmonary pressure/flow relationship were significant prognostic indicators [11]. Therefore, the exaggerated increase in mPAP with no concomitant increase in CO (an abnormal slope of the pressure/flow relationship) during DST, despite similar resting hemodynamics, suggests a worse outcome for PAH group 1 with respect to group 2 [11].

Study limitations

Although care must be taken when comparing the hemodynamic response induced by physical exercise with the pharmacological stress produced by low-dose dobutamine infusion, the similar response to exercise and to dobutamine infusion at 10 mcg/kg/min in patients with PH following the Mustard operation is compelling [37].

In addition, our pharmacological stress was a step closer to real exercise, since increased preload was achieved with the addition of 30° Trendelenburg. In fact, our stress maneuver doubled the CO in the control population. The stress with dobutamine and Trendelenburg works by a purely passive effect on PVR and PC, mimicking the CO response to moderate exercise without interfering with PA vascular tone. Invasive recording of exercise hemodynamics in PAH requires an intensive protocol, best performed by an experienced team, and thus does not belong in the routine evaluation of PAH patients [8,24]. Our safe stressor protocol should be viewed as an easier and more reproducible maneuver than physical exercise in the catheterization laboratory.

The relative contributions of longitudinal and transverse shortening to overall RV function have been quantified recently. Although we only assessed RV systolic reserve by longitudinal shortening indices, Brown et al. showed that improved RV function following pulmonary vasodilator therapy occurs solely from improvements in longitudinal contraction, suggesting that longitudinal shortening may represent the afterload-responsive element of RV functional recovery [38]. Finally, we did not estimate a possible contribution of impaired diastolic reserve to the cardiovascular adaptation to the stress maneuver.
Conclusions

Pulmonary vascular reserve and RV systolic reserve are impaired in PAH patients. PA wall remodeling, pulmonary buffering function and RV contractility appeared to be the main determinants of cardiovascular reserve dysfunction in PAH patients. A lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and could be associated with a poor clinical outcome. Further study is needed to elucidate whether cardiovascular reserve dysfunction adds independent prognostic information in a multivariate analysis. In addition, further studies need to assess whether improvement of cardiovascular reserve could be a therapeutic target in patients with established pulmonary hypertension.

Abbreviations

CO: Cardiac output; DST: Dobutamine stress test with simultaneous 30° Trendelenburg; EM: Elastic modulus; IVA: Myocardial acceleration during isovolumic contraction; IVUS: Intravascular ultrasound; IVUSp: IVUS pulsatility; mPAP: Mean pulmonary arterial pressure; PA: Pulmonary artery; PAH: Pulmonary arterial hypertension; PC: Pulmonary capacitance; pPAP: Pulse PAP; PVR: Pulmonary vascular resistance; RV: Right ventricle; Sm: Myocardial peak velocity during ejection phase; TAPSE: Tricuspid annular plane systolic excursion; TPR: Total pulmonary resistance.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

ED and JCG conceived of the study, participated in its design, conducted the study, analyzed the data and wrote the manuscript. RA participated in the design of the study, conducted the study and helped write the manuscript. CA and NB conducted the study and analyzed the data. MLM conceived of the study and participated in its design. AR conceived of the study, participated in its design, and wrote the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2466/14/69/prepub
[ null, "methods", null, null, null, null, null, null, "results", null, null, null, "discussion", null, null, null, "conclusions", null, null, null, null ]
[ "Pulmonary hypertension", "Dobutamine", "Cardiovascular reserve", "IVUS", "Echocardiography" ]
Background

Normal pulmonary circulation is characterized by low pressure and low vascular resistance. Mean pulmonary arterial pressure (mPAP) at rest is virtually independent of age and rarely exceeds 20 mm Hg (14 ± 3.3 mm Hg). In healthy individuals, passive distension of the compliant pulmonary circulation and active flow-mediated vasodilation allow the pulmonary vasculature to accommodate increased cardiac output (CO) with only a modest increase in mPAP and a fall in pulmonary vascular resistance (PVR) [1-3]. Invasive hemodynamic monitoring during incremental exercise testing is technically difficult to perform and is not routinely incorporated into clinical exercise testing. A recent systematic review has reported an age-dependent increase of mPAP that may exceed 30 mm Hg, particularly in subjects aged ≥ 50 years, making it difficult to clearly define normal mPAP values during exercise [2].

In idiopathic pulmonary arterial hypertension (PAH), exercise capacity is markedly impaired due to inefficient lung gas exchange (ventilation/perfusion mismatching with increased dead space ventilation) and the inability of the heart to adequately increase pulmonary blood flow during exercise [4]. The pathophysiological mechanisms leading to an abnormal exercise response include an intrinsic abnormality in the pulmonary vasculature due to pulmonary arterial (PA) wall remodeling [5] and a reduction in stroke volume and right ventricular (RV) ejection fraction [6]. Laskey et al. have demonstrated that both steady and pulsatile components of the PA vascular hydraulic load have a considerable impact on the exercise response in primary pulmonary hypertension [7]. It has already been reported that improvement in exercise tolerance in PAH patients under chronic therapy is independently related to improvements in pulmonary hemodynamics measured during exercise but not at rest, suggesting an improvement in vascular reserve [8]. Lau et al. did not observe any significant beneficial effects of bosentan on arterial stiffness following 6 months of therapy [9]. Exercise echo-Doppler is being used with increasing frequency in the assessment of patients with known or suspected pulmonary vascular disease, focusing on the change in Doppler-estimated PAP with exercise. However, there are surprisingly few data about RV function during exercise, especially considering that impaired RV functional reserve could be involved in the mechanism of exercise limitation in PAH and other forms of pulmonary vascular disease [10]. Recently, Blumberg et al. showed that the ability to increase the cardiac index during exercise is an important determinant of exercise capacity and is linked to survival in patients with PH [11].

We hypothesized that abnormalities in cardiovascular reserve would be associated with an impaired hemodynamic response to pharmacological stress and a worse outcome in PAH. Therefore, the first aim of the present study was to perform RV systolic function assessment (echocardiography) and hemodynamic monitoring (right heart catheterization), including local elastic properties of the proximal PA wall (intravascular ultrasound, IVUS), during dobutamine stress in patients with PAH. The second aim was to evaluate the association between cardiovascular reserve and outcome during a two-year follow-up.

Methods

Ethics statement

The investigation conforms with the principles outlined in the Declaration of Helsinki.
The study protocol was approved by the Institutional Ethics Committee of the Hospital Universitari Vall d'Hebron (Barcelona), and all patients gave written informed consent.

Study population

Eighteen consecutive patients with PAH (Dana Point group 1) under specific drug therapy who underwent a follow-up cardiac catheterization at our institution were included in the study from January 2007 to September 2009. The patients were in NYHA functional class II-III, with no clinical or pharmacological changes in the previous 4–6 months. Exclusion criteria were refusal to participate in the study or NYHA functional class IV. The diagnosis of PAH was made according to the standard algorithm, including a right heart catheterization [12]. Causes of PAH were idiopathic PH (n = 12), PH related to connective tissue disease (n = 3), surgically corrected congenital heart disease (n = 1), PH associated with human immunodeficiency virus (n = 1) and porto-pulmonary hypertension (n = 1). Chronic medication included oral anticoagulants, diuretics on demand, bosentan, sildenafil, inhaled iloprost and epoprostenol, as well as combination therapies on clinical judgement. Age- and sex-matched control subjects, initially referred for cardiac catheterization because of clinically suspected PAH but without any other heart or lung disease, were recruited; they underwent the same invasive study protocol after documentation of normal pulmonary arterial hemodynamics. All subjects underwent a routine right heart catheterization and simultaneous IVUS of an inferior lobe medium-sized elastic pulmonary artery in the supine position while breathing room air. A transthoracic echocardiographic study was performed concomitantly in PAH patients by a single experienced examiner. All variables were obtained at rest and during the dobutamine stress test with simultaneous Trendelenburg (DST). DST consisted of low-dose dobutamine infusion (10 mcg/kg/min), to increase myocardial contractility and heart rate, and a 30° Trendelenburg position, to increase venous return (preload), both for 10 minutes, unless symptoms (shortness of breath, chest pain, systemic hypertension with systolic blood pressure ≥ 180 mm Hg, or tachyarrhythmia other than sinus tachycardia) were observed. These three variables (venous return or preload, heart rate and myocardial contractility) are the leading factors responsible for the increase in CO during exercise [1]. We chose a low-dose dobutamine stress test with simultaneous Trendelenburg as an easy maneuver to induce a purely passive increase in pulmonary flow and a change in cardiac contractility, in order to assess the cardiovascular reserve [2]. PAH patients were prospectively followed up for 2 years. Physicians who carried out the clinical follow-up were blinded to the hemodynamic, IVUS and echocardiographic results.
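For clarity, the termination rules of the DST protocol described above can be summarized in a few lines. This is a toy encoding of the stopping criteria stated in the text, not clinical software; the argument names are assumptions.

```python
# Toy encoding of the DST stopping criteria described above (assumed field names).
def should_stop_dst(symptomatic: bool, systolic_bp_mm_hg: float, non_sinus_tachyarrhythmia: bool) -> bool:
    """True if the dobutamine + 30° Trendelenburg test should be interrupted early."""
    return (
        symptomatic                      # shortness of breath or chest pain
        or systolic_bp_mm_hg >= 180      # systemic hypertension threshold
        or non_sinus_tachyarrhythmia     # any tachyarrhythmia other than sinus tachycardia
    )
```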
Hemodynamic and IVUS studies

A 7 F Swan-Ganz catheter (Edwards Lifesciences, USA) was inserted into a brachial vein, and a 5 F end-hole catheter was inserted into the right radial artery to monitor systemic arterial pressure. Both catheters were connected to fluid-filled transducers, which were positioned at the level of the anterior axillary line and zeroed at atmospheric pressure. Right atrial, PAP and pulmonary capillary wedge pressures were all measured at end-expiration. CO was calculated using the Fick method. In patients with tricuspid regurgitation and low CO, such as those with PAH, the thermodilution method has not been reported to be more accurate than the Fick method [13]; in any case, no patient in this series presented more than mild tricuspid regurgitation on echocardiographic assessment, either at rest or at peak stress. PVR was calculated as (mPAP − pulmonary capillary wedge pressure)/CO and total pulmonary resistance (TPR) as mPAP/CO. Pulmonary vascular capacitance (PC) was estimated by the stroke volume/pulse pressure ratio (SV/pPAP) [14]. The changes in mPAP were normalized by the changes in CO (ΔmPAP/ΔCO) during pharmacological stress, in order to interpret exercise-induced increases in mPAP relative to increases in blood flow [15].
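A brief sketch of the derived variables defined in this paragraph is shown below; it simply restates the formulas (pPAP, PVR, TPR, PC and the ΔmPAP/ΔCO normalization) in code. Variable names and units are assumptions consistent with the text (pressures in mm Hg, CO in L/min, heart rate in bpm), and the sketch is illustrative rather than the authors' code.

```python
# Sketch of the derived hemodynamic indices defined above (illustrative, not the authors' code).

def derived_hemodynamics(mpap, pcwp, spap, dpap, co, hr):
    """Return pulse PAP (mm Hg), PVR and TPR (Wood units) and PC (mL/mm Hg)."""
    sv_ml = co * 1000.0 / hr      # stroke volume from Fick CO and heart rate
    ppap = spap - dpap            # pulse pulmonary arterial pressure
    return {
        "pPAP": ppap,
        "PVR": (mpap - pcwp) / co,
        "TPR": mpap / co,
        "PC": sv_ml / ppap,       # stroke volume / pulse pressure ratio
    }

def normalized_mpap_change(rest, stress):
    """ΔmPAP/ΔCO: change in mPAP per unit change in flow during the stress maneuver."""
    return (stress["mpap"] - rest["mpap"]) / (stress["co"] - rest["co"])
```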
IVUS examination was performed with an Eagle Eye Gold catheter (20 MHz, 3.5 F; Volcano Corporation, USA) with an axial resolution of 200 μm and an automatic pullback of 0.5 mm/s. The images were obtained from a segmental PA of the inferior lobe (elastic PA of 2–3 mm) [16-18] and stored in digital form. Both diastolic and systolic cross-sectional areas of the studied segment were analyzed off-line by two observers unaware of the clinical and hemodynamic findings. We estimated IVUS pulsatility (IVUSp) as (systolic − diastolic lumen area)/diastolic lumen area × 100. The physiological adaptation of the vessel wall to stress was estimated by the elastic modulus (EM), or pressure/elastic strain index (pulse pressure/IVUSp), an expression of the intrinsic PA wall viscoelastic properties and buffering function. Intra- and inter-observer validation of IVUS measurements in our laboratory has been published previously [16,18].
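The two IVUS-derived indices defined here are simple ratios; the sketch below restates them in code. It is illustrative only. Note that the scaling of EM depends on whether the strain is expressed as a fraction or a percentage; treating it as a fraction is an assumption, consistent with the EM values reported later (in the hundreds of mm Hg).

```python
# Illustrative restatement of the IVUS indices defined above (not the authors' code).

def ivus_pulsatility(systolic_lumen_area_mm2, diastolic_lumen_area_mm2):
    """IVUSp (%) = (systolic - diastolic lumen area) / diastolic lumen area x 100."""
    return (systolic_lumen_area_mm2 - diastolic_lumen_area_mm2) / diastolic_lumen_area_mm2 * 100.0

def elastic_modulus(pulse_pap_mm_hg, ivusp_percent):
    """Pressure/elastic strain index: pulse pressure divided by the pulsatility as a fraction."""
    return pulse_pap_mm_hg / (ivusp_percent / 100.0)   # assumes strain expressed as a fraction
```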
Transthoracic echocardiography-Doppler study

Baseline and stress echocardiography was performed using commercially available equipment (Vivid 7 digital, GE Medical System) with a standard 2D broad-band phased-array M4S transducer and tissue Doppler imaging software. The transducer was maximally aligned to optimize endocardial visualization and spectral displays of Doppler profiles. Real-time 2-D and colour Doppler myocardial imaging was performed in the apical 4-chamber view as well as the parasternal short-axis and subcostal views. The predominantly longitudinal contractile pattern of the RV can be exploited to assess RV systolic function [19]. We estimated global RV systolic longitudinal function by the tricuspid annular plane systolic excursion (TAPSE), measured from the systolic displacement of the RV free wall-tricuspid annular plane junction in apical 4-chamber view M-mode recordings. Myocardial peak velocity during the ejection phase (Sm) was assessed by tissue Doppler imaging in the basal segment of the RV free wall using spectral pulsed-wave tissue Doppler recorded at a sweep speed of 100–150 mm/s. Myocardial acceleration during isovolumic contraction (IVA) was calculated as the maximal isovolumic myocardial velocity divided by the time to peak systolic velocity, as previously described by Vogel et al. [20]; this method seems to be less load-dependent than the other two indices. Patients were required to hold their breath, and images were obtained immediately after expiration for better image quality. All patients were in sinus rhythm, and an average of 3 to 5 measurements from consecutive cardiac cycles was employed for data analysis. All examinations were recorded digitally for subsequent blinded off-line analysis on EchoPAC (GE Medical System). Intraobserver and interobserver reproducibility was analyzed.
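IVA, as defined in this paragraph, is simply a peak velocity divided by the time taken to reach it; a minimal restatement follows. The example numbers are hypothetical, not patient data.

```python
# Illustrative restatement of the isovolumic acceleration (IVA) definition above.

def isovolumic_acceleration(peak_isovolumic_velocity_m_s, time_to_peak_s):
    """IVA (m·s⁻²) = maximal isovolumic myocardial velocity / time to its peak, per the text."""
    return peak_isovolumic_velocity_m_s / time_to_peak_s

# Example with hypothetical values: a 0.075 m/s peak reached 0.020 s after onset
# gives 3.75 m·s⁻², within the reference range quoted in the Discussion.
print(isovolumic_acceleration(0.075, 0.020))
```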
Cardiovascular reserve analysis

Cardiovascular reserve was expressed as the change (increase or decrease) in heart rate (ΔHR, chronotropic reserve), RV systolic function (ΔIVA, systolic reserve) and pulmonary vascular function (ΔEM and ΔPVR, vascular reserve) during DST compared with rest [21].

Statistical analysis

Continuous variables are expressed as mean ± SEM. Based on the rounded mean + 2 SD of the increase in mPAP in our healthy control group during DST (2.8 + 1.8 mm Hg), PAH patients were divided, prior to analysis, into two groups according to their hemodynamic response to pharmacological stress: group 1 included those patients whose mPAP during stress increased by > 5 mm Hg with respect to the resting value, and group 2 comprised those patients with an mPAP increase ≤ 5 mm Hg. Independent-sample t-tests were used to compare differences between the control and PAH groups, and paired t-tests were used to compare the effects of the stress maneuver within each group. Chi-squared tests were used to compare proportions of patients. Intergroup variation was analyzed using one-way ANOVA. The associations between the hemodynamic response (ΔCO, ΔmPAP) and cardiovascular reserve (ΔIVA, ΔEM, ΔPVR) were explored using linear regression analysis (Pearson coefficient). A two-sided P value < 0.05 was regarded as significant. Data analysis was carried out using SPSS 17.0 for Windows 7.
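A minimal sketch of this statistical workflow is shown below, using SciPy in place of SPSS (an assumption for illustration; the original analysis was run in SPSS 17.0). The grouping rule follows the > 5 mm Hg ΔmPAP threshold, and no study data are reproduced.

```python
# Sketch only: the comparisons described above, expressed with SciPy (SPSS was actually used).
from scipy import stats

def assign_group(delta_mpap_mm_hg):
    """Group 1 if mPAP rises by more than 5 mm Hg during DST, otherwise group 2."""
    return 1 if delta_mpap_mm_hg > 5 else 2

# With per-subject arrays (placeholders, not study data), the tests map as follows:
# stats.ttest_ind(controls, pah)              # unpaired comparison between groups
# stats.ttest_rel(rest_values, stress_values) # paired rest-vs-stress comparison within a group
# stats.chi2_contingency(count_table)         # proportions of patients
# stats.f_oneway(group1, group2, controls)    # one-way ANOVA across groups
# stats.pearsonr(delta_co, delta_iva)         # linear association (Pearson coefficient)
```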
Results

Comparison between PAH patients and control subjects at rest and during pharmacological and positional stress

The age and gender of PAH subjects and control subjects were well matched (Table 1). Table 2 shows hemodynamic and IVUS data of both control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant differences in heart rate, SV, CO, right atrial pressure or pulmonary capillary wedge pressure between the two groups.

Table 1. Demographic, anthropometric and clinical data of control subjects and patients with PAH. BSA: body surface area; 6MWD: six-minute walking distance.

Table 2. Hemodynamic and IVUS data of control subjects and patients with PAH at rest and during the stress maneuver. *P < 0.05 between the two groups in the same condition; §P < 0.05 between the two conditions in the same group. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse pulmonary arterial pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.

During DST healthy controls showed an increase in CO, SV and heart rate (P < 0.05) with a significant reduction in PVR and TPR and improvement of IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of the control subjects exceeded an mPAP of 20 mm Hg at rest or 30 mm Hg during stress.

All PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study, and no complications were observed. No patient included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. Only six of 18 PAH patients increased SV, and the heart rate increase was significantly lower than in control subjects (29 ± 3.8 vs. 41 ± 2 bpm, P = 0.034); therefore the CO increment was mainly dependent on the heart rate increase. mPAP, pPAP and EM increased significantly and PC decreased significantly during stress (Table 2).
Results: Comparison between PAH patients and control subjects at rest and during pharmacological and positional stress: The age and gender of the PAH and control subjects were well matched (Table 1). Table 2 shows the hemodynamic and IVUS data of both control subjects and PAH patients at rest and during DST. PAH patients showed higher mPAP, pPAP, PVR, TPR and EM and lower PC and IVUSp than control subjects (P < 0.05). There were no significant differences in heart rate, SV, CO, right atrial pressure or pulmonary capillary wedge pressure between the two groups.
Table 1. Demographic, anthropometric and clinical data of control subjects and patients with PAH. BSA: body surface area; 6MWD: six-minute walking distance.
Table 2. Hemodynamic and IVUS data of control subjects and patients with PAH at rest and during the stress maneuver. *p < 0.05 between the two groups in the same condition; §p < 0.05 between the two conditions in the same group. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse pulmonary arterial pressures; PCWP: pulmonary capillary wedge pressure; PVR: pulmonary vascular resistance; RAP: right atrial pressure; SV: stroke volume; TPR: total pulmonary resistance.
During DST, healthy controls showed an increase in CO, SV and heart rate (P < 0.05) with a significant reduction in PVR and TPR and an improvement in IVUSp and EM (P < 0.05) (Table 2). These changes led to an attenuated increase in mPAP and pPAP. Mean systolic aortic pressure was 127 mm Hg at rest and 165 mm Hg during stress (P < 0.05). None of the control subjects exceeded an mPAP of 20 mm Hg at rest or 30 mm Hg during stress. All PAH patients tolerated the stress protocol. No dobutamine infusion had to be interrupted at the doses employed for this study, and no complications were observed. No patient included in this study presented greater than mild tricuspid regurgitation (≤ grade 2/4), and there were no relevant changes in its severity during the complete protocol. Only six of the 18 PAH patients increased SV, and their heart rate increase was significantly lower than that of control subjects (29 ± 3.8 vs. 41 ± 2 bpm, P = 0.034); therefore, the CO increment was mainly dependent on the heart rate increase. In PAH patients, mPAP, pPAP and EM increased significantly and PC decreased significantly during stress (Table 2). However, all RV systolic function indexes increased during stress (TAPSE 16.8 ± 1.3 vs. 20.2 ± 1.1 mm, P = 0.02; Sm 12.0 ± 0.5 vs. 14.7 ± 0.8 cm ⋅ s-1, P = 0.003; IVA 3.8 ± 0.3 vs. 7.8 ± 0.9 m ⋅ s-2, P = 0.0003). The CO increase was less marked than in healthy controls (P < 0.05) and, consequently, ΔmPAP/ΔCO was higher in PAH patients than in controls (9.6 ± 3.1 vs. 0.7 ± 0.1 mm Hg/L/min, P = 0.046) (Figure 1). High-quality RV velocity curves were obtained both at rest and during stress in 16 of the 18 patients.
Figure 1. Relationship between mean pulmonary artery pressure (mPAP) and cardiac output at rest and during pharmacological and positional stress. (Filled circles: PAH patients; filled squares: control subjects.)
Changes in hemodynamic, IVUS and echocardiographic data in PAH patients according to ΔmPAP during pharmacological and positional stress: Nine patients increased mPAP by > 5 mm Hg (group 1) and nine patients changed mPAP by ≤ 5 mm Hg (group 2) during stress. The etiologies in group 1 were 5 IPAH, 2 scleroderma-associated PAH, 1 congenital cardiac shunt and 1 HIV-associated PH; the etiologies in group 2 were 7 IPAH, 1 scleroderma-associated PAH and 1 porto-pulmonary hypertension. No demographic or clinical differences between PAH group 1 and PAH group 2 were found (Table 1). Accordingly, neither hemodynamic nor IVUS data showed differences between the two PAH groups at rest (Table 3).
Table 3. Hemodynamic, IVUS and echocardiographic data of both PAH groups at rest and during the stress maneuver. *p < 0.05 between the two groups in the same condition; §p < 0.05 between the two conditions in the same group. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; PCWP: pulmonary capillary wedge pressure; mPAP and pPAP: mean and pulse pulmonary arterial pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; RAP: right atrial pressure; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance.
Both PAH groups increased CO during DST (P < 0.05), without significant differences between them (1.14 ± 0.3 vs. 1.9 ± 0.3 L/min, NS). Only PAH group 1 showed a significant increase in mPAP, with a significant decrease in PC and increase in EM and no change in PVR or TPR. PAH group 2 decreased PVR and TPR without significant changes in PC, IVUSp or EM (Table 3). PVR decreased in 2/9 patients in group 1 and in 8/9 in group 2 (P < 0.05). PC decreased in 9/9 patients in group 1 and in only 4/9 in group 2 (P = 0.08). Starting from a similar EM at rest, the EM of PAH group 1 was significantly higher than that of PAH group 2 during DST (362 ± 55 vs. 187 ± 23 mm Hg, P < 0.05, Table 3). RV systolic function indexes were similar between the two PAH groups at rest. DST unmasked a significantly lower increase in IVA in PAH group 1 than in PAH group 2 (5.9 ± 0.7 vs. 9.9 ± 1.5 m ⋅ s-2) (Table 3). Concomitantly, SV decreased (P < 0.05) in PAH group 1 during stress.
Cardiovascular reserve responses during pharmacological and positional stress: Control subjects showed a higher chronotropic reserve (Δheart rate) and systolic reserve (measured by ΔSV) than PAH patients (P < 0.05). The negative ΔEM and ΔPVR during stress revealed an increased vascular reserve associated with a low ΔmPAP/ΔCO ratio (Table 4).
Table 4. Cardiovascular reserve response of control subjects and both PAH groups. †p < 0.05, PAH 1 vs. PAH 2; *p < 0.05, Control vs. PAH 1; §p < 0.05, Control vs. PAH 2. CO: cardiac output; PC: pulmonary capacitance index; EM: elastic modulus; HR: heart rate; IVA: myocardial isovolumic acceleration; IVUSp: pulmonary arterial pulsatility; mPAP and pPAP: mean and pulse pulmonary arterial pressures; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; SV: stroke volume; TAPSE: tricuspid annular plane systolic excursion; TPR: total pulmonary resistance. Δ = delta.
Considering all PAH patients, resting EM, but not PVR or PC, was correlated with ΔmPAP (r = 0.49, P < 0.005) and ΔCO (r = -0.72, P < 0.0001). Global cardiovascular reserve was impaired in PAH group 1, which showed the larger increase in ΔEM, a higher ΔmPAP/ΔCO ratio, a negative ΔSV, and positive ΔPVR and ΔTPR. By contrast, PAH group 2 showed some degree of cardiovascular reserve, illustrated by the changes in ΔEM, ΔPVR, the ΔmPAP/ΔCO ratio and ΔSV with respect to control subjects (Table 4). In the PAH patients in whom RV systolic function was analyzed, TAPSE correlated with mPAP and PVR (r = 0.58 and r = 0.51, respectively; P < 0.05), whereas IVA and Sm correlated with CO (r = 0.32 and r = 0.5, respectively; P < 0.05) (Table 5). Finally, ΔEM correlated only with ΔmPAP (r = 0.56, P < 0.05), and ΔIVA correlated with ΔCO (r = 0.5, P < 0.05) (Figure 2).
Table 5. Correlations between right ventricular systolic tissue Doppler variables and hemodynamics during both conditions (rest and stress maneuver). CO: cardiac output; IVA: myocardial isovolumic acceleration; mPAP: mean pulmonary arterial pressure; PVR: pulmonary vascular resistance; Sm: myocardial peak systolic velocity; TAPSE: tricuspid annular plane systolic excursion. (r: Pearson coefficient.)
Figure 2. Relationship between hemodynamic response and cardiovascular reserve. A. Correlation between delta elastic modulus (ΔEM) and delta mean pulmonary artery pressure (ΔmPAP); B. correlation between ΔEM and delta cardiac output (ΔCO); C. correlation between delta myocardial isovolumic acceleration (ΔIVA) and ΔmPAP; D. correlation between ΔIVA and ΔCO in PAH patients. (Delta = value during the dobutamine stress test with Trendelenburg minus the value at rest.)
The interobserver and intraobserver variabilities for IVA measurements were 4.4% and 3.4%, respectively. During the 2-year clinical follow-up, there were two deaths/pulmonary transplantations in PAH group 1 and one death in PAH group 2 (P > 0.05).
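The reserve-versus-response relationships reported above (and plotted in Figure 2) are Pearson correlations between per-patient changes from rest to stress. A minimal, purely illustrative sketch follows; the rest/stress arrays are invented and do not reproduce the study data.

```python
# Illustrative sketch of the Figure 2-style analysis: Pearson correlations between
# per-patient deltas (value during DST minus value at rest). All values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 18  # number of PAH patients in the study
em_rest, em_stress = rng.normal(250, 60, n), rng.normal(310, 80, n)
mpap_rest, mpap_stress = rng.normal(50, 10, n), rng.normal(56, 12, n)
iva_rest, iva_stress = rng.normal(3.8, 0.8, n), rng.normal(7.0, 2.0, n)
co_rest, co_stress = rng.normal(4.0, 0.8, n), rng.normal(5.5, 1.0, n)

def delta(stress, rest):
    """Delta = value during DST minus value at rest."""
    return stress - rest

r_em, p_em = stats.pearsonr(delta(em_stress, em_rest), delta(mpap_stress, mpap_rest))
r_iva, p_iva = stats.pearsonr(delta(iva_stress, iva_rest), delta(co_stress, co_rest))
print(f"dEM vs dmPAP: r = {r_em:.2f} (P = {p_em:.3f}); dIVA vs dCO: r = {r_iva:.2f} (P = {p_iva:.3f})")
```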
Discussion: This is the first study evaluating cardiovascular reserve in PAH patients. We show that the hemodynamic response to pharmacological stress with low-dose dobutamine plus the 30° Trendelenburg position is significantly impaired in patients with PAH, and that this impairment is associated with a low RV systolic reserve and pulmonary vascular reserve. The lower cardiovascular reserve is significantly related to a worse hemodynamic adaptation to DST and could be associated with a poor clinical outcome.
Pharmacological and positional stress: Cardiovascular reserve is emerging as a strong predictor of outcome in different cardiovascular diseases [21]. From a physiological point of view, cardiovascular reserve is a measure of the cardiovascular response to exercise or pharmacological stress (dobutamine infusion between 4 and 10 mcg/kg/min) [22]. Although exercise stress is the gold standard for evaluating pulmonary vascular pressure-flow relationships, exercise hemodynamics in PAH patients have been poorly studied, and factors that may have an impact on the PAP response to exercise, such as exercise method, exercise intensity, position and age, have not been accounted for. The stress maneuver used in our study, low-dose dobutamine plus the 30° Trendelenburg position, works through a purely passive increase in CO without directly influencing the PA wall viscoelastic properties. In experimental pulmonary hypertension, dobutamine infusion at a rate of 10 mcg/kg/min has no flow-independent effects on the normal or acutely hypertensive circulation. Higher doses may have a constricting or dilating effect depending on the pre-existing vascular tone [22-24].
Taking into account the heart rate and CO reached by healthy subjects during pharmacological and positional stress, we achieved a cardiovascular stress level similar to slight/moderate exercise (heart rate 100–110 bpm and cardiac output 10–14 L/min) [2]. According to the data reviewed by Kovacs et al., mPAP values during slight exercise in healthy subjects were 29.4 ± 8.4 mm Hg, 20.0 ± 4.7 mm Hg and 18.2 ± 5.1 mm Hg in subjects aged ≥ 50 years, 30–50 years and less than 30 years, respectively [2]. Our healthy controls were aged 51 ± 6 years (range 40–60 years; 50% ≤ 50 years) and showed a similar mPAP (18 ± 4 mm Hg) with a similar (doubled) CO increase during DST.
Cardiovascular reserve in PAH versus control patients: In the control group, the marked increase in CO during DST did not cause any significant change in mPAP and determined a low ΔmPAP/ΔCO ratio (0.7 ± 0.2 mm Hg/L/min). This value corresponds well to the cohort of Kovacs et al., which reported a ΔmPAP/ΔCO of ~1.06 mm Hg/L/min [25]. The modest increment in mPAP relative to CO during pharmacological stress is attributable to passive recruitment and distension of a normally compliant pulmonary circulation with active flow-mediated vasodilation, decreasing PVR and TPR [15]. Studies into the regulation of pulmonary vascular tone during exercise demonstrate the importance of nitric oxide in exercise-induced pulmonary vasodilatation, which is mediated in part via blunting of the vasoconstrictor influence of endothelin [26]. Concomitantly, pharmacological stress produced an increase in arterial pulsatility (estimated by IVUSp) and a decrease in EM, expressing a preserved buffering function.
It is accepted that the arterial wall buffering function is determined not only by arterial elastic properties but also by the viscous properties of the wall. Wall buffering function has been characterized by means of the ratio between the viscous index and the elastic index [27]. Considering our stress condition as mainly passive (with no significant change in the viscous index), a decreased EM and a negative ΔEM would be associated with a preserved buffering function and buffering function reserve, respectively. The negative ΔPVR together with a low ΔmPAP/ΔCO ratio reflects a preserved pulmonary vasodilator reserve. In clinical practice, ventricular systolic reserve is usually defined by a change in ejection fraction or SV during exercise or dobutamine infusion. Even though we did not assess RV function indices in the control group, the observed CO increase was composed of a 56% increase in heart rate (chronotropic reserve) and a 20% (13 mL) increase in SV (systolic reserve). Although both controls and PAH patients had similar heart rates at rest, the PAH cohort showed an impaired chronotropic response during the stress maneuver. This chronotropic incompetence has been documented previously by Provencher et al. and may reflect the loss of normal physiological reserve secondary to significant autonomic nervous system abnormalities, probably as a result of down-regulation of β-adrenoreceptors [28,29]. Both PAH patient groups had similar resting hemodynamics and chronotropic reserve. However, the higher increase in mPAP and pPAP with a similar increase in CO during stress in PAH group 1 with respect to PAH group 2 would be related to the reduced recruitability and distensibility of more highly remodeled pulmonary vessels. This illustrates a lack of physiological adaptation of the PA wall to increased flow, in relation to a lower vasodilation reserve (positive ΔPVR), greater PA wall remodeling (higher resting EM) and a concomitantly lower buffering function (higher ΔEM). By contrast, PAH group 2 preserved some degree of vascular reserve secondary to a vasodilation response (negative ΔPVR) and a lesser impairment of buffering function (lower positive ΔEM). This would explain the significantly lower ΔmPAP/ΔCO ratio compared with group 1. Accordingly, we have previously reported that PAH patients with higher IVUSp and lower EM displayed an absolute PA vasodilation during acute vasoreactivity testing [18]. We cannot discard the presence of alterations in the control of pulmonary vascular tone during DST, resulting in blunted pulmonary vasodilation. Since the two PAH groups showed neither demographic (age, gender or body surface area) nor clinical differences (functional class, six-minute walking distance, etiology of PAH), we can speculate that PAH group 1 could have greater endothelial dysfunction, with a greater imbalance between vasodilators and vasoconstrictors than group 2, explaining the significantly higher ΔmPAP/ΔCO ratio [26]. In accordance with previous data, resting hemodynamic measurements are poorly correlated with the response to pharmacological stress [30]. However, EM at rest was significantly correlated with ΔmPAP and ΔCO. Accordingly, Kubo et al. showed that the percentage of wall thickness was highly correlated with ΔmPAP during exercise in patients with severe emphysema [31]. The correlation between ΔmPAP and ΔEM (Figure 2) suggests that PA wall remodeling and buffering function impairment would be associated with the lower vascular reserve.
In the context of PAH, evidence of RV dysfunction and clinical right-sided heart failure at rest has been shown to be the most important determinant of morbidity and mortality, independently of PAP values. We used three measures of longitudinal RV shortening in an effort to characterize simple and reproducible measurements of global RV systolic function [19]. The mean reference value of TAPSE is 23 mm (16–30), of Sm 15 cm ⋅ s-1 (10–19) and of IVA 3.7 m ⋅ s-2 (2.2–5.2) [19]. Among them, TAPSE, a simple and clinically useful tool to estimate RV function in PAH patients, has been shown to predict survival in PAH [32]. Preliminary evidence suggests that a decrease in TAPSE with exercise is strongly associated with adverse clinical events in PAH patients within one year of follow-up [33]. However, Giusca et al. suggested that tricuspid ring motion is only loosely related to RV systolic function, being highly dependent on afterload and the overall motion of the heart, and thus fails to reflect RV longitudinal function accurately [34]. This may explain why TAPSE changes are more significantly related to changes in mPAP and PVR than to true changes in RV systolic function such as CO. Myocardial deformation parameters provide a more accurate picture of the contractile status of the RV free wall [19,34]. IVA appears to be a relatively load-independent estimator of the RV systolic response to stress, probably reflecting true changes in contractility and in CO induced by DST. Sm of the RV basal free wall is also better related to CO than TAPSE; however, in this work it showed a lower ability to identify systolic reserve than IVA, since there were no Sm differences between the two PAH groups either at rest or during stress. Although both PAH groups showed similar resting RV function, PAH group 1 showed a lower RV systolic reserve than PAH group 2, estimated by a lower increase in IVA and a decrease in SV during DST (P < 0.05). Systolic reserve depends on several factors, such as ventricular contractility, ventricular remodeling and myocardial interstitial fibrosis. The correlation between ΔCO and ΔIVA (Figure 2) suggests that RV contractility impairment would explain the lower systolic RV reserve. However, we cannot discard stress-induced ischemia and an attenuated oxygen supply to the right myocardium during the stress maneuver in more severe PAH patients, which could explain their impaired RV systolic reserve [35,36]. Recently, Blumberg et al. correlated exercise hemodynamics with peak oxygen uptake and determined their prognostic significance in PAH patients; among the hemodynamic variables, only exercise cardiac index and the slope of the pulmonary pressure/flow relationship were significant prognostic indicators [11]. Therefore, the exaggerated increase in mPAP with no concomitant increase in CO (an abnormal slope of the pressure/flow relationship) during DST, despite similar resting hemodynamics, allows us to speculate a worse outcome for PAH group 1 with respect to group 2 [11].
Study limitations: Although care must be taken when comparing the hemodynamic response induced by physical exercise with pharmacological stress produced by low-dose dobutamine infusion, the similar response to exercise and to dobutamine infusion at 10 mcg/kg/min in patients with PH following the Mustard operation is compelling [37]. In addition, our pharmacological stress was a step closer to real exercise, since increased preload was achieved with the addition of 30° Trendelenburg. In fact, our stress maneuver doubled the CO in the control population. The stress with dobutamine and Trendelenburg works through a purely passive effect on PVR and PC, mimicking the CO response to moderate exercise without interfering with PA vascular tone. Invasive recordings of exercise hemodynamics in PAH require an intensive protocol, best performed by an experienced team, and thus do not belong in the routine evaluation of PAH patients [8,24]. Our safe stressor protocol should be viewed as an easier and more reproducible maneuver than physical exercise in the catheterization laboratory. The relative contributions of longitudinal and transverse shortening to overall RV function have been quantified recently. Although we only assessed RV systolic reserve with longitudinal shortening indices, Brown et al. showed that improved RV function following pulmonary vasodilator therapy occurs solely from improvements in longitudinal contraction, suggesting that longitudinal shortening may represent the afterload-responsive element of RV functional recovery [38]. Finally, we did not estimate a possible contribution of impaired diastolic reserve to the cardiovascular adaptation to the stress maneuver.
Conclusions: Pulmonary vascular reserve and RV systolic reserve are impaired in PAH patients. PA wall remodeling, pulmonary buffering function and RV contractility appeared to be the main determinants of cardiovascular reserve dysfunction in PAH patients. The lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and could be associated with a poor clinical outcome. Further study is needed to elucidate whether cardiovascular reserve dysfunction adds independent prognostic information in a multivariate analysis. In addition, further studies need to assess whether improvement of cardiovascular reserve could be a therapeutic target in patients with established pulmonary hypertension. Abbreviations: CO: Cardiac output; DST: Dobutamine stress test with simultaneous 30° Trendelenburg; EM: Elastic modulus; IVA: Myocardial acceleration during isovolumic contraction; IVUS: Intravascular ultrasound; IVUSp: IVUS pulsatility; mPAP: Mean pulmonary arterial pressure; PA: Pulmonary artery; PAH: Pulmonary arterial hypertension; PC: Pulmonary capacitance; pPAP: Pulse PAP; PVR: Pulmonary vascular resistance; RV: Right ventricle; Sm: Myocardial peak velocity during the ejection phase; TAPSE: Tricuspid annular plane systolic excursion; TPR: Total pulmonary resistance. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: ED and JCG conceived of the study, participated in its design, conducted the study, analyzed the data and wrote the manuscript. RA participated in the design of the study, conducted the study and helped write the manuscript. CA and NB conducted the study and analyzed the data. MLM conceived of the study and participated in its design. AR conceived of the study, participated in its design and wrote the manuscript. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2466/14/69/prepub
Background: Exercise capacity is impaired in pulmonary arterial hypertension (PAH). We hypothesized that cardiovascular reserve abnormalities would be associated with an impaired hemodynamic response to pharmacological stress and worse outcome in PAH. Methods: Eighteen PAH patients (p), group 1, NYHA class II/III, and ten controls underwent simultaneous right cardiac catheterization and intravascular ultrasound at rest and during low-dose dobutamine (10 mcg/kg/min) with Trendelenburg (DST). We estimated cardiac output (CO), pulmonary vascular resistance (PVR) and capacitance (PC), and PA elastic modulus (EM). We concomitantly measured tricuspid annular plane systolic excursion (TAPSE), RV myocardial peak systolic velocity (Sm) and isovolumic myocardial acceleration (IVA) in PAH patients. Based on the rounded mean + 2 SD of the increase in mPAP in our healthy control group during DST (2.8 ± 1.8 mm Hg), PAH p were divided into two groups according to the mean PA pressure (mPAP) response during DST, 1: ΔmPAP > 5 mm Hg and 2: ΔmPAP ≤ 5 mm Hg. Cardiovascular reserve was estimated as the change (delta, Δ) during DST compared with rest, including ΔmPAP with respect to ΔCO (ΔmPAP/ΔCO). All patients were prospectively followed up for 2 years. Results: PAH p showed a significantly lower heart rate and CO increase than controls during DST, with a significant mPAP and pulse PAP increase and a higher ΔmPAP/ΔCO (p < 0.05). Hemodynamic, IVUS, and echocardiographic data did not differ between the two PAH groups at rest. In group 1, DST caused a higher ΔEM, ΔmPAP/ΔCO, ΔPVR, and ΔTAPSE than in group 2, with a smaller IVA increase and a negative ΔSV (p < 0.05). TAPSE correlated with mPAP and PVR (p < 0.05), and IVA and Sm correlated with CO (p < 0.05). ΔEM correlated with ΔmPAP, and ΔIVA with ΔCO (p < 0.05). There were two deaths/pulmonary transplantations in group 1 and one death in group 2 during the follow-up (p > 0.05). Conclusions: Pulmonary vascular reserve and RV systolic reserve are significantly impaired in patients with PAH. A lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and could be associated with a poor clinical outcome.
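For reference, the derived indices named in the abstract follow standard hemodynamic definitions. The equations below are those conventional formulas, written out for clarity rather than quoted from the paper; PAWP (pulmonary arterial wedge pressure) is assumed for PVR, and the IVUS-based elastic modulus is omitted because its exact formula is not reproduced in this text.

```latex
% Conventional definitions (assumed, not quoted from the paper):
\begin{align*}
\mathrm{TPR} &= \frac{\mathrm{mPAP}}{\mathrm{CO}}, \qquad
\mathrm{PVR} = \frac{\mathrm{mPAP}-\mathrm{PAWP}}{\mathrm{CO}} \quad \text{(Wood units)}\\[4pt]
\mathrm{PC} &= \frac{\mathrm{SV}}{\mathrm{pPAP}} \quad \text{(mL/mm\,Hg)}\\[4pt]
\frac{\Delta\mathrm{mPAP}}{\Delta\mathrm{CO}} &=
\frac{\mathrm{mPAP}_{\mathrm{DST}}-\mathrm{mPAP}_{\mathrm{rest}}}{\mathrm{CO}_{\mathrm{DST}}-\mathrm{CO}_{\mathrm{rest}}}
\quad \text{(slope of the pressure/flow relationship)}
\end{align*}
```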
Background: Normal pulmonary circulation is characterized by low pressure and low vascular resistance. Mean pulmonary arterial pressure (mPAP) at rest is virtually independent of age and rarely exceeds 20 mm Hg (14 ± 3.3 mm Hg). In healthy individuals, passive distension of the compliant pulmonary circulation and active flow-mediated vasodilation allow the pulmonary vasculature to accommodate increased cardiac output (CO) with only a modest increase in mPAP and a fall in pulmonary vascular resistance (PVR) [1-3]. Invasive hemodynamic monitoring during incremental exercise testing is technically difficult to perform and not routinely incorporated into clinical exercise testing. A recent systematic review reported an age-dependent increase in mPAP that may exceed 30 mm Hg, particularly in subjects aged ≥ 50 years, making it difficult to clearly define normal mPAP values during exercise [2]. In idiopathic pulmonary arterial hypertension (PAH), exercise capacity is markedly impaired due to inefficient lung gas exchange (ventilation/perfusion mismatching with increased dead space ventilation) and the inability of the heart to adequately increase pulmonary blood flow during exercise [4]. The pathophysiological mechanisms leading to an abnormal exercise response include an intrinsic abnormality in the pulmonary vasculature due to pulmonary arterial (PA) wall remodeling [5] and a reduction in stroke volume and right ventricular (RV) ejection fraction [6]. Laskey et al. demonstrated that both the steady and pulsatile components of the PA vascular hydraulic load have a considerable impact on the exercise response in primary pulmonary hypertension [7]. It has already been reported that improvement in exercise tolerance in PAH patients on chronic therapy is independently related to improvements in pulmonary hemodynamics measured during exercise but not at rest, suggesting an improvement in vascular reserve [8]. Lau et al. did not observe any significant beneficial effect of bosentan on arterial stiffness after 6 months of therapy [9]. Exercise echo-Doppler is being used with increasing frequency in the assessment of patients with known or suspected pulmonary vascular disease, focusing on the change in Doppler-estimated PAP with exercise. However, there are surprisingly few data on RV function during exercise, especially considering that impaired RV functional reserve could be involved in the mechanism of exercise limitation in PAH and other forms of pulmonary vascular disease [10]. Recently, Blumberg et al. showed that the ability to increase the cardiac index during exercise is an important determinant of exercise capacity and is linked to survival in patients with PH [11]. We hypothesized that abnormalities in cardiovascular reserve would be associated with an impaired hemodynamic response to pharmacological stress and worse outcome in PAH. Therefore, the first aim of the present study was to perform RV systolic function assessment (echocardiography) and hemodynamic monitoring (right heart catheterization), including assessment of the local elastic properties of the proximal PA wall (intravascular ultrasound, IVUS), during dobutamine stress in patients with PAH. The second aim was to evaluate the association between cardiovascular reserve and outcome during two years of follow-up. Conclusions: Pulmonary vascular reserve and RV systolic reserve are impaired in PAH patients. 
PA wall remodeling, pulmonary buffering function, and RV contractility appeared to be the main factors underlying cardiovascular reserve dysfunction in PAH patients. A lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and could be associated with a poor clinical outcome. Further study is needed to elucidate whether cardiovascular reserve dysfunction adds independent prognostic information in a multivariate analysis. In addition, further studies are needed to assess whether improvement of cardiovascular reserve could be a therapeutic target in patients with established pulmonary hypertension.
Background: Exercise capacity is impaired in pulmonary arterial hypertension (PAH). We hypothesized that cardiovascular reserve abnormalities would be associated with an impaired hemodynamic response to pharmacological stress and worse outcome in PAH. Methods: Eighteen PAH patients (p), group 1, NYHA class II/III, and ten controls underwent simultaneous right cardiac catheterization and intravascular ultrasound at rest and during low-dose dobutamine (10 mcg/kg/min) with Trendelenburg (DST). We estimated cardiac output (CO), pulmonary vascular resistance (PVR) and capacitance (PC), and PA elastic modulus (EM). We concomitantly measured tricuspid annular plane systolic excursion (TAPSE), RV myocardial peak systolic velocity (Sm) and isovolumic myocardial acceleration (IVA) in PAH patients. Based on the rounded mean + 2 SD of the increase in mPAP in our healthy control group during DST (2.8 ± 1.8 mm Hg), PAH p were divided into two groups according to the mean PA pressure (mPAP) response during DST, 1: ΔmPAP > 5 mm Hg and 2: ΔmPAP ≤ 5 mm Hg. Cardiovascular reserve was estimated as the change (delta, Δ) during DST compared with rest, including ΔmPAP with respect to ΔCO (ΔmPAP/ΔCO). All patients were prospectively followed up for 2 years. Results: PAH p showed a significantly lower heart rate and CO increase than controls during DST, with a significant mPAP and pulse PAP increase and a higher ΔmPAP/ΔCO (p < 0.05). Hemodynamic, IVUS, and echocardiographic data did not differ between the two PAH groups at rest. In group 1, DST caused a higher ΔEM, ΔmPAP/ΔCO, ΔPVR, and ΔTAPSE than in group 2, with a smaller IVA increase and a negative ΔSV (p < 0.05). TAPSE correlated with mPAP and PVR (p < 0.05), and IVA and Sm correlated with CO (p < 0.05). ΔEM correlated with ΔmPAP, and ΔIVA with ΔCO (p < 0.05). There were two deaths/pulmonary transplantations in group 1 and one death in group 2 during the follow-up (p > 0.05). Conclusions: Pulmonary vascular reserve and RV systolic reserve are significantly impaired in patients with PAH. A lower recruitable cardiovascular reserve is significantly related to a worse hemodynamic response to DST and could be associated with a poor clinical outcome.
17,455
451
[ 576, 41, 485, 426, 314, 53, 222, 684, 517, 637, 378, 1370, 278, 101, 10, 91, 16 ]
21
[ "pah", "patients", "pulmonary", "stress", "group", "reserve", "systolic", "rv", "mpap", "co" ]
[ "exercise hemodynamics pah", "experimental pulmonary hypertension", "pulmonary arterial hypertension", "exercise hemodynamics peak", "exercise idiopathic pulmonary" ]
[CONTENT] Pulmonary hypertension | Dobutamine | Cardiovascular reserve | IVUS | Echocardiography [SUMMARY]
[CONTENT] Pulmonary hypertension | Dobutamine | Cardiovascular reserve | IVUS | Echocardiography [SUMMARY]
[CONTENT] Pulmonary hypertension | Dobutamine | Cardiovascular reserve | IVUS | Echocardiography [SUMMARY]
[CONTENT] Pulmonary hypertension | Dobutamine | Cardiovascular reserve | IVUS | Echocardiography [SUMMARY]
[CONTENT] Pulmonary hypertension | Dobutamine | Cardiovascular reserve | IVUS | Echocardiography [SUMMARY]
[CONTENT] Pulmonary hypertension | Dobutamine | Cardiovascular reserve | IVUS | Echocardiography [SUMMARY]
[CONTENT] Airway Resistance | Cardiac Catheterization | Case-Control Studies | Echocardiography | Echocardiography, Stress | Exercise Tolerance | Female | Functional Residual Capacity | Hemodynamics | Humans | Hypertension, Pulmonary | Male | Middle Aged | Prognosis | Reference Values | Risk Assessment | Severity of Illness Index | Stroke Volume | Survival Rate | Ultrasonography, Interventional | Vascular Resistance | Ventricular Dysfunction, Right [SUMMARY]
[CONTENT] Airway Resistance | Cardiac Catheterization | Case-Control Studies | Echocardiography | Echocardiography, Stress | Exercise Tolerance | Female | Functional Residual Capacity | Hemodynamics | Humans | Hypertension, Pulmonary | Male | Middle Aged | Prognosis | Reference Values | Risk Assessment | Severity of Illness Index | Stroke Volume | Survival Rate | Ultrasonography, Interventional | Vascular Resistance | Ventricular Dysfunction, Right [SUMMARY]
[CONTENT] Airway Resistance | Cardiac Catheterization | Case-Control Studies | Echocardiography | Echocardiography, Stress | Exercise Tolerance | Female | Functional Residual Capacity | Hemodynamics | Humans | Hypertension, Pulmonary | Male | Middle Aged | Prognosis | Reference Values | Risk Assessment | Severity of Illness Index | Stroke Volume | Survival Rate | Ultrasonography, Interventional | Vascular Resistance | Ventricular Dysfunction, Right [SUMMARY]
[CONTENT] Airway Resistance | Cardiac Catheterization | Case-Control Studies | Echocardiography | Echocardiography, Stress | Exercise Tolerance | Female | Functional Residual Capacity | Hemodynamics | Humans | Hypertension, Pulmonary | Male | Middle Aged | Prognosis | Reference Values | Risk Assessment | Severity of Illness Index | Stroke Volume | Survival Rate | Ultrasonography, Interventional | Vascular Resistance | Ventricular Dysfunction, Right [SUMMARY]
[CONTENT] Airway Resistance | Cardiac Catheterization | Case-Control Studies | Echocardiography | Echocardiography, Stress | Exercise Tolerance | Female | Functional Residual Capacity | Hemodynamics | Humans | Hypertension, Pulmonary | Male | Middle Aged | Prognosis | Reference Values | Risk Assessment | Severity of Illness Index | Stroke Volume | Survival Rate | Ultrasonography, Interventional | Vascular Resistance | Ventricular Dysfunction, Right [SUMMARY]
[CONTENT] Airway Resistance | Cardiac Catheterization | Case-Control Studies | Echocardiography | Echocardiography, Stress | Exercise Tolerance | Female | Functional Residual Capacity | Hemodynamics | Humans | Hypertension, Pulmonary | Male | Middle Aged | Prognosis | Reference Values | Risk Assessment | Severity of Illness Index | Stroke Volume | Survival Rate | Ultrasonography, Interventional | Vascular Resistance | Ventricular Dysfunction, Right [SUMMARY]
[CONTENT] exercise hemodynamics pah | experimental pulmonary hypertension | pulmonary arterial hypertension | exercise hemodynamics peak | exercise idiopathic pulmonary [SUMMARY]
[CONTENT] exercise hemodynamics pah | experimental pulmonary hypertension | pulmonary arterial hypertension | exercise hemodynamics peak | exercise idiopathic pulmonary [SUMMARY]
[CONTENT] exercise hemodynamics pah | experimental pulmonary hypertension | pulmonary arterial hypertension | exercise hemodynamics peak | exercise idiopathic pulmonary [SUMMARY]
[CONTENT] exercise hemodynamics pah | experimental pulmonary hypertension | pulmonary arterial hypertension | exercise hemodynamics peak | exercise idiopathic pulmonary [SUMMARY]
[CONTENT] exercise hemodynamics pah | experimental pulmonary hypertension | pulmonary arterial hypertension | exercise hemodynamics peak | exercise idiopathic pulmonary [SUMMARY]
[CONTENT] exercise hemodynamics pah | experimental pulmonary hypertension | pulmonary arterial hypertension | exercise hemodynamics peak | exercise idiopathic pulmonary [SUMMARY]
[CONTENT] pah | patients | pulmonary | stress | group | reserve | systolic | rv | mpap | co [SUMMARY]
[CONTENT] pah | patients | pulmonary | stress | group | reserve | systolic | rv | mpap | co [SUMMARY]
[CONTENT] pah | patients | pulmonary | stress | group | reserve | systolic | rv | mpap | co [SUMMARY]
[CONTENT] pah | patients | pulmonary | stress | group | reserve | systolic | rv | mpap | co [SUMMARY]
[CONTENT] pah | patients | pulmonary | stress | group | reserve | systolic | rv | mpap | co [SUMMARY]
[CONTENT] pah | patients | pulmonary | stress | group | reserve | systolic | rv | mpap | co [SUMMARY]
[CONTENT] exercise | pulmonary | vascular | ventilation | exercise testing | monitoring | aim | exercise response | difficult | vasculature [SUMMARY]
[CONTENT] patients | analysis | systolic | doppler | stress | pah | pulmonary | heart | order | myocardial [SUMMARY]
[CONTENT] pah | 05 | group | pulmonary | pah group | table | patients | vs | control | control subjects [SUMMARY]
[CONTENT] reserve | cardiovascular reserve dysfunction | reserve dysfunction | cardiovascular reserve | cardiovascular | dysfunction | pulmonary | patients | appeared | recruitable cardiovascular reserve significantly [SUMMARY]
[CONTENT] pah | pulmonary | reserve | patients | group | exercise | stress | systolic | rv | mpap [SUMMARY]
[CONTENT] pah | pulmonary | reserve | patients | group | exercise | stress | systolic | rv | mpap [SUMMARY]
[CONTENT] PAH ||| PAH [SUMMARY]
[CONTENT] Eighteen | PAH | 1 | ten | 10 mcg/kg | DST ||| PVR | EM ||| RV | PAH ||| + 2 SD | DST | 2.8 | 1.8 mm ||| PAH p | two | DST | 1 | 5 mm ||| 2 | 5 mm ||| ||| DST | ΔCO | ΔmPAP/ΔCO ||| 2 years [SUMMARY]
[CONTENT] PAH p | CO | DST ||| IVUS | PAH ||| 1 | DST | ΔCO | ΔPVR | 2 | IVA ||| RVP | IVA | CO (p < 0.05 ||| ΔEM | ΔIVA | ΔCO ||| two | 1 | one | 2 | 0.05 [SUMMARY]
[CONTENT] RV systolic reserve | PAH ||| DST [SUMMARY]
[CONTENT] PAH ||| PAH ||| Eighteen | PAH | 1 | ten | 10 mcg/kg | DST ||| PVR | EM ||| RV | PAH ||| + 2 SD | DST | 2.8 | 1.8 mm ||| PAH p | two | DST | 1 | 5 mm ||| 2 | 5 mm ||| ||| DST | ΔCO | ΔmPAP/ΔCO ||| 2 years ||| PAH p | CO | DST ||| IVUS | PAH ||| 1 | DST | ΔCO | ΔPVR | 2 | IVA ||| RVP | IVA | CO (p < 0.05 ||| ΔEM | ΔIVA | ΔCO ||| two | 1 | one | 2 | 0.05 ||| RV systolic reserve | PAH ||| DST [SUMMARY]
[CONTENT] PAH ||| PAH ||| Eighteen | PAH | 1 | ten | 10 mcg/kg | DST ||| PVR | EM ||| RV | PAH ||| + 2 SD | DST | 2.8 | 1.8 mm ||| PAH p | two | DST | 1 | 5 mm ||| 2 | 5 mm ||| ||| DST | ΔCO | ΔmPAP/ΔCO ||| 2 years ||| PAH p | CO | DST ||| IVUS | PAH ||| 1 | DST | ΔCO | ΔPVR | 2 | IVA ||| RVP | IVA | CO (p < 0.05 ||| ΔEM | ΔIVA | ΔCO ||| two | 1 | one | 2 | 0.05 ||| RV systolic reserve | PAH ||| DST [SUMMARY]